
Re: most expeditious way to update a large number of rows

From: <mylesv_at_gmail.com>
Date: 17 Oct 2006 05:25:30 -0700
Message-ID: <1161087930.335090.298950@i3g2000cwc.googlegroups.com>


DA Morgan wrote:
> My point was that Oracle gives you the power to make decisions not
> possible in most other products. For example one gets maximum
> performance by not having any log file switches (at the cost of
> increasing the risk of data loss), which would indicate very large
> files. But the need to minimize possible loss due to a catastrophic
> hardware failure dictates more frequent log switches. As an Oracle
> DBA it is your job to achieve a balance between speed and safety.
>
> In the other products you have worked with, Sybase and SQL Server,
> log files work in a completely different manner and must be sized to
> the amount of redo created by a transaction and the possibility that
> it will need to be rolled back. Not the case in Oracle where we can
> perform an infinitely large transaction in finite log space.
>
> Your original question led me to believe you were trying to size your
> log files based on the amount of redo ... rather than a calculation
> of risk (data loss) vs. reward (speed).

Daniel,

I see your point. I could perform my update with two 1 MB redo logs, but it wouldn't be practical. I'm seeking a balance between performance and safety. It seems larger redo logs combined with ARCHIVE_LAG_TARGET should work for me.
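
Something along these lines is what I have in mind (the lag target, sizes, group numbers, and file paths below are illustrative only, not recommendations for any particular workload):

  -- Cap the time between log switches so a large online log can't
  -- accumulate more than ~30 minutes of unarchived redo:
  ALTER SYSTEM SET archive_lag_target = 1800 SCOPE = BOTH;

  -- Add larger online redo log groups (512 MB each here):
  ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/orcl/redo04.log') SIZE 512M;
  ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/orcl/redo05.log') SIZE 512M;
  ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/orcl/redo06.log') SIZE 512M;

  -- After a few log switches, drop the old small groups once
  -- V$LOG shows them as INACTIVE:
  -- ALTER DATABASE DROP LOGFILE GROUP 1;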
