Re: most expeditious way to update a large number of rows

From: DA Morgan <damorgan_at_psoug.org>
Date: Mon, 16 Oct 2006 22:29:08 -0700
Message-ID: <1161062944.154206@bubbleator.drizzle.com>


mylesv_at_gmail.com wrote:
> DA Morgan wrote:

>> Unless my guess about your background is incorrect, you didn't respond
>> to that part ... the question is not relevant because the size of redo
>> logs does not relate to the size of transactions.

>
> Daniel,
>
> Your guess about my background is partially correct. I've worked with
> Sybase and SQL Server in the past. I don't consider myself a DBA; I
> work _as_ a DBA. It's one of many things on my plate. That's (often)
> the reality of working for a small company.
>
> With Brian's help, I estimated that my update would create 700+ MB of
> redo. You're saying that the size and number of redo log groups are
> irrelevant? At 10 MB per log, that's roughly 70 log switches, cycling
> through my five groups 14 or so times.
>
> If that's not the case, I won't even consider changing anything. In
> day-to-day use, the logs switch when ARCHIVE_LAG_TARGET expires rather
> than because they fill.
>
> Thanks again,
> Myles

My point was that Oracle gives you the power to make decisions that are not possible in most other products. For example, you get maximum performance by avoiding log file switches entirely (at the cost of greater exposure to data loss), which argues for very large log files. But the need to minimize possible loss from a catastrophic hardware failure dictates more frequent log switches, and therefore smaller files. As an Oracle DBA, it is your job to strike a balance between speed and safety.
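
To see where you stand on that trade-off, look at the current log
configuration and the actual switch rate. Something along these lines
(v$log and v$log_history are standard dynamic views; the one-day
window is just an illustration):

-- Current group sizes and states
SELECT group#, bytes/1024/1024 AS size_mb, status
FROM   v$log;

-- Switches per hour over the last day: many per hour suggests
-- undersized logs; almost none means more unarchived redo is at
-- risk if the current log is lost
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*) AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 1
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;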

In the other products you have worked with, Sybase and SQL Server, log files work in a completely different manner: they must be sized to the amount of redo a transaction creates and to the possibility that it will need to be rolled back. That is not the case in Oracle, where the online logs are reused in a circular fashion, so an arbitrarily large transaction can run in finite log space.
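
If you want to sanity-check your 700+ MB estimate, you can measure
the redo a statement actually generates from the session statistics.
A rough sketch (v$mystat, v$statname and the 'redo size' statistic
are standard; run it in the same session as the test statement):

-- Redo generated so far by this session (bytes)
SELECT s.value AS redo_bytes
FROM   v$mystat s
       JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';

-- ... run the candidate UPDATE (or a representative slice of it)
-- here, then repeat the query above: the difference between the
-- two values is the redo that statement generated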

Your original question led me to believe you were trying to size your log files based on the amount of redo ... rather than on a calculation of risk (data loss) versus reward (speed).
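
And since you mentioned ARCHIVE_LAG_TARGET: that parameter is the
time-based half of the same calculation. With large logs you can
still cap your exposure by forcing a switch every N seconds, for
example (the 15-minute value is only illustrative, and SCOPE=BOTH
assumes you are running on an spfile):

-- Force a log switch at least every 900 seconds (15 minutes),
-- regardless of how quickly the logs fill
ALTER SYSTEM SET archive_lag_target = 900 SCOPE = BOTH;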

-- 
Daniel A. Morgan
University of Washington
damorgan_at_x.washington.edu
(replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
Received on Tue Oct 17 2006 - 00:29:08 CDT
