
Re: most expeditious way to update a large number of rows

From: <mylesv_at_gmail.com>
Date: 16 Oct 2006 10:30:10 -0700
Message-ID: <1161019810.430875.94040@m73g2000cwd.googlegroups.com>

hpuxrac wrote:
> mylesv_at_gmail.com wrote:
> You are running a system that's important enough to support with a
> standby that has only 10 meg log files?
>
> Wow.
>
> Is there some history about why those log files were created so small?
> To me, it doesn't make sense to create log files with anything less
> than say multiples of 100's of meg and potentially much larger.
>
> Do you understand the extra work that oracle performs on each log
> switch?
>
> So with your current system you can only generate 50 meg of redo before
> you are potentially out of available space for more activity?
>
> Ouch. If you must stay with 10 meg log files ( I wouldn't ) then
> seriously think about allocating a whole lot more of them. Maybe a gig
> worth?

I inherited this configuration.

I haven't run out of redo (yet), and I agree with your assessment of the log groups.

To reach the standby database, the data must traverse a fractional T1 (768 kbps). It will take a long time to push a 100MB archived redo log across that.
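As a rough sanity check, here's the back-of-the-envelope arithmetic as a quick Python sketch (the 80% effective-throughput figure is just an assumption for protocol overhead, not a measured number):

# Time to ship one archived redo log over a fractional T1.
link_kbps = 768                  # fractional T1
usable_kbps = link_kbps * 0.8    # assumed effective throughput

def transfer_minutes(log_mb):
    kbits = log_mb * 8 * 1024    # MB -> kilobits
    return kbits / usable_kbps / 60

for size_mb in (10, 100):
    print(f"{size_mb:>4} MB log: ~{transfer_minutes(size_mb):.1f} minutes")

# prints roughly:   10 MB log: ~2.2 minutes
#                  100 MB log: ~22.2 minutes

So each 10 MB log already takes a couple of minutes to arrive; a 100 MB log would tie up the link for on the order of 20 minutes if nothing else is competing for it.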

I have ARCHIVE_LAG_TARGET set, so I guess that no matter how I size the log groups, the rate at which archived redo is pushed across the pipe should stay roughly constant, right?
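To put numbers behind that intuition (the redo-generation figure below is made up for illustration, not measured from my system), a quick sketch in the same vein:

# The average bandwidth needed to keep the standby current is set by the redo
# generation rate, not by the online log size; ARCHIVE_LAG_TARGET only caps how
# long a partly filled log can sit unarchived. All figures are assumptions.
redo_rate_mb_per_hour = 180      # assumed average redo generation
usable_kbps = 768 * 0.8          # assumed effective T1 throughput

required_kbps = redo_rate_mb_per_hour * 8 * 1024 / 3600
print(f"required : {required_kbps:.0f} kbps average")
print(f"available: {usable_kbps:.0f} kbps (assumed)")
print("keeps up" if required_kbps < usable_kbps else "falls behind")

The log size only changes how bursty the shipping is, not the average rate the link has to sustain.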

So, what's the best strategy for redo: more, smaller log groups, or fewer, larger ones? My concern is the time it takes to perform a switchover.

Thanks for the constructive criticism. Bear in mind that not everyone posting here is a full-time Oracle DBA. Folks working for smaller firms often wear many hats :-)

Received on Mon Oct 16 2006 - 12:30:10 CDT
