Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> c.d.o.server -> Re: Redo-Log Archiv Problem
"Howard J. Rogers" <howardjr_at_www.com> wrote:
>
>"Kathinka Diehl" <kathinka_at_rrr.de> wrote in message
>news:8k4hbs$1p6lr$1_at_ID-6887.news.cis.dfn.de...
>>
>> Peter Miller <p.miller_at_brocom.de> wrote:
>> >
>> > Thread 1 cannot allocate new log, sequence 4185
>> > Checkpoint not complete
>>
>> I only know this message; perhaps the other one is a follow-on error. And
>> this one is easy to fix: you just have to use larger redo log files or
>> more redo log groups.
>>
>
>
>Seems to be a popular suggestion, this 'use larger redo log files' one. It
>won't help at all. If you have larger logs, they take longer to switch away
>from, but they then take longer to checkpoint... which puts you back at
>square one. The only cure is to add extra log groups, so that the
>checkpoint has a chance to complete before you end up switching back to an
>earlier log file. In other words, it's not the size of what you've got that
>matters but how many of 'em!
>
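HJR's cure — adding extra log groups — is done with ALTER DATABASE. A minimal sketch; the group number, file path, and size below are hypothetical placeholders and must be adapted to your own layout:

```sql
-- Add a fourth redo log group (hypothetical path and size).
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u02/oradata/ORCL/redo04.log') SIZE 50M;

-- Sanity check: in a healthy rotation at most one group stays ACTIVE
-- (i.e. still waiting on its checkpoint) at any time.
SELECT group#, bytes, status FROM v$log;
```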
>That's not to say that size is totally irrelevant, of course: you control
>the frequency of checkpointing by altering the size of the logs. Bigger =
>less frequent; smaller = more frequent.
>
>Personally, for ultimate performance reasons, I would recommend absolutely
>enormous redo logs that never switch, and schedule a manual 'alter system
>switch logfile' command in the dead of night. You then get one almighty
>checkpoint that has plenty of time to complete before business cranks up
>again the next day. This only works on non-24/7 databases, of course, and
>you also need to make sure you haven't set things like
>log_checkpoint_timeout or log_checkpoint_interval.
>
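The nightly-switch idea above can be wired up with a cron entry. This is only a sketch: the 02:00 schedule and the OS-authenticated sqlplus invocation as the oracle user are assumptions about your environment, not part of the original advice:

```shell
# Hypothetical crontab entry for the oracle OS user: force one big log
# switch (and the resulting checkpoint) at 02:00 each night.
0 2 * * * echo "ALTER SYSTEM SWITCH LOGFILE;" | sqlplus -s "/ as sysdba"
```

Before relying on this, also confirm the timed-checkpoint parameters HJR mentions are unset, e.g. with `SHOW PARAMETER log_checkpoint` in SQL*Plus.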
>Oh, and you're then in trouble if you ever get instance failure, because
>recovery will take ages; but if it's performance v. recovery time, you take
>your stance somewhere on the spectrum and arrange things accordingly.
>
>Regards
>HJR
>
>> HTH, Kathinka
>>
To HJR: I disagree with your comment that larger logs take longer to
switch. A checkpoint requires flushing all dirty database buffers to disk
and updating the file headers, as well as archiving the just-filled redo
log when in archivelog mode. The more dirty buffers and the more file
headers, the longer the checkpoint takes; the size of the redo log has no
direct effect on this. Very small redo logs can fill before the previous
checkpoint completes, but that means the switch itself completed and you
were already writing to a new redo log before the checkpoint finished. If
the archive process is slow, you can end up looping around until the next
log to use is the one still being archived. In that situation you need
more logs, better I/O on the log archive destination, or both.
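The looping situation described above can be spotted in the V$LOG view. The query itself is standard; what it shows depends, of course, on your instance:

```sql
-- When 'cannot allocate new log' / 'Checkpoint not complete' recurs, the
-- group LGWR wants next is typically still ACTIVE (checkpoint pending) or
-- not yet archived (ARCHIVED = 'NO' in archivelog mode).
SELECT group#, sequence#, status, archived
FROM   v$log
ORDER  BY group#;
```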
Received on Sun Jul 09 2000 - 00:00:00 CDT