Oracle FAQ Your Portal to the Oracle Knowledge Grid


Re: Help me tuning this wait event:log file sync

From: Howard J. Rogers <>
Date: Wed, 17 Jul 2002 09:24:04 +1000
Message-ID: <ah29td$nd6$>

"Yong Huang" <> wrote in message
> "Howard J. Rogers" <> wrote in message news:<agvicv$s65$>...
> > "Yong Huang" <> wrote in message
> >
> > > Never use a log_buffer larger than 1M. It's useless over that limit.
> > >
> >
> > Not quite true. On very heavy-transactional-load systems, a buffer of up to
> > 3 or 6Mb *may* (or may not!) be appropriate. Had Oracle ever decided that
> > anything bigger than 1Mb was utterly "useless", they would never have
> > introduced the rule about LGWR flushing every 1Mb of uncommitted redo
> > (because that rule doesn't get invoked until the buffer is bigger than 3Mb,
> > because of the 'flush when 1/3rd full' rule).

> OK. Maybe setting log_buffer to up to 3 MB is appropriate. Above that,
> LGWR has to write when the uncommitted redo reaches 1MB anyway. It'll
> be very unlikely for foreground sessions to generate redo to fill more
> than a 2 MB log buffer when LGWR is writing to disk. (Otherwise, reaching
> the 1MB mark would also be much more frequent). Not sure if I
> express myself clearly.
> In fact, this discussion is not relevant to the original poster's
> problem. His commit rate is so high that adjusting log_buffer shouldn't
> have much effect, because LGWR writing is triggered by the
> application, not by the (1) 3 second, (2) 1/3 buffer full, or (3) 1MB
> buffer full rule. Steve Adams once had a newsletter "condemning" most
> developers' bad habit of too frequent commits, not using PL/SQL stored
> code. I still propose that his application be overhauled to solve the
> problem.
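The interplay of the write triggers discussed above can be sketched numerically. This is a minimal illustration in plain Python (not anything Oracle ships; the function name and threshold constants are my own) of why, once log_buffer exceeds 3MB, the 1MB-of-uncommitted-redo rule always fires before the 1/3-full rule, so extra buffer space buys nothing:

```python
# Sketch of the two size-based LGWR write triggers: "buffer 1/3 full"
# and "1MB of uncommitted redo". (The 3-second timeout, and commits
# themselves, trigger writes independently of size.)

MB = 1024 * 1024

def first_size_trigger(log_buffer_bytes):
    """Return which size-based rule fires first, and at how many bytes."""
    third_full = log_buffer_bytes / 3
    one_mb = 1 * MB
    if third_full < one_mb:
        return ("1/3 full", third_full)
    return ("1MB uncommitted redo", one_mb)

for size_mb in (1, 3, 6):
    rule, at_bytes = first_size_trigger(size_mb * MB)
    print(f"log_buffer={size_mb}MB -> LGWR writes at {at_bytes / MB:.2f}MB ({rule})")
```

Running it shows the crossover at 3MB: below that, the 1/3-full rule wins; at or above it, LGWR writes at the 1MB mark regardless of how big the buffer is. And as noted above, on a high-commit-rate system neither rule matters much, because each commit forces a write first.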

> >
> > > Do you have multiple log members per group? Use 1 member.
> >
> > I definitely can't agree with this last suggestion. 1 member groups? A
> > recipe for data loss.
> >
> > You're right that it might help eliminate some performance woes, of
> > course: Oracle has to do less work, so yeah, of course things work faster.
> > But in the eternal trade-off between security and performance, you've just
> > come down squarely on the performance side of things. Which may or may not
> > be acceptable to the original poster, but it's certainly not what I would
> > want to recommend.
> The original poster says he's using Veritas Quick I/O. We should
> assume data redundancy is already done at the lower level. I always
> propose 1 log member at the company I'm working at, because we either
> have Veritas or (on non-production servers) Solaris Disk Suite. Using
> 2 or more log members used to be prevalent.

But disk mirroring (if that's what you're referring to) is a disaster just waiting to happen if that's all you've got. LGWR makes an error writing to your one member, and the hardware immediately replicates that corruption onto your mirror(s). You've now got two identical copies of useless redo logs. You've no protection against corruption at all, merely against the physical loss of the redo log (which is better than nothing, I suppose, but it's not an adequate level of protection).

For protection against those sorts of errors you *have* to have multiple members. Mirror the multiple members, by all means, but it's only by getting LGWR to write more than once that you have protection against corruption of redo.

Multiple members ought to be considered compulsory, whatever hardware you happen to be running on.
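For what it's worth, adding a second member to an existing group is a one-line statement per group; a sketch (group numbers and file paths here are hypothetical examples, not from the original poster's system):

```sql
-- Multiplex each redo log group with a second member on separate storage.
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/orcl/redo01b.log' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/oradata/orcl/redo02b.log' TO GROUP 2;
```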

HJR

Received on Tue Jul 16 2002 - 18:24:04 CDT
