
Re: Help me tuning this wait event:log file sync

From: chao_ping <>
Date: 17 Jul 2002 07:18:13 GMT
Message-ID: <3d351a35$>

Yong Huang wrote:

  > chao_ping <> wrote in message
  > news:<3d312489$>...
  > > Yong Huang wrote:
  > >   > If you can't control the frequency of commits, reduce log_buffer.
  > >   > How big is it?
  > > now log buffer is 2M, In early time, it was 1M, and i see much wait
  > > time in log buffer space,(once it was the 4th top wait event), so i
  > > enlarged the log buffer size from 1m to 2m.
  > Never use a log_buffer larger than 1M. It's useless over that limit.
  > Do you have multiple log members per group? Use 1 member.
  > If you got 40 commits per second, the only solution may be rewrite the
  > application. If you can't, try a competitor's product.
  > Yong Huang

Thanks to Yong, Howard, Lewis and the others who read and answered. You are all experts I admire. Since this database supports an internet application and all other services depend on it, I cannot restart it whenever I wish, so I have to think again and again before making any modification and bouncing it.

As for the commits/second, I calculated it from the statspack report:

Load Profile
~~~~~~~~~~~~                 Per Second     Per Transaction

Logical reads:                12,555.73              363.32

I get the commits per second by dividing (value per second) / (value per transaction). Is this method of calculating the commits (effectively transactions per second, since there are few rollbacks) right?
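As a sanity check on that division (my arithmetic, using only the Load Profile figures above), it can be done right in SQL*Plus:

-- transactions/sec = (logical reads per second) / (logical reads per transaction)
SELECT ROUND(12555.73 / 363.32, 1) AS txn_per_sec FROM dual;
-- ~34.6 transactions per second, close to the ~40 commits/sec quoted earlier

Any statistic that appears in both Load Profile columns gives the same ratio, since statspack derives the per-transaction column from the per-second column and the commit rate.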

I carefully checked the I/O on the redo log volume (all redo log files are on a separate volume with Veritas Quick I/O) and got the following data via vxstat: there are about 10-12 writes to the redo volume per second, each write covering about 10 blocks, so each write is about 5K of data. That means about 60-80K is written to the redo log per second, and around 250M between log switches (a log switch happens about every 40-60 minutes, and my redo log size is 256M).
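Those vxstat figures are mutually consistent; a rough cross-check (my arithmetic, assuming 512-byte redo blocks):

-- 10 blocks/write * 512 bytes = ~5K per write
-- 10-12 writes/sec * ~5K = 50-60K/sec; observed 60-80K/sec
-- at the 70K/sec midpoint:
SELECT ROUND(70 * 3600 / 1024) AS mb_per_hour FROM dual;
-- ~246 MB/hour, which matches a 256M log switching every 40-60 minutes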

So it is clear that it is not the amount of data written to the redo logfile and the disk array, but the frequency of the writes, that causes the log file sync wait event.

The painful thing is that I cannot change the slow RAID-5 disk array; the CTO said it would need too much downtime and carry some risk. As for the log_buffer size:

SQL> select * from v$sysstat where name like '%redo%';
I also read some threads on Google and on the ixora site; from a link there, Steve Adams said that 120-160K should be proper, but I wonder whether that value is suitable for my system, as I still see "log buffer space" waits and "redo log space wait time" even with it set to 2M.

Currently neither my former DBA nor I changed the checkpoint parameters; they are left at the defaults: log_checkpoint_interval = 10000 and log_checkpoint_timeout = 1800. So I think I had better set the interval to some larger value and disable the timeout checkpoint.

As for the redo log members, I have just one member per group. We are on RAID-5, so if a single disk fails it is still OK. Having two members would put more pressure on the RAID; I would rather take this risk, since we have a standby database :)

So the concern here is the log_buffer size, and my main goal is to reduce the log file sync wait event. I want to do some tests on another machine with similar hardware and software. Can someone give me some suggestions on how to simulate the workload? I learned from Lewis's post that a loop with commits in PL/SQL does not cause log file sync. So how do I test whether a large log buffer is better, or whether a smaller one is OK? Thanks!!!
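One way I might set up such a test (a sketch only; the table name test_tab and the row counts are my own invented examples, not from this system). Since commits inside a PL/SQL loop get their syncs batched, as Lewis pointed out, the driver script has to issue top-level commits. A worker script of single-row INSERT/COMMIT pairs can be generated and then run from several concurrent SQL*Plus sessions, snapshotting the redo statistics before and after:

-- snapshot before and after each run (statistic/event names as in 8i/9i)
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('user commits', 'redo size',
                'redo log space requests', 'redo log space wait time');

SELECT event, total_waits, time_waited, average_wait
  FROM v$system_event
 WHERE event IN ('log file sync', 'log file parallel write');

-- generate worker.sql: 5,000 top-level INSERT+COMMIT pairs, so each
-- COMMIT waits for its own log file sync (unlike a PL/SQL loop)
SET heading OFF pagesize 0 feedback OFF
SPOOL worker.sql
SELECT 'INSERT INTO test_tab VALUES (' || rownum || ', ''x'');'
       || chr(10) || 'COMMIT;'
  FROM all_objects
 WHERE rownum <= 5000;
SPOOL OFF

-- then start worker.sql from N concurrent SQL*Plus sessions
@worker.sql

Comparing the delta of "log file sync" time and "redo log space requests" between runs with different log_buffer settings should show whether the larger buffer actually helps at this commit frequency.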

an oracle fan, 
an oracle beginner


Posted via dBforums
Received on Wed Jul 17 2002 - 02:18:13 CDT
