
[oracle-l] Re: Help With Veritas and my search for the Holy "I/O" Grail (testing with the tool from

From: <Brian_P_MacLean_at_eFunds.Com>
Date: Sun, 25 Jan 2004 20:45:33 -0700
Message-ID: <OFD34EB7A5.28388F8B-ON07256E27.0012A75E-07256E27.0014A6C8@DeluxeData.Com>

No, I can't say that we did. Our testing has also involved benchmarking the application, creating 10 GB tablespaces, switching redo logs, and building indexes on an 8 GB partitioned table. All tests give the same time or worse with the aforementioned Veritas options. The Veritas options we tested included the recommendations from Steve Adams' site, the Veritas documentation, MetaLink forums, and a support ticket with Veritas.

One of the reasons I did not test with the iozone "O_SYNC" option is that, from what I can tell, that is not what Oracle uses to open tablespace datafiles. I ran truss against an Oracle shadow process on Solaris and, as you can see below, grep'ed out the open calls, which show that Oracle uses the O_DSYNC option, not O_SYNC. Regardless, per the Veritas documentation the options we are using should force all opens to direct/unbuffered mode with write-through to disk (or to the disk array cache on machines that have one).

I will test the "-o" option tomorrow, but I'm not sure what it will get me.

BTW - the reason I use iozone is that its original author is an Oracle employee.

Anyway, thanks for taking the time to reply Jonathan.

24952:d99490_at_mkeux002> grep -i SYNC trs.out | grep dbf
8621:  open("/t04/oracle/oradata/avsqa/TIF_IDX_01.dbf", O_RDWR|O_DSYNC) = 13
8621:  open("/t05/oracle/oradata/avsqa/TIF_IDX_02.dbf", O_RDWR|O_DSYNC) = 13

From: "Jonathan Lewis" <jonathan_at_jlcomp.d>
Sent by: oracle-l-bounce_at_fr
Date: 01/25/2004 05:00
To: <>
Subject: [oracle-l] Re: Help With Veritas and my search for the Holy "I/O" Grail (testing with the tool from

I haven't tried running iozone, so I don't know what the default output looks like and this question may be irrelevant, but are you checking the O_SYNC figures against Oracle's performance (iozone option -o)?

If not, then you're not comparing like with like.


Jonathan Lewis

  The educated person is not the person
  who can answer the questions, but the
  person who can question the answers -- T. Schick Jr

Next public appearances:
 Jan 29th 2004 UKOUG Unix SIG - v$ and x$ / The Burden of Proof
 March 2004 Charlotte NC OUG - CBO Tutorial
 April 2004 Iceland


The Co-operative Oracle Users' FAQ

We have been trying to squeeze out a little more bandwidth from our disk sub-system. I have been doing benchmarks using the tool from
The command I use for the basic baseline test is "iozone -Rab base.wks", which in effect takes all system default values to do the reads/writes. I then test with "iozone -IRab vx_direct.wks", which forces the reads/writes to direct I/O by issuing "ioctl(fd, VX_SETCACHE, VX_DIRECT);" in the .c program. When we compare the performance of the base run to the direct run it's like "WOW, where have you been all my life".

So now we are puzzled as to just how in the 'el we get Oracle to use the direct option. We have read the Veritas manuals and combed several pages on (Steve Adams' site). The mount option combinations we have tried are "mincache=direct,convosync=direct" and "convosync=direct,mincache=dsync". The baseline testing with iozone is between 50 and 100 percent slower with either of these mount options in place. Running iozone with "-I" (vx_direct enabled) continues to scream regardless of what we do to the mount points.
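For anyone replaying this, the two mount forms tried would look roughly like the following (Solaris syntax; the volume device and mount point are placeholders, and mounting requires root):

```shell
# Combination 1: direct I/O for ordinary reads/writes and for O_SYNC/O_DSYNC opens
mount -F vxfs -o mincache=direct,convosync=direct /dev/vx/dsk/oradg/vol04 /t04

# Combination 2: O_SYNC opens demoted to data-only synchronous writes
mount -F vxfs -o convosync=direct,mincache=dsync /dev/vx/dsk/oradg/vol04 /t04
```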

So now, WTF do I do?

PS: The first person who suggests raw disk or QIO has to find me a shrubbery.

Received on Sun Jan 25 2004 - 21:45:33 CST
