c.d.o.server: Re: forcedirectio on; more disks, performance degrading? (Solaris, Veritas VM, Oracle 7)
This is just a guess, but might you have a problem with a block-size mismatch?
I note your db_block_size is 2K.
If your memory page is 4K and you are bypassing the file-system buffer,
each direct write of a 2K block may have to do a
read-modify-write cycle: read the enclosing 4K into memory,
copy in the 2K block, then write the 4K back out
(thus roughly doubling the I/O time). Your previous setup,
on a busy system, was more likely to have the relevant 4K page
already in the file-system buffer cache, and therefore
did not have to do the extra read.
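A back-of-the-envelope sketch of that hypothesis (this is illustrative arithmetic only, not Oracle or Solaris code; the 4K page size and read-modify-write behaviour are assumptions from the paragraph above):

```python
# Hypothetical model: count device I/O operations needed to write N
# db blocks when the I/O unit at the device is one memory page.

PAGE = 4096       # assumed memory-page size on the Solaris box
DB_BLOCK = 2048   # db_block_size reported in the post

def direct_io_ops(n_writes, block=DB_BLOCK, page=PAGE):
    """With forcedirectio and a sub-page block, each write is assumed
    to cost a read-modify-write: read the enclosing page, merge the
    2K block, write the page back -> 2 device operations per block."""
    if block < page:
        return n_writes * 2
    return n_writes  # page-sized/aligned blocks: one write each

def buffered_io_ops(n_writes):
    """With the file-system cache, the enclosing page is often already
    buffered on a busy system, so a write is assumed to cost one
    device operation (the read is amortized away)."""
    return n_writes

# Under these assumptions, direct I/O does twice the device work:
print(direct_io_ops(1000), buffered_io_ops(1000))
```

If this model is anywhere near right, it would account for the batch job's elapsed time roughly doubling while iostat shows the individual disks as not especially busy.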
--
Jonathan Lewis
Yet another Oracle-related web site: http://www.jlcomp.demon.co.uk
NetComrade wrote in message <389c868b.530124_at_news.earthlink.net>...
>After reading in several books, and even in some Veritas documentation,
>how forcedirectio would improve Oracle performance, I have enabled it
>on all Oracle-mounted drives (no more double buffering, etc.). We use
>Veritas Volume Manager, and therefore they're all striped volumes (and
>mirrored too, RAID 0+1). We have also installed more disks on one of
>the disk arrays (I had to move files in and out).
>
>All of a sudden our batch job that used to take 7-8 hours takes 18, so
>I am trying to figure out what's going on, and when top utility used
>to show on average 40-60% iowait, now it shows 70-90% iowait even when
>the 'load' is very low (like 1-3). The only weird part is the output
>of iostat -xtc, the disks don't seem to be that busy, but the iowait
>is there. I've read in some book that direct I/O works 'sequentially'
>instead of async, but it only mentioned that on Sequent systems you
>have to use the init.ora parameter _direct_read=TRUE; it says nothing
>about Solaris (Oracle8 and Unix Performance Tuning, pg. 114).
>The Oracle parameters say:
Received on Sun Feb 06 2000 - 15:11:36 CST