Oracle-L: RE: High wio on New Hitachi SAN Storage of DB Server
Hi Vivek,
Your note interested me; we have exactly the same configuration and symptoms, but we are still running async_io = TRUE. I concur with your findings on the extreme performance issues - short-term server freezes are occurring. You mention a bug; is this hearsay, or do you have details?
If you've turned off async_io (it is disabled by default on AIX), then you may want to use DBWR_IO_SLAVES. I've not tried this yet.
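For reference, a sketch of what the init.ora side of that would look like (parameter names are the standard Oracle ones; the slave count is illustrative, not a recommendation):

```
# Oracle-level async I/O off; I/O slaves simulate asynchronous writes for DBWR
disk_asynch_io  = false
dbwr_io_slaves  = 4        # illustrative value -- tune for your system
```

Note that when dbwr_io_slaves is non-zero, Oracle uses a single DBWR process, so it is an either/or with multiple db_writer_processes.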
Anyway, ours is a HDS9990 with McData Switches attached to a p690 running AIX 5.2 ML4. The filesystems are jfs2.
I can reproduce the problem just by creating a 10GB datafile; multiple users doing random I/O can also trigger the same issue, as can a parallel RMAN backup in progress. However, concurrent cp's of multiple files do not reproduce the issue (hence I believe it is likely an async I/O problem).
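On the off-chance it helps someone try this at small scale first: the datafile-creation trigger boils down to one big sequential write, and a shrunken stand-in (8 MB instead of 10GB, so the sketch runs anywhere) looks like this:

```shell
# Tiny-scale stand-in for the "create a 10GB datafile" reproducer; size is
# shrunk to 8 MB so it runs on any box (on the real system: 10GB on jfs2).
dd if=/dev/zero of=/tmp/aio_probe.dat bs=1024k count=8 2>/dev/null
echo "wrote $(wc -c < /tmp/aio_probe.dat) bytes"
```

At real scale one would watch topas/sar in a second session while the write runs, and compare against the same write done with cp (which did not reproduce the problem here).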
Under load, topas shows high wait for I/O, and sar -q shows %swpocc = 100 with swpq-sz > 20.
My Unix admins are currently looking at the async I/O settings as per MetaLink note 271444.1, but they are heavily loaded, and this is not production (yet), so the urgency is low.
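While waiting on the admins, the aio server sizing can be sanity-checked up front. A sketch using the 10-servers-per-CPU rule of thumb from the AIX/Oracle tuning notes of that era (verify against note 271444.1; the CPU count below is a placeholder):

```shell
# Rule-of-thumb sizing for AIX aio servers (10x CPUs heuristic -- an
# assumption from contemporary tuning guides, not from this thread)
NCPU=4                          # placeholder; on AIX: lsdev -Cc processor | wc -l
MAXSERVERS=$((10 * NCPU))
echo "suggested maxservers: $MAXSERVERS"
# On the AIX host itself one would then run (not executed here):
#   lsattr -El aio0                                # current minservers/maxservers/maxreqs
#   chdev -l aio0 -a maxservers=$MAXSERVERS -P     # persists; takes effect after reboot
```

Too few aio servers shows up as exactly this kind of symptom: requests queue behind the server processes and wio climbs even though the array itself is idle.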
If you or anyone else has any pointers with this configuration please let me know.
Kind Regards
Adrian
-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org]
On Behalf Of VIVEK_SHARMA
Sent: 11 November 2005 18:56
To: Oracle-L
Subject: High wio on New Hitachi SAN Storage of DB Server
Folks,
While doing application testing of a hybrid transaction workload (mostly OLTP) with approximately 200 users on newly configured Hitachi SAN storage, high wait for I/O (wio = 70%) is observed on the DB server (AIX) until 1400 hours.
NOTE - wio reduced gradually to about 10% between 1400 hours and 2000 hours.
Average sar Output from morning to 1400 Hours:-
13:38:00    %usr    %sys    %wio   %idle
13:43:00       6       5      67      22
13:48:00      10       6      74      10
13:53:00      10       5      66      19
13:58:00       7       5      61      27
14:03:00       5       5      67      22
14:08:00       7       5      74      15
14:13:00       9       6      69      15
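For what it's worth, averaging the %wio column of that sample confirms the ~70% figure; a small awk sketch (values hand-copied from the sar output above):

```shell
# Average the %wio column (4th field) from the sar sample, skipping the header
avg=$(awk 'NR>1 { sum += $4; n++ } END { printf "avg %%wio = %.1f", sum/n }' <<'EOF'
13:38:00  %usr %sys %wio %idle
13:43:00    6    5   67   22
13:48:00   10    6   74   10
13:53:00   10    5   66   19
13:58:00    7    5   61   27
14:03:00    5    5   67   22
14:08:00    7    5   74   15
14:13:00    9    6   69   15
EOF
)
echo "$avg"
```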
CONFIG
Comments by IBM
The seek rate of 95.72% on the hdsdb9960lv LVs indicates a high degree of random I/O, usually caused by the application or a high degree of disk fragmentation.
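A per-LV seek-rate figure like that typically comes from filemon, so the measurement is easy to repeat after any change. A sketch (AIX-only; guarded so it degrades gracefully on other hosts):

```shell
# Capture ~60s of per-logical-volume I/O statistics with filemon -- likely the
# tool behind IBM's 95.72% seek-rate figure. AIX-only, hence the guard.
if command -v filemon >/dev/null 2>&1; then
  filemon -O lv -o /tmp/fmon.out    # start trace, report at logical-volume level
  sleep 60
  trcstop                           # stop trace; /tmp/fmon.out has seeks %, per LV
  FMON_STATUS="captured"
else
  FMON_STATUS="filemon not available on this host"
fi
echo "$FMON_STATUS"
```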
STATSPACK report (will provide any other sections as needed)
DB Name DB Id Instance Inst Num Release OPS Host
------------ ----------- ------------ -------- ----------- ---
                Snap Id      Snap Time        Sessions
                -------  ------------------   --------
  Begin Snap:         5  15-Oct-05 13:00:55        352
    End Snap:         6  15-Oct-05 14:00:37        352
     Elapsed:               59.70 (mins)
Cache Sizes
~~~~~~~~~~~
  db_block_buffers:     215000
  log_buffer:         18874368
  db_block_size:          8192
  shared_pool_size:  754288000

Load Profile
~~~~~~~~~~~~
                        Per Second    Per Transaction
                   ---------------    ---------------
Redo size:              298,013.21             837.31
Logical reads:           55,540.47             156.05
Block changes:            2,296.74               6.45
Physical reads:           3,109.99               8.74
Physical writes:            399.33               1.12
User calls:               2,657.16               7.47
Parses:                      64.98               0.18
Hard parses:                  5.44               0.02
Sorts:                       75.56               0.21
Logons:                       0.75               0.00
Executes:                 1,783.45               5.01
Transactions:               355.92

  % Blocks changed per Read:   4.14    Recursive Call %:   15.67
  Rollback per transaction %: 94.71    Rows per Sort:      10.87

Top 5 Wait Events
~~~~~~~~~~~~~~~~~
                                                Wait    % Total
Event                            Waits     Time (cs)    Wt Time
--------------------------- ----------  ------------  ---------
db file sequential read      6,230,712     1,640,351      43.84
log file sync                1,087,475     1,286,467      34.38
db file scattered read         351,411       416,508      11.13
log file parallel write        706,201       288,168       7.70
buffer busy waits              334,943        69,830       1.87
-------------------------------------------------------------
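Dividing wait time by wait count gives the average latency per wait, which is often more telling than the totals; a quick awk sketch over the Top 5 figures (hand-copied, with underscores substituted for spaces in the event names):

```shell
# avg wait (ms) = time_cs * 10 / waits, from the Top 5 Wait Events table
awk '{ printf "%s %.1f ms\n", $1, $3*10/$2 }' <<'EOF' > /tmp/avg_waits.txt
db_file_sequential_read 6230712 1640351
log_file_sync 1087475 1286467
db_file_scattered_read 351411 416508
log_file_parallel_write 706201 288168
buffer_busy_waits 334943 69830
EOF
cat /tmp/avg_waits.txt
```

Single-block reads averaging ~2.6 ms look healthy for a cached SAN, while log file sync at ~12 ms per wait is on the high side, which suggests the redo write path may deserve the first look here.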
Q1: How might this issue be approached?
Q2: Are there any special OS parameters that might be set?
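For anyone searching the archives later: the AIX 5.2 tunables most often reviewed for JFS2 + Oracle in this situation are listed below (the tunable names are real ioo/vmo ones; the values shown are typical 5.2 defaults, given for orientation only, not as recommendations):

```
# ioo tunables (JFS2 / LVM I/O path)
j2_minPageReadAhead       = 2      # JFS2 sequential read-ahead floor
j2_maxPageReadAhead       = 128    # JFS2 sequential read-ahead ceiling
j2_nBufferPerPagerDevice  = 512    # bufstructs per JFS2 filesystem
lvm_bufcnt                = 9      # LVM buffers for large raw I/Os

# vmo tunables (file cache vs. computational memory)
minperm% / maxperm%                # limit file-cache growth on a DB server

# mount option: "cio" (concurrent I/O, new in AIX 5.2 for JFS2) is commonly
# suggested for Oracle datafiles to bypass the inode lock and file cache
```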
--
http://www.freelists.org/webpage/oracle-l
Received on Fri Nov 11 2005 - 13:33:41 CST