Oracle FAQ | Your Portal to the Oracle Knowledge Grid
c.d.o.server: ASM overhead?
Consider the following excerpt from iostat:
Device:   tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sdb      9.38         0.00      4340.12          0      21744
sdb1     6.19         0.00      2887.03          0      14464
sdc      8.78         0.00      4298.60          0      21536
sdc1     5.79         0.00      2864.67          0      14352
sdd      9.58         0.00      4311.38          0      21600
sdd1     6.19         0.00      2871.06          0      14384
sde     14.17        12.77      5863.47         64      29376
sde1     9.38         9.58      3883.43         48      19456
These devices make up part of an LVM managed by the Oracle ASM (Automatic Storage Management) facility. There is nothing else on them. Why, then, would there be such a discrepancy between the whole device (e.g., sdc) and the partition Oracle actually uses (e.g., sdc1)? FYI, these are LUNs on an EMC DMX, latest software, patched up, etc. The server is a ProLiant 580 with four dual-core Xeons and 16 GB of memory, running RHAS4, patched up.
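For what it's worth, the gap looks remarkably consistent across the devices. A quick sketch, using only the Blk_wrtn/s figures from the iostat excerpt above, to quantify the device-vs-partition discrepancy (the dictionary below is just those numbers transcribed, not output from any tool):

```python
# Blk_wrtn/s figures copied from the iostat excerpt above.
blk_wrtn_per_s = {
    "sdb": 4340.12, "sdb1": 2887.03,
    "sdc": 4298.60, "sdc1": 2864.67,
    "sdd": 4311.38, "sdd1": 2871.06,
    "sde": 5863.47, "sde1": 3883.43,
}

# For each whole device, compare its write rate against its partition's.
for dev in ("sdb", "sdc", "sdd", "sde"):
    whole = blk_wrtn_per_s[dev]
    part = blk_wrtn_per_s[dev + "1"]
    print(f"{dev}: device {whole:.2f} vs partition {part:.2f} "
          f"-> ratio {whole / part:.2f}")
```

Every device reports roughly 1.5x the write volume of its sole partition, so whatever accounts for the difference is systematic rather than random noise.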
Does anyone have any experience with this phenomenon?

Received on Tue Jul 10 2007 - 15:42:49 CDT