I'm guessing that I'm limited by CPU on this IBM JS21 blade's LPAR (MPV,
2 cores max) with an SVC (virtualized SAN) backend. topas showed kernel
mode CPU >95% for most of the tests.
/oracle $ time dd if=S.dbf of=/dev/zero bs=1024k
16384+1 records in
16384+1 records out
real 1m3.044s
user 0m0.271s
sys 0m42.458s
/oracle $ time dd if=S.dbf of=/dev/zero bs=1024k
16384+1 records in
16384+1 records out
real 1m2.923s
user 0m0.249s
sys 0m42.325s
/oracle $ time dd if=S.dbf of=/dev/zero bs=128k
131072+1 records in
131072+1 records out
real 1m2.789s
user 0m0.709s
sys 0m41.665s
/oracle $ time dd if=S.dbf of=/dev/zero bs=128k
131072+1 records in
131072+1 records out
real 1m2.497s
user 0m0.688s
sys 0m41.422s
/oracle $ time dd if=S.dbf of=/dev/zero bs=8192k
2048+1 records in
2048+1 records out
real 1m2.601s
user 0m0.106s
sys 0m41.666s
/oracle $ time dd if=S.dbf of=/dev/zero bs=8192k
2048+1 records in
2048+1 records out
real 1m2.628s
user 0m0.099s
sys 0m41.621s
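(For scale, a back-of-envelope rate from the timings above — roughly 16384 MB read in about 63 s of wall time, essentially the same at every block size, which fits the CPU-bound picture:)

```shell
# Rough sequential-read rate implied by the runs above:
# ~16384 MB in ~63 s of wall time, regardless of block size.
mb=16384
secs=63
echo "~$(( mb / secs )) MB/s"   # prints ~260 MB/s
```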
Rich
From: oracle-l-bounce_at_freelists.org
[mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Kevin Closson
Sent: Monday, December 11, 2006 3:19 PM
To: oracle-l
Subject: RE: I/O and db_file_multiblock_read_count
OK, we should all throw our dd(1) microbenchmark results out there... This is a DL-585 with 2Gb FCP to a PolyServe CFS mounted in direct I/O mode. The single LUN is RAID 1+0 with a 1MB stripe width, striped across 65 15K RPM drives (hey, I get to play with nice toys...)
The file is 16GB.
$ time dd if=f1 of=/dev/zero bs=1024k
16384+0 records in
16384+0 records out
real 1m47.220s
user 0m0.009s
sys 0m5.175s
$ time dd if=f1 of=/dev/zero bs=128k
131072+0 records in
131072+0 records out
real 2m52.157s
user 0m0.056s
sys 0m7.126s
For grins I threw in huge I/O sizes (yes, this is actually issuing 8MB blocking reads)
$
$ time dd if=f1 of=/dev/zero bs=8192k
2048+0 records in
2048+0 records out
real 1m32.710s
user 0m0.002s
sys 0m3.984s
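(A quick conversion of the three wall times above into throughput, for the same 16384 MB file:)

```shell
# Implied throughput for the three runs above (16384 MB file):
for pair in "1024k 107" "128k 172" "8192k 93"; do
  set -- $pair                       # $1 = block size, $2 = wall seconds
  echo "bs=$1: ~$(( 16384 / $2 )) MB/s"
done
# bs=1024k: ~153 MB/s, bs=128k: ~95 MB/s, bs=8192k: ~176 MB/s
```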
Large I/Os get chopped up in the SCSI midlayer of Linux, but what is likely happening if you get less throughput with larger I/Os is that you have few drives, and a stripe width that causes each disk to be hit more than once for every I/O (that is bad).
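(The arithmetic behind that, sketched with a hypothetical small array — not the 65-drive LUN above: when one request spans more stripe units than there are disks, each disk must be visited more than once per I/O.)

```shell
# Hypothetical numbers for illustration, NOT the config described above:
io_kb=8192      # 8MB read, as in the bs=8192k test
stripe_kb=1024  # 1MB stripe unit
ndisks=4        # assumed 4-disk stripe

units=$(( io_kb / stripe_kb ))             # stripe units touched: 8
hits=$(( (units + ndisks - 1) / ndisks ))  # passes over each disk: 2
echo "each disk services ~$hits requests for one ${io_kb}k read"
```

With 65 drives and a 1MB stripe, an 8MB read touches only 8 units, so no disk is revisited — which is consistent with the 8MB runs above being the fastest.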
--
http://www.freelists.org/webpage/oracle-l
Received on Tue Dec 12 2006 - 09:15:56 CST