Re: ORacle IO Numbers (ORION)

From: hpuxrac <johnbhurley_at_sbcglobal.net>
Date: Thu, 15 Jan 2009 17:00:31 -0800 (PST)
Message-ID: <92d9a4e1-25ee-4d49-8ed9-1d4cdcb0f7a3_at_h41g2000yqn.googlegroups.com>



On Jan 15, 1:08 pm, Jan Krueger <j..._at_stud.uni-hannover.de> wrote:
> Hi,
>
> In the last few weeks I have been heavily using the ORION test tool available from OTN. I found
> some ridiculous results from a 32-disk array. The array is supposed to physically deliver about
> 4500 IOPS.
>
> I believe that the pseudo-random number generator (or the way it is used to calculate the block
> to read) within ORION is not appropriate, as I see cache hits in our SAN storage backend where
> none are expected given the number of blocks read. But I might be wrong and the effect might
> have a different origin. Look at this test run, where the latency drops again after continuously
> increasing:
>
> ran (small): my 1 oth 0 iops 121 size 8 K lat 8.25 ms bw = 0.95 MBps dur 59.99 s READ
> ran (small): my 2 oth 0 iops 211 size 8 K lat 9.43 ms bw = 1.66 MBps dur 59.98 s READ
> ran (small): my 3 oth 0 iops 299 size 8 K lat 10.03 ms bw = 2.34 MBps dur 59.97 s READ
> ran (small): my 4 oth 0 iops 371 size 8 K lat 10.75 ms bw = 2.91 MBps dur 59.96 s READ
> ran (small): my 5 oth 0 iops 456 size 8 K lat 10.95 ms bw = 3.57 MBps dur 59.97 s READ
> ran (small): my 6 oth 0 iops 511 size 8 K lat 11.74 ms bw = 3.99 MBps dur 59.96 s READ
> ran (small): my 7 oth 0 iops 573 size 8 K lat 12.21 ms bw = 4.48 MBps dur 59.95 s READ
> ran (small): my 8 oth 0 iops 639 size 8 K lat 12.51 ms bw = 4.99 MBps dur 59.97 s READ
> ran (small): my 16 oth 0 iops 1019 size 8 K lat 15.70 ms bw = 7.96 MBps dur 59.96 s READ
> ran (small): my 24 oth 0 iops 1294 size 8 K lat 18.53 ms bw = 10.11 MBps dur 59.92 s READ
> ran (small): my 32 oth 0 iops 1547 size 8 K lat 20.68 ms bw = 12.09 MBps dur 59.92 s READ
> ran (small): my 40 oth 0 iops 1739 size 8 K lat 22.99 ms bw = 13.59 MBps dur 59.90 s READ
> ran (small): my 48 oth 0 iops 1936 size 8 K lat 24.78 ms bw = 15.13 MBps dur 59.91 s READ
> ran (small): my 56 oth 0 iops 2082 size 8 K lat 26.88 ms bw = 16.27 MBps dur 59.89 s READ
> ran (small): my 64 oth 0 iops 2207 size 8 K lat 28.98 ms bw = 17.25 MBps dur 59.87 s READ
> ran (small): my 72 oth 0 iops 2324 size 8 K lat 30.96 ms bw = 18.16 MBps dur 59.87 s READ
> ran (small): my 80 oth 0 iops 2479 size 8 K lat 32.25 ms bw = 19.37 MBps dur 59.86 s READ
> ran (small): my 88 oth 0 iops 2611 size 8 K lat 33.67 ms bw = 20.40 MBps dur 59.84 s READ
> ran (small): my 96 oth 0 iops 2655 size 8 K lat 36.13 ms bw = 20.75 MBps dur 59.83 s READ
> ran (small): my 104 oth 0 iops 2809 size 8 K lat 36.98 ms bw = 21.95 MBps dur 59.85 s READ
> ran (small): my 112 oth 0 iops 2911 size 8 K lat 38.45 ms bw = 22.74 MBps dur 59.82 s READ
> ran (small): my 120 oth 0 iops 3007 size 8 K lat 39.88 ms bw = 23.50 MBps dur 59.81 s READ
> ran (small): my 128 oth 0 iops 3097 size 8 K lat 41.29 ms bw = 24.20 MBps dur 59.79 s READ
> ran (small): my 136 oth 0 iops 4254 size 8 K lat 31.97 ms bw = 33.24 MBps dur 59.77 s READ
> ran (small): my 144 oth 0 iops 4752 size 8 K lat 30.30 ms bw = 37.13 MBps dur 59.77 s READ
> ran (small): my 152 oth 0 iops 5180 size 8 K lat 29.34 ms bw = 40.47 MBps dur 59.77 s READ
> ran (small): my 160 oth 0 iops 5533 size 8 K lat 28.91 ms bw = 43.23 MBps dur 59.77 s READ
>
> The vlun size is 3 TB and the cache of the SAN box is 96 GB, so the cache hit ratio should be
> minimal (only about 1% of the vlun had been read into cache by the end of the test).
>
> The following results were taken on the same array with a smaller vlun size:
> ran (small): my 1 oth 0 iops 2468 size 8 K lat 0.40 ms bw = 19.29 MBps dur 59.97 s READ
> ran (small): my 2 oth 0 iops 8482 size 8 K lat 0.24 ms bw = 66.27 MBps dur 60.00 s READ
> ran (small): my 3 oth 0 iops 8220 size 8 K lat 0.36 ms bw = 64.22 MBps dur 60.00 s READ
> ran (small): my 4 oth 0 iops 8850 size 8 K lat 0.45 ms bw = 69.14 MBps dur 59.99 s READ
> ran (small): my 5 oth 0 iops 9264 size 8 K lat 0.54 ms bw = 72.38 MBps dur 59.99 s READ
> ran (small): my 6 oth 0 iops 10177 size 8 K lat 0.59 ms bw = 79.52 MBps dur 59.99 s READ
> ran (small): my 7 oth 0 iops 11159 size 8 K lat 0.63 ms bw = 87.19 MBps dur 59.99 s READ
> ran (small): my 8 oth 0 iops 11321 size 8 K lat 0.71 ms bw = 88.45 MBps dur 59.99 s READ
> ran (small): my 16 oth 0 iops 16646 size 8 K lat 0.96 ms bw = 130.05 MBps dur 59.98 s READ
> ran (small): my 24 oth 0 iops 22483 size 8 K lat 1.07 ms bw = 175.65 MBps dur 59.97 s READ
> ran (small): my 32 oth 0 iops 23312 size 8 K lat 1.37 ms bw = 182.13 MBps dur 59.96 s READ
> ran (small): my 40 oth 0 iops 28100 size 8 K lat 1.42 ms bw = 219.54 MBps dur 59.95 s READ
> ran (small): my 48 oth 0 iops 29322 size 8 K lat 1.64 ms bw = 229.08 MBps dur 59.95 s READ
> ran (small): my 56 oth 0 iops 31635 size 8 K lat 1.77 ms bw = 247.15 MBps dur 59.93 s READ
> ran (small): my 64 oth 0 iops 32101 size 8 K lat 1.99 ms bw = 250.79 MBps dur 59.93 s READ
> ran (small): my 72 oth 0 iops 33458 size 8 K lat 2.15 ms bw = 261.39 MBps dur 59.92 s READ
> ran (small): my 80 oth 0 iops 36226 size 8 K lat 2.21 ms bw = 283.02 MBps dur 59.91 s READ
> ran (small): my 88 oth 0 iops 35770 size 8 K lat 2.46 ms bw = 279.46 MBps dur 59.90 s READ
> ran (small): my 96 oth 0 iops 37845 size 8 K lat 2.54 ms bw = 295.67 MBps dur 59.89 s READ
> ran (small): my 104 oth 0 iops 37771 size 8 K lat 2.75 ms bw = 295.09 MBps dur 59.88 s READ
> ran (small): my 112 oth 0 iops 38373 size 8 K lat 2.92 ms bw = 299.79 MBps dur 59.87 s READ
> ran (small): my 120 oth 0 iops 38978 size 8 K lat 3.08 ms bw = 304.52 MBps dur 59.86 s READ
> ran (small): my 128 oth 0 iops 41127 size 8 K lat 3.11 ms bw = 321.31 MBps dur 59.85 s READ
> ran (small): my 136 oth 0 iops 48625 size 8 K lat 2.80 ms bw = 379.89 MBps dur 59.85 s READ
> ran (small): my 144 oth 0 iops 51091 size 8 K lat 2.82 ms bw = 399.15 MBps dur 59.84 s READ
> ran (small): my 152 oth 0 iops 53775 size 8 K lat 2.83 ms bw = 420.12 MBps dur 59.83 s READ
>
> It reaches the limit of the fibre link even though the 32-disk array is only expected to support
> about 4500 IO/s. In this second test case the proportion of data in the cache is higher (vlun
> size is 512 GB), but even if the cache is completely filled, the expected cache hit ratio is only
> 96G/512G = 19%. Given that a read from cache takes only about 0.3 ms and one from disk about
> 8 ms, the expected maximum would be somewhere around 8000 IOPS or so.
>
> Best regards.
>
> Jan
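
Just to sanity-check the arithmetic quoted above: here is a rough back-of-envelope model as a quick PL/SQL block. It assumes uniformly random 8K reads over the whole vlun, treats the quoted 4500 IOPS as a hard disk-side ceiling, and treats cache hits as essentially free; the exact figure obviously depends on the queueing model, and a looser estimate that just weights the 0.3 ms / 8 ms latencies lands nearer the 8000 mentioned above.

set serveroutput on

declare
  cache_gb        number := 96;     -- SAN box cache, as quoted above
  vlun_gb         number := 512;    -- vlun size in the second test case
  disk_iops_limit number := 4500;   -- quoted physical limit of the 32-disk array
  hit_ratio       number;
  est_max_iops    number;
begin
  -- Uniformly random 8K reads: once the cache is completely warm, the
  -- chance of hitting a cached block is at most cache size / vlun size.
  hit_ratio := cache_gb / vlun_gb;

  -- If only (1 - hit_ratio) of the requests ever reach the disks and the
  -- disks cap out at disk_iops_limit, the overall ceiling is:
  est_max_iops := disk_iops_limit / (1 - hit_ratio);

  dbms_output.put_line('expected hit ratio : ' || round(hit_ratio * 100, 1) || ' %');
  dbms_output.put_line('expected max IOPS  : ' || round(est_max_iops));
end;
/

Whether that comes out around 5,500 with this model or nearer 8,000 with the latency-weighted estimate, it is an order of magnitude below the ~53,000 IOPS in the second run, so those numbers really do look cache-dominated.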

I tend to like putting together IO benchmarks based on real data available in one of my databases rather than using generic tools.

It's usually not hard to put together a couple of PL/SQL procedures that push IO by reading or writing actual data from your systems.
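
For example, something along these lines (a minimal sketch; RANDOM_READ_BENCH, BIG_TABLE and its ID primary key are placeholders, swap in one of your own wide tables, ideally one much larger than your buffer cache so the reads actually hit the storage):

create or replace procedure random_read_bench (
  p_iterations in pls_integer default 100000
) as
  -- Minimal sketch: drive random single-row lookups against an existing
  -- large table, so the block distribution matches your real data.
  -- BIG_TABLE / ID are placeholders for one of your own tables and its
  -- primary key; pick a table well beyond the size of the buffer cache.
  l_min_id number;
  l_max_id number;
  l_row    big_table%rowtype;
begin
  select min(id), max(id) into l_min_id, l_max_id from big_table;

  for i in 1 .. p_iterations loop
    begin
      select * into l_row
        from big_table
       where id = trunc(dbms_random.value(l_min_id, l_max_id));
    exception
      when no_data_found then null;   -- gaps in the key range are fine
    end;
  end loop;
end;
/

Kick it off from a handful of concurrent sessions to get the outstanding-IO counts up, and you have a random-read load against data you actually care about.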

Then you can measure with standard Unix/Linux tools, with storage vendor tools, or even with OEM (the new 11g IO charting is pretty good).

Received on Thu Jan 15 2009 - 19:00:31 CST
