RE: I/O performance

From: Taylor, Chris David <ChrisDavid.Taylor_at_ingrambarge.com>
Date: Fri, 8 Jun 2012 08:02:23 -0500
Message-ID: <C5533BD628A9524496D63801704AE56D75B291E9CF_at_SPOBMEXC14.adprod.directory>



Niall,

( I know you know some of this already but I'll post it anyway :)

Regarding typical SAS or SATA disks (I'm not sure what general rules to follow for SSDs): if you know the brand and model of the disks in your array (SAN or otherwise), you can calculate approximate IOPS per disk (see this link for an example: https://communities.netapp.com/community/netapp-blogs/databases/blog/2011/08/11/formula-to-calculate-iops-per-disk).
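As a rough sketch of that kind of formula (assuming the common approximation IOPS ~= 1 / (average seek time + average rotational latency), where average rotational latency is half a revolution; the seek time and RPM figures below are placeholder spec-sheet values, not measurements):

```python
def disk_iops(avg_seek_ms, rpm):
    """Approximate max random IOPS for a single rotating disk.

    avg_seek_ms: average seek time in milliseconds (from the drive spec sheet)
    rpm: spindle speed in revolutions per minute
    """
    # Average rotational latency is the time for half a full revolution.
    rot_latency_ms = 0.5 * 60_000 / rpm
    # One I/O takes roughly one seek plus one half-revolution.
    return 1000 / (avg_seek_ms + rot_latency_ms)

# Example: a 15k RPM SAS disk with ~3.5 ms average seek
print(round(disk_iops(3.5, 15000)))  # 182
```

A 15k RPM drive has ~2 ms average rotational latency, so ~5.5 ms per I/O works out to roughly 180 IOPS, in line with the per-disk numbers below.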

Of the approximate calculated IOPS, you can only count on about 75% of that capacity. You will never reach the full calculated IOPS because there is a direct relationship between disk utilization and response time: as utilization increases, response time increases. So only count on about 75% of the calculated IOPS per disk. (I tried to find a specific link illustrating this relationship but gave up.)

So, let's say you have an array full of SAS disks capable of ~210 IOPS per disk. You're only going to get about ~157 IOPS per disk, so if you have a LUN configured across 20 of those disks you can count on roughly 3,140 IOPS *at the disk level*. This matters because you have many other factors to consider: frame sizes, number of frames, link speeds, etc. (see http://www.thesanman.org/2012/03/understanding-iops.html for a discussion).
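Putting the derating rule and the spindle count together (a minimal sketch; the 75% factor is the rule of thumb above, and the small rounding difference from the ~3,140 figure comes from rounding per-disk IOPS to 157 first):

```python
def usable_lun_iops(per_disk_iops, n_disks, derate=0.75):
    """Rule-of-thumb usable IOPS for a LUN striped across n_disks.

    derate: fraction of the calculated per-disk IOPS you can actually
    count on (~75%, since response time climbs as utilization rises).
    """
    return per_disk_iops * derate * n_disks

# The worked example: ~210 IOPS/disk across 20 SAS spindles
print(usable_lun_iops(210, 20))  # 3150.0
```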

*Now*, typically if I'm interested in I/O performance, I want to see what the *server* connected to the disk array is doing; specifically, whether it's queueing I/Os and, if so, how many. Large I/O queues are generally a bad sign: they show that I/Os are not being serviced fast enough between the server and the disk system. As a rule of thumb, the average queue length shouldn't exceed twice the physical number of disks in the array. (To me that doesn't seem valid; if I have 256 disks in a stripe and my queue length on the server is holding steady at 512, I would consider that *bad*.)

For Windows, use perfmon to find the queue lengths by disk (average and current queue lengths). For Linux, iostat -x 1 100 will show the average queue size each second for 100 seconds (plus a lot of other stuff, of course).
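If you want to apply the 2x-spindles rule of thumb to iostat output automatically, something like this sketch works; note that column layout varies between sysstat versions, so it locates the avgqu-sz column by its header name rather than by position, and the sample output below is fabricated for illustration:

```python
def flag_long_queues(iostat_report, n_disks, factor=2):
    """Return devices whose avgqu-sz exceeds n_disks * factor.

    iostat_report: the text of one `iostat -x` report.
    """
    lines = iostat_report.strip().splitlines()
    # Find the header row and the position of the avgqu-sz column.
    header = next(l for l in lines if "avgqu-sz" in l)
    qi = header.split().index("avgqu-sz")
    flagged = {}
    for line in lines[lines.index(header) + 1:]:
        fields = line.split()
        if len(fields) <= qi:
            continue
        dev, qlen = fields[0], float(fields[qi])
        if qlen > n_disks * factor:
            flagged[dev] = qlen
    return flagged

# Fabricated sample report for illustration
sample = """Device: rrqm/s wrqm/s r/s w/s avgqu-sz await svctm %util
sda 0.0 0.0 10.0 5.0 1.2 3.0 1.0 5.0
sdb 0.0 0.0 900.0 100.0 45.0 40.0 1.0 99.0"""

print(flag_long_queues(sample, 8))  # {'sdb': 45.0}
```

With 8 disks the threshold is 16, so only sdb (sustained queue of 45) is flagged; whether that threshold is meaningful for your stripe width is, as noted above, debatable.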

Chris Taylor

"Quality is never an accident; it is always the result of intelligent effort."
-- John Ruskin (English Writer 1819-1900)

Any views and/or opinions expressed herein are my own and do not necessarily reflect the views of Ingram Industries, its affiliates, its subsidiaries or its employees.

-----Original Message-----

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Niall Litchfield
Sent: Friday, June 08, 2012 1:03 AM
To: ORACLE-L
Subject: I/O performance

I'm curious how many of you measure I/O performance ( IOPS, service times and MB/s ) regularly on your databases? And for those in SAN environments if you have access to ballpark figures for what you should be getting.

--

http://www.freelists.org/webpage/oracle-l

Received on Fri Jun 08 2012 - 08:02:23 CDT