Re: I/O performance

From: Purav Chovatia <puravc_at_gmail.com>
Date: Fri, 8 Jun 2012 13:48:24 +0530
Message-ID: <CADrzpjHyehFHJ2Xu1+JE3xXRm7VPXvJWmxV05hc9+ZmX=HW5LQ_at_mail.gmail.com>



Hi Niall,
We benchmark all of our products in our labs on hardware similar to what we deploy in production. During benchmarking we always measure and analyze IO stats at a 1-minute interval, which we will soon reduce to 10 seconds.

We also run a home-grown forensic script in production that captures IO stats, again at a granularity of 1 minute, which will soon be reduced.

However, when I am personally diagnosing an issue, especially if IO seems to be the bottleneck (and, as we know, that is one of the most common causes of performance issues), I gather the IO stats at a granularity of 2 seconds. I know it adds overhead, but that is only for a period of 10 minutes or so.
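
For illustration, here is a minimal sketch of the kind of 2-second poller I mean (assuming Linux and /proc/diskstats; the device name is just a placeholder, and this is only the general idea, not our actual script):

#!/usr/bin/env python3
# Sketch of an I/O stats poller: prints IOPS, MB/s and an approximate
# service time per interval, computed from /proc/diskstats counters.
# DEVICE and INTERVAL are placeholders -- adjust for your volumes.
import time

DEVICE = "sdb"    # e.g. the dedicated redo volume
INTERVAL = 2      # seconds

def read_counters(device):
    """Return (total I/Os, total sectors, ms spent doing I/O) for one device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                ios = int(fields[3]) + int(fields[7])        # reads + writes completed
                sectors = int(fields[5]) + int(fields[9])    # sectors read + written
                io_ms = int(fields[12])                      # total time doing I/O (ms)
                return ios, sectors, io_ms
    raise ValueError(f"device {device} not found")

prev = read_counters(DEVICE)
while True:
    time.sleep(INTERVAL)
    cur = read_counters(DEVICE)
    d_ios = cur[0] - prev[0]
    d_mb = (cur[1] - prev[1]) * 512 / 1e6                    # sectors are 512 bytes
    svctm = (cur[2] - prev[2]) / d_ios if d_ios else 0.0     # rough ms per I/O
    print(f"{time.strftime('%H:%M:%S')} "
          f"iops={d_ios / INTERVAL:8.1f} mb/s={d_mb / INTERVAL:7.2f} "
          f"svctm={svctm:5.2f} ms")
    prev = cur

Run it against the redo and data volumes separately; the svctm figure is a rough "device busy time per I/O", which is what I compare against the service-time numbers below.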

Very rarely do we have a SAN (say 1 or 2 out of 500 deployments); otherwise we always have DAS (or shared storage in the case of RAC). Let me know if more details on this would help. One thing that is always true is that for redo (which in our case is always on a dedicated volume) the service time is 0.2 - 0.3 msec, and for data it is less than 5-7 msec.

HTH

On Fri, Jun 8, 2012 at 11:33 AM, Niall Litchfield <niall.litchfield_at_gmail.com> wrote:

> I'm curious how many of you measure I/O performance ( IOPS, service times
> and MB/s ) regularly on your databases? And for those in SAN environments
> if you have access to ballpark figures for what you should be getting.
>
> --
> http://www.freelists.org/webpage/oracle-l
>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Jun 08 2012 - 03:18:24 CDT
