Re: troubleshooting slow I/O performance.

From: Stefan Knecht <knecht.stefan_at_gmail.com>
Date: Tue, 8 May 2018 14:17:29 +0700
Message-ID: <CAP50yQ_C6e7+0x+FJ1s=Boc=4nX+VXJc6c_Q7EWJdUwHv+Y7rg_at_mail.gmail.com>



Interesting. At first glance, it looks to me too like you should be getting more out of that type of setup than you are seeing.

It's been a while since I looked into disk drives like this, but it sounds like you're running 4TB disks. Most of these have specs similar to these:

https://www.disctech.com/Western-Digital-WD4000FYYZ-4TB-Enterprise-SATA-Hard-Drives

I haven't spent that much time looking around, but the picture seems to be that most of those spin at 7200 RPM, which puts the average random read service time at roughly 10ms. If you're hitting the array hard enough to bypass any caches and other smart gimmicks (which SLOB is certainly doing), you would essentially be limited by the seek time of the spindles themselves. So this sounds about right to me.
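For what it's worth, here's a quick back-of-the-envelope sketch of where that ~10ms figure comes from and what it means for an 8-spindle LUN. The ~8.5ms average seek is my assumption for a typical 7200 RPM enterprise SATA drive, not a number from the spec sheet:

```python
# Rough random-read math for a 7200 RPM spindle.
rpm = 7200
avg_rotational_latency_ms = (60_000 / rpm) / 2   # half a rotation: ~4.17 ms
avg_seek_ms = 8.5                                 # assumed typical value for 7200 RPM SATA
service_time_ms = avg_seek_ms + avg_rotational_latency_ms

print(f"avg rotational latency:        {avg_rotational_latency_ms:.2f} ms")
print(f"est. random read service time: {service_time_ms:.2f} ms")

# In RAID 10, random reads can be served by every spindle, so random
# read IOPS scale roughly with the number of disks in the LUN.
iops_per_disk = 1000 / service_time_ms
print(f"est. IOPS per disk:            {iops_per_disk:.0f}")
print(f"est. IOPS per 8-disk LUN:      {8 * iops_per_disk:.0f}")
```

So each single random read sitting at ~10ms is expected; more spindles buy you more concurrent IOPS, not a faster individual read.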

On Tue, May 8, 2018 at 12:40 AM, Chris Stephens <cstephens16_at_gmail.com> wrote:

> 8 physical disks / lun.
>
> Fibre Channel
>
> raid 10 with external ASM redundancy.
>
> thanks stefan!
>
> On Mon, May 7, 2018 at 12:12 PM Stefan Knecht <knecht.stefan_at_gmail.com>
> wrote:
>
>> By path I meant how the storage is connected to the server. Fibre
>> channel, iSCSI, etc...
>>
>> And I assume you mean that you have 30TB LUNs - I don't think there are
>> 30TB physical disks. The actual number of physical spinning rusty things is
>> the number that's important. E.g. how is your LUN made up?
>>
>> And of course, it's also important to know how the disks are arranged
>> when the LUN is built - RAID 1, RAID 5 (I hope not but that might explain
>> the issues you're seeing) RAID 10, etc...
>>
>> I'd say a friendly chat with the storage guy is probably in order :) It
>> could be any number of things - but what's really key to get good
>> performance out of bare metal drives is the number of drives you got.
>>
>>
>>
>>
>>
>> On Tue, May 8, 2018 at 12:08 AM, Chris Stephens <cstephens16_at_gmail.com>
>> wrote:
>>
>>> ASM
>>> 6 30TB spinning drives
>>> type of path? we are not using asmlib or asm_fd (whatever its called).
>>>
>>>
>>> On Mon, May 7, 2018 at 11:54 AM Stefan Knecht <knecht.stefan_at_gmail.com>
>>> wrote:
>>>
>>>> Some details would help:
>>>>
>>>> - ASM or FS? Type of FS?
>>>> - LUNs? Numbers, sizes?
>>>> - metal? SSD? Type of path to the disks?
>>>>
>>>> Etc...
>>>>
>>>>
>>>> On Mon, 7 May 2018, 21:00 Chris Stephens, <cstephens16_at_gmail.com>
>>>> wrote:
>>>>
>>>>> We have a new 5 node 12.2 RAC system that is not performing the way we
>>>>> want.
>>>>>
>>>>> The glaring issue is that "db file sequential read"'s are taking
>>>>> ~10ms. before i lob this over to the storage administrators, are there any
>>>>> possible areas in the clusterware/database configuration that I should
>>>>> investigate first? i have root access to all of the nodes. is there any
>>>>> information i can collect that would expedite the process of figuring out
>>>>> why we have such slow I/O times?
>>>>>
>>>>> slow i/o was discovered by running slob. if you don't know about that
>>>>> tool, you should. we all owe kevin a debt of gratitude. ;)
>>>>>
>>>>> if nothing else, i hope to learn a little more about storage than i
>>>>> currently know (which isn't much).
>>>>>
>>>>> thanks for any help.
>>>>>
>>>>> chris
>>>>>
>>>>>
>>
>>
>

-- 
//
zztat - The Next-Gen Oracle Performance Monitoring and Reaction Framework!
Visit us at zztat.net | _at_zztat_oracle | fb.me/zztat | zztat.net/blog/

--
http://www.freelists.org/webpage/oracle-l
Received on Tue May 08 2018 - 09:17:29 CEST
