Re: Storage choice for Oracle database on VMware

From: Radoulov, Dimitre <cichomitiko_at_gmail.com>
Date: Sun, 11 Nov 2018 21:27:32 +0100
Message-ID: <CAGJBphRJtk1KOSHHhiMDsum3E+HnyOhW9jHW=Nd8hDiizkc-5Q_at_mail.gmail.com>



Yes,
and in our environment it really makes a difference.
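
For reference, this is roughly how we check and change it (a minimal sketch,
run on the database host as the oracle OS user; the parameter is static, so
it only takes effect after an instance restart):

sqlplus -s / as sysdba <<'EOF'
-- show the current value
SHOW PARAMETER filesystemio_options
-- SETALL enables both direct and asynchronous I/O where the filesystem
-- supports them
ALTER SYSTEM SET filesystemio_options = SETALL SCOPE = SPFILE;
EOF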

Regards
Dimitre

On Sun, 11 Nov 2018 at 20:57, Ls Cheng <exriscer_at_gmail.com> wrote:

> Hi Radoulov
>
> Just wondering, in your tests did you set FILESYSTEMIO_OPTIONS to SETALL
> on XFS?
>
> Thanks
>
>
>
> On Fri, Nov 9, 2018 at 3:46 PM Radoulov, Dimitre <cichomitiko_at_gmail.com>
> wrote:
>
>> Hello all,
>>
>>
>> After a few quick tests on XFS and ASM (calibrate_io and Swingbench) I
>> see that direct and asynchronous I/O definitely make a difference.
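>>
>> For the calibrate_io part, the call we used was roughly along these lines
>> (a minimal sketch; the disk count and latency target are illustrative, and
>> the procedure needs SYSDBA, timed_statistics and async I/O enabled):
>>
>> sqlplus -s / as sysdba <<'EOF'
>> SET SERVEROUTPUT ON
>> DECLARE
>>   l_max_iops PLS_INTEGER;
>>   l_max_mbps PLS_INTEGER;
>>   l_latency  PLS_INTEGER;
>> BEGIN
>>   -- num_physical_disks and max_latency (ms) are illustrative values
>>   DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
>>     num_physical_disks => 8,
>>     max_latency        => 20,
>>     max_iops           => l_max_iops,
>>     max_mbps           => l_max_mbps,
>>     actual_latency     => l_latency);
>>   DBMS_OUTPUT.PUT_LINE('max_iops='  || l_max_iops ||
>>                        ' max_mbps=' || l_max_mbps ||
>>                        ' latency='  || l_latency);
>> END;
>> /
>> EOF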
>>
>> Stefan and Neil, thank you for your suggestions!
>>
>>
>>
>> Regards
>>
>> Dimitre
>>
>>
>>
>> On 31/10/2018 12:29, Neil Chandler wrote:
>>
>> Radoulov,
>>
>> The caching in the SGA understands your data usage patterns through the
>> LRU algorithms and will have cached all of the best data. The FS cache, if
>> you dump it out, will look a lot more like white noise with few discernible
>> patterns. The SAN cache even more so. The more single-block reads you have,
>> the more like white noise it all looks. The likelihood of a cache hit in
>> the FS or SAN cache is relatively low. The advantage of direct path reads
>> significantly outweighs the advantage of both of those caches.
>> It is worth noting that on most SAN caches, if you specify that the LUN
>> is for a database, read-ahead pre-population of the cache is disabled, as
>> the SAN understands that it is not the best use of the cache (the general
>> rule is that the SAN cache should be reserved exclusively for writes when
>> the SAN is used for a database).
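>>
>> (As a quick way to see how much of the read workload already bypasses the
>> buffer cache, something along these lines can be run against v$sysstat; a
>> minimal sketch, with figures cumulative since instance startup:)
>>
>> sqlplus -s / as sysdba <<'EOF'
>> -- 'physical reads direct' counts reads that bypass the buffer cache
>> SELECT name, value
>> FROM   v$sysstat
>> WHERE  name IN ('physical reads',
>>                 'physical reads direct',
>>                 'physical reads direct temporary tablespace');
>> EOF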
>>
>> Note that these statements are generalisations, and there may be cases
>> where your assertion is true, but those will be edge cases and I would
>> recommend that you have a provable scenario to justify running in that
>> configuration.
>>
>> Neil Chandler
>> Database Guy.
>> ------------------------------
>> *From:* oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org>
>> on behalf of Radoulov, Dimitre <cichomitiko_at_gmail.com>
>> *Sent:* 31 October 2018 07:20
>> *To:* Andrew Kerber
>> *Cc:* lkaing_at_gmail.com; contact_at_soocs.de; Oracle-L Group
>> *Subject:* Re: Storage choice for Oracle database on VMware
>>
>> Thank you all for the valuable input!
>>
>> > what is the problem with direct I/O? You should never run an Oracle
>> database through page cache anyway :)
>>
>> I'm not sure if direct I/O is always the best choice. I think that
>> certain workloads may benefit from the FS cache.
>>
>> Anyway, I'm wondering why SETALL is still not the default value for
>> filesystemio_options on Linux (most probably because of bugs with
>> certain filesystems and kernel versions).
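>>
>> (To see what the instance is actually doing per file, rather than what the
>> parameter says, something like the following helps; a minimal sketch:)
>>
>> sqlplus -s / as sysdba <<'EOF'
>> -- ASYNCH_IO shows ASYNC_ON for files where asynchronous I/O is in use
>> SELECT filetype_name, asynch_io, COUNT(*) AS files
>> FROM   v$iostat_file
>> GROUP  BY filetype_name, asynch_io
>> ORDER  BY filetype_name;
>> EOF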
>>
>>
>>
>> Regards
>> Dimitre
>>
>>
>> On Tue, 30 Oct 2018 at 22:38, Andrew Kerber <andrew.kerber_at_gmail.com>
>> wrote:
>>
>> Most places with growing databases and heavy-duty environments on VMware
>> use ASM. Some use XFS or similar and LVM, though I am not fond of those.
>>
>> On Tue, Oct 30, 2018 at 4:34 PM Leng <lkaing_at_gmail.com> wrote:
>>
>> ASM is great when you plan correctly; if you don't, it's very painful. E.g.
>> if you have different-sized disks, ASM will be forever rebalancing, and
>> failing because there is not enough space on the odd disk, so you need to
>> vacate the diskgroup to rebuild it. (Yes, you know... not my fault, the
>> previous consultant did it...) If there's an ASM bug you may have to take
>> an outage on ASM to apply the patch.
>>
>> Normal disk operations like dd are almost impossible with ASM. Trying to
>> find a corrupted data block on an ASM disk takes great ASM expertise
>> from a great Oracle support engineer.
>>
>> Those were some of my worst ASM nightmares. It was only 2 years ago. I
>> have since moved on...
>>
>> Cheers,
>> Leng
>>
>> > On 31 Oct 2018, at 7:20 am, Stefan Koehler <contact_at_soocs.de> wrote:
>> >
>> > Hello Dimitre,
>> > what is the problem with direct I/O? You should never run an Oracle
>> database through page cache anyway :)
>> >
>> > I would go with tweaked XFS (e.g. "nobarrier", as the barrier setting is
>> usually not passed through correctly with VMDKs on VMFS, etc.) if it is
>> just a single instance in this VM.
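>> >
>> > A minimal example of what I mean (device name, mount point and the other
>> > options are illustrative; note that "nobarrier" has been removed from
>> > newer kernels, so check what your kernel supports first):
>> >
>> > # hypothetical /etc/fstab entry for an Oracle data filesystem
>> > # /dev/mapper/vg_ora-lv_data  /u02/oradata  xfs  noatime,nodiratime,nobarrier  0 0
>> > mount -o noatime,nodiratime,nobarrier /dev/mapper/vg_ora-lv_data /u02/oradata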
>> >
>> > Best Regards
>> > Stefan Koehler
>> >
>> > Independent Oracle performance consultant and researcher
>> > Website: http://www.soocs.de
>> > Twitter: _at_OracleSK
>> >
>> >> "Radoulov, Dimitre" <cichomitiko_at_gmail.com> hat am 30. Oktober 2018
>> um 19:12 geschrieben:
>> >>
>> >> Thank you Chris, Matthew and Niall,
>> >>
>> >> so the question is whether, performance-wise, ASM is worth it.
>> >>
>> >> With the default Oracle database settings the I/O on XFS would be
>> synchronous, right?
>> >>
>> >> And if I understand Note 1987437.1 correctly, on Linux you cannot
>> enable async I/O without also turning on direct I/O.
>> >>
>> >> Regards
>> >> Dimitre
>> > --
>> > http://www.freelists.org/webpage/oracle-l
>> >
>> >
>> --
>> http://www.freelists.org/webpage/oracle-l
>>
>>
>>
>>
>> --
>> Andrew W. Kerber
>>
>> 'If at first you don't succeed, don't take up skydiving.'
>>
>>
>>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Nov 11 2018 - 21:27:32 CET
