Re: To LVM or not to LVM?

From: Stefan Koehler <contact_at_soocs.de>
Date: Wed, 8 May 2019 15:50:31 +0200 (CEST)
Message-ID: <592665821.300615.1557323431925_at_ox.hosteurope.de>


Hello Rich,
VMFS vs. RDM is important because the number of LUNs per VMware host is limited ( https://www.virten.net/vmware/vmware-vsphere-esx-and-vcenter-configuration-maximums/ ) and also because of the disk device queue.

> If that's the case, then I can see potential for LVM in creating small(er) LUNs and grouping the corresponding PVs into a VG to spread the load with a striped LV.

I know IBM's XIV as well, but my point is a different one, as it relates to the host side only. You get only one disk device queue if you have just one LUN for your database (e.g., as you mentioned in the other mail, "get a 500GB virtual disk and I create a single partition on it, add XFS, and create a mountpoint for it"). This is not a big deal if your database is not I/O intensive, but as soon as you have high I/O load you will see the effects of increasing I/O latency due to a growing device queue ( e.g. storage wait time vs. host wait time: https://bartsjerps.com/2011/03/04/io-bottleneck-linux/ ), up to the point of a completely full disk device queue ( e.g. wait event "db file async I/O submit": https://fritshoogland.wordpress.com/2018/05/30/oracle-database-wait-event-db-file-async-i-o-submit-timing-bug/ ).
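To see this on the host, you can look at the single device queue directly; a minimal sketch (the device name sdb is an illustrative assumption, not from this thread):

```shell
# Queue depth limit of one block device - one LUN means one such
# queue, regardless of how fast the backing array is.
cat /sys/block/sdb/queue/nr_requests

# Watch per-device utilization, average queue size and await every
# 5 seconds; a single saturated LUN shows a large queue on one
# device while the host otherwise looks idle.
iostat -dx 5
```

If the average queue size column (aqu-sz / avgqu-sz, depending on the sysstat version) keeps climbing on the one database LUN, you are hitting exactly the single-queue bottleneck described above.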

These disk device queue problems can be handled with striped LVs :)
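As a sketch of what that looks like (device names /dev/sdb-/dev/sdd, the VG/LV names, and the sizes are illustrative assumptions, not from this thread):

```shell
# Assumption: /dev/sdb, /dev/sdc, /dev/sdd are three smaller LUNs
# presented to the VM as separate virtual disks.
pvcreate /dev/sdb /dev/sdc /dev/sdd

# Group the corresponding PVs into one volume group.
vgcreate vg_oradata /dev/sdb /dev/sdc /dev/sdd

# Create an LV striped across all three PVs: I/O is split into
# 1 MiB stripe units, so each underlying LUN (and its own device
# queue) carries a share of the load.
lvcreate --type striped -i 3 -I 1m -L 300G -n lv_oradata vg_oradata

# Filesystem on top, per the recommendation in this thread (LVM + XFS).
mkfs.xfs /dev/vg_oradata/lv_oradata
mount /dev/vg_oradata/lv_oradata /u02/oradata
```

With three stripes you effectively get three disk device queues working in parallel instead of one.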

Best Regards
Stefan Koehler

Independent Oracle performance consultant and researcher
Website: http://www.soocs.de
Twitter: _at_OracleSK

> Rich J <rjoralist3_at_society.servebeer.com> hat am 8. Mai 2019 um 15:31 geschrieben:
>
>
> On 2019/05/08 01:27, Stefan Koehler wrote:
>
>
> I should have read this more closely...
>
> > the first question would be - are you using VMFS or RDM for your "virtual disks"?
> >  
>  
> Not sure, as I'm not the vSphere person.  I guess I assumed (poorly) that it would be VMFS.  Why?
>  
> > However go for LVM - especially if you have some I/O intensive databases, because you can easily spread out the I/O load over several vSCSI devices/LUNs/virtual disks with LVM (even making use of multiple disk queues).
> >
> > Recommendation based on experience and a lot of SLOB benchmarks in client environments (also with All-Flash SAN): LVM + XFS.
>  
> Coming from an XIV, where there's near-zero control over where LUNs are created (and that worked very well for us), I'm interested in how this SAN will be setup.  I'm thinking it'll be some RAID5/RAID6, but that's just a guess.  If that's the case, then I can see potential for LVM in creating small(er) LUNs and grouping the corresponding PVs into a VG to spread the load with a striped LV.
>  
> I think my brain's light bulb is starting to glow now...
>  
> Thanks!
> Rich

--
http://www.freelists.org/webpage/oracle-l
Received on Wed May 08 2019 - 15:50:31 CEST
