RE: To LVM or not to LVM?

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Wed, 8 May 2019 19:27:31 -0400
Message-ID: <063801d505f5$9c4cc160$d4e64420$_at_rsiz.com>



Just curious. Moons ago I did a fair amount of disk-to-volume-to-Oracle optimization (as Kevin is fond of saying, I “partied like it was 1999”) from about 1988 until about 2000.

Does anyone still create little volumes that are used for nothing, just to keep the block header on its own track, and so forth?

Probably the amount of cache fronting the disk system makes that irrelevant, and on SSD, of course, it is just math and the spinning geometry simply doesn’t matter.

Does anyone put multiple volume stripes across disks anymore? If so, do you stagger which disk is first in each volume so the first disk doesn’t get all the header block writes for all the volumes?

Did any of the volume managers make it the default, when adding a volume, to move to the next disk as the starting point after checking any already allocated volumes?

You may now return to the current millennium,  

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Rich J
Sent: Wednesday, May 08, 2019 4:09 PM
To: oracle-l_at_freelists.org
Subject: Re: To LVM or not to LVM?  

On 2019/05/08 08:50, Stefan Koehler wrote:

Hello Rich,
The choice of VMFS or RDM is important because the number of LUNs per VMware host is limited ( https://www.virten.net/vmware/vmware-vsphere-esx-and-vcenter-configuration-maximums/ ) and also because of the disk device queue.

If that's the case, then I can see potential for LVM in creating small(er) LUNs and grouping the corresponding PVs into a VG to spread the load with a striped LV.
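Something like this, I suppose (a rough sketch only; the device names, sizes, and mountpoint are placeholders):

    # Four small LUNs presented by VMware as separate disk devices,
    # each with its own device queue
    pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
    vgcreate oradata_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # One logical volume striped across all four PVs (64k stripe size)
    lvcreate --type striped -i 4 -I 64k -L 400G -n oradata_lv oradata_vg

    # Same XFS-on-a-mountpoint setup as before, just on the striped LV
    mkfs.xfs /dev/oradata_vg/oradata_lv
    mount /dev/oradata_vg/oradata_lv /u02/oradata

That way the I/O for a single filesystem fans out across four device queues instead of one.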

I know IBM's XIV as well, but my point is a different one, as it relates to the host side only. You get only one disk device queue if you have just one LUN for your database (e.g., as you mentioned in the other mail: "get a 500GB virtual disk and I create a single partition on it, add XFS, and create a mountpoint for it"). This is not a big deal if your database is not I/O intensive, but as soon as you have a high I/O load you will see the effects of increasing I/O latency due to the growing device queue ( e.g. storage wait time vs. host wait time: https://bartsjerps.com/2011/03/04/io-bottleneck-linux/ ), up to the point of a completely full disk device queue ( e.g. wait event "db file async I/O submit": https://fritshoogland.wordpress.com/2018/05/30/oracle-database-wait-event-db-file-async-i-o-submit-timing-bug/ ).
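You can watch this building up on the host side, e.g. (just a sketch; "sdb" stands in for whatever single device backs the database):

    # Extended per-device statistics every 5 seconds; watch the queue
    # column ("aqu-sz" in newer sysstat, "avgqu-sz" in older versions)
    # and the await/latency columns
    iostat -x sdb 5

    # Maximum number of requests the block layer will queue for this device
    cat /sys/block/sdb/queue/nr_requests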

These disk device queue problems can be handled with striped LVs :)
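For example, the resulting stripe layout can be confirmed with something like this (a sketch; "oradata_vg" is a placeholder VG name, and the report field names may vary slightly between LVM versions):

    # Show stripe count, stripe size, and the backing PVs for each LV
    lvs -o +stripes,stripesize,devices oradata_vg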

Interesting! Queuing, then, would seem to depend in part on the SAN setup. If I only get a single RAID group exposed to OL7 via VMware, much of this is moot.

Lots of things to consider, but this would definitely be a large point in favor of using LVM. I get it now. Having been on the XIV for so long has blinded me...lol.

Thanks much!

Rich

--
http://www.freelists.org/webpage/oracle-l
Received on Thu May 09 2019 - 01:27:31 CEST
