RE: SSDs and LUNs

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Thu, 19 Oct 2017 19:11:48 -0400
Message-ID: <0fb901d3492f$a4579490$ed06bdb0$_at_rsiz.com>



… as everyone in the universe (with ACFS filesystems on too old a version) rushes off to change unlimiteds to a meg or two less than the software limit, I have déjà vu about individual file extensions from years ago. The bug was that it let file extensions go beyond the addressable size back then… I think that might have been only 2 or 4 GB, but I really don’t remember.  

Thanks for the tip, Niall.  

On the OP’s issue: JL mentioned software queues (check) and some kit will allow what Niall suggests. It is very difficult to make suggestions without knowing exactly what you have. There can be a lot of layers of software and hardware between a persistent memory device and the CPU. It can be hard to tell whether “the disk seems slow” means you’ve overdriven your persistent media, or the media is intermittently delayed in sending the data back and that delay is reported as slow I/O. OS-level utilities can usually clarify throughput capacities of the hardware, and of course SLOB will show you what your configuration will do with Oracle.  
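
Just to make that concrete, here is a minimal sketch of the kind of OS-level check I mean, assuming a Linux host with fio and sysstat installed; the device name /dev/mapper/mpatha is a placeholder for one of your LUNs, not something from the OP's system:

    # Drive one LUN with cache-bypassing 8k random reads at a queue depth
    # roughly like a busy Oracle workload, and report IOPS and latency.
    # Read-only, but verify the device name before running it anywhere real.
    fio --name=lun-randread --filename=/dev/mapper/mpatha \
        --rw=randread --bs=8k --direct=1 --ioengine=libaio \
        --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting

    # In another session, watch per-device queue size and service times.
    iostat -xm 5

If the fio numbers look healthy but Oracle still reports slow I/O, the problem is more likely in the layers above the device than in the media itself.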

I also wouldn’t bet against spinning rust for things that can be isolated to receive rarely interfering large batch writes. ARCH reading redo (from wherever) and writing it in big chunks to the archived log destination, for example, could save money and possibly even be faster. (It’s the seeks, man; arithmetic is faster than a mechanical head move.) There’s a real question whether your SSD array is optimized to deliver high bandwidth back to the server (‘cause beating the heck out of seeks to find your data is only part of the story) and what happens between the device and the CPU.  

Good luck. And yes, read Kevin’s thing from Connor’s link. (I haven’t checked that particular one, but let’s not party like it’s 1999.)  

Oh, and don’t forget to ask if the hardware vendor has a configuration recommendation that makes sense for Oracle. There are plenty of round wheels in existence, and you don’t want to hand-build one if someone mass-markets one that fits your bike.  

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Niall Litchfield
Sent: Thursday, October 19, 2017 3:59 PM
To: Vee Aar
Cc: Jonathan Lewis; ORACLE-L
Subject: Re: SSDs and LUNs  

What you would normally do is increase the number of paths to the storage and use the mpath device as your ASM disk. You'll get an O/S queue per path that way. It is, of course, possible to generate queueing within the array by being over-enthusiastic.  
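
A minimal sketch of that layout, assuming Linux dm-multipath; the directives are standard, but the policy choice and the ASM disk string below are illustrative, not a recommendation for any particular array:

    # /etc/multipath.conf (fragment)
    defaults {
        user_friendly_names yes
        # put all paths in one group so I/O is spread across every path,
        # giving you one O/S queue per path under a single mpath device
        path_grouping_policy multibus
    }

Each /dev/mapper/mpathN device then fronts several physical paths, and ASM is pointed at the multipath device rather than any single path, e.g. asm_diskstring = '/dev/mapper/mpath*'.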

As a point of reference, we are happy with larger LUNs (4 TB) and more paths for our all-flash-array-based databases. If you do use LUN sizes larger than 2 TB *and* use ACFS, make sure you are on a current version (> 12.1.0.2.5); otherwise, once your ACFS filesystem gets more than 2 TB of data added to it, you'll lose it all :( https://support.oracle.com/epmos/faces/DocumentDisplay?id=2065748.1

On Thu, Oct 19, 2017 at 8:09 PM, Ram Raman <veeeraman_at_gmail.com> wrote:  

"If there is a layer at which you have a single queue to each LUN you will have an I/O bottleneck" is exactly what I am thinking of. Just 2 queues for terabytes of data?

Let me read up on the link Connor emailed.

Rich J: I cannot understand your question since those terms are new to me. I will have to research that.

Mladen: It is going to be in-house, not in the cloud. And 1.5 TB LUNs are common (with SSDs)? How is the I/O in those installations? I understand that SSDs outperform HDDs, but I am wondering, just wondering, whether by providing 2 or 4 LUNs we are losing the advantage that SSDs give us.  

On Thu, Oct 19, 2017 at 1:39 AM, Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk> wrote:

At some layer between Oracle and the silicon the various software components will have some queues. If there is a layer at which you have a single queue to each LUN you will have an I/O bottleneck when you've got lots of Oracle processes trying to read from just 2 (or 4) LUNs.
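
A minimal way to see one of those per-LUN queues on a Linux host (the sysfs paths are standard; the device name sdb is illustrative only):

    # Queue depth the SCSI layer allows for this device
    cat /sys/block/sdb/device/queue_depth

    # Size of the block layer's request queue for the same device
    cat /sys/block/sdb/queue/nr_requests

With only 2 LUNs, every Oracle shadow process funnels into 2 such queues; with 40 LUNs the same I/O spreads across 40 of them.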

I'm not an expert with stuff that far away from the Oracle software, but I would be a little surprised if you got bad performance because you were configured with 40 LUNs, whereas I have seen bad performance from a system where the solid-state SAN had been configured as just 2 LUNs (one for data, one for redo).

Regards
Jonathan Lewis



From: oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> on behalf of Ram Raman <veeeraman_at_gmail.com>
Sent: 19 October 2017 06:49:23
To: ORACLE-L
Subject: SSDs and LUNs

We are moving one of the systems to a VM. The consultants who have been hired to do the implementation are recommending that we create just 2 or 4 'LUNs' for the data diskgroup for the db, which is 3 TB in size and exhibits hybrid I/O. They are promising it is best rather than having 30 or 40 LUNs, since the new disks will all be SSDs. They are claiming that it will perform better than having 40 'LUNs'. I still have the 'old way of thinking' when it comes to I/O. Can someone confirm one way or the other, or point to any paper? Thanks.

Ram.

--


--

Niall Litchfield
Oracle DBA
http://www.orawin.info

--

http://www.freelists.org/webpage/oracle-l

Received on Fri Oct 20 2017 - 01:11:48 CEST
