Re: ASM vs. dNFS?

From: Stefan Koehler <>
Date: Wed, 24 Feb 2016 09:54:48 +0100 (CET)
Message-ID: <>

Hi Chris,
my clients use both ASM with FC and dNFS (or kNFS for older Oracle releases).

I recently did an I/O benchmark in a client environment (vSphere 6, OEL 6.7 as guest, Oracle 12c, NetApp NFS, 10GbE, no jumbo frames, rsize/wsize 64k) with SLOB, and we got close to the maximum of 1 GB/s with an average single-block I/O latency of 4 ms (when it came from disk it was roughly 8-10 ms; the rest was served from the storage cache).
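For reference, a datafile NFS mount in such a setup would typically follow the Oracle-recommended mount options for Linux; a sketch of an /etc/fstab entry (the filer name and paths are placeholders, and the 64k rsize/wsize matches the benchmark above):

```
# Illustrative /etc/fstab entry for an Oracle datafile NFS mount
# (hostname and paths are made up; options per Oracle's Linux NFS recommendations)
filer01:/vol/oradata  /u02/oradata  nfs  rw,bg,hard,nointr,rsize=65536,wsize=65536,tcp,vers=3,timeo=600,actimeo=0  0 0
```

Note that with dNFS the kernel mount is only used as a fallback; the Oracle process opens its own TCP connections to the filer and bypasses these options for data access.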

Let me comment on some of your points.

2a) You can do this with ASM or dNFS via RMAN. I highly recommend that you do not rely on a storage snapshot / backup mechanism only, as you will not notice any physical or logical block corruption until it may be too late. Trust me, I have seen more than enough of such cases.
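To make that concrete: a minimal RMAN run to proactively detect both physical and logical block corruption (the target connection is just an example) looks like this:

```
$ rman target /
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;
# any corrupt blocks found are then listed in V$DATABASE_BLOCK_CORRUPTION
```

This reads and checks every block without writing a backup piece, which is exactly the corruption check a pure storage-snapshot strategy never performs.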

4b) When you are using dNFS in a VMware environment for Oracle, you have no VMDKs for the Oracle files (data, temp, control, redo, arch) at all. You map the NFS share directly into the VM and access it via dNFS inside the VM; you only have VMDKs for the OS (and the Oracle software), for example. In addition, to scale with dNFS you should not do NIC teaming at the VMware level, but rather present each interface into the VM and let dNFS do all the load balancing, etc. (e.g. ARP handling).
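A sketch of the multi-path piece: dNFS reads its paths from an oranfstab file, so with two interfaces presented into the VM it could look like this (server name, addresses, and paths are all made up for illustration):

```
# $ORACLE_HOME/dbs/oranfstab (or /etc/oranfstab) - hypothetical example
server: filer01
local: 192.0.2.10
path: 192.0.2.1
local: 192.0.2.11
path: 192.0.2.2
export: /vol/oradata mount: /u02/oradata
```

Each local/path pair is one network route from the VM to the filer; dNFS load-balances across all of them itself, which is why teaming at the hypervisor level is unnecessary.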

In sum, nowadays there is no reason to demonize NFS for Oracle (with dNFS). It works very well, with good, FC-like performance.

… I am a kid from the FC decade and I am saying this ;-)

Best Regards
Stefan Koehler

Freelance Oracle performance consultant and researcher Homepage:
Twitter: _at_OracleSK  

> "Ruel, Chris" <> hat am 23. Februar 2016 um 16:35 geschrieben:
> This is sort of long so bear with me…
> TL;DR: Who has compared ASM vs. dNFS (not used together) and what did you choose and why?
> I was wondering if anyone on the list has opinions on, or has evaluated, ASM vs. dNFS in a mutually exclusive configuration for
> datafile/arch/ctl/redo storage?
> We have been using ASM on NetApp over fiber channel for many years now with great success. We particularly like the ability to add/remove spinning
> disks or SSD on the fly. We can even "upgrade" our filers with no downtime by adding and removing LUNs and letting ASM do its rebalance thing.
> Recently, some new technology changes have become available for us. These changes are in the form of moving our compute platform to UCS/Flexpod
> environments and the introduction of VMware. Operating on the UCS gives us access to 10gE (currently our infrastructure is primarily 1gE) which
> brings the option of using dNFS to the table.
> Now, I am just starting down the path of comparing the two for pluses and minuses and I do not have all the data yet. Thought I would reach out to
> the list.
> There are a few things that attract us to dNFS:
> 1. Less complication…maybe? In RAC environments, I still think we need ASM for OCR/Voting…someone correct me if I am wrong. But, we will
> not have to manage ASM disk groups like we do now. However, after so many years of using ASM, our team is pretty well versed in it…so, is it really
> an added complication?
> 2. Better ability to use NetApp snapshots:
> a. We can do file level recovery with dNFS which cannot be done with ASM
> b. Right now we have to manage separate disk groups for each database (when we have multiple databases on a node/cluster) if we want
> to use NetApp snaps, since restore is done all-or-nothing at the disk group level. In some cases, we have hit the maximum number of disk groups (63)
> in ASM. I think multiple disk groups like this also result in more overhead in managing and monitoring. Furthermore, more disk groups seem to
> waste more space, as it is sometimes hard to predict storage needs… I think in the end the best approach is to over-allocate storage instead of having
> to manage it constantly.
> 3. Our primary OS platform is OELinux x86-64. Linux has a LUN path limit of 1024. That sounds like a lot, but, with multiple LUNs per disk
> group and multipathing in place, each disk group takes up a minimum of 8 LUNs. This is not to mention LUNs supporting the OS and shares. Since we
> need to have separate disk groups for each database to support snaps, a cluster with a lot of compute power will either hit the LUN path limit or
> the ASM disk group limit before we run out of compute. My understanding is that we do not have this limit problem with dNFS.
> 4. It seems that dNFS will lend itself better to VMware:
> a. Setting up snaps with ASM on VMware led us down the path of using RDMs (which have feature limitations) instead of VMDKs.
> b. VMDKs with dNFS seem like less configuration, which will allow for quicker provisioning on VMware. VMDKs are also the preferred
> approach according to VMware.
> c. Using ASM and LUNs with VMware is still an issue with the 1024 LUN path limit. However, it moves to the physical hosts in the ESX
> cluster...not just the guest OS. Therefore, we are seriously limiting the number of guest OSs on our ESX clusters before we run out of compute,
> because we will hit the LUN path limit first.
> So, that's it in a nutshell. I am sure there is a lot more to it but I appreciate any input people may have.
> Thanks,
> Chris

Received on Wed Feb 24 2016 - 09:54:48 CET
