RE: ASM vs Hardware RAID vs ASM + Hardware RAID

From: Mark W. Farnham <>
Date: Thu, 3 Dec 2009 17:38:43 -0500
Message-ID: <>

I'm a fan of letting the hardware do what it is best at and using ASM for the things it does best.

If you're buying hardware RAID that supports RAID 10 in hardware well, then I claim it is a good idea to use that to present at least two sets of LUNs, constituted of whole disk drives duplexed pairwise, to ASM. By using whole underlying disk drives to assemble the LUNs, you minimize the chance that someone will figure out a way to carve up the drives so that, after you build disk groups in ASM, the disk groups interfere with each other. If there is something in your environment that prevents you from handling LUNs that big (i.e., several pairwise hardware-duplexed disks striped together), then you should be careful to carve it up in the most BORING way possible, and to put all the pieces of the same disk drives into the same disk group in ASM.

Whether you need more than two disk groups is a complex consideration. If you're buying new stuff that is all the same speed, and you don't have any tablespaces in your database containing objects that require service levels that must be maintained in the face of highly varying loads on other tablespaces, then maybe it is best to start with just one disk group for your "flash recovery area" and another for everything else. I would refrain from duplexing at both the hardware and the software layers.

On the issue of giving it a go with just the two disk groups, I wouldn't lose any sleep over it, because you can move stuff, withdraw disks from a disk group, and reconfigure. On the other hand, if you have multiple speeds of disks, or objects whose throughput you must protect against other loads, then I'd take the time to design that into your construction of disk groups. Definitely keep the components of your disk groups in ASM all the same speed. And if you suspect you might have to break out some objects to isolate throughput from other loads, then at least make sure those objects are in different tablespaces.
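To make the "refrain from duplexing at both layers" point concrete: when the hardware RAID already mirrors, the ASM disk groups would typically be created with EXTERNAL REDUNDANCY so ASM only stripes across the LUNs. A minimal sketch; the disk group names, LUN device paths, and ASM disk names below are illustrative, not from the original post:

```sql
-- Hardware RAID 10 handles mirroring, so ASM is told EXTERNAL REDUNDANCY
-- (ASM stripes across the LUNs but does not mirror them itself).
-- Device paths and disk group names are hypothetical examples.
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/mapper/lun01', '/dev/mapper/lun02';

CREATE DISKGROUP fra EXTERNAL REDUNDANCY
  DISK '/dev/mapper/lun03', '/dev/mapper/lun04';

-- Reconfiguring later is straightforward: ASM rebalances online.
-- (DATA_0001 stands in for whatever name ASM assigned the disk.)
ALTER DISKGROUP data ADD DISK '/dev/mapper/lun05';
ALTER DISKGROUP data DROP DISK data_0001;
```

The DROP DISK triggers a rebalance that migrates extents off the departing disk before releasing it, which is what makes the "you can move stuff and reconfigure" advice low-risk.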

If you're talking about a giant high throughput database, then I like to isolate the i/o for online REDO as well. That tends to waste a lot of acreage on modern SANs, but if you will really need the throughput, then you're not counting size but rather spindles.
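If you do isolate online REDO, that typically means a disk group on its own spindles with the log groups pointed at it. A hypothetical sketch (disk group name, device paths, group numbers, and sizes are all made up for illustration):

```sql
-- Dedicated spindles for online redo, so sequential log writes
-- aren't queued behind random datafile I/O on the same disks.
CREATE DISKGROUP redo EXTERNAL REDUNDANCY
  DISK '/dev/mapper/lun06', '/dev/mapper/lun07';

-- Add new online log groups in the dedicated disk group,
-- then drop the old groups once they are inactive.
ALTER DATABASE ADD LOGFILE GROUP 4 ('+REDO') SIZE 512M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('+REDO') SIZE 512M;
```

This is where the "wasted acreage" comes from: the redo LUNs are sized for spindle count and write latency, not capacity.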

And if you're really high throughput, then you might have to balance the load "traywise" on the SAN and on the HBAs or InfiniBand bus. (Well, I don't think I've seen anyone overload a properly configured InfiniBand bus, but I've seen HBAs get differentially overloaded.)

I'm confident others will disagree with what I think. And some other threads here have dealt with NetApp.



-----Original Message-----

From: [] On Behalf Of Ray Feighery
Sent: Thursday, December 03, 2009 4:54 PM
To: Oracle-L List
Subject: ASM vs Hardware RAID vs ASM + Hardware RAID


Red Hat 4.0 x86_64
HP DL 585 G2
Oracle EE

Looking for experiences of using ASM vs Hardware RAID. Currently have a database running on internal filesystem disks (no ASM, just standard ext3). The database is outgrowing the internal disks, so the next step is to attach an external array.

Can anyone recommend a good strategy?
Options under consideration:

1) Array with raw disks attached and ASM
2) Array with hardware raid and filesystem (no ASM)
3) Some combination of hardware raid and ASM

Also we're looking at the HP range of storage (MSA and EVA). Any recommendations or warnings about those would be good too. The storage will be dedicated to the database and flash recovery area.

Any advice is appreciated.



-- Received on Thu Dec 03 2009 - 16:38:43 CST