Oracle FAQ Your Portal to the Oracle Knowledge Grid


RE: Storage array advice anyone?

From: Hameed, Amir <>
Date: Wed, 15 Dec 2004 14:31:24 -0500
Message-ID: <>

While the discussion is going on about these heavy-duty SAN boxes, I would also like to bounce around a question on disk layout in the SAN. We have recently acquired an EMC DMX 3000 box. Our current production system is running on an EMC 8830, four-way striped, which is going off lease in a few months, so we will be migrating our mission-critical production system to the newly arrived DMX 3000 box soon. I have gone through a white paper from James Morle, "Sane SAN", which basically suggests that for an optimal SAN disk layout you should assume there is no cache available, stripe the disks optimally, and treat the cache as an added benefit.

In our existing configuration on the 8830 frame, each Meta volume is created from four hypers and is 20 GB in size. The Metas are then presented as volumes to the server, and each mount point is based on a 20 GB volume. We are not double-striping the volumes at the host level. The drives in the 8830 frame are 73 GB in size and do an average of ~120 reads/second and ~110 writes/second, so the I/O bandwidth of a Meta would be ~480 r/s (4 x 120) and ~440 w/s (4 x 110).
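The back-of-the-envelope math above can be sketched as a tiny helper (a minimal illustration only; the function name is made up and nothing here comes from EMC tooling, and it assumes I/O spreads evenly across stripe members with no credit for array cache, per the "Sane SAN" advice):

```python
def meta_iops(stripe_width, disk_reads_per_s, disk_writes_per_s):
    """Aggregate (reads/s, writes/s) a striped Meta volume can sustain,
    assuming even distribution across members and no cache benefit."""
    return (stripe_width * disk_reads_per_s,
            stripe_width * disk_writes_per_s)

# 8830 frame: 4-way Meta built from 73 GB drives (~120 r/s, ~110 w/s each)
print(meta_iops(4, 120, 110))  # -> (480, 440)
```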

Having said that, I have done some basic calculations on the IOs that Oracle is issuing (on the 8830 frame) from the v$filestat and v$tempstat views, aggregated on per mount point basis. From what I have seen is that on some mount points Oracle is doing up to 800 reads per second. Based upon the fact that on a highly available system, it is not always possible to move around hot files without incurring a downtime, I am exploring the idea of striping the new DMX frame 8-ways. This DMX frame has 146 GB drives and based upon these drives specifications, they can do ~ 130 r/s and ~ 120 w/s. So, an 8-way striped Meta volume would be able to do 1040 r/s and 960 w/s. I was in the HotSos symposium this Summer and I asked Steve Adams this question and he also suggested going with 8-way striping. Is there anyone in this DL who is using a DMX frame and striped 8-ways ?
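The sizing argument above can be sketched as follows (an illustrative calculation only, using the figures from the message: an observed peak of ~800 reads/s on a hot mount point and DMX 146 GB drives at ~130 r/s each; the function name is made up):

```python
import math

def min_stripe_width(peak_reads_per_s, disk_reads_per_s):
    """Smallest number of stripe members whose aggregate read IOPS
    covers an observed per-mount-point peak (no cache assumed)."""
    return math.ceil(peak_reads_per_s / disk_reads_per_s)

print(min_stripe_width(800, 130))  # -> 7
```

Seven drives would just cover the observed peak, which rounds up naturally to the 8-way stripe proposed here and leaves a little headroom (8 x 130 = 1040 r/s).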

Does anyone have any advice on 4-way versus 8-way striping? EDS is our service provider, and they are not buying the idea of 8-way striping because they and EMC think that the cache on the frame can resolve all the issues. That is not true, because the cache has to de-stage at some point; I have seen high I/O waits on the 8830 frame from sar, and I don't believe that cache is nirvana.

Thank you

-----Original Message-----

From: [] On Behalf Of Stephen Lee
Sent: Wednesday, December 15, 2004 10:33 AM
To:
Subject: RE: Storage array advice anyone?

I appreciate the discussion on the topic. I think additional considerations on this particular array (Hitachi TagmaStore 9990) are that the "normal" configuration (according to Hitachi) puts the disks in groups of 8; each group is a stripe with parity, and the parity cycles around all the drives. When a bad block occurs, the block is NOT replaced by a spare block on the drive; instead, the whole drive is failed and replaced by a hot spare, and a phone-home occurs. Which -- I guess -- is a fairly aggressive drive-replacement scheme.

There appears to be agreement that the best performance for most cases (note: most cases) comes from striping everything across all drives. There does appear to be some remaining discussion, from a fault-tolerance standpoint, about whether to go strictly with stripe + parity and trust that Hitachi really has worked out the fault-tolerance issues, or to assume that Hitachi's claims are just a bunch of sales hype and insist on stripe + mirror. Healthy skepticism is useful, but one does not want to base that skepticism on outdated ideas. That is what a lot of this comes down to: which ideas and rules are outdated -- given the capabilities of this new gee-whiz hardware -- and which still hold.

The astute reader will note that stripe + parity is, more or less, RAID 5-ish. But yet again, we have a manufacturer who claims that in their case the I/O speed penalty is no longer an issue. In the case of this array, there appears to be some real-world experience to support that claim. Any comments from those who know otherwise are most welcome. Again, it's another one of those "have some of the ideas about this become outdated?" sorts of things.


-- Received on Wed Dec 15 2004 - 13:34:45 CST
