

Re: Oracle Disk Configuration question.

From: Michael Austin <>
Date: Wed, 02 Jun 2004 17:54:23 GMT
Message-ID: <jvovc.2120$>

Sybrand Bakker wrote:

> bc9am <> wrote in message news:<ph7vc.248$>...

>>I'm trying to understand what the best disk configuration is for an
>>Oracle server I am trying to build. Currently the server is spec'ed to
>>have 4 disks. In my mind that gives me two choices. Either two sets of
>>RAID 1, or a big RAID 5 array. It's a high availability server so I
>>can't afford downtime due to a dead disk.
>>After a bit of research I have understood that you are supposed to put
>>all your core database (table?) files (and any random access stuff [inc
>>OS files?]) on one disk, and sequential stuff on their own disks (e.g.
>>logs). I obviously don't have enough disks to put all sequential files
>>on their own disks. So, with this in mind I guess logic suggests putting
>>them all on one disk. However this seems a bit silly as then its just
>>random access (?) - so I might as well have a RAID 5 array for the whole lot.
>>Obviously with this particular configuration cost is an issue, so
>>performance isn't the number one priority.
>>Can anyone comment on any of the above or point me in the right
>>direction of a good resource talking about the options available (I seem
>>to just come across resources that are either just wrong, or only say
>>the 100% right way to do it with no flexibility for cost saving - I
>>can't afford a 10+ physical disk solution).
>>Are there any generic examples of how to partition for Oracle in a Linux environment?
> The disadvantages of using RAID-5 for online redolog files and
> controlfiles have been explained here many, many, many, many times.
> RAID-5 has a write penalty associated to it by design.
> If you want to read about this just visit and
> you'll have some recognised Oracle specialists explain why you should
> NOT use RAID-5.
> You may also want to read up on the SAME methodology. (SAME is an
> acronym for Stripe And Mirror Everything)
> Sybrand Bakker
> Senior Oracle DBA


I understand the "don't use RAID5" argument, but in my experience, one of the things you must know about your environment when deciding these sorts of things is the transaction mix. I have been in an environment that was 75/25% read/write, so the read benefits of RAID5 were well suited to the application. Not to mention the 150 or so databases, ranging in size from 100GB to 1.4TB, on a SAN of ~240TB. All database drives were RAID5; all root, binary and log drives were mirrored at the controller level (Compaq/HP StorageWorks, EMA and EVA style, plus the new HP XP-series, a rebranded Hitachi).
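To put some numbers on why the transaction mix matters, here is a back-of-the-envelope sketch in Python. The penalty factors are the classic textbook values (4 back-end I/Os per small random write on RAID5, 2 per write on a mirror) - assumptions for illustration, not measurements from that SAN:

```python
# Estimate back-end disk I/O load for a given read/write mix.
# Textbook small-write penalties (assumed, not measured):
#   RAID5:    read data + read parity + write data + write parity = 4 I/Os
#   RAID1/10: write both mirror copies                            = 2 I/Os
# Reads cost 1 back-end I/O either way.

def backend_ios(front_end_iops, read_fraction, write_penalty):
    reads = front_end_iops * read_fraction
    writes = front_end_iops * (1 - read_fraction)
    return reads + writes * write_penalty

workload_iops = 1000
read_fraction = 0.75  # the 75/25 read/write mix described above

raid5 = backend_ios(workload_iops, read_fraction, write_penalty=4)
raid10 = backend_ios(workload_iops, read_fraction, write_penalty=2)

print(f"RAID5 back-end I/Os:   {raid5:.0f}")   # 750 + 250*4 = 1750
print(f"RAID1/10 back-end I/Os: {raid10:.0f}")  # 750 + 250*2 = 1250
```

At 75/25 the RAID5 array does only about 40% more back-end I/O than the mirrors; run the same numbers at 50/50 and the gap widens to 2500 vs 1500, which is why the mix has to drive the decision.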

To be quite blunt here, using a PC with 4 drives as the server does not a high-availability server make. Really, his only options are one RAID5 array (and I seriously doubt RAID5 will impact *his* performance) or two mirrored pairs. And that only if his system supports hot-swap... anything less will suffer downtime.

The RAID5 argument really only holds water in high-transaction-rate scenarios... deploying on a PC with 4 drives does not give me the idea they are dealing with anything close to a high-volume system. Now, if I were building a multi-hundred-GB, >1000 TPS system, I would consider the SAME methodology. But then again, you also must consider your controllers and drives. With HP EVA or Hitachi technology, 146GB-309GB fibre drives and many GB of cache, many of the concerns about RAID5 dissipate into oblivion.

95% of overall performance comes from proper coding (application and SQL); the other 5% comes from hardware/network/disk technology.

I have moved applications from a pegged 4-CPU 800MHz Alpha to a 16-way 1.2GHz Alpha (with only 3 CPUs pegged) with absolutely no user-perceived performance improvement. We moved the database from older 72GB 7K-rpm drives with 32K of cache to 146GB 10K-rpm drives with 2GB of cache - again, no perceived improvement. We fixed the scripts and now the system "flies".


BTW: the EVA is a virtual array where you allocate space, not spindles. A 100GB LUN would be striped/RAIDed across an entire disk group -- and you do not control where it puts it. In the case above, that would be 120 spindles. If you ask whether this is fast... we restored a 750+GB database to a second EVA using RMAN in under 3.5 hours - recovered and operational - while writing backups from other systems to the source EVA at the same time.
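For reference, that restore figure works out to a sustained rate you can check on the back of an envelope:

```python
# 750 GB restored in 3.5 hours -> sustained restore throughput.
# (Treats "750+GB" as exactly 750 GB, so this is a lower bound.)
gb, hours = 750, 3.5
mb_per_s = gb * 1024 / (hours * 3600)
print(f"~{mb_per_s:.0f} MB/s sustained")  # ~61 MB/s
```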
