RAID Level (HP-UX, Oracle 11g)
RAID Level [message #476833] Sun, 26 September 2010 10:36
Messages: 222
Registered: March 2007
Senior Member
My storage admin created RAID10 and RAID5 volumes for the database. I would like to know which RAID level is best for keeping the REDO logs. Can someone tell me what's best for REDO?
Re: RAID Level [message #476835 is a reply to message #476833] Sun, 26 September 2010 10:40
Michel Cadot
Messages: 59427
Registered: March 2007
Location: Nanterre, France, http://...
Senior Member
Account Moderator
Better ask on a HP forum for this.
Oracle has no way to know how disks are configured.


[Updated on: Sun, 26 September 2010 10:40]


Re: RAID Level [message #476842 is a reply to message #476835] Sun, 26 September 2010 12:43
Messages: 22911
Registered: January 2009
Senior Member
>Can someone tell me what's best for REDO?
Not RAID-5, which has high WRITE overhead.
RAID-5 must write a new XOR parity block for every data block written.
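A minimal sketch of the XOR arithmetic behind that write penalty (illustrative Python, not tied to any real controller): a small write to one data block forces a read-modify-write of the parity block, so one logical write costs several physical I/Os.

```python
# Toy illustration of RAID-5 parity maintenance (not a real driver).
# Parity is the XOR of the data blocks in a stripe. Updating one data
# block means reading the old data and old parity, then writing the new
# data and new parity: four physical I/Os for one logical write.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# A 3-disk stripe: two data blocks and one parity block.
data0 = bytes([0b1010] * 4)
data1 = bytes([0b0110] * 4)
parity = xor_blocks(data0, data1)

# Small random write: only data0 changes.
new_data0 = bytes([0b1111] * 4)
# new_parity = old_parity XOR old_data0 XOR new_data0
# (read old data0, read old parity, write new data0, write new parity)
new_parity = xor_blocks(xor_blocks(parity, data0), new_data0)

# The updated parity still matches a full recomputation of the stripe.
assert new_parity == xor_blocks(new_data0, data1)
```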
Re: RAID Level [message #477239 is a reply to message #476842] Wed, 29 September 2010 12:01
Messages: 147
Registered: October 2009
Location: Dallas, TX
Senior Member
This is a tricky question, and it requires a LOT more information than you have supplied.

Raid 1+0 (sometimes referred to as Raid 10, though I have never seen that as a formally distinct level) is usually the fastest kind of disk configuration with regard to writes. This is because you are mirroring (Raid 1) and striping (Raid 0) the disk I/O across multiple disks. Raid 1+0 combines good redundancy with high speed, but comes at the cost of 100% overhead in terms of disk storage. So, if you have six disks in your SAN, you effectively only get three disks' worth of usable storage, because everything is mirrored from the first three disks to the second three.

Raid 5 has read times usually identical to Raid 0 because Raid 5 stripes information across disks. However, read times for Raid 5 can diminish if you lose a disk, because the Raid controller must then reconstruct the data that was on the missing disk from the information on the remaining disks. The benefit of Raid 5 is that you only lose about 25-33% of your disk storage to redundancy, so Raid 5 gives you the biggest bang for the buck in terms of available storage.

Raid 5 takes a LONG time (relative to Raid 0 or Raid 1+0) for writes. Most Raid 5 implementations also suffer from longer write times when doing random updates inside a file instead of large streaming write operations. This is because the Raid 5 parity must be recalculated across the stripe for the little byte of data you just changed, and this physically results in a lot of writes to the other drives in the Raid 5 disk group to support that one write.
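To put numbers on the capacity trade-off above, here is a trivial back-of-the-envelope calculation (Python, with made-up disk counts and sizes):

```python
# Usable-capacity arithmetic for the two RAID levels discussed above.
# Disk count and size are example values, not from the original post.

def raid10_usable(n_disks: int, disk_gb: float) -> float:
    """Mirroring halves usable capacity: 100% storage overhead."""
    return (n_disks // 2) * disk_gb

def raid5_usable(n_disks: int, disk_gb: float) -> float:
    """One disk's worth of capacity goes to parity."""
    return (n_disks - 1) * disk_gb

print(raid10_usable(6, 300.0))  # -> 900.0 (three of six disks usable)
print(raid5_usable(6, 300.0))   # -> 1500.0 (five of six disks usable)
# RAID-5 parity overhead is 1/n: 25% with 4 disks, ~33% with 3.
```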

For databases, Raid 5 can result in very bad performance - but there are a LOT of things to consider. In just about every SAN or Raid controller I have seen, there is a dedicated amount of RAM used for disk write-cache as well as read-cache. When the OS sends a write to the Raid device and write-cache is enabled, the write is marked as 'done' to the OS once the data has been accepted by the controller and written into the write-cache memory. Background processes on the controller then pick up those writes and flush them to the physical disks as they can. This memory is usually redundant and battery-backed so that it can survive a power failure.

If you have an application that writes or changes data infrequently, then Raid 5 might be OK for you. If you are doing a huge data warehouse where you are constantly updating fact tables, then Raid 5 will kill your performance.

(One of the most interesting performance tuning engagements I went on was a case where a company was doing a huge data refresh, and they observed that for the first twenty minutes or so, performance to the SAN was excellent, then dropped like a rock. It was later determined that the write-cache was being saturated by the refresh, and once that happened, the SAN was performing at physical disk speeds. They had implemented Raid 5 for their Oracle disk groups, and Raid 5 performed so badly that a refresh that should have taken three hours was taking 15 or more. They tried increasing the amount of write-cache, which only let the refresh run at full steam until about 35 minutes in; after that, the controller made the OS wait until pending writes had reached the physical disks, freeing memory in the write-cache to accept another write request. Moving to Raid 1+0 allowed their refreshes to finish in approximately 3.5 hours.)
If you have sufficient write-cache and your application doesn't churn through that much data, Raid 5 can work - but know that if you have poor performance from your disk storage system, Raid 5 is usually the culprit.
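The cache-saturation behavior described in that engagement can be sketched with a back-of-the-envelope model. All numbers below are made up for illustration, not taken from the thread: while the cache has free space the host sees cache speed, but once writes arrive faster than the controller can drain them to physical disk, the cache fills and throughput collapses to raw disk speed.

```python
# Back-of-the-envelope write-cache saturation model (hypothetical
# numbers, not from the engagement described above).

def minutes_until_saturation(cache_gb: float,
                             ingest_mb_s: float,
                             drain_mb_s: float) -> float:
    """Minutes before a full write-cache forces disk-speed writes."""
    if ingest_mb_s <= drain_mb_s:
        return float("inf")  # drain keeps up; the cache never fills
    net_fill_mb_s = ingest_mb_s - drain_mb_s  # net MB/s accumulating
    return cache_gb * 1024 / net_fill_mb_s / 60

# Hypothetical RAID-5 group: parity overhead keeps the drain rate low.
print(round(minutes_until_saturation(16, 200, 186), 1))  # -> 19.5
# Doubling the cache only doubles the grace period; the slow drain
# rate (the RAID-5 write penalty) remains the real bottleneck.
print(round(minutes_until_saturation(32, 200, 186), 1))  # -> 39.0
```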

www.baarf.com (yes - that is a real site - Battle Against Any Raid Five)
