
Re: HP autoRaid Disk config?

From: Gaja Krishna Vaidyanatha <>
Date: Fri, 15 Sep 2000 06:08:19 -0700 (PDT)
Message-Id: <>


The short answer to your question is - no, if we are talking about just one array. There is no inherent benefit that I have noticed in creating multiple LUNs and logical volumes on one AutoRAID array, as all volumes utilize all disks (this is part of the Auto Magic!!!). If you have multiple arrays, then store DATA and INDX on different arrays. The automatic, ongoing conversion of blocks in a volume from RAID 5 to RAID 0+1 and vice versa (based on access patterns) is both a boon and a bane. Here is an excerpt about that "feature" from a paper that I will be presenting at OOW 2000 - "Implementing RAID on Oracle Systems".

Auto RAID:

With Auto RAID (implemented by HP), the controller, along with the intelligence built into the I-O sub-system, dynamically modifies the level of RAID on a given “disk block” to either RAID 0+1 or RAID 5, depending on the recent history of I-O requests on that block. That recent history is maintained using the concept of a “working set” (a set of disk blocks). For obvious reasons, there is one working set each for reads and writes, and blocks keep migrating back and forth between the two sets, based on the type of activity. A disk block in this context is 64K in size.

Said in a different way, a RAID 5 block can be dynamically converted into a RAID 0+1 block if the “intelligence” determines and predicts that the block will primarily be accessed for writes. The controller can also perform the converse operation, converting a RAID 0+1 block into a RAID 5 block, if it determines and predicts that the block will primarily be accessed for reads. To support this configuration, all the drives in the array are used for all RAID volumes that are configured on that array. This means that physical drive-independence across volumes cannot be achieved. While this implementation of RAID relieves the System Administrator and the Oracle Database Administrator of a significant amount of work and maintenance, care should be exercised when implementing it on hybrid Oracle systems that can be “write intensive” and “read intensive” at the same time.

If your system suddenly becomes seriously “write intensive” after a period of “read intensive” activity, the conversion process may not occur immediately, and your blocks may get stuck in RAID 5 (from the read phase) even though you are now in the write phase. This happens when the system load is high and the conversion process defers to a quieter period. This behavior may be prevalent on busy hybrid Oracle systems.
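The migrate-on-demand, defer-under-load behavior described above can be sketched as a toy model. Everything here except the 64K block size is an assumption for illustration - the class name, the majority-vote policy, the history depth, and the "busy" deferral flag are hypothetical stand-ins, not HP's actual algorithm:

```python
# Hypothetical sketch of AutoRAID-style per-block level selection.
# The 64K block size is from the text; the working-set depth and the
# majority-vote policy are illustrative assumptions, not HP's design.
from collections import deque

BLOCK_SIZE = 64 * 1024  # each managed "disk block" is 64K


class AutoRaidBlock:
    def __init__(self, history=16):
        self.level = "RAID5"                 # cold blocks start space-efficient
        self.recent = deque(maxlen=history)  # recent I-O types on this block

    def record_io(self, kind, busy=False):
        """kind is 'read' or 'write'; busy models a heavily loaded array."""
        self.recent.append(kind)
        writes = self.recent.count("write")
        target = "RAID0+1" if writes > len(self.recent) // 2 else "RAID5"
        # Under heavy load the controller defers migration to a quieter
        # period, so a block can sit at the "wrong" level for a while.
        if target != self.level and not busy:
            self.level = target


blk = AutoRaidBlock()
for _ in range(12):
    blk.record_io("read")
print(blk.level)                       # read-mostly block sits in RAID5
for _ in range(12):
    blk.record_io("write", busy=True)  # busy system: conversion deferred
print(blk.level)                       # still RAID5 despite the writes
blk.record_io("write", busy=False)     # quiet period: migration happens
print(blk.level)
```

The middle print is the problem case from the paragraph above: the write working set already dominates, but because the array is busy the block is still served as RAID 5, paying the parity cost on every write until the load drops.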

If you implement this RAID technology on heavy-duty Oracle systems, be prepared for unpredictable changes in I-O performance, unless your system is normally “write intensive” or normally “read intensive”, with only occasional changes such as nightly batch jobs. So, when implementing Auto RAID, every effort should be made to segregate write-intensive and read-intensive components onto separate arrays.

As mentioned in the previous section, configuring and using Hewlett Packard’s Auto RAID requires care, as the automatic disk block conversion process constantly converts disk blocks from RAID 0+1 segments to RAID 5 and vice versa, based on its determination and prediction of I-O on those blocks. The key exposure areas for Oracle are the Rollback Segment (RBS) and Temporary (TEMP) tablespaces, which are adversely affected by the conversion process.

Since the I-O patterns for these tablespaces often alternate between extensive reads and writes, performance may vary dramatically. The alternation of intense writes and intense reads can cause serious system performance degradation, because the RAID controller may attempt to compensate by changing the RAID type too frequently, and yet not fast enough. It has been observed that the conversion often does not get done “in time” to support the future nature of the operations requested.

The problem mentioned here can occur on all other components of the database as well, if there are periods of lull followed by varying operations (reads followed by writes or vice versa) that cause the disk blocks to be converted back and forth.

Further, as mentioned before, the lack of control over drive allocation for the various volumes can cause serious disk contention problems if the application performs significant index-range scans followed by table lookups.

In a benchmark, it was observed that if the RBS tablespace was not written to for a period of time, but was read from (as part of building a read-consistent image for a long-running query), the disk blocks housing the rollback segments of the database were converted to RAID 5. Then, when a slew of write activity was launched at the database, the disk blocks remained RAID 5, and this degraded write performance significantly, as parity had to be calculated and written for those blocks. Later, when the I-O sub-system got a breather, these blocks were re-converted to RAID 0+1. A similar phenomenon was also noticed on the TEMP tablespace.
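The write penalty behind that observation is the classic RAID 5 small-write cost: each small write to a RAID 5 block takes four disk I-Os (read old data, read old parity, write new data, write new parity), versus two mirrored writes under RAID 0+1. This is standard RAID arithmetic, not a measurement from the benchmark above; the burst size below is purely illustrative:

```python
# Back-of-the-envelope small-write cost: textbook RAID arithmetic,
# not figures from the benchmark described in the text.
def small_write_ios(level, writes):
    """Disk I-Os needed to service `writes` single-block writes."""
    if level == "RAID5":
        # read old data + read old parity + write data + write parity
        return writes * 4
    if level == "RAID0+1":
        # write the block and its mirror copy
        return writes * 2
    raise ValueError(f"unknown RAID level: {level}")


burst = 10_000  # a hypothetical slew of single-block writes to RBS
r5 = small_write_ios("RAID5", burst)
r01 = small_write_ios("RAID0+1", burst)
print(r5, r01, r5 / r01)  # 40000 20000 2.0
```

So a write burst landing on blocks stuck in RAID 5 generates roughly twice the back-end I-O it would under RAID 0+1, which is why the deferred conversion hurts until the controller catches up.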

Hope that helps,


Gaja Krishna Vaidyanatha
Director, Storage Management Products, Quest Software Inc.
Office: (972)-304-1170, E-mail :

Author - Oracle Tuning 101 by Osborne McGraw-Hill
"Opinions and views expressed are my own and not of Quest"

Received on Fri Sep 15 2000 - 08:08:19 CDT
