Re: RAID and Oracle

From: George Dau <gedau_at_mim.com.au>
Date: 1996/03/17
Message-ID: <314c748d.77427504_at_158.54.105.102>#1/1


steve.miles_at_ci.seattle.wa.us wrote:

]We are planning to set up several new disks for our Oracle database. I'm a little confused about the 
]issue of RAID. Our database will be used for both data entry applications and read-only queries. 
]Obviously the nice thing about RAID 5 is the potential for hot-swapping disks (and little down time) in 
]case of media failure. However, I hear that there is a performance hit when implementing RAID 5. 
]Does anyone have numbers/sources to back this up? Does anyone have a general 
]recommendation for the use of RAID with Oracle? Is anyone using RAID 5 with Oracle now? Can 
]you tell me what kind of performance you get? Are there other issues/options that need to be 
]considered?
]
]Thanks.
]
]Steve Miles
]Seattle Water
]

Yes, have a look at the following stats from our Sun 1000e running a Sun 102 SSA. After some experimentation I turned off the Prestoserve write cache, which brought the write times down from 200ms to 60-odd. Anyway, the figures. First, a df -k to give the file system sizes:

/dev/vx/dsk/rootdg/u02 7099672 5081602 1308110 80% /u02
/dev/vx/dsk/rootdg/u03 1183025 877390 187335 82% /u03
/dev/vx/dsk/rootdg/u04 10316366 7673670 1611066 83% /u04
/dev/vx/dsk/rootdg/u05 10473253 5977267 3448666 63% /u05
/dev/vx/dsk/rootdg/u08 1183025 378267 686458 36% /u08
/dev/vx/dsk/rootdg/u09 100098 33076 57022 37% /u09
/dev/vx/dsk/rootdg/u10 780725 557372 145283 79% /u10
/dev/vx/dsk/rootdg/u06 2642029 342953 2034876 14% /u06
/dev/vx/dsk/rootdg/u11 473751 370865 55516 87% /u11
/dev/vx/dsk/rootdg/u07 2958317 1950683 711804 73% /u07
/dev/vx/dsk/rootdg/u01 1952350 9 1757111 0% /u12

This is a vxstat to give performance details (about a 20-hour sample):

                        OPERATIONS           BLOCKS        AVG TIME(ms)
TYP NAME              READ     WRITE      READ     WRITE   READ  WRITE 
vol swap1            47432     10891    379456    467520    4.2   22.0 
vol swap2            47730     11261    381840    466512    4.4   20.1 
vol swap3            47029     11284    376232    465864    4.2   22.9 
vol swap4            46896     11081    375168    468848    4.2   21.7 
vol swap5            46846     11047    374768    466416    4.3   23.2 
vol u02              59613    120079   3749238   4951132   13.9   63.4 
vol u03               2523      5399     56274     52488   14.9    6.2 
vol u04              26801     38162    642224    350244   13.8    7.1 
vol u05             465296    883293   7652282   7193956   11.8   34.3 
vol u06              22184     33332   2187184   1223360   15.7   54.6 
vol u07              99627     94665   9539000   8959726   13.7   15.7 
vol u08              25721     21507   2287550   1846034   14.1   14.3 
vol u09             171634    734645   3778286   8662468   11.8    8.9 
vol u10             248751    507849   3334944   4083540   10.0    9.5 
vol u11                 12      7085        58    745768   22.5   21.5 
vol u12                 15     15511       155    248067   10.0    8.9 

The swap volumes are single column mirrors.
/u02, /u05 and /u06 are RAID5. Notice the slower writes.
/u11 is a single column concat volume (just a disk).
/u03, /u04 and /u07 are striped over 6 disks.
/u09 and /u10 are striped mirrors.
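If you want to boil vxstat output like the above down to write:read ratios and average transfer sizes per volume, a quick throwaway script does it. This is my own illustrative Python, not anything vxstat provides; it assumes the standard column order (type, name, read/write ops, read/write blocks, average times) and 512-byte blocks.

```python
# Rough parser for one "vol" line of vxstat output, e.g.:
#   vol u02   59613  120079  3749238  4951132  13.9  63.4
# Returns (name, write:read ratio, avg blocks per read,
#          avg blocks per write, avg read ms, avg write ms).
# Purely illustrative -- not a vxstat feature.
def summarise(line):
    f = line.split()
    name = f[1]
    reads, writes = int(f[2]), int(f[3])
    bread, bwrit = int(f[4]), int(f[5])
    rms, wms = float(f[6]), float(f[7])
    wr_ratio = writes / reads if reads else float("inf")
    avg_read = bread / reads if reads else 0.0
    avg_write = bwrit / writes if writes else 0.0
    return (name, wr_ratio, avg_read, avg_write, rms, wms)

line = "vol u02              59613    120079   3749238   4951132   13.9   63.4"
print(summarise(line))  # u02 does roughly two writes per read
```

Run it over the table above and the RAID5 volumes (/u02, /u05, /u06) stand out immediately on the last column.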

Even though our write:read ratio is high, we still get good performance from the RAID5 volumes. The 63.4ms average write is fast enough for us, given what the extra disks to mirror a 10Gig file system would cost.
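The slower RAID5 writes are the classic read-modify-write penalty: a small write costs four disk I/Os (read old data, read old parity, write new data, write new parity), because the new parity is old_parity XOR old_data XOR new_data. A toy sketch of the parity arithmetic, purely illustrative and nothing to do with how VxVM implements it internally:

```python
# Toy RAID5 small-write parity update.
# New parity = old_parity XOR old_data XOR new_data, so one logical
# write costs two reads plus two writes on the array -- hence the
# higher write latencies on the RAID5 volumes.
def update_parity(old_parity: bytes, old_data: bytes, new_data: bytes) -> bytes:
    return bytes(p ^ od ^ nd
                 for p, od, nd in zip(old_parity, old_data, new_data))

stripe = [b"\x0f\x0f", b"\xf0\xf0"]              # two data blocks
parity = bytes(a ^ b for a, b in zip(*stripe))   # initial parity
new_block = b"\x00\xff"
parity = update_parity(parity, stripe[0], new_block)
stripe[0] = new_block
# Invariant: parity is still the XOR of all data blocks,
# which is what lets the array rebuild a failed disk.
assert parity == bytes(a ^ b for a, b in zip(*stripe))
```

Mirrored volumes pay only two writes per logical write, which is why /u09 and /u10 show much lower write times than /u02 or /u05.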

We also had a disk fail yesterday. All I noticed was a mail from the Veritas software. The hot standby did not cut in as I would have expected, but I was able to move the bad subdisks onto the spare without interrupting the operation of the volumes.

I will be checking my config to see why the hot spare didn't cut in, but the RAID redundancy definitely worked.

Regards, George Dau
gedau_at_mim.com.au