From: Neil Chandler <>
Date: Tue, 22 Nov 2011 21:05:13 +0000
Message-ID: <BAY153-W414EC6E3064CBB6C2733DC85C80_at_phx.gbl>


I would disagree about drive failure being a boundary condition - there are only two types of disk drive: drives that have failed, and drives that are going to fail. :o)

Where the more recent SANs get around the rebuild issue is by using meta-LUNs, or Thin Provisioning, which really goes to town with the striping and allocation of blocks of extents from as many spindles as you put into the SAN storage pool. It's SAME gone mad. However, it also minimises the impact of a single failed drive whilst it rebuilds to a hot spare, as only a low percentage of the actual LUN is on that drive.
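To put a rough number on that rebuild point, here is a minimal sketch (the function name and spindle counts are illustrative, not from the post): with extents striped evenly across a pool, each spindle holds roughly 1/N of any given LUN, so the wider the pool, the less of the LUN sits on the failed drive.

```python
# Sketch: fraction of a wide-striped LUN affected by one failed drive.
# Assumes extents are distributed evenly (round-robin) across the pool's
# spindles; the spindle counts below are illustrative assumptions.
def fraction_on_one_drive(num_spindles: int) -> float:
    """With even striping, each spindle holds ~1/num_spindles of the LUN."""
    return 1.0 / num_spindles

# A LUN carved from a 4-disk RAID group vs. a 120-spindle storage pool:
print(f"{fraction_on_one_drive(4):.1%}")    # 25.0% of the LUN degraded
print(f"{fraction_on_one_drive(120):.2%}")  # 0.83% of the LUN degraded
```

The same width that dilutes the rebuild impact is what causes the noisy-neighbour problem below: every LUN in the pool touches every spindle.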

Where this really impacts you is when it's truly shared infrastructure. I have had a lot of trouble recently on some big XP 20000s getting low-utilisation LUNs that require very low response times to respond the way I need them to (less than 5ms, so some cache hits are needed), as they get swamped by other huge LUNs for other databases in the same Storage Pool.
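The "some cache hit needed" arithmetic can be sketched as a weighted average of cache and disk service times; the latency figures here are illustrative assumptions (cache ~0.5ms, spinning disk ~8ms), not measurements from the arrays above.

```python
# Sketch: cache hit ratio needed to keep average read latency under a target.
# avg_ms = h * cache_ms + (1 - h) * disk_ms  =>  h = (disk - target) / (disk - cache)
# The 0.5ms cache and 8ms disk service times are illustrative assumptions.
def required_hit_rate(target_ms: float, cache_ms: float = 0.5,
                      disk_ms: float = 8.0) -> float:
    """Minimum cache hit ratio so the blended latency meets target_ms."""
    return (disk_ms - target_ms) / (disk_ms - cache_ms)

print(f"{required_hit_rate(5.0):.0%}")  # 40% of reads must hit cache
```

The catch in a shared pool is that the hit ratio isn't yours to control: huge LUNs from other databases evict your blocks, and the blended latency drifts back towards the disk figure.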

The performance difference between the Raid-5 pool and the Raid-10 pool on these huge arrays isn't that much unless we overload the 32GB of (write) cache.
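That observation matches the classic write-penalty arithmetic, sketched below: a small RAID-5 write costs four back-end I/Os (read data, read parity, write data, write parity) against RAID-10's two mirrored writes, but the host only sees the difference once the write cache saturates and destaging falls behind.

```python
# Sketch: back-end disk I/Os generated per host write under the classic
# RAID write penalties (RAID-5: 4 I/Os for a small write; RAID-10: 2).
# The 2000 IOPS figure is an illustrative workload, not from the post.
WRITE_PENALTY = {"raid5": 4, "raid10": 2}

def backend_write_iops(host_write_iops: int, raid_level: str) -> int:
    """Back-end write I/Os the spindles must absorb for a given host load."""
    return host_write_iops * WRITE_PENALTY[raid_level]

print(backend_write_iops(2000, "raid5"))   # 8000 back-end I/Os
print(backend_write_iops(2000, "raid10"))  # 4000 back-end I/Os
```

While the cache can absorb bursts, both pools acknowledge writes at cache speed; it's only sustained load beyond what the spindles can destage that exposes the 2x gap.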

People very rarely seem to spec SAN hardware correctly. It's either way too big or way too small.


Neil Chandler

> Date: Fri, 18 Nov 2011 08:06:52 +0800
> Subject: Re: RAID5
> From:
> To:
> (drive failures and other boundary conditions aside)
> I've found that it's rarely the disk configuration nowadays that really
> matters (my current clients being a mix of software raid, hardware raid,
> raid-10, raid-5, raid-dp, the list goes on). It's much more the piping
> between storage and server, the CPU grunt on the storage, and to a lesser
> extent, the CPU grunt on the server, that seems to make all the difference.

