RE: Solid State Drives

From: Vishal Gupta <>
Date: Sun, 3 May 2009 09:36:19 +0100
Message-ID: <>


I would agree. SAN cache does a pretty good job, even with RAID-5, which is what we have at our bank.  

Most of the time, data in a SAN is written first to memory (the array cache) and offloaded to disk in the background, so the database gets a success handshake as soon as the data is written to the SAN cache. And with the combination of server RAM (i.e. the DB cache), SAN cache, and RAID-5, reads are also a lot faster. As Tanel suggests, the idea should be to optimize your SQL so it does less I/O. But even where you can't, a repeated full table scan might get served from the SAN cache.  
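The write-back behaviour described above can be sketched in a few lines (a hypothetical illustration only, not any vendor's actual implementation): the "database" gets an acknowledgement as soon as a block lands in cache, and destaging to the spindles happens later in the background.

```python
# Hypothetical sketch of a write-back cache: writes are acknowledged
# immediately once in cache; destaging to disk happens in the background.

class WriteBackCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.dirty = []      # blocks written but not yet on disk
        self.disk = []       # blocks persisted to spindles

    def write(self, block):
        """Acknowledge as soon as the block is in cache."""
        if len(self.dirty) >= self.capacity:
            self.destage()               # forced flush when cache is full
        self.dirty.append(block)
        return "ack"                     # the DB sees a fast write

    def destage(self):
        """Background offload of dirty blocks to disk."""
        self.disk.extend(self.dirty)
        self.dirty.clear()

cache = WriteBackCache(capacity=4)
for b in range(6):
    assert cache.write(b) == "ack"       # every write acks immediately
cache.destage()
print(sorted(cache.disk))                # all six blocks eventually on disk
```

The key point is that the acknowledgement latency is decoupled from disk latency, which is why a cached array can make even RAID-5 writes look fast.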

The only problem I see with SAN cache is that there is no resource scheduling: it's all shared. So if you have too many systems on the same SAN cache, one rogue system can bring down the entire company's systems. I have seen that happen. If too much is being written to the SAN cache and the writes are coming thick and fast, the SAN does not get time to offload the dirty cache to disk. Once the pending writes go above the configured threshold, the array starts throttling writes from all sources. And we had a Linux kernel where this eventually made every SAN-connected mount point read-only. OUCH... major downtime on all systems. Linux kernels have since been patched so that mount points do not become read-only under such conditions.  
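That failure mode can be sketched as follows (a hypothetical illustration, not any vendor's actual algorithm): once pending dirty writes cross the high-water threshold, writes from every connected host are throttled, not just the rogue one.

```python
# Hypothetical sketch: a shared write cache with a high-water threshold.
# Above the threshold the array throttles ALL hosts, because the cache
# has no per-host resource scheduling.

def admit_write(pending_writes, threshold, host):
    """Return how the array treats a write from `host`."""
    if pending_writes > threshold:
        return "throttled"   # every host suffers -- the cache is shared
    return "accepted"

THRESHOLD = 80               # hypothetical high-water mark (% dirty)

# One rogue host has filled the cache to 95% dirty ...
pending = 95
# ... and now even a well-behaved host's writes are throttled:
print(admit_write(pending, THRESHOLD, "quiet-oltp-db"))   # throttled
print(admit_write(10, THRESHOLD, "quiet-oltp-db"))        # accepted
```

This is exactly why per-system cache partitioning or per-port throttling (mentioned below) would help: the blast radius of one rogue writer would be contained.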

But SAN administrators really need to understand database I/O, and also need to confine busy systems to their own front-end (Fibre Channel) ports and their own disks and controllers.  

Even then, the SAN does not give them the ability to isolate cache for a particular system, or to throttle only selected systems/FC ports.      


Vishal Gupta  

[] On Behalf Of Tanel Poder
Sent: 01 May 2009 18:22
To: ; 'Oracle-L'
Subject: RE: Solid State Drives  

Once they get cheap and big, there will be a business case for them in regular shops.  

But right now, if you want to reduce the time spent waiting for physical reads in your database, one way is to buy a faster I/O subsystem, which SSDs may give you; another is simply to buy more memory for your server and do fewer physical reads. The same goes for writes: consider whether it's cheaper to buy/deploy/maintain SSDs or just to have more write cache in your storage array (and again, if you put more RAM into your server for caching read data, you can allocate even more of the storage cache to caching writes).  
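This cost comparison can be made concrete with some back-of-envelope arithmetic (all prices and rates below are hypothetical, purely for illustration): compare cost per avoided physical read for extra server RAM versus an SSD tier holding the same hot data.

```python
# Back-of-envelope sketch of the RAM-vs-SSD trade-off.
# All figures are hypothetical, for illustration only.

def cost_per_avoided_read(price_usd, reads_avoided_per_sec):
    """Dollars spent per physical read/sec eliminated."""
    return price_usd / reads_avoided_per_sec

# Hypothetical: 32 GB of extra buffer cache absorbs the same hot set
# as an SSD tier, eliminating roughly the same 5,000 physical reads/sec.
ram_price, ram_reads_avoided = 2000.0, 5000.0
ssd_price, ssd_reads_avoided = 8000.0, 5000.0

print(cost_per_avoided_read(ram_price, ram_reads_avoided))   # 0.4
print(cost_per_avoided_read(ssd_price, ssd_reads_avoided))   # 1.6
```

Under these assumed numbers, RAM wins on cost per avoided read; the real decision of course depends on your actual prices, working-set size, and whether the hot data even fits in RAM.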

So the question should be: what is the most cost-effective option for achieving the result, i.e. reducing the TIME spent doing physical I/O? Given the write caching of the large storage arrays already in use in today's enterprises, I don't think adding SSDs makes sense from a cost/performance perspective. Of course, when the technology gets cheaper, the potential power savings and lower drive failure rate will be other factors to consider.  

So my prediction is that, unless some other major new technology emerges in the coming few years, SSDs will replace disk spindles for online "active" data, just as (remote) disk spindles have replaced tape backups in some enterprises (by the way, I'm not saying I entirely like this approach - tapes have the benefit of being physically disconnected from any servers, in a guarded safe in a bank in another city or so).  

In addition to backups, disk spindles will still be used for archived data (lots of storage that is rarely accessed): they are faster than tape but cheaper per gigabyte than SSDs. Long-term backups are kept on tape, but some companies will throw away their tape systems completely to cut costs and complexity and keep all backups on disk spindles.  

After saying all that - currently I don't see much reason to buy SSDs for database solutions that are already deployed on mid-/high-end storage arrays.  

Tanel Poder <>  





[] On Behalf Of Freeman, Donald
Sent: 01 May 2009 16:09
To: Oracle-L (
Subject: Solid State Drives

Has anybody given any thought to where we are going as SSDs get cheaper and bigger? We've been going round and round at my shop with discussions about RAID, other disk-allocation issues, and fights over storage. We seem to spend a lot of time on that issue. I saw that IBM is testing a 4 TB SSD. I was wondering whether you'd have to mirror that, and what kind of reliability we would be getting. No more RAID discussions? I've heard there is a finite number of times you can write to one. What's the upgrade path here? --
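On the "finite number of writes" question: with ideal wear-levelling, an SSD's total lifetime writes are roughly its capacity times the program/erase cycles per cell, so endurance can be estimated from the daily write volume. A rough sketch, with purely hypothetical figures:

```python
# Rough endurance estimate for an SSD. Assumes ideal wear-levelling
# (writes spread evenly across all cells). All figures hypothetical.

def drive_lifetime_days(capacity_gb, pe_cycles, writes_gb_per_day):
    """Days until the drive's total write budget is exhausted."""
    total_writes_gb = capacity_gb * pe_cycles   # lifetime write budget
    return total_writes_gb / writes_gb_per_day

# Hypothetical 4 TB drive, 10,000 P/E cycles, 1 TB written per day:
days = drive_lifetime_days(4000, 10_000, 1000)
print(days)          # 40000.0 days
print(days / 365)    # roughly a century -- endurance rarely the limiter here
```

Real drives fall short of the ideal (write amplification, uneven wear), but the sketch shows why, at large capacities, write endurance is usually less of a worry than the question implies.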
Received on Sun May 03 2009 - 03:36:19 CDT
