
RE: Solid state disks for Oracle?

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Fri, 10 Mar 2006 08:46:50 -0500
Message-ID: <KNEIIDHFLNJDHOOCFCDKKEJIHMAA.mwf@rsiz.com>


I haven't read the whole thread carefully, but from what I scanned I think I have three things to add:

  1. SSD in modest quantities (4 to 30 GB, for example) is not that expensive in the context of systems that need SSD, and measured in input/output operations per second (IOPS) it is actually cheaper.
  2. The increased rate of operations for things like online redo is important, but I have not heard mention of the effects of "de-heating" the cache of the disk arrays in the system. If there is skew in the curve of object size versus object activity, there will be an opportunity to relocate smaller hot objects from the disk array to SSD (the first sketch after this list shows one way to hunt for such candidates). If the heat of the small objects is primarily reads, then Oracle's cache architecture diminishes this effect; but competition for the write cache on the arrays is reduced, so DBWR et al. performance is improved. Depending on what is busying the write cache on a given system, this may be either a huge effect per gigabyte of object relocated to SSD or nearly nil.
  3. It is most certainly feasible to make the primary landing spot for archived redo logs SSD. This does require some engineering, but it is not particularly difficult engineering (the second sketch after this list outlines the relay loop described here). First, you set up n (where n is at least 4) disk drives (or array stripes, but subject to at least ping-pong resource privacy) that are resource private to serving your secondary archiving process. You monitor completion of writing the archived redo log. As each archive is completed, you use your fastest secure copy process to get the archive from SSD to the current n. (The key here is that the copy load on the SSD is trivial compared to its speed, so you don't have to ping-pong LGWR, ARCH, and this copy process.) Since the output media is private to the "copy archives" processes and SSD is very, very fast, there is usually no bottleneck created by doing a full comparison read after write. If your SSD can be attached by multiple reading hosts, you can have a cheap CPU (i.e., not one from your DB server) dump each archived redo log from SSD to null to trap for errors; if the SSD is not multiple-host threaded for read, then you have to consider whether dumping is a good use of your server CPUs. You check before each copy whether the archive to be copied will fit in the remaining space on the current n, and move to the next n if required. If you have tape jobs or other removable media, that rotation also triggers writing from the previous n to removable media. If you're rolling your own standby, you can also ftp directly from the SSD archive to the remote destination via your preferred technology (usually some compressing secure ftp pipe, but your mileage with all of this stuff may vary). When all downstream copies of the archives on SSD have been made, you whack the archived redo log that exists on SSD.
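To make the candidate hunt in point 2 concrete, here is a minimal sketch (not from the original post): it assumes the python-oracledb DB-API driver, an account that can read V$SEGMENT_STATISTICS and DBA_SEGMENTS, and arbitrary illustration cutoffs (a 10 GB size cap and a top-20 listing).

# Sketch: list small, write-hot segments as candidates for relocation
# to SSD (point 2). Size cap, cutoff, and credentials are placeholders.
import oracledb

SQL = """
SELECT s.owner, s.object_name, d.bytes / 1048576 AS mb, s.value AS phys_writes
FROM   v$segment_statistics s
JOIN   dba_segments d
       ON d.owner = s.owner AND d.segment_name = s.object_name
WHERE  s.statistic_name = 'physical writes'
AND    d.bytes < 10 * 1024 * 1024 * 1024
ORDER  BY s.value DESC
"""

with oracledb.connect(user="system", password="...", dsn="db1") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        for owner, name, mb, writes in cur.fetchmany(20):
            print(f"{owner}.{name}: {mb:.0f} MB, {writes} physical writes")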
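And a minimal sketch of the relay loop in point 3, with hypothetical paths, capacity, and polling interval; a production version would confirm each archive is complete (for example against V$ARCHIVED_LOG) before copying, and would finish tape/standby copies before removing anything from SSD.

# Sketch of the point-3 relay: copy each completed archived redo log
# from SSD to the current private spool disk, verify with a full
# read-back comparison (cheap, because SSD reads are nearly free),
# rotate to the next disk when the current one is full, then whack
# the SSD copy. All paths and the capacity are placeholders.
import filecmp, os, shutil, time

SSD_ARCH = "/ssd/arch"                           # archive destination on SSD
SPOOLS = [f"/spool{i}/arch" for i in range(4)]   # n private disks, n >= 4
CAPACITY = 68 * 2**30                            # usable bytes per spool disk
current = 0                                      # index of the current n

def used(path):
    # Bytes already spooled onto one private disk.
    return sum(os.path.getsize(os.path.join(path, f)) for f in os.listdir(path))

while True:
    for name in sorted(os.listdir(SSD_ARCH)):
        src = os.path.join(SSD_ARCH, name)
        # Check before each copy whether this archive fits on the
        # current n; if not, move to the next n. Rotation is also the
        # point where a tape job for the just-filled disk would fire.
        if used(SPOOLS[current]) + os.path.getsize(src) > CAPACITY:
            current = (current + 1) % len(SPOOLS)
        dst = os.path.join(SPOOLS[current], name)
        shutil.copy2(src, dst)
        if not filecmp.cmp(src, dst, shallow=False):   # full compare read
            raise IOError(f"read-back verify failed for {name}")
        # In practice, confirm tape/standby copies succeeded first.
        os.remove(src)                               # whack the SSD copy
    time.sleep(10)                                   # poll for new archives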

I do not have a "one size fits all" recommendation for whether or not using SSD (and in what quantities, if used) makes sense. But I am continuously annoyed at how quickly array vendors tend to dismiss using any SSD. I keep hoping that one or another latches on to the truth that cooperative use of SSD with arrays magnifies the power of their array by de-heating the array cache. Once one vendor does, all the others will have to in order to remain competitive. Fortunately, this also fits extremely well with ASM, in that various SSDs are simply disk groups at a different speed rating.

Please stay BORING,

mwf



Rightsizing, Inc.
Mark W. Farnham
President
mwf_at_rsiz.com
36 West Street
Lebanon, NH 03766-1239
tel: (603) 448-1803

Balanced Organization of Resources in Natural Groups (BORING)

<snip>

 Obviously, placing the archive log destination on SSD is not feasible.

Regarding UNDO tablespaces: it might be cheaper to oversize the buffer cache so that the UNDO tablespace can be mostly cached. In this case only writes need to be done, but those are background operations and, tuned properly, shouldn't affect performance. (OK, there can be some overhead in managing a larger buffer cache.) In our case, undo tablespaces usually experience very few reads and almost exclusively write I/O (see the sketch after this quote).
<snip>
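The quoted observation above is easy to check on your own system. A minimal sketch, again assuming the python-oracledb driver, with the undo tablespace name UNDOTBS1 and the credentials as placeholders:

# Sketch: compare physical reads vs. writes against the undo
# tablespace files via V$FILESTAT, to confirm undo is write-mostly
# before deciding how much buffer cache to spend on it.
import oracledb

SQL = """
SELECT d.file_name, f.phyrds, f.phywrts
FROM   v$filestat f
JOIN   dba_data_files d ON d.file_id = f.file#
WHERE  d.tablespace_name = 'UNDOTBS1'
"""

with oracledb.connect(user="system", password="...", dsn="db1") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        for file_name, reads, writes in cur:
            share = reads / max(reads + writes, 1)
            print(f"{file_name}: {reads} reads, {writes} writes "
                  f"({share:.1%} reads)")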

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Mar 10 2006 - 07:46:50 CST
