RAM Disks Revisited
Date: Tue, 30 Jun 1992 08:49:51 EDT
Message-ID: <92182.084951KML110_at_psuvm.psu.edu>
I had earlier asked if anyone on the net was using a RAM disk as part of their
database. Surprisingly, I didn't hear a word from anyone at oracle.com about
how to take advantage of this feature, so here's my best shot at how to
implement this:
Oracle insists (for the sake of portability) on performing all sorts internally
in the TEMP area (unlike SAS, for example, which creates temp files in a sort
directory). I have created a tablespace called TEMP, 100M, to which all of my
non-DBA users point for Temporary_tablespace usage. It contains no objects
other than the SYS.... temp tables that are created during large group bys,
create indexes, etc.
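For reference, the setup above looks roughly like this (file specs and user
names are illustrative, not our actual ones; check the syntax against your
Oracle version):

```sql
-- Dedicated 100M tablespace holding nothing but sort segments.
CREATE TABLESPACE temp
  DATAFILE 'DISK$ORA:[ORACLE.DBS]TEMP01.DBF' SIZE 100M;

-- Point each non-DBA user at it for sorting.
ALTER USER scott TEMPORARY TABLESPACE temp;
```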
To improve our sort performance, we can either (1) increase the sort area in
the SGA - already pretty much maxed out and forced into contention with other
users of the SGA, which is limited in VMS to 32M - or (2) put the temp
tablespace
on the fastest disk possible, a RAM disk. Alter TEMP offline, backup the file
to the RAM disk, alter online, and the TEMP tablespace is now in its own
section of memory, ready to fly. The problem, though, is that a node shutdown
will nuke the temp tablespace file. DEC of Canada suggested using an ESE50
disk. This makes a copy of a RAM disk's contents to disk before the node is
completely shutdown - basically, it keeps the power going long enough to make
a backup. The problem here is the cost - To handle TEMP, it'd cost $100K US.
Talk about your price performance points.
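One way to spell out the offline/copy/online step (device and file names here
are made up, and the exact rename syntax varies by Oracle version - the point
is that the control file has to learn the file's new location):

```sql
-- Take TEMP offline so its datafile can be copied.
ALTER TABLESPACE temp OFFLINE;

-- At the VMS prompt, copy the datafile onto the RAM disk, e.g.:
--   $ COPY DISK$ORA:[ORACLE.DBS]TEMP01.DBF MDA0:[ORACLE]TEMP01.DBF

-- Tell the control file where the datafile now lives, then bring it back.
ALTER TABLESPACE temp RENAME DATAFILE
  'DISK$ORA:[ORACLE.DBS]TEMP01.DBF' TO 'MDA0:[ORACLE]TEMP01.DBF';
ALTER TABLESPACE temp ONLINE;
```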
To recover from a shutdown of this sort, you have to be running in archivelog
mode. Make frequent backups of the temp tablespace. This may sound odd, but
there's a good reason. The temp tablespace, despite having no permanent
objects, has a sequence number in the header controlled by the control file.
How often should you back it up? Depends. How often are your redo logs
switched? And how long do you want recovery to take? Since every node
shutdown necessitates a media recovery, frequent backups reduce the number
of redo logs you'd need to apply.
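A rough sketch of that recovery path, assuming archivelog mode and a recent
backup of the TEMP datafile (exact commands depend on your version and on
whether you drive this from SQLDBA):

```sql
-- After the node shutdown wipes the RAM disk, restore the backup copy
-- of the datafile to the RAM disk at the VMS prompt, then:
RECOVER TABLESPACE temp;        -- applies archived redo to roll the
                                -- file header forward to the current SCN
ALTER TABLESPACE temp ONLINE;
```

The fewer redo logs generated since the last backup, the fewer archives this
has to apply - hence the frequent backups.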
Given these caveats, it seems doable, and the performance benefit may make
it worthwhile for db's doing large group by's (ex: I had a situation where
we had to do a full-table six column group by on a 5-million record table
to denormalize it to a workable size. The temp table was 109M.). Other
efforts it'd help include create indexes and correlated subq's.
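The sort of statement I mean (table and column names here are hypothetical -
any six-column group by over millions of rows will spill its sort into TEMP):

```sql
-- Full-table, six-column group by that denormalizes a 5M-row table;
-- the intermediate sort run is what lands in the TEMP tablespace.
SELECT col1, col2, col3, col4, col5, col6, COUNT(*)
  FROM big_table
 GROUP BY col1, col2, col3, col4, col5, col6;
```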
Considering Oracle's rep as a performance hog (at least on VMS), you'd think
they'd offer such support. Any comments from the field? Does this scenario
need to be modified before being implemented?
Kevin Loney "If at first you don't succeed, shutdown abort."
Received on Tue Jun 30 1992 - 14:49:51 CEST