I have seen an aging white paper that basically slams the use of raw
partitions, except when you can demonstrate your database is I/O
bound. I disagree with it. I think most of its arguments against raw
partitions are outdated. It was written without consideration of LVM,
or of backup products that make backing up a raw logical volume as
easy as backing up a file system. Besides, if you first build your
database on filesystems, think how much more trouble it will be to
convert a production database to raw LV's later than it would have
been to build it raw in the first place. Why wait until you ARE I/O
bound just to prove a point?
Advantages:
- Faster I/O performance (although some argue the improvement is
marginal).
- You can use real dbwr async I/O (via the asyncdsk kernel driver).
- Some Oracle options (Parallel Server) require raw.
To me, the biggest advantage is the async I/O item above. Sure, you
can run multiple dbwr's, but using real async I/O is more efficient
than simulating it.
Disadvantages:
- Using raw logical volumes does require close cooperation from
your system administrator, so you will probably need to do a little
more up-front planning (not a bad thing, IMHO). In my case, I began
life in the UNIX world as a UNIX system administrator before I became
an Oracle DBA, and I've got root access whenever I need it. So, as
long as I don't get into an argument with myself.... :-)
I have built all of our databases on HP systems exclusively using raw
partitions via LVM (except for the control files and init<sid>.ora of
course!). We've had no trouble at all going "raw". LVM eliminates
the raw partition management headaches.
- BEFORE you go raw, you will definitely need a good backup
product (like Omniback) that can back up raw LV's. Otherwise, the
only native UNIX method of copying a raw partition is "dd", and you
do NOT want to use dd to back up your database.
Other thoughts on raw logical volumes....
On systems running HPUX 10.xx, I create, where possible, volume groups
with 8 drives each, spread across as many controllers as possible (8
would be ideal). Then I create the logical volumes striped across all
8 devices. Note that the order in which you name your devices in the
vgcreate command has a direct impact on logical volume striping. That
is part of the planning you do in advance.
Don't let HPUX name your LV's for you, or you'll have creative names
like lvol1, lvol2, lvol3, etc. My convention is to use a pattern of
<sid name>_<tblspace name><number>. For example, if I have a database
sid of "demo", the system tablespace LV name will be "demo_system01".
You will need to reference the LV via the character device name. So
if this LV is in a VG named "vgora01", the full path name will be
'/dev/vgora01/rdemo_system01'.
You will need to change the ownership of the character device names
to oracle. After I create all the LV's, I can change ownership with a
single command, e.g. # chown oracle:dba /dev/vgora*/rdemo_*
The maximum size you can define for a datafile is the LV size minus
one db_block. For example, if you use an 8K db block size and create
the demo_system01 LV at 64M, then 64*1024 - 8 = 65528 (KB). Thus:
create database demo
.....
datafile '/dev/vgora01/rdemo_system01' size 65528k reuse;
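That sizing arithmetic is easy to script. A minimal sketch, assuming
you supply the LV size in MB (as given to lvcreate -L) and the Oracle
block size in KB:

```shell
# Maximum raw-LV datafile size: LV size minus one database block,
# everything expressed in KB.
lv_size_mb=64      # size passed to lvcreate -L
db_block_kb=8      # db_block_size in KB
max_kb=$(( lv_size_mb * 1024 - db_block_kb ))
echo "datafile size ${max_kb}k"
```

For the 64M demo_system01 LV with an 8K block, this prints 65528k,
matching the figure in the create database statement above.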