Oracle FAQ Your Portal to the Oracle Knowledge Grid


Re: Datafiles across multiple disks

From: Paul Drake <>
Date: Thu, 13 Jan 2005 14:55:49 -0500
Message-ID: <>

Here is a place to start:
subscribe to "Storage" magazine with an email address from which you don't mind some spam.

Here is a benchmarking tool:

I remember a tool named "Iometer" that I think was developed by Intel.

"The Storage Performance Council announced the first industry-standard storage benchmark of its kind for direct attach and network storage subsystems. The new benchmark, dubbed SPC-1, gives customers a vendor-neutral process to compare and configure storage subsystem technology."
I don't think that SPC-1 lived up to its promises.

Wild !@#$ guess:
More than likely, your internal hard drives are direct-attached SCSI RAID, using an integrated on-board RAID controller with 128 MB of memory. If there is more than one internal channel, that memory is divided up per channel, and defaults to 50% read / 50% write. In effect, you have a 32 MB read cache. The RAID volume is a 3-disk RAID 5 with a stripe size of 32 KB. Scattered reads are slow because a db_file_multiblock_read_count of 16 (16 * 8 KB blocks = 128 KB) requires 4 stripes to be read per request.
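The stripe arithmetic above can be checked with a quick query (the 16-block read count, 8 KB block size, and 32 KB stripe size are the guessed figures from above, not known values):

```sql
-- 16 blocks/read * 8 KB/block = 128 KB per multiblock read;
-- at a 32 KB stripe size, each scattered read spans 4 stripes.
SELECT CEIL((16 * 8) / 32) AS stripes_per_read FROM dual;
-- -> 4
```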
Competition among LGWR, DBWR, CKPT, and ARCH for I/O is fierce. Redo wastage is likely high, as you have only one filesystem for online redo logs, controlfiles, datafiles, tempfiles, and archived redo logs.
Write caching helps somewhat, but the read cache is practically useless. All in all, your I/O performance off of the internal storage is less than optimal.

More than likely, your external hard drives reside in a unit attached via a single 2 Gbps Fibre Channel host bus adapter (FCHBA). The external unit has >= 1 GB of cache. It has 8 disks (or more) in a RAID 10 volume with a stripe size of 256 KB (or larger). Only one stripe needs to be read to satisfy a request of 16 * 8 KB blocks (full table scans). Read-ahead is configured such that table scans are throttled at the FCHBA, not at the physical disks. Table scans now fly for tablespaces whose datafiles have been moved to the external unit.
Or not. Who can tell, as the information on hardware is non-existent from our end.
The point is that larger volumes with larger stripes over larger numbers of disks (hopefully not larger disks) are better suited to supporting a data warehouse where table scans dominate.

One way to tell the speed is to predict it from the hardware:

estimated capacity = LEAST(
    (# of writes/unit time * average write size + # of reads/unit time * average read size)
        / (# of operations per second that the SCSI RAID controller | FCHBA supports),
    IOPS per drive * # of drives )

or measure it (see the Iometer tool mentioned above).
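As a toy illustration of the LEAST() estimate (the controller and per-drive ratings below are invented numbers, not measurements):

```sql
-- LEAST(controller ops/s, per-drive IOPS * drive count):
-- a controller good for 10,000 ops/s is drive-bound in both cases.
SELECT LEAST(10000, 150 * 3) AS est_raid5_iops,   -- 3-disk internal RAID 5  -> 450
       LEAST(10000, 150 * 8) AS est_raid10_iops   -- 8-disk external RAID 10 -> 1200
FROM   dual;
```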
Have you at least configured Statspack or examined v$filestat? Join v$datafile to v$filestat and group by the mount point to see file stats per mount point.
This info is also available via the standard Statspack reports.
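A sketch of that v$datafile/v$filestat join (assuming Windows drive-letter mount points, so SUBSTR(name, 1, 3) yields e.g. 'E:\'; readtim/writetim are in centiseconds and are only populated when timed_statistics is true):

```sql
-- Physical I/O rolled up per mount point (drive letter).
SELECT SUBSTR(d.name, 1, 3)  AS mount_point,
       SUM(f.phyrds)         AS physical_reads,
       SUM(f.phywrts)        AS physical_writes,
       SUM(f.readtim)        AS read_time_cs,
       SUM(f.writetim)       AS write_time_cs
FROM   v$datafile d, v$filestat f
WHERE  f.file# = d.file#
GROUP  BY SUBSTR(d.name, 1, 3)
ORDER  BY physical_reads DESC;
```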

When you say "DW", I would think that you'd be talking about hundreds of GB. Filesystems can perform poorly when free space drops below 5%, as finding free space becomes an issue: think dictionary-managed tablespaces, uet$, fet$, and all the recursive processing required to support them, which thankfully we no longer have to deal with.

I think that you need to discuss the server configuration with your SysAdmin. If you are the SysAdmin, start back with the Oracle Concepts manual and look toward offloading storage management to a logical volume manager. 10g ASM may or may not be good for you as you upgrade this in the future, but I believe that it was designed with you in mind (I hope that doesn't come off the wrong way).

Relevant pieces of info to include next time: external storage cabinet make/model, volume stripe sizes.



On Thu, 13 Jan 2005 09:19:10 -0700, Bryan Wells <> wrote:
> All,
> Newbie here again.
> We are running a DW on on w2k Advanced Server. im trying to
> manage disk space on our currrent server. We were down to 5GB of disk
> available on the e:\ drive as of last weekend when i moved index
> datafiles/tablespaces to the external f:\ array. would this cause a
> perfomance hit? is there a better best practice to have datafiles
> accross disk? Im sure there is since im continually learning. Also,
> im still trying to find the disk speed between the array and the
> server.
> Bryan S Wells - DBA Newbie
> --

Received on Thu Jan 13 2005 - 13:56:33 CST
