Oracle FAQ Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> c.d.o.server -> Re: Oracle Myths- Tablespace placement answered by Oracle

Re: Oracle Myths- Tablespace placement answered by Oracle

From: Nuno Souto <nsouto_at_optushome.com.au.nospam>
Date: Sun, 19 May 2002 18:47:43 +1000
Message-ID: <3ce767d3$0$15145$afc38c87@news.optusnet.com.au>


In article <3ce714a7$1_12_at_news.teranews.com>, you said (and I quote):
> > It doesn't matter. At all. Think about it. An I/O is an I/O is an I/O.
>
> Actually, it does matter in that if the disk head is at one end of the
> platter and an index I/O sends it to the other end, that can be
> significant.

And a table I/O will never send the disk head to the other end? Index or table is immaterial. What is important is the I/O itself. The fewer of those hit a single disk, the better. The more you spread the load across multiple disks, the faster overall I/O will be. Whether each of them is a table or an index is not relevant.

At database and tablespace level, you have no control whatsoever over the positioning of the disk heads. That is determined by the proportion of the disk platters allocated to your data file and by the mix of use the disk drive gets.

If you stripe a large datafile across many drives, you will be using only a small portion of the platters in EACH drive, which reduces the problem of the disk heads moving from one end of the platter to the other in any individual disk.

Again, this has everything to do with the LVM and the striping of your disks and absolutely nothing to do with whatever you do tablespace-wise in the database.
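To make that concrete, here is a minimal sketch (the mount points, tablespace name and sizes are my illustrations, not from the original setup): the DDL the DBA issues is identical whether the datafile lives on a lone spindle or on an LVM-striped volume. The striping is configured entirely at the volume manager, outside the database.

```sql
-- Either/or: the same statement, differing only in where the
-- datafile happens to live. /single_disk is a hypothetical lone
-- drive; /striped_vol a hypothetical LVM volume striped over
-- several drives. Oracle neither knows nor cares.
CREATE TABLESPACE app_data
  DATAFILE '/single_disk/oradata/app_data01.dbf' SIZE 4000M;

-- versus
CREATE TABLESPACE app_data
  DATAFILE '/striped_vol/oradata/app_data01.dbf' SIZE 4000M;
```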

This reminds me of a partitioning scheme I designed a couple of years ago for a very large table (nearly 0.5 TB). The data was keyed by date, but the PK (and only index) was a surrogate key. Many times a day, all data for that day had to be scanned. Once a week, all data for THAT week had to be scanned multiple times. Once a month, all data for that month had to be scanned multiple times. Data was kept for three months.

Two partitioning schemes were tried. The first concentrated on spreading the I/O load at tablespace level by hashing the key into ten partitions, so that consecutive rows were scattered across these 10 partitions, each partition of course stored in a single tablespace. Great performance inserting rows: they spread all over the tablespaces. Absolutely LOUSY for scanning the daily, weekly and monthly stuff: ALL 10 of the table's partitions had to be full scanned for this simple process. Horrendous load on the system, and NO WAY we could run any of these scans concurrently with daily processing.
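The first scheme would have looked something like this (a sketch only; the table, column and tablespace names are invented, since the post doesn't give them):

```sql
-- Scheme 1: hash the surrogate key into 10 partitions.
-- Inserts spread nicely across the tablespaces, but a scan of
-- any one day's data must visit all 10 partitions, i.e. in
-- effect the whole table.
CREATE TABLE big_hist (
  row_id   NUMBER        NOT NULL,  -- surrogate key, the only index
  biz_date DATE          NOT NULL,  -- the date the data is keyed by
  payload  VARCHAR2(200),
  CONSTRAINT big_hist_pk PRIMARY KEY (row_id)
)
PARTITION BY HASH (row_id)
PARTITIONS 10
STORE IN (ts_h01, ts_h02, ts_h03, ts_h04, ts_h05,
          ts_h06, ts_h07, ts_h08, ts_h09, ts_h10);
```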

The second partitioning scheme: forget the hashing, one partition per day. Every day, a new partition is created in a STRIPED disk set; the previous day's partition is moved to a normal set. This takes care of the daily load of inserts. Now all the daily scans happen in a single partition on a striped set, with no need to full scan the ENTIRE table. Major improvement in performance right there. Weekly? Try 7 partitions, none of them the current day's. No contention for doing a weekly scan while the daily processing is going on. Monthly? Try 30 partitions out of 91. Still one third of the previous (daily) load.
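In Oracle DDL, the second scheme is range partitioning on the date plus a small amount of daily maintenance. Again a sketch with invented names and dates:

```sql
-- Scheme 2: one range partition per day, keyed on the date.
CREATE TABLE big_hist (
  row_id   NUMBER        NOT NULL,
  biz_date DATE          NOT NULL,
  payload  VARCHAR2(200),
  CONSTRAINT big_hist_pk PRIMARY KEY (row_id)
)
PARTITION BY RANGE (biz_date) (
  PARTITION p20020518
    VALUES LESS THAN (TO_DATE('2002-05-19','YYYY-MM-DD'))
    TABLESPACE ts_normal,
  PARTITION p20020519
    VALUES LESS THAN (TO_DATE('2002-05-20','YYYY-MM-DD'))
    TABLESPACE ts_striped   -- the current day, on the striped set
);

-- Daily maintenance: add tomorrow's partition on the striped
-- set, then move today's off it once the day rolls over.
ALTER TABLE big_hist ADD PARTITION p20020520
  VALUES LESS THAN (TO_DATE('2002-05-21','YYYY-MM-DD'))
  TABLESPACE ts_striped;
ALTER TABLE big_hist MOVE PARTITION p20020519
  TABLESPACE ts_normal;
```

Scans on biz_date then prune to just the partitions of interest. Note that a global index such as the surrogate-key PK above would be marked unusable by the MOVE PARTITION and need a rebuild as part of the daily maintenance.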

No guesses as to which partitioning proved easier to use and maintain, as well as having better overall performance. By a country mile.

-- 
Cheers
Nuno Souto
nsouto_at_optushome.com.au.nospam
Received on Sun May 19 2002 - 03:47:43 CDT

