Oracle FAQ Your Portal to the Oracle Knowledge Grid

Home -> Community -> Mailing Lists -> Oracle-L -> Re: First large table

Re: First large table

From: <>
Date: Wed, 19 Jan 2005 14:43:17 +0000
Message-Id: <>

answers inline....

> the day with approx 5000 records in a file.
> Approx 250000 will be loaded in a day, i.e. approx 50 files loaded per day.
> Data will be deleted once a day. Data where column 10 equals a date 3 months
> in the past will be deleted, i.e. a day's worth of data, approx 250000 records
> will be deleted.

You need to use direct path loads. Back up your database at night, and keep the source files until the database has been backed up. Don't use deletes: partition by date, add a new partition every 3 months, and drop the old partition once all records in it are 3 months old. You don't need to delete records the moment they turn 3 months old; just filter them out in the WHERE clause of your query.
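The partition-by-date approach described above can be sketched in DDL. All table, column, and partition names here are hypothetical; the original post only identifies the date field as "column 10":

```sql
-- Hypothetical names throughout; only the date column ("column 10")
-- comes from the original post.
CREATE TABLE feed_data (
    id        NUMBER,
    load_date DATE,           -- "column 10": the partition/purge key
    payload   VARCHAR2(200)
)
PARTITION BY RANGE (load_date) (
    PARTITION p2005q1 VALUES LESS THAN (TO_DATE('2005-04-01', 'YYYY-MM-DD')),
    PARTITION p2005q2 VALUES LESS THAN (TO_DATE('2005-07-01', 'YYYY-MM-DD'))
);

-- Every 3 months, add the next partition ahead of the incoming data:
ALTER TABLE feed_data ADD PARTITION p2005q3
    VALUES LESS THAN (TO_DATE('2005-10-01', 'YYYY-MM-DD'));
```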

> Data must be recoverable.
> Although this does not seem too much of a problem I would like to know what
> techniques I should be considering to
> 1. Maximise sqlldr performance
> Is direct load an option here? I believe not, as logging must be on and data
> can possibly be queried whilst data is loaded (although the query will only
> run once a day).
> What should sqlldr rows and bindsize values be?

Just use the maximums. The ROWS and BINDSIZE values are not important; you are only talking about 5000 rows at a time.
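For reference, a direct path load looks something like this. The control file and all names are illustrative assumptions, not taken from the original post:

```sql
-- load.ctl (illustrative sketch)
LOAD DATA
INFILE 'daily_feed.dat'
APPEND
INTO TABLE feed_data
FIELDS TERMINATED BY ','
(id, load_date DATE 'YYYY-MM-DD', payload)
```

Invoked with something like `sqlldr userid=loader_user control=load.ctl direct=true`; with DIRECT=TRUE, the conventional-path BINDSIZE setting does not apply.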

> 2. Maximise select performance
> Index on columns 14,13 and 10
> Separate tablespace for this table. Separate tablespace for the index.
> Partition the table using column 10 as the key, i.e. each partition will have
> approx 250000 records.
> Parallel queries

No, you need to partition on the date so you can purge. A separate tablespace for each partition doesn't help unless each is on a separate hard drive mount point; if they're just copies of the same file on the same disk, it's irrelevant. Parallel query only appears to help if you have partitions on different drives; I have never noticed any improvement when all the data is on the same disk. This isn't that much data. When you run your reports, how many rows are you bringing back? If it's just a few, the size of the table is irrelevant. If it's a lot, you will be doing full table scans, so your indexes don't matter much. Keep in mind that the more records you return, the longer it takes, no matter what.
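To illustrate the point about filtering in the WHERE clause: when the predicate is on the partition key, the optimizer only touches the partitions that can contain matching rows (names hypothetical, continuing the earlier sketch):

```sql
-- Only partitions that can hold the last 3 months of data are scanned
-- (partition pruning); older partitions are skipped entirely.
SELECT *
  FROM feed_data
 WHERE load_date >= ADD_MONTHS(TRUNC(SYSDATE), -3);
```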

> Anything else to consider?
> 3. Deletion of data, rollback segments

Do not delete. DELETE is the costliest operation. Drop old partitions instead.

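Dropping a partition is essentially a data-dictionary operation, so it generates almost no undo or redo compared with deleting 250000 rows individually (table and partition names hypothetical):

```sql
-- Drops every row in the oldest partition in one dictionary operation;
-- add UPDATE GLOBAL INDEXES if any global indexes must stay usable.
ALTER TABLE feed_data DROP PARTITION p2005q1;
```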
> By deleting based on the table partition, will this minimise rollback
> requirements?
> Any other advice on tuning this?

This isn't that big of a deal; it's not a lot of data with today's hardware.
> Any comments will be very much appreciated.
> John
> --

Received on Wed Jan 19 2005 - 10:19:10 CST
