Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


First large table

From: John Dunn <jdunn_at_sefas.com>
Date: Wed, 19 Jan 2005 11:01:49 -0000
Message-ID: <001701c4fe16$473ecc50$3e04050a@johnman>


Currently our database is small, with tables having no more than 200,000 rows.

Now we have a requirement to load data from text files and hold approx 15 million rows at any one time.

The Oracle platform is 8.1.7 on AIX 4.3.3.

The data will be queried from a client PC using Crystal Reports

The table definition is something like:

Column1        NUMBER(6)
Column2        NUMBER(2)
Column3        NUMBER(1)
Column4        VARCHAR2(20)
Column5        NUMBER(8)
Column6        NUMBER(8)
Column7        NUMBER(2)
Column8        VARCHAR2(2)
Column9        VARCHAR2(2)
Column10       DATE      (today's date)
Column11       VARCHAR2(8)
Column12       VARCHAR2(8)
Column13       VARCHAR2(20)
Column14       VARCHAR2(60)


The report will select records using columns 14, 13 and 10.

The report will only be run once a day.

Data from the text files will be loaded into the table by sqlldr throughout the day, with approx 5,000 records per file. Approx 250,000 records will be loaded each day, i.e. approx 50 files per day.
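For reference, the sort of control file I have in mind is below (the file name, table name and date format are placeholders, and the ROWS/BINDSIZE values are just starting guesses to be tuned, not recommendations):

```
-- Conventional path load, so redo is generated and the table
-- stays queryable while the load runs.
OPTIONS (ROWS=1000, BINDSIZE=1048576)
LOAD DATA
INFILE 'daily_feed.txt'
APPEND
INTO TABLE big_table
FIELDS TERMINATED BY ','
(Column1, Column2, Column3, Column4, Column5, Column6, Column7,
 Column8, Column9, Column10 DATE "YYYYMMDD", Column11, Column12,
 Column13, Column14)
```

Each of the ~50 daily files would then be loaded with something like sqlldr control=daily_feed.ctl.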

Data will be deleted once a day: rows where column 10 equals the date 3 months in the past will be deleted, i.e. one day's worth of data, approx 250,000 records.

Data must be recoverable.

Although this does not seem too much of a problem, I would like to know what techniques I should be considering to:

  1. Maximise sqlldr performance. Is direct path load an option here? I believe not, as logging must be on and the data may be queried while a load is running (although the query will only run once a day). What should the sqlldr ROWS and BINDSIZE values be?
  2. Maximise select performance. I am considering: an index on columns 14, 13 and 10; a separate tablespace for this table; a separate tablespace for the index; partitioning the table on column 10, so that each partition holds approx 250,000 records; parallel queries.
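Concretely, something along these lines is what I have in mind for 2 (the table, tablespace and partition names are made up; one range partition per day's load on column 10):

```sql
CREATE TABLE big_table (
  Column1   NUMBER(6),
  Column2   NUMBER(2),
  Column3   NUMBER(1),
  Column4   VARCHAR2(20),
  Column5   NUMBER(8),
  Column6   NUMBER(8),
  Column7   NUMBER(2),
  Column8   VARCHAR2(2),
  Column9   VARCHAR2(2),
  Column10  DATE,
  Column11  VARCHAR2(8),
  Column12  VARCHAR2(8),
  Column13  VARCHAR2(20),
  Column14  VARCHAR2(60)
)
TABLESPACE big_table_data          -- separate tablespace for the table
PARTITION BY RANGE (Column10) (    -- one partition per day's data
  PARTITION p20050119 VALUES LESS THAN (TO_DATE('2005-01-20','YYYY-MM-DD')),
  PARTITION p20050120 VALUES LESS THAN (TO_DATE('2005-01-21','YYYY-MM-DD'))
  -- ... a new partition added each day
);

-- Local index on the three report columns, in its own tablespace
CREATE INDEX big_table_ix ON big_table (Column14, Column13, Column10)
  TABLESPACE big_table_ix LOCAL;
```

A local index seems preferable here so that dropping an old partition does not invalidate the whole index.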

Anything else to consider?

3. Deletion of data, rollback segments

Will deleting based on the table partition minimise rollback requirements?
Any other advice on tuning this?
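In other words, instead of a DELETE, I am wondering whether something like this (partition name made up) would largely avoid rollback:

```sql
-- Dropping the 3-month-old daily partition removes ~250,000 rows
-- as a DDL operation, generating far less undo/redo than a DELETE.
ALTER TABLE big_table DROP PARTITION p20041019;
```

My understanding is that with a local index the matching index partition is dropped too, whereas a global index would be left unusable.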

Any comments will be very much appreciated.

John

--
http://www.freelists.org/webpage/oracle-l
Received on Wed Jan 19 2005 - 06:04:11 CST
