
Re: Loading 300 million records

From: Stu Charlton <stuartcharlton_at_gmail.com>
Date: 14 Jun 2005 20:19:05 -0700
Message-ID: <1118805545.546354.250030@f14g2000cwb.googlegroups.com>

AlexK wrote:
> Hi All,
>
> I'm trying to improve the performance of a procedure that loads
> approximately 300 million records into a table. Currently the process
> is using 'INSERT /*+ APPEND*/ .. INTO SELECT' and takes about 10 hours
> to run on 10G. The select joins about 5 tables, most of which are

For understanding fast bulk inserts, I would suggest this link: http://asktom.oracle.com/pls/ask/f?p=4950:8:5560369830540486146::NO::F4950_P8_DISPLAYID,F4950_P8_B:6407993912330,Y
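To make that concrete, here is a minimal sketch of a direct-path, parallel insert. The table names, alias, and degree of parallelism are made up for the example, not taken from the original post:

    ALTER SESSION ENABLE PARALLEL DML;

    INSERT /*+ APPEND PARALLEL(t, 4) */ INTO big_target t
    SELECT /*+ PARALLEL(s, 4) */ s.*
    FROM   big_source s;

    COMMIT;

The APPEND hint requests a direct-path load (blocks are written above the high-water mark, bypassing the buffer cache), and the parallel hints only take effect for the insert once parallel DML has been enabled in the session.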

For understanding how to speed up that query, we would need more information, like a query plan, a list of all the indexes on the table, and your init.ora file.

If you don't know how to get a query plan, you can use SQL*Plus' autotrace facility ("set autotrace traceonly explain") before running your query.
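For example, from SQL*Plus (the SELECT here is just a placeholder for the query feeding your INSERT; with TRACEONLY EXPLAIN the plan is displayed without actually running the query):

    SET AUTOTRACE TRACEONLY EXPLAIN

    SELECT ...
    FROM   ...;

    SET AUTOTRACE OFF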

The only advice I can give without this information is the general advice I give people who have slow-running queries: Oracle's SQL optimizer needs statistics to understand how big a table is. If you haven't collected statistics - through the Oracle Enterprise Manager GUI, or TOAD, or from the SQL*Plus prompt [i.e. DBMS_STATS.GATHER_TABLE_STATS( ... ) ] - you probably should do so.
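For example, from SQL*Plus (the schema and table names are placeholders; cascade => TRUE also gathers statistics on the table's indexes):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname          => 'YOUR_SCHEMA',
        tabname          => 'YOUR_TABLE',
        estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
        cascade          => TRUE);
    END;
    /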

Cheers
Stu

