
Re: Loading 300 million records

From: Anton Dischner <nomail_at_nospam.org>
Date: Fri, 10 Jun 2005 12:34:50 +0200
Message-ID: <100620051234507552%nomail@nospam.org>


Hi Alex,

You are using Oracle 10g, right? Try this in Enterprise Manager: Administration, Advisor Central, SQL Tuning Advisor, Top SQL, pick the SQL ID, then run the SQL Tuning Advisor.

If it finds a better plan you can implement it with a mouse click and rerun your SQL statement with the better plan.
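
If you prefer a script to the EM screens, roughly the same thing should be possible with the DBMS_SQLTUNE package; a minimal sketch (the sql_id and the task name below are only placeholders):

  DECLARE
    l_task VARCHAR2(64);
  BEGIN
    -- create a tuning task for the statement identified by its sql_id
    l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                sql_id    => 'your_sql_id',      -- placeholder
                task_name => 'tune_300m_load');  -- placeholder
    -- run the advisor
    DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
  END;
  /

  -- read the findings and recommendations (returns a CLOB; SET LONG in SQL*Plus)
  SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('tune_300m_load') FROM dual;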

Worked great for me.

kind regards,

Toni

In article <1118072447.344657.22030_at_g49g2000cwa.googlegroups.com>, AlexK <alexk84_at_hotmail.com> wrote:

> Hi All,
>
> I'm trying to improve the performance of a procedure that loads
> approximately 300 million records into a table. Currently the process
> uses 'INSERT /*+ APPEND */ INTO ... SELECT' and takes about 10 hours
> to run on 10g. The select joins about 5 tables, most of which are
> small except the driving table, which has about 300M records in it. I
> believe the indexes are good, as the explain plan shows only the main
> table getting a full table scan. From what I've read online,
> 'INSERT INTO ... SELECT' is the fastest and most efficient way to
> load data from one table to another. Unfortunately, the only examples
> I've seen that mention the number of records inserted deal with about
> 1 million records at most. Is this still the best approach to take
> when loading 300M records? Would a bulk collect or something else be
> better since so many records are being processed?
> Any information would be greatly appreciated. Thanks in advance.
>
> Alex
>
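
For completeness, the direct-path pattern you describe usually looks something like the sketch below. Table and column names are made up, and the PARALLEL hints are just one variation worth testing, not something you mentioned:

  ALTER SESSION ENABLE PARALLEL DML;

  INSERT /*+ APPEND PARALLEL(t, 8) */ INTO target_table t
  SELECT /*+ PARALLEL(d, 8) */
         d.id, d.col1, s.col2
  FROM   driving_table d
  JOIN   small_table   s ON s.id = d.small_id;

  -- rows loaded via direct path are not visible to the session until commit
  COMMIT;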
Received on Fri Jun 10 2005 - 05:34:50 CDT
