Oracle FAQ | Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> c.d.o.server -> Loading 300 million records
Hi All,
I'm trying to improve the performance of a procedure that loads approximately 300 million records into a table. The process currently uses a direct-path INSERT /*+ APPEND */ ... SELECT and takes about 10 hours to run on 10g. The SELECT joins about five tables, all of which are small except the driving table, which itself holds about 300M records. I believe the indexes are adequate, as the explain plan shows a full table scan only on the main table.

From what I've read online, INSERT ... SELECT is the fastest and most efficient way to load data from one table into another. Unfortunately, the examples I've found that mention record counts deal with about 1 million records at most. Is this still the best approach when loading 300M records, or would a BULK COLLECT or something else be better given how many records are being processed?

Any information would be greatly appreciated. Thanks in advance.
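For reference, the direct-path pattern described above can be sketched roughly as follows. This is only an illustration under assumptions: the table names (big_target, big_source, dim1), columns, and the degree of parallelism 8 are hypothetical placeholders, not taken from the original post.

```sql
-- Sketch only: big_target, big_source, dim1 and all columns are
-- hypothetical names for illustration.

-- Parallel DML is disabled by default and must be enabled per session
-- before a parallel direct-path insert can run in parallel.
ALTER SESSION ENABLE PARALLEL DML;

-- APPEND requests a direct-path load: rows are written above the table's
-- high-water mark, bypassing the buffer cache and conventional-path overhead.
INSERT /*+ APPEND PARALLEL(t, 8) */ INTO big_target t
SELECT /*+ PARALLEL(s, 8) */
       s.id, s.col1, d.descr
FROM   big_source s
JOIN   dim1 d ON d.id = s.dim1_id;

-- A direct-path insert must be committed before the session can
-- query or modify the target table again.
COMMIT;
```

For loads of this size, people commonly also consider marking the target table NOLOGGING (with the usual recoverability caveats) and dropping or disabling indexes on the target and rebuilding them after the load, since index maintenance during the insert can dominate the run time. A PL/SQL BULK COLLECT / FORALL loop, by contrast, processes the rows through the PL/SQL engine in batches and is generally not faster than a single set-based INSERT ... SELECT for a plain table-to-table copy.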
Alex
Received on Mon Jun 06 2005 - 10:40:47 CDT