Inserting million records from remote database [message #9972]
Thu, 18 December 2003 00:35
Ajendra Naraya Samal
Messages: 26 Registered: December 2003
Junior Member
Hi,
I am accessing about 2 million records from a remote database with the command
Insert into target Select * from source@dblink
It takes around 2 hours to fetch the data. Is there any way to improve performance?
Will performance improve if I fetch the first 20000 records in a PL/SQL loop and then issue a COMMIT?
Alternatively, I can fetch the first 20000 rows using ROWNUM and commit after that. Would that be a better way?
Is there any other way in Oracle to increase the performance?
Thanks and regards,
Ajendra
ajendrans@planetasia.com
Re: Inserting million records from remote database [message #9977 is a reply to message #9972]
Thu, 18 December 2003 01:54
Thiru
Messages: 1089 Registered: May 2002
Senior Member
Don't have any indexes on the target table until the load is complete.
Set the target table to NOLOGGING and do Insert /*+ APPEND */ into target ... if possible.
Do it as 'Create Table as Select' using PQ servers and NOLOGGING, if possible.
If not, try the SQL*Plus COPY command, which can do array fetches (and optionally copycommits if you don't have the rollback segment space to do 2 million rows at a time).
Don't use PL/SQL unnecessarily and don't commit unnecessarily.
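A minimal sketch of the suggestions above (table and link names follow the original post; degree of parallelism is a placeholder you would tune to your system):

```sql
-- Stop redo generation for the load. Assumes you accept that the
-- load is then unrecoverable until the next backup.
ALTER TABLE target NOLOGGING;

-- Direct-path insert: APPEND writes above the high-water mark,
-- bypassing the buffer cache and conventional undo overhead.
INSERT /*+ APPEND */ INTO target
SELECT * FROM source@dblink;
COMMIT;

-- Alternatively, if CREATE TABLE is permitted: build the table in
-- one shot with parallel query slaves and no logging.
CREATE TABLE target
NOLOGGING PARALLEL 4
AS SELECT * FROM source@dblink;
```

Remember to rebuild any indexes (and restore LOGGING) after the load completes.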
-Thiru
Re: Inserting million records from remote database [message #9978 is a reply to message #9977]
Thu, 18 December 2003 07:28
Ajendra Naraya Samal
Messages: 26 Registered: December 2003
Junior Member
Hi, could you please explain a bit about SQL*Plus COPY? Also, I am not allowed to use the CREATE TABLE command. Would a BULK COLLECT fetch followed by a FORALL statement be faster? I am using 9i. The table has 30 fields, mostly numeric. I also need to insert into another 2 tables, each having around 1 million records.
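For reference, the BULK COLLECT/FORALL approach being asked about would look roughly like this (a sketch only; table names follow the earlier posts, and the record bind in the FORALL insert assumes 9i Release 2 — on earlier releases you would need one collection per column):

```sql
DECLARE
  CURSOR c IS SELECT * FROM source@dblink;
  TYPE t_tab IS TABLE OF c%ROWTYPE;
  l_rows t_tab;
BEGIN
  OPEN c;
  LOOP
    -- Pull 20000 rows per round trip, as suggested above.
    FETCH c BULK COLLECT INTO l_rows LIMIT 20000;
    EXIT WHEN l_rows.COUNT = 0;
    -- One bulk bind per batch instead of row-by-row inserts.
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO target VALUES l_rows(i);
    COMMIT;  -- interim commit; drop this if undo space allows
  END LOOP;
  CLOSE c;
END;
/
```

Even so, a single INSERT /*+ APPEND */ ... SELECT usually beats this, because the rows never pass through the PL/SQL engine at all.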
Re: Inserting million records from remote database [message #9980 is a reply to message #9978]
Thu, 18 December 2003 08:21
Thiru
Messages: 1089 Registered: May 2002
Senior Member
SQL*Plus COPY demo:
SQL> set arraysize 5000
SQL> set copycommit 10
SQL> -- will commit every 50000 rows ( you should increase this to as high as possible)
SQL> drop table t;
Table dropped.
SQL> create table t as select * from all_objects where 1=2;
Table created.
SQL> COPY from test/****@dev06 -
> TO thiru/****@dev08 -
> INSERT T -
> USING -
> Select * from t;
Array fetch/bind size is 5000. (arraysize is 5000)
Will commit after every 10 array binds. (copycommit is 10)
Maximum long size is 80. (long is 80)
27011 rows selected from test@dev06.
27011 rows inserted into T.
27011 rows committed into T at thiru@dev08.
SQL> select count(*) from t;
COUNT(*)
----------
27011
For more details, please refer to the documentation.
A direct-path insert (i.e. Insert /*+ APPEND */) is very likely to outperform BULK COLLECT/FORALL.
-Thiru