RE: Parallelizing PL/SQL inserts

From: Connor McDonald <mcdonald.connor_at_gmail.com>
Date: Wed, 23 Jan 2008 22:06:51 +0900
Message-ID: <004e01c85dc0$d4353560$0b00a8c0@dell600m>

-----Original Message-----
From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Ken Naim
Sent: Wednesday, 23 January 2008 9:04 AM
To: oracle-l_at_freelists.org
Subject: Parallelizing PL/SQL inserts

I have a process that is currently running at a rate of 4 million inserts per hour. It selects from a cursor that returns 65 million records, bulk collects 500 rows at a time, and FORALL inserts them into another table with 1.5 billion rows. I cannot do an insert-as-select because the record may already exist. I use the SAVE EXCEPTIONS clause to allow the code to continue processing, but I don't care which records threw the unique constraint violation. The 1.2 billion row table is partitioned but the data can go into any partition; the 65 million row table is not partitioned.
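Roughly, the shape of the loop is something like this (table, column and cursor names here are placeholders, not the real code):

  DECLARE
    CURSOR c_src IS
      SELECT * FROM source_65m;                   -- placeholder for the 65 million row table
    TYPE t_rows IS TABLE OF source_65m%ROWTYPE;
    l_rows      t_rows;
    bulk_errors EXCEPTION;
    PRAGMA EXCEPTION_INIT(bulk_errors, -24381);   -- ORA-24381: error(s) in array DML
  BEGIN
    OPEN c_src;
    LOOP
      FETCH c_src BULK COLLECT INTO l_rows LIMIT 500;
      EXIT WHEN l_rows.COUNT = 0;
      BEGIN
        FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
          INSERT INTO target_1_5b VALUES l_rows(i);   -- placeholder for the 1.5 billion row table
      EXCEPTION
        WHEN bulk_errors THEN
          NULL;   -- the failed rows are the duplicates (ORA-00001); I ignore them
      END;
      COMMIT;
    END LOOP;
    CLOSE c_src;
  END;
  /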

<snip>

I am open to any suggestions, but my main question is how I can parallelize the process into multiple (24-48) threads.
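One common way to split such a loop is to add a hash-bucket predicate to the driving cursor so that each of N sessions works on a disjoint slice, then start the sessions as jobs. A minimal sketch, again with placeholder names (load_slice would be a wrapper procedure around the loop above):

  -- Each session runs the same block with a different bucket value (0 .. 31 for 32 threads):
  --   CURSOR c_src IS
  --     SELECT * FROM source_65m s
  --     WHERE  ORA_HASH(s.pk_col, 31) = :bucket;
  --
  -- The sessions themselves could be started as jobs:
  DECLARE
    l_job BINARY_INTEGER;
  BEGIN
    FOR b IN 0 .. 31 LOOP
      DBMS_JOB.submit(job  => l_job,
                      what => 'load_slice(' || b || ');');   -- load_slice is a hypothetical wrapper procedure
    END LOOP;
    COMMIT;   -- the jobs start only after the submitting session commits
  END;
  /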

Thanks,
Ken

--
http://www.freelists.org/webpage/oracle-l


Some things to possibly consider:


- Use SQL and the LOG ERRORS clause to catch your exceptions
- Why not "insert-select-where-not-exists" to avoid your duplicates? (a rough sketch of both ideas follows below)
- Are your indexes global or local? If local, you might get some nice
  parallelism by selecting the rows from your 65 million row table based on
  the partition they will end up in, and then hammering against multiple
  partitions in parallel.

hth
Connor
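By way of illustration, the first two points might be combined into a single statement along these lines (object names are placeholders, and err$_target_1_5b is assumed to have been created beforehand with DBMS_ERRLOG.CREATE_ERROR_LOG):

  INSERT INTO target_1_5b
  SELECT s.*
  FROM   source_65m s
  WHERE  NOT EXISTS (SELECT NULL
                     FROM   target_1_5b t
                     WHERE  t.pk_col = s.pk_col)            -- skip rows that are already there
  LOG ERRORS INTO err$_target_1_5b ('bulk load')            -- anything else that fails is logged, not fatal
  REJECT LIMIT UNLIMITED;

Combined with a partition-by-partition (or hash-bucket) split of the source rows, several copies of this statement could run concurrently.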
Received on Wed Jan 23 2008 - 07:06:51 CST
