Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 

Home -> Community -> Usenet -> c.d.o.misc -> Re: Memory usage of Bulk Inserts

Re: Memory usage of Bulk Inserts

From: Connor McDonald <connor_mcdonald_at_yahoo.com>
Date: Thu, 05 Dec 2002 20:59:04 +0000
Message-ID: <3DEFBE18.6A27@yahoo.com>


Yash wrote:
>
> Hi,
>
> In the application I am developing, I need to perform a bulk insert into a
> table by selecting from another table. I anticipate millions of rows in the
> source table.
>
> My question is, will the bulk insert statement try to gather all rows from
> the source in memory and then perform the insert?
>
> I want to make sure the operation does not fail due to lack of memory or
> lack of rollback segment space.
>
> Is there an option, or hint that I can give in the INSERT command to avoid
> running out of memory?
> Is there a way I can commit after INSERTing every 10000 records?
>
> Thanks in advance

The BULK COLLECT clause has a LIMIT option, and once collected, a FORALL will obviously only process the number of rows in the array.

It's unlikely you'll get much benefit from an array size greater than (say) 500, but some quick benchmarking will find the right size for your environment.
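A minimal sketch of that pattern, committing after each batch as the poster asked. Table and column names (src, tgt, id) are illustrative, not from the post:

```sql
DECLARE
  CURSOR c IS SELECT id FROM src;
  TYPE t_ids IS TABLE OF src.id%TYPE INDEX BY PLS_INTEGER;
  l_ids t_ids;
BEGIN
  OPEN c;
  LOOP
    -- fetch at most 500 rows into memory per iteration
    FETCH c BULK COLLECT INTO l_ids LIMIT 500;
    EXIT WHEN l_ids.COUNT = 0;
    -- array-bind insert of just this batch
    FORALL i IN 1 .. l_ids.COUNT
      INSERT INTO tgt (id) VALUES (l_ids(i));
    COMMIT;  -- caps undo/rollback usage per batch
  END LOOP;
  CLOSE c;
END;
/
```

Note that committing inside the loop makes the load non-atomic: a failure partway through leaves the target partially populated, so a restartable design needs some way to skip rows already copied.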

If it's a one-off operation, create a massive rollback segment and let it go in one operation. If it's not a one-off, then you need a rollback segment of that size anyway, so just create it and keep it.
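For the one-off case, the whole load is then a single statement. A direct-path insert (the APPEND hint) can also reduce undo generation, though it locks the table and loads above the high-water mark, so treat it as an option to test rather than a default:

```sql
-- single atomic load; needs rollback/undo sized for the full row count
INSERT /*+ APPEND */ INTO tgt (id)
SELECT id FROM src;
COMMIT;
```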

hth
connor

-- 
==============================
Connor McDonald

http://www.oracledba.co.uk

"Some days you're the pigeon, some days you're the statue..."
Received on Thu Dec 05 2002 - 14:59:04 CST

