

Re: Bulk (array) inserts - performances

From: Martic Zoran <>
Date: Fri, 19 Nov 2004 02:40:31 -0800 (PST)
Message-ID: <>


Thanks for the response.

With bulk inserts something different is going on. Probably I should think about it more :)

The main thing is that you get 6 times more redo (in my case) with failing bulk inserts than with failing simple inserts (commits are after every 100 rows in both cases).

Also, failing simple inserts generate less redo than non-failing simple inserts. This is probably OK, assuming no indexes are updated.

With bulk inserts something else is happening. My understanding is that it updates or prepares everything and then checks the constraints at the end, but I am not sure about the logic behind it.

Maybe there is no logic, and it is just one of the Oracle bugs with bulk/array operations :)



   Besides the name of the constraint, which has to be fetched from the dictionary, as you mention, my own understanding is that the violation of uniqueness can only be checked when inserting into the index. What do you store in indexes? Rowids. How do you get the rowid? Well, you know it after having inserted into the table. So I believe the sequence to be:

   1. insert the row into the table,
   2. try to insert the rowid into the unique index,
   3. detect the duplicate key,
   4. undo the table insert,

which of course means a lot of work, a lot of writes to the database, and a lot of undo generation.
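That sequence can be sketched as a toy model (plain Python; the names and the "redo" counter are illustrative only and are not Oracle internals). The point it shows: a failed insert still writes the row first and only discovers the duplicate at the index, so it pays for the insert and then again for undoing it, even though no row survives.

```python
# Toy model of the sequence above: table insert first, uniqueness is
# only discovered when the index entry is added, then the row is undone.

def insert_row(table, index, key, log):
    """Simulate one insert; log['redo'] counts simulated log writes."""
    table.append(key)            # 1. insert the row into the table
    log["redo"] += 1             #    the row change is logged
    if key in index:             # 2./3. index insert finds a duplicate
        table.pop()              # 4. undo the table insert
        log["redo"] += 1         #    the rollback is logged too
        return False
    index.add(key)               # index entry written for the new rowid
    log["redo"] += 1
    return True

table, index = [], set()

ok = {"redo": 0}
for k in range(100):
    insert_row(table, index, k, ok)    # 100 fresh keys, all succeed

dup = {"redo": 0}
for k in range(100):
    insert_row(table, index, k, dup)   # same keys again, all fail

print(ok["redo"], dup["redo"])         # → 200 200
```

Even though the second loop inserts nothing, it generates as many simulated log writes as the first: every failed row is written and then undone.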


Stephane Faroult

RoughSea Ltd

On Fri, 19 Nov 2004 01:23, Martic Zoran <> sent:

Hi all,

I have a few questions about bulk (array) inserts. The questions are very simple and based on a real scenario:

  1. Do bulk inserts from C OCI, 50 times with 100 inserts per batch, with no duplicates.
  2. Do the same bulk inserts from C OCI, 50 times with 100 inserts per batch. All inserts fail because of the PK and UK I have on the table.

Both 1. and 2. are done on the same connection one after the other.
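The two runs can be reproduced in miniature with SQLite as a stand-in (Python's stdlib sqlite3, not Oracle or C OCI, so the redo mechanics and error reporting differ; this only shows the shape of the test): one batch of fresh keys inserted with executemany succeeds, and re-running the same batch fails on the primary key.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

rows = [(i, "x") for i in range(100)]

# Run 1: one batch of 100 unique rows (one of the 50 batches in the test).
con.executemany("INSERT INTO t (id, val) VALUES (?, ?)", rows)
con.commit()

# Run 2: the same batch again; every row violates the primary key.
try:
    con.executemany("INSERT INTO t (id, val) VALUES (?, ?)", rows)
    con.commit()
except sqlite3.IntegrityError as e:
    con.rollback()
    print("second run failed:", e)

print(con.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # still 100 rows
```

The failed run leaves the row count unchanged, but the database still had to attempt the inserts before rolling them back.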

The first run took less than a second.
The second took around 6 seconds.
It is easy to see this in the Oracle database CPU statistics and other stats below.
If the difference were not that big and the customers had not complained, I would never have noticed such a big discrepancy. It does not happen often, but when it does, you do not want the system slowing down that much.

The questions are:
1) Why are failing bulk inserts that much slower than bulk inserts without errors? Do failing bulk inserts tend to have characteristics similar to individual failing inserts?
2) Why is the redo generated in the second run so much bigger? Is it the main reason for the slowness (I know there are a few reasons, like sending the constraint name and error message, ...)?


Zoran Martic                 

Received on Fri Nov 19 2004 - 04:40:19 CST
