Re: Huge INSERT and unfriendly ROLLBACK
In article <727gfa$o4h$1_at_nnrp1.dejanews.com>,
buzkoff_at_my-dejanews.com wrote:
> I'd like to get some opinions from people about a problem already raised
> in this group, though with an unsatisfying answer.
>
> I run a huge INSERT, approx. 500,000 records. The reason I do it is the speed
> of the operation. If I use PL/SQL with commits after each 10,000 records, as
> was suggested here, it takes ages.
>
> Now in more than 60% of cases, the transaction just hangs, doing "db file
> sequential read" waits, or gives me the "unable to extend rollback segment"
> error. My RBS is 140M, and when I look at all of the segments they rarely
> grow beyond 40M.
>
> So what might be the solution?
>
What format is your data in that you need to do 500,000 records at a time? You could try running the 10,000-record batches in parallel, which may help if you have the CPU power for it. Another option is to bypass the rollback segments entirely with a SQL*Loader direct-path load (DIRECT=TRUE). BUT BE CAREFUL! I don't recall which version fixes it, but there is a bug that can corrupt your data if a direct-path load is interrupted. (It was posted in this newsgroup within the last week or so.)
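As a rough sketch of what a direct-path load looks like (the file, table, and column names here are made up; adjust them to your own schema):

```
-- load.ctl: hypothetical SQL*Loader control file
LOAD DATA
INFILE 'big_load.dat'
APPEND
INTO TABLE my_big_table
FIELDS TERMINATED BY ','
(id, name, amount)
```

Run it with the DIRECT option so SQL*Loader writes formatted blocks above the high-water mark instead of doing conventional inserts through the buffer cache and rollback segments:

    sqlldr userid=scott/tiger control=load.ctl direct=true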
You have 500,000 records. What's the average size of a record? There is per-row overhead in Oracle, so if the bulk of the data is close to your RBS limit, you will run out. Also, are there other transactions running at the same time? They consume rollback space too.
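One way to see how much rollback the load actually generates, rather than guessing, is to query V$TRANSACTION from another session while the insert is in flight (a sketch; USED_UBLK is in database blocks, so multiply by your block size to get bytes):

```
-- undo blocks consumed by each active transaction
SELECT s.sid, s.username, t.used_ublk
  FROM v$transaction t, v$session s
 WHERE t.addr = s.taddr;
```

If USED_UBLK times your block size approaches 140M, the "unable to extend rollback segment" error is exactly what you'd expect.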
What timeframe are we talking about? How long is too long ("it took ages")?
A SQL*Loader direct-path load is the only alternative I can suggest. But if your data is coming out of a PL/SQL procedure, it may not be an option for you.
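For reference, the commit-every-10,000-rows pattern the original poster tried usually looks something like this (the staging/target table and column names are hypothetical):

```
DECLARE
  v_rows PLS_INTEGER := 0;
BEGIN
  FOR rec IN (SELECT id, name, amount FROM staging_table) LOOP
    INSERT INTO target_table (id, name, amount)
    VALUES (rec.id, rec.name, rec.amount);
    v_rows := v_rows + 1;
    IF MOD(v_rows, 10000) = 0 THEN
      COMMIT;  -- keeps undo per transaction small, at the cost of speed
    END IF;
  END LOOP;
  COMMIT;      -- pick up the final partial batch
END;
/
```

One caveat: committing inside the cursor loop means the driving query keeps fetching across commits, which can raise ORA-01555 "snapshot too old" on a busy system, so this pattern trades one rollback problem for a potential other.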
--
Ed Prochak
Magic Interface, Ltd.
440-498-3702

Received on Mon Nov 09 1998 - 00:00:00 CST