committing the records with limit [message #429753] Thu, 05 November 2009 12:01
nagaraju.ch
Messages: 98
Registered: July 2007
Location: bangalore
Member
Hi experts,

We are processing more than 10 crore (100+ million) records; some of them will be updated based on a condition and the rest will be inserted into the table.
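
(Roughly, the script is a single conditional update-or-insert, i.e. something like the MERGE below; the table and column names are simplified for illustration.)

-- Illustration only: orders / orders_stage and their columns are made-up names
MERGE INTO orders o
USING orders_stage s
   ON (o.order_id = s.order_id)
WHEN MATCHED THEN
  UPDATE SET o.status     = s.status,
             o.updated_on = SYSDATE
WHEN NOT MATCHED THEN
  INSERT (order_id, status, updated_on)
  VALUES (s.order_id, s.status, SYSDATE);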

Due to the huge number of records, our script is failing with an "unable to extend temp segment" error (ORA-01652).

Can anyone tell me how to avoid this by committing after each batch of records?
Re: committing the records with limit [message #429754 is a reply to message #429753] Thu, 05 November 2009 12:09
BlackSwan
Messages: 25032
Registered: January 2009
Location: SoCal
Senior Member
>Can anyone tell me how to avoid this by committing after each batch of records?
BAD, Bad, bad "solution"!

It increases the odds of an ORA-01555 (snapshot too old) error.

Just increase the size of the TEMP tablespace.
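
If disk space can be found, adding it is a one-liner for the DBA; a sketch, assuming the temporary tablespace is called TEMP (the file path and sizes below are examples and need adjusting):

-- example only: the tempfile path and sizes depend on your environment
ALTER TABLESPACE temp
  ADD TEMPFILE '/u01/oradata/ORCL/temp02.dbf'
  SIZE 4G AUTOEXTEND ON NEXT 512M MAXSIZE 32G;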
Re: committing the records with limit [message #429755 is a reply to message #429754] Thu, 05 November 2009 12:17
nagaraju.ch
Messages: 98
Registered: July 2007
Location: bangalore
Member
There is not enough space on the server. The server team has thrown up their hands on this issue and is asking us to commit after every batch of records processed.
Re: committing the records with limit [message #429757 is a reply to message #429755] Thu, 05 November 2009 12:30
BlackSwan
Messages: 25032
Registered: January 2009
Location: SoCal
Senior Member
ROCK<-->you<-->HARD_SPOT

row by row is SLOW BY SLOW.

You need to help us by following the Posting Guidelines as stated below.
http://www.orafaq.com/forum/t/88153/0/
Go to the URL above and click the "Posting Guidelines" link
Go to the section labeled "Practice" & do as directed

It appears you get to write some custom PL/SQL to implement the requested "workaround".

Good Luck!


Re: committing the records with limit [message #429758 is a reply to message #429757] Thu, 05 November 2009 12:44
ThomasG
Messages: 3189
Registered: April 2005
Location: Heilbronn, Germany
Senior Member
Actually I think neither approach is inherently evil.

If your requirement is "I want these records in, I want them in fast, nobody else is working with the database, and I have enough disk space", then the one-statement insert/update is the way to go.

But if, for example, you have to update big master data tables while people are working with them, and the update will run for a few hours, then you HAVE to commit row by row, or in small groups of rows, so that each row is locked only for a fraction of a second and normal users can keep working instead of being unable to lock the rows they need.

How to do it exactly would, of course, depend on where the data currently is, in what form, and where and how it has to be applied.
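
For illustration only, a minimal sketch of the batched approach (the cursor, tables and columns here are made-up names, and the batch size and matching logic would have to fit your own requirements):

DECLARE
  CURSOR c_src IS
    SELECT order_id, status FROM orders_stage;

  TYPE t_ids    IS TABLE OF orders_stage.order_id%TYPE;
  TYPE t_status IS TABLE OF orders_stage.status%TYPE;
  l_ids    t_ids;
  l_status t_status;
BEGIN
  OPEN c_src;
  LOOP
    -- work on 10,000 rows at a time instead of one huge transaction
    FETCH c_src BULK COLLECT INTO l_ids, l_status LIMIT 10000;
    EXIT WHEN l_ids.COUNT = 0;

    FORALL i IN 1 .. l_ids.COUNT
      UPDATE orders
         SET status   = l_status(i)
       WHERE order_id = l_ids(i);

    -- the insert of rows that matched nothing would follow the same
    -- FORALL pattern here (use SQL%BULK_ROWCOUNT to find them)

    COMMIT;   -- one commit per batch, not per row
  END LOOP;
  CLOSE c_src;
END;
/

Note that fetching across commits like this is exactly what increases the ORA-01555 risk mentioned earlier in the thread.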
Re: committing the records with limit [message #429760 is a reply to message #429757] Thu, 05 November 2009 12:53
andrew again
Messages: 2577
Registered: March 2000
Senior Member
--> Performing commits at intervals during a single logical transaction is bad.
--> Performing commits after n rows is fine as long as the integrity of your data isn't compromised.

Example: If you are loading Orders and Order-line-items, then the parent/child relationships need to be maintained, so if you want to insert 10,000 Orders and then commit, you must aim to insert all the Order-line-items for those 10,000 Orders before you commit. Don't insert all the Orders, committing at intervals, and then start inserting the Order-line-items, committing them at intervals too.
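
For what it's worth, a minimal sketch of that idea, assuming hypothetical staging tables (orders_stage, order_items_stage) that carry a batch_id column used to group parent and child rows into the same commit:

BEGIN
  FOR b IN (SELECT DISTINCT batch_id FROM orders_stage ORDER BY batch_id) LOOP
    -- parent rows for this batch
    INSERT INTO orders (order_id, status)
      SELECT order_id, status
        FROM orders_stage
       WHERE batch_id = b.batch_id;

    -- child rows for the same Orders, inserted before the commit
    INSERT INTO order_items (order_id, item_no, qty)
      SELECT i.order_id, i.item_no, i.qty
        FROM order_items_stage i
       WHERE i.order_id IN (SELECT order_id
                              FROM orders_stage
                             WHERE batch_id = b.batch_id);

    COMMIT;   -- Orders and their Order-line-items always land in the same commit
  END LOOP;
END;
/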

Even SQL*Loader defaults to committing at intervals - it's the correct way to load large amounts of data AS LONG AS YOU DON'T VIOLATE the integrity of the data.

ETL tools do exactly the same...