Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Intermittent Commits

From: Ryan Gaffuri <rgaffuri_at_cox.net>
Date: 7 Jul 2003 11:00:56 -0700
Message-ID: <1efdad5b.0307071000.652d9056@posting.google.com>


mikeeria_at_interia.pl (Michael) wrote in message news:<875e251b.0307070637.620d8c49_at_posting.google.com>...
> Hi!
>
>
> I have a query which selects from several tables and inserts into a
> table X. There are two ways of writing this query: (A) fast but risky,
> (B) slow but certain.
>
>
> (A) Writing the whole thing as one bulk INSERT ... SELECT statement is
> the fastest way, but we commit only once at the end. If something goes
> wrong, nothing is stored in the new table, and in addition we have to
> wait for the database to roll back its work.
>
> (B) Performing the same INSERT in a cursor loop and COMMITting after
> each row is slower but safer, because if something stops the query, it
> is possible to restart from where we left off.
>
> What I am looking for is a solution in between that balances speed and
> risk, for example committing after every chunk of records. The reason
> is not space but time: while this query runs the database must be taken
> offline, and the downtime should be minimal. Introducing more commits
> makes it much easier to forecast how long the INSERT will take and
> gives better feedback on what is happening. Any ideas??
>
> I am using Oracle 8i and the INSERT varies from 100000 to 500000
> records.
>
>
> TIA
> Michal

500000 inserts is not THAT many. How fast is your machine? How long does it take to do this insert?

Here is a 'possible' middle ground. It is really only practical if you are inserting a limited number of columns.

It's partially explained on asktom.oracle.com.

Bulk collect the records into a series of PL/SQL tables (in 8i, I think you can only bulk collect into one-dimensional arrays... I keep forgetting when they changed it). This is where the slowdown comes from: how many PL/SQL tables do you need?

Use the LIMIT clause so you fetch, say, 2000 rows at a time.

Then do a FORALL insert and commit.

BULK COLLECT INTO is an array fetch, and FORALL lets you insert as an array.

Then you wrap it in a loop and keep running until the query is complete. See that website for details.
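A sketch of the pattern described above. The cursor query, the target table X, and the column names (col1, col2) are made up for illustration; in 8i you need one PL/SQL table per column, which is why lots of columns make this painful.

```sql
DECLARE
  -- stand-in for your multi-table SELECT
  CURSOR src_cur IS
    SELECT col1, col2 FROM source_table;

  -- one collection of scalars per column (8i-style)
  TYPE col1_tab IS TABLE OF x.col1%TYPE INDEX BY BINARY_INTEGER;
  TYPE col2_tab IS TABLE OF x.col2%TYPE INDEX BY BINARY_INTEGER;

  v_col1 col1_tab;
  v_col2 col2_tab;
BEGIN
  OPEN src_cur;
  LOOP
    -- array fetch: up to 2000 rows per round trip
    FETCH src_cur BULK COLLECT INTO v_col1, v_col2 LIMIT 2000;
    EXIT WHEN v_col1.COUNT = 0;

    -- array insert of the whole batch
    FORALL i IN 1 .. v_col1.COUNT
      INSERT INTO x (col1, col2) VALUES (v_col1(i), v_col2(i));

    -- commit per batch: a failure costs at most one batch of rework
    COMMIT;
  END LOOP;
  CLOSE src_cur;
END;
/
```

If a run dies partway through, you restart it; just make sure the cursor query (or the insert) can skip rows already committed into X, otherwise a restart will insert duplicates.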

If you have a lot of columns... it's not worth it.

500k records isn't that many... I'd just run it as a single INSERT ... SELECT. How often does your server go down? Are you on Windows and have to do periodic reboots?
Received on Mon Jul 07 2003 - 13:00:56 CDT

