
Re: PL/SQL Question

From: Keith Jamieson <jamiesonk_at_phoenix.ie>
Date: Tue, 4 Apr 2000 13:48:41 +0100
Message-ID: <8ccofi$gvp$1@kermit.esat.net>


Since committing in a cursor for loop is bad practice, what do you recommend as an alternative?

I have procedures in an OLTP application which have to copy data from the OLTP schema to a reporting schema, and this is done nightly.

In some instances a large number of rows may need to be copied, so, being pragmatic, I commit after every 1000 inserts/updates.

Interestingly enough, this did not prevent me from running out of rollback segment space when doing a distributed copy, so the calling procedure that performs the distributed copy now specifies the name of a large rollback segment created specifically for this purpose.

I don't particularly like any solution where I have to hardcode something, but I don't see any way around this at present.
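
For illustration, the shape of what I do is roughly this (OLTP_SRC, RPT_DEST, OLTP_LINK and BIG_RBS are made-up names):

    -- Sketch only: table, link and segment names are illustrative.
    declare
        cnt pls_integer := 0;
    begin
        for r in ( select id, payload from oltp_src ) loop
            insert into rpt_dest ( id, payload ) values ( r.id, r.payload );
            cnt := cnt + 1;
            if mod( cnt, 1000 ) = 0 then
                commit;                 -- commit after every 1000 inserts
            end if;
        end loop;
        commit;                         -- pick up the remainder
    end;
    /

    -- For the distributed copy, the caller names the large rollback
    -- segment as the first statement of its (single) transaction:
    begin
        dbms_transaction.use_rollback_segment( 'BIG_RBS' );
        insert into rpt_dest ( id, payload )
            select id, payload from oltp_src@oltp_link;
        commit;
    end;
    /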

Thomas J. Kyte wrote in message <8c4t87$eei$1_at_nnrp1.deja.com>...
>In article <8c4dvu$vl6$1_at_nnrp1.deja.com>,
>amerar_at_unsu.com wrote:
>>
>>
>> Hi there,
>>
>> I have a cursor FOR loop that will update over 1 million records. The
>cursor
>> is defined with the FOR UPDATE clause. I have a counter that
>commits after
>> every 10000, however when running the program I got this error:
>>
>> ORA-01002: Fetch out of sequence
>>
>> What does this mean? Can't I commit within a cursor FOR loop?
>>
>In general -- you can commit in a cursor for loop (it's a bad practice
>and can lead to other errors -- in particular ORA-1555 if you are
>updating the table you are selecting from).
>In particular though -- a commit closes all FOR UPDATE queries.
>A commit releases the locks you have acquired. A select for update gets
>lots of locks. They are released when you commit -- hence the query
>must close. So, you cannot commit in a cursor for loop that has a for
>update.
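
For what it's worth, the effect is easy to reproduce. Assuming a scratch table T with at least two rows, the fetch following the commit fails:

    begin
        for x in ( select * from t FOR UPDATE ) loop
            commit;     -- releases the locks and closes the FOR UPDATE
                        -- query; the loop's next fetch then raises
                        -- ORA-01002: fetch out of sequence
        end loop;
    end;
    /
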
>Selecting 1,000,000 rows for update is *extremely* expensive. The
>select for update will take a long time to open. Consider locking the
>table, or just locking the rows you will actually process in the
>update. For example, let's say you are going to do 1 million out of
>5 million records. There is a "needs_processing" field in the record
>that you set to Y before processing, and that is set back to NULL
>after processing. You might use logic such as:
>ops$tkyte_at_8i> REM drop table t;
>ops$tkyte_at_8i> REM create table t as
>                     select a.*, 'Y' needs_processing from all_objects a;
>ops$tkyte_at_8i> REM create index t_idx on t(needs_processing);
>ops$tkyte_at_8i> set serveroutput on
>ops$tkyte_at_8i> declare
>    cursor c1 is select *
>                   from t
>                  where needs_processing = 'Y'
>                    and rownum < 1000
>                    FOR UPDATE;
>
>    keep_going boolean default false;
>    cnt        number  default 0;
>begin
>    loop
>        keep_going := false;
>
>        for x in c1 loop
>            update t set needs_processing = null where current of c1;
>            cnt := cnt + sql%rowcount;
>            keep_going := true;
>        end loop;
>        commit;
>        dbms_output.put_line( 'processing ' || cnt || ' records...' );
>
>        exit when not keep_going;
>    end loop;
>end;
>/
>processing 999 records...
>processing 1998 records...
>processing 2997 records...
>processing 3996 records...
>processing 4995 records...
>processing 5994 records...
>processing 6993 records...
>processing 7992 records...
>processing 8991 records...
>processing 9990 records...
>processing 10989 records...
>processing 11988 records...
>processing 12987 records...
>processing 13986 records...
>processing 14985 records...
>processing 15984 records...
>processing 16983 records...
>processing 17982 records...
>processing 18981 records...
>processing 19980 records...
>processing 20421 records...
>processing 20421 records...
>PL/SQL procedure successfully completed.
>ops$tkyte_at_8i>
>ops$tkyte_at_8i> select distinct needs_processing from t;
>N
>-
>The reason I used NULL and Y for needs_processing is so the index on T
>would only have entries for those rows that needed processing. The
>query:
>
>SELECT * FROM T
> WHERE NEEDS_PROCESSING = 'Y' AND ROWNUM < 1000 FOR UPDATE
>
>call     count      cpu    elapsed     disk    query  current     rows
>------- ------ -------- ---------- -------- -------- -------- --------
>Parse        1     0.00       0.00        0        0        0        0
>Execute     22     0.00       0.00        0      779    20819        0
>Fetch    20443     0.00       0.00        0    41290        0    20421
>------- ------ -------- ---------- -------- -------- -------- --------
>total    20466     0.00       0.00        0    42069    20819    20421
>
>Misses in library cache during parse: 1
>Optimizer goal: CHOOSE
>Parsing user id: 31780 (recursive depth: 1)
>
>Rows     Row Source Operation
>-------  ---------------------------------------------------
>  20421  FOR UPDATE
>  40885   COUNT STOPKEY
>  40842    TABLE ACCESS BY INDEX ROWID T
>  40845     INDEX RANGE SCAN (object id 78037)
>
>will scan the index to get my first 1000 records. After I update them,
>they will not appear in that index any more (since an all-NULL index
>entry is not made for B*Tree indexes).
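
The all-NULL point is easy to verify with a throwaway table (DEMO and DEMO_IDX here are made-up names):

    create table demo ( flag varchar2(1) );
    create index demo_idx on demo(flag);

    insert into demo values ('Y');
    insert into demo values (NULL);    -- no entry is made in demo_idx
    commit;

    analyze index demo_idx validate structure;
    select lf_rows from index_stats;   -- 1: only the 'Y' key is stored
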
>> Thanks,
>>
>> Arthur
>> amerar_at_unsu.com
>>
>--
>Thomas Kyte tkyte_at_us.oracle.com
>Oracle Service Industries http://osi.oracle.com/~tkyte/index.html
>--
>Opinions are mine and do not necessarily reflect those of Oracle Corp
Received on Tue Apr 04 2000 - 07:48:41 CDT
