Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Re: ORA-01555 on a long select after many small txs from the same connection ?
A copy of this was sent to "Nick Rumble" <rumbln_at_tetraworld.com>
(if that email address didn't require changing)
On Fri, 25 Jun 1999 13:24:40 +0100, you wrote:
>I have an OCI application that issues a select statement to retrieve every
>row from a transaction table.
>
>The application fetches a row from the result set and for each of these, it
>updates a corresponding row on another table (a kind of "extension" table).
>
>Each row is processed as a separate transaction.
>
>After processing 300,000 rows (and therefore 300,000 txs), we get the 01555
>error.
>
I'm going to guess that your logic looks something like:
for x in ( select * from transaction_table where ... order by .... ) loop
    update some_other_table;
    delete from transaction_table where key = x.key;
    commit;
end loop;
If so -- you are causing the 1555 by committing. We need the rollback segments to reconstruct 'transaction_table' as it looked at the beginning of the cursor for loop, for as long as we are going through the table. You are modifying that table -- causing rollback to be generated for it. By committing, you are releasing the rollback you just generated (it can now get overwritten). At some point we come back to the rollback segment to reconstruct some block of transaction_table and find that the rollback we need no longer exists.
You postpone the 1555 by making the RBS larger -- it takes longer for you to overwrite some data we really need.
Solutions:
declare
    cursor c1 is select * from transaction_table where ... order by ....;
    l_rec c1%rowtype;
begin
    loop
        open c1;
        fetch c1 into l_rec;
        exit when c1%notfound;
        close c1;
        update some_other_table;
        delete from transaction_table where key = l_rec.key;
        commit;
    end loop;
end;
That doesn't keep a query open across commits -- each iteration opens a fresh cursor, fetches one row, and closes it before committing.
3) don't modify the table inside the cursor for loop. Instead, create a table like:
create table processed_transactions( key_field <datatype> primary key);
for x in ( select *
             from transaction_table A
            where ...
              and NOT EXISTS ( select null
                                 from processed_transactions
                                where key_field = A.key )
            order by .... ) loop
    update some_other_table;
    insert into processed_transactions values ( x.key );
    commit;
end loop;
delete from transaction_table
where key in ( select key_field from processed_transactions );
delete from processed_transactions;
commit;
That way -- if the job aborts halfway through, you have a record of what you've done and can restart it. When it completes, it cleans itself out.
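The restart behaviour of option 3 can be sketched in plain Python (the in-memory "tables" and function names here are stand-ins for illustration, not Oracle features): keys are recorded as processed after each per-row commit, so a rerun skips work already done and the cleanup runs only on completion.

```python
class AbortedMidway(Exception):
    """Stands in for the job dying partway through."""

def run_job(transaction_table, extension_table, processed, abort_after=None):
    """Process every key not yet recorded in `processed`; optionally abort
    after a given number of rows to simulate a mid-run failure."""
    done = 0
    for key in sorted(transaction_table):
        if key in processed:
            continue                       # the NOT EXISTS ( ... ) filter
        extension_table[key] = "updated"   # update some_other_table
        processed.add(key)                 # insert into processed_transactions; commit
        done += 1
        if abort_after is not None and done >= abort_after:
            raise AbortedMidway()
    # job completed: the final deletes clean both tables out
    for key in list(processed):
        transaction_table.discard(key)
    processed.clear()

txs, ext, processed = set(range(10)), {}, set()
try:
    run_job(txs, ext, processed, abort_after=4)   # dies after 4 rows
except AbortedMidway:
    pass
run_job(txs, ext, processed)                      # restart: finishes the rest
print(len(txs), len(processed))                   # both empty afterwards
```

The first run leaves its progress behind in `processed`; the restart skips those keys, processes the remainder exactly once, and then cleans up -- the same guarantee the `processed_transactions` table gives the PL/SQL version.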
>Anecdotal information suggests increasing the rollback segment size allows
>this application to complete successfully. But this results in having to
>establish a very large RBS - and the whole point of making each row a
>separate transaction was to try to avoid this.
>
>I've tried running this application using a 1Mb RBS and although we get the
>01555 error, it doesn't seem to affect how many rows are needed to be
>processed - I'd expect a 1Mb RBS to cause the app to fall over very early
>whereas - say a 300Mb RBS to allow the app to run for much longer.
>
>Can anyone explain what is happening here ?
>
>Many thanks
>
>Nick Rumble
>
--
See http://govt.us.oracle.com/~tkyte/ for my columns 'Digging-in to Oracle8i'...
Current article is "Part I of V, Autonomous Transactions" updated June 21st
Thomas Kyte tkyte_at_us.oracle.com Oracle Service Industries Reston, VA USA
Opinions are mine and do not necessarily reflect those of Oracle Corporation

Received on Fri Jun 25 1999 - 07:48:50 CDT