On further reflection, I guess I'm not crystal clear on where the
boundary lies for beginning the transaction in a situation such as this
(as I said, I'm not a frequent PL/SQL programmer):
   cursor c is select ...;
   for n in c loop ...
If the cursor's select statement is part of the same transaction as every
uncommitted statement in the loop, then I can see why it is not a
special case regarding ORA-1555. Cancel the first part of what I said below.
For some reason I had been under the impression that the cursor query
was a separate transaction from what happened inside the loop.
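For the record, here is a minimal sketch of the pattern under discussion
(table and column names are made up). The cursor's result set is
read-consistent as of the moment it is opened, so each fetch may need undo
from before the open; committing inside the loop lets Oracle reuse that
undo, which is the classic fetch-across-commit setup for ORA-1555:

```sql
declare
   cursor c is
      select id from big_table;         -- hypothetical table
begin
   for n in c loop
      update big_table
         set status = 'DONE'            -- hypothetical column
       where id = n.id;
      commit;  -- fetch across commit: later fetches still need undo
               -- from before the cursor was opened (ORA-1555 risk)
   end loop;
end;
/
```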
Mark Bole wrote:
> Daniel Morgan wrote:
>
>> Mark Bole wrote:
>>
>>> Here's what I'm curious about, on the technical side.
>>>
>>> As one who programs in Perl more often than I do in PL/SQL, it is
>>> second nature to grab all key (ID) values from a table in memory as
>>> an array (list), and then loop on the in-memory array to perform the
>>> specific actions for each record. AFAIK, this completely avoids the
>>> ORA-1555 error, no matter how many times you do or don't commit
>>> within the loop.
>>
>>
>>
>> Actually it doesn't. You are as capable of getting an ORA-01555 as
>> everyone else. What causes most ORA-01555's is Oracle trying to find
>> something in a rollback or undo segment that has been overwritten.
>>
>
> Here is what I've gotten from this thread so far:
>
> 1) there is a widespread, long-standing myth that committing every so
> many rows within a PL/SQL loop similar to the OP's is good because it
> keeps your undo from getting "too big".
>
> 2) expert advice debunks this myth because (according to Kyte) "[undo]
> is not 'overhead', not something to minimize, not something to skimp
> on", and besides, doing fetches across commits can lead to ORA-1555 errors.
>
> But isn't it still possible to get an ORA-1555 in this type of
> situation, even with no commits inside the loop, due to the select query
> on which the cursor is based failing a read-consistent fetch when other
> database activity causes undo segments to be overwritten while the loop
> is running?
>
> My statement (now a question), more precisely, was this: isn't
> pre-fetching into memory the entire list of values to loop on, using
> something like a PL/SQL collection, one way (and maybe a good way) to
> avoid this particular issue? I don't see this approach used very often;
> is that because of a lack of awareness, or something else?
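> A sketch of what I mean, with made-up names (BULK COLLECT into a PL/SQL
> collection, so the cursor is fully fetched and closed before any work or
> commits happen; note this trades undo exposure for PGA memory, so a very
> large key set would need a LIMIT clause):

```sql
declare
   type id_tab is table of big_table.id%type;   -- hypothetical table
   ids id_tab;
begin
   select id bulk collect into ids from big_table;
   -- the query is complete here; no fetch ever spans a commit
   for i in 1 .. ids.count loop
      update big_table set status = 'DONE' where id = ids(i);
      commit;   -- no fetch-across-commit ORA-1555 on the driving query
   end loop;
end;
/
```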
>
>>> When the UNDO tablespace becomes 10-20% of your total disk usage, it
>>> is a problem you have to be on top of.
>>
>> Why?
>>
>
> The amount of undo normally needed for our OLTP database is relatively
> small. But when our developers come along after every other software
> release with a massive update script to "clean up the residue of a bug",
> having the UNDO tablespace suddenly swell to many times its normal size
> and stay that way becomes an operational liability (think capacity
> planning, think disaster recovery, think being paged in the middle of
> the night). With rollback segments, at least you could drop and
> recreate them one at a time and eventually shrink the tablespace(s)
> containing them back down -- I haven't found out how to do that with
> UNDO tablespaces.
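> (For what it's worth, the workaround I've seen suggested -- sketched here
> with made-up names, path, and size -- is to create a fresh undo
> tablespace, switch the instance over to it, and drop the swollen one once
> its segments are no longer in use:)

```sql
create undo tablespace undotbs2
   datafile '/u01/oradata/undotbs2_01.dbf' size 500m;  -- hypothetical

alter system set undo_tablespace = undotbs2;

-- wait until the old tablespace's undo segments go offline,
-- then reclaim the space:
drop tablespace undotbs1 including contents and datafiles;
```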
>
> Maybe 10-20% is too conservative, but I believe there is a point where
> the overhead can be considered excessive.
>
> --Mark Bole
>
>
Received on Sun Feb 29 2004 - 19:43:35 CST