Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: snapshot too old

From: Mark Bole <makbo_at_pacbell.net>
Date: Sun, 29 Feb 2004 23:46:17 GMT
Message-ID: <dRu0c.18574$yr5.17978@newssvr29.news.prodigy.com>


Daniel Morgan wrote:
> Mark Bole wrote:
>

>> Here's what I'm curious about, on the technical side.
>>
>> As one who programs in Perl more often than I do in PL/SQL, it is 
>> second nature to grab all key (ID) values from a table in memory as an 
>> array (list), and then loop on the in-memory array to perform the 
>> specific actions for each record.  AFAIK, this completely avoids the 
>> ORA-1555 error, no matter how many times you do or don't commit within 
>> the loop.

>
>
> Actually it doesn't. You are as capable of getting an ORA-01555 as
> everyone else. What causes most ORA-01555's is Oracle trying to find
> something in a rollback or undo segment that has been overwritten.
>

Here is what I've gotten from this thread so far:

  1. There is a widespread, long-standing myth that committing every so many rows inside a PL/SQL loop like the OP's is good practice because it keeps your undo from getting "too big".
  2. Expert advice debunks this myth: according to Kyte, "[undo] is not 'overhead', not something to minimize, not something to skimp on", and besides, fetching across commits can lead to ORA-1555 errors.

But isn't it still possible to get an ORA-1555 in this type of situation, even with no commits inside the loop? The SELECT on which the cursor is based can still fail a read-consistent fetch if other database activity overwrites the undo segments it needs while the loop is running.
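For concreteness, here is a sketch of the pattern under discussion. Table and column names are purely illustrative, not from the OP's script:

```
-- Hypothetical "commit every N rows" cleanup loop.
-- Committing inside the loop invites ORA-01555: the cursor's query still
-- needs read-consistent images of blocks it has yet to fetch, but each
-- commit frees undo that other sessions (or this one) may then overwrite.
DECLARE
  CURSOR c IS SELECT id FROM residue_rows WHERE needs_cleanup = 'Y';
  v_count PLS_INTEGER := 0;
BEGIN
  FOR r IN c LOOP
    DELETE FROM residue_rows WHERE id = r.id;
    v_count := v_count + 1;
    IF MOD(v_count, 1000) = 0 THEN
      COMMIT;  -- the "keep undo small" myth in action
    END IF;
  END LOOP;
  COMMIT;
END;
/
```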

My statement (now a question), more precisely, was this: isn't pre-fetching the entire list of values to loop over into memory, using something like a PL/SQL collection, one way (and maybe a good way) to avoid this particular issue? I don't see this approach used very often; is that because of lack of awareness, or something else?
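A minimal sketch of that pre-fetch idea, again with illustrative names. BULK COLLECT materializes all the key values in session memory first, so the driving query completes before any commit happens and cannot itself hit ORA-1555 mid-loop (the per-iteration DML can still fail for other reasons, and a very large key set costs memory):

```
DECLARE
  TYPE id_tab IS TABLE OF residue_rows.id%TYPE;
  v_ids id_tab;
BEGIN
  -- The driving query runs to completion here, before any commits.
  SELECT id BULK COLLECT INTO v_ids
    FROM residue_rows
   WHERE needs_cleanup = 'Y';

  FOR i IN 1 .. v_ids.COUNT LOOP
    DELETE FROM residue_rows WHERE id = v_ids(i);
    IF MOD(i, 1000) = 0 THEN
      COMMIT;  -- harmless to the driving query: it has already finished
    END IF;
  END LOOP;
  COMMIT;
END;
/
```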

>> When the UNDO tablespace becomes 10-20% of your total disk usage, it 
>> is a problem you have to be on top of.

> Why?
>

The amount of undo normally needed for our OLTP database is relatively small. But when our developers come along after every other software release with a massive update script to "clean up the residue of a bug", the UNDO tablespace suddenly swells to many times its normal size and stays that way, which becomes an operational liability (think capacity planning, think disaster recovery, think being paged in the middle of the night). With rollback segments, at least you could drop and recreate them one at a time and eventually shrink the tablespace(s) containing them back down; I haven't found out how to do that with an UNDO tablespace.
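For what it's worth, one commonly cited approach (9i and later) is to create a fresh undo tablespace, switch to it, and drop the swollen one; file names and sizes below are illustrative, and active transactions must age out of the old tablespace before the DROP succeeds:

```
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/db/undotbs2_01.dbf' SIZE 500M AUTOEXTEND ON;

ALTER SYSTEM SET undo_tablespace = undotbs2;

-- wait for transactions still using the old undo segments to complete
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
```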

Maybe 10-20% is too conservative, but I believe there is a point at which the overhead becomes excessive.

--Mark Bole

Received on Sun Feb 29 2004 - 17:46:17 CST
