
Re: PL/SQL limitation while updating table A with a cursor on Table A

From: Jurij Modic <jurij.modic_at_mf.sigov.mail.si>
Date: 1997/06/19
Message-ID: <33a9ac0b.5120652@www.sigov.si>

On 18 Jun 1997 23:06:08 GMT, "Shivakumar Gopalakrishnan" <shiva_at_one.net> wrote:

>I am currently building a financial data warehouse. We have the fact table
>Balances by Period and the Account (which is a composite of 9 fields). I have
>a kind of period-opening process. This process reads the previous month's
>records and creates the current month's records, with the ending balance of
>the previous month as the opening balance of the current month.
>
>While running this procedure, I always get an ORA-01555, which says the
>snapshot is too old in the rollback segment. We have increased the extent
>size of the rollback segment, decreased the number of records after which I
>commit, and doubled the tablespace of the rollback segment. The error still
>persists.
>
>Currently we have decided to write the balances to a flat file and load them
>using SQL*Loader. Does anyone know of a better way?
>
>The problem we found is that Oracle tries to maintain a copy of the
>database as it was before, as of the opening of the cursor. SQL*Loader
>bypasses this somehow. Does anyone know of a way to turn this switch off,
>as SQL*Loader does?
>
>Any help or suggestion is appreciated. Please mail it to shiva_at_one.net
>

You are most probably suffering from the so-called "fetch across commit". By the SQL standard, your cursors should be closed once a COMMIT is executed. Oracle, however, allows you to keep your cursor open, but at your own risk of getting ORA-01555 if your rollback segments are not big enough. The funny thing is, the more often you issue COMMIT (while leaving the cursor open), the more likely you are to get this error on a subsequent fetch! So *reducing* (not increasing!) your COMMIT frequency *might* solve your problem.
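
For illustration, here is the kind of loop that typically triggers it (a minimal sketch, assuming a hypothetical table T with columns ID and VAL):

DECLARE
  CURSOR c IS SELECT id FROM t;
  v_id t.id%TYPE;
  i    NUMBER := 0;
BEGIN
  OPEN c;
  LOOP
    FETCH c INTO v_id;                  -- fetch across commit: c stays open...
    EXIT WHEN c%NOTFOUND;
    UPDATE t SET val = val
     WHERE id = v_id;                   -- stands in for the real processing
    i := i + 1;
    IF MOD(i, 100) = 0 THEN
      COMMIT;                           -- ...while each COMMIT lets Oracle reuse
    END IF;                             -- the rollback entries that the cursor's
  END LOOP;                             -- consistent read still needs, so a
  CLOSE c;                              -- later FETCH can fail with ORA-01555
  COMMIT;
END;

Each COMMIT marks that transaction's rollback entries as reusable, yet the open cursor must still reconstruct every block as it was when the cursor was opened; as soon as an overwritten entry is needed, the fetch fails.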

If you want to avoid this problem, you should create your cursor so that it selects only a subset of the records you want to process (the subset should be small enough for your RB segments) and so that the selection criteria (upper and lower boundaries) for the subset can be set dynamically. Then, inside a loop, open the cursor for the first subset, iteratively fetch and process its records, close the cursor, and commit the transaction. Advance the boundaries and repeat until all the records are processed. The only problem is if you don't have an appropriate column in your table by which to break the whole record set into subsets.

In skeleton PL/SQL, assuming your table has a 'rec_id' column which sequentially identifies your records:

DECLARE
  n      CONSTANT NUMBER := 1000;  -- subset size: small enough for your RB segments
  min_id NUMBER;
  max_id NUMBER;
  CURSOR c1 IS SELECT ......
    WHERE rec_id BETWEEN min_id AND (min_id + n) AND .....;
BEGIN
  SELECT MIN(rec_id), MAX(rec_id) INTO min_id, max_id FROM .....;
  WHILE min_id <= max_id LOOP
    OPEN c1;
    LOOP
      FETCH c1 INTO .....;
      EXIT WHEN c1%NOTFOUND;
      -- process the record
    END LOOP;
    CLOSE c1;                     -- close the cursor before COMMIT: no fetch across commit
    COMMIT;
    min_id := min_id + n + 1;     -- move on to the next subset
  END LOOP;
END;
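
Applied to your period-opening process, it might look something like this (just a sketch: the table name BALANCES, its columns ACCT_KEY, PERIOD, OPEN_BAL, CLOSE_BAL, and the sequence BAL_SEQ are made-up stand-ins, and your real account key spans 9 columns):

DECLARE
  k      CONSTANT NUMBER := 10000;    -- subset size; tune to your RB segments
  lo_id  NUMBER;
  hi_id  NUMBER;
  CURSOR c1 (p_lo NUMBER, p_hi NUMBER) IS
    SELECT acct_key, close_bal
      FROM balances
     WHERE period = '199705'
       AND rec_id BETWEEN p_lo AND p_hi;
BEGIN
  SELECT MIN(rec_id), MAX(rec_id) INTO lo_id, hi_id
    FROM balances WHERE period = '199705';
  WHILE lo_id <= hi_id LOOP
    FOR r IN c1 (lo_id, lo_id + k - 1) LOOP    -- cursor FOR loop: opens c1...
      INSERT INTO balances (rec_id, acct_key, period, open_bal)
      VALUES (bal_seq.NEXTVAL, r.acct_key, '199706', r.close_bal);
    END LOOP;                                  -- ...and closes it again here
    COMMIT;                                    -- nothing is fetched after COMMIT
    lo_id := lo_id + k;
  END LOOP;
END;

Because the cursor FOR loop closes c1 before each COMMIT, no fetch ever crosses a commit, and ORA-01555 should not occur no matter how small you make the subsets.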

Regards,
Jurij Modic                             Republic of Slovenia
tel: +386 61 178 55 14                  Ministry of Finance
fax: +386 61  21 45 84                  Zupanciceva 3
e-mail: jurij.modic_at_mf.sigov.mail.si Ljubljana 1000
Received on Thu Jun 19 1997 - 00:00:00 CDT
