Re: Cursors, rollback and Long Updates

From: Marty Himmelstein <marty.himmelstein_at_valley.net>
Date: 1995/08/08
Message-ID: <406krf$ceo_at_dartvax.dartmouth.edu>


chuckh_at_ix.netcom.com (Chuck Hamilton) wrote:
>Marty Himmelstein <marty.himmelstein_at_valley.net> wrote:
>
>>Simon,
 

>>I recently had a similar problem. This is how I solved it:
 [snippsville]
>>5. Every x transactions, commit the data and, in a separate activity log table,
>>record the last rownum processed by the program. I also use the activity log table
>>to record timings and to monitor the progress of the stored procedure for
>>performance tuning.
 

>>If the stored proc fails in the middle, look at the activity log to figure out at
>>what row to restart.
>
>But what if it fails somewhere in between updates to the activity log?
>Aren't you going to try to reprocess rows that were already processed?
>

No. Each time I issue a commit for the "real" data, I update the activity_log table in the same transaction, so the committed rows and the checkpoint can never get out of sync.

>I would suggest a nearly identical process, except I'd trap any error
>condition that causes the PL/SQL block to fail and write out the
>rownum it failed on before exiting. That'd make recovery a little
>smoother. And unless you're interested in checking the progress of the
>program while it's running, you could probably speed it up even
>further by not writing the activity log at all until the very end.

The performance hit for writing to the activity log is noise. I keep a running counter in a local variable and just write it to the database at commit time. My procedure is parameterized so that I can run just 5,000 inserts, for example, even if there are 1,000,000 rows to insert. I do this for performance tuning and debugging. I do incorporate your suggestion of logging error conditions.
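
For anyone following the thread, here is roughly what the pattern looks like in PL/SQL. This is only a sketch: the object names (big_table, activity_log, last_rownum, job_name, etc.) and the dummy UPDATE are my invention, not the actual schema, and the real procedure is parameterized as described above.

```sql
-- Sketch of the batch-commit / checkpoint pattern discussed in this thread.
-- All names are illustrative.
DECLARE
   commit_every CONSTANT PLS_INTEGER := 5000;  -- "every x transactions"
   last_done    NUMBER;
   done_so_far  PLS_INTEGER := 0;              -- running counter in a local variable
BEGIN
   -- On (re)start, read the last committed checkpoint from the activity log.
   SELECT last_rownum INTO last_done
     FROM activity_log
    WHERE job_name = 'big_update';

   FOR rec IN (SELECT rn, id
                 FROM big_table
                WHERE rn > last_done
                ORDER BY rn)
   LOOP
      UPDATE big_table SET processed = 'Y' WHERE id = rec.id;  -- the "real" work
      done_so_far := done_so_far + 1;

      IF MOD(done_so_far, commit_every) = 0 THEN
         -- The checkpoint is written in the SAME transaction as the data,
         -- so one COMMIT makes both durable together; the log can never
         -- claim more (or fewer) rows than were actually committed.
         UPDATE activity_log
            SET last_rownum = rec.rn,
                updated_on  = SYSDATE
          WHERE job_name = 'big_update';
         COMMIT;
      END IF;
   END LOOP;

   COMMIT;  -- flush the final partial batch
EXCEPTION
   WHEN OTHERS THEN
      -- Chuck's suggestion: record the error before exiting so recovery
      -- is smoother. Roll back first so the partial batch is discarded;
      -- the last committed checkpoint remains the correct restart point.
      ROLLBACK;
      UPDATE activity_log
         SET last_error = SUBSTR(SQLERRM, 1, 200)
       WHERE job_name = 'big_update';
      COMMIT;
      RAISE;
END;
/
```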

Marty Himmelstein
marty.himmelstein_at_valley.net

Received on Tue Aug 08 1995 - 00:00:00 CEST
