Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Very Interested in Common Rollback Issue

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Sun, 18 Jul 1999 10:05:55 +0100
Message-ID: <932289041.12900.0.nnrp-08.9e984b29@news.demon.co.uk>


Some comments interleaved:-

--

Jonathan Lewis
Yet another Oracle-related web site: www.jlcomp.demon.co.uk

Doug Cowles wrote in message <37912228.39A0E805_at_bigfoot.com>...
> If I start this loop at SCN 5000, get 80,000 rows into it (
>which is what actually happens in my case),

In some ways you don't actually 'get' the row (unless you do a select for update, in which case the actual row entry is marked as locked on the block itself - but this lock would clear on the first commit).

> and then my cursor loop is looking
>for a block (tries the buffer cache first I assume), and notices it was stamped
>at 5050, do I really "rollback transaction 5050" - etc.,

The session makes a copy of the current block, and tries to roll back the changes in that copy - this is what 'consistent gets' are - Oracle trying
to produce a copy of a block that was consistent with a given point in time.
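As an illustration, you can watch that statistic climb for your own session while a long-running query runs against concurrently updated data (this assumes SELECT privilege on the V$ views; the join is the standard one between V$MYSTAT and V$STATNAME):

```sql
-- Consistent gets accumulated by the current session so far.
-- Run it before and after your big query and compare the values.
SELECT sn.name, ms.value
FROM   v$mystat   ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'consistent gets';
```

A large jump relative to the number of blocks actually visited suggests Oracle is doing extra work reconstructing read-consistent copies.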

>I mean, it was already committed wasn't it? Don't I really look in the rollback
>segment for the block I'm looking for less than 5050, find one for 5023, try again,
> and then fail to find one for 5000 and send out the error? (Maybe just a
>re-interpretation).

The rollback segment does not contain copies of blocks; it contains a list of deltas - these are applied in turn to the copied block to get back
to the read-consistent image.

>
>In any case, if I'm close to understanding you - what do you think of the
>following suggestions to get around this problem -
>
>A) Take the commit out of the loop - I think this would work, but may result
>in a lot of rollback extension - your thoughts?

    Totally correct - if your process is the only one on the system

>B) Don't do any deletes, just throw the rowid's into a table and after the loop
>is finished, clear out the load table. - This seems like the best suggestion to me,
> what do you think?

    You could still get 1555 from the table being updated.

>C) Add some rollback segments - (This was suggested to me) - I think this is
> unlikely to help, and if it does, it would only be by chance since the commits are
> invalidating the rollback entries, regardless of what segment they're in. Adding rollbacks
>only seems like a good idea if I get a lot of redo header contention, which it would seem
>to me has nothing to do with this problem... your thoughts?

    Correct, but many people do this and get lucky, so they think it works.

The only 'correct' solution is to commit just once at the end - and also to select all the rows for update first, to stop anyone else on the system updating a row that you want to delete and blowing you out of the water anyway.
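A minimal sketch of that approach (the table name, key, and predicate are invented for illustration):

```sql
DECLARE
  -- Lock every target row up front, so no other session can
  -- change them while the delete runs.
  CURSOR c IS
    SELECT id FROM big_table        -- hypothetical table
    WHERE  status = 'OLD'
    FOR UPDATE;
BEGIN
  FOR r IN c LOOP
    DELETE FROM big_table WHERE CURRENT OF c;
  END LOOP;
  COMMIT;                           -- one commit, after the loop
END;
/
```

Note that the single commit means the rollback segment must be able to hold undo for the entire delete in one transaction - which is exactly the extension cost mentioned under option A.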

However, this is often unnecessary, and impractical. If you want to run on a short commit cycle, the best bet may be to run a PL/SQL loop which starts as you have done, but after every N rows (say 250) commits and 'remembers' the value of the key field it had got to. The loop should then re-open the cursor for rows where 'key value > remembered value'.
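That pattern might look like the following sketch (again, big_table, its numeric key id, and the predicate are invented for illustration):

```sql
DECLARE
  last_key  big_table.id%TYPE := 0;   -- 'remembered' key value
  n         PLS_INTEGER;
BEGIN
  LOOP
    n := 0;
    -- Re-open a fresh, small cursor each pass, starting past
    -- the last key we committed.
    FOR r IN (SELECT id
              FROM   big_table
              WHERE  id > last_key
              AND    status = 'OLD'
              ORDER  BY id) LOOP
      DELETE FROM big_table WHERE id = r.id;
      last_key := r.id;
      n := n + 1;
      EXIT WHEN n >= 250;             -- keep the cursor short-lived
    END LOOP;
    COMMIT;                           -- throw the cursor away on commit
    EXIT WHEN n = 0;                  -- nothing left to delete
  END LOOP;
END;
/
```

Because each cursor lives only for 250 rows before being discarded, it never has to reconstruct a read-consistent image across many of its own commits - which is what was triggering the ORA-01555 in the original loop.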

There are several minor variations on this theme, but they all revolve around using a cursor which is small enough to be safe, and throwing it away on commit.

Received on Sun Jul 18 1999 - 04:05:55 CDT

