Re: transaction tables consistent reads - undo records applied

From: Steve Howard <stevedhoward_at_gmail.com>
Date: Tue, 29 Jun 2010 09:53:58 -0700 (PDT)
Message-ID: <ab9307e4-6b3d-4306-96ee-a1f3632ccf98_at_u7g2000yqm.googlegroups.com>



On Jun 25, 12:37 pm, "Jonathan Lewis" <jonat..._at_jlcomp.demon.co.uk> wrote:
> This happens when your query hits a block that was updated
> "a long time" in the past by a transaction that has committed
> but not been cleaned out.
>
> Your query can tell that it is a committed transaction because
> the ITL entry for the transaction points to a transaction table slot
> (in an undo segment header block) that has been re-used for
> a newer transaction.  (Part of the transaction id is the "transaction
> sequence number", which counts the number of times a transaction
> slot has been used.)
>
> Your query therefore needs to know WHEN the transaction committed,
> so that it can decide whether or not it's supposed to see the new version
> or the old version of the row.  (If the transaction committed before the
> query started then the query doesn't need to know exactly when it
> committed; if it committed after the query started then its changes have
> to be rolled back - and it's possible that the "snapshot too old" is the
> result of the data rollback rather than the transaction table rollback.)
>
> To find out when the transaction committed, your query copies the undo
> segment header block and starts rolling it back. The number of times this
> happens is recorded as:
>     "transaction tables consistent read rollbacks"
>
> To perform the rollback, your query will read the transaction control block
> (another part of the undo segment header) which contains a number of
> important details - including the first undo block address of the most
> recent transaction to use that undo segment header.  This undo block
> address will hold the first record of that transaction *** - which includes
> information about the PREVIOUS state of the transaction control block.
> By using this undo record your query can take the undo segment header
> block backwards in time by one step - at which point it reads the older
> version of the transaction control block and repeats the process until it
> reaches the point where the transaction slot it's interested in has been
> taken back to the correct sequence number (or a change has taken the undo
> segment header block back to a point in time before the start of the
> query).  Each record it reads in this process is counted in the
>     "transaction tables consistent reads - undo records applied"
>
> (*** This is why the block you dumped had nothing to do with your table.)
>
> The trouble with your requirement is that we really need to do a backwards
> tablescan - because it's probably the data near the end of the table that
> is changing while you are "wasting" time reading all the data from the
> start of the table.
>
> Unfortunately there is no such hint - but if it's really critical, you
> could write some code to scan the table one extent at a time in reverse
> order.
>
> --
> Regards
>
> Jonathan Lewis
> http://jonathanlewis.wordpress.com
>
> "Steve Howard" <stevedhow..._at_gmail.com> wrote in message
>
> news:82fa462e-574c-461d-b1c6-65a5473a3afc_at_d37g2000yqm.googlegroups.com...
>
> > Hi All,
>
> > 10.2.0.4 three node cluster EE on SLES 10
>
> > Can someone give me a good definition of *exactly* what this means and
> > what causes it (mainly the latter).  The documentation is not very
> > descriptive, with "Number of undo records applied to transaction
> > tables that have been rolled back for consistent read purposes".
>
> > It sounds like undo on undo, but we don't have any larger number for
> > rollbacks (or commits) when this happens than we do at any other time.
>
> > We have been plagued by this for over a year, and after multiple SRs
> > where the support analyst just reads us the documentation, I am at my
> > wits' end.
>
> > We have a fairly large table (almost 1TB with about 300 million rows)
> > with a large XMLTYPE column.  Once a day, a job scans this table for
> > records added that day for propagation to an external system.  The
> > longer the query runs, the more we see the session doing single block
> > reads against the undo tablespace, with the stat in the subject
> > climbing into the millions.  Eventually, after several hours, an
> > ORA-01555 is thrown.
>
> > I even grabbed one of the P1/P2 parameters for the session querying
> > and dumped the undo block in the P2 value.  While it was a second or
> > two after the event was posted, the block itself didn't even contain
> > any references to the table being queried!
>
> > Can anyone shed some light?
>
> > Thanks,
>
> > Steve

Jonathan,

I am just re-reading your post.
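
As a side note, for anyone else who finds this thread: the query below is
roughly how I have been watching the two statistics you describe climb for
the session running the extract. It's only a sketch - bind :sid to whichever
session you're watching.

select sn.name, st.value
  from v$statname sn,
       v$sesstat  st
 where st.statistic# = sn.statistic#
   and st.sid        = :sid
   and sn.name in ('transaction tables consistent read rollbacks',
                   'transaction tables consistent reads - undo records applied');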

<<
The trouble with your requirement is that we really need to do a backwards tablescan - because it's probably the data near the end of the table that is changing>>

This is true.

<<
 while you are "wasting" time reading all the data from the start of the table. Unfortunately there is no such hint - but if it's really critical, you could write some code to scan the table one extent at a time in reverse order.
>>

Are you suggesting that if I were to scan the *newest* rows prior to the "real" job reading older rows, that may help? If so, that would be fairly easy, as I can identify them by create_time.
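
Also, just to make sure I understand the extent-at-a-time suggestion, is
something like the block below what you had in mind? It's only an untested
sketch - the owner and table names are placeholders for our real table, and
the count(*) stands in for the real work the job would do against each
rowid range.

declare
    l_objid  number;
    l_lo     rowid;
    l_hi     rowid;
    l_cnt    number;
    l_total  number := 0;
begin
    -- dbms_rowid.rowid_create needs the data object id, not the object id
    select data_object_id
      into l_objid
      from dba_objects
     where owner = 'APP_OWNER'             -- placeholder
       and object_name = 'BIG_XML_TABLE';  -- placeholder

    -- walk the extents newest-first (extent_id roughly reflects allocation
    -- order) and restrict each pass to one extent's rowid range
    for e in ( select relative_fno, block_id, blocks
                 from dba_extents
                where owner = 'APP_OWNER'
                  and segment_name = 'BIG_XML_TABLE'
                order by extent_id desc )
    loop
        l_lo := dbms_rowid.rowid_create(1, l_objid, e.relative_fno,
                                        e.block_id, 0);
        l_hi := dbms_rowid.rowid_create(1, l_objid, e.relative_fno,
                                        e.block_id + e.blocks - 1, 32767);

        select count(*)
          into l_cnt
          from app_owner.big_xml_table
         where rowid between l_lo and l_hi;

        l_total := l_total + l_cnt;
    end loop;

    dbms_output.put_line('rows seen: ' || l_total);
end;
/

If I follow the idea, reading the newest extents first means those blocks
get visited before their transaction table slots have been re-used too many
times, so far less undo has to be applied to rebuild the commit times.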

Thanks,

Steve

Received on Tue Jun 29 2010 - 11:53:58 CDT
