Re: transaction tables consistent reads - undo records applied

From: Steve Howard <stevedhoward_at_gmail.com>
Date: Mon, 28 Jun 2010 06:37:32 -0700 (PDT)
Message-ID: <59a2a4c2-50f8-42ec-a40b-27ce749f71f4_at_q12g2000yqj.googlegroups.com>



On Jun 26, 2:05 am, "Jonathan Lewis" <jonat..._at_jlcomp.demon.co.uk> wrote:
> "joel garry" <joel-ga..._at_home.com> wrote in message
>
> news:29c017c9-7c3a-40db-b422-1b1f2d861431_at_i9g2000prn.googlegroups.com...
> ]On Jun 25, 9:37 am, "Jonathan Lewis" <jonat..._at_jlcomp.demon.co.uk>
> ]wrote:
> ]>
> ]> The trouble with your requirement is that we really need to do a backwards
> ]> tablescan - because it's probably the data near the end of the table that is
> ]> changing while you are "wasting" time reading all the data from the start of
> ]> the table.
> ]
> ]Excellent explanation, but I lost you here.  He says plan says doing a
> ]range scan, for 1% of the table?  (Maybe you hadn't seen subsequent
> ]post yet, where he mentions a fetch suddenly exhibiting the
> ]characteristics you describe.)
> ]
>
> By the time I'd written this much, I'd forgotten that he'd added the note
> about the index - but it doesn't really make any difference (a) to the
> explanation or (b) to the concept in the solution - except that you
> can put in an "index_desc()" hint and that might be enough to help.
> It depends on how the query is written, what index it uses, and
> the distribution of the changed data.
>
> ]>
> ]> Unfortunately there is no such hint - but if it's really critical, you
> ]> could write
> ]> some code to scan the table one extent at a time in reverse order.
> ]
> ]This cleaning makes perfect sense, but I'm wondering if there is some
> ]administrative tuning like adjusting undo size or retention or some
> ]fiddling with initrans?  Sounds critical if it's interrupting data
> ]extraction.
>
> The error is "just" the same as a traditional 1555 problem when it gets
> that far so a "large enough" undo retention should stop the 1555 - but
> that won't stop the amount of work it takes.  Thinking about initrans is
> a good idea - but that won't have any effect either because the problem
> is the number of backward steps that have to be taken and the value of
> initrans only eliminates the first few (i.e. a few relating to the size of
> INITRANS).
>
> --
> Regards
>
> Jonathan Lewis
> http://jonathanlewis.wordpress.com

What is really odd about this is that several months ago, I started running a job to “pre-scan” all the rows we would need before the “real” job got there. My assumption was that this had something to do with block cleanout, even though none of the cleanout statistics were incremented the way the “transaction tables consistent reads - undo records applied” counter was.

This doesn’t seem to help, though. My “pre-scan” job never has an issue, but I use one-hour windows for the range to scan.
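
In case it matters, the pre-scan itself is nothing fancy. Simplified, it is along these lines (MAX(col5) is just a stand-in for reading enough of each row to visit the table blocks rather than only the create_time index; the bind values are minutes past midnight):

-- Simplified sketch of the pre-scan job: read an upcoming one-hour window and
-- touch the table blocks, so that any delayed block cleanout is paid for here
-- rather than in the "real" extract.
SELECT MAX(col5)
  FROM big_table a
 WHERE create_time BETWEEN trunc(SYSDATE) + (:start_min / 1440)
                       AND trunc(SYSDATE) + (:end_min   / 1440);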

A little more background. This is a “transaction history” table of sorts. It is partitioned by month, and records are only added, never updated.

SQL> desc big_table

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PK                                        NOT NULL NUMBER
 FK                                        NOT NULL NUMBER
 COL3                                      NOT NULL NUMBER(3)
 CREATE_TIME                                        TIMESTAMP(6)
 COL5                                      NOT NULL VARCHAR2(50)
 COL6                                               VARCHAR2(50)
 COL7                                               XMLTYPE
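
For completeness, putting that desc output together with the monthly partitioning, the definition is roughly the following (the partition clause, partition names, and index name are reconstructed from memory and placeholders, so don't hold me to them):

-- Approximate layout only: monthly range partitions on CREATE_TIME,
-- insert-only workload (rows are added, never updated).
CREATE TABLE big_table (
  pk          NUMBER       NOT NULL,
  fk          NUMBER       NOT NULL,
  col3        NUMBER(3)    NOT NULL,
  create_time TIMESTAMP(6),
  col5        VARCHAR2(50) NOT NULL,
  col6        VARCHAR2(50),
  col7        XMLTYPE
)
PARTITION BY RANGE (create_time)
(
  PARTITION p2010_05 VALUES LESS THAN (TIMESTAMP '2010-06-01 00:00:00'),
  PARTITION p2010_06 VALUES LESS THAN (TIMESTAMP '2010-07-01 00:00:00')
);

-- The create_time index the extract query range-scans (name is a placeholder;
-- LOCAL is a guess).
CREATE INDEX big_table_ct_ix ON big_table (create_time) LOCAL;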

We query as follows:

SELECT concatenated_xml_string_of_columns_from_big_table,
       a.xml_col.getClobVal()
  FROM big_table a
 WHERE create_time BETWEEN trunc(sysdate) + (:1 / 1440)
                       AND trunc(sysdate) + (:2 / 1440)

…where the window is three hours. This does a range scan on the create_time column, which is good as it is by far the most selective filter.
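
Just to make sure I follow the index_desc() suggestion: I assume it would be applied to this query roughly as below (the index name big_table_ct_ix is made up, so treat this as a sketch):

-- Rough sketch of the index_desc() idea from Jonathan's reply.  BIG_TABLE_CT_IX
-- is a placeholder for whatever the create_time index is really called.
-- Walking the index in descending order should hit the recently changed blocks
-- first, before the long read of the older data.
SELECT /*+ index_desc(a big_table_ct_ix) */
       concatenated_xml_string_of_columns_from_big_table,
       a.xml_col.getClobVal()
  FROM big_table a
 WHERE create_time BETWEEN trunc(sysdate) + (:1 / 1440)
                       AND trunc(sysdate) + (:2 / 1440);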

The selected records are retrieved in PL/SQL (no bulk collect), run through a few more XML tagging operations, and written to a file. They are then propagated to a mainframe for additional business usage to which I am not privy.

If the query runs “fast enough” (less than 30 minutes or so), we don’t see the issue. If it starts to “get slow” for whatever reason, we start reading tons of undo.

Based on what you wrote, and the fact that I “pre-scan” the rows, shouldn’t the pre-scan have already paid the price for the cleanout? Or could it be that we *do* have other transactions hitting this table of which I am not aware? In other words,

  • I pre-scan
  • A row *is* changed after my query finishes
  • They run the “real” query
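
For reference, the way I have been checking the counters from the extract session is just a v$mystat/v$statname join, something like this:

-- Session-level counters for the extract session.  When the job goes slow it
-- is the "transaction tables consistent reads - undo records applied" figure
-- that climbs; the cleanout statistics stay more or less flat.
SELECT sn.name, ms.value
  FROM v$mystat   ms,
       v$statname sn
 WHERE sn.statistic# = ms.statistic#
   AND sn.name IN ('transaction tables consistent reads - undo records applied',
                   'data blocks consistent reads - undo records applied',
                   'cleanouts and rollbacks - consistent read gets')
 ORDER BY sn.name;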

Thanks,

Steve
