Re: Query ASH for Undo Blocks

From: Andy Sayer <andysayer_at_gmail.com>
Date: Sat, 18 Jun 2022 11:43:26 +0100
Message-ID: <CACj1VR7zJfRKMd6BpkwBsqXY5qQJGGMSrCLHvK-Ky1Bo2xgALQ_at_mail.gmail.com>



It might be me reading the plan on a tiny screen or it might be general tiredness, but it looks like your plan is finding 10G rows in the index (the predicates applied against the index end up with that many rows, and one of them is a filter rather than an access predicate, which means more of the index is probably read than just those 10G rows). Those 10G rowids are looked up against the table, and after the filters that can't be done against the index are applied you get 49M rows.

These numbers are always going to lend themselves to high buffer gets (and will scale badly when they have to deal with undo).

I think your best course of action is to take a step back, see what the whole process this SQL is responsible for is trying to do and see if you can limit what it really needs to touch. Once you’ve done that, figure out the best way of indexing for the filters you need to use.

Thanks,
Andy

On Sat, 18 Jun 2022 at 09:53, Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:

> A couple more thoughts:
>
> a) what does the status column look like - number of distinct values,
> pattern of 2's and 3's, number of nulls.
>
> b) You could check v$segstat / v$segment_statistics over an interval,
> check the number of executions/buffer gets for the statement, then see if
> the change in buffer gets over the interval is mostly due to the table
> blocks or the index blocks. Since you update only one row: if the buffer
> gets are mostly index, that suggests the "id2" predicate is eliminating the
> data (because it's null); if the buffer gets are mostly table, it's
> probably about the status (and maybe that should be in the index).
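>
> A sketch of the sort of check I mean (Tab1/Index1 are the placeholder
> names from your plan; you'd take one snapshot, wait an interval, then
> re-run and diff the values by object_name):
>
> ```sql
> -- Logical reads accumulated per segment since instance startup;
> -- the difference between two snapshots shows where the buffer
> -- gets over the interval went.
> select object_name, statistic_name, value
> from   v$segment_statistics
> where  owner          = user
> and    object_name   in ('TAB1', 'INDEX1')
> and    statistic_name = 'logical reads'
> order  by object_name;
> ```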
>
> c) If you enable sql_trace at level 4 for that SQL_ID for a few executions,
> the trace file will report the peeked value for the timestamp variable each
> time. See https://jonathanlewis.wordpress.com/2019/05/03/occurence/ for
> getting at the SQL_ID from outside any session that's running it.
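>
> For example (a sketch, assuming 11g or later event syntax and that you
> have the privilege to use alter system; bind=true is the equivalent of
> the old level 4 binds option):
>
> ```sql
> -- Trace every execution of one SQL_ID, bind values included.
> alter system set events 'sql_trace[sql:&sql_id] bind=true';
>
> -- ... let a few executions happen, then switch it off:
> alter system set events 'sql_trace[sql:&sql_id] off';
> ```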
>
> d) I do wonder if the very small number of distinct values for id means
> every id has a huge number of entries in the index which HAVE to be
> examined before being eliminated by the id2 predicate (what does the result
> of "select id, count(*)" look like?)
>
> Regards
> Jonathan Lewis
>
>
>
> On Fri, 17 Jun 2022 at 15:47, Jonathan Lewis <jlewisoracle_at_gmail.com>
> wrote:
>
>> Lothar,
>>
>> What's num_rows for the table and for the index?
>>
>> Surely id2 is also a column in the index, since it appears as a filter on
>> the index range scan.
>>
>> What's the actual query - has Oracle taken a predicate like the one
>> that appears as the filter predicate on the "table access" operation and
>> constructed the date_col1 predicate that appears as an access predicate on
>> the "index range scan" operation?
>>
>> Which version of Oracle? It's possible that the :B1 is not captured
>> because it's a timestamp and there's a bit of code missing. I have a note
>> dated some time in 2006 which shows that bind variables are not reported in
>> trace files if they are nvarchar2(); it's possible that the same happens
>> with timestamps and other newer data types.
>>
>> Regards
>> Jonathan Lewis
>>
>> On Fri, 17 Jun 2022 at 15:04, Lothar Flatz <l.flatz_at_bluewin.ch> wrote:
>>
>>> Hi Jonathan,
>>>
>>> On this particular database I have limited rights, therefore I cannot
>>> provide all the information you asked for.
>>> In particular there is no SQL Monitor; one execution runs too fast and I
>>> cannot switch it on.
>>> The statement is rather simple (I changed all names):
>>>
>>>
>>> -------------------------------------------------------------------------------------------
>>> | Id  | Operation                    | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
>>> -------------------------------------------------------------------------------------------
>>> |   0 | SELECT STATEMENT             |        |       |       |     2 (100)|          |
>>> |   1 |  FOR UPDATE                  |        |       |       |            |          |
>>> |*  2 |   TABLE ACCESS BY INDEX ROWID| Tab1   |     1 |    42 |     2   (0)| 00:00:01 |
>>> |*  3 |    INDEX RANGE SCAN          | Index1 |     1 |       |     2   (0)| 00:00:01 |
>>> -------------------------------------------------------------------------------------------
>>>
>>> Predicate Information (identified by operation id):
>>> ---------------------------------------------------
>>>
>>> 2 - filter("Date_col1"<=CASE "status" WHEN 2 THEN :B1 WHEN 3 THEN
>>> LOCALTIMESTAMP(6) END )
>>> 3 - access("ID"=:B3 AND "Date_col1"<=GREATEST(:B1,LOCALTIMESTAMP(6))
>>> )
>>> filter("ID2"=COALESCE(:B2,"ID2"))
>>>
>>> Here are the stats of a batch run:
>>>
>>> Stat Name                                 Statement  Per Execution   % Snap
>>> ---------------------------------------- ----------  -------------  -------
>>> Elapsed Time (ms)                        3.9098E+08           45.7      3.2
>>> CPU Time (ms)                            3.8841E+08           45.4      4.5
>>> Executions                                8,546,245            1.0      0.0
>>> Buffer Gets                              3.2236E+10        3,772.0      6.9
>>> Disk Reads                                  364,449            0.0      0.0
>>> Parse Calls                                 187,232            0.0      0.0
>>> Rows                                      6,441,767            0.8      N/A
>>> User I/O Wait Time (ms)                     328,217            0.0      0.0
>>> Cluster Wait Time (ms)                            0            0.0      0.0
>>> Application Wait Time (ms)                      117            0.0      0.0
>>> Concurrency Wait Time (ms)                1,693,881            0.2      0.1
>>> Invalidations                                     0            N/A      N/A
>>> Version Count                                 1,536            N/A      N/A
>>> Sharable Mem(KB)                             64,108            N/A      N/A
>>>
>>> -------------------------------------------------------------
>>> ASH says time is spent:
>>>
>>> SQL_PLAN_OPERATION           SQL_PLAN_OPTIONS  SQL_PLAN_LINE_ID  SECONDS  % seconds
>>> CPU                          RANGE SCAN                       3   415160         97
>>> CPU                          BY INDEX ROWID                   2     6100          1
>>> CPU                                                           1     3170          1
>>> buffer busy waits                                             1     1310          0
>>> CPU                                                                  480          0
>>> read by other session        BY INDEX ROWID                   2      220          0
>>> db file sequential read      BY INDEX ROWID                   2      100          0
>>> latch: cache buffers chains  BY INDEX ROWID                   2       90          0
>>> buffer busy waits            BY INDEX ROWID                   2       70          0
>>> db file scattered read                                        1       40          0
>>> latch: cache buffers chains  RANGE SCAN                       3       30          0
>>> latch: cache buffers chains                                   1       30          0
>>>
>>> This is the Index Definition:
>>>
>>> INDEX_NAME  COMPRESSION  BLEVEL  LEAF_BLOCKS  DISTINCT_KEYS  AVG_LEAF_BLOCKS_PER_KEY  AVG_DATA_BLOCKS_PER_KEY
>>> Index1      DISABLED          3       112885           6290                       17                      102
>>>
>>> col stats
>>>
>>> COLUMN_NAME  NUM_DISTINCT  NUM_NULLS  NUM_BUCKETS
>>> ID                     56   37349080            1
>>> Date_col1            2628   39004885            1
>>>
>>> Bind variable B2 is always null, btw. B1 is never captured. (Do you know
>>> why? I wonder if the reason might be that it is used in an expression.)
>>>
>>> These numbers are from a test DB; here I got about 200 buffer gets per
>>> execution. Question: where does the rest come from?
>>>
>>> --------------------------------------------------------------------------------------------------------------
>>> | Id  | Operation                    | Name   | Starts | E-Rows | A-Rows |   A-Time    | Buffers | Reads |
>>> --------------------------------------------------------------------------------------------------------------
>>> |   0 | SELECT STATEMENT             |        |    49M |        |    42M | 15:17:46.13 |     17G | 12889 |
>>> |   1 |  FOR UPDATE                  |        |    49M |        |    42M | 15:17:46.13 |     17G | 12889 |
>>> |*  2 |   TABLE ACCESS BY INDEX ROWID| Tab1   |    49M |      2 |    57M | 13:45:46.36 |     16G | 12874 |
>>> |*  3 |    INDEX RANGE SCAN          | Index1 |    49M |     22 |    12G | 07:27:46.63 |     10G |  1536 |
>>> --------------------------------------------------------------------------------------------------------------
>>>
>>> Thanks
>>>
>>> Lothar
>>>
>>>
>>> Am 13.06.2022 um 20:13 schrieb Jonathan Lewis:
>>>
>>>
>>> What's the scale of the problem? I.e. how long does the query run, how
>>> complex is the query/plan, how much of the time relates to the index range
>>> scan, how many buffer gets does that index range scan step take, and how
>>> many do you think it should take? How frequently does the query run, and how
>>> repeatable is the experiment?
>>>
>>> How are you working out that that index range scan is doing more buffer
>>> gets than you expect - have you enabled SQL Monitor for the SQL_Id?
>>>
>>> Regards
>>> Jonathan Lewis
>>>
>>>
>>> On Mon, 13 Jun 2022 at 11:35, Lothar Flatz <l.flatz_at_bluewin.ch> wrote:
>>>
>>>> Hi,
>>>>
>>>> there is a plan with an index range scan that generates way more
>>>> buffer gets than you would expect.
>>>> Would it be correct to query active session history using CURRENT_FILE#
>>>> and CURRENT_BLOCK# to check against DBA_EXTENTS
>>>> to find out if some buffer gets are coming from undo?
>>>> The query does work, but I am not sure if the result is showing the
>>>> intended answer.
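>>>>
>>>> The query I mean looks roughly like this (a sketch; note that
>>>> CURRENT_FILE#/CURRENT_BLOCK# are only reliably populated for some
>>>> samples, and the join to DBA_EXTENTS can be slow):
>>>>
>>>> ```sql
>>>> -- Classify the blocks ASH saw the SQL touching by the segment
>>>> -- they belong to; undo shows up as segment_type 'TYPE2 UNDO'
>>>> -- in the undo tablespace.
>>>> select   e.tablespace_name, e.segment_type, count(*)
>>>> from     v$active_session_history h,
>>>>          dba_extents              e
>>>> where    h.sql_id         = '&sql_id'
>>>> and      e.file_id        = h.current_file#
>>>> and      h.current_block# between e.block_id
>>>>                               and e.block_id + e.blocks - 1
>>>> group by e.tablespace_name, e.segment_type
>>>> order by count(*) desc;
>>>> ```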
>>>>
>>>> Thanks
>>>>
>>>> Lothar
>>>>
>>>>
>>>>
>>>> --
>>>> http://www.freelists.org/webpage/oracle-l
>>>>
>>>>
>>>>
>>>

--
http://www.freelists.org/webpage/oracle-l
Received on Sat Jun 18 2022 - 12:43:26 CEST
