Re: Tuning Update query

From: Lothar Flatz <l.flatz_at_bluewin.ch>
Date: Mon, 3 May 2021 10:27:56 +0200
Message-ID: <f11f2803-9e36-a3b5-096a-d908ba02263f_at_bluewin.ch>



Hi Pap,

Index access fetches essentially one row at a time (there are improvements such as rowid batching) and is very inefficient when accessing this many rows. What happens below is the equivalent of a nested loop join with GTT_TAB as the driving table.
What we would need is a hash join and a full table scan on PRODUCT_TAB; ideally we would also get a Bloom filter on GTT_TAB. In my experience, an UPDATE statement will always start by accessing the table being updated, no matter what you hint, and you will never get a hash join.
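
For reference, the slow statement presumably has a correlated shape like the one below. This is only a guess reconstructed from the predicates in the plan (the SQL text is not quoted here): for every GTT_TAB row Oracle probes PRODUCT_TAB through the index, which is exactly the nested-loop pattern described above.

UPDATE GTT_TAB TMP
   SET (TMP.col1, TMP.col2, ...) =
       (SELECT P.col1, P.col2, ...
          FROM PRODUCT_TAB P
         WHERE P.TID     = TMP.TID
           AND P.PART_DT = TMP.PART_DT
           AND P.CODE    = 'XX')
 WHERE TMP.CIND = 'Y';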
But you can rewrite it as a MERGE statement, where you will be able to get a hash join. Note that a MERGE does not need to have an INSERT part; we can go with the update part only.

Without warranty:

MERGE INTO GTT_TAB TMP
USING (SELECT TID, PART_DT, col1, col2, ... , col11
         FROM PRODUCT_TAB P
        WHERE P.CODE = 'XX') P
   ON (P.TID = TMP.TID AND P.PART_DT = TMP.PART_DT)
 WHEN MATCHED THEN UPDATE
  SET TMP.col1 = P.col1,
      TMP.col2 = P.col2,
      ...
      TMP.col11 = P.col11
 WHERE TMP.CIND = 'Y';

(Note that TID and PART_DT have to be in the subquery's select list, since the ON clause joins on them.)
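
If the optimizer still does not pick the hash join for the MERGE, hints may nudge it. Something along these lines (only a sketch; the hints are standard Oracle, but whether they pay off on your data is untested):

MERGE /*+ LEADING(P) USE_HASH(TMP) FULL(TMP) */ INTO GTT_TAB TMP
USING (SELECT TID, PART_DT, col1, col2, ... , col11
         FROM PRODUCT_TAB
        WHERE CODE = 'XX') P
   ON (P.TID = TMP.TID AND P.PART_DT = TMP.PART_DT)
 WHEN MATCHED THEN UPDATE
  SET TMP.col1 = P.col1, ... , TMP.col11 = P.col11
 WHERE TMP.CIND = 'Y';

You can check the plan you actually got by also adding /*+ GATHER_PLAN_STATISTICS */ and then running:

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));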

Thanks

Lothar

On 03.05.2021 at 10:02, Lok P wrote:
> Thank you Lothar and Pap.
>
> Agreed, an index on the GTT_TAB should not help us anyway, considering we
> do not have any filter on it, and that part is hardly taking any time as
> shown in the sql monitor. Now, as you are pointing to rewriting these
> UPDATEs as MERGE statements, I am trying to understand how they can be
> modified (if you can show a sample example) and also how it can help us,
> because even a MERGE could go for the same access path on
> the PRODUCT_TAB table, where these are spending ~80% of the
> time. Correct me if I am wrong.
>
> On Mon, May 3, 2021 at 1:21 PM Lothar Flatz <l.flatz_at_bluewin.ch> wrote:
>
> This UPDATE statement should be rewritten as a MERGE statement.
> You would not want to index GTT_TAB; there is not even a filter
> criterion on that table.
>
> On 03.05.2021 at 09:13, Pap wrote:
>> In all three of the sql monitors posted by Lok, it's clear that
>> the major chunk (~80%) of time and resources is spent while
>> accessing table PRODUCT_TAB (which is most likely range partitioned on
>> column PART_DT). So I doubt an index on GTT_TAB will be of any
>> help here. Also, while the estimate shows 8G rows, the actual
>> number of rows coming out is 17M; I assume this must be an
>> error in dynamic sampling. Additionally, if you look at the
>> "gc cr grant 2-way" event, the # samples show it as only a few
>> seconds out of ~1hr+ of DB time.
>>
>> On Mon, May 3, 2021 at 5:22 AM Mladen Gogala <gogala.mladen_at_gmail.com> wrote:
>>
>> How about indexing GTT_TAB?
>>
>> SQL Plan Monitoring Details (Plan Hash Value=1406665007)
>> ============================================================================================================================================================================================
>> | Id | Operation                               | Name         |    Rows | Cost |      Time |  Start | Execs |     Rows | Read |  Read | Activity | Activity Detail                         |
>> |    |                                         |              | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes |      (%) | (# samples)                             |
>> ============================================================================================================================================================================================
>> |  0 | UPDATE STATEMENT                        |              |         |      |      1235 |  +2153 |     1 |        0 |      |       |     0.07 | Cpu (1)                                 |
>> |    |                                         |              |         |      |           |        |       |          |      |       |          | reliable message (2)                    |
>> |  1 |    UPDATE                               | GTT_TAB      |         |      |      4044 |     +1 |     1 |        0 |      |       |    10.62 | Cpu (429)                               |
>> |  2 |     TABLE ACCESS STORAGE FULL           | GTT_TAB      |      8G | 381K |      4042 |     +3 |     1 |      17M | 4467 |   4GB |     0.72 | Cpu (20)                                |
>> |    |                                         |              |         |      |           |        |       |          |      |       |          | cell multiblock physical read (9)       |
>> |  3 |     FILTER                              |              |         |      |      4042 |     +3 |   17M |       8M |      |       |     0.07 | Cpu (3)                                 |
>> |  4 |      PARTITION RANGE SINGLE             |              |       1 |   13 |      4042 |     +3 |    8M |       8M |      |       |     0.12 | Cpu (5)                                 |
>> |  5 |       TABLE ACCESS BY LOCAL INDEX ROWID | PRODUCT_TAB  |       1 |   13 |      4043 |     +2 |    8M |       8M |   6M |  45GB |    55.79 | gc cr grant 2-way (1)                   |
>> |    |                                         |              |         |      |           |        |       |          |      |       |          | Cpu (292)                               |
>> |    |                                         |              |         |      |           |        |       |          |      |       |          | cell single block physical read (1961)  |
>> |  6 |        INDEX RANGE SCAN                 | PROD_TAB_IX1 |       1 |    4 |      4042 |     +3 |    8M |       8M |   3M |  25GB |    32.60 | Cpu (218)                               |
>> |    |                                         |              |         |      |           |        |       |          |      |       |          | cell single block physical read (1099)  |
>> ============================================================================================================================================================================================
>>
>> Predicate Information (identified by operation id):
>> ---------------------------------------------------
>>
>>    3 - filter(:B1='Y')
>>    5 - filter("P"."PART_DT"=:B1)
>>    6 - access("P"."TID"=:B1 AND "P"."CODE"='XX')
>>
>>
>>  **************************************************
>>
>> Your plan says that GTT_TAB has around 8G rows and is
>> accessed using a full table scan. Since your update is using a
>> correlated query, I would try indexing the GTT. The other
>> thing I notice is the "gc cr grant 2-way (1)" event. That means
>> that PRODUCT_TAB is being updated from another node: you are
>> essentially coordinating with the other node, and that is
>> slow. Can you perform the operation on the same node where
>> the table is being modified?
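>>
>> If you try that, the index would go on the GTT's correlation
>> columns, e.g. (just a sketch; the index name and column list are
>> guessed from the join predicates in the plan):
>>
>> CREATE INDEX GTT_TAB_IX1 ON GTT_TAB (TID, PART_DT);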
>>
>> On 5/2/21 2:53 PM, Lok P wrote:
>>>
>>> Hello, we have a plsql procedure which updates multiple
>>> columns of a global temporary table after loading it in
>>> different steps. It's updated with different filter criteria,
>>> and each of these UPDATEs runs for hours even though they
>>> are simple ones. I am trying to see if it's possible to do this
>>> in an efficient fashion by combining all the different updates
>>> into one UPDATE statement, or by any other method. One
>>> initial thought was to do it as part of the INSERT itself
>>> rather than updating through multiple statements at a
>>> later stage, but then I see the data load/INSERT is
>>> happening under multiple if/else conditions, which may not be
>>> straightforward.
>>>
>>> Attached is the sql monitor for a few of the long-running
>>> UPDATEs. I am seeing that the time is not really spent
>>> updating rows but rather in finding the eligible rows for
>>> the update, i.e. the SELECT part of it is what actually
>>> takes the time. So I wanted to understand how these UPDATEs
>>> can be made faster.
>>>
>>> It's version 11.2.0.4 of Oracle, on Exadata X5.
>>>
>>> Regards
>>>
>>> Lok
>>>
>> --
>> Mladen Gogala
>> Database Consultant
>> Tel: (347) 321-1217
>> https://dbwhisperer.wordpress.com
>>
>> -- http://www.freelists.org/webpage/oracle-l
>>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon May 03 2021 - 10:27:56 CEST
