Re: Tuning Update query

From: Lok P <loknath.73_at_gmail.com>
Date: Mon, 3 May 2021 13:32:21 +0530
Message-ID: <CAKna9VYbAg6QV0G-ai9A=6ZCm3Bamgc8NFLr16iV8rgLtjx7Zw_at_mail.gmail.com>



Thank You Lothar and Pap.

Agreed, an index on GTT_TAB should not help us anyway, considering we have no filter on that table, and that part is hardly taking any time, as the SQL monitor shows. Now that you are pointing to rewriting these UPDATEs as MERGE statements, I am trying to understand how they can be modified (if you can show a sample example) and also how it would help us, because even a MERGE could take the same access path into the PRODUCT_TAB table, where ~80% of the time is being spent. Correct me if I am wrong.
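
Something along these lines is what I was imagining for the MERGE (a rough sketch only -- the flag column PROD_FLAG and the exact join/filter columns are my assumptions based on the bind predicates shown in the SQL monitor):

  MERGE INTO gtt_tab g
  USING (SELECT p.tid
           FROM product_tab p
          WHERE p.part_dt = :b1
            AND p.code    = 'XX') p
  ON (g.tid = p.tid)
  WHEN MATCHED THEN
    UPDATE SET g.prod_flag = 'Y';

But would even this not take the same access path into PRODUCT_TAB?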

On Mon, May 3, 2021 at 1:21 PM Lothar Flatz <l.flatz_at_bluewin.ch> wrote:

> This UPDATE statement should be rewritten as a MERGE statement. You
> would not want to index GTT_TAB; there is not even a filter criterion
> on that table.
>
> Am 03.05.2021 um 09:13 schrieb Pap:
>
> In all three of the SQL monitors posted by Lok, it is clear that the
> major chunk (~80%) of the time and resources is spent accessing table
> PRODUCT_TAB (which is range partitioned on column PART_DT). So I doubt
> an index on GTT_TAB will be of any help here. Also, even though the
> estimate shows 8G rows, the actual row count coming out is 17M; I
> assume this must be an error in dynamic sampling. Additionally, if you
> look at "gc cr grant 2-way", the #samples show it accounting for only
> a few seconds out of 1hr+ of DB time.
>
> On Mon, May 3, 2021 at 5:22 AM Mladen Gogala <gogala.mladen_at_gmail.com>
> wrote:
>
>> How about indexing GTT_TAB?
>>
>> SQL Plan Monitoring Details (Plan Hash Value=1406665007)
>>
>> =================================================================================================================================================================================
>> | Id | Operation                         | Name         |    Rows | Cost |      Time |  Start | Execs |     Rows | Read | Read  | Activity | Activity Detail                        |
>> |    |                                   |              | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes |      (%) | (# samples)                            |
>> =================================================================================================================================================================================
>> |  0 | UPDATE STATEMENT                  |              |         |      |      1235 |  +2153 |     1 |        0 |      |       |     0.07 | Cpu (1)                                |
>> |    |                                   |              |         |      |           |        |       |          |      |       |          | reliable message (2)                   |
>> |  1 | UPDATE                            | GTT_TAB      |         |      |      4044 |     +1 |     1 |        0 |      |       |    10.62 | Cpu (429)                              |
>> |  2 | TABLE ACCESS STORAGE FULL         | GTT_TAB      |      8G | 381K |      4042 |     +3 |     1 |      17M | 4467 | 4GB   |     0.72 | Cpu (20)                               |
>> |    |                                   |              |         |      |           |        |       |          |      |       |          | cell multiblock physical read (9)      |
>> |  3 | FILTER                            |              |         |      |      4042 |     +3 |   17M |       8M |      |       |     0.07 | Cpu (3)                                |
>> |  4 | PARTITION RANGE SINGLE            |              |       1 |   13 |      4042 |     +3 |    8M |       8M |      |       |     0.12 | Cpu (5)                                |
>> |  5 | TABLE ACCESS BY LOCAL INDEX ROWID | PRODUCT_TAB  |       1 |   13 |      4043 |     +2 |    8M |       8M | 6M   | 45GB  |    55.79 | gc cr grant 2-way (1)                  |
>> |    |                                   |              |         |      |           |        |       |          |      |       |          | Cpu (292)                              |
>> |    |                                   |              |         |      |           |        |       |          |      |       |          | cell single block physical read (1961) |
>> |  6 | INDEX RANGE SCAN                  | PROD_TAB_IX1 |       1 |    4 |      4042 |     +3 |    8M |       8M | 3M   | 25GB  |    32.60 | Cpu (218)                              |
>> |    |                                   |              |         |      |           |        |       |          |      |       |          | cell single block physical read (1099) |
>> =================================================================================================================================================================================
>>
>> Predicate Information (identified by operation id):
>> ---------------------------------------------------
>>
>> 3 - filter(:B1='Y')
>> 5 - filter("P"."PART_DT"=:B1)
>> 6 - access("P"."TID"=:B1 AND "P"."CODE"='XX')
>>
>>
>> **************************************************
>> Your plan says that GTT_TAB has an estimated 8G rows and is accessed
>> with a full table scan. Since your update uses a correlated subquery,
>> I would try indexing the GTT. The other thing I notice is the "gc cr
>> grant 2-way (1)" event. That means that PRODUCT_TAB blocks are being
>> requested from another node; you are essentially coordinating with the
>> other node, and that is slow. Can you perform the operation on the
>> same node where the table is being modified?
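>>
>> For instance (a sketch only -- TID is my assumption for the
>> correlation column; also note that an index on a global temporary
>> table can only be created while no session has rows in it):
>>
>>   CREATE INDEX gtt_tab_ix1 ON gtt_tab (tid);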
>>
>> On 5/2/21 2:53 PM, Lok P wrote:
>>
>> Hello. We have a PL/SQL procedure that updates multiple columns of a
>> global temporary table after loading it in several steps. The table is
>> updated for different filter criteria, and each of these UPDATEs runs
>> for hours even though the statements themselves are simple. I am
>> trying to see whether this can be done more efficiently by combining
>> all the different updates into one UPDATE statement, or by some other
>> method. One initial thought was to do it as part of the INSERT itself
>> rather than updating through multiple statements at a later stage, but
>> the data load/INSERT happens under multiple if/else conditions, which
>> may not be straightforward.
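>>
>> The kind of combining I had in mind would be a single pass using CASE
>> expressions (a rough sketch only -- the columns TYP, FLAG1 and FLAG2
>> are made up for illustration):
>>
>>   UPDATE gtt_tab g
>>      SET g.flag1 = CASE WHEN g.typ = 'A' THEN 'Y' ELSE g.flag1 END,
>>          g.flag2 = CASE WHEN g.typ = 'B' THEN 'Y' ELSE g.flag2 END;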
>>
>> Attached are the SQL monitors for a few of the long-running UPDATEs.
>> I can see the time is not really spent updating rows but rather in
>> finding the eligible rows, i.e. the SELECT part is what takes the
>> time. So I wanted to understand how these UPDATEs can be made faster.
>>
>> It is version 11.2.0.4 of Oracle on Exadata X5.
>>
>> Regards
>>
>> Lok
>>
>> --
>> Mladen Gogala
>> Database Consultant
>> Tel: (347) 321-1217
>> https://dbwhisperer.wordpress.com
>>
>> -- http://www.freelists.org/webpage/oracle-l
>
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Mon May 03 2021 - 10:02:21 CEST
