RE: Parallel Query Performance Issues

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Mon, 13 Jun 2022 14:19:20 -0400
Message-ID: <17d201d87f52$1a51e490$4ef5adb0$_at_rsiz.com>



Quite possibly the date pruning is important, and if so, then:  

select distinct dc.date_id from REPORT_ENGINE.date_code dc

   where dc.date_value BETWEEN TO_DATE('04/01/2017', 'MM/DD/YYYY')

                                    AND TO_DATE('04/28/2022', 'MM/DD/YYYY');

 

let’s pretend and call that dd for distinct date.  

then

select document_id, client_role_id, publication_date_id from REPORT_ENGINE.lit_fact_bd rd

where rd.year_number = to_number('2018')

     and rd.firm_id in (select FIRM.firm_id from report_engine.firm FIRM where FIRM.ultimate_parent_firm_id = to_number('252094'))

     and rd.publication_date_id in (select date_id from dd);  

prunes all that by filtering before it is tossed into the hash for all the other joins. IF the CBO doesn’t have stats that let this just happen as re-written, a no_merge hint can make the pieces occur.  
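A sketch of that rewrite (untested, typed straight off the fingers like the rest of this; table and column names as given above) with the no_merge hint in place might look like:

```sql
-- dd is the distinct-date piece; NO_MERGE asks the CBO to keep it as a
-- separate query block instead of merging it into the outer query.
with dd as (
  select /*+ no_merge */ distinct dc.date_id
    from report_engine.date_code dc
   where dc.date_value between to_date('04/01/2017', 'MM/DD/YYYY')
                           and to_date('04/28/2022', 'MM/DD/YYYY')
)
select rd.document_id, rd.client_role_id, rd.publication_date_id
  from report_engine.lit_fact_bd rd
 where rd.year_number = 2018
   and rd.firm_id in (select f.firm_id
                        from report_engine.firm f
                       where f.ultimate_parent_firm_id = 252094)
   and rd.publication_date_id in (select date_id from dd);
```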

Slenderized thus, the details of the broadcast *MAY* become less important for either 2018 or 2019.  

And this also allows the possibility of cooking that as a materialized view as step one of a “batch” job, possibly doing statistics on the materialized view to help the CBO.  
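A sketch of that batch-job step (the materialized view name here is hypothetical, and the refresh policy is just one choice among several):

```sql
-- Step one of the "batch" job: cook the pruned 2018 rows into a
-- materialized view, then gather stats on it to help the CBO.
create materialized view report_engine.mv_lit_fact_2018
  build immediate
  refresh complete on demand
as
select document_id, client_role_id, publication_date_id
  from report_engine.lit_fact_bd rd
 where rd.year_number = 2018
   and rd.firm_id in (select f.firm_id
                        from report_engine.firm f
                       where f.ultimate_parent_firm_id = 252094);

begin
  dbms_stats.gather_table_stats(
    ownname => 'REPORT_ENGINE',
    tabname => 'MV_LIT_FACT_2018');
end;
/
```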

okay, I’m stepping aside. I’m supposed to be retired, but I still find these puzzles fun.      

From: Mark W. Farnham [mailto:mwf_at_rsiz.com]
Sent: Monday, June 13, 2022 1:06 PM
To: 'Goti'
Cc: 'ORACLE-L'
Subject: RE: Parallel Query Performance Issues  

So that is plenty small enough to be done as a materialized inline view, with just the column you need. That would be one less thing for the optimizer to figure out.  

My experience is that the fewer things the optimizer has to figure out, the better and more consistent the results.  

Now further, if

select count(*) from REPORT_ENGINE.lit_fact_bd rd

   where rd.year_number = to_number('2018')

     and rd.firm_id in (select FIRM.firm_id from report_engine.firm FIRM where FIRM.ultimate_parent_firm_id = to_number('252094'));  

the count(*) is reasonably small, pre-cook that as a materialized inline view:  

select document_id, client_role_id, publication_date_id from REPORT_ENGINE.lit_fact_bd rd

where rd.year_number = to_number('2018')

     and rd.firm_id in (select FIRM.firm_id from report_engine.firm FIRM where FIRM.ultimate_parent_firm_id = to_number('252094'));  

This reduces the join that builds the parenthesized rd (reusing that name) to a simple two-way join on rd.publication_date_id = dc.date_id.  

With a neon sign painted for the CBO to work that out, I suspect it will do a better job on the rest. Again, watch out for typos. This is just off my fingers in text.  

But maybe you can’t change the code, which is a different issue.  

From: Goti [mailto:aryan.goti_at_gmail.com]
Sent: Monday, June 13, 2022 12:25 PM
To: Mark W. Farnham
Cc: Laurentiu Oprea; ORACLE-L
Subject: Re: Parallel Query Performance Issues  

Hi Mark,  

SQL> select count(distinct FIRM.firm_id) from report_engine.firm FIRM where FIRM.ultimate_parent_firm_id = to_number('252094');

COUNT(DISTINCTFIRM.FIRM_ID)
---------------------------
                         43

 

Thanks,  

Goti    

On Mon, Jun 13, 2022 at 9:53 PM Mark W. Farnham <mwf_at_rsiz.com> wrote:

Just curious, what does:  

select count(distinct FIRM.firm_id) from report_engine.firm FIRM where FIRM.ultimate_parent_firm_id = to_number('252094');  

give you? (I hope I didn't make a typo; you could probably just use the number, but I was just cutting and pasting.)  

mwf  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Goti
Sent: Monday, June 13, 2022 11:00 AM
To: Laurentiu Oprea
Cc: ORACLE-L
Subject: Re: Parallel Query Performance Issues  

Hi Laurentiu,  

Thanks again!  

The SQL is still running slow and consumes more than 100GB of temp.  

https://gist.github.com/aryangoti/f2d0f6b8408a924fbf25673eb3929732    

Thanks,  

Goti    

On Mon, Jun 13, 2022 at 7:49 PM Laurentiu Oprea <laurentiu.oprea06_at_gmail.com> wrote:

at this stage I would say let's just add them as hints, something like:  

SELECT
/*+ PARALLEL(8) PQ_DISTRIBUTE(@"SEL$B62753A3" "OP"@"SEL$1" HASH HASH) PQ_DISTRIBUTE(@"SEL$B62753A3" "OPF"@"SEL$5" HASH HASH) */

DISTINCT .... from (..)  

and then update us with the outcome, similar to the previous run  

Thanks  

On Mon, 13 Jun 2022 at 17:13, Goti <aryan.goti_at_gmail.com> wrote:

Hi Laurentiu,  

Thanks for the response. Do you want me to add the below hints to the existing outline and execute the query?  

      PQ_DISTRIBUTE(@"SEL$B62753A3" "OP"@"SEL$1" HASH HASH)

      PQ_DISTRIBUTE(@"SEL$B62753A3" "OPF"@"SEL$5" HASH HASH)  

Thanks,  

Goti    

On Mon, Jun 13, 2022 at 6:54 PM Laurentiu Oprea <laurentiu.oprea06_at_gmail.com> wrote:

what is the outcome if you add the next hints:  

      PQ_DISTRIBUTE(@"SEL$B62753A3" "OP"@"SEL$1" HASH HASH)
      PQ_DISTRIBUTE(@"SEL$B62753A3" "OPF"@"SEL$5" HASH HASH)

 

On Mon, 13 Jun 2022 at 13:02, Goti <aryan.goti_at_gmail.com> wrote:

Thanks Andy and Jonathan.  

I did change _parallel_broadcast_enabled to TRUE to get "PX BROADCAST" in the plan, but it still doesn't improve the response time of the SQL. Can you please help me identify why step 38 shows 495M actual rows whereas Oracle estimates it to be 1 row? Below are the gist details.    

https://gist.github.com/aryangoti/ec2804a7b832a7fe606ec0bf6a0681b7  

Thanks,  

Goti    

On Thu, Jun 9, 2022 at 8:15 PM Andy Sayer <andysayer_at_gmail.com> wrote:

Just quick thoughts - replace the DISTINCTs with GROUP BY; this might allow group by placement to happen for you.  
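For example (some_table and its columns are placeholders), the two forms below return the same rows, but the GROUP BY version is eligible for the group-by-placement transformation:

```sql
-- DISTINCT form
select distinct document_id, client_role_id
  from some_table;

-- Equivalent GROUP BY form
select document_id, client_role_id
  from some_table
 group by document_id, client_role_id;
```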

The inner distinct doesn't seem to be executed as a distinct; there might be clues in the outline if it's decided that it only needs to do a sort.  

I’ll have a closer look when I can  

Thanks,

Andy  

On Thu, 9 Jun 2022 at 15:39, Goti <aryan.goti_at_gmail.com> wrote:

Thanks Jonathan for the quick response!  

I tried for the first 2 workarounds and that didn't work as expected. I will work on the 3rd and 4th action plan and update here.  

Thanks,  

Goti    

On Thu, Jun 9, 2022 at 5:41 PM Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:  

The two queries may return the same size result, but the 2019 report generates and aggregates roughly 12 times as much data as the 2018 report. Check the "Actual Rows" figures - the 2018 report hits 3M rows (and 3M execs of the subsequent table probes) while the 2019 report hits 39M rows/execs - and that's where a lot of time goes on CPU.  

Strangely, (almost) all the data is passed to one PX server (at operation 13/14, I think) that blows it up through segment NL joins to get most of the 39M rows that have to be "buffer sorted" (i.e. buffered, but not actually sorted), which is where the temp space and I/O time goes.   

Possible workarounds

  • MAYBE if you tried parallel 7 rather than 8 the hash distribution at that operation MIGHT be better balanced;
  • MAYBE if you set "_gby_hash_aggregation_enabled" to false and got a SORT UNIQUE instead of a HASH UNIQUE the distribution would work better;
  • if you get the outline information for the plan you should be able to find the pq_distribute hint that controls the distribution at operation 14 and change it from a hash distribution to a round-robin - this will probably introduce a second layer of aggregation/uniqueness, but two small, shared stages may well do better than one very large operation;
  • can you rewrite the query to eliminate duplication earlier? This may require you to include inline non-mergeable views: ideally you want to avoid generating 39M rows at any point and then executing 39M join steps, as that will still account for a lot of your time.
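For the second suggestion above, a minimal sketch of trying the parameter at session scope (underscore parameters should normally only be changed under Oracle Support guidance, and only for testing):

```sql
-- Session-scoped experiment only; revert by setting it back to true
-- or starting a new session.
alter session set "_gby_hash_aggregation_enabled" = false;
```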

Regards

Jonathan Lewis        

On Thu, 9 Jun 2022 at 12:15, Goti <aryan.goti_at_gmail.com> wrote:

Environment : 11.2.0.4 database running on Linux.  

Need help to understand parallel query performance issues. Below are the query details and their associated plans. The 2018_query executes in 24 seconds and returns about 2.5K rows. The 2019_query is also expected to process almost the same number of rows, however it consumes a lot of TEMP space and finally fails. The 2019_query without parallel completes in 45 minutes (just by removing the parallel hint). The only difference between the two queries is the predicate "opclf.year_number = to_number('YYYY')". The stats are up to date for the tables and partitions.    

2019_query: https://gist.github.com/aryangoti/a7704a8075f118f7d942e49acee1900d  

2018_query: https://gist.github.com/aryangoti/a7704a8075f118f7d942e49acee1900d  

Stats and other details: https://gist.github.com/aryangoti/a3797424ce0cb4fd87e194c05ad099b6  

Thanks,  

Goti

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Jun 13 2022 - 20:19:20 CEST
