RE: Increased PGA requirements for SORTs in 19c?

From: Tefft, Michael J <Michael.J.Tefft_at_snapon.com>
Date: Thu, 3 Dec 2020 11:50:40 +0000
Message-ID: <c1487b8d6a554e228970663dc751a722_at_snapon.com>



Thanks to all for your advice.

We have found a workaround for this issue.

The query involves a view that uses a GROUP BY to 'pivot' a table's VARCHAR2(4000) column into more than 20 copies of that column. The data in the column never exceeds 33 characters, but the optimizer seems to be using the maximum (declared) column size to determine its memory needs - which spills over into TEMP demands as well.
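For illustration, the pattern looks roughly like this (a minimal sketch with made-up table and column names, not our actual schema):

create table src
( id         number
, attr_name  varchar2(30)
, attr_value varchar2(4000)  -- declared 4000, actual data never more than 33 characters
);

create or replace view src_pivot as
select id,
       max(case when attr_name = 'ATTR01' then attr_value end) as attr01,
       max(case when attr_name = 'ATTR02' then attr_value end) as attr02,
       -- ... more than 20 such copies of the VARCHAR2(4000) column ...
       max(case when attr_name = 'ATTR20' then attr_value end) as attr20
from   src
group  by id;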

This seems to be a change in behavior from 12.2 to 19c. The disparity in memory allocation suggests that the previous version probably used the column statistics to plan the memory/temp requirement: we observed roughly a 100x increase in PGA+TEMP, and the difference between the declared column size and the statistics-based column size is also roughly 100x.
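You can see that gap directly in the dictionary; for example (using the made-up names from the sketch above):

select column_name,
       data_length,  -- declared maximum, in bytes
       avg_col_len   -- average actual length from column statistics, in bytes
from   user_tab_columns
where  table_name = 'SRC'
and    column_name = 'ATTR_VALUE';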

Our workaround: We changed the definition of the table's VARCHAR2(4000) column to VARCHAR2(100) and now performance (response time, PGA usage, TEMP usage) is acceptable.
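For reference, the change itself was nothing more exotic than this (made-up names again; the MODIFY succeeds only if no stored value is longer than the new limit):

alter table src modify (attr_value varchar2(100));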

By the way, this is the article that got me thinking in this direction: https://oracle-randolf.blogspot.com/2011/01/hash-aggregation.html in particular, these observations:

> The fundamental defect is that it is not able to dynamically resize to a larger workarea size when using automatic PGA management and therefore remains more or less at its initial expected size based on the estimates at optimization time.

> This effectively means that the efficiency of the hash aggregation operation when using automatic PGA management is heavily dependent on the cardinality estimates at optimization time - in case of estimates in the right ballpark, the memory used at execution time will correspond to the actual requirements at runtime, but in case of bad estimates, the operation potentially uses far less memory than available and unnecessarily spills to disk.

My thinking was: perhaps Oracle changed/fixed the optimizer to be less optimistic in estimating memory needs - using declared (maximum) column sizes rather than statistics - which caused the increased memory demands. (Note that such a change would not necessarily be intentional...)

Thanks again. The suggestions here were helpful in diagnosing this issue and gave me some new additions for my toolbox.

Mike Tefft

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Jonathan Lewis
Sent: Friday, November 20, 2020 11:19 AM
To: ORACLE-L <oracle-l_at_freelists.org>
Subject: Re: Increased PGA requirements for SORTs in 19c?


I haven't tested it for a long time but the following might work:

SQL> alter system set events 'sort_end[sql:42v4z1dh3255x]';

System altered.

SQL> alter system set events 'sort_run[sql:42v4z1dh3255x]';

System altered.

sort_end is the 10032
sort_run is the 10033
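Presumably the same scope syntax accepts 'off' to disable them again (untested, like the above), and v$diag_info shows where the trace files are written:

alter system set events 'sort_end[sql:42v4z1dh3255x] off';
alter system set events 'sort_run[sql:42v4z1dh3255x] off';

select value from v$diag_info where name = 'Diag Trace';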

Regards
Jonathan Lewis

On Fri, 20 Nov 2020 at 15:37, Tefft, Michael J <Michael.J.Tefft_at_snapon.com> wrote:

I have used

alter system set events 'sql_trace[sql: 42v4z1dh3255x] wait=true,bind=true';

to capture the 10046 trace from executions ‘in the wild’. I have also seen a similar ability to capture the 10053 trace:

alter system set events 'trace[sql_optimizer.*] [sql:42v4z1dh3255x]';

So… Is there a similar method to capture 10032 and 10033 traces for a specific sql_id?

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Jonathan Lewis
Sent: Friday, November 20, 2020 7:42 AM
To: ORACLE-L <oracle-l_at_freelists.org>
Subject: Re: Increased PGA requirements for SORTs in 19c?


Add 10033 to that for reporting spills to disc, otherwise the 10032 will only show you the final memory used during the last merge, and that could be very deceptive.

Regards
Jonathan Lewis

On Fri, 20 Nov 2020 at 12:29, Sayan Malakshinov <xt.and.r_at_gmail.com> wrote:

Hi Mike,

That's not enough info for a deep, detailed analysis. Could you trace your query with events 10046 and 10032 (sort statistics) enabled and provide the raw trace files, please? Or at least RTSM (Real-Time SQL Monitoring) reports. You can also analyze v$sql_workarea (and v$sql_workarea_active).
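For example, something along these lines (a sketch) compares the optimizer's estimated workarea size with what the operation actually used and spilled:

select operation_type,
       policy,
       round(estimated_optimal_size/1024/1024) as est_optimal_mb,
       round(last_memory_used/1024/1024)       as last_mem_mb,
       round(last_tempseg_size/1024/1024)      as last_temp_mb,
       optimal_executions,
       onepass_executions,
       multipasses_executions
from   v$sql_workarea
where  sql_id = '42v4z1dh3255x';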

Best regards,
Sayan Malakshinov
Oracle performance tuning expert
Oracle Database Developer Choice Award winner
Oracle ACE Associate
http://orasql.org
