RE: Increased PGA requirements for SORTs in 19c?

From: Mark W. Farnham <mwf_at_rsiz.com>
Date: Sat, 5 Dec 2020 08:59:10 -0500
Message-ID: <0d8d01d6cb0e$cece3710$6c6aa530$_at_rsiz.com>



something like  

select max(a), max(b)

   from (select count(distinct a) a, 0 b from tab

         union all

         select 0 a, count(distinct b) b from tab)

so each select can use the fastest method to get the distinct count of its column, and the union all has only as many rows to combine as there are columns. Each independent distinct then needs only whatever temporary memory its own plan requires (possibly nearly nothing), and its single-row result set also costs nearly nothing for the union all to operate on. You might need to cast the literal to match the type returned by count.

direct from fingers, didn’t test syntax, just couldn’t work out your missing word or words after “the” so checking if I understood your meaning.  

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of Jonathan Lewis
Sent: Thursday, December 03, 2020 7:49 AM
To: Tefft, Michael J
Cc: ORACLE-L
Subject: Re: Increased PGA requirements for SORTs in 19c?    

Michael,  

Thanks for the comments, possibly the non-appearance of the "hash group by" relates to something in

https://jonathanlewis.wordpress.com/2018/07/14/quiz-night-33/  

A thought I've had about multiple count(distinct) in a select query is that there may be cases where you could benefit by rewriting the query first as a union all of queries that each handle one of the count(distincts) - since each would then be subject to the and then aggregate across the union all.  

Regards

Jonathan Lewis        

On Thu, 3 Dec 2020 at 12:37, Tefft, Michael J <Michael.J.Tefft_at_snapon.com> wrote:

As a separate workaround, I was trying to force the queries to use hash aggregation instead of sort aggregation. I was able to get one query to use HASH GROUP BY (which also restored good performance) but was unable to get another to bend to my will. This is why I was searching for hash aggregation articles.
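A minimal sketch of the kind of hinting this refers to, using Oracle's USE_HASH_AGGREGATION hint; the table and column names are invented for illustration, not taken from the actual query:

```sql
-- Sketch only (illustrative names): USE_HASH_AGGREGATION asks the
-- optimizer to aggregate with HASH GROUP BY instead of SORT GROUP BY.
select /*+ use_hash_aggregation */
       grp, max(val)
from   t
group by grp;
```

Whether the optimizer honours the hint depends on the plan shape; as noted above, some queries may still refuse to switch.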

I did get results from the sort traces, in our non-production 19c environment. I had not yet attempted to get them from our 12.2 environment, which is production and is subject to change-control procedures. In the meantime I found this solution.  

I spent years as a data modeler; to see such columns defined as VARCHAR2(4000) when they would never need to be such a size is…disappointing.  

If I have time, I will try to construct a simple test case.  

Mike  

From: Jonathan Lewis [mailto:jlewisoracle_at_gmail.com]
Sent: Thursday, December 3, 2020 7:23 AM
To: Tefft, Michael J <Michael.J.Tefft_at_snapon.com>
Cc: ORACLE-L <oracle-l_at_freelists.org>
Subject: Re: Increased PGA requirements for SORTs in 19c?


That's very interesting,

Thanks for posting your conclusions.

Do you have a small model / test case that you could post for other people to play around with?

Since you mention hash aggregation, does that mean you didn't actually get any results from the 10032 / 10033 traces, which are for sorting only, not for hashing?

Regards

Jonathan Lewis      

On Thu, 3 Dec 2020 at 11:50, Tefft, Michael J <Michael.J.Tefft_at_snapon.com> wrote:

Thanks to all for your advice.  

We have found a workaround for this issue.  

The query involves a view that uses a GROUP BY to 'pivot' a table's VARCHAR2(4000) column into over 20 copies of this column. The data in the column never has more than 33 characters. But the optimizer seems to be using the maximum (declared) column size to determine its memory needs - which has spilled over to TEMP demands as well.  

This seems to be a change in behavior from 12.2 to 19c. The disparity in memory allocation suggests that the previous version probably used the column statistics to plan the memory/temp requirement: we observed roughly a 100x increase in PGA+TEMP and the difference between 'declared' data size and statistics of the column data size is also roughly 100x.  
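A minimal sketch of a test case for this behaviour, assuming invented names throughout: short strings stored in a VARCHAR2(4000) column, then 'pivoted' into many copies under a GROUP BY, as described above:

```sql
-- Sketch of a possible test case (all names invented):
-- data never exceeds ~33 characters, but the column is
-- declared VARCHAR2(4000).
create table t (
  id   number,
  tag  varchar2(30),
  val  varchar2(4000)
);

insert into t
select mod(rownum, 1000),
       'TAG' || mod(rownum, 20),
       rpad('x', 33, 'x')
from   dual
connect by level <= 100000;

commit;

-- The view-style pivot: one MAX(CASE ...) per copy of the column.
select id,
       max(case when tag = 'TAG0' then val end) c0,
       max(case when tag = 'TAG1' then val end) c1,
       max(case when tag = 'TAG2' then val end) c2
       -- ... repeated for 20+ columns in the real view
from   t
group by id;
```

Comparing PGA and TEMP usage for this query on 12.2 versus 19c (and with the column re-declared as VARCHAR2(40)) would show whether memory is sized from the declared length or from the column statistics.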

--
http://www.freelists.org/webpage/oracle-l
Received on Sat Dec 05 2020 - 14:59:10 CET
