Re: high temp space usage for same analytic function on latest version

From: Sayan Malakshinov <xt.and.r_at_gmail.com>
Date: Mon, 14 Nov 2022 10:49:41 +0000
Message-ID: <CAOVevU5ZAGAh_gut+8Vnj0Qb9dfxv8=4NNhSTdOe+3gL4pM40Q_at_mail.gmail.com>



Hi Yudhi,

I noticed this in the projection section:

 "T"."C_NBR"[VARCHAR2,100]
 "C"."LOW_VALUE"[VARCHAR2,4000]
 "C"."HIGH_VALUE"[VARCHAR2,4000]

and in the predicate section:

filter(("T"."C_NBR"<="C"."HIGH_VALUE" AND "T"."C_NBR">="C"."LOW_VALUE"))

Then I checked your query and found this:

                      RPAD (
                         SUBSTR (CP.CP_NBR, 1,   CP.CPCL_NBR - X.RN),
                         CASE CP.PANL_CNT
                            WHEN 0 THEN 16
                            ELSE CP.PANL_CNT
                         END,  '0')  AS LOW_VALUE,
                      RPAD (
                         SUBSTR (CP.CP_NBR,  1, CP.CPCL_NBR - X.RN),
                         CASE CP.PANL_CNT
                            WHEN 0 THEN 16
                            ELSE CP.PANL_CNT
                         END, '9')   AS HIGH_VALUE

So what is the maximum of CP.PANL_CNT?
It would probably be better to wrap it in cast(RPAD(...) as varchar2(max_length)). Judging from the predicate, it obviously shouldn't be longer than 100.
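For example, a minimal sketch of the CAST wrapper for one of the two expressions (the 100-byte limit is my assumption, inferred from T.C_NBR's declared length in the projection, not from your DDL):

```sql
-- Wrapping RPAD in an explicit CAST so the projected column is
-- VARCHAR2(100) instead of the default VARCHAR2(4000).
-- Assumption: 100 matches the real maximum length of T.C_NBR.
CAST (
   RPAD (
      SUBSTR (CP.CP_NBR, 1, CP.CPCL_NBR - X.RN),
      CASE CP.PANL_CNT WHEN 0 THEN 16 ELSE CP.PANL_CNT END,
      '0')
   AS VARCHAR2 (100)) AS LOW_VALUE
```

The same wrapper applies to HIGH_VALUE. Since workarea sizing is based on the declared column widths in the projection, shrinking 4000 down to 100 can reduce the temp space the hash/sort operations reserve per row.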

On Mon, Nov 14, 2022 at 10:02 AM Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:

>
> A quick throwaway comment before I look at the latest github stuff.
> Your select is running parallel, but the insert is serialising - have you
> enabled parallel DML, do you want the insert to go parallel as well?
> If the insert runs parallel then some of the mechanism of the plan may
> change anyway and the problem might just go away.
>
> Regards
> Jonathan Lewis
>
>
> On Sun, 13 Nov 2022 at 14:28, yudhi s <learnerdatabase99_at_gmail.com> wrote:
>
>> Thank you Lok. Actually the join is not exactly on the columns but on
>> some functions of them, which may be why it looks like a Cartesian join.
>>
>> However, the point is that this query has not been changed. I agree that
>> the data may have increased a little in one of the tables 'C' or 'T', so
>> the join output has grown from 7 billion to 10 billion rows, but how does
>> that justify the temp space requirement going from 20 GB to 2.5 TB? That
>> is not twice or three times, but many times over.
>>
>> This query is running at parallel degree 16 because of an underlying
>> table's degree. With no other option at hand, I was wondering whether
>> increasing or decreasing the parallelism with hints would help the query
>> complete with less temp space (we currently have 2.5 TB of temp space).
>> Or is there any other possible option to make this query succeed?
>>
>>

-- 
Best regards,
Sayan Malakshinov
Oracle performance tuning engineer
Oracle ACE
http://orasql.org

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Nov 14 2022 - 11:49:41 CET