Re: Stats collection issue in 19C

From: Pap <oracle.developer35_at_gmail.com>
Date: Wed, 9 Nov 2022 00:20:15 +0530
Message-ID: <CAEjw_fjQE=jbL=UuLpNahupSMd_EZL63gT95XvTSj-XhXc+j1g_at_mail.gmail.com>



Thank you so much, Jonathan, for the details. It looks like the clue you pointed to may well be the cause here.

The "APPROXIMATE_NDV_ALGORITHM" preference is set to "REPEAT OR HYPERLOGLOG" at both the global level and the table level, and all the other preferences are also identical across the two databases. However, in dba_part_col_statistics the NOTES column shows "HYPERLOGLOG" for all partitions in the database where the gather runs fast, whereas in the other database it shows "HYPERLOGLOG" for most partitions but NULL for a couple of them. I am wondering if those couple of partitions are the culprit, making Oracle go for all partitions?
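To pin down which partitions carry the odd NOTES values, something like the below sketch might help (USER1/TAB are placeholders for the real owner and table; only the documented DBA_PART_COL_STATISTICS view is used):

SELECT partition_name,
       column_name,
       notes
FROM   dba_part_col_statistics
WHERE  owner      = 'USER1'
AND    table_name = 'TAB'
AND    (notes IS NULL OR notes NOT LIKE '%HYPERLOGLOG%')
ORDER  BY partition_name, column_name;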

For another table showing similar symptoms, the NOTES column (i.e. the approximate NDV algorithm used) shows a mixture of "ADAPTIVE_SAMPLING" and "HYPERLOGLOG".

As for the cause of this issue, it's a mystery to us how this happened in the first place. Or is it expected when migrating from a lower version to 19c, since 'hyperloglog' is new for us in 19c and the old algorithm was "ADAPTIVE_SAMPLING"? But then why is it also showing NULL for a few columns?

But, as you rightly mentioned, "*The default for preference "approximate_ndv_algorithm" is "repeat or hyperloglog", so if you have a mixture it won't ever clear out.*" These are partitioned tables, terabytes in size, with indexes on them. So is there any alternative to deleting the stats completely and gathering them again from scratch for these tables?

Say, should we try deleting stats from just the specific partitions whose NOTES is NULL or "ADAPTIVE_SAMPLING", and then regather, assuming the same preference will then take effect? Something like the below:

EXEC DBMS_STATS.DELETE_TABLE_STATS (OWNNAME => 'USER1', TABNAME => 'TAB', PARTNAME => 'TAB_20220101');
EXEC DBMS_STATS.GATHER_TABLE_STATS (OWNNAME => 'USER1', TABNAME => 'TAB', PARTNAME => 'TAB_20220101');
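If that per-partition approach is sound, the same delete/gather pair could be driven from the dictionary in one block. A sketch only (USER1/TAB are placeholders, and whether deleting just these partitions clears the mixed synopses is exactly the open question):

BEGIN
  -- target only the partitions whose NOTES is NULL or ADAPTIVE_SAMPLING
  FOR p IN (SELECT DISTINCT partition_name
            FROM   dba_part_col_statistics
            WHERE  owner      = 'USER1'
            AND    table_name = 'TAB'
            AND    (notes IS NULL OR notes LIKE '%ADAPTIVE_SAMPLING%'))
  LOOP
    DBMS_STATS.DELETE_TABLE_STATS(ownname => 'USER1', tabname => 'TAB',
                                  partname => p.partition_name);
    DBMS_STATS.GATHER_TABLE_STATS(ownname => 'USER1', tabname => 'TAB',
                                  partname => p.partition_name);
  END LOOP;
END;
/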



On Tue, Nov 8, 2022 at 5:23 PM Jonathan Lewis <jlewisoracle_at_gmail.com> wrote:

>
> Neil beat me to it with the dangers of REPEAT, which I described and
> demonstrated a few years ago here:
> https://jonathanlewis.wordpress.com/2017/06/01/histogram-upgrade-2/
>
> Tim's script is dated Dec 2013, by the way, and reports only a few of the
> 41 possible table preferences (some of which are undocumented and not for
> end-user consumption, of course). A better script for more recent versions
> of Oracle is this one:
> https://jonathanlewis.wordpress.com/2021/08/09/preferences/
>
> I don't know why you're seeing different behaviour in different databases
> - though as Lothar points out, the slow SQL is gathering stats for the
> table as a whole while the fast one is gathering for a single partition.
> The significance of the NDV_HLL with all the SYN that appear in the slow
> case MIGHT be an indication that you have a mixture of the old
> approximate_ndv method (possibly even using the large synopses) and the new
> Hyper Log Log approximate_ndv method (with tiny synopses) and that the code
> can't (remember this is hypothetical) create a global histogram from
> combining mixed synopses. The default for preference
> "approximate_ndv_algorithm" is "repeat or hyperloglog", so if you have a
> mixture it won't ever clear out.
>
> If working through all the details to find the actual problem is too
> difficult, you could simply try deleting all the statistics for the table
> and then gathering them again
> with a method_opt that will create simple stats for most columns and
> recreate histograms only for the columns that currently have them using
> "size 254" for those columns. There are examples of the syntax in another
> blog note I wrote about the trouble I have getting the method_opt right
> first time: https://jonathanlewis.wordpress.com/2018/01/18/column-stats/
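> A sketch of the kind of call described, assuming COL1 and COL2 stand for
> the columns that currently have histograms (all names are placeholders):
>
> EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'USER1', tabname => 'TAB1');
> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'USER1', tabname => 'TAB1', -
>      method_opt => 'for all columns size 1 for columns size 254 COL1 COL2');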
>
> Regards
> Jonathan Lewis
>
>
>
>
>
>
> On Tue, 8 Nov 2022 at 11:01, Neil Chandler <neil_chandler_at_hotmail.com>
> wrote:
>
>> using “REPEAT” is problematic from 12.1 onwards.
>>
>> Oracle changed the algorithm between 11.2 and 12.1: where it used to
>> (very usefully) maintain the current histogram set, it now limits the
>> number of buckets on regeneration to the current maximum set on the
>> histogram. If your data changes and the number of unique values increases
>> (or worse, decreases then increases), the histogram can change type from
>> frequency to hybrid. It is also subject to unexpected degradation if it's
>> already hybrid and an unlucky sample selects fewer distinct values than
>> the current number of buckets.
>>
>> REPEAT should be avoided from 12.1 onwards. With incremental stats on
>> partitioned tables, it is safest to be explicit with your histograms so
>> queries with new predicates do not cause new histograms to be created
>> (causing all partitions to be scanned for the stats gather.)
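>> As a sketch, "being explicit" could be fixed as a table preference so
>> every gather uses it (COL1/COL2 are placeholders for the columns that
>> genuinely need histograms):
>>
>> EXEC DBMS_STATS.SET_TABLE_PREFS('USER1', 'TAB1', 'METHOD_OPT', -
>>      'FOR ALL COLUMNS SIZE 1 FOR COLUMNS SIZE 254 COL1 COL2');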
>>
>> Regards
>>
>> Neil.
>> sent from my phone
>>
>> On 8 Nov 2022, at 10:42, Lothar Flatz <l.flatz_at_bluewin.ch> wrote:
>>
>> 
>> Hi,
>>
>> So it looks as if method_opt is not on "repeat".
>> I think the safest would really be something like "for all columns size
>> 1, for columns size 254 COL1, COLUMN2".
>> But repeat should also work.
>>
>> Thanks
>>
>> Lothar
>>
>> Am 08.11.2022 um 11:31 schrieb Pap:
>>
>> Thank you Lothar.
>>
>> The global preferences are below in both databases. They are the same in
>> both databases and are all defaults:
>>
>> AUTOSTATS_TARGET    AUTO
>> CASCADE             DBMS_STATS.AUTO_CASCADE
>> DEGREE              NULL
>> ESTIMATE_PERCENT    DBMS_STATS.AUTO_SAMPLE_SIZE
>> METHOD_OPT          FOR ALL COLUMNS SIZE AUTO
>> NO_INVALIDATE       DBMS_STATS.AUTO_INVALIDATE
>> GRANULARITY         AUTO
>> PUBLISH             TRUE
>> INCREMENTAL         FALSE
>> STALE_PERCENT       10
>>
>> As I notice, we only have frequency histograms on two columns, and each
>> of them has one distinct value. So do you say we can now collect the
>> stats by setting method_opt to, say, "for all columns size 1, for columns
>> size 254 COL1, COLUMN2", and then do another round of collection to see
>> whether incremental stats is really kicking in and it's not collecting
>> across all partitions as it used to? And yes, we are also following up
>> with Oracle on this issue.
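>> One way to check whether the second round really is incremental would be
>> to confirm that only the changed partitions get a fresh LAST_ANALYZED
>> (documented views only; USER1/TAB1 are placeholders):
>>
>> SELECT partition_name, last_analyzed, stale_stats
>> FROM   dba_tab_statistics
>> WHERE  owner = 'USER1'
>> AND    table_name = 'TAB1'
>> AND    partition_name IS NOT NULL
>> ORDER  BY partition_position;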
>>
>>
>>
>> On Tue, 8 Nov, 2022, 3:06 pm Lothar Flatz, <l.flatz_at_bluewin.ch> wrote:
>>
>>> Hi,
>>>
>>> it seems the quick execution is doing one partition, but the slow
>>> execution is doing all partitions. The actual row counts also support
>>> that.
>>> The condition
>>> TBL$OR$IDX$PART$NUM ("USER1"."TAB1",
>>>
>>> 0,
>>> 4,
>>> 0,
>>> "ROWID") = :objn
>>>
>>> is identifying one partition if Morgan's Library is right: https://morganslibrary.org/reference/undocumented.html
>>> Thus it is probably true that all partitions are done in the slow run.
>>> What triggers it? I don't know either.
>>> Maybe it helps to list all preferences: https://oracle-base.com/dba/script?category=monitoring&file=statistics_prefs.sql
>>> There might be a trace to diagnose the decision. Did you open an SR?
>>> You can experiment by fixing the histogram preferences, then starting a gather and breaking it once you know the recursive statement.
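>>> For a quick check without the full script, the documented
>>> DBMS_STATS.GET_PREFS can be queried directly; a sketch listing only the
>>> preferences relevant here (USER1/TAB1 are placeholders):
>>>
>>> SELECT column_value AS pname,
>>>        DBMS_STATS.GET_PREFS(column_value, 'USER1', 'TAB1') AS pvalue
>>> FROM   TABLE(sys.odcivarchar2list(
>>>          'METHOD_OPT', 'GRANULARITY', 'INCREMENTAL',
>>>          'INCREMENTAL_STALENESS', 'APPROXIMATE_NDV_ALGORITHM'));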
>>>
>>> Thanks
>>>
>>> Lothar
>>>
>>>
>>>
>>> Am 07.11.2022 um 16:09 schrieb Pap:
>>>
>>> Thank you Nenad, Lothar, Lok.
>>> Apologies for the late response. If it were just because of some new
>>> histogram creation, it would have happened once in a while; but in our
>>> case we are seeing the incremental stats running long (~1hr+) on every
>>> run in one database, while it is fast (<10 mins) in the other database
>>> with the same stats preferences and data volume. However, we were able
>>> to capture the recursive queries executed behind the stats gather proc
>>> in both the quick and the slow databases, and the difference comes down
>>> to one recursive sql I was able to get hold of. It's as below.
>>>
>>> Below is the recursive sql and its sql monitor from the long-running
>>> database:
>>> https://gist.github.com/oracle9999/e6ef3ef44fa9a7125ef38d93e33a3e5b
>>>
>>> Below is the recursive sql and its sql monitor from the quick
>>> database:
>>> https://gist.github.com/oracle9999/b13bd79b1f026eb486d8c75a7b08ac3c
>>>
>>> Some differences between the recursive sql texts executed behind the
>>> scenes for the quick vs the long execution are below.
>>>
>>> The quick execution has a from clause of "*FROM
>>> "USER1"."TAB1" t WHERE TBL$OR$IDX$PART$NUM ("USER1"."TAB1",0,4,0,"ROWID") =
>>> :objn*".
>>> The slow execution has a from clause of "*FROM "USER1"."TAB1" t*".
>>>
>>> The quick execution shows strings in the comment like "*SYN,NIL,NIL*",
>>> "*RWID, NDV_HLL,B67430*" etc. after the where clause.
>>> The slow execution shows strings in the comment like "*NDV,NIL,NIL*",
>>> "*TOPN,NIL,NIL*" etc. after the where clause.
>>>
>>> I believe the differences in the sql text above give some clue, but I
>>> am still not able to figure out what difference is making Oracle
>>> execute such different sql texts in the two cases.
>>>
>>> Table preferences are as below. METHOD_OPT is kept purposely as
>>> "REPEAT" rather than "AUTO", so that a new histogram automatically
>>> introduced by the optimizer does not end up triggering stats collection
>>> on these big tables. These tables are partitioned and tens of TB in
>>> size. Only DEGREE differs: it is 8 on the slow database vs 4 on the
>>> fast one, set higher just to finish the stats quicker, since the gather
>>> is running long because of this issue.
>>>
>>> GRANULARITY AUTO
>>> INCREMENTAL TRUE
>>> INCREMENTAL_STALENESS USE_STALE_PERCENT,USE_LOCKED_STATS
>>> METHOD_OPT FOR ALL COLUMNS SIZE REPEAT
>>>
>>>
>>> *Snippet of "FROM" clause of slow run:*
>>> FROM "USER1"."TAB1" t
>>> WHERE TBL$OR$IDX$PART$NUM ("USER1"."TAB1",0,4,0,"ROWID") = :objn /*
>>> SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,STOPN,NIL,NIL,STOPN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,SYN,NIL,NIL,RWID,
>>> NDV_HLL,
>>> B67430,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1U*/
>>>
>>> *Snippet of "FROM" clause of fast run:*
>>>
>>> FROM "USER1"."TAB1" t /*
>>> NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,TOPN,NIL,NIL,TOPN,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,NDV,NIL,NIL,RWID,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1,U1U*/
>>>
>>> On Tue, Oct 4, 2022 at 12:35 PM Nenad Noveljic <nenad.noveljic_at_gmail.com>
>>> wrote:
>>>
>>>> Could you check if new histograms were created:
>>>>
>>>> SELECT *
>>>> FROM table(dbms_stats.diff_table_stats_in_history(
>>>> ownname => 'USER',
>>>> tabname => 'TABLE',
>>>> time1 => localtimestamp,
>>>> time2 => localtimestamp-to_dsinterval('5 00:00:15'),
>>>> pctthreshold => 0
>>>> ));
>>>>
>>>> New histograms could have triggered the recreation of the synopses.
>>>>
>>>> Best regards,
>>>> Nenad
>>>>
>>>> Von meinem iPhone gesendet
>>>>
>>>> Am 04.10.2022 um 09:34 schrieb Lok P <loknath.73_at_gmail.com>:
>>>>
>>>> 
>>>> Hi Pap, I can't remember exactly the discussion (mostly from Jonathan
>>>> Lewis) that I read some time in the past, but you have to do
>>>> something like the below.
>>>>
>>>> Export your existing stats manually to a table, update the
>>>> global_stats flag to NO there, and then import it back into the data
>>>> dictionary.
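>>>> The skeleton of that export/adjust/import cycle would be something
>>>> like the below; the UPDATE of the flag itself is deliberately left
>>>> out because the stats-table layout is undocumented and
>>>> version-dependent (USER1/TAB1/STATS_BKP are placeholders):
>>>>
>>>> EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'USER1', stattab => 'STATS_BKP');
>>>> EXEC DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'USER1', tabname => 'TAB1', stattab => 'STATS_BKP');
>>>> -- edit the exported rows here, then re-import:
>>>> EXEC DBMS_STATS.IMPORT_TABLE_STATS(ownname => 'USER1', tabname => 'TAB1', stattab => 'STATS_BKP');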
>>>>
>>>> Or else you can pass the granularity parameter as 'partition' so that
>>>> the underlying SQL taking the time, i.e. the global column stats
>>>> gather, won't trigger, and your stats collection will be faster. But
>>>> then, although your partition stats will be accurate, they won't get
>>>> rolled up to the global level automatically. That's an issue for
>>>> queries relying on global statistics.
>>>>
>>>> Regarding your incremental stats, the global_stats flag will be YES
>>>> only. But, as you mentioned, it's still scanning the full table and
>>>> gathering column stats each time it runs. That may be because of a
>>>> change in the behaviour of the granularity value ALL between 11g and
>>>> 19c, so I would suggest you try AUTO.
>>>>
>>>>
>>>>
>>>>
>>>> On Tue, 4 Oct, 2022, 2:11 am Pap, <oracle.developer35_at_gmail.com> wrote:
>>>>
>>>>> Thank You Lothar.
>>>>>
>>>>> I am noticing something odd. We have many partitioned tables and are
>>>>> seeing this issue for a handful of them. Another thing I notice: for
>>>>> a table which has no table-level preferences set, and where we are
>>>>> just passing a hard-coded partition_name without any incremental
>>>>> setting, the GLOBAL_STATS column in dba_tables shows "YES". Ideally
>>>>> it should show "NO". I am not sure how that happened, or whether it
>>>>> is responsible for the stats being collected globally each time. How
>>>>> do we fix this?
>>>>>
>>>>> Also, the synopsis you mentioned relates to incremental stats only,
>>>>> and for the partitioned table with INCREMENTAL set to TRUE I see
>>>>> GRANULARITY set to 'ALL'. Is that the cause of it going for global
>>>>> column stats each time, and should we change it to AUTO? My worry is
>>>>> whether just switching GRANULARITY to AUTO will make Oracle
>>>>> scan/collect synopses from scratch for all the partitions again on
>>>>> the first stats run.
>>>>>
>>>>> Also, if I remember correctly, in the 11g period we hit an issue
>>>>> where GRANULARITY => AUTO was not collecting stats on the
>>>>> subpartitions, so we set it to ALL, and it worked perfectly with that
>>>>> setup on 11.2. I am not sure if something changed in 19c in that
>>>>> respect.
>>>>>
>>>>> On Tue, Oct 4, 2022 at 1:59 AM Lothar Flatz <l.flatz_at_bluewin.ch>
>>>>> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I am trying an educated guess right now. In order to do incremental
>>>>>> stats, an auxiliary structure called a synopsis must be generated.
>>>>>> Basically, a synopsis is a sample of the NDV of a partition.
>>>>>> When you set a table to incremental, the synopsis must be built
>>>>>> once for all existing partitions. So this first run still scans all
>>>>>> partitions; only the runs from there on will be incremental.
>>>>>> The synopsis format in 19c differs from that in 11g, IMHO. Thus it
>>>>>> might be that after the upgrade a new, full-size set of synopses
>>>>>> must be generated, and that could look similar to a global stats
>>>>>> gather.
>>>>>> If that is correct, things should switch back to normal by
>>>>>> themselves.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> Lothar
>>>>>>
>>>>>> Am 03.10.2022 um 22:14 schrieb Pap:
>>>>>> > It's a 19.15 Oracle customer database. We are seeing stats
>>>>>> > gathering run a lot longer than it used to in the same database
>>>>>> > when it was on version 11g, and it seems to happen only for
>>>>>> > partitioned tables.
>>>>>> >
>>>>>> > Looking into the underlying sql while the stats gather was
>>>>>> > running, I am seeing one type of sql. It seems to be gathering
>>>>>> > column-level stats for the whole table even though we have passed
>>>>>> > the exact partition_name as a parameter to the stats gather block,
>>>>>> > as below, and it happens irrespective of whether the table has
>>>>>> > INCREMENTAL set to TRUE or not.
>>>>>> > So my question is: even with INCREMENTAL set to TRUE for TAB2, it
>>>>>> > should not collect column-level stats at the whole-table level
>>>>>> > each time, so why is it doing so? And in the case of TAB1, with no
>>>>>> > table-level preference set, Oracle should only gather stats for
>>>>>> > the named partition, so why is it gathering global-level column
>>>>>> > stats each time?
>>>>>> >
>>>>>> > exec Dbms_stats.gather_table_stats(ownname=>'SCHEMA_APP',tabname=>'TAB1',partname=>'DAY_20221104');
>>>>>> > exec Dbms_stats.gather_table_stats(ownname=>'SCHEMA_APP',tabname=>'TAB2',partname=>'TAB2_11182022_P');
>>>>>> >
>>>>>> > Below is the sql monitoring report for two of the underlying sqls
>>>>>> > for the above two calls. Here TAB2 has INCREMENTAL set to TRUE in
>>>>>> > dba_tab_stat_prefs, and TAB1 has no table-level preference set.
>>>>>> >
>>>>>> > https://gist.github.com/oracle9999/c2268195a01a11479e18fb488c4bfd69
>>>>>> >
>>>>>> > Regards
>>>>>> > Pap
>>>>>>
>>>>>>
>>>
>>

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Nov 08 2022 - 19:50:15 CET
