Re: Bad execution plan after migrating to exadata ( 12c) from non-exadata (11g)

From: kunwar singh <krishsingh.111_at_gmail.com>
Date: Tue, 13 Aug 2019 07:13:05 -0400
Message-ID: <CAJSrDUpyD=wSJdgV5qZc++kY9Wn-oVUiSr50KCi2APO9zzN+aA_at_mail.gmail.com>



Hi Jonathan,
Thanks for the details, these are very helpful. Here are my comments and a few questions on them.
Unfortunately the cost of I/O to temp becomes highly visible on EXADATA when a hash join spills to disc, and the implied speed of tablescan is only helpful if you can push predicates down to storage and take advantage of storage indexes
[Kunwar] So should I disable storage indexes to check whether that rules out Exadata playing a part here, or is disabling cell_offload_processing enough to tell whether the problem is Exadata-specific or purely at the 12c optimizer software level?
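For reference, one way to test both at the session level is sketched below. The cell_offload_processing parameter is documented; "_kcfis_storageidx_disabled" is an undocumented underscore parameter, so treat this as a test-system-only experiment, ideally with Oracle Support's blessing:

```sql
-- Disable all cell offload (smart scan) for this session, so the query
-- runs as it would on non-Exadata storage:
ALTER SESSION SET cell_offload_processing = FALSE;

-- Alternatively, leave offload enabled but disable only storage indexes
-- (undocumented parameter - test systems only):
ALTER SESSION SET "_kcfis_storageidx_disabled" = TRUE;
```

Comparing the plan and elapsed time under each setting should separate the Exadata storage effects from pure 12c optimizer behaviour.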

Generic strategy point 1:
Look at the "table_cached_blocks" preference for gathering table stats. This can help to give Oracle a much better idea of the quality of an index by allowing it to derive a better value for the clustering_factor. [Kunwar] Interesting, I wasn't aware of the table_cached_blocks preference. Now going over your blog below for more on it :) https://www.red-gate.com/simple-talk/sql/oracle/hacking-indexes/
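For anyone following along, setting the preference and regathering looks roughly like this (the table name and the value 16 are purely illustrative):

```sql
-- Treat index entries pointing at any of the last N distinct table blocks
-- seen as "recently cached", which yields a more realistic (lower)
-- clustering_factor for indexes on tables with interleaved inserts:
BEGIN
  DBMS_STATS.SET_TABLE_PREFS(
    ownname => USER,
    tabname => 'MY_TABLE',               -- hypothetical table name
    pname   => 'TABLE_CACHED_BLOCKS',
    pvalue  => '16'                      -- illustrative value
  );
  DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_TABLE', cascade => TRUE);
END;
/
```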

Generic strategy point 2:
Look for places where the optimizer's estimate of cardinality is HIGHER than it ought to be and find out why - as higher cardinalities tend to push the optimizer away from indexes/nested loops. [Kunwar] The only place where the optimizer's estimate of cardinality is HIGHER is at the FTS on the 2GB table, which I mentioned initially.
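A quick way to line up estimates against actuals per plan operation, assuming you can rerun the statement, is:

```sql
-- Run the problem query once with rowsource statistics collected:
SELECT /*+ gather_plan_statistics */ ...  -- the problem query goes here

-- Then pull the plan for the cursor just executed and compare
-- E-Rows (optimizer estimate) with A-Rows (actual) on each line:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => 'ALLSTATS LAST'));
```

Lines where E-Rows and A-Rows diverge by orders of magnitude are the ones worth chasing.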

Generic strategy point 3:
If Exadata is wasting a lot of resources on tablescans that should clearly be indexed access paths, you could consider deleting system stats, or faking some system stats that promise less about tablescan speed. [Kunwar] To avoid the system-wide impact of changing system statistics, is there a hack to get rid of system statistics at the query or object level? Of course I can take a backup of the system statistics before making the change; it's just that it will take a bit of coordination with the DBAs and some time to get implemented.
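Both options are DBMS_STATS calls; a sketch, with purely illustrative numbers, after exporting the current stats to a backup table first:

```sql
-- Back up current system stats so they can be restored:
EXEC DBMS_STATS.CREATE_STAT_TABLE(USER, 'SYSSTATS_BACKUP');
EXEC DBMS_STATS.EXPORT_SYSTEM_STATS(stattab => 'SYSSTATS_BACKUP');

-- Option 1: delete system stats, falling back to noworkload defaults:
EXEC DBMS_STATS.DELETE_SYSTEM_STATS;

-- Option 2: keep system stats but set values that promise less about
-- multiblock read speed (the numbers below are illustrative only):
BEGIN
  DBMS_STATS.SET_SYSTEM_STATS('SREADTIM', 5);   -- ms per single-block read
  DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', 30);  -- ms per multiblock read
  DBMS_STATS.SET_SYSTEM_STATS('MBRC', 8);       -- blocks per multiblock read
END;
/
```

Note that system statistics are instance-wide, which is why there is no supported per-query or per-object override; coordinating with the DBAs is the right path.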

I note you said you had an 11g outline - presumably this means that left to itself on 11g this query picked the "wrong" path - which means nothing has really changed.
[Kunwar] My bad, I should have worded it more carefully. It is the outline data from the 11g DB, obtained via DBMS_XPLAN.DISPLAY(format=>'ADVANCED'); 11g didn't have a stored outline as such.
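For context, the outline data in question can be extracted like this (&sql_id is a placeholder for the statement's SQL_ID):

```sql
-- Pull the full hint set for a cursor still in the shared pool:
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id => '&sql_id',
         format => 'ADVANCED'));

-- The "Outline Data" section of the output is a complete
-- /*+ BEGIN_OUTLINE_DATA ... END_OUTLINE_DATA */ hint block that can be
-- pasted into the statement on 12c to reproduce the 11g plan for comparison.
```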

On Tue, Aug 13, 2019 at 5:23 AM Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk> wrote:

>
> By gathering stats with the EXADATA option you've told the optimizer that
> a typical single block read will take about 10ms while a 1MB multiblock
> read will take about 15ms, so it's not surprising that you will have
> execution plans that switch from indexed access and nested loop to full
> tablescan and hash joins.
>
> Unfortunately the cost of I/O to temp becomes highly visible on EXADATA
> when a hash join spills to disc, and the implied speed of tablescan is only
> helpful if you can push predicates down to storage and take advantage of
> storage indexes. (Or if you have purchased the In-Memory option and have
> the right sort of queries that can do vector transformations).
>
> Generic strategy point 1:
> Look at the "table_cached_blocks" preference for gathering table stats.
> This can help to give Oracle a much better idea of the quality of an index
> by allowing it to derive a better value for the clustering_factor.
>
> Generic strategy point 2:
> Look for places where the optimizer's estimate of cardinality is HIGHER
> than it ought to be and find out why - as higher cardinalities tend to push
> the optimizer away from indexes/nested loops.
>
> Generic strategy point 3:
> If Exadata is wasting a lot of resources on tablescans that should
> clearly be indexed access paths, you could consider deleting system stats,
> or faking some system stats that promise less about tablescan speed.
>
> I note you said you had an 11g outline - presumably this means that left
> to itself on 11g this query picked the "wrong" path - which means nothing
> has really changed.
>
> Regards
> Jonathan Lewis
>
> ________________________________________
> From: oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> on
> behalf of kunwar singh <krishsingh.111_at_gmail.com>
> Sent: 13 August 2019 07:13
> To: ORACLE-L
> Subject: Bad execution plan after migrating to exadata ( 12c) from
> non-exadata (11g)
>
> Hi Listers,
>
> How to approach this? Looking for an approach in general when it comes to
> checking plan issues when migrating to Exadata, not something specific to
> this query (but I won't mind any insights into it either ;) )
>
> Issue:
> with outline data from 11g(in 12c exa DB)
> - cost ~90k, fast, elapsed time about 15 ms.
> - doing index range scan on a index on a 2GB table .
>
> 12c exadata
> - cost ~6k , slower , elapsed time about 4 seconds.
> - FTS on the 2GB table and from sql monitor report time is spent on
> reading it only/processing the hash join on it.
> - execution plan is having a view VW_NSO_1
>
>
> Few details:
> 1. I have already gathered stats on all tables/indexes
> 2. Have gathered system statistics with 'EXADATA'
> 3. Don't have the access to source 11g DB . getting it will take some time.
>
> Will post a redacted version of the SQL & the execution plan (if you
> prefer to look at it), as the account is very strict about security.
>
>
>
> --
> Cheers,
> Kunwar
> --
> http://www.freelists.org/webpage/oracle-l
>
>
>

-- 
Cheers,
Kunwar

--
http://www.freelists.org/webpage/oracle-l
Received on Tue Aug 13 2019 - 13:13:05 CEST
