Re: Bad execution plan after migrating to exadata ( 12c) from non-exadata (11g)

From: kunwar singh <>
Date: Sat, 17 Aug 2019 07:06:32 -0400
Message-ID: <>

Hi Jonathan, All,
Thanks a lot for your inputs. I am working through all the points to find the root cause.

Yes, I am testing the migration and trying to minimise the number of odd details that have to be fixed. I have found 10 queries so far with bad plans, and yes, I am trying to keep the individual tweaking to a minimum.


On Wed, Aug 14, 2019 at 9:18 AM Jonathan Lewis <> wrote:

> > Unfortunately the cost of I/O to temp becomes highly visible on EXADATA
> > when a hash join spills to disc, and the implied speed of tablescan is
> > only helpful if you can push predicates down to storage and take
> > advantage of storage indexes.
> > [Kunwar] So should I disable storage indexes in order to check whether
> > that rules out Exadata playing a part here, or is disabling
> > cell_offload_processing enough to tell whether the problem is
> > Exadata-specific or only at the 12c optimizer software level?
> My point was that setting the system stats to EXADATA made the optimizer
> give too much weight to doing tablescans and hash joins because the stats
> didn't give the correct impression of how expensive the join would be,
> only how cheap the tablescan would be. Your basic problem is that you've
> told the optimizer that tablescans and hash joins are fantastic - the use
> or not of storage indexes is irrelevant to the optimizer, and fiddling with
> offloading isn't going to do anything useful as far as changing execution
> plans is concerned.
> > Generic strategy point 1:
> > Look at the "table_cached_blocks" preference for gathering table stats.
> > This can help to give Oracle a much better idea of the quality of an
> > index by allowing it to derive a better value for the clustering_factor.
> > [Kunwar] Interesting .. I wasn't aware of the table_cached_blocks
> > preference. I am now going over your blog post below for more on it :)
> >
> See also
> - and
> various other clustering_factor notes I've written. Bottom line - you've
> told Oracle that tablescans are fantastic, but you haven't let it know how
> effective some of your indexes are.
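For anyone following the thread, setting the table_cached_blocks preference and regathering might look something like the sketch below; the owner and table names are invented for illustration, and 16 is just one plausible value:

```sql
-- Sketch only: OWNER and ORDERS are hypothetical placeholders.
-- Let the clustering_factor calculation treat a block as "recently
-- visited" if it was one of the last 16 distinct blocks seen.
begin
  dbms_stats.set_table_prefs(
    ownname => 'OWNER',
    tabname => 'ORDERS',
    pname   => 'TABLE_CACHED_BLOCKS',
    pvalue  => '16'
  );
  -- Regather so the indexes pick up the revised clustering_factor.
  dbms_stats.gather_table_stats(
    ownname => 'OWNER',
    tabname => 'ORDERS',
    cascade => true
  );
end;
/
```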
> > Generic strategy point 2:
> > Look for places where the optimizer's estimate of cardinality is HIGHER
> > than it ought to be and find out why - as higher cardinalities tend to
> > push the optimizer away from indexes/nested loops.
> > [Kunwar] The only place where the optimizer's estimate of cardinality is
> > HIGHER is at the FTS on the 2GB table, which I mentioned initially.
> But your original posting said: "Looking for a approach in general when
> it comes to check plan issues when migrating to exadata and not something
> to this query" - that's why I'm giving you STRATEGY points, not attempting
> to tell you how to fix the one query that you've mentioned.
> > Generic strategy point 3:
> > If Exadata is wasting a lot of resources on tablescans that should
> > clearly be indexed access paths, you could consider deleting the system
> > stats, or faking some system stats that promise less about tablescan
> > speed.
> > [Kunwar] To avoid the system-wide impact of changing system statistics,
> > is there a hack to get rid of system statistics at the query or object
> > level?
> > Of course I can take a backup of the system statistics before doing the
> > change; it's just that it will take a bit of coordination with the DBAs
> > and some time to get it implemented.
> Answer to question. No, you can't hack system stats for an object; yes, in
> 18.3 you could use the opt_estimate() hint to hack them (but shouldn't).
> You don't need to take a backup of the system stats if you've set them
> using dbms_stats.gather_system_stats('EXADATA') because it's fixed code to
> set them, and does nothing dynamic to measure them. Are you in production
> 12c on Exadata and trying to solve problems, or are you testing the
> migration and trying to minimise the number of odd details that have to be
> fixed ? Major strategy point - work out how to make almost everything
> behave adequately, and keep individual tweaking to a minimum.
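If it does come to removing the Exadata system stats, both directions are a single call each; a rough sketch, taking Jonathan's point that the EXADATA figures are set by fixed code and can simply be regenerated:

```sql
-- Check what the current system stats look like.
select sname, pname, pval1 from sys.aux_stats$;

-- Fall back to default system stats by deleting the EXADATA settings.
exec dbms_stats.delete_system_stats;

-- If the EXADATA figures are wanted again later, regenerate them -
-- this call uses fixed values, nothing is measured dynamically.
exec dbms_stats.gather_system_stats('EXADATA');
```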
> As far as the individual query is concerned, you mentioned vw_nso_1, which
> is an internal view from subquery unnesting. If this appeared in 12c and
> was not present in 11g it may simply mean that the costing changed
> sufficiently to make it a good plan; on the other hand, maybe you have an
> example of a subquery where the 11g optimizer was not allowed to unnest
> and an enhancement in 12c allowed unnesting. (So 11g might be running with
> a FILTER operation.) If that's the case then hinting the subquery with
> NO_UNNEST might revert you to the 11g plan. If EXACTLY THE SAME problem
> appears in many cases (i.e. a new feature introduces the problem) then
> there may be an optimizer parameter that you can set to disable the new
> feature in the short term while you pursue the problem with Oracle support.
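A minimal sketch of the NO_UNNEST approach, with table and column names invented purely for illustration; the key point is that the hint goes inside the subquery being unnested, not in the outer query block:

```sql
-- Hypothetical query shape: block subquery unnesting so 12c falls back
-- to an 11g-style FILTER operation instead of joining to vw_nso_1.
select *
from   orders o
where  o.customer_id in (
         select /*+ no_unnest */ c.customer_id
         from   customers c
         where  c.region = 'EMEA'
       );
```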
> Regards
> Jonathan Lewis
> ________________________________________
> From: kunwar singh <>
> Sent: 13 August 2019 12:13
> To: Jonathan Lewis
> Cc: Oracle L
> Subject: Re: Bad execution plan after migrating to exadata ( 12c) from
> non-exadata (11g)
> --


Received on Sat Aug 17 2019 - 13:06:32 CEST
