RE: Bad execution plan after migrating to exadata ( 12c) from non-exadata (11g)

From: Mark W. Farnham <>
Date: Wed, 14 Aug 2019 11:38:32 -0400
Message-ID: <018001d552b6$5454b260$fcfe1720$>

JL reminded me of a key point I left out regarding zonemaps:

JL wrote: " the use or not of storage indexes is irrelevant to the optimizer, fiddling with offloading isn't going to do anything useful as far as changing execution plans is concerned."

This is different from zonemaps.

IF a good zonemap is present, then the optimizer can use it (unlike warmed-up storage indexes that match your query: those do not change the plan, and only have the effect of making the plan you already have run more efficiently in the storage cells AND push fewer blocks up).

I have NOT been able to verify the lower half of that claim - that zonemaps make the work inside the storage cells more efficient - but they definitely push up only blocks possibly allowed by your predicates. Frits measured an early implementation, and that implementation was *not* effective IN THE STORAGE CELLS the way storage indexes are. But zonemaps definitely deliver UP from the storage cells only those blocks (or compression units) that could possibly contain your eligible predicate values. This could be important if the base table is very large, since your storage cells would then do less work reading the blocks in the lower half. Your 2 GB actual case is likely read in a blink in the storage cells, which you can see in SQL Monitor or other ways of displaying actuals.

IF you have applied linear attribute clustering and reloaded the data to match your zonemap, the result looks like a well-clustered index from the view of the optimizer and runs like a very fortunate Bloom filter between the database and storage. Your mileage may vary, but this *may* be operationally useful in eliminating one non-constraint-supporting index path. One characteristic relevant to that "*may*" is whether or not the columns relevant to the zonemap are updated after the rows arrive. Another is whether newly loaded data arrives with useful clustering relative to the predicates of your later queries.
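As a minimal sketch of the combination described above - the table name "sales" and the clustering columns are purely hypothetical, chosen to illustrate the syntax:

```sql
-- Hedged sketch: linear attribute clustering plus a materialized zonemap
-- on an assumed table "sales", clustered by (sale_date, region_id).
ALTER TABLE sales
  ADD CLUSTERING BY LINEAR ORDER (sale_date, region_id)
  WITH MATERIALIZED ZONEMAP;

-- Attribute clustering only takes effect on direct-path operations,
-- so reload (here via MOVE) to physically order the existing rows:
ALTER TABLE sales MOVE;
```

After the reload, each zone in the zonemap covers a narrow range of sale_date and region_id values, which is what makes pruning effective.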

If there are equality predicates on unique values, the zonemap will automatically be very good (even without reloading), for what I hope is the obvious reason.

If there are time-of-arrival columns, loaded in batches, that appear as equality or range predicates in your queries, then very likely the zonemap will also be good in that case.

I don't believe any of what I have written here is contrary to what JL wrote, but it may be useful in the move to EXADATA at 12.2+ (or 12.1 with the patches that allow zonemaps to be updated efficiently; 12.1 out of the box was badly broken, always rebuilding the entire zonemap instead of just the parts that had possibly changed). In 12.2+, unless you are committing single rows very frequently, you are unlikely to notice any elapsed-time overhead from the zonemap update.

Finally, you really do need to consider whether you want to reproduce the old plan or adopt a new plan that takes advantage of EXADATA, and how that choice maps in effort across all your queries.

good luck!


-----Original Message-----
From: [] On Behalf Of Jonathan Lewis
Sent: Wednesday, August 14, 2019 9:18 AM
To: Oracle L
Subject: Re: Bad execution plan after migrating to exadata ( 12c) from non-exadata (11g)

> Unfortunately the cost of I/O to temp becomes highly visible on
> EXADATA when a hash join spills to disc, and the implied speed of
> tablescan is only helpful if you can push predicates down to storage
> and take advantage of storage indexes

[Kunwar] So should I disable storage indexes in order to check whether that rules out Exadata playing a part here, or should disabling cell_offload_processing be enough to tell whether the problem is Exadata-specific or only at the 12c optimizer software level?

My point was that setting the system stats to EXADATA made the optimizer give too much weight to tablescans and hash joins, because the stats didn't give the correct impression of how expensive the join would be, only of how cheap the tablescan would be. Your basic problem is that you've told the optimizer that tablescans and hash joins are fantastic - the use or not of storage indexes is irrelevant to the optimizer, and fiddling with offloading isn't going to do anything useful as far as changing execution plans is concerned.
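To see what the optimizer has actually been told, you can inspect the stored system statistics; the gather call shown is the one that sets the Exadata-flavoured values (it installs fixed figures such as a large MBRC rather than measuring anything):

```sql
-- Show the current system statistics the optimizer is costing with
-- (MBRC, MREADTIM, SREADTIM, IOSEEKTIM, etc.):
SELECT pname, pval1
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_MAIN';

-- The call that sets the Exadata-biased values in the first place:
EXEC dbms_stats.gather_system_stats('EXADATA');
```

A large MBRC combined with fast multiblock read times is precisely what makes tablescans look cheap relative to indexed access.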

> Generic strategy point 1:
> Look at the "table_cached_blocks" preference for gathering table stats.
> This can help to give Oracle a much better idea of the quality of an
> index by allowing it to derive a better value for the clustering_factor.

[Kunwar] Interesting .. I wasn't aware of the table_cached_blocks preference. Now going over your blog below for more on it :)
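A sketch of how the preference is applied - the table name "orders" and the value 16 are illustrative assumptions, not a recommendation:

```sql
-- Hedged sketch: set table_cached_blocks for an assumed table "orders"
-- so the next stats gather can derive a lower (more realistic)
-- clustering_factor for its indexes. The value 16 is illustrative.
EXEC dbms_stats.set_table_prefs(ownname => user, tabname => 'ORDERS', -
       pname => 'TABLE_CACHED_BLOCKS', pvalue => '16');

-- The preference only takes effect when stats are regathered:
EXEC dbms_stats.gather_table_stats(ownname => user, tabname => 'ORDERS', -
       cascade => TRUE);
```

With the preference set, blocks revisited within the stated window don't each increment the clustering_factor, so a well-clustered index stops looking artificially expensive.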

See also - and various other clustering_factor notes I've written. Bottom line - you've told Oracle that tablescans are fantastic, but you haven't let it know how effective some of your indexes are.

> Generic strategy point 2:
> Look for places where the optimizer's estimate of cardinality is
> HIGHER than it ought to be and find out why - as higher cardinalities
> tend to push the optimizer away from indexes/nested loops.

[Kunwar] The only place where the optimizer's estimate of cardinality is HIGHER is at the FTS on the 2GB table, which I mentioned initially.

But your original posting said: "Looking for an approach in general when it comes to check plan issues when migrating to exadata and not something to this query" - that's why I'm giving you STRATEGY points, not attempting to tell you how to fix the one query that you've mentioned.

> Generic strategy point 3:
> If Exadata is wasting a lot of resources on tablescans that should
> clearly be indexed access paths, you should consider deleting system
> stats, or faking some system stats that promise less about tablescan
> speed.

[Kunwar] To avoid the system-wide impact of changing system statistics, is there a hack to get rid of system statistics at query or object level? Of course I can take a backup of system statistics before making the change; it's just that it will take a bit of coordination with the DBAs and some time to get it implemented.

Answer to the question: no, you can't hack system stats for an object; yes, in 18.3 you could use the opt_estimate() hint to hack them (but shouldn't). You don't need to take a backup of the system stats if you've set them using dbms_stats.gather_system_stats('EXADATA'), because that call sets fixed values and does nothing dynamic to measure them. Are you in production on 12c on Exadata and trying to solve problems, or are you testing the migration and trying to minimise the number of odd details that have to be fixed? Major strategy point - work out how to make almost everything behave adequately, and keep individual tweaking to a minimum.
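The two options from strategy point 3 can be sketched as follows (the MBRC figure is purely illustrative, not a tuned value):

```sql
-- Option 1: delete the system stats entirely, reverting the optimizer
-- to its default (noworkload) costing model:
EXEC dbms_stats.delete_system_stats;

-- Option 2: fake values that promise less about tablescan speed,
-- e.g. a smaller multiblock read count (8 is illustrative only):
EXEC dbms_stats.set_system_stats('MBRC', 8);
```

Either change is system-wide and takes effect for newly parsed statements, which is why it needs the coordination with the DBAs mentioned above.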

As far as the individual query is concerned, you mentioned vw_nso_1, which is an internal view generated by subquery unnesting. If this appeared in 12c and was not present in 11g, it may simply mean that the costing changed enough to make unnesting look like a good plan; on the other hand, you may have an example of a subquery that the 11g optimizer was not allowed to unnest, where an enhancement in 12c allowed the unnesting. (So 11g might be running with a FILTER operation.) If that's the case, then hinting the subquery with NO_UNNEST might revert you to the 11g plan. If EXACTLY THE SAME problem appears in many cases (i.e. a new feature introduces the problem), then there may be an optimizer parameter that you can set to disable the new feature in the short term while you pursue the problem with Oracle support.
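The hint placement matters: NO_UNNEST goes inside the subquery itself. A sketch with hypothetical table and column names (the actual query was not shown in the thread):

```sql
-- Hedged sketch: force the subquery to stay unmerged, so the optimizer
-- runs it as a FILTER (the likely 11g shape) instead of producing the
-- vw_nso_1 unnested view. All object names here are hypothetical.
SELECT o.*
FROM   orders o
WHERE  o.customer_id IN (
         SELECT /*+ no_unnest */ c.customer_id
         FROM   customers c
         WHERE  c.region = 'WEST'
       );
```

Comparing the plans with and without the hint should show whether unnesting is actually what changed between 11g and 12c.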

Jonathan Lewis

From: kunwar singh <>
Sent: 13 August 2019 12:13
To: Jonathan Lewis
Cc: Oracle L
Subject: Re: Bad execution plan after migrating to exadata ( 12c) from non-exadata (11g)

Received on Wed Aug 14 2019 - 17:38:32 CEST
