Re: Bad execution plan after migrating to exadata ( 12c) from non-exadata (11g)

From: Jonathan Lewis <>
Date: Tue, 13 Aug 2019 09:23:06 +0000
Message-ID: <LNXP265MB04430CD7D0CD45C7C84289C5A5D20_at_LNXP265MB0443.GBRP265.PROD.OUTLOOK.COM>

By gathering stats with the EXADATA option you've told the optimizer that a typical single-block read will take about 10ms while a 1MB multiblock read will take about 15ms, so it's not surprising that you see execution plans switch from indexed access and nested loops to full tablescans and hash joins.
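
You can check the timings the optimizer is actually working with by querying the system statistics directly. A minimal sketch (the table and column names are standard, but your values will differ):

```sql
-- Inspect the system statistics the optimizer is currently using.
-- SREADTIM / MREADTIM are in milliseconds; MBRC is blocks per multiblock read.
SELECT pname, pval1
FROM   sys.aux_stats$
WHERE  sname = 'SYSSTATS_MAIN'
AND    pname IN ('SREADTIM', 'MREADTIM', 'MBRC', 'CPUSPEED');
```

With SREADTIM around 10 and MREADTIM around 15, a 1MB multiblock read costs only half as much per block as random single-block reads, which is exactly the pressure that pushes plans towards tablescans and hash joins.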

Unfortunately the cost of I/O to temp becomes highly visible on Exadata when a hash join spills to disc, and the implied speed of tablescans is only helpful if you can push predicates down to storage and take advantage of storage indexes. (Or if you have purchased the In-Memory option and have the right sort of queries that can do vector transformations.)

Generic strategy point 1:
Look at the "table_cached_blocks" preference for gathering table stats. This can give Oracle a much better idea of the quality of an index by allowing it to derive a better value for the clustering_factor.
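
Setting the preference and regathering might look like the following sketch - the schema, table name and the value 16 are placeholders, not recommendations; test on your own tables:

```sql
-- Set table_cached_blocks before (re)gathering stats so the
-- clustering_factor allows for blocks recently visited and still cached.
BEGIN
  dbms_stats.set_table_prefs(
    ownname => 'SCOTT',               -- placeholder schema
    tabname => 'T1',                  -- placeholder table
    pname   => 'TABLE_CACHED_BLOCKS',
    pvalue  => '16'                   -- illustrative value only
  );
  dbms_stats.gather_table_stats(
    ownname => 'SCOTT',
    tabname => 'T1',
    cascade => TRUE    -- regather index stats, including clustering_factor
  );
END;
/
```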

Generic strategy point 2:
Look for places where the optimizer's estimate of cardinality is HIGHER than it ought to be and find out why - as higher cardinalities tend to push the optimizer away from indexes/nested loops.
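
A simple way to find those places is to run the statement once with rowsource statistics enabled and compare estimates with actuals. A minimal sketch (the COUNT query and predicate are stand-ins for the real statement):

```sql
-- Execute the problem statement once with rowsource statistics enabled.
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM   t1
WHERE  owner = 'SYS';   -- stand-in for the real predicate

-- Then pull the last plan with estimates (E-Rows) and actuals (A-Rows)
-- side by side; large gaps point at the problem operations.
SELECT * FROM TABLE(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));
```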

Generic strategy point 3:
If Exadata is wasting a lot of resources on tablescans that should clearly be indexed access paths, you could consider deleting system stats, or faking some system stats that promise less about tablescan speed.
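
Either option is a couple of DBMS_STATS calls; the numbers below are purely illustrative (derive real values from your own I/O timings, e.g. from AWR) and raising MREADTIM relative to SREADTIM is what makes tablescans look less attractive:

```sql
-- Option A: delete system stats and fall back to the defaults.
EXEC dbms_stats.delete_system_stats;

-- Option B: keep system stats but promise less about tablescan speed.
BEGIN
  dbms_stats.set_system_stats(pname => 'SREADTIM', pvalue => 5);
  dbms_stats.set_system_stats(pname => 'MREADTIM', pvalue => 20);
  dbms_stats.set_system_stats(pname => 'MBRC',     pvalue => 8);
END;
/
```

Either change affects costing for every statement on the instance, so treat it as a deliberate, tested decision rather than a quick fix.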

I note you said you had an 11g outline - presumably this means that left to itself on 11g this query picked the "wrong" path - which means nothing has really changed.

Jonathan Lewis

From: <> on behalf of kunwar singh <>
Sent: 13 August 2019 07:13
Subject: Bad execution plan after migrating to exadata ( 12c) from non-exadata (11g)

Hi Listers,

How to approach this? Looking for a general approach to checking plan issues when migrating to Exadata, not something specific to this query (but I won't mind any insights into it either ;) )

With outline data from 11g (in the 12c Exadata DB):
- cost ~90k, fast, elapsed time about 15 ms.

  • doing an index range scan on an index on a 2GB table.

12c Exadata:
- cost ~6k, slower, elapsed time about 4 seconds.

  • FTS on the 2GB table; the SQL Monitor report shows the time is spent only on reading it and processing the hash join on it.
  • the execution plan contains a view VW_NSO_1.

Few details:

1. I have already gathered stats on all tables/indexes.
2. Have gathered system statistics with 'EXADATA'.
3. Don't have access to the source 11g DB; getting it will take some time.

Will post a redacted version of the SQL & the execution plan (if you prefer to look at it), as my account is very strict about security.

Received on Tue Aug 13 2019 - 11:23:06 CEST
