Re: performance issue after upgrade to oracle linux 32 bit.

From: Randolf Geist <>
Date: Fri, 6 Nov 2009 05:35:10 -0800 (PST)
Message-ID: <>

On Nov 5, 5:37 pm, lsllcm <> wrote:
> --------------------case 1--------------------------------------------
> 1. What cost / plan do you get if you request to use the index
> Result: as same as no index hint

That is quite interesting. If the hint wasn't malformed, the optimizer should have obeyed it, although there are all kinds of cases - transformations applied, bugs and possibly other things - that might prevent the optimizer from doing so.

Can you show what you've tried?
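For reference, a well-formed INDEX hint has to use the alias from the query block, not the table name. A sketch (all table / alias / index names below are made up for illustration):

```sql
-- The hint must reference the alias "t", not the table name
SELECT /*+ INDEX(t t_status_idx) */
       t.*
FROM   some_table t
WHERE  t.status = :b1;
```

If you hint the table name instead of the alias, the hint is simply ignored - there is no error message for a malformed or inapplicable hint, which is why posting the exact statement you tried matters.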

> --------------------case 2--------------------------------------------
> 2. What plan do you get in 11.2 if you're setting
> values:

Here I would say these are too many changes in a single testcase. I would suggest doing the following:

  1. Try with ALL_ROWS instead of FIRST_ROWS_100 (no other changes)
  2. Try with OPTIMIZER_INDEX_COST_ADJ and OPTIMIZER_INDEX_CACHING set to default (no other changes)
  3. Try with the combination of the above (ALL_ROWS + default OICA + OIC), no other changes - in the testcase you posted you additionally had the index hint

Don't apply any hints on top of that, in particular no INDEX or LEADING hint. This can be done separately.
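The three steps could be run at session level so that each change is isolated, for example (a sketch; adjust to your environment):

```sql
-- Step 1: ALL_ROWS only, everything else unchanged
ALTER SESSION SET optimizer_mode = all_rows;

-- Step 2: only OICA / OIC back to their defaults, mode unchanged
ALTER SESSION SET optimizer_index_cost_adj = 100;
ALTER SESSION SET optimizer_index_caching = 0;

-- Step 3: all three settings combined, no hints added to the query
```

Run the unhinted statement after each step and compare the plans and costs.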

> --------------------item 4--------------------------------------------
> 4. A general question: Is your application actually using the
> FIRST_ROWS_100 optimizer mode properly?
> Yes, most of applications are oltp type and return first of rows. So
> we choose aggressive OPTIMIZER_INDEX_COST_ADJ and
Charles has already tried to clarify what I tried to express: Do you really fetch only the first rows of a *larger* result set? Most OLTP applications fetch *all* rows of a very small result set, e.g. the query identifies 6 rows and these 6 rows will be processed.

The FIRST_ROWS_n mode is meant for cases where you have a query that identifies e.g. 10,000 rows, but your application only shows/processes the first n of those rows, for example in a pagination style like Google search results.

And as I've already said: The ALL_ROWS mode should still favor index access paths and NESTED LOOP joins given the OIC and OICA settings that you're using. Only use the FIRST_ROWS_n mode if your application performs as pointed out above.

Ideally you should have one of the following combinations:

If your application processes all rows of a (small) result set:

  • ALL_ROWS + Representative *System* Statistics
  • ALL_ROWS + your/some non-default OICA / OIC settings

If your application processes only the first n rows of result sets (even if there are places in the application where it does, is it reasonable to have FIRST_ROWS_n as the default OPTIMIZER_MODE for all queries? Does the majority of queries really get processed that way?):

  • FIRST_ROWS_n + Representative *System* Statistics

I doubt that it is necessary to combine the FIRST_ROWS_n mode with aggressive OICA / OIC settings, since it already favors index access due to the way it calculates the costs.

By the way, you haven't addressed yet the point that the initial estimate of the optimizer is way off: It estimates several million rows to be returned from this statement, although no rows are returned when executing the statement.

Is it a special case that no rows are returned due to some of the bind values used? How many rows does this kind of statement return typically?

Why does the optimizer not recognize that no rows will be returned? Is it a FILTER predicate on some of the tables that already returns no rows from the table, or is it a combination of FILTER predicate + join that eliminates every row? You could have e.g. a filter on a table, leaving only a part of the rows, but joining these remaining rows from the table to another table doesn't match any row on the join predicate. Without histograms on the join columns Oracle cannot recognize this partial overlap of the join columns, but even with histograms in place the join cardinality calculation can be quite incorrect.

The 3 consistent gets of your original testcase suggest that it's already a simple table filter that doesn't find any matching rows. It might be required to check the object-level statistics on the involved tables and indexes to see if they allow the optimizer to come up with reasonable estimates.

> --------------------item 5--------------------------------------------
> 5. question
> Why is "_optimizer_cost_based_transformation = off" specified?
> Why is "optimizer_mode = first_rows_100" specified?
> Why is "optimizer_index_cost_adj = 10" specified?
> Why is "optimizer_index_caching = 90" specified?
> [lsllcm] The _optimizer_cost_based_transformation parameter is set
> useless, I remove the setting

Why do you think that the "_optimizer_cost_based_transformation" parameter is "useless"? It has a significant influence on the transformations applied by the optimizer, and you can get quite different results for many queries depending on whether the cost based transformations are turned on (the default) or off. So I would be very careful with simply setting this parameter back to "on" - hopefully there was a good reason why it was disabled in the first place.
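If you want to evaluate the effect before touching the instance, you can toggle it at session level for a single test (bear in mind this is a hidden parameter, normally only to be changed when advised by Oracle Support - treat this purely as a test sketch):

```sql
-- Test the query with cost based transformations enabled (the default) ...
ALTER SESSION SET "_optimizer_cost_based_transformation" = 'on';

-- ... then compare against the current instance setting
ALTER SESSION SET "_optimizer_cost_based_transformation" = 'off';
```

Compare the execution plans of the problem statement under both settings before considering any instance-wide change.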

What I mean to say is: altering this parameter would require complete regression testing - and obviously the same applies to most of the other changes suggested (e.g. switching to ALL_ROWS as the default instance OPTIMIZER_MODE, changing OICA / OIC, gathering System Statistics, etc.).

> --------------------item 6--------------------------------------------
> 6. system statistics: There are about 10 scheduled jobs. I stop them
> at first.

Charles has already raised this: It's not clear what you mean by those 10 scheduled jobs. It's quite unlikely that you have 10 jobs in place that attempt to gather System Statistics - in fact it's not advisable to gather System Statistics on a regular basis. Preferably they should only be gathered once during a "representative" workload (WORKLOAD System Statistics), although I like the idea that Christian Antognini mentions in his "Troubleshooting Oracle Performance" book of gathering System Statistics regularly into a user-defined statistics table, just to get an impression of how the different workloads of the system influence the measurement.
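A sketch of both approaches (the statistics table name is made up):

```sql
-- One-off WORKLOAD System Statistics over a representative 30 minute period
BEGIN
  DBMS_STATS.GATHER_SYSTEM_STATS(
    gathering_mode => 'INTERVAL',
    interval       => 30);
END;
/

-- Antognini-style: gather regularly into a user-defined statistics table,
-- for monitoring only, without touching the dictionary
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => user, stattab => 'SYS_STATS_HIST');
  DBMS_STATS.GATHER_SYSTEM_STATS(
    gathering_mode => 'INTERVAL',
    interval       => 30,
    stattab        => 'SYS_STATS_HIST',
    statid         => 'MON_0900');
END;
/
```

The statid lets you keep one snapshot per workload period and compare them over time.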

Very likely you mean that there are 10 jobs that gather object-level statistics (table / index). The question then would be: Why so many, how (DBMS_STATS / ANALYZE), and in the case of DBMS_STATS, which options do they use? Are you aware that starting with 10g there is a default job that attempts to gather stale / missing statistics every night?
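On 10g you can check whether that default job is present and enabled, for example:

```sql
-- 10g: the default nightly statistics job
SELECT job_name, enabled
FROM   dba_scheduler_jobs
WHERE  job_name = 'GATHER_STATS_JOB';
```

If it is enabled, your 10 custom jobs may be gathering statistics on top of what the default job already does.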

Also, starting with 11g you should in most cases use the DBMS_STATS.AUTO_SAMPLE_SIZE option for the "estimate_percent" parameter of individual DBMS_STATS calls, since it generates very precise statistics but requires only a fraction of the time of a full compute.
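For an individual call this would look like the following (schema and table name are placeholders):

```sql
-- 11g: let DBMS_STATS choose the sample size itself
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => user,
    tabname          => 'TABLE1',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- include the table's indexes
END;
/
```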


Oracle related stuff blog:

SQLTools++ for Oracle (Open source Oracle GUI for Windows):
