Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: How to calibrate the CBO

From: Jonathan Lewis <jonathan_at_jlcomp.demon.co.uk>
Date: Sun, 8 Feb 2004 10:03:38 +0000 (UTC)
Message-ID: <c051hq$sma$1@sparta.btinternet.com>

Notes in-line

-- 
Regards

Jonathan Lewis
http://www.jlcomp.demon.co.uk

  The educated person is not the person
  who can answer the questions, but the
  person who can question the answers -- T. Schick Jr


Next public appearances:
 March 2004 Hotsos Symposium - The Burden of Proof
 March 2004 Charlotte NC OUG - CBO Tutorial
 April 2004 Iceland


One-day tutorials:
http://www.jlcomp.demon.co.uk/tutorial.html


Three-day seminar:
see http://www.jlcomp.demon.co.uk/seminar.html
  UK: February
  UK: June


The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html


"Rick Denoire" <100.17706_at_germanynet.de> wrote in message
news:8l68201dujlnijl0tkv0sa1c7op3bges09_at_4ax.com...

> We migrated a DB Oracle application from 8.1.7 to 9.2.0.4 on Solaris,
> and since then we have been getting complaints about some specific batch
> jobs running about 7x longer (which means that instead of 2 hours it now
> takes 14 - does not fit in a daily schedule anymore).
>
On the plus side, if you have been told of specific jobs, you could enable 10046 at level 8 on those jobs and see where most of the time is going. There is a bit of a bug in 9.2, though, which means that sql_trace can really wreck performance, so an initial step might be to take a snapshot of the session's wait times (through v$session_event) and resource usage (through v$sesstat or v$mystat) before you do anything more expensive. These may give you a couple of cheap clues.
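As a minimal sketch of the above, assuming you can attach to the batch job's own session (the SID of 42 below is a placeholder, not from the original post):

```sql
-- Enable extended SQL trace (event 10046, level 8 includes wait events)
-- in the session that runs the slow batch job.
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- Cheap snapshot of a session's wait times (SID 42 is hypothetical):
SELECT event, total_waits, time_waited
FROM   v$session_event
WHERE  sid = 42
ORDER  BY time_waited DESC;

-- Cheap snapshot of the current session's resource usage:
SELECT n.name, s.value
FROM   v$mystat s, v$statname n
WHERE  s.statistic# = n.statistic#
AND    s.value > 0;

-- Switch the trace off again when done:
ALTER SESSION SET events '10046 trace name context off';
```

Take the v$ snapshots before and after the job (or a representative chunk of it) and diff the two; the deltas tell you where the time went without paying the tracing overhead.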
> The developers see this as a DB Admin issue, they did not change the
> code and it ran well prior to the migration after all.
>
Of course, there is the change to the 20M lines or so of Oracle code - but that doesn't really count as a code change ;)
> I have used statspack, trace files with tkprof, profilers,
> investigated the execution plans... These statements are quite long
> and complex, so the execution plans too.
>
Look at the Row Source Operation outputs from tkprof, and compare these with the theoretical plans from explain plan - then check where the ROWS in the row source operation diverges wildly from the CARDINALITY in explain plan. In theory, the ROWS is the number of rows that came out of a row source, and the CARDINALITY is the predicted number of rows.
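A hypothetical sketch of getting the predicted side of that comparison (the table, predicate, and statement are placeholders for the real slow SQL):

```sql
-- Populate PLAN_TABLE with the optimizer's theoretical plan:
EXPLAIN PLAN FOR
SELECT *
FROM   big_table          -- placeholder for the real batch statement
WHERE  status = 'PENDING';

-- CARDINALITY is the predicted row count for each step; compare it
-- with the actual ROWS column in tkprof's Row Source Operation output.
SELECT id, operation, options, object_name, cardinality
FROM   plan_table
ORDER  BY id;
```

Steps where the predicted cardinality is out by orders of magnitude are the places to look for stale or missing statistics, or predicates the optimizer cannot estimate well.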
> But since I kept the old version of the DB (using a different name), I
> just started it on the same host to track the cause of the problem.
> It turns out that Oracle 9 chooses an execution plan with far less
> cost than the Oracle 8 version, but runs much slower.
>
One of the problems of upgrading is that Oracle fixes bugs in the optimizer, so costs change. I have a query whose cost comes out as:

  34 in 8.1.7.4
  34,034 in 9.2.0.4
  238 in 10.1.0.1

The first is obviously wrong, the second looks as if it is arithmetically correct for the way the plan is reported, and the third is correct, given what actually happens. If there were alternative plans, you could easily be in a position where v8 and v10 don't take the alternative, but v9 goes for it.
> What I did was to create a stored outline in the 8i version and
> transfer it to the 9i instance, so the same execution plan was forced.
> Now costs are "high enough" again and I expect a better performance, I
> will know tomorrow for sure.
>
> Evidently, Oracle 9i is misled by a wrong cost calculation when
> choosing the execution plan. My question is, is there a way to adjust
> the magnitudes used for cost calculation? Can I then get costs that
> are proportional to execution time?
>
There are options for adjusting some fairly significant assumptions in the CBO's strategy - check www.evdbt.com for Tim Gorman's article on optimizer_index_cost_adj (and optimizer_index_caching). I think Richard Foote would agree that it is vastly better than Don Burleson's article. (The title is something like "The Search for Intelligent Life in the CBO".)

Bear in mind that Oracle 9 uses 'system statistics', and I think the default behaviour in 9.2.0.4 is to invent some values if none have been gathered. Check dbms_stats, in particular the procedure gather_system_stats. This gives the CBO some figures about machine resource costs and timings, and allows it to generate more appropriate execution paths for the current working environment.
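A hedged sketch of both suggestions - the parameter values below are purely illustrative, not recommendations:

```sql
-- Gather system statistics across a representative workload window,
-- so the CBO can cost I/O and CPU against this machine's real timings:
BEGIN
  dbms_stats.gather_system_stats(gathering_mode => 'START');
END;
/
-- ... run a representative workload for a while ...
BEGIN
  dbms_stats.gather_system_stats(gathering_mode => 'STOP');
END;
/

-- The knobs Tim Gorman's article discusses can be set per session
-- for experimentation (the values 25 and 90 are illustrative only):
ALTER SESSION SET optimizer_index_cost_adj = 25;
ALTER SESSION SET optimizer_index_caching  = 90;
```

Setting the parameters at session level first lets you test their effect on the problem statements before committing anything to the init.ora.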
> Perhaps it is possible to execute customized statements to generate
> specific load on the database and so to calibrate the corresponding
> Oracle "currency". For example, force a full table scan on a 1 Mio
> rows table, then repeat with 2,3,6,10 Million rows, and measure
> execution times. The result would be a rough (but realistic) estimate
> of costs of FTS/row in this particular environment.
>
> I lack enough experience in order to be able to tune without
> measuring results - not a funny task at a 14 hours rate, so I depend
> strongly on a reasonable cost calculation.
>
> Is it a common practice to adjust the CBO cost calculations? How?
>
Tim Gorman's article explains the concepts.
> Bye
> Rick Denoire
Received on Sun Feb 08 2004 - 04:03:38 CST
