Re: Optimizer methods...

From: Roger Snowden <rsnowden_at_IHateSpam.com>
Date: 1997/07/14
Message-ID: <01bc909e$00e0db60$096fc589@RSNOWDEN.labinterlink.com>#1/1

No simple answers here. Rule-based optimization is entirely predictable: if the rule applies, that path is taken. Cost-based optimization is an entirely different matter. Cost depends on statistics gathered by analyzing objects, and the optimizer's behaviour depends on the statistics generated *at the time of analysis*. Nothing dynamic here. It uses those stats to estimate how many blocks will be visited using one path versus another. As such, it is entirely possible that the cost-based approach will decide to do a full table scan on a table that has grown substantially since the last analysis.
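For illustration, gathering those stats looks something like this (the table name ORDERS is just an example):

    ANALYZE TABLE orders COMPUTE STATISTICS;
    -- or, cheaper on a big table, sample a fraction of the rows:
    ANALYZE TABLE orders ESTIMATE STATISTICS SAMPLE 20 PERCENT;

Until you re-run ANALYZE, the optimizer keeps costing plans against the old row and block counts, however much the table has grown since.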

You must do application tuning and 'explain plan' work on your target (production) system. Test systems rarely have the same row counts as production, so you usually can't mock it up in advance. Moreover, the actual algorithms used by Oracle vary with each release. They generally get smarter, but you may see optimizer behaviour change when you upgrade Oracle versions.
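If you haven't done explain plan work before, the basic recipe is: create PLAN_TABLE by running utlxplan.sql, then something like the following (the statement and STATEMENT_ID are just examples):

    EXPLAIN PLAN SET STATEMENT_ID = 'test1' FOR
      SELECT * FROM orders WHERE order_id = 100;

    SELECT LPAD(' ', 2 * level) || operation || ' ' ||
           options || ' ' || object_name plan
      FROM plan_table
     WHERE statement_id = 'test1'
     START WITH id = 0
     CONNECT BY PRIOR id = parent_id;

Run the same statement on production and compare the plans.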

Hints help, but are not absolute with the cost-based approach. What you can do is set the initxxx.ora OPTIMIZER_MODE parameter to CHOOSE. This causes Oracle to use the RULE approach when there are no stats, and cost when stats are available. Then, selectively analyze tables and use RULE hints on statements against the non-analyzed objects. You can drop statistics on any table at any time. Thus, you can 'spin' Oracle any way you choose and experiment as you like. Read the SQL guides for the specific syntax of the analysis operations.
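Concretely, the pieces look like this (EMP is a made-up table; adjust to taste):

    # in initxxx.ora:
    OPTIMIZER_MODE = CHOOSE

    -- force the rule-based path for one statement:
    SELECT /*+ RULE */ ename FROM emp WHERE deptno = 10;

    -- drop the stats so CHOOSE falls back to rules on this table:
    ANALYZE TABLE emp DELETE STATISTICS;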

Bear in mind - with the mixed approach, you might suddenly experience dramatic performance changes if some tool, such as SQL Studio or ADHawk, is used and statistics get accidentally generated. You might document which tables need stats and which don't, and then build scripts to create and drop stats on the fly to get you out of such emergencies. And, be careful who gets DBA privs.
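One way to build the emergency script: let the dictionary tell you which tables currently have stats (NUM_ROWS in USER_TABLES stays null until a table is analyzed), spool the ANALYZE commands, and run them. A rough sketch only -- in real life you would filter against your documented list rather than dropping stats everywhere:

    SET PAGESIZE 0 FEEDBACK OFF
    SPOOL drop_stats.sql
    SELECT 'ANALYZE TABLE ' || table_name || ' DELETE STATISTICS;'
      FROM user_tables
     WHERE num_rows IS NOT NULL;
    SPOOL OFF
    @drop_stats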

I wrote up a discussion on the topic a few months ago, at the www.oreview.com site. Check back issues for JAN/FEB. The subject is 'application tuning'.

Have fun.

Roger Snowden

> used a concatenated index on a large table in a nested query. However, in
> some situations, cost-based dramatically increase performance (we have only
> noticed this on the NT so far). We are running 7.3.2 (NT) and 7.2.3 (AIX). Any
> input would be greatly appreciated.
Received on Mon Jul 14 1997 - 00:00:00 CDT
