
Re: Does Optimizer use P.A.T to calculate cost..?

From: Charles Hooper <hooperc2000_at_yahoo.com>
Date: 27 Oct 2006 07:16:24 -0700
Message-ID: <1161958583.954116.104980@i42g2000cwa.googlegroups.com>


mccmx_at_hotmail.com wrote:
> > Time to start investigating what is happening, a 10046 trace at level 8
> > will help. Also, take a look at the output of this SQL statement:
>
> The 10046 hasn't helped. It has just shown me that the plan has
> changed. I've done some tests and I can confirm that I can get much
> more stable execution times if I use the old method of manually
> specifying *_area_size parameters.
>
> I've seen advice on this newsgroup which says that you can set the PGA
> target to a very high value as long as you have the memory available.
>
> But I am seeing that a value that is too high can cause instability in
> execution plans.
>
> Matt

Gints Plivna has provided thorough tests that confirm that modifying the pga_aggregate_target does influence the costs of the various plans developed by the Oracle cost based optimizer, and those costs are then used to determine the least expensive path for data access.

Oracle on your system has apparently determined that performing full table scans is in fact less expensive than performing index lookups - it might be right, or it might have been supplied incorrect information to make that determination. As a _very_ general rule, if more than 25% of a table is to be accessed, it is less expensive to perform a full table scan than it is to perform an index lookup followed by a row lookup by ROWID.
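One way to check which assumption is winning is to compare the optimizer's reported cost for its chosen plan against the cost of the same statement with index access forced. A minimal sketch, in which the table name T1, the column COL1, and the literal value are placeholders:
EXPLAIN PLAN FOR
  SELECT * FROM T1 WHERE COL1 = 100;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN FOR
  SELECT /*+ INDEX(T1) */ * FROM T1 WHERE COL1 = 100;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
If the hinted plan shows a higher cost but actually runs faster, the information the optimizer is working from (statistics, work area sizing, or both) is likely the problem.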

Now that you have the 10046 trace at level 8, start analyzing the trace file. Scan through the trace file and look for the wait events associated with each SQL statement - what are the causes of the delays? Does it appear that the temp tablespace is being heavily used for large sorts? You can monitor the number of sorts that hit the temp tablespace with a SQL statement like this:
SELECT
  NAME,
  VALUE
FROM
  V$SYSSTAT
WHERE
  NAME IN ('sorts (memory)','sorts (disk)');
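If you need to capture another trace, or summarize the one you already have, something along these lines works - a sketch only, and the trace file name below is a placeholder for whatever file Oracle writes to the user_dump_dest directory:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- run the problem SQL statements here, then:
ALTER SESSION SET EVENTS '10046 trace name context off';
Then, at the operating system prompt, summarize the trace file with tkprof, including the wait event totals per statement:
tkprof orcl_ora_1234.trc trace_summary.txt sys=no waits=yes sort=prsela,exeela,fchela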

Have you executed a command to collect system statistics (the CPU and I/O performance characteristics of the server hardware), something like this:
EXEC DBMS_STATS.GATHER_SYSTEM_STATS ('interval', interval => 60)
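You can verify what the system statistics gathering produced, and when it last ran, by querying the table where DBMS_STATS stores the values - a sketch, assuming you have the privilege to query the SYS-owned table:
SELECT
  SNAME,
  PNAME,
  PVAL1,
  PVAL2
FROM
  SYS.AUX_STATS$;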

Also, do you frequently collect table and index statistics, or are you just relying on the automatic collection of statistics in 10g?
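If you decide to gather statistics manually, a call along these lines handles a table and its indexes together - a sketch only, with the schema and table names as placeholders:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    OWNNAME          => 'YOUR_SCHEMA',   -- placeholder schema name
    TABNAME          => 'YOUR_TABLE',    -- placeholder table name
    CASCADE          => TRUE,            -- also gather the index statistics
    ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/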

You can influence the apparent cost of full table scans compared with index lookups by modifying various initialization parameters, such as optimizer_index_caching, optimizer_index_cost_adj, and optimizer_features_enable, in addition to parameters such as sort_area_size, sort_area_retained_size, and db_file_multiblock_read_count. It is also possible to add FIRST_ROWS hints to the SQL statements to essentially force Oracle to use index access.
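For testing, those parameters can be adjusted at the session level so the instance-wide settings are left alone - a sketch of the idea with example values only; the table, column, and bind variable in the hinted statement are placeholders:
ALTER SESSION SET OPTIMIZER_INDEX_CACHING = 90;
ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ = 25;
ALTER SESSION SET DB_FILE_MULTIBLOCK_READ_COUNT = 16;

SELECT /*+ FIRST_ROWS(10) */
  *
FROM
  T1
WHERE
  COL1 = :B1;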

On my system, I set a fairly large pga_aggregate_target and force various parameters, including sort_area_size, to a high minimum value to help reduce the chance of a sort spilling to the temp tablespace. Automatic adjustment of settings is a nice feature, but specifying appropriate minimum values can also help.
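If you want to experiment with the same approach, the statements look roughly like this - example values only, the ALTER SYSTEM form assumes an spfile is in use, and keep in mind that a session only honors the manually specified *_area_size parameters when its workarea_size_policy is set to MANUAL:
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 2G SCOPE = BOTH;

ALTER SESSION SET WORKAREA_SIZE_POLICY = MANUAL;
ALTER SESSION SET SORT_AREA_SIZE = 104857600;
ALTER SESSION SET SORT_AREA_RETAINED_SIZE = 104857600;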

Have you reviewed the SQL statements to make certain that they are written efficiently? Some of the SQL statements that I brought over from my Oracle 8.1.7.3 system acquired terrible plans when run against my Oracle 10.2.0.2 system. Rewriting the SQL statements into equivalent forms may help reduce the run times.

Charles Hooper
PC Support Specialist
K&M Machine-Fabricating, Inc.

Received on Fri Oct 27 2006 - 09:16:24 CDT
