Oracle FAQ Your Portal to the Oracle Knowledge Grid

Re: Strange Cost Based Optimizer Decision Making

From: Mladen Gogala <gogala_at_sbcglobal.net>
Date: Mon, 19 Dec 2005 17:04:36 GMT
Message-Id: <pan.2005.12.19.17.04.35.558321@sbcglobal.net>


On Mon, 19 Dec 2005 07:17:40 -0800, Jimbo1 wrote:

> Now, I have another table that's not quite so large, containing over 65
> million rows. The statistics on this table are 100% completely up to
> date. Once again, this table is range-partitioned on month. However,
> this time, there is a UNIQUE composite index present on two columns:
> SECTION_ID and TIME_STAMP.
>
> I run an identical SELECT clause to the last query against this table.
> However, this is the execution plan I get:
>
> QUERY 2 - LARGE_TABLE MAX(time_stamp) query results
> **********************************************************************
> SQL> SELECT /*+ CHOOSE */
> 2 MAX(time_stamp)
> 3 FROM large_table;
>
> MAX(TIME_
> ---------
> 30-SEP-05
>
> Elapsed: 00:04:26.22
>
> Execution Plan
> ----------------------------------------------------------
> 0 SELECT STATEMENT Optimizer=HINT: CHOOSE (Cost=34394 Card=1
> Bytes=8)
> 1 0 SORT (AGGREGATE)
> 2 1 PARTITION RANGE (ALL)
> 3 2 TABLE ACCESS (FULL) OF 'LARGE_TABLE' (Cost=34394
> Card=65623008 Bytes=524984064)
>
>
> Right, I'm no longer getting a (MIN/MAX) full index scan. Instead I'm
> getting a full table scan of each partition, and the query is taking
> over four minutes to run for a table that is ten times smaller than the
> previous one. Now, the strange thing, as I've already said, is that all
> the stats are fully up to date for LARGE_TABLE. This includes the index
> stats.
>
> The really strange thing is that the stats for HUGE_TABLE and its
> associated index are more than 6 months out of date, yet the execution
> plan for my query on this table is perfect, i.e. I get the (MIN/MAX)
> full index scan.
>
> All indexes concerned are local (partitioned) indexes. The only
> structural difference between the two is that the index on HUGE_TABLE
> is non-unique, and the index on LARGE_TABLE is unique. Why would the
> unique index not be picked up for a (MIN/MAX) full index scan if the
> stats are all present and up-to-date?

First, analyzing tables regularly is a mistake that you pay dearly for. The only time when it's safe to analyze a table is when it doesn't matter (Dave Ensor's paradox). Second, run the following query:

select count(*)
from (select distinct dbms_rowid.rowid_block_number(rowid),
                      dbms_rowid.rowid_relative_fno(rowid)
      from large_table
      where section_id is not null);

When you are done, compare that number with the clustering factor and the number of distinct keys in the index. Both of those values can be found in the DBA_INDEXES view. If your clustering factor is closer to the number of rows in the table than it is to the number of relevant blocks in the table (as determined by the query above), then you have a case of a "bad index". You can then hack the statistics using DBMS_STATS.SET_INDEX_STATS and set the clustering factor to the number of relevant blocks, or even 1. That will make the index much cheaper to use, and it will be used. Version 10.2 uses ASSM (automatic segment space management) tablespaces by default (thanks, Pete!), so you can get distortions from there as well. Of course, the very next time your regular analysis job runs, your statistics will be overwritten, unless you lock them with DBMS_STATS.LOCK_TABLE_STATS.
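The comparison and the override described above can be sketched roughly as follows. The schema name APP_OWNER and the index name LARGE_TABLE_UX are hypothetical placeholders; DBMS_STATS.SET_INDEX_STATS and DBMS_STATS.LOCK_TABLE_STATS are the documented procedures, but check the parameter names against your version of the PL/SQL Packages reference before running anything like this:

```sql
-- Step 1: look at the index statistics. If CLUSTERING_FACTOR is close
-- to NUM_ROWS rather than to the block count returned by the
-- DBMS_ROWID query, the index looks "bad" to the optimizer.
SELECT clustering_factor, distinct_keys, num_rows
FROM   dba_indexes
WHERE  owner      = 'APP_OWNER'        -- hypothetical schema
AND    index_name = 'LARGE_TABLE_UX';  -- hypothetical index name

-- Step 2: hack the clustering factor down, then lock the table's
-- statistics so the regular analysis job cannot put the old value back.
BEGIN
   dbms_stats.set_index_stats(
      ownname => 'APP_OWNER',
      indname => 'LARGE_TABLE_UX',
      clstfct => 1);                   -- or the relevant block count
   dbms_stats.lock_table_stats(
      ownname => 'APP_OWNER',
      tabname => 'LARGE_TABLE');
END;
/
```

Note that once the statistics are locked, any later legitimate re-gathering on that table will raise an error unless you unlock them first (DBMS_STATS.UNLOCK_TABLE_STATS) or gather with the force option.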

In addition to that, there is a yellow Apress book by Jonathan Lewis detailing the fundamentals of selectivity, the clustering factor, and the like. Tom Kyte wrote the foreword. The book is called "Cost-Based Oracle Fundamentals". This advice was devised following the methodology from that book.

-- 
http://www.mgogala.com
Received on Mon Dec 19 2005 - 11:04:36 CST
