Home -> Community -> Usenet -> c.d.o.server -> CBO histograms & density
Hi Gurus,
I've got a tricky problem on 9iR2 / HPUX.
The statement is:
select * from table where object_id=1234
The table contains 50,000 rows in 900 blocks. There's an index on object_id with ~26,000 distinct keys. object_id is a number column.
We analyze our schema using
dbms_stats.gather_table_stats( .., estimate_percent=>null,
method_opt=>'for all indexes for all indexed columns size 16',
cascade=>true)
(BTW: the same happens if using "analyze schema ...")
After that, the query is executed with a full table scan (FTS). I believe the
problem is dba_tab_cols.density, which contains 0.011. Based on that, the
optimizer expects 0.011 * 50,000 ≈ 550 rows for a given object_id from an
index range scan.
That makes the index scan look about three times more expensive than the FTS
(which, of course, is wrong!).
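The cardinality estimate behind that plan choice is plain arithmetic (a simplified sketch using the numbers from the post, not Oracle's actual CBO costing formulas):

```python
# Estimated rows returned for one object_id value is roughly
# density * num_rows. Numbers taken from the post above.
num_rows = 50_000
ndv = 26_000  # distinct keys in the index

for label, density in [("size 16 histogram", 0.011),
                       ("correct (1/NDV)", 1 / ndv)]:
    est_rows = density * num_rows
    print(f"{label}: density={density:.2e} -> ~{est_rows:.1f} rows")
```

With the inflated density the CBO expects hundreds of rows per key (and correspondingly many index block visits), so a 900-block full scan wins; with the correct 1/NDV density it expects about 2 rows and the index plan is the obvious choice.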
The value of density depends on the number of buckets named in the analyze
statement: the more buckets, the better the value. With 200 buckets the
density comes out at 2.1e-4, while the accurate value of
1/26,000 = 3.8e-5 is produced only if "for all indexed columns" is omitted
or a bucket size of 1 is given.
If density is close to the correct value, the optimizer chooses a "good"
(=index) plan.
My questions are:
1. If histogram information is available, why is the column's overall
density used at all?
2. What does dba_tab_cols.density mean in the presence of histograms?
TIA
Stephan
Received on Fri Nov 15 2002 - 04:09:28 CST