Re: Better cardinality estimate when dialing optimizer_features_enable back

From: Randolf Geist <>
Date: Tue, 24 Nov 2009 20:56:42 +0100
Message-Id: <>

> Here is the histogram on letter_bin.prty_id
> XV3RNdFVINlE&hl=en
> I've never manually crafted a histogram before.


I can't access that Google spreadsheet - it tells me I lack privileges to do so.

How many distinct values are in the column, and what does the actual distribution look like?

Are there values that are so rare that the index access actually makes sense? If not, consider simply removing the histogram - or consider removing the index.

Instead of fiddling with the histogram itself, you could simply try setting the DENSITY of the column manually via DBMS_STATS.SET_COLUMN_STATS to the value currently shown in DENSITY. According to Alberto's post this should revert the optimizer to the old density calculation based on the dictionary DENSITY value (the new calculation ignores the value in the dictionary), but I haven't tested this recently myself. That should change the cardinality estimate for unpopular column values.
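A minimal, untested sketch of that approach - the owner, table, and column names follow the thread, but verify them against your schema; the point is to read the current density and write it straight back so the user-stats flag is set:

```sql
-- Sketch only: re-setting DENSITY to its current value via
-- SET_COLUMN_STATS so the optimizer uses the dictionary density.
DECLARE
  l_distcnt NUMBER;
  l_density NUMBER;
  l_nullcnt NUMBER;
  l_srec    DBMS_STATS.STATREC;
  l_avgclen NUMBER;
BEGIN
  -- fetch the current column statistics, including DENSITY
  DBMS_STATS.GET_COLUMN_STATS(
    ownname => USER, tabname => 'LETTER_BIN', colname => 'PRTY_ID',
    distcnt => l_distcnt, density => l_density, nullcnt => l_nullcnt,
    srec    => l_srec,    avgclen => l_avgclen);

  -- write the same density back; other statistics stay unchanged
  DBMS_STATS.SET_COLUMN_STATS(
    ownname => USER, tabname => 'LETTER_BIN', colname => 'PRTY_ID',
    density => l_density);
END;
/
```

Parameters left out of SET_COLUMN_STATS default to NULL and are left untouched, so only the density is (re)written.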

Regarding the plans that you've posted: there is transitive closure taking place on PRTY_ID between LETTER_BIN and PARTIES, which has the side effect that the optimizer removes the join condition between LETTER_BIN and PARTIES. It's just a workaround and does not address the root cause, but what do you get if you manually add "PARTIES.PRTY_ID = <val>" to the queries? With that predicate in place the cardinality estimate might be lower and closer to reality.

This workaround should retain the join predicate and thus lower the cardinality estimate.
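For illustration, a sketch of what adding the redundant predicate looks like - the select list, join form, and the literal value are placeholders, not taken from your actual query:

```sql
-- Sketch: repeating the filter on PARTIES explicitly, so the optimizer
-- keeps the join predicate instead of dropping it after transitive
-- closure on PRTY_ID.
SELECT lb.*, p.*
FROM   letter_bin lb,
       parties    p
WHERE  lb.prty_id = p.prty_id
AND    lb.prty_id = :val
AND    p.prty_id  = :val;   -- redundant predicate added on purpose
```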


Oracle related stuff blog:

Co-author of the forthcoming "OakTable Expert Oracle Practices" book:

Received on Tue Nov 24 2009 - 13:56:42 CST
