Re: Auto stats gathering is not sufficient - what now?

From: Restomi_w <restomi_w_at_yahoo.com>
Date: Mon, 14 Feb 2011 21:16:30 +0800
Message-Id: <DEFF609A-B567-4E4D-B85D-0AA89A8F9326_at_yahoo.com>



The way Oracle calculates costing for out-of-range values is one of the culprits of plan instability (another is bind peeking before 11g). It demands that stats be as accurate as possible (instead of merely representative), wasting so much effort analysing the best stats gathering method for different applications.

Those id and timestamp columns, which commonly have increasing values, unnecessarily require frequent table stats updates.

Most of the time the values actually exist; the stats just aren't updated frequently enough, so the optimiser assumes they don't.

I believe it would be best to presume values are always in-range, at least for columns without a histogram (just one low and one high value).

All those fancy prorated density calculations are simply irrelevant (again, for the majority of cases) because the values are mostly in-range.
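For illustration, here is a simplified sketch of the linear out-of-range decay being discussed. This is not Oracle's actual code, and the exact formula varies by version; the low/high/NDV numbers are made up:

```python
def prorated_selectivity(value, low, high, ndv):
    """Estimate selectivity of `col = value` under a linear
    out-of-range decay (a simplified model of the CBO behaviour).

    In range: 1/NDV. Outside the known range, the estimate falls
    linearly with distance past the range boundary, reaching 0 once
    that distance equals the (high - low) span.
    """
    base = 1.0 / ndv
    if low <= value <= high:
        return base
    distance = (value - high) if value > high else (low - value)
    span = high - low
    return max(0.0, base * (1.0 - distance / span))

# A timestamp-like column with stats covering values 0..100, 50 distinct values:
print(prorated_selectivity(50, 0, 100, 50))    # in range: 1/NDV = 0.02
print(prorated_selectivity(110, 0, 100, 50))   # slightly stale stats: ~0.018
print(prorated_selectivity(200, 0, 100, 50))   # very stale stats: decays to 0
```

The point of the sketch is that a perfectly valid, recently inserted value gets a shrinking (eventually near-zero) cardinality estimate purely because the stats high value is stale, which is exactly what drives the plan flips.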

Sorry to sound rude; I guess I'm sort of frustrated knowing plan instability could happen at any time, considering we have hundreds of databases here.

Regards,
Tomi

On Feb 12, 2011, at 3:27 AM, Greg Rahn <greg_at_structureddata.org> wrote:

> Before talking solutions, let's talk root cause. The reason the
> cardinality is way off for the supplied query is that stats are no
> longer representative of the data. This is clear from the query
> optimizer trace:
> Using prorated density: [...] as selectivity of out-of-range value pred
>
> Since this table is events (time series data) at some point the data
> that is to be queried is outside the window of values known to the
> query optimizer so the prorated calculation kicks in (higher than the
> last known high value). IIRC in 10.2.0.3 this may be a steep cliff, I
> believe it was changed in a later release to offer a more gradual
> reduction in selectivity, but eventually it will approach 0.
>
> I think the ideal solution is to run dbms_stats at a regular interval,
> regular enough not to encounter this issue. That may be every week,
> every 2 weeks, every month, etc. Perhaps there is time enough to do
> so on the weekends.
>
>
> On Fri, Feb 11, 2011 at 2:59 AM, Tim Hall <tim_at_oracle-base.com> wrote:

>> Nothing is all bad or all good.
>> 
>> If gathering stats on table X gives you consistently bad execution
>> plans, but deleting & locking stats and using dynamic sampling gives
>> you consistently good execution plans I know what I would pick. :)
>> 
>> The point is you must do what it takes to give you consistently good
>> execution plans for all queries against that table.
>> 
>> I don't feel happy about the delete & lock approach myself, but maybe
>> it is the lesser of two evils in this case.

>
> --
> Regards,
> Greg Rahn
> http://structureddata.org
--
http://www.freelists.org/webpage/oracle-l
Received on Mon Feb 14 2011 - 07:16:30 CST
