Re: Auto stats gathering is not sufficient - what now?

From: Greg Rahn <greg_at_structureddata.org>
Date: Fri, 11 Feb 2011 11:27:30 -0800
Message-ID: <AANLkTimNEYZcM3ddyWB-gd-2xYyZyAjEkH1QR=U5dRjR_at_mail.gmail.com>



Before talking solutions, let's talk root cause. The reason the cardinality estimate is so far off for the supplied query is that the stats are no longer representative of the data. This is clear from the query optimizer trace:
Using prorated density: [...] as selectivity of out-of-range value pred

Since this table holds events (time series data), at some point the data being queried falls outside the window of values known to the query optimizer (higher than the last known high value), so the prorated calculation kicks in. IIRC in 10.2.0.3 this may be a steep cliff; I believe a later release changed it to a more gradual reduction in selectivity, but eventually it will approach 0.
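As a rough illustration of that "gradual reduction" (this is a sketch, not Oracle's exact internal formula, and the numbers are made up): suppose stats were last gathered when the column spanned low_value=100 to high_value=200. A predicate value 10% of the known range past the high value gets its selectivity scaled down by roughly that fraction, and a full range past it the scale has dropped to 0:

```sql
-- Toy proration: scale = greatest(0, 1 - overshoot / (high - low))
SELECT GREATEST(0, 1 - (210 - 200) / (200 - 100)) AS scale_10pct_past,   -- 0.9
       GREATEST(0, 1 - (310 - 200) / (200 - 100)) AS scale_full_range_past -- 0
FROM dual;
```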

I think the ideal solution is to run dbms_stats at a regular interval, often enough not to encounter this issue. That might be every week, every 2 weeks, every month, etc. Perhaps the weekends offer enough time to do so.
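One way to sketch that (schema and table names here are hypothetical, and the interval is just an example): a weekly DBMS_SCHEDULER job that regathers stats on the events table during the weekend window, so the known high value stays close to the newest data.

```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GATHER_EVENTS_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => q'[BEGIN
                            DBMS_STATS.GATHER_TABLE_STATS(
                              ownname => 'APP',      -- hypothetical schema
                              tabname => 'EVENTS',   -- hypothetical table
                              cascade => TRUE);      -- include indexes
                          END;]',
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=3',
    enabled         => TRUE);
END;
/
```

Adjust the repeat_interval (weekly, fortnightly, monthly) to however fast the data outruns the stats.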

On Fri, Feb 11, 2011 at 2:59 AM, Tim Hall <tim_at_oracle-base.com> wrote:
> Nothing is all bad or all good.
>
> If gathering stats on table X gives you consistently bad execution
> plans, but deleting & locking stats and using dynamic sampling gives
> you consistently good execution plans I know what I would pick. :)
>
> The point is you must do what it takes to give you consistently good
> execution plans for all queries against that table.
>
> I don't feel happy about the delete & lock approach myself, but maybe
> it is the better of two evils in this case.

-- 
Regards,
Greg Rahn
http://structureddata.org
--
http://www.freelists.org/webpage/oracle-l
Received on Fri Feb 11 2011 - 13:27:30 CST
