Re: Optimizer issue - cost of full table scans

From: Niall Litchfield <niall.litchfield_at_gmail.com>
Date: Mon, 13 Sep 2010 18:57:41 +0100
Message-ID: <AANLkTinTCd0p7BzujxD=q1cmxdUjwND6tP5K3qEVE_vo_at_mail.gmail.com>



Hi Greg,
I meant to ask previously, but didn't: are there any changes to system statistics calculation for Exadata? You talked earlier about getting representative stats; I wonder (in the absence of an Exadata play box) how representative the system-stat I/O and CPU costing is in that environment.

On 13 Sep 2010 17:49, "Greg Rahn" <greg_at_structureddata.org> wrote:

Pavel-

This is actually a poor recommendation. Using estimate_percent=>NULL will be very costly, and since the OP is on 11g (11.2 in fact, as he is on Exadata V2), the default value for estimate_percent of dbms_stats.auto_sample_size is much faster and usually comes within >99% of the accuracy of a 100% sample.

More details on why:
http://structureddata.org/2007/09/17/oracle-11g-enhancements-to-dbms_stats/
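To make the suggestion concrete, a minimal sketch of the 11g approach being recommended: simply let estimate_percent default to DBMS_STATS.AUTO_SAMPLE_SIZE rather than forcing a 100% sample with NULL. (The schema and table names below are hypothetical placeholders, not from the original thread.)

```sql
-- 11g: omit estimate_percent, or pass the default explicitly,
-- to use AUTO_SAMPLE_SIZE (the new one-pass algorithm).
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',   -- hypothetical schema
    tabname          => 'EMP',     -- hypothetical table
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
  );
END;
/

-- Contrast with the costly 100%-sample variant being advised against:
--   estimate_percent => NULL
```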

On Mon, Sep 13, 2010 at 6:14 AM, Pavel Ermakov <ocp.pauler_at_gmail.com> wrote:
>
> Hi
>
> Try to ga...

-- 
Regards,
Greg Rahn
http://structureddata.org

--
http://www.freelists.org/webpage/oracle-l
Received on Mon Sep 13 2010 - 12:57:41 CDT
