Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Usage of the dbms_stats.gather_database_stats() procedure

From: Wolfgang Breitling <breitliw_at_centrexcc.com>
Date: Mon, 13 Feb 2006 10:36:07 -0700
Message-Id: <6.2.3.4.2.20060213101537.04bdc2c8@pop.centrexcc.com>


My (personal) take on this is

  1. I definitely DO NOT use 'for all indexed columns size auto', especially not with estimate_percent < 100.
  2. I use cascade => true ONLY with estimate_percent = 100

I generally do not use gather_database_stats, but that is largely because I manage single-schema (Peoplesoft) databases. Even in your case I would create 130 gather_schema_stats jobs. You might be able to run some in parallel.
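The per-schema approach described above could be scripted roughly as follows. This is only a sketch: the schema filter, sample percentage, and method_opt are illustrative placeholders you would tailor to your own site, not a prescribed setup.

```sql
-- Sketch: one gather_schema_stats call per application schema.
-- The WHERE clause and parameter values are illustrative only.
BEGIN
  FOR s IN (SELECT username
              FROM dba_users
             WHERE username NOT IN ('SYS', 'SYSTEM')) LOOP
    DBMS_STATS.GATHER_SCHEMA_STATS(
      ownname          => s.username,
      estimate_percent => 100,                       -- full compute if time permits
      method_opt       => 'FOR ALL COLUMNS SIZE 1',  -- no histograms by default
      cascade          => TRUE);                     -- index stats too (safe at 100%)
  END LOOP;
END;
/
```

Running several such calls concurrently (one job per schema, or a subset of schemas per job) approximates the "run some in parallel" idea without resorting to gather_database_stats.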
I find the impact of the statistics-gathering process itself on the user community (after all, it is just a full table scan unless you insist on gathering histograms) not nearly as onerous as the potential effect of the new statistics on the access paths. Tread carefully and preserve the old statistics in a stattab table so you can restore the prior statistics.
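Preserving the old statistics in a stattab, as advised above, might look like this sketch (the table name and statid are made-up examples):

```sql
-- Sketch: save current statistics before regathering so they can be restored.
-- 'SAVED_STATS' and 'BEFORE_REGATHER' are illustrative names.
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => USER, stattab => 'SAVED_STATS');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => USER,
                                 stattab => 'SAVED_STATS',
                                 statid  => 'BEFORE_REGATHER');
END;
/
-- ... gather new statistics, watch the access paths ...
-- If plans regress, put the old statistics back:
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => USER,
                                 stattab => 'SAVED_STATS',
                                 statid  => 'BEFORE_REGATHER');
END;
/
```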

I would recommend the following:
- gather table stats with 100 percent if you can afford it time-wise; otherwise experiment with percentages based on segment size, or use auto_sample_size. In general, table stats are fairly accurate even with rather small sampling percentages.
- index stats should always be gathered with a full compute (estimate_percent = 100)
- histograms should only be gathered where proven beneficial/necessary, never with a shotgun approach. When gathered they must (IMO) be gathered with full compute. After all, you (OUGHT TO) gather histograms only for columns with rather skewed data distribution, and many of the subtleties of a skewed distribution can easily be missed when sampling.
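Put together, the three recommendations above could translate into a per-table call like this sketch (SCOTT, EMP, and DEPTNO are made-up names standing in for a table with one column known to be skewed):

```sql
-- Sketch: full-compute table and index stats, plus a histogram on a single
-- column proven to be skewed. Owner, table, and column names are illustrative.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => 100,    -- full compute for table, index, and histogram
    cascade          => TRUE,   -- gather index stats in the same pass
    method_opt       => 'FOR ALL COLUMNS SIZE 1 FOR COLUMNS DEPTNO SIZE 254');
END;
/
```

The method_opt here gathers plain column stats everywhere (SIZE 1, i.e. no histogram) and a histogram only on the one targeted column, rather than the shotgun 'FOR ALL INDEXED COLUMNS SIZE AUTO'.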

See also Metalink note 44961.1

At 09:12 AM 2/13/2006, Orr, Steve wrote:
>I've just become aware of an instance that has 130 schemas with a
>total of 15,000 tables, 12,700 of which have no optimizer
>statistics. Even though things run relatively well without the stats
>we are sometimes bitten by poor performance without warning and have
>to compute stats as a fix. I'm thinking I want to get stats on all
>tables by default. I'm looking for recommendations on updating stats
>on all the tables and indexes. I'm thinking I want to use a big
>shotgun approach as a first pass then fine tune after that.
>
>Any recommendations on the estimate percentage? Using cascade? Using
>gather_database_stats? Other?
>

Regards

Wolfgang Breitling
Centrex Consulting Corporation
www.centrexcc.com



--
http://www.freelists.org/webpage/oracle-l
Received on Mon Feb 13 2006 - 11:36:07 CST

