
Re: DBMS_STATS [resend chomped version]

From: Jonathan Lewis <>
Date: Wed, 23 Jun 2004 13:48:24 +0100
Message-ID: <022101c45920$62bc2720$7102a8c0@Primary>

Note in-line


Jonathan Lewis
The Co-operative Oracle Users' FAQ
Optimising Oracle Seminar - schedule updated May 1st

> 3. I like your idea of creating a table to hold the sample size for each
> table in the schema. Other columns could hold the reanalyze interval, date
> of next analyze, etc. This would ensure each table was analyzed
> appropriately.

I like the idea as well, with the following caveats.

Oh, and it might suffer in a similar way from a lack of understanding by end-users as to what it was doing and why.

The critical point is that the control table is populated intelligently by someone who understands the data.
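One possible shape for such a control table, sketched as a hypothetical example (the table, column, and constraint names here are all invented for illustration, not taken from the thread):

```sql
-- Hypothetical control table: one row per table, holding the sample
-- size, reanalyze interval, and next analyze date mentioned above.
create table stats_control (
    owner          varchar2(30),
    table_name     varchar2(30),
    sample_pct     number,        -- estimate_percent to use
    interval_days  number,        -- how often to reanalyze
    next_analyze   date,
    constraint stats_control_pk primary key (owner, table_name)
);

-- Driver: gather stats only for the tables that are due, each with
-- its own sample size, then schedule the next pass.
begin
    for r in (
        select owner, table_name, sample_pct
        from   stats_control
        where  next_analyze <= sysdate
    ) loop
        dbms_stats.gather_table_stats(
            ownname          => r.owner,
            tabname          => r.table_name,
            estimate_percent => r.sample_pct,
            cascade          => true
        );
        update stats_control
        set    next_analyze = sysdate + interval_days
        where  owner      = r.owner
        and    table_name = r.table_name;
    end loop;
end;
/
```

The point of the separate control table is exactly the one made above: the sample sizes and intervals are decided by someone who understands the data, not hard-coded into the job.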

The biggest drawback is the traditional battle-cry:

    We do that because the person who worked here
    12 years ago set it up that way, and we're not
    going to change it.

The requirement for statistics will change with the evolution of the database and its users. My viewpoint is that this is a reference table that summarises the documentation that specified the database in the first place. Technically it should be derived from the database specification (the bit that goes:

    table X will grow at M rows per month
    critical entry points to table X are

        predicate 1
        predicate 2

    For each row in table X there will be an average of N rows
    in table Y, varying from 0 to 5N with a sigma of S )

not that I've come across many of those.


For example, what my comments in 1 above mean - if correct, and it's a while since I was at college - is that the package would need to answer the following question about each analyzed segment.


    Definitely not the intention.

    But a good reason for suitably sized LMTs, so you can
    easily spot the tables or indexes that are growing faster
    than predicted, revise your understanding of the data, and
    hence revisit your assumptions about required statistics.


I think what I am saying is that if we buy the argument that one should not sample every x days (hours, years), because we care about the appropriateness of the stats and not their 'freshness', then building an automated system to gather appropriate stats becomes a non-trivial task.

Quite agree.
Fortunately, most tables and most columns can be covered adequately by a small sample, which is affordable (in terms of resources); so we only need to be intelligent about a few critical columns.
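That combination - a cheap sample for the table at large, extra care for the critical columns - might look something like this (a hedged sketch; the table and column names are hypothetical):

```sql
-- Small, affordable sample for the table, with a histogram on the
-- one column whose data distribution actually matters to the optimizer.
begin
    dbms_stats.gather_table_stats(
        ownname          => user,
        tabname          => 'ORDERS',                        -- hypothetical
        estimate_percent => 1,                               -- small sample
        method_opt       => 'for columns status size 254'    -- critical column
    );
end;
/
```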

I forgot to mention the bit where you don't analyze, but just use dbms_stats.set_system_stats to put in the correct numbers where Oracle can't work them out (or spends too much time trying).
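The procedure named here, set_system_stats, sets system (CPU and I/O) figures; for per-segment numbers dbms_stats also provides set_table_stats and set_column_stats. A hedged sketch of both, with invented values:

```sql
-- Sketch only: putting in numbers directly instead of analyzing.
-- All object names and figures here are illustrative.
begin
    -- Table-level figures Oracle can't sample well (or spends too long on):
    dbms_stats.set_table_stats(
        ownname => user,
        tabname => 'ORDERS',        -- hypothetical
        numrows => 10000000,
        numblks => 125000
    );
    -- A system statistic, per the procedure named in the post:
    dbms_stats.set_system_stats('SREADTIM', 8);  -- single-block read time, ms
end;
/
```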


Received on Wed Jun 23 2004 - 07:45:14 CDT
