Re: Analyze gather statistics automated in 10g?
Date: Tue, 31 Mar 2009 08:42:10 -0500
On a side note, it's not necessary to alter a table to invalidate all statements for that object. If you want to flush a single statement in 10g, you can do so by creating an outline on the statement with the DBMS_OUTLN.CREATE_OUTLINE procedure and then immediately dropping it. This has the side effect of flushing the statement from the shared pool (most of the time). Of course, in 11g (and back-ported to 10.2.0.4) this is possible directly with the DBMS_SHARED_POOL.PURGE procedure.

I wrote a little script that I use all the time that prompts for a sql_id, creates the outline, and then drops it. I posted it on my blog a while back if anyone is interested. Here's a direct link to the script: http://www.oracle-guy.com/scripts/flush_sql10.sql
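For illustration, a minimal sketch of the 10.2.0.4+/11g purge approach mentioned above. DBMS_SHARED_POOL.PURGE takes the cursor's address and hash value (looked up from V$SQLAREA) concatenated with a comma, plus the flag 'C' for cursor; the substitution variables here are placeholders, and the linked script above is the authoritative version of the outline trick:

```sql
-- Look up the cursor's address and hash value for a given sql_id.
SELECT address, hash_value
  FROM v$sqlarea
 WHERE sql_id = '&&sql_id';

-- Purge that single cursor from the shared pool.
-- Name format is 'ADDRESS,HASH_VALUE'; flag 'C' means cursor.
EXEC dbms_shared_pool.purge('&&address' || ',' || '&&hash_value', 'C');
```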
On Mar 31, 2009, at 4:40 AM, Martin Klier wrote:
> J. Dex schrieb:
>> (specifically 10.2.0.3) or is there still something that needs to be
>> set up to get it to regularly run? What needs to be set up? How do
>> you set it up?
>> In the past, we have had a cron job that runs weekly that does the
>> How is everybody else handling this?
> Yes, Oracle gathers statistics on its own, with some default values
> described by John in the parallel post.
> But IMO the major disadvantage of these nightly operations is the time
> period they run in by default. For your very special case, you will
> have to check carefully whether statistics/histograms collected in the
> middle of the night fit your needs. Especially with OLTP databases,
> the contrary may be true:
> Imagine a table of orders. The company tries to finish all open orders
> before quitting time. So only a minority of order statuses will be in
> "order-is-still-open" mode when the stats collector comes and gathers
> statistics. It might even be misled into creating histograms based on
> the status field contents. (For example: OPEN means 1, CLOSED means
> 1000. The few remaining OPEN rows will be considered massive data
> skew, outliers.) If you are now inserting lots of open orders (or
> reopening old ones) in the morning, all stats and histograms for the
> column are misleading, and your query plans will honor that. :)
> A far better approach for this kind of usage profile might be to kick
> out the auto gather jobs and create statistics/histograms one time
> while the data set is in a realistic state, maybe at average plant
> workload. Yes, the performance will degrade in this time period, you
> may experience lots of reparsing at the time, and there may be several
> hybrid parses as well, but at least the stats are OK one time. Then
> keep the stats, and recreate them only on demand or at a wider
> interval (Monday morning, etc.).
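The "gather once at a representative time, then keep the stats" approach Martin describes can be enforced in 10g with the DBMS_STATS lock procedures, so the nightly auto job cannot overwrite them. A minimal sketch; the schema and table names (APP, ORDERS) are placeholders:

```sql
-- Gather while the data is in a realistic state, then lock the stats.
BEGIN
  dbms_stats.gather_table_stats(
    ownname    => 'APP',
    tabname    => 'ORDERS',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO',
    cascade    => TRUE);
  dbms_stats.lock_table_stats('APP', 'ORDERS');
END;
/

-- Later, to refresh on demand:
-- EXEC dbms_stats.unlock_table_stats('APP', 'ORDERS');
```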
> Hybrid parses (no clue if the term is useful or correct) are a special
> side effect of bind variable peeking: Imagine a join between two
> tables. The statement is parsed during your gather stats job. One
> table already has the new stats, and the other table still has its old
> stats. A new query plan will be created based on this mixed
> information. Now the statement is used so frequently (with honored
> bind variables in service) that it will not age out of the library
> cache. So you have proper stats now, everything should be well, but
> you see a sad query plan anyway all over the day!
> My quick and dirty solution prior to 11g: Alter the comment of all
> (affected) tables in a superfast loop after the gather stats job ran. :)
> This hack will invalidate all execution plans operating on the table
> in question, because the table "has changed".
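A sketch of how that comment-altering loop might look: re-applying each table's existing comment counts as DDL against the object, which invalidates dependent cursors. This is my reconstruction of the idea, not Martin's actual script; the schema name APP is a placeholder:

```sql
-- Re-apply each table's current comment (a blank if none exists) so the
-- object is marked changed and its cursors are invalidated on next use.
BEGIN
  FOR t IN (SELECT a.table_name,
                   NVL((SELECT c.comments
                          FROM all_tab_comments c
                         WHERE c.owner = 'APP'
                           AND c.table_name = a.table_name), ' ') AS cmt
              FROM all_tables a
             WHERE a.owner = 'APP')
  LOOP
    EXECUTE IMMEDIATE
      'COMMENT ON TABLE APP.' || t.table_name ||
      ' IS ''' || REPLACE(t.cmt, '''', '''''') || '''';
  END LOOP;
END;
/
```

Note that single quotes inside existing comments are doubled before being spliced back into the DDL string, since COMMENT ON does not accept bind variables.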
> Martin Klier
> Usn's IT Blog for Linux, Oracle, Asterisk