Oracle FAQ Your Portal to the Oracle Knowledge Grid
HOME | ASK QUESTION | ADD INFO | SEARCH | E-MAIL US
 


Re: Wolfgang Breitling's papers at centrexcc.com

From: W.Breitling <member28455_at_dbforums.com>
Date: Fri, 30 May 2003 02:11:37 +0000
Message-ID: <2938606.1054260697@dbforums.com>

Re 1:
I know of the existence of v$sql_plan_statistics_all. If your comment is in reference to the view in appendix A of my paper "What is new in the Oracle 9i CBO", I created that view to get around the limitations of the v$ view. It is up to you what you want to use.
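For anyone comparing the two approaches, a minimal sketch of reading estimated versus actual row counts straight from the v$ view (9i column names; 12345 is a placeholder hash_value you would look up in v$sql, and statistics_level must be set to ALL for the LAST_* columns to be populated):

```sql
-- Estimated vs. actual rows per plan step for one cursor.
select id,
       operation,
       options,
       object_name,
       cardinality      as est_rows,   -- optimizer's estimate
       last_output_rows as act_rows    -- actual rows, last execution
from   v$sql_plan_statistics_all
where  hash_value = 12345
order  by id;
```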

Re 2:
I am not certain I ever said that manually changing statistics is the best way to correct statistics, but the more I think about it, the more I come to the conclusion that it is the ONLY way to correct statistics. What else is there?
In reference to your comment that changing the stats would affect other SQL too: you are correct, but so does almost everything else you can do to tune an SQL statement when you cannot change the SQL itself. The only means I can think of that would not is stored outlines, but in the case I used for the paper and presentation on "Tuning with Statistics" that is not an option because the SQL is never the same.
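To make "manually changing statistics" concrete, here is a minimal sketch using the documented DBMS_STATS setter procedures. The table, column, and numbers are hypothetical illustrations, not the actual figures from the paper:

```sql
-- Hypothetical example: tell the CBO a column has fewer distinct
-- values than it currently believes, raising the computed selectivity
-- of range predicates on that column.
begin
  dbms_stats.set_column_stats(
    ownname => user,
    tabname => 'LEDGER',           -- hypothetical table name
    colname => 'TREE_NODE_NUM',
    distcnt => 500,                -- assumed distinct-value count
    density => 1/500);
end;
/
```

The point is that the change is surgical: one column's statistics, set to values you have tested, rather than whatever a scheduled analyze happens to collect.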

Even something as seemingly innocent as analyzing the tables can affect the access path of existing SQL; otherwise, why are you analyzing at all? Just for the fun of it? I am always amazed that so many DBAs have jobs on a fixed schedule to analyze entire schemas, or any table with so-called "stale" statistics.
Would you make an application change untested? Or slap an Oracle version upgrade, or even just a patch, onto production without testing it first? Yet by analyzing tables on a schedule you are doing virtually the same thing: making an untested system change. Think about it. After you analyze a table you have one of the following two situations:
a) ALL SQL use the same access plans as before and perform the same.
   Then clearly analyzing the table was not necessary and the exercise
   was a waste of resources.
b) The access path of some SQL changes. Do you know which SQL? Do you
   know if the new path performs better, the same, or worse? Should you
   not know BEFORE you commit such a change? Look around at the various
   news and discussion groups for posts like "Why is the performance of
   this SQL worse after analyzing?"

In the system I am responsible for there are four tables which have the histograms for 2 of their columns refreshed every night after a batch job. Most of the other tables were last analyzed 2 years ago, when we upgraded to Oracle 8i.
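A nightly refresh of that kind could look something like the following sketch. The schema, table, and column names are placeholders, and the method_opt string uses the documented "FOR COLUMNS" form:

```sql
-- Refresh histograms on just two columns after the nightly batch;
-- all names here stand in for the real ones.
begin
  dbms_stats.gather_table_stats(
    ownname    => 'APP',
    tabname    => 'BATCH_FACTS',
    method_opt => 'for columns size 254 col_a, col_b',
    cascade    => false);
end;
/
```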

Originally posted by Mikito Harakiri
> "W.Breitling" wrote in message news:2933923.1054170175_at_dbforums.com...
> > I changed the settings and re-uploaded the papers and presentations.
> > Hopefully they will now be viewable with any Acrobat Reader version 3
> > and up. Let me know if there are still problems and thanks for
> > bringing it to my attention.
>
> Thank you, Wolfgang. Very professionally written papers.
>
> Couple of comments.
> 1. A minor convenience: v$sql_plan and v$sql_plan_statistics are
> already joined in v$sql_plan_statistics_all.
> 2. I don't agree that manually changing statistics is the best way
> of correcting the selectivity of single-table predicates like this:
>
> L1.tree_node_num between ... and ...
> or
> L1.tree_node_num between ... and ...
>
> Changing the stats would affect other, simpler queries too.
>
> Dynamic sampling is supposed to amend selectivity in this case.
> There are a couple of pitfalls for dynamic sampling as well:
> i) if the recursive sampling query returns 0 rows, the sample is
> considered "statistically insignificant" and rejected in favour of
> an estimation based upon static statistics.
> ii) if there are no more predicates on the L1 table, sampling would
> not be triggered either. One has to give an explicit hint, add a
> dummy tautology predicate on a different L1 column, or wrap the
> column variable in a [built-in] function.
> iii) if the plan is concatenated, then I don't see correct
> cardinalities calculated by sampling in the plan either. The
> difficulty might be pushing the numbers inside each branch without
> having to fire more sampling queries...
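On the explicit-hint route mentioned in pitfall (ii), the documented dynamic_sampling hint takes a table alias and a sampling level. A sketch, with the table name, predicate values, and level chosen purely for illustration:

```sql
-- Force dynamic sampling of L1 at level 4 even when the range
-- predicate is the only condition on the table; names and values
-- here are illustrative, not from the paper's actual case.
select /*+ dynamic_sampling(l1 4) */ *
from   some_tree_table l1
where  l1.tree_node_num between 100 and 200;
```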

--
Posted via http://dbforums.com
Received on Thu May 29 2003 - 21:11:37 CDT

