
Re: 10g System statistics - single and multi

From: Martic Zoran <>
Date: Thu, 26 May 2005 00:52:59 -0700 (PDT)


Thanks for your valuable answers.

I have to agree with you.

I believe that not all of the 10,000 tables used in SAP are accessed with full table scans :)
Also, other timing details could be kept per
tablespace/datafile - that is collected somehow anyway.

But to come back to the point where I agree with you: Oracle certainly knows about the deficiencies in its optimizer.

As you said, the main problems are:
1. No knowledge about the cacheability of some objects - the assumption that every data block read is a disk read is ugly (if we forget the OPTIMIZER_INDEX_CACHING parameter)
2. Cardinality estimates that are off by a factor of 100, not just a few times, because of the many things that can lead to a catastrophe
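A tiny sketch may make point 2 concrete. The numbers and the two-predicate query are made up; the model is simply the optimizer's usual independence assumption, under which per-predicate selectivities are multiplied, so modest errors on correlated predicates compound into orders of magnitude:

```python
# Sketch with made-up numbers: under the independence assumption the
# optimizer multiplies per-predicate selectivities, so per-predicate
# errors compound multiplicatively.

def estimated_rows(base_rows, selectivities):
    """Multiply per-predicate selectivities, as the optimizer does."""
    rows = base_rows
    for s in selectivities:
        rows *= s
    return rows

# Optimizer guess: two "independent" 1% predicates -> about 100 rows.
est = estimated_rows(1_000_000, [0.01, 0.01])
# Reality: the predicates are correlated and each really keeps 10%
# of the rows -> about 10,000 rows.
act = estimated_rows(1_000_000, [0.10, 0.10])
error_factor = act / est   # the estimate is off by a factor of ~100
```

Compare that to a system-statistics error, which only scales the cost of one plan step linearly (the 2-10x you mentioned).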

On top of these miscalculations comes your point that wrong system statistics are a minor problem. That is an ugly thing to say when even you agreed that system statistics may cause discrepancies of 2-10 times.

In the end, Oracle was thinking pragmatically:
- many SQL statements will perform well no matter what
- not all SQL statements are too complex or too unfriendly for the current optimizer
- only a few SQL statements will need attention in any given system

So why not just provide SQL Profiles and an automatic tuning management system that leads customers to do the following:
- efficiently fix the main problems, which in most cases are a small percentage of all SQL statements
- pay for such a facility, which is much better than tools like index advisors and similar products on the market

It looks like SQL Profiles are the best and a very intelligent way for Oracle to fill the optimizer's gaps, and possibly the most cost-effective one: just add new code without breaking what already works. They calculated that fixing every fallacy of the optimizer to cover the 1% of crazy, nasty SQL statements - which get nastier and nastier with new SQL syntax - would be an ugly task. Why not instead do what consultants do: smart hinting where needed, executing statements, looking for missing statistics, comparing execution statistics with the estimates, knowing the cacheability of the data and indexes, and so on.

I have to say that this is a very focused and very smart approach; as you said, they probably figured it out from IBM's LEO and did it their own way.

In the end we want an easy and nice way to fix bad SQL, and we do not care how Oracle does it. If SQL Profiles are the way to go, then great, because the approach is smart: tuning by feedback.


> Martic,
> I suppose we have to agree to disagree. I find
> variances of up to one order of magnitude (factors
> between 2 and 10) not significant enough for the
> collection, storage, and processing of that detail
> to be "cost effective". I for one would be very
> unhappy if Oracle, during system
> Oracle, during system
> statistics collection which should be done during a
> representative
> workload, i.e. during rather busy times, would take
> time out to update
> all 80,000+ segments of my Peoplesoft financials
> system with
> segment-level MBRC, SREADTIM, and MREADTIM values. OK,
> it probably
> wouldn't need to update all 80,000 because likely
> 70,000 are not being
> used at all.
> So far I have found that when the optimizer made
> really bad plan choices
> it was because some row source cardinality was off
> by several orders of
> magnitude, not just a factor of 2 (or 6, or even
> 10).
> I believe all our concerns - yours about differing
> IO speeds and
> efficiency (i.e. mbrc), and mine about cardinality
> mis-estimation and
> caching efficiency - could be elegantly addressed by
> query feedback. I
> alluded to that with my reference to profiles which
> can include hints to
> correct the cardinality estimate of a row source.
> There is a paper by some IBM guys, presented at the
> 2001 VLDB
> conference, about LEO (the LEarning Optimizer). The
> gist is that the dbms
> compares its pre-execution estimates (row source
> cardinalities and
> resource costs) with the execution actuals and
> "remembers" the deltas.
> These deltas are then used to correct estimates on
> subsequent parses and
> could invalidate a plan to force a reparse. They
> survive shutdowns
> because they are stored in the dictionary and can
> even be merged (by the
> dba) with the object statistics.
> My interpretation is that with profiles Oracle is
> taking a step in that
> direction. It just is not dynamic (yet), but the
> infrastructure is being
> prepared.
> I find it interesting how Oracle and IBM (DB2) - I
> don't know enough
> about the optimizer in SQLServer - started on
> sort-of opposite ends and
> had to adopt good things from each other. In DB2 (on
> MVS at least, the
> relational granddaddy) you had to bind statements.
> That bind phase had
> all the time it wanted to come up with a plan which
> was then stored in
> the dictionary. Oracle did the plan generation
> ad-hoc, first with the
> RBO, then with the CBO. As the CBO matured and took
> more and more
> factors into consideration, the need arose to
> (re-)introduce some
> stability - plan stability / outlines were born.
> Also, taking more and
> more information into account when creating an
> access path takes time,
> which is not available in abundance at parse time.
> So the extended
> optimizer was born in 10g which, like the DB2 bind,
> can take as much
> time as needed to exhaust all avenues and then store
> a profile in the
> dictionary.
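The feedback loop described above can be sketched in a few lines. This is a hypothetical toy structure, not Oracle's or DB2's actual implementation: after execution, the delta between the estimated and actual row-source cardinality is remembered as a correction factor and applied at the next parse:

```python
# Minimal sketch of LEO-style query feedback (hypothetical class and
# keys, not the real dictionary storage): remember the estimate/actual
# delta per row source and use it to correct subsequent estimates.

class FeedbackStore:
    def __init__(self):
        # row-source key -> multiplicative correction factor (the "delta")
        self.deltas = {}

    def record(self, row_source, estimated, actual):
        # After execution, remember how far off the estimate was.
        self.deltas[row_source] = actual / estimated

    def corrected_estimate(self, row_source, estimated):
        # On a subsequent parse, scale the raw estimate by the stored
        # factor; unknown row sources keep their raw estimate.
        return estimated * self.deltas.get(row_source, 1.0)

store = FeedbackStore()
# First execution: the optimizer estimated 100 rows, 10,000 came back.
store.record("scan of EMP where DEPT_ID = 10", estimated=100, actual=10_000)
# Next parse of the same statement: the raw estimate of 100 is corrected.
fixed = store.corrected_estimate("scan of EMP where DEPT_ID = 10", 100)
```

A large stored delta could equally serve as the trigger to invalidate the old plan and force a reparse, which is the dynamic part Oracle's profiles do not yet do.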

Received on Thu May 26 2005 - 03:57:47 CDT
