RE: Statistics_Level = All for Stress Testing?

From: Milen Kulev <makulev_at_gmx.net>
Date: Sat, 22 Mar 2008 15:10:13 +0100
Message-ID: <003701c88c26$75b65a70$6401a8c0@trivadis.com>


Hello Jack,
the overhead associated with statistics_level = ALL depends mainly on the volume and frequency of the SQL statements being issued. I can confirm a CPU overhead of about 15-20% when setting statistics_level = ALL. That was on a relatively busy (> 100 SQLs/sec) database (version 10.2.0.2, running on an 8-CPU HP-UX Superdome). The overhead could easily be confirmed at several layers: Statspack reports (CPU usage), GlancePlus, iostat.
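If you want to see exactly which statistics and advisories the parameter controls on your system, V$STATISTICS_LEVEL lists them. A minimal sketch (assuming SELECT privilege on the V$ views; works on 10g):

  SELECT statistics_name,   -- e.g. "Plan Execution Statistics"
         activation_level,  -- TYPICAL or ALL
         system_status      -- currently ENABLED or DISABLED
    FROM v$statistics_level
   ORDER BY activation_level, statistics_name;

The rows with activation_level = ALL (notably the plan execution and timed OS statistics) are where the extra 15-20% CPU goes.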

The best strategy is to set statistics_level = ALL for only 2-3 hours (and do not forget to flush the shared pool 2-3 times BEFORE setting statistics_level = ALL; that forces the statements to be hard-parsed again while the parameter is in effect, so you will get many more execution plans with run-time statistics).
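For example, a minimal sketch (run as a suitably privileged user; SCOPE = MEMORY is my own suggestion so the setting does not persist in the spfile and cannot be forgotten across a restart):

  ALTER SYSTEM FLUSH SHARED_POOL;
  -- repeat the flush 2-3 times over a few minutes, then:
  ALTER SYSTEM SET statistics_level = ALL SCOPE = MEMORY;

  -- ... let the workload run for 2-3 hours, then revert:
  ALTER SYSTEM SET statistics_level = TYPICAL SCOPE = MEMORY;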

The overhead of ~20% is well worth the trouble, since you will get feedback on where the CBO is making wrong cardinality estimates (see the excellent paper by Wolfgang Breitling: http://www.centrexcc.com/Tuning%20by%20Cardinality%20Feedback.ppt.pdf).
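On 10g you can read the estimated vs. actual row counts straight from the cursor cache with DBMS_XPLAN. A minimal sketch (NULL, NULL means "the last statement executed in this session"; &sql_id below is just a placeholder for a real SQL_ID):

  -- run the statement of interest, then, in the same session:
  SELECT * FROM TABLE(
    dbms_xplan.display_cursor(NULL, NULL, 'ALLSTATS LAST'));

  -- or for a specific cursor from another session:
  SELECT * FROM TABLE(
    dbms_xplan.display_cursor('&sql_id', 0, 'ALLSTATS LAST'));

Compare the E-Rows (estimated) and A-Rows (actual) columns; a discrepancy of orders of magnitude is exactly the wrong estimation that Breitling's paper tells you how to act on. Note that A-Rows is only populated when statistics_level = ALL (or the statement was run with the /*+ gather_plan_statistics */ hint).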

Especially useful is the combination of statistics_level = ALL with the Lab128 tool (http://www.lab128.com/), since Lab128 can show you the real execution plans (from the shared pool) plus run-time SQL statement statistics, the top SQLs in a specified time window, SQL history (aka AWR), etc.
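If you do not have Lab128, the raw data it presents is available in the V$ views anyway. A rough sketch of pulling per-plan-line run-time statistics for one statement on 10g (&sql_id is again a placeholder):

  SELECT child_number, id, operation, options,
         cardinality      AS est_rows,   -- optimizer estimate
         last_output_rows AS act_rows    -- actual; needs statistics_level = ALL
    FROM v$sql_plan_statistics_all
   WHERE sql_id = '&sql_id'
   ORDER BY child_number, id;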

HTH. Milen

-----Original Message-----

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of JApplewhite_at_austinisd.org
Sent: Thursday, March 20, 2008 8:37 PM
To: oracle-l
Subject: Statistics_Level = All for Stress Testing?

Anyone use this setting in a Dev or Test environment to bring out the worst in your Developers' SQL?

Background is that I set Statistics_Level = All in our Production Student Information database (9.2.0.4 on 64bit Linux) to do some tracing and analysis with HotSOS Profiler over Spring Break last week and didn't change it back to Typical before School restarted on Monday. I hadn't yet searched MetaLink and found all the problems associated with the All setting, so didn't even think about it until we'd fought maxed-out CPUs for a day and a half and had a bunch of frustrated Users who couldn't use our applications. Once I finally remembered that I'd done that and set it back to Typical, CPU went way down to normal levels.

In the course of trying to figure out what the heck was going wrong with our previously fairly nicely behaved apps, we found instances of inefficient SQL in several apps. Adding an index and hint or two here and there worked wonders and made performance noticeably better even after Statistics_Level was set back to Typical. That got me to thinking that maybe I should use the All setting in our Dev and Test databases to bring out the worst in new/revised apps and tune them before they go to Production.
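(For illustration, a minimal sketch of the kind of top-SQL query that surfaces such statements; it uses plain V$SQL, so it works on 9.2 as well, and the ordering column and cutoff of 20 are arbitrary choices:)

  SELECT * FROM (
    SELECT buffer_gets, executions,
           ROUND(buffer_gets / GREATEST(executions, 1)) AS gets_per_exec,
           sql_text
      FROM v$sql
     ORDER BY buffer_gets DESC)
   WHERE ROWNUM <= 20;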

Does that seem reasonable, or does the All setting only affect certain kinds of SQL, or is it skewed in some other way that wouldn't provide the kind of general stress-testing I'm envisioning?

Thanks.

Jack C. Applewhite - Database Administrator
Austin I.S.D.
414.9715 (phone) / 935.5929 (pager)

--

http://www.freelists.org/webpage/oracle-l
