
RE: The best CPU usage measurement in Oracle: BUFFER_GETS or CPU_TIME?

From: Cary Millsap <>
Date: Mon, 21 Jun 2004 09:50:43 -0500
Message-ID: <029101c4579f$23f79060$6601a8c0@CVMLAP02>


One of my goals since 2000 has been to come up with a method that has a single point of entry into its execution. One of the features of traditional methods that I think has largely prohibited their effectiveness for a vast segment of the DBA/developer population is that you have to be nearly psychic--at least VERY experienced--to figure out what to do first. And even the VERY experienced people we meet screw it up an alarmingly large percentage of the time. In my experience, most Oracle analysts spend the vast majority of their time working hard to solve the wrong problem.

I agree with you that a reasonably trained person should be able to look at your query and see the problem instantly. But when a person can't, what then?

One feature of the human brain that scares me is that tendency--I know that at least I have this tendency--to lock onto a target and not let go, even when it can be demonstrated that it's locked onto the wrong target. Looking at trace data gives two things that looking at V$ data cannot:

  1. An ability to MEASURE priority. You may know that X is your problem, but how much time is X costing your user? Stated another way, if you fix X, what END-USER benefit will you get in return? It is very, very difficult to answer these questions with V$ (or any other aggregated) data. Sometimes, it's provably impossible.
  2. A single place to begin working that will deliver the correct answer almost every single time. (The "almost" is provided for completeness, because there are a few cases I've run across where getting the right data to use Method R is impossible unless you can alter the application. 10046 is not perfect.)
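
For context, the 10046 event mentioned in point 2 is extended SQL trace, enabled per session. A minimal sketch, shown here purely for illustration (level 12 is the standard level that captures both wait events and bind values):

```sql
-- Illustrative only: level 12 adds wait events and bind values
-- to the extended SQL trace file written by the session.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... run the workload being diagnosed ...
ALTER SESSION SET EVENTS '10046 trace name context off';
```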

Issue #2 reminds me of a probability game. Imagine method A takes an hour but works every time. Imagine that method B takes ten minutes in 10% of cases, takes 20 hours in 60% of cases, and will never accurately identify the problem in the remaining 30% of cases. A real "expert" should choose method B in the 10% case and method A in all the others. What I see so often is people who THINK they're experts burrowed into the 30% case for way, way too long.
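
The arithmetic behind this game can be sketched as follows. Two assumptions not spelled out in the thread: the 20-hour case is taken as 60% of cases so that the probabilities sum to 1, and a failed run of method B is assumed to burn the full 20 hours before the analyst gives up and falls back to method A.

```python
# Expected-time comparison for the "probability game":
# method A always takes 60 minutes; method B takes 10 minutes with
# probability 0.1, 20 hours with probability 0.6, and fails outright
# with probability 0.3 (assumed: fall back to A after the full 20 hours).
P_FAST, P_SLOW, P_FAIL = 0.1, 0.6, 0.3
T_A = 60.0                  # minutes
T_FAST, T_SLOW = 10.0, 20 * 60.0

# Expected cost of committing to method B up front.
e_b = P_FAST * T_FAST + P_SLOW * T_SLOW + P_FAIL * (T_SLOW + T_A)
print(f"method A: {T_A:.0f} min; method B: {e_b:.0f} min expected")
```

Even under these generous assumptions, committing to method B averages over 18 hours against method A's single hour.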

So what I teach is to use method A (which is really Method R). I think this is the best overall approach to issue #2, and I think it's practically the ONLY way to win on issue #1.

Cary Millsap
Hotsos Enterprises, Ltd.
* Nullius in verba *

Upcoming events:
- Performance Diagnosis 101: 6/22 Pittsburgh, 7/20 Cleveland, 8/10 Boston
- SQL Optimization 101: 5/24 San Diego, 6/14 Chicago, 6/28 Denver
- Hotsos Symposium 2005: March 6-10 Dallas
- Visit for schedule details...

-----Original Message-----
From: On Behalf Of Stephane Faroult
Sent: Monday, June 21, 2004 2:40 AM
Subject: Re: The best CPU usage measurement in Oracle: BUFFER_GETS or CPU_TIME?

Cary Millsap wrote:


>More generally, the problem is not V$ data in particular, it's ANY

I think that presenting things so abruptly requires a bit of qualification. To me, it's the same as saying that macroeconomics is totally useless and microeconomics rules. Well, yes and no.

I don't like statspack much and I usually use my own scripts to collect what I need. But at the same time, I very rarely use traces - for one thing, using them in a production environment which has trouble keeping afloat isn't always easy, and I don't always have the freedom to connect as Oracle (or to connect, at the OS level, to the server. No such problem with Oracle, since the password for DBSNMP is rarely changed and ALL_USERS usually shows an account named SUPERDBA the password of which is ... - I am not sure that some queries on the V$ views may not put as much overhead on the system as a localized trace, but they're more discreet :-)). It's probably a frame of mind as well. I have several million lines of code behind me, and have always disliked debuggers.

Traces certainly are a great tool in some cases for understanding some complex situations. But to get the broad picture and an understanding of what is going on from a business point of view, the V$ views are quite valuable. If aggregates were invented, it's simply because people were lost in the details. Something as simple as checking which statements are executed most often is very telling (such as fetching, four times per second, currency exchange rates which are updated once a day ...). To me, understanding what people are trying to do comes first, and how they are doing it second. I am certain that you'll agree with me that it is pointless to minimize wait events on a query which has no reason to be run in the first place - but I am not sure that everybody understands it as clearly, especially some well-intentioned beginners. I have sometimes seen on this list questions such as: 'I am running this query:

                            select distinct a.c1, b.c2
                            from mega_table a, gigantic_table b;
  and it is very slow. What can I do to speed it up?' To which the answer was:

      'Have you tried to set event 10046?'.

Cough, cough, cough! Well, it's better than 'your BCHR looks dreadful, try to increase your SGA size'.
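
The sort of first-look check Stephane describes - which statements run most often, and at what cost - can be sketched against V$SQL. This is an illustrative query, not anything from the thread; V$SQL's counters are cumulative since instance startup and statements can age out of the shared pool, so treat the numbers as indicative:

```sql
-- Illustrative only: top 20 statements by execution count,
-- with buffer gets and CPU time for context.
SELECT *
  FROM (SELECT sql_text, executions, buffer_gets, cpu_time
          FROM v$sql
         ORDER BY executions DESC)
 WHERE ROWNUM <= 20;
```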

Stephane Faroult

Please see the official ORACLE-L FAQ:

To unsubscribe send email to: put 'unsubscribe' in the subject line.
Archives are at
FAQ is at

Received on Mon Jun 21 2004 - 09:49:20 CDT
