Re: How to introduce a proactive approach on DB use?

From: Steve Howard <stevedhoward_at_gmail.com>
Date: Wed, 15 Oct 2008 07:37:24 -0700 (PDT)
Message-ID: <32f8e146-3379-490e-a191-010d0afbbaac@i24g2000prf.googlegroups.com>


On Oct 11, 1:51 am, Helma <helma.vi..._at_hotmail.com> wrote:
> hello everyone,
>
> I just arrived at a shop where there are monthly performance problems.
> At the end of every month, end users are running reports (Business
> Objects) and PeopleSoft batches.
> The problem is that the DBA team has no overview of database use,
> e.g.:
>
> - a night batch was accidentally started during the day, and the DBA
> team was unable to figure this out (an end user discovered it)
>
> - another batch exceeded its normal running time by more than 200% -
> the DBA team only found out while looking for something else
>
> - the same report is run by different users
>
> - there are no ad hoc queries, but none of the batches and reports
> are examined for performance
>
> etc.
>
> I wonder how to get control over this situation and introduce a
> proactive approach.
>
> E.g., I need an overview of database use and, as a second target,
> the ability to quickly identify reports/batches that are running amok.
>
> If anyone has pointers on how they've dealt with such a situation, I
> would be glad to hear them.
>
> Solaris, Oracle 9.2.0.5, app server
>
> Thank you for your time,
>
> H.

My first suggestion would be to move to >=10g. I know you don't want to hear that, and you do have some things available on 9.2, like Statspack, as Mark suggested.
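In the interim on 9.2.0.5, a minimal Statspack routine might look like the following. This is my own hedged sketch, not anything from the thread: it assumes Statspack is already installed under the PERFSTAT schema (via spcreate.sql).

    REM Take a snapshot now; schedule something like this every half
    REM hour (e.g. with DBMS_JOB) to mirror the AWR interval I describe
    REM below.
    CONNECT perfstat
    EXECUTE statspack.snap(i_snap_level => 5);

    REM Then report on the interval between two snapshot IDs:
    REM @?/rdbms/admin/spreport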

However, I swear I heard the hallelujah chorus the first time I looked at the v$ views and AWR in 10g.

We recently completed a total model of our application based on resource utilization, using the AWR. We determined our highest CPU-consuming statements over a 90-day window of half-hour AWR snapshots. Some people think that is too infrequent, but for modeling it is perfect.
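As a sketch of that first step (my hedged example, not the query we actually used; the views and columns are standard 10g AWR objects), something like this pulls the top CPU consumers out of 90 days of snapshots:

    -- Top 20 statements by total CPU over the last 90 days.
    -- cpu_time_delta is in microseconds.
    SELECT *
      FROM (SELECT st.sql_id,
                   SUM(st.cpu_time_delta)   total_cpu_us,
                   SUM(st.executions_delta) total_execs
              FROM dba_hist_sqlstat  st,
                   dba_hist_snapshot sn
             WHERE sn.snap_id         = st.snap_id
               AND sn.dbid            = st.dbid
               AND sn.instance_number = st.instance_number
               AND sn.begin_interval_time > SYSTIMESTAMP - INTERVAL '90' DAY
             GROUP BY st.sql_id
             ORDER BY SUM(st.cpu_time_delta) DESC)
     WHERE ROWNUM <= 20;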

As noted, we tuned the bad code and, for the remainder, built a model that lets us predict capacity requirements to within 3% or so of host utilization for a given workload. We did this by determining the statements that were the largest total CPU consumers over time and talking to the developers to understand what drives their execution in terms of business logic. We then plugged that into our model to predict CPU.

For example, a given statement may average 16 milliseconds of CPU per execution and run 200,000 times per half-hour period. Once we understand the origin of that statement and its relationship to others, we can predict how often it will be executed for a given business workload. From there we can predict our CPU utilization (200,000 executions x 0.016 seconds = 3,200 CPU-seconds).
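To show where per-execution numbers like that come from, here is a hedged sketch (again mine, not our production query; :sql_id is just a bind-variable placeholder) that computes average CPU per execution for one statement, snapshot by snapshot:

    -- Average CPU per execution, per half-hour AWR snapshot.
    -- cpu_time_delta is microseconds; /1000 gives milliseconds.
    SELECT sn.begin_interval_time,
           st.executions_delta,
           ROUND(st.cpu_time_delta / 1000 /
                 NULLIF(st.executions_delta, 0), 3) avg_cpu_ms_per_exec
      FROM dba_hist_sqlstat  st,
           dba_hist_snapshot sn
     WHERE sn.snap_id         = st.snap_id
       AND sn.dbid            = st.dbid
       AND sn.instance_number = st.instance_number
       AND st.sql_id          = :sql_id
     ORDER BY sn.begin_interval_time;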

If the business load increases, we can use high-level business metrics to predict host utilization from the frequently executed statements and their average per-execution CPU.
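To make the arithmetic concrete with the numbers above (the 16-CPU host is my assumption for illustration, nothing more):

    cpu_seconds  = 200,000 execs * 0.016 s     =  3,200 s
    capacity     = 1,800 s interval * 16 CPUs  = 28,800 CPU-seconds
    utilization  = 3,200 / 28,800              =~ 11% from this one statement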

The issue with 9i is that some of these structures just aren't present in that version; in particular there is no AWR, so you don't get per-statement CPU and elapsed-time deltas over time (the CPU_TIME_DELTA and ELAPSED_TIME_DELTA columns of DBA_HIST_SQLSTAT in 10g). That history was an absolute goldmine for us. I have long thought that the instrumentation of Oracle's kernel is what sets them apart, and this is a perfect example.
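On 9i, the closest substitute I can suggest (another hedged sketch) is V$SQL, which does expose CPU_TIME and ELAPSED_TIME in microseconds, but cumulatively since each cursor was loaded rather than as per-interval deltas, so you lose the time dimension unless you snapshot it yourself:

    -- Top 20 statements by cumulative CPU since cursor load (9i).
    SELECT *
      FROM (SELECT sql_text, executions, cpu_time, elapsed_time
              FROM v$sql
             ORDER BY cpu_time DESC)
     WHERE ROWNUM <= 20;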
