measuring performance changes

From: Dba DBA <>
Date: Tue, 15 Nov 2011 11:33:40 -0500
Message-ID: <>

We have gathered some historical performance data and I am trying to figure out good ways to use it to find possible performance issues now and in the future. I will try to keep this email from rambling, but I am not 100% sure what I want to do. However, I think it can be useful. We have some jobs that store historical performance data. For now, I am looking at data for queries. We also store wait events, etc., but for now I'm focused on the queries. I cannot post the exact queries we run to pull this data since I don't own them. It isn't fancy. We have a year's worth of historical data.

We pull historical query data from the AWR (we don't have space to keep AWR data for years, so we pull out what we need).

This is by hour and comes right out of the AWR repository.

We also store our v$sqlarea and do a 'minus' between v$sqlarea and our storage table to add new queries.
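For what it's worth, the 'minus' step is just a set difference on sql_id. A minimal sketch in Python with made-up sql_ids (the real work is of course the SQL MINUS against v$sqlarea and the storage table):

```python
# Hypothetical sql_ids; in reality these come from v$sqlarea and the storage table.
current_sql_ids = {"a1b2c3d4", "e5f6g7h8", "i9j0k1l2"}   # seen in v$sqlarea now
archived_sql_ids = {"a1b2c3d4", "e5f6g7h8"}              # already stored historically

# Equivalent of: SELECT sql_id FROM v$sqlarea MINUS SELECT sql_id FROM storage_table
new_sql_ids = current_sql_ids - archived_sql_ids
print(sorted(new_sql_ids))  # the queries to add to the storage table
```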

This is how we have been using it so far.
-- we monitor the database for bad queries (we have homegrown shell scripts
for this). We then compare the gets/executions of poorly performing queries to their historical trends (we also check to see if the query is new).
-- if we see an increase, we try to figure out why and then resolve the issue
(how much of an increase matters depends on the query: going up 1-2 logical IOs matters for some queries since they run so often; for others the increase has to be significant). Sometimes we resolve it with SQL profiles, by analyzing the table, or by checking which bind variables are passed (we don't log these, so we can only use v$sql_bind_capture or contact the developer/stakeholder).

I am trying to figure out a good way to write a report on this data. Has anyone done this?
This is where I am thinking about starting
-- Look at the average gets/executions and standard deviation over the last
month for each query, then look for queries where the gets/executions increased beyond one standard deviation, or by some percentage
-- there will need to be more logic. Some will be custom logic specific to
the needs of our environment (certain queries, applications)
-- we also want to look at an increase in executions.
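To make the threshold idea concrete, here is a rough sketch of the check in Python (the numbers and thresholds are made up; the real input would be a month of gets/executions per query from our history table):

```python
import statistics

def flag_regression(history, current, z_thresh=1.0, pct_thresh=0.20):
    """Flag a query whose current gets/executions exceeds last month's
    mean by more than z_thresh standard deviations OR by pct_thresh."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return current > mean + z_thresh * stdev or current > mean * (1 + pct_thresh)

# Hypothetical daily gets/executions ratios for one query
history = [100, 105, 98, 102, 101, 99, 103]
print(flag_regression(history, 130))  # clear jump -> True
print(flag_regression(history, 103))  # within normal variation -> False
```

In practice the per-query thresholds would probably come from a lookup table, since (as noted above) 1-2 extra logical IOs matter for some queries and not for others.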

  • I think it's also a good idea to look at this in more detail by hour so I can look for more specific spikes.
  • I want to map increases for specific times of day, days of the week, and times of the month (we get month-end processing, which increases usage, etc.).
  • I think it may be useful to dump some data and use Excel to create graphs so we can view it visually.
  • Virtually all of our sessions use dbms_application_info and this really helps a lot. We can pinpoint queries down to specific applications and screens.
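As a sketch of the by-hour / day-of-week bucketing (timestamps and values are made up; the real rows would come from the hourly AWR extract):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical hourly rows: (snapshot_time, sql_id, gets_per_exec)
rows = [
    (datetime(2011, 11, 1, 9),  "a1b2c3d4", 120.0),
    (datetime(2011, 11, 1, 14), "a1b2c3d4", 95.0),
    (datetime(2011, 11, 8, 9),  "a1b2c3d4", 130.0),  # same weekday/hour, next week
]

# Bucket by (sql_id, day-of-week, hour-of-day) to expose time-of-day and
# day-of-week patterns (month-end could be a third bucket key).
buckets = defaultdict(list)
for snap_time, sql_id, gets_per_exec in rows:
    buckets[(sql_id, snap_time.weekday(), snap_time.hour)].append(gets_per_exec)

for key, vals in sorted(buckets.items()):
    print(key, sum(vals) / len(vals))  # average gets/exec for that bucket
```

A dump of these buckets is also exactly the shape of data that graphs easily in Excel.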

Has anyone done this? I think my idea is kind of a starter idea and there are probably better ones. As I said, we have wait event data as well, but for now I want to get something going on the query data.

-- Received on Tue Nov 15 2011 - 10:33:40 CST
