Re: awr history

From: rjamya <>
Date: Thu, 13 Jan 2011 21:34:05 -0500
Message-ID: <>

Hi Ed,

Designing GC reports (in the GC interface) is painful at best unless you want really simple reports ... just use SQL. All you really need is to join a couple of views and you are ready to roll.

GC has loads of data (at different granularity), mostly in mgmt$metric_daily (daily rollups, for long-term averages), mgmt$metric_hourly (hourly rollups, for shorter-term averages) and mgmt$metric_current, which holds the latest collected info. You can join mgmt$target with these views to grab a boatload of info (again, not as granular/complete as ASH/AWR, but still useful for trending / capacity planning).

I created a spreadsheet from the following query, which tells me which metrics I have and can use (a sort of ready reference).

select distinct target_type, metric_name, metric_column, metric_label, column_label
  from mgmt$metric_daily
 where rollup_timestamp >= trunc(sysdate - 8)
 order by 1, 2, 3;

Once you have this, you can pull all sorts of information: metric_name is the major category, metric_column is the individual metric name. Actual values are in the average/min/max/stddev columns, and key_value (plus key_value2 through key_value5) gives more attributes about that metric.
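For example, here is a sketch of pulling one metric's daily trend joined to mgmt$target. The 'Load'/'cpuUtil' host metric here is just an illustration; substitute the names you find in the inventory query.

```sql
-- Sketch only: an 8-day daily trend for one host metric, joined to
-- mgmt$target. The metric_name/metric_column pair is illustrative;
-- pick actual values from your own metric inventory spreadsheet.
select t.target_name,
       d.rollup_timestamp,
       d.average,
       d.maximum
  from mgmt$metric_daily d,
       mgmt$target t
 where d.target_guid = t.target_guid
   and t.target_type = 'host'
   and d.metric_name = 'Load'
   and d.metric_column = 'cpuUtil'
   and d.rollup_timestamp >= trunc(sysdate - 8)
 order by t.target_name, d.rollup_timestamp;
```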

I roll a dashboard/capacity report for our few thousand targets (with metrics collected for hosts as well as databases, RAC, ASM, listeners etc.). My report, prepared for different groups of servers, currently includes the following info (e.g.):

Database: db name, version, number of patches installed, when installed, memory usage, db size, db growth per month/day, mount point usage for datafile mounts, severe alerts, tablespace growth, RMAN backup status, DG performance (if applicable)
Host: mem/cpu/swap usage, # of CPUs, NIC status, RAC interconnect performance, filesystem growth, OS patches, OS version etc.
Storage: allocated, used, daily/weekly/monthly growth, storage type (vendor/software versions etc.)
You can also find Exadata metrics there, including cell performance etc., if you have such targets configured.
This gives an easier view for our delivery team. Your requirements might be different.
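As one hedged example of where the growth numbers can come from, something along these lines produces a per-month figure straight from the daily rollups. The metric_name/metric_column values below are placeholders; verify them against your own inventory first.

```sql
-- Hypothetical sketch: month-over-month size growth per database from
-- the daily rollups. 'DATABASE_SIZE'/'ALLOCATED_GB' are placeholder
-- names; check mgmt$metric_daily for the actual ones in your GC version.
select target_name,
       trunc(rollup_timestamp, 'MM') as month,
       max(average) - min(average)   as growth
  from mgmt$metric_daily
 where target_type = 'oracle_database'
   and metric_name = 'DATABASE_SIZE'
   and metric_column = 'ALLOCATED_GB'
 group by target_name, trunc(rollup_timestamp, 'MM')
 order by 1, 2;
```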

For visualization, Kyle recently posted a few links for graphing tools. This data can be exported in any format you want (using SQL). I have had good success with Google Visualization (no data is sent out; it remains in your browser) on the few occasions I have needed to graph this data. That can be scripted as well.
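For the export step, a plain sqlplus spool is enough to hand the data to any graphing tool. A minimal sketch (column names taken from mgmt$metric_daily; adjust the where clause to the metric you want):

```sql
-- Minimal SQL*Plus sketch: spool a 30-day daily metric trend to CSV
-- for feeding a graphing tool.
set pagesize 0 feedback off heading off trimspool on
spool metric_trend.csv
select target_name || ',' ||
       to_char(rollup_timestamp, 'yyyy-mm-dd') || ',' ||
       average
  from mgmt$metric_daily
 where rollup_timestamp >= trunc(sysdate - 30)
 order by target_name, rollup_timestamp;
spool off
```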

To dig into the GC views, this link is very valuable: <>. There are many more views, but you can easily find them in a few minutes.

As others have indicated, at the instance level ASH/AWR cannot be beat, but for an installation-wide view GC data can be useful. Look into it; you may find what you want.


On Thu, Jan 13, 2011 at 5:27 AM, ed lewis <> wrote:

> Hi Kyle,
> Thanks for your response.
> We do have grid control, which I should have mentioned. It was recently
> installed
> so I'm still learning my way around. Some of these reports
> to be generated will be required for management.
> So, we'll need to create these reports outside of grid control, I would
> think.
> So, are there ways to take advantage of the data stored in
> GC ? In our case, to generate customized reports with graphs.
> thanks again.
> ed
> ----- Original Message -----
> *From:* kyle Hailey <>
> *To:*
> *Cc:* ;
> *Sent:* Wednesday, January 12, 2011 9:18 PM
> *Subject:* Re: awr history
> Interestingly enough, I just started a thread along similar lines at
> My question was "what are the best Oracle performance tools"
> As far as reporting, what are you looking for exactly?
> There is always OEM. OEM doesn't have official reports but the graphs are
> pretty good for explaining load and bottlenecks.
> Of course there is AWR report and ASH report.
> As far as other tools, there is which I've never
> used and does trends for AWR and statspack.
> I know Enteros had some tool for viewing AWR data.
> ASHMon will read ASH and display it graphically.
> Some ASH queries are on
> - Kyle
> On Wed, Jan 12, 2011 at 5:25 PM, Andrew Kerber <>wrote:
>> Below is one of my favorites: I/O over time. There are many variations
>> you can do on this theme:
>> select
>>   to_char(sn.end_interval_time,'yyyymmddhh24') the_date,
>>   sum(decode(sn2.startup_time, sn3.startup_time,
>>       newreads.value - oldreads.value, newreads.value)) reads,
>>   sum(decode(sn2.startup_time, sn3.startup_time,
>>       newwrites.value - oldwrites.value, newwrites.value)) writes,
>>   sum(decode(sn2.startup_time, sn3.startup_time,
>>       newreads.value - oldreads.value, newreads.value))
>>   + sum(decode(sn2.startup_time, sn3.startup_time,
>>       newwrites.value - oldwrites.value, newwrites.value)) total
>> from
>>   dba_hist_sysstat  oldreads,
>>   dba_hist_sysstat  newreads,
>>   dba_hist_sysstat  oldwrites,
>>   dba_hist_sysstat  newwrites,
>>   dba_hist_snapshot sn,
>>   dba_hist_snapshot sn2,
>>   dba_hist_snapshot sn3
>> where sn.instance_number = dbms_utility.current_instance
>>   and sn.instance_number = sn2.instance_number
>>   and sn2.instance_number = sn3.instance_number
>>   and oldreads.instance_number = sn3.instance_number
>>   and newreads.instance_number = oldreads.instance_number
>>   and oldreads.instance_number = oldwrites.instance_number
>>   and oldwrites.instance_number = newwrites.instance_number
>>   and newreads.snap_id = sn.snap_id
>>   and newwrites.snap_id = newreads.snap_id
>>   and oldreads.snap_id = sn2.snap_id
>>   and newreads.snap_id = sn3.snap_id
>>   and oldreads.snap_id = (select max(s.snap_id) from dba_hist_snapshot s
>>       where s.snap_id < newreads.snap_id
>>         and s.instance_number = newreads.instance_number)
>>   and oldwrites.snap_id = (select max(s.snap_id) from dba_hist_snapshot s
>>       where s.snap_id < newwrites.snap_id
>>         and s.instance_number = newwrites.instance_number)
>>   and oldreads.stat_name  = 'physical reads'
>>   and newreads.stat_name  = 'physical reads'
>>   and oldwrites.stat_name = 'physical writes'
>>   and newwrites.stat_name = 'physical writes'
>> group by to_char(sn.end_interval_time,'yyyymmddhh24')
>> order by to_char(sn.end_interval_time,'yyyymmddhh24')
>> ;
>> On Wed, Jan 12, 2011 at 7:21 PM, ed lewis <> wrote:
>>> Hello,
>>> I'm looking for a way to query the AWR, and ASH tables
>>> to create customized reports. I would also like to present
>>> some of the results in graphs.
>>> Can anyone recommend any sources for this ?
>>> Scripts, websites, or 3rd party tools.
>>> Thanks in advance.
>>> Ed
>> --
>> Andrew W. Kerber
>> 'If at first you dont succeed, dont take up skydiving.'

Best regards

Received on Thu Jan 13 2011 - 20:34:05 CST
