RE: inefficient sql

From: Taylor, Chris David <>
Date: Wed, 30 May 2012 09:31:08 -0500
Message-ID: <>

Good point by Raj. I have an internally developed snapshot-capture tool that I can run over a 24-hour period, a week, or whatever interval I want.

From the captured data I can run a report that lists SQL statements by execution count and by elapsed time per execution. That lets me focus on the statements that are executed the most and take the longest per execution, i.e., the ones the users are actually waiting on.
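Chris's snapshot tables are internal, so their layout isn't shown here, but a similar "most executed / slowest per execution" report can be sketched directly against the standard V$SQL view. The 20-row cutoff and the rounding are arbitrary choices for the sketch:

```sql
-- Hedged sketch against V$SQL (ELAPSED_TIME is in microseconds).
-- A snapshot-based tool like Chris's would instead diff two captures
-- of these counters; this one-shot query shows cumulative figures.
SELECT *
FROM  (SELECT sql_id,
              executions,
              ROUND(elapsed_time / 1e6, 2)                         AS total_elapsed_secs,
              ROUND(elapsed_time / NULLIF(executions, 0) / 1e6, 4) AS secs_per_exec
       FROM   v$sql
       WHERE  executions > 0
       ORDER  BY elapsed_time DESC)
WHERE  rownum <= 20;
```

Sorting the inline view before applying ROWNUM is the pre-12c idiom for a top-N query.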

It's ugly (at least to me) but it's handy as I also capture all the bind variables for the executions. The binds snapshot table can grow very large very quickly.
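For comparison, Oracle's built-in bind capture can be queried through V$SQL_BIND_CAPTURE; unlike a tool that records every execution's binds, the built-in capture is only periodic and keeps one sampled value per bind position. A minimal lookup, with `:target_sql_id` as a placeholder you would supply:

```sql
-- Sampled bind values for one statement from the built-in capture.
SELECT sql_id,
       child_number,
       name,            -- bind variable name, e.g. :B1
       position,
       datatype_string,
       value_string,    -- captured value (not populated for all datatypes)
       last_captured
FROM   v$sql_bind_capture
WHERE  sql_id = :target_sql_id
ORDER  BY child_number, position;
```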

Chris Taylor

"Quality is never an accident; it is always the result of intelligent effort." -- John Ruskin (English Writer 1819-1900)

Any views and/or opinions expressed herein are my own and do not necessarily reflect the views of Ingram Industries, its affiliates, its subsidiaries or its employees.

-----Original Message-----
From: [] On Behalf Of rjamya
Sent: Wednesday, May 30, 2012 9:25 AM
Subject: Re: inefficient sql

I'd say that you need to define what _inefficient_ means in your environment, since your workload is different than everyone else's. The considerations will differ for OLTP, DSS, and DW systems, and differ again for Exadata. Many moons ago, I wrote a query that scanned the automatically generated ADDM reports (via the advisor views) for the last n days, looked at every SQL statement identified as a candidate for tuning, and listed them along with how many times each was flagged. It also picked up any database parameter changes recommended by the ADDM runs, and produced the whole thing as a handy HTML report. On occasion we found it very useful; it was slightly better than looking at a single ADDM report.
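The original query stayed with rjamya's former employer, but the kind of scan described can be sketched against the DBA_ADVISOR_* dictionary views. The join conditions and the use of ATTR1 as the SQL_ID for SQL-type advisor objects are assumptions about these views, not the original code; the 7-day window is an arbitrary choice:

```sql
-- Hedged sketch: count how often each SQL statement was flagged by
-- ADDM runs in the last 7 days.  For advisor objects of TYPE = 'SQL',
-- ATTR1 is assumed to hold the SQL_ID.
SELECT o.attr1   AS sql_id,
       COUNT(*)  AS times_flagged
FROM   dba_advisor_tasks    t,
       dba_advisor_findings f,
       dba_advisor_objects  o
WHERE  t.advisor_name    = 'ADDM'
AND    t.execution_start >= SYSDATE - 7
AND    f.task_id         = t.task_id
AND    o.task_id         = f.task_id
AND    o.object_id       = f.object_id
AND    o.type            = 'SQL'
GROUP  BY o.attr1
ORDER  BY COUNT(*) DESC;
```

Parameter-change recommendations, if any, would come from the corresponding rows in DBA_ADVISOR_RECOMMENDATIONS rather than this query.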

However, due to my employment contract I had to leave the only copy with my employer; some day I may need to revisit the topic and try to rewrite it.


Received on Wed May 30 2012 - 09:31:08 CDT
