
RE: Distinguishing one SQL execution from another

From: <>
Date: Wed, 13 Oct 2004 22:36:45 +1000
Message-ID: <>

comments inline...

> I know that I can get all of this from 10046 traces, including the
> SQL. I've been using them for years and, thanks to Cary and others, I
> know how to interpret them pretty well. The issue is that gathering these
> is expensive - you can't expect to run them all of the time, and certainly
> not for all sessions on a production database. What I'm not convinced
> of yet is that this is the only way to get the information I'm looking for.

Cary may actually tell you of a student he once had who did exactly this. You'd have to have a pretty good reason for doing it, and I personally can't think of one, but if the need is there, and the money for dedicated (and fast) storage, then it is a possibility!
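For reference, the usual way to do this selectively rather than everywhere is to set event 10046 per session. A minimal sketch, from memory (the level numbers and the dbms_system call should be checked against your version's docs; the sid/serial# values are placeholders):

```sql
-- Level 8 adds wait events; level 12 adds waits and bind values
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- ... run the SQL you want traced ...

ALTER SESSION SET events '10046 trace name context off';

-- To trace another session, sys.dbms_system.set_ev takes the target's
-- sid and serial# (look them up in v$session; 12/345 are placeholders):
EXECUTE sys.dbms_system.set_ev(12, 345, 10046, 8, '');
```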

> What I'm looking for is a light weight method to get a rough idea of how
> much time (elapsed and CPU) each session spends executing (broken out by
> parse, execute and fetch if I can get it) each SQL statement it submits,
> taking recursive SQL into account. This would be a monitor that runs
> continuously on heavily loaded production systems. 10046 traces do not
> fit the bill.

Sounds like you need to investigate Oracle Trace. It's pretty comprehensive and takes a bit of effort to get going with, but it should be able to satisfy your requirements. It works on the same basis as 10046 tracing (Oracle kernel events trigger trace data updates), except that the trace data is written in binary format, which is obviously less expensive to write. Later, after collecting data with otrccol, the information must be formatted into reports or database tables for human digestion (otrcrep, otrcfmt).

Trace data is written to buffers first and flushed to file in batches, so it is more efficient than 10046 tracing, and the binary format makes the writes cheaper still. Trace data files can be size-limited and set to recycle automatically.
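Getting a collection going is largely an init.ora exercise. A sketch of the relevant parameters as I remember them from the 8i docs (names and defaults are worth double-checking for your release; the paths and collection name below are made up):

```
# init.ora fragment - parameter names per the 8i Oracle Trace docs (verify)
oracle_trace_enable          = true
oracle_trace_collection_name = my_collection   # hypothetical name
oracle_trace_collection_path = /u01/app/oracle/otrace/admin/cdf
oracle_trace_collection_size = 10485760        # max size in bytes
oracle_trace_facility_name   = oracled         # "all kernel events" facility
oracle_trace_facility_path   = /u01/app/oracle/otrace/admin/fdf
```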

You can certainly get parse/execute/fetch info, dump out SQL, etc, etc, so it's probably worth the effort to get up to speed with it. Jonathan has an article on his website that gives a good introduction to the facility, and I'd suggest you read that to get a taste before you jump into the dry technical detail of setting it all up.

The down-side (there always is one) is that it is a deprecated feature and will be removed in its current form, if it hasn't been already in 10G. However, 10G has some alternatives, as you are probably already aware.

> I'd like to be able to do this with SQL against V$ and/or X$ tables, but
> would consider using a direct SGA attach method such as described by Kyle.
> I just need to know where to look in the SGA for the information.

I really don't think this is a viable option. Even with a high sampling rate, and even if you "could" identify the start/stop point of a parse operation as distinct from an execute or fetch or sys-call, etc, there would still be a not-insignificant margin of error (just think how fast some/most parse calls are compared to your sampling rate!). Besides, especially if you are looking at direct SGA methods, you'd better have shares in Brazilian coffee, as you'd need it!
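To put rough, purely illustrative numbers on that sampling argument:

```
P(a sample lands inside a given call) ~= call duration / sampling interval

e.g. a 2 ms parse call sampled every 100 ms:
     2 / 100 = 0.02, i.e. roughly a 2% chance of ever observing that
     parse - and even when you do catch it, one sample tells you
     nothing about how long it took.
```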

> John Smiley

This transmission is for the intended addressee only and is confidential information. If you have received this transmission in error, please notify the sender and delete the transmission. The contents of this e-mail are the opinion of the writer only and are not endorsed by the Mincom Group of companies unless expressly stated otherwise.

Received on Wed Oct 13 2004 - 07:32:57 CDT
