Re: Query help please

From: Randolf Geist <>
Date: Wed, 18 Mar 2009 15:25:03 -0700 (PDT)
Message-ID: <>

On 18 Mar., 21:41, "bob123" <> wrote:
> Thanks shakespeare end randolf
> tkprof gives :

As Shakespeare has already pointed out, the tkprof output reveals that the first point on my list is the major issue, the second one is at most a minor issue.

Note that apart from the operation identified by Shakespeare and the full table scan on PFMQ_MESSAGESTATUS, you didn't perform any physical I/O. The performance of this statement could therefore be far worse (e.g. when multiple sessions compete for the buffer cache); it already benefits significantly from the buffer cache.

You're spending approx. 60 secs. running the full table scan on the PFMQ_MESSAGESTATUS table and the corresponding index and table lookups, which are completely served from the buffer cache (approx. 870k consistent buffer gets within 4 secs.).

The full table scan performed 7,266 physical reads in approx. 53 secs. Assuming on average 8 blocks per multi-block I/O request (can you check the trace file for "db file scattered read" wait events and the number of "blocks" read per request?), this works out to approx. 60 ms per I/O request, which is very slow; see below for more information regarding this.
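The suggested trace-file check can be sketched as follows. This is a minimal sketch, not a full trace parser: it assumes the 10g-style raw 10046 trace wait-line format (e.g. `WAIT #3: nam='db file scattered read' ela= 24313 file#=5 block#=1234 blocks=8 ...`), where "ela" is in microseconds from 9i onwards; field names differ in older releases.

```python
import re

# Matches 10g-style wait lines for multi-block reads, capturing the elapsed
# time in microseconds ("ela=") and the number of blocks per request.
WAIT_RE = re.compile(
    r"WAIT #\d+: nam='db file scattered read' ela=\s*(\d+).*?blocks=(\d+)"
)

def scattered_read_summary(lines):
    """Return (requests, avg_blocks_per_request, avg_ms_per_request)
    for all 'db file scattered read' waits found in the given lines."""
    requests = 0
    total_blocks = 0
    total_ela_us = 0
    for line in lines:
        m = WAIT_RE.search(line)
        if m:
            requests += 1
            total_ela_us += int(m.group(1))
            total_blocks += int(m.group(2))
    if requests == 0:
        return (0, 0.0, 0.0)
    return (requests,
            total_blocks / requests,
            total_ela_us / requests / 1000.0)
```

Feed it the raw trace file (e.g. `with open("ora_1234.trc") as f: scattered_read_summary(f)`) to see whether the "8 blocks per request" assumption above holds and what the actual average wait time per multi-block read was.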

The major part of the remaining time is spent in single-row/single-block random access to the table PFMQ_MESSAGEDATASTORAGE, where you perform 64k physical reads in 2,420 secs, i.e. roughly 40 ms on average per physical read. That is quite slow; you should expect something around 10 ms or better from a decent storage system for a single-block random read.
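For reference, the two per-request averages quoted above follow directly from the tkprof figures; a quick back-of-the-envelope check (the "8 blocks per request" and "64k = 64,000" values are assumptions to be verified against the trace):

```python
# Multi-block reads: 7,266 blocks read physically in ~53 s; assuming
# ~8 blocks per multi-block I/O request (to be confirmed in the trace):
multiblock_requests = 7266 / 8                 # ~908 I/O requests
ms_per_multiblock = 53 / multiblock_requests * 1000
print(round(ms_per_multiblock))                # ~58 ms per request

# Single-block reads on PFMQ_MESSAGEDATASTORAGE: ~64,000 reads in ~2,420 s:
ms_per_singleblock = 2420 / 64000 * 1000
print(round(ms_per_singleblock))               # ~38 ms per read, vs ~10 ms expected
```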

Either your I/O subsystem was already saturated by some concurrent processing while this trace was taken, or this indicates a potentially severe problem with your storage layer.

The starting operation of your execution plan, the full table scan on PFMQ_MESSAGESTATUS, generated 291k rows instead of the estimated 1 row. As already mentioned above, you need to identify why the applied filter predicate results in such an underestimated cardinality. Something seems to be wrong with the statistics of this table.

The bottom line is: you need better cardinality estimates, so that the optimizer can properly cost the nested loop operations currently chosen. Very likely it will arrive at a different plan once the cardinality estimates are closer to reality.

The second point: check why you saw such slow physical read performance while tracing this statement.

The FILTER operation did not benefit from the "filter optimization" built into the Oracle runtime engine, but it took only 2 seconds, so it is currently the least important issue, although it could get worse if it required more physical I/O.


Oracle related stuff blog:

SQLTools++ for Oracle (Open source Oracle GUI for Windows):
