Re: Estimating effectiveness of performance improvement measures

From: Robert Klemme <>
Date: Thu, 4 Dec 2008 02:46:35 -0800 (PST)
Message-ID: <>

I am following up on my own post in case anyone is interested in how things went.

Robert Klemme wrote:
> On 29.11.2008 12:16, Jonathan Lewis wrote:
> > "Robert Klemme" <> wrote in message
> >
> >> Today I have another question regarding estimating the effectiveness
> >> of changes.
> >>
> >> Situation: we have a query that accesses four tables and results in
> >> 18 buffer gets and 3 physical reads per execution. The plan looks
> >> pretty good given the schema and query. This query is responsible for
> >> more than 30% of the I/O on this instance. (I'll leave out the
> >> details for now because my main point is independent of the specific
> >> SQL statement and schema, and the posting is going to be lengthy
> >> anyway.)

> > You may be able to get some idea of the "object-level cache hit
> > ratios" from v$segstat - which can give you an idea of where you
> > are most likely or least likely to gain some I/O benefit by restructuring -
> Thanks for that hint! I will check this and see whether I can get a
> better idea of where the I/O is spent. In fact this is exactly the type
> of thing I have been looking for.
> > but mostly you just have to say:
> > this is the amount of data I HAVE to visit
> > this is where I can find it in the least number of blocks
> > this is the number of blocks I will almost certainly have to get off
> > disc
> > this is how I can probably keep the rest of the blocks buffered
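For anyone who wants to do the same, the per-object comparison of logical vs. physical reads can be pulled from v$segstat along these lines. This is only a sketch: the table names T1 through T4 are placeholders for the four tables of the query, and the statistic names are the documented v$segstat names.

```sql
-- Sketch: per-object logical vs. physical reads from v$segstat.
-- T1..T4 are placeholders for the four tables accessed by the query.
SELECT o.object_name,
       MAX(CASE s.statistic_name WHEN 'logical reads'  THEN s.value END) AS logical_reads,
       MAX(CASE s.statistic_name WHEN 'physical reads' THEN s.value END) AS physical_reads
FROM   v$segstat   s
JOIN   dba_objects o ON o.object_id = s.obj#
WHERE  s.statistic_name IN ('logical reads', 'physical reads')
AND    o.object_name IN ('T1', 'T2', 'T3', 'T4')
GROUP  BY o.object_name
ORDER  BY physical_reads DESC;
```

A high ratio of physical to logical reads for one segment would point at the object least well served by the buffer cache.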

We did actually place the table with the lowest hit ratio in the recycle pool. Guess what: no noticeable effect. Average disk reads per execution actually went up a bit (3.5 vs. 3), and the hit ratio of this table went slightly _up_.
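For the record, moving a table to the recycle pool is just a storage attribute change. The table name below is a placeholder, and DB_RECYCLE_CACHE_SIZE must be non-zero, otherwise the recycle pool does not exist and the statement fails.

```sql
-- Placeholder table name; requires DB_RECYCLE_CACHE_SIZE > 0.
ALTER TABLE low_hit_table STORAGE (BUFFER_POOL RECYCLE);
```

Blocks of the table are then aged out of the cache aggressively, which is exactly why it did not help here: the reads were not dominated by this one table.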

Well, in my attempt to find out where the 3 disk accesses take place, I ended up doing an ASH report filtered by the SQL ID in question. The result was discouraging: five of the eight database objects accessed (4 tables, 4 indexes) showed up in the "Top DB Objects" section with Activity between 10% and 13%. So the three average disk accesses were spread pretty much evenly across those objects, and it was not the nice case of a single object standing out. This also explains why "banning" one of the tables to the recycle pool did not show any real effect.
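The same breakdown the ASH report gave me can be queried directly from the ASH samples. Again a sketch: it assumes a Diagnostics Pack license, and :sql_id stands for the SQL ID in question.

```sql
-- Sketch: which objects a given statement spends its sampled time on.
-- :sql_id is a bind for the SQL ID; requires the Diagnostics Pack.
SELECT o.object_name,
       COUNT(*) AS samples
FROM   v$active_session_history a
JOIN   dba_objects              o ON o.object_id = a.current_obj#
WHERE  a.sql_id = :sql_id
GROUP  BY o.object_name
ORDER  BY samples DESC;
```

An even distribution of samples across objects, as I saw, means there is no single "hot" segment to attack.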

For the time being we created an additional covering index that eliminates access to one of the tables. This led to a decreased number of buffer gets (17 vs. 18) and of physical reads (2 vs. 3). Unfortunately, the SGA was reconfigured at the same time and the buffer cache is now approximately twice its former size, so we cannot clearly attribute the effect to the index, at least not the decrease in physical reads. The buffer gets certainly decreased, because the table dropped out of the execution plan entirely.
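Schematically the change looked like this; the table and column names are invented, since I left out the real schema above. The idea is simply that the index carries every column the query needs from that table, so the table access itself disappears from the plan.

```sql
-- Invented names. STATUS is the only non-key column the query needs
-- from ORDERS, so including it makes the index "covering" and the
-- TABLE ACCESS step for ORDERS vanishes from the execution plan.
CREATE INDEX orders_cust_status_ix
  ON orders (customer_id, status);
```

Each eliminated table visit saves at least one buffer get per execution, which matches the 18 -> 17 drop we measured.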

Anyway, this was a good learning experience. Thank you for all the valuable hints. I'll now do some more research to find indexes that might benefit from compression to generally improve cache efficiency. :-)
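A quick way to check whether an index is a compression candidate is to let Oracle compute the optimal prefix length itself. Sketch below with a placeholder index name; note that ANALYZE ... VALIDATE STRUCTURE takes a lock on the underlying table (the ONLINE variant avoids the lock but does not populate INDEX_STATS), and INDEX_STATS only holds the result for the most recently analyzed index.

```sql
-- Placeholder index name. VALIDATE STRUCTURE (offline) populates
-- INDEX_STATS, including the optimal compression prefix length.
ANALYZE INDEX orders_cust_status_ix VALIDATE STRUCTURE;

SELECT name, opt_cmpr_count, opt_cmpr_pctsave
FROM   index_stats;

-- Rebuild using the suggested prefix length, e.g. if OPT_CMPR_COUNT = 1:
ALTER INDEX orders_cust_status_ix REBUILD COMPRESS 1;
```

A worthwhile OPT_CMPR_PCTSAVE means more index entries per block, hence fewer blocks to cache, which is the cache-efficiency gain I am after.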


robert

Received on Thu Dec 04 2008 - 04:46:35 CST
