Re: Estimating effectivity of performance improvement measures

From: Robert Klemme <>
Date: Sat, 29 Nov 2008 13:18:15 +0100
Message-ID: <>

On 29.11.2008 12:16, Jonathan Lewis wrote:
> "Robert Klemme" <> wrote in message

>> Today I have another question regarding estimating effectivity of
>> changes.
>> Situation: we have a query that accesses four tables, which result in
>> 18 buffer gets per execution and 3 physical reads.  The plan looks
>> pretty good given the schema and query.  This query is responsible for
>> more than 30% of the IO of this instance.  (I'll leave out
>> details for now because my main point is independent of the specific
>> SQL statement and schema and the posting is going to be lengthy
>> anyway.)

>> Now, what do you think?  What other options do I have?  Did I overlook
>> something?  Thanks in advance!

> I think you need to read Chapter 1 of volume 2 of
> "Cost Based Oracle" - but I'm still writing it, so
> you will have to wait a bit.

LOL - Thanks for making my day!

> You're obviously thinking along the right lines - the
> question of how you decide the potential effectiveness,
> though, has to involve a component of understanding of
> the requirement.
> Taking your requirement:
> 4 tables, 18 block visits, 3 physical reads.
> Assume unique indexed access to a table takes three
> index block visits and one table visit - that's 16 blocks
> in your case. For reasonably large tables it's not
> unreasonable to find that each table block visit has a
> very high chance of being a physical read.
> So - why have you got 2 "extra" block visits, and where
> do you "lose" one physical read ?
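To spell out the arithmetic above (assuming each unique index probe visits root + branch + leaf, i.e. a height-3 index; only the numbers come from this thread, the query itself is purely illustrative):

```sql
-- Illustrative only: 4 tables, each reached via a unique index probe that
-- visits 3 index blocks (root + branch + leaf) plus 1 table block.
SELECT 4 * (3 + 1)      AS expected_gets,   -- 16
       18 - 4 * (3 + 1) AS extra_gets       -- the 2 "extra" block visits
FROM   dual;
```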

Now this is a very interesting and helpful way to look at it!  After some more experimenting following my initial posting, it had slowly started to dawn on me that, with four tables involved, the 16 buffer gets I achieved during testing might be close to optimal.

> The extra block visits might be undo block visits for read
> consistency during periods of concurrent update - and that's
> an issue that's going to make it hard to test the effects of any
> changes properly.

IMHO this is unlikely: on a copy of the production database with no concurrent write activity, repeated executions of the query also showed 18 consistent gets.  Since production shows ~18 buffer gets as well, I assume the query does not suffer significantly from undo block visits.  But this is a good item to keep in mind!

> The extra block visits may be due to index range scans that
> leave you acquiring and discarding excess rows from tables -
> and each extra row might mean an extra block visit.

This seems a likely cause since there _are_ range scans involved.

> The extra blocks may be due to large index range scans that
> end up returning just one appropriate rowid, so the extra work
> is done only in the index (and if that's the case, then this query
> could also be putting an undue workload on your CPU).

I don't observe any unusually high CPU usage for this query.

> The "lost" physical read might be because you have a table which
> is so small and popular that it's always 100% buffered. You might
> even have three such tables - and be able to see that all the I/O
> comes from (say) one very big index that doesn't stay buffered,
> plus two physical reads on the last table.

There are two tables that are likely so "popular" that they are kept in buffer cache.

> --- and so on.

Yes, there's quite a number of possible combinations of reasons, but:

> You may be able to get some idea of the "object-level cache hit
> ratios" from v$segstat - which can give you an idea of where you
> are most likely or least likely to gain some I/O benefit by restructuring -

Thanks for that hint!  I will check this and see whether I can get a better idea of where the IO is spent.  In fact, this is exactly the kind of thing I have been looking for.
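For reference, a sketch of the kind of query I plan to run against v$segstat (the table names are placeholders for the four tables involved; the join to dba_objects and the exact statistic names may need adjusting for the Oracle version at hand):

```sql
-- Per-segment logical vs. physical reads, as a rough "object-level hit ratio".
-- Placeholder table names; requires access to v$segstat and dba_objects.
SELECT o.object_name,
       MAX(CASE WHEN s.statistic_name = 'logical reads'  THEN s.value END) AS logical_reads,
       MAX(CASE WHEN s.statistic_name = 'physical reads' THEN s.value END) AS physical_reads
FROM   v$segstat   s
JOIN   dba_objects o ON o.data_object_id = s.dataobj#
WHERE  o.object_name IN ('T1', 'T2', 'T3', 'T4')  -- the four tables in the query
GROUP  BY o.object_name
ORDER  BY physical_reads DESC NULLS LAST;
```

A segment whose physical reads are tiny compared to its logical reads is effectively always buffered, so eliminating block visits to it would gain little.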

> but mostly you just have to say:
> this is the amount of data I HAVE to visit
> this is where I can find it in the least number of blocks
> this is the number of blocks I will almost certainly have to get off
> disc
> this is how I can probably keep the rest of the blocks buffered


> In your specific case, simply reducing 18 block visits to 15 may make
> virtually no difference to performance because the three blocks you have
> eliminated may have been blocks that would have been buffered anyway.  But
> if you can say "I have avoided one trip to a big table, and my knowledge
> of the application (or at least v$segstat) tells me that this will usually
> mean one less physical read" then you're heading in the right direction.
> Ultimately, to do less work, you have to "co-locate" the data you want
> (and this usually means more work on the inserts, and can mean more
> undo block visits for read-consistency),

I have actually considered an option with a similar negative "undo effect": separating current data from old data.  Unfortunately, this will involve either changes to the application logic (which we can do, but they will take more time and need more thorough testing than just another added index) or triggers, which I suspect would be complex as well (i.e. they would also need thorough testing).

I believe ultimately this is the direction we have to go anyway, because one of the large tables currently contains only 5% old, unaccessed data, but due to the nature of the application this share will increase significantly over the next months.  Too bad this was not accounted for in the initial design.
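Just to make the idea concrete, a hedged sketch of what such a separation could look like with range partitioning (all names and the cut-off date are hypothetical; the real change would need proper application testing):

```sql
-- Hypothetical sketch: keep old rows in their own partition so the "hot"
-- partition stays small and well cached.  Names and dates are made up.
CREATE TABLE big_table_part (
  id          NUMBER        NOT NULL,
  created_at  DATE          NOT NULL,
  payload     VARCHAR2(100)
)
PARTITION BY RANGE (created_at) (
  PARTITION p_old     VALUES LESS THAN (DATE '2008-01-01'),
  PARTITION p_current VALUES LESS THAN (MAXVALUE)
);
```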

> or you have to avoid visiting
> data that you don't really need (and this usually means adding columns
> to indexes - which can make the indexes larger, need more space in
> the buffer to stay cached, and become subject to more undo block visits
> for read-consistency reasons).

Luckily, the tables whose accesses can be eliminated by adding columns to indexes have a much lower change frequency than query frequency, so the "undo effect" is probably not as dramatic, although I will keep an eye on it.
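For completeness, the kind of change meant here, with hypothetical names: adding the selected columns to the index so the table visit disappears for this query.

```sql
-- Hypothetical "covering" index: a query selecting only indexed columns can
-- then be answered from the index alone, saving the table block visit
-- (at the cost of a wider index that needs more buffer space to stay cached).
CREATE INDEX ix_orders_lookup ON orders (customer_id, status, amount);

-- e.g.:  SELECT status, amount FROM orders WHERE customer_id = :b1;
```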

Thanks again for the valuable help!

Kind regards

        robert
