Re: Estimating effectiveness of performance improvement measures

From: Robert Klemme <shortcutter_at_googlemail.com>
Date: Thu, 27 Nov 2008 01:51:21 -0800 (PST)
Message-ID: <5d3bd791-ed8a-46f7-9961-0441f8cfb647@f13g2000yqj.googlegroups.com>


On Nov 26, 8:12 pm, joel garry <joel-ga..._at_home.com> wrote:
> On Nov 26, 2:45 am, Robert Klemme <shortcut..._at_googlemail.com> wrote:
>
> > Today I have another question regarding estimating the effectiveness
> > of changes.
>
> > This is even more the case as some of these tables contain historic
> > data, i.e. multiple values with different time ranges of validity,
> > which makes a certain amount of data dead - from _this_ query's point
> > of view.  So the way this "dead wood" is distributed across
> > objects may have a significant impact (for example, when using
> > timestamps as leading columns of an index vs. trailing columns or
> > having them in the table only).
>
> If you are worried about this, have you considered the recycle buffer
> pool?

No, not yet. Usually I am wary of reserving or otherwise fiddling with resource bandwidth unless guaranteed response times are needed, because you might limit the overall throughput of a system by keeping part of its resources idle. Also, when the load changes, a system with reserved bandwidths might not adapt as well as one without such limitations. In the worst case you have reserved resource bandwidth for a part of the load which has since gone away, effectively wasting resources. :-)

Having said that, your suggestion is worth considering nevertheless. I assume you envisioned something like this: by putting the table's blocks (i.e. those with the highest variability) in the recycle pool we could improve caching of the other blocks, leading to less physical IO.
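
If I understand the mechanism correctly, this would be done via the segment's storage clause. A minimal sketch, assuming a hypothetical table name HIST_DATA and that DB_RECYCLE_CACHE_SIZE has already been set to a nonzero value so the recycle pool actually exists:

  -- Move the hypothetical table HIST_DATA into the recycle pool so its
  -- blocks are aged out quickly instead of displacing hotter blocks in
  -- the default pool.
  ALTER TABLE hist_data STORAGE (BUFFER_POOL RECYCLE);

  -- Confirm the assignment via the data dictionary.
  SELECT segment_name, buffer_pool
    FROM user_segments
   WHERE segment_name = 'HIST_DATA';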

More concretely: in an ideal scenario each occurrence of the query would access a single, different block of the table, which leads to one physical IO anyway. But because that block displaces another block which would otherwise have been kept in the cache, additional physical IO occurs. So in a perfect world we might be able to go down from 3 to 1 physical block reads per query, which would be a dramatic improvement.
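
Whether that actually happens should be measurable. As a sketch (again using the hypothetical table name HIST_DATA), segment level statistics could be compared before and after the change:

  -- Physical vs. logical reads for the segment; sampled before and
  -- after moving the table to the recycle pool, the delta should show
  -- whether cache displacement really goes down.
  SELECT object_name, statistic_name, value
    FROM v$segment_statistics
   WHERE object_name = 'HIST_DATA'
     AND statistic_name IN ('physical reads', 'logical reads');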

Is this roughly the line of thought that you had in mind?

> Another answer is to be glad you can say "hey, we need more powerful
> hardware!"  :-)

Hehe, I'll tell the customer and see what he thinks of this. :-)

> It's my understanding that RAT has been backported.
> http://www.oracle.com/us/corporate/press/015217_EN and metalink
> Note: 560977.1

Oh, that's a valuable bit of information! Thanks!

> > Now, what do you think?  What other options do I have?  Did I overlook
> > something?  Thanks in advance!
>
> I think you have a better handle on the issues than most people.  It's
> difficult to accept that the problem may have so many variables that
> reactive empirical testing is the only way to go.  Jonathan Lewis has
> published some work on attempting to figure out whether the data
> layout would benefit from reorganization; sometimes the data quickly
> goes back to its old shape.  I have at least one table/report
> combination that
> runs noticeably slower (7 hours v. 5 hours) on newly added data
> apparently because of the strangely skewed distribution on entry.

Yeah, my rule of thumb is to NOT consider table or index reorganization as the first choice, because it is likely that the data quickly goes back to its old layout and the restructuring cost is then higher than the benefit.

Thanks again!

Kind regards

robert