Re: Estimating effectiveness of performance improvement measures

From: joel garry <joel-garry_at_home.com>
Date: Wed, 26 Nov 2008 11:12:35 -0800 (PST)
Message-ID: <61ff1939-cae7-4ba0-a9b2-c2bfe3204ca6@a12g2000pro.googlegroups.com>


On Nov 26, 2:45 am, Robert Klemme <shortcut..._at_googlemail.com> wrote:
> Today I have another question regarding estimating the
> effectiveness of changes.

>
> This is even more the case as some of these tables contain
> historic data, i.e. multiple values with different time ranges of
> validity, which makes a certain amount of data dead - from _this_
> query's point of view. So the way this "dead wood" is distributed
> across objects may have a significant impact (for example, using
> timestamps as leading columns of an index vs. trailing columns,
> or having them in the table only).

If you are worried about this, have you considered the recycle buffer pool?
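For what it's worth, a rough sketch of what that would look like (the
cache size and table name below are made-up placeholders, and the
sizing has to fit your SGA):

    -- Carve out a recycle cache; 256M is purely illustrative
    ALTER SYSTEM SET db_recycle_cache_size = 256M SCOPE=BOTH;

    -- Point the history segment at it so its blocks age out quickly
    ALTER TABLE history_data STORAGE (BUFFER_POOL RECYCLE);

Blocks read for history_data then stop pushing your hot working set
out of the default pool.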

Another answer is to be glad you can say "hey, we need more powerful hardware!" :-)

>
> None of this gets easier with _other_ DML running in parallel and
> competing for buffer cache space. For a realistic test, 11g's Real
> Application Testing would come in handy, since it can capture a
> particular load and replay it against the test database.
> Unfortunately I cannot simulate the load right now: 11g is not
> installed and is not an option, the load is highly dependent on
> client application behavior and traffic (both outside of my
> control), and creating a load capture and replay framework of my
> own would take a significant amount of time.

It's my understanding that RAT has been backported to earlier releases - see http://www.oracle.com/us/corporate/press/015217_EN and Metalink Note 560977.1.
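If the backport covers your version, the capture side is only a
couple of calls. A minimal sketch, assuming a directory object
CAPTURE_DIR already exists and using placeholder names:

    -- On production: record the workload for an hour
    BEGIN
      DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
          name     => 'peak_load',    -- placeholder capture name
          dir      => 'CAPTURE_DIR',  -- pre-created directory object
          duration => 3600);          -- seconds; omit to run until stopped
    END;
    /

    -- On the test system: preprocess, then set up the replay
    BEGIN
      DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'CAPTURE_DIR');
      DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(
          replay_name => 'peak_replay',
          replay_dir  => 'CAPTURE_DIR');
      DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;
    END;
    /

You still need wrc replay clients connected before
DBMS_WORKLOAD_REPLAY.START_REPLAY, but that's the whole shape of it.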

>
> Now, what do you think? What other options do I have? Did I overlook
> something? Thanks in advance!
>

I think you have a better handle on the issues than most people. It's difficult to accept that the problem may have so many variables that reactive empirical testing is the only way to go. Jonathan Lewis has published some work on attempting to figure out whether the data layout would benefit from reorganization; sometimes the data quickly goes back to its old shape. I have at least one table/report combination that runs noticeably slower (7 hours v. 5 hours) on newly added data, apparently because of the strangely skewed distribution on entry.
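If you want a cheap first look at whether reorganization could pay
off, the index clustering factor is one number to watch. A quick
sketch (the table name is a placeholder):

    SELECT i.index_name, i.clustering_factor,
           t.blocks AS table_blocks, t.num_rows
    FROM   user_indexes i
           JOIN user_tables t ON t.table_name = i.table_name
    WHERE  i.table_name = 'HISTORY_DATA';

    -- clustering_factor near table_blocks: rows are well clustered
    -- for that index; near num_rows: nearly every row fetched by an
    -- index range scan lands on a different block.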

jg

--
@home.com is bogus.
http://www.frieze.com/issue/review/barbara_visser