Re: Estimating effectivity of performance improvement measures
From: Palooka <nobody_at_nowhere.com>
Date: Wed, 26 Nov 2008 19:32:30 +0000
Message-ID: <hvhXk.8$Jx2.1@newsfe19.ams2>
joel garry wrote:
> On Nov 26, 2:45 am, Robert Klemme <shortcut..._at_googlemail.com> wrote:
>> Today I have another question regarding estimating effectivity of
>> changes.
>
>> This is even more the case as part of these tables involve historic
>> data, i.e. multiple values with different time ranges of validity,
>> which makes a certain amount of data dead - from _this_ query's point
>> of view. So the way how this "dead wood" is distributed across
>> objects may have a significant impact (for example when using
>> timestamps as leading columns for an index vs. trailing columns or
>> having them in the table only).
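A quick illustration of that column-order point (table and column names invented here, not from the original post):

```sql
-- Leading timestamp: efficient when most queries prune by validity range;
-- old "dead wood" entries cluster together in older leaf blocks
CREATE INDEX ix_hist_by_ts ON price_history (valid_from, item_id);

-- Trailing timestamp: efficient when item_id is the primary filter and
-- the validity check only narrows the result; dead entries end up
-- scattered through every item's range
CREATE INDEX ix_hist_by_item ON price_history (item_id, valid_from);
```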
>
> If you are worried about this, have you considered the recycle buffer
> pool?
>
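For the archive: assigning a table to the recycle pool is a one-liner, roughly like this (table name hypothetical, and db_recycle_cache_size has to be sized first or the clause does nothing useful):

```sql
-- Keep rarely revisited historic blocks from crowding the default buffer cache
ALTER TABLE price_history STORAGE (BUFFER_POOL RECYCLE);
```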
> Another answer is to be glad you can say "hey, we need more powerful
> hardware!" :-)
>
>> All this does not become easier through the effects of _other_ DML
>> running in parallel which compete for buffer cache space. For a
>> realistic test 11g's Real Application Testing would come in handy
>> where you can capture a particular load and replay it against the test
>> database. Unfortunately I cannot simulate the load right now because
>> 11g is not installed and is not an option, load is highly client
>> application and traffic dependent, both things outside of my control
>> and creating a load capturing and replay framework would take a
>> significant amount of time.
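For what it's worth, a workload capture with RAT looks roughly like this (capture name and directory object are made up; see the Note below for the backport details):

```sql
-- Capture the production workload into a directory object
BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
    name     => 'peak_load_test',
    dir      => 'CAPTURE_DIR',
    duration => 3600);  -- seconds; NULL runs until FINISH_CAPTURE
END;
/

BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
END;
/
```

The capture files are then preprocessed with DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE on the test system before replay.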
>
> It's my understanding that RAT has been backported.
> http://www.oracle.com/us/corporate/press/015217_EN and metalink Note:
> 560977.1
>
>> Now, what do you think? What other options do I have? Did I overlook
>> something? Thanks in advance!
>
> I think you have a better handle on the issues than most people. It's
> difficult to accept that the problem may have so many variables that
> reactive empirical testing is the only way to go. Jonathan Lewis has
> published some on attempting to figure out if the data layout would
> benefit from reorganization, sometimes the data quickly goes back to
> its old shape. I have at least one table/report combination that
> runs noticeably slower (7 hours v. 5 hours) on newly added data
> apparently because of the strangely skewed distribution on entry.
>
You have reports which take 5-7 hours? Reorganising tables and rebuilding indexes is not going to help much.
Redesign, introduce some redundancy through mviews, set up a data mart or something.
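Something along these lines, sketched with invented names, to take the heavy aggregation out of a 5-7 hour report:

```sql
-- Precompute the report's aggregate once, refresh it off-peak
CREATE MATERIALIZED VIEW mv_report_totals
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
SELECT item_id,
       TRUNC(sale_ts) AS sale_day,
       SUM(amount)    AS total_amount
FROM   sales
GROUP  BY item_id, TRUNC(sale_ts);
```

With query rewrite enabled the report SQL need not even change to pick up the precomputed results.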
Palooka

Received on Wed Nov 26 2008 - 13:32:30 CST