
Home -> Community -> Usenet -> c.d.o.server -> Re: Proving or Debunking the need for rebuilding

Re: Proving or Debunking the need for rebuilding

From: Richard Foote <richard.foote_at_bigpond.nospam.com>
Date: Wed, 15 Nov 2006 07:42:37 GMT
Message-ID: <Nxz6h.66454$rP1.12825@news-server.bigpond.net.au>


<hasta_l3_at_hotmail.com> wrote in message news:1163528916.396146.307250_at_m7g2000cwm.googlegroups.com...
> hpuxrac wrote :
>
>> hasta_l3_at_hotmail.com wrote:
>> > >
>> > > What conclusions did you reach based on your approach?
>> >
>> > On my system - and I stress : on my system
>> >
>
>> What selects saw a 30% improvement? Was that in response time?
>> Any OLTP application that is retrieving 4000 rows repeatedly is going to
>> vary in response time anyway based on transient conditions, at least in
>> my experience. Again the phrase "An occasional" makes me wonder if
>> this is based on test cases or what exactly.
>
> Yes, the test case is well specified. It happens that we open lists,
> and the underlying select goes through a huge index but answers
> only 4000 rows or so...
>
> The time to open a list - to execute the test plan - is "occasionally"
> important, probably when the cache has to be filled. We are here
> talking of times on the order of 20 secs. When the test is retried,
> response time is of course dramatically better.
>
> Now, the 30% was measured between an "occasional" slowdown
> before rebuild, and an "occasional" slowdown after the rebuild.
>
> I realize that the conditions vary wildly. I'm reporting the notes I took.

This is all rather vague and ambiguous, isn't it? I have no idea what "occasionally important" is supposed to mean, for example. I refer back to my earlier post where I suggested it's rather difficult to attribute these figures (whatever they actually mean) to rebuilding an index.

You mention that "the underlying select goes through a huge index but answers only 4000 rows or so" and that "we are talking of times on the order of 20 secs". With no SQL, no execution plans, no trace files etc. from which to interpret these comments, 20 secs sounds like an extraordinarily long time to select just 4000 rows. Even with a huge index, a range scan should only visit a small number of index blocks and, at most, the 4000 table blocks. And this takes 20 secs? One can only assume various tables and joins are involved, which again suggests the execution plan could differ between the good and bad execution times.
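For what it's worth, Oracle can report the plan actually used at runtime, which would settle whether the good and bad runs differ. A rough sketch (the table, column and bind names here are placeholders, not taken from the poster's system):

```sql
-- Placeholder statement; substitute the real query.
SELECT /*+ gather_plan_statistics */ *
  FROM app_table
 WHERE app_col BETWEEN :lo AND :hi;

-- Show the actual runtime plan and row-source statistics (10g onwards):
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

-- Or trace the slow execution, including waits and binds,
-- and format the resulting trace file with tkprof:
ALTER SESSION SET events '10046 trace name context forever, level 12';
```

Comparing the DISPLAY_CURSOR output (or tkprof summaries) from a "good" and a "bad" execution would show where the 20 secs actually goes.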

Now this could be attributed to an index rebuild, but then again it might not be. For example, I assume you re-analyze the index after the rebuild? What are the differences in the index statistics before/after the rebuild? When was the index last analyzed prior to the rebuild? There was one classic thread here a while ago where Mike Ault swore an index rebuild had changed the clustering factor of an index, only to realize (eventually, and somewhat painfully) that the rebuild made no difference to the CF; the statistics were simply no longer fully up to date. Is it possible that by simply re-analyzing the index/table, performance improves without the need for the rebuild? I've seen this on various occasions ...
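The before/after comparison is easy enough to record. A sketch along these lines would do it ("APP_IDX" is a placeholder for the real index name):

```sql
-- Optimizer statistics for the index; run before and after the rebuild:
SELECT blevel, leaf_blocks, distinct_keys,
       clustering_factor, last_analyzed
  FROM user_indexes
 WHERE index_name = 'APP_IDX';

-- Refresh the statistics WITHOUT rebuilding, to test whether stale
-- stats alone explain the improvement:
EXEC DBMS_STATS.GATHER_INDEX_STATS(USER, 'APP_IDX');

-- Physical structure details (populates INDEX_STATS; note this
-- locks the table while it runs):
ANALYZE INDEX app_idx VALIDATE STRUCTURE;
SELECT height, lf_rows, lf_blks, del_lf_rows, pct_used
  FROM index_stats;
```

If gathering fresh statistics alone restores the response times, the rebuild itself was never the fix.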

Some information on the SQL, the execution plan (both good and bad, if they differ), some table/index stats and some summary execution statistics (both good and bad) would be useful in determining why you're getting this "occasional" 30% improvement.
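The summary execution statistics are a couple of SQL*Plus settings away; something like this, run for both a good and a bad execution of the same statement:

```sql
-- SQL*Plus settings to capture execution statistics without
-- displaying the 4000-odd rows:
SET AUTOTRACE TRACEONLY STATISTICS
SET TIMING ON
-- Run the statement here, then compare "consistent gets",
-- "physical reads" and elapsed time between the two runs.
```

A bad run with far higher physical reads but the same consistent gets would point at caching, not at the index.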

However, my money is on it having nothing to do with just an index rebuild ...

Cheers

Richard
