
Re: Table / Index Rebuild, less data but slower

From: <Dave.Wotton_at_no-spam.it.camcnty.gov.uk>
Date: 1997/11/29
Message-ID: <65osu2$sqv@dns.camcnty.gov.uk>

>On Fri, 14 Nov 1997 14:12:11 -0600, al.davis_at_ns.sympatico.ca wrote:
>
>Table / Index Rebuild Problem, less data but slower response.
>
>This is a real challenge question. Two 20 million row tables, X and Y,
>were created from two 40 million row tables, A and B. The 40 million row
>tables were dropped and the 20 million row tables were renamed to A and B
>respectively. The one PK index on A was recreated and the 4 indexes on B
>were recreated. The tables and indexes reside on the same file systems
>as before the rebuild, but the application runs 10 times slower. It now
>takes 12 seconds to process a claim as opposed to approx. 1 second.
>
>The X and Y tables were built from the A and B tables using an insert into
>X select * from A where date_vbl > in_date type of strategy.
>
>The parallel query option is in use, but has been turned off with degree
>set to 1. The DB is 7.1.4. Cost based optimization is being used.
>
>The following has been tried: indexes have been validated; indexes have
>been rebuilt (with noparallel); rule-based optimization has been tried
>(response was greater than 40 seconds); tables have been analyzed with
>compute statistics (a 20 second or greater response time resulted, so
>analyze estimate had to be used to return response time to 10 seconds).
>
>The application architecture includes Tuxedo and C code with embedded
>SQL. Because of this being a production system, and due to the volume of
>transactions, tkprof has not been tried. The next attempt will be to try an
>export/import - simply because no other ideas come to mind.
>
>What can we try that has not been tried already? If anyone has the cure
>for this one, I think they should be president!
>
>Regards,
>
>Alexander Davis
>

Hi,

( sorry this response is late - I didn't see the original post, just Keith Boulton's follow-up )

I don't think exp/imp will help. exp/imp provides its benefits by (a) eliminating chaining from your tables, (b) resetting the table's high water mark and (c) eliminating dead space from your indexes. However, you have already achieved all of this by the way you created your tables.
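
For what it's worth, a rebuild along the lines you describe gives the same compaction directly - fresh extents, no chained rows, a low high water mark. A minimal sketch, with purely illustrative table, column and index names:

    create table X as
      select * from A
      where  date_vbl > to_date( '01-JAN-1996', 'DD-MON-YYYY' );

    drop table A;
    rename X to A;

    create unique index A_pk on A ( claim_id );   -- claim_id is made up

So exp/imp would just be repeating work you've already done.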

My guess is that the optimiser is now optimising your queries in a different way because the distribution of values has changed.

As an example ( and this is speculative, because I don't know enough about your application ), suppose one of your original 40 million row tables had an index on "department" and there were 500 different departments. That index would probably have been used by the optimiser because its selectivity is good: each department represents 0.2% of the rows. However, suppose that since Jan 1 1996, only three departments have been adding rows to the table, and that was your cutoff date for creating your new table. Now the index's selectivity is only 33% ( each department represents 1/3rd of the rows in the table ). Now the index probably won't be used. This will make a dramatic difference to your execution path, and the result may be a severe degradation in performance.
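
You can check whether something like this has happened with a couple of quick queries ( "department" is my invented column, of course - substitute your own indexed columns ):

    select department, count(*)
    from   A
    group  by department
    order  by 2 desc;

    -- after an analyze, compare distinct keys to the table's row count
    select index_name, distinct_keys
    from   user_indexes
    where  table_name = 'A';

If distinct_keys has collapsed relative to num_rows in user_tables, the CBO will quite reasonably decide a full scan is cheaper than the index.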

I think you'll probably have to resort to using tkprof to work out what's happening; otherwise you're really working blindfolded.

It's not clear from your post whether you know exactly which transaction is affected. If you do, you could, of course, extract this piece of sql and run it under sqlplus, with 'alter session set sql_trace = true' so that tracing is only switched on for you, not for everyone. You won't need to switch on timed_statistics: you should be able to see from the counts in the execution plan what the problem is, without needing the timings. Since 'alter session' is an sql statement, you could instead embed this in a test version of your application code, rather than extract the sql into sqlplus.
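
In outline ( the trace file name below is only an example - look in your user_dump_dest directory for the real one ):

    alter session set sql_trace = true;

    -- run the suspect claim-processing statement(s) here

    alter session set sql_trace = false;

then, from the OS prompt:

    tkprof ora_1234.trc claim.prf sys=no

and read the execution plans and counts in claim.prf.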

Although my production systems are much smaller than yours, we have sql_trace = true and timed_statistics = true at a database level all the time. No one noticed when we switched it on, so its overhead doesn't appear to be too high. The benefits we get from being able to monitor what's going on far outweigh the imperceptible decrease in response time. Using the statistics from tkprof, I can see that 85% of our sql statements still take < 0.0005 secs to execute and 99.5% take < 0.002 secs. ( But then, I don't know how much faster they were with sql_trace switched off. )
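
That is just two lines in our init.ora; both parameters can also be set per session if an instance-wide setting worries you:

    # init.ora
    sql_trace        = true
    timed_statistics = true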

Incidentally, we still use the rule based optimiser. Your transaction ran much slower under the RBO simply because the sql had not been structured correctly to exploit it. The CBO does not make *well-tuned* sql run any faster: all it does is avoid programmers needing to know the arcane rules by which the RBO decides its execution plan. The drawback of the CBO is what you have discovered - since you have handed over the responsibility for optimising the sql to the optimiser, you have to trust it to do a good job. And sometimes it makes mistakes. You can always provide it with hints, but then you're back to needing an in-depth knowledge of sql optimisation. The bottom line is that, whenever you make major changes to the data in your system, either adding more or deleting some, you're likely to need to look again at the optimisation of your key queries.
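
For example, if you know which index the CBO ought to be using, a hint will force the issue ( the index and column names here are invented ):

    select /*+ index( b b_dept_idx ) */ count(*)
    from   B b
    where  b.department = 'ACCOUNTS';

But every hint you add is another piece of tuning you'll have to revisit the next time the data changes shape.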

HTH, Dave.

--
Remove the no-spam bit from my email address to reply.
Received on Sat Nov 29 1997 - 00:00:00 CST
