Oracle FAQ Your Portal to the Oracle Knowledge Grid
Home -> Community -> Usenet -> c.d.o.server -> Re: Perf Tuning : Analyze table BACKFIRED!!

Re: Perf Tuning : Analyze table BACKFIRED!!

From: Niall Litchfield <n-litchfield_at_audit-commission.gov.uk>
Date: Mon, 18 Aug 2003 14:35:01 +0100
Message-ID: <3f40d606$0$3135$ed9e5944@reading.news.pipex.net>


"JK_Yahoo" <JKMalpani_at_Yahoo.com> wrote in message news:7a58b2f6.0308172028.7e2960b3_at_posting.google.com...
> A longish chain of events, but I'm a little lost on what went wrong...
> Thanks!!
>
> Problem statement :
> * Production Oracle 8.1.7.4 DB of 90+ GB in size, 300+ tables
> * Overall performance is "survivable" to users, current DB health is
> OK, can be much better.
> * On closer review from a tuning perspective, Oracle Statspack shows
> lots of queries across various tables that have suitable existing
> indices but are not utilizing them.
> * Buffer cache hit ratio was around 90%, but routine select
> queries were still using full table scans
> * Users were NOT complaining
> * A monitored program had a throughput of 300 units per hour
> during the business day
>
> * ACTION 1 : Suggested some table-specific indexes and analyzed
> selected tables,
> * monitored program throughput increased to 1000 units per hour
> during the business day
> * Still many other queries continued to have problems, hence thought
> all tables may need analyzing
>
> * ACTION 2 : for all 300 tables
> * "ANALYZE TABLE table_name ESTIMATE STATISTICS SAMPLE 5 PERCENT;"
> * oops..
> * A few hours into the night and weekend, despite very little user
> and transaction activity on the DB, the monitored program throughput came
> down to 500 units per hour.
> * Data buffer cache hit ratio dropped from a steady 90% to a
> volatile range of 15-60%.
> * Overall DB slowed down a LOT
>
> * ACTION 3 : for all 60 medium-to-heavy transaction tables ( rows
> > 1000 ) executed
> * "ANALYZE TABLE table_name ESTIMATE STATISTICS FOR ALL INDEXED
> COLUMNS SAMPLE 5000 ROWS;"
> * Some relief, but still open concerns
> * Many queries that had regressed after Action 2 were doing much
> better, as expected.
> * But buffer cache hit ratio still remained volatile between 20-70%,
> below the earlier 90%
> * Even after ACTION 3, some queries on those tables were still
> doing full table scans
> * I fear that come Monday, under full business load, things are
> going to get worse.
>
> What went wrong? In the past I have seen performance improvements in
> other customer databases with actions similar to the above, but it
> BACKFIRED on me this time. Any clues / suggestions / analysis are
> appreciated.

What went wrong? Well, Action 1 seems to have been documented and thought out: you started with a known slow process and some suggested indexes, carried out the action, updated stats, and then measured performance, which improved (3x is a good figure to quote to management). Action 2 seems to have been carried out on the basis that it seemed like a good idea at the time, and Action 3 was a similar attempt to sort out Action 2. So the big lesson is: don't make changes at random and without testing.

Where to go from here?
Well, if you had *never* previously collected stats, then you can return to square one by deleting the stats. Otherwise you should:
1. Gather decent stats - I'd suggest dbms_stats.gather_schema_stats with a sample size, as Ryan suggests, of at least 20%. If you have skewed data, consider gathering column histograms as well.
2. Monitor which queries take a long time and look at optimizing those queries first.
3. Ignore ratios and FTS per se and concentrate on what takes the time.
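The delete-and-regather route can be sketched as follows. This is only a sketch: SCOTT is a placeholder schema name, and the 20% sample and histogram bucket count are illustrative assumptions, not measured recommendations.

```sql
-- Return to square one: if stats had never been gathered before,
-- deleting them puts the optimizer back where it started.
-- SCOTT is a placeholder schema name.
BEGIN
  DBMS_STATS.DELETE_SCHEMA_STATS(ownname => 'SCOTT');
END;
/

-- Gather decent stats: a 20% sample, histograms on indexed columns
-- to cope with skewed data, and index stats gathered too (cascade).
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'SCOTT',
    estimate_percent => 20,
    method_opt       => 'FOR ALL INDEXED COLUMNS SIZE 75',
    cascade          => TRUE);
END;
/
```

Unlike ANALYZE, dbms_stats gathers exactly the statistics the cost-based optimizer uses, which is why it is the preferred route even on 8i.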

-- 
Niall Litchfield
Oracle DBA
Audit Commission UK
Received on Mon Aug 18 2003 - 08:35:01 CDT
