Re: Slow performance... urgent help required

From: Saad Khan <saad4u_at_gmail.com>
Date: Fri, 5 Feb 2010 11:27:21 -0500
Message-ID: <76b3d4e31002050827x36c750easd98b9c487b1b716d_at_mail.gmail.com>



Very interestingly, rownum < 2 brings the data back in a nanosecond, while the other query still hits a big pause despite stats generation, an index rebuild, and a database recycle.
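
(For anyone who wants to reproduce the comparison: this is roughly how I'm measuring it from SQL*Plus -- BIGTAB is just a stand-in for the real table name:)

  set autotrace traceonly statistics

  select * from BIGTAB where rownum < 2;   -- comes back instantly
  select * from BIGTAB where rownum = 1;   -- compare "consistent gets" / "physical reads"

  set autotrace off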

And that gives a big boost to the app support personnel: "see, it's all the database... let's go home while the DBA resolves everything..." :(

On Thu, Feb 4, 2010 at 10:51 PM, Mark W. Farnham <mwf_at_rsiz.com> wrote:

> If I recall correctly, rownum=1 should be doing a full table scan with a
> stopkey. (I always use rownum<2, i.e. the number of rows I want plus one, but
> that is an older story than you want to hear.)
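>
> A quick way to confirm the plan (the table name here is just a placeholder) is
> something like:
>
> explain plan for select * from BIGTAB where rownum < 2;
> select * from table(dbms_xplan.display);
>
> and look for COUNT STOPKEY sitting above the TABLE ACCESS FULL step.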
>
>
>
> If that is the plan and you’re still getting a huge number of blocks read,
> then that is either “empty front” or some sort of corruption. Empty front
> occurs when you delete lots of rows, emptying out the blocks at the front of a
> table, without inserting more rows. Then when you do a FTS, you read the empty
> blocks below the high water mark until you get to a row that is still there.
> The space may be reused in the future, but until it is, reading even a single
> row is expensive.
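>
> One rough way to check for that (owner and table name are placeholders, and
> this assumes the stats are reasonably fresh) is to compare the blocks below
> the high water mark with the blocks that actually hold rows:
>
> select blocks, empty_blocks, num_rows
>   from dba_tables
>  where owner = 'APPOWNER' and table_name = 'BIGTAB';
>
> -- blocks that currently contain at least one row (this itself does a full scan):
> select count(distinct dbms_rowid.rowid_relative_fno(rowid) || '.' ||
>               dbms_rowid.rowid_block_number(rowid)) as used_blocks
>   from APPOWNER.BIGTAB;
>
> A huge gap between the two is the empty-front pattern.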
>
>
>
> If that is not the plan you’re getting, then I don’t know what is going on.
>
>
>
> If that is the plan you’re getting and it is reading lots of blocks to get
> one row, then **probably** alter table move is your best way out of it,
> although it might be faster to use the sqlplus copy command and read the whole
> table through an index (which you might have to hint to get), because that
> will skip the empty blocks at the beginning.
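>
> Very roughly, the move route looks like this (names are placeholders; note the
> move leaves every index on the table UNUSABLE until it is rebuilt, and the
> stats should be regathered afterwards):
>
> alter table APPOWNER.BIGTAB move;
>
> alter index APPOWNER.BIGTAB_PK rebuild;
> -- ...and likewise for each remaining index on the table
>
> exec dbms_stats.gather_table_stats('APPOWNER', 'BIGTAB', cascade => true)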
>
>
>
> Good luck,
>
>
>
> mwf
>
>
>
>
> ------------------------------
>
> *From:* oracle-l-bounce_at_freelists.org [mailto:
> oracle-l-bounce_at_freelists.org] *On Behalf Of *Saad Khan
> *Sent:* Thursday, February 04, 2010 5:28 PM
> *To:* Guillermo Alan Bort
> *Cc:* andrew.kerber_at_gmail.com; oracle-l_at_freelists.org
>
> *Subject:* Re: Slow performance... urgent help required
>
>
>
> optimizer_features_enable is set to 9.2.0
>
>
> Why would it be an optimizer issue when a small query with rownum=1 is
> taking a long time?
>
> Actually, we are planning to upgrade it to 10g on the 20th, but this situation
> needs to be resolved URGENTLY.
>
> We can open an SR with Oracle if it gets out of hand.
>
> On Thu, Feb 4, 2010 at 4:29 PM, Guillermo Alan Bort <cicciuxdba_at_gmail.com>
> wrote:
>
> Just a hunch, what is optimizer_features_enable set to? Have you changed
> anything lately? Is optimizer_mode set to CHOOSE or RULE? This looks like an
> optimizer issue. Have you considered upgrading to 9.2.0.8.0 and opening an SR
> with Oracle?
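>
> (Something like this from SQL*Plus will show the current settings; just a
> sketch:)
>
> show parameter optimizer_features_enable
> show parameter optimizer_mode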
>
> Alan.-
>
> On Thu, Feb 4, 2010 at 7:07 PM, Andrew Kerber <andrew.kerber_at_gmail.com>
> wrote:
>
> It is unlikely to be fragmentation, but the quickest way to defrag a table
> is to run an alter table move, then rebuild the indexes. Did you check your
> alert log for any error messages?
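>
> (If you do go the alter table move route, the indexes are left UNUSABLE, so
> something like this -- owner and table name are placeholders -- lists what has
> to be rebuilt afterwards:)
>
> select index_name, status
>   from dba_indexes
>  where owner = 'APPOWNER' and table_name = 'BIGTAB';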
>
>
>
> On Thu, Feb 4, 2010 at 2:58 PM, Saad Khan <saad4u_at_gmail.com> wrote:
>
> Hi Gurus,
>
> I need urgent help.
>
> This is Oracle 9.2.0.7 on AIX 5.3.
>
> The application support people and the business had been complaining of
> extremely slow performance for the last three days. They could not run anything.
> I decided to run an index rebuild, and magically they started cheering that
> things were coming up.
> But today things went down again. I checked the CPU on the server, which
> was as high as 99%. I asked them to stop the app. They did, and the CPU
> started coming down to where it should be.
> However, the problem reappeared a while later, with the CPU around
> 70-80% while only a few queries were running.
> Just to check, I ran the following query against a table having as many as
> 34 million rows:
>
> select * from <tablename> where rownum=1;
>
> and it took around 2 minutes to come up with the result. WTH! It should return
> the result in less than a second, just like it does in all other similar
> environments. Its execution plan shows a cost of 51,000. When I traced it, the
> trace file clearly shows that even though it is only fetching a single row, it
> is essentially performing a full table scan (456,000 disk reads). The cost of
> this query should be 1 or 2 at most.
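>
> (For reference, this is roughly how I generated the trace -- a plain sql_trace
> (event 10046) processed with tkprof; the table and file names here are only
> examples:)
>
> alter session set timed_statistics = true;
> alter session set sql_trace = true;
> select * from BIGTAB where rownum = 1;
> alter session set sql_trace = false;
>
> -- then, on the server, against the trace file in user_dump_dest:
> -- tkprof bigdb_ora_12345.trc rownum_test.prf sys=no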
>
> The latest statspack report shows the following database ratios:
>
> Instance Efficiency Percentages (Target 100%)
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>             Buffer Nowait %:  100.00       Redo NoWait %:  100.00
>                Buffer Hit %:   99.36    In-memory Sort %:  100.00
>               Library Hit %:   97.99        Soft Parse %:   92.72
>          Execute to Parse %:   37.54         Latch Hit %:   99.94
> Parse CPU to Parse Elapsd %:   10.51     % Non-Parse CPU:   89.50
>
> Shared Pool Statistics        Begin    End
>                               ------   ------
>              Memory Usage %:   94.97    94.74
>     % SQL with executions>1:    3.79     3.95
>   % Memory for SQL w/exec>1:   13.20    13.73
>
> Top 5 Timed Events
> ~~~~~~~~~~~~~~~~~~                                            % Total
> Event                                   Waits     Time (s)   Ela Time
> --------------------------------- ----------- ------------ ----------
> latch free                              80,839        1,286      31.82
> db file sequential read                402,316          906      22.43
> enqueue                                    286          835      20.66
> db file scattered read                 267,029          358       8.86
> CPU time                                                 323       7.98
> -------------------------------------------------------------
>
> Something is wrong with those tables. Either they are too fragmented, or I
> don't know what. Can a database recycle help? What is the best way to get
> those tables defragmented if they are? I need help urgently, please.
>
> Thanks in advance.
>
> Sid
>
>
>
>
>
> --
> Andrew W. Kerber
>
> 'If at first you don't succeed, don't take up skydiving.'
>
>
>
>
>

--
http://www.freelists.org/webpage/oracle-l
