
RE: Question re statistics

From: <krish.hariharan_at_quasardb.com>
Date: Thu, 29 Nov 2007 21:35:27 -0700
Message-ID: <008601c8330a$6fca5640$6401a8c0@BHAIRAVIPC01>


After sending this email I was following the thread on "CBO Bug - Tables with 0 rows". That thread may have some relevance to your situation, but the fundamentals outlined below still apply for deterministic problem resolution.

Regards
-Krish

-----Original Message-----

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of krish.hariharan_at_quasardb.com
Sent: Thursday, November 29, 2007 6:34 PM
To: wjwagman_at_ucdavis.edu; oracle-l_at_freelists.org
Subject: RE: Question re statistics

Bill,

Having seen a few of these, I would first try to quantify three things:

1. What "slow" means
2. What "performance degrades" means
3. What "performance stays up to their standards" means

The place to start is to establish a baseline for the specific operation and to define an exit criterion so that you know when you are done.

For cases such as these I have used event 10046 (or the equivalent package calls) to capture the wait events and the execution plan for the specific operation under both circumstances (slow and "fast"), and then determined how to influence the system (the optimizer, for example) to stabilize the execution.
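As a rough sketch (the SID/serial# and trace file name below are placeholders, not values from your system), enabling such a trace looks like this:

-- Trace the current session; level 12 includes wait events and bind values
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- ... run the slow operation here ...
ALTER SESSION SET EVENTS '10046 trace name context off';

-- Or trace another session (10g) via DBMS_MONITOR
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 45678, waits => TRUE, binds => TRUE);
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 45678);

The resulting trace file (in user_dump_dest) can then be summarized with tkprof, e.g. "tkprof mydb_ora_12345.trc slow_op.txt sys=no sort=exeela", and the wait events and plans compared between the slow and "fast" runs.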

Regards
-Krish

-----Original Message-----

From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org] On Behalf Of William Wagman
Sent: Thursday, November 29, 2007 6:12 PM
To: oracle-l_at_freelists.org
Subject: Question re statistics

Greetings,

I have come up against this situation before and have never been quite sure how to handle it. We are running 10gR2 on RHEL 4 EE. In a home-grown application there is a series of tables in which records are constantly inserted or deleted, so the number of records in each table is constantly changing. These are not large tables; the row count varies between 0 and a maximum of about 5000 at any one time. What they are seeing is that as updates to a table proceed, performance degrades, allegedly because the statistics become old and unhelpful. So what they have done is create a job which runs every few minutes and analyzes these tables so that the statistics stay fresh and performance stays up to their standards. Here is a description of one of the tables in question:

SQL> desc org.requests

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 REQUESTOID                                NOT NULL RAW(16)
 REQUESTGROUPOID                           NOT NULL RAW(16)
 REQUEST_TYPE                              NOT NULL CHAR(1)
 REQUEST_PERSONOID                         NOT NULL RAW(16)
 OBJECT_TABLE_CODE                                  CHAR(2)
 ORIGINAL_OID                                       RAW(16)
 PENDING_OID                                        RAW(16)
 REQUEST_ACTION                                     CHAR(1)
 REQUEST_DATE                                       DATE
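The analyze job they run every few minutes is, as far as I can tell, essentially a gather-stats call of this form (a sketch, not their actual code; the owner and table name are taken from the example above):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'ORG',
    tabname          => 'REQUESTS',
    estimate_percent => 100,    -- table is small, so compute rather than sample
    cascade          => TRUE);  -- gather index statistics as well
END;
/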

I haven't yet traced a session. I've looked at AWR reports but am not seeing anything helpful there yet.

So, my question, and I realize it is an open-ended one: can someone offer suggestions for resolving this issue and things to look at to track down the problem? It strikes me as very inefficient, almost silly, to analyze these tables every 5 minutes, but I don't know how else to approach a solution.
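One thing I plan to look at (a sketch; the owner and table name are from the example above, and STALE_STATS assumes table monitoring is enabled, which is the 10g default) is what the dictionary says about the statistics between runs of their job:

SELECT table_name, num_rows, last_analyzed, stale_stats
FROM   dba_tab_statistics
WHERE  owner = 'ORG'
AND    table_name = 'REQUESTS';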

Thanks.

Bill Wagman
Univ. of California at Davis
IET Campus Data Center
wjwagman_at_ucdavis.edu
(530) 754-6208
--

http://www.freelists.org/webpage/oracle-l

Received on Thu Nov 29 2007 - 22:35:27 CST
