
Re: How long should statspack.snap take to run?

From: Charles Hooper <hooperc2000_at_yahoo.com>
Date: 14 Jul 2006 12:47:27 -0700
Message-ID: <1152906447.148364.273000@p79g2000cwp.googlegroups.com>


jon.fife_at_gmail.com wrote:

> Hi all,
>
> I've been tossed into a vacant DBA role and I'm trying to pick up the
> pieces the former DBA left in his wake. It doesn't seem that he had any
> sort of history for performance baselines, so I've been playing around
> with statspack. When I do the statspack.snap it takes around 15 minutes
> to complete and I was just wondering if that was outrageous or typical.
> It is a 9.2.0.4 server with roughly 150GB of data and 150 concurrent
> users during the day doing OLTP. I've done some googling to see if the
> time spent was sane, but didn't find much there. It seems some people
> set up statspack to run every 30 minutes or so, which makes 15 minutes
> seem high...
>
> Another thing to note: when I first did the snap I was getting the
> following:
> ORA-00001: unique constraint (PERFSTAT.STATS$SQL_SUMMARY_PK) violated
>
> I found a fix for that in Metalink (Note:267244.1) that had me disable
> the primary key constraint on PERFSTAT.STATS$SQL_SUMMARY. Since that
> invalidated the STATS$SQL_SUMMARY_PK index I created a new index named
> $STATS$SQL_SUMMARY_PK on STATS$SQL_SUMMARY(SNAP_ID, DBID,
> INSTANCE_NUMBER, HASH_VALUE, TEXT_SUBSET) and did a compute statistics
> on it. I don't know if that would have a huge impact on the execution
> time, since there are only 3700 rows in STATS$SQL_SUMMARY.
>
> Thanks for any advice,
>
> Jon

Personal opinion, having read several performance tuning books, is that Statspack reports are of limited value, since they do not allow you to focus on a specific problem. The reports either cover too long a duration, which dilutes the evidence of real performance problems, or cover so short a duration that they miss the problem when it occurs.

Instead, pick a high-value target: activity A would cost my company N fewer dollars per month if it completed X seconds earlier. Once you have that information, perform a 10046 trace at level 8 (wait events) or level 12 (wait events plus bind variable values) against the application with the greatest potential benefit to the company. Examine the waits carefully - they will provide an indication of what can be adjusted in your system to improve performance.
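
For reference, a minimal sketch of switching the 10046 trace on and off - the SID and SERIAL# values passed to DBMS_SYSTEM.SET_EV are placeholders you would look up in V$SESSION, and that approach requires EXECUTE privilege on DBMS_SYSTEM:

-- Current session; level 8 = wait events, level 12 = waits plus bind values:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- ... run the activity of interest, then switch the trace off:
ALTER SESSION SET EVENTS '10046 trace name context off';

-- Another session, identified by SID and SERIAL# from V$SESSION
-- (142 and 3218 are placeholder values):
EXECUTE DBMS_SYSTEM.SET_EV(142, 3218, 10046, 12, '')
-- Level 0 turns the event back off:
EXECUTE DBMS_SYSTEM.SET_EV(142, 3218, 10046, 0, '')

The resulting trace file is written to the directory named by the USER_DUMP_DEST parameter and can be summarized with tkprof.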

If you feel the need to look at full system performance first, try the following to obtain a rough idea of what is happening in the system:

CREATE TABLE
  SYSTEM_LOG
AS
SELECT
  0 SID,
  TO_DATE(TO_CHAR(SYSDATE,'YYYYMMDD HH24:MI'),'YYYYMMDD HH24:MI') TIME_STAMP,
  EVENT,
  TOTAL_WAITS,
  TOTAL_TIMEOUTS,
  TIME_WAITED,
  AVERAGE_WAIT
FROM
  V$SYSTEM_EVENT
WHERE
  EVENT IN (
'buffer busy waits',
'db file sequential read',
'db file scattered read',
'direct path read',
'direct path write',
'enqueue',
'free buffer waits',
'latch free',
'log file parallel write',
'log file sync')
UNION ALL
SELECT
  SID,
  TO_DATE(TO_CHAR(SYSDATE,'YYYYMMDD HH24:MI'),'YYYYMMDD HH24:MI') TIME_STAMP,
  EVENT,
  TOTAL_WAITS,
  TOTAL_TIMEOUTS,
  TIME_WAITED,
  AVERAGE_WAIT
FROM
  V$SESSION_EVENT
WHERE
  EVENT IN (
'buffer busy waits',
'db file sequential read',
'db file scattered read',
'direct path read',
'direct path write',
'enqueue',
'free buffer waits',
'latch free',
'log file parallel write',
'log file sync');

COMMIT;

The above creates a logging table with baseline values for several key wait events. When performance in the system seems slower than expected, execute the following:

INSERT INTO
  SYSTEM_LOG
SELECT
  0 SID,
  TO_DATE(TO_CHAR(SYSDATE,'YYYYMMDD HH24:MI'),'YYYYMMDD HH24:MI') TIME_STAMP,
  EVENT,
  TOTAL_WAITS,
  TOTAL_TIMEOUTS,
  TIME_WAITED,
  AVERAGE_WAIT
FROM
  V$SYSTEM_EVENT
WHERE
  EVENT IN (
'buffer busy waits',
'db file sequential read',
'db file scattered read',
'direct path read',
'direct path write',
'enqueue',
'free buffer waits',
'latch free',
'log file parallel write',
'log file sync')
UNION ALL
SELECT
  SID,
  TO_DATE(TO_CHAR(SYSDATE,'YYYYMMDD HH24:MI'),'YYYYMMDD HH24:MI') TIME_STAMP,
  EVENT,
  TOTAL_WAITS,
  TOTAL_TIMEOUTS,
  TIME_WAITED,
  AVERAGE_WAIT
FROM
  V$SESSION_EVENT
WHERE
  EVENT IN (
'buffer busy waits',
'db file sequential read',
'db file scattered read',
'direct path read',
'direct path write',
'enqueue',
'free buffer waits',
'latch free',
'log file parallel write',
'log file sync');

Then run this INSERT statement again once a minute for five to ten minutes.
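
If you would rather not run the INSERT by hand every minute, here is a rough sketch of automating the capture with DBMS_JOB - it assumes the INSERT above has been wrapped in a stored procedure named CAPTURE_SYSTEM_LOG (a hypothetical name) and that JOB_QUEUE_PROCESSES is set to a non-zero value:

VARIABLE jobno NUMBER
BEGIN
  DBMS_JOB.SUBMIT(
    job       => :jobno,
    what      => 'CAPTURE_SYSTEM_LOG;',
    next_date => SYSDATE,
    interval  => 'SYSDATE + (1/24/60)');  -- requeue roughly once a minute
  COMMIT;
END;
/

-- Drop the job when the sampling window is over:
-- EXECUTE DBMS_JOB.REMOVE(:jobno)

For a five to ten minute window the manual approach works just as well, but the job keeps the interval between samples consistent.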

Making sense of the captured data:

SELECT
  SL2.SID,
  SL2.TIME_STAMP,
  SL2.EVENT,
  SL2.TOTAL_WAITS-NVL(SL1.TOTAL_WAITS,0) WAITS,
  SL2.TOTAL_TIMEOUTS-NVL(SL1.TOTAL_TIMEOUTS,0) TIME_OUTS,
  SL2.TIME_WAITED-NVL(SL1.TIME_WAITED,0) TIME_WAITED,
  SL2.AVERAGE_WAIT
FROM
  SYSTEM_LOG SL1,
  SYSTEM_LOG SL2
WHERE
  SL2.SID=SL1.SID(+)
  AND SL2.TIME_STAMP=SL1.TIME_STAMP(+) + (1/24/60)
  AND SL2.EVENT=SL1.EVENT(+)
ORDER BY
  SL2.SID,
  SL2.TIME_STAMP,
  SL2.EVENT;

For example, if the delta value of time waited for db file scattered read where SID=0 seems consistently high, that is an indication that full table scans are likely contributing to the system performance problems. You can then look at the same value for the individual sessions to determine which session is contributing the most to the problem. From there, you can enable a 10046 trace for that session to determine the SQL statements it executes, the wait events that result from each statement, and the file and block number (or other information supplied with other types of waits) involved in each wait, which can be used to determine the table that is the greatest contributor. That may lead you to look at the statistics for the table and its indexes, where you may find that they are stale - the statistics show the table has 10 rows with an average size of 100 bytes per row, when there are actually 1,000,000 rows in the table.
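
As an illustration of that last step, a hedged sketch - the &FILE_NO and &BLOCK_NO substitution variables stand for the P1 (file#) and P2 (block#) values reported with a db file scattered read wait in the trace file, and the owner and table names in the DBMS_STATS call are placeholders:

-- Map the file# / block# from the wait back to a segment:
SELECT
  OWNER,
  SEGMENT_NAME,
  SEGMENT_TYPE
FROM
  DBA_EXTENTS
WHERE
  FILE_ID = &FILE_NO
  AND &BLOCK_NO BETWEEN BLOCK_ID AND BLOCK_ID + BLOCKS - 1;

-- If the statistics on that table prove stale, refresh them:
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'BIG_TABLE', cascade => TRUE)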

Charles Hooper
PC Support Specialist
K&M Machine-Fabricating, Inc.
