------------------------------------------------------------------------- Oracle9i Server Release 9.2 Production ------------------------------------------------------------------------- Copyright (C) 1993, 2002, Oracle Corporation. All rights reserved. Author: Connie Dialeris Green Contributors: Cecilia Gervasio, Graham Wood, Russell Green, Patrick Tearle, Harald Eri, Stefan Pommerenk, Vladimir Barriere Please refer to the Oracle9i server README file in the rdbms doc directory, for copyright, disclosure, restrictions, warrant, trademark, disclaimer, and licensing information. On Unix systems, the file is README.doc, and on Windows systems the file is README_RDBMS.HTM. Oracle Corporation, 500 Oracle Parkway, Redwood City, CA 94065. ------------------------------------------------------------------------- Statistics Package (STATSPACK) README (spdoc.txt) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ TABLE OF CONTENTS ----------------- 0. Introduction and Terminology 1. Oracle Enterprise Manager (EM) and Statspack 2. Statspack Configuration 2.1. Database Space Requirements 2.2. Installing the Tool 2.3. Errors during Installation 3. Gathering data - taking a snapshot 3.1. Automating Statistics Gathering 3.2. Using dbms_job 4. Running the Performance reports 4.1. Running the instance report 4.2. Running the instance report when there are multiple instances 4.3. Running the SQL report 4.4. Gathering optimizer statistics on the PERFSTAT schema 5. Configuring the amount of data captured 5.1. Snapshot Level 5.2. Snapshot SQL thresholds 5.3. Changing the default values for Snapshot Level and SQL Thresholds 5.4. Snapshot Levels - details 5.5. Specifying a Session Id 5.6. Input Parameters for the SNAP and MODIFY_STATSPACK_PARAMETERS procedures 6. Time Units used for Performance Statistics 7. Event Timings 8. Managing and Sharing performance data 8.1. Sharing data via export 8.2. Purging/removing unnecessary data 8.3. Removing all data 9. New and Changed Features 9.1. Changes between 9.0 and 9.2 9.2. Changes between 8.1.7 and 9.0 9.3. Changes between 8.1.6 and 8.1.7 10. Compatibility and Upgrading from previous releases 10.1. Compatibility Matrix 10.1.1. Using Statspack shipped with 9.2 10.1.2. Using Statspack shipped with 9.0 10.1.3. Using Statspack shipped with 8.1.7 on 9i releases 10.2. Upgrading an existing Statspack schema to a newer release 10.2.1. Upgrading the Statspack schema from 9.0 to 9.2 10.2.2. Upgrading the Statspack schema from 8.1.7 to 9.0 10.2.3. Upgrading the Statspack schema from 8.1.6 to 8.1.7 10.2.4. Upgrading the Statspack schema from 8.1.6 to 9.2 10.2.5. Upgrading the Statspack schema from 8.1.6 to 9.0 10.2.6. Upgrading the Statspack schema from 8.1.7 to 9.2 11. Oracle Real Application Clusters specific considerations 11.1. Changing Instance Numbers 11.2. Cluster Specific Report 11.3. Cluster Specific Data 12. Conflicts and differences compared to UTLBSTAT/UTLESTAT 12.1. Running BSTAT/ESTAT in conjunction to Statspack 12.2. Differences between Statspack and BSTAT/ESTAT 13. Removing the package 14. Supplied Scripts Overview 15. Limitations and Modifications 15.1. Limitations 15.2. Modifications 0. Introduction and Terminology -------------------------------- To effectively perform reactive tuning, it is vital to have an established baseline for later comparison when the system is running poorly. Without a baseline data point, it becomes very difficult to identify what a new problem is attributable to: Has the volume of transactions on the system increased? 
Has the transaction profile or application changed? Has the number of users increased?

Statspack fundamentally differs from the well-known UTLBSTAT/UTLESTAT tuning scripts by collecting more information, and also by storing the performance statistics permanently in Oracle tables, which can later be used for reporting and analysis. The data collected can be analyzed using the report provided, which includes an 'instance health and load' summary page, high resource SQL statements, as well as the traditional wait events and initialization parameters.

Statspack improves on the existing UTLBSTAT/UTLESTAT performance scripts in the following ways:
- Statspack collects more data, including high resource SQL (and the optimizer execution plans for those statements)
- Statspack pre-calculates many ratios useful when performance tuning, such as cache hit ratios, per transaction and per second statistics (many of these ratios must be calculated manually when using BSTAT/ESTAT)
- Permanent tables owned by PERFSTAT store performance statistics; instead of creating/dropping tables each time, data is inserted into the pre-existing tables. This makes historical data comparisons easier
- Statspack separates the data collection from the report generation. Data is collected when a 'snapshot' is taken; viewing the data collected is in the hands of the performance engineer when he/she runs the performance report
- Data collection is easy to automate using either dbms_job or an OS utility

NOTE: The term 'snapshot' is used to denote a set of statistics gathered at a single time, identified by a unique Id which includes the snapshot number (or snap_id). This term should not be confused with Oracle's Snapshot Replication technology.

How does Statspack work?
Statspack is a set of SQL, PL/SQL and SQL*Plus scripts which allow the collection, automation, storage and viewing of performance data. A user is automatically created by the installation script - this user, PERFSTAT, owns all objects needed by this package. This user is granted limited query-only privileges on the V$views required for performance tuning.

Statspack users will become familiar with the concept of a 'snapshot', the term used to identify a single collection of performance data. Each snapshot taken is identified by a 'snapshot id', which is a unique number generated at the time the snapshot is taken; each time a new collection is taken, a new snap_id is generated. The snap_id, along with the database identifier (dbid) and instance number (instance_number), comprises the unique key for a snapshot (using this unique combination allows storage of multiple instances of a Clustered database in the same tables).

Once snapshots are taken, it is possible to run the performance report. The performance report prompts for the two snapshot Ids the report will process. The report produced calculates the activity on the instance between the two snapshot periods specified, in a similar way to the BSTAT/ESTAT report; to compare, the first snap_id supplied can be considered the equivalent of running BSTAT, and the second snap_id the equivalent of running ESTAT. Unlike BSTAT/ESTAT, which can by its nature only compare two static data points, the report can compare any two snapshots specified.

1. Oracle Enterprise Manager (EM) and Statspack
------------------------------------------------

Statspack allows you to capture Oracle instance-related performance data, and report on this data in a textual format.
Oracle Enterprise Manager Diagnostics Pack offers extended features, including capturing related operating system, middle-tier and application performance data for end-to-end diagnostics. The Oracle Diagnostics Pack can automatically analyze this performance data, display it in a graphical interface, and use alerts to immediately direct you to any performance problems. In addition, you can be alerted automatically via email or page when a problem is detected. Oracle Enterprise Manager also includes an integrated diagnostics methodology that uses guided drilldowns and expert advice to help you quickly resolve performance issues. EM also allows you to store the captured data in a separate performance repository database, and to store the performance data for multiple databases in the same repository.

For more information about Oracle Enterprise Manager or for a trial license, visit the Oracle website (www.oracle.com) or the Oracle Store website (oraclestore.oracle.com).

2. Statspack Configuration
---------------------------

2.1. Database Space Requirements

The amount of database space required by the package will vary considerably based on the frequency of snapshots, the size of the database and instance, and the amount of data collected (which is configurable). It is therefore difficult to provide general storage clauses and space utilization predictions which will be accurate at each site. The default initial and next extent sizes are 100k, 1MB, 3MB or 5MB for all Statspack tables and indexes. To install Statspack, the minimum space requirement is approximately 100MB.

Locally Managed Tablespaces
---------------------------
If you install the package in a locally-managed tablespace, modifying storage clauses is not required, as the storage characteristics are automatically managed.

Dictionary Managed Tablespaces
------------------------------
If you install the package in a dictionary-managed tablespace, Oracle suggests you monitor the space used by the objects created, and adjust the storage clauses of the segments, if required.

2.2. Installing the Tool

Installation scripts create a user called PERFSTAT, which will own all PL/SQL code and database objects created (including the STATSPACK tables, constraints and the STATSPACK package). During the installation you will be prompted for the PERFSTAT user's password and default and temporary tablespaces. The default tablespace will be used to create all Statspack objects (such as tables and indexes). The temporary tablespace will be used for sort-type activities (for more information on temporary tablespaces, see the Oracle9i Concepts Manual).

NOTE:
o A password for the PERFSTAT user is mandatory and there is no default password; if a password is not specified, the installation will abort with an error indicating this is the problem.
o For security reasons, keep PERFSTAT's password confidential.
o Do not specify the SYSTEM tablespace for the PERFSTAT user's DEFAULT or TEMPORARY tablespaces; if SYSTEM is specified, the installation will abort with an error indicating this is the problem. This is enforced because Oracle does not recommend using the SYSTEM tablespace to store statistics data, nor for workareas. Use a TOOLS tablespace to store the data, and your instance's TEMPORARY tablespace for workarea overflows.
o During the installation, the dbms_shared_pool and dbms_job PL/SQL packages are created.
dbms_shared_pool is used to pin the Statspack package in the shared pool; dbms_job is created on the assumption the DBA will want to schedule periodic snapshots automatically using dbms_job. To install the package, either change directory to the ORACLE_HOME rdbms/admin directory, or fully specify the ORACLE_HOME/rdbms/admin directory when calling the installation script, spcreate. To run the installation script, you must use SQL*Plus and connect as a user with SYSDBA privilege. e.g. Start SQL*Plus, then: on Unix: SQL> connect / as sysdba SQL> @?/rdbms/admin/spcreate on NT: SQL> connect / as sysdba SQL> @%ORACLE_HOME%\rdbms\admin\spcreate The spcreate install script runs 3 other scripts - you do not need to run these - these scripts are called automatically: 1. spcusr -> creates the user and grants privileges 2. spctab -> creates the tables 3. spcpkg -> creates the package Check each of the three output files produced (spcusr.lis, spctab.lis, spcpkg.lis) by the installation to ensure no errors were encountered, before continuing on to the next step. Note that there are two ways to install Statspack - interactively (as shown above), or in 'batch' mode; batch mode is useful when you do not wish to be prompted for the PERFSTAT user's default and temporary tablespaces. Batch mode installation ~~~~~~~~~~~~~~~~~~~~~~~ To install in batch mode, you must assign values to the SQL*Plus variables which specify the password and the default and temporary tablespaces before running spcreate. The variables are: perfstat_password -> for the password default_tablespace -> for the default tablespace temporary_tablespace -> for the temporary tablespace e.g. on Unix: SQL> connect / as sysdba SQL> define default_tablespace='tools' SQL> define temporary_tablespace='temp' SQL> define perfstat_password='erg8oiw' SQL> @?/rdbms/admin/spcreate SQL> undefine perfstat_password spcreate will no longer prompt for the above information. 2.3. Errors during installation A common error made during Statspack installation is running the install script from Server Manager (svrmgrl) rather than from SQL*Plus. If you use svrmgrl, the installation will fail. Another possible error during installation is to specify the SYSTEM tablespace for the PERFSTAT user's DEFAULT or TEMPORARY tablespace. In such a situation, the installation will fail, stating the problem. To correctly install Statspack after such errors, first run the de-install script, then the install script. Both scripts must be run from SQL*Plus. e.g. Start SQL*Plus, connect as a user with SYSDBA privilege, then: SQL> @spdrop SQL> @spcreate 3. Gathering data - taking a snapshot -------------------------------------- The simplest interactive way to take a snapshot is to login to SQL*Plus as the PERFSTAT user, and execute the procedure Statspack.snap: e.g. SQL> connect perfstat/perfstat_password SQL> execute statspack.snap; Note: In a Clustered database environment, you must connect to the instance you wish to collect data for. This will store the current values for the performance statistics in the Statspack tables, and can be used as a baseline snapshot for comparison with another snapshot taken at a later time. For better performance analysis, set the initialization parameter timed_statistics to true; this way, Statspack data collected will include important timing information. The timed_statistics parameter is also dynamically changeable using the 'alter system' command. Timing data is important and is usually required by Oracle support to diagnose performance problems. 
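For example, the following is a minimal sketch of enabling timing data and then taking a baseline snapshot interactively; it assumes you have SYSDBA access for the alter system command, and that perfstat_password is replaced with the password chosen at installation:

e.g.
  SQL> connect / as sysdba
  SQL> alter system set timed_statistics = true;
  SQL> connect perfstat/perfstat_password
  SQL> execute statspack.snap;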
Typically, in the situation where you would like to automate the gathering and reporting phases (such as during a benchmark), you may need to know the snap_id of the snapshot just taken. To take a snapshot and display the snap_id, call the statspack.snap function. Below is an example of calling the snap function using an anonymous PL/SQL block in SQL*Plus:

e.g.
  SQL> variable snap number;
  SQL> begin :snap := statspack.snap; end;
    2  /

  PL/SQL procedure successfully completed.

  SQL> print snap

        SNAP
  ----------
          12

3.1. Automating statistics gathering

To be able to make comparisons of performance from one day, week or year to the next, there must be multiple snapshots taken over a period of time. The best method to gather snapshots is to automate the collection on a regular time interval. It is possible to do this:
- within the database, using the Oracle dbms_job procedure to schedule the snapshots
- using Operating System utilities (such as 'cron' on Unix or 'at' on NT) to schedule the snapshot

3.2. Using dbms_job

To use an Oracle-automated method for collecting statistics, you can use dbms_job. A sample script showing how to do this is supplied in spauto.sql, which schedules a snapshot every hour, on the hour. You may wish to schedule snapshots at regular times each day to reflect your system's OLTP and/or batch peak loads. For example, take snapshots at 9am, 10am, 11am, 12 midday and 6pm for the OLTP load, then a snapshot at 12 midnight and another at 6am for the batch window.

In order to use dbms_job to schedule snapshots, the job_queue_processes initialization parameter must be set to greater than 0 for the job to run automatically.

Example of setting the job_queue_processes parameter in an init.ora file:
  # Set to enable the job queue process to start. This allows dbms_job
  # to schedule automatic statistics collection using STATSPACK
  job_queue_processes=1

If using spauto.sql in a Clustered database environment, the spauto.sql script must be run once on each instance in the cluster. Similarly, the job_queue_processes parameter must also be set for each instance.

Changing the interval of statistics collection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To change the interval of statistics collection, use the dbms_job.interval procedure, e.g.
  execute dbms_job.interval(1,'SYSDATE+(1/48)');
where 'SYSDATE+(1/48)' will result in the statistics being gathered every 1/48 of a day (i.e. every half hour).
To force the job to run immediately,
  execute dbms_job.run(<job number>);
To remove the autocollect job,
  execute dbms_job.remove(<job number>);
For more information on dbms_job, see the Supplied Packages Reference Manual.

4. Running the Performance reports
-----------------------------------
Once snapshots are taken, it is possible to generate a performance report. There are two reports:
- spreport.sql (and sprepins.sql) is a general instance health report, covering all aspects of instance performance. The instance report calculates and prints ratios, increases, etc. for all statistics between the two snapshot periods, in a similar way to the BSTAT/ESTAT report.
  Note: spreport.sql calls sprepins.sql, first defaulting the dbid and instance number of the instance you are connected to. For more information on the difference between sprepins and spreport, see section 4.2. Running the instance report when there are multiple instances.
- sprepsql.sql is a report for a specific SQL statement. The SQL report is usually run after examining the high-load SQL sections of the instance health report.
The SQL report provides detailed statistics and data for a single SQL statement (as identified by the Hash Value). Both reports prompt for the beginning snapshot id, the ending snapshot id, and the report name. The SQL report additionally requests the Hash Value for the SQL statement to be reported on. Note: It is not correct to specify begin and end snapshots where the begin snapshot and end snapshot were taken from different instance startups. In other words, the instance must not have been shutdown between the times that the begin and end snapshots were taken. The reason for this requirement is the database's dynamic performance tables which Statspack queries to gather the data are memory resident, hence shutting down the database will reset the values in the performance tables to 0. As Statspack subtracts the begin-snapshot statistics from the end-snapshot statistics, the resulting output will be invalid. If begin and end snapshots which were taken between shutdowns are specified in the report, the report shows an appropriate error to indicate this. Separating the phase of data gathering from producing a report, allows the flexibility of basing a report on any data points selected. For example it may be reasonable for the DBA to use the supplied automation script to automate data collection every hour on the hour; If at some later point a performance issue arose which may be better investigated by looking at a three hour data window rather than an hour's worth of data, the only thing the DBA need do, is specify the required start point and end point when running the report. 4.1. Running the instance report To examine the change in instance-wide statistics between two time periods, the spreport.sql file is executed while being connected to the PERFSTAT user. The spreport.sql command file is located in the rdbms/admin directory of the Oracle Home. This report assumes you are connected to the database you wish to report on. In a clustered database environment, you must connect to the instance you wish to report on when running spreport.sql. To avoid this, see section 4.2. Running the instance report when there are multiple instances. When running spreport, you will be prompted for: 1. The beginning snapshot Id 2. The ending snapshot Id 3. The name of the report text file to be created Note: Blank lines between lines of snapshot Id's means the instance has been restarted (shutdown/startup) between those times - this helps identify which begin and end snapshots can be used together when running a statspack report (ones separated by a blank line can not). e.g. on Unix SQL> connect perfstat/perfstat_password SQL> @?/rdbms/admin/spreport e.g. on NT SQL> connect perfstat/perfstat_password SQL> @%ORACLE_HOME%\rdbms\admin\spreport Example output: SQL> connect perfstat/perfstat_password Connected. 
SQL> @spreport Current Instance ~~~~~~~~~~~~~~~~ DB Id DB Name Inst Num Instance ----------- ------------ -------- ------------ 2618106428 PRD1 1 prd1 Instances in this Statspack schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ DB Id Inst Num DB Name Instance Host ----------- -------- ------------ ------------ ------------ 2618106428 1 PRD10 prd1 dlsun525 Using 261810642 for database Id Using 1 for instance number Completed Snapshots Snap Snap Instance DB Name Id Snap Started Level Comment ------------ ------------ ----- ----------------- ----- ---------------------- prd1 PRD1 1 11 May 2000 12:07 5 2 11 May 2000 12:08 5 3 12 May 2000 07:07 5 4 12 May 2000 08:08 5 Specify the Begin and End Snapshot Ids ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Enter value for begin_snap: 1 Begin Snapshot Id specified: 1 Enter value for end_snap: End Snapshot Id specified: 2 Specify the Report Name ~~~~~~~~~~~~~~~~~~~~~~~ The default report file name is sp_1_2 To use this name, press to continue, otherwise enter an alternative. Enter value for report_name: Using the report name sp_1_2 The report will now scroll past, and also be written to the file specified (e.g. sp_1_2.lis). Batch mode report generation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To run a report without being prompted, assign values to the SQL*Plus variables which specify the begin snap id, the end snap id and the report name before running spreport. The variables are: begin_snap -> specifies the begin Snapshot Id end_snap -> specifies the end Snapshot Id report_name -> specifies the Report output name e.g. on Unix: SQL> connect perfstat/perfstat_password SQL> define begin_snap=1 SQL> define end_snap=2 SQL> define report_name=batch_run SQL> @?/rdbms/admin/spreport spreport will no longer prompt for the above information. 4.2. Running the instance report when there are multiple instances spreport.sql assumes you are connected to the database you wish to report on. There are certain situations where this assumption may not be valid: - In a clustered database environment, you may be connected to an instance which is not the instance you wish to report on - If you are archiving baseline Statspack data in a separate database from your production database, or when importing Statspack data (e.g. in the case of Oracle support) In these situations, you would not be able to produce the Statspack instance report using spreport.sql, as the instance assumed may be unavailable, possibly on a totally different host. To circumvent this problem, you should run the sprepins.sql report instead. The sprepins.sql report output is identical to the spreport.sql output, as spreport.sql simply calls sprepins.sql, first defaulting the Instance Number and DBId of the database you are currently connected to. If you run sprepins.sql directly, you are prompted for the DBId and Instance Number for the instance you wish to report on, in addition to the begin_snap and end_snap Ids and report output name (i.e. the current DBId and Instance Number are not defaulted). You will be prompted for: 1. The DBId 2. The Instance Number 3. The beginning snapshot Id 4. The ending snapshot Id 5. The name of the report text file to be created Example output: SQL> connect perfstat/perfstat_password Connected. 
SQL> @sprepins Instances in this Statspack schema ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ DB Id Inst Num DB Name Instance Host ----------- -------- ------------ ------------ ------------ 590400074 1 CON90 con90 dlsun525 4290976145 1 MAIL MAIL mailhost Enter value for dbid: 4290976145 Using 4290976145 for database Id Enter value for inst_num: 1 Then similarly to spreport, the available snapshots are displayed, and the begin and end snaps and report name are prompted for. Batch mode report generation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ To run the sprepins.sql report without being prompted, assign values to the SQL*Plus variables which specify the dbid, instance number, begin snap id, the end snap id, and the report name, before running spreport. The variables are: dbid -> specifies the dbid inst_num -> specifies the instance number begin_snap -> specifies the begin Snapshot Id end_snap -> specifies the end Snapshot Id report_name -> specifies the Report output name e.g. SQL> connect perfstat/perfstat_password SQL> define dbid=4290976145 SQL> define inst_num=1 SQL> define begin_snap=1 SQL> define end_snap=2 SQL> define report_name=batch_run SQL> @?/rdbms/admin/sprepins sprepins will no longer prompt for the above information. 4.3. Running the SQL report Once the instance report has been analyzed, often there are high-load SQL statements which should be examined to determine if they are causing unnecessary load. The SQL report sprepsql.sql, displays statistics, the complete SQL text and (if level 6 snapshot has been taken), information on any SQL Plan(s) associated with that statement. The SQL statement to be reported on is identified by the statement's Hash Value (which is a numerical representation of the statement's SQL text). The Hash Value for each statement is displayed in the high-load SQL sections of the instance report. The sprepsql.sql file is executed while being connected to the PERFSTAT user, and is located in the rdbms/admin directory of the Oracle Home. Note: To run sprepsql.sql in a Cluster environment, you must connect to the instance you wish to report on. You will be prompted for: 1. The beginning snapshot Id 2. The ending snapshot Id 3. The Hash Value for the SQL statement 4. The name of the report text file to be created Example output: SQL> connect perfstat/perfstat_password Connected. SQL> @sprepsql DB Id DB Name Inst Num Instance ----------- ------------ -------- ------------ 2618106428 PRD1 1 prd1 Completed Snapshots Snap Snap Instance DB Name Id Snap Started Level Comment ------------ ------------ ----- ----------------- ----- ---------------------- prd1 PRD1 37 02 Mar 2001 11:01 6 38 02 Mar 2001 12:01 6 39 08 Mar 2001 09:01 5 40 08 Mar 2001 10:02 5 Specify the Begin and End Snapshot Ids ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Enter value for begin_snap: 39 Begin Snapshot Id specified: 39 Enter value for end_snap: 40 End Snapshot Id specified: 40 Specify the Hash Value ~~~~~~~~~~~~~~~~~~~~~~ Enter value for hash_value: 1988538571 Hash Value specified is: 1988538571 Specify the Report Name ~~~~~~~~~~~~~~~~~~~~~~~ The default report file name is sp_39_40_1988538571. To use this name, press to continue, otherwise enter an alternative. Enter value for report_name: Using the report name sp_39_40_1988538571 The report will scroll past, and also be written to the file specified (e.g. sp_39_40_1988538571.lis). Batch mode report generation ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Similarly to spreport.sql, the SQL report can be run in batch mode. 
To run a report without being prompted, assign values to the SQL*Plus variables which specify the begin snap id, the end snap id, the SQL hash value, and the report name before running sprepsql.
The variables are:
  begin_snap   -> specifies the begin Snapshot Id
  end_snap     -> specifies the end Snapshot Id
  hash_value   -> specifies the Hash Value
  report_name  -> specifies the Report output name

e.g.
  SQL> connect perfstat/perfstat_password
  SQL> define begin_snap=39
  SQL> define end_snap=40
  SQL> define hash_value=1988538571
  SQL> define report_name=batch_sql_run
  SQL> @sprepsql

sprepsql will no longer prompt for the above information.

4.4. Gathering Optimizer statistics on the PERFSTAT schema

For best performance when running the performance reports, collect optimizer statistics for tables and indexes owned by PERFSTAT. This should be performed whenever there has been a significant change in the data volumes in PERFSTAT's tables. To do this, use either dbms_stats or dbms_utility, and specify the PERFSTAT user:
  execute dbms_stats.gather_schema_stats(ownname=>'PERFSTAT',cascade=>true);
or
  execute dbms_utility.analyze_schema('PERFSTAT','COMPUTE');

5. Configuring the amount of data captured
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Both the snapshot level and the thresholds specified will affect the amount of data Statspack captures.

5.1. Snapshot Level

It is possible to change the amount of information gathered by the package, by specifying a different snapshot 'level'. In other words, the level chosen (or defaulted) will decide the amount of data collected. The higher the snapshot level, the more data is gathered. The default level set by the installation is level 5.

For typical usage, a level 5 snapshot is effective on most sites. There are certain situations when using a level 6 snapshot is beneficial, such as when taking a baseline. The events listed below are a subset of events which should prompt taking a new baseline, using level 6:
- when taking the first snapshots
- when a new application is installed, or an application is modified/upgraded
- after gathering optimizer statistics
- before and after upgrading
The various levels are explained in detail in section 5.4. below.

5.2. Snapshot SQL thresholds

There are other parameters which can be configured in addition to the snapshot level. These parameters are used as thresholds when collecting data on SQL statements; data will be captured on any SQL statements that breach the specified thresholds. Snapshot level and threshold information used by the package is stored in the stats$statspack_parameter table.

5.3. Changing the default values for Snapshot Level and SQL Thresholds

If you wish to, you can change the default parameters used for taking snapshots, so that they are tailored to the instance's workload. The full list of parameters which can be passed into the modify_statspack_parameter procedure is the same as that for the snap procedure. These are listed in section 5.6. below.

Temporarily using new values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To temporarily use a snapshot level or threshold which is different from the instance's default snapshot values, simply specify the required threshold or snapshot level when taking the snapshot. This value will only be used for the immediate snapshot taken - the new value will not be saved as the default.
e.g.
Take a single level 6 snapshot (do not save level 6 as the default): SQL> execute statspack.snap(i_snap_level=>6); Saving new defaults ~~~~~~~~~~~~~~~~~~~~ If you wish to save the new value as the instance's default, you can do this either by: o Taking a snapshot, and specifying the new defaults to be saved to the database (using statspack.snap, and using the i_modify_parameter input variable). SQL> execute statspack.snap - (i_snap_level=>10, i_modify_parameter=>'true'); Setting the i_modify_parameter value to true will save the new thresholds in the stats$statspack_parameter table; these thresholds will be used for all subsequent snapshots. If the i_modify_parameter was set to false or if it were omitted, the new parameter values would not be saved. Only the snapshot taken at that point will use the specified values, any subsequent snapshots will use the preexisting values in the stats$statspack_parameter table. o Changing the defaults immediately without taking a snapshot, using the statspack.modify_statspack_parameter procedure. For example to change the snapshot level to 10, and the SQL thresholds for buffer_gets and disk_reads, the following statement can be issued: SQL> execute statspack.modify_statspack_parameter - (i_snap_level=>10, i_buffer_gets_th=>10000, i_disk_reads_th=>1000); This procedure changes the values permanently, but does not take a snapshot. 5.4 Snapshot Levels - details Levels >= 0 General performance statistics Statistics gathered: This level and any level greater than 0 collects general performance statistics, such as: wait statistics, system events, system statistics, rollback segment data, row cache, SGA, background events, session events, lock statistics, buffer pool statistics, latch statistics, resource limit, enqueue statistics, and statistics for each of the following, if enabled: automatic undo management, buffer cache advisory data, auto PGA memory management, Cluster DB statistics. Levels >= 5 Additional data: SQL Statements This level includes all statistics gathered in the lower level(s), and additionally gathers the performance data on high resource usage SQL statements. In a level 5 snapshot (or above), note that the time required for the snapshot to complete is dependent on the shared_pool_size and on the number of SQL statements in the shared pool at the time the snapshot is taken: the larger the shared pool, the longer the time taken to complete the snapshot. SQL 'Thresholds' The SQL statements gathered by Statspack are those which exceed one of six predefined threshold parameters: - number of executions of the SQL statement (default 100) - number of disk reads performed by the SQL statement (default 1,000) - number of parse calls performed by the SQL statement (default 1,000) - number of buffer gets performed by the SQL statement (default 10,000) - size of sharable memory used by the SQL statement (default 1m) - version count for the SQL statement (default 20) The values of each of these threshold parameters are used when deciding which SQL statements to collect - if a SQL statement's resource usage exceeds any one of the above threshold values, it is captured during the snapshot. The SQL threshold levels used are either those stored in the table stats$statspack_parameter, or by the thresholds specified when the snapshot is taken. 
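To see the snapshot level and thresholds currently saved as the instance's defaults, you can query the stats$statspack_parameter table directly while connected as PERFSTAT. The query below is a sketch only; it assumes the threshold column names mirror the i_..._th parameter names (e.g. executions_th, disk_reads_th) - describe the table first if in doubt:

  SQL> connect perfstat/perfstat_password
  SQL> describe stats$statspack_parameter
  SQL> select snap_level, executions_th, disk_reads_th,
    2         parse_calls_th, buffer_gets_th
    3    from stats$statspack_parameter;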
Levels >= 6   Additional data: SQL Plans and SQL Plan usage

This level includes all statistics gathered in the lower level(s), and additionally gathers optimizer execution plans and plan usage data for each of the high resource usage SQL statements captured.

A level 6 snapshot gathers information which is invaluable when determining whether the execution plan used for a SQL statement has changed. Therefore level 6 snapshots should be used whenever there is the possibility a plan may change, such as after large data loads, or after gathering new optimizer statistics.

To capture the plan for a SQL statement, the statement must be in the shared pool at the time the snapshot is taken, and must exceed one of the SQL thresholds. To gather plans for all statements in the shared pool, you can temporarily specify the executions threshold (i_executions_th) to be zero (0) for those snapshots. For information on how to do this, see section 5.3. above.

Levels >= 7   Additional data: Segment level statistics

This level includes all statistics gathered in the lower level(s), and additionally gathers performance data on highly used segments.

A level 7 snapshot gathers information which determines which segments are most heavily accessed and contended. With this information, you can decide to modify the physical layout of some segments or of the tablespaces they reside in. For example, to better spread the segment I/O load, you can add files residing on different disks to a tablespace storing a heavily accessed segment, or you can (re)partition a segment. This information can also help in deciding whether to change segment attribute values such as PCTFREE and/or INITRANS. In a RAC environment, this information makes it easy to spot the segments responsible for much of the cross-instance traffic.

Segment statistics are:
- logical reads
- db block changes
- physical reads
- physical writes
- physical reads direct
- physical writes direct
- global cache cr blocks served (RAC specific)
- global cache current blocks served (RAC specific)
- buffer busy waits
- ITL waits
- row lock waits

Although Statspack captures all segment statistics, it reports only the following statistics:
- logical reads
- physical reads
- buffer busy waits
- ITL waits
- row lock waits
- global cache cr blocks served (RAC only)
- global cache current blocks served (RAC only)

Segment statistics 'Thresholds'

The segments gathered by Statspack are those which exceed one of seven predefined threshold parameters:
- number of logical reads on the segment (default 10,000)
- number of physical reads on the segment (default 1,000)
- number of buffer busy waits on the segment (default 100)
- number of row lock waits on the segment (default 100)
- number of ITL waits on the segment (default 100)
- number of global cache ConsistentRead blocks served-RAC (default 1000)
- number of global cache CUrrent blocks served (RAC) (default 1000)

The values of each of these threshold parameters are used when deciding which segment statistics to collect - if a segment's statistic exceeds any one of the above threshold values, all statistics regarding this segment are captured during the snapshot. The threshold levels used are either those stored in the stats$statspack_parameter table, or those specified when the snapshot is taken.

Levels >= 10  Additional statistics: Parent and Child latches

This level includes all statistics gathered in the lower levels, and additionally gathers Parent and Child Latch information.
Data gathered at this level can sometimes cause the snapshot to take longer to complete, i.e. this level can be resource intensive, and should only be used when advised by Oracle personnel.

5.5. Specifying a Session Id

If you would like to gather session statistics and wait events for a particular session (in addition to the instance statistics and wait events), it is possible to specify the session id in the call to Statspack. The statistics gathered for the session will include session statistics, session events and lock activity. The default behaviour is not to gather session level statistics.
e.g.
  SQL> execute statspack.snap(i_session_id=>3);

5.6. Input Parameters for the SNAP and MODIFY_STATSPACK_PARAMETERS procedures

Parameters able to be passed in to the statspack.snap and statspack.modify_statspack_parameter procedures are as follows:

                    Range of        Default
Parameter Name      Valid Values    Value    Meaning
------------------  --------------  -------  ----------------------------------
i_snap_level        0, 5, 6, 7, 10  5        Snapshot Level
i_ucomment          Text            Blank    Comment to be stored with Snapshot
i_executions_th     Integer >=0     100      SQL Threshold: number of times the
                                             statement was executed
i_disk_reads_th     Integer >=0     1,000    SQL Threshold: number of disk reads
                                             the statement made
i_parse_calls_th    Integer >=0     1,000    SQL Threshold: number of parse calls
                                             the statement made
i_buffer_gets_th    Integer >=0     10,000   SQL Threshold: number of buffer gets
                                             the statement made
i_sharable_mem_th   Integer >=0     1048576  SQL Threshold: amount of sharable
                                             memory
i_version_count_th  Integer >=0     20       SQL Threshold: number of versions of
                                             a SQL statement
i_seg_phy_reads_th  Integer >=0     1,000    Segment statistic Threshold: number
                                             of physical reads on a segment
i_seg_log_reads_th  Integer >=0     10,000   Segment statistic Threshold: number
                                             of logical reads on a segment
i_seg_buff_busy_th  Integer >=0     100      Segment statistic Threshold: number
                                             of buffer busy waits for a segment
i_seg_rowlock_w_th  Integer >=0     100      Segment statistic Threshold: number
                                             of row lock waits for a segment
i_seg_itl_waits_th  Integer >=0     100      Segment statistic Threshold: number
                                             of ITL waits for a segment
i_seg_cr_bks_sd_th  Integer >=0     1000     Segment statistic Threshold: number
                                             of Consistent Read blocks served by
                                             the instance for the segment (RAC)
i_seg_cu_bks_sd_th  Integer >=0     1000     Segment statistic Threshold: number
                                             of CUrrent blocks served by the
                                             instance for the segment (RAC)
i_session_id        Valid sid from  0 (no    Session Id of the Oracle Session to
                    v$session       session) capture session granular statistics
                                             for
i_modify_parameter  True, False     False    Save the parameters specified for
                                             future snapshots?

6. Time Units used for Performance Statistics
----------------------------------------------
Oracle now supports capturing certain performance data with millisecond and microsecond granularity.

Views which include microsecond timing include:
- v$session_wait, v$system_event, v$session_event (time_waited_micro column)
- v$sql, v$sqlarea (cpu_time, elapsed_time columns)
- v$latch, v$latch_parent, v$latch_children (wait_time column)
- v$sql_workarea, v$sql_workarea_active (active_time column)

Views which include millisecond timings include:
- v$enqueue_stat (cum_wait_time)

Note that existing columns in other views continue to capture centi-second times.
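As an illustration of working with these units, the following query converts the cumulative microsecond wait time in v$system_event to seconds, and the average wait to milliseconds (the same units Statspack itself reports). This is a sketch only, and assumes the 9i time_waited_micro and total_waits columns of v$system_event:

  SQL> select event,
    2         time_waited_micro/1000000 time_waited_s,
    3         time_waited_micro/1000/decode(total_waits,0,null,total_waits) avg_wait_ms
    4    from v$system_event
    5   order by time_waited_micro desc;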
As centi-second and microsecond timing may not be appropriate for rolled up data such as that displayed by Statspack, Statspack displays most cumulative times in seconds, and average times in milliseconds (for easier comparison with Operating System monitoring utilities which often report timings in milliseconds). For clarity, the time units used are specified in the column headings of each timed column in the Statspack report. The convention used is: (s) - a second (cs) - a centisecond - which is 100th of a second (ms) - a millisecond - which is 1,000th of a second (us) - a microsecond - which is 1,000,000th of a second 7. Event Timings ----------------- If timings are available, the Statspack report will order wait events by time (in the Top-5 and background and foreground Wait Events sections). If timed_statistics is false for the instance, however a subset of users or programs set timed_statistics set to true dynamically, the Statspack report output may look inconsistent, where some events have timings (those which the individual programs/users waited for), and the remaining events do not. The Top-5 section will also look unusual in this situation. Optimally, timed_statistics should be set to true at the instance level for ease of diagnosing performance problems. 8. Managing and Sharing performance data ----------------------------------------- 8.1. Sharing data via export If you wish to share data with other sites (for example if Oracle Support requires the raw statistics), it is possible to export the PERFSTAT user. An export parameter file (spuexp.par) has been supplied for this purpose. To use this file, supply the export command with the userid parameter, along with the export parameter file name. e.g. exp userid=perfstat/perfstat_password parfile=spuexp.par This will create a file called spuexp.dmp and the log file spuexp.log If you wish to load the data into another database, use the import command. For more information on using export and import, please see the Oracle Utilities manual. 8.2. Purging/removing unnecessary data It is possible to purge unnecessary data from the PERFSTAT schema using sppurge.sql. This script deletes snapshots which fall between the begin and end range of Snapshot Id's specified. NOTE: It is recommended you export the schema as a backup before running this script, either using your own export parameters, or those provided in spuexp.par Purging may require the use of a large rollback segment, as all data relating each Snapshot Id to be purged will be deleted. To avoid rollback segment extension errors, explicitly use a large rollback segment. This can be done by executing the 'set transaction use rollback segment..' command before running the sppurge.sql script (for more information on the set transaction command see the SQL reference manual). Alternatively, to avoid rollback segment extension errors specify a smaller range of Snapshot Id's to purge. When sppurge is run, the instance currently connected to and the available snapshots are displayed. The DBA is then prompted for the low Snap Id and high Snap Id. All snapshots which fall within this range will be purged. e.g. Purging data - connect to PERFSTAT using SQL*Plus, then run the script - an example output appears below. 
SQL> connect perfstat/perfstat_password
SQL> set transaction use rollback segment rbig;
SQL> @sppurge

Database Instance currently connected to
========================================
                                Instance
   DB Id    DB Name    Inst Num Name
----------- ---------- -------- ----------
  720559826 PERF              1 perf

Snapshots for this database instance
====================================
      Snap  Snap
        Id Level Snapshot Started      Host            Comment
---------- ----- --------------------- --------------- -------------------
         1     5 28 Feb 2000 10:00:01  perfhost
         2     5 28 Feb 2000 12:00:06  perfhost
         3     5 01 Mar 2000 02:00:01  perfhost
         4     5 01 Mar 2000 06:00:01  perfhost

WARNING
=======
sppurge.sql deletes all snapshots ranging between the lower and upper bound Snapshot Id's specified, for the database instance connected to. You may wish to export this data before continuing.

Specify the Lo Snap Id and Hi Snap Id range to purge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for losnapid: 1
Using 1 for lower bound.
Enter value for hisnapid: 2
Using 2 for upper bound.

Deleting snapshots 1 - 2

Purge of specified Snapshot range complete. If you wish to ROLLBACK the purge, it is still possible to do so. Exiting from SQL*Plus will automatically commit the purge.

SQL> -- end of example output

Batch mode purging
~~~~~~~~~~~~~~~~~~
To purge in batch mode, you must assign values to the SQL*Plus variables which specify the low and high snapshot Ids to purge.
The variables are:
  losnapid -> Begin Snapshot Id
  hisnapid -> End Snapshot Id

e.g.
  SQL> connect perfstat/perfstat_password
  SQL> define losnapid=1
  SQL> define hisnapid=2
  SQL> @sppurge

sppurge will no longer prompt for the above information.

8.3. Truncating all data

If you wish to truncate all performance data indiscriminately, it is possible to do this using sptrunc.sql. This script truncates all statistics data gathered.

NOTE: It is recommended you export the schema as a backup before running this script, either using your own export parameters, or those provided in spuexp.par.

e.g. Truncating all data - connect to PERFSTAT using SQL*Plus, and run the script - an example is below.

SQL> connect perfstat/perfstat_password
SQL> @sptrunc

About to Truncate Statspack Tables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
NOTE: Running sptrunc.sql removes ALL data from Statspack tables. You may wish to export the data before continuing. If you would like to continue, enter any string, followed by <return>.
Enter value for anystring:
entered - starting truncate operation
Table truncated.
Truncate operation complete

9. New and Changed Features
----------------------------

9.1. Changes between 9.0 and 9.2

Changes on the Summary Page of the Instance Report (spreport.sql)

o The Top 5 Wait Events has been changed to be the Top 5 Timed Events. What was previously the Top 5 Wait Events has been expanded to give the Top 5 time usage within the instance: i.e. in addition to including Wait events, this section can now include the 'CPU used by this session' statistic. This statistic will appear in the Top 5 only if its value is one of the Top 5 users of time for the snapshot interval. Note that the statistic 'CPU used by this session' will actually appear in the Top 5 section as 'CPU Time'. The statistic name is masked in the Top 5 to avoid the confusion of the suffix 'by this session'. The statistic will continue to appear in the System Statistics (SYSSTAT) section of the report as 'CPU used by this session'.
Additionally, instead of the percentage calculation being the % Total Wait Time (which is the time for each wait event divided by the total wait time for the snapshot interval), the percentage calculation is now the % Total Elapsed Time (which is the time for each timed event divided by the total elapsed time). i.e.
  previously:  time for each wait event  / total wait time for all events
  now:         time for each timed event / total elapsed time

The total elapsed time is computed as the total wait time for all events added to the total CPU time used for the interval, i.e.
  total elapsed time = total wait time + total CPU time

Purpose
~~~~~~~
The purpose for including CPU time with wait events: when tuning a system, the first step is to identify where most of the time is spent, in order to identify where the most productive tuning effort should be concentrated. The majority of time could be spent waiting for events to complete (and so be identifiable in the wait event data), or the system could be consuming a large amount of CPU (for which Operating System statistics, and the Oracle CPU statistic 'CPU used by this session' in SYSSTAT, are examined). Having the CPU Time co-located with the wait events in the Top 5 section of the instance report makes it easier to compare the relative values and to identify whether the most productive investigation would be drilling down into the wait events, or reducing Oracle CPU usage (e.g. by tuning SQL).

Changes on the Top SQL sections of the Report (spreport.sql)

o When specified by the application, the MODULE information is reported just before the SQL statement itself. This information is preceded by the label "Module: "

New columns added to
- stats$db_cache_advice
    size_factor: compares the estimated cache size with the current cache size
- stats$sql_plan
    search_columns: the number of index columns with matching predicates
    access_predicates: predicates used to locate rows in an access structure. For example, start and/or stop predicates for an index range scan
    filter_predicates: predicates used to filter rows before producing them
- stats$sql_summary
    child_latch: the library cache child latch number which protects this SQL statement (join to v$latch_children.child#). A parent SQL statement and all its children are protected by the same library cache child latch
    fetches: the number of fetches performed for this SQL statement

New Scripts
o spup90.sql - Upgrades a 9.0 Statspack schema to the 9.2 format

New Data captured/reported on - Level 1
- Shared Pool Advisory
- PGA statistics including PGA Advisory, PGA Histogram usage

New Data captured/reported on - Level 7
- Segment level Statistics

Cluster Features
o Cluster Statistics page (page 2 of a clustered database report) has been significantly modified to add new ratios and remove ratios deemed less useful.
o RAC specific segment level statistics are captured with level 7

SQL Plan Usage capture changed
o The logic for capturing SQL Plan Usage data (level 6) has been modified significantly. Instead of capturing a plan's usage once, the first time the plan is used, and never again thereafter, the algorithm now captures the plans used in each snapshot. This allows tracking whether multiple plans are in use concurrently, or whether a plan has reverted to an older plan. Note that plan usage data is only captured for high-load SQL (this is unchanged between 9.0 and 9.2). Due to the significant change in data capture, it is not possible to convert existing data.
Instead, any pre-existing data will be archived into the table STATS$SQL_PLAN_USAGE_90 (this allows querying the archived data, should this be necessary). sprepsql.sql o 'All Optimizer Plan(s) for this Hash Value' change: Instead of showing the first time a plan was seen for a specific hash value, this section now shows each time the Optimizer Plan changed since the SQL statement was first seen e.g. if the SQL statement had the following plan changes: snap ids plan hash value -------- --------------- 1 -> 12 AAAAAAA 13 -> 134 BBBBBBB 145 -> 299 CCCCCCC 300 -> 410 AAAAAAA Then this section of the report will now show: snap id plan hash value -------- --------------- 1 AAAAAAA 13 BBBBBBB 145 CCCCCCC 300 AAAAAAA Previously, only the rows with snap_id's 1, 13 and 145 would have been displayed, as these were the first snap Id's these plans were found. However this data could not show that plan AAAAAA was found again in snap_id 300. The new output format makes it easier to see when an older plan is again in use. This is possible due to the change in the SQL Plan Usage capture (described above). 9.2. Changes between 8.1.7 and 9.0 Timing data o columns with cumulative times are now displayed in seconds. Changes on the Summary Page o All cache sizes are now reported in M or K New Statistics on the Summary page o open cursors per session values for the begin and end snapshot o comments specified when taking a snapshot are displayed for the begin and end snapshots Latches o The Latch Activity, Child and Parent Latch sections have the following additional column: - wait_time: cumulative time spent waiting for the latch New Scripts o spup817.sql - Upgrades an 8.1.7 Statspack schema to the 9.0 format o sprepsql.sql - Reports on a single hash_value, including the SQL statistics for the snapshot, the complete SQL text and optimizer execution plan information. o sprepins.sql - A report which can be run to query performance data for any instance which the PERFSTAT schema contains. The report will prompt for a dbid, instance_number and begin and end snap id's. This report can be used when importing data from another instance, or in a Real Application Clusters environment to report on an instance which you are not directly connected to. New Data captured/reported on - Level 1 - Data from v$resource_limit - If the instance is a Cluster instance, v$dlm_misc data - Additional columns are now captured in stats$enqueue_stat - Automatic Undo Management statistics - Buffer Cache advisory data - New Auto-PGA memory management data - Support for multiple sized-block buffer pools - Support for resizable buffer pool and shared pool - Data from v$instance_recovery New Snapshot Level - Level 6 - New SQL plans and SQL Plan usage information for high-load SQL statements are captured. Cluster Features o There is additional derived data and statistics which are now included in the Statspack report for a clustered database. For more information, see section 11.3. Cluster Specific Data New SNAP function o the call to take a snapshot can also be a PL/SQL function call which returns the snapshot Id of the snapshot taken. Using the function rather than the procedure is useful in situations where you wish to know the snap_id immediately, such as when running Statspack reports in batch mode, or during benchmark runs. Installation o The installation script will no longer accept the SYSTEM tablespace for the PERFSTAT user's DEFAULT or TEMPORARY tablespace. If SYSTEM is specified, the installation will error. 
SQL
o Each SQL report has two new columns, CPU Time and Elapsed Time. These show the cumulative CPU time and Elapsed time for all executions of that SQL statement for the snapshot period. If cumulative CPU and Elapsed times are not shown, the CPU and Elapsed times per execute are shown.

Changed
o The SGA Breakdown difference section of the Statspack report now shows the difference between begin and end values as a percentage of the begin value, rather than in bytes.
o The data in the Dictionary Cache Stats and Library Cache Activity sections are only printed if the number of gets is greater than zero.

9.3. Changes between 8.1.6 and 8.1.7

New Statistics on the Summary page
o connections at the begin snapshot and connections at the end snapshot

Load Profile
o executes per transaction and per second
o logons per transaction and per second

Instance Efficiency
o % Non-Parse CPU: which is the parse time CPU / CPU used by this session
o Parse CPU to Parse Elapsd %: which is the parse time CPU / parse time elapsed
o Execute to Parse %: the ratio of executions to parses

Instance Efficiency - Shared Pool Statistics are shown for the begin and end snapshots.
o Memory Usage %: the percentage of the shared pool which is used.
o % SQL with executions>1: the percentage of reused SQL (i.e. the percentage of SQL statements with more than one execution).
o % Memory for SQL w/exec>1: the percentage of memory used for SQL statements with more than one execution.
This data is newly gathered by the 8.1.7 Statspack for level 5 snapshots and above, and so will not be evident if the report is run against older data captured using the 8.1.6 Statspack.

Tablespace and File IO
o Tempfile statistics are now captured. The statistics for tempfiles are shown in the same sections as the statistics for datafiles and tablespaces.
o The tablespace and File IO reports have been modified to include reads/s and writes/s.

Latches
o The report has been modified to include parent and child latch sections, which only appear in the report when a level 10 snapshot is taken.

New Scripts
o sppurge.sql - Purges a range of Snapshot Ids
o sptrunc.sql - Deletes all data
o spup816.sql - Upgrades an 8.1.6 Statspack to the 8.1.7 schema

Batch Mode execution
o The installation, reporting and purge scripts (spcreate.sql, spreport.sql and sppurge.sql) have been modified so they can be run in batch mode, if the appropriate SQL*Plus variables are defined before the scripts are run.

SQL
o Two new SQL thresholds (and sections in the report) have been added: sharable_mem and version_count
o The report which was previously ordered by rows processed has been changed to be ordered by executions
o The full text of a SQL statement is now captured (previously only the first 1000 bytes of the text was captured); the text is captured once only. Previously, Statspack gathered all SQL related information, including all the SQL text, for each snapshot. The new strategy will result in less space usage.
o The first 5 lines of a SQL statement are shown in each SQL report (rather than the first line)

File Rename
o The Statspack files have been renamed, with all files now beginning with the prefix sp. The new and old file names are given below. For more information on the purpose of each file, please see the Supplied Scripts Overview section.
File Rename
o  The Statspack files have been renamed, with all files now beginning with
   the prefix sp.  The new and old file names are given below.  For more
   information on the purpose of each file, please see the Supplied Scripts
   Overview section.

      New Name         Old Name
      ------------     -------------
      spdoc.txt        statspack.doc
      spcreate.sql     statscre.sql
      spreport.sql     statsrep.sql
      spauto.sql       statsauto.sql
      spuexp.par       statsuexp.par
      sppurge.sql      - new file -
      sptrunc.sql      - new file -
      spup816.sql      - new file -
      spdrop.sql       statsdrp.sql
      spcpkg.sql       statspack.sql
      spctab.sql       statsctab.sql
      spcusr.sql       statscusr.sql
      spdtab.sql       statsdtab.sql
      spdusr.sql       statsdusr.sql

o  The default Statspack report output file name prefix has been modified
   to sp_ (was st_) to be consistent with the new script names.

10. Compatibility and Upgrading from previous releases
-------------------------------------------------------

10.1 Compatibility Matrix

                        Statspack Release
   Database
   Release      9.2     9.0     8.1.7   8.1.6
   --------    -----   -----   -----   -----
   9.2           Y       -       -       -
   9.0           -       Y       -       -
   8.1.7         -       -       Y       -
   8.1.6         -       -       -       Y

In summary, it is best to use the Statspack release shipped with the
version of the database you are using.  If you are already using an earlier
release of Statspack and must use a newer Statspack release (e.g. because
you are upgrading the database), it is possible to upgrade an existing
Statspack schema, and so keep previously captured data.  See section 10.2.
below, which describes upgrading an existing Statspack schema to a newer
release.

10.1.1. Using Statspack shipped with 9.2

The Statspack scripts shipped with 9.2 can not be used with any release
earlier than 9.2, as Statspack uses new v$views (and new columns added to
existing v$views) introduced in this server release.

10.1.2. Using Statspack shipped with 9.0

The Statspack scripts shipped with 9.0 can not be used with any release
earlier than 9.0, as the 9.0 release uses new v$views (and new columns
added to existing v$views) introduced in that server release.

10.1.3. Using Statspack shipped with 8.1.7 on 9i releases

It is not possible to use the Statspack shipped with 8.1.7 on any 9i
instance, because the definition of an undocumented view used by the 8i
Statspack changed between Oracle8i and Oracle9i.  Attempting to use the
8.1 Statspack on an instance running 9i will result in package compilation
errors.

10.2. Upgrading an existing Statspack schema to a newer release

Scripts are provided which convert performance data in an existing
Statspack schema running an older Statspack release to the newer schema
format.  Although data conversion is not a supported activity, these
scripts have been provided as a convenient way of keeping previously
captured Statspack data.

Due to the differences in schema layout, minor irregularities may appear in
statistics captured before the conversion.  An example of this is the
Enqueue statistics data migration: do not compare Enqueue statistics data
collected pre-9.0 to the Enqueue statistics data captured in 9.0 (for more
details, see section 10.2.2. Upgrading the Statspack schema from 8.1.7 to
9.0).

Note: There is no downgrade script.  Back up the PERFSTAT schema using
export BEFORE attempting the upgrade, in case the upgrade fails.  The only
method of downgrading, or of re-running the upgrade, is to de-install
Statspack and import a previously made export.

Before running the upgrade script, export the Statspack schema (for a
backup), then disable any scripts which use Statspack, as these will
interfere with the upgrade.  For example, if you use a dbms_job to gather
statistics, disable this job for the duration of the upgrade.
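For example, if the statistics collection job was submitted via dbms_job
(e.g. by spauto.sql), it can be marked broken for the duration of the
upgrade and re-enabled afterwards.  A minimal sketch; the job number 41 is
illustrative - query user_jobs for the actual value:

   SQL>  connect perfstat/<perfstat_password>
   SQL>  select job, what from user_jobs;
   SQL>  execute dbms_job.broken(41, TRUE);

   (run the upgrade, then re-enable the job)

   SQL>  execute dbms_job.broken(41, FALSE);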
If there is a large volume of data in the Statspack schema (i.e. a large
number of snapshots), then to avoid a long upgrade time or an unsuccessful
upgrade:
 - ensure there is enough free space in PERFSTAT's default tablespace
   before starting the upgrade (each individual upgrade section describes
   how to estimate the required disk space)
 - if you do not use Automatic Undo Management, ensure you specify a large
   rollback segment when prompted
 - if you do not use Automatic Memory Management, ensure you specify a
   large sort_area_size (e.g. 1048576) when prompted

The upgrade script will prompt you for the rollback segment and
sort_area_size.  If you do not need to specify a rollback segment or
sort_area_size, simply press return, and ignore the following errors
appearing in the upgrade log file:

   alter session set sort_area_size =
   *
   ERROR at line 1:
   ORA-02017: integer value required

   set transaction use rollback segment
   *
   ERROR at line 1:
   ORA-02245: invalid ROLLBACK SEGMENT name

10.2.1. Upgrading the Statspack schema from 9.0 to 9.2

Follow the general instructions in section 10.2. 'Upgrading an existing
Statspack schema to a newer release' above.  This release creates new
tables and indexes, and requires approximately 20MB of additional space.

To upgrade:
 - ensure you have sufficient free space in the tablespace
 - disable any programs which use Statspack
 - backup the Statspack schema (e.g. using export)
 - run the upgrade by connecting as a user with SYSDBA privilege:
     SQL>  connect / as sysdba
     SQL>  @spup90

Once the upgrade script completes, check the log files (spup90a.lis and
spup90b.lis) for errors.  If errors are evident, determine and rectify the
cause.  If no errors are evident, re-enable any Statspack data collection
or reporting scripts which were previously disabled.

SQL Plan Usage Data Upgrade note:
If there is more than one database in a single Statspack schema (i.e. there
are multiple distinct dbids), AND if Level 6 snapshots have been taken
using the 9.0 release of Statspack, then the SQL plan usage data will be
saved, but will not be queried by the sprepsql.sql SQL report (this is
because, during the data conversion, it is not possible to identify which
database first identified a plan usage).  For more details, see 'SQL Plan
Usage capture changed' in section 9.1. Changes between 9.0 and 9.2.

10.2.2. Upgrading the Statspack schema from 8.1.7 to 9.0

Follow the general instructions in section 10.2. 'Upgrading an existing
Statspack schema to a newer release' above.  Then, to estimate whether you
have sufficient free space to run this upgrade, execute the following SQL
statement while connected as PERFSTAT in SQL*Plus:

   select 10 + (2*sum(bytes)/1024/1024) est_space_mb
     from dba_segments
    where segment_name in ('STATS$ENQUEUESTAT');

The est_space_mb column gives an estimate of the required free space, in
megabytes.

To upgrade:
 - ensure you have sufficient free space in the tablespace
 - disable any programs which use Statspack
 - backup the Statspack schema (e.g. using export)
 - run the upgrade by connecting as a user with SYSDBA privilege:
     SQL>  connect / as sysdba
     SQL>  @spup817

Once the upgrade script completes, check the log files (spup817a.lis and
spup817b.lis) for errors.  If errors are evident, determine and rectify the
cause before proceeding.  If no errors are evident and you are upgrading to
9.2, you may proceed with the 9.0 to 9.2 upgrade.
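Each of these upgrades requires sufficient free space in PERFSTAT's default
tablespace.  A quick way to check how much space is currently free is a
query such as the following; this is a sketch using the standard dictionary
views, and should be run as a user with access to the DBA views:

   select tablespace_name, sum(bytes)/1024/1024 free_mb
     from dba_free_space
    where tablespace_name = (select default_tablespace
                               from dba_users
                              where username = 'PERFSTAT')
    group by tablespace_name;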
Data Compatibility
~~~~~~~~~~~~~~~~~~
Prior to release 9.0, the STATS$ENQUEUESTAT table gathered data based on an
X$ table, rather than a V$ view.  In 9.0, the column data within the
underlying X$ table has been considerably improved, and the data
externalised via the V$ENQUEUE_STAT view.  The Statspack upgrade script
spup817.sql migrates the data captured in prior releases into the new
format, in order to avoid losing historical data.  Note, however, that the
column names and the data contained within the columns have changed
considerably between the two releases: the STATS$ENQUEUE_STAT columns in
9.0 capture different data to the columns which existed in the
STATS$ENQUEUESTAT table in the 8.1 Statspack releases.

The column data migration performed by spup817.sql is as follows:

   8.1 STATS$ENQUEUESTAT     9.0 STATS$ENQUEUE_STAT
   ---------------------     ----------------------
   GETS                      TOTAL_REQ#
   WAITS                     TOTAL_WAIT#

To further emphasise the difference, the column definitions appear below:

STATS$ENQUEUESTAT.GETS - 8.1
   Reflected the number of enqueue gets, excluding enqueue conversions.
   This statistic was incremented at the end of a get.

STATS$ENQUEUE_STAT.TOTAL_REQ# - 9.0
   Is the total number of requests for an enqueue plus the number of
   enqueue conversions.  This statistic is incremented at the beginning
   of a get request.

STATS$ENQUEUESTAT.WAITS - 8.1
   Reflected the number of times a session waited for at least 3 seconds
   for an enqueue operation (get or convert).  The statistic was
   incremented at the end of the wait (whether the enqueue was successfully
   gotten or the request timed out).  If a session waited for less than 3
   seconds, this statistic was not incremented.

STATS$ENQUEUE_STAT.TOTAL_WAIT# - 9.0
   Is the total number of times a session waited for any enqueue operation.
   This statistic is incremented at the beginning of the wait.

For these reasons it is not valid to compare Enqueue statistics data
collected pre-9.0 to Enqueue statistics data captured in Oracle9i.

10.2.3. Upgrading the Statspack schema from 8.1.6 to 8.1.7

Follow the general instructions in section 10.2. 'Upgrading an existing
Statspack schema to a newer release' above.  Then, to estimate whether you
have sufficient free space to run this upgrade, execute the following SQL
statement while connected as PERFSTAT in SQL*Plus:

   select 1.3*sum(bytes)/1024/1024 est_space_mb
     from dba_segments
    where segment_name in ('STATS$SQL_SUMMARY','STATS$SQL_SUMMARY_PK');

The est_space_mb column gives an estimate of the required free space, in
megabytes.  The larger the SQL statements in the sql_summary table, the
more space will be released after the upgrade is complete.

To upgrade:
 - ensure you have sufficient free space in the tablespace
 - disable any programs which use Statspack
 - backup the Statspack schema (e.g. using export)
 - run the upgrade by connecting as a user with SYSDBA privilege:
     SQL>  connect / as sysdba
     SQL>  @spup816

Once the upgrade script completes, check the log files (spup816a.lis and
spup816b.lis) for errors.  If errors are evident, determine and rectify the
cause before proceeding.  If no errors are evident and you are upgrading to
9.0, you may proceed with the 8.1.7 to 9.0 upgrade.

10.2.4. Upgrading the Statspack schema from 8.1.6 to 9.2

If you are running the 8.1.6 Statspack and wish to upgrade to the 9.2
Statspack, you must follow the upgrade steps in the following order:
 - 10.2.3. Upgrading the Statspack schema from 8.1.6 to 8.1.7
 - 10.2.2. Upgrading the Statspack schema from 8.1.7 to 9.0
 - 10.2.1. Upgrading the Statspack schema from 9.0 to 9.2
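Each upgrade path above includes a step to back up the Statspack schema
using export before running the upgrade scripts.  A minimal sketch of such
an export, using the supplied spuexp.par parameter file (see section 14);
the password, dump file and log file names are illustrative:

   exp userid=perfstat/<perfstat_password> parfile=spuexp.par file=perfstat_backup.dmp log=perfstat_backup.lis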
10.2.5. Upgrading the Statspack schema from 8.1.6 to 9.0

If you are running the 8.1.6 Statspack and wish to upgrade to the 9.0
Statspack, you must follow the upgrade steps in the following order:
 - 10.2.3. Upgrading the Statspack schema from 8.1.6 to 8.1.7
 - 10.2.2. Upgrading the Statspack schema from 8.1.7 to 9.0

10.2.6. Upgrading the Statspack schema from 8.1.7 to 9.2

If you are running the 8.1.7 Statspack and wish to upgrade to the 9.2
Statspack, you must follow the upgrade steps in the following order:
 - 10.2.2. Upgrading the Statspack schema from 8.1.7 to 9.0
 - 10.2.1. Upgrading the Statspack schema from 9.0 to 9.2

11. Oracle Real Application Clusters specific considerations
-------------------------------------------------------------

11.1. Changing Instance Numbers

The unique identifier for a database instance used by Statspack is the dbid
and the instance_number.  In a Real Application Clusters environment, it is
possible for the instance_number to change between startups (either because
the instance_number initialization parameter is set, or because the
instances are started in a different order).  Because Statspack uses the
instance_number and the dbid to identify an instance's snapshot
preferences, this may inadvertently result in a different set of levels or
thresholds being used when snapshotting an instance.

There are three conditions which must all be met for this to occur:
 - the instance numbers must have switched between startups
 - the DBA must have modified the default Statspack parameters used for at
   least one of the instances
 - the parameters used (e.g. thresholds and snapshot level) must not be the
   same on all instances

Note that the only way the parameters will differ is if they have been
explicitly modified by the DBA after installation, either by saving the
specified values or by using the modify_statspack_parameter procedure.

It is easy to check whether any of the Statspack snapshot parameters differ
between instances by querying the STATS$STATSPACK_PARAMETER table (an
example query appears at the end of section 11).

NOTE: If you have changed the default Statspack parameters, you can avoid
this problem by hard-coding the instance_number initialization parameter
for each of the instances of a Clustered database.  For recommendations and
issues with setting the instance_number initialization parameter, please
see the Real Application Clusters documentation.

11.2. Cluster Specific Report

sprepins.sql can be run to query performance data for any instance which
the PERFSTAT schema contains.  The report will prompt for a dbid,
instance_number, and begin and end snap ids.  This report can be used when
importing data from another instance, or in a Real Application Clusters
environment to report on an instance which you are not connected to.  For
more information on sprepins.sql, see section 4.2.

11.3. Cluster Specific Data

New Cluster-specific data displayed in the Statspack instance report:
 - Page 2 of the Statspack report for a clustered instance displays cluster
   specific derived statistics.
 - Page 3 of a clustered instance Statspack report shows data from
   v$dlm_misc.
 - Cluster-specific data for the Library Cache and Dictionary Cache
 - RAC segment statistics
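As mentioned in section 11.1, whether the snapshot parameters differ
between instances can be checked with a simple query against the parameter
table.  A sketch follows; the snap_level and threshold column names shown
are assumed from the STATS$STATSPACK_PARAMETER table, so adjust the column
list as needed:

   SQL>  connect perfstat/<perfstat_password>
   SQL>  select instance_number, snap_level,
     2          executions_th, disk_reads_th, buffer_gets_th
     3     from stats$statspack_parameter
     4    order by instance_number;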
12. Conflicts and differences compared to UTLBSTAT/UTLESTAT
------------------------------------------------------------

12.1. Running BSTAT/ESTAT in conjunction to Statspack

If you choose to run BSTAT/ESTAT in conjunction with Statspack, do not run
both as the same user, as there is a table name conflict - this table is
stats$waitstat.

12.2. Differences between Statspack and BSTAT/ESTAT

Statspack considers a transaction to finish with either a commit or a
rollback, and so calculates the number of transactions thus:
   'user commits' + 'user rollbacks'

BSTAT/ESTAT considers a transaction to complete with a commit only, and so
assumes that:
   transactions = 'user commits'

For this reason, comparing per transaction statistics between Statspack and
BSTAT/ESTAT may result in significantly different per transaction ratios.

13. Removing the package
-------------------------

To deinstall the package, connect as a user with SYSDBA privilege and run
the following script from SQL*Plus:  spdrop

e.g.
   SQL>  connect / as sysdba
   SQL>  @spdrop

This script actually calls 2 other scripts:
   1. spdtab  ->  Drops tables and public synonyms
   2. spdusr  ->  Drops the user

Check each of the two output files produced (spdtab.lis, spdusr.lis) to
ensure the package was completely deinstalled.

14. Supplied Scripts Overview
------------------------------

Installation

   Must be run as a user with SYSDBA privilege
     spcreate.sql ->  Creates the entire Statspack environment (calls
                      spcusr.sql, spctab.sql, spcpkg.sql)
     spdrop.sql   ->  Drops the entire Statspack environment (calls
                      spdtab.sql, spdusr.sql)

   Are run as a user with SYSDBA privilege by the calling scripts (above)
     spdtab.sql   ->  Drops Statspack tables
     spdusr.sql   ->  Drops the Statspack user (PERFSTAT)

   Are run as PERFSTAT by the calling scripts (above)
     spcusr.sql   ->  Creates the Statspack user (PERFSTAT)
     spctab.sql   ->  Creates Statspack tables
     spcpkg.sql   ->  Creates the Statspack package

Reporting and Automation

   Must be run as PERFSTAT
     spreport.sql ->  Generates a Statspack report
     sprepins.sql ->  Generates a Statspack report for the database and
                      instance specified
     sprepsql.sql ->  Generates a Statspack SQL report for the specific
                      SQL hash value specified
     spauto.sql   ->  Automates Statspack statistics collection
                      (using dbms_job)

Upgrading

   Must be run as SYSDBA
     spup90.sql   ->  Converts data from the 9.0 schema to the newer 9.2
                      schema.  Backup the existing schema before running
                      the upgrade.  If upgrading from Statspack 8.1.6,
                      spup816.sql must be run first, then spup817.sql,
                      then spup90.sql
     spup817.sql  ->  Converts data from the 8.1.7 schema to the newer 9.0
                      schema.  Backup the existing schema before running
                      the upgrade.  If upgrading from Statspack 8.1.6,
                      spup816.sql must be run first, then spup817.sql
     spup816.sql  ->  Converts data from the 8.1.6 schema to the 8.1.7
                      schema.  Backup the existing schema before running
                      the upgrade.

Performance Data Maintenance

   Must be run as PERFSTAT
     sppurge.sql  ->  Purges a limited range of Snapshot Ids for a given
                      database instance.
     sptrunc.sql  ->  Truncates all performance data in the Statspack
                      tables.
                      WARNING - Do not use unless you wish to remove all
                      data in the schema you are using.  You may choose to
                      export the data as a backup before using this script.
     spuexp.par   ->  An export parameter file supplied for exporting the
                      whole PERFSTAT user.

Documentation

   Should be read by the DBA running the scripts
     spdoc.txt    ->  This file contains instructions and documentation on
                      the STATSPACK package.

15. Limitations and Modifications
----------------------------------

15.1. Limitations

As the Statspack schema is updated to reflect the features in the latest
Oracle releases, the schema may change; backward compatibility is not
guaranteed.
15.2. Modifications

All Statspack code is Oracle proprietary and must not be modified.  Any
modifications made to Statspack software will render the code and the data
captured thereafter unsupported; unsupported changes may result in errors
in data capture or reporting.  Instead, please request enhancements.

-------------------------------------------------------------------------