Oracle FAQ Your Portal to the Oracle Knowledge Grid

Perf and Tuning

From: STOCKHOLDING CORPORATION OF INDIA LTD. <hemant_at_SHAKTI.NCST.ERNET.IN>
Date: Wed, 31 Jan 1996 21:39:45 +0530
Message-Id: <9601311647.AA18439@alice.jcc.com>


These are a set of notes which I have written on Oracle Performance and Tuning.
Feedback (and corrections, if necessary) from other DBAs is invited.

Hemant K Chitale
Senior Manager, Systems
Automation Dept
(Systems Administrator and DBA)
Stock Holding Corporation of India Ltd


DBA Group Login ID                    | hemant_at_shakti.ncst.ernet.in
                                      | Phone : +91-22-2882749/2045481/2045483
Stock Holding Corporation of India Ltd| Fax : +91-22-2027211/2027202

Notes on Tuning of Oracle RDBMS


These notes *exclude* rules for writing SQL Statements.

  1. Database Block Buffers :

     Definition : This is the number of blocks allocated in the SGA (residing in memory) for Data Blocks. It is specified as : DB_BLOCK_BUFFERS parameter in the init.ora file ; specification in number of database blocks ; size of each block defined by DB_BLOCK_SIZE.

     Usage : Processes first look for data in the Database Block Buffers. If the data is not found there, it is read from disk into a free buffer ; if no free buffer is available, the DBWR is signalled to write out modified blocks and release space. DBWR also writes back modified blocks at every Checkpoint and Log Switch.

     Performance : The Database Buffer Hit Ratio is the most relevant statistic. The Hit Ratio is computed as 1 - (physical reads / (db block gets + consistent gets)). Where the Hit Ratio is consistently low (below 90%), DB_BLOCK_BUFFERS is to be increased. The maximum value is 65536.

     SQL :     SELECT 1 - ( sum(decode(name,'physical reads',value,0)) /
                           ( sum(decode(name,'db block gets',value,0)) +
                             sum(decode(name,'consistent gets',value,0)) ) )
               from V$SYSSTAT


2. Log Buffers :

     Definition : This is the size of memory allocated to Redo Log Buffering in memory. It is specified as : LOG_BUFFER parameter in the init.ora file ; specification in bytes, not supporting "M" or "K" suffix notation.

     Usage : User Processes continuously write to the Log Buffer. The LGWR writes from the Buffer to the Redo Log files. If LGWR does not write to the disk files fast enough, user processes have to wait before they can acquire space.

     Perf Statistics : There are three Statistics to use :

  a. "redo log space requests" shows the number of times a user process waits for space in the Buffer. The value should be as close to 0 as possible. If the value is high, increase the Log Buffer. Increases in the Log Buffer should be in small increments.
     SQL :    SELECT name, value
               from V$SYSSTAT
               where name = 'redo log space requests'

          b. MISSES and IMMEDIATE_MISSES of "redo allocation".
The ratio of MISSES to GETS should not exceed 1%. The ratio of IMMEDIATE_MISSES to (IMMEDIATE_GETS + IMMEDIATE_MISSES) should not exceed 1%. These ratios show success rates in acquiring latches for allocation of space. If either ratio is high, reduce the maximum size of a Redo Entry that can be copied on a redo allocation latch. This is defined by LOG_SMALL_ENTRY_MAX_SIZE in the init.ora. Refer to the Installation and User Guide for the particular OS for the minimum, maximum and default values.
     SQL   :   SELECT   name,   gets,   misses,   immediate_gets,
               immediate_misses
               from V$LATCH x, V$LATCHNAME y
               where y.name = 'redo allocation'
               and  y.latch# = x.latch#

          c. MISSES and IMMEDIATE_MISSES of "redo copy".
The ratio of MISSES to GETS should not exceed 1%. The ratio of IMMEDIATE_MISSES to (IMMEDIATE_GETS + IMMEDIATE_MISSES) should not exceed 1%. These ratios show success rates in acquiring latches for copying into the buffer where the entry is too large to copy on the latch (as above).
That is, if an entry exceeds LOG_SMALL_ENTRY_MAX_SIZE, the user process needs a redo copy latch ; if it is smaller, it needs a redo allocation latch (as above). The number of redo copy latches is defined by LOG_SIMULTANEOUS_COPIES which, by default, is equal to CPU_COUNT. {The RDBMS should automatically set CPU_COUNT to the number of processors on a multi-processor machine ; therefore, CPU_COUNT must not be set in the init.ora.} If the Miss ratio is high, increase LOG_SIMULTANEOUS_COPIES up to a maximum of twice the number of processors on the machine. LOG_ENTRY_PREBUILD_THRESHOLD (default 0) can be increased to permit user processes to "pre-build" entries before attempting to copy into the Log Buffer.
     SQL   :   SELECT   name,   gets,   misses,   immediate_gets,
               immediate_misses
                from V$LATCH x, V$LATCHNAME y
               where y.name = 'redo copy'
               and y.latch# = x.latch#
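
     The ratios in (b) and (c) above can also be computed directly in one query rather than by hand ; a sketch (the column aliases are illustrative, and DECODE guards against division by zero) :

```sql
SELECT n.name,
       l.misses / decode(l.gets, 0, 1, l.gets) "Miss Ratio",
       l.immediate_misses /
         decode(l.immediate_gets + l.immediate_misses, 0, 1,
                l.immediate_gets + l.immediate_misses) "Imm Miss Ratio"
from V$LATCH l, V$LATCHNAME n
where n.name in ('redo allocation', 'redo copy')
and n.latch# = l.latch#
```

     Either ratio exceeding 0.01 suggests latch contention as described above.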


-----------------------------------------------------------------
3. Locks :

     Definition : Oracle supports Table Level Locking and Row Level Locking (only with the Transaction Processing Option in V6 and the Procedural Option in V7).
There are many types of Locks.
  Row Share
  Row Exclusive
  Share
  Share Row Exclusive
  Exclusive
  None/Null
Row Level Locking is enabled by :
SERIALIZABLE=FALSE (V6 and V7) and
ROW_LOCKING=ALWAYS (V7) in the init.ora parameter file.

     Usage : A plain SELECT acquires no row locks (readers do not block writers), a SELECT ... FOR UPDATE acquires row locks, and an UPDATE acquires an EXCLUSIVE lock on the rows it modifies. Generally, Locking is designed such that there is no blocking, i.e., readers waiting for writers and vice-versa. However, there can be situations, caused by faults in the application design or usage, where blocking and, finally, DEADLOCKs occur. When Oracle detects a DEADLOCK, it writes the DEADLOCK information with the preceding SQL statements into a Trace File (only in V7). Alternatively, users may get the error message : "Unable to reserve record for update/delete".

     Performance : There are 2 scripts provided by Oracle (V7) :
 CATBLOCK.sql is to be run as SYS. This creates the DBA lock views.
 UTLLOCKT.sql can be run by any DBA. It shows, in a hierarchical manner, processes waiting to acquire locks and the processes they are waiting for.
It is more important, however, to design the Application and its Coding properly.
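
     Where those scripts are not installed, a quick check for blockers and waiters can be made directly against V$LOCK ; a minimal sketch, assuming the BLOCK column is present in this release (a row with BLOCK = 1 holds a lock that another session, showing a non-zero REQUEST, is waiting for) :

```sql
SELECT sid, type, id1, id2, lmode, request, block
from V$LOCK
where block = 1 or request > 0
```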


4. Object Access :

     Definition : Every object accessed has to be loaded into the cache.

     Performance : The "popularity" or frequency of access of an object can be identified.
The SQL Statement is :

col owner format a12
col name format a25 trunc
col tablespace_name format a10 hea 'TBS' trunc
col type format a10 trunc
col l format 9999 hea 'Loads'
col e format 99,999 hea 'Exec'
spool object_usage
select v.owner, v.name, v.type, nvl(loads,0) l, nvl(executions,0) e
from v$db_object_cache v
where v.executions > 25
order by executions desc
/
spool off


5. Checkpointing :

     Definition : A Checkpoint occurs at pre-defined intervals whereby modified blocks are written back to the data files on disk by the DBWR process. A Checkpoint thus reduces the time required for an Instance Recovery should the running instance fail, as only data modified since the last Checkpoint needs to be re-applied from the Redo Logs. (Rollbacks of uncommitted transactions will, in any case, be done whether they are Checkpointed or not.) Also, all Data File Headers are updated. A Checkpoint also occurs when a Tablespace is taken OFFLINE or when an Online Backup of the Tablespace is started, and at SHUTDOWN (NORMAL|IMMEDIATE). The frequency of Checkpointing is defined by :
 i) LOG_CHECKPOINT_INTERVAL parameter in the init.ora ; defined in terms of OS blocks. A Checkpoint also occurs at every Log Switch, so the size of the Redo Log File sets an upper limit.
 ii) LOG_CHECKPOINT_TIMEOUT (only in V7) parameter in the init.ora ; defined in terms of seconds (default 0 -- disabled). This permits time-based Checkpointing.
CHECKPOINT_PROCESS=TRUE in the init.ora can be enabled to allow a separate process (CKPT) to do the checkpointing work. Else, the LGWR does Checkpointing by default.

     Usage : At a Checkpoint, the LGWR or CKPT updates all Data File Headers while DBWR writes modified blocks (Table, Index, Rollback Segment) to the Data Files on disk.

     Performance : Increasing the frequency of Checkpointing has two effects :

   i) Reduces the time required for Instance Recovery.
  ii) Increases I/O overhead, as the DBWR writes more frequently.
It is advisable to keep the number of Checkpoints to the minimum (i.e., the number of Log Switches). This can be done by setting LOG_CHECKPOINT_INTERVAL to a value larger than the size of the Redo Log Files.
If LGWR is too busy, it is advisable to enable the CHECKPOINT_PROCESS. This causes one more Oracle Process to run.
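
     As a sketch, the settings described above might appear in the init.ora as follows (the interval value is purely illustrative and should simply exceed the Redo Log file size in OS blocks) :

```
LOG_CHECKPOINT_INTERVAL = 1000000
LOG_CHECKPOINT_TIMEOUT = 0          # V7 only ; 0 disables time-based checkpoints
CHECKPOINT_PROCESS = TRUE           # enable CKPT to relieve LGWR
```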

     SQL :     SELECT name, value
               from V$SYSSTAT where name = 'dbwr checkpoints'


-----------------------------------------------------------------
6. Library Cache :

     Definition : The Library Cache consists of the SQL Statements in the Shared Pool (only in V7). The size of the Shared Pool is specified as :
SHARED_POOL_SIZE parameter in the init.ora ; specified in bytes ; not supporting "K" or "M" suffix notation.

     Usage : The Shared Pool is used to save parsed copies of SQL statements so that the parsing phase is avoided and process execution is speeded up.

     Performance Statistics : The cache hit ratio is relevant. This is available from V$LIBRARYCACHE. PINS shows the number of times an item was executed. RELOADS shows the number of library cache misses on execution. The ratio of RELOADS to PINS indicates the proportion of re-parsing.
The SHARED_POOL_SIZE can be increased to allocate more space for SQL in the SGA. CURSOR_SPACE_FOR_TIME can be set to TRUE (default FALSE) in the init.ora to cause implicit pinning of SQL statements for all open cursors. If, however, the number of execution misses (RELOADS) is high and the Shared Pool is full, a user process is returned an error if the RDBMS is unable to parse and load its SQL statement. Therefore, if SHARED_POOL_SIZE is not high, CURSOR_SPACE_FOR_TIME should be left at FALSE. The Application Developers and Users must be asked to use identical SQL Statements. A CASE difference (e.g. "Select * from EMP" and "Select * from emp") or even an additional space-character results in re-parsing. Bind Variable Names must be the same. SQL statements used must be optimised so that users re-use optimised statements (else, a number of users using the same unoptimised SELECT statement will cause a degradation).

     SQL :     SELECT sum(pins), sum(reloads)
               from V$LIBRARYCACHE
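
     The same figures can be returned as a single ratio ; a sketch (the column alias is illustrative) :

```sql
SELECT sum(reloads) / sum(pins) "Reload Ratio"
from V$LIBRARYCACHE
```

     If this ratio is significant, re-parsing is occurring as described above.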



-----------------------------------------------------------------
7. Data Dictionary Cache :

     Definition : The Data Dictionary Cache stores information relating to Database Objects.
Each parameter (called "DC_...") is specifiable in V6. In V7, this is a part of the SHARED_POOL_SIZE.

     Usage : Various different parameters (in V$ROWCACHE) are used for sizing the cache. Although they are separately tunable in V6, V7 does not allow user-tuning of these parameters and allocates space in the Shared Pool by its own algorithm.

     Performance Statistics : Here, too, the hit ratio is relevant. The hit ratio is determined by (1 - getmisses/gets). If the ratio is low, the SHARED_POOL_SIZE is to be increased. ROW_CACHE_CURSORS may also be increased.

     SQL :     SELECT 1 - (sum(getmisses) / sum(gets))
               from V$ROWCACHE


-----------------------------------------------------------------
8. Rollback Segments :

     Definition : Rollback Segments are used by the RDBMS when :
 i) They are PUBLIC Rollback Segments and are ONLINE.
 ii) They are PRIVATE Rollback Segments and are specified in the ROLLBACK_SEGMENTS parameter in the init.ora.
 iii) The number of Rollback Segments specified above is less than the value of TRANSACTIONS/TRANSACTIONS_PER_ROLLBACK_SEGMENT in the init.ora (TRANSACTIONS is derived as 1.1 times PROCESSES and TRANSACTIONS_PER_ROLLBACK_SEGMENT defaults to 30). This ratio is the number of Rollback Segments the RDBMS tries to acquire automatically ; it will not OPEN the instance if it is unable to do so.
MAX_ROLLBACK_SEGMENTS in the init.ora defines the maximum number of Rollback Segments that can be kept ONLINE. (The default value is 30.)
GC_ROLLBACK_SEGMENTS in both init.ora's of a Parallel Server Database defines the number of Rollback Segments that can be ONLINE across both instances. (The value must be the same in both init.ora's.)

     Usage : Rollback Segments can be created online. Moreover, their Status (ONLINE, OFFLINE) also can be changed when the Database is OPEN (only in V7). PUBLIC Rollback Segments are available for all Instances of a Parallel Server Database while PRIVATE Rollback Segments belong to the Instance they are created in. PUBLIC Rollback Segments are *not* recommended.

     Performance : There are three issues :

  1. Contention for Rollback Segments is to be reduced to the minimum. Note here that TRANSACTIONS_PER_ROLLBACK_SEGMENT is *not* relevant : it is used only for determining the number of Rollback Segments to bring ONLINE when OPENing the Database. Contention can be determined from V$WAITSTAT. Where Contention is high, the number of Segments is to be increased. A guideline to determine the number of Rollback Segments to keep ONLINE is "n/4, but not more than 50", where "n" is the number of Concurrent Transactions expected. Additional Rollback Segments can be created and brought online when the database is OPEN. They may then be included in the ROLLBACK_SEGMENTS parameter in the init.ora for subsequent database startups.
  2. Sizing of Rollback Segments and their Extents is necessary to ensure that Rollback Segments are also cached. Rollback Segments are cached in DB_BLOCK_BUFFERS in the same manner as Tables and Indexes. In general, Rollback Segments should be small in environments of small, short-running transactions and large in cases of long-running transactions. Note that separately sized Rollback Segments can be created. Thus long transactions can explicitly use large Rollback Segments (with "SET TRANSACTION USE ROLLBACK SEGMENT <segment_name>" as the first line executed in a transaction or .SQL file). The OPTIMAL parameter should be carefully specified (it is alterable at any time online, should the need so arise). Setting too low a value for OPTIMAL can result in "snapshot too old" error messages. The statistics for Shrinks, Wraps and Average Size are available in MONITOR ROLLBACK in SQL*DBA. Every Shrink, Wrap and Extend involves a Recursive SQL overhead. The following rules determine OPTIMAL sizing :
      i) if "Shrinks" is Low and "Average Sizes, Shrunk" is also Low and "Average Sizes, Active" is close to "Sizes, Optimal", the OPTIMAL setting is correct. Else, OPTIMAL is too large (and not enough Shrinks are being performed).
     ii) if "Shrinks" is Low and "Average Sizes, Shrunk" is High, OPTIMAL is correctly tuned.
    iii) if "Shrinks" is High and "Average Sizes, Shrunk" is Low, OPTIMAL is too small (there are too many Shrinks).
     iv) if "Shrinks" is High and "Average Sizes, Shrunk" is High, OPTIMAL is to be increased.
  3. Rollback Segments should be placed in a separate Tablespace. This separates Rollback I/O from Table and Index I/O. Also, it facilitates Tablespace administration whereby a Data or Index Tablespace can be taken OFFLINE, when needed, without worrying about Rollback Segments (a Tablespace cannot be taken OFFLINE if there is any ONLINE Rollback Segment in it). (This, arguably, is less relevant in V7, where Rollback Segments can be taken OFFLINE when the database is OPEN.) Also, allocation and de-allocation of extents for Rollback Segments does not, then, conflict with Tables and Indexes (and cause fragmentation amongst them). This Tablespace may consist of a number of data files spread across disks (the trade-off is that a disk error or disk failure can cause a Database Crash because the file goes offline and ONLINE Rollback Segments become unavailable, unless the files are mirrored).
     SQL :  For Waits (Contention) in Rollback Segments :
               SELECT class, count
               from V$WAITSTAT
               where class in ( 'system undo header' ,
               'system undo block', 'undo header', 'undo block')

            For Rollback Segment Activity :
               SELECT a.name , b.xacts , b.waits , b.gets ,
               b.shrinks , b.wraps , b.extends , b.rssize
               from V$ROLLNAME a, V$ROLLSTAT b
               where a.usn = b.usn
               and status = 'ONLINE'
               order by a.name

            Also,  use  the MONITOR ROLLBACK SEGMENTS  option  in
SQL*DBA.
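
            As a sketch of points 1 and 2 above, an additional, larger Rollback Segment might be created and brought online while the database is OPEN, and then used explicitly by a long transaction (all names and sizes here are purely illustrative) :

```sql
CREATE ROLLBACK SEGMENT rbs_large
  TABLESPACE rbs_ts
  STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 2 OPTIMAL 10M);

ALTER ROLLBACK SEGMENT rbs_large ONLINE;

SET TRANSACTION USE ROLLBACK SEGMENT rbs_large;
```

            The segment should then be added to the ROLLBACK_SEGMENTS parameter in the init.ora so that it is acquired at subsequent startups.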

9. Sorts

     Definition : The amount of memory acquired by an Oracle Process for the purpose of a Sort. The Retained Size is the size to which Oracle reduces the allocated sort area if the data is not expected to be referenced again within the same transaction. Overflows beyond the specified size go into the user's TEMPORARY TABLESPACE.
This is defined by :
SORT_AREA_SIZE parameter in the init.ora ; specified in bytes ; must be defined as a multiple of the memory page size.
SORT_AREA_RETAINED_SIZE parameter in the init.ora ; specified in bytes ; default equal to the SORT_AREA_SIZE.
Default STORAGE Parameters of the user's TEMPORARY TABLESPACE.

     Usage : When any transaction needs a Sort Operation, the Oracle Shadow Process acquires Memory equal to the SORT_AREA_SIZE. If this is insufficient, further sorting is done in the user's TEMPORARY TABLESPACE. Once a Sort is over, the size is shrunk to SORT_AREA_RETAINED_SIZE so as to release memory. A Sort is required for the following SQLs :

   i) CREATE INDEX
  ii) SELECT statement with ORDER BY
Temporary Tablespace is also required for :

   i) SELECT statement with GROUP BY

     Performance : There are three considerations :

  1. SORT_AREA_SIZE should be large enough for most Sorts. The number of Sorts on Disk can be verified after changing the SORT_AREA_SIZE. e.g. if there are no Sorts on Disk, you may reduce the SORT_AREA_SIZE and monitor Sorts on Disk. If it results in some Disk Sorts, the SORT_AREA_SIZE has to be increased again. The trade-off is that increasing the SORT_AREA_SIZE increases the total memory requirements.
  2. SORT_AREA_RETAINED_SIZE can be set to 0 so that Oracle Processes de-allocate memory more frequently.
  3. All Users of a particular class can be given the same TEMPORARY TABLESPACE or different TEMPORARY TABLESPACEs, depending on an analysis of the requirements. For example, Application Users can have TEMP_1 as their TEMPORARY TABLESPACE while SQL Users and Users doing administration/data-fixes can use TEMP_2. These Tablespaces should, preferably, have no Objects. Temporary Extents are automatically coalesced by the SMON process periodically. The Storage Parameters (INITIAL EXTENT, NEXT EXTENT and PCTINCREASE) must be set properly. For example, with a SORT_AREA_SIZE of 1M, INITIAL and NEXT should also be 1M and PCTINCREASE 0.
     SQL :     SELECT name, value
               from V$SYSSTAT
               where name in ('sorts (memory)','sorts (disk)')
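
     The storage advice in point 3 might be applied to a TEMPORARY TABLESPACE as follows (the tablespace name and sizes are illustrative, matching a 1M SORT_AREA_SIZE) :

```sql
ALTER TABLESPACE temp_1
  DEFAULT STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0);
```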


-----------------------------------------------------------------
10. Private SQL Area :

     Definition : Private SQL Areas are used for Parsing. The relevant parameters are :
OPEN_CURSORS parameter in the init.ora
HOLD_CURSOR, RELEASE_CURSOR and MAXOPENCURSORS in Precompiling of Programs.
Buffering parameters in SQL*Forms.

     Usage : A Cursor is required for every SQL statement parsed.

     Performance : The Application Code is to be run with the Trace facility enabled. The Trace file can then be read to identify the number of re-parses and re-executes and cursors opened.


11. SGA Size and Allocation :

     Definition : The System Global Area is the sum of the Database Block Buffers (DB_BLOCK_BUFFERS * DB_BLOCK_SIZE), LOG_BUFFER, SHARED_POOL_SIZE and some fixed and variable overheads.

     Usage : This resides in memory. It is a Shared Memory Segment.

     Performance : The SGA should fit into a single Shared Memory Segment. If SHMMAX (Unix Kernel Parameter) is less than the total SGA, the SGA is fragmented across Shared Memory Segments. If possible, increase the value of SHMMAX and rebuild the Unix Kernel.

     SQL :     SELECT * from
               V$SGA

     Unix :    ipcs -m -b
               shows  the number and sizes of each Shared  Memory
Segment.
12. Free Lists :

     Definition : Every Table has a value for FREELISTS which is picked up from the default value for the instance. This specifies the number of linked lists maintained for the free blocks within available extents for the Table.

     Usage : Every INSERT statement requires reference to a FREELIST to identify an available block.

     Performance : If FREELISTS is less than the number of concurrent INSERTs into the Table, the additional INSERTs have to wait for those with access to the FREELISTs to complete. Free List contention can be monitored by identifying the ratio of waits for free list to the total number of block requests. However, the V$ views give global statistics (i.e., for ALL tables being accessed).
Since this value is defined for a Table only when the Table is created, changing it requires dropping and recreating the Table.

     SQL :     SELECT class, count
               from V$WAITSTAT
               where class = 'free list'

               SELECT sum(value)
               from V$SYSSTAT
               where name in ('db block gets','consistent gets')
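
     Since FREELISTS cannot be altered, a Table suffering free list contention has to be rebuilt ; one possible sketch, using CREATE TABLE ... AS SELECT (the names and values are purely illustrative) :

```sql
CREATE TABLE orders_new
  PCTFREE 10 PCTUSED 40
  STORAGE (INITIAL 10M NEXT 10M PCTINCREASE 0 FREELISTS 4)
AS SELECT * from orders;
```

     The old Table's indexes, constraints and grants would then have to be re-created before dropping it and renaming the new Table.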


-----------------------------------------------------------------
13. Archiving of Redo Logs :

     Definition : The ARCH process can be enabled for Automatic Archiving with the following :
LOG_ARCHIVE_START=TRUE and LOG_ARCHIVE_DEST=<destination_directory> in the init.ora. LOG_ARCHIVE_FORMAT specifies the name format for the Archived Log file created in Unix.
In SQL*DBA, with the Database MOUNTED but not OPEN : ALTER DATABASE ARCHIVELOG

     Usage : As a Redo Log file is filled, the ARCH process archives it out to a Unix file, while the LGWR continues on to the next file. If the LGWR finishes using all the subsequent files and returns to the file being archived, it has to wait till the archiving is completed. In such a situation, *all* database operations are suspended and a message is written into the alert_log file.

     Performance : Generally, the performance of ARCH is not an issue. It is recommended to separate Redo Log files across disks to avoid contention between LGWR and ARCH. Moreover, the Log Archive Destination directory should be on a dedicated disk. Another recommendation is to add Redo Log files so that there is enough time-lag between the ARCH archiving a file and the LGWR wanting to re-use it.
If ARCH works very quickly, overall system performance can be reduced while it runs, taking CPU time. If it runs very slowly, a bottleneck can be created if a file is not archived and the LGWR needs to overwrite the file. Therefore, tuning can be done both ways, viz. cause the ARCH to run either as slowly as possible without being a bottleneck, or as quickly as possible without reducing system performance.
Setting LOG_ARCHIVE_BUFFERS to 1 and LOG_ARCHIVE_BUFFER_SIZE to the maximum (as per the Installation and User Guide for the OS) makes ARCH work slowly.
To increase the speed of ARCH, LOG_ARCHIVE_BUFFERS can be set to 2, 3 or 4 and LOG_ARCHIVE_BUFFER_SIZE to a lower value.

     SQL : In SQL*DBA : ARCHIVE LOG LIST


14. Storage Parameters for Objects :

     Definition : The STORAGE clause of a CREATE statement defines the various storage parameters :
INITIAL is the size of the initial extent to be allocated.
NEXT is the size of the next extent to be allocated.
MINEXTENTS is the minimum number of extents to be allocated when creating the Object itself (the defaults are 1 for Tables and Indexes and 2 for Rollback Segments).
MAXEXTENTS is the maximum number of extents the Object's Segment can grow to (on Unix the default values are 121 with a 2K DB_BLOCK_SIZE and 249 with a 4K DB_BLOCK_SIZE).
PCTINCREASE is the percentage increment in the size of every subsequent extent after the second one (the default is 50).
Other equally relevant parameters before the STORAGE clause in the CREATE statement are :
PCTFREE, which defines the percentage of space in each Block to be kept free for UPDATEs that increase the size(s) of the Rows within the Block (the default is 10). INSERTs are not allowed into the Block once the free space falls below PCTFREE.
PCTUSED, which is the lower threshold : once a Block has filled to its PCTFREE limit, fresh INSERTs are allowed into it again only after DELETEs bring the used space below PCTUSED (the default is 40).
INITRANS defines the initial number of Transaction Entries to be allowed within the Block (the default is 1 for Tables and 2 for Clusters and Indexes). (Every Transaction working on row(s) in the Block makes an entry specifying the type(s) of lock(s) it is acquiring.)
MAXTRANS defines the maximum number of such entries to be allowed (the default is a function of the Block size). Where this limit is reached, a fresh transaction has to wait till an existing one is completed.

     Usage and Performance : All these parameters are defined at CREATE time. All (except INITIAL) can be modified later with an ALTER <Object> statement. However, if altered, the new values are effective only for Extents and Blocks created/allocated subsequent to the alteration.

     INITIAL and NEXT sizes of All Objects in a Tablespace should be a multiple of the DB_BLOCK_SIZE and the lowest value within the Tablespace and they should all be a factor of the Tablespace size. This is to avoid fragmentation of the Tablespace. Also, the DEFAULT STORAGE parameters for the Tablespace should also follow this rule. Note that there are two overheads :  i) Every Object Segment takes an additional Block.  ii) Every Extent takes an additional Block.

     PCTINCREASE should generally be set to 0 (explicitly) to avoid fragmentation of the Tablespace into odd-sized extents.

     PCTFREE is to be low for Tables with very few UPDATEs. It results in :

   i) less room for UPDATEs
  ii) allowing INSERTs to fill the Block more completely
 iii) saving in total space utilised
  iv) more CPU overhead as the Blocks are more frequently reorganised
   v) increase in processing cost if UPDATEs cause Row-Spanning or Row-Chaining.
Indexes need a low PCTFREE (for example, 5 or lower).

     PCTFREE is to be high for Tables with many UPDATEs that result in the size of a row increasing. It results in :

   i) more room for UPDATEs
  ii) fewer rows per Block
 iii) increase in total space needed
  iv) improvement in performance with less Row-Spanning or Row-Chaining
     A low PCTUSED results in :

   i) keeping blocks less full
  ii) reducing processing cost during UPDATE and DELETE statements as the Block is less likely to be moved to the Free List
 iii) increasing the unused space in the database

     A high PCTUSED results in :

  i) keeping blocks more full
 ii) increasing processing cost
iii) improving space efficiency
     PCTFREE + PCTUSED cannot exceed 100. If the sum is less than 100, the ideal compromise of space versus I/O performance is a sum that differs from 100 by the percentage of the available Block space (i.e., Block size minus Overhead) that the Average Row occupies. If the sum equals 100, a maximum of PCTFREE is kept free and processing costs are high. The smaller the difference between 100 and the sum, the more efficient space usage is, at the cost of performance.
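
     The average row size needed for this calculation can be read from the data dictionary once the Table has been ANALYZEd (V7) ; a sketch (the table name is illustrative) :

```sql
SELECT table_name, avg_row_len
from DBA_TABLES
where table_name = 'EMP'
```

     For example, with a 2048-byte Block, roughly 100 bytes of overhead and an average row of about 195 bytes, the row occupies about 10% of the usable Block, suggesting PCTFREE + PCTUSED of about 90.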

     A High UPDATEs Table should have a high PCTFREE (say 20) and a low (or default 40) PCTUSED. A High INSERT and DELETE with Low UPDATE Table should have a low PCTFREE (say 5) and a high PCTUSED (say 60). A Large Table should have a low PCTFREE (say 5) and a high PCTUSED (say 90) to improve space utilisation.

     A Table which has few rows in each block can have a low MAXTRANS. Where the number of rows is high, the probability of multiple concurrent transactions increases and MAXTRANS may be increased.


15. Optimiser Modes :

     Definition : V6 had only Rule-Based Optimisation of SQL SELECTs while V7 permits both Cost-Based and Rule-Based. OPTIMIZER_MODE (values "RULE", "COST", "CHOOSE") determines the mode.

     Usage : Cost-based Optimisation relies on Table and Index Statistics from the ANALYZE command.

     Performance : As a general rule, it is still preferable to use RULE and write all SQL Statements as per the known rules.

     SQL : ANALYZE TABLE|INDEX <name> ESTIMATE|COMPUTE STATISTICS
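
     For example (the table and index names are illustrative ; the SAMPLE clause applies only to ESTIMATE) :

```sql
ANALYZE TABLE emp COMPUTE STATISTICS;
ANALYZE TABLE emp ESTIMATE STATISTICS SAMPLE 20 PERCENT;
ANALYZE INDEX emp_pk COMPUTE STATISTICS;
```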


16. User Profiles :

     Definition : Different Profiles, for limitations on utilisation of resources, can be set up. RESOURCE_LIMIT=TRUE enables the enforcement of Profile limits.
     Usage : Profiles can have limits for :
          SESSIONS_PER_USER
          CPU_PER_SESSION  (Time in hundredths of a second)
          CPU_PER_CALL  (Time in hundredths of a second)
          CONNECT_TIME (in minutes)
          IDLE_TIME (in minutes)
          LOGICAL_READS_PER_SESSION
          LOGICAL_READS_PER_CALL

The default Profile and the default values for any newly-created Profile are UNLIMITED.

     Performance : A typical application of Resource Limits is for SQL*NET connects where users run random SELECT statements. All the Oracle User_names (e.g. "usr") can be limited by a Profile defined for them.

     SQL :     CREATE PROFILE <Profile_Name>
               ALTER PROFILE
               Alternatively, the above can be done from SQL*DBA.
               ALTER USER <user_names> PROFILE <Profile_Name>
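
     A sketch of such a Profile for ad-hoc SQL*Net users (the profile name and limit values are purely illustrative) :

```sql
CREATE PROFILE adhoc_user LIMIT
  SESSIONS_PER_USER 2
  CPU_PER_CALL 3000
  IDLE_TIME 30
  LOGICAL_READS_PER_CALL 100000;

ALTER USER usr PROFILE adhoc_user;
```

     The limits take effect only with RESOURCE_LIMIT=TRUE, as noted above.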


=================================================================

Annexure 1 :



Other Parameters for Tuning :
  1. CLEANUP_ROLLBACK_ENTRIES Default Value : 20 This defines the number of row-entries which PMON would clean up in the case of an Aborted Transaction. The default value is usually sufficient. But in cases where large updates are likely to be run and there is a probability that they may get aborted or killed, it is preferable to increase this value.
  2. DB_BLOCK_CHECKPOINT_BATCH Default Value : Derived to be DB_BLOCK_WRITE_BATCH / 4 This defines the maximum number of Blocks that DBWR will write in one batch for a Checkpoint. Setting a low value (relative to the DB_BLOCK_WRITE_BATCH) prevents long duration writes and allows other modified Blocks also to be written during a Checkpoint. Setting a higher value allows Checkpoints to be completed faster. Thus if Checkpoints are taking long to complete, this value may be increased. A value larger than DB_BLOCK_WRITE_BATCH is ignored.
  3. DB_BLOCK_WRITE_BATCH Default Value : OS Dependent This defines the number of Dirty Buffers DBWR will normally write back to disk. This write is signalled in the following situations :
      i) When a User Process moves a Buffer it has modified to the Dirty list and finds that this list has reached a threshold which is half the value of DB_BLOCK_WRITE_BATCH.
     ii) When a User Process scans the LRU list for a Free Buffer and the number of entries scanned exceeds DB_BLOCK_MAX_SCAN_CNT.
    iii) When a DBWR time-out occurs (every three seconds).
     iv) When a Checkpoint occurs.
  4. DB_FILE_MULTIBLOCK_READ_COUNT Default Value : OS Dependent This determines the number of Blocks read into the Buffers in a single I/O when a Full Table Scan is necessary. This is particularly relevant when Cost-based Optimisation is selected. In this environment, the Optimiser knows the number of Rows likely to be required to be read (from the output of the ANALYZE command) and uses DB_FILE_MULTIBLOCK_READ_COUNT as a benchmark to decide whether to do a Full Table Scan. If this value is high, Full Table Scans would need fewer I/Os and therefore would be preferred.
  5. DB_FILE_SIMULTANEOUS_WRITES Default Value : 4 Range : 1 to 24 This is the number of simultaneous writes for each Database File when written by DBWR. The value should be set to 1 on platforms not supporting multiple writes to a single device. Check the Oracle Installation and User Guide for the OS.
  6. DB_WRITERS Default Value : 1 This is the number of DBWR processes to be forked. On platforms using Asynchronous I/O, it should be set to 1. On other platforms it can be varied up to the number of disks in use by the database.
  7. ENQUEUE_RESOURCES Default Value : Derived ( ( (PROCESSES -10) *2) +55) This is the number of resources that can be locked by the lock manager. If many Tables are used, the value may be increased, allowing one per resource, not one per lock (i.e., regardless of the number of sessions or cursors using the same resource). The value should be no greater than DML_LOCKS + DDL_LOCKS + 20, preferably much less. SQL : SELECT name, value from V$SYSSTAT where name = 'enqueue timeouts'
  8. PROCESSES (SESSIONS, TRANSACTIONS) Default Values : The Default Value for PROCESSES is 50. SESSIONS = 1.1 * PROCESSES. TRANSACTIONS = 1.1 * PROCESSES. PROCESSES determines the number of User Processes that can be allowed to connect to the RDBMS. It can be increased to allow more users to connect.
  9. SEQUENCE_CACHE_ENTRIES Default Value : 10 Range : 10 to 32000 This defines the number of SEQUENCEs to be cached in the SGA. It is preferable to set this to the number of SEQUENCEs in use. A SEQUENCE not available in the Cache would cause an existing one to be flushed out before it is brought in. Each entry requires 110 bytes in the SGA in the case of Parallel Server. SEQUENCEs created with the NOCACHE option (not recommended) do not enter the Cache and have to be read in at every request.
  10. SMALL_TABLE_THRESHOLD Default Value : 4 This determines the number of Buffers in the SGA that are available for Full Table Scanning. Tables that are smaller than this are read into the Buffers entirely and are put at the Most Recently Used end of the LRU list. Larger ones get aged out as they are put at the head of the LRU list. This value is to be increased only if Blocks read in from a Full Table Scan are desired to be retained in the SGA.
  11. TIMED_STATISTICS Default Value : FALSE Setting it to TRUE enables collection and reporting of various "Time" statistics in V$SYSSTAT. It involves an overhead on the CPU.

Annexure 2:


  1. SQL Retrieval Execution Plan.

    The Execution Plan for an SQL SELECT Statement can be obtained by :

   i) Creating a PLAN_TABLE under the Oracle User ID that is to run the SELECT Statement. The CREATE TABLE Statement for PLAN_TABLE is in $ORACLE_HOME/rdbms/admin/utlxplan.sql

  ii) Executing the SQL SELECT Statement as :

      "explain plan
      set statement_id = '&statement'
      for  SELECT .... "

This syntax allows giving a different name (statement_id) for each SQL SELECT run.

 iii) On Running the EXPLAIN PLAN, the Plan can be obtained as :

      "select operation ,options ,object_name ,id ,parent_id,
      position  from  plan_table where statement_id =
      '&statement' order by id "
       A tabular format for the output can be obtained as :
      select lpad(' ',2*(level-1))||operation||' '||options
      ||'  '||object_name||' '||decode(id,0,'Cost =  '||position)
      from plan_table
      start with id = 0 and statement_id = '&&statement'
      connect by prior id = parent_id and statement_id = '&&statement'

2. SQL_TRACE and tkprof

    The command ALTER SESSION SET SQL_TRACE TRUE would enable generation of a Trace File for subsequent SQL Statements. The Trace file is created in USER_DUMP_DEST.

    The Trace File is a text file. It can be further analysed by the tkprof command as : tkprof <trace_file_name> <output_file_name>.

    tkprof can be compiled in $ORACLE_HOME/rdbms/install as "make -f oracle.mk tkprof"

Received on Wed Jan 31 1996 - 11:48:14 CST

