David Kurtz

This blog contains things about PeopleSoft that DBAs might find interesting.
Or then again they might not!
Non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

PeopleTools 8.54: Global Temporary Tables

Thu, 2015-02-19 05:11
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

Database Feature Overview
Global Temporary Tables were introduced in Oracle 8i.  They can be used where an application temporarily needs a working storage table.  They are named
  • Global because the content is private
  • Temporary because the definition is permanent
Or if you prefer
  • Global because the definition is available to everyone
  • Temporary because 
    • physical instantiation of the table is temporary, in the temporary segment (so it isn't redo logged and so isn't recoverable),
    • but it does generate undo in the undo segment, and there is redo on the undo.
    • Each session gets its own private copy of the table in the temp segment.  So you cannot see what is in another session's temporary table, which can make application debugging difficult.
    • The physical instantiation of the table is removed either 
      • when the session disconnects - on commit preserve
      • or when the transaction is terminated with a commit or rollback - on commit delete
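For example, here is a minimal sketch of the two variants outside PeopleSoft (the table and column names are illustrative, not delivered objects):
CREATE GLOBAL TEMPORARY TABLE my_gtt_preserve
(id NUMBER NOT NULL, payload VARCHAR2(100))
ON COMMIT PRESERVE ROWS
/
CREATE GLOBAL TEMPORARY TABLE my_gtt_delete
(id NUMBER NOT NULL, payload VARCHAR2(100))
ON COMMIT DELETE ROWS
/
INSERT INTO my_gtt_preserve VALUES(1,'x');
INSERT INTO my_gtt_delete   VALUES(1,'x');
COMMIT;
-- after the commit, this session still sees the row in my_gtt_preserve,
-- my_gtt_delete is empty, and other sessions see no rows in either table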
This is a very useful database feature (I have been using it in PeopleSoft applications ever since it was introduced). 
  • Can be used for temporary records in Application Engines where restart is disabled.
  • Can be implemented without any application code change.
  • Only Application Designer temporary records can be built as global temporary tables.  You cannot make a SQL Table record global temporary.
  • The reduction in redo generation during intensive batch processes, such as payroll processing, can bring significant performance benefits.  There is no point logging redo information for temporary working storage tables that you do not ever need to restore.
  • Shared temporary tables, such as those in the GP calculation process GPPDPRUN, which is written in COBOL.  If payroll streaming is used (multiple concurrent processes working in parallel), concurrent delete/update can cause read-consistency problems on a normal table; but with global temporary tables each session has its own physical table, so there is never any need for read-consistent recovery when reading a global temporary table.
  • Global temporary tables are also an effective way to resolve table high-water-mark issues that can occur on non-shared temporary tables in on-line Application Engine, where the PeopleTools %TruncateTable macro still resolves to a DELETE.  You never get high-water-mark problems with global temporary tables because they are physically created afresh for each new session.
  • There is often a reduction in database size because the tables are not retained after the session terminates, although there will be an increased demand for temporary tablespace while the global temporary tables are in use.
  • I have occasionally seen performance problems when PeopleSoft systems very frequently truncate tables and experience contention on the RO enqueue.  This problem does not occur with global temporary tables.
Global temporary tables are not a separately licensed database feature and are available in Standard Edition.
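The redo saving is easy to demonstrate for yourself.  A rough sketch using session statistics (the tables are illustrative; compare the change in 'redo size' after each insert):
CREATE TABLE my_perm_tab (id NUMBER, padding VARCHAR2(100))
/
CREATE GLOBAL TEMPORARY TABLE my_gtt_tab (id NUMBER, padding VARCHAR2(100))
ON COMMIT PRESERVE ROWS
/
-- run this query before and after each insert and note the difference
SELECT n.name, s.value
FROM   v$mystat s, v$statname n
WHERE  n.statistic# = s.statistic#
AND    n.name = 'redo size'
/
INSERT INTO my_perm_tab SELECT level, RPAD('x',100,'x') FROM dual CONNECT BY level <= 100000
/
INSERT INTO my_gtt_tab  SELECT level, RPAD('x',100,'x') FROM dual CONNECT BY level <= 100000
/
-- the insert into the global temporary table generates far less redo
-- (only the redo on the undo), though it still consumes temporary space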
Global Temporary Tables in PeopleTools
This is the CREATE TABLE DDL generated by Application Designer:
DROP TABLE PS_ST_RM2_TAO
/
CREATE GLOBAL TEMPORARY TABLE PS_ST_RM2_TAO (PROCESS_INSTANCE
DECIMAL(10) NOT NULL,
EMPLID VARCHAR2(11) NOT NULL,
GRANT_NBR VARCHAR2(10) NOT NULL,
VEST_DT DATE,
SHARES_REMAINDER DECIMAL(21, 9) NOT NULL,
DEC_PLACES SMALLINT NOT NULL) ON COMMIT PRESERVE ROWS TABLESPACE PSGTT01
/
CREATE UNIQUE INDEX PS_ST_RM2_TAO ON PS_ST_RM2_TAO (PROCESS_INSTANCE,
EMPLID,
GRANT_NBR,
VEST_DT)
/
The first thing to point out is the specification of a tablespace.  This is a new feature in Oracle 11g.  It is not mandatory in Oracle, but it is coded into the PeopleSoft DDL model, so you must specify a temporary tablespace on the record otherwise it will fail to build.  A new temporary tablespace, PSGTT01, is delivered by Oracle when you upgrade to 8.54, or you could just use an existing temporary tablespace.

This new feature has been implemented using 2 new DDL models (statement types 6 and 7).
SELECT * FROM psddlmodel WHERE statement_type IN(6,7);

STATEMENT_TYPE PLATFORMID SIZING_SET PARMCOUNT
-------------- ---------- ---------- ----------
MODEL_STATEMENT
------------------------------------------------------------------------
6 2 0 0
CREATE GLOBAL TEMPORARY TABLE [TBNAME] ([TBCOLLIST]) ON COMMIT PRESERVE
ROWS TABLESPACE [TBSPCNAME];

7 2 0 0
CREATE [UNIQUE] INDEX [IDXNAME] ON [TBNAME] ([IDXCOLLIST]);
  • All tables are created ON COMMIT PRESERVE ROWS.  However, on-line instances could be ON COMMIT DELETE ROWS (a theory subject to testing), and that would apply to ALL Application Engine programs, even if restart is enabled, because commits are suppressed in on-line Application Engine; instead, the commit is issued by the component.
If you try adding a global temporary table to an Application Engine program that is not restart-disabled, you quite rightly get the following error message.  The table will be added, but the program will not execute correctly.

"Global Temporary Tables allocated to this restart enabled AE program will not retain any data when program exits."
Problems:
  • There has always been a 13-character limit on the names of temporary records, because there used to be a maximum of 99 non-shared instances and 2 characters were reserved for the instance number.  If you try to set the number of instances to greater than 99 in an Application Engine program (I tried GP_GL_PREP) you now get the warning message:
"Do not support more than 99 instances when select the Temp Table which are not attributed as GTT"
  • There is now a maximum length of 11 characters for the name of a record built as a global temporary table, because from PeopleTools 8.54 there can be up to 9999 non-shared instances of the record.  The restriction applies irrespective of how many instances you are actually using. 
    • I have yet to encounter a system where I need more than 99 instances of a temporary table.  I can just about imagine needing 100 non-shared instances, but not 1000.  
    • This means that I cannot retrofit global temporary tables into existing Application Engine processes without changing record names.  There are delivered Application Engine programs with 12 and 13 character temporary record names that cannot now be switched to use global temporary tables managed by Application Designer.  I don't need to support more instances just because the table is global temporary.
      • For example, GP_GL_SEGTMP in GP_GL_PREP is a candidate to be made global temporary because that is a streamed Global Payroll process.  When I tried, I got a record name too long error!
"Record Name is too long. (47,67)"
      • Really, if the table is global temporary you don't need lots of instances.  Everyone could use the shared instance, because Oracle gives each session a private physical copy of the table anyway. 
        • You could do this by removing the record name from the list of temporary records in the application engine, then the %Table() macro will generate the table name without an instance number.
        • There would be a question of how to handle optimizer statistics.  Optimizer statistics collected on a global temporary table in one session could end up being used in another because there is only one place to store them in the data dictionary.
        • The answer is not to collect statistics at all and to rely on Optimizer Dynamic Sampling (see the sketch after this list).  There is a further enhancement in Oracle 12c where the dynamically sampled statistics from different sessions are kept separate.
    • When Application Designer builds an alter script, it can't tell whether the existing table is global temporary or a normal table, so it doesn't rebuild the table if you change it from one to the other.
    • The only real run-time downside of global temporary tables is that if you want to debug a process the data is not left behind after the process terminates.  Even while the process is running, you cannot query the contents of a global temporary table in use by another session from your own session.
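A sketch of the statistics approach on the record built above (the schema name SYSADM is the usual PeopleSoft owner; adjust as required):
-- delete any statistics and lock them so they are not gathered again,
-- leaving the optimizer to dynamically sample the table at parse time
EXECUTE dbms_stats.delete_table_stats(ownname=>'SYSADM', tabname=>'PS_ST_RM2_TAO');
EXECUTE dbms_stats.lock_table_stats(ownname=>'SYSADM', tabname=>'PS_ST_RM2_TAO');
-- from Oracle 12c, session-private statistics on global temporary tables
-- are available instead (and are the default)
EXECUTE dbms_stats.set_table_prefs(ownname=>'SYSADM', tabname=>'PS_ST_RM2_TAO', pname=>'GLOBAL_TEMP_TABLE_STATS', pvalue=>'SESSION');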
My Recommendation
Support for global temporary tables is welcome and long overdue.  It can bring significant run-time performance and system benefits due to the reduction in redo generation and read-consistency overhead.  It can be implemented without any code change. 

    We just need to sort out the 11 character record name length restriction.

    PeopleTools 8.54: Materialized Views

    Wed, 2015-02-18 15:30
    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
Materialized Views in the Database
Snapshots were introduced in Oracle 7.  They were a way of building and maintaining a physical table to hold the results of a SQL query.  We are well used to a view being the results of a query, but here the results are materialised into a physical table.  Hence the feature was renamed materialized views in Oracle 8i.

    Today materialized views are one among many replication technologies.  They are available in standard edition Oracle, and there is no licensing implication in their use.

    Materialized views can generally be put into two categories
    • A simple, single-table materialized view. 
      • Often used to replicate data across a database link from another database.  Or to produce a subset of the data.
      • Can be refreshed incrementally using a PL/SQL package supplied by Oracle
        •  A materialized view log is created on the source table to record all the changes made to the source table.  It holds the primary key of the changed row, or the rowid (physical address of the row).  It can optionally hold additional columns.  It is populated by a database trigger on the source table (since Oracle 8i that trigger is hidden).
    • Multi-table materialized view
      • Usually done within a single database rather than across a database link.
      • Can only be refreshed by being completely rebuilt. 
      • If the same query as is used in a materialized view is submitted, Oracle can rewrite the query to use the materialized view instead.  Query rewrite only occurs subject to configuration and certain pre-requisites being met.
    Materialized Views can be populated immediately when they are built, or later on demand.  They can be refreshed on demand, on a regular schedule by a database job, or immediately when an update to a source table is committed.  Materialized views can be put into Refresh Groups.  All the materialized views in a refresh group are refreshed in the same database transaction so the data in the materialized views is consistent.

Materialized views can be updatable and used for bidirectional replication.  I am not going to talk about that here.

    When you introduce materialized views into an application you need to consider what you are trying to achieve, and make design decisions accordingly.
Materialized Views in PeopleTools 8.54
Using this new feature in PeopleSoft is deceptively easy, but quite a lot is going on behind the scenes.
    PeopleSoft Documentation (the term PeopleBooks seems to have been replaced by PeopleSoft Online Help): Data Management, Using Materialized Views provides an overview.

    There are two new PeopleTools tables:
    • PSPTMATVWDEFN - additional definition fields for the materialized view: build, refresh, staleness, statistics.  It doesn't contain the query; that is held in PSSQLTEXTDEFN, as it is for all other views.
    • PSPTMATVWDEP - lists the tables upon which each materialized view depends.  PeopleSoft seems to work this out for itself by parsing the SQL query.
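For example, a DBA can reconcile what PeopleTools has recorded against the data dictionary with something like this (a sketch; it assumes the usual PS_<recname> naming convention, which will not hold for records with a non-standard SQL table name):
SELECT d.recname, d.ptmat_matstat, d.ptmat_refreshstat
,      d.ptmat_lastrefresh, d.ptmat_staleness
,      m.last_refresh_date, m.staleness
FROM   psptmatvwdefn d
  LEFT OUTER JOIN user_mviews m
  ON   m.mview_name = 'PS_'||d.recname
ORDER BY d.recname
/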
    I am going to demonstrate some aspects of the feature by running through some examples.
Example 1: Replicate part of PS_JOB across a database link
In this example I am going to use a materialized view to replicate a table from another Oracle database across a database link.
If I select the SQL View record type, the Materialized View check box appears; if I check that box, the Materialized View Options appear.

This is the build script generated by Application Designer:
    DROP VIEW PS_DMK
    /
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
    SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
    ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
    EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    However, if the materialized view already exists, the script will drop it, recreate and drop the view, and then recreate the materialized view.
    DROP MATERIALIZED VIEW PS_DMK
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
    SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH FAST
    ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
    EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
    (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
    STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
    = 'DMK'
    /
    • The Application Designer build script creates the materialized view using primary-key-based replication.  There is no WITH PRIMARY KEY clause specified because it is the default.  There appears to be no way to get Application Designer to generate a WITH ROWID clause, so it is not possible to replicate a single table that has no unique key.  You might question whether that is useful, but it is possible in Oracle.
    • If there is no primary key on the source table, you will need to add one.  If the source is on another system, or a pre-8.54 PeopleSoft system, you will need to do this manually.  Otherwise you will get this error message:
    ERROR at line 4:
    ORA-12014: table 'PS_JOB' does not contain a primary key constraint
    • If you specify any key columns on the materialized view it does not result in the usual indexes that you get on tables.  Also, it is not possible to add additional user-specified indexes to the materialized view - the option is greyed out.  This is rather disappointing, because you might want to do exactly that so you can query the materialized view in different ways to the underlying table (a manual workaround is sketched after this list).
      • You will get a unique index on a materialized view that is replicated by primary key, because the primary key will be inherited from the underlying table.
    • Nor is it possible to specify partitioning in Application Designer on a materialized view.
    • You can specify storage options on the materialized view via Record DDL, but the storage options do not appear in the CREATE MATERIALIZED VIEW statement in the build script.  This is rather disappointing because you don't need to provide free space for updates in a materialized view which is completely refreshed each time, but you might if you do incremental update.
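Because the materialized view is physically a table, a DBA can still create extra indexes on it manually outside PeopleTools.  A sketch (the index name and tablespace are illustrative, and Application Designer will not know about the index, so a subsequent drop and rebuild of the materialized view will silently lose it):
CREATE INDEX PSADMK ON PS_DMK (DEPTID, SETID_DEPT) TABLESPACE PSINDEX
/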
Example 2: Replicate part of PS_JOB locally
In this example, I am again only replicating 6 named columns into my materialized view.
    DROP VIEW PS_DMK
    /
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
    /
    DROP MATERIALIZED VIEW LOG ON PS_JOB
    /
    DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
    'PS_JOB'
    /
    ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
    , EMPLID, EMPL_RCD)
    /
    CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
    KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
    IMMEDIATE
    /
    INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
    'PS_JOB')
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
I don't know why it rebuilds the record as a normal view and drops the primary key constraint every time, but it does.  You might not want to do this every time for a large materialized view that takes time to build.
If the materialized view log has already been built, the next time you generate the build script it creates and drops the plain view and then builds the materialized view.
    CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
    DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
    DEPTID FROM PS_JOB
    /
    DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
    'PS_JOB'
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
    PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
    ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
    = 'DMK'
    /
    ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
    /
    DROP MATERIALIZED VIEW LOG ON PS_JOB
    /
    DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
    'PS_JOB'
    /
    ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
    , EMPLID, EMPL_RCD)
    /
    CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
    KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
    IMMEDIATE
    /
    INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
    'PS_JOB')
    /
    DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
    /
    DROP VIEW PS_DMK
    /
    CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
    SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
    ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
    EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
    (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
    STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
    = 'DMK'
    /
    • It is rather odd to see a build script update a PeopleTools table. Application Designer also updates the PSPTMATVWDEFN table itself every time it generates the build script.  Note that the script doesn't issue an explicit commit, so if you execute the build script in SQL*Plus remember to commit to release the row level lock on PSPTMATVWDEFN.  
    • Application Designer flip-flops between these two build scripts that will repeatedly drop and create the materialized view and materialized view log.  Unless you are very careful you might not know whether you have the objects in the desired state.
    • The materialized view and materialized view log are always created in tablespace PSMATVW.  This is a new tablespace delivered in the standard tablespace script.  It is not possible to set the tablespace to something else, as you can for a normal table, because PeopleTools treats the record as a view.  This is unfortunate because I might not want all my materialized views in the same tablespace.
    • Even though the materialized view is replicated by primary key, the materialized view log also contains the rowid and the supplementary columns.  This is overkill.  The materialized view log as built by Application Designer contains every length-bounded column in the source table.  This can significantly increase the overhead of maintaining the materialized view log as other processes update the source table.
    SQL> desc mlog$_ps_job
    Name Null? Type
    ----------------------------- -------- -------------------
    EFFDT DATE
    EFFSEQ NUMBER
    EMPLID VARCHAR2(11 CHAR)
    EMPL_RCD NUMBER
    DEPTID VARCHAR2(10 CHAR)
    SETID_DEPT VARCHAR2(5 CHAR)
    M_ROW$$ VARCHAR2(255 CHAR)
    SEQUENCE$$ NUMBER
    SNAPTIME$$ DATE
    DMLTYPE$$ VARCHAR2(1 CHAR)
    OLD_NEW$$ VARCHAR2(1 CHAR)
    CHANGE_VECTOR$$ RAW(255)
    XID$$ NUMBER
      • If I just created the materialized view log as follows
    CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW 
    WITH PRIMARY KEY
    --, ROWID, SEQUENCE(DEPTID, SETID_DEPT)
    INCLUDING NEW VALUES PURGE IMMEDIATE
    /
      • then the materialized view log contains fewer columns
    Name                          Null?    Type
    ----------------------------- -------- --------------------
    EFFDT DATE
    EFFSEQ NUMBER
    EMPLID VARCHAR2(11 CHAR)
    EMPL_RCD NUMBER
    SNAPTIME$$ DATE
    DMLTYPE$$ VARCHAR2(1 CHAR)
    OLD_NEW$$ VARCHAR2(1 CHAR)
    CHANGE_VECTOR$$ RAW(255)
    XID$$ NUMBER
    • The materialized view inherits the primary key from the source table because it is a single table materialized view replicated using the primary key.  Therefore there is also a unique index on this materialised view.
    SELECT constraint_name, constraint_type, table_name, index_name
    FROM user_constraints
    WHERE table_name = 'PS_DMK'
    AND constraint_type != 'C'
    /

    CONSTRAINT_NAME C TABLE_NAME INDEX_NAME
    -------------------- - ---------- ----------
    PS_JOB_PK1 P PS_DMK PS_JOB_PK1

    SELECT index_name, index_type, uniqueness
    FROM user_indexes
    WHERE table_name = 'PS_DMK'
    /

    INDEX_NAME INDEX_TYPE UNIQUENES
    ---------- ---------- ---------
    PS_JOB_PK1 NORMAL UNIQUE

    SELECT index_name, column_position, column_name, descend
    FROM user_ind_columns
    WHERE table_name = 'PS_DMK'
    /

    INDEX_NAME COLUMN_POSITION COLUMN_NAME DESC
    ---------- --------------- -------------------- ----
    PS_JOB_PK1 1 EFFDT ASC
    PS_JOB_PK1 2 EFFSEQ ASC
    PS_JOB_PK1 3 EMPLID ASC
    PS_JOB_PK1 4 EMPL_RCD ASC




  • The build script clears out MV_CAPABILITIES_TABLE when it drops the materialized view.  This table is used to hold the output from DBMS_MVIEW.EXPLAIN_MVIEW (see the Oracle Database Data Warehousing Guide), which Application Designer executes when the materialized view record is saved.  You can run the same check manually, as sketched below.
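A manual check might look like this (a sketch; the statement ID is an arbitrary label, and MV_CAPABILITIES_TABLE can be created with the utlxmv.sql script in $ORACLE_HOME/rdbms/admin if it does not already exist):
EXECUTE dbms_mview.explain_mview('PS_DMK', stmt_id=>'MY_CHECK');
SELECT capability_name, possible, related_text, msgtxt
FROM   mv_capabilities_table
WHERE  statement_id = 'MY_CHECK'
ORDER BY seq
/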

Example 3: DMK_DPT_SEC_MVW is a materialised view cloned from security view DEPT_SEC_SRCH
This view references various tables (I have edited out column lists and predicates for readability):

    SELECT …
    FROM PS_DEPT_TBL DEPT
    , PSOPRDEFN OPR
    WHERE EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
    OR EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
    OR EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
But only 4 tables appear in PSPTMATVWDEP.  PS_SJT_DEPT was not added.
    SELECT * FROM psptmatvwdep WHERE recname = 'DMK_DPT_SEC_MVW'
    /

    RECNAME PTMAT_BASETBL
    --------------- ------------------
    DMK_DPT_SEC_MVW PSOPRDEFN
    DMK_DPT_SEC_MVW PS_DEPT_TBL
    DMK_DPT_SEC_MVW PS_SJT_CLASS_ALL
    DMK_DPT_SEC_MVW PS_SJT_OPR_CLS
I think this is because it tried and failed to add a primary key constraint and a materialized view log to PS_SJT_DEPT, because that table has a 'duplicate key' defined in Application Designer.  The following errors are found in the build log even if the build script is not executed.
    ALTER TABLE PS_SJT_DEPT ADD CONSTRAINT PS_SJT_DEPT_PK PRIMARY KEY 
    (SCRTY_KEY1, SCRTY_KEY2, SCRTY_KEY3, SCRTY_TYPE_CD, SETID)
    Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
    - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated

    CREATE MATERIALIZED VIEW LOG ON PS_SJT_DEPT TABLESPACE PSMATVW
    WITH PRIMARY KEY, ROWID, SEQUENCE (DEPTID, EFFDT_NOKEY)
    INCLUDING NEW VALUES PURGE IMMEDIATE
    Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
    - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated
Application Designer worked out that PS_SJT_DEPT was referenced in the materialized view query, but it didn't check that the table does not have a unique key defined in PeopleTools.

We didn't get as far as creating the materialized view.  However, Application Designer passed the CREATE MATERIALIZED VIEW command to the EXPLAIN_MVIEW function in order to populate MV_CAPABILITIES_TABLE:
    EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW (q'[CREATE MATERIALIZED VIEW PS_DMK_DPT_SEC_MVW (SETID, OPRID, DEPTID, DESCR
    , DESCRSHORT, SETID_LOCATION, LOCATION, MANAGER_ID, COMPANY, USE_BUDGETS, USE_ENCUMBRANCES)
    TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST ON DEMAND AS
    SELECT DEPT.SETID, OPR.OPRID, DEPT.DEPTID , DEPT.DESCR , DEPT.DESCRSHORT , DEPT.SETID_LOCATION
    , DEPT.LOCATION , DEPT.MANAGER_ID , DEPT.COMPANY , DEPT.USE_BUDGETS , DEPT.USE_ENCUMBRANCES
    FROM PS_DEPT_TBL DEPT , PSOPRDEFN OPR
    WHERE EXISTS (
    SELECT 'X' FROM PS_SJT_DEPT SEC , PS_SJT_CLASS_ALL CLS , PS_SJT_OPR_CLS SOC
    WHERE SEC.SETID = DEPT.SETID AND SEC.DEPTID = DEPT.DEPTID AND SEC.EFFDT_NOKEY = DEPT.EFFDT
    AND CLS.SCRTY_SET_CD = 'PPLJOB' AND CLS.SCRTY_TYPE_CD = '001' AND CLS.TREE = 'Y'
    AND CLS.SCRTY_KEY1 = SEC.SCRTY_KEY1 AND CLS.SCRTY_KEY2 = SEC.SCRTY_KEY2
    AND CLS.SCRTY_KEY3 = SEC.SCRTY_KEY3 AND SOC.OPRID
Example 4: DMK_JOB_CUR_MVW is a materialized view cloned from JOB_CURR_ALL_VW
In this case I will try to create a materialized view on a complex query, but this time the underlying table has a unique key.  When I try to build the materialized view I get the following entries in the error log.  These warnings were obtained from the entries in MV_CAPABILITIES_TABLE, which was populated by an attempt to describe the query.
    SQL Build process began on 16/02/2015 at 21:05:30 for database HR92U011. 
    Error: Cannot create Materialized View on record DMK_JOB_CUR_MVW.
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_COMPLETE| Y | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST| N | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE| N | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | aggregate function in mv | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | multiple instances of the same table or view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ONETAB_DML| N | see the reason why REFRESH_FAST_AFTER_INSERT is disabled | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ANY_DML| N | see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | materialized view cannot support any type of query rewrite | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | subquery present in the WHERE clause | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | materialized view cannot support any type of query rewrite | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | query rewrite is disabled on the materialized view | |

    SQL Build process ended on 16/02/2015 at 21:05:30.
    1 records processed, 1 errors, 15 warnings.
    SQL Build script for all processes written to file C:\Temp\PSBUILD.SQL.
    SQL executed online.
    SQL Build log file written to C:\Temp\PSBUILD.LOG.
    • So, Application Designer does try to prevent you from creating materialized views that Oracle won't manage, but the messages back are a little obscure.
    • If I change the refresh mode to Complete, Application Designer does not create materialized view logs. 
    CREATE MATERIALIZED VIEW PS_DMK_JOB_CUR_MVW (EMPLID, EMPL_RCD,
    ACTION_DT, BUSINESS_UNIT, EMPL_STATUS, HR_STATUS, DEPTID, JOBCODE,
    LOCATION, POSITION_NBR, ACTION, ACTION_REASON, COMP_FREQUENCY,
    COMPRATE, CURRENCY_CD, SAL_ADMIN_PLAN, GRADE, COMPANY, PAY_SYSTEM_FLG
    , PAYGROUP, REG_TEMP, FULL_PART_TIME, SETID_DEPT, SETID_JOBCODE,
    SETID_LOCATION, PER_ORG) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH
    COMPLETE ON DEMAND AS SELECT A.EMPLID ,A.EMPL_RCD ,A.EFFDT
    ,A.BUSINESS_UNIT ,A.EMPL_STATUS ,A.HR_STATUS ,A.DEPTID ,A.JOBCODE
    ,A.LOCATION ,A.POSITION_NBR ,A.ACTION ,A.ACTION_REASON
    ,A.COMP_FREQUENCY ,A.COMPRATE ,A.CURRENCY_CD ,A.SAL_ADMIN_PLAN
    ,A.GRADE ,A.COMPANY ,A.PAY_SYSTEM_FLG ,A.PAYGROUP ,A.REG_TEMP
    ,A.FULL_PART_TIME ,A.SETID_DEPT ,A.SETID_JOBCODE ,A.SETID_LOCATION
    ,A.PER_ORG FROM PS_JOB A WHERE A.EFFDT = ( SELECT MAX (C.EFFDT) FROM
    PS_JOB C WHERE C.EMPLID = A.EMPLID AND C.EMPL_RCD = A.EMPL_RCD AND
    ((C.EFFDT <= TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD')) OR
    (A.EFFDT > TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') AND
    TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') < ( SELECT
    MIN(J2.EFFDT) FROM PS_JOB J2 WHERE J2.EMPLID = A.EMPLID AND
    J2.EMPL_RCD = A.EMPL_RCD) ) )) AND A.EFFSEQ = ( SELECT MAX(D.EFFSEQ)
    FROM PS_JOB D WHERE D.EMPLID = A.EMPLID AND D.EMPL_RCD = A.EMPL_RCD
    AND D.EFFDT = A.EFFDT)
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
    (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
    STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK_JOB_CUR_MVW')
    WHERE RECNAME = 'DMK_JOB_CUR_MVW'
    /
    • Also, it doesn't create a primary key constraint on either the underlying table or the materialized view, so this materialized view doesn't have any indexes.
Query Rewrite
One common use of complex materialized views is to allow the optimizer to rewrite a query to use the materialized view when it sees the same query as was used to create the materialized view.  Optionally, the optimizer will also check that the view is up to date.  I have added the ENABLE QUERY REWRITE clause.
    DROP MATERIALIZED VIEW PS_DMK_PER_DEP_MVW
    /
    CREATE MATERIALIZED VIEW PS_DMK_PER_DEP_MVW (SETID_DEPT, DEPTID, EFFDT
    , DESCR) TABLESPACE PSMATVW
    BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND
    enable query rewrite
    AS SELECT A.SETID ,A.DEPTID ,A.EFFDT ,A.DESCR FROM PS_DEPT_TBL A
    WHERE A.EFFDT= ( SELECT MAX(B.EFFDT) FROM PS_DEPT_TBL B WHERE A.SETID
    =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
    ,'YYYY-MM-DD'),'YYYY-MM-DD'))
    /
However, expressions - in this case one generated to determine the current effective-dated department - are not supported for query rewrite.
    =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
    *
    ERROR at line 7:
    ORA-30353: expression not supported for query rewrite
    This could make it very difficult to use the feature in PeopleSoft. If you want to use the materialized view you are likely to have to reference it explicitly in the code.
Refreshing Materialized Views
There is a new component to manage the refresh frequency of materialized views:
    PeopleTools-> Utilities-> Administration -> Oracle Materialized Views -> Materialized View Maintenance
    This component will schedule an Application Engine process called PTMATREFVW.
    What this Application Engine actually does is to specify the refresh frequency for the materialized view
    &AlterSQL = "alter materialized view " | &mview_name 
    | " REFRESH NEXT SYSDATE + (" | &MatRecords.PTMAT_REFINT.Value | "/86400)";
    So the command issued is just
alter materialized view PS_DMK REFRESH NEXT SYSDATE + (4242/86400)
    Effectively, for each materialized view, this creates a refresh group and a database job that refreshes it.
    SELECT rname, next_date, interval FROM user_refresh
    /

    RNAME NEXT_DATE INTERVAL
    ---------- --------- -------------------------
    PS_DMK 24-JAN-15 SYSDATE + (4242/86400)

    SELECT name, type, rname, job, next_date, interval FROM user_refresh_children
    /

    NAME TYPE RNAME JOB NEXT_DATE INTERVAL
    ---------- ---------- ---------- ---------- --------- -------------------------
    PS_DMK SNAPSHOT PS_DMK 21 24-JAN-15 SYSDATE + (4242/86400)

    SELECT job, next_date, next_Sec, interval, what FROM dba_jobs
    /

    JOB NEXT_DATE NEXT_SEC INTERVAL
    ---------- --------- -------- -------------------------
    WHAT
    --------------------------------------------------
    21 24-JAN-15 11:48:52 SYSDATE + (4242/86400)
    dbms_refresh.refresh('"SYSADM"."PS_DMK"');
    • But I might want to group related materialized views together into a single refresh group.
    • I might want to refresh the job at a particular time, which can be done with a more sophisticated function in the interval. 
    • I might prefer to refresh a materialized view at a particular point in a batch schedule.  So I might prefer to code that into an Application Engine program, or have the Application Engine submit a database job that fires once and does not resubmit itself (depending on whether I want to wait for the refresh).
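The grouping and scheduling described in this list can be done directly with the DBMS_REFRESH package rather than through the delivered component.  A sketch (the group name and member list are illustrative; lax=>TRUE moves a materialized view out of any refresh group it is already in, such as the one created by PTMATREFVW):
BEGIN
  dbms_refresh.make(
    name      => 'SYSADM.HR_MV_GROUP'
  , list      => 'SYSADM.PS_DMK, SYSADM.PS_DMK_JOB_CUR_MVW'
  , next_date => TRUNC(SYSDATE)+1+6/24    -- first refresh at 06:00 tomorrow
  , interval  => 'TRUNC(SYSDATE)+1+6/24'  -- then daily at 06:00
  , lax       => TRUE);
END;
/
-- or refresh the whole group on demand at a specific point in a batch schedule
EXECUTE dbms_refresh.refresh('SYSADM.HR_MV_GROUP');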
    My Recommendations
    • Good Things
      • The removal of the descending indexes and the creation of the primary key is a good thing
      • It is useful to be able to define the materialized view query in PeopleTools along with the rest of the application.
      • The use of EXPLAIN_MVIEW to test the validity of the materialized view and to populate MV_CAPABILITIES_TABLE is clever, but the messages are obscure and should be better documented.
    • Bad Things
      • There is no check of whether a source table in the local database has a unique key that can support a primary key.
      • I can't build additional indexes on a materialized view in Application Designer, although the primary key will be inherited automatically on single-table materialized views.  So you will have to handle other indexes manually outside PeopleTools.
      • There is no support for rowid based materialized views.
      • The materialized view logs created by Application Designer are totally overblown - there is far too much data being logged.  They should be either primary key or rowid (if primary key is not possible), but not both.  I cannot see the point of the additional columns.  I think they are a waste of resource.
      • The flip-flopping of the build script is confusing; you will never be completely sure what you have in the database. 
      • The script drops the materialized view unnecessarily, which will also drop any indexes that you have created manually!
    • I think some of the problems stem from trying to graft materialized views onto the existing view record type, instead of creating a new record type and building it into Application Designer properly and giving it the attributes of both a table and a view.
    • There is not enough control over when a materialized view is refreshed.  Just a time interval is not good enough.  In most systems, you need better control.
    • It is clearly going to be difficult getting database query rewrite to work with complex materialized views in PeopleSoft, especially if effective-date logic is required.  However, that is a reflection on the application code rather than the way support for materialized views has been implemented.
     When you design materialized views into your application you need to make careful choices about
    • Refresh method
      • Complete - pretty much mandatory for multi-table views, but there are some exceptions to this rule described in the Oracle database documentation.
      • Fast - only for single table - rarely used within a single database - more commonly used for moving data between databases, in which case you need database links; the materialized view log goes onto the source database and the materialized view is created on the target.
    • Refresh mode
      • On commit - this is potentially dangerous because it could happen too often and there won't be a saving.  Possible exception being when the underlying table is only ever updated by a batch process
      • On demand - manually issue refresh at a specific point in a batch
      • On schedule by a database job.
    • Build options
      • Immediate - the materialized view is populated when it is created
      • Deferred - the materialized view is not populated when it is created, and will have to be completely refreshed at some point in the future.
Choice of refresh method and mode is often a functional question: how stale can the materialized view be allowed to be, especially if you can't get query rewrite to work?  Thoughtful design is required.  I have seen systems severely degraded by the query and redo overhead of excessive refresh. 
PeopleTools support of materialized views certainly has some good things, but as it stands it is of limited use because it still leaves you with a lot of manual administration to do. 

    In most systems it is the DBAs who will have to manage the materialized views.  They are generally resistant to using PeopleSoft specific tools to do database administration.  That is going to be even more challenging when only a part of the job can be done in PeopleSoft.

    PeopleTools 8.54: Descending Indexes are not supported

    Mon, 2015-02-16 07:57
    This is the first in a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

    "With PeopleTools 8.54, PeopleTools will no longer support descending indexes on the Oracle database platform" - PeopleTools 8.54 Release Notes

    They have gone again!  Forgive the trip down memory lane, but I think it is worth reviewing their history.
    • Prior to PeopleTools 8.14, if you specified a key or alternate search field in a record as descending, then where it appeared in the automatically generated key and alternate search key indexes that column would be descending.  PeopleTools would add the DESC keyword after the column in the CREATE INDEX DDL.  Similarly, columns could be specified as descending in user indexes (that is to say, ones created by the developer with index IDs A through Z).
    • In PeopleTools 8.14 to 8.47, descending indexes were not built by Application Designer because of a problem with the descending key indexes in some versions of Oracle 8i.  
      • PeopleSoft had previously recommended setting an initialisation parameter on 8i to prevent descending indexes from being created even if the DESC keyword was specified:
    _IGNORE_DESC_IN_INDEX=TRUE
    • From PeopleTools 8.48 the descending keyword came back because this was the first version of PeopleTools that was only certified from Oracle 9i, in which the descending index bug never occurred. (see blog posting 'Descending Indexes are Back!' October 2007).
    • In PeopleTools 8.54, there are again no descending indexes because the DESC keyword has been omitted from the column list in the CREATE INDEX DDL.  You can still specify descending keys in Application Designer because that controls the order in which rows are queried into scrolls in the PIA.  You can also still specify descending order on user indexes, but it has no effect upon either the application or the index DDL.
I haven't found any documentation that explains why this change has been made.  This time there is no suggestion of a database bug.  However, I think that there are good database design reasons behind it.

    Normally creation of a primary key automatically creates a unique index to police the constraint.  It is possible to create a primary key constraint using a pre-existing index.  The index does not have to be unique, but it may as well.  However, there are some limitations.
    • You cannot create primary key on nullable columns - that is a fundamental part of the relational model.  This is rarely a problem in PeopleSoft where only dates that are not marked 'required' in the Application Designer are created nullable in the database.
      • You can create a unique index on nullable columns, which is probably why PeopleSoft has always used unique indexes.  
    • You cannot use a descending index in a primary key constraint because it is implemented as a function-based index.
    CREATE TABLE t (a NUMBER NOT NULL)
    /
    CREATE UNIQUE INDEX t1 ON t(A DESC)
    /
    ALTER TABLE t ADD PRIMARY KEY (a) USING INDEX t1
    /
    ORA-14196: Specified index cannot be used to enforce the constraint.

      • We can see that the descending key column is actually a function of a column and not a column, and so cannot be used in a primary key.
    SELECT index_name, index_type, uniqueness FROM user_indexes WHERE table_name = 'T'
    /

    INDEX_NAME INDEX_TYPE UNIQUENES
    ---------- --------------------------- ---------
    T1 FUNCTION-BASED NORMAL UNIQUE

    SELECT index_name, column_name, column_position, descend FROM user_ind_columns WHERE table_name = 'T'
    /

    INDEX_NAME COLUMN_NAME COLUMN_POSITION DESC
    ---------- ------------ --------------- ----
    T1 SYS_NC00002$ 1 DESC

    SELECT * FROM user_ind_expressions WHERE table_name = 'T'
    /

    INDEX_NAME TABLE_NAME COLUMN_EXPRESSION COLUMN_POSITION
    ---------- ---------- -------------------- ---------------
    T1 T "A" 1
      • Non-PeopleSoft digression: you can create a primary key on a virtual column.  An index on the virtual column is not function-based.  So you can achieve the same effect if you move the function from the index into a virtual column, and you can have a primary key on the function.  However, PeopleTools Application Designer doesn't support virtual columns.
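A sketch of that digression, reusing the earlier example (the negation is just a stand-in for whatever function would otherwise be in the index; remember this is outside anything Application Designer can manage):
CREATE TABLE t2
(a      NUMBER NOT NULL
,a_func NUMBER GENERATED ALWAYS AS (-a) VIRTUAL
,CONSTRAINT t2_pk PRIMARY KEY (a_func))
/
-- the primary key constraint implicitly creates a unique index on the virtual column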
I think that descending keys have been removed from PeopleTools because:
    • It permits the creation of primary key constraints using the unique indexes.
    • It does not pose any performance threat.  In Oracle, index leaf blocks are chained in both directions so it is possible to use an ascending index for a descending scan and vice versa. 
    • Update 20.4.2016: There is an optimisation in Oracle 11.2.0.4 that improves the performance of the MAX() function in correlated sub-queries on ascending indexes only.  This will benefit all PeopleSoft applications, but especially HCM.
    What are the advantages of having a primary key rather than just a unique constraint?
    • The optimizer can only consider certain SQL transformations if there is a primary key.  That mainly affects star transformation.
    • It allows the optimizer to rewrite a query to use a materialized view. 
    • It allows a materialized view refresh to be based on the primary key rather than the rowid (the physical address of the row in a table).  This can save you from performing a full refresh of the materialized view if you rebuild the table (I will come back to materialized views in another posting).
    • If you are using logical standby, you need to be able to uniquely identify a row of data, otherwise Oracle will perform supplemental logging of all bounded-size columns (in PeopleSoft, that means everything except LOBs).  Oracle can use a non-null unique constraint, but it cannot use a unique function-based index.
      • Logical standby can be a useful way to minimise downtime during a migration.  For example, when migrating the database from a proprietary Unix to Linux where there is an Endian change.  Minimising supplemental logging would definitely be of interest in this case.
The descending index change is not something that can be configured by the developer or administrator.  It is simply hard-coded behaviour in Application Designer and Data Mover that changes the DDL they generate.

    There are some considerations on migration to PeopleTools 8.54:
    • It will be necessary to rebuild all indexes with descending keys to remove the descending keys.  
    • There are a lot of descending indexes in a typical PeopleSoft application (I counted the number of function-based indexes in a typical system: HR ~11000 , Financials: ~8500).  If you choose to do this during the migration it may take considerable time.
    • In Application Designer, if your build script settings are set to only recreate an index if modified, Application Designer will not detect that the index has a descending key and rebuild it.  So you will have to work out for yourself which indexes have descending key columns and handle the rebuild manually (a query to identify them is sketched after this list).
    • With migration to PeopleTools 8.54 in mind, you might choose to prevent Oracle from building descending indexes by setting _IGNORE_DESC_IN_INDEX=TRUE.  Then you can handle the rebuild in stages in advance.
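One way to work out which indexes have descending key columns is to query the data dictionary, for example (SYSADM is the usual PeopleSoft schema owner; adjust as required):
SELECT i.table_name, i.index_name
FROM   dba_indexes i
WHERE  i.owner = 'SYSADM'
AND    i.index_type LIKE 'FUNCTION-BASED%'
AND    EXISTS(
       SELECT 'x'
       FROM   dba_ind_columns c
       WHERE  c.index_owner = i.owner
       AND    c.index_name  = i.index_name
       AND    c.descend     = 'DESC')
ORDER BY 1,2
/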
Conclusion
I think the removal of descending indexes from PeopleSoft is a sensible change that enables a number of Oracle database features, while doing no harm.

    PeopleTools 8.54 for the Oracle DBA

    Mon, 2015-02-16 06:36
    The UKOUG PeopleSoft Roadshow 2015 comes to London on 31st March 2015.  In a moment of enthusiasm, I offered to talk about new and interesting features of PeopleTools 8.54 from the perspective of an Oracle DBA.

    I have been doing some research, and have even read the release notes!  As a result, I have picked out some topics that I want to talk about.  I will discuss how the feature has been implemented, and what I think are the benefits and drawbacks of the feature:
    This post is not about a new feature in PeopleTools 8.54, but it is something that I discovered while investigating the new version.
      Links have been added to the above list as I have also blogged about each.  I hope it might produce some feedback and discussion.  After the Roadshow I will also add a link to the presentation.

      PeopleTools 8.54 is still quite new, and we are all still learning.  So please leave comments, disagree with what I have written, correct things that I have got wrong, ask questions.

      Filtering PeopleTools SQL from Performance Monitor Traces

      Mon, 2014-11-03 15:01

I have been doing some on-line performance tuning on a PeopleSoft Financials system using PeopleSoft Performance Monitor (PPM).  End-users have collected verbose PPM traces.  Usually, when I use PPM in a production system, all the components are fully cached by the normal activity of the users (except when the application server caches have recently been cleared).  However, when working in a user test environment it is common to find that the components are not fully cached.  This presents two problems.
      • The application servers spend quite a lot of time executing queries on the PeopleTools tables to load the components, pages and PeopleCode into their caches. We can see in the screenshot of the component trace that there is a warning message that component objects are not fully cached, and that these  cache misses skew timings.
      • In verbose mode, the PPM traces collect a lot of additional transactions capturing executions and fetches against PeopleTools tables. The PPM analytic components cannot always manage the resultant volume of transactions.
Figure 1: Component trace as collected by PPM
If I go further down the same page and look in the SQL Summary, I can see SQL operations against PeopleTools tables (they are easily identifiable in that they generally do not have an underscore in the third character).  Not only are 5 of the top 8 SQL operations related to PeopleTools tables, we can also see that they account for over 13000 executions, which means there are at least 13000 rows of additional data to be read from PSPMTRANSHIST.
Figure 2: SQL Summary of PPM trace with PeopleTools SQL
When I open the longest running server round trip (this is also referred to as a Performance Monitoring Unit or PMU), I can only load 1001 rows before I get a message warning that the maximum row limit has been reached.  The duration summary and the number of executions and fetches cannot be calculated and hence 0 is displayed.
Figure 3: Details of longest PMU with PeopleTools SQL
        Another consequence of the PeopleTools data is that it can take a long time to open the PMU tree. There is no screenshot of the PMU tree here because in this case I had so much data that I couldn't open it before the transaction timed out!
Solution
My solution to this problem is to delete the transactions that relate to PeopleTools SQL and correct the durations, and the number of executions and fetches held in summary transactions.  The rationale is that these transactions would not normally occur in significant quantities in a real production system, and there is not much I can do about them when they do.
        The first step is to clone the trace. I could work on the trace directly, but I want to preserve the original data.
        PPM transactions are held in the table PSPMTRANSHIST. They have a unique identifier PM_INSTANCE_ID. A single server round trip, also called a Performance Monitoring Unit (PMU), will consist of many transactions. They can be shown as a tree and each transaction has another field PM_PARENT_INST_ID which holds the instance of the parent. This links the data together and we can use hierarchical queries in Oracle SQL to walk the tree. Another field PM_TOP_INST_ID identifies the root transaction in the tree.
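For example, the transactions of a single PMU can be walked with a hierarchical query like this (the bind variable is the instance number of the root transaction):
SELECT LEVEL, h.pm_instance_id, h.pm_parent_inst_id
,      h.pm_trans_defn_id, h.pm_trans_duration
FROM   pspmtranshist h
START WITH h.pm_instance_id = :pm_top_inst_id
CONNECT BY PRIOR h.pm_instance_id = h.pm_parent_inst_id
ORDER SIBLINGS BY h.pm_instance_id
/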
Cloning a PPM trace is simply a matter of inserting data into PSPMTRANSHIST.  However, when I clone a PPM trace I have to make sure that the instance numbers are distinct but still link correctly.  In my system I can take a very simple approach.  All the instance numbers actually collected by PPM are greater than 1E16.  So, I will simply use the modulus function to consistently alter the instances to be different.  This approach may break down in future, but it will do for now.
        On an Oracle database, PL/SQL is a simple and effective way to write simple procedural processes.  I have written two anonymous blocks of code.
        Note that the cloned trace will be purged from PPM like any other data by the delivered PPM archive process.

        REM xPT.sql
        BEGIN --duplicate PPM traces
        FOR i IN (
        SELECT h.*
        FROM pspmtranshist h
        WHERE pm_perf_trace != ' ' /*rows must have a trace name*/
        -- AND pm_perf_trace = '9b. XXXXXXXXXX' /*I could specify a specific trace by name*/
        AND pm_instance_id > 1E16 /*only look at instance > 1e16 so I do not clone cloned traces*/
        ) LOOP
        INSERT INTO pspmtranshist
        (PM_INSTANCE_ID, PM_TRANS_DEFN_SET, PM_TRANS_DEFN_ID, PM_AGENTID, PM_TRANS_STATUS,
        OPRID, PM_PERF_TRACE, PM_CONTEXT_VALUE1, PM_CONTEXT_VALUE2, PM_CONTEXT_VALUE3,
        PM_CONTEXTID_1, PM_CONTEXTID_2, PM_CONTEXTID_3, PM_PROCESS_ID, PM_AGENT_STRT_DTTM,
        PM_MON_STRT_DTTM, PM_TRANS_DURATION, PM_PARENT_INST_ID, PM_TOP_INST_ID, PM_METRIC_VALUE1,
        PM_METRIC_VALUE2, PM_METRIC_VALUE3, PM_METRIC_VALUE4, PM_METRIC_VALUE5, PM_METRIC_VALUE6,
        PM_METRIC_VALUE7, PM_ADDTNL_DESCR)
        VALUES
        (MOD(i.PM_INSTANCE_ID,1E16) /*apply modulus to instance number*/
        ,i.PM_TRANS_DEFN_SET, i.PM_TRANS_DEFN_ID, i.PM_AGENTID, i.PM_TRANS_STATUS,
        i.OPRID,
        SUBSTR('xPT'||i.PM_PERF_TRACE,1,30) /*adjust trace name*/,
        i.PM_CONTEXT_VALUE1, i.PM_CONTEXT_VALUE2, i.PM_CONTEXT_VALUE3,
        i.PM_CONTEXTID_1, i.PM_CONTEXTID_2, i.PM_CONTEXTID_3, i.PM_PROCESS_ID, i.PM_AGENT_STRT_DTTM,
        i.PM_MON_STRT_DTTM, i.PM_TRANS_DURATION,
        MOD(i.PM_PARENT_INST_ID,1E16), MOD(i.PM_TOP_INST_ID,1E16), /*apply modulus to parent and top instance number*/
        i.PM_METRIC_VALUE1, i.PM_METRIC_VALUE2, i.PM_METRIC_VALUE3, i.PM_METRIC_VALUE4, i.PM_METRIC_VALUE5,
        i.PM_METRIC_VALUE6, i.PM_METRIC_VALUE7, i.PM_ADDTNL_DESCR);
        END LOOP;
        COMMIT;
        END;
        /
Now I will work on the cloned trace.  I want to remove certain transactions:
        • PeopleTools SQL. Metric value 7 reports the SQL operation and SQL table name. So if the first word is SELECT and the second word is a PeopleTools table name then it is a PeopleTools SQL operation. A list of PeopleTools tables can be obtained from the object security table PSOBJGROUP.
        • Implicit Commit transactions. This is easy - it is just transaction type 425. 
        Having deleted the PeopleTools transactions, I must also
        • Correct the transaction duration for any parents of the deleted transaction.  I work up the hierarchy of transactions and deduct the duration of the transaction that I am deleting from each of its parents.
        • Transaction types 400, 427 and 428 all record PeopleTools SQL time (metric 66).  When I come to such a transaction I also deduct the duration of the deleted transaction from the PeopleTools SQL time metric on the parent transaction.
        • Delete any children of the transactions that I delete. 
        • I must also count each PeopleTools SQL Execution transaction (type 408) and each PeopleTools SQL Fetch transaction (type 414) that I delete. These counts are also deducted from the summaries on the parent transaction 400. 
        The summaries in transaction 400 are used on the 'Round Trip Details' components, and if they are not adjusted you can get misleading results. Without the adjustments, I have encountered PMUs where more than 100% of the total duration is spent in SQL - which is obviously impossible.
        Although this technique of first cloning the whole trace and then deleting the PeopleTools operations can be quite slow, it is not something that you are going to do very often. 
        REM xPT.sql
        REM (c)Go-Faster Consultancy Ltd. 2014
        set serveroutput on echo on
        DECLARE
        l_pm_instance_id_m4 INTEGER;
        l_fetch_count INTEGER;
        l_exec_count INTEGER;
        BEGIN /*now remove PeopleTools SQL transaction and any children and adjust trans durations*/
        FOR i IN (
        WITH x AS ( /*returns PeopleTools tables as defined in Object security*/
        SELECT o.entname recname
        FROM psobjgroup o
        WHERE o.objgroupid = 'PEOPLETOOLS'
        AND o.enttype = 'R'
        )
        SELECT h.pm_instance_id, h.pm_parent_inst_id, h.pm_trans_duration, h.pm_trans_defn_id
        FROM pspmtranshist h
        LEFT OUTER JOIN x
        ON h.pm_metric_value7 LIKE 'SELECT '||x.recname||'%'
        AND x.recname = upper(regexp_substr(pm_metric_value7,'[^ ,]+',8,1)) /*first word after select*/
        WHERE pm_perf_trace like 'xPT%' /*restrict to cloned traces*/
        -- AND pm_perf_trace = 'xPT9b. XXXXXXXXXX' /*work on a specific trace*/
        AND pm_instance_id < 1E16 /*restrict to cloned traces*/
        AND ( x.recname IS NOT NULL
        OR h.pm_trans_defn_id IN(425 /*Implicit Commit*/))
        ORDER BY pm_instance_id DESC
        ) LOOP
        l_pm_instance_id_m4 := TO_NUMBER(NULL);
         
            IF i.pm_parent_inst_id>0 AND i.pm_trans_duration>0 THEN
        FOR j IN(
        SELECT h.pm_instance_id, h.pm_parent_inst_id, h.pm_top_inst_id, h.pm_trans_defn_id
        , d.pm_metricid_3, d.pm_metricid_4
        FROM pspmtranshist h
        INNER JOIN pspmtransdefn d
        ON d.pm_trans_defn_set = h.pm_trans_defn_set
        AND d.pm_trans_defn_id = h.pm_trans_Defn_id
        START WITH h.pm_instance_id = i.pm_parent_inst_id
        CONNECT BY prior h.pm_parent_inst_id = h.pm_instance_id
        ) LOOP
        /*decrement parent transaction times*/
        IF j.pm_metricid_4 = 66 /*PeopleTools SQL Time (ms)*/ THEN --decrement metric 4 on transaction 400
        --dbms_output.put_line('ID:'||i.pm_instance_id||' Type:'||i.pm_trans_defn_id||' decrement metric_value4 by '||i.pm_trans_duration);
        UPDATE pspmtranshist
        SET pm_metric_value4 = pm_metric_value4 - i.pm_trans_duration
        WHERE pm_instance_id = j.pm_instance_id
        AND pm_trans_Defn_id = j.pm_trans_defn_id
        AND pm_metric_value4 >= i.pm_trans_duration
        RETURNING pm_instance_id INTO l_pm_instance_id_m4;
        ELSIF j.pm_metricid_3 = 66 /*PeopleTools SQL Time (ms)*/ THEN --SQL time on serialisation
        --dbms_output.put_line('ID:'||i.pm_instance_id||' Type:'||i.pm_trans_defn_id||' decrement metric_value3 by '||i.pm_trans_duration);
        UPDATE pspmtranshist
        SET pm_metric_value3 = pm_metric_value3 - i.pm_trans_duration
        WHERE pm_instance_id = j.pm_instance_id
        AND pm_trans_Defn_id = j.pm_trans_defn_id
        AND pm_metric_value3 >= i.pm_trans_duration;
        END IF;

        UPDATE pspmtranshist
        SET pm_trans_duration = pm_trans_duration - i.pm_trans_duration
        WHERE pm_instance_id = j.pm_instance_id
        AND pm_trans_duration >= i.pm_trans_duration;
        END LOOP;
        END IF;

        l_fetch_count := 0;
        l_exec_count := 0;
        FOR j IN( /*identify transaction to be deleted and any children*/
        SELECT pm_instance_id, pm_parent_inst_id, pm_top_inst_id, pm_trans_defn_id, pm_metric_value3
        FROM pspmtranshist
        START WITH pm_instance_id = i.pm_instance_id
        CONNECT BY PRIOR pm_instance_id = pm_parent_inst_id
        ) LOOP
        IF j.pm_trans_defn_id = 408 THEN /*if PeopleTools SQL*/
        l_exec_count := l_exec_count + 1;
        ELSIF j.pm_trans_defn_id = 414 THEN /*if PeopleTools SQL Fetch*/
        l_fetch_count := l_fetch_count + j.pm_metric_value3;
        END IF;
        DELETE FROM pspmtranshist h /*delete tools transaction*/
        WHERE h.pm_instance_id = j.pm_instance_id;
        END LOOP;

        IF l_pm_instance_id_m4 > 0 THEN
        --dbms_output.put_line('ID:'||l_pm_instance_id_m4||' Decrement '||l_exec_Count||' executions, '||l_fetch_count||' fetches');
        UPDATE pspmtranshist
        SET pm_metric_value5 = pm_metric_value5 - l_exec_count
        , pm_metric_value6 = pm_metric_value6 - l_fetch_count
        WHERE pm_instance_id = l_pm_instance_id_m4;
        l_fetch_count := 0;
        l_exec_count := 0;
        END IF;

        END LOOP;
        END;
        /
        Now, I have a second PPM trace that I can open in the analytic component.
Figure 4: Original and Cloned PPM traces

        When I open the cloned trace, both timings in the duration summary have reduced as have the number of executions and fetches.  The durations of the individual server round trips have also reduced.
Figure 5: Component Trace without PeopleTools transactions
        All of the PeopleTools SQL operations have disappeared from the SQL summary.
Figure 6: SQL Summary of PPM trace after removing PeopleTools SQL transactions
        The SQL summary now only has 125 rows of data.
        Figure 7: SQL Summary of PMU without PeopleTools SQL
        Now, the PPM tree component opens quickly and without error.
Figure 8: PMU Tree after removing PeopleTools SQL
        There may still be more transactions in a PMU than I can show in a screenshot, but I can now find the statement that took the most time quite quickly.

Figure 9: Long SQL transaction further down same PMU tree
Conclusions
I think that it is reasonable and useful to remove PeopleTools SQL operations from a PPM trace.
In normal production operation, components will mostly be cached, and this approach renders traces collected in non-production environments both usable in the PPM analytic components and more realistic for performance tuning. However, when deleting transactions from a PMU, it is essential that the summary data held in other transactions in the same PMU is also corrected so that the metrics remain consistent.

        Minimising Parse Time in Application Engine with ReUseStatement

        Fri, 2014-10-24 10:15
This article explains how to determine the overhead of using literal values rather than bind variables in SQL submitted by PeopleSoft Application Engine programs. Using a combination of PeopleSoft Application Engine Batch Timings and Oracle Active Session History (ASH), it is possible to determine where it is most effective to alter this behaviour by setting the ReUse attribute.
ReUse Statement Flag
I originally wrote about the Performance Benefits of ReUseStatement in Application Engine over 5 years ago, and the problem is no less significant today than it was then.  There are still many places in the delivered PeopleSoft application that would benefit from it.  However, it is not possible to simply set the attribute across all Application Engine programs because it will cause errors in steps where SQL is dynamically generated, so each case must be considered.  Where the ReUse attribute is changed, it must be tested and migrated like any other customisation.

        Recently, I have been encountering problems in a number of places on a Financials system which were resolved by enabling ReUseStatement.  So I started by calculating how much time was lost by not setting it.
Application Engine Batch Timings
If an Application Engine step is not reused, then it is compiled prior to every execution, during which the Application Engine bind variables are resolved to literals. The number and duration of compilations and executions are reported in the Batch Timings, the production of which is controlled with the TraceAE flag in the Process Scheduler configuration file (psprcs.cfg).

        ;-------------------------------------------------------------------------
        ; AE Tracing Bitfield
        ;
        ; Bit Type of tracing
        ; --- ---------------
        ...
        ; 128 - Timings Report to AET file
        ...
        ; 1024 - Timings Report to tables
        ...
        TraceAE=1152
        ;------------------------------------------------------------------------

        Whatever other TraceAE flags you set, I would always recommend that you set 128 and 1024 so that batch timings are always emitted to both file and database.  The overhead of enabling them is negligible because they are managed in memory at run time and physically written once when the Application Engine program ends.

NB: The settings in the configuration file can be overridden by command line options.  If you specify the -TRACE parameter in the Process Scheduler definitions, remember to also set these flags.

Batch timings are written to the AE Trace file at the end of an Application Engine program, and to the PS_BAT_TIMINGS PeopleTools tables at the successful end of an Application Engine program.
It can be useful to have batch timings in the trace file of an Application Engine program that failed, because they are not written to the database.  As your system runs, you will build up batch timings for all of your successful Application Engine processes (which will be most of them).  This is a useful source of performance metrics.
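For example, a minimal query of the batch timings tables to review one process instance might look like this (a sketch only; the same tables and columns are used in the profile queries later in this article, and the times are stored in milliseconds):

SELECT d.bat_program_name||'.'||d.detail_id detail_id
,      d.compile_count, d.compile_time/1000 compile_secs
,      d.execute_count, d.execute_time/1000 execute_secs
FROM   ps_bat_timings_log l
,      ps_bat_timings_dtl d
WHERE  d.process_instance = l.process_instance
AND    l.process_instance = &process_instance
ORDER  BY d.compile_time + d.execute_time DESC;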
Compilations, Execution and ReUse
In this example, the number of compilations is equal to the number of executions because ReUseStatement is not set.

                          PeopleSoft Application Engine Timings
                              (All timings in seconds)

                                C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                  8453      2.8    8453    685.6       0      0.0    688.4
...

        With ReUse Statement enabled, there is now only a single compilation, and most of the time is saved in execution not compilation.

                                C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                     1      0.0    8453    342.3       0      0.0    342.3
...

        So we can use the batch timings to identify steps where ReUseStatement is not set because they have as many compilations as executions, and then we can profile the top statements.
Profile Compilations
This query produces a simple profile of batch timings for statements.
• In sub-query x it extracts batch timings for statements with more than one compilation in processes that ended in the last 7 days.
• There is a long-standing bug in batch timings where negative timings can be returned because the clock that returns milliseconds recycles back to zero every 2^16 milliseconds (about 65.5 seconds).  Sub-query y calculates an adjustment from the difference between the process's wall-clock duration (end time minus begin time) and its reported elapsed time, and sub-query z applies that adjustment to any timing that is negative.
        • Arbitrarily, I am only looking at statements with more than a total of 10000 compilations.

        REM ReUseCand.sql
REM (c)Go-Faster Consultancy Ltd. 2014
        COLUMN detail_id FORMAT a32
        COLUMN step_time FORMAT 999990 HEADING 'AE|Step|Secs'
        COLUMN compile_count HEADING 'AE|Compile|Count'
        COLUMN execute_count HEADING 'AE|Execute|Count'
        COLUMN processes HEADING 'Num|Process|Instances'
        COLUMN process_name HEADING 'Process|Name'
        SPOOL ReUseCand
        WITH x AS (
        SELECT l.process_instance, l.process_name
        , l.time_elapsed/1000 time_elapsed
        , l.enddttm-l.begindttm diffdttm
        , d.bat_program_name||'.'||d.detail_id detail_id
        , d.compile_count, d.compile_time/1000 compile_time
        , d.execute_time/1000 execute_time
        FROM ps_bat_Timings_dtl d
        , ps_bat_timings_log l
        WHERE d.process_instance = l.process_instance
        AND d.compile_count = d.execute_count
        AND d.compile_count > 1
        AND l.enddttm > SYSDATE-7
        ), y as (
        SELECT x.*
        , GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
        +EXTRACT(hour FROM diffdttm))
        +EXTRACT(minute FROM diffdttm))
        +EXTRACT(second FROM diffdttm)-x.time_elapsed) delta
        FROM x)
        , z as (
        SELECT process_instance, process_name, detail_id
        , CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
        ELSE time_elapsed END time_elapsed
        , compile_count
        , CASE WHEN compile_time < 0 THEN compile_time+delta
        ELSE compile_time END AS compile_time
        , CASE WHEN execute_time < 0 THEN execute_time+delta
        ELSE execute_time END AS execute_time
        FROM y
        ), a as (
        SELECT process_name, detail_id
        , SUM(compile_time+execute_time) step_time
        , SUM(compile_count) compile_count
        , COUNT(DISTINCT process_instance) processes
        FROM z
        GROUP BY process_name, detail_id)
        SELECT * FROM a
        WHERE compile_count >= 10000
        ORDER BY step_time DESC
        /
        SPOOL OFF

So now I have a list of steps with lots of compilations.  I know how long they took, but I don't know how much time I might save by enabling ReUseStatement.  That will save some time in Application Engine, but it will also cut down database parse time.  So now I need to determine the parse time from ASH data.

Process                                          Step    Compile    Process
Name         DETAIL_ID                           Secs      Count  Instances
------------ -------------------------------- ------ ---------- ----------
AP_PSTVCHR   AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step03.S       4313      49536         28
FS_VATUPDFS  FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step02.S       2799      49536         28
PC_BI_TO_PC  PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10
FS_VATUPDFS  FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3
GL_JEDIT_0   FS_CEDT_ECFS.iTSELog.iTSELog.S     1628      13104        519
AP_APY2015   AP_APY2015.V_CREATE.Step14.H       1192      11318         19
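Before joining ASH to the batch timings, a simpler sketch such as this shows where Application Engine sessions are spending time on hard parse.  It relies on the module being set to PSAE.<program>.<session id> and the action to the step name, which the fuller query below also depends upon, and it counts 10 seconds per distinct sample as before:

SELECT h.module, h.action
,      10*COUNT(DISTINCT h.sample_id) hard_parse_secs
FROM   dba_hist_active_sess_history h
WHERE  h.module LIKE 'PSAE.%'
AND    h.in_hard_parse = 'Y'
AND    h.sample_time >= SYSDATE-7
GROUP  BY h.module, h.action
ORDER  BY hard_parse_secs DESC;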

This next query is based on ReUseCand.sql above, but includes scalar queries on the ASH data for each step.
        • WARNING: This query can run for a long period because it has to scan ASH data for each entry in BAT_TIMINGS_DTL.
• This time, in sub-query x, I am looking at a rolling 7-day period that ends at the top of the current hour, because by then the ASH data will have been flushed to the AWR repository.
        • In sub-query y there are two scalar queries that retrieve the DB time spent on hard parse, and all DB time for each batch timing entry.  These queries count 10 seconds for each distinct sample ID to estimate elapsed time rather than total DB time.
        • The query does not group by process name because one step can be called from many Application Engine programs and I want to aggregate the total time across all of them.

REM ReUseCandASH.sql
        REM (c)Go-Faster Consultancy Ltd. 2014
        COLUMN processes HEADING 'Num|Process|Instances'
        COLUMN process_name HEADING 'Process|Name'
        COLUMN detail_id FORMAT a32
        COLUMN step_time HEADING 'AE|Step|SEcs' FORMAT 999990
        COLUMN compile_count HEADING 'AE|Compile|Count'
        COLUMN execute_count HEADING 'AE|Execute|Count'
        COLUMN hard_parse_secs HEADING 'Hard|Parse|Secs' FORMAT 99990
        COLUMN ash_secs HEADING 'DB|Time' FORMAT 99990
        SPOOL ReUseCandASH
        WITH x AS (
        SELECT l.process_instance, l.process_name
        , l.time_elapsed/1000 time_elapsed
        , l.begindttm, l.enddttm
        , l.enddttm-l.begindttm diffdttm
        , d.bat_program_name||'.'||d.detail_id detail_id
        , d.compile_count, d.compile_time/1000 compile_time
        , d.execute_time/1000 execute_time
        FROM ps_bat_timings_dtl d
        , ps_bat_timings_log l
        WHERE d.process_instance = l.process_instance
        AND d.compile_count = d.execute_count
        AND d.compile_count > 1
        AND l.enddttm >= TRUNC(SYSDATE-7,'HH24')
        AND l.enddttm < TRUNC(SYSDATE,'HH24')
        ), y as (
        SELECT x.*
        , GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
        +EXTRACT(hour FROM diffdttm))
        +EXTRACT(minute FROM diffdttm))
        +EXTRACT(second FROM diffdttm)-x.time_Elapsed) delta
        FROM x)
        , z as (
        SELECT process_instance, process_name, detail_id
        , CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
        ELSE time_elapsed END AS time_elapsed
        , compile_count
        , CASE WHEN compile_time < 0 THEN compile_time+delta
        ELSE compile_time END AS compile_time
        , CASE WHEN execute_time < 0 THEN execute_time+delta
        ELSE execute_time END AS execute_time
        , (
        SELECT 10*COUNT(DISTINCT h.sample_id)
        FROM psprcsque q
        , dba_hist_snapshot x
        , dba_hist_active_sess_history h
        WHERE q.prcsinstance = y.process_instance
        AND x.begin_interval_time <= y.enddttm
        AND X.END_INTERVAL_TIME >= y.begindttm
        AND h.sample_time between y.begindttm and y.enddttm
        AND h.SNAP_id = x.SNAP_id
        AND h.dbid = x.dbid
        AND h.instance_number = x.instance_number
        AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
        AND h.action = y.detail_id
        AND h.in_hard_parse = 'Y'
        ) hard_parse_secs
        , (
        SELECT 10*COUNT(DISTINCT h.sample_id)
        FROM psprcsque q
        , dba_hist_snapshot x
        , dba_hist_active_sess_history h
        WHERE q.prcsinstance = y.process_instance
        AND x.begin_interval_time <= y.enddttm
        AND X.END_INTERVAL_TIME >= y.begindttm
        AND h.sample_time between y.begindttm and y.enddttm
        AND h.SNAP_id = X.SNAP_id
        AND h.dbid = x.dbid
        AND h.instance_number = x.instance_number
        AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
        AND h.action = y.detail_id
        ) ash_secs
        FROM y
        ), a AS (
        SELECT /*process_name ,*/ detail_id
        , SUM(compile_time+execute_time) step_time
        , SUM(compile_count) compile_count
        , COUNT(DISTINCT process_instance) processes
        , SUM(hard_parse_secs) hard_parse_secs
        , SUM(ash_secs) ash_secs
        FROM z
        GROUP BY /*process_name,*/ detail_id)
        SELECT a.*
        FROM a
        WHERE compile_count >= 10000
        ORDER BY step_time DESC
        /
        spool off

        Now we can also see how much time the database is spending on hard parse on each step, and it is this time that is likely to be saved by enabling ReUseStatement.
        However, before we can enable the ReUseStatement attribute we must first manually inspect each step in Application Designer and determine whether doing so would break anything.  The Comment column in this profile was added manually as I did that.  Some statements I can change, some I have to accept the overhead.

                                   Step    Compile    Process      Parse         DB
DETAIL_ID                          Secs      Count  Instances       Secs       Time Comment
-------------------------------- ------ ---------- ---------- ---------- ---------- -------------------------------------
AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10      11820      11920 Set ReUseStatement
FS_CEDT_ECMB.4EditCDT.uValCDT.S    5531      10289        679        620       5870 Dynamic SQL, can't set ReUseStatement
APVEDTMOVE.UPDQTY03.Step03.S       4306      49471         27       4020       4100 Set ReUseStatement
FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3       3150       3860 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECFS.iTSELog.iTSELog.S     3332      19073        716       2130       3520 Dynamic SQL, can't set ReUseStatement
APVEDTMOVE.UPDQTY03.Step02.S       2796      49471         27       2730       2820 Set ReUseStatement
PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10        230       1920 Set ReUseStatement
FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3          0          0 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.4uAnchCT.uAnchCDT.S   1319      10289        679        510       1290 Dynamic SQL, can't set ReUseStatement
AP_APY2015.V_CREATE.Step14.H       1169      11094         19          0          0 Set ReUseStatement
GL_JETSE.GA100.CHKEDT.S            1121      15776        569        860        930 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.iTSELog.iTSELog.S      988      10289        679        450        990 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.uMarkVal.uMarkVal.S    711      10289        679         50        670 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.uMarkInv.uMarkInv.S    668      10289        679         40        790 Dynamic SQL, can't set ReUseStatement
        • Due to a bug in the instrumentation of Application Engine, the session's action attribute is not set for Do Select (type D) and Do When (type H) steps.  ASH data cannot therefore be matched for them.
        • More DB Time is reported for FS_CEDT_ECMB.uMarkInv.uMarkInv.S than is reported by batch timings.  This is a consequence of ASH sampling, where we count 10 seconds for each sample.
Conclusion
Setting ReUseStatement is very simple because it doesn't involve changing SQL, but there are lots of places where it could be set.  This technique picks out the relatively few places where doing so could potentially have a significant effect.

          Who is using this index?

          Wed, 2014-09-03 05:30
          Or, to put it another way, I want to change or drop this index, who and what will I impact?
          The Challenge 
          The problem that I am going to outline is certainly not exclusive to PeopleSoft, but I am going to illustrate it with examples from PeopleSoft. I often find tables with far more indexes than are good for them.
          • The Application Designer tool makes it very easy for developers to add indexes to tables. Sometimes, too easy!
          • Sometimes, DBAs are too quick to unquestioningly follow the advice of the Oracle tuning advisor to add indexes.
          Recently, I have been working on 3 different PeopleSoft Financials systems where I have found major tables with a host of indexes.

          There are several concerns:
          • Indexes are maintained during data modification. The more indexes you have, the greater the overhead. 
          • The more indexes you have, particularly if they lead on the same columns, the more likely Oracle is to use the wrong one, resulting in poorer performance.
          • There is of course also a space overhead for each index, but this is often of less concern. 
If you can get rid of an index, Oracle doesn't have to store, maintain or use it.

          In some cases, I have wanted to remove unnecessary indexes, and in others to adjust indexes. However, this immediately raises the question of where are these indexes used, and who will be impacted by the change. Naturally, I turn to the Active Session History (ASH) to help me find the answers. 
Index Maintenance Overhead during DML
ASH reports the object number, file number, block number and (from 11g) row number within the block being accessed by physical file operations. However, the values reported in v$active_session_history (and the other ASH views) are not reliable for other events because they are merely left over from the previous file event that reported them. So, we can profile the amount of time spent on physical I/O on different tables and indexes, but not other forms of DB Time, such as CPU time, spent accessing the blocks in the buffer cache.

          Let me take an extreme example from PeopleSoft Global Payroll. The table PS_GP_RSLT_ACUM is one of the principal result tables. It has only a single unique index (with the same name). The table is populated with the simplest of insert statements.
          INSERT /*GPPRDMGR_I_ACUM*/ INTO PS_GP_RSLT_ACUM
          (CAL_RUN_ID ,EMPLID ,EMPL_RCD ,GP_PAYGROUP ,CAL_ID ,RSLT_SEG_NUM ,PIN_NUM ,EMPL_RCD_ACUM ,ACM_FROM_DT ,ACM_THRU_DT ,USER_KEY1 ,USER_KEY2 ,USER_KEY3 ,USER_KEY4 ,USER_KEY5 ,USER_KEY6 ,SLICE_BGN_DT ,SLICE_END_DT ,COUNTRY ,ACM_TYPE ,ACM_PRD_OPTN ,CALC_RSLT_VAL ,CALC_VAL ,USER_ADJ_VAL ,PIN_PARENT_NUM ,CORR_RTO_IND ,ORIG_CAL_RUN_ID ,SEQ_NUM8 ,VALID_IN_SEG_IND ,CALLED_IN_SEG_IND )
          VALUES
          (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30)
I can profile the ASH data for just this statement over the last week on a production system. Note that DBA_OBJECTS and DBA_DATA_FILES are outer joined to the ASH data and only matched for events like 'db file%'.
          SELECT o.object_type, o.object_name
          , f.tablespace_name, NVL(h.event,'CPU+CPU Wait') event
          , SUM(10) ash_Secs
          FROM dba_hist_Active_sess_history h
          LEFT OUTER JOIN dba_objects o
          ON o.object_id = h.current_obj#
          AND h.event like 'db file%'
          LEFT OUTER JOIN dba_data_files f
          ON f.file_id = h.current_file#
          AND h.event like 'db file%'
          WHERE h.sql_id = '4ru0618dswz3y'
          AND h.sample_time >= sysdate-7
          GROUP BY o.object_type, o.object_name, h.event, f.tablespace_name
          ORDER BY ash_secs DESC
          /
          A full payroll calculation inserts over 3 million rows on this particular system. The calculation is run incrementally several times per week during which old rows are deleted and newly recalculated rows inserted.  Looking at just this insert statement:
• 30% of the time is spent on CPU operations; we cannot profile that time further with ASH.
          • 38% of the time is spent reading from the table and index, yet this is a simple INSERT … VALUES statement.
OBJECT_TYPE         OBJECT_NAME        TABLESPACE_NAME EVENT                      ASH_SECS
------------------- ------------------ --------------- ------------------------ ----------
                                                        CPU+CPU Wait                   1040
                                       UNDOTBS1         db file sequential read         900
INDEX SUBPARTITION  PS_GP_RSLT_ACUM    GP201408IDX      db file sequential read         750
TABLE SUBPARTITION  PS_GP_RSLT_ACUM    GP201408TAB      db file sequential read         550
                                                        gc current grant 2-way           70
                                                        cursor: pin S wait on X          60
                                                        db file sequential read          10
                                                        buffer exterminate               10
                                                        row cache lock                   10
                                                                                 ----------
                                                                                       3400
          More time is spent reading the index than the table.  That is not a surprise.  When you insert a row into a table, you also insert it into the index. Rows in index leaf blocks are ordered by the key columns, and the new entry must go into the right place, so you have to read down the index from the root block, through the branch blocks, to find the correct leaf block for the new entry.
          [Digression: Counter-intuitively index compression can improve DML performance. It does for this index.  The overhead of the compression processing can be outweighed by the saving in physical I/O.  It depends.]
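Key compression is specified when an index is built or rebuilt. A minimal sketch for a simple, non-partitioned index (PS_MY_INDEX is a hypothetical name; the right prefix length has to be tested case by case, and a partitioned index such as the one in this example would have to be rebuilt partition by partition):

ALTER INDEX sysadm.ps_my_index REBUILD COMPRESS 2;  -- compress the two leading key columns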
Profile Physical I/O by Object
I can twist this query around and profile DB_TIME by object for 'db file%' events.
          SELECT o.object_type, o.object_name, sum(10) ash_secs
          FROM dba_hist_active_sess_history h
          , dba_objects o
          WHERE o.object_id = h.current_obj#
          AND h.event LIKE 'db file%'
          AND h.sample_time > sysdate-7
          GROUP BY o.object_type, o.object_name
          ORDER BY ash_Secs DESC
          Now I can see upon which objects the most time is spent on physical I/O.
OBJECT_TYP OBJECT_NAME          ASH_SECS
---------- ------------------ ----------
TABLE      PS_ITEM                101130
INDEX      PS_WS_ITEM              98750
TABLE      PS_PROJ_RESOURCE        97410
TABLE      PS_BI_LINE              85040
INDEX      PSAPSAPMSGSUBCON        75070
TABLE      PS_BI_HDR               37230
TABLE      PS_RS_ASSIGNMENT        29460
INDEX      PS_PSAPMSGPUBHDR        23230
INDEX      PS_BI_ACCT_ENTRY        21490
TABLE      PS_VOUCHER              21330
TABLE      PS_VCHR_ACCTG_LINE      21250
TABLE      PS_BI_ACCT_ENTRY        18860

                              ----------
sum                              1382680
This is a worthwhile exercise; it shows the sources of physical I/O in an application.

          However, if you want to find where an index is used, then this query will also identify SQL_IDs where the index is either used in the query or maintained by DML. If I am interested in looking for places where changing or deleting an index could have an impact then I am only interested in SQL query activity. ASH samples which relate to index maintenance are a false positive. Yet, I cannot simply eliminate ASH samples where the SQL_OPNAME is not SELECT because the index may be used in a query within the DML statement.

          Another problem with this method is that it matches SQL to ASH by object ID. If someone has rebuilt an index, then its object number changes.

          A different approach is required.
Index Use from SQL Plans Captured by AWR
During an AWR snapshot, the top-n SQL statements by each SQL criterion in the AWR report (Elapsed Time, CPU Time, Parse Calls, Shareable Memory, Version Count) are captured (see dbms_workload_repository).  The SQL plans are exposed by the view DBA_HIST_SQL_PLAN.

On PeopleSoft systems, I generally recommend decreasing the snapshot interval from the default of 60 minutes to 15. The main reason is that SQL gets aged out of the library cache very quickly in PeopleSoft systems. They generate lots of dynamic code, often with literal values rather than bind variables. Cursor sharing is not recommended for PeopleSoft, so different bind variables result in different SQL_IDs. The dynamic code also results in different SQL_IDs even with cursor sharing. Therefore, increasing the snapshot frequency means that more SQL statements will be captured. This will increase the total volume of the AWR repository simply because there are more snapshots. However, the overall volume of ASH data captured does not change; it just gets copied to the repository earlier.
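Changing the interval is a single call to DBMS_WORKLOAD_REPOSITORY (a sketch; the interval is specified in minutes, and AWR requires the Diagnostics Pack licence):

EXEC dbms_workload_repository.modify_snapshot_settings(interval => 15);
SELECT snap_interval, retention FROM dba_hist_wr_control;  -- check the current settings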

          On DBA_HIST_SQL_PLAN the object ID, owner, type and name are recorded, so I can find the plans which referenced a particular object. I am going to carry on with the example from a PeopleSoft Financials system, and look at indexes on the PS_PROJ_RESOURCE table.

          These are some of the indexes on PS_PROJ_RESOURCE. We have 4 indexes that all lead on PROCESS_INSTANCE. I suspect that not all are essential, but I need to work out what is using them, and which one I should retain.
                          Col
INDEX_NAME                Pos COLUMN_NAME          COLUMN_EXPRESSION
------------------ ---------- -------------------- ----------------------------------

PSJPROJ_RESOURCE            1 PROCESS_INSTANCE
                            2 BUSINESS_UNIT_GL
                            3 BUSINESS_UNIT
                            4 PROJECT_ID
                            5 ACTIVITY_ID
                            6 CUST_ID

PSLPROJ_RESOURCE            1 PROCESS_INSTANCE
                            2 EMPLID
                            3 EMPL_RCD
                            4 TRANS_DT

PSMPROJ_RESOURCE            1 PROCESS_INSTANCE
                            2 BUSINESS_UNIT
                            3 PROJECT_ID
                            4 ACTIVITY_ID
                            5 RESOURCE_ID

PSNPROJ_RESOURCE            1 PROCESS_INSTANCE
                            2 BUSINESS_UNIT
                            3 TIME_RPTG_CD

I find it easier to extract the ASH data to my own working storage table. For each index on PS_PROJ_RESOURCE, I am going to extract a distinct list of plan hash values. I will then extract all ASH data for those plans. Note that I have not joined on SQL_ID to DBA_HIST_SQL_PLAN. That is because different SQL_IDs can produce the same execution plan. The plan is equally valid for all SQL_IDs that produce it, not just the one where the SQL_ID also matches.
          DROP TABLE my_ash purge
          /
          CREATE TABLE my_ash COMPRESS AS
          WITH p AS (
          SELECT DISTINCT p.plan_hash_value, p.object#, p.object_owner, p.object_type, p.object_name
          FROM dba_hist_sql_plan p
          WHERE p.object_name like 'PS_PROJ_RESOURCE'
          AND p.object_type LIKE 'INDEX%'
          AND p.object_owner = 'SYSADM'
          )
          SELECT p.object# object_id, p.object_owner, p.object_type, p.object_name
          , h.*
          FROM dba_hist_active_sess_history h
          , p
          WHERE h.sql_plan_hash_value = p.plan_hash_value
          /
          I am fortunate that PeopleSoft is a well instrumented application. Module and Action are set to fairly sensible values that will tell me whereabouts in the application the ASH sample relates. In the following query, I have omitted any ASH data generated by SQL*Plus, Toad, or SQL Developer, and also any generated by Oracle processes to prevent statistics collection jobs being included.
          Set pages 999 lines 150 trimspool on
          break on object_name skip 1
          compute sum of ash_secs on object_name
          column ash_secs heading 'ASH|Secs' format 9999999
          column module format a20
          column action format a32
          column object_name format a18
          column max_sample_time format a19 heading 'Last|Sample'
          column sql_plans heading 'SQL|Plans' format 9999
          column sql_execs heading 'SQL|Execs' format 99999
          WITH h AS (
          SELECT object_name
          , CASE WHEN h.module IS NULL THEN REGEXP_SUBSTR(h.program,'[^.@]+',1,1)
          WHEN h.module LIKE 'PSAE.%' THEN REGEXP_SUBSTR(h.module,'[^.]+',1,2)
          ELSE REGEXP_SUBSTR(h.program,'[^.@]+',1,1)
          END as module
          , CASE WHEN h.action LIKE 'PI=%' THEN NULL
          ELSE h.action
          END as action
          , CAST(sample_time AS DATE) sample_time
          , sql_id, sql_plan_hash_value, sql_exec_id
          FROM my_ash h
          )
          SELECT object_name, module, action
          , sum(10) ash_secs
          , COUNT(DISTINCT sql_plan_hash_value) sql_plans
          , COUNT(DISTINCT sql_id||sql_plan_hash_value||sql_exec_id) sql_execs
          , MAX(sample_time) max_sample_time
          FROM h
          WHERE NOT LOWER(module) IN('oracle','toad','sqlplus','sqlplusw')
          AND NOT LOWER(module) LIKE 'sql%'
          GROUP BY object_name, module, action
          ORDER BY SUBSTR(object_name,4), object_name, ash_Secs desc
          /
          Spool off
          I now have a profile of how much each index is used. In this particular case I found something using every index.  It is possible that you will not find anything that uses some indexes.
                                                                            ASH   SQL    SQL Last
OBJECT_NAME        MODULE               ACTION                             Secs Plans  Execs Sample
------------------ -------------------- -------------------------------- ------- ----- ------ -------------------
PSJPROJ_RESOURCE   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step24.S         7300     1     66 06:32:57 27/08/2014
                   PC_PRICING           GFCPBINT_AE.CallmeA.Step24.S           40     1      2 08:38:57 22/08/2014
******************                                                        -------
sum                                                                          7340

PSLPROJ_RESOURCE   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step28.S         1220     1     53 06:33:17 27/08/2014
******************                                                        -------
sum                                                                          1220

PSMPROJ_RESOURCE   PC_TL_TO_PC          GFCPBINT_AE.XxBiEDM.Step07.S           60     2      6 18:35:18 20/08/2014
******************                                                        -------
sum                                                                            60

PSNPROJ_RESOURCE   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step26.S         6720     1     49 18:53:58 26/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step30.S         3460     1     60 06:33:27 27/08/2014
                   GFCOA_CMSN           GFCOA_CMSN.01INIT.Step01.S           2660     1     47 19:19:40 26/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step06.S         1800     1     52 18:53:28 26/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeG.Step01.S         1740     1     61 06:34:17 27/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step02.S         1680     1     24 18:53:18 26/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step10.S         1460     1     33 17:26:26 22/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step08.S          920     1     26 17:26:16 22/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step36.S          460     1     18 18:26:38 20/08/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Step09.S          420     1     16 06:33:07 27/08/2014
                   PC_PRICING           GFCPBINT_AE.CallmeG.Step01.S          200     1     10 08:09:55 22/08/2014
                   PC_AP_TO_PC          GFCPBINT_AE.CallmeH.Step00A.S         170     1     17 21:53:26 21/08/2014
                   PC_PRICING           GFCPBINT_AE.CallmeA.Step36.S           20     1      1 08:02:46 05/08/2014
                   PC_PRICING           GFCPBINT_AE.CallmeA.Step30.S           20     1      1 13:42:48 04/08/2014
                   PC_PRICING           GFCPBINT_AE.CallmeA.Step06.S           20     1      1 15:58:35 28/07/2014
                   PC_TL_TO_PC          GFCPBINT_AE.CallmeA.Pseudo.S           20     1      1 19:45:11 06/08/2014
******************                                                        -------
sum                                                                         21770

The next stage is to look at individual SQL statements.  This query looks for which SQL statement is using a particular index on PROJ_RESOURCE.  If I can't find the SQL that cost the most time, then I just choose another SQL with the same plan.
          • I have found that sometimes a plan is captured by AWR, but the SQL statement is not. Personally, I think that is a bug. Working around it has made the following query quite complicated.
          Break on object_name skip 1 
          column ash_secs heading 'ASH|Secs' format 9999999
          Set long 50000
          Column cmd Format a200
          Spool dmk

          WITH h AS (
          SELECT h.object_name
          , CASE WHEN h.module IS NULL THEN REGEXP_SUBSTR(h.program,'[^.@]+',1,1)
          WHEN h.module LIKE 'PSAE.%' THEN REGEXP_SUBSTR(h.module,'[^.]+',1,2)
          ELSE REGEXP_SUBSTR(h.program,'[^.@]+',1,1)
          END as module
          , CASE WHEN h.action LIKE 'PI=%' THEN NULL
          ELSE h.action
          END as action
          , h.sql_id, h.sql_plan_hash_value
, t.command_type --not null if plan and statement captured
          FROM my_ash h
          LEFT OUTER JOIN (
          SELECT t1.*
          FROM dba_hist_sqltext t1
          , dba_hist_sql_plan p1
          WHERE t1.sql_id = p1.sql_id
          AND p1.id = 1
          ) t
          ON t.sql_id = h.sql_id
          AND t.dbid = h.dbid
          WHERE h.object_name IN('PSMPROJ_RESOURCE')
          AND h.object_Type = 'INDEX'
          AND h.object_owner = 'SYSADM'
          And NOT LOWER(module) IN('oracle','toad','sqlplus','sqlplusw')
          AND NOT LOWER(module) LIKE 'sql%'
          ), x AS ( --aggregate DB time by object and statement
          SELECT object_name, sql_id, sql_plan_hash_value
          , sum(10) ash_secs
          , 10*COUNT(command_type) sql_secs --DB time for captured statements only
          FROM h
          WHERE NOT LOWER(module) IN('oracle','toad','sqlplus','sqlplusw')
          AND NOT LOWER(module) LIKE 'sql%'
          GROUP BY object_name, sql_id, sql_plan_hash_value
          ), y AS ( --rank DB time per object and plan
          SELECT object_name, sql_id, sql_plan_hash_value
          , ash_secs
          , SUM(ash_secs) OVER (PARTITION BY object_name, sql_plan_hash_value) plan_ash_secs
          , row_number() OVER (PARTITION BY object_name, sql_plan_hash_value ORDER BY sql_Secs DESC) ranking
          FROM x
          ), z AS (
          SELECT object_name
          , CASE WHEN t.sql_text IS NOT NULL THEN y.sql_id
          ELSE (SELECT t1.sql_id
          FROM dba_hist_sqltext t1
          , dba_hist_sql_plan p1
          WHERE t1.sql_id = p1.sql_id
          AND p1.plan_hash_value = y.sql_plan_hash_value
          AND rownum = 1) --if still cannot find statement just pick any one
          END AS sql_id
          , y.sql_plan_hash_value, y.plan_ash_secs
          , CASE WHEN t.sql_text IS NOT NULL THEN t.sql_text
          ELSE (SELECT t1.sql_Text
          FROM dba_hist_sqltext t1
          , dba_hist_sql_plan p1
          WHERE t1.sql_id = p1.sql_id
          AND p1.plan_hash_value = y.sql_plan_hash_value
          AND rownum = 1) --if still cannot find statement just pick any one
          END AS sql_text
          from y
          left outer join dba_hist_sqltext t
          on t.sql_id = y.sql_id
          WHERE ranking = 1 --captured statement with most time
          )
          SELECT *
          --'SELECT * FROM table(dbms_xplan.display_awr('''||sql_id||''','||sql_plan_hash_value||',NULL,''ADVANCED''))/*'||object_name||':'||plan_ash_Secs||'*/;' cmd
          FROM z
          ORDER BY object_name, plan_ash_secs DESC
          /
          Spool off
          So now I can see the individual SQL statements.
          PSJPROJ_RESOURCE   f02k23bqj0xc4          3393167302          7340 UPDATE PS_PROJ_RESOURCE C SET (C.Operating_Unit, C.CHARTFIELD1, C.PRODUCT, C.CLA
          SS_FLD, C.CHARTFIELD2, C.VENDOR_ID, C.contract_num, C.contract_line_num, …

          PSLPROJ_RESOURCE 2fz0gcb2774y0 821236869 1220 UPDATE ps_proj_resource p SET p.deptid = NVL (( SELECT j.deptid FROM ps_job j WH
          ERE j.emplid = p.emplid AND j.empl_rcd = p.empl_rcd AND j.effdt = ( SELECT MAX (…

          PSMPROJ_RESOURCE 96cdkb7jyq863 338292674 50 UPDATE PS_GFCBI_EDM_TA04 a SET a.GFCni_amount = ( SELECT x.resource_amount FROM
          PS_PROJ_RESOURCE x WHERE x.process_instance = …

          1kq9rfy8sb8d4 4135884683 10 UPDATE PS_GFCBI_EDM_TA04 a SET a.GFCni_amount = ( SELECT x.resource_amount FROM
          PS_PROJ_RESOURCE x WHERE x.process_instance = …

          PSNPROJ_RESOURCE ga2x2u4jw9p0x 2282068749 6760 UPDATE PS_PROJ_RESOURCE P SET (P.RESOURCE_TYPE, P.RESOURCE_SUB_CAT) = …

          9z5qsq6wrr7zp 3665912247 3500 UPDATE PS_PROJ_RESOURCE P SET P.TIME_SHEET_ID = …
          If I replace the last select clause with the commented line, then I can generate the commands to extract the statement and plan from the AWR repository.
          SELECT * FROM table(dbms_xplan.display_awr('45ggt0yfrh5qp',3393167302,NULL,'ADVANCED'))/*PSJPROJ_RESOURCE:7340*/;

          SELECT * FROM table(dbms_xplan.display_awr('8ntxj3694r6kg',821236869,NULL,'ADVANCED'))/*PSLPROJ_RESOURCE:1220*/;

          SELECT * FROM table(dbms_xplan.display_awr('96cdkb7jyq863',338292674,NULL,'ADVANCED'))/*PSMPROJ_RESOURCE:50*/;

          SELECT * FROM table(dbms_xplan.display_awr('1kq9rfy8sb8d4',4135884683,NULL,'ADVANCED'))/*PSMPROJ_RESOURCE:10*/;

          SELECT * FROM table(dbms_xplan.display_awr('ga2x2u4jw9p0x',2282068749,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:6760*/;
          SELECT * FROM table(dbms_xplan.display_awr('9z5qsq6wrr7zp',3665912247,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:3500*/;
          SELECT * FROM table(dbms_xplan.display_awr('b28btd6k3x8jt',1288409804,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:3060*/;
          SELECT * FROM table(dbms_xplan.display_awr('avs70c19khxmw',2276811880,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:2660*/;
          SELECT * FROM table(dbms_xplan.display_awr('b78qhsch85g4a',1019599680,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:1960*/;
          SELECT * FROM table(dbms_xplan.display_awr('65kq2v1ubybps',3138703971,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:1820*/;
          SELECT * FROM table(dbms_xplan.display_awr('1dj17ra70c801',1175874548,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:1460*/;
          SELECT * FROM table(dbms_xplan.display_awr('3w71v896s7m5d',3207074729,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:500*/;
          SELECT * FROM table(dbms_xplan.display_awr('35mz5bw7p5ubw',2447377432,NULL,'ADVANCED'))/*PSNPROJ_RESOURCE:170*/;
          Ultimately, I have needed to look through the SQL plans that use an index to decide whether I need to keep that index or decide whether the statement would perform adequately using another index. In this case, on this particular system, I think the index PSMPROJ_RESOURCE would be adequate for this statement, and I would consider dropping PSLPROJ_RESOURCE.
          >SELECT * FROM table(dbms_xplan.display_awr('8ntxj3694r6kg',821236869,NULL,'ADVANCED'))/*PSLPROJ_RESOURCE:1220*/;
          --------------------
          UPDATE ps_proj_resource p SET p.deptid = NVL (( SELECT j.deptid FROM
          ps_job j WHERE j.emplid = p.emplid AND j.empl_rcd = p.empl_rcd AND
          j.effdt = ( SELECT MAX (j1.effdt) FROM ps_job j1 WHERE j1.emplid =
          j.emplid AND j1.empl_rcd = j.empl_rcd AND j1.effdt <= p.trans_dt) AND
          j.effseq = ( SELECT MAX (j2.effseq) FROM ps_job j2 WHERE j2.emplid =
          j.emplid AND j2.empl_rcd = j.empl_rcd AND j2.effdt = j.effdt)),
          p.deptid )
          WHERE p.process_instance = …
          AND EXISTS ( SELECT
          j.deptid FROM ps_job j WHERE j.emplid = p.emplid AND j.empl_rcd =
          p.empl_rcd AND j.effdt = ( SELECT MAX (j1.effdt) FROM ps_job j1 WHERE
          j1.emplid = j.emplid AND j1.empl_rcd = j.empl_rcd AND j1.effdt <=
          p.trans_dt) AND j.effseq = ( SELECT MAX (j2.effseq) FROM ps_job j2
          WHERE j2.emplid = j.emplid AND j2.empl_rcd = j.empl_rcd AND j2.effdt =
          j.effdt))

          Plan hash value: 821236869

          -----------------------------------------------------------------------------------------
          | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
          -----------------------------------------------------------------------------------------
          | 0 | UPDATE STATEMENT | | | | 63104 (100)| |
          | 1 | UPDATE | PS_PROJ_RESOURCE | | | | |
          | 2 | INDEX RANGE SCAN | PSLPROJ_RESOURCE | 365 | 11315 | 22 (0)| 00:00:01 |
          | 3 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
          | 4 | SORT AGGREGATE | | 1 | 20 | | |
          | 5 | INDEX RANGE SCAN| PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
          | 6 | SORT AGGREGATE | | 1 | 23 | | |
          | 7 | INDEX RANGE SCAN| PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
          | 8 | INDEX RANGE SCAN | PSAJOB | 1 | 29 | 3 (0)| 00:00:01 |
          | 9 | SORT AGGREGATE | | 1 | 20 | | |
          | 10 | INDEX RANGE SCAN | PSAJOB | 1 | 20 | 3 (0)| 00:00:01 |
          | 11 | SORT AGGREGATE | | 1 | 23 | | |
          | 12 | INDEX RANGE SCAN | PSAJOB | 1 | 23 | 3 (0)| 00:00:01 |
          -----------------------------------------------------------------------------------------

I carried on with the examination of SQL statements and execution plans to determine whether each index is really needed, or whether another index (or even no index at all) would do as well.  This decision also requires some background knowledge about the application. Eventually, I decided that I want to drop the J, L and N indexes on PROJ_RESOURCE and just keep M.
          Limitations of Method
• AWR does not capture all SQLs, nor all SQL plans. First the SQL has to be in the library cache and then it must be one of the top-n. A SQL that is efficient because it uses an appropriate index may not be captured, and will not be detected by this approach. This might lead you to erroneously believe that the index could be dropped.
• ASH data is purged after a period of time, by default 31 days. If an index is only used by a process that has not run within the retention period, then it will not be detected by this approach. This is another reason to retain ASH and AWR in a repository for a longer period. I have heard 400 days suggested, so that you have ASH for a year and a month (see the sketch after this list).
• However, this also causes the SYSAUX tablespace to become very large, so I would suggest regularly moving the data to a separate database. I know one customer who has built a central AWR repository for all their production and test databases and automated regular transfer of data. That repository has been of immense diagnostic value.
                [Update] This analysis will not detect index use in support of constraint validation (PeopleSoft doesn't use database referential integrity constraints).  As Mark Farnham points out below, that may be a reason for retaining a particular index.
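As a sketch, the longer AWR retention suggested above can be set with the same procedure used earlier to change the snapshot interval (retention is specified in minutes), bearing in mind the SYSAUX growth just described:

EXEC dbms_workload_repository.modify_snapshot_settings(retention => 400*24*60);  -- approximately 400 days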
Getting Rid of Indexes
Obviously any index changes need to be tested carefully in all the places that reference the index, but on the other hand it is not viable to do a full regression test every time you want to change an index.
                  Therefore, if all the testing is successful and you decide to go ahead and drop the index in production, you might prefer to make it invisible first for a while before actually dropping it. It is likely that the indexes you choose to examine are large and will take time to rebuild. An invisible index will not be used by the optimizer, but it will continue to be maintained during DML. If there are any unfortunate consequences, you can immediately make the index visible without having to rebuild it.
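A minimal sketch of that approach, using one of the indexes from this example (in a PeopleSoft system the change should also be reflected in Application Designer so that a subsequent alter/build does not recreate the index):

ALTER INDEX sysadm.pslproj_resource INVISIBLE;
-- monitor the application; if anything regresses, the index can be brought back immediately
ALTER INDEX sysadm.pslproj_resource VISIBLE;
-- once satisfied that it is not needed, drop it
DROP INDEX sysadm.pslproj_resource;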

                    To Hint or not to hint (Application Engine), that is the question

                    Mon, 2014-08-25 13:36
                    Over the years Oracle has provided a number of plan stability technologies to control how SQL statements are executed.  At the risk of over simplification, Outlines (deprecated in 11g), Profiles, Baselines and Patches work by injecting a set of hints into a SQL statement at parse time.  There is quite a lot of advice from Oracle to use these technologies to fix errant execution plans rather than hint the application.  I think it is generally good advice, however, there are times when this approach does not work well with PeopleSoft, and that is due to the behaviour and structure of PeopleSoft rather than the Oracle database.

It is possible to produce a SQL profile from a plan captured by AWR.  Part of the distribution for the SQLT Diagnostic Tool (Doc ID 215187.1) is a script called coe_xfr_sql_profile.sql written by Carlos Sierra.
                    The only thing I would change in the delivered script, (for use with PeopleSoft and as suggested in a comment) is to create the profile with FORCE_MATCHING so that similar statements with different literal values still match. 
The Slings and Arrows of outrageous execution plans
Let's take an example of a statement (from the vanilla Financials product that has not been customised) that performed poorly because it didn't generate a good execution plan (although I have cut out most of the statement for readability).  Note that it references instance 5 of PeopleTools temporary record CA_SUM_TAO.
                    INSERT INTO PS_CA_SUM_RC_TAO5 (…) SELECT

                    FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE

                    B.PROCESS_INSTANCE = 51381955 AND C.IN_USE_FLAG = 'Y'

                    Plan hash value: 2039212279
                    ----------------------------------------------------------------------------------------------------
                    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
                    ----------------------------------------------------------------------------------------------------
                    | 0 | INSERT STATEMENT | | | | 14424 (100)| |
                    | 1 | LOAD TABLE CONVENTIONAL | | | | | |
                    | 2 | NESTED LOOPS | | | | | |
                    | 3 | NESTED LOOPS | | 1 | 430 | 14424 (1)| 00:02:54 |
                    | 4 | NESTED LOOPS | | 1 | 318 | 14421 (1)| 00:02:54 |
                    | 5 | TABLE ACCESS FULL | PS_CA_SUM_IN_USE | 1 | 85 | 14420 (1)| 00:02:54 |
                    | 6 | TABLE ACCESS BY INDEX ROWID| PS_CA_SUM_TAO5 | 1 | 233 | 1 (0)| 00:00:01 |
                    | 7 | INDEX UNIQUE SCAN | PS_CA_SUM_TAO5 | 1 | | 0 (0)| |
                    | 8 | INDEX RANGE SCAN | PSACA_PR_SUMM | 1 | | 2 (0)| 00:00:01 |
                    | 9 | TABLE ACCESS BY INDEX ROWID | PS_CA_PR_SUMM | 1 | 112 | 3 (0)| 00:00:01 |
                    ----------------------------------------------------------------------------------------------------
However, below is the plan we get on instance 4.  We get this plan because a profile was applied to this statement in the past.  Now the process is running on a different non-shared instance of the temporary table, so the profile cannot match because the statement references different objects, and we get the original problem back, just on a different non-shared instance of the temporary record.  Different literal values, such as those for Process Instance, can be handled by FORCE_MATCHING, but not different tables.  This is a totally different SQL statement.
                    SQL_ID 5gtxdqbx0d0c3
                    --------------------
                    INSERT INTO PS_CA_SUM_RC_TAO4 (…) SELECT

                    FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE

                    B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'

                    Plan hash value: 3552771247

                    ---------------------------------------------------------------------------------------------------
                    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
                    ---------------------------------------------------------------------------------------------------
                    | 0 | INSERT STATEMENT | | | | 36361 (100)| |
                    | 1 | LOAD TABLE CONVENTIONAL | | | | | |
                    | 2 | HASH JOIN | | 1 | 430 | 36361 (3)| 00:07:17 |
                    | 3 | TABLE ACCESS FULL | PS_CA_SUM_IN_USE | 1 | 85 | 14347 (1)| 00:02:53 |
                    | 4 | NESTED LOOPS | | | | | |
                    | 5 | NESTED LOOPS | | 1 | 345 | 22014 (3)| 00:04:25 |
                    | 6 | TABLE ACCESS FULL | PS_CA_PR_SUMM | 5268K| 562M| 21539 (1)| 00:04:19 |
                    | 7 | INDEX UNIQUE SCAN | PS_CA_SUM_TAO4 | 1 | | 0 (0)| |
                    | 8 | TABLE ACCESS BY INDEX ROWID| PS_CA_SUM_TAO4 | 1 | 233 | 1 (0)| 00:00:01 |
                    ---------------------------------------------------------------------------------------------------

                    Note
                    -----
                    - SQL profile "coe_gn3n77gs6xj2a_3552771247" used for this statement
                    Of course, the statement on instance 4 had a profile because it was added as a short term fix and then left in situ long term.  It worked fine until a process errored, left the non-shared instance of the temporary record allocated to that process instance, and so PeopleSoft allocated instance 5 on the next execution.
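You can confirm that force matching cannot bridge the two statements by comparing their force-matching signatures with DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE, the same call used in the profile script below.  This sketch uses heavily abbreviated statement texts just to illustrate the point:

SET SERVEROUTPUT ON
DECLARE
  l_sig4 NUMBER;
  l_sig5 NUMBER;
BEGIN
  l_sig4 := dbms_sqltune.sqltext_to_signature(
    'INSERT INTO PS_CA_SUM_RC_TAO4 SELECT * FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B WHERE B.PROCESS_INSTANCE = 51377796', TRUE);
  l_sig5 := dbms_sqltune.sqltext_to_signature(
    'INSERT INTO PS_CA_SUM_RC_TAO5 SELECT * FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B WHERE B.PROCESS_INSTANCE = 51381955', TRUE);
  dbms_output.put_line('TAO4 signature: '||l_sig4);
  dbms_output.put_line('TAO5 signature: '||l_sig5);
END;
/

The literal process instance numbers are normalised away by force matching, but the different table names produce different signatures, so no single profile can cover both statements.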
So we could just create another profile using the coe_xfr_sql_profile.sql script:
                    SPO coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.log;
                    SET ECHO ON TERM ON LIN 2000 TRIMS ON NUMF 99999999999999999999;
                    REM
                    REM $Header: 215187.1 coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql 11.4.1.4 2014/08/13 csierra $
                    REM
                    REM Copyright (c) 2000-2010, Oracle Corporation. All rights reserved.
                    REM
                    REM AUTHOR
                    REM carlos.sierra@oracle.com
                    REM
                    REM SCRIPT
                    REM coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql
                    REM
                    REM DESCRIPTION
                    REM This script is generated by coe_xfr_sql_profile.sql
                    REM It contains the SQL*Plus commands to create a custom
                    REM SQL Profile for SQL_ID 5gtxdqbx0d0c3 based on plan hash
                    REM value 3552771247.
                    REM The custom SQL Profile to be created by this script
                    REM will affect plans for SQL commands with signature
                    REM matching the one for SQL Text below.
                    REM Review SQL Text and adjust accordingly.
                    REM
                    REM PARAMETERS
                    REM None.
                    REM
                    REM EXAMPLE
                    REM SQL> START coe_xfr_sql_profile_5gtxdqbx0d0c3_3552771247.sql;
                    REM
                    REM NOTES
                    REM 1. Should be run as SYSTEM or SYSDBA.
                    REM 2. User must have CREATE ANY SQL PROFILE privilege.
                    REM 3. SOURCE and TARGET systems can be the same or similar.
                    REM 4. To drop this custom SQL Profile after it has been created:
                    REM EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE('coe_5gtxdqbx0d0c3_3552771247');
                    REM 5. Be aware that using DBMS_SQLTUNE requires a license
                    REM for the Oracle Tuning Pack.
                    REM
                    WHENEVER SQLERROR EXIT SQL.SQLCODE;
                    REM
                    VAR signature NUMBER;
                    REM
                    DECLARE
                    sql_txt CLOB;
                    h SYS.SQLPROF_ATTR;
                    BEGIN
                    sql_txt := q'[
                    INSERT INTO PS_CA_SUM_RC_TAO4 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
                    A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
                    C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
                    B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
                    ]';
                    h := SYS.SQLPROF_ATTR(
                    q'[BEGIN_OUTLINE_DATA]',
                    q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
                    q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
                    q'[DB_VERSION('11.2.0.3')]',
                    q'[OPT_PARAM('_unnest_subquery' 'false')]',
                    q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
                    q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
                    q'[ALL_ROWS]',
                    q'[OUTLINE_LEAF(@"SEL$1")]',
                    q'[OUTLINE_LEAF(@"INS$1")]',
                    q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO4"@"INS$1")]',
                    q'[FULL(@"SEL$1" "A"@"SEL$1")]',
                    q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO4"."PROCESS_INSTANCE" "PS_CA_SUM_TAO4"."BUSINESS_UNIT" "PS_CA_SUM_TAO4"."PROJECT_ID" "PS_CA_SUM_TAO4"."ACTIVITY_ID" "PS_CA_SUM_TAO4"."ANALYSIS_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_TYPE" "PS_CA_SUM_TAO4"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO4"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO4"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO4"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO4"."CONTRACT_NUM" ]',
                    q'[ "PS_CA_SUM_TAO4"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO4"."CA_FEE_STATUS"))]',
                    q'[FULL(@"SEL$1" "C"@"SEL$1")]',
                    q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
                    q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
                    q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
                    q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
                    q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
                    q'[END_OUTLINE_DATA]');
                    :signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
                    DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
                    sql_text => sql_txt,
                    profile => h,
                    name => 'coe_5gtxdqbx0d0c3_3552771247',
                    description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
                    category => 'DEFAULT',
                    validate => TRUE,
                    replace => TRUE,
                    force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
                    END;
                    /
                    WHENEVER SQLERROR CONTINUE
                    SET ECHO OFF;
                    PRINT signature
                    PRO
                    PRO ... manual custom SQL Profile has been created
                    PRO
                    SET TERM ON ECHO OFF LIN 80 TRIMS OFF NUMF "";
                    SPO OFF;
                    PRO
                    PRO COE_XFR_SQL_PROFILE_5gtxdqbx0d0c3_3552771247 completed
                    But then we must manually change the table and index names from 4 to 5.
                    DECLARE
                    sql_txt CLOB;
                    h SYS.SQLPROF_ATTR;
                    BEGIN
                    sql_txt := q'[
                    INSERT INTO PS_CA_SUM_RC_TAO5 (PROCESS_INSTANCE, BUSINESS_UNIT, PROJECT_ID, ACTIVITY_ID, ANALYSIS_TYPE, RESOURCE_TYPE, RESOURCE_CATEGORY, RESOURCE_SUB_CAT, BI_DISTRIB_STATUS, GL_DISTRIB_STATUS, FOREIGN_CURRENCY, CONTRACT_CURRENCY, CONTRACT_NUM, CONTRACT_LINE_NUM, CA_FEE_STATUS, RESOURCE_QUANTITY, FOREIGN_AMOUNT_BSE, FOREIGN_AMOUNT_INC, FOREIGN_AMOUNT, CONTRACT_AMT_BSE, CONTRACT_AMT_INC, CONTRACT_AMT, MIN_TRANS_DT, MAX_TRANS_DT, CAND_MIN_TRANS_DT, CAND_MAX_TRANS_DT) SELECT B.PROCESS_INSTANCE, A.BUSINESS_UNIT, A.PROJECT_ID, A.ACTIVITY_ID, A.ANALYSIS_TYPE, A.RESOURCE_TYPE, A.RESOURCE_CATEGORY, A.RESOURCE_SUB_CAT, A.BI_DISTRIB_STATUS, A.GL_DISTRIB_STATUS,
                    A.FOREIGN_CURRENCY, A.CONTRACT_CURRENCY, A.CONTRACT_NUM, A.CONTRACT_LINE_NUM, A.CA_FEE_STATUS, (A.RESOURCE_QUANTITY+B.RESOURCE_QUANTITY), A.FOREIGN_AMOUNT, B.FOREIGN_AMOUNT, (A.FOREIGN_AMOUNT+B.FOREIGN_AMOUNT), A.CONTRACT_AMT, B.CONTRACT_AMT, (A.CONTRACT_AMT+B.CONTRACT_AMT), A.MIN_TRANS_DT, A.MAX_TRANS_DT, B.CAND_MIN_TRANS_DT, B.CAND_MAX_TRANS_DT FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO5 B , PS_CA_SUM_IN_USE C WHERE B.BUSINESS_UNIT = C.BUSINESS_UNIT AND B.PROJECT_ID = C.PROJECT_ID AND B.ACTIVITY_ID = C.ACTIVITY_ID AND B.ANALYSIS_TYPE = C.ANALYSIS_TYPE AND B.RESOURCE_TYPE = C.RESOURCE_TYPE AND B.RESOURCE_CATEGORY = C.RESOURCE_CATEGORY AND B.RESOURCE_SUB_CAT =
                    C.RESOURCE_SUB_CAT AND B.BI_DISTRIB_STATUS = C.BI_DISTRIB_STATUS AND B.GL_DISTRIB_STATUS = C.GL_DISTRIB_STATUS AND B.FOREIGN_CURRENCY = C.FOREIGN_CURRENCY AND B.CONTRACT_CURRENCY = C.CONTRACT_CURRENCY AND B.CONTRACT_NUM = C.CONTRACT_NUM AND B.CONTRACT_LINE_NUM = C.CONTRACT_LINE_NUM AND B.CA_FEE_STATUS = C.CA_FEE_STATUS AND A.BUSINESS_UNIT = B.BUSINESS_UNIT AND A.PROJECT_ID = B.PROJECT_ID AND A.ACTIVITY_ID = B.ACTIVITY_ID AND A.ANALYSIS_TYPE = B.ANALYSIS_TYPE AND A.RESOURCE_TYPE = B.RESOURCE_TYPE AND A.RESOURCE_CATEGORY = B.RESOURCE_CATEGORY AND A.RESOURCE_SUB_CAT = B.RESOURCE_SUB_CAT AND A.BI_DISTRIB_STATUS = B.BI_DISTRIB_STATUS AND A.GL_DISTRIB_STATUS =
                    B.GL_DISTRIB_STATUS AND A.FOREIGN_CURRENCY = B.FOREIGN_CURRENCY AND A.CONTRACT_CURRENCY = B.CONTRACT_CURRENCY AND A.CONTRACT_NUM = B.CONTRACT_NUM AND A.CONTRACT_LINE_NUM = B.CONTRACT_LINE_NUM AND A.CA_FEE_STATUS = B.CA_FEE_STATUS AND B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
                    ]';
                    h := SYS.SQLPROF_ATTR(
                    q'[BEGIN_OUTLINE_DATA]',
                    q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
                    q'[OPTIMIZER_FEATURES_ENABLE('11.2.0.3')]',
                    q'[DB_VERSION('11.2.0.3')]',
                    q'[OPT_PARAM('_unnest_subquery' 'false')]',
                    q'[OPT_PARAM('optimizer_dynamic_sampling' 4)]',
                    q'[OPT_PARAM('_optimizer_cost_based_transformation' 'off')]',
                    q'[ALL_ROWS]',
                    q'[OUTLINE_LEAF(@"SEL$1")]',
                    q'[OUTLINE_LEAF(@"INS$1")]',
                    q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO5"@"INS$1")]',
                    q'[FULL(@"SEL$1" "A"@"SEL$1")]',
                    q'[INDEX(@"SEL$1" "B"@"SEL$1" ("PS_CA_SUM_TAO5"."PROCESS_INSTANCE" "PS_CA_SUM_TAO5"."BUSINESS_UNIT" "PS_CA_SUM_TAO5"."PROJECT_ID" "PS_CA_SUM_TAO5"."ACTIVITY_ID" "PS_CA_SUM_TAO5"."ANALYSIS_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_TYPE" "PS_CA_SUM_TAO5"."RESOURCE_CATEGORY" "PS_CA_SUM_TAO5"."RESOURCE_SUB_CAT" "PS_CA_SUM_TAO5"."BI_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."GL_DISTRIB_STATUS" "PS_CA_SUM_TAO5"."FOREIGN_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_CURRENCY" "PS_CA_SUM_TAO5"."CONTRACT_NUM" ]',
                    q'[ "PS_CA_SUM_TAO5"."CONTRACT_LINE_NUM" "PS_CA_SUM_TAO5"."CA_FEE_STATUS"))]',
                    q'[FULL(@"SEL$1" "C"@"SEL$1")]',
                    q'[LEADING(@"SEL$1" "A"@"SEL$1" "B"@"SEL$1" "C"@"SEL$1")]',
                    q'[USE_NL(@"SEL$1" "B"@"SEL$1")]',
                    q'[NLJ_BATCHING(@"SEL$1" "B"@"SEL$1")]',
                    q'[USE_HASH(@"SEL$1" "C"@"SEL$1")]',
                    q'[SWAP_JOIN_INPUTS(@"SEL$1" "C"@"SEL$1")]',
                    q'[END_OUTLINE_DATA]');
                    :signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
                    DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
                    sql_text => sql_txt,
                    profile => h,
                    name => 'coe_5gtxdqbx0d0c3_3552771247',
                    description => 'coe 5gtxdqbx0d0c3 3552771247 '||:signature||'',
                    category => 'DEFAULT',
                    validate => TRUE,
                    replace => TRUE,
                    force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
                    END;
                    /
                    Or to take Arms against a Sea of statements,
The profile has the advantage that it can be applied quickly without a code change.  It is the perfect tool for the DBA with a production performance problem. However, there are some other considerations.
• If applying it to a statement that references a PeopleSoft temporary record, then we need to apply the profile to all instances of the record (both the non-shared instances and the shared instance), as sketched after this list.
                    • We were lucky that we referenced instance 5 of two temporary records. However, you could get a situation where a statement references different instances of different temporary records.  So perhaps instance 5 of one table and instance 6 of another.  In which case, you might also get instance 6 of the first table and instance 5 of the other.  A SQL profile could be needed for each permutation.
                    • Bear in mind also that some areas of PeopleSoft use dynamically generated SQL.  So you get similar SQL statements which are sufficiently different for the profile not to match.  
                    • Any changes to the expansion of Application Engine and PeopleCode MetaSQL on upgrading PeopleTools, or potentially even patching, will also prevent matching.
                    • There is also the challenge of dealing with code changes as the system is upgraded, patched and customised.  A small code change, perhaps just an extra field in the select clause, can result in a performance regression because the profile stops matching. Of course, this challenge is not limited to PeopleSoft systems! 
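To illustrate the first point, the import can be wrapped in a loop so that a profile is created for each instance of the temporary record.  This is only a sketch: the statement text and outline hints are abbreviated here and must be substituted in full for each instance, the range of instance numbers is hypothetical, and the profile names are made up (profile names must be unique, so the instance number is included in the name).
VARIABLE signature NUMBER
DECLARE
  sql_txt CLOB;
  h       SYS.SQLPROF_ATTR;
BEGIN
  FOR i IN 1..10 LOOP  -- hypothetical range of temporary table instance numbers
    -- substitute the instance number into the statement text captured by coe_xfr_sql_profile.sql
    sql_txt := REPLACE(q'[
INSERT INTO PS_CA_SUM_RC_TAO<n> (PROCESS_INSTANCE, BUSINESS_UNIT, /*...remaining columns...*/ CAND_MAX_TRANS_DT)
SELECT /*...full select list as in the script above...*/
FROM PS_CA_PR_SUMM A, PS_CA_SUM_TAO<n> B, PS_CA_SUM_IN_USE C
WHERE /*...full predicate list as in the script above...*/ B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
]', '<n>', TO_CHAR(i));
    -- substitute the instance number into each outline hint in the same way
    h := SYS.SQLPROF_ATTR(
      q'[BEGIN_OUTLINE_DATA]',
      q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
      REPLACE(q'[FULL(@"INS$1" "PS_CA_SUM_RC_TAO<n>"@"INS$1")]', '<n>', TO_CHAR(i)),
      /*...the remaining outline hints from the script above, substituted in the same way...*/
      q'[END_OUTLINE_DATA]');
    :signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
    DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
      sql_text    => sql_txt,
      profile     => h,
      name        => 'coe_5gtxdqbx0d0c3_tao'||i,  -- a distinct name for each instance
      description => 'coe 5gtxdqbx0d0c3 instance '||i||' '||:signature,
      category    => 'DEFAULT',
      validate    => TRUE,
      replace     => TRUE,
      force_match => TRUE);
  END LOOP;
END;
/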
                    Profiles are likely to be effective if there are no PeopleSoft temporary records present.  So you can generally use them in COBOL and SQR processes and the on-line application (other than in on-line Application Engine processes).
                    Aye, there's the rub,
I would use a profile (or a set of profiles) as a short-term temporary fix that is easier to introduce into production, and then add hints to the source code and so fix all instances of the code, not just the ones that have been profiled. Of course, that does entail a code change, and everything that goes with that.  One strong argument against making a code change is that you have to change the code again to remove or change the hint if it becomes unnecessary at some time in the future after a significant change, such as an Oracle upgrade.  However, on balance, I think it is better than the scenario where the profile stops working one day without warning.
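For the longer-term code change, the positive directives from the outline could be embedded as hints in the source statement.  The following is purely illustrative: the column list and most predicates are elided, and the hint set would need to be validated against the actual execution plan; the aliases are those of the statement above.
INSERT INTO PS_CA_SUM_RC_TAO4 (PROCESS_INSTANCE, BUSINESS_UNIT /*, ...remaining columns... */)
SELECT /*+ LEADING(A B C) FULL(A) INDEX(B) USE_NL(B) NLJ_BATCHING(B)
           FULL(C) USE_HASH(C) SWAP_JOIN_INPUTS(C) */
       B.PROCESS_INSTANCE, A.BUSINESS_UNIT /*, ...remaining select list... */
FROM   PS_CA_PR_SUMM A, PS_CA_SUM_TAO4 B, PS_CA_SUM_IN_USE C
WHERE  /* ...join and filter predicates as above... */
       B.PROCESS_INSTANCE = 51377796 AND C.IN_USE_FLAG = 'Y'
In Application Engine source the temporary record names would normally be coded as %Table() meta-SQL rather than literal table names, so a hint added there applies to whichever instance of the record the process is allocated.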
                    The rest is silence.
                    Unless you add a comment.

                    Introducing the Analytic Keep Clause for Effective-Dated/Sequence Queries in PeopleSoft

                    Wed, 2014-07-09 13:46
Those of us who work with PeopleSoft, and especially the HCM product, are all too familiar with the concept of effective-dated data, and the need to find data that was effective at a particular date.  PeopleSoft products have always made extensive use of correlated sub-queries to determine the required rows from an effective-dated record.

The JOB record is at the heart of HCM. It is both effective-dated and effective-sequenced. I will use it for the demonstrations in this article. I am going to suggest an alternative, although Oracle-specific, SQL construction.

Let's start by looking at the job data for an employee in the demo database. Employee KF0018 has 17 rows of data across two concurrent jobs.  The question I am going to ask is "What was the annual salary for this employee on 11 February 1995?".  Therefore, I am interested in the rows marked below with the asterisks.
                    column annual_rt format 999,999
                    SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
                    FROM ps_job j
                    WHERE j.emplid = 'KF0018'
                    ORDER BY 1,2,3,4
                    /

                    EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
                    ----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 12-JUN-83          0 HIR 13000      FRF   120,000
KF0018               0 01-JAN-84          0 PAY 13000      FRF   123,600
KF0018               0 01-JAN-85          0 PAY 13000      FRF   127,308
KF0018               0 01-JAN-86          0 PAY 13000      FRF   131,764
KF0018               0 01-JAN-87          0 PAY 13000      FRF   136,376
KF0018               0 01-JAN-88          0 PAY 13000      FRF   140,467
KF0018               0 01-JAN-89          0 PAY 13000      FRF   147,490
KF0018               0 22-JAN-95          0 PRO 13000      FRF   147,490
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239 *
KF0018               0 22-JAN-96          0 PAY 13000      FRF   318,575
KF0018               0 01-JAN-98          0 PAY 13000      FRF   346,156
KF0018               0 01-JAN-00          0 DTA 13000      FRF   346,156
KF0018               0 01-JAN-02          0 PAY 13000      EUR    52,771
KF0018               1 01-NOV-89          0 ASG 21300      GBP    22,440
KF0018               1 31-DEC-93          0 ASC 21300      GBP    22,440
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440 *
KF0018               1 31-DEC-95          0 ASC 10000      GBP    22,440

                    I will set statistics level to ALL so I can obtain detailed information about how the SQL statements execute:
                    ALTER SESSION SET statistics_level = ALL;

I extracted the execution plans and execution statistics with the following command:
                    select * from table(dbms_xplan.display_cursor(null,null,'IOSTATS')) 
Typical PeopleSoft Platform-Agnostic Construction
This is the usual way to construct the query in PeopleSoft. It is also valid on all database platforms supported by PeopleSoft, not just Oracle.
                    SELECT  emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
                    FROM ps_job j
                    WHERE j.effdt = (
                    SELECT MAX (j1.effdt) FROM ps_job j1
                    WHERE j1.emplid = j.emplid
                    AND j1.empl_rcd = j.empl_rcd
                    AND j1.effdt <= TO_DATE('19950211','YYYYMMDD'))
                    AND j.effseq = (
                    SELECT MAX (j2.effseq) FROM ps_job j2
                    WHERE j2.emplid = j.emplid
                    AND j2.empl_rcd = j.empl_rcd
                    AND j2.effdt = j.effdt)
                    AND j.emplid = 'KF0018'
                    ORDER BY 1,2,3,4
                    /

                    EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
                    ----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440

This required three accesses of indexes on the PS_JOB table, and two accesses of the table, using 26 consistent reads.
                    Plan hash value: 2299825310
                    ----------------------------------------------------------------------------------------------------------------
                    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
                    ----------------------------------------------------------------------------------------------------------------
                    | 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01 | 26 | 2 |
                    | 1 | SORT ORDER BY | | 1 | 1 | 2 |00:00:00.01 | 26 | 2 |
                    | 2 | NESTED LOOPS | | 1 | 1 | 2 |00:00:00.01 | 26 | 2 |
                    | 3 | NESTED LOOPS | | 1 | 1 | 3 |00:00:00.01 | 21 | 2 |
                    | 4 | VIEW | VW_SQ_1 | 1 | 1 | 2 |00:00:00.01 | 14 | 2 |
                    |* 5 | FILTER | | 1 | | 2 |00:00:00.01 | 14 | 2 |
                    | 6 | HASH GROUP BY | | 1 | 1 | 2 |00:00:00.01 | 14 | 2 |
                    | 7 | TABLE ACCESS BY INDEX ROWID| PS_JOB | 1 | 1 | 12 |00:00:00.01 | 14 | 2 |
                    |* 8 | INDEX RANGE SCAN | PS_JOB | 1 | 1 | 12 |00:00:00.01 | 2 | 2 |
                    | 9 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 2 | 1 | 3 |00:00:00.01 | 7 | 0 |
                    |* 10 | INDEX RANGE SCAN | PSAJOB | 2 | 1 | 3 |00:00:00.01 | 4 | 0 |
                    |* 11 | VIEW PUSHED PREDICATE | VW_SQ_2 | 3 | 1 | 2 |00:00:00.01 | 5 | 0 |
                    |* 12 | FILTER | | 3 | | 3 |00:00:00.01 | 5 | 0 |
                    | 13 | SORT AGGREGATE | | 3 | 1 | 3 |00:00:00.01 | 5 | 0 |
                    |* 14 | FILTER | | 3 | | 5 |00:00:00.01 | 5 | 0 |
                    |* 15 | INDEX RANGE SCAN | PSAJOB | 3 | 1 | 5 |00:00:00.01 | 5 | 0 |
                    ----------------------------------------------------------------------------------------------------------------

                    Predicate Information (identified by operation id):
                    ---------------------------------------------------

                    5 - filter("J1"."EMPLID"='KF0018')
                    8 - access("J1"."EMPLID"='KF0018' AND "J1"."SYS_NC00164$">=HEXTORAW('883CFDF4FEF8FEFAFF') )
                    filter(SYS_OP_UNDESCEND("J1"."SYS_NC00164$")<=TO_DATE(' 1995-02-11 00:00:00', 'syyyy-mm-dd
                    hh24:mi:ss'))
                    10 - access("J"."EMPLID"='KF0018' AND "ITEM_2"="J"."EMPL_RCD" AND
                    "J"."SYS_NC00164$"=SYS_OP_DESCEND("MAX(J1.EFFDT)"))
                    filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")="MAX(J1.EFFDT)")
                    11 - filter(SYS_OP_UNDESCEND("J"."SYS_NC00165$")="MAX(J2.EFFSEQ)")
                    12 - filter(COUNT(*)>0)
                    14 - filter('KF0018'="J"."EMPLID")
                    15 - access("J2"."EMPLID"='KF0018' AND "J2"."EMPL_RCD"="J"."EMPL_RCD" AND
                    "J2"."SYS_NC00164$"=SYS_OP_DESCEND(SYS_OP_UNDESCEND("J"."SYS_NC00164$")))
                    filter(SYS_OP_UNDESCEND("J2"."SYS_NC00164$")=SYS_OP_UNDESCEND("J"."SYS_NC00164$"))

                    This construction is also the reason you are required to set
                    _UNNEST_SUBQUERY=FALSE
on all PeopleSoft systems.
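For reference, a sketch of how that parameter might be set as an initialisation parameter (hidden parameters should only be set where Oracle Support or the PeopleSoft installation documentation directs):
ALTER SYSTEM SET "_unnest_subquery" = FALSE SCOPE = SPFILE;
-- takes effect at the next instance restart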
Analytic Function and In-Line View/Sub-query Factor
I have seen people use a combination of analytic functions and in-line views to avoid having to use the correlated sub-query construction. This has been possible since Oracle 9i.
                    WITH X AS (
                    SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
                    , ROW_NUMBER() OVER (PARTITION BY emplid, empl_rcd 
                    ORDER BY effdt DESC, effseq DESC) myrowseq
                    FROM ps_job j
                    WHERE j.effdt <= TO_DATE('19950211','YYYYMMDD')
                    AND j.emplid = 'KF0018'
                    )
                    SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
                    FROM x
                    WHERE myrowseq = 1
                    ORDER BY 1,2,3,4
                    /

                    EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
                    ----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440

                    We get the same result, but now the index is scanned just once and we only need 14 consistent reads, so it produces a significant improvement. However, it still includes a sort operation in addition to the window function. We have to create a sequence number field in the in-line view and filter by that in the final query.
                    Plan hash value: 1316906785
                    ---------------------------------------------------------------------------------------------------
                    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
                    ---------------------------------------------------------------------------------------------------
                    | 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01 | 14 |
                    | 1 | SORT ORDER BY | | 1 | 1 | 2 |00:00:00.01 | 14 |
                    |* 2 | VIEW | | 1 | 1 | 2 |00:00:00.01 | 14 |
                    |* 3 | WINDOW NOSORT | | 1 | 1 | 12 |00:00:00.01 | 14 |
                    | 4 | TABLE ACCESS BY INDEX ROWID| PS_JOB | 1 | 1 | 12 |00:00:00.01 | 14 |
                    |* 5 | INDEX RANGE SCAN | PSAJOB | 1 | 1 | 12 |00:00:00.01 | 2 |
                    ---------------------------------------------------------------------------------------------------

                    Predicate Information (identified by operation id):
                    ---------------------------------------------------

                    2 - filter("MYROWSEQ"=1)
                    3 - filter(ROW_NUMBER() OVER ( PARTITION BY "EMPLID","EMPL_RCD" ORDER BY
                    "J"."SYS_NC00164$","J"."SYS_NC00165$")<=1)
                    5 - access("J"."EMPLID"='KF0018' AND "J"."SYS_NC00164$">=HEXTORAW('883CFDF4FEF8FEFAFF')
                    )
                    filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")<=TO_DATE(' 1995-02-11 00:00:00',
                    'syyyy-mm-dd hh24:mi:ss'))

Analytic Function Keep Clause
This form of the analytic functions is documented for the first time in 12c, but is available from 10g (my thanks to Tony Hasler for introducing me to it). It works by effectively keeping a running maximum value of the columns in the ORDER BY clause within each group.
                    SELECT emplid, empl_rcd
                    , MAX(effdt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effdt
                    , MAX(effseq) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effseq
                    , MAX(action) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS action
                    , MAX(deptid) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS deptid
                    , MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS currency_cd
                    , MAX(annual_rt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS annual_rt
                    FROM ps_job j
                    WHERE j.effdt <= TO_DATE('19950211','YYYYMMDD')
                    AND j.emplid = 'KF0018'
                    GROUP BY emplid, empl_rcd
                    /

                    EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
                    ----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440

                    Although this construction uses an additional consistent read, it has the advantage of not using either an inline view or a window function and does not sort the data.
                    Plan hash value: 1550496807
                    -------------------------------------------------------------------------------------------------
                    | Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
                    -------------------------------------------------------------------------------------------------
                    | 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01 | 15 |
                    | 1 | SORT GROUP BY NOSORT | | 1 | 1 | 2 |00:00:00.01 | 15 |
                    | 2 | TABLE ACCESS BY INDEX ROWID| PS_JOB | 1 | 1 | 12 |00:00:00.01 | 15 |
                    |* 3 | INDEX RANGE SCAN | PS_JOB | 1 | 1 | 12 |00:00:00.01 | 3 |
                    -------------------------------------------------------------------------------------------------

                    Predicate Information (identified by operation id):
                    ---------------------------------------------------

                    3 - access("J"."EMPLID"='KF0018' AND "J"."SYS_NC00164$">=HEXTORAW('883CFDF3FEF8FEFAFF'
                    ) )
                    filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")<=TO_DATE(' 1995-02-12 00:00:00',
                    'syyyy-mm-dd hh24:mi:ss'))

I think this construction could be useful in PeopleSoft.  At first glance the SQL appears more complicated, but in this example it removed two correlated sub-queries.
Using Analytic Functions in PS/Query
Of course, you can code it anywhere you can simply enter SQL as text.  However, it also has the advantage over the other analytic function construction that it can be coded in the PS/Query tool.  The analytic functions in the select clause should be created in PS/Query expressions with the aggregate expression checkbox ticked.
[Screenshot: Analytic 'Keep' function in an aggregate expression in the Windows client version of PS/Query]
The analytic functions can be selected in PS/Query, and their lengths and titles can be tidied up.
[Screenshot: PS/Query with analytic 'Keep' functions]
                    This is the resulting SQL which is the same as before (with row level security added by PS/Query) and produces the same results.
                    SELECT A.EMPLID, A.EMPL_RCD, MAX(effdt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
                    , MAX(effseq) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(action) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(deptid) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq), MAX(annual_rt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
                    FROM PS_JOB A, PS_EMPLMT_SRCH_QRY A1
                    WHERE ( A.EMPLID = A1.EMPLID
                    AND A.EMPL_RCD = A1.EMPL_RCD
                    AND A1.OPRID = 'PS'
                    AND ( A.EFFDT <= TO_DATE('1995-02-11','YYYY-MM-DD')
                    AND A.EMPLID = 'KF0018' ) )
                    GROUP BY A.EMPLID, A.EMPL_RCD

                    Implementing Deferred Segment Creation After an Upgrade

                    Thu, 2014-01-30 13:02
I have written previously about Deferred Segment Creation. Each empty table and its indexes use only 64Kb (assuming an 8Kb block size and locally managed tablespaces), but in a PeopleSoft system there can be tens of thousands of such tables, and that adds up to a saving worth making.

If you are upgrading your database to 11gR2, you might want to make sure that you are using it.  Deferred segment creation was introduced in Oracle 11.2.0.2 and it became the default in 11.2.0.3.  However, any table created in a previous version will have a physical segment.

                    This problem could affect any system, but it also manifests itself in PeopleSoft in a particular way.

                    When you run the alter scripts in PeopleTools a table may be recreated.  If it is a regular table (record type 0) then the CREATE TABLE command will not specify a segment creation clause and so the segment creation will be deferred until rows are inserted.

                    CREATE TABLE PS_TL_IPT1 (PROCESS_INSTANCE DECIMAL(10) NOT NULL,
                       EMPLID VARCHAR2(11) NOT NULL,
                    ...
                       INITIAL_SEQ_NBR DECIMAL(15) NOT NULL) TABLESPACE TLWORK STORAGE
                     (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
                     PCTFREE 10 PCTUSED 80
                    /
However, from PeopleTools 8.51, Application Designer uses the Oracle-delivered DBMS_METADATA package to extract the DDL to recreate the object from the actual object, although this behaviour only occurs for temporary working storage tables (record type 7).  Yes, these are exactly the tables that would benefit most from deferred segment creation, because in many systems there are many unused temporary table instances.  If the table was created under a version of the database prior to 11.2.0.2 then the segment will exist, and DBMS_METADATA will generate the DDL with the SEGMENT CREATION IMMEDIATE clause.

                    -- Create temporary table 
                    CREATE TABLE PSYPERSON (EMPLID VARCHAR2(11) NOT NULL,
                    ...
   LAST_CHILD_UPDDTM TIMESTAMP) SEGMENT CREATION IMMEDIATE
                      PCTFREE 10 PCTUSED 80 INITRANS 1 MAXTRANS 255
                     NOCOMPRESS LOGGING
                      STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
                      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
                      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
                      TABLESPACE "HRLARGE"
                    /
                    You can use DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS to remove the segments for any empty tables (and their indexes) for which the segment has been previously created.  There would be no harm in simply running this program for every table in the system.  If there are rows then DBMS_SPACE_ADMIN will take no action.

The following script identifies candidate tables where the statistics suggest that there are no rows, or where there are no statistics.  I am indebted to Tim Hall for the idea for this tip.

                    set serveroutput on 
                    BEGIN
                     FOR i IN (
                      SELECT owner, table_name
                      FROM   all_tables
                      WHERE  owner = 'SYSADM'
                      AND    segment_created = 'YES'
                      AND    temporary = 'N'
                      AND   (num_rows = 0 OR num_rows IS NULL)
                     ) LOOP
                      dbms_output.put_line(i.owner||'.'||i.table_name);
                      dbms_space_admin.drop_empty_segments (
                        schema_name    => i.owner,
                        table_name     => i.table_name);
                     END LOOP;
                    END;
                    /
                    As this package drops the empty segments, the SEGMENT_CREATED column on USER_TABLES changes to NO and if you were to extract the DDL with DBMS_METADATA the SEGMENT CREATION clause would have changed to DEFERRED.

                    As soon as any data is inserted, the segment is created, SEGMENT_CREATED changes to YES and the DDL generated by DBMS_METADATA would have SEGMENT CREATION IMMEDIATE.

                    The result is that 64Kb of space (assuming a block size of 8Kb) will be freed up for each empty table and index segment that is dropped. Your mileage may vary, but in my demo HR database that is over 20000 tables and 25000 indexes. 2.7Gb isn't a vast amount these days, but it is an easy win.
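If you want to put a figure on the potential saving before running the block above, something like the following hypothetical query sums the bytes of the candidate table segments; it is statistics-based, so it is only an estimate, and a similar join to DBA_INDEXES would be needed to cover the index segments.
-- estimate space currently used by table segments of apparently empty SYSADM tables
SELECT ROUND(SUM(s.bytes)/1024/1024) est_table_saving_mb
FROM   all_tables   t
,      dba_segments s
WHERE  t.owner           = 'SYSADM'
AND    t.segment_created = 'YES'
AND    t.temporary       = 'N'
AND    (t.num_rows = 0 OR t.num_rows IS NULL)
AND    s.owner           = t.owner
AND    s.segment_name    = t.table_name
AND    s.segment_type    = 'TABLE';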

                    Added 1.2.2014:
To answer Noons' question below.  So long as the table or partition doesn't have any rows, the segment will be dropped and it will be as if the segment creation had been deferred.  You don't have to do anything special to the table.  There is no problem applying this to any empty tables created with their segments.  Here is a simple test with my results on 11.2.0.3:

I will create a table, explicitly creating the segment immediately, then insert a row, commit the insert and delete the row.  I haven't even bothered to commit the delete.
                    SQL> create table t(a number) segment creation immediate;
                    SQL> insert into t values(42);
                    SQL> commit;
                    SQL> delete from t;
                    SQL> select segment_type, segment_name, tablespace_name from user_Segments where segment_name = 'T';

SEGMENT_TYPE       SEGMENT_NAME    TABLESPACE_NAME
------------------ --------------- ------------------------------
TABLE              T               PSDEFAULT

                    SQL> select table_name, segment_created from user_tables where table_name = 'T';

TABLE_NAME                     SEG
------------------------------ ---
T                              YES

                    SQL> select dbms_metadata.get_ddl('TABLE','T') from dual;

                    DBMS_METADATA.GET_DDL('TABLE','T')
                    --------------------------------------------------------------------------------
                    CREATE TABLE "SYSADM"."T"
                    ( "A" NUMBER
                    ) SEGMENT CREATION IMMEDIATE


                    So, at the moment the segment exists, it has had rows in it, but they have been deleted and the table is empty. If I run DBMS_SPACE_ADMIN.DROP_EMPTY_SEGMENTS the segment is dropped.

                    SQL> execute dbms_space_admin.drop_empty_segments (user,'T');
                    SQL> select segment_type, segment_name, tablespace_name from user_Segments where segment_name = 'T';

                    no rows selected

                    SQL> select table_name, segment_created from user_tables where table_name = 'T';

TABLE_NAME                     SEG
------------------------------ ---
T                              NO

                    SQL> select dbms_metadata.get_ddl('TABLE','T') from dual;

                    DBMS_METADATA.GET_DDL('TABLE','T')
                    --------------------------------------------------------------------------------
                    CREATE TABLE "SYSADM"."T"
                    ( "A" NUMBER
                    ) SEGMENT CREATION DEFERRED

                    Date to Timestamp Conversion during PeopleTools Upgrade

                    Thu, 2014-01-02 20:14
This blog posting describes a script that converts Oracle date columns to timestamps, as used from PeopleTools 8.50, but rebuilds only those indexes that reference the converted columns, rather than dropping and recreating every index in the system, thus producing a significant saving of time during the upgrade.

                    (A longer version of this article is available on my website)

                    I am working on a PeopleSoft upgrade project.  We are going from PeopleTools 8.49 to 8.53.  One of the things that happens is that some date columns in the Oracle database become timestamps.

Timestamps were introduced by Oracle in version 9i of the database, and provide the ability to store times accurate to the nanosecond (although the default is microsecond). Dates are accurate to the whole second.

There are 3 types of temporal column in PeopleSoft, as defined on PSDBFIELD.  Prior to PeopleTools 8.50 they all become Oracle DATE columns in the database.  However, from PeopleTools 8.50, Time and DateTime fields are built as TIMESTAMP columns if bit 5 of PSSTATUS.DATABASE_OPTIONS (value 32) is set.

PeopleTools Field Type   Database Column Type
                         PeopleTools <= 8.49   PeopleTools >= 8.50
4  Date                  DATE                  DATE
5  Time                  DATE                  TIMESTAMP
6  DateTime              DATE                  TIMESTAMP
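A quick way to check whether a given system has this option enabled is to test bit 5 of PSSTATUS.DATABASE_OPTIONS; a minimal sketch, run in the PeopleSoft schema:
SELECT database_options
,      CASE WHEN BITAND(database_options, 32) = 32
            THEN 'Time/DateTime fields built as TIMESTAMP'
            ELSE 'Time/DateTime fields built as DATE'
       END AS datetime_column_handling
FROM   psstatus;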
Timestamps must be handled differently to dates in SQL.  Some date arithmetic must be done differently; in particular, the difference between two timestamps is an interval rather than a number of days.  Therefore this setting also controls how PeopleCode date macros expand on Oracle.
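A minimal illustration of the difference in arithmetic (the date literals are arbitrary):
SELECT SYSDATE - DATE '2014-01-01'                    AS date_diff  -- a NUMBER of days
,      SYSTIMESTAMP - TIMESTAMP '2014-01-01 00:00:00' AS ts_diff    -- an INTERVAL DAY TO SECOND
,      CAST(SYSTIMESTAMP AS DATE) - DATE '2014-01-01' AS num_again  -- cast back to get a NUMBER of days
FROM   dual;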

                    During the upgrade, PeopleSoft Change Assistant simply alters all the Time and DateTime columns from dates to timestamps.  This generally works well.  The data value doesn't appear to get longer, so the block doesn't run out of room leading to row migration, and so it isn't necessary to rebuild every table that is affected.

                    However, there are some limitations. If the column being converted to a timestamp falls into one of the following categories you will get an error.
                    • The column is a key column in a function-based index.
                    • The table or index is partitioned by the column.
The function-based index issue has not affected many customers because the upgrade template drops all the indexes before altering the tables and recreates them afterwards.

                    However, dropping and recreating all these indexes can be very time consuming and increases the duration of the outage required to perform the upgrade.  This has been my incentive to find a better way.

                    Function-Based Indexes
                    PeopleSoft Application Designer defines some key and search fields as descending.  The rows in components and the results of search dialogue are sorted on the key fields in the order specified.  Application Designer then indexes these columns in descending order (prior to PeopleTools 8 and since PeopleTools 8.47).  If any column in an Oracle index is in descending order the index is created as a function-based index.  Consequently, there can be a lot of descending indexes in a PeopleSoft system!  HR systems are particularly affected because many tables are effective-dated, and the field EFFDT is usually a descending key field.

                    It is not possible to alter a column to a timestamp if it appears anywhere in a function-based index. You get the following error:
                    ORA-30556: functional index is defined on the column to be modified 
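The restriction is easy to reproduce in isolation.  A hypothetical example with made-up names:
CREATE TABLE t_desc_test (emplid VARCHAR2(11) NOT NULL, dttm_stamp DATE NOT NULL);
CREATE INDEX t_desc_testi ON t_desc_test (emplid, dttm_stamp DESC); -- descending key column makes this a function-based index
ALTER TABLE t_desc_test MODIFY (dttm_stamp TIMESTAMP);              -- fails with ORA-30556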

                    Partitioning
                    Partitioning is not something that you usually encounter in a vanilla PeopleSoft system, but it can be added by customisation.  You generate the necessary DDL yourself if you want to use it. However, from PeopleTools 8.51 Application Designer will preserve existing partitioning.

In the system on which I am working, we had partitioned the audit tables by AUDIT_STAMP, which is a DateTime field.
                    ORA-14060: data type or length of an table partitioning column may not be changed 
                    We have had no alternative but to rebuild these tables and repopulate the data. This has also dealt with all locally partitioned indexes.

                    We have also found that we have one global index partitioned on a timestamp.
                    ORA-14061: data type or length of an index partitioning column may not be changed 
                    We also have had to drop this index in order to alter the table.

                    My Approach
We have had no alternative but to rebuild and repopulate our partitioned audit tables, which are partitioned by a DateTime field.  However, that is what we did when we first partitioned them.  The scripts are very similar to those generated by Application Designer: the table is renamed, a new one is built, and the data is copied.  In our case these scripts are built with a PL/SQL utility.  This also addressed the need to rebuild the locally partitioned indexes.

                    To minimize the number of indexes which must be rebuilt I have written a PL/SQL script (http://www.go-faster.co.uk/scripts/gfc_desc_timestamp_index.sql) that:
• identifies the indexes that need to be dropped,
• captures the DDL to recreate the indexes using DBMS_METADATA and stores it in a table,
• drops the indexes,
• alters the columns that cannot be altered with the indexes in place,
• recreates the indexes.
                    The script successfully handles partitioned function-based indexes.
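A greatly simplified sketch of the idea, reusing the made-up table from the earlier example; the real script generalises this across all affected indexes, stores the captured DDL in a table, and also handles partitioned function-based indexes.
DECLARE
  l_ddl CLOB;
BEGIN
  -- 1. capture the DDL needed to recreate the index
  l_ddl := dbms_metadata.get_ddl('INDEX', 'T_DESC_TESTI');
  -- 2. drop the index so that the column can be altered
  EXECUTE IMMEDIATE 'DROP INDEX t_desc_testi';
  -- 3. alter the column that could not be altered with the index in place
  EXECUTE IMMEDIATE 'ALTER TABLE t_desc_test MODIFY (dttm_stamp TIMESTAMP)';
  -- 4. recreate the index from the captured DDL (EXECUTE IMMEDIATE accepts a CLOB from 11g)
  EXECUTE IMMEDIATE l_ddl;
END;
/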

                    On this particular HR system we only rebuilt about 400 indexes instead of over 10000. Now the standard PeopleSoft upgrade template can be run without dropping or recreating any further indexes.
