David Kurtz

This blog contains things about PeopleSoft that DBAs might find interesting.
Or then again they might not!
Non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

PeopleSoft on Oracle 12c

Fri, 2016-05-06 04:59
I was asked by Dan Iverson from psadmin.io about my experiences of PeopleSoft on Oracle 12c.  I have seen a number of PeopleSoft Financials systems on 12c recently.  Generally the experience is very positive, but one common feature is that they had all disabled Adaptive Query Optimization.

What is Adaptive Query Optimization?

Adaptive Query Optimization is a term for a set of new features in Oracle 12c that allow the optimizer to improve the accuracy of execution plans.  It is described in the Optimizer with Oracle 12c white paper.  Put simply, Oracle collects additional statistics at parse time, which it can use to generate a better execution plan than would be possible with the ordinarily collected statistics alone, and further statistics at execution time, which can be used to decide whether to change the plan for subsequent executions of the same SQL statement.
Why does it cause a Problem in PeopleSoft?
Adaptive Query Optimization is clearly a very useful feature in an application where SQL is shareable and reused.  But there's the rub.  This is another Oracle feature that works best with an application that is written the way that Oracle advise applications be written (Automatic Memory Management and the plan stability/management technologies also come to mind), and PeopleSoft is not such an application.  PeopleSoft applications are SQL parse intensive.
Parsing SQL is an expensive business in terms of time and CPU.  The library cache preserves the results of previous parses to save the overhead of repeatedly parsing the same SQL for subsequent executions.  If the SQL is different, even just by a literal value, it will have a different SQL_ID and will not be matched.  If SQL statements in the library cache are not being reused, then you incur the cost of parse for each execution.  Adaptive Query Optimization adds to that overhead, in PeopleSoft sometimes to the point where it can have a significant effect on performance.
PeopleSoft has several behaviours which lead to different SQL statements being generated from the same source:
  • Application Engine %BIND() variables are resolved to literals in the generated SQL if the ReUseStatement attribute is not set, which by default it is not.  Thus, each iteration of a loop may cause the same Application Engine step to produce a different SQL statement with a different SQL_ID (see Performance Benefits of ReUse Statement Flag in Application Engine, and the sketch after this list).  PeopleSoft development have got better at using this attribute in delivered code.  However, there are still many places where it could be set but is not.
  • There are many places in PeopleSoft where SQL is generated dynamically to cater for dynamic application configuration options.  Bind variables may be embedded in the SQL as literals, or more significant changes may be introduced such as completely different criteria or joining different tables.  In Application Engine, ReUseStatement usually cannot be used in conjunction with such steps.
  • Different concurrently executing instances of the same Application Engine programs will be allocated a different non-shared instance of a temporary record, so they reference different tables.  
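The first point can be illustrated with a hedged sketch (the record and values below are invented purely for illustration, and are not taken from any delivered program).
-- Application Engine step SQL as written by the developer:
--   UPDATE PS_MY_TMP_TBL SET PROCESS_FLAG = 'Y' WHERE EMPLID = %Bind(EMPLID)
-- Without ReUseStatement, each loop iteration produces a distinct statement with a different SQL_ID, each of which must be parsed:
UPDATE PS_MY_TMP_TBL SET PROCESS_FLAG = 'Y' WHERE EMPLID = 'K0G001';
UPDATE PS_MY_TMP_TBL SET PROCESS_FLAG = 'Y' WHERE EMPLID = 'K0G002';
-- With ReUseStatement, the statement is parsed once and re-executed with a bind variable:
UPDATE PS_MY_TMP_TBL SET PROCESS_FLAG = 'Y' WHERE EMPLID = :1;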
Some parts of PeopleSoft do produce shareable SQL statements.  Much of the SQL generated by the component processor and PeopleCode uses bind variables, the exception being where dynamic SQL is generated in PeopleCode.  PeopleSoft COBOL programs also widely use bind variables, but again some places dynamically generate SQL statements.
What does Oracle say about it?
There are a number of documents on the Oracle support website that touch on this (you will need to log in to follow these links):

Advice for the PeopleSoft Oracle DBA: The source of this document is not clear.  It is not attached to a MOS note, and is not in the usual format for such notes.  It is the only document that I can find that makes specific recommendations for Adaptive Optimization in conjunction with PeopleSoft, and it recommends totally disabling the entire Adaptive Optimization feature:
  • "optimizer_adaptive_features = FALSE - After upgrading to Oracle Database 12c, many PeopleSoft customers have noticed overall performance degradation which is related to Oracle 12c Optimizer Adaptive Feature (OAF). It is recommended that this value be disabled in all Oracle Databases running PeopleSoft Applications."
  • It has always been tempting to enable cursor_sharing with PeopleSoft so that Oracle converts literals in the SQL back to bind variables, thus matching SQL statements in the library cache that differ only by literal values.  However, it has no benefit for some dynamic SQL or for different temporary table instances.  It can also introduce other optimizer problems.  On the few occasions that I have tried it, I have never had a good experience.  This document also recommends against it.
How to Approach Issues that Appear Related to Adaptive Features Such as Adaptive Query Optimization (Doc ID 2068807.1): This note acknowledges that "while some issues with adaptive features have been found, in a number of cases, issues manifesting as performance degradation associated with Adaptive Query Optimization simply highlight deficiencies in the system, primarily inaccurate or stale object or system statistics.  Due to the nature of the activities that Adaptive Query Optimization perform, the impact of degraded performance can be widespread and can include (but not limited to):
    • Poor SQL execution performance (where a poor plan is selected)
    • Poor Parse performance (where the optimizer takes more time to determine the optimal access path)
  • The scope of either of the issues may not be limited to individual statements but can impact the whole system in unforeseen ways, for example both poor SQL execution and parse performance may cause locks to be held for a greater duration than normal causing lock contention while poor parse performance may hold latches or mutexes to similar results.
  • Blanket disabling of features: While disabling Adaptive Features with blanket parameters (see: Document 2031605.1) may provide a short term workaround, the loss of their benefits to other queries may be significant. If the underlying cause is some other factor then a better solution is to identify and address that"
Adaptive Query Optimization (Doc ID 2031605.1) discusses how to disable either the entire Adaptive Query Optimization feature
OPTIMIZER_ADAPTIVE_FEATURES=FALSE /*disables adaptive optimisation as a whole*/
or to disable individual sub-features
_optimizer_adaptive_plans=false /*disables adaptive plans*/
_optimizer_use_feedback=false /*disables the use of statistics feedback for subsequent executions. Default is true*/
_px_adaptive_dist_method=off /*disables the adaptive parallel distribution methods*/
_optimizer_dsdir_usage_control=0 /* disables the optimizer usage of dynamic statistics (sampling) directives. Default value is 126 */
_optimizer_gather_feedback=false /*disables the gathering of execution feedback in the optimizer*/
_optimizer_nlj_hj_adaptive_join=false /*disables only the adaptive join from nested loops to hash join*/
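For example (a minimal sketch, not a recommendation; whether to disable the whole feature or only sub-features is a judgement call, and underscore parameters should only be set on the advice of Oracle support), the settings can be applied dynamically:
ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE=BOTH;
ALTER SYSTEM SET "_optimizer_use_feedback" = FALSE SCOPE=BOTH;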
Summary
I have worked with a number of PeopleSoft Financials systems on Oracle 12c.  All but one of them had already disabled Adaptive Query Optimization, and I had to disable it on that one.  Once disabled, performance improved such that it was at least as good as it had been on 11g.
There is no question that it is important to maintain accurate statistics and histograms on tables in PeopleSoft.  I have written elsewhere about how to achieve that, especially on temporary tables.  However, I am left with a concern that disabling the entire feature may be excessive, and that there may be areas in PeopleSoft where it can bring some benefits.  I would like to find the opportunity to test whether it is possible to achieve better performance by only disabling certain parts of the feature.

PS360: A Utility to Extract and Present PeopleSoft Configuration and Performance Data

Wed, 2016-04-20 12:16
Introduction
Most of a PeopleSoft application is itself stored in the database in PeopleTools tables.  Therefore there is a lot of information about the configuration and operation of a PeopleSoft system in the database.  There are also performance metrics, particularly about batch processes.
PS360 is a new tool on which I am working.  It just uses SQL scripts to extract that data to HTML files and package them up in a zip file so that they can be sent for further analysis.  The style and method are closely modelled on Enkitec's EDB360 by Carlos Sierra, another free tool used for health checks and performance analysis of any Oracle database system.  PS360 aims to gather PeopleSoft-specific information that is not presented by EDB360.  It also runs in Oracle's SQL*Plus tool, and so is only available for use with an Oracle database.
Every section of PS360 is just the output of a SQL query; sometimes pre-processing is done in an anonymous PL/SQL block.  It does not install anything into the database, and does not update any table (other than the PLAN_TABLE, which is used for temporary working storage).  Each report is in tabular and/or graphical format.  All the charts are produced with the Google chart API.
The output falls into three sections.  
  • Configuration: Simple reports of certain configuration tables.
  • Performance Metrics: Process Scheduler and Application Engine timings
  • Consistency Checks: both within PeopleTools tables and between the PeopleSoft and Oracle catalogues.
Sample PS360 Index Page generated on PeopleSoft Demo Database
Instructions
The tool can be run by anyone with access to the PeopleSoft Owner database user (usually SYSADM).  That user will already have privilege to read the Oracle catalogue.
Download the tool and unzip it into a directory.  Navigate to the ps360 (master) directory, open SQL*Plus and connect as SYSADM.  Execute the script ps360.sql.  The output will be written to a zip file in the same directory.  Unpack that zip file on your own PC and open the file ps360_[database name]_0_index.html with a browser.
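A minimal sketch of such a SQL*Plus session (the connect string is a placeholder; it assumes SQL*Plus was started from the directory into which the tool was unzipped):
CONNECT SYSADM@MYDB
@ps360.sql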
Feedback and Enhancements
I am looking for feedback about the tool, and suggestions for further enhancements.
Please either leave comments here or e-mail me at info@go-faster.co.uk.

nVision Performance Tuning: Coalescing Tree Leaves

Tue, 2016-04-19 11:09
I have blogged recently about the effectiveness of the Tree Performance options in improving the performance of nVision reports.
However, a consequence of the Tree Performance Access Method 'suppress join; use literal values' is that the SQL generated by nVision will have a criterion for every leaf on each of the selected nodes of that tree.
nVision Tree Performance Options


There will be an equality condition for each single value leaf. I normally set Selector Options to Ranges of values (BETWEEN), so I get a between condition for each ranged leaf. Behind the scenes, Oracle rewrites between as a pair of inequalities, so there is no difference, but the SQL generated by nVision is slightly shorter.
The following is typical of nVision SQL with these performance options set.
SELECT A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) 
FROM
PS_LEDGER A WHERE A.LEDGER='ACTUALS' AND A.FISCAL_YEAR=2015 AND
A.ACCOUNTING_PERIOD BETWEEN 1 AND 12 AND A.CURRENCY_CD='GBP' AND
A.STATISTICS_CODE=' ' AND (A.BUSINESS_UNIT=

) AND (
A.DEPTID='C500' OR A.DEPTID='C512' OR A.DEPTID='C117' OR A.DEPTID='C157' OR
A.DEPTID='C340' OR A.DEPTID='C457' OR A.DEPTID='C510' OR A.DEPTID='A758' OR
A.DEPTID='8220' OR A.DEPTID='A704' OR A.DEPTID='A121' OR A.DEPTID='A110' OR
A.DEPTID BETWEEN 'A153' AND 'A154' OR A.DEPTID BETWEEN 'A151' AND 'A152' OR
A.DEPTID='A724' OR A.DEPTID BETWEEN 'A131' AND 'A133' OR A.DEPTID='A733' OR
A.DEPTID='A217' OR A.DEPTID='A437' OR A.DEPTID='A130' OR A.DEPTID='A134' OR
A.DEPTID='A703' OR A.DEPTID='A714' OR A.DEPTID='A218' OR A.DEPTID='A226' OR
A.DEPTID BETWEEN 'A135' AND 'A138'

A consequence of all the criteria is that Oracle can take a long time to parse the SQL statement. It may only be a few seconds in the case of a single SQL statement, but an nVision report book can consist of thousands of SQL statements.
To produce the following performance profile, I enabled Oracle SQL trace for a report book and profiled the trace. SQL Parse accounted for nearly 8% of the total runtime of this report book, so it can be significant, and can vary widely.
Event Name                              % Time      Seconds      Calls  Avg/Call  Min/Call   Max/Call
----------------------------------- ---------- ------------ ---------- --------- --------- ----------
FETCH calls [CPU]                        48.2%  3,699.8440s     16,068   0.2303s   0.0000s  178.0640s
db file sequential read                  22.5%  1,728.2101s  4,413,352   0.0004s   0.0002s    0.1294s
SQL*Net message from client [idle]        8.0%    617.7042s        926   0.6671s   0.0002s   61.3147s
PARSE calls [CPU]                         7.9%    605.9340s      5,383   0.1126s   0.0000s   11.0500s
(other events omitted)
Total                                   100.0%  7,681.4428s
Reducing the number of criteria in the SQL will reduce the parse time, but that is determined by the way the tree leaves are defined.
The leafcoal.sql script seeks to address this by repeatedly merging two consecutive leaves on the same tree node into a single ranged leaf where possible. It performs two checks before merging adjacent leaves on the same tree node:
  • There is no value of the detail field (defined in the tree structure record) between the two leaves that is not already selected.  So if the detail field is DEPT_TBL.DEPTID, the script checks that there are no values of DEPTID on PS_DEPT_TBL that are not currently selected by existing leaves but that would be included in the merged leaf (sketched after this list).
  • There is not another leaf on another node on the tree that would intersect with the merged leaf.
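The following is a simplified, hedged sketch of the first check for the DEPT_TBL.DEPTID example (this is an illustration only, not the SQL used by leafcoal.sql; :end_of_first_leaf and :start_of_second_leaf stand for the boundary values of the two adjacent leaves):
SELECT COUNT(*)
FROM ps_dept_tbl
WHERE deptid > :end_of_first_leaf
AND deptid < :start_of_second_leaf;
-- the two leaves are only merged if this count is zero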
Instructions
leafcoal.sql was written as an anonymous PL/SQL block, so there is nothing to install. It should be run in SQL*Plus connected as the PeopleSoft owner ID (usually SYSADM). It is expected that there are some adjustments to the script that the user may need to make. As delivered, it runs in a test mode that does not update the database but reports on what it would do. Change k_testmode to FALSE to make it update the database.
k_testmode     CONSTANT BOOLEAN := TRUE;  /*set this to FALSE to perform updates*/
The level of output emitted depends on the variable l_debug_level:
l_debug_level  INTEGER := 4;
  • 1. end of processing message 
  • 2. start of processing for tree 
  • 3. number of leaves in tree and number of leaves coalesced 
  • 4. details of leaves being compressed 
  • 5. start and end of each procedure 
  • 6. parameters passed to functions 
  • 7. number of rows updated/deleted during coalesce 
  • 8. dynamic SQL statement 
The script reports on the work it has done. It does not commit its updates. That is left for the user to either commit or rollback.

.(3)Processing SHARE, ,XXX_ACCOUNT,151201                                       
.(4)634 nodes, 2636 leaves
.(4)1358 leaves coalesced (52%)

(1)Commit changes or rollback
The query at the end of the script determines which trees will be processed and may need to be changed as required. For example, you might choose to coalesce the leaves on

  • specific trees,
  • most recent effective dated trees,
  • trees with literal values performance option

  FOR i IN (
SELECT DISTINCT d.setid, d.setcntrlvalue, d.tree_name, d.effdt
FROM pstreedefn d
, pstreestrct s
, psrecfielddb f
WHERE d.tree_strct_id = s.tree_strct_id
AND s.node_fieldname = 'TREE_NODE'
-- AND d.TREE_ACC_METHOD = 'L' --literal values
AND s.dtl_recname = f.recname
AND s.dtl_fieldname = f.fieldname
-- AND tree_name = 'XXX_ACCOUNT'
) LOOP
Conclusion
The number of leaves coalesced depends entirely on how the trees have been built. At one customer it produced a reduction of over 50%; at another it was only 10%. The reduction in the number of leaves does produce a corresponding reduction in the time spent on SQL parsing during nVision reports.

Interview with PeopleSoft Administrator Podcast

Fri, 2016-03-25 13:59
I recently recorded an interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast. It has been spread over three episodes. There is lots of other good stuff on the website and other episodes that are well worth listening to.
(25 March 2016) #21 - Temporary Tables
(8 April 2016) #23 - The Application Server
(15 April 2016) #24 - Application Server Tuning
You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

nVision Performance Tuning: General Principles

Sun, 2016-03-06 12:20
Over the years I have dealt with performance problems with nVision reporting on General Ledger in various Financials systems in various kinds of businesses.  Different businesses use nVision very differently, and have different challenges, but I have produced an approach that mostly works well at most customers.  I have collected that advice into a document that I have recently published on my website (http://www.go-faster.co.uk/docs.htm#nVision Tuning.Generic.pdf).

The key points are
  • Indexing
    • Effective indexing of LEDGER and LEDGER_BUDG tables to match the analysis criteria of the reports.
    • Enhanced indexing of the PSTREESELECT tables, so that the indexes fully satisfy the queries without the need to visit the tables.
  • Collection of statistics and extended statistics on the PSTREESELECT tables.
  • Using the nVision performance options 
    • use static selectors instead of dynamic selectors.  It is difficult to maintain up-to-date optimizer statistics on the selector tables with dynamic selectors.
    • simplify SQL statements by replacing joins with literal criteria
    • updated 11.4.2016: reduce SQL parse time by coalescing leaves on trees.
  • I also suggest use of Oracle Fine Grained Auditing to 
    • enhance instrumentation,
    • detect the use of dynamic selectors.
  • Appropriate partitioning of the LEDGER and LEDGER_BUDG tables.
  • Archiving.
    • If the partitioning option is not available, then I strongly recommend that as much historical data as possible is purged from the LEDGER and LEDGER_BUDG tables.
Caveat: This is a general approach, and the document makes general statements.  Every customer is different, because their data is different and often their method of analysis differs.  There is always something that requires adjustment or an exception to the general approach.  Your mileage will vary!

    Implementing Index Compression (and other Physical Storage Options) via Application Designer

    Thu, 2016-02-11 12:07
    There are some performance improvements that require physical storage options to be set on tables or indexes.
    One particular technique that I will take as an example for this article is index compression.  A good example in PeopleSoft is the tree node table, PSTREENODE.  It drives many security and hierarchical queries.  It is not updated very frequently, only as new trees are brought on.  Many of the indexes are good candidates for compression.
    This compression works by storing repeated column values only once per index leaf block.  Each distinct set of values in the columns up to the prefix length is stored in a symbol table.  The choice of prefix length can significantly affect the compression.  Oracle can calculate the optimal prefix length using
    ANALYZE INDEX … VALIDATE STRUCTURE
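    Behind the scenes, that command populates the INDEX_STATS view for the current session.  This is a minimal sketch of the idea for a single index (it is not the calc_opt_comp.sql script mentioned below; the index name is just an example):
    ANALYZE INDEX PS_PSTREENODE VALIDATE STRUCTURE;

    SELECT name, opt_cmpr_count, opt_cmpr_pctsave
    FROM   index_stats;  /*only populated in the session that ran the ANALYZE*/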
    I have written a script to make it slightly easier, calc_opt_comp.sql.  This is the output on my demo database, but I get similar results on production systems.
                                              Optimal
                                          Compression                      Weighted
                                               Prefix             Current   Average
    Table Name         Index Name              Length FREQ PARTS   Blocks  Saving %
    ------------------ ------------------ ----------- ---- ----- -------- ---------
    PSTREENODE         PSAPSTREENODE                4    1     0      280      39.0
                       PSBPSTREENODE                3    1     0      264      30.0
                       PSCPSTREENODE                1    1     0      120       7.0
                       PSDPSTREENODE                4    1     0      256      61.0
                       PSFPSTREENODE                2    1     0      256      67.0
                       PSGPSTREENODE                3    1     0      400      49.0
                       PS_PSTREENODE                4    1     0      256      44.0
    However, I want to make sure that, should the table need to be rebuilt in the future, PeopleTools will generate the DDL with the appropriate settings.  The same principle also applies to any other physical storage option.  I would always recommend that the compression prefix lengths be incorporated into the PeopleTools DDL override in Application Designer (figure 1).  While you could extend the DDL model and add another override for compression, I just append it to the PCTFREE setting.
    Figure 1. Index DDL Overrides
    However, there is a catch.  PeopleTools has never examined DDL overrides when determining whether there is a difference between the PeopleSoft and database data dictionaries, even though that comparison must be platform specific.  DDL overrides and DDL models are just strings held in the PeopleTools tables.  They can be extended (or even removed) by customers.  I assume this is the reason it was not felt possible to reliably check them.
    So, if the build settings (figure 2) are 'recreate index only if modified', which is the default, Application Designer will not generate a DDL script, nor execute any DDL.
    Figure 2. Build Settings
    The workaround has always been to set the index creation option in the build settings to 'recreate index if it already exists'.  However, we then discover the override doesn't appear in the DDL.  As Application Designer has not detected a difference between PeopleTools and the database, it has instead used the Oracle DBMS_METADATA package to generate the storage clause from the index that exists in the database.  Hence the DDL contains additional keywords not in the PeopleSoft DDL model.
    CREATE UNIQUE  INDEX PS_PSTREENODE ON PSTREENODE (SETID,
    SETCNTRLVALUE,
    TREE_NAME,
    EFFDT,
    TREE_NODE_NUM,
    TREE_NODE,
    TREE_BRANCH)
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "PSINDEX"
    /
    ALTER INDEX PS_PSTREENODE NOPARALLEL LOGGING
    /

    I have only checked this behaviour on PeopleTools 8.54, but use of DBMS_METADATA was introduced in PeopleTools 8.51, so this problem has probably existed since then.
    SELECT dbms_metadata.get_ddl('INDEX','PS_PSTREENODE')
    FROM dual

    DBMS_METADATA.GET_DDL('INDEX','PS_PSTREENODE')
    --------------------------------------------------------------------------------
    CREATE UNIQUE INDEX "SYSADM"."PS_PSTREENODE" ON "SYSADM"."PSTREENODE" ("SETID"
    , "SETCNTRLVALUE", "TREE_NAME", "EFFDT", "TREE_NODE_NUM", "TREE_NODE", "TREE_BRANCH")
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "PSINDEX"

    However, if I drop the index and then regenerate the DDL script in Application Designer,
    DROP INDEX ps_pstreenode
    /
    PeopleTools generates the create index with the compression specified in the PeopleTools table.

    CREATE UNIQUE INDEX PS_PSTREENODE ON PSTREENODE (SETID,
    SETCNTRLVALUE,
    TREE_NAME,
    EFFDT,
    TREE_NODE_NUM,
    TREE_NODE,
    TREE_BRANCH) TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT 100000
    MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 0 COMPRESS 4 PARALLEL
    NOLOGGING
    /
    ALTER INDEX PS_PSTREENODE NOPARALLEL LOGGING
    /

    Rather than go through the business of dropping the index just so you can generate the correct script with which to recreate it, I would suggest implementing the change manually by rebuilding the indexes.
    ALTER INDEX PSAPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
    ALTER INDEX PSBPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 3;
    ALTER INDEX PSCPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 1;
    ALTER INDEX PSDPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
    ALTER INDEX PSFPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 2;
    ALTER INDEX PSGPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 3;
    ALTER INDEX PS_PSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
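    Afterwards the settings can be confirmed from the data dictionary; a quick check along these lines (a sketch, not part of the original article):
    SELECT index_name, compression, prefix_length
    FROM   user_indexes
    WHERE  table_name = 'PSTREENODE'
    ORDER  BY index_name;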

    Conclusion
    This makes the business of implementing physical attributes through Application Designer much more complicated.  I would still recommend recording the settings in Application Designer, if only because it provides documentation, but it may then be easier to implement the changes manually.

    Measuring Tuxedo Queuing in the PeopleSoft Application Server

    Fri, 2015-09-04 09:13

    Why Should I Care About Queuing?
    Queuing in the application server is usually an indicator of a performance problem, rather than a problem in its own right.  Requests will back up on the inbound queue because the application server cannot process them as fast as they arrive.  This is usually seen on the APPQ, which is serviced by the PSAPPSRV process, but applies to other server processes too.  Common causes include (but are not limited to):
    • Poor performance of either SQL on the database or PeopleCode executed within the application server is extending service duration
    • The application server domain is undersized for the load.  Increasing the number of application server domains or application server processes may be appropriate.  However, before increasing the number of server processes it is necessary to ensure that the physical server has sufficient memory and CPU to support the domain (if the application server CPU is overloaded then requests move from the Tuxedo queues to the operating system run queue).
    • The application server has too many server processes per queue, causing contention in the system calls that enqueue and dequeue requests to and from the IPC queue structure.  A queue with more than 8-10 application server processes can exhibit this contention.  There will be a queue of inbound requests, but not all the server processes will be non-idle.
    When user service requests spend time queuing in the application server, that time is part of the users' response time.  Application server queuing is generally to be avoided (although it may be the least worst alternative). 
    What you do about queuing depends on the circumstances, but it is something that you do want to know about.
    3 Ways to Measure Application Server Queuing
    There are a number of ways to detect queuing in Tuxedo:
    • Direct measurement of the Tuxedo domain using the tmadmin command-line interface.  A long time ago I wrote a shell script, tuxmon.sh.  It periodically runs the printqueue and printserver commands on an application server and extracts comma-separated data to a flat file that can then be loaded into a database.  It would have to be configured for each domain in a system.
    • Direct Measurement with PeopleSoft Performance Monitor (PPM).  Events 301 and 302 simulate the printqueue and printserver commands.  However, event 301 only works from PT8.54 (and at the time of writing I am working on a PT8.53 system).  Even then, the measurements would only be taken once per event cycle, which defaults to every 5 minutes.  I wouldn't recommend increasing the sample frequency, so this will only ever be quite a coarse measurement.
    • Indirect measurement from sampled PPM transactions, although this includes time spent on the return queue and in unpacking the Tuxedo message.  This technique is what the rest of this article is about.
    Indirectly Measuring Application Server Queuing from Transactional Data
    Every PIA and Portal request includes a Jolt call made by the PeopleSoft servlet to the domain.  The Jolt call is instrumented in PPM as transaction 115.  Various layers in the application server are instrumented in PPM, and the highest point is transaction 400, which is where the service enters the PeopleSoft application server code.  Transaction 400 is always the immediate child of transaction 115.  The difference in the duration of these transactions is the duration of the following operations:
    • Transmit the message across the network from the web server to the JSH.  There is a persistent TCP socket connection.
    • To enqueue the message on the APPQ queue (including writing the message to disk if it cannot fit on the queue).
    •  Time spent in the queue
    • To dequeue the message from the queue (including reading the message back from disk if it was written there).
    • To unpack the Tuxedo message and pass the information to the service function
    • And then repeat the process for the return message back to the web server via the JSH queue (which is not shown  in tmadmin)
    I am going to make an assumption that the majority of the time is spent by the message waiting in the inbound queue and that time spent on the other activities is negligible.  This is not strictly true, but it is good enough for practical purposes.  Any error means that I will tend to overestimate queuing.
    Some simple arithmetic can convert this duration into an average queue length. A queue length of n means that n requests are waiting in the queue.  Each second there are n seconds of queue time.  So the number of seconds per second of queue time is the same as the queue length. 
    I can take all the sampled transactions in a given time period and aggregate the time spent between transactions 115 and 400.  I must multiply it by the sampling ratio, and then divide it by the duration of the time period for which I am aggregating it.  That gives me the average queue length for that period.
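    For example (figures invented purely for illustration): if, in a given minute, the sampled transactions accumulate 12 seconds of time between transactions 115 and 400, and the PPM sampling rate is 10, that represents roughly 12 x 10 = 120 seconds of queue time in a 60-second period, i.e. an average queue length of 2.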
    This query aggregates queue time across all application server domains in each system.  It would be easy to examine a specific application server, web server or time period.
    WITH c AS (
    SELECT B.DBNAME, b.pm_sampling_rate
    , TRUNC(c115.pm_agent_Strt_dttm,'mi') pm_agent_dttm
    , A115.PM_DOMAIN_NAME web_domain_name
    , SUBSTR(A400.PM_HOST_PORT,1,INSTR(A400.PM_HOST_PORT,':')-1) PM_tux_HOST
    , SUBSTR(A400.PM_HOST_PORT,INSTR(A400.PM_HOST_PORT,':')+1) PM_tux_PORT
    , A400.PM_DOMAIN_NAME tux_domain_name
    , (C115.pm_trans_duration-C400.pm_trans_duration)/1000 qtime
    FROM PSPMAGENT A115 /*Web server details*/
    , PSPMAGENT A400 /*Application server details*/
    , PSPMSYSDEFN B
    , PSPMTRANSHIST C115 /*Jolt transaction*/
    , PSPMTRANSHIST C400 /*Tuxedo transaction*/
    WHERE A115.PM_SYSTEMID = B.PM_SYSTEMID
    AND A115.PM_AGENT_INACTIVE = 'N'
    AND C115.PM_AGENTID = A115.PM_AGENTID
    AND C115.PM_TRANS_DEFN_SET=1
    AND C115.PM_TRANS_DEFN_ID=115
    AND C115.pm_trans_status = '1' /*valid transaction only*/
    --
    AND A400.PM_SYSTEMID = B.PM_SYSTEMID
    AND A400.PM_AGENT_INACTIVE = 'N'
    AND C400.PM_AGENTID = A400.PM_AGENTID
    AND C400.PM_TRANS_DEFN_SET=1
    AND C400.PM_TRANS_DEFN_ID=400
    AND C400.pm_trans_status = '1' /*valid transaction only*/
    --
    AND C115.PM_INSTANCE_ID = C400.PM_PARENT_INST_ID /*parent-child relationship*/
    AND C115.pm_trans_duration >= C400.pm_trans_duration
    ), x as (
    SELECT dbname, pm_agent_dttm
    , AVG(qtime) avg_qtime
    , MAX(qtime) max_qtime
    , c.pm_sampling_rate*sum(qtime)/60 avg_qlen
    , c.pm_sampling_rate*count(*) num_services
    FROM c
    GROUP BY dbname, pm_agent_dttm, pm_sampling_rate
    )
    SELECT * FROM x
    ORDER BY dbname, pm_agent_dttm
    • Transactions are aggregated per minute, so the queue time is divided by 60 at the end of the calculation because we are measuring time in seconds.
    Then the results from the query can be charted in Excel (see http://www.go-faster.co.uk/scripts.htm#awr_wait.xls).  The chart below was taken from a real system undergoing a performance load test, and shows how the calculated queue length varied over the course of the test.
    Is this calculation and assumption reasonable?
    The best way to validate this approach would be to measure queuing directly using tmadmin.  I could also try this on a PT8.54 system where event 301 will report the queuing.  This will have to wait for a future opportunity.
    However, I can compare queuing with the number of busy application servers as reported by PPM event 302 for the CRM database.  Around 16:28 queuing all but disappears.  We can see that there were a few idle application servers, which is consistent with the queue being cleared.  Later the queuing comes back, and most of the application servers are busy again.  So it looks reasonable.
    Application Server Activity

    PeopleTools 8.54: Performance Monitor Enhancements

    Tue, 2015-03-10 05:09
    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
    Transaction History Search Component
    There are a number of changes:
    • You can specify multiple system identifiers.  For example, you might be monitoring Portal, HR and CRM.  Now you can search across all of them in a single search.
      • It has always been the case that when you drill into the Performance Monitoring Unit (PMU), by clicking on the tree icon, you would see the whole of a PMU that invoked services from different systems.
    • You can also specify multiple transaction types, rather than have to search each transaction type individually.
    This is a useful enhancement when searching for a specific transaction or a small number of transactions.  However, I do not think it will save you from having to query the underlying transactions table.
    PPM Archive Process
    The PPM archive process (PSPM_ARCHIVE) has been significantly rewritten in PeopleTools 8.54.  In many places, it still uses this expression to identify rows to be archived or purged:
    %DateTimeDiff(X.PM_MON_STRT_DTTM, %CurrentDateTimeIn) >= (PM_MAX_HIST_AGE * 24 * 60)
    This expands to
    ROUND((CAST(( CAST(SYSTIMESTAMP AS TIMESTAMP)) AS DATE) - CAST((X.PM_MON_STRT_DTTM) AS DATE)) * 1440, 0)
       >= (PM_MAX_HIST_AGE * 24 *  60)
    which has no chance of using an index.  This used to cause performance problems when the archive process had not been run for a while and the high water marks on the history tables had built up.

    The archive process now works hour by hour, and this will use the index on the timestamp column.
    "... AND X.PM_MON_STRT_DTTM <= SYSDATE - PM_MAX_HIST_AGE 
    and (PM_MON_STRT_DTTM) >= %Datetimein('" | DateTimeValue(&StTime) | "')
    and (PM_MON_STRT_DTTM) <= %DateTimeIn('" | DateTimeValue(&EndTime) | "')"
    Tuxedo Queuing
    Since Performance Monitor was first introduced, event 301 has never reported the length of the inbound message queues in Tuxedo.  The reported queue length was always zero.  This may have been fixed in PeopleTools 8.53, but I have only just noticed it.
    Java Management Extensions (JMX) Support
    There have been some additions to Performance Monitor that suggest that it will be possible to extract performance metrics using JMX.  The implication is that the Oracle Enterprise Manager Application Management Pack for PeopleSoft will be able to do this.  However, so far I haven't found any documentation.  The new component is not mentioned in the PeopleTools 8.54: Performance Monitor documentation.
    • New Table
      • PS_PTPMJMXUSER - keyed on PM_AGENTID
    • New Columns
      • PSPMSYSDEFAULTS - PTPHONYKEY.  So far I have only seen it set to 0.
      • PSPMAGENT - PM_JMX_RMI_PORT.  So far I have only seen it set to 1.
    • New Component

      Undocumented Application Engine Parameter: EnableAEMonitoring

      Wed, 2015-03-04 13:57
      This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

      Oracle Support Document 1640355.1: E-AE: Performance Degradation When Using Do Loop Action to Process Large Amount of Data on PT 8.52 & PT 8.53 describes a performance problem in Application Engine programs with a small but frequently executed Do Loop.  Application Engine calls DBMS_APPLICATION_INFO to set MODULE and ACTION on v$session each time it goes round the loop.

      However, Oracle Bug 10130415  Latch contention on "resmgr group change latch" acknowledges a problem in some versions of Oracle, but it is resolved in 11.2.0.3 and 12c.
      Updated 8 May 2015: Oracle support do not know of any explicit link to this database bug.

      A new parameter, EnableAEMonitoring, was introduced in PT8.54 and backported to PT8.52.23 and PT8.53.13 in the Database Options section of the process scheduler configuration file (psprcs.cfg).  It controls whether Application Engine calls DBMS_APPLICATION_INFO.  The default value for this parameter is 0.

      The PeopleSoft support document does not reference the database bug report, but it seems reasonable to infer that the new parameter was introduced to work around the database bug.

      This new parameter is not described in PeopleBooks.  It does appear in the delivered configuration files on at least 8.53.13.  However, it is not present in the delivered 8.54.05 configuration file (bug 21055140).  Therefore, by default, Application Engine will not set MODULE and ACTION unless you add the parameter to the configuration file.

      [Database Options]
      ;=========================================================================
      ; Database-specific configuration options
      ;=========================================================================

      ;DMK - added to enable DBMS_APPLICATION_INFO instrumentation
      EnableAEMonitoring=1
      The behaviour is then as it has been since 8.52, as described in PeopleTools 8.52 Application Engine sets MODULE and ACTION.
      My Recommendation
      I certainly think that you should add this parameter to all process scheduler configuration files at the relevant PeopleTools versions.  Unless you specifically have the problem described in the support note, I recommend that you also set the parameter to 1 as shown above.  I have never seen the problem in affected database versions, and it is fixed in the terminal release of 11g.

      Without setting the parameter, you will lose the ability to relate Enterprise Manager and ASH data to specific Application Engine steps.  If you need to make a code change to achieve a performance improvement you will have to go through the manual process of finding the SQL in an Application Engine trace.
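      For example, once the parameter is set and Application Engine populates MODULE and ACTION, ASH samples can be broken down by program and step with a simple query along these lines (an illustrative sketch only; adjust the time window to taste):
      SELECT module, action, COUNT(*) ash_samples
      FROM   v$active_session_history
      WHERE  sample_time > SYSDATE - 1/24
      GROUP  BY module, action
      ORDER  BY ash_samples DESC;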

      PeopleTools 8.54: Oracle Resource Manager

      Wed, 2015-02-25 04:11
      This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

      Oracle Resource Manager is about prioritising one database session over another, or about restricting the overhead of one session for the good of the other database users.  A resource plan is a set of rules that are applied to some or all database sessions for some or all of the time.  Those rules may be simple or complex, but they need to reflect the business's view of what is most important.  Either way, Oracle Resource Manager requires careful design.
      I am not going to attempt to explain further here how the Oracle feature works; I want to concentrate on how PeopleSoft interfaces with it.
      PeopleTools Feature
      This feature effectively maps Oracle resource plans to PeopleSoft executables.  The resource plan will then manage the database resource consumption of that PeopleSoft process.  There is a new component that maps PeopleSoft resource names to Oracle consumer groups.  For this example I have chosen some of the consumer groups in the MIXED_WORKLOAD_PLAN that is delivered with Oracle 11g.

      • The Oracle Consumer Group field is validated against the names of the Oracle consumer groups defined in the database, using a view over the data dictionary:
      SELECT DISTINCT group_or_subplan, type
      FROM dba_rsrc_plan_directives
      WHERE plan = 'MIXED_WORKLOAD_PLAN'
      ORDER BY 2 DESC,1
      /

      GROUP_OR_SUBPLAN TYPE
      ------------------------------ --------------
      ORA$AUTOTASK_SUB_PLAN PLAN
      BATCH_GROUP CONSUMER_GROUP
      INTERACTIVE_GROUP CONSUMER_GROUP
      ORA$DIAGNOSTICS CONSUMER_GROUP
      OTHER_GROUPS CONSUMER_GROUP
      SYS_GROUP CONSUMER_GROUP
      If you use Oracle SQL Trace on a PeopleSoft process (in this case PSAPPSRV) you find the following query.  It returns the name of the Oracle consumer group that the session should use.
      • The entries in the component shown above are stored in PS_PT_ORA_RESOURCE.
      • PS_PTEXEC2RESOURCE is another new table that maps PeopleSoft executable name to resource name.
      SELECT PT_ORA_CONSUMR_GRP 
      FROM PS_PT_ORA_RESOURCE
      , PS_PTEXEC2RESOURCE
      WHERE PT_EXECUTABLE_NAME = 'PSAPPSRV'
      AND PT_ORA_CONSUMR_GRP <> ' '
      AND PS_PT_ORA_RESOURCE.PT_RESOURCE_NAME = PS_PTEXEC2RESOURCE.PT_RESOURCE_NAME

      PT_ORA_CONSUMR_GRP
      ------------------------
      INTERACTIVE_GROUP

      And then the PeopleSoft process explicitly switches its group, thus:
      DECLARE 
      old_group varchar2(30);
      BEGIN
      DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
      END;
      Unfortunately, the consequence of this explicit switch is that it overrides any consumer group mapping rules, as I demonstrate below.
      Setup
      The PeopleSoft owner ID needs some additional privileges if it is to be able to switch to the consumer groups.
      BEGIN
      DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE
      ('SYSADM', 'ADMINISTER_RESOURCE_MANAGER',FALSE);
      END;

      BEGIN
      FOR i IN(
      SELECT DISTINCT r.pt_ora_consumr_grp
      FROM sysadm.ps_pt_ora_resource r
      WHERE r.pt_ora_consumr_grp != ' '
      AND r.pt_ora_consumr_grp != 'OTHER_GROUPS'
      ) LOOP
      dbms_output.put_line('Grant '||i.pt_ora_consumr_grp);
      DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP
      (GRANTEE_NAME => 'SYSADM'
      ,CONSUMER_GROUP => i.pt_ora_consumr_grp
      ,GRANT_OPTION => FALSE);
      END LOOP;
      END;
      /

      The RESOURCE_MANAGER_PLAN initialisation parameter should be set to the name of the plan which contains the directives.
      NAME                                 TYPE        VALUE
      ------------------------------------ ----------- ----------------------
      resource_manager_plan string MIXED_WORKLOAD_PLAN
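      For example (a sketch; substitute your own plan name), the plan can be set dynamically:
      ALTER SYSTEM SET resource_manager_plan = 'MIXED_WORKLOAD_PLAN' SCOPE=BOTH;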

      I question one or two of the mappings on PS_PTEXEC2RESOURCE.
      SELECT * FROM PS_PTEXEC2RESOURCE …

      PT_EXECUTABLE_NAME PT_RESOURCE_NAME
      -------------------------------- -----------------

      PSAPPSRV APPLICATION SERVE
      PSQED MISCELLANEOUS
      PSQRYSRV QUERY SERVER

      • PSNVS is the nVision Windows executable.  It is in PeopleTools resource MISCELLANEOUS.  This is nVision running in 2-tier mode.  I think I would put nVision into the same consumer group as query.  I can't see why it wouldn't be possible to create new PeopleSoft consumer groups and map them to certain executables.  nVision would be a candidate for a separate group. 
        • For example, one might want to take a different approach to parallelism in GL reporting having partitioned the LEDGER tables by FISCAL_YEAR and ACCOUNTING_PERIOD
      • PSQED is also in MISCELLANEOUS.  Some customers use it to run PS/Query in 2-tier mode, and allow some developers to use it to run queries.  Perhaps it should also be in the QUERY SERVER group.
      Cannot Mix PeopleSoft Consumer Group Settings with Oracle Consumer Group Mappings
      I would like to be able to blend the PeopleSoft configuration with the ability to automatically associate Oracle consumer groups with specific values of MODULE and ACTION.  Purely as an example, I am trying to move the Process Monitor component into the SYS_GROUP consumer group.
      BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

      DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
      (attribute => 'MODULE_NAME'
      ,value => 'PROCESSMONITOR'
      ,consumer_group => 'SYS_GROUP');
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
      END;
      /

      However, it doesn't work because the explicit setting overrides any rules, and you cannot prioritise other rules above explicit settings.
      exec dbms_application_info.set_module('PROCESSMONITOR','PMN_PRCSLIST');
      SELECT REGEXP_SUBSTR(program,'[^.@]+',1,1) program
      , module, action, resource_consumer_group
      FROM v$session
      WHERE module IN('PROCESSMONITOR','WIBBLE')
      ORDER BY program, module, action
      /

      So I have created a new SQL*Plus session and set the module/action, and it has automatically moved into the SYS_GROUP.  Meanwhile, I have been into the Process Monitor in the PIA and the module and action of the PSAPPSRV sessions have been set, but they remain in the INTERACTIVE_GROUP.
      PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
      ---------------- ---------------- ---------------- ------------------------
      PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
      PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
      sqlplus PROCESSMONITOR PMN_PRCSLIST SYS_GROUP

      If I set the module to something that doesn't match a rule, the consumer group goes back to OTHER_GROUPS which is the default. 
      exec dbms_application_info.set_module('WIBBLE','PMN_PRCSLIST');

      PROGRAM MODULE ACTION RESOURCE_CONSUMER_GROUP
      ---------------- ---------------- ---------------- ------------------------
      PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
      PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
      sqlplus WIBBLE PMN_PRCSLIST OTHER_GROUPS

      Now, if I explicitly set the consumer group exactly as PeopleSoft does, my session automatically moves into the INTERACTIVE_GROUP.
      DECLARE 
      old_group varchar2(30);
      BEGIN
      DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
      END;
      /

      PROGRAM MODULE ACTION RESOURCE_CONSUMER_GROUP
      ---------------- ---------------- ---------------- ------------------------
      PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
      PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
      sqlplus WIBBLE PMN_PRCSLIST INTERACTIVE_GROUP

      Next, I will set the module back to match the rule, but the consumer group doesn't change because the explicit setting takes priority over the rules.
      PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
      ---------------- ---------------- ---------------- ------------------------
      PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
      PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
      sqlplus PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
      You can rearrange the priority of the other rule settings, but explicit must have the highest priority (if you try otherwise you will get ORA-56704).  So, continuing with this example, I cannot assign a specific component to a different resource group unless I don't use the PeopleSoft configuration for PSAPPSRV.
      Instead, I could create a rule to assign a resource group to PSAPPSRV via the program name, and have a higher priority rule to override that when the module and/or action is set to a specific value.  However, first I have to disengage the explicit consumer group change for PSAPPSRV by changing its row in PS_PTEXEC2RESOURCE.
      UPDATE ps_ptexec2resource 
      SET pt_resource_name = 'DO_NOT_USE'
      WHERE pt_executable_name = 'PSAPPSRV'
      AND pt_resource_name = 'APPLICATION SERVER'
      /
      COMMIT
      /
      BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      END;
      /
      BEGIN
      DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
      (attribute => 'CLIENT_PROGRAM'
      ,value => 'PSAPPSRV'
      ,consumer_group => 'INTERACTIVE_GROUP');

      DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
      (attribute => 'MODULE_NAME'
      ,value => 'PROCESSMONITOR'
      ,consumer_group => 'SYS_GROUP');

      DBMS_RESOURCE_MANAGER.set_consumer_group_mapping_pri(
      explicit => 1,
      oracle_user => 2,
      service_name => 3,
      module_name_action => 4, --note higher than just module
      module_name => 5, --note higher than program
      service_module => 6,
      service_module_action => 7,
      client_os_user => 8,
      client_program => 9, --note lower than module
      client_machine => 10
      );
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
      END;
      /
      So, you would have to choose between using either the PeopleSoft configuration or the Oracle Resource Manager configuration.  It depends on your requirements.  This is going to be a decision you will have to take when you design your resource management.  Of course, you can always use just the mapping approach in versions of PeopleTools prior to 8.54.

      Conclusion
      I have never seen Oracle Resource Manager used with PeopleSoft, probably because setting it up is not trivial, and it is then difficult to test the resource plan.  I think this enhancement is a great start that makes it very much easier to implement Oracle Resource Manager on PeopleSoft.  However, I think we need more granularity.
      • I would like to be able to put specific processes run on the process scheduler by name into specific consumer groups.  For now, you could do this with a trigger on PSPRCSRQST that fires on process start-up and makes an explicit consumer group change (and puts it back again for Application Engine on completion); a sketch follows this list.
      • I would like the ability to set different resource groups for the same process name in different application server domains.  For example,
        • I might want to distinguish between PSQRYSRV processes used for ad-hoc PS/Queries on certain domains from PSQRYSRVs used to support nVision running in 3-tier mode on other domains.
        • I might have different PIAs for backup-office and self-service users going to different applications servers.  I might want to prioritise back-office users over self-service users.
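      The following is only a hedged sketch of the trigger idea in the first point above, not a tested implementation (the process name and target group are invented; it assumes, as is usual, that the process itself updates its PSPRCSRQST row to RUNSTATUS '7', 'Processing', as it starts, so the trigger fires in that process's session):
      CREATE OR REPLACE TRIGGER sysadm.set_prcs_consumer_group
      BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
      FOR EACH ROW
      WHEN (new.runstatus = '7' AND new.prcsname = 'MY_NVISION_RPTBOOK')
      DECLARE
        l_old_group VARCHAR2(30);
      BEGIN
        DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('BATCH_GROUP', l_old_group, FALSE);
      END;
      /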
      Nonetheless, I warmly welcome the new support for Oracle Resource Manager in PeopleTools.  It is going to be very useful for RAC implementations, and I think it will be essential for multi-tenant implementations where different PeopleSoft product databases are plugged into the same container database.

      PeopleTools 8.54: Multiple Query Security Records

      Mon, 2015-02-23 04:26
      This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

      This post is not about a database feature newly supported in PeopleTools, but PeopleTools is capable of doing something new that could negatively impact the database.  When I saw the following warning in the PeopleTools 8.54 release notes, I thought I should look into it.
      "PeopleTools has added an Advanced Query Security option to Application Designer. This feature allows up to five Query Security Records to be associated with a single record, including the ability to associate security with non-key fields. While powerful, this feature should be used sparingly because multiple additional joins will affect query performance."

      The PeopleTools documentation shows how to add multiple query security records in Application Designer, but doesn't really explain what effect it will have on queries on that record.
      PeopleTools has always allowed a query security record to be defined on a record.  This is the record properties for JOB.


      I am going to create a simple example query that joins PS_JOB and PS_NAMES.  These records have different query security records, so both query security records appear in the PS/Query.
      SELECT B.EMPLID, B.DEPTID
      FROM PS_JOB B, PS_EMPLMT_SRCH_QRY B1, PS_NAMES A, PS_PERALL_SEC_QRY A1
      WHERE ( B.EMPLID = B1.EMPLID
      AND B.EMPL_RCD = B1.EMPL_RCD
      AND B1.OPRID = 'PS'
      AND A.EMPLID = A1.EMPLID
      AND A1.OPRID = 'PS'
      AND ( B.EFFDT =
      (SELECT MAX(B_ED.EFFDT) FROM PS_JOB B_ED
      WHERE B.EMPLID = B_ED.EMPLID
      AND B.EMPL_RCD = B_ED.EMPL_RCD
      AND B_ED.EFFDT <= SYSDATE)
      AND B.EFFSEQ =
      (SELECT MAX(B_ES.EFFSEQ) FROM PS_JOB B_ES
      WHERE B.EMPLID = B_ES.EMPLID
      AND B.EMPL_RCD = B_ES.EMPL_RCD
      AND B.EFFDT = B_ES.EFFDT)
      AND B.EMPLID = A.EMPLID
      AND A.EFFDT =
      (SELECT MAX(A_ED.EFFDT) FROM PS_NAMES A_ED
      WHERE A.EMPLID = A_ED.EMPLID
      AND A.NAME_TYPE = A_ED.NAME_TYPE
      AND A_ED.EFFDT <= SYSDATE) ))


      The new version of the same query, but this time with multiple query security records in force, is below.  Note that:
      • A1, B1 are the query security records defined on the record properties that have always been present in PeopleTools.
      • B4, B5, B6 are the advanced query security records.  Note that EMPLMT_SRCH_QRY has join criteria on the columns specified in the Advanced Query Security Mapping dialogue.
      • EMPLMT_SRCH_QRY gets joined twice to JOB because it is the query security record B1 and an advanced query security record B4, so the advanced settings are in addition to the standard setting.  Be careful not to duplicate records.  The PS/Query Security Join Optimization setting (introduced in PeopleTools 8.52) does not help with this.
      • The standard query security record is stored in PSRECDEFN.QRYSRCRECNAME, while the advanced query security record definitions are stored in a new PeopleTools table PSRECSECFLDMAPS.
      SELECT B.EMPLID, B.DEPTID
      FROM PS_JOB B, PS_EMPLMT_SRCH_QRY B1, PS_EMPLMT_SRCH_QRY B4,
      PS_PERALL_SEC_QRY B5, PS_PERS_SRCH_QRY B6,
      PS_NAMES A, PS_PERALL_SEC_QRY A1
      WHERE ( B.EMPLID = B1.EMPLID
      AND B.EMPL_RCD = B1.EMPL_RCD
      AND B1.OPRID = 'PS'
      AND B.EMPLID = B4.EMPLID
      AND B.EMPL_RCD = B4.EMPL_RCD
      AND B4.OPRID = 'PS'
      AND B.EMPLID = B5.EMPLID
      AND B5.OPRID = 'PS'
      AND B.EMPLID = B6.EMPLID
      AND B6.OPRID = 'PS'

      AND A.EMPLID = A1.EMPLID
      AND A1.OPRID = 'PS'
      AND ( B.EFFDT =
      (SELECT MAX(B_ED.EFFDT) FROM PS_JOB B_ED
      WHERE B.EMPLID = B_ED.EMPLID
      AND B.EMPL_RCD = B_ED.EMPL_RCD
      AND B_ED.EFFDT <= SYSDATE)
      AND B.EFFSEQ =
      (SELECT MAX(B_ES.EFFSEQ) FROM PS_JOB B_ES
      WHERE B.EMPLID = B_ES.EMPLID
      AND B.EMPL_RCD = B_ES.EMPL_RCD
      AND B.EFFDT = B_ES.EFFDT)
      AND B.EMPLID = A.EMPLID
      AND A.EFFDT =
      (SELECT MAX(A_ED.EFFDT) FROM PS_NAMES A_ED
      WHERE A.EMPLID = A_ED.EMPLID
      AND A.NAME_TYPE = A_ED.NAME_TYPE
      AND A_ED.EFFDT <= SYSDATE) ))
      Conclusion
      I know from previous experience that having just two different query security records on different base records, as in the first example above, can lead to significant performance problems.  This new feature has the potential to add up to five more per record.
      I can see that this feature could have occasional application where the additional security is not joined by a key field.  However, I would generally echo the sentiment in the release notes, and use it sparingly.  Instead of two query security records, could you merge them into one security record?

      PeopleTools 8.54: %SelectDummyTable Meta-SQL

      Sun, 2015-02-22 11:57
      This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
        
      PeopleTools simply evaluates the %SelectDummyTable meta-SQL as 'DUAL' on Oracle.

      In Oracle, DUAL is just a convenience table.  You don't need to use it; you can use anything you want.  PeopleSoft applications often use PS_INSTALLATION when they need a single-row source in a query.  This was another example of platform agnosticism: PS_INSTALLATION is available on every platform and every PeopleSoft installation, DUAL is not.

      Instead of coding this (the example is taken from ESPP_REF_REVMAIN.PSHUP.Do When)
      %Select(IF_FLAG) 
      SELECT 'X'
      FROM PS_INSTALLATION
      WHERE %Bind(ST_SEND_SRC) = 'N'
      You can now code this instead:
      %Select(IF_FLAG) 
      SELECT 'X'
      FROM %SelectDummyTable
      WHERE %Bind(ST_SEND_SRC) = 'N'
      Which resolves to:
      %Select(IF_FLAG)  
      SELECT 'X'
      FROM DUAL
      WHERE %Bind(ST_SEND_SRC) = 'N'
      There are two advantages to using DUAL.
      • In the database, the Oracle optimizer knows that DUAL is a special one-row, one-column table.  When you use it in queries, it uses this knowledge when generating the execution plan (see the sketch after this list).
      • If you used a real table, there was a risk that it could have no rows, or more than one row.  Either could cause application problems.  In previous versions of Oracle, when it was a real physical table, that problem could also occur with DUAL.  However, since Oracle 10g it is just a memory structure in the database, and accessing it involves neither a logical nor a physical read. 
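      The first point can be seen in an execution plan; a minimal sketch (the plan shown as a comment is typical of what a 12c database produces, abridged for illustration):
      EXPLAIN PLAN FOR SELECT 'X' FROM DUAL;
      SELECT * FROM TABLE(dbms_xplan.display);
      /*
      -----------------------------------------
      | Id | Operation        | Name | Rows   |
      -----------------------------------------
      |  0 | SELECT STATEMENT |      |      1 |
      |  1 |  FAST DUAL       |      |      1 |
      -----------------------------------------
      */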
Conclusion
PeopleSoft have created a Meta-SQL that evaluates to DUAL so that this Oracle optimization is implemented in a platform-generic manner.
      I would not bother to go back and change existing code to use this, unless perhaps I was visiting the code anyway, but I would certainly recommend its use going forward.
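You can see the first of those advantages for yourself.  A minimal sketch, run from SQL*Plus:
EXPLAIN PLAN FOR
SELECT 'X' FROM DUAL
/
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY())
/
-- The plan should show a FAST DUAL operation, which performs no logical or physical I/O.
-- The equivalent query against PS_INSTALLATION has to visit a real table block.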

      PeopleTools 8.54: %SQLHint Meta-SQL

      Sat, 2015-02-21 04:41
      This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
       
This new PeopleCode meta-SQL macro performs a search of a SQL statement for the nth instance of a SQL command keyword and inserts a string after it.
      %SqlHint(SQL_cmd, index, hint_text, DB_platform [, {ENABLE | DISABLE}])
It is particularly effective with the %InsertSelect meta-SQL.  Previously, the only way to put a hint into the main select was with a variable assignment, but that didn't work if the DISTINCT keyword was used, because the hint then appeared after the DISTINCT where Oracle ignores it.
      %InsertSelect(DISTINCT, DMK,JOB J, EMPLID= /*+LEADING(J)*/ J.EMPLID) 
      FROM PS_JOB J
      which resolves to:
      INSERT INTO PS_DMK (EMPLID 
      , EMPL_RCD
      , EFFDT
      , EFFSEQ
      , SETID_DEPT
      , DEPTID)
      SELECT DISTINCT /*+LEADING(J)*/ J.EMPLID
      , J.EMPL_RCD
      , J.EFFDT
      , J.EFFSEQ
      , J.SETID_DEPT
      , J.DEPTID
      FROM PS_JOB J
      Here is a deliberately contrived example of how to use the command.
      • I have created a separate SQL object, DMK_CURJOB, to hold effective date/sequence sub-queries which I will reference from an application engine SQL.
      %P(2).EFFDT = ( 
      SELECT MAX(%P(3).EFFDT)
      FROM %Table(%P(1)) %P(3)
      WHERE %P(3).EMPLID = %P(2).EMPLID
      AND %P(3).EMPL_RCD = %P(2).EMPL_RCD
      AND %P(3).EFFDT <= %CurrentDateIn)
      AND %P(2).EFFSEQ = (
      SELECT MAX(%P(4).EFFSEQ)
      FROM %Table(%P(1)) %P(4)
      WHERE %P(4).EMPLID = %P(2).EMPLID
      AND %P(4).EMPL_RCD = %P(2).EMPL_RCD
      AND %P(4).EFFDT = %P(2).EFFDT)
      • I want my insert statement to run in direct-path mode, so I am putting an APPEND hint into the INSERT statement.
      • I am going to put different hints into each of the different SQL query blocks, including the sub-queries in the SQL object.
      %SqlHint(INSERT,1,'/*+APPEND*/',ORACLE,ENABLE)
      %SqlHint(INSERT,1,'/*Developer Comment*/',ORACLE,DISABLE)
      %SqlHint(SELECT,1,'/*+LEADING(J)*/',ORACLE)
      %SqlHint(SELECT,2,'/*+UNNEST(J1)*/',ORACLE)
      %SqlHint(SELECT,3,'/*+UNNEST(J2)*/',ORACLE)
      %InsertSelect(DISTINCT, DMK,JOB J)
      FROM PS_JOB J
      WHERE %Sql(DMK_CURJOB,JOB,J,J1,J2)

      Which resolves to:
       INSERT /*+APPEND*/ INTO PS_DMK (EMPLID 
      , EMPL_RCD
      , EFFDT
      , EFFSEQ
      , SETID_DEPT
      , DEPTID)
      SELECT /*+LEADING(J)*/ DISTINCT J.EMPLID
      , J.EMPL_RCD
      , J.EFFDT
      , J.EFFSEQ
      , J.SETID_DEPT
      , J.DEPTID
      FROM PS_JOB J
      WHERE J.EFFDT = (
      SELECT /*+UNNEST(J1)*/ MAX(J1.EFFDT)
      FROM PS_JOB J1
      WHERE J1.EMPLID = J.EMPLID
      AND J1.EMPL_RCD = J.EMPL_RCD
      AND J1.EFFDT <= TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD'))
      AND J.EFFSEQ = (
      SELECT /*+UNNEST(J2)*/ MAX(J2.EFFSEQ)
      FROM PS_JOB J2
      WHERE J2.EMPLID = J.EMPLID
      AND J2.EMPL_RCD = J.EMPL_RCD
      AND J2.EFFDT = J.EFFDT)

The %SQLHint processing appears to be done after all other expansions, so the search and insert can reach into %SQL objects.  Previously we had to put hints into the SQL object itself, although we could sometimes avoid that by using query-block naming hints.  Now I can place any hint after any SQL command, and I can choose to apply a hint in just one step that references a SQL object, rather than in the SQL object itself, which would affect every step that references it.

      If you put multiple substitutions in for the same SQL command, only the last enabled one is processed.
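For instance, a contrived sketch of my own following the syntax above: if I have understood the rule correctly, only the second of these two enabled substitutions for the first SELECT takes effect.
%SqlHint(SELECT,1,'/*+FULL(J)*/',ORACLE,ENABLE)
%SqlHint(SELECT,1,'/*+INDEX(J PS_JOB)*/',ORACLE,ENABLE)
%InsertSelect(DMK,JOB J)
FROM PS_JOB J
So the resolved statement would contain only the INDEX hint after the SELECT keyword; the FULL hint is discarded.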

I frequently find that developers love to put comments into SQL, which then appear in log files and Oracle monitoring tools.  I hate that.  Comments in SQL that runs on the database are an unnecessary overhead, and they turn up later in SQL monitoring and tuning tools.  It is worth noting that comments that are not hints are stripped out of SQL in PL/SQL procedures.  Perhaps developers should put their comments in a disabled %SQLHint so they do not appear in the final SQL?
Oracle SQL Outlines/Profiles/Patches/Baselines
All this talk of adding hints to source code is going to cause an Oracle DBA to ask why not use the techniques provided by Oracle to control execution plans of Application Engine SQL.  The problem is that those techniques are frequently thwarted by the dynamic nature of the SQL created by PeopleSoft.
      • Bind variables can become literals when the SQL is generated, though profiles and baselines can handle this.
      • Different instances of temporary records are different tables in the database. You would have to handle each table (or every combination of tables if you have multiple temporary records in one statement).
      • Many delivered processes have dynamically generated SQL and you would again have to handle every variation separately.
Conclusion
The %SQLHint meta-SQL brings a huge advantage for Oracle's own PeopleSoft developers.  Although it is possible to create platform-specific Application Engine sections, there is huge reluctance to do this in development at Oracle.  This is entirely reasonable, as it results in having to develop, test and maintain separate code lines.  Many of the meta-SQL macros are designed precisely to overcome SQL differences between the supported database platforms.  Using %SQLHint, developers can now put Oracle-specific optimizer hints into platform-generic Application Engine steps, safe in the knowledge that the hints will only affect Oracle platforms.

      This is going to be a very useful feature.  Simple.  Effective.  I look forward to hinting the places that other techniques cannot reach!

      PeopleTools 8.54: Table/Index Partitioning

      Fri, 2015-02-20 10:00
      This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
Partitioning in Oracle
Partitioning of table (and index) segments involves breaking them into several smaller segments where certain data values only occur in certain segments.  Thus, if a query is looking for a certain data value, it may be able to eliminate some partitions without having to scan them, because by definition those values cannot occur in those partitions.  This saves logical and physical reads and improves performance.  It is called partition elimination or pruning, and it is often the principal reason for partitioning a table.
      Physically each partition in a partitioned table is a table, but logically the partitions form a single table.  There should be no need to change application code in order to use partitioning, but the way the code is written will affect Oracle's ability to perform partition elimination.
      The following diagram is taken from the Oracle 11g Database VLDB and Partitioning Guide

If a query is only looking for data in March, then it can eliminate the January and February partitions without inspecting them.  However, if it is filtering data by another column, then it still has to inspect all three partitions.  Application design will determine whether, and if so how, to partition a table.
NB: I can't mention partitioning without also saying that the Partitioning option is a licensed feature of Oracle Database Enterprise Edition.
Partitioning in PeopleTools prior to 8.54
I have to declare an interest.  I have been using partitioning in PeopleSoft since PeopleTools 7.5, when it was introduced in Oracle 8i.  The line from PeopleSoft was that you could introduce partitioning without invalidating your support (the same is not true of E-Business Suite).  Application Designer won't generate partition DDL, so you were supposed to give your DDL scripts to the DBA, who would add the partition clauses.  So if you wanted to use partitioning, you were plunged into a hellish world of manual scripting.  One of the key benefits of Application Designer is that it generates the DDL for you.
Since 2001, I have developed a PL/SQL utility that effectively reverse-engineers the build-script functionality of Application Designer, but then adds the partitioning clauses.  It also adds partitions, and has been extended to assist with partition-wise data archive/purge.  It is in use at a number of sites running Global Payroll (for which it was originally designed) and Financials (see Managing Oracle Table Partitioning in PeopleSoft Applications with GFC_PSPART Package).
So, in investigating the new partitioning feature of PeopleTools 8.54, I was concerned about two things:
• Is my partitioning utility now obsolete, or should I continue to use it?
• How would I be able to retrofit existing partitioning into PeopleTools?
Partitioning in PeopleTools 8.54
I am going to illustrate the behaviour of the new partition support with a number of examples.
Example 1: Range Partitioning PSWORKLIST
In this example, I will range partition table PSWORKLIST on the INSTSTATUS column.  The valid statuses for this column are:

INSTSTATUS  Description
0           Available
1           Selected
2           Worked
3           Cancelled
• the first partition will only contain statuses 0 and 1, which are the open worklist items,
• the other partition will contain the other statuses, 2 and 3, which are the closed items.
The application repeatedly queries this table looking for worklist items to be processed, using the criterion INSTSTATUS < 2.  Over time, unless data is archived, the vast majority of entries are closed.  This partitioning strategy will enable the application to find the open worklist items quickly by eliminating the larger closed partition and only querying the smaller open-item partition.  As items are worked or cancelled, their statuses are updated to 2 or 3 and they will automatically be moved to the other partition.
      This is something that I have actually done on a customer site, and it produced a considerable performance improvement.
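Something like this demonstrates the pruning (a sketch, assuming the two-partition PSWORKLIST built later in this example; the select list is illustrative):
EXPLAIN PLAN FOR
SELECT BUSPROCNAME, WORKLISTNAME, INSTANCEID
FROM   PSWORKLIST
WHERE  OPRID = 'PS'
AND    INSTSTATUS < 2
/
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY(format=>'BASIC +PARTITION'))
/
-- The Pstart/Pstop columns should both point at partition 1 (the open partition);
-- the larger closed partition is never scanned.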
      PeopleSoft provides a component that allows you to configure the partitioning strategy for a record.  However, I immediately ran into my first problem. 
• The Partitioning Utility component will only permit me to partition by a PeopleSoft unique key column.  If a query doesn't have a predicate on the partitioning column, then Oracle will certainly not be able to prune any partitions, and the query will perform no better than if the table had not been partitioned.  While a column frequently used in selective criteria is often the subject of an index, and sometimes of the unique key, this is not always the case.  It does not make sense for this utility component to assume it.
      • In this case, INSTSTATUS is not part of any delivered index, though I added it to index B.  I have seen that the application frequently queries the PSWORKLIST table by INSTSTATUS, so it does make sense to partition it on that column.
      However, I can customise the component to get around this.  The key field is validated by the view PPMU_RECKEYS_VW.
      SELECT A.RECNAME 
      ,A.FIELDNAME
      FROM PSRECFIELDALL A
      WHERE %DecMult(%Round(%DECDIV(A.USEEDIT,2),0 ) , 2) <> A.USEEDIT
      I can change the view as follows:
      SELECT A.RECNAME
      ,A.FIELDNAME
      FROM PSRECFIELDALL A /* WHERE %DecMult(%Round(%DECDIV(A.USEEDIT,2),0 ) , 2) <> A.USEEDIT*/
      , PSDBFIELD B
      WHERE A.FIELDNAME = B.FIELDNAME
      AND B.FIELDTYPE IN(0,2,3,4,5,6)
      So, now I can specify the partitioning for this table in the Partitioning Utility Component
I notice that I can leave the tablespace blank in the component, but the TABLESPACE keyword is still generated in the DDL, so I have to put a tablespace in.  It would be valid to omit physical attributes at partition level, in which case they would be inherited from table level, and similarly at table level.
• The component automatically adds a MAXVALUE partition.  This means that it is valid to put any value into the partitioning column; otherwise an out-of-range value would cause an error.  However, it might not be what I want.
• The component also adds a table storage clause, overriding anything specified on the record, with a fixed PCTFREE 20 that applies to all partitions.  Again, this might not be what I want.  The appropriate value of PCTFREE depends on whether and how I update data in the table.
      • There are a number of things that I can't control in this component
  • The name of the MAXVALUE partition.
  • The tablespace of the MAXVALUE partition, which defaults to the same tablespace as the last defined partition, and that might not be what I want.
  • Any other physical attribute of any of the partitions.  For example, I might want a different PCTFREE on partitions containing data that will not be updated.
• The component adds a clause to enable row movement.  This permits Oracle to move rows between partitions if necessary when the value of the partitioning key column is updated.  In this case it is essential, because as worklist items are completed they move from the first partition to the other.  ALTER TABLE ... SHRINK also requires row movement, so it is useful to enable it generally.
      The partitioning definition can be viewed in Application Designer under Tools -> Data Administration -> Partitioning.


The create table script (PSBUILD.SQL) does not contain any partition DDL.  So first you build the table, and then you alter it to be partitioned.  To be fair, this limitation is set out in the PeopleTools documentation, and it is not unreasonable, as you would often build the table first and decide to partition it later.  I do the same in my own utility.
      -- Start the Transaction 


      -- Create temporary table

      CREATE TABLE PSYPSWORKLIST (BUSPROCNAME VARCHAR2(30) DEFAULT ' ' NOT
      NULL,

      DESCR254_MIXED VARCHAR2(254) DEFAULT ' ' NOT NULL) PARTITION BY
      RANGE (INSTSTATUS)
      (
      PARTITION OPEN VALUES LESS THAN (2) TABLESPACE PTTBL,
      PARTITION PE_MAXVALUE VALUES LESS THAN (MAXVALUE) TABLESPACE PTTBL
      )
      PCTFREE 20 ENABLE ROW MOVEMENT
      /

      -- Copy from source to temp table

      INSERT INTO PSYPSWORKLIST (
      BUSPROCNAME,

      DESCR254_MIXED)
      SELECT
      BUSPROCNAME,

      DESCR254_MIXED
      FROM PSWORKLIST
      /

      -- CAUTION: Drop Original Table

      DROP TABLE PSWORKLIST
      /

      -- Rename Table

      RENAME PSYPSWORKLIST TO PSWORKLIST
      /

      -- Done

      CREATE UNIQUE INDEX PS_PSWORKLIST ON PSWORKLIST (BUSPROCNAME,
      ACTIVITYNAME,
      EVENTNAME,
      WORKLISTNAME,
      INSTANCEID)
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSINDEX"
      /
      ALTER INDEX PS_PSWORKLIST NOPARALLEL LOGGING
      /

      CREATE INDEX PSBPSWORKLIST ON PSWORKLIST (OPRID,
      INSTSTATUS)
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSINDEX"
      /
      ALTER INDEX PSBPSWORKLIST NOPARALLEL LOGGING
      /
      CREATE INDEX PSBWORKLIST ON PSWORKLIST
      ('') LOCAL TABLESPACE PTTBL
      /

The DDL to create the partitioned index does not seem to come out properly.  The first CREATE INDEX command was generated by Application Designer extracting it from the catalogue with DBMS_METADATA.  This functionality was introduced in PeopleTools 8.51 to preserve existing configuration.  The second CREATE INDEX comes from the partitioning definition.
• The index column list is missing; it should come from the column list defined in Application Designer.
• The locally partitioned index is in the same tablespace as the table, instead of the tablespace defined on the index.
  • I would not normally keep indexes in the same tablespace as the table (the rationale being that if I lost a tablespace containing only indexes, I could rebuild it rather than recover it).
I also note that the table is not altered NOLOGGING, yet the indexes are still made NOPARALLEL.  The default degree of parallelism on a partitioned table is equal to the number of partitions, so it will cause parallel query to be invoked on the table access.
      • I strongly recommend against generally allowing parallel query in all SQLs that reference a partitioned table in an OLTP system, which is what PeopleSoft is.  There are occasions where parallel query is the right thing to do, and in those cases I would use a hint, or SQL profile or SQL patch.
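After the build, it is worth checking and, if necessary, resetting the table's degree of parallelism.  A sketch of my own, not part of the generated script:
SELECT table_name, degree, logging
FROM   user_tables
WHERE  table_name = 'PSWORKLIST'
/
-- If the degree is anything other than 1, reset it so that OLTP queries stay serial.
ALTER TABLE PSWORKLIST NOPARALLEL
/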
      If I leave the Partitioning Utility component and then go back to a record where I have previously created partition DDL, then the partition DDL is still there, but all the other information that I typed in has disappeared.



If you trace the SQL generated by this component while entering partition details and generating partition DDL, the only two tables that are updated at all are PS_PTTBLPARTDDL and PS_PTIDXPARTDDL (a quick query against them follows the list below).  They are both keyed on RECNAME and PLATFORMID and have just one other column, a CLOB to hold the DDL.
      • The partition information disappears because there is nowhere to hold it persistently, and the component cannot extract it from the DDL.  It was being entered into a derived work record.
        • So it is not going to be much help when I want to adjust partitioning in a table that is already partitioned.  For example, over time, I might want to add new partitions, compress static partitions, or purge old ones.
• It is also clear that there is no intention to support different partitioning strategies for different indexes on the same table.  There are certainly cases where a table will have one or more locally partitioned indexes and some global indexes that may or may not be partitioned.
• Even these two tables are not fully integrated into Application Designer.  There is a throwaway line in Appendix E of the Data Management Guide - Administering Databases on Oracle: "Record and index partitioning is not migrated as part of the IDE project. If you want to migrate the partitioning metadata along with the record, you will need to…" copy it yourself, and it goes on to recommend creating a Data Migration Project in the Data Migration Workbench.
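Out of interest, you can list the records for which partition DDL has been stored.  A simple query of my own; I have only selected the key columns because I have not verified the name of the CLOB column:
SELECT 'TABLE' object_type, recname, platformid FROM ps_pttblpartddl
UNION ALL
SELECT 'INDEX', recname, platformid FROM ps_ptidxpartddl
ORDER BY 2, 1
/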
Example 2: Import Existing Partitioning
Sticking with PSWORKLIST, I have partitioned it exactly the way I want.  The partition DDL was generated by my own partitioning utility.  I have added INSTSTATUS to index B.
      CREATE TABLE sysadm.psworklist
      (busprocname VARCHAR2(30) NOT NULL

      ,descr254_mixed VARCHAR2(254) NOT NULL
      )
      TABLESPACE PTTBL
      PCTFREE 10 PCTUSED 80
      PARTITION BY RANGE(INSTSTATUS)
      (PARTITION psworklist_select_open VALUES LESS THAN ('2')
      ,PARTITION psworklist_worked_canc VALUES LESS THAN (MAXVALUE) PCTFREE 1 PCTUSED 90
      )
      ENABLE ROW MOVEMENT
      PARALLEL
      NOLOGGING
      /

      ALTER TABLE sysadm.psworklist LOGGING NOPARALLEL MONITORING
      /

      CREATE INDEX sysadm.ps0psworklist ON sysadm.psworklist
      (transactionid
      ,busprocname
      ,activityname
      ,eventname
      ,worklistname
      ,instanceid
      )
      TABLESPACE PSINDEX
      PCTFREE 10
      PARALLEL
      NOLOGGING
      /

      CREATE INDEX sysadm.psbpsworklist ON sysadm.psworklist
      (oprid
      ,inststatus
      )
      LOCAL
      (PARTITION psworklistbselect_open
      ,PARTITION psworklistbworked_canc PCTFREE 1
      )
      TABLESPACE PSINDEX
      PCTFREE 10
      PARALLEL
      NOLOGGING
      /
      ALTER INDEX sysadm.psbpsworklist LOGGING
      /
      ALTER INDEX sysadm.psbpsworklist NOPARALLEL
      /

      The DDL in the Maintain Partitioning box in Application Designer is extracted from the data dictionary using the Oracle supplied DBMS_METADATA package.  Application Designer has done this since PeopleTools 8.51 for index build scripts, but now you can see the DDL directly in the tool.
When I generate an alter table script, I still get two CREATE INDEX commands for the partitioned index.  The second one comes from the generated partition DDL and is not correct because it still doesn't have a column list.
      CREATE   INDEX PSBPSWORKLIST ON PSWORKLIST (OPRID,
      INSTSTATUS)
      PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
      STORAGE(
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSINDEX" LOCAL
      (PARTITION "PSWORKLISTBSELECT_OPEN"
      PCTFREE 10 INITRANS 2 MAXTRANS 255 LOGGING
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
      2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSINDEX" ,
      PARTITION "PSWORKLISTBWORKED_CANC"
      PCTFREE 1 INITRANS 2 MAXTRANS 255 LOGGING
      STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS
      2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSINDEX" )
      /
      ALTER INDEX PSBPSWORKLIST NOPARALLEL LOGGING
      /
      CREATE INDEX PSBPSWORKLIST ON PSWORKLIST ('') LOCAL TABLESPACE
      PTTBL
      /

Example 3: GP_RSLT_ACUM
I have now chosen to partition one of the Global Payroll result tables.  This is often the largest table in a payroll system.  I have seen more than 1 billion rows in this table at one customer.  In a Global Payroll system, I usually:
      • range partition payroll tables on EMPLID to match the streamed processing (in GP streaming means concurrently running several Cobol or Application Engine programs to process different ranges of employees).  So there is a 1:1 relationship between payroll processes and physical partitions
      • the largest result tables are sub-partitioned on CAL_RUN_ID so each payroll period is in a separate physical partition.  Later I can archive historical payroll data by partition.
      Here, I have swapped the partitioning over.  I have partitioned by CAL_RUN_ID and sub-partitioned by EMPLID.  I explain why below.


And this is the table DDL that the utility generated:
        PARTITION BY RANGE (CAL_RUN_ID) 
        SUBPARTITION BY RANGE (EMPLID)
        SUBPARTITION TEMPLATE
        (
        SUBPARTITION SUB1 VALUES LESS THAN ('K9999999999'),
        SUBPARTITION SUB2 VALUES LESS THAN ('KE999999999'),
        SUBPARTITION SUB3 VALUES LESS THAN ('KM999999999'),
        SUBPARTITION SUB4 VALUES LESS THAN ('KT999999999') ,
        SUBPARTITION PE_MAXVALUE VALUES LESS THAN (MAXVALUE)
        )
        (
        PARTITION STRM1 VALUES LESS THAN ('K9999999999') TABLESPACE GPPART1,
        PARTITION STRM2 VALUES LESS THAN ('KE999999999') TABLESPACE GPPART2,
        PARTITION STRM3 VALUES LESS THAN ('KM999999999') TABLESPACE GPPART3,
        PARTITION STRM4 VALUES LESS THAN ('KT999999999') TABLESPACE GPPART4,
        PARTITION PE_MAXVALUE VALUES LESS THAN (MAXVALUE) TABLESPACE GPPART4
        )
        PCTFREE 20 ENABLE ROW MOVEMENT

• Use of the sub-partition template clause simplifies the SQL; there is certainly a lot less of it.  However, it means you get all the sub-partitions within all partitions, and that might not be what you want.  In this demo database, both employees and calendars are prefixed by something that corresponds to legislature, so some sub-partitions will be empty.  They won't take up any physical space, due to deferred segment creation, but it would be better not to build them at all.
• I can specify a tablespace on the sub-partitions in the utility component, but there is no tablespace on the sub-partition template in the DDL.  I care more about putting different payroll periods into different tablespaces than different ranges of employees (so I can compress and purge data later), so I swapped the partition key columns and have range partitioned on CAL_RUN_ID and sub-partitioned on EMPLID.
        In Global Payroll, partitioning is required to support streamed processing.  Tables are range partitioned on EMPLID to match the stream definitions.  In UK payroll, there are 45 tables that are updated or heavily referenced by streamed payroll processes that should all have similar range partitioning on EMPLID. 
In PeopleTools 8.54, it is not possible to define a single partitioning strategy and apply it consistently to several tables.  Even if the data entered into the partition utility component were retained, I would have to enter it again for each table.
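So, for now, checking that the partition boundaries line up across the streamed tables has to be done outside PeopleTools, for example with a query like this (a sketch; the table list is illustrative rather than the full set of 45, and sub-partition boundaries would be checked against USER_TAB_SUBPARTITIONS in the same way):
SELECT table_name, partition_position, partition_name, high_value
FROM   user_tab_partitions
WHERE  table_name IN ('PS_GP_RSLT_ACUM','PS_GP_RSLT_ERN_DED','PS_GP_RSLT_PIN')
ORDER  BY partition_position, table_name
/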
Conclusion
When I heard that PeopleTools would have native support for partitioning, if only on Oracle, I was hopeful that we would get something that would bring the process of migrating and building partitioned tables in line with normal tables.  Unfortunately, I have to say that I don't think the partitioning support that I have seen so far is particularly useful.
        • There is no point typing in a lot of partition data into a utility component that doesn't retain the data.
• As with materialized views, table partitioning is something on which DBAs will have to advise, and which they will probably implement and maintain.  This component doesn't really help them do anything they can't already do with a text editor!
        • Even the minimal partition data that the utility component does retain is not migrated between environments by Application Designer when you migrate the record.
        Again, I think the problems stem from PeopleTools development trying to minimize the level of alteration to the Application Designer.  The partitioning utility component looks good because it sets out a lot of the partition attributes and metadata that you do need to consider, but there is no data structure behind that to hold it.
        I would like to see PeopleTools tables to hold partitioning metadata for tables and indexes, and for Application Designer to build DDL scripts to create and alter partitioned tables, to add partitions to existing tables, and then to migrate those definitions between environments.
        One positive that I can take from this is that Oracle has now clearly stated that it is reasonable to introduce partitioning into your PeopleSoft application without invalidating your support. The position hasn't actually changed, but now there is clarity.

        PeopleTools 8.54: Global Temporary Tables

        Thu, 2015-02-19 05:11
        This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

Database Feature Overview
Global temporary tables were introduced in Oracle 8i.  They can be used where an application temporarily needs a working storage table.  They are named:
        • Global because the content is private
        • Temporary because the definition is permanent
        Or if you prefer
        • Global because the definition is available to everyone
        • Temporary because 
          • physical instantiation of the table is temporary, in the temporary segment (so it isn't redo logged and so isn't recoverable),
          • but it does generate undo in the undo segment, and there is redo on the undo.
          • Each session gets its own private copy of the table in the temp segment.  So you cannot see what is in another session's temporary table, which can make application debugging difficult.
          • The physical instantiation of the table is removed either 
            • when the session disconnects - on commit preserve
            • or when the transaction is terminated with a commit or rollback - on commit delete
This is a very useful database feature (I have been using it in PeopleSoft applications ever since it was introduced).
        • Can be used for temporary records in Application Engines where restart is disabled.
        • Can be implemented without any application code change.
        • Only Application Designer temporary records can be built as global temporary tables.  You cannot make a SQL Table record global temporary.
        • The reduction in redo generation during intensive batch processes, such as payroll processing, can bring significant performance benefits.  There is no point logging redo information for temporary working storage tables that you do not ever need to restore.
• Shared temporary tables, such as in the GP calculation process GPPDPRUN, which is written in COBOL.  If using payroll streaming (multiple concurrent processes to process employees in parallel), then concurrent delete/update can cause read-consistency problems when using a normal table, but with global temporary tables each session has its own physical table, so there is never any need to apply read-consistency recovery when reading a global temporary table.
• Global temporary tables are also an effective way to resolve table high-water-mark issues that can occur on non-shared temporary tables in on-line Application Engine, where the PeopleTools %TruncateTable macro still resolves to DELETE.  You never get high-water-mark problems with global temporary tables because they are physically created afresh for each new session.
        • There is often a reduction in database size because the tables are not retained after the session terminates.  Although there will be an increased demand for temporary tablespace while the global temporary tables are in use.
• I have occasionally seen performance problems when PeopleSoft systems very frequently truncate tables and experience contention on the RO enqueue (see the query below).  This problem does not occur with global temporary tables.
Global temporary tables are not a separately licensed database feature and are also available in Standard Edition.
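If you suspect the RO enqueue problem on your own system, the wait event is easy to check for.  A quick sketch (times are in centiseconds):
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event LIKE 'enq: RO%'
/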
Global Temporary Tables in PeopleTools
This is the create table DDL generated by Application Designer:
        DROP TABLE PS_ST_RM2_TAO
        /
        CREATE GLOBAL TEMPORARY TABLE PS_ST_RM2_TAO (PROCESS_INSTANCE
        DECIMAL(10) NOT NULL,
        EMPLID VARCHAR2(11) NOT NULL,
        GRANT_NBR VARCHAR2(10) NOT NULL,
        VEST_DT DATE,
        SHARES_REMAINDER DECIMAL(21, 9) NOT NULL,
        DEC_PLACES SMALLINT NOT NULL) ON COMMIT PRESERVE ROWS TABLESPACE PSGTT01
        /
CREATE UNIQUE INDEX PS_ST_RM2_TAO ON PS_ST_RM2_TAO (PROCESS_INSTANCE,
        EMPLID,
        GRANT_NBR,
        VEST_DT)
        /
The first thing to point out is the specification of a tablespace.  The ability to specify a tablespace for a global temporary table is a new feature in Oracle 11g.  It is not mandatory in Oracle, but it is coded into the PeopleSoft DDL model, so you must specify a temporary tablespace on the record otherwise it will fail to build.  A new temporary tablespace, PSGTT01, is delivered by Oracle when you upgrade to 8.54, or you could just use an existing temporary tablespace.

This new feature has been implemented using two new DDL models (statement types 6 and 7).
        SELECT * FROM psddlmodel WHERE statement_type IN(6,7);

        STATEMENT_TYPE PLATFORMID SIZING_SET PARMCOUNT
        -------------- ---------- ---------- ----------
        MODEL_STATEMENT
        ------------------------------------------------------------------------
        6 2 0 0
        CREATE GLOBAL TEMPORARY TABLE [TBNAME] ([TBCOLLIST]) ON COMMIT PRESERVE
        ROWS TABLESPACE [TBSPCNAME];

        7 2 0 0
        CREATE [UNIQUE] INDEX [IDXNAME] ON [TBNAME] ([IDXCOLLIST]);
• All tables are created ON COMMIT PRESERVE ROWS.  However, on-line instances could be ON COMMIT DELETE ROWS (a theory subject to testing), and that could apply to ALL Application Engine programs, even those with restart enabled, because commits are suppressed in on-line Application Engine; instead, the commit is issued by the component.
If you try adding a global temporary table to an Application Engine program that is not restart-disabled, you quite rightly get the following error message.  The table will be added, but the program will not execute correctly.

        "Global Temporary Tables allocated to this restart enabled AE program will not retain any data when program exits."
        Problems:
• There has always been a 13-character limit on temporary record names, because there used to be a maximum of 99 non-shared instances and 2 characters were reserved for the instance number.  If you try to set the number of instances to greater than 99 in an Application Engine program (I tried GP_GL_PREP), you now get the warning message:
        "Do not support more than 99 instances when select the Temp Table which are not attributed as GTT"
• There is now a maximum length of 11 characters for the name of a record built as a global temporary table, because from PeopleTools 8.54 there can be up to 9999 non-shared instances of the record.  The restriction applies irrespective of how many instances you are actually using.
          • I have yet to encounter a system where I need more than 99 instances of a temporary table.  I can just about imagine needing 100 non-shared instances, but not 1000.  
  • This means that I cannot retrofit global temporary tables into existing Application Engine processes without changing record names.  There are existing delivered Application Engine programs with 12 and 13 character temporary record names that cannot now be switched to use global temporary tables managed by Application Designer.  I don't need to support more instances just because the table is global temporary.
            • For example, GP_GL_SEGTMP in GP_GL_PREP is a candidate to be made global temporary because that is a streamed Global Payroll process.  When I tried, I got a record name too long error!
        "Record Name is too long. (47,67)"
            • Really, if the table is global temporary you don't need lots of instances.  Everyone could use the shared instance, because Oracle gives each session a private physical copy of the table anyway. 
              • You could do this by removing the record name from the list of temporary records in the application engine, then the %Table() macro will generate the table name without an instance number.
      • There would be a question of how to handle optimizer statistics.  Optimizer statistics collected on a global temporary table in one session could end up being used in another, because there is only one place to store them in the data dictionary.
      • The answer is not to collect statistics at all and to rely on Optimizer Dynamic Sampling (see the sketch after this list).  There is a further enhancement in Oracle 12c where the dynamically sampled stats from different sessions are kept separate.
  • When Application Designer builds an alter script, it can't tell whether the existing table is global temporary or a normal table, so it doesn't rebuild the table if you change the record from one to the other.
  • The only real runtime downside of global temporary tables is that if you want to debug a process, the data is not left behind after the process terminates.  Even while the process is running, you cannot query the contents of a global temporary table in use by another session from your own session.
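Returning to the statistics point above, the simplest approach is to make sure a global temporary table has no statistics and that nobody can gather any, so that the optimizer dynamically samples it instead.  A sketch, using the ST_RM2_TAO record built earlier and assuming the usual SYSADM schema:
BEGIN
  -- Remove any statistics previously gathered on the GTT, then lock the (now absent)
  -- statistics so that maintenance jobs cannot gather them again.
  DBMS_STATS.DELETE_TABLE_STATS(ownname=>'SYSADM', tabname=>'PS_ST_RM2_TAO');
  DBMS_STATS.LOCK_TABLE_STATS(ownname=>'SYSADM', tabname=>'PS_ST_RM2_TAO');
END;
/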
My Recommendation
Support for global temporary tables is welcome and long overdue.  It can bring significant runtime performance and system benefits due to the reduction in redo generation and read-consistency work.  It can be implemented without any code change.

          We just need to sort out the 11 character record name length restriction.

          PeopleTools 8.54: Materialized Views

          Wed, 2015-02-18 15:30
          This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
Materialized Views in the Database
Snapshots were introduced in Oracle 7.  They were a way of building and maintaining a physical table to hold the results of a SQL query.  We are well used to a view being the results of a query, but here the results are materialised into a physical table.  Hence the feature was renamed materialized views in Oracle 8i.

          Today materialized views are one among many replication technologies.  They are available in standard edition Oracle, and there is no licensing implication in their use.

          Materialized views can generally be put into two categories
          • A simple, single-table materialized view. 
            • Often used to replicate data across a database link from another database.  Or to produce a subset of the data.
            • Can be refreshed incrementally using a PL/SQL package supplied by Oracle
              •  A materialized view log is created on the source table to record all the changes made to the source table.  It holds the primary key of the changed row, or the rowid (physical address of the row).  It can optionally hold additional columns.  It is populated by a database trigger on the source table (since Oracle 8i that trigger is hidden).
          • Multi-table materialized view
            • Usually done within a single database rather than across a database link.
            • Can only be refreshed by being completely rebuilt. 
    • If the same query that is used in a materialized view is submitted, Oracle can rewrite the query to use the materialized view instead.  Query rewrite only occurs subject to configuration and certain prerequisites being met.
          Materialized Views can be populated immediately when they are built, or later on demand.  They can be refreshed on demand, on a regular schedule by a database job, or immediately when an update to a source table is committed.  Materialized views can be put into Refresh Groups.  All the materialized views in a refresh group are refreshed in the same database transaction so the data in the materialized views is consistent.
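Outside PeopleTools, an on-demand refresh is just a call to the Oracle-supplied package.  A sketch, using the PS_DMK materialized view created in the examples below:
BEGIN
  -- method => 'F' requests a fast (incremental) refresh; use 'C' for a complete refresh.
  DBMS_MVIEW.REFRESH(list => 'PS_DMK', method => 'F');
END;
/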

Materialized views can be updatable and used for bidirectional replication.  I am not going to talk about that here.

          When you introduce materialized views into an application you need to consider what you are trying to achieve, and make design decisions accordingly.
Materialized Views in PeopleTools 8.54
Using this new feature in PeopleSoft is deceptively easy, but quite a lot is going on behind the scenes.
          PeopleSoft Documentation (the term PeopleBooks seems to have been replaced by PeopleSoft Online Help): Data Management, Using Materialized Views provides an overview.

          There are two new PeopleTools tables:
• PSPTMATVWDEFN - additional definition fields for the materialized view: build, refresh, staleness, stats.  It doesn't contain the query; that is in PSSQLTEXTDEFN, as it is for all other views.
• PSPTMATVWDEP - lists the tables upon which the materialized view depends.  PeopleSoft seems to work this out for itself by parsing the SQL query.
          I am going to demonstrate some aspects of the feature by running through some examples.
Example 1: Replicate part of PS_JOB across a database link
In this example I am going to use a materialized view to replicate a table from another Oracle database across a database link.
If I select SQL View, the Materialized View check box appears; if I check it, the Materialized View Options appear.

This is the build script generated by Application Designer:
          DROP VIEW PS_DMK
          /
          CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
          DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
          DEPTID FROM PS_JOB@HROTHER
          /
          UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
          PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
          ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
          = 'DMK'
          /
          DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
          /
          DROP VIEW PS_DMK
          /
          CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
          SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
          ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
          EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
          /
          UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 0,
          PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
          ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
          = 'DMK'
          /
          However, if the materialized view already exists, the script will drop it, recreate and drop the view, and then recreate the materialized view.
          DROP MATERIALIZED VIEW PS_DMK
          /
          UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
          PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
          ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
          = 'DMK'
          /
          CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
          DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
          DEPTID FROM PS_JOB@HROTHER
          /
          UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
          PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
          ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
          = 'DMK'
          /
          DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
          /
          DROP VIEW PS_DMK
          /
          CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
          SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH FAST
          ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
          EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
          /
          UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
          (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
          STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
          = 'DMK'
          /
• The Application Designer build script creates the materialized view using primary-key-based replication.  There is no WITH PRIMARY KEY clause specified because it is the default.  There appears to be no way to get Application Designer to generate a WITH ROWID clause, so it is not possible to replicate a single table without a unique key.  You might question whether that is useful, but it is possible in Oracle.
• If there is no primary key on the source table, you will need to add one.  If the source is on another system, or on a PeopleSoft system at an earlier PeopleTools release, you will need to do this manually.  Otherwise you will get this error message:
          ERROR at line 4:
          ORA-12014: table 'PS_JOB' does not contain a primary key constraint
• If you specify any key columns on the materialized view, it does not result in the usual indexes that you get on tables.  Also, it is not possible to add additional user-specified indexes to the materialized view - the option is greyed out.  This is rather disappointing, because you might want to do exactly that so you can query the materialized view in different ways to the underlying table (see the sketch after this list).
            • You will get a unique index on a materialized view that is replicated by primary key, because the primary key will be inherited from the underlying table.
          • Nor is it possible to specify partitioning in Application Designer on a materialized view.
          • You can specify storage options on the materialized view via Record DDL, but the storage options do not appear in the CREATE MATERIALIZED VIEW statement in the build script.  This is rather disappointing because you don't need to provide free space for updates in a materialized view which is completely refreshed each time, but you might if you do incremental update.
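As a workaround of my own (Application Designer will know nothing about it, so a subsequent build could remove it), you can create additional indexes directly on the materialized view in the database.  The index name and column list here are purely illustrative:
CREATE INDEX PSADMK ON PS_DMK (SETID_DEPT, DEPTID) TABLESPACE PSINDEX
/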
Example 2: Replicate part of PS_JOB locally
In this example, I am again only replicating 6 named columns into my materialized view.
          DROP VIEW PS_DMK
          /
          CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
          DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
          DEPTID FROM PS_JOB
          /
          UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
          PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
          ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
          = 'DMK'
          /
          ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
          /
          DROP MATERIALIZED VIEW LOG ON PS_JOB
          /
          DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
          'PS_JOB'
          /
          ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
          , EMPLID, EMPL_RCD)
          /
          CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
          KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
          IMMEDIATE
          /
          INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
          'PS_JOB')
          /
          DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
          /
          DROP VIEW PS_DMK
          /
          UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
          PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
          ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
          = 'DMK'
          /
I don't know why it rebuilds the record as a normal view and drops the primary key constraint every time, but it does.  You might not want to do this every time for a large materialized view that takes time to build.
If the materialized view log has already been built, the next time you generate the build script it creates and drops the view and then builds the materialized view.
          CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
          DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
          DEPTID FROM PS_JOB
          /
          DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
          'PS_JOB'
          /
          UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
          PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
          ,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
          = 'DMK'
          /
          ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
          /
          DROP MATERIALIZED VIEW LOG ON PS_JOB
          /
          DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
          'PS_JOB'
          /
          ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
          , EMPLID, EMPL_RCD)
          /
          CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
          KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
          IMMEDIATE
          /
          INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
          'PS_JOB')
          /
          DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
          /
          DROP VIEW PS_DMK
          /
          CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
          SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
          ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
          EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB
          /
          UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
          (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
          STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
          = 'DMK'
          /
          • It is rather odd to see a build script update a PeopleTools table. Application Designer also updates the PSPTMATVWDEFN table itself every time it generates the build script.  Note that the script doesn't issue an explicit commit, so if you execute the build script in SQL*Plus remember to commit to release the row level lock on PSPTMATVWDEFN.  
          • Application Designer flip-flops between these two build scripts that will repeatedly drop and create the materialized view and materialized view log.  Unless you are very careful you might not know whether you have the objects in the desired state.
• The materialized view and materialized view log are always created in tablespace PSMATVW.  This is a new tablespace delivered in the standard tablespace script.  It is not possible to set the tablespace to something else, as you can for a normal table, because the record is a view.  This is unfortunate, because I might not want all my materialized views in the same tablespace.
• Even though the materialized view is replicated by primary key, the materialized view log also contains the rowid and the supplementary columns.  This is overkill.  The materialized view log as built by Application Designer contains every length-bounded column in the source table.  This can significantly increase the overhead of maintaining the materialized view log as other processes update the source table.
          SQL> desc mlog$_ps_job
          Name Null? Type
          ----------------------------- -------- -------------------
          EFFDT DATE
          EFFSEQ NUMBER
          EMPLID VARCHAR2(11 CHAR)
          EMPL_RCD NUMBER
          DEPTID VARCHAR2(10 CHAR)
          SETID_DEPT VARCHAR2(5 CHAR)
          M_ROW$$ VARCHAR2(255 CHAR)
          SEQUENCE$$ NUMBER
          SNAPTIME$$ DATE
          DMLTYPE$$ VARCHAR2(1 CHAR)
          OLD_NEW$$ VARCHAR2(1 CHAR)
          CHANGE_VECTOR$$ RAW(255)
          XID$$ NUMBER
            • If I just created the materialized view log as follows
          CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW 
          WITH PRIMARY KEY
          --, ROWID, SEQUENCE(DEPTID, SETID_DEPT)
          INCLUDING NEW VALUES PURGE IMMEDIATE
          /
            • then the materialized view log contains fewer columns
          Name                          Null?    Type
          ----------------------------- -------- --------------------
          EFFDT DATE
          EFFSEQ NUMBER
          EMPLID VARCHAR2(11 CHAR)
          EMPL_RCD NUMBER
          SNAPTIME$$ DATE
          DMLTYPE$$ VARCHAR2(1 CHAR)
          OLD_NEW$$ VARCHAR2(1 CHAR)
          CHANGE_VECTOR$$ RAW(255)
          XID$$ NUMBER
          • The materialized view inherits the primary key from the source table because it is a single table materialized view replicated using the primary key.  Therefore there is also a unique index on this materialised view.
          SELECT constraint_name, constraint_type, table_name, index_name
          FROM user_constraints
          WHERE table_name = 'PS_DMK'
          AND constraint_type != 'C'
          /

          CONSTRAINT_NAME C TABLE_NAME INDEX_NAME
          -------------------- - ---------- ----------
          PS_JOB_PK1 P PS_DMK PS_JOB_PK1

          SELECT index_name, index_type, uniqueness
          FROM user_indexes
          WHERE table_name = 'PS_DMK'
          /

          INDEX_NAME INDEX_TYPE UNIQUENES
          ---------- ---------- ---------
          PS_JOB_PK1 NORMAL UNIQUE

          SELECT index_name, column_position, column_name, descend
          FROM user_ind_columns
          WHERE table_name = 'PS_DMK'
          /

          INDEX_NAME COLUMN_POSITION COLUMN_NAME DESC
          ---------- --------------- -------------------- ----
          PS_JOB_PK1 1 EFFDT ASC
          PS_JOB_PK1 2 EFFSEQ ASC
          PS_JOB_PK1 3 EMPLID ASC
          PS_JOB_PK1 4 EMPL_RCD ASC




• The build script clears out MV_CAPABILITIES_TABLE when it drops the materialized view.  This table is used to hold the output from DBMS_MVIEW.EXPLAIN_MVIEW (see the Oracle Database Data Warehousing Guide), which Application Designer executes when the materialized view record is saved.
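You can run the same analysis yourself on any materialized view or candidate query.  A sketch (if MV_CAPABILITIES_TABLE does not exist in your schema, it can be created with $ORACLE_HOME/rdbms/admin/utlxmv.sql):
EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW('PS_DMK')

SELECT capability_name, possible, msgtxt
FROM   mv_capabilities_table
WHERE  mvname = 'PS_DMK'
ORDER  BY seq
/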

Example 3: DMK_DPT_SEC_MVW
DMK_DPT_SEC_MVW is a materialised view that is cloned from the security view DEPT_SEC_SRCH.  This view references various tables (I have edited out column lists and predicates for readability):

          SELECT …
          FROM PS_DEPT_TBL DEPT
          , PSOPRDEFN OPR
          WHERE EXISTS (
          SELECT 'X'
          FROM PS_SJT_DEPT SEC
          , PS_SJT_CLASS_ALL CLS
          , PS_SJT_OPR_CLS SOC
          …)
          OR EXISTS (
          SELECT 'X'
          FROM PS_SJT_DEPT SEC
          , PS_SJT_CLASS_ALL CLS
          , PS_SJT_OPR_CLS SOC
          …)
          OR EXISTS (
          SELECT 'X'
          FROM PS_SJT_DEPT SEC
          , PS_SJT_CLASS_ALL CLS
          , PS_SJT_OPR_CLS SOC
          …)
But only 4 tables appear in PSPTMATVWDEP.  PS_SJT_DEPT was not added.
          SELECT * FROM psptmatvwdep WHERE recname = 'DMK_DPT_SEC_MVW'
          /

          RECNAME PTMAT_BASETBL
          --------------- ------------------
          DMK_DPT_SEC_MVW PSOPRDEFN
          DMK_DPT_SEC_MVW PS_DEPT_TBL
          DMK_DPT_SEC_MVW PS_SJT_CLASS_ALL
          DMK_DPT_SEC_MVW PS_SJT_OPR_CLS
I think this is because it tried and failed to add a primary key constraint and a materialized view log to PS_SJT_DEPT, because it has a 'duplicate key' defined in Application Designer.  The following errors are found in the build log, even if the build script is not executed.
          ALTER TABLE PS_SJT_DEPT ADD CONSTRAINT PS_SJT_DEPT_PK PRIMARY KEY 
          (SCRTY_KEY1, SCRTY_KEY2, SCRTY_KEY3, SCRTY_TYPE_CD, SETID)
          Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
          - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated

          CREATE MATERIALIZED VIEW LOG ON PS_SJT_DEPT TABLESPACE PSMATVW
          WITH PRIMARY KEY, ROWID, SEQUENCE (DEPTID, EFFDT_NOKEY)
          INCLUDING NEW VALUES PURGE IMMEDIATE
          Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
          - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated
Application Designer worked out that PS_SJT_DEPT was referenced in the materialized view query, but it didn't check that the table does not have a unique key defined in PeopleTools.

We didn't get as far as creating the materialized view.  However, Application Designer passed the CREATE MATERIALIZED VIEW command to the EXPLAIN_MVIEW function in order to populate MV_CAPABILITIES_TABLE:
          EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW (q'[CREATE MATERIALIZED VIEW PS_DMK_DPT_SEC_MVW (SETID, OPRID, DEPTID, DESCR
          , DESCRSHORT, SETID_LOCATION, LOCATION, MANAGER_ID, COMPANY, USE_BUDGETS, USE_ENCUMBRANCES)
          TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST ON DEMAND AS
          SELECT DEPT.SETID, OPR.OPRID, DEPT.DEPTID , DEPT.DESCR , DEPT.DESCRSHORT , DEPT.SETID_LOCATION
          , DEPT.LOCATION , DEPT.MANAGER_ID , DEPT.COMPANY , DEPT.USE_BUDGETS , DEPT.USE_ENCUMBRANCES
          FROM PS_DEPT_TBL DEPT , PSOPRDEFN OPR
          WHERE EXISTS (
          SELECT 'X' FROM PS_SJT_DEPT SEC , PS_SJT_CLASS_ALL CLS , PS_SJT_OPR_CLS SOC
          WHERE SEC.SETID = DEPT.SETID AND SEC.DEPTID = DEPT.DEPTID AND SEC.EFFDT_NOKEY = DEPT.EFFDT
          AND CLS.SCRTY_SET_CD = 'PPLJOB' AND CLS.SCRTY_TYPE_CD = '001' AND CLS.TREE = 'Y'
          AND CLS.SCRTY_KEY1 = SEC.SCRTY_KEY1 AND CLS.SCRTY_KEY2 = SEC.SCRTY_KEY2
          AND CLS.SCRTY_KEY3 = SEC.SCRTY_KEY3 AND SOC.OPRID
Example 4: DMK_JOB_CUR_MVW is a materialized view cloned from JOB_CURR_ALL_VW
In this case I will try to create a materialized view on a complex query, but this time the underlying table has a unique key.  When I try to build the materialized view, I get the following entries in the error log.  These warnings were obtained from the entries in MV_CAPABILITIES_TABLE, which was populated by an attempt to describe the query.
          SQL Build process began on 16/02/2015 at 21:05:30 for database HR92U011. 
          Error: Cannot create Materialized View on record DMK_JOB_CUR_MVW.
          Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_COMPLETE| Y | | |
          Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST| N | | |
          Warning: | PS_DMK_JOB_CUR_MVW | REWRITE| N | | |
          Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | aggregate function in mv | |
          Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | multiple instances of the same table or view | |
          Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ONETAB_DML| N | see the reason why REFRESH_FAST_AFTER_INSERT is disabled | |
          Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ANY_DML| N | see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled | |
          Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
          Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
          Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
          Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | materialized view cannot support any type of query rewrite | |
          Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
          Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | subquery present in the WHERE clause | |
          Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | materialized view cannot support any type of query rewrite | |
          Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | query rewrite is disabled on the materialized view | |

          SQL Build process ended on 16/02/2015 at 21:05:30.
          1 records processed, 1 errors, 15 warnings.
          SQL Build script for all processes written to file C:\Temp\PSBUILD.SQL.
          SQL executed online.
          SQL Build log file written to C:\Temp\PSBUILD.LOG.
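The warnings above are simply the rows that EXPLAIN_MVIEW wrote to MV_CAPABILITIES_TABLE.  If you want to look at them directly after running EXPLAIN_MVIEW yourself, a minimal sketch (assuming the table has already been created in your schema with utlxmv.sql):
SELECT seq, capability_name, possible, related_text, msgtxt
FROM mv_capabilities_table
ORDER BY seq
/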
          • So, Application Designer does try to prevent you from creating materialized views that Oracle won't manage, but the messages back are a little obscure.
          • If I change the refresh mode to Complete, Application Designer does not create materialized view logs. 
          CREATE MATERIALIZED VIEW PS_DMK_JOB_CUR_MVW (EMPLID, EMPL_RCD,
          ACTION_DT, BUSINESS_UNIT, EMPL_STATUS, HR_STATUS, DEPTID, JOBCODE,
          LOCATION, POSITION_NBR, ACTION, ACTION_REASON, COMP_FREQUENCY,
          COMPRATE, CURRENCY_CD, SAL_ADMIN_PLAN, GRADE, COMPANY, PAY_SYSTEM_FLG
          , PAYGROUP, REG_TEMP, FULL_PART_TIME, SETID_DEPT, SETID_JOBCODE,
          SETID_LOCATION, PER_ORG) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH
          COMPLETE ON DEMAND AS SELECT A.EMPLID ,A.EMPL_RCD ,A.EFFDT
          ,A.BUSINESS_UNIT ,A.EMPL_STATUS ,A.HR_STATUS ,A.DEPTID ,A.JOBCODE
          ,A.LOCATION ,A.POSITION_NBR ,A.ACTION ,A.ACTION_REASON
          ,A.COMP_FREQUENCY ,A.COMPRATE ,A.CURRENCY_CD ,A.SAL_ADMIN_PLAN
          ,A.GRADE ,A.COMPANY ,A.PAY_SYSTEM_FLG ,A.PAYGROUP ,A.REG_TEMP
          ,A.FULL_PART_TIME ,A.SETID_DEPT ,A.SETID_JOBCODE ,A.SETID_LOCATION
          ,A.PER_ORG FROM PS_JOB A WHERE A.EFFDT = ( SELECT MAX (C.EFFDT) FROM
          PS_JOB C WHERE C.EMPLID = A.EMPLID AND C.EMPL_RCD = A.EMPL_RCD AND
          ((C.EFFDT <= TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD')) OR
          (A.EFFDT > TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') AND
          TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') < ( SELECT
          MIN(J2.EFFDT) FROM PS_JOB J2 WHERE J2.EMPLID = A.EMPLID AND
          J2.EMPL_RCD = A.EMPL_RCD) ) )) AND A.EFFSEQ = ( SELECT MAX(D.EFFSEQ)
          FROM PS_JOB D WHERE D.EMPLID = A.EMPLID AND D.EMPL_RCD = A.EMPL_RCD
          AND D.EFFDT = A.EFFDT)
          /
          UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
          (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
          STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK_JOB_CUR_MVW')
          WHERE RECNAME = 'DMK_JOB_CUR_MVW'
          /
• Also, it doesn't create a primary key constraint on either the underlying table or the materialized view, so this materialized view doesn't have any indexes (a sketch of a manual workaround follows).
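Nothing stops you adding an index to the materialized view container table yourself outside PeopleTools, although Application Designer will know nothing about it and will lose it if it decides to drop and recreate the view.  A minimal sketch, assuming (EMPLID, EMPL_RCD) is unique in this particular view and that PSINDEX is the index tablespace:
CREATE UNIQUE INDEX ps_dmk_job_cur_mvw ON ps_dmk_job_cur_mvw (emplid, empl_rcd) TABLESPACE psindex
/
A primary key constraint could then be added using this index, in the same way as was attempted for PS_SJT_DEPT above.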
Query ReWrite
One common use of complex materialized views is to allow the optimizer to rewrite a query to use the materialized view when it sees the same query as was used to create the materialized view.  Optionally, the optimizer will also check that the view is up to date.  I have added the ENABLE QUERY REWRITE clause.
          DROP MATERIALIZED VIEW PS_DMK_PER_DEP_MVW
          /
          CREATE MATERIALIZED VIEW PS_DMK_PER_DEP_MVW (SETID_DEPT, DEPTID, EFFDT
          , DESCR) TABLESPACE PSMATVW
          BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND
          enable query rewrite
          AS SELECT A.SETID ,A.DEPTID ,A.EFFDT ,A.DESCR FROM PS_DEPT_TBL A
          WHERE A.EFFDT= ( SELECT MAX(B.EFFDT) FROM PS_DEPT_TBL B WHERE A.SETID
          =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
          ,'YYYY-MM-DD'),'YYYY-MM-DD'))
          /
However, expressions - in this case one generated to determine the current effective-dated department - are not supported for query rewrite.
          =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
          *
          ERROR at line 7:
          ORA-30353: expression not supported for query rewrite
          This could make it very difficult to use the feature in PeopleSoft. If you want to use the materialized view you are likely to have to reference it explicitly in the code.
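If it is not obvious why a particular query is not being rewritten, DBMS_MVIEW.EXPLAIN_REWRITE will report the reasons.  A minimal sketch, assuming the REWRITE_TABLE has been created in your schema with utlxrw.sql; the statement ID 'dmk1' is arbitrary:
BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => q'[SELECT A.SETID ,A.DEPTID ,A.EFFDT ,A.DESCR FROM PS_DEPT_TBL A
                       WHERE A.EFFDT= ( SELECT MAX(B.EFFDT) FROM PS_DEPT_TBL B WHERE A.SETID=B.SETID
                       AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD'))]',
    mv           => 'PS_DMK_PER_DEP_MVW',
    statement_id => 'dmk1');
END;
/
SELECT message FROM rewrite_table WHERE statement_id = 'dmk1' ORDER BY sequence
/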
Refreshing Materialized Views
There is a new component to manage the refresh frequency of materialized views.
          PeopleTools-> Utilities-> Administration -> Oracle Materialized Views -> Materialized View Maintenance
          This component will schedule an Application Engine process called PTMATREFVW.
What this Application Engine actually does is specify the refresh frequency for the materialized view:
          &AlterSQL = "alter materialized view " | &mview_name 
          | " REFRESH NEXT SYSDATE + (" | &MatRecords.PTMAT_REFINT.Value | "/86400)";
          So the command issued is just
alter materialized view PS_DMK REFRESH NEXT SYSDATE + (4242/86400)
          Effectively, for each materialized view, this creates a refresh group and a database job that refreshes it.
          SELECT rname, next_date, interval FROM user_refresh
          /

          RNAME NEXT_DATE INTERVAL
          ---------- --------- -------------------------
          PS_DMK 24-JAN-15 SYSDATE + (4242/86400)

          SELECT name, type, rname, job, next_date, interval FROM user_refresh_children
          /

          NAME TYPE RNAME JOB NEXT_DATE INTERVAL
          ---------- ---------- ---------- ---------- --------- -------------------------
          PS_DMK SNAPSHOT PS_DMK 21 24-JAN-15 SYSDATE + (4242/86400)

          SELECT job, next_date, next_Sec, interval, what FROM dba_jobs
          /

          JOB NEXT_DATE NEXT_SEC INTERVAL
          ---------- --------- -------- -------------------------
          WHAT
          --------------------------------------------------
          21 24-JAN-15 11:48:52 SYSDATE + (4242/86400)
          dbms_refresh.refresh('"SYSADM"."PS_DMK"');
• But I might want to group related materialized views together into a single refresh group (a sketch follows this list).
          • I might want to refresh the job at a particular time, which can be done with a more sophisticated function in the interval. 
• I might prefer to refresh a materialized view at a particular point in a batch schedule.  So I might prefer to code that into an Application Engine program, or have the Application Engine submit a job that fires only once and does not resubmit (depending on whether I want to wait for the refresh).
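Both of those things can be done directly with the DBMS_REFRESH package.  A minimal sketch, assuming two related materialized views and a hypothetical group name; the interval shown refreshes the group daily at 02:00:
BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'SYSADM.PS_DMK_GROUP',
    list      => 'SYSADM.PS_DMK_PER_DEP_MVW, SYSADM.PS_DMK_JOB_CUR_MVW',
    next_date => TRUNC(SYSDATE)+1+2/24,
    interval  => 'TRUNC(SYSDATE)+1+2/24',
    lax       => TRUE); /*allow the views to be moved out of any group they are already in*/
END;
/
A refresh at a specific point in a batch schedule is then simply a call to DBMS_REFRESH.REFRESH('SYSADM.PS_DMK_GROUP').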
          My Recommendations
          • Good Things
            • The removal of the descending indexes and the creation of the primary key is a good thing
• It is useful to be able to define the materialized view query in PeopleTools along with the rest of the application.
            • The use of EXPLAIN_MVIEW to test the validity of the materialized view and to populate MV_CAPABILITIES_TABLE is clever, but the messages are obscure and should be better documented.
          • Bad Things
• No checking that a source table in the local database has a unique key that can support a primary key constraint.
• I can't build additional indexes on a materialized view in Application Designer, although on single-table materialized views the primary key will be inherited automatically.  Anything else has to be handled manually outside PeopleTools.
            • There is no support for rowid based materialized views.
• The materialized view logs created by Application Designer are totally overblown - there is far too much data being logged.  They should be either primary key or rowid (if primary key is not possible), but not both.  I cannot see the point of the additional columns.  I think they are a waste of resource.
            • The flip-flopping of the build script is confusing; you will never be completely sure what you have in the database. 
• The script drops the materialized view unnecessarily, which will also drop any indexes that you have created manually!
          • I think some of the problems stem from trying to graft materialized views onto the existing view record type, instead of creating a new record type and building it into Application Designer properly and giving it the attributes of both a table and a view.
          • There is not enough control over when a materialized view is refreshed.  Just a time interval is not good enough.  In most systems, you need better control.
          • It is clearly going to be difficult getting database query rewrite to work with complex materialized views in PeopleSoft, especially if effective-date logic is required.  However, that is a reflection on the application code rather than the way support for materialized views has been implemented.
           When you design materialized views into your application you need to make careful choices about
          • Refresh method
            • Complete - pretty much mandatory for multi-table views, but there are some exceptions to this rule described in the Oracle database documentation.
• Fast - only for single-table materialized views - rarely used within a single database - more commonly used for moving data between databases, in which case you need database links; the materialized view log goes onto the source database and the materialized view is created on the target (a sketch of this pattern follows this list).
          • Refresh mode
• On commit - this is potentially dangerous because it could happen too often and there won't be a saving.  A possible exception is when the underlying table is only ever updated by a batch process.
            • On demand - manually issue refresh at a specific point in a batch
            • On schedule by a database job.
          • Build options
            • Immediate - the materialized view is populated when it is created
• Deferred - the materialized view is not populated when it is created, and will have to be completely refreshed at some point in the future.
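The cross-database pattern mentioned under 'Fast' above typically looks something like this.  A minimal sketch, assuming a database link called SRCDB pointing at the source database, a primary key on the source table, and an illustrative name for the local copy:
-- on the source database: capture changes by primary key
CREATE MATERIALIZED VIEW LOG ON ps_dept_tbl WITH PRIMARY KEY
/
-- on the target database: a fast-refreshable copy read over the link
CREATE MATERIALIZED VIEW ps_dept_tbl_rep
TABLESPACE psmatvw
BUILD IMMEDIATE REFRESH FAST ON DEMAND
AS SELECT * FROM ps_dept_tbl@srcdb
/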
Choice of refresh method and mode is often a functional question: how stale can the materialized view be allowed to be, especially if you can't get query rewrite to work?  Thoughtful design is required.  I have seen systems severely degraded by the query and redo overhead of excessive refresh.
PeopleTools support of materialized views certainly has some good things, but as it stands it is of limited use because it still leaves you with a lot of manual administration to do.

          In most systems it is the DBAs who will have to manage the materialized views.  They are generally resistant to using PeopleSoft specific tools to do database administration.  That is going to be even more challenging when only a part of the job can be done in PeopleSoft.

          PeopleTools 8.54: Descending Indexes are not supported

          Mon, 2015-02-16 07:57
          This is the first in a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

          "With PeopleTools 8.54, PeopleTools will no longer support descending indexes on the Oracle database platform" - PeopleTools 8.54 Release Notes

          They have gone again!  Forgive the trip down memory lane, but I think it is worth reviewing their history.
• Prior to PeopleTools 8.14, if you specified a key or alternate search field in a record as descending, then wherever it appeared in the automatically generated key and alternate search key indexes, that column would be descending.  PeopleTools would add the DESC keyword after the column in the CREATE INDEX DDL.  Similarly, columns can be specified as descending in user indexes (that is to say, ones created by the developer with index IDs A through Z).
          • In PeopleTools 8.14 to 8.47, descending indexes were not built by Application Designer because of a problem with the descending key indexes in some versions of Oracle 8i.  
• PeopleSoft had previously recommended setting an initialisation parameter on 8i to prevent descending indexes from being created even if the DESC keyword was specified:
          _IGNORE_DESC_IN_INDEX=TRUE
          • From PeopleTools 8.48 the descending keyword came back because this was the first version of PeopleTools that was only certified from Oracle 9i, in which the descending index bug never occurred. (see blog posting 'Descending Indexes are Back!' October 2007).
• In PeopleTools 8.54, there are again no descending indexes because the descending keyword has been omitted from the column list in the CREATE INDEX DDL.  You can still specify descending keys in Application Designer because that controls the order in which rows are queried into scrolls in the PIA.  You can also still specify descending order on user indexes, but it has no effect upon either the application or the index DDL.
          I haven’t found any documentation that explains why this change has been made.  This time there is no suggestion of a database bug.  However, I think that there are a good database design reasons behind it.

Normally, creation of a primary key automatically creates a unique index to police the constraint.  It is also possible to create a primary key constraint using a pre-existing index.  The index does not have to be unique, but it may as well be.  However, there are some limitations.
• You cannot create a primary key on nullable columns - that is a fundamental part of the relational model.  This is rarely a problem in PeopleSoft, where only dates that are not marked 'required' in Application Designer are created nullable in the database.
            • You can create a unique index on nullable columns, which is probably why PeopleSoft has always used unique indexes.  
          • You cannot use a descending index in a primary key constraint because it is implemented as a function-based index.
          CREATE TABLE t (a NUMBER NOT NULL)
          /
          CREATE UNIQUE INDEX t1 ON t(A DESC)
          /
          ALTER TABLE t ADD PRIMARY KEY (a) USING INDEX t1
          /
          ORA-14196: Specified index cannot be used to enforce the constraint.

• We can see that the descending key column is actually implemented as a function of the column rather than the column itself, and so cannot be used in a primary key.
          SELECT index_name, index_type, uniqueness FROM user_indexes WHERE table_name = 'T'
          /

          INDEX_NAME INDEX_TYPE UNIQUENES
          ---------- --------------------------- ---------
          T1 FUNCTION-BASED NORMAL UNIQUE

          SELECT index_name, column_name, column_position, descend FROM user_ind_columns WHERE table_name = 'T'
          /

          INDEX_NAME COLUMN_NAME COLUMN_POSITION DESC
          ---------- ------------ --------------- ----
          T1 SYS_NC00002$ 1 DESC

          SELECT * FROM user_ind_expressions WHERE table_name = 'T'
          /

          INDEX_NAME TABLE_NAME COLUMN_EXPRESSION COLUMN_POSITION
          ---------- ---------- -------------------- ---------------
          T1 T "A" 1
            • Non-PeopleSoft digression: you can create a primary key on a virtual column.  An index on the virtual column is not function-based.  So you can achieve the same effect if you move the function from the index into a virtual column, and you can have a primary key on the function.  However, PeopleTools Application Designer doesn't support virtual columns.
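A minimal sketch of that digression, assuming a simple negated-number virtual column stands in for the descending transformation (the table and index names are illustrative):
CREATE TABLE t2
(a      NUMBER NOT NULL
,a_desc NUMBER GENERATED ALWAYS AS (-a) VIRTUAL)
/
CREATE UNIQUE INDEX t2_pk ON t2 (a_desc)
/
ALTER TABLE t2 ADD CONSTRAINT t2_pk PRIMARY KEY (a_desc) USING INDEX t2_pk
/
Unlike the function-based index on T above, this index can be used to police a primary key constraint.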
I think that descending keys have been removed from PeopleTools because:
          • It permits the creation of primary key constraints using the unique indexes.
          • It does not pose any performance threat.  In Oracle, index leaf blocks are chained in both directions so it is possible to use an ascending index for a descending scan and vice versa. 
          • Update 20.4.2016: There is an optimisation in Oracle 11.2.0.4 that improves the performance of the MAX() function in correlated sub-queries on ascending indexes only.  This will benefit all PeopleSoft applications, but especially HCM.
          What are the advantages of having a primary key rather than just a unique constraint?
          • The optimizer can only consider certain SQL transformations if there is a primary key.  That mainly affects star transformation.
          • It allows the optimizer to rewrite a query to use a materialized view. 
          • It allows a materialized view refresh to be based on the primary key rather than the rowid (the physical address of the row in a table).  This can save you from performing a full refresh of the materialized view if you rebuild the table (I will come back to materialized views in another posting).
• If you are using logical standby, you need to be able to uniquely identify a row of data; otherwise Oracle will perform supplemental logging of all bounded-size columns (in PeopleSoft, that means everything except LOBs).  Oracle can use a non-null unique constraint, but it cannot use a unique function-based index.
            • Logical standby can be a useful way to minimise downtime during a migration.  For example, when migrating the database from a proprietary Unix to Linux where there is an Endian change.  Minimising supplemental logging would definitely be of interest in this case.
The descending index change is not something that can be configured by the developer or administrator.  It is just something that is hard coded in Application Designer and Data Mover that changes the DDL that they generate.

          There are some considerations on migration to PeopleTools 8.54:
          • It will be necessary to rebuild all indexes with descending keys to remove the descending keys.  
          • There are a lot of descending indexes in a typical PeopleSoft application (I counted the number of function-based indexes in a typical system: HR ~11000 , Financials: ~8500).  If you choose to do this during the migration it may take considerable time.
• In Application Designer, if your build script settings are set to only recreate an index if modified, Application Designer will not detect that the index has a descending key and so will not rebuild it.  You will have to work out for yourself which indexes have descending key columns and handle the rebuild manually (a query that identifies them follows this list).
• With migration to PeopleTools 8.54 in mind, you might choose to prevent Oracle from building descending indexes in advance by setting _IGNORE_DESC_IN_INDEX=TRUE.  Then you can handle the rebuild in stages before the upgrade.
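The data dictionary makes it easy to identify the affected indexes.  A minimal sketch, run as the SYSADM schema owner (descending key columns appear as system-generated SYS_NC columns, but it is the index name that matters):
SELECT table_name, index_name, column_position, column_name
FROM user_ind_columns
WHERE descend = 'DESC'
ORDER BY table_name, index_name, column_position
/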
Conclusion
I think the removal of descending indexes from PeopleSoft is a sensible change that enables a number of Oracle database features, while doing no harm.

          PeopleTools 8.54 for the Oracle DBA

          Mon, 2015-02-16 06:36
          The UKOUG PeopleSoft Roadshow 2015 comes to London on 31st March 2015.  In a moment of enthusiasm, I offered to talk about new and interesting features of PeopleTools 8.54 from the perspective of an Oracle DBA.

          I have been doing some research, and have even read the release notes!  As a result, I have picked out some topics that I want to talk about.  I will discuss how the feature has been implemented, and what I think are the benefits and drawbacks of the feature:
          This post is not about a new feature in PeopleTools 8.54, but it is something that I discovered while investigating the new version.
            Links have been added to the above list as I have also blogged about each.  I hope it might produce some feedback and discussion.  After the Roadshow I will also add a link to the presentation.

            PeopleTools 8.54 is still quite new, and we are all still learning.  So please leave comments, disagree with what I have written, correct things that I have got wrong, ask questions.

            Filtering PeopleTools SQL from Performance Monitor Traces

            Mon, 2014-11-03 15:01

I have been doing some on-line performance tuning on a PeopleSoft Financials system using PeopleSoft Performance Monitor (PPM).  End-users have collected verbose PPM traces.  Usually, when I use PPM in a production system, all the components are fully cached by the normal activity of the users (except when the application server caches have recently been cleared).  However, when working in a user test environment it is common to find that the components are not fully cached.  This presents two problems.
            • The application servers spend quite a lot of time executing queries on the PeopleTools tables to load the components, pages and PeopleCode into their caches. We can see in the screenshot of the component trace that there is a warning message that component objects are not fully cached, and that these  cache misses skew timings.
            • In verbose mode, the PPM traces collect a lot of additional transactions capturing executions and fetches against PeopleTools tables. The PPM analytic components cannot always manage the resultant volume of transactions.
Figure 1. Component trace as collected by PPM
If I go further down the same page and look in the SQL Summary, I can see SQL operations against PeopleTools tables (they are easily identifiable in that they generally do not have an underscore in the third character).  Not only are 5 of the top 8 SQL operations related to PeopleTools tables, we can also see that they account for over 13000 executions, which means there are at least 13000 rows of additional data to be read from PSPMTRANSHIST.
Figure 2. SQL Summary of PPM trace with PeopleTools SQL
When I open the longest running server round trip (this is also referred to as a Performance Monitoring Unit or PMU), I can only load 1001 rows before I get a message warning that the maximum row limit has been reached.  The duration summary and the number of executions and fetches cannot be calculated and hence 0 is displayed.
Figure 3: Details of longest PMU with PeopleTools SQL
              Another consequence of the PeopleTools data is that it can take a long time to open the PMU tree. There is no screenshot of the PMU tree here because in this case I had so much data that I couldn't open it before the transaction timed out!
Solution
My solution to this problem is to delete the transactions that relate to PeopleTools SQL and correct the durations, and the number of executions and fetches, held in summary transactions.  The rationale is that these transactions would not normally occur in significant quantities in a real production system, and there is not much I can do about them when they do.
              The first step is to clone the trace. I could work on the trace directly, but I want to preserve the original data.
              PPM transactions are held in the table PSPMTRANSHIST. They have a unique identifier PM_INSTANCE_ID. A single server round trip, also called a Performance Monitoring Unit (PMU), will consist of many transactions. They can be shown as a tree and each transaction has another field PM_PARENT_INST_ID which holds the instance of the parent. This links the data together and we can use hierarchical queries in Oracle SQL to walk the tree. Another field PM_TOP_INST_ID identifies the root transaction in the tree.
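For example, the transactions of a single PMU can be walked with a hierarchical query.  A minimal sketch, where :top_inst_id is a bind variable holding the PM_TOP_INST_ID of the PMU of interest:
SELECT LEVEL, h.pm_instance_id, h.pm_parent_inst_id
, h.pm_trans_defn_id, h.pm_trans_duration
FROM pspmtranshist h
START WITH h.pm_instance_id = :top_inst_id
CONNECT BY PRIOR h.pm_instance_id = h.pm_parent_inst_id
ORDER SIBLINGS BY h.pm_instance_id
/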
Cloning a PPM trace is simply a matter of inserting data into PSPMTRANSHIST.  However, when I clone a PPM trace I have to make sure that the instance numbers are distinct but still link correctly.  In my system I can take a very simple approach.  All the instance numbers actually collected by PPM are greater than 1E16.  So, I will simply use the modulus function to consistently alter the instances to be different.  This approach may break down in future, but it will do for now.
On an Oracle database, PL/SQL is a simple and effective way to write procedural processes like this.  I have written two anonymous blocks of code.
              Note that the cloned trace will be purged from PPM like any other data by the delivered PPM archive process.

              REM xPT.sql
              BEGIN --duplicate PPM traces
              FOR i IN (
              SELECT h.*
              FROM pspmtranshist h
              WHERE pm_perf_trace != ' ' /*rows must have a trace name*/
              -- AND pm_perf_trace = '9b. XXXXXXXXXX' /*I could specify a specific trace by name*/
              AND pm_instance_id > 1E16 /*only look at instance > 1e16 so I do not clone cloned traces*/
              ) LOOP
              INSERT INTO pspmtranshist
              (PM_INSTANCE_ID, PM_TRANS_DEFN_SET, PM_TRANS_DEFN_ID, PM_AGENTID, PM_TRANS_STATUS,
              OPRID, PM_PERF_TRACE, PM_CONTEXT_VALUE1, PM_CONTEXT_VALUE2, PM_CONTEXT_VALUE3,
              PM_CONTEXTID_1, PM_CONTEXTID_2, PM_CONTEXTID_3, PM_PROCESS_ID, PM_AGENT_STRT_DTTM,
              PM_MON_STRT_DTTM, PM_TRANS_DURATION, PM_PARENT_INST_ID, PM_TOP_INST_ID, PM_METRIC_VALUE1,
              PM_METRIC_VALUE2, PM_METRIC_VALUE3, PM_METRIC_VALUE4, PM_METRIC_VALUE5, PM_METRIC_VALUE6,
              PM_METRIC_VALUE7, PM_ADDTNL_DESCR)
              VALUES
              (MOD(i.PM_INSTANCE_ID,1E16) /*apply modulus to instance number*/
              ,i.PM_TRANS_DEFN_SET, i.PM_TRANS_DEFN_ID, i.PM_AGENTID, i.PM_TRANS_STATUS,
              i.OPRID,
              SUBSTR('xPT'||i.PM_PERF_TRACE,1,30) /*adjust trace name*/,
              i.PM_CONTEXT_VALUE1, i.PM_CONTEXT_VALUE2, i.PM_CONTEXT_VALUE3,
              i.PM_CONTEXTID_1, i.PM_CONTEXTID_2, i.PM_CONTEXTID_3, i.PM_PROCESS_ID, i.PM_AGENT_STRT_DTTM,
              i.PM_MON_STRT_DTTM, i.PM_TRANS_DURATION,
              MOD(i.PM_PARENT_INST_ID,1E16), MOD(i.PM_TOP_INST_ID,1E16), /*apply modulus to parent and top instance number*/
              i.PM_METRIC_VALUE1, i.PM_METRIC_VALUE2, i.PM_METRIC_VALUE3, i.PM_METRIC_VALUE4, i.PM_METRIC_VALUE5,
              i.PM_METRIC_VALUE6, i.PM_METRIC_VALUE7, i.PM_ADDTNL_DESCR);
              END LOOP;
              COMMIT;
              END;
              /
Now I will work on the cloned trace.  I want to remove certain transactions.
              • PeopleTools SQL. Metric value 7 reports the SQL operation and SQL table name. So if the first word is SELECT and the second word is a PeopleTools table name then it is a PeopleTools SQL operation. A list of PeopleTools tables can be obtained from the object security table PSOBJGROUP.
              • Implicit Commit transactions. This is easy - it is just transaction type 425. 
              Having deleted the PeopleTools transactions, I must also
• Correct the transaction duration of any parents of a deleted transaction.  I work up the hierarchy of transactions and deduct the duration of the transaction that I am deleting from all of its parents.
• Transaction types 400, 427 and 428 all record PeopleTools SQL time (metric 66).  When I come to such a transaction, I also deduct the duration of the deleted transaction from the PeopleTools SQL time metric on that parent transaction.
              • Delete any children of the transactions that I delete. 
              • I must also count each PeopleTools SQL Execution transaction (type 408) and each PeopleTools SQL Fetch transaction (type 414) that I delete. These counts are also deducted from the summaries on the parent transaction 400. 
              The summaries in transaction 400 are used on the 'Round Trip Details' components, and if they are not adjusted you can get misleading results. Without the adjustments, I have encountered PMUs where more than 100% of the total duration is spent in SQL - which is obviously impossible.
              Although this technique of first cloning the whole trace and then deleting the PeopleTools operations can be quite slow, it is not something that you are going to do very often. 
              REM xPT.sql
              REM (c)Go-Faster Consultancy Ltd. 2014
              set serveroutput on echo on
              DECLARE
              l_pm_instance_id_m4 INTEGER;
              l_fetch_count INTEGER;
              l_exec_count INTEGER;
              BEGIN /*now remove PeopleTools SQL transaction and any children and adjust trans durations*/
              FOR i IN (
              WITH x AS ( /*returns PeopleTools tables as defined in Object security*/
              SELECT o.entname recname
              FROM psobjgroup o
              WHERE o.objgroupid = 'PEOPLETOOLS'
              AND o.enttype = 'R'
              )
              SELECT h.pm_instance_id, h.pm_parent_inst_id, h.pm_trans_duration, h.pm_trans_defn_id
              FROM pspmtranshist h
              LEFT OUTER JOIN x
              ON h.pm_metric_value7 LIKE 'SELECT '||x.recname||'%'
              AND x.recname = upper(regexp_substr(pm_metric_value7,'[^ ,]+',8,1)) /*first word after select*/
              WHERE pm_perf_trace like 'xPT%' /*restrict to cloned traces*/
              -- AND pm_perf_trace = 'xPT9b. XXXXXXXXXX' /*work on a specific trace*/
              AND pm_instance_id < 1E16 /*restrict to cloned traces*/
              AND ( x.recname IS NOT NULL
              OR h.pm_trans_defn_id IN(425 /*Implicit Commit*/))
              ORDER BY pm_instance_id DESC
              ) LOOP
              l_pm_instance_id_m4 := TO_NUMBER(NULL);
               
                  IF i.pm_parent_inst_id>0 AND i.pm_trans_duration>0 THEN
              FOR j IN(
              SELECT h.pm_instance_id, h.pm_parent_inst_id, h.pm_top_inst_id, h.pm_trans_defn_id
              , d.pm_metricid_3, d.pm_metricid_4
              FROM pspmtranshist h
              INNER JOIN pspmtransdefn d
              ON d.pm_trans_defn_set = h.pm_trans_defn_set
              AND d.pm_trans_defn_id = h.pm_trans_Defn_id
              START WITH h.pm_instance_id = i.pm_parent_inst_id
              CONNECT BY prior h.pm_parent_inst_id = h.pm_instance_id
              ) LOOP
              /*decrement parent transaction times*/
              IF j.pm_metricid_4 = 66 /*PeopleTools SQL Time (ms)*/ THEN --decrement metric 4 on transaction 400
              --dbms_output.put_line('ID:'||i.pm_instance_id||' Type:'||i.pm_trans_defn_id||' decrement metric_value4 by '||i.pm_trans_duration);
              UPDATE pspmtranshist
              SET pm_metric_value4 = pm_metric_value4 - i.pm_trans_duration
              WHERE pm_instance_id = j.pm_instance_id
              AND pm_trans_Defn_id = j.pm_trans_defn_id
              AND pm_metric_value4 >= i.pm_trans_duration
              RETURNING pm_instance_id INTO l_pm_instance_id_m4;
              ELSIF j.pm_metricid_3 = 66 /*PeopleTools SQL Time (ms)*/ THEN --SQL time on serialisation
              --dbms_output.put_line('ID:'||i.pm_instance_id||' Type:'||i.pm_trans_defn_id||' decrement metric_value3 by '||i.pm_trans_duration);
              UPDATE pspmtranshist
              SET pm_metric_value3 = pm_metric_value3 - i.pm_trans_duration
              WHERE pm_instance_id = j.pm_instance_id
              AND pm_trans_Defn_id = j.pm_trans_defn_id
              AND pm_metric_value3 >= i.pm_trans_duration;
              END IF;

              UPDATE pspmtranshist
              SET pm_trans_duration = pm_trans_duration - i.pm_trans_duration
              WHERE pm_instance_id = j.pm_instance_id
              AND pm_trans_duration >= i.pm_trans_duration;
              END LOOP;
              END IF;

              l_fetch_count := 0;
              l_exec_count := 0;
              FOR j IN( /*identify transaction to be deleted and any children*/
              SELECT pm_instance_id, pm_parent_inst_id, pm_top_inst_id, pm_trans_defn_id, pm_metric_value3
              FROM pspmtranshist
              START WITH pm_instance_id = i.pm_instance_id
              CONNECT BY PRIOR pm_instance_id = pm_parent_inst_id
              ) LOOP
              IF j.pm_trans_defn_id = 408 THEN /*if PeopleTools SQL*/
              l_exec_count := l_exec_count + 1;
              ELSIF j.pm_trans_defn_id = 414 THEN /*if PeopleTools SQL Fetch*/
              l_fetch_count := l_fetch_count + j.pm_metric_value3;
              END IF;
              DELETE FROM pspmtranshist h /*delete tools transaction*/
              WHERE h.pm_instance_id = j.pm_instance_id;
              END LOOP;

              IF l_pm_instance_id_m4 > 0 THEN
              --dbms_output.put_line('ID:'||l_pm_instance_id_m4||' Decrement '||l_exec_Count||' executions, '||l_fetch_count||' fetches');
              UPDATE pspmtranshist
              SET pm_metric_value5 = pm_metric_value5 - l_exec_count
              , pm_metric_value6 = pm_metric_value6 - l_fetch_count
              WHERE pm_instance_id = l_pm_instance_id_m4;
              l_fetch_count := 0;
              l_exec_count := 0;
              END IF;

              END LOOP;
              END;
              /
              Now, I have a second PPM trace that I can open in the analytic component.
Figure 4: Original and Cloned PPM traces

              When I open the cloned trace, both timings in the duration summary have reduced as have the number of executions and fetches.  The durations of the individual server round trips have also reduced.
Figure 5: Component Trace without PeopleTools transactions
              All of the PeopleTools SQL operations have disappeared from the SQL summary.
Figure 6: SQL Summary of PPM trace after removing PeopleTools SQL transactions
              The SQL summary now only has 125 rows of data.
              Figure 7: SQL Summary of PMU without PeopleTools SQL
              Now, the PPM tree component opens quickly and without error.
Figure 8: PMU Tree after removing PeopleTools SQL
              There may still be more transactions in a PMU than I can show in a screenshot, but I can now find the statement that took the most time quite quickly.

Figure 9: Long SQL transaction further down same PMU tree
Conclusions
I think that it is reasonable and useful to remove PeopleTools SQL operations from a PPM trace.
In normal production operation, components will mostly be cached, and this approach renders traces collected in non-production environments both usable in the PPM analytic components and more realistic for performance tuning.  However, it is essential that, when deleting some transactions from a PMU, the summary data held in other transactions in the same PMU is also corrected so that the metrics remain consistent.

              Minimising Parse Time in Application Engine with ReUseStatement

              Fri, 2014-10-24 10:15
This article explains how to determine the overhead of using literal values rather than bind variables in SQL submitted by PeopleSoft Application Engine programs.  Using a combination of PeopleSoft Application Engine Batch Timings and Oracle Active Session History (ASH), it is possible to determine where it is most effective to alter this behaviour by setting the ReUseStatement attribute.
ReUse Statement Flag
I originally wrote about the Performance Benefits of ReUseStatement in Application Engine over 5 years ago, and the problem is no less significant today than it was then.  There are still many places in the delivered PeopleSoft application that would benefit from it.  However, it is not possible to simply set the attribute across all Application Engine programs because it will cause errors in steps where SQL is dynamically generated, so each case must be considered.  Where the ReUseStatement attribute is changed, it must be tested and migrated like any other customisation.

              Recently, I have been encountering problems in a number of places on a Financials system which were resolved by enabling ReUseStatement.  So I started by calculating how much time was lost by not setting it.
Application Engine Batch Timings
If an Application Engine step is not reused, then it is compiled prior to every execution, during which the Application Engine bind variables are resolved to literals.  The number and duration of compilations and executions are reported in the Batch Timings, the production of which is controlled with the TraceAE flag in the Process Scheduler configuration file (psprcs.cfg).

              ;-------------------------------------------------------------------------
              ; AE Tracing Bitfield
              ;
              ; Bit Type of tracing
              ; --- ---------------
              ...
              ; 128 - Timings Report to AET file
              ...
              ; 1024 - Timings Report to tables
              ...
              TraceAE=1152
              ;------------------------------------------------------------------------

              Whatever other TraceAE flags you set, I would always recommend that you set 128 and 1024 so that batch timings are always emitted to both file and database.  The overhead of enabling them is negligible because they are managed in memory at run time and physically written once when the Application Engine program ends.

              NB: The settings in the configuration file can be overridden by command line options.  If you specify -TRACE parameter in the Process Scheduler definitions remember to also set these flags.

Batch timings are written to the AE Trace file at the end of an Application Engine program, and to the PS_BAT_TIMINGS PeopleTools tables at the successful end of an Application Engine program.
It can be useful to have batch timings in the trace file of an Application Engine program that failed, because they are not written to the database.  As your system runs, you will build up batch timings for all of your successful Application Engine processes (which will be most of them).  This is a useful source of performance metrics.
Compilations, Execution and ReUse
In this example, the number of compilations is equal to the number of executions because ReUseStatement is not set.

                                        PeopleSoft Application Engine Timings
              (All timings in seconds)

                               C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                 8453      2.8    8453    685.6       0      0.0    688.4
              ...

              With ReUse Statement enabled, there is now only a single compilation, and most of the time is saved in execution not compilation.

                                             C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                    1      0.0    8453    342.3       0      0.0    342.3
              ...

              So we can use the batch timings to identify steps where ReUseStatement is not set because they have as many compilations as executions, and then we can profile the top statements.
Profile Compilations
This query produces a simple profile of batch timings for statements.
• In sub-query x it extracts batch timings for statements with more than one compilation in processes that ended in the last 7 days.
• There is a long-standing bug in batch timings where negative timings can be returned because the clock that returns milliseconds recycles back to zero every 2^16 milliseconds.  Sub-query y calculates an adjustment that is applied in sub-query z if the timing is negative.
• Arbitrarily, I am only looking at statements with a total of at least 10000 compilations.

              REM ReUseCand.sql
REM (c)Go-Faster Consultancy Ltd. 2014
              COLUMN detail_id FORMAT a32
              COLUMN step_time FORMAT 999990 HEADING 'AE|Step|Secs'
              COLUMN compile_count HEADING 'AE|Compile|Count'
              COLUMN execute_count HEADING 'AE|Execute|Count'
              COLUMN processes HEADING 'Num|Process|Instances'
              COLUMN process_name HEADING 'Process|Name'
              SPOOL ReUseCand
              WITH x AS (
              SELECT l.process_instance, l.process_name
              , l.time_elapsed/1000 time_elapsed
              , l.enddttm-l.begindttm diffdttm
              , d.bat_program_name||'.'||d.detail_id detail_id
              , d.compile_count, d.compile_time/1000 compile_time
              , d.execute_time/1000 execute_time
              FROM ps_bat_Timings_dtl d
              , ps_bat_timings_log l
              WHERE d.process_instance = l.process_instance
              AND d.compile_count = d.execute_count
              AND d.compile_count > 1
              AND l.enddttm > SYSDATE-7
              ), y as (
              SELECT x.*
              , GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
              +EXTRACT(hour FROM diffdttm))
              +EXTRACT(minute FROM diffdttm))
              +EXTRACT(second FROM diffdttm)-x.time_elapsed) delta
              FROM x)
              , z as (
              SELECT process_instance, process_name, detail_id
              , CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
              ELSE time_elapsed END time_elapsed
              , compile_count
              , CASE WHEN compile_time < 0 THEN compile_time+delta
              ELSE compile_time END AS compile_time
              , CASE WHEN execute_time < 0 THEN execute_time+delta
              ELSE execute_time END AS execute_time
              FROM y
              ), a as (
              SELECT process_name, detail_id
              , SUM(compile_time+execute_time) step_time
              , SUM(compile_count) compile_count
              , COUNT(DISTINCT process_instance) processes
              FROM z
              GROUP BY process_name, detail_id)
              SELECT * FROM a
              WHERE compile_count >= 10000
              ORDER BY step_time DESC
              /
              SPOOL OFF

So now I have a list of steps with lots of compilations.  I know how long they took, but I don't know how much time I might save by enabling ReUseStatement.  That will save some time in Application Engine, but it will also cut down database parse time.  So now I need to determine the parse time from ASH data.

Process                                        Step    Compile    Process
Name         DETAIL_ID                         Secs      Count  Instances
------------ -------------------------------- ------ ---------- ----------
AP_PSTVCHR   AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step03.S       4313      49536         28
FS_VATUPDFS  FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step02.S       2799      49536         28
PC_BI_TO_PC  PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10
FS_VATUPDFS  FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3
GL_JEDIT_0   FS_CEDT_ECFS.iTSELog.iTSELog.S     1628      13104        519
AP_APY2015   AP_APY2015.V_CREATE.Step14.H       1192      11318         19

              This query is based on the previous one, but includes scalar queries on the ASH data for each step.
              • WARNING: This query can run for a long period because it has to scan ASH data for each entry in BAT_TIMINGS_DTL.
• This time, in sub-query x, I am looking for a rolling 7-day period up to the last whole hour, because by then the ASH data will have been flushed to the repository (DBA_HIST_ACTIVE_SESS_HISTORY).
              • In sub-query y there are two scalar queries that retrieve the DB time spent on hard parse, and all DB time for each batch timing entry.  These queries count 10 seconds for each distinct sample ID to estimate elapsed time rather than total DB time.
              • The query does not group by process name because one step can be called from many Application Engine programs and I want to aggregate the total time across all of them.

REM ReUseCandASH.sql
              REM (c)Go-Faster Consultancy Ltd. 2014
              COLUMN processes HEADING 'Num|Process|Instances'
              COLUMN process_name HEADING 'Process|Name'
              COLUMN detail_id FORMAT a32
              COLUMN step_time HEADING 'AE|Step|SEcs' FORMAT 999990
              COLUMN compile_count HEADING 'AE|Compile|Count'
              COLUMN execute_count HEADING 'AE|Execute|Count'
              COLUMN hard_parse_secs HEADING 'Hard|Parse|Secs' FORMAT 99990
              COLUMN ash_secs HEADING 'DB|Time' FORMAT 99990
              SPOOL ReUseCandASH
              WITH x AS (
              SELECT l.process_instance, l.process_name
              , l.time_elapsed/1000 time_elapsed
              , l.begindttm, l.enddttm
              , l.enddttm-l.begindttm diffdttm
              , d.bat_program_name||'.'||d.detail_id detail_id
              , d.compile_count, d.compile_time/1000 compile_time
              , d.execute_time/1000 execute_time
              FROM ps_bat_timings_dtl d
              , ps_bat_timings_log l
              WHERE d.process_instance = l.process_instance
              AND d.compile_count = d.execute_count
              AND d.compile_count > 1
              AND l.enddttm >= TRUNC(SYSDATE-7,'HH24')
              AND l.enddttm < TRUNC(SYSDATE,'HH24')
              ), y as (
              SELECT x.*
              , GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
              +EXTRACT(hour FROM diffdttm))
              +EXTRACT(minute FROM diffdttm))
              +EXTRACT(second FROM diffdttm)-x.time_Elapsed) delta
              FROM x)
              , z as (
              SELECT process_instance, process_name, detail_id
              , CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
              ELSE time_elapsed END AS time_elapsed
              , compile_count
              , CASE WHEN compile_time < 0 THEN compile_time+delta
              ELSE compile_time END AS compile_time
              , CASE WHEN execute_time < 0 THEN execute_time+delta
              ELSE execute_time END AS execute_time
              , (
              SELECT 10*COUNT(DISTINCT h.sample_id)
              FROM psprcsque q
              , dba_hist_snapshot x
              , dba_hist_active_sess_history h
              WHERE q.prcsinstance = y.process_instance
              AND x.begin_interval_time <= y.enddttm
              AND X.END_INTERVAL_TIME >= y.begindttm
              AND h.sample_time between y.begindttm and y.enddttm
              AND h.SNAP_id = x.SNAP_id
              AND h.dbid = x.dbid
              AND h.instance_number = x.instance_number
              AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
              AND h.action = y.detail_id
              AND h.in_hard_parse = 'Y'
              ) hard_parse_secs
              , (
              SELECT 10*COUNT(DISTINCT h.sample_id)
              FROM psprcsque q
              , dba_hist_snapshot x
              , dba_hist_active_sess_history h
              WHERE q.prcsinstance = y.process_instance
              AND x.begin_interval_time <= y.enddttm
              AND X.END_INTERVAL_TIME >= y.begindttm
              AND h.sample_time between y.begindttm and y.enddttm
              AND h.SNAP_id = X.SNAP_id
              AND h.dbid = x.dbid
              AND h.instance_number = x.instance_number
              AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
              AND h.action = y.detail_id
              ) ash_secs
              FROM y
              ), a AS (
              SELECT /*process_name ,*/ detail_id
              , SUM(compile_time+execute_time) step_time
              , SUM(compile_count) compile_count
              , COUNT(DISTINCT process_instance) processes
              , SUM(hard_parse_secs) hard_parse_secs
              , SUM(ash_secs) ash_secs
              FROM z
              GROUP BY /*process_name,*/ detail_id)
              SELECT a.*
              FROM a
              WHERE compile_count >= 10000
              ORDER BY step_time DESC
              /
              spool off

              Now we can also see how much time the database is spending on hard parse on each step, and it is this time that is likely to be saved by enabling ReUseStatement.
              However, before we can enable the ReUseStatement attribute we must first manually inspect each step in Application Designer and determine whether doing so would break anything.  The Comment column in this profile was added manually as I did that.  Some statements I can change, some I have to accept the overhead.

                                   Step    Compile    Process      Parse         DB
DETAIL_ID                          Secs      Count  Instances       Secs       Time Comment
-------------------------------- ------ ---------- ---------- ---------- ---------- -------------------------------------
AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10      11820      11920 Set ReUseStatement
FS_CEDT_ECMB.4EditCDT.uValCDT.S    5531      10289        679        620       5870 Dynamic SQL, can't set ReUseStatement
APVEDTMOVE.UPDQTY03.Step03.S       4306      49471         27       4020       4100 Set ReUseStatement
FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3       3150       3860 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECFS.iTSELog.iTSELog.S     3332      19073        716       2130       3520 Dynamic SQL, can't set ReUseStatement
APVEDTMOVE.UPDQTY03.Step02.S       2796      49471         27       2730       2820 Set ReUseStatement
PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10        230       1920 Set ReUseStatement
FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3          0          0 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.4uAnchCT.uAnchCDT.S   1319      10289        679        510       1290 Dynamic SQL, can't set ReUseStatement
AP_APY2015.V_CREATE.Step14.H       1169      11094         19          0          0 Set ReUseStatement
GL_JETSE.GA100.CHKEDT.S            1121      15776        569        860        930 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.iTSELog.iTSELog.S      988      10289        679        450        990 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.uMarkVal.uMarkVal.S    711      10289        679         50        670 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.uMarkInv.uMarkInv.S    668      10289        679         40        790 Dynamic SQL, can't set ReUseStatement
              • Due to a bug in the instrumentation of Application Engine, the session's action attribute is not set for Do Select (type D) and Do When (type H) steps.  ASH data cannot therefore be matched for them.
              • More DB Time is reported for FS_CEDT_ECMB.uMarkInv.uMarkInv.S than is reported by batch timings.  This is a consequence of ASH sampling, where we count 10 seconds for each sample.
Conclusion
Setting ReUseStatement is very simple because it doesn't involve changing SQL, but there are lots of places where it could be set.  This technique picks out the relatively few places where doing so could potentially have a significant effect.
