
David Kurtz

This blog contains things about PeopleSoft that DBAs might find interesting.
Or then again they might not!
Non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

PeopleSoft on Oracle 12c

Fri, 2016-05-06 03:59
I was asked by Dan Iverson from psadmin.io about my experiences of PeopleSoft on Oracle 12c.  I have seen a number of PeopleSoft Financials systems on 12c recently.  Generally the experience is very positive, but one common theme is that they had all disabled Adaptive Query Optimization.

What is Adaptive Query Optimization?

Adaptive Query Optimization is a term for a set of new features in Oracle 12c that allow the optimizer to improve the accuracy of execution plans.  It is described in the Optimizer with Oracle Database 12c white paper.  Put simply, Oracle collects additional statistics at parse time, which it can use to generate a better execution plan than would be possible with the ordinarily collected statistics alone, and further statistics at execution time, which it can use to adjust the plan for subsequent executions of the same SQL statement.
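You can see whether a particular cached statement actually received an adaptive plan (a minimal sketch; the SQL_ID shown is a placeholder):

SELECT sql_id, child_number, is_resolved_adaptive_plan
FROM   v$sql
WHERE  sql_id = 'an4z1cv0ats2h';

SELECT * FROM TABLE(dbms_xplan.display_cursor('an4z1cv0ats2h', 0, '+ADAPTIVE'));

The '+ADAPTIVE' format shows the full adaptive plan, with the operations that were discarded at runtime marked as inactive.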
Why does it cause a Problem in PeopleSoft?

Adaptive Optimization is clearly a very useful feature in an application where SQL is shareable and reused.  But there's the rub.  This is another Oracle feature that works best with an application that is written the way Oracle advises applications be written (Automatic Memory Management and the plan stability/management technologies also come to mind), and PeopleSoft is not such an application.  PeopleSoft applications are SQL parse intensive.

Parsing SQL is an expensive business in terms of time and CPU.  The library cache preserves the results of previous parses to save the overhead of repeatedly parsing the same SQL for subsequent executions.  If the SQL is different, even just by a literal value, it will have a different SQL_ID and will not be matched.  If SQL statements in the library cache are not being reused, then you incur the cost of parse for each execution.  Adaptive Optimization adds to that overhead, in PeopleSoft sometimes to the point where it has a significant effect on performance.

PeopleSoft has several behaviours that lead to different SQL statements being generated from the same source:
  • Application Engine %BIND() variables are resolved to literals in the generated SQL if the ReUseStatement attribute is not set, which by default it is not.  Thus, each iteration of a loop may cause the same Application Engine step to produce a different SQL statement with a different SQL ID (see Performance Benefits of ReUse Statement Flag in Application Engine).  PeopleSoft development have got better at using this attribute in delivered code.  However, there are still many places where it could be set but is not.
  • There are many places in PeopleSoft where SQL is generated dynamically to cater for dynamic application configuration options.  Bind variables may be embedded in the SQL as literals, or more significant changes may be introduced such as completely different criteria or joining different tables.  In Application Engine, ReUseStatement usually cannot be used in conjunction with such steps.
  • Different concurrently executing instances of the same Application Engine programs will be allocated a different non-shared instance of a temporary record, so they reference different tables.  
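To illustrate the first and last points (a purely hypothetical Application Engine step; the table names and process instance numbers are invented), two concurrent runs of the same step might issue:

UPDATE PS_XX_SUM_TAO4 SET PROCESS_INSTANCE = 1234567 WHERE PROCESS_INSTANCE = 0;
UPDATE PS_XX_SUM_TAO5 SET PROCESS_INSTANCE = 1234568 WHERE PROCESS_INSTANCE = 0;

The statements differ in both the literal value and the temporary table instance, so each gets its own SQL_ID and is parsed from scratch.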
Some parts of PeopleSoft do produce shareable SQL statements.  Much of the SQL generated by the component processor and PeopleCode uses bind variables, the exception being where dynamic SQL is generated in PeopleCode.  PeopleSoft COBOL programs also widely use bind variables, but again some places dynamically generate SQL statements.

What does Oracle say about it?

There are a number of documents on the Oracle support website that touch on this (you will need to log in to follow these links):
Advice for the PeopleSoft Oracle DBA: The source of this document is not clear.  It is not attached to a MOS note, and is not in the usual format for such notes.  It is the only document that I can find that makes specific recommendations for Adaptive Optimization in conjunction with PeopleSoft, and it recommends totally disabling the entire Adaptive Optimization feature:
  • "optimizer_adaptive_features = FALSE - After upgrading to Oracle Database 12c, many PeopleSoft customers have noticed overall performance degradation which is related to Oracle 12c Optimizer Adaptive Feature (OAF). It is recommended that this value be disabled in all Oracle Databases running PeopleSoft Applications."
  • It has always been tempting to enable cursor_sharing with PeopleSoft so that Oracle converts the literals in the SQL back to bind variables, thus matching statements in the library cache that differ only in their literal values.  However, it brings no benefit where the SQL changes structurally or references different temporary table instances, and it can introduce other optimizer problems.  On the few occasions that I have tried it, I have never had a good experience.  This document also recommends against it.
How to Approach Issues that Appear Related to Adaptive Features Such as Adaptive Query Optimization (Doc ID 2068807.1): This note acknowledges that "while some issues with adaptive features have been found, in a number of cases, issues manifesting as performance degradation associated with Adaptive Query Optimization simply highlight deficiencies in the system, primarily inaccurate or stale object or system statistics.  Due to the nature of the activities that Adaptive Query Optimization performs, the impact of degraded performance can be widespread and can include (but is not limited to):
    • Poor SQL execution performance (where a poor plan is selected)
    • Poor Parse performance (where the optimizer takes more time to determine the optimal access path)
  • The scope of either of the issues may not be limited to individual statements but can impact the whole system in unforeseen ways; for example, both poor SQL execution and parse performance may cause locks to be held for longer than normal, causing lock contention, while poor parse performance may hold latches or mutexes with similar results.
  • Blanket disabling of features: While disabling Adaptive Features with blanket parameters (see: Document 2031605.1) may provide a short term workaround, the loss of their benefits to other queries may be significant. If the underlying cause is some other factor then a better solution is to identify and address that"
Adaptive Query Optimization (Doc ID 2031605.1) discusses how to disable either the entire Adaptive Query Optimization feature
OPTIMIZER_ADAPTIVE_FEATURES=FALSE /*disables adaptive optimisation as a whole*/
or to disable individual sub-features
_optimizer_adaptive_plans=false /*disables adaptive plans*/
_optimizer_use_feedback=false /*disables the use of statistics feedback for subsequent executions. Default is true*/
_px_adaptive_dist_method=off /*disables the adaptive parallel distribution methods*/
_optimizer_dsdir_usage_control=0 /* disables the optimizer usage of dynamic statistics (sampling) directives. Default value is 126 */
_optimizer_gather_feedback=false /*disables the gathering of execution feedback in the optimizer*/
_optimizer_nlj_hj_adaptive_join=false /*disables only the adaptive join from nested loops to hash join*/
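These parameters are set in the usual way, for example (a sketch; SCOPE=BOTH assumes an spfile, and the hidden underscore parameters should only be set on the advice of Oracle support):

ALTER SYSTEM SET optimizer_adaptive_features = FALSE SCOPE=BOTH;
ALTER SYSTEM SET "_optimizer_use_feedback" = FALSE SCOPE=BOTH;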
Summary

I have worked with a number of PeopleSoft Financials systems on Oracle 12c.  All but one of them had already disabled Adaptive Query Optimization, and I had to disable it on that one.  Once disabled, performance improved such that it was at least as good as it had been on 11g.
There is no question that it is important to maintain accurate statistics and histograms on tables in PeopleSoft.  I have written elsewhere about how to achieve that, especially on temporary tables.  However, I am left with a concern that disabling the entire feature may be excessive, and that there may be areas in PeopleSoft where it can bring some benefits.  I would like to find the opportunity to test whether it is possible to achieve better performance by only disabling certain parts of the feature.
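If I do get that opportunity, one low-risk way to experiment would be to toggle individual sub-features at session level before running a particular process, rather than changing the whole system (again only a sketch, and again these are hidden parameters that should only be used in consultation with Oracle support):

ALTER SESSION SET "_optimizer_adaptive_plans" = FALSE;
ALTER SESSION SET "_optimizer_use_feedback" = FALSE;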
©David Kurtz

PS360: A Utility to Extract and Present PeopleSoft Configuration and Performance Data

Wed, 2016-04-20 11:16
Introduction

Most of a PeopleSoft application is itself stored in the database in PeopleTools tables.  Therefore there is a lot of information about the configuration and operation of a PeopleSoft system in the database.  There are also performance metrics, particularly about batch processes.
PS360 is a new tool on which I am working.  It just uses SQL scripts to extract that data to HTML files, and packages them up in a zip file so that they can be sent for further analysis.  The style and method are closely modelled on Enkitec's EDB360 by Carlos Sierra, another free tool used for health checks and performance analysis of any Oracle database system.  PS360 aims to gather PeopleSoft-specific information that is not presented by EDB360.  It also runs in Oracle's SQL*Plus tool, and so is only available for use with an Oracle database.
Every section of PS360 is just the output of a SQL query; sometimes pre-processing is done in an anonymous PL/SQL block.  It does not install anything into the database, and does not update any table (other than the PLAN_TABLE, which is used for temporary working storage).  Each report is in tabular and/or graphical format.  All the charts are produced with the Google chart API.
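The mechanism for a tabular section amounts to little more than this (a simplified sketch, not the actual PS360 code; the query shown is merely illustrative):

SET MARKUP HTML ON SPOOL ON
SPOOL ps360_processes.html
SELECT prcstype, prcsname, COUNT(*) num_requests
FROM   psprcsrqst
GROUP BY prcstype, prcsname
ORDER BY 3 DESC;
SPOOL OFF
SET MARKUP HTML OFF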
The output falls into three sections.  
  • Configuration: Simple reports of certain configuration tables.
  • Performance Metrics: Process Scheduler and Application Engine timings.
  • Consistency Checks: both within PeopleTools tables and between the PeopleSoft and Oracle catalogues.
Sample PS360 Index Page generated on PeopleSoft Demo Database

Instructions

The tool can be run by anyone with access to the PeopleSoft Owner database user (usually SYSADM).  That user will already have privilege to read the Oracle catalogue.
Download the tool and unzip it into a directory.  Navigate to the ps360 (master) directory, open SQL*Plus and connect as SYSADM.  Execute the script ps360.sql.  The output will be written to a zip file in the same directory.  Unpack that zip file on your own PC and open the file ps360_[database name]_0_index.html with a browser.
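For example (a sketch; the directory, password and database names are placeholders):

cd ps360-master
sqlplus SYSADM/SYSADM@HCM92
SQL> @ps360.sql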
Feedback and Enhancements

I am looking for feedback about the tool, and suggestions for further enhancements.
Please either leave comments here or e-mail me at info@go-faster.co.uk.

©David Kurtz

nVision Performance Tuning: Coalescing Tree Leaves

Tue, 2016-04-19 10:09
I have blogged recently about the effectiveness of the Tree Performance options in improving the performance of nVision reports.
However, a consequence of the Tree Performance Access Method suppress join; use literal values is that the SQL generated by nVision will contain a criterion for every leaf on each of the selected nodes of that tree.
nVision Tree Performance Options

There will be an equality condition for each single value leaf. I normally set Selector Options to Ranges of values (BETWEEN), so I get a between condition for each ranged leaf. Behind the scenes, Oracle rewrites between as a pair of inequalities, so there is no difference, but the SQL generated by nVision is slightly shorter.
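So a ranged-leaf predicate such as this (an illustrative leaf):

A.DEPTID BETWEEN 'A151' AND 'A152'

is evaluated by the optimizer exactly as if it had been written:

A.DEPTID >= 'A151' AND A.DEPTID <= 'A152'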
The following is typical of nVision SQL with these performance options set.
SELECT A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A
WHERE A.LEDGER='ACTUALS' AND A.FISCAL_YEAR=2015 AND
A.ACCOUNTING_PERIOD BETWEEN 1 AND 12 AND A.CURRENCY_CD='GBP' AND
A.STATISTICS_CODE=' ' AND (A.BUSINESS_UNIT=…) AND (
A.DEPTID='C500' OR A.DEPTID='C512' OR A.DEPTID='C117' OR A.DEPTID='C157' OR
A.DEPTID='C340' OR A.DEPTID='C457' OR A.DEPTID='C510' OR A.DEPTID='A758' OR
A.DEPTID='8220' OR A.DEPTID='A704' OR A.DEPTID='A121' OR A.DEPTID='A110' OR
A.DEPTID BETWEEN 'A153' AND 'A154' OR A.DEPTID BETWEEN 'A151' AND 'A152' OR
A.DEPTID='A724' OR A.DEPTID BETWEEN 'A131' AND 'A133' OR A.DEPTID='A733' OR
A.DEPTID='A217' OR A.DEPTID='A437' OR A.DEPTID='A130' OR A.DEPTID='A134' OR
A.DEPTID='A703' OR A.DEPTID='A714' OR A.DEPTID='A218' OR A.DEPTID='A226' OR
A.DEPTID BETWEEN 'A135' AND 'A138'
…

A consequence of all the criteria is that Oracle can take a long time to parse the SQL statement. It may only be a few seconds in the case of a single SQL statement, but an nVision report book can consist of thousands of SQL statements.
To produce the following performance profile, I enabled Oracle SQL trace for a report book and profiled the trace. SQL Parse accounted for nearly 8% of the total runtime of this report book, so it can be significant, and can vary widely.
                                                                     ----- Time per Call -----
Event Name                           % Time       Seconds      Calls      Avg      Min       Max
------------------------------------ ------ ------------- ---------- -------- -------- ---------
FETCH calls [CPU]                     48.2%   3,699.8440s     16,068  0.2303s  0.0000s 178.0640s
db file sequential read               22.5%   1,728.2101s  4,413,352  0.0004s  0.0002s   0.1294s
SQL*Net message from client [idle]     8.0%     617.7042s        926  0.6671s  0.0002s  61.3147s
PARSE calls [CPU]                      7.9%     605.9340s      5,383  0.1126s  0.0000s  11.0500s
…
Total                                100.0%   7,681.4428s
Reducing the number of criteria in the SQL will reduce the parse time, but that is determined by the way the tree leaves are defined.
The leafcoal.sql script seeks to address this by repeatedly merging two consecutive leaves on the same tree node into a single ranged leaf where possible. It performs two checks before merging adjacent leaves on the same tree node:
  • There is not an intermediate value on the detail field defined in the tree structure record. So if the detail field was DEPT_TBL.DEPTID, the script checks that there are no values of DEPTID on PS_DEPT_TBL that are not currently selected by existing leaves that would be included in the merged leaf.
  • There is not another leaf on another node of the tree that would intersect with the merged leaf (a worked example follows this list).
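For example (hypothetical values, reusing the DEPTID tree above, and ignoring SETID and effective date for simplicity): two adjacent leaves 'A151'-'A152' and 'A153'-'A154' on the same node can be coalesced into a single ranged leaf 'A151'-'A154' only if no leaf on another node selects a value in that range and a check such as this returns no rows:

SELECT deptid
FROM   ps_dept_tbl
WHERE  deptid > 'A152'
AND    deptid < 'A153';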
Instructions

leafcoal.sql was written as an anonymous PL/SQL block, so there is nothing to install. It should be run in SQL*Plus connected as the PeopleSoft owner ID (usually SYSADM). The user may need to make some adjustments to the script. As delivered, it runs in a test mode that does not update the database but reports on what it would do. Change k_testmode to FALSE to make it update the database.
k_testmode     CONSTANT BOOLEAN := TRUE; /*set this FALSE to perform update*/
The level of output emitted depends on the variable l_debug_level:
l_debug_level  INTEGER := 4;
  • 1. end of processing message 
  • 2. start of processing for tree 
  • 3. number of leaves in tree and number of leaves coalesced 
  • 4. details of leaves being compressed 
  • 5. start and end of each procedure 
  • 6. parameters passed to functions 
  • 7. number of rows updated/deleted during coalesce 
  • 8. dynamic SQL statement 
The script reports on the work it has done. It does not commit its updates. That is left for the user to either commit or rollback.

.(3)Processing SHARE, ,XXX_ACCOUNT,151201                                       
.(4)634 nodes, 2636 leaves
.(4)1358 leaves coalesced (52%)

(1)Commit changes or rollback
The query at the end of the script determines which trees will be processed and may need to be changed as required. For example, you might choose to coalesce the leaves on

  • specific trees,
  • most recent effective dated trees,
  • trees with the literal values performance option.

FOR i IN (
  SELECT DISTINCT d.setid, d.setcntrlvalue, d.tree_name, d.effdt
  FROM   pstreedefn d
  ,      pstreestrct s
  ,      psrecfielddb f
  WHERE  d.tree_strct_id = s.tree_strct_id
  AND    s.node_fieldname = 'TREE_NODE'
--AND    d.tree_acc_method = 'L' /*literal values*/
  AND    s.dtl_recname = f.recname
  AND    s.dtl_fieldname = f.fieldname
--AND    d.tree_name = 'XXX_ACCOUNT'
) LOOP
Conclusion

The number of leaves coalesced depends entirely on how the trees have been built. At one customer it produced a reduction of over 50%; at another it was only 10%. The reduction in the number of leaves produces a corresponding reduction in the time spent on SQL parse during nVision reports.

©David Kurtz

Interview with PeopleSoft Administrator Podcast

Fri, 2016-03-25 12:59
I recently recorded an interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast. It has been spread over three episodes. There is lots of other good stuff on the website and other episodes that are well worth listening to.
(25 March 2016) #21 - Temporary Tables
(8 April 2016) #23 - The Application Server
(15 April 2016) #24 - Application Server Tuning

You can listen to the podcast on psadmin.io, subscribe with your favourite podcast player, or find it in iTunes.

©David Kurtz

nVision Performance Tuning: General Principles

Sun, 2016-03-06 12:20
Over the years I have dealt with performance problems with nVision reporting on General Ledger on various Financials systems in various kinds of businesses.  Different businesses use nVision very differently and face different challenges, but I have developed an approach that works well at most customers.  I have collected that advice into a document that I have recently published on my website (http://www.go-faster.co.uk/docs.htm#nVision Tuning.Generic.pdf).

The key points are:
  • Indexing
    • Effective indexing of LEDGER and LEDGER_BUDG tables to match the analysis criteria of the reports.
    • Enhanced indexing of the PSTREESELECT tables, so that the indexes fully satisfy the queries without the need to visit the tables.
  • Collection of statistics and extended statistics on the PSTREESELECT tables (see the sketch after this list).
  • Using the nVision performance options 
    • use static selectors instead of dynamic selectors.  It is difficult to maintain up-to-date optimizer statistics on the selector tables with dynamic selectors.
    • simplify SQL statements by replacing joins with literal criteria
    • updated 11.4.2016: reduce SQL parse time by coalescing leaves on trees.
  • I also suggest use of Oracle Fine Grained Auditing to 
    • enhance instrumentation,
    • detect the use of dynamic selectors.
  • Appropriate partitioning of the LEDGER and LEDGER_BUDG tables.
  • Archiving.
    • If the partitioning option is not available, then I strongly recommend that as much historical data as possible is purged from the LEDGER and LEDGER_BUDG tables.
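To illustrate the extended statistics point above (a sketch only; the selector tables follow the PSTREESELECTnn pattern, with RANGE_FROM_nn and RANGE_TO_nn columns, but verify the names on your own system):

SELECT dbms_stats.create_extended_stats(
         ownname   => 'SYSADM'
       , tabname   => 'PSTREESELECT10'
       , extension => '(RANGE_FROM_10,RANGE_TO_10)')
FROM   dual;

EXEC dbms_stats.gather_table_stats('SYSADM','PSTREESELECT10')

The column group gives the optimizer an accurate combined cardinality for the two range columns, which are always queried together.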
Caveat: This is a general approach, and the document makes general statements.  Every customer is different, because their data is different and often their method of analysis differs.  There is always something that requires adjustment or an exception to the general approach.  Your mileage will vary!
©David Kurtz

Implementing Index Compression (and other Physical Storage Options) via Application Designer

Thu, 2016-02-11 12:07
There are some performance improvements that require physical storage options to be set on tables or indexes.
One particular technique that I will take as an example for this article is index compression.  A good example in PeopleSoft is the tree node table, PSTREENODE.  It drives many security and hierarchical queries, and it is not updated very frequently, only as new trees are brought on.  Many of its indexes are good candidates for compression.
This compression works by storing repeated column values only once per index leaf block.  Each distinct set of values in the columns up to the prefix length is stored in a symbol table.  The choice of prefix length can significantly affect the compression.  Oracle can calculate the optimal prefix length using
ANALYZE INDEX … VALIDATE STRUCTURE
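For a single index, the underlying mechanism looks like this (a sketch; ANALYZE … VALIDATE STRUCTURE populates the session-level INDEX_STATS view, whose OPT_CMPR_COUNT and OPT_CMPR_PCTSAVE columns report the optimal prefix length and the space it would save):

ANALYZE INDEX ps_pstreenode VALIDATE STRUCTURE;

SELECT name, opt_cmpr_count, opt_cmpr_pctsave
FROM   index_stats;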
I have written a script, calc_opt_comp.sql, to make this slightly easier.  This is the output on my demo database, but I get similar results on production systems.
                                       Optimal
                                   Compression                    Weighted
                                        Prefix            Current  Average
Table Name         Index Name          Length FREQ PARTS  Blocks  Saving %
------------------ ------------------ ------- ---- ----- ------- --------
PSTREENODE         PSAPSTREENODE            4    1     0     280     39.0
                   PSBPSTREENODE            3    1     0     264     30.0
                   PSCPSTREENODE            1    1     0     120      7.0
                   PSDPSTREENODE            4    1     0     256     61.0
                   PSFPSTREENODE            2    1     0     256     67.0
                   PSGPSTREENODE            3    1     0     400     49.0
                   PS_PSTREENODE            4    1     0     256     44.0
However, I want to make sure that, should the table need to be rebuilt in the future, PeopleTools will generate the DDL with the appropriate settings.  The same principle applies to any other physical storage option.  I would always recommend that the compression prefix lengths be incorporated into the PeopleTools DDL override in Application Designer (figure 1).  While you could extend the DDL model and add another override for compression, I just append it to the PCTFREE setting.
Figure 1. Index DDL Overrides

However, there is a catch.  PeopleTools has never examined DDL overrides when determining whether there is a difference between the PeopleSoft and database data dictionaries, even though that comparison must be platform specific.  DDL overrides and DDL models are just strings held in the PeopleTools tables, and they can be extended (or even removed) by customers.  I assume that is why it was not felt possible to check them reliably.
So, if the build settings (figure 2) are 'recreate index only if modified', which is the default, Application Designer will not generate a DDL script, nor execute any DDL.
Figure 2. Build Settings

The workaround has always been to set the index creation option in the build settings to 'recreate index if it already exists'.  However, we then discover that the override doesn't appear in the DDL.  As Application Designer has not detected a difference between PeopleTools and the database, it has instead used the Oracle DBMS_METADATA package to generate the storage clause from the index that exists in the database.  Hence the DDL contains additional keywords not in the PeopleSoft DDL model.
    CREATE UNIQUE  INDEX PS_PSTREENODE ON PSTREENODE (SETID,
    SETCNTRLVALUE,
    TREE_NAME,
    EFFDT,
    TREE_NODE_NUM,
    TREE_NODE,
    TREE_BRANCH)
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "PSINDEX"
    /
    ALTER INDEX PS_PSTREENODE NOPARALLEL LOGGING
    /
I have only checked this behaviour on PeopleTools 8.54, but use of DBMS_METADATA was introduced in PeopleTools 8.51, so this problem has probably existed since then.
    SELECT dbms_metadata.get_ddl('INDEX','PS_PSTREENODE')
    FROM dual

    DBMS_METADATA.GET_DDL('INDEX','PS_PSTREENODE')
    --------------------------------------------------------------------------------
    CREATE UNIQUE INDEX "SYSADM"."PS_PSTREENODE" ON "SYSADM"."PSTREENODE" ("SETID"
    , "SETCNTRLVALUE", "TREE_NAME", "EFFDT", "TREE_NODE_NUM", "TREE_NODE", "TREE_BRANCH")
    PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
    BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "PSINDEX"
However, if I drop the index and then regenerate the DDL script in Application Designer,
    DROP INDEX ps_pstreenode
    /
PeopleTools generates the CREATE INDEX statement with the compression specified in the PeopleTools tables.

    CREATE UNIQUE INDEX PS_PSTREENODE ON PSTREENODE (SETID,
    SETCNTRLVALUE,
    TREE_NAME,
    EFFDT,
    TREE_NODE_NUM,
    TREE_NODE,
    TREE_BRANCH) TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT 100000
    MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 0 COMPRESS 4 PARALLEL
    NOLOGGING
    /
    ALTER INDEX PS_PSTREENODE NOPARALLEL LOGGING
    /

Rather than go through the business of dropping the index so that you can generate the correct script with which to recreate it, I would suggest implementing the change manually by rebuilding the indexes.
    ALTER INDEX PSAPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
    ALTER INDEX PSBPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 3;
    ALTER INDEX PSCPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 1;
    ALTER INDEX PSDPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
    ALTER INDEX PSFPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 2;
    ALTER INDEX PSGPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 3;
    ALTER INDEX PS_PSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
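Afterwards, the settings can be confirmed from the data dictionary (a quick check; COMPRESSION and PREFIX_LENGTH are standard columns on USER_INDEXES):

SELECT index_name, compression, prefix_length
FROM   user_indexes
WHERE  table_name = 'PSTREENODE';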

Conclusion

This makes the business of implementing physical attributes through Application Designer much more complicated.  I would still recommend recording the settings in Application Designer, if only because it provides documentation, but it may then be easier to implement the changes manually.

©David Kurtz