David Kurtz

This blog contains things about PeopleSoft that DBAs might find interesting.
Or then again they might not!
Non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

Managing Cost-Based Optimizer Statistics for PeopleSoft

Fri, 2018-06-29 06:05
I gave this presentation at the UKOUG PeopleSoft Roadshow 2018.

PeopleSoft presents some special challenges when it comes to collecting and maintaining the object statistics used by the cost-based optimizer.

I have previously written and blogged on this subject.  This presentation focuses exclusively on the Oracle database and draws together the various concepts into a single consistent picture.  It makes clear recommendations for Oracle 12c that will help you work with the cost-based optimizer, rather than continually fight against it.

It looks at collecting statistics for permanent and temporary working storage tables and considers some other factors that can affect optimizer statistics.

This presentation also discusses PSCBO_STATS, which is going to be shipped with PeopleTools, and compares and contrasts it with GFCPSSTATS11.

Application Engine in Process Scheduler: PSAESRV Server Process -v- Standalone PSAE executable

Sat, 2018-04-14 06:05
Whether to use the Application Engine server process (PSAESRV) in the Process Scheduler Tuxedo domain or the standalone PSAE executable is a frequently discussed point amongst PeopleSoft administrators.  Over the years, I have written various things on the subject.  I am going to draw them together in this blog and restate Oracle’s now clear advice about when to use which option.

In PeopleTools 8.4, the Process Scheduler became a fully fledged Tuxedo domain.  The PSAESRV process was also introduced at this time.  It is a persistent process that handles both Application Engine and Optimization Engine requests.  Each server process creates persistent database connections.  A number of these server processes are started with the domain.   The PSAESRV process does not spawn like other Tuxedo server processes.  Instead, you must configure the number of server processes to match the maximum number of concurrent Application Engine process requests and concurrent Optimization Engine requests that the process scheduler can run.  The server was introduced to handle very short-lived Application Engine programs thus avoiding the overhead of instantiating a new process and new database sessions for each process request.  CRM typically uses Application Engine in this fashion, but generally, you do not see this in other PeopleSoft products.

Oracle has not always been clear about what it means by a short-lived process.  It has been suggested that if Application Engine processes typically take less than 10-30 seconds, or if you run more than 1000 Application Engine process requests per hour (note 651970.1), you should use PSAESRVs.
PeopleBooks advises you should use PSAESRV because it delivers improved system performance.  However, the PeopleTools Performance Guidelines Red Paper (Doc ID 747389.1) contradicts this somewhat.  Ultimately, if you have any doubts, you should test each way and determine whether one gives a clear advantage over the other.

Oracle Support Note "What will be the impact of disabling PSAESRV on the Process Scheduler (Doc ID 651970.1)" explains that if PSAESRV is disabled in the Tuxedo domain configuration, the Process Scheduler goes back to the legacy behaviour and spawns a stand-alone PSAE process for each Application Engine request.  “The Application Engine will take a bit longer to start, [the] time delay may be range from millisecond to seconds” depending on hardware and configuration.

The stand-alone process has several advantages.
  • At the end of the Application Engine program, it disconnects from the database and terminates, thus releasing both the operating system resources of the process and the database session.  The persistent PSAESRV process, by contrast, has been reported to accumulate allocated memory over time.
  • If you are using Oracle database Global Temporary Tables in an Application Engine program, then you should not use PSAESRV because the tables are created ON COMMIT PRESERVE ROWS, so their rows are only released when the database session terminates.
  • If you set any session parameters within an Application Engine program run via PSAESRV, or enable database trace, then these settings will carry forward from one Application Engine program to the next unless you reset the parameter at the end of the program or the start of the next.  This is not a concern with standalone PSAE processes.
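The Global Temporary Table point can be illustrated with a simple sketch (the table name is mine, not a PeopleSoft object): rows in a table created ON COMMIT PRESERVE ROWS survive a commit and persist until the session ends, which is exactly why a persistent PSAESRV session is a problem.

```sql
-- Illustrative sketch only.  Rows in a GTT created ON COMMIT PRESERVE
-- ROWS survive the commit and remain until the session ends, so a
-- persistent PSAESRV session would carry them from one Application
-- Engine request into the next.
CREATE GLOBAL TEMPORARY TABLE gtt_demo (n NUMBER)
ON COMMIT PRESERVE ROWS;

INSERT INTO gtt_demo VALUES (1);
COMMIT;
SELECT COUNT(*) FROM gtt_demo;  -- still returns 1 after the commit
```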
However, there is at least one case where you must use the server process:
  • If you are using Oracle Active Data Guard and wish to mark some Application Engine programs as read-only, then they must be run via the PSAESRV process.
Conclusion
  • The PeopleTools Performance Guidelines Red Paper (Doc ID 747389.1) sums it up very nicely: “PSAE is as good as PSAESRV for most practical purposes.  If you have an application engine job that runs longer than 10 seconds, PSAE is equivalent to PSAESRV.  PSAE has the added advantage of being recycled at the end of each application engine job, cleaning up any outstanding SQL cursors to the database that may have been left behind.  Because PSAE recycles after each use, PSAE does not have any possible memory leakage problem that may occupy the precious system memory.  In short, PSAE is a cleaner workhorse.”
  • I think it is reasonable to use PSAESRV in CRM.  For all other products, I recommend that PSAESRV should be disabled from all Process Schedulers.
    • If you do have some Application Engine processes that are both short-lived (i.e. less than 10 seconds) and run frequently, then consider creating additional Process Schedulers with PSAESRV processes that are dedicated to running only those processes.  You can then move these processes to a new Process Scheduler category that only runs on the new Process Schedulers.
  • PSAESRV is configured by default, so if you don’t want to use it, and mostly you won’t, then you have to remember to disable it.

Resetting High Water Marks on On-line Temporary Table Instances

Tue, 2018-03-20 11:45
PeopleSoft has always used regular database tables for temporary working storage in batch processes.  Up to PeopleTools 7.x, working storage tables were shared by all instances of a program.  That led to consistent-read contention when multiple processes concurrently used the same table, and to much higher high-water marks that increased the duration of full scans.
From PeopleTools 8, many copies of each temporary working storage table are created.  Application Engines that run on the Process Scheduler are allocated exclusive use of a particular copy of the table.  This avoids the inter-process contention.  They start by truncating each allocated table, which resets the high-water mark.
Some delivered processing uses batch programs that are run apparently synchronously from the PIA.  On-line edit and post in Financials General Ledger is a common example.  Up to PeopleTools 7, the application server would synchronously spawn a batch process and wait for it to complete.  From PeopleTools 8, the process is submitted to the Process Scheduler, and the PIA polls the Process Scheduler tables waiting for the process to complete.  However, Application Engine can also be run within the component processor.  In Financials General Ledger, this can be chosen by setting an installation configuration option.  The truly on-line method can perform better because you are no longer waiting for the Process Scheduler to pick up the process request.
A separate Application Engine process is not spawned; instead, the Application Engine program is executed by the PSAPPSRV application server process.  One of the limitations is that the Application Engine program cannot commit.  Committing after steps or sections is suppressed, and the %TruncateTable macro generates a delete statement instead.  Therefore, on-line temporary table instances are never truncated by any process, and their high-water marks can be raised by processes that handle larger volumes of data.  This can impact subsequent processes that handle smaller data volumes but still have to full-scan working storage tables up to their high-water marks.
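The effect on the high-water mark can be seen with a simple sketch (the table name is illustrative, not a PeopleSoft object): after a DELETE the segment keeps all its blocks and a full scan still reads up to the old high-water mark, whereas TRUNCATE resets it.

```sql
-- Illustrative sketch: DELETE does not reset the high-water mark,
-- but TRUNCATE does.
CREATE TABLE hwm_demo AS SELECT * FROM all_objects;

DELETE FROM hwm_demo;
COMMIT;
-- The segment still contains all its blocks; a full scan reads them all.
SELECT blocks FROM user_segments WHERE segment_name = 'HWM_DEMO';

TRUNCATE TABLE hwm_demo;
-- The high-water mark is reset and the segment shrinks back.
SELECT blocks FROM user_segments WHERE segment_name = 'HWM_DEMO';
```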
Truncating On-line Temporary Table Instances
The answer is to implement a periodic process that truncates working storage tables, but only when the table is not currently being used by a process.  Every on-line Application Engine program is allocated a temporary table instance number and locks the corresponding row on the table PS_AEONLINEINST.  If it is allocated instance 1, it locks the row where CURTEMPINSTANCE is 1 and uses instance 1 of each temporary record that it needs.
Therefore the proposed truncate process must also lock the row on PS_AEONLINEINST that corresponds to each table that is to be truncated.  The truncate must be done in an autonomous transaction so that the implicit commit does not release that lock.  The lock can be released after the truncate completes.  Thus, the truncate process waits for any online process to complete before truncating a table with the same instance number, and no process can start while the truncate process is holding the lock.  However, each truncate will be very quick, and so each lock will only be held briefly, and it will have only a minimal effect on any online process that may be running at the time.  
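That locking logic can be sketched as follows (names other than PS_AEONLINEINST are illustrative; this is not the code of the actual package).  The row lock serialises with any online process using the same instance number, and the autonomous transaction confines the implicit DDL commit so the lock is held across the truncate.

```sql
-- Sketch only: table name PS_EXAMPLE_TAO and procedure name are
-- illustrative, not those of the real package.
DECLARE
  l_inst ps_aeonlineinst.curtempinstance%TYPE;

  -- Autonomous transaction so the implicit commit issued by the
  -- TRUNCATE DDL does not release the lock held by the main transaction.
  PROCEDURE trunc_table(p_table VARCHAR2) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
  BEGIN
    EXECUTE IMMEDIATE 'TRUNCATE TABLE '||p_table;
  END trunc_table;
BEGIN
  FOR i IN (SELECT curtempinstance FROM ps_aeonlineinst) LOOP
    -- Wait for any online process using this instance, then block new
    -- ones for the (brief) duration of the truncate.
    SELECT curtempinstance INTO l_inst
    FROM   ps_aeonlineinst
    WHERE  curtempinstance = i.curtempinstance
    FOR UPDATE;

    trunc_table('PS_EXAMPLE_TAO'||l_inst);

    COMMIT;  -- release the lock on PS_AEONLINEINST
  END LOOP;
END;
/
```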

I have written a PL/SQL packaged procedure to perform this process for all temporary records.  It is available on Github as part of my collection of miscellaneous PeopleSoft scripts.
Package Usage
Usually, the package will be run without any parameters.  The default behaviour will be to truncate tables with more than a single extent.  Information on what the package does is emitted to the server output.
Set serveroutput on 
EXECUTE xx_onlineinsthwmreset.main;
The package can be run in test mode when it will list the commands without executing them.  Thus you can see what it will do without actually doing it.
EXECUTE xx_onlineinsthwmreset.main(p_testmode=>TRUE);
The package can optionally deallocate any physical storage. Storage will be reallocated next time the table is used.
EXECUTE xx_onlineinsthwmreset.main(p_drop_storage=>TRUE, p_min_extents=>0);
The package can be run for certain tables that match a particular pattern.
EXECUTE xx_onlineinsthwmreset.main(p_recname_like=>'JP%');
I recommend that the package is run daily.  It can be run safely while users are doing on-line edit/post processing, but it would be sensible to choose a quiet time.
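One way to run it daily is a DBMS_SCHEDULER job; the job name and schedule below are illustrative, so adjust them to suit your own quiet period.

```sql
-- Illustrative sketch: schedule the package to run daily at 02:00.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'ONLINE_HWM_RESET_JOB',  -- name is illustrative
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN xx_onlineinsthwmreset.main; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2;BYMINUTE=0',
    enabled         => TRUE);
END;
/
```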

Fewer Platform Flags on Indexes from PeopleTools 8.55

Wed, 2018-03-07 12:28
It has always been possible in Application Designer to specify upon which database platforms each index should be built.  This is really a feature used by PeopleSoft development rather than by customers, to deliver indexes that are more appropriate for a particular platform due to differences in the optimizers.
Over the years, the number of supported PeopleSoft platforms has declined.  In PeopleTools 8.45, it went down from 9 to 6 and in PeopleTools 8.55 it has gone down to just 4, but there are still 9 columns on PSINDEXDEFN that correspond to the original 9 supported platforms.
I explained in a previous blog that you can have all or none of the platform flags set to the same value, but the platform radio button on the index properties dialogue box is still set to 'Some' because one or more of the platform flag columns for the unsupported platforms is set differently.  Of course, this is a purely cosmetic problem, but one that can cause confusion in Application Designer.
(Screenshots in the original post show the index properties dialogue in PeopleTools 8.45 and PeopleTools 8.55.)
I fix this by updating PeopleTools tables as follows.  The first query reports on the indexes where the supported platform flags all have the same value and one of the unsupported platform flags is different.
column RECNAME format a15
column INDEXID format a3 heading 'Idx|ID'
column DDLCOUNT format 999 heading 'DDL|Cnt'
column CUSTKEYORDER format 9999 heading 'Cust|Key|Order'
column KEYCOUNT format 999 heading 'Key|Cnt'
column PLATFORM_SBS format 999 heading 'SBS'
column PLATFORM_DB2 format 999 heading 'DB2'
column PLATFORM_ORA format 999 heading 'ORA'
column PLATFORM_INF format 999 heading 'INF'
column PLATFORM_DBX format 999 heading 'DBx'
column PLATFORM_ALB format 999 heading 'ALB'
column PLATFORM_SYB format 999 heading 'SYB'
column PLATFORM_MSS format 999 heading 'MSS'
column PLATFORM_DB4 format 999 heading 'DB4'
column ACTIVEFLAG Format 999 heading 'Active'
column CLUSTERFLAG format 999 heading 'Clst|Flg'
column UNIQUEFLAG format 999 heading 'Uniq|Flg'
column INDEXTYPE format 999 heading 'Idx|Type'
column IDXCOMMENTS format a60
spool platformfix855
SELECT *
FROM PSINDEXDEFN
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_MSS
AND (PLATFORM_ORA!=PLATFORM_SBS
  OR PLATFORM_ORA!=PLATFORM_ALB
  OR PLATFORM_ORA!=PLATFORM_SYB
  OR PLATFORM_ORA!=PLATFORM_INF
  OR PLATFORM_ORA!=PLATFORM_DB4)
;
These are the indexes that have inconsistent platform flags.  In this case PS_PSPMTRANSHIST is to be disabled on DB2/AS400.  You can't update the flag via Application Designer, but you could set the radio button to ALL.
                                           Cust
Idx Idx Uniq Clst Key Key DDL
RECNAME ID Type Flg Flg Active Order Cnt Cnt SBS DB2 ORA INF DBx ALB SYB MSS DB4
--------------- --- ---- ---- ---- ------ ----- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
IDXCOMMENTS
------------------------------------------------------------
PSPMTRANSHIST _ 1 1 1 1 0 4 1 1 1 1 1 1 1 1 1 0
It could be a tedious process to do this for a lot of indexes, so the following SQL commands correct all indexes.  They set the platform flags for the unsupported platforms to the value of the supported platforms if those are all the same.  The version number on the record definition is updated so that Application Designer refreshes the object.
UPDATE PSVERSION
SET VERSION = VERSION + 1
WHERE OBJECTTYPENAME IN('SYS','RDM');

UPDATE PSLOCK
SET VERSION = VERSION + 1
WHERE OBJECTTYPENAME IN('SYS','RDM');

UPDATE PSRECDEFN
SET VERSION = (
SELECT VERSION
FROM PSVERSION
WHERE OBJECTTYPENAME = 'RDM')
WHERE RECNAME IN (
SELECT RECNAME
FROM PSINDEXDEFN
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND ( PLATFORM_ORA!=PLATFORM_SBS
OR PLATFORM_ORA!=PLATFORM_ALB
OR PLATFORM_ORA!=PLATFORM_DB4)
);

UPDATE psindexdefn
SET PLATFORM_DB4=PLATFORM_ORA
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND PLATFORM_SYB=PLATFORM_MSS
AND PLATFORM_ORA!=PLATFORM_DB4;

UPDATE psindexdefn
SET PLATFORM_ALB=PLATFORM_ORA
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND PLATFORM_SYB=PLATFORM_MSS
AND PLATFORM_ORA!=PLATFORM_ALB;

UPDATE psindexdefn
SET PLATFORM_SBS=PLATFORM_ORA
WHERE PLATFORM_DB2=PLATFORM_DBX
AND PLATFORM_DBX=PLATFORM_INF
AND PLATFORM_INF=PLATFORM_ORA
AND PLATFORM_ORA=PLATFORM_SYB
AND PLATFORM_SYB=PLATFORM_MSS
AND PLATFORM_ORA!=PLATFORM_SBS;
The platform flags now say 'ALL'. Not a tremendous change, but at least I can immediately see that these indexes do build on all platforms without having to open each one.

The new platformfix855.sql script is available in my collection of miscellaneous scripts on Github.

Setting Oracle Session Parameters for Specific Process Scheduler Processes

Fri, 2018-03-02 06:01
This note describes a mechanism for setting initialisation parameters for specific processes run on the Process Scheduler.  I will demonstrate it in relation to nVision, but it has general application in PeopleSoft.
A table is used to hold metadata that describes which settings are applied to which processes.  A trigger on the process scheduler request table PSPRCSRQST reads that data and alters the session settings accordingly.  This approach is easier to adjust and understand than static PL/SQL code in a trigger.
NB: The PSAESRV server process is used to run Application Engine programs in the Process Scheduler by default.  It should only be used for very short-lived programs and should usually be deconfigured.  It maintains a persistent connection to the database.  Therefore, session parameters set for one Application Engine program will carry forward into subsequent programs run on the same server process, with the potential for unintended and unpredictable results.  All other processes establish their own database session that ends when the process terminates.  This trigger mechanism can still be used to set parameters for some processes that run on PSAESRV, but the default parameter value should then be set for all other Application Engine processes.
Metadata
The table that holds the metadata should be defined in Application Designer.
CREATE TABLE PS_PRCS_SESS_PARM (PRCSTYPE VARCHAR2(30) NOT NULL,
PRCSNAME VARCHAR2(12) NOT NULL,
OPRID VARCHAR2(30) NOT NULL,
RUNCNTLID VARCHAR2(30) NOT NULL,
PARAM_NAME VARCHAR2(50) NOT NULL,
PARMVALUE VARCHAR2(128) NOT NULL) TABLESPACE PTTBL STORAGE (INITIAL
40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10
PCTUSED 80
/
CREATE UNIQUE INDEX PS_PRCS_SESS_PARM ON PS_PRCS_SESS_PARM (PRCSTYPE,
PRCSNAME,
OPRID,
RUNCNTLID,
PARAM_NAME) TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT 100000
MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10 PARALLEL NOLOGGING
/
ALTER INDEX PS_PRCS_SESS_PARM NOPARALLEL LOGGING
/
I will demonstrate this mechanism for nVision. The metadata is simply inserted into the table by script.
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_policy','auto');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_limit','4');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_level','150');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_min_time_threshold','1');
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', '_optimizer_skip_scan_enabled','FALSE');
Here we have 5 session parameters that will apply to all nVision reportbooks, but which I don't want to apply to the rest of the system.
PRCSTYPE             PRCSNAME     OPRID      RUNCNTLID       PARAM_NAME                     PARMVALUE
-------------------- ------------ ---------- --------------- ------------------------------ --------------------
nVision-ReportBook   RPTBOOK                                 parallel_degree_policy         auto
nVision-ReportBook   RPTBOOK                                 parallel_degree_limit          4
nVision-ReportBook   RPTBOOK                                 parallel_degree_level          150
nVision-ReportBook   RPTBOOK                                 parallel_min_time_threshold    1
nVision-ReportBook   RPTBOOK                                 _optimizer_skip_scan_enabled   FALSE
The objective is to use limited parallelism only in the nVision reporting, without decorating the underlying ledger tables:
  • parallel_degree_policy=auto enables the new 12c automatic degree of parallelism calculation and statement queuing.
  • parallel_min_time_threshold is set to 1 second.  The default is 10.  Statements whose runtime is estimated to be greater than or equal to this value will be considered for automatic degree of parallelism.
  • parallel_degree_limit=4 restricts the automatic degree of parallelism to 4, to prevent any one statement using excessive parallelism.
  • parallel_degree_level=150 scales up the automatic degree of parallelism calculation, but within the parallel_degree_limit.  See Kerry Osborne's blog 12c – parallel_degree_level (control for auto DOP).
  • _optimizer_skip_scan_enabled=FALSE disables index skip scan to promote the use of smart full scan and Bloom filtering.  It is recommended for engineered systems in Advice for the PeopleSoft DBA.  Skip scan can prevent the optimizer choosing a smart full scan, so it makes sense to limit the setting to just nVision.
I can also specify a parameter that will only be set when the reportbook is run by a particular operator with a particular run control.
INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue) VALUES ('nVision-ReportBook','RPTBOOK','NVOPR','NVSRBK_2', 'parallel_degree_level','200');
The specific setting for one particular operator ID and run control takes precedence over the generic setting for all reportbooks.  In this case, I will scale up the degree of parallelism further for one particular reportbook.
PRCSTYPE             PRCSNAME     OPRID      RUNCNTLID       PARAM_NAME                     PARMVALUE
-------------------- ------------ ---------- --------------- ------------------------------ --------------------

nVision-ReportBook   RPTBOOK                                 parallel_degree_level          150
nVision-ReportBook   RPTBOOK      NVOPR      NVSRBK_2        parallel_degree_level          200
Trigger
When a process starts, the first thing it does is update its own status to 7 to indicate that it is processing.  This is another example of a trigger created on that transition that injects behaviour at the start of a PeopleSoft process.  This trigger reads the metadata and applies the settings with an ALTER SESSION command.  The process type, name, operator ID and run control attributes must exactly match the process request, but a blank attribute is treated as a wildcard.  Underscore parameters must be delimited in double quotes.
CREATE OR REPLACE TRIGGER sysadm.set_prcs_sess_parm
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = 7
  AND old.runstatus != 7
  AND new.prcstype != 'PSJob')
DECLARE
  l_cmd   VARCHAR2(100 CHAR);
  l_delim VARCHAR2(1 CHAR);
BEGIN
  FOR i IN (
    WITH x AS (
      SELECT p.*
      ,      row_number() OVER (PARTITION BY param_name
               ORDER BY NULLIF(prcstype, ' ') NULLS LAST,
                        NULLIF(prcsname, ' ') NULLS LAST,
                        NULLIF(oprid, ' ') NULLS LAST,
                        NULLIF(runcntlid, ' ') NULLS LAST
             ) priority
      FROM   PS_PRCS_SESS_PARM p
      WHERE  (p.prcstype  = :new.prcstype  OR p.prcstype  = ' ')
      AND    (p.prcsname  = :new.prcsname  OR p.prcsname  = ' ')
      AND    (p.oprid     = :new.oprid     OR p.oprid     = ' ')
      AND    (p.runcntlid = :new.runcntlid OR p.runcntlid = ' ')
    )
    SELECT * FROM x
    WHERE  priority = 1
  ) LOOP
    IF SUBSTR(i.param_name,1,1) = '_' THEN
      l_delim := '"';
    ELSE
      l_delim := '';
    END IF;

    IF NULLIF(i.parmvalue,' ') IS NOT NULL THEN
      dbms_output.put_line('Rule:'||NVL(NULLIF(i.prcstype,' '),'*')
        ||'.'||NVL(NULLIF(i.prcsname,' '),'*')
        ||':'||NVL(NULLIF(i.oprid,' '),'*')
        ||'.'||NVL(NULLIF(i.runcntlid,' '),'*')
        ||':'||i.param_name||'='||i.parmvalue);

      l_cmd := 'ALTER SESSION SET '||l_delim||i.param_name||l_delim||'='||i.parmvalue;
      dbms_output.put_line('PI='||:new.prcsinstance||':'||:new.prcstype||'.'||:new.prcsname||':'
        ||:new.oprid||'.'||:new.runcntlid||':'||l_cmd);
      EXECUTE IMMEDIATE l_cmd;
    END IF;
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(sqlerrm);
END;
/
The trigger script set_prcs_sess_parm.sql can be downloaded from my miscellaneous PeopleSoft scripts repository on Github.
Testing
The trigger can be tested by updating the process scheduler request table in SQL*Plus, but be careful to roll back the update afterwards rather than committing.  The trigger writes debug information to the server output, showing the rule being used and the ALTER SESSION command generated; this can be seen in SQL*Plus.  However, this output is not captured in any PeopleSoft log when the trigger is fired by a PeopleSoft process.
set serveroutput on 
update psprcsrqst set runstatus = 7 where runstatus != 7
and prcsname = 'RPTBOOK' and runcntlid = 'NVSRBK_2' and oprid = 'NVOPR' and rownum = 1;
rollback;

Rule:nVision-ReportBook.RPTBOOK:*.*:_optimizer_skip_scan_enabled=FALSE
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET "_optimizer_skip_scan_enabled"=FALSE
Rule:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:parallel_degree_level=200
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_level=200
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_limit=4
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_limit=4
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_policy=auto
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_policy=auto
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_min_time_threshold=1
PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_min_time_threshold=1
In the above example, the specific rule for NVSRBK_2 was applied setting PARALLEL_DEGREE_LEVEL to 200, whereas in the next example the generic setting of 150 is applied to NVSRBK_1.
update psprcsrqst set runstatus = 7 where runstatus != 7
and prcsname = 'RPTBOOK' and runcntlid = 'NVSRBK_1' and oprid = 'NVOPR' and rownum = 1;
rollback;

Rule:nVision-ReportBook.RPTBOOK:*.*:_optimizer_skip_scan_enabled=FALSE
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET "_optimizer_skip_scan_enabled"=FALSE
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_level=150
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_level=150
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_limit=4
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_limit=4
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_policy=auto
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_policy=auto
Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_min_time_threshold=1
PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_min_time_threshold=1

Invalid Views in PeopleSoft and the Oracle Database

Wed, 2018-01-31 14:33
I was listening to the section on Invalid Views in PSADMIN Podcast #117 (@19:00).  Essentially, when you drop and recreate a view that is referenced by a second view, the status of the second view in the database goes invalid.  This is not a huge problem because as soon as you query the second view it is compiled.  However, you would like to know whether a change to a view prevents any dependent views from compiling, although you would expect to have teased these errors out before migration to production.
The PeopleSoft solution to this is to include all the dependent views in the Application Designer package.  However, as pointed out in the podcast, you are now releasing code, possibly unintentionally releasing code changes, and certainly updating last-change dates on record definitions, when really you just need to recompile the database objects.  PeopleSoft does this because it is a platform-generic solution, but really it is using Application Designer to solve a database management issue.
A similar problem also occurs in the Oracle database with dependent PL/SQL procedures and packages, where you sometimes get referential loops.  Oracle provides a procedure, DBMS_UTILITY.COMPILE_SCHEMA, that recompiles all invalid objects in a schema and reports any errors in the ALL_ERRORS view.  I think this is a much safer option.

Here is a very simple (non-PeopleSoft) example:
drop table t purge;
drop view a;
drop view b;

create table t (a number);
insert into t values (1);
create view a as select a from t;
create view b as select a from a;

column object_name format a12
select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            VALID
VIEW                    A            VALID
Dropping and recreating view A renders view B invalid.
drop view a;
create view a as select a from t;

select object_type, object_name, status
from user_objects
where object_name IN('T','A','B')
/

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            INVALID
VIEW                    A            VALID

select * from b;
select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/
Just querying B has made it valid again.
OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            VALID
VIEW                    A            VALID
Let's make B invalid again by rebuilding A, but this time I will change the name of the column in view A from A to T so that view B cannot compile without an error.
drop view a;
create view a (t) as select a from t;

select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/
EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');
select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            INVALID
VIEW                    A            VALID
And I can query the errors from USER_ERRORS:
NAME
---------------------------------------------------
TYPE SEQUENCE LINE POSITION
------------ ---------- ---------- ----------
TEXT
---------------------------------------------------
ATTRIBUTE MESSAGE_NUMBER
--------- --------------
B
VIEW                  1          0          0
ORA-00904: "A": invalid identifier
ERROR                  0
N.B.: a view recreated with CREATE OR REPLACE VIEW is not left invalid unless there is an error.  Unfortunately, Application Designer always drops and recreates views.

nVision Performance Tuning 12: Hinting nVision with SQL Profiles

Wed, 2017-12-20 10:00
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.  It is a PeopleSoft specific version of a posting on my Oracle blog.

As I explained earlier in this series, it is not possible to add hints to nVision.  The dynamic nature of the nVision SQL means that it is not possible to use SQL Patches.  nVision SQL statements contain literal values and never use bind variables.  When dynamic selectors are used, the SELECTOR_NUM will be different for every execution.  A SQL_ID found in one report will not be seen again in another report.  Even static selector numbers will change after the tree is updated or when a new tree is created.
It is possible to use SQL Profiles to introduce hints because they can optionally match the force matching signature of a SQL statement.  SQL statements that differ only in the literal values they contain will have different SQL_IDs but the same force matching signature.  Although you will still have a lot of force matching signatures, you should find that you have far fewer of them than SQL_IDs.  Picking out the signatures that account for the most elapsed execution time and creating profiles for them is manageable.
Note: SQL Profiles require the Tuning Pack to be licenced.
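As a sketch of the mechanism (the SQL_ID, hint and profile name here are illustrative, not a recommendation for any particular statement), a profile can be created from a statement in the shared pool with DBMS_SQLTUNE.IMPORT_SQL_PROFILE, setting FORCE_MATCH so that it also matches statements that differ only in their literal values:

```sql
-- Illustrative sketch: create a force-matching SQL Profile for a
-- statement currently in the shared pool.
DECLARE
  l_sql_text CLOB;
BEGIN
  SELECT sql_fulltext INTO l_sql_text
  FROM   v$sqlarea
  WHERE  sql_id = 'dzan1fcwmmaz9';  -- hypothetical SQL_ID

  dbms_sqltune.import_sql_profile(
    sql_text    => l_sql_text,
    profile     => sys.sqlprof_attr(q'[OPT_PARAM('_optimizer_skip_scan_enabled' 'false')]'),
    name        => 'NVISION_EXAMPLE_PROFILE',
    force_match => TRUE);  -- match on force matching signature
END;
/
```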
As far as is possible, good nVision performance should be achieved by setting appropriate tree performance options at tree level.  These are global settings.  You may find that a particular setting on a particular tree is not optimal for all reports.  You may then choose to override the tree-level setting in specific layouts.  You may also find that you still need hints to control execution plans.
In particular, parallel query can be an effective tactic in nVision performance tuning.  However, you should not simply put a degree of parallelism on PS_LEDGER or PS_LEDGER_BUDG, because that would invoke parallelism in many other processes.  I have found that even putting a degree of parallelism on a summary ledger table can easily result in too many concurrent parallel queries.  On OLTP systems, such as PeopleSoft, I recommend that parallelism should be used sparingly and in a highly controlled and targeted fashion.
Example
Let's take the following nVision query as an example.
SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_XX_SUM_CONSOL_VW A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_USMGT'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149'
OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999'
OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999'
OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999'
OR A.DEPTID='B9150' OR A.DEPTID=' ')
AND L2.SELECTOR_NUM=10228
AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
AND L3.SELECTOR_NUM=10231
AND A.ACCOUNT=L3.RANGE_FROM_10
AND A.CHARTFIELD1='0012345'
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
/
We can tell from the equality join conditions (for example, A.ACCOUNT=L3.RANGE_FROM_10) that the two selectors joined in this query are dynamic selectors.
A third selector on DEPTID has been suppressed with the 'use literal values' performance option.  The number of DEPTID predicates in the statement will depend on the tree and the node selected for the report.  Note that if these change then the statement will not force match the same profile.  SQL Profiles might suddenly cease to work due to a tree or selection criteria change.
This is the plan I get initially and without a profile. It doesn't perform well.
Plan hash value: 808840077
-----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 10408 (100)| | | |
| 1 | HASH GROUP BY | | 517 | 50666 | 10408 (1)| 00:00:01 | | |
| 2 | HASH JOIN | | 517 | 50666 | 10407 (1)| 00:00:01 | | |
| 3 | PARTITION RANGE SINGLE | | 731 | 13158 | 3 (0)| 00:00:01 | 10228 | 10228 |
| 4 | INDEX FAST FULL SCAN | PSAPSTREESELECT05 | 731 | 13158 | 3 (0)| 00:00:01 | 10228 | 10228 |
| 5 | HASH JOIN | | 518 | 41440 | 10404 (1)| 00:00:01 | | |
| 6 | PARTITION RANGE SINGLE | | 249 | 5727 | 2 (0)| 00:00:01 | 10231 | 10231 |
| 7 | INDEX FAST FULL SCAN | PSAPSTREESELECT10 | 249 | 5727 | 2 (0)| 00:00:01 | 10231 | 10231 |
| 8 | PARTITION RANGE ITERATOR | | 7785 | 433K| 10402 (1)| 00:00:01 | 28 | 40 |
| 9 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_X_LEDGER_ACCTS | 7785 | 433K| 10402 (1)| 00:00:01 | 28 | 40 |
| 10 | SORT CLUSTER BY ROWID BATCHED | | 5373 | | 5177 (1)| 00:00:01 | | |
| 11 | INDEX SKIP SCAN | PS_X_LEDGER_ACCTS | 5373 | | 5177 (1)| 00:00:01 | 28 | 40 |
-----------------------------------------------------------------------------------------------------------------------------------
These are the hints I want to introduce (on Oracle 12c).
SELECT /*+OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_min_time_threshold',2) 
OPT_PARAM('parallel_degree_limit',4) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)*/…
  • Use automatic parallel degree, statement queuing and in-memory parallel execution.
  • Invoke parallelism if the statement is estimated to run for at least 2 seconds.
  • However, I will also limit the automatic parallelism to a degree of 4.
  • Force materialized view rewrite.
  • Use a Bloom filter when joining to the materialized view.
I have created a data-driven framework to create the profiles.  A working storage table holds details of each force matching signature for which I want to create a profile.
CREATE TABLE dmk_fms_profiles
(force_matching_signature NUMBER NOT NULL
,sql_id VARCHAR2(13)
,plan_hash_value NUMBER
,module VARCHAR2(64)
,action VARCHAR2(64)
,report_id VARCHAR2(32) /*Application Specific*/
,tree_list CLOB /*Application Specific*/
,sql_profile_name VARCHAR2(30)
,parallel_min_time_threshold NUMBER
,parallel_degree_limit NUMBER
,other_hints CLOB
,delete_profile VARCHAR2(1)
,sql_text CLOB
,CONSTRAINT dmk_fms_profiles_pk PRIMARY KEY (force_matching_signature)
,CONSTRAINT dmk_fms_profiles_u1 UNIQUE (sql_id)
,CONSTRAINT dmk_fms_profiles_u2 UNIQUE (sql_profile_name)
)
/
Using conditional parallelism with PARALLEL_MIN_TIME_THRESHOLD, limited with PARALLEL_DEGREE_LIMIT, is an effective tactic with nVision, so I have specified columns in the metadata table for those hints; any other hints are injected via a string.  I identified the problematic SQL by analysis with ASH, and hence I also obtained the FORCE_MATCHING_SIGNATURE.  The metadata is keyed by FORCE_MATCHING_SIGNATURE.  I have specified a meaningful name for the SQL Profile.
INSERT INTO dmk_fms_profiles (force_matching_signature, parallel_min_time_threshold, parallel_degree_limit, other_hints, sql_profile_name) 
VALUES (16625752171077499412, 1, 4, 'REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)', 'NVS_GBGL123I_BU_CONSOL_ACCOUNT');
COMMIT;
Profiles are created using the text of a SQL rather than the SQL_ID or FORCE_MATCHING_SIGNATURE directly. Therefore the SQL_TEXT must be extracted from the AWR, so this method also requires that the SQL statement has been captured by an AWR snapshot.
UPDATE dmk_fms_profiles a
SET (module, action, sql_id, plan_hash_value, sql_text)
= (SELECT s.module, s.action, s.sql_id, s.plan_hash_value, t.sql_text
FROM dba_hist_sqlstat s
, dba_hist_sqltext t
WHERE t.dbid = s.dbid
AND t.sql_id = s.sql_id
AND s.force_matching_signature = a.force_matching_signature
AND s.snap_id = (
SELECT MAX(s1.snap_id)
FROM dba_hist_sqlstat s1
WHERE s1.force_matching_signature = a.force_matching_signature
AND s1.module = 'RPTBOOK' /*Application Specific*/
AND s1.action LIKE 'PI=%:%:%' /*Application Specific*/)
AND s.module = 'RPTBOOK' /*Application Specific*/
AND s.action LIKE 'PI=%:%:%' /*Application Specific*/
AND ROWNUM = 1)
WHERE sql_id IS NULL
/

MERGE INTO dmk_fms_profiles u
USING (
SELECT a.sql_id, a.force_matching_signature, p.name
FROM dmk_fms_profiles a
, dba_sql_profiles p
WHERE p.signature = a.force_matching_signature
) s
ON (s.force_matching_signature = u.force_matching_signature)
WHEN MATCHED THEN UPDATE
SET u.sql_profile_name = s.name
/
Columns REPORT_ID and TREE_LIST contain application specific information extracted from the application instrumentation and tree selector logging.
/*Application Specific - extract report ID from ACTION*/
UPDATE dmk_fms_profiles a
SET report_id = substr(regexp_substr(a.action,':([A-Za-z0-9_-])+',1,1),2)
WHERE report_id IS NULL
AND action IS NOT NULL
/
/*Application Specific - extract financial analysis tree from application logging*/
UPDATE dmk_fms_profiles a
SET tree_list =
(SELECT LISTAGG(tree_name,', ') WITHIN GROUP (ORDER BY tree_name)
FROM (select l.tree_name, MAX(l.length) length
FROM dba_hist_sql_plan p
, ps_nvs_treeslctlog l
WHERE p.plan_hash_value = a.plan_hash_value
AND p.sql_id = a.sql_id
AND p.object_name like 'PS%TREESELECT__'
AND p.partition_start = p.partition_stop
AND p.partition_start = l.selector_num
AND l.tree_name != ' '
GROUP BY l.tree_name)
)
WHERE tree_list IS NULL
/

Now I can produce a simple report of the metadata in order to see what profiles should be created.
column sql_text format a110 word_wrapped
column module format a8
column report_id heading 'nVision|Report ID'
column tree_list format a20 word_wrapped
column plan_hash_value heading 'SQL Plan|Hash Value' format 9999999999
column parallel_min_time_threshold heading 'Parallel|Min Time|Threshold' format 999
column parallel_degree_limit heading 'Parallel|Degree|Limit' format 999
set long 500
SELECT * FROM dmk_fms_profiles
/

SQL Plan
FORCE_MATCHING_SIGNATURE SQL_ID Hash Value MODULE ACTION
------------------------ ------------- ----------- -------- ----------------------------------------------------------------
Parallel Parallel
nVision Min Time Degree
Report ID TREE_LIST SQL_PROFILE_NAME Threshold Limit D
-------------------------------- -------------------- ------------------------------ --------- -------- -
OTHER_HINTS
--------------------------------------------------------------------------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------
12803175998948432502 5pzxhha3392cs 988048519 RPTBOOK PI=3186222:USGL233I:10008
USGL233I BU_GAAP_CONSOL, NVS_GBGL123I_BU_CONSOL_ACCOUNT 1 4
GAAP_ACCOUNT
REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)
SELECT L2.TREE_NODE_NUM,A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_GBMGT' AND A.FISCAL_YEAR=2017 AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 AND (A.DEPTID BETWEEN
'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149' OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID
BETWEEN 'B9165' AND 'B9999' OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999' OR
A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DE
Next, this PL/SQL block will create or recreate SQL profiles from the metadata. The various hints can be concatenated into a single string and passed as a parameter to SQLPROF_ATTR. The SQL text is passed as a parameter when the profile is created.
set serveroutput on
DECLARE
l_signature NUMBER;
h SYS.SQLPROF_ATTR;
e_no_sql_profile EXCEPTION;
PRAGMA EXCEPTION_INIT(e_no_sql_profile, -13833);
l_description CLOB;
BEGIN

FOR i IN (
SELECT f.*, s.name
FROM dmk_fms_profiles f
LEFT OUTER JOIN dba_sql_profiles s
ON f.force_matching_signature = s.signature
) LOOP

BEGIN
IF i.name IS NOT NULL AND i.delete_profile = 'Y' THEN
dbms_sqltune.drop_sql_profile(name => i.name);
END IF;
EXCEPTION WHEN e_no_sql_profile THEN NULL;
END;

IF i.delete_profile = 'Y' THEN
NULL;
ELSIF i.sql_text IS NOT NULL THEN
h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_degree_policy'',''AUTO'') ' END||
CASE WHEN i.parallel_degree_limit >=0 THEN 'OPT_PARAM(''parallel_degree_limit'',' ||i.parallel_degree_limit ||') ' END||
CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_min_time_threshold'','||i.parallel_min_time_threshold||') ' END||
i.other_hints,
q'[END_OUTLINE_DATA]');

l_signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(i.sql_text);
l_description := 'coe nVision '||i.report_id||' '||i.tree_list||' '||i.force_matching_signature||'='||l_signature;
dbms_output.put_line(i.sql_profile_name||' '||l_description);

DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text => i.sql_text,
profile => h,
name => i.sql_profile_name,
description => l_description,
category => 'DEFAULT',
validate => TRUE,
replace => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );

END IF;
END LOOP;
END;
/
I can verify that the profile has been created, and the hints that it contains, thus:
SELECT x.profile_name,
       xmltype(x.comp_data) AS xmlval
FROM   dmk_fms_profiles p
,      dba_sql_profiles s
,      dbmshsxp_sql_profile_attr x
WHERE  x.profile_name = p.sql_profile_name
AND    s.name = p.sql_profile_name
AND    s.status = 'ENABLED'
ORDER BY 1
/

PROFILE_NAME
------------------------------
XMLVAL
------------------------------------------------------------------------------------------------
NVS_GBGL123I_BU_CONSOL_ACCOUNT
<![CDATA[BEGIN_OUTLINE_DATA]]>
<![CDATA[OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_degree_limit',4) OPT_PARAM('parallel_min_time_threshold',1) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)]]>
<![CDATA[END_OUTLINE_DATA]]>
And now when the application runs, I get the plan that I wanted.
  • The query runs in parallel.
  • The SQL is rewritten to use the materialized view.
  • There are no indexes on the materialized view, so it must be scanned in full.
  • A Bloom filter is generated from PSTREESELECT10 and applied to the materialized view.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 2219 (100)| | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10004 | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | P->S | QC (RAND) |
| 3 | HASH GROUP BY | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | PCWP | |
| 4 | PX RECEIVE | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,04 | PCWP | |
| 5 | PX SEND HASH | :TQ10003 | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,03 | P->P | HASH |
| 6 | HASH GROUP BY | | 111 | 9879 | 2219 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 7 | HASH JOIN | | 536 | 47704 | 2218 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 8 | PX RECEIVE | | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,03 | PCWP | |
| 9 | PX SEND HYBRID HASH | :TQ10002 | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,02 | P->P | HYBRID HASH|
| 10 | STATISTICS COLLECTOR | | | | | | | | Q1,02 | PCWC | |
| 11 | HASH JOIN | | 536 | 38056 | 2215 (6)| 00:00:01 | | | Q1,02 | PCWP | |
| 12 | BUFFER SORT | | | | | | | | Q1,02 | PCWC | |
| 13 | JOIN FILTER CREATE | :BF0000 | 236 | 3776 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 14 | PX RECEIVE | | 236 | 3776 | 2 (0)| 00:00:01 | | | Q1,02 | PCWP | |
| 15 | PX SEND BROADCAST | :TQ10000 | 236 | 3776 | 2 (0)| 00:00:01 | | | | S->P | BROADCAST |
| 16 | PARTITION RANGE SINGLE | | 236 | 3776 | 2 (0)| 00:00:01 | 36774 | 36774 | | | |
| 17 | INDEX FAST FULL SCAN | PSAPSTREESELECT10 | 236 | 3776 | 2 (0)| 00:00:01 | 36774 | 36774 | | | |
| 18 | JOIN FILTER USE | :BF0000 | 8859 | 475K| 2213 (6)| 00:00:01 | | | Q1,02 | PCWP | |
| 19 | PX BLOCK ITERATOR | | 8859 | 475K| 2213 (6)| 00:00:01 | 29 | 41 | Q1,02 | PCWC | |
| 20 | MAT_VIEW REWRITE ACCESS STORAGE FULL| PS_XX_SUM_GCNSL_MV | 8859 | 475K| 2213 (6)| 00:00:01 | 29 | 41 | Q1,02 | PCWP | |
| 21 | BUFFER SORT | | | | | | | | Q1,03 | PCWC | |
| 22 | PX RECEIVE | | 731 | 13158 | 3 (0)| 00:00:01 | | | Q1,03 | PCWP | |
| 23 | PX SEND HYBRID HASH | :TQ10001 | 731 | 13158 | 3 (0)| 00:00:01 | | | | S->P | HYBRID HASH|
| 24 | PARTITION RANGE SINGLE | | 731 | 13158 | 3 (0)| 00:00:01 | 36773 | 36773 | | | |
| 25 | INDEX FAST FULL SCAN | PSAPSTREESELECT05 | 731 | 13158 | 3 (0)| 00:00:01 | 36773 | 36773 | | | |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Conclusion
SQL Profiles can be used in much the same way as SQL Patches to introduce hints into application SQL without changing the code, the difference being that SQL Profiles can force match SQL.  However, SQL Profiles do require the Tuning Pack to be licenced, whereas SQL Patches and Baselines do not.
Applying force matching SQL Profiles to nVision is an effective, though reactive, tactic.  Tree changes can result in changes to the number of literal criteria in nVision SQL statements, which may then cease to match existing profiles.  nVision will always require ongoing monitoring and the introduction of new profiles.

nVision Performance Tuning: 11. Excel -v- OpenXML

Thu, 2017-11-30 04:47
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The general objective of the performance tuning changes described in this series of blog posts has been to improve the performance of individual nVision reports, but also to allow many reports to execute concurrently.
However, if you use Excel 2010, Excel 2013 or above, then you may notice run times are significantly longer than with Excel 2007.  Also, from PeopleTools 8.54, Excel 2007 is no longer certified.
The problem is discussed in Oracle support note E-NV: nVision Performance using Microsoft Office 2010 and above (Doc ID 1537491.1).  Essentially, Excel 2010 onwards runs single-threaded.  Only one Excel nVision process that is not waiting for a database call to return can run concurrently on any one Windows server at any one time.  If you want to be able to run 10 concurrent nVision reports you would need to run one on each of 10 process schedulers, on 10 different Windows servers.
From PT8.54, OpenXML is the default and preferred engine for executing nVision reports on the process scheduler.  This uses a different PeopleSoft executable (PSNVSSRV).  It does not suffer from the single-threading problem, so multiple processes can run concurrently.  It can also be run on non-Windows environments.
However, there are some limitations with OpenXML:
  • Excel macros are ignored during report generation, although macros can be put into a layout that will execute when the generated report is subsequently opened in Excel.
  • There are problems with nPlosion.  
  • Any print area set in the layout is lost.
  • When rerunning nVision to file any pre-existing file is not overwritten.
Therefore, it may be necessary to continue to run some nVision reports on Excel.  This would require:
  • Separate process schedulers configured to run Excel rather than OpenXML on each available Windows server.  Excel is selected by setting the variable Excel Automation = 1, in the nVision section of the process scheduler configuration file (psprcs.cfg).  
  • A new Excel nVision process type should be configured to run specific layouts or reportbooks on Excel.  
  • That new process type should only run on these additional process schedulers.  It should have a maximum concurrency of 1, or at most 2, on each process scheduler.  These schedulers should be configured to run this new process type (and a single Application Engine so that the purge process can run).
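The relevant fragment of psprcs.cfg on those schedulers might then look like this (a sketch based on the setting named above; the comment and placement are illustrative):

```
[nVision]
;-----------------------------------------------------------------------
; Render nVision with Excel on this scheduler instead of the default
; OpenXML engine (PSNVSSRV).  Requires a Windows process scheduler.
;-----------------------------------------------------------------------
Excel Automation=1
```

The schedulers left running OpenXML would omit this setting or set it to 0.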

nVision Performance Tuning: 10. Maintaining Statistics on Non-Partitioned Tree Selector Tables

Tue, 2017-11-28 10:21
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

If you do not partition your selector tables, and if you predominantly use static selectors then you can maintain the statistics on the selector table as the new static selectors are created or updated.  PeopleTools table PSTREESELCTL is used to track static selectors.  It is keyed by selector number.  A row is inserted when a new selector is created, and the version number is updated when an existing selector is re-extracted.  Therefore, a trigger on this table can be used to submit a job to refresh the statistics on the tree selector.
CREATE OR REPLACE TRIGGER sysadm.pstreeselector_stats
BEFORE INSERT OR UPDATE ON sysadm.pstreeselctl
FOR EACH ROW
DECLARE
l_jobno NUMBER;
l_cmd VARCHAR2(1000);
l_table_name VARCHAR2(18);
l_suffix VARCHAR2(2);
BEGIN
l_table_name := 'PSTREESELECT'||LTRIM(TO_CHAR(:new.length,'00'));
l_suffix := SUBSTR(l_table_name,-2);
l_cmd := 'dbms_stats.gather_table_stats(ownname=>user,tabname=>'''||l_table_name||''',force=>TRUE);'
||'dbms_stats.set_column_stats(ownname=>user,tabname=>'''||l_table_name||''',colname=>''RANGE_FROM_'||l_suffix||''',density=>1,force=>TRUE);'
||'dbms_stats.set_column_stats(ownname=>user,tabname=>'''||l_table_name||''',colname=>''RANGE_TO_'||l_suffix||''',density=>1,force=>TRUE);';
dbms_output.put_line(l_cmd);
dbms_job.submit(l_jobno,l_cmd);
EXCEPTION WHEN OTHERS THEN NULL;
END;
/
When all the selectors exist in a single segment, the optimizer will not correctly calculate the cardinality of the expressions in the nVision queries on the tree selector tables.  Therefore, I have found it necessary to manually set the density on the range columns to 1 to get effective execution plans at least some of the time.
N.B. This trigger should not be used if the selector tables are partitioned.


nVision Performance Tuning: 9. Using Compression without the Advanced Compression Licence

Fri, 2017-11-24 10:42
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Table compression can significantly decrease the size of tables and reduce the volume of I/O required to retrieve data.  Compression of the ledger, ledger budget, and summary ledger tables can significantly improve the performance of scans in nVision.
The Advanced Compression licence enables compression features to be used where segments are still updated by the application.  Without this licence, only basic table compression can be used, though Hybrid Columnar Compression (HCC) is also available on engineered systems.  Neither form of compression can be used in online transaction processing mode.  A table that is marked for compression, but that is populated in conventional path mode, will not be compressed.  Any compressed blocks will no longer be compressed after being updated.
However, in most systems, ledger rows cease to be updated within a month or two of the period closing.  The ledger tables will usually be partitioned by fiscal year and accounting period, so a month after the period has closed the corresponding partition will cease to be updated.  Therefore, it could then be compressed.
I usually compress historical partitions when I introduce partitioning.  This following example comes from an engineered system, so Hybrid Columnar Compression has been used.
  • There is one range partition each for the whole of fiscal years 2014 and 2015.  This data is historical and rarely queried, and then not by a single period.  A more aggressive compression QUERY HIGH has been used.  
  • Monthly partitioning is used for the previous fiscal year, 2016.  These partitions are compressed, but using QUERY LOW which should deliver better performance with lower CPU overhead than QUERY HIGH (although usually there isn't much in it).
  • Partition LEDGER_2017_05 is not compressed because it is current (at the time of building this script) and could still be updated by the application.
CREATE TABLE sysadm.ps_ledger
(...)
TABLESPACE GLLARGE
PCTFREE 10 PCTUSED 80
PARTITION BY RANGE(FISCAL_YEAR,ACCOUNTING_PERIOD)
SUBPARTITION BY LIST (LEDGER)
(PARTITION ledger_2014 VALUES LESS THAN (2015,0) PCTFREE 0 COMPRESS FOR QUERY HIGH
(SUBPARTITION ledger_2014_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2014_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015 VALUES LESS THAN (2016,0) PCTFREE 0 COMPRESS FOR QUERY HIGH
(SUBPARTITION ledger_2015_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2016_bf VALUES LESS THAN (2016,1) PCTFREE 0 COMPRESS FOR QUERY LOW
(SUBPARTITION ledger_2016_bf_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2016_bf_z_others VALUES (DEFAULT)
)

,PARTITION ledger_2017_05 VALUES LESS THAN (2017,6)
(SUBPARTITION ledger_2017_05_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2017_05_z_others VALUES (DEFAULT)
)

)
ENABLE ROW MOVEMENT
PARALLEL
NOLOGGING
/
As periods close there are more partitions that can be compressed.  However, it may not be practical to take an outage to rebuild ledger partitions each month, and it certainly isn't practical to rebuild the entire ledger table every month.  Instead, from Oracle 12c, partitions and sub-partitions can be moved in an online operation.  The compression attribute can only be specified on the partition, and then the sub-partitions can be rebuilt.  The data in the partition can still be read while it is being moved.
ALTER TABLE ps_ledger MODIFY PARTITION ledger_2017_05 PCTFREE 0 COMPRESS FOR QUERY LOW;
ALTER TABLE ps_ledger MOVE SUBPARTITION ledger_2017_05_actuals ONLINE PARALLEL 32;
ALTER TABLE ps_ledger MOVE SUBPARTITION ledger_2017_05_z_others ONLINE PARALLEL 32;
NB: Moving a partition online can fail in 12.1.0.1 or later of the database due to bug 2070300.  This is documented in MOS Doc ID 2040742.1.
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00932: inconsistent datatypes: expected NUMBER got BINARY
The bug is fixed in 12.2 and can be addressed in 12.1 by patch 2070300.

nVision Performance Tuning: 8. Interval Partitioning and Statistics Maintenance of Tree Selector Tables

Mon, 2017-11-20 05:55
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The decision to use interval partitioning on the tree selector tables came from the need to have accurate statistics for the optimizer when parsing nVision queries.  It is not possible to introduce hints into nVision SQL. The dynamic nature of the code means it is not viable to consider any of the forms of database plan stability across the whole application, (although it may be possible to use SQL Profiles in limited cases). Therefore, as far as possible the optimizer has to choose the best plan on its own. Without accurate optimizer statistics, I have found that the optimizer will usually not choose to use a Bloom filter.
If the selector tables are not partitioned, then each table will usually contain rows for many different selectors.  Even with perfectly up-to-date statistics, including a histogram on SELECTOR_NUM and extended statistics on SELECTOR_NUM and RANGE_FROM_nn, I have found that Oracle miscosts the join on RANGE_FROM_nn and the corresponding attribute on the ledger table.
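For reference, this is how such a histogram and column-group extension would be created on an unpartitioned PSTREESELECT10 (a sketch; this is the approach that proved insufficient, motivating the partitioning proposal that follows):

```sql
/*Create a column group so the optimizer knows SELECTOR_NUM and RANGE_FROM_10 are correlated*/
SELECT dbms_stats.create_extended_stats(user,'PSTREESELECT10','(SELECTOR_NUM,RANGE_FROM_10)')
FROM   dual;

/*Gather statistics with a histogram on SELECTOR_NUM; the new column group is also gathered*/
BEGIN
  dbms_stats.gather_table_stats(
    ownname    => user,
    tabname    => 'PSTREESELECT10',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS SELECTOR_NUM SIZE 254');
END;
/
```

Even with these in place, the single-segment cardinality estimates remain unreliable, which is why per-selector partitions are preferred.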
I propose that the tree selector tables should be interval partitioned such that each selector goes into its own partition.
CREATE TABLE PSYPSTREESELECT10 
(SELECTOR_NUM INTEGER NOT NULL,
TREE_NODE_NUM INTEGER NOT NULL,
RANGE_FROM_10 VARCHAR2(10) NOT NULL,
RANGE_TO_10 VARCHAR2(10) NOT NULL)
PARTITION BY RANGE (selector_num) INTERVAL (1)
(PARTITION VALUES LESS THAN(2))
TABLESPACE "PTWORK"
STORAGE(INITIAL 128K NEXT 128K)
/
INSERT INTO PSYPSTREESELECT10
( SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10)
SELECT SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10
FROM PSTREESELECT10
/
DROP TABLE PSTREESELECT10
/
ALTER TABLE PSYPSTREESELECT10 RENAME TO PSTREESELECT10
/
  • nVision queries will reference a single selector with a literal value, and therefore Oracle will eliminate all but that single partition at parse time and will use the statistics on that partition to determine how to join it to other tables.
  • Statistics only have to be maintained at partition level, and not at table level. 
  • Now that there is only a single selector number in any one partition, there is no need for extended statistics. 
  • The need to use dynamic selectors, in order to get equality joins between selectors and ledger tables, in order to make use of the Bloom filter, means that statistics on selector table will inevitably be out of date. The PL/SQL triggers and package that log the selector usage, are also used to maintain statistics on the partition. 
  • Partitions do not have to be created in advance. They will be created automatically by Oracle as they are required by the application. 
Compound Triggers on Tree Selector Tables
There are a pair of compound DML triggers on each tree selector table, one for insert and one for delete.
  • The after-row section captures the current selector number.  The one for insert also counts the number of rows and tracks the minimum and maximum values of the RANGE_FROM_nn and RANGE_TO_nn columns.
  • The after statement section updates the selector log. The insert trigger directly updates the statistics on the partition, including the minimum and maximum values of the range columns.
    • It is not possible to collect statistics in a trigger in the conventional manner because dbms_stats includes an implicit commit. If dbms_stats was called within an autonomous transaction it could not see the uncommitted insert into the tree selector that fired the trigger. Hence the trigger calls the XX_NVISION_SELECTORS package that uses dbms_stats.set_table_stats and dbms_stats.set_column_stats to set values directly. 
    • The trigger then submits a job to the database job scheduler that will collect statistics on the partition in the conventional way using dbms_job.  The job number is recorded on the selector log.  The job will be picked up by the scheduler when the insert commits.  However, there can be a short lag between scheduling the job and it running.  The first query in the nVision report can be parsed before the statistics are available.
    • The procedure that directly sets the statistics has to make some sensible assumptions about the data. These mostly lead the optimizer to produce good execution plans. However, testing has shown that performance can be better with conventionally collected statistics. Therefore, the trigger both directly sets statistics and submits the database job to collect the statistics.
    • It is important that table level statistics are not maintained by either technique as this would lead to locking between sessions. Locking during partition statistics maintenance will not occur as no two sessions populate the same selector number, and each selector is in a different partition. A table statistics preference for granularity is set to PARTITION on each partitioned tree selector table. 
The combination of dynamic selectors, single-value joins, interval partitioning of selector tables, and logging triggers on the selector tables driving timely statistics maintenance on the partitions delivers execution plans that perform well and that make effective use of engineered system features.
However, there are two problems that then have to be worked around. 
Library Cache Contention 
Some data warehouse systems can need new partitions in tables daily or even hourly. If partitions were not created in a timely fashion, the application would either break because the partition was missing, or performance would degrade as data accumulated in a single partition. Oracle intended interval partitions to free the DBA from the need to actively manage such partitioning on a day-to-day basis by creating them automatically as the data was inserted. 
However, on a busy nVision system, this solution could create thousands of new selectors in a single day, and therefore thousands of new partitions. This is certainly not how Oracle intended interval partitioning to be used.  I freely admit that I am abusing the feature.
If you have multiple concurrent nVision reports running, using dynamic selectors, you will have multiple database sessions concurrently inserting rows into the tree selector tables each with a different selector number, and therefore each creating new partitions mostly into the same tables.
The recursive code that creates the new partitions, and maintains the data dictionary, acquires a lock on the object handle in the library cache to prevent other sessions from changing it at the same time.  As the number of concurrent nVision reports increases, you will start to see nVision sessions waiting on the library cache lock event during the insert into the tree selector table while the new partition is being created.  Perversely, as the performance of the nVision queries improves (as you refine tree selector settings) you may find this contention increases.
The workaround to this is to create multiple database schemas, each with copies of the partitioned tree selector tables (similarly interval partitioned) and the PSTREESELCTL table (to manage static selectors in those schemas). Synonyms will be required for all other tables referenced by nVision queries. 
Then a trigger on the process scheduler request table PSPRCSRQST will arbitrarily set the current schema of the nVision batch processes to one of those schemas.  The nVision reports still connect and run with the privileges of the owner ID (usually SYSADM), but reference the tables in the current schema.
I have used a hash function to distribute nVision processes between schemas.  I suggest the number of schemas should be a power of 2 (i.e. 2, 4, 8, etc.).
CREATE OR REPLACE TRIGGER sysadm.nvision_current_schema
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus IN('7') AND new.prcsname = 'RPTBOOK' AND new.prcstype like 'nVision%')
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET current_schema = NVEXEC'||LTRIM(TO_CHAR(dbms_utility.get_hash_value(:new.prcsinstance,1,8),'00'));
EXCEPTION
  WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/
Different nVision reports then use tree selector tables in different schemas rather than trying to create partitions in the same tree selector table, thus avoiding the library cache lock contention.
Limitation on the Maximum Number of Partitions
In Oracle, it is not possible to have more than 1048576 partitions in a table.  That applies to all forms of partitioning.
The tree selector tables are interval partitioned on selector number with an interval of 1 starting with 1. So the highest value of SELECTOR_NUM that they can store is 1048575.
INSERT INTO pstreeselect05 VALUES(1048576,0,' ',' ')
*
ERROR at line 1:
ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions
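For reference, this is a sketch of how a tree selector table can be interval partitioned.  The column list follows PSTREESELECT10, but the partition name and storage details are illustrative rather than a definitive implementation.

```sql
-- Illustrative sketch: interval partitioning of a tree selector table
-- on SELECTOR_NUM with an interval of 1 (one partition per selector).
CREATE TABLE sysadm.pstreeselect10
(selector_num   NUMBER       NOT NULL
,tree_node_num  NUMBER       NOT NULL
,range_from_10  VARCHAR2(10) NOT NULL
,range_to_10    VARCHAR2(10) NOT NULL)
PARTITION BY RANGE (selector_num) INTERVAL (1)
-- the initial partition holds selector 1; Oracle creates subsequent
-- partitions automatically, up to the 1048576 partition limit
(PARTITION pstreeselect10_init VALUES LESS THAN (2))
/
```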
New selector numbers are allocated sequentially from PSTREESELNUM. Left unattended, the selector numbers used by nVision will increase until they eventually hit this limit, and then nVision and ad-hoc queries that use the tree-exists operator will start to fail with this error.
Therefore, a procedure RESET_SELECTOR_NUM has been added to the PL/SQL package to reset the selector number allocation table PSTREESELNUM back to 0, delete any tree selector entries for which there is no logging entry, and then run the regular selector PURGE procedure in the same package that will drop unwanted interval partitions.

Recommendation: XX_NVISION_SELECTORS.RESET_SELECTOR_NUM should be scheduled to run sufficiently frequently to prevent the selector number reaching the maximum.
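One way to do this is with a scheduler job.  This is only a sketch; the job name and frequency are illustrative and should be chosen to match the rate at which your system consumes selector numbers.

```sql
-- Hypothetical example: run the reset procedure weekly on Sunday morning.
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'SYSADM.RESET_NVISION_SELECTOR_NUM',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN sysadm.xx_nvision_selectors.reset_selector_num; END;',
    repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN;BYHOUR=6',
    enabled         => TRUE);
END;
/
```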

nVision Performance Tuning: 7 Analysis of Tree Usage with the Selector Log

Mon, 2017-11-13 11:22
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Over time, the selector log will build up a picture of how each tree is used in a system. Here are two examples of how it can be used.

You may look at a piece of SQL generated by nVision that contains a literal value for the selector number, and want to know more about that particular selector.
REM treeanal.sql
WITH t as (
SELECT DISTINCT d.tree_name, s.dtl_fieldname, d.tree_acc_method, d.tree_acc_Selector, d.tree_acc_sel_opt
FROM pstreedefn d, pstreestrct s
WHERE d.tree_Strct_id = s.tree_strct_id
), l as (
SELECT *
FROM ps_nvs_treeslctlog l
WHERE l.selector_Num = &selector_num
)
SELECT l.*, t.dtl_fieldname, t.tree_acc_method, t.tree_acc_Selector, t.tree_acc_sel_opt
FROM t, l
WHERE t.tree_name = l.tree_name
/
Now you can see various pieces of information about the selector and the report in which it was used.
  • Process instance of the report, although it is also in the ACTION string
  • Length of the selector.  Here it was 10 characters so it was in PSTREESELECT10.
  • The number of rows inserted into the selector.  This is useful if you want to recreate the conditions at runtime by populating the selector table manually.
  • Time at which the selector was populated.
  • Session module, usually the process name.  
  • Session action string, which contains the process instance, report ID and business unit.
  • The client info string, containing the operator ID, database name, the name of the host running nVision and the name of the executable.
    • PSNVS is nVision running on Microsoft Excel
    • PSNVSSRV is nVision running in OpenXML
    • psae is a PS/Query using the IN TREE operator that has been scheduled to run on the Process Scheduler.
    • PSQED is the windows query client running in 2-tier mode and using an IN TREE operator.
    • PSSAMSRV indicates either nVision or PS/Query running either through the PIA or on the windows client in 3-tier mode.
  • Status of the selector.  X indicates that the selector has been deleted and the partition has been dropped.
  • Name of the tree.
  • Name of the schema in which the selector table is located.
  • Partition name in the tree selector table in which the data was held.  This will be blank if the partition has been dropped.
  • The current values of the three tree selector flags on the tree definition are also reported.
SELECTOR_NUM PROCESS_INSTANCE Len   NUM_ROWS TIMESTAMP                    MODULE       APPINFO_ACTION
------------ ---------------- --- ---------- ---------------------------- ------------ ----------------------------------------------------------------
CLIENT_INFO S TREE_NAME OWNERID PARTITION_NAME JOB_NO DTL_FIELDNAME T T T
---------------------------------------------------------------- - ------------------ -------- -------------------- ---------- ------------------ - - -
10233 1780069 10 362 10-NOV-17 02.40.50.755038 AM RPTBOOK PI=1780069:UKGL123I:UK001
GBNVISION,PSFINPRD,UKLONWIN001,,PSNVSSRV.EXE, X UKGL_ACCOUNT SYSADM 33052 ACCOUNT J D S

The following query aggregates log entries to report the number of times each tree was used over the last 7 days, and provides various statistics about the number of rows extracted from trees into the selector tables, together with the current tree performance options.
REM tree_usage.sql
WITH t AS (
SELECT DISTINCT d.tree_name, s.dtl_fieldname, d.tree_acc_method
, d.tree_acc_Selector, d.tree_acc_sel_opt
FROM pstreedefn d, pstreestrct s
WHERE d.tree_Strct_id = s.tree_strct_id
), l AS (
SELECT tree_name, length
, COUNT(*) num_uses
, MIN(num_rows) min_rows
, AVG(num_rows) avg_rows
, MEDIAN(num_Rows) med_rows
, MAX(num_rowS) max_rows
, STDDEV(num_Rows) stddev_rows
, SUM(num_rows) sum_rows
, COUNT(distinct process_instance) processes
FROM ps_nvs_treeslctlog l
WHERE num_rows>0
AND timestamp >= sysdate-7
GROUP BY tree_name, length
)
SELECT l.*, t.dtl_fieldname, t.tree_acc_method, t.tree_acc_Selector, t.tree_acc_sel_opt
FROM t, l
WHERE t.tree_name = l.tree_name
ORDER BY sum_rows
/
The default recommendation is that all trees should use:
  • Literal values where possible when working with fewer than about 2000 rows in the selector.  Above that, it may be better to join the selector table because of the parse and execution overhead of each literal criterion.
  • Dynamic selectors 
  • Single Value joins 
This report can help to identify trees where extreme volumes mean that different options should be considered.
                          Num    Min Average Median    Max Std Dev       Sum   Num
TREE_NAME Len Uses Rows Rows Rows Rows Rows Rows Procs DTL_FIELDNAME T T T
------------------ --- ------ ------ ------- ------ ------ ------- --------- ----- ------------------ - - -

CORP_ACCT 10 5 1147 2839 2616 6668 2263 14194 1 ACCOUNT J D S
FUNCTION 10 480 9 32 35 35 8 15474 43 CHARTFIELD2 L D S
INT_SUP 6 7 225 2463 2838 2838 987 17243 1 PRODUCT L D S
STAT_PRODUCT 6 8 2889 2889 2889 2889 0 23112 1 PRODUCT J D S
AFFILIATE 5 43 215 576 509 938 223 24789 15 AFFILIATE L D S
INT_GAAP_CON 5 62 82 486 522 730 225 30153 10 BUSINESS_UNIT L D S
BU_GAAP_CON 5 96 44 619 614 731 115 59461 48 BUSINESS_UNIT L D S
STAT_ACCOUNT 10 45 23 4204 6516 6516 2905 189182 6 ACCOUNT J D S
INT_REP1 10 135 149 1563 1664 1664 379 211005 1 CHARTFIELD1 L D S
COMBO_CODE 10 172 17 1592 1532 2430 809 273846 18 CHARTFIELD1 L D S
UKGL_ACCOUNT 10 2586 2 1713 1147 7797 1793 4430262 110 ACCOUNT J D S
  • The account trees have been set to join the tree to the ledger table rather than use literal values because sometimes in excess of 6000 rows are extracted. A query with 6000 literal terms would be extremely large, take time for nVision to generate, and time on the database to parse and execute each criterion.
  • STAT_PRODUCT has been set to join partly because it is large (it always extracts 2889 rows), but also because the whole tree is extracted every time, so the criteria do not cut down the result set.

nVision Performance Tuning: 6. Logging Selector Usage

Mon, 2017-11-06 13:09
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Static selectors are tracked by entries in the PSTREESELCTL table, which is maintained after the tree is extracted to the selector table.  The version number on PSTREESELCTL is compared with the corresponding version number on PSTREEDEFN to determine whether the extract of the tree to the selector table is still valid, or whether it needs to be re-extracted because the tree has been updated.  Selectors that do not have an entry in PSTREESELCTL are therefore dynamic.
Static selectors are left in the PSTREESELECTnn table after the report completes so that they can be reused.  However, many customers periodically create new effective dated versions of many trees, and so static selectors on old effective dated versions of the tree will accumulate as there is nothing to purge them.
Dynamic selectors are extracted every time the report runs.  They should normally be deleted by nVision before the report finishes.  However, if a report crashes, dynamic selectors can be left behind.  That creates a number of problems:
  • The size of the selector tables will tend to increase over time as old static and dynamic selectors are left behind.
  • That in turn affects the statistics on these tables.  The range of values for SELECTOR_NUM will become wider faster than the number of rows in the table grows.  The minimum value will either be the oldest static selector number, or the oldest dynamic selector left after a crash.  The maximum value will be the last selector number inserted when statistics were collected.
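Both problems can be assessed with a query against the selector table.  In this sketch (shown for the 10-character table), selectors with a PSTREESELCTL control row are static, and everything else is a dynamic selector left behind by a failed process.

```sql
-- Sketch: count stored rows per selector, flagging static selectors
-- (those with a PSTREESELCTL control row).
SELECT s.selector_num
,      NVL2(c.selector_num,'Static','Dynamic') selector_type
,      COUNT(*) num_rows
FROM   sysadm.pstreeselect10 s
       LEFT OUTER JOIN sysadm.pstreeselctl c
         ON c.selector_num = s.selector_num
GROUP BY s.selector_num, NVL2(c.selector_num,'Static','Dynamic')
ORDER BY s.selector_num
/
```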
Therefore, it is useful to be able to track creation and deletion of dynamic selectors by the various nVision reports and queries.  I have therefore created a logging table PS_NVS_TREESLCTLOG (see nvision_dynamic_selectors.sql), a PL/SQL package XX_NVISION_SELECTORS and compound DML triggers on every tree selector table.
The column names in the log table have been chosen for compatibility with the PeopleSoft data dictionary, so that a record can be defined in Application Designer.
Column Name       Data Type      Description
----------------- -------------- ---------------------------------------------------------------
SELECTOR_NUM      NUMBER         Unique identifier for tree selector records.
PROCESS_INSTANCE  NUMBER         PeopleSoft process instance number for nVision/Query.
LENGTH            NUMBER         Length of tree selector.
NUM_ROWS          NUMBER         Number of rows inserted into tree selector, counted by the
                                 AFTER ROW part of the triggers.
TIMESTAMP         TIMESTAMP      Time when rows inserted.
MODULE            VARCHAR2(64)   Module attribute of session inserting selector rows.
APPINFO_ACTION    VARCHAR2(64)   Action attribute of session inserting selector rows.
CLIENTINFO        VARCHAR2(64)   CLIENT_INFO attribute of session inserting selector rows.
                                 This will include: the PeopleSoft Operator ID, the name of the
                                 application server or process scheduler domain, the name of
                                 the machine where the client process is executing, and the
                                 name of the client executable process.
STATUS_FLAG       VARCHAR2(1)    I=Selectors Inserted, S=Static Selectors Inserted,
                                 D=Selectors Deleted, X=Selectors Deleted and Partition Dropped.
TREE_NAME         VARCHAR2(18)   Name of the tree from which selector extracted, obtained by
                                 querying the statement from V$SQL.
OWNER_ID          VARCHAR2(8)    Schema under which nVision report run.
PARTITION_NAME    VARCHAR2(128)  Name of partition where selectors stored.
JOB_NO            NUMBER         Database job number to collect statistics on the partition.
All of the logic is kept in the PL/SQL package because it is common to the triggers on all the tree selector tables.  Insert triggers track population of selectors and delete triggers track the successful removal of dynamic selectors.  After row triggers track the selector number and count the number of rows inserted.  After statement triggers call the logging procedures.
CREATE OR REPLACE TRIGGER sysadm.pstreeselect10_insert
FOR INSERT ON sysadm.PSTREESELECT10 COMPOUND TRIGGER
  l_err_msg VARCHAR2(100 CHAR);

  AFTER EACH ROW IS
  BEGIN
    sysadm.xx_nvision_selectors.rowins(:new.selector_num,:new.range_from_10,:new.range_to_10);
  EXCEPTION
    WHEN OTHERS THEN NULL;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    sysadm.xx_nvision_selectors.logins(10,'SYSADM');
  EXCEPTION
    WHEN OTHERS THEN
      l_err_msg := SUBSTR(SQLERRM,1,100);
      dbms_output.put_line('Error:'||l_err_msg);
  END AFTER STATEMENT;
END;
/

CREATE OR REPLACE TRIGGER sysadm.pstreeselect10_delete
FOR DELETE ON sysadm.PSTREESELECT10 COMPOUND TRIGGER
  l_err_msg VARCHAR2(100 CHAR);

  AFTER EACH ROW IS
  BEGIN
    sysadm.xx_nvision_selectors.rowdel(:old.selector_num);
  EXCEPTION
    WHEN OTHERS THEN NULL;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    sysadm.xx_nvision_selectors.logdel(10);
  EXCEPTION
    WHEN OTHERS THEN
      l_err_msg := SUBSTR(SQLERRM,1,100);
      dbms_output.put_line('Error:'||l_err_msg);
  END AFTER STATEMENT;
END;
/
Once the decision to bear the overhead of triggers on the selector tables is made, there are various pieces of additional information that can be captured and stored in the logging table for very little additional overhead.  It is easy to record the current session attributes such as module, action, and client_info.  The process instance number is captured by the psftapi package when the process starts and can then be read with another procedure in that package.  The tree is identified by scanning V$SQL for the SQL statement that fired the triggers.
It is also possible to maintain statistics on the selector tables.
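For example, the logging package could submit a job to collect statistics on just the newly populated partition.  This is a sketch only; the partition name shown is hypothetical, and in practice it would be derived from the selector number.

```sql
-- Sketch: gather statistics on a single selector partition
-- (the partition name SYS_P12345 is hypothetical).
BEGIN
  dbms_stats.gather_table_stats(
    ownname     => 'SYSADM',
    tabname     => 'PSTREESELECT10',
    partname    => 'SYS_P12345',
    granularity => 'PARTITION');
END;
/
```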
Purging Selectors
The selector log can be used to drive the purging of selectors not cleared by nVision reports that failed to complete.  The package includes a purge procedure that clears selectors not already marked as deleted, either where the logged process instance is no longer running, or, where there is no known process instance, where it is more than 2 days since the selector was inserted.  If the selector table is also interval partitioned, then the partition will be dropped.
A trigger on PSTREESELCTL (see pstreeselctl.sql) logs static selector maintenance.
The purge process is invoked by a trigger on PSPRCSRQST that fires when an nVision process's status is changed away from processing.  The selector purge process should also be scheduled to run daily.
The selector log itself is not purged as it contains useful information about tree usage.
Recommendations
  • If the tree selector tables are not partitioned create a histogram on SELECTOR_NUM.  However, up to Oracle 11g, this will not be effective as the number of distinct values reaches the maximum number of buckets, 254.  Dynamic selectors should be purged before this happens.  From Oracle 12c the maximum number of buckets is 8192, and hybrid histograms can be used.
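A histogram can be requested explicitly when gathering statistics.  This is a minimal sketch using the 254-bucket maximum that applies up to Oracle 11g.

```sql
-- Sketch: build a histogram on SELECTOR_NUM (254 buckets, the 11g maximum).
BEGIN
  dbms_stats.gather_table_stats(
    ownname    => 'SYSADM',
    tabname    => 'PSTREESELECT10',
    method_opt => 'FOR COLUMNS SELECTOR_NUM SIZE 254');
END;
/
```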

nVision Performance Tuning: 5. Additional Instrumentation of nVision

Thu, 2017-11-02 07:28
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

One of the challenges of tuning and monitoring nVision is to be able to identify each report being run.  Calls to the Oracle instrumentation package dbms_application_info were added to the component processor in PeopleTools 8.50, and to Application Engine in PeopleTools 8.52.  However, COBOL, nVision, and SQR were never instrumented.
Therefore, there is still a place for the psftapi package and trigger.  When a PeopleSoft batch process starts, it sets the run status on its Process Scheduler request record, PSPRCSRQST, to 7, indicating that it is processing.  A trigger on that table fires on that update and calls the psftapi package.  The package sets module and action to the process name and process instance, and also stores the process instance number in a package global variable that can be read with another procedure in the package.  Every scheduled process will have module and action set to something meaningful.  Any PeopleSoft instrumentation will simply overwrite these values.  A session's module and action are picked up by Oracle monitoring tools; in particular, they are also stored in the Active Session History (ASH).
However, nVision reports always run under the same process names: NVSRUN for a single report, RPTBOOK for a report book of many reports, or DRILLDWN for an nVision drill-down query.  Knowing the process instance is useful because then we can look up the operator and run control ID.
However, we also need to know the report ID being run.  When each individual nVision report starts, it queries its runtime parameters from the PeopleTools table PS_NVS_REPORT.  There is no update, so it is not possible to capture this with a DML trigger.  Instead, you can create a fine-grained audit policy on the query with a PL/SQL handler, which is then invoked by the audit policy.
BEGIN
DBMS_FGA.ADD_POLICY(
object_schema => 'SYSADM',
object_name => 'PS_NVS_REPORT',
policy_name => 'PS_NVS_REPORT_SEL',
handler_module => 'AEG_FGA_NVISION_HANDLER',
enable => TRUE,
statement_types => 'SELECT',
audit_trail => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
END;
/
The handler package runs in the session that triggered the audit.  It can access the audit record and extract the string of colon-separated bind variables thus obtaining the report ID and business unit.  It updates the session action attribute in the same way as psftapi.sql.
CREATE OR REPLACE PROCEDURE sysadm.aeg_fga_nvision_handler
(object_schema VARCHAR2
,object_name VARCHAR2
,policy_name VARCHAR2)
AS
l_sqlbind VARCHAR2(4000);
l_parm1 VARCHAR2(30);
l_parm2 VARCHAR2(30);
l_parm3 VARCHAR2(30);
l_parm4 VARCHAR2(30);
BEGIN
BEGIN
SELECT x.lsqlbind
, SUBSTR(x.lsqlbind,x.start1,LEAST(30,NVL(x.end1,x.lensqlbind+1)-x.start1)) parm1
, SUBSTR(x.lsqlbind,x.start2,LEAST(30,NVL(x.end2,x.lensqlbind+1)-x.start2)) parm2
, SUBSTR(x.lsqlbind,x.start3,LEAST(30,NVL(x.end3,x.lensqlbind+1)-x.start3)) parm3
, SUBSTR(x.lsqlbind,x.start4,LEAST(30,NVL(x.end4,x.lensqlbind+1)-x.start4)) parm4
INTO l_sqlbind, l_parm1, l_parm2, l_parm3, l_parm4
FROM (
SELECT l.*
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,1,1,'i'),0) start1
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,2,0,'i'),0) end1
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,2,1,'i'),0) start2
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,3,0,'i'),0) end2
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,3,1,'i'),0) start3
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,4,0,'i'),0) end3
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,4,1,'i'),0) start4
, NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,5,1,'i'),0) end4
, LENGTH(lsqlbind) lensqlbind
FROM sys.fga_log$ l
) x
WHERE x.sessionid = USERENV('SESSIONID')
AND x.entryid = USERENV('ENTRYID')
AND x.obj$name = 'PS_NVS_REPORT';
EXCEPTION
WHEN no_data_found THEN
RAISE_APPLICATION_ERROR(-20000,'AEG_FGA_NVISION_HANDLER: No Audit Row');
END;

IF l_parm4 IS NULL THEN
l_parm4 := l_parm3;
l_parm3 := l_parm2;
l_parm2 := l_parm1;
END IF;

IF l_parm4 IS NULL THEN
l_parm4 := l_parm3;
l_parm3 := l_parm2;
END IF;

IF l_parm4 IS NULL THEN
l_parm4 := l_parm3;
END IF;

dbms_output.put_line(l_sqlbind);
dbms_output.put_line(l_parm1);
dbms_output.put_line(l_parm2);
dbms_output.put_line(l_parm3);
dbms_output.put_line(l_parm4);

dbms_application_info.set_action(SUBSTR('PI='||psftapi.get_prcsinstance()||':'||l_parm4||':'||l_parm3,1,64));
--EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER=''PI='||psftapi.get_prcsinstance()||':'||l_parm4||':'||l_parm3||'''';
END;
/
The action attribute is then picked up by the ASH data.  It is easy to extract the report ID and business unit from the action string with regular expressions, as in this example query.
set lines 160 trimspool on
column module format a12
column action format a32
column client_id format a12
column prcsinstance format a9 heading 'Process|Instance'
column business_unit format a8 heading 'Business|Unit'
column report_id format a10
select DISTINCT module, action, client_id
, REGEXP_SUBSTR(h.action,'[[:digit:]]+') prcsinstance
, substr(regexp_substr(h.action,':([[:alnum:]])+',1,2),2) business_unit
, substr(regexp_substr(h.action,':([A-Za-z0-9_-])+',1,1),2) report_id
from v$active_session_History h
where program like 'PSNVS%'
/
Here you can see how process instance, report ID and business unit are held in action and how they can be extracted.  Now, it is possible to profile ASH data for nVision processes, find the long-running SQL and determine which layout it came from.
                                                           Process   Business
MODULE ACTION CLIENT_ID Instance Unit REPORT_ID
------------ -------------------------------- ------------ --------- -------- ----------
RPTBOOK PI=1780508:GBGL224S:UK001 GBNVISION 1780508 UK001 GBGL113S
RPTBOOK PI=1780509:GBGL010E:UK002 GBNVISION 1780509 UK002 GBGL010E
RPTBOOK PI=1780502:GBGL91PF:UK001 GBNVISION 1780502 UK001 GBGL91PF
RPTBOOK PI=1780502:GBGL91FR:UK001 GBNVISION 1780502 UK001 GBGL91FR
RPTBOOK PI=1780502:GBGL91GB:UK001 GBNVISION 1780502 UK001 GBGL91GB
RPTBOOK PI=1780502:GBGL91DM:UK002 GBNVISION 1780502 UK002 GBGL91DM
RPTBOOK PI=1780506:GBEXP2AM:UK001 GBNVISION 1780506 UK001 GBEXP2AM
RPTBOOK PI=1780509:Processing GBNVISION 1780509 Processing
RPTBOOK PI=1780500:GBGL113S:UK003 GBNVISION 1780500 UK003 GBGL113S
RPTBOOK PI=1780509:GBGL010E:UK000 GBNVISION 1780508 UK000 GBGL010E 
This code in this blog is available on github.
Other recommendations
  • Create an index on SYS.FGA_LOG$ to support the query in the FGA handler package.
CREATE INDEX sys.fga_log$_obj$name
ON sys.fga_log$ (obj$name, sessionid, entryid)
TABLESPACE sysaux PCTFREE 1 COMPRESS 1
/
  • Put a regular purge of the FGA_LOG$ table in place, to purge rows after, say, 31 days.  Otherwise, it will grow indefinitely, as one row is added for every nVision report run.
DELETE FROM fga_log$
WHERE obj$name = 'PS_NVS_REPORT'
AND ntimestamp# < SYSDATE-31
/
  • Move SYS.AUD$ and SYS.FGA_LOG$ from the SYSTEM tablespace to another ASSM tablespace using the instructions in Oracle support note 1328239.1.

nVision Performance Tuning: 4. Partitioning of Ledger, Ledger Budget, and Summary Ledger Tables

Sat, 2017-10-28 14:41
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Note: Partitioning is a licenced option in Oracle RDBMS, and is only available on Enterprise Edition.

nVision queries always contain single value predicates on LEDGER and FISCAL_YEAR.  They will also always have either single value predicate or a range predicate on ACCOUNTING_PERIOD.  Therefore, partitioning the ledger tables on these columns is an effective way to cut down the data to be processed by the query as early as possible.
SELECT … SUM(A.POSTED_BASE_AMT) 
FROM PS_LEDGER A, …
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2015
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 11

I usually partition the ledger, ledger budget and summary ledger tables on the combination of FISCAL_YEAR and ACCOUNTING_PERIOD in a single range.
Most customers have monthly accounting periods, in which case I create 14 partitions for each of the current and previous fiscal years, but only a single partition for each earlier fiscal year.
  • One for each of the 12 accounting periods
  • One for period 0 (brought forward)
  • One for periods 998 and 999 (carry forward and adjustments)
I have seen one system with daily accounting periods that also had 14 partitions per year, in groups of 30 days.  This also worked very well.
I would then consider sub-partitioning on another column depending on the nature of data and the processing.  For example:
  • LEDGER is often a good candidate.  In which case, I would create one list sub-partition for each of the larger ledgers and have a default list partition for the rest.
  • On one global system with various regional, but locally overnight, batch processing windows, the ledger was list sub-partitioned on BUSINESS_UNIT.  A list sub-partition was proposed for each region containing the top business units for that region.  This not only helped regional reporting but also minimised inter-regional contention.
  • It would even be possible to vary the sub-partitioning in different fiscal years if a change occurred in the business.
This example shows part of the DDL used to create the LEDGER table.
  • There is one range partition for the whole of fiscal year 2014 because it is historical and rarely queried, and then not usually by a single period.
  • Monthly partitioning is used from fiscal year 2015 onwards.
  • Historical partitions are created without any reservation for free space as they are closed and won't be updated any further.  They could also be compressed.
CREATE TABLE sysadm.gfc_ledger
(...)
TABLESPACE GLLARGE
PCTFREE 10 PCTUSED 80
PARTITION BY RANGE(FISCAL_YEAR,ACCOUNTING_PERIOD)
SUBPARTITION BY LIST (LEDGER)
(PARTITION ledger_2014 VALUES LESS THAN (2015,0) PCTFREE 0 …
(SUBPARTITION ledger_2014_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2014_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015_bf VALUES LESS THAN (2015,1) PCTFREE 0…
(SUBPARTITION ledger_2015_bf_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_bf_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015_01 VALUES LESS THAN (2015,2) PCTFREE 0 …
(SUBPARTITION ledger_2015_01_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_01_z_others VALUES (DEFAULT)
)

,PARTITION ledger_2015_12 VALUES LESS THAN (2015,13) PCTFREE 0 …
(SUBPARTITION ledger_2015_12_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_12_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015_cf VALUES LESS THAN (2016,0) PCTFREE 0 …
(SUBPARTITION ledger_2015_cf_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2015_cf_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2016_bf VALUES LESS THAN (2016,1) PCTFREE 0 …
(SUBPARTITION ledger_2016_bf_actuals VALUES ('ACTUALS')

,SUBPARTITION ledger_2016_bf_z_others VALUES (DEFAULT)
)

)
ENABLE ROW MOVEMENT
PARALLEL
NOLOGGING
/
I usually recommend locally partitioning all indexes.  Even though FISCAL_YEAR and ACCOUNTING_PERIOD are the 24th and 25th columns on the unique index on LEDGER, I would still locally partition it.
CREATE UNIQUE INDEX sysadm.ps_ledger_new ON sysadm.ps_ledger
(business_unit,ledger,account,altacct,deptid
,operating_unit,product,fund_code,class_fld,program_code
,budget_ref,affiliate,affiliate_intra1,affiliate_intra2,chartfield1
,chartfield2,chartfield3,project_id,book_code,gl_adjust_type
,date_code,currency_cd,statistics_code,fiscal_year,accounting_period
) LOCAL
(PARTITION ledger_2014 PCTFREE 0
(SUBPARTITION ledger_2014_actuals

,SUBPARTITION ledger_2014_z_others
)

)
TABLESPACE GLLEDGER_IDX
PCTFREE 5 COMPRESS 3
PARALLEL
/
ALTER INDEX ps_ledger_new NOPARALLEL
/
Maintenance Tasks
The introduction of range partitioning on FISCAL_YEAR brings some regular maintenance tasks.
  • New partitions must be added to the ledger and summary ledger tables for each new fiscal year before it is opened and transactions posted to it.  The LEDGER_BUDG table can be similarly partitioned and partitions should be added before budget entries are made.
  • I deliberately do not create MAXVALUE partitions on ledger tables.  If the application attempted to post data to an accounting period for which new partitions had not been created it would experience an Oracle error.  Therefore it is essential to remember to add the partitions in advance. I think this is preferable to forgetting to add the partitions and having performance degrade as data accumulates in the MAXVALUE partition.
  • As periods close and the application ceases to insert or update rows, their partitions can be compressed using
ALTER TABLE … PARTITION … COMPRESS UPDATE ALL INDEXES
  • As and when old fiscal years are no longer needed they can be archived by simply dropping the partitions, or exchanging them out to another table.
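For example, removing an archived fiscal year can be sketched as follows.  The partition name follows the DDL above; if there are global indexes, they should be maintained in the same statement.

```sql
-- Sketch: remove fiscal year 2014 once it is no longer needed.
ALTER TABLE sysadm.ps_ledger
  DROP PARTITION ledger_2014 UPDATE GLOBAL INDEXES
/
```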
Managing Partitioning in PeopleSoft
Application Designer is not good at managing partitioning.  I have written previously about the limited support for partitioning introduced in PeopleTools 8.54.  It uses Oracle's DBMS_METADATA package to preserve existing settings, including partitioning, but the support for initially implementing partitioning is poor.
It may be reasonable to manually manage partitioning in a single table, but if you also have a number of summary ledgers, and have perhaps also built materialized views on them, you can have a significant number of partitioned objects to manage.  Manual scripting is going to become a significant overhead.  You might want to look at Managing Oracle Table Partitioning in PeopleSoft Applications with GFC_PSPART package.

nVision Performance Tuning: 3. Indexing of Ledger, Budget, and Summary Ledger Tables on Non-Engineered Oracle Systems

Wed, 2017-10-25 12:06
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

If you are on a conventional non-engineered Oracle system, then it is not going to be viable to full scan the ledger tables for every query.  You are going to have to use indexes to find your data.  Every customer's nVision reports are unique to that customer and the nature of their business.  Different customers will analyse their data by different combinations of attributes.  Indexes will be needed to match those analysis criteria.
A survey of the nVision reports and the SQL they produce will reveal the various sets of columns by which ledger data is queried. The heaviest SQL statements can be identified from Active Session History (ASH) or the Automatic Workload Repository (AWR) (or Statspack if the Diagnostics pack is not licenced).   As an example, let's take the following two statements as examples, and consider the indexes that would be needed to support them.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1
AND L4.SELECTOR_NUM=78948
AND A.CHARTFIELD3>= L4.RANGE_FROM_10 AND A.CHARTFIELD3 <= L4.RANGE_TO_10
AND L4.TREE_NODE_NUM BETWEEN 1000000000 AND 2000000000
AND L2.SELECTOR_NUM=92481 AND A.ACCOUNT=L2.RANGE_FROM_10
AND (A.DEPTID BETWEEN '10000' AND '18999'

OR A.DEPTID='29150'
OR A.DEPTID=' ')
AND L.SELECTOR_NUM=92469 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='USD'
GROUP BY L4.TREE_NODE_NUM
  • nVision queries will always contain single value predicates on LEDGER and FISCAL_YEAR.
  • You may see some queries on a single ACCOUNTING_PERIOD, as in the query above, usually the current period.  Otherwise, they might be on a range of ACCOUNTING_PERIODs, usually the year to date as in the example below.
  • Then there will predicates on the other analysis columns, in this case, DEPTID, ACCOUNT, CHARTFIELD1, CHARTFIELD3.
  • You may see criteria on CURRENCY_CD, however, this may not be very selective.  Generally, most of the queries will be on the base currency, and most of the data will be in the base currency.
  • Sometimes you may see a criterion on STATISTICS_CODE=' '.  Normally, only a very few ledger rows have a statistics code, so there is no benefit in adding STATISTICS_CODE to any index.
  • Generally, single value predicate columns should be placed at the front of the index, followed by the other analysis criteria.
  • However, I would not attempt to overload the index with additional non-selective columns because unless you index all the referenced columns, including the amount columns, you will not be able to satisfy the query from the index only, and you will still have to visit the table.
  • So for this query, you might build an index on the following columns: LEDGER, FISCAL_YEAR, ACCOUNTING_PERIOD, DEPTID, ACCOUNT, CHARTFIELD1, CHARTFIELD3, CURRENCY_CD
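As a sketch, such an index might be created as follows.  The index name and tablespace are illustrative, and the COMPRESS prefix is optional.

```sql
-- Hypothetical index to support the query above
-- (the name and tablespace are illustrative).
CREATE INDEX sysadm.ps_ledger_nv1 ON sysadm.ps_ledger
(ledger, fiscal_year, accounting_period
,deptid, account, chartfield1, chartfield3, currency_cd)
TABLESPACE glledger_idx COMPRESS 3
/
```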
This query, from the same system, is on a different combination of columns
SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,L4.TREE_NODE_NUM,SUM(A.POSTED_BASE_AMT) 
FROM PS_LEDGER A, PSTREESELECT05 L1, PSTREESELECT10 L2, PSTREESELECT10 L3, PSTREESELECT10 L4, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS' AND A.FISCAL_YEAR=2015
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 11
AND L1.SELECTOR_NUM=30369 AND A.BUSINESS_UNIT=L1.RANGE_FROM_05
AND L2.SELECTOR_NUM=30374 AND A.CHARTFIELD1=L2.RANGE_FROM_10
AND L3.SELECTOR_NUM=30375 AND A.ACCOUNT=L3.RANGE_FROM_10
AND L4.SELECTOR_NUM=30376 AND A.CHARTFIELD2=L4.RANGE_FROM_10
AND L.SELECTOR_NUM=30372 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='USD'
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,L4.TREE_NODE_NUM
  • In this case, the above query is part of a fiscal year-to-date report, returning the same period in a previous fiscal year.  The system is currently in period 11, so this report is looking at accounting periods 1 to 11.  ACCOUNTING_PERIOD is a much less selective criterion in this case.  If the partitioning option were not available, I might be tempted to demote ACCOUNTING_PERIOD further down the list of indexed columns if the criterion on the next column is more selective.
  • This query uses 5 trees.  A cartesian join of all 5 is likely to produce a product so large that it will not be effective to do index lookups for each member of the product.  The columns with the most selective criteria should be earlier in the index.  It might not even be worth including columns with non-selective criteria in the index.
  • So you might build an index on the following columns: LEDGER, FISCAL_YEAR,  BUSINESS_UNIT, ACCOUNT, CHARTFIELD1, CHARTFIELD2, CURRENCY_CD, ACCOUNTING_PERIOD.
Recommendations
It is easy to see how you might build up a large number of indexes on the ledger and summary ledger tables.  You might have pairs of indexes for each set of analysis criteria: one for single accounting periods, and perhaps a similar one for year-to-date.
However, as you add indexes, the overhead of index maintenance experienced by any process that posts to the ledger, or by the summary ledger build process, will increase.  There is a balance to be struck: you will have to build as many indexes, with as many columns, as are necessary, but as few as possible.
If you partition the tables on FISCAL_YEAR and ACCOUNTING_PERIOD and locally partition the indexes, you will only need one of the indexes.
Index leaf block dictionary compression can significantly reduce the size of the index.  Any increase in the CPU overhead of using and maintaining the index will be more than offset by the reduction in I/O overhead.  The optimal prefix length can be calculated using the command ANALYZE INDEX … VALIDATE STRUCTURE.  This feature does not require the Advanced Compression licence.
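For example, assuming a hypothetical index PSZNVISION1, the optimal prefix length is reported in OPT_CMPR_COUNT and can then be used in a rebuild.  NB: VALIDATE STRUCTURE takes a shared lock on the table, so run it at a quiet time:

```sql
ANALYZE INDEX psznvision1 VALIDATE STRUCTURE;

SELECT name, opt_cmpr_count, opt_cmpr_pctsave
FROM   index_stats;

ALTER INDEX psznvision1 REBUILD COMPRESS 3; -- use the reported OPT_CMPR_COUNT
```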

nVision Performance Tuning: 2. Effects of Performance Options

Sat, 2017-10-21 04:16
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

In this post, I examine the effect of different settings for the tree performance options on how the SQL is generated.  It is common to see different performance options in use on different trees in the same SQL query.  It is important to be able to look at a piece of SQL generated by nVision and work out which performance options have been set on which trees.
  • Access Method: join to tree selector –v- use literal values
  • Tree Selectors: static –v- dynamic
  • Selector Options: single values –v- inequalities/between
Access Method
This option allows you to control how the tree selector table is combined with the ledger table.  The choices of interest are whether to join to the tree selector table, or to use literal values where feasible.
Join
Here, two tree selectors are joined to the ledger table:
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A, PSTREESELECT10 L4, … PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017

AND L.SELECTOR_NUM=73130
AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
It is not feasible to use literal values for L4 because the data is grouped by L4.TREE_NODE_NUM; however, it is feasible to use literal values for L because no column from that table appears in either the SELECT or GROUP BY clauses.
Literal Values
When 'use literal values' is selected, nVision replaces the join to the tree selector table with a set of literal values generated from the contents of the tree selector table.
In this example, the join to the CHARTFIELD1 tree selector has been replaced with a series of literal value predicates.  There is one for every selected tree leaf: a simple equality condition for single value leaves, and a BETWEEN condition for ranged leaves.
SELECT L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A, PSTREESELECT10 L2
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L2.SELECTOR_NUM=101142
AND A.ACCOUNT=L2.RANGE_FROM_10
AND (A.CHARTFIELD1='0070700'
OR A.CHARTFIELD1='0078999'

OR A.CHARTFIELD1 BETWEEN '0091100' AND '0091101'
OR A.CHARTFIELD1='0150204')
AND A.CURRENCY_CD='GBP'
GROUP BY L2.TREE_NODE_NUM
Recommendations
In general, the literal values option is very beneficial.
  • It simplifies the queries generated by nVision by removing a table from the from clause and leaves the optimizer with fewer choices when determining the execution plan.
  • Data is filtered on the scan of the ledger table rather than when it is joined to the tree selector.  On an engineered system these predicates can be pushed down to the storage cells.
However, there are some disadvantages:
  • As the number of leaves on a tree increases, so the number of literal predicates in the query increases, and the time taken to parse the SQL increases.  As a rough guideline, I would say that parse time starts to become significant as the tree exceeds 2000 leaves, and you might be better off using the join option.
  • Also, as the number of literal predicates increases, the time taken to evaluate them increases.  This applies to both conventional and engineered systems.
  • Some PeopleSoft customers have found it generally beneficial to the system to raise the value of OPTIMIZER_DYNAMIC_SAMPLING from the default of 2 to 4 so that Oracle collects dynamic statistics if there are expressions in where clause predicates or multiple predicates on the same table (see Database SQL Tuning Guide -> Managing Optimizer Statistics).  However, this can significantly increase the parse overhead of nVision SQL using many predicates.  If necessary, this parameter can be reset at session level for nVision with a trigger.
  • The nVision client also has to do more work to generate the SQL.
  • Literal values increase the variability of the SQL generated; reports using different tree nodes will generate SQL with different numbers of predicates, and there is effectively no chance that any two statements will be similar enough to use any of Oracle's plan stability technologies (SQL baselines, SQL profiles, SQL plan directives).
  • In theory, cursor sharing could reduce the parse overhead, but my experience is that the optimizer does not correctly cost the predicates, resulting in different execution plans with poorer performance.
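The session-level reset of OPTIMIZER_DYNAMIC_SAMPLING mentioned above could be implemented with a database logon trigger along these lines.  This is only a sketch: the condition that identifies nVision sessions (here, a module name beginning NVS) is an assumption and will vary between systems.

```sql
CREATE OR REPLACE TRIGGER nvision_logon
AFTER LOGON ON DATABASE
BEGIN
  -- identify nVision sessions; the NVS% match is an assumption
  IF UPPER(SYS_CONTEXT('USERENV','MODULE')) LIKE 'NVS%' THEN
    EXECUTE IMMEDIATE 'ALTER SESSION SET optimizer_dynamic_sampling=2';
  END IF;
END;
/
```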
Tree Selectors
This option allows you to choose whether a tree selector is managed in a static or dynamic manner.  As you will see, the Tree Selectors performance option is tied in with the Selector Options performance option.
Static
Where a tree uses a static selector, the entire tree is extracted into the tree selector.  There will be no tree node number criteria in the extract statement.
INSERT INTO PSTREESELECT10
(SELECTOR_NUM,TREE_NODE_NUM,RANGE_FROM_10,RANGE_TO_10)
SELECT 78722,L.TREE_NODE_NUM, SUBSTR(L.RANGE_FROM,1,10),SUBSTR(L.RANGE_TO,1,10)
FROM PSTREELEAF L
WHERE L.SETID='GLOBE'
AND L.SETCNTRLVALUE=' '
AND L.TREE_NAME='FUNCTION'
AND L.EFFDT=TO_DATE('1901-01-01','YYYY-MM-DD')
Note that the tree leaf table was not joined to the underlying detail table.  Ranged leaves become ranged selectors.  This is because there is no mechanism to determine when the detail table has changed, and thus when the static selector would be out of date and need to be refreshed.  This is also why single value joins cannot be performed with static selectors.
Tree node number criteria will appear in the nVision queries to identify the required selectors.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1
AND L4.SELECTOR_NUM=89052
AND A.CHARTFIELD3>= L4.RANGE_FROM_10
AND A.CHARTFIELD3 <= L4.RANGE_TO_10
AND L4.TREE_NODE_NUM BETWEEN 1000000000 AND 2000000000
A control row is maintained on the table PSTREESELCTL.  This row indicates that a particular selector is static, and the version number is compared to the version number of PSTREEDEFN to determine whether the selector is valid, or whether the tree has changed.  If the tree has changed it will have a higher version number, and the selector will be extracted again.  The selector number from the query can be looked up on this table to identify the tree.
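So, to identify the tree behind a static selector seen in a query, the selector number can be looked up on PSTREESELCTL and compared with the tree definition.  A sketch, taking the selector number from the query above; the join columns may need adjusting for your PeopleTools version:

```sql
SELECT c.selector_num, c.tree_name
,      c.version selector_version, d.version tree_version
FROM   pstreeselctl c
,      pstreedefn   d
WHERE  d.setid         = c.setid
AND    d.setcntrlvalue = c.setcntrlvalue
AND    d.tree_name     = c.tree_name
AND    d.effdt         = c.effdt
AND    c.selector_num  = 89052;
```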
Dynamic
Dynamic selectors are built on the fly by nVision as the report runs.  They select just the rows from the tree that are required by the report.
INSERT INTO PSTREESELECT10
(SELECTOR_NUM,TREE_NODE_NUM,RANGE_FROM_10,RANGE_TO_10)
SELECT DISTINCT 108090,L.TREE_NODE_NUM,D.ACCOUNT,D.ACCOUNT FROM
PS_GL_ACCOUNT_TBL D, PSTREELEAF L
WHERE L.SETID='GLOBE' AND L.SETCNTRLVALUE=' '
AND L.TREE_NAME='ACCOUNT'
AND L.EFFDT=TO_DATE('1901-01-01','YYYY-MM-DD')
AND L.RANGE_FROM<>L.RANGE_TO
AND D.ACCOUNT BETWEEN L.RANGE_FROM AND L.RANGE_TO
AND D.SETID='GLOBE' AND
(L.TREE_NODE_NUM BETWEEN 789473386 AND 793372019
OR L.TREE_NODE_NUM BETWEEN 810915873 AND 812865189

OR L.TREE_NODE_NUM BETWEEN 1089668204 AND 1095516154
OR L.TREE_NODE_NUM BETWEEN 1113060008 AND 1115009324)
All the tree node number predicates are found in the query that populates the selector table and not in the query that generates the report.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='S_USMGT'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L.SELECTOR_NUM=96774
AND A.CHARTFIELD1=L.RANGE_FROM_10 AND
A.CURRENCY_CD='USD' GROUP BY L4.TREE_NODE_NUM
nVision will delete dynamic selectors after they have been used.  However, if the report crashes or times out, rows will be left in the selector table, and this debris can build up over time.  So there is a requirement to manage these tables.
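A crude clean-up might simply delete selector rows that have no control row on PSTREESELCTL (i.e. that are not static).  This is only a sketch, not the compound-trigger solution described later in the series, and it must only be run when no nVision reports are executing, otherwise it will remove dynamic selectors that are still in use:

```sql
-- remove dynamic selector debris; run only when no nVision reports are running
DELETE FROM pstreeselect10 s
WHERE  NOT EXISTS (
         SELECT 'x'
         FROM   pstreeselctl c
         WHERE  c.selector_num = s.selector_num);
```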
Note that in this example when the selector was extracted, the tree leaf table was joined to the underlying table on which the tree was based (in this case GL_ACCOUNT_TBL).  This occurs when the single value join option is selected.  The tree selector contains a row for each value rather than for each tree leaf.  This will result in larger tree selectors where ranged leaves are used.
PeopleSoft do not provide any way to identify the tree used to create a dynamic selector.  From the SQL query, all we can see is the field to which it was joined.  The above example is a tree related to CHARTFIELD1.
Recommendations
Although static selectors are easier to manage, the decision to use dynamic selectors is driven by the need to use single value joins.
  • It is important to maintain up-to-date statistics on the selector tables.  As new selectors are extracted, a series of ever-increasing selector numbers are allocated from a sequence maintained on the table PSTREESELNUM.   If not maintained, the selector numbers will exceed the high value recorded in the column statistics for column SELECTOR_NUM, and the database will not correctly calculate the number of rows expected, resulting in inappropriate execution plans.
  • It is easier to manage optimizer statistics on static selectors because the PSTREESELCTL table records when they have been updated.  In fact, it is possible to put a trigger on that table to maintain statistics.  However, static selectors imply range predicates that bring other problems that I discuss in the next section.
  • Using compound triggers on the selector tables it is possible to maintain a log of selectors, maintain statistics on them and purge selectors left behind by nVision reports that have not cleaned up after themselves.  This is also used to identify the trees used in dynamic selectors.  I will describe this solution later.
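A minimal sketch of refreshing statistics on a selector table, so that the high value recorded for SELECTOR_NUM stays current.  In practice this would be driven by the triggers described above; the schema name is hypothetical:

```sql
BEGIN
  dbms_stats.gather_table_stats(
    ownname => 'SYSADM'         -- hypothetical PeopleSoft owning schema
   ,tabname => 'PSTREESELECT10'
   ,force   => TRUE);           -- override any statistics lock
END;
/
```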
Recommendation: Prior to Oracle 11g, I have recommended using static selectors and ranged joins, and by careful management of statistics on the tree selector and indexing of the ledger tables good results can be obtained.  The tree selectors are Cartesian joined together before using index lookups on the ledger tables.
However, this approach is generally not effective for queries that reference more than three tree selectors, nor for very large trees.  The size of the Cartesian product is the product of the number of rows extracted from each tree selector table.  For example, three selectors with 100 rows each will result in 1 million lookups on the ledger table.  As the size of the Cartesian product grows, the number of index look-ups also grows.  There comes a point when it is better to remove the least selective tree from the Cartesian product and hash join it after looking up the ledger.  Striking the correct balance can be difficult, and leads to using different performance options in different reports.
From Oracle 11g, and especially on engineered systems, I now prefer dynamic selectors because they permit the use of single value joins that can use Bloom filters and extended statistics.  However, where smaller trees that do not significantly cut down the query result set are in use, range joins may still be preferable, in which case so might static selectors.
Selector Options
This performance option controls how the tree selector is joined to the ledger table.  The choice is between single value equality joins and inequality joins.
Ranged Joins
There are two forms of ranged join: either using the BETWEEN keyword, or using a pair of inequalities.  There is no difference to Oracle, because the optimizer rewrites BETWEEN as a pair of inequalities.
NB: if you specify single value joins on a static selector, you still get an inequality ranged join because only the tree leaves are extracted to the selector.  nVision does this even if all of the leaves in the tree are single values and not ranges.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1
AND L4.SELECTOR_NUM=89052
AND A.CHARTFIELD3>= L4.RANGE_FROM_10
AND A.CHARTFIELD3 <= L4.RANGE_TO_10

AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
Single Value Joins
If single value joins are selected, nVision generates an equality join between the tree selector and the ledger table.  The tree selector must be dynamic, and nVision extracts all the intermediate values between the tree node ranges from the underlying detail table.
SELECT L4.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_LEDGER A, PSTREESELECT10 L4, PSTREESELECT10 L2, PSTREESELECT10 L
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2017
AND A.ACCOUNTING_PERIOD=1

AND L.SELECTOR_NUM=96774
AND A.CHARTFIELD1=L.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L4.TREE_NODE_NUM
Recommendations
In general, I would recommend using single value joins, and therefore also dynamic selectors.  The main advantage is that they enable two particular optimisations in Oracle.
  • Bloom filters only work with equality predicates, and therefore only with single value joins.  They enable a filter on one column of one table to be roughly mapped to another column of another table upon which there is an equality join.  This filters data earlier in a query, cutting down the size of the eventual join operation.  It is a rough match in that it may return some false positives to the subsequent hash operation, but it will not omit any results.  The hash join then does an exact match of the rows to produce the correct result set.  On an Engineered Oracle system this filter can also be pushed down to the storage cells. 
  • This is an example of the kind of execution plan that we should see when a Bloom filter is used.  A filter has been created on PSTREESELECT10 at line 3, and it is applied to PS_LEDGER at line 6.
-------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                                 | Name              | Rows | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT                          |                   |      |       | 12876 (100)|          |       |       |
|  1 |  HASH GROUP BY                            |                   |  501 | 35571 | 12876  (42)| 00:00:01 |       |       |
|  2 |   HASH JOIN                               |                   | 975K |   66M | 12822  (41)| 00:00:01 |       |       |
|  3 |    JOIN FILTER CREATE                     | :BF0000           | 2577 | 59271 |     7   (0)| 00:00:01 |       |       |
|  4 |     PARTITION RANGE SINGLE                |                   | 2577 | 59271 |     7   (0)| 00:00:01 | 87968 | 87968 |
|  5 |      INDEX FAST FULL SCAN                 | PS_PSTREESELECT10 | 2577 | 59271 |     7   (0)| 00:00:01 | 87968 | 87968 |
|  6 |    JOIN FILTER USE                        | :BF0000           | 1715K|   78M | 12804  (41)| 00:00:01 |       |       |
|  7 |     PARTITION RANGE ITERATOR              |                   | 1715K|   78M | 12804  (41)| 00:00:01 |    27 |    34 |
|  8 |      MAT_VIEW REWRITE ACCESS STORAGE FULL | PS_LEDGER         | 1715K|   78M | 12804  (41)| 00:00:01 |    27 |    34 |
-------------------------------------------------------------------------------------------------------------------------------
  • Extended statistics (i.e. on groups of columns) also only work with equality predicates.  Single value join queries can, therefore, benefit from them.
There are drawbacks to be overcome.
  • Single value joins require the use of dynamic selectors.  That in turn requires management of statistics on the dynamic selectors, and of the debris left in the tree selector tables by failed reports that have not cleaned up after themselves.
  • Single value joins can result in larger tree selector tables with many more rows than tree leaves.  For large trees, or trees with many values in leaf ranges, resulting in more than about 2000 rows in the tree selector tables, it can be advantageous to change the Access Method performance option from literal values to join, because the parse overhead becomes significant.

nVision Performance Tuning: 1. nVision Performance Options

Mon, 2017-10-16 14:26
This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

At no point when defining an nVision report does the developer directly specify a SQL statement; all the SQL is generated by nVision from the report definition.  That also means that it is not possible to directly intervene and change the SQL statement, for example, to add an optimizer hint.

However, the way that the SQL is generated can be controlled via the nVision performance options. Setting this appropriately can make a significant difference to nVision performance, but the optimal settings will differ from system to system, from tree to tree and sometimes even report to report.
Specification of Performance Options
nVision performance options are specified for named trees.  Since PeopleTools 8, they are defined in the tree definition.
Tree Performance Options in PIA
They are stored in the database in PeopleTools table PSTREEDEFN.
Field Name         Field Value  Description
TREE_ACC_METHOD    D            Use Application Defaults
                   J            Join to Tree Selector
                   L            Use Literal Values
                   S            Sub-SELECT Tree Selector
TREE_ACC_SELECTOR  D            Dynamic Selectors
                   S            Static Selectors
TREE_ACC_SEL_OPT   B            Range of Values (BETWEEN)
                   R            Range of Values (>=…<=)
                   S            Single Values
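Using those columns, the current settings for a given tree can be queried directly.  The tree name here is hypothetical:

```sql
SELECT tree_name, tree_acc_method, tree_acc_selector, tree_acc_sel_opt
FROM   pstreedefn
WHERE  tree_name = 'FUNCTION';
```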

Performance options can also be specified in individual layouts, and the specific definition in the layout overrides the generic definition in the tree.
Add-Ins->nVision->Layout Options
You can also see the trees defined in a layout file in the Name Manager in Excel.  The performance options are converted into a string of 5 characters.  Below you can see a tree called OPERATING_UNIT.  The easiest way to determine whether performance options are set in a layout is to look in the Name Manager, and the only reliable way to delete a performance option from a layout is to delete it from the Name Manager.
Formulas -> Name Manager

The string of flags in the NvsTree formula can be decoded with this table.
Position  Flag = Y                                      Flag = N                            Flag = S
1         Dynamic Selector                              Static Selector
2         Join to tree selector                         Suppress join; use literal values   Sub-SELECT tree selector
3         Range of values (BETWEEN)                     Range of values (>= … <=)
4         Single Value (overrides position 3)
5         Non-specific node criteria (above 2 billion)

Recommendations
It is rare to need to set different performance options for the same tree in different layouts.  Therefore, it is easiest to set performance options at tree level, and not in layouts at all unless absolutely necessary.
Tree performance options can even be updated in bulk by SQL script, though it is essential to maintain the version numbers correctly.  Care should be taken when migrating trees from other environments to ensure that the performance options are still correctly set, as the performance options are part of the tree being migrated.
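A simplified sketch of such a bulk update follows.  The PSVERSION handling shown is the bare minimum; version management in PeopleTools has further subtleties, and the tree name is hypothetical, so test carefully before using anything like this:

```sql
UPDATE psversion
SET    version = version + 1
WHERE  objecttypename IN ('SYS','TDM');

UPDATE pstreedefn
SET    tree_acc_method   = 'L' -- use literal values
,      tree_acc_selector = 'D' -- dynamic selectors
,      tree_acc_sel_opt  = 'S' -- single values
,      version = (SELECT version FROM psversion WHERE objecttypename = 'TDM')
WHERE  tree_name = 'FUNCTION'; -- hypothetical tree
```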
It is now possible to give clear advice to developers and especially business users who create nVision reports:
  • Don't use them.
  • If you have specified them in layouts in the past, then remove them unless you really do need to use a different option in a particular report.
Some customers have written Visual Basic macros to work through all their layouts and remove all tree performance options settings.

nVision Performance Tuning: Introduction

Tue, 2017-10-10 15:41
This blog post is the first in a series that discusses how to get good performance from nVision as used in General Ledger reporting.

PS/nVision is a PeopleTools technology that extracts data from the database and places it in an Excel spreadsheet (see PS/nVision Overview).  Although PS/nVision can be used with any PeopleSoft product, it is most commonly used in Financials General Ledger.

The SQL queries generated by nVision are, at least conceptually, similar to data warehouse queries. The ledger, ledger budget or summary ledger tables are the fact tables.

The ledger tables are analysed by their attribute columns. There are always literal conditions on the fiscal year and accounting period, there is usually a literal condition on currency code.  Then there are criteria on some of the other attributes.  I will take an example that analyses the ledger table in three dimensions: BUSINESS_UNIT, ACCOUNT and CHARTFIELD1, but there are many other attribute columns on the ledger tables.  These attributes are defined in lookup tables in the application, but their hierarchies are defined in trees.

nVision reports use the trees to determine which attribute values to report.  A report might report on a whole tree, or particular nodes, or branches of a tree.  nVision joins the tree definition to the attribute table and produces a list of attributes to be reported.  These are put into working storage tree selector tables (PSTREESELECT01 to 30).  The choice of selector table is controlled by the length of the attribute column.  BUSINESS_UNIT is a 5 character column so it goes into PSTREESELECT05. CHARTFIELD1 and ACCOUNT are 10 character columns so they use PSTREESELECT10.  These selector tables form the dimensions in the queries.

Here is an example of a SQL statement generated by nVision.  The tree selector 'dimension' tables are joined to the ledger 'fact' table.

SELECT L.TREE_NODE_NUM,L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A
, PSTREESELECT05 L1
, PSTREESELECT10 L
, PSTREESELECT10 L2
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2016
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 11
AND L1.SELECTOR_NUM=30982 AND A.BUSINESS_UNIT=L1.RANGE_FROM_05
AND L.SELECTOR_NUM=30985 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND L2.SELECTOR_NUM=30984 AND A.ACCOUNT=L2.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L.TREE_NODE_NUM,L2.TREE_NODE_NUM
This SQL looks simple enough, but there are various complexities:
  • The tree selector tables are populated at runtime.  Many dimensions can be stored in each tree selector table, each keyed by a different SELECTOR_NUM.
  • Selectors can be static or dynamic.  With dynamic selectors, the data is stored only temporarily, for the lifetime of the report, and is deleted when it completes.  So immediately there is a challenge of keeping statistics up to date, and even then Oracle doesn't always manage to find an effective execution plan.
  • Different selectors will have different numbers of rows, so the statistics have to describe that skew.
  • Different nVision reports, and even different parts of the same report, generate different statements that can use different combinations of attribute columns.  The number of dimensions can vary; I have seen systems that use as many as five different trees in a single query.
  • Then the database needs to find the relevant rows on the ledger table for the dimensions specified as efficiently as possible.
This very quickly becomes a difficult and complex problem.  This series of articles works through the various challenges and describes methods to overcome them.  Not all of them are applicable to all systems; in some cases, it will be necessary to choose between approaches depending on circumstances.

nVision Performance Tuning: Table of Contents

Tue, 2017-10-10 15:39
This post is an index for a series of blog posts that discuss how to get good performance from nVision as used in General Ledger reporting.  As the posts become available links will be updated in this post.
  • Introduction
  • nVision Performance Options
  • Indexing of Ledger, Budget and Summary Ledger Tables
  • Partitioning of Ledger, Budget and Summary Ledger Tables
  • Additional Oracle Instrumentation for nVision
  • Logging Selector Usage
  • Analysis of Tree Usage with the Selector Log
  • Interval Partitioning and Statistics Maintenance of Selector Tables
  • Compression without the Advanced Compression option
  • Maintaining Statistics on Non-Partitioned Selector Tables
  • Excel -v- OpenXML
The current versions of scripts mentioned in the series will be made available on GitHub.

