Feed aggregator

Snippet: How to query the Sales Cloud user's username in Groovy and in EL

Angelo Santagata - Thu, 2015-02-19 04:16
I've decided to create a new category called "Snippets" to capture small chunks of information which might be useful to other people.

In Groovy
// Get the security context
def secCtx = adf.context.getSecurityContext()
// Check if user has a given role
if (secCtx.isUserInRole('MyAppRole')) {
  // get the current user's name
  def user = secCtx.getUserName()
  // Do something if user belongs to MyAppRole
}


In an EL Expression
#{securityContext.userName} 

Test MySQL on AWS quickly

Kubilay Çilkara - Thu, 2015-02-19 02:20
Using sysbench to performance test AWS RDS MySQL hardware is an easy three-step operation. Sysbench creates synthetic tests which are run against a 1 million row 'sbtest' table that sysbench creates in the MySQL database you indicate. The test doesn't intrude on your database schema, and it doesn't use your data, so it is quite safe. The test is an OLTP test which tries to simulate event operations in the database as it runs various SELECT, INSERT, UPDATE and DELETE requests on its own 'sbtest' table. The results of the tests are metrics like transactions per second, number of events, elapsed time etc. See the man pages for a description, and Google it; it is a popular testing tool. You can also set it up to control how many requests (events) you want it to execute in a given time, or tell it to keep on executing requests until you stop it, for destruction testing. It is a very flexible testing tool with many options, including throttling concurrency. You can be up and running with 3 commands on a unix system as follows.

Download the sysbench tool (doing this on Ubuntu):

sudo apt-get install sysbench

Create a table with 1 mil rows:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} prepare

Test with different parameters:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} --max-time=60 --num-threads=550 run

Warning: Synthetic tests will just show you the ability of the hardware at a given standard set of requests and DML operations. They are in no way an indication of what will happen to your database if the real workload increases because of the applications. Application load testing is something else; applications are complex! Database workload is dependent on the application-generated workload from real users using the system, and it is very hard to simulate that in a test. It is not impossible if you use a database, such as Oracle, which has the capability of recording and replaying its production database workload - called Automatic Workload Repository (AWR). In MySQL I couldn't find a way to do this so far. But sysbench synthetic tests gave me the ability to quickly benchmark and baseline a MySQL database's capabilities on different AWS Amazon hardware; something is better than nothing, I suppose.
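A quick sanity check and cleanup can be done from any MySQL client - a minimal sketch, assuming the default 'sbtest' table that sysbench creates:

-- confirm the prepare step populated the 1 million row test table
SELECT COUNT(*) FROM sbtest;
-- drop the test table when you are finished (sysbench's cleanup mode does the same)
DROP TABLE sbtest;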
Categories: DBA Blogs

PeopleTools 8.54: Materialized Views

David Kurtz - Wed, 2015-02-18 15:30
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
Materialized Views in the Database
Snapshots were introduced in Oracle 7.  They were a way of building and maintaining a physical table to hold the results of a SQL query.  We are well used to a view being the results of a query, but here the results are materialised into a physical table.  Hence the feature was renamed materialized views in Oracle 8i.

Today materialized views are one among many replication technologies.  They are available in standard edition Oracle, and there is no licensing implication in their use.

Materialized views can generally be put into two categories:
  • A simple, single-table materialized view. 
    • Often used to replicate data across a database link from another database.  Or to produce a subset of the data.
    • Can be refreshed incrementally using a PL/SQL package supplied by Oracle
      •  A materialized view log is created on the source table to record all the changes made to the source table.  It holds the primary key of the changed row, or the rowid (physical address of the row).  It can optionally hold additional columns.  It is populated by a database trigger on the source table (since Oracle 8i that trigger is hidden).
  • Multi-table materialized view
    • Usually done within a single database rather than across a database link.
    • Can only be refreshed by being completely rebuilt. 
    • If the same query as is used in a materialized view is submitted, Oracle can rewrite the query to use the materialized view instead.  Query rewrite only occurs subject to configuration and certain pre-requisites being met.
Materialized Views can be populated immediately when they are built, or later on demand.  They can be refreshed on demand, on a regular schedule by a database job, or immediately when an update to a source table is committed.  Materialized views can be put into Refresh Groups.  All the materialized views in a refresh group are refreshed in the same database transaction so the data in the materialized views is consistent.
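For illustration, outside PeopleTools a refresh group can be created and refreshed with the Oracle-supplied DBMS_REFRESH package.  This is only a sketch; the group and view names are hypothetical:

BEGIN
  DBMS_REFRESH.MAKE(
    name      => 'SYSADM.HR_MVIEWS',  -- hypothetical refresh group
    list      => 'PS_MV_A,PS_MV_B',   -- hypothetical materialized views
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/24');   -- refresh hourly
END;
/
-- refresh every member of the group in a single transaction
EXECUTE DBMS_REFRESH.REFRESH('SYSADM.HR_MVIEWS')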

Materialized views can be updatable and used for bidirectional replication.  I am not going to talk about that here.

When you introduce materialized views into an application you need to consider what you are trying to achieve, and make design decisions accordingly.
Materialized Views in PeopleTools 8.54
Using this new feature in PeopleSoft is deceptively easy, but quite a lot is going on behind the scenes.
PeopleSoft Documentation (the term PeopleBooks seems to have been replaced by PeopleSoft Online Help): Data Management, Using Materialized Views provides an overview.

There are two new PeopleTools tables:
  • PSPTMATVWDEFN - additional definition fields for the materialized view: build, refresh, staleness, stats.  It doesn't contain the query; that is in PSSQLTEXTDEFN as it is for all other views.
  • PSPTMATVWDEP - lists the tables upon which the materialized view depends.  PeopleSoft seems to work this out for itself by parsing the SQL query.
I am going to demonstrate some aspects of the feature by running through some examples.
Example 1: Replicate part of PS_JOB across a database link
In this example I am going to use a materialized view to replicate a table from another Oracle database across a database link.
If I select SQL View, the Materialized View check box appears; if I check it, the Materialized View Options appear.

This is the build script generated by Application Designer:
DROP VIEW PS_DMK
/
CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
DEPTID FROM PS_JOB@HROTHER
/
UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
= 'DMK'
/
DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
/
DROP VIEW PS_DMK
/
CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
/
UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 0,
PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
= 'DMK'
/
However, if the materialized view already exists, the script will drop it, recreate and drop the view, and then recreate the materialized view.
DROP MATERIALIZED VIEW PS_DMK
/
UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
= 'DMK'
/
CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
DEPTID FROM PS_JOB@HROTHER
/
UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
= 'DMK'
/
DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
/
DROP VIEW PS_DMK
/
CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH FAST
ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
/
UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
(PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
= 'DMK'
/
  • The Application Designer build script creates the materialized view using primary key based replication.  The WITH PRIMARY KEY clause is not specified because it is the default.  There appears to be no way to get Application Designer to generate a WITH ROWID clause, so it is not possible to replicate a single table without a unique key.  You might question whether that is useful, but it is possible in Oracle.
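    For comparison, this is roughly how a rowid-based materialized view would be built manually in SQL.  This is a sketch only, since Application Designer will not generate it; the materialized view log must be created on the source database at the other end of the database link:
CREATE MATERIALIZED VIEW LOG ON PS_JOB WITH ROWID
/
CREATE MATERIALIZED VIEW PS_DMK REFRESH FAST WITH ROWID AS
SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB@HROTHER
/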
  • If there is no primary key on the source table, you will need to add one.  If this is on another system, or a pre-8.54 PeopleSoft system, you will need to do this manually.  Otherwise you will get this error message:
ERROR at line 4:
ORA-12014: table 'PS_JOB' does not contain a primary key constraint
  • If you specify any key columns on the materialized view it does not result in the usual indexes that you get on tables.  Also, it is not possible to add additional user-specified indexes to the materialized view - the option is greyed out.  This is rather disappointing, because you might want to do exactly that so you can query the materialized view in different ways to the underlying table.
    • You will get a unique index on a materialized view that is replicated by primary key, because the primary key will be inherited from the underlying table.
  • Nor is it possible to specify partitioning in Application Designer on a materialized view.
  • You can specify storage options on the materialized view via Record DDL, but the storage options do not appear in the CREATE MATERIALIZED VIEW statement in the build script.  This is rather disappointing because you don't need to provide free space for updates in a materialized view which is completely refreshed each time, but you might if you do incremental update.
Example 2: Replicate part of PS_JOB locally
In this example, I am again only replicating 6 named columns into my materialized view.
DROP VIEW PS_DMK
/
CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
DEPTID FROM PS_JOB
/
UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
= 'DMK'
/
ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
/
DROP MATERIALIZED VIEW LOG ON PS_JOB
/
DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
'PS_JOB'
/
ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
, EMPLID, EMPL_RCD)
/
CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
IMMEDIATE
/
INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
'PS_JOB')
/
DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
/
DROP VIEW PS_DMK
/
UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
= 'DMK'
/
I don't know why it rebuilds the non-materialized view as a normal view and drops the primary key constraint every time, but it does.  You might not want to do this every time for a large materialized view that takes time to build.
If the materialized view log has been built, next time you generate the view build script it creates and drops the view and then builds the materialized view.
CREATE VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ, SETID_DEPT,
DEPTID) AS SELECT EMPLID , EMPL_RCD , EFFDT , EFFSEQ , SETID_DEPT ,
DEPTID FROM PS_JOB
/
DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
'PS_JOB'
/
UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 0, PTMAT_REFRESHSTAT = 0,
PTMAT_LASTREFRESH = TO_TIMESTAMP('1900-01-01-00.00.00.000000'
,'YYYY-MM-DD-HH24.MI.SS.FF'), PTMAT_STALENESS = 'STALE' WHERE RECNAME
= 'DMK'
/
ALTER TABLE PS_JOB DROP CONSTRAINT PS_JOB_PK
/
DROP MATERIALIZED VIEW LOG ON PS_JOB
/
DELETE FROM PSPTMATVWDEP WHERE RECNAME = 'DMK' AND PTMAT_BASETBL =
'PS_JOB'
/
ALTER TABLE PS_JOB ADD CONSTRAINT PS_JOB_PK PRIMARY KEY (EFFDT, EFFSEQ
, EMPLID, EMPL_RCD)
/
CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW WITH PRIMARY
KEY, ROWID, SEQUENCE(DEPTID, SETID_DEPT) INCLUDING NEW VALUES PURGE
IMMEDIATE
/
INSERT INTO PSPTMATVWDEP(RECNAME, PTMAT_BASETBL) VALUES('DMK',
'PS_JOB')
/
DELETE FROM MV_CAPABILITIES_TABLE WHERE MVNAME = 'PS_DMK'
/
DROP VIEW PS_DMK
/
CREATE MATERIALIZED VIEW PS_DMK (EMPLID, EMPL_RCD, EFFDT, EFFSEQ,
SETID_DEPT, DEPTID) TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST
ON DEMAND DISABLE QUERY REWRITE AS SELECT EMPLID , EMPL_RCD , EFFDT ,
EFFSEQ , SETID_DEPT , DEPTID FROM PS_JOB
/
UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
(PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK') WHERE RECNAME
= 'DMK'
/
  • It is rather odd to see a build script update a PeopleTools table. Application Designer also updates the PSPTMATVWDEFN table itself every time it generates the build script.  Note that the script doesn't issue an explicit commit, so if you execute the build script in SQL*Plus remember to commit to release the row level lock on PSPTMATVWDEFN.  
  • Application Designer flip-flops between these two build scripts that will repeatedly drop and create the materialized view and materialized view log.  Unless you are very careful you might not know whether you have the objects in the desired state.
  • The materialized view and materialized view log are always created in tablespace PSMATVW.  This is a new tablespace delivered in the standard tablespace script.  Unlike a normal table, it is not possible to set the tablespace to something else because the record is defined as a view.  This is unfortunate because I might not want all my materialized views in the same tablespace.
  • Even though the materialized view is replicated by primary key, the materialized view log also contains the rowid and the supplementary columns.  This is overkill.  The materialized view log as built by Application Designer contains every length-bounded column in the source table.  This can significantly increase the overhead of the materialized view log, which is maintained as other processes update the source table.
SQL> desc mlog$_ps_job
Name                          Null?    Type
----------------------------- -------- -------------------
EFFDT                                  DATE
EFFSEQ                                 NUMBER
EMPLID                                 VARCHAR2(11 CHAR)
EMPL_RCD                               NUMBER
DEPTID                                 VARCHAR2(10 CHAR)
SETID_DEPT                             VARCHAR2(5 CHAR)
M_ROW$$                                VARCHAR2(255 CHAR)
SEQUENCE$$                             NUMBER
SNAPTIME$$                             DATE
DMLTYPE$$                              VARCHAR2(1 CHAR)
OLD_NEW$$                              VARCHAR2(1 CHAR)
CHANGE_VECTOR$$                        RAW(255)
XID$$                                  NUMBER
    • If I just created the materialized view log as follows
CREATE MATERIALIZED VIEW LOG ON PS_JOB TABLESPACE PSMATVW 
WITH PRIMARY KEY
--, ROWID, SEQUENCE(DEPTID, SETID_DEPT)
INCLUDING NEW VALUES PURGE IMMEDIATE
/
    • then the materialized view log contains fewer columns
Name                          Null?    Type
----------------------------- -------- --------------------
EFFDT                                  DATE
EFFSEQ                                 NUMBER
EMPLID                                 VARCHAR2(11 CHAR)
EMPL_RCD                               NUMBER
SNAPTIME$$                             DATE
DMLTYPE$$                              VARCHAR2(1 CHAR)
OLD_NEW$$                              VARCHAR2(1 CHAR)
CHANGE_VECTOR$$                        RAW(255)
XID$$                                  NUMBER
  • The materialized view inherits the primary key from the source table because it is a single table materialized view replicated using the primary key.  Therefore there is also a unique index on this materialised view.
SELECT constraint_name, constraint_type, table_name, index_name
FROM user_constraints
WHERE table_name = 'PS_DMK'
AND constraint_type != 'C'
/

CONSTRAINT_NAME      C TABLE_NAME INDEX_NAME
-------------------- - ---------- ----------
PS_JOB_PK1           P PS_DMK     PS_JOB_PK1

SELECT index_name, index_type, uniqueness
FROM user_indexes
WHERE table_name = 'PS_DMK'
/

INDEX_NAME INDEX_TYPE UNIQUENES
---------- ---------- ---------
PS_JOB_PK1 NORMAL     UNIQUE

SELECT index_name, column_position, column_name, descend
FROM user_ind_columns
WHERE table_name = 'PS_DMK'
/

INDEX_NAME COLUMN_POSITION COLUMN_NAME          DESC
---------- --------------- -------------------- ----
PS_JOB_PK1               1 EFFDT                ASC
PS_JOB_PK1               2 EFFSEQ               ASC
PS_JOB_PK1               3 EMPLID               ASC
PS_JOB_PK1               4 EMPL_RCD             ASC

  • The build script clears out the MV_CAPABILITIES_TABLE when it drops the materialized view.  This table is used to hold the output from DBMS_MVIEW.EXPLAIN_MVIEW (see the Oracle Database Data Warehousing Guide), which Application Designer executes when the materialized view record is saved.
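    You can run the same analysis manually (a sketch):
EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW('PS_DMK')

SELECT capability_name, possible, msgtxt
FROM mv_capabilities_table
WHERE mvname = 'PS_DMK'
/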

  • Example 3: DMK_DPT_SEC_MVW is a materialised view that is cloned from the security view DEPT_SEC_SRCH.  This view references various tables (I have edited out column lists and predicates for readability):

    SELECT …
    FROM PS_DEPT_TBL DEPT
    , PSOPRDEFN OPR
    WHERE EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
    OR EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
    OR EXISTS (
    SELECT 'X'
    FROM PS_SJT_DEPT SEC
    , PS_SJT_CLASS_ALL CLS
    , PS_SJT_OPR_CLS SOC
    …)
    But only 4 tables appear in PSPTMATVWDEP.  PS_SJT_DEPT was not added.
    SELECT * FROM psptmatvwdep WHERE recname = 'DMK_DPT_SEC_MVW'
    /

    RECNAME PTMAT_BASETBL
    --------------- ------------------
    DMK_DPT_SEC_MVW PSOPRDEFN
    DMK_DPT_SEC_MVW PS_DEPT_TBL
    DMK_DPT_SEC_MVW PS_SJT_CLASS_ALL
    DMK_DPT_SEC_MVW PS_SJT_OPR_CLS
    I think this is because it tried and failed to add a primary key constraint and a materialized view log to PS_SJT_DEPT because it has a 'duplicate key' defined in Application Designer.  The following errors are found in the build log even if the build script is not executed.
    ALTER TABLE PS_SJT_DEPT ADD CONSTRAINT PS_SJT_DEPT_PK PRIMARY KEY 
    (SCRTY_KEY1, SCRTY_KEY2, SCRTY_KEY3, SCRTY_TYPE_CD, SETID)
    Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
    - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated

    CREATE MATERIALIZED VIEW LOG ON PS_SJT_DEPT TABLESPACE PSMATVW
    WITH PRIMARY KEY, ROWID, SEQUENCE (DEPTID, EFFDT_NOKEY)
    INCLUDING NEW VALUES PURGE IMMEDIATE
    Error: DMK_DPT_SEC_MVW - SQL Error. Error Position: 39 Return: 2437
    - ORA-02437: cannot validate (SYSADM.PS_SJT_DEPT_PK) - primary key violated
    Application Designer worked out that PS_SJT_DEPT was referenced in the materialized view query, but it didn't check whether the table has a unique key defined in PeopleTools.

    We didn't get as far as creating the materialized view.  However, Application Designer passed the create Materialized View command to the EXPLAIN_MVIEW function in order to populate MV_CAPABILITIES_TABLE:
    EXECUTE DBMS_MVIEW.EXPLAIN_MVIEW (q'[CREATE MATERIALIZED VIEW PS_DMK_DPT_SEC_MVW (SETID, OPRID, DEPTID, DESCR
    , DESCRSHORT, SETID_LOCATION, LOCATION, MANAGER_ID, COMPANY, USE_BUDGETS, USE_ENCUMBRANCES)
    TABLESPACE PSMATVW BUILD DEFERRED REFRESH FAST ON DEMAND AS
    SELECT DEPT.SETID, OPR.OPRID, DEPT.DEPTID , DEPT.DESCR , DEPT.DESCRSHORT , DEPT.SETID_LOCATION
    , DEPT.LOCATION , DEPT.MANAGER_ID , DEPT.COMPANY , DEPT.USE_BUDGETS , DEPT.USE_ENCUMBRANCES
    FROM PS_DEPT_TBL DEPT , PSOPRDEFN OPR
    WHERE EXISTS (
    SELECT 'X' FROM PS_SJT_DEPT SEC , PS_SJT_CLASS_ALL CLS , PS_SJT_OPR_CLS SOC
    WHERE SEC.SETID = DEPT.SETID AND SEC.DEPTID = DEPT.DEPTID AND SEC.EFFDT_NOKEY = DEPT.EFFDT
    AND CLS.SCRTY_SET_CD = 'PPLJOB' AND CLS.SCRTY_TYPE_CD = '001' AND CLS.TREE = 'Y'
    AND CLS.SCRTY_KEY1 = SEC.SCRTY_KEY1 AND CLS.SCRTY_KEY2 = SEC.SCRTY_KEY2
    AND CLS.SCRTY_KEY3 = SEC.SCRTY_KEY3 AND SOC.OPRID
    Example 4: DMK_JOB_CUR_MVW is a materialized view cloned from JOB_CURR_ALL_VW
    In this case I will try to create a materialized view on a complex query, but this time the underlying table has a unique key.  When I try to build the materialized view I get the following entries in the error log.  These warnings were obtained from the entries in MV_CAPABILITIES_TABLE, which was populated by an attempt to describe the query.
    SQL Build process began on 16/02/2015 at 21:05:30 for database HR92U011. 
    Error: Cannot create Materialized View on record DMK_JOB_CUR_MVW.
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_COMPLETE| Y | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST| N | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE| N | | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | aggregate function in mv | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_INSERT| N | multiple instances of the same table or view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ONETAB_DML| N | see the reason why REFRESH_FAST_AFTER_INSERT is disabled | |
    Warning: | PS_DMK_JOB_CUR_MVW | REFRESH_FAST_AFTER_ANY_DML| N | see the reason why REFRESH_FAST_AFTER_ONETAB_DML is disabled | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | Oracle error: see RELATED_NUM and RELATED_TEXT for details |expression not supported for query rewrite |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_FULL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | materialized view cannot support any type of query rewrite | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_PARTIAL_TEXT_MATCH| N | query rewrite is disabled on the materialized view | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | subquery present in the WHERE clause | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | materialized view cannot support any type of query rewrite | |
    Warning: | PS_DMK_JOB_CUR_MVW | REWRITE_GENERAL| N | query rewrite is disabled on the materialized view | |

    SQL Build process ended on 16/02/2015 at 21:05:30.
    1 records processed, 1 errors, 15 warnings.
    SQL Build script for all processes written to file C:\Temp\PSBUILD.SQL.
    SQL executed online.
    SQL Build log file written to C:\Temp\PSBUILD.LOG.
    • So, Application Designer does try to prevent you from creating materialized views that Oracle won't manage, but the messages back are a little obscure.
    • If I change the refresh mode to Complete, Application Designer does not create materialized view logs. 
    CREATE MATERIALIZED VIEW PS_DMK_JOB_CUR_MVW (EMPLID, EMPL_RCD,
    ACTION_DT, BUSINESS_UNIT, EMPL_STATUS, HR_STATUS, DEPTID, JOBCODE,
    LOCATION, POSITION_NBR, ACTION, ACTION_REASON, COMP_FREQUENCY,
    COMPRATE, CURRENCY_CD, SAL_ADMIN_PLAN, GRADE, COMPANY, PAY_SYSTEM_FLG
    , PAYGROUP, REG_TEMP, FULL_PART_TIME, SETID_DEPT, SETID_JOBCODE,
    SETID_LOCATION, PER_ORG) TABLESPACE PSMATVW BUILD IMMEDIATE REFRESH
    COMPLETE ON DEMAND AS SELECT A.EMPLID ,A.EMPL_RCD ,A.EFFDT
    ,A.BUSINESS_UNIT ,A.EMPL_STATUS ,A.HR_STATUS ,A.DEPTID ,A.JOBCODE
    ,A.LOCATION ,A.POSITION_NBR ,A.ACTION ,A.ACTION_REASON
    ,A.COMP_FREQUENCY ,A.COMPRATE ,A.CURRENCY_CD ,A.SAL_ADMIN_PLAN
    ,A.GRADE ,A.COMPANY ,A.PAY_SYSTEM_FLG ,A.PAYGROUP ,A.REG_TEMP
    ,A.FULL_PART_TIME ,A.SETID_DEPT ,A.SETID_JOBCODE ,A.SETID_LOCATION
    ,A.PER_ORG FROM PS_JOB A WHERE A.EFFDT = ( SELECT MAX (C.EFFDT) FROM
    PS_JOB C WHERE C.EMPLID = A.EMPLID AND C.EMPL_RCD = A.EMPL_RCD AND
    ((C.EFFDT <= TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD')) OR
    (A.EFFDT > TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') AND
    TO_DATE(TO_CHAR(SYSDATE,'YYYY-MM-DD'),'YYYY-MM-DD') < ( SELECT
    MIN(J2.EFFDT) FROM PS_JOB J2 WHERE J2.EMPLID = A.EMPLID AND
    J2.EMPL_RCD = A.EMPL_RCD) ) )) AND A.EFFSEQ = ( SELECT MAX(D.EFFSEQ)
    FROM PS_JOB D WHERE D.EMPLID = A.EMPLID AND D.EMPL_RCD = A.EMPL_RCD
    AND D.EFFDT = A.EFFDT)
    /
    UPDATE PSPTMATVWDEFN SET PTMAT_MATSTAT = 1, PTMAT_REFRESHSTAT = 1,
    (PTMAT_LASTREFRESH, PTMAT_STALENESS) = (SELECT LAST_REFRESH_DATE,
    STALENESS FROM USER_MVIEWS WHERE MVIEW_NAME = 'PS_DMK_JOB_CUR_MVW')
    WHERE RECNAME = 'DMK_JOB_CUR_MVW'
    /
    • Also, it doesn't create a primary key constraint on either the underlying table or the materialized view.  So this materialized view doesn't have any indexes.
    Query ReWrite
    One common use of complex materialized views is to allow the optimizer to rewrite a query to use the materialized view when it sees the same query as was used to create the materialized view.  Optionally, the optimizer will also check that the view is up to date.  I have added the enable query rewrite clause.
    DROP MATERIALIZED VIEW PS_DMK_PER_DEP_MVW
    /
    CREATE MATERIALIZED VIEW PS_DMK_PER_DEP_MVW (SETID_DEPT, DEPTID, EFFDT
    , DESCR) TABLESPACE PSMATVW
    BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND
    enable query rewrite
    AS SELECT A.SETID ,A.DEPTID ,A.EFFDT ,A.DESCR FROM PS_DEPT_TBL A
    WHERE A.EFFDT= ( SELECT MAX(B.EFFDT) FROM PS_DEPT_TBL B WHERE A.SETID
    =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
    ,'YYYY-MM-DD'),'YYYY-MM-DD'))
    /
    However, expressions - in this case one generated to determine the current effective-dated department - are not supported for query rewrite.
    =B.SETID AND A.DEPTID= B.DEPTID AND B.EFFDT<=TO_DATE(TO_CHAR(SYSDATE
    *
    ERROR at line 7:
    ORA-30353: expression not supported for query rewrite
    This could make it very difficult to use the feature in PeopleSoft. If you want to use the materialized view you are likely to have to reference it explicitly in the code.
    Refreshing Materialized Views
    There is a new component to manage the refresh frequency of materialized views.
    PeopleTools-> Utilities-> Administration -> Oracle Materialized Views -> Materialized View Maintenance
    This component will schedule an Application Engine process called PTMATREFVW.
    What this Application Engine actually does is to specify the refresh frequency for the materialized view:
    &AlterSQL = "alter materialized view " | &mview_name 
    | " REFRESH NEXT SYSDATE + (" | &MatRecords.PTMAT_REFINT.Value | "/86400)";
    So the command issued is just:
    alter materialized view PS_DMK REFRESH NEXT SYSDATE + (4242/86400)
    Effectively, for each materialized view, this creates a refresh group and a database job that refreshes it.
    SELECT rname, next_date, interval FROM user_refresh
    /

    RNAME      NEXT_DATE INTERVAL
    ---------- --------- -------------------------
    PS_DMK     24-JAN-15 SYSDATE + (4242/86400)

    SELECT name, type, rname, job, next_date, interval FROM user_refresh_children
    /

    NAME       TYPE       RNAME             JOB NEXT_DATE INTERVAL
    ---------- ---------- ---------- ---------- --------- -------------------------
    PS_DMK     SNAPSHOT   PS_DMK             21 24-JAN-15 SYSDATE + (4242/86400)

    SELECT job, next_date, next_Sec, interval, what FROM dba_jobs
    /

    JOB        NEXT_DATE NEXT_SEC INTERVAL
    ---------- --------- -------- -------------------------
    WHAT
    --------------------------------------------------
    21         24-JAN-15 11:48:52 SYSDATE + (4242/86400)
    dbms_refresh.refresh('"SYSADM"."PS_DMK"');
    • But I might want to group related materialized views together into a single refresh group.
    • I might want to refresh the job at a particular time, which can be done with a more sophisticated function in the interval. 
    • I might prefer to refresh a materialized view at a particular point in a batch schedule.  So I might prefer to code that into an Application Engine, or have the Application Engine submit a job that only fires once and does not resubmit (depending on whether I want to wait for the refresh); a sketch follows below.
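    For example, a batch step could call the Oracle-supplied refresh procedures directly - a sketch, using the PS_DMK refresh group shown above:
    -- complete refresh of a single materialized view
    EXECUTE DBMS_MVIEW.REFRESH('PS_DMK', method => 'C')
    -- or refresh the whole refresh group in one transaction
    EXECUTE DBMS_REFRESH.REFRESH('SYSADM.PS_DMK')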
    My Recommendations
    • Good Things
      • The removal of the descending indexes and the creation of the primary key is a good thing
      • It is useful to be able to define the materialized view query in PeopleTools along with the rest of the application.
      • The use of EXPLAIN_MVIEW to test the validity of the materialized view and to populate MV_CAPABILITIES_TABLE is clever, but the messages are obscure and should be better documented.
    • Bad Things
      • There is no check that a source table in the local database has a unique key that will support a primary key.
      • I can't build indexes on materialized views, although the primary key will be inherited automatically on single-table materialized views.  So you will have to handle that manually outside PeopleTools.
      • There is no support for rowid based materialized views.
      • The materialized view logs created by Application Designer are totally overblown - there is far too much data being logged.  They should be either primary key or rowid (if primary key is not possible), but not both.  I cannot see the point of the additional columns.  I think they are a waste of resource.
      • The flip-flopping of the build script is confusing; you will never be completely sure what you have in the database. 
      • The script drops the materialized view unnecessarily, which will drop any indexes that you have created manually!
    • I think some of the problems stem from trying to graft materialized views onto the existing view record type, instead of creating a new record type and building it into Application Designer properly and giving it the attributes of both a table and a view.
    • There is not enough control over when a materialized view is refreshed.  Just a time interval is not good enough.  In most systems, you need better control.
    • It is clearly going to be difficult getting database query rewrite to work with complex materialized views in PeopleSoft, especially if effective-date logic is required.  However, that is a reflection on the application code rather than the way support for materialized views has been implemented.
     When you design materialized views into your application you need to make careful choices about:
    • Refresh method
      • Complete - pretty much mandatory for multi-table views, but there are some exceptions to this rule described in the Oracle database documentation.
      • Fast - only for single table - rarely used within a single database - more commonly used for moving data between databases.  In which case, you need database links and the materialized view log goes onto the source database and the materialized view is created on the target.
    • Refresh mode
      • On commit - this is potentially dangerous because it could happen too often and there won't be a saving.  Possible exception being when the underlying table is only ever updated by a batch process
      • On demand - manually issue refresh at a specific point in a batch
      • On schedule by a database job.
    • Build options
      • Immediate - the materialized view is populated when it is created
      • Deferred - the materialized view is not populated when it is created, and will have to be completely refreshed at some point in the future.
    Choice of refresh method and mode is often a functional question.  How stale can the materialized view be allowed to be, especially if you can't get query rewrite to work?  Thoughtful design is required.  I have seen systems severely degraded by the query and redo overhead of excessive refresh.
    PeopleTools support of materialized views certainly has some good things, but as it stands it is of limited use because it still leaves you with a lot of manual administration to do.

    In most systems it is the DBAs who will have to manage the materialized views.  They are generally resistant to using PeopleSoft specific tools to do database administration.  That is going to be even more challenging when only a part of the job can be done in PeopleSoft.

    dramatic differences of in memory scanning performance on range queries

    Karl Reitschuster - Tue, 2015-02-17 11:07
    Given the following two identical tables - one created out of the other - replicated in memory with the Oracle In-Memory option, the same SQL is run on each.
    Each table covers 24m rows.
        
    Same structure ...

    Next Generation Outline Extractor - New Version Available

    Tim Tow - Mon, 2015-02-16 14:42
    Today we released a new version of the Next Generation Outline Extractor, version 2.0.3.769.  Here are the release notes from this new version:

    Version 2.0.3.769 supports the following Essbase versions:

    9.3.1
    9.3.1.1
    9.3.1.2
    9.3.3
    11.1.1
    11.1.1.1
    11.1.1.2
    11.1.1.3
    11.1.1.4
    11.1.2
    11.1.2.1
    11.1.2.1.102
    11.1.2.1.103
    11.1.2.1.104
    11.1.2.1.105
    11.1.2.1.106
    11.1.2.2
    11.1.2.2.102
    11.1.2.2.103
    11.1.2.2.104
    11.1.2.3
    11.1.2.3.001
    11.1.2.3.002
    11.1.2.3.003
    11.1.2.3.500
    11.1.2.3.501
    11.1.2.3.502
    11.1.2.3.505
    11.1.2.4

    Issues resolved in version 2.0.3.769:

    2015.02.15 - Issue 1355 - All Writers - Add functionality to replace all line feeds, carriage returns, tabs, and extraneous spaces in formulas

    2015.02.13 - Issue 1354 - RelationalWriter - Changed the default database name from dodeca to extractor

    2015.02.13 - Issue 1353 - RelationalWriter - Added CONSOLIDATION_TYPE_SYMBOL, SHARE_FLAG_SYMBOL, TIME_BALANCE, TIME_BALANCE_SYMBOL, TIME_BALANCE_SKIP, TIME_BALANCE_SKIP_SYMBOL, EXPENSE_FLAG, EXPENSE_FLAG_SYMBOL, TWO_PASS_FLAG, and TWO_PASS_FLAG_SYMBOL columns to the CACHED_OUTLINE_MEMBERS table

    2015.02.13 - Issue 1352 - RelationalWriter - Added Server, Application, and Cube columns to the CACHED_OUTLINE_VERSIONS table

    2015.02.13 - Issue 1351 - Fixed issue with LoadFileWriter where UDA column headers were incorrectly written in the form UDAS0,DimName instead of UDA0,DimName

    In addition, a number of fixes, etc, were put into 2.0.2 and earlier releases and those releases went unannounced.  Those updates included the following items:

    1. There is no longer a default .properties file for the Extractor.  This will force a user to specify a .properties file.  (2.0.2.601)
    2. Removed the "/" character as a switch for command line arguments as it causes problems in Linux. (2.0.2.605)
    3. Fixed issue when combining MaxL input with relational output where a "not supported" error message would appear because certain properties were not being read correctly from the XML file (2.0.2.601)
    4. Command line operations resulted in an error due to an improper attempt to interact with the GUI progress bar. (2.0.2.601)
    5. Shared member attributes were not being properly written, resulting in a delimiter/column count mismatch. (2.0.2.625)
    6. Added encoding options where a user can choose between UTF-8 and ANSI encodings.  The Extractor will attempt to detect the encoding from the selected outline and, if the detected outline encoding is different from the user-selected outline encoding, a warning message appears.
    Categories: BI & Warehousing

    PeopleTools 8.54: Descending Indexes are not supported

    David Kurtz - Mon, 2015-02-16 07:57
    This is the first in a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

    "With PeopleTools 8.54, PeopleTools will no longer support descending indexes on the Oracle database platform" - PeopleTools 8.54 Release Notes

    They have gone again!  Forgive the trip down memory lane, but I think it is worth reviewing their history.
    • Prior to PeopleTools 8.14, if you specified a key or alternate search field in a record as descending then, where it appeared in automatically generated key and alternate search key indexes, that column would be descending.  PeopleTools would add the DESC keyword after the column in the CREATE INDEX DDL.  Similarly, columns can be specified as descending in user indexes (that is to say, ones created by the developer with index IDs A through Z).
    • In PeopleTools 8.14 to 8.47, descending indexes were not built by Application Designer because of a problem with the descending key indexes in some versions of Oracle 8i.  
      • PeopleSoft had previously recommended setting an initialisation parameter on 8i to prevent descending indexes from being created even if the DESC keyword was specified:
    _IGNORE_DESC_IN_INDEX=TRUE
    • From PeopleTools 8.48 the descending keyword came back because this was the first version of PeopleTools that was only certified from Oracle 9i, in which the descending index bug never occurred. (see blog posting 'Descending Indexes are Back!' October 2007).
    • In PeopleTools 8.54, there are again no descending indexes because the descending keyword has been omitted from the column list in the CREATE INDEX DDL. You can still specify descending keys in Application Designer because that controls the order in which rows are queried into scrolls in the PIA.  You can also still specify descending order on user indexes, but it has no effect upon either the application or the index DDL.
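    To illustrate, the generated key index DDL changes in this fashion - a hedged sketch based on the JOB record's key fields, with storage clauses omitted:
    -- PeopleTools 8.48 to 8.53
    CREATE UNIQUE INDEX PS_JOB ON PS_JOB (EMPLID, EMPL_RCD, EFFDT DESC, EFFSEQ DESC) ...
    -- PeopleTools 8.54: the DESC keyword is simply omitted
    CREATE UNIQUE INDEX PS_JOB ON PS_JOB (EMPLID, EMPL_RCD, EFFDT, EFFSEQ) ...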
    I haven’t found any documentation that explains why this change has been made.  This time there is no suggestion of a database bug.  However, I think that there are a good database design reasons behind it.

    Normally creation of a primary key automatically creates a unique index to police the constraint.  It is possible to create a primary key constraint using a pre-existing index.  The index does not have to be unique, but it may as well be.  However, there are some limitations.
    • You cannot create primary key on nullable columns - that is a fundamental part of the relational model.  This is rarely a problem in PeopleSoft where only dates that are not marked 'required' in the Application Designer are created nullable in the database.
      • You can create a unique index on nullable columns, which is probably why PeopleSoft has always used unique indexes.  
    • You cannot use a descending index in a primary key constraint because it is implemented as a function-based index.
    CREATE TABLE t (a NUMBER NOT NULL)
    /
    CREATE UNIQUE INDEX t1 ON t(A DESC)
    /
    ALTER TABLE t ADD PRIMARY KEY (a) USING INDEX t1
    /
    ORA-14196: Specified index cannot be used to enforce the constraint.

      • We can see that the descending key column is actually a function of a column rather than a column itself, and so cannot be used in a primary key.
    SELECT index_name, index_type, uniqueness FROM user_indexes WHERE table_name = 'T'
    /

    INDEX_NAME INDEX_TYPE            UNIQUENES
    ---------- --------------------- ---------
    T1         FUNCTION-BASED NORMAL UNIQUE

    SELECT index_name, column_name, column_position, descend FROM user_ind_columns WHERE table_name = 'T'
    /

    INDEX_NAME COLUMN_NAME  COLUMN_POSITION DESC
    ---------- ------------ --------------- ----
    T1         SYS_NC00002$               1 DESC

    SELECT * FROM user_ind_expressions WHERE table_name = 'T'
    /

    INDEX_NAME TABLE_NAME COLUMN_EXPRESSION    COLUMN_POSITION
    ---------- ---------- -------------------- ---------------
    T1         T          "A"                                1
      • Non-PeopleSoft digression: you can create a primary key on a virtual column.  An index on the virtual column is not function-based.  So you can achieve the same effect if you move the function from the index into a virtual column, and you can have a primary key on the function.  However, PeopleTools Application Designer doesn't support virtual columns.
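    A sketch of that digression, assuming an Oracle version that supports constraints on virtual columns (11g onwards); the negation function merely stands in for the descending expression:
    CREATE TABLE t2 (a NUMBER NOT NULL
    , a_desc NUMBER GENERATED ALWAYS AS (-a) VIRTUAL)
    /
    -- the primary key is permitted because a_desc is a (virtual) column, not an expression in an index
    ALTER TABLE t2 ADD PRIMARY KEY (a_desc)
    /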
    I think that descending keys have been removed from PeopleTools because:
    • It permits the creation of primary key constraints using the unique indexes.
    • It does not pose any performance threat.  In Oracle, index leaf blocks are chained in both directions so it is possible to use an ascending index for a descending scan and vice versa. 
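    For example (a sketch; the hint merely illustrates a scan the optimizer can choose unaided):
    CREATE INDEX t_a ON t (a)
    /
    -- the ascending index can be read backwards to satisfy a descending sort
    SELECT /*+ INDEX_DESC(t t_a) */ * FROM t ORDER BY a DESC
    /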
    • Update 20.4.2016: There is an optimisation in Oracle 11.2.0.4 that improves the performance of the MAX() function in correlated sub-queries on ascending indexes only.  This will benefit all PeopleSoft applications, but especially HCM.
    What are the advantages of having a primary key rather than just a unique constraint?
    • The optimizer can only consider certain SQL transformations if there is a primary key.  That mainly affects star transformation.
    • It allows the optimizer to rewrite a query to use a materialized view. 
    • It allows a materialized view refresh to be based on the primary key rather than the rowid (the physical address of the row in a table).  This can save you from performing a full refresh of the materialized view if you rebuild the table (I will come back to materialized views in another posting).
    • If you are using logical standby, you need to be able to uniquely identify a row of data, otherwise Oracle will perform supplemental logging.  Oracle will additionally log all bounded-size columns (in PeopleSoft, that means everything except LOBs).  Oracle can use a non-null unique constraint, but it cannot use a unique function-based index.
      • Logical standby can be a useful way to minimise downtime during a migration.  For example, when migrating the database from a proprietary Unix to Linux where there is an Endian change.  Minimising supplemental logging would definitely be of interest in this case.
    The descending index change is not something that can be configured by the developer or administrator.  It is just something that is hard coded in Application Designer and Data Mover that changes the DDL they generate.

    There are some considerations on migration to PeopleTools 8.54:
    • It will be necessary to rebuild all indexes with descending keys to remove the descending keys.  
    • There are a lot of descending indexes in a typical PeopleSoft application (I counted the number of function-based indexes in a typical system: HR ~11000, Financials ~8500).  If you choose to do this during the migration it may take considerable time.
    • In Application Designer, if your build script settings are set to only recreate an index if modified, Application Designer will not detect that the index has a descending key and rebuild it.  So you will have to work out for yourself which indexes have descending key columns and handle the rebuild manually.
    • With migration to PeopleTools 8.54 in mind, you might choose to prevent Oracle from building descending indexes by setting _IGNORE_DESC_IN_INDEX=TRUE. Then you can handle the rebuild in stages in advance.
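    A sketch of how that might be done (hidden parameters should only be set on the advice of Oracle Support):
    ALTER SYSTEM SET "_ignore_desc_in_index" = TRUE SCOPE = SPFILE;
    -- restart the instance, then rebuild the affected indexes in stages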
    Conclusion
    I think the removal of descending indexes from PeopleSoft is a sensible change that enables a number of Oracle database features, while doing no harm.

    PeopleTools 8.54 for the Oracle DBA

    David Kurtz - Mon, 2015-02-16 06:36
    The UKOUG PeopleSoft Roadshow 2015 comes to London on 31st March 2015.  In a moment of enthusiasm, I offered to talk about new and interesting features of PeopleTools 8.54 from the perspective of an Oracle DBA.

    I have been doing some research, and have even read the release notes!  As a result, I have picked out some topics that I want to talk about.  I will discuss how each feature has been implemented, and what I think are its benefits and drawbacks.
    This post is not about a new feature in PeopleTools 8.54, but it is something that I discovered while investigating the new version.
      Links have been added to the above list as I have also blogged about each.  I hope it might produce some feedback and discussion.  After the Roadshow I will also add a link to the presentation.

      PeopleTools 8.54 is still quite new, and we are all still learning.  So please leave comments, disagree with what I have written, correct things that I have got wrong, ask questions.

      Ephemeral Port Issue with Essbase Has Been Fixed!

      Tim Tow - Fri, 2015-02-13 09:24
      The issue that has plagued a number of Essbase customers over the years related to running out of available ports has finally been fixed!

      This issue, which often manifested itself with errors in the Essbase error 10420xx range, was caused by how the Essbase Java API communicated with the server. In essence, whenever a piece of information was needed, the Essbase Java API grabbed a port from the pool of available ports, did its business, and then released the port back to the pool. That doesn’t sound bad, but the problem occurs due to how Windows handles this pool of ports. Windows will put the port into a timeout status for a period of time before it makes the port available for reuse and the default timeout in Windows is 4 minutes! Further, the size of the available pool of ports is only about 16,000 ports in the later versions of Windows. That may sound like a lot of ports, but the speed of modern computers makes it possible, and even likely, that certain operations, such as the outline APIs, that call Essbase many, many times to get information would be subject to this issue. Frankly, we see this issue quite often with both VB and the Java Essbase Outline Extractors.

      We brought this issue to the attention of the Java API team and assisted them by testing a prerelease version of the Java API jars. I am happy to report the fix was released with Essbase 11.1.2.3.502. In addition, there is a new essbase.properties setting that allows you to turn the optimization on or off:

      olap.system.socketoptimization=false

      It is our understanding that this optimization is turned on by default. I also checked the default essbase.properties files shipped with both Essbase 11.1.2.3.502 and 11.1.2.4 and did not see that setting in those files. It may be one of those settings that is there in case it messes something else up. The work of our own Jay Zuercher in our labs and searching Oracle Support seems to have confirmed that thought. There is apparently an issue where EIS drill-through reports don't work in Smart View if socket optimization is turned on. It is documented in Oracle Support Doc ID 1959533.1.

      There is also another undocumented essbase.properties setting:

      olap.server.socketIdleTime

      According to Oracle development, this value defaults to 300 ms but there should be little need to ever change it. The only reason it is there is to tune socket optimization in case more than 2 sockets are used per Java API session.

      Jay also tested the 11.1.2.4 version in our labs with the Next Generation Outline Extractor. With the default settings, one large test outline we have, "BigBad", with about 120,000 members in it, extracted in 1 minute and 50 seconds.  With socket optimization turned off, the same outline was only about 25% complete after 2 hours.   In summary, this fix will be very useful for a lot of Oracle customers.

      Categories: BI & Warehousing

      Fine Grained Auditing (FGA) and Protecting Oracle E-Business PII Data for Executives

      With the recent news about yet another database breach of Personally Identifiable Information (PII), Integrigy had a discussion with a client about how to better protect the PII data of their executives.

      The following Fine-Grained-Auditing (FGA) policy started the discussion. The policy below will conditionally log direct connections to the Oracle E-Business Suite database when the PII data of corporate executives is accessed. For example, it will ignore E-Business Suite end-user connections to the database, but will catch people directly connecting to the database from their laptop. However, it will only do so if PII data for executives is accessed:

      BEGIN
      
      DBMS_FGA.ADD_POLICY (
         object_schema     =>  'HR',
         object_name       =>  'PER_ALL_PEOPLE_F',
         policy_name       =>  'FGA_PPF_NOT_GUI_AND_OFFICER',
         audit_condition   =>  ' PER_ALL_PEOPLE_F.PERSON_ID IN (
               SELECT PAX.PERSON_ID
               FROM PER_ASSIGNMENTS_X PAX, PER_JOBS J, PER_JOB_DEFINITIONS JD
               WHERE PAX.JOB_ID = J.JOB_ID
               AND J.JOB_DEFINITION_ID = JD.JOB_DEFINITION_ID
               AND UPPER(JD.SEGMENT6) LIKE UPPER(''%EXECUTIVE%''))
       AND NOT (SYS_CONTEXT (''USERENV'',''IP_ADDRESS'') IN
       (''IP of your DB server'', ''IP of your CM server'',
         ''IP of your application server'')
              AND SYS_CONTEXT (''USERENV'',''CURRENT_USER'') = ''APPS'' ) ',
         audit_column      =>   NULL,
         handler_schema    =>   NULL,
         handler_module    =>   NULL,
         enable            =>   TRUE,
         statement_types   =>  'SELECT',
         audit_trail       =>   DBMS_FGA.DB,
         audit_column_opts =>   DBMS_FGA.ANY_COLUMNS);
      
      END;

      Here is an explanation of the policy above:

      • Audits only direct database activity and ignores database connections from the E-Business Suite user interface, the database server, the web and application servers, as well as the concurrent manager.
       • Audits SELECT activity against PER_ALL_PEOPLE_F or any view based on the table PER_ALL_PEOPLE_F. PII data exists outside of PER_ALL_PEOPLE_F, but this table is the central table within the E-Business Suite that defines a person and thus contains critical PII data such as name, birthdate and National Identifier.
      • Audits ALL columns in the table but could easily be restricted to only specific columns.
       • Audits ONLY those result sets that include current or ex-employees whose job title contains '%Executive%'. Note this policy was demonstrated using the Vision demo database. Your Job Key Flexfield definition will be different.
      • FGA comes standard with the Enterprise license of the Oracle database. If you own the Oracle E-Business Suite, you don't need an additional license to use FGA.
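       Once the policy is enabled, captured statements can be reviewed in the standard FGA audit trail, for example:

       SELECT db_user, os_user, userhost, timestamp, sql_text
       FROM dba_fga_audit_trail
       WHERE policy_name = 'FGA_PPF_NOT_GUI_AND_OFFICER'
       ORDER BY timestamp DESC;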

      The policy above would certainly strengthen an overall database security posture, but it does have several immediate drawbacks:

      • While it does address risks with direct database activity, including the use of the APPS account from a laptop, it will not guard against privileged database users such as DBAs.
       • Spoofing of USERENV attributes is possible, which precludes using any USERENV attribute other than the IP address and DB username.
       • Audit data needs to be securely stored and regularly purged. Privileged users may have access to FGA data and policies. Audit data also needs to be retained and purged per corporate policies.
      • Lastly, the performance impact of the policy above would need to be carefully measured. If the policy above were to be implemented, it would need to be seriously tested, especially if modules are to be used such as Oracle Advanced Benefits and/or Payroll.

       As part of a database security program, Integrigy recommends that all clients implement defense in depth. No one tool or security feature will protect your data. Oracle Traditional Auditing (TA) as well as FGA policies similar to the above should be implemented, but both TA and FGA have limitations and trade-offs.

       Integrigy recommends that both Oracle TA and FGA be used with database security solutions such as the Oracle Audit Vault and Database Firewall (AVDF), Splunk, Imperva, and IBM Guardium.  Database monitoring and alerting needs to be automated and should be done using a commercial tool. You also need to secure and monitor privileged users such as DBAs, and database security cannot come at the cost of overall application performance.

      Our client conversation about the FGA policy above concluded that while the policy could work, given the variety of different database connections, a better solution would be to utilize a variation of the policy above along with Splunk, which they already own.

       If you have questions about the sample FGA policy above or about database security, please contact us at info@integrigy.com.

      Categories: APPS Blogs, Security Blogs

      Oracle Querayable Patch Interface

      Pakistan's First Oracle Blog - Thu, 2015-02-12 18:39
      Starting from Oracle 12c, patching information can be obtained from within SQL. A new package, DBMS_QOPATCH, offers some really nifty procedures to get the patch information. Some of that information is shared below.

      To get patch information from the inventory:


      SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) from dual;


      Oracle Home      : /u01/app/oracle/product/12.1.0/db_1
      Inventory      : /u01/app/oraInventory

      The following is the equivalent of the opatch lsinventory command at the OS level:

      SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;


      Oracle Querayable Patch Interface 1.0
      --------------------------------------------------------------------------------
      Oracle Home      : /u01/app/oracle/product/12.1.0/db_1
      Inventory      : /u01/app/oraInventory
      --------------------------------------------------------------------------------
      Installed Top-level Products (1):
      Oracle Database 12c                       12.1.0.1.0
      Installed Products ( 131)

      Oracle Database 12c                        12.1.0.1.0
      Sun JDK                             1.6.0.37.0
      oracle.swd.oui.core.min                     12.1.0.1.0
      Installer SDK Component                     12.1.0.1.0
      Oracle One-Off Patch Installer                    12.1.0.1.0
      Oracle Universal Installer                    12.1.0.1.0
      Oracle USM Deconfiguration                    12.1.0.1.0
      Oracle Configuration Manager Deconfiguration            10.3.1.0.0
      Oracle RAC Deconfiguration                    12.1.0.1.0
      Oracle DBCA Deconfiguration                    12.1.0.1.0
      Oracle Database Plugin for Oracle Virtual Assembly Builder  12.1.0.1.0
      Oracle Configuration Manager Client                10.3.2.1.0
      Oracle Net Services                        12.1.0.1.0
      Oracle Database 12c                        12.1.0.1.0
      Oracle OLAP                            12.1.0.1.0
      Oracle Spatial and Graph                    12.1.0.1.0
      Oracle Partitioning                        12.1.0.1.0
      Enterprise Edition Options                    12.1.0.1.0


      Interim patches:
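      DBMS_QOPATCH can also be asked whether a specific patch is installed - a sketch, with a hypothetical patch number:

      SQL> select xmltransform(dbms_qopatch.is_patch_installed('19303936'), dbms_qopatch.get_opatch_xslt) from dual;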

      Categories: DBA Blogs

      How do I ...

      Tim Dexter - Thu, 2015-02-12 15:23

      An email came in this morning to an internal mailing list,

      We have an Essbase customer with some reporting requirements and we are evaluating BI Publisher as the potential solution. I'd like to ask for your help with any document, blog or white paper with guidelines about using BI Publisher with Essbase as the main data source.

      Is there any tutorial showing how to use BI Publisher with Essbase as the main data source?

      There is not one to my knowledge, but trying to be helpful I came up with the following response:

      I'll refer to the docs ...
      First set up your connection to Essbase
      http://docs.oracle.com/cd/E28280_01/bi.1111/e22255/data_sources.htm#BIPAD294
      Then create your data model using that Essbase connection
      http://docs.oracle.com/cd/E28280_01/bi.1111/e22258/create_data_sets.htm#BIPDM404
      Use the MDX query builder to create the query or write it yourself (lots of fun :)
      http://docs.oracle.com/cd/E28280_01/bi.1111/e22258/create_data_sets.htm#BIPDM431
      Add parameters (optional)
      http://docs.oracle.com/cd/E28280_01/bi.1111/e22258/add_params_lovs.htm#BIPDM306
      Then build layouts for your Essbase query
      http://docs.oracle.com/cd/E28280_01/bi.1111/e22254/toc.htm
      annnnd you're done :)

      Simple, right? Well, simple in its format, but it required me to know the basic steps to build said report and then where to find the appropriate pages in the doc for the links. Leslie saw my reply and commented on how straightforward it was and how our docs are more like reference books than 'how to's.' This got us thinking. I have noticed that the new 'cloud' docs have How do I ... sections where a drop down will then show maybe 10 tasks associated with the page I'm on right now in the application.

      Getting that help functionality into BIP is going to take a while. We thought, in the meantime, we could carve out a section on the blog for just such content. Here's where you guys come in. What do you want to know how to do? Suggestions in the comments pleeeease!

      Categories: BI & Warehousing

      Essbase VB API is Officially Dead

      Tim Tow - Thu, 2015-02-12 11:06
      It is with a sad heart that I bring you the news that, as of Essbase 11.1.2.4, the Essbase VB API is officially dead.  I cut my teeth in Essbase working with that API way back in the mid-1990's, but the writing had been on the wall for some time.  Microsoft stopped supporting VB years ago, so it was only a matter of time before this time would come.  That being said, I haven't used the VB API for any new Essbase work since about 2001; the Essbase Java API is alive and growing so my efforts have been there.

      Here is the official notification:

      [Screenshot of the official notification]
      You can read the notification yourself in the Essbase 11.1.2.4 Readme.
      Categories: BI & Warehousing

      Oracle Maven Repository - Rolling News

      Steve Button - Wed, 2015-02-11 17:20
      Oracle Maven Repository Is Live

      The Oracle Maven Repository is live and available for public access to the public APIs, libraries, utilities and archetypes that are shipped as part of the Oracle WebLogic Server 12.1.2 and 12.1.3 releases, including corresponding Coherence versions.

      ** The repository also publishes the same kinds of artifacts for the ADF, SOA, OSB and other Fusion Middleware options.
       
      Oracle Maven Repository http://maven.oracle.com

      Using the Oracle Maven Repository https://maven.oracle.com/doc.html
      Rolling News

      Oracle Maven Repository is Live

      https://redstack.wordpress.com/2015/01/14/happy-new-year-happy-new-oracle-maven-repository

      WebLogic Server and Oracle Maven Repository

      https://blogs.oracle.com/WebLogicServer/entry/weblogic_server_and_the_oracle

      Oracle Maven Repository Index Available

      https://redstack.wordpress.com/2015/01/23/oracle-maven-repository-index-now-available-more-to-come/

      https://blogs.oracle.com/WebLogicServer/entry/oracle_maven_repository_index_now

      JFrog Artifactory Supports Oracle Maven Repository

      http://www.jfrog.com/confluence/display/RTF/Artifactory+3.5.1

      http://buttso.blogspot.com.au/2015/02/fronting-oracle-maven-repository-with.html

      Sonatype Nexus Supports Oracle Maven Repository

      https://github.com/archenroot/nexus-oss/releases/tag/nexus-2.11.2-01

      https://issues.sonatype.org/browse/NEXUS/fixforversion/14525/?selectedTab=com.atlassian.jira.jira-projects-plugin:version-summary-panel



      Using RequireJS to Manage Dependencies

      Jim Marion - Wed, 2015-02-11 14:42

      I have a handful of blog posts describing how to deal with JavaScript dependency conflicts such as multiple library inclusions, namespace conflicts, etc. These workarounds are necessary in portal environments that mash up content from various providers that may use the same libraries. On a portal (Interaction Hub) homepage, for example, you may have a Pagelet Wizard pagelet that uses the jQuery Cycle plugin as well as a Pagelet Wizard pagelet that uses jQuery UI. Both of these pagelets will include the jQuery library, but different jQuery plugins. As each pagelet loads, it will load jQuery and then its required plugins. Both pagelets will try to put $ and jQuery into the global (window) namespace. The last pagelet to load will reload window.$ and window.jQuery causing the browser to drop any previously loaded jQuery plugins.
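
      For context, the classic workaround is to capture a private copy of jQuery immediately after loading it and its plugins, so that a later reload can't clobber them. Here is a minimal sketch of that pattern (the MYNS namespace is just an illustrative name):

      // Load jQuery and its plugins, then capture a private copy and
      // hand the globals back to whoever owned them before.
      // MYNS is a hypothetical namespace object.
      var MYNS = MYNS || {};
      MYNS.jQuery = jQuery.noConflict(true); // restores prior window.$/window.jQuery
      MYNS.jQuery(function ($) {
          // plugins loaded before the noConflict call stay bound to this copy
          $(".carousel").cycle();
      });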

      One technique JavaScript developers use to manage dependencies in a scenario like this is to use RequireJS with Asynchronous Module Definitions (AMD). With RequireJS and AMD, you would define a RequireJS configuration file pointing to each JavaScript library and plugin and then write JavaScript that uses these libraries with a RequireJS closure. This approach keeps custom libraries out of the global namespace and ensures that libraries only load once (not once for each pagelet). PeopleTools 8.54 makes implementing this really easy through the new Branding Objects module and Branding System Options. Let's review an example. Let's say that I have RequireJS, jQuery, and jQuery UI loaded into JavaScript objects in the Branding Objects module as shown in the following image

      Now let's say you have your Branding System Options configured to include RequireJS and the RequireJS configuration file as described in the following screenshot:

      You could then create a Pagelet Wizard pagelet containing HTML like the following and not worry about dependencies or namespace pollution. Everything would just work:

      <div id="NAA_DIALOG_TEST_html">
      <style type="text/css">
      @import url(%StyleSheet(NAA_JQ_UI_1_11_2));
      #NAA_DIALOG_TEST_html .dialog { display: none; }
      </style>
      <script>
      require(['jquery', 'jqueryui'], function ($) {
          $(document).ready(function () {
              console.log("dialog pagelet is using version " + $.fn.jquery);
              $("#NAA_DIALOG_TEST_html").find("button")
                  .button()
                  .click(function (event) {
                      event.preventDefault();
                      $("#NAA_DIALOG_TEST_html .dialog").dialog();
                  });
          });
      });
      </script>
      <button>Show Dialog</button>
      <div class="dialog" title="Basic dialog">
          <p>This is the default dialog which is useful for displaying information.
          The dialog window can be moved, resized and closed with the 'x' icon.</p>
      </div>
      </div>

      Of course, this assumes that your RequireJS configuration file looks something like this:

      /**
       * RequireJS global configuration. Include after RequireJS in branding settings
       *
       * @returns {undefined}
       */
      (function () {
          /**
           * Build a URL based on the current component's URL
           * @param {type} scriptId
           * @returns {String} derived URL for JavaScript
           */
          var getScriptUrl = function (scriptId) {
              var mainUrl = /*window.strCurrUrl ||*/ window.location.href;
              var parts =
                  mainUrl.match(/ps[pc]\/(.+?)(?:_\d)*?\/(.+?)\/(.+?)\/[chs]\//);
              return window.location.origin + "/psc/" + parts[1] + "/" + parts[2] +
                  "/" + parts[3] +
                  "/s/WEBLIB_PTBR.ISCRIPT1.FieldFormula.IScript_GET_JS?ID=" + scriptId;
          };

          require.config({
              paths: {
                  /* Using non-standard name because 1.6.2 is not AMD compliant whereas
                   * later versions are compliant. Don't want conflict with later version
                   */
                  'jquery': getScriptUrl("NAA_JQ_1_11_2_JS"),
                  'jqueryui': getScriptUrl("NAA_JQ_UI_1_11_2_JS"),
                  'jquery-private': getScriptUrl("NAA_JQ_PRIVATE_JS")
              },
              map: {
                  // '*' means all modules will get 'jquery-private'
                  // for their 'jquery' dependency.
                  '*': { 'jquery': 'jquery-private' },

                  // 'jquery-private' wants the real jQuery module
                  // though. If this line was not here, there would
                  // be an unresolvable cyclic dependency.
                  'jquery-private': { 'jquery': 'jquery' }
              }
          });
      }());

      And your jQuery-private module looks something like this:

      // http://requirejs.org/docs/jquery.html#noconflictmap
      define(['jquery'], function (jq) {
          return jq.noConflict(true);
      });

      What's up with the getScriptUrl function? JavaScript HTML definitions do not yet support %JavaScript Meta-HTML. The getScriptUrl JavaScript function attempts to perform the same task, but using client-side JavaScript.

      Why do we need a jquery-private module? The point is to hide all of our dependencies and just expose them within the RequireJS closure. That way we avoid conflicts with older code that uses jQuery as well as any PeopleTools delivered JavaScript that may use it.

      This technique also works well for loading dependencies. I often use a JavaScript library in a pagelet, with JavaScript executed directly in the pagelet. One challenge I have had is ensuring that my browser parses and processes any JavaScript libraries before JavaScript embedded in a pagelet. RequireJS solves this by first loading the dependencies, and then executing the JavaScript within the define/require function.

      Note: For this to work properly, it is important that your JavaScript libraries are either AMD compliant or can be appropriately shimmed. Current versions of jQuery and jQuery UI are AMD compliant. The older version of jQuery UI that ships with PeopleTools 8.54 and earlier is NOT AMD compliant. Instead, I downloaded the latest jQuery UI and uploaded it using the new Branding Objects module. To work successfully, the jQuery UI CSS must be updated to use %Image() for each image resource and each image must be uploaded using the Branding Objects component.
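
      As a rough illustration, a shim for a hypothetical non-AMD plugin might look like the following; the module name, script ID, and exported global are all assumptions for the sketch:

      require.config({
          paths: {
              // hypothetical legacy plugin uploaded through Branding Objects
              'legacy-plugin': getScriptUrl("NAA_LEGACY_PLUGIN_JS")
          },
          shim: {
              'legacy-plugin': {
                  deps: ['jquery'],                  // load jQuery first
                  exports: 'jQuery.fn.legacyPlugin'  // the global the plugin creates
              }
          }
      });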

      What Is That Light-Green Oracle Database CPU Wait Time?

      This page has been permanently moved. Please CLICK HERE to be redirected.

      Thanks, Craig.
      What Really Is That Light-Green Oracle Database CPU Wait Time?


      Have you ever wondered what that light-green "cpu wait time" really means in Oracle Enterprise Manager? It's what I call the "gap" time. The "gap" time is the "missing" or the "leftover" time when DB Time does not equal the DB CPU (foreground process CPU consumption) plus the non-idle wait time. And, it happens more often than you might think.

      If you have ever noticed that the database time seems too large, then you need to read this article. And, if you really want to know what the light-green "cpu wait time" in your OEM charts is, then you need to read this article. It's that good.

      If you're serious about Oracle performance tuning and analysis, you'll want to know I just posted my complete 2015 public training schedule. It's on the main OraPub.com page HERE. Remember, alumni receive a 50% discount...crazy I know.

      My Experiment Shows...
      My experiment shows a strong relationship between the "gap" time and operating system CPU utilization. This means that a significant portion of the "gap" time is Oracle foreground processes sitting in the CPU run queue ready to consume CPU. This CPU run queue time is not part of DB CPU, but it is part of DB Time. So, when the CPU run queue time increases, so does DB Time and so does the "gap" time. And I have the data to show it! And you can run the same experiment yourself.

      Let me put this another way. Most of the DB Time "gap" is Oracle foreground processes waiting in the operating system CPU run queue so they can eventually and truly consume CPU.

      This is really important: when an Oracle foreground process is not consuming CPU but is sitting in the CPU run queue, the Oracle Active Session History (ASH) facility records the session sample state as "CPU". And if the Oracle process is a foreground process (not a background process), Oracle's time model records this time as DB Time but not DB CPU. So in both the ASH and time model cases, someone must do some math to calculate this "cpu wait time".

      But that name... "cpu wait"!

      CPU Wait Time Is A Lousy Name

      "CPU wait time" is a lousy name. Why? Mainly because it has caused lots of confusion and speculation. The name would be more appropriately called something like, "cpu queue time." Three reasons come to mind.

      First, wait time means something special to Oracle DBAs. To an Oracle DBA, anything associated with a "wait" should have a wait event name and a wait occurrence; the time should be instrumented (i.e., measured) and recorded in the many wait-interface related views, such as v$system_event or v$session.

      Second, from an Oracle perspective the process is truly "on cpu" because the process is not "waiting." Remember, an Oracle session is in one of two states: CPU or WAIT. There is no third choice. So the words "CPU Wait" are really confusing.

      Third, from an OS perspective or simply a non-Oracle perspective, the Oracle process is sitting in the CPU run queue.

      I'm sure in some Oracle Corporation meeting the words "cpu wait" were considered a great idea, but it has caused lots of confusion. And I'm sure it's here to stay.

      What Does This "CPU WAIT" Look Like In OEM?

      In OEM, the "cpu wait" is a light green color. I grabbed a publicly available screenshot off the internet and posted it below. Look familiar?


      OK, so it's really easy to spot in OEM. And if you've seen it before you know EXACTLY what I'm referring to.

      What Is CPU Wait Time?

      First, let's review what we do know.

      1. DB CPU is true Oracle foreground process CPU consumption as reported by the OS through a system call, such as getrusage.

      2. CPU Wait time is derived, that is, somebody at Oracle wrote code to calculate the "cpu wait" time.

      3. CPU Wait time is a lousy name because it causes lots of confusion.

      4. CPU Wait time is shown in OEM as a light green color. DB CPU is shown as a dark/normal green color.

      Second, I need to define what I'll call the DB Time "gap." This is not an error, and I am not implying something is wrong with database time, that it's not useful, or anything like that. All I am saying is that sometimes DB Time does not equal DB CPU plus the non-idle wait time. Let's put that in a formula:

      DB Time = DB CPU + non Idle Wait Time + gap
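
      To make the arithmetic concrete, here is a trivial sketch (plain JavaScript with illustrative numbers, not experiment data):

      // Rearranging the formula above: the gap is whatever DB Time
      // is left after subtracting DB CPU and the non-idle wait time.
      function dbTimeGap(dbTime, dbCpu, nonIdleWait) {
          return dbTime - dbCpu - nonIdleWait;
      }
      console.log(dbTimeGap(100, 60, 25)); // 15 seconds of unaccounted-for DB Time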

      Really, What Is CPU Wait Time?

      Now I'm ready to answer the question, "What is CPU WAIT time?" Here is the answer stated multiple ways.

      "CPU Wait" time is Oracle foreground process OS CPU run queue time.

      I ran an experiment (detailed below) and as the OS CPU utilization increased, so did the DB Time "gap" implying that the gap is CPU run queue time or at least a significant part of it.

      I ran an experiment and there was a strong correlation between OS CPU utilization and the DB Time "gap" implying that the gap is CPU run queue time.

      I ran an experiment and using queuing theory I was able to predict the "gap" time implying that the gap is CPU run queue time. (Whoops... sorry. That's what I'll present in my next post!)

      So I'm very comfortable stating that when DB Time is greater than Oracle process CPU consumption plus the non-idle wait time, it's probably the result of Oracle foreground process CPU run queue time.

      Yes, there could be some math problems on Oracle's side, there could be uninstrumented time (for sure it's happened before), the operating system could be reporting bogus values or a host of other potential issues. But unless there is an obvious wrong value, I'm sticking with the experimental evidence.

      Now I'm going to show the experimental "evidence" that is, that the DB Time "gap" time correlates with the OS CPU utilization.

      Let The Data Drive Our Understanding

      You can download all the data collection scripts, raw experimental data, Mathematica notepad files, graphic files, etc HERE in a single zip file.

      You should be able to run the experiment on any Linux Oracle test system. All you need is a logical IO load and for that I used my free opload tool, which you can download HERE.

      The experiment placed an increasing logical IO load on a Linux Oracle 12c system until the operating system CPU utilization exceeded 90%. The load was increased 18 times. During each of the 18 loads, I gathered 31 three minute samples. Each sample contains busy time (v$osstat), idle time (v$osstat), logical IO (v$sysstat "session logical reads"), non-idle wait time (v$system_event where wait_class != 'Idle'), DB CPU (v$sys_time_model), background cpu time (v$sys_time_model), database time (v$sys_time_model DB time) and the sample time (dual table current_timestamp).

      The CPU utilization was calculated using the "busy idle" method that I blog about HERE. This method is detailed in my Utilization On Steroids online video seminar.
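
      In a nutshell, the arithmetic is this (my sketch, not the actual collection script):

      // The "busy idle" method: utilization = busy / (busy + idle),
      // where busy and idle are v$osstat deltas between two samples.
      function cpuUtilization(busySeconds, idleSeconds) {
          return busySeconds / (busySeconds + idleSeconds);
      }
      console.log(cpuUtilization(90, 10)); // 0.90, i.e., 90% utilized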

      The workload is defined as the logical IOs per second, lio/s.

      Below is a table summarizing the experimental data. The times shown are the averages. If you look at the actual raw experimental data contained in the analysis pack, you'll notice the data is very consistent. This is not surprising since the load I placed should produce a very consistent workload.

      Do you see the gaps? Look closely at load 18. The DB Time is 8891.4 seconds. But the sum of DB CPU (996.8 seconds) and the non-idle wait time (2719.2 seconds) only equals 3716.0. Yet DB Time is 8891.4. So the "gap" is 5175.4, which is DB Time (8891.4) minus DB CPU (996.8) minus the non-idle wait time (2719.2).

      Note: Loads 11 and 12 were excluded because of a problem with my data collection. Sorry.

      While we can numerically see the DB Time "gap" increase as the CPU utilization increases, check out the graphic in the next section!

      The Correlation Between CPU Utilization And DB Time Gap
      We can numerically and visually see that as the CPU utilization increases, so does the DB Time "gap." But is there a strong mathematical correlation? To determine this, I used all the experimental samples (except loads 11 and 12). Because there were 17 different workloads and I gathered 31 samples for each, the correlation comprises something like 527 samples. Pretty good sample set, I'd say.

      The correlation coefficient is a strong 0.891. The strongest is 1.0 and the weakest is 0.
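
      If you want to check the correlation yourself from the raw samples, a Pearson correlation coefficient takes only a few lines of code (my sketch; the actual analysis was done in Mathematica):

      // Pearson correlation between two equal-length sample arrays,
      // e.g., OS CPU utilization versus the DB Time "gap" per sample.
      function pearson(xs, ys) {
          var n = xs.length;
          var sum = function (a) {
              return a.reduce(function (s, v) { return s + v; }, 0);
          };
          var mx = sum(xs) / n, my = sum(ys) / n;
          var num = 0, dx2 = 0, dy2 = 0;
          for (var i = 0; i < n; i++) {
              var dx = xs[i] - mx, dy = ys[i] - my;
              num += dx * dy;
              dx2 += dx * dx;
              dy2 += dy * dy;
          }
          return num / Math.sqrt(dx2 * dy2);
      }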

      Graphically, here is the scatterplot showing the relationship between the CPU utilization and the DB Time "gap."

      Don't expect the DB Time "gap" and OS CPU utilization correlation to be perfect. Remember that DB Time does not include Oracle background process CPU consumption, yet it is obviously part of the OS CPU utilization.

      Summary
      My experiment indicated the light-green "CPU wait time" is primarily Oracle foreground process operating system CPU run queue time. This is DB Time "gap" time.

      My experiment also showed the "gap" time is highly correlated with CPU utilization. Which means, as the CPU utilization increases, so does the "gap" time.

      If there are Oracle Database instrumentation bugs or a host of other potential problems, that will also affect the "gap" time.

      If you want a more complete and detailed DB Time formula, it would be this:

      DB Time = DB CPU + Non Idle Wait Time + gap time

      In my next post, I'll show you how to calculate the gap time based on queuing theory!

      Thanks for reading!

      Craig.








      Categories: DBA Blogs

      Some changes to be aware of, as Oracle Application Express 5 nears...

      Joel Kallman - Tue, 2015-02-10 06:25
      As the release of Oracle Application Express 5 gets closer, I thought it's worth pointing out some changes that customers should be aware of, and how an upgrade to Oracle Application Express 5 could impact their existing applications.


      1. As Trent Schafer (@trentschafer) noted in his latest blog post, "Reset an Interactive Report (IR)", there have been numerous customer discussions and blog posts which show how to directly use the gReport JavaScript object to manipulate an Interactive Report.  The problem?  With the massive rewrite to support multiple Interactive Reports in Oracle Application Express 5, gReport no longer exists.  And as Trent astutely points out, gReport isn't documented.  And that's the cautionary tale here - if it's not documented, it's not considered supported or available for use and is subject to change, effectively without notice.  While I appreciate the inventiveness of others to do amazing things in their applications, and share that knowledge with the Oracle APEX community, you must be cautious in what you adopt.
      2. In the rewrite of Interactive Reports, the IR component was completely revamped from top to bottom.  The markup used for IRs in APEX 5 is dramatically improved:  fewer tables, much smaller and more efficient markup, better accessibility, etc.  However, if you've also followed this blog post from Shakeeb Rahman (@shakeeb) from 2010, and directly overrode the CSS classes used in Interactive Reports, that will no longer work in IRs in APEX 5.  Your custom styling by using these classes will not have any effect.
      3. As the Oracle Application Express 5 Beta documentation enumerates, there is a modest list of deprecated features and a very small list of features which are no longer supported.  "Deprecated" means "will still work in APEX 5, but will go away in a future release of APEX, most likely the next major release of APEX".  In some cases, like the deprecated page attributes for example, if you have existing applications that use these attributes, they will still function as in earlier releases of APEX, but you won't have the ability to set it for new pages.  Personally, I'm most eager to get rid of all uses of APEX_PLSQL_JOB - customers should use SYS.DBMS_SCHEDULER - it's far richer in functionality.
      Please understand that we have carefully considered all of these decisions - even labored for days, in some cases.  And while some of these changes could be disruptive for existing customers, especially if you've used something that is internal and not documented, we would rather have the APEX Community be made aware of these changes up front, rather than be silent about it and hope for the best.

      Announcement: Best thesis award

      Jean-Philippe Pinte - Tue, 2015-02-10 03:12
      Oracle France awarded Gérald Patterson (ISEP / second cohort of the Master Cloud Computing) the Oracle trophy for his thesis, entitled Improving Cloud Computing availability with Openstack Enhanced Performance.
      http://www.cloud-computing-formation.fr/rencontre/


      This was also an opportunity for a presentation to the ISEP students.

      Fronting Oracle Maven Repository with Artifactory

      Steve Button - Mon, 2015-02-09 22:44
      The JFrog team announced this week the release of Artifactory 3.5.1, which is a minor update that now works with the Oracle Maven Repository.

      http://www.jfrog.com/confluence/display/RTF/Artifactory+3.5.1

      I spent a little while yesterday having a look at it, working through the configuration of a remote repository and testing it with a maven project to see how it worked.

      Once I'd downloaded it and started it up -- much love for simple and obvious bin/*.sh scripts -- it was a very simple three step process:

      1. Since we live behind a firewall, first add a proxy configuration to point at our proxy server.



      2. Add a new remote repository and point it at the Oracle Maven Repository, specifying its URL and using my OTN credentials as username and password.


      The Artifactory 3.5.1 documentation stated that the Advanced Settings >  Lenient host authentication and Enable cookie management options must be checked when accessing the Oracle Maven Repository.


      The Test button is handy to verify the server settings have been entered correctly.

      3. Use the Home tab > Client Settings > Maven Settings link to generate and save a settings.xml file that uses the artifactory server.



      With the repository running, configured and the settings.xml saved, it's then possible to try it out with an existing maven project such as https://github.com/buttso/weblogic-with-arquillian.

      I also nuked my local repository to force/verify that the dependencies were fetched through the specified Artifactory server.

      $ rm -fr ~/.m2/repository/com/oracle
      $ mvn -s artifactory-settings.xml test

      Viewing the output of the mvn process and the running Artifactory server you can see that maven is downloading dependencies from http://localhost:8081/artifactory and correspondingly Artifactory is downloading the requested artifact from https://maven.oracle.com.


      Once the maven process has completed and all the requested artifacts downloaded, Artifactory will have cached them locally for future use.
       
      Using the Search functionality of the Artifactory Web UI you can search for weblogic artifacts.


      Using the Repository Browser functionality of the Artifactory Web UI you can view and navigate around the contents of the remote Oracle Maven Repository.

      Nice JFrog > Artifactory team - thanks for the quick support of our repository.

      One further thing I'd look at doing is enabling the Configure Passwords Encryption option in the Security settings to encrypt your OTN password, so that it's not stored in cleartext in the etc/artifactory.config.latest.xml file.


      JavaScript Stored Procedures and Node.js Applications with Oracle Database 12c

      Kuassi Mensah - Mon, 2015-02-09 20:44
                                            Kuassi Mensah
                                          db360.blogspot.com | @kmensah | https://www.linkedin.com/in/kmensah

      Introduction                                                            
      Node.js and server-side JavaScript are hot and trendy; per the latest "RedMonk Programming Languages Rankings"[1], JavaScript and Java are the top two programming languages. For most developers building modern Web, mobile, and cloud based applications, the ability to use the same language across all tiers (client, middle, and database) feels like Nirvana, but the IT landscape is not a green field; enterprises have invested a lot in Java (or other platforms for that matter), therefore the integration of JavaScript with it becomes imperative. WebSockets and RESTful services enable loose integration; however, the advent of JavaScript engines on the JVM (Rhino, Nashorn, DynJS) and of Node.js APIs on the JVM (Avatar.js, Nodyn, Trireme) makes it possible and very tempting to co-locate Java and Node applications on the same JVM.
      This paper describes the steps for running JavaScript stored procedures[2] directly on the embedded JVM in Oracle database 12c, and the steps for running Node.js applications on the JVM against Oracle database 12c using Avatar.js, JDBC and UCP.
                
      JavaScript and the Evolution of Web Applications Architecture                                   
      At the beginning, once upon a time, a long time ago, JavaScript was a browser-only thing while business logic, back-end services and even presentations were handled/produced in middle-tiers using Java or other platforms and frameworks. Then JavaScript engines (Google's V8, Rhino) left the browsers and gave birth to server-side JavaScript frameworks and Node.js.
      Node Programming Model
      Node.js and similar frameworks bring ease of development, rapid prototyping, and an event-driven, non-blocking programming model[3] to JavaScript. This model is praised for its scalability and good-enough performance; however, unlike Java, Node lacks standardization in many areas such as database access (i.e., a JDBC equivalent) and may lead, without discipline, to the so-called "callback hell"[4].
      Nonetheless, Node is popular and has a vibrant community and a large set of frameworks[5].
      Node Impact on Web Applications Architecture
      With the advent of Node, REST and Web Sockets, the architecture of Web applications has evolved into 
      (i) plain JavaScript on browsers (mobiles, tablets, and desktops); 
      (ii) server-side JavaScript modules (i.e., Node.js, ORM frameworks) interacting with Java business logic and databases.
      The new proposal for Web applications architecture is the integration of Node.js and Java on the JVM. Let's discuss the enabling technologies (a JavaScript engine on the JVM and a Node API on the JVM) and describe typical use cases with Oracle database 12c.
      JavaScript on the JVM
      Why implement a JavaScript engine and run JavaScript on the JVM? For starters, I highly recommend Mark Swartz's http://moduscreate.com/javascript-and-the-jvm/ and Steve Yegge's http://steve-yegge.blogspot.com/2008/06/rhinos-and-tigers.html blog posts.
      In summary, the JVM brings (i) portability; (ii) manageability; (iii) Java tools; (iv) Java libraries/technologies such as JDBC, Hadoop; and (v) the preservation of investments in Java. 
      There are several implementations/projects of Java based JavaScript engines, including Rhino, DynJS and Nashorn.
      Rhino
      The first JavaScript engine entirely written in Java; it started at Netscape in 1997, then became an open-source Mozilla project[6]. It was for quite some time the default JavaScript engine in Java SE, now replaced by Nashorn in Java SE 8.
      DynJS
      DynJS is another open-source JavaScript engine for the JVM. Here is the project homepage http://dynjs.org/. 
      Nashorn
      Introduced in Java 7 but "production" in Java 8[7], the goal of project Nashorn (JEP 174) is to enhance the performance and security of the Rhino JavaScript engine on the JVM. It integrates with the javax.script API (JSR 223) and allows seamless interaction between Java and JavaScript (i.e., invoking Nashorn from Java and invoking Java from Nashorn).
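
      As a quick flavor of that interaction, here is a small stand-alone snippet (mine, not part of the demos below) showing JavaScript invoking Java through Nashorn's Java.type():

      // Bind a Java class to a JavaScript variable and script it natively.
      var ArrayList = Java.type('java.util.ArrayList');
      var names = new ArrayList();
      names.add('Java');
      names.add('JavaScript');
      for (var i = 0; i < names.size(); i++) {
          print(names.get(i)); // print() is a Nashorn built-in
      }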

      To illustrate the reach of Nashorn on the JVM and the interaction between Java and JavaScript, let’s run some JavaScript directly on the database-embedded JVM in Oracle database 12c. 
      JavaScript Stored Procedures with Oracle database 12c Using Nashorn
      Why would anyone run JavaScript in the database? For the same reasons you’d run Java in Oracle database. Then you might ask: why run Java in the database, in the first place? As discussed in my book[8], the primary motivations are: 
      (i) reuse skills and code, i.e., whichever programming languages your new hires know or are willing to learn; 
      (ii) avoid data shipping[9], i.e., in-place processing of billions of data/documents; 
      (iii) combine SQL with foreign libraries to achieve new database capability, thereby extending SQL and the reach of the RDBMS, e.g., Web Services callout, in-database container for Hadoop[10].
      Some developers/architects prefer a tight separation between the RDBMS and applications, therefore no programming language in the database[11], but there are many pragmatic developers/architects who run code near data whenever it is more efficient than shipping data to external infrastructure.

      Co-locating functions with data on the same compute engine is an approach shared by many programming models such as Hadoop. With the surge and prevalence of Cloud computing, RESTful service based architecture is the new norm. Data-bound services can be secured and protected by the REST infrastructure, running outside the RDBMS. Typical use case: a JavaScript stored procedure service would process millions/billions of JSON documents in the Oracle database and return the result sets to the service invoker.
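
      To make that use case concrete, such a stored procedure might look like the following sketch; the orders table, doc column, and status/amount fields are invented for illustration, and the JDBC pattern mirrors the database.js example later in this paper:

      // Hypothetical in-database JavaScript: filter JSON documents in place
      // with the server-side JDBC driver instead of shipping them out.
      var oracleDriver = new Packages.oracle.jdbc.OracleDriver();
      var connection = oracleDriver.defaultConnection();
      var preparedStatement = connection.prepareStatement("SELECT doc FROM orders");
      var resultSet = preparedStatement.executeQuery();
      var total = 0;
      while (resultSet.next()) {
          var order = JSON.parse(resultSet.getString(1)); // parse the stored JSON document
          if (order.status === "OPEN") {
              total += order.amount;
          }
      }
      print("Open order total: " + total);
      resultSet.close();
      preparedStatement.close();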

      To conclude, running Java, JRuby, Python, JavaScript, Scala, or other programming languages on the JVM in the database is a sound architectural choice. The best practices consist in: (i) partitioning applications into data-bound and compute-bound modules or services; (ii) data-bound services are good candidates for running in the database; (iii) understand Oracle database DEFINER and INVOKER rights[12] and grant only the necessary privilege(s) and/or permission(s).

      The Steps
      The following steps allow implementing a JavaScript stored procedure running in Oracle database; these steps represent an enhancement over the ones presented at JavaOne and OOW 2014, which consisted in reading the JavaScript from the file system. That approach required granting extra privileges to the database schema for reading from the RDBMS file system, something not recommended from a security perspective. Here is a safer approach:

      1.      Nashorn is part of Java 8 but early editions can be built for Java 7; the embedded JavaVM in Oracle database 12c supports Java 6 (the default) or Java 7. For this proof of concept, install Oracle database 12c with Java SE 7 [13]
      2.      (i) Build a standard Nashorn.jar[14]; (ii) modify the Shell code to interpret the given script name as an OJVM resource; this consists mainly in invoking getResourceAsStream() on the current thread's context class loader; (iii) rebuild Nashorn.jar with the modified Shell
      3.  Load the modified Nashorn jar into an Oracle database schema, e.g., HR
       loadjava -v -r -u hr/ nashorn.jar
      4.      Create a new dbms_javascript  package for invoking Nashorn’s Shell with a script name as parameter
      create or replace package dbms_javascript as
        procedure run(script varchar2);
      end;
      /
      create or replace package body dbms_javascript as
        procedure run(script varchar2) as
        language java name 'com.oracle.nashorn.tools.Shell.main(java.lang.String[])';
      end;
      /

      Then call dbms_javascript.run('myscript.js') from SQL, which will invoke the Nashorn Shell to execute the previously loaded myscript.js.
      5.  Create a custom role, which we will name NASHORN, as follows (connected as SYSTEM):
      SQL> create role nashorn;
      SQL> call dbms_java.grant_permission('NASHORN', 'SYS:java.lang.RuntimePermission', 'createClassLoader', '' );
      SQL> call dbms_java.grant_permission('NASHORN', 'SYS:java.lang.RuntimePermission', 'getClassLoader', '' );
      SQL> call dbms_java.grant_permission('NASHORN', 'SYS:java.util.logging.LoggingPermission', 'control', '' );
      Best practice: insert those statements in a nash-role.sql file and run the script as SYSTEM
      6.      Grant the NASHORN role created above to the HR schema as follows (connected as SYSTEM):

      SQL> grant NASHORN to HR;

      7.      Insert the following JavaScript code in a file, e.g., database.js, stored on your client machine (i.e., a machine from which you will invoke loadjava, as explained in the next step).
      This script illustrates using JavaScript and Java as it
      uses the server-side JDBC driver to execute a PreparedStatement to retrieve the first and last names from the EMPLOYEES table.

      var Driver = Packages.oracle.jdbc.OracleDriver;
      var oracleDriver = new Driver();
      var url = "jdbc:default:connection:";   // server-side JDBC driver
      var query ="SELECT first_name, last_name from employees";
      // Establish a JDBC connection
      var connection = oracleDriver.defaultConnection();
      // Prepare statement
      var preparedStatement = connection.prepareStatement(query);
      // execute Query
      var resultSet = preparedStatement.executeQuery();
      // display results
      while (resultSet.next()) {
          print(resultSet.getString(1) + "== " + resultSet.getString(2) + " ");
      }
      // cleanup
      resultSet.close();
      preparedStatement.close();
      connection.close();


      8.      Load database.js in the database as a Java resource (not a vanilla class)
      loadjava -v -r -u hr/ database.js

      9.      To run the loaded script

      sqlplus hr/
      SQL> set serveroutput on
      SQL> call dbms_java.set_output(80000);
      SQL> call dbms_javascript.run('database.js');

      The Nashorn Shell reads the 'database.js' script stored as a Java resource from an internal table; the JavaScript in its turn invokes JDBC to execute a PreparedStatement, and the result set is displayed on the console. The message "ORA-29515: exit called from Java code with status 0" is due to the invocation of java.lang.Runtime.exitInternal; status 0 means normal exit (i.e., no error). The fix is to remove that call from Nashorn.
      Node.js on the JVM
      As discussed earlier, Node.js is becoming the man-in-the-middle between Web application front ends and back-end legacy components, and since companies have invested a lot in Java, it is highly desirable to co-locate Node.js and Java components on the same JVM for better integration, thereby eliminating the communication overhead. There are several projects re-implementing Node.js APIs on the JVM, including Avatar.js, Nodyn, and Trireme. This paper will only discuss Oracle's Avatar.js.
      Project Avatar.js[15]
      The goal of project Avatar.js is to furnish "Node.js on the JVM"; in other words, an implementation of the Node.js APIs which runs on top of Nashorn and enables the co-location of Node.js programs and Java components. It has been open-sourced by Oracle under a GPL license[16]. Many Node frameworks and/or applications have been certified to run unchanged, or slightly patched, on Avatar.js.

      There are binary distributions for Oracle Enterprise Linux, Windows and MacOS (64-bit). These builds can be downloaded from https://maven.java.net/index.html#welcome. Search for avatar-js.jar and the platform specific libavatar-js libraries (.dll, .so, .dylib). Get the latest, and rename the jar and the platform specific native library accordingly. For example: on Linux, rename the library to avatar-js.so; on Windows, rename the dll to avatar-js.dll and add its location to your PATH (or use -Djava.library.path=).

      RDBMSes in general, and Oracle database in particular, remain the most popular persistence engines, and there are RDBMS specific Node drivers[17] as well as ORM frameworks. However, as we will demonstrate in the following section, with Avatar.js we can simply reuse existing Java APIs, including JDBC and UCP, for database access.

      Node Programming with Oracle Database using Avatar.js, JDBC and UCP 
      The goal of this proof of concept is to illustrate the co-location of a Node.js application, the Avatar.js library, the Oracle JDBC driver and the Oracle Universal Connection Pool (UCP) on the same Java 8 VM.
      The sample application consists of a Node.js application which performs the following actions:
      (i) Request a JDBC-Thin connection from the Java pool (UCP)
      (ii) Create a PreparedStatement object for "SELECT FIRST_NAME, LAST_NAME FROM EMPLOYEES"
      (iii) Execute the statement and return the ResultSet in a callback
      (iv) Retrieve the rows and display them in a browser on port 4000
      (v) Perform all steps above in a non-blocking fashion; this is Node.js's raison d'être. The demo also uses the Apache ab load generator to simulate concurrent users running the same application in the same/single JVM instance.

      For the Node application to scale in the absence of asynchronous JDBC APIs, we need to turn synchronous calls into non-blocking ones and retrieve the result set via a callback.
      Turning Synchronous JDBC Calls into Non-Blocking Calls
      We will use the following wrapper functions to turn any JDBC call into a non-blocking call i.e., put the JDBC call into a thread pool and free up the Node event loop thread.
      var makeExecutecallback = function(userCallback) {
          return function(name, args) {
              ...
              userCallback(undefined, args[1]);
          }
      }

      function submit(task, callback, msg) {
          var handle = evtloop.acquire();
          // run the blocking JDBC task on a worker thread, then post the
          // result back to the Node event loop as an event
          var r = function() {
              try {
                  var ret = task();
                  evtloop.post(new EventType(msg, callback, null, ret));
              } catch (e) {
                  evtloop.post(new EventType(msg, callback, e, null));
              }
          };
          evtloop.submit(r);
      }

      Let's apply these wrapper functions to the executeQuery JDBC call to illustrate the concept:
      exports.connect = function(userCallback) { .. }  // JDBC and UCP settings

      Statement.prototype.executeQuery = function(query, userCallback) {
          var statement = this._statement;
          var task = function() {
              return statement.executeQuery(query);
          };
          submit(task, makeExecutecallback(userCallback), "jdbc.executeQuery");
      }
      Similarly, the same technique applies to the other JDBC statement APIs (a sketch of one expansion follows the list):
      Connection.prototype.getConnection = function() {…}
      Connection.prototype.createStatement = function() {..}
      Connection.prototype.prepareCall = function(storedprocedure) {..}
      Statement.prototype.executeUpdate = function(query, userCallback) {..}
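
      As a sketch, the elided executeUpdate body would follow the executeQuery pattern exactly (assuming the submit and makeExecutecallback helpers shown above):

      Statement.prototype.executeUpdate = function(query, userCallback) {
          var statement = this._statement;
          var task = function() {
              // blocking JDBC call, run on a worker thread by submit()
              return statement.executeUpdate(query);
          };
          submit(task, makeExecutecallback(userCallback), "jdbc.executeUpdate");
      }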
      Returning Query ResultSet through a Callback
      The application code fragment hereafter shows how, for every HTTP request: (i) a connection is requested, (ii) the PreparedStatement is executed, and (iii) the result set is printed on port 4000.
      ...
      var ConnProvider = require('./connprovider').ConnProvider;
      var connProvider = new ConnProvider(function(err, connection){ .. });

      var server = http.createServer(function(request, response) {
          connProvider.getConn(function(name, data){ .. });
          connProvider.prepStat(function(resultset) {
              while (resultset.next()) {
                  response.write(resultset.getString(1) + " --" + resultset.getString(2));
                  response.write('<br>');
              }
              response.write('<br>');
              response.end();
          });
      });
      server.listen(4000, '127.0.0.1');

      Using Apache ab, we were able to scale to hundreds of simultaneous invocations of the Node application. Each instance grabs a Java connection from the Universal Connection Pool (UCP), executes the SQL statement through JDBC, then returns the result set via a callback on port 4000.
      Conclusions
      As server-side JavaScript (typified by Node.js) gains in popularity, it'll have to integrate with existing components (COBOL is still alive!!). Developers and architects will have to look into co-locating JavaScript with Java across the middle and database tiers.





      [1] http://redmonk.com/sogrady/2015/01/14/language-rankings-1-15/
      [2] I’ll discuss the rationale for running programming languages in the database, later in this paper.
      [3] Requests for I/O and resource intensive components run in a separate process, then invoke a callback in the main/single Node thread when done.
      [4] http://callbackhell.com/
      [5] Search the web for "Node.js frameworks"
      [6] https://developer.mozilla.org/en-US/docs/Mozilla/Projects/Rhino
      [7] Performance being one of the most important aspects
      [8] http://www.amazon.com/exec/obidos/ASIN/1555583296
      [9] Rule of thumb: when processing more than ~20-25% of target data, do it in-place, where data resides (i.e., function shipping).
      [10] In-database Container for Hadoop is not available, as of this writing.
      [11] Other than database’s specific procedural language, e.g.,  Oracle’s PL/SQL
      [12] I discuss this in chapter 2 of my book; see also Oracle database docs.
      [13] See Multiple JDK Support in http://docs.oracle.com/database/121/JJDEV/E50793-03.pdf
      [14] Oracle does not furnish a public download of Nashorn.jar for Java 7; search “Nashorn.jar for Java 7”.
      [15]  https://avatar-js.java.net/
      [16] https://avatar-js.java.net/license.html
      [17] The upcoming Oracle Node.js driver was presented at OOW 2014. 

      Oracle Maven Repository - Viewing Contents in Eclipse

      Steve Button - Mon, 2015-02-09 16:18
      With the Oracle Maven Repository now accessible one way to have explore its contents is to use the Maven Repositories viewer feature available in most development tools. I've seen the repository contents displayed easily in NetBeans so I decided to take a look at what it looks like in Eclipse as well.

      I had to make a few minor setting changes to get it to work, so I decided to document them here.  If you've gotten it to work with fewer setting changes, let me know!

      As initial setup, I configured my local maven environment to support access to the Oracle Maven Repository.  This is documented here https://maven.oracle.com/doc.html.  I also installed maven-3.2.5 that includes the updated Wagon module that supports authentication.

      Next I downloaded and used the new network installer that the Oracle Eclipse team has published on OTN to install the latest version of Oracle Enterprise Pack for Eclipse.



      This network installer lets developers select the version of Eclipse to install and the set of Oracle extensions --  Weblogic, GlassFish and other stuff -- to add in to it.

       Once Eclipse is installed, you can add the Maven Repository viewer by selecting   Window > Show View > Other > Maven Repositories from the Eclipse toolbar.



      I also added a Console > Maven viewer to see what was happening under the covers and arranged them so they were visible at the same time:


      With the Maven views ready to go, expand the Global Repositories node. This will show Maven Central (any other repositories you may have configured) and the Oracle Maven Repository if you have configured it correctly in the settings.xml file.

      The initial state of the Oracle Maven Repository doesn't show any contents indicating that its index hasn't been downloaded to display.

      Right mouse clicking on it and selecting the Rebuild Index option causes an error to be shown in the console output indicating that the index could not be accessed.


      To get it to work, I made the following changes to my environment.  
      Configure Eclipse to Use Maven 3.2.5

      Using the Eclipse > Preferences > Maven > Installation dialog, configure Eclipse to use Maven 3.2.5.  This is the preferred version of Maven to use when accessing the Oracle Maven Repository, since it automatically includes the necessary version of the Wagon HTTP module that supports the required authentication configuration and request flow.


      Configure Proxy Settings in Maven Settings File

      ** If you don't need a proxy to access the Internet then this step won't be needed **
       
      If you sit behind a firewall and need to use a proxy server to access public repositories then you need to configure a proxy setting inside the maven settings file.

      Interestingly, for command line maven use and NetBeans, a single proxy configuration in settings.xml was enough to allow the Oracle Maven Repository to be successfully accessed and its index and artifacts used.

      However, with Eclipse this setting alone didn't allow the Oracle Maven Repository to be accessed.  Looking at the repository URL for the Oracle Maven Repository you can see it's HTTPS based -- https://maven.oracle.com -- and it appears that Eclipse requires a specific HTTPS based proxy setting to access HTTPS based repositories.


      Rebuild Index Success

      With the settings in place, the Rebuild Index operation succeeds and the contents of the Oracle Maven Repository are displayed in the repository viewer.


