Feed aggregator

See You at SXSW 2016

Oracle AppsLab - Fri, 2016-03-11 09:59

If you happen to be in Austin this weekend for SXSWi, look for Osvaldo (@vaini11a), me (@noelportugal) and friend of the ‘Lab Rafa (@rafabelloni).

We will be closely following all things UX, IoT, VR, and AI. Our schedules are filling up with some great sessions and workshops. Check back in a week or so to read some of our impressions!

Changes to DBCA Patch Application Behaviour Causes PDB Cloning to Fail

Pythian Group - Fri, 2016-03-11 07:23
Background

A test upgrade from 11g to 12c and conversion to a container and pluggable database recently pointed out some important 12c behavior differences with respect to the DBCA and whether or not it automatically applies PSUs installed in the Oracle Home.

The original objective was to take an existing 11.2.0.4 database and upgrade it to 12.1.0.2 and convert it to a PDB.

From a high level the procedure was:

  • Install the Oracle 12.1.0.2 software and apply the latest PSU (in this case the JAN2016 PSU).
  • Create a new CDB to house the upgraded database.
  • Upgrade the 11.2.0.4 database to 12.1.0.2 in-place using the DBUA.
  • Convert the upgraded 12c database to a PDB (via the clone through DB link method, sketched below).
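
To make the last step concrete, the clone-through-DB-link method looks roughly like the following minimal sketch; the link name (clone_link) and credentials are illustrative assumptions, and the full CREATE PLUGGABLE DATABASE command appears later in this post.

SQL> -- Run in the new CDB, pointing at the upgraded non-CDB (credentials are placeholders)
SQL> CREATE DATABASE LINK clone_link
  2  CONNECT TO system IDENTIFIED BY password USING 'MY_PROD';

SQL> CREATE PLUGGABLE DATABASE MY_PROD FROM NON$CDB@clone_link
  2  FILE_NAME_CONVERT=('/u01/app/oracle/oradata/MY_PROD',
  3                     '/u01/app/oracle/oradata/CPRD1/MY_PROD');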

Seemed pretty straightforward. However, as part of the PDB conversion (running the noncdb_to_pdb.sql script), the following error was encountered:

SQL> DECLARE
  2     threads pls_integer := &&1;
  3  BEGIN
  4     utl_recomp.recomp_parallel(threads);
  5  END;
  6  /
DECLARE
*
ERROR at line 1:
ORA-04045: errors during recompilation/revalidation of SYS.DBMS_SQLPATCH
ORA-00600: internal error code, arguments: [kql_tab_diana:new dep], [0x0CF59D0B8], [0x7F1525B91DE0], [1], [2], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_UTILITY", line 1294
ORA-06512: at line 1

 

The noncdb_to_pdb.sql script can only be run once, so at this point the PDB conversion has failed and must be restarted. But first we must understand what went wrong, or which steps we missed.

Root Cause: DBCA no longer automatically applies PSUs

It’s obvious from the ORA-04045 error that the issue is related to patching. But the question remains: what was missed in the process, given that the 12c Oracle Home was fully patched before any databases were created or upgraded?

The problem is that DBAs may have become complacent about PSU application after creating databases. With Oracle Database 11g, whenever we created a database via the DBCA, the latest PSU was automatically applied. It didn’t matter whether we created the database from a template or used a custom install. Regardless of which DBCA method was used, after DB creation we’d see something similar to:

SQL> select comments, action_time from dba_registry_history
  2  where bundle_series like '%PSU' order by 2;

COMMENTS                       ACTION_TIME
------------------------------ ------------------------------
PSU 11.2.0.4.160119            04-MAR-16 02.43.52.292530 PM

SQL>

 

Clearly the latest PSU (JAN2016 in this case) installed in the Oracle Home was applied automatically by the DBCA. And of course this is reflected in the official README documentation (in this example for DB PSU patch 21948347 [JAN2016] – requires a My Oracle Support login to view) which states:

There are no actions required for databases that have been upgraded or created after installation of PSU 11.2.0.4.160119.

 

However, this functionality has completely changed with Oracle Database 12c! The change in behaviour is documented in My Oracle Support (MOS) Note “12.1: DBCA (Database Creation) does not execute datapatch (Doc ID 2084676.1)”, which states:

DBCA does not execute datapatch in Oracle 12.1.0.X. The solution is to apply the SQL changes manually after creating a new Database

 

Similarly the 12c JAN2016 DB PSU (patch 21948354) README documentation states:

You must execute the steps in Section 3.3.2, “Loading Modified SQL Files into the Database” for any new or upgraded database.

 

This is a significant change in behaviour and is the root cause of the PDB creation error!
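
In practical terms, this means that after creating (or upgrading) a 12c database the DBA must run datapatch manually to load the PSU SQL changes into the database. A minimal sketch, with a registry query as a quick sanity check afterwards:

$ cd $ORACLE_HOME/OPatch
$ ./datapatch -verbose

SQL> select patch_id, status, description from dba_registry_sqlpatch;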

 

Resolving the “ORA-00600 [kql_tab_diana:new dep]” error

Back to the PDB creation error: the first logical place to check when experiencing plug-in or PDB creation errors is the PDB_PLUG_IN_VIOLATIONS view:

SQL> CREATE PLUGGABLE DATABASE MY_PROD FROM NON$CDB@clone_link FILE_NAME_CONVERT=('/u01/app/oracle/oradata/MY_PROD','/u01/app/oracle/oradata/CPRD1/MY_PROD');

Pluggable database created.

SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;

NAME     TYPE      STATUS    MESSAGE                                  ACTION
-------- --------- --------- ---------------------------------------- ----------------------------------------
MY_PROD  ERROR     PENDING   PDB plugged in is a non-CDB, requires no Run noncdb_to_pdb.sql.
                             ncdb_to_pdb.sql be run.

MY_PROD  WARNING   PENDING   CDB parameter compatible mismatch: Previ Please check the parameter in the curren
                             ous '11.2.0.4.0' Current '12.1.0.2.0'    t CDB

MY_PROD  WARNING   PENDING   Service name or network name of service  Drop the service and recreate it with an
                             MY_PROD in the PDB is invalid or conflic  appropriate name.
                             ts with an existing service name or netw
                             ork name in the CDB.


SQL>

 

Nothing there is really concerning yet; it’s pretty much what we’d expect to see at this point. However, taking the next step in the PDB clone process encounters the error:

SQL> ALTER SESSION SET CONTAINER=MY_PROD;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
...
...
SQL> DECLARE
  2     threads pls_integer := &&1;
  3  BEGIN
  4     utl_recomp.recomp_parallel(threads);
  5  END;
  6  /
DECLARE
*
ERROR at line 1:
ORA-04045: errors during recompilation/revalidation of SYS.DBMS_SQLPATCH
ORA-00600: internal error code, arguments: [kql_tab_diana:new dep],
[0x062623070], [0x7FB582065DE0], [1], [2], [], [], [], [], [], [], []
ORA-06512: at "SYS.DBMS_UTILITY", line 1294
ORA-06512: at line 1


Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

 

Now looking in the PDB_PLUG_IN_VIOLATIONS view the error is evident:

SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;

NAME     TYPE      STATUS    MESSAGE                                            ACTION
-------- --------- --------- -------------------------------------------------- --------------------------------------------------
MY_PROD  ERROR     PENDING   SQL patch ID/UID 22139226/19729684 (Database PSU 1 Call datapatch to install in the PDB or the CDB
                             2.1.0.2.160119, Oracle JavaVM Component (Jan2016))
                             : Installed in the PDB but not in the CDB.

MY_PROD  ERROR     PENDING   PDB plugged in is a non-CDB, requires noncdb_to_pd Run noncdb_to_pdb.sql.
                             b.sql be run.

MY_PROD  WARNING   RESOLVED  Service name or network name of service MY_PROD in Drop the service and recreate it with an appropria
                              the PDB is invalid or conflicts with an existing  te name.
                             service name or network name in the CDB.

MY_PROD  WARNING   RESOLVED  CDB parameter compatible mismatch: Previous '11.2. Please check the parameter in the current CDB
                             0.4.0' Current '12.1.0.2.0'

MY_PROD  WARNING   PENDING   Database option OLS mismatch: PDB installed versio Fix the database option in the PDB or the CDB
                             n NULL. CDB installed version 12.1.0.2.0.

MY_PROD  WARNING   PENDING   Database option DV mismatch: PDB installed version Fix the database option in the PDB or the CDB
                              NULL. CDB installed version 12.1.0.2.0.


6 rows selected.

SQL>

 

At this point, since the PDB clone has failed and since the noncdb_to_pdb.sql script cannot be run twice, the new PDB should be dropped. We must resolve the root cause of the error by patching, and then repeat the clone.
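
Dropping the failed PDB is straightforward; a minimal sketch (the INCLUDING DATAFILES clause removes the cloned copies of the files, not the original non-CDB’s files):

SQL> ALTER PLUGGABLE DATABASE MY_PROD CLOSE;
SQL> DROP PLUGGABLE DATABASE MY_PROD INCLUDING DATAFILES;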

Applying the PSU

Fortunately the fix is conceptually simple: apply the PSU patch to the database. The catch is that I had actually installed the “Combo of 12.1.0.2.160119 OJVM PSU and 12.1.0.2.160119 DB PSU (Jan 2016)” bundle patch (22191659) into the Oracle Home. This combo includes the DB PSU (patch 21948354) plus the OJVM PSU (patch 22139226). And while the DB PSU can be applied without an outage, the OJVM patch cannot: for the OJVM patch or the combo, the CDB and the PDBs must all be restarted in UPGRADE mode.

Restarting in UPGRADE mode is fine in this case study, where the CDB was just recently created to house the newly upgraded PDB. But when plugging the new database into an existing CDB with other applications running in production, shutting down the entire CDB to run datapatch may cause a problem.
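
One way to check in advance whether the installed patches will demand UPGRADE mode is to run the SQL patching tool in prerequisite-check mode. This sketch assumes the -prereq option of the 12c datapatch utility, which reports what would be applied without changing anything:

$ cd $ORACLE_HOME/OPatch
$ ./datapatch -prereq -verbose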

Following the README documentation for just the JAN2016 DB PSU (patch 21948354) doesn’t help. It states that the patch can be applied with the database and pluggable databases open (section “3.3.2 Loading Modified SQL Files into the Database”). However, because I’ve installed the combo patch into the Oracle Home, trying to patch with the database open causes the patching to fail:

$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Fri Mar  4 15:45:27 2016
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_3260_2016_03_04_15_45_27/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016)):
  Installed in the binary registry only
Bundle series PSU:
  ID 160119 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    Nothing to roll back
    The following patches will be applied:
      22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016))
      21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))

Error: prereq checks failed!
  patch 22139226: The pluggable databases that need to be patched must be in upgrade mode
Prereq check failed, exiting without installing any patches.

Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_3260_2016_03_04_15_45_27/sqlpatch_invocation.log
for information on how to resolve the above errors.

SQL Patching tool complete on Fri Mar  4 15:45:52 2016
$

 

The solution to this error is to start the CDB and PDBs in UPGRADE mode (as per the OJVM patch documentation) and then re-run datapatch:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup upgrade;
ORACLE instance started.

Total System Global Area 2097152000 bytes
Fixed Size                  2926320 bytes
Variable Size             603982096 bytes
Database Buffers         1476395008 bytes
Redo Buffers               13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database all open upgrade;

Pluggable database altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Fri Mar  4 15:50:59 2016
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_5137_2016_03_04_15_50_59/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016)):
  Installed in the binary registry only
Bundle series PSU:
  ID 160119 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    Nothing to roll back
    The following patches will be applied:
      22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016))
      21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))

Installing patches...
Patch installation complete.  Total patches installed: 8

Validating logfiles...
Patch 22139226 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/22139226/19729684/22139226_apply_CPRD1_CDBROOT_2016Mar04_15_51_23.log (no errors)
Patch 21948354 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_CPRD1_CDBROOT_2016Mar04_15_51_24.log (no errors)
Patch 22139226 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/22139226/19729684/22139226_apply_CPRD1_PDBSEED_2016Mar04_15_51_28.log (no errors)
Patch 21948354 apply (pdb PDB$SEED): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_CPRD1_PDBSEED_2016Mar04_15_51_29.log (no errors)
SQL Patching tool complete on Fri Mar  4 15:51:31 2016
$

 

Now, retrying the PDB cloning process:

SQL> ALTER SESSION SET CONTAINER=MY_PROD;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
...
...

SQL> alter session set container = "&pdbname";

Session altered.

SQL>
SQL> -- leave the PDB in the same state it was when we started
SQL> BEGIN
  2    execute immediate '&open_sql &restricted_state';
  3  EXCEPTION
  4    WHEN OTHERS THEN
  5    BEGIN
  6      IF (sqlcode <> -900) THEN
  7        RAISE;
  8      END IF;
  9    END;
 10  END;
 11  /

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL> WHENEVER SQLERROR CONTINUE;
SQL> ALTER PLUGGABLE DATABASE MY_PROD OPEN;

Warning: PDB altered with errors.

SQL> connect / as sysdba
Connected.
SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;

NAME     TYPE      STATUS    MESSAGE                                            ACTION
-------- --------- --------- -------------------------------------------------- --------------------------------------------------
MY_PROD  ERROR     RESOLVED  PDB plugged in is a non-CDB, requires noncdb_to_pd Run noncdb_to_pdb.sql.
                             b.sql be run.

MY_PROD  ERROR     PENDING   PSU bundle patch 160119 (Database Patch Set Update Call datapatch to install in the PDB or the CDB
                              : 12.1.0.2.160119 (21948354)): Installed in the C
                             DB but not in the PDB.

MY_PROD  WARNING   RESOLVED  Service name or network name of service MY_PROD in Drop the service and recreate it with an appropria
                              the PDB is invalid or conflicts with an existing  te name.
                             service name or network name in the CDB.

MY_PROD  WARNING   PENDING   Database option OLS mismatch: PDB installed versio Fix the database option in the PDB or the CDB
                             n NULL. CDB installed version 12.1.0.2.0.

MY_PROD  WARNING   PENDING   Database option DV mismatch: PDB installed version Fix the database option in the PDB or the CDB
                              NULL. CDB installed version 12.1.0.2.0.

MY_PROD  WARNING   RESOLVED  CDB parameter compatible mismatch: Previous '11.2. Please check the parameter in the current CDB
                             0.4.0' Current '12.1.0.2.0'


6 rows selected.

SQL>

 

Note that the first time, the error was related to the OJVM PSU patch and stated that the PDB was patched but the CDB was not. Now, after patching the CDB, the error message states that the DB PSU patch is installed in the CDB but not in the PDB.

Again the solution is to run datapatch one more time. Fortunately, since we’re only patching a PDB and the OJVM patch does not apply to the PDBs, we no longer need to worry about starting the CDB and PDBs in UPGRADE mode. Hence we can patch successfully with both the CDB and PDBs open:

$ ./datapatch -verbose
SQL Patching tool version 12.1.0.2.0 on Fri Mar  4 16:19:06 2016
Copyright (c) 2015, Oracle.  All rights reserved.

Log file for this invocation: /u01/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_9245_2016_03_04_16_19_06/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 22139226 (Database PSU 12.1.0.2.160119, Oracle JavaVM Component (Jan2016)):
  Installed in binary and CDB$ROOT PDB$SEED MY_PROD
Bundle series PSU:
  ID 160119 in the binary registry and ID 160119 in PDB CDB$ROOT, ID 160119 in PDB PDB$SEED

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED
    Nothing to roll back
    Nothing to apply
  For the following PDBs: MY_PROD
    Nothing to roll back
    The following patches will be applied:
      21948354 (Database Patch Set Update : 12.1.0.2.160119 (21948354))

Installing patches...
Patch installation complete.  Total patches installed: 1

Validating logfiles...
Patch 21948354 apply (pdb MY_PROD): SUCCESS
  logfile: /u01/app/oracle/cfgtoollogs/sqlpatch/21948354/19553095/21948354_apply_CPRD1_MY_PROD_2016Mar04_16_19_31.log (no errors)
SQL Patching tool complete on Fri Mar  4 16:19:32 2016
$

 

And finally the cloned PDB can be opened successfully:

SQL> ALTER PLUGGABLE DATABASE MY_PROD CLOSE;

Pluggable database altered.

SQL> ALTER PLUGGABLE DATABASE MY_PROD OPEN;

Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 MY_PROD                        READ WRITE NO

SQL> SELECT name, type, status, message, action FROM pdb_plug_in_violations ORDER BY 1,2;

NAME     TYPE      STATUS    MESSAGE                                            ACTION
-------- --------- --------- -------------------------------------------------- --------------------------------------------------
MY_PROD  ERROR     RESOLVED  PDB plugged in is a non-CDB, requires noncdb_to_pd Run noncdb_to_pdb.sql.
                             b.sql be run.

MY_PROD  ERROR     RESOLVED  PSU bundle patch 160119 (Database Patch Set Update Call datapatch to install in the PDB or the CDB
                              : 12.1.0.2.160119 (21948354)): Installed in the C
                             DB but not in the PDB.

MY_PROD  WARNING   RESOLVED  Service name or network name of service MY_PROD in Drop the service and recreate it with an appropria
                              the PDB is invalid or conflicts with an existing  te name.
                             service name or network name in the CDB.

MY_PROD  WARNING   PENDING   Database option OLS mismatch: PDB installed versio Fix the database option in the PDB or the CDB
                             n NULL. CDB installed version 12.1.0.2.0.

MY_PROD  WARNING   PENDING   Database option DV mismatch: PDB installed version Fix the database option in the PDB or the CDB
                              NULL. CDB installed version 12.1.0.2.0.

MY_PROD  WARNING   RESOLVED  CDB parameter compatible mismatch: Previous '11.2. Please check the parameter in the current CDB
                             0.4.0' Current '12.1.0.2.0'


6 rows selected.

SQL>

The warnings marked as “PENDING” can be safely ignored.
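
As a final sanity check, the patch status can also be confirmed from inside the PDB itself; a quick sketch against the 12c SQL patch registry (output trimmed to the relevant columns):

SQL> ALTER SESSION SET CONTAINER=MY_PROD;
SQL> select patch_id, action, status from dba_registry_sqlpatch;

  PATCH_ID ACTION          STATUS
---------- --------------- ---------------
  21948354 APPLY           SUCCESS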

Conclusion

What started out as an issue when cloning a non-CDB into a PDB led to some learning about patching with Oracle Database 12c.

The most important take-away is that Oracle Database 12c introduces a change in behaviour when it comes to patch application through the DBCA. This change is well documented in both the patch and MOS documents, so a DBA who reads through the documentation thoroughly won’t have a problem. However, a DBA who is used to doing things the “old way” and only skims the documentation may unexpectedly be caught by errors such as the ORA-00600 encountered here when creating a PDB through cloning.

References

My Oracle Support (MOS) Documents:

  • 12.1: DBCA (Database Creation) does not execute “datapatch” (Doc ID 2084676.1)
  • How to Convert Non PDB to PDB Database in 12c – Testcase (Doc ID 2012448.1)
  • How to migrate an existing pre-12c database (non-CDB) to a 12c CDB database? (Doc ID 1564657.1)
  • Complete Checklist for Upgrading to Oracle Database 12c Release 1 using DBUA (Doc ID 1516557.1)


Categories: DBA Blogs

Wrong Results ?

Jonathan Lewis - Fri, 2016-03-11 03:18

I gather that journalistic style dictates that if the headline is a question then the answer is no. So, following on from a discussion of possible side effects of partition exchange, let’s look at an example which doesn’t involve partitions. I’ve got a schema that holds nothing but two small, simple heap tables, parent and child (with declared primary keys and the obvious referential integrity constraint), and I run a couple of very similar queries that produce remarkably different results:


select
        par.id      parent_id,
        chi.id      child_id,
        chi.name    child_name
from
        parent  par,
        child   chi
where
        chi.id_p = par.id
order by
        par.id, chi.id
;

 PARENT_ID   CHILD_ID CHILD_NAME
---------- ---------- ----------
         1          1 Simon
         1          2 Sally
         2          1 Janet
         2          2 John
         3          1 Orphan

5 rows selected.

Having got this far with my first query I’ve decided to add the parent name to the report:


select
        par.id      parent_id,
        par.name    parent_name,
        chi.id      child_id,
        chi.name    child_name
from
        parent  par,
        child   chi
where
        chi.id_p = par.id
order by
        par.id, chi.id
;

 PARENT_ID PARENT_NAM   CHILD_ID CHILD_NAME
---------- ---------- ---------- ----------
         1 Smith2              1 Simon
         1 Smith               1 Simon
         1 Smith2              2 Sally
         1 Smith               2 Sally
         2 Jones               1 Janet
         2 Jones               2 John

6 rows selected.

How could adding a column to the select list result in one child row disappearing and two child rows being duplicated; and is this a bug ?

To avoid any confusion, here’s the complete script I used for creating the schema owner, in 11.2.0.4, with no extra privileges granted to PUBLIC:


create user u1
        identified by u1
        default tablespace test_8k
        quota unlimited on test_8k
;

grant
        create session,
        create table
to
        u1
;


Update

It didn’t take long for a couple of people to suggest that the oddity was the consequence of constraints that had not been enabled and validated 100% of the time, but the suggestions offered were a little more convoluted than necessary. Here’s the code I ran from my brand new account before running the two select statements:


create table parent (
        id      number(4),
        name    varchar2(10),
        constraint par_pk primary key (id)
        rely disable novalidate
)
;

create table child(
        id_p    number(4)
                constraint chi_fk_par
                references parent
                on delete cascade
                rely disable novalidate,
        id      number(4),
        name    varchar2(10),
        constraint chi_pk primary key (id_p, id)
                rely disable novalidate
)
;

insert into parent values (1,'Smith');
insert into parent values (1,'Smith2');
insert into parent values (2,'Jones');

insert into child values(1,1,'Simon');
insert into child values(1,2,'Sally');

insert into child values(2,1,'Janet');
insert into child values(2,2,'John');

insert into child values(3,1,'Orphan');

commit;

begin
        dbms_stats.gather_table_stats(user,'child');
        dbms_stats.gather_table_stats(user,'parent');
end;
/


In a typical data warehouse frame of mind I’ve added plenty of constraints, but left them all disabled and novalidated while telling Oracle to rely on them for optimisation strategies. This means all sorts of incorrect data could get into the tables, with all sorts of unexpected side effects on reporting. The example above shows duplicates on primary keys (and if you checked the table definition you’d find that the primary key columns were nullable as well) and child rows with no parent key.

In fact 11g and 12c behave differently – the appearance of the Orphan row in the first sample query is due, as Chris_cc pointed out in the first comment, to the optimizer deciding that it could use join elimination because it was joining to a single-column primary key without selecting any other columns from the referenced table. In 12c the optimizer doesn’t use join elimination for this query, so both queries have the same (duplicated) output.
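
A quick way to see this for yourself is to compare the execution plans of the two queries; in 11.2.0.4 the plan for the first query should reference only the child table, the join to parent having been eliminated on the strength of the RELY constraints. A sketch (plan output omitted):

explain plan for
select
        par.id      parent_id,
        chi.id      child_id,
        chi.name    child_name
from
        parent  par,
        child   chi
where
        chi.id_p = par.id
;

select * from table(dbms_xplan.display);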

Update:

Make sure you read the articles linked to in Dani Schneider’s comment below, and note especially the impact of the query_rewrite_integrity parameter.


Centralized authentication with Red Hat Directory Server for Linux systems

Pythian Group - Thu, 2016-03-10 15:14

User management on Linux systems can be tedious, and once you have more than 10 systems, managing user accounts on each system individually starts to consume a significant amount of time.

There are various tools available to overcome this, and all of these use LDAP in some way.

The same goes for Red Hat Directory Server, Red Hat’s LDAP-based directory server for centralized user management. Though I primarily demonstrate integrating Red Hat Directory Server with Linux systems, it can be used with any system that supports LDAP authentication.

You can find the official Red Hat Directory Server installation guide here.

For our test scenario I used two RHEL 5 servers: Server101, which is the Red Hat Directory Server, and Server201, which is the client.

For RHEL-based systems you need to make sure that you are subscribed to the RHDS repo in order to install Red Hat Directory Server. If you are using CentOS or other derivatives you can use 389 Directory Server, which is the upstream project for Red Hat Directory Server.
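
On CentOS, assuming the EPEL repository is enabled, installing the upstream server would look something like this (the package names are my assumption and may vary by release):

[root@server101 ~]# yum install epel-release -y
[root@server101 ~]# yum install 389-ds-base 389-admin -y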

Once you have the prerequisites ready you can start the installation.

Installing Red Hat Directory Server

I have designated server101 as Red Hat Directory Server.

Below are the steps to install the packages required for Red Hat Directory Server.

[root@server101 ~]#yum install redhat-ds -y


 

Once the installation is complete we can move to configuring Red Hat Directory Server.

 

Configuring Red Hat Directory Server

 

[root@server101 ~]# setup-ds-admin.pl

Once you run this command you will be prompted by the setup script for inputs, which are mostly straightforward.

But there are a few things that need to be taken care of before we proceed with the configuration.

We want to run the ldap service as the ldap user, so create the ldap user and group if they do not already exist, for example as shown below.
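
A minimal sketch (the -r flag creates a system account; the nologin shell is an arbitrary choice for a service account, and the user may already exist if OpenLDAP packages are installed):

[root@server101 ~]# groupadd -r ldap
[root@server101 ~]# useradd -r -g ldap -s /sbin/nologin ldap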

Then open the ports below in your firewall/iptables so that the directory server can work properly (a sample iptables sketch follows the list).

  • 389 for LDAP service
  • 636 for secure LDAP service (LDAPS)
  • 9830 for directory server admin console connectivity
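
A sketch of the corresponding iptables rules (where these land relative to any existing REJECT rules is environment-specific):

[root@server101 ~]# iptables -I INPUT -p tcp --dport 389 -j ACCEPT
[root@server101 ~]# iptables -I INPUT -p tcp --dport 636 -j ACCEPT
[root@server101 ~]# iptables -I INPUT -p tcp --dport 9830 -j ACCEPT
[root@server101 ~]# service iptables save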

You should also increase the number of file descriptors, as it can help Red Hat Directory Server access files more efficiently. Raising the maximum number of file descriptors the kernel can allocate can also improve file access speeds; the steps are sketched after the list below.

  • First, check the current limit for file descriptors in /proc/sys/fs/file-max.
  • If the setting is lower than 64000, edit the /etc/sysctl.conf file and set the fs.file-max parameter to 64000 or higher.
  • Then increase the maximum number of open files on the system by editing the /etc/security/limits.conf configuration file, adding the following entry:
    *        -        nofile        8192
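
A sketch of those three steps:

[root@server101 ~]# cat /proc/sys/fs/file-max
[root@server101 ~]# echo "fs.file-max = 64000" >> /etc/sysctl.conf
[root@server101 ~]# sysctl -p
[root@server101 ~]# echo "*        -        nofile        8192" >> /etc/security/limits.conf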

 

After this we can start configuring Red Hat Directory Server with the setup-ds-admin.pl command.

Once it is executed it will prompt for inputs, which are mostly self-explanatory, as shown below. We will mostly accept the default values, as this is a fresh installation; we will only change the system user and group that will run the ldap service from nobody to the ldap user we created earlier. Don’t forget to make a note of the admin and Directory Manager passwords, as they will be used to log in to the Admin Console.

 

[root@server101 ~]# setup-ds-admin.pl -k

==============================================================================
This program will set up the Red Hat Directory and Administration Servers.

It is recommended that you have “root” privilege to set up the software.
Tips for using this program:
– Press “Enter” to choose the default and go to the next screen
– Type “Control-B” then “Enter” to go back to the previous screen
– Type “Control-C” to cancel the setup program

Would you like to continue with set up? [yes]: yes

==============================================================================
BY SETTING UP AND USING THIS SOFTWARE YOU ARE CONSENTING TO BE BOUND BY
AND ARE BECOMING A PARTY TO THE AGREEMENT FOUND IN THE
LICENSE.TXT FILE. IF YOU DO NOT AGREE TO ALL OF THE TERMS
OF THIS AGREEMENT, PLEASE DO NOT SET UP OR USE THIS SOFTWARE.

Do you agree to the license terms? [no]: yes

==============================================================================
Your system has been scanned for potential problems, missing patches,
etc. The following output is a report of the items found that need to
be addressed before running this software in a production
environment.

Red Hat Directory Server system tuning analysis version 10-AUGUST-2007.

NOTICE : System is i686-unknown-linux2.6.18-308.el5 (1 processor).

WARNING: 502MB of physical memory is available on the system. 1024MB is recommended for best performance on large production system.

NOTICE: The net.ipv4.tcp_keepalive_time is set to 7200000 milliseconds
(120 minutes). This may cause temporary server congestion from lost
client connections.

WARNING: There are only 1024 file descriptors (hard limit) available, which
limit the number of simultaneous connections.

WARNING: There are only 1024 file descriptors (soft limit) available, which
limit the number of simultaneous connections.

Would you like to continue? [no]: yes

==============================================================================
Choose a setup type:

1. Express
Allows you to quickly set up the servers using the most
common options and pre-defined defaults. Useful for quick
evaluation of the products.

2. Typical
Allows you to specify common defaults and options.

3. Custom
Allows you to specify more advanced options. This is
recommended for experienced server administrators only.

To accept the default shown in brackets, press the Enter key.

Choose a setup type [2]:

==============================================================================
Enter the fully qualified domain name of the computer
on which you’re setting up server software. Using the form
<hostname>.<domainname>
Example: eros.example.com.

To accept the default shown in brackets, press the Enter key.

Computer name [server101.suratlug.org]: server101.example.com

==============================================================================
The servers must run as a specific user in a specific group.
It is strongly recommended that this user should have no privileges
on the computer (i.e. a non-root user). The setup procedure
will give this user/group some permissions in specific paths/files
to perform server-specific operations.

If you have not yet created a user and group for the servers,
create this user and group using your native operating
system utilities.

System User [nobody]: ldap
System Group [nobody]: ldap

==============================================================================
Server information is stored in the configuration directory server.
This information is used by the console and administration server to
configure and manage your servers.  If you have already set up a
configuration directory server, you should register any servers you
set up or create with the configuration server. To do so, the
following information about the configuration server is required: the
fully qualified host name of the form
<hostname>.<domainname>(e.g. hostname.example.com), the port number
(default 389), the suffix, the DN and password of a user having
permission to write the configuration information, usually the
configuration directory administrator, and if you are using security
(TLS/SSL). If you are using TLS/SSL, specify the TLS/SSL (LDAPS) port
number (default 636) instead of the regular LDAP port number, and
provide the CA certificate (in PEM/ASCII format).

If you do not yet have a configuration directory server, enter ‘No’ to
be prompted to set up one.

Do you want to register this software with an existing
configuration directory server? [no]:

==============================================================================
Please enter the administrator ID for the configuration directory
server. This is the ID typically used to log in to the console.  You
will also be prompted for the password.

Configuration directory server
administrator ID [admin]:
Password:
Password (confirm):

==============================================================================
The information stored in the configuration directory server can be
separated into different Administration Domains. If you are managing
multiple software releases at the same time, or managing information
about multiple domains, you may use the Administration Domain to keep
them separate.

If you are not using administrative domains, press Enter to select the
default. Otherwise, enter some descriptive, unique name for the
administration domain, such as the name of the organization
responsible for managing the domain.

Administration Domain [example.com]:

==============================================================================
The standard directory server network port number is 389. However, if
you are not logged as the superuser, or port 389 is in use, the
default value will be a random unused port number greater than 1024.
If you want to use port 389, make sure that you are logged in as the
superuser, that port 389 is not in use.

Directory server network port [389]:

==============================================================================
Each instance of a directory server requires a unique identifier.
This identifier is used to name the various
instance specific files and directories in the file system,
as well as for other uses as a server instance identifier.

Directory server identifier [server101]:

==============================================================================
The suffix is the root of your directory tree.  The suffix must be a valid DN.
It is recommended that you use the dc=domaincomponent suffix convention.
For example, if your domain is example.com,
you should use dc=example,dc=com for your suffix.
Setup will create this initial suffix for you,
but you may have more than one suffix.
Use the directory server utilities to create additional suffixes.

Suffix [dc=example, dc=com]:

==============================================================================
Certain directory server operations require an administrative user.
This user is referred to as the Directory Manager and typically has a
bind Distinguished Name (DN) of cn=Directory Manager.
You will also be prompted for the password for this user. The password must
be at least 8 characters long, and contain no spaces.
Press Control-B or type the word “back”, then Enter to back up and start over.

Directory Manager DN [cn=Directory Manager]:
Password:
Password (confirm):

==============================================================================
The Administration Server is separate from any of your web or application
servers since it listens to a different port and access to it is
restricted.

Pick a port number between 1024 and 65535 to run your Administration
Server on. You should NOT use a port number which you plan to
run a web or application server on, rather, select a number which you
will remember and which will not be used for anything else.

Administration port [9830]:

==============================================================================
The interactive phase is complete.  The script will now set up your
servers.  Enter No or go Back if you want to change something.

Are you ready to set up your servers? [yes]:
Creating directory server . . .
Your new DS instance ‘server101’ was successfully created.
Creating the configuration directory server . . .
Beginning Admin Server creation . . .
Creating Admin Server files and directories . . .
Updating adm.conf . . .
Updating admpw . . .
Registering admin server with the configuration directory server . . .
Updating adm.conf with information from configuration directory server . . .
Updating the configuration for the httpd engine . . .
Starting admin server . . .
The admin server was successfully started.
Admin server was successfully created, configured, and started.
Exiting . . .
Log file is ‘/tmp/setupZa3jGe.log’

[root@server101 ~]#


 

Now that we have installed and configured Red Hat Directory Server, note that it is not set to start automatically at system boot.

So we need to set the Red Hat directory service (dirsrv) and the directory console admin service (dirsrv-admin) to start at boot.

[root@server101 ~]# chkconfig dirsrv-admin --list
dirsrv-admin   0:off  1:off  2:off  3:off  4:off  5:off  6:off
[root@server101 ~]# chkconfig dirsrv --list
dirsrv         0:off  1:off  2:off  3:off  4:off  5:off  6:off
[root@server101 ~]# chkconfig dirsrv on
[root@server101 ~]# chkconfig dirsrv-admin on
[root@server101 ~]#

Now that we have our server ready, we need to add a user to it.

We will use the Directory Server admin console GUI and create an LDAP user from there.

We can invoke the Directory Server admin console GUI with the redhat-idm-console command. It will open a login window like the one below.

Directory Server Admin Console GUI

The user ID is the Directory Manager that was created during the directory server setup; by default this is cn=Directory Manager. Enter your password, and set the Administration URL to http://server101:9830.

Directory Server Admin Console

Once you log in you will be presented with the console screen shown below.


Now click on the Users and Groups tab, then click the Create button and select User from the menu.


Now select the organizational unit; we will use the default and select People from the list.


It will open the Create User dialog.


Now we will create the ldapuser account. Fill in the required details, and also select the Posix User tab, as we need the account for Unix system login; fill in the required Posix account details as well.


Now that we have created the user account we can start configuring the client.

 

Configuring Linux client for LDAP login

I have created server201, which we will configure for LDAP login.

For that we need to execute authconfig-tui from the console.

It will open a terminal UI to configure authentication to use LDAP.

[root@server201 pam.d]# authconfig-tui


Select Use LDAP for user information.


Select Use LDAP Authentication.


After this we need to make sure that when a user logs in to the server with LDAP authentication, the home directory is created automatically; this is not enabled by default.

We can do this by executing the below command at the console.

[root@server201 pam.d]# authconfig --enablemkhomedir --update


Once this is done you can use your LDAP user to log in to the client server.
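
A quick way to verify from the client (ldapuser is the account created earlier; getent should return the entry from the directory rather than from /etc/passwd):

[root@server201 ~]# getent passwd ldapuser
[root@server201 ~]# su - ldapuser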

Now that we have LDAP set up, we can use it for centralized login on all Linux systems in the environment.

User management is now easy from a single location.

We can also configure TLS and set up replication for redundancy.

We can define schemas and policies as well, but that is for another time.

 

Categories: DBA Blogs

Log Buffer #464: A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2016-03-10 13:42

This Log Buffer Edition delves deep into the realms of Oracle, SQL Server and MySQL while gathering up some nifty blog posts for this week.


Oracle:

Speed, Security, and Best Practices in the Cloud: Oracle Releases Market-Leading Retail Demand Forecasting Solution

OBIEE 12c – Your Answers After Upgrading

Using the SQL ACCESS Advisor PL/SQL interface

How has JD Edwards EnterpriseOne 9.2 Transformed your Business?

This article looks at an example of configuring Fast Start Failover (FSFO).

SQL Server:

How to show Quarters Dynamically in SQL

Azure SQL Data Warehouse is a fully-managed and scalable cloud service. It is still in preview, but solid.

The occasional problems that you can get with POST and GET are typical of the difficulties of separating any command and query operations.

4 Convenient Ways To Run PowerShell Scripts

10 New Features Worth Exploring in SQL Server 2016

MySQL:

Maintaining mission critical databases on our pitchfork wielding brother, the “Daemon” of FreeBSD, seems quite daunting, or even absurd, from the perspective of a die-hard Linux expert, or from someone who has not touched it in a long time.

Planets9s: Sign up for our best practices webinar on how to upgrade to MySQL 5.7

Using jemalloc heap profiling with MySQL

Sometimes a Variety of Databases is THE Database You Need

Taking the new MySQL 5.7 JSON features for a test drive

Categories: DBA Blogs

Webcast Q&A: Next-Generation Accounts Payable Automation and Dynamic Discounting

WebCenter Team - Thu, 2016-03-10 13:12

Next-Generation Accounts Payable Automation and Dynamic Discounting

We wanted to capture the great Q&A session that occurred during the Next-Generation Accounts Payable Automation and Dynamic Discounting webcast! If you missed the live webcast, you can view the on-demand webcast here.

Q: A lot of your competitors claim they can provide 80% automation. How is your offering different?
A: What we provide is measurable automation - this is what most of our customers are getting. The automation we talk about is end-to-end process automation, not just automation of a portion of the process. When our competitors talk about 80% automation, they are talking about what you could potentially get with OCR. They provide really poor integration with your ERP system, and that is where the real problem is. That is the traditional approach, where after OCR about 82% of invoices end up with exceptions in your ERP system, so your AP personnel have to manually resolve those invoices one by one. Our next-generation approach provides end-to-end automation: not only do we provide best-in-class OCR, but we have cracked the code on how to integrate in real time with your ERP systems and provide exception-free creation of invoices and 2-way and 3-way matching.

Q: Can your cloud offering integrate with our on-premise ERP systems? 
A: Yes, our cloud offering can integrate with your on-premise or cloud ERP systems, including Oracle E-Business Suite and JD Edwards. A lot of our customers have different ERP systems. We can integrate with multiple ERP systems seamlessly and provide real-time integration and a unified Application, Workflow and Analytics layer across all of your ERP systems.

Q: How is this different from Fusion AP? And Fusion Automated Invoice Processing?
A: Fusion AP and Automated Invoice Processing use the traditional approach:

  1. There is almost no control over the OCR engine that is provided.
  2. Unvalidated data is passed over to Fusion AP, where all exceptions have to be handled manually one by one.
  3. 2-way matching is only partially automated.
  4. 3-way matching is not automated at all.
  5. Workflow capabilities are almost non-existent, with very little ability to do re-assignment and routing.
  6. Work-queue capabilities are almost non-existent.

Q: How is your 2-way and 3-way matching different from competition?
A: There are vendors who claim they do automated 2-way and 3-way matching. However, they handle a small percentage of the use-cases. For example, for 2-way matching, invoices that need to be matched against blanket POs are not properly handled; for 3-way matching, cases where receipts arrive after the invoices come in are not handled. These are just a few examples. Inspyrus provides a complete solution that handles all such use-cases, tried and tested with customers across a lot of verticals.

Q: We receive invoices via mail, email and EDI. Can your offering provide us a consistent process for all these?
A: Yes. Irrespective of how you receive your invoices, we provide a consistent Application, Workflow and Analytics for all of these.

Q: We have Oracle E-Business Suite and SAP for our ERP systems. Will your solution integrate with both our ERP systems?
A: Yes, our solution comes pre-integrated with Oracle E-Business Suite and SAP, and if you have both ERP systems, a single instance of our application will integrate with both.

Q: Is the solution set specific to Oracle's eBusiness suite or can this solution bolt on to something like MS Dynamics to replace the AP transactions processing?
A: The solution is available for most major ERP systems including MS Dynamics. Also available for SAP, PeopleSoft & JD Edwards.

Q: 100% of our invoices are coded to a Project/Task/Expenditure Type in E Business Suite. Does this support full coding validation against Project related invoices?
A: Yes, it does.

Q: How does this solution compare to BPM?
A: BPM is a technology. What we are presenting here is a specialized pre-built solution that leverages Oracle’s BPM technology, along with Imaging, Content Management, OCR/OFR and SOA integration.

Q: What is OCR?
A: OCR - Optical Character Recognition. It allows characters to be extracted from an image. For example, for an invoice, it allows us to automatically extract header and line-level information.

Q: Would this solution work if we have a mix of invoices where some are match to po and some are match to receipt?
A: Yes, that is very common.

Q: How is this different from iSupplier? Can we use this instead of iSupplier?
A: If you are using iSupplier, I wouldn't suggest replacing it. If you are not, this would be a good alternative.

Q: What kind of validations happens when it hits the unified workflow?
A: Whatever is required for the successful creation of the invoice in the ERP system. Basically, validation against every setup, rule, config of the ERP system.

Q: Will this work if I have many suppliers with different invoices formats?
A: Yes - The solution leverages pattern-based recognition rather than relying on invoice templates.

Q: Supplier Enablement - is that integrated with the ERP systems? And is it integrated with your invoice automation?
A: Yes, it is. That is a clear differentiator. Invoice Automation, Supplier Enablement and Dynamic Discounting are part of the same suite of applications.

Q: Do you have the capability of electronic signatures on invoices?
A: Yes.

Q: Would we need to configure our matching rules within the tool?
A: No, we use matching rules that are setup in your ERP system.

Q: How do you automate GL coding? How do you onboard customers for dynamic discounting?
A: We can use specific rules to automate GL coding - for example, using a specific vendor or invoice line description to always code the invoice to a specific account. Suppliers are onboarded for dynamic discounting using a specialized Dynamic Discounting program, which consists of identifying and targeting the suppliers that have the highest propensity to provide you discounts. The onboarding is done through an outreach program.

Q: What's involved to get to automated GL coding?
A: If there are specific business rules that you can give us - say for a particular vendor or for certain descriptions on invoice lines - we can automate GL coding.

Q: Is the integration to the ERP systems unidirectional or bidirectional?
A: Our integration with the ERP system is real-time; we don't need to store any ERP information in our system. We do bring payment information back into our system, making the integration bidirectional.

Q: Can complex approval rules be used with this application?
A: Yes, we can handle all kinds of complex approval rules.

Q: Does it work with a third-party OCR solution?
A: It could work with a third-party OCR solution if that OCR engine is able to send out a structured document (e.g. XML) after recognition.


Q: Can vendors integrate with this solution as well, as in submitting EDI invoices to the cloud offering (instead of emailing the customer, who then turns around and uploads them into the AP solution)?
A: Absolutely. They can send EDI invoices to our cloud.

Q: Will the 3 way match verify the Project number, from the Oracle Projects module?
A: Yes, it can.

Q: Can we self-host?
A: Yes, this can be hosted in your data center.

Q: Why would you pay an invoice prior to approval?
A: The workflow/validation process will ensure that the invoice is approved before it is submitted for payment.

Q: Is Oracle SOA required for downstream integration to other ERP, including SAP, etc?
A: Oracle SOA comes embedded in this solution. That is the integration framework we use to integrate with all the ERP systems.

Q: Do you offer a French user interface? Also, do you host in Canada?
A: Yes, the interface is available in French. Our hosting partner, Rackspace, offers hosting options in Canada.

Q: Do you have the capability for invoices to be signed off electronically by an authorized signer?
A: Yes, all approvals are electronic.

Q: Is one of the 24 languages covered by OCR Chinese?
A: Simplified Chinese - yes

Q: Do you offer e-payment?
A: Payments are generally done as part of your regular payment process. We do not provide any capability for that.

Q: Do suppliers submit invoices directly to Inspyrus or EBS?
A: They can do that via email or send EDI invoices.

Q: Will it integrate with an ancient PO/RO system that is not Oracle?
A: Yes, we have the ability to integrate with 3rd party PO systems.

Q: Can you briefly explain how this is based on Oracle webcenter? We have WebCenter right now and we want to know how we can utilize it.
A: Yes, it is built on Oracle WebCenter. You can reach out to Inspyrus for more information. www.inspyrus.com

Q: After the OCR data extraction, if there are any errors/mistakes, how are they corrected before pushing into the ERP?
A: Inspyrus provides a unified application where these are corrected - as part of the workflow.

Q: You replied that all your approvals are electronic - can they be visible like a digital signature in pdf?
A: We do not touch the pdf - for compliance reasons. The electronic approvals are available as a separate document tied to the invoice record.

Q: What criteria should invoices satisfy for automatic GL coding to work properly?
A: If there are specific business rules that you can tell us to automate GL coding - say for a particular vendor or for certain descriptions on invoice lines, we can automate GL coding.

Oracle Cloud – Oracle Management Cloud

Marco Gralike - Thu, 2016-03-10 09:09
Oracle Management Cloud is a new Oracle Cloud offering launched during Oracle Open World 2015,…

Getting Started with MapR Streams

Tugdual Grall - Thu, 2016-03-10 04:18
Read this article on my new blog. You can find a new tutorial that explains how to deploy an Apache Kafka application to MapR Streams; the tutorial is available here: Getting Started with MapR Streams. MapR Streams is a new distributed messaging system for streaming event data at scale, and it’s integrated into the MapR converged platform. MapR Streams uses the Apache Kafka API, so …

Use OBIEE to Achieve Your GOOOALS!!! – A Presentation for GaOUG

Rittman Mead Consulting - Thu, 2016-03-10 04:00

Background

A few months before the start of the 2014 World Cup, Jon Mead, Rittman Mead’s CEO, asked me to come up with a way to showcase our strengths and skills while leveraging the excitement generated by the World Cup. With this in mind, my colleague Pete Tamisin and I decided to create our own game-tracking page for World Cup matches, similar to the ones you see on popular sports websites like ESPN and CBSSports, with one caveat: we would build the game-tracker inside an OBIEE dashboard.

Unfortunately, after several long nights and weekends, we weren’t able to come up with something we were satisfied with, but we learned tons along the way and kept a lot of the content we created for future use. That future use came several months later when we decided to create our own soccer match (“The Rittman Mead Cup”) and build a game-tracking dashboard that would support this match. We then had the pleasure to present our work in a few industry conferences, like the BI Forum in Atlanta and KScope in Hollywood, Florida.

GaOUG Tech Day

Recently I had the privilege of delivering that presentation one last time, at Georgia Oracle Users Group’s Tech Day 2016. With the right amount of silliness (yes, The Rittman Mead cup was played/acted by our own employees), this presentation allowed us to discuss with the audience our approach to designing a “sticky” application; meaning, an application that users and consumers will not only find useful, but also enjoyable, increasing the chances they will return to and use the application.

We live in an era where nice, fun, pretty applications are commonplace, and our audience expects the same from their business applications. Validating the numbers on the dashboard is no longer enough. We need to be able to present that data in an attractive, intuitive, and captivating way. So, throughout the presentation, I discussed with the audience the thoughtful approach we used when designing our game-tracking page. We focused mainly on the following topics: Serving Our Consumers; Making Life Easier for Our Designers, Modelers, and Analysts; and Promoting Process and Collaboration (the latter can be accomplished with our ChitChat application). Our job would have been a lot easier if ChitChat were available when we first put this presentation together….

Finally, you can find the slides for the presentation here. Please add your comments and questions below. There are usually multiple ways of accomplishing the same thing, so I’d be grateful to hear how you guys are creating “stickiness” with your users in your organizations.

Until the next time.

The post Use OBIEE to Achieve Your GOOOALS!!! – A Presentation for GaOUG appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Slides and demo script from my ORDS talk at apex.world 2016 in Rotterdam

Dietmar Aust - Thu, 2016-03-10 01:23
Hi everybody,

I just came back from the apex.world conference in Rotterdam ... very nice location, great content and the wonderful APEX community to hang out with ... always a pleasure.

As promised, you can download the slides and the demo script (as is) from my site.

Instructions are included.

See you at APEX Connect in Berlin or KScope in Chicago. #letswreckthistogether

Cheers and enjoy!
~Dietmar. 



Application Testing: The Oracle Utilities Difference

Anthony Shorten - Wed, 2016-03-09 19:33

Late last year we introduced a new product to the Oracle Utilities product set. It was the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities. This pack is a set of prebuilt content and utilities based upon Oracle Application Testing Suite.

One of the major challenges in any implementation, or upgrade, is the amount of time that testing takes in relation to the overall time to go live. Typically testing is on the critical path for most implementations and upgrades. Subsequently, customers have asked us to help address this for our products.

One technique to reduce testing time is to implement automated testing as much as possible. The feedback we got from most implementations was that the initial cost of adopting automated testing tools was quite high, as you need to build and maintain the assets for the automated testing to be cost effective. This typically requires specialist skills in the testing tool.

This also brought up another issue with traditional automated testing techniques. Most traditional automated testing tools use the user interface to record their automation scripts. Let me explain. Using traditional methods, the tool will "record" your interactions with the online system, including the data you used. This is then built into a testing "script" that reproduces those interactions to automate them. This is limiting: to use the same script with another set of data, for alternative scenarios, you have to involve a script developer, and this requires additional skills. This is akin to programming.

Now let me explain the difference with Oracle Application Testing Suite in combination with the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities:

  • Prebuilt Testing Assets - We provide a set of prebuilt component based assets that the product developers use to QA the product. These greatly reduce the need for building assets from scratch and get you testing earlier.
  • One pack, multiple products, multiple versions - The pack contains the components for the Oracle Utilities products supported and the versions supported.
  • Service based not UI based - The components in the pack are service based rather than using the UI approach traditionally employed. This isolates your functionality from any user experience changes. In a traditional approach, any change to the user interface would require either re-recording the script or making programming changes to it. This is not needed with the service based approach.
  • Supports Online, Web Services and Batch - Traditional approaches typically cover online testing only. Oracle Application Testing Suite and the pack allow for online, web services and batch testing, which greatly expands the benefits.
  • Component Generator Utility - Whilst the pack supplies the components you will need, we are aware that some implementations are heavily customized, so we provide a Component Generator which uses the product metadata to generate a custom component that can be added to the existing library.
  • Assemble not code - We use the Oracle Flow Builder product, used by many Oracle eBusiness Suite customers, to assemble the components into a flow that models your business processes. Oracle Flow Builder simply generates the script that is executed, without the need for technical script development.
  • Upgrade easier - The upgrade process is much simpler: the flows are simply pointed at the new versions of the supplied components to perform your upgrade testing.
  • Can Co-exist with UI based Components - Whilst our solution is primarily service based, it is possible to use all the facilities in Oracle Application Testing Suite to build components, including traditional recording, to add any logic introduced on the browser client. The base product does not introduce business logic into the user interface so the base components are not user interface based. We do supply a number of UI based components in the Oracle Utilities Application Framework part of the pack to illustrate that UI based components can co-exist.
  • Cross product testing - It is possible to test across Oracle Utilities products within a single flow. As the license includes the relevant Oracle Application Testing Suite tools (Flow Builder, OpenScript etc), it is possible to add components for bespoke and other web or service based solutions in your implementation as well.
  • Flexible licensing - The licensing of the testing solution is very flexible. You not only get the pack and the Oracle Application Testing Suite, but the license also allows the following:
    • The license applies regardless of the number of Oracle Utilities products you use. Obviously customers with more than one Oracle Utilities product will see a greater benefit, but it is cost effective regardless.
    • The license applies regardless of the number of copies of products you run the testing against. There is a server enablement that needs to be performed as part of the installation, but you are not restricted in the number of non-production copies you run the solution against.
    • The license conditions include full use of the Oracle Application Testing Suite for licensed users. This can be used against any web or Web Service based application on the site so that you can include third party integration as part of your flows if necessary.
    • The license conditions include OpenScript which allows technical people to build and maintain their own custom assets to add to the component libraries to perform a wide range of ancillary testing.
  • Data is separated from process - In the traditional approach you included the data as part of the test. Using this solution, the flow is built independently of the data. The data, in the form of databanks (CSV, MS Excel etc), can be attached at the completion of the flow, in the flow definition, or altered AFTER the flow has been built. Even after the script has been built, Oracle Flow Builder separates the data from the flow so that you can substitute the data without the need to regenerate the script. This means you have greater reuse and greater flexibility in your testing (see the databank sketch after this list).
  • Flexible execution of Testing - The Flow Builder product generates a script (that typically needs no alteration after generation). This script can be executed in OpenScript (for developers), using the optional Oracle Test Manager product, loaded into the optional Oracle Load Testing product for performance/load testing or executed by a third party tool via a command line interface. This flexibility means greater reuse of your testing assets. 
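
To make the databank idea concrete, here is a minimal sketch of what a CSV databank might look like. The column names and values are purely hypothetical placeholders, not fields supplied by the pack; a real databank simply carries whatever inputs your flow references:

accountId,billDate,expectedStatus
ACCT-0001,2016-03-01,Complete
ACCT-0002,2016-03-15,Pending

The same generated script can then be pointed at a different databank file to cover alternative scenarios without rebuilding the flow.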
Support for Extensions

One of the most common questions I get about the pack is the support for customizations (or extensions as we call them). Let me step back before answering and put extensions into categories.

When I discuss extending our product there is a full range of facilities available. To focus on the impact of extensions, I am going to group these into three simple categories:

  • User Interface extensions - These are bits of code in CSS or JavaScript that extend the user interface directly or add business logic into the browser front end. These are NOT covered by the base components, as the product has all the business logic in the services layer. The reason for this is that the same business rules can be reused regardless of the channel used (such as online, web services and batch); if you have the logic in just one channel then you miss those business rules elsewhere. To support these extensions you can use the features of Oracle Application Testing Suite to record that logic and generate a component for you. You can then include that component in any flow, with other relevant components, to test that logic.
  • Tier 1 extensions - These are extensions that alter the structure of the underlying object; anything that changes the API to the object falls into this category. Examples include custom schemas that alter the structure of the object (e.g. flattening data, changing tags, adding rules in the schema etc). These will require the use of the Component Generator, as the API will differ from the base component.
  • Tier 2 extensions - These are extensions within the objects themselves that alter behavior. For example, algorithms, user exits and change handlers are examples of such extensions. These are supported by the base components directly, as they alter the base data, not the structure. If you have a combination of Tier 1 and Tier 2 then you must use the Component Generator, as the structure is altered.

Customers will use a combination of all three, and in some cases will need to use the component generators (the UI one or the metadata one), but generally the components supplied will be reused for at least part of the testing, which saves time.

We are excited about this new product and we look forward to adding more technology and new features over the next few releases.

Happy 10th Belated Birthday to My Oracle Security Blog

Pete Finnigan - Wed, 2016-03-09 13:05

Make a Sad Face..:-( I seem to have missed my blog's tenth birthday, which happened on the 20th September 2014. My last post, until very recently, was on July 23rd 2014; so actually it's been a big gap....[Read More]

Posted by Pete On 03/07/15 At 11:28 AM

Categories: Security Blogs

Oracle Database Vault 12c Paper by Pete Finnigan

Pete Finnigan - Wed, 2016-03-09 13:05

I wrote a paper about Oracle Database Vault in 12c for SANS last year, and this was published in January 2015 by SANS on their website. I also prepared and delivered a webinar about this paper with SANS. The Paper....[Read More]

Posted by Pete On 30/06/15 At 05:38 PM

Categories: Security Blogs

Oracle Cloud – Moving a dumpfile into the Database as a Service Cloud

Marco Gralike - Wed, 2016-03-09 08:26
Now that I got myself a bit acquainted with the Database as a Service offering,…

I’m Sasank Vemana and this is how I work

Duncan Davies - Wed, 2016-03-09 08:00

The next profile in our ‘How I Work‘ series is Sasank Vemana. Sasank burst onto the PeopleSoft blogging scene in 2014 with his Sasank’s PeopleSoft Log site, and has been adding entries at a ferocious pace since. He is probably best known for his series of posts on altering the PeopleSoft branding to make it match a corporate palette, as well as configuration and code changes related to UI/UX.

I met Sasank at OOW15 and he’s a lovely chap. He has given some great responses to the questions. I’d love to know how he persuaded his employer to give him 4 monitors and about his use of dual mice!

Name: Sasank Vemana

Occupation: PeopleSoft/Enterprise Technology
Location: Tallahassee, Florida, USA
Current computer:
Desktop: Dell Optiplex 9020 (Windows 7, Intel Core i7, 8 GB RAM)
Laptop: Dell LATITUDE | E6530 (Windows 7, Intel Core i7, 8 GB RAM)
Current mobile devices: Samsung Galaxy S4. Yes – That reminds me I need an upgrade!
I work: To solve problems.

What apps/software/tools can’t you live without?
Google is my friend and my portal to everything. I try not to overload myself with information that I know I can find; Google search helps me find what I am looking for. On a side note, I use WhatsApp and Facebook to keep in touch with my family and friends who are scattered in different parts of the world. I also use the S Health app to keep track of my physical activities and monitor my health.

Besides your phone and computer, what gadget can’t you live without?
Not a big gadget fan! I can live without them as long as I have a good internet connection, which seems to be the most important thing for me these days. With that, I can do my reading, research and also remote to any of my computers (if needed) regardless of the device. Same goes with entertainment – Netflix, Spotify, etc.

What’s your workspace like?
Over the past year and a half, I have been using a standing desk at work, thanks to my current employers, who were kind enough to allow me to rearrange my workspace. When I am at work and not in meetings, I try to stand as much as possible and use a bar stool when I get tired. Occasionally, I also just sit down with my laptop wherever I find space. The four-monitor desktop setup helps tremendously when I have multiple applications running. I also have two mice and try to switch between my left and right hands. I am ambidextrous, so it works for me (I would not recommend this otherwise!).

Sasank Vemana - How I Work - Picture 1

Standing desk, 4 monitors and dual mice

Sasank Vemana - How I Work - Picture 2

What do you listen to while you work?
Usually, I am zoned into whatever I am doing and mostly oblivious to events around me. I don’t listen to music while I am at work these days. At times, I listen to live cricket or tennis commentary if anything I care about is going on. A set of Bose noise canceling headphones has long been on my wish list (in case Santa is reading!).

What PeopleSoft-related productivity apps do you use?
Oracle Virtual Box/PUM Images – My savior for evaluation, experimentation and proof of concept purposes.
Web Services: SoapUI, Postman (Chrome Add-On)
Web Development: Browser based Developer Tools (Chrome/Firefox/IE), DOM/StyleSheets/JavaScripts Explorers, Device Emulators, etc., Fiddler, Live HTTP Headers (Firefox Add-On)
Text Editors/Journals: Notepad++, Programmer’s File Editor (PFE), WinMerge, Evernote
DB Tools: Golden (for the most part since it is light weight and does not hog resources), SQL Developer (for some activities), OEM – Oracle Enterprise Manager
Screen Capture/Recording: SnagIt and Jing (short videos) are great for communication

Do you have a 2-line tip that some others might not know?
Tracing tip: Use PeopleCode – 2048 (Show Each), SQL – 3 (Statement, Bind). This gives us every line of code and SQL that executed in sequence without all the other clutter which is not always useful especially when we are just trying to understand the logic.
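
As a rough illustration of where those values land: if you applied the same settings at the application server domain level, the [Trace] section of psappsrv.cfg might look something like the sketch below. Treat this as an assumption to verify against your own environment, since how tracing is set (Configuration Manager, the trace options at signon, or the domain configuration files) varies by PeopleTools release and site policy:

[Trace]
; 2048 = Show Each: log every PeopleCode statement as it executes
TracePC=2048
; 3 = 1 (Statement) + 2 (Bind): log each SQL statement with its bind values
TraceSql=3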

What SQL/Code do you find yourself writing most often?
Generally speaking, queries on PeopleTools metadata tables. E.g.: PSAUTHITEM (security related queries), PSPRSMDEFN (portal navigation queries), etc.
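
As a hedged sketch of the kind of metadata queries mentioned above (the page name and portal label here are placeholder values, and the column list assumes the standard PeopleTools record definitions):

-- Which permission lists grant access to a given page, and with what actions?
-- AUTHORIZEDACTIONS is a bitmask of the allowed actions (Add, Update/Display, etc.)
SELECT CLASSID, MENUNAME, BARITEMNAME, PNLITEMNAME, AUTHORIZEDACTIONS
FROM   PSAUTHITEM
WHERE  PNLITEMNAME = 'USER_GENERAL';

-- Where does a content reference sit in the portal navigation?
SELECT PORTAL_NAME, PORTAL_OBJNAME, PORTAL_LABEL
FROM   PSPRSMDEFN
WHERE  PORTAL_LABEL LIKE '%User Profile%';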

What would be the one item you’d add to PeopleSoft if you could?
I would add/implement a log aggregation and mining utility. I have spent many hours combing through log files distributed across different servers. It would be great to see something that aggregates all server logs and provides mining capabilities (regex and/or free-form search). After attending Oracle OpenWorld 2015, I understand that PeopleTools 8.55 has some new features – as part of Health Center – that might assist with logs. I look forward to evaluating this functionality!

What everyday thing are you better at than anyone else?
Probably exploring! Although I would be careful not to say that I am better at it than others. I just find myself doing it a lot, without worrying about getting lost. It might seem like a wasteful effort at times, but it is a natural way of learning for me.

What’s the best advice you’ve ever received?
This is not really advice received from someone, but rather some of my favorite quotes that I can think of right now:
– Learn to profit from your losses.
– Don’t make decisions during a storm.
– A manager gets work done through people whereas a leader inspires people to meet shared goals.
– And miles to go before I sleep.


Webcast Q&A: Marketing Asset Management Integrated with Marketing Cloud

WebCenter Team - Wed, 2016-03-09 07:10

WEBCAST Marketing Asset Management Integrated with Marketing Cloud

Thank you to everyone who joined us last Wednesday on our live webcast: Marketing Asset Management Integrated with Marketing Cloud; we appreciate your interest and the great Q&A that followed! For those who missed it or who would like to watch it again, we now have the on-demand replay available for the webcast here.

Mariam Tariq

On the webcast, Mariam Tariq, Senior Director of Product Management -- Content and Process at Oracle discussed how organizations are struggling with managing marketing assets across multiple digital channels where content on each channel (web, email, Facebook page, etc.) is created and delivered by different teams of marketers using different technologies. Mariam gave specific examples and a great demonstration to show audience members how you can enable IT to empower Line of Business by putting the power to create rich microsites in their hands -- driving business agility and innovation.

We also wanted to capture some of the most asked questions and Mariam’s responses here. But please do follow up with us and comment here if you have additional questions or feedback. We always look forward to hearing from you.

Q: Rather than pushing assets directly into other systems, or even using the microsite portal to share them, can our marketing team simply share links directly from Documents Cloud and not use Process or Sites Cloud?

Mariam: Absolutely. If you need only the collaboration platform, you can use Documents on its own. Keep in mind that Documents Cloud includes limited use of Sites Cloud for test use cases. So you can try out Sites Cloud with Documents. 

Q: Since you didn’t show Oracle SRM in the demo, can you explain how that integration works?

Mariam: There is a release coming this spring with Documents integration into Oracle SRM. Oracle SRM, to quickly summarize, enables social marketers to create and manage social feeds. This includes a layout editor for creating Facebook pages and the ability to schedule and publish updates like Twitter posts. In SRM, you will see a button allowing you to directly access Documents Cloud content, like an image. The file will get copied into SRM and you can then use that file in your social messages.

Q: Can the approvers get email about tasks?

Mariam: Most definitely. Approvers get an email about the task with a link to take them directly into Process Cloud to review the files and do the approval.

Q: How does pricing work?

Mariam: Documents and Process are priced per user. Sites Cloud is priced on a metric called ‘interactions’, which is a measure of data consumption, so it is essentially priced by the amount of data delivered across all your microsites. More detail is available at cloud.oracle.com.

Q: With the website tool you showed, can we restrict who has access to the site?

Mariam: Yes. Absolutely. You can secure access to the site. 

Q: Are the conversation features you showed in Document Cloud related to the Oracle Social Network offering… it looks similar?

Mariam: Yes. We have officially rolled Oracle Social Network (OSN) into Documents Cloud. Since the social collaboration process often involves sharing and discussing documents like Word and PowerPoint files, the feedback we received from our customers was to simply merge the two rather than having a separate service for each. So now, the OSN features are part of Documents. You can directly access the comments and discussions in context of the organized files and folders of Documents Cloud rather than referencing documents in a separate interface.

Q: Where are the assets stored? Isn't WebCenter Sites from Oracle a content repository?

Mariam: WebCenter Sites is definitely used as a content management system. It's on-premise. Documents Cloud is a multi-tenant cloud solution. You can use them both together. What Documents Cloud provides you is a flexible cloud-based collaboration platform that you can also use with external agencies. Those assets can be consumed within WebCenter Sites. This is a 'hybrid' cloud to on-premise setup. Now in WebCenter Sites you can reference the cloud assets directly from Documents. A subset of your assets could be managed this way. It simply provides a more nimble cloud collaboration extension to complement WebCenter Sites (or any WCM system).

Q: Is it "all push" to customers/clients or is there a feedback loop from customers/clients to track the success of the campaign?

Mariam: In the demo, we're simply pushing the content out cross-channel. We have analytics coming later in this year that will show consumption of the content that will help with the feedback loop to measure engagement.

In case you missed the webcast and would like to listen to the on-demand version, you can do so here.

UKOUG Application Server & Middleware SIG

Tim Hall - Wed, 2016-03-09 05:50

I’ll be speaking at the UKOUG Application Server & Middleware SIG tomorrow.

It’s going to be another hit-and-run affair for me. I’m in meetings at work all morning, then I’ll be doing a mad dash to get to my presentation at the SIG, then straight back to work to do an upgrade during the evening.

The agenda looks cool, so I would have liked to stay the whole day, but sadly that’s not going to happen. :(

My favourite bit of any tech event is interacting with people, so just turning up to present is not ideal, but in this case I don’t have a choice in the matter, unless I go AWOL from work… :)

Hope to see you there, even if it is only briefly!

Cheers

Tim…


Free Oracle Database Monitoring Webinar by Oracle ACE Ronald Rood

Gerger Consulting - Wed, 2016-03-09 05:19
Attend our webinar and learn how you can monitor your Oracle Database and cloud infrastructure with Zabbix, the open source monitoring tool.

The presentation is hosted by Oracle ACE and Certified Master Ronald Rood.

Learn more about the webinar at this link.


Categories: Development

OSB 12c Adapter for Oracle Utilities

Anthony Shorten - Tue, 2016-03-08 23:32

In Oracle Utilities Application Framework V4.2.0.3.0 we introduced Oracle Service Bus adapters to allow that product to process Outbound Messages and, for Oracle Utilities Customer Care And Billing, Notification and Workflow records.

These adapters were compatible with Oracle Service Bus 11g. We have now patched these adapters to be compatible with the new facilities in Oracle Service Bus 12c. The following patches must be applied:

Version      Patch Number
4.2.0.3.0    22308653
4.3.0.0.1    21760629
4.3.0.1.0    22308684
