Feed aggregator

You shouldn't think this happens only to you

Denes Kubicek - Fri, 2014-12-05 03:17
For several hours now I have been getting this while trying to access all blogs at http://www.odtug.com/apex. It seems that this list has a lot of problems listing all the relevant APEX blogs. The previous version from Dimitri was so much better and more user friendly.
Categories: Development

Closure

Jonathan Lewis - Fri, 2014-12-05 02:11

It’s been a long time since I said anything interesting about transitive closure in Oracle, the mechanism by which Oracle can infer that if a = b and b = c then a = c but only (in Oracle’s case) if one of a, b, or c is a literal constant rather than a column. So with that quick reminder in place, here’s an example of optimizer mechanics to worry you. It’s not actually a demonstration of transitive closure coming into play, but I wanted to remind you of the logic to set the scene.
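As a tiny illustration of the literal-constant case (a hypothetical fragment, not one of the test queries below): given the predicates t1.n_90 = t2.n_90 and t2.n_90 = 10, the optimizer can generate the extra predicate t1.n_90 = 10 and filter both tables before the join:

select
	t1.*, t2.*
from
	t1, t2
where
	t1.n_90 = t2.n_90
and	t2.n_90 = 10		-- literal: closure adds t1.n_90 = 10
;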

I have three identical tables, one million rows, no indexes. The SQL to create the first table is one I supplied a couple of days ago to demonstrate changes in join cardinality dependent on Oracle version:


create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1000))	n_1000,
	trunc(dbms_random.value(0,750))		n_750,
	trunc(dbms_random.value(0,600))		n_600,
	trunc(dbms_random.value(0,400))		n_400,
	trunc(dbms_random.value(0,90))		n_90,
	trunc(dbms_random.value(0,72))		n_72,
	trunc(dbms_random.value(0,40))		n_40,
	trunc(dbms_random.value(0,3))		n_3
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;

Here’s a simple SQL statement that joins the three tables:


select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t2.n_90
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;

Given the various n_400 = {constant} predicates we should expect to see close to 2,500 rows from each table participating in the join (1,000,000 rows / 400 distinct values) – and that is exactly what Oracle predicts in the execution plan. The question is: what is the cardinality of the final join? Before showing you the execution plan and its prediction I’m going to bring transitivity into the picture. Note the two n_90 join predicates: if t2.n_90 = t1.n_90 and t3.n_90 = t2.n_90 then t3.n_90 = t1.n_90; so I might have written my query slightly differently – note the small change in the second n_90 predicate below (marked "-- changed"):


select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t1.n_90		-- changed
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;

So here’s the exciting bit. My two queries are logically equivalent, and MUST return exactly the same row set. Check the final cardinality predictions in these two execution plans (from 12.1.0.2, but you get the same results in 11.2.0.4; older versions show other differences):


First Version - note the predicate for operation 3
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 70949 |  5820K|  1869  (10)| 00:00:01 |
|*  1 |  HASH JOIN          |      | 70949 |  5820K|  1869  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL | T1   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  3 |   HASH JOIN         |      |  2554 |   139K|  1245  (10)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL| T3   |  2500 | 70000 |   622  (10)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_90"="T1"."N_90")
   2 - filter("T1"."N_400"=1)
   3 - access("T3"."N_90"="T2"."N_90" AND "T3"."N_600"="T2"."N_600")
   4 - filter("T2"."N_400"=2)
   5 - filter("T3"."N_400"=3)

Second Version - note the predicate for operation 1
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |  3264 |   267K|  1868  (10)| 00:00:01 |
|*  1 |  HASH JOIN          |      |  3264 |   267K|  1868  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL | T1   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  3 |   HASH JOIN         |      | 10575 |   578K|  1245  (10)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL| T3   |  2500 | 70000 |   622  (10)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_90"="T1"."N_90" AND "T3"."N_90"="T1"."N_90")
   2 - filter("T1"."N_400"=1)
   3 - access("T3"."N_600"="T2"."N_600")
   4 - filter("T2"."N_400"=2)
   5 - filter("T3"."N_400"=3)

A small change in the choice of how to present the predicates gives me a factor of 22 difference in the cardinality estimate (70,949 rows vs. 3,264) – oops!

The actual result with my data was close to 3,000 rows – so one of the estimates in the second version was pretty good; but the point of the blog isn’t that you can “tune” the optimizer by carefully picking your way through transitive closure, the point is that a small “cosmetic” change you might make to a query could result in a significant change in the cardinality calculations which could then make a dramatic difference to the final execution plan. This example, by the way, depends on the same “multi-column sanity check” that showed up in the previous posting.

I will be expanding on this posting some time in the next couple of weeks but, again, the example should come up in my session on calculating selectivity at “Super Sunday” at UKOUG Tech 14.


Announcing SLOB 2.2 : Think Time and Limited-Scope User-Data Modification

Kevin Closson - Fri, 2014-12-05 00:19

This is a hasty blog post to get SLOB 2.2 out to those who are interested.

In addition to doing away with the cumbersome “seed” table and procedure.sql, this kit introduces 5 new slob.conf parameters. By default these parameters are disabled.

This SLOB distribution does not require re-executing setup.sh. One can simply adopt the kit and use it to test existing SLOB databases. The following explains the new slob.conf parameters:

DO_UPDATE_HOTSPOT=FALSE

When set to TRUE, modify SQL will no longer affect random rows spanning each session’s schema. Instead, each session will only modify HOTSPOT_PCT percent of its data.

HOTSPOT_PCT=10

This parameter controls how much of each session’s schema gets modified when UPDATE_PCT is non-zero. The default will limit the scope of each session’s data modifications to a maximum of 10% of its data.

THINK_TM_MODULUS=0

When set to a non-zero value, this is a frequency control on how often sessions will incur think time. For example, if set to 7, every seventh SQL statement will be followed by a sleep (think time) for a random amount of time between THINK_TM_MIN and THINK_TM_MAX. It’s best to assign a prime number to THINK_TM_MODULUS.

THINK_TM_MIN=.1

The lower bound for the random sleep period when THINK_TM_MODULUS triggers a think-time event.

THINK_TM_MAX=.5

The upper bound for the random sleep period when THINK_TM_MODULUS triggers a think-time event.
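
As an illustrative (not prescriptive) example, a slob.conf fragment that enables all of the new behavior might look like this, assuming UPDATE_PCT is already non-zero:

UPDATE_PCT=20            # existing parameter: 20% of the SQL mix is modify DML
DO_UPDATE_HOTSPOT=TRUE   # confine each session's modify SQL to a hot spot
HOTSPOT_PCT=10           # hot spot sized at 10% of the session's data
THINK_TM_MODULUS=7       # think time after every 7th SQL statement (prime recommended)
THINK_TM_MIN=.1          # lower bound (seconds) for the random sleep
THINK_TM_MAX=.5          # upper bound (seconds) for the random sleep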

Notes About Think Time

The resolution supported for think time is hundredths of a second.

The following is a link to the SLOB 2.2 release tarball (md5: be3612c50d134636a56ef9654b5865c5):

https://my.syncplicity.com/share/5vmflakvyqbawsy/2014.12.05.slob_2.2.1.tar

The additional tarball (at the following link) has a slob.conf, simple.ora and awr.txt that show a way to have 256 sessions produce the following load profile (on 2s16c32t E5 Xeon):
https://my.syncplicity.com/share/geydubw3q42okrt/think-time-help-files.tar

(Screenshot: load-profile-think-time – AWR load profile with think time enabled)


Filed under: oracle, SLOB Tagged: Oracle, SLOB

SQL Server tips: how to list orphaned logins

Yann Neuhaus - Thu, 2014-12-04 21:56

I have read a lot about orphaned database users in SQL Server, but almost nothing about orphaned logins. Many of my customers migrate or remove databases in SQL Server. They forget - not every time, but often - to remove the logins and jobs associated with these databases. I have created a script - without any cursors, YES, it is possible - that finds all logins not "attached" to any database of an instance.
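
The set-based idea is simple enough to sketch. The following is my simplified illustration of the approach (not the actual script; it assumes sysadmin rights and SQL Server 2008 or later): concatenate every online database's sys.database_principals into one UNION ALL subquery and anti-join it against sys.server_principals:

-- Simplified sketch: list server logins whose SID maps to no database user
-- in any online database. Illustration only, not the original script.
DECLARE @sql nvarchar(max) = N'';

-- Build " UNION ALL SELECT sid FROM [db].sys.database_principals ..." per database
SELECT @sql = @sql + N' UNION ALL SELECT sid FROM '
            + QUOTENAME(name)
            + N'.sys.database_principals WHERE sid IS NOT NULL'
FROM sys.databases
WHERE state_desc = N'ONLINE';

SET @sql = N'SELECT sp.name AS orphaned_login
FROM sys.server_principals sp
WHERE sp.type IN (''S'', ''U'', ''G'')   -- SQL, Windows and Windows-group logins
  AND sp.name NOT LIKE ''##%''           -- ignore internal certificate-based logins
  AND sp.sid NOT IN (' + STUFF(@sql, 1, 11, N'') + N');';

EXEC sys.sp_executesql @sql;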

GUI Be Gone

Michael Dinh - Thu, 2014-12-04 18:08

I am losing the luxury of using a GUI, since clients often do not have X Windows or a VNC server installed.

Adapt or die: time to re-learn the command line.

Better to verify before installation than to clean up a problematic install.

Use runcluvfy to verify DB install:
runcluvfy.sh stage -pre dbinst -n rac01,rac02 -r 11gR2 -d /u01/app/oracle/product/11.2.0.4/db_1 -osdba dba -fixup -fixupdir /tmp -verbose

Use runInstaller -executePrereqs to verify responseFile for silent install and detect issues:
runInstaller -silent -executePrereqs -waitforcompletion -force -responseFile /media/sf_Linux/11.2.0.4/database/rac_db_swonly.rsp

Use grep to find results from installActions log:
grep -e '[[:upper:]]:' installActions2014-12-04_02-49-27PM.log|cut -d ":" -f1|sort -u

DEMO:

[oracle@rac01:/media/sf_Linux/11.2.0.4/grid]
$ ./runcluvfy.sh stage -pre dbinst -n rac01,rac02 -r 11gR2 -d /u01/app/oracle/product/11.2.0.4/db_1 -osdba dba -fixup -fixupdir /tmp -verbose

Performing pre-checks for database installation

Checking node reachability...

Check: Node reachability from node "rac01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac02                                 yes
  rac01                                 yes
Result: Node reachability check passed from node "rac01"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed
Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed

Verification of the hosts config file successful


Interface information for node "rac02"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   10.0.2.15       10.0.2.0        0.0.0.0         10.0.2.2        08:00:27:87:91:11 1500
 eth1   192.168.56.12   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:4A:7B:27 1500
 eth1   192.168.56.33   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:4A:7B:27 1500
 eth1   192.168.56.32   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:4A:7B:27 1500
 eth1   192.168.56.22   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:4A:7B:27 1500
 eth2   10.10.10.12     10.0.0.0        0.0.0.0         10.0.2.2        08:00:27:E8:D6:21 1500
 eth2   169.254.82.236  169.254.0.0     0.0.0.0         10.0.2.2        08:00:27:E8:D6:21 1500


Interface information for node "rac01"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   10.0.2.15       10.0.2.0        0.0.0.0         10.0.2.2        08:00:27:02:B1:57 1500
 eth1   192.168.56.11   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:BD:66:A4 1500
 eth1   192.168.56.21   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:BD:66:A4 1500
 eth1   192.168.56.31   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:BD:66:A4 1500
 eth2   10.10.10.11     10.0.0.0        0.0.0.0         10.0.2.2        08:00:27:60:79:0F 1500
 eth2   169.254.34.109  169.254.0.0     0.0.0.0         10.0.2.2        08:00:27:60:79:0F 1500


Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac02[192.168.56.12]            rac02[192.168.56.33]            yes
  rac02[192.168.56.12]            rac02[192.168.56.32]            yes
  rac02[192.168.56.12]            rac02[192.168.56.22]            yes
  rac02[192.168.56.12]            rac01[192.168.56.11]            yes
  rac02[192.168.56.12]            rac01[192.168.56.21]            yes
  rac02[192.168.56.12]            rac01[192.168.56.31]            yes
  rac02[192.168.56.33]            rac02[192.168.56.32]            yes
  rac02[192.168.56.33]            rac02[192.168.56.22]            yes
  rac02[192.168.56.33]            rac01[192.168.56.11]            yes
  rac02[192.168.56.33]            rac01[192.168.56.21]            yes
  rac02[192.168.56.33]            rac01[192.168.56.31]            yes
  rac02[192.168.56.32]            rac02[192.168.56.22]            yes
  rac02[192.168.56.32]            rac01[192.168.56.11]            yes
  rac02[192.168.56.32]            rac01[192.168.56.21]            yes
  rac02[192.168.56.32]            rac01[192.168.56.31]            yes
  rac02[192.168.56.22]            rac01[192.168.56.11]            yes
  rac02[192.168.56.22]            rac01[192.168.56.21]            yes
  rac02[192.168.56.22]            rac01[192.168.56.31]            yes
  rac01[192.168.56.11]            rac01[192.168.56.21]            yes
  rac01[192.168.56.11]            rac01[192.168.56.31]            yes
  rac01[192.168.56.21]            rac01[192.168.56.31]            yes
Result: Node connectivity passed for interface "eth1"


Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac01:192.168.56.11             rac02:192.168.56.12             passed
  rac01:192.168.56.11             rac02:192.168.56.33             passed
  rac01:192.168.56.11             rac02:192.168.56.32             passed
  rac01:192.168.56.11             rac02:192.168.56.22             passed
  rac01:192.168.56.11             rac01:192.168.56.21             passed
  rac01:192.168.56.11             rac01:192.168.56.31             passed
Result: TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth2"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac02[10.10.10.12]              rac01[10.10.10.11]              yes
Result: Node connectivity passed for interface "eth2"


Check: TCP connectivity of subnet "10.0.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac01:10.10.10.11               rac02:10.10.10.12               passed
Result: TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Check: Total memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         3.8674GB (4055296.0KB)    1GB (1048576.0KB)         passed
  rac01         3.8674GB (4055296.0KB)    1GB (1048576.0KB)         passed
Result: Total memory check passed

Check: Available memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         2.6666GB (2796104.0KB)    50MB (51200.0KB)          passed
  rac01         2.7855GB (2920820.0KB)    50MB (51200.0KB)          passed
Result: Available memory check passed

Check: Swap space
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         8GB (8388604.0KB)         3.8674GB (4055296.0KB)    passed
  rac01         8GB (8388604.0KB)         3.8674GB (4055296.0KB)    passed
Result: Swap space check passed

Check: Free disk space for "rac02:/u01/app/oracle/product/11.2.0.4/db_1,rac02:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/oracle/product/11.2.0.4/db_1  rac02         /             45.459GB      6.7GB         passed
  /tmp              rac02         /             45.459GB      6.7GB         passed
Result: Free disk space check passed for "rac02:/u01/app/oracle/product/11.2.0.4/db_1,rac02:/tmp"

Check: Free disk space for "rac01:/u01/app/oracle/product/11.2.0.4/db_1,rac01:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/oracle/product/11.2.0.4/db_1  rac01         /             45.4215GB     6.7GB         passed
  /tmp              rac01         /             45.4215GB     6.7GB         passed
Result: Free disk space check passed for "rac01:/u01/app/oracle/product/11.2.0.4/db_1,rac01:/tmp"

Check: User existence for "oracle"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac02         passed                    exists(54321)
  rac01         passed                    exists(54321)

Checking for multiple users with UID value 54321
Result: Check for multiple users with UID value 54321 passed
Result: User existence check passed for "oracle"

Check: Group existence for "oinstall"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac02         passed                    exists
  rac01         passed                    exists
Result: Group existence check passed for "oinstall"

Check: Group existence for "dba"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac02         passed                    exists
  rac01         passed                    exists
Result: Group existence check passed for "dba"

Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             yes           yes           yes           yes           passed
  rac01             yes           yes           yes           yes           passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed

Check: Membership of user "oracle" in group "dba"
  Node Name         User Exists   Group Exists  User in Group  Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             yes           yes           yes           passed
  rac01             yes           yes           yes           passed
Result: Membership check for user "oracle" in group "dba" passed

Check: Run level
  Node Name     run level                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         5                         3,5                       passed
  rac01         5                         3,5                       passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             hard          65536         65536         passed
  rac01             hard          65536         65536         passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             soft          1024          1024          passed
  rac01             soft          1024          1024          passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             hard          16384         16384         passed
  rac01             hard          16384         16384         passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             soft          16384         2047          passed
  rac01             soft          16384         2047          passed
Result: Soft limits check passed for "maximum user processes"

Check: System architecture
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         x86_64                    x86_64                    passed
  rac01         x86_64                    x86_64                    passed
Result: System architecture check passed

Check: Kernel version
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         2.6.39-400.17.1.el6uek.x86_64  2.6.32                    passed
  rac01         2.6.39-400.17.1.el6uek.x86_64  2.6.32                    passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             250           250           250           passed
  rac01             250           250           250           passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             32000         32000         32000         passed
  rac01             32000         32000         32000         passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             100           100           100           passed
  rac01             100           100           100           passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             128           128           128           passed
  rac01             128           128           128           passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4398046511104  4398046511104  2076311552    passed
  rac01             4398046511104  4398046511104  2076311552    passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4096          4096          4096          passed
  rac01             4096          4096          4096          passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4294967296    4294967296    2097152       passed
  rac01             4294967296    4294967296    2097152       passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             6815744       6815744       6815744       passed
  rac01             6815744       6815744       6815744       passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  rac01             between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             262144        262144        262144        passed
  rac01             262144        262144        262144        passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4194304       4194304       4194304       passed
  rac01             4194304       4194304       4194304       passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             262144        262144        262144        passed
  rac01             262144        262144        262144        passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             1048576       1048576       1048576       passed
  rac01             1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             1048576       1048576       1048576       passed
  rac01             1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "binutils"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed
  rac01         binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed
Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         compat-libcap1-1.10-1     compat-libcap1-1.10       passed
  rac01         compat-libcap1-1.10-1     compat-libcap1-1.10       passed
Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  rac01         compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libgcc(x86_64)-4.4.7-11.el6  libgcc(x86_64)-4.4.4      passed
  rac01         libgcc(x86_64)-4.4.7-11.el6  libgcc(x86_64)-4.4.4      passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libstdc++(x86_64)-4.4.7-11.el6  libstdc++(x86_64)-4.4.4   passed
  rac01         libstdc++(x86_64)-4.4.7-11.el6  libstdc++(x86_64)-4.4.4   passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libstdc++-devel(x86_64)-4.4.7-11.el6  libstdc++-devel(x86_64)-4.4.4  passed
  rac01         libstdc++-devel(x86_64)-4.4.7-11.el6  libstdc++-devel(x86_64)-4.4.4  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed
  rac01         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         gcc-4.4.7-11.el6          gcc-4.4.4                 passed
  rac01         gcc-4.4.7-11.el6          gcc-4.4.4                 passed
Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         gcc-c++-4.4.7-11.el6      gcc-c++-4.4.4             passed
  rac01         gcc-c++-4.4.7-11.el6      gcc-c++-4.4.4             passed
Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         ksh-20120801-21.el6.1     ksh-20100621              passed
  rac01         ksh-20120801-21.el6.1     ksh-20100621              passed
Result: Package existence check passed for "ksh"

Check: Package existence for "make"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         make-3.81-20.el6          make-3.81                 passed
  rac01         make-3.81-20.el6          make-3.81                 passed
Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         glibc(x86_64)-2.12-1.149.el6  glibc(x86_64)-2.12        passed
  rac01         glibc(x86_64)-2.12-1.149.el6  glibc(x86_64)-2.12        passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         glibc-devel(x86_64)-2.12-1.149.el6  glibc-devel(x86_64)-2.12  passed
  rac01         glibc-devel(x86_64)-2.12-1.149.el6  glibc-devel(x86_64)-2.12  passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed
  rac01         libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  rac01         libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed

Check for consistency of root user's primary group passed

Check default user file creation mask
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         0022                      0022                      passed
  rac01         0022                      0022                      passed
Result: Default user file creation mask check passed

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac02"
The Oracle Clusterware is healthy on node "rac01"

CRS integrity check passed

Checking Cluster manager integrity...


Checking CSS daemon...

  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 running
  rac01                                 running

Oracle Cluster Synchronization Services appear to be online.

Cluster manager integrity check passed


Checking node application existence...

Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         yes                       yes                       passed
  rac01         yes                       yes                       passed
VIP node application check passed

Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         yes                       yes                       passed
  rac01         yes                       yes                       passed
NETWORK node application check passed

Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         no                        no                        exists
  rac01         no                        no                        exists
GSD node application is offline on nodes "rac02,rac01"

Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         no                        yes                       passed
  rac01         no                        yes                       passed
ONS node application check passed


Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed
Result: CTSS resource check passed


Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  rac02                                 Active
  rac01                                 Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  rac02         0.0                       passed
  rac01         0.0                       passed

Time offset is within the specified limits on the following set of nodes:
"[rac02, rac01]"
Result: Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed
Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "rac02"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Check: Time zone consistency
Result: Time zone consistency check passed

Checking Single Client Access Name (SCAN)...
  SCAN Name         Node          Running?      ListenerName  Port          Running?
  ----------------  ------------  ------------  ------------  ------------  ------------
  dinh-scan         rac01         true          LISTENER_SCAN1  1521          true
  dinh-scan         rac02         true          LISTENER_SCAN2  1521          true
  dinh-scan         rac02         true          LISTENER_SCAN3  1521          true

Checking TCP connectivity to SCAN Listeners...
  Node          ListenerName              TCP connectivity?
  ------------  ------------------------  ------------------------
  rac01         LISTENER_SCAN1            yes
  rac01         LISTENER_SCAN2            yes
  rac01         LISTENER_SCAN3            yes
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "dinh-scan"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

  SCAN Name     IP Address                Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  dinh-scan     192.168.56.33             passed
  dinh-scan     192.168.56.31             passed
  dinh-scan     192.168.56.32             passed

Verification of SCAN VIP and Listener setup passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Database and Clusterware version compatibility


Checking ASM and CRS version compatibility
ASM and CRS versions are compatible
Database version "11.2" is compatible with the Clusterware version "11.2.0.4.0".
Database Clusterware version compatibility passed

Pre-check for database installation was successful.
[oracle@rac01:/media/sf_Linux/11.2.0.4/grid]
$

[oracle@rac01:/media/sf_Linux/11.2.0.4/database]
$ ./runInstaller -silent -executePrereqs -showProgress -waitforcompletion -force -responseFile /media/sf_Linux/11.2.0.4/database/rac_db_swonly.rsp

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 44381 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 8191 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-12-04_02-49-27PM. Please wait ...
[oracle@rac01:/media/sf_Linux/11.2.0.4/database]
$ cd /u01/app/oraInventory/logs/
[oracle@rac01:/u01/app/oraInventory/logs]
$ ls -lrt
total 1068
-rw-r-----. 1 grid   oinstall     47 Nov 30 19:40 time2014-11-30_07-40-41PM.log
-rw-r-----. 1 grid   oinstall      0 Nov 30 19:42 oraInstall2014-11-30_07-40-41PM.err
-rw-r-----. 1 grid   oinstall      0 Nov 30 19:45 oraInstall2014-11-30_07-40-41PM.err.rac02
-rw-r-----. 1 grid   oinstall    121 Nov 30 19:46 oraInstall2014-11-30_07-40-41PM.out.rac02
-rw-r-----. 1 grid   oinstall   7650 Nov 30 19:46 AttachHome2014-11-30_07-40-41PM.log.rac02
-rw-r-----. 1 grid   oinstall    348 Nov 30 19:46 silentInstall2014-11-30_07-40-41PM.log
-rw-r-----. 1 grid   oinstall   1968 Nov 30 19:46 oraInstall2014-11-30_07-40-41PM.out
-rw-r-----. 1 grid   oinstall 975962 Nov 30 19:46 installActions2014-11-30_07-40-41PM.log
-rw-r-----. 1 grid   oinstall      0 Nov 30 20:12 oraInstall2014-11-30_08-12-19PM.err
-rw-r-----. 1 grid   oinstall      0 Nov 30 20:12 oraInstall2014-11-30_08-12-19PM.err.rac02
-rw-r-----. 1 grid   oinstall   7357 Nov 30 20:13 UpdateNodeList2014-11-30_08-12-19PM.log.rac02
-rw-r-----. 1 grid   oinstall     33 Nov 30 20:13 oraInstall2014-11-30_08-12-19PM.out.rac02
-rw-r-----. 1 grid   oinstall  11305 Nov 30 20:13 UpdateNodeList2014-11-30_08-12-19PM.log
-rw-r-----. 1 grid   oinstall     33 Nov 30 20:13 oraInstall2014-11-30_08-12-19PM.out
-rw-r--r--. 1 oracle oinstall     47 Dec  4 14:49 time2014-12-04_02-49-27PM.log
-rw-rw----. 1 oracle oinstall  56317 Dec  4 14:49 installActions2014-12-04_02-49-27PM.log
-rw-r--r--. 1 oracle oinstall      0 Dec  4 14:49 oraInstall2014-12-04_02-49-27PM.out
-rw-r--r--. 1 oracle oinstall      0 Dec  4 14:49 oraInstall2014-12-04_02-49-27PM.err

[oracle@rac01:/u01/app/oraInventory/logs]
$ tail -20 installActions2014-12-04_02-49-27PM.log

INFO: Actual Value:libaio-devel(x86_64)-0.3.107-10.el6
INFO: -----------------------------------------------
INFO: *********************************************
INFO: Users With Same UID: This test checks that multiple users do not exist with user id as "0".
INFO: Severity:CRITICAL
INFO: OverallStatus:SUCCESSFUL
INFO: -----------------------------------------------
INFO: Verification Result for Node:rac01
WARNING: Result values are not available for this verification task
INFO: *********************************************
INFO: Root user consistency: This test checks the consistency of the primary group of the root user across the cluster nodes
INFO: Severity:IGNORABLE
INFO: OverallStatus:SUCCESSFUL
INFO: -----------------------------------------------
INFO: Verification Result for Node:rac01
WARNING: Result values are not available for this verification task
INFO: All forked task are completed at state prepInstall
INFO: Exit Status is 0
INFO: Shutdown Oracle Database 11g Release 2 Installer
INFO: Unloading Setup Driver

[oracle@rac01:/u01/app/oraInventory/logs]
$ grep -e '[[:upper:]]:' installActions2014-12-04_02-49-27PM.log|cut -d ":" -f1|sort -u

INFO
/tmp/OraInstall2014-12-04_02-49-27PM
WARNING
[oracle@rac01:/u01/app/oraInventory/logs]
$

EBS VMs explained

Wim Coekaerts - Thu, 2014-12-04 16:59
A great blog entry from the EBS team explaining the various Oracle VM appliances for EBS:

https://blogs.oracle.com/stevenChan/entry/e_business_suite_virtual_machines

Oracle Priority Support Infogram for 04-DEC-2014

Oracle Infogram - Thu, 2014-12-04 14:46

RAC
Oracle Database In-Memory on RAC - Part 2, from Oracle Database In-Memory.
Performance
From flashdba: awr-parser.sh – Script for parsing Oracle AWR Reports.
OVM
Oracle VM 3.2.9 Released, from Oracle's Virtualization Blog.
ZFS
New ZFS Videos, from Oracle EMEA Value-Added Distributor News.
Getting Down to the Metal
Announcing Oracle Server X5-2 and X5-2L, from Systems Technology Enablement for Partners (STEP).
From the same source: Now Available: Oracle FS1 Flash Storage System Implementation Exam.
VCA
From Wim Coekaerts Blog, SAP certification for Oracle's Virtual Compute Appliance X4-2 (VCA X4-2).
SOA
Best Practices for SOA Suite 11g to 12c Upgrade, from the SOA & BPM Partner Community Blog.
From the same source: SOA 12c demo system (12.1.3) hosted at Oracle Cloud – free for Oracle Partners.
WLS
Additional new material WebLogic Community, from WebLogic Partner Community EMEA.
Java
From the JCP Program Office: JSR Updates - Java EE 8 & Java SE 9.
ADF
From Archbeat: 2 Minute Tech Tip: Using Oracle ADF Libraries.
Analytics
Advisor Webcast: Getting Started with Essbase Aggregate Storage Option - ASO 101, from Business Analytics - Proactive Support.
EBS
From Oracle E-Business Suite Technology:
JRE Support Ends Earlier than JDK Support
EBS VMs: Appliances, Templates, and Assemblies Explained
November 2014 Updates to AD and TXK for EBS 12.2
From Oracle E-Business Suite Support Blog:
Webcast: Rapid Planning: Enabling Mass Updates to Demand Priorities and Background Processing
Webcast: Discrete Costing Functional Changes And Bug Fixes For 12.2.3 And 12.2.4
Considering Customizations with POR_CUSTOM_PKG for iProcurement Requisitions? Check this out!
Webcast: Get Proactive with Doc ID 432.1

Adaptive Case Management 12c and ADF Human Tasks

Andrejus Baranovski - Thu, 2014-12-04 14:08
I'm diving into a new topic - Adaptive Case Management 12c and ADF integration. This is the first post in the category, and there are more posts planned for the future. I strongly believe that ACM (Adaptive Case Management) makes a great extension to standard BPM, mainly because it allows you to define a loose process, without a strict order of steps. Process steps can be executed in a different order, depending on the requirements of the situation at a given time. I will explain how to implement an ADF Human Task for an ACM activity and share several tips on making it run in the BPM Workspace application.

This is how the sample application (HotelBookingProcessing_v1.zip) is constructed: there are two Human Tasks (AddHotelBooking and ValidateHoteBooking) and the HotelBookingProcessing case control:


The HotelBookingProcessing case is defined with the Hotel Booking Details data type (this type is based on an XSD schema and is defined as a Business Components variable - don't mix it up with ADF Business Components). You can think of it as the main data structure type for the case; it can be transferred into every case activity:


There are two stakeholders defined; this helps control who has access to each human task and case activity. The Customer Service Representative is supposed to add new hotel bookings, while the Financial Accountant can approve or reject them:


I have created the Human Task activity directly in the composite; it is not necessary to have a BPM process in order to define human tasks. It is important to set the Application Context property of the Human Task to OracleBPMProcessRolesApp; this will help later with security role configuration in BPM Workspace:


In order to register a human task with case management, we are given the option to promote the human task as a case activity. This allows the human task to be initiated from case management:


We can define input and output for the case activity, based on the same data type defined in the case. This allows data to be transferred from the case to the activity, and on to the underlying human task in our situation:


You can generate an ADF form for the case data; this form will be rendered in the BPM Workspace case UI. I'm going to look into the customisation options for this kind of form in future posts (the checkbox is set to generate an editable form):


This is how the case data form is rendered; out of the box there are options to save and reset the data for the case - Hotel Booking Details:


The human task form is generated in the same way as in 11g - no change here for 12c. You could auto-generate this form, but it produces a lot of code, and I would prefer to build a custom lightweight form instead:


Important hint - the auto-generated human task form will not render in the BPM Workspace window. You need to change the FRAME_BUSTING parameter generated in web.xml from differentOrigin to never. With the differentOrigin option the human task form doesn't render; in 11g the form was generated with the option set to never, and for some reason this was changed in 12c - not for the better:
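
For reference, after the change the context parameter in web.xml should look like this (using the standard ADF Faces parameter name):

<context-param>
  <param-name>oracle.adf.view.rich.security.FRAME_BUSTING</param-name>
  <param-value>never</param-value>
</context-param>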


With FRAME_BUSTING set to never, the human task form renders well:


The human task is started directly from the case activity - Add Hotel Booking from the available list of activities:


In the case activity log we can track when an activity was started, completed or modified. This is quite helpful for tracking activity history:


One of the main advantages: the user can decide the order of the activities, in contrast to a strict BPM process. Start the Validate Hotel Booking activity; this creates a new task for the Financial Accountant:


The activity was started, as we can see from the audit log:


This is a read-only human task, rendered in BPM Workspace - the Financial Accountant can approve or reject it:


The case can be closed and the hotel booking approved:

Presenting at #UKOUG_APPS14 (8th Dec Monday 4:30 PM) : EBS integration with Identity Management

Online Apps DBA - Thu, 2014-12-04 14:05
I am presenting the paper Integrating Oracle E-Business Suite with Identity & Access Management & Lessons Learned with Neha Mittal. The presentation is on 8th December (Monday) at 4:30 PM in Liverpool (UK), covering: an overview of Oracle Identity & Access Management; integration options including OAM (SSO), OIM (Provisioning & Reconciliation) & GRC (SoD); high-level lessons learned from our various [...]

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

Getting Started with Oracle Fusion Cloud Integrations

Angelo Santagata - Thu, 2014-12-04 12:32

Hey all,

If you're getting started with integrating your application with Oracle Fusion Cloud, then I wholeheartedly recommend you read the following resources before starting. Most of the below is specific to Oracle Sales Cloud because it has App Composer; however, much of it is also applicable to HCM, ERP and other Fusion products.

Some of these are a MUST read before you start integrating/coding/customizing :-) I've put them here in the order I think works for most people... kinda like a getting-started checklist.

I consider this a living blog entry, in that I'll be updating it on a regular basis, so make sure you check this location periodically.



Top 5 Fusion Integrations Must Reads 

1. Familiarise yourself with the Sales Cloud Documentation. Specifically :
    • Go through the "User" section, documents like the "Using Sales Cloud" book. If you're a techie like me you'll sit there and think, "Hey, this is functional, why do I need to read this?" - well, you do. Even as a technical person, reading through the various user documents like "Using Sales Cloud" as an end user helps you understand what the different concepts/topics are. You'll also understand things like the difference between a Prospect and a Sales Account, territories, assessments and much more. It's worth a quick read, but do make sure you have a functional consultant to hand so you're not building something which can be done by configuration.
    • Read through all the books in the "Extensibility" section. The only anomaly here is the "Business Card Scanner mobile App" document. It's a walkthrough of how to integrate Sales Cloud with a 3rd party service to do business card scanning with MAF... I'd leave that till last.
    • Peruse the Development section; it contains a number of example use cases, e.g. how to create a customer in R8 and how to call an outbound service. It's a good read.
2. Get an overview of the tasks you might do
    • Once you've done this, look at the "Tasks" section of the docs... Here the curriculum development folk have categorised some of the most common tasks and put in shortcuts to the documentation detailing how to do them, e.g. adding a field to Sales Cloud, calling a SOAP web service, etc.
3. Are you going to be customizing the SalesCloud User Interface?
    • Most integrations customize the Sales Cloud user interface. The customization could be as simple as adding a few fields to a standard object (like Opportunity), creating new objects (like MyOrder), adding validation, or adding external content to one or many pages.
    • If you're adding fields, make sure you read the "Introduction to SalesCloud Customizations" section.
    • If you will be adding validation or triggers, or calling web services from Sales Cloud, then make sure you read up on groovy scripting, and specifically the chapter on calling outbound SOAP web services from groovy.
    • Make sure you understand the difference between calling a SOAP Service from groovy and creating an outbound webservice call using object workflows
      • In a nutshell, calling a SOAP service from groovy is a synchronous call, while calling a SOAP service from an object workflow is a fire-and-forget asynchronous call (see the short groovy sketch after this list)
    • On the subject of groovy, be aware that in Sales Cloud you do not have access to the entire groovy language; only a number of groovy classes and methods are supported (whitelisting), and these are documented at the end of the book in Appendix A, Supported Groovy Classes and Methods
4. Are you going to be accessing data in Sales Cloud from an external app?
    • If you think you will be calling SOAP webservices in Sales Cloud then the "Getting started with WebServices" doc is a MUST read... It goes into detail on how to look up SOAP webservices in Fusion OER, how to create static proxies, how to query data and how to perform CRUD operations...
    • Get to know Oracle Fusion OER; it's a gold mine of information.......
5. Do you need your app to know who is calling it? 
    • Many integrations involve embedding a 3rd party web app into Oracle Sales Cloud as an iFrame, or pressing a button in Sales Cloud and calling the 3rd party app (either a UI or a webservice call). If you're doing this then you'll almost certainly need to pass a "token" to the 3rd party application so that it can call back to Sales Cloud with a key rather than a plain-text username/password combo. We call this key a JWT token and it's based on industry standards (http://jwt.io/). For starters, read my JWT Getting started blog entry and then use the links to read the core documentation

That covers the top 5 areas of integration. Now for a list of places where you can get even MORE useful information:

More Information sources

  1. Oracle Learning Centres Quick Webinars on SalesCloud Integration
    • I worked with Development to get this mini tutorial series done; it's excellent, but then I'm obviously not biased, eh ;-)
  2. R9 Simplified WebServices doc
    • This is a new document we recently completed on how to use the new R9 Simplified SOAP TCA services. Although the document is targeted at R9 developers, it covers many of the standard topics, like how to create a proxy, how to code a create operation etc. It even has some sample CRUD payloads which are really, really useful
  3. Oracle Fusion Developer Relations
    • Good friends of mine; they host a fantastic blog, YouTube channel and whitepapers for Fusion developers. Another gold mine of information covering customization, extensions and integration code.
  4. Oracle Fusion Developer Relations Youtube channel
    • Not content with an awesome blog, the Developer Relations folk even have a YouTube channel where they host a collection of short tutorials showing all sorts, such as "How to add a field to a page", "How to call a webservice" etc.
  5. Oracle Fusion Developer Relations Whitepapers
    • Whitepapers on topics including custom application development, ESS development, and Groovy and Expression Language.
  6. And finally there is my humble blog, where I try to blog on things which aren't documented anywhere else. If they are documented and are interesting I often link to them, mainly because I want to be able to find them myself :-)

That's it, folks!

If there are blog entries you'd like to see or specific how-tos, then feel free to contact me.

Angelo 


Debugging PeopleSoft Absence Management Forecast

Javier Delgado - Thu, 2014-12-04 12:02
Forecasting is one of the most useful PeopleSoft Absence Management functionalities. It allows users to know what the resulting balance is going to be when entering an absence. The alternative is to wait until the Global Payroll calendar group is calculated, which naturally is far from being an online calculation.

Although this is a handy functionality, the calculation process does not always return the expected results. For some specific needs, the system elements FCST ASOF DT, FCST BGN DT and FCST END DT may be needed. These elements are null for normal Global Payroll runs, so the formulas may behave differently in those runs than in the actual forecast execution. If you ever hit a calculation issue in the forecast process that cannot be solved by looking at the element definitions, you may be stuck.

When this type of issue is found in a normal Global Payroll execution, one handy option is to enable the Debug information and then review the Element Resolution Chain page. This page shows the step-by-step calculation of each element and is particularly helpful in identifying how an element is calculated.

Unfortunately, this information is not available in the standard forecast functionality. Luckily, it can be enabled using a tiny customisation.

In PeopleSoft HCM 9.1, the forecast functionality is executed from two different places:

DERIVED_GP.FCST_PB.FieldFormula - Abs_ForecastSetup function
FUNCLIB_GP_ABS.FCST_PB.FieldFormula - Abs_ForecastExec function

In both PeopleCode events, you will find a statement like this one:

SQLExec("INSERT INTO PS_GP_RUNCTL(OPRID, RUN_CNTL_ID, CAL_RUN_ID, TXN_ID, STRM_NUM, GROUP_LIST_ID, RUN_IDNT_IND, RUN_UNFREEZE_IND, RUN_CALC_IND, RUN_RECALC_ALL_IND, RUN_FREEZE_IND, SUSP_ACTIVE_IND, STOP_BULK_IND, RUN_FINAL_IND, RUN_CANCEL_IND, RUN_SUSPEND_IND, RUN_TRACE_OPTN, RUN_PHASE_OPTN, RUN_PHASE_STEP, IDNT_PGM_OPTN, NEXT_PGM, NEXT_STEP, NEXT_NUM, CANCEL_PGM_OPTN, NEXT_EMPLID, UPDATE_STATS_IND, LANGUAGE_CD, EXIT_POINT, SEQ_NUM5, UE_CHKPT_CH1, UE_CHKPT_CH2, UE_CHKPT_CH3, UE_CHKPT_DT1, UE_CHKPT_DT2, UE_CHKPT_DT3, UE_CHKPT_NUM1, UE_CHKPT_NUM2, UE_CHKPT_NUM3,PRC_NUM,OFF_CYCLE) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,%datein(:33),%datein(:34),%datein(:35),:36,:37,:38,:39,:40)", &OprID, &RunCntl_ID, &CalcRunId, &TxnID, 0, &SpaceFiller, "Y", "N", "Y", "N", "N", "N", &ApprByInd, "N", "N", "N", "N", &RunPhaseOptN, &RunPhaseStep, &SpaceFiller, &SpaceFiller, 0, 0, &SpaceFiller, &SpaceFiller, "N", "ENG", &SpaceFiller, 0, &SpaceFiller, &SpaceFiller, &SpaceFiller, "", "", "", 0, 0, 0, 0, "N");

You will notice that the RUN_TRACE_OPTN field is set to "N". If you use "A" instead as the trace option value, you will obtain the Element Resolution Chain:

SQLExec("INSERT INTO PS_GP_RUNCTL(OPRID, RUN_CNTL_ID, CAL_RUN_ID, TXN_ID, STRM_NUM, GROUP_LIST_ID, RUN_IDNT_IND, RUN_UNFREEZE_IND, RUN_CALC_IND, RUN_RECALC_ALL_IND, RUN_FREEZE_IND, SUSP_ACTIVE_IND, STOP_BULK_IND, RUN_FINAL_IND, RUN_CANCEL_IND, RUN_SUSPEND_IND, RUN_TRACE_OPTN, RUN_PHASE_OPTN, RUN_PHASE_STEP, IDNT_PGM_OPTN, NEXT_PGM, NEXT_STEP, NEXT_NUM, CANCEL_PGM_OPTN, NEXT_EMPLID, UPDATE_STATS_IND, LANGUAGE_CD, EXIT_POINT, SEQ_NUM5, UE_CHKPT_CH1, UE_CHKPT_CH2, UE_CHKPT_CH3, UE_CHKPT_DT1, UE_CHKPT_DT2, UE_CHKPT_DT3, UE_CHKPT_NUM1, UE_CHKPT_NUM2, UE_CHKPT_NUM3,PRC_NUM,OFF_CYCLE) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,%datein(:33),%datein(:34),%datein(:35),:36,:37,:38,:39,:40)", &OprID, &RunCntl_ID, &CalcRunId, &TxnID, 0, &SpaceFiller, "Y", "N", "Y", "N", "N", "N", &ApprByInd, "N", "N", "N", "A", &RunPhaseOptN, &RunPhaseStep, &SpaceFiller, &SpaceFiller, 0, 0, &SpaceFiller, &SpaceFiller, "N", "ENG", &SpaceFiller, 0, &SpaceFiller, &SpaceFiller, &SpaceFiller, "", "", "", 0, 0, 0, 0, "N");

By performing this change, you will notice that the GP_AUDIT_TBL table starts to be populated with the Element Resolution Chain information. However, it may still not be visible from the page itself, because some tables are only populated temporarily during the forecast execution. In order to enable access for the forecast runs, you will need to customise the GP_AUDIT_SEG_VW search record by adding the second branch of the UNION ALL (shown below) to the SQL definition:

SELECT DISTINCT A.CAL_RUN_ID 
 , A.EMPLID 
 , A.EMPL_RCD 
 , A.GP_PAYGROUP 
 , A.CAL_ID 
 , A.ORIG_CAL_RUN_ID 
 , B.RSLT_SEG_NUM 
 , A.FICT_CAL_ID 
 , A.FICT_CAL_RUN_ID 
 , A.FICT_RSLT_SEG_NUM 
 , B.RSLT_VER_NUM 
 , B.RSLT_REV_NUM 
 , B.SEG_BGN_DT 
 , B.SEG_END_DT 
  FROM PS_GP_AUDIT_TBL A 
  , PS_GP_PYE_SEG_STAT B 
 WHERE A.CAL_RUN_ID = B.CAL_RUN_ID 
   AND A.EMPLID = B.EMPLID 
   AND A.EMPL_RCD = B.EMPL_RCD 
   AND A.GP_PAYGROUP = B.GP_PAYGROUP 
   AND A.CAL_ID = B.CAL_ID 
  UNION ALL 
 SELECT DISTINCT A.CAL_RUN_ID 
 , A.EMPLID 
 , A.EMPL_RCD 
 , A.GP_PAYGROUP 
 , A.CAL_ID 
 , A.ORIG_CAL_RUN_ID 
 , A.RSLT_SEG_NUM 
 , A.FICT_CAL_ID 
 , A.FICT_CAL_RUN_ID 
 , A.FICT_RSLT_SEG_NUM 
 , 1 
 , 1 
 , NULL 
 , NULL 
  FROM PS_GP_AUDIT_TBL A 
 WHERE NOT EXISTS ( 
 SELECT 'X' 
  FROM PS_GP_PYE_SEG_STAT B 
 WHERE A.CAL_RUN_ID = B.CAL_RUN_ID 
   AND A.EMPLID = B.EMPLID 
   AND A.EMPL_RCD = B.EMPL_RCD 
   AND A.GP_PAYGROUP = B.GP_PAYGROUP 
   AND A.CAL_ID = B.CAL_ID)
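Once the view is in place, a quick way to confirm that trace rows are actually being written after a forecast run is a check along these lines (the EMPLID value is just a hypothetical example):

-- Confirm the Element Resolution Chain rows exist for the forecast run
SELECT CAL_RUN_ID, EMPLID, GP_PAYGROUP, CAL_ID
  FROM PS_GP_AUDIT_TBL
 WHERE EMPLID = 'KU0001';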

I hope you find this useful. Should you have any question or doubt, I will be happy to assist.

Note: Keep in mind that it is not a good idea to leave the Debug information enabled in Production environments, at least not permanently. The time needed to run a forecast calculation with this type of information is significantly higher than without it. So, if you do not want to hit performance issues, my recommendation is to store in a table a flag indicating whether the Element Resolution Chain for forecast should be enabled or not, as sketched below.
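A minimal sketch of that idea follows; the table and column names here are hypothetical, not delivered PeopleSoft objects, so adapt them to your site's naming standards:

-- Hypothetical one-row control table holding the forecast trace option
CREATE TABLE PS_XX_FCST_TRACE
( TRACE_FLAG VARCHAR2(1) NOT NULL );  -- 'A' = Element Resolution Chain on, 'N' = off

INSERT INTO PS_XX_FCST_TRACE VALUES ('N');

The customised PeopleCode would then read the flag, for instance with SQLExec("SELECT TRACE_FLAG FROM PS_XX_FCST_TRACE", &TraceFlag);, and bind &TraceFlag in place of the hard-coded RUN_TRACE_OPTN value, so the trace can be switched on and off without migrating code.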


What Does “Backup Restore Throttle Speed” Wait Mean?

Pythian Group - Thu, 2014-12-04 11:29

After migrating a 10g database to 11g, I asked the application team to start their tests in order to validate that everything was working as expected. I decided to keep an eye on OEM's Top Activity page while they were running their most important job. I already knew what kind of "colors" I would find, because I had checked its behavior in the former version. Suddenly, a strange kind of wait appeared on my screen: it was my first encounter with Backup Restore Throttle Speed.

 

OEM graph 2

I had never seen this wait before. It was listed in a user's session, so its name really confused me. No RMAN operations were running at that time. The FRA was almost empty. I checked Oracle's documentation and My Oracle Support, and found nothing but one Community post from 24-SEP-2013 with no conclusions. In the meantime, the job ended and I got confirmation that everything went well, even faster than in the old version. Weird, very weird. It was time to review the PL/SQL code.

After reading lots of lines, a function inside the package caught my attention:

Sleep (l_master_rec.QY_FIRST_WAIT_MIN * 60);

Since the job was using a log table to keep track of its execution, I was able to match the wait time with this function pretty quickly. This code was inside the function’s DDL:

for i in 1 .. trunc( seconds_to_sleep/600 )
loop
   sys.DBMS_BACKUP_RESTORE.SLEEP( 600 );
end loop;
sys.DBMS_BACKUP_RESTORE.SLEEP( seconds_to_sleep - trunc(seconds_to_sleep/600)*600 );

Finally I found the reason for this wait (and the explanation for its backup/restore-related name): DBMS_BACKUP_RESTORE.SLEEP. As described in MOS note "How to Suspend Code Execution In a PL/SQL Application (Doc ID 1296382.1)", the package was being used to pause the job's execution while waiting for another task to finish.
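If you want to reproduce the wait yourself, a quick sketch follows; it assumes you can execute the undocumented SYS.DBMS_BACKUP_RESTORE package, and the event name is the one I observed on 11g:

-- Session 1: pause for 60 seconds using the same call the job was making
exec sys.dbms_backup_restore.sleep(60);

-- Session 2: the sleeping session shows up under the misleading event name
select sid, event, seconds_in_wait
  from v$session
 where event = 'Backup Restore Throttle Speed';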

Lastly, it’s worth noting that OEM did not graph this wait on the 10g database but it was always there.

Categories: DBA Blogs

All You Need, and Ever Wanted to Know About the Dynamic Rolling Year

Rittman Mead Consulting - Thu, 2014-12-04 09:50

DRY_1

Among the many reporting requirements that have comprised my last few engagements has been what, from now on, I'll be referring to as the 'Dynamic Rolling Year'. That is, through the use of a dashboard prompt, the user is able to pick a year – month combination, and the chosen view, in this case a line graph, will dynamically display a year's worth of figures with the chosen year – month as the most current starting point. The picture above demonstrates this feature. You can see that we've chosen the '201212' year – month combination and, as a result, our view displays revenue data from 201112 through to 201212. Let's choose another combination to make sure things work properly and to demonstrate the functionality.

DRY_2

From the results in the picture above, it seems we've gotten our view to do as expected. We've chosen '201110' as our starting year – month combination and the view has changed to display the revenue trend back to '201010'. Now how do we do this? A quick web search will offer up myriad complex and time-consuming solutions which, while completely feasible, can be onerous to implement. While it was tempting to reap the benefits of someone else's perhaps hard-fought solution, I just knew there had to be an easier way. As a note of caution, I'll preface the next bit by saying that this only works on a dashboard, using a dashboard prompt in tandem with the target analysis. This solution was tested with a report prompt; however, the desired results were not obtained, even when 'hard coding' default values for our presentation variables. As well, there are a few assumptions I'm making about what kinds of columns you have in your subject area's time dimension, namely that you have some sort of Year and Month column within your hierarchy.

1. Establish Your Analysis and Prompt

DRY_3

I’ve brought in the Month (renamed it Year – Month) column from Sample App’s Sample Sales subject area along with the Revenue fact. For the sake of this exercise, we’re going to keep things fairly simple and straightforward. Our Month column exists in the ‘YYYY / MM’ format so we’re going have to dice it up a little bit to get it to where we need it to be. To get our dynamic rolling year to work, all we’re going to do is a little math. In a nutshell, we’re going to turn our month into an integer as ‘YYYYMM’ and simply subtract a hundred as a filter using the presentation variables that correspond to our new month column. To make the magic happen, bring in a dummy column from your time dimension, concatenate the year and month together, cast them as a char and then the whole statement as an integer (as seen in the following formula: CAST(substring(“Time”.”T02 Per Name Month”, 1, 4)||substring(“Time”.”T02 Per Name Month”, -2, 2) AS INT) ). Then, apply a ‘between’ filter to this column using two presentation variables.

DRY_4

Don’t worry that we’ve yet to establish them on our dashboard prompt as that will happen later. In the picture above, I’ve set the filter to be between two presentation variables entitled ‘YEARMONTH_VAR’. As you can see, I’ve subtracted a hundred as, in this case, my year – month concatenation exists in the ‘YYYYMM’ format. You may alter this number as needed. For example, a client already had this column established in their Time dimension, yet it was in the ‘YYYYMMM’ format with a leading zero before the month. In that case, we had to subtract 1000 in order to establish the rolling year. Moving on…. After you’ve set up the analysis, save it off and let’s build the dashboard prompt. The results tab will error out. This is normal. We’re going to be essentially doing the same thing here.

We need to get our analysis and prompt to work together (like they should) and so we’ll need to, once again, format the year – month column on the prompt the same way we did it on our analysis through the ‘prompt for column’ link. You can simply copy and paste the formula you used in the ‘Edit Formula’ column on your analysis into this screen. Don’t forget to assign a presentation variable to this column with whatever name you used on the filter in your analysis (in this case it was YEARMONTH_VAR).

DRY_6

DRY_7

DRY_8

2. Build the Dashboard

2014-12-04_10-04-56

Starting a new dashboard, bring in your prompt and analysis from the Shared Folder. In this exercise, I’ve placed the prompt below the analysis and have done some custom formatting. If you’d like to create this example exactly, then in addition to the default table view, you’ll need to create a line graph view that groups by our year – month column and uses the revenue column as the measure. And voila. Save and run the dashboard. Note that if you didn’t adopt a default value for your dashboard prompt, it will error out before you actually pick and apply a value from the prompt drop-down. Admittedly, the closer to a concatenated year-month (YYYYMM) column you have in your time dimension, the easier the implementation of this method will be. You could even set your default value to an appropriate server/repository variable so that your dashboard opens to the most current and / or a desired year-month combination. However, now that you’ve seen this, hopefully it’s sparked some inspiration into how you might apply this technique to other facets of your work and future reporting.

DRY_1

Categories: BI & Warehousing

Private or Public Training? (and a Discount Code)

Rittman Mead Consulting - Thu, 2014-12-04 08:22

Rittman Mead offers world class training that covers various Business Intelligence, Data Warehousing and Data Integration tools.

Rittman Mead currently offers two types of training, Public and Private – so what is the best option for your organization?

Both Private and Public options are staffed by the same highly qualified, experienced instructors. One thing to note is that our instructors aren't merely technology instructors; they are real-world consultants.

What does that mean for you? It means questions aren’t answered by the instructor flipping to a slide in a deck and reading off the best practice. You’ll probably hear an answer more like “I was on site with a company a few weeks ago who had the same issue. Let me tell you how we helped that client!”

Let’s start with Public courses. Our offerings include an OBIEE Bootcamp, an ODI Bootcamp, Exalytics for System Administrators, RPD Modeling and our all new OBIEE Front End Development, Design and Best Practices.

Why Public? If you have four or fewer attendees, the public classes are typically the most cost effective choice. You’ll attend the course along with other individuals from various public and private sector organizations. There’s plenty of time to learn the concepts, get your hands dirty by completing labs as well as getting all of your questions answered by our highly skilled instructors.

Now, our Private courses. There are primarily two reasons a company would choose private training: cost and the ability to customize.

Cost: Our Private Training is billed based on the instructor's time, not the number of attendees. If you have five or more individuals, it is most likely going to be the more cost-effective option for us to come to you. Also, you'll be covering travel costs for one instructor rather than for your entire team.

Customization: Once you select the course that is best for your team, we'll set up a call with a trainer to review your team's experience level and your main goals. Based on that conversation, we can tweak the syllabus to make sure the areas where you need extra focus are thoroughly covered.

One thing to note is we do offer a few courses (OWB 11gR2 & ODI 11g) privately that aren’t offered publicly.

We also offer a Custom Course Builder (http://www.rittmanmead.com/training/customobiee11gtraining/) which allows you to select modules out of various training classes and build a customized course for your team – this flexibility is a great reason to select Private Training.

Some of our clients love having their team get away for the week and focus strictly on training while others prefer to have training on site at their offices with the flexibility to cut away for important meetings or calls. Whichever program works best for your team, we have got you covered!

For more information, reach out to us at info@rittmanmead.com. If you’ve made it all the way to the end of this blog post, congratulations! As a way of saying thanks, you can mention the code “RMTEN” to save 10% on your next private or public course booking!

Categories: BI & Warehousing

Influence execution plan without adding hints

Oracle in Action - Thu, 2014-12-04 04:54

RSS content

We often encounter situations where a SQL statement runs optimally when it is hinted but sub-optimally otherwise. We can use hints to get the desired plan, but it is not desirable to use hints in production code, as the use of hints involves extra code that must be managed, checked, and controlled with every Oracle patch or upgrade. Moreover, hints freeze the execution plan, so that you will not be able to benefit from a possibly better plan in the future.

So how can we make such queries use the optimal plan, until a provably better plan comes along, without adding hints?

Well, the answer is to use SQL Plan Management, which ensures that you get the desired plan, one which will evolve over time as the optimizer discovers better ones.

To demonstrate the procedure, I have created two tables, CUSTOMER and PRODUCT, having CUST_ID and PROD_ID respectively as their primary keys. The PROD_ID column in the CUSTOMER table is a foreign key and is indexed.

SQL>conn hr/hr

drop table customer purge;
drop table product purge;

create table product(prod_id number primary key, prod_name char(100));
create table customer(cust_id number primary key, cust_name char(100), prod_id number references product(prod_id));
create index cust_idx on customer(prod_id);

insert into product select rownum, 'prod'||rownum from all_objects;
insert into customer select rownum, 'cust'||rownum, prod_id from product;
update customer set prod_id = 1000 where prod_id > 1000;

exec dbms_stats.gather_table_stats (USER, 'customer', cascade=> true);
exec dbms_stats.gather_table_stats (USER, 'product', cascade=> true);

– First, let's have a look at the undesirable plan, which does not use the index on the PROD_ID column of the CUSTOMER table.

SQL>conn / as sysdba
    alter system flush shared_pool;

    conn hr/hr

    variable prod_id number
    exec :prod_id := 1000

    select cust_name, prod_name
    from customer c, product p
    where c.prod_id = p.prod_id
    and c.prod_id = :prod_id;

    select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
SQL_ID  b257apghf1a8h, child number 0
-------------------------------------
select cust_name, prod_name from customer c, product p where c.prod_id
= p.prod_id and c.prod_id = :prod_id

Plan hash value: 3134146364

----------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |   412 (100)|          |
|   1 |  NESTED LOOPS                |              | 88734 |    17M|   412   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| PRODUCT      |     1 |   106 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0010600 |     1 |       |     1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS FULL          | CUSTOMER     | 88734 |  9098K|   410   (1)| 00:00:01 |
----------------------------------------------------------------------

– Load the undesirable plan into a baseline, to establish a SQL plan baseline for this query into which the desired plan will be loaded later

SQL>variable cnt number
    exec :cnt := dbms_spm.load_plans_from_cursor_cache(sql_id => 'b257apghf1a8h');

    col sql_text for a35 word_wrapped
    col enabled for a15

    select  sql_text, sql_handle, plan_name, enabled 
    from     dba_sql_plan_baselines
    where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE            PLAN_NAME                       ENABLED
----------------------------------- --------------------- ------------------------------- -------
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b  YES

– Disable the undesirable plan so that it will not be used

SQL>variable cnt number
    exec :cnt := dbms_spm.alter_sql_plan_baseline (-
    SQL_HANDLE => 'SQL_7d3369334b24a117',-
    PLAN_NAME => 'SQL_PLAN_7ucv96d5k988rfe19664b',-
    ATTRIBUTE_NAME => 'enabled',-
    ATTRIBUTE_VALUE => 'NO');

    col sql_text for a35 word_wrapped
    col enabled for a15

    select  sql_text, sql_handle, plan_name, enabled 
    from   dba_sql_plan_baselines
     where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE            PLAN_NAME                       ENABLED
----------------------------------- --------------------- ------------------------------- -------
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b  NO

– Now we use a hint in the above SQL to generate the optimal plan, which uses the index on the PROD_ID column of the CUSTOMER table

SQL>conn hr/hr

variable prod_id number
exec :prod_id := 1000

select /*+ index(c)*/ cust_name, prod_name
from customer c, product p
where c.prod_id = p.prod_id
and c.prod_id = :prod_id;

select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
SQL_ID  5x2y12dzacv7w, child number 0
-------------------------------------
select /*+ index(c)*/ cust_name, prod_name from customer c, product p
where c.prod_id = p.prod_id and c.prod_id = :prod_id

Plan hash value: 4263155932

-----------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |              |       |       |  1618 (100)|          |
|   1 |  NESTED LOOPS                        |              | 88734 |    17M|  1618   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID        | PRODUCT      |     1 |   106 |    2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN                 | SYS_C0010600 |     1 |       |    1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER     | 88734 |  9098K|  1616   (1)| 00:00:01 |
|   5 |    INDEX FULL SCAN                   | SYS_C0010601 | 89769 |       |  169   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

– Now we will load the hinted plan into the baseline –
– Note that we have the SQL_ID and PLAN_HASH_VALUE of the hinted statement and the SQL_HANDLE of the unhinted statement, i.e. we are associating the hinted plan with the unhinted statement.

SQL>variable cnt number
exec :cnt := dbms_spm.load_plans_from_cursor_cache(-
sql_id => '5x2y12dzacv7w',  -
plan_hash_value => 4263155932, -
sql_handle => 'SQL_7d3369334b24a117');

– Verify that there are now two plans loaded for that SQL statement:

  •  The unhinted sub-optimal plan is disabled
  •  The hinted optimal plan, which even though it belongs to a "different query" can work with the earlier unhinted query (the SQL_HANDLE is the same), is enabled
SQL>col sql_text for a35 word_wrapped
col enabled for a15

select  sql_text, sql_handle, plan_name, enabled from dba_sql_plan_baselines
where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE            PLAN_NAME                       ENABLED
----------------------------------- --------------------- ------------------------------- -------
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rea320380  YES

select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b  NO

– Verify that the hinted plan is used even though we do not use a hint in the query –
– The note confirms that the baseline has been used for this statement

SQL>variable prod_id number
exec :prod_id := 1000

select cust_name, prod_name
from customer c, product p
where c.prod_id = p.prod_id
and c.prod_id = :prod_id;

select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
SQL_ID  b257apghf1a8h, child number 0
-------------------------------------
select cust_name, prod_name from customer c, product p where c.prod_id
= p.prod_id and c.prod_id = :prod_id

Plan hash value: 4263155932

-----------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |              |       |       |  1618 (100)|          |
|   1 |  NESTED LOOPS                        |              | 88734 |    17M|  1618   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID        | PRODUCT      |     1 |   106 |    2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN                 | SYS_C0010600 |     1 |       |    1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER     | 88734 |  9098K|  1616   (1)| 00:00:01 |
|   5 |    INDEX FULL SCAN                   | SYS_C0010601 | 89769 |       |  169   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("P"."PROD_ID"=:PROD_ID)
4 - filter("C"."PROD_ID"=:PROD_ID)

Note
-----
- SQL plan baseline SQL_PLAN_7ucv96d5k988rea320380 used for this statement

With this baseline solution, you need not employ permanent hints in the production code, and hence there are no upgrade issues. Moreover, the plan will evolve with time as the optimizer discovers better ones; you can even trigger that verification yourself, as sketched below.
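For example, here is a minimal sketch of kicking off that verification manually with DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE, using the SQL_HANDLE from above; any unaccepted plans that verify as better performers get accepted into the baseline:

SQL>variable report clob
    exec :report := dbms_spm.evolve_sql_plan_baseline( -
    sql_handle => 'SQL_7d3369334b24a117', -
    verify => 'YES', -
    commit => 'YES');

    print report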

Note: Using this method, you can swap in a plan only for a query which is fundamentally the same, i.e. you should be able to get the desirable plan by adding hints, modifying an optimizer setting, playing around with statistics etc., and then associate the sub-optimally performing statement with the optimal plan.

I hope this post was useful.

Your comments and suggestions are always welcome!

References:
http://www.oracle.com/technetwork/issue-archive/2014/14-jul/o44asktom-2196080.html

Copyright © ORACLE IN ACTION [Influence execution plan without adding hints], All Right Reserved. 2014.

The post Influence execution plan without adding hints appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Updated (XML) Content Section

Marco Gralike - Wed, 2014-12-03 15:34
“Once per year I try to update the “XML Content” page that, in principle, should…

SSL with PKCS12 truststore

Laurent Schneider - Wed, 2014-12-03 14:31

Many, many moons ago I vaguely remember having a similar issue with Java keystores/truststores and Microsoft certificate stores.

When you start using SSL for your listener, you could potentially face a large number of issues among your toolsets. In my opinion, the most disastrous one is that you cannot monitor your database with Enterprise Manager over a TCPS listener. I tried with 10g, 11g and 12c, and I seriously doubt it will come in 13c, even though a dozen ERs have been filed. The best workaround I found is to use a separate listener to monitor your database, and to monitor the SSL listener itself with IPC.

Today I had to deal with a driver from DataDirect, which finally works perfectly fine with SSL, but the challenge was knowing what to put in the keystore and truststore…

In SQLNET, you use the single-sign-on wallet (cwallet.sso) created by OWM/orapki or ldap.

In Java, by default, you use a Java keystore that you generate with keytool (or you even use the default cacerts). There is only a lexical difference between a keystore and a truststore; they could both be the same file. As documented in the JSSE Reference:
A truststore is a keystore that is used when making decisions about what to trust

But for some other tools the Java keytool won't do the trick, if the truststore cannot be of type JKS.

One common type is PKCS12. This is the ewallet.p12 you get with the Wallet Manager.

E.g. from Java:


  -Djavax.net.ssl.trustStore=ewallet.p12
  -Djavax.net.ssl.trustStoreType=PKCS12
  -Djavax.net.ssl.trustStorePassword=***

To use single-sign-on, use trustStoreType=SSO as I wrote there : jdbc-ssl

Other common formats are X509 base64 (PEM) or DER files. The openssl command line allows you easy conversion:


openssl pkcs12 -in ewallet.p12 -out file.pem
openssl x509 -outform der -in file.pem -out file.der

or in Windows Explorer, just click on your p12 file and then click on the certificate to export it from the certificate store.

Is Kuali Guilty of “Open Washing”?

Michael Feldstein - Wed, 2014-12-03 14:05

Phil and I don’t write a whole lot about Student Information Systems (SISs) and the larger Enterprise Resource Planning (ERP) suites that they belong to, not because they’re unimportant but because the stories about them often don’t fit with our particular focus in educational technology. Plus, if I’m being completely honest, I find them to be mostly boring. But the story of what’s going on with Kuali, the open source ERP system for higher education, is interesting and relevant for a couple of reasons. First, while we hear a lot of concern from folks about the costs of textbooks and LMSs, those expenses pale in comparison to what colleges and universities pay in SIS licensing and support costs. A decent sized university can easily pay a million dollars or more for their SIS, and a big system like UC or CUNY can pay tens or even hundreds of millions. One of the selling points of Kuali has been to lower costs, thus freeing up a lot of that money to meet other needs that are more closely related to the core university mission. So what happens in the SIS space has a potentially huge budgetary impact on teaching and learning. Second, there’s been a lot of conversation about what “open” means in the context of education and, more specifically, when that label is being abused. I am ambivalent about the term “open washing” because it encourages people to flatten various flavors of openness into “real” and “fake” open when, in fact, there are often legitimately different kinds of open with different value (and values). That said, there certainly exist cases where use of the term “open” stretches beyond the bounds of even charitable interpretation.

When Kuali, a Foundation-developed project released under an open source license, decided to switch its model to a company-developed product where most but not all of its code would be released under a different open source license, did they commit an act of “open washing”? Finding the answer to that question turns out to be a lot more complicated than you might expect. As I started to dig into it, I found myself getting deep into details of both open source licensing and cloud computing, since the code that KualiCo will be withholding is related to cloud offerings. It turned out to be way too long and complex to deal with both of those topics in one post, so I am primarily going to follow up on Phil’s earlier posts on licensing here and save the cloud computing part for a future post.

Let’s start by recapping the situation:

  • The Kuali Foundation is a non-profit entity formed by a group of schools that wanted to pool their resources to build a suite of open source components that together could make up an ERP system suitable for colleges and universities.
  • One of the first modules, Kuali Financials, has been successfully installed and run by a small number of schools which have reported dramatic cost savings, both in the obvious licensing but also in the amount of work it took to get the system running.
  • Other elements of the suite have been more problematic. For example, the registrar module, Kuali Student, has been plagued with multi-year delays and members dropping out of the project.
  • Kuali code has historically been released under an open source license called the “Educational Community License (ECL)”, which is a slight variation of the Apache license.
  • The Kuali Foundation announced that ownership of the code would be transferred to a new commercial entity, KualiCo. Their reason for doing so is to accelerate the development of the suite, although they have not been entirely clear about what they think the causes of their development speed problem are and why moving to a commercial structure will solve the problems.
  • KualiCo intends to release most, but not all, of future code under a different open source license, the GNU Affero General Public License (AGPL).
  • Some of the new code to be developed by KualiCo will not be released as open source. So far the only piece they have named in this regard is their multitenant code.
  • These changes happened fairly quickly, driven by the board, without a lot of community discussion.

So is this “open washing”? Is it a betrayal of trust of an open source community, or deceptively claiming to be open and proving to be otherwise? In one sense, the answer to that question depends on the prior expectations of the community members. I would argue that the biggest and clearest act of open washing may have occurred over a decade ago with the creation of the term “community source.” Back in the earlier days of Sakai—another “community source” project—I made a habit of asking participants what they thought “community source” meant. Very often, the answer I got was something along the lines of, “It’s open source, but with more emphasis on building a community.” But that’s not what Brad Wheeler meant at all when he popularized the term. As I have discussed in an earlier post, “community source” was intended to be a consortial model of development with an open source license tacked onto the end product. Far from emphasizing community, it was explicitly designed to maximize the control and autonomy of the executive decision-makers from the consortial partners—and to keep a lid on the decision-making power of other community participants. Remember the motto: “If you’ve got the gold, then you make the rules.” Community source, as defined by those who coined the term, is a consortium with a license. But community source was always marketed as an improvement on open source. “It’s the pub between the cathedral and the bazaar where all the real work gets done,” Brad liked to say.

Different “community source” projects followed the centralized, financially-driven model to different degrees. Kuali, for example, was always explicitly and deliberately more centralized in its decision-making processes than Sakai. As an outside observer of Kuali’s culture and decision-making processes, and as a close reader of Brad Wheeler’s articles and speeches about community source, I can’t say that the move to KualiCo surprised me terribly. Nor can I say that it is inconsistent with what Brad has said all along about how community source works and what it is for. The consortial leaders, whose membership I assume was roughly defined by their financial contributions to the consortium, made a decision that supports what they believe is in their interest. All code that was previously released under an open source license will remain under an open source license. Presumably the Kuali Foundation and KualiCo will be clear going forward about which consortial contributions go toward creating functionality that will be open source or private source in the future. I am not privy to the internal politics of the foundation and therefore am not in a position to say whether some of those who brought the gold were left out of the rule-making process. To the degree that Brad’s golden rule was followed, the move to KualiCo is consistent with the clearly stated (if craftily marketed) philosophy of community source.

The question of how much these changes practically affect open source development of Kuali is a more complicated one to answer. It is worth stating that another tenet of community source was that it was specifically intended to be commercial-friendly, meaning that the consortia tried to avoid licensing or other practices that discouraged the development of a healthy and diverse ecosystem of support vendors. (Remember, community source frames problems with the software ecosystem as procurement problems. As such, its architects are concerned with maintaining a robust range of support contracting options.) Here the balancing act is more delicate. On the one hand, to the degree that the changes give KualiCo advantages over potential competitors, Kuali will be violating the commercial-friendly principle of community source. On the other hand, to the degree that the changes do not give KualiCo advantages over potential competitors, it’s not clear why one would think that KualiCo will be a viable and strong enough company to move development faster than the way it has been until now.

The first thing to point out here is that, while KualiCo has only said so far that it will keep the multitenant code private source, there is nothing to prevent them from keeping more code private in the future. Instructure Canvas, which started out with only the multitenant code as private source, currently has the following list of features that are not in the open source distribution:

  • Multi-tenancy extensions
  • Mobile integration
  • Proprietary SIS integrations
  • Migration tools for commercial LMSs
  • Other minor customizations that only apply to our hosted environment
  • Chat Tool
  • Attendance Tool (Roll Call)

I don’t think there is a clear and specific number of private source features that marks the dividing line between good faith open source practices and “open washing”; nor am I arguing that Instructure is open washing here. Rather, my point is that, once you make the decision to be almost-all-but-not-completely open source, you place your first foot at the top of a slippery slope. By saying that they are comfortable withholding code on any feature for the purposes of making their business viable, KualiCo’s leadership opens the door to private sourcing as much of the code as they need to in order to maintain their competitive advantage.

Then there’s the whole rather arcane but important question about the change in open source licenses. Unlike the ECL license that Kuali has used until now, AGPL is “viral,” meaning that anybody who combines AGPL-licensed code with other code must release that other code under the AGPL as well. Anybody, that is, except for the copyright holder. Open source licenses are copyright licenses. If KualiCo decides to combine  open source code to which they own the copyright with private source code, they don’t have to release the private source code under the AGPL. But if a competitor, NOTKualiCo, comes along and combines KualiCo’s AGPL-licensed code with their own proprietary code, then NOTKualiCo has to release their own code under the AGPL. This creates two theoretical problems for NOTKualiCo. First, NOTKualiCo does not have the option of making the code they develop a proprietary advantage over their competitors. They have to give it away. Second, while NOTKualiCo has to share its code with KualiCo, KualiCo doesn’t have the same obligation to NOTKualiCo. So theoretically, it would be very hard for any company to compete on product differentiators when they are building upon AGPL-licensed code owned by another company.

I say “theoretically” because, in practice, it is much more complicated than that. First, there is the question of what it means to “combine” code. The various GPL licenses recognize that some software is designed to work with other software “at arm’s length” and therefore should not be subject to the viral clause. For example, it is permissible under the license to run AGPL applications on a Microsoft Windows or Apple Mac OS X operating system without requiring that those operating systems also be released under the GPL. Some code combinations fall clearly into this category, while others fall clearly into the category of running as part of the original open source program and therefore subject to the viral clause of the GPL. But there’s a vast area in the murky middle. Do tools that use APIs specifically designed for integration fall under the viral clause? It depends on the details of how they integrate as well as who you ask. It doesn’t help that the language used to qualify what counts as “combining” in Gnu’s documentation uses terms that are specific to the C programming language.

KualiCo has said that they will specifically withhold multitenant capabilities from future open source distributions. If competitors developed their own multitenant capabilities, would they be obliged to release that code under the AGPL? Would such code be “combining” with Kuali, or could it be sufficiently arm’s-length that it could be private source? It depends on how it’s developed. Since KualiCo’s CEO is the former CTO of Instructure, let’s assume for the sake of argument that Kuali’s multitenant capabilities will be developed similarly to Canvas’. Zach Wily, Instructure’s Chief Architect, described their multitenant situation to me as follows:

[O]ur code is open-source, but only with single-tenancy. The trick there is that most of our multi-tenancy code is actually in the open source code already! Things like using global identifiers to refer to an object (instead of tenant-local identifiers), database sharding, etc, are all in the code. It’s only a couple relatively thin libraries that help manage it all that are kept as closed source. So really, the open-source version of Canvas is more like a multi-tenant app that is only convenient to run with a single tenant, rather than Canvas being a single-tenant app that we shim to be multi-tenant.

The good news from NOTKualiCo’s (or NOTInstructureCo’s) perspective is that it doesn’t sound like there’s an enormous amount of development required to duplicate that multitenant functionality. Instructure has not gone through contortions to make the development of multitenant code harder for competitors; I will assume here that KualiCo will follow a similar practice. The bad news is that the code would probably have to be released under the AGPL, since it’s a set of libraries that are intended to run as a part of Kuali. That’s far from definite, and it would probably require legal and technical experts evaluating the details to come up with a strong conclusion. But it certainly seems consistent with the guidance provided by the Gnu Foundation.

OK, so how much of a practical difference does this make for NOTKualiCo to be able to compete with KualiCo? Probably not a huge amount, for several reasons. First, we’re not talking about an enormous amount of code here; nor is it likely to be highly differentiated. But also, NOTKualiCo owns the copyright on the libraries that they release. While anybody can adopt them under the AGPL, if KualiCo wanted to incorporate any of NOTKualiCo’s code, then the viral provision would have to apply to KualiCo. The practical net effect is that KualiCo would almost certainly never use NOTKualiCo’s code. A third competitor—call them ALSONOTKualiCo—could come in and use NOTKualiCo’s code without incurring any obligations beyond those that they already assumed by adopting KualiCo’s AGPL code, so there’s a disadvantage there for NOTKualiCo. But overall, I don’t think that withholding multitenant code from KualiCo’s open source releases—assuming that it’s done the way Instructure has done it—is a decisive barrier to entry for competitors. Unfortunately, that may just mean that KualiCo will end up having to withhold other code in order to maintain a sustainable advantage.

So overall, is Kuali guilty of “open washing” or not? I hope this post has helped make clear why I don’t love that term. The answer is complicated and subjective. I believe that “community source” was an overall marketing effort that entailed some open washing, but I also believe that (a) Brad has been pretty clear about what he really meant if you listened closely enough, and (b) not every project that called itself community source followed Brad’s tenets to the same degree or in the same way. I believe that KualiCo’s change in license and withholding of code are a violation of the particular flavor of openness that community source promised, but I’m not sure how much of a practical difference that makes to the degree that one cares about the “commercial friendliness” of the project. Would I participate in work with the Kuali Foundation today if I were determined to work only on projects that are committed to open source principles and methods? No I would not. But I would have given you the same answer a year ago. So, after making you wade through all of those arcane details, I’m sorry to say that my answer to the question of whether Kuali is guilty of open washing is, “I’m not sure that I know what the question means, and I’m not sure how much the answer matters.”

The post Is Kuali Guilty of “Open Washing”? appeared first on e-Literate.

Where Is Talent When Sizing Up The Success of a Business?

Linda Fishman Hoyle - Wed, 2014-12-03 13:51

I like this picture of an animated Mark Hurd (Oracle's CEO) with the holiday baubles hanging in the background. I like it even more that he’s talking to a group of HR executives about the importance of talent in an organization. “I want the best people, I don’t want to lose them, and I want them completely engaged,” said Hurd.

At the London, England conference, Hurd explained the hurdles companies face in the search for good talent. One is an aging workforce that will affect attrition rates when employees start to retire. The other is new college grads who have higher expectations of their career paths and reward systems. Both of these demographic forces will require “HR as a real-time application, on a par with financials” according to Hurd.

Companies that take advantage of the internet, mobile devices, and social networks will be in a much better position than those with outdated systems. Technology, and specifically cloud applications, can help companies standardize best practices across geographies and keep up with the latest features and functionality.

However, the SaaS-based modern HR tools won’t help any company differentiate itself in the marketplace without an earnest pledge to use them to help attract, retain, and motivate the best people for the jobs that need to be done. According to Michael Hickins, who wrote the article in Forbes entitled Oracle CEO Mark Hurd: Make HR A Real-Time Application, a company’s future can hinge on how well it runs the race for talent.

Innovative Work Life Solutions Help Employees Balance Their Professional and Personal Lives

Linda Fishman Hoyle - Wed, 2014-12-03 13:47

A Guest Post by Mark Bennett, Oracle Director of HCM Product Strategy (pictured left)

The best human capital management applications have evolved from just automating standard human resource (HR) processes to become strategic tools for managing talent and analyzing workforce capabilities.

Now, a new wave of innovation is taking HR evolution a step further with work/life applications that are designed to help employees better balance their professional and personal lives. In the following interview Oracle Director of HCM Product Strategy Mark Bennett explains how Oracle’s new Oracle Work Life Solutions Cloud offerings can enhance creativity, boost productivity, and help individuals become more effective inside and outside of work. 

Q: How do you define work/life solutions?

A: They’re applications and features of products that help individuals better understand various facets of their professional and personal lives. Many people want to find a better balance between these two areas, but they’re finding they don’t always have the information they need to make that happen.

Q: Can you give us some examples of work/life solutions?

A: Oracle Work Life Solutions Cloud offers an extensible set of solutions including My Reputation, a feature that helps employees better understand how they are seen within their social media communities. It can also help them develop their professional brand as it aligns with that of their company to help with their career development and advancement. Another example is My Wellness, a feature that leverages wearable devices that enable people to monitor their activity levels throughout the day and their sleep patterns at night. So for example, people with a low energy level may decide to get more sleep to see if that makes a difference.

Offerings that are based around competition, such as the My Competitions feature of Oracle Human Capital Management Cloud, can help organizations create incentives for employees. Software developers may engage in friendly competition to see who fixes the most bugs in a week. Or someone might organize a contest for workgroup members to come up with ideas about new features for a product redesign project.

In the future, work/life solutions might include taking advantage of the video capabilities in smartphones to provide an easy way to upload clips to ask a question of coworkers, share new research findings, or even to link to short profiles of a person's appointment contacts, helping individuals connect on a more personal level and gain a better understanding of the context of meetings.

Q: How do organizations benefit from making work/life solutions available to employees?

A: There are three primary ways. First, individuals can better understand their work habits, so, for example, they don’t push themselves so hard they risk burnout or don’t pay enough attention to other facets of their lives.

Second, these solutions send a powerful signal to the workforce by indicating a company’s interest in their employees that goes beyond their professional contributions. Many employees will appreciate that their company is paying attention to their personal as well as professional needs.

Third, there may be a dollars-and-cents payoff. If the information collected from one of the solutions encourages more employees to participate in wellness programs, healthcare costs may start to decrease through rebates from carriers or from reduced claims.

Q: What should employers keep in mind to assure work/life solutions succeed?

A: It’s essential that companies make it clear they are not monitoring any of this data but instead are simply providing tools to help employees gain insights into their lives to help them decide whether they want to make adjustments based on their personal goals. The key is giving employees the tools they need to make their own decisions about their own lives.

Q: How do we learn more about these solutions?

A: Resources for Work Life Solutions include: