Feed aggregator

A nice Descending Index Range Scan

Jeff Kemp - Thu, 2017-03-09 20:06

I’ve been aware of some of the ways that Oracle database optimises index accesses for queries, but I’m also aware that you have to test each critical query to ensure that the expected optimisations are taking effect.

I had this simple query, the requirement of which is to get the “previous status” for a record from a journal table. Since the journal table records all inserts, updates and deletes, and this query is called immediately after an update, to get the previous status we need to query the journal for the record immediately prior to the most recent one. Since the “version_id” column is incremented on each update, we can use it as the sort order.


select status_code
from (select rownum rn, status_code
      from   xxtim_requests$jn jn
      where  jn.trq_id = :trq_id
      order by version_id desc)
where rn = 2;

The xxtim_requests$jn table has an ordinary index on (trq_id, version_id). This query is embedded in some PL/SQL with an INTO clause – so it will only fetch one record (plus a 2nd fetch to detect TOO_MANY_ROWS which we know won’t happen).
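For illustration, the surrounding PL/SQL would look roughly like this – a minimal sketch, with a procedure name and parameter names that are assumptions rather than the original code:

create or replace procedure get_prev_status (
    p_trq_id in  xxtim_requests$jn.trq_id%type,
    p_status out xxtim_requests$jn.status_code%type
) as
begin
    -- SELECT ... INTO fetches one row; a second matching row would raise TOO_MANY_ROWS
    select status_code
    into   p_status
    from  (select rownum rn, status_code
           from   xxtim_requests$jn jn
           where  jn.trq_id = p_trq_id
           order by version_id desc)
    where  rn = 2;
end get_prev_status;
/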

The table is relatively small (in dev it only has 6K records, and production data volumes are expected to grow very slowly) but regardless, I was pleased to find that (at least, in Oracle 12.1) it uses a nice optimisation so that it not only uses the index, it is choosing to use a Descending scan on it – which means it avoids a SORT operation, and should very quickly return the 2nd record that we desire.

[Image: index_scan_range_descending.PNG – execution plan showing the descending index range scan]
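(If the image is not visible: reconstructed from the description rather than from the screenshot itself, the plan is roughly of this shape – note that there is no SORT step:)

--------------------------------------------------------------------
| Id  | Operation                       | Name                     |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                          |
|*  1 |  VIEW                           |                          |
|   2 |   COUNT                         |                          |
|   3 |    TABLE ACCESS BY INDEX ROWID  | XXTIM_REQUESTS$JN        |
|*  4 |     INDEX RANGE SCAN DESCENDING | (trq_id, version_id) idx |
--------------------------------------------------------------------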

It looks quite similar in effect to the “COUNT STOPKEY” optimisation you can see on “ROWNUM=1” queries. If this was a much larger table and this query needed to be faster or was being run more frequently, I’d probably consider appending status_code to the index in order to avoid the table access. In this case, however, I don’t think it’s necessary.
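For reference, the covering index suggested above would be something like this (index name assumed):

create index xxtim_requests$jn_ix2
    on xxtim_requests$jn (trq_id, version_id, status_code);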


Filed under: SQL Tagged: oracle12c, SQL

Quiz Night

Jonathan Lewis - Thu, 2017-03-09 16:34

The following is a straight, continuous, untouched, cut-n-paste from an SQL*Plus session on 12.1.0.2. How come the update doesn’t execute in parallel – noting that parallel DML has been enabled and the tablescan to identify rows to be updated does execute in parallel?


SQL> desc t1
 Name                                                                            Null?    Type
 ------------------------------------------------------------------------------- -------- ------------------------------------------------------
 OWNER                                                                           NOT NULL VARCHAR2(128)
 OBJECT_NAME                                                                     NOT NULL VARCHAR2(128)
 SUBOBJECT_NAME                                                                           VARCHAR2(128)
 OBJECT_ID                                                                       NOT NULL NUMBER
 DATA_OBJECT_ID                                                                           NUMBER
 OBJECT_TYPE                                                                              VARCHAR2(23)
 CREATED                                                                         NOT NULL DATE
 LAST_DDL_TIME                                                                   NOT NULL DATE
 TIMESTAMP                                                                                VARCHAR2(19)
 STATUS                                                                                   VARCHAR2(7)
 TEMPORARY                                                                                VARCHAR2(1)
 GENERATED                                                                                VARCHAR2(1)
 SECONDARY                                                                                VARCHAR2(1)
 NAMESPACE                                                                       NOT NULL NUMBER
 EDITION_NAME                                                                             VARCHAR2(128)
 SHARING                                                                                  VARCHAR2(13)
 EDITIONABLE                                                                              VARCHAR2(1)
 ORACLE_MAINTAINED                                                                        VARCHAR2(1)

SQL> select * from t1 minus select * from all_objects;

OWNER           OBJECT_NAME          SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE             CREATED   LAST_DDL_
--------------- -------------------- ---------------------- ---------- -------------- ----------------------- --------- ---------
TIMESTAMP           STATUS  T G S       NAMESPACE EDITION_NAME         SHARING       E O
------------------- ------- - - - --------------- -------------------- ------------- - -
TEST_USER       T1                                              159331         159331 TABLE                   09-MAR-17 09-MAR-17
2017-03-09:22:16:36 VALID   N N N               1                      NONE            N


1 row selected.

SQL> alter session force parallel dml;

Session altered.

SQL> set serveroutput off
SQL> update t1 set object_name = lower(object_name) where data_object_id is null;

78324 rows updated.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  b16abyv8p2790, child number 0
-------------------------------------
update t1 set object_name = lower(object_name) where data_object_id is
null

Plan hash value: 121765358

---------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |          |       |       |    26 (100)|          |        |      |            |
|   1 |  UPDATE               | T1       |       |       |            |          |        |      |            |
|   2 |   PX COORDINATOR      |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)| :TQ10000 | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR |          | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  5 |      TABLE ACCESS FULL| T1       | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access(:Z>=:Z AND :Z<=:Z)
       filter("DATA_OBJECT_ID" IS NULL)

Note
-----
   - Degree of Parallelism is 8 because of table property
   - PDML disabled because single fragment or non partitioned table used


29 rows selected.

SQL> select * from v$pq_tqstat;

DFO_NUMBER      TQ_ID SERVER_TYPE       NUM_ROWS      BYTES  OPEN_TIME AVG_LATENCY      WAITS   TIMEOUTS PROCESS         INSTANCE     CON_ID
---------- ---------- --------------- ---------- ---------- ---------- ----------- ---------- ---------- --------------- -------- ----------
         1          0 Producer              8997     363737 ##########           0         14          0 P004                   1          0
                                            9721     409075 ##########           0         12          0 P007                   1          0
                                            9774     408591 ##########           0         12          0 P005                   1          0
                                            9844     396816 ##########           0         12          0 P003                   1          0
                                            9965     403926 ##########           0         13          0 P006                   1          0
                                            9727     388829 ##########           0         12          0 P002                   1          0
                                            9951     399162 ##########           0         14          0 P001                   1          0
                                           10345     408987 ##########           0         13          0 P000                   1          0
                      Consumer             78324    3179123 ##########           0          0          0 QC                     1          0



9 rows selected.

If you want to see the fully parallel plan, it would look like this (after running the query above against v$pq_tqstat I executed one statement that I’m not showing before carrying on with the statements below):


SQL> update t1 set object_name = lower(object_name) where data_object_id is null;

78324 rows updated.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  b16abyv8p2790, child number 0
-------------------------------------
update t1 set object_name = lower(object_name) where data_object_id is
null

Plan hash value: 3991856572

---------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |          |       |       |    26 (100)|          |        |      |            |
|   1 |  PX COORDINATOR       |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM) | :TQ10000 | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    UPDATE             | T1       |       |       |            |          |  Q1,00 | PCWP |            |
|   4 |     PX BLOCK ITERATOR |          | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|*  5 |      TABLE ACCESS FULL| T1       | 78324 |  2141K|    26   (0)| 00:00:01 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access(:Z>=:Z AND :Z<=:Z)
       filter("DATA_OBJECT_ID" IS NULL)

Note
-----
   - Degree of Parallelism is 8 because of table property


28 rows selected.

SQL> select object_name, object_type from user_objects;

OBJECT_NAME          OBJECT_TYPE
-------------------- -----------------------
T1                   TABLE

1 row selected.

Answer coming some time tomorrow.


Oracle 12cR2, RAC, Cloud, ACFS, PDB thin clones and asmadmin

Yann Neuhaus - Thu, 2017-03-09 15:21

In the Oracle Public Cloud, fast provisioning gets all its meaning when creating a RAC database service: in one hour you can get an operational, highly available, multitenant database. You can even create it in Data Guard for Disaster Recovery. Now, Oracle is pushing ACFS to store the datafiles rather than direct ASM, especially in multitenant, because a great ACFS feature enables thin cloning: CREATE PLUGGABLE DATABASE AS SNAPSHOT COPY. However, I encountered an error when I tried it for the first time.

TDE keystore

SQL> create pluggable database pdb2 from pdb1 snapshot copy;
create pluggable database pdb2 from pdb1 snapshot copy
*
ERROR at line 1:
ORA-28357: password required to open the wallet

Oh yes, in the cloud all tablespaces are encrypted. In 12.2 we can put the keystore password in the command:

ORA-17517

SQL> create pluggable database pdb2 from pdb1 snapshot copy keystore identified by "Ach1z0#d" ;
 
create pluggable database pdb2 from pdb1 snapshot copy keystore identified by "Ach1z0#d"
*
ERROR at line 1:
ORA-65169: error encountered while attempting to copy file
/u02/app/oracle/oradata/CDB1/49FC0C40FCF62C05E053BAF6C40A9DD6/datafile/o1_mf_users_dcr220sd_.dbf
ORA-17517: Database cloning using storage snapshot failed on file
8:/u02/app/oracle/oradata/CDB1/49FC0C40FCF62C05E053BAF6C40A9DD6/datafile/o1_mf_users_dcr220sd_.dbf

Here we are. The call to the storage snapshot feature has failed. Usually the errors coming from OS calls are accompanied by additional information, but not here.

alert.log and trace

In alert.log, the error is displayed with reference to some other trace files:

2017-03-05 16:24:38.935000 +00:00
create pluggable database pdb2 from pdb1 snapshot copy keystore identified by *
AUDSYS.AUD$UNIFIED (SQL_TEXT) - CLOB populated
WARNING: Detected that PDB needs to import keys from source. PDB can only open in restricted mode until import.
2017-03-05 16:24:40.447000 +00:00
Errors in file /u01/app/oracle/diag/rdbms/cdb1/cdb11/trace/cdb11_p000_8910.trc:
Errors in file /u01/app/oracle/diag/rdbms/cdb1/cdb11/trace/cdb11_p002_8918.trc:
Errors in file /u01/app/oracle/diag/rdbms/cdb1/cdb11/trace/cdb11_p001_8914.trc:
Errors in file /u01/app/oracle/diag/rdbms/cdb1/cdb11/trace/cdb11_p003_8922.trc:
**************************************************************
Undo Create of Pluggable Database PDB2 with pdb id - 4.
**************************************************************
ORA-65169 signalled during: create pluggable database pdb2 from pdb1 snapshot copy keystore identified by * ...

And those trace files have the following information:
ksfdsscre_clone: create snapshot failed error(-1) errmsg(OS dependent failure) voltag(49FF372094256196E053BAF6C40AEB9D) parent_voltag() mntbuf(/u02)

This is not very helpful by itself. We see the snapshot name (voltag) and the parent name (parent_voltag). The create call failed with return code -1, and you might guess that the underlying error is EPERM, ‘Operation not permitted’. What I did to be sure was to try to create the snapshot myself:

[oracle@rac1 cdb11]$ acfsutil snap create -w 49FFA9651D1D58D5E053BAF6C40AF81C /u02
acfsutil snap create: CLSU-00100: operating system function: ioctl failed with error data: 1
acfsutil snap create: CLSU-00101: operating system error message: Operation not permitted
acfsutil snap create: CLSU-00103: error location: OI_0
acfsutil snap create: ACFS-03046: unable to perform snapshot operation on /u02

EPERM

This is clearer, and I also strace’d it to see where the error comes from:

open("/u02", O_RDONLY) = 41
ioctl(41, RTC_UIE_ON, 0x7fff17ae17a0) = 0
ioctl(41, 0xffffffffc1287021, 0x7fff17ae0e90) = -1 EPERM (Operation not permitted)

I’m running that with the oracle user, as the instance does when creating a PDB:
uid=1001(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dba),1003(racoper),1004(asmdba)

grid

When connecting as grid, I am able to create the snapshot:

[grid@rac1 ~]$ acfsutil snap create -w 49FFA9651D1D58D5E053BAF6C40AF81C /u02
acfsutil snap create: Snapshot operation is complete.
[grid@rac1 ~]$ acfsutil snap delete 49FFA9651D1D58D5E053BAF6C40AF81C /u02
acfsutil snap delete: Snapshot operation is complete.

Grid has the following permissions:
uid=1000(grid) gid=1001(oinstall) groups=1001(oinstall),1003(racoper),1004(asmdba),1005(asmoper),1006(asmadmin)

asmadmin

This is what /etc/group looks like:

opc:x:54323:
oinstall:x:1001:
dba:x:1002:oracle
racoper:x:1003:oracle,grid
asmdba:x:1004:oracle,grid
asmoper:x:1005:grid
asmadmin:x:1006:grid

This is what the Oracle Public Cloud defines at RAC DBaaS service creation, and asmadmin is not mentioned in the documentation.

So, to solve (or work around) the issue, I’ve added oracle to the asmadmin group:

asmadmin:x:1006:grid,oracle

and now I’m able to create a snapshot when logged in as oracle:

[oracle@rac1 ~]$ acfsutil snap create -w 49FFA9651D1D58D5E053BAF6C40AF81C /u02
acfsutil snap create: Snapshot operation is complete.
[oracle@rac1 ~]$ acfsutil snap delete 49FFA9651D1D58D5E053BAF6C40AF81C /u02
acfsutil snap delete: Snapshot operation is complete.
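As a side note, instead of editing /etc/group by hand, the same change can be made with usermod:

# usermod -a -G asmadmin oracle

Either way, the new group membership is only picked up by newly started processes, which relates to the restart described next.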

restart

I thought that restarting the instance would be sufficient, but it was not: I had to restart the cluster. This is also something easy in the Oracle Public Cloud:

[Image: CaptureRestartOPCRAC – the cloud console dialog used to restart the database service]

A simple click restarts the first node and then, once it is up again, restarts the second node. A rolling reboot ensures that the service is always up.
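Outside of the cloud console, the equivalent rolling restart could presumably be done with the usual Clusterware commands, one node at a time, waiting for each node to rejoin before moving on (a sketch, run as root on each node):

# crsctl stop crs
# crsctl start crs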

Thin clone

Here it is. The instance is now able to create a snapshot.

Connected to:
Oracle Database 12c EE Extreme Perf Release 12.2.0.1.0 - 64bit Production
 
SQL> set echo on
SQL> create pluggable database pdb2 from pdb1 snapshot copy keystore identified by "Ach1z0#d" ;
Pluggable database created.
 
Elapsed: 00:00:30.36

So what?

In my opinion, a configuration that stores CDB datafiles on ACFS should give the user running the database the right to create snapshots. The cloud interface is very simple, but the technology behind it is complex. The consequence of this gap is that using the cloud is easy when everything goes as expected, but any exception can bring us into troubleshooting.

 

The article Oracle 12cR2, RAC, Cloud, ACFS, PDB thin clones and asmadmin appeared first on Blog dbi services.

Deploying an ElasticSearch cluster at Oracle Cloud

Marcelo Ochoa - Thu, 2017-03-09 15:21
Continuing with my previous post about how to deploy a Docker Swarm cluster at Oracle Cloud, the idea now is to deploy, as an example, an Elasticsearch cluster with these characteristics:

  • One node working as master
  • Two nodes working as data nodes
  • One node working as ingest node

A complete set of the scripts used in this post is available on GitHub.
Swarm nodes preparation

First we will tag our cluster of five nodes with some labels to control our ES cluster allocation.
To do that, use docker-machine to submit the commands to the Swarm cluster; remember oc4 and oc5 are the Swarm-master-capable nodes:
$ eval $(docker-machine env oc5)
[oc5] $ docker node update --label-add type=es_master oc5
[oc5] $ docker node update --label-add type=es_master oc4
[oc5] $ docker node update --label-add type=es_data oc3
[oc5] $ docker node update --label-add type=es_data oc2
[oc5] $ docker node update --label-add type=es_ingest oc1
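To double-check that the labels were applied as expected, we can inspect a node; for example:

[oc5] $ docker node inspect oc5 --format '{{ .Spec.Labels }}'
map[type:es_master]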
In my tests the official Elasticsearch 5.0.0 image did not work well at recognizing the primary host name on Swarm nodes, so I decided to use a custom ES image. We can build this image on each node with:
$ eval $(docker-machine env oc5)
[oc5]$ git clone https://github.com/marcelo-ochoa/docker.git
[oc5]$ cd docker/es
[oc5]$ docker build -t "elasticsearch/swarm:5.0.0" .
Step 1/6 : FROM elasticsearch:5.0.0
5.0.0: Pulling from library/elasticsearch
386a066cd84a: Pull complete
.....
Step 6/6 : CMD elasticsearch
 ---> Running in a6dd5fb17cf8
 ---> 2f1c7bfe6c67
Removing intermediate container a6dd5fb17cf8
Successfully built 2f1c7bfe6c67
[oc5]$ eval $(docker-machine env oc4)
[oc4]$ docker build -t "elasticsearch/swarm:5.0.0" .
.....
[oc1]$ docker build -t "elasticsearch/swarm:5.0.0" .
Once we have the modified image built on each node of the Swarm cluster, we can start deploying Swarm services. Before that, we create a dedicated private interconnect network for our ES cluster; it is made attachable so that other containers (the visualizer and Cerebro, below) can join it:
[oc1]$ eval $(docker-machine env oc5)
[oc5]$ docker network create -d overlay --attachable --subnet=192.168.0.0/24 es_cluster
Once we have the network, a Swarm visualizer can be started on the oc4 or oc5 master nodes:
[oc5]$ eval $(docker-machine env oc4)
[oc4]$ docker run -d \
--name viz \
-p 8080:8080 \
--net es_cluster \
-v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer:latest
To avoid opening another port on the oc4 node, we can simply connect to the Swarm visualizer using an SSH tunnel, for example:
ssh -i /home/mochoa/Documents/Scotas/ubnt -L8080:localhost:8080 ubuntu@oc4
then access http://localhost:8080/; here is a sample output:

OK, we are ready to start deploying our ES cluster; first up, the ES master:
[oc4]$ eval $(docker-machine env oc5)
[oc5]$ docker service create --network es_cluster --name es_master \
    --constraint 'node.labels.type == es_master' --replicas=1 \
    --publish 9200:9200/tcp --env ES_JAVA_OPTS="-Xms1g -Xmx1g" \
    elasticsearch/swarm:5.0.0 \
    -E cluster.name="ESCookBook" -E node.master=true -E node.data=false \
    -E discovery.zen.ping.unicast.hosts=es_master
[oc5]$ docker service ls
ID            NAME       MODE        REPLICAS  IMAGE
o4x15kklu520  es_master  replicated  1/1       elasticsearch/swarm:5.0.0
[oc5]$ docker service ps es_master
ID            NAME         IMAGE                      NODE  DESIRED STATE  CURRENT STATE           ERROR  PORTS
n4a8tibaqxpw  es_master.1  elasticsearch/swarm:5.0.0  oc5   Running        Running 17 seconds ago

Because es_master was started on the oc5 node, we can see its log output with:
[oc5]$ docker ps
CONTAINER ID        IMAGE                       COMMAND                  CREATED             STATUS              PORTS                NAMES
ad8a2ce40f68        elasticsearch/swarm:5.0.0   "/docker-entrypoin..."   2 minutes ago       Up 2 minutes        9200/tcp, 9300/tcp   es_master.1.n4a8tibaqxpwwlcrjt33ywqry
[oc5]$ docker logs ad8a2ce40f68
[2017-03-09T19:23:15,427][INFO ][o.e.n.Node               ] [ad8a2ce40f68] initializing ...
.....
[2017-03-09T19:23:21,812][INFO ][o.e.n.Node               ] [ad8a2ce40f68] started
[2017-03-09T19:23:21,900][INFO ][o.e.g.GatewayService     ] [ad8a2ce40f68] recovered [0] indices into cluster_state
now adding two data nodes and one ingest node:
[oc5]$ docker service create --network es_cluster --name es_data \
    --constraint 'node.labels.type == es_data' --replicas=2 \
    --env ES_JAVA_OPTS="-Xms1g -Xmx1g" elasticsearch/swarm:5.0.0 \
    -E cluster.name="ESCookBook" -E node.master=false -E node.data=true \
    -E discovery.zen.ping.unicast.hosts=es_master
[oc5]$ docker service create --network es_cluster --name es_ingest \
    --constraint 'node.labels.type == es_ingest' --replicas=1 \
    --env ES_JAVA_OPTS="-Xms1g -Xmx1g" elasticsearch/swarm:5.0.0 \
    -E cluster.name="ESCookBook" -E node.master=false -E node.data=false -E node.ingest=true \
    -E discovery.zen.ping.unicast.hosts=es_master
[oc5]$ docker service ls
ID            NAME       MODE        REPLICAS  IMAGE
o4x15kklu520  es_master  replicated  1/1       elasticsearch/swarm:5.0.0
ue10tw8c64be  es_ingest  replicated  1/1       elasticsearch/swarm:5.0.0
w9cpobrxvay1  es_data    replicated  2/2       elasticsearch/swarm:5.0.0
[oc5]$ docker service ps es_data
ID            NAME       IMAGE                      NODE  DESIRED STATE  CURRENT STATE           ERROR  PORTS
q18eq708g692  es_data.1  elasticsearch/swarm:5.0.0  oc2   Running        Running 25 seconds ago        
x1ijojty4nrp  es_data.2  elasticsearch/swarm:5.0.0  oc3   Running        Running 25 seconds ago
[oc5]$ docker service ps es_ingest
ID            NAME         IMAGE                      NODE  DESIRED STATE  CURRENT STATE           ERROR  PORTS
vcskf9lkd4xr  es_ingest.1  elasticsearch/swarm:5.0.0  oc1   Running        Running 19 seconds ago
        
Visually it looks like this:

We then connect Cerebro to the ES cluster using:
[oc5]$ eval $(docker-machine env oc4)
[oc4]$ docker run -d \
  -p 9000:9000 \
  --net es_cluster \
  --env JAVA_OPTS="-Djava.net.preferIPv4Stack=true" \
  --name cerebro yannart/cerebro:latest
Again, access to the Cerebro console at http://localhost:9000/ is through an SSH tunnel; here is the output:
If you take a look above, we are connecting to ES on host 192.168.0.3 even though the master node has the IP 192.168.0.4; this is because the routing mesh of the Swarm cluster publishes our port on a specific address and then routes transparently to the target IP.
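Because port 9200 is published through the routing mesh, the same port answers on every Swarm node, not only the one running es_master. Assuming the port is reachable from your workstation, a quick sanity check could be:

$ curl "http://$(docker-machine ip oc1):9200/_cluster/health?pretty"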
So let's play with scaling our cluster up and down:
[oc4]$ eval $(docker-machine env oc5)
[oc5]$ docker service scale es_data=4
es_data scaled to 4
[oc5]$ docker service scale es_ingest=2
es_ingest scaled to 2
[oc5]$ docker service ls
ID            NAME       MODE        REPLICAS  IMAGE
o4x15kklu520  es_master  replicated  1/1       elasticsearch/swarm:5.0.0
ue10tw8c64be  es_ingest  replicated  2/2       elasticsearch/swarm:5.0.0
w9cpobrxvay1  es_data    replicated  4/4       elasticsearch/swarm:5.0.0
visualize it:

That's great – our ES cluster now has 7 nodes :)
Final conclusion: the purpose of this post is to show that Docker Swarm works perfectly at Oracle Cloud. You can easily manage the cluster remotely using docker-machine, and you can deploy a big Elasticsearch cluster across a bunch of Compute services. Don't worry about hardware crashes or about scaling your cluster up and down: if you have enough nodes, your ES indices will first turn yellow, and once the ES recovery process starts the cluster will move your shards onto the remaining capacity, and that's all.
I'll show scaling up and down in another post, stay tuned.
 

TekTalk Webinar: Defining Your Upgrade and Cloud Strategies: A New Path for Oracle WebCenter

WebCenter Team - Thu, 2017-03-09 13:48

TekTalk Webinar: Defining Your Upgrade and Cloud Strategies: A New Path for Oracle WebCenter
March 29th, 2017 | 1 PM EST

Too many companies still rely on legacy systems, or other outdated platforms, that are not capable of supporting the new demands of modern business.

Many companies are looking to embrace Cloud architectures to support traditional on premise applications. Devising strategies to upgrade existing applications and/or moving workloads to the Cloud can be daunting.

TekStream brings tribal knowledge of BEA WebLogic Portal, BEA AquaLogic User Interaction (ALUI), WebCenter Interaction (WCI), Plumtree, Stellent, Universal Content Management (UCM), Optika, Imaging and Process Management (IPM), and FatWire into one centrally manageable platform, Oracle WebCenter.

With many Oracle customers seeking upgrade strategies to Oracle WebCenter 12c, or devising new cloud architecture to reduce their infrastructure costs, TekStream provides you with options to ensure your success. As part of this TekTalk, we will address:
  1. How do I know if I need to upgrade?
  2. What are the upgrade options for 11g, 10g, and older to 12c?
  3. What are the options for on-prem upgrade?
  4. What are the options for upgrade and move from on-prem to Cloud?
  5. What are the options for moving from on-prem to Cloud and what tool sets are required?

Passing Values Between Pages in Oracle Application Builder Cloud Service

Shay Shmeltzer - Thu, 2017-03-09 12:50

A common use case for applications that have multiple pages is passing values between pages. For example you might want to pick up a specific record or value in one page and then use that as a parameter for a query in another page.

In the February release of Oracle Application Builder Cloud Service, as part of the extension hook points that we provide, we added support for shared resources. These are JavaScript libraries you can add to your application - and that can be used across your app.

In the demo below I show you how you can use the built-in sample template for a shared resource to define a variable, and then how that variable is exposed in various places in the product through the expression builder allowing you to set its value in one page and use that value in another one.

Check it out:

Categories: Development


Join Elimination

Jonathan Lewis - Thu, 2017-03-09 12:39

A question has just appeared on OTN describing a problem where code that works in 11g doesn’t work in 12c (exact versions not specified). The code in question is a C-based wrapper for some SQL, and the problem is a buffer overflow. The query supplied is as follows:


select T1.C1 from T1, T2 where T1.C1 = T2.D1;

The problem is that this works in 11g where the receiving (C) variable is declared as

char myBuffer [31];

but it doesn’t work in 12c unless the receiving variable is declared as:

char myBuffer [51];

There’s an important bit of background information that might be giving us a clue about what’s happened (although what I’m about to describe isn’t actually the problem unless the SQL provided is a simplified version of the problem SQL that is expected to display the problem). Column C1 is defined as char(30) and column D1 is defined as char(50). Here’s some sample code showing why you might need a buffer of 50+1 bytes to hold something that ought to be 30+1 bytes long. (This may be nothing to do with the anomaly described in the original posting – it’s just something I thought of when I first saw the question.)


rem     Script:         join_elimination_oddity.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Mar 2017

create table t1(
        c30     char(30) primary key
);

create table t2(
        d50     char(50) references t1
);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

explain plan for
select
        t1.c30
from
        t1, t2
where
        t1.c30 = t2.d50
;

select * from table(dbms_xplan.display(null,null,'projection'));

So we’re selecting c30 – the 30 byte character column – from t1; what do we actually get ? Here’s the plan with the projection:


--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    52 |     1   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T2   |     1 |    52 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("T2"."D50" IS NOT NULL)

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "T2"."D50"[CHARACTER,50]

Table t1 has been eliminated and the projected column is the “equivalent” column from t2 – which is too long for the expected output. To work around this problem you can disable join elimination, either by parameter (_optimizer_join_elimination_enabled=false) or by hinting /*+ no_eliminate_join(t1) */, in which case the plan (with my data) became a nested loop join from t2 to t1 with column c30 projected as expected.
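For reference, the hinted version of the demo query looks like this:

select  /*+ no_eliminate_join(t1) */
        t1.c30
from
        t1, t2
where
        t1.c30 = t2.d50
;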

Footnote:

Two things to note about my demonstration

  • If you’re going to create a referential integrity constraint between columns they do need to be of exactly the same type.
  • This extremely simple case demonstrates the problem in 11.2.0.4 as well as 12.1.0.2. Possibly a more complex query could be produced where (thanks to limitations in query transformations) 11g doesn’t spot the option for join elimination while 12c does; alternatively, a very simple two-column example in 11g won’t do join elimination while a two-column example in 12.2 can (though it doesn’t always) – so upgrading to 12.2 MIGHT cause more people to see this anomaly appearing.

 


jQuery, Security and Web Services - Oh My!

Scott Spendolini - Thu, 2017-03-09 11:33

It's going to be a hectic couple of weeks for me, as I get ready to head to Utah this weekend for the annual UTOUG Training Days conference next week.  I love Salt Lake City, and the UTOUG conference is just the right size - not too large, but large enough that most of the rooms are full of attendees.

This year, I've got three slots, each as different as the next:

jQuery & APEX Primer
This session is aimed at the APEX developer who has just never had the time to get into the details of jQuery.  It starts with an overview of the basics, and then proceeds to demonstrate these concepts using a simple HTML page.  After that, it will show some more practical examples of how jQuery can work in an APEX application.

Secure Your APEX Applications with APEX-SERT
Security is as important as ever, and this session will show you how APEX-SERT - a free, open source tool - can be integrated into your development process.  Once installed, APEX-SERT is instantly available to any and all workspace developers.  Evaluations can also be scheduled to run daily, so your application is constantly being checked for potential threats.

GET POST ORDS JSON: Web Services for APEX Decoded
Lastly, web services are also one of the things that APEX developers may not have a lot of experience with.  They are becoming more and more critical in modern web development, and it's a matter of when you'll need to learn them, not if.  This session covers the basics, then walks through how to take a standard APEX form and modify it to use web services instead of the built-in DML processes.

If that wasn't enough, I'll also be a part of the APEX panel on Tuesday - but will likely have to bail out a bit early to make my flights back home.

The full schedule can be found here: http://www.utoug.org/Schedule

See you in SLC!

Ten Signs Your Boss Sees You As A Threat

OracleApps Epicenter - Thu, 2017-03-09 10:17
Every person has a gift, or an ability that sets them apart from the crowd. Good managers recognize this and try to use it to their advantage. Great managers try to surround themselves with talent...they recognize that with that much talent in their corner, they can't fail. Bad managers feel threatened and try to discredit […]
Categories: APPS Blogs

Accelerating Your ODI Implementation, Rittman Mead Style

Rittman Mead Consulting - Thu, 2017-03-09 10:01
Introduction

Over the years, at Rittman Mead, we've built up quite a collection of tooling for ODI. We have accelerators, scripts, reports, templates and even entire frameworks at our disposal for when the right use case arises. Many of these tools exploit the power of the excellent ODI SDK to automate tasks that would otherwise be a chore to perform manually. Tasks like topology creation, model automation, code migration and variable creation.

In this blog post, I'm going to give you a demo of our latest addition, a tool that allows you to programmatically create ODI mappings (and a few other tricks).

So you may be thinking, isn't that already possible using the ODI SDK? And you'd be right, it most definitely is. There are many examples out there that show you how it's done, but they all have one thing in common: they create a fairly simple mapping with, relatively speaking, quite a lot of code, and are only useful for creating that one mapping.

And herein lies the obvious question: why would you create a mapping using the ODI SDK, when it's quicker to use ODI Studio?

And the obvious answer is...you wouldn't, unless you were trying to automate the creation of multiple mappings using metadata.

This is a reasonable approach using the raw ODI SDK, the typical use case being the automation of your source to stage mappings. These mappings tend to be simple 1 to 1 mappings, the low hanging fruit of automation if you like. The problem arises, though, when you want to automate the creation of a variety of more complex mappings: you run the risk of spending more time writing the automation code than you would actually save through the automation itself. The point of diminishing returns can creep up pretty quickly.

The principle, however, is sound. Automate as much as possible by leveraging metadata and free up your ODI Developers to tackle the more complex stuff.

All Aboard the Rittman Mead Metadata Train!

What would be really nice is something more succinct, more elegant, something that allows us to create any mapping, with minimal code and fuss.

Something that will allow us to further accelerate...

  • Migrating to ODI from other ETL products
  • Greenfield ODI Projects
  • Day to Day ODI Development work

..all powered by juicy metadata.

These were the design goals for our latest tool. To meet these goals, we created a mini-mapping-language on top of the ODI SDK. This mini-mapping-language abstracts away the SDK's complexities, while, at the same time, retaining its inherent power. We call this mini mapping language OdiDsl (Oracle Data Integrator Domain Specific Language), catchy heh?!

OdiDsl

OdiDsl is written in Groovy and looks something like this...

/*
 * OdiDsl to create a SCD2 dimension load mapping.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
            .join('DEPT_JOBS', ['JOBS'], [join_condition: "EMPLOYEES.JOB_ID = JOBS.JOB_ID" ])
            .connect("D_EMPLOYEE", [
                                        [ attr: "employee_id", key_indicator: true ],
                                        [ attr: "eff_from_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "eff_to_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "current_flag", expression: 1, execute_on_hint: "TARGET"],
                                        [ attr: "surr_key", expression: ":RM_PROJECT.D_EMPLOYEE_SEQ_NEXTVAL", execute_on_hint: "TARGET"],
                                   ])
        .commit()
        .validate()

The above code will create the following, fully functional, mapping in ODI 12c (sorry 11g).

It should be fairly easy to eyeball the code and reconcile it with the above mapping image. We can see that we are specifying our datastores, selecting the EMPLOYEES datastore, adding a filter, a couple of joins and then connecting to our target. OdiDsl has been designed in such a way that it mimics the flow based style of ODI 12c's mappings by chaining components onto one another.

Creating a Mapping Using OdiDsl

Let's walk through the above code, starting with just the datastores, adding the rest as we go along...

Datastores

We start by creating the mapping with mapping.create( <project>, <folder>, <mapping name>). We then chain the .datastores(), .commit() and .validate() methods onto it using the "dot" notation. The .datastores() method is the only method you can chain directly onto mapping.create() as it's a requirement to add some datastores before you start building up the mapping. The .commit() method persists the mapping in the repository and the .validate() method calls ODI's validation routine on the mapping to check if all is ok.

/*
 * OdiDsl to create a mapping with 4 datastores.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .commit()
        .validate()

When we execute this code it returns the following to the console. You can see that the mapping has been dropped/created and that ODI has some validation warnings for us.

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component EMPLOYEES has no input or output connections.
  WARNING: Mapping component DEPARTMENTS has no input or output connections.
  WARNING: Mapping component JOBS has no input or output connections.
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.

And here is the mapping in ODI - well, it's a start at least...

Starting the Flow with a Filter

Before we can start building up the rest of the mapping we need to select a starting datastore to chain off, you've got to start somewhere right? For that, we call .select("EMPLOYEES"), which is a bit like clicking and selecting the component in ODI Studio. The .filter() method is then chained onto it, passing in the filter name and some configuration, in this case, the actual filter condition.

/*
 * OdiDsl to create a mapping with 4 datastores and a filter.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ]) 
        .commit()
        .validate()

We now have an error in the validation results. This is expected as our filter doesn't connect to anything downstream yet.

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component DEPARTMENTS has no input or output connections.
  WARNING: Mapping component JOBS has no input or output connections.
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.
  ERROR: Mapping component NAME_FILTER must have a connection for output connector point OUTPUT1.

And here's the mapping, as you can see the filter is connected to the EMPLOYEES datastore output connector.

Adding a Join

Next we'll create the join between the filter and the DEPARTMENTS table. To do this we can just chain a .join() onto the .filter() method and pass in some arguments to specify the join name, what it joins to and the join condition itself.

/*
 * OdiDsl to create a mapping with 4 datastores a filter and a join.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
        .commit()
        .validate()

Only 2 validation warnings and no errors this time...

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component JOBS has no input or output connections.
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.

We now have a join named EMP_DEPT joining DEPARTMENTS and the filter, NAME_FILTER, together.

Adding another Join

We'll now do the same for the final join.

/*
 * OdiDsl to create a mapping with 4 datastores, a filter and 2 joins.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
            .join('DEPT_JOBS', ['JOBS'], [join_condition: "EMPLOYEES.JOB_ID = JOBS.JOB_ID" ])      
        .commit()
        .validate()

looking better all the time...

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.

And we now have a join named DEPT_JOBS joining JOBS and the join, EMP_DEPT, to each other.

Connecting to the target

The final step is to connect the DEPT_JOBS join to our target datastore, D_EMPLOYEE. For this we can use the .connect() method. This method is used to map upstream attributes to a datastore. When you perform this action in ODI Studio, you'll be prompted with the attribute matching dialog, with options to auto-map the attributes.

OdiDsl will, by default, auto-map all attributes that are not explicitly mapped in the .connect() method. In our completed code example below we are explicitly mapping several attributes to support SCD2 functionality, auto-map takes care of the rest.

/*
 * OdiDsl to create a SCD2 dimension load mapping
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
            .join('DEPT_JOBS', ['JOBS'], [join_condition: "EMPLOYEES.JOB_ID = JOBS.JOB_ID" ])
            .connect("D_EMPLOYEE", [
                                        [ attr: "employee_id", key_indicator: true ],
                                        [ attr: "eff_from_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "eff_to_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "current_flag", expression: 1, execute_on_hint: "TARGET"],
                                        [ attr: "surr_key", expression: ":RM_PROJECT.D_EMPLOYEE_SEQ_NEXTVAL", execute_on_hint: "TARGET"],
                                   ])
        .commit()
        .validate()

Nice, all validated this time.

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created  
Validation Successful  


What about Updates?

Yes. We can also update an existing mapping using mapping.update(<project>, <folder>, <mapping name>). This is useful when you need to make changes to multiple mappings or when you can't drop and recreate a mapping due to a dependency. The approach is the same, we start by selecting a component with .select() and then chain a method onto it, in this case, .config().

mapping.update('MYPROJECT', 'DEMO', "EMPLOYEE_DIM_LOAD")  
        .select('DEPT_JOBS')
            .config([join_type: "LEFT_OUTER"])


Which Properties Can I Change for each Component?

Probably more than you'll ever need to, OdiDsl mirrors the properties that are available in ODI Studio via the SDK.

Can We Generate OdiDsl Code From an Existing Mapping?

Yes, we can do that too, with .reverse(). This allows you to go in the opposite direction, from an existing mapping back to OdiDsl code.

Let's take this hand-built, fictional and completely CRAZY_MAPPING as an example (fictional and crazy in the sense that it does nothing useful; the flow and configuration are, however, completely valid).

If we execute .reverse() on this mapping by calling...

mapping.reverse('MY_PROJECT', 'DEMO_FOLDER', 'CRAZY_MAPPING')  

...OdiDsl will return the following output to the console. What you are seeing here is the OdiDsl required to recreate the crazy mapping above.

Connecting to the repository...

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'CRAZY_MAPPING')  
    .datastores([
         ['name':'STAGING.TABLE1', 'alias':'TABLE1'],
         ['name':'STAGING.TABLE9', 'alias':'TABLE9'],
         ['name':'STAGING.TABLE3', 'alias':'TABLE3'],
         ['name':'STAGING.TABLE4', 'alias':'TABLE4'],
         ['name':'STAGING.TABLE6', 'alias':'TABLE6'],
         ['name':'STAGING.TABLE5', 'alias':'TABLE5'],
         ['name':'STAGING.TABLE7', 'alias':'TABLE7'],
         ['name':'STAGING.TABLE2', 'alias':'TABLE2'],
         ['name':'STAGING.TABLE8', 'alias':'TABLE8'],
         ['name':'STAGING.TABLE11', 'alias':'TABLE11'],
         ['name':'STAGING.TABLE12', 'alias':'TABLE12'],
         ['name':'STAGING.TABLE13', 'alias':'TABLE13'],
         ['name':'STAGING.TABLE15', 'alias':'TABLE15'],
         ['name':'STAGING.TABLE14', 'alias':'TABLE14'],
         ['name':'STAGING.TABLE16', 'alias':'TABLE16'],
         ['name':'STAGING.TABLE17', 'alias':'TABLE17'],
         ['name':'STAGING.TABLE42', 'alias':'TABLE42'],
    ])
    .select('TABLE5')
        .join('JOIN2', ['TABLE7'], [join_condition: "TABLE5.ID = TABLE7.ID" ])
        .join('JOIN3', ['TABLE6'], [join_condition: "TABLE6.ID = TABLE7.ID" ])
        .connect('TABLE14', [
                [ attr: "ID", expression: "TABLE5.ID" ],
                [ attr: "COL1", expression: "TABLE7.COL1" ],
                [ attr: "COL2", expression: "TABLE6.COL2" ],
                [ attr: "COL3", expression: "TABLE7.COL3" ],
                [ attr: "COL4", expression: "TABLE7.COL4" ],
        ])
    .select('JOIN3')
        .expr('EXPRESSION1', [attrs: [
                [ attr: "ID", expression: "TABLE6.ID * 42", datatype: "NUMERIC", size: "38", scale: "0"]]])
        .connect('TABLE15', [
                [ attr: "ID", expression: "EXPRESSION1.ID" ],
                [ attr: "COL1", expression: "", active_indicator: false ],
                [ attr: "COL2", expression: "TABLE6.COL2" ],
                [ attr: "COL3", expression: "TABLE7.COL3" ],
                [ attr: "COL4", expression: "", active_indicator: false ],
        ])
        .join('JOIN', ['TABLE14'], [join_condition: "TABLE14.ID = TABLE15.ID" ])
        .filter('FILTER2', [filter_condition: "TABLE15.COL3 != 'FOOBAR'" ])
        .connect('TABLE16', [
                [ attr: "ID", expression: "TABLE15.ID" ],
                [ attr: "COL1", expression: "TABLE15.COL1" ],
                [ attr: "COL2", expression: "TABLE14.COL2" ],
                [ attr: "COL3", expression: "TABLE14.COL3" ],
                [ attr: "COL4", expression: "TABLE14.COL4" ],
        ])
    .select('JOIN')
        .connect('TABLE17', [
                [ attr: "ID", expression: "TABLE15.ID" ],
                [ attr: "COL1", expression: "TABLE15.COL1" ],
                [ attr: "COL2", expression: "TABLE14.COL2" ],
                [ attr: "COL3", expression: "TABLE14.COL3" ],
                [ attr: "COL4", expression: "TABLE14.COL4" ],
        ])
    .select('TABLE5')
        .sort('SORT1', [sorter_condition: "TABLE5.ID, TABLE5.COL2, TABLE5.COL4" ])
        .connect('TABLE13', [
                [ attr: "ID", expression: "TABLE5.ID" ],
                [ attr: "COL1", expression: "TABLE5.COL1" ],
                [ attr: "COL2", expression: "TABLE5.COL2" ],
                [ attr: "COL3", expression: "TABLE5.COL3" ],
                [ attr: "COL4", expression: "TABLE5.COL4" ],
        ])
    .select('TABLE3')
        .filter('FILTER1', [filter_condition: "TABLE3.ID != 42" ])
    .select('TABLE4')
        .filter('FILTER', [filter_condition: "TABLE4.COL1 = 42" ])
        .lookup('LOOKUP1', 'FILTER1', [join_condition: "TABLE4.ID = TABLE3.ID AND TABLE3.COL1 = TABLE4.COL1"])
        .join('JOIN5', ['TABLE13'], [join_condition: "TABLE13.ID = TABLE3.ID" ])
        .distinct('DISTINCT_', [attrs: [
                [ attr: "COL3_1", expression: "TABLE4.COL3", datatype: "VARCHAR", size: "30"],
                [ attr: "COL4_1", expression: "TABLE4.COL4", datatype: "VARCHAR", size: "30"]]])
    .select('DISTINCT_')
        .join('JOIN4', ['EXPRESSION1'], [join_condition: "TABLE5.ID = TABLE6.COL1" ])
        .sort('SORT', [sorter_condition: "EXPRESSION1.ID" ])
        .connect('TABLE8', [
                [ attr: "ID", expression: "EXPRESSION1.ID" ],
                [ attr: "COL1", expression: "", active_indicator: false ],
                [ attr: "COL2", expression: "", active_indicator: false ],
                [ attr: "COL3", expression: "TABLE7.COL3" ],
                [ attr: "COL4", expression: "", active_indicator: false ],
        ])
        .connect('TABLE12', [
                [ attr: "ID", expression: "TABLE8.ID" ],
                [ attr: "COL1", expression: "TABLE8.COL1" ],
                [ attr: "COL2", expression: "TABLE8.COL2" ],
                [ attr: "COL3", expression: "TABLE8.COL3" ],
                [ attr: "COL4", expression: "TABLE8.COL4" ],
        ])
    .select('TABLE9')
        .expr('EXPRESSION', [attrs: [
                [ attr: "ID", expression: "TABLE9.ID *42", datatype: "NUMERIC", size: "38", scale: "0"],
                [ attr: "COL4", expression: "TABLE9.COL4 || 'FOOBAR'", datatype: "VARCHAR", size: "30"]]])
        .connect('TABLE1', [
                [ attr: "ID", expression: "EXPRESSION.ID" ],
                [ attr: "COL1", expression: "", active_indicator: false ],
                [ attr: "COL2", expression: "", active_indicator: false ],
                [ attr: "COL3", expression: "", active_indicator: false ],
                [ attr: "COL4", expression: "TABLE9.COL4" ],
        ])
        .join('JOIN1', ['TABLE2'], [join_condition: "TABLE1.ID = TABLE2.ID" ])
        .aggregate('AGGREGATE', [attrs: [
                [ attr: "ID", expression: "TABLE1.ID", datatype: "NUMERIC", size: "38", scale: "0", group_by: "YES"],
                [ attr: "COL4_1", expression: "MAX(TABLE2.COL4)", datatype: "VARCHAR", size: "30", group_by: "AUTO"]]])
        .lookup('LOOKUP', 'DISTINCT_', [join_condition: "AGGREGATE.ID = DISTINCT_.COL3_1"])
        .aggregate('AGGREGATE1', [attrs: [
                [ attr: "ID", expression: "AGGREGATE.ID", datatype: "NUMERIC", size: "38", scale: "0", group_by: "YES"],
                [ attr: "COL4_1_1", expression: "SUM(AGGREGATE.COL4_1)", datatype: "VARCHAR", size: "30", group_by: "AUTO"]]])
        .filter('FILTER3', [filter_condition: "AGGREGATE1.COL4_1_1 > 42" ])
        .connect('TABLE42', [
                [ attr: "ID", expression: "AGGREGATE1.ID" ],
        ])
    .select('AGGREGATE1')
        .join('JOIN6', ['TABLE8'], [join_condition: "AGGREGATE1.ID = TABLE8.ID" ])
        .connect('TABLE11', [
                [ attr: "ID", expression: "TABLE8.ID" ],
                [ attr: "COL1", expression: "" ],
                [ attr: "COL2", expression: "" ],
                [ attr: "COL3", expression: "TABLE8.COL3" ],
                [ attr: "COL4", expression: "TABLE8.COL4" ],
        ])
    .commit()
    .validate()

When we execute this OdiDsl code we get, you guessed it, exactly the same crazy mapping with the flow and component properties all intact.

Being able to flip between ODI studio and OdiDsl has some really nice benefits for those who like the workflow. You can start prototyping a mapping in ODI Studio, convert it to code, hack around for a bit and then reflect it all back into ODI. It's also very useful for generating a "code template" from an existing mapping. The generated code template can be modified to accept variables instead of hard coded properties, all you need then is some metadata.

Did Somebody Say Metadata ?

Metadata is the key to bulk automation. You can find metadata in all kinds of places. If you are migrating to ODI from another product then there will be a whole mass of metadata living in your current product's repository or via some kind of export routine which typically produces XML files. If you are starting a fresh ODI implementation, then there will be metadata living in your source and target systems, in data dictionaries, in excel sheets, in mapping specifications documents, all over the place really. This is the kind of metadata that can be used to feed OdiDsl.

A Quick Example of One Possible Approach to Using OdiDsl With Metadata

First we build a mapping in ODI Studio; this will act as our template mapping.

We then generate the equivalent OdiDsl code using mapping.reverse('MY_PROJECT', 'DEMO_FOLDER', 'FEED_ME_METADATA'), which gives us this code.

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'FEED_ME_METADATA')  
    .datastores([
         ['name':'STAGING.TABLE1', 'alias':'LOOKUP_TABLE'],
         ['name':'STAGING.TABLE2', 'alias':'TABLE2'],
         ['name':'STAGING.TABLE3', 'alias':'TABLE3'],
         ['name':'STAGING.TABLE4', 'alias':'TARGET_TABLE'],
    ])
    .select('TABLE2')
        .lookup('LOOKUP', 'LOOKUP_TABLE', [join_condition: "TABLE2.ID = LOOKUP_TABLE.ID"])
        .join('JOIN', ['TABLE3'], [join_condition: "TABLE2.ID = TABLE3.ID" ])
        .filter('FILTER', [filter_condition: "TABLE3.COL1 = 'FILTER'" ])
        .expr('EXPRESSION', [attrs: [
                [ attr: "CONSTANT", expression: "42", datatype: "VARCHAR", size: "30"]]])
        .connect('TARGET_TABLE', [
                [ attr: "ID", expression: "LOOKUP_TABLE.ID" ],
                [ attr: "COL1", expression: "LOOKUP_TABLE.COL1 || EXPRESSION.CONSTANT" ],
                [ attr: "COL2", expression: "TABLE2.COL2" ],
                [ attr: "COL3", expression: "TABLE3.COL3" ],
                [ attr: "COL4", expression: "LOOKUP_TABLE.COL4" ],
        ])
    .commit()
    .validate()

We now need to decorate this code with some variables; these variables will act as placeholders for our metadata. The metadata we are going to use is from a database table. I'm keeping it simple for the purpose of this demonstration, but the approach is the same. Our metadata table has 10 rows, and from these 10 rows we are going to create 10 mappings, replacing certain properties with the values from the columns.

Remember that OdiDsl is expressed in Groovy. That means, as well as OdiDsl code, we also have access to the entire Groovy language. In the following code we are using a mixture of Groovy and OdiDsl. We are connecting to a database, grabbing our metadata and then looping over mapping.create(), once for each row in our metadata table. The columns in the metadata table are represented as the variables row.datastore, row.constant_expr and row.filter_cond. The code comments indicate where we are substituting these variables in place of our previously hard coded property values.

import groovy.sql.Sql

// Connect to the database and retrieve rows from the METADATA table.
def sqlConn = Sql.newInstance("jdbc:oracle:thin:@hostname:1521/pdborcl", "username", "password", "oracle.jdbc.pool.OracleDataSource")  
def rows = sqlConn.rows("SELECT * FROM METADATA")  
sqlConn.close()

// For each row in our METADATA table
rows.eachWithIndex { row, index ->

    mapping.create('MY_PROJECT', 'DEMO_FOLDER', "FEED_ME_METADATA_${index+1}") // Interpolate row number to make the mapping name unique
            .datastores([
                    ['name': 'STAGING.TABLE1', 'alias': 'LOOKUP_TABLE'],
                    ['name': "STAGING.${row.datastore}" ], // substitute in a different datastore
                    ['name': 'STAGING.TABLE3', 'alias': 'TABLE3'],
                    ['name': 'STAGING.TABLE4', 'alias': 'TARGET_TABLE'],
            ])
            .select(row.datastore)
                .lookup('LOOKUP', 'LOOKUP_TABLE', [join_condition: "${row.datastore}.ID = LOOKUP_TABLE.ID"]) // substitute in a different datastore
                .join('JOIN', ['TABLE3'], [join_condition: "${row.datastore}.ID = TABLE3.ID"]) // substitute in a different datastore
                .filter('FILTER', [filter_condition: "TABLE3.COL1 = '${row.filter_cond}'"]) // substitute in a different filter condition
                .expr('EXPRESSION', [attrs: [
                    [attr: "CONSTANT", expression: row.constant_expr, datatype: "VARCHAR", size: "30"]]]) // substitute in a different constant for the expression
                .connect('TARGET_TABLE', [
                    [attr: "ID", expression: "LOOKUP_TABLE.ID"],
                    [attr: "COL1", expression: "LOOKUP_TABLE.COL1 || EXPRESSION.CONSTANT"],
                    [attr: "COL2", expression: "${row.datastore}.COL2"], // substitute in a different datastore
                    [attr: "COL3", expression: "TABLE3.COL3"],
                    [attr: "COL4", expression: "LOOKUP_TABLE.COL4"],
                ])
            .commit()
            .validate()

}

Here is the output...

Connecting to the repository...

mapping FEED_ME_METADATA_1 created  
Validation Successful  
mapping FEED_ME_METADATA_2 created  
Validation Successful  
mapping FEED_ME_METADATA_3 created  
Validation Successful  
mapping FEED_ME_METADATA_4 created  
Validation Successful  
mapping FEED_ME_METADATA_5 created  
Validation Successful  
mapping FEED_ME_METADATA_6 created  
Validation Successful  
mapping FEED_ME_METADATA_7 created  
Validation Successful  
mapping FEED_ME_METADATA_8 created  
Validation Successful  
mapping FEED_ME_METADATA_9 created  
Validation Successful  
mapping FEED_ME_METADATA_10 created  
Validation Successful  

And here are our 10 mappings, each with its own configuration.

If we take a look at the FEED_ME_METADATA_5 mapping, we can see the metadata has been reflected into the mapping.

And that's about it. We've basically just built a mini accelerator using OdiDsl and we hardly had to write any code. The OdiDsl code was generated for us using .reverse(). All we really had to code was the connection to the database, a loop and a bit of variable substitution!

Summary

With the Rittman Mead ODI Tool kit, accelerating your ODI implementation has never been easier. If you are thinking about migrating to ODI from another product or embarking on a new ODI project, Rittman Mead can help. For more information please get in touch.

Categories: BI & Warehousing

Why Explain Plan miss table in function

Tom Kyte - Thu, 2017-03-09 08:06
How do I collect the explain plan for a table used in a function called by a procedure? Suppose I have written the simple SQL query below: select t.tid,t.tname,(select emp_id from emp e where e.emp_id=t.emp_id) from transaction t; And the following explain plan is for...
Categories: DBA Blogs

Oracle Data pump(DBMS_DATAPUMP)

Tom Kyte - Thu, 2017-03-09 08:06
Dear Mr. Kyte, greetings. I wrote this PL/SQL block to export a schema using the dbms_datapump package. This is step one (which will create the datapump job): declare h1 number; begin h1 := dbms_datapump.open (oper...
Categories: DBA Blogs

Composite Unique Key on multiple columns

Tom Kyte - Thu, 2017-03-09 08:06
Hi Chris/Connor, I have a table with 25 columns, of which col_1, col_2, col_3, col_4 are included in a composite unique key. Daily there are 60k inserts/updates on this table. My question here is - will it cause any performance issues if we cre...
Categories: DBA Blogs

Permissions to create triggers over several schemas

Tom Kyte - Thu, 2017-03-09 08:06
I have a 'normal' schema S, and a schema that stores history H (it contains all tables of S, added with some audit columns, but stripped of any keys). Whenever a DML happens on S, it has to be stored within H (If S contains a table T, and we insert ...
Categories: DBA Blogs

Deterministic function

Tom Kyte - Thu, 2017-03-09 08:06
Hi Tom, Recently I created a function with the DETERMINISTIC definition to improve the performance of my query, but I see that the function is called every time, even when it receives the same input. Here is the script that I used to check this function: ...
Categories: DBA Blogs

Transaction commit when exiting SQL*Plus

Tom Kyte - Thu, 2017-03-09 08:06
In the case of exiting SQL*Plus without a commit, does the running transaction commit or roll back?
Categories: DBA Blogs

List out external databases using DB link

Tom Kyte - Thu, 2017-03-09 08:06
Hello Gurus, I have basic knowledge in Oracle DB administration and profiling. I am unable to work out the below requirement: "Want to identify the list of other databases which are using my database via DB links." Suppose my company has 10 ...
Categories: DBA Blogs

Exadata questions

Tom Kyte - Thu, 2017-03-09 08:06
Hello Tom, We have acquired an Exadata Server and would like to clarify some issues that I believe are product myths. 1 - After removing indexes, can queries get faster? 2- I have always used indexes on the columns identified as FKs. Is th...
Categories: DBA Blogs

Oracle Service Bus : Service Exploring via WebLogic Server MBeans with JMX

Amis Blog - Thu, 2017-03-09 03:34

At a public sector organization in the Netherlands there was a need to make an inventory of the deployed OSB services in order to find out the dependencies with certain external web services (which were on a list to become deprecated).

For this, in particular the endpoints of business services were of interest.

Besides that, the dependencies between services and also the Message Flow per proxy service were of interest, in particular Operational Branch, Route, Java Callout and Service Callout actions.

Therefore an OSBServiceExplorer tool was developed to explore the services (proxy and business) within the OSB via WebLogic Server MBeans with JMX. For now, this tool was merely used to quickly return the information needed, but in the future it can be the basis for a more comprehensive one.

This article will explain how the OSBServiceExplorer tool uses WebLogic Server MBeans with JMX.

If you are interested in general information about using MBeans with JMX, I kindly point you to another article (written by me) on the AMIS TECHNOLOGY BLOG: "Oracle Service Bus : disable / enable a proxy service via WebLogic Server MBeans with JMX", via url: https://technology.amis.nl/2017/02/28/oracle-service-bus-disable-enable-a-proxy-service-via-weblogic-server-mbeans-with-jmx/

Remark: Some names in the examples in this article are in Dutch, but don’t let this scare you off.

MBeans

For ease of use, an MS-DOS batch file was created that uses MBeans to explore services (proxy and business). The WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

On a server, the MS-DOS batch file "OSBServiceExplorer.bat" is called.

The content of the MS-DOS batch file "OSBServiceExplorer.bat" is:
java.exe -classpath "OSBServiceExplorer.jar;com.bea.common.configfwk_1.7.0.0.jar;sb-kernel-api.jar;sb-kernel-impl.jar;wlfullclient.jar" nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer "xyz" "7001" "weblogic" "xyz"

In the MS-DOS batch file, a class named OSBServiceExplorer is called via java.exe. The main method of this class expects the following parameters:

- HOSTNAME: Host name of the AdminServer
- PORT: Port of the AdminServer
- USERNAME: Username
- PASSWORD: Password

In the sample code shown at the end of this article, the use of the following MBeans can be seen:

DomainRuntimeServiceMBean
Provides a common access point for navigating to all runtime and configuration MBeans in the domain as well as to MBeans that provide domain-wide services (such as controlling and monitoring the life cycles of servers and message-driven EJBs and coordinating the migration of migratable services). [https://docs.oracle.com/middleware/1213/wls/WLAPI/weblogic/management/mbeanservers/domainruntime/DomainRuntimeServiceMBean.html]

Remark: the wlfullclient.jar library needed for this is not provided by default in a WebLogic install and must be built. The simple way of how to do this is described in "Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool", which can be reached via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.

ServerRuntimeMBean
Provides methods for retrieving runtime information about a server instance and for transitioning a server from one state to another. [https://docs.oracle.com/cd/E11035_01/wls100/javadocs_mhome/weblogic/management/runtime/ServerRuntimeMBean.html]

ALSBConfigurationMBean
Provides various API to query, export and import resources, obtain validation errors, get and set environment values, and in general manage resources in an ALSB domain. [https://docs.oracle.com/cd/E13171_01/alsb/docs26/javadoc/com/bea/wli/sb/management/configuration/ALSBConfigurationMBean.html]

Once the connection to the DomainRuntimeServiceMBean is made, other MBeans can be found via the findService method.

Service findService(String name,
                    String type,
                    String location)

This method returns the Service on the specified Server or in the primary MBeanServer if the location is not specified.

In the sample code shown at the end of this article, certain java fields are used. For reading purposes the field values are shown in the following table:

- DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME: weblogic.management.mbeanservers.domainruntime
- DomainRuntimeServiceMBean.OBJECT_NAME: com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
- ALSBConfigurationMBean.NAME: ALSBConfiguration
- ALSBConfigurationMBean.TYPE: com.bea.wli.sb.management.configuration.ALSBConfigurationMBean
- Ref.DOMAIN: <Reference to the domain>
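
Putting this together, below is a minimal sketch (condensed from the full code at the end of this article) of looking up the DomainRuntimeServiceMBean and obtaining the ALSBConfigurationMBean via findService. The connection variable is assumed to already point at the Domain Runtime MBean Server (see the initConnection method in the full code):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

import weblogic.management.jmx.MBeanServerInvocationHandler;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;

import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean;

// Condensed sketch: look up the DomainRuntimeServiceMBean via its well-known
// ObjectName and use findService to obtain the ALSBConfigurationMBean.
public static ALSBConfigurationMBean getAlsbConfiguration(MBeanServerConnection connection) throws Exception {
    ObjectName objectName =
        new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
    DomainRuntimeServiceMBean domainRuntimeServiceMBean =
        (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                 objectName);

    // A null location means the MBean is looked up in the primary MBeanServer.
    return (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                         ALSBConfigurationMBean.TYPE,
                                                                         null);
}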

Because of the use of com.bea.wli.config.Ref.class, the following library <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar was needed.

A Ref uniquely represents a resource, project or folder that is managed by the Configuration Framework.

A special Ref DOMAIN refers to the whole domain.
[https://docs.oracle.com/cd/E17904_01/apirefs.1111/e15033/com/bea/wli/config/Ref.html]

Because of the use of weblogic.management.jmx.MBeanServerInvocationHandler.class, the following library <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar was needed.

When running the code the following error was thrown:

java.lang.RuntimeException: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean
	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:621)
	at weblogic.management.jmx.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:418)
	at $Proxy0.findService(Unknown Source)
	at nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer.<init>(OSBServiceExplorer.java:174)
	at nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer.main(OSBServiceExplorer.java:445)
Caused by: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:619)
	... 4 more
Process exited.

So because of the use of com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class the following library <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar was also needed.

Runtime information (name and state) of the server instances

The OSBServiceExplorer tool writes its output to a text file called “OSBServiceExplorer.txt”.

First the runtime information (name and state) of the server instances (Administration Server and Managed Servers) of the WebLogic domain are written to file.

Example content fragment of the text file:

Found server runtimes:
- Server name: AdminServer. Server state: RUNNING
- Server name: ManagedServer1. Server state: RUNNING
- Server name: ManagedServer2. Server state: RUNNING

See the code fragment below:

fileWriter.write("Found server runtimes:\n");
int length = (int)serverRuntimes.length;
for (int i = 0; i < length; i++) {
    ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];

    String name = serverRuntimeMBean.getName();
    String state = serverRuntimeMBean.getState();
    fileWriter.write("- Server name: " + name + ". Server state: " +
                     state + "\n");
}
fileWriter.write("" + "\n");
List of Ref objects (projects, folders, or resources)

Next, a list of Ref objects is written to file, including the total number of objects in the list.

Example content fragment of the text file:

Found total of 1132 refs, including the following proxy and business services: 
…
- ProxyService: JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS
…
- ProxyService: ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
…
- BusinessService: ZKN ZaakService-2.0/business/eBUS/eBUS_FolderService_BS

See the code fragment below:

Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);


fileWriter.write("Found total of " + refs.size() + " refs, including the following proxy and business services:\n");

for (Ref ref : refs) {
    String typeId = ref.getTypeId();

    if (typeId.equalsIgnoreCase("ProxyService")) {

        fileWriter.write("- ProxyService: " + ref.getFullName() +
                         "\n");
    } else if (typeId.equalsIgnoreCase("BusinessService")) {
        fileWriter.write("- BusinessService: " + ref.getFullName() +
                         "\n");
    } else {
        //fileWriter.write(ref.getFullName());
    }
}

fileWriter.write("" + "\n");

As mentioned before, a Ref object uniquely represents a resource, project or folder. A Ref object has two components:

  • typeId that indicates whether it is a project, folder, or a resource
  • array of names of non-zero length.

For a resource the array of names start with the project name, followed by folder names, and end with the resource name.
For a project, the Ref object simply contains one name component, that is, the project name.
A Ref object for a folder contains the project name followed by the names of the folders which it is nested under.

[https://docs.oracle.com/cd/E17904_01/apirefs.1111/e15033/com/bea/wli/config/Ref.html]
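
As a small illustration of these naming rules, the following fragment constructs a Ref for the UpdateZaak_Lk01_PS proxy service mentioned in this article, in the same way as the constructRef helper method in the full code at the end of this article:

import com.bea.wli.config.Ref;

// Sketch: the name array for a resource is project name, folder name(s), resource name.
String[] names = "ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS".split("/");
Ref proxyServiceRef = new Ref("ProxyService", names);

System.out.println(proxyServiceRef.getTypeId());   // ProxyService
System.out.println(proxyServiceRef.getFullName()); // ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS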

Below is an example of a Ref object that represents a folder (via JDeveloper Debug):

Below is an example of a Ref object that represents a resource (via JDeveloper Debug):

ResourceConfigurationMBean

In order to be able to determine the actual endpoints of the proxy services and business services, the ResourceConfigurationMBean is used. When connected, the Service Bus MBeans are located under com.oracle.osb. [https://technology.amis.nl/2014/10/20/oracle-service-bus-obtaining-list-exposed-soap-http-endpoints/]

Looking at the Java code, as a next step the names of a set of MBeans, specified by pattern matching, are collected and looped through.

Once the connection to the Domain Runtime MBean Server is made, the names of other MBeans can be found via the queryNames method.

Set<ObjectName> queryNames(ObjectName name,
                           QueryExp query)
                           throws IOException

Gets the names of MBeans controlled by the MBean server. This method enables any of the following to be obtained: The names of all MBeans, the names of a set of MBeans specified by pattern matching on the ObjectName and/or a Query expression, a specific MBean name (equivalent to testing whether an MBean is registered). When the object name is null or no domain and key properties are specified, all objects are selected (and filtered if a query is specified). It returns the set of ObjectNames for the MBeans selected.
[https://docs.oracle.com/javase/7/docs/api/javax/management/MBeanServerConnection.html]

See the code fragment below:

String domain = "com.oracle.osb";
String objectNamePattern =
    domain + ":" + "Type=ResourceConfigurationMBean,*";

Set<ObjectName> osbResourceConfigurations =
    connection.queryNames(new ObjectName(objectNamePattern), null);

fileWriter.write("ResourceConfiguration list of proxy and business services:\n");
for (ObjectName osbResourceConfiguration :
     osbResourceConfigurations) {
…
    String canonicalName =
        osbResourceConfiguration.getCanonicalName();
    fileWriter.write("- Resource: " + canonicalName + "\n");
…
}

The pattern used is: com.oracle.osb:Type=ResourceConfigurationMBean,*

Example content fragment of the text file:

ResourceConfiguration list of proxy and business services:
…
- Resource: com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
…

Below is an example of an ObjectName object (via JDeveloper Debug), found via the queryNames method:

Via the Oracle Enterprise Manager Fusion Middleware Control for a certain domain, the System MBean Browser can be opened. Here the previously mentioned ResourceConfigurationMBeans can be found.


[Via MBean Browser]

The information on the right is as follows (if we navigate to a particular ResourceConfigurationMBean, for example …$UpdateZaak_Lk01_PS):


[Via MBean Browser]

Here we can see that the attributes Configuration and Metadata are available:

  • Configuration

[Via MBean Browser]

The Configuration is made available in java by the following code fragment:

CompositeDataSupport configuration = (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,"Configuration");
  • Metadata

[Via MBean Browser]

The Metadata is made available in java by the following code fragment:

CompositeDataSupport metadata = (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,"Metadata");
Diving into attribute Configuration of the ResourceConfigurationMBean

For each found proxy and business service the configuration information (canonicalName, service-type, transport-type, url) is written to file.

See the code fragment below:

String canonicalName =
    osbResourceConfiguration.getCanonicalName();
…
String servicetype =
    (String)configuration.get("service-type");
CompositeDataSupport transportconfiguration =
    (CompositeDataSupport)configuration.get("transport-configuration");
String transporttype =
    (String)transportconfiguration.get("transport-type");
…
fileWriter.write("  Configuration of " + canonicalName +
                 ":" + " service-type=" + servicetype +
                 ", transport-type=" + transporttype +
                 ", url=" + url + "\n");

Proxy service configuration:

Below is an example of a proxy service configuration (content fragment of the text file):

  Configuration of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean: service-type=Abstract SOAP, transport-type=local, url=local

The proxy services, which define the exposed endpoints, can be recognized by the ProxyService$ prefix.


[Via MBean Browser]

For getting the endpoint, see the code fragment below:

String url = (String)transportconfiguration.get("url");

Business service configuration:

Below is an example of a business service configuration (content fragment of the text file):

  Configuration of com.oracle.osb:Location=AdminServer,Name=BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=http://xyz/eBus/FolderService.svc

The business services, which define the exposed endpoints, can be recognized by the BusinessService$ prefix.


[Via MBean Browser]

For getting the endpoint, see the code fragment below:

CompositeData[] urlconfiguration =
    (CompositeData[])transportconfiguration.get("url-configuration");
String url = (String)urlconfiguration[0].get("url");

Via the url key found in the business service configuration, the endpoint of a business service can be found (for example: http://xyz/eBus/FolderService.svc). In that way the dependencies of proxy and/or business services on certain external web services (having a certain endpoint) can be determined.
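
As a hedged sketch of how this can support the dependency inventory (this helper is not part of the original tool; the targetUrl parameter is just an example), the following fragment prints every business service whose first configured url matches a given external endpoint:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;

// Sketch: report all business services pointing at a given external endpoint.
// The connection is assumed to point at the Domain Runtime MBean Server.
public static void printBusinessServicesFor(MBeanServerConnection connection,
                                            String targetUrl) throws Exception {
    ObjectName pattern =
        new ObjectName("com.oracle.osb:Type=ResourceConfigurationMBean,*");

    for (ObjectName osbResourceConfiguration :
         connection.queryNames(pattern, null)) {

        if (!osbResourceConfiguration.getCanonicalName().contains("BusinessService")) {
            continue;
        }

        CompositeDataSupport configuration =
            (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                          "Configuration");
        CompositeDataSupport transportconfiguration =
            (CompositeDataSupport)configuration.get("transport-configuration");
        CompositeData[] urlconfiguration =
            (CompositeData[])transportconfiguration.get("url-configuration");
        String url = (String)urlconfiguration[0].get("url");

        if (url.equals(targetUrl)) {
            System.out.println(osbResourceConfiguration.getCanonicalName() +
                               " -> " + url);
        }
    }
}

Calling printBusinessServicesFor(connection, "http://xyz/eBus/FolderService.svc") would, in the example domain above, report the eBUS_FolderService_BS business service.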

Proxy service pipeline, element hierarchy

For a proxy service the elements (nodes) of the pipeline are investigated.

See the code fragment below:

CompositeDataSupport pipeline =
    (CompositeDataSupport)configuration.get("pipeline");
TabularDataSupport nodes =
    (TabularDataSupport)pipeline.get("nodes");


[Via MBean Browser]

Below is an example of a nodes object (via JDeveloper Debug):

If we take a look at the dataMap object, we can see nodes of different types.

Below is an example of a node of type Stage (via JDeveloper Debug):

Below is an example of a node of type Action and label ifThenElse (via JDeveloper Debug):

Below is an example of a node of type Action and label wsCallout (via JDeveloper Debug):

For the examples above the Message Flow part of the UpdateZaak_Lk01_PS proxy service looks like:

The mapping between the node-id and the corresponding element in the Message Flow can be achieved by looking in the .proxy file (in this case: UpdateZaak_Lk01_PS.proxy) for the _ActionId- identification, mentioned as the value for the name key.

<con:stage name="EditFolderZaakStage">
        <con:context>
          …
        </con:context>
        <con:actions>
          <con3:ifThenElse>
            <con2:id>_ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7c84</con2:id>
            <con3:case>
              <con3:condition>
                …
              </con3:condition>
              <con3:actions>
                <con3:wsCallout>
                  <con2:id>_ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f</con2:id>
                  …

The first node in the dataMap object (via JDeveloper Debug) looks like:

The dataMap object is of type HashMap. A HashMap maintains key/value pairs, is often denoted as HashMap<Key, Value> or HashMap<K, V>, and implements the Map interface.

As can be seen, the key is of type Object and the value of type CompositeData.

In order to know what kind of information is delivered via the CompositeData object, the rowType object can be used.

See the code fragment below:

TabularType tabularType = nodes.getTabularType();
CompositeType rowType = tabularType.getRowType();

Below is an example of a rowType object (via JDeveloper Debug):

From this it is now clear that the CompositeData object for a ProxyServicePipelineElementType contains:

Index  key      value
0      children Children of this node
1      label    Label
2      name     Name of the node
3      node-id  Id of this node, unique within the graph
4      type     Pipeline element type

In the code fragment below, an iterator is used to loop through the dataMap object.

Iterator keyIter = nodes.keySet().iterator();

for (int j = 0; keyIter.hasNext(); ++j) {

    Object[] key = ((Collection)keyIter.next()).toArray();

    CompositeData compositeData = nodes.get(key);

    …
}

The key object for the first node in the dataMap object (via JDeveloper Debug) looks like:

The value of this key object is 25, which also is shown as the value for the node-id of the compositeData object, which for the first node in the dataMap object (via JDeveloper Debug) looks like:

It’s obvious that the nodes in the pipeline form a hierarchy. A node can have children, which in turn can also have children, etc. Think for example of a “Stage” having an “If Then” action which in turn contains several “Assign” actions. A proxy service Message Flow can of course contain all kinds of elements (see the Design Palette).
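
The hierarchy can be traversed with a simple recursion over the children node-ids; below is a minimal sketch (a condensed version of the printCompositeData method in the full code at the end of this article, which adds filtering and indentation bookkeeping):

import javax.management.openmbean.CompositeData;
import javax.management.openmbean.TabularDataSupport;

// Sketch: recursively walk the pipeline nodes; each child node-id is itself
// a valid key into the nodes table.
private static void walkNodes(TabularDataSupport nodes, Object[] key, int level) {
    CompositeData node = nodes.get(key);

    StringBuilder indent = new StringBuilder();
    for (int i = 0; i < level; i++) {
        indent.append("  ");
    }
    System.out.println(indent + "node-id=" + node.get("node-id") +
                       ", type=" + node.get("type") +
                       ", label=" + node.get("label"));

    for (String childId : (String[])node.get("children")) {
        walkNodes(nodes, new Object[] { childId }, level + 1);
    }
}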

Below is (for another proxy service) an example content fragment of the text file that reflects the hierarchy:

     Index#76:
       level    = 1
       label    = branch-node
       name     = CheckOperationOperationalBranch
       node-id  = 62
       type     = OperationalBranchNode
       children = [42,46,50,61]
         level    = 2
         node-id  = 42
         children = [41]
           level    = 3
           label    = route-node
           name     = creeerZaak_Lk01RouteNode
           node-id  = 41
           type     = RouteNode
           children = [40]
             level    = 4
             node-id  = 40
             children = [39]
               level    = 5
               label    = route
               name     = _ActionId-4977625172784205635-3567e5a2.15364c39a7e.-7b99
               node-id  = 39
               type     = Action
               children = []
         level    = 2
         node-id  = 46
         children = [45]
           level    = 3
           label    = route-node
           name     = updateZaak_Lk01RouteNode
           node-id  = 45
           type     = RouteNode
           children = [44]
             level    = 4
             node-id  = 44
             children = [43]
               level    = 5
               label    = route
               name     = _ActionId-4977625172784205635-3567e5a2.15364c39a7e.-7b77
               node-id  = 43
               type     = Action
               children = []
         …

Because of the interest in only certain kinds of nodes (Route, Java Callout, Service Callout, etc.), some filtering is needed. For this the label and type keys are used.

See the code fragment below:

String label = (String)compositeData.get("label");
String type = (String)compositeData.get("type");

if (type.equals("Action") &&
    (label.contains("wsCallout") ||
     label.contains("javaCallout") ||
     label.contains("route"))) {

    fileWriter.write("    Index#" + j + ":\n");
    printCompositeData(nodes, key, 1);
} else if (type.equals("OperationalBranchNode") ||
           type.equals("RouteNode"))
{
    fileWriter.write("    Index#" + j + ":\n");
    printCompositeData(nodes, key, 1);
}

Example content fragment of the text file:

    Index#72:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f
       node-id  = 71
       type     = Action
       children = [66,70]
    Index#98:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7997
       node-id  = 54
       type     = Action
       children = [48,53]
    Index#106:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7cf4
       node-id  = 35
       type     = Action
       children = [30,34]

When we take a closer look at the node of type Action and label wsCallout with index 106, this can also be found in the MBean Browser:


[Via MBean Browser]

The children node-ids are 30 (a node of type Sequence and name requestTransform, also having children) and 34 (a node of type Sequence and name responseTransform, also having children).

Diving into attribute Metadata of the ResourceConfigurationMBean

For each found proxy service the metadata information (dependencies and dependents) is written to file.

See the code fragment below:

fileWriter.write("  Metadata of " + canonicalName + "\n");

String[] dependencies =
    (String[])metadata.get("dependencies");
fileWriter.write("    dependencies:\n");
int size;
size = dependencies.length;
for (int i = 0; i < size; i++) {
    String dependency = dependencies[i];
    if (!dependency.contains("Xquery")) {
        fileWriter.write("      - " + dependency + "\n");
    }
}
fileWriter.write("" + "\n");

String[] dependents = (String[])metadata.get("dependents");
fileWriter.write("    dependents:\n");
size = dependents.length;
for (int i = 0; i < size; i++) {
    String dependent = dependents[i];
    fileWriter.write("      - " + dependent + "\n");
}
fileWriter.write("" + "\n");

Example content fragment of the text file:

  Metadata of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS
      - XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_SearchService_BS
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_LookupService_BS

    dependents:
      - ProxyService$JMSConsumerStuFZKNMessageService-1.0$proxy$JMSConsumerStuFZKNMessageService_PS
      - ProxyService$ZKN ZaakService-2.0$proxy$ZaakService_PS

As can be seen in the MBean Browser, the metadata for a particular proxy service shows the dependencies on other resources (like business services and XML Schemas) and other services that are dependent on the proxy service.


[Via MBean Browser]

By looking at the results in the text file "OSBServiceExplorer.txt", the dependencies between services (proxy and business) and also the dependencies with certain external web services (with a particular endpoint) could be extracted.

Example content of the text file:

Found server runtimes:
- Server name: AdminServer. Server state: RUNNING
- Server name: ManagedServer1. Server state: RUNNING
- Server name: ManagedServer2. Server state: RUNNING

Found total of 1132 refs, including the following proxy and business services: 
…
- ProxyService: JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS
…
- ProxyService: ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
…
- BusinessService: ZKN ZaakService-2.0/business/eBUS/eBUS_FolderService_BS
…

ResourceConfiguration list of proxy and business services:
…
- Resource: com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean: service-type=Abstract SOAP, transport-type=local, url=local

    Index#72:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f
       node-id  = 71
       type     = Action
       children = [66,70]
    Index#98:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7997
       node-id  = 54
       type     = Action
       children = [48,53]
    Index#106:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7cf4
       node-id  = 35
       type     = Action
       children = [30,34]

  Metadata of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS
      - XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_SearchService_BS
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_LookupService_BS

    dependents:
      - ProxyService$JMSConsumerStuFZKNMessageService-1.0$proxy$JMSConsumerStuFZKNMessageService_PS
      - ProxyService$ZKN ZaakService-2.0$proxy$ZaakService_PS
…

The java code:

package nl.xyz.osbservice.osbserviceexplorer;


import com.bea.wli.config.Ref;
import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean;

import java.io.FileWriter;
import java.io.IOException;

import java.net.MalformedURLException;

import java.util.Collection;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Properties;
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;


public class OSBServiceExplorer {
    private static MBeanServerConnection connection;
    private static JMXConnector connector;
    private static FileWriter fileWriter;

    /**
     * Indent a string
     * @param indent - The number of indentations to add before a string 
     * @return String - The indented string
     */
    private static String getIndentString(int indent) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < indent; i++) {
            sb.append("  ");
        }
        return sb.toString();
    }


    /**
     * Print composite data (write to file)
     * @param nodes - The list of nodes
     * @param key - The list of keys
     * @param level - The level in the hierarchy of nodes
     */
    private void printCompositeData(TabularDataSupport nodes, Object[] key,
                                    int level) {
        try {
            CompositeData compositeData = nodes.get(key);

            fileWriter.write(getIndentString(level) + "     level    = " +
                             level + "\n");

            String label = (String)compositeData.get("label");
            String name = (String)compositeData.get("name");
            String nodeid = (String)compositeData.get("node-id");
            String type = (String)compositeData.get("type");
            String[] childeren = (String[])compositeData.get("children");
            if (level == 1 ||
                (label.contains("route-node") || label.contains("route"))) {
                fileWriter.write(getIndentString(level) + "     label    = " +
                                 label + "\n");

                fileWriter.write(getIndentString(level) + "     name     = " +
                                 name + "\n");

                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     type     = " +
                                 type + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(childeren[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            } else if (level >= 2) {
                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(childeren[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            }

            if ((level == 1 && type.equals("OperationalBranchNode")) || level > 1) {
                level++;

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    key[0] = childeren[i];
                    printCompositeData(nodes, key, level);
                }
            }

        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public OSBServiceExplorer(HashMap props) {
        super();


        try {

            Properties properties = new Properties();
            properties.putAll(props);

            initConnection(properties.getProperty("HOSTNAME"),
                           properties.getProperty("PORT"),
                           properties.getProperty("USERNAME"),
                           properties.getProperty("PASSWORD"));


            DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);

            ServerRuntimeMBean[] serverRuntimes =
                domainRuntimeServiceMBean.getServerRuntimes();

            fileWriter = new FileWriter("OSBServiceExplorer.txt", false);


            fileWriter.write("Found server runtimes:\n");
            int length = (int)serverRuntimes.length;
            for (int i = 0; i < length; i++) {
                ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];

                String name = serverRuntimeMBean.getName();
                String state = serverRuntimeMBean.getState();
                fileWriter.write("- Server name: " + name +
                                 ". Server state: " + state + "\n");
            }
            fileWriter.write("" + "\n");

            // Create an mbean instance to perform configuration operations in the created session.
            //
            // There is a separate instance of ALSBConfigurationMBean for each session.
            // There is also one more ALSBConfigurationMBean instance which works on the core data, i.e., the data which ALSB runtime uses.
            // An ALSBConfigurationMBean instance is created whenever a new session is created via the SessionManagementMBean.createSession(String) API.
            // This mbean instance is then used to perform configuration operations in that session.
            // The mbean instance is destroyed when the corresponding session is activated or discarded.
            ALSBConfigurationMBean alsbConfigurationMBean =
                (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                              ALSBConfigurationMBean.TYPE,
                                                                              null);

            Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);


            fileWriter.write("Found total of " + refs.size() +
                             " refs, including the following proxy and business services:\n");

            for (Ref ref : refs) {
                String typeId = ref.getTypeId();

                if (typeId.equalsIgnoreCase("ProxyService")) {

                    fileWriter.write("- ProxyService: " + ref.getFullName() +
                                     "\n");
                } else if (typeId.equalsIgnoreCase("BusinessService")) {
                    fileWriter.write("- BusinessService: " +
                                     ref.getFullName() + "\n");
                } else {
                    //fileWriter.write(ref.getFullName());
                }
            }

            fileWriter.write("" + "\n");

            String domain = "com.oracle.osb";
            String objectNamePattern =
                domain + ":" + "Type=ResourceConfigurationMBean,*";

            Set<ObjectName> osbResourceConfigurations =
                connection.queryNames(new ObjectName(objectNamePattern), null);

            fileWriter.write("ResourceConfiguration list of proxy and business services:\n");
            for (ObjectName osbResourceConfiguration :
                 osbResourceConfigurations) {

                CompositeDataSupport configuration =
                    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                  "Configuration");

                CompositeDataSupport metadata =
                    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                  "Metadata");

                String canonicalName =
                    osbResourceConfiguration.getCanonicalName();
                fileWriter.write("- Resource: " + canonicalName + "\n");
                if (canonicalName.contains("ProxyService")) {
                    String servicetype =
                        (String)configuration.get("service-type");
                    CompositeDataSupport transportconfiguration =
                        (CompositeDataSupport)configuration.get("transport-configuration");
                    String transporttype =
                        (String)transportconfiguration.get("transport-type");
                    String url = (String)transportconfiguration.get("url");
                    
                    fileWriter.write("  Configuration of " + canonicalName +
                                     ":" + " service-type=" + servicetype +
                                     ", transport-type=" + transporttype +
                                     ", url=" + url + "\n");
                } else if (canonicalName.contains("BusinessService")) {
                    String servicetype =
                        (String)configuration.get("service-type");
                    CompositeDataSupport transportconfiguration =
                        (CompositeDataSupport)configuration.get("transport-configuration");
                    String transporttype =
                        (String)transportconfiguration.get("transport-type");
                    CompositeData[] urlconfiguration =
                        (CompositeData[])transportconfiguration.get("url-configuration");
                    String url = (String)urlconfiguration[0].get("url");

                    fileWriter.write("  Configuration of " + canonicalName +
                                     ":" + " service-type=" + servicetype +
                                     ", transport-type=" + transporttype +
                                     ", url=" + url + "\n");
                }

                if (canonicalName.contains("ProxyService")) {

                    fileWriter.write("" + "\n");

                    CompositeDataSupport pipeline =
                        (CompositeDataSupport)configuration.get("pipeline");
                    TabularDataSupport nodes =
                        (TabularDataSupport)pipeline.get("nodes");

                    TabularType tabularType = nodes.getTabularType();
                    CompositeType rowType = tabularType.getRowType();

                    Iterator keyIter = nodes.keySet().iterator();

                    for (int j = 0; keyIter.hasNext(); ++j) {

                        Object[] key = ((Collection)keyIter.next()).toArray();

                        CompositeData compositeData = nodes.get(key);

                        String label = (String)compositeData.get("label");
                        String type = (String)compositeData.get("type");
                        if (type.equals("Action") &&
                            (label.contains("wsCallout") ||
                             label.contains("javaCallout") ||
                             label.contains("route"))) {

                            fileWriter.write("    Index#" + j + ":\n");
                            printCompositeData(nodes, key, 1);
                        } else if (type.equals("OperationalBranchNode") ||
                                   type.equals("RouteNode")) {

                            fileWriter.write("    Index#" + j + ":\n");
                            printCompositeData(nodes, key, 1);
                        }
                    }

                    fileWriter.write("" + "\n");
                    fileWriter.write("  Metadata of " + canonicalName + "\n");

                    String[] dependencies =
                        (String[])metadata.get("dependencies");
                    fileWriter.write("    dependencies:\n");
                    int size;
                    size = dependencies.length;
                    for (int i = 0; i < size; i++) {
                        String dependency = dependencies[i];
                        if (!dependency.contains("Xquery")) {
                            fileWriter.write("      - " + dependency + "\n");
                        }
                    }
                    fileWriter.write("" + "\n");

                    String[] dependents = (String[])metadata.get("dependents");
                    fileWriter.write("    dependents:\n");
                    size = dependents.length;
                    for (int i = 0; i < size; i++) {
                        String dependent = dependents[i];
                        fileWriter.write("      - " + dependent + "\n");
                    }
                    fileWriter.write("" + "\n");

                }

            }
            fileWriter.close();

            System.out.println("Succesfully completed");

        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (connector != null)
                try {
                    connector.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
        }
    }


    /**
     * Initialize connection to the Domain Runtime MBean Server.
     */

    public static void initConnection(String hostname, String portString,
                                      String username,
                                      String password) throws IOException,
                                                              MalformedURLException {

        String protocol = "t3";
        Integer portInteger = Integer.valueOf(portString);
        int port = portInteger.intValue();
        String jndiroot = "/jndi/";
        String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostname, port, jndiroot +
                              mbeanserver);

        Hashtable hashtable = new Hashtable();
        hashtable.put(Context.SECURITY_PRINCIPAL, username);
        hashtable.put(Context.SECURITY_CREDENTIALS, password);
        hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                      "weblogic.management.remote");
        hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));

        connector = JMXConnectorFactory.connect(serviceURL, hashtable);
        connection = connector.getMBeanServerConnection();
    }


    private static Ref constructRef(String refType, String serviceURI) {
        Ref ref = null;
        String[] uriData = serviceURI.split("/");
        ref = new Ref(refType, uriData);
        return ref;
    }


    /**
     * Finds the specified MBean object
     *
     * @param connection - A connection to the MBeanServer.
     * @return Object - The MBean or null if the MBean was not found.
     */
    public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
        try {
            ObjectName objectName =
                new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
            return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                            objectName);
        } catch (MalformedObjectNameException e) {
            e.printStackTrace();
            return null;
        }
    }


    public static void main(String[] args) {
        try {
            if (args.length <= 0) {
                System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD.");

            } else {
                HashMap<String, String> map = new HashMap<String, String>();

                map.put("HOSTNAME", args[0]);
                map.put("PORT", args[1]);
                map.put("USERNAME", args[2]);
                map.put("PASSWORD", args[3]);
                OSBServiceExplorer osbServiceExplorer =
                    new OSBServiceExplorer(map);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

The post Oracle Service Bus : Service Exploring via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.
