Feed aggregator

Accelerating Your ODI Implementation, Rittman Mead Style

Rittman Mead Consulting - Thu, 2017-03-09 10:01
Introduction

Over the years, at Rittman Mead, we've built up quite a collection of tooling for ODI. We have accelerators, scripts, reports, templates and even entire frameworks at our disposal for when the right use case arises. Many of these tools exploit the power of the excellent ODI SDK to automate tasks that would otherwise be a chore to perform manually: tasks like topology creation, model automation, code migration and variable creation.

In this blog post, I'm going to give you a demo of our latest addition, a tool that allows you to programmatically create ODI mappings (and a few other tricks).

So you may be thinking: isn't that already possible using the ODI SDK? And you'd be right, it most definitely is. There are many examples out there that show you how it's done, but they all have one thing in common: they create a fairly simple mapping with, relatively speaking, quite a lot of code, and that code is only useful for creating said mapping.

And herein lies the obvious question: why would you create a mapping using the ODI SDK when it's quicker to use ODI Studio?

And the obvious answer is... you wouldn't, unless you were trying to automate the creation of multiple mappings using metadata.

This is a reasonable approach using the raw ODI SDK, the typical use case being the automation of your source-to-stage mappings. These mappings tend to be simple one-to-one mappings, the low-hanging fruit of automation if you like. The problem arises, though, when you want to automate the creation of a variety of more complex mappings: you run the risk of spending more time writing the automation code than you would actually save through the automation itself. The point of diminishing returns can creep up pretty quickly.

The principle, however, is sound. Automate as much as possible by leveraging metadata and free up your ODI Developers to tackle the more complex stuff.

All Aboard the Rittman Mead Metadata Train!

What would be really nice is something more succinct, more elegant, something that allows us to create any mapping, with minimal code and fuss.

Something that will allow us to further accelerate...

  • Migrating to ODI from other ETL products
  • Greenfield ODI Projects
  • Day to Day ODI Development work

..all powered by juicy metadata.

These were the design goals for our latest tool. To meet these goals, we created a mini mapping language on top of the ODI SDK. This mini mapping language abstracts away the SDK's complexities while, at the same time, retaining its inherent power. We call this mini mapping language OdiDsl (Oracle Data Integrator Domain Specific Language), catchy eh?!

OdiDsl

OdiDsl is written in Groovy and looks something like this...

/*
 * OdiDsl to create a SCD2 dimension load mapping.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
            .join('DEPT_JOBS', ['JOBS'], [join_condition: "EMPLOYEES.JOB_ID = JOBS.JOB_ID" ])
            .connect("D_EMPLOYEE", [
                                        [ attr: "employee_id", key_indicator: true ],
                                        [ attr: "eff_from_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "eff_to_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "current_flag", expression: 1, execute_on_hint: "TARGET"],
                                        [ attr: "surr_key", expression: ":RM_PROJECT.D_EMPLOYEE_SEQ_NEXTVAL", execute_on_hint: "TARGET"],
                                   ])
        .commit()
        .validate()

The above code will create the following fully functional mapping in ODI 12c (sorry, 11g).

It should be fairly easy to eyeball the code and reconcile it with the above mapping image. We can see that we are specifying our datastores, selecting the EMPLOYEES datastore, adding a filter, a couple of joins and then connecting to our target. OdiDsl has been designed in such a way that it mimics the flow-based style of ODI 12c's mappings by chaining components onto one another.

Creating a Mapping Using OdiDsl

Let's walk through the above code, starting with just the datastores, adding the rest as we go along...

Datastores

We start by creating the mapping with mapping.create(<project>, <folder>, <mapping name>). We then chain the .datastores(), .commit() and .validate() methods onto it using the "dot" notation. The .datastores() method is the only method you can chain directly onto mapping.create(), as it's a requirement to add some datastores before you start building up the mapping. The .commit() method persists the mapping in the repository and the .validate() method calls ODI's validation routine on the mapping to check that all is OK.

/*
 * OdiDsl to create a mapping with 4 datastores.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .commit()
        .validate()

When we execute this code it returns the following to the console. You can see that the mapping has been dropped/created and that ODI has some validation warnings for us.

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component EMPLOYEES has no input or output connections.
  WARNING: Mapping component DEPARTMENTS has no input or output connections.
  WARNING: Mapping component JOBS has no input or output connections.
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.

And here is the mapping in ODI - well, it's a start at least...

Starting the Flow with a Filter

Before we can start building up the rest of the mapping we need to select a starting datastore to chain off; you've got to start somewhere, right? For that, we call .select("EMPLOYEES"), which is a bit like clicking and selecting the component in ODI Studio. The .filter() method is then chained onto it, passing in the filter name and some configuration, in this case the actual filter condition.

/*
 * OdiDsl to create a mapping with 4 datastores and a filter.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ]) 
        .commit()
        .validate()

We now have an error in the validation results. This is expected as our filter doesn't connect to anything downstream yet.

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component DEPARTMENTS has no input or output connections.
  WARNING: Mapping component JOBS has no input or output connections.
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.
  ERROR: Mapping component NAME_FILTER must have a connection for output connector point OUTPUT1.

And here's the mapping; as you can see, the filter is connected to the EMPLOYEES datastore output connector.

Adding a Join

Next we'll create the join between the filter and the DEPARTMENTS table. To do this we can just chain a .join() onto the .filter() method and pass in some arguments to specify the join name, what it joins to and the join condition itself.

/*
 * OdiDsl to create a mapping with 4 datastores, a filter and a join.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
        .commit()
        .validate()

Only 2 validation warnings and no errors this time...

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component JOBS has no input or output connections.
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.

We now have a join named EMP_DEPT joining DEPARTMENTS and the filter, NAME_FILTER, together.

Adding another Join

We'll now do the same for the final join.

/*
 * OdiDsl to create a mapping with 4 datastores, a filter and 2 joins.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
            .join('DEPT_JOBS', ['JOBS'], [join_condition: "EMPLOYEES.JOB_ID = JOBS.JOB_ID" ])      
        .commit()
        .validate()

Looking better all the time...

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.

And we now have a join named DEPT_JOBS joining JOBS and the join, EMP_DEPT, to each other.

Connecting to the target

The final step is to connect the DEPT_JOBS join to our target datastore, D_EMPLOYEE. For this we can use the .connect() method. This method is used to map upstream attributes to a datastore. When you perform this action in ODI Studio, you'll be prompted with the attribute matching dialog, with options to auto-map the attributes.

OdiDsl will, by default, auto-map all attributes that are not explicitly mapped in the .connect() method. In our completed code example below we are explicitly mapping several attributes to support SCD2 functionality; auto-map takes care of the rest.

/*
 * OdiDsl to create a SCD2 dimension load mapping
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
            .join('DEPT_JOBS', ['JOBS'], [join_condition: "EMPLOYEES.JOB_ID = JOBS.JOB_ID" ])
            .connect("D_EMPLOYEE", [
                                        [ attr: "employee_id", key_indicator: true ],
                                        [ attr: "eff_from_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "eff_to_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "current_flag", expression: 1, execute_on_hint: "TARGET"],
                                        [ attr: "surr_key", expression: ":RM_PROJECT.D_EMPLOYEE_SEQ_NEXTVAL", execute_on_hint: "TARGET"],
                                   ])
        .commit()
        .validate()

Nice, all validated this time.

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created  
Validation Successful  


What about Updates?

Yes. We can also update an existing mapping using mapping.update(<project>, <folder>, <mapping name>). This is useful when you need to make changes to multiple mappings or when you can't drop and recreate a mapping due to a dependency. The approach is the same: we start by selecting a component with .select() and then chain a method onto it, in this case .config().

mapping.update('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')
        .select('DEPT_JOBS')
            .config([join_type: "LEFT_OUTER"])


Which Properties Can I Change for Each Component?

Probably more than you'll ever need to. OdiDsl mirrors the properties that are available in ODI Studio via the SDK.
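As a quick illustrative sketch (not from the original post), assuming the same OdiDsl script context as the examples above, and assuming that the property keys used when creating a component, such as filter_condition, are also accepted by .config() in the same way join_type is:

mapping.update('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')
        .select('NAME_FILTER')
            // filter_condition is assumed here to be a valid .config() key,
            // mirroring the .filter() example earlier in this post.
            .config([filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'E%'"])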

Can We Generate OdiDsl Code From an Existing Mapping?

Yes, we can do that too, with .reverse(). This allows you to work the process in reverse: take an existing mapping and generate the OdiDsl code for it.

Let's take this hand-built, fictional and completely CRAZY_MAPPING as an example (fictional and crazy in the sense that it does nothing useful; however, the flow and configuration are completely valid).

If we execute .reverse() on this mapping by calling...

mapping.reverse('MY_PROJECT', 'DEMO_FOLDER', 'CRAZY_MAPPING')  

...OdiDsl will return the following output to the console. What you are seeing here is the OdiDsl required to recreate the crazy mapping above.

Connecting to the repository...

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'CRAZY_MAPPING')  
    .datastores([
         ['name':'STAGING.TABLE1', 'alias':'TABLE1'],
         ['name':'STAGING.TABLE9', 'alias':'TABLE9'],
         ['name':'STAGING.TABLE3', 'alias':'TABLE3'],
         ['name':'STAGING.TABLE4', 'alias':'TABLE4'],
         ['name':'STAGING.TABLE6', 'alias':'TABLE6'],
         ['name':'STAGING.TABLE5', 'alias':'TABLE5'],
         ['name':'STAGING.TABLE7', 'alias':'TABLE7'],
         ['name':'STAGING.TABLE2', 'alias':'TABLE2'],
         ['name':'STAGING.TABLE8', 'alias':'TABLE8'],
         ['name':'STAGING.TABLE11', 'alias':'TABLE11'],
         ['name':'STAGING.TABLE12', 'alias':'TABLE12'],
         ['name':'STAGING.TABLE13', 'alias':'TABLE13'],
         ['name':'STAGING.TABLE15', 'alias':'TABLE15'],
         ['name':'STAGING.TABLE14', 'alias':'TABLE14'],
         ['name':'STAGING.TABLE16', 'alias':'TABLE16'],
         ['name':'STAGING.TABLE17', 'alias':'TABLE17'],
         ['name':'STAGING.TABLE42', 'alias':'TABLE42'],
    ])
    .select('TABLE5')
        .join('JOIN2', ['TABLE7'], [join_condition: "TABLE5.ID = TABLE7.ID" ])
        .join('JOIN3', ['TABLE6'], [join_condition: "TABLE6.ID = TABLE7.ID" ])
        .connect('TABLE14', [
                [ attr: "ID", expression: "TABLE5.ID" ],
                [ attr: "COL1", expression: "TABLE7.COL1" ],
                [ attr: "COL2", expression: "TABLE6.COL2" ],
                [ attr: "COL3", expression: "TABLE7.COL3" ],
                [ attr: "COL4", expression: "TABLE7.COL4" ],
        ])
    .select('JOIN3')
        .expr('EXPRESSION1', [attrs: [
                [ attr: "ID", expression: "TABLE6.ID * 42", datatype: "NUMERIC", size: "38", scale: "0"]]])
        .connect('TABLE15', [
                [ attr: "ID", expression: "EXPRESSION1.ID" ],
                [ attr: "COL1", expression: "", active_indicator: false ],
                [ attr: "COL2", expression: "TABLE6.COL2" ],
                [ attr: "COL3", expression: "TABLE7.COL3" ],
                [ attr: "COL4", expression: "", active_indicator: false ],
        ])
        .join('JOIN', ['TABLE14'], [join_condition: "TABLE14.ID = TABLE15.ID" ])
        .filter('FILTER2', [filter_condition: "TABLE15.COL3 != 'FOOBAR'" ])
        .connect('TABLE16', [
                [ attr: "ID", expression: "TABLE15.ID" ],
                [ attr: "COL1", expression: "TABLE15.COL1" ],
                [ attr: "COL2", expression: "TABLE14.COL2" ],
                [ attr: "COL3", expression: "TABLE14.COL3" ],
                [ attr: "COL4", expression: "TABLE14.COL4" ],
        ])
    .select('JOIN')
        .connect('TABLE17', [
                [ attr: "ID", expression: "TABLE15.ID" ],
                [ attr: "COL1", expression: "TABLE15.COL1" ],
                [ attr: "COL2", expression: "TABLE14.COL2" ],
                [ attr: "COL3", expression: "TABLE14.COL3" ],
                [ attr: "COL4", expression: "TABLE14.COL4" ],
        ])
    .select('TABLE5')
        .sort('SORT1', [sorter_condition: "TABLE5.ID, TABLE5.COL2, TABLE5.COL4" ])
        .connect('TABLE13', [
                [ attr: "ID", expression: "TABLE5.ID" ],
                [ attr: "COL1", expression: "TABLE5.COL1" ],
                [ attr: "COL2", expression: "TABLE5.COL2" ],
                [ attr: "COL3", expression: "TABLE5.COL3" ],
                [ attr: "COL4", expression: "TABLE5.COL4" ],
        ])
    .select('TABLE3')
        .filter('FILTER1', [filter_condition: "TABLE3.ID != 42" ])
    .select('TABLE4')
        .filter('FILTER', [filter_condition: "TABLE4.COL1 = 42" ])
        .lookup('LOOKUP1', 'FILTER1', [join_condition: "TABLE4.ID = TABLE3.ID AND TABLE3.COL1 = TABLE4.COL1"])
        .join('JOIN5', ['TABLE13'], [join_condition: "TABLE13.ID = TABLE3.ID" ])
        .distinct('DISTINCT_', [attrs: [
                [ attr: "COL3_1", expression: "TABLE4.COL3", datatype: "VARCHAR", size: "30"],
                [ attr: "COL4_1", expression: "TABLE4.COL4", datatype: "VARCHAR", size: "30"]]])
    .select('DISTINCT_')
        .join('JOIN4', ['EXPRESSION1'], [join_condition: "TABLE5.ID = TABLE6.COL1" ])
        .sort('SORT', [sorter_condition: "EXPRESSION1.ID" ])
        .connect('TABLE8', [
                [ attr: "ID", expression: "EXPRESSION1.ID" ],
                [ attr: "COL1", expression: "", active_indicator: false ],
                [ attr: "COL2", expression: "", active_indicator: false ],
                [ attr: "COL3", expression: "TABLE7.COL3" ],
                [ attr: "COL4", expression: "", active_indicator: false ],
        ])
        .connect('TABLE12', [
                [ attr: "ID", expression: "TABLE8.ID" ],
                [ attr: "COL1", expression: "TABLE8.COL1" ],
                [ attr: "COL2", expression: "TABLE8.COL2" ],
                [ attr: "COL3", expression: "TABLE8.COL3" ],
                [ attr: "COL4", expression: "TABLE8.COL4" ],
        ])
    .select('TABLE9')
        .expr('EXPRESSION', [attrs: [
                [ attr: "ID", expression: "TABLE9.ID *42", datatype: "NUMERIC", size: "38", scale: "0"],
                [ attr: "COL4", expression: "TABLE9.COL4 || 'FOOBAR'", datatype: "VARCHAR", size: "30"]]])
        .connect('TABLE1', [
                [ attr: "ID", expression: "EXPRESSION.ID" ],
                [ attr: "COL1", expression: "", active_indicator: false ],
                [ attr: "COL2", expression: "", active_indicator: false ],
                [ attr: "COL3", expression: "", active_indicator: false ],
                [ attr: "COL4", expression: "TABLE9.COL4" ],
        ])
        .join('JOIN1', ['TABLE2'], [join_condition: "TABLE1.ID = TABLE2.ID" ])
        .aggregate('AGGREGATE', [attrs: [
                [ attr: "ID", expression: "TABLE1.ID", datatype: "NUMERIC", size: "38", scale: "0", group_by: "YES"],
                [ attr: "COL4_1", expression: "MAX(TABLE2.COL4)", datatype: "VARCHAR", size: "30", group_by: "AUTO"]]])
        .lookup('LOOKUP', 'DISTINCT_', [join_condition: "AGGREGATE.ID = DISTINCT_.COL3_1"])
        .aggregate('AGGREGATE1', [attrs: [
                [ attr: "ID", expression: "AGGREGATE.ID", datatype: "NUMERIC", size: "38", scale: "0", group_by: "YES"],
                [ attr: "COL4_1_1", expression: "SUM(AGGREGATE.COL4_1)", datatype: "VARCHAR", size: "30", group_by: "AUTO"]]])
        .filter('FILTER3', [filter_condition: "AGGREGATE1.COL4_1_1 > 42" ])
        .connect('TABLE42', [
                [ attr: "ID", expression: "AGGREGATE1.ID" ],
        ])
    .select('AGGREGATE1')
        .join('JOIN6', ['TABLE8'], [join_condition: "AGGREGATE1.ID = TABLE8.ID" ])
        .connect('TABLE11', [
                [ attr: "ID", expression: "TABLE8.ID" ],
                [ attr: "COL1", expression: "" ],
                [ attr: "COL2", expression: "" ],
                [ attr: "COL3", expression: "TABLE8.COL3" ],
                [ attr: "COL4", expression: "TABLE8.COL4" ],
        ])
    .commit()
    .validate()

When we execute this OdiDsl code we get, you guessed it, exactly the same crazy mapping with the flow and component properties all intact.

Being able to flip between ODI Studio and OdiDsl has some really nice benefits for those who like the workflow. You can start prototyping a mapping in ODI Studio, convert it to code, hack around for a bit and then reflect it all back into ODI. It's also very useful for generating a "code template" from an existing mapping. The generated code template can be modified to accept variables instead of hard-coded properties; all you need then is some metadata.

Did Somebody Say Metadata?

Metadata is the key to bulk automation. You can find metadata in all kinds of places. If you are migrating to ODI from another product then there will be a whole mass of metadata living in your current product's repository, or available via some kind of export routine which typically produces XML files. If you are starting a fresh ODI implementation, then there will be metadata living in your source and target systems, in data dictionaries, in Excel sheets, in mapping specification documents, all over the place really. This is the kind of metadata that can be used to feed OdiDsl.
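As a small illustrative sketch (not from the original post), metadata could be pulled straight from the Oracle data dictionary using groovy.sql.Sql, the same class used in the example further below; the connection details and the STAGING schema are placeholders:

import groovy.sql.Sql

// Hypothetical example: list candidate source tables from the Oracle data dictionary.
def sql = Sql.newInstance("jdbc:oracle:thin:@hostname:1521/pdborcl", "username", "password",
                          "oracle.jdbc.OracleDriver")
def tables = sql.rows("SELECT table_name FROM all_tables WHERE owner = 'STAGING' ORDER BY table_name")
tables.each { row ->
    // Each table name could be fed into an OdiDsl .datastores() list or .select() call.
    println "Candidate datastore: STAGING.${row.table_name}"
}
sql.close()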

A Quick Example of One Possible Approach to Using OdiDsl with Metadata

First we build a mapping in ODI Studio; this will act as our template mapping.

We then generate the equivalent OdiDsl code using mapping.reverse('MY_PROJECT', 'DEMO_FOLDER', 'FEED_ME_METADATA'), which gives us this code.

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'FEED_ME_METADATA')  
    .datastores([
         ['name':'STAGING.TABLE1', 'alias':'LOOKUP_TABLE'],
         ['name':'STAGING.TABLE2', 'alias':'TABLE2'],
         ['name':'STAGING.TABLE3', 'alias':'TABLE3'],
         ['name':'STAGING.TABLE4', 'alias':'TARGET_TABLE'],
    ])
    .select('TABLE2')
        .lookup('LOOKUP', 'LOOKUP_TABLE', [join_condition: "TABLE2.ID = LOOKUP_TABLE.ID"])
        .join('JOIN', ['TABLE3'], [join_condition: "TABLE2.ID = TABLE3.ID" ])
        .filter('FILTER', [filter_condition: "TABLE3.COL1 = 'FILTER'" ])
        .expr('EXPRESSION', [attrs: [
                [ attr: "CONSTANT", expression: "42", datatype: "VARCHAR", size: "30"]]])
        .connect('TARGET_TABLE', [
                [ attr: "ID", expression: "LOOKUP_TABLE.ID" ],
                [ attr: "COL1", expression: "LOOKUP_TABLE.COL1 || EXPRESSION.CONSTANT" ],
                [ attr: "COL2", expression: "TABLE2.COL2" ],
                [ attr: "COL3", expression: "TABLE3.COL3" ],
                [ attr: "COL4", expression: "LOOKUP_TABLE.COL4" ],
        ])
    .commit()
    .validate()

We now need to decorate this code with some variables; these variables will act as placeholders for our metadata. The metadata we are going to use is from a database table. I'm keeping it simple for the purpose of this demonstration, but the approach is the same for other metadata sources. Our metadata table has 10 rows, and from these 10 rows we are going to create 10 mappings, replacing certain properties with the values from the columns.

Remember that OdiDsl is expressed in Groovy. That means, as well as OdiDsl code, we also have access to the entire Groovy language. In the following code we are using a mixture of Groovy and OdiDsl. We are connecting to a database, grabbing our metadata and then looping over mapping.create(), once for each row in our metadata table. The columns in the metadata table are represented as the variables row.datastore, row.constant_expr and row.filter_cond. The code comments indicate where we are substituting these variables in place of our previously hard coded property values.

import groovy.sql.Sql

// Connect to the database and retrieve rows from the METADATA table.
def sqlConn = Sql.newInstance("jdbc:oracle:thin:@hostname:1521/pdborcl", "username", "password", "oracle.jdbc.pool.OracleDataSource")  
def rows = sqlConn.rows("SELECT * FROM METADATA")  
sqlConn.close()

// For each row in our METADATA table
rows.eachWithIndex() {  row, index ->

    mapping.create('MY_PROJECT', 'DEMO_FOLDER', "FEED_ME_METADATA_${index+1}") // Interpolate row number to make the mapping name unique
            .datastores([
                    ['name': 'STAGING.TABLE1', 'alias': 'LOOKUP_TABLE'],
                    ['name': "STAGING.${row.datastore}" ], // substitute in a different datastore
                    ['name': 'STAGING.TABLE3', 'alias': 'TABLE3'],
                    ['name': 'STAGING.TABLE4', 'alias': 'TARGET_TABLE'],
            ])
            .select(row.datastore)
                .lookup('LOOKUP', 'LOOKUP_TABLE', [join_condition: "${row.datastore}.ID = LOOKUP_TABLE.ID"]) // substitute in a different datastore
                .join('JOIN', ['TABLE3'], [join_condition: "${row.datastore}.ID = TABLE3.ID"]) // substitute in a different datastore
                .filter('FILTER', [filter_condition: "TABLE3.COL1 = '${row.filter_cond}'"]) // substitute in a different filter condition
                .expr('EXPRESSION', [attrs: [
                    [attr: "CONSTANT", expression: row.constant_expr, datatype: "VARCHAR", size: "30"]]]) // substitute in a different constant for the expression
                .connect('TARGET_TABLE', [
                    [attr: "ID", expression: "LOOKUP_TABLE.ID"],
                    [attr: "COL1", expression: "LOOKUP_TABLE.COL1 || EXPRESSION.CONSTANT"],
                    [attr: "COL2", expression: "${row.datastore}.COL2"], // substitute in a different datastore
                    [attr: "COL3", expression: "TABLE3.COL3"],
                    [attr: "COL4", expression: "LOOKUP_TABLE.COL4"],
                ])
            .commit()
            .validate()

}

Here is the output...

Connecting to the repository...

mapping FEED_ME_METADATA_1 created  
Validation Successful  
mapping FEED_ME_METADATA_2 created  
Validation Successful  
mapping FEED_ME_METADATA_3 created  
Validation Successful  
mapping FEED_ME_METADATA_4 created  
Validation Successful  
mapping FEED_ME_METADATA_5 created  
Validation Successful  
mapping FEED_ME_METADATA_6 created  
Validation Successful  
mapping FEED_ME_METADATA_7 created  
Validation Successful  
mapping FEED_ME_METADATA_8 created  
Validation Successful  
mapping FEED_ME_METADATA_9 created  
Validation Successful  
mapping FEED_ME_METADATA_10 created  
Validation Successful  

And here are our 10 mappings, each with its own configuration.

If we take a look at the FEED_ME_METADATA_5 mapping, we can see the metadata has been reflected into the mapping.

And that's about it. We've basically just built a mini accelerator using OdiDsl and we hardly had to write any code. The OdiDsl code was generated for us using .reverse(). All we really had to code was the connection to the database, a loop and a bit of variable substitution!

Summary

With the Rittman Mead ODI Toolkit, accelerating your ODI implementation has never been easier. If you are thinking about migrating to ODI from another product or embarking on a new ODI project, Rittman Mead can help. For more information please get in touch.

Categories: BI & Warehousing

Why Explain Plan miss table in function

Tom Kyte - Thu, 2017-03-09 08:06
How to collect explain plan on table used in function call by procedure? Suppose I written simple sql query below. select t.tid,t.tname,(select emp_id from emp e where e.emp_id=t.emp_id) from transaction t; And following Explain plan is for...
Categories: DBA Blogs

Oracle Data pump(DBMS_DATAPUMP)

Tom Kyte - Thu, 2017-03-09 08:06
Dear Mr.Kyte greetings I wrote this pl/sql block to export schema using dbms_datapump package <b>this is step one</b>.(which will create the datapump job). <code>declare h1 number; begin h1 := dbms_datapump.open (oper...
Categories: DBA Blogs

Composite Unique Key on multiple columns

Tom Kyte - Thu, 2017-03-09 08:06
Hi Chris/Connor, I have a table having 25 columns out of which col_1, col_2, col_3, col_4 are included in composite unique key. Daily there are 60k insert/update on this table. My query here is - will it make any performance issues if we cre...
Categories: DBA Blogs

Permissions to create triggers over several schemas

Tom Kyte - Thu, 2017-03-09 08:06
I have a 'normal' schema S, and a schema that stores history H (it contains all tables of S, added with some audit columns, but stripped of any keys). Whenever a DML happens on S, it has to be stored within H (If S contains a table T, and we insert ...
Categories: DBA Blogs

Deterministic function

Tom Kyte - Thu, 2017-03-09 08:06
Hi Tom, Recently I created a function with DETERMINISTIC definition to get performance on my query, but I check that the function is called every time even that it receive the same input. Here the script that I used to check this function: ...
Categories: DBA Blogs

Transaction commit when exiting SQL*Plus

Tom Kyte - Thu, 2017-03-09 08:06
In the case of an exit without commit from SQL*Plus, does the running transaction commit or roll back?
Categories: DBA Blogs

List out external databases using DB link

Tom Kyte - Thu, 2017-03-09 08:06
Hello Gurus, Have basic knowledge in Oracle DB Admin or Profiling. I am unable to profiling for below requirement. "Want to identify the list of other database which are using my Database by created as DB Link." <b>Suppose my company have 10 ...
Categories: DBA Blogs

Exadata questions

Tom Kyte - Thu, 2017-03-09 08:06
Hello Tom, We have acquired an Exadata Server and would like to clarify some issues that I believe are product myths. 1 - After removing indexes, can queries get faster? 2- I have always used indexes on the columns identified as FKs. Is th...
Categories: DBA Blogs

Oracle Service Bus : Service Exploring via WebLogic Server MBeans with JMX

Amis Blog - Thu, 2017-03-09 03:34

At a public sector organization in the Netherlands there was the need to make an inventory of the deployed OSB services, in order to find out the dependencies on certain external web services (which were on a list to become deprecated).

For this, in particular the endpoints of business services were of interest.

Besides that, the dependencies between services and also the Message Flow per proxy service were of interest, in particular Operational Branch, Route, Java Callout and Service Callout actions.

Therefore an OSBServiceExplorer tool was developed to explore the services (proxy and business) within the OSB via WebLogic Server MBeans with JMX. For now, this tool was merely used to quickly return the information needed, but in the future it can be the basis for a more comprehensive one.

This article will explain how the OSBServiceExplorer tool uses WebLogic Server MBeans with JMX.

If you are interested in general information about using MBeans with JMX, I kindly point you to another article (written by me) on the AMIS TECHNOLOGY BLOG: “Oracle Service Bus : disable / enable a proxy service via WebLogic Server MBeans with JMX”, via url: https://technology.amis.nl/2017/02/28/oracle-service-bus-disable-enable-a-proxy-service-via-weblogic-server-mbeans-with-jmx/

Remark: Some names in the examples in this article are in Dutch, but don’t let this scare you off.

MBeans

For ease of use, an MS-DOS batch file was created, using MBeans, to explore services (proxy and business). The WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

On a server, the MS-DOS batch file “OSBServiceExplorer.bat” is called.

The content of the MS-DOS batch file “OSBServiceExplorer.bat” is:
java.exe -classpath "OSBServiceExplorer.jar;com.bea.common.configfwk_1.7.0.0.jar;sb-kernel-api.jar;sb-kernel-impl.jar;wlfullclient.jar" nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer "xyz" "7001" "weblogic" "xyz"

In the MS-DOS batch file, a class named OSBServiceExplorer is called via java.exe. The main method of this class expects the following parameters:

Parameter name  Description
HOSTNAME        Host name of the AdminServer
PORT            Port of the AdminServer
USERNAME        Username
PASSWORD        Password

In the sample code shown at the end of this article, the use of the following MBeans can be seen:

DomainRuntimeServiceMBean: Provides a common access point for navigating to all runtime and configuration MBeans in the domain as well as to MBeans that provide domain-wide services (such as controlling and monitoring the life cycles of servers and message-driven EJBs and coordinating the migration of migratable services). [https://docs.oracle.com/middleware/1213/wls/WLAPI/weblogic/management/mbeanservers/domainruntime/DomainRuntimeServiceMBean.html]

Remark: the wlfullclient.jar library needed for such stand-alone clients is not provided by default in a WebLogic install and must be built. The simple way of how to do this is described in “Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool”, which can be reached via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.

ServerRuntimeMBean: Provides methods for retrieving runtime information about a server instance and for transitioning a server from one state to another. [https://docs.oracle.com/cd/E11035_01/wls100/javadocs_mhome/weblogic/management/runtime/ServerRuntimeMBean.html]

ALSBConfigurationMBean: Provides various APIs to query, export and import resources, obtain validation errors, get and set environment values, and in general manage resources in an ALSB domain. [https://docs.oracle.com/cd/E13171_01/alsb/docs26/javadoc/com/bea/wli/sb/management/configuration/ALSBConfigurationMBean.html]

Once the connection to the DomainRuntimeServiceMBean is made, other MBeans can be found via the findService method.

Service findService(String name,
                    String type,
                    String location)

This method returns the Service on the specified Server or in the primary MBeanServer if the location is not specified.

In the sample code shown at the end of this article, certain java fields are used. For reading purposes the field values are shown in the following table:

Field                                            Field value
DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME  weblogic.management.mbeanservers.domainruntime
DomainRuntimeServiceMBean.OBJECT_NAME            com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
ALSBConfigurationMBean.NAME                      ALSBConfiguration
ALSBConfigurationMBean.TYPE                      com.bea.wli.sb.management.configuration.ALSBConfigurationMBean
Ref.DOMAIN                                       <Reference to the domain>
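
As an illustration of how these values fit together, below is a minimal Groovy sketch, not taken from the tool itself (the full Java code is shown at the end of this article). It uses the standard WebLogic t3 JMX connection pattern; the host, port and credentials are the placeholder values from the batch file above:

import javax.management.ObjectName
import javax.management.remote.JMXConnectorFactory
import javax.management.remote.JMXServiceURL
import javax.naming.Context

import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean
import weblogic.management.jmx.MBeanServerInvocationHandler
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean

// Connect to the Domain Runtime MBean Server of the AdminServer (t3 protocol assumed).
def serviceURL = new JMXServiceURL("t3", "xyz", 7001,
        "/jndi/" + DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME)
def env = [(Context.SECURITY_PRINCIPAL)                    : "weblogic",
           (Context.SECURITY_CREDENTIALS)                  : "xyz",
           (JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES): "weblogic.management.remote"]
def connector  = JMXConnectorFactory.connect(serviceURL, env)
def connection = connector.getMBeanServerConnection()

// Get a proxy for the DomainRuntimeServiceMBean, the common entry point.
def domainRuntimeServiceMBean = MBeanServerInvocationHandler.newProxyInstance(connection,
        new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME)) as DomainRuntimeServiceMBean

// Locate the ALSBConfigurationMBean (a null location means the primary MBeanServer).
def alsbConfigurationMBean = domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
        ALSBConfigurationMBean.TYPE, null) as ALSBConfigurationMBean

From here, alsbConfigurationMBean.getRefs(Ref.DOMAIN) and connection.queryNames(...) can be used as shown in the fragments below.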

Because of the use of com.bea.wli.config.Ref.class, the following library <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar was needed.

A Ref uniquely represents a resource, project or folder that is managed by the Configuration Framework.

A special Ref DOMAIN refers to the whole domain.
[https://docs.oracle.com/cd/E17904_01/apirefs.1111/e15033/com/bea/wli/config/Ref.html]

Because of the use of weblogic.management.jmx.MBeanServerInvocationHandler.class, the following library <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar was needed.

When running the code the following error was thrown:

java.lang.RuntimeException: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean
	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:621)
	at weblogic.management.jmx.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:418)
	at $Proxy0.findService(Unknown Source)
	at nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer.<init>(OSBServiceExplorer.java:174)
	at nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer.main(OSBServiceExplorer.java:445)
Caused by: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:619)
	... 4 more
Process exited.

So, because of the use of com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class, the following library <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar was also needed.

Runtime information (name and state) of the server instances

The OSBServiceExplorer tool writes its output to a text file called “OSBServiceExplorer.txt”.

First the runtime information (name and state) of the server instances (Administration Server and Managed Servers) of the WebLogic domain is written to file.

Example content fragment of the text file:

Found server runtimes:
- Server name: AdminServer. Server state: RUNNING
- Server name: ManagedServer1. Server state: RUNNING
- Server name: ManagedServer2. Server state: RUNNING

See the code fragment below:

fileWriter.write("Found server runtimes:\n");
int length = (int)serverRuntimes.length;
for (int i = 0; i < length; i++) {
    ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];

    String name = serverRuntimeMBean.getName();
    String state = serverRuntimeMBean.getState();
    fileWriter.write("- Server name: " + name + ". Server state: " +
                     state + "\n");
}
fileWriter.write("" + "\n");
List of Ref objects (projects, folders, or resources)

Next, a list of Ref objects is written to file, including the total number of objects in the list.

Example content fragment of the text file:

Found total of 1132 refs, including the following proxy and business services: 
…
- ProxyService: JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS
…
- ProxyService: ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
…
- BusinessService: ZKN ZaakService-2.0/business/eBUS/eBUS_FolderService_BS

See the code fragment below:

Set refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);


fileWriter.write("Found total of " + refs.size() + " refs, including the following proxy and business services:\n");

for (Ref ref : refs) {
    String typeId = ref.getTypeId();

    if (typeId.equalsIgnoreCase("ProxyService")) {

        fileWriter.write("- ProxyService: " + ref.getFullName() +
                         "\n");
    } else if (typeId.equalsIgnoreCase("BusinessService")) {
        fileWriter.write("- BusinessService: " + ref.getFullName() +
                         "\n");
    } else {
        //fileWriter.write(ref.getFullName());
    }
}

fileWriter.write("" + "\n");

As mentioned before, a Ref object uniquely represents a resource, project or folder. A Ref object has two components:

  • typeId that indicates whether it is a project, folder, or a resource
  • array of names of non-zero length.

For a resource the array of names start with the project name, followed by folder names, and end with the resource name.
For a project, the Ref object simply contains one name component, that is, the project name.
A Ref object for a folder contains the project name followed by the names of the folders which it is nested under.

[https://docs.oracle.com/cd/E17904_01/apirefs.1111/e15033/com/bea/wli/config/Ref.html]

Below is an example of a Ref object that represents a folder (via JDeveloper Debug):

Below is an example of a Ref object that represents a resource (via JDeveloper Debug):

ResourceConfigurationMBean

In order to be able to determine the actual endpoints of the proxy services and business services, the ResourceConfigurationMBean is used. When connected, the Service Bus MBeans are located under com.oracle.osb. [https://technology.amis.nl/2014/10/20/oracle-service-bus-obtaining-list-exposed-soap-http-endpoints/]

When we look at the Java code, as a next step the names of a set of MBeans, specified by pattern matching, are put in a list and looped through.

Once the connection to the DomainRuntimeServiceMBean is made, other MBeans can be found via the queryNames method.

Set queryNames(ObjectName name,
               QueryExp query)
               throws IOException

Gets the names of MBeans controlled by the MBean server. This method enables any of the following to be obtained: The names of all MBeans, the names of a set of MBeans specified by pattern matching on the ObjectName and/or a Query expression, a specific MBean name (equivalent to testing whether an MBean is registered). When the object name is null or no domain and key properties are specified, all objects are selected (and filtered if a query is specified). It returns the set of ObjectNames for the MBeans selected.
[https://docs.oracle.com/javase/7/docs/api/javax/management/MBeanServerConnection.html]

See the code fragment below:

String domain = "com.oracle.osb";
String objectNamePattern =
    domain + ":" + "Type=ResourceConfigurationMBean,*";

Set osbResourceConfigurations =
    connection.queryNames(new ObjectName(objectNamePattern), null);

fileWriter.write("ResourceConfiguration list of proxy and business services:\n");
for (ObjectName osbResourceConfiguration :
     osbResourceConfigurations) {
…
    String canonicalName =
        osbResourceConfiguration.getCanonicalName();
    fileWriter.write("- Resource: " + canonicalName + "\n");
…
}

The pattern used is: com.oracle.osb:Type=ResourceConfigurationMBean,*

Example content fragment of the text file:

ResourceConfiguration list of proxy and business services:
…
- Resource: com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
…

Below is an example of an ObjectName object (via JDeveloper Debug), found via the queryNames method:

Via the Oracle Enterprise Manager Fusion Middleware Control for a certain domain, the System MBean Browser can be opened. Here the previously mentioned ResourceConfigurationMBeans can be found.


[Via MBean Browser]

The information on the right is as follows (if we navigate to a particular ResourceConfigurationMBean, for example …$UpdateZaak_Lk01_PS):


[Via MBean Browser]

Here we can see that the attributes Configuration and Metadata are available:

  • Configuration

[Via MBean Browser]

The Configuration is made available in java by the following code fragment:

CompositeDataSupport configuration = (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,"Configuration");
  • Metadata

[Via MBean Browser]

The Metadata is made available in java by the following code fragment:

CompositeDataSupport metadata = (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,"Metadata");
Diving into attribute Configuration of the ResourceConfigurationMBean

For each found proxy and business service the configuration information (canonicalName, service-type, transport-type, url) is written to file.

See the code fragment below:

String canonicalName =
    osbResourceConfiguration.getCanonicalName();
…
String servicetype =
    (String)configuration.get("service-type");
CompositeDataSupport transportconfiguration =
    (CompositeDataSupport)configuration.get("transport-configuration");
String transporttype =
    (String)transportconfiguration.get("transport-type");
…
fileWriter.write("  Configuration of " + canonicalName +
                 ":" + " service-type=" + servicetype +
                 ", transport-type=" + transporttype +
                 ", url=" + url + "\n");

Proxy service configuration:

Below is an example of a proxy service configuration (content fragment of the text file):

  Configuration of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean: service-type=Abstract SOAP, transport-type=local, url=local

The proxy services which define the exposed endpoints can be recognized by the ProxyService$ prefix.


[Via MBean Browser]

For getting the endpoint, see the code fragment below:

String url = (String)transportconfiguration.get("url");

Business service configuration:

Below is an example of a business service configuration (content fragment of the text file):

  Configuration of com.oracle.osb:Location=AdminServer,Name=BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=http://xyz/eBus/FolderService.svc

The business services which define the exposed endpoints can be recognized by the BusinessService$ prefix.


[Via MBean Browser]

For getting the endpoint, see the code fragment below:

CompositeData[] urlconfiguration =
    (CompositeData[])transportconfiguration.get("url-configuration");
String url = (String)urlconfiguration[0].get("url");

So, via the url key found in the business service configuration, the endpoint of a business service can be found (for example: http://xyz/eBus/FolderService.svc). In that way the dependencies (proxy and/or business services) on certain external web services (having a certain endpoint) could be found.

Proxy service pipeline, element hierarchy

For a proxy service the elements (nodes) of the pipeline are investigated.

See the code fragment below:

CompositeDataSupport pipeline =
    (CompositeDataSupport)configuration.get("pipeline");
TabularDataSupport nodes =
    (TabularDataSupport)pipeline.get("nodes");


[Via MBean Browser]

Below is an example of a nodes object (via JDeveloper Debug):

If we take a look at the dataMap object, we can see nodes of different types.

Below is an example of a node of type Stage (via JDeveloper Debug):

Below is an example of a node of type Action and label ifThenElse (via JDeveloper Debug):

Below is an example of a node of type Action and label wsCallout (via JDeveloper Debug):

For the examples above the Message Flow part of the UpdateZaak_Lk01_PS proxy service looks like:

The mapping between the node-id and the corresponding element in the Message Flow can be achieved by looking in the .proxy file (in this case: UpdateZaak_Lk01_PS.proxy) for the _ActionId- identification, mentioned as the value for the name key.

<con:stage name="EditFolderZaakStage">
        <con:context>
          …
        </con:context>
        <con:actions>
          <con3:ifThenElse>
            <con2:id>_ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7c84</con2:id>
            <con3:case>
              <con3:condition>
                …
              </con3:condition>
              <con3:actions>
                <con3:wsCallout>
                  <con2:id>_ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f</con2:id>
                  …

The first node in the dataMap object (via JDeveloper Debug) looks like:

The dataMap object is of type HashMap. A HashMap maintains key and value pairs and is often denoted as HashMap<Key, Value> or HashMap<K, V>. HashMap implements the Map interface.

As can be seen, the key is of type Object and the value of type CompositeData.

In order to know what kind of information is delivered via the CompositeData object, the rowType object can be used.

See the code fragment below:

TabularType tabularType = nodes.getTabularType();
CompositeType rowType = tabularType.getRowType();

Below is an example of a rowType object (via JDeveloper Debug):

From this it is now clear that the CompositeData object for a ProxyServicePipelineElementType contains:

Index  Key       Value
0      children  Children of this node
1      label     Label
2      name      Name of the node
3      node-id   Id of this node, unique within the graph
4      type      Pipeline element type

In the code fragment below, an iterator is used to loop through the dataMap object.

Iterator keyIter = nodes.keySet().iterator();

for (int j = 0; keyIter.hasNext(); ++j) {

    Object[] key = ((Collection)keyIter.next()).toArray();

    CompositeData compositeData = nodes.get(key);

    …
}

The key object for the first node in the dataMap object (via JDeveloper Debug) looks like:

The value of this key object is 25, which also is shown as the value for the node-id of the compositeData object, which for the first node in the dataMap object (via JDeveloper Debug) looks like:

It’s obvious that the nodes in the pipeline form a hierarchy. A node can have children, which in turn can also have children, etc. Think for example of a “Stage” having an “If Then” action which in turn contains several “Assign” actions. A proxy service Message Flow can of course contain all kinds of elements (see the Design Palette).

Below is (for another proxy service) an example content fragment of the text file, that reflects the hierarchy:

     Index#76:
       level    = 1
       label    = branch-node
       name     = CheckOperationOperationalBranch
       node-id  = 62
       type     = OperationalBranchNode
       children = [42,46,50,61]
         level    = 2
         node-id  = 42
         children = [41]
           level    = 3
           label    = route-node
           name     = creeerZaak_Lk01RouteNode
           node-id  = 41
           type     = RouteNode
           children = [40]
             level    = 4
             node-id  = 40
             children = [39]
               level    = 5
               label    = route
               name     = _ActionId-4977625172784205635-3567e5a2.15364c39a7e.-7b99
               node-id  = 39
               type     = Action
               children = []
         level    = 2
         node-id  = 46
         children = [45]
           level    = 3
           label    = route-node
           name     = updateZaak_Lk01RouteNode
           node-id  = 45
           type     = RouteNode
           children = [44]
             level    = 4
             node-id  = 44
             children = [43]
               level    = 5
               label    = route
               name     = _ActionId-4977625172784205635-3567e5a2.15364c39a7e.-7b77
               node-id  = 43
               type     = Action
               children = []
         …

Because of the interest in only certain kinds of nodes (Route, Java Callout, Service Callout, etc.), some kind of filtering is needed. For this the label and type keys are used.

See the code fragment below:

String label = (String)compositeData.get("label");
String type = (String)compositeData.get("type");

if (type.equals("Action") &&
    (label.contains("wsCallout") ||
     label.contains("javaCallout") ||
     label.contains("route"))) {

    fileWriter.write("    Index#" + j + ":\n");
    printCompositeData(nodes, key, 1);
} else if (type.equals("OperationalBranchNode") ||
           type.equals("RouteNode"))
{
    fileWriter.write("    Index#" + j + ":\n");
    printCompositeData(nodes, key, 1);
}

Example content fragment of the text file:

    Index#72:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f
       node-id  = 71
       type     = Action
       children = [66,70]
    Index#98:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7997
       node-id  = 54
       type     = Action
       children = [48,53]
    Index#106:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7cf4
       node-id  = 35
       type     = Action
       children = [30,34]

When we take a closer look at the node of type Action and label wsCallout with index 106, this can also be found in the MBean Browser:


[Via MBean Browser]

The children node-id’s are 30 (a node of type Sequence and name requestTransform, also having children) and 34 (a node of type Sequence and name responseTransform, also having children).

Diving into attribute Metadata of the ResourceConfigurationMBean

For each found proxy service the metadata information (dependencies and dependents) is written to file.

See the code fragment below:

fileWriter.write("  Metadata of " + canonicalName + "\n");

String[] dependencies =
    (String[])metadata.get("dependencies");
fileWriter.write("    dependencies:\n");
int size;
size = dependencies.length;
for (int i = 0; i < size; i++) {
    String dependency = dependencies[i];
    if (!dependency.contains("Xquery")) {
        fileWriter.write("      - " + dependency + "\n");
    }
}
fileWriter.write("" + "\n");

String[] dependents = (String[])metadata.get("dependents");
fileWriter.write("    dependents:\n");
size = dependents.length;
for (int i = 0; i < size; i++) {
    String dependent = dependents[i];
    fileWriter.write("      - " + dependent + "\n");
}
fileWriter.write("" + "\n");

Example content fragment of the text file:

  Metadata of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS
      - XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_SearchService_BS
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_LookupService_BS

    dependents:
      - ProxyService$JMSConsumerStuFZKNMessageService-1.0$proxy$JMSConsumerStuFZKNMessageService_PS
      - ProxyService$ZKN ZaakService-2.0$proxy$ZaakService_PS

As can be seen in the MBean Browser, the metadata for a particular proxy service shows the dependencies on other resources (like business services and XML Schemas) and other services that are dependent on the proxy service.


[Via MBean Browser]

By looking at the results in the text file "OSBServiceExplorer.txt", the dependencies between services (proxy and business) and also the dependencies with certain external web services (with a particular endpoint) could be extracted.

Example content of the text file:

Found server runtimes:
- Server name: AdminServer. Server state: RUNNING
- Server name: ManagedServer1. Server state: RUNNING
- Server name: ManagedServer2. Server state: RUNNING

Found total of 1132 refs, including the following proxy and business services: 
…
- ProxyService: JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS
…
- ProxyService: ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
…
- BusinessService: ZKN ZaakService-2.0/business/eBUS/eBUS_FolderService_BS
…

ResourceConfiguration list of proxy and business services:
…
- Resource: com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean: service-type=Abstract SOAP, transport-type=local, url=local

    Index#72:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f
       node-id  = 71
       type     = Action
       children = [66,70]
    Index#98:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7997
       node-id  = 54
       type     = Action
       children = [48,53]
    Index#106:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7cf4
       node-id  = 35
       type     = Action
       children = [30,34]

  Metadata of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS
      - XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_SearchService_BS
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_LookupService_BS

    dependents:
      - ProxyService$JMSConsumerStuFZKNMessageService-1.0$proxy$JMSConsumerStuFZKNMessageService_PS
      - ProxyService$ZKN ZaakService-2.0$proxy$ZaakService_PS
…

The java code:

package nl.xyz.osbservice.osbserviceexplorer;


import com.bea.wli.config.Ref;
import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean;

import java.io.FileWriter;
import java.io.IOException;

import java.net.MalformedURLException;

import java.util.Collection;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Properties;
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;


public class OSBServiceExplorer {
    private static MBeanServerConnection connection;
    private static JMXConnector connector;
    private static FileWriter fileWriter;

    /**
     * Indent a string
     * @param indent - The number of indentations to add before a string 
     * @return String - The indented string
     */
    private static String getIndentString(int indent) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < indent; i++) {
            sb.append("  ");
        }
        return sb.toString();
    }


    /**
     * Print composite data (write to file)
     * @param nodes - The list of nodes
     * @param key - The list of keys
     * @param level - The level in the hierarchy of nodes
     */
    private void printCompositeData(TabularDataSupport nodes, Object[] key,
                                    int level) {
        try {
            CompositeData compositeData = nodes.get(key);

            fileWriter.write(getIndentString(level) + "     level    = " +
                             level + "\n");

            String label = (String)compositeData.get("label");
            String name = (String)compositeData.get("name");
            String nodeid = (String)compositeData.get("node-id");
            String type = (String)compositeData.get("type");
            String[] childeren = (String[])compositeData.get("children");
            if (level == 1 ||
                (label.contains("route-node") || label.contains("route"))) {
                fileWriter.write(getIndentString(level) + "     label    = " +
                                 label + "\n");

                fileWriter.write(getIndentString(level) + "     name     = " +
                                 name + "\n");

                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     type     = " +
                                 type + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(childeren[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            } else if (level >= 2) {
                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(childeren[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            }
            if ((level == 1 && type.equals("OperationalBranchNode")) || level > 1) {
                level++;

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    key[0] = childeren[i];
                    printCompositeData(nodes, key, level);
                }
            }

        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public OSBServiceExplorer(HashMap props) {
        super();


        try {

            Properties properties = new Properties();
            properties.putAll(props);

            initConnection(properties.getProperty("HOSTNAME"),
                           properties.getProperty("PORT"),
                           properties.getProperty("USERNAME"),
                           properties.getProperty("PASSWORD"));


            DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);

            ServerRuntimeMBean[] serverRuntimes =
                domainRuntimeServiceMBean.getServerRuntimes();

            fileWriter = new FileWriter("OSBServiceExplorer.txt", false);


            fileWriter.write("Found server runtimes:\n");
            int length = (int)serverRuntimes.length;
            for (int i = 0; i < length; i++) {
                ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];

                String name = serverRuntimeMBean.getName();
                String state = serverRuntimeMBean.getState();
                fileWriter.write("- Server name: " + name +
                                 ". Server state: " + state + "\n");
            }
            fileWriter.write("" + "\n");

            // Create an mbean instance to perform configuration operations in the created session.
            //
            // There is a separate instance of ALSBConfigurationMBean for each session.
            // There is also one more ALSBConfigurationMBean instance which works on the core data, i.e., the data which ALSB runtime uses.
            // An ALSBConfigurationMBean instance is created whenever a new session is created via the SessionManagementMBean.createSession(String) API.
            // This mbean instance is then used to perform configuration operations in that session.
            // The mbean instance is destroyed when the corresponding session is activated or discarded.
            ALSBConfigurationMBean alsbConfigurationMBean =
                (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                              ALSBConfigurationMBean.TYPE,
                                                                              null);

            Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);


            fileWriter.write("Found total of " + refs.size() +
                             " refs, including the following proxy and business services:\n");

            for (Ref ref : refs) {
                String typeId = ref.getTypeId();

                if (typeId.equalsIgnoreCase("ProxyService")) {

                    fileWriter.write("- ProxyService: " + ref.getFullName() +
                                     "\n");
                } else if (typeId.equalsIgnoreCase("BusinessService")) {
                    fileWriter.write("- BusinessService: " +
                                     ref.getFullName() + "\n");
                } else {
                    //fileWriter.write(ref.getFullName());
                }
            }

            fileWriter.write("" + "\n");

            String domain = "com.oracle.osb";
            String objectNamePattern =
                domain + ":" + "Type=ResourceConfigurationMBean,*";

            Set<ObjectName> osbResourceConfigurations =
                connection.queryNames(new ObjectName(objectNamePattern), null);

            fileWriter.write("ResourceConfiguration list of proxy and business services:\n");
            for (ObjectName osbResourceConfiguration :
                 osbResourceConfigurations) {

                CompositeDataSupport configuration =
                    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                  "Configuration");

                CompositeDataSupport metadata =
                    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                  "Metadata");

                String canonicalName =
                    osbResourceConfiguration.getCanonicalName();
                fileWriter.write("- Resource: " + canonicalName + "\n");
                if (canonicalName.contains("ProxyService")) {
                    String servicetype =
                        (String)configuration.get("service-type");
                    CompositeDataSupport transportconfiguration =
                        (CompositeDataSupport)configuration.get("transport-configuration");
                    String transporttype =
                        (String)transportconfiguration.get("transport-type");
                    String url = (String)transportconfiguration.get("url");
                    
                    fileWriter.write("  Configuration of " + canonicalName +
                                     ":" + " service-type=" + servicetype +
                                     ", transport-type=" + transporttype +
                                     ", url=" + url + "\n");
                } else if (canonicalName.contains("BusinessService")) {
                    String servicetype =
                        (String)configuration.get("service-type");
                    CompositeDataSupport transportconfiguration =
                        (CompositeDataSupport)configuration.get("transport-configuration");
                    String transporttype =
                        (String)transportconfiguration.get("transport-type");
                    CompositeData[] urlconfiguration =
                        (CompositeData[])transportconfiguration.get("url-configuration");
                    String url = (String)urlconfiguration[0].get("url");

                    fileWriter.write("  Configuration of " + canonicalName +
                                     ":" + " service-type=" + servicetype +
                                     ", transport-type=" + transporttype +
                                     ", url=" + url + "\n");
                }

                if (canonicalName.contains("ProxyService")) {

                    fileWriter.write("" + "\n");

                    CompositeDataSupport pipeline =
                        (CompositeDataSupport)configuration.get("pipeline");
                    TabularDataSupport nodes =
                        (TabularDataSupport)pipeline.get("nodes");

                    TabularType tabularType = nodes.getTabularType();
                    CompositeType rowType = tabularType.getRowType();

                    Iterator keyIter = nodes.keySet().iterator();

                    for (int j = 0; keyIter.hasNext(); ++j) {

                        Object[] key = ((Collection)keyIter.next()).toArray();

                        CompositeData compositeData = nodes.get(key);

                        String label = (String)compositeData.get("label");
                        String type = (String)compositeData.get("type");
                        if (type.equals("Action") &&
                            (label.contains("wsCallout") ||
                             label.contains("javaCallout") ||
                             label.contains("route"))) {

                            fileWriter.write("    Index#" + j + ":\n");
                            printCompositeData(nodes, key, 1);
                        } else if (type.equals("OperationalBranchNode") ||
                                   type.equals("RouteNode")) {

                            fileWriter.write("    Index#" + j + ":\n");
                            printCompositeData(nodes, key, 1);
                        }
                    }

                    fileWriter.write("" + "\n");
                    fileWriter.write("  Metadata of " + canonicalName + "\n");

                    String[] dependencies =
                        (String[])metadata.get("dependencies");
                    fileWriter.write("    dependencies:\n");
                    int size;
                    size = dependencies.length;
                    for (int i = 0; i < size; i++) {
                        String dependency = dependencies[i];
                        if (!dependency.contains("Xquery")) {
                            fileWriter.write("      - " + dependency + "\n");
                        }
                    }
                    fileWriter.write("" + "\n");

                    String[] dependents = (String[])metadata.get("dependents");
                    fileWriter.write("    dependents:\n");
                    size = dependents.length;
                    for (int i = 0; i < size; i++) {
                        String dependent = dependents[i];
                        fileWriter.write("      - " + dependent + "\n");
                    }
                    fileWriter.write("" + "\n");

                }

            }
            fileWriter.close();

            System.out.println("Succesfully completed");

        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (connector != null)
                try {
                    connector.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
        }
    }


    /*
       * Initialize connection to the Domain Runtime MBean Server.
       */

    public static void initConnection(String hostname, String portString,
                                      String username,
                                      String password) throws IOException,
                                                              MalformedURLException {

        String protocol = "t3";
        Integer portInteger = Integer.valueOf(portString);
        int port = portInteger.intValue();
        String jndiroot = "/jndi/";
        String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostname, port, jndiroot +
                              mbeanserver);

        Hashtable hashtable = new Hashtable();
        hashtable.put(Context.SECURITY_PRINCIPAL, username);
        hashtable.put(Context.SECURITY_CREDENTIALS, password);
        hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                      "weblogic.management.remote");
        hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));

        connector = JMXConnectorFactory.connect(serviceURL, hashtable);
        connection = connector.getMBeanServerConnection();
    }


    private static Ref constructRef(String refType, String serviceURI) {
        Ref ref = null;
        String[] uriData = serviceURI.split("/");
        ref = new Ref(refType, uriData);
        return ref;
    }


    /**
     * Finds the specified MBean object
     *
     * @param connection - A connection to the MBeanServer.
     * @return Object - The MBean or null if the MBean was not found.
     */
    public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
        try {
            ObjectName objectName =
                new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
            return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                            objectName);
        } catch (MalformedObjectNameException e) {
            e.printStackTrace();
            return null;
        }
    }


    public static void main(String[] args) {
        try {
            if (args.length < 4) {
                System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD.");

            } else {
                HashMap<String, String> map = new HashMap<String, String>();

                map.put("HOSTNAME", args[0]);
                map.put("PORT", args[1]);
                map.put("USERNAME", args[2]);
                map.put("PASSWORD", args[3]);
                OSBServiceExplorer osbServiceExplorer =
                    new OSBServiceExplorer(map);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

The post Oracle Service Bus : Service Exploring via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.

New OA Framework 12.2.5 Update 10 Now Available

Steven Chan - Thu, 2017-03-09 02:05

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:


Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.5 bundle patches.

This latest bundle patch includes fixes for following bugs/issues:

  • 22590683 - FORWARD-PORT 21465867 TO 12.2.5
  • 24391264 - 12.2.X : ICONS NOT ALIGNED WITH FUNC NAME IN OLD HOMEPAGE IN CHROME/FIREFOX
  • 24398948 - CUSTOMER NAME MISALIGNED
  • 25056504 - NEED  IMPORTPERZDOCCO.JAVA VERSION .3 BACKPORTED TO BUNDLE 8


    Categories: APPS Blogs

    Array Size for pl/sql delete and db link

    Tom Kyte - Wed, 2017-03-08 13:46
    Dear Team, If i execute a delete statement within a procedure what will be array size it will work with? create or replace procedure del_test as begin delete from big_table; commit; end; If i execute this procedure from a job is ...
    Categories: DBA Blogs

    Question on recreating a primary key on a partitioned table

    Tom Kyte - Wed, 2017-03-08 13:46
    What happens to the unique index that gets created when we recreate a primary key which is composite on a partitioned table, please note that we are using the "enable novalidate" clause to recreate so that existing rows are not checked for the validi...
    Categories: DBA Blogs

    Oracle BPM 12c: Hide Implementation Details with the Refine Feature

    Jan Kettenis - Wed, 2017-03-08 12:55
    Ever had a case with the Oracle BPM Suite where you wanted to create a BPMN model while hiding the details from the reader? Then the "refine" feature may be what you are looking for. Read on if you want to know more about this feature that has been added since 12c. I actually blogged about it before, but this time I want to also illustrate the impact it has on the flow trace.

    The "refine" feature is a way to detail an activity.  Basically it is a specialization of the (already in 11g present) embedded subprocess. The difference being that - unlike a normal embedded subprocess - the refined activity keeps the icon of the main activity.

    To show this difference take the next example where I hide the details of a Script activity being executed before a User activity is scheduled. When I collapse that embedded subprocess it gets a blue color, hiding this technical detail but also that the main activity (still) is the User activity.



    This can somewhat be mitigated by changing the icon of the activity, but the options are pretty limited. Furthermore, this deviates from the standard BPMN notation, which some readers might find somewhat disruptive.


    Now let's have a look at the refine feature. The use case here is a bit different, in that I want to hide from the reader that a User activity is in reality handled by some other application through an asynchronous interface: I send the payload (of what otherwise would be a normal Human Task) via a Send activity, after which I receive the updated payload and outcome via a Receive activity. In case you wonder why on earth I would want to do this: the example is inspired by a real customer case where the BPM process orchestrates system and human interactions, of which the latter are actually backed by activities in Siebel.

    You refine an activity by choosing "Refine" from the right-mouse-click context menu of the activity itself.


    The initial result is some sort of an embedded subprocess to which a User activity has automatically been added, however without a Start and End event.


    I can now detail this activity by adding a Send and Receive activity to it. Because I don't want to implement the User activity, I put it in draft mode. Before you criticize how ugly this is, consider this: you may still want to express that the Send and Receive are actually a placeholder for something that is not implemented as a Human Task, but still concerns some implementation of what logically is a User activity.


    I can compile and deploy this BPM application without any issue, but ... As it turns out it does not work.


    Because of what I consider a bug, the refined activity actually does need a Start and End event, just like a regular Embedded Subprocess. The compiler just forgets to tell you.




    Not surprisingly, as you can see, the flow trace is no different from that of a regular Embedded Subprocess. And what you can do with it is also the same, as you can tell from the next iteration, in which I have implemented a fallback scenario to schedule a User activity whenever the handling by the other application is not done within some time limit.


    And despite all these details, I can still present the activity to the reader as a simple User activity, only difference being the + symbol :-)

    'Upping' the Convenience Factor - Edit Files Directly from the Web

    WebCenter Team - Wed, 2017-03-08 09:43

    By: Marc-Andre Houle, Principal Manager, Product Management, Oracle Cloud Services


    As a product manager, I’m regularly collaborating with people both internal and external to my company. And as part of that collaboration, I’m regularly being asked to review, edit, and update files that were shared with me. What’s great about the Oracle Content and Experience Cloud is that it lets me collaborate easily on anything shared with me, and I can do that from any place, any device, and at any time.

    But what if I’m not syncing a file or folder to my local machine? Or what if someone sent me a link to a file in a folder I don’t otherwise have access to, but wants me to make some edits? Oracle Content and Experience Cloud makes that easy too. Let me give a quick example of what that might look like.

    Imagine Amy is working on a report and wants me to add some information. What will normally happen is Amy will add a few comments or annotations to the file in the related conversation, then flag me on the relevant comments. That will send me an email notification or a pop-up alert prompting me to review the file and make edits.

    Like everyone else, the first thing I do after reading the email or seeing the notification is click on the link. That, of course, launches the browser and brings me to the file in the cloud. From there, I can clear Amy’s flag, read her comments, reply to each, and add my own annotations.

    At this point, I need to make edits to the file. If I had that file synced locally, I could certainly navigate to the file in Windows Explorer or in the Mac Finder. But it’s far easier for me to click the “Edit” button and let Oracle Content and Experience Cloud do the heavy lifting for me.


    The first time I use the “Edit” feature, I am warned that the Desktop Sync Client needs to be installed. However, if it’s already installed, it is not necessary for it to be running at the time; the client will launch automatically.

    The file I want to edit is then downloaded to my local machine and a download progress indicator appears. Behind the scenes, a temporary folder is created locally on my machine and the file is downloaded to that folder.


    The Desktop Sync Client will then automatically launch the default application for that file type to edit the file. For example, a .docx file will open in Microsoft Word, a .pptx file will open in Microsoft PowerPoint, etc.

    Once I’m finished making my edits, I can save or close the file, and the file automatically gets synced back to the cloud. Versioning is also handled automatically, so the edits will appear as a new version of the file.


    When I go back to the browser, I will see the new version with all my edits. It’s then easy for me to post a comment in the related conversation and then flag Amy so she can get a notification that my changes were made.

    Don’t have Oracle Content and Experience Cloud Service yet? Then, I highly recommend getting started with a free trial version available at https://cloud.oracle.com/content to see how you can now drive content and social collaboration anytime, anywhere and on any device.

    What is Oracle Content and Experience Cloud, you ask? Well, you will be hearing a lot more about it over the next few weeks so stay tuned but in the meantime, if you are no stranger to Oracle Documents Cloud Service then you are already in the know...

    Docker swarm at Oracle Cloud

    Marcelo Ochoa - Wed, 2017-03-08 09:23
    As I mentioned in a previous post, the Oracle Container Service seems to fall short of my cloud expectations.
    But the Cloud Compute service is enough to build my own Docker Swarm deployment.
    Why Docker Swarm?

    • Is standard and supported by other cloud providers
    • Is operated through command line using standard docker and docker-machine operations

    When I tested Oracle Cloud Services I found three major problems:

    • You can't modify base OS parameters
    • Access to persistent storage didn't work for me
    • The Docker version supported is a bit outdated

    All these things can be avoided if you deploy your own Swarm cluster using Oracle Compute Cloud Services. Here are detailed instructions on how to do that. Fair warning: it is a long post :)
    Configuring the network
    Before creating a compute node (Swarm node) it is necessary to create a network definition for access over ssh and the docker command line. Here are the step-by-step instructions using the Compute console:

    First, create a Security List by clicking:

    1. Network menu up
    2. Security list left menu
    3. Create Security list right up side
    4. Choose a name (DockerNode in this example)
    5. Choose Inbound Policy as Drop Packet/Outbound Policy Allow
    6. Set a meaningful text description 

    Next, define a Security Application for Docker remote operations:

    1. Click on left menu Security Applications
    2. Create Security Application
    3. Name it as docker
    4. Choose port type as TCP
    5. Define a port starting range of 2376 (docker node daemon)
    6. Define a port end range 2377 (docker swarm cluster management communications)
    7. Set a meaningful text description

    Next, create a Security Rule allowing the SSH and Docker ports:
    1. Click on Security Rules left menu
    2. Click on Create Security Rule right up button
    3. Name it (DockerAdminPort/SSHAdminPort in our case)
    4. Choose a Security Application (ssh built in defined, docker defined previously)
    5. Source will be Public Internet
    6. Destination will be our previously defined Security List named DockerNode
    7. Set a meaningful text description

    Next, define a private network for Swarm interconnect operations:

    1. Click on the left side menu IP Network -> IP Networks
    2. Click on the Create IP Network button at the top right
    3. Name it, in this case LocalInterconnect
    4. Define a private IP range; we chose 10.0.0.0/24
    5. IP Exchange Not Set
    6. Set a meaningful text description

    Finally, create a Public IP Reservation; this will simplify remote operations with the docker-machine tool from outside the Oracle Cloud. Repeat this step five times:

    1. On Network top menu
    2. Click left side menu IP Reservations
    3. Click on right up button Create IP Reservations
    4. Name it, in our test case SwarmNode1 to SwarmNode5
    5. For now, leave For Instance set to None

    Launching five Swarm nodes
    There are no Docker Swarm instances ready at the Oracle Cloud Marketplace, but we can easily build one using the Ubuntu 16.10 base image. Here are the step-by-step instructions; repeat them five times.

    1. At the Compute console choose Instance in the upper main menu
    2. At the beginning there are no instances running; click the Create Instance button on the right


    1. The wizard starts by showing pre-built Oracle images. Choose Private Images if you already used the Ubuntu 16.10 image, or Marketplace to find Ubuntu.16.10
    2. Click on the Selected button
    3. Next go to the Shape definition


    1. Because we want a large cluster of cheap hardware (horizontal scaling) Shape is
    2. General Purpose oc3 (1 OCPU/7.5 Gb memory)
    3. Go to the Instance options


    1. Clicking on Instance tab
    2. Define a name for your instance
    3. Define some meaningful tags
    4. Choose a private SSH key (ubnt) or upload a new one using Add SSH Public Key button
    5. Go to next step Network definition


    1. On network preferences
    2. Select the Persistent Public IP Reservation option
    3. Choose one of the five public IP reservations defined previously
    4. Choose our LocalInterconnect network previously defined (10.0.0.0/24)
    5. Select our DockerNode Security List (ssh and docker enabled ports from outside)
    6. Next go to Storage tab


    1. On the Storage tab, the Ubuntu instance has by default a pre-defined 10 GB boot disk
    2. Click on the Add New Volume button
    3. Name it Repo1 (for the other instances choose Repo2, Repo3 and so on)
    4. Select 30 GB of storage; this storage is for the local Docker image repository, and some images can be large, for example the 12cR2 image is 14 GB
    5. Set a meaningful text description

    Finally review and click Create. After a few minutes the instance will be ready to use, and you can access it using SSH:
    $ ssh -i /home/mochoa/Documents/Scotas/ubnt ubuntu@oc1
    ubnt is my private ssh key associated with the public one uploaded to the cloud instance, ubuntu is the default user on this image with sudo rights, and oc1 is an alias in my /etc/hosts pointing to the public IP reservation defined for this instance.
    The Ubuntu 16.10 instance does not have Docker installed by default, but with this simple script you can perform the post-installation steps; here is a sample output:
    ubuntu@oc1:~$ wget https://raw.githubusercontent.com/marcelo-ochoa/docker/master/es/oracle-cloud-node-conf.sh
    --2017-03-08 14:53:11--  https://raw.githubusercontent.com/marcelo-ochoa/docker/master/es/oracle-cloud-node-conf.sh
    Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.48.133
    Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.48.133|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 757 [text/plain]
    Saving to: 'oracle-cloud-node-conf.sh'
    oracle-cloud-node-conf.sh          100%[===============================================================>]     757  --.-KB/s    in 0s    
    2017-03-08 14:53:12 (201 MB/s) - 'oracle-cloud-node-conf.sh' saved [757/757]
    ubuntu@oc1:~$ chmod +x oracle-cloud-node-conf.sh
    ubuntu@oc1:~$ ./oracle-cloud-node-conf.sh 
    The script does the following:
    • change the OS parameter vm.max_map_count to 262144, required by ElasticSearch
    • configure the eth1 network interface with DHCP; Oracle Cloud will provide an IP on the LocalInterconnect subnet
    • mount our 30 GB extra disk /dev/xvdc on the /var/lib/docker directory
    • update the Ubuntu OS with the latest patches
    • install the latest Docker version, 1.13.1
    You can verify the result with the commands below; remember to reboot to apply the kernel updates:
    ubuntu@oc1:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            3.7G     0  3.7G   0% /dev
    tmpfs           749M   17M  732M   3% /run
    /dev/xvdb1      9.6G  2.6G  7.0G  27% /
    tmpfs           3.7G  128K  3.7G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
    /dev/xvdc        30G  408M   28G   2% /var/lib/docker
    /dev/xvdb15     105M  3.6M  101M   4% /boot/efi
    tmpfs           749M     0  749M   0% /run/user/1000
    ubuntu@oc1:~$ docker version
    Client:
     Version:      1.13.1
     API version:  1.26
     Go version:   go1.7.5
     Git commit:   092cba3
     Built:        Wed Feb  8 06:57:21 2017
     OS/Arch:      linux/amd64
    Server:
     Version:      1.13.1
     API version:  1.26 (minimum version 1.12)
     Go version:   go1.7.5
     Git commit:   092cba3
     Built:        Wed Feb  8 06:57:21 2017
     OS/Arch:      linux/amd64
     Experimental: true
    Finally, we can create docker-machine definitions on a remote machine (my notebook) to administer these five nodes. To add a remotely managed machine it is necessary to execute locally:
    $ docker-machine create \
      --driver generic \
      --generic-ip-address=129.144.12.62 \
      --generic-ssh-key /home/mochoa/Documents/Scotas/ubnt \
      --generic-ssh-user ubuntu \
      oc1
    After adding our five nodes, the docker-machine inventory looks like this:
    $ docker-machine ls
    NAME        ACTIVE   DRIVER    STATE     URL                         SWARM   DOCKER    ERRORS
    oc1         -        generic   Running   tcp://129.144.12.62:2376            v1.13.1  
    oc2         -        generic   Running   tcp://129.144.12.36:2376            v1.13.1  
    oc3         -        generic   Running   tcp://129.144.12.145:2376           v1.13.1  
    oc4         -        generic   Running   tcp://129.144.12.14:2376            v1.13.1  
    oc5         -        generic   Running   tcp://129.144.12.235:2376           v1.13.1   
    Note that remote operations are securely managed using ssh encryption and TLS certificates.
    The last step is to prepare the Swarm cluster.
    Define OC5 as the Swarm master:
    $ docker-machine ssh oc5
    ubuntu@oc5:~$ docker swarm init --advertise-addr 10.0.0.6
    Swarm initialized: current node (j54j1bricmhl3buten3qletxy) is now a manager.
    To add a worker to this swarm, run the following command:
        docker swarm join \
        --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-9j50mbq7nkzrwcuvr6zn50svy \
        10.0.0.6:2377

    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
    Define OC4 as the second Swarm master:
    ubuntu@oc5:~$ docker swarm join-token manager
    To add a manager to this swarm, run the following command:
        docker swarm join \
        --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-0hk823fjc3mdqz5cvl88etxm8 \
        10.0.0.6:2377

    ubuntu@oc5:~$ exit
    eval $(docker-machine env oc4)
    [oc4]]$ docker swarm join \
    >     --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-0hk823fjc3mdqz5cvl88etxm8 \
    >     10.0.0.6:2377

    This node joined a swarm as a manager.
    Note that after logging out from the OC5 instance, we show how to operate remotely (without an ssh login) on our Swarm nodes by only changing the Docker environment variables with the docker-machine env command.
    Finally, add the last three nodes as workers:
    [oc4]]$ eval $(docker-machine env oc3)
    [oc3]]$ docker swarm join \
    >     --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-9j50mbq7nkzrwcuvr6zn50svy \
    >     10.0.0.6:2377
    This node joined a swarm as a worker.
    [oc3]]$ eval $(docker-machine env oc2)
    [oc2]]$ docker swarm join \
    >     --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-9j50mbq7nkzrwcuvr6zn50svy \
    >     10.0.0.6:2377
    This node joined a swarm as a worker.
    [oc2]]$ eval $(docker-machine env oc1)
    [oc1]]$ docker swarm join \
    >     --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-9j50mbq7nkzrwcuvr6zn50svy \
    >     10.0.0.6:2377
    This node joined a swarm as a worker.

    Our Swarm cluster is ready to use with this set of nodes:
    $ eval $(docker-machine env oc5)
    [oc5]]$ docker node ls
    ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
    6na93y52xu20gqg4gc0epi51l    oc1       Ready   Active      
    cmspvwokukxaq3v6gwxxuz4w2    oc3       Ready   Active      
    j54j1bricmhl3buten3qletxy *  oc5       Ready   Active        Leader
    l69skrce4zb69bjhofhkp9ggq    oc2       Ready   Active      
    vu9l0xip479qlwz83hzisbd2f    oc4       Ready   Active        Reachable
    Next post will be about deploying an Elastic Search cluster using Docker Swarm at Oracle Cloud. Stay tuned.

    12cR1 RAC Posts -- 8a : Setting up SingleInstance DG Standby for RAC

    Hemant K Chitale - Wed, 2017-03-08 09:08
    This is the first of a small series of subposts on setting up SingleInstance DataGuard Standby for my 12cR1 RAC environment.

    Primary Database : 12cR1 MultiTenant RAC : 2 nodes, database on ASM
    Standby Database : 12cR1 MultiTenant SingleInstance : 1 node, database on FileSystem

    The "difficulties" are :
    a.  The Primary Database is MultiTenant with 1 PDB besides the SEED
    b.  The Primary Database is on ASM but the Standby is to be built on FileSystem
    c.  The Primary is a 2-node Cluster while the Standby is a SingleInstance installation

    This post is on the initial setup.

    RAC Nodes
    collabn1.racattack   192.168.78.51
    collabn2.racattack   192.168.78.52

    SingleInstance Node 
    oem132.racattack     192.168.78.100


    Database Changes :
    SQL> alter database force logging;

    Database altered.

    SQL> alter database flashback on;

    Database altered.

    SQL>
    SQL> alter database add standby logfile size 50M;

    Database altered.

    SQL> alter database add standby logfile size 50M;

    Database altered.

    SQL> alter database add standby logfile size 50M;

    Database altered.

    SQL> alter database add standby logfile size 50M;

    Database altered.

    SQL> alter database add standby logfile size 50M;

    Database altered.

    SQL>
    SQL>select  group# , status, type, member from v$logfile order by 1;


    Listener on Standby Server :
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = oem132.racattack)(PORT = 1521))
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
        )
      )

    SID_LIST_LISTENER=
    (SID_LIST =
      (SID_DESC =
        (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
        (SID_NAME = STBY)
       )
      (SID_DESC =
        (GLOBAL_NAME = STBY_DGMGRL)
        (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
        (SID_NAME = STBY)
       )
    )


    tnsnames.ora on first RAC Node :
    STBY =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = oem132)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = STBY)
        )
      )

    STBY_DGB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = oem132)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = STBY_DGMGRL)
        )
      )


    STBY Instance Parameter File :
    compatible=12.1.0.2.0
    db_name=RAC
    db_unique_name=STBY
    enable_pluggable_database=TRUE
    sga_target=800M
    db_file_name_convert='+DATA/RAC','/u01/app/oracle/oradata/STBY','+FRA/RAC','/u01/app/oracle/fast_recovery_area/STBY'
    log_file_name_convert='+DATA/RAC','/u01/app/oracle/oradata/STBY','+FRA/RAC','/u01/app/oracle/fast_recovery_area/STBY'
    db_create_file_dest='/u01/app/oracle/oradata'
    db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
    db_recovery_file_dest_size=16G
    remote_login_passwordfile=EXCLUSIVE


    Copying the Password File :
    ASMCMD> pwcopy +DATA/RAC/PASSWORD/pwdrac.277.931824933 /var/tmp/RACPasswordFile
    copying +DATA/RAC/PASSWORD/pwdrac.277.931824933 -> /var/tmp/RACPasswordFile
    ASMCMD> exit
    scp /var/tmp/RACPasswordFile oracle@oem132:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwSTBY


    to be continued in post 8b on the filesystem directories for the STBY instance and the execution of the DUPLICATE DATABASE command.

    Categories: DBA Blogs

    NATCAP OUG Reboot

    Scott Spendolini - Wed, 2017-03-08 07:03
    NATCAP OUG - or the National Capitol Region Oracle Users Group - has a long history of providing Oracle-related content to those in the DC metro area.  I remember presenting there back in my Oracle days (early 2000's) on multiple occasions.

    Over the last few months, a few of us have been trying to resurrect this group, as there are likely more Oracle developers & DBAs in the DC area than any other area in the US, perhaps even the world!  This region has long been underserved, and we hope to change that.

    We have had a couple of meetings at Oracle's campus in Reston, and attendance has been, well, not where we need it to be.  So, if you're in the DC area, please consider joining this group.  It's free, and our goal is to have quarterly meetings where both experienced presenters and NATCAP members alike provide the content.

    If interested, please sign up on the NATCAP website here: http://natcapoug.org

    Let's grow this group and give the DC metro area the user group it so badly needs!

    Getting ADF Data in a Jet Component (2)

    Amis Blog - Wed, 2017-03-08 06:47

    In my previous blog I explained how to get ADF data in a Jet Component; this was done by iterating through a ViewObject and rendering a component per element in the View Object. When you want to use the DVTs from Oracle Jet, this won't do the trick, because you will need the entire data set to be present at once in your component. This blog will show you how to do that without using Rest Services.

    My colleague Lucas Jellema made a JSONProviderBean, which makes data from data bindings available as a nested JSON object in client-side JavaScript.1

    Using this bean we can use the iterator binding of our View Object in our component page fragment.

    
    
    <div>
     <amis-chart chart-data="#{jsonProviderBean[bindings.EmployeesVO.DCIteratorBinding]}"/>
     </div>
    
    
    

    This will pass the JSON as a string to our component.

        {
            "properties": {
                "chartData": {
                    "type": "string"
                }
            }
             
        }
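    The exact string produced by the JSONProviderBean depends on the binding, but judging from the transform function further down (which reads dataAsJson.values and the FirstName and Salary attributes), the chartData payload for the EmployeesVO iterator will look roughly like this (the sample values below are made up for illustration):

        {
            "values": [
                { "FirstName": "Steven", "Salary": 24000 },
                { "FirstName": "Neena",  "Salary": 17000 },
                { "FirstName": "Lex",    "Salary": 17000 }
            ]
        }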
    

    In our component viewmodel we can now parse this string into a JSON object. The "values" object of this JSON object contains the data we need for our barchart, but it is not in a form the barchart can understand. Therefore you need to write a function to get the data you need and put it into a format that the barchart does understand.

    // The surrounding define() wrapper is assumed here; only its closing "});" was part of
    // the original fragment. The module names are the standard JET/RequireJS ones.
    define(['ojs/ojcore', 'knockout', 'jquery'], function (oj, ko, $) {
        function AmisChartComponentModel(context) {
            var self = this;
        
            context.props.then(function (propertymap) {
                self.properties = propertymap;
                var dataAsJson = JSON.parse(propertymap.chartData);
                var barData = self.createBarSeries(dataAsJson.values);
                /* set chart data */
                self.barSeriesValue = ko.observableArray(barData);
            })
            
            //function to transform the data.
            self.createBarSeries = function (jsonDataArray) {
                var data = [];
                jsonDataArray.forEach(function (item, index, arr) {
                    data.push( {
                        "name" : item.FirstName, "items" : [item.Salary]
                    });
                })
                return data;
            }    
            
        }
        return AmisChartComponentModel;
    
    });
    

    We now have our entire employee data set available for the barchart. In this case I made a chart of Salary per employee. We can do all the fancy interactions with the component that we normally can as well, for example stacking the data or switching from a horizontal to a vertical graph.
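    The composite's view (HTML) is not shown in the post. As a minimal sketch, assuming the standard JET 2.2 knockout binding for ojChart (and that 'ojs/ojchart' is added to the viewmodel's define dependencies), it could look something like this:

        <!-- sketch only: binds the barSeriesValue observable from the viewmodel above -->
        <div data-bind="ojComponent: {
                component: 'ojChart',
                type: 'bar',
                series: barSeriesValue
            }"
            style="width: 100%; height: 350px;">
        </div>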

       

    Sources
    1. https://github.com/lucasjellema/adf-binding-to-json
    2. https://technology.amis.nl/2017/03/07/getting-adf-data-jet-component/
    3. http://andrejusb.blogspot.nl/2015/12/improved-jet-rendering-in-adf.html
    4. https://blogs.oracle.com/groundside/entry/jet_composite_components_i_backgrounder (and the other blogs)
    5. Source of this demo: Github
    Versions used

    JDeveloper 12.1.3,
    OracleJet V2.2.0

    Disclaimer

    The information is based on my personal research. At the moment, Oracle does not support or encourage integrating ADF and Jet. Oracle is working on JET Composite Components in ADF.

    The post Getting ADF Data in a Jet Component (2) appeared first on AMIS Oracle and Java Blog.

    Create Your Own DVD Plugin in 22 minutes

    Rittman Mead Consulting - Wed, 2017-03-08 06:45
    Introduction

    Oracle DVD played well for the task of elections data analysis. But I faced some limitations and had to curtail my use of one of my favourite charts - the Scatter chart. While DVD’s implementation of it looks good, has smooth animations and is really useful, it is not suitable for visualisations of huge datasets. The election dataset has about 100K election precinct data rows, and if I try to plot all of them on one chart I get something close to this. It doesn't really matter what the data is, just compare how the charts look.

    Picture with Turnout%, United Russia% by PEC

    This picture can give us about zero insights. Actually, I'm not even sure it shows all of my 100K points. What I want to get is something like this:
    plugin with Turnout%, United Russia% by PEC (source: https://habrahabr.ru/post/313372/)

    What can I see in this chart that I can't see in the vanilla one? First of all, it gives a much better understanding of the points' distribution over the plot. I can see a relatively dense core around coordinates [30;40], then a less dense tail growing along both dimensions, and again a small and very dense core in the top right corner.
    Secondly, it not only shows dense and sparse areas, but also shows that there exist a few points with an unexplainably high concentration. Around [64;62], for example.

    Luckily DVD allows me (and anyone else too) to create custom plugins. That's what I'm going to show here. The plugin I'm going to build is something I'd call a Minimum Viable Product. It's very, very simple and needs a lot of work to make it good and really useful.

    The code I'm going to show is the simplest I can invent. It doesn't handle errors, exceptions and so on. It should be more modular. It possibly could use some optimisation. But that was done for a purpose. And the purpose is called 'simplicity'. If you want a more comprehensive sample, there is a link to Oracle's guide in the Summary. I want to show you that writing a DVD plugin is not rocket science. It takes less than half an hour to build your own DVD plugin.

    And the clock starts now!

    [0:00-5:40] Setup Environment and DVD SDK Setup

    Download and install DVD. I presume you already have it, because in my experience people who don't have it very rarely feel a need for its plugins. Hence I didn't include download and installation times in the 22 minutes.

    Create a folder to store your plugins and some Gradle-related configs and scripts. It may be any directory you like, just select your favourite place.

    Next step is to define some environment variables.

    • The first one is DVDESKTOP_SDK_HOME and it should point to the DVD folder. In most cases, it is C:\Program Files\Oracle Data Visualisation Desktop.
    • The second variable is PLUGIN_DEV_DIR; it points to the folder you created 40 seconds ago. These two variables seem not to be absolutely necessary, but they make your life a bit easier.
    • And one more change to the environment before going further. Add %DVDESKTOP_SDK_HOME%\tools\bin to the PATH variable (do you feel how your life became easier with DVDESKTOP_SDK_HOME already defined so you can start using it right now?).

    And the last step is to initialise our development environment. Go to the plugins directory and execute the initialization command. Open cmd window and execute:

    cd %PLUGIN_DEV_DIR%  
    bicreateenv  
    

    This will not only initialise everything you need but also create a sample plugin to explore and research.

    Let's take a look at what we got. In the folder we have created before (%PLUGIN_DEV_DIR%) this command generated the following structure.

    a screenshot of %PLUGIN_DEV_DIR%

    All names here are more than self-explanatory. You won't need to do anything with .gradle and gradle, and src stores all the code we will develop. Please notice that src already has a sample plugin, sampleviz. You may use it as a reference for your own work.

    Create First Plugin

    Every plugin consists of a few parts. We don't need to create them from scratch every time. Oracle gives us a simple way of creating a new plugin. If you didn't close cmd window from previous steps execute the following:

    bicreateplugin viz -id com.rittmanmead.demo -subType dataviz  
    

    Obviously, you should use your own company name for the id. The last word (demo) plus the word Plugin is what the user will see as the visualisation name. Obviously we can change it, but not today.

    This will create a fully operational plugin. Where can we find it? You shouldn't be too surprised if I say "In src folder".

    a screenshot of %PLUGIN_DEV_DIR%

    And we may test it right now. Yes, its functionality is very limited, but you didn't expect it to do something useful, right?

    Start SDK

    Now we should start Oracle DVD in SDK mode. The main difference from the normal mode is that DVD will start in the default browser. That's not a typo. DVD will start as a usual site in a browser. It's a good idea to select as the default a browser with a good developer console and tools. I prefer Chrome for this task but that's not the only choice. Choose the browser you'd normally use for site development. When you finish setting up the default browser, do:

    cd %PLUGIN_DEV_DIR%  
    .\gradlew run
    

    If you don't have an ultrafast computer I'd recommend you to make some tea at this point. Because the very first start will take a long time. Actually, it will consume most of the first stage time.

    [5:40-6:15] Test the Plugin

    We didn't start changing the first plugin yet, so I expect everything to be OK at this point. But anyway it's a good idea to test everything. I think we can afford 30-40 seconds for this.

    And again I assume you know how to use DVD and don't need a step-by-step guide on creating a simple visualisation. Just create a new project, select any data source, go to Visualisations, find your plugin and add it. You may even add some data to the plugin and it will show how many rows your data has. I think that's a good place to start from.

    a picture of plugin

    [6:15-22:00] Add Functions
    Change parameters

    If you didn't skip the testing phase and played with this new toy for a while, you may have noticed two things we need to change right now. First, for the chosen type of visualisation, I need my plugin to accept two measures. You possibly noticed that I was unable to add Number of cancelled ballots as a second measure. By default a plugin accepts not more than one (zero or one). Luckily it can be changed very easily.

    We can find all parameters about measures, rows, columns etc. inside the extensions folder. In this case it is %PLUGIN_DEV_DIR%\customviz\com-rittmanmead-demo\extensions. Here we can find two folders. The first one is oracle.bi.tech.pluginVisualizationDataHandler and it has only one file, com.rittmanmead.demo.visualizationDataHandler.JSON. This file allows us to define which types of input data our plugin has (measures, columns, rows, colour etc.), what their types are (measures/dimensions), which is the default one and so on.

    a picture of json and plugin side-by-side
    Here we should change maxCount to 2. This will allow us to add more than one measure to our visualisation.
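    As a rough sketch, the measures entry ends up looking something like the fragment below. Only maxCount: 2 is the actual change; the surrounding field names are illustrative, since the real structure is whatever bicreateplugin generated for your plugin, so locate the equivalent entry in your own file:

        {
            "name": "values",
            "type": "measure",
            "maxCount": 2
        }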

    The second thing I want to change is the number of data points. Elections dataset has data about 100K precinct commissions. And DVD's default is 10K. We can change this value in the second configuration file from extensions folder. Its name is oracle.bi.tech.Visualization. There is again only one JSON file com.rittmanmead.demo.json. We need to change rows to 200000.

    Why 200000 if I need only 100000 points? Well, it's about how data will be passed to our code. Every measure is a separate row, so I need 100K points with 2 measures each and that gives me 200000 rows. It looks like right now that's the only way to have more than one measure in a plugin (at least it's my best knowledge for DVD 12.2.2.2.0-20170208162627).
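    Again as a sketch: in com.rittmanmead.demo.json the edit boils down to the rows property. The fragment below is illustrative only (apart from rows: 200000); keep whatever other entries bicreateplugin generated for your plugin:

        {
            "id": "com.rittmanmead.demo",
            "rows": 200000
        }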

    a picture of json and plugin side-by side

    Note. I could change here some things like plugin name, input data types shown to user and so on but my aim is simplicity.

Now we should restart the DVD SDK in order to use the new parameters.

    two measures and ~200K rows

    Write code

For my custom scatter chart I'm going to use the d3.js JavaScript library. It will allow me to concentrate more on logic and less on HTML manipulation. To add it to my plugin, I need to add a couple of lines at the beginning of my code.

    Before:

    define(['jquery',  
            'obitech-framework/jsx',
            'obitech-report/datavisualization',
            'obitech-reportservices/datamodelshapes',
            'obitech-reportservices/events',
            'obitech-appservices/logger',
            'ojL10n!com-rittmanmead-demo/nls/messages',
            'obitech-framework/messageformat',
            'css!com-rittmanmead-demo/demostyles'],
            function($,
                     jsx,
                     dataviz,
                     datamodelshapes,
                     events,
                     logger,
                     messages) {
    

    After (I added two lines starting with d3):

define(['jquery',  
        'obitech-framework/jsx',
        'obitech-report/datavisualization',
        'obitech-reportservices/datamodelshapes',
        'obitech-reportservices/events',
        'obitech-appservices/logger',
        'd3js',
        'ojL10n!com-rittmanmead-demo/nls/messages',
        'obitech-framework/messageformat',
        'css!com-rittmanmead-demo/demostyles'],
        function($,
                 jsx,
                 dataviz,
                 datamodelshapes,
                 events,
                 logger,
                 d3,
                 messages) {
    

    That's all. Now I can use d3js magic. And that's cool.

OK, it's time to make our plugin do something more useful than simply counting rows. All of the plugin code is in the demo.js file and the main function is render. It is called every time DVD needs to update the visualisation. The initial version of this function is really small: without comments it is only four lines. It retrieves the data, counts the rows, finds the container to write to and writes a message.

Demo.prototype.render = function(oTransientRenderingContext) {
  // Note: all events will be received after initialize and start complete. We may get other events
  // such as 'resize' before the render, i.e. this might not be the first event.

  // Retrieve the data object for this visualization
  var oDataLayout = oTransientRenderingContext.get(dataviz.DataContextProperty.DATA_LAYOUT);

  // Determine the number of records available for rendering on ROW
  // Because we specified that Category should be placed on ROW in the data model handler,
  // this returns the number of rows for the data in Category.
  var nRows = oDataLayout.getEdgeExtent(datamodelshapes.Physical.ROW);

  // Retrieve the root container for our visualization. This is provided by the framework.
  // It may not be deleted, but may be used to render.
  var elContainer = this.getContainerElem();
  $(elContainer).html(messages.TEXT_MESSAGE.format("Demo Plugin", "" + nRows));
};
    

    First of all, I need to know the actual size of the plotting area. I simply added that after var elContainer = this.getContainerElem();:

      [...]
      var elContainer = this.getContainerElem();
    
      //container dimensions
      var nWidth = $(elContainer).width();
      var nHeight = $(elContainer).height();
      [...]
    

The template plugin already has code for pulling the data into the script (the oDataLayout variable). But for my scatter plot, I need to put this data into an array and find the minimum and maximum values for both measures. This part of the code is my least favourite one. It looks like we currently can't have two separate measures in a custom viz (or at least I can't find a way), so instead of two nice separate measures, I get them as alternating rows. Like this:

...       ...                ...
PEC #1    Turnout,%          34,8%
PEC #1    United Russia,%    44,3%
PEC #2    Turnout,%          62,1%
PEC #2    United Russia,%    54,2%
PEC #3    Turnout,%          25,9%
PEC #3    United Russia,%    33,2%
...       ...                ...

I really hope a solution for this will be found. For now I just put the even and odd rows into the X and Y coordinates respectively. At the same time, I determine the minimum and maximum values for both axes.

  //temporary measure (holds the even-row value until it is paired with the next odd row)
  var tmp_measure;

  //current measure (the odd-row value)
  var cur_measure;

  //one [vertical, horizontal] pair per precinct
  var points=[];

  var vMinimax=[Number.MAX_VALUE, Number.MIN_VALUE];
  var hMinimax=[Number.MAX_VALUE, Number.MIN_VALUE];

  for(var i=0;i<nRows;i++){

    if(i%2==1){
      //odd row: second measure, used for the vertical axis
      cur_measure=Number(oDataLayout.getValue(datamodelshapes.Physical.DATA, i, 0, false));
      vMinimax[0]=Math.min(vMinimax[0], cur_measure);
      vMinimax[1]=Math.max(vMinimax[1], cur_measure);
      points.push([cur_measure,tmp_measure]);
    }
    else{
      //even row: first measure, used for the horizontal axis
      tmp_measure=Number(oDataLayout.getValue(datamodelshapes.Physical.DATA, i, 0, false));
      hMinimax[0]=Math.min(hMinimax[0], tmp_measure);
      hMinimax[1]=Math.max(hMinimax[1], tmp_measure);
    }
  }
    

The next part of the code is about handling repeated calls: I need to handle things like a change of measures or dimensions. I simply delete the old chart and create a new, empty one.

      var oSVG;
    
      //Delete old chart if exists
      d3.select("#DemoViz").remove();
    
      //Create new SVG with id=DemoViz
      oSVG=d3.select(elContainer).append("svg");
      oSVG.attr("id","DemoViz");
    
      //Set plot area size
      oSVG=oSVG
           .attr("width", nWidth)
           .attr("height", nHeight)
           .append("g");
    

    Now I have an array with data. I have a prepared SVG plot for drawing. And I have no reason not to combine all this into a chart.

  //Compute scaling factors
  var hStep=(nWidth-40)/(hMinimax[1]-hMinimax[0]);
  var vStep=(nHeight-40)/(vMinimax[1]-vMinimax[0]);

  //Draw: anchor each point at the axis minimum so everything stays inside the 20px margins
  oSVG.selectAll("rect")
      .data(points)
      .enter()
      .append("rect")
      .attr("x", function(d) {return 20+(d[1]-hMinimax[0])*hStep;})
      .attr("y", function(d) {return nHeight-(20+(d[0]-vMinimax[0])*vStep);})
      .attr("height",1).attr("width",1)
      .attr('fill-opacity', 0.3);
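To see what these scaling factors do, plug in some illustrative numbers (these are assumptions purely for the arithmetic, not values from my dataset):

// Illustrative numbers only: a 740 px wide container and an X measure ranging from 20 to 95.
var nWidth = 740;
var hMinimax = [20, 95];

var hStep = (nWidth - 40) / (hMinimax[1] - hMinimax[0]); // 700 / 75 ≈ 9.33 px per unit

// A point with an X value of 60 lands at:
var x = 20 + (60 - hMinimax[0]) * hStep; // 20 + 40 * 9.33 ≈ 393 px

// The 40 subtracted from nWidth reserves a 20 px margin on each side:
// the minimum value maps to x = 20 and the maximum to x = nWidth - 20.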
    

One thing I should mention here is opacity. If its value is too high (the dots are almost fully opaque), the chart looks like this:

a picture of the chart with fill-opacity=0.99

If I make the dots more transparent, the chart looks (and works) better. This is how it looks with fill-opacity=0.3.

a picture of the chart with fill-opacity=0.3

Look at the black point in the middle. It's there because a lot of commissions have exactly the same values for both measures. That's how we could find the Saratov region anomaly.

    [22:00-] Summary and TODOs

This shows how we can enhance Oracle DVD with our own visualisation. Yes, it's really, really, really simple, and it needs a lot of work to become genuinely useful. First of all, we should make it more reliable: this implementation doesn't check its input parameters for correctness. We also need to add interactivity: the chart should be zoomable and selectable, it should work as a filter, it should react to resize, and it needs visible axes. And so on. But we could create it in less than half an hour, and as a first step I think it works well. I hope you're not afraid of making your own plugins now.
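As a small taste of the interactivity TODO, pan and zoom could be layered onto the SVG we already create in render() with a few lines of d3. This is only a sketch under an assumption: that the d3 exposed by the 'd3js' module is v4 or later (with v3 the equivalent API is d3.behavior.zoom()):

// Sketch only: attach d3 pan/zoom to the chart created in render().
// Assumes d3 v4+; the ids and structure match the SVG built earlier in this post.
var svg = d3.select("#DemoViz");   // the SVG with id=DemoViz
var plotArea = svg.select("g");    // the <g> that holds all the points

svg.call(d3.zoom()
    .scaleExtent([1, 20])          // allow zooming in up to 20x
    .on("zoom", function () {
      plotArea.attr("transform", d3.event.transform);
    }));

Selection, filtering and axes could be added in the same way: keep the data handling in render() and layer d3 behaviours on top of it.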

If you need a more complex sample, take a look at Oracle's tutorial. It covers some of these TODOs.

    Categories: BI & Warehousing
