# Feed aggregator

### Finding gaps with analytic functions

OraFAQ Articles - Sun, 2014-01-12 06:20

Finding gaps is a classic SQL problem. The basic concept: you have a sequence of numbers (such as 1, 2, 3, 5, 6, 8, 9, 10, 15, 20, 21, 22, 23, 25, 26) where there is supposed to be a fixed interval between entries, but some entries may be missing. The gaps problem involves identifying the ranges of missing values in the sequence. For these numbers, the solution is as follows:
```
START_GAP   END_GAP
        4         4
        7         7
       11        14
       16        19
       24        24
```

First, run the following code to create the tab1 table:

```
CREATE TABLE tab1 ( col1 INTEGER );
```

Then, insert a few rows:

```
INSERT INTO tab1 VALUES (1);
INSERT INTO tab1 VALUES (2);
INSERT INTO tab1 VALUES (3);
INSERT INTO tab1 VALUES (5);
INSERT INTO tab1 VALUES (6);
INSERT INTO tab1 VALUES (8);
INSERT INTO tab1 VALUES (9);
INSERT INTO tab1 VALUES (10);
INSERT INTO tab1 VALUES (15);
INSERT INTO tab1 VALUES (20);
INSERT INTO tab1 VALUES (21);
INSERT INTO tab1 VALUES (22);
INSERT INTO tab1 VALUES (23);
INSERT INTO tab1 VALUES (25);
INSERT INTO tab1 VALUES (26);

COMMIT;
```

With the data in place, you can take care of solving the gaps problem.

One of the most efficient solutions to the gaps problem uses analytic functions (also known as window functions):

```
WITH aa AS
 (SELECT col1 AS cur_value,
         LEAD (col1) OVER (ORDER BY col1) AS next_value
    FROM tab1)
SELECT cur_value + 1 AS start_gap,
       next_value - 1 AS end_gap
  FROM aa
 WHERE next_value - cur_value > 1
 ORDER BY start_gap
```

Using the LEAD function, you return for each current col1 value (call it cur_value) the next value in the sequence (call it next_value). Then you filter only the pairs where the difference between the two is greater than one; for example, cur_value 10 pairs with next_value 15, yielding the gap 11 to 14.


### Quick Tip on Using Sqoop Action in Oozie

Chen Shapira - Wed, 2014-01-08 18:59

Another Oozie tip blog post.

If you try to use the Sqoop action in Oozie, you know you can use the “command” format, with the entire Sqoop configuration in a single line:

```
<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
...
<action name="myfirsthivejob">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>foo:8021</job-tracker>
<name-node>bar:8020</name-node>
<command>import  --connect jdbc:hsqldb:file:db.hsqldb --table TT --target-dir hdfs://localhost:8020/user/tucu/foo -m 1</command>
</sqoop>
<ok to="myotherjob"/>
<error to="errorcleanup"/>
</action>
...
</workflow-app>
```

This is convenient, but can be difficult to read and maintain. I prefer using the “arg” syntax, with each argument in its own line:

```
<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
...
<action name="myfirsthivejob">
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
<job-tracker>foo:8021</job-tracker>
<name-node>bar:8020</name-node>
<arg>import</arg>
<arg>--connect</arg>
<arg>jdbc:hsqldb:file:db.hsqldb</arg>
<arg>--table</arg>
<arg>TT</arg>
<arg>--target-dir</arg>
<arg>hdfs://localhost:8020/user/tucu/foo</arg>
<arg>-m</arg>
<arg>1</arg>
</sqoop>
<ok to="myotherjob"/>
<error to="errorcleanup"/>
</action>
...
</workflow-app>
```

As you can see, each argument here is in its own “arg” tag. Even two arguments that belong together, like “--table” and “TT”, go in two separate tags.
If you try to put them together for readability, as I did, Sqoop will throw its entire user manual at you. It took me a while to figure out why this is an issue.

When you call Sqoop from the command line, all the arguments you pass are sent as a String[] array, and the spaces separate the arguments into array elements. So if you call Sqoop with “--table TT” it will be two elements, “--table” and “TT”.
When using “arg” tags in Oozie, you are basically generating the same array in XML. Oozie will turn the XML argument list into an array and pass it to Sqoop just the way it would in the command line. Then Sqoop parses it in exactly the same way.
So every item separated with space in the command line must be in separate tags in Oozie.
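
To illustrate (using the workflow above; the first form is the one that fails):

```
<!-- Breaks: Oozie passes the single array element "--table TT",
     which Sqoop's parser does not recognize as one option plus one value -->
<arg>--table TT</arg>

<!-- Works: two array elements, exactly as the shell would have split them -->
<arg>--table</arg>
<arg>TT</arg>
```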

It’s simple and logical once you figure out why.
If you want to dig a bit more into how Sqoop parses its arguments: it uses Apache Commons CLI with GnuParser. You can read all about it.

Categories: DBA Blogs

### New Statement of Direction, September 2013

Gerd Volberg - Wed, 2014-01-08 03:32
The current support lifetimes of Forms and Reports can be found in this document.


Here are private copies of some of my older Statements of Direction:

SoD Oracle Fusion 2013 / 09
SoD Forms 2012 / 03
SoD Oracle Fusion 2010 / 05
SoD Forms 2009 / 07
SoD Forms 2008 / 07
SoD Forms 2007 / 11
SoD Forms 2005 / 09
SoD Forms 2005 / 05
SoD Forms 2005 / 03
SoD Forms 2004 / 06

My previous article on this topic is here.

### Tabibitosan

Rob van Wijk - Sun, 2014-01-05 17:32
I answered a few SQL questions on Stack Overflow recently, which I could solve easily by using the Tabibitosan method. It's such an elegant and efficient technique that I think it's worth giving it an extra bit of attention through this post. I'm certainly not the first to write about it: it was introduced to me by Japanese Oracle ACE Aketi Jyuuzou on OTN, who wrote a special forum post explaining his method.
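
As a minimal sketch of the idea (assuming a table of integers such as the tab1 table from the first article above): subtracting ROW_NUMBER() from each value yields a constant group key for every run of consecutive numbers, which can then be grouped to find the islands.

```
SELECT MIN(col1) AS range_start,
       MAX(col1) AS range_end
  FROM (SELECT col1,
               col1 - ROW_NUMBER() OVER (ORDER BY col1) AS grp
          FROM tab1)
 GROUP BY grp
 ORDER BY range_start;
```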

### Looping the hard way

Oracle WTF - Sun, 2014-01-05 05:46

The task was to construct partition names from 'P001' to (for some reason) 'P336', as part of a larger maintenance script. Here's what they came up with:

```
declare
  p varchar2(4);
  i number := 1;
begin
  loop
    if i < 10 then
      p := 'P00' || to_char(i);
    elsif i < 100 then
      p := 'P0' || to_char(i);
    else
      p := 'P' || to_char(i);
    end if;

    i := i + 1;

    exit when i > 336;

    dbms_output.put_line(p);
  end loop;
end;
```
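
Note, incidentally, that the exit test runs after the increment but before the put_line, so 'P336' is never actually printed. For contrast, a minimal sketch of the conventional approach, using a FOR loop and a format mask:

```
begin
  for i in 1 .. 336 loop
    dbms_output.put_line('P' || to_char(i, 'FM000'));
  end loop;
end;
```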

### It’s been a while!

Pete Scott - Fri, 2014-01-03 09:20
Well actually 3 (almost 4) years since I last blogged here. So it is high time to write some new content for my personal blog and perhaps change the nature of the content. When I started blogging (on Blogger) many years back it was mainly a technical blog on Data Warehousing with an Oracle slant. […]

### Date to Timestamp Conversion during PeopleTools Upgrade

David Kurtz - Thu, 2014-01-02 20:14
This blog posting describes a script that converts Oracle date columns to timestamps, as used from PeopleTools 8.50, but rebuilds only those indexes that reference the converted columns rather than dropping and recreating every index in the system, producing a significant saving of time during the upgrade.

I am working on a PeopleSoft upgrade project.  We are going from PeopleTools 8.49 to 8.53.  One of the things that happens is that some date columns in the Oracle database become timestamps.

Timestamps were introduced in Oracle 9i of the database, and provide the ability to store times accurate to the nanosecond (although the default is the microsecond). Dates are accurate to the whole second.

There are three types of temporal column in PeopleSoft, as defined on PSDBFIELD.  Prior to PeopleTools 8.50 they all become Oracle DATE columns in the database.  However, from PeopleTools 8.50, Time and DateTime fields are built as TIMESTAMP columns if bit 5 of PSSTATUS.DATABASE_OPTIONS (value 32) is set.

| PeopleTools Field Type | Column Type (PeopleTools <= 8.49) | Column Type (PeopleTools >= 8.50) |
| --- | --- | --- |
| 4 (Date) | DATE | DATE |
| 5 (Time) | DATE | TIMESTAMP |
| 6 (DateTime) | DATE | TIMESTAMP |
Timestamps must be handled differently to dates in SQL.  Some date arithmetic must be done differently; in particular, the difference between two timestamps is an interval rather than a number of days.  Therefore this setting also controls how PeopleCode date macros expand on Oracle.
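
For example, a quick sketch (the literals are arbitrary):

```
-- DATE minus DATE returns a NUMBER of days
SELECT TO_DATE('2014-01-02','YYYY-MM-DD') - TO_DATE('2014-01-01','YYYY-MM-DD') AS diff
  FROM dual;   -- 1

-- TIMESTAMP minus TIMESTAMP returns an INTERVAL DAY TO SECOND
SELECT TO_TIMESTAMP('2014-01-02','YYYY-MM-DD') - TO_TIMESTAMP('2014-01-01','YYYY-MM-DD') AS diff
  FROM dual;   -- +000000001 00:00:00.000000000
```
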

During the upgrade, PeopleSoft Change Assistant simply alters all the Time and DateTime columns from dates to timestamps.  This generally works well.  The data value doesn't appear to get longer, so the block doesn't run out of room leading to row migration, and so it isn't necessary to rebuild every table that is affected.

However, there are some limitations. If the column being converted to a timestamp falls into one of the following categories you will get an error.
• The column is a key column in a function-based index.
• The table or index is partitioned by the column.
The function-based index issue has not affected many customers because the upgrade template drops all the indexes before altering the tables, and then rebuilds them.

However, dropping and recreating all these indexes can be very time consuming and increases the duration of the outage required to perform the upgrade.  This has been my incentive to find a better way.

Function-Based Indexes
PeopleSoft Application Designer defines some key and search fields as descending.  The rows in components and the results of search dialogue are sorted on the key fields in the order specified.  Application Designer then indexes these columns in descending order (prior to PeopleTools 8 and since PeopleTools 8.47).  If any column in an Oracle index is in descending order the index is created as a function-based index.  Consequently, there can be a lot of descending indexes in a PeopleSoft system!  HR systems are particularly affected because many tables are effective-dated, and the field EFFDT is usually a descending key field.

It is not possible to alter a column to a timestamp if it appears anywhere in a function-based index. You get the following error:
``ORA-30556: functional index is defined on the column to be modified ``

Partitioning
Partitioning is not something that you usually encounter in a vanilla PeopleSoft system, but it can be added by customisation.  You generate the necessary DDL yourself if you want to use it. However, from PeopleTools 8.51 Application Designer will preserve existing partitioning.

In the system on which I am working, we partitioned the audit tables by AUDIT_STAMP, which is a DateTime field. Altering those columns raises:
``ORA-14060: data type or length of a table partitioning column may not be changed``
We have had no alternative but to rebuild these tables and repopulate the data. This has also dealt with all locally partitioned indexes.

We have also found that we have one global index partitioned on a timestamp.
``ORA-14061: data type or length of an index partitioning column may not be changed ``
We have also had to drop this index in order to alter the table.

My Approach
We have had no alternative but to rebuild and repopulate our partitioned audit tables, which are partitioned by a DateTime field.  However, that is what we did when we first partitioned them.  The scripts are very similar to those generated by Application Designer: the table is renamed, a new one is built, and the data is copied.  In our case these scripts are built with a PL/SQL utility.  This also addressed the need to rebuild the locally partitioned indexes.

To minimize the number of indexes which must be rebuilt I have written a PL/SQL script (http://www.go-faster.co.uk/scripts/gfc_desc_timestamp_index.sql) that:
• identifies the indexes that need to be dropped,
• captures the DDL to recreate them using DBMS_METADATA and stores it in a table,
• drops the indexes,
• alters the columns that cannot be altered with the indexes in place,
• recreates the indexes.
The script successfully handles partitioned function-based indexes.
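
The capture step might look something like the following sketch (hedged: the staging table gfc_index_ddl and the driving query are illustrative assumptions, not the actual script):

```
DECLARE
  l_ddl CLOB;
BEGIN
  FOR i IN (SELECT owner, index_name
              FROM dba_indexes
             WHERE index_type LIKE 'FUNCTION-BASED%') LOOP
    -- Capture the DDL needed to recreate this index after the columns are altered
    l_ddl := DBMS_METADATA.GET_DDL('INDEX', i.index_name, i.owner);
    INSERT INTO gfc_index_ddl (owner, index_name, ddl_text)
    VALUES (i.owner, i.index_name, l_ddl);
  END LOOP;
  COMMIT;
END;
```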

On this particular HR system we only rebuilt about 400 indexes instead of over 10000. Now the standard PeopleSoft upgrade template can be run without dropping or recreating any further indexes.

### Data Warehouse for Big Data: Scale-Up vs. Scale-Out

Dylan Wan - Thu, 2014-01-02 15:33

Found a very good paper: http://research.microsoft.com/pubs/204499/a20-appuswamy.pdf

This paper discusses whether Hadoop is the right approach for an analytics infrastructure.

It is hard to argue with the industry trend.  However, Hadoop is not new any more.  It is time for people to calm down and rethink the real benefits.

Categories: BI & Warehousing

### November/December Highlights

Jeremy Schneider - Thu, 2014-01-02 12:04

In the Oracle technical universe, it seems that the end of the calendar year is always eventful. First there’s OpenWorld: obviously significant for official announcements and insight into Oracle’s strategy. It’s also the week when many top engineers around the world meet up in San Francisco to catch up over beers – justifying hotel and flight expenses by preparing technical presentations of their most interesting and recent problems or projects. UKOUG and DOAG happen shortly after OpenWorld with a similar (but more European) impact – and December seems to mingle the domino effect of tweets and blog posts inspired by the conference social activity with holiday anticipation at work.

I avoided any conference trips this year but I still noticed the usual surge in interesting twitter and blog activity. It seems worthwhile to record a few highlights of the past two months as the year wraps up.

First, four new scripts that look very interesting:
1. Utility: getMOSpatch (doc: blog) – a useful script for downloading a specific patch from MOS. I had tried something similar for RACattack back in 2011. This script, written by Maris Elsins, looks to be very good. I’ve downloaded it and read up on it but haven’t tried it out yet.
2. Perf: ashtop and ash_wait_chains (doc: blog 1, blog 2) – from the author of snapper, here are two more excellent tools for general performance troubleshooting and quickly pulling information from the ASH. The chains script is an especially brilliant idea – it reports from ASH using a hierarchical join on the blocking_session column. Check out Tanel’s blog posts for details and examples. I’ve used both of these scripts while troubleshooting production issues during the past month.
3. Perf/CPU: fulltime (doc: slides) – Linux specific utility to drill down into CPU-intensive processes. Similar to Tanel’s OStackProf but seems a bit more robust (runs server-side without the windows/vbscript dependencies, also brings cpu/kernel together with wait info in a single report). Rather than oradebug, this uses new lightweight linux kernel instrumentation (perf) to report a sample-based profile of what the Oracle kernel is doing by internal function. This was a collaborative effort by Craig Shallahamer and Frits Hoogland and there are several related articles on both blogs about how it works. I’ve downloaded this but haven’t tried it out yet.
4. Perf/Visualization: [Ora/Py] LatencyMap (doc: blog/sqlplus, blog/python) – very cool looking program which gives a heatmap visual representation of metrics such as I/O. I’m a huge fan of visualizations and use graphical tools daily as a DBA. Make sure to check out the recorded demo of this utility!

I love exploring utilities like these. It brings out my nerdy side a little, that’s why I mentioned them first… :) But there are a few other highlights that come to mind from the past few months too!

On the topic of utilities, I have been working with Tanel’s “tpt” script collection quite a bit during the course of my day-to-day job. I fired out a question this month to the oracle-l mailing list about other publicly posted script collections, and I got the impression that there just aren’t many script collections posted publicly! Here’s the list I came up with:

Script Collections:
Tim Hall (oracle-base)
Dan Morgan
Kerry Osborne (2010 Hotsos Presentation), see also various blog articles for many updated scripts
Tim Gorman
Jeff Hunter

I’ve also read lots of other interesting stuff this month. Three things I remember off the top of my head:
– In 2012, Yury Velikanov wrote up a presentation about oracle database backups. Last month, Maris Elsins made a few tweaks and delivered the updated presentation at UKOUG. The slide deck is a worthwhile read – everybody should be able to learn something from it. If you didn’t see it when Yury first released it last year then take a few minutes to check it out.
– I was interested to read Kellyn Pot’Vin’s slides about Database as a Service (DBaaS). This is an area I’ve been working on a lot lately and it intersects with my Operationally Scalable Practices series of articles. She’s always got good content about OEM on her blog too – being a heavy OEM user these days, I tend to read what Kellyn’s writing.
– Kyle Hailey recorded Christo Kytrovsky’s excellent OakTable World talk about Oracle, Memory and Linux. Worth listening to sometime.

You may already be aware but I have to mention that RAC Attack has hit the accelerator lately! Through an international collaborative effort, the curriculum was updated to version 12c of the database before OpenWorld 2013 and this was followed by a rapid series of workshops. During the past three months, there have been four workshops in three different countries – and there are more coming on the calendar!

Finally, two quick “news” type mentions. First, I’ve personally tried to avoid much of the “engineered systems” buzz (not sure why)… but I did notice the new Exadata release this month. Second, Oracle made an acquisition this year which was particularly significant to me: a Chicago-based company called BigMachines. You may not have heard of this company – but it happens to be mentioned on my LinkedIn profile.

These are a handful of interesting things I remember seeing over the past two months. Please leave me a comment and mention anything else that you noticed recently – I’m very interested to hear any additional highlights!

### Going Native with JCA Adapters

Antony Reynolds - Mon, 2013-12-30 15:55

Sometimes you just need to go native and play with binary data rather than XML.  This occurs commonly when using JCA adapters: the file to be written is in binary format, or the TCP messages written by the Socket Adapter are in binary format.  Although the adapter has no problem converting base-64 data into raw binary, it is a little tricky to get that data into base-64 format in the first place, so this blog entry will explain how.

When creating most adapters (application & DB being the exceptions) you have the option of choosing the message format.  By making the message format “opaque” you are telling the adapter wizard that the message data will be provided as a base-64 encoded string and the adapter will convert this to binary and deliver it.

This results in a WSDL message defined as shown below:

```
<wsdl:types>
  <schema xmlns="http://www.w3.org/2001/XMLSchema">
    <element name="opaqueElement" type="base64Binary"/>
  </schema>
</wsdl:types>
<wsdl:message name="Write_msg">
  <wsdl:part name="opaque" element="opaque:opaqueElement"/>
</wsdl:message>
```

The Challenge

The challenge now is to convert our data into a base-64 encoded string.  For this we have to turn to the Service Bus and MFL.

Within the service bus we use the MFL editor to define the format of the binary data.  In our example we will have variable length strings that start with a 1 byte length field as well as 32-bit integers and 64-bit floating point numbers.

The example below shows a sample MFL file to describe the above data structure:

```
<?xml version='1.0' encoding='windows-1252'?>
<!DOCTYPE MessageFormat SYSTEM 'mfl.dtd'>
<!-- Enter description of the message format here. -->
<MessageFormat name='BinaryMessageFormat' version='2.02'>
  <FieldFormat name='stringField1' type='String' delimOptional='y' codepage='UTF-8'>
    <LenField type='UTinyInt'/>
  </FieldFormat>
  <FieldFormat name='intField' type='LittleEndian4' delimOptional='y'/>
  <FieldFormat name='doubleField' type='LittleEndianDouble' delimOptional='y'/>
  <FieldFormat name='stringField2' type='String' delimOptional='y' codepage='UTF-8'>
    <LenField type='UTinyInt'/>
  </FieldFormat>
</MessageFormat>
```

Note that we can define the endianness of the multi-byte numbers; in this case they are specified as little-endian (Intel format).

I also created an XML version of the MFL that can be used in interfaces.

The XML version can then be imported into a WSDL document to create a web service.

We now have all the pieces we need to convert XML to binary and deliver it via an adapter using the process shown below:

• We receive the XML request; in the sample code it is delivered as a web service.
• We convert the request data into MFL-format XML using an XQuery and store the result in a variable (mflVar).
• We convert the MFL-formatted XML into binary data (internally this is held as a Java byte array) and store the result in a variable (binVar).
• We convert the byte array to a base-64 string using javax.xml.bind.DatatypeConverter.printBase64Binary and store the result in a variable (base64Var).
• Finally we replace the original $body contents with the output of an XQuery that matches the XML format the adapter expects.

The diagram below shows the OSB pipeline that implements the above.

A Wrinkle

Unfortunately we can only call static Java methods that reside in a jar file imported into Service Bus, so we have to provide a wrapper for the printBase64Binary call.  The Java code below provides this wrapper:

```
package antony.blog;

import javax.xml.bind.DatatypeConverter;

public class Base64Encoder {
    public static String base64encode(byte[] content) {
        return DatatypeConverter.printBase64Binary(content);
    }

    public static byte[] base64decode(String content) {
        return DatatypeConverter.parseBase64Binary(content);
    }
}
```

Wrapping Up

Sample code is available here and consists of the following projects:

• BinaryAdapter – JDeveloper SOA Project that defines the JCA File Adapter
• OSBUtils – JDeveloper Java Project that defines the Java wrapper for DataTypeConverter
• BinaryFileWriter – Eclipse OSB Project that includes everything needed to try out the steps in this blog.

The OSB project needs to be customized to have the logical directory name point to something sensible.  The project can be tested using the normal OSB console test screen.

The following sample input (note that 16909060 is 0x01020304):

```
<bin:OutputMessage xmlns:bin="http://www.example.org/BinarySchema">
  <bin:stringField1>First String</bin:stringField1>
  <bin:intField>16909060</bin:intField>
  <bin:doubleField>1.5</bin:doubleField>
  <bin:stringField2>Second String</bin:stringField2>
</bin:OutputMessage>
```

generates the following binary data file, displayed using “hexdump -C”.  Reading the bytes: 0c is the length of the first string, 04 03 02 01 is the little-endian int, 00 00 00 00 00 00 f8 3f is the little-endian double (1.5), and 0d is the length of the second string.

```
$ hexdump -C 2.bin
00000000  0c 46 69 72 73 74 20 53  74 72 69 6e 67 04 03 02  |.First String...|
00000010  01 00 00 00 00 00 00 f8  3f 0d 53 65 63 6f 6e 64  |........?.Second|
00000020  20 53 74 72 69 6e 67                              | String|
```

Although we used a web service writing through to a file adapter, we could equally well have used the socket adapter to send the data to a TCP endpoint.  Similarly, the source of the data could be anything.  The same principle can be applied to decode binary data: just reverse the steps and use the Java method parseBase64Binary instead of printBase64Binary.

### List Manipulation in Rules

Antony Reynolds - Thu, 2013-12-26 19:10
Generating Lists from Rules

Recently I was working with a customer that wanted to use rules to do validation.  The idea was to pass a document to the rules engine and get back a list of violations, or an empty list if there were no violations.  It turns out that there were a couple more steps required than I expected, so I thought I would share my solution in case anyone else is wondering how to return lists from the rules engine.

The Scenario

For the purposes of this blog I modeled a very simple shipping company document that has two main parts.   The Package element contains information about the actual item to be shipped, its weight, type of package and destination details.  The Billing element details the charges applied.

For the purpose of this blog I want to validate the following:

• A residential surcharge is applied to residential addresses.
• A residential surcharge is not applied to non-residential addresses.
• The package is of the correct weight for the type of package.

The Shipment element is sent to the rules engine and the rules engine replies with a ViolationList element that has all the rule violations that were found.

Creating the Return List

We need to create a new ViolationList within rules so that we can return it from within the decision function.  To do this we create a new global variable – I called it ViolationList – and initialize it.  Note that I also had some globals that I used to allow changing the weight limits for different package types.

When the rules session is created it will initialize the global variables and assert the input document – the Shipment element.  However, within rules our ViolationList variable has an uninitialized internal List that is used to hold the actual List of Violation elements.  We need to initialize this to an empty RL.list in the Decision Function’s “Initial Actions” section.

We can then assert the global variable as a fact to make it available to be the return value of the decision function.  After this we can now create the rules.

Adding a Violation to the List

If a rule fires because of a violation then we need to add a Violation element to the list.  The easiest way to do this without having the rule check the ViolationList directly is to create a function that adds the Violation to the global variable ViolationList.

The function creates a new Violation and initializes it with the appropriate values before appending it to the list within the ViolationList.

When a rule fires, it is then just necessary to call the function to add the violation to the list.

In the example above if the address is a residential address and the surcharge has not been applied then the function is called with an appropriate error code and message.

How it Works

Each time a rule fires we can add the violation to the list by calling the function.  If multiple rules fire then we will get multiple violations in the list.  We can access the list from a function because it is a global variable.  Because we asserted the global variable as a fact in the decision function initialization function it is picked up by the decision function as a return value.  When all possible rules have fired then the decision function will return all asserted ViolationList elements, which in this case will always be 1 because we only assert it in the initialization function.

What Doesn’t Work

A return from a decision function is always a list of the element you specify, so you may be tempted to just assert individual Violation elements and get those back as a list.  That will work if there is at least one element in the list, but the decision function must always return at least one element.  So if there are no violations then you will get an error thrown.

Alternative

Instead of having a completely separate return element you could have the ViolationList as part of the input element and then return the input element from the decision function.  This would work, but now you would be copying most of the input variables back into the output variable.  I prefer to have a cleaner, more function-like interface that makes it easier to handle the response.

Hope this helps someone.  A sample composite project is available for download here.  The composite includes some unit tests.  You can run these from the EM console and then look at the inputs and outputs to see how things work.

### Cleaning Up After Yourself

Antony Reynolds - Tue, 2013-12-24 11:50
Maintaining a Clean SOA Suite Test Environment

A fun blog entry with Fantasia animated GIFs got me thinking, like Mickey, about how nice it would be to automate clean-up tasks.

I don’t have a sorcerer’s castle to clean up, but I often have a test environment which I use to run tests; then, after fixing the problems that I uncovered in the tests, I want to run them again.  The problem is that all the data from my previous test run is still there.

Now, in the past I used VirtualBox snapshots to roll back to a clean state, but this not only loses the environment changes I want to get rid of, such as data inserted into tables, it also loses changes I want to keep, such as WebLogic configuration changes and new shell scripts.  So, like Mickey, I went in search of some magic to help me.

Cleaning Up SOA Environment

My first task was to clean up the SOA environment by deleting all instance data from the tables.  Now I could use the purge scripts to do this, but that would still leave me with running instances, for example 800 Human Workflow tasks that I don’t want to deal with.  So I used the new truncate script to take care of this.  Basically this removes all instance data from your SOA Infrastructure, whether or not the data is live.  It can be run without taking down the SOA Infrastructure (although if you do see strange behavior you may want to restart SOA).  Some statistics, such as service and reference statistics, are kept since server startup, so you may want to restart your server to clear that data.  A sample script to run the truncate SQL is shown below.

```
#!/bin/sh
# Truncate the SOA schemas; does not truncate BAM.
# Use only in development and test, not production.

# Properties to be set before running the script
# SOAInfra database SID
DB_SID=orcl
# SOA DB prefix
SOA_PREFIX=DEV
# SOAInfra schema password (an assumption -- set this for your environment)
SOA_PASSWORD=welcome1
# SOA home directory
SOA_HOME=/u01/app/fmw/Oracle_SOA1

# Set DB environment
. oraenv << EOF
${DB_SID}
EOF

# Run the truncate script from the directory it lives in
cd ${SOA_HOME}/rcu/integration/soainfra/sql/truncate

# Run the truncate script (schema user assumed to be ${SOA_PREFIX}_SOAINFRA)
sqlplus ${SOA_PREFIX}_SOAINFRA/${SOA_PASSWORD} << EOF
@truncate_soa_oracle.sql
exit
EOF
```

After running this script all your SOA composite instances and associated workflow instances will be gone.

Cleaning Up BAM

The above example shows how easy it is to get rid of all the runtime data in your SOA repository, however if you are using BAM you still have all the contents of your BAM objects from previous runs.  To get rid of that data we need to use BAM ICommand’s clear command as shown in the sample script below:

```
#!/bin/sh
# Set software locations
FMW_HOME=/home/oracle/fmw
export JAVA_HOME=${FMW_HOME}/jdk1.7.0_17
BAM_CMD=${FMW_HOME}/Oracle_SOA1/bam/bin/icommand
# Set objects to purge (space-separated list, so quote the assignment)
BAM_OBJECTS="/path/RevenueEvent /path/RevenueViolation"

# Clean up BAM
for name in ${BAM_OBJECTS}
do
  ${BAM_CMD} -cmd clear -name ${name} -type dataobject
done
```

After running this script all the rows of the listed objects will be gone.

Unlike the hapless Mickey, our clean up scripts work reliably and do what we want without unexpected consequences, like flooding the castle.

### Supporting the Team

Antony Reynolds - Fri, 2013-12-20 15:44
SOA Support Team Blog

Some of my former colleagues in support have created a blog to help answer common problems for customers.  One way they are doing this is by creating better landing zones within My Oracle Support (MOS).  I just used the blog to locate the landing zone for database related issues in SOA Suite.  I needed to get the purge scripts working on 11.1.1.7 and I couldn’t find the patches needed to do that.  A quick look on the blog and I found a suitable entry that directed me to the Oracle Fusion Middleware (FMW) SOA 11g Infrastructure Database: Installation, Maintenance and Administration Guide (Doc ID 1384379.1) in MOS.  Lots of other useful stuff on the blog so stop by and check it out, great job Shawn, Antonella, Maria & JB.

### FSG Reporting and BIP

Tim Dexter - Fri, 2013-12-20 11:30

This is a great overview of the Financial Statement Generator (FSG) engine from GL in EBS and how Publisher fits into the picture. Thanks to Helle Hellings on the Financials PM team.

Categories: BI & Warehousing

### $5 eBook Bonanza

Antony Reynolds - Thu, 2013-12-19 10:20
Packt eBooks $5 Offer

Packt Publishing just told me about their Christmas offer: eBooks for $5.

From December 19th, customers will be able to get any eBook or video from Packt for just $5. This offer covers a myriad of titles in the 1700+ range, where customers will be able to grab as many as they like until January 3rd 2014 – more information is available at http://bit.ly/1jdCr2W

If you haven’t bought the SOA Developers Cookbook then now is a great time to do so!

### Meet the Oracle ACE Directors Panel - January 9 - Seattle

Tim Tow - Thu, 2013-12-19 08:27

I will be in Seattle on Thursday, January 9th for the Meet the Oracle ACE Directors Panel.  It is at the Sheraton Seattle from 4 to 6 pm and will feature several other ACE Directors, including Martin D'Souza, Kellyn Pot'Vin, Tim Gorman, and my longtime friend and collaborator, Cameron Lackpour.

Come see the panel and stay for the Happy Hour; the beer will be on me!

Categories: BI & Warehousing