Feed aggregator

REST, SSE or WebSockets on WebLogic 10.3.6

Edwin Biemond - Wed, 2014-01-15 14:10
WebLogic 10.3.6 comes with Jersey 1.9 and has no support for Server-Sent Events or WebSockets. But for one of our projects we are making an HTML5 / AngularJS application, which needs to invoke some RESTful services, and we also want to use SSE or WebSockets. Of course we could use WebLogic 12.1.2, but we already have an OSB / SOA Suite WebLogic 10.3.6 environment. So when you want to pimp your

What is behind these recent acquisitions by Palo Alto Networks and FireEye? Domain Talent and Virtualization

Arvind Jain - Tue, 2014-01-14 20:17

Security is a red-hot, fascinating sector right now. Acquisitions are happening left and right, and I have stopped trying to do a financial valuation; there is something else happening. When money is cheap, I see these acquisitions as a race to get ahead with talent and new technology. But the payoff will come for those who are first to reach economies of scale.

The two outstanding reasons for these acquisitions, in my opinion, are Virtualization in Security and Talent with domain expertise. Many security startups are focusing on using in-situ virtual sandboxes to investigate suspicious files and detect malware before letting them loose on the main network.

Blue Coat Systems acquired Norman Shark, which had developed a sandboxing technology platform for malware analysis.  Palo Alto Networks acquired Morta Security (CEO Raj Shah), a Silicon Valley-based security startup, to bolster its cloud-based WildFire malware inspection technology. The aim was to get NSA talent as well as the virtualization technology. A week earlier FireEye acquired Mandiant, which provides endpoint security software and is well known for its threat intelligence research and incident response services.

So what next? I am waiting to see some big Bigdata-plus-Security acquisitions, and they are coming sooner than you expect…


Safe Surfing …

OWB to ODI Migration Utility now available for OWB 11.2.0.3 +CP3 Linux and Windows 64-bit standalone installation.

Antonio Romero - Tue, 2014-01-14 16:33

The OWB to ODI Migration Utility now supports migration from OWB version 11.2.0.3 + CP3 to ODI 12c.

It is available as "Patch 17224695: OWB-ODI MIGRATION PATCH FOR 11.2.0.3 64BIT STANDALONE INSTALL" and can be downloaded from the support website.

This patch (17224695) supports migration from Linux 64-bit and Windows 64-bit standalone OWB 11.2.0.3 + CP3 (Patch 16568042) to ODI 12.1.2.0.0 + Patch 17053768.

More information about the migration utility is here: http://docs.oracle.com/middleware/1212/odi/ODIMG/index.html

January 2014 Critical Patch Update Released

Oracle Security Team - Tue, 2014-01-14 14:50

Hello, this is Eric Maurice.

Oracle released the January 2014 Critical Patch Update today. This Critical Patch Update provided fixes for 144 new security vulnerabilities across a wide range of product families, including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle E-Business Suite, Oracle Supply Chain Products Suite, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle FLEXCUBE, Oracle Java SE, Oracle and Sun Systems Products Suite, Oracle Linux and Virtualization, and Oracle MySQL.

The January 2014 Critical Patch Update provided 5 fixes for the Oracle Database. The maximum CVSS Base Score for these database vulnerabilities was 5.0. This score was for one vulnerability (CVE-2013-5853), which also happened to be the only database vulnerability in this Critical Patch Update that is remotely exploitable without authentication.

This Critical Patch Update provided 22 security fixes for Oracle Fusion Middleware, 19 of which were for vulnerabilities that were remotely exploitable without authentication. The most severe CVSS Base Score for these vulnerabilities is 10.0. This score is for vulnerability CVE-2013-4316 which affects Oracle WebCenter Sites (versions 11.1.1.6.1 and 11.1.1.8.0).

Oracle Hyperion received 2 new security fixes. One of these vulnerabilities (CVE-2013-3830) received a CVSS Base Score of 7.1, which denotes a complete compromise if successfully exploited, but also requires a single authentication from the attacker.

This Critical Patch Update also included a number of fixes for Oracle applications. 4 security fixes are for Oracle E-Business Suite (one of the vulnerabilities may be remotely exploitable without authentication), 16 security fixes are for Oracle Supply Chain Products Suites (6 of the vulnerabilities may be remotely exploitable without authentication), 17 security fixes are for Oracle PeopleSoft Enterprise (10 of the vulnerabilities may be remotely exploitable without authentication). 2 security fixes are for Oracle Siebel CRM (one of the vulnerabilities may be remotely exploitable without authentication), etc.

This Critical Patch Update also provided 36 security fixes for Java SE. 34 of these Java SE vulnerabilities may be remotely exploitable without authentication. Only 3 of these vulnerabilities are relevant to Java SE or JSSE server deployments, but they are not server-side specific (that is, they also affect client deployments). The maximum CVSS Base Score for Java SE vulnerabilities fixed in this Critical Patch Update is 10.0. This score applies to 5 vulnerabilities (one of them being applicable to server deployments, that is, it can be exploited by supplying data to APIs in the specified component without using sandboxed Java Web Start applications or sandboxed Java applets).

As usual, Oracle recommends that this Critical Patch Update be applied as soon as possible. While a successful exploitation of a number of the vulnerabilities addressed by this Critical Patch Update may not be possible in many customers’ deployments because the affected component is not installed or cannot be easily accessed by a malicious attacker, a prompt application of the Critical Patch Update will help ensure that “security in depth” is maintained in the environment. IT environments are dynamic in nature, and systems configurations and security controls (e.g., network access control policies) often change over time. Applying the Critical Patch Update and other vendors’ relevant security patches helps ensure that the related security controls continue to work, should one of the systems fail or its control be circumvented during an attack.

In 2014, the Critical Patch Update program remains Oracle’s primary mechanism for the release of security fixes across all Oracle product families. The inclusion of Java SE in the standard Critical Patch Update release schedule in October 2013 has resulted in an increase in the relative size of each Critical Patch Update release since then. From a Java SE perspective, this inclusion also means that security fixes for Java SE are released in 4 annual scheduled releases (as opposed to 3 annual releases prior to the Oracle acquisition of Sun Microsystems). The schedule of the Critical Patch Update (on the Tuesday closest to the 17th of January, April, July, and October), as well as the frequency of this security patching schedule, is based largely on feedback from customers, who desire a balance between a high level of predictability for managing their systems and a reasonable frequency in the release of security patches so as to maintain a proper security posture. As such, from an Oracle perspective, “time to fix” (the length of time between discovery of the bug or initial reporting and delivery of the fix) is not as relevant a figure as “time to patch” (the length of time between discovery of the bug or initial reporting and application of the fix by all affected customers).

For More Information:

The January 2014 Critical Patch Update is located at http://www.oracle.com/technetwork/topics/security/cpujan2014-1972949.html

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance/overview/index.html.

OEM CLI Commands for Bulk Property Changes

Jeremy Schneider - Tue, 2014-01-14 14:05

This will be a brief post, mostly so I can save this command somewhere besides the bash_history file on my OEM server. It may prove useful to a few others too… it has been absolutely essential for me on several occasions! (I was just using it again recently which reminded me to stick it in this blog post.) This is how you can make bulk property changes to a large group of targets in OEM:

(oracle)$ emcli login -username=jeremy
(oracle)$ emcli get_targets -noheader -script | sed \
  's/Metric Collection Error/MCE/;s/Under Blackout/Blackout/;s/Status Pending/Pending/' >targets

(oracle)$ less targets
(oracle)$ awk '{print$4" "$5" "$6" "$7" "$8"~"$3"~Department~default"}' targets >inp

  or... awk '{print$4" "$5" "$6" "$7" "$8"~"$3"~Line of Business~test"}' targets >inp
  or... awk '{print$4" "$5" "$6" "$7" "$8"~"$3"~Location~chicago"}' targets >inp
  or... awk '{print$4" "$5" "$6" "$7" "$8"~"$3"~LifeCycle Status~Production"}' targets >inp

(oracle)$ less inp
(oracle)$ emcli set_target_property_value -property_records=REC_FILE \
  -input_file=REC_FILE:inp -separator=property_records="\n" -subseparator=property_records=~

(oracle)$ emcli logout

Note that the property name is case-sensitive: “Lifecycle” won’t work but “LifeCycle” does. Also, the commands above are of course intended to be tinkered with. Use grep to filter out targets; search on any regular expression you can dream up.

This process is important here because we use Administration Groups to automatically propagate monitoring templates (with standardized metric thresholds for paging) to all of our OEM targets. There have been a number of times when I’ve needed to make bulk property changes and it takes a very long time to do that through the UI. These commands are much faster.

Oracle Database Security Assessment

Asif Momen - Tue, 2014-01-14 13:01
Data Security has taken a front seat in terms of IT Security lately. Database security is an area of information security that is concerned with the use of security controls to protect databases. Organizations are trying to protect their databases from both internal and external threats. 

In this regard, Oracle has put up an online Database Security Assessment on their website. You may now check how secure your database is by answering a few simple questions. 

Below is the link for Oracle's Database Security Assessment:

http://www.oracle.com/us/media/survey/index-191234.html


I am also sharing ZDNet's list of the top ten most common database security vulnerabilities. Below is the link:


http://www.zdnet.com/the-top-ten-most-common-database-security-vulnerabilities-7000017320/


Wishing you a secure database!!!

Finding gaps with analytic functions

OraFAQ Articles - Sun, 2014-01-12 06:20

Finding gaps is a classic problem in SQL. The basic concept is that you have some sort of numbers (like these: 1, 2, 3, 5, 6, 8, 9, 10, 15, 20, 21, 22, 23, 25, 26), where there’s supposed to be a fixed interval between the entries, but some entries could be missing. The gaps problem involves identifying the ranges of missing values in the sequence. For these numbers, the solution will be as follows:
START_GAP END_GAP
4 4
7 7
11 14
16 19
24 24

First, run the following code to create the tab1 table:


CREATE TABLE tab1
(
col1 INTEGER
);

Then, insert a few rows:


INSERT INTO tab1 VALUES (1);
INSERT INTO tab1 VALUES (2);
INSERT INTO tab1 VALUES (3);
INSERT INTO tab1 VALUES (5);
INSERT INTO tab1 VALUES (6);
INSERT INTO tab1 VALUES (8);
INSERT INTO tab1 VALUES (9);
INSERT INTO tab1 VALUES (10);
INSERT INTO tab1 VALUES (15);
INSERT INTO tab1 VALUES (20);
INSERT INTO tab1 VALUES (21);
INSERT INTO tab1 VALUES (22);
INSERT INTO tab1 VALUES (23);
INSERT INTO tab1 VALUES (25);
INSERT INTO tab1 VALUES (26);

COMMIT;

With the data in place, you can take care of solving the gaps problem.

One of the most efficient solutions to the gaps problem involves using analytic functions (also known as window functions):


WITH aa AS
(SELECT col1 AS cur_value, LEAD (col1) OVER (ORDER BY col1) AS next_value
FROM tab1)
SELECT cur_value + 1 AS start_gap, next_value - 1 AS end_gap
FROM aa
WHERE next_value - cur_value > 1
ORDER BY start_gap;

Using the LEAD function, you can return for each current col1 value (call it cur_value) the next value in the sequence (call it next_value). Then you can filter only the pairs where the difference between the two is greater than one.

Quick Tip on Using Sqoop Action in Oozie

Chen Shapira - Wed, 2014-01-08 18:59

Another Oozie tip blog post.

If you try to use the Sqoop action in Oozie, you know you can use the “command” format, with the entire Sqoop configuration in a single line:

<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
    ...
    <action name="myfirsthivejob">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>foo:8021</job-tracker>
            <name-node>bar:8020</name-node>
            <command>import  --connect jdbc:hsqldb:file:db.hsqldb --table TT --target-dir hdfs://localhost:8020/user/tucu/foo -m 1</command>
        </sqoop>
        <ok to="myotherjob"/>
        <error to="errorcleanup"/>
    </action>
    ...
</workflow-app>

This is convenient, but can be difficult to read and maintain. I prefer using the “arg” syntax, with each argument in its own line:

<workflow-app name="sample-wf" xmlns="uri:oozie:workflow:0.1">
    ...
    <action name="myfirsthivejob">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>foo:8021</job-tracker>
            <name-node>bar:8020</name-node>
            <arg>import</arg>
            <arg>--connect</arg>
            <arg>jdbc:hsqldb:file:db.hsqldb</arg>
            <arg>--table</arg>
            <arg>TT</arg>
            <arg>--target-dir</arg>
            <arg>hdfs://localhost:8020/user/tucu/foo</arg>
            <arg>-m</arg>
            <arg>1</arg>
        </sqoop>
        <ok to="myotherjob"/>
        <error to="errorcleanup"/>
    </action>
    ...
</workflow-app>

As you can see, each argument here is in its own “arg” tag. Even two arguments that belong together like “--table” and “TT” go in two separate tags.
If you try to put them together for readability, as I did, Sqoop will throw its entire user manual at you. It took me a while to figure out why this is an issue.

When you call Sqoop from the command line, all the arguments you pass are sent as a String[] array, and the spaces separate the arguments into array elements. So if you call Sqoop with “--table TT” it will be two elements, “--table” and “TT”.
When using “arg” tags in Oozie, you are basically generating the same array in XML. Oozie will turn the XML argument list into an array and pass it to Sqoop just the way it would in the command line. Then Sqoop parses it in exactly the same way.
So every item separated with space in the command line must be in separate tags in Oozie.

It’s simple and logical once you figure out why :)
If you want to dig a bit more into how Sqoop parses its arguments, it is using Apache Commons CLI with GnuParser. You can read all about it.


Categories: DBA Blogs

New Statement of Direction, September 2013

Gerd Volberg - Wed, 2014-01-08 03:32
The current lifetimes of Forms and Reports can be found in this document.

The most important points at a glance:


Here are some of my older Statement of Directions private copies:

SoD Oracle Fusion 2013 / 09
SoD Forms 2012 / 03
SoD Oracle Fusion 2010 / 05
SoD Forms 2009 / 07
SoD Forms 2008 / 07
SoD Forms 2007 / 11
SoD Forms 2005 / 09
SoD Forms 2005 / 05
SoD Forms 2005 / 03
SoD Forms 2004 / 06

My previous article on this topic is here.

Tabibitosan

Rob van Wijk - Sun, 2014-01-05 17:32
I answered a few SQL questions on Stack Overflow recently, which I could solve easily by using the Tabibitosan method. It's such an elegant and efficient technique that I think it's worth giving it an extra bit of attention through this post. I'm certainly not the first to write about it: it was introduced to me by Japanese Oracle ACE Aketi Jyuuzou on OTN. He wrote a special forum post explaining his
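
The excerpt above is cut off by the feed, but as a minimal sketch of the idea behind the Tabibitosan method, here it is applied to the tab1 table from the "Finding gaps with analytic functions" article above (reusing that table is an assumption, purely for illustration): consecutive values share the same value of col1 - ROW_NUMBER(), so grouping on that expression collapses each unbroken run into a single row.

-- Each unbroken run of consecutive col1 values produces the same grp value,
-- so GROUP BY grp returns one row per run.
SELECT MIN(col1) AS range_start
,      MAX(col1) AS range_end
,      COUNT(*)  AS members
FROM  (SELECT col1
       ,      col1 - ROW_NUMBER() OVER (ORDER BY col1) AS grp
       FROM   tab1)
GROUP BY grp
ORDER BY range_start;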

Looping the hard way

Oracle WTF - Sun, 2014-01-05 05:46

The task was to construct partition names from 'P001' to (for some reason) 'P336', as part of a larger maintenance script. Here's what they came up with:

declare
   p varchar2(4);
   i number := 1;
begin
   loop
      if i < 10 then
         p := 'P00' || to_char(i);
      elsif i < 100 then
         p := 'P0' || to_char(i);
      else
         p := 'P' || to_char(i);
      end if;
      
      i := i + 1;
      
      exit when i > 336;

      dbms_output.put_line(p);
   end loop;
end;
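
For contrast, a minimal sketch of the more direct approach using a format mask (note that the snippet above, as written, also increments and tests the counter before the put_line, so 'P336' is never actually printed):

begin
   for i in 1 .. 336 loop
      -- 'FM000' left-pads the number with zeros and suppresses the leading space
      dbms_output.put_line('P' || to_char(i, 'FM000'));
   end loop;
end;
/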

It’s been a while!

Pete Scott - Fri, 2014-01-03 09:20
Well actually 3 (almost 4) years since I last blogged here. So it is high time to write some new content for my personal blog and perhaps change the nature of the content. When I started blogging (on Blogger) many years back it was mainly a technical blog on Data Warehousing with an Oracle slant. […]

Date to Timestamp Conversion during PeopleTools Upgrade

David Kurtz - Thu, 2014-01-02 20:14
This blog posting describes a script that converts Oracle date columns to timestamps, as used from PeopleTools 8.50, but rebuilds only those indexes that reference the converted columns rather than dropping and recreating every index in the system, thus producing a significant saving of time during the upgrade.

(A longer version of this article is available on my website)

I am working on a PeopleSoft upgrade project.  We are going from PeopleTools 8.49 to 8.53.  One of the things that happens is that some date columns in the Oracle database become timestamps.

Timestamps were introduced by Oracle in version 9i of the database, and provide the ability to store times accurate to the nanosecond (although the default is microsecond). Dates are accurate to the whole second.

There are 3 types of temporal column in PeopleSoft, as defined on PSDBFIELD.  Prior to PeopleTools 8.50 they all become Oracle DATE columns in the database.  However, from PeopleTools 8.50, Time and DateTime fields are built as TIMESTAMP columns if bit 5 of PSSTATUS.DATABASE_OPTIONS (value 32) is set.

PeopleTools Field Type    Database Column Type (PeopleTools <= 8.49)    Database Column Type (PeopleTools >= 8.50)
4  Date                   DATE                                          DATE
5  Time                   DATE                                          TIMESTAMP
6  DateTime               DATE                                          TIMESTAMP
Timestamps must be handled differently to dates in SQL.  Some date arithmetic must be done differently; in particular, the difference between two timestamps is an interval rather than a number of days.  Therefore this setting also controls how PeopleCode date macros expand on Oracle.
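
As a quick illustration of that difference (a sketch run against DUAL; the interval output shown in the comments is approximate and depends on the client):

-- DATE minus DATE returns a number of days
SELECT TO_DATE('2014-01-02','YYYY-MM-DD')
     - TO_DATE('2014-01-01','YYYY-MM-DD') AS diff_days
FROM   dual;
-- 1

-- TIMESTAMP minus TIMESTAMP returns an INTERVAL DAY TO SECOND
SELECT TO_TIMESTAMP('2014-01-02','YYYY-MM-DD')
     - TO_TIMESTAMP('2014-01-01','YYYY-MM-DD') AS diff_interval
FROM   dual;
-- +000000001 00:00:00.000000000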

During the upgrade, PeopleSoft Change Assistant simply alters all the Time and DateTime columns from dates to timestamps.  This generally works well.  The data value doesn't appear to get longer, so the block doesn't run out of room leading to row migration, and so it isn't necessary to rebuild every table that is affected.

However, there are some limitations. If the column being converted to a timestamp falls into one of the following categories you will get an error.
  • The column is a key column in a function-based index.
  • The table or index is partitioned by the column.
The functional key index issue has not affected many customers because the upgrade template drops all the indexes before altering the tables and rebuilding them again.

However, dropping and recreating all these indexes can be very time consuming and increases the duration of the outage required to perform the upgrade.  This has been my incentive to find a better way.

Function-Based Indexes
PeopleSoft Application Designer defines some key and search fields as descending.  The rows in components and the results of search dialogue are sorted on the key fields in the order specified.  Application Designer then indexes these columns in descending order (prior to PeopleTools 8 and since PeopleTools 8.47).  If any column in an Oracle index is in descending order the index is created as a function-based index.  Consequently, there can be a lot of descending indexes in a PeopleSoft system!  HR systems are particularly affected because many tables are effective-dated, and the field EFFDT is usually a descending key field.

It is not possible to alter a column to a timestamp if it appears anywhere in a function-based index. You get the following error:
ORA-30556: functional index is defined on the column to be modified 
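
A minimal sketch that reproduces this error (the table and index names here are invented for illustration):

CREATE TABLE demo_fbi_tab (effdt DATE NOT NULL);

-- A descending key column is implemented as a function-based index under the covers
CREATE INDEX demo_fbi_idx ON demo_fbi_tab (effdt DESC);

-- The datatype change is now blocked by the descending index
ALTER TABLE demo_fbi_tab MODIFY (effdt TIMESTAMP);
-- ORA-30556: functional index is defined on the column to be modified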

Partitioning
Partitioning is not something that you usually encounter in a vanilla PeopleSoft system, but it can be added by customisation.  You have to generate the necessary DDL yourself if you want to use it.  However, from PeopleTools 8.51, Application Designer will preserve existing partitioning.

In the system on which I am working, we partitioned the audit tables by AUDIT_STAMP, which is a DateTime field.
ORA-14060: data type or length of a table partitioning column may not be changed 
We have had no alternative but to rebuild these tables and repopulate the data. This has also dealt with all locally partitioned indexes.

We have also found that we have one global index partitioned on a timestamp.
ORA-14061: data type or length of an index partitioning column may not be changed 
We have also had to drop this index in order to alter the table.

My Approach
We have had no alternative but to rebuild and repopulate our partitioned audit tables, which are partitioned by a DateTime field.  However, that is what we did when we first partitioned them.  The scripts are very similar to those generated by Application Designer.  The table is renamed, a new one is built, and the data is copied.  In our case these scripts are built with a PL/SQL utility.  This also addressed the need to rebuild the locally partitioned indexes.

To minimize the number of indexes which must be rebuilt I have written a PL/SQL script (http://www.go-faster.co.uk/scripts/gfc_desc_timestamp_index.sql) that:
  • identifies the indexes that need to be dropped,
  • captures the DDL to recreate the indexes using DBMS_METADATA and stores it in a table (sketched below),
  • drops the indexes,
  • alters the columns that cannot be altered with the index in place, and
  • recreates the indexes.
The script successfully handles partitioned function-based indexes.
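
The script itself is linked above; purely as a sketch of the DDL-capture step, index DDL can be extracted with DBMS_METADATA along these lines (the table name and the use of DBMS_OUTPUT are illustrative assumptions, not what the script actually does):

DECLARE
  l_ddl CLOB;
BEGIN
  -- Return plain CREATE INDEX statements without storage clauses,
  -- terminated with a semicolon so they can be replayed later
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SEGMENT_ATTRIBUTES', FALSE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);

  FOR r IN (SELECT index_name
            FROM   user_indexes
            WHERE  table_name = 'PS_JOB') LOOP
    l_ddl := DBMS_METADATA.GET_DDL('INDEX', r.index_name);
    -- The real script stores the DDL in a table; here it is simply displayed
    DBMS_OUTPUT.PUT_LINE(l_ddl);
  END LOOP;
END;
/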

On this particular HR system we only rebuilt about 400 indexes instead of over 10000. Now the standard PeopleSoft upgrade template can be run without dropping or recreating any further indexes.

Data Warehouse for Big Data: Scale-Up vs. Scale-Out

Dylan Wan - Thu, 2014-01-02 15:33

Found a very good paper: http://research.microsoft.com/pubs/204499/a20-appuswamy.pdf

This paper discusses whether using Hadoop as the analytics infrastructure is the right approach.

It is hard to argue with the industry trend.  However, Hadoop is not new any more.  It is time for people to calm down and rethink the real benefits.

Categories: BI & Warehousing

November/December Highlights

Jeremy Schneider - Thu, 2014-01-02 12:04

In the Oracle technical universe, it seems that the end of the calendar year is always eventful. First there’s OpenWorld: obviously significant for official announcements and insight into Oracle’s strategy. It’s also the week when many top engineers around the world meet up in San Francisco to catch up over beers – justifying hotel and flight expenses by preparing technical presentations of their most interesting and recent problems or projects. UKOUG and DOAG happen shortly after OpenWorld with a similar (but more European) impact – and December seems to mingle the domino effect of tweets and blog posts inspired by the conference social activity with holiday anticipation at work.

I avoided any conference trips this year but I still noticed the usual surge in interesting twitter and blog activity. It seems worthwhile to record a few highlights of the past two months as the year wraps up.

First, four new scripts that look very interesting:
1. Utility: getMOSpatch (doc: blog)- useful script for downloading a specific patch from MOS. I had tried something similar for RACattack back in 2011. This script written by Maris Elsins looks to be very good. I’ve downloaded this and read up on it but haven’t tried it out yet.
2. Perf: ashtop and ash_wait_chains (doc: blog 1, blog 2) – from the author of snapper, here are two more excellent tools for general performance troubleshooting and quickly pulling information from the ASH. The chains script is an especially brilliant idea – it reports from ASH using a hierarchical join on the blocking_session column. Check out Tanel’s blog posts for details and examples. I’ve used both of these scripts while troubleshooting production issues during the past month.
3. Perf/CPU: fulltime (doc: slides) – Linux specific utility to drill down into CPU-intensive processes. Similar to Tanel’s OStackProf but seems a bit more robust (runs server-side without the windows/vbscript dependencies, also brings cpu/kernel together with wait info in a single report). Rather than oradebug, this uses new lightweight linux kernel instrumentation (perf) to report a sample-based profile of what the Oracle kernel is doing by internal function. This was a collaborative effort by Craig Shallahamer and Frits Hoogland and there are several related articles on both blogs about how it works. I’ve downloaded this but haven’t tried it out yet.
4. Perf/Visualization: [Ora/Py] LatencyMap (doc: blog/sqlplus, blog/python) – very cool looking program which gives a heatmap visual representation of metrics such as I/O. I’m a huge fan of visualizations and use graphical tools daily as a DBA. Make sure to check out the recorded demo of this utility!

I love exploring utilities like these. It brings out my nerdy side a little, that’s why I mentioned them first… :) But there are a few other highlights that come to mind from the past few months too!

On the topic of utilities, I have been working with Tanel’s “tpt” script collection quite a bit during the course of my day-to-day job. I fired out a question this month to the oracle-l mailing list about other publicly posted script collections, and I got the impression that there just aren’t many script collections posted publicly! Here’s the list I came up with:

Script Collections:
Tanel Poder (tpt), see also E2SN for even more
Tim Hall (oracle-base)
Dan Morgan
Kerry Osborne (2010 Hotsos Presentation), see also various blog articles for many updated scripts
Tim Gorman
Jeff Hunter

I’ve also read lots of other interesting stuff this month. Three things I remember off the top of my head:
– In 2012, Yury Velikanov wrote up a presentation about oracle database backups. Last month, Maris Elsins made a few tweaks and delivered the updated presentation at UKOUG. The slide deck is a worthwhile read – everybody should be able to learn something from it. If you didn’t see it when Yury first released it last year then take a few minutes to check it out.
– I was interested to read Kellyn Pot’Vin‘s slides about Database as a Service (DBaaS). This is an area I’ve been working on a lot lately and it intersects with my Operationally Scalable Practices series of articles. She’s always got good content about OEM on her blog too – being a heavy OEM user these days, I tend to read what Kellyn’s writing.
– Kyle Hailey recorded Christo Kytrovsky's excellent OakTable World talk about Oracle, Memory and Linux. Worth listening to sometime.

You may already be aware but I have to mention that RAC Attack has hit the accelerator lately! Through an international collaborative effort, the curriculum was updated to version 12c of the database before OpenWorld 2013 and this was followed by a rapid series of workshops. During the past three months, there have been four workshops in three different countries – and there are more coming on the calendar!

Finally, two quick “news” type mentions. First, I’ve personally tried to avoid much of the “engineered systems” buzz (not sure why)… but I did notice the new Exadata release this month. Second, Oracle made an acquisition this year which was particularly significant to me: a Chicago-based company called BigMachines. You may not have heard of this company – but it happens to be mentioned on my LinkedIn profile.

These are a handful of interesting things I remember seeing over the past two months. Please leave me a comment and mention anything else that you noticed recently – I’m very interested to hear any additional highlights!

Going Native with JCA Adapters

Antony Reynolds - Mon, 2013-12-30 15:55
Formatting JCA Adapter Binary Contents

Sometimes you just need to go native and play with binary data rather than XML.  This occurs commonly when using JCA adapters: the file to be written is in binary format, or the TCP messages written by the Socket Adapter are in binary format.  Although the adapter has no problem converting Base64 data into raw binary, it is a little tricky to get that data into base64 format in the first place, so this blog entry will explain how.

Adapter Creation

When creating most adapters (application & DB being the exceptions) you have the option of choosing the message format.  By making the message format “opaque” you are telling the adapter wizard that the message data will be provided as a base-64 encoded string and the adapter will convert this to binary and deliver it.

This results in a WSDL message defined as shown below:

<wsdl:types>
<schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/opaque/"
        xmlns="http://www.w3.org/2001/XMLSchema" >
  <element name="opaqueElement" type="base64Binary" />
</schema>
</wsdl:types>
<wsdl:message name="Write_msg">
    <wsdl:part name="opaque" element="opaque:opaqueElement"/>
</wsdl:message>

The Challenge

The challenge now is to convert our data into a base-64 encoded string.  For this we have to turn to the service bus and MFL.

Within the service bus we use the MFL editor to define the format of the binary data.  In our example we will have variable length strings that start with a 1 byte length field as well as 32-bit integers and 64-bit floating point numbers.

The example below shows a sample MFL file to describe the above data structure:

<?xml version='1.0' encoding='windows-1252'?>
<!DOCTYPE MessageFormat SYSTEM 'mfl.dtd'>
<!--   Enter description of the message format here.   -->
<MessageFormat name='BinaryMessageFormat' version='2.02'>
    <FieldFormat name='stringField1' type='String' delimOptional='y' codepage='UTF-8'>
        <LenField type='UTinyInt'/>
    </FieldFormat>
    <FieldFormat name='intField' type='LittleEndian4' delimOptional='y'/>
    <FieldFormat name='doubleField' type='LittleEndianDouble' delimOptional='y'/>
    <FieldFormat name='stringField2' type='String' delimOptional='y' codepage='UTF-8'>
        <LenField type='UTinyInt'/>
    </FieldFormat>
</MessageFormat>

Note that we can define the endianness of the multi-byte numbers; in this case they are specified as little-endian (Intel format).

I also created an XML version of the MFL that can be used in interfaces.

The XML version can then be imported into a WSDL document to create a web service.

Full Steam Ahead

We now have all the pieces we need to convert XML to binary and deliver it via an adapter using the process shown below:

  • We receive the XML request; in the sample code it is delivered as a web service.
  • We then convert the request data into MFL-format XML using an XQuery and store the result in a variable (mflVar).
  • We then convert the MFL-formatted XML into binary data (internally this is held as a Java byte array) and store the result in a variable (binVar).
  • We then convert the byte array to a base-64 string using javax.xml.bind.DatatypeConverter.printBase64Binary and store the result in a variable (base64Var).
  • Finally we replace the original $body contents with the output of an XQuery that matches the XML format expected by the adapter.

The diagram below shows the OSB pipeline that implements the above.

A Wrinkle

Unfortunately we can only call static Java methods that reside in a jar file imported into service bus, so we have to provide a wrapper for the printBase64Binary call.  The below Java code was used to provide this wrapper:

package antony.blog;

import javax.xml.bind.DatatypeConverter;

public class Base64Encoder {
    public static String base64encode(byte[] content) {
        return DatatypeConverter.printBase64Binary(content);
    }
    public static byte[] base64decode(String content) {
        return DatatypeConverter.parseBase64Binary(content);
    }
}

Wrapping Up

Sample code is available here and consists of the following projects:

  • BinaryAdapter – JDeveloper SOA Project that defines the JCA File Adapter
  • OSBUtils – JDeveloper Java Project that defines the Java wrapper for DataTypeConverter
  • BinaryFileWriter – Eclipse OSB Project that includes everything needed to try out the steps in this blog.

The OSB project needs to be customized to have the logical directory name point to something sensible.  The project can be tested using the normal OSB console test screen.

The following sample input (note 16909060 is 0x01020304)

<bin:OutputMessage xmlns:bin="http://www.example.org/BinarySchema">
    <bin:stringField1>First String</bin:stringField1>
    <bin:intField>16909060</bin:intField>
    <bin:doubleField>1.5</bin:doubleField>
    <bin:stringField2>Second String</bin:stringField2>
</bin:OutputMessage>

Generates the following binary data file – displayed using “hexdump -C”.  The int is highlighted in yellow, the double in orange and the strings and their associated lengths in green with the length in bold.

$ hexdump -C 2.bin
00000000  0c 46 69 72 73 74 20 53  74 72 69 6e 67 04 03 02  |.First String...|
00000010  01 00 00 00 00 00 00 f8  3f 0d 53 65 63 6f 6e 64  |........?.Second|
00000020  20 53 74 72 69 6e 67                              | String|

Although we used a web service writing through to a file adapter we could have equally well used the socket adapter to send the data to a TCP endpoint.  Similarly the source of the data could be anything.  The same principle can be applied to decode binary data, just reverse the steps and use Java method parseBase64Binary instead of printBase64Binary.

List Manipulation in Rules

Antony Reynolds - Thu, 2013-12-26 19:10
Generating Lists from Rules

Recently I was working with a customer that wanted to use rules to do validation.  The idea was to pass in a document to the rules engine and get back a list of violations, or an empty list if there were no violations.  It turns out that there were a couple more steps required than I expected, so I thought I would share my solution in case anyone else is wondering how to return lists from the rules engine.

The Scenario

For the purposes of this blog I modeled a very simple shipping company document that has two main parts.   The Package element contains information about the actual item to be shipped, its weight, type of package and destination details.  The Billing element details the charges applied.

For the purpose of this blog I want to validate the following:

  • A residential surcharge is applied to residential addresses.
  • A residential surcharge is not applied to non-residential addresses.
  • The package is of the correct weight for the type of package.

The Shipment element is sent to the rules engine and the rules engine replies with a ViolationList element that has all the rule violations that were found.

Creating the Return List

We need to create a new ViolationList within rules so that we can return it from within the decision function.  To do this we create a new global variable – I called it ViolationList – and initialize it.  Note that I also had some globals that I used to allow changing the weight limits for different package types.

When the rules session is created it will initialize the global variables and assert the input document – the Shipment element.  However, within rules our ViolationList variable has an uninitialized internal List that is used to hold the actual List of Violation elements.  We need to initialize this to an empty RL.list in the Decision Function's “Initial Actions” section.

We can then assert the global variable as a fact to make it available to be the return value of the decision function.  After this we can now create the rules.

Adding a Violation to the List

If a rule fires because of a violation then we need to add a Violation element to the list.  The easiest way to do this without having the rule check the ViolationList directly is to create a function to add the Violation to the global variable ViolationList.

The function creates a new Violation and initializes it with the appropriate values before appending it to the list within the ViolationList.

When a rule fires then it is just necessary to call the function to add the violation to the list.

In the example above if the address is a residential address and the surcharge has not been applied then the function is called with an appropriate error code and message.

How it Works

Each time a rule fires we can add the violation to the list by calling the function.  If multiple rules fire then we will get multiple violations in the list.  We can access the list from a function because it is a global variable.  Because we asserted the global variable as a fact in the decision function initialization function it is picked up by the decision function as a return value.  When all possible rules have fired then the decision function will return all asserted ViolationList elements, which in this case will always be 1 because we only assert it in the initialization function.

What Doesn’t Work

A return from a decision function is always a list of the element you specify, so you may be tempted to just assert individual Violation elements and get those back as a list.  That will work if there is at least one element in the list, but the decision function must always return at least one element.  So if there are no violations then you will get an error thrown.

Alternative

Instead of having a completely separate return element you could have the ViolationList as part of the input element and then return the input element from the decision function.  This would work, but now you would be copying most of the input variables back into the output variable.  I prefer to have a cleaner, more function-like interface that makes it easier to handle the response.

Download

Hope this helps someone.  A sample composite project is available for download here.  The composite includes some unit tests.  You can run these from the EM console and then look at the inputs and outputs to see how things work.
