Antony Reynolds

Oracle Blogs

List Manipulation in Rules

Thu, 2013-12-26 19:10
Generating Lists from Rules

Recently I was working with a customer that wanted to use rules to do validation.  The idea was to pass a document to the rules engine and get back a list of violations, or an empty list if there were no violations.  It turns out that there were a couple more steps required than I expected, so I thought I would share my solution in case anyone else is wondering how to return lists from the rules engine.

The Scenario

For the purposes of this blog I modeled a very simple shipping company document that has two main parts.   The Package element contains information about the actual item to be shipped, its weight, type of package and destination details.  The Billing element details the charges applied.

For the purpose of this blog I want to validate the following:

  • A residential surcharge is applied to residential addresses.
  • A residential surcharge is not applied to non-residential addresses.
  • The package is of the correct weight for the type of package.

The Shipment element is sent to the rules engine and the rules engine replies with a ViolationList element that has all the rule violations that were found.

Creating the Return List

We need to create a new ViolationList within rules so that we can return it from within the decision function.  To do this we create a new global variable – I called it ViolationList – and initialize it.  Note that I also had some globals that I used to allow changing the weight limits for different package types.

When the rules session is created it will initialize the global variables and assert the input document – the Shipment element.  However within rules our ViolationList variable has an uninitialized internal List that is used to hold the actual List of Violation elements.  We need to initialize this to an empty RL.list in the Decision Function’s “Initial Actions” section.

We can then assert the global variable as a fact to make it available to be the return value of the decision function.  After this we can now create the rules.

Adding a Violation to the List

If a rule fires because of a violation then we need to add a Violation element to the list.  The easiest way to do this without having the rule check the ViolationList directly is to create a function to add the Violation to the global variable ViolationList.

The function creates a new Violation and initializes it with the appropriate values before appending it to the list within the ViolationList.

When a rule fires it is then just necessary to call the function to add the violation to the list.

In the example above if the address is a residential address and the surcharge has not been applied then the function is called with an appropriate error code and message.

How it Works

Each time a rule fires we can add the violation to the list by calling the function.  If multiple rules fire then we will get multiple violations in the list.  We can access the list from a function because it is a global variable.  Because we asserted the global variable as a fact in the decision function initialization actions it is picked up by the decision function as a return value.  When all possible rules have fired the decision function will return all asserted ViolationList elements, which in this case will always be exactly one because we only assert it in the initialization actions.

What Doesn’t Work

The return from a decision function is always a list of the element you specify, so you may be tempted to just assert individual Violation elements and get those back as a list.  That works when at least one violation is asserted, but the decision function must always return at least one element, so if there are no violations an error is thrown.

Alternative

Instead of having a completely separate return element you could have the ViolationList as part of the input element and then return the input element from the decision function.  This would work but now you would be copying most of the input variables back into the output variable.  I prefer to have a cleaner, more function-like interface that makes it easier to handle the response.

Download

Hope this helps someone.  A sample composite project is available for download here.  The composite includes some unit tests.  You can run these from the EM console and then look at the inputs and outputs to see how things work.

Cleaning Up After Yourself

Tue, 2013-12-24 11:50
Maintaining a Clean SOA Suite Test Environment

A fun blog entry with Fantasia animated GIFs got me thinking, like Mickey, about how nice it would be to automate clean-up tasks.

I don’t have a sorcerer’s castle to clean up, but I often have a test environment that I use to run tests, and after fixing the problems that the tests uncovered I want to run them again.  The problem is that all the data from my previous test runs is still there.

In the past I used VirtualBox snapshots to roll back to a clean state, but the problem with this is that it not only loses the changes I want to get rid of, such as data inserted into tables, it also loses changes I want to keep, such as WebLogic configuration changes and new shell scripts.  So like Mickey I went in search of some magic to help me.

Cleaning Up SOA Environment

My first task was to clean up the SOA environment by deleting all instance data from the tables.  Now I could use the purge scripts to do this, but that would still leave me with running instances, for example 800 Human Workflow Tasks that I don’t want to deal with.  So I used the new truncate script to take care of this.  Basically this removes all instance data from your SOA Infrastructure, whether or not the data is live.  This can be run without taking down the SOA Infrastructure (although if you do get strange behavior you may want to restart SOA).  Some statistics, such as service and reference statistics, are kept since server startup, so you may want to restart your server to clear that data.  A sample script to run the truncate SQL is shown below.

#!/bin/sh
# Truncate the SOA schemas, does not truncate BAM.
# Use only in development and test, not production.

# Properties to be set before running script
# SOAInfra Database SID
DB_SID=orcl
# SOA DB Prefix
SOA_PREFIX=DEV
# SOAInfra DB password
SOAINFRA_PASSWORD=welcome1
# SOA Home Directory
SOA_HOME=/u01/app/fmw/Oracle_SOA1

# Set DB Environment
. oraenv << EOF
${DB_SID}
EOF

# Run Truncate script from directory it lives in
cd ${SOA_HOME}/rcu/integration/soainfra/sql/truncate

# Run the truncate script
sqlplus ${SOA_PREFIX}_soainfra/${SOAINFRA_PASSWORD} @truncate_soa_oracle.sql << EOF
exit
EOF

After running this script all your SOA composite instances and associated workflow instances will be gone.
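
If you want a quick sanity check that the truncate worked, a count of composite instances should come back as zero.  The following is a minimal sketch, assuming the 11g SOAINFRA COMPOSITE_INSTANCE table and the same connection properties as the script above:

#!/bin/sh
# Sanity check after truncation - the composite instance count should be zero.
# Assumes the 11g SOAINFRA COMPOSITE_INSTANCE table and the properties used above.
SOA_PREFIX=DEV
SOAINFRA_PASSWORD=welcome1

sqlplus ${SOA_PREFIX}_soainfra/${SOAINFRA_PASSWORD} << EOF
select count(*) from composite_instance;
exit
EOF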

Cleaning Up BAM

The above example shows how easy it is to get rid of all the runtime data in your SOA repository, however if you are using BAM you still have all the contents of your BAM objects from previous runs.  To get rid of that data we need to use BAM ICommand’s clear command as shown in the sample script below:

#!/bin/sh
# Set software locations
FMW_HOME=/home/oracle/fmw
export JAVA_HOME=${FMW_HOME}/jdk1.7.0_17
BAM_CMD=${FMW_HOME}/Oracle_SOA1/bam/bin/icommand
# Set objects to purge
BAM_OBJECTS=/path/RevenueEvent /path/RevenueViolation

# Clean up BAM
for name in ${BAM_OBJECTS}
do
  ${BAM_CMD} -cmd clear -name ${name} -type dataobject
done

After running this script all the rows of the listed objects will be gone.

Ready for Inspection

Unlike the hapless Mickey, our clean up scripts work reliably and do what we want without unexpected consequences, like flooding the castle.

Supporting the Team

Fri, 2013-12-20 15:44
SOA Support Team Blog

Some of my former colleagues in support have created a blog to help answer common problems for customers.  One way they are doing this is by creating better landing zones within My Oracle Support (MOS).  I just used the blog to locate the landing zone for database related issues in SOA Suite.  I needed to get the purge scripts working on 11.1.1.7 and I couldn’t find the patches needed to do that.  A quick look on the blog and I found a suitable entry that directed me to the Oracle Fusion Middleware (FMW) SOA 11g Infrastructure Database: Installation, Maintenance and Administration Guide (Doc ID 1384379.1) in MOS.  Lots of other useful stuff on the blog so stop by and check it out, great job Shawn, Antonella, Maria & JB.

$5 eBook Bonanza

Thu, 2013-12-19 10:20
Packt eBooks $5 Offer

Packt Publishing just told me about their Christmas offer, get eBooks for $5.

From December 19th, customers will be able to get any eBook or Video from Packt for just $5. This offer covers a myriad of titles in the 1700+ range where customers will be able to grab as many as they like until January 3rd 2014 – more information is available at http://bit.ly/1jdCr2W

If you haven’t bought the SOA Developers Cookbook then now is a great time to do so!

Postscript on Scripts

Fri, 2013-11-15 17:07
More Scripts for SOA Suite

Over time I have evolved my startup scripts and thought it would be a good time to share them.  They are available for download here.  I have finally converted to using WLST, which has a number of advantages.  To me the biggest advantage is that the output and log files are automatically written to a consistent location in the domain directory or node manager directory.  In addition the WLST scripts wait for the component to start and then return, this lets us string commands together without worrying about the dependencies.

The following are the key scripts (available for download here):

  • startWlstNodeManager.sh – Starts Node Manager using WLST; pre-reqs: none; stops when task complete: yes.
  • startNodeManager.sh – Starts Node Manager; pre-reqs: none; stops when task complete: yes.
  • stopNodeManager.sh – Stops Node Manager using WLST; pre-reqs: Node Manager running; stops when task complete: yes.
  • startWlstAdminServer.sh – Starts Admin Server using WLST; pre-reqs: Node Manager running; stops when task complete: yes.
  • startAdminServer.sh – Starts Admin Server; pre-reqs: none; stops when task complete: no.
  • stopAdminServer.sh – Stops Admin Server; pre-reqs: Admin Server running; stops when task complete: yes.
  • startWlstManagedServer.sh – Starts Managed Server using WLST; pre-reqs: Node Manager running; stops when task complete: yes.
  • startManagedServer.sh – Starts Managed Server; pre-reqs: none; stops when task complete: no.
  • stopManagedServer.sh – Stops Managed Server; pre-reqs: Admin Server running; stops when task complete: yes.

Samples

To start Node Manager and Admin Server

startWlstNodeManager.sh ; startWlstAdminServer.sh

To start Node Manager, Admin Server and SOA Server

startWlstNodeManager.sh ; startWlstAdminServer.sh ; startWlstManagedServer soa_server1

Note that the Admin Server is not started until the Node Manager is running; similarly, the SOA server is not started until the Admin Server is running.
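
Because the stop scripts also wait for their task to complete, a full shutdown can be chained in reverse order, for example

stopManagedServer.sh soa_server1 ; stopAdminServer.sh ; stopNodeManager.sh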

Node Manager Scripts

startWlstNodeManager.sh

Uses WLST to start the Node Manager.  When the script completes the Node manager will be running.

startNodeManager.sh

The Node Manager is started in the background and the output is piped to the screen. This causes the Node Manager to continue running in the background if the terminal is closed. Log files, including a .out capturing standard output and standard error, are placed in the <WL_HOME>/common/nodemanager directory, making them easy to find. This script pipes the output of the log file to the screen and keeps doing this until terminated.  Terminating the script does not terminate the Node Manager.
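
As an illustration, the core of such a script might look like the sketch below; the WL_HOME path and log file name are examples only and the actual downloadable script may differ.

#!/bin/sh
# Illustrative sketch - start Node Manager in the background and tail its output.
WL_HOME=/u01/app/fmw/wlserver_10.3
NM_DIR=${WL_HOME}/common/nodemanager

nohup ${WL_HOME}/server/bin/startNodeManager.sh > ${NM_DIR}/nodemanager.out 2>&1 &

# Tail the output to the screen; terminating the tail does not stop the Node Manager.
tail -f ${NM_DIR}/nodemanager.out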

stopNodeManager.sh

Uses WLST to stop the Node Manager.  When the script completes the Node manager will be stopped.

Admin Server Scripts

startWlstAdminServer.sh

Uses WLST to start the Admin Server.  The Node Manager must be running before executing this command.  When the script completes the Admin Server will be running.

startAdminServer.sh

The Admin Server is started in the background and the output is piped to the screen. This causes the Admin Server to continue running in the background if the terminal is closed.  Log files, including the .out capturing standard output and standard error, are placed in the same location as if the server had been started by Node Manager, making them easy to find.  This script pipes the output of the log file to the screen and keeps doing this until terminated.  Terminating the script does not terminate the server.

stopAdminServer.sh

Stops the Admin Server.  When the script completes the Admin Server will no longer be running.

Managed Server Scripts

startWlstManagedServer.sh <MANAGED_SERVER_NAME>

Uses WLST to start the given Managed Server. The Node Manager must be running before executing this command. When the script completes the given Managed Server will be running.

startManagedServer.sh <MANAGED_SERVER_NAME>

The given Managed Server is started in the background and the output is piped to the screen. This causes the given Managed Server to continue running in the background if the terminal is closed. Log files, including the .out capturing standard output and standard error, are placed in the same location as if the server had been started by Node Manager, making them easy to find. This script pipes the output of the log file to the screen and keeps doing this until terminated.  Terminating the script does not terminate the server.

stopManagedServer.sh <MANAGED_SERVER_NAME>

Stops the given Managed Server. When the script completes the given Managed Server will no longer be running.

Utility Scripts

The following scripts are not called directly but are used by the previous scripts.

_fmwenv.sh

This script provides information about the Node Manager and WebLogic domain and must be edited to reflect the installed FMW environment; in particular the following values must be set (a sketch of a complete file appears after these lists):

  • DOMAIN_NAME – the WebLogic domain name.
  • NM_USERNAME – the Node Manager username.
  • NM_PASSWORD – the Node Manager password.
  • MW_HOME – the location where WebLogic and other FMW components are installed.
  • WEBLOGIC_USERNAME – the WebLogic Administrator username.
  • WEBLOGIC_PASSWORD - the WebLogic Administrator password.

The following values may also need changing:

  • ADMIN_HOSTNAME – the server where AdminServer is running.
  • ADMIN_PORT – the port number of the AdminServer.
  • DOMAIN_HOME – the location of the WebLogic domain directory, defaults to ${MW_HOME}/user_projects/domains/${DOMAIN_NAME}
  • NM_LISTEN_HOST – the Node Manager listening hostname, defaults to the hostname of the machine it is running on.
  • NM_LISTEN_PORT – the Node Manager listening port.
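
The sketch below shows what such a file might contain; the values are examples only and must be edited to match your installation.

#!/bin/sh
# Illustrative sketch of _fmwenv.sh - edit the values to match your environment.
DOMAIN_NAME=soa_domain
NM_USERNAME=weblogic
NM_PASSWORD=welcome1
MW_HOME=/u01/app/fmw
WEBLOGIC_USERNAME=weblogic
WEBLOGIC_PASSWORD=welcome1

# Optional values with example defaults
ADMIN_HOSTNAME=${ADMIN_HOSTNAME:-localhost}
ADMIN_PORT=${ADMIN_PORT:-7001}
DOMAIN_HOME=${DOMAIN_HOME:-${MW_HOME}/user_projects/domains/${DOMAIN_NAME}}
NM_LISTEN_HOST=${NM_LISTEN_HOST:-`hostname`}
NM_LISTEN_PORT=${NM_LISTEN_PORT:-5556}

export DOMAIN_NAME NM_USERNAME NM_PASSWORD MW_HOME WEBLOGIC_USERNAME WEBLOGIC_PASSWORD
export ADMIN_HOSTNAME ADMIN_PORT DOMAIN_HOME NM_LISTEN_HOST NM_LISTEN_PORT
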
_runWLst.sh

This script runs the WLST script passed in environment variable ${SCRIPT} and takes its configuration from _fmwenv.sh.  It dynamically builds a WLST properties file in the /tmp directory to pass parameters into the scripts.  The properties filename is of the form <DOMAIN_NAME>.<PID>.properties.
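
A minimal sketch of the idea is shown below; the wlst.sh location and the property names written to the file are illustrative assumptions and the downloaded script may differ.

#!/bin/sh
# Illustrative sketch of _runWLst.sh - builds a properties file and runs a WLST script.
. `dirname $0`/_fmwenv.sh

# Properties file named <DOMAIN_NAME>.<PID>.properties as described above
PROPS=/tmp/${DOMAIN_NAME}.$$.properties
cat > ${PROPS} << EOF
domain.name=${DOMAIN_NAME}
admin.url=t3://${ADMIN_HOSTNAME}:${ADMIN_PORT}
nm.host=${NM_LISTEN_HOST}
nm.port=${NM_LISTEN_PORT}
EOF

# Run the WLST script named in ${SCRIPT}, e.g. startNodeManager.py
${MW_HOME}/oracle_common/common/bin/wlst.sh -loadProperties ${PROPS} `dirname $0`/scripts/${SCRIPT}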

_runAndLog.sh

This script runs the command passed in as an argument, writing standard out and standard error to a log file.  The log file is rotated between invocations to avoid losing the previous log files.  The log file is then tailed and output to the screen.  This means that this script will never finish by itself.
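
A minimal sketch of the approach, with an illustrative log location and a single generation of rotation:

#!/bin/sh
# Illustrative sketch of _runAndLog.sh - run a command, log its output and tail the log.
LOG=/tmp/runAndLog.log

# Rotate the previous log so it is not lost
[ -f ${LOG} ] && mv ${LOG} ${LOG}.1

# Run the command passed as arguments, capturing standard out and standard error
"$@" > ${LOG} 2>&1 &

# Tail the log to the screen; this never finishes by itself
tail -f ${LOG}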

WLST Scripts

The following WLST scripts are used by the scripts above, taking their properties from /tmp/<DOMAIN_NAME>.<PID>.properties:

  • startNodeManager.py
  • stopNodeManager.py
  • startServer.py
  • startServerNM.py
Relationships

The dependencies and relationships between my scripts and the built in scripts are shown in the diagram below.

Thanks for the Memory

Fri, 2013-11-15 15:28
Controlling Memory in Oracle SOA Suite

Within WebLogic you can specify the memory to be used by each managed server in the WebLogic console.  Unfortunately if you create a domain with Oracle SOA Suite it adds a new config script, setSOADomainEnv.sh, that overwrites any USER_MEM_ARGS passed in to the start scripts.  setDomainEnv.sh only sets a single set of memory arguments that are used by all servers, so an admin server gets the same memory parameters as a BAM server.  This means some servers will have more memory than they need and others will be too tightly constrained.  This is a bad thing.

A Solution

To overcome this I wrote a small script, setMem.sh, that checks to see which server is being started and then sets the memory accordingly.  It supports up to nine different server types, each identified by a prefix.  If the prefix matches then the max and min heap and max and min perm gen are set.  Settings are controlled through variables set in the script.  If using JRockit then leave the PERM settings empty and they will not be set.

SRV1_PREFIX=Admin
SRV1_MINHEAP=768m
SRV1_MAXHEAP=1280m
SRV1_MINPERM=256m
SRV1_MAXPERM=512m

The above settings match any server whose name starts with Admin and result in the following USER_MEM_ARGS value:

USER_MEM_ARGS=-Xms768m -Xmx1280m -XX:PermSize=256m -XX:MaxPermSize=512m

If the prefix were soa then it would match soa_server1, soa_server2 etc.

There is a set of DEFAULT_ values that allow you to default the memory settings and only set them explicitly for servers that have settings different from your default.  Note that there must still be a matching prefix, otherwise the USER_MEM_ARGS will be the ones from setSOADomainEnv.sh.
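
The core of the prefix matching might look something like the sketch below; the real setMem.sh supports up to nine prefixes plus the DEFAULT_ values, and SERVER_NAME is assumed to be available when setDomainEnv.sh runs.

#!/bin/sh
# Illustrative sketch of the prefix matching in setMem.sh - not the actual script.
SRV1_PREFIX=Admin
SRV1_MINHEAP=768m
SRV1_MAXHEAP=1280m
SRV1_MINPERM=256m
SRV1_MAXPERM=512m

case "${SERVER_NAME}" in
  ${SRV1_PREFIX}*)
    USER_MEM_ARGS="-Xms${SRV1_MINHEAP} -Xmx${SRV1_MAXHEAP}"
    # Leave the PERM variables empty (e.g. for JRockit) and they are not added
    if [ -n "${SRV1_MINPERM}" ] ; then
      USER_MEM_ARGS="${USER_MEM_ARGS} -XX:PermSize=${SRV1_MINPERM} -XX:MaxPermSize=${SRV1_MAXPERM}"
    fi
    export USER_MEM_ARGS
    ;;
esac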

This script needs to be called from the setDomainEnv.sh.  Add the call immediately before the following lines:

if [ "${USER_MEM_ARGS}" != "" ] ; then
        MEM_ARGS="${USER_MEM_ARGS}"
        export MEM_ARGS
fi

The script can be downloaded here.

Thoughts on Memory Setting

I know conventional wisdom is that Xms (initial heap size) and Xmx (max heap size) should be set the same to avoid needing to allocate memory after startup.  However I don’t agree with this.  Setting Xms==Xmx works great if you are omniscient; I don’t claim to be omniscient, my omni is a little limited, so I prefer to set Xms to what I think the server needs, which is what I would set it to if I were setting Xms=Xmx.  I then like to set Xmx a little higher than I think the server needs.  This allows me to get the memory requirement wrong without the server failing with an out of memory error.  In addition I can monitor the server, and if the heap memory usage goes above my Xms setting I know that I calculated wrong and can change the Xms setting and restart the servers at a suitable time.  Setting them unequal buys me time to fix increased memory needs, for example due to a change in usage patterns or new functionality; it also allows me to be less than omniscient.

PS: Please don’t tell my children I am not omniscient!

Share & Enjoy : Using a JDeveloper Project as an MDS Store

Sat, 2013-10-12 01:24
Share & Enjoy : Sharing Resources through MDS

One of my favorite radio shows was the Hitchhiker’s Guide to the Galaxy by the sadly departed Douglas Adams.  One of the characters, Marvin the Paranoid Android, was created by the Sirius Cybernetics Corporation, whose corporate song was entitled Share and Enjoy!  Just like using the products of the Sirius Cybernetics Corporation, reusing resources through MDS is not fun, but at least it is useful and avoids some problems in SOA deployments.  So in this blog post I am going to show you how to re-use SOA resources stored in MDS using JDeveloper as a development tool.

The Plan

We would like to have some SOA resources such as WSDLs, XSDs, Schematron files, DVMs etc. stored in a shared location.  This gives us the following benefits:

  • Single source of truth for artifacts
  • Remove cross composite dependencies which can cause deployment and startup problems
  • Easier to find and reuse resources if stored in a single location

So we will store a WSDL and XSD in MDS, using a JDeveloper project to maintain the shared artifacts, using file based MDS to access them at development time and database based MDS to access them at runtime.  We will create the shared resources in a JDeveloper project and deploy them to MDS.  We will then deploy a project that exposes a service based on the WSDL.  Finally we will deploy a client project for the previous project that uses the same MDS resources.

Creating Shared Resources in a JDeveloper Project

First lets create a JDeveloper project and put our shared resources into that project.  To do this

  1. In a JDeveloper Application create a New Generic Project (File->New->All Technologies->General->Generic Project)
  2. In that project create a New Folder called apps (File->New->All Technologies->General->Folder) – It must be called apps for local File MDS to work correctly.
  3. In the project properties delete the existing Java Source Paths (Project Properties->Project Source Paths->Java Source Paths->Remove)
  4. In the project properties add a new Java Source Path pointing to the just created apps directory (Project Properties->Project Source Paths->Java Source Paths->Add)

Having created the project we can now put our resources into that project, either copying them from other projects or creating them from scratch.

Create a SOA Bundle to Deploy to a SOA Instance

Having created our resources we now want to package them up for deployment to a SOA instance.  To do this we take the following steps.

  1. Create a new JAR deployment profile (Project Properties->Deployment->New->Jar File)
  2. In JAR Options uncheck the Include Manifest File
  3. In File Groups->Project Output->Contributors uncheck all existing contributors and check the Project Source Path
  4. Create a new SOA Bundle deployment profile (Application Properties->Deployment->New->SOA Bundle)
  5. In Dependencies select the project jar file from the previous steps.
  6. On Application Properties->Deployment unselect all options.

The bundle can now be deployed to the server by selecting Deploy from the Application Menu.

Create a Database Based MDS Connection in JDeveloper

Having deployed our shared resources it would be good to check they are where we expect them to be so lets create a Database Based MDS Connection in JDeveloper to let us browse the deployed resources.

  1. Create a new MDS Connection (File->All Technologies->General->Connections->SOA-MDS Connection)
  2. Make the Connection Type DB Based MDS and choose the database Connection and partition.  The username of the connection will be the <PREFIX>_mds user and the MDS partition will be soa-infra.

Browse the repository to make sure that your resources deployed correctly under the apps folder.  Note that you can also use this browser to look at deployed composites.  You may find it interesting to look at the /deployed-composites/deployed-composites.xml file, which lists all deployed composites.


    Create a File Based MDS Connection in JDeveloper

    We can now create a File based MDS connection to the project we just created.  A file based MDS connection allows us to work offline without a database or SOA server.  We will create a file based MDS that actually references the project we created earlier.

    1. Create a new MDS Connection (File->All Technologies->General->Connections->SOA-MDS Connection)
    2. Make the Connection Type File Based MDS and choose the MDS Root Folder to be the location of the JDeveloper project previously created (not the source directory, the top level project directory).

    We can browse the file based MDS using the IDE Connections Window in JDeveloper.  This lets us check that we can see the contents of the repository.

    Using File Based MDS

    Now that we have MDS set up both in the database and locally in the file system we can try using some resources in a composite.  To use a WSDL from the file based repository:

    1. Insert a new Web Service Reference or Service onto your composite.xml.
    2. Browse the Resource Palette for the WSDL in the File Based MDS connection and import it.
    3. Do not copy the resource into the project.
    4. If you are creating a reference, don’t worry about the warning message, that can be fixed later.  Just say Yes you do want to continue and create the reference.

    Note that when you import a resource from an MDS connection it automatically adds a reference to that MDS into the application’s adf-config.xml.  SOA applications do not deploy their adf-config.xml; they use it purely to help resolve oramds protocol references in SOA composites at design time.  At runtime the soa-infra application’s adf-config.xml is used to help resolve oramds protocol references.

    The reason we set file based MDS to point to the project directory rather than the apps directory underneath is that when we deploy SOA resources to MDS as a SOA bundle the resources are all placed under the apps MDS namespace.  To make sure that our file based MDS includes an apps namespace we rename the src directory to apps and then make sure that our file based MDS points to the directory above the new source directory.

    Patching Up References

    When we use an abstract WSDL as a service the SOA infrastructure automatically adds binding and service information at run time.  An abstract WSDL used as a reference needs to have binding and service information added in order to compile successfully.  By default the imported MDS reference for an abstract WSDL will look like this:

    <reference name="Service3"
               ui:wsdlLocation="oramds:/apps/shared/WriteFileProcess.wsdl">
      <interface.wsdl interface="http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.interface(WriteFileProcess)"/>
      <binding.ws port="" location=""/>
    </reference>

    Note that the port and location properties of the binding are empty.  We need to replace the location with a runtime WSDL location that includes binding information, this can be obtained by getting the WSDL URL from the soa-infra application or from EM.  Be sure to remove any MDS instance strings from the URL.


    The port information is a little more complicated.  The first part of the string should be the target namespace of the service, usually the same as the first part of the interface attribute of the interface.wsdl element.  This is followed by #wsdl.endpoint and then, in parentheses, the service name from the runtime WSDL and the port name from the WSDL, separated by a /.  The format should look like this:

    {Service Namespace}#wsdl.endpoint({Service Name}/{Port Name})

    So if we have a WSDL like this:

    <wsdl:definitions
       …
       targetNamespace="http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess">
       …
       <wsdl:service name="writefileprocess_client_ep">
          <wsdl:port name="WriteFileProcess_pt"
                binding="client:WriteFileProcessBinding">
             <soap:address location=… />
          </wsdl:port>
       </wsdl:service>
    </wsdl:definitions>

    Then we get a binding.ws port like this:

    http://xmlns.oracle.com/Test/SyncWriteFile/WriteFileProcess#wsdl.endpoint(writefileprocess_client_ep/WriteFileProcess_pt)

    Note that you don’t have to set actual values until deployment time.  The following binding information will allow the composite to compile in JDeveloper, although it will not run in the runtime:

    <binding.ws port="dummy#wsdl.endpoint(dummy/dummy)" location=""/>

    The binding information can be changed in the configuration plan.  Deferring it this way means that you must have a configuration plan in order to invoke the reference, which reduces the risk of deploying composites with references that point to the wrong environment.

    Summary

    In this blog post I have shown how to store resources in MDS so that they can be shared between composites.  The resources can be created in a JDeveloper project that doubles as an MDS file repository.  The MDS resources can be reused in composites.  If using an abstract WSDL from MDS I have also shown how to fix up the binding information so that at runtime the correct endpoint can be invoked.  Maybe it is more fun than dealing with the Sirius Cybernetics Corporation!

    Multiple SOA Developers Using a Single Install

    Wed, 2013-10-09 16:37
    Running Multiple SOA Developers from a Single Install

    A question just came up about how to run multiple developers from a single software install.  The objective is to have a single software installation on a shared server and then provide different OS users with the ability to create their own domains.  This is not a supported configuration but it is attractive for a development environment.

    Out of the Box

    Before we do anything special let’s review the basic installation.

    • Oracle WebLogic Server 10.3.6 installed using oracle user in a Middleware Home
    • Oracle SOA Suite 11.1.1.7 installed using oracle user
    • Software installed with group oinstall
    • Developer users dev1, dev2 etc
      • Each developer user is a member of oinstall group and has access to the Middleware Home.
    Customizations

    To get this to work I did the following customization

    • In the Middleware Home make all user readable files/directories group readable and make all user executable files/directories group executable.
      • find $MW_HOME -perm /u+r ! -perm /g+r | xargs -Iargs chmod g+r args
      • find $MW_HOME -perm /u+x ! -perm /g+x | xargs -Iargs chmod g+x args
    Domain Creation

    When creating a domain for a developer note the following:

    • Each developer will need their own FMW repository, perhaps prefixed by their username, e.g. dev1, dev2 etc.
    • Each developer needs to use a unique port number for all WebLogic channels
    • Any use of Coherence should use Well Known Addresses to avoid cross talk between developer clusters (note SOA and OSB both use Coherence!) (see the sketch after this list)
    • If using Node Manager each developer will need their own instance, using their own configuration.
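
    A sketch of how the Coherence unicast settings might be pinned per developer in the server start arguments is shown below; the property names should be checked against your SOA/OSB version, and the host and port values are examples only.

    # Illustrative only - per-developer Coherence well known address settings,
    # for example appended to EXTRA_JAVA_PROPERTIES in the developer's own domain scripts.
    EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} \
     -Dtangosol.coherence.wka1=devhost1 \
     -Dtangosol.coherence.localhost=devhost1 \
     -Dtangosol.coherence.localport=9091"
    export EXTRA_JAVA_PROPERTIES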

    Getting Started with Oracle SOA B2B Integration: A hands On Tutorial

    Tue, 2013-10-08 17:14
    Book: Getting Started with Oracle SOA B2B Integration: A hands On Tutorial

    Before OpenWorld I received a copy of a new book by Scott Haaland, Alan Perlovsky & Krishnaprem Bhatia entitled Getting Started with Oracle SOA B2B Integration: A hands On Tutorial.  A free download is available of Chapter 3 to help you get a feeling for the style for the book.

    A useful new addition to the growing library of Oracle SOA Suite books, it starts off by putting B2B into context and identifying some common B2B message patterns and messaging protocols.  The rest of the book then takes the form of tutorials on how to use Oracle B2B interspersed with useful tips, such as how to set up B2B as a hub to connect different trading partners, similar to the way a VAN works.

    The book goes a little beyond a tutorial by providing suggestions on best practice, giving advice on what is the best way to do things in certain circumstances.

    I found the chapter on reporting & monitoring to be particularly useful, especially the BAM section, as I find many customers are able to use BAM reports to sell a SOA/B2B solution to the business.

    The chapter on Preparing to Go-Live should be read closely before the go-live date; at the very least pay attention to the “Purging data” section.

    Not being a B2B expert I found the book helpful in explaining how to accomplish tasks in Oracle B2B, and also in identifying the capabilities of the product.  Many SOA developers, myself included, view B2B as a glorified adapter, and in many ways it is, but it is an adapter with amazing capabilities.

    The editing seems a little loose, the language is strange in places and there are references to colors on black and white diagrams, but the content is solid and helpful to anyone tasked with implementing Oracle B2B.

    Enterprise Deployment Presentation at OpenWorld

    Thu, 2013-09-26 12:51
    Presentation Today, Thursday 26 September 2013

    Today Matt & I, together with Ram from Oracle Product Management and Craig from Rubicon Red, will be talking about building a highly available, highly scalable enterprise deployment. We will go through Oracle's Enterprise Deployment Guide, explain why it recommends what it does, and also identify alternatives to its recommendations. Come along to Moscone West 2020 at 2pm today; it would be great to see you there.

    Update

    Thanks to all who attended; we were gratified to see around 100 people turn out on Thursday afternoon. I know I am usually presentationed out by then! I have uploaded the presentation here.

    Oracle SOA Suite 11g Performance Tuning Cookbook

    Tue, 2013-08-13 17:39

    Just received this to review.

    It’s a Java World

    The first chapter identifies tools and methods for finding performance bottlenecks, generally covering low-level JVM and database issues.  Useful material, but not really SOA specific, and I think the authors missed the opportunity to share the knowledge they obviously have of how to relate these low-level JVM measurements to SOA causes.

    Chapter 2 uses the EMC Hyperic tool to monitor SOA Suite and so this chapter may be of limited use to many readers.  Many but not all of the recipes could have been accomplished using the FMW Control that ships with, and is included in the license of, SOA Suite.  One of the recipes uses DMS, which is the built-in FMW monitoring system created by Oracle before the acquisition of BEA.  Again this seems to be more about Hyperic than SOA Suite.

    Chapter 3 covers performance testing using Apache JMeter.  Like the previous chapters there is very little specific to SOA Suite, indeed in my experience many SOA Suite implementations do not have a Web Service to initiate composites, instead relying on adapters.

    Chapter 4 covers JVM memory management, this is another good general Java section but has little SOA specifics in it.

    Chapter 5 is yet more Java tuning, in this case generic garbage collection tuning.  Like the earlier chapters, good material but not very SOA specific.  I can’t help feeling that the authors could have made more connections with SOA Suite specifics in their recipes.

    Chapter 6 is called platform tuning, but it could have been titled miscellaneous tuning.  This includes a number of Linux optimizations, WebLogic optimizations and JVM optimizations.  I am not sure that yet another explanation of how to create a boot.properties file was needed.

    Chapter 7 homes in on JMS & JDBC tuning in WebLogic.

    SOA at Last

    Chapter 8 finally turns to SOA specifics.  Unfortunately the description of what dispatcher invoke threads do is misleading: they only control the number of threads retrieving messages from the request queue, and synchronous web service calls do not use the request queue and hence do not use these threads.  Several of the recipes in this chapter do more than alter the performance characteristics; they also alter the semantics of the BPEL engine (such as “Changing a BPEL process to be transient”), and I wish there was more discussion of the impacts of these in the chapter.  I didn’t see any reference to the impact on recoverability of processes when turning on in-memory message delivery.  That said the recipes do cover a lot of useful optimizations, and if used judiciously they will give a boost in performance.

    Chapter 9 covers optimizing the Mediator, primarily tweaking Mediator threading.  The descriptions of the impacts of changes in this chapter are very good, and give some helpful indications on whether they will apply to your environment.

    Chapter 10 touches very lightly on Rules and Human Workflow, this chapter would have benefited from more recipes.  The two recipes for Rules do offer very valuable advice.  The two workflow recipes seem less valuable.

    Chapter 11 takes us into the area where the greatest performance optimizations are to be found: the SOA composite itself.  Seven generally useful recipes are provided, and I would have liked to see more in this chapter, perhaps at the expense of some of the Java tuning in the first half of the book.  I have to say that I do not agree with the “Designing BPEL processes to reduce persistence” recipe; there are better, more maintainable and faster ways to deal with this.  The other recipes provide valuable ideas that may help the performance of your composites.

    Chapter 12 promises “High Performance Configuration”.  Three of the recipes, on creating a cluster, configuring an HTTP plug-in and setting up distributed queues, are covered better in the Oracle documentation, particularly the Enterprise Deployment Guide.  There are however some good suggestions in the recipe about deploying on virtualized environments; I wish they had spoken more about this.  The recipe on using JMS bridges is also a very valuable one that people should be aware of.

    The Good, the Bad, and the Ugly

    A lot of the recipes are really just trivial variations on other recipes, for example they have one recipe on “Increasing the JVM heap size” and another on “Setting Xmx and Xms to the same value”.

    Although the book spends a lot of time on Java tuning, that of itself is reasonable as a lot of SOA performance tuning is tweaking JVM and WLS parameters.  I would have found it more useful if the dots were connected to relate the Java/WLS tuning sections to specific SOA use cases.

    As the authors say when talking about adapter tuning “The preceding sets of recipes are the basics … available in Oracle SOA Suite. There are many other properties that can be tuned, but their effectiveness is situational, so we have chosen to focus on the ones that we feel give improvement for the most projects.”.  They have made a good start, and maybe in a 12c version of the book they can provide more SOA specific information in their Java tuning sections.

    Add the book to your library; you are almost certain to find useful ideas in it.  But make sure you understand the implications of the changes you are making, as the authors do not always spell out the impact on the semantics of your composites.

    A sample chapter is available on the Packt Web Site.

    WebLogic Admin Cookbook Review

    Tue, 2013-08-13 14:42
    Review of Oracle WebLogic Server 12c Advanced Administration Cookbook

    Like all of Packt’s cookbook titles, the book follows a standard format: a recipe, followed by an explanation of how it works and then a discussion of additional recipe-related features and extensions.

    When reading this book I tried out some of the recipes on an internal beta of 12.1.2 and they seemed to work fine on that future release.

    The book starts with basic installation instructions that belie its title.  The author is keen to use console mode, which is often needed for servers that have no X11 client libraries; however, for all but the simplest of domains I find console mode very slow and difficult to use, and would suggest that where possible you persuade the OS admin to make X11 client libraries available, at least for the duration of the domain configuration.

    Another pet peeve of mine is using nohup to start servers/services and not redirecting output, with the result that you are left with nohup.out files scattered all over your disk.  The book falls into this trap.

    However we soon sweep into some features of WebLogic that I believe are less understood such as using the pack/unpack commands and customizing the console screen.  The “Protecting changes in the Administration Console” recipe is particularly useful.

    The next chapter covers HA configuration.  One of the nice things about this book is that most recipes are illustrated not only using the console but also using WLST.  The coverage of multiple NICs and dedicated network channels is very useful in the Exalogic world as well as in regular WLS clusters.  One point I would quibble with is the setting up of HA for the AdminServer: I would always do this with a shared file system rather than copying files around, and I would also prefer a floating IP address to avoid having to update the DNS.

    Other chapters cover JDBC & JMS, Monitoring, Stability, Performance and Security.

    Overall the recipes are useful, I certainly learned some new ways of doing things.  The WLST example code is a real plus.  Well worth being added to your WebLogic Admin library.

    The book is available on the Packt website.

    SOA Suite 11g Developers Cookbook Published

    Fri, 2013-06-28 10:33
    SOA Suite 11g Developers Cookbook Available

    I just realized that I failed to mention that Matt’s and my most recent book, the SOA Suite 11g Developers Cookbook, was published over Christmas last year!

    In some ways this was an easier book to write than the Developers Guide; the hard bit was deciding which recipes to include.  Once we had decided that, the writing of the book was pretty straightforward.

    The book focuses on areas that we felt we had neglected in the Developers Guide, and so there is more about Java integration and OSB, both of which we see a lot of questions about when working with customers.

    Amazon has a couple of reviews.

    Table of Contents

    Chapter 1: Building an SOA Suite Cluster
    Chapter 2: Using the Metadata Service to Share XML Artifacts
    Chapter 3: Working with Transactions
    Chapter 4: Mapping Data
    Chapter 5: Composite Messaging Patterns
    Chapter 6: OSB Messaging Patterns
    Chapter 7: Integrating OSB with JSON
    Chapter 8: Compressed File Adapter Patterns
    Chapter 9: Integrating Java with SOA Suite
    Chapter 10: Securing Composites and Calling Secure Web Services
    Chapter 11: Configuring the Identity Service
    Chapter 12: Configuring OSB to Use Foreign JMS Queues
    Chapter 13: Monitoring and Management

    More Reviews

    In addition to the Amazon Reviews I also found some reviews on GoodReads.

    Free WebLogic Administration Cookbook

    Fri, 2013-06-28 10:16
    Free WebLogic Admin Cookbook

    Packt Publishing are offering free copies of Oracle WebLogic Server 12c Advanced Administration Cookbook : http://www.packtpub.com/oracle-weblogic-server-12c-advanced-administration-cookbook/book  in exchange for a review either on your blog or on the title’s Amazon page.

    Here’s the blurb:

    • Install, create and configure WebLogic Server
    • Configure an Administration Server with high availability
    • Create and configure JDBC data sources, multi data sources and gridlink data sources
    • Tune the multi data source to survive database failures
    • Setup JMS distributed queues
    • Use WLDF to send threshold notifications
    • Configure WebLogic Server for stability and resilience

    If you’re a datacenter operator, system administrator or even a Java developer, this book could be exactly what you are looking for to take you one step further with Oracle WebLogic Server, and this is a good way to bag yourself a free cookbook (current retail price $25.49).

    Free review copies are available until Tuesday 2nd July 2013, so if you are interested, email Harleen Kaur Bagga at: harleenb@packtpub.com.

    I will be posting my own review shortly!

    Target Verification

    Tue, 2013-05-21 13:02
    Verifying the Target

    I just built a combined OSB, SOA/BPM, BAM clustered domain.  The biggest hassle is validating that the resource targeting is correct.  There is a great appendix in the documentation that lists all the modules and resources with their associated targets.  The only problem is that the appendix is six pages of small print.  I manually went through the first page, verifying my targeting, until I thought ‘there must be a better way of doing this’.  So this blog post is the better way.

    WLST to the Rescue

    WebLogic Scripting Tool allows us to query the MBeans and discover what resources are deployed and where they are targeted.  So I built a script that iterates over each of the following resource types and verifies that they are correctly targeted:

    • Applications
    • Libraries
    • Startup Classes
    • Shutdown Classes
    • JMS System Resources
    • WLDF System Resources
    Source Data

    To get the data to verify my domain against, I copied the tables from the documentation into a text file.  The copy ended up putting the resource on the first line and the targets on the second line.  Rather than reformat the data I just read the lines in pairs, storing the resource as a string and splitting apart the targets into a list of strings.  I then stored the data in a dictionary with the resource string as the key and the target list as the value.  The code to do this is shown below:

    # Load resource and target data from file created from documentation
    # File format is a one line with resource name followed by
    # one line with comma separated list of targets
    # fileIn - Resource & Target File
    # accum - Dictionary containing mappings of expected Resource to Target
    # returns - Dictionary mapping expected Resource to expected Target
    def parseFile(fileIn, accum) :
      # Load resource name
      line1 = fileIn.readline().strip('\n')
      if line1 == '':
        # Done if no more resources
        return accum
      else:
        # Load list of targets
        line2 = fileIn.readline().strip('\n')
        # Convert string to list of targets
        targetList = map(fixTargetName, line2.split(','))
        # Associate resource with list of targets in dictionary
        accum[line1] = targetList
        # Parse remainder of file
        return parseFile(fileIn, accum)

    This makes it very easy to update the lists by just copying and pasting from the documentation.

    Each table in the documentation has a corresponding file that is used by the script.

    The data read from the file has the target names mapped to the actual domain target names which are provided in a properties file.

    Listing & Verifying the Resources & Targets

    Within the script I move to the domain configuration MBean and then iterate over the resources deployed and for each resource iterate over the targets, validating them against the corresponding targets read from the file as shown below:

    # Validate that resources are correctly targeted
    # name - Name of Resource Type
    # filename - Filename to validate against
    # items - List of Resources to be validated
    def validateDeployments(name, filename, items) :
      print name+' Check'
      print "====================================================="
      fList = loadFile(filename)
      # Iterate over resources
      for item in items:
        try:
          # Get expected targets for resource
          itemCheckList = fList[item.getName()]
          # Iterate over actual targets
          for target in item.getTargets() :
            try:
              # Remove actual target from expected targets
              itemCheckList.remove(target.getName())
            except ValueError:
              # Target not found in expected targets
              print 'Extra target: '+item.getName()+': '+target.getName()
          # Iterate over remaining expected targets, if any
          for refTarget in itemCheckList:
            print 'Missing target: '+item.getName()+': '+refTarget
        except KeyError:
          # Resource not found in expected resource dictionary
          print 'Extra '+name+' Deployed: '+item.getName()
      print

    Obtaining the Script
    I have uploaded the script here.  It is a zip file containing all the required files together with a PDF explaining how to use the script.

    To install just unzip VerifyTargets.zip. It will create the following files

    • verifyTargets.sh
    • verifyTargets.properties
    • VerifyTargetsScriptInstructions.pdf
    • scripts/verifyTargets.py
    • scripts/verifyApps.txt
    • scripts/verifyLibs.txt
    • scripts/verifyStartup.txt
    • scripts/verifyShutdown.txt
    • scripts/verifyJMS.txt
    • scripts/verifyWLDF.txt
    Sample Output

    The following is sample output from running the script:

    Application Check
    =====================================================
    Extra Application Deployed: frevvo
    Missing target: usermessagingdriver-xmpp: optional
    Missing target: usermessagingdriver-smpp: optional
    Missing target: usermessagingdriver-voicexml: optional
    Missing target: usermessagingdriver-extension: optional
    Extra target: Healthcare UI: soa_cluster
    Missing target: Healthcare UI: SOA_Cluster ??
    Extra Application Deployed: OWSM Policy Support in OSB Initializer Aplication

    Library Check
    =====================================================
    Extra Library Deployed: oracle.bi.adf.model.slib#1.0-AT-11.1-DOT-1.2-DOT-0
    Extra target: oracle.bpm.mgmt#11.1-DOT-1-AT-11.1-DOT-1: AdminServer
    Missing target: oracle.bpm.mgmt#11.1.1-AT-11.1.1: soa_cluster
    Extra target: oracle.sdp.messaging#11.1.1-AT-11.1.1: bam_cluster

    StartupClass Check
    =====================================================

    ShutdownClass Check
    =====================================================

    JMS Resource Check
    =====================================================
    Missing target: configwiz-jms: bam_cluster

    WLDF Resource Check
    =====================================================

    IMPORTANT UPDATE

    Since posting this I have discovered a number of issues.  I have updated the configuration files to correct these problems.  The changes made are as follows:

    • Added WLS_OSB1 server mapping to the script properties file (verifyTargets.properties) to accommodate OSB singletons and modified script (verifyTargets.py) to use the new property.
    • Changes to verifyApplications.txt
      • Changed target from OSB_Cluster to WLS_OSB1 for the following applications:
        • ALSB Cluster Singleton Marker Application
        • ALSB Domain Singleton Marker Application
        • Message Reporting Purger
      • Added following application and targeted at SOA_Cluster
        • frevvo
      • Adding following application and targeted at OSB_Cluster & Admin Server
        • OWSM Policy Support in OSB Initializer Aplication
    • Changes to verifyLibraries.txt
      • Adding following library and targeted at OSB_Cluster, SOA_Cluster, BAM_Cluster & Admin Server
        • oracle.bi.adf.model.slib#1.0-AT-11.1.1.2-DOT-0
      • Modified targeting of following library to include BAM_Cluster
        • oracle.sdp.messaging#11.1.1-AT-11.1.1

    Make sure that you download the latest version.  It is at the same location but now includes a version file (version.txt).  The contents of the version file should be:

    FMW_VERSION=11.1.1.7

    SCRIPT_VERSION=1.1

    Event Processed

    Wed, 2012-10-31 19:24
    Installing Oracle Event Processing 11g

    Earlier this month I was involved in organizing the Monument Family History Day.  It was certainly a complex event, with dozens of presenters, guides and 100s of visitors.  So with that experience of a complex event under my belt I decided to refresh my acquaintance with Oracle Event Processing (CEP).

    CEP has a developer side based on Eclipse and a runtime environment.

    Server install

    The server install is very straightforward (documentation).  It is recommended to use the JRockit JDK with CEP so the steps to set up a working CEP server environment are:

    1. Download required software
      • JRockit – I used Oracle “JRockit 6 - R28.2.5” which includes “JRockit Mission Control 4.1” and “JRockit Real Time 4.1”.
      • Oracle Event Processor – I used “Complex Event Processing Release 11gR1 (11.1.1.6.0)”
    2. Install JRockit
      • Run the JRockit installer; the download is an executable binary that just needs to be marked as executable.
    3. Install CEP
      • Unzip the downloaded file
      • Run the CEP installer; the unzipped file is an executable binary that may need to be marked as executable.
      • Choose a custom install and add the examples if needed.
        • It is not recommended to add the examples to a production environment but they can be helpful in development.
    Developer Install

    The developer install requires several steps (documentation).  A developer install needs access to the software for the server install, although JRockit isn’t necessary for development use.

    1. Download required software
      • Eclipse  (Linux) – It is recommended to use version 3.6.2 (Helios)
    2. Install Eclipse
      • Unzip the download into the desired directory
    3. Start Eclipse
    4. Add Oracle CEP Repository in Eclipse
      • http://download.oracle.com/technology/software/cep-ide/11/
    5. Install Oracle CEP Tools for Eclipse 3.6
      • You may need to set the proxy if behind a firewall.
    6. Modify eclipse.ini (a consolidated example of the result follows this list)
      • If using Windows edit with wordpad rather than notepad
      • Point to 1.6 JVM
        • Insert following lines before -vmargs
          • -vm
          • \PATH_TO_1.6_JDK\jre\bin\javaw.exe
      • Increase PermGen Memory
        • Insert following line at end of file
          • -XX:MaxPermSize=256M
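    Putting these edits together, the relevant fragment of eclipse.ini should end up looking something like the sketch below.  The JDK path is just the placeholder from the steps above and needs to point at your own 1.6 JDK; any options already in your eclipse.ini stay as they are.

        -vm
        \PATH_TO_1.6_JDK\jre\bin\javaw.exe
        -vmargs
        (existing JVM arguments from your eclipse.ini)
        -XX:MaxPermSize=256M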

    Restart eclipse and verify that everything is installed as expected.

    Voila The Deed Is Done

    With CEP installed you are now ready to start a server.  If you didn't install the demos then you will need to create a domain before starting the server.

    Once the server is up and running (using startwlevs.sh) you can verify that the visualizer is available at http://hostname:port/wlevs; the default port for the demo domain is 9002.

    With the server running you can test the IDE by creating a new “Oracle CEP Application Project” and creating a new target environment pointing at your CEP installation.

    Much easier than organizing a Family History Day!

    Oracle SOA Suite 11g Administrator's Handbook

    Fri, 2012-10-12 12:06
    SOA Administration Book

    I have just received a copy of the “Oracle SOA Suite 11g Administrator's Handbook”, so as soon as I have read it I will let you know what I think.  In the meantime the first thing that struck me was the authors' credentials.  Although as far as I remember I have never met either of them, I have read Ahmed's blog postings and they are a great community resource, so immediately I am well disposed towards the book.  Similarly Arun is an employee of my friend and co-author Matt Wright, and I have heard good things about him from Rubicon Red people.

    A first glance at the table of contents looks encouraging; I particularly like their approach to performance tuning, where they give a clear, concise explanation of the knobs they are using.

    More when I have read more.

    Following the Thread in OSB

    Wed, 2012-10-10 14:31
    Threading in OSB

    The Scenario

    I recently led an OSB POC where we needed to get high throughput from an OSB pipeline that had the following logic:

    1. Receive Request
    2. Send Request to External System
    3. If Response has a particular value
      3.1 Modify Request
      3.2 Resend Request to External System
    4. Send Response back to Requestor

    All looks very straightforward, with no nasty wrinkles along the way.  The flow was implemented in OSB as follows (see diagram for more details):

    • Proxy Service to Receive Request and Send Response
    • Request Pipeline
      •   Copies Original Request for use in step 3
    • Route Node
      •   Sends Request to External System exposed as a Business Service
    • Response Pipeline
      •   Checks Response to See If Request Needs to Be Resubmitted
        • Modify Request
        • Callout to External System (same Business Service as Route Node)

    The Proxy and the Business Service were each assigned their own Work Manager, effectively giving each of them their own thread pool.

    The Surprise

    Imagine our surprise when, on stressing the system, we saw it lock up with large numbers of blocked threads.  The lock up is caused by some subtleties in the OSB thread model, which are the topic of this post.

     

    Basic Thread Model

    OSB goes to great lengths to avoid holding on to threads.  Let's start by looking at how OSB deals with a simple request/response routing to a business service in a route node.

    Most Business Services are implemented by OSB in two parts.  The first part uses the request thread to send the request to the target.  In the diagram this is represented by the thread T1.  After sending the request to the target (the Business Service in our diagram) the request thread is released back to whatever pool it came from.  A multiplexor (muxer) is used to wait for the response.  When the response is received the muxer hands off the response to a new thread that is used to execute the response pipeline; this is represented in the diagram by T2.

    OSB allows you to assign different Work Managers, and hence different thread pools, to each Proxy Service and Business Service.  In our example we have the “Proxy Service Work Manager” assigned to the Proxy Service and the “Business Service Work Manager” assigned to the Business Service.  Note that the Business Service Work Manager is only used to assign the thread that processes the response; it is never used to process the request.

    This architecture means that while waiting for a response from a business service there are no threads in use, which makes for better scalability in terms of thread usage.

    First Wrinkle

    Note that if the Proxy and the Business Service both use the same Work Manager then there is potential for starvation.  For example:

    • Request Pipeline makes a blocking callout, say to perform a database read.
    • Business Service response tries to allocate a thread from thread pool but all threads are blocked in the database read.
    • New requests arrive and contend with responses arriving for the available threads.

    Similar problems can occur if the response pipeline blocks for some reason, for example a database update.

    Solution

    The solution to this is to make sure that the Proxy and Business Service use different Work Managers so that they do not contend with each other for threads.
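    If you prefer to script this rather than click through the console, a minimal WLST sketch along the following lines creates the two Work Managers and targets them at the OSB cluster.  The connection details, domain, cluster and Work Manager names here are just illustrative examples based on this post; adjust them for your environment.  The Work Managers then need to be selected as the dispatch policy on the Proxy Service and the Business Service.

        # Minimal WLST sketch; connection details, domain name, cluster name and
        # Work Manager names are illustrative assumptions.
        connect('weblogic', 'password', 't3://adminhost:7001')
        edit()
        startEdit()
        # One Work Manager (and hence thread pool) for the Proxy Service...
        cd('/SelfTuning/my_domain')
        cmo.createWorkManager('ProxyServiceWorkManager')
        cd('/SelfTuning/my_domain/WorkManagers/ProxyServiceWorkManager')
        cmo.addTarget(getMBean('/Clusters/OSB_Cluster'))
        # ...and a separate one for the Business Service so that responses never
        # contend with requests for the same threads.
        cd('/SelfTuning/my_domain')
        cmo.createWorkManager('BusinessServiceWorkManager')
        cd('/SelfTuning/my_domain/WorkManagers/BusinessServiceWorkManager')
        cmo.addTarget(getMBean('/Clusters/OSB_Cluster'))
        save()
        activate()
        disconnect()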

    Do Nothing Route Thread Model

    So what happens if there is no route node?  In this case OSB just echoes the Request message as a Response message, but what happens to the threads?  OSB still uses a separate thread for the response, but in this case the Work Manager used is the Default Work Manager.

    So this is really a special case of the Basic Thread Model discussed above, except that the response pipeline will always execute on the Default Work Manager.

     

    Proxy Chaining Thread Model

    So what happens when the route node is actually calling a Proxy Service rather than a Business Service?  Does the second Proxy Service use its own thread, or does it re-use the thread of the original Request Pipeline?

    Well as you can see from the diagram, when a route node calls another proxy service the original Work Manager is used for both request pipelines.  Similarly the response pipeline uses the Work Manager associated with the ultimate Business Service invoked via a Route Node.  This fits in with the earlier description I gave of Business Services, and by extension Route Nodes: they use “the request thread to send the request to the target”.

    Call Out Threading Model

    So what happens when you make a Service Callout to a Business Service from within a pipeline?  The documentation says that “The pipeline processor will block the thread until the response arrives asynchronously” when using a Service Callout.  What this means is that the target Business Service is called using the pipeline thread but the response is also handled by the pipeline thread.  This implies that the pipeline thread blocks waiting for a response.  It is the handling of this response that behaves in an unexpected way.

    When a Business Service is called via a Service Callout, the calling thread is suspended after sending the request, but unlike the Route Node case the thread is not released; it waits for the response.  The muxer uses the Business Service Work Manager to allocate a thread to process the response, but in this case processing the response means getting the response and notifying the blocked pipeline thread that the response is available.  The original pipeline thread can then continue to process the response.

    Second Wrinkle

    This leads to an unfortunate wrinkle.  If the Business Service is using the same Work Manager as the Pipeline then it is possible for starvation or a deadlock to occur.  The scenario is as follows:

    • Pipeline makes a Callout and the thread is suspended but still allocated
    • Multiple Pipeline instances using the same Work Manager are in this state (common for a system under load)
    • Response comes back but all Work Manager threads are allocated to blocked pipelines.
    • Response cannot be processed and so pipeline threads never unblock – deadlock!
    Solution

    The solution to this is to make sure that any Business Services used by a Callout in a pipeline use a different Work Manager to the pipeline itself.

    The Solution to My Problem

    Looking back at my original workflow we see that the same Business Service is called twice, once in a Route Node and once in a Response Pipeline Callout.  This was what was causing my problem: the response pipeline was using the Business Service Work Manager, but the Service Callout wanted to use the same Work Manager to handle its responses, so eventually my Response Pipeline hogged all the available threads and no responses could be processed.

    The solution was to create a second Business Service pointing to the same location as the original Business Service, the only difference was to assign a different Work Manager to this Business Service.  This ensured that when the Service Callout completed there were always threads available to process the response because the response processing from the Service Callout had its own dedicated Work Manager.

    Summary
    • Request Pipeline
      • Executes on Proxy Work Manager (WM) Thread so limited by setting of that WM.  If no WM specified then uses WLS default WM.
    • Route Node
      • Request sent using Proxy WM Thread
      • Proxy WM Thread is released before getting response
      • Muxer is used to handle response
      • Muxer hands off response to Business Service (BS) WM
    • Response Pipeline
      • Executes on Routed Business Service WM Thread so limited by setting of that WM.  If no WM specified then uses WLS default WM.
    • No Route Node (Echo functionality)
      • Proxy WM thread released
      • New thread from the default WM used for response pipeline
    • Service Callout
      • Request sent using proxy pipeline thread
      • Proxy thread is suspended (not released) until the response comes back
      • Notification of response handled by BS WM thread so limited by setting of that WM.  If no WM specified then uses WLS default WM.
        • Note this is a very short lived use of the thread
      • After notification by the callout BS WM thread, that thread is released and execution continues on the original pipeline thread.
    • Route/Callout to Proxy Service
      • Request Pipeline of callee executes on requestor thread
      • Response Pipeline of caller executes on response thread of requested proxy
    • Throttling
      • Request message may be queued if limit reached.
      • Requesting thread is released (route node) or suspended (callout)

    So what this means is that you may get deadlocks caused by thread starvation if you use the same thread pool for the business service in a route node and the business service in a callout from the response pipeline, because the callout will need a notification thread from the same thread pool as the response pipeline.  This was the problem we were having.
    You get a similar problem if you use the same work manager for the proxy request pipeline and a business service callout from that request pipeline.
    It also means you may want to have different work managers for the proxy and business service in the route node.
    Basically you need to think carefully about how threading impacts your proxy services.

    References

    Thanks to Jay Kasi, Gerald Nunn and Deb Ayers for helping to explain this to me.  Any errors are my own and not theirs.  Also thanks to my colleagues Milind Pandit and Prasad Bopardikar who travelled this road with me.
