Darwin IT

Darwin-IT professionals do ICT projects based on a broad range of Oracle products and technologies. We write about our experiences and share our thoughts and tips. — Martien van den Akker

Persisting of settings in a SOA Suite Enterprise Deployment

Thu, 2018-10-04 04:56
About a year ago, at my previous customer, a co-worker and I encountered and described a persistence problem with setting the Global Tokens in SOA Suite.

What are Global Tokens again?
The problem with a middleware product such as Oracle Service Bus or SOA Suite (and the same probably goes for MuleSoft, or any other integration tool) is that when you move an integration through the development lifecycle from development to test, preproduction and production, you need to update the endpoints. When I have an integration with a (BPEL) process that does a check-in of a document in WebCenter Content, for instance, then on the test environment it should do the check-in to another WCC server than on pre-production or production. We don't want to have our test documents in production, do we?

To solve that, in OSB we have customization files, and in SOA Suite 11g and onwards we use config plans. But in 11g PatchSet 6, SOA Suite introduced Global Tokens. That way you can create a token that refers to the WCC host, e.g. ${wcc_url}, and use that as a reference in your binding properties.
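As a sketch of how such a token is used (the token names, endpoint and port qname below are made up for illustration), a web service binding in the composite.xml could reference it like this:

```xml
<!-- Hypothetical binding: ${wcc_host} and ${wcc_port} are Global Tokens
     defined in EM FMW Control; SOA Suite resolves them at runtime. -->
<binding.ws port="http://example.com/wcc#wsdl.endpoint(GenericSoapService/GenericSoapPort)"
            location="http://${wcc_host}:${wcc_port}/idcws/GenericSoapPort?WSDL"/>
```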

These properties can be set using Enterprise Manager FMW Control 12c, which leads to a page where you can add tokens or import a file with the tokens.

These settings are stored in the mdm-url-resolver.xml file, in the $DOMAIN_HOME/config/fmwconfig folder.

What about Enterprise Deployment?
The Enterprise Deployment Guide of SOA Suite 12c is quite complex. But in short, as we implemented it, we installed Fusion Middleware Infrastructure and SOA Suite on one node/host. Then, of course, we ran the Repository Creation Utility and configured the domain for the AdminServer. That domain was configured on a shared disk, let's say /u01/data/oracle/domains/soa1_domain. Then it is cloned to cater for the managed servers, using pack/unpack, to local storage, for instance /u02/data/oracle/domains/soa1_domain. In short we have two domain homes:
  • ASERVER_DOMAIN_HOME=/u01/data/oracle/domains/soa1_domain
  • MSERVER_DOMAIN_HOME=/u02/data/oracle/domains/soa1_domain
Where  /u01 is mounted on a shared disk and /u02 on local storage.

So, the AdminServer runs from the ASERVER_DOMAIN_HOME domain, which is on shared storage. This way, when the host running the AdminServer goes down, the AdminServer can be brought up on the second host. The Managed Servers run from a clone of the domain on local storage.

Side note: in 12c we have a per-domain NodeManager by default. So cloning the domain implicitly clones the NodeManager configuration. And running that against another network adapter allows for a NodeManager for the AdminServer and one for the Managed Servers.

Why is this important? Well, this allows for a highly available setup, including functionality such as Zero Downtime Patching.
What is the problem then?
Updating the Global Tokens is done in FMW Control, which runs on the AdminServer. It stores the properties in the mdm-url-resolver.xml. But in which particular mdm-url-resolver.xml file? Well, the changes are stored in the one in $MSERVER_DOMAIN_HOME/config/fmwconfig!
After that you need to restart the SOA Server to get the properties loaded. And then something very smart happens. When starting the SOA Server, the AdminServer sends its copy from $ASERVER_DOMAIN_HOME/config/fmwconfig to the SOA Server. And so the changes are overwritten by the version from the AdminServer's domain!

So, in an Enterprise Deployment configuration of the SOA Suite, a restart of the SOA Server will clear the changes to the Global Tokens.
But there is more!
As I wrote above, we found this a year ago. And we created a Prio 1 Service Request. The issue is very straightforward and reproducible, and has been in the status Development Working for about a year now:
(I'm not writing this to bash Support, by the way. No offense intended, although I would really like a patch by now...)

But, today another co-worker and I encountered a very similar problem with configuring the email driver of the User Messaging Services. A description on how to configure that can be found here. The email driver settings are stored in driverconfig.xml in the $MSERVER_DOMAIN_HOME/config/fmwconfig/servers/soa_server1/applications/usermessagingdriver-email/configuration.

And again, when restarting the domain or soa_server1, these are overwritten by the driverconfig.xml at the same subfolder location in the $ASERVER_DOMAIN_HOME! And since this works like this for the Global Tokens and the email driver, it probably works like this for other UMS drivers, or even other functionality.
The workaround?
It is quite simple: copy the updated mdm-url-resolver.xml or driverconfig.xml in the $MSERVER_DOMAIN_HOME to the counterparts in $ASERVER_DOMAIN_HOME. Then start the servers again. On startup the AdminServer will copy its variant (which is now a copy of the correct, updated one) to the SOA Server again.
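The workaround can be sketched in a few shell commands. This is a minimal demo using throwaway directories to stand in for the two domain homes; on a real system you would use the actual $ASERVER_DOMAIN_HOME and $MSERVER_DOMAIN_HOME paths instead:

```shell
# Stand-in directories for the two domain homes (on a real system these
# would be the /u01/... and /u02/... paths described above).
ASERVER_DOMAIN_HOME=$(mktemp -d)
MSERVER_DOMAIN_HOME=$(mktemp -d)
mkdir -p "$ASERVER_DOMAIN_HOME/config/fmwconfig" "$MSERVER_DOMAIN_HOME/config/fmwconfig"
echo '<updated-tokens/>' > "$MSERVER_DOMAIN_HOME/config/fmwconfig/mdm-url-resolver.xml"
echo '<stale-tokens/>'   > "$ASERVER_DOMAIN_HOME/config/fmwconfig/mdm-url-resolver.xml"

# The workaround: copy the updated file from the MSERVER domain home over
# its stale counterpart in the ASERVER domain home (same for driverconfig.xml).
cp "$MSERVER_DOMAIN_HOME/config/fmwconfig/mdm-url-resolver.xml" \
   "$ASERVER_DOMAIN_HOME/config/fmwconfig/mdm-url-resolver.xml"

# The ASERVER copy now holds the updated settings.
cat "$ASERVER_DOMAIN_HOME/config/fmwconfig/mdm-url-resolver.xml"
```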
Conclusion
I still do like SOA Suite. It's an impressive middleware suite. But I really hope Oracle invests in making it more stable, decreasing the footprint and aligning the functionality with the Enterprise Deployment Guide. Because the behavior above does not match the recommendations as described in the EDG. And I think SOA Suite, OSB and even WebLogic could be a lot smaller and faster with the same functionality. I encounter a lot of duplicated libraries throughout the ORACLE_HOME, or several different versions of the same library. I assume those can be reduced quite a bit. And that will benefit both the Cloud variants of the software and the on-premises variant.

Split your flow trace in BPEL

Tue, 2018-09-25 07:03
A quick one today. In the past we suffered from very large flow traces in SOA Suite 11g, due to which it could happen that a flow trace wasn't parsable in EM and therefore could not be shown.

Also, you might have other reasons to split up your flow trace. Maybe because you want a long-running callee BPEL process to run on while the calling BPEL project is redeployed (although I haven't tested that yet, so I'm not sure whether that would work).

I did know it should be possible to split up the flow traces by changing the ECID (Execution Context ID). But I hadn't seen it and wasn't able to find it. Today, however, I found the how-to in Oracle Support Note 2278472.1. So, as a note-to-myself, here it is.

In the invoke activity to a child process you should add the following property:
<bpelx:toProperty name="tracking.ecid" variable="ora:generateGUID()"/>

This will update the tracking.ecid to a GUID. You should do this on the invoke only (not on the receive). It should not cause any collision or conflict, since it generates a Globally Unique Identifier.
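For reference, in a BPEL 2.0 process the invoke then looks roughly like the snippet below. This is a sketch: the partner link, operation and variable names are made up, and I assume the bpelx:toProperties wrapper element of BPEL 2.0 here:

```xml
<invoke name="InvokeChildProcess" partnerLink="ChildProcessService"
        operation="process" inputVariable="childProcessInput">
  <!-- Generate a fresh ECID so the child process gets its own flow trace -->
  <bpelx:toProperties>
    <bpelx:toProperty name="tracking.ecid" variable="ora:generateGUID()"/>
  </bpelx:toProperties>
</invoke>
```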

SOA 12c MDS configuration

Tue, 2018-09-18 07:05
In a previous post I described how to have the integrated WebLogic of your SOA/BPM QuickStart refer to the same file-based MDS that you might refer to in the SOADesignTimeRepository in JDeveloper.

These days, for my current customer, I'm looking into upgrading 11g, as can be read from my previous posts. This customer also has a legacy of projects migrated from 10g.

In the 11g workspace there was a reference to the database MDS in the development database. In 12c we have a design-time MDS reference. I would recommend referring that to the MDS artefacts in your VCS (Version Control System: svn or git) working copy. To do so, call up the Resources pane in JDeveloper and right-click on the SOADesignTimeRepository:
Then navigate to the location in your working copy:
Mind that SOA Suite expects an apps folder within this folder, so resulting references in the composite.xml, etc. are expected to start with oramds:/apps/....
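For illustration, a composite.xml import then looks something like the line below (the namespace and wsdl name are made up):

```xml
<import namespace="http://example.com/common/CommonService"
        location="oramds:/apps/common/wsdl/CommonService.wsdl"
        importType="wsdl"/>
```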

Now, I migrated a few projects including an adf-config.xml. Instead of the DB MDS repo, I replaced it with a file-based reference in the adf-config.xml, referring to the SOADesignTimeRepository. If you create a new 12c SOA Application, the adf-config.xml will look like:
<?xml version="1.0" encoding="windows-1252" ?>
<adf-config xmlns="http://xmlns.oracle.com/adf/config" xmlns:adf="http://xmlns.oracle.com/adf/config/properties">
  <adf:adf-properties-child xmlns="http://xmlns.oracle.com/adf/config/properties">
    <adf-property name="adfAppUID" value="MyApplication-1234"/>
  </adf:adf-properties-child>
  <sec:adf-security-child xmlns:sec="http://xmlns.oracle.com/adf/security/config">
    <CredentialStoreContext credentialStoreClass="oracle.adf.share.security.providers.jps.CSFCredentialStore"/>
  </sec:adf-security-child>
  <adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
    <mds-config xmlns="http://xmlns.oracle.com/mds/config">
      <persistence-config>
        <metadata-namespaces>
          <namespace path="/soa/shared" metadata-store-usage="mstore-usage_1"/>
          <namespace path="/apps" metadata-store-usage="mstore-usage_2"/>
        </metadata-namespaces>
        <metadata-store-usages>
          <metadata-store-usage id="mstore-usage_1">
            <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
              <property name="partition-name" value="seed"/>
              <property name="metadata-path" value="${soa.oracle.home}/integration"/>
            </metadata-store>
          </metadata-store-usage>
          <metadata-store-usage id="mstore-usage_2">
            <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
              <property name="metadata-path" value="${soamds.apps.home}"/>
            </metadata-store>
          </metadata-store-usage>
        </metadata-store-usages>
      </persistence-config>
    </mds-config>
  </adf-mds-config>
</adf-config>

In the metadata-store-usage with id mstore-usage_2 you'll find the reference ${soamds.apps.home} in the metadata-path property. This refers to the folder as chosen in your SOADesignTimeRepository.

Now, I found several times in the past that although the adf-config.xml was similar to the above, the MDS references did not work. In those cases, as a workaround, I put the absolute path reference in the metadata-path property.
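In that case the second metadata-store-usage ends up with an absolute path in the metadata-path property, something like this (the path below is a made-up example):

```xml
<metadata-store-usage id="mstore-usage_2">
  <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
    <property name="metadata-path" value="c:/data/svn/MyApp/trunk/soa/mds"/>
  </metadata-store>
</metadata-store-usage>
```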

Today I found something similar because of the upgrade, and searched on MDS-01333: missing element "mds-config". This resulted in this article, which gave me the hint.

It turns out that the snippet:

<adf:adf-properties-child xmlns="http://xmlns.oracle.com/adf/config/properties">
<adf-property name="adfAppUID" value="MyApplication-1234"/>

gets in the way. The UID refers to the application name and some generated number. It turns out not to be enough to change it to the name of the application with a generated number. I haven't found what the proper number should be. So I just removed that snippet and then it worked.

SOA Bundle Patch available (since July 2018)

Mon, 2018-09-17 04:46
Because of the version level my previous customer was on, I mostly worked with the version of the BPM QuickStart. Recently I started at another customer that is still on SOA Suite 11g. Since I'm looking into upgrading those to the latest 12c version, I installed BPM QuickStart again.

Doing a patch search on support.oracle.com, I found out that on July 17th, 2018, a SOA Bundle Patch was released. It's patch 28300397.

The readme shows quite a list of bugs solved. The version of JDeveloper and the main components stay unaffected. The version changes are shown in the extensions. The vanilla, unpatched, JDeveloper shows:

And the patched JDeveloper shows:

Since it's been a year already since its release (August 2017, if I recollect correctly), this bundle patch is welcome.

By the way, the reason that I was looking into the patches was that I created a few .xsl files to pre-upgrade our 11g projects. And they didn't reformat properly. JDeveloper behaves strangely; apparently it does not recognize an .xsl file as XML. When you copy and paste the content into an .xml file, it does format properly. I think I have to dig into the preferences to see if this can be tweaked.

To install it, unzip the patch. I'm used to creating a patches folder within the OPatch folder in the Oracle Home:
And unzip the patch into that folder. Because the unzip functionality in Windows is limited to 256 characters in the resulting path names, it is advised to use a tool like 7-Zip. Since I use TotalCommander for about everything (file related, that is), I get a neat dialog mentioning this and allowing me to keep the names.

Make sure you have closed JDeveloper.

Then open a command window and navigate to the patches folder:
Microsoft Windows [Version 10.0.17134.285]
(c) 2018 Microsoft Corporation. All rights reserved.

C:\Windows\system32>cd \oracle\JDeveloper\12213_BPMQS\OPatch\patches\28300397

C:\oracle\JDeveloper\12213_BPMQS\OPatch\patches\28300397>set ORACLE_HOME=C:\oracle\JDeveloper\12213_BPMQS

First set the ORACLE_HOME variable to the location where you installed JDeveloper, C:\oracle\JDeveloper\12213_BPMQS in my case.

Using opatch apply you can apply the patch:
c:\Oracle\JDeveloper\12213_SOAQS\OPatch\patches\28300397>..\..\opatch apply
Oracle Interim Patch Installer version
Copyright (c) 2018, Oracle Corporation. All rights reserved.

Oracle Home : c:\Oracle\JDeveloper\12213_SOAQS
Central Inventory : C:\Program Files\Oracle\Inventory
from :
OPatch version :
OUI version :
Log file location : c:\Oracle\JDeveloper\12213_SOAQS\cfgtoollogs\opatch\opatch2018-09-17_12-05-54PM_1.log

OPatch detects the Middleware Home as "C:\Oracle\JDeveloper\12213_SOAQS"

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 28300397

Do you want to proceed? [y|n]
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = 'c:\Oracle\JDeveloper\12213_SOAQS')

Is the local system ready for patching? [y|n]
User Responded with: Y
Backing up files...
Applying interim patch '28300397' to OH 'c:\Oracle\JDeveloper\12213_SOAQS'
ApplySession: Optional component(s) [ oracle.mft, ] , [ oracle.soa.workflow.wc, ] , [ oracle.integ
emina, ] , [ oracle.mft.apachemina, ] , [ oracle.bpm.plugins, ] , [ oracle.oep.examples, ] not present in the Oracle Home or a higher version is found.

Patching component oracle.soa.all.client,

Patching component oracle.integration.bam,

Patching component oracle.rcu.soainfra,

Patching component oracle.rcu.soainfra,

Patching component oracle.soacommon.plugins,

Patching component oracle.oep,

Patching component oracle.integration.soainfra,

Patching component oracle.integration.soainfra,

Patching component oracle.soa.common.adapters,

Patching component oracle.soa.procmon,
Patch 28300397 successfully applied.
Log file location: c:\Oracle\JDeveloper\12213_SOAQS\cfgtoollogs\opatch\opatch2018-09-17_12-05-54PM_1.log

OPatch succeeded.


Answer 'y' to the questions to proceed and to confirm that the Oracle Home is ready to be patched. And with opatch lsinventory you can check if the patch (and possibly others) is applied:

c:\Oracle\JDeveloper\12213_SOAQS\OPatch\patches\28300397>..\..\opatch lsinventory
Oracle Interim Patch Installer version
Copyright (c) 2018, Oracle Corporation. All rights reserved.

Oracle Home : c:\Oracle\JDeveloper\12213_SOAQS
Central Inventory : C:\Program Files\Oracle\Inventory
from :
OPatch version :
OUI version :
Log file location : c:\Oracle\JDeveloper\12213_SOAQS\cfgtoollogs\opatch\opatch2018-09-17_12-11-19PM_1.log

OPatch detects the Middleware Home as "C:\Oracle\JDeveloper\12213_SOAQS"

Lsinventory Output file location : c:\Oracle\JDeveloper\12213_SOAQS\cfgtoollogs\opatch\lsinv\lsinventory2018-09-17_12-11-19PM.txt

Local Machine Information::
ARU platform id: 233
ARU platform description:: Microsoft Windows Server 2003 (64-bit AMD)

Interim patches (5) :

Patch 28300397 : applied on Mon Sep 17 12:08:10 CEST 2018
Unique Patch ID: 22311639
Patch description: "SOA Bundle Patch"
Created on 6 Jul 2018, 01:58:16 hrs PST8PDT
Bugs fixed:
26868517, 27030883, 25980718, 26498324, 26720287, 27656577, 27639691
27119541, 25941324, 26739808, 27561639, 26372043, 27078536, 27024693
27633270, 27073918, 27210380, 27260565, 27247726, 27880006, 27171517
26573292, 26997999, 26484903, 27957338, 27832726, 27141953, 26851150
26696469, 27494478, 27150210, 27940458, 26982712, 27708925, 26645118
27876754, 24922173, 27486624, 26571201, 26935112, 26953820, 27767587
26536677, 27311023, 26385451, 26796979, 27715066, 27241933, 24971871
26472963, 27411143, 27230444, 27379937, 27640635, 26957183, 26031784
26408150, 27449047, 27019442, 26947728, 27368311, 26895927, 27268787
26416702, 27018879, 27879887, 27929443

Patch 26355633 : applied on Wed Sep 12 12:00:33 CEST 2018
Unique Patch ID: 21447583
Patch description: "One-off"
Created on 1 Aug 2017, 21:40:20 hrs UTC
Bugs fixed:

Patch 26287183 : applied on Wed Sep 12 11:59:56 CEST 2018
Unique Patch ID: 21447582
Patch description: "One-off"
Created on 1 Aug 2017, 21:41:27 hrs UTC
Bugs fixed:

Patch 26261906 : applied on Wed Sep 12 11:59:11 CEST 2018
Unique Patch ID: 21344506
Patch description: "One-off"
Created on 12 Jun 2017, 23:36:08 hrs UTC
Bugs fixed:
25559137, 25232931, 24811916

Patch 26051289 : applied on Wed Sep 12 11:58:45 CEST 2018
Unique Patch ID: 21455037
Patch description: "One-off"
Created on 31 Jul 2017, 22:11:57 hrs UTC
Bugs fixed:


OPatch succeeded.

Advanced SoapUI: Mocking a Async Request Response Service supporting WS-Addressing

Fri, 2018-09-14 08:18
Lately, I sat down with my appreciated colleague Frank, prepping for a SoapUI/ReadyAPI training due next week. After having solved some issues he had, we agreed upon designing an advanced lab.

In the past I wrote an article about how to test an Asynchronous Request Response BPEL process with WS-Addressing. It uses SoapUI as a test client, to test an async BPEL Process and catch the response. Frank suggested creating a SoapUI project that mocks the BPEL Process. And that's a whole other ball game!

SoapUI does support Mock Services. But those are in fact synchronous services: upon a request they send back a response. They're very flexible in that you can determine and select the responses that are sent back in several ways. You can even script the lot using an OnRequest Groovy script.

But in this case we do not want to send back a response. The thing with an Asynchronous Request Response service is that it actually consists of two complementary Fire & Forget services.
  1. The actual request service is a fire and forget service implemented by the service provider. It does not respond with a message, but it just starts processing the request.
  2. Then the service client implements a CallBack fire and forget service. Upon processing the request to the point that a response is built, the Service Provider will call this service with the actual response as a request.
How would you implement this with SoapUI? First, you create a Project with a TestCase as described in my referenced article. It will invoke the SOAP Service with a request and then bring up a SOAP Mock Response to catch the response.

For the async MockService we create a MockService that catches the request. But we leave the response empty: we do not want to reply with a response immediately. Instead, we use the OnRequest script to call a Test Case that simulates the process. The interesting part is to pass on the info from the request: the WS-Addressing elements (ReplyTo Address and MessageId) and the message content. But let's sort that out step by step.

By the way, I worked this out as a lab together with my colleague Frank, in both SoapUI and ReadyAPI simultaneously. So it works in both products. Instead of ReadyAPI's 'Virts', I stick with the SoapUI term MockServices. But the principles and code snippets work one-on-one.

Create a SoapUI project with a MockService
First create a SoapUI project. I used the wsdl and xsd that I published here on github.
Then create a MockService on the BPELProcessAsyncBinding Request binding:

  • Name it for example: BPELProcessAsyncBinding MockService.
  • Service Path: /mockBPELProcessAsyncBinding
  • Port: 8088
We don’t provide a response on purpose: it will be an async service, that will respond by doing an invoke later on.

Add the mockservice’s endpoint to the interface binding:
Remove the original endpoint from the interface, since it is a dummy endpoint ('http://localhost:7101/soa-infra/services/default/helloWorldBPELAsync/bpelprocessasync_client_ep').

Now you can test the MockService with an adhoc request.

Create a 'Client' Test case
In the SoapUI Project, create a TestSuite called TestSuite AsyncBPEL and add a TestCase called AsyncSvcClient:
Then clone the Adhoc Test Request to the testcase and call it InvokeAsyncService:

To pick up the response we need to add a MockResponse based on the CallBack binding of the wsdl:
Base it on the CallBack Binding of the wsdl:
Take note of the Port and the Path, if you choose to use something other than the 8090 and /HelloWorldCallback that I used for this article.

It is important that this step is started as soon as the request is sent. It takes time to start up the MockResponse listener. So, you need to couple it to the corresponding SOAP Request step. To do so, go to the properties of the AsyncReceive MockResponse step and set the start step of the MockResponse step to InvokeAsyncService:

This will ensure that when the InvokeAsyncService step is executed the AsyncReceive mock response is started, so that it can be called as soon as the ServiceProvider wants to send back its response.

Note that the xml request of the AsyncReceive step is empty, as well as the response. The response will stay unused, but the request is to capture the callback message from the service provider, as we will see later on.

Setup the Async Service Provider
The MockService inherently is a synchronous mechanism, normally used to respond with a response message on a request. Since we want to implement an asynchronous request-reply mock service, we won't respond with a message. So the response message stays empty. How are we going to respond then? We will build a second test case that will be executed on request by a Groovy Script on the MockService. By building up a context from the request message and providing that to the running testcase, we will provide the test case with the information to invoke the AsyncReceive step of the client test case.

Thus we create a new test case, and it will do two things:
  1. Extract the request properties from the context, they will consist of the following properties:
    1. WS Addressing ReplyTo Address
    2. WS Addressing MessageId
    3. HelloWorld Input message (payload elements)
  2. Do the Callback based on the provided information.
To implement this perform the following:
  1. Create a new TestSuite, called AsyncBPELSvcProvider and add a TestCase, called AsyncSvcProvider.
  2. Add a SOAP Request step, named CallBackAsyncSvcClient, and base it on the BPELProcessAsyncCallbackBinding:
  3. As a result value provide ‘Hello’ for now.
  4. As an endpoint, set http://localhost:8090/HelloWorldCallback. We will change that to a property later, fetched from the context.
  5. Remove a possible assertion to check on the Soap Response Message (since we won’t get one).
  6. If you want to test now, you can run the AsyncSvcClient but it will wait on the AsyncReceive step. To have that execute, you should manually run the AsyncSvcProvider test case.

Now we need to have the new TestCase called from the OnRequest script of the MockService.
For that we add a few properties to the MockService, to denote the TestSuite and the containing TestCase that implements our ServiceProvider process.
Then, using a basic Groovy script that we will extend later on, we make sure that that test case is run.

  1. Add two Custom Properties:
    1. AsyncTestSuite, with value: AsyncBPELSvcProvider
    2. AsyncSvcProvTestCase, with value: AsyncSvcProvider
  2. On the OnRequest script of the Mock Service:

    Add the following script:
    def mockService = context.mockService
    def method = mockService.name+".Response 1.OnRequest Script"
    log.info("Start "+method)
    def project = mockService.project
    log.info("Project "+project.name)
    def asyncTestSuiteName = mockService.getPropertyValue( "AsyncTestSuite")
    def asyncTestSuite = project.getTestSuiteByName(asyncTestSuiteName)
    log.info("TestSuite: "+asyncTestSuite.name)
    def asyncSvcProvTestCaseName = mockService.getPropertyValue( "AsyncSvcProvTestCase")
    def asyncSvcProvTestCase = asyncTestSuite.getTestCaseByName(asyncSvcProvTestCaseName)
    log.info("TestCase: "+asyncSvcProvTestCase.name)
    //Log Request

    // Set Service Context
    def svcContext = (com.eviware.soapui.support.types.StringToObjectMap)context

    //Invoke Async Service Provider TestCase
    asyncSvcProvTestCase.run(svcContext, false)
    // End Method
    log.info("End "+method)

    What this does is the following:
    1. Define the mockService and the project objects from the context variable.
    2. Get the TestSuite and TestCase objects based on the MockService property values of the TestCase to be called.
    3. Create a serviceContext, to be used to do property transfer later on.
    4. Run the testCase using the created serviceContext.
  3. Now you can test this by invoking the AsyncSvcClient test case. You might want to remove the current content of the request of the AsyncReceive step.

Transfer Request Context properties to ServiceProvider TestCase
Now we want to at least transfer the helloworld input in the request from the MockService to the service provider testcase, so that it can add it to the response message.

In the OnRequest Groovy Script we already created a context. We can simply set additional properties to that context. The values we can extract from the request, by xpath.

  1. Go to the OnRequest groovy script and extend your existing script to reflect the following:
    def mockService = context.mockService
    def method = mockService.name+".Response 1.OnRequest Script"
    log.info("Start "+method)
    def project = mockService.project
    log.info("Project "+project.name)
    def asyncTestSuiteName = mockService.getPropertyValue( "AsyncTestSuite")
    def asyncTestSuite = project.getTestSuiteByName(asyncTestSuiteName)
    log.info("TestSuite: "+asyncTestSuite.name)
    def asyncSvcProvTestCaseName = mockService.getPropertyValue( "AsyncSvcProvTestCase")
    def asyncSvcProvTestCase = asyncTestSuite.getTestCaseByName(asyncSvcProvTestCaseName)
    log.info("TestCase: "+asyncSvcProvTestCase.name)
    //Log Request
    // Added lines ==>
    def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
    // Set Namespaces and query request
    def holder = groovyUtils.getXmlHolder(mockRequest.getRequestContent())
    holder.namespaces["soapenv"] = "http://schemas.xmlsoap.org/soap/envelope/"
    holder.namespaces["bpel"] = "http://xmlns.oracle.com/ReadyAPIHellloWorldSamples/helloWorldBPELAsync/BPELProcessAsync"
    holder.namespaces["wsa"] = "http://www.w3.org/2005/08/addressing"
    def helloInput = holder.getNodeValue("/soapenv:Envelope/soapenv:Body/bpel:process/bpel:input")
    // Set Service Context
    def svcContext = (com.eviware.soapui.support.types.StringToObjectMap)context
    svcContext.helloInput = helloInput
    // <==Added lines
    log.info("helloInput: "+svcContext.helloInput)
    //Invoke Async Service Provider TestCase
    asyncSvcProvTestCase.run(svcContext, false)
    // End Method
    log.info("End "+method)
    This adds the following:
    1. A declaration of the groovyUtils, which is used to get a so-called XmlHolder that contains the content of the Request in parsed XML format.
    2. Declare namespace references in the holder.
    3. Query the helloInput using the xpath expression: "/soapenv:Envelope/soapenv:Body/bpel:process/bpel:input” from the request.
    4. Set this as a helloInput property on the service context.

  • Now we need to extract these properties in the AsyncSvcProvider TestCase, so that we can use it in the request of the callback. To do so add a Groovy Test Step to the AsyncSvcProvider TestCase, as a first step:

    Call it GetContextProperties, and move it as the first step in the TestCase:
  • Add the following to the script:
    def testCase=testRunner.testCase
    def testSuite=testCase.testSuite
    def methodName=testSuite.name+"."+testCase.name+".getContextProperties"
    log.info("Start MethodName: "+methodName)
    def helloInput=context.helloInput
    log.info(methodName+" Received HelloInput: "+helloInput)
    testCase.setPropertyValue("helloInput", helloInput)
    log.info("End MethodName: "+methodName)

    In the top right corner of the editor you can see that, besides a log variable, a context variable is provided as well:

    This variable will contain the properties we set in the call to the testcase from the MockService.
    As you can see we get the property from the context, and set it as a TestCase property.
  • Add the helloInput property to the AsyncSvcProvider TestCase. You don’t need to provide a value, it just needs to exist. 
  • Lastly, in the request of the CallBackAsyncSvcClient step, add ${#TestCase#helloInput} to the result:

  • Configure WS-Addressing
    In the previously mentioned blog article you can read how to create a test case that supports WS-Addressing to call and test an asynchronous (BPEL) request response service. Now, with the above, we have the plumbing in place to add the WS-Addressing specifics to simulate and test the Asynchronous RequestResponse Service Provider.

    We need then to provide and process the following:
    • A WS Addressing Reply To Address, based on property values that matches the port and path of the AsyncReceive step.
    • A message id that is used to validate whether the response uses the correctly provided messageId header value. In a real-life case this message id is used by the SOA Suite infrastructure to correlate the response to the correct process instance that requested it. This is not supported/implemented in SoapUI, since that tool is not meant for that. But we can add an assertion to check the correct responding of this property.
    To implement this, perform the following:
    1. On the AsyncSvcClient test case add the following properties:
      • callbackURI, with value: HelloWorldCallback
      • callbackPort, with value: 8090
      • callbackHost, with value: localhost
      • wsAddressingReplyToEndpoint, with value: http://${#TestCase#callbackHost}:${#TestCase#callbackPort}/${#TestCase#callbackURI}
      • wsAddressingMessageId, with no value

      You see that the wsAddressingReplyToEndpoint is dynamically built up from the previous properties. The callbackURI and the callbackPort should exactly match the values of the path and the port of the AsyncReceive step (without the initial slash):

      The property wsAddressingMessageId does not need a value: we will generate a value in another Groovy TestStep.
    2.  Add a Groovy TestStep to AsyncSvcClient test case, call it GenerateWSAMessageId,  and move it to the top, and add the following code:
      def testCase=testRunner.testCase
      def testSuite=testCase.testSuite
      def methodName=testSuite.name+"."+testCase.name+".GenerateWSAMessageId"
      log.info("Start "+methodName)
      def wsAddressingMessageId=Math.round((Math.random()*10000000000))
      testCase.setPropertyValue("wsAddressingMessageId", wsAddressingMessageId.toString())
      log.info("End "+methodName)

      This generates a random number, multiplies it by a big number and rounds it to create an integer value.
    3. Now we will add the WS Addressing properties to the request. Open the InvokeAsyncService test step and click on the WS-A tab at the bottom:

      Set the following properties:
      • Check Enable WS-A Addressing
      • Set Must understand to TRUE
      • Leave WS-A Version to 200508
      • Check Add default wsa:Action
      • Set Reply to to: ${#TestCase#wsAddressingReplyToEndpoint}
      • Uncheck Generate MessageID
      • Set MessageID to: ${#TestCase#wsAddressingMessageId}
      The Reply To address and the MessageID now are based on the earlier determined properties.
      • If you would test this, then the request that will be sent will look like:
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:bpel="http://xmlns.oracle.com/ReadyAPIHellloWorldSamples/helloWorldBPELAsync/BPELProcessAsync">
          <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
            <wsa:Action soapenv:mustUnderstand="1">process</wsa:Action>
            <wsa:ReplyTo soapenv:mustUnderstand="1">
              <wsa:Address>http://localhost:8090/HelloWorldCallback</wsa:Address>
            </wsa:ReplyTo>
            <wsa:MessageID soapenv:mustUnderstand="1">9094853750</wsa:MessageID>
          </soapenv:Header>
          ...
        </soapenv:Envelope>

        You see that the ReplyTo Address is set (as a nested element), as well as the MessageID. You won’t see this in the Request XML panel, but in the http log or in the script log, since we log the request in the OnRequest script of the MockService. The WS-Addressing properties are added to the soap:header on invoke.
      • Since we have these elements in the request, we can extract them the same way as we did with the helloInput in the OnRequest script of the MockService. Add the lines denoted with // Added lines ==> and // <== Added lines from the following script to your script (or copy and paste the complete script):
        def mockService = context.mockService
        def method = mockService.name+".Response 1.OnRequest Script"
        log.info("Start "+method)
        def project = mockService.project
        log.info("Project "+project.name)
        def asyncTestSuiteName = mockService.getPropertyValue( "AsyncTestSuite")
        def asyncTestSuite = project.getTestSuiteByName(asyncTestSuiteName)
        log.info("TestSuite: "+asyncTestSuite.name)
        def asyncSvcProvTestCaseName = mockService.getPropertyValue( "AsyncSvcProvTestCase")
        def asyncSvcProvTestCase = asyncTestSuite.getTestCaseByName(asyncSvcProvTestCaseName)
        log.info("TestCase: "+asyncSvcProvTestCase.name)
        //Log Request
        def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
        // Set Namespaces and query request
        def holder = groovyUtils.getXmlHolder(mockRequest.getRequestContent())
        holder.namespaces["soapenv"] = "http://schemas.xmlsoap.org/soap/envelope/"
        holder.namespaces["bpel"] = "http://xmlns.oracle.com/ReadyAPIHellloWorldSamples/helloWorldBPELAsync/BPELProcessAsync"
        holder.namespaces["wsa"] = "http://www.w3.org/2005/08/addressing"
        def helloInput = holder.getNodeValue("/soapenv:Envelope/soapenv:Body/bpel:process/bpel:input")
        // Added lines ==>
        def wsaReplyToAddress = holder.getNodeValue("/soapenv:Envelope/soapenv:Header/wsa:ReplyTo/wsa:Address")
        def wsaInReplyToMsgId = holder.getNodeValue("/soapenv:Envelope/soapenv:Header/wsa:MessageID")
        // <== Added lines
        // Set Service Context
        def svcContext = (com.eviware.soapui.support.types.StringToObjectMap)context
        svcContext.helloInput = helloInput
        // Added lines ==>
        svcContext.wsaReplyToAddress = wsaReplyToAddress
        svcContext.wsaInReplyToMsgId = wsaInReplyToMsgId
        // <== Added lines
        log.info("helloInput: "+svcContext.helloInput)
        // Added lines ==>
        log.info("wsaReplyToAddress: "+svcContext.wsaReplyToAddress)
        log.info("wsaInReplyToMsgId: "+svcContext.wsaInReplyToMsgId)
        // <== Added lines
        //Invoke Async Service Provider TestCase
        asyncSvcProvTestCase.run(svcContext, false)
        // End Method
        log.info("End "+method)
      • These context properties need to be extracted in the GetContextProperties step of the AsyncSvcProvider test case, to set them as TestCase properties. So, add the following properties (with no values) to the AsyncSvcProvider test case:
        • wsaReplyToAddress
        • wsaInReplyToMsgId
      • In the GetContextProperties test step, add the lines with the added properties (or copy and paste the complete script):
        def testCase=testRunner.testCase
        def testSuite=testCase.testSuite
        def methodName=testSuite.name+"."+testCase.name+".getContextProperties"
        log.info("Start MethodName: "+methodName)
        def wsaReplyToAddress=context.wsaReplyToAddress
        def wsaInReplyToMsgId=context.wsaInReplyToMsgId
        def helloInput=context.helloInput
        log.info(methodName+" Received wsaReplyToAddress: "+wsaReplyToAddress)
        log.info(methodName+" Received wsaInReplyToMsgId: "+wsaInReplyToMsgId)
        log.info(methodName+" Received HelloInput: "+helloInput)
        testCase.setPropertyValue("helloInput", helloInput)
        testCase.setPropertyValue("wsaReplyToAddress", wsaReplyToAddress)
        testCase.setPropertyValue("wsaInReplyToMsgId", wsaInReplyToMsgId.toString())
        // End
        log.info("End MethodName: "+methodName)

        (Since the wsaInReplyToMsgId comes in as an integer, it needs to be converted with toString() before it can be set as a property value.)
      • The pre-final step is to adapt the CallBackAsyncSvcClient step to use the wsaReplyToAddress as an endpoint and the wsaInReplyToMsgId as a header property. Edit the endpoint in the step to ${#TestCase#wsaReplyToAddress}:

        Edit the soap header to:
           <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
              <wsa:RelatesTo>${#TestCase#wsaInReplyToMsgId}</wsa:RelatesTo>
           </soapenv:Header>

      • The final step is to add an XPath Match assertion on the AsyncReceive step, to validate the wsaInReplyToMsgId in the response. Call it WSAInReplyToMessageId and provide the following xpath:
        declare namespace wsa='http://www.w3.org/2005/08/addressing';
        declare namespace bpel='http://xmlns.oracle.com/ReadyAPIHellloWorldSamples/helloWorldBPELAsync/BPELProcessAsync';
        declare namespace soapenv='http://schemas.xmlsoap.org/soap/envelope/';
        /soapenv:Envelope/soapenv:Header/wsa:RelatesTo
        As an Expected Result value provide: ${#TestCase#wsAddressingMessageId}.
      • Test the completed AsyncSvcClient.
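      Such a WS-Addressing correlation check can also be sanity-checked outside ReadyAPI. A Java sketch (the class name, the sample message, and the assumption that the callback carries a wsa:RelatesTo header are mine) evaluating the namespace-qualified path /soapenv:Envelope/soapenv:Header/wsa:RelatesTo:

```java
import java.io.StringReader;
import java.util.Iterator;
import javax.xml.XMLConstants;
import javax.xml.namespace.NamespaceContext;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class WsaRelatesToCheck {
    // Parse a SOAP message namespace-aware and evaluate a prefixed XPath on it.
    public static String relatesTo(String soapMessage) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        Document doc = dbf.newDocumentBuilder().parse(new InputSource(new StringReader(soapMessage)));
        XPath xp = XPathFactory.newInstance().newXPath();
        xp.setNamespaceContext(new NamespaceContext() {
            public String getNamespaceURI(String prefix) {
                if ("soapenv".equals(prefix)) return "http://schemas.xmlsoap.org/soap/envelope/";
                if ("wsa".equals(prefix)) return "http://www.w3.org/2005/08/addressing";
                return XMLConstants.NULL_NS_URI;
            }
            public String getPrefix(String namespaceURI) { return null; }
            public Iterator<String> getPrefixes(String namespaceURI) { return null; }
        });
        return xp.evaluate("/soapenv:Envelope/soapenv:Header/wsa:RelatesTo", doc);
    }
}
```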
      Conclusion
      This concludes the setup, and shows how to create a WS-Addressing supporting asynchronous request-response service. I hope you got this far. In that case: I'm impressed. This is quite advanced SoapUI/ReadyAPI! stuff. But it shows the power of the tools. And even if you wouldn't use this as-is, you might get some nice tips from it.

      Upgrade SOA 11g to 12c: Invalid Composite File

      Fri, 2018-09-14 05:02
      You might not expect it, but not every customer has already moved from SOA Suite 11g to 12c. My current customer, for instance, hasn't. Because we're in a bit of a lee period, I'm looking into upgrading their composite projects.
      Solve invalid composite files
      One thing I ran into almost immediately is that for several projects the composite.xml was invalid. It turned out that it wasn't even upgraded.

      I found this support.oracle.com article, DocID 2333742.1. It says that it's not a bug, because the problem is in the 11g project. Now, we could discuss that, since the 11g project works, and might have been upgraded from an earlier version (10g or an early 11g patchset). So, the upgrade process could have been improved. The solution is, however, quite simple.

      According to the support note, the .jpr file lacks several elements. A closer look brought me the idea that it could be narrowed down to only one element:
      <hash n="oracle.ide.model.TechnologyScopeConfiguration">
        <list n="technologyScope">
          <string v="SOA"/>
        </list>
      </hash>
      In other words, the project lacks the technology scope SOA. Apparently Oracle changed the Integration technology in SOA during the product's evolution (no bug, but it would be nice if the upgrade process took this into account). Because of this, the composite is not upgraded.

      Changing every one of the tens or hundreds of composite projects can be a tedious job. And I figured that it wouldn't be the only problem I would run into.

      Luckily, the .jpr file is an XML file. So, with an XSLT file we can pre-upgrade the .jpr file. Based on this example, I created the following prepareJpr.xsl stylesheet:
      <?xml version="1.0"?>
      <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <!-- Update JPR
        @Author: M. van den Akker, Darwin-IT Professionals, 2018-09-12
        Based on: https://stackoverflow.com/questions/5876382/using-xslt-to-copy-all-nodes-in-xml-with-support-for-special-cases#5877772 -->
        <!-- Identity template: copy everything as-is -->
        <xsl:template match="@*|node()">
          <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
          </xsl:copy>
        </xsl:template>
        <!-- Check Existence of SOA technology, because of Invalid composite file exceptions. MOS DocId 2333742.1 -->
        <xsl:template match="hash[@n='oracle.ide.model.TechnologyScopeConfiguration']/list[@n='technologyScope']">
          <xsl:copy>
            <xsl:apply-templates select="@*|node()"/>
            <xsl:choose>
              <!-- If SOA doesn't exist then add it -->
              <xsl:when test="count(./string[@v='SOA'])=0">
                <xsl:comment>Add SOA technology</xsl:comment>
                <string v="SOA"/>
              </xsl:when>
              <xsl:otherwise>
                <xsl:comment>SOA technology already present</xsl:comment>
              </xsl:otherwise>
            </xsl:choose>
          </xsl:copy>
        </xsl:template>
      </xsl:stylesheet>
      If the SOA technology scope does not exist, it is added, with a comment to denote it was added by this utility. If it does exist, I add a comment to denote that, to be sure that the xslt works and that the project has been handled, but was considered ok.

      Then I created a prepareSOA11gApplication.xml ANT project that loops over the projects in the target application folder and runs the XSLT over every .jpr file in the application.

      It looks like something as follows (since I added some other functionality that I removed for this article):
      <?xml version="1.0" encoding="windows-1252" ?>
      <!--Ant buildfile generated by Oracle JDeveloper-->
      <!--Generated Sep 12, 2018 3:06:25 PM-->
      <project xmlns="antlib:org.apache.tools.ant" name="Prepare11gProjects" default="prepareApplication" basedir=".">
        <property file="build.properties"/>
        <taskdef resource="net/sf/antcontrib/antlib.xml">
          <classpath>
            <pathelement location="${ant-contrib.jar}"/>
          </classpath>
        </taskdef>
        <target name="prepareApplication" description="Refactor SOA 11g Project pre upgrade to 12c." depends="">
          <echo>Prepare ${SOA11gAppName} in ${SOA11gAppFolder}</echo>
          <echo>. Prepare projects</echo>
          <foreach target="PrepareProjectFile" param="projectFile">
            <path>
              <fileset dir="${SOA11gAppFolder}" casesensitive="yes">
                <include name="**/*.jpr"/>
              </fileset>
            </path>
          </foreach>
        </target>
        <target name="PrepareProjectFile">
          <echo message=".. Prepare ${projectFile}"/>
          <property name="projectFileOrg" value="${projectFile}.org"/>
          <echo message="... backup ${projectFile} to ${projectFileOrg}"/>
          <move file="${projectFile}" tofile="${projectFileOrg}" overwrite="false"/>
          <echo message="... transform ${projectFileOrg} to ${projectFile} using ${prepareJprXsl}"/>
          <xslt style="${prepareJprXsl}" in="${projectFileOrg}" out="${projectFile}"/>
        </target>
      </project>
      I also added functionality to add the projects to an emptied .jws file, and to add an adf-config.xml file.
      That way, I get a prepared, pre-upgraded workspace that contains only the projects I want to upgrade at that time.

      Adapt .jca files
      Another type of file that I found to be invalid after the upgrade is the .jca file. Some projects showed invalid JCA adapters in the composite.
      It turns out that Oracle also changed the adapter names in the jca files. A .jca file starts with:
      <adapter-config name="dbDatabaseAdapterService" adapter="db" wsdlLocation="../WSDLs/dbDatabaseAdapterService.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">

      But in some files this line reads:
      <adapter-config name="dbDatabaseAdapterService" adapter="DB Adapter" wsdlLocation="../WSDLs/dbDatabaseAdapterService.wsdl" xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">

      So, I need a similar XSLT file that adapts these attributes. For other adapters the changes would be:
      • AQ Adapter: aq
      • Apps Adapter: apps
      • etc.
      I wouldn't rule out that I need other adaptations too.
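      Pending such an XSLT, the renaming comes down to a simple attribute substitution. A hypothetical Java sketch (the class name is mine; only "DB Adapter" → "db" is taken from the article, the other entries follow the pattern of the list above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class JcaAdapterRename {
    // 11g long adapter names -> short names expected by 12c.
    static final Map<String, String> ADAPTER_NAMES = new LinkedHashMap<>();
    static {
        ADAPTER_NAMES.put("DB Adapter", "db");
        ADAPTER_NAMES.put("AQ Adapter", "aq");
        ADAPTER_NAMES.put("Apps Adapter", "apps");
    }

    // Rewrite the adapter attribute of an adapter-config element in a .jca file's content.
    public static String fixAdapterAttribute(String jcaContent) {
        String result = jcaContent;
        for (Map.Entry<String, String> e : ADAPTER_NAMES.entrySet()) {
            result = result.replace("adapter=\"" + e.getKey() + "\"",
                                    "adapter=\"" + e.getValue() + "\"");
        }
        return result;
    }
}
```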

      FMW 12c Topology Suggestions

      Tue, 2018-09-11 02:14
      A year or two, maybe three, ago I found this terrific article on Fusion Middleware 12c topology suggestions.

      I find myself searching for this one from time to time. So it's time to write this little note.

      It explains which combinations of FMW products in a domain make sense and especially what the different Server Groups mean.

      Zipping is easy in Java/Spring/SOASuite

      Wed, 2018-09-05 12:35
      Why use Spring in SOASuite?
      This post is actually about Spring in SOASuite. The last few weeks I've come across several questions and implementations using Embedded Java in BPEL. The Java activity in Oracle BPEL is ideal for very small java snippets, to do a simple Base64 encode/decode for example. However, in the situations I encountered these weeks, the code samples were more complex. And my very first recommendation in these situations is to use the SpringContext component, a technique introduced in SOASuite 11g, that I wrote about in 2012 already. I called it 'Forget about WSIF: welcome Spring', but really: forget about Embedded Java in most cases too! So, with this article, I want to showcase the SpringContext component in SOASuite.

      A few disadvantages of the Embedded Java activity:
      • To get to, and set or manipulate, your BPEL data, you'll need to compose getVariableData() and setVariableData() functions with xpath references. You can use temporary Assign activities with copy rules, from which you can extract the actual variable and xpath references, to copy and paste into the getVariableData() functions. But the best workaround, to me, is to declare a few xsd:string based variables and use Assigns before and after to exchange the data with the java snippet.
      • You can't test/debug the java snippets properly.
      • Catching and handling exceptions may not behave as you might want.
      • You can introspect variables that are referenced in your java snippet from the flow trace. But you must rely on a visual code comparison between the bpel and the java snippet.
      The advantages of using a SpringContext component:
      • You can test your java code standalone, and just reuse the methods as is.
      • Wiring your Spring Component to your BPEL will result in a WSDL and a PartnerLink. From BPEL you just invoke it like any other service. 
      • In the flow trace you will see the input and output variables as-is (provided that you set the audit level to Development). And you can just assign the data to and from the variables based on the wsdl message types.
      • It's all declarative.
      A disadvantage might be that it is a bit more complex. It might feel like a hassle, so you probably wouldn't use it for 2 lines of Java code. On the other hand, try it out; I guess you might experience it as so much easier and more reliable.
      The case at hand
      In the case at hand, as referred to in the title, we need to read a zip file. We get files from Oracle B2B that are saved on the filesystem with a system-generated filename. That is different from the file as sent, which is provided using the content-disposition MIME header. The client wants to have the file moved and saved using the originating filename. We couldn't get it from B2B in an acceptable way. B2B saves the file using a filename like 'M1234987309891.2341921@myTPR1234_te2309098033.dat'. But it turns out to be a zip file that contains the file, with the name as we want it. So, we could just introspect the file to determine the name as which we want to move it. This means of course that we need to read the file from Java. And to me it does not feel right to do that from a Java activity in BPEL. It's something more or less functional, so I want to abstract it into a service, based on a java class.
      Unzip in Java
      I found an unzip method in Java that does just about what I want here. It comes down to:
      package nl.darwinit.ziputils;

      import java.io.File;
      import java.io.FileInputStream;
      import java.io.FileOutputStream;
      import java.io.IOException;

      import java.util.zip.ZipEntry;
      import java.util.zip.ZipInputStream;

      import nl.darwinit.ziputils.beans.ListZipEntriesRequest;
      import nl.darwinit.ziputils.beans.ListZipEntriesResponse;
      import nl.darwinit.ziputils.beans.ZipEntryFile;

      public class Unzip implements IUnzip {

          public static final String className = "nl.darwinit.ziputils.Unzip";

          public Unzip() {
          }

          private static void unzip(String zipFilePath, String destDir) {
              File dir = new File(destDir);
              // create output directory if it doesn't exist
              if (!dir.exists()) {
                  dir.mkdirs();
              }
              FileInputStream fis;
              //buffer for read and write data to file
              byte[] buffer = new byte[1024];
              try {
                  fis = new FileInputStream(zipFilePath);
                  ZipInputStream zis = new ZipInputStream(fis);
                  ZipEntry ze = zis.getNextEntry();
                  while (ze != null) {
                      String fileName = ze.getName();
                      File newFile = new File(destDir + File.separator + fileName);
                      System.out.println("Unzipping to " + newFile.getAbsolutePath());
                      //create directories for sub directories in zip
                      new File(newFile.getParent()).mkdirs();
                      FileOutputStream fos = new FileOutputStream(newFile);
                      int len;
                      while ((len = zis.read(buffer)) > 0) {
                          fos.write(buffer, 0, len);
                      }
                      fos.close();
                      //close this ZipEntry
                      zis.closeEntry();
                      ze = zis.getNextEntry();
                  }
                  //close last ZipEntry
                  zis.closeEntry();
                  zis.close();
              } catch (IOException e) {
                  e.printStackTrace();
              }
          }

          public static void main(String[] args) {
              Unzip unzip = new Unzip();
              int i = 0;
              for (String arg : args) {
                  log("main", "arg[" + i++ + "]: " + arg);
              }
              ListZipEntriesRequest listZipEntriesRequest = new ListZipEntriesRequest();
              listZipEntriesRequest.setZipFileFolder(args[0]);
              listZipEntriesRequest.setZipFileName(args[1]);
              ListZipEntriesResponse listZipEntriesResponse = unzip.listZipEntries(listZipEntriesRequest);
              log("main", listZipEntriesResponse.toString());
          }
      }

      Besides reading all the ZipEntries in the zip file one by one, this does also save it using a FileOutputStream. But I do not want to output the files, but just get the filename. Another thing: this example shows a private static function. But for the SpringContext I need an instantiatable java class, so with a constructor, and a public method. So I created an Unzip class (with default constructor) and the following method:
          public ListZipEntriesResponse listZipEntries(ListZipEntriesRequest listZipEntriesRequest) {
              final String methodName = "listZipEntries";
              ListZipEntriesResponse listZipEntriesResponse = new ListZipEntriesResponse();
              FileInputStream fileInputStream;
              log(methodName, "List entries of " + listZipEntriesRequest.getZipFilePath());
              try {
                  fileInputStream = new FileInputStream(listZipEntriesRequest.getZipFilePath());
                  ZipInputStream zipInputStream = new ZipInputStream(fileInputStream);
                  ZipEntry zipEntry = zipInputStream.getNextEntry();
                  while (zipEntry != null) {
                      String fileName = zipEntry.getName();
                      log(methodName, "Entry: " + fileName);
                      ZipEntryFile zipEntryFile = new ZipEntryFile();
                      zipEntryFile.setFileName(fileName);
                      listZipEntriesResponse.addZipEntryFile(zipEntryFile);
                      //close this ZipEntry
                      zipInputStream.closeEntry();
                      zipEntry = zipInputStream.getNextEntry();
                  }
                  //close last ZipEntry
                  zipInputStream.closeEntry();
                  zipInputStream.close();
              } catch (IOException e) {
                  e.printStackTrace();
              }
              return listZipEntriesResponse;
          }

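      The log helper used in these methods is not shown in the listings. A minimal standalone sketch (the class name LogHelper and the exact output format are my own assumptions; the article only states the helpers wrap System.out.println()):

```java
public class LogHelper {
    public static final String className = "nl.darwinit.ziputils.Unzip";

    // Thin wrappers around System.out.println().
    public static void logStart(String methodName) {
        log(methodName, "Start");
    }

    public static void logEnd(String methodName) {
        log(methodName, "End");
    }

    public static void log(String methodName, String message) {
        System.out.println(className + "." + methodName + ": " + message);
    }
}
```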
      The logStart, logEnd and log methods are just wrappers around System.out.println(). As an input I have a bean with:
      package nl.darwinit.ziputils.beans;

      import java.io.File;

      public class ListZipEntriesRequest implements IListZipEntriesRequest {

          private String zipFileFolder;
          private String zipFileName;

          public void setZipFileFolder(String zipFileFolder) {
              this.zipFileFolder = zipFileFolder;
          }

          public String getZipFileFolder() {
              return zipFileFolder;
          }

          public void setZipFileName(String zipFileName) {
              this.zipFileName = zipFileName;
          }

          public String getZipFileName() {
              return zipFileName;
          }

          public String getZipFilePath() {
              return getZipFileFolder() + File.separator + getZipFileName();
          }
      }

      Then as a response a bean with:
      package nl.darwinit.ziputils.beans;

      import java.util.ArrayList;
      import java.util.List;

      public class ListZipEntriesResponse implements IListZipEntriesResponse {
          private List<ZipEntryFile> zipEntryFiles;

          public void setZipEntryFiles(List<ZipEntryFile> zipEntryFiles) {
              this.zipEntryFiles = zipEntryFiles;
          }

          public List<ZipEntryFile> getZipEntryFiles() {
              return zipEntryFiles;
          }

          public void addZipEntryFile(ZipEntryFile zipEntryFile) {
              if (zipEntryFiles == null) {
                  zipEntryFiles = new ArrayList<ZipEntryFile>();
              }
              zipEntryFiles.add(zipEntryFile);
          }

          public String toString() {
              StringBuffer strBuf = new StringBuffer("ListZipEntriesResponse\n");
              for (ZipEntryFile zipEntryFile : zipEntryFiles) {
                  strBuf.append("ZipEntryFile: " + zipEntryFile.toString() + "\n");
              }
              return strBuf.toString();
          }
      }

      That uses a ZipEntryFile bean, as a List:
      package nl.darwinit.ziputils.beans;

      public class ZipEntryFile implements IZipEntryFile {
          private String fileName;

          public void setFileName(String fileName) {
              this.fileName = fileName;
          }

          public String getFileName() {
              return fileName;
          }

          public String toString() {
              return "fileName: " + getFileName() + "\n";
          }
      }

      Place the source of your classes in the src folder of your SOA project. The classes are compiled into the SOA/SCA-INF/classes folder (in 11g, you won't have the SOA subfolder). In 12c the sources can also be placed in the SOA/SCA-INF/src folder. But I found that this causes a mysterious NullPointerException.
      The spring context
      My article 'Forget about WSIF: welcome Spring' neatly describes how to create a spring context in 11g. In 12c it does not differ much. You don't need to define separate Spring components per bean, though, but you actually can, as shown in the article. You do need to extract an interface from the main class, with only those methods you need in the interface. I actually extracted interfaces from all the beans, but you shouldn't have to.

      In this case the Service xml would look like:
      <?xml version="1.0" encoding="windows-1252" ?>
      <beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util"
      xmlns:jee="http://www.springframework.org/schema/jee" xmlns:lang="http://www.springframework.org/schema/lang"
      xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
      xmlns:sca="http://xmlns.oracle.com/weblogic/weblogic-sca" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tool http://www.springframework.org/schema/tool/spring-tool.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd http://xmlns.oracle.com/weblogic/weblogic-sca META-INF/weblogic-sca.xsd">
      <!--Spring Bean definitions go here-->
      <sca:service name="Unzip" target="UnzipBean" type="nl.darwinit.ziputils.IUnzip"/>
      <bean id="UnzipBean" class="nl.darwinit.ziputils.Unzip"/>
      <bean id="ListZipEntriesRequestBean" class="nl.darwinit.ziputils.beans.ListZipEntriesRequest"/>
      <bean id="ListZipEntriesResponseBean" class="nl.darwinit.ziputils.beans.ListZipEntriesResponse"/>
      <bean id="ZipEntryFileBean" class="nl.darwinit.ziputils.beans.ZipEntryFile"/>
      </beans>

      Wire your Spring component to the BPEL component, and you should get a dialog telling you that the WSDL is created, probably with an accompanying wrapper wsdl for the partnerlink roles.

      And then in BPEL just invoke the code.

      Happy Spring in SOA (better that than the other way around, if you speak Dutch...).

      Docker on Oracle Linux

      Mon, 2018-09-03 09:05
      It occurred to me that if you want to start using Docker, there are plenty of examples that use Ubuntu as a base platform. I read a book called Learning Docker that assumes Ubuntu for the examples, for instance. I know I am quite stubborn, a "know-it-better" person, but I want to be able to do the same on Oracle Linux.

      Docker on Oracle Linux turns out not to be too complicated. But I ran into a caveat that I solved and want to share.

      I use Vagrant to bring up an Oracle Linux box, based on a Vagrantfile that prepares the box. With that as a starting point, I created a script that does the complete docker installation. In the following I'll build it up for you, step by step. I'll add my project to GitHub, and provide a link to the complete script at the end.
      Init
      First some initialization and a function to read the property file:
      #!/bin/bash
      # Install docker on Oracle Linux.
      # @author: Martien van den Akker, Darwin-IT Professionals.
      SCRIPTPATH=$(dirname $0)
      function prop {
        grep "${1}" $SCRIPTPATH/makeDockerUser.properties|cut -d'=' -f2
      }

      DOCKER_USER=$(prop 'docker.user')
      DOCKER_GROUP=docker

      This sets the $SCRIPTPATH variable to the folder where the script resides, so I can refer to other files relative to it.
      The function prop allows me to get a property from a property file. A smart function that I got from a colleague (thanks Rob). It is based on this property file, called makeDockerUser.properties:

      I set the DOCKER_USER and DOCKER_GROUP variables.
      The DOCKER_GROUP variable is hardcoded, however, since that is the standard group that is created at the installation of Docker to allow other users to use Docker.
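      For comparison, the same property lookup can be done with java.util.Properties. A Java sketch (the class name is mine; the file name makeDockerUser.properties and the docker.user key are taken from the script above):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class MakeDockerUserProps {
    // Read one key from a properties file, like the shell prop function does.
    public static String prop(String propertiesFile, String key) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(propertiesFile)) {
            props.load(in);
        }
        return props.getProperty(key);
    }
}
```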
      Install Docker Engine
      The first actual step is to install the docker engine. Now, you can go for the community edition, and I've seen examples that pull the docker-ce (Docker Community Engine) for you. However, one of the reasons I am stubborn enough to stick with Oracle Linux (as you know, a RedHat derivative) is that Oracle Linux is the flavor that is used with most of my customers. And if not, it is RedHat. And then I just want to rely on the standard repositories.

      To install the docker engine, I have to add the ol7_addons and the ol7_optional_latest repositories. During my OL prepare script, I already added the ol7_developer_EPEL repository. Then the docker-engine package can simply be installed by yum:

      echo 1. Install Docker Engine
      echo . add ol7_addons and ol7_optional_latest repos.
      sudo yum-config-manager --enable ol7_addons
      sudo yum-config-manager --enable ol7_optional_latest
      echo . install docker-engine
      sudo yum install -q -y docker-engine
      Install Curl
      For most docker related actions, it is convenient to have curl installed as well:
      echo 2. Install curl
      sudo yum install -q -y curl
      Add the docker user to the docker group
      After the docker installation, we need to add the docker user (in my case the sort-of default oracle user) to the docker group:
      echo 3. Add ${DOCKER_GROUP} group to ${DOCKER_USER}
      sudo usermod -aG ${DOCKER_GROUP} ${DOCKER_USER}

      This allows the ${DOCKER_USER} (set in the initialization phase) to use the docker command.
      Check the docker install
      Now let's add a check if docker works:
      echo 4. Check Docker install
      docker --version
      sudo systemctl start docker
      sudo systemctl status docker
      This lists the version of the installed docker, then starts the docker service and lists its status.
      Change the location of the docker containers
      When creating a docker container/image (I leave the difference for now), these are saved by default in /var/lib/docker. The thing is that this is on the root disk of your installation, and it can grow quite big. For installations of oracle software, for instance, I create an extra disk that I mount on /app. It would be better to have a /data mount point as well, but for now I stick with the /app disk. So, I want to have docker place my images on the secondary disk. One solution, used by Tim Hall (see here), is to create a second disk, format it with BTRFS, and mount it simply on /var/lib/docker.
      I rather reconfigure docker to use another disk. This is taken from this article.

      To implement this, we first need to know which storage driver Docker uses. We get this from the command docker info, as follows:
      echo 5. Change docker default folder
      # According to oracle-base you should create a filesystem, preferably using BTRFS, for the container-home. https://oracle-base.com/articles/linux/docker-install-docker-on-oracle-linux-ol7.
      # But let's stick with ext4.
      ## Adapted from https://sanenthusiast.com/change-default-image-container-location-docker/
      echo 5.1. Find Storage Driver
      DOCKER_STORAGE_DRVR=$(sudo docker info | grep "Storage Driver" | cut -d':' -f2 | xargs)
      echo "Storage Driver: ${DOCKER_STORAGE_DRVR}"

      On my system this shows the overlay2 driver. Then we need to stop docker:
      echo 5.2. Stop docker
      sudo systemctl stop docker

      And then create the folder where we want to store the images:
      echo 5.3. Create data folder for storage: ${DOCKER_DATA_HOME}
      sudo mkdir -p ${DOCKER_DATA_HOME}

      Now, I found a bit of a problem with my solution here. When I reconfigured docker to use my custom folder, it turned out that on my system the filesystem is not writable from the docker image. If you want to install software in your image, it of course wants to write the files, and this is prevented. After quite some searching, I came across this question on stackoverflow. It turns out that selinux enforces a policy that prevents docker from writing to a custom device. This can simply be circumvented by disabling the enforcing:
      echo disable selinux enforcing
      sudo setenforce 0

      This disables, as said, the enforcing of selinux. I would say this deserves a more nuanced solution, but I don't have that at hand. This, however, solved my problem.
      Now all that is left is to configure docker to use the custom folder. Docker is started using a script. In Oracle Linux this is quite conveniently set up. In the folder /etc/sysconfig you find a few config scripts, amongst others one called docker-storage. This is a proper place to add options: when you set the DOCKER_STORAGE_OPTIONS variable, it is added to the command line. So we simply need to add the line:
      DOCKER_STORAGE_OPTIONS="--graph=/app/docker/data --storage-driver=overlay2"

      , to the file /etc/sysconfig/docker-storage. This can be done with the following snippet:
      sudo sh -c "echo 'DOCKER_STORAGE_OPTIONS=\"--graph=${DOCKER_DATA_HOME} --storage-driver=${DOCKER_STORAGE_DRVR}\"' >> ${DOCKER_STORAGE_CFG}"

      And then finish up with starting docker service again:
      echo 5.4 Reload daemon
      sudo systemctl daemon-reload
      echo 5.5 Start docker again
      sudo systemctl start docker

      Conclusion
      Last week I was at the #PaaSSummerCamp in Lisbon. I did some labs with my docker installation, which resulted in the permission problem. As mentioned, I resolved that, and I could run the labs successfully with docker containers from Oracle. So, I concluded that this script should suffice. You can download the complete script from my GitHub vagrant repo.

      Add a CCA component to your VBCS Application

      Tue, 2018-08-28 08:45
      Today at the #PaaSSummercamp, VBCS is on the agenda. We did a few labs from the VBCS learning path. The nice thing is that you can do it yourself as well.

      One of the main goals of the OracleJET labs yesterday was to introduce you to CCA components. Since VBCS is mostly based on OracleJET (for the UI, that is), it should not be too hard to add a CCA component from our OracleJET application to our VBCS application. So actually, I wanted to try just that.

      If you go to your OracleJET application project, the CCA components are in the ${OJETProjectFolder}\src\js\jet-composites\:
      First create a zipfile out of the component, just zip the folder:
      (I like TotalCommander for this).
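      If you haven't got an archiver at hand, a few lines of Python will do the same. The component folder and files below are hypothetical stand-ins, created only so the snippet is self-contained; point the path at your own component under jet-composites:

```python
import os
import shutil

# Hypothetical component folder; in a real project this would be
# ${OJETProjectFolder}/src/js/jet-composites/<your-component>
component_dir = "src/js/jet-composites/demo-card"
os.makedirs(component_dir, exist_ok=True)
# Stand-in for the real component files (loader.js, component.json, ...)
with open(os.path.join(component_dir, "component.json"), "w") as f:
    f.write("{}")

# Zip the folder's contents into demo-card.zip, ready for the VBCS import dialog
archive = shutil.make_archive("demo-card", "zip", component_dir)
```

The resulting demo-card.zip can then be dragged into the VBCS import dialog just like a manually created archive.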
      Then in the VBCS Designer tab, browse to the very bottom to find the Custom Heading:
      And click the plus icon.
      This will bring up a dialog in which you can drag and drop your zip file:
      Click import and then the component should appear in the list:
      Now it's ready for use.

      Since it's a form-component itself, you can't add it to an existing Form-component.

      To navigate properly, I opened the Structure pane and added the component just before my Form component in the Edit Expense Report page, as a new GridRow:

      I want to have it as a new GridRow, just before the current Expense Report Form. Now, after releasing the component, it is added to the screen, and it turns out to be visible too:

      You now just have to link the fields to variable elements.
      So select the component, and the fields will be shown as properties that can be linked to data-object elements:

      Click on the (x) icon to link the CCA-property to a page-variable-element. Now, since the only variable I have is the ExpenseReport row variable, I use that to populate my fields. They don't make sense functionally, but that is what I have to work with at the moment.
      I link first-name to ExpenseReport.name:

      Then hire-salary to amount and hire-date to startDate. And that's it. The fields are populated directly in design and live mode. But running the page will show:
      As easy as that...

      OracleJET at the #PaaSSummerCamp

      Mon, 2018-08-27 10:17
      Today the Oracle OPN #PaaSSummerCamp started. And I have the privilege to join @GeertjanW in kicking off the Application Development and Containers track with an OracleJET training.

      OracleJET is Oracle's JavaScript Extension Toolkit, which bundles popular JavaScript frameworks into a toolkit (not a framework) that allows you to quickly start a JavaScript project.

      One of the advantages is that you don't have to bother about which frameworks you should use or how to set up your project. Geertjan cooked up a workshop with a series of labs that introduce you to OracleJET, up to the point that you have a working CRUD (Create, Retrieve, Update and Delete) application.

      It's all on GitHub, and the nice thing is: you can do the labs too, even if you weren't able to come to the beautiful Lisbon area and attend the #PaaSSummerCamp. You don't even need to go to Oracle OpenWorld to do this workshop, because it can all be found here. You'll learn how to quickly build an OracleJET 5.2.0 based application.

      What do you need? Well, not much. A laptop or desktop computer will help of course. The lab will support you in installing:

      • npm 5.6.0
      • a json-server REST/JSON Mock server
      • ojet 5.2.0 command line interface
      You can use any IDE or source editor you want, for instance Notepad or Notepad++, IntelliJ or JDeveloper if you like. But I found NetBeans 8.2 very convenient for OJET development. Note that Oracle is handing NetBeans over to the Apache Foundation, where NetBeans 9.0 has just been released.

      Weblogic 12c: Solving Invalid Template error

      Wed, 2018-08-22 06:03
      One of the labs in our Weblogic Advanced Administration 12c course is about using domain templates. When revising the particular lab, we created a domain in Weblogic 12c and then created a template based on the domain. On recreation of the domain based on the template we get an exception:
      We get this regardless of whether we provide nodemanager details at the initial creation of the domain.

      We did some investigation and found, for instance, this forum thread, which gave a hint but not a solution or workaround.

      One important hint is the message 'config-nodemanager.xml: failed to parse the template!(/home/rcma/rcma_domain_template.jar): Parsing the config-nodemanager.xml failed!'. So it relates to the nodemanager configuration and the contents of the config-nodemanager.xml file in the template.

      By the way, my colleague found that the same issue can also be experienced during pack and unpack, as described in Oracle support note 2311027.1. There you can find that the problem in config-nodemanager.xml is about the password. If you have a nodemanager password set in the domain's config.xml, it is encrypted with the domain's seed/salt. It can't be decrypted and read by the domain configurator or unpack tools, because they don't have the salt of the source domain.

      So, what is the work-around or solution? I see two:
      1. Following the aforementioned support note, you can replace the nodemanager password in ${SourceDomainHome}/config/config.xml, in the <node-manager-password-encrypted>******</node-manager-password-encrypted> element, with a clear-text password. Do the same in the <nod:password>******</nod:password> element in ${SourceDomainHome}/init-info/config-nodemanager.xml.
      2. Open the template's jar file:
        Then extract the config-nodemanager.xml from it and edit it:
        <?xml version="1.0" encoding="UTF-8"?>
        <nod:nodeManagerInfo xmlns:nod="http://xmlns.oracle.com/cie/nodemanager">
        replace the password with something readable, for instance:
        <?xml version="1.0" encoding="UTF-8"?>
        <nod:nodeManagerInfo xmlns:nod="http://xmlns.oracle.com/cie/nodemanager">
        The password does not necessarily have to be the actual password of the source environment (at least with the template; not sure in case of pack/unpack). Then re-package the file into the template.
      I tested the second workaround, not the one suggested by the note. I actually prefer the second option, since the first one suggests updating the source domain, which I'd rather avoid.
      Since you need to make a change anyway, why not change the template? This we tested successfully.
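      To make the second workaround concrete, here is a sketch in Python (a jar file is just a zip, so the zipfile module can rewrite it). The demo jar built at the top is only a stand-in so the snippet is self-contained; in practice you would point TEMPLATE at your real domain template jar and skip that part:

```python
import re
import zipfile

TEMPLATE = "demo_domain_template.jar"  # stand-in; use your real template jar

# --- mock setup: a template jar with an 'encrypted' nodemanager password ---
with zipfile.ZipFile(TEMPLATE, "w") as jar:
    jar.writestr(
        "config-nodemanager.xml",
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<nod:nodeManagerInfo xmlns:nod="http://xmlns.oracle.com/cie/nodemanager">\n'
        '  <nod:password>{AES}unreadable-encrypted-value</nod:password>\n'
        '</nod:nodeManagerInfo>\n',
    )

# --- the actual workaround: rewrite the jar with a readable password ---
with zipfile.ZipFile(TEMPLATE) as jar:
    entries = {name: jar.read(name) for name in jar.namelist()}

fixed = re.sub(
    r"<nod:password>[^<]*</nod:password>",
    "<nod:password>welcome1</nod:password>",
    entries["config-nodemanager.xml"].decode("utf-8"),
)
entries["config-nodemanager.xml"] = fixed.encode("utf-8")

# Write all entries back, effectively re-packaging the template
with zipfile.ZipFile(TEMPLATE, "w") as jar:
    for name, data in entries.items():
        jar.writestr(name, data)
```

The same round-trip can of course be done with any zip tool; the point is only that the config-nodemanager.xml entry ends up with a readable password.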

      Another issue solved. Well, actually not really: this is a workaround, and the Weblogic tooling should prevent this from happening. You should be able to enter a nodemanager password that is added to the template, or you should be asked for a password at unpack or domain creation. In fact, at creation of a new domain based on the template, you are asked for a nodemanager password. So why would the domain configurator even bother trying to read the nodemanager password and/or fail at parsing the file?

      If I'd be so honoured as to have this blog read by a product manager or developer of Weblogic: could you take a look into this? Thanks!

      FMW July 2018 patchesets for 11g and 12c releases

      Mon, 2018-07-23 02:56
      I just noticed via community.oracle.com  that bundlepatches and patchset updates for several products in the FusionMiddleware portfolio are released. And thus also for SOA and BPM Suite. You can read more on it in this blog.

      Since SOA/BPM Suite is around for quite some time (since fall 2017 I believe) and I've not heard on a or even 18.x release, I found this quite important. So I took a quick look in the patch set for SOASuite and found some interesting patches. I'd recommend applying this one, where some patches seem to apply for the QuickStart as well.

      If you're on or earlier, I surely recommend upgrade to and apply this patch set.

      Don't spend your summer vacation on it, but schedule it first thing on your return. Or better: do it right before you leave!

      SOASuite12c - BPEL: JTA transaction is not in active state

      Thu, 2018-07-12 03:19
      Yesterday I ran into this pretty weird problem.

      A bit of context...
      I have two BPEL services to generate documents using BIP. One I created earlier; it is based on a generic XML used by BIP to generate multiple letters. Now I had to create another one that is a report, so it uses another XML. I generated an XSD for both XMLs, but since they have no namespace, yet the same element names, I can't have them in the same composite. So I duplicated the code.

      I created a WSDL with two operations, one for the letters and one for the report, because I wanted to call the report from the service that creates the letters. So the first service is called 'GenerateDocument', with the operation 'GenerateLetter', to which I added an operation 'GenerateReport'.

      So I changed the BPEL and replaced the 'Receive' by a Pick:
      In the invoke it calls the 'GenerateReport' BPEL service, which does basically exactly the same as the scope under the 'Generate Letter' OnMessage.

      In the 'GenerateReport' BPEL service (and from the 'Generate Letter' scope) I call a Base64Encoding service. It gets an XML in, extracts it to a string using ora:getContentAsString() and encodes that using a Spring bean, based on a quite simple Java bean:
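      The bean code itself isn't shown here, but its logic boils down to plain base64 encoding/decoding of the XML string. A sketch of that logic in Python (not the actual Spring bean) for illustration:

```python
import base64

def encode(content):
    # What the encoder side of the bean does: string -> base64 string
    return base64.b64encode(content.encode("utf-8")).decode("ascii")

def decode(encoded):
    # And the decoder side: base64 string -> original string
    return base64.b64decode(encoded).decode("utf-8")

# Round-trip an XML payload, as the BPEL flow does with the BIP document
xml = "<report><amount>42</amount></report>"
assert decode(encode(xml)) == xml
```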

      But now the problem...
      So, called separately, the 'Generate Report' service functioned just fine. Also the 'Generate Letter' operation of the 'Generate Document' service, thus the 'Generate Letter' OnMessage from the Pick above, functioned just fine. But when I call the 'Generate Document' service using the 'Generate Report' operation, resulting in the other OnMessage, I get the following message on return from the Base64Encoding service:
      <exception class="com.collaxa.cube.engine.EngineException">JTA transaction is not in active state.
      The transaction became inactive when executing activity "" for instance "60,004", bpel engine can not proceed further without an active transaction. please debug the invoked subsystem on why the transaction is not in active status. the transaction status is "MARKED_ROLLBACK".
      The reason was The execution of this instance "60004" for process "GenereerMachtigingenRapportProcess" is supposed to be in an active jta transaction, the current transaction status is "MARKED_ROLLBACK", the underlying exception is "EJB Exception: " .
      Consult the system administrator regarding this error.
      <root class="oracle.fabric.common.FabricInvocationException">EJB Exception: <stack>

      Most blogs or forums I found suggest increasing the JTA timeout, like this one. However, in those cases the timeout is also mentioned as a cause in the exception.
      In my case, though, there's no mention of a timeout. Nevertheless I did try the suggestion, but as expected, with no luck.
      The investigation
      How to proceed? Well, in those cases, I just 'undress' or 'strip down' my code: cut out all code to the point where it works again. Then, piece by piece, I dress it again, until the point where it breaks again. That way you can narrow down to the exact point where it goes wrong.

      It turns out it indeed breaks again just when I add the scope with the call to the Base64Encoding service. So I had to investigate that a bit further. I fiddled with the transaction properties of the Exposed Service:
      I actually don't want transaction support in this service: it doesn't do anything that needs to be transacted. But this wasn't it either.

      A closer look to the composite then:
      <component name="Base64Process" version="2.0">
      <implementation.bpel src="BPEL/Base64Process.bpel"/>
      <service name="base64process_client_ep" ui:wsdlLocation="WSDLs/Base64ServiceWrapper.wsdl">
      <interface.wsdl interface="http://nl.darwin-it.service/wsdl/Base64Service/1.0#wsdl.interface(Base64ServicePortType)"/>
      <reference name="Based64EncoderDecoder.Base64EncoderDecoder"
      <interface.wsdl interface="http://base64.utils.darwin-it.nl/#wsdl.interface(IBase64EncoderDecoder)"/>
      <property name="bpel.config.transaction" type="xs:string" many="false">required</property>
      <property name="bpel.config.completionPersistPolicy" type="xs:string" many="false">deferred</property>
      <component name="Based64EncoderDecoder">
      <implementation.spring src="Spring/Based64EncoderDecoder.xml"/>
      <service name="Base64EncoderDecoder">
      <interface.java interface="nl.darwin-it.utils.base64.IBase64EncoderDecoder"/>

      And there the following caught my eye:
      <component name="Base64Process" version="2.0">
      <property name="bpel.config.transaction" type="xs:string" many="false">required</property>
      <property name="bpel.config.completionPersistPolicy" type="xs:string" many="false">deferred</property>


      As said, I don't want a transaction, and I'm not interested in deferred persistence. So I commented this out, and all worked.
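      For reference, this is the component as it ended up, with the offending properties commented out (only the relevant part of the composite shown):

```xml
<component name="Base64Process" version="2.0">
  <implementation.bpel src="BPEL/Base64Process.bpel"/>
  <!--
  <property name="bpel.config.transaction" type="xs:string" many="false">required</property>
  <property name="bpel.config.completionPersistPolicy" type="xs:string" many="false">deferred</property>
  -->
</component>
```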

      What did I learn?
      I'm not sure. But apparently, when called from the main flow directly, these properties don't hurt: the BPEL engine doesn't feel the need to do a persist directly, and therefore a transaction. But one level deeper, called from another flow, on return from the Base64Encoding flow, it felt the need to do a persist and thus needed a transaction. And that transaction was not there.

      All the services in the composition of the 3 composites (GenerateDocument -> GenerateReport -> Base64Encoding) are synchronous, created with default settings. And therefore I did not expect this behavior.

      The Medrec Datamodel DDL

      Tue, 2018-06-12 06:57
      Next week I deliver the training 'Weblogic 12c Tuning and Troubleshooting'. One of the labs is to have the sample application MedRec generate stuck threads, so that the students can investigate and try to solve that. Or actually, configure the server so that it will cause an automatic restart.

      To do so I have to deliberately break the application and so I need the source. I have an earlier version of the application, but not the sources. So I have to go to the latest MedRec. I actually like that, because it looks more modern.

      The MedRec application is available if you install WebLogic with samples.

      You can run the script demo_oracle.ddl  from against the database:

      The medrec.ear can be found at:

      I ran into quite some confusion and frustration, but I found that this combination, although from the same samples folder, does not work. Not only does this medrec.ear expect the tables in plural (PRESCRIPTIONS) where the script creates them in singular (PRESCRIPTION), it expects a separate DRUGS table with a foreign key column DRUG_ID in PRESCRIPTIONS. And a few other changes.

      I had a version of the scripts from earlier versions of WebLogic's MedRec. Based on the exceptions in the server log, I refactored/reverse-engineered the scripts.

      Using those I could successfully log in and view the patient records of fred@golf.com:

      First we need to create a schema using createDBUserMedrec.sql:
      prompt Create user medrec with connect, resource roles;
      grant connect, resource to medrec identified by welcome1;
      alter user medrec
      default tablespace users
      temporary tablespace temp;
      alter user medrec quota unlimited on users;

      Drop tables (if needed) using medrec_dropall.sql:

      Create the tables using medrec_tables.sql:
      "EMAIL" VARCHAR(255),
      "PASSWORD" VARCHAR(255),
      "USERNAME" VARCHAR(255),
      PRIMARY KEY ( "ID" ) );

      PRIMARY KEY ( "ID" ) );

      "EMAIL" VARCHAR(255),
      "PASSWORD" VARCHAR(255),
      "USERNAME" VARCHAR(255),
      "PHONE" VARCHAR(255),
      "GENDER" VARCHAR(20),
      "SSN" VARCHAR(255),
      "STATUS" VARCHAR(20),
      "LASTNAME" VARCHAR(255),
      "CITY" VARCHAR(255),
      "COUNTRY" VARCHAR(255),
      "STATE" VARCHAR(255),
      "STREET1" VARCHAR(255),
      "STREET2" VARCHAR(255),
      "ZIP" VARCHAR(255),
      PRIMARY KEY ( "ID" ) );


      "EMAIL" VARCHAR(255),
      "PASSWORD" VARCHAR(255),
      "USERNAME" VARCHAR(255),
      "PHONE" VARCHAR(255),
      "LASTNAME" VARCHAR(255),
      PRIMARY KEY ( "ID" ) );

      "NAME" VARCHAR2(255 BYTE),
      "PRICE" NUMBER(10,2),
      "VERSION" NUMBER(*,0),
      PRIMARY KEY ( "ID" ) );

      PRIMARY KEY ( "ID" ) );

      "NOTES" VARCHAR(255),
      "SYMPTOMS" VARCHAR(255),
      PRIMARY KEY ( "ID" ) );


      Insert data using medrec_data.sql:
      VALUES (


      "ID", "SEQUENCE_VALUE" )
      VALUES (


      VALUES (
      TIMESTAMP '1972-03-18 00:00:00','MALE','888888888','APPROVED',3,
      'Page','Trout','A','Ponte Verde','United States','FL',
      '235 Montgomery St','Suite 15','32301'
      VALUES (
      TIMESTAMP '1965-04-26 00:00:00','MALE','123456789','APPROVED',3,
      'Fred','Winner','I','San Francisco','United States','CA',
      '1224 Post St','Suite 100','94115'
      VALUES (
      TIMESTAMP '1971-09-17 00:00:00','MALE','333333333','APPROVED',3,
      'Gabrielle','Spiker','H','San Francisco','United States','CA',
      '1224 Post St','Suite 100','94115'
      VALUES (
      TIMESTAMP '1973-11-29 00:00:00','MALE','444444444','REGISTERED',3,
      'Charlie','Florida','E','Ponte Verde','United States','FL',
      '235 Montgomery St','Suite 15','32301'
      VALUES (
      TIMESTAMP '1959-03-13 00:00:00','MALE','777777777','APPROVED',3,
      'Larry','Parrot','J','San Francisco','United States','CA',
      '1224 Post St','Suite 100','94115'


      VALUES (


      Insert into "MEDREC"."DRUGS" (ID,NAME,FREQUENCY,PRICE,VERSION) values (101,'Advil','1/4hrs',1.0, 2);
      Insert into "MEDREC"."DRUGS" (ID,NAME,FREQUENCY,PRICE,VERSION) values (102,'Codeine','1/6hrs',2.5,2);
      Insert into "MEDREC"."DRUGS" (ID,NAME,FREQUENCY,PRICE,VERSION) values (103,'Drixoral','1tspn/4hrs',3.75,2);




      VALUES (
      151,TIMESTAMP '1991-05-01 00:00:00',TIMESTAMP '1991-05-01 00:00:00','Allergic to coffee. Drink tea.',
      '','Drowsy all day.',2,51,1,85,70,75,125,98,180
      VALUES (
      152,TIMESTAMP '1991-05-01 00:00:00',TIMESTAMP '1991-05-01 00:00:00','Light cast needed.',
      'At least 20 sprained ankles since 15.','Sprained ankle.',
      VALUES (
      153,TIMESTAMP '1989-08-05 00:00:00',TIMESTAMP '1989-08-05 00:00:00','Severely sprained interior ligament. Surgery required.','Cast will be necessary before and after.','Twisted knee while playing soccer.',2,52,1,85,70,75,125,98,180
      VALUES (
      154,TIMESTAMP '1993-06-30 00:00:00',TIMESTAMP '1993-06-30 00:00:00','Common cold. Prescribed codiene cough syrup.','Call back if not better in 10 days.','Sneezing, coughing, stuffy head.',2,52,1,85,70,75,125,98,180
      VALUES (
      155,TIMESTAMP '1999-07-18 00:00:00',TIMESTAMP '1999-07-18 00:00:00','Mild stroke. Aspirin advised.','Patient needs to stop smoking.','Complains about chest pain.',2,52,1,85,70,75,125,98,180
      VALUES (
      156,TIMESTAMP '1991-05-01 00:00:00',TIMESTAMP '1991-05-01 00:00:00','Patient is crazy. Recommend politics.','','Overjoyed with everything.',2,55,1,85,70,75,125,98,180


      VALUES (
      VALUES (
      VALUES (

      To install the datasource you can use this wlst script, createDataSource.py:
      # Create DataSource for WLS 12c Tuning & Troubleshooting workshop
      # @author Martien van den Akker, Darwin-IT Professionals
      # @version 1.1, 2018-01-22
      # Modify these values as necessary
      import os,sys, traceback
      scriptName = sys.argv[0]
      admServerUrl = 't3://'+adminHost+':'+adminPort
      dsName = 'MedRecGlobalDataSourceXA'
      dsJNDIName = 'jdbc/MedRecGlobalDataSourceXA'
      initialCapacity = 5
      maxCapacity = 10
      capacityIncrement = 1
      driverName = 'oracle.jdbc.xa.client.OracleXADataSource'
      dbUrl = 'jdbc:oracle:thin:@darlin-vce.darwin-it.local:1521:orcl'
      dbUser = 'medrec'
      dbPassword = 'welcome1'

      def createDataSource(dsName, dsJNDIName, initialCapacity, maxCapacity, capacityIncrement, dbUser, dbPassword, dbUrl, targetSvrName):
        try:
          # Check if data source already exists
          cd('/JDBCSystemResources/' + dsName)
          print 'The JDBC Data Source ' + dsName + ' already exists.'
        except WLSTException:
          print 'Creating new JDBC Data Source named ' + dsName + '.'
          # Create data source
          jdbcSystemResource = create(dsName, 'JDBCSystemResource')
          jdbcResource = jdbcSystemResource.getJDBCResource()
          # Set JNDI name
          jdbcResourceParameters = jdbcResource.getJDBCDataSourceParams()
          # Create connection pool
          connectionPool = jdbcResource.getJDBCConnectionPoolParams()
          # Create driver settings
          driver = jdbcResource.getJDBCDriverParams()
          driverProperties = driver.getProperties()
          userProperty = driverProperties.createProperty('user')
          # Set data source target
          targetServer = getMBean('/Servers/' + targetSvrName)
          # Activate changes
          print 'Data Source created successfully.'
          return jdbcSystemResource

      def main():
        try:
          # Connect to administration server
          connect(adminUser, adminPwd, admServerUrl)
          createDataSource(dsName, dsJNDIName, initialCapacity, maxCapacity, capacityIncrement, dbUser, dbPassword, dbUrl, ttServerName)
        except:
          apply(traceback.print_exception, sys.exc_info())
      #call main()
      main()

      Also Medrec needs an administrative user, createUser.py:
      print 'starting the script ....'
      admServerUrl = 't3://'+adminHost+':'+adminPort
      realmName = 'myrealm'

      def addUser(realm,username,password,description):
        print 'Prepare User',username,'...'
        if realm is not None:
          authenticator = realm.lookupAuthenticationProvider("DefaultAuthenticator")
          if authenticator.userExists(username)==1:
            print '[Warning]User',username,'already exists.'
          else:
            authenticator.createUser(username,password,description)
            print '[INFO]User',username,'has been created successfully'

      addUser(realm,'administrator','administrator123','MedRec Administrator')


      Deploy the medrec application, deployMedRec.py:
      # Deploy MedRec for WLS 12c Tuning & Troubleshooting workshop
      # @author Martien van den Akker, Darwin-IT Professionals
      # @version 1.0, 2018-01-22
      # Modify these values as necessary
      import os,sys, traceback
      scriptName = sys.argv[0]
      admServerUrl = 't3://'+adminHost+':'+adminPort
      appName = 'medrec'
      appSource = '../ear/medrec.ear'

      # Deploy the application
      def deployApplication(appName, appSource, targetServerName):
        print 'Deploying application ' + appName + '.'
        progress = deploy(appName=appName,path=appSource,targets=targetServerName)
        # Wait for deploy to complete
        while progress.isRunning():
          pass
        print 'Application ' + appName + ' deployed.'

      def main():
        try:
          # Connect to administration server
          connect(adminUser, adminPwd, admServerUrl)
          deployApplication(appName, appSource, ttServerName)
        except:
          apply(traceback.print_exception, sys.exc_info())
      #call main()
      main()

      Add Weblogic12c Vagrant project

      Mon, 2018-06-11 07:53
      Last week I proudly presented a talk on how to create, provision and maintain VMs with Vagrant, including installing Oracle software (Database, Fusion Middleware, etc.). It was during the nlOUG Tech Experience 18. I've written about it in the last few posts.

      Today I added my WLS12c Vagrant project to Github. Based on further insights I refactored my database12c installation a bit and pushed an updated Database12c Vagrant project to Github as well.

      The main updates are that I disliked the 'Zipped' subfolder in the stage folder paths for the installations, and I wanted all the installations extracted into the same main folder, so I can clean that up more easily.

      You should check the references to the stage folder. Download the appropriate Oracle installer zip files and place them in the appropriate folder:

      Coming up: a SOA 12c project (including SOA, BPM and OSB), complete with creation of the domain (based on my scripts from 2016 and my Tech Experience talk of last year) and configuration of the nodemanager. Ready to start up after 'vagrant up'.

      I also uploaded my slides of my talk to slideshare:

      Enhance Vagrant provisioning: install java and database

      Fri, 2018-05-04 10:37
      In my previous blog posts (here and here), I wrote about how to create a base box and how to create and start a virtual machine from it. I started with provisioning: having the vagrant user adapt the kernel settings, add an install user/owner and create a filesystem on an added disk.

      Now let's make the provisioning a bit more interesting and install actual software in it.
      Prepare new Vagrant project
      For this article I copied the project created in the previous blog. I called it ol75_db12c, since the goal is database 12c. But we'll also add Java.
      Now edit the Vagrantfile, since we want a new VM with another name:
      So adapt the VM_NAME variable to something like "OL75U5_DB12c". You see how convenient it is to have those properties set as global variables?

      You could already try to do vagrant up to try it out. Remember, you can just do vagrant destroy to recreate it.

      Also remove (or don't copy) the .vagrant subfolder, otherwise Vagrant would assume the box is already provisioned. If that is the case, either do a vagrant destroy to destroy the VM altogether, or vagrant provision to just re-provision the box.
      Java
      In the copied project, let's start with Java. There are several possibilities to install Java; you could download the RPM from OTN. But one of the recommended practices I found in installing Java on a server is to put it in a path that hasn't got the Java version (especially the update) in it. When using it to install Weblogic, for instance, this path ends up in several places in scripts. Although it's a handful, it's more than once.
      Upgrading Java is then just a matter of bringing down the servers/services using it, backing up the current version and unzipping/untarring the new version into the same folder. And there you have it: I like a Java distribution that comes in an archive.

      In Oracle Support you can find it by searching for the document 'All Java SE Downloads on MOS' (Doc ID 1439822.1). There you find the current versions of the Java SE packages. For this article I used the public version JDK 8 Update 172, which you can download as patch 27412872:
      This contains the rpm as well as a .tar.gz package, which we'll use. Make sure that you download the x86_64 version:
      And copy the download in the Stages folder in your vagrant main project folder (see previous blog).

      To install it, I have the following script installJava.sh:
      #!/bin/bash
      #Download a zip with tar.gz containing complete JDK
      #On MOS: Search for Doc ID 1439822.1
      #Download latest 1.8 (public) patch, eg.:
      #27412872 Oracle JDK 8 Update 172 (complete JDK, incl. jmc, jvisualvm)
      SCRIPTPATH=$(dirname $0)
      . $SCRIPTPATH/fmw12c_env.sh

      echo "Checking Java Home: "$JAVA_HOME
      if [ ! -f "$JAVA_HOME/bin/java" ]; then
        #Unzip Java
        if [ ! -f "$JAVA_INSTALL_HOME/$JAVA_INSTALL_TAR" ]; then
          if [ -f "$JAVA_ZIP_HOME/$JAVA_INSTALL_ZIP" ]; then
            mkdir -p $JAVA_INSTALL_HOME
            echo Unzip $JAVA_ZIP_HOME/$JAVA_INSTALL_ZIP to $JAVA_INSTALL_HOME
            unzip -o $JAVA_ZIP_HOME/$JAVA_INSTALL_ZIP -d $JAVA_INSTALL_HOME
          else
            echo JAVA Zip File $JAVA_ZIP_HOME/$JAVA_INSTALL_ZIP does not exist!
          fi
        else
          echo $JAVA_INSTALL_TAR already unzipped
        fi
        # Install jdk
        echo Install jdk
        echo create folder $JAVA_INSTALL_TMP
        mkdir -p $JAVA_INSTALL_TMP
        echo create JAVA_HOME $JAVA_HOME
        mkdir -p $JAVA_HOME
        echo Untar $JAVA_INSTALL_HOME/$JAVA_INSTALL_TAR to $JAVA_INSTALL_TMP
        tar -xf $JAVA_INSTALL_HOME/$JAVA_INSTALL_TAR -C $JAVA_INSTALL_TMP
        mv $JAVA_INSTALL_TMP/jdk*/* $JAVA_HOME
      else
        echo jdk 1.8 already installed
      fi

      That uses the fmw12c_env.sh script:
      echo set Fusion MiddleWare 12cR2 environment
      export ORACLE_BASE=/app/oracle
      export INVENTORY_DIRECTORY=/app/oraInventory
      export JAVA_HOME=$ORACLE_BASE/product/jdk

      The script checks if Java already exists. If not, it checks if the tar or zip file exists in /media/sf_Stage/Java. If the tar file does not exist, it will unzip the zip file. If the tar file does exist, it will create a temp folder to extract the tar file into. Then it will create the java-home folder and move the extracted jdk folder into it.

      I put these files in the scripts/fmw folder of my project:
      Call script from provisioning
      Now we're at the point that took me a lot of time to figure out last winter: how do I call these scripts from the provisioning? Simple question, but let me try to explain the difficulty.
      The provisioning is done using the vagrant user. The vagrant user is in the sudoers list, so it's able to run a script with the permissions of another user, usually the super user (root). But it is still the running vagrant user who owns the resulting files and folders. I could do something like sudo su - oracle -c "script"... However, it still results in all the files being owned by vagrant. So, if I run the Java install script, the complete Java tree is owned by vagrant. But I want oracle to own it. Now, I could create another base box and replace vagrant with oracle as the install user. But that is not the idea.

      It took me some time to finally find the great utility runuser. This allows me to run the script as a substitute user. The runuser utility must be run as root, but that's no problem, since vagrant is in the sudoers list.

      So add the following lines to your provisioning part of the Vagrantfile:
          echo _______________________________________________________________________________
      echo 3. Java SDK 8
      sudo runuser -l oracle -c '/vagrant/scripts/fmw/installJava.sh'

      With the -l argument I denote the user and with -c the command to run.
      First try
      Having done that, you can do a first try of the provisioning by bringing up the box.
      Open a command window and start it the first time with vagrant up. If all goes well, the provisioning ends with:
          darwin: _______________________________________________________________________________
      darwin: 3. Java SDK 8
      darwin: set Fusion MiddleWare 12cR2 environment
      darwin: Checking Java Home: /app/oracle/product/jdk
      darwin: Unzip /media/sf_Stage/Java/p27412872_180172_Linux-x86-64.zip to /media/sf_Stage/Extracted/Java/jdk-8u172-linux-x64.tar.gz
      darwin: Archive: /media/sf_Stage/Java/p27412872_180172_Linux-x86-64.zip
      darwin: inflating: /media/sf_Stage/Extracted/Java/jdk-8u172-linux-x64.rpm
      darwin: inflating: /media/sf_Stage/Extracted/Java/jdk-8u172-linux-x64.tar.gz
      darwin: inflating: /media/sf_Stage/Extracted/Java/readme.txt
      darwin: Install jdk
      darwin: create folder /media/sf_Stage/Extracted/jdk
      darwin: create JAVA_HOME /app/oracle/product/jdk
      darwin: Untar /media/sf_Stage/Java/jdk-8u172-linux-x64.tar.gz to /media/sf_Stage/Extracted/jdk
      darwin: tar: jdk1.8.0_172/bin/ControlPanel: Cannot create symlink to `jcontrol': Protocol error
      darwin: tar: jdk1.8.0_172/man/ja: Cannot create symlink to `ja_JP.UTF-8': Protocol error
      darwin: tar: jdk1.8.0_172/jre/bin/ControlPanel: Cannot create symlink to `jcontrol': Protocol error
      darwin: tar: jdk1.8.0_172/jre/lib/amd64/server/libjsig.so: Cannot create symlink to `../libjsig.so': Protocol error
      darwin: tar: Exiting with failure status due to previous errors
      darwin: Move /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/bin /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/COPYRIGHT /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/db /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/include /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/javafx-src.zip /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/jre /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/lib /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/LICENSE /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/man /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/README.html /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/release /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/src.zip /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/THIRDPARTYLICENSEREADME-JAVAFX.txt /media/sf_Stage/Extracted/jdk/jdk1.8.0_172/THIRDPARTYLICENSEREADME.txt to /app/oracle/product/jdk

      If it all went well, you can just destroy it:
      d:\Projects\vagrant\ol75_db12c>vagrant destroy
      darwin: Are you sure you want to destroy the 'darwin' VM? [y/N] y
      ==> darwin: Forcing shutdown of VM...
      ==> darwin: Destroying VM and associated drives...

      Install database
      Installing the database is a bit more complex. I'll not post every file; the code, like the rest of my project files, can be found here on GitHub.
      It expects the database installation as two zip files, V46095-01_1of2.zip and V46095-01_2of2.zip, in the folder /media/sf_Stage/DBInstallation/. Like the rest, it works much like my earlier Fusion Middleware install scripts: it unzips the files, installs the database based on the template response files, and runs the database creation utility (dbca).
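As a quick sanity check before running vagrant up, you can verify the stage layout first. A minimal sketch; the check_stage helper is my own illustration, not part of the project scripts:

```shell
# check_stage DIR FILE... : report any FILEs missing from DIR (non-zero return then)
check_stage() {
  local dir=$1; shift
  local f rc=0
  for f in "$@"; do
    [ -f "$dir/$f" ] || { echo "Missing $dir/$f"; rc=1; }
  done
  return $rc
}

# Verify the database zips are staged where installDB.sh expects them
check_stage /media/sf_Stage/DBInstallation V46095-01_1of2.zip V46095-01_2of2.zip \
  || echo "Stage the database zips before running vagrant up"
```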

      I also added scripts to install/unzip SQL Developer and SQLcl (SQL command line).

      To install the database, and possibly SQLcl and/or SQL Developer, just add the following lines to the provisioning block in your Vagrantfile:
          echo _______________________________________________________________________________
      echo 4. Database 12c
      sudo runuser -l oracle -c '/vagrant/scripts/database/installDB.sh'
      echo _______________________________________________________________________________
      echo 5.1 SQLcl
      sudo runuser -l oracle -c '/vagrant/scripts/database/installSqlcl.sh'
      echo _______________________________________________________________________________
      echo 5.2 SQLDeveloper
      sudo runuser -l oracle -c '/vagrant/scripts/database/installSqlDeveloper.sh'

      And run vagrant up again.

      Conclusion
      I didn't get much into the install of the database; I feel that I wrote quite a lot in the past on installing Oracle software. In the upcoming period I'll wrap these last blogs into a presentation on creating boxes with Vagrant, as input to my talk on this subject at the NLOUG Tech Experience 2018.

      By now you should be able to replicate my findings and create other boxes as well. In the next blogs I might write about transforming my other FMW install scripts into Vagrant projects. We should then be able to create Vagrant projects to install OSB, SOA/BPM Suite, SOA/BPM QuickStart, etc.

      Lately I did a Docker install in an Ubuntu 16 base box. Ubuntu has some peculiarities in creating users and logical volume management as opposed to RedHat/Oracle Linux. I might also write about that. But feel free to leave a comment if you have particular wishes. However, please have a bit of patience with me, since I first need to get my talk on the Tech Experience ready.

      Installing haveged from Oracle repositories

      Tue, 2018-05-01 10:14
      In my previous blog, I mentioned the installation of the utility haveged, which is used to increase your entropy on non-GUI systems. Since you can download it from the Oracle yum repository, I figured that I could install it with yum too, instead of using rpm. This saves me from having to download it myself, and ensures I get the version applicable to my Linux release.

      Too bad it isn't in the ol7_latest repo that is enabled by default. So you have to either add the EPEL yum repo or enable it.

      To add it, you can do:
      [vagrant@darlin-vce ~]$ sudo yum-config-manager --add-repo https://yum.oracle.com/repo/OracleLinux/OL7/developer_EPEL/x86_64

      To remove it, you can remove the corresponding file from /etc/yum.repos.d:
      [vagrant@darlin-vce ~]$ cd /etc/yum.repos.d/
      [vagrant@darlin-vce yum.repos.d]$ ls
      [vagrant@darlin-vce yum.repos.d]$ sudo rm yum.oracle.com_repo_OracleLinux_OL7_developer_EPEL_x86_64.repo
      [vagrant@darlin-vce yum.repos.d]$ ls

      However, the repo turns out to be registered already in the repository list, just disabled:

      [vagrant@darlin-vce yum.repos.d]$ sudo yum repolist all
      Loaded plugins: langpacks, ulninfo
      repo id repo name status
      ol7_MODRHCK/x86_64 Latest RHCK with fixes from Oracle for Oracle Linux 7Server (x86_64) disabled
      ol7_MySQL55/x86_64 MySQL 5.5 for Oracle Linux 7 (x86_64) disabled
      ol7_MySQL56/x86_64 MySQL 5.6 for Oracle Linux 7 (x86_64) disabled
      ol7_MySQL57/x86_64 MySQL 5.7 for Oracle Linux 7 (x86_64) disabled
      ol7_UEKR3/x86_64 Latest Unbreakable Enterprise Kernel Release 3 for Oracle Linux 7Server (x86_64) disabled
      ol7_UEKR3_OFED20/x86_64 OFED supporting tool packages for Unbreakable Enterprise Kernel on Oracle Linux 7 (x86_64) disabled
      ol7_UEKR4/x86_64 Latest Unbreakable Enterprise Kernel Release 4 for Oracle Linux 7Server (x86_64) enabled: 641
      ol7_UEKR4_OFED/x86_64 OFED supporting tool packages for Unbreakable Enterprise Kernel Release 4 on Oracle Linux 7 (x86_64) disabled
      ol7_addons/x86_64 Oracle Linux 7Server Add ons (x86_64) disabled
      ol7_ceph/x86_64 Ceph Storage for Oracle Linux Release 2.0 - Oracle Linux 7.2 or later (x86_64) disabled
      ol7_ceph10/x86_64 Ceph Storage for Oracle Linux Release 1.0 - Oracle Linux 7.1 or later (x86_64) disabled
      ol7_developer/x86_64 Oracle Linux 7Server Development Packages (x86_64) disabled
      ol7_developer_EPEL/x86_64 Oracle Linux 7Server Development Packages (x86_64) disabled
      ol7_developer_gluster310/x86_64 Oracle Linux 7Server Gluster 3.10 Packages for Development and test (x86_64) disabled
      ol7_developer_gluster312/x86_64 Oracle Linux 7Server Gluster 3.12 Packages for Development and test (x86_64) disabled
      ol7_developer_nodejs4/x86_64 Oracle Linux 7Server Node.js 4 Packages for Development and test (x86_64) disabled
      ol7_developer_nodejs6/x86_64 Oracle Linux 7Server Node.js 6 Packages for Development and test (x86_64) disabled
      ol7_developer_nodejs8/x86_64 Oracle Linux 7Server Node.js 8 Packages for Development and test (x86_64) disabled
      ol7_developer_php70/x86_64 Oracle Linux 7Server PHP 7.0 Packages for Development and test (x86_64) disabled
      ol7_developer_php71/x86_64 Oracle Linux 7Server PHP 7.1 Packages for Development and test (x86_64) disabled
      ol7_developer_php72/x86_64 Oracle Linux 7Server PHP 7.2 Packages for Development and test (x86_64) disabled
      ol7_latest/x86_64 Oracle Linux 7Server Latest (x86_64) enabled: 26,602

      To enable it just:
      sudo yum-config-manager --enable ol7_developer_EPEL

      As a response it will display the current configuration.
      Then install haveged by:
      sudo yum -q -y install haveged
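Note that installing the package does not by itself guarantee the entropy daemon runs. A sketch of the configuration step I'd add, assuming the standard haveged systemd unit name on OL7/EL7:

```shell
# Enable haveged at boot and start it now (EL7 systemd unit name assumed)
sudo systemctl enable haveged
sudo systemctl start haveged
# Verify it is running
systemctl is-active haveged
```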

      To check if haveged is installed, do a yum list:
      [vagrant@darlin-vce ~]$ sudo yum list haveged
      Loaded plugins: langpacks, ulninfo
      Installed Packages

      This is drawn by combining this RHEL Tech-Doc with the repo content of my freshly installed OL7U5 box.

      And that solves the note I had left in my previous blog post.

      Base box ready? Let's create a box!

      Tue, 2018-05-01 08:04
      Last week I wrote about creating our own Vagrant base box, based on the new and fresh Oracle Linux 7 Update 5. In reaction to my article I was told that Oracle also keeps base boxes for its latest Linux releases at http://yum.oracle.com/boxes. They're also a great start.

      To begin, I created a project structure for all my vagrant projects:
      As you can see, I have a vagrant main projects folder on my D: drive, D:\Projects\vagrant. Besides my actual vagrant projects, like oel74, oel74_wls12c etc., I have a boxes folder and a Stage folder. The boxes folder, as you can guess, contains my base boxes:
      The Stage folder contains all my installation-binaries, for database, Weblogic, Java, FusionMiddleware and so on.

      Today I want to create a basic VM that will be a base for further VMs, like database, WebLogic, etc.
      I want the VM to have:
      • Linux prepared with correct kernel settings for database, FusionMiddleware, etc.
      • Filesystem created on a second disk. I did not add a second disk to the base box, only a root disk. Thus we need to extend the VM with one.
      • An oracle user that can sudo.
      Initialize a vagrant project
      Begin by creating a folder for this project, like ol75, in the structure. Open a command window and navigate to it:
      Microsoft Windows [Version 10.0.16299.371]
      (c) 2017 Microsoft Corporation. All rights reserved.


      D:\>cd d:\Projects\vagrant\ol75\

      d:\Projects\vagrant\ol75>vagrant help init
      ==> vagrant: A new version of Vagrant is available: 2.0.4!
      ==> vagrant: To upgrade visit: https://www.vagrantup.com/downloads.html

      Usage: vagrant init [options] [name [url]]


      --box-version VERSION Version of the box to add
      -f, --force Overwrite existing Vagrantfile
      -m, --minimal Use minimal Vagrantfile template (no help comments). Ignored with --template
      --output FILE Output path for the box. '-' for stdout
      --template FILE Path to custom Vagrantfile template
      -h, --help Print this help


      The vagrant command init creates a new Vagrant project, with a so-called Vagrantfile in it. By default, you'll get a Vagrantfile with the most common settings, plus comments explaining the most common additional settings. Using the -m or --minimal option, a minimal Vagrantfile is created instead, without comments. I do like a file with the most common settings in the comments, as it allows me to quickly extend it without having to look up everything. So I create a basic file:
      d:\Projects\vagrant\ol75>vagrant init
      A `Vagrantfile` has been placed in this directory. You are now
      ready to `vagrant up` your first virtual environment! Please read
      the comments in the Vagrantfile as well as documentation on
      `vagrantup.com` for more information on using Vagrant.

      d:\Projects\vagrant\ol75>dir /w
      Volume in drive D is DATA
      Volume Serial Number is 62D7-9456

      Directory of d:\Projects\vagrant\ol75

      [.] [..] Vagrantfile
      1 File(s) 3,081 bytes
      2 Dir(s) 651,847,397,376 bytes free

      Now, let's expand the file bit by bit. So, open it in your favorite ASCII editor, like Notepad++, for instance.
      # -*- mode: ruby -*-
      # vi: set ft=ruby :

      # All Vagrant configuration is done below. The "2" in Vagrant.configure
      # configures the configuration version (we support older styles for
      # backwards compatibility). Please don't change it unless you know what
      # you're doing.
      Vagrant.configure("2") do |config|
      # The most common configuration options are documented and commented below.
      # For a complete reference, please see the online documentation at
      # https://docs.vagrantup.com.

      # Every Vagrant development environment requires a box. You can search for
      # boxes at https://vagrantcloud.com/search.
      config.vm.box = "base"

      As with many other scripting languages, I find it convenient to have all the configurable values declared as global variables at the top of the file. So let's declare some:
      # -*- mode: ruby -*-
      # vi: set ft=ruby :
      VM_MEMORY = 12288 # 12*1024 MB
      VM_DISK2_SIZE=1024 * 512
      # Stage folders
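The snippet above is truncated; for reference, here is a sketch of a complete globals block, using the names that appear further down in this Vagrantfile. The concrete values are examples, and the VMS_HOME path is an assumption; yours must match the VirtualBox Default Machine Folder preference discussed below:

```ruby
# Global settings used throughout this Vagrantfile (values are examples)
BOX_NAME  = "OL75"
VM_NAME   = "darwin"
VM_MEMORY = 12288            # 12 * 1024 MB
VM_CPUS   = 4
VMS_HOME  = "d:/VirtualBoxVMs"   # assumed; match your VirtualBox Default Machine Folder
VM_DISK2  = "#{VMS_HOME}/#{VM_NAME}/#{VM_NAME}_disk2.vdi"
VM_DISK2_SIZE = 1024 * 512   # 512 GB, dynamically allocated
# Stage folders
STAGE_HOST_FOLDER  = "d:/Projects/vagrant/Stage"
STAGE_GUEST_FOLDER = "/media/sf_Stage"
```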
      Then find the line ‘Vagrant.configure("2") do |config|’. This starts the block that configures the box. All the following configuration is done between that line and the corresponding closing end line.

      Start with renaming the box based on the global variable:
        config.vm.box = BOX_NAME

      And add the following line:
      config.vm.define "darwin"

      The config.vm.box directive (or config.vm.box_url) refers to the base box used. Often this is just the name of a box that is automatically downloaded from the HashiCorp/Vagrant repository. You can also use a URL to a box on the internet; it will be downloaded automatically. But we created our own box, so that we know and control exactly what's in it. It does not need to be downloaded; it's available right away.
      Side note: manage your boxes
      At startup this box is added to your local Vagrant box repository. You can see the boxes in your repository with the command box list:
      d:\Projects\vagrant\ol75>vagrant box list
      OL7U4 (virtualbox, 0)
      OL7U4-1.1b (virtualbox, 0)
      Ubuntu16.0.4LTS (virtualbox, 0)


      These boxes are kept in the .vagrant.d/boxes folder in your user profile:
      My new ol75 box is not added yet, as you can see; that will be done at the first up. Since they're eating up costly space on my SSD drive, it's sensible to remove unused boxes. For instance, the OL7U4 one is superseded by OL7U4-1.1b. The Ubuntu one I still use, although the Ubuntu version is a bit old. So, I should at least remove the OL7U4 one. That can be done with the command box remove ${name}:

      d:\Projects\vagrant\ol75>vagrant box remove OL7U4
      Removing box 'OL7U4' (v0) with provider 'virtualbox'...

      d:\Projects\vagrant\ol75>vagrant box list
      OL7U4-1.1b (virtualbox, 0)
      Ubuntu16.0.4LTS (virtualbox, 0)

      Support for multi-machine boxes
      The other line I added was the config.vm.define directive. This one allows you to define multiple machines in one Vagrant project. It can come in handy when you have a project where one VM is serving your database, while another is running your front-end application. You can have them started and provisioned automatically, where the provisioning can be done in stages. Or you can switch off auto-start for certain machines and start those explicitly. It's a nice-to-know, but I'll leave it for now. I use it for noting the machine name in the provisioning logs.
      Read more about multi-machine configs here.
      SSH Vagrant User
      Below the config.vm.* lines, within the config block, you can add some config lines for the vagrant username/password:
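The original post showed these settings only as a screenshot; a sketch of what they typically look like, with the values and their effect assumed from the surrounding text:

```ruby
config.ssh.username = "vagrant"
config.ssh.password = "vagrant"   # optional: the injected insecure key is normally used
config.ssh.port     = 2222        # ssh port handling as described in the text below
```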

      The password line should be optional, since we injected a key for the vagrant user. You'll see that Vagrant will replace that key. The ssh.port will direct Vagrant to create a port-forwarding for the local port 2222 to the ssh port on the vm.
      Provider config
      Within the config block, below the config.vm.define line, add the following block:
        config.vm.provider :virtualbox do |vb|
          vb.name = VM_NAME
          vb.gui = true
          vb.memory = VM_MEMORY
          vb.cpus = VM_CPUS
          # Set clipboard and drag&drop bidirectional
          vb.customize ["modifyvm", :id, "--clipboard", "bidirectional"]
          vb.customize ["modifyvm", :id, "--draganddrop", "bidirectional"]
          # Create a disk, unless it already exists
          unless File.exist?(VM_DISK2)
            vb.customize [ "createmedium", "disk", "--filename", VM_DISK2, "--format", "vdi", "--size", VM_DISK2_SIZE, "--variant", "Standard" ]
          end
          # Add it to the VM.
          vb.customize [ "storageattach", :id, "--storagectl", "SATA", "--port", "2", "--device", "0", "--type", "hdd", "--medium", VM_DISK2]
        end

      This block begins setting the following properties:
      vb.name: Name of the VM to create. This is how the VM will appear in VirtualBox.
      vb.gui: This toggles the appearance of the UI of the VM. If false, it is started in the background and is then only reachable through the network settings. Using the VirtualBox manager, the GUI can be brought up using the Show button.
      vb.memory: This sets the memory available to the VM. In the global variables I have set it to 12 GB (12*1024 MB).
      vb.cpus: Number of CPU cores available to the VM. In the globals I have set it to 4.
      Using the customize provider command you can change the VM's configuration. It is in fact an API to the VBoxManage utility of VirtualBox.

      With modifyvm we set both the properties --clipboard and --draganddrop to bidirectional. When the GUI is shown (vb.gui = true) then it allows us to copy and paste into the VM for instance.

      Then, using the createmedium command, a standard vdi disk is created. The name VM_DISK2 is based on the variables VMS_HOME and VM_NAME; see the top of the file. It's convenient to have the file created in the same folder as the VM. So check the VirtualBox preferences:

      The VM is created in the Default Machine Folder preference, in a subfolder named after the VM. So make sure that the VMS_HOME variable matches the value of that preference.

      The value for --format of the createmedium command that I used is vdi. This is the default Virtual Disk Image format of VirtualBox. When exporting a VM into an OVA (Open Virtual Appliance package, which is a tar archive), the VDI disks are converted to the VMDK (Virtual Machine Disk) format.
      The size is set with the --size parameter, in my example set to the VM_DISK2_SIZE that I created in the top of the file as 512GB (1024 * 512).
      The --variant Standard indicates a dynamically allocated file that grows as it fills up. So, you won't lose the complete file size in disk space right away. The 512GB limits the growth of the disk.

      I want the disk to persist and not be recreated at every startup, so I surrounded that command with an unless File.exist?(VM_DISK2) block.

      Using the storageattach command I add it to my SATA controller with the --storagectl SATA parameter. It's added to --port 2 and --device 0, since it is my second drive; it needs to appear second in Linux. Then of course the --type is hdd and the --medium is VM_DISK2.

      You see a special variable :id. This refers to the VM that is created in VirtualBox. Of course I want the disk attached to the proper VM.
      Shared/Synced folders
      Vagrant by default creates a folder link, a so-called Synced Folder, the wrapper around VirtualBox’s Shared Folder functionality. The default refers to the Vagrant project folder from which the VM is created and provisioned; in fact, the folder where the Vagrantfile resides. That folder is mounted in the VM as /vagrant. So, navigating to the /vagrant folder in the VM will show you the files from the Vagrant project folder and its child folders. This is convenient, because subfolders of that folder, for instance a scripts folder with provisioning scripts, are immediately available at startup.

      Since we want to install software from the Stage folder on our host, we need a mapping to that.
      So find the following line:
       # config.vm.synced_folder "../data", "/vagrant_data"

      This is an example line for configuring additional synced folders. Add a line below it as follows:
        config.vm.synced_folder STAGE_HOST_FOLDER, STAGE_GUEST_FOLDER

      This maps the folder on the host as denoted in the global variable STAGE_HOST_FOLDER, and mounts it at the value of the global STAGE_GUEST_FOLDER. Notice that in doing so I map the host folder d:/Projects/vagrant/Stage, which is a sub-folder of my main vagrant projects folder, to the folder /media/sf_Stage in the VM.
      Provisioning
      Having the VM configured, the next part to set up is provisioning. Vagrant allows for several provisioners, like Puppet, Chef, Ansible, Salt, and Docker, but a shell snippet is provided in our Vagrantfile, and I'll expand that one. At the bottom of the file, right above the closing end of our configure block, we'll find the snippet:
      # config.vm.provision "shell", inline: <<-SHELL
      # apt-get update
      # apt-get install -y apache2
      # SHELL
      Replace it with the following block:
        config.vm.provision "shell", inline: <<-SHELL
          export SCRIPT_HOME=/vagrant/scripts
          echo _______________________________________________________________________________
          echo 0. Prepare Oracle Linux
          $SCRIPT_HOME/0.PrepOEL.sh
          echo _______________________________________________________________________________
          echo 1. Create Filesystem
          $SCRIPT_HOME/1.FileSystem.sh
          echo _______________________________________________________________________________
          echo 2. Create Oracle User
          $SCRIPT_HOME/2.MakeOracleUser.sh
        SHELL
      This provides an inline script. I like that because it allows me to see directly what happens, in helicopter view. But I do not like to have all the detailed steps in here, so I only call sub-scripts within this block. You can also use external scripts, even remote ones, as described in the docs.
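For completeness, the external-script variant looks like this. A sketch: path is the standard Vagrant shell-provisioner option, resolved relative to the Vagrantfile:

```ruby
# Run an external provisioning script instead of an inline heredoc
config.vm.provision "shell", path: "scripts/0.PrepOEL.sh"
# Remote scripts work too: you can pass a URL as the path
```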

      As can be seen, the scripts I'll describe below have to be placed in the scripts folder, as part of the Vagrant project folder. Remember that this folder is mapped as a synced folder to /vagrant in the VM.

      I have the following script, called 0.PrepOEL.sh, to update the Oracle Linux installation in the VM:
      SCRIPTPATH=$(dirname $0)
      . $SCRIPTPATH/install_env.sh
      echo Installing packages required by the software
      sudo yum -q -y install compat-libcap1* compat-libstdc* libstdc* gcc-c++* ksh libaio-devel* dos2unix system-storage-manager
      echo install Haveged
      sudo rpm -ihv $STAGE_HOME/Linux/haveged-1.9.1-1.el7.x86_64.rpm
      echo 'Adding entries into /etc/security/limits.conf for oracle user'
      if grep -Fq oracle /etc/security/limits.conf
      then
        echo 'WARNING: Skipping, please verify!'
      else
        echo 'Adding'
        sudo sh -c "sed -i '/End of file/i # Oracle account settings\noracle soft core unlimited\noracle hard core unlimited\noracle soft data unlimited\noracle hard data unlimited\noracle soft memlock 3500000\noracle hard memlock 3500000\noracle soft nofile 1048576\noracle hard nofile 1048576\noracle soft rss unlimited\noracle hard rss unlimited\noracle soft stack unlimited\noracle hard stack unlimited\noracle soft cpu unlimited\noracle hard cpu unlimited\noracle soft nproc unlimited\noracle hard nproc unlimited\n' /etc/security/limits.conf"
      fi

      echo 'Changing /etc/sysctl.conf'
      if grep -Fq net.core.rmem_max /etc/sysctl.conf
      then
        echo 'WARNING: Skipping, please verify!'
      else
        echo 'Adding'
        sudo sh -c "echo '
      fs.aio-max-nr = 1048576
      fs.file-max = 6815744
      kernel.shmall = 2097152
      kernel.shmmax = 4294967295
      kernel.shmmni = 4096
      kernel.sem = 250 32000 100 128
      net.ipv4.ip_local_port_range = 9000 65500
      net.core.rmem_default = 262144
      net.core.rmem_max = 4194304
      net.core.wmem_default = 262144
      net.core.wmem_max = 4194304' >> /etc/sysctl.conf"
      fi
      sudo /sbin/sysctl -p
      It uses the following script, install_env.sh:
      echo set Install environment
      export STAGE_HOME=/media/sf_Stage
      export SCRIPT_HOME=/vagrant/scripts
      to set a few HOME variables.
      It first installs required packages using sudo yum. These packages are required for most of the Oracle software setups, like database and/or Fusion Middleware.
      Then, from the Linux folder in the STAGE_HOME folder, the tool haveged is installed. You can download it from the Oracle yum repository, so I should be able to install it with yum as well; one improvement point noted.

      Why haveged, you ask? On non-GUI, terminal-only virtualized systems, the entropy may be low, which causes slow encryption/decryption. Installing and configuring Fusion Middleware, for instance, or starting WebLogic may then be very slow.
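You can see the effect for yourself via the kernel's entropy counter, a standard Linux /proc interface independent of haveged:

```shell
# Available entropy in bits; a few hundred is poor, while with haveged
# running this is typically in the thousands
cat /proc/sys/kernel/random/entropy_avail
```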

      Then it sets some limits for the oracle user and some kernel configs. These can be found in the install guides of the Oracle software.

      To create the file system I use the script 1.FileSystem.sh:
      echo Create folder for mountpoint /app
      sudo mkdir /app
      echo Create a Logical Volume group and Volume on sdb
      sudo ssm create -s 511GB -n disk01 --fstype xfs -p pool01 /dev/sdb /app
      sudo ssm list
      sudo sh -c "echo \"/dev/mapper/pool01-disk01 /app xfs defaults 0 0\" >> /etc/fstab"

      This one creates a folder /app for the mount point. Then it uses ssm (System Storage Manager) to create a 511GB filesystem (because of overhead I can't create a filesystem of 512GB) using a Logical Volume in a Logical Volume Group on the sdb device in Linux. For more info on how this works, read my blog article on it.

      That leaves us with creating an oracle user. For that I use the script 2.MakeOracleUser.sh:
      # Script to create a OS group and user
      # The script uses the file makeOracleUser.properties to read the properties you can set, such as the password
      SCRIPTPATH=$(dirname $0)


      function prop {
        grep "${1}" $SCRIPTPATH/makeOracleUser.properties|cut -d'=' -f2
      }

      # As we are using the database as well, we need a group named dba
      echo Creating group dba
      sudo /usr/sbin/groupadd -g 2001 dba

      # We also need a group named oinstall as Oracle Inventory group
      echo create group oinstall
      sudo /usr/sbin/groupadd -g 2000 oinstall

      # Create the Oracle user
      echo Create the oracle user
      sudo /usr/sbin/useradd -u 2000 -g oinstall -G dba oracle
      echo Setting the oracle password to...
      sudo sh -c "echo $(prop 'oracle.password') |passwd oracle --stdin"
      sudo chown oracle:oinstall /app
      # Add Oracle to sudoers so he can perform admin tasks
      echo Adding oracle user to sudo-ers.
      sudo sh -c "echo 'oracle ALL=NOPASSWD: ALL' >> /etc/sudoers"
      # Create oraInst.loc and grant to Oracle
      echo Create oraInventory folder
      sudo chown -R oracle:oinstall /app
      sudo mkdir -p /app/oracle/oraInventory
      sudo chown -R oracle:oinstall /app/oracle
      echo Create oraInst.loc and grant to Oracle
      sudo sh -c "echo \"inventory_loc=/app/oracle/oraInventory\" > /etc/oraInst.loc"
      #sudo sh -c "echo \"\" > /etc/oraInst.loc"
      sudo sh -c "echo \"inst_group=oinstall\" >> /etc/oraInst.loc"
      sudo chown oracle:oinstall /etc/oraInst.loc

      It uses a property file makeOracleUser.properties for the oracle password:

      Using the prop function this property is read.
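The property file itself was only shown as a screenshot in the original post. A sketch of its content and of prop in action; the password value here is a placeholder:

```shell
# Create a sample properties file (normally it sits next to the script)
SCRIPTPATH=$(mktemp -d)
echo 'oracle.password=welcome1' > $SCRIPTPATH/makeOracleUser.properties

# The prop function from the script above
function prop {
  grep "${1}" $SCRIPTPATH/makeOracleUser.properties|cut -d'=' -f2
}

prop 'oracle.password'   # prints: welcome1
```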

      The groups dba and oinstall are created. Then the oracle user is created and its password set. The filesystem mounted on /app is assigned to the oracle user to own, and the user is added to the /etc/sudoers file.

      Lastly the oraInventory and the oraInst.loc file are created.
      Up, up and up it goes…!
      If everything went alright, you’re now ready to fire up your VM.
      So open a command window and navigate to your vagrant project folder, if not done already.
      Then simply issue the following command:
      d:\Projects\vagrant\ol75>vagrant up

      And then you wait…. And watch carefully to see that Vagrant imports the box, creates the VM, and provisions it.

      Some other helpful commands
      I'll finish with some other helpful commands:
      vagrant up: Start a VM, and provision it the first time.
      vagrant halt: Stop the VM.
      vagrant suspend: Suspend the VM, saving its running state.
      vagrant destroy: Remove the VM.
      vagrant box list: List the base boxes in your repository.
      vagrant box remove: Remove a listed base box from your repository.
      vagrant package --base <VM Name> --output <box filename>: Package the VM <VM Name> from the provider (VirtualBox) into a base box with the given <box filename>.
      Next stop: installing software as the oracle (or other non-vagrant) user.

      Oracle Linux 7 Update 5 is out: time to create a new Vagrant Base Box

      Tue, 2018-04-24 09:31
      It's been busy, so unfortunately it's already been almost two weeks since I wrote my introductory story on Vagrant. Today I happen to have an afternoon off, and I noticed that Oracle Linux 7 Update 5 is out. I based my first boxes on 7.4, so it's a nice moment to start with creating a new base box.

      The essentials on creating a Vagrant base box can be read here. But I'm going to guide you through the process step by step, so I hope you will be able to repeat this yourself.

      First off, Vagrant recommends Packer to automate the creation of base boxes. But I'm a bit confused, because this guide apparently states that it is deprecated as of March 2018. I haven't tried Packer yet, but over the years I have created a base VM only a few times. I used to create a base VM that I import/clone to create new VMs over and over again. And often I start off with a VM that already contains a pre-installed database, for instance.

      Vagrant has a built in command to create a base box out of an existing VM. That is what I use.
      Base box requirements
      What is a base box actually? Well, it's in fact a sort of template that is used by Vagrant to create and configure a new VM and provision it. It should contain the following:
      • An OS: I use Oracle Linux 7 Update 5 for this story. I also have a base box with Ubuntu. Ubuntu has some peculiarities I want to discuss later on in this series. For this base box I'll install a server-with-gui. But further as basic as possible.
      • A vagrant user. The vagrant user is used for provisioning the box. We'll place a public insecure key in it, that will be replaced by Vagrant at first startup. We'll add vagrant to the sudoers list, so the user can sudo without passwords.
      • A started ssh daemon:  Vagrant connects via ssh using the vagrant-user to do the provisioning.
      • A NAT (Network Address Translation) Adapter as the first one: needed to do kernel/package updates without further network configuration.
      • VirtualBox GuestAdditions installed: Vagrant makes use of shared folders to map the project folder to get to the scripts. Also it's convenient to add an extra stage folder mapping. 
      • Password of root: not a requirement, but apparently it's a bit of a standard to set the root password to vagrant for ease of sharing. But at least note down the passwords.
      That's about it. Maybe I forget something, but since it's digital, I can edit it later... So let's get started.

      Download Oracle Linux
      All the serious enterprise stuff of Oracle can be downloaded at edelivery. Search for Oracle Linux:
      Then add the 7.5 version to the Cart by clicking it:

      Follow the wizard instructions and you'll get to:
      I downloaded V975367-01.iso        Oracle Linux Release 7 Update 5 for x86 (64 bit), 4.1 GB.

      Create the VM
      The ISO is downloading, so let's create a VM in VirtualBox. I assume VirtualBox with the VirtualBox Extension Pack is installed, and for later on Vagrant, of course.

      From the Oracle VM VirtualBox Manager, create a new VM, I called it OL75, for Oracle Linux 64 bit:
      I followed the wizard and gave it 10240 MB memory and a 128GB dynamically allocated virtual disk:

      In the VM Settings, I set the number of processors to 4 and for now I kept everything to the default.

      In the meantime my download is ready, so in the VM Settings, under Storage I added the disk by clicking the disk icon next to the IDE controller:

      Then navigate to your downloaded iso:
      and select it. Now the VM is ready to kick-off:

      It will start up automatically after a minute, but let's not wait that long.

      I don't need much, but in the Software Selection I do want Server with GUI:
      But without selecting other packages. What I might need later on, I'll install at provisioning time.

      I do not like the default localdomain network names, so I changed the network hostname to darlin-vce.darwin-it.local:
      Hostname darlin stands for Darwin Linux and vce for Virtual Course Environment.

      Then hit Begin Installation:

      Soon in the installation the installer asks for the Root password:
      And the password is as said: vagrant.
      Then I also add a vagrant user with the same password:
      Having done that, we need to wait for the installer to finish. At the end of the Install, do a reboot:

      This leads to 2 questions to be answered. One is about accepting the license; I assume that one can be answered without guidance. The other is about connecting the network.

      You need to switch on the network adapter, but to have that done automatically you need to configure it and check the box Automatically connect to this network when it is available on the General tab. You'll need to have done this, otherwise Vagrant will have difficulties connecting to the box.
      Then finish the configuration:
      Install guest additions
      To be able to install the Guest Additions, we need some kernel development packages. We could have selected additional kernel packages during the OS install, but I wanted an installation that is as basic as possible. And the following is more fun...

      So open a terminal and switch to the super user:

      [vagrant@darlin-vce ~]$ su -
      Last login: Tue Apr 24 09:41:21 EDT 2018 on pts/0

      Then stop PackageKit, because it probably holds a lock that blocks yum:
      [root@darlin-vce ~]# systemctl stop packagekit

      And then install the packages kernel-uek-devel and kernel-uek-devel-4.1.12-112.16.4.el7uek.x86_64, which are, by the way, suggested by the GuestAdditions installer:
      [root@darlin-vce ~]# yum -q -y install kernel-uek-devel kernel-uek-devel-4.1.12-112.16.4.el7uek.x86_64
      No Presto metadata available for ol7_UEKR4
      warning: /var/cache/yum/x86_64/7Server/ol7_latest/packages/cpp-4.8.5-28.0.1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
      Public key for cpp-4.8.5-28.0.1.el7.x86_64.rpm is not installed
      Public key for kernel-uek-devel-4.1.12-124.14.1.el7uek.x86_64.rpm is not installed
      Importing GPG key 0xEC551F03:
      Userid : "Oracle OSS group (Open Source Software group) "
      Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
      Package : 7:oraclelinux-release-7.5-1.0.3.el7.x86_64 (@anaconda/7.5)
      From : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

      Having done that, insert the GuestAdditions CD:
      It brings the following pop-up, click Run:

      And provide the Administrator password:

      In my case the script ran, but during it the display got messed up. After a reset of the VM (I waited until I had the impression it was done), the VM came up with a hi-res display, indicating that the install went OK. The bi-directional clipboard also worked.

      Configure vagrant user
      Again, in a terminal, switch to the super user and add the following line to the /etc/sudoers file:
      vagrant ALL=(ALL) NOPASSWD: ALL
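Instead of editing /etc/sudoers directly, a drop-in file under /etc/sudoers.d is a bit safer, since a syntax error there won't break the main file. A sketch, run as root:

```shell
# Give the vagrant user passwordless sudo via a drop-in file
echo 'vagrant ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/vagrant
chmod 0440 /etc/sudoers.d/vagrant

# Validate the syntax of the drop-in file
visudo -cf /etc/sudoers.d/vagrant
```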

      Exit, and as the vagrant user create a .ssh folder in the vagrant home folder, cd into it and create the file authorized_keys:
      [vagrant@darlin-vce ~]$ mkdir .ssh
      [vagrant@darlin-vce ~]$ cd .ssh
      [vagrant@darlin-vce .ssh]$ vi authorized_keys

      Insert the following content:
      ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key

      This is the insecure key of vagrant that can be downloaded here.
      It will be replaced by Vagrant at first startup.
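Note that sshd is strict about the permissions on these files: the .ssh folder must not be writable for others, and neither may authorized_keys. The whole step, including the permissions, can be done non-interactively like this (the key is the public part of Vagrant's well-known insecure key pair, as above):

```shell
# Create the .ssh folder and authorized_keys with the permissions sshd expects
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
cat > "$HOME/.ssh/authorized_keys" <<'EOF'
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key
EOF
chmod 600 "$HOME/.ssh/authorized_keys"
```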

      Package the box
      So, now we have a base install that can function as a base box for Vagrant. We can now shut it down, export it to an OVA (just as a backup for VirtualBox) and then create a base box out of it.

      After exporting the OVA, which I'll skip describing here, just open a command window. I assume you have Vagrant installed.

      To package the box, you use the package subcommand of vagrant:

      Microsoft Windows [Version 10.0.16299.371]
      (c) 2017 Microsoft Corporation. All rights reserved.

      d:\Projects\vagrant>vagrant package --base OL75 --output d:\Projects\vagrant\boxes\OL75v1.0.box
      ==> OL75: Exporting VM...
      ==> OL75: Compressing package to: d:/Projects/vagrant/boxes/OL75v1.0.box
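The resulting .box file can then be registered with Vagrant under a logical name and used from a Vagrantfile. A sketch; the box name "ol75" is an assumption:

```shell
rem Register the packaged box under a logical name
vagrant box add ol75 d:\Projects\vagrant\boxes\OL75v1.0.box

rem Generate a Vagrantfile referencing the box, then bring the VM up
vagrant init ol75
vagrant up
```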


      Conclusion
      Well, that concludes this part of the series. We have our own base box and it's barely 3 GB. Next up: create a VM with it. Stay tuned.