Darwin IT

Darwin-IT professionals do ICT projects based on a broad range of Oracle products and technologies. We write about our experiences and share our thoughts and tips.

ServiceBus 12c: Logging

Fri, 2016-06-24 04:49
As a developer you probably 'log-a-lot' in OSB. (A funny term, perfect for mocking people who tend to excessively add log activities/statements to their code. And hey, if you're being mocked like this: I'm happy to join you, let's make it a 'Geuzennaam', a badge of honour.)

So, as a log-a-lot, I was asked by an OSB developer this week, on an OSB 11g to ServiceBus 12c upgrade that I support, why his logs weren't visible in the server logs of the 12cR2 SOA QuickStart Integrated WebLogic.

We reviewed the logging settings of the server; they were all set to Debug.

But it turned out that in the Service Bus log configuration in EM everything is set to Warning (Inherited).

To check it out, go to http://localhost:7101/em (on a default configured SOA QuickStart Integrated Weblogic domain).

Click on the Target Navigator and navigate to soa-infra -> Service Bus.

Then in the Service Bus menu, navigate to Logs ->  Log Configuration:

Then you need the Log-Levels tab:
Here you see that you can set another level on different SB subsystems. It can be interesting to check out some of those. For instance, I think it was oracle.osb.debug.instancetracking (but I'm not sure, I checked several of them) that enables the logging of message contents and variable changes. You can see that all the log-level settings are set to Warning, inherited from the level above.

But the one that is of interest for the pipeline-logging is oracle.osb.logging.pipeline:
Set that one to Trace. I used TRACE:16 (FINER). But it turns out that the levels here differ from those in the pipeline Log activity and from those of the WebLogic Server logs. I haven't got a mapping at hand, but this one lets Debug messages through.

Hit the apply button top right:

This should enable the logging of the Pipeline.
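By the way, if you'd rather script this than click through EM, the ODL WLST logging commands should be able to do the same. A minimal sketch (assuming the setLogLevel command of the Fusion Middleware WLST shell and 'DefaultServer' as the integrated server name; verify both against your own environment):

# Run wlst.cmd from <QuickStart home>/oracle_common/common/bin
connect('weblogic', 'welcome1', 't3://localhost:7101')
# persist='1' should write the level to logging.xml so it survives a restart
setLogLevel(target='DefaultServer', logger='oracle.osb.logging.pipeline', level='TRACE:16', persist='1')
disconnect()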

A couple of notes on the automatic generation of a SOA Suite/OSB domain

Fri, 2016-06-24 04:49
Earlier you could have enjoyed my article on the automatic generation of a SOA/OSB domain. This week I encountered some issues with a domain created this way at a customer.

I got the chance to dive into it this week, and luckily I not only learned a lot again, but found the problem as well. I adapted my scripts. I won't repost them completely; I've created a GitHub account and will try to place them there in the near future.

But I'll cover the changes, especially those that caused the problems.

Enrollment
When you create a domain using the config.sh/.cmd wizard, and choose to configure the nodemanager, you'll be asked for the nodemanager user and password. You get a per-domain nodemanager for free, and the domain is enrolled with the nodemanager for you. You might need to adapt the nodemanager.properties file in <domain_home>/nodemanager and set the property SecureListener to false. The default is true, while in the Machine definition in the domain the nodemanager is configured for SSL. If you disable the SecureListener, you need to set the Node Manager type in the WLS console under Machines to Plain instead of SSL, and adapt the listen port.
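For reference, the relevant lines in <domain_home>/nodemanager/nodemanager.properties then look something like this (the port is just an example value; it must match the listen port configured on the Machine in the console):

SecureListener=false
ListenAddress=darlin-vce-db
ListenPort=5556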

The docs on configuring the nodemanager for 12.2.1 can be found here. One of the biggest changes between 11g and 12c is that in 12c you get a per-domain nodemanager, with a nodemanager home in the domain home, instead of a nodemanager home in the FMW_HOME, the home of the binaries. Also, in the bin folder of the domain you'll find scripts to start and stop the nodemanager.

So for a domain configured with the config wizard, you need to do little to get the nodemanager working.

But using the scripts in my previous article, you'll find that although a nodemanager password file is created, something is missing for a proper startup of your servers using the nodemanager. Although the nodemanager starts, connecting to it using nmConnect() fails with a message like:
WLSTException: Error occured while performing nmConnect : Cannot connect to Node Manager. : Access to domain 'osb_domain' for user 'weblogic' denied.

You can go to the Domain->security settings in the weblogic console, and change the nodemanager password, but apparently this is not enough. You'll need to explicitly enroll the domain against the nodemanager. To do so:
  • Start the AdminServer, using the startWebLogic.sh/.cmd script in the domain home.
  • Start wlst.sh/.cmd
  • Connect to the AdminServer using:  connect(adminUser, adminPwd, adminURL)
  • Perform a NodeManager enroll using: nmEnroll(soaDomainHome, nodeManagerHome)
  • Stop the AdminServer
  • (Re)Start the nodemanager
  • In wlst: Connect to the nodemanager using nmConnect(....)
  • Perform nmStart('AdminServer')
The following enrollDomain.py script might help:
#############################################################################
# Enroll a SOA/BPM/OSB domain with the nodemanager
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.1, 2016-06-23
#
#############################################################################
# Modify these values as necessary
import sys, traceback
scriptName = 'enrollDomain.py'
#
# Home Folders
soaDomainHome = domainsHome+'/'+soaDomainName
nodeManagerHome = soaDomainHome+'/'+'nodemanager'
#
#
lineSeperator='__________________________________________________________________________________'
#
#
def usage():
  print 'Call script as: '
  print 'Windows: wlst.cmd '+scriptName+' -loadProperties localhost.properties'
  print 'Linux: wlst.sh '+scriptName+' -loadProperties environment.properties'
  print 'Property file should contain the following properties: '
  print "domainsHome=/u01/app/work/domains"
  print "soaDomainName=osb_domain"
  print "adminListenAddress=localhost"
  print "adminListenPort=7101"
  print "adminUser=weblogic"
  print "adminPwd=welcome1"
#
#
def main():
  try:
    #
    # Connect to the AdminServer and enroll the domain for the nodemanager
    print (lineSeperator)
    print ('Enroll '+soaDomainName+' for NodeManager')
    print('\nConnect to AdminServer ')
    print (lineSeperator)
    adminURL=adminListenAddress+':'+adminListenPort
    connect(adminUser, adminPwd, adminURL)
    #
    print('\nPerform nmEnroll')
    print (lineSeperator)
    #
    nmEnroll(soaDomainHome, nodeManagerHome)
    #
    print ('\nFinished')
    #
    print('\nExiting...')
    exit()
  except NameError, e:
    print 'Apparently properties not set.'
    print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
    usage()
  except:
    apply(traceback.print_exception, sys.exc_info())
    stopEdit('y')
    exit(exitcode=1)
#call main()
main()
exit()

It can be called with a shell script (enrollDomain.sh) like:
#!/bin/bash
. fmw12c_env.sh
echo
echo Enroll domain
wlst.sh enrollDomain.py -loadProperties darlin-vce-db-osb.properties

I also adapted the function 'createUnixMachine' in the createSoaBpmDomain.py script:
#
# Create a Unix Machine
def createUnixMachine(serverMachine, serverAddress, serverPort, nmType):
  print('\nCreate machine '+serverMachine+' with type UnixMachine')
  print (lineSeperator)
  cd('/')
  create(serverMachine,'UnixMachine')
  cd('UnixMachine/'+serverMachine)
  create(serverMachine,'NodeManager')
  cd('NodeManager/'+serverMachine)
  set('ListenAddress',serverAddress)
  set('ListenPort',int(serverPort))
  set('NMType',nmType)
This allows for a non-default nodemanager configuration, with another listen port and nodemanager type, based on the following properties in the property file:
#
# Server Settings
nmType=Plain
server1Machine=darlin-vce-db
server1Address=darlin-vce-db
server1Port=5555
server2Enabled=false
server2Machine=darlin-vce-db
server2Address=darlin-vce-db2
server2Port=5555
An example of the property file can be found in the earlier article: automatic generation of a SOA/OSB domain.

Logging
In the example property file in my scripts you can set a location for the logging:
# Logs
logsHome=/u01/app/work/logs

If this is an absolute path, there is nothing to worry about: your domain should start correctly. But if you choose to use a relative path, like 'logs', then starting the AdminServer might succeed (it did in my case), but starting the managed servers fails with a 'No such file or directory' message:
wls:/nm/osb_domain> nmStart('Adminserver')

Starting server Adminserver ...

Traceback (innermost last):

File "<console", line 1, in ?

File "<iostream", line 188, in nmStart

File "<iostream>", line 553, in raiseWLSTException

WLSTException: Error occurred while performing nmStart : Error Starting server Adminserver : Received error message from Node Manager Server: [Server start command for WebLogic server 'Adminserver' failed due to: [No such file or directory]. Please check Node Manager log and/or server 'Adminserver' log for detailed information.]. Please check Node Manager log for details.

Use dumpStack() to view the full stacktrace :

wls:/nm/osb_domain

Now it turns out that this is caused by the JavaArgs that are generated and set in the script. I found that if you set JavaArgs you need to set redirects for weblogic.Stdout and weblogic.Stderr, like:
'-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1532m -Dweblogic.Stdout='+logsHome+'/AdminServer.out -Dweblogic.Stderr='+logsHome+'/AdminServer_err.out'

Here logsHome should be an absolute path. This has to do with the context in which the nodemanager starts the server: from that context the relative reference apparently does not evaluate to the proper location.
You can, however, leave the Java args empty. So I changed my scripts to no longer use the getServerJavaArgs function, but to get the args from the property file. I replaced the xxxJavaArgsBase variables with xxxJavaArgs variables, and left them empty.

The configurator doesn't set the JavaArgs; it leaves them to setDomainEnv.sh/.cmd and setStartupEnv.sh/.cmd. If you do the same, you can use a relative path, and the servers will start properly.


Start problems with Nodemanager

Tue, 2016-06-21 02:05
Since my previous long-running assignment, I've been involved in a few OSB 11g to 12c upgrade projects, where I have been working on automatic installs. Hence my articles about automatic installs, patching and domain configuration.

When I create a new domain, 12cR2 (12.2.1), using my scripts, I'm not able to use the nodemanager of the domain to start the servers. Actually, I can't connect to it. I get it running alright, but connecting to it fails with a message like:
WLSTException: Error occured while performing nmConnect : Cannot connect to Node Manager. : Access to domain 'osb_domain' for user 'weblogic' denied.

When you google this, the standard solution is to change the nodemanager password on the security tab of the domain in the console, like in this article. Actually a very good suggestion, but it did not work for me. It turned out that performing an nmEnroll() did the job in my case. What you need to do is:
  • Start the admin server using the startWeblogic.sh script in the domain root.
  • Verify and correct the nodemanager settings under Environment->Machines-><your machine>, ensuring it is inline with the file nodemanager.properties in {domainHome}/nodemanager.
  • Start wlst.sh
  • Connect to the adminserver using: connect({adminuser}, {adminpassword} ,'{adminHost}:{adminPort}')
  • Perform: nmEnroll({domainHome}, {domainHome}/nodemanager)
Here I assume the use of a per-domain nodemanager, where {domainHome}/nodemanager is the nodemanager home within the domain home.
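In WLST commands this boils down to something like the following (a minimal sketch with example values; substitute your own credentials and domain home):

connect('weblogic', 'welcome1', 't3://darlin-vce-db:7001')
nmEnroll('/u01/app/work/domains/osb_domain', '/u01/app/work/domains/osb_domain/nodemanager')
disconnect()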

Then I was able to connect to the nodemanager, start the AdminServer and then the OSB Server.


At my customer, they have been struggling with configuring a 'standalone' nodemanager, as they did in the 11g situation. The nodemanager can be started and connected to, but doing an nmStart of the admin server got:

wls:/nm/osb_domain> nmStart('Adminserver')

Starting server Adminserver ...

Traceback (innermost last):

File "", line 1, in ?

File "", line 188, in nmStart

File "", line 553, in raiseWLSTException

WLSTException: Error occurred while performing nmStart : Error Starting server Adminserver : Received error message from Node Manager Server: [Server start command for WebLogic server 'Adminserver' failed due to: [No such file or directory]. Please check Node Manager log and/or server 'Adminserver' log for detailed information.]. Please check Node Manager log for details.

Use dumpStack() to view the full stacktrace :

wls:/nm/osb_domain

This is also in 12cR2 (12.2.1), with a domain created with the same script. Sharp eyes may notice that the admin server name is not the default: it has a lowercase 's' instead of an uppercase one. They had been fiddling around with the naming of the admin server. What we finally did was keep the non-default naming, but cleanse the server folder by removing the data, cache and tmp folders. We also removed the logs folder to be able to see whether new logs were created when starting from the nodemanager. We configured the per-domain nodemanager and then did the same as above, performing an nmEnroll() against the domain nodemanager. After that the 'Adminserver' was startable.

Conclusion
I hardly ever had the need to use nmEnroll(): not with a new domain, and in 11g at least not even with a separate nodemanager. From colleagues I did not hear of the need to use it in 12c either. So why did I need it to solve the problems sketched here? I haven't sorted that out yet; I hope to get a finger behind it some time. For the moment, take advantage of these experiences.

Object Oriented Pl/Sql

Sat, 2016-06-18 05:01
Years ago, in my Oracle years, I wrote an article on Oracle (Object) Types and how they make Pl/Sql so much more powerful. It was in Dutch, since I wrote it for our monthly internal consulting magazine called 'Snapshot'. Since it was in Dutch, and I regularly refer to it on our blog or in questions on forums, I have wanted to rewrite it for years. So let's go. Oracle Types were introduced in the Oracle 8/8i era, and they enabled me to build stuff that was not possible using regular Pl/Sql.

In Oracle 8/8i the implementation lacked constructors, but it was already very powerful. From Oracle 9i onwards the possibilities were extended a lot, bringing it to where it still is, I think. So everything that I post here is possible from Oracle 9i onwards.

And you may ask: why bother with an extension to Pl/Sql dating from about 10 to 15 years ago? And why, if I'm into SOA Suite and/or OSB? Well, I really think that Pl/Sql in combination with Object Types is the best tool at hand for creating APIs on Oracle Database applications. And the DB Adapter's capability of calling Pl/Sql functions with Object Type parameters is very strong. Together they make the best integration pattern for the Oracle Database. Even for data retrieval it's much stronger than stand-alone queries or views.

Do these Object Types make Pl/Sql an object-oriented language? I'm not going to discuss that at length. I think it is much more like the way Turbo Pascal 5.5 was OO: more a 3GL with object extensions. So if you're an OO purist: you're right upfront, as far as I'm concerned. But object types make the life of a Pl/Sql programmer a lot more fun. And I feel that, still, after all those years, the capabilities aren't utilized as much as they could be.

So let's dive into it. We start with the basics.
A type with a constructor
A type with a constructor and some methods can be created as follows:

create or replace type car_car_t as object
(
-- Attributes
license varchar2(10)
, category number(1)
, year number(4)
, brand varchar2(20)
, model varchar2(30)
, city varchar2(30)
, country varchar2(30)
-- Member functions and procedures
, constructor function car_car_t(p_license in varchar2)
return self as result
, member function daily_rate(p_date date)
return number
, member procedure print
)
/

I don't intend to give a lecture on object orientation here, but if you look at the type specification it is immediately clear that an Oracle Type is a kind of record type on its own, but that besides attributes it also contains executable additions: methods.

Methods are functions and/or procedures that operate on the attributes of the type. Within a method you can see the attributes as 'global' package variables.

As said, a very convenient addition in the Types implementation is the possibility to define your own constructors. They're declared as:
  , constructor function car_car_t(p_license in varchar2)
return self as result

A constructor starts with the keyword constructor and is always a function that returns the 'self' object as a result. Also, the constructor is always named the same as the type itself. Implicitly there's always a constructor with all attributes as parameters. This was already the case in Oracle 8, and from Oracle 9i/10g onwards this is still delivered for free. But besides the default constructor you can define several of your own. This enables you to instantiate an object based on a primary key value, for example: based on that key you can do a select into the attributes from a particular table. Or instantiate a type based on reading a file. Or a parameter-less one, so that you can just instantiate a dummy object that can be assigned values later in a process. This is especially convenient if you have a very large object where not all the attributes are mandatory.
Often I add a print method or a to_xml or to_string method. This enables you to print all the attributes or return an XML document with them, including calling the same method on child objects. Child objects are attributes based on other types or collections.
The implementation of the methods are in the Type Body:
create or replace type body car_car_t is

  -- Member procedures and functions
  constructor function car_car_t(p_license in varchar2)
    return self as result
  is
  begin
    select license
    ,      category
    ,      year
    ,      brand
    ,      model
    ,      city
    ,      country
    into   self.license
    ,      self.category
    ,      self.year
    ,      self.brand
    ,      self.model
    ,      self.city
    ,      self.country
    from   cars
    where  license = p_license;
    return;
  end;
  member function daily_rate(p_date date)
    return number
  is
    l_rate number;
    cursor c_cae( b_license in varchar2
                , b_date    in date)
    is select cae.dailyrate
       from   carsavailable cae
       where  b_date between cae.date_from and nvl(cae.date_to, b_date)
       and    cae.car_license = b_license
       order by cae.date_from;
    r_cae c_cae%rowtype;
  begin
    open c_cae( b_license => self.license
              , b_date    => p_date);
    fetch c_cae into r_cae;
    close c_cae;
    l_rate := r_cae.dailyrate;
    return l_rate;
  end;
  member procedure print
  is
    l_daily_rate number;
  begin
    dbms_output.put_line( 'License : '||self.license);
    dbms_output.put_line( 'Category : '||self.category);
    dbms_output.put_line( 'Year : '||self.year);
    dbms_output.put_line( 'Brand : '||self.brand);
    dbms_output.put_line( 'Model : '||self.model);
    dbms_output.put_line( 'City : '||self.city);
    dbms_output.put_line( 'Country : '||self.country);
    l_daily_rate := daily_rate(p_date => sysdate);
    if l_daily_rate is not null
    then
      dbms_output.put_line('Daily Rate: '||l_daily_rate);
    else
      dbms_output.put_line('No cars available');
    end if;
  end;

end;
/

Here you see that I used a primary-key-based constructor to do a select from the cars table into the attributes, and then do a simple return. I do not have to specify what I want to return, since it implicitly returns 'itself', that is: an instance of the type. So the return variable is more or less implicit.

The print method enables me to test the object easily after the instantiation:
declare
  -- Local variables here
  l_car car_car_t;
begin
  -- Test statements here
  l_car := car_car_t(:license);
  l_car.print;
end;
Collections
An object rarely comes alone. The same goes for object instances: we talk to the database, so in most cases we have more than one instance of an entity.

A set of object instances is called a collection, and it is defined as:
create or replace type car_cars_t as table of car_car_t;


So a collection is actually a table of objects  of a certain type. Oracle is even able to query on such a collection, but I'll elaborate on that later.

Note, by the way, that there now is a reference to, or put otherwise, a dependency on the particular object type. This means that the object specification of, in this case, 'car_car_t' can't be changed anymore without dropping all the references to it. This may become quite inconvenient when changing a large hierarchy of object types. So you'd better create an install script right away that can recreate (drop and create) the complete tree.

The 'body', the source code, can be recompiled. This is important, because the specification defines the structure of the object (the class) and other objects depend on this interface. Maybe Oracle should define an interface type, so this can be made a bit more loosely coupled.

This counts especially when it comes to table definitions (in the database), where you can add an object-type-based column. After all, a physical table can't become invalid; what should become of the data in that case? It would become 'undefined'. For the rest, you can see a Collection as an ordinary Pl/Sql table, comparable to an "index by binary_integer" table, but with the difference that it is a (stand-alone) object in itself, containing other objects. This means that to be able to use a Collection, it has to be instantiated. This can be done implicitly by means of a query:
select cast(multiset(
select license
, category
, year
, brand
, model
, city
, country
from cars
) as car_cars_t)
from dual

What this actually does is redefine the result set of the select on the cars table as a Collection. The multiset function denotes that what is returned is actually a data set of zero or more rows. The cast function is used to denote as which datatype/object type the multiset should be considered. You could say that a collection layer is laid over the result set. I haven't been able to test it, but I am curious about the performance effects: what would be the difference between this query and, for example, a for-cursor loop? Now that it is seasoned with a collection sauce, you can handle this result set as if it were a Pl/Sql table in memory after all.


Of course you can also instantiate and fill the collection more explicitly, like:
declare
  l_cars car_cars_t;
begin
  l_cars := car_cars_t();
  for r_car in (select license from cars)
  loop
    l_cars.extend;
    l_cars(l_cars.count) := car_car_t(r_car.license);
  end loop;
  if l_cars.count > 0
  then
    for l_idx in l_cars.first..l_cars.last
    loop
      dbms_output.put_line('Car '||l_idx||':');
      l_cars(l_idx).print;
    end loop;
  end if;
end;

In this case the collection is instantiated in the first line. Then a for loop is started based on a select of the primary key of the cars table, which is the license column. In the loop, on each iteration the collection is extended, and a new instance of car_car_t, created using the primary-key constructor with the car's license, is assigned to the last row of the collection, denoted with the collection's count attribute.
In the second loop an example is given, which shows how easily you can traverse the collection and print each row-object.

Object functions and views
The creation and population of a collection can of course also be put in a function:
create or replace function get_cars
  return car_cars_t is
  l_cars car_cars_t;
begin
  select cast(multiset(
           select license
           ,      category
           ,      year
           ,      brand
           ,      model
           ,      city
           ,      country
           from   cars
         ) as car_cars_t)
  into   l_cars
  from   dual;
  return(l_cars);
exception
  when no_data_found then
    l_cars := car_cars_t();
    return l_cars;
end get_cars;
/

This function get_cars has no input parameters, but you could restrict the query based on model or year, for instance. It returns the car_cars_t collection. If there are no cars available, the query raises a no_data_found exception, since it does a select-into. But since this happens in a function, the nice thing is that you can catch the exception and just return an empty collection.

But the fun part is that you can use the result of that function as the source of a query. So you saw in the function that you can lay a collection sauce over a result set, but the other way around is also possible: a collection can be queried:
select car.license
, car.category
, car.year
, car.brand
, car.model
, car.city
, car.country
, car.daily_rate(sysdate) daily_rate
from
table(get_cars) car

The trick is in the table function: it directs Oracle to consider the outcome of the function as a result set. The example also shows that the methods are available as well, but of course only if the method is a function. By the way, in this example the attributes and the method results are simple scalar datatypes, but they could also be types or collections, and those too are available in the query. The attributes of object attributes can be referenced in the query with the dot notation ('.'). In other words: hierarchically deeper attributes can be fetched as a column value and returned this way.

In this case we use a function as the base for the query. It is then also possible to create a view on top of it, as long as the function and the object types that are returned are 'visible' to the user/schema that owns the view and/or uses the view.
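As a small sketch (the view name car_v_cars is just an example name), such a view could look like:

create or replace view car_v_cars as
select car.license
,      car.brand
,      car.model
,      car.daily_rate(sysdate) daily_rate
from   table(get_cars) car;

Querying car_v_cars then gives you the function-backed result set as if it were a regular table.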

But to stretch it some more: not only the result of a function can be used as the base of a query. A local variable or package variable can also be supplied as the source of a query:
declare
  l_cars car_cars_t;
  l_rate number;
begin
  l_cars := get_cars;
  select car.daily_rate(sysdate - 365) daily_rate
  into   l_rate
  from   table(l_cars) car
  where  license = '79-JF-VP';
  dbms_output.put_line( l_rate);
end;
Isn't this a lot easier than traversing a pl/sql table in search of that one particular row?
Now you could think: isn't this a full-table scan then? Yes, indeed. But this full-table scan is done completely in memory and is therefore very fast. And let's be honest: who ever created a pl/sql table of more than a gigabyte? Although with the examples above this can be done quite easily. So a bit of performance-aware programming is recommended.

Pipelining
In the previous paragraph I already mentioned performance. With the collection-function method above you could program your own 'External Tables' in Oracle 8i already (External Tables were introduced in 9i). So you could, for example, read a file in a Pl/Sql function using UTL_FILE, process it into a collection and return that.

Then you could create a view around it with the table function and do a query on a file! Impressive, huh? A very important disadvantage of this method is that the function is executed completely, as one logical/functional unit. So the complete file is read, and the complete collection is built and returned to the caller as a whole. That means that when doing a select on that function, the function is executed completely before you'll get your result. This is especially inconvenient when the post-processing of the function's result is time-expensive as well. This is why pipelining was introduced in Oracle 9i.

A pipelined function is functionally identical to the collection-returning function described above. The most important difference is that it is declared as a pipelined function (duh!), but moreover that the intermediate results are piped (sorry, in Dutch this is funny, but I did not make up the term) and thus returned as soon as they become available.

This looks like:
create or replace function get_cars_piped(p_where in varchar2 default null)
  return car_cars_t
  pipelined
is
  l_car car_car_t;
  type t_car is record
    ( license cars.license%type);
  type t_cars_cursor is ref cursor;
  c_car t_cars_cursor;
  r_car t_car;
  l_query varchar2(32767);
begin
  l_query := 'Select license from cars '||p_where;
  open c_car for l_query;
  fetch c_car into r_car;
  while c_car%found
  loop
    l_car := car_car_t(p_license => r_car.license);
    pipe row(l_car);
    fetch c_car into r_car;
  end loop;
  close c_car;
  return;
end get_cars_piped;

So you see indeed the keyword 'pipelined' in the specification of the function, and after that, in the loop, that each separate object is returned using the 'pipe row' statement. You could say that 'pipe row' is like an intermediate return. Besides that, you get in this function, completely for free and on the house, an example of the use of a ref cursor. With this it is possible to build up a flexible cursor for which you can adapt the query. You can call this function as follows:
select *
from table(get_cars_piped('where license != ''15-DF-HJ'''))

I found that it is not possible to call this function directly in a pl/sql block. If you think about it, it seems logical. What happens in the statement is that the SQL engine calls the pl/sql function, receives each row directly and is able to process it. This way it is possible to execute the function and process the result simultaneously. Pl/Sql in itself does not support threads or pipelines: Pl/Sql expects the result of a function call as a whole and can advance with processing only when the function is completely done.
Object Views
Now you have seen how to lay a Collection sauce over a result set and how a Collection can be queried using select statements. Another important addition in Oracle 9i are the so-called object views (I say important, but I haven't seen them much out in the open). Object views are views that return object instances, in contrast to regular views that return rows with columns.
An object view is defined as follows:
create or replace view car_ov_cars
of car_car_t
with object oid (license)
as
select license
, category
, year
, brand
, model
, city
, country
from cars

Typical of an object view is that you denote on which object type the view is based and what the object identifier (oid) is. That is actually the attribute or set of attributes that counts as the primary key of the object.

You could query this view as a regular view, but the strength is in the ability to fetch a row in the form of an object. This is done using the function 'value':
declare
  l_car car_car_t;
begin
  select value(t)
  into   l_car
  from   car_ov_cars t
  where  license = '79-JF-VP';
  l_car.print;
end;

This delivers you an object instance from the view without any hassle. Very handy if you're using the objects extensively.
References
When you have an extensive object model, you might run into objects with one or more collections as attributes. Those collections can in turn hold multiple instances of other object types. This can become quite memory intensive. Besides that, you can run into the need to implement circular references. For example, a department has a manager who is an employee him/herself and directs one or more other employees. You might want to model that as an employee with an attribute typed as a collection type based on the employee type. In such cases it can be convenient to have a looser coupling between objects.

For that, the concept of References was called into life. In fact, a reference is nothing more than a pointer to another object instance, and that uses less memory than a new object instance. You can refer to an object instance in an object table or an object view, and that's where the object identifier from the previous paragraph comes in handy.

A collection of references is defined as:
create or replace type car_cars_ref_t as table of ref car_car_t;

You can populate this with the make_ref function:
declare
  l_cars car_cars_ref_t;
  l_car  car_car_t;
begin
  -- Build up the collection with references
  select cast(multiset(select make_ref( car_ov_cars
                                      , cae.car_license
                                      )
                       from carsavailable cae) as car_cars_ref_t)
  into   l_cars
  from   dual;
  -- Process the collection
  if l_cars.count > 0
  then
    for l_idx in l_cars.first..l_cars.last
    loop
      dbms_output.put_line( 'Car '||l_idx||': ');
      -- Fetch the object value based on the reference
      select deref(l_cars(l_idx))
      into   l_car
      from   dual;
      -- Print the object
      l_car.print;
    end loop;
  end if;
end;

Here you see that make_ref needs a reference to an object view and the particular object identifier. The underlying query then delivers references to the objects that need to be processed. That query can be different from the query of the object view.

What it actually means is that you first determine which objects are to be taken into account. For those objects you determine a reference/pointer based on the object view. And then, at a later stage, you can get the actual instance using the reference.

The latter is done using the deref function. This deref function expects a reference and delivers the actual object instance. Deref is only available as a SQL function, by the way; you cannot use it in Pl/Sql directly. Under the hood, a 'select deref()' query is translated to a select on the object view.

It is important, then, to design your object model and object view in such a way that the actual query on that object view is indexed properly. Experience teaches that it can be quite difficult to determine why the optimizer does or doesn't use the index with derefs. In that respect the deref is a nasty abstraction.

The ref that you see in the ref-collection declaration can be used in the declaration of attributes as well. When you want to use an object as an attribute in another object, for instance an object car in the object garage, you can use the keyword ref to denote that you don't want the object itself but a reference:
create or replace type car_garage_t as object
(
car ref car_car_t
)

Then there is also a ref function that creates references to separate objects:
select ref(car) reference
, license
from car_ov_cars car

This function is actually the counterpart of the value function.
The difference between the functions ref and make_ref is that 'ref' gets the object for which a reference must be determined as a parameter. Make_ref, however, is based on an object view or object table and determines the reference based upon the primary key or object-id in the object view or table.

The ref function is used when you need to create a reference to an object that is a direct result of a query on the object view. But if you want to determine the primary keys of the objects you want to process based upon a query on other tables and/or views, then make_ref comes in handy. Because then you deliver the primary keys of the objects to process separately, and make_ref uses the object view and the primary-key values to determine the references.
Map and Order methods
Now sometimes you need to order objects. Which one is bigger or smaller, and how do I sort them? Obviously this is important when comparing objects, but also when querying object views and object tables.
For the comparison of objects you can create a map-method:
map member function car_size
  return number
is
begin
  return 1000; -- or a calculation based on the contents of the car, or the price or fuel consumption
end;

In the implementation you can do a calculation on the attributes of the object. The result needs to be of a scalar datatype (number, date, varchar2) and 'normative' for the object with regard to other objects of the same object type. The map method can then be used by Oracle to do comparisons like l_car1 > l_car2, and comparisons that are implied in select clauses such as DISTINCT, GROUP BY and ORDER BY. Imagine how compact your code can be if you implement methods like these.
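For instance, once the map method is added to car_car_t, you should be able to sort whole object instances directly (a small sketch, assuming the map method has been compiled into the type):

select value(t)
from   car_ov_cars t
order by value(t);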

You can also make use of an Order method:
order member function car_order(p_car car_car_t)
  return number
is
  l_order   number := 0;
  c_smaller constant number := -1;
  c_larger  constant number := 1;
begin
  if license < p_car.license
  then
    l_order := c_smaller;
  elsif license > p_car.license
  then
    l_order := c_larger;
  end if;
  return l_order;
end;

The difference with the map method is that the map method returns a value that only has meaning for the object itself; its only (implicit) parameter is the 'self' object. Oracle determines the results of the map method for the two objects to be compared and then compares those two results. With the order method, Oracle provides one object as a parameter to the order method of the other object. Therefore the order method always needs an extra parameter besides the implicit self parameter. In the function's implementation you code the comparison between those two objects yourself, and that can be a lot more complex than the example above. You return a negative value if the self object is smaller than the provided object and a positive value if the self object turns out larger. A value of 0 denotes equality of the two objects. The order method is used for l_car1 > l_car2 comparisons and always needs to have a numeric return datatype.

An object type can have at most one map method or one order method, but not both.
Conclusion
Maybe it dazzles you by now. But if you got through to here, then I'm impressed. It might seem like a bit of boring stuff, and it might seem quite devious when you start with it. Most functionality you need to build can be done in the Oracle 7 way. But certain solutions can become a lot more powerful if you build them using object types. I have used them gratefully for years now. But then, I am someone who likes to solve a similar problem in a different way the next time it comes around.

Because of object types, Pl/Sql becomes a lot more powerful, and it provides you with more handles to solve some nasty performance problems, or pieces of functionality that really aren't solvable in the Oracle 7 way.

And as said in the intro: Oracle Types are really a game-changer for SOA Suite and Service Bus integrations with the Database Adapter, because using a hierarchy of objects you'll be able to fetch a complete piece of the database with one database call. I even created a type-generation framework (I called it Darwin-Hibernate) that can create types based on the data model in the database. It creates constructors and collection objects over foreign keys, which allows you to instantiate a complete structure based on the constructor of the top-level object: for instance a Patient with all its medical records, addresses, etc.

All the examples already work with Oracle 9i. But under 10g, 11g and 12c they will run a lot smoother and faster because of the performance optimizations of the Pl/Sql engine (Oracle 9i was not quite a performance topper).

This wasn't really a story about Object Oriented Pl/Sql, actually. I didn't talk about super- and subtypes; you can read about that in Chapter 12 of the Pl/Sql User's Guide and Reference of Oracle 10g (I really ran into that page when googling for it...). Or this page in 11g.
But I wanted to get you started with Object Types, and show you what you can do with them and how powerful Pl/Sql has become with them.

For some more advanced stuff you can read my earlier article about type inheritance, regarding EBS, but interesting enough for non-EBS developers. And another one. And yet another one.

Have fun with Pl/Sql (you might think by now that I really feel Pl/Sql needs this uplift), because I think with Object Types Pl/Sql is really fun. The scripts and datamodel for the examples can be found here.

Servicebus Overview diagrams in 12cR2 not opened for upgraded services

Wed, 2016-06-15 09:24
In ServiceBus 12c you get a 'composite'-like service overview for your project. It shows you how the proxy services (like Exposed Services in SOA Suite) are 'wired' via pipelines to business services (like Referenced Services). This is nice!

If you upgrade a project from 11g or 12cR1 (12.1.3) to 12cR2 (12.2.1) this fails. Initially you might see a (correct) diagram, but after restarting JDeveloper it is empty. You'll get a ClassCastException:
java.lang.ClassCastException: oracle.tip.tools.ide.fabric.addin.CompositeNode cannot be cast to oracle.sb.tooling.ide.sca.internal.sca.SbCompositeNode
at
oracle.sb.tooling.ide.sca.internal.sync.CompositeEditorListener.editorOpened(CompositeEditorListener.java:72)

Also, after opening an earlier upgraded project, OSB diagrams fail to open and instead generate this java.lang.ClassCastException.
I searched on support.oracle.com and found this document: 'Unable to open OSB Diagrams upgraded from 12.1.3 to 12.2.1 (Doc ID 2124208.1)'.
It refers to the patch:
  • 22226040: java.lang.NullPointer for XQuery File ver 1.0 in JDEV 12.2.1 OSB Proj
Apply this patch (for instance via my patch script from my previous article). The document suggests backing up and removing the user-data folder of your JDeveloper QuickStart installation. I found that not necessary, but you do need to re-import the projects from the earlier versions. I would empty the .data folders at application and project level before starting JDeveloper again, then delete the upgraded projects and re-import them. Now, after restarting JDeveloper, the overviews should remain.

Automatic Patching of SOA/BPM QuickStarts

Wed, 2016-06-15 06:14
Earlier I wrote how to automatically install the SOA/BPM QuickStarts. Actually, I'm quite busy with doing automatic/scripted installs for SOA/BPM Suite and OSB, as you might have read.

At my current customer we found that in the last months many one-off patches have been released on support.oracle.com. We selected a pretty large bunch of patches, and applying them one by one is a tedious job. The thing with these automatic installs is that you want a uniform installation for each developer, so each developer should have the same patches installed, in the same location. And you probably want to be able to quickly do a re-install to a uniform setup.

So I figured out how to do a silent install of the patches and to do this in a loop.

Out of the selected patches I found six categories:
  • 001: JDeveloper patches, where only a few we found possibly applicable for the SOA/BPM QuickStarts
  • 002: ServiceBus related patches
  • 003: SOA Suite merged patches
  • 004: SOA Suite related other patches
  • 005: BPM Suite merged patches
  • 006: BPM Suite related patches
So I divided the patches over those six folders ('001', '002', ..., '006'). I numbered them with 3-digit folder names, since my scripting needs to figure out the patch number from the path name: each patch is a zip that unpacks to a sub-folder named with only the patch number (without the 'p'). So the path lengths needed to be equal (hence not 'jdev' and 'soa' or 'osb').

I have one main script called 'installQSPatches.bat' that loops over the files in each sub-folder:
@echo off
rem check SOA12.2 QS
setlocal
set FMW_HOME=C:\oracle\JDeveloper\12210_BPMQS
set ORACLE_HOME=%FMW_HOME%
set SOA_PATCH_SOURCE=SOA
set SOA_PATCH_HOME=%FMW_HOME%\Opatch\patches
set CUR_DIR=%~dp0
echo Current Dir: %CUR_DIR%
if exist "%FMW_HOME%" goto :SOAQS_HOME_EXISTS
echo %FMW_HOME% not installed yet! Install first!
goto :DONE
:SOAQS_HOME_EXISTS
echo %FMW_HOME% exists, install Patches
echo ____________________________________________________
call %FMW_HOME%\wlserver\server\bin\setWLSEnv.cmd
echo ____________________________________________________
:JDEV_PATCHES
echo -
echo JDeveloper Patches
echo ____________________________________________________
for %%f in (001\*.zip) do (
echo %%f
call applyPatch %%f
)
:SB_PATCHES
echo -
echo ServiceBus Patches
echo ____________________________________________________
for %%f in (002\*.zip) do (
echo %%f
call applyPatch %%f
)
:SOA_MERGE_PATCHES
echo -
echo SOA Suite Merged Patches
echo ____________________________________________________
for %%f in (003\*.zip) do (
echo %%f
call applyPatch %%f
)
:SOA_PATCHES
echo -
echo SOA Suite Patches
echo ____________________________________________________
for %%f in (004\*.zip) do (
echo %%f
call applyPatch %%f
)
:BPM_MERGE_PATCHES
echo -
echo BPM Suite Merged Patches
echo ____________________________________________________
for %%f in (005\*.zip) do (
echo %%f
call applyPatch %%f
)
:BPM_PATCHES
echo -
echo BPM Suite Patches
echo ____________________________________________________
for %%f in (006\*.zip) do (
echo %%f
call applyPatch %%f
)
:DONE
echo Done installing patches
endlocal
You need to shut down JDeveloper and the Integrated WebLogic before starting this script.
And it needs to be started in an elevated (run as Administrator) command window.
 
For each patch it calls the applyPatch.bat script:
set PATCH=%1
rem %PATCH% looks like 001\p<patchnr>_... : skip the first 5 characters ('001\p') and take the 8-digit patch number
set PATCH_NR=%PATCH:~5,8%
echo ____________________________________________________
echo Check Patch %PATCH% for patch nr %PATCH_NR%
if exist "%FMW_HOME%\Opatch\patches\%PATCH_NR%" goto :PATCH_EXISTS
echo "%FMW_HOME%\Opatch\patches\%PATCH_NR%" does not exist.
set SOA_PATCH_HOME=%FMW_HOME%\Opatch\patches
rem set SOA_PATCH_HOME=c:\temp\patches
echo .. Unzip %SOA_PATCH_SOURCE%\%PATCH% to %SOA_PATCH_HOME%
call ant -f ant-zip.xml unzip -Dzip-file=%PATCH% -Dunzip-destination=%SOA_PATCH_HOME%
cd %ORACLE_HOME%\Opatch\patches\%PATCH_NR%
echo .. Apply %ORACLE_HOME%\Opatch\patches\%PATCH_NR%
call %ORACLE_HOME%\Opatch\opatch apply -silent
cd %CUR_DIR%
goto :DONE
:PATCH_EXISTS
echo Patch %PATCH_NR% already exists!
:DONE
echo Done for patch %PATCH_NR%
echo ____________________________________________________

This one figures out the patch number (%PATCH_NR%) based on the patch file name, and checks whether that patch already exists in the %FMW_HOME%\Opatch\patches folder. If not, it unzips the patch file to that folder, resulting in a sub-folder named with the patch number. Then it applies the patch using OPatch in silent mode.
For the unzip it uses a simple ANT build file, since the Windows command prompt does not support a command-line unzip (as far as I could find). The ANT script is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<project name="zip" default="zip" basedir=".">

<target name="zip">
<zip destfile="${zip-file}.zip" basedir="${folder-to-zip}" excludes="dont*.*" />
</target>

<target name="unzip">
<unzip src="${zip-file}" dest="${unzip-destination}" />
</target>

</project>

When I finished this I thought I should convert it to a complete ANT script. But maybe later.

The selected patches for 12.2.1:
The selected JDeveloper patches (sub-folder 001) were:
  • 22283405    JDEV 12.2.1 - NULLPOINTEREXCEPTION ENCOUNTERED (Patch)
  • 23266774    VALIDATION ERRORS WHEN CLICKING ON MANDATORY FIELDS IN JDEV12.2.1 (Patch)
  • 22463346    NOT STRESS SOA: MULTIPLE ERROR MESSAGES OF DEFINITIONMANAGER.LOCKINGLOGGER (Patch)
  • 21890657    CREATE REST THROWS ERROR 500 FOR REFERENCED ENTITY (Patch)
These are not all the possible JDeveloper patches, and they may not even apply to developing SOA/BPM or SB processes or services.

The selected Service Bus patches (sub-folder 002) were:
  • 23223332 Need to provide a holistic solution in main line for Bug 22887808
  • 21824551 NPE while trying to read OWSM keystore
  • 21168191 UnsupportedOperationException from ServiceAccountRuntimeCache
  • 23184618 Http Transport throws NullPointerException
  • 21827583 Deploying existing .sbar with Maven
  • 22738111 OSB Java Call out method's not visible from JDeveloper
  • 20119834 need to trim headers size if size exceeds 998 characters.
  • 22358699 OSB12C fn-bea:inlinedXML does not work properly
  • 22374613 In 12.2.1 the OSB Projet pom files still says 12.1.3
  • 20196110 JDev OSB Extension has missed export Split-Join and Proxy Flow as png
  • 22187224 OSB 12.2.1 - MessageID changes between request and response messages
  • 22392646 Maven could not be used in Jdev in 12.2.1.0.0
  • 22602059 12C: OSB pipeline based on XML - $body structure incomplete - missing node
  • 22276364 12C: OSB pipeline based on XML - $body structure unavailable
  • 21659900 OSB removes WSA headers on outbound request 
 The selected SOA Suite merged patches (sub-folder 003) were:
  • 23543517 MERGE REQUEST ON TOP OF 12.2.1.0.0 FOR BUGS 23527297 22875806 22995356 23062804
  • 23138916 MERGE REQUEST ON TOP OF 12.2.1.0.0 FOR BUGS 21549249 21572567
  • 23106839 MERGE REQUEST ON TOP OF 12.2.1.0.0 FOR BUGS 21826430 22912570
  • 23134140 Diagnostic Tracking Bug for Bug 23108573 v2
The selected SOA Suite regular patches (sub-folder 004) were:
  • 23056585 XQuery transformation is not showing all the types of XSD in the design view
  • 21904101 Error when running ValidateComposite to project with bpel calling HWF
  • 23205706 MALFORMEDURLEXCEPTION EXCEPTION ON JDEV 12.2.1.0.0 - BAM 12C IDE CONNECTION
  • 23193066 BPEL polls from UMS Adapter results in too large CONVERSATION_ID error
  • 21698320 SELECTING ELEMENT IN LARGE DOCUMENT TAKES FREEZES 2+MINUTES
  • 23108573 TrackingContextProperty causes ClassCastException in SOA 12c Spring Composite
  • 21925552 SOA Maven Plugin requires deploying sar to server when running mvn install
  • 23186275 JDeveloper 12.2.1 JMS Adapter not displaying Elements from the imported xsd file
  • 22337707 Naming conflict with EDN event subscribers from single Mediator component
  • 23052343 16.2.3 : WS JOB REQUEST ENDS UP IN ERROR STATE WHEN INVOKED VIA OTD URL
  • 22026475 JDEV : Adapter : Nullpointerexception while creating the BPEL Process
  • 22815366 Unable to use WSDL containing an unsupported Notification Operation portType
  • 22978098 Indicators configured on Response payload coming from a DB ADapter do not work
  • 21835972 SFTP FileListing not work as expected. Need similiar bug fix as in bug 21176154
  • 22648699 SOA Suite12.2.1-JCA files not getting updated with Configuration Plan values
  • 22300448 JDeveloper doesn't add a libraries file group to existing SOA deployment profile
  • 16548396 Can not emulate fault from the SOA Test UI
Of these, patch 21925552 conflicts with 21904101, and 23193066 conflicts with 23108573. So you should choose which one of each pair you want to apply.

For BPM (sub-folder 005) the following merged patches were selected:
  • 22571194 NPE in o.bpm.project.sca.loader.impl.ElementContainer:82
  • 23283093 NPE in o.bpm.project.sca.loader.impl.ElementContainer:198
For BPM (sub-folder 006) the following patches were selected:
  • 22191778 In the Business Architecture Modeling links section, buttons are missing
  • 21325503 12c compilation err-crm: oracle.bpm.services : extncontentpublicmodeleventaction
  • 22736087 BPM Workspace not refreshing the task list after Initiator task page is closed
  • 22018713 MeasurementPublisher has been optimized
  • 22217468 Refresh is not working properly in Impact Analysis Report - 500 error
  • 22018703 MIAuditLog has been optimized
  • 22690780 workspace Title & logo unchanged on loginpage after patch 22272135
  • 19536412 Case Service handles namespaces incorrectly in SOAP XML messages
  • 22348182 EDG RC3a :Task comments cannot be added
  • 22087208 When approving a task it immediately opens the next task in your tasklist
  • 22753983 12c e-manager authorization issues - List view
  • 22111729 BPM instance is not released after completing an HT ith the voting patern
  • 22581888 Configure global option to enable task details to appear as pop-up window in 12c
  • 23077279 Composer crash when creating Org Unit or Application system for human task
  • 23125361 REST hrefs for attachments containing spaces in their names do not work.
  • 23205514 Flexfields not properly displayed in parallel task with SDO
  • 23301503 weblogic user or Administrator group shouldn't be necessary to login to composer
  • 23491602 Studio generated project_properties.wsdl missing the correlation mapping
  • 21792613 NOT STRESS BPM REST:get WebFormService - no valid constructor error 
Of these, 22581888 conflicts with 22087208, and 23205514 conflicts with 22348182.

Download the applicable patches for your situation and put them in the appropriate folder. If you use the SOA QuickStart instead of the BPM QuickStart, you should of course skip the BPM-related patches.

Scripted Domain Creation for SOA/BPM, OSB 12.2.1

Thu, 2016-06-09 02:53
Recently I blogged about the automatic install of SOASuite and ServiceBus 12.2.1. It catered for the installation of the binaries and the creation of a repository.

What it does not handle is the creation of a domain. The last few weeks I worked on a script to do that. It's based on a WLST script for 12.1.3 by Edwin Biemond, which was quite educational for me. But as noted: it was made for 12.1.3, and I wanted more control over what artefacts are created and how. So I gave it my own swing, also to adapt it to some wishes of my current customer.

Although some improvements are certainly thinkable, and will probably be made in future uses, for now it is ready to share with the world.

One of the changes I made was to divide the main function into sections, to make the structure of the script clear. I also moved some of the duplicate code and functional parts into separate functions.

Let me describe the sections first.

1. Create Base domain
The script starts with creating a base domain. It reads the default wls template 'wls.jar' and sets the log properties of the domain. It then adapts the AdminServer to:
  • change name as set in the property file: you can have your own naming convention.
  • change the listen address and port
  • Set default SSL settings
  • Set log file properties.
Then it sets the admin password on the domain, sets the server start mode and saves the domain.
It then creates boot.properties files for the nodemanager and the AdminServer, and sets the password of the NodeManager. Finally it sets the Applications home folder.
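Condensed, the heart of this first section looks something like the following sketch (offline WLST; not the complete script, and the template path is the 12.2.1 default as far as I know, so verify it against your installation):

readTemplate(fmwHome + '/wlserver/common/templates/wls/wls.jar')
# Rename and adapt the AdminServer
cd('/Servers/AdminServer')
set('Name', adminServerName)
set('ListenAddress', adminListenAddress)
set('ListenPort', int(adminListenPort))
# Set the admin password and the server start mode
cd('/Security/base_domain/User/weblogic')
cmo.setPassword(adminPwd)
setOption('ServerStartMode', 'prod')  # or 'dev', depending on the productionMode property
writeDomain(domainsHome + '/' + soaDomainName)
closeTemplate()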

2. Extending domain with templates
The second section extends the domain with templates. Another improvement I made is that you can select which components you want to add by toggling the appropriate 'xxxEnabled' switches in the property file, where xxx stands for the component (for instance 'soa', 'bpm', 'osb', 'bam', 'ess', etc.).

It supports the following components:
  • ServiceBus
  • SOA and BPM Suite and B2B
  • BAM
  • Enterprise Scheduler Service
Components such as  Managed File Transfer can be added quite easily.
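Extending boils down to one addTemplate() call per product template, guarded by the corresponding switch; a sketch (the template paths are the 12.2.1 defaults as I know them, so verify them against your installation):

readDomain(domainsHome + '/' + soaDomainName)
if osbEnabled == 'true':
  addTemplate(fmwHome + '/osb/common/templates/wls/oracle.osb_template.jar')
if soaEnabled == 'true':
  addTemplate(fmwHome + '/soa/common/templates/wls/oracle.soa_template.jar')
if bpmEnabled == 'true':
  addTemplate(fmwHome + '/soa/common/templates/wls/oracle.bpm_template.jar')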
3. DataSources
Section 3 takes care of pointing the datasources to the created repository, based on the repository user '{Prefix}_STB', via the 'LocalSvcTblDataSource' datasource. In the property file you need to set:
  • soaRepositoryDbUrl: jdbc connect string to the repository database
  • soaRepositoryDbUserPrefix=prefix used in the Repository creation 
  • soaRepositoryStbPwd=Password for the {Prefix}_STB user.
It will adapt the LocalSvcTblDataSource with these properties, to load the information on the other repository users and to create and adapt the datasources that are essential to the modules with which the domain is extended. These datasources are then made XA-enabled.
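The key call here is getDatabaseDefaults(), which derives the component datasources from the service table; a sketch (MBean paths as I used them, so verify them against your own domain):

cd('/JDBCSystemResource/LocalSvcTblDataSource/JdbcResource/LocalSvcTblDataSource/JDBCDriverParams/NO_NAME_0')
set('URL', soaRepositoryDbUrl)
set('PasswordEncrypted', soaRepositoryStbPwd)
cd('Properties/NO_NAME_0/Property/user')
cmo.setValue(soaRepositoryDbUserPrefix + '_STB')
# Derive and adapt the datasources of the other components from the service table
getDatabaseDefaults()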

4. Create UnixMachines, Clusters and Managed Servers
This section creates Machine definitions of type 'Unix', based on the properties:
  • server1Address=darlin-vce-db.darwin-it.local
  • server1Machine=darlin-vce-db
  • server2Enabled=true
  • server2Address=darlin-vce-db2.darwin-it.local
  • server2Machine=darlin-vce-db2
The machine denoted with 'server1Machine' is always created. The AdminServer is added to it, as well as all the first, default, managed servers. The machine denoted with the property 'server2Machine' is only created when 'server2Enabled' is set to true.

I realize that 'server' in this context might be a little confusing. In serverYAddress and  serverYMachine, I actually mean a server-host, not a managed or admin server.

 For each component to configure (soa, osb, etc.) a cluster, denoted with for instance osbClr or soaClr, is created.

When you extend the domain with SOA Suite or OSB, a managed server called 'soa_server1' or 'osb_server1' is automatically created, with the appropriate deployments targeted to it. In Edwin's script these are removed and new ones are created. I ran into problems with that and found it quite unnecessary, since we can simply rename the existing ones to the names given in the property file (denoted by soaSvr1, osbSvr1, etc.), as is done with the AdminServer. So I keep the already created servers, but rename them to the desired values.

These first servers are added to the appropriate cluster, which automatically re-targets the deployments to that cluster.

Then if enabled, as with osbSvr2Enabled or soaSvr2Enabled, etc., the particular 'second' servers are created and added to the particular cluster.

5. Add Servers to ServerGroups
New in 12c is the concept of ServerGroups. In 11g you had only one definition of USER_MEM_ARGS in setDomainEnv.sh/cmd, so it applied to every server (admin or managed) started using the start(Managed)Weblogic.sh/cmd scripts. In 12c the determination of USER_MEM_ARGS is done in a separate script: setStartupEnv.sh/cmd.
In that script the determination is based on so-called ServerGroups. This provides a means to differentiate the memory settings per server, something that was lacking in 11g.

So in this section all the Managed Servers and the AdminServer are added to a particular ServerGroup.
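To give an idea of the effect, below is a purely illustrative sketch of the kind of per-group differentiation that ends up in the domain's bin/setStartupEnv.sh. Note that the variable names (SERVER_GROUP, USER_MEM_ARGS) and the structure are assumptions on my part, so check the generated setStartupEnv.sh in your own domain for the real content; the memory values simply mirror the soaJavaArgsBase and osbJavaArgsBase values from the example property file below.

# Illustrative sketch only, not the generated content: one block per server group.
if [ "${SERVER_GROUP}" = "SOA-MGD-SVRS" ] ; then
  USER_MEM_ARGS="-Xms1024m -Xmx1532m"
  export USER_MEM_ARGS
fi
if [ "${SERVER_GROUP}" = "OSB-MGD-SVRS-COMBINED" ] ; then
  USER_MEM_ARGS="-Xms1024m -Xmx1024m"
  export USER_MEM_ARGS
fi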
6. Create boot properties files
Lastly, for each created managed server a boot.properties file with the username and password is created. Smart: I used to do this every single time by hand...

The example property file
Here's an example of the property file:
#############################################################################
# Properties for creating the SOA domain
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-15
#
#############################################################################
#
fmwHome=/u01/app/oracle/FMW12210
#
soaDomainName=osb_domain
domainsHome=/u01/app/work/domains
applicationsHome=/u01/app/work/applications
productionMode=true
#
# Server Settings
server1Address=darlin-vce-db.darwin-it.local
server1Machine=darlin-vce-db
server2Enabled=true
server2Address=darlin-vce-db2.darwin-it.local
server2Machine=darlin-vce-db2
#
# Properties for AdminServer
adminServerName=AdminServer
adminListenAddress=darlin-vce-db
adminListenPort=7001
adminJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1532m
# Properties for OSB
osbEnabled=true
osbJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m
osbClr=OsbCluster
osbSvr1=OsbServer1
osbSvr1Port=8011
osbSvr2Enabled=true
osbSvr2=OsbServer2
osbSvr2Port=8012
# Properties for SOA
soaEnabled=true
bpmEnabled=true
b2bEnabled=true
soaJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=752m -Xms1024m -Xmx1532m
soaClr=SoaCluster
soaSvr1=SoaServer1
soaSvr1Port=8001
soaSvr2Enabled=true
soaSvr2=SoaServer2
soaSvr2Port=8002
# Properties for ESS
essEnabled=true
essJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m
essClr=essCluster
essSvr1=EssServer1
essSvr1Port=8021
essSvr2Enabled=true
essSvr2=EssServer2
essSvr2Port=8022
# Properties for BAM
bamEnabled=true
bamJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1532m
bamClr=BamCluster
bamSvr1=BamServer1
bamSvr1Port=9001
bamSvr2Enabled=true
bamSvr2=BamServer2
bamSvr2Port=9002
# AdminUser
adminUser=weblogic
adminPwd=welcome1
# SoaRepository Settings
soaRepositoryDbUrl=jdbc:oracle:thin:@darlin-vce-db.darwin-it.local:1521/pdborcl
soaRepositoryDbUserPrefix=DEV
soaRepositoryStbPwd=DEV_STB
# Logs
logsHome=/u01/app/work/logs
fileCount=10
fileMinSize=5000
fileTimeSpan=24
rotationType=byTime
#
# Settings
webtierEnabled=false
jsseEnabled=false

Save it with a name like darlin-vce-db.properties, but adapted for each particular environment.
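To run the script with such a property file, call it through WLST from the Fusion Middleware home, as the usage() function in the script below also shows. For example (the FMW home follows the example property file; the paths to the script and property file are of course placeholders for your own locations):

# run the domain creation script against a specific environment property file
cd /u01/app/oracle/FMW12210/oracle_common/common/bin
./wlst.sh /path/to/scripts/createSoaBpmDomain.py -loadProperties /path/to/scripts/darlin-vce-db.properties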

The script
(And of course I don't mean the band that my daughter likes...)
#############################################################################
# Create a SOA/BPM/OSB domain
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-09
#
#############################################################################
# Modify these values as necessary
import sys, traceback
# java.io.File is used by the boot.properties helper functions below
from java.io import File
scriptName = 'createSoaBpmDomain.py'
#
#Home Folders
wlsHome = fmwHome+'/wlserver'
soaDomainHome = domainsHome+'/'+soaDomainName
soaApplicationsHome = applicationsHome+'/'+soaDomainName
#
# Templates for 12.1.3
#wlsjar =fmwHome+'/wlserver/common/templates/wls/wls.jar'
#oracleCommonTplHome=fmwHome+'/oracle_common/common/templates'
#wlservicetpl=oracleCommonTplHome+'/oracle.wls-webservice-template_12.1.3.jar'
#osbtpl=fmwHome+'/osb/common/templates/wls/oracle.osb_template_12.1.3.jar'
#applCoreTpl=oracleCommonTplHome+'/wls/oracle.applcore.model.stub.1.0.0_template.jar'
#soatpl=fmwHome+'/soa/common/templates/wls/oracle.soa_template_12.1.3.jar'
#bamtpl=fmwHome+'/soa/common/templates/wls/oracle.bam.server_template_12.1.3.jar'
#bpmtpl=fmwHome+'/soa/common/templates/wls/oracle.bpm_template_12.1.3.jar'
#essBasicTpl=oracleCommonTplHome+'/wls/oracle.ess.basic_template_12.1.3.jar'
#essEmTpl=fmwHome+'/em/common/templates/wls/oracle.em_ess_template_12.1.3.jar'
#ohsTpl=fmwHome+'/ohs/common/templates/wls/ohs_managed_template_12.1.3.jar'
#b2bTpl=fmwHome+'/soa/common/templates/wls/oracle.soa.b2b_template_12.1.3.jar'
#
# Templates for 12.2.1
wlsjar =fmwHome+'/wlserver/common/templates/wls/wls.jar'
oracleCommonTplHome=fmwHome+'/oracle_common/common/templates'
wlservicetpl=oracleCommonTplHome+'/wls/oracle.wls-webservice-template.jar'
osbtpl=fmwHome+'/osb/common/templates/wls/oracle.osb_template.jar'
applCoreTpl=oracleCommonTplHome+'/wls/oracle.applcore.model.stub_template.jar'
soatpl=fmwHome+'/soa/common/templates/wls/oracle.soa_template.jar'
bamtpl=fmwHome+'/soa/common/templates/wls/oracle.bam.server_template.jar'
bpmtpl=fmwHome+'/soa/common/templates/wls/oracle.bpm_template.jar'
essBasicTpl=oracleCommonTplHome+'/wls/oracle.ess.basic_template.jar'
essEmTpl=fmwHome+'/em/common/templates/wls/oracle.em_ess_template.jar'
ohsTpl=fmwHome+'/ohs/common/templates/wls/ohs_managed_template.jar' # need to be validated!
b2bTpl=fmwHome+'/soa/common/templates/wls/oracle.soa.b2b_template.jar' # need to be validated!
#
# ServerGroup definitions
adminSvrGrpDesc='WSM-CACHE-SVR WSMPM-MAN-SVR JRF-MAN-SVR'
adminSvrGrp=["WSM-CACHE-SVR" , "WSMPM-MAN-SVR" , "JRF-MAN-SVR"]
essSvrGrpDesc="ESS-MGD-SVRS"
essSvrGrp=["ESS-MGD-SVRS"]
soaSvrGrpDesc="SOA-MGD-SVRS"
soaSvrGrp=["SOA-MGD-SVRS"]
bamSvrGrpDesc="BAM12-MGD-SVRS"
bamSvrGrp=["BAM12-MGD-SVRS"]
osbSvrGrpDesc="OSB-MGD-SVRS-COMBINED"
osbSvrGrp=["OSB-MGD-SVRS-COMBINED"]
#
#
lineSeperator='__________________________________________________________________________________'
#
#
def usage():
    print 'Call script as: '
    print 'Windows: wlst.cmd '+scriptName+' -loadProperties localhost.properties'
    print 'Linux: wlst.sh '+scriptName+' -loadProperties environment.properties'
    print 'Property file should contain the following properties: '
    print "adminUrl='localhost:7101'"
    print "adminUser='weblogic'"
    print "adminPwd='welcome1'"
#
# Create a boot properties file.
def createBootPropertiesFile(directoryPath,fileName, username, password):
    print ('Create Boot Properties File for folder: '+directoryPath)
    print (lineSeperator)
    serverDir = File(directoryPath)
    bool = serverDir.mkdirs()
    fileNew=open(directoryPath + '/'+fileName, 'w')
    fileNew.write('username=%s\n' % username)
    fileNew.write('password=%s\n' % password)
    fileNew.flush()
    fileNew.close()
#
# Create Startup Properties File
def createAdminStartupPropertiesFile(directoryPath, args):
    print 'Create AdminServer Boot Properties File for folder: '+directoryPath
    print (lineSeperator)
    adminserverDir = File(directoryPath)
    bool = adminserverDir.mkdirs()
    fileNew=open(directoryPath + '/startup.properties', 'w')
    args=args.replace(':','\\:')
    args=args.replace('=','\\=')
    fileNew.write('Arguments=%s\n' % args)
    fileNew.flush()
    fileNew.close()
#
# Set Log properties
def setLogProperties(logMBeanPath, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan):
    print '\nSet Log Properties for: '+logMBeanPath
    print (lineSeperator)
    cd(logMBeanPath)
    print ('Server log path: '+pwd())
    print '. set FileName to '+logFile
    set('FileName' ,logFile)
    print '. set FileCount to '+str(fileCount)
    set('FileCount' ,int(fileCount))
    print '. set FileMinSize to '+str(fileMinSize)
    set('FileMinSize' ,int(fileMinSize))
    print '. set RotationType to '+rotationType
    set('RotationType',rotationType)
    print '. set FileTimeSpan to '+str(fileTimeSpan)
    set('FileTimeSpan',int(fileTimeSpan))
#
#
def createServerLog(serverName, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan):
    print ('\nCreate Log for '+serverName)
    print (lineSeperator)
    cd('/Server/'+serverName)
    create(serverName,'Log')
    setLogProperties('/Server/'+serverName+'/Log/'+serverName, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan)
#
# Change DataSource to XA
def changeDatasourceToXA(datasource):
    print 'Change datasource '+datasource
    print (lineSeperator)
    cd('/')
    cd('/JDBCSystemResource/'+datasource+'/JdbcResource/'+datasource+'/JDBCDriverParams/NO_NAME_0')
    set('DriverName','oracle.jdbc.xa.client.OracleXADataSource')
    print '. Set UseXADataSourceInterface='+'True'
    set('UseXADataSourceInterface','True')
    cd('/JDBCSystemResource/'+datasource+'/JdbcResource/'+datasource+'/JDBCDataSourceParams/NO_NAME_0')
    print '. Set GlobalTransactionsProtocol='+'TwoPhaseCommit'
    set('GlobalTransactionsProtocol','TwoPhaseCommit')
    cd('/')
#
#
def createCluster(cluster):
    print ('\nCreate '+cluster)
    print (lineSeperator)
    cd('/')
    create(cluster, 'Cluster')
#
# Create a Unix Machine
def createUnixMachine(serverMachine,serverAddress):
    print('\nCreate machine '+serverMachine+' with type UnixMachine')
    print (lineSeperator)
    cd('/')
    create(serverMachine,'UnixMachine')
    cd('UnixMachine/'+serverMachine)
    create(serverMachine,'NodeManager')
    cd('NodeManager/'+serverMachine)
    set('ListenAddress',serverAddress)
#
# Add server to Unix Machine
def addServerToMachine(serverName, serverMachine):
    print('\nAdd server '+serverName+' to '+serverMachine)
    print (lineSeperator)
    cd('/Servers/'+serverName)
    set('Machine',serverMachine)
#
# Determine the Server Java Args
def getServerJavaArgs(serverName,javaArgsBase,logsHome):
    javaArgs = javaArgsBase+' -Dweblogic.Stdout='+logsHome+'/'+serverName+'.out -Dweblogic.Stderr='+logsHome+'/'+serverName+'_err.out'
    return javaArgs
#
# Change Managed Server
def changeManagedServer(server,listenAddress,listenPort,javaArgs):
    print '\nChange ManagedServer '+server
    print (lineSeperator)
    cd('/Servers/'+server)
    print '. Set listen address and port to: '+listenAddress+':'+str(listenPort)
    set('ListenAddress',listenAddress)
    set('ListenPort' ,int(listenPort))
    # ServerStart
    print ('. Create ServerStart')
    create(server,'ServerStart')
    cd('ServerStart/'+server)
    print ('. Set Arguments to: '+javaArgs)
    set('Arguments' , javaArgs)
    # SSL
    cd('/Server/'+server)
    print ('. Create server SSL')
    create(server,'SSL')
    cd('SSL/'+server)
    print ('. Set SSL Enabled to: '+'False')
    set('Enabled' , 'False')
    print ('. Set SSL HostNameVerificationIgnored to: '+'True')
    set('HostNameVerificationIgnored', 'True')
    #
    if jsseEnabled == 'true':
        print ('. Set JSSEEnabled to: '+ 'True')
        set('JSSEEnabled','True')
    else:
        print ('. Set JSSEEnabled to: '+ 'False')
        set('JSSEEnabled','False')
#
# Create a Managed Server
def createManagedServer(server,listenAddress,listenPort,cluster,machine,
                        javaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan):
    print('\nCreate '+server)
    print (lineSeperator)
    cd('/')
    create(server, 'Server')
    cd('/Servers/'+server)
    javaArgs=getServerJavaArgs(server,javaArgsBase,logsHome)
    changeManagedServer(server,listenAddress,listenPort,javaArgs)
    createServerLog(server, logsHome+'/'+server+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
    print('Add '+server+' to cluster '+cluster)
    cd('/')
    assign('Server',server,'Cluster',cluster)
    addServerToMachine(server, machine)
#
# Adapt a Managed Server
def adaptManagedServer(server,newSrvName,listenAddress,listenPort,cluster,machine,
                       javaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan):
    print('\nAdapt '+server)
    print (lineSeperator)
    cd('/')
    cd('/Servers/'+server)
    # rename the server to the name given in the property file
    print '. Rename '+server+' to '+ newSrvName
    set('Name',newSrvName )
    cd('/Servers/'+newSrvName)
    javaArgs=getServerJavaArgs(newSrvName,javaArgsBase,logsHome)
    changeManagedServer(newSrvName,listenAddress,listenPort,javaArgs)
    createServerLog(newSrvName, logsHome+'/'+newSrvName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
    print('Add '+newSrvName+' to cluster '+cluster)
    cd('/')
    assign('Server',newSrvName,'Cluster',cluster)
    addServerToMachine(newSrvName, machine)
#
# Change Admin Server
def changeAdminServer(adminServerName,listenAddress,listenPort,javaArguments):
    print '\nChange AdminServer'
    print (lineSeperator)
    cd('/Servers/AdminServer')
    # name of adminserver
    print '. Set Name to '+ adminServerName
    set('Name',adminServerName )
    cd('/Servers/'+adminServerName)
    # address and port
    print '. Set ListenAddress to '+ listenAddress
    set('ListenAddress',listenAddress)
    print '. Set ListenPort to '+ str(listenPort)
    set('ListenPort' ,int(listenPort))
    #
    # ServerStart
    print 'Create ServerStart'
    create(adminServerName,'ServerStart')
    cd('ServerStart/'+adminServerName)
    print '. Set Arguments to: '+javaArguments
    set('Arguments' , javaArguments)
    # SSL
    cd('/Server/'+adminServerName)
    print 'Create SSL'
    create(adminServerName,'SSL')
    cd('SSL/'+adminServerName)
    set('Enabled' , 'False')
    set('HostNameVerificationIgnored', 'True')
    #
    if jsseEnabled == 'true':
        print ('. Set JSSEEnabled to: '+ 'True')
        set('JSSEEnabled','True')
    else:
        print ('. Set JSSEEnabled to: '+ 'False')
        set('JSSEEnabled','False')
#
#
def main():
    try:
        #
        # Section 1: Base Domain + Admin Server
        print (lineSeperator)
        print ('1. Create Base domain '+soaDomainName)
        print('\nCreate base wls domain with template '+wlsjar)
        print (lineSeperator)
        readTemplate(wlsjar)
        #
        cd('/')
        # Domain Log
        print('Set base_domain log')
        create('base_domain','Log')
        setLogProperties('/Log/base_domain', logsHome+'/'+soaDomainName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
        #
        # Admin Server
        adminJavaArgs = getServerJavaArgs(adminServerName,adminJavaArgsBase,logsHome)
        changeAdminServer(adminServerName,adminListenAddress,adminListenPort,adminJavaArgs)
        createServerLog(adminServerName, logsHome+'/'+adminServerName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
        #
        print('\nSet password in '+'/Security/base_domain/User/weblogic')
        cd('/')
        cd('Security/base_domain/User/weblogic')
        # weblogic user name + password
        print('. Set Name to: ' +adminUser)
        set('Name',adminUser)
        cmo.setPassword(adminPwd)
        #
        if productionMode == 'true':
            print('. Set ServerStartMode to: ' +'prod')
            setOption('ServerStartMode', 'prod')
        else:
            print('. Set ServerStartMode to: ' +'dev')
            setOption('ServerStartMode', 'dev')
        #
        print('write Domain...')
        # write path + domain name
        writeDomain(soaDomainHome)
        closeTemplate()
        #
        createAdminStartupPropertiesFile(soaDomainHome+'/servers/'+adminServerName+'/data/nodemanager',adminJavaArgs)
        createBootPropertiesFile(soaDomainHome+'/servers/'+adminServerName+'/security','boot.properties',adminUser,adminPwd)
        createBootPropertiesFile(soaDomainHome+'/config/nodemanager','nm_password.properties',adminUser,adminPwd)
        #
        es = encrypt(adminPwd,soaDomainHome)
        #
        readDomain(soaDomainHome)
        #
        print('set Domain password for '+soaDomainName)
        cd('/SecurityConfiguration/'+soaDomainName)
        set('CredentialEncrypted',es)
        #
        print('Set nodemanager password')
        set('NodeManagerUsername' ,adminUser )
        set('NodeManagerPasswordEncrypted',es )
        #
        cd('/')
        setOption( "AppDir", soaApplicationsHome )
        #
        print('Finished base domain.')
        #
        # Section 2: Templates
        print('\n2. Extend Base domain with templates.')
        print (lineSeperator)
        print ('Adding Webservice template '+wlservicetpl)
        addTemplate(wlservicetpl)
        # SOA Suite
        if soaEnabled == 'true':
            print ('Adding SOA Template '+soatpl)
            addTemplate(soatpl)
        else:
            print('SOA is disabled')
        # BPM
        if bpmEnabled == 'true':
            print ('Adding BPM Template '+bpmtpl)
            addTemplate(bpmtpl)
        else:
            print('BPM is disabled')
        # OSB
        if osbEnabled == 'true':
            print ('Adding OSB template '+osbtpl)
            addTemplate(osbtpl)
        else:
            print('OSB is disabled')
        #
        print ('Adding ApplCore Template '+applCoreTpl)
        addTemplate(applCoreTpl)
        #
        if bamEnabled == 'true':
            print ('Adding BAM Template '+bamtpl)
            addTemplate(bamtpl)
        else:
            print ('BAM is disabled')
        #
        if webtierEnabled == 'true':
            print ('Adding OHS Template '+ohsTpl)
            addTemplate(ohsTpl)
        else:
            print('OHS is disabled')
        #
        if b2bEnabled == 'true':
            print 'Adding B2B Template '+b2bTpl
            addTemplate(b2bTpl)
        else:
            print('B2B is disabled')
        #
        if essEnabled == 'true':
            print ('Adding ESS Template '+essBasicTpl)
            addTemplate(essBasicTpl)
            print ('Adding ESS Em Template '+essEmTpl)
            addTemplate(essEmTpl)
        else:
            print('ESS is disabled')
        #
        dumpStack()
        print ('Finished templates')
        #
        # Section 3: Change Datasources
        print ('\n3. Change datasources')
        print 'Change datasource LocalSvcTblDataSource'
        cd('/JDBCSystemResource/LocalSvcTblDataSource/JdbcResource/LocalSvcTblDataSource/JDBCDriverParams/NO_NAME_0')
        set('URL',soaRepositoryDbUrl)
        set('PasswordEncrypted',soaRepositoryStbPwd)
        cd('Properties/NO_NAME_0/Property/user')
        set('Value',soaRepositoryDbUserPrefix+'_STB')
        #
        print ('Call getDatabaseDefaults which reads the service table')
        getDatabaseDefaults()
        #
        if soaEnabled == 'true':
            changeDatasourceToXA('EDNDataSource')
        if osbEnabled == 'true':
            changeDatasourceToXA('wlsbjmsrpDataSource')
        changeDatasourceToXA('OraSDPMDataSource')
        changeDatasourceToXA('SOADataSource')
        #
        if bamEnabled == 'true':
            changeDatasourceToXA('BamDataSource')
        #
        print 'Finished DataSources'
        #
        # Section 4: Create UnixMachines, Clusters and Managed Servers
        print ('\n4. Create UnixMachines, Clusters and Managed Servers')
        print (lineSeperator)
        cd('/')
        #
        createUnixMachine(server1Machine,server1Address)
        if server2Enabled == 'true':
            createUnixMachine(server2Machine,server2Address)
        #
        addServerToMachine(adminServerName,server1Machine)
        #
        cd('/')
        # SOA Suite
        if soaEnabled == 'true':
            createCluster(soaClr)
            adaptManagedServer('soa_server1',soaSvr1,server1Address,soaSvr1Port,soaClr,server1Machine,
                               soaJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
            if soaSvr2Enabled == 'true':
                createManagedServer(soaSvr2,server2Address,soaSvr2Port,soaClr,server2Machine,
                                    soaJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
            else:
                print('Do not create SOA Server2')
        #
        # OSB
        if osbEnabled == 'true':
            createCluster(osbClr)
            adaptManagedServer('osb_server1',osbSvr1,server1Address,osbSvr1Port,osbClr,server1Machine,
                               osbJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
            if osbSvr2Enabled == 'true':
                createManagedServer(osbSvr2,server2Address,osbSvr2Port,osbClr,server2Machine,
                                    osbJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
            else:
                print('Do not create OSB Server2')
        #
        # BAM
        if bamEnabled == 'true':
            createCluster(bamClr)
            adaptManagedServer('bam_server1',bamSvr1,server1Address,bamSvr1Port,bamClr,server1Machine,
                               bamJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
            if bamSvr2Enabled == 'true':
                createManagedServer(bamSvr2,server2Address,bamSvr2Port,bamClr,server2Machine,
                                    bamJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
            else:
                print('Do not create BAM Server2')
        #
        # ESS
        if essEnabled == 'true':
            createCluster(essClr)
            adaptManagedServer('ess_server1',essSvr1,server1Address,essSvr1Port,essClr,server1Machine,
                               essJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
            if essSvr2Enabled == 'true':
                createManagedServer(essSvr2,server2Address,essSvr2Port,essClr,server2Machine,
                                    essJavaArgsBase,fileCount,fileMinSize,rotationType,fileTimeSpan)
            else:
                print('Do not create ESS Server2')

        #
        print ('Finished creating Machines, Clusters and ManagedServers')
        #
        # Section 5: Add Servers to ServerGroups.
        print ('\n5. Add Servers to ServerGroups')
        print (lineSeperator)
        cd('/')
        print 'Add server groups '+adminSvrGrpDesc+ ' to '+adminServerName
        setServerGroups(adminServerName, adminSvrGrp)
        # SOA
        if soaEnabled == 'true':
            print 'Add server group '+soaSvrGrpDesc+' to '+soaSvr1+' and possibly '+soaSvr2
            setServerGroups(soaSvr1, soaSvrGrp)
            if soaSvr2Enabled == 'true':
                setServerGroups(soaSvr2, soaSvrGrp)
        #
        # OSB
        if osbEnabled == 'true':
            print 'Add server group '+osbSvrGrpDesc+' to '+osbSvr1+' and possibly '+osbSvr2
            setServerGroups(osbSvr1, osbSvrGrp)
            if osbSvr2Enabled == 'true':
                setServerGroups(osbSvr2, osbSvrGrp)
        #
        if bamEnabled == 'true':
            print 'Add server group '+bamSvrGrpDesc+' to '+bamSvr1+' and possibly '+bamSvr2
            setServerGroups(bamSvr1, bamSvrGrp)
            if bamSvr2Enabled == 'true':
                setServerGroups(bamSvr2, bamSvrGrp)
        #
        if essEnabled == 'true':
            print 'Add server group '+essSvrGrpDesc+' to '+essSvr1+' and possibly '+essSvr2
            setServerGroups(essSvr1, essSvrGrp)
            if essSvr2Enabled == 'true':
                setServerGroups(essSvr2, essSvrGrp)
        #
        print ('Finished ServerGroups.')
        #
        updateDomain()
        closeDomain();
        #
        # Section 6: Create boot properties files.
        print ('\n6. Create boot properties files')
        print (lineSeperator)
        # SOA
        if soaEnabled == 'true':
            createBootPropertiesFile(soaDomainHome+'/servers/'+soaSvr1+'/security','boot.properties',adminUser,adminPwd)
            if soaSvr2Enabled == 'true':
                createBootPropertiesFile(soaDomainHome+'/servers/'+soaSvr2+'/security','boot.properties',adminUser,adminPwd)
        #
        # OSB
        if osbEnabled == 'true':
            createBootPropertiesFile(soaDomainHome+'/servers/'+osbSvr1+'/security','boot.properties',adminUser,adminPwd)
            if osbSvr2Enabled == 'true':
                createBootPropertiesFile(soaDomainHome+'/servers/'+osbSvr2+'/security','boot.properties',adminUser,adminPwd)
        #
        if bamEnabled == 'true':
            createBootPropertiesFile(soaDomainHome+'/servers/'+bamSvr1+'/security','boot.properties',adminUser,adminPwd)
            if bamSvr2Enabled == 'true':
                createBootPropertiesFile(soaDomainHome+'/servers/'+bamSvr2+'/security','boot.properties',adminUser,adminPwd)
        #
        if essEnabled == 'true':
            createBootPropertiesFile(soaDomainHome+'/servers/'+essSvr1+'/security','boot.properties',adminUser,adminPwd)
            if essSvr2Enabled == 'true':
                createBootPropertiesFile(soaDomainHome+'/servers/'+essSvr2+'/security','boot.properties',adminUser,adminPwd)
        #
        print ('\nFinished')
        #
        print('\nExiting...')
        exit()
    except NameError, e:
        print 'Apparently properties not set.'
        print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
        usage()
    except:
        apply(traceback.print_exception, sys.exc_info())
        stopEdit('y')
        exit(exitcode=1)
#call main()
main()
exit()

Conclusion
As said, although I think this script is already quite adaptable using the property file, there are of course many possible improvements for your particular situation. It creates a 'skeleton' SOA or Service Bus domain, but you might need to adapt it for network topologies and security settings.
And although it creates a 'per domain' nodemanager configuration, you would need to adapt it for your particular needs to get the domain started. I only tested this by starting the Admin server using the startWeblogic.sh script.
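For reference, with the per-domain nodemanager and the folder layout of the example property file, a first start could look like the sketch below (the output file names are just my choice; adapt the paths to your own values):

# start the per-domain nodemanager and the AdminServer from the domain's bin folder
DOMAIN_HOME=/u01/app/work/domains/osb_domain
LOGS_HOME=/u01/app/work/logs
nohup $DOMAIN_HOME/bin/startNodeManager.sh > $LOGS_HOME/nodemanager.out 2>&1 &
nohup $DOMAIN_HOME/bin/startWebLogic.sh > $LOGS_HOME/AdminServer_console.out 2>&1 &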

Having such a script is such  a valuable asset: it allows you to (re-)create your domains repeatably in a standard way, ensuring that different environments (dev, test, acc, prod) are created similarly.

One last thing though: the script registers the creation of the domain, and thus the use of the datasources, in the repository. So you can't just throw away the domain and recreate it against the current repository; you'll need to recreate the repository as well.


BPEL Chapter 1: Hello World BPEL Project

Mon, 2016-05-30 08:50
Long ago, back in 2004/2005 when Oracle released Oracle BPEL 10.1.2 (and its predecessor, the globally available release of the rebranded Collaxa product) and in 2006 with the release of the first SOA Suite 10.1.3, you had a project per BPEL process. Each project was set up around the BPEL process file. Since 11g, BPEL is a component in the Service Component Architecture (SCA) and a project can contain multiple BPEL components together with other components such as Mediator, Rules, etc. In a later section I'll elaborate on the SCA setup, but for now I'll focus on the BPEL component. When I do, I'll probably edit this introduction.

I had to make this introduction because using 12c we have to create a SOA Application with a SOA Project for our first BPEL process.

I assume you have installed the SOA QuickStart. If not do so, for instance with the use of the silent install. That article describes the installation of the BPM QuickStart, but for SOA QuickStart it works exactly the same, but you might want to change the home folder from c:\oracle\JDeveloper\12210_BPMQS to c:\oracle\JDeveloper\12210_SOAQS.

Then start JDeveloper from the JDeveloper home. If you used the silent install option, then you can start JDeveloper via c:\oracle\JDeveloper\12210_BPMQS\jdeveloper\jdev\bin\jdev64W.exe.
Create SOA Application
After JDeveloper has started, click on the 'New Application' link, top right. You can also use the New icon or the File->New menu, option Application or From Gallery:
You'll get the following screen:



Choose SOA Application and click OK. This results in the following first page of the Create SOA Application wizard:
Name the application 'HelloWorld' and provide a Directory for the root folder for the application. For instance 'c:\Data\Source\HelloWorld'. And click Next.
Then provide a name for the project. For now you can use the same name as the application. The Project Directory should be adapted automatically. If not make sure it's a subfolder in the application's directory, and click Next:

In the last step of the wizard you can choose a Standard Composite (with a start component) or a Template for the composite. You could choose for a composite with a BPEL Process, but for now, let's choose an Empty Composite, since we need to create a schema first.

JDeveloper SOA Application Screen Overview
When you click Finish, your JDeveloper screen looks more or less like this:

Top left you'll find the Project Navigator that shows the generated artefacts that you'll find in the project folder. In the middle of the left panes you'll find a Resource Navigator which enables you to edit some specific property files in the SOA Application. But we'll leave them for the moment. Bottom left you'll find two tabs, one of which is the structure pane. This one comes in handy occasionally, but we'll get into that later as well.

In the middle you'll find the Composite Designer. We'll cover it later more extensively, but for now this is the start of assembling our application. Top Right you'll find a collapsed pane for the Components. You can expand it by clicking on it:
 
Top right you'll find a button to dock the pane. This makes the pane permanently visible, which is the default. But to free up space and have a larger portion of your screen available for the designer(s), you can collapse it using the minus icon in the same place as the Dock button:


To add components to the application you simply drag and drop them onto the appropriate pane.
Create an XML Schema
A BPEL process is in most cases exposed as a service. To do so you have to create a WSDL (Web Services Description Language) document. This is an interface contract for a service, described in XML. It defines the input and output of a service and the possible operations on the service. Each operation refers to its input and, possibly, its output message, and each message refers to the definition of that message. The message definition is done as an element in an XML Schema Definition (XSD). Although you can have it auto-generated, it's recommended to do a bottom-up definition of the contract. So let's start with the XSD.

Using the File->New->From Gallery menu choose 'XML Schema' and click OK:
 
Enter the details of the file: name it 'HelloWorld.xsd'. The Directory is per default the 'Schemas' folder within the project. Namespaces are important in a xsd and the wsdl, so give a proper namespace, like 'http://www.darwin-it.nl/xsd/HelloWorld' and provide a convenient prefix for it, like 'hwd':

This generates the following  xsd:
 
The xsd should contain an element for the request as well as the response message. First rename the 'exampleElement' to 'helloWorldRequestMessage'. If you click on the element, the name of the element should be highlighted. Then you can change the name by simply starting to type the new name. You can also bring up the properties pane and change the name there:


Change the name to helloWorldRequestMessage; I like elements to have a proper name, and in this case it defines a request message for the HelloWorld service. I use the Java standard camel case notation, where elements have a lowercase first character.

It's time to extend the XSD. Since we're in the XSD editor, the component palette has changed and contains the possible components to use in an XSD:

Drag and drop a new element from the palette to just under the helloWorldRequestMessage, until you see a line under the element and a plus sign at the mouse pointer:



Name the new element helloWorldResponseMessage.
In the same way add a complexType and name it HelloWorldRequestMessageType. I don't like implicit nameless complexTypes within elements, so I tend to always create named complexTypes, which I name the same as the corresponding elements, except for an uppercase first character and the suffix 'Type'.

Then add a sequence on the helloWorldRequestMessage:

Do the same for complexType HelloWorldResponseMessageType
Then in the HelloWorldRequestMessageType add a sequence:

And in the sequence drag and drop an element called 'name'.
Do the same for HelloWorldResponseMessageType, except name the element 'greeting'.

Now we need to set a type for the elements. Right-click on the name element and choose 'Set Type' from the context menu:


In the dropdown box choose xsd:string:

You can also set the type from the properties:

Set also the greeting element to xsd:string.

Then set the helloWorldRequestMessage to the type hwd:HelloWorldRequestMessageType:



And the helloWorldResponseMessage to the type hwd:HelloWorldResponseMessageType accordingly.
After saving it, the resulting xsd should look like:


Create the WSDL
Now we can create a WSDL based on the xsd. JDeveloper has a nice WSDL generator and editor.
The easiest way to generate or kickstart a wsdl is to drag and drop a SOAP service from the component palette:

to the Exposed Services lane on the canvas:
This opens the dialog for defining the SOAP Service. Name it HelloWorld. Then you can choose a WSDL or generate/define it from scratch based on the xsd. To do so click on the cog:

The name of the wsdl is pre-defined based on the SOAP service, but you can leave the suggestion. Do yourself a favor and provide a sensible namespace. I chose 'http://xmlns.darwin-it.local/xsd/HelloWorld/HelloWorldWS'. Think of a good convention based on a company base URL (I'll get back to it later, but for now: it's common convention to use a URI that does not need to, and in most cases even doesn't, refer to an actual existing internet address).
Set the Interface type to 'Synchronous Interface' and click the green plus icon to define a message for the Input:

There you can give the message part a name, for this you can also define a convention, but there is actually no real point in doing so. So you can leave it just to the suggested 'part1'. Click on the magnifier icon and look for the HelloWorld.xsd under Project Schema Files. Select the 'helloWorldRequestMessage' and click OK:

Back in the Add Message Part dialog, click OK again:
Back in the Create WSDL dialog, you can leave the names for Port Type and Operation at the proposed values. For this service 'execute' is quite a proper name, but in the future it is wise to think of a verb that gives a proper indication of what the operation does. Later, when I discuss WSDLs, you'll find that a WSDL can hold several operations and port types.

For now click OK again:

And also in the Create Web Service dialog click OK again:




What results in a HelloWorldWS Service in the Composite definition:

Create the HelloWorld BPEL process
At this point we're ready to add a BPEL Process. In the component palette look for the BPEL Process icon:

and drag&drop it on the Components lane in the Composite canvas:




This will bring up the following dialog.



Fill in the following properties:
  • Name: HelloWorldBPELProcess
  • Namespace: http://xmlns.darwin-it.nl/bpel/HelloWorld/HelloWorldBPELProcess
The namespace is not really important here, you could leave it at the suggested value. It's only used internally, not exposed since we're using a pre-defined wsdl to expose the service. However you can change it to a proper value like here to identify it as a BPEL created by you/your company. This can be sensible when using tools like Oracle Enterprise Repository. But especially when you generate the wsdl by exposing the BPEL process as a SOAP service.

Click on the import 'Find Existing WSDLs' icon to bring up the following WSDL search dialog:

Here you see that you can browse for WSDLs through various channels: look into an Application Server for already deployed services, search in the SOA-MDS (more on that later) and so on. But since we have the WSDL in our project, choose for the File System

Within the project navigate to the WSDLs folder and choose the HelloWorldWS.wsdl, and click OK.
This will bring us back to the Create BPEL Process dialog. Important here is to deselect the 'Expose as a SOAP service' checkbox, since we want to reuse the already defined SOAP service, instead of having a new service generated. In the case you missed this, and a new service is generated, you can delete it easily.

Now the only thing to do, before implementing the process, is to 'wire' the HelloWorldWS SOAP Service to the BPEL Process. Do so by picking up the '>' icon and dragging it over to the top '>' icon on the BPEL Process:


You'll see that a green line appears between the start icon, that gets a green circle around it, and the possible target icons.

Implement the BPEL process
Open the BPEL designer by right-clicking the BPEL component and choosing 'Edit', or by double-clicking it:





This opens the BPEL Designer with a skeleton BPEL process:




It shows a so-called 'Partner Link' to the left. There are two Partner Links lanes, one on each side of the process. They're equivalent: the designer picks a lane for each new Partner Link as it finds appropriate, but you can drag and drop them to the other side when it suits the drawing better. The so-called client Partner Link, however, is always shown on the left at the start, and it is common to keep it there.

Then you'll see two activities in the process in the main-process-scope: receiveInput  and replyOutput. The first receives the call and initiates the BPEL Process.

If you edit this activity, you'll be able to see and edit the name of the activity. It shows the input variable where the request document is stored. And very important the checkbox 'Create Instance' is checked. This means that this is the input activity that initiates a process instance. It is very important that this is the very first Receive or Pick activity in the BPEL process.

The Reply activity is the counterpart of the first Receive and is only applicable to a synchronous BPEL process. It responds to the calling Service Consumer with the contents of the outputVariable.

So in between we need to build up the contents of the outputVariable based up on the provided input.

Since we're in the BPEL Designer, you could have noticed that the Component Palette has changed. It shows the BPEL Constructs and Activities, and below those some other extension categories that I'll probably cover in a more advanced chapter:



Take the Assign activity:



and drag and drop it between the 'receiveInput' and 'replyOutput' activities:

When dragging the Assign activity, you'll notice the green add-bullets denoting the valid places where you could drop the activity. This results in:



You can right-click-and-edit or double-click the activity:



Which results in the Assign Editor. It opens on the Copy Rules tab, but first click on the General tab to edit the Name of the Assign:


Switch back to the Copy Rules Tab. On the right, locate the outputVariable and 'Expand All Child Nodes', by right-clicking it and choose the appropriate option:



Then drop the Expression icon (the right-most one in the button bar) on the greeting element of the variable:


This brings up the Expression Builder:



Here choose the concat() function under the category 'String Functions' under Functions. Place the cursor between the brackets and locate the name element under the helloWorldRequestMessage element under the inputVariable.part1. Edit the expression to prefix the name with 'Hello ' and suffix it with an exclamation mark. resulting in the expression:
concat( 'Hello ', $inputVariable.part1/ns2:name,'!')

This results in:




Here you see that an element of the inputVariable is used in a function to build up the greeting. For more complex XSDs you can add several copy rules, as many as needed, just by dragging and dropping elements onto their target elements. For now the assign is finished, so click OK.



Your process now looks like the example above. Save the process by clicking Save or Save All, the icons that look like old-school floppy disks.
Fire up the Integrated Weblogic
To be able to test the BPEL process, we'll need a SOA Server. At a real-life project, a proper server installation will (preferably and presumably) be provided to you and your team. This is recommended even for development, to be able to do integration tests for a complex chain of services and to work against enterprise databases and information systems.

But for relatively small unit tests like this, Oracle provides an integrated WebLogic in the SOA QuickStart topology of JDeveloper, with a complete SOA Suite and Service Bus. To start it, choose the menu option Run->Start Server Instance (IntegratedWeblogicServer):



The first time you do this, you'll have to provide a user id for the administrator (usually weblogic) and a corresponding password (often welcome1). Provide the credentials to your liking (and remember them):


When clicking OK, a log viewer pops up at the bottom that shows the server output log of the integrated weblogic server:




Wait until the message appears:

SOA Platform is running and accepting requests. Start up took 92300 ms
IntegratedWebLogicServer startup time: 363761 ms.
[IntegratedWebLogicServer started.]

Now the SOA Suite Server is available to receive the HelloWorld Composite and do some tests.

Deploy ...
Deploying a SOA Composite is a special kind of deployment that differs from deploying a regular Web Application. Although you can find a deploy menu option in the Application menu of JDeveloper, that one is not suitable for SOA Composite deployments. For SOA Composite projects, you need to right-click on the project name and choose Deploy and then the name of the Project.


This starts a wizard with on the first page two options, Deploy to Application Server and Generate SAR File:

A SAR file is a so-called SOA-Archive and is used to be able to have the project deployed by an administrator to a target environment without the need of JDeveloper. Choose the option Deploy to Application Server and click Next.

In the next page, select the Overwrite any existing composite with the same revision ID checkbox, but deselect the Keep running instances after redeployment. Although this is the first deployment (so there are no existing running instances yet), after this deployment you'll find that this sort of profile is added to the deployment context menu, and not checking the Overwrite... option will cause a failure in the next deployments. But there's no need to keep existing instances running. Keep the Revision ID. Click Next.

Select the pre-defined IntegratedWeblogicServer and click Finish.


The log pane is opened on the SOA tab:

If you've done everything ok, then a BUILD SUCCESSFUL message is shown. By the way, although it might not be obvious to you, the integrated build tool Apache ANT is used for this.

When the BUILD SUCCESSFUL message is shown, you can switch to the Deployment tab. After building the SAR it is sent to the weblogic server. If this finishes without errors you'll be able to test it.

... and test the BPEL process
Testing and monitoring the SOA Composite instances can be done using the Enterprise Manager - Fusion Middleware Control.

To open it, start a Internet Browser and navigate to http://localhost:7101/em (or the server:port where your admin-server runs extended with the URI '../em'). This should bring you to the Login page:



Use the credentials you used to start the IntegratedWeblogicServer for the first time to login.
Then the Enterprise Manager - Fusion Middleware Control landing page is opened for the Weblogic Domain

Click on the Navigator Icon to open up the Target Navigation and navigate to the composite under SOA->soa-infra->default:



Here the SOA node contains both service-bus projects and SOA Suite (soa-infra) Composite Deployments. The node Default is actually the Default partition. You can create several partitions, to catalog your deployments.

Open the HelloWorld[1.0] composite by clicking on it. Click on the Test button:




You can select a Service, Port and Operation as defined in the composite. Expand the nodes under SOAP Body, to navigate to the name input field. Fill in a nice value for name and click on Test Web Service:


After a (very) short while (dependent on the complexity of the service) the response will be shown:




Only for Synchronous services, a response message will be shown. But you can (practically) always click on the Launch Flow Trace  button to review the running, faulted or completed instance:




The flowtrace shows the different initiated components in the flow in a hierarchical way.
Here you can click on the particular components that you want to introspect. Clicking on the HelloWorldBPELProcess will open up the BPEL Flow:




If you click on the assign activity this will show:




It might be that in your case you're not able to open this, or to view the message contents. This is caused by the Audit Level of the SOA Suite, which by default is set to 'Production'. Even on the SOA QuickStart...

To resolve this you'll need to go to the Common Properties of the SOA Infrastructure. You can use the Target Navigation browser to navigate to the soa-infra. But you can also use the SOA Composite to navigate to the SOA Infrastructure Common Properties:




Then use the SOA Infrastructure menu to navigate to SOA Administration -> Common Properties:



Here you can set the Audit Level to Development. Click on the Apply button to effectuate the change.




Do another test and check-out the BPEL Flow and review the message contents.

Conclusion
So this concludes the first part of my BPEL series. If you bravely followed my step-by-step instructions, you should be able to do some basic navigation through JDeveloper and the Enterprise Manager Fusion Middleware Control. I hope you find this entertaining, although you might already be an experienced SOA Developer. But then you probably didn't get this far...

In the next articles I hope to slowly increase the level of the subjects. Coming up: invoking other services and using scopes. But first I'll probably need to explain something about SoapUI, to provide mock services and to deliver a more convenient way to test your services.

How to clear MDS Cache

Thu, 2016-05-26 07:06
In the answer on a question on community.oracle.com, I found the following great tip: http://www.soatutor.com/2014/09/clear-mds-cache.html.

In 12cR2 this looks like:
1. Start System MBean Browser; In Domain, pull down the Weblogic Domain menu and choose 'System MBean Browser':


2. Browse for the Application Defined Beans:



3. Expand it and navigate to oracle.mds.lcm, choose server (AdminServer or SOAServer1)
 4. Navigate to the node Application em (AdminServer) or soa-infra (SOAServer) -> MDSAppRuntime -> MDSAppRuntime. Click on the tab Operations, and then on clearCache


5. Click on Invoke:


Then the following confirmation is shown:


By the way, often a redeploy of the SOA Composite that calls the WSDL or other artefact helps. But unfortunately not always.





Automatic install of SOA Suite and Service Bus 12cR2.

Wed, 2016-05-25 05:41
Lately I worked on a set of scripts to automatically install Weblogic Infrastructure, SOA/BPM Suite, Service Bus, etc. Since I implemented a reworked set yesterday at another customer it might be nice to describe them here.

The scripts help in installing the software and creating the Repository. I started to create a script for creating the domain, but I don't have it working yet. A good starting point would be this blog of Edwin Biemond for the 12cR1 (12.1.3) version. If I manage to get it working for 12c in combination with my other scripts, I will get back to it. A nice reference would probably also be this description by Lucas Jellema (also 12.1.3).

To create the scripts I followed the Enterprise Deployment guide for SOASuite 12c, Install tasks documentation. To administer your different environments (dev, test, acc, prod) of the Fusion Middleware the Enterprise Deployment Workbook might come in handy. And then there is the Installing and Configuring Oracle SOA Suite and Oracle Business Process Management.

The scripts are based on my earlier work on the automatic install of the quickstarts under Linux.

By the way: for these scripts I use shell (bash) under Linux. But since the response files contain references that you'd probably want to have based on properties (I would), I should rework those using something like awk/sed (which I don't know) or ANT (which I do know, but which requires an ANT installation). But maybe in a next phase.

For this installation we need the following downloads, from edelivery:
Product                            Jar File                            Zip file        Note
Fusion Middleware Infrastructure   fmw_12.2.1.0.0_infrastructure.jar   V78156-01.zip   OracleFMW12cInfrastructure
SOA & BPM Suite                    fmw_12.2.1.0.0_soa.jar              V78169-01.zip   SOASuiteAndBPM
Service Bus                        fmw_12.2.1.0.0_osb.jar              V78173-01.zip   ServiceBus
Managed File Transfer              fmw_12.2.1.0.0_mft.jar              V78174-01.zip   ManagedFileTransfer
The scripts and software are placed in a folder structure containing the following sub-folders (folder name, followed by its contents):
Java
  • Java jdk 8u74+ rpm: jdk-8u74-linux-x64.rpm
ManagedFileTransfer
  • V78174-01.zip
  • fmw_12.2.1.0.0_mft.rsp
OracleFMW12cInfrastructure
  • V78156-01.zip
  • fmw_12.2.1.0.0_infrastructure.rsp
rcu
  • rcuSOAPasswords.txt
  • rcuSOA.rsp
scripts
  • fmw12c_env.sh
  • install.sh
  • installFMW.sh
  • installJava.sh
  • installMFT.sh
  • installSB.sh
  • installSOA.sh
  • rcuSOA.sh
ServiceBus
  • V78173-01.zip
  • fmw_12.2.1.0.0_osb.rsp
SOASuiteAndBPM
  • V78169-01.zip
  • fmw_12.2.1.0.0_soa.rsp

I'll explain the scripts and response (.rsp) files below. In each product subfolder there is the downloaded zip file (containing the installation jar file) and the accompanying response file. In the scripts folder there are the product installation scripts and the master script install.sh. So create a folder structure as above and place the downloaded products and the provided scripts in the appropriate folders.
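As a sketch of what that master install.sh could look like (this is just my assumption of its structure: it only needs to source the environment and call the sub-scripts in the right order; the actual script may differ):

#!/bin/bash
# master install script (sketch): run the installations in order from the scripts folder
. $PWD/fmw12c_env.sh
./installJava.sh
./installFMW.sh
./installSOA.sh
./installSB.sh
./installMFT.sh
./rcuSOA.sh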

So here we go.

Setting the environment
First I need a fmw12c_env.sh script to set some basic environment variables and especially the location of the FMW_HOME, where the software is going to be installed:
#!/bin/bash
echo set Fusion MiddleWare 12cR2 environment
export JAVA_HOME=/usr/java/jdk1.8.0_74
export FMW_HOME=/u01/app/oracle/FMW12210
export SOA_HOME=$FMW_HOME/soa
export OSB_HOME=$FMW_HOME/osb
export MFT_HOME=$FMW_HOME/mft


Adapt the location of the FMW_HOME and possibly the (desired or current) location of your JAVA_HOME. The other 'homes' are relative to the FMW_HOME: these are the locations within the FMW_HOME where the products are installed (in 11g these were Oracle_SOA1 or Oracle_OSB1).
Install Java
For the 12cR2 version of the Fusion Middleware we need a Java 8 installation. Preferably the latest version, but at least above Update 65. I used update 74, but you can change it to a later update. The script for the installation is as follows:
#!/bin/bash
. $PWD/fmw12c_env.sh
export JAVA_INSTALL_HOME=$PWD/../Java
export JAVA_INSTALL_RPM=jdk-8u74-linux-x64.rpm
#
echo JAVA_HOME=$JAVA_HOME
if [ ! -d "$JAVA_HOME" ]; then
# Install jdk
echo Install jdk 1.8
sudo rpm -ihv $JAVA_INSTALL_HOME/$JAVA_INSTALL_RPM
else
echo jdk 1.8 already installed
fi
Save it as installJava.sh under scripts.

Update the JAVA_INSTALL_RPM according to the downloaded rpm as placed in the Java subfolder. Again, adapt the JAVA_HOME in fmw12c_env.sh accordingly. What this script does is check if the folder in JAVA_HOME exists. If not, then apparently the denoted version is not installed, so the script installs it.

Sudo grants for the oracle user
To be able to run the script above (since it uses rpm via sudo) we need to adapt the sudoers file.

Log on as root via the command:
[oracle@darlin-vce- db ~]$ su -

Password:

Last login: Fri Feb 26 06:44:05 EST 2016 on pts/0

Edit the sudoers file:
[root@darlin-vce- db ~]# vi /etc/sudoers

Uncomment the lines for the Cmnd_Aliases SOFTWARE and SERVICES (remove the hash '#' at the beginning of the line):
## Installation and management of software

Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum

## Services

Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig

And add the following two lines at the end of the file:
## Extra rights for oracle to do for instance rpm without password.

oracle ALL= NOPASSWD: SERVICES, SOFTWARE

Save the file (use an exclamation mark in the ':wq!' command, since sudoers is read-only). After this you can run installJava.sh.
Install Infrastructure
First we need to install the Fusion Middleware Infrastructure. This is a WebLogic Server delivery that includes the RCU for the infrastructure schemas in the database. You can't use the 'vanilla' delivery of WebLogic Server; you'll need this one.

The install script is as follows:
#!/bin/bash
. $PWD/fmw12c_env.sh
#
export FMW_INSTALL_HOME=$PWD/../OracleFMW12cInfrastructure
export FMW_INSTALL_JAR=fmw_12.2.1.0.0_infrastructure.jar
export FMW_INSTALL_RSP=fmw_12.2.1.0.0_infrastructure.rsp
export FMW_INSTALL_ZIP=V78156-01.zip
#
# Fusion Middlware Infrastucture
if [ ! -d "$FMW_HOME" ]; then
#Unzip FMW
if [ ! -f "$FMW_INSTALL_HOME/$FMW_INSTALL_JAR" ]; then
if [ -f "$FMW_INSTALL_HOME/$FMW_INSTALL_ZIP" ]; then
echo Unzip $FMW_INSTALL_HOME/$FMW_INSTALL_ZIP to $FMW_INSTALL_HOME/$FMW_INSTALL_JAR
unzip $FMW_INSTALL_HOME/$FMW_INSTALL_ZIP -d $FMW_INSTALL_HOME
else
echo $FMW_INSTALL_HOME/$FMW_INSTALL_ZIP does not exist
fi
else
echo $FMW_INSTALL_JAR already unzipped.
fi
if [ -f "$FMW_INSTALL_HOME/$FMW_INSTALL_JAR" ]; then
echo Install Fusion Middleware Infrastucture 12cR2
$JAVA_HOME/bin/java -jar $FMW_INSTALL_HOME/$FMW_INSTALL_JAR -silent -responseFile $FMW_INSTALL_HOME/$FMW_INSTALL_RSP
else
echo $FMW_INSTALL_JAR not available!
fi
else
echo $FMW_HOME available: Fusion Middleware 12c Infrastucture already installed.
fi

Save it as installFMW.sh under scripts. As with installJava.sh, this script checks if the FMW_HOME already exists. If not, it checks the availability of the installer jar. If that is not there either, it checks for the zip file that should contain the installer jar and, if present, unzips it; if the zip file does not exist it stops with a message. You can also unzip the zip file prior to starting the scripts, because the jar is the primary requirement. You can leave the jar file in place for subsequent installations on other servers; it would be handy to put this on a shared staging folder.

If in the end the jar file exists, it starts the installer with java from the JAVA_HOME and performs a silent install using a response file. This is a file that is recorded at the end of a manual installation session and contains the choices made in the Oracle Universal Installer wizard. It is placed together with the zip file in the product folder.
It looks as follows:
[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0

[GENERIC]

#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true

#
MOS_USERNAME=

#
MOS_PASSWORD=<SECURE VALUE>

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=

#
SOFTWARE_UPDATES_PROXY_SERVER=

#
SOFTWARE_UPDATES_PROXY_PORT=

#
SOFTWARE_UPDATES_PROXY_USER=

#
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/oracle/FMW12210

#Set this variable value to the Installation Type selected. e.g. Fusion Middleware Infrastructure, Fusion Middleware Infrastructure With Examples.
INSTALL_TYPE=Fusion Middleware Infrastructure

#Provide the My Oracle Support Username. If you wish to ignore Oracle Configuration Manager configuration provide empty string for user name.
MYORACLESUPPORT_USERNAME=

#Provide the My Oracle Support Password
MYORACLESUPPORT_PASSWORD=<SECURE VALUE>

#Set this to true if you wish to decline the security updates. Setting this to true and providing empty string for My Oracle Support username will ignore the Oracle Configuration Manager configuration
DECLINE_SECURITY_UPDATES=true

#Set this to true if My Oracle Support Password is specified
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false

#Provide the Proxy Host
PROXY_HOST=

#Provide the Proxy Port
PROXY_PORT=

#Provide the Proxy Username
PROXY_USER=

#Provide the Proxy Password
PROXY_PWD=<SECURE VALUE>

#Type String (URL format) Indicates the OCM Repeater URL which should be of the format [scheme[Http/Https]]://[repeater host]:[repeater port]
COLLECTOR_SUPPORTHUB_URL=



Save it as fmw_12.2.1.0.0_infrastructure.rsp under OracleFMW12cInfrastructure.

If you choose to use another FMW_HOME than suggested, you'll need to change the ORACLE_HOME variable in the file accordingly. This is one of the elements that I want to have replaced automatically using a property, based on the FMW_HOME env-variable.
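Until that is scripted properly, a minimal sed one-liner could already do that replacement. This is just a sketch, run from the scripts folder and assuming fmw12c_env.sh (shown earlier) has been sourced so that FMW_HOME is set:

# sketch: rewrite the ORACLE_HOME line in the response file based on the FMW_HOME environment variable
. $PWD/fmw12c_env.sh
sed -i "s|^ORACLE_HOME=.*|ORACLE_HOME=${FMW_HOME}|" ../OracleFMW12cInfrastructure/fmw_12.2.1.0.0_infrastructure.rsp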

Install SOA and BPM Suite
The script for installation of the SOA and BPM Software is more or less the same as the FMW Infrastructure:

#!/bin/bash
. $PWD/fmw12c_env.sh
#
export SOA_INSTALL_HOME=$PWD/../SOASuiteAndBPM
export SOA_INSTALL_JAR=fmw_12.2.1.0.0_soa.jar
export SOA_INSTALL_RSP=fmw_12.2.1.0.0_soa.rsp
export SOA_INSTALL_ZIP=V78169-01.zip
#
# SOA and BPM Suite 12c
if [[ -d "$FMW_HOME" && ! -d "$SOA_HOME" ]]; then
#
#Unzip SOA&BPM
if [ ! -f "$SOA_INSTALL_HOME/$SOA_INSTALL_JAR" ]; then
if [ -f "$SOA_INSTALL_HOME/$SOA_INSTALL_ZIP" ]; then
echo Unzip $SOA_INSTALL_HOME/$SOA_INSTALL_ZIP to $SOA_INSTALL_HOME/$SOA_INSTALL_JAR
unzip $SOA_INSTALL_HOME/$SOA_INSTALL_ZIP -d $SOA_INSTALL_HOME
else
echo $SOA_INSTALL_HOME/$SOA_INSTALL_ZIP does not exist!
fi
else
echo $SOA_INSTALL_JAR already unzipped
fi
if [ -f "$SOA_INSTALL_HOME/$SOA_INSTALL_JAR" ]; then
echo Install SOA and BPM Suite 12cR2
$JAVA_HOME/bin/java -jar $SOA_INSTALL_HOME/$SOA_INSTALL_JAR -silent -responseFile $SOA_INSTALL_HOME/$SOA_INSTALL_RSP
else
echo $SOA_INSTALL_JAR not available!.
fi
else
if [ ! -d "$FMW_HOME" ]; then
echo $FMW_HOME not available: First install Fusion Middlware Infrastucture
fi
if [ -d "$SOA_HOME" ]; then
echo $SOA_HOME available: SOA Already installed
fi
fi

Save it as installSOA.sh under scripts.
This installs the software for both SOA and BPM. The choice to include BPM or not is made at creation of the domain. Alternatively, adapt the INSTALL_TYPE element in the response file below: this one uses BPM, but according to the comment in the response file you can also set it to 'SOA Suite', in which case I assume the BPM-specific software is omitted.

As in the FMW infrastructure installation we need a response file:
[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0

[GENERIC]

#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true

#
MOS_USERNAME=

#
MOS_PASSWORD=<SECURE VALUE>

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=

#
SOFTWARE_UPDATES_PROXY_SERVER=

#
SOFTWARE_UPDATES_PROXY_PORT=

#
SOFTWARE_UPDATES_PROXY_USER=

#
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/oracle/FMW12210

#Set this variable value to the Installation Type selected. e.g. SOA Suite, BPM.
INSTALL_TYPE=BPM



Save it as fmw_12.2.1.0.0_soa.rsp under SOASuiteAndBPM.
This one is a little smaller than the FMW-infra one. And again here the ORACLE_HOME should be adapted in case you choose to use another FMW_HOME location.
Install Service Bus
The script for the installation of the Service Bus software is more or less the same as the SOA and BPM one:

#!/bin/bash
. $PWD/fmw12c_env.sh
#
export OSB_INSTALL_HOME=$PWD/../ServiceBus
export OSB_INSTALL_JAR=fmw_12.2.1.0.0_osb.jar
export OSB_INSTALL_RSP=fmw_12.2.1.0.0_osb.rsp
export OSB_INSTALL_ZIP=V78173-01.zip
#
# ServiceBus 12c
if [[ -d "$FMW_HOME" && ! -d "$OSB_HOME/bin" ]]; then
  #
  # Unzip ServiceBus
  if [ ! -f "$OSB_INSTALL_HOME/$OSB_INSTALL_JAR" ]; then
    if [ -f "$OSB_INSTALL_HOME/$OSB_INSTALL_ZIP" ]; then
      echo Unzip $OSB_INSTALL_HOME/$OSB_INSTALL_ZIP to $OSB_INSTALL_HOME/$OSB_INSTALL_JAR
      unzip $OSB_INSTALL_HOME/$OSB_INSTALL_ZIP -d $OSB_INSTALL_HOME
    else
      echo $OSB_INSTALL_HOME/$OSB_INSTALL_ZIP does not exist!
    fi
  else
    echo $OSB_INSTALL_JAR already unzipped
  fi
  if [ -f "$OSB_INSTALL_HOME/$OSB_INSTALL_JAR" ]; then
    echo Install ServiceBus 12cR2
    $JAVA_HOME/bin/java -jar $OSB_INSTALL_HOME/$OSB_INSTALL_JAR -silent -responseFile $OSB_INSTALL_HOME/$OSB_INSTALL_RSP
  else
    echo $OSB_INSTALL_JAR not available!
  fi
else
  if [ ! -d "$FMW_HOME" ]; then
    echo $FMW_HOME not available: First install the Fusion Middleware Infrastructure
  fi
  if [ -d "$OSB_HOME" ]; then
    echo $OSB_HOME available: ServiceBus already installed
  fi
fi

Save it as installSB.sh under scripts.
This installs the Service Bus software.
As in the FMW infrastructure installation we need a response file:
[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0

[GENERIC]

#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true

#
MOS_USERNAME=

#
MOS_PASSWORD=<SECURE VALUE>

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=

#
SOFTWARE_UPDATES_PROXY_SERVER=

#
SOFTWARE_UPDATES_PROXY_PORT=

#
SOFTWARE_UPDATES_PROXY_USER=

#
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/oracle/FMW12210

#Set this variable value to the Installation Type selected. e.g. Service Bus.
INSTALL_TYPE=Service Bus



Save it as fmw_12.2.1.0.0_osb.rsp under ServiceBus.
This one is a little smaller than the FMW-infra one. And again here the ORACLE_HOME should be adapted in case you choose to use another FMW_HOME location.
Install Managed File Transfer
The script for the installation of the Managed File Transfer software is again more or less the same as the SOA and BPM one:
#!/bin/bash
. $PWD/fmw12c_env.sh
#
export MFT_INSTALL_HOME=$PWD/../ManagedFileTransfer
export MFT_INSTALL_JAR=fmw_12.2.1.0.0_mft.jar
export MFT_INSTALL_RSP=fmw_12.2.1.0.0_mft.rsp
export MFT_INSTALL_ZIP=V78174-01.zip
#
# MFT 12c
if [[ -d "$FMW_HOME" && ! -d "$MFT_HOME/bin" ]]; then
  #
  # Unzip MFT
  if [ ! -f "$MFT_INSTALL_HOME/$MFT_INSTALL_JAR" ]; then
    if [ -f "$MFT_INSTALL_HOME/$MFT_INSTALL_ZIP" ]; then
      echo Unzip $MFT_INSTALL_HOME/$MFT_INSTALL_ZIP to $MFT_INSTALL_HOME/$MFT_INSTALL_JAR
      unzip $MFT_INSTALL_HOME/$MFT_INSTALL_ZIP -d $MFT_INSTALL_HOME
    else
      echo $MFT_INSTALL_HOME/$MFT_INSTALL_ZIP does not exist!
    fi
  else
    echo $MFT_INSTALL_JAR already unzipped
  fi
  if [ -f "$MFT_INSTALL_HOME/$MFT_INSTALL_JAR" ]; then
    echo Install MFT 12cR2
    $JAVA_HOME/bin/java -jar $MFT_INSTALL_HOME/$MFT_INSTALL_JAR -silent -responseFile $MFT_INSTALL_HOME/$MFT_INSTALL_RSP
  else
    echo $MFT_INSTALL_JAR not available!
  fi
else
  if [ ! -d "$FMW_HOME" ]; then
    echo $FMW_HOME not available: First install the Fusion Middleware Infrastructure
  fi
  if [ -d "$MFT_HOME" ]; then
    echo $MFT_HOME available: MFT already installed
  fi
fi

Save it as installMFT.sh under scripts.
Again we need a response file:
[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0

[GENERIC]

#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true

#
MOS_USERNAME=

#
MOS_PASSWORD=<SECURE VALUE>

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=

#
SOFTWARE_UPDATES_PROXY_SERVER=

#
SOFTWARE_UPDATES_PROXY_PORT=

#
SOFTWARE_UPDATES_PROXY_USER=

#
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=/u01/app/oracle/FMW12210



Save it as fmw_12.2.1.0.0_mft.rsp under ManagedFileTransfer.
And again here the ORACLE_HOME should be adapted in case you choose to use another FMW_HOME location.
Install the lot
You could run the scripts above one by one, or have them called using a master script:
#!/bin/bash
echo _______________________________________________________________________________
echo Java SDK 8
./installJava.sh
echo
echo _______________________________________________________________________________
echo Fusion Middleware Infrastructure
./installFMW.sh
echo
echo _______________________________________________________________________________
echo "SOA & BPM Suite"
./installSOA.sh
echo
echo _______________________________________________________________________________
echo ServiceBus
./installSB.sh
echo
echo _______________________________________________________________________________
echo Managed File Transfer
./installMFT.sh

Save it as install.sh under scripts.
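If you do, you might want to capture the output of the complete run in a log file, so a failure of one of the silent installations can be traced back afterwards. A small sketch:

#!/bin/bash
# Make the scripts executable and run the complete installation, logging to a timestamped file.
chmod +x *.sh
./install.sh 2>&1 | tee install_$(date +%Y%m%d_%H%M%S).log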
Repository Creation
When the software is installed, it's time to create the repository. This requires:
  • a database, for instance an 11g XE, 11gR2 latest or 12c
  • Sys password
Where in 11g you had a separate repository creation utility delivered as a giant installer (as big as a SOA Suite installation), in 12c the RCU comes in parts per product. However, in the end it is one utility that 'grows' with each added product.

The command-line interface of the RCU and its options are described here. It turns out (although it is not documented there) that the RCU also supports a response file.

The rcu install script is as follows:
#!/bin/bash
. $PWD/fmw12c_env.sh
echo Run rcu for SOA Infrastucture
export RCU_INSTALL_HOME=$PWD/../rcu
export RCU_SOA_RSP=rcuSOA.rsp
export RCU_SOA_PWD=rcuSOAPasswords.txt
#export RCU_SOA_PWD=rcuSOAPasswords-same.txt
$FMW_HOME/oracle_common/bin/rcu -silent -responseFile $RCU_INSTALL_HOME/$RCU_SOA_RSP -f < $RCU_INSTALL_HOME/$RCU_SOA_PWD

Save it as rcuSOA.sh under scripts.

This script uses both a response file and a password file.
The response file is as follows:
#RCU Operation - createRepository, generateScript, dataLoad, dropRepository, consolidate, generateConsolidateScript, consolidateSyn, dropConsolidatedSchema, reconsolidate
operation=createRepository

#Enter the database connection details in the supported format. Database Connect String. This can be specified in the following format - For Oracle Database: host:port:SID OR host:port/service , For SQLServer, IBM DB2, MySQL and JavaDB Database: Server name/host:port:databaseName. For RAC database, specify VIP name or one of the Node name as Host name.For SCAN enabled RAC database, specify SCAN host as Host name.
connectString=darlin-vce-db:1521:PDBORCL

#Database Type - [ORACLE|SQLSERVER|IBMDB2|EBR|MYSQL] - default is ORACLE
databaseType=ORACLE

#Database User
dbUser=sys

#Database Role - sysdba or Normal
dbRole=SYSDBA

#This is applicable only for database type - EBR
#edition=

#Prefix to be used for the schema. This is optional for non-prefixable components.
schemaPrefix=DEV

#List of components separated by comma. Remove the components which are not needed.
componentList=UCSUMS,MDS,WLS,STB,OPSS,IAU,IAU_APPEND,IAU_VIEWER,SOAINFRA,ESS,MFT

#Specify whether dependent components of the given componentList have to be selected. true | false - default is false
#selectDependentsForComponents=false

#If below property is set to true, then all the schemas specified will be set to the same password.
useSamePasswordForAllSchemaUsers=false

#This allows user to skip cleanup on failure. yes | no. Default is no.
#skipCleanupOnFailure=no

#Yes | No - default is Yes. This is applicable only for database type - SQLSERVER.
#unicodeSupport=no

#Location of ComponentInfo xml file - optional.
#compInfoXMLLocation=

#Location of Storage xml file - optional
#storageXMLLocation=

#Tablespace name for the component. Tablespace should already exist if this option is used.
#tablespace=

#Temp tablespace name for the component. Temp Tablespace should already exist if this option is used.
#tempTablespace=

#Absolute path of Wallet directory. If wallet is not provided, passwords will be prompted.
#walletDir=

#true | false - default is false. RCU will create encrypted tablespace if TDE is enabled in the database.
#encryptTablespace=false

#true | false - default is false. RCU will create datafiles using Oracle-Managed Files (OMF) naming format if value set to true.
#honorOMF=false

#Variable required for component SOAINFRA. Database Profile (SMALL/MED/LARGE)
SOA_PROFILE_TYPE=SMALL

#Variable required for component SOAINFRA. Healthcare Integration(YES/NO)
HEALTHCARE_INTEGRATION=NO


Regarding the elements you want to fill using properties, this is the largest file. The most important properties are:
  • connectString=darlin-vce-db:1521:PDBORCL
  • databaseType=ORACLE
  • dbUser=sys
  • dbRole=SYSDBA
  • schemaPrefix=DEV
  • componentList=UCSUMS,MDS,WLS,STB,OPSS,IAU,IAU_APPEND,IAU_VIEWER,SOAINFRA,ESS,MFT
  • useSamePasswordForAllSchemaUsers=false

I think properties like connectString, databaseType, dbUser and dbRole speak more or less for themselves. The property 'schemaPrefix' needs to be adapted according to the target environment. This can be something like DEV, TST, ACC or PRD. Or SOAO, SOAT, SOAA, SOAP (the last one is funny...).

Then the component list. For SOA and MFT there are several required components. These can be found here in the 12.1.3 docs. For 12.2.1 the list of component ids can be found here. Unfortunately, there you can't find the requirements in as much detail as in 12.1.3.

Then there is a password file. If you set useSamePasswordForAllSchemaUsers to true, you need only two passwords: the sys password and the generic schema password. If, as in this example, the value is false, you need to specify one for each schema. The password file I use looks like:

welcome1
DEV_UMS
DEV_MDS
DEV_WLS
DEV_WLS_RUNTIME
DEV_STB
DEV_OPSS
DEV_IAU
DEV_IAU_APPEND
DEV_IAU_VIEWER
DEV_SOAINFRA
DEV_ESS
DEV_MFT

The first password in the list is the sys password. Then the passwords are listed in the order of the components (a small generator for such a password file is sketched after this list). A few remarks:
  • The component UCSUMS (User Messaging Services) results in a schema DEV_UMS (provided that the schemaPrefix = DEV).
  • I use passwords here that equal the schema names. You probably would not do that in acceptance and/or production, but maybe you do in development and test. In this example it is handy to know which password needs to go at which place.
  • The component WLS needs two passwords, since it results in two schemas: DEV_WLS and DEV_WLS_RUNTIME. It is not documented (at least I could not find it) and it took me considerable time: after DEV_WLS the passwords did not match and the RCU complained about a missing password. Looking in a manually created repository I found that it also creates DEV_WLS_RUNTIME.
  • For Managed File Transfer (MFT) the Enterprise Scheduler Service (ESS) is needed as well, plus the prerequisites for SOAINFRA.
  • SOAINFRA is needed for both SOA & BPM and Service Bus. So even if you only install Service Bus, you need to install SOAINFRA.
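A minimal sketch of such a generator, assuming the DEV prefix, the component list from the response file above and 'welcome1' as the sys password:

#!/bin/bash
# Sketch: generate the RCU password file where each schema password equals the schema name
# (handy for development and test only, as noted above).
SCHEMA_PREFIX=DEV
SYS_PWD=welcome1
PWD_FILE=rcuSOAPasswords.txt
# First line: the sys password
echo $SYS_PWD > $PWD_FILE
# Then one password per schema, in the order of the component list.
# Note: UCSUMS maps to the UMS schema and WLS results in both WLS and WLS_RUNTIME.
for SCHEMA in UMS MDS WLS WLS_RUNTIME STB OPSS IAU IAU_APPEND IAU_VIEWER SOAINFRA ESS MFT; do
  echo ${SCHEMA_PREFIX}_${SCHEMA} >> $PWD_FILE
done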

Conclusion
As said, these scripts help in installing the software and creating the repository. They are shell scripts, but it should not be too hard to translate them to ANT or other tooling like Ansible or Puppet if you're into one of those. To me it would be nice finger-practice to translate them to ANT, to be able to dynamically adapt the response files. I'll probably do that in the near future. And it would be a nice learning path to implement this in Ansible or Puppet.

But first, for me it would be a challenge to create a domain script in WLST. So hopefully I get to write about that soon.

Have you seen the menu?

Fri, 2016-04-22 06:49
And did you like it? Hardly possible to miss, I think. It kept me nicely busy for a few hours. I got some great examples, and this one is purely based on CSS and unordered lists in combination with anchors. Unfortunately the menu worked with non-classed <ul>, <li> and <a> tags, so embedding the CSS caused my other elements to be redefined (it even redefined the padding of all elements).

But with some trial and error I got it working in a subclassed form. And I like it, do you?

I also found that besides articles, you also can create pages in blogger. Did not know about that, completely overlooked that. I think I try something out, so if you're a regular visitor, you might find that there's work in progress.

The wish for a menu popped up a little while ago, and I kept thinking about it, to be able to get some structure in my articles. From the beginning I tagged every article, but not with a real plan. So I got tags stating 'Oracle BPM Suite', but also 'SOA Suite'. And 'Database', but also 'Database 11g'. Not so straightforward and purposeful.

But a purpose arose. For a while now I've been thinking about whether writing a book would be something for me. I like to write articles on a (ir)regular basis. On this blog you can find a broad range of subjects. But could I do a longer series on a particular subject? And could it lead to a more structured and larger form like a book? I learned from a former co-worker that he had the idea to write articles on a regular basis to build up a book gradually. And I like that. But what subject would it be? My core focus area is SOA Suite and BPM Suite. But loads of books are written about that. Well, maybe not loads, but at least some recognized, good ones. And even PCS (Process Cloud Service) and ICS (Integration Cloud Service) are (being) covered.

But when Oracle acquired Collaxa in 2004, I worked at Oracle Consulting and got to work with it in the very early days. And I think, in the Netherlands at least, I was (one of) the first one(s) from Oracle to provide training on BPEL, at least for Oracle University in the Netherlands. So I got involved in BPEL from the first hour Oracle laid hands on it. Could BPEL be a subject I could cover? Of course I'll not be the first one to cover it. Both on BPEL 1.1 and on 2.0 you can google up a book (is that already a term?); the one on 1.1 I still had stacked in a pile behind another one on my bookshelf.

So let's see where this leads me. You can expect a series on BPEL, in parallel of other articles on subjects that come around during my work. From real novice (do you already use scopes and local variables?), up to some more advanced stuff (how about dynamic partnerlinks; are you already into Correlation Sets, transaction handling, BPEL and Spring? )

It might bleed to death. It might become a nice series and nothing more than that. And it might turn out to be a really informative stack of articles that could be re-edited into a book. But when I'm at it, turning to cover the more advanced subjects, I plan to poll for what you want to have covered. I think I do know something about BPEL. But as you read with me, maybe you could point me to subjects I don't know yet. Consider yourself invited to read along.

XA Transactions with SOASuite JMS Adapter

Tue, 2016-04-19 13:39
JMS is perfect for setting transaction boundaries and in OSB it is pretty clear how JMS transactions are handled. However, in SOASuite using the JMS adapter the SOA Infrastructure is handling your JMS transactions by default, and messages are removed from the queue right away because the Gets are auto-acknowledged. If something fails, you would expect that messages are rolled back to the JMS queue and eventually moved to the error queue. But, again by default, not with the SOASuite/JMS Adapter. In that case the BPEL process, for instance, fails and gets in a recovery state, to be handled in the 'Error Hospital' in Enterprise Manager. But I want JMS to handle it! (Says the little boy...)

So how do we accomplish that? Today I got the chance to figure that out.

Start with a JMS setup with a JMS Server, a Module and a Queue, with an Error Queue that is configured to be the error destination of the first queue. On the first queue set the redelivery limit to 3 and a redelivery delay of, for instance, 60000 ms (or something like that). I'm not going into that here.
Create also a Connection Factory in the JMS Module with a proper jndi, something like 'jms/myApplicationCF'.

In the JMS adapter on SOASuite there are several OutboundConnectionFactories already pre-configured. It is quite convenient to use the one with JNDI 'eis/wls/Queue'. But if you look into that, you'll see that it uses the default WebLogic JMS connection factory 'weblogic.jms.XAConnectionFactory'. Not much wrong with that, but you can't configure it for your own particular situation. Moreover, it is configured with 'AcknowledgeMode' = 'AUTO_ACKNOWLEDGE'. As you can read in the docs there are three values for the AcknowledgeMode:
  • DUPS_OK_ACKNOWLEDGE, for consumers that are not concerned about duplicate messages
  • AUTO_ACKNOWLEDGE, in which the session automatically acknowledges the receipt of a message
  • CLIENT_ACKNOWLEDGE, in which the client acknowledges the message by calling the message's acknowledge method
So create a new outbound connection factory, with a JNDI like 'eis/jms/MyApp'. 
Now, apparently we don't want 'AUTO_ACKNOWLEDGE', because that would cause the message to get acknowledged 'on get'. So you could roll back until 'Saint Juttemis' (as we say in our family), but it won't go back on the queue. Dups aren't OK with me, so I'll choose 'CLIENT_ACKNOWLEDGE' here. Then there's another option: 'IsTransacted'. I want that one on 'true'. Then in ConnectionFactoryLocation, you'd put the JNDI of your JMS connection factory, in my example 'jms/myApplicationCF'.

So you'll get something like:

On the tab Transaction, validate that the transaction support is set to a XA Transaction:

Having done that, you can update/redeploy your JMS Adapter with the changed plan. I figure that how to do that is straightforward, especially when you've done it with DB Adapters already.

I created two SOA Projects (actually I adapted those created by a co-worker). The first one is TestPutJMS:

The project is straightforward, with a WSDL referring to an XSD with two fields:







The BPEL is then as follows:

It assigns the request to the input variable of the invoke of the JMS_Put. The JMS_Put is a JMS-adapter configuration, referring to the JNDI 'eis/jms/myApp', defined in the JMS Adapter.

After that there's an if on the action field, where in the case of a certain value a fault is thrown, to validate if the Put is rolled back.

In my case it's more interesting to look at the Get part. That project is as follows:

In this case there's a Mediator wired to the get adapter configuration, also referring to the 'eis/jms/myApp' JNDI. The Mediator routes to the BPEL process. The transaction handling of a Mediator is simple and straightforward:
  • If there's a transaction it will subscribe to that,
  • if there isn't, a new transaction is created.
The JMS Adapter creates a new XA transaction. On the JMS Adapter in WLS we configured that no auto-acknowledge should occur and that we want a transaction. Thus, this is the transaction that is reused by the Mediator. But how about the BPEL? The BPEL is asynchronous, request-only: it has no way to reply with a response, or it would have to go on a response queue.
By default the property 'bpel.config.oneWayDeliveryPolicy' is set to 'async.persist', but that would mean that a new thread is started. Setting it to 'sync' causes the thread that was started by the adapter to be reused. I also want to subscribe to the already running transaction of the JMS Adapter as it is passed through by the Mediator. Setting the property 'bpel.config.transaction' to 'required' takes care of that. Summarized, I set the following properties on the BPEL:
  • bpel.config.transaction: required => subscribe to already opened transaction
  • bpel.config.oneWayDeliveryPolicy: sync => reuse existing running thread


The process looks like:


Here I have an if with a conditional throw of an exception as well. Based on the value of the action element I can have it throw a custom exception, which will cause the BPEL to fail and the transaction to be rolled back.
With a redelivery limit of 3, I'll get three retries, so in total 4 tries of the BPEL process. After that, the message is moved to the JMS error queue.

A nice article on JMS transactions from the A-Team can be found here. However, the setup above leaves the redelivery handling to JMS. So, in 12cR2 at least, I find that the properties of the JMS queue apparently take precedence over the settings I did on the TestJMSGet service on the composite:

I hope this article clears things up regarding the JMS Adapter configuration for transactions.

Extend your SOA Domain with Insight

Mon, 2016-04-11 12:43
Lately I wrote about how to install RealTime Integration Business Insight. It's about installing the software, actually. In the quickstart you'll read that you actually have to extend your domain as well.

It actually states that you can install it in your SOA QuickStart installation as well, but I didn't try that (yet).

However, you need to extend your domain with the following items:
  • Insight SOA Agent 12.2.1 [soa]
  • Insight Service Bus Agent 12.2.1 [osb]
  • Insight 12.2.1 [soa]
To do so, shut down your domain (if not done so already), but start (or leave up) your infra database, as I found this is needed.
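As a quick sanity check before starting config.sh you could verify that the database listener is reachable. A small sketch, assuming the darlin-vce-db host and default port 1521 used elsewhere in this post:

#!/bin/bash
# Check whether the infra database listener accepts connections (host and port are assumptions).
if nc -z darlin-vce-db 1521; then
  echo "Infra database listener is reachable"
else
  echo "Infra database is not reachable: start it before extending the domain"
fi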

Set your FMW environment, as I put in my fmw12c_env.sh script:
[oracle@darlin-vce-db bin]$ cat ~/bin/fmw12c_env.sh
#!/bin/bash
echo set Fusion MiddleWare 12cR2 environment
export JAVA_HOME=/usr/java/jdk1.8.0_74
export FMW_HOME=/u01/app/oracle/FMW12210
export WL_HOME=${FMW_HOME}/wlserver
export NODEMGR_HOME=/u01/app/work/domains/soabpm12c_dev/nodemanager

export SOA_HOME=$FMW_HOME/soa
export OSB_HOME=$FMW_HOME/osb
export MFT_HOME=$FMW_HOME/mft
#
echo call setWLSEnv.sh
. $FMW_HOME/wlserver/server/bin/setWLSEnv.sh
export PATH=$FMW_HOME/oracle_common/common/bin:$WL_HOME/common/bin/:$WL_HOME/server/bin:$PATH
[oracle@darlin-vce-db bin]$
... and navigate to the $FMW_HOME/oracle_common/common/bin folder and start config.sh:

[oracle@darlin-vce-db ~]$ . fmw12c_env.sh
set Fusion MiddleWare 12cR2 environment
call setWLSEnv.sh
CLASSPATH=/usr/java/jdk1.8.0_74/lib/tools.jar:/u01/app/oracle/FMW12210/wlserver/modules/features/wlst.wls.classpath.jar:

PATH=/u01/app/oracle/FMW12210/wlserver/server/bin:/u01/app/oracle/FMW12210/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:/usr/java/jdk1.8.0_74/jre/bin:/usr/java/jdk1.8.0_74/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/oracle/.local/bin:/home/oracle/bin:/u01/app/oracle/FMW12210/wlserver/../oracle_common/modules/org.apache.maven_3.2.5/bin

Your environment has been set.
[oracle@darlin-vce-db ~]$ cd $FMW_HOME/oracle_common/common/bin
[oracle@darlin-vce-db bin]$ ls
clonedunpack.sh config_builder.sh pack.sh reconfig.sh
commBaseEnv.sh config.sh prepareCustomProvider.sh setHomeDirs.sh
commEnv.sh configWallet.sh printJarVersions.sh unpack.sh
commExtEnv.sh getproperty.sh qs_config.sh wlst.sh
[oracle@darlin-vce-db bin]$ ./config.sh

In the first screen set the radio button to 'Update an existing domain':

Then Click Next, and check the items listed above:

Click Next, Next, ... Finish.
If you would have checked the 'Deployments' checkbox under the Advanced Configuration, you could have reviewed that the particular deployments are automatically targeted to the BAM, OSB and SOA clusters.

After this you can start your servers and start using Insight, for example beginning with the setup of the Insight Demo Users. This is properly described in the Quickstart Guide. But, as I'm on to it, let me try right away. The demo users setup is downloadable here. Download it and unzip it in a folder on your server.

First we'll have to set the environment. So I call my neat fmw12c_env.sh script first (in a new terminal), and explicitly set the $MW_HOME variable:
[oracle@darlin-vce-db bin]$ . fmw12c_env.sh
set Fusion MiddleWare 12cR2 environment
call setWLSEnv.sh
CLASSPATH=/usr/java/jdk1.8.0_74/lib/tools.jar:/u01/app/oracle/FMW12210/wlserver/modules/features/wlst.wls.classpath.jar:

PATH=/u01/app/oracle/FMW12210/wlserver/server/bin:/u01/app/oracle/FMW12210/wlserver/../oracle_common/modules/org.apache.ant_1.9.2/bin:/usr/java/jdk1.8.0_74/jre/bin:/usr/java/jdk1.8.0_74/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/oracle/.local/bin:/home/oracle/bin:/u01/app/oracle/FMW12210/wlserver/../oracle_common/modules/org.apache.maven_3.2.5/bin

Your environment has been set.
[oracle@darlin-vce-db bin]$ export MW_HOME=$FMW_HOME
[oracle@darlin-vce-db bin]$ echo $MW_HOME
/u01/app/oracle/FMW12210
[oracle@darlin-vce-db bin]$ echo $JAVA_HOME
/usr/java/jdk1.8.0_74
[oracle@darlin-vce-db bin]$ echo $ANT_HOME
/u01/app/oracle/FMW12210/wlserver/../oracle_common/modules/org.apache.ant_1.9.2

We're going to call an ant script that apparently needs the following variables set:
  • MW_HOME= <Middleware home of the environment>
  • JAVA_HOME= <Location of java home>
  • ANT_HOME=$MW_HOME/oracle_common/modules/org.apache.ant_1.9.2
  • PATH=$JAVA_HOME/bin:$ANT_HOME/bin:$PATH
The first one is not set by my script (I called it $FMW_HOME), so I needed to set $MW_HOME to $FMW_HOME; the last three are set by my script.

Running the script against a developer topology domain (everything in the AdminServer or DefaultServer of the SOA QuickStart) will probably go OK. But a stubborn guy like me tries to do this in a more production-like topology with separate SOA, OSB and BAM clusters. It turns out that you need to adapt the insight.properties file that is in the bin folder of the InsightDemoUserCreation.zip (even if you're not like me, you'll need to review it...).
After editing, mine looks like:

#Insight FOD Automation file

wls.host = darlin-vce-db
wls.port = 7001
soa_server_port = 7005
bam_server_port = 7006
userName = weblogic
passWord = welcome1
oracle_jdbc_url = jdbc:oracle:thin:@darlin-vce-db:1521:ORCL
db_soa_user = DEV_SOAINFRA
oracle_db_password = DEV_SOAINFRA
db_mds_user = DEV_MDS
mds.password = DEV_MDS
jdbc_driver = oracle.jdbc.OracleDriver

When all is right then you can run:
[oracle@darlin-vce-db bin]$ cd /media/sf_Stage/InsightDemoUserCreation/bin/
[oracle@darlin-vce-db bin]$ ant createInsightDemoUsers

Unfortunately I can't show you the correct output since, although I seem to have set my properties correctly, I got failures. It turns out that my server (all in one VM) ran so slowly that Insight could not be started due to timeouts in getting a database connection....
After restarting BAM all went well, except for the exceptions indicating that the users were already created.

Use external property file in WLST

Thu, 2016-04-07 01:41
I frequently create WLST scripts that need properties. Not so exciting, but how do you do that in a convenient way, and how do you detect in a clean way that properties aren't set?

You could read a property file like described here. The basics are to use in fact Java to create a properties object and a FileInputStream to read it:
#Script to load properties file.

from java.io import File
from java.io import FileInputStream
from java.util import Properties


#Load properties file in java.util.Properties
def loadPropsFil(propsFil):
    inStream = FileInputStream(propsFil)
    propFil = Properties()
    propFil.load(inStream)
    return propFil

I think the main disadvantage is that it clutters the script code and you need to call 'myPropFil.getProperty(key)' to get a property value.

Following the documentation you can use the commandline option '-loadProperties propertyFilename' to explicitly provide a property file. I found this actually quite clean. Every property in the file becomes automatically available as a variable in your script.

Besides that, I found a terrific blog post on error handling in WLST. It states that with 'except NameError, e:' you can handle the reference to a variable that is not declared earlier.

I combined these two sources to come up with a script template that allows me to provide property files for different target environments as a command-line option, while detecting if properties are provided. So let's assume you create a property file named, for instance, 'localhost.properties' like:
#############################################################################
# Properties for the localhost Integrated Weblogic
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-06
#
#############################################################################
#
# Properties for localhost
adminUrl=localhost:7101
adminUser=weblogic
adminPwd=welcome1
clustername=LocalCluster
# Generic properties for creating JMS components
#jmsFileStoresBaseDir=/app/oracle/config/cluster_shared/filestore/
jmsFileStoresBaseDir=c:/Data/JDeveloper/SOA/filestore
#Filestore 01
...

Then you can use that with the following script, named for instance 'createJMSServersWithFileStoreV2.py':
#############################################################################
# Create FileStores and JMS Servers
#
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-06
#
#############################################################################
# Modify these values as necessary
import sys, traceback
scriptName = 'createJMSServersWithFileStoreV2.py'
#
#
def usage():
    print 'Call script as: '
    print 'Windows: wlst.cmd'+scriptName+' -loadProperties localhost.properties'
    print 'Linux: wlst.sh'+scriptName+' -loadProperties environment.properties'
    print 'Property file should contain the following properties: '
    print "adminUrl='localhost:7101'"
    print "adminUser='weblogic'"
    print "adminPwd='welcome1'"

def main():
    try:
        #Connect to administration server
        print '\nConnect to AdminServer via '+adminUrl+' with user '+adminUser
        connect(adminUser, adminPwd, adminUrl)
        ...
    except NameError, e:
        print 'Apparently properties not set.'
        print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
        usage()
    except:
        apply(traceback.print_exception, sys.exc_info())
        stopEdit('y')
        exit(exitcode=1)

#call main()
main()
exit()

You can call it like 'wlst createJMSServersWithFileStoreV2.py -loadProperties localhost.properties'. If you don't provide a property file you'll get:
e:\wls>wlst createJMSServersWithFileStoreV2.py

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Apparently properties not set.
Please check the properties: exceptions.NameError adminUrl
Call script as:
Windows: wlst.cmdcreateJMSServersWithFileStoreV2.py -loadProperties localhost.properties
Linux: wlst.shcreateJMSServersWithFileStoreV2.py -loadProperties environment.properties
Property file should contain the following properties:
adminUrl='localhost:7101'
adminUser='weblogic'
adminPwd='welcome1'


Exiting WebLogic Scripting Tool.


e:\wls>

Pretty clean. You could even use the 'except NameError, e:' construct to conditionally execute code when properties are set by ignoring/handling the situation when particular properties are intentionally not provided.
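Since the same script now runs unchanged against any environment, a small bash wrapper can loop over the property files. This is just a sketch: the per-environment property file names are an assumption, and wlst.sh is assumed to be on the PATH.

#!/bin/bash
# Run the same WLST script against several environments by looping over property files.
SCRIPT=createJMSServersWithFileStoreV2.py
for ENV in localhost test acceptance; do
  if [ -f "$ENV.properties" ]; then
    echo "=== Running $SCRIPT with $ENV.properties ==="
    wlst.sh $SCRIPT -loadProperties $ENV.properties
  else
    echo "$ENV.properties not found, skipping"
  fi
done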

Install Oracle Real-Time Integration Business Insight

Fri, 2016-04-01 08:50
Yes, Oracle FMW Integration Insight is available, as I wrote in an earlier post. You can download it here.
But of course we're very curious about how to install it. Do I have to unzip it into my FMW_HOME? Is there a nice Oracle Installer that I can run silently? No, none of that: it comes as a set of OPatch patches on SOASuite 12.2.1:
  1. p22189824_122100_Generic.zip: OPatch containing Oracle Real-Time Integration Business Insight 12.2.1.0.0
  2. p22655174_122100_Generic.zip: OPatch containing updates to SOA and BAM 12.2.1.0.0 
  3. p22659236_122100_Generic.zip: OPatch containing updates to Service Bus 12.2.1.0.0
Following the README.txt in the zip, the correct order is to first install ORIBI, then patch SOA & BPM and then Service Bus.


Earlier I wrote about installing BPM QuickStart under Linux. Based on that I created a script to install SOASuite. Maybe I should write about that in another post. I haven't tried if it's possible to install this in a SOA or BPM QuickStart, but I did it in a full FMW installation that I built using my scripts. So I scripted this installation following the scripting work I've done earlier.


For this setup I have 2 folders:
  • scripts: with the scripts.
  • ofm_integration_insight_1221: with the downloaded ofm_integration_insight_12.2.1.0.0_disk1_1of1.zip
The scripts folder contains two scripts:
fmw12c_env.sh:
#!/bin/bash
echo set Fusion MiddleWare 12cR2 environment
export JAVA_HOME=/usr/java/jdk1.8.0_74
export FMW_HOME=/u01/app/oracle/FMW12210
export SOA_HOME=$FMW_HOME/soa
export OSB_HOME=$FMW_HOME/osb
export MFT_HOME=$FMW_HOME/mft

This provides the settings for the FMW_HOME and the JAVA_HOME, and the product homes I needed for my SOA/BPM Suite installation (I definitely should write that down!).

The actual install script is installOII.sh:
#!/bin/bash
. $PWD/fmw12c_env.sh
#
export CD=$PWD
export OII_INSTALL_HOME=$CD/../ofm_integration_insight_1221
export OII_INSTALL_ZIP=ofm_integration_insight_12.2.1.0.0_disk1_1of1.zip
export OPATCH_SOABPM_ZIP=p22655174_122100_Generic.zip #OPatch containing updates to SOA and BAM 12.2.1.0.0
export OPATCH_SOABPM_NR=22655174
export OPATCH_OSB_ZIP=p22659236_122100_Generic.zip #OPatch containing updates to Service Bus 12.2.1.0.0
export OPATCH_OSB_NR=22659236
export OPATCH_OII_ZIP=p22189824_122100_Generic.zip #OPatch containing Oracle Real-Time Integration Business Insight 12.2.1.0.0
export OPATCH_OII_NR=22189824
export PATCHES_HOME=$FMW_HOME/OPatch/patches
export ORACLE_HOME=$FMW_HOME
# Unzip OII Install zip
if [ ! -f "$OII_INSTALL_HOME/$OPATCH_OII_ZIP" ]; then
  if [ -f "$OII_INSTALL_HOME/$OII_INSTALL_ZIP" ]; then
    echo Unzip $OII_INSTALL_HOME/$OII_INSTALL_ZIP to $OII_INSTALL_HOME
    unzip $OII_INSTALL_HOME/$OII_INSTALL_ZIP -d $OII_INSTALL_HOME
  else
    echo $OII_INSTALL_HOME/$OII_INSTALL_ZIP does not exist
  fi
fi
#
echo Check zips
cd $OII_INSTALL_HOME
md5sum -c patches.MD5
cd $CD
#
# Check patches folder
if [ ! -d "$PATCHES_HOME" ]; then
  mkdir $PATCHES_HOME
else
  echo $PATCHES_HOME available
fi
#
# Unzip OII patch
if [ ! -d "$PATCHES_HOME/$OPATCH_OII_NR" ]; then
  if [ -f "$OII_INSTALL_HOME/$OPATCH_OII_ZIP" ]; then
    echo Unzip $OII_INSTALL_HOME/$OPATCH_OII_ZIP to $PATCHES_HOME
    unzip $OII_INSTALL_HOME/$OPATCH_OII_ZIP -d $PATCHES_HOME
    echo Apply OII Patch
    cd $PATCHES_HOME/$OPATCH_OII_NR
    $ORACLE_HOME/OPatch/opatch apply
  else
    echo $OII_INSTALL_HOME/$OPATCH_OII_ZIP does not exist!
  fi
else
  echo OII Patch $PATCHES_HOME/$OPATCH_OII_NR already available
fi
cd $CD
# Unzip SOA&BPM patch
if [ ! -d "$PATCHES_HOME/$OPATCH_SOABPM_NR" ]; then
  if [ -f "$OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP" ]; then
    echo Unzip $OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP to $PATCHES_HOME
    unzip $OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP -d $PATCHES_HOME
    echo Apply SOA BPM Patch
    cd $PATCHES_HOME/$OPATCH_SOABPM_NR
    $ORACLE_HOME/OPatch/opatch apply
  else
    echo $OII_INSTALL_HOME/$OPATCH_SOABPM_ZIP does not exist!
  fi
else
  echo SOA-BPM Patch $PATCHES_HOME/$OPATCH_SOABPM_NR already available
fi
cd $CD
# Unzip OSB patch
if [ ! -d "$PATCHES_HOME/$OPATCH_OSB_NR" ]; then
  if [ -f "$OII_INSTALL_HOME/$OPATCH_OSB_ZIP" ]; then
    echo Unzip $OII_INSTALL_HOME/$OPATCH_OSB_ZIP to $PATCHES_HOME
    unzip $OII_INSTALL_HOME/$OPATCH_OSB_ZIP -d $PATCHES_HOME
    echo Apply OSB Patch
    cd $PATCHES_HOME/$OPATCH_OSB_NR
    $ORACLE_HOME/OPatch/opatch apply
  else
    echo $OII_INSTALL_HOME/$OPATCH_OSB_ZIP does not exist!
  fi
else
  echo OSB Patch $PATCHES_HOME/$OPATCH_OSB_NR already available
fi
cd $CD
echo Finished installing Oracle Fusion MiddleWare Integration Insight


The script first unzips the downloaded ofm_integration_insight_12.2.1.0.0_disk1_1of1.zip into:
  • p22189824_122100_Generic.zip
  • p22655174_122100_Generic.zip
  • p22659236_122100_Generic.zip
  • patches.MD5
  • README.txt
Then it performs 'md5sum -c patches.MD5' to check the zips, but it ignores the results and just prints them.
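If you want the script to be strict about this, a small variation (a sketch) would abort when a checksum does not match:

# Abort the installation when one of the patch zip checksums does not match.
cd $OII_INSTALL_HOME
if ! md5sum -c patches.MD5; then
  echo "Checksum verification of the patch zips failed, aborting"
  exit 1
fi
cd $CD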
Then for each patch it checks if the patch is already unzipped in the FMW_HOME/OPatch/patches folder. If so, it just assumes that it's applied as well. If not, the patch zip is unzipped into the patches folder and then it performs opatch apply.
Opatch will ask if you want to proceed (answer with 'y') and if the system is ready to be patched (again answer with 'y'). For the SB Patch (the last in the list) it will look like:
Apply OSB Patch
Oracle Interim Patch Installer version 13.3.0.0.0
Copyright (c) 2016, Oracle Corporation. All rights reserved.


Oracle Home : /u01/app/oracle/FMW12210
Central Inventory : /u01/app/oraInventory
from : /u01/app/oracle/FMW12210/oraInst.loc
OPatch version : 13.3.0.0.0
OUI version : 13.3.0.0.0
Log file location : /u01/app/oracle/FMW12210/cfgtoollogs/opatch/22659236_Apr_01_2016_09_37_29/apply2016-04-01_09-37-22AM_1.log


OPatch detects the Middleware Home as "/u01/app/oracle/FMW12210"

Verifying environment and performing prerequisite checks...
OPatch continues with these patches: 22659236

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/oracle/FMW12210')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '22659236' to OH '/u01/app/oracle/FMW12210'

Patching component oracle.osb.server, 12.2.1.0.0...

Patching component oracle.osb.server, 12.2.1.0.0...
Patch 22659236 successfully applied.
Log file location: /u01/app/oracle/FMW12210/cfgtoollogs/opatch/22659236_Apr_01_2016_09_37_29/apply2016-04-01_09-37-22AM_1.log

OPatch succeeded.
Finished installing Oracle Fusion MiddleWare Integration Insight

If you have a home with only OSB or only SOA/BPM, adapt the script yourself so that it does not patch the product that is not installed.

Oh, I did not check on the install; for now I assume it worked. The next step for me is to describe the SOA/BPM install and to check out the Integration Insight product.
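A quick way to check it anyway is to ask OPatch for its inventory. A minimal sketch, using the three patch numbers from the script above:

#!/bin/bash
. $PWD/fmw12c_env.sh
export ORACLE_HOME=$FMW_HOME
# Verify that the patches ended up in the OPatch inventory,
# instead of assuming success from the presence of the unzipped patch folders.
for PATCH_NR in 22189824 22655174 22659236; do
  if $ORACLE_HOME/OPatch/opatch lsinventory | grep -q $PATCH_NR; then
    echo "Patch $PATCH_NR is applied"
  else
    echo "Patch $PATCH_NR is NOT applied"
  fi
done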

Auto DDL: delete obsolete columns from table

Wed, 2016-03-30 06:55
A quick one. In the past I used to generate DDL based on queries like the following, but I find myself re-inventing them again. So, to have it saved for my offspring: here's one for deleting obsolete columns as generated when importing an Excel sheet in SQL Developer:


declare
  l_schema_name varchar2(30) := 'MY_SCHEMA';
  l_table_name  varchar2(30) := 'A_TABLE';
  cursor c_cols is
    select column_name
    from   all_tab_columns col
    where  col.table_name = l_table_name
    and    col.owner = l_schema_name
    and    col.column_name like 'COLUMN%';
begin
  for r_cols in c_cols loop
    execute immediate 'alter table '||l_schema_name||'.'||l_table_name||' drop column '||r_cols.column_name;
  end loop;
end;
/

And here's one to generate a check constraint on all indicator columns of a table:

declare
  l_schema_name varchar2(30) := 'MY_SCHEMA';
  l_table_name  varchar2(30) := 'A_TABLE';
  l_constraint_name_pfx varchar2(30) := 'XXX_ALIAS_CHK';
  l_idx pls_integer := 1;
  cursor c_cols is
    select column_name
    from   all_tab_columns col
    where  col.table_name = l_table_name
    and    col.owner = l_schema_name
    and    col.column_name like 'IND_%';
begin
  for r_col in c_cols loop
    execute immediate 'ALTER TABLE '||l_schema_name||'.'||l_table_name||' ADD CONSTRAINT '||l_constraint_name_pfx||l_idx||' CHECK ('||r_col.column_name||' in (''J'',''N'')) ENABLE';
    l_idx := l_idx + 1;
  end loop;
end;
/

Real-Time Integration Business Insight Available

Wed, 2016-03-23 12:22
Today Real-Time Integration Business Insight is available. I wrote about it in my summary of the OPN FMW Community forum. I hope I can get into it in the near future.

Enable Process Analytics in BPM12c

Wed, 2016-03-23 12:07
To be able to use BAM12c together with BPM12c, you'll need to enable process analytics. Only when that is enabled will BAM12c write the sample data to the process cubes/star schema.

To do so you'll need to go to the enterprise manager (eg. http://darlin-vce-db:7001/em). Then open up the System MBean Browser. This can be started from the soa-infra:

And then from the SOA Infrastructure -> Administration -> System MBean Browser:


However, you can also start it a little quicker from the Weblogic Domain menu:
In the MBean Browser look for 'Application Defined MBeans':
Then look for 'oracle.as.soainfra.config' -> 'your server' -> AnalyticsConfig -> analytics:

Then in the pane make sure that both 'DisableAnalytics' and 'DisableProcessMetrics' are set to false:


 And click 'Apply'.

Above you'll see the layout of 12.2.1, but in 12.1.3 it works the same. Restart the SOA Server after that.

I'm not the first one to write about these changes, but I found that you can only update these fields if you have started the BAM server at least once. Apparently the BAM server registers itself, so only after that can you update and apply these attributes.
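If you want to verify the settings from the command line instead of clicking through the System MBean Browser, a WLST one-off can read the same MBean. This is only a sketch: the connection details and the ObjectName below are my assumptions based on the path shown above (oracle.as.soainfra.config -> your server -> AnalyticsConfig -> analytics), so check the exact Location value for your own SOA server.

#!/bin/bash
# Read the analytics flags via WLST (credentials, URL and MBean ObjectName are assumptions).
$FMW_HOME/oracle_common/common/bin/wlst.sh <<'EOF'
connect('weblogic','welcome1','t3://darlin-vce-db:7001')
custom()
cd('oracle.as.soainfra.config')
# Assumed ObjectName; adjust Location to the name of your SOA server
cd('oracle.as.soainfra.config:Location=soa_server1,name=analytics,type=AnalyticsConfig,Application=soa-infra')
print 'DisableAnalytics      : ' + str(get('DisableAnalytics'))
print 'DisableProcessMetrics : ' + str(get('DisableProcessMetrics'))
disconnect()
EOF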




BAM 12c: Extend Data Objects

Wed, 2016-03-23 11:42
BAM 12c is a huge improvement over 11g. The best thing, I think, is that it is quite a lot easier to create a dashboard. There are several tutorials on BAM, for instance at the BAM12c site, so I'm not going to explain how to create a dashboard here.

One thing, however, on business queries: the examples mostly start with a BPM process and then query from the Process or Activity Data Object as created on deployment of the particular process. However, often you'll find that you want to filter on a certain date range, for instance processes started less than a day or a week ago, or activities running less than an hour, between one and two hours, between two and three hours, or longer. But then you'll find that you can't filter on a date function in the Business Queries. For instance, you can't filter on something like '{process start date} < now() - 7'.

To solve that you can add extra Calculated Fields that return yes/no or 1/0 when a certain date calculation condition is met. To do so, go to the Administration tab of the BAM Composer (e.g. http://darlin-vce-db:7006/bam/composer):

Then you can expand the Data Objects and you'll find that the deployed process resulted in two Data Objects, one for the Activities and one for the Process instances:

By the way, to get those you need to have process analytics enabled. I'll explain that in another blog.

Click for instance on the CustomerSurvey Activity, then on the tab 'Calculated Fields' and then on 'Add Calculated Field':

You need to provide a name that has no spaces, only lowercase or uppercase letters and underscores. Then you can provide a name that is shown in the designer and in flat queries. The column type can be measure, dimension or attribute, but in this case you'll want attribute, to be able to filter on it. In this case I returned 'J' or 'N' for 'Ja' (Yes) or 'Nee' (No). This is sufficient for filtering. But if you want to count/summarize instances that are running less than one hour, between one and two hours, etc., then you might want to return 1 or 0.

Click on OK and then save:

By clicking on the pencil-icon you can edit the field.

I'll provide some other examples that I found helpful for the activity data object. For each field I list the field name, the display name, the column type, the expression and a description:



  • activity_started_lt_week_ago - "Activity started less than week ago" (Attribute)
    Expression: IF(DATEDIFF(SQL_TSI_DAY,{Activity Start Time},now())<=7)THEN("J")ELSE("N")
    Description: Is the activity started at most 7 days ago? (J/N)
  • activity_started_lt_day_ago - "Activity started less than day ago" (Attribute)
    Expression: IF(DATEDIFF(SQL_TSI_HOUR,{Activity Start Time},now())<=24)THEN("J")ELSE("N")
    Description: Is the activity started at most 24 hours ago? (J/N)
  • Activiteit_Looptijd_min - "Activiteit Loop tijd (min)" (Attribute)
    Expression: IF({Activity Instance Status}=="ACTIVE")THEN(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now()))ELSE(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},{Activity End Time}))
    Description: Actual running time of the activity instance. If the instance is active, then the result is the difference between the start time and the current time (NOW()); otherwise it is the difference between the start time and the end time. This calculated running time is apparently different from the predefined "Activity Running Time" field, because of the sampling moments: sometimes the predefined running time is near zero while the instance is still active.
  • Activiteit_Looptijd_lt_1hr - "Activiteit Looptijd < 1 uur" (Attribute)
    Expression: IF({Activity Instance Status}=="ACTIVE")&&(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())<60)THEN(1)ELSE(0)
    Description: Is the activity running less than an hour?
  • Activiteit_Looptijd_lt_2hr - "Activiteit Looptijd < 2 uur" (Attribute)
    Expression: IF({Activity Instance Status}=="ACTIVE")&&(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())>=60&&DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())<120)THEN(1)ELSE(0)
    Description: Is the activity running more than one but less than two hours?
  • Activiteit_Looptijd_lt_3hr - "Activiteit Looptijd < 3 uur" (Attribute)
    Expression: IF({Activity Instance Status}=="ACTIVE")&&(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())>=120&&DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())<180)THEN(1)ELSE(0)
    Description: Is the activity running more than two but less than three hours?
  • Activiteit_Looptijd_gt_max - "Activiteit Looptijd > max" (Attribute)
    Expression: IF({Activity Instance Status}=="ACTIVE")&&(DATEDIFF(SQL_TSI_MINUTE,{Activity Start Time},now())>180)THEN(1)ELSE(0)
    Description: Is the activity running 3 hours or longer?
  • Activiteit_is_open - "Activiteit is open?" (Attribute)
    Expression: IF({Activity Instance Status}=="ACTIVE")THEN("J")ELSE("N")
    Description: Is the activity still open?
For the process Data Objects these are a good starting point:
  • Process_Running_Time_Min_attr - "Process Running Time (Min) Attr" (Attribute)
    Expression: {Process Running Time (millisecs)}/600000
    Description: Number of minutes a process is executed. There is another comparable field already defined, but that is of type 'Measure'; you can't use that for analytic functions such as AVG, MIN, MAX, etc.
  • process_started_lt_week_ago - "Process started less than week ago" (Attribute)
    Expression: IF(DATEDIFF(SQL_TSI_DAY,{Process Start Time},now())<=7)THEN("J")ELSE("N")
    Description: Is the process instance started at most 7 days ago? (J/N)
  • process_started_lt_day_ago - "Process started less than day ago" (Attribute)
    Expression: IF(DATEDIFF(SQL_TSI_HOUR,{Process Start Time},now())<=24)THEN("J")ELSE("N")
    Description: Is the process instance started at most 24 hours ago? (J/N)
  • Process_Looptijd_in_min - "Process Looptijd (min)" (Attribute)
    Expression: IF({Process Instance Status}=="ACTIVE")THEN(DATEDIFF(SQL_TSI_MINUTE,{Process Start Time},now()))ELSE(DATEDIFF(SQL_TSI_MINUTE,{Process Start Time},{Process End Time}))
    Description: Actual running time of the process instance. If the instance is active, then the result is the difference between the start time and the current time (NOW()); otherwise it is the difference between the start time and the end time. This calculated running time is apparently different from the predefined "Process Running Time" field, because of the sampling moments: sometimes the predefined running time is near zero while the instance is still active.

So these help you to filter and aggregate on activity and process running times. Sorry for the Dutch names, but I figure you can get the meaning.

The expressions are based on info I got from the user guide. You can find the 12.2.1 user guide over here. The 12.1.3 can be found here. Look for chapter 13.8 (in the 12.2.1 user guide) or 14.8 (in the 12.1.3 user guide).
