Darwin IT

Darwin-IT professionals do ICT-projects based on a broad range of Oracle products and technologies. We write about our experiences and share our thoughts and tips.

Oracle Linux 7 Update 5 is out: time to create a new Vagrant Base Box

Tue, 2018-04-24 09:31
It's been busy, so unfortunately it's already been almost two weeks since I wrote my introductory story on Vagrant. Today I happen to have an afternoon off, and I noticed that Oracle Linux 7 Update 5 is out. I based my first boxes on 7.4, so this is a nice moment to create a new Base Box.

The essentials on creating a Vagrant base box can be read here. But I'm going to guide you through the process step by step, so I hope you will be able to repeat it yourself.

First off, Vagrant recommends Packer to automate the creation of base boxes. But I'm a bit confused, because this guide apparently states that it is deprecated as of March 2018. I haven't tried Packer yet, and I feel that over the years I created a base VM only a few times. I used to create a base VM that I import/clone to create new VMs over and over again. And often, I start off with a VM that already contains a pre-installed database, for instance.

Vagrant has a built in command to create a base box out of an existing VM. That is what I use.
Base box requirements
What is a Base Box actually? Well, it's in fact a sort of template that is used by Vagrant to create and configure a new VM, and provision that. It should contain the following:
  • An OS: I use Oracle Linux 7 Update 5 for this story. I also have a base box with Ubuntu; Ubuntu has some peculiarities I want to discuss later on in this series. For this base box I'll install a server-with-gui, but otherwise keep it as basic as possible.
  • A vagrant user. The vagrant user is used for provisioning the box. We'll place the insecure public key in it, which will be replaced by Vagrant at first startup. We'll add vagrant to the sudoers list, so the user can sudo without a password.
  • A started ssh daemon:  Vagrant connects via ssh using the vagrant-user to do the provisioning.
  • A NAT (Network Address Translation) Adapter as the first one: needed to do kernel/package updates without further network configuration.
  • VirtualBox GuestAdditions installed: Vagrant makes use of shared folders to map the project folder to get to the scripts. Also it's convenient to add an extra stage folder mapping. 
  • Password of root: not a requirement, but apparently it's a bit of a standard to set the root password to vagrant, for ease of sharing. But at least note down the passwords.
That's about it. Maybe I forgot something, but since it's digital, I can edit it later... So let's get started.

Download Oracle Linux
All the serious enterprise stuff of Oracle can be downloaded at eDelivery. Search for Oracle Linux:
Then add the 7.5 version to the Cart by clicking it:

Follow the wizard instructions and you'll get to:
I downloaded V975367-01.iso, Oracle Linux Release 7 Update 5 for x86 (64 bit), 4.1 GB.

Create the VM
The ISO is downloading, so let's create a VM in VirtualBox. I assume VirtualBox with the VirtualBox Extension Pack is installed. And, for later on, Vagrant of course.

From the Oracle VM VirtualBox Manager, create a new VM; I called it OL75, for Oracle Linux 64 bit:
I followed the wizard and gave it 10240 MB memory and a 128GB dynamically allocated virtual disk:

In the VM Settings, I set the number of processors to 4 and for now I kept everything to the default.

In the meantime my download is ready, so in the VM Settings, under Storage I added the disk by clicking the disk icon next to the IDE controller:

Then navigate to your downloaded iso:
and select it. Now the VM is ready to kick-off:


It will start up automatically after a minute, but let's not wait that long.

I don't need much, but in the Software Selection I do want Server with GUI:
But without selecting other packages. What I might need later on, I'll install at provisioning.

I do not like default local domain network names. So I changed the network hostname to darlin-vce.darwin-it.local:
Hostname darlin stands for Darwin Linux and vce for Virtual Course Environment.

Then hit Begin Installation:


Soon in the installation the installer asks for the Root password:
And the password is as said: vagrant.
Then I also add a vagrant user with the same password:
Having done that, we need to wait for the installer to finish. At the end of the Install, do a reboot:

This leads to two questions to be answered. One is about accepting the license; I assume that can be answered without guidance. The other is about connecting the network.

You need to switch on the network adapter, but to have that done automatically you need to configure it and check the box Automatically connect to this network when it is available on the General tab. You'll need to have this done, otherwise Vagrant will have difficulties connecting to the box.
Then finish the configuration:
Install guest additions
To be able to install the guest additions, we need to add some kernel packages. We could have done that by selecting additional packages during installation, but I wanted to keep the installation as basic as possible. And the following is more fun...

So open a terminal and switch to the super user:

[vagrant@darlin-vce ~]$ su -
Password:
Last login: Tue Apr 24 09:41:21 EDT 2018 on pts/0
...


Then stop PackageKit, because it will probably hold a lock, pausing yum:
[root@darlin-vce ~]# systemctl stop packagekit

And then install the packages kernel-uek-devel and kernel-uek-devel-4.1.12-112.16.4.el7uek.x86_64, which are suggested by the GuestAdditions installer, by the way:
[root@darlin-vce ~]# yum -q -y install kernel-uek-devel kernel-uek-devel-4.1.12-112.16.4.el7uek.x86_64
No Presto metadata available for ol7_UEKR4
warning: /var/cache/yum/x86_64/7Server/ol7_latest/packages/cpp-4.8.5-28.0.1.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Public key for cpp-4.8.5-28.0.1.el7.x86_64.rpm is not installed
Public key for kernel-uek-devel-4.1.12-124.14.1.el7uek.x86_64.rpm is not installed
Importing GPG key 0xEC551F03:
Userid : "Oracle OSS group (Open Source Software group) "
Fingerprint: 4214 4123 fecf c55b 9086 313d 72f9 7b74 ec55 1f03
Package : 7:oraclelinux-release-7.5-1.0.3.el7.x86_64 (@anaconda/7.5)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

Having done that, insert the GuestAdditions CD:
It brings the following pop-up, click Run:

And provide the Administrator password:

In my case the script ran, and during that the display got messed up. But after a reset of the VM (I waited until I got the impression it was done), the VM came up with a hi-res display, indicating that the install went OK. Also the bi-directional clipboard worked.

Configure vagrant user
Again, in a terminal, switch to the super user and add the following line to the /etc/sudoers file:
vagrant ALL=(ALL) NOPASSWD: ALL

Exit and as vagrant user create a .ssh folder in the vagrant home folder, cd to it and create the file authorized_keys:
[vagrant@darlin-vce ~]$ mkdir .ssh
[vagrant@darlin-vce ~]$ cd .ssh
[vagrant@darlin-vce .ssh]$ vi authorized_keys

Insert the following content:
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key

This is the insecure key of vagrant that can be downloaded here.
It will be replaced by Vagrant at first startup.
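
One caveat that is easy to miss: sshd is strict about permissions, and will silently ignore the key if the .ssh folder or the authorized_keys file is too permissive. So, to be safe:

[vagrant@darlin-vce .ssh]$ chmod 700 ~/.ssh
[vagrant@darlin-vce .ssh]$ chmod 600 ~/.ssh/authorized_keys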

Package the box
So, now we have a base install that can function as a base box for Vagrant. Thus we can now shut it down, export it to an OVA (just as a backup for VirtualBox) and then create our base box out of it.

After exporting the OVA, which I skip describing here, you just open a command window. I assume you have Vagrant installed.

To package the box, you use the package subcommand of vagrant:

Microsoft Windows [Version 10.0.16299.371]
(c) 2017 Microsoft Corporation. All rights reserved.

d:\Projects\vagrant>vagrant package --base OL75 --output d:\Projects\vagrant\boxes\OL75v1.0.box
==> OL75: Exporting VM...
==> OL75: Compressing package to: d:/Projects/vagrant/boxes/OL75v1.0.box

d:\Projects\vagrant>
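
The resulting .box file can then be registered with Vagrant under a name of your choice (OL75 here is just my choice), so that projects can refer to it:

d:\Projects\vagrant>vagrant box add OL75 d:\Projects\vagrant\boxes\OL75v1.0.box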

Conclusion
Well, that concludes this part of the series. We have our own base box and it's barely 3GB. Next: create a VM with it. Stay tuned.




Garbage First in JDeveloper

Thu, 2018-04-19 01:07
At my current customer we work with VDIs: Virtual Desktop Images, that several times a day get very, very slow. Even so slow that they more or less stall for a minute or two.

JDeveloper is not known as a Ferrari among IDEs. One of the causes is that the default heap settings are very poor: 128M-800M. Especially when you use it with the SOA or BPM Quickstart, the heap will need to grow several times at startup. And very soon after you start working in it, you'll get out-of-memory errors.

Because of the VDI's I did several changes to try to improve performance.
The main thing is to set Xms and Xmx both to 2048M. I haven't found myself needing more up to this day.

But I found using the Garbage First collector gives me a slightly better performance.

To set it, together with the heap, add/change the following options in the ide.conf in ${JDEV_HOME}\jdeveloper\ide\bin\:
# Set the default memory options for the Java VM which apply to both 32 and 64-bit VM's.
# These values can be overridden in the user .conf file, see the comment at the top of this file.
#AddVMOption -Xms128M
#AddVMOption -Xmx800M
AddVMOption -Xms2048M
AddVMOption -Xmx2048M
AddVMOption -XX:+UseG1GC
AddVMOption -XX:MaxGCPauseMillis=200

Find more on the command line options in this G1GC tutorial.

You can also use ParNew in combination with the ParOld or ConcMarkSweep collector, as suggested in this blog. But from Java 9 onwards G1GC is the default, and I expect that it fits the behavior of JDeveloper better, as it does for SOA Suite and OSB installations.
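
For reference, the ParNew/CMS alternative from that blog would come down to options like these (a sketch; these flags apply up to Java 8, as CMS is deprecated from Java 9 onwards):
AddVMOption -XX:+UseParNewGC
AddVMOption -XX:+UseConcMarkSweepGC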

The vagrant way of provisioning - an introduction

Wed, 2018-04-11 09:16
Around 2003, I guess, I was introduced to VMware by a colleague. I was hooked right away.
Since then I made numerous VMs for as many purposes. I played around with several VMware products, but since Oracle acquired Sun, I have stuck with VirtualBox.

A few years ago the tool Vagrant was mentioned to me. But I did not get the advantage of it, since all I needed to do I could do using VirtualBox.

However, over the years I found that maintaining VMs is a tedious job. And often I create and use a VM, but shut it down for months. And when I need it again, I don't know its state anymore. Although you can use snapshots, it's nice to be able to start with a fresh install again.

In between, Oracle may have come up with another (minor) version of Fusion Middleware. Oracle Linux may have a new update. There's a new patch set. Then you want to do a re-install of the software. And I find it nice that I can drop a VM and recreate it from scratch again.

For those purposes Vagrant comes in handy. It allows you to define a box, based on a template (a base box), and configure it: set CPU and memory settings, add disks, and then, after first boot, provision it.
So if I want to adapt a VM with a slightly different setting, or I need an extra disk, I destroy my box, adapt my Vagrant project file, and boot my box up again.

So, let's see what Vagrant is, actually. Then in a follow-up, I'll explain how I setup my Vagrant Project.


Vagrant is an open-source software product for building and maintaining portable virtual software development environments. So it helps you in creating and building Virtual Machines, especially in situations where you need to do that regularly and distribute those.

It simplifies software configuration management of virtualizations, to increase development productivity. Vagrant automates both the creation of VM’s and the provisioning of created VM's.  It does this by abstracting the configuration of the virtualization component and the installation/setup of the software within the VM, via a project file.

The architecture distinguishes two building blocks: Providers and Provisioners.
Providers are services to set up and create VMs, for instance:
  • VirtualBox
  • Docker
  • Vmware
  • AWS
Provisioners are tools to customize the configuration of VM, for example, configure the guest OS and install software within the VM. Possible provisioners are: 
  • Shell
  • Ansible
  • Puppet
  • Chef
I haven't made myself familiar with Ansible or Puppet yet (still on my list), so I work with the default provisioner: Shell.

A Vagrant project is in fact a folder with the Vagrantfile in it. The Vagrantfile contains all the configuration of the resulting Vagrant box, the actual created VM.
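
To give an idea, a minimal Vagrantfile for the VirtualBox provider with a shell provisioner could look like this (a sketch; the box name, resource settings and script path are merely illustrative):

Vagrant.configure("2") do |config|
  # The base box this project builds upon
  config.vm.box = "OL75"
  # Provider-specific settings: CPUs and memory
  config.vm.provider "virtualbox" do |vb|
    vb.cpus = 2
    vb.memory = 4096
  end
  # Provision the VM at first boot with a shell script
  config.vm.provision "shell", path: "scripts/install.sh"
end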

A Vagrant project is always based on a base box. Often, a downloadable box from a Vagrant repository is used. In fact, if you don't specify a URL, but only a name, it will try to find it in the Vagrant repository. A popular one is hashicorp/precise64, used in many examples. However, I prefer to use my own local box. For two main reasons:
  • I then know what's in it.
  • It's local, so I don't have to download it.
To be able to be used by Vagrant, a box has the following requirements:
  • It contains an actual VM, with OS installed in it.
  • A vagrant user is defined, with sudo rights, and an insecure key (downloadable from Vagrant's GitHub, but it will be replaced by a generated secure key at first startup), though you can also specify a password (as I do).
  • NAT network adapter as a first NIC.
  • An SSH daemon running.
There is a tool called Packer that is able to create a box with an OS installed. I haven't tried it. Instead, I created a very simple VM in VirtualBox, installed Oracle Linux 7 Update 4 with the server-with-gui option as a next-next-finish install, and defined the vagrant user in it as mentioned. Then with the vagrant package command I got the particular box. It took me a few iterations to get it as I wanted it. But once you get it right, you should not need to touch it. Unless another Linux update comes along.

Now I have only one simple base box, and I only need to define different Vagrant projects and a stage folder with the latest greatest of the software downloaded. And a simple vagrant up command will create my VM anew, and install all the software in it.
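
That workflow boils down to just a few commands in the project folder (the comments are mine):

vagrant up        # create, boot and, on first up, provision the VM
vagrant halt      # shut the VM down, keeping its state
vagrant destroy   # remove the VM altogether
vagrant up        # ...and create it anew from the base box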

Last year, on the NLOUG's Tech Experience '17, together with my colleague Rob, I spoke about how to script a complete Oracle Fusion MiddleWare environment. It was a result of a series of projects we did up to the event, where we tried to automate the environment creation as much as possible. See my series of blogs on the matter. In the upcoming period, I plan to write about how to leverage these scripts with Vagrant to set up a complete VM, with the latest greatest FMW in it.

So stay tuned.




PaaSForum and the talk with the two ladies...

Tue, 2018-04-10 07:13
It's already been a week or three since I went to the excellent PaaSForum '18 in Budapest.
Much is already said and written about it. About the talks and breakout sessions.
Seeing and hearing about the state of the art of the Oracle PaaS products: every year I have a good time with Oracle friends from around Europe and beyond.

It was nice to play around with API Management, Dynamic Processes, Oracle JET and ChatBots. And, ..., to do a few runs to the Danube and back again. I hadn't run for about half a year because of my relocation and the remodelling of our new home.

But besides the great talks with Product Management and other Oracle friends, the thing that maybe excited me most was the talk with two ladies: Mary Beth and Liza... A review session about the User Experience of the Oracle PaaS products.

My major concern about the PaaS products, or maybe the Oracle Cloud products in general, is that they're very 'siloed'. Looking at ICS, PCS, VBCS (now bundled in OIC), API Management, Chatbots, CX, etc., they all have a very different history of birth.
Created by different teams. And all have a different User Experience, although Oracle did work on creating a uniform UI definition.
If I want to start with a project with different services, then I need to create and provision my different services. Those all have different URLs, etc. And if I want to move my artifacts to PreProduction or Production, I need to create new services, with their own authentication and authorisation schemas.
And I need to do the release/install of those artifacts to the different environments, myself.


In the feedback session, Liza presented me with two personas (Mary Beth was taking notes): a Development Manager and a Developer.
The Development Manager will be able to log in to the PaaS environment landing page and create a new project. He will be able to select the components himself or base the project on a template. This will then select the particular components, or project features, so to speak. You could compare it with creating an application in JDeveloper, where you get to choose between a Java application, Service Bus, SOA Suite or BPM Suite, and one or more appropriate projects. Creating such a PaaS project will provision the necessary cloud services, as indicated by the chosen components. The Development Manager can also invite project members, who get an invite with a URL via email, or Slack, etc.

The Developer can follow the link in the invite and log on to the unified PaaS environment, or One PaaS, and from a palette select a component he/she wants to create and work upon. I suggested that the palette should be restricted to the components selected on creation by the Development Manager. Cloud services cost subscription fees, pressing on the budget. So, when a developer finds he needs to be able to create a certain component, the Development Manager should approve the addition of that component type, to solve the particular problem. Maybe the tiles of component types for which the project does not have a cloud service could be grayed out.

Another suggested addition is the environments management. They foresee a kind of devops administration page, where you can see the dev, test, pre-prod and production environments, and the artifact-versions mapped over that. So that you can see what version of which artifacts are on which environment/services. I suggested that it would be nice, in my opinion, to define releases or configurations of artifacts. Some artifacts are related to each other, for instance a certain version of a VBCS screen is dependent on a version of a REST service in OIC and/or API Mgt. So you want to combine those in a release, to make sure that they're released/installed to the next environment together.

Of course I can't show you any screens or inside information. Not least because I only saw mock-up screens myself.

I got a nice present, a small Bluetooth speaker with a surprisingly great sound. But (and be assured: I don't have Oracle shares anymore) the biggest takeaway for me, the thing that made me enthusiastic, was the knowledge and the assurance that Oracle is really putting much effort into this. It is important, and I believe this is going to make the big difference in the PaaS offering. Although the different offerings on their own are promising, a unified UI and development and management experience is going to make it actually usable. As a developer I do want to create UIs or Processes or Integrations, but I do not want to bother about the URLs to use for which environment. And I want to be helped by promoting my artifacts in a uniform way to the next environment. I should not have to export artifacts in different ways and import them one by one into a target cloud service.

The other day I also had an introductory meeting with one of the directors for UI/UX design. And that stressed the importance placed on the unified PaaS UX initiatives.

As you'll understand: I'm very curious and looking forward to seeing new developments. If they reach me, I'll keep you posted (as far as I'm allowed, of course).


SQLDeveloper: User Defined Extensions and ForeignKey query revised

Thu, 2018-03-22 02:42
It was so fun: yesterday I wrote a small article on creating a query on foreign keys referring to a certain table. A post with content that I made up dozens of times in my Oracle career. And right away I got two good comments. One was on the blog itself.

And of course Anonymous is absolutely right. So I added 'U' as a constraint type option.

The other comment was from my much appreciated colleague Erik. He brought this to another level, by pointing out to me how to add this as a User Defined Extension in SQL Developer.

I must say I was already quite pleased with the Snippets in SQLDeveloper. So I already added the query as a snippet:
But the tip of Erik is much cooler.
He referred to a tip by Sue Harper that explains this (What, it's been in there since 2007?!).

Now what to do? First create an XML file, for instance referred_by_fks.xml, with the following content:
<items>
  <item type="editor" node="TableNode" vertical="true">
    <title><![CDATA[FK References]]></title>
    <query>
      <sql>
        <![CDATA[select fk.owner,
       fk.table_name,
       fk.constraint_name,
       fk.status
from all_constraints fk
join all_constraints rpk on rpk.constraint_name = fk.r_constraint_name
where fk.constraint_type='R'
and rpk.constraint_type in ('P','U')
and rpk.table_name = :OBJECT_NAME
and rpk.owner = :OBJECT_OWNER
order by fk.table_name, fk.constraint_name;]]>
      </sql>
    </query>
  </item>
</items>

Note that I updated my query a bit.


Then to add the extension to SQL Developer:
  • Open the preferences via: Tools > Preferences
  • Navigate to Database > User Defined Extensions
  • Click "Add Row" button
  • In Type choose "EDITOR", Location is where you saved the xml file above
  • Click "Ok" then restart SQL Developer

Now, if you click on a table in the navigator, you will have an extra tab in your table editor:

Cool stuff! And it's been there for ages!

Which tables have foreign keys refering to a particular table?

Wed, 2018-03-21 02:44
OK, this time a quick, not so exciting post. Actually, I find myself recreating a query, again, that I created many times in my career. So, why not post it?

Last year, I published my Darwin Object Type Accelerator (Dotacc). It allows you to generate objects from a datamodel. What it also does is create collection types for tables that refer to the table you want to generate an object for. For some you want that, but for others you don't. Simply because you don't need them to be queried along. Therefore, I added functionality to disable those.

But then comes the question: which are the tables, with their foreign key constraints, that refer to this particular table?

The answer is in the ALL_CONSTRAINTS view (with the variants of DBA_% and USER_%).
There are several types of constraints:
  • C: Check constraints
  • R: Referential -> the particular foreign keys
  • P: Primary Key
  • U: Unique Key
I'm interested in the foreign keys, thus those where constraint_type='R'. But those refer not to a table but to another constraint. So, I need to get the primary key, constraint_type='P', of the table that I want to query, and join those together.

That gets me:
select fk.* 
from all_constraints fk
join all_constraints rpk on rpk.constraint_name = fk.r_constraint_name
where fk.constraint_type='R'
and rpk.constraint_type='P'
and rpk.table_name = 'DWN_MY_TABLE';

Set the minimum password length on your default authenticator in Weblogic

Thu, 2018-03-08 05:55
At the end of last year I wrote how to create a demo community of users in your WebLogic using WLST.
Using those scripts I wanted to do the same at my current customer: creating test users in the DefaultAuthenticator. However, I found that the minimum password length was 8, and one of the users failed creation, because its password was the same as the username, and only 5 characters long.

So I needed to change the password validator, and preferably using WLST (of course). Now, the password validator of the authenticator can also be found through the console. However, the WebLogic realm also has a system password validator. Both have a default minimum length of 8.

Let me show you some snippets (that you can add to the create-users script, or use for your own purposes) on how to change the minimum password length.

First a method to get the default realm:
#
#
def getRealm(name=None):
    cd("/")
    if name == None:
        realm = cmo.getSecurityConfiguration().getDefaultRealm()
    else:
        realm = cmo.getSecurityConfiguration().lookupRealm(name)
    return realm

With that you can get the authenticator:
#
#
def getAuthenticator(realm, name=None):
    if name == None:
        authenticator = realm.lookupAuthenticationProvider("DefaultAuthenticator")
    else:
        authenticator = realm.lookupAuthenticationProvider(name)
    return authenticator

With a realm and an authenticator, we can change the password length:
#
#
def setMinPasswordLengthOnDftAuth(minPasswordLength):
    try:
        edit()
        startEdit()
        # Get Realm and Authenticator
        realm = getRealm()
        authenticator = getAuthenticator(realm)
        authenticator.setMinimumPasswordLength(int(minPasswordLength))
        passwordValidator=realm.lookupPasswordValidator('SystemPasswordValidator')
        passwordValidator.setMinPasswordLength(int(minPasswordLength))
        save()
        activate(block='true')
        print('Succesfully set minimum password length to '+minPasswordLength+' on '+authenticator.getRealm().getName()+'.')
        print('For '+authenticator.getName()+': '+str(authenticator.getMinimumPasswordLength()))
        print('For SystemPasswordValidator of '+getRealm().getName()+': '+str(passwordValidator.getMinPasswordLength()))
    except WLSTException:
        stopEdit('y')
        message="Failed to update minimum password length!"
        print (message)
        raise Exception(message)

The minimum password length of the authenticator can be set directly. From the realm this function looks up the SystemPasswordValidator, and on that it sets the minimum password length.

This function goes to edit mode, saves and activates the changes. But if you want to add users, you need to get wlst into domainConfig() mode.

Other password validator property setters are:
  • setMinPasswordLength()
  • setMaxPasswordLength()
  • setMaxConsecutiveCharacters()
  • setMaxInstancesOfAnyCharacter()
  • setMinAlphabeticCharacters()
  • setMinNumericCharacters()
  • setMinLowercaseCharacters()
  • setMinUppercaseCharacters()
  • setMinNonAlphanumericCharacters()
  • setMinNumericOrSpecialCharacters()
  • setRejectEqualOrContainUsername(true)
  • setRejectEqualOrContainReverseUsername(true) 
See the docs for more.
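
Putting it together: a minimal example of how I'd call this from a WLST session (the connection details are illustrative; note that the length is passed as a string, since the function concatenates it into its log message):

connect('weblogic', 'welcome1', 't3://localhost:7001')
setMinPasswordLengthOnDftAuth('5')
# switch to the configuration tree afterwards, e.g. before creating users
domainConfig()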

Weblogic 12c + SAML2: publish your metadata over an URL

Fri, 2018-02-09 13:09
This week I got to do a SAML2 implementation again, for APEX against ADFS. Actually the same setup as last year. One pitfall I fell into with open eyes was the Redirect URI on the 'Web SSO Partner Provider'. I entered /ords/f*, but it had to be without the wildcard: /ords/f. But that aside.

One step in the setup of a SAML2 configuration is that you have to publish the metadata, by clicking a button. Some SAML2-capable middleware solutions can publish the metadata over a URL. ADFS does support a URL to get the metadata from the Service Provider, being WebLogic 12c servicing your application. This saves you from having to hand over the XML file every time you change/update your configuration, for instance because of expired certificates. How nice would it be if WebLogic supported this?

Well, actually, you can! Sort of... WebLogic does support serving a document folder, like the htdocs folder of Apache. To do so, you need to create a war file with only a weblogic.xml file that couples a context root to a certain folder. And apparently GlassFish can do so too!

When you install ORDS on WebLogic, following the steps, you generate an i.war that is actually the example for this post. You could extract that file and adapt it for this purpose. But I wanted to be able to generate it, so I could reuse this for several other purposes if I would need to.

So I started with a new Saml2MetaData project folder and created a src folder, with a WEB-INF folder beneath it.
Then I copied the three deployment descriptors:
  • sun-web.xml
  • web.xml
  • weblogic.xml
 The sun-web.xml (not being the travel company):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE sun-web-app PUBLIC "-//Sun Microsystems, Inc.//DTD GlassFish Application Server 3.0 Servlet 3.0//EN" "http://www.sun.com/software/appserver/dtds/sun-web-app_3_0-0.dtd">
<sun-web-app>
  <!-- This element specifies the context path the static resources are served from -->
  <context-root>${samlMetaData.contextRoot}</context-root>
  <!-- This element specifies the location on disk where the static resources are located -->
  <property name="alternatedocroot_1" value="from=/* dir=${samlMetaData.home}"/>
</sun-web-app>

As you can see, I placed the ${samlMetaData.contextRoot} property in the context-root tag, and the property named alternatedocroot_1 got the directory reference containing ${samlMetaData.home}.

The web.xml is there for completeness, but does not contain a directory reference:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app PUBLIC
  "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
  "http://java.sun.com/j2ee/dtds/web-app_2_3.dtd">
<web-app>
  <!-- This Web-App leverages the alternate doc-root functionality in WebLogic and GlassFish to serve static content
       For WebLogic refer to the weblogic.xml file in this folder
       For GlassFish refer to the sun-web.xml file in this folder
  -->
</web-app>

And then the weblogic.xml, including the same properties referencing the context root and folder:
<weblogic-web-app xmlns="http://www.bea.com/ns/weblogic/weblogic-web-app">
  <!-- This element specifies the context path the static resources are served from -->
  <context-root>${samlMetaData.contextRoot}</context-root>
  <virtual-directory-mapping>
    <!-- This element specifies the location on disk where the static resources are located -->
    <local-path>${samlMetaData.home}</local-path>
    <url-pattern>/*</url-pattern>
  </virtual-directory-mapping>
</weblogic-web-app>

Then I need an ANT build file that copies these files, replacing the properties. I would have done it with WLST if I had found a way to wrap the lot into a war file as quickly. But ANT does the job well. First I need a build.properties file that denotes the property values:
build.dir=${basedir}/build
dist.dir=${basedir}/dist
src.dir=${basedir}/src
samlMetaData.home=c:\\certs\\saml2
samlMetaData.contextRoot=/samlMetaData


And then the ANT build.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<project name="SamlMetaData" basedir="." default="build">
  <property file="build.properties" />
  <!-- Clean & Init -->
  <target name="clean">
    <echo>Delete build and dist folder</echo>
    <delete dir="${build.dir}" />
    <delete dir="${dist.dir}" />
  </target>
  <target name="init" depends="clean">
    <echo>Create build and dist folder</echo>
    <mkdir dir="${build.dir}" />
    <mkdir dir="${dist.dir}" />
  </target>
  <!-- war the project -->
  <target name="war">
    <property name="war.dir" value="${dist.dir}/${ant.project.name}" />
    <property name="war.file" value="${war.dir}/${ant.project.name}.war" />
    <echo>Create war file ${war.file} from ${build.dir}</echo>
    <mkdir dir="${war.dir}" />
    <jar destfile="${war.file}" basedir="${build.dir}">
      <manifest />
    </jar>
  </target>
  <!-- Build the war file -->
  <target name="build" depends="init">
    <echo>Copy ${src.dir} to ${build.dir}, expanding properties</echo>
    <copy todir="${build.dir}">
      <fileset dir="${src.dir}" />
      <filterchain>
        <expandproperties />
      </filterchain>
    </copy>
    <ant target="war" />
  </target>
</project>

Run this with ANT and it will create a build and a dist folder, with the war file in the latter.
This can be deployed to WebLogic, resulting in a context root as configured in the build.properties. Everything placed in the folder configured as samlMetaData.home can be fetched through WebLogic.
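
To deploy the resulting war from the command line, something along these lines should work (weblogic.Deployer is the standard WebLogic deployment tool; the URL, credentials and paths are illustrative):

java -cp %WL_HOME%\server\lib\weblogic.jar weblogic.Deployer -adminurl t3://localhost:7001 -username weblogic -password welcome1 -deploy -name SamlMetaData dist\SamlMetaData\SamlMetaData.war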

So just publish your metadata to that folder and the IdentityProvider can get it auto-magically.

How to install the Notepad++ 64-bit plugin manager

Fri, 2018-02-09 06:08
I've been a Notepad++ fan for years. And as soon as a 64-bit version arose, I adopted it.
But since a few months I have a new laptop, and I apparently didn't get the plugin manager with the fresh install. Only now did I take the opportunity to sort it out and write about it.

I found that the plugin manager has been available since April 2017 on GitHub, version 1.49. I downloaded the zip from the mentioned location; I chose the _x64 version:
Then unzipped it into my Notepad++ folder:

Then I started Notepad++, and the plugin manager appeared:

So now I can format my XML files again...

SoapUI: validate a date field in response with current date

Tue, 2018-01-23 09:30
Once in a while you need to validate a service that has dates in the response. Although SoapUI has XPath and XQuery match assertions, validating dates against strings is quite difficult. How do you do a date comparison against, for instance, the current date?

You can do it with a script assertion:
And the content of this can be:
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
// Set Namespaces
def holder = groovyUtils.getXmlHolder(messageExchange.responseContent)
//holder.namespaces["soapenv"] = "http://schemas.xmlsoap.org/soap/envelope/"
def dateFoundStr = holder.getNodeValue("/Results/ResultSet/Row[1]/DATE_FOUND")
def dateFound = new Date().parse("yyyy-MM-dd hh:mm:ss", dateFoundStr)
dateFoundStr = dateFound.format("yyyy-MM-dd")
//Current Date
def date = new Date()
def currentDate=date.format("yyyy-MM-dd")
//
assert dateFoundStr == currentDate

First we need to fetch and parse the response content into the holder variable, by parsing messageExchange.responseContent using groovyUtils.getXmlHolder.

Second, the particular date is fetched; here dateFound is a field from a JDBC response. A JDBC response does not have namespaces, but for a SOAP response it helps to declare namespaces. For an example, see the commented line for holder.namespaces["soapenv"].

Third, I parse the found date, which is a string as fetched from the XML, to a date-time, then format it to a string to get only the date part. This could be done simply using substring methods, but I wanted to try this. And I get and format the currentDate as a string.

In the end just do assert with a comparison of both values.

There you go.

Modify your nodemanager.properties in wlst

Mon, 2018-01-22 09:12
In 2016 I did several posts on automatic installs of Fusion Middleware, including domain creation using WLST.

With WebLogic 12c you automatically get a pre-configured per-domain node manager. But you might find the configuration not completely suiting your wishes.

It would be nice to update the nodemanager.properties file with your properties in the same script.

Today I started with upgrading our WebLogic Tuning and Troubleshooting training to 12c, and one of the steps is to adapt the domain creation script. In the old script, the AdminServer is started right away, to add the managed server to the domain. In my before-mentioned script, I do that offline. And since I also want to be able to update the nodemanager.properties file in that script, I figured out how.

Earlier, I created a function to just write a new property file:
#
# Create a NodeManager properties file.
def createNodeManagerPropertiesFile(javaHome, nodeMgrHome, nodeMgrType, nodeMgrListenAddress, nodeMgrListenPort):
    print ('Create Nodemanager Properties File for home: '+nodeMgrHome)
    print (lineSeperator)
    nmProps=nodeMgrHome+'/nodemanager.properties'
    fileNew=open(nmProps, 'w')
    fileNew.write('#Node manager properties\n')
    fileNew.write('#%s\n' % str(datetime.now()))
    fileNew.write('DomainsFile=%s/%s\n' % (nodeMgrHome,'nodemanager.domains'))
    fileNew.write('LogLimit=0\n')
    fileNew.write('PropertiesVersion=12.2.1\n')
    fileNew.write('AuthenticationEnabled=true\n')
    fileNew.write('NodeManagerHome=%s\n' % nodeMgrHome)
    fileNew.write('JavaHome=%s\n' % javaHome)
    fileNew.write('LogLevel=INFO\n')
    fileNew.write('DomainsFileEnabled=true\n')
    fileNew.write('ListenAddress=%s\n' % nodeMgrListenAddress)
    fileNew.write('NativeVersionEnabled=true\n')
    fileNew.write('ListenPort=%s\n' % nodeMgrListenPort)
    fileNew.write('LogToStderr=true\n')
    fileNew.write('weblogic.StartScriptName=startWebLogic.sh\n')
    if nodeMgrType == 'ssl':
        fileNew.write('SecureListener=true\n')
    else:
        fileNew.write('SecureListener=false\n')
    fileNew.write('LogCount=1\n')
    fileNew.write('QuitEnabled=true\n')
    fileNew.write('LogAppend=true\n')
    fileNew.write('weblogic.StopScriptEnabled=true\n')
    fileNew.write('StateCheckInterval=500\n')
    fileNew.write('CrashRecoveryEnabled=false\n')
    fileNew.write('weblogic.StartScriptEnabled=true\n')
    fileNew.write('LogFile=%s/%s\n' % (nodeMgrHome,'nodemanager.log'))
    fileNew.write('LogFormatter=weblogic.nodemanager.server.LogFormatter\n')
    fileNew.write('ListenBacklog=50\n')
    fileNew.flush()
    fileNew.close()

But this one just rewrites the file, so I would need to determine the values for properties like DomainsFile, JavaHome, etc., which are already set correctly in the original file. I only want to update the ListenAddress and ListenPort, and possibly the SecureListener property based on the node manager type. Besides that, I want to back up the original file as well.

So, I adapted this function to:
#
# Update the Nodemanager Properties
def updateNMProps(nmPropertyFile, nodeMgrListenAddress, nodeMgrListenPort, nodeMgrType):
    nmProps = ''
    print ('Read Nodemanager properties file%s: ' % nmPropertyFile)
    f = open(nmPropertyFile)
    for line in f.readlines():
        if line.strip().startswith('ListenPort'):
            line = 'ListenPort=%s\n' % nodeMgrListenPort
        elif line.strip().startswith('ListenAddress'):
            line = 'ListenAddress=%s\n' % nodeMgrListenAddress
        elif line.strip().startswith('SecureListener'):
            if nodeMgrType == 'ssl':
                line = 'SecureListener=true\n'
            else:
                line = 'SecureListener=false\n'
        # making sure these properties are set to true:
        elif line.strip().startswith('QuitEnabled'):
            line = 'QuitEnabled=%s\n' % 'true'
        elif line.strip().startswith('CrashRecoveryEnabled'):
            line = 'CrashRecoveryEnabled=%s\n' % 'true'
        elif line.strip().startswith('weblogic.StartScriptEnabled'):
            line = 'weblogic.StartScriptEnabled=%s\n' % 'true'
        elif line.strip().startswith('weblogic.StopScriptEnabled'):
            line = 'weblogic.StopScriptEnabled=%s\n' % 'true'
        nmProps = nmProps + line
    # Backup file
    print nmProps
    nmPropertyFileOrg=nmPropertyFile+'.org'
    print ('Rename File %s to %s ' % (nmPropertyFile, nmPropertyFileOrg))
    os.rename(nmPropertyFile, nmPropertyFileOrg)
    # Save New File
    print ('\nNow save the changed property file to %s' % nmPropertyFile)
    fileNew=open(nmPropertyFile, 'w')
    fileNew.write(nmProps)
    fileNew.flush()
    fileNew.close()
It first reads the property file, denoted by nmPropertyFile, line by line.
If a line starts with a particular property that I want to set specifically, then the line is replaced. Each line is then added to the nmProps variable. For completeness and validation I print the resulting variable.
Then I rename the original file to nmPropertyFile+'.org' using os.rename(). And lastly, I write the contents of nmProps to the original file in one go.
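
For example, for a per-domain node manager a call could look like this (a sketch; the home path, address and port are illustrative):

nodeMgrHome = '/u01/app/oracle/config/domains/darlin_domain/nodemanager'
updateNMProps(nodeMgrHome + '/nodemanager.properties', 'darlin-vce', '5556', 'ssl')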


This brings me again one step further to a completely scripted domain.

Run SQLcl from ANT

Thu, 2017-12-21 04:29
I think it was last year that Oracle released SQLcl, which can be seen as the command-line variant of SQL Developer. But even better: a replacement for SQL*Plus.

A few years ago I created what I called an InfraPatch framework, to do preparations on an infrastructure as a prerequisite for the deployment of services and/or applications. It can run WLST scripts for creating datasources, JMS queues, etc. It also supported the running of database scripts, but it required a SQL*Plus installation, for instance using the instant client. Since it was part of a release/deploy toolset, where the created release is to be deployed by an IT admin on a test, acceptance or production environment, I had to rely on a correct Oracle/instant client installation in an agreed location.

I'm in the process of revamping that framework and renamed it to InfraPrep, since preparing an infrastructural environment makes it more clear what it does. (It does not patch a system with Oracle patches...)

Now I'm at the point that I have to implement the support for running database scripts. The framework uses ANT, which in fact is Java. And SQLcl has two big advantages that make it ideal to use in my InfraPrep framework:
  • It is incredibly small: it's only 19MB! And that includes the ojdbc and xmlparser jars. Since I use ANT from a FusionMiddleWare home, I could make it even smaller!
  • It is Java, so I can leverage the java ant task.
 So, how to call SQLcl from ANT? I need a few ingredients:
  • Download and unzip SQLcl into my Ant project and add a sqlcl.home property:
    sqlcl.home=${basedir}/sqlcl
  • The actual sqlcl jar file and add the sqlcl.jar property for that:
    sqlcl.jar=oracle.sqldeveloper.sqlcl.jar
  • The main class file = oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli
These ingredients can be found in the sql.bat in the bin folder of the SQLcl download.

Then of course in my environment property file I need the user name, password and database url.
Something like:
DWN.dbUrl=(description=(address=(host=darlin-vce-db.org.darwinit.local)(protocol=tcp)(port=1521))(connect_data=(service_name=orcl)))
DWN.dbUserName=dwn_owner
DWN.dbPassword=dwn_owner

I used a TNS-style database URL, since it is the same as the one used in the creation of the corresponding DataSource, so it can be reused to connect with SQLcl.
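
If you don't care about that reuse, I'd expect a plain EZConnect-style URL (host:port/service_name) to work just as well; an illustrative alternative:

DWN.dbUrl=darlin-vce-db.org.darwinit.local:1521/orcl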

Now, to make it easier to use and to abstract the plumbing into a sort of ANT task, I created a macrodef:


<!-- Run a database script using SQLcl -->
<macrodef name="runDbScript">
  <attribute name="dbuser"/>
  <attribute name="dbpassword"/>
  <attribute name="dburl"/>
  <attribute name="dbscript"/>
  <sequential>
    <logMessage message="DatabaseUrl: @{dburl}" level="info"/>
    <logMessage message="DatabaseUser: @{dbuser}" level="info"/>
    <logMessage message="DatabasePassword: ****" level="info"/>
    <property name="dbConnectStr" value='@{dbuser}/@{dbpassword}@"@{dburl}"'/>
    <property name="dbScript.absPath" location="@{dbscript}"/>
    <property name="dbScriptArg" value="@${dbScript.absPath}"/>
    <logMessage message="Run Database script: ${dbScriptArg}" level="info"/>
    <record name="${log.file}" action="start" append="true"/>
    <java classname="oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli" failonerror="true" fork="true">
      <arg value="${dbConnectStr}"/>
      <arg value="${dbScriptArg}"/>
      <classpath>
        <pathelement location="${sqlcl.home}/lib/${sqlcl.jar}"/>
      </classpath>
    </java>
    <record name="${log.file}" action="stop"/>
  </sequential>
</macrodef>

In this macro definition, I first build up a database connect string using the username, password and database URL:
      <property name="dbConnectStr" value='@{dbuser}/@{dbpassword}@"@{dburl}"'/>
Then I use a little trick to create an absolute path of the dbscript path:
      <property name="dbScript.absPath" location="@{dbscript}"/>
The trick is in the location attribute of the property.
And since that now is a property instead of an attribute, I circumvented the need for escaping the @ character:
      <property name="dbScriptArg" value="@${dbScript.absPath}"/>
The logmessage task you see is another macrodef I use:
      <macrodef name="logMessage">
<attribute name="message" default=""/>
<attribute name="level" default="debug"/>
<sequential>
<echo message="@{message}" level="@{level}"></echo>
<echo file="${log.file}" append="true"
message="@{message}${line.separator}" level="@{level}"></echo>
</sequential>
</macrodef>

It echoes the output both to the console and to a log file.
Since I want the output of the java task into the same log file, I enclosed the java task with record tasks to start and stop the appending of the output-stream to the log file.

The java task is pretty simple, referencing the jar file in the classpath and providing the connect string and the script run argument as two separate arguments.
There are however two important properties:
  • failonerror="true": I want to quit my ANT scripting when the database script fails.
  • fork="true": when providing the exit statement in the sql script, SQLcl tries to quit the JVM. This is not allowed, because it runs by default in the same JVMas ANT. Not providing the exit statement in the script will keep the thread in SQLcl, which is not acceptable. So, forking the JVM will allow SQLcl to quit properly.
Now, the macro can be called as follows:
    <propertycopy name="dbUser" from="${database}.dbUserName"/>
<propertycopy name="dbUrl" from="${database}.dbUrl"/>
<propertycopy name="dbPassword" from="${database}.dbPassword"/>
<runDbScript dbuser="${dbUser}" dbpassword="${dbPassword}" dburl="${dbUrl}" dbscript="${prep.folder}/${dbScript}"/>

Where these properties are used:
database=DWN
dbScript=sample.sql

And the sample.sql file:
select * from global_name;
exit;

And this works like a charm:
runPrep:
[echo] Script voor uitvoeren van database script.
[echo] Environment:
[echo] Prep folder: ../../infraPreps/BpmDbS0004
[echo] Load prep property file ../../infraPreps/BpmDbS0004/BpmDbS0004.properties
[echo] Run Script
[echo] DatabaseUrl: (description=(address=(host=darlin-vce-db.org.darwinit.local)(protocol=tcp)(port=1521))(connect_data=(service_name=orcl)))
[echo] DatabaseUser: dwn_owner
[echo] DatabasePassword: ****
[echo] Run Database script: @c:\temp\FMWReleaseAll\DWN\1.0.0\infraPreps\BpmDbS0004\sample.sql
[java]
[java] SQLcl: Release 17.3.0 Production on do dec 21 11:18:50 2017
[java]
[java] Copyright (c) 1982, 2017, Oracle. All rights reserved.
[java]
[java] Connected to:
[java] Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
[java] With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
[java] Data Mining and Real Application Testing options
[java]
[java]
[java] GLOBAL_NAME
[java] --------------------------------------------------------------------------------
[java] ORCL
[java]
[java]
[java] Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
[java] With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
[java] Data Mining and Real Application Testing options
[echo] Done running preperations.

BUILD SUCCESSFUL
Total time: 12 seconds

One thing to be arranged though is the fetch of the username/password from the commandline, instead of properties. This can be as follows:
    <input message="Enter database user for environment ${database}: " addproperty="db.user"/>
<input message="Enter password for user ${db.user}: " addproperty="db.password">
<handler classname="org.apache.tools.ant.input.SecureInputHandler"/>
</input>

Conclusion
SQLcl is great, since it is small and in Java. So it turns out to be incredibly easy to distribute it within your own framework.

OSB 12c Customization in WLST, some new insights: use the right jar for the job!

Wed, 2017-12-20 04:41
Problem setting and investigation
Years ago I created a Release & Deploy framework for Fusion Middleware, also supporting Oracle Service Bus. Recently I revamped it to use 12c. It uses WLST to import the OSB service into the Service Bus, including the execution of the customization file.

There are lots of examples to do this, but I want to zoom in on the execution of the customization file.

The WLST function I use to do this is as follows:
#=======================================================================================
# Function to execute the customization file.
#=======================================================================================
def executeCustomization(ALSBConfigurationMBean, createdRefList, customizationFile):
    if customizationFile!=None:
        print 'Loading customization File', customizationFile
        inputStream = FileInputStream(customizationFile)
        if inputStream != None:
            customizationList = Customization.fromXML(inputStream)
            if customizationList != None:
                filteredCustomizationList = ArrayList()
                setRef = HashSet(createdRefList)
                print 'Filter to remove None customizations'
                print "-----"
                # Apply a filter to all the customizations to narrow the target to the created resources
                print 'Number of customizations in list: ', customizationList.size()
                for customization in customizationList:
                    print "Add customization to list: "
                    if customization != None:
                        print 'Customization: ', customization, " - ", customization.getDescription()
                        newCustomization = customization.clone(setRef)
                        filteredCustomizationList.add(newCustomization)
                    else:
                        print "Customization is None!"
                    print "-----"
                print 'Number of resulting customizations in list: ', filteredCustomizationList.size()
                ALSBConfigurationMBean.customize(filteredCustomizationList)
            else:
                print 'CustomizationList is null!'
        else:
            print 'Input Stream for customization file is null!'
    else:
        print 'No customization File provided, skip customization.'

The parameter ALSBConfigurationMBean can be fetched with:
...
sessionName = createSessionName()
print 'Created session', sessionName
SessionMBean = getSessionManagementMBean(sessionName)
print 'SessionMBean started session'
ALSBConfigurationMBean = findService(String("ALSBConfiguration.").concat(sessionName), "com.bea.wli.sb.management.configuration.ALSBConfigurationMBean")
...

The other parameter is the createdRefList, which is built up from the default ImportPlan during import of the config jar:
...
print 'OSB project', project, 'will get updated'
osbJarInfo = ALSBConfigurationMBean.getImportJarInfo()
osbImportPlan = osbJarInfo.getDefaultImportPlan()
osbImportPlan.setPassphrase(passphrase)
operationMap=HashMap()
operationMap = osbImportPlan.getOperations()
print
print 'Default importPlan'
printOpMap(operationMap)
set = operationMap.entrySet()

osbImportPlan.setPreserveExistingEnvValues(true)

#boolean
abort = false
#list of created artifact references
createdRefList = ArrayList()
for entry in set:
    ref = entry.getKey()
    op = entry.getValue()
    #set different logic based on the resource type
    type = ref.getTypeId
    if type == Refs.SERVICE_ACCOUNT_TYPE or type == Refs.SERVICE_PROVIDER_TYPE:
        if op.getOperation() == ALSBImportOperation.Operation.Create:
            print 'Unable to import a service account or a service provider on a target system', ref
            abort = true
    else:
        #keep the list of created resources
        print 'ref: ',ref
        createdRefList.add(ref)
if abort == true :
    print 'This jar must be imported manually to resolve the service account and service provider dependencies'
    SessionMBean.discardSession(sessionName)
    raise
print
print 'Modified importPlan'
printOpMap(operationMap)
importResult = ALSBConfigurationMBean.importUploaded(osbImportPlan)
printDiagMap(importResult.getImportDiagnostics())
if importResult.getFailed().isEmpty() == false:
    print 'One or more resources could not be imported properly'
    raise
...

The idea is to build up a set of references to created artefacts, to narrow the customizations down so that they are only executed on the artefacts that were actually imported.

Now, back to the executeCustomization function. It first creates an InputStream on the customization file:
inputStream = FileInputStream(customizationFile)

on which it builds a list of customizations using the .fromXML method of the Customization object:
        customizationList = Customization.fromXML(inputStream)

These customizations are interpreted from the customization file. If you open it, you can find several customization elements:
 <cus:customization xsi:type="cus:EnvValueActionsCustomizationType">
<cus:description/>
...
<cus:customization xsi:type="cus:FindAndReplaceCustomizationType">
<cus:description/>
...
<cus:customization xsi:type="cus:ReferenceCustomizationType">
<cus:description/>


These are all mapped to subclasses of Customization. And now the reason that I write this blogpost: I ran into a problem with my import tooling. In the EnvValueActionsCustomizationType the endpoint replacements for the target environments are done. And those weren't executed. In fact these customizations were in the customizationList, but as None/Null objects. Thus, executing this complete list using ALSBConfigurationMBean.customize(filteredCustomizationList) would run into an exception, referring to a null object in the customization list. That's why they're filtered out. But why weren't these interpreted by the .fromXML() method?

Strangely enough, in the Java API docs of 12.2.1 the EnvValueActionsCustomization does not exist, but the EnvValueCustomization does. But searching My Oracle Support shows in Note 1679528.2: 'A new customization type EnvValueActionsCustomizationType is available in 12c which is used when creating a configuration plan file.' And here in the Java API doc (click on com.bea.wli.config.customization) it is stated that EnvValueCustomization is deprecated and EnvValueActionsCustomization should be used instead.
Apparently the docs are not completely updated...
And it seems that I used a wrong jar file: the customization file was created using the console, and executing the customization file using the console did execute the endpoint replacements. So I figured that I must be using a wrong version of the jar file.
So I searched my BPM QuickStart installation (12.2.1.2) for the class EnvValueCustomization:
Jar files containing EnvValueCustomization
  • C:\Oracle\JDeveloper\12210_BPMQS\osb\lib\modules\oracle.servicebus.configfwk.jar/com\bea\wli\config\customization\EnvValueCustomization.class
  • C:\Oracle\JDeveloper\12210_BPMQS\oep\spark\lib\spark-osa.jar/com\bea\wli\config\customization\EnvValueCustomization.class
  • C:\Oracle\JDeveloper\12210_BPMQS\oep\common\modules\com.bea.common.configfwk_1.3.0.0.jar/com\bea\wli\config\customization\EnvValueCustomization.class
And then I did a search with EnvValueActionsCustomization.
Jar files containing EnvValueActionsCustomization:
  • C:\Oracle\JDeveloper\12210_BPMQS\osb\lib\modules\oracle.servicebus.configfwk.jar/com\bea\wli\config\customization\EnvValueActionsCustomization.class
Solution
It turns out that in my ANT script I used:
<path id="library.osb">
<fileset dir="${fmw.home}/oep/common/modules">
<include name="com.bea.common.configfwk_1.3.0.0.jar"/>
</fileset>
<fileset dir="${weblogic.home}/server/lib">
<include name="weblogic.jar"/>
<include name="wls-api.jar"/>
</fileset>
<fileset dir="${osb.home}/lib">
<include name="alsb.jar"/>
</fileset>
</path>

Where I should use:
<path id="library.osb">
<fileset dir="${fmw.home}/osb/lib/modules">
<include name="oracle.servicebus.configfwk.jar"/>
</fileset>
<fileset dir="${weblogic.home}/server/lib">
<include name="weblogic.jar"/>
<include name="wls-api.jar"/>
</fileset>
<fileset dir="${osb.home}/lib">
<include name="alsb.jar"/>
</fileset>
</path>
Conclusion
It took me quite some time to debug this. But I learned how the customization works. I found quite some examples that use com.bea.common.configfwk_1.X.0.0.jar. And apparently during my revamping, I updated this classpath (actually I had 1.7, and found only 1.3 in my environment). But somehow Oracle found it sensible to replace it with oracle.servicebus.configfwk.jar, while keeping the old jar files.
So use the right jar for the job!

Create the SOA/BPM Demo User Community, with just WLST.

Mon, 2017-12-18 06:31
As said in my previous post (I've learned somewhere that you should not post twice on the same day, but spread it out over time), I'm delivering a BPM 12c training, which I based on the BPM QuickStart. Although nice for unit tests and development, the integrated WebLogic lacks a proper set of users to test your task definitions.

Oracle has a demo community and a set of ANT and Servlet based scripts to provision your SOA or BPM Suite environment with a set of American literature writers, to be used in demos and trainings. I somehow found this years ago, and debugged it to be used in 12.1.3 a few years ago. However, I did not know where I got it, and whether it was free to be distributed.

Apparently it is, and you can find it at My Oracle Support. Our friends at Avio Consulting also did a good job in making it available and working with 12c. However, I could not make it work smoothly end-to-end. I got it seeded, but figured that I would not need ANT and a Servlet.

Last year, in 2016, I created a bit of WLST scripting to create users for OSB and have them assigned to OSB application roles. You can read about that here for the user creation, and here for the app-role assignment.

One thing that's missing in those scripts is the setting of the user attributes. So I googled around and found a means to add those too.

First, I had to transform the demo community seeding XML file into a property file, like this:

#
cdickens.password=welcome1
cdickens.description=Demo User
cdickens.email=cdickens@emailExample.com
cdickens.title=CEO
cdickens.firstName=Charles
cdickens.lastName=Dickens
cdickens.timeZone=America/Los_Angeles
cdickens.languagePreference=en-US
cdickens.workPhone=100000001
cdickens.homePhone=200000001
cdickens.mobile=300000001
cdickens.im=jabber|cdickens@exampleIM.com

The complete usersAndGroups.properties file is available here.
In an earlier blog I wrote about how to read a property file. But my preferred method does not allow me to determine the property to be fetched dynamically. That's why I split off a basic createDemoUsers.properties file, which refers to the usersAndGroups.properties file and contains the properties referring to the Oracle/JDeveloper home and the connection details for the AdminServer. This property file also contains comma-separated lists of the users, groups and AppRoles to be created or granted.
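For reference: reading such a property file with java.util.Properties in Jython does allow composing the key at runtime. A minimal sketch (the file name is the one above):

from java.util import Properties
from java.io import FileInputStream

userProps = Properties()
propsFile = FileInputStream('usersAndGroups.properties')
userProps.load(propsFile)
propsFile.close()

# The property key can be composed dynamically, per user
userName = 'cdickens'
email = userProps.getProperty(userName + '.email')
print(email)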

The actual createDemoUsers.py file loops over the three lists, creates the particular users and groups, and grants the AppRoles, along the lines of the following sketch.
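Condensed, that loop could look something like this (a sketch based on the description above; scriptProps and userProps are assumed to be loaded Properties objects, and the real script is in the GitHub repo mentioned below):

# Look up the default authenticator in the security realm (WLST online, serverConfig tree)
serverConfig()
authenticator = cmo.getSecurityConfiguration().getDefaultRealm().lookupAuthenticationProvider('DefaultAuthenticator')

# 'users' is one of the comma-separated lists in createDemoUsers.properties
for userName in scriptProps.getProperty('users').split(','):
    password = userProps.getProperty(userName + '.password')
    description = userProps.getProperty(userName + '.description')
    authenticator.createUser(userName, password, description)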

To set the attributes, the setUserAttributeValue method of the authenticator MBean can be used as follows:

#Set Properties
firstName=userProps.getProperty(userName+".firstName")
lastName=userProps.getProperty(userName+".lastName")
displayName=nvl(firstName, " ")+" "+nvl(lastName, " ")
authenticator.setUserAttributeValue(userName,"displayName",displayName.strip())
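Note that nvl() is not a WLST built-in; a trivial definition would be:

def nvl(value, default):
    # Return default when value is None, analogous to SQL's NVL
    if value is None:
        return default
    return value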

I published the complete set of scripts on the GitHub repo I shared with my colleague.
You can download them all and adapt createDemoUsers.sh to refer to the correct MW_HOME of your JDeveloper environment. For Windows you might translate it to a .bat/.cmd file.

And of course you can use it for your own set of users.

I think I covered nearly all of the demo user community. Except for management chains: I could not find how to register a manager for a user in WebLogic, neither in the console nor in WLST. So for now I conclude it cannot be done. But if you have a tip, please be so good as to leave a comment. I would highly appreciate it.




BPM 12.2.1.3: Exception when deploying BPM project with Human tasks

Mon, 2017-12-18 04:25
This week I'm delivering a BPM 12c workshop that I based on the 12.2.1.3 BPM QuickStart. When the students worked on the Human Workflow lab, they hit an error deploying the composite, with something like this in the log:

Caused By: oracle.fabric.common.FabricException: Error occurred during deployment of component: RequestHolidayTask to service engine: implementation.workflow, for composite: HolidayRequestProcess: ORABPEL-30257

exception.code:30257
exception.type: ERROR
exception.severity: 2
exception.name: Error while Querying workflow task metadata.
exception.description: Error while Querying workflow task metadata.
exception.fix: Check the underlying exception and the database connection information. If the error persists, contact Oracle Support Services.
: exception.code:30257
exception.type: ERROR
exception.severity: 2
exception.name: Error while Querying workflow task metadata.
exception.description: Error while Querying workflow task metadata.
exception.fix: Check the underlying exception and the database connection information. If the error persists, contact Oracle Support Services.

Caused By: oracle.fabric.common.FabricDeploymentException: ORABPEL-30257


Caused By: java.sql.SQLSyntaxErrorException: Column 'WFTM.PACKAGENAME' is either not in any table in the FROM list or appears within a join specification and is outside the scope of the join specification or appears in a HAVING clause and is not in the GROUP BY list. If this is a CREATE or ALTER TABLE statement then 'WFTM.PACKAGENAME' is not a column in the target table.

Apparently a column is missing from the Workflow Metadata table in the 12.2.1.3 repository.

Luckily, I stumbled upon a question on the community.oracle.com forum that hit this 'bug' as well, and that provided a solution. You need to do an ALTER TABLE to resolve it:
ALTER TABLE SOAINFRA.WFTASKMETADATA ADD PACKAGENAME varchar (200);

The smart guy that provided the answer used a separate database UI tool. But fortunately, JDeveloper is perfectly capable of providing you the means as well.

First open the Resource Palette in JDeveloper. Make sure that you have already started your Integrated WebLogic (since that runs the Derby DB).

Then in the Resource Palette, create a new Database Connection:


Provide the following details:
Give it a name, like soainfraDB; as Connection Type select 'Java DB / Apache Derby'. You can leave Username and Password empty. As Driver Class, choose 'org.apache.derby.jdbc.ClientDriver' (not the default). Then as Host Name provide localhost, as JDBC Port enter 1527, and as Database Name enter soainfra.

You can Test Connection and then, if successful, hit OK.

Then from the IDE Connections palette, right-click your newly created database connection and choose 'Open in Databases Window':



And from there, right-click the database connection and choose 'Open SQL Worksheet':

There you can enter and execute the ALTER statement:
After this, deployment should succeed. And since the change is persisted in the Derby DB, it will survive restarts.
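If you prefer scripting over the JDeveloper UI, the same statement could also be executed over JDBC, for instance from a Jython/WLST session. A sketch, assuming derbyclient.jar is on the classpath and the integrated WebLogic (and thus the Derby listener on port 1527) is running:

from java.lang import Class
from java.sql import DriverManager

# Register the Derby network client driver
Class.forName('org.apache.derby.jdbc.ClientDriver')
conn = DriverManager.getConnection('jdbc:derby://localhost:1527/soainfra')
stmt = conn.createStatement()
# Note: no trailing semicolon when executing over JDBC
stmt.execute('ALTER TABLE SOAINFRA.WFTASKMETADATA ADD PACKAGENAME varchar (200)')
stmt.close()
conn.close()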

This might apply to the SOA QuickStart as well (I did not try).

OSB: Disable Chunked Streaming Mode recommendation

Fri, 2017-12-01 05:37
Intro

These weeks I got involved in a document generation performance issue. It had been running for several months, maybe even years, but it stayed quite unclear what the actual issue was.

Often we got complaints that document generation from the front-end application (based on Siebel) was taking very long. End users often hit the button several times, but with no luck. Asking further, it turned out that no document appeared in the content management system (Oracle UCM/WCC) either. So we concluded that it wasn't so much a performance issue, but an exception somewhere along the document generation process. Since we had upgraded BI Publisher to 12c, it was figured that it might have something to do with that. But we did not find any problems with BI Publisher itself. There was also an issue with Siebel itself, but that's out of the scope of this article as well.
The investigation

First, on OSB the retry interval of the particular Business Service was decreased from 60 seconds to 10, and performance improved: with a shorter retry interval, OSB does its retry on shorter notice. But of course this did not solve the problem.

As service developers we are often quite laconic about retries. We make up some settings; quite common is an interval of 30 seconds and a retry count of 3. But we should actually think this through and figure out what the possible failures are and what a sensible retry setting would be. For instance: is it likely that the remote system is down? What are the SLAs for getting it back up again? If the system's startup takes 10 minutes, then a retry count of 3 with an interval of 30 seconds makes no sense: the retries are done long before the system is up again. But of course, in our case settings sensible for a system outage would cause delays that are too long. We apparently needed to cater for network issues.

Last week our sysadmins encountered network failures, so they changed the load balancer in front of BI Publisher to get the chunks/packets of one request routed to the same BI Publisher node. I found SocketReadTimeouts in the log files. And from the Siebel database a query was done and plotted in Excel, showing lots of requests in the 1-15 seconds range, but also some clusters around 40 seconds and around 80 seconds. We wondered where those came from.

The Connection and Read Timeout settings on the Business Service were set to 30 seconds. So I figured the 40 and 80 second ranges could have something to do with a retry interval of 10 seconds added to a timeout of 30 seconds.

I soon found out that on the Business Service in OSB, Chunked Streaming Mode was enabled. This is a setting we have struggled with a lot; several issues we encountered were blamed on this one. Just as a helpdesk employee would ask you whether you have restarted your system, on OSB questions I would ask you about this setting first... Actually, I did for this case, long before I got actively involved.
Chunked Streaming Mode explained

Let's start with a diagram:

In this diagram you'll see that OSB is fronted by a load balancer. Since 12c, the Oracle HTTP Server is part of the WebLogic infrastructure, and following the Enterprise Deployment Guide we added an OHS to the WebLogic Infrastructure Domain as a co-located OHS instance. And since the OSB as well as the service provider (in our case BI Publisher) are clustered, the OHS will load balance the requests.

Now, chunked transfer encoding is an HTTP 1.1 feature. It is an improvement that allows clients to process data chunk by chunk, right after each chunk is read. But in most (of our) cases a chunk by itself is meaningless, since a SOAP request/XML document needs to be parsed as a whole.
The load balancer also processes the chunks as separate entities. So, by default, it will route the first one to the first endpoint, and the next one to the other. And thus each SP managed server gets an incomplete message, and therefore a so-called Bad Request. This happens with big requests, for instance when a report is requested together with its complete content: then chances are that the request is split up into chunks.
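To make this concrete: with chunked transfer encoding the sender omits the Content-Length header and splits the body into size-prefixed chunks. A little illustrative Python (not OSB code) of what travels over the wire:

def chunk_encode(body, chunk_size=16):
    # Each chunk: hexadecimal size, CRLF, data, CRLF; a zero-size chunk ends the message
    chunks = []
    for i in range(0, len(body), chunk_size):
        data = body[i:i + chunk_size]
        chunks.append('%x\r\n%s\r\n' % (len(data), data))
    chunks.append('0\r\n\r\n')
    return ''.join(chunks)

print(chunk_encode('<soapenv:Envelope>...</soapenv:Envelope>'))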

But although the sysadmins adapted the SP load balancer, and although I was involved in the BI Publisher 12c setup, even I forgot about the BIP 12c OHS! Even when the load balancer tries to keep the chunks together, the OHS will mess with them again. And actually, if the load balancer did not keep them together, the OHS instances could reroute them to the correct end node.
The Solution

So for all those Service Bus developers amongst you, I'd like you to memorize two concepts: 'Chunked Streaming Mode' and 'disable', the latter in combination with the former, of course.
In short: remember to set Chunked Streaming Mode to disabled in every SOAP/HTTP-based Business Service, especially for services that send potentially large requests, for instance document check-in services on content/document management systems.
The proof of the pudding

After some discussion, and not being able to test it on the acceptance test environment due to rebuilds, we decided to change this in production (which I would not recommend, at least not right away).

And this was the result:


This picture shows that in the first half of the day, plenty of requests were retried at least once, and several even twice. Notice the request durations around 40 seconds (30 seconds read timeout + 10 seconds retry interval) and around 80 seconds. But since 12:45, when we disabled Chunked Streaming Mode, we haven't seen any timeout exceptions any more. I hope the end users are happy now.

It shows how a simple setting can throw a spanner in the works, and how difficult it is to get such a simple change into production. Personally I think it's a pity that Chunked Streaming Mode is enabled by default, since in most cases it causes problems, while only in rare cases it might provide some performance improvement. I think you should have to rationalize enabling it, instead of actively needing to disable it.

BPM BAC Subversion Server refusing connections Revised

Fri, 2017-11-24 05:34
A little background

In April I wrote about our BPM server installation, which is actually a single-host cluster on dev and test, but installed as if it were a dual-node server, so we had a cloned domain.

A BPM installation has a component called the Process Asset Manager. Under the hood it uses a replicated Subversion server. Each node has one, so they have to synchronize. But since we're on the same host, we needed to differentiate in port numbers, even though we used virtual host names for each server node. I wrote about that in the April article 'BPM BAC Subversion Server refusing connections'.

It turned out that it did not work appropriately. After some investigation, it appeared that the bac_node1 Subversion server calls the Subversion server on bac_node2 with its own address, but with the port of the other:
...
<Jul 24, 2017, 2:37:29,543 PM CEST> <Debug> <oracle.bpm.bac.svnserver.replication> <darlin01> <bpm_server1> <Active Sync Thread [/54680efc-9328-478e-953c-834bbb250725/]> <<anonymous>> <> <20a54a15-d8d8-4e58-a4f6-11a90e992967-00000008> <1500899849543> <[severity-value: 128] [rid: 0:490:13:19] [partition-id: 0] [partition-name: DOMAIN] > <BEA-000000> <[oracle.bpm.log.Logger:debug] About to synchronize against node Member(Id=2, Timestamp=2017-07-12 08:01:13.309, Address=10.0.0.38:37055, MachineId=9002, Location=site:tst.darwin-it.local,machine:bpm_machine1,process:24566,member:bpm_server2, Role=bpm_cluster), url svn://syncuser@t-bpm-1-bpm-1-vhn.tst.darwin-it.local:8424/54680efc-9328-478e-953c-834bbb250725>
...

Of course nothing listens on that address/port combination, so unsurprisingly we get errors like:
...
[2017-05-30T12:55:51.704+02:00] [bpm_server1] [ERROR] [] [oracle.bpm.bac.svnserver.replication] [tid: Active Sync Thread [/b91abb78-6b3d-4448-af6a-e82125f261f0/]] [userId: ] [ecid: ed0bc6ce-fefb-4608-8dbe-46f7206a1573-0000000a,0:527] [APP: OracleBPMBACServerApp] [partition-name: DOMAIN] [tenant-name: GLOBAL] org.tmatesoft.svn.core.SVNException: svn: E210003: connection refused by the server[[
oracle.bpm.bac.subversion.server.repository.exceptions.RepositoryException: org.tmatesoft.svn.core.SVNException: svn: E210003: connection refused by the server
at oracle.bpm.bac.subversion.server.repository.exceptions.RepositoryException.wrap(RepositoryException.java:56)
at oracle.bpm.bac.subversion.server.repository.SVNKitRepositorySession.getRepositoryUUID(SVNKitRepositorySession.java:98)
at oracle.bpm.bac.subversion.server.repository.RepositorySVNSync.sync(RepositorySVNSync.java:74)
at oracle.bpm.bac.subversion.server.repository.RepositorySVNSync.sync(RepositorySVNSync.java:59)
at oracle.bpm.bac.subversion.server.repository.ha.aa.ActiveAARepository$Synchronizer.runImpl(ActiveAARepository.java:341)
at oracle.bpm.bac.subversion.server.repository.ha.aa.ActiveAARepository$Synchronizer.run(ActiveAARepository.java:304)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.tmatesoft.svn.core.SVNException: svn: E210003: connection refused by the server
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:85)
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:69)
at org.tmatesoft.svn.core.internal.io.svn.SVNPlainConnector.open(SVNPlainConnector.java:62)
at org.tmatesoft.svn.core.internal.io.svn.SVNConnection.open(SVNConnection.java:77)
at org.tmatesoft.svn.core.internal.io.svn.SVNRepositoryImpl.openConnection(SVNRepositoryImpl.java:1252)
at org.tmatesoft.svn.core.internal.io.svn.SVNRepositoryImpl.testConnection(SVNRepositoryImpl.java:95)
at org.tmatesoft.svn.core.io.SVNRepository.getRepositoryUUID(SVNRepository.java:280)
at oracle.bpm.bac.subversion.server.repository.SVNKitRepositorySession.getRepositoryUUID(SVNKitRepositorySession.java:95)
... 5 more
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.tmatesoft.svn.core.internal.util.SVNSocketFactory.connect(SVNSocketFactory.java:112)
at org.tmatesoft.svn.core.internal.util.SVNSocketFactory.createPlainSocket(SVNSocketFactory.java:68)
at org.tmatesoft.svn.core.internal.io.svn.SVNPlainConnector.open(SVNPlainConnector.java:53)
... 10 more

]]

(quoted here to make this page searchable).
Solution

But now, since this week, Oracle Support presents us.... (drum roll) Patch 26775572 for version 12.2.1.2.0: the official fix for this issue is finally ready and available for download.
I hope to be able to install it next Monday. But feel free to try it with me: 'Patch 26775572: BAC node using the wrong port when attempts synchronization with Virtual IP'.

By the way, for about two months we have had a diagnostic patch running that solves this. The named patch is the official delivery of our diagnostic variant.

And since 12.2.1.3.0 has been out in the field for quite some time already, I expect this fix has not landed in that version yet. If you're running into the same issue on 12.2.1.3.0, create an SR to request a port of this patch.


SOASuite 12c: keep running instances using ANT

Wed, 2017-11-15 08:41
At my current customer I implemented a poor man's DevOps solution for release and deploy. It is based on a framework of a bunch of Ant projects that I created years ago, in turn based on scripts from Edwin Biemond. See for instance here, here and here. I never wrote about my solution, because although I refactored the scripts quite intensively, the basics were already described thoroughly by him.

What I did was modularize the lot, split the environment property files, add logging, add OSB 12c support based on the config jar tool, etc.

One thing I ran into this week was that at the first deployment from our team to the test environment using my framework, the running instances of the BPM projects were aborted.

Now, if you take a look at the deploy.sarLocation target in Edwin's article about deploying SOA Suite composites, you'll find that he also supported the overwrite and forceDefault properties.

When re-deploying a composite from JDeveloper, you're probably familiar with the 'keep running instances' checkbox. I was looking for the ANT alternative in the ${oracle.home}/bin/ant-sca-deploy.xml script. First I looked in the 12c docs (see 47.9.4 How to Use ant to Deploy a SOA Composite Application), but it is not documented there.

But when I opened the particular ant-sca-deploy.xml script I read:
<condition property="overwrite" value="false">
  <not>
    <isset property="overwrite"/>
  </not>
</condition>
<condition property="forceDefault" value="true">
  <not>
    <isset property="forceDefault"/>
  </not>
</condition>
<condition property="keepInstancesOnRedeploy" value="false">
  <not>
    <isset property="keepInstancesOnRedeploy"/>
  </not>
</condition>
...
<target name="deploy">
  <input message="Please enter server URL:" addproperty="serverURL"/>
  <input message="Please enter sar location:" addproperty="sarLocation"/>
  <input message="Please enter username:" addproperty="user"/>
  <input message="Please enter password:" addproperty="password">
    <handler classname="org.apache.tools.ant.input.SecureInputHandler"/>
  </input>
  <deployComposite serverUrl="${serverURL}" sarLocation="${sarLocation}" realm="${realm}" user="${user}" password="${password}"
                   overwrite="${overwrite}" forceDefault="${forceDefault}" keepInstancesOnRedeploy="${keepInstancesOnRedeploy}"
                   regenerateRuleBase="${regenerateRuleBase}" configPlan="${configplan}" scope="${scope}"
                   sysPropFile="${sysPropFile}" failOnError="${failOnError}" timeout="${timeout}" folder="${folder}"/>
</target>

So the script kindly supports the keepInstancesOnRedeploy property. And thus I implemented a deploy.keepInstancesOnRedeploy property the same way as the deploy.forceDefault/deploy.overwrite properties, as sketched below.
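For example, a wrapper target in your own build file could pass these along like this (a sketch: the deploy.* properties follow my framework's naming; since the deploy target uses <input> tasks, properties that are already set are not prompted for):

<target name="deploy.composite">
  <ant antfile="${oracle.home}/bin/ant-sca-deploy.xml" target="deploy" inheritAll="false">
    <property name="serverURL" value="${deploy.serverUrl}"/>
    <property name="sarLocation" value="${deploy.sarLocation}"/>
    <property name="user" value="${deploy.user}"/>
    <property name="password" value="${deploy.password}"/>
    <property name="overwrite" value="${deploy.overwrite}"/>
    <property name="forceDefault" value="${deploy.forceDefault}"/>
    <property name="keepInstancesOnRedeploy" value="${deploy.keepInstancesOnRedeploy}"/>
  </ant>
</target>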

This is probably useful for Maven-based (re-)deployments as well.

Enable WebService test client on SOA/BPM production mode environments

Thu, 2017-11-02 06:13
At my current assignment I need to troubleshoot the identity service because of a BPM->OID coupling. I use support document 1327140.1 for it, which suggests testing http://<soa-server>:<port>/integration/services/IdentityService/identity

Doing so in a production-mode SOA or BPM environment, you'll soon find out that this uses the WebService test client via the URI /ws_utc, and that it does not work, resulting in HTTP 404 Not Found errors.

First I found a blog by Maarten of AMIS mentioning this as well. Unfortunately, he could not get around it either. But luckily I then found note 1915317.1, which tells me that the WebServices Test Client is not enabled by default.

You can enable it on your domain via the EM:

And then expand the Advanced node:


Check the 'Enable Web Service Test Page' checkbox.

Since it is a production-mode environment, you need to 'Lock & Edit'. The note suggests doing that in /console, then making the change in /em, and back in /console doing the activate. I found that peculiar, since you can do it in the Change Center in /em as well.

You need to restart the servers (apparently including the AdminServer) for this to take effect.

So now lunch, and then check whether my restart worked.

Implementing the KeyStore Service with Fusion MiddleWare 12c

Fri, 2017-09-08 09:31
Thinking about TLS (Transport Layer Security, the successor of Secure Socket Layer, SSL) in combination with WebLogic and Oracle HTTP Server always gave me cold feet. You have to create keystores with keys, wallets and certificate signing requests, and import signed and trusted certificate chains. Not to mention the configuration of WebLogic and OHS.

Now, creating keystores with the Java keytool turns out not to be that hard. And generating the certificate signing requests and importing the certificates are a walk in the park nowadays as well. The internet is full of examples, so I'm not going to repeat those here.

But lately, our Service Bus developers found that they needed to replace the configured demo identity and trust keystores with custom stores. This broke the connection between the AdminServer and the node managers, resulting in TLS/SSL handshake errors: by default, the node managers work with the demo identities when running TLS.

This drove us to work out an infrastructural configuration of TLS in our FMW environments, in a way that the SB developers can extend with their own certificates.

In this article I want to describe how to configure TLS in WebLogic using the KeyStore Service, and also how to reconfigure the node managers to have them run TLS using the custom stores.

Keystores and the KeyStore Service (KSS)

When implementing TLS in Fusion Middleware 12c you have the choice of using the new KeyStore Service for creating keys and certificates directly in the KSS, or the Java keytool.

Until now, WebLogic preferred a Java KeyStore (JKS) for storing certificates. This is a file that functions as a vault to store your keys and certificates safely. You can use the command-line tool keytool that is delivered with the JDK, but there are several graphical tools, like Portecle, that can make your encrypting life even simpler.

Since a keystore is an atomic file, you need to copy it, or put it on a share, to have a complete multi-node clustered domain use the same store.

As of Fusion Middleware 12c, Oracle introduced the KeyStore Service. The KSS is part of the Oracle Platform Security Services and stores the keys in so-called stripes in the MDS. For some components it is necessary to sync them to a keystores.xml file in the domain, but we'll get to that.
The KSS enables you to create, delete and manage keystores from Enterprise Manager/Fusion Middleware Control.

As said, you can start with creating your keys using the KSS. We, however, chose not to. We started with creating a JKS using keytool. The reasons for that were:
  • Using keytool, you can already create a store and a Certificate Signing Request (CSR) before configuring your domain.
  • We wanted to make sure we had the configuration of WebLogic and the node manager correct before transitioning to KSS.
  • With a JKS you have a backup of your identity in case you mess up your configuration. Keep in mind that you need to import the signed certificate into the keystore from which you generated the CSR; even a new JKS created the exact same way won't work.
  • Using orapki, you can migrate your certificates from a JKS to a wallet. We are still in the 'figure-out phase' of implementing OHS with KSS.
  • Oh, and we wanted to have scripts and easy-to-document commands for creating and importing the stores.
Identity and Trust Store

So we started with creating an Identity Keystore with the Java keytool, created a CSR and had it signed.
We had several virtual hostnames referring to the host name (darlin001.org.darwin-it.local), the load balancer (o-dwn-1.ont.org.darwin-it.local), the managed servers (o-dwn-1-dwn-1-vhn.ont.org.darwin-it.local), the Admin virtual host (o-dwn-1-admin-vhn.ont.org.darwin-it.local), etc. To have them all accepted with the same certificate, you need to create the Certificate Signing Request with an extension called 'Subject Alternative Names' (SAN). This can be done with the -ext parameter and the SAN keyword as follows:
... -ext SAN=dns:darlin001.org.darwin-it.local,dns:o-dwn-1-admin-vhn.ont.org.darwin-it.local,dns:o-dwn-1-dwn-1-vhn.ont.org.darwin-it.local,dns:o-dwn-1-dwn-2-vhn.ont.org.darwin-it.local,dns:o-dwn-1-ohs-1-vhn.ont.org.darwin-it.local,dns:o-dwn-1-internal.ont.org.darwin-it.local,dns:o-dwn-1-admin.ont.org.darwin-it.local,dns:o-dwn-1.ont.org.darwin-it.local
This is a comma-separated list, where each address needs to be prefixed with dns:. See also the keytool documentation.

Don't forget to add the Common Name (CN) to the SAN, since clients are supposed to ignore the CN when a SAN is present. Also, you need to add the -ext SAN parameter to the CSR as well: you can use it when creating the key, but that's not enough for the CSR.
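For illustration, generating such a key pair and CSR with keytool could look like this (a sketch: dname, validity and file names are made up, and the SAN list is abbreviated; use the full list from above in both commands):

keytool -genkeypair -alias o-dwn-1 -keyalg RSA -keysize 2048 -validity 1095 \
  -dname "CN=o-dwn-1.ont.org.darwin-it.local, O=Darwin-IT, C=NL" \
  -ext SAN=dns:o-dwn-1.ont.org.darwin-it.local,dns:darlin001.org.darwin-it.local \
  -keystore o-dwn-1.jks

keytool -certreq -alias o-dwn-1 \
  -ext SAN=dns:o-dwn-1.ont.org.darwin-it.local,dns:darlin001.org.darwin-it.local \
  -file o-dwn-1.csr -keystore o-dwn-1.jks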

For the trust store, I copied the cacerts file from the JDK. I changed the password and imported the CA and root certificates. But you need to consider whether you want to accept all the JDK's trusted certificates, or only those you really need to trust.

Scope of the Keystore

We chose to make the domain, not the host, the scope of the keystore. So as a Common Name I chose the load balancer address. The thing is that we want the managed servers to use the same keystore. And with whole server migration, a managed server could migrate to a new host that is not yet in the certificate. Although we added the known hosts, we could be forced to use a new host, and during a whole server migration you can't wait for a CSR to be signed.
As the name of the keystores and the identity alias I chose a reference to our domain name. As an environment abbreviation I chose 'dwn', which could also refer to the type of environment, like BPM, OSB, SOA, etc. The first letter ('o') denotes the development environment (Dutch: 'ontwikkeling'), and the digit denotes the environment number, since you could have multiple development or test environments.
Import KeyStores into KSS

Having created your keystores and made sure WebLogic works with them, you can import them into the KSS. The UI (EM) allows you to create a keystore and import certificates, but it does not allow you to import a complete JKS. You can do so using WLST, though, which also caters for my quest to script as much as possible.

The commands are executed in WLST online mode, connected to the AdminServer. So start WLST and connect to the AdminServer.

In the following commands, I use these environment variables:
  • $JAVA_HOME: referring to, yes, you're right....
  • $JKS_LOC=/u01/oracle/config/keystore
  • $JKS_NAME=o-dwn-1.jks
  • $JKS_TRUST_NAME=o-dwn-1-trust.jks
Connected to your AdminServer, you need to get a handle on the KeyStoreService object, which has the methods to do the imports, etc.:
wls:/o-dwn-1_domain/serverConfig/> svc = getOpssService(name='KeyStoreService')

With the handle you can import the Identity keystore:
wls:/o-dwn-1_domain/domainRuntime/> svc.importKeyStore(appStripe='system', name='o-dwn-1-id', password='<password>', aliases='o-dwn-1', keypasswords='<password>', type='JKS', permission=false, filepath='/u01/oracle/config/keystore/o-dwn-1.jks')
Already in Domain Runtime Tree

Keystore imported. Check the logs if any entry was skipped.

The parameters aliases and keypasswords contain a comma-separated list of key aliases and the corresponding key passphrases.
My advice would be to use the same password for your key as for your keystore. Many products (like Oracle B2B) allow you to provide only one password, which is then used for both.

Importing the trust store works similarly:
svc.importKeyStore(appStripe='system', name='o-dwn-1-trust', password='<password>', aliases='ca_dwn_org,our-root-ca', keypasswords='none,none', type='JKS', permission=false, filepath='/u01/oracle/config/keystore/o-dwn-1-trust.jks')
Already in Domain Runtime Tree

Keystore imported. Check the logs if any entry was skipped.

One thing about trust stores: they don't contain keys, so no key passphrases are applicable. But the keypasswords parameter is mandatory and needs to contain the same number of passphrases as there are aliases named to be imported into the KSS. Apparently it is sufficient to use 'none' as a passphrase.

A wallet can be imported the same way: as a type choose 'OracleWallet', and in filepath provide a reference to the wallet folder.

To check the content of the keystore, we can list it:
wls:/o-dwn-1_domain/domainRuntime/> svc.listKeyStoreAliases(appStripe="system",name="o-dwn-1-id",password='<password>',type="*")
Already in Domain Runtime Tree

o-dwn-1

And for the trust:
wls:/o-dwn-1_domain/domainRuntime/> svc.listKeyStoreAliases(appStripe="system",name="o-dwn-1-trust",password='<password>',type="*")
Already in Domain Runtime Tree

ca_dwn_org
our-root-ca

Now the keystores are in the KSS, and you can see them in EM. But some FMW components, like node managers (and, as it appeared to me, even WebLogic servers), don't work right away: they can't look into the KSS, but use a keystores.xml file in ${DOMAIN_HOME}/config/fmwconfig. This turns out to be an export of the keystores. To synchronize the KSS with that file you need to do:
wls:/o-dwn-1_domain/domainRuntime/> syncKeyStores(appStripe='system',keystoreFormat='KSS')
Already in Domain Runtime Tree

Keystore sync successful.
List the content of the KSS with FMW Control

To list the KSS within FMW Control, log on and navigate to WebLogic Domain -> Security -> Keystore:
This brings you to the content of the KSS with the particular stripes. For WebLogic we used the system stripe to import our custom keystores:

Select a store and click the Manage button in the bar. It will ask for the keystore password (the one you used when creating the JKS with keytool). Then it shows:


Here you can import and export certificates, create keypairs and generate a CSR for them.

Reconfigure WebLogic

Now we can proceed with configuring WebLogic and the node managers.

For the WebLogic servers, log on to the Admin console, navigate to the particular server, go to the Keystores tab and select 'Custom Identity and Custom Trust' in the drop-down list.

Then enter the following values:

Attribute                       Value
Custom Identity Keystore        kss://system/o-dwn-1-id
Custom Identity Keystore Type   KSS
Custom Trust Keystore           kss://system/o-dwn-1-trust
Custom Trust Keystore Type      KSS

For the passphrases, use the passphrases used earlier.
Then on the SSL tab you can provide the identity alias and the key passphrase.

Restart the server (or at least restart SSL, under the Control tab).

Reconfigure Node Managers for KSS

For the node manager, navigate to the nodemanager.properties file in ${DOMAIN_HOME}/nodemanager.
At the top of the file add the following properties:
KeyStores=CustomIdentityAndCustomTrust
CustomIdentityKeyStoreFileName=kss://system/o-dwn-1-id
CustomIdentityKeyStoreType=KSS
CustomIdentityKeyStorePassPhrase=
CustomIdentityAlias=o-dwn-1
CustomIdentityPrivateKeyPassPhrase=

Restart the node manager and check whether it successfully loads kss://system/o-dwn-1-id as a keystore and starts the listener in SSL mode.

Then in the AdminServer console, under Domain Structure -> Machines -> <Machine_Name>, check whether the node manager is 'Reachable'.

Conclusion

That's about it. Don't forget to sync the KSS in WLST when the node manager does not restart properly or the managed/admin server cannot start SSL properly.

I hope I'll soon be able to write down the directions to reconfigure SSL in OHS to use the KSS. But as said, we're still in the 'figure-out phase'.
