Amis Blog

Friends of Oracle and Java

Configuring Oracle Traffic Director 12c with WebGate

Sun, 2017-05-28 15:01

At a recent customer install, I was faced with configuring Oracle Traffic Director (OTD) 12.2.1.2.0 Webgate with Oracle Access Manager.

Deploying WebGate on OTD 12c is described very well in the documentation; see A Configuring OAM Agent (WebGate) for Oracle Traffic Director 12.2.1.2.

There is, however, a flaw in the documentation. I came across it when I reached the point where WebGate gets configured in the conf files of OTD.

When you configure WebGate for OTD 12c (on a collocated installation), the OTD conf files such as magnus.conf and virtual-server-*-obj.conf are updated.
If you follow the documentation completely, you will end up with conf files that either have no WebGate configuration in them or with the configuration dedicated to the first OTD instance on both servers. In the latter case, the second instance will no longer start.
I created a Service Request at Oracle Support to address the issue. They didn't have a solution for the problem and I ended up being bounced between the OTD and the WebGate support teams. Finally, one of the guys from the WebGate team really tried to help me, but couldn't resolve the issue. So I went ahead and solved the problem myself, as I will describe below.

When you reach point 5 of the documentation section A.2 Configuring Oracle Traffic Director 12c WebGate,

change the EditObjConf line as follows:

./EditObjConf -f Domain_Home/config/fmwconfig/components/
OTD/otd_configuration_name/config/virtual_server_name-obj.conf -w webgate_instanceDirectory [-oh Oracle_Home] -ws otd

For example

OTD Configuration Name: TST1
OTD Instance 1: otd_TST1_host1.domain.local
OTD Instance 2: otd_TST1_host2.domain.local
Domain home: /u01/app/oracle/config/domains/otd_domain_tst

./EditObjConf -f /u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/
OTD/TST1/config/virtual-server-tst1-obj.conf -w /u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/
OTD/instances/otd_TST1_host1.domain.local -oh $ORACLE_HOME -ws otd

Where TST1 is the name of the configuration and host1.domain.local is the name of the first server.
This will change the magnus.conf and virtual-server-tst1-obj.conf for Webgate.
In virtual-server-tst1-obj.conf there are no instance specific references.
However in the magnus.conf there are references to the first instance, since this is the one that we used with EditObjConf.

This is what the magnus.conf in the OTD configuration section (on the global level) looks like after the EditObjConf command.
Notice the hardcoded instance name in four places.

less /u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/TST1/config/magnus.conf
#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#

Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules"
funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401, OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local"
#ESSO#Init fn="load-modules"
funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local"
Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"

# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.
#ESSO#Init fn="EssoBasicAuthInit"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN

Leaving it like this will result in this hardcoded instance name being distributed to all instances. Hence only one instance would start.

Now, how to fix this:

Open magnus.conf with an editor

Replace the hardcoded instance name with the variable ${INSTANCE_NAME}.
(I picked up the existence of this variable in the server.xml, which also lives at the OTD configuration level and gets distributed to all instances.)
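Instead of editing by hand, the same substitution can be scripted; a minimal sketch using sed, with the configuration path and instance name taken from the example above (adjust both to your own environment):

# replace every occurrence of the hardcoded instance name with the ${INSTANCE_NAME} variable
# (sed writes a backup of magnus.conf with a .bak extension first)
CONF=/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/TST1/config/magnus.conf
sed -i.bak 's/otd_TST1_host1\.domain\.local/${INSTANCE_NAME}/g' "$CONF"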

In our example, the magnus.conf now looks like this:

#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#
Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules"
funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401,OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}"
#ESSO#Init fn="load-modules"
funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}" Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"
# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.

#ESSO#Init fn="EssoBasicAuthInit"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd" 
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN
Now to distribute these files:

Open Enterprise Manager Fusion Middleware Control 12c and go to the OTD configuration.

Go to the Virtual Server section and click Lock and Edit.

EM will show the Pull Components Changes bar.

DON’T pull the changes!
This will replace the conf files of the configuration with those currently in use by the instances.

Instead, make a minor, insignificant change in the configuration.
For example, add a hostname to the Virtual Server Settings. (We remove it later.)
Now activate the changes.
Again, don't pull the changes.

Discard the Instance Changes and Activate Changes.

Again, Discard Changes.

And finally, Discard Changes to distribute the correct conf files to the instances.

Now let’s look at the magnus.conf on both instances. (We already know that the virtual-server-tst1-obj.conf is the same everywhere)
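A quick way to check this on each server is to grep for the instance directory references, assuming the deployed instance configuration lives under the instances directory shown in the obinstancedir values above:

# show the obinstancedir lines in the magnus.conf of every local instance;
# each file should reference its own instance name
grep obinstancedir /u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/*/config/magnus.conf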

On Instance 1

#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#
Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules"
funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401,OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local"
#ESSO#Init fn="load-modules" funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host1.domain.local" Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"

# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.

#ESSO#Init fn="EssoBasicAuthInit"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN

And on Instance 2

#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#
Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules" funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401,OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local"
#ESSO#Init fn="load-modules"
funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local"
Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"

# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.

#ESSO#Init fn="EssoBasicAuthInit"

obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN

The files look good on both instances.

Now restart the instances:

Validate Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host2.domain.local
Validate Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host1.domain.local
------------------------------------------------
Perform Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/TST1
Perform Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host2.domain.local
Perform Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host1.domain.local
------------------------------------------------
Checking operation status on target /Domain_otd_domain_tst/otd_domain_tst/TST1
Operation Restart on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host2.domain.local succeeded
Operation Restart on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host1.domain.local succeeded

 

Now you’re good to go with WebGate correctly configured on OTD 12c.

I put the solution in the service request and got thanks from the guys at Oracle Support. They told me they were going to change the documentation to match my solution. It is always nice to get this kind of appreciation from them.

The post Configuring Oracle Traffic Director 12c with WebGate appeared first on AMIS Oracle and Java Blog.

Docker, WebLogic Image on Microsoft Azure Container Service

Wed, 2017-05-24 09:37

This blog series shows how to get started with WebLogic and Docker – in 3 different Clouds:

This blog is about running a WebLogic Docker container image from the Docker Hub registry on the Microsoft Azure Container Service.

Starting point & Outline

Starting point for this blog is:

  • A computer running Ubuntu and with Docker installed
  • A WebLogic Docker Container Image in a private Docker Hub repository, as described in this blog
  • Access to Microsoft Azure, e.g. via a trial subscription

The blog itself consists of 2 main parts:

  1. Create an Azure Container Service
  2. Run the container image from the Docker Hub repository on the created Cloud Service

 

Create an Azure Container Service

The Azure Container Service offers the choice between using Docker Swarm, DC/OS, or Kubernetes for orchestration/management of the Docker container solution. For our specific use case, I picked Docker Swarm.

The high level steps for creating your Azure Container Service are:

  • create an SSH RSA public key
  • deploy an Azure Container Service cluster (via the Azure portal), by using an Azure Resource Manager template (for Docker Swarm)

Let’s get started.

  • create an SSH RSA public key

Log in into the Ubuntu machine and generate the key:

developer@developer-VirtualBox:~$ ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/developer/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/developer/.ssh/id_rsa.

Your public key has been saved in /home/developer/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:Lpm8BrZoQscz1E6Maq9J0WdjjLjHAP5fxZXBdlrdzMY developer@developer-VirtualBox

The key's randomart image is:

+---[RSA 2048]----+
|          ..  .+.|

|           ooo .E|

|.   +     .o+  . |

|o ooo+  . ..     |

| =+oo*  So       |

| +*=*o.+.        |

|ooo*oo=..        |

|o =.o oo         |

| =.  o.          |

+----[SHA256]-----+
developer@developer-VirtualBox:~$
  • deploy an Azure Container Service cluster (via the Azure portal), by using an Azure Resource Manager template (for Docker Swarm)

First, log in to the Microsoft Azure Portal at http://portal.azure.com. Here, click the + sign, select ‘Containers’ and then ‘Azure Container Service’:

The next screen appears:

Click the Create button.

Complete the settings as shown in the figure above and click OK to move to the next page:

For the Master configuration, complete the settings as shown above. Use the SSH key that was created in step (1). Note that the ‘Master’ is the Manager node in a Docker Swarm. One Master node is enough for this simple configuration. Next, click OK.

That brings us to the Agent configuration page, where an Agent is actually a Docker Swarm worker node. We need only 1 agent. For the Virtual Machine size, the DS2 profile is chosen, which has 2 CPU cores and 7 GB of RAM. That should be enough to run the WebLogic container on.

Click OK to continue:

Review the Summary page, and then click OK to start creation of your Azure Container Service:

After some time, your Azure Container Service will be created!

 

Run the WebLogic image from Docker Hub registry

Now, we will start to get the WebLogic image from the Docker Hub registry running on the Azure Container Service. This is achieved from the command line of our local Ubuntu machine – the one that also has Docker installed.

The following steps will be done:

  • Make a tunnel to the Master node
  • Run the WebLogic container
  • Add a LoadBalancer rule for forwarding port 7001
  • Test

Let’s get started.

  • Make a SSH tunnel to the Master node

From the local Ubuntu machine, make an SSH tunnel to the Master node. In that way, docker commands from the Ubuntu machine will be handled by the Docker Swarm container on the Master node. First, establish the tunnel, set the Docker host that will be used by the local Ubuntu machine and then list the images. The output of the ‘docker images list’ command shows what Docker images are available on the Master node – in our case: none (as this is a fresh installation):

developer@developer-VirtualBox:~$ ssh -fNL 2375:localhost:2375 -p 2200 lgorisse@lgomgmt.northeurope.cloudapp.azure.com

developer@developer-VirtualBox:~$ export DOCKER_HOST=:2375

developer@developer-VirtualBox:~$ docker images list

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

developer@developer-VirtualBox:~$
  • Run the WebLogic container

Now, give the commands below on the local Ubuntu machine to start the WebLogic container on the Azure Container Service. The WebLogic container will run on the agent machine.

The following steps have to be done:

  • Log in to Docker Hub, using your Docker Hub account. This is necessary because the Docker Hub registry that I used is a private registry (= password protected)
  • Pull the WebLogic Container image with the docker pull command
  • Run the image with the port mapping for port 7001
  • Notice in the output below that I also had a yeasy/simple-web container running
developer@developer-VirtualBox:~$ docker login

Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.

Username: lgorissen

Password:

Login Succeeded

developer@developer-VirtualBox:~$ docker pull lgorissen/myfirstweblogic

Using default tag: latest

swarm-agent-595FAAF3000001: Pulling lgorissen/myfirstweblogic:latest... : downloaded

developer@developer-VirtualBox:~$ docker run -d -p 7001:7001 lgorissen/myfirstweblogic

6506b88dc7cc166df55c470e4e7f9732cfb55353c8a1a84d8048c7689c886a7c

developer@developer-VirtualBox:~$ docker container ps

CONTAINER ID        IMAGE                       COMMAND                  CREATED                  STATUS              PORTS                                                   NAMES

6506b88dc7cc        lgorissen/myfirstweblogic   "startWebLogic.sh"       Less than a second ago   Up 21 seconds       5556/tcp, 7002/tcp, 8453/tcp, 10.0.0.5:7001->7001/tcp   swarm-agent-595FAAF3000001/competent_allen

fd4c32b1fd19        yeasy/simple-web            "/bin/sh -c 'pytho..."   8 minutes ago            Up 8 minutes        10.0.0.5:80->80/tcp                                     swarm-agent-595FAAF3000001/pensive_pike

developer@developer-VirtualBox:~$
  • Add a LoadBalancer rule for forwarding port 7001

In this setup, the Agent machine can’t be reached over port 7001. A load balancer rule that forwards traffic on this port has to be created. That can be done in the Azure Portal. First, look up the resource groups in the portal:

Open the weblogic-demo resource group by clicking on it:

Click on the load balancer for the agent (remember, our WebLogic container is running on the agent virtual machine). That brings up the Load balancer screen:

Click on the Add sign to add a new load balancer rule for port 7001:

Enter front-end, back-end and port numbers to establish the new route and save it.
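As an alternative to the portal, the same rule can be created with the Azure CLI 2.0; a sketch with placeholder names for the resource group and the agent load balancer (these names are not from the original deployment):

# create a load balancer rule that forwards TCP port 7001 to the agent pool
az network lb rule create \
  --resource-group <resource-group> \
  --lb-name <agent-load-balancer> \
  --name weblogic7001 \
  --protocol Tcp \
  --frontend-port 7001 \
  --backend-port 7001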

 

  • Test

Again, look into the resource groups in the Azure Portal to find the public DNS name for the agents:

From the figure above, we see that the url to use for accessing the WebLogic container is: http://lgoagents.northeurope.cloudapp.azure.com:7001/
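A quick check from the command line – the WebLogic console application in the image should answer on the forwarded port:

curl -I http://lgoagents.northeurope.cloudapp.azure.com:7001/console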

Nifty :-S

The post Docker, WebLogic Image on Microsoft Azure Container Service appeared first on AMIS Oracle and Java Blog.

Docker, WebLogic Image on Oracle Container Cloud Service

Wed, 2017-05-24 09:36

This blog series shows how to get started with WebLogic and Docker – in 3 different Clouds:

Starting point & Outline

Starting point for this blog is:

  • A computer with a browser
  • A WebLogic Docker Container Image in a private Docker Hub repository, as described in this blog [todo: make reference]
  • Identity Domain Administrator access to the Oracle Public Cloud, e.g. via a trial account

The blog itself consists of 2 main parts:

  1. Create a Container Cloud Service
  2. Run the container image from the Docker Hub repository on the created Cloud Service
Create a Container Cloud Service

First step is to create an Oracle Container Cloud Service. Start by logging in as Identity Domain Administrator. When you have a trial account, you will have received a mail like the one below:

Login into MyServices Administration using the url from the mail as shown above:

Click ‘Create Instance’ to start provisioning of the Container Cloud Service. A pop-up is now shown:

Pick the Container (cloud service), which brings you to the first page of the wizard for creation of the Container Cloud Service:

Click ‘Create Service’ button:

On this page, create the SSH Public Key: click the Edit button and select ‘Create a New Key’:

Click Enter:

Download the key. Continue by clicking Next. That brings you to the overview page:

Review the data and then click ‘Create’ to start creation of the Container Cloud Service:

Now the service is being created, which takes several minutes – mine took 11 minutes. The page will then look like this:

Click on the ‘WebLogicService’, which brings you to a more detailed overview of your created Container Cloud Service:

As shown in the picture above, go to the ‘Container Console’:

Login with the admin username and password that you entered when creating the Container Cloud Service. This will now bring you to the Dashboard of the Cloud Container Service named WebLogicService:

In this console, the main concepts of the Oracle Container Cloud Service are clearly visible:

  • Task: action created in the Oracle Container Cloud Service as a response to your requests
  • Event: individual, discrete operations on the Oracle Container Cloud Service
  • Service: comprises the information required for running a Docker image on a host
  • Stack: comprises the configuration for running a set of services as a single entity
  • Deployment: a deployed service or stack on the Oracle Container Cloud Service
  • Container: a Docker container, i.e. a process created to run a Docker image
  • Image: a Docker image
  • Hosts: the Oracle Compute virtual machines that are managed by the Oracle Container Cloud Service (also: worker nodes)
  • Resource Pools: a combination of hosts into groups of compute resources
  • Registry: a Docker registry, i.e. a system for storing and sharing Docker images
  • Tags: labels for organizing resource pools and the hosts within them (used for management)

 

Run the WebLogic image from Docker Hub registry

With the Oracle Container Cloud Service up and running, we can start running the WebLogic image that we have in Docker Hub. The following has to be done:

  1. Update the registry to access the private Docker Hub repository where the WebLogic image is stored
  2. Create a Service for the WebLogic container image
  3. Deploy the Service
  4. Test the deployment

Steps are shown below:

  • Update the registry to access the private Docker Hub repository where the WebLogic image is stored

Go to Registries:

Click the Edit button and add the authorization details for the private Docker Hub registry:

 

  • Create a Service for the WebLogic container image

Next, we’ll create a Service that describes how we want to deploy the WebLogic container image. Go to the Services page and click on the New Service button:

In the Service Editor pop-up, enter the values like shown below. Note the port mapping settings!

Click Save and note that the Service is added:
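For reference, the port mapping and image entered in the Service Editor boil down to the same runtime settings used with plain Docker earlier; a sketch of the docker run command the service roughly corresponds to (the image name comes from the companion Azure post, the container name is just an illustration):

# run the WebLogic image and map the admin port
docker run -d -p 7001:7001 --name weblogic lgorissen/myfirstweblogic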

 

  • Deploy the Service

 

In the previous screen, click the green Deploy button to start deployment of the service.

Accept the default settings and click Deploy. After some time, i.e. when the deployment has completed, the Deployments tab will be colored green:

 

  • Test the deployment

First, check that the image has actually been pulled in on your host, by looking at the Images tab:

Then, check for the container to be up-and-running in the Containers tab:

Now, as a final check, you’ll want to log in to the WebLogic console.

First, go to the Hosts tab and find the public IP address:

Then, point your browser to the familiar URL, here: http://129.150.70.46:7001/console

… and log in to observe that the WebLogic server has the developer_domain, which is a clear indication that it is indeed running in our container from Docker Hub.

 

The post Docker, WebLogic Image on Oracle Container Cloud Service appeared first on AMIS Oracle and Java Blog.

Node.js run from GitHub in Generic Docker Container backed by Dockerized Redis Cache

Sun, 2017-05-21 23:41

In a previous article I talked about a generic Docker Container Image that can be used to run any Node.js application directly from GitHub or some other Git instance by feeding the Git repo url as Docker run parameter (see https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/). In this article, I create a simple Node.js application that will be pushed to GitHub and run in that generic Docker container. It will use a Redis cache that is running in a separate Docker Container.


The application does something simple: it handles HTTP requests; each request increments the request counter, and the current value of the counter is returned. The earlier implementation of this functionality used a local Node.js variable to keep track of the request count. This approach had two spectacular flaws: horizontal scalability (adding instances of the application fronted by a load balancer of sorts) led to strange results because each instance kept its own request counter, and a restart of the application caused the count to be reset. The incarnation we discuss in this article uses a Redis cache as a shared store for the request counter, one that will also survive a restart of the Node.js application instances. Note: of course this means Redis becomes a single point of failure, unless we cluster Redis too and/or use a persistent file as backup. Both options are available but are out of scope for this article.

Sources for this article can be found on GitHub: https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017/tree/master/part1 .

Run Redis

To run a Docker Container with a Redis cache instance, we only have to execute this statement:

docker run -d --name redis -p 6379:6379 redis

We run a container based on the Docker image called redis. The container is also called redis and its internal port 6379 is exposed and mapped to port 6379 on the host. That is all it takes. The image is pulled and the container is started.

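To verify that the cache is reachable before wiring up the application, the redis-cli inside the container can be used; a minimal sketch (testkey is just a throwaway key):

# ping the Redis server and try a simple set/get through the container's own redis-cli
docker exec -it redis redis-cli ping
docker exec -it redis redis-cli set testkey 42
docker exec -it redis redis-cli get testkey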

Create Node.js Application RequestCounter – Talking to Redis

To talk to Redis from a Node.js application, there are several modules available. The most common and generic one seems to be called redis. To use it, I have to install it with npm:

npm install redis --save


To leverage Redis in my application code, I need to require(‘redis’) and create a client connection. For that, I need the host and port of the Redis instance. The port was specified when we started the Docker container for Redis (6379) and the host IP is the IP of the Docker Machine (I am running Docker Tools on Windows).

Here is the naïve implementation of the request counter, backed by Redis. Naïve because it does not cater for race conditions between multiple instances that could each read the current counter value from Redis, each increase it and write it back, causing one or more counts to be potentially lost. Note that REDIS_HOST and REDIS_PORT can be specified through environment variables (read with process.env.<name of variable>).

//respond to HTTP requests with response: count of number of requests
// invoke from browser or using curl:  curl http://127.0.0.1:PORT
var http = require('http');
var redis = require("redis");

var redisHost = process.env.REDIS_HOST ||"192.168.99.100" ;
var redisPort = process.env.REDIS_PORT ||6379;

var redisClient = redis.createClient({ "host": redisHost, "port": redisPort });

var PORT = process.env.APP_PORT || 3000;

var redisKeyRequestCounter = "requestCounter";

var server = http.createServer(function handleRequest(req, res) {
    var requestCounter = 0;

    redisClient.get(redisKeyRequestCounter, function (err, reply) {
        if (err) {
            res.write('Request Count (Version 3): ERROR ' + err);
            res.end();
        } else {
            if (!reply || reply == null) {
                console.log("no value found yet");
                redisClient.set(redisKeyRequestCounter, requestCounter);
            } else {
                requestCounter = Number(reply) + 1;
                redisClient.set(redisKeyRequestCounter, requestCounter);
            }
            res.write('Request Count (Version 3): ' + requestCounter);
            res.end();
        }
    })
}).listen(PORT);

    //        redisClient.quit();

console.log('Node.JS Server running on port ' + PORT + ' for version 3 of requestCounter application, powered by Redis.');

 

Run the Node.JS Application talking to Redis

The Node.js application can be run locally – from the command line directly on the Node.js runtime.

Alternatively, I have committed and pushed the application to GitHub. Now I can run it using the generic Docker Container Image lucasjellema/node-app-runner that I prepared in this article: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/ using a single startup command:

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8015:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-3.js" -e "REDIS_HOST:127.0.0.1" -e "REDIS_PORT:6379" lucasjellema/node-app-runner

This command passes relevant values as environment variables – such as the GitHub repo URL, the directory in that repo, the exact script to run, the host and port for Redis, and the port that the Node.js application should listen on for requests. In the standard Docker way, the internal port (8080) is mapped to the external port (8015).
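A quick test from the command line on the host – using the Docker Machine IP, with 8015 being the external port mapped above; every call should return an incremented count:

curl http://$(docker-machine ip default):8015/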

 

The application can be accessed from the browser:


 

Less Naïve Implementation using Redis Watch and Multi for Optimistic Locking

Although the code shown above seems to be working, it is not robust. When scaling out, multiple instances can race against each other and overwrite each other’s changes in Redis because no locking has been implemented. Based on this article: https://blog.yld.io/2016/11/07/node-js-databases-using-redis-for-fun-and-profit/#.WSGEWtwlGpo I have extended the code with an optimistic locking mechanism. Additionally, the treatment of client connections is improved – reducing the chance of leaking connections.

//respond to HTTP requests with response: count of number of requests
// invoke from browser or using curl:  curl http://127.0.0.1:PORT
// use an optmistic locking strategy to prevent race conditions between multiple clients updating the requestCount at the same time
// based on https://blog.yld.io/2016/11/07/node-js-databases-using-redis-for-fun-and-profit/#.WSGEWtwlGpo 
var http = require('http');
var Redis = require("redis");

var redisHost = process.env.REDIS_HOST || "192.168.99.100";
var redisPort = process.env.REDIS_PORT || 6379;

var PORT = process.env.APP_PORT || 3000;

var redisKeyRequestCounter = "requestCounter";

var server = http.createServer(function handleRequest(req, res) {
    increment(redisKeyRequestCounter, function (err, newValue) {
        if (err) {
            res.write('Request Count (Version 3): ERROR ' + err);
            res.end();
        } else {
            res.write('Request Count (Version 3): ' + newValue);
            res.end();
        }
    })
}).listen(PORT);


function _increment(key, cb) {
    var replied = false;
    var newValue;

    var redis = Redis.createClient({ "host": redisHost, "port": redisPort });
    // if the key does not yet exist, then create it with a value of zero associated with it
    redis.setnx(key, 0);
    redis.once('error', done);
    // ensure that if anything changes to the key-value pair in Redis (from a different connection), this atomic operation will fail
    redis.watch(key);
    redis.get(key, function (err, value) {
        if (err) {
            return done(err);
        }
        newValue = Number(value) + 1;
        // either watch tells no change has taken place and the set goes through, or this action fails
        redis.multi().
            set(key, newValue).
            exec(done);
    });

    function done(err, result) {
        redis.quit();

        if (!replied) {
            if (!err && !result) {
                err = new Error('Conflict detected');
            }

            replied = true;
            cb(err, newValue);
        }
    }
}

function increment(key, cb) {
    _increment(key, callback);

    function callback(err, result) {
        if (err && err.message == 'Conflict detected') {
            _increment(key, callback);
        }
        else {
            cb(err, result);
        }
    }
}

console.log('Node.JS Server running on port ' + PORT + ' for version 3 of requestCounter application, powered by Redis.');

This Node.js application is run in exactly the same way as the previous one, using requestCounter-4.js as APP_STARTUP rather than requestCounter-3.js.

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8015:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-4.js" -e "REDIS_HOST:127.0.0.1" -e "REDIS_PORT:6379" lucasjellema/node-app-runner


The post Node.js run from GitHub in Generic Docker Container backed by Dockerized Redis Cache appeared first on AMIS Oracle and Java Blog.

Running Node.js applications from GitHub in generic Docker Container

Sun, 2017-05-21 06:27

This article shows how I create a generic Docker Container Image to run any Node.JS application based on sources for that application on GitHub. The usage of this image is shown in this picture:

 


Any Node.JS application in any public GitHub repo can be run using this Docker Container Image. When a container is run from this image, the URL of the GitHub repo is passed in as an environment variable – as well as (optionally) the directory in the repo to run the application from, the name of the file to run and the specific version of the Node runtime to use. An example of the command line to use:

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8005:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter.js" lucasjellema/node-app-runner

This command will run the script requestCounter.js in the part1 directory in the repo found in GitHub at the URL specified. It passes an additional environment variable APP_PORT to the runtime – to be used in the node application (process.env.APP_PORT). It maps port 8080 inside the container to port 8005 on the host in the standard Docker way.

To run an entirely different Node.js application, I can use this command:

docker run -e "GIT_URL=https://github.com/lucasjellema/nodejs-serversentevents-quickstart" -p 8010:8888 -e "PORT=8888" -e "APP_HOME=." -e "APP_STARTUP=app.js" lucasjellema/node-app-runner

The same image is started, passing a different GIT_URL and different instructions regarding the directory and the script to run – and also a different environment variable called PORT.

Note: this work is based on the Docker Image created by jakubknejzlik – see https://hub.docker.com/r/jakubknejzlik/docker-git-node-app/ and https://github.com/jakubknejzlik/docker-git-node-app/blob/master/Dockerfile.

My own sources are part of the GitHub Repository at https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017 – with resources for a workshop on Microservices, Choreography, Docker, Kubernetes, Node.jS, Kafka and more.

The steps described in this article:


1. Docker file to build the container

2. bootstrap.sh file to run when the container is started

3. Create image from the container

4. Push image to public Docker Hub Registry (https://hub.docker.com/r/lucasjellema/node-app-runner/)

(5. Create Node.js application and push to GitHub)

6. Run Node.js application from GitHub repository by starting a Docker container from the image created in the previous steps

 

1. Docker file to build the container

The Docker file is shown here

FROM node

ENV NODE_VERSION stable
ENV NPM_SCRIPT start
ENV GIT_URL https://github.com/heroku/node-js-sample
ENV APP_PORT 3000

ENV APP_HOME .
ENV APP_STARTUP ""
# JUST_RUN specifies whether node should be installed and git should be cloned
ENV JUST_RUN N

COPY ./docker-work /code

WORKDIR /code

#RUN chown -R app:app /code/*
RUN chmod +x /code/bootstrap.sh

RUN npm install -g n --silent
RUN n stable

ENTRYPOINT ["/code/bootstrap.sh"]

It starts from the Docker image node – the official base image (see https://hub.docker.com/_/node/ for details). The script defines a number of environment variables with (default) values; these values can be overwritten when a container is run. The contents of directory docker-work (off the current working directory) are copied into directory /code inside the Docker image. The file bootstrap.sh – which is in that docker-work directory – is made executable. NPM package n is installed (https://www.npmjs.com/package/n) for doing version management of Node.js, and the currently stable release of Node.js is installed – in addition to the version of Node.js shipped in the Node Docker image. Finally, the entrypoint is set to bootstrap.sh – meaning that when a container is started based on the image, this file will be executed.

 

2. bootstrap.sh file to run when the container is started

The file bootstrap.sh is executed when the container is started. This file takes care of:

* installing a special version of the Node.js runtime if required

* cloning the Git repository – to bring the application sources into the container

* installing all required node-modules by running npm install

* running the Node.js application

The file uses a number of environment variables for these actions:

– NODE_VERSION – if a specific version of Node runtime is required

– GIT_URL – the URL to the Git repository that contains the application sources

– APP_HOME – the directory within the repository that contains package.json and the start script for the application to run

– APP_STARTUP – the file that should be executed (node $APP_STARTUP); when this parameter is not passed, the application is started with npm start – based on the start script in package.json

– JUST_RUN – when this variable has the value Y, then the container will not attempt to install a new version of Node.js nor will it clone the Git repo (again)

#!/bin/bash

if [ "$JUST_RUN" = "N" ]; then
  echo switching node to version $NODE_VERSION
  n $NODE_VERSION --quiet
fi

echo node version: `node --version`

if [ "$JUST_RUN" = "N" ]; then
  git clone $GIT_URL app
fi

cd app

cd $APP_HOME
echo Application Home: $APP_HOME

if [ "$JUST_RUN" = "N" ]; then
  if [ "$YARN_INSTALL" = "1" ]; then
    yarn install --production --silent
  else
    npm install --production --silent
  fi
fi

if [ "$APP_STARTUP" = "" ]; then
  npm run $NPM_SCRIPT
else
  node $APP_STARTUP
fi

 

3. Build container image

In my environment, I am working on a Windows7 laptop. On this laptop, I have installed Docker Tools. I am running in the Docker Tools Quickstart terminal (fka boot2docker) – Docker Machine on a Linux client running a small Oracle VirtualBox VM.

Using this command I build the container image from the Dockerfile:

docker build -t lucasjellema/node-app-runner .


To inspect whether the image contains the setup, I can run the image and start a Bash shell – just to check the contents of the file system:

docker run -it --entrypoint /bin/bash lucasjellema/node-app-runner

I can now try out the image, using a command like this:

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8004:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter.js" lucasjellema/node-app-runner


This runs a container, clones the Git repo at the indicated URL to directory /code/app, navigates into directory /code/app/part1, performs an npm install to get the required modules and runs requestCounter.js with Node.js, listening at port 8004 for HTTP requests on the host that are forwarded to port 8080 inside the container.

In order to access the application on my Windows host, I need to know the IP address of Docker Machine – the Linux VM instance that runs the Docker server inside VirtualBox. This is done using

docker-machine ip default

which will return the IP address assigned to the VM.


I can then access the Node.js application at http://IP_ADDRESS:8004.
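For example, capturing the Docker Machine IP on the fly (8004 is the host port mapped in the docker run command above):

curl http://$(docker-machine ip default):8004/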

 


 

4. (optional) Push image to public Docker Hub Registry (https://hub.docker.com/r/lucasjellema/node-app-runner/)

The image has proven itself, and we can now push it to a public or private registry. To push to Docker Hub:

docker login

docker push lucasjellema/node-app-runner


5. Create Node.js application and push to GitHub


6. Run Node.js application from GitHub repository by starting a Docker container from the image created in the previous steps

I have several Node.js applications that I would like to run – each in their own container, listening at their own port. This is now very simple and straightforward – using several calls to docker run, each with different values for GIT_URL, APP_HOME and APP_STARTUP as well as APP_PORT or PORT.

For example – run three containers in parallel:

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8001:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter.js" lucasjellema/node-app-runner

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8005:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-2.js" lucasjellema/node-app-runner

docker run -e "GIT_URL=https://github.com/lucasjellema/nodejs-serversentevents-quickstart" -p 8010:8888 -e "PORT=8888" -e "APP_HOME=." -e "APP_STARTUP=app.js" lucasjellema/node-app-runner

We can look at the logging from a container:

docker logs <container id>

We can stop each container:

docker stop <container id>

list all containers – running and stopped:

docker container ls --all

restart a container (now the time to restart is very short):

docker start <container id>

7. Turn Container into Image

Note: it is easy to turn one of these containers running a specific Node.js application itself into an image from which subsequent containers can be run. This image would contain the correct version of Node.js as well as the application and all its dependent modules – allowing for a faster startup time. The steps:

docker commit CONTAINER_ID NAME_OF_IMAGE

for example:

docker commit a771 request-counter

Subsequently we can run a container based on this image; note that this time we do not specify the GIT_URL – because the application and all node_modules are baked into the image. The environment variables used in bootstrap.sh and in the application can still be passed. The startup time for this container should be very short – since hardly any preparation needs to be performed:

docker run -e "APP_PORT=8080" -p 8004:8080 -e "APP_HOME=part1" -e "JUST_RUN=Y" -e "APP_STARTUP=requestCounter.js" request-counter

 

Notes

Note: remove old containers

list exited containers:

docker ps -aq -f status=exited

remove them (http://blog.yohanliyanage.com/2015/05/docker-clean-up-after-yourself/)

docker rm -v $(docker ps -a -q -f status=exited)

remove dangling images

list them:

docker images -f "dangling=true" -q

Remove them:

docker rmi $(docker images -f "dangling=true" -q)

The post Running Node.js applications from GitHub in generic Docker Container appeared first on AMIS Oracle and Java Blog.

Sequential Asynchronous calls in Node.JS – using callbacks, async and ES6 Promises

Thu, 2017-05-18 04:38

One of the challenges with programming in JavaScript (ECMA Script) in general and Node.JS in particular is having to deal with asynchronous operations. Whenever a call is made to a function that will handle the request asynchronously, care has to be taken to be prepared to receive the result from the function in an asynchronous fashion. Additionally, we have to ensure that the program flow does not continue prematurely – only those steps that can be performed without the result from the function call can proceed. Orchestrating multiple asynchronous calls – some of them sequential or chained and others possibly in parallel – and gathering the results from those calls in the proper way is not trivial.

Traditionally, we used callback functions to program the asynchronous interaction: the caller passed a reference to a function to the asynchronous operation and, when done with the asynchronous operation, the called function would invoke this callback function to hand it the outcome. The call(ed)back function would then take over and continue the flow of the program. A simple example of a callback function is seen whenever an action is scheduled for execution using setTimeout():

setTimeout(function () {
  console.log("Now I am doing my thing ");
}, 1000);

or perhaps more explicitly:

function cb() {
  console.log("Now I am doing my thing ");
}

setTimeout(cb, 1000);
Chain of Asynchronous Actions

With multiple mutually dependent (chained) calls, using callback functions results in nested program logic that quickly becomes hard to read, debug and maintain. An example is shown here:


 

Function readElementFromJsonFile does what its name says: it reads the value of a specific element from the file specified in the input parameter. It does so asynchronously and it will call the callback function to return the result when it has been obtained. Using this function, we are after the final value. Starting with file step1.json, we read the name of the nextfile element which indicates the next file to read, in this case step2.json. This file in turn indicates that nextStep.json should be inspected and so on. Clearly we have a case of a chain of asynchronous actions where each action’s output provides the input for the next action.
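For illustration, the chained files simply point to each other; their contents are along these lines (the element names match the ones used in the code below, the actual files and values live in the GitHub repo):

step1.json – points to the next file to read:

{ "nextfile": "step2.json" }

and the last file in the chain holds the element we are really after:

{ "actualValue": "42" }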

In classic callback oriented JavaScript, the code for the chain of calls looks like this – the nested structure we have come to expect from using callback functions to handle asynchronous situations:

// the classic approach with nested callbacks
var fs = require('fs');
var step1 = "/step1.json";

function readElementFromJsonFile(fileName, elementToRead, cb) {
    var elementToRetrieve = 'nextfile';
    if (elementToRead) {
        elementToRetrieve = elementToRead;
    }
    console.log('file to read from ' + fileName);
    fs.readFile(__dirname + '/' + fileName, "utf8", function (err, data) {
        var element = "";
        if (err) return cb(err);
        try {
            element = JSON.parse(data)[elementToRetrieve];
        } catch (e) {
            return cb(e);
        }
        console.log('value of element read = ' + element);
        cb(null, element);
    });
}//readElementFromJsonFile

readElementFromJsonFile(step1, null, function (err, data) {
    if (err) return err;
    readElementFromJsonFile(data, null, function (err, data) {
        if (err) return err;
        readElementFromJsonFile(data, null, function (err, data) {
            if (err) return err;
            readElementFromJsonFile(data, null, function (err, data) {
                if (err) return err;
                readElementFromJsonFile(data, 'actualValue', function (err, data) {
                    if (err) return err;
                    console.log("Final value = " + data);
                });
            });
        });
    });
});

The arrival of the Promise in ES6 – a native language mechanism that is therefore available in recent versions of Node.JS – makes things a little bit different and more organized, readable and maintainable. The function readElementFromJsonFile() will now return a Promise – a placeholder for the eventual result of the asynchronous operation. Even though the result will be provided through the Promise object at a later moment, we can program as if the Promise represents that result right now – and we can anticipate in our code what to do when the function delivers on its Promise (by calling the built-in function resolve inside the Promise).

The result of the resolution of a Promise is a value – in the case of function readElementFromJsonFile it is the value read from the file. The then() operation that is executed when the Promise is resolved with that value calls the function that it was given as a parameter. The result (resolution outcome) of the Promise is passed as input into this function. In the code sample below we see how readElementFromJsonFile(parameters).then(readElementFromJsonFile) is used. This means: when the Promise returned from the first call to the function is resolved, then call the function again, this time using the outcome of the first call as input to the second call. With the fourth then this is a little bit more explicit: in the final call to the function readElementFromJsonFile we need to pass not just the outcome from the previous call as an input parameter but also the name of the element to read from the file. Therefore we use an anonymous function that takes the resolution result as input and makes the call to the function with the additional parameter. Something similar happens with the final then – where the result from the previous call is simply printed to the output.

The code for our example of subsequently and asynchronously reading the files becomes:

var fs = require('fs');
var step1 = "step1.json";

function readElementFromJsonFile(fileName, elementToRead) {
    return new Promise((resolve, reject) => {
        var elementToRetrieve = 'nextfile';
        if (elementToRead) {
            elementToRetrieve = elementToRead;
        }
        console.log('file to read from ' + fileName);
        fs.readFile(__dirname + '/' + fileName, "utf8", function (err, data) {
            var element = "";
            if (err) return reject(err);
            try {
                element = JSON.parse(data)[elementToRetrieve];
            } catch (e) {
                reject(e);
            }
            console.log('element read = ' + element);
            resolve(element);
        });
    })// promise
}

readElementFromJsonFile(step1)
    .then(readElementFromJsonFile)
    .then(readElementFromJsonFile)
    .then(readElementFromJsonFile)
    .then(function (filename) { return readElementFromJsonFile(filename, 'actualValue') })
    .then(function (value) { console.log('Value read after processing five files = ' + value); })
Scheduled Actions as Promise or how to Promisify setTimeout

The setTimeout() built-in expects a callback function. It does not currently return a Promise. Something like:

setTimeout(1000).then(myFunc)

would be nice but does not exist.

This entry on Stackoverflow has a nice solution for working with setTimeout Promise style:

function delay(t) {
   return new Promise(function(resolve) { 
       setTimeout(resolve, t)
   });
}

function myFunc() {
    console.log('At last I can work my magic!');
}

delay(1000).then(myFunc);
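The same Promise style also covers the parallel case mentioned in the introduction; a minimal sketch that combines the delay() function above with Promise.all (the durations are arbitrary):

// start three timers in parallel and continue only when all of them have resolved
Promise.all([delay(500), delay(1000), delay(1500)])
    .then(function () { console.log('All three delays have completed'); });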

The post Sequential Asynchronous calls in Node.JS – using callbacks, async and ES6 Promises appeared first on AMIS Oracle and Java Blog.

Oracle Database standard Geo Location Support using Locator (included in every edition!)

Wed, 2017-05-17 08:59

Many databases have native support for locations and geo data – and for determining distance and closest locations (within a certain perimeter). Oracle Database has the [Graph and] Spatial Option – which supports even the most advanced and exotic forms of location related data querying (including multidimensional shapes and probably relativistic effects); this option comes on top of Enterprise Edition and carries additional costs. What may not be as well known is the Locator functionality that is part of every edition of the Oracle Database – including XE and SE, without any additional costs – with geo support as found in most databases. In this article I will give a very brief introduction to what this Locator feature can be used for.

See for extensive documentation on Locator: Oracle Database 12c Documentation – Locator (and http://docs.oracle.com/cd/E11882_01/appdev.112/e11830/sdo_locator.htm#SPATL340 for Oracle Database 11g).

I will assume the legacy data model of DEPT and EMP (download DDL script for creating SCOTT’s database schema objects: scott_build.sql).

 

1. Prepare a table for Geo Spatial Data

-- add geospatial data for departments (longitude, latitude)

alter table dept
add (geo_location SDO_GEOMETRY)

SDO_GEOMETRY is an object type that describes and supports any type of geometry. Examples of SDO_GTYPE values include 2001 for a two-dimensional point. The SRID value 8307 is associated with the widely used WGS84 longitude/latitude coordinate system.

 

2. Add geo information to records in table

Now that a column has been added to hold the SDO_GEOMETRY object, we can start loading location data into the table.

update dept
set    geo_location = SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE (-96.8005, 32.7801,NULL), NULL, NULL)
where  loc = 'DALLAS'

update dept
set    geo_location = SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE (-73.935242, 40.730610,NULL), NULL, NULL)
where  loc = 'NEW YORK'

update dept
set    geo_location = SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE ( -71.0598, 42.3584,NULL), NULL, NULL)
where  loc = 'BOSTON'

update dept
set    geo_location = SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE (-87.6298, 41.8781,NULL), NULL, NULL)
where  loc = 'CHICAGO'
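A quick sanity check on the loaded geometries can be done with SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT (a sketch; 0.005 is the tolerance also used in the distance queries further on):

-- verify that the loaded points are valid geometries; the result should be TRUE for each department
SELECT loc
,      SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(geo_location, 0.005) AS validation_result
FROM   dept
WHERE  geo_location IS NOT NULL;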

 

3. Prepare meta data in USER_SDO_GEOM_METADATA

For each spatial column (type SDO_GEOMETRY), you must insert an appropriate row into the USER_SDO_GEOM_METADATA view to reflect the dimensional information for the area in which the data is located. You must do this before creating spatial indexes.

-- The USER_SDO_GEOM_METADATA view has the following definition:
-- (   TABLE_NAME   VARCHAR2(32),
--  COLUMN_NAME  VARCHAR2(32),
--  DIMINFO      SDO_DIM_ARRAY,
--  SRID         NUMBER
--);

-- insert dimensional information for the  spatial column
-- the dimensional range is the entire Earth, and the coordinate system is the widely used WGS84 (longitude/latitude) system (spatial reference ID = 8307)

INSERT INTO USER_SDO_GEOM_METADATA 
(TABLE_NAME, COLUMN_NAME, DIMINFO, SRID) 
VALUES ('DEPT', 'GEO_LOCATION', 
   SDO_DIM_ARRAY 
     (SDO_DIM_ELEMENT('LONG', -180.0, 180.0, 0.5), 
     SDO_DIM_ELEMENT('LAT', -90.0, 90.0, 0.5)), 
   8307);

 

4. Create the Geo Spatial Index

Create an index on the column geo_location that holds the SDO_GEOMETRY object:

CREATE INDEX dept_spatial_idx 
ON dept(geo_location)
INDEXTYPE IS mdsys.spatial_index;


 

5. Start querying with Location based conditions

List all departments, ordered by their distance from Washington DC

SELECT d.loc
,      SDO_GEOM.SDO_DISTANCE
       ( SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE ( -77.0364, 38.8951,NULL), NULL, NULL) /* Washington DC */
       , d.geo_location
       , 0.005
       , 'unit=KM'
       ) "distance from Washington"
from   dept d
order 
by     2

Next, we find all departments within 500 km of Washington DC, returning the distance in km for each department in the property distance:

with d as
( SELECT d.loc
  ,      SDO_GEOM.SDO_DISTANCE
         ( SDO_GEOMETRY(2001, 8307,SDO_POINT_TYPE ( -77.0364, 38.8951,NULL), NULL, NULL)
         , d.geo_location
         , 0.005
         , 'unit=KM'
         ) distance
  from   dept d
  order 
  by     2
)
select d.*
from   d
where  d.distance < 500

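As an aside: the same 500 km filter can also be expressed with the SDO_WITHIN_DISTANCE operator, which makes use of the spatial index. This is a sketch of my own, not one of the original queries:

-- departments within 500 km of Washington DC, filtered through the spatial index
SELECT d.loc
FROM   dept d
WHERE  SDO_WITHIN_DISTANCE
       ( d.geo_location
       , SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-77.0364, 38.8951, NULL), NULL, NULL) /* Washington DC */
       , 'distance=500 unit=KM'
       ) = 'TRUE';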

Find two closest neighbouring departments for NEW YORK:

 

SELECT /*+ LEADING(d) INDEX(dn dept_spatial_idx)  */ 
       d.deptno,
       d.loc,
       dn.deptno neighbour_deptno,
       dn.loc neighbour_loc,
       sdo_nn_distance (1) distance
FROM   dept d
       cross join
       dept dn
WHERE  d.deptno = 10 /* NEW YORK */
and    dn.deptno !=  10
AND    sdo_nn /* is dn in set of 3 closest neighbours to d */
       (dn.geo_location, d.geo_location, 'sdo_num_res=3', 1) = 'TRUE'
ORDER 
BY    distance;

(note: the hint in the first line is not required on Oracle Database 12c, but it is on 11g – see forum thread) Here are examples for the use of the SDO_NN operator.

 


Distance matrix, using pivot:

 

with distances as
(SELECT /*+ LEADING(d) INDEX(dn dept_spatial_idx)  */ 
       d.deptno,
       d.loc,
       dn.deptno neighbour_deptno,
       dn.loc neighbour_loc,
       trunc(sdo_nn_distance (1)) distance
FROM   dept d
       cross join
       dept dn
WHERE  sdo_nn /* is dn in set of 3 closest neighbours to d */
       (dn.geo_location, d.geo_location, 'sdo_num_res=3 unit=km', 1) = 'TRUE'
)
SELECT *
FROM   (SELECT loc, neighbour_loc, distance distance
        FROM   distances)
PIVOT  ( max(distance) AS distance 
         FOR (neighbour_loc) 
         IN ('NEW YORK' AS NEWYORK, 'BOSTON' AS BOSTON, 'CHICAGO' AS CHICAGO, 'DALLAS' as DALLAS)
        );

 

 


The post Oracle Database standard Geo Location Support using Locator (included in every edition!) appeared first on AMIS Oracle and Java Blog.

Fastest creation of a Lean VirtualBox VM Image with Oracle Database 11gR2 XE, the Node.JS 7.x and the Oracle DB Driver for Node

Tue, 2017-05-16 11:05

For a workshop on Node.js I needed a VM to demonstrate the Oracle DB Driver for Node and to let students try it out. I wanted a lean VM with the bare minimum: Oracle Database XE, Node, the Oracle DB Driver for Node and the Git client (for fetching sources from GitHub). I stumbled across the OXAR repository on GitHub (https://github.com/OraOpenSource/OXAR) – an Oracle XE & APEX build script along with images for popular cloud platforms, http://www.oraopensource.com/oxar/ . Using the sources I found there, I could create my VM in a few simple, largely automated steps. I ended up with a 4.5 GB VM image (which exports as a 1.5 GB appliance) that runs in 1 GB of memory. It is more than adequate for my needs.

The steps – for myself if I need to go through them again and of course for you, the reader, to also create this handsome, useful VM.

The steps for creating your own VM image are as follows:

1. make sure that you have Vagrant and VirtualBox installed locally (https://www.vagrantup.com/ and https://www.virtualbox.org/)

2. get the OXAR repository content

git clone https://github.com/OraOpenSource/OXAR

3. Download Oracle 11gR2 XE  installer for Linux from OTN: http://www.oracle.com/technetwork/database/database-technologies/express-edition/downloads/index.html and copy the downloaded file oracle-xe-11.2.0-1.0.x86_64.rpm.zip (the Linux installer for Oracle 11gR2 XE Database downloaded from OTN) to the OXAR/files directory

4. edit the file config.properties in the OXAR root directory


– set parameter OOS_ORACLE_FILE_URL to file:///vagrant/files/oracle-xe-11.2.0-1.0.x86_64.rpm.zip and save the change:

OOS_ORACLE_FILE_URL=file:///vagrant/files/oracle-xe-11.2.0-1.0.x86_64.rpm.zip


Use the OOS_MODULE_XXX flags to specify which components should be installed. Here I have chosen not to install APEX and NODE4ORDS.

5. run vagrant using the statement:

vagrant up

this will run for a while: it downloads the CentOS base image, creates the VM (with NAT network configuration) and installs the Git client, the Oracle 11gR2 XE Database, Node and the Node oracledb driver.

6. after rebooting the system, the VM will be started (or you can start it using vagrant up again or by using the VirtualBox manager).


You can start an SSH session into it by connecting to localhost:50022, then log in to Linux using vagrant/vagrant.


7. connect to the database using sqlplus hr/oracle


8. Try out Node.js


9. To try out Node against Oracle Database using the driver, you can clone the GitHub Repository:

git clone https://github.com/lucasjellema/nodejs-oracledbdriver-quickstart


Next, cd into the newly created directory


and run the file select.js:

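To give an impression of what such a Node program looks like, here is a rough sketch in the style of the node-oracledb driver. It is not the literal contents of select.js; the connect string 'localhost/XE' and the query against the HR schema are assumptions:

// sketch of a simple node-oracledb query against the XE database
var oracledb = require('oracledb');

oracledb.getConnection(
  { user: 'hr', password: 'oracle', connectString: 'localhost/XE' },
  function (err, connection) {
    if (err) { console.error(err.message); return; }
    connection.execute(
      'select first_name, last_name from employees where rownum <= 5',
      function (err, result) {
        if (err) { console.error(err.message); }
        else { console.log(result.rows); }
        connection.release(function (err) { if (err) { console.error(err.message); } });
      });
  });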

10. To try out PL/SQL as well:

Create the procedure get_salary using the file get_salary_proc.sql

 


Run Node program plsql.js:


The post Fastest creation of a Lean VirtualBox VM Image with Oracle Database 11gR2 XE, the Node.JS 7.x and the Oracle DB Driver for Node appeared first on AMIS Oracle and Java Blog.

Oracle SOA Suite: Two-way SSL with TLS1.2 made easy (slightly less complicated)

Wed, 2017-05-10 14:14

Transport layer security (TLS) is not an easy topic. Many blogs have been written about this already. Surprisingly though, I did not find a single blog which was more or less complete and provided me with everything I needed to know to get this working on SOA Suite 12.2.1. In this blog I try to make the topic more easy to understand and provide a complete end to end example.

If you only want an implementation and do not care much about the explanation, you can skip the ‘Some basics’ section, only execute the commands in bold in the ‘Let’s get started!’ section and follow the steps in the ‘WebLogic and SOA Suite’ section. Do take into consideration any existing SSL related configuration on your own system.

Some basics SSL/TLS

SSL stands for Secure Sockets Layer. SSL is the predecessor of TLS. SSL should be considered insecure since the POODLE attack was announced in October 2014. TLS currently has 4 versions: TLS 1.0, 1.1, 1.2 and 1.3. 1.3 is not widely supported/adopted yet. SSL/TLS provides integrity checks, security and authentication.

Identity

A server which hosts traffic on a port with SSL/TLS enabled has an identity keystore. This identity keystore contains a private key and a public key/certificate. The public key/certificate can safely be given to other parties. With websites, when visiting an HTTPS website (HTTP with SSL enabled), the public key is sent to you. The other party / client can use the public key to encrypt messages meant for the server. The only one who can decrypt the messages is the one holding the private key of the server, which is usually only the server.

Trust

Can you trust a server? You can use a certificate authority to create a signed public key. If someone trusts the certificate authority, that someone also automatically trusts the signed key. With websites you often see a green lock when a certain website uses HTTPS with a public certificate signed by a certificate authority trusted by your web browser.

Usually a truststore is used to store trusted certificate authorities or specific trusted certificates. If you have many servers in your application landscape, it is recommended to use a certificate authority since it is cumbersome to load every public key of every server in every truststore. Trusting a single certificate authority makes things a lot easier.

Certificate authority

A certificate authority has a private key which it can use to sign a so-called certificate signing request. From this certificate signing request you can create a signed public key.

Certain companies such as Google and Microsoft perform checks to confirm someone's identity before providing them with a signed public key. You can pay these companies to perform those checks and give you a signed certificate. Most of these companies are trusted certificate authorities by default in several OSs and browsers. This way, for a website for example, you do not have to make changes on a client for your certificate to be trusted.

If you run several servers within your internal company network, you often do not require these external checks. You can create your own certificate authority private key and create a signed public key yourself. This certificate authority is not trusted by default, so you should trust the public certificate of your self-signed certificate authority in order to establish trust.

Cipher

A cipher is an algorithm for encryption and decryption. With SSL, during the handshake phase (the phase which establishes an SSL session), a cipher is determined. The client usually provides a list of the ciphers it supports and the server chooses which one to use. During an SSL handshake you can see in logfiles which cipher is chosen.
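
If you want to see which cipher suites your local OpenSSL installation supports (and which key exchange, authentication, encryption and MAC algorithms they are composed of), you can list them. For example (the 'HIGH:!aNULL' filter is just an illustration):

openssl ciphers -v 'HIGH:!aNULL'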

Let’s get started!

I used 2 SOA Suite 12.2.1.2 installations (complete, no quickstart) in 2 different VM’s for this example: soaserver1 and soaserver2. I used a host-only network with fixed IP’s in VirtualBox and added IP/hostname mappings in the hosts files of the two servers.

Create a self-signed certificate autority

A blog explaining the topic on creating your own certificate authority can be found here. This is just my short summary with some corrections. Do read it for some easy to understand background information.

This simple example uses OpenSSL. OpenSSL is installed by default on most Linux environments and can also be installed on other OSs.

First create a private key for your certificate authority:

openssl genrsa -des3 -out rootCA.key 2048

I create an RSA key and protect it with the DES3 cipher algorithm, based on a password. I want my key to have a length of 2048 bits. You can also choose ECC keys; they can be smaller than RSA keys while providing the same level of protection. ECDSA (Elliptic Curve Digital Signature Algorithm) ciphers use ECC keys. Keep this key private! It allows you to sign public keys (see later in this post) and create trust.

Next I self-sign this generated key. This creates a public signed key for the certificate authority. I can load this key in truststores to achieve trust for keys which are signed with this certificate:

openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem -subj '/CN=Conclusion/OU=Integration/O=AMIS/L=Nieuwegein/ST=Utrecht/C=NL' -extensions v3_ca

Let’s break this down:

  • req: do a request
  • x509: this defines the format of the key to be generated. In the x509 standard, several pieces of metadata can be stored with the certificate and the certificate authority structure is also part of the x509 standard. Read more here.
  • new: generate a new key
  • nodes: this is actually ‘no DES’. My public key does not need to be protected with a password.
  • key: specifies the private key to sign
  • sha256: secure hash algorithm. Hashing is used to provide data integrity functionality. Creating a hash of a transmission allows you to check at a later time if the transmission has been tampered with.
  • days: specifies the validity of the generated certificate
  • subj: provides some metadata for the certificate
  • extensions v3_ca: this adds a metadata field to the certificate indicating that it is a certificate of a certificate authority. If this extension is not added, certain validations might fail

You can use the certificate authority private key and certificate as server identity but you shouldn’t. This will give certain validation errors because of the ‘extensions v3_ca’.

Create server identity keys

Next we create a private key which will be used as identity of the WebLogic server

openssl genrsa -des3 -out soaserver1.key 2048

After we have created this private key, we can create a certificate signing request for this private key

openssl req -new -key soaserver1.key -out soaserver1.csr -subj '/CN=soaserver1/OU=Integration/O=AMIS/L=Nieuwegein/ST=Utrecht/C=NL'

This is pretty similar to what we have done for the certificate authority. However, mind the subj clause here: the common name should match the server hostname. This will be used later for verification of the identity of the server by the client. In order to allow two-way SSL, I added the server hostname to IP mapping to every server's hosts file. In an enterprise you would use DNS (the domain name system) for this, since you do not want to maintain every mapping in every server locally.

Next sign the certificate using the information in the private key and certificate of the certificate authority.

openssl x509 -req -in soaserver1.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out soaserver1.crt -days 1024 -sha256

This is very similar to signing the certificate authority certificate. Mind that a validity period longer than that of the certificate authority key is of course useless. The -CAcreateserial option creates a new file, rootCA.srl. This serial number is unique for every signed certificate. You should save it so that at a later time you can check whether a certificate has been tampered with.

The next time you sign a certificate, you can use:

openssl x509 -req -in soaserver1.csr -CA rootCA.pem -CAkey rootCA.key -CAserial rootCA.srl -out soaserver1.crt -days 1024 -sha256

This will increase the previous serial by 1, making sure it is unique.

Creating an identity keystore

Now you have a signed certificate and a private key. Time to make a Java keystore (JKS) which can be used in WebLogic server and SOA Suite and other pieces of Java.

openssl pkcs12 -export -in soaserver1.crt -inkey soaserver1.key -chain -CAfile rootCA.pem -name "soaserver1" -out soaserver1.p12

keytool -importkeystore -deststorepass Welcome01 -destkeystore soaserver1identity.jks -srckeystore soaserver1.p12 -srcstoretype PKCS12

The above steps:

  • creating a private key
  • creating a certificate signing request
  • signing the certificate with the private key of the certificate authority
  • creating an identity keystore

need to be done for every server.

Creating a trust keystore

Here you can pick the fruits of the above work of using a certificate authority to sign your server private keys. You can use the certificate authority certificate in a truststore and every key signed with the certificate is trusted. You do not need to load every specific server certificate into every truststore the server needs access to. Creating a truststore is easy and you can do this once and use the same trust.jks file in all your servers.

keytool -import -alias rootCA -file rootCA.pem -keystore trust.jks -storepass Welcome01
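
To verify what actually ended up in the keystores (aliases, certificate chains, validity dates), you can list their contents with keytool, for example:

keytool -list -v -keystore trust.jks -storepass Welcome01
keytool -list -v -keystore soaserver1identity.jks -storepass Welcome01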

WebLogic and SOA Suite

It is interesting to notice that the incoming WebLogic configuration differs from the outgoing SOA Suite configuration. This is of course not surprising, since a server usually only has a single identity, but an integration product like SOA Suite should be able to interact with multiple protected external resources, maybe requiring different ciphers and keys for each of them. Also, SOA Suite in the past (I’m not sure if that is still the case) could run on IBM WebSphere instead of WebLogic Server. Thus I can understand Oracle chose to provide a more generic implementation of SSL in the SOA Suite than the WebLogic specific one.

WebLogic

The WebLogic server configuration is pretty straightforward. In this example I’m only looking at SSL for incoming and outgoing messages for SOA Suite. The WebLogic specific configuration is only relevant for incoming connections. Basically the steps are as follows:

  • Enable SSL for the managed server
  • Specify keystores for identity and trust
  • Configure incoming SSL specifics
  • Force the server to use TLS1.2
Enable SSL for the managed server

First Enable the listen port for SSL. In WebLogic console, environment, servers, specify your server, configuration, general and indicate ‘SSL Listen port enabled’. You can also specify the SSL port here.

Specify the keystores for identity and trust

In WebLogic console, environment, servers, specify your server, configuration, keystores. You can specify the identity and trust keystores you have created during the above steps.

Configure incoming SSL specifics

In WebLogic console, environment, servers, specify your server, configuration, SSL. You can specify the identity key used for the server and several checks which can be done when establishing the SSL connection.

Some important settings:

  • BEA Hostname verifier. This indicates the CN field in the certificate is checked against the server hostname.
  • Client certs requested and enforced. If set, Two-Way SSL will be used and the client won’t be able to connect unless it presents a trusted certificate.
  • Built-in SSL Validation and Cert Path Validators. This checks the certificate chain.

It is important to understand what these checks do. A host name verifier ensures the host name in the URL to which the client connects matches the host name in the digital certificate that the server sends back as part of the SSL connection. This helps prevent man in the middle attacks where the client might connect to a different URL.

The below situation is something you won’t prevent even with these checks. I could connect without problems from soaserver1 to the soaserver2 WebLogic server using the certificate of soaserver2. Also, when using the private key of soaserver1 as identity on soaserver2, soaserver2 would not complain about this. FireFox would though, and most likely other clients as well.

Force TLS1.2

If you want to force WebLogic / SOA Suite to use TLS 1.2 you can specify the following JVM parameters in the setDomainEnv.sh file.

-Dweblogic.security.SSL.minimumProtocolVersion=TLSv1.2 -Dhttps.protocols=TLSv1.2

SOA Suite

The SOA Suite configuration is a bit more elaborate in that it requires configuration in different places of which not all can be done from the GUI.

The steps which need to be performed are:

  • Specify the identity store used by SOA
  • Create a keystore password credential in the credential store
  • Configure composite to use two-way SSL
Specify identity store

First you have to specify the identity store which the SOA Suite will use for outbound connections. You can find this setting by going to SOA, soa-infra, SOA Administration, Common Properties, (scroll down), ‘More SOA Infra Advanced Configuration Properties…’

Here you have to specify the servers identity keystore. In my case /home/oracle/certs/soaserver1identity.jks.

Create a keystore password credential in the credential store

Next you have to specify the keystore password. If you forget to do this, you will encounter errors like:

On the client:
<May 7, 2017, 12:58:43,939 PM CEST> <Error> <oracle.integration.platform.blocks.soap> <BEA-000000> <Unable to create SSL Socket Factory>

On the server:
[2017-05-07T12:26:02.364+02:00] [soa_server1] [NOTIFICATION] [] [oracle.integration.platform.common.SSLSocketFactoryManagerImpl] [tid: [ACTIVE].ExecuteThread: ’25’ for queue: ‘weblogic.kernel.Default (self-tuning)’] [userId: <anonymous>] [ecid: cd874e6b-9d05-4d97-a54d-ff9a3b8358e8-00000098,0] [APP: soa-infra] [partition-name: DOMAIN] [tenant-name: GLOBAL] Could not obtain keystore location or password

You can set the keystore password by going to your domain, Security, Credentials. You can create a credential map SOA with a keyname/user of KeyStorePassword with the password you have used for your keystore. It will use the same password for the key as for the keystore if not specified. You can create a KeyPassword entry for the password for the key if they differ.
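
Instead of using the console, the same credential can presumably also be created with the WLST createCred command (a hedged sketch; the admin server URL and the passwords are placeholders for your own values):

connect('weblogic', 'welcome1', 't3://adminhost:7001')
createCred(map='SOA', key='KeyStorePassword', user='KeyStorePassword', password='Welcome01', desc='SOA keystore password')
# only needed if the key password differs from the keystore password
# createCred(map='SOA', key='KeyPassword', user='KeyPassword', password='Welcome01', desc='SOA key password')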

  

Configure composite to use two-way SSL

This step is easy. You have to add a binding property to your reference which indicates you want to use two-way SSL.

In the composite.xml file on your reference you can add:

<property name="oracle.soa.two.way.ssl.enabled">true</property>

This causes the composite binding to use the identity (with the credential store password) for outbound SSL specified in the previously configured MBean.

You should of course also not forget to set the endpoint to a port which hosts HTTPS and indicate in the URL that it should use HTTPS to call this endpoint. In my example I’ve overridden the URL in the EM. Be aware though that overriding the endpoint URL might still cause the original endpoint to be called when the overridden endpoint is not accessible (if for example the SSL connection has issues).
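
For context, this is roughly where the property ends up in composite.xml; the reference name, WSDL references and endpoint location below are purely illustrative, only the property itself is the relevant part:

<reference name="ExternalService">
  <interface.wsdl interface="http://xmlns.example.com/service#wsdl.interface(ServicePortType)"/>
  <binding.ws port="http://xmlns.example.com/service#wsdl.endpoint(Service/ServicePort)"
              location="https://soaserver2:7002/soa-infra/services/default/SomeComposite/service_ep?WSDL">
    <property name="oracle.soa.two.way.ssl.enabled">true</property>
  </binding.ws>
</reference>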

Some useful tips

If you want to debug SSL connections, the following tips might help you.

FireFox

It might appear strange to use a web browser to test SSL connections, but which piece of software uses SSL more than a browser? FireFox is very clear in its error messages about what has gone wrong, which greatly helps with debugging. FireFox uses its own certificate store and can provide certificates to log in to a server. You can configure them from FireFox, Preferences, Advanced, Certificates, View Certificates. Here you can import client certificates such as the p12 files you have generated in an earlier step.

This provides for a very easy way to check whether a server can be accessed with SSL and if the server certificate has been correctly generated / set-up. FireFox also extensively checks certificates to provide the green-lock icons people are quite familiar with.

In this case I have SSL enabled for soaserver1 on port 7002. I open https://soaserver1:7002 in FireFox (do not forget the HTTPS part). Since I have enabled ‘Client certs requested and enforced’ in the WebLogic SSL configuration, it will ask me for a client key.

In this case you can check whether a client certificate will be trusted. When you open https://soaserver1:7002 and get a 404 message, the WebLogic server is responding to you, which means the SSL handshake has succeeded.

In FireFox you can tweak the cipher suites which are used. Read about this here. Do mind that SSL connections can be cached and FireFox can remember to send specific keys. If you run FireFox on soaserver1 and open a link on soaserver1, Wireshark (read below) will not detect traffic on the same interface which is used to access soaserver2.

Wireshark

Use Wireshark to monitor connections/handshakes.

  • You can confirm the SSL/TLS version being used
  • You can see the number of messages which have crossed the wire (this allows you to distinguish retries if, for example, a handshake fails)
  • Allows you to decrypt SSL traffic (if you have the private key)
  • It allows you to confirm an SSL connection is actually being set up. If you do not see it in Wireshark, no message has been sent and the connection build-up fails on the client. This for example happens when the SOA, KeyStorePassword entry has not been set in the SOA Suite credential store.

SSL debug logging

If you want to see what is happening with your SSL connection, it is very helpful to provide some JVM switches in setDomainEnv.

-Dweblogic.security.SSL.verbose -Djavax.net.debug=all -Dssl.debug=true

You can also enable WebLogic SSL debugging in WebLogic console. Open a server and enable weblogic.security.SSL

Portecle

Portecle is a handy and freely available tool if you want to manage keystores and look at key details.

Which service is called?

Suppose you have process A on server X which calls process B on server Y. For testing you first deploy process B on server X and use the WSDL of process B locally from process A. Next you override the endpoint to refer to the SSL port of server Y. What happens if the SSL connection cannot be established? By default, there are 3 retries after which the process falls back to using the endpoint as specified in the WSDL file. When testing it might seem the call from process A on X to B on Y works but it is actually a local call because the local call is the fallback for the remote call. In this case you should confirm an instance of B is created on Y.

Finally Performance impact

Using SSL of course has a performance impact. 1-way SSL is faster than 2-way SSL. Using encryption is slower than not using encryption. Key length and cipher suites also play a major role in how fast your SSL connection will be. I have not measured the precise cost of the different options, but you should consider what you need and what you are willing to pay for it in terms of performance impact.

  • One way SSL allows the client to verify the server identity (certificate, hostname). The server provides the client with a public key but not the other way around.
  • Two way SSL also allows the server to verify the client. The client also needs to provide a public key.

SSL verifies host identities, keys, certificate chains. It does not allow you to provide (specific user) application authentication or authorization. You could do it with SSL but it would require giving every user a specific certificate. There are better ways to do that such as WS-Security, SAML or OAuth.

Entropy

If you use a server which has a lot of SSL connections, the random number generator is asked often for a new random number. Random numbers are generated by using entropy (a measure of randomness/disorder), which is a limited resource, especially in virtualized environments.

There is a setting which allows WebLogic server to recycle random numbers at the cost of security (the random number generator becomes predictable). Read more about that here.

-Djava.security.egd=file:/dev/./urandom

Oracle does not recommend using this recycling mechanism in production environments since if you can predict the random number generator, you have introduced a security vulnerability which can be exploited. Next to speeding up SSL connections, your server startup will most likely also be improved.

CRLs

I’ve not talked about a lot of things such as certificate revocation lists (CRLs). These lists contain keys which have been compromised. Compromised means the private key of a certificate authority has become public. Using the private CA key, someone is able to create new certificates which are being trusted by people who trust the CA. If a person can do such a thing, he is able to gain access to systems. Remember private keys can also be used to decrypt traffic? This is of course an issue on the internet but also when you have your own certificate authority. More generally speaking, if a private key is compromised, all trust should be revoked since you cannot count anymore on that for example a server is the sole owner of the key and is the only one who can decrypt traffic.

JDBC and SSL

Read more about this in the whitepaper here. It requires Oracle Advanced Security (OAS), which is an Oracle Database Enterprise Edition option. The US Government does not allow double encryption (you can imagine why..). If you configure Oracle Advanced Security to use SSL encryption and another encryption method concurrently, then the connection fails. See SSL Usage issues here.

Other things

I have not talked about securing the connection between managed servers in a cluster and between the NodeManager and managed servers. You can read more about that here. Do mind though that using trust can be more efficient than specifically putting every public key in every truststore. Especially when you have many servers.

The post Oracle SOA Suite: Two-way SSL with TLS1.2 made easy (slightly less complicated) appeared first on AMIS Oracle and Java Blog.

Consequences of stopping Oracle support

Sun, 2017-05-07 03:40

When buying licenses for Oracle, the first year of support is mandatory. After that, a customer may decide to stop paying for the yearly technical support of the Oracle licenses. The consequences of that decision are not always clear to customers. Most OLSA’s will contain the sentence “If you decide not to purchase technical support, you may not update any unsupported program licenses with new versions of the program.”

This is correct, but there is more to think about. This post will cover the elements that should be considered when deciding on stopping the support.

Unsupported actions

The Technical Support Policy of Oracle clarifies a bit more of what actions a customer is not entitled to do when stopping the support:

Customers with unsupported programs are not entitled to download, receive, or apply updates, maintenance  releases, patches, telephone assistance, or any other technical support services for unsupported programs.

This means the software instantly becomes legacy, AND a substantial risk. The Oracle software will no longer be upgraded or patched, while the environment (O.S., client software, middleware, other connected software) will be, with the possible effect that the application might not work in the future.

Audit-ranking

Although Oracle claims that the departments Support, Account Management and LMS act more or less separately and will not share this kind of information, it is naive to assume that the decision to stop support of (part of) the Oracle licenses has no consequences for the customer's rank on LMS's list for submitting an audit.

 

Matching Service Levels

The support of the license to be stopped could be part of a socalled ‘subset’. Then the following rule applies according to the Support Policy:

You may desupport a subset of licenses in a license set only if you agree to terminate that subset of licenses.

The definition of a license subset is quite elaborate, but here are two examples:

Oracle Database Enterprise Edition with RAC, Diagnostic and Tuning Pack.

Weblogic Suite with SOA Suite

So stopping support of just the options is a ‘Matching Service Levels’ issue, which LMS will translate as non-compliance, with the chance that My Oracle Support will not be willing to help when you submit a Service Request.

 


Repricing

Support of Oracle software is related to CSI-numbers, and there may be several CSI-numbers in one contract. And a customer may have more contracts, all with their own negotiated discounts. The following line in the Support Policy is important when stopping support of a line item:

Pricing for support is based upon the level of support and the volume of licenses for which support is ordered. In the event that a subset of licenses on a single order is terminated or if the level of support is reduced, support for the remaining licenses on that license order will be priced at Oracle’s list price for support in effect at the time of termination or reduction minus the applicable standard discount.

This is ‘Repricing’, also called ‘Pricing following Reduction’. The updated support renewal would then be recalculated at a less favourable discount, ending up with no savings – just less product on support for the same costs.

This mostly applies to terminating a license and not to terminating support (although that is a ‘reduced level of support’), but it’s important to know.

By the way, terminating a license within a CSI-number – instead of stopping support – is in some cases not a reason for repricing, e.g. when there has been a reorganisation of contracts in the past.

Reinstatement

When a customer decides – for whatever reason – to reinstate the support, there will be a reinstatement fee.

The reinstatement fee is computed as follows:

a) if technical support lapsed, then the reinstatement fee is 150% of the last annual technical support fee you paid for the relevant program;

b) if you never acquired technical support for the relevant programs, then the reinstatement fee is 150% of the net technical support fee that would have been charged

Engineered Systems

Stopping support of a product line also has a peculiar effect on products running on engineered systems.

The lifecycle management of engineered systems is maintained by so-called ‘bundle patches’. These bundle patches contain storage firmware patches, BIOS updates, O.S. updates, and ... Oracle software patches.

So, when stopping Oracle support you would still receive the database and middleware patches through the bundle patches, which is not allowed. And although it might be possible not to apply these patches, that would break the lifecycle management of the engineered system. I don’t think this is advisable.

Prerequisites

The prerequisites of making such a decision:

  • An overview of all the Oracle contracts at your firm, which seems pretty obvious, but sometimes takes quite an effort.
  • An overview of what licences you are actually using, compared to what you are entitled to.
Recap

The OPEX (Operational or Operating Expenditures) can be decreased, in some cases substantially, but before jumping to action and conclusions, contact someone who understands the risks and is able to look further ahead into the future together with you.

Resources

Example OLSA: http://www.oracle.com/us/corporate/pricing/olsa-ire-v122304-070683.pdf

Oracle Software Technical Support Policies :  http://www.oracle.com/us/support/library/057419.pdf

The post Consequences of stopping Oracle support appeared first on AMIS Oracle and Java Blog.

The Hello World of Machine Learning – with Python, Pandas, Jupyter doing Iris classification based on quintessential set of flower data

Sat, 2017-05-06 01:58

Plenty of articles describe this hello world of Machine Learning. I will merely list some references and personal notes – primarily for my own convenience.

The objective is: get a first hands on exposure to machine learning – using a well known example (Iris classification) and using commonly used technology (Python). After this first step, a second step seems logical: doing the same thing with my own set of data.

Useful Resources:

Starting time: 6.55 AM

6.55 AM Download and install latest version of Oracle Virtual Box (5.1.22)

7.00 AM Download Fedora 64-bit ISO image (https://getfedora.org/en/workstation/download/)

7.21 AM Create Fedora VM and install Fedora Linux on it from ISO image (create users root/root and python/python); reboot, complete installation, run dnf update (updates worth 850 MB, 1348 upgrade actions – I regret this step), install Virtual Box Guest Addition (non trivial) using this article: https://fedoramagazine.org/install-fedora-virtualbox-guest/.

8.44 AM Save a Snapshot of the VM to retain its fresh, mint, new car smell  condition.

8.45 AM Install Python environment for Machine Learning (Python plus relevant libraries; possibly install Notebook server)

8.55 AM Save another snapshot of the VM in its current state

now the environment has been prepared, it is time for the real action – based on the second article in the list of resources.

10.05 AM start on machine learning notebook sample – working through Iris classification

10.15 AM done with sample; that was quick. And pretty impressive.

 

It seems the Anaconda distribution of Python may be valuable to use. I have downloaded and installed: https://www.continuum.io/downloads .

Note: to make the contents of a shared Host Directory available to all users

cd (go to home directory of current user)

mkdir share (in the home directory of the user)

sudo mount -t vboxsf Downloads ~/share/ (this makes the shared folder called Downloads on the VirtualBox host available as the directory share in the guest (Fedora))

Let’s see about this thing with Jupyter Notebooks (formerly known as IPython). Installing the Jupyter notebook is discussed here: https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch01/README.md . Since I installed Anaconda (4.3.1 for Python 3.6), I have the Jupyter app installed already.

With the following command, I download a number of notebooks:

git clone https://github.com/rhiever/Data-Analysis-and-Machine-Learning-Projects

Let’s try to run one.

cd /home/python/Data-Analysis-and-Machine-Learning-Projects/example-data-science-notebook

jupyter notebook ‘Example Machine Learning Notebook.ipynb’

And the notebook opens in my browser:


I can run the notebook, walk through it step by step, edit the notebook’s contents and run the changed steps. Hey mum, I’m a Data Scientist!
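
For reference, the essence of such an Iris classification can be captured in a few lines of Python. This is a sketch of my own, not the notebook's literal code; it assumes scikit-learn is available (which it is in the Anaconda distribution):

# train a simple classifier on the Iris data set and score it on a held-out test set
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=0)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)
print('accuracy on the test set: {:.2f}'.format(model.score(X_test, y_test)))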

Oh, it’s 11.55 AM right now.

 

Some further interesting reads to get going with Python, Pandas and Jupyter Notebooks – and with data:

The post The Hello World of Machine Learning – with Python, Pandas, Jupyter doing Iris classification based on quintessential set of flower data appeared first on AMIS Oracle and Java Blog.

Golden Gate 12c and DIY Sequence Replication with PL/SQL

Fri, 2017-05-05 10:58

Recently, while migrating AIX 11gR2 Databases to Oracle Linux 12cR1 on an ODA X5-2, our setup of Sequence Replication by Oracle Golden Gate appeared to be faulty. The target side sequences were not automatically incremented.

The problem came to light during the migration of the acceptance databases, and under some time pressure it was decided to generate drop + create statements ( start with = DBA_SEQUENCES.LAST_NUMBER + DBA_SEQUENCES.INCREMENT_BY ) for all sequences in the Source, and to run these statements on the Target. Although this eventually produced the desired result, there were 2 side effects:

    With a total of 1270 sequences, the operation as a whole took more than an hour.
    Packages and triggers referencing these sequences became invalid.

Further research revealed that the Golden Gate Sequence Replication of Production suffered from the same problem, and I wondered if I could find a better solution now that I had a bit more time at hand. Well, I discovered that to set any desired sequence “currval” value, a one-time temporary adjustment of the increment and a subsequent call to the sequence “nextval” pseudo column is sufficient. What follows is the output of a quick test, but check out what happens with “USER_SEQUENCES.LAST_NUMBER”, and what it really means in combination with the cache.

Create a test sequence

CREATE SEQUENCE TEST_SEQ_01 
START WITH 10 
INCREMENT BY 1000 
MINVALUE 10 
CACHE 20 
NOCYCLE 
NOORDER;
 
-- the sequence returns no current value yet
SELECT TEST_SEQ_01.CURRVAL from dual;
  ORA-08002: sequence TEST_SEQ_01.CURRVAL is not yet defined in this session.

-- check out last_number... it equals nextval because the cache doesn't exist yet
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01'; 
  MIN_VALUE	INCREMENT_BY CACHE_SIZE	LAST_NUMBER
  10	      1000	       20	        10

-- generate the first number and create the cache
SELECT TEST_SEQ_01.NEXTVAL from dual;
  NEXTVAL
  10

-- last_number is updated as the highest possible number of the cache
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01';
  MIN_VALUE	INCREMENT_BY CACHE_SIZE LAST_NUMBER
  10	      1000	       20	        20010

-- and now a current value is returned
SELECT TEST_SEQ_01.CURRVAL from dual;
  CURRVAL
  10

Set the current sequence value = 20000 without recreating the sequence

-- adjust the increment
ALTER SEQUENCE TEST_SEQ_01 INCREMENT BY 19990;

-- last_number equals the sequence next value
-- the last "alter sequence" command must have flushed the cache
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01';
  MIN_VALUE	INCREMENT_BY CACHE_SIZE LAST_NUMBER
  10	      19990	       20	        20000

-- generate the next value and create a new cache
SELECT TEST_SEQ_01.NEXTVAL from dual
  NEXTVAL
  20000

-- last_number is updated as the highest possible number of the cache
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01';
  MIN_VALUE	INCREMENT_BY CACHE_SIZE LAST_NUMBER
  10	      19990	       20	        419800

-- the sequence has the desired current value
SELECT TEST_SEQ_01.CURRVAL from dual
  CURRVAL
  20000

Reset the increment

-- set the increment_by value back to original
ALTER SEQUENCE TEST_SEQ_01 INCREMENT BY 1000;

-- again, the cache is flushed and last_number equals the next value
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01';
  MIN_VALUE	INCREMENT_BY CACHE_SIZE LAST_NUMBER
  10	      1000	       20	        21000

-- generate the next value and create a new cache
SELECT TEST_SEQ_01.NEXTVAL from dual
  NEXTVAL
  21000

-- last_number is updated as the highest possible number of the cache 
SELECT MIN_VALUE, INCREMENT_BY, CACHE_SIZE, LAST_NUMBER 
FROM user_sequences 
WHERE sequence_name = 'TEST_SEQ_01';
  MIN_VALUE	INCREMENT_BY CACHE_SIZE LAST_NUMBER
  10	      1000	       20	        41000

-- the increment is back to 1000
SELECT TEST_SEQ_01.NEXTVAL from dual
  NEXTVAL
  22000

This test shows that “USER_SEQUENCES.LAST_NUMBER”:

  • Is identical with sequence “nextval” directly after a “create sequence” or “alter sequence” command, because the cache is not there yet after first definition or gets flushed with an alter.
  • Is updated and saved to disk as the highest possible cache number after a call to “nextval”.
  • Serves as safeguard ( i.e. after a crash ) to ensure that sequence numbers do not conflict with numbers previously issued.

I decided to use “DBA_SEQUENCES.LAST_NUMBER” instead of the “currval” pseudo column to compare sequences in Source and Target. The reason is that “currval” is only ( and by definition ) the value returned by my session's last call to “nextval”. If my session has not called “nextval” yet, “currval” is undefined. So I would have to “nextval” 1270 sequences in Source and also in Target before I could even start with the comparison, while last_numbers are already there to compare with. Also, this activity is unwanted during the short inactive Source and inactive Target migration stage and would take too much time. Last but not least, an exact match of sequence “currval” values is not really necessary… a guarantee of higher sequence “currval” values in Target compared to those in Source is quite enough.

The next short piece of code is what I eventually came up with and used in the Production migration. It took less than 3 minutes processing time, did not render any Oracle object invalid, and contributed highly to a very limited migration inactivity time.

    -- Code assumes:
    --   1. "nocycle" sequences with positive "increment_by" values
    --   2. identical number of sequences and sequence DDL in Source and Target Database 
    -- Grant 'alter any sequence' and 'select any sequence' to the owner
    -- Replace the database link and schema names with your own
    -- Run the code from Target
    declare
      v_ret PLS_INTEGER := 0;
      v_dummy VARCHAR2(100);
      v_ln number := 0;
      v_ib number := 0;
      v_cz number := 0;
      v_incr number := 0;
    begin
      for i in ( select sequence_owner  so
                      , sequence_name   sn
                      , last_number     ln
                      , increment_by    ib
                      , cache_size      cz 
                 from dba_sequences@<DBLINK_FROM_SOURCE2TARGET>
                 where sequence_owner in ('<SCHEMA01>','<SCHEMA02>','<SCHEMA03>','<SCHEMA04>') )
      loop
          select last_number
               , increment_by 
               , cache_size
            into v_ln
               , v_ib
               , v_cz 
          from dba_sequences
          where sequence_owner = i.so
            and sequence_name = i.sn;
    
    -- set the difference in last_numbers as increment if target.last_number < source.last_number
          if v_ln < i.ln then
            v_incr := i.ln - v_ln;
    -- set the cache as increment if last_numbers match  
          elsif v_ln = i.ln then
            v_incr := v_ib * v_cz;
          end if;
          
          if v_ln <= i.ln then    
            execute immediate 'alter sequence '||i.so||'.'||i.sn||' increment by '||v_incr;
            execute immediate 'select '||i.so||'.'||i.sn||'.nextval from dual' into v_dummy;
            execute immediate 'alter sequence '||i.so||'.'||i.sn||' increment by '||v_ib;
            v_ret := v_ret +1;
          end if;
      end loop;
      dbms_output.put_line('Nr. sequences adjusted: '||v_ret);
    end;
    /
    

    The post Golden Gate 12c and DIY Sequence Replication with PL/SQL appeared first on AMIS Oracle and Java Blog.

    Smooth, easy, lightweight – Node.js and Express style REST API with Java SE

    Thu, 2017-05-04 05:28

    It is easy to be seduced by some of the attractive qualities of Node (aka Node.js) – the JavaScript technology that makes server side development fun again. Developing light weight applications that handle HTTP requests in a rapid, straightforward way with little overhead and no bloated infrastructure is easy as pie – and feels a long way from the traditional Java development. I like Node. I feel the attraction. I have used Node for simple and more complex applications. It’s cool.

    I have realized that what is so nice about Node is also largely available with Java. Of course, there are many ways of doing Java development that are not lightweight and rapid and low overhead at all. Just as, I am sure, we can find ways to spoil Node development. More importantly, there are ways to make Java development comparably breezy to Node development. In this article I take a brief look at the development of a REST API using nothing but the [Oracle] Java Runtime and Maven as the package manager (Java’s equivalent of Node’s npm). Using the Java 8 JDK and Maven I am able to program and run a REST API from my command line, running locally on my laptop, using under two dozen lines of code. In a way that is very similar to what I would do with Node and the Express library. The steps described below can be executed in less than 15 minutes – similar to what Node based development of this type of REST API foundation would require.

    The source code accompanying this article is in GitHub: https://github.com/lucasjellema/java-express-style-rest-api – but it is not a lot of code at all.


    The final result of this article is simple: a REST API running locally that handles simple GET and POST requests. The logic of the API still has to be implemented (and some JSON processing may have to be added, which, granted, is more complex in Java than in Node) – but that is fairly straightforward to do.

    The REST API can be invoked from Postman (or any other HTTP client) with simple GET and POST requests; on the command line, the running REST API reports at which address it is listening.

    The application is started with a single command line (compare to npm start) and listens on port 8765 on localhost to process incoming requests.

    The steps for implementing this REST API and running it locally are described below.

    Implementation of REST API

    Again, the only two prerequisites for these steps are: a locally installed Oracle JDK 8 (http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) and Maven 3 environment (https://maven.apache.org/download.cgi)

    1. Create scaffold for new application using Maven (compare npm init)

    mvn -B archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DgroupId=nl.amis.rest -DartifactId=my-rest


    2. Edit Maven’s pom.xml to add dependencies for Jersey and Jersey Container (compare package.json and npm install --save)


    Note: also add build section in pom.xml with explicit indication of Java 1.8 as source and target version (to ensure Lambda expressions are supported)

     

    3. Retrieve required libraries (jar files) using Maven (compare npm install)

    mvn install dependency:copy-dependencies

    This will install all required JARs into directory target\dependency – compare to node_modules in a Node application.


    4. Edit Java class App to create the simplest and most straightforward HTTP request serving application conceivable – use imports for the required dependencies (compare the require instructions in a Node application)

    package nl.amis.rest;
    
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    
    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    
    public class App {
        private final static int port = 8765;
    
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/app", (HttpExchange t) -> {
                byte[] response = "Hello World from HttpServer".getBytes();
                t.sendResponseHeaders(200, response.length);
                OutputStream os = t.getResponseBody();
                os.write(response);
                os.close();
            });
            server.setExecutor(null); // creates a default executor
            server.start();
            System.out.println("HTTP Server is running and listening at " + server.getAddress() + "/app");
        }
    }
    

     

    and invoke it:


    5. Class App2.java builds on App to add the REST API capabilities – using class Api as the REST Resource (@Path("api")) with the Resource Methods to handle GET and POST requests.

    package nl.amis.rest;
    
    import java.io.IOException;
    
    import com.sun.net.httpserver.HttpServer;
    
    import java.net.URI;
    
    import javax.ws.rs.core.UriBuilder;
    
    import org.glassfish.jersey.jdkhttp.JdkHttpServerFactory;
    import org.glassfish.jersey.server.ResourceConfig;
    
    public class App2 {
        private final static int port = 8765;
        private final static String host = "http://localhost/app";
    
        public static void main(String[] args) throws IOException {
            URI baseUri = UriBuilder.fromUri(host).port(port).build();
            ResourceConfig config = new ResourceConfig(Api.class);
            HttpServer server = JdkHttpServerFactory.createHttpServer(baseUri, config);
            System.out.println("HTTP Server is running and listening at "+baseUri+"/api" );
        }
    }
    

    and

    package nl.amis.rest;
    
    import javax.ws.rs.Consumes;
    import javax.ws.rs.GET;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.Context;
    import javax.ws.rs.core.Request;
    
    @Path("api")
    public class Api {
    
        @POST
        @Consumes("application/json")
        @Produces("text/plain")
        public String postApiMessage(@Context Request request, String json) {
            System.out.println("received event:" + json);
            return "post message received " + json;
        }
    
        @GET
        @Produces("text/plain")
        public String getApiMessage(@Context Request request) {
            return "nothing to report from getApiMessage.";
        }
    
    }
    

    6. Build application using Maven (this step does not really exist for node applications; programming errors come out at run time )

    mvn package

    This creates a JAR file – my-rest-1.0-SNAPSHOT.jar, 6 KB – that can be shipped, cloud deployed or simply executed (as in the next section)

     

    7. Run application which starts the REST API at http://localhost:8765

    java -cp target/my-rest-1.0-SNAPSHOT.jar;target/dependency/* nl.amis.rest.App

    or

    java -cp target/my-rest-1.0-SNAPSHOT.jar;target/dependency/* nl.amis.rest.App2
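
    Once the application is running, the endpoints can also be exercised from the command line with curl (assuming curl is available; the JSON payload is an arbitrary example):

    curl http://localhost:8765/app/api

    curl -X POST -H "Content-Type: application/json" -d "{\"party\":\"hello\"}" http://localhost:8765/app/api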

     

    Resources

    Get URL parameters using JDK HTTP server  http://www.rgagnon.com/javadetails/java-get-url-parameters-using-jdk-http-server.html

    Example of reading headers and of downloading (PDF) file through HTTP Server: http://www.rgagnon.com/javadetails/java-have-a-simple-http-server.html

    The post Smooth, easy, lightweight – Node.js and Express style REST API with Java SE appeared first on AMIS Oracle and Java Blog.

    Net usable storage when using Oracle Database Appliance

    Sat, 2017-04-29 08:19

    Since the birth of the ODA X6-2 HA, the net usable storage has become more of a challenge when advising a customer, as the X5-2 HA had substantially more – but slower – storage. This very short blogpost is just a quick summary / checklist of the net usable storage, depending on the ODA and the mirroring you choose.

    By the way, for other ODA-comparisons than storage I wrote another blogpost.

    The overviews below are extracts of Oracle PDF’s (see Sources).

    Usable storage of the single node ODA’s:


     

    Usable storage of the two-node ODA (including the ‘old’ X5-2 HA for comparison):


     

    ODA X6-2 HA Normal Redundancy – 2 x Mirroring

     


     

    ODA X6-2 HA High Redundancy – 3 x Mirroring

     


     

    Sources:

    ODA, comparing the line: https://technology.amis.nl/2017/01/08/oracle-database-appliance-x6-2l-x6-2-ha-comparing-line/

    Oracle PDF’s :

    – 201701120 EN Oracle Database Appliance X6-2 Portfolio Summary.pdf

    – ODA X6-2HA Net Storage.pdf

    The post Net usable storage when using Oracle Database Appliance appeared first on AMIS Oracle and Java Blog.

    R: Utilizing multiple CPUs

    Sat, 2017-04-22 07:02

    R is a great piece of software to perform statistical analyses. Computing power can however be a limitation. R by default uses only a single CPU. In almost every machine, multiple CPUs are present, so why not utilize them? In this blog post I’ll give a minimal example and some code snippets to help make more complex examples work.

    Utilizing multiple CPUs

    Luckily using multiple CPUs in R is relatively simple. There is a deprecated library multicore available which you shouldn’t use. A newer library parallel is recommended. This library provides mclapply. This function only works on Linux systems so we’re not going to use that one. The below examples work on Windows and Linux and do not use deprecated libraries.

    A very simple example
    library(parallel)
    
    no_cores <- detectCores() - 1
    cl <- makeCluster(no_cores)
    arr <- c("business","done","differently")
    
    #Work on the future together
    result <- parLapply(cl, arr, function(x) toupper(x))
    
    #Conclusion: BUSINESS DONE DIFFERENTLY
    paste (c('Conclusion:',result),collapse = ' ')
    
    stopCluster(cl)
    

    The example is a minimal example of how you can use clustering in R. What this code does is spawn multiple processes and process the entries from the array c(“business”,”done”,”differently”) in those separate processes. Processing in this case is just putting them in uppercase. After it is done, the result from the different processes is combined in Conclusion: BUSINESS DONE DIFFERENTLY.

    If you remove the stopCluster command, you can see there are multiple processes open on my Windows machine:

    After having called the stopCluster command, the number of processes is much reduced:

    You can imagine that for such a simple operation as putting things in uppercase, you might as well use the regular apply function, which saves you the overhead of spawning processes. If however you have more complex operations like the below example, you will benefit greatly from being able to utilize more computing power!

    A more elaborate example

    You can download the code of this example from: https://github.com/MaartenSmeets/R/blob/master/htmlcrawling.R

    The sample however does not work anymore, since it parses Yahoo pages which have recently been changed. It does illustrate, however, how to do parallel processing.

    Because there are separate R processes running, you need to make libraries and functions available to these processes. For example, you can make libraries available like:

    #make libraries available in other nodes
    clusterEvalQ(cl, {
      library(XML)
      library(RCurl)
      library(parallel)
      }
    )
    

    And you can make functions available like this:

    clusterExport(cl, "htmlParseFunc")
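
    Putting the two together, the overall pattern looks roughly like the sketch below. This is a hypothetical stand-in, not the original code: htmlParseFunc itself is not shown in the post, so a simple title-extracting function takes its place, and the XML and RCurl packages from the sample are assumed to be installed.

    library(parallel)

    cl <- makeCluster(detectCores() - 1)

    # make libraries available in the worker processes
    clusterEvalQ(cl, {
      library(XML)
      library(RCurl)
    })

    # stand-in for htmlParseFunc: fetch a page and return its <title>
    htmlParseFunc <- function(url) {
      doc <- htmlParse(getURL(url))
      xpathSApply(doc, "//title", xmlValue)
    }

    # make the function available in the worker processes
    clusterExport(cl, "htmlParseFunc")

    urls <- c("http://www.example.com", "http://www.example.org")
    titles <- parLapply(cl, urls, htmlParseFunc)

    stopCluster(cl)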
    
    Considerations

    There are several considerations (and probably more than those mentioned below) when using this way of clustering:

    • Work packages are distributed evenly over the CPUs. If the work packages differ greatly in the amount of work they contain, you can end up with parLapply waiting for one process to complete while the other processes are already done. Try to use work packages of roughly equal size, or use the load-balancing variant parLapplyLB (see the sketch after this list).
    • If a process runs too long, it will time out. You can set the timeout (in seconds) when creating the cluster, like: cl <- makeCluster(no_cores, timeout=50)
    • Every process takes memory. If you process large variables in parallel, you might encounter memory limitations.
    • Debugging the different processes can be difficult. I will not go into detail here.
    • GPUs can also be utilized to do calculations. See for example: https://www.r-bloggers.com/r-gpu-programming-for-all-with-gpur/. I have not tried this, but performance graphs online indicate that much better performance can be achieved than with CPUs alone.
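
    The first two considerations can be illustrated with a small, hypothetical sketch (not from the original post). The timeout is set when the cluster is created, and parLapplyLB, the load-balancing variant of parLapply, hands out work packages one at a time so idle workers pick up the next package instead of waiting; the work sizes below are arbitrary.

    library(parallel)

    # hypothetical work packages of very different sizes
    work <- c(1e5, 1e5, 1e7, 1e5)
    heavy <- function(n) sum(sqrt(seq_len(n)))

    # set the timeout (in seconds) when creating the cluster
    cl <- makeCluster(detectCores() - 1, timeout = 50)

    # load-balancing variant of parLapply
    result <- parLapplyLB(cl, work, heavy)

    stopCluster(cl)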

    The post R: Utilizing multiple CPUs appeared first on AMIS Oracle and Java Blog.

    Better track the Usage of Database Options and Management Packs, or it will cost you

    Fri, 2017-04-21 06:34

    So here it is
    Oracle announces a license audit, some urgency kicks in, and this familiar but also really serious question comes down from management: "Are we using any unlicensed database features?" The seriousness is quite understandable, because if so, the company can look forward to negotiations with Oracle over license fees, possibly resulting in considerable and unforeseen extra costs.

    Tracking… why
    To be able to provide a swift and correct answer to this question, I track the usage of database options and management packs. As you might expect, tracking also enables detection of any deliberate or accidental unlicensed feature usage, so I can stop it sooner rather than later. And stopping it sooner is better, because usage over months or years isn't as easily excused by Oracle as usage during a day or week.

    Tracking… how
    Tracking is done by way of two views, both derived from "options_packs_usage_statistics.sql", provided by Oracle Support in MOS Note 1317265.1. Recently this script was updated to handle version 12.2, so I had to update my views too. The Oracle script can be used on database version 11gR2 and higher, and on 12c container as well as non-container 12c databases. My views can also be used on 11gR2 databases and higher (EE, SE and SE2), but assume a non-container database.

    Bugs
    Some bugs (Doc ID 1309070.1) are associated with DBA_FEATURE_USAGE_STATISTICS, the main data source for "options_packs_usage_statistics.sql". At the time of writing these concern false positives for the use of compression or encryption with SecureFiles and RMAN, and for the reporting of Oracle Spatial usage where only Oracle Locator is used.

    Disclaimer
    The following code provides usage statistics for Database Options, Management Packs and their corresponding features.
    This information is to be used for informational purposes only and does not represent any license entitlement or requirement.

    SET DEFINE OFF;
    CREATE OR REPLACE FORCE VIEW FEATURE_USAGE
    AS
    select product
         , decode(usage, 'NO_USAGE','NO', usage ) "Used"
         , last_sample_date
         , first_usage_date
         , last_usage_date
    ------- following sql is based on options_packs_usage_statistics.sql  --> MOS Note 1317265.1
    from (
    with
    MAP as (
    -- mapping between features tracked by DBA_FUS and their corresponding database products (options or packs)
    select '' PRODUCT, '' feature, '' MVERSION, '' CONDITION from dual union all
    SELECT 'Active Data Guard'                                   , 'Active Data Guard - Real-Time Query on Physical Standby' , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Active Data Guard'                                   , 'Global Data Services'                                    , '^12\.'                      , ' '       from dual union all
    SELECT 'Advanced Analytics'                                  , 'Data Mining'                                             , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'ADVANCED Index Compression'                              , '^12\.'                      , 'BUG'     from dual union all
    SELECT 'Advanced Compression'                                , 'Advanced Index Compression'                              , '^12\.'                      , 'BUG'     from dual union all
    SELECT 'Advanced Compression'                                , 'Backup HIGH Compression'                                 , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Backup LOW Compression'                                  , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Backup MEDIUM Compression'                               , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Backup ZLIB Compression'                                 , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Data Guard'                                              , '^11\.2|^12\.'               , 'C001'    from dual union all
    SELECT 'Advanced Compression'                                , 'Flashback Data Archive'                                  , '^11\.2\.0\.[1-3]\.'         , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Flashback Data Archive'                                  , '^(11\.2\.0\.[4-9]\.|12\.)'  , 'INVALID' from dual union all -- licensing required by Optimization for Flashback Data Archive
    SELECT 'Advanced Compression'                                , 'HeapCompression'                                         , '^11\.2|^12\.1'              , 'BUG'     from dual union all
    SELECT 'Advanced Compression'                                , 'HeapCompression'                                         , '^12\.[2-9]'                 , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Heat Map'                                                , '^12\.1'                     , 'BUG'     from dual union all
    SELECT 'Advanced Compression'                                , 'Heat Map'                                                , '^12\.[2-9]'                 , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Hybrid Columnar Compression Row Level Locking'           , '^12\.'                      , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Information Lifecycle Management'                        , '^12\.'                      , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Oracle Advanced Network Compression Service'             , '^12\.'                      , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Oracle Utility Datapump (Export)'                        , '^11\.2|^12\.'               , 'C001'    from dual union all
    SELECT 'Advanced Compression'                                , 'Oracle Utility Datapump (Import)'                        , '^11\.2|^12\.'               , 'C001'    from dual union all
    SELECT 'Advanced Compression'                                , 'SecureFile Compression (user)'                           , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'SecureFile Deduplication (user)'                         , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Security'                                   , 'ASO native encryption and checksumming'                  , '^11\.2|^12\.'               , 'INVALID' from dual union all -- no longer part of Advanced Security
    SELECT 'Advanced Security'                                   , 'Backup Encryption'                                       , '^11\.2'                     , ' '       from dual union all
    SELECT 'Advanced Security'                                   , 'Backup Encryption'                                       , '^12\.'                      , 'INVALID' from dual union all -- licensing required only by encryption to disk
    SELECT 'Advanced Security'                                   , 'Data Redaction'                                          , '^12\.'                      , ' '       from dual union all
    SELECT 'Advanced Security'                                   , 'Encrypted Tablespaces'                                   , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Security'                                   , 'Oracle Utility Datapump (Export)'                        , '^11\.2|^12\.'               , 'C002'    from dual union all
    SELECT 'Advanced Security'                                   , 'Oracle Utility Datapump (Import)'                        , '^11\.2|^12\.'               , 'C002'    from dual union all
    SELECT 'Advanced Security'                                   , 'SecureFile Encryption (user)'                            , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Security'                                   , 'Transparent Data Encryption'                             , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Change Management Pack'                              , 'Change Management Pack'                                  , '^11\.2'                     , ' '       from dual union all
    SELECT 'Configuration Management Pack for Oracle Database'   , 'EM Config Management Pack'                               , '^11\.2'                     , ' '       from dual union all
    SELECT 'Data Masking Pack'                                   , 'Data Masking Pack'                                       , '^11\.2'                     , ' '       from dual union all
    SELECT '.Database Gateway'                                   , 'Gateways'                                                , '^12\.'                      , ' '       from dual union all
    SELECT '.Database Gateway'                                   , 'Transparent Gateway'                                     , '^12\.'                      , ' '       from dual union all
    SELECT 'Database In-Memory'                                  , 'In-Memory Aggregation'                                   , '^12\.'                      , ' '       from dual union all
    SELECT 'Database In-Memory'                                  , 'In-Memory Column Store'                                  , '^12\.1\.0\.2\.0'            , 'BUG'     from dual union all
    SELECT 'Database In-Memory'                                  , 'In-Memory Column Store'                                  , '^12\.1\.0\.2\.[^0]|^12\.2'  , ' '       from dual union all
    SELECT 'Database Vault'                                      , 'Oracle Database Vault'                                   , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Database Vault'                                      , 'Privilege Capture'                                       , '^12\.'                      , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'ADDM'                                                    , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'AWR Baseline'                                            , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'AWR Baseline Template'                                   , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'AWR Report'                                              , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'Automatic Workload Repository'                           , '^12\.'                      , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'Baseline Adaptive Thresholds'                            , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'Baseline Static Computations'                            , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'Diagnostic Pack'                                         , '^11\.2'                     , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'EM Performance Page'                                     , '^12\.'                      , ' '       from dual union all
    SELECT '.Exadata'                                            , 'Exadata'                                                 , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT '.GoldenGate'                                         , 'GoldenGate'                                              , '^12\.'                      , ' '       from dual union all
    SELECT '.HW'                                                 , 'Hybrid Columnar Compression'                             , '^12\.1'                     , 'BUG'     from dual union all
    SELECT '.HW'                                                 , 'Hybrid Columnar Compression'                             , '^12\.[2-9]'                 , ' '       from dual union all
    SELECT '.HW'                                                 , 'Hybrid Columnar Compression Row Level Locking'           , '^12\.'                      , ' '       from dual union all
    SELECT '.HW'                                                 , 'Sun ZFS with EHCC'                                       , '^12\.'                      , ' '       from dual union all
    SELECT '.HW'                                                 , 'ZFS Storage'                                             , '^12\.'                      , ' '       from dual union all
    SELECT '.HW'                                                 , 'Zone maps'                                               , '^12\.'                      , ' '       from dual union all
    SELECT 'Label Security'                                      , 'Label Security'                                          , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Multitenant'                                         , 'Oracle Multitenant'                                      , '^12\.'                      , 'C003'    from dual union all -- licensing required only when more than one PDB containers are created
    SELECT 'Multitenant'                                         , 'Oracle Pluggable Databases'                              , '^12\.'                      , 'C003'    from dual union all -- licensing required only when more than one PDB containers are created
    SELECT 'OLAP'                                                , 'OLAP - Analytic Workspaces'                              , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'OLAP'                                                , 'OLAP - Cubes'                                            , '^12\.'                      , ' '       from dual union all
    SELECT 'Partitioning'                                        , 'Partitioning (user)'                                     , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Partitioning'                                        , 'Zone maps'                                               , '^12\.'                      , ' '       from dual union all
    SELECT '.Pillar Storage'                                     , 'Pillar Storage'                                          , '^12\.'                      , ' '       from dual union all
    SELECT '.Pillar Storage'                                     , 'Pillar Storage with EHCC'                                , '^12\.'                      , ' '       from dual union all
    SELECT '.Provisioning and Patch Automation Pack'             , 'EM Standalone Provisioning and Patch Automation Pack'    , '^11\.2'                     , ' '       from dual union all
    SELECT 'Provisioning and Patch Automation Pack for Database' , 'EM Database Provisioning and Patch Automation Pack'      , '^11\.2'                     , ' '       from dual union all
    SELECT 'RAC or RAC One Node'                                 , 'Quality of Service Management'                           , '^12\.'                      , ' '       from dual union all
    SELECT 'Real Application Clusters'                           , 'Real Application Clusters (RAC)'                         , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Real Application Clusters One Node'                  , 'Real Application Cluster One Node'                       , '^12\.'                      , ' '       from dual union all
    SELECT 'Real Application Testing'                            , 'Database Replay: Workload Capture'                       , '^11\.2|^12\.'               , 'C004'    from dual union all
    SELECT 'Real Application Testing'                            , 'Database Replay: Workload Replay'                        , '^11\.2|^12\.'               , 'C004'    from dual union all
    SELECT 'Real Application Testing'                            , 'SQL Performance Analyzer'                                , '^11\.2|^12\.'               , 'C004'    from dual union all
    SELECT '.Secure Backup'                                      , 'Oracle Secure Backup'                                    , '^12\.'                      , 'INVALID' from dual union all  -- does not differentiate usage of Oracle Secure Backup Express, which is free
    SELECT 'Spatial and Graph'                                   , 'Spatial'                                                 , '^11\.2'                     , 'INVALID' from dual union all  -- does not differentiate usage of Locator, which is free
    SELECT 'Spatial and Graph'                                   , 'Spatial'                                                 , '^12\.'                      , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'Automatic Maintenance - SQL Tuning Advisor'              , '^12\.'                      , 'INVALID' from dual union all  -- system usage in the maintenance window
    SELECT 'Tuning Pack'                                         , 'Automatic SQL Tuning Advisor'                            , '^11\.2|^12\.'               , 'INVALID' from dual union all  -- system usage in the maintenance window
    SELECT 'Tuning Pack'                                         , 'Real-Time SQL Monitoring'                                , '^11\.2'                     , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'Real-Time SQL Monitoring'                                , '^12\.'                      , 'INVALID' from dual union all  -- default
    SELECT 'Tuning Pack'                                         , 'SQL Access Advisor'                                      , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'SQL Monitoring and Tuning pages'                         , '^12\.'                      , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'SQL Profile'                                             , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'SQL Tuning Advisor'                                      , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'SQL Tuning Set (user)'                                   , '^12\.'                      , 'INVALID' from dual union all -- no longer part of Tuning Pack
    SELECT 'Tuning Pack'                                         , 'Tuning Pack'                                             , '^11\.2'                     , ' '       from dual union all
    SELECT '.WebLogic Server Management Pack Enterprise Edition' , 'EM AS Provisioning and Patch Automation Pack'            , '^11\.2'                     , ' '       from dual union all
    select '' PRODUCT, '' FEATURE, '' MVERSION, '' CONDITION from dual
    ),
    FUS as (
    -- the current data set to be used: DBA_FEATURE_USAGE_STATISTICS or CDB_FEATURE_USAGE_STATISTICS for Container Databases(CDBs)
    select
        0 as CON_ID,
        NULL as CON_NAME,
        -- Detect and mark with Y the current DBA_FUS data set = Most Recent Sample based on LAST_SAMPLE_DATE
          case when DBID || '#' || VERSION || '#' || to_char(LAST_SAMPLE_DATE, 'YYYYMMDDHH24MISS') =
                    first_value (DBID    )         over (partition by 0 order by LAST_SAMPLE_DATE desc nulls last, DBID desc) || '#' ||
                    first_value (VERSION )         over (partition by 0 order by LAST_SAMPLE_DATE desc nulls last, DBID desc) || '#' ||
                    first_value (to_char(LAST_SAMPLE_DATE, 'YYYYMMDDHH24MISS'))
                                                   over (partition by 0 order by LAST_SAMPLE_DATE desc nulls last, DBID desc)
               then 'Y'
               else 'N'
        end as CURRENT_ENTRY,
        NAME            ,
        LAST_SAMPLE_DATE,
        DBID            ,
        VERSION         ,
        DETECTED_USAGES ,
        TOTAL_SAMPLES   ,
        CURRENTLY_USED  ,
        FIRST_USAGE_DATE,
        LAST_USAGE_DATE ,
        AUX_COUNT       ,
        FEATURE_INFO
    from DBA_FEATURE_USAGE_STATISTICS xy
    ),
    PFUS as (
    -- Product-Feature Usage Statistics = DBA_FUS entries mapped to their corresponding database products
    select
        CON_ID,
        CON_NAME,
        PRODUCT,
        NAME as FEATURE_BEING_USED,
        case  when CONDITION = 'BUG'
                   --suppressed due to exceptions/defects
                   then '3.SUPPRESSED_DUE_TO_BUG'
              when     detected_usages > 0                 -- some usage detection - current or past
                   and CURRENTLY_USED = 'TRUE'             -- usage at LAST_SAMPLE_DATE
                   and CURRENT_ENTRY  = 'Y'                -- current record set
                   and (    trim(CONDITION) is null        -- no extra conditions
                         or CONDITION_MET     = 'TRUE'     -- extra condition is met
                        and CONDITION_COUNTER = 'FALSE' )  -- extra condition is not based on counter
                   then '6.CURRENT_USAGE'
              when     detected_usages > 0                 -- some usage detection - current or past
                   and CURRENTLY_USED = 'TRUE'             -- usage at LAST_SAMPLE_DATE
                   and CURRENT_ENTRY  = 'Y'                -- current record set
                   and (    CONDITION_MET     = 'TRUE'     -- extra condition is met
                        and CONDITION_COUNTER = 'TRUE'  )  -- extra condition is     based on counter
                   then '5.PAST_OR_CURRENT_USAGE'          -- FEATURE_INFO counters indicate current or past usage
              when     detected_usages > 0                 -- some usage detection - current or past
                   and (    trim(CONDITION) is null        -- no extra conditions
                         or CONDITION_MET     = 'TRUE'  )  -- extra condition is met
                   then '4.PAST_USAGE'
              when CURRENT_ENTRY = 'Y'
                   then '2.NO_CURRENT_USAGE'   -- detectable feature shows no current usage
              else '1.NO_PAST_USAGE'
        end as USAGE,
        LAST_SAMPLE_DATE,
        DBID            ,
        VERSION         ,
        DETECTED_USAGES ,
        TOTAL_SAMPLES   ,
        CURRENTLY_USED  ,
        case  when CONDITION like 'C___' and CONDITION_MET = 'FALSE'
                   then to_date('')
              else FIRST_USAGE_DATE
        end as FIRST_USAGE_DATE,
        case  when CONDITION like 'C___' and CONDITION_MET = 'FALSE'
                   then to_date('')
              else LAST_USAGE_DATE
        end as LAST_USAGE_DATE,
        EXTRA_FEATURE_INFO
    from (
    select m.PRODUCT, m.CONDITION, m.MVERSION,
           -- if extra conditions (coded on the MAP.CONDITION column) are required, check if entries satisfy the condition
           case
                 when CONDITION = 'C001' and (   regexp_like(to_char(FEATURE_INFO), 'compression used:[ 0-9]*[1-9][ 0-9]*time', 'i')
                                              or regexp_like(to_char(FEATURE_INFO), 'compression used: *TRUE', 'i')                 )
                      then 'TRUE'  -- compression has been used
                 when CONDITION = 'C002' and (   regexp_like(to_char(FEATURE_INFO), 'encryption used:[ 0-9]*[1-9][ 0-9]*time', 'i')
                                              or regexp_like(to_char(FEATURE_INFO), 'encryption used: *TRUE', 'i')                  )
                      then 'TRUE'  -- encryption has been used
                 when CONDITION = 'C003' and CON_ID=1 and AUX_COUNT > 1
                      then 'TRUE'  -- more than one PDB are created
                 when CONDITION = 'C004' and 'N'= 'N'
                      then 'TRUE'  -- not in oracle cloud
                 else 'FALSE'
           end as CONDITION_MET,
           -- check if the extra conditions are based on FEATURE_INFO counters. They indicate current or past usage.
           case
                 when CONDITION = 'C001' and     regexp_like(to_char(FEATURE_INFO), 'compression used:[ 0-9]*[1-9][ 0-9]*time', 'i')
                      then 'TRUE'  -- compression counter > 0
                 when CONDITION = 'C002' and     regexp_like(to_char(FEATURE_INFO), 'encryption used:[ 0-9]*[1-9][ 0-9]*time', 'i')
                      then 'TRUE'  -- encryption counter > 0
                 else 'FALSE'
           end as CONDITION_COUNTER,
           case when CONDITION = 'C001'
                     then   regexp_substr(to_char(FEATURE_INFO), 'compression used:(.*?)(times|TRUE|FALSE)', 1, 1, 'i')
                when CONDITION = 'C002'
                     then   regexp_substr(to_char(FEATURE_INFO), 'encryption used:(.*?)(times|TRUE|FALSE)', 1, 1, 'i')
                when CONDITION = 'C003'
                     then   'AUX_COUNT=' || AUX_COUNT
                when CONDITION = 'C004' and 'N'= 'Y'
                     then   'feature included in Oracle Cloud Services Package'
                else ''
           end as EXTRA_FEATURE_INFO,
           f.CON_ID          ,
           f.CON_NAME        ,
           f.CURRENT_ENTRY   ,
           f.NAME            ,
           f.LAST_SAMPLE_DATE,
           f.DBID            ,
           f.VERSION         ,
           f.DETECTED_USAGES ,
           f.TOTAL_SAMPLES   ,
           f.CURRENTLY_USED  ,
           f.FIRST_USAGE_DATE,
           f.LAST_USAGE_DATE ,
           f.AUX_COUNT       ,
           f.FEATURE_INFO
      from MAP m
      join FUS f on m.FEATURE = f.NAME and regexp_like(f.VERSION, m.MVERSION)
      where nvl(f.TOTAL_SAMPLES, 0) > 0                        -- ignore features that have never been sampled
    )
      where nvl(CONDITION, '-') != 'INVALID'                   -- ignore features for which licensing is not required without further conditions
        and not (CONDITION = 'C003' and CON_ID not in (0, 1))  -- multiple PDBs are visible only in CDB$ROOT; PDB level view is not relevant
    )
    select
        grouping_id(CON_ID) as gid,
        CON_ID   ,
        decode(grouping_id(CON_ID), 1, '--ALL--', max(CON_NAME)) as CON_NAME,
        PRODUCT  ,
        decode(max(USAGE),
              '1.NO_PAST_USAGE'        , 'NO_USAGE'             ,
              '2.NO_CURRENT_USAGE'     , 'NO_USAGE'             ,
              '3.SUPPRESSED_DUE_TO_BUG', 'SUPPRESSED_DUE_TO_BUG',
              '4.PAST_USAGE'           , 'PAST_USAGE'           ,
              '5.PAST_OR_CURRENT_USAGE', 'PAST_OR_CURRENT_USAGE',
              '6.CURRENT_USAGE'        , 'CURRENT_USAGE'        ,
              'UNKNOWN') as USAGE,
        max(LAST_SAMPLE_DATE) as LAST_SAMPLE_DATE,
        min(FIRST_USAGE_DATE) as FIRST_USAGE_DATE,
        max(LAST_USAGE_DATE)  as LAST_USAGE_DATE
      from PFUS
      where USAGE in ('2.NO_CURRENT_USAGE', '4.PAST_USAGE', '5.PAST_OR_CURRENT_USAGE', '6.CURRENT_USAGE')   -- ignore '1.NO_PAST_USAGE', '3.SUPPRESSED_DUE_TO_BUG'
      group by rollup(CON_ID), PRODUCT
      having not (max(CON_ID) in (-1, 0) and grouping_id(CON_ID) = 1)            -- aggregation not needed for non-container databases
    order by GID desc, CON_ID, decode(substr(PRODUCT, 1, 1), '.', 2, 1), PRODUCT );
    
    
    CREATE OR REPLACE FORCE VIEW FEATURE_USAGE_DETAILS
    AS
    select product
         , feature_being_used
         , usage
         , last_sample_date
         , dbid
         , ( select name from v$database ) dbname
         , version
         , detected_usages
         , total_samples
         , currently_used
         , first_usage_date
         , last_usage_date
         , extra_feature_info
    ------- following sql is based on options_packs_usage_statistics.sql  --> MOS Note 1317265.1
    from (
    with
    MAP as (
    -- mapping between features tracked by DBA_FUS and their corresponding database products (options or packs)
    select '' PRODUCT, '' feature, '' MVERSION, '' CONDITION from dual union all
    SELECT 'Active Data Guard'                                   , 'Active Data Guard - Real-Time Query on Physical Standby' , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Active Data Guard'                                   , 'Global Data Services'                                    , '^12\.'                      , ' '       from dual union all
    SELECT 'Advanced Analytics'                                  , 'Data Mining'                                             , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'ADVANCED Index Compression'                              , '^12\.'                      , 'BUG'     from dual union all
    SELECT 'Advanced Compression'                                , 'Advanced Index Compression'                              , '^12\.'                      , 'BUG'     from dual union all
    SELECT 'Advanced Compression'                                , 'Backup HIGH Compression'                                 , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Backup LOW Compression'                                  , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Backup MEDIUM Compression'                               , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Backup ZLIB Compression'                                 , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Data Guard'                                              , '^11\.2|^12\.'               , 'C001'    from dual union all
    SELECT 'Advanced Compression'                                , 'Flashback Data Archive'                                  , '^11\.2\.0\.[1-3]\.'         , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Flashback Data Archive'                                  , '^(11\.2\.0\.[4-9]\.|12\.)'  , 'INVALID' from dual union all -- licensing required by Optimization for Flashback Data Archive
    SELECT 'Advanced Compression'                                , 'HeapCompression'                                         , '^11\.2|^12\.1'              , 'BUG'     from dual union all
    SELECT 'Advanced Compression'                                , 'HeapCompression'                                         , '^12\.[2-9]'                 , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Heat Map'                                                , '^12\.1'                     , 'BUG'     from dual union all
    SELECT 'Advanced Compression'                                , 'Heat Map'                                                , '^12\.[2-9]'                 , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Hybrid Columnar Compression Row Level Locking'           , '^12\.'                      , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Information Lifecycle Management'                        , '^12\.'                      , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Oracle Advanced Network Compression Service'             , '^12\.'                      , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'Oracle Utility Datapump (Export)'                        , '^11\.2|^12\.'               , 'C001'    from dual union all
    SELECT 'Advanced Compression'                                , 'Oracle Utility Datapump (Import)'                        , '^11\.2|^12\.'               , 'C001'    from dual union all
    SELECT 'Advanced Compression'                                , 'SecureFile Compression (user)'                           , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Compression'                                , 'SecureFile Deduplication (user)'                         , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Security'                                   , 'ASO native encryption and checksumming'                  , '^11\.2|^12\.'               , 'INVALID' from dual union all -- no longer part of Advanced Security
    SELECT 'Advanced Security'                                   , 'Backup Encryption'                                       , '^11\.2'                     , ' '       from dual union all
    SELECT 'Advanced Security'                                   , 'Backup Encryption'                                       , '^12\.'                      , 'INVALID' from dual union all -- licensing required only by encryption to disk
    SELECT 'Advanced Security'                                   , 'Data Redaction'                                          , '^12\.'                      , ' '       from dual union all
    SELECT 'Advanced Security'                                   , 'Encrypted Tablespaces'                                   , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Security'                                   , 'Oracle Utility Datapump (Export)'                        , '^11\.2|^12\.'               , 'C002'    from dual union all
    SELECT 'Advanced Security'                                   , 'Oracle Utility Datapump (Import)'                        , '^11\.2|^12\.'               , 'C002'    from dual union all
    SELECT 'Advanced Security'                                   , 'SecureFile Encryption (user)'                            , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Advanced Security'                                   , 'Transparent Data Encryption'                             , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Change Management Pack'                              , 'Change Management Pack'                                  , '^11\.2'                     , ' '       from dual union all
    SELECT 'Configuration Management Pack for Oracle Database'   , 'EM Config Management Pack'                               , '^11\.2'                     , ' '       from dual union all
    SELECT 'Data Masking Pack'                                   , 'Data Masking Pack'                                       , '^11\.2'                     , ' '       from dual union all
    SELECT '.Database Gateway'                                   , 'Gateways'                                                , '^12\.'                      , ' '       from dual union all
    SELECT '.Database Gateway'                                   , 'Transparent Gateway'                                     , '^12\.'                      , ' '       from dual union all
    SELECT 'Database In-Memory'                                  , 'In-Memory Aggregation'                                   , '^12\.'                      , ' '       from dual union all
    SELECT 'Database In-Memory'                                  , 'In-Memory Column Store'                                  , '^12\.1\.0\.2\.0'            , 'BUG'     from dual union all
    SELECT 'Database In-Memory'                                  , 'In-Memory Column Store'                                  , '^12\.1\.0\.2\.[^0]|^12\.2'  , ' '       from dual union all
    SELECT 'Database Vault'                                      , 'Oracle Database Vault'                                   , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Database Vault'                                      , 'Privilege Capture'                                       , '^12\.'                      , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'ADDM'                                                    , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'AWR Baseline'                                            , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'AWR Baseline Template'                                   , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'AWR Report'                                              , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'Automatic Workload Repository'                           , '^12\.'                      , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'Baseline Adaptive Thresholds'                            , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'Baseline Static Computations'                            , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'Diagnostic Pack'                                         , '^11\.2'                     , ' '       from dual union all
    SELECT 'Diagnostics Pack'                                    , 'EM Performance Page'                                     , '^12\.'                      , ' '       from dual union all
    SELECT '.Exadata'                                            , 'Exadata'                                                 , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT '.GoldenGate'                                         , 'GoldenGate'                                              , '^12\.'                      , ' '       from dual union all
    SELECT '.HW'                                                 , 'Hybrid Columnar Compression'                             , '^12\.1'                     , 'BUG'     from dual union all
    SELECT '.HW'                                                 , 'Hybrid Columnar Compression'                             , '^12\.[2-9]'                 , ' '       from dual union all
    SELECT '.HW'                                                 , 'Hybrid Columnar Compression Row Level Locking'           , '^12\.'                      , ' '       from dual union all
    SELECT '.HW'                                                 , 'Sun ZFS with EHCC'                                       , '^12\.'                      , ' '       from dual union all
    SELECT '.HW'                                                 , 'ZFS Storage'                                             , '^12\.'                      , ' '       from dual union all
    SELECT '.HW'                                                 , 'Zone maps'                                               , '^12\.'                      , ' '       from dual union all
    SELECT 'Label Security'                                      , 'Label Security'                                          , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Multitenant'                                         , 'Oracle Multitenant'                                      , '^12\.'                      , 'C003'    from dual union all -- licensing required only when more than one PDB containers are created
    SELECT 'Multitenant'                                         , 'Oracle Pluggable Databases'                              , '^12\.'                      , 'C003'    from dual union all -- licensing required only when more than one PDB containers are created
    SELECT 'OLAP'                                                , 'OLAP - Analytic Workspaces'                              , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'OLAP'                                                , 'OLAP - Cubes'                                            , '^12\.'                      , ' '       from dual union all
    SELECT 'Partitioning'                                        , 'Partitioning (user)'                                     , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Partitioning'                                        , 'Zone maps'                                               , '^12\.'                      , ' '       from dual union all
    SELECT '.Pillar Storage'                                     , 'Pillar Storage'                                          , '^12\.'                      , ' '       from dual union all
    SELECT '.Pillar Storage'                                     , 'Pillar Storage with EHCC'                                , '^12\.'                      , ' '       from dual union all
    SELECT '.Provisioning and Patch Automation Pack'             , 'EM Standalone Provisioning and Patch Automation Pack'    , '^11\.2'                     , ' '       from dual union all
    SELECT 'Provisioning and Patch Automation Pack for Database' , 'EM Database Provisioning and Patch Automation Pack'      , '^11\.2'                     , ' '       from dual union all
    SELECT 'RAC or RAC One Node'                                 , 'Quality of Service Management'                           , '^12\.'                      , ' '       from dual union all
    SELECT 'Real Application Clusters'                           , 'Real Application Clusters (RAC)'                         , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Real Application Clusters One Node'                  , 'Real Application Cluster One Node'                       , '^12\.'                      , ' '       from dual union all
    SELECT 'Real Application Testing'                            , 'Database Replay: Workload Capture'                       , '^11\.2|^12\.'               , 'C004'    from dual union all
    SELECT 'Real Application Testing'                            , 'Database Replay: Workload Replay'                        , '^11\.2|^12\.'               , 'C004'    from dual union all
    SELECT 'Real Application Testing'                            , 'SQL Performance Analyzer'                                , '^11\.2|^12\.'               , 'C004'    from dual union all
    SELECT '.Secure Backup'                                      , 'Oracle Secure Backup'                                    , '^12\.'                      , 'INVALID' from dual union all  -- does not differentiate usage of Oracle Secure Backup Express, which is free
    SELECT 'Spatial and Graph'                                   , 'Spatial'                                                 , '^11\.2'                     , 'INVALID' from dual union all  -- does not differentiate usage of Locator, which is free
    SELECT 'Spatial and Graph'                                   , 'Spatial'                                                 , '^12\.'                      , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'Automatic Maintenance - SQL Tuning Advisor'              , '^12\.'                      , 'INVALID' from dual union all  -- system usage in the maintenance window
    SELECT 'Tuning Pack'                                         , 'Automatic SQL Tuning Advisor'                            , '^11\.2|^12\.'               , 'INVALID' from dual union all  -- system usage in the maintenance window
    SELECT 'Tuning Pack'                                         , 'Real-Time SQL Monitoring'                                , '^11\.2'                     , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'Real-Time SQL Monitoring'                                , '^12\.'                      , 'INVALID' from dual union all  -- default
    SELECT 'Tuning Pack'                                         , 'SQL Access Advisor'                                      , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'SQL Monitoring and Tuning pages'                         , '^12\.'                      , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'SQL Profile'                                             , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'SQL Tuning Advisor'                                      , '^11\.2|^12\.'               , ' '       from dual union all
    SELECT 'Tuning Pack'                                         , 'SQL Tuning Set (user)'                                   , '^12\.'                      , 'INVALID' from dual union all -- no longer part of Tuning Pack
    SELECT 'Tuning Pack'                                         , 'Tuning Pack'                                             , '^11\.2'                     , ' '       from dual union all
    SELECT '.WebLogic Server Management Pack Enterprise Edition' , 'EM AS Provisioning and Patch Automation Pack'            , '^11\.2'                     , ' '       from dual union all
    select '' PRODUCT, '' FEATURE, '' MVERSION, '' CONDITION from dual
    ),
    FUS as (
    -- the current data set to be used: DBA_FEATURE_USAGE_STATISTICS or CDB_FEATURE_USAGE_STATISTICS for Container Databases(CDBs)
    select
        0 as CON_ID,
        NULL as CON_NAME,
        -- Detect and mark with Y the current DBA_FUS data set = Most Recent Sample based on LAST_SAMPLE_DATE
          case when DBID || '#' || VERSION || '#' || to_char(LAST_SAMPLE_DATE, 'YYYYMMDDHH24MISS') =
                    first_value (DBID    )         over (partition by 0 order by LAST_SAMPLE_DATE desc nulls last, DBID desc) || '#' ||
                    first_value (VERSION )         over (partition by 0 order by LAST_SAMPLE_DATE desc nulls last, DBID desc) || '#' ||
                    first_value (to_char(LAST_SAMPLE_DATE, 'YYYYMMDDHH24MISS'))
                                                   over (partition by 0 order by LAST_SAMPLE_DATE desc nulls last, DBID desc)
               then 'Y'
               else 'N'
        end as CURRENT_ENTRY,
        NAME            ,
        LAST_SAMPLE_DATE,
        DBID            ,
        VERSION         ,
        DETECTED_USAGES ,
        TOTAL_SAMPLES   ,
        CURRENTLY_USED  ,
        FIRST_USAGE_DATE,
        LAST_USAGE_DATE ,
        AUX_COUNT       ,
        FEATURE_INFO
    from DBA_FEATURE_USAGE_STATISTICS xy
    ),
    PFUS as (
    -- Product-Feature Usage Statistics = DBA_FUS entries mapped to their corresponding database products
    select
        CON_ID,
        CON_NAME,
        PRODUCT,
        NAME as FEATURE_BEING_USED,
        case  when CONDITION = 'BUG'
                   --suppressed due to exceptions/defects
                   then '3.SUPPRESSED_DUE_TO_BUG'
              when     detected_usages > 0                 -- some usage detection - current or past
                   and CURRENTLY_USED = 'TRUE'             -- usage at LAST_SAMPLE_DATE
                   and CURRENT_ENTRY  = 'Y'                -- current record set
                   and (    trim(CONDITION) is null        -- no extra conditions
                         or CONDITION_MET     = 'TRUE'     -- extra condition is met
                        and CONDITION_COUNTER = 'FALSE' )  -- extra condition is not based on counter
                   then '6.CURRENT_USAGE'
              when     detected_usages > 0                 -- some usage detection - current or past
                   and CURRENTLY_USED = 'TRUE'             -- usage at LAST_SAMPLE_DATE
                   and CURRENT_ENTRY  = 'Y'                -- current record set
                   and (    CONDITION_MET     = 'TRUE'     -- extra condition is met
                        and CONDITION_COUNTER = 'TRUE'  )  -- extra condition is     based on counter
                   then '5.PAST_OR_CURRENT_USAGE'          -- FEATURE_INFO counters indicate current or past usage
              when     detected_usages > 0                 -- some usage detection - current or past
                   and (    trim(CONDITION) is null        -- no extra conditions
                         or CONDITION_MET     = 'TRUE'  )  -- extra condition is met
                   then '4.PAST_USAGE'
              when CURRENT_ENTRY = 'Y'
                   then '2.NO_CURRENT_USAGE'   -- detectable feature shows no current usage
              else '1.NO_PAST_USAGE'
        end as USAGE,
        LAST_SAMPLE_DATE,
        DBID            ,
        VERSION         ,
        DETECTED_USAGES ,
        TOTAL_SAMPLES   ,
        CURRENTLY_USED  ,
        FIRST_USAGE_DATE,
        LAST_USAGE_DATE,
        EXTRA_FEATURE_INFO
    from (
    select m.PRODUCT, m.CONDITION, m.MVERSION,
           -- if extra conditions (coded on the MAP.CONDITION column) are required, check if entries satisfy the condition
           case
                 when CONDITION = 'C001' and (   regexp_like(to_char(FEATURE_INFO), 'compression used:[ 0-9]*[1-9][ 0-9]*time', 'i')
                                              or regexp_like(to_char(FEATURE_INFO), 'compression used: *TRUE', 'i')                 )
                      then 'TRUE'  -- compression has been used
                 when CONDITION = 'C002' and (   regexp_like(to_char(FEATURE_INFO), 'encryption used:[ 0-9]*[1-9][ 0-9]*time', 'i')
                                              or regexp_like(to_char(FEATURE_INFO), 'encryption used: *TRUE', 'i')                  )
                      then 'TRUE'  -- encryption has been used
                 when CONDITION = 'C003' and CON_ID=1 and AUX_COUNT > 1
                      then 'TRUE'  -- more than one PDB are created
                 when CONDITION = 'C004' and 'N'= 'N'
                      then 'TRUE'  -- not in oracle cloud
                 else 'FALSE'
           end as CONDITION_MET,
           -- check if the extra conditions are based on FEATURE_INFO counters. They indicate current or past usage.
           case
                 when CONDITION = 'C001' and     regexp_like(to_char(FEATURE_INFO), 'compression used:[ 0-9]*[1-9][ 0-9]*time', 'i')
                      then 'TRUE'  -- compression counter > 0
                 when CONDITION = 'C002' and     regexp_like(to_char(FEATURE_INFO), 'encryption used:[ 0-9]*[1-9][ 0-9]*time', 'i')
                      then 'TRUE'  -- encryption counter > 0
                 else 'FALSE'
           end as CONDITION_COUNTER,
           case when CONDITION = 'C001'
                     then   regexp_substr(to_char(FEATURE_INFO), 'compression used:(.*?)(times|TRUE|FALSE)', 1, 1, 'i')
                when CONDITION = 'C002'
                     then   regexp_substr(to_char(FEATURE_INFO), 'encryption used:(.*?)(times|TRUE|FALSE)', 1, 1, 'i')
                when CONDITION = 'C003'
                     then   'AUX_COUNT=' || AUX_COUNT
                when CONDITION = 'C004' and 'N'= 'Y'
                     then   'feature included in Oracle Cloud Services Package'
                else ''
           end as EXTRA_FEATURE_INFO,
           f.CON_ID          ,
           f.CON_NAME        ,
           f.CURRENT_ENTRY   ,
           f.NAME            ,
           f.LAST_SAMPLE_DATE,
           f.DBID            ,
           f.VERSION         ,
           f.DETECTED_USAGES ,
           f.TOTAL_SAMPLES   ,
           f.CURRENTLY_USED  ,
           f.FIRST_USAGE_DATE,
           f.LAST_USAGE_DATE ,
           f.AUX_COUNT       ,
           f.FEATURE_INFO
      from MAP m
      join FUS f on m.FEATURE = f.NAME and regexp_like(f.VERSION, m.MVERSION)
      where nvl(f.TOTAL_SAMPLES, 0) > 0                        -- ignore features that have never been sampled
    )
      where nvl(CONDITION, '-') != 'INVALID'                   -- ignore features for which licensing is not required without further conditions
        and not (CONDITION = 'C003' and CON_ID not in (0, 1))  -- multiple PDBs are visible only in CDB$ROOT; PDB level view is not relevant
    )
    select
        CON_ID            ,
        CON_NAME          ,
        PRODUCT           ,
        FEATURE_BEING_USED,
        decode(USAGE,
              '1.NO_PAST_USAGE'        , 'NO_PAST_USAGE'        ,
              '2.NO_CURRENT_USAGE'     , 'NO_CURRENT_USAGE'     ,
              '3.SUPPRESSED_DUE_TO_BUG', 'SUPPRESSED_DUE_TO_BUG',
              '4.PAST_USAGE'           , 'PAST_USAGE'           ,
              '5.PAST_OR_CURRENT_USAGE', 'PAST_OR_CURRENT_USAGE',
              '6.CURRENT_USAGE'        , 'CURRENT_USAGE'        ,
              'UNKNOWN') as USAGE,
        LAST_SAMPLE_DATE  ,
        DBID              ,
        VERSION           ,
        DETECTED_USAGES   ,
        TOTAL_SAMPLES     ,
        CURRENTLY_USED    ,
        FIRST_USAGE_DATE  ,
        LAST_USAGE_DATE   ,
        EXTRA_FEATURE_INFO
      from PFUS
      where USAGE in ('2.NO_CURRENT_USAGE', '3.SUPPRESSED_DUE_TO_BUG', '4.PAST_USAGE', '5.PAST_OR_CURRENT_USAGE', '6.CURRENT_USAGE')  -- ignore '1.NO_PAST_USAGE'
    order by CON_ID, decode(substr(PRODUCT, 1, 1), '.', 2, 1), PRODUCT, FEATURE_BEING_USED, LAST_SAMPLE_DATE desc, PFUS.USAGE );
    

    The post Better track the Usage of Database Options and Management Packs, or it will cost you appeared first on AMIS Oracle and Java Blog.

    Oracle Mobile Cloud Service (MCS): An introduction to API security: Basic Authentication and OAuth2

    Fri, 2017-04-07 07:41

    As an integration/backend developer starting a project using Mobile Cloud Service, it is important to have some understanding of what this MBaaS (Mobile Backend as a Service) has to offer in terms of security features, so that you can configure and test MCS. In this blog I will give examples of how to configure and use the basic authentication and OAuth2 features which are provided to secure APIs. You can read the Oracle documentation (which is quite good for MCS!) on this topic here.

    Introduction

Oracle Mobile Cloud Service provides platform APIs for specific features. You can create custom APIs by writing JavaScript code to run on Node.js. Connectors are used to access backend systems. This blog focuses on authentication options for incoming requests.

    The connectors are not directly available from the outside. MCS can secure custom and platform APIs. This functionality is taken care of by the Mobile Backend and the custom API configuration.

    Getting started

    The first thing to do when you want to expose an API is assign the API to a Mobile Backend. You can do this in the Mobile Backend configuration screen, APIs tab.

You can allow anonymous access, but generally you want to know who accesses your API. Also, because MCS has a license option where you pay for a specific number of API calls, you want to know who you are paying for. In order to require authentication on a per-user basis, you first have to create a user and assign it to a group. You can also do this from the Mobile Backend configuration. Go to the Mobile Users Management tab to create users and groups.

    After you have done this, you can assign the role to the API. You can also do this on a per endpoint basis which makes this authentication scheme very flexible.

    Now we have configured our API to allow access to users who are in a specific role. We can now call our API using basic authentication or OAuth2.

    Basic Authentication

    In order to test our API, Postman is a suitable option. Postman is a freely available Chrome plugin (but also available standalone for several OSes) which provides many options for testing HTTP calls.

    Basic authentication is a rather weak authentication mechanism. You Base64 encode a string username:password and send that as an HTTP header to the API you are calling. If someone intercepts the message, he/she can easily Base64 decode the username:password string to obtain the credentials. You can thus understand why I’ve blanked out that part of the Authorization field in several screenshots.

    In addition to specifying the basic authentication header, you also need to specify the Oracle-Mobile-Backend-Id HTTP header which can be obtained from the main page of the Mobile Backend configuration page.
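As an illustration (this is my own sketch, not one of the original screenshots), such a call with curl could look roughly like the snippet below. The host name, API path, backend id and credentials are placeholders to be replaced with the values from your own Mobile Backend; the /mobile/custom/... path is the typical location of a custom API, so check your API's test page for the exact URL.

# Base64-encode username:password and call a custom API endpoint
CREDS=$(echo -n '<username>:<password>' | base64)

curl -i \
  -H "Authorization: Basic $CREDS" \
  -H "Oracle-Mobile-Backend-Id: <mobile-backend-id>" \
  "https://<mcs-host>/mobile/custom/<api>/<endpoint>"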

    Obtain Oracle-Mobile-Backend-Id

    Call your API with Basic authentication

    This mechanism is rather straightforward. The authorization header needs to be supplied with every request though.

    OAuth2

OAuth2 works a bit differently from basic authentication: first a token is obtained from a token service, and that token is used in subsequent requests. When using the token, no additional authentication is required.

    You can obtain the token from the Mobile Backend settings page as shown above. When you do a request to this endpoint, you need to provide some information:

    You can use basic authentication with the Client ID:Client secret to access the token endpoint. These can be obtained from the screen shown below.

    You also need to supply a username and password of the user for whom the token is generated. After you have done a request to the token service, you obtain a token.

This token can be used in subsequent requests to your API. You send it in the Authorization HTTP header as a Bearer token, instead of sending your username/password with every request. This is thus more secure.
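To make the flow concrete, here is a rough sketch with curl. It assumes a standard OAuth2 resource owner password grant; the token endpoint URL is the one shown on the Mobile Backend settings page, the client id/secret, user credentials and API location are placeholders, and depending on your environment additional headers may be required.

# 1. Request a token: Basic auth with the client id and secret, plus the user's credentials
curl -i \
  -u '<client-id>:<client-secret>' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=password&username=<username>&password=<password>' \
  '<token-endpoint-url>'

# 2. Use the access_token from the JSON response as a Bearer token when calling the API
curl -i \
  -H 'Authorization: Bearer <access-token>' \
  'https://<mcs-host>/mobile/custom/<api>/<endpoint>'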

    Finally

    I’ve not talked about security options for outgoing requests provided by the supplied connectors.

These have connector-specific options and allow identity propagation. For example the REST connector (described in the Oracle documentation here) supports SAML tokens, CSF keys, basic authentication, OAuth2 and JWT. The SOAP connector (see here) can use WS-Security in several flavours, SAML tokens, CSF keys, basic authentication, etc. (quite a list).

    The post Oracle Mobile Cloud Service (MCS): An introduction to API security: Basic Authentication and OAuth2 appeared first on AMIS Oracle and Java Blog.

    Machine learning: Getting started with random forests in R

    Fri, 2017-04-07 02:08

According to Gartner, machine learning is at the top of the hype cycle, at the peak of inflated expectations. There is a lot of misunderstanding about what machine learning actually is and what can be done with it.

    Machine learning is not as abstract as one might think. If you want to get value out of known data and do predictions for unknown data, the most important challenge is asking the right questions and of course knowing what you are doing, especially if you want to optimize your prediction accuracy.

In this blog I’m exploring an example of machine learning: the random forest algorithm. I’ll provide an example of how you can use this algorithm to do predictions. In order to implement a random forest, I’m using R with the randomForest library and the iris data set, which is provided by the R installation.

    The Random Forest

    A popular method of machine learning is by using decision tree learning. Decision tree learning comes closest to serving as an off-the-shelf procedure for data mining (see here). You do not need to know much about your data in order to be able to apply this method. The random forest algorithm is an example of a decision tree learning algorithm.

    Random forest in (very) short

How it works exactly takes some time to figure out. If you want to know the details, I recommend watching some YouTube recordings of lectures on the topic. Some of the most important features of this method:

    • A random forest is a method to do classifications based on features. This implies you need to have features and classifications.
    • A random forest generates a set of classification trees (an ensemble) based on splitting a subset of features at locations which maximize information gain. This method is thus very suitable for distributed parallel computation.
    • Information gain can be determined by how accurate the splitting point is in determining the classification. Data is split based on the feature at a specific point and the classification on the left and right of the splitting point are checked. If for example the splitting point splits all data of a first classification from all data of a second classification, the confidence is 100%; maximum information gain.
    • A splitting point is a branching in the decision tree.
• Splitting points are based on values of features (this is fast).
    • A random forest uses randomness to determine the features to look at and randomness in the data used to construct the tree. Randomness helps reduce compute time.
    • Each tree gets to see a different dataset. This is called bagging.
    • Tree classification confidences are summed and averaged. Products of the confidences can also be taken. Individual trees have a high variance because they have only seen a small subset of the data. Averaging helps create a better result.
    • With correlated features, strong features can end up with low scores and the method can be biased towards variables with many categories.
• A random forest does not perform well on unbalanced datasets: samples in which some classes occur much more often than others.
    Use case for a random forest

Use cases for a random forest include, for example, text classification such as spam detection. Whether certain words are present in a text can be used as a feature, and the classification would be spam/not spam, or something more specific such as news, personal, etc. Another interesting use case lies in genetics: determining whether the expression of certain genes is relevant for a specific disease. This way you can take someone’s DNA and determine with a certain confidence whether that person will contract the disease. Of course you can also take other features into account, such as income, education level, smoking, age, etc.

Why R

    I decided to start with R. Why? Mainly because it is easy. There are many libraries available and there is a lot of experience present worldwide; a lot of information can be found online. R however also has some drawbacks.

    Some benefits

    • It is free and easy to get started. Hard to master though.
    • A lot of libraries are available. R package management works well.
• R has a lot of users, so there is a lot of information available online.
    • R is powerful: if you know what you are doing, you need little code to do it.

    Some challenges

    • R loads datasets in memory
    • R is not the best at doing distributed computing but can do so. See for example here
    • The R syntax can be a challenge to learn
    Getting the environment ready

    I decided to install a Linux VM to play with. You can also install R and R studio (the R IDE) on Windows or Mac. I decided to start with Ubuntu Server. I first installed the usual things like a GUI. Next I installed some handy things like a terminal emulator, Firefox and stuff like that. I finished with installing R and R-studio.

    So first download and install Ubuntu Server (next, next, finish)

    sudo apt-get update
    sudo apt-get install aptitude

# Install a GUI
    sudo aptitude install --without-recommends ubuntu-desktop

# Install the VirtualBox Guest Additions
    sudo apt-get install build-essential linux-headers-$(uname -r)
    # Install the Guest Additions (first mount the ISO image which is part of VirtualBox, next run the installer)

# Install the packages below to make Dash (Unity search) work
    # See http://askubuntu.com/questions/125843/dash-search-gives-no-result
    sudo apt-get install unity-lens-applications unity-lens-files

# A shutdown button might come in handy
    sudo apt-get install indicator-session

# Might come in handy: browser and fancy terminal application
    sudo apt-get install firefox terminator

# For the installation of R I used the following as inspiration: https://www.r-bloggers.com/how-to-install-r-on-linux-ubuntu-16-04-xenial-xerus/
    sudo echo "deb http://cran.rstudio.com/bin/linux/ubuntu xenial/" | sudo tee -a /etc/apt/sources.list
    gpg --keyserver keyserver.ubuntu.com --recv-key E084DAB9
    gpg -a --export E084DAB9 | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install r-base r-base-dev

# For the installation of R-studio I used: https://mikewilliamson.wordpress.com/2016/11/14/installing-r-studio-on-ubuntu-16-10/

    wget http://ftp.ca.debian.org/debian/pool/main/g/gstreamer0.10/libgstreamer0.10-0_0.10.36-1.5_amd64.deb
    wget http://ftp.ca.debian.org/debian/pool/main/g/gst-plugins-base0.10/libgstreamer-plugins-base0.10-0_0.10.36-2_amd64.deb
    sudo dpkg -i libgstreamer0.10-0_0.10.36-1.5_amd64.deb
    sudo dpkg -i libgstreamer-plugins-base0.10-0_0.10.36-2_amd64.deb
    sudo apt-mark hold libgstreamer-plugins-base0.10-0
    sudo apt-mark hold libgstreamer0.10

    wget https://download1.rstudio.org/rstudio-1.0.136-amd64.deb
    sudo dpkg -i rstudio-1.0.136-amd64.deb
    sudo apt-get -f install

    Doing a random forest in R

    R needs some libraries to do random forests and create nice plots. First give the following commands:

    #to do random forests
install.packages("randomForest")

    #to work with R markdown language
    install.packages("knitr")

    #to create nice plots
    install.packages("ggplot2")

    In order to get help on a library you can give the following command which will give you more information on the library.

library(help = "randomForest")

     Of course, the randomForest implementation does have some specifics:

    • it uses the reference implementation based on CART trees
    • it is biased in favor of continuous variables and variables with many categories

    A simple program to do a random forest looks like this:

    #load libraries
    library(randomForest)
    library(knitr)
    library(ggplot2)

    #random numbers after the set.seed(10) are reproducible if I do set.seed(10) again
    set.seed(10)

    #create a training sample of 45 items from the iris dataset. replace indicates items can only be present once in the dataset. If replace is set to true, you will get Out of bag errors.
    idx_train <- sample(1:nrow(iris), 45, replace = FALSE)

    #create a data.frame from the data which is not in the training sample
    tf_test <- !1:nrow(iris) %in% idx_train

    #the column ncol(iris) is the last column of the iris dataset. this is not a feature column but a classification column
    feature_columns <- 1:(ncol(iris)-1)

    #generate a randomForest.
    #use the feature columns from training set for this
    #iris[idx_train, ncol(iris)] indicates the classification column
    #importance=TRUE indicates the importance of features in determining the classification should be determined
    #y = iris[idx_train, ncol(iris)] gives the classifications for the provided data
    #ntree=1000 indicates 1000 random trees will be generated
    model <- randomForest(iris[idx_train, feature_columns], y = iris[idx_train, ncol(iris)], importance = TRUE, ntree = 1000)

    #print the model
    #printing the model indicates how the sample dataset is distributed among classes. The sum of the sample classifications is 45 which is the sample size. OOB rate indicates ‘out of bag’ (the overall classification error).

    print(model)

    #we use the model to predict the class based on the feature columns of the dataset (minus the sample used to train the model).
    response <- predict(model, iris[tf_test, feature_columns])

    #determine the number of correct classifications
    correct <- response == iris[tf_test, ncol(iris)]

    #determine the percentage of correct classifications
    sum(correct) / length(correct)

    #print a variable importance (varImp) plot of the randomForest
    varImpPlot(model)

    #in this dataset the petal length and width are more important measures to determine the class than the sepal length and width.

    The post Machine learning: Getting started with random forests in R appeared first on AMIS Oracle and Java Blog.

    How I learned to appreciate MongoDB

    Tue, 2017-04-04 05:11

    Last week at our company we organized a session about NoSQL in general and MongoDB in particular, as you can read here. The MongoDB focus was presented by me and I would like to take you with me on my trip to actually appreciating the NoSQL database world (and MongoDB in particular).

Coming from RDBMS I’m used to ACID. Business models reside within the database, data never gets lost, transactions are isolated, you’ve got your read consistency, strictly defined tables, foreign keys et cetera. MongoDB, on the other hand, I saw as an unstructured pile of data: no such thing as transactions, ‘eventually consistent’ (that sounds like a leap of faith), no joins and hence no foreign keys… You get the picture.

I am a DBA. My major concern is to have all data available or at least recoverable, no matter what. But the world is changing. Developers more and more look at the database as a storage engine; business logic is programmed in the application. We, DBAs, try to teach them to use the database better, but is that really necessary? There’s no law…

    The world is changing fast and so are businesses. It’s not unusual to deploy a new release every night. If developers need to redesign the database every few days, then maybe the structure of data is not that important. If we collect the number of hits on our website, is it a disaster if out of 10,000 hits we occasionally miss 1?

It takes a lot of discussion and ‘yes, but..’ for this other look at data to finally settle in. At least, for me it did. What finally won me over was an online course at MongoDB University that sort of mitigated the pain. Because, once you let go of the ACID model, you gain a lot of flexibility in terms of database design and infrastructure design. Scaling out becomes a very easy operation, for instance. Resilience against hardware failure is a piece of cake. And due to the lack of RDBMS legacy, the engine can focus almost entirely on reading and writing data, which leads to lightning fast performance.

In the next paragraphs I will show some examples of the resilience and general behaviour of MongoDB, loosely compared to Oracle. It is hands-on, so I will also get you started, as minimally as possible, with MongoDB in general.

    I will not go into the way you read and write data. Only some actions will be shown that are needed for the examples. But in general:

    db = database and can be compared to a schema in Oracle.

    A db contains collections, which can be compared to tables.

A collection contains documents, which can be compared to rows.

    Joins are not possible, so all data you need should be in the same collection.

Documents consist of key:value pairs, as many as you like within one document.

    So, you can have the database ‘trades’ with collection ‘customers’ with documents like

{"name":"Larry","company":"Oracle","hobby":"sailing"}
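Although reading and writing data is out of scope here, purely to make the mapping concrete (the ‘trades’ database and ‘customers’ collection are hypothetical), storing and retrieving such a document in the mongo shell looks roughly like this; the generated _id in the output is abbreviated:

    > use trades
    switched to db trades
    > db.customers.insert( { "name" : "Larry", "company" : "Oracle", "hobby" : "sailing" } )
    WriteResult({ "nInserted" : 1 })
    > db.customers.find( { "company" : "Oracle" } )
    { "_id" : ObjectId("..."), "name" : "Larry", "company" : "Oracle", "hobby" : "sailing" }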

    Getting started.

Go to the MongoDB site here and download the appropriate version. The entire hands-on can be run on a typical Windows or Linux laptop or in a VirtualBox VM. The installation is very easy, no instructions needed from my side.

The software will be installed in C:\Program Files\MongoDB\Server\3.4\bin. The version can change over time, of course.

    Add that to your path in your environment variables.

In Linux, add the path where you unpacked the tar ball, followed by bin, to your $PATH.

    Starting an instance is also very easy. Open a command box, create a directory \data\db1 and start the instance with

    mongod --dbpath \data\db1

On Windows you should leave this box open; when you close it, the instance shuts down. It would be better to create a service, but for this demonstration this will do.

    Stop the database by pressing ^C.

On Linux you can fork the process so you don’t have to open a terminal for every instance:

    mongod --dbpath /data/db1 --fork --logpath a.log

    End it by killing the process.

From now on I will continue in Windows; Linux users can follow the instructions with minor adjustments, like / instead of \.

    Also make sure to use a different logpath for each instance.

    Resilience against hardware failure.

In Oracle we have two kinds of resilience against hardware failure: instance failure can be mitigated by RAC, storage failure by Data Guard. Besides that, if a single instance crashes you can recover all data, provided you have a decent backup strategy.

MongoDB uses a different approach called replica sets. Each instance (or member, as it is called) has its own storage and can be replicated to another instance (or many) with its own storage too. Only one instance accepts writes: the primary instance. The others only allow you to read data.

In production running a replica set is a no-brainer: should a single standalone instance fail, you cannot recover all data like you can in Oracle, no matter how often you make backups.

All members vote on who will be the primary. This can be influenced by setting the priority parameter. I will not go into that here but just demonstrate a simple replica set.
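For completeness, manipulating the priority once a replica set is up looks roughly like this (run against the primary; the member index and the value 2 are just examples, and the rest of this post does not use it):

    r:PRIMARY> cfg = rs.conf()
    r:PRIMARY> cfg.members[0].priority = 2
    r:PRIMARY> rs.reconfig(cfg)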

    Open a command box and type:

mkdir \data\db1\r1
    mkdir \data\db1\r2
    mkdir \data\db1\r3
    mongod --smallfiles --oplogSize 50 --port 27001 --dbpath \data\db1\r1 --replSet r

    Leave it open and open a second Command box and type:

    mongod --smallfiles --oplogSize 50 --port 27002 --dbpath \data\db1\r2 --replSet r

    Leave it open and open a third Command box and type:

    mongod --smallfiles --oplogSize 50 --port 27003 --dbpath \data\db1\r3 --replSet r

Open a fourth command box. This will be used to actually talk to the database using the mongo shell. We will then initiate the replica set.

mongo --port 27003
    rs.initiate(
               { _id:'r',
                 members:[
                         { _id:1, host:'localhost:27001' },
                         { _id:2, host:'localhost:27002', "arbiterOnly" : true },
                         { _id:3, host:'localhost:27003' }
                         ]
               }
    )
    rs.status()

I introduced a special member, the Arbiter. This is a member without data; it only exists to give the set an odd number of members, which is necessary to always get a majority of votes when it comes to choosing the Primary member.

    In the output you can see that we have 3 members: a Secondary on port 27001, an Arbiter on port 27002 and a Primary on port 27003. You can also see by the prompt that we are connected to the Primary.

    We will now create a collection called ‘simple’ and insert some data. Also, the writeConcern phrase makes sure data is written to at least 2 members. If there are more members they will be ‘eventually consistent’, meaning that they will synchronize but not immediately.

    db.simple.insert( { _id : 1 }, { writeConcern : { w : 2 } } )
    db.simple.insert( { _id : 2 }, { writeConcern : { w : 2 } } )
    db.simple.insert( { _id : 3 }, { writeConcern : { w : 2 } } )

Go to your secondary member and try to read the data. This involves giving yourself permission to read from the secondary, as I’ll show:

    exit
    mongo --port 27001
    r:SECONDARY> db.simple.find()
    Error: error: { "$err" : "not master and slaveOk=false", "code" : 13435 }
    r:SECONDARY> rs.slaveOk()
    r:SECONDARY> db.simple.find()
    { "_id" : 1 }
    { "_id" : 2 }
    { "_id" : 3 }

This looks okay. Not featured here: if I stop the Secondary, add data on the Primary and restart the Secondary, it synchronizes as expected. Just one thing: the writeConcern for 2 members cannot be used then, since only one data-bearing member is available.

    Now it becomes interesting. I’ll stop the Secondary, write some data on the Primary, stop the Primary and start the Secondary. Would the data written whilst the Secondary was down still be visible? If not, would it be recoverable?

    r:SECONDARY> exit
    bye

    Go to your first box and stop the Secondary with ^C.

    Go to the mongoshell box and connect to port 27003, the Primary and add some more data:

    mongo --port 27003
    MongoDB shell version: 3.0.14
    connecting to: 127.0.0.1:27003/test
    r:PRIMARY> db.simple.insert( { _id : 4 } )
    WriteResult({ "nInserted" : 1 })
    r:PRIMARY> db.simple.insert( { _id : 5 } )
    WriteResult({ "nInserted" : 1 })
    r:PRIMARY> db.simple.insert( { _id : 6 } )
    WriteResult({ "nInserted" : 1 })
    r:PRIMARY> db.simple.find()
    { "_id" : 1 }
    { "_id" : 2 }
    { "_id" : 3 }
    { "_id" : 4 }
    { "_id" : 5 }
    { "_id" : 6 }
    
    r:PRIMARY> exit
    bye

    Now stop the primary in your 3rd box with ^C and restart the Secondary in your 1st box. Then go to the mongoshell box and connect to port 27001

    mongo --port 27001
    MongoDB shell version: 3.0.14
    connecting to: 127.0.0.1:27001/test
    r:PRIMARY> rs.status()
    {
    "set" : "r",
    "date" : ISODate("2017-03-20T19:12:43.425Z"),
    "myState" : 1,
    "members" : [
    {
    "_id" : 1,
    "name" : "localhost:27001",
    "health" : 1,
    "state" : 1,
    "stateStr" : "PRIMARY",
    "uptime" : 25,
    "optime" : Timestamp(1490035617, 1),
    "optimeDate" : ISODate("2017-03-20T18:46:57Z"),
    "electionTime" : Timestamp(1490037141, 1),
    "electionDate" : ISODate("2017-03-20T19:12:21Z"),
    "configVersion" : 1,
    "self" : true
    },
    {
    "_id" : 2,
    "name" : "localhost:27002",
    "health" : 1,
    "state" : 7,
    "stateStr" : "ARBITER",
    "uptime" : 24,
    "lastHeartbeat" : ISODate("2017-03-20T19:12:43.354Z"),
    "lastHeartbeatRecv" : ISODate("2017-03-20T19:12:43.167Z"),
    "pingMs" : 0,
    "configVersion" : 1
    },
    {
    "_id" : 3,
    "name" : "localhost:27003",
    "health" : 0,
    "state" : 8,
    "stateStr" : "(not reachable/healthy)",
    "uptime" : 0,
    "optime" : Timestamp(0, 0),
    "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
    "lastHeartbeat" : ISODate("2017-03-20T19:12:43.354Z"),
    "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
    "configVersion" : -1
    }
    ],
    "ok" : 1
    }
    r:PRIMARY> db.simple.find()
    { "_id" : 1 }
    { "_id" : 2 }
    { "_id" : 3 }
    r:PRIMARY>db.simple.insert( { _id : 7 } )
    WriteResult({ "nInserted" : 1 })
    r:PRIMARY> db.simple.find()
    { "_id" : 1 }
    { "_id" : 2 }
    { "_id" : 3 }
    { "_id" : 7 }
    r:PRIMARY>
    

    So, member 1 now has become the Primary, but we hit data loss: it never had a chance to synchronize and they do not share any storage to read from.

    What would happen if we restart the 3rd member? After all, that one does have the lost data stored, somewhere.

    Start up the 3rd member (in the 3rd box)

In the output you will see it transitions to Secondary and it performs a rollback: the lost data is actually rolled back. And the good news: it is stored. Under its data directory \data\db1\r3 it created a directory called rollback, which contains a .bson file. This file can be examined and/or imported into the database, as I’ll show.

    Go to the 4th box and exit mongoshell. Then:

    cd \data\db1\r3\rollback
    C:\data\db1\r3\rollback>bsondump test.simple.2017-03-20T19-32-31.0.bson
    {"_id":4.0}
    {"_id":5.0}
    {"_id":6.0}
    2017-03-20T20:45:06.412+0100 3 objects found
    C:\data\db1\r3\rollback>mongorestore --port 27001 --db test --collection simple test.simple.2017-03-20T19-32-31.0.bson
    2017-03-20T20:47:59.880+0100 checking for collection data in test.simple.2017-03-20T19-32-31.0.bson
    2017-03-20T20:47:59.886+0100 restoring test.simple from file test.simple.2017-03-20T19-32-31.0.bson
    2017-03-20T20:48:00.463+0100 no indexes to restore
    2017-03-20T20:48:00.463+0100 finished restoring test.simple (3 documents)
    2017-03-20T20:48:00.463+0100 done
    
    C:\data\db1\r3\rollback>mongo --port 27001
    MongoDB shell version: 3.0.14
    connecting to: 127.0.0.1:27001/test
    r:PRIMARY> db.simple.find()
    { "_id" : 1 }
    { "_id" : 2 }
    { "_id" : 3 }
    { "_id" : 7 }
    { "_id" : 4 }
    { "_id" : 5 }
    { "_id" : 6 }
    r:PRIMARY>
    

Okay, Oracle would have done everything by itself. But at what cost? It needs to maintain redo logs and archived redo logs. It only has one member to query, the Primary database. Yes, you can have a read-only standby database with Active Data Guard since 11g, but that’s a licensed option. It’s robust, nevertheless. I only want to say that the alternative is different but not all bad. Not at all.


    Scaling out, also known as sharding

In the previous paragraph we covered HA and recoverability. Now let’s have a look at scaling out, best compared to RAC.

RAC enables you to add CPU power and memory to a database. It also enables you to distribute different kinds of workloads over different machines, for instance reporting on one node and OLTP on another node. That distribution can be compared to smart use of the replica sets explained above.
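As a small sketch of that kind of workload distribution, assuming the replica set from the previous paragraphs is still running: a reporting client can connect to the whole set and prefer secondaries for its reads, keeping that load away from the primary. The read preference mode below is just one of the available options.

    mongo --host r/localhost:27001,localhost:27003
    r:PRIMARY> db.getMongo().setReadPref("secondaryPreferred")
    r:PRIMARY> db.simple.find()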

    Adding CPU power and memory is something else. MongoDB heavily relies on memory to perform. And they made it very easy for you to add more nodes to your cluster. This is done by sharding.

Sharding can best be described as range-based partitioning. Sharding is done on a per-collection basis. A shard cluster consists of 1 (!) or more nodes that automatically partition a collection and distribute it evenly over all cluster members.

    Let’s have a closer look.

First of all, each mongod needs to know it is part of a shard cluster. That is accomplished with the --shardsvr startup parameter. It is also very wise to explicitly declare the port number with the --port parameter. Finally it needs its own storage, the --dbpath parameter. Example:

    mongod  --shardsvr --port 27020 --dbpath \data\db1\s1
    mongod  --shardsvr --port 27021 --dbpath \data\db1\s2

Next we need a config server. This is also a mongod, but instead of user data it holds a special database that contains all the information there is to know about the cluster, in particular which members are known and the relation between the members and the partitions, called shards in Mongo language.

As of MongoDB 3.4 config servers need to be in a replica set instead of standalone. For demonstration or development, a set of only one member is allowed.

In a production environment you typically create 3 config servers; for now we’ll create just one:

    mongod --configsvr --replSet c --dbpath \data\db1\conf --port 27019

Start the mongo shell in another command box so we can configure the replica set "c":

rs.initiate(
               { _id: "c",
                 configsvr: true,
                 members: [
                          { _id : 0, host : "localhost:27019" }
                          ]
               }
    )

Finally we need at least one mongos, which is a routing service and serves as the front end to which the users connect. The mongos has no persistent data; it reads the config server and distributes client requests over the shards.

It needs to know where to find the config server, so we tell it with a parameter of the form configReplSetName/hostname:port:

    mongos --configdb c/localhost:27019

    We can now open a mongo shell. It will by default connect to port 27017 and, lo and behold, a mongos automatically runs on port 27017. Since we are all running on the same host, connecting is very easy.

    In the shell we will add shard servers to the cluster. Next we will enable sharding for a specific database.

    mongo
    mongos> sh.addShard( "localhost:27020")
    mongos> sh.addShard( "localhost:27021")
    mongos> sh.enableSharding("test")

The only thing we have done so far is enable sharding for a database; nothing is actually sharded yet. For that to happen we need to decide which collection(s) will be sharded and on which shard key, and an index on that shard key must exist. Once the collection is sharded, the cluster distributes the data automatically and nothing more needs to be done (see the sketch below).
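A rough sketch of that final step, using a hashed key on _id purely as an example (a range-based key on another indexed field works just as well):

    mongos> use test
    mongos> db.simple.createIndex( { _id : "hashed" } )
    mongos> sh.shardCollection( "test.simple", { _id : "hashed" } )
    mongos> sh.status()

From that moment on, the balancer distributes the chunks of test.simple over the two shard servers without any further intervention.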

    So what did I learn?
Sure, you lack the robustness of an RDBMS. Sure, you cannot join, and sure, therefore you store way more bytes than usual. But it’s fast, it’s easy and it serves many purposes. And last but not least, it takes some serious out-of-the-box thinking for a DBA to actually appreciate this new world: you have to let go of some fundamental principles on which your world was based for the last ten, twenty or more years.

And finally a disclaimer: these examples have been oversimplified. In the real world you’d use many hosts. You’d use 3 config servers, many mongos instances and of course a replicated shard cluster.
    Apart from that, there are many ways to make the behaviour more sophisticated and robust. Check out the official documentation; it’s quite good in my opinion and invites you to many experiments.

    The post How I learned to appreciate MongoDB appeared first on AMIS Oracle and Java Blog.

    The value of the Oracle PaaS Partner Community Forum

    Fri, 2017-03-31 04:10

I have just returned home from the Oracle PaaS Partner Community Forum 2017 which took place in Split, Croatia. Filled with energy, inspiration, impressions, already fond memories, more understanding, longer term plans and shorter term action items and a warm feeling of being part of an international network of smart, open, helpful and fun people. Does that sound soft, a little over the top – slightly brainwashed even? Let me try to explain what this event entails and what it means to me.

200 Representatives from Oracle PaaS and Middleware partners in 25+ countries – primarily in Europe and including North America, Asia and Africa – flocked to Split, Croatia to meet with each other as well as with a few dozen Oracle staff – from Vice President Product Management Fusion Middleware and PaaS Cloud to the Oracle A Team. The objectives of the event are to share real life experiences with Oracle technology and Oracle as a company to do business with. The partners share experience with each other – to learn and teach – as well as with Oracle – to enlighten them as to what works well and what is sorely lacking in products. Oracle staff lay out the product roadmaps for the next 6-18 months, share commercial opportunities, demonstrate new or hidden features in the products and in general enable attendees to make more successful use of their technology. The common goal is clear: become more successful in applying Oracle products for customers – by making the products better and by better understanding how the products can be best used.

The common ground for this community is strong and clear: a shared enthusiasm to work with technology and achieve business value for customers with it. We share a lot of history – even if this is the first time we meet – from working many years with the same technology, trying to achieve similar results for customers, facing the same challenges and probably unknowingly collaborating through discussion forums and blog articles. This shared history and common vocabulary make it very easy to strike up conversations – with just about anyone. Talking about a product, a technology, a wild plan at Oracle, the demo in the previous session. We speak the same language. And this common ground and shared history result in a warm, open atmosphere where conversations about sensitive topics and personal affairs can easily take place. With such a group of smart and very smart people, it seems easy to analyze the large problems of the world – and to overcome them as well (although that may have been the very agreeable local red wine speaking).

Some of the conversations I was part of included topics such as: the political situation in Brazil, the economy of Portugal, president Trump, the Dutch elections, Oracle SaaS in Belgium, recent history and current relations in Slovenia, Croatia and former Yugoslavia, politics and healthy food trends in Germany, history and culture in India, the Brexit and its consequences for Northern Ireland, raising children – especially at that tender age of 16 – in Slovenia, Portugal, the Netherlands and the USA, regional differences within Spain, weather and seasons in Peru, absorbing the move from Mexico to Norway.

    I treasure the warm, open, positive, inquisitive, helpful attitude of the attendees – their willingness to share and help, to collaborate and appreciate. I am sure people have similar experiences with scientific communities and conferences, scouting jamborees, musical festivals and other gatherings of people with a shared passion. I am glad I have my own professional social community to be a part of.

Thanks Jürgen for building and facilitating that community – it does not just happen.

     

     

    OMESA – the Open Modern Enterprise Software Architecture

Starting at the Community Day on Monday – but actually continuing an initiative that was started around Oracle OpenWorld 2016 – a small team of likeminded spirits with shared architectural interests, experiences and zeal gathered to work on OMESA – the Open Modern Enterprise Software Architecture initiative, initiated by Hajo Normann and especially Luis Weir. Taking into account many modern themes and upcoming technology patterns and trends – such as microservices, IoT, wearables and other devices, APIs, real time, big data, DevOps, scale out, stateless/server-less – the OMESA initiative tries to come up with good ways to describe, design and eventually implement enterprise software architecture patterns that actually work in real life. Drawing on the many decades of experience across organizations, IT landscapes, corporate cultures as well as countries and continents, the OMESA discussions ran deep (more than 2 hours for about 20% of a single slide) and wide (from business objectives to infrastructure design, from fine grained component to end to end business flow). All in good spirit and at times quite passionate. Fun to do and very meaningful as well.

     

     

    Themes

Clearly, one community day, two conference days and two days of in-depth hands-on workshops cover many topics and details. A number of subjects and themes kept reappearing and seemed to define the main scope for the conference and the imminent roadmaps. A prime example was the ChatBot, which made an appearance in almost every presentation.

    More far reaching themes included:

    • Integration – various solutions to create end to end integration flow across technology stacks, locations, cloud & on premises ;
    • SaaS Extension – Oracle focuses on various aspects of SaaS Extension (and SaaS enablement) and specific solutions for those. ABCS (Application Builder Cloud Service) is put forward as the primary solution for (simple) UI extensions of SaaS solutions. ABCS is rapidly evolving into a low code/no code/visual development environment for citizen developers and IT professionals to create cloud based user interfaces on top of local, custom business objects, SaaS APIs or custom REST APIs. Integration and Process & Workflow are other areas where through ICS with Cloud Adapters and PCS for human workflow and complex business processes the SaaS offerings can be extended and tailored. In terms of financial conditions – Oracle is offering these cloud services at very attractive price points for use with Oracle SaaS.
    • Machine Learning & AI – leveraging data to predict & recommend, based on statistical analysis of (big data) is rapidly becoming the holy grail for many organizations. The Big Data lakes are to be exploited and machine learning is one way of doing exactly that. IoT is leading to even more data and even more real time requirements. Machine Learning is popping up in various cloud services to provide them with predictive capabilities – for example in Dynamic Processes in PCS and Log and Infrastructure Analytics in OMC.
    • Processes – PCS will be extended and enriched with dynamic process (evolved from adaptive case management), business rules and more.
    • APIs – APIs – internal and external, generic business APIs and single purpose, consumer tailored APIs – API first design; API Platform/Catalog (design time) and API Gateway (runtime). apiary.io will always be free (promises Vikas Anand) – for API design and (design time) publication
• PaaS orchestration – using the Oracle PaaS services together; having them talk to each other, share data, connections, information, logging, events, monitoring, IaaS resources. Oracle is working on that, and it is getting better at it.
• Containers & Microservices – another holy grail seems to be a definition of what a microservice is. It seems obvious that microservices are the next level in decoupling, ownership, statelessness and flexible horizontal scale-out and modern DevOps. In terms of implementation, containers (including application containers) and serverless functions seem an obvious choice (to an extent where sometimes means and ends get confused).
    • DevOps – taking ownership of a software component throughout its lifecycle (you build it, you run it, you fix it), using automated continuous delivery, automated (anti-)regression testing and automated monitoring/problem analysis as well as automated recovery/fail over and scaling. Containers at infra structure level, microservices as organization and architecture level and tooling such as Oracle Management Cloud for monitoring and analysis.

     

     

    REAL – Red Expert Alliance

The partner community forum is a great opportunity for the partners who collaborate even more closely within the Red Expert Alliance (or REAL) to catch up with each other. With partners in Mexico, USA, Australia, Portugal, Norway, Germany, Spain, UK and the Netherlands, the REAL network forms an international team of Oracle partners and over 400 Oracle professionals, supporting customers on all continents. Split provided a good opportunity to exchange the latest developments, discuss new business propositions, get to know each other better and in general strengthen the mutual ties. Additionally, some conversations took place with new candidate members for REAL.

     

    Soaring through the Clouds

One of the highlights for me of the community event was the Soaring through the Cloud Live Demonstration of integrating 15 cloud services – prepared by a team of seven Oracle ACEs spanning three countries and five companies. The preparation for this session started over two months before the actual performance – and in a way already started over a year ago with the first and decidedly more challenging (given the state of the PaaS Cloud services at that time) live demo at the community event in Valencia, Spain – March 2016.

In 1 hour and 20 minutes, we described an end to end business flow with many interaction points with things, social media, IT components and people, including the audience. We managed to leverage 18 and demonstrate 15 cloud services, with only one or two minor hiccups around switching between Oracle Public Cloud identity domains. In doing so, we claim to have set the world record for integrating PaaS Cloud Services.

    You can find the resources for this presentation at: http://tinyurl.com/SoaringSplit (live endpoints, UI and supporting resources including source code).

    The presentation slide deck itself (including many screenshots from our live demo) is available from SlideShare:

    The post The value of the Oracle PaaS Partner Community Forum appeared first on AMIS Oracle and Java Blog.
