Amis Blog

Friends of Oracle and Java

Quick introduction to Oracle Container Registry–running one of Oracle’s prebaked images

Thu, 2017-11-16 22:47

Oracle has been active with Docker for quite some time now: from the first hesitant steps by some enthusiastic employees with Docker build files that helped people get going with Oracle software in their experimental containers, to a GitHub repository with a broad set of extensive Docker build files for creating supported containers with various Oracle products. And of course the Oracle Container Cloud – launched in the spring of 2017 – that will run custom Docker images. Now, recently, the next step: the availability of the Oracle Container Registry – Oracle’s own Docker container registry that offers a range of ready-built container images with Oracle software. Using these images, all you need to run an Oracle platform component on your local Docker host or Kubernetes cluster is a docker pull from this registry followed by a docker run.

In this article I will give a quick example of how to work with the Oracle Container Registry. It can be found at: .

The steps to go through:

1. Register as a user for the Oracle Container Registry (one time only, an Oracle account is required)

2. Explore the registry, locate the desired image(s) and Agree to and accept the Oracle Standard Terms and Restrictions for the image(s) that you want to make use of

3. Do a docker login to connect to the Oracle Container Registry

4. Pull the image(s) that you need

5. Run the image(s)

Open the link for the Container Registry:


Click on Register. Sign on with an existing Oracle account or start the flow for creating such an account.

Provide the account’s credentials. Then click on Create New User.


A confirmation email is sent:


And now the opening page lists the areas in which currently images are provided:


You can explore what images are available, for example for the database:


And for Java:


Before you can download any image, you need to accept the terms for that specific image – a manual step in the user interface of the container registry:




After pressing Accept, the image is available to be pulled from the registry with Docker.


Run Docker container based on Oracle’s Java Runtime Image

I will focus now on the Java Runtime image – one of the smaller images in the registry – to demonstrate the steps for running it on my local Docker host.

Accept the terms:


Click on the name of the image to get the details and the docker pull command required for this image:


Check out the tags:


We will go for the latest tag.

From the Docker host, first do a login, using your Oracle account credentials:

docker login -u username -p password

Then use docker pull, using the command provided on the image page:

docker pull

The image is downloaded and stored locally in the image cache.



When the download is complete the image (not small mind you, at 377 MB) is available to be used for running container instances, in the regular Docker way. For example:

docker run -it


Et voilà: the container is running locally, based on a prebuilt image. No local build steps are required, no downloading of software packages, no special configuration to apply. The Java runtime is a fairly straightforward case; with the Oracle Docker images for the enterprise database or the Fusion Middleware infrastructure, the gain from using a prebuilt image from the Oracle Container Registry is even bigger.
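To make the flow concrete, the complete sequence looks roughly like the one below. Note that the repository path (java/serverjre) and the java -version command are assumptions for illustration only – use the exact docker pull command shown on the image detail page in the registry:

docker login -u <your Oracle account> -p <password>

docker pull container-registry.oracle.com/java/serverjre

docker run -it container-registry.oracle.com/java/serverjre java -version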

If you want to free up local space, you can of course remove the Oracle Docker image. After all, it is easy to pull it again from the registry.


The post Quick introduction to Oracle Container Registry–running one of Oracle’s prebaked images appeared first on AMIS Oracle and Java Blog.

First steps with Istio on Kubernetes on Minikube on Windows 10

Wed, 2017-10-25 07:53

In this article, I discuss my steps to get going with Istio (service mesh) on Kubernetes running on Minikube on Windows 10. Unfortunately, I have run into an issue with Istio. This article describes the steps leading up to the issue. I will continue with the article once the issue is resolved. For now, it is a dead-end street.

Note: my preparations – installing Minikube and Kubernetes on Windows 10 – are described in this previous article:

Clone Git repository with samples

git clone

Start minikube

set MINIKUBE_HOME=C:\Users\lucas_j\.minikube

minikube start


Run Bookinfo

cd c:\data\bookinfo\istio\samples\bookinfo\kube

kubectl apply -f bookinfo.yaml


Show the productpage. First find the port on which the productpage service is exposed:


productpage is a service of type ClusterIP, which is only available inside the cluster – which is not good for me.

So to expose the service to outside the cluster:

kubectl edit svc productpage

and in the editor that pops up, change the type from ClusterIP to NodePort:
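As an alternative to interactive editing, a non-interactive patch should achieve the same – a sketch that works in a Linux/macOS shell (on the Windows command prompt the quoting of the JSON payload needs adjusting):

kubectl patch svc productpage -p '{"spec": {"type": "NodePort"}}'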


After changing the type and saving the change, get services indicates the port on which the productpage service is now exposed:


So now we can go to URL:


Installing Istio into the Kubernetes Cluster

Now that we’ve seen the app, we’ll adjust our deployment slightly to make it work with Istio. We first need to install Istio in our cluster. To see all of the metrics and tracing features in action, we also install the optional Prometheus, Grafana, and Zipkin addons.

First, download Istio for Windows from and extract the contents of the zip file.


Add the directory that contains the client binary istioctl.exe to the PATH variable.


Open a new command line window. Navigate to the installation location of Istio.

To install Istio to the minikube Kubernetes cluster:

kubectl apply -f install/kubernetes/istio.yaml


ending with:


To verify the success of the installation:

kubectl get svc -n istio-system


On Minikube – which does not support services of type LoadBalancer – the external IP for the istio-ingress service will remain pending. You must access the application using the service NodePort, or use port-forwarding instead.
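As an illustration of the port-forwarding route – a sketch, with the pod name as a placeholder to be looked up via kubectl get pods -n istio-system, and the assumption that the ingress container listens on port 80:

kubectl -n istio-system port-forward <istio-ingress-pod-name> 8080:80

after which the ingress should be reachable on http://localhost:8080.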

Check on the pods:

kubectl get pods -n istio-system


On the dashboard, when I switch to the istio-system namespace, I can see more details:


When I try to run istio commands, I run into issues:

istioctl version


panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x30 pc=0x121513f]

goroutine 1 [running]:
main.getDefaultNamespace(0x14b878a, 0xd, 0x0, 0x0)

I am not sure yet what is the underlying cause and if there is a solution. The issue report seems related – perhaps.

I do not know where to get more detailed logging about what happens prior to the exception.

Install Book Info Application and inject Istio

The next command I tried was:

kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)

this one fails with: the system cannot find the file specified
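The <(...) construct is bash process substitution, which the Windows command prompt does not understand – that is most likely what triggers this error. A two-step equivalent should work (a sketch, not verified on this setup):

istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml > bookinfo-injected.yaml

kubectl apply -f bookinfo-injected.yaml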



Git Repository for Istio – with samples: 

Guide for Istio introduction – Managing microservices with the Istio service mesh – 

Installation of Istio into Kubernetes Cluster –

Tutorial  Istio is not just for microservices  –

Istio: Traffic Management for your Microservices –

Istio Guide – getting started with sample application Bookinfo –

The post First steps with Istio on Kubernetes on Minikube on Windows 10 appeared first on AMIS Oracle and Java Blog.

Installing Minikube and Kubernetes on Windows 10

Tue, 2017-10-24 00:25

Quick notes on the installation of Minikube for trying out Kubernetes on my Windows 10 laptop (using VirtualBox, not Hyper-V).

Following instructions in 

Download Windows installer for MiniKube:

Run installer

After running the installer, open a command line window


Download kubectl.exe

curl -o kubectl.exe

Copy downloaded file to a proper location – of your choosing – and add that location to the PATH environment variable.

Open a new command line window, set MINIKUBE_HOME

set MINIKUBE_HOME=C:\Users\lucas_j\.minikube

and run

minikube start

to start Minikube.


The VM image in which the Kubernetes cluster will be created and run is downloaded. This image is 139 MB, so this startup takes a while – but of course only the first time.


The directory .minikube is created:

And in VirtualBox you will find a new VM set up and running:



minikube dashboard

and the browser will open:


with an overview of the Kubernetes cluster running inside the VM.


minikube stop

With this command you can halt the cluster – later to be started again using minikube start.


A restart now only takes 10-15 seconds:


Using the instructions here – – I can quickly run a Docker Image on my minikube cluster:

kubectl run my-nginx --image=nginx --replicas=2 --port=80

This will create two nginx pods listening on port 80. It will also create a deployment named my-nginx to ensure that there are always two pods running.


In the dashboard, this same information is available:


kubectl expose deployment my-nginx --type="NodePort"

is used to expose the deployment – make it accessible from outside the cluster.
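As an aside, Minikube can also report the URL for a NodePort service directly – a convenient shortcut (assuming the service is called my-nginx, as above):

minikube service my-nginx --url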


kubectl get services

I get a list of services and the local IP address and port on which they are exposed.


I can get the same information on the dashboard.

The IP address where the VirtualBox VM can be accessed is – as can be seen for example from the URL where the dashboard application is accessed:


The nginx service can now be accessed at

And in the browser:


The post Installing Minikube and Kubernetes on Windows 10 appeared first on AMIS Oracle and Java Blog.

Rapid first few steps with Fn – open source project for serverless functions

Thu, 2017-10-19 00:49

Project Fn is an open source project that provides a container-native, polyglot, cloud-agnostic (aka run on any cloud) serverless platform for running functions. Fn was launched during Oracle OpenWorld 2017. Fn is available on GitHub and provides all resources required to get started. In this article, I will just show you (and myself) how I went through the quick start steps and what it looked like on my laptop (Windows 10 with Vagrant and VirtualBox).

I simply get Fn up and running, create a first function that I then deploy and access through HTTP. I briefly show the APIs available on the Fn server and Fn UI application.


  1. Create VirtualBox VM with Debian and Docker (for me, Ubuntu 14 failed to run Fn; I created issue 437 for that) – this step is described in a different article
  2. Install Fn command line
  3. Install and run Fn server in the VM, as Docker container
  4. Create function hello
  5. Initialize new function and run it
  6. Deploy the new function (in its own Docker Container running inside the container running Fn server)
  7. Invoke the new function over http from the Host laptop
  8. Run the Fn UI application
  9. Inspect the Fn Server REST APIs

Connect into the Debian Virtual Machine – for me with vagrant ssh.

Install Fn Command Line

To install the Fn command line, I used this command:

curl -LSs | sh


Install and run Fn server in the VM, as Docker container

To run the Fn server, after installing the CLI, I just used

fn start


Fn Server is running.

Create function hello

As per the instructions in the quick start guide, I created a new directory hello with a text file hello.go:


Note: I created these on the host laptop inside the directory that is mapped into the VM under /vagrant. So I can access the file inside the VM in /vagrant/hello.

Initialize new function and run it


and after a little while


Deploy the new function

(in its own Docker Container running inside the container running Fn server)



Run function inside Debian VM:


Invoke the new function over http from the Host laptop


The IP address was assigned during the provisioning of the VM with Vagrant.
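Since the screenshots do not reproduce here, the overall command flow for steps 5 through 7 was roughly as follows – a sketch based on the Fn quick start of that time; the exact flags, the app name and the route URL are assumptions and may differ per CLI version:

cd /vagrant/hello

fn init

fn run

fn deploy --app myapp

curl http://<vm-ip>:8080/r/myapp/hello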

Run the Fn UI application

A UI application to inspect all Fn applications and functions can be installed and run:
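The UI runs as a Docker container itself; the command is roughly of the following form – the image name, port and environment variable are taken from the fnproject/ui project, but treat the exact flags as an assumption:

docker run --rm -it -p 4000:4000 -e FN_API_URL=http://<fn-server-host>:8080 fnproject/ui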



And accessed from the host laptop:


Note: for me it did not show the details for my new hello function.

Inspect the Fn Server REST APIs

The Fn platform publishes REST APIs that can be used to programmatically learn more about applications and functions, and also to manipulate them.
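For example, listing the applications known to the Fn server could look like this – assuming the v1 API of late 2017 on its default port:

curl http://localhost:8080/v1/apps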


Some examples:





    Getting started with Fn is pretty smooth. I got started and wrote this article in under an hour and a half. I am looking forward to doing much more with Fn – especially tying functions together using Fn Flow.


    Fn project home page:

    Article to quickly provision VirtualBox Image with Debian and Docker:

    Fn quick start guide: 

    Fn UI on GitHub: 

    Fn API:

    The post Rapid first few steps with Fn – open source project for serverless functions appeared first on AMIS Oracle and Java Blog.

    Create Debian VM with Docker Host using Vagrant–automatically include Guest Additions

    Thu, 2017-10-19 00:02

    A short and simple article. I needed a Debian VM that I could use as Docker host – to run on my Windows 10 laptop. I resorted to Vagrant. With a few very simple steps, I got what I wanted:

    0. install Vagrant (if not already done)

    0. install Vagrant plugin for automatically adding Virtual Box Guest Additions to every VM stamped out by Vagrant (so folder mapping from host laptop to VM is supported)


    1. create a fresh directory with a simple Vagrant file that refers for Debian image

    2. run vagrant up

    3. sit back and relax (few minutes)

    4. use vagrant ssh to connect into the running VM and start doing stuff.

    The vagrant file:

    Vagrant.configure("2") do |config|
      config.vm.provision "docker"

      config.vm.define "debiandockerhostvm" = "debian/jessie64" "private_network", ip: ""

      config.vm.synced_folder "./", "/vagrant", id: "vagrant-root",
             owner: "vagrant",
             group: "www-data",
             mount_options: ["dmode=775,fmode=664"],
             type: ""

      config.vm.provider :virtualbox do |vb| = "debiandockerhostvm"
        vb.memory = 4096
        vb.cpus = 2
        vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
        vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
      end
    end

    Running Vagrant to create and subsequently run the VM:



    Use vagrant ssh to enter the Virtual Machine and start mucking around:



    Vagrant Plugin for automatically installing Guest Addition to each VM that is produced:

    Vagrant Box Jessie:

    The post Create Debian VM with Docker Host using Vagrant–automatically include Guest Additions appeared first on AMIS Oracle and Java Blog.

    Quickly create a Virtualbox development VM with XE DB using Kickstart, Packer, Vagrant

    Tue, 2017-10-17 09:53

    The topic of quickly creating an Oracle development VM is not new. Several years ago Edwin Biemond and Lucas Jellema have written several blogs about this and have given presentations about the topics at various conferences. You can also download ready made Virtualbox images from Oracle here and specifically for SOA Suite here.

    Over the years I have created a lot (probably 100+) of virtual machines manually. For SOA Suite, the process of installing the OS, installing the database, installing WebLogic Server and installing SOA Suite itself can be quite time-consuming and boring if you have already done it so many times. Finally my irritation passed the threshold and I needed to automate it! I wanted to easily recreate a clean environment with a new version of specific software. This blog is a start: provisioning an OS and installing the XE database on it. It might seem a lot, but this blog contains the knowledge of two days' work. This indicates it is easy to get started.

    I decided to start from scratch and first create a base Vagrant box using Packer which uses Kickstart. Kickstart is used to configure the OS of the VM such as disk partitioning scheme, root password and initial packages. Packer makes using Kickstart easy and allows easy creation of a Vagrant base box. After the base Vagrant box was created, I can use Vagrant to create the Virtualbox machine, configure it and do additional provisioning such as in this case installing the Oracle XE database.

    Getting started

    First install Vagrant from HashiCorp (here).

    If you just want a quick VM with the Oracle XE database installed, you can skip the Packer part. If you want to have the option to create everything from scratch, you can first create your own base image with Packer and use it locally, or use the Vagrant Cloud to share the base box.

    Every Vagrant development environment requires a base box. You can search for pre-made boxes at

    Oracle provides Vagrant boxes you can use here. Those boxes have some default settings. I wanted to know how to create my own box to start with in case I for example wanted to use an OS not provided by Oracle. I was presented with three options in the Vagrant documentation. Using Packer was presented as the most reusable option.


    ‘Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration.’ (from here) Download Packer from HashiCorp (here).

    Avast Antivirus, and maybe other antivirus programs, do not like Packer, so you might have to temporarily disable them or tell them Packer can be trusted.

    virtualbox-iso builder

    Packer can be used to build Vagrant boxes (here) but also boxes for other platforms such as Amazon and VirtualBox. See here. For VirtualBox there are two so-called builders available: start from scratch by installing the OS from an ISO file, or start from an OVF/OVA file (a pre-built VM). Here of course I choose the ISO file, since I want to be able to easily update the OS of my VM and do not want to create a new OVF/OVA file for every new OS version. Thus I decided to use the virtualbox-iso builder.


    For my ISO file I decided to go with Oracle Linux Release 7 Update 4 for x86 (64 bit), which is currently the most recent version. In order for Packer to work fully autonomously (and make it easy for the developer), you can provide a remote URL to a file you want to download. For Oracle Linux there are several mirrors available which provide that; look one up close to you here. You have to update the checksum in the template file (see below) when you update the ISO image if you want to run on a new OS version.

    template JSON file

    In order to use Packer with the virtualbox-iso builder, you first require a template file in JSON format. Luckily samples for these have already been made available here. You should check them though. I made my own version here.


    In order to make the automatic installation of Oracle Linux work, you need a Kickstart file. This is generated automatically at /root/anaconda-ks.cfg when performing an installation. Read here. I’ve made my own here in order to have the correct users, passwords, installed packages and swap partition size.

    After you have a working Kickstart file and the Packer ol74.json, you can kick off the build with:

    packer build ol74.json

    Packer uses a specified username to connect to the VM (present in the template file). This should be a user which is created in the Kickstart script. For example, if you have a user root with password Welcome01 in the Kickstart file, you can use that one to connect to the VM. Creating the base box will take a while, since it will first download the ISO file and then do a complete OS installation.

    You can put the box remote or keep it local.

    Put the box remote

    After you have created the box, you can upload it to the Vagrant Cloud so other people can use it. The Vagrant Cloud free option offers unlimited free public boxes (here). The process of uploading a base box to the Vagrant cloud is described here. You first create a box and then upload the file Packer has created as provider.

    After you’re done, the result will be a Vagrant box which can be used as base image in the Vagrantfile. This looks like:

    Use the box locally

    Alternatively you can use the box you’ve created locally:
    vagrant box add ol74 file:///d:/vagrant/packer/virtualbox/

    You of course have to change the box location to be specific to your environment.

    And use ol74 as box name in your Vagrantfile. You can see an example of a local and remote box here.

    If you have recreated your box and want to use the new version in Vagrant to create a new Virtualbox VM:

    vagrant box remove ol74
    vagrant box add ol74 file:///d:/vagrant/packer/virtualbox/


    You now have a clean base OS (relatively clean, I added a GUI) and you want to install stuff in it. Vagrant can help you do that. I’ve used a simple shell script to do the provisioning (see here) but you can also use more complex pieces of software like Chef or Puppet. These are of course, in the long run, better suited to also update and manage machines. Since this is just a local development machine, I decided to keep it simple.

    I’ve prepared the following Vagrant file.

    This expects to find a structure like:
    Directory: software

    These can be downloaded here. Except the file which can be downloaded here.

    Oracle XE comes with an rsp file (a so-called response file) which makes automating the installation easy. This is described here. You just have to fill in some variables like password and port and such. I’ve prepared such a file here.
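    For a rough idea of what the provisioning script does with that response file – a sketch under the assumption of the Oracle XE 11gR2 Linux x64 RPM; file names and paths will differ per environment:

    unzip /vagrant/software/oracle-xe-11.2.0-1.0.x86_64.rpm.zip

    rpm -ivh Disk1/oracle-xe-11.2.0-1.0.x86_64.rpm

    /etc/init.d/oracle-xe configure responseFile=/vagrant/provision/xe.rsp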

    After everything is setup, you can do:

    vagrant up soadb

    And it will create the soadb VM for you in VirtualBox.
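    Once the VM is up, vagrant ssh gets you into it, and the database should be reachable on the listener port and service name configured in the response file – for example (password, port and service name are assumptions based on a default XE setup):

    vagrant ssh soadb

    sqlplus system/<password>@//localhost:1521/XE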

    The post Quickly create a Virtualbox development VM with XE DB using Kickstart, Packer, Vagrant appeared first on AMIS Oracle and Java Blog.

    JSON manipulation in Java 9 JShell

    Thu, 2017-10-12 09:38

    In this article I will demonstrate how we can work with JSON based data – for analysis, exploration, cleansing and processing – in JShell, much like we do in Python. I work with a JSON document with entries for all sessions at the Oracle OpenWorld 2017 conference.

    The Java SE 9 specification for the JDK does not contain the JSON-P API and libraries for processing JSON. In order to work with JSON-P in JShell, we need to add the libraries – which we first need to find and download.

    I have used a somewhat roundabout way to get hold of the required jar-files (but it works in a pretty straightforward manner):

    1. Create a pom.xml file with dependencies on JSON-P





    2. Then run

    mvn install dependency:copy-dependencies

    as described in this article:

    This will download the relevant JAR files to the subdirectory target/dependencies.


    3. Copy JAR files to a directory – that can be accessed from within the Docker container that runs JShell – for me that is the local lib directory that is mapped by Vagrant and Docker to /var/www/lib inside the Docker container that runs JShell.


    4. In the container that runs JShell:

    Start JShell with this statement that makes the new httpclient module available, for when the JSON document is retrieved from an HTTP URL resource:

    jshell --add-modules jdk.incubator.httpclient


    5. Update classpath from within jshell

    To process JSON in JShell – using JSON-P – we need to set the classpath to include the two jar files that were downloaded using Maven.

    /env --class-path /var/www/lib/javax.json-1.1.jar:/var/www/lib/javax.json-api-1.1.jar

    Then the classes in JSON-P are imported

    import javax.json.*;

    if we need to retrieve JSON data from a URL resource, we should also

    import jdk.incubator.http.*;


    6. I have made the JSON document available on the file system.


    It can be accessed as follows:

    InputStream input = new FileInputStream("/var/www/oow2017-sessions-catalog.json");


    7. Parse data from file into JSON Document, get the root object and retrieve the array of sessions:

    JsonReader jsonReader = Json.createReader(input)

    JsonObject rootJSON = jsonReader.readObject();

    JsonArray sessions = rootJSON.getJsonArray("sessions");
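    A quick sanity check at this point – for example printing the number of session entries – can be done directly in JShell:

    System.out.println("Number of sessions: " + sessions.size());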


    8. Filter sessions with the term SQL in the title and print their title to the System output – using Streams:

    sessions.stream().map( p -> (JsonObject)p).filter(s -> s.getString("title").contains("SQL")).forEach( s -> {System.out.println(s.getString("title"));})



    One other example: show a list of all presentations for which a slidedeck has been made available for download along with the download URL:

    sessions.stream()
    .map( p -> (JsonObject)p)
    .filter(s -> s.containsKey("files") && !s.isNull("files") && !(s.getJsonArray("files").isEmpty()))
    .forEach( s -> {System.out.println(s.getString("title") + " url:" + s.getJsonArray("files").getJsonObject(0).getString("url"));})


    Bonus: Do HTTP Request

    As an aside some steps in jshell to execute an HTTP request:

    jshell> HttpClient client = HttpClient.newHttpClient();
    client ==> jdk.incubator.http.HttpClientImpl@4d339552

    jshell> HttpRequest request = HttpRequest.newBuilder(URI.create("")).GET().build();
    request ==> GET

    jshell> HttpResponse response = client.send(request, HttpResponse.BodyHandler.asString())
    response ==> jdk.incubator.http.HttpResponseImpl@147ed70f

    jshell> System.out.println(response.body())
    <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
    <TITLE>302 Moved</TITLE></HEAD><BODY>
    <H1>302 Moved</H1>
    The document has moved
    <A HREF=";dcr=0&amp;ei=S2XeWcbPFpah4gTH6Lb4Ag">here</A>.



    The post JSON manipulation in Java 9 JShell appeared first on AMIS Oracle and Java Blog.

    Java 9 – First baby steps with Modules and jlink

    Wed, 2017-10-11 12:00

    In a recent article, I created an isolated Docker container as a Java 9 R&D environment: In this article, I will use that environment to take a few small steps with Java 9 – in particular with modules. Note: this story does not end well. I wanted to conclude with using jlink to create a standalone runtime that contained both the required JDK modules and my own module – and demonstrate how small that runtime was. Unfortunately, the link step failed for me. More news on that in a later article.

    Create Custom Module

    Start a container based on the openjdk:9 image, exposing its port 80 on the docker host machine and mapping folder /vagrant (mapped from my Windows host to the Docker Host VirtualBox Ubuntu image) to /var/www inside the container:

    docker run -it -p 8080:80 -v /vagrant:/var/www openjdk:9 /bin/sh

    Create a Java application with a custom module: I create a single module (nl.amis.j9demo) and a single class nl.amis.j9demo.MyDemo. The module depends directly on one JDK module (jdk.httpserver) and indirectly on several more.

    The root directory for the module has the same fully qualified name as the module: nl.amis.j9demo.

    This directory contains the module-info.java file. This file specifies:

    • which modules this module depends on
    • which packages it exports (for other modules to create dependencies on)

    In my example, the module-info.java file is very simple – only specifying a dependency on jdk.httpserver:
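    The screenshot is not reproduced here, but based on the description above the content of module-info.java boils down to something like this:

    module nl.amis.j9demo {
      requires jdk.httpserver;
    }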


    The Java class MyDemo has a number of imports. Many are for base classes from the java.base module. Note: every Java module has an implicit dependency on java.base, so we do not need to include it in the module-info.java file.


    This code creates an instance of HttpServer – an object that listens for HTTP requests at the specified port (80 in this case) and then always returns the same response (the string "This is the response"). As meaningless as that is, the notion of receiving and replying to HTTP requests in just a few lines of Java code (running on the OpenJDK!) is quite powerful.

    package nl.amis.j9demo;

    import java.io.*;
    import java.net.*;
    import java.util.*;
    import java.util.concurrent.*;
    import com.sun.net.httpserver.*;
    import static java.lang.System.out;

    public class MyDemo {
      private static final int DEFAULT_PORT = 80;
      private static URI ROOT_PATH = URI.create("/");

      private static class MyHandler implements HttpHandler {
        public void handle(HttpExchange t) throws IOException {
          URI tu = t.getRequestURI();
          InputStream is = t.getRequestBody();
          // .. read the request body
          String response = "This is the response";
          t.sendResponseHeaders(200, response.length());
          OutputStream os = t.getResponseBody();
          os.write(response.getBytes());
          os.close();
        }
      }

      public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(DEFAULT_PORT), 0);
        server.createContext("/apps", new MyHandler());
        server.setExecutor(null); // creates a default executor
        server.start();
        out.println("HttpServer is started, listening at port " + DEFAULT_PORT);
      }
    }
    Compile, Build and Run

    Compile the custom module:

    javac -d mods --module-source-path src -m nl.amis.j9demo


    Create destination directory for JAR file

    mkdir -p lib

    Create the JAR for the module:

    jar --create --file lib/nl-amis-j9demo.jar --main-class nl.amis.j9demo.MyDemo -C mods/nl.amis.j9demo .


    Inspect the JAR file:

    jar tvf lib/nl-amis-j9demo.jar


    To run the Java application- with a reference to the module:

    java -p lib/ -m nl.amis.j9demo


    The traditional equivalent with a classpath for the JAR file(s) would be:

    java -classpath lib/nl-amis-j9demo.jar nl.amis.j9demo.MyDemo

    Because port 80 in the container was exposed and mapped to port 8080 on the Docker Host, we can access the Java application from the Docker Host, using wget:
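    The screenshots are omitted; the request from the Docker Host is essentially the following (the context path /apps comes from the createContext call in the code above):

    wget -qO- http://localhost:8080/apps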



    The response from the Java application is hardly meaningful. However, the fact that we get a response at all is quite something: the ‘remote’ container based on openjdk:9 has published an HTTP server from our custom module that we can access from the Docker Host with a simple HTTP request.


    I tried to use jlink – to create a special runtime for my demo app, consisting of required parts of JDK and my own module. I expect this runtime to be really small.

    The JVM modules by the way on my Docker Container are in /docker-java-home/jmods


    The command for this:

    jlink --output mydemo-runtime --module-path lib:/docker-java-home/jmods --limit-modules nl.amis.j9demo --add-modules nl.amis.j9demo --launcher demorun=nl.amis.j9demo --compress=2 --no-header-files --strip-debug

    Unfortunately, on my OpenJDK:9 Docker Image, linking failed with this error:


    Error: java.nio.file.FileSystemException: mydemo-runtime/legal/jdk.httpserver/ASSEMBLY_EXCEPTION: Protocol error


    Documentation for jlink –

    JavaDoc for HttpServer package –

    Java9 Modularity Part 1 (article on Medium by Chandrakala) –

    JavaOne 2017 Keynote – Mark Reinhold demoing jlink –

    Exploring Java 9 Modularity –

    The post Java 9 – First baby steps with Modules and jlink appeared first on AMIS Oracle and Java Blog.

    Quick and clean start with Java 9–running Docker container in VirtualBox VM on Windows 10 courtesy of Vagrant

    Wed, 2017-10-11 08:25

    The messages from JavaOne 2017 were loud and clear. Some of these:

    • Java 9 is here,
    • the OpenJDK has all previously exclusive commercial features from the Oracle (fka SUN) JDK – this includes the Java Flight Recorder for real time monitoring/metrics gathering and analysis,
    • Java 9 will be succeeded by Java 18.3, 18.9 and so on (a six-month cadence), with much quicker evolution while retaining quality and stability,
    • Jigsaw is finally here; it powers the coming evolution of Java and the platform, and it allows us to create fine-tuned, tailor-made Java runtime environments that may take less than 10-20% of the full-blown JRE,
    • Java 9 has many cool and valuable features besides the modularity of Jigsaw – features that make programming easier, more elegant, more fun, more lightweight, etc.,
    • One of the objectives is “Java First, Java Always” (instead of: when web companies mature, then they switch to Java) (having Java enabled for cloud, microservices and serverless is an important step in this)

      Note: during the JavaOne Keynote, Spotify presented a great example of this pattern: they have a microservices architecture (from before it was called microservice); most were originally created in Python, with the exception of the search capability; due to scalability challenges, all Python based microservices have been migrated to Java over the years. The original search service is still around. Java not only scales very well and has the largest pool of developers to draw from, it also provides great run time insight into what is going on in the JVM

    I have played around a little with Java 9, but now that it is out in the open (and I have started working on a fresh new laptop – Windows 10) I thought I should give it another try. In this article I will describe the steps I took from a non-Java-enabled Windows environment to playing with Java 9 in jshell – in an isolated container, created and started without any programming, installation or configuration. I used Vagrant and VirtualBox – both were installed on my laptop prior to the exercise described in this article. Vagrant in turn used Docker and downloaded the OpenJDK Docker image for Java 9 on top of Alpine Linux. All of that was hidden from view.

    The steps:

    0. Preparation – install VirtualBox and Vagrant

    1. Create Vagrant file – configured to provide a VirtualBox image (based on Ubuntu Linux) and provision the Docker host on that image as well as a Docker Container with OpenJDK 9

    2. Run Vagrant for that Vagrant file to have it spin up the VirtualBox, install Docker into it, pull the OpenJDK image and run the container

    3. Connect into VirtualBox Docker Host and Docker Container

    4. Run jshell command line and try out some Java 9 statements

    In more detail:

    1. Create Vagrant file

    In a new directory, create a file called Vagrantfile – no extension. The file has the following content:

    It is configured to provide a VirtualBox image (based on Ubuntu Linux) and provision the Docker host on that VB image as well as a Docker Container based on the OpenJDK:9 image.


    Vagrant.configure("2") do |config|
      config.vm.provision "docker" do |d|
        d.run "j9",
          image: "openjdk:9",
          cmd: "/bin/sh",
          args: "-v '/vagrant:/var/www'"
        d.remains_running = true
      end

      # The following line terminates all ssh connections. Therefore Vagrant will be forced to reconnect.
      # That's a workaround to have the docker command in the PATH
      # Command: "docker" "ps" "-a" "-q" "--no-trunc"
      # without it, I run into this error:
      # Stderr: Get http:///var/run/docker.sock/v1.19/containers/json?all=1: dial unix /var/run/docker.sock: permission denied.
      # Are you trying to connect to a TLS-enabled daemon without TLS?
      config.vm.provision "shell", inline:
        "ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill"

      config.vm.define "dockerhostvm" = "ubuntu/trusty64" "private_network", ip: ""

      config.vm.provider :virtualbox do |vb| = "dockerhostvm"
        vb.memory = 4096
        vb.cpus = 2
        vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
        vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
      end
    end

    # to get into running container:
    # vagrant ssh
    # docker run -it -v /vagrant:/var/www openjdk:9 /bin/sh
    2. Run Vagrant for that Vagrant file

    And have it spin up the VirtualBox, install Docker into it, pull the OpenJDK image and run the container:


    3. Connect into VirtualBox Docker Host and Docker Container


    vagrant ssh

    to connect into the VirtualBox Ubuntu Host and

    docker run -it openjdk:9 /bin/sh

    to run a container and connect into the shell command line, we get to the environment primed for running Java 9:


    At this point, I should also be able to use docker exec to get into the container that started by the Vagrant Docker provisioning configuration. However, I had some unresolved issues with that – the container kept restarting. I will attempt to resolve that issue.

    4. Run jshell command line and try out some Java 9 statements

    JShell is the new Java command line tool that allows REPL style exploration – somewhat similar to for example Python and JavaScript (and even SQL*Plus).

    Here is an example of some JShell interaction:


    I tried to use the new simple syntax for creating collections from static data. Here I got the syntax right:
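    For example, the Java 9 collection factory methods work along these lines (my exact statement may have differed, but this is the idea):

    jshell> List<String> languages = List.of("Java", "Kotlin", "Scala")
    languages ==> [Java, Kotlin, Scala]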


    It took me a little time to find out the exit strategy. It turns out that /exit does the trick:


    In summary: spinning up a clean, isolated environment in which to try out Java is not hard at all. On Linux – with Docker running natively – it is even simpler, although even then using Vagrant may be beneficial. On Windows it is also quite straightforward – no complex sys admin stuff required and hardly any command line things either. And that is something we developers should start to master – if we do not do so already.

    Issue with Docker Provider in Vagrant

    Note: I did not succeed in using the Docker provider (instead of the provisioner) with Vagrant. Attempting that (cleaner) approach failed with “Bringing machine ‘j9’ up with ‘docker’ provider…
    The executable ‘docker’ Vagrant is trying to run was not
    found in the %PATH% variable. This is an error. Please verify
    this software is installed and on the path.” I have looked across the internet and found similar reports, but did not find a solution that worked for me.


    The provider is documented here:

    The Vagrantfile I tried to use originally – but was unable to get to work:


    (based on my own previous article:

    The post Quick and clean start with Java 9–running Docker container in VirtualBox VM on Windows 10 courtesy of Vagrant appeared first on AMIS Oracle and Java Blog.

    ODC Appreciation Day : Timeline component in Oracle JET, Data Visualization Cloud, APEX and ADF DVT: #ThanksODC

    Tue, 2017-10-10 13:40

    Here is my entry for the Oracle Developer Community ODC Appreciation Day (#ThanksODC).

    It is quite hard to make a choice for a feature to write about. So many to talk about. And almost every day another favorite of the month. Sliding time windows. The Oracle Developer Community – well, that is us. All developers working with Oracle technology, sharing experiences and ideas, helping each other with inspiration and solutions to challenges, making each other and ourselves better. Sharing fun and frustration, creativity and best practices, desires and results. Powered by OTN, now known as ODC. Where we can download virtually any software Oracle has to offer. And find resources – from articles and forum answers to documentation and sample code. This article is part of the community effort to show appreciation – to the community and to the Oracle Developer Community (organization).

    For fun, you could take a look at how the OTN site started – sometime in 2000 – using the WayBack machine: 


    And the WayBack Machine is just one of many examples of timelines – presentation of data organized by date. We all know how pictures say more than many words, and how tables of data are frequently much less accessible to users than to-the-point visualizations. For some reason, data associated with moments in time has always held special interest for me. As do features that are about time – such as Flashback Query, 12c Temporal Database and SYSDATE (or better yet: SYSTIMESTAMP).

    To present such time-based data in a way that reveals the timeline and the historical thread that resides in the data, we can make use of the Timeline component that is available in:

    In JET:

    In ADF:


    In Data Visualization Cloud:

    Note that in all cases it does not take much more than a dataset with a date (or date-time) attribute and one or more attributes to create a label and perhaps to categorize. A simple select ename, job, hiredate from emp suffices.

    The post ODC Appreciation Day : Timeline component in Oracle JET, Data Visualization Cloud, APEX and ADF DVT: #ThanksODC appeared first on AMIS Oracle and Java Blog.

    SaaS going forward at Oracle OpenWorld 2017–Smart, Connected, Productivity, Multi-Channel

    Mon, 2017-10-09 03:34

    I have not seen many sessions on SaaS and business applications at Oracle OpenWorld. Yet SaaS is becoming increasingly important. The number of SaaS applications, or at least the number of functions that standard available applications can perform, is growing rapidly. The availability to any organization of SaaS functions that will support a large portion of their business processes is growing. The main challenge of corporate IT departments is going to shift from creating IT facilities to support the business processes to enabling SaaS applications to provide that support – by mutually tying these applications together through integration and mash-up, as well as embedding them in authentication, authorization, data warehousing, scanning, printing, enterprise content management and other enterprise IT facilities.

    Business applications not only support many more niche functions and allow fine-tuning to an organization’s way of doing things, they also become much smarter and more proactive. Smart business applications apply machine learning to help humans focus on the tasks that require human attention, and automatically handle the cases that fall within the boundaries of normal action.


    Some simple examples:

    • Marketing – who to send email to
    • Sales – who to focus on
    • Customer Service – recommend the next step when calling a customer

    Oracle is permeating AI into business apps (AI Adaptive Apps), also leveraging its Data as a Service with 3B consumer profiles in DaaS and records on over $4 trillion in spending.


    Oracle offers “a full suite of SaaS offerings” :


    (although they clearly do not yet all have ideal mutual integration, similar look & feel and perfect alignment)

    During the keynote by Thomas Kurian at Oracle OpenWorld 2017, an extensive demo was presented of how consumer activity can be tracked and used to reach out and make relevant offerings – as part of the B2C Customer Experience.

    For example – web site navigation behavior can be tracked:


    and from this, a profile can be composed about this particular user:




    By comparing the profile to similar profiles and looking at the purchase behavior of those similar profiles, the AI powered application can predict and recommend purchases by the user with this profile.

    Here follow a number of screenshots that indicate the insight in customer interest in products – and the effects of specific, targeted campaigns to push certain products






    Information can be retrieved using REST services as well:


    Recommendations that have been given to customers can be analyzed for their success. Additionally, the settings that drive these recommendations can be overridden – for example to push stock of a product that has been overstocked or is end-of-line:




    The Supervisory Controls allow humans to override the machine learning based behavior:


    Change weight between channels



    The post SaaS going forward at Oracle OpenWorld 2017–Smart, Connected, Productivity, Multi-Channel appeared first on AMIS Oracle and Java Blog.

    Some impressions from Oracle Analytics Cloud–taken from keynote at Oracle OpenWorld 2017

    Mon, 2017-10-09 01:07

    In his keynote on October 3rd during Oracle OpenWorld 2017, Thomas Kurian stated that the vision at Oracle around analytics has changed quite considerably. He explained this change and the new vision using this slide.


    All kinds of data, all kinds of users, many more ways to present and visualize and machine generated insights to complement human understanding.

    The newly launched Analytics Cloud supports this vision.


    Zooming in on Data Preparation:


    And from cleansed and prepared data – create machine learning models that help classify and predict, use conventional (charts) and new (personalized, context-sensitive and rich chat, notifications, maps) forms of presentation, and allow users to collaborate around findings from the data.


    Thomas K. threw in the Autonomous Data Warehouse as an intermediate or final destination for prepared data, or even for the findings from that data.


    The keynote continued with a demo that made clear how a specific challenge – monitor social media for traffic on specific topics and derive from all messages and tweets which player was most valuable (and has the largest social influence) – could be addressed.


    Click on Analyze Social Streams

    Select streams to analyze:


    Define search criteria:


    See how additional cloud services are spun up: Big Data Compute (running Hadoop, Spark, Elastic) and Event Hub (running Kafka).

    The initial data load is presented for the new Social Data Stream project on the Prepare tab. The Analytics Cloud comes with recommendations (calls to action) to cleanse (or “heal”) and enrich the data. Among the potential actions are correcting zip codes, extracting business entities from images, completing names, and enriching by joining to predefined data sets such as players, locations, team names, etc.

    The initial presentation of the data is in itself a rich exploration of the data. Analytics Cloud has already identified a large number of attributes, has analyzed the data and presents various aggregations. (This has clear undertones of Endeca.) At this point, we can work on the data to make it better – cleaner, richer and better suited for presentations, conclusions and model building.




    Images can be analyzed to identify objects, recognize scenes and even find specific brands:


    After each healing action, new recommendations for data preparation may be presented.


    Here are two examples of joining the data sets to additional sets:




    Some more examples of what the current status of preparation is of the data.



    Here is the Visualize tab – where users can edit the proposed visualizations and add new ones. The demo continued to show how, through a mobile app – using voice recognition – a new KPI could be added.



    That should result in notifications being sent upon specific conditions:


    Notifications can take various forms – including visual but passive alerts on a dashboard or active push messages on messenger or chat channel (Slack, WeChat, Facebook Messenger), SMS Text Messages, Email.


    The post Some impressions from Oracle Analytics Cloud–taken from keynote at Oracle OpenWorld 2017 appeared first on AMIS Oracle and Java Blog.

    Top 5 Infrastructure (IaaS) announcements by Oracle at Oracle OpenWorld 2017

    Sun, 2017-10-08 12:26

    From Thomas Kurian’s keynote during Oracle OpenWorld 2017 – see – a quick recap of the five most important announcements regarding IaaS:










    World record benchmarks




    The post Top 5 Infrastructure (IaaS) announcements by Oracle at Oracle OpenWorld 2017 appeared first on AMIS Oracle and Java Blog.

    Watch Oracle OpenWorld 2017 Keynotes On Demand

    Sat, 2017-10-07 06:16

    Watch Keynotes on YouTube using these links:

    Larry Ellison (Sunday Oct 1st) –

    Dave Donatelli (Tuesday Oct 3rd) – 

    Thomas Kurian (Tuesday Oct 3rd) –

    Larry Ellison (Tuesday Oct 3rd) – 

    SuiteConnect – Evan Goldberg (Wednesday Oct 4th) – 

    JavaOne Keynote (Monday Oct 2nd) –


    The post Watch Oracle OpenWorld 2017 Keynotes On Demand appeared first on AMIS Oracle and Java Blog.

    Fun with Data Visualization Cloud–creating a timeline for album releases

    Fri, 2017-10-06 08:51

    I have played a little with Oracle’s Data Visualization cloud and it is really fun to be able to so quickly turn raw data into nice and sometimes meaningful visuals. I do not pretend I grasp the full potential of Data Viz CS, but I can show you some simple steps to quickly create something good looking and potentially really useful.

    My very first steps were documented in this earlier article:

    In this article, I start with two tables in a cloud database – with the data we used for the Soaring through the Clouds demo at Oracle OpenWorld 2017:


    As described in the earlier article, I have created a database connection to this DBaaS instance and I have created data sources for these two tables.

    Now I am ready to create a new project:


    I select the data sources to use in this project:


    And on the prepare tab I make sure that the connection between the Data Sources is defined correctly (with Proposed Acts adding fact – lookup data – to the Albums):


    On the Visualize tab, I drag the Release Date to the main pane.


    I then select Timeline as visualization :


    Next, I bring the title of the album to the Details section:


    and the genre of the album to the Color area:


    Then I realize I would like to have the concatenation of Artist Name and Album Title in the details section. However, I cannot add two attributes to that area. What I can do instead is create a Calculation:


    Next I can use this calculation for the details:


    I can use Trellis Rows to create a Timeline per value of the selected attribute, in this case the artist:


    It is very easy to add filters – that can be manipulated by end users in presentation mode to filter on data relevant to them. Simply drag attributes to the filter section at the top:


    Then select the desired filter values:


    and the visualization is updated accordingly:


    The post Fun with Data Visualization Cloud–creating a timeline for album releases appeared first on AMIS Oracle and Java Blog.

    Tweet with download link for JavaOne and Oracle OpenWorld slide decks

    Fri, 2017-10-06 07:24

    In a recent article I discussed how to programmatically fetch a JSON document with information about sessions at Oracle OpenWorld and JavaOne 2017. Yesterday, slide decks for these sessions started to become available. I have analyzed how the links to these downloads were included in the JSON data returned by the API. Then I created simple Node programs to tweet about each of the sessions for which the download became available


    and to download the file to my local file system.


    I added provisions to space out the tweets and the download activity over time – as to not burden the backend of the web site and to not be kicked off Twitter for being a robot.

    The code I crafted is not particularly ingenious – it was created rather hastily in order to share with the OOW17 and JavaOne communities the links for downloading slide decks from presentations at both conferences. I used the npm modules twit and download. This code can be found on GitHub:

    The documents javaone2017-sessions-catalog.json and oow2017-sessions-catalog.json contain details on all sessions – including the URLs for downloading slides.


    The post Tweet with download link for JavaOne and Oracle OpenWorld slide decks appeared first on AMIS Oracle and Java Blog.

    Oracle Open World; day 4 – almost done

    Thu, 2017-10-05 13:43

    Almost done. It’s not expected that tomorrow, Thursday, will be a day full of new stuff and exciting news. Today, Wednesday, was a mix for me between ‘normal’ content, like sessions about migrating to Oracle Enterprise Manager 13.2 (another packed room), and a very interesting session about the Autonomous Database. Just a short note about a few sessions (including the Autonomous Database of course).

    As mentioned, sessions with ‘normal’ content – in this case, migrating a database of 100TB in one day, with Mike Dietrich – are quite popular. We may almost forget that most customers are thinking about the cloud but are, at the moment, just focused on how to keep the daily business running.

    The session about Oracle Enterprise Manager, about upgrading to 13c (a packed room), is quite rare. Two years ago there were a lot of presentations about this management product; this year close to none. I’m very curious to know what happens after 2020. Oracle Management Cloud is coming rapidly. But… Oracle is using it quite heavily in the public cloud, so it is expected it won’t disappear that fast. Here are the timelines:




    At the end of the day, a session was planned about the most important announcement of Oracle OpenWorld: a preview of the Autonomous Database.

    Quite peculiar, at the very end of the day, in a room that was obviously too small for the crowd.

    A few outlines. The DBA is still needed; only the general tasks are disappearing:


    The very rough roadmap:


    The Data Warehouse version is already there in 2017; this was technically ‘easier’ to accomplish. The OLTP autonomous database has more challenges.


    And a very important message to the customers: an SLA guarantee.



    The post Oracle Open World; day 4 – almost done appeared first on AMIS Oracle and Java Blog.

    Oracle Open World; day 3- some highlights

    Wed, 2017-10-04 07:01

    Day 3 began with a very smooth and interesting keynote by Thomas Kurian, full of flawless, wonderful demos. The second keynote, by Larry Ellison, happened in the afternoon; I couldn’t attend as I had a product management meeting, but I heard about some hiccups. Besides the keynotes it was a day full of good information and surprising stuff, with at the end: the Oracle Database Appliance, the X7-2 series. Another short note about Oracle OpenWorld. As said, the day began with the keynote of Thomas Kurian.

    A slide of the six journeys to the cloud – almost the key theme of the whole OpenWorld: the journey to the cloud. My special interest, by the way, as I am interested in engineered systems, is the first: optimizing your on-premises datacenter.


    A lot of ‘best of’ came along: fastest compute, fastest GPU, fastest storage, fastest network, industry-leading global DNS.

    Then the demos were given.

    Cool stuff is the right word for it, I think: chatbots, Smartfeed, Connected Intelligence, social media analysis, analytics, IoT.

    The technology behind these demos spans a dazzling number of new and existing cloud services. Too much for now, I’m afraid.

    Machine learning was the big keyword in all these demos, I think.

    Serverless with Kafka and Kubernetes Cloud Service:


    There’s a new cost estimator to calculate how many universal credits the various services will cost.


    Another announcement: Blockchain Cloud Service for secure, interconnected transactions.

    The announcement in the afternoon: Oracle Management and Security Cloud, with machine learning and Management Cloud. Larry Ellison talked about the severity of data hacks and information stealing while data centers get increasingly complicated and systems are harder to patch. “We’ve got to do something. It must be an automated process.”

    In a product management session on IaaS, several price comparisons with AWS were made at a detailed level: how cheap is Oracle compared to AWS?


    Announcement of SLAs

    In the session on the Oracle Database Appliance, of course, the X7-2 series – the summary: SE for all models, support:


    The new HA:


    KVM as a new deployment option, and the way forward. Oracle VM on the ODA is slowly disappearing.


    I know, this is just scratching the surface of all the new things that are happening…


    The post Oracle Open World; day 3- some highlights appeared first on AMIS Oracle and Java Blog.

    Oracle Open World; day 2 – highlights

    Tue, 2017-10-03 01:45


    On Monday, Oracle OpenWorld really starts: a whole lot of general sessions with announcements and strategic directions at the product/service level. In this post I’ll try to summarize the highlights of this day. Quite hard – there are a whole lot of interesting sessions which overlap; always the feeling I’m missing something. And a very interesting session at the very end of the day as a surprise.

    It started with the keynote of Mark Hurd – no big news, except maybe the revisiting of his predictions of a year ago. These still hold true:

    – By 2025 the number of corporate-owned data centers will decrease by 80%

    – 80% of production apps will be in the cloud by 2025


    Hardware: not shouting out loud, but there’s a new generation of

    – Engineered systems, the X7 line. Engineered for Oracle Cloud Infrastructure (but also for on-premises)

    – SPARC servers with the SPARC M8 processor.


    As mentioned earlier, Oracle Management Cloud is becoming more dominant as ‘single pane of glass’.


    Software: Wim Coekaerts

    – Solaris will be supported and developed for a long time!


    Oracle Kubernetes Cloud is started!

    A few slides about key announcements that I really can’t judge on their value:



    The last session was about new features of Database 18c (December 2017), quite interesting. A few enhancements:

    – Performance enhancements through low-latency memory transactions, non-volatile memory support and in-memory column store improvements. Mainly for OLTP and IoT workload improvements.

    – For Data Warehousing and Big Data there are new features such as In-Memory for external tables, machine learning algorithms and an alter table merge function.

    – Per-PDB switchover

    – Centrally managed users in Active Directory

    – Improved JSON support



    – A REST API to provide instance management and monitoring

    – Official Docker support (except RAC, is coming)

    – SQLcl got more attention

    – Gold Images as the new installation approach: as zip or tar file, Docker image or VirtualBox image. Installation through RPM.

    – In the first quarter of 2018, 18c XE is launched. Including PDBs, probably 2GB, 2 CPU, 12GB storage (compressed, net 40GB). Meant for students and proofs of concept, not for production.



    The post Oracle Open World; day 2 – highlights appeared first on AMIS Oracle and Java Blog.