Amis Blog

Subscribe to Amis Blog feed
Friends of Oracle and Java
Updated: 5 hours 43 min ago

Online Videos with Lucas Jellema–Live recording of Talks, Interviews and Stuff

Sat, 2017-11-18 11:58

An overview of some of my recent recordings:

expected soon:

November 2017 – Oracle Developer Community Podcast – What’s Hot? Tech Trends That Made a Real Difference in 2017 (with Chris Richardson, Frank Munz, Pratik Patel, Lonneke Dikmans, Bob Rhubart and Lucas Jellema) – https://blogs.oracle.com/developers/podcast-tech-trends-that-made-a-real-difference-in-2017

November 2nd – Oracle Developer Community Two Minute Tech Tip – No Excuses: Get Hands-On Experience With New Technologies – https://www.youtube.com/watch?v=NrfrWMq0m9Y

image

October 3rd – Oracle OpenWorld DevLive – Interview with Bob Rhubart (Oracle Developer Community) on Kafka Streams, Java Cloud, PaaS Integration – https://www.youtube.com/watch?v=L_mhNCT2nao

image

October, Oracle Code/JavaOne San Francisco – Real Time UI with Apache Kafka Streaming Analytics of Fast Data and Server Push – https://www.youtube.com/watch?v=izTuO3IUBBY 

image

August 23rd – APACOUC (Asia Pacific Oracle User Council) Webinar Tour 2017 –  Modern DevOps across Technologies, On Premises and Clouds – https://www.youtube.com/watch?v=q8-wvvod85U

August 14th – APACOUC (Asia Pacific Oracle User Council) Webinar Tour 2017 – The Oracle Application Container Cloud as the Microservices Platform – https://youtu.be/LkMomfG6rv4

July 7th – APACOUC (Asia Pacific Oracle User Council) Webinar Tour 2017 – The Art of Intelligence – A Practical Introduction to Machine Learning – https://youtu.be/XmqQhDsJnhY

June – Oracle Code Brussels – DevLive: Get on the (Event) Bus! with Lucas Jellema – https://www.youtube.com/watch?v=4raJRNFRJFk 

April 20th – Oracle Code London – Event Bus as Backbone for Decoupled Microservice Choreography – https://www.youtube.com/watch?v=dRd-QggXqiA

image

April 20th – Oracle Code London – DevLive: Lucas Jellema on Decoupled Microservices with Event Bus – https://www.youtube.com/watch?v=T0gZhzzu5lg

Older Resources

October 2015 – 2 Minute Tech Tip – The Evolution of Flashback in Oracle Database – https://www.youtube.com/watch?v=WOcsYtX69N8

January 2015 – Interviewing Simone Geib (Oracle SOA Suite Product Manager) – https://www.youtube.com/watch?v=MrtpAW9aOHQ 

September 2014 – 2 Minute Tech Tip – Vagrant, Puppet, Docker, and Packer – https://www.youtube.com/watch?v=36ZmfLMFPJI

October 2013 – Interview with Bob Rhubart on SOA, Agile, DevOps, and Transformation – https://www.youtube.com/watch?v=rtiwGqmzmWo

image 

March 2013 – On User Experience – with Bob Rhubart & Jeremy Ashley – https://www.youtube.com/watch?v=8Jm_cVCoQ3o


The post Online Videos with Lucas Jellema–Live recording of Talks, Interviews and Stuff appeared first on AMIS Oracle and Java Blog.

Run Oracle Database in Docker using prebaked image from Oracle Container Registry–a two minute guide

Sat, 2017-11-18 05:38

This article will show how to run an Oracle Database on a Docker host using the prebaked images on the Oracle Container Registry. It is my expectation that it will take very little manual effort to run the full 12.2.0.1 Oracle Enterprise Edition database – just pull and run the Docker image. Once it is running, I get the usual Docker benefits such as clean environment management, linking from other containers, quick stop and start, running scripts inside the container, etc.

The minimum requirements for the container are 8 GB of disk space and 2 GB of memory. There is a slim alternative that requires fewer resources: the slim (12.2.0.1-slim) version of EE does not have support for Analytics, Oracle R, Oracle Label Security, Oracle Text, Oracle Application Express and Oracle DataVault. I am not sure yet how much that shaves off the resource requirements.

My recent article Quick introduction to Oracle Container Registry–running one of Oracle’s prebaked images explained the first steps for getting started with Oracle Container Registry, including how to sign up and accept terms and conditions for individual images.

Once that is out of the way, we can get going with running the database.

The steps are:

  1. start docker
  2. login to Oracle Container Registry
  3. pull image for Oracle Database – I will use the enterprise edition database image in this article
  4. run a docker container based on that image
  5. start interacting with the database, for example from SQLcl or SQL Developer.

In terms of work, it will take less than two minutes of your time. The time before the database is actually running is mainly determined by how long it takes to download the image. After that, running the container takes just a few dozen seconds.

The Oracle Database images are published on the website for the Container Registry:

image

image

image

Copy the docker pull command in the upper right hand corner to the clipboard. It is also worth remembering the docker run command for running the database image.

Note that this webpage contains more useful information (an illustrative example follows the list below):

  • how to run SQL scripts from within the container
  • how to expose the database port outside the container
  • how to specify SID (default ORCLCDB), PDB (default is ORCLPDB1), DOMAIN (default is localdomain) and allocated MEMORY (default is 2 GB)
  • how to change SYS password (default is Oradoc_db1)
  • how to make use of a volume external to the database container for storing data files, redo logs, audit logs, alert logs and trace files
  • how to run a database server image against an existing set of database files
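
For illustration, here is roughly what a run command with explicit settings could look like. The container name ORA12201_CUSTOM is just a made-up example and the -e parameter names below are quoted from memory of that registry page, so verify them there before relying on this:

docker run -d -it --name ORA12201_CUSTOM -P -e DB_SID=ORCLCDB -e DB_PDB=ORCLPDB1 -e DB_DOMAIN=localdomain -e DB_MEMORY=2g container-registry.oracle.com/database/enterprise:12.2.0.1
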
Let’s run a database

After starting Docker (on my laptop I am using the Docker Quick Start Terminal in the Docker Toolbox), log in to the container registry using your Oracle account.

image

SNAGHTMLea0ec12

Then pull the database image, using the command

docker pull container-registry.oracle.com/database/enterprise

image

07:09 Start Pull

10:28 Start Extracting

image

10:30 Image is available, ready to run containers from it

image

The download took over three and a half hours. I was doing stuff over that time – so no time lost.

Once the pull was finished, the image was added to the local cache of Docker images. I can now run the database.

docker run -d -it --name ORA12201_1 -P container-registry.oracle.com/database/enterprise:12.2.0.1

The value ORA12201_1 is the self-picked name for the container.

image

Here -P indicates that the ports can be chosen by docker. The mapped port can be discovered by executing

docker port ORA12201_1

image

In a few minutes – I am not sure exactly how long it took – the container status is healthy:

image

The Database server can be connected to – when the container status is Healthy – by executing sqlplus from within the container as

docker exec -it ORA12201_1 bash -c "source /home/oracle/.bashrc; sqlplus /nolog"

image

In addition to connecting to the database from within the container, we can also treat the container running the database as a black box that exposes the database’s internal port 1521 at port 32769. Any tool capable of communicating with a database can then be used in the regular way – provided we also have the IP address of the Docker host if the connection is not made from that machine itself:

image 

Creating a database connection in SQL Developer is done like this:

SNAGHTMLf64a329

Use SYS/Oradoc_db1 as the credentials, the Docker host IP address as the hostname and the port mapped by Docker to port 1521 in the container – 32769 in this case. The service name is composed of the PDB name and the domain name: ORCLPDB1.localdomain.
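
Expressed as a JDBC thin connect string – convenient when a tool asks for a single URL instead of separate fields – the same connection, using the values from this example, would be:

jdbc:oracle:thin:@//192.168.99.100:32769/ORCLPDB1.localdomain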

A sample query:

image

Connecting with SQLcl is similar:

sql sys/Oradoc_db1@192.168.99.100:32769/ORCLPDB1.localdomain as sysdba

image

To stop the container – and the database:

docker stop 62eb

It takes a few seconds to stop cleanly.

image

Restarting takes about 1 minute before the database is up and running:

image

image

Note: with this basic approach, all database files are created in the container’s file system. They are not available elsewhere, nor will they survive removal of the container. A better way of handling these files is to mount a host folder for storing them or to use a Docker volume.
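
A minimal sketch of that approach – assuming (to be verified on the registry page) that the image keeps its database files under /ORCL; ORA12201_2 is just an example container name – could be:

docker volume create oradata

docker run -d -it --name ORA12201_2 -P -v oradata:/ORCL container-registry.oracle.com/database/enterprise:12.2.0.1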

Note: when running on Windows using Docker Toolbox, this link may be convenient for increasing the memory and disk size of the default VM: https://github.com/crops/docker-win-mac-docs/wiki/Windows-Instructions-(Docker-Toolbox)

The post Run Oracle Database in Docker using prebaked image from Oracle Container Registry–a two minute guide appeared first on AMIS Oracle and Java Blog.

Quick introduction to Oracle Container Registry–running one of Oracle’s prebaked images

Thu, 2017-11-16 22:47

Oracle has been active with Docker for quite some time now. It started with the first hesitant steps by some enthusiastic employees, with Docker build files that helped people get going with Oracle software in their experimental containers, followed by a GitHub repo with a broad set of extensive Docker build files to create Docker containers with various Oracle products that are supported (https://github.com/oracle/docker-images). And of course the Oracle Container Cloud – launched in the spring of 2017 – that will run custom Docker images. Now recently the next step: the availability of the Oracle Container Registry – Oracle’s own Docker container registry that offers a range of ready-built container images with Oracle software. Using these images, all you need to run an Oracle platform component on your local Docker host or Kubernetes cluster is a docker pull from this registry followed by a docker run.

In this article I will give a quick example of how to work with the Oracle Container Registry. It can be found at: https://container-registry.oracle.com.

The steps to go through:

1. Register as a user for the Oracle Container Registry (one time only, an Oracle account is required)

2. Explore the registry, locate the desired image(s) and Agree to and accept the Oracle Standard Terms and Restrictions for the image(s) that you want to make use of

3. Do a docker login to connect to the Oracle Container Registry

4. Pull the image(s) that you need

5. Run the image(s)

Open the link for the Container Registry:

image

Click on Register. Sign on with an existing Oracle account or start the flow for creating such an account.

image

Provide the account’s credentials. Then click on Create New User.

SNAGHTML9181a91

A confirmation email is sent:

image

And now the opening page lists the areas in which images are currently provided:

image

You can explore what images are available, for example for the database:

image

And for Java:

image

Before you can download any image, you need to accept the terms for that specific image – a manual step in the user interface of the container registry:

image

image

image

After pressing Accept, this image is now available to be pulled from docker.

image

Run Docker container based on Oracle’s Java Runtime Image

I will focus now on the Java Run Time image – one of the smaller images on the registry – to demonstrate the steps for running it in my local Docker host.

Accept the terms:

image

Click on the name of image to get the details and the docker pull command required for this image:

image

Check out the tags:

image

We will go for the latest.

From the Docker host, first do a login, using your Oracle account credentials:

docker login -u username -p password container-registry.oracle.com

image

Then use docker pull, using the command provided on the image page:

docker pull container-registry.oracle.com/java/serverjre

The image is downloaded and stored locally in the image cache.

image

image

When the download is complete the image (not small mind you, at 377 MB) is available to be used for running container instances, in the regular Docker way. For example:

docker run -it container-registry.oracle.com/java/serverjre

image

Et voilà: the container is running locally, based on a prebuilt image. No local build steps were required, no downloading of software packages, no special configuration to apply. The Java runtime is a fairly straightforward case; when running the Oracle Docker image for the Enterprise database or the Fusion Middleware infrastructure, the gain from using the prebuilt image from the Oracle Container Registry is even bigger.

If you want to free up local space, you can of course remove the Oracle Docker image. After all, it is easy to pull it again from the registry.

image

The post Quick introduction to Oracle Container Registry–running one of Oracle’s prebaked images appeared first on AMIS Oracle and Java Blog.

First steps with Istio on Kubernetes on Minikube on Windows 10

Wed, 2017-10-25 07:53

In this article, I discuss my steps to get going with the Istio service mesh on Kubernetes running on Minikube on Windows 10. Unfortunately, I have run into an issue with Istio. This article describes the steps leading up to that issue. I will continue with the article once the issue is resolved. For now, it is a dead-end street.

Note: my preparations – install Minikube and Kubernetes on Windows 10 are described in this previous article: https://technology.amis.nl/2017/10/24/installing-minikube-and-kubernetes-on-windows-10/.

Clone Git repository with samples

git clone https://github.com/istio/istio

Start minikube

set MINIKUBE_HOME=C:\Users\lucas_j\.minikube

minikube start

image

Run Bookinfo

cd c:\data\bookinfo\istio\samples\bookinfo\kube

kubectl apply -f bookinfo.yaml

image

Show productpage. First find port on which product page is exposed:

image

productpage is a service of type ClusterIP, which is only available inside the cluster – which is not good for me.

So to expose the service to outside the cluster:

kubectl edit svc productpage

and in the editor that pops up, change the type from ClusterIP to NodePort:

image

After changing the type and saving the change, get services indicates the port on which the productpage service is now exposed:

image

So now we can go to URL:  http://192.168.99.100:9080/productpage

image
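
As an aside: instead of editing the service interactively, the same type change can be applied non-interactively with kubectl patch (the quoting may need tweaking on the Windows command prompt):

kubectl patch svc productpage -p '{"spec":{"type":"NodePort"}}'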

Installing Istio into the Kubernetes Cluster

Now that we’ve seen the app, we’ll adjust our deployment slightly to make it work with Istio. We first need to install Istio in our cluster. To see all of the metrics and tracing features in action, we also install the optional Prometheus, Grafana, and Zipkin addons.

First, download Istio for Windows from https://github.com/istio/istio/releases and extract the contents of the zip file.

image

Add the directory that contains the client binary istioctl.exe to the PATH variable.

image

Open a new command line window. Navigate to the installation location of Istio.

To install Istio to the minikube Kubernetes cluster:

kubectl apply -f install/kubernetes/istio.yaml

SNAGHTML3324bb4

ending with:

image

To verify the success of the installation:

kubectl get svc -n istio-system

image

On Minikube – which does not support services of type LoadBalancer – the external IP for the istio-ingress will stay pending. You must access the application using the service NodePort, or use port-forwarding instead.

Check on the pods:

kubectl get pods -n istio-system

image

On the dashboard, when I switch to the istio-system namespace, I can see more details:

image

When I try to run istioctl commands, I run into issues:

istioctl version

image

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x30 pc=0x121513f]

goroutine 1 [running]:
main.getDefaultNamespace(0x14b878a, 0xd, 0x0, 0x0)

I am not sure yet what is the underlying cause and if there is a solution. The issue report https://github.com/istio/pilot/issues/1336 seems related – perhaps.

I do not know where to get more detailed logging about what happens prior to the exception.

Install Book Info Application and inject Istio

The next command I tried was:

kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)

this one fails with: the system cannot find the file specified

image
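
The <(…) construct is bash process substitution, which the Windows command prompt does not understand – presumably the cause of this error. A workaround that avoids it is to split the command in two steps and write the injected YAML to a file first (bookinfo-istio.yaml is just an arbitrary file name):

istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml > bookinfo-istio.yaml

kubectl apply -f bookinfo-istio.yaml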

Resources

Git Repository for Istio – with samples: https://github.com/istio/istio 

Guide for Istio introduction – Managing microservices with the Istio service mesh – http://blog.kubernetes.io/2017/05/managing-microservices-with-istio-service-mesh.html?m=1 

Installation of Istio into Kubernetes Cluster – https://istio.io/docs/setup/kubernetes/index.html

Tutorial  Istio is not just for microservices  – https://developer.ibm.com/recipes/tutorials/istio-is-not-just-for-microservices

Istio: Traffic Management for your Microservices – https://github.com/IBM/microservices-traffic-management-using-istio/blob/master/README.md

Istio Guide – getting started with sample application Bookinfo – https://istio.io/docs/guides/bookinfo.html

The post First steps with Istio on Kubernetes on Minikube on Windows 10 appeared first on AMIS Oracle and Java Blog.

Installing Minikube and Kubernetes on Windows 10

Tue, 2017-10-24 00:25

Quick notes on the installation of Minikube for trying out Kubernetes on my Windows 10 laptop (using VirtualBox – not Hyper-V).

Following instructions in https://www.ibm.com/support/knowledgecenter/en/SS5PWC/minikube.html 

Download Windows installer for MiniKube:

https://github.com/kubernetes/minikube/releases

Run installer

After running the installer, open a command line window

image

Download kubectl.exe

curl -o kubectl.exe https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/windows/amd64/kubectl.exe

Copy downloaded file to a proper location – of your choosing – and add that location to the PATH environment variable.

Open a new command line window, set MINIKUBE_HOME

set MINIKUBE_HOME=C:\Users\lucas_j\.minikube

and run

minikube start

to start Minikube.

image

The VM image in which the Kubernetes cluster will be created and run is downloaded. This image is 139 MB, so this first startup takes a while – but of course only the first time.

image

The directory .minikube is created:

image

And in VirtualBox you will find a new VM set up and running:

SNAGHTML3ce136f1

Run

minikube dashboard

and the browser will open:

image

with an overview from within the VM of the Kubernetes Cluster.

With

minikube stop

you can halt the cluster – later to be started again using minikube start

image

A restart now only takes 10-15 seconds:

image

Using the instructions here – https://github.com/kubernetes/kubernetes/blob/master/examples/simple-nginx.md – I can quickly run a Docker Image on my minikube cluster:

kubectl run my-nginx --image=nginx --port=80

This will create two nginx pods listening on port 80. It will also create a deployment named my-nginx to ensure that there are always two pods running.

image

In the dashboard, this same information is available:

image

kubectl expose deployment my-nginx --type="NodePort"

is used to expose the deployment – make it accessible from outside the cluster.
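
As an alternative to looking up the NodePort manually, minikube can report the URL of an exposed service directly:

minikube service my-nginx --url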

Using

kubectl get services

I get a list of services and the local IP address and port on which they are exposed.

image

I can get the same information on the dashboard:

image

The IP address where the VirtualBox VM can be accessed is 192.168.99.100 – as can be seen for example from the URL where the dashboard application is accessed:

image

The nginx service can now be accessed at 192.168.99.100:32178:

image

And in the browser:

image


The post Installing Minikube and Kubernetes on Windows 10 appeared first on AMIS Oracle and Java Blog.

Rapid first few steps with Fn – open source project for serverless functions

Thu, 2017-10-19 00:49

Project Fn is an open source project that provides a container native, poly-language, cloud agnostic (aka run on any cloud) serverless platform for running functions. Fn was launched during Oracle OpenWorld 2017. Fn is available on GitHub (https://github.com/fnproject/fn ) and provides all resources required to get started. In this article, I will just show you (and myself) how I went through the quick start steps and what it looked like on my laptop (Windows 10 with Vagrant and VirtualBox).

I simply get Fn up and running, create a first function that I then deploy and access through HTTP. I briefly show the APIs available on the Fn server and Fn UI application.

Steps:

  1. Create VirtualBox VM with Debian and Docker (for me, Ubuntu 14 failed to run Fn; I created issue 437 for that) – this step is described in a different article
  2. Install Fn command line
  3. Install and run Fn server in the VM, as Docker container
  4. Create function hello
  5. Initialize new function and run it
  6. Deploy the new function (in its own Docker Container running inside the container running Fn server)
  7. Invoke the new function over http from the Host laptop
  8. Run the Fn UI application
  9. Inspect the Fn Server REST APIs

Connect into the Debian Virtual Machine – for me with vagrant ssh.

Install Fn Command Line

To install the Fn command line, I used this command:

curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh

image 

Install and run Fn server in the VM, as Docker container

To run the Fn server, after installing the CLI, I just used

fn start

image

Fn Server is running.

Create function hello

As per the instructions in the quick start guide, I created a new directory hello with a text file hello.go:

SNAGHTML2859e5fa

Note: I created these on the host laptop inside the directory that is mapped into the VM under /vagrant. So I can access the file inside the VM in /vagrant/hello.

Initialize new function and run it

image

and after a little while

image

Deploy the new function

(in its own Docker Container running inside the container running Fn server)

image

image

Run function inside Debian VM:

image

Invoke the new function over http from the Host laptop

image

The IP address 192.168.188.102 was assigned during the provisioning of the VM with Vagrant.

Run the Fn UI application

A UI application to inspect all Fn applications and functions can be installed and ran:

image

image

And accessed from the host laptop:

image

Note: for me it did not show the details for my new hello function.

Inspect the Fn Server REST APIs

The Fn platform publishes REST APIs that can be used to programmatically inspect applications and functions and also to manipulate them.

image

Some examples:

image

and

image
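
In plain curl terms – from within the VM, against the Fn server’s default port 8080 – listing the applications and the routes of one application looks something like the following. The /v1 paths are as I recall them from the swagger definition linked under Resources and the application name hello is just an assumption, so check the API documentation:

curl http://localhost:8080/v1/apps

curl http://localhost:8080/v1/apps/hello/routes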

    Summary

    Getting started with Fn is pretty smooth. I got started and wrote this article in under an hour and a half. I am looking forward to doing much more with Fn – especially tying functions together using Fn Flow.

    Resources

    Fn project home page: http://fnproject.io/

    Article to quickly provision VirtualBox Image with Debian and Docker: https://technology.amis.nl/2017/10/19/create-debian-vm-with-docker-host-using-vagrant-automatically-include-guest-additions/

    Fn quick start guide: https://github.com/fnproject/fn 

    Fn UI on GitHub: https://github.com/fnproject/ui 

    Fn API: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/fnproject/fn/master/docs/swagger.yml

    The post Rapid first few steps with Fn – open source project for serverless functions appeared first on AMIS Oracle and Java Blog.

    Create Debian VM with Docker Host using Vagrant–automatically include Guest Additions

    Thu, 2017-10-19 00:02

    A short and simple article. I needed a Debian VM that I could use as Docker host – to run on my Windows 10 laptop. I resorted to Vagrant. With a few very simple steps, I got what I wanted:

    0. install Vagrant (if not already done)

    0. install Vagrant plugin for automatically adding Virtual Box Guest Additions to every VM stamped out by Vagrant (so folder mapping from host laptop to VM is supported)

    image

    1. create a fresh directory with a simple Vagrantfile that refers to the Debian image

    2. run vagrant up

    3. sit back and relax (few minutes)

    4. use vagrant ssh to connect into the running VM and start doing stuff.

    The vagrant file:

    Vagrant.configure("2") do |config|

    config.vm.provision "docker"

    config.vm.define "debiandockerhostvm"
    # https://app.vagrantup.com/debian/boxes/jessie64
    config.vm.box = "debian/jessie64"
    config.vm.network "private_network", ip: "192.168.188.102"

    config.vm.synced_folder "./", "/vagrant", id: "vagrant-root",
           owner: "vagrant",
           group: "www-data",
           mount_options: ["dmode=775,fmode=664"],
           type: ""

    config.vm.provider :virtualbox do |vb|
       vb.name = "debiandockerhostvm"
       vb.memory = 4096
       vb.cpus = 2
       vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
       vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
    end

    end

    Running Vagrant to create and subsequently run the VM:

    image

    image

    Use vagrant ssh to enter the Virtual Machine and start mucking around:

    image

    Resources

    Vagrant Plugin for automatically installing Guest Addition to each VM that is produced: https://github.com/dotless-de/vagrant-vbguest/

    Vagrant Box Jessie: https://app.vagrantup.com/debian/boxes/jessie64

    The post Create Debian VM with Docker Host using Vagrant–automatically include Guest Additions appeared first on AMIS Oracle and Java Blog.

    Quickly create a Virtualbox development VM with XE DB using Kickstart, Packer, Vagrant

    Tue, 2017-10-17 09:53

    The topic of quickly creating an Oracle development VM is not new. Several years ago Edwin Biemond and Lucas Jellema wrote several blogs about this and gave presentations about these topics at various conferences. You can also download ready-made VirtualBox images from Oracle here and specifically for SOA Suite here.

    Over the years I have created a lot (probably 100+) of virtual machines manually. For SOA Suite, the process of installing the OS, installing the database, installing WebLogic Server and installing SOA Suite itself can be quite time consuming and boring if you have already done it so many times. Finally my irritation passed the threshold at which I needed to automate it! I wanted to easily recreate a clean environment with a new version of specific software. This blog is a start: provisioning an OS and installing the XE database on it. It might seem a lot, but this blog contains the knowledge of two days’ work. This indicates it is easy to get started.

    I decided to start from scratch and first create a base Vagrant box using Packer which uses Kickstart. Kickstart is used to configure the OS of the VM such as disk partitioning scheme, root password and initial packages. Packer makes using Kickstart easy and allows easy creation of a Vagrant base box. After the base Vagrant box was created, I can use Vagrant to create the Virtualbox machine, configure it and do additional provisioning such as in this case installing the Oracle XE database.

    Getting started

    First install Vagrant from HashiCorp (here).

    If you just want a quick VM with Oracle XE database installed, you can skip the Packer part. If you want to have the option to create everything from scratch, you can first create your own a base image with Packer and use it locally or use the Vagrant cloud to share the base box.

    Every Vagrant development environment requires a base box. You can search for pre-made boxes at https://vagrantcloud.com/search.

    Oracle provides Vagrant boxes you can use here. Those boxes have some default settings. I wanted to know how to create my own box to start with in case I for example wanted to use an OS not provided by Oracle. I was presented with three options in the Vagrant documentation. Using Packer was presented as the most reusable option.

    Packer

    ‘Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration.’ (from here) Download Packer from HashiCorp (here).

    Avast Antivirus, and maybe other antivirus programs, do not like Packer, so you might have to temporarily disable them or tell them Packer can be trusted.

    virtualbox-iso builder

    Packer can be used to build Vagrant boxes (here) but also boxes for other platforms such as Amazon and VirtualBox. See here. For VirtualBox there are two so-called builders available: start from scratch by installing the OS from an ISO file, or start from an OVF/OVA file (a pre-built VM). Here of course I chose the ISO file, since I want to be able to easily update the OS of my VM and do not want to create a new OVF/OVA file for every new OS version. Thus I decided to use the virtualbox-iso builder.

    Iso

    For my ISO file I decided to go with Oracle Linux Release 7 Update 4 for x86 (64 bit), which is currently the most recent version. In order for Packer to work fully autonomously (and make it easy for the developer), you can provide a remote URL to a file you want to download. For Oracle Linux there are several mirrors available which provide that; look one up close to you here. You have to update the checksum in the template file (see below) when you update the ISO image if you want to run on a new OS version.

    template JSON file

    In order to use Packer with the virtualbox-iso builder, you first require a template file in JSON format. Luckily samples for these have already been made available here. You should check them though. I made my own version here.

    Kickstart

    In order to make the automatic installation of Oracle Linux work, you need a Kickstart file. Such a file is generated automatically at /root/anaconda-ks.cfg when performing an installation. Read here. I’ve made my own here in order to have the correct users, passwords, installed packages and swap partition size.

    After you have a working Kickstart file and the Packer ol74.json, you can kick off the build with:

    packer build ol74.json

    Packer uses a specified username to connect to the VM (present in the template file). This should be a user which is created in the Kickstart script. For example if you have a user root with password Welcome01 in the kickstart file, you can use that one to connect to the VM. Creating the base box will take a while since it will do a complete OS installation and first download the ISO file.

    You can put the box remote or keep it local.

    Put the box remote

    After you have created the box, you can upload it to the Vagrant Cloud so other people can use it. The Vagrant Cloud free option offers unlimited free public boxes (here). The process of uploading a base box to the Vagrant cloud is described here. You first create a box and then upload the file Packer has created as provider.

    After you’re done, the result will be a Vagrant box which can be used as base image in the Vagrantfile. This looks like:

    Use the box locally

    Alternatively you can use the box you’ve created locally:
    vagrant box add ol74 file:///d:/vagrant/packer/virtualbox/ol74.box

    You of course have to change the box location to be specific to your environment

    And use ol74 as box name in your Vagrantfile. You can see an example of a local and remote box here.

    If you have recreated your box and want to use the new version in Vagrant to create a new Virtualbox VM:

    vagrant box remove ol74
    vagrant box add ol74 file:///d:/vagrant/packer/virtualbox/ol74.box

    Vagrant

    You now have a (relatively clean) base OS – I added a GUI – and you want to install stuff in it. Vagrant can help you do that. I’ve used a simple shell script to do the provisioning (see here) but you can also use more complex pieces of software like Chef or Puppet. These are in the long run better suited to also updating and managing machines. Since this is just a local development machine, I decided to keep it simple.

    I’ve prepared the following Vagrant file.

    This expects to find a structure like:

    provision.sh
    Vagrantfile
    Directory: software
    –oracle-xe-11.2.0-1.0.x86_64.rpm.zip
    –xe.rsp

    These can be downloaded here, except the oracle-xe-11.2.0-1.0.x86_64.rpm.zip file, which can be downloaded here.

    Oracle XE comes with an rsp file (a so-called response file) which makes automating the installation easy. This is described here. You just have to fill in some variables like password and port and such. I’ve prepared such a file here.
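
    From memory – the exact parameter names should be checked against the xe.rsp file you downloaded – the handful of values to set looks roughly like this (Welcome01 is just a placeholder password), after which the silent configuration can be run with something like /etc/init.d/oracle-xe configure responseFile=xe.rsp:

    ORACLE_HTTP_PORT=8080
    ORACLE_LISTENER_PORT=1521
    ORACLE_PASSWORD=Welcome01
    ORACLE_CONFIRM_PASSWORD=Welcome01
    ORACLE_DBENABLE=true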

    After everything is setup, you can do:

    vagrant up soadb

    And it will create the soadb VM for you in Virtualbox

    The post Quickly create a Virtualbox development VM with XE DB using Kickstart, Packer, Vagrant appeared first on AMIS Oracle and Java Blog.

    JSON manipulation in Java 9 JShell

    Thu, 2017-10-12 09:38

    In this article I will demonstrate how we can work with JSON based data – for analysis, exploration, cleansing and processing – in JShell, much like we do in Python. I work with a JSON document with entries for all sessions at the Oracle OpenWorld 2017 conference (https://raw.githubusercontent.com/lucasjellema/scrape-oow17/master/oow2017-sessions-catalog.json)

    The Java 9 SE specification for the JDK does not contain the JSON-P API and libraries for processing JSON. In order to work with JSON-P in JShell, we need to add the libraries – that we first need to find and download.

    I have used a somewhat roundabout way to get hold of the required jar-files (but it works in a pretty straightforward manner):

    1. Create a pom.xml file with dependencies on JSON-P

    image

     

    image
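
    In plain text, the dependencies element of that pom.xml contains roughly the following – coordinates that, as far as I know, resolve to the two jar files used further below:

    <dependencies>
      <dependency>
        <groupId>org.glassfish</groupId>
        <artifactId>javax.json</artifactId>
        <version>1.1</version>
      </dependency>
      <dependency>
        <groupId>javax.json</groupId>
        <artifactId>javax.json-api</artifactId>
        <version>1.1</version>
      </dependency>
    </dependencies>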

     

    2. Then run

    mvn install dependency:copy-dependencies

    as described in this article: https://technology.amis.nl/2017/02/09/download-all-directly-and-indirectly-required-jar-files-using-maven-install-dependencycopy-dependencies/

    this will download the relevant JAR files to subdirectory target/dependencies

    image

    3. Copy JAR files to a directory – that can be accessed from within the Docker container that runs JShell – for me that is the local lib directory that is mapped by Vagrant and Docker to /var/www/lib inside the Docker container that runs JShell.

     

    4. In the container that runs JShell:

    Start JShell with this statement that makes the new httpclient module available, for when the JSON document is retrieved from an HTTP URL resource:

    jshell --add-modules jdk.incubator.httpclient

     

    5. Update classpath from within jshell

    To process JSON in JShell – using JSON-P – we need to set the classpath to include the two jar files that were downloaded using Maven.

    /env --class-path /var/www/lib/javax.json-1.1.jar:/var/www/lib/javax.json-api-1.1.jar

    Then the classes in JSON-P are imported

    import javax.json.*;

    if we need to retrieve JSON data from a URL resource, we should also

    import jdk.incubator.http.*;

     

    6. I have made the JSON document available on the file system.

    image

    It can be accessed as follows:

    InputStream input = new FileInputStream("/var/www/oow2017-sessions-catalog.json");

     

    7. Parse data from file into JSON Document, get the root object and retrieve the array of sessions:

    JsonReader jsonReader = Json.createReader(input)

    JsonObject rootJSON = jsonReader.readObject();

    JsonArray sessions = rootJSON.getJsonArray("sessions");

     

    8. Filter sessions with the term SQL in the title and print their title to the System output – using Streams:

    sessions.stream().map( p -> (JsonObject)p).filter(s -> s.getString("title").contains("SQL")).forEach( s -> {System.out.println(s.getString("title"));})

    image

     

    One other example: show a list of all presentations for which a slidedeck has been made available for download along with the download URL:

    sessions.stream()
    .map( p -> (JsonObject)p)
    .filter(s -> s.containsKey("files") && !s.isNull("files") && !(s.getJsonArray("files").isEmpty()))
    .forEach( s -> {System.out.println(s.getString("title")+" url:"+s.getJsonArray("files").getJsonObject(0).getString("url"));})

     

    Bonus: Do HTTP Request

    As an aside some steps in jshell to execute an HTTP request:

    jshell> HttpClient client = HttpClient.newHttpClient();
    client ==> jdk.incubator.http.HttpClientImpl@4d339552

    jshell> HttpRequest request = HttpRequest.newBuilder(URI.create("http://www.google.com")).GET().build();
    request ==> http://www.google.com GET

    jshell> HttpResponse response = client.send(request, HttpResponse.BodyHandler.asString())
    response ==> jdk.incubator.http.HttpResponseImpl@147ed70f

    jshell> System.out.println(response.body())
    <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
    <TITLE>302 Moved</TITLE></HEAD><BODY>
    <H1>302 Moved</H1>
    The document has moved
    <A HREF="http://www.google.nl/?gfe_rd=cr&amp;dcr=0&amp;ei=S2XeWcbPFpah4gTH6Lb4Ag">here</A>.
    </BODY></HTML>

     

    image

    The post JSON manipulation in Java 9 JShell appeared first on AMIS Oracle and Java Blog.

    Java 9 – First baby steps with Modules and jlink

    Wed, 2017-10-11 12:00

    In a recent article, I created an isolated Docker container as a Java 9 R&D environment: https://technology.amis.nl/2017/10/11/quick-and-clean-start-with-java-9-running-docker-container-in-virtualbox-vm-on-windows-10-courtesy-of-vagrant/. In this article, I will use that environment to take a few small steps with Java 9 – in particular with modules. Note: this story does not end well. I wanted to conclude with using jlink to create a standalone runtime that contained both the required JDK modules and my own module – and demonstrate how small that runtime was. Unfortunately, the link step failed for me. More news on that in a later article.

    Create Custom Module

    Start a container based on the openjdk:9 image, exposing its port 80 on the docker host machine and mapping folder /vagrant (mapped from my Windows host to the Docker Host VirtualBox Ubuntu image) to /var/www inside the container:

    docker run -it -p 127.0.0.1:8080:80 -v /vagrant:/var/www openjdk:9 /bin/sh

    Create a Java application with a custom module: I create a single module (nl.amis.j9demo) and a single class nl.amis.j9demo.MyDemo. The module depends directly on one JDK module (jdk.httpserver) and indirectly on several more.

    image

    The root directory for the module has the same fully qualified name as the module: nl.amis.j9demo.

    This directory contains the module-info.java file. This file specifies:

    • which modules this module depends on
    • which packages it exports (for other modules to create dependencies on)

    In my example, the file is very simple – only specifying a dependency on jdk.httpserver:

    image
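
    In plain text, that module-info.java is no more than:

    module nl.amis.j9demo {
      requires jdk.httpserver;
    }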

    The Java class MyDemo has a number of imports. Many are for base classes from the java.base module. Note: every Java module has an implicit dependency on java.base, so we do not need to include it in the module-info.java file.

    image

    This code creates an instance of HttpServer – an object that listens for HTTP requests at the specified port (80 in this case) and then always returns the same response (the string "This is the response"). As meaningless as that is, the notion of receiving and replying to HTTP requests in just a few lines of Java code (running on the OpenJDK!) is quite powerful.

    package nl.amis.j9demo;
    import java.io.*;
    import java.net.*;
    import java.util.*;
    import java.util.concurrent.*;
    import java.util.stream.*;
    import com.sun.net.httpserver.*;
    
    import static java.lang.System.out;
    import static java.net.HttpURLConnection.*;
    
    public class MyDemo{
      private static final int DEFAULT_PORT = 80;
      private static URI ROOT_PATH = URI.create("/"); 
               
    
    private static class MyHandler implements HttpHandler {
           public void handle(HttpExchange t) throws IOException {
               URI tu = t.getRequestURI();
               InputStream is = t.getRequestBody();
               // .. read the request body
               String response = "This is the response";
               t.sendResponseHeaders(200, response.length());
               OutputStream os = t.getResponseBody();
               os.write(response.getBytes());
               os.close();
           }
       }
    
    
      public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(DEFAULT_PORT), 0);
        server.createContext("/apps", new MyHandler()); // register the handler for requests to /apps (no trailing space in the context path)
        server.setExecutor(null); // creates a default executor
        server.start();
        out.println("HttpServer is started, listening at port "+DEFAULT_PORT);
      }
    
    }
    
    

    Compile, Build and Run

    Compile the custom module:

    javac -d mods --module-source-path src -m nl.amis.j9demo

    image

    Create destination directory for JAR file

    mkdir -p lib

    Create the JAR for the module:

    jar --create --file lib/nl-amis-j9demo.jar --main-class nl.amis.j9demo.MyDemo -C mods/nl.amis.j9demo .

    image

    Inspect the JAR file:

    jar tvf lib/nl-amis-j9demo.jar

    image

    To run the Java application- with a reference to the module:

    java -p lib/ -m nl.amis.j9demo

    image

    the traditional equivalent with a classpath for the JAR file(s) would be:

    java -classpath lib/nl-amis-j9demo.jar nl.amis.j9demo.MyDemo

    Because port 80 in the container was exposed and mapped to port 8080 on the Docker Host, we can access the Java application from the Docker Host, using wget:

    wget 127.0.0.1:8080/apps

    image

    The response from the Java application is hardly meaningful. However, the fact that we get a response at all is quite something: the ‘remote’ container based on openjdk:9 has published an HTTP server from our custom module that we can access from the Docker host with a simple HTTP request.

    Jlink

    I tried to use jlink – to create a special runtime for my demo app, consisting of required parts of JDK and my own module. I expect this runtime to be really small.

    The JDK modules, by the way, are in /docker-java-home/jmods on my Docker container.

    image

    The command for this:

    jlink --output mydemo-runtime --module-path lib:/docker-java-home/jmods --limit-modules nl.amis.j9demo --add-modules nl.amis.j9demo --launcher demorun=nl.amis.j9demo --compress=2 --no-header-files --strip-debug

    Unfortunately, on my OpenJDK:9 Docker Image, linking failed with this error:

    image

    Error: java.io.UncheckedIOException: java.nio.file.FileSystemException: mydemo-runtime/legal/jdk.httpserver/ASSEMBLY_EXCEPTION: Protocol error

    Resources

    Documentation for jlink – https://docs.oracle.com/javase/9/tools/jlink.htm

    JavaDoc for HttpServer package – https://docs.oracle.com/javase/9/docs/api/com/sun/net/httpserver/package-summary.html#

    Java9 Modularity Part 1 (article on Medium by Chandrakala) – https://medium.com/@chandra25ms/java9-modularity-part1-a102d85e9676

    JavaOne 2017 Keynote – Mark Reinhold demoing jlink – https://youtu.be/UNg9lmk60sg?t=1h35m43s

    Exploring Java 9 Modularity – https://www.polidea.com/blog/Exploring-Java-9-Java-Platform-Module-System/

    The post Java 9 – First baby steps with Modules and jlink appeared first on AMIS Oracle and Java Blog.

    Quick and clean start with Java 9–running Docker container in VirtualBox VM on Windows 10 courtesy of Vagrant

    Wed, 2017-10-11 08:25

    The messages from JavaOne 2017 were loud and clear. Some of these:

    • Java 9 is here,
    • the OpenJDK has all previously exclusive commercial features from the Oracle (fka SUN) JDK – this includes the Java Flight Recorder for real time monitoring/metrics gathering and analysis,
    • Java 9 will be succeeded by Java 18.3, 18.9 and so on (a six month cadence) with much quicker evolution with continued quality and stability
    • JigSaw is finally here; it powers the coming evolution of Java and the platform and it allows us to create fine-tuned, tailor-made Java runtime environments that may take less than 10-20% of the full-blown JRE
    • Java 9 has many cool and valuable features besides the Modularity of JigSaw – features that make programming easier, more elegant more fun more lightweight etc.
    • One of the objectives is “Java First, Java Always” (instead of: when web companies mature, then they switch to Java) (having Java enabled for cloud, microservice and serverless is an important step in this)

      Note: during the JavaOne Keynote, Spotify presented a great example of this pattern: they have a microservices architecture (from before it was called microservice); most were originally created in Python, with the exception of the search capability; due to scalability challenges, all Python based microservices have been migrated to Java over the years. The original search service is still around. Java not only scales very well and has the largest pool of developers to draw from, it also provides great run time insight into what is going on in the JVM

    I have played around a little with Java 9, but now that it is out in the open (and I have started working on a fresh new laptop with Windows 10) I thought I should give it another try. In this article I will describe the steps I took from a non-Java-enabled Windows environment to playing with Java 9 in jshell – in an isolated container, created and started without any programming, installation or configuration. I used Vagrant and VirtualBox – both were installed on my laptop prior to the exercise described in this article. Vagrant in turn used Docker and downloaded the OpenJDK Docker image for Java 9 on top of Alpine Linux. All of that was hidden from view.

    The steps:

    0. Preparation – install VirtualBox and Vagrant

    1. Create Vagrant file – configured to provide a VirtualBox image (based on Ubuntu Linux) and provision the Docker host on that image as well as a Docker Container with OpenJDK 9

    2. Run Vagrant for that Vagrant file to have it spin up the VirtualBox, install Docker into it, pull the OpenJDK image and run the container

    3. Connect into VirtualBox Docker Host and Docker Container

    4. Run jshell command line and try out some Java 9 statements

    In more detail:

    1. Create Vagrant file

    In a new directory, create a file called Vagrantfile – no extension. The file has the following content:

    It is configured to provide a VirtualBox image (based on Ubuntu Linux) and provision the Docker host on that VB image as well as a Docker Container based on the OpenJDK:9 image.

    image

    Vagrant.configure("2") do |config|
     
    config.vm.provision "docker" do |d|
        d.run "j9",
          image: "openjdk:9",
          cmd: "/bin/sh",
          args: "-v '/vagrant:/var/www'"
        d.remains_running = true  
      end
     
    # The following line terminates all ssh connections. Therefore Vagrant will be forced to reconnect.
    # That's a workaround to have the docker command in the PATH
    # Command: "docker" "ps" "-a" "-q" "--no-trunc"
    # without it, I run into this error:
    # Stderr: Get http:///var/run/docker.sock/v1.19/containers/json?all=1: dial unix /var/run/docker.sock: permission denied.
    # Are you trying to connect to a TLS-enabled daemon without TLS?
     
    config.vm.provision "shell", inline:
    "ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill"
     
    config.vm.define "dockerhostvm"
    config.vm.box = "ubuntu/trusty64"
    config.vm.network "private_network", ip: "192.168.188.102"
     
    config.vm.provider :virtualbox do |vb|
      vb.name = "dockerhostvm"
      vb.memory = 4096
      vb.cpus = 2
      vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
    end
     
    end
    
    # to get into running container: 
    # vagrant ssh
    # docker run -it  -v /vagrant:/var/www openjdk:9 /bin/sh
    
    2. Run Vagrant for that Vagrant file

    And have it spin up the VirtualBox, install Docker into it, pull the OpenJDK image and run the container:

    image

    3. Connect into VirtualBox Docker Host and Docker Container

    Using

    vagrant ssh

    to connect into the VirtualBox Ubuntu Host and

    docker run -it openjdk:9 /bin/sh

    to run a container and connect into the shell command line, we get to the environment primed for running Java 9:

    image

    At this point, I should also be able to use docker exec to get into the container that was started by the Vagrant Docker provisioning configuration. However, I had some unresolved issues with that – the container kept restarting. I will attempt to resolve that issue.

    4. Run jshell command line and try out some Java 9 statements

    JShell is the new Java command line tool that allows REPL style exploration – somewhat similar to for example Python and JavaScript (and even SQL*Plus).

    Here is an example of some JShell interaction:

    image

    I tried to use the new simple syntax for creating collections from static data. Here I got the syntax right:

    image
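
    For example – the Java 9 convenience factory methods for collections, typed straight into jshell (the values are just made up):

    jshell> List<String> conferences = List.of("JavaOne", "Oracle OpenWorld", "Devoxx")

    jshell> Map<String,Integer> ports = Map.of("http", 80, "https", 443)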

    It took me a little time to find out the exit strategy. Turns out that /exit does that trick:

    image

    In summary: spinning up a clean, isolated environment in which to try out Java is not hard at all. On Linux – with Docker running natively – it is even simpler, although even then using Vagrant may be beneficial. On Windows it is also quite straightforward – no complex sys admin stuff required and hardly any command line things either. And that is something we developers should start to master – if we do not do so already.

    Issue with Docker Provider in Vagrant

    Note: I did not succeed in using the Docker provider (instead of the provisioner) with Vagrant. Attempting that (cleaner) approach failed with “Bringing machine ‘j9’ up with ‘docker’ provider… The executable ‘docker’ Vagrant is trying to run was not found in the %PATH% variable. This is an error. Please verify this software is installed and on the path.” I have looked across the internet and found similar reports, but did not find a solution that worked for me.

    image

    The provider is documented here: https://www.vagrantup.com/docs/docker/

    The Vagrantfile I tried to use originally – but was unable to get to work:

    image

    (based on my own previous article: https://technology.amis.nl/2015/08/22/first-steps-with-provisioning-of-docker-containers-using-vagrant-as-provider/)

    The post Quick and clean start with Java 9–running Docker container in VirtualBox VM on Windows 10 courtesy of Vagrant appeared first on AMIS Oracle and Java Blog.

    ODC Appreciation Day : Timeline component in Oracle JET, Data Visualization Cloud, APEX and ADF DVT: #ThanksODC

    Tue, 2017-10-10 13:40

    Here is my entry for the Oracle Developer Community ODC Appreciation Day (#ThanksODC).

    It is quite hard to make a choice for a feature to write about. So many to talk about. And almost every day another favorite of the month. Sliding time windows. The Oracle Developer Community – well, that is us. All developers working with Oracle technology, sharing experiences and ideas, helping each other with inspiration and solutions to challenges, making each other and ourselves better. Sharing fun and frustration, creativity and best practices, desires and results. Powered by OTN, now known as ODC. Where we can download virtually any software Oracle has to offer. And find resources – from articles and forum answers to documentation and sample code. This article is part of the community effort to show appreciation – to the community and to the Oracle Developer Community (organization).

    For fun, you could take a look at how the OTN site started – sometime in 2000 – using the WayBack machine: https://web.archive.org/web/20000511100612/http://otn.oracle.com:80/ 

    image

    And the WayBack machine is just one of many examples of timelines – presentation of data organized by date.

    image

    We all know how pictures say more than many words. And how tables of data are frequently much less accessible to users than to-the-point visualizations. For some reason, data associated with moments in time has always held a special interest for me. As do features that are about time – such as Flashback Query, 12c Temporal Database and SYSDATE (or better yet: SYSTIMESTAMP).

    To present such time-based data in a way that reveals the timeline and the historical thread that resides in the data, we can make use of the Timeline component that is available in:

    In JET:

    image

    In ADF:

    image

    In Data Visualization Cloud:

    Note that in all cases it does not take much more than a dataset with a date (or datetime) attribute and one or more attributes to create a label and perhaps to categorize. A simple select ename, job, hiredate from emp suffices.

    The post ODC Appreciation Day : Timeline component in Oracle JET, Data Visualization Cloud, APEX and ADF DVT: #ThanksODC appeared first on AMIS Oracle and Java Blog.

    SaaS going forward at Oracle OpenWorld 2017–Smart, Connected, Productivity, Multi-Channel

    Mon, 2017-10-09 03:34

    I have not seen many sessions on SaaS and business applications at Oracle OpenWorld. Yet SaaS is becoming increasingly important. The number of SaaS applications – or at least the number of functions that standard available applications can perform – is growing rapidly. The availability to any organization of SaaS functions that will support a large portion of their business processes is growing. The main challenge of corporate IT departments is going to shift from creating IT facilities to support the business processes to enabling SaaS applications to provide that support – by mutually tying these applications together through integration and mash-up, as well as embedding them in authentication, authorization, data warehousing, scanning, printing, enterprise content management and other enterprise IT facilities.

    Business applications not only support many more niche functions and allow fine tuning to an organization’s way of doing things, they also become much smarter and more proactive. Smart business applications apply machine learning to help humans focus on the tasks that require human attention and to handle automatically the cases that fall within the boundaries of normal action.

    image

    Some simple examples:

    • Marketing – who to send email to
    • Sales – who to focus on
    • Customer Service – recommend next step with calling customer

    Oracle is permeating AI into business apps (AI Adaptive Apps), also leveraging its Data as a Service with 3B consumer profiles in DaaS, and records on over $4 trillion of spending.


    image

    Oracle offers “a full suite of SaaS offerings” :

    image

    (although they clearly do not yet all have ideal mutual integration, similar look & feel and perfect alignment)

    During the Keynote by Thomas Kurian at Oracle OpenWorld 2017, an extensive demo was presented of how consumer activity can be tracked and used to reach out and make relevant offerings – as part of the B2C Customer Experience (see https://youtu.be/cef7C2uiDTM?t=47m35s )

    For example – web site navigation behavior can be tracked:

    image

    and from this, a profile can be composed about this particular user:

    image

    image

    image

    By comparing the profile to similar profiles and looking at the purchase behavior of those similar profiles, the AI powered application can predict and recommend purchases by the user with this profile.

    Here follow a number of screenshots that indicate the insight into customer interest in products – and the effects of specific, targeted campaigns to push certain products:

    image

    image

    image

    image

    image

    Information can be retrieved using REST services as well:

    image

    Recommendations that have been given to customers can be analyzed for their success. Additionally, the settings that drive these recommendations can be overridden – for example to push stock of a product that has been overstocked or is at end of line:

    imageimage

    image

    imageimage


    The Supervisory Controls allow humans to override the machine learning based behavior:

    image

    Change weight between channels:

    image

    image

    image

    image



    The post SaaS going forward at Oracle OpenWorld 2017–Smart, Connected, Productivity, Multi-Channel appeared first on AMIS Oracle and Java Blog.

    Some impressions from Oracle Analytics Cloud–taken from keynote at Oracle OpenWorld 2017

    Mon, 2017-10-09 01:07

    In his keynote on October 3rd during Oracle OpenWorld 2017, Thomas Kurian stated that the vision at Oracle around analytics has changed quite considerably. He explained this change and the new vision using this slide.

    image

    All kinds of data, all kinds of users, many more ways to present and visualize, and machine-generated insights to complement human understanding.

    The newly launched Analytics Cloud supports this vision.

    image 

    Zooming in on Data Preparation:

    image

    And from cleansed and prepared data – create Machine Learning models that help classify and predict, use conventional presentation forms (charts) as well as new ones (personalized, context-sensitive and rich: chat, notifications, maps), and allow users to collaborate around findings from the data.

    image

    Thomas K. threw in the Autonomous Data Warehouse as an intermediate or final destination for prepared data or even for the findings from that data.

    image

    The keynote continued with a demo that made clear how a specific challenge – monitor social media for traffic on specific topics and derive from all messages and tweets which player was most valuable (and has the largest social influence) – could be addressed.

    image

    Click on Analyze Social Streams

    Select streams to analyze:

    image

    Define search criteria:

    image

    See how additional cloud services are spun up: Big Data Compute (running Hadoop, Spark, Elastic) and Event Hub (running Kafka)

    image

    The initial data load is presented for the new Social Data Stream project on the Prepare tab. The Analytics Cloud comes with recommendations (calls to action) to cleanse (or “heal”) and enrich the data. Among the potential actions are correcting zip codes, extracting business entities from images, completing names and enriching the data by joining it to predefined data sets such as players, locations, team names, etc.

    The initial presentation of the data is in itself a rich exploration of that data. Analytics Cloud has already identified a large number of attributes, has analyzed the data and presents various aggregations. (This has clear undertones of Endeca.) At this point, we can work on the data to make it better – cleaner, richer and better suited for presentations, conclusions and model building.

    image

    image

    image

    Images can be analyzed to identify objects, recognize scenes and even find specific brands:

    image

    After each healing action, new recommendations for data preparation may be presented.

    image

    Here are two examples of joining the data sets to additional sets:

    image

    and

    image

    Some more examples of the current status of the data preparation:

    image

    image

    Here is the Visualize tab – where users can edit the proposed visualizations and add new ones.

    image

    The demo continued to show how a new KPI could be added through a mobile app, using voice recognition.

    image

    image

    That should result in notifications being sent upon specific conditions:

    image

    Notifications can take various forms – including visual but passive alerts on a dashboard, or active push messages on a messenger or chat channel (Slack, WeChat, Facebook Messenger), via SMS text message or by email.
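    As a small illustration of the push-message style of notification – this is not how Analytics Cloud does it internally, just a generic sketch using a Slack incoming webhook (the webhook URL is a placeholder you would create in your own Slack workspace):

        // Generic sketch: push a notification text to a Slack channel via an incoming webhook.
        // The webhook URL below is a placeholder - create your own under Slack's Incoming Webhooks.
        const https = require('https');
        const { URL } = require('url');

        function notifySlack(webhookUrl, text) {
          const payload = JSON.stringify({ text });
          const { hostname, pathname } = new URL(webhookUrl);
          const req = https.request({
            hostname, path: pathname, method: 'POST',
            headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(payload) }
          }, res => console.log('Slack responded with status', res.statusCode));
          req.on('error', err => console.error('Notification failed:', err.message));
          req.write(payload);
          req.end();
        }

        notifySlack('https://hooks.slack.com/services/T000/B000/XXXX',
          'KPI alert: social reach for the tracked topic dropped below the configured threshold');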

     


    The post Some impressions from Oracle Analytics Cloud–taken from keynote at Oracle OpenWorld 2017 appeared first on AMIS Oracle and Java Blog.

    Top 5 Infrastructure (IaaS) announcements by Oracle at Oracle OpenWorld 2017

    Sun, 2017-10-08 12:26

    From Thomas Kurian’s keynote during Oracle OpenWorld 2017 – see https://youtu.be/cef7C2uiDTM – a quick recap of the five most important announcements regarding IaaS:

    1.

    image

    2.

    image

    3.

    image

    4.

    image

    5.

    image

    World record benchmarks

    image

    image

    image

    The post Top 5 Infrastructure (IaaS) announcements by Oracle at Oracle OpenWorld 2017 appeared first on AMIS Oracle and Java Blog.

    Watch Oracle OpenWorld 2017 Keynotes On Demand

    Sat, 2017-10-07 06:16

    image

    Watch Keynotes on YouTube using these links:

    Larry Ellison (Sunday Oct 1st) – https://www.youtube.com/watch?v=HEupUSSSEBo

    Dave Donatelli (Tuesday Oct 3rd) – https://www.youtube.com/watch?v=irvNYpCopA8 

    image

    Thomas Kurian (Tuesday Oct 3rd) – https://www.youtube.com/watch?v=cef7C2uiDTM

    Larry Ellison (Tuesday Oct 3rd) – https://www.youtube.com/watch?v=faKWViY6zEk&t=6s 

    SuiteConnect – Evan Goldberg (Wednesday Oct 4th) – https://www.youtube.com/watch?v=pURoDocJW1Y 


    image

    JavaOne Keynote (Monday Oct 2nd) – https://www.youtube.com/watch?v=UNg9lmk60sg

    image

    The post Watch Oracle OpenWorld 2017 Keynotes On Demand appeared first on AMIS Oracle and Java Blog.

    Fun with Data Visualization Cloud–creating a timeline for album releases

    Fri, 2017-10-06 08:51

    I have played a little with Oracle’s Data Visualization Cloud and it is really fun to be able to turn raw data so quickly into nice and sometimes meaningful visuals. I do not pretend to grasp the full potential of Data Viz CS yet, but I can show you some simple steps to quickly create something good looking and potentially really useful.

    My very first steps were documented in this earlier article: https://technology.amis.nl/2017/09/10/hey-mum-i-am-a-citizen-data-scientist-with-oracle-data-visualization-cloud-and-you-can-be-one-too/.

    In this article, I start with two tables in a cloud database – with the data we used for the Soaring through the Clouds demo at Oracle OpenWorld 2017:

    image

    As described in the earlier article, I have created a database connection to this DBaaS instance and I have created data sources for these two tables.

    Now I am ready to create a new project:

    image

    I select the data sources to use in this project:

    image

    And on the prepare tab I make sure that the connection between the Data Sources is defined correctly (with Proposed Acts adding fact – lookup data – to the Albums):

    image

    On the Visualize tab, I drag the Release Date to the main pane.

    image

    I then select Timeline as the visualization:

    image

    Next, I bring the title of the album to the Details section:

    image

    and the genre of the album to the Color area:

    image

    Then I realize I would like to have the concatenation of Artist Name and Album Title in the details section. However, I cannot add two attributes to that area. What I can do instead is create a Calculation:

    image

    Next, I can use this calculation for the details:

    image

    I can use Trellis Rows to create a Timeline per value of the selected attribute, in this case the artist:

    image

    It is very easy to add filters – that can be manipulated by end users in presentation mode to filter on data relevant to them. Simply drag attributes to the filter section at the top:

    image

    Then select the desired filter values:

    image

    and the visualization is updated accordingly:

    image

    The post Fun with Data Visualization Cloud–creating a timeline for album releases appeared first on AMIS Oracle and Java Blog.

    Tweet with download link for JavaOne and Oracle OpenWorld slide decks

    Fri, 2017-10-06 07:24

    In a recent article I discussed how to programmatically fetch a JSON document with information about sessions at Oracle OpenWorld and JavaOne 2017. Yesterday, slide decks for these sessions started to become available. I have analyzed how the links to these downloads were included in the JSON data returned by the API. Then I created simple Node programs to tweet about each of the sessions for which the download became available

    image

    and to download the file to my local file system.

    image

    I added provisions to space out the tweets and the download activity over time – so as not to burden the backend of the web site and not to be kicked off Twitter for being a robot.

    The code I crafted is not particularly ingenious – it was created rather hastily in order to share with the OOW17 and JavaOne communities the links for downloading slide decks from presentations at both conferences. I used the npm modules twit and download. This code can be found on GitHub: https://github.com/lucasjellema/scrape-oow17.
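    The real scripts are in the GitHub repository above; as a rough indication of the approach (a simplified sketch under my own assumptions about the session fields, file locations and credentials), twit and download can be combined with a simple setTimeout-based delay like this:

        // Simplified sketch of the approach: tweet a download link and fetch the slide deck,
        // spacing the actions out over time. Session fields and credentials are placeholders.
        const Twit = require('twit');
        const download = require('download');

        const T = new Twit({
          consumer_key: 'xxx', consumer_secret: 'xxx',
          access_token: 'xxx', access_token_secret: 'xxx'
        });

        // sessions would be read from the sessions-catalog JSON documents
        const sessions = [
          { title: 'Example Session', slideUrl: 'https://example.com/slides/example-session.pdf' }
        ];

        sessions.forEach((session, index) => {
          // space tweets and downloads roughly one minute apart, to be friendly to both backends
          setTimeout(() => {
            T.post('statuses/update',
              { status: `Slides available for "${session.title}": ${session.slideUrl} #oow17 #javaone` },
              err => err && console.error('Tweet failed:', err.message));
            download(session.slideUrl, 'slides')
              .then(() => console.log('Downloaded', session.slideUrl))
              .catch(err => console.error('Download failed:', err.message));
          }, index * 60 * 1000);
        });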

    The documents javaone2017-sessions-catalog.json and oow2017-sessions-catalog.json contain details on all sessions – including the URLs for downloading slides.

    image

    The post Tweet with download link for JavaOne and Oracle OpenWorld slide decks appeared first on AMIS Oracle and Java Blog.

    Oracle Open World; day 4 – almost done

    Thu, 2017-10-05 13:43

    Almost done. It is not expected that tomorrow, Thursday, will be a day full of new and exciting announcements. Today, Wednesday, was a mix for me of ‘normal’ content, like a session about migrating to Oracle Enterprise Manager 13.2 (another packed room), and a very interesting session about the Autonomous Database. Just a short note about a few sessions (including the Autonomous Database of course).

    As mentioned, sessions with ‘normal’ content – in this case migrating a database of 100TB in one day, with Mike Dietrich – are quite popular. We may almost forget that most customers are thinking about the cloud, but at the moment they are just focused on how to keep the daily business running.

    The session about Oracle Enterprise Manager, on upgrading to 13c (a packed room), is quite rare these days. Two years ago there were a lot of presentations about this management product; this year, close to none. I’m very curious to know what happens after 2020. Oracle Management Cloud is coming up rapidly. But… Oracle is using it quite heavily in the public cloud, so it is expected it won’t disappear that fast. Here are the timelines:

     

    Foto 04-10-17 11 03 01 (1)

     

    At the end of the day, a session was planned about the most important announcement of Oracle OpenWorld: a preview of the Autonomous Database.

    Quite peculiar, at the very end of the day, in a room that was obviously too small for the crowd.

    A few outlines. The DBA is still needed; only the general tasks are disappearing:

    Foto 04-10-17 15 38 54 (1)

    The very rough roadmap.

    Foto 04-10-17 16 10 48 (1)

    This Data Warehouse version is already there in 2017. This was technically ‘easier’ to accomplish. The OLTP autonomous database has more challenges.

    Foto 04-10-17 16 10 48 (2)

    And a very important message to the customers: an SLA guarantee.

    Foto 04-10-17 16 01 11 (3)

    Regards

    The post Oracle Open World; day 4 – almost done appeared first on AMIS Oracle and Java Blog.
