Amis Blog

Friends of Oracle and Java

Getting started with Oracle Database in a Docker container!

Sat, 2017-12-30 05:33

One of the benefits of using Docker is quick and easy provisioning. I wanted to find out first-hand if this could help me get an Oracle Enterprise Edition database quickly up and running for use in a development environment. Oracle provides Docker images for its Standard and Enterprise Edition database in the Oracle Container Registry. Lucas Jellema has already provided two blogs on this (here and here) which have been a useful starting point. In this blog I’ll describe some of the choices to make and challenges I encountered. To summarize, I’m quite happy with the Docker images in the registry as they provide a very easy way to automate the install of an EE database. You can find a Vagrant provisioning shell script with the installation of Docker and Docker commands to execute here and a description on how to use it here.

Docker

Installing Docker on Oracle Linux 7

Why Docker

Preparing for this blog was my first real Docker experience outside of workshops. The benefits of Docker I most appreciated during this exercise are that:

  • Docker uses OS-level virtualization (containers), which is more lightweight than full virtualization in, for example, VirtualBox or VMware.
  • The installation of a product inside the container is already fully scripted if you have a Docker image or Dockerfile. There are a lot of images and Dockerfiles available, many of them provided and supported by software vendors such as Oracle.
  • The Docker CLI is very user friendly. For example, you can just throw away your container and create a new one or stop it and start it again at a later time. Starting a shell within a container is also easy. Compare this to for example VBoxManage.

In order to install Docker on Oracle Linux 7, you need to do some things which are described below.

Preparing a filesystem

Docker images/containers are created in /var/lib/docker by default. You do not need to create a separate filesystem for that; however, Docker runs well on a filesystem of type BTRFS, which is not supported on Oracle Linux. Docker has two editions: Docker CE and Docker EE. Docker CE is not certified for Oracle Linux while EE is. For Docker CE, BTRFS is only recommended on Ubuntu or Debian, and for Docker EE, BTRFS is only supported on SLES.

When you do want to use a BTRFS partition (at your own risk), and you want to automate the installation of your OS using Kickstart, you can do this like:

part btrfs.01 --size=1000 --grow
btrfs /var/lib/docker --label=docker btrfs.01

See a complete Kickstart example here for Oracle Linux 7 and the blog on how to use the Kickstart file with Packer here.

Enable repositories

Docker is not present in a repository that is enabled by default on Oracle Linux 7. You can automate enabling the required repositories with:

yum-config-manager --enable ol7_addons
yum-config-manager --enable ol7_optional_latest
yum-config-manager --enable ol7_UEKR4
Install Docker

Installing Docker can be done with a single command:

yum install docker-engine btrfs-progs btrfs-progs-devel -y

If you’re not using BTRFS, you can leave those packages out.

Start the Docker daemon

The Docker CLI talks to a daemon which needs to be running. Starting the daemon and making it start on boot can be done with:

systemctl start docker
systemctl enable docker
Allow a user to use Docker

You can add a user to the docker group in order to allow it to use docker. This is however a bad practice since the user can obtain root access to the system. The way to allow a non-root user to execute docker is described here. You allow the user to execute the docker command using sudo and create an alias for the docker command to instead perform sudo docker. You can also tune the docker commands to only allow access to specific containers.

Add to /etc/sudoers

oracle ALL=(ALL) NOPASSWD: /usr/bin/docker

Create the following alias

alias docker="sudo /usr/bin/docker"
Oracle database

Choosing an edition

Why not XE?

My purpose is to automate the complete install of SOA Suite from scratch. In a previous blog I described how to get started with Kickstart, Vagrant, Packer to get the OS ready. I ended in that blog post with the installation of the XE database. After the installation of the XE database, the Repository Creation Utility (RCU) needs to be run to create tablespaces, schemas, tables, etc for SOA Suite. Here I could not continue with my automated install since the RCU wants to create materialized views. The Advanced Replication option however is not part of the current version of the XE database. There was no automated way to let the installer skip over the error and continue as you would normally do with a manual install. I needed a non-XE edition of the database! The other editions of the database however were more complex to install and thus automate. For example, you need to install the database software, configure the listener, create a database, create scripts to start the database when the OS starts. Not being a DBA (or having any ambitions to become one), this was not something I wanted to invest much time in.

Enter Oracle Container Registry!

The Oracle Container Registry contains preconfigured images for the Enterprise Edition and Standard Edition database. The Container Registry also contains useful information on how to use these images. The Standard Edition database uses a minimum of 4GB of RAM. The Enterprise Edition database has a slim variant with fewer features which only uses 2GB of RAM. The slim image is also a lot smaller: only about 2GB to download instead of nearly 5GB. The Standard Edition can be configured with a password from a configuration file while the Enterprise Edition has the default password ‘Oradoc_db1’. The Docker images can use a mounted share for their datafiles.

Create an account and accept the license

In order to use the Container Registry, you have to create an account first. Next you have to login and accept the license for a specific image. This has been described here and is pretty easy.

After you have done that and you have installed Docker as described above, you can start using the image and create containers!

Start using the image and create a container

First you have to login to the container registry from your OS. This can be done using a command like:

docker login -u maarten.smeets@amis.nl -p XXX container-registry.oracle.com

XXX is not my real password and I also did not accidentally commit it to GitHub. You should use the account you created for the Container Registry here.

I created a small configuration file (db_env.dat) with some settings. These are all the configuration options which are currently possible from a separate configuration file. The file contains the below 4 lines:

DB_SID=ORCLCDB
DB_PDB=ORCLPDB1
DB_DOMAIN=localdomain
DB_MEMORY=2GB

Next you can pull the image and run a container with it:

docker run -d --env-file db_env.dat -p 1521:1521 -p 5500:5500 -it --name dockerDB container-registry.oracle.com/database/enterprise:12.2.0.1-slim

The -p options specify port mappings. I want port 1521 and port 5500 mapped to my host (VirtualBox, Oracle Linux 7) OS.

You can see if the container is up and running with:

docker ps
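
The first startup takes a while since the database still has to be created inside the container. A convenient way to follow that process is to tail the container logs; the exact message printed when the database is ready depends on the image version, so treat it as an indication only:

docker logs -f dockerDB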

You can start a shell inside the container:

docker exec -it dockerDB /bin/bash

I can easily stop the database with:

docker stop dockerDB

And start it again with

docker start dockerDB

If you want to connect to the database inside the container, you can do so using service name ORCLPDB1.localdomain, user SYS, password Oradoc_db1, hostname localhost (when running on the VirtualBox machine) and port 1521. For the RCU, I created an Oracle Wallet file from the RCU configuration wizard and used that to automate the RCU and install the SOA Suite required artifacts in the container database. See here.
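
For example, from the VirtualBox machine itself (assuming an Oracle client with sqlplus is available there; the service name and credentials are the defaults mentioned above) a connection could look like this:

sqlplus sys/Oradoc_db1@//localhost:1521/ORCLPDB1.localdomain as sysdba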

Finally

I was surprised at how easy it was to use the Docker image from the registry. Getting Docker itself installed and ready was more work. After a container is created based on the image, managing it with the Docker CLI is also very easy. As a developer this makes me very happy and I recommend other developers to try it out! There are some challenges though if you want to use the images on larger scale.

Limited configuration options

Many customers use different standards. The Docker image comes with a certain configuration and can be configured only in a (very) limited way by means of a configuration file (as shown above). You can mount an external directory to store data files.
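
A minimal sketch of such a mount as an extension of the earlier run command; the host path is just an example and the exact in-container path for the datafiles should be taken from the image documentation in the Container Registry:

# example only: verify the correct in-container datafile location in the image documentation
docker run -d --env-file db_env.dat -p 1521:1521 -p 5500:5500 -v /u01/oradata:/ORCL -it --name dockerDB container-registry.oracle.com/database/enterprise:12.2.0.1-slim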

Limitations in features

Also, the Docker container database can only run one instance, cannot be patched and does not support Data Guard. I can imagine that in production, not being able to patch the database might be an issue. You can however replace the entire image with a new version and hope the new version can still use the old datafiles. You have to check this though.

Running multiple containers on the same machine is inefficient

If you have multiple Oracle Database containers running at the same time on the same machine, you will not benefit from the multitenancy features since every container runs its own container database and pluggable database. Also, every container runs its own listener.

The post Getting started with Oracle Database in a Docker container! appeared first on AMIS Oracle and Java Blog.

Handle HTTP PATCH request with Java Servlet

Thu, 2017-12-14 23:04

The Java Servlet specification does not include handling a PATCH request. That means that class  javax.servlet.http.HttpServlet does not have a doPatch() method, unlike doGet, doPost, doPut etc.

That does not mean that a Java Servlet cannot handle PATCH requests. It is quite simple to make it do that.

The trick is overriding the service(request,response) method – and having it respond to PATCH requests (in a special way) and to all other requests in the normal way. Or, to do it one step more elegantly:

  1. create an abstract class – MyServlet for example – that extends from HttpServlet, override service() and add an abstract doPatch() method – which has no implementation here and is only meant to be overridden
    package nl.amis.patch.view;
    
    import java.io.IOException;
    
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    
    public abstract class MyServlet extends HttpServlet {
    
        public void service(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
            if (request.getMethod().equalsIgnoreCase("PATCH")){
               doPatch(request, response);
            } else {
                super.service(request, response);
            }
        }
        
        public abstract void doPatch(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException;
    
    }
    
    
  2. any servlet (class) that should handle PATCH requests should extend from this class [MyServlet] and implement the doPatch() method

 

package nl.amis.patch.view;

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.*;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

@WebServlet(name = "TheServlet", urlPatterns = { "/theservlet" })
public class TheServlet extends MyServlet {
    private static final String CONTENT_TYPE = "text/html; charset=windows-1252";

    public void init(ServletConfig config) throws ServletException {
        super.init(config);
    }

    public void doPatch(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        response.setContentType(CONTENT_TYPE);
        PrintWriter out = response.getWriter();
        out.println("<html>");
        out.println("<head><title>TheServlet</title></head>");
        out.println("<body>");
        out.println("<p>The Servlet has received a PATCH request and will do something meaningful with it! This is the reply.</p>");
        out.println("</body></html>");
        out.close();
    }
}

 

Here is the result of sending a PATCH request from Postman to the Servlet:

 

image
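
The same request can be sent from the command line with curl; the host, port and context root below are assumptions for a local deployment and should be adjusted to your own environment:

# adjust host, port and context root to your deployment
curl -X PATCH http://localhost:7101/PatchApp/theservlet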

The post Handle HTTP PATCH request with Java Servlet appeared first on AMIS Oracle and Java Blog.

10 reasons NOT to implement Blockchain now

Tue, 2017-12-12 06:35

A secure distributed ledger with smart contract capabilities, not requiring a bank as an intermediary! Also a single source of truth with complete traceability. Definitely something we want! Blockchain technology promises to make this possible. Blockchain became famous through cryptocurrencies like Bitcoin and Ethereum. The technology could also be considered to replace B2B functionality. With new technologies it is not a bad idea to look at the pros and cons before starting an implementation. Blockchain is the new kid on the block and there is not much experience yet on how well it will play with others and how it will mature. In this blog I summarize some of my concerns about blockchain, which I hope will be resolved in due time.

Regarding new/emerging technologies in the integration space, I’m quite open to investigating the potential value they can offer. I’m a great proponent of, for example, Kafka, the highly scalable streaming platform, and Docker to host microservices. However, I’ve been to several conferences and did some research online regarding blockchain and I’m sceptical. I definitely don’t claim to be an expert on this subject so please correct me if I’m wrong! Also, this is my personal opinion. It might deviate from my employer’s and customers’ views.

Most of the issues discussed here are valid for public blockchains. Private blockchains are of course more flexible since they can be managed by companies themselves. You can for example more easily migrate private blockchains to a new blockchain technology or fix issues with broken smart contracts. These do require management tooling, scripts and enough developers / operations people around your private blockchain though. I don’t think it is a deploy and go solution just yet.

1 Immutable is really immutable!

A pure public blockchain (not taking into account sidechains and off-chain code) is an immutable chain. Every block uses a hashed value of the previous block in its encryption. You cannot alter a block which is already on the chain. This makes sure things you put on the chain cannot suddenly appear or disappear. There is traceability. Thus you cannot accidentally create money on a distributed ledger (unless you create immutable smart contracts to provide you with that functionality). Security and immutability are great things but they require you to work in a certain way we are not that used to yet. For example, you cannot cancel a confirmed transaction. You have to do a new transaction counteracting the effects of the previous one you want to cancel. If you have an unconfirmed transaction, you can ‘cancel’ it by creating a new transaction with the same inputs and a higher transaction fee (at least on a public blockchain). See for example here. Also, if you put a smart contract on a public chain and it has a code flaw someone can abuse, you’re basically screwed. If the issue is big enough, public blockchains can fork (if ‘the community’ agrees). See for example the DAO hack on Ethereum. In an enterprise environment with a private blockchain, you can fork the chain and replay the transactions after the issue you want corrected on the chain. This however needs to be performed for every serious enough issue and can be a time-consuming operation. In this case it helps (in your private blockchain) if you have a ‘shadow administration’ of transactions. You do have to take into account however that transactions can have different results based on what has changed since the fork. Being careful here is probably required.

2 Smart Contracts

Smart contracts! It is really cool you can also put a contract on the chain. Execution of the contract can be verified by nodes on the chain which have permission and the contract is immutable. This is a cool feature!

However there are some challenges when implementing smart contracts. A lot becomes possible and this freedom creates sometimes unwanted side-effects.

CryptoKitties

You can look up CryptoKitties, a game implemented using Smart Contracts on Ethereum. They can clog a public blockchain and cause transactions to take a really long time. This is not the first time blockchain congestion has occurred (see for example here). This is a clear sign there are scalability issues, especially with public blockchains. When using private blockchains, these scalability issues are also likely to occur eventually if the number of transactions increases (of course you can prevent CryptoKitties on a private blockchain). The Bitcoin / VISA comparison is an often quoted one, although there is much discussion on the validity of the comparison.

Immutable software. HelloWorld forever!

Smart contracts are implemented in code, and code contains bugs; those bugs, depending on the implementation, sometimes cannot be fixed since the code on the chain is immutable. Especially since blockchain is a new technology, many people will put buggy code on public blockchains and that code will remain there forever. If you create DAO‘s (decentralized autonomous organizations) on a blockchain, this becomes even more challenging since the codebase is larger. See for example the Ethereum DAO hack.

Because the code is immutable, it will remain on the chain forever. Every hello world tryout, every CryptoKitten from everyone will remain there. Downloading the chain and becoming a node will thus become more difficult as the amount of code on the chain increases, which it undoubtedly will.

Business people creating smart contracts?

A smart contract might give the idea a business person or lawyer should be able to design/create them. If they can create deterministic error free contracts which will be on the blockchain forever, that is of course possible. It is a question though how realistic that is. It seems like a similar idea that business people could create business rules in business rule engines (‘citizen developers’). In my experience technical people need to do that in a controlled, tested manner.

3 There is no intermediary and no guarantees

There is no bank between you and the (public) blockchain. This can be a good thing since a bank eats money. However, if for example the blockchain loses popularity, steeply drops in value or is hacked (compare with a bank going bankrupt, e.g. Icesave), then you won’t have any guarantees like the deposit guarantee schemes in the EU. Your money might be gone.

4 Updating the code of a blockchain

Updating the core code of a running blockchain is, due to its distributed nature, quite a challenge. This often leads to forks. See for example Bitcoin forks like Bitcoin Cash and Bitcoin Gold and an Ethereum fork like Byzantium. The issue with forks is that they make the entire cryptocurrency landscape crowded. It is like Europe in the past when every country had its own coin. You have to exchange coins if you want to spend in a certain country (using the intermediaries everyone wants to avoid) or keep a stack of each of them. Forks, especially hard forks, come with security challenges such as replay attacks (transactions which can be valid on different chains). Some reasons you might want to update the code are because transactions are slow, security becomes an issue in the future (quantum computing) or new features are required (e.g. related to smart contracts).

5 Blockchain and privacy legislation (GDPR)

Security

Security is one of the strong points of blockchain technology and helps with the security by design and by default GDPR requirements. There are some other things to think about though.

The right to be forgotten

Things put on a blockchain are permanent. You cannot delete them afterwards, although you might be able to make them inaccessible in certain cases. This conflicts with the GDPR right to be forgotten.

Data localization requirements

Every node has the entire blockchain and thus all the data. This might cause issues with legislation. For example requirements to have data contained within the same country. This becomes more of a challenge when running blockchain in a cloud environment. In Europe with many relatively small countries, this will be more of an issue compared to for example the US, Russia or China.

Blockchain in the cloud

It is really dependent on the types of services the blockchain cloud provider offers and how much they charge for it. It could be similar to using a bank, requiring you to pay per transaction. In that case, why not stick to a bank? Can you enforce the nodes being located in your country? If you need to fix a broken smart contract, will there be a service request and will the cloud provider fork and replay transactions for you? Will you get access to the blockchain itself? Will they provide a transaction manager? Will they guarantee a max transactions per second in their SLA? A lot of questions for which there are probably answers (which differ per provider) and based on those answers, you can make a cost calculation if it will be worthwhile to use the cloud blockchain. In the cloud, the challenges with being GDPR compliant are even greater (especially for European governments and banks).

6 Lost your private key?

If you have lost your private key or lost access to your wallet (more business friendly name of a keystore) containing your private key, you might have lost your assets on the blockchain. Luckily a blockchain is secure and there is no easy way to fix this. If you have a wallet which is being managed by a 3rd party, they might be able to help you with recovering it. Those 3rd parties however are hacked quite often (a lot of value can be obtained from such a hack). See for example here, here and here.

7 A blockchain transaction manager is required

A transaction is put on the blockchain. The transaction is usually verified by several nodes before it is distributed to all nodes and becomes part of the chain. Verification can fail or might take a while. This can be hours on some public blockchains. It could be that the transaction has been overtaken by another transaction with higher priority. In the software which is integrated with a blockchain solution, you have to keep track of the state of transactions since you want to know what the up-to-date value of your assets is. This causes an integration challenge and you might have to introduce a product which has a blockchain transaction manager feature.

8 Resource inefficient; not good for the environment

Blockchain requires large amounts of resources when compared to classic integration.
Every node has the complete chain so every node can verify transactions. This is a good thing since if a single node is hacked, other nodes will overrule the transactions which this node offers to the chain if they are invalid in whatever way. However, this means every transaction is distributed to all nodes (network traffic) and every verification is performed on every node (CPU). Also, when the chain becomes larger, every node has a complete copy and thus diskspace is not used efficiently. See for example some research on blockchain electricity usage here. Another example is that a single Bitcoin transaction (4 can be processed per second) requires the same amount of electricity as 5000 VISA transactions (while VISA can do 4000 transactions per second, see here). Of course there is discussion on the validity of such a comparison and in the future this will most likely change. It is also an indication that blockchains are still in their early stages.

9 Maturity

Blockchain is relatively new and new implementations appear almost daily. There is little standardisation. The below picture was taken from a slide at the UKOUG Apps17 conference in Birmingham this year (the presentation was given by David Haimes).

Even with this many (partially open source) products, it seems every implementation requires a new product. For example, the Estonian government has implemented its own blockchain flavor: KSI Blockchain. It is likely that eventually there will be a most commonly used product, which will hopefully be the one that works best (not like what happened in the videotape format wars).

Which product?

If you choose a product now to implement, you will most likely not choose the product which will be most popular in a couple of years time. Improvements to the technology/products will quite quickly catch up to you. This will probably mean you would have to start migration projects.

Standards?

For example in case of webservice integrations, there are many internationally acknowledged standards such as the WS-* standards and SOAP. For REST services there is JOSE, JWT and of course JSON. What is there for blockchain? Every product uses its own protocol. Like in the times when Microsoft conjured up its own HTML/JavaScript standards causing cross browser compatibility issues. Only this time there are hundreds of Microsofts.

10 Quantum computing

Most of the blockchain implementations are based on ECDSA signatures. Elliptic curve cryptography is vulnerable to a modified Shor’s algorithm for solving the discrete logarithm problem on elliptic curves. This potentially makes it possible to obtain a user’s private key from their public key when performing a transaction (see here and here). Of course this will be fixed, but how? By forking the public blockchains? By introducing new blockchains? As indicated before, updating the technology of a blockchain can be challenging.

How to deal with these challenges?

You can jump on the wagon and hope the ride will not carry you off a cliff. I would be a bit careful when implementing blockchain. In an enterprise, I would not expect to quickly get something into production that will actually be worthwhile in use without requiring a lot of expertise to work on all the challenges.

Companies will gain experience with this technology and architectures which mitigate these challenges will undoubtedly emerge. A new development could also be that the base assumptions the blockchain technology is based on, are not practical in enterprise environments and another technology arises to fill the gap.

Alternatives

Not really

To be honest, a solid alternative which covers all the use cases of blockchain is not easily found. This might also help in explaining the popularity of blockchain. Although there are many technical challenges, in absence of a solid alternative, where should you go to implement those use cases?

SWIFT?

Exchanging value internationally has been done by using the SWIFT network (usually by using a B2B application to provide a bridge). This however often requires manual interventions (at least in my experience) and there are security considerations. SWIFT has been hacked for example.

Kafka?

The idea of having a consortium which guards a shared truth has been around for quite a while in the B2B world. The technology such a consortium uses can just as well be, for example, a collection of Kafka topics. It would require a per use-case study whether all the blockchain features can be implemented. It will perform way better, the order of messages (like in a blockchain) can be guaranteed, a message put on a topic is immutable and you can use compacted topics to get the latest value of something. Kafka has been designed to be easily scalable. It might be worthwhile to explore if you can create a blockchain-like setup which does not have its drawbacks.
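
As a sketch of that last point, a compacted topic – which retains only the latest value per key – can be created with the standard Kafka tooling; the ZooKeeper address, topic name and sizing below are just examples:

# topic name and sizing are examples only
kafka-topics.sh --create --zookeeper localhost:2181 --topic shared-ledger --partitions 3 --replication-factor 3 --config cleanup.policy=compact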

Off-chain transactions and sidechains

Some blockchain issues can be mitigated by using so-called off-chain transactions and code. See for example here. Sidechains are extensions to existing blockchains, enhancing their privacy and functionality by adding features like smart contracts and confidential transactions.

Finally

It might not seem like it, from the above, but I’m excited about this new technology. Currently however in my opinion, it is still immature. It lacks standardization, performance and the immutable nature of a blockchain might be difficult to deal with in an actual enterprise. With this blog I tried to create some awareness on things you might think about when considering an implementation. Currently implementations (still?) require much consultancy on the development and operations side, to make it work. This is of course a good thing in my line of work!

The post 10 reasons NOT to implement Blockchain now appeared first on AMIS Oracle and Java Blog.

Oracle Managed Kubernetes Cloud– First Steps with Automated Deployment using Wercker Pipelines

Sat, 2017-12-02 07:37

Oracle announced a managed Kubernetes Cloud service during Oracle OpenWorld 2017. This week, I had an opportunity to work with this new container native cloud offering. It is quite straightforward:

Through the Wercker console

image

a new Cluster can be created on an Oracle BareMetal Cloud (aka Oracle Cloud Infrastructure) environment. The cloud credentials are provided:

SNAGHTMLff83abb

Name and K8S version are specified:

image

The Cluster Size is configured:

image

And the node configuration is indicated:

image

Subsequently, Oracle will rollout a Kubernetes cluster to the designated Cloud Infrastructure – according to these specifications.

SNAGHTML1010ac4c

The Cluster’s Address is highlighted in this screenshot. This endpoint will be required later on to configure the automated deployment pipeline.

This cluster can be managed through the Kubernetes Dashboard. Deployments to the cluster can be done using the normal means – such as the kubectl command line tool. Oracle recommends automating all deployments, using the Wercker pipelines. I will illustrate how that is done in this article.

The source code can be found on GitHub: https://github.com/lucasjellema/the-simple-app. Be warned – the code is extremely simple.

The steps are: (assuming one already has a GitHub account as well as a Wercker account and a local kubectl installation)

  1. generate a personal token in the Wercker account (to be used for Wercker’s direct interactions with the Kubernetes cluster)
  2. prepare (local) Kubernetes configuration file – in order to work against the cluster using local kubectl commandline
  3. implement the application that is to be deployed onto the Kubernetes cluster – for example a simple Node application
  4. create the wercker.yml file (along with templates for Kubernetes deployment files) that describes the build steps that apply to the application and its deployment to Kubernetes
  5. push the application to a GitHub repository
  6. create a release in the Wercker console – associated with the GitHub Repository
  7. define the Wercker Pipelines for the application – using the Pipelines from the wercker.yml file
  8. define the automation pipeline – a chain of the pipelines defined in the previous step, triggered by an event such as a commit in the GitHub repo
  9. define environment variables – specifically the Kubernetes endpoint and the user token to use for connecting to the Kubernetes cluster from the automated pipeline
  10. trigger the automation pipeline – for example through a commit to GitHub
  11. verify in Kubernetes – dashboard or command line – that the application is deployed and determine the public endpoint
  12. access the application
  13. iterate through steps 10..12 while evolving the application

Generate Wercker Token

image

Prepare local Kubernetes Configuration file

Create a config file in the users/<current user>/.kube directory which contains the server address for the Kubernetes cluster and the token generated in the Wercker user settings. The file looks something like this screenshot:

SNAGHTML10176445
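
Instead of editing the file by hand, the same configuration can also be built up with kubectl itself; the cluster, user and context names below are arbitrary examples and the placeholders should be replaced with the cluster address and the Wercker token:

# names are examples; replace the placeholders with your own values
kubectl config set-cluster oke-cluster --server=https://<cluster address>
kubectl config set-credentials wercker-user --token=<wercker token>
kubectl config set-context oke --cluster=oke-cluster --user=wercker-user
kubectl config use-context oke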

Verify the correctness of the config file by running for example:

kubectl version

image

Or any other kubectl command.

Implement the application that is to be deployed onto the Kubernetes cluster

In this example the application is a very simple Node/Express application that handles two types of HTTP requests: a GET request to the URL path /about and a POST request to /simple-app. There is nothing special about the application – in fact it is thoroughly underwhelming. The functionality consists of returning a result that proves the application has been invoked successfully – and not much more.

The application source is found in https://github.com/lucasjellema/the-simple-app – mainly in the file app.js.

After implementing the app.js I can run and invoke the application locally:

image
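
A sketch of such a local test, assuming the application listens on its default port 3000 (the port that is also exposed from the container later on) and handles the two requests described above:

node app.js
# in another terminal:
curl http://localhost:3000/about
curl -X POST http://localhost:3000/simple-app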

Create the wercker.yml file for the application

The wercker.yml file provides instructions to the Wercker engine on how to execute the build and deploy steps. These steps make use of parameters whose values are provided by the Wercker build engine at run time, partially from the values defined for environment variables at organization, application or pipeline level.

Here three pipelines are shown:

image

The build pipeline uses the node:6.10 base Docker container image as its starting point. It adds the source code, executes npm install and generates a TLS key and certificate. The push-to-releases pipeline stores the build outcome (the container image) in the configured container registry. The deploy-to-oke (oke == Oracle Kubernetes Engine) pipeline takes the container image and deploys it to the Kubernetes cluster – using the Kubernetes template files, as indicated in this screenshot.

image 

Along with the wercker.yml file we provide templates for the Kubernetes deployment files that describe the deployment to Kubernetes.

The kubernetes-deployment.yml.template defines the Deployment (based on the container image with a single replica) and the service – exposing port 3000 from the container.

image

The ingress.yml.template file defines how the service is to be exposed through the cluster ingress nginx.

Push the application – including the yml files for Wercker and Kubernetes – to a GitHub repository

image

Create a release in the Wercker console – associated with the GitHub Repository

image


Define the Wercker Pipelines for the application – using the Pipelines from the wercker.yml file

image

Click on New Pipeline for each of the build pipelines in the wercker.yml file. Note: the build pipeline is predefined.

image

image

Define the automation pipeline

– a chain of the pipelines defined in the previous step, triggered by an event such as a commit in the GitHub repo

image

image

Define environment variables

– specifically the Kubernetes endpoint and the user token to use for connecting to the Kubernetes cluster from the automated pipeline

SNAGHTML10a57e52

Trigger the automation pipeline – for example through a commit to GitHub

image


When the changes are pushed to GitHub, the web hook fires and the build pipeline in Wercker is triggered.

image

image

image

I even received an email from Wercker, alerting me about this issue:

image

It turns out I forgot to set the values for the environment variables KUBERNETES_MASTER and KUBERNETES_TOKEN. In this article that is the previous step, preceding this one; in reality I forgot to do it and ran into this error as a result.

After setting the correct values, I triggered the pipeline once more, with better luck this time.

image

image

Verify in Kubernetes – dashboard or command line – that the application is deployed

The deployment from Wercker to the Kubernetes Cluster was successful. Unfortunately, the Node application itself did not start as desired. And I was informed about this on the overview page for the relevant namespace – lucasjellema – on the Kubernetes dashboard – that I accessed by running

kubectl proxy

on my laptop and opening my browser at: http://127.0.0.1:8001/ui.


image

The logging for the pod made clear that there was a problem with the port mapping.

image
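
The same information is available from the command line; the namespace is the one used for the deployment above and the pod name is whatever kubectl reports for it:

kubectl get pods --namespace lucasjellema
kubectl logs <pod name> --namespace lucasjellema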

I fixed the code, committed and pushed to GitHub. The build pipeline was triggered and the application was built into a container that was successfully deployed on the Kubernetes cluster:

image

I now need to find out what the endpoint is where I can access the application. For that, I check out the Ingress created for the deployment – to find the value for the path: /lucasjellema

image

Next, I check the ingress service in the oracle-bmc namespace – as that is in my case the cluster wide ingress for all public calls into the cluster:

SNAGHTML10b0fd8f

This provides me with the public IP address.
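
Both lookups can also be done from the command line; the namespaces are the ones used above and the exact output columns may differ per Kubernetes version:

kubectl get ingress --namespace lucasjellema
kubectl get service --namespace oracle-bmc -o wide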

Access the Application

Calls to the simple-app application can now be made at: http://<public ip>/lucasjellema/simple-app (and http://<public ip>/lucasjellema/about):

SNAGHTML10b254f1

and

image

Note: because of a certificate issue, the call from Postman to the POST endpoint only succeeds after disabling certificate verification in the general settings:

image

image
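
From the command line the equivalent calls look roughly like this; <public ip> stands for the address found in the previous step:

curl http://<public ip>/lucasjellema/about
curl -X POST http://<public ip>/lucasjellema/simple-app
# add -k and use https:// if the call goes through TLS; that is the curl equivalent of disabling certificate verification in Postman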


 

Evolve the Application

From this point on it is very simple to further evolve the application. Modify the code, test locally, commit and push to Git – and the changed application is automatically built and deployed to the managed Kubernetes cluster.

A quick example:

I add support for /stuff to the REST API supported by simple-app:

image

The code is committed and pushed:

image

The Wercker pipeline is triggered

image

At this point, the application does not yet support requests to /stuff:

image

After a little less than 3 minutes, the full build, store and deploy to Kubernetes cluster pipeline is done:

image

And the new functionality is live from the publicly exposed Kubernetes environment:

image

Resources

Wercker Tutorial on Getting Started with Wercker Clusters Using Wercker Clusters – http://devcenter.wercker.com/docs/getting-started-with-wercker-clusters#exampleend2end

The post Oracle Managed Kubernetes Cloud– First Steps with Automated Deployment using Wercker Pipelines appeared first on AMIS Oracle and Java Blog.

Implementing Authentication for REST API calls from JET Applications embedded in ADF or WebCenter Portal using JSON Web Token (JWT)

Wed, 2017-11-29 05:00

The situation discussed in this article is as follows: a rich client web application (JavaScript based, could be created with Oracle JET or based on Angular/Vue/React/Ember/…) is embedded in an ADF or WebCenter Portal application. Users are authenticated in that application through a regular login procedure that leverages the OPSS (Oracle Platform Security Service) in WebLogic, authenticating against an LDAP directory or another type of security provider. The embedded rich web application makes calls to REST APIs. These APIs enforce authentication and authorization – to prevent rogue calls. Note: these APIs have to be accessible from wherever the users are working with the ADF or WebCenter Portal application.

This article describes how the authenticated HTTP Session context in ADF – where we have the security context with authenticated principal with subjects and roles – can be leveraged to generate a secure token that can be passed to the embedded client web application and subsequently used by that application to make calls to REST APIs that can verify through that token that an authenticated user is making the call. The REST API can also extract relevant information from the token – such as the user’s identity, permissions or entitlements and custom attributes. The token could also be used by the REST API to retrieve additional information about the user and his or her session context.

Note: if calls are made to REST APIs that are deployed as part of the enterprise application (same EAR file) that contains the ADF or WebCenter Portal application, then the session cookie mechanism ensures that the REST API handles the request in the same [authenticated]session context. In that case, there is no need for a token exchange.

 

Steps described in this article:

  1. Create a managed session bean that can be called upon to generate the JWT Token
  2. Include the token from this session bean in the URL that loads the client web application into the IFrame embedded in the ADF application
  3. Store the token in the web client
  4. Append the token to REST API calls made from the client application
  5. Receive and inspect the token inside the REST API to ensure the authenticated status of the user; extract additional information from the token

As a starting point, we will assume an ADF application for which security has been configured, forcing users accessing the application to login by providing user credentials.

The complete application in a working – though somewhat crude – form, with code that is absolutely not standards-compliant nor production-ready, can be found on GitHub: https://github.com/lucasjellema/adf-embedded-js-client-token-rest.

 

Create a managed session bean that can be called upon to generate the JWT Token

I will use a managed bean to generate the JWT Token, either in session scope (to reuse the token) or in request scope (to generate fresh tokens on demand) .

JDeveloper and WebLogic both ship with libraries that support the generation of JWT Tokens. In a Fusion Web Application the correct libraries are present by default. Any one of these libraries will suffice:

image

I create a new class as the Token Generator:

package nl.amis.portal.view;

import java.util.Date;

import javax.faces.bean.SessionScoped;
import javax.faces.bean.ManagedBean;

import oracle.adf.share.ADFContext;
import oracle.adf.share.security.SecurityContext;

import oracle.security.restsec.jwt.JwtToken;
import java.util.HashMap;
import java.util.Map;
@ManagedBean
@SessionScoped
public class SessionTokenGenerator {
    
    private String token = ";";
    private final String secretKey = "SpecialKeyKeepSecret";
    public SessionTokenGenerator() {
        super();
        ADFContext adfCtx = ADFContext.getCurrent();  
        SecurityContext secCntx = adfCtx.getSecurityContext();  
        String user = secCntx.getUserPrincipal().getName();  
        String _user = secCntx.getUserName();  
        try {
            String jwt = generateJWT(user, "some parameter value - just because we can", _user, secretKey);
            this.token = jwt;
        } catch (Exception e) {
        }
    }

    public String generateJWT(String subject, String extraParam, String extraParam2, String myKey) throws Exception {           
           String result = null;        
           JwtToken jwtToken = new JwtToken();
           //Fill in all the parameters- algorithm, issuer, expiry time, other claims etc
           jwtToken.setAlgorithm(JwtToken.SIGN_ALGORITHM.HS512.toString());
           jwtToken.setType(JwtToken.JWT);
           jwtToken.setClaimParameter("ExtraParam", extraParam);
           jwtToken.setClaimParameter("ExtraParam2", extraParam2);
           long nowMillis = System.currentTimeMillis();
           Date now = new Date(nowMillis);
           jwtToken.setIssueTime(now);
           // expiry = 5 minutes - only for demo purposes; in real life, several hours - equivalent to HttpSession Timeout in web.xml - seems more realistic
           jwtToken.setExpiryTime(new Date(nowMillis + 5*60*1000));
           jwtToken.setSubject(subject);
           jwtToken.setIssuer("ADF_JET_REST_APP");
           // Get the private key and sign the token with a secret key or a private key
           result = jwtToken.signAndSerialize(myKey.getBytes());
           return result;
       }

    public String getToken() {
        return token;
    }
}
Embed the Web Client Application

The ADF Application consists of a main page – index.jsf – that contains a region binding a taskflow; the taskflow in turn contains a page fragment (client-app.jsff) with a panelStretchLayout that holds an inline frame (rendered as an IFrame) loading the web client application.

image

The JWT token (just a long string) has to be included in the URL that loads the client web application into the IFrame. This is easily done by adding an EL Expression in the URL property:

 <af:inlineFrame source="client-web-app/index.xhtml?token=#{sessionTokenGenerator.token}"
                            id="if1" sizing="preferred" styleClass="AFStretchWidth"/>

 

Store the token in the web client

When the client application is loaded, the token can be retrieved from the query parameters. An extremely naive implementation uses an onLoad event trigger on the body object to call a function that reads the token from the query parameters on the window.location.href object and stores it in the session storage:

function getQueryParams() {
    token = getParameterByName('token');
    if (token) {
        document.getElementById('content').innerHTML += '<br>Token was received and saved in the client for future REST calls';
        // Save token to sessionStorage
        sessionStorage.setItem('portalToken', token);
    }
    else 
        document.getElementById('content').innerHTML += '<br>Token was NOT received; you will not be able to use this web application in a meaningful way';
}

function getParameterByName(name, url) {
    if (!url)
        url = window.location.href;
    var regex = new RegExp("[?&]" + name + "(=([^&#]*)|&|#|$)"), results = regex.exec(url);
    if (!results)
        return null;
    if (!results[2])
        return '';
    return decodeURIComponent(results[2].replace(/\+/g, " "));
}

If we wanted to do so, we could parse the token in the client and extract information from it – using a function like this one:

 

function parseJwt(token) {
    var base64Url = token.split('.')[1];
    var base64 = base64Url.replace('-', '+').replace('_', '/');
    return JSON.parse(window.atob(base64));
};

 

Append the token to REST API calls made from the client application

Whenever the client application makes REST API calls, it should include the JWT token in an HTTP Header. Here is example code for making an AJAX style REST API call – with the token included in the Authorization header:

function callServlet() {
    var portalToken = sessionStorage.getItem('portalToken');
    // in this example the REST API runs on the same host and port as the ADF Application; that need not be the case - the following URL is also a good example: 
    // var targetURL = 'http://some.otherhost.com:8123/api/things';
    var targetURL = '/ADF_JET_REST-ViewController-context-root/restproxy/rest-api/person';
    var xhr = new XMLHttpRequest();
    xhr.open('GET', targetURL)
    xhr.setRequestHeader("Authorization", "Bearer " +  portalToken);
    xhr.onload = function () {
        if (xhr.status === 200) {
            alert('Response ' + xhr.responseText);
        }
        else {
            alert('Request failed.  Returned status of ' + xhr.status);
        }
    };
    xhr.send();
}

 

Receive and inspect the token inside the REST API to ensure the authenticated status of the user

Depending on how the REST API is implemented – for example Java with JAX-RS, Node with Express, Python, PHP, C# – the inspection of the token will take a place in a slightly different way.

With JAX-RS based REST APIs running on a Java EE Web Server, one possible approach to inspection of the token is using a ServletFilter. This filter can front the JAX-RS service and stay completely independent of it. By mapping the Servlet Filter to all URL paths on which REST APIs can be accessed, we ensure that these REST APIs can only be accessed by requests that contain valid tokens.

A more simplistic, less elegant approach is to just make the inspection of the token an explicit part of the REST API. The Java code required for both approaches is very similar. Here is the code I used in a simple servlet that sits between the incoming REST API request and the actual REST API as a proxy that verifies the token, sets the CORS headers and does the routing:

 

package nl.amis.portal.view;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;

import java.net.HttpURLConnection;
import java.net.URL;

import javax.servlet.*;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.*;

import javax.ws.rs.core.HttpHeaders;

import oracle.adf.share.ADFContext;
import oracle.adf.share.security.SecurityContext;

import java.util.Date;

import java.util.Map;

import oracle.security.restsec.jwt.JwtException;
import oracle.security.restsec.jwt.JwtToken;
import oracle.security.restsec.jwt.VerifyException;


@WebServlet(name = "RESTProxy", urlPatterns = { "/restproxy/*" })
public class RESTProxy extends HttpServlet {
    private static final String CONTENT_TYPE = "application/json; charset=UTF-8";
    private final String secretKey = "SpecialKeyKeepSecret";


    public void init(ServletConfig config) throws ServletException {
        super.init(config);
    }

    private TokenDetails validateToken(HttpServletRequest request) {
        TokenDetails td = new TokenDetails();
        try {
            boolean tokenAccepted = false;
            boolean tokenValid = false;
            // 1. check if request contains token

            // Get the HTTP Authorization header from the request
            String authorizationHeader = request.getHeader(HttpHeaders.AUTHORIZATION);

            // Extract the token from the HTTP Authorization header
            String tokenString = authorizationHeader.substring("Bearer".length()).trim();

            String jwtToken = "";
            String issuer = "";
            td.setIsTokenPresent(true);

            try {
                JwtToken token = new JwtToken(tokenString);
                // verify whether token was signed with my key
                boolean result = token.verify(secretKey.getBytes());
                if (!result) {
                    td.addMotivation("Token was not signed with correct key");
                } else {
                    td.setIsTokenVerified(true);
                    td.setJwtTokenString(tokenString);
                    tokenAccepted = false;
                }

                // Validate the issued and expiry time stamp.
                if (token.getExpiryTime().after(new Date())) {
                    jwtToken = tokenString;
                    tokenValid = true;
                    td.setIsTokenFresh(true);
                } else {
                    td.addMotivation("Token has expired");
                }

                // Get the issuer from the token
                issuer = token.getIssuer();
                // possibly validate/verify the issuer as well
                
                td.setIsTokenAccepted(td.isIsTokenPresent() && td.isIsTokenFresh() && td.isIsTokenVerified());
                return td;

            } catch (JwtException e) {
                td.addMotivation("No valid token was found in request");

            } catch (VerifyException e) {
                td.addMotivation("Token was not verified (not signed using correct key");

            }
        } catch (Exception e) {
            td.addMotivation("No valid token was found in request");
        }
        return td;
    }

    private void addCORS(HttpServletResponse response) {
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "POST, GET, OPTIONS, DELETE");
        response.setHeader("Access-Control-Max-Age", "3600");
        response.setHeader("Access-Control-Allow-Headers", "x-requested-with");
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        TokenDetails td = validateToken(request);
        if (!td.isIsTokenAccepted()) {
            response.setContentType(CONTENT_TYPE);
            response.setStatus(HttpServletResponse.SC_FORBIDDEN);
            response.addHeader("Refusal-Motivation", td.getMotivation());
            addCORS(response);
            response.getOutputStream().close();
        } else {

            // optionally parse token, extract details for user

            // get URL path for REST call
            String pathInfo = request.getPathInfo();

            // redirect the API call/ call API and return result

            URL url = new URL("http://127.0.0.1:7101/RESTBackend/resources" + pathInfo);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/json");

            if (conn.getResponseCode() != 200) {
                throw new RuntimeException("Failed : HTTP error code : " + conn.getResponseCode());
            }

            BufferedReader br = new BufferedReader(new InputStreamReader((conn.getInputStream())));


            response.setContentType(CONTENT_TYPE);
            // see http://javahonk.com/enable-cors-cross-origin-requests-restful-web-service/
            addCORS(response);

            response.setStatus(conn.getResponseCode());
            RESTProxy.copyStream(conn.getInputStream(), response.getOutputStream());
            response.getOutputStream().close();
        } // token valid so continue

    }


    public static void copyStream(InputStream input, OutputStream output) throws IOException {
        byte[] buffer = new byte[1024]; // Adjust if you want
        int bytesRead;
        while ((bytesRead = input.read(buffer)) != -1) {
            output.write(buffer, 0, bytesRead);
        }
    }

   private class TokenDetails {
        private String jwtTokenString;
        private String motivation;

        private boolean isJSessionEstablished; // Http Session could be reestablished
        private boolean isTokenVerified; // signed with correct key
        private boolean isTokenFresh; // not expired yet
        private boolean isTokenPresent; // is there a token at all
        private boolean isTokenValid; // can it be parsed
        private boolean isTokenIssued; // issued by a trusted token issuer

        private boolean isTokenAccepted = false; // overall conclusion

        ... plus getters and setters

}

 

Running the ADF Application with the Embedded Client Web Application

When  accessing the ADF application in the browser, we are prompted with the login dialog:

image

After successful authentication, the ADF Web Application renders its first page. This includes the Taskflow that contains the Inline Frame that loads the client web application using a URL that contains the token.

image

When the link is clicked in the client web application, the AJAX call is made – the call that has the token included in an Authorization request header. The first time we make the call, the result is shown as returned from the REST API:

image

However, a second call after more than 5 minutes fails:

image

Upon closer inspection of the request, we find the reason: the token has expired:

image

The token based authentication has done a good job.

Similarly, when we try to access the REST API directly – we need to have a valid token or we are unsuccessful:

image
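
For example, a direct call with curl only succeeds when a valid token is passed along; the host and port below are assumptions for a local WebLogic deployment, and the path matches the proxy URL used earlier:

# host and port are examples only
curl -H "Authorization: Bearer <token>" http://localhost:7101/ADF_JET_REST-ViewController-context-root/restproxy/rest-api/person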

 

Inspect token in Node based REST API

REST APIs can be implemented in various technologies. One popular option is Node – using server side JavaScript. Node applications are perfectly capable of doing inspection of JWT tokens – verifying their validity and extracting information from the token. A simple example is shown here – using the NPM module jsonwebtoken:

 

// Handle REST requests (POST and GET) for departments
var express = require('express') //npm install express
  , bodyParser = require('body-parser') // npm install body-parser
  , http = require('http')
  ;

var jwt = require('jsonwebtoken');
var PORT = process.env.PORT || 8123;


const app = express()
  .use(bodyParser.urlencoded({ extended: true }))
  ;

const server = http.createServer(app);

var allowCrossDomain = function (req, res, next) {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE,OPTIONS');
  res.header('Access-Control-Allow-Headers', 'Content-Type');
  res.header('Access-Control-Allow-Credentials', true);
  res.header("Access-Control-Allow-Headers", "Access-Control-Allow-Headers, Origin,Accept, X-Requested-With, Content-Type, Authorization, Access-Control-Request-Method, Access-Control-Request-Headers");
  next();
}

app.use(allowCrossDomain);

server.listen(PORT, function listening() {
  console.log('Listening on %d', server.address().port);
});

app.get('/api/things', function (req, res) {
  // check header or url parameters or post parameters for token
  var error = false;
  var token = req.body.token || req.query.token || req.headers['x-access-token'];
  if (req.headers && req.headers.authorization) {
    var parts = req.headers.authorization.split(' ');
    if (parts.length === 2 && parts[0] === 'Bearer') {
      // two tokens sent in the request
      if (token) {
        error = true;
      }
      token = parts[1];
    }
  }

  // get the decoded payload and header (without verifying the signature yet)
  var decoded = jwt.decode(token, { complete: true });
  var subject = decoded ? decoded.payload.sub : undefined;
  var issuer = decoded ? decoded.payload.iss : undefined;

  // verify key
  var myKey = "SpecialKeyKeepSecret";
  var rejectionMotivation;
  var tokenValid = false;

  jwt.verify(token, myKey, function (err, decoded) {
    if (err) {
      rejectionMotivation = err.name + " - " + err.message;
    } else {
      tokenValid = true;
    }
  });


  if (!tokenValid) {
    res.status(403);
    res.header("Refusal-Motivation", rejectionMotivation);
    res.end();
  } else {
      // do the thing the REST API is supposed to do
      var things = { "collection": [{ "name": "bicycle" }, { "name": "table" }, { "name": "car" }] }

      res.status(200);
      res.header('Content-Type', 'application/json');
      res.end(JSON.stringify(things));
  }
});
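To try this API out, the token can be passed as a Bearer token from any HTTP client. A minimal Node client sketch against the API shown above (the host is an assumption, the port defaults to 8123 as in the snippet, and the token value is a placeholder):

// sketch: call the /api/things endpoint with a previously obtained JWT as Bearer token
var http = require('http');

var token = '<paste a valid JWT here>';   // placeholder

var options = {
  host: 'localhost',                      // assumption: API runs locally
  port: process.env.PORT || 8123,
  path: '/api/things',
  headers: { 'Authorization': 'Bearer ' + token }
};

http.get(options, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log('Status:', res.statusCode);
    console.log('Refusal-Motivation header (if any):', res.headers['refusal-motivation']);
    console.log('Body:', body);
  });
}).on('error', function (err) { console.error(err); });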

The post Implementing Authentication for REST API calls from JET Applications embedded in ADF or WebCenter Portal using JSON Web Token (JWT) appeared first on AMIS Oracle and Java Blog.

Microservices and Updating Data Bound Context on Oracle Cloud with Application Container and Event Hub (plus DBaaS and MongoDB)–Part Two

Sun, 2017-11-19 15:22

This article describes – in two installments – how events are used to communicate a change in a data record owned by the Customer microservice to consumers such as the Order microservice that has some details about the modified customer in its bound context. The first installment described the implementation of the Customer microservice – using MongoDB for its private data store and producing events to Event Hub cloud service to inform other microservices about updates in customer records. In the installment you are reading right now, the Order microservice is introduced – implemented in Node, running on Application Container Cloud, bound to Oracle Database in the cloud and consuming events from Event Hub. These events include the CustomerModified event published by the Customer microservice and used by the Order microservice to synchronize its bound context.

image

The provisioning and configuration of the Oracle Public Cloud services used in this article is described in detail in this article: Prepare and link bind Oracle Database Cloud, Application Container Cloud, Application Container Cache and Event Hub.

The sources for this article are available on GitHub: https://github.com/lucasjellema/order-data-demo-devoxx .

The setup described in this article was used as a demonstration during my presentation on “50 Shades of Data” during Devoxx Morocco (14-16 November, Casablanca, Morocco); the slidedeck for this session is available here:

https://www.slideshare.net/lucasjellema/50-shades-of-data-how-when-and-why-bigrelationalnosqlelasticeventcqrs-devoxx-maroc-november-2017-including-detailed-demo-screenshots

The Order microservice

The Order microservice is implemented in Node and deployed on Oracle Application Container cloud. It has service bindings to Database Cloud (for its private data store with Orders and associated data bound context) and Event Hub (for consuming events such as the CustomerModified event).

image

The Node runtime provided by Application Container Cloud includes an Oracle Database Client and the Oracle DB driver for Node. This means that connecting to and interacting with an Oracle Database can be done very easily.

The Orders microservice supports the REST call GET /order-api/orders which returns a JSON document with all orders:

image

The implementation of this functionality is straightforward, using Node, Express and the Oracle Database driver for Node:

CODE FOR RETRIEVING ORDERS

var express = require('express')
  , http = require('http');

var bodyParser = require('body-parser') // npm install body-parser
var ordersAPI = require( "./orders-api.js" );

var app = express();
var server = http.createServer(app);

var PORT = process.env.PORT || 3000;
var APP_VERSION = '0.0.4.06';

var allowCrossDomain = function(req, res, next) {
    res.header('Access-Control-Allow-Origin', '*');
    res.header('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE');
    res.header('Access-Control-Allow-Headers', 'Content-Type');
    res.header('Access-Control-Allow-Credentials', true); 
    next();
}

server.listen(PORT, function () {
  console.log('Server running, version '+APP_VERSION+', Express is listening... at '+PORT+" for Orders Data API");
});

app.use(bodyParser.json()); // for parsing application/json
app.use(allowCrossDomain);

ordersAPI.registerListeners(app);

And the OrdersAPI:

var oracledb = require('oracledb');

var ordersAPI = module.exports;
var apiURL = "/order-api";

ordersAPI.registerListeners =
  function (app) {
    app.get(apiURL + '/orders', function (req, res) {
      handleGetOrders(req, res);
    });
  }//registerListeners

handleGetOrders = function (req, res) {
  getOrdersFromDBTable(req, res);
}

transformOrders = function (orders) {
  return orders.map(function (o) {
    var order = {};
    order.id = o[0];
    order.customer_id = o[1];
    order.customer_name = o[2];
    order.status = o[3];
    order.shipping_destination = o[4];
    return order;
  })
}


getOrdersFromDBTable = function (req, res) {
  handleDatabaseOperation(req, res, function (request, response, connection) {
    var selectStatement = "select id, customer_id, customer_name, status , shipping_destination from dvx_orders order by last_updated_timestamp";
    connection.execute(selectStatement, {}
      , function (err, result) {
        if (err) {
          console.error('Error executing query: ' + err.message);
          response.writeHead(500, { 'Content-Type': 'application/json' });
          response.end(JSON.stringify({ "error": err.message }));
        } else {
          try {
            var orders = result.rows;
            orders = transformOrders(orders);
            response.writeHead(200, { 'Content-Type': 'application/json' });
            response.end(JSON.stringify(orders));
          } catch (e) {
            console.error("Exception in callback from execute " + e);
          }
        }
        doRelease(connection); // release the connection once the result has been processed
      });
  })
}//getOrdersFromDBTable


function handleDatabaseOperation(request, response, callback) {
  var connectString = process.env.DBAAS_DEFAULT_CONNECT_DESCRIPTOR;
  oracledb.getConnection(
    {
      user:  process.env.DBAAS_USER_NAME,
      password: process.env.DBAAS_USER_PASSWORD ,
      connectString: connectString
    },
    function (err, connection) {
      if (err) {
        console.log('Error in acquiring connection ...');
        console.log('Error message ' + err.message);
        return;
      }
      // do with the connection whatever was supposed to be done
      console.log('Connection acquired ; go execute - call callback ');
      callback(request, response, connection);
    });
}//handleDatabaseOperation


function doRelease(connection) {
  connection.release(
    function (err) {
      if (err) {
        console.error(err.message);
      }
    });
}

function doClose(connection, resultSet) {
  resultSet.close(
    function (err) {
      if (err) { console.error(err.message); }
      doRelease(connection);
    });
}

Creating new orders is supported through a POST operation on the REST API exposed by the Order microservice:

image


The implementation in the Node application is fairly straightforward – see below:

CODE FOR CREATING ORDERS – added to the OrdersAPI module:

var eventBusPublisher = require("./EventPublisher.js");
var topicName = "a516817-devoxx-topic"; // the fully qualified name of the topic created on Event Hub

ordersAPI.registerListeners =
  function (app) {
    app.get(apiURL + '/orders', function (req, res) {
      handleGetOrders(req, res);
    });
    app.post(apiURL + '/*', function (req, res) {
      handlePost(req, res);
    });
  }//registerListeners

handlePost =
  function (req, res) {
    if (req.url.indexOf('/rest/') > -1) {
      ordersAPI.handleGet(req, res);
    } else {
      var orderId = uuidv4();
      var order = req.body;
      order.id = orderId;
      order.status = "PENDING";
      insertOrderIntoDatabase(order, req, res,
        function (request, response, order, rslt) {

          eventBusPublisher.publishEvent("NewOrderEvent", {
            "eventType": "NewOrder"
            ,"order": order
            , "module": "order.microservice"
            , "timestamp": Date.now()
          }, topicName);

          var result = {
            "description": `Order has been creatd with id=${order.id}`
            , "details": "Published event = not yet created in Database " + JSON.stringify(order)
          }
          response.writeHead(200, { 'Content-Type': 'application/json' });
          response.end(JSON.stringify(result));

        });//insertOrderIntoDatabase
    }
  }//ordersAPI.handlePost



// produce unique identifier
function uuidv4() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
    var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
    return v.toString(16);
  });
}	


The function handlePost() makes a call to the module EventBusPublisher, to publish a NewOrder event on the Event Hub. The code for this module is shown below:

var kafka = require('kafka-node');

var kafkaConnectDescriptor = "129.xx.yy.zz";

var Producer = kafka.Producer;
var KeyedMessage = kafka.KeyedMessage;

var APP_VERSION = "0.8.3";
var APP_NAME = "EventBusPublisher";

var producer;
var client;

function initializeKafkaProducer(attempt) {
  try {
    client = new kafka.Client(kafkaConnectDescriptor);
    producer = new Producer(client);
    producer.on('ready', function () {
      console.log("Producer is ready in " + APP_NAME);
    });
    producer.on('error', function (err) {
      console.log("failed to create the client or the producer " + JSON.stringify(err));
    })
  }
  catch (e) {
    console.log("Exception in initializeKafkaProducer" + e);
    console.log("Exception in initializeKafkaProducer" + JSON.stringify(e));
    console.log("Try again in 5 seconds");
    setTimeout(initializeKafkaProducer, 5000, ++attempt);
  }
}//initializeKafkaProducer
initializeKafkaProducer(1);

var eventPublisher = module.exports;

eventPublisher.publishEvent = function (eventKey, event, topic) {
  km = new KeyedMessage(eventKey, JSON.stringify(event));
  payloads = [
    { topic: topic, messages: [km], partition: 0 }
  ];
  producer.send(payloads, function (err, data) {
    if (err) {
      console.error("Failed to publish event with key " + eventKey + " to topic " + topic + " :" + JSON.stringify(err));
    }
    console.log("Published event with key " + eventKey + " to topic " + topic + " :" + JSON.stringify(data));
  });
}

Updating the bound Context

Suppose the details for Customer with identifier 25 are going to be updated, through the Customer microservice. That would mean that the customer record in the MongoDB database will be updated. However, that is not the only place where information about the Customer is recorded. Because we want our microservice to be independent and we can only properly work with the Order microservice if we have some information about elements associated with the order – such as the customer name and some details about each of the products – we have defined the data bound context of the Order microservice to include the customer name. As you can see in the next screenshot, we have an order record in the data store of the Order microservice for customer 25 and it contains the name of the customer.

image

That means that when the Customer microservice records a change in the name of the customer, we should somehow update the bound context of the Order microservice. And that is what we will do using the CustomerModified event, produced by the Customer microservice and consumed by the Order microservice.

image

The REST call to update the name of customer 25 – from Joachim to William:

image

The customer record in MongoDB is updated

image

and subsequently a CustomerModified event is produced to the devoxx-topic on Event Hub:

image

This event is consumed by the Order microservice and subsequently it triggers an update of the DVX_ORDERS table in the Oracle Database cloud instance. The code responsible for consuming the event and updating the database is shown below:

CODE FOR CONSUMING THE EVENT – first the EventBusListener module

var kafka = require('kafka-node');
var async = require('async'); // npm install async - used to close the consumer on SIGINT

var client;

var APP_VERSION = "0.1.2"
var APP_NAME = "EventBusListener"

var eventListenerAPI = module.exports;

var Consumer = kafka.Consumer;

var subscribers = [];

eventListenerAPI.subscribeToEvents = function (callback) {
  subscribers.push(callback);
}

var topicName = "a516817-devoxx-topic";
var KAFKA_ZK_SERVER_PORT = 2181;
var EVENT_HUB_PUBLIC_IP = '129.xx.yy.zz';

var consumerOptions = {
    host: EVENT_HUB_PUBLIC_IP + ':' + KAFKA_ZK_SERVER_PORT,
    groupId: 'consume-order-events-for-devoxx-app',
    sessionTimeout: 15000,
    protocol: ['roundrobin'],
    fromOffset: 'earliest' // equivalent of auto.offset.reset valid values are 'none', 'latest', 'earliest'
};

var topics = [topicName];
var consumerGroup = new kafka.ConsumerGroup(Object.assign({ id: 'consumer1' }, consumerOptions), topics);
consumerGroup.on('error', onError);
consumerGroup.on('message', onMessage);



function onMessage(message) {
    subscribers.forEach((subscriber) => {
        subscriber(message.value);
    })
}

function onError(error) {
    console.error(error);
    console.error(error.stack);
}

process.once('SIGINT', function () {
    async.each([consumerGroup], function (consumer, callback) {
        consumer.close(true, callback);
    });
});

And the code in module OrdersAPI that imports the module, registers the event listener and handles the event:

var eventBusListener = require("./EventListener.js");

eventBusListener.subscribeToEvents(
  (message) => {
    console.log("Received event from event hub");
    try {
    var event = JSON.parse(message);
    if (event.eventType=="CustomerModified") {
      console.log(`Details for a customer have been modified and the bounded context for order should be updated accordingly ${event.customer.id}`);
      updateCustomerDetailsInOrders( event.customer.id, event.customer)
    }
    } catch (err) {
      console.log("Parsing event failed "+err);
    }
  }
);

function updateCustomerDetailsInOrders( customerId, customer) {
  console.log(`All orders for customer ${customerId} will be updated to new customer name ${customer.name}`);
  console.log('updateCustomerDetailsInOrders');
  handleDatabaseOperation("req", "res", function (request, response, connection) {
    var bindvars = [customer.name, customerId];
    var updateStatement = `update dvx_orders set customer_name = :customerName where customer_id = :customerId` ;
    connection.execute(updateStatement, bindvars, function (err, result) {
      if (err) {
        console.error('error in updateCustomerDetailsInOrders ' + err.message);
        doRelease(connection);
        // note: there is no callback to invoke here - the event is consumed asynchronously
      }
      else {
        connection.commit(function (error) {
          if (error) console.log(`After commit - error = ${error}`);
          doRelease(connection);
          // there is no callback:  callback(request, response, order, { "summary": "Update Status succeeded", "details": result });
        });
      }//else
    }); //callback for handleDatabaseOperation
  });//handleDatabaseOperation 
}// updateCustomerDetailsInOrders


When we check the current set of Orders, we will find that the customer name associated with the order(s) for customer 25 is now William, instead of Joachim or Jochem.

image

We can check directly in the Oracle Database Table DVX_ORDERS to find the customer name updated for both orders for customer 25:

image

The post Microservices and Updating Data Bound Context on Oracle Cloud with Application Container and Event Hub (plus DBaaS and MongoDB)–Part Two appeared first on AMIS Oracle and Java Blog.

Microservices and Updating Data Bound Context on Oracle Cloud with Application Container and Event Hub (plus DBaaS and MongoDB)–Part One

Sun, 2017-11-19 06:48

This article describes – in two installments – how events are used to communicate a change in a data record owned by the Customer microservice to consumers such as the Order microservice that has some details about the modified customer in its bound context.

image

The microservices are implemented using Node. The Customer microservice uses a cloud based MongoDB instance as its data store. The Order microservice runs on Oracle Application Container Cloud and has a service binding to an Oracle DBaaS (aka Database Cloud) instance. The Oracle Event Hub Cloud is used; it has a Kafka Topic that microservices on the Oracle Cloud as well as anywhere else can use to produce events to and consume events from. The Event Hub is used to communicate events that describe changes in the data owned by each of the microservices – allowing other microservices to update their bound context.

The provisioning and configuration of the Oracle Public Cloud services used in this article is described in detail in this article: Prepare and link bind Oracle Database Cloud, Application Container Cloud, Application Container Cache and Event Hub.

The sources for this article are available on GitHub: https://github.com/lucasjellema/order-data-demo-devoxx .

The setup described in this article was used as a demonstration during my presentation on “50 Shades of Data” during Devoxx Morocco (14-16 November, Casablanca, Morocco); the slidedeck for this session is available here:

https://www.slideshare.net/lucasjellema/50-shades-of-data-how-when-and-why-bigrelationalnosqlelasticeventcqrs-devoxx-maroc-november-2017-including-detailed-demo-screenshots

 

The Customer Microservice

Implemented in Node (JS) using a MongoDB instance (a free cloud based instance on MLab) for its private data store, running locally and engaging with Oracle Event Hub to produce a CustomerModified event in case of a change in a customer record. The Customer Microservice exposes a REST API with a single resource (customer) and operations to retrieve, list, create and update customer(s).

The MongoDB database was created on MLab (https://mlab.com/) – a MongoDB hosting service with a free tier up to 500 MB. I created a database called world and in it prepared a collection called customers.

image

image

image

The customers can be listed through REST API calls to the customer microservice:

image

and new customers can be created through the API:

image

Which of course results in a new record in MongoDB:

image

Clearly the customer microservice has associated state (in the MongoDB database) but is itself stateless. It can be stopped and restarted and it will still be able to produce customer records. Multiple instances of the microservice could be running and they would each have access to the same data. There could be some concurrency conflicts that we currently do not really cater for.
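One possible way to cater for such conflicts – not implemented in the demo – is optimistic locking with a version field on each customer document. A minimal sketch, assuming a hypothetical numeric "version" field and a helper function that is not part of the actual code:

// sketch only - each customer document is assumed to carry a numeric "version" field
function updateCustomerOptimistically(db, customer, callback) {
  var expectedVersion = customer.version;
  var changes = Object.assign({}, customer);
  delete changes.version;                              // version is managed through $inc below
  db.collection('customers').findAndModify(
    { "id": customer.id, "version": expectedVersion }  // only match when nobody else updated it in the meantime
    , [['_id', 'asc']]
    , { $set: changes, $inc: { version: 1 } }
    , { new: true }
    , function (err, result) {
        if (err) return callback(err);
        if (!result.value) {
          // no document matched: another update got in first
          return callback(new Error('Concurrent modification detected for customer ' + customer.id));
        }
        callback(null, result.value);
      });
}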

The salient code for implementing the REST operation for retrieving the customers is the following:

var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');

var mongodbHost = 'ds139719.mlab.com';
var mongodbPort = '39791';
var authenticate = 'mongousername:mongopassword@'
var mongodbDatabase = 'world';

var mongoDBUrl = 'mongodb://' + authenticate + mongodbHost + ':' + mongodbPort + '/' + mongodbDatabase;

var http = require('http'),
	express = require('express'),
	bodyParser = require('body-parser'); // npm install body-parser

var moduleName = "customer-ms";
var PORT = process.env.PORT || 5118;
var appVersion = "1.0.2";

var app = express();
var server = http.createServer(app);

server.listen(PORT, function () {
	console.log('Server running, version ' + appVersion + ', Express is listening... at ' + PORT + " for Customer Microservice");
});

app.use(bodyParser.urlencoded({ extended: true }));
app.use(bodyParser.json({ "type": '*/*', "inflate": "true" }));

app.use(function (request, response, next) {
	response.setHeader('Access-Control-Allow-Origin', '*');
	response.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
	response.setHeader('Access-Control-Allow-Headers', 'X-Requested-With,content-type');
	response.setHeader('Access-Control-Allow-Credentials', true);
	next();
});

app.get('/customer', function (req, res) {
	// find customers in database
	MongoClient.connect(mongoDBUrl, function (err, db) {
		var nameOfCollection = "customers"
		db.collection(nameOfCollection).find(
			 function (err, customersCursor) {
				if (err) {
					console.log(err);
				} else {
					// for each cursor element, add a customer to the result
					customers = {"customers": []};
					customersCursor.toArray(function (err, cmrs)   { 
						customers.customers = cmrs;
						res.statusCode = 200;
						res.setHeader('Content-Type', 'application/json');
						res.setHeader('MyReply', 'retrieved all customers');
						res.send( customers);
						});
				}
			})
	}) //connect
})

Pretty straightforward code for setting up an Express based listener for GET requests at the specified port and URL path /customer. When the request comes in, a connection to the MongoDB instance is initialized, all elements from the customers collection are retrieved and returned in a single JSON document.

Here is an example of a call to this operation in the REST API from Postman:

SNAGHTML151dda41
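The same call can be made from any HTTP client; a minimal Node sketch against the locally running Customer microservice, assuming the default port 5118 from the code above:

var http = require('http');

http.get({ host: 'localhost', port: 5118, path: '/customer' }, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log('Status: ' + res.statusCode + ', MyReply: ' + res.headers['myreply']);
    console.log('Number of customers: ' + JSON.parse(body).customers.length);
  });
}).on('error', function (err) { console.error(err); });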

 

Inserting a new customer requires just a small additional method in the Node application, reacting to PUT requests:

app.put('/customer', function (req, res) {
	var customer = req.body;
	MongoClient.connect(mongoDBUrl, function (err, db) {
		var nameOfCollection = "customers"
		db.collection(nameOfCollection).insertMany([customer], function (err, r) {
			res.statusCode = 200;
			res.setHeader('Content-Type', 'application/json');
			res.setHeader('MyReply', 'Create or Updated the Customer ');
			res.send(customer);
		})//insertMany
	}//connect
	)
})

Updating an existing customer – to handle for example a name change, as we will do in just a moment – is very similar, triggered by POST requests:

var eventBusPublisher = require("./EventPublisher.js");

app.post('/customer/:customerId', function (req, res) {
    var customerId = req.params['customerId'];
    var customer = req.body;
    customer.id = customerId;
    // find customer in database and update
    MongoClient.connect(mongoDBUrl, function (err, db) {
        var nameOfCollection = "customers"
        db.collection(nameOfCollection).findAndModify(
            { "id": customerId }
            , [['_id', 'asc']]  // sort order
            , { $set: customer }
            , {}
            , function (err, updatedCustomer) {
                if (err) {
                    console.log(err);
                } else {
                    console.log("Customer updated :" + JSON.stringify(updatedCustomer));
                    // Now publish an event of type CustomerModified on Event Hub Cloud Service
                    eventBusPublisher.publishEvent("CustomerModified", {
                        "eventType": "CustomerModified"
                        , "customer": customer
                        , "module": "customer.microservice"
                        , "timestamp": Date.now()
                    }, topicName);
                    // and compose the HTTP Response
                    res.statusCode = 200;
                    res.setHeader('Content-Type', 'application/json');
                    res.setHeader('MyReply', 'Updated the Customer and published event on Event Hub - with  id -  ' + customerId);
                    res.send(customer);
                }
            })
    }) //connect
})

There is something different and important about this snippet. After the MongoDB findAndModify operation returns, a call is made to a local module EventPublisher. This module handles the communication to the Event Hub [Cloud Service]. The salient code in this module is as follows:

var kafka = require('kafka-node');
// from the Oracle Event Hub - Platform Cluster Connect Descriptor
var kafkaConnectDescriptor = "129.xx.yy.zz";

var Producer = kafka.Producer;
var KeyedMessage = kafka.KeyedMessage;

var APP_VERSION = "0.8.3";
var APP_NAME = "EventBusPublisher";
console.log("Initialized module " + APP_NAME + " version " + APP_VERSION);
var producer;
var client;

function initializeKafkaProducer(attempt) {
  try {
    console.log(`Try to initialize Kafka Client at ${kafkaConnectDescriptor} and Producer, attempt ${attempt}`);
    client = new kafka.Client(kafkaConnectDescriptor);
    producer = new Producer(client);
    producer.on('ready', function () {
      console.log("Producer is ready in " + APP_NAME);
    });
    producer.on('error', function (err) {
      console.log("failed to create the client or the producer " + JSON.stringify(err));
    })
  }
  catch (e) {
    console.log("Exception in initializeKafkaProducer" + JSON.stringify(e));
    // try again in 5 secs
    setTimeout(initializeKafkaProducer, 5000, ++attempt);
  }
}//initializeKafkaProducer
initializeKafkaProducer(1);

var eventPublisher = module.exports;

eventPublisher.publishEvent = function (eventKey, event, topic) {
  km = new KeyedMessage(eventKey, JSON.stringify(event));
  payloads = [
    { topic: topic, messages: [km], partition: 0 }
  ];
  producer.send(payloads, function (err, data) {
    if (err) {
      console.error("Failed to publish event with key " + eventKey + " to topic " + topic + " :" + JSON.stringify(err));
    }
    console.log("Published event with key " + eventKey + " to topic " + topic + " :" + JSON.stringify(data));
  });
} // publishEvent   

 

At this point, we have implemented the following situation:

image

We can update the name of a customer – through a call to the REST API of the Customer microservice. The customer record is updated in the MongoDB database and a CustomerModified event is published to the Event Hub's devoxx-topic topic.

image

image

image

In the next installment, we will implement the Order microservice that runs on the Oracle Application Container cloud, uses a cloud database (DBaaS instance) and consumes CustomerModified events from Event Hub.

 

 

 

The post Microservices and Updating Data Bound Context on Oracle Cloud with Application Container and Event Hub (plus DBaaS and MongoDB)–Part One appeared first on AMIS Oracle and Java Blog.

Prepare and link bind Oracle Database Cloud, Application Container Cloud, Application Container Cache and Event Hub

Sun, 2017-11-19 00:20

A fairly common combination of Oracle Public Cloud services that I use together – for example for the implementation of microservices – is DBaaS, Application Container Cloud, Application [Container] Cache and Event Hub. In this article, I show the sequence of steps I went through in the Oracle Public Cloud console a few days back to prepare a demo environment for my presentations at Devoxx Morocco in Casablanca. Alternatively, I could have used the command line psm tool and a few straightforward scripts to create the cloud environments. The set up I set out to create looks like this:

image

Several Node applications will run on Application Container Cloud. Each will have service bindings to the same Application Container Cache (a black box powered by Oracle Coherence), a specific schema in a designated pluggable database in an Oracle Database instance and [a specific topic on] Event Hub (with Apache Kafka inside). The applications on Application Container Cloud need public IP addresses, as does the Topic on Event Hub. For my convenience in this demo, I also want to be able to access the database directly from my laptop on a public IP address.

This article will show the steps for provisioning the following cloud services – and show where applicable how to enable network access from the public internet:

  • Database Cloud
  • Application Container Cloud
  • Event Hub
  • Application Container Cache

The article also shows how service bindings are created for applications on Application Container Cloud to the cache, the Event Hub and the DBaaS instance.

Database Cloud

image

image

image

image

image

After some time, I received an email, informing me that creation of the database instance was complete:

image

This is reflected in the service console:

image

I can drill down on the service instance

image

and check out the details – such as the 150 GB of storage that has been allocated (a bit excessive for my little demo environment) and the public IP address that has been assigned.

In order to enable network access from the public internet to the database on port 1521, I need to enable network access rules.

image

In this case, enable the access rule for port 1521:

image

image

At this point, I can check for example from SQL Developer running on my laptop if the database cloud instance is indeed accessible:

image

Note the composition of the Service name – from the DB Name, the name of the identity domain and the fixed string oraclecloud.internal.
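For reference, this is how that service name is composed; a small sketch in Node where the DB name, identity domain and IP address are placeholders for your own values:

// placeholders - replace with the values of your own DBaaS instance and identity domain
var dbName = 'demoDB';
var identityDomain = 'myidentitydomain';
var publicIp = '129.xx.yy.zz';

var serviceName = dbName + '.' + identityDomain + '.oraclecloud.internal';
var connectString = publicIp + ':1521/' + serviceName;
console.log(connectString); // e.g. 129.xx.yy.zz:1521/demoDB.myidentitydomain.oraclecloud.internal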

Similarly, connecting through SQLcl shows that the database in the cloud is up and running and accessible:

image

Application Container Cloud

image

I create an application [container] for the Node runtime:

image

In this case, I have uploaded a barebones node application – packaged in a zip-file order-ms.zip – along with a manifest.json file that describes several things, among which how to run this application (“node order-ms.js”)
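A manifest.json for such a Node application can be very small; a sketch of what it could look like – the exact file used in the demo may differ, and the runtime version shown is an assumption:

{
  "runtime": { "majorVersion": "6" },
  "command": "node order-ms.js",
  "notes": "Order microservice - barebones version"
}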

image

When the upload is complete, the creation of the application commences

image

image

image

Now it is done creating:

image

and the current state of the application can be inspected:

image

The URL for the application has been assigned and is publicly accessible:

image

I can make a test call to the order-ms application:

image

To easily access the logging from the application container – which is written to my Storage Cloud environment – I use Cloudberry for OpenStack (see blog article Graphical file explorer tool on top of Oracle Storage Cloud Service – CloudBerry for easy file inspection and manipulation) which has support for Oracle Storage Cloud:

image

And now the log files are accessible in the desktop tool:

image

At this point, we are midway to realizing the environment setup:

image 

Next is the service binding from Application Container to DBaaS – for injecting the database access details into the Node application – without having to explicitly hard code the database name, network address and other access details in the Node application itself. The Service Binding produces environment variables for these details that the application can simply read.
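A minimal sketch of how the Node application can pick up these environment variables, assuming the variable names used elsewhere in the demo code (DBAAS_DEFAULT_CONNECT_DESCRIPTOR, DBAAS_USER_NAME, DBAAS_USER_PASSWORD); the query is just an example:

var oracledb = require('oracledb'); // the Oracle DB driver available in the ACC Node runtime

oracledb.getConnection(
  {
    user: process.env.DBAAS_USER_NAME,
    password: process.env.DBAAS_USER_PASSWORD,
    connectString: process.env.DBAAS_DEFAULT_CONNECT_DESCRIPTOR
  },
  function (err, connection) {
    if (err) { return console.error('Could not connect: ' + err.message); }
    connection.execute('select sysdate from dual', function (err, result) {
      if (err) { console.error(err.message); } else { console.log(result.rows); }
      connection.release(function () { });
    });
  });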

image

image

image

The application has to be restarted in order to have changes in service bindings or environment variables applied.

Event Hub and Topic

This earlier blog article – Setting up Oracle Event Hub (Apache Kafka) Cloud Service and Pub & Sub from local Node Kafka client – describes the provisioning of the Event Hub Platform service in detail. In this particular case, I used one of the predefined stacks to create an environment with Event Hub and Big Data Compute Cloud through the Oracle Cloud Stack Manager:

image

image

image

image

image

Unfortunately, creation of this stack ran for many hours (more than seven) and ultimately failed. It did however provision the Event Hub Platform – a very generously sized Apache Kafka Cluster.

image

image

The next steps for me are to create a Topic on this Platform and to allow access to the platform from the public internet so that Kafka clients can publish to and consume from this topic.

Access rules are required for the Zookeeper Port (2181) and the Kafka Server port (6667).

image

image

image

Next, create a Topic devoxx-topic:

image

image

image

image

Details for the new topic – including its fully qualified name (including the identity domain name, which suggests some sort of multitenancy going on under the covers):

image

This where we are now with the environment:

image

The next step is creating a service binding from the order-ms application on Application Container Cloud to the Event Hub instance:

image

image

And now the event hub details are available inside the Node application through a few additional environment variables, based on this service binding:

image

Application Container Cache

The final service required is the Application Container Cache – an in-memory grid that acts as a very fast key-value store. Using simple HTTP calls we can put stuff (such as JSON documents, simple values, images and other binary blocks) in the cache for safe keeping across multiple requests, multiple instances of an application and also multiple applications. The cache can be used to hold the closest thing to session state or application state in our stateless microservices architecture.
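A rough sketch of such an HTTP call from a Node application. Note that the host name (taken here from a CACHING_INTERNAL_CACHE_URL environment variable), the port and the /ccs/{cache}/{key} URL format are assumptions based on the Application Cache documentation, not something demonstrated in this article:

var http = require('http');

// assumptions: cache host from the service binding, REST API on port 8080, /ccs/{cache}/{key} paths
var cacheHost = process.env.CACHING_INTERNAL_CACHE_URL || 'mycache';

var req = http.request(
  { host: cacheHost, port: 8080, path: '/ccs/demo-cache/customer-25', method: 'PUT',
    headers: { 'Content-Type': 'application/json' } },
  function (res) { console.log('PUT returned status ' + res.statusCode); });
req.end(JSON.stringify({ id: 25, name: 'William' }));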

image

image

image

image

And now the cache is ready:

image

Note: the cache cannot be accessed directly outside the identity domain in the Oracle public cloud. In fact, only applications running on the Application Container Cloud can access this cache.

A service binding to the cache from an application on Application Container cloud can be created when the application is first created:

image

or in the more usual way.

That means that now the environment we need has been fully provisioned and bound and is available to run our services on.

image

The post Prepare and link bind Oracle Database Cloud, Application Container Cloud, Application Container Cache and Event Hub appeared first on AMIS Oracle and Java Blog.

Online Videos with Lucas Jellema–Live recording of Talks, Interviews and Stuff

Sat, 2017-11-18 11:58

An overview of some of my recent recordings:

expected soon:

November 2017 – Oracle Developer Community Podcast  What’s Hot? Tech Trends That Made a Real Difference in 2017 (with Chris Richardson, Frank Munz, Pratik Patel, Lonneke Dikmans, Bob Rhubart and Lucas Jellema)  – https://blogs.oracle.com/developers/podcast-tech-trends-that-made-a-real-difference-in-2017

November 2nd – Oracle Developer Community Two Minute Tech Tip – No Excuses: Get Hands-On Experience With New Technologies – https://www.youtube.com/watch?v=NrfrWMq0m9Y

image

October 3rd – Oracle OpenWorld DevLive – Interview with Bob Rhubart (Oracle Developer Community) on Kafka Streams, Java Cloud, PaaS Integration – https://www.youtube.com/watch?v=L_mhNCT2nao

image

October, Oracle Code/JavaOne San Francisco – Real Time UI with Apache Kafka Streaming Analytics of Fast Data and Server Push – https://www.youtube.com/watch?v=izTuO3IUBBY 

image

August 23rd – APACOUC (Asia Pacific Oracle User Council) Webinar Tour 2017 –  Modern DevOps across Technologies, On Premises and Clouds – https://www.youtube.com/watch?v=q8-wvvod85U

August 14th – APACOUC (Asia Pacific Oracle User Council) Webinar Tour 2017 – The Oracle Application Container Cloud as the Microservices Platform – https://youtu.be/LkMomfG6rv4

July 7th – APACOUC (Asia Pacific Oracle User Council) Webinar Tour 2017 – The Art of Intelligence – A Practical Introduction Machine Learning – https://youtu.be/XmqQhDsJnhY

June – Oracle Code Brussels – DevLive: Get on the (Event) Bus! with Lucas Jellema – https://www.youtube.com/watch?v=4raJRNFRJFk 

image

April 20th – Oracle Code London – Event Bus as Backbone for Decoupled Microservice Choreography – https://www.youtube.com/watch?v=dRd-QggXqiA

image

April 20th – Oracle Code London – DevLive: Lucas Jellema on Decoupled Microservices with Event Bus – https://www.youtube.com/watch?v=T0gZhzzu5lg

image

Older Resources

October 2015 – 2 Minute Tech Tip The Evolution of Flashback in Oracle Database – https://www.youtube.com/watch?v=WOcsYtX69N8

January 2015 – Interviewing Simone Geib (Oracle SOA Suite Product Manager) – https://www.youtube.com/watch?v=MrtpAW9aOHQ 

September 2014 – 2 Minute Tech Tip – Vagrant, Puppet, Docker, and Packer – https://www.youtube.com/watch?v=36ZmfLMFPJI

October 2013 – Interview with Bob Rhubart on SOA, Agile, DevOps, and Transformation – https://www.youtube.com/watch?v=rtiwGqmzmWo

image 

March 2013 – On User Experience – with Bob Rhubart & Jeremy Ashley – https://www.youtube.com/watch?v=8Jm_cVCoQ3o


The post Online Videos with Lucas Jellema–Live recording of Talks, Interviews and Stuff appeared first on AMIS Oracle and Java Blog.

Run Oracle Database in Docker using prebaked image from Oracle Container Registry–a two minute guide

Sat, 2017-11-18 05:38

image

This article will show how to run an Oracle Database on a Docker host using the prebaked images on the Oracle Container Registry. It is my expectation that it takes me very little manual effort to run the full 12.2.0.1 Oracle Enterprise Database – just pull and run the Docker image. Once it is running, I get the usual Docker benefits such as clean environment management, linking from other containers, quick stop and start, running scripts inside the container etc.

The minimum requirements for the container are 8 GB of disk space and 2 GB of memory. There is a slim alternative that requires fewer resources: the slim (12.2.0.1-slim) version of EE does not have support for Analytics, Oracle R, Oracle Label Security, Oracle Text, Oracle Application Express and Oracle DataVault. I am not sure yet how much that shaves off the resource requirements.

My recent article Quick introduction to Oracle Container Registry–running one of Oracle’s prebaked images explained the first steps for getting started with Oracle Container Registry, including how to sign up and accept terms and conditions for individual images.

Once that is out of the way, we can get going with running the database.

The steps are:

  1. start docker
  2. login to Oracle Container Registry
  3. pull image for Oracle Database – I will use the enterprise edition database image in this article
  4. run a docker container based on that image
  5. start interacting with the database, for example from SQLcl or SQL Developer.

In terms of work, it will take less than two minutes of your time. The time before the database is running is mainly determined by the time it takes to download the image. After that, running the container takes just a few dozen seconds.

The Oracle Database images are published on the website for the Container Registry:

image

image

image

Copy the docker pull command in the upper right hand corner to the clipboard. It is also worth remembering the docker run command for running the database image.

Note that this webpage contains more useful information:

  • how to run SQL scripts from within the container
  • how to expose the database [port] outside the container
  • how to specify SID (default ORCLCDB), PDB (default is ORCLPDB1), DOMAIN (default is localdomain) and allocated MEMORY (default is 2 GB)
  • how to change SYS password (default is Oradoc_db1)
  • how to make use of a volume external to the database container for storing data files, redo logs, audit logs, alert logs and trace files
  • how to run a database server image against an existing set of database files
Let’s run a database

After starting Docker (on my laptop I am using the Docker Quick Start Terminal in the Docker Toolbox), login to the container registry using your Oracle account.

image

SNAGHTMLea0ec12

Then pull the database image, using the command

docker pull container-registry.oracle.com/database/enterprise

image

07:09 Start Pull

10:28 Start Extracting

image

10:30 Image is available, ready to run containers from

image

The download took over three and a half hours. I was doing stuff over that time – so no time lost.

Once the pull was finished, the image was added to the local cache of Docker images. I can now run the database.

docker run -d -it --name ORA12201_1 -P container-registry.oracle.com/database/enterprise:12.2.0.1

The value ORA12201_1 is the self-picked name for the container.

image

Here -P indicates that the ports can be chosen by docker. The mapped port can be discovered by executing

docker port ORA12201_1

image

In a few minutes – I am not sure exactly how long it took – the container status is healthy:

image

The Database server can be connected to – when the container status is Healthy – by executing sqlplus from within the container as

docker exec -it ORA12201_1 bash -c "source /home/oracle/.bashrc; sqlplus /nolog"

image

image

In addition to connecting to the database from within the container, we can also just consider the container running the database as a black box that exposes the database's internal port 1521 at port 32769. Any tool capable of communicating with a database can then be used in the regular way – provided we also have the IP address of the Docker Host if the connection is not made from that machine itself:

image 

Creating a database connection in SQL Developer is done like this:

SNAGHTMLf64a329

Using SYS/Oradoc_db1 as the credentials, the Docker Host IP address for the hostname and the port mapped by Docker to port 1521 in the container, 32769 in this case. The Service Name is composed of the PDB name and the domain name:  ORCLPDB1.localdomain.

A sample query:

image

Connecting with SQLcl is similar:

sql sys/Oradoc_db1@192.168.99.100:32769/ORCLPDB1.localdomain as sysdba

image
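The same connection details work from any Oracle client. For example, a minimal node-oracledb sketch – the IP address and mapped port will differ in your environment, and connecting as SYS is just for this quick test:

var oracledb = require('oracledb'); // npm install oracledb

oracledb.getConnection(
  {
    user: 'sys',
    password: 'Oradoc_db1',
    connectString: '192.168.99.100:32769/ORCLPDB1.localdomain',
    privilege: oracledb.SYSDBA          // needed when connecting as SYS
  },
  function (err, connection) {
    if (err) { return console.error(err.message); }
    connection.execute('select name, open_mode from v$pdbs', function (err, result) {
      if (err) { console.error(err.message); } else { console.log(result.rows); }
      connection.release(function () { });
    });
  });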

To stop the container – and the database:

docker stop 62eb

It takes a few seconds to stop cleanly.

image

Restarting takes about 1 minute before the database is up and running:

image

image

Note: with this basic approach, all database files are created in the container's file system; they are not available elsewhere, nor will they survive the removal of the container. A better way of handling these files is through the mounting of a host folder for storing files or through a Docker Volume.

Note: when running on Windows using Docker Toolbox, this may be convenient for increasing the size of memory and disk of the default VM: https://github.com/crops/docker-win-mac-docs/wiki/Windows-Instructions-(Docker-Toolbox)

The post Run Oracle Database in Docker using prebaked image from Oracle Container Registry–a two minute guide appeared first on AMIS Oracle and Java Blog.

Quick introduction to Oracle Container Registry–running one of Oracle’s prebaked images

Thu, 2017-11-16 22:47

Oracle has been active with Docker for quite some time now. From the first hesitant steps from some enthusiastic employees with Docker build files that helped people to get going with Oracle software in their experimental containers to a GitHub repo with a broad set of extensive Docker build files to create Docker containers with various Oracle products that are supported (https://github.com/oracle/docker-images). And of course the Oracle Container Cloud – launched in the Spring of 2017 – that will run custom Docker images. And now recently the next step: the availability of the Oracle Container Registry – Oracle's own Docker container registry that offers a range of ready built container images with Oracle software. Using these images, all you need to run Oracle platform components on your local Docker Host or Kubernetes cluster is a docker pull from this registry followed by a docker run.

In this article I will give a quick example of how to work with the Oracle Container Registry. It can be found at https://container-registry.oracle.com.

The steps to go through:

1. Register as a user for the Oracle Container Registry (one time only, an Oracle account is required)

2. Explore the registry, locate the desired image(s) and Agree to and accept the Oracle Standard Terms and Restrictions for the image(s) that you want to make use of

3. Do a docker login to connect to the Oracle Container Registry

4. Pull the image(s) that you need

5. Run the image(s)

Open the link for the Container Registry:

image

Click on Register. Sign on with an existing Oracle Account or start the flow for creating such an account.

image

Provide the account's credentials. Then click on Create New User.

SNAGHTML9181a91

A confirmation email is sent:

image

And now the opening page lists the areas in which currently images are provided:

image

You can explore what images are available, for example for the database:

image

And for Java:

image

Before you can download any image, you need to accept the terms for that specific image – a manual step in the user interface of the container registry:

image

image

image

After pressing Accept, this image is now available to be pulled from docker.

image

Run Docker container based on Oracle’s Java Runtime Image

I will focus now on the Java Run Time image – one of the smaller images on the registry – to demonstrate the steps for running it in my local Docker host.

Accept the terms:

image

Click on the name of image to get the details and the docker pull command required for this image:

image

Check out the tags:

image

We will go for the latest.

From the Docker host, first do a login, using your Oracle account credentials:

docker login -u username -p password container-registry.oracle.com

SNAGHTML92082ca

Then use docker pull, using the command provided on the image page:

docker pull container-registry.oracle.com/java/serverjre

The image is downloaded and stored locally in the image cache.

image

image

When the download is complete the image (not small mind you, at 377 MB) is available to be used for running container instances, in the regular Docker way. For example:

docker run -it container-registry.oracle.com/java/serverjre

image

Et voilà: the container is running locally, based on a prebuilt image. No local build steps are required, no downloading of software packages, no special configuration to apply. The Java runtime is a fairly straightforward case; when running the Oracle Docker image for the enterprise database or the Fusion Middleware infrastructure, the gain from using the prebuilt image from the Oracle Container Registry is even bigger.

If you want to free up local space, you can of course remove the Oracle Docker image. After all, it is easy to pull it again from the registry.

image

The post Quick introduction to Oracle Container Registry–running one of Oracle’s prebaked images appeared first on AMIS Oracle and Java Blog.

First steps with Istio on Kubernetes on Minikube on Windows 10

Wed, 2017-10-25 07:53

In this article, I discuss my steps to get going with the Istio [service mesh] on Kubernetes running on Minikube on Windows 10. Unfortunately, I ran into an issue with Istio. This article describes the steps leading up to the issue. I will continue with the article once the issue is resolved. For now, it is a dead end street.

Note: my preparations – install Minikube and Kubernetes on Windows 10 are described in this previous article: https://technology.amis.nl/2017/10/24/installing-minikube-and-kubernetes-on-windows-10/.

Clone Git repository with samples

git clone https://github.com/istio/istio

Start minikube

set MINIKUBE_HOME=C:\Users\lucas_j\.minikube

minikube start

image

Run Bookinfo

cd c:\data\bookinfo\istio\samples\bookinfo\kube

kubectl apply -f bookinfo.yaml

image

Show the productpage. First, find the port on which the productpage service is exposed:

image

productpage is a service of type ClusterIP, which is only available inside the cluster – which is not good for me.

So to expose the service to outside the cluster:

kubectl edit svc productpage

and in the editor that pops up, change the type from ClusterIP to NodePort:

image

After changing the type and saving the change, get services indicates the port on which the productpage service is now exposed:

image

So now we can go to URL:  http://192.168.99.100:9080/productpage

image

Installing Istio into the Kubernetes Cluster

Now that we’ve seen the app, we’ll adjust our deployment slightly to make it work with Istio. We first need to install Istio in our cluster. To see all of the metrics and tracing features in action, we also install the optional Prometheus, Grafana, and Zipkin addons.

First, download Istio for Windows from https://github.com/istio/istio/releases and extract the contents of the zip file.

image

Add the directory that contains the client binary istioctl.exe to the PATH variable.

image

Open a new command line window. Navigate to the installation location of Istio.

To install Istio to the minikube Kubernetes cluster:

kubectl apply -f install/kubernetes/istio.yaml

SNAGHTML3324bb4

ending with:

image

To verify the success of the installation:

kubectl get svc -n istio-system

image

On Minikube – which does not support services of type LoadBalancer – the external IP for the istio-ingress will stay pending. You must access the application using the service NodePort, or use port-forwarding instead.

Check on the pods:

kubectl get pods -n istio-system

image

On the dashboard, when I switch to the istio-system Namespace, I can see more details:

image

When I try to run istio commands, I run into issues:

istio version

image

panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x30 pc=0x121513f]

goroutine 1 [running]:
main.getDefaultNamespace(0x14b878a, 0xd, 0x0, 0x0)

I am not sure yet what is the underlying cause and if there is a solution. The issue report https://github.com/istio/pilot/issues/1336 seems related – perhaps.

I do not know where to get more detailed logging about what happens prior to the exception.

Install Book Info Application and inject Istio

The next command I tried was:

kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)

this one fails with: the system cannot find the file specified

image

Resources

Git Repository for Istio – with samples: https://github.com/istio/istio 

Guide for Istio introduction – Managing microservices with the Istio service mesh – http://blog.kubernetes.io/2017/05/managing-microservices-with-istio-service-mesh.html?m=1 

Installation of Istio into Kubernetes Cluster – https://istio.io/docs/setup/kubernetes/index.html

Tutorial  Istio is not just for microservices  – https://developer.ibm.com/recipes/tutorials/istio-is-not-just-for-microservices

Istio: Traffic Management for your Microservices – https://github.com/IBM/microservices-traffic-management-using-istio/blob/master/README.md

Istio Guide – getting started with sample application Bookinfo – https://istio.io/docs/guides/bookinfo.html

The post First steps with Istio on Kubernetes on Minikube on Windows 10 appeared first on AMIS Oracle and Java Blog.

Installing Minikube and Kubernetes on Windows 10

Tue, 2017-10-24 00:25

Quick notes on the installation of Minikube for trying out Kubernetes on my Windows 10 laptop (using VirtualBox – not Hyper-V).

Following instructions in https://www.ibm.com/support/knowledgecenter/en/SS5PWC/minikube.html 

Download Windows installer for MiniKube:

https://github.com/kubernetes/minikube/releases

Run installer

After running the installer, open a command line window

image

Download kubectl.exe

curl -o kubectl.exe https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/windows/amd64/kubectl.exe

Copy the downloaded file to a proper location – of your choosing – and add that location to the PATH environment variable.

Open a new command line window, set MINIKUBE_HOME

set MINIKUBE_HOME=C:\Users\lucas_j\.minikube

and run

minikube start

to start Minikube.

image

The VM image in which the Kubernetes cluster will be created and run is downloaded. This image is 139 MB, so this startup takes a while – but of course only the first time.

image

The directory .minikube is created:

image

And in VirtualBox you will find a new VM set up and running:

SNAGHTML3ce136f1

Run

minikube dashboard

and the browser will open:

image

with an overview of the Kubernetes cluster running within the VM.

With

minikube stop

you can halt the cluster – later to be started again using minikube start

image

A restart now only takes 10-15 seconds:

image

Using the instructions here – https://github.com/kubernetes/kubernetes/blob/master/examples/simple-nginx.md – I can quickly run a Docker Image on my minikube cluster:

kubectl run my-nginx --image=nginx --port=80

This will create two nginx pods listening on port 80. It will also create a deployment named my-nginx to ensure that there are always two pods running.

image

In the dashboard, this same information is available:

image

kubectl expose deployment my-nginx --type="NodePort"

is used to expose the deployment – make it accessible from outside the cluster.

Using

kubectl get services

I get a list of services and the local IP address and port on which they are exposed.

image

I can get the same information on the dashboard:

image

The IP address where the VirtualBox VM can be accessed is 192.168.99.100 – as can be seen for example from the URL where the dashboard application is accessed:

image

The nginx service can now be accessed at 192.168.99.100:32178.

image

And in the browser:

image
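The same check can be done from a small Node script – a sketch using the IP address and NodePort shown above:

var http = require('http');

http.get('http://192.168.99.100:32178/', function (res) {
  console.log('nginx responded with status ' + res.statusCode);
  res.resume(); // discard the response body
}).on('error', function (err) {
  console.error('Could not reach the service: ' + err.message);
});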


The post Installing Minikube and Kubernetes on Windows 10 appeared first on AMIS Oracle and Java Blog.

Rapid first few steps with Fn – open source project for serverless functions

Thu, 2017-10-19 00:49

Project Fn is an open source project that provides a container native, poly-language, cloud agnostic (aka run on any cloud) serverless platform for running functions. Fn was launched during Oracle OpenWorld 2017. Fn is available on GitHub (https://github.com/fnproject/fn ) and provides all resources required to get started. In this article, I will just show you (and myself) how I went through the quick start steps and what it looked like on my laptop (Windows 10 with Vagrant and VirtualBox).

I simply get Fn up and running, create a first function that I then deploy and access through HTTP. I briefly show the APIs available on the Fn server and Fn UI application.

Steps:

  1. Create VirtualBox VM with Debian and Docker (for me, Ubuntu 14 failed to run Fn; I created issue 437 for that) – this step is described in a different article
  2. Install Fn command line
  3. Install and run Fn server in the VM, as Docker container
  4. Create function hello
  5. Initialize new function and run it
  6. Deploy the new function (in its own Docker Container running inside the container running Fn server)
  7. Invoke the new function over http from the Host laptop
  8. Run the Fn UI application
  9. Inspect the Fn Server REST APIs

Connect into the Debian Virtual Machine – for me with vagrant ssh.

Install Fn Command Line

To install the Fn command line, I used this command:

curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh

image 

Install and run Fn server in the VM, as Docker container

To run the Fn server, after installing the CLI, I just used

fn start

image

Fn Server is running.

Create function hello

As per the instructions in the quick start guide, I created a new directory hello with a text file hello.go:

SNAGHTML2859e5fa

Note: I created these on the host laptop inside the directory that is mapped into the VM under /vagrant. So I can access the file inside the VM in /vagrant/hello.

Initialize new function and run it

image

and after a little while

image

Deploy the new function

(in its own Docker Container running inside the container running Fn server)

image

image

Run function inside Debian VM:

image

Invoke the new function over http from the Host laptop

image

The IP address 192.168.188.102 was assigned during the provisioning of the VM with Vagrant.

Run the Fn UI application

A UI application to inspect all Fn applications and functions can be installed and run:

image

image

And accessed from the host laptop:

image

Note: for me it did not show the details for my new hello function.

Inspect the Fn Server REST APIs

Fn platform publishes REST APIs that can be used to programmatically learn more about applications and functions and also to manipulate those.

image

Some examples:

image

and

image

Summary

Getting started with Fn is pretty smooth. I got started and wrote this article in under an hour and a half. I am looking forward to doing much more with Fn – especially tying functions together using Fn Flow.

Resources

Fn project home page: http://fnproject.io/

Article to quickly provision VirtualBox Image with Debian and Docker: https://technology.amis.nl/2017/10/19/create-debian-vm-with-docker-host-using-vagrant-automatically-include-guest-additions/

Fn quick start guide: https://github.com/fnproject/fn

Fn UI on GitHub: https://github.com/fnproject/ui

Fn API: http://petstore.swagger.io/?url=https://raw.githubusercontent.com/fnproject/fn/master/docs/swagger.yml

The post Rapid first few steps with Fn – open source project for serverless functions appeared first on AMIS Oracle and Java Blog.

    Create Debian VM with Docker Host using Vagrant–automatically include Guest Additions

    Thu, 2017-10-19 00:02

    A short and simple article. I needed a Debian VM that I could use as Docker host – to run on my Windows 10 laptop. I resorted to Vagrant. With a few very simple steps, I got what I wanted:

    0. install Vagrant (if not already done)

    0. install Vagrant plugin for automatically adding Virtual Box Guest Additions to every VM stamped out by Vagrant (so folder mapping from host laptop to VM is supported)

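    The plugin in question is vagrant-vbguest (see the resources below); installing it is a one-liner:

    vagrant plugin install vagrant-vbguest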

    1. create a fresh directory with a simple Vagrantfile that refers to the Debian image

    2. run vagrant up

    3. sit back and relax (few minutes)

    4. use vagrant ssh to connect into the running VM and start doing stuff.

    The vagrant file:

    Vagrant.configure("2") do |config|

    config.vm.provision "docker"

    config.vm.define "debiandockerhostvm"
    # https://app.vagrantup.com/debian/boxes/jessie64
    config.vm.box = "debian/jessie64"
    config.vm.network "private_network", ip: "192.168.188.102"

    config.vm.synced_folder "./", "/vagrant", id: "vagrant-root",
           owner: "vagrant",
           group: "www-data",
           mount_options: ["dmode=775,fmode=664"],
           type: ""

    config.vm.provider :virtualbox do |vb|
       vb.name = "debiandockerhostvm"
       vb.memory = 4096
       vb.cpus = 2
       vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
       vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
    end

    end

    Running vagrant up creates and subsequently boots the VM. Once it is up, use vagrant ssh to enter the virtual machine and start mucking around.

    Resources

    Vagrant Plugin for automatically installing Guest Addition to each VM that is produced: https://github.com/dotless-de/vagrant-vbguest/

    Vagrant Box Jessie: https://app.vagrantup.com/debian/boxes/jessie64

    The post Create Debian VM with Docker Host using Vagrant–automatically include Guest Additions appeared first on AMIS Oracle and Java Blog.

    Quickly create a Virtualbox development VM with XE DB using Kickstart, Packer, Vagrant

    Tue, 2017-10-17 09:53

    The topic of quickly creating an Oracle development VM is not new. Several years ago Edwin Biemond and Lucas Jellema wrote several blogs about this and gave presentations on it at various conferences. You can also download ready-made VirtualBox images from Oracle here and specifically for SOA Suite here.

    Over the years I have created a lot (probably 100+) of virtual machines manually. For SOA Suite, the process of installing the OS, installing the database, installing WebLogic Server and installing SOA Suite itself can be quite time consuming and boring if you have already done it so many times. My irritation finally passed the threshold at which I needed to automate it! I wanted to easily recreate a clean environment with a new version of specific software. This blog is a start: provisioning an OS and installing the XE database on it. It might seem like a lot, but this blog contains the knowledge of two days of work, which indicates it is easy to get started.

    I decided to start from scratch and first create a base Vagrant box using Packer, which uses Kickstart. Kickstart is used to configure the OS of the VM, such as the disk partitioning scheme, the root password and the initial packages. Packer makes using Kickstart easy and allows easy creation of a Vagrant base box. Once the base Vagrant box is created, I can use Vagrant to create the VirtualBox machine, configure it and do additional provisioning, such as, in this case, installing the Oracle XE database.

    Getting started

    First install Vagrant from HashiCorp (here).

    If you just want a quick VM with the Oracle XE database installed, you can skip the Packer part. If you want to have the option to create everything from scratch, you can first create your own base image with Packer and use it locally, or use the Vagrant Cloud to share the base box.

    Every Vagrant development environment requires a base box. You can search for pre-made boxes at https://vagrantcloud.com/search.

    Oracle provides Vagrant boxes you can use here. Those boxes have some default settings. I wanted to know how to create my own box to start with, in case I, for example, wanted to use an OS not provided by Oracle. The Vagrant documentation presents three options for this; using Packer is described as the most reusable one.

    Packer

    ‘Packer is an open source tool for creating identical machine images for multiple platforms from a single source configuration.’ (from here) Download Packer from HashiCorp (here).

    Avast Antivirus, and maybe other antivirus programs, do not like Packer, so you might have to temporarily disable them or tell them Packer can be trusted.

    virtualbox-iso builder

    Packer can be used to build Vagrant boxes (here) but also boxes for other platforms such as Amazon and VirtualBox. See here. For VirtualBox there are two so-called builders available: start from scratch by installing the OS from an ISO file, or start from an OVF/OVA file (a pre-built VM). Here I of course chose the ISO file, since I want to be able to easily update the OS of my VM and do not want to create a new OVF/OVA file for every new OS version. Thus I decided to use the virtualbox-iso builder.

    ISO

    For my ISO file I decided to go with Oracle Linux Release 7 Update 4 for x86 (64 bit), which is currently the most recent version. In order for Packer to work fully autonomously (and to make it easy for the developer), you can provide a remote URL to a file you want to download. For Oracle Linux there are several mirrors available which provide that. Look one up close to you here. You have to update the checksum in the template file (see below) when you update the ISO image, if you want to run on a new OS version.
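    A quick way to obtain that checksum for a freshly downloaded ISO (the file name below is an example for the 7.4 DVD image; mirrors may name it differently):

    sha256sum OracleLinux-R7-U4-Server-x86_64-dvd.iso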

    template JSON file

    In order to use Packer with the virtualbox-iso builder, you first require a template file in JSON format. Luckily samples for these have already been made available here. You should check them though. I made my own version here.

    Kickstart

    In order to make the automatic installation of Oracle Linux work, you need a Kickstart file. This is generated automatically when performing an installation at /root/anaconda-ks.cfg. Read here. I’ve made my own here in order to have the correct users, passwords, packages installed and swap partition size.

    After you have a working Kickstart file and the Packer template ol74.json, you can kick off the build with:

    packer build ol74.json

    Packer uses a specified username to connect to the VM (present in the template file). This should be a user which is created in the Kickstart script. For example if you have a user root with password Welcome01 in the kickstart file, you can use that one to connect to the VM. Creating the base box will take a while since it will do a complete OS installation and first download the ISO file.

    You can put the box remote or keep it local.

    Put the box remote

    After you have created the box, you can upload it to the Vagrant Cloud so other people can use it. The Vagrant Cloud free option offers unlimited free public boxes (here). The process of uploading a base box to the Vagrant Cloud is described here. You first create a box and then upload the file Packer has created as a provider.

    After you’re done, the result is a Vagrant box which can be referenced as the base image in a Vagrantfile, for example with config.vm.box = "youruser/ol74" (the name is whatever you published the box under).

    Use the box locally

    Alternatively you can use the box you’ve created locally:
    vagrant box add ol74 file:///d:/vagrant/packer/virtualbox/ol74.box

    You of course have to change the box location to be specific to your environment.

    And use ol74 as box name in your Vagrantfile. You can see an example of a local and remote box here.

    If you have recreated your box and want to use the new version in Vagrant to create a new Virtualbox VM:

    vagrant box remove ol74
    vagrant box add ol74 file:///d:/vagrant/packer/virtualbox/ol74.box

    Vagrant

    You now have a clean base OS (relatively clean – I added a GUI) and you want to install stuff in it. Vagrant can help you do that. I’ve used a simple shell script to do the provisioning (see here), but you can also use more complex pieces of software like Chef or Puppet. Those are, in the long run, better suited to also updating and managing machines. Since this is just a local development machine, I decided to keep it simple.

    I’ve prepared a Vagrantfile for this.

    This expects to find a structure like:

    provision.sh
    Vagrantfile
    Directory: software
    –oracle-xe-11.2.0-1.0.x86_64.rpm.zip
    –xe.rsp

    These can be downloaded here, except for the oracle-xe-11.2.0-1.0.x86_64.rpm.zip file, which can be downloaded here.

    Oracle XE comes with an rsp file (a so-called response file) which makes automating the installation easy. This is described here. You just have to fill in some variables, like the password and port. I’ve prepared such a file here.

    After everything is setup, you can do:

    vagrant up soadb

    And it will create the soadb VM for you in VirtualBox.
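    A rough sketch of what the provisioning script does for the XE installation (simplified and from memory – the actual provision.sh linked above is leading):

    # install prerequisites, then the XE RPM, then run the silent configuration with the response file
    cd /vagrant/software
    unzip -o oracle-xe-11.2.0-1.0.x86_64.rpm.zip
    yum install -y bc libaio
    rpm -ivh Disk1/oracle-xe-11.2.0-1.0.x86_64.rpm
    /etc/init.d/oracle-xe configure responseFile=/vagrant/software/xe.rsp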

    The post Quickly create a Virtualbox development VM with XE DB using Kickstart, Packer, Vagrant appeared first on AMIS Oracle and Java Blog.

    JSON manipulation in Java 9 JShell

    Thu, 2017-10-12 09:38

    In this article I will demonstrate how we can work with JSON-based data – for analysis, exploration, cleansing and processing – in JShell, much like we do in Python. I work with a JSON document containing entries for all sessions at the Oracle OpenWorld 2017 conference (https://raw.githubusercontent.com/lucasjellema/scrape-oow17/master/oow2017-sessions-catalog.json).

    The Java SE 9 specification for the JDK does not contain the JSON-P API and libraries for processing JSON. In order to work with JSON-P in JShell, we need to add the libraries – which we first need to find and download.

    I have used a somewhat roundabout way to get hold of the required jar-files (but it works in a pretty straightforward manner):

    1. Create a pom.xml file with dependencies on JSON-P – judging by the jar files used later on, these are the API (javax.json:javax.json-api) and the reference implementation (org.glassfish:javax.json), both version 1.1.


     

    2. Then run

    mvn install dependency:copy-dependencies

    as described in this article: https://technology.amis.nl/2017/02/09/download-all-directly-and-indirectly-required-jar-files-using-maven-install-dependencycopy-dependencies/

    This will download the relevant JAR files to the subdirectory target/dependencies.


    3. Copy JAR files to a directory – that can be accessed from within the Docker container that runs JShell – for me that is the local lib directory that is mapped by Vagrant and Docker to /var/www/lib inside the Docker container that runs JShell.
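    In shell terms, step 3 is just a copy of the two jars into that mapped directory (directory names follow the article text; adjust them to your own setup):

    cp target/dependencies/javax.json-api-1.1.jar target/dependencies/javax.json-1.1.jar lib/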

     

    4. In the container that runs JShell:

    Start JShell with this statement that makes the new httpclient module available, for when the JSON document is retrieved from an HTTP URL resource:

    jshell --add-modules jdk.incubator.httpclient

     

    5. Update classpath from within jshell

    To process JSON in JShell – using JSON-P – we need to set the classpath to include the two jar files that were downloaded using Maven.

    /env --class-path /var/www/lib/javax.json-1.1.jar:/var/www/lib/javax.json-api-1.1.jar

    Then the classes in JSON-P are imported

    import javax.json.*;

    if we need to retrieve JSON data from a URL resource, we should also

    import jdk.incubator.http.*;

     

    6. I have made the JSON document available on the file system.


    It can be accessed as follows:

    InputStream input = new FileInputStream("/var/www/oow2017-sessions-catalog.json");

     

    7. Parse data from file into JSON Document, get the root object and retrieve the array of sessions:

    JsonReader jsonReader = Json.createReader(input)

    JsonObject rootJSON = jsonReader.readObject();

    JsonArray sessions = rootJSON.getJsonArray("sessions");

     

    8. Filter sessions with the term SQL in the title and print their title to the System output – using Streams:

    sessions.stream().map( p -> (JsonObject)p).filter(s -> s.getString("title").contains("SQL")).forEach( s -> {System.out.println(s.getString("title"));})


     

    One other example: show a list of all presentations for which a slidedeck has been made available for download along with the download URL:

    sessions.stream()

    .map( p -> (JsonObject)p)

    .filter(s -> s.containsKey("files") && !s.isNull("files") && !(s.getJsonArray("files").isEmpty()))

    .forEach( s -> {System.out.println(s.getString("title")+" url:"+s.getJsonArray("files").getJsonObject(0).getString("url"));})

     

    Bonus: Do HTTP Request

    As an aside, here are some steps in JShell to execute an HTTP request:

    jshell> HttpClient client = HttpClient.newHttpClient();
    client ==> jdk.incubator.http.HttpClientImpl@4d339552

    jshell> HttpRequest request = HttpRequest.newBuilder(URI.create("http://www.google.com")).GET().build();
    request ==> http://www.google.com GET

    jshell> HttpResponse response = client.send(request, HttpResponse.BodyHandler.asString())
    response ==> jdk.incubator.http.HttpResponseImpl@147ed70f

    jshell> System.out.println(response.body())
    <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
    <TITLE>302 Moved</TITLE></HEAD><BODY>
    <H1>302 Moved</H1>
    The document has moved
    <A HREF="http://www.google.nl/?gfe_rd=cr&amp;dcr=0&amp;ei=S2XeWcbPFpah4gTH6Lb4Ag">here</A>.
    </BODY></HTML>

     


    The post JSON manipulation in Java 9 JShell appeared first on AMIS Oracle and Java Blog.

    Java 9 – First baby steps with Modules and jlink

    Wed, 2017-10-11 12:00

    In a recent article, I created an isolated Docker container as a Java 9 R&D environment: https://technology.amis.nl/2017/10/11/quick-and-clean-start-with-java-9-running-docker-container-in-virtualbox-vm-on-windows-10-courtesy-of-vagrant/. In this article, I will use that environment to take a few small steps with Java 9 – in particular with modules. Note: this story does not end well. I wanted to conclude by using jlink to create a stand-alone runtime that contained both the required JDK modules and my own module – and demonstrate how small that runtime was. Unfortunately, the link step failed for me. More news on that in a later article.

    Create Custom Module

    Start a container based on the openjdk:9 image, exposing its port 80 on the docker host machine and mapping folder /vagrant (mapped from my Windows host to the Docker Host VirtualBox Ubuntu image) to /var/www inside the container:

    docker run -it -p 127.0.0.1:8080:80 -v /vagrant:/var/www openjdk:9 /bin/sh

    Create a Java application with a custom module: I create a single module (nl.amis.j9demo) and a single class nl.amis.j9demo.MyDemo. The module depends directly on one JDK module (httpserver) and indirectly on several more.

    The root directory for the module has the same fully qualified name as the module: nl.amis.j9demo.

    This directory contains the module-info.java file. This file specifies:

    • which modules this module depends on
    • which packages it exports (for other modules to create dependencies on)

    In my example, the file is very simple – the module declaration contains only a single requires jdk.httpserver; statement.

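    A sketch of the source layout this implies (paths inferred from the compile command used below):

    src/nl.amis.j9demo/module-info.java
    src/nl.amis.j9demo/nl/amis/j9demo/MyDemo.java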

    The Java class MyDemo has a number of imports. Many are for base classes from the java.base module. Note: every Java module has an implicit dependency on java.base, so we do not need to include it in the module-info.java file.


    This code creates an instance of HttpServer – an object that listens for HTTP requests at the specified port (80 in this case) and then always returns the same response (the string "This is the response"). As meaningless as that is – the notion of receiving and replying to HTTP requests in just a few lines of Java code (running on the OpenJDK!) is quite powerful.

    package nl.amis.j9demo;
    import java.io.*;
    import java.net.*;
    import java.util.*;
    import java.util.concurrent.*;
    import java.util.stream.*;
    import com.sun.net.httpserver.*;
    
    import static java.lang.System.out;
    import static java.net.HttpURLConnection.*;
    
    public class MyDemo{
      private static final int DEFAULT_PORT = 80;
      private static URI ROOT_PATH = URI.create("/"); 
               
    
    private static class MyHandler implements HttpHandler {
           public void handle(HttpExchange t) throws IOException {
               URI tu = t.getRequestURI();
               InputStream is = t.getRequestBody();
               // .. read the request body
               String response = "This is the response";
               t.sendResponseHeaders(200, response.length());
               OutputStream os = t.getResponseBody();
               os.write(response.getBytes());
               os.close();
           }
       }
    
    
      public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(DEFAULT_PORT), 0);
        server.createContext("/apps", new MyHandler());
        server.setExecutor(null); // creates a default executor
        server.start();
        out.println("HttpServer is started, listening at port "+DEFAULT_PORT);
      }
    
    }
    
    

    Compile, Build and Run

    Compile the custom module:

    javac -d mods --module-source-path src -m nl.amis.j9demo


    Create destination directory for JAR file

    mkdir -p lib

    Create the JAR for the module:

    jar --create --file lib/nl-amis-j9demo.jar --main-class nl.amis.j9demo.MyDemo -C mods/nl.amis.j9demo .


    Inspect the JAR file:

    jar tvf lib/nl-amis-j9demo.jar


    To run the Java application – with a reference to the module:

    java -p lib/ -m nl.amis.j9demo


    the traditional equivalent with a classpath for the JAR file(s) would be:

    java -classpath lib/nl-amis-j9demo.jar nl.amis.j9demo.MyDemo

    Because port 80 in the container was exposed and mapped to port 8080 on the Docker Host, we can access the Java application from the Docker Host, using wget:

    wget 127.0.0.1:8080/apps


    The response from the Java application is hardly meaningful. However, the fact that we get a response at all is quite something: the ‘remote’ container based on openjdk:9 has published an HTTP server from our custom module that we can access from the Docker Host with a simple HTTP request.

    Jlink

    I tried to use jlink – to create a special runtime for my demo app, consisting of required parts of JDK and my own module. I expect this runtime to be really small.

    The JDK modules, by the way, are located in /docker-java-home/jmods in my Docker container.


    The command for this:

    jlink --output mydemo-runtime --module-path lib:/docker-java-home/jmods --limit-modules nl.amis.j9demo --add-modules nl.amis.j9demo --launcher demorun=nl.amis.j9demo --compress=2 --no-header-files --strip-debug

    Unfortunately, on my OpenJDK:9 Docker Image, linking failed with this error:


    Error: java.io.UncheckedIOException: java.nio.file.FileSystemException: mydemo-runtime/legal/jdk.httpserver/ASSEMBLY_EXCEPTION: Protocol error

    Resources

    Documentation for jlink – https://docs.oracle.com/javase/9/tools/jlink.htm

    JavaDoc for HttpServer package – https://docs.oracle.com/javase/9/docs/api/com/sun/net/httpserver/package-summary.html#

    Java9 Modularity Part 1 (article on Medium by Chandrakala) – https://medium.com/@chandra25ms/java9-modularity-part1-a102d85e9676

    JavaOne 2017 Keynote – Mark Reinhold demoing jlink – https://youtu.be/UNg9lmk60sg?t=1h35m43s

    Exploring Java 9 Modularity – https://www.polidea.com/blog/Exploring-Java-9-Java-Platform-Module-System/

    The post Java 9 – First baby steps with Modules and jlink appeared first on AMIS Oracle and Java Blog.

    Quick and clean start with Java 9–running Docker container in VirtualBox VM on Windows 10 courtesy of Vagrant

    Wed, 2017-10-11 08:25

    The messages from JavaOne 2017 were loud and clear. Some of these:

    • Java 9 is here,
    • the OpenJDK has all previously exclusive commercial features from the Oracle (fka Sun) JDK – this includes the Java Flight Recorder for real-time monitoring/metrics gathering and analysis,
    • Java 9 will be succeeded by Java 18.3, 18.9 and so on (a six-month cadence), with much quicker evolution while retaining quality and stability,
    • Jigsaw is finally here; it powers the coming evolution of Java and the platform, and it allows us to create fine-tuned, tailor-made Java runtime environments that may take less than 10-20% of the full-blown JRE,
    • Java 9 has many cool and valuable features besides the modularity of Jigsaw – features that make programming easier, more elegant, more fun, more lightweight, etc.,
    • one of the objectives is “Java First, Java Always” (instead of: when web companies mature, then they switch to Java); having Java enabled for cloud, microservices and serverless is an important step in this.

      Note: during the JavaOne keynote, Spotify presented a great example of this pattern: they have a microservices architecture (from before it was called microservices); most services were originally created in Python, with the exception of the search capability. Due to scalability challenges, all Python-based microservices have been migrated to Java over the years. The original search service is still around. Java not only scales very well and has the largest pool of developers to draw from, it also provides great runtime insight into what is going on in the JVM.

    I have played around a little with Java 9, but now that it is out in the open (and I have started working on a fresh new laptop – Windows 10) I thought I should give it another try. In this article I will describe the steps I took from a non-Java-enabled Windows environment to playing with Java 9 in jshell – in an isolated container, created and started without any programming, installation or configuration. I used Vagrant and VirtualBox – both were installed on my laptop prior to the exercise described in this article. Vagrant in turn used Docker and downloaded the OpenJDK Docker image for Java 9 on top of Alpine Linux. All of that was hidden from view.

    The steps:

    0. Preparation – install VirtualBox and Vagrant

    1. Create Vagrant file – configured to provide a VirtualBox image (based on Ubuntu Linux) and provision the Docker host on that image as well as a Docker Container with OpenJDK 9

    2. Run Vagrant for that Vagrant file to have it spin up the VirtualBox, install Docker into it, pull the OpenJDK image and run the container

    3. Connect into VirtualBox Docker Host and Docker Container

    4. Run jshell command line and try out some Java 9 statements

    In more detail:

    1. Create Vagrant file

    In a new directory, create a file called Vagrantfile – no extension. The file has the following content:

    It is configured to provide a VirtualBox image (based on Ubuntu Linux) and provision the Docker host on that VB image as well as a Docker Container based on the OpenJDK:9 image.


    Vagrant.configure("2") do |config|
     
    config.vm.provision "docker" do |d|
        d.run "j9",
          image: "openjdk:9",
          cmd: "/bin/sh",
          args: "-v '/vagrant:/var/www'"
        d.remains_running = true  
      end
     
    # The following line terminates all ssh connections. Therefore Vagrant will be forced to reconnect.
    # That's a workaround to have the docker command in the PATH
    # Command: "docker" "ps" "-a" "-q" "--no-trunc"
    # without it, I run into this error:
    # Stderr: Get http:///var/run/docker.sock/v1.19/containers/json?all=1: dial unix /var/run/docker.sock: permission denied.
    # Are you trying to connect to a TLS-enabled daemon without TLS?
     
    config.vm.provision "shell", inline:
    "ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill"
     
    config.vm.define "dockerhostvm"
    config.vm.box = "ubuntu/trusty64"
    config.vm.network "private_network", ip: "192.168.188.102"
     
    config.vm.provider :virtualbox do |vb|
      vb.name = "dockerhostvm"
      vb.memory = 4096
      vb.cpus = 2
      vb.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
    end
     
    end
    
    # to get into running container: 
    # vagrant ssh
    # docker run -it  -v /vagrant:/var/www openjdk:9 /bin/sh
    
    2. Run Vagrant for that Vagrant file

    And have it spin up the VirtualBox VM, install Docker into it, pull the OpenJDK image and run the container.

    3. Connect into VirtualBox Docker Host and Docker Container

    Using

    vagrant ssh

    to connect into the VirtualBox Ubuntu Host and

    docker run -it openjdk:9 /bin/sh

    to run a container and connect into the shell command line, we get to the environment primed for running Java 9:


    At this point, I should also be able to use docker exec to get into the container that was started by the Vagrant Docker provisioning configuration. However, I had some unresolved issues with that – the container kept restarting. I will attempt to resolve that issue.

    4. Run jshell command line and try out some Java 9 statements

    JShell is the new Java command line tool that allows REPL-style exploration – somewhat similar to, for example, Python and JavaScript (and even SQL*Plus).

    Here is an example of some JShell interaction:


    I tried to use the new simple syntax for creating collections from static data – the List.of(…) and Set.of(…) factory methods. Here I got the syntax right.


    It took me a little time to find out the exit strategy. It turns out that /exit does the trick.

    In summary: spinning up a clean, isolated environment in which to try out Java is not hard at all. On Linux – with Docker running natively – it is even simpler, although even then using Vagrant may be beneficial. On Windows it is also quite straightforward – no complex sys admin stuff required and hardly any command line things either. And that is something we developers should start to master – if we do not do so already.

    Issue with Docker Provider in Vagrant

    Note: I did not succeed in using the Docker provider (instead of the provisioner) with Vagrant. Attempting that (cleaner) approach failed with: “Bringing machine ‘j9’ up with ‘docker’ provider… The executable ‘docker’ Vagrant is trying to run was not found in the %PATH% variable. This is an error. Please verify this software is installed and on the path.” I have looked across the internet and found similar reports, but did not find a solution that worked for me.


    The provider is documented here: https://www.vagrantup.com/docs/docker/

    The Vagrantfile I tried to use originally – but was unable to get to work – was based on my own previous article: https://technology.amis.nl/2015/08/22/first-steps-with-provisioning-of-docker-containers-using-vagrant-as-provider/

    The post Quick and clean start with Java 9–running Docker container in VirtualBox VM on Windows 10 courtesy of Vagrant appeared first on AMIS Oracle and Java Blog.

    ODC Appreciation Day : Timeline component in Oracle JET, Data Visualization Cloud, APEX and ADF DVT: #ThanksODC

    Tue, 2017-10-10 13:40

    Here is my entry for the Oracle Developer Community ODC Appreciation Day (#ThanksODC).

    It is quite hard to make a choice for a feature to write about. So many to talk about. And almost every day another favorite of the month. Sliding time windows. The Oracle Developer Community – well, that is us. All developers working with Oracle technology, sharing experiences and ideas, helping each other with inspiration and solutions to challenges, making each other and ourselves better. Sharing fun and frustration, creativity and best practices, desires and results. Powered by OTN, now known as ODC. Where we can download virtually any software Oracle has to offer. And find resources – from articles and forum answers to documentation and sample code. This article is part of the community effort to show appreciation – to the community and to the Oracle Developer Community (organization).

    For fun, you could take a look at how the OTN site started – sometime in 2000 – using the WayBack machine: https://web.archive.org/web/20000511100612/http://otn.oracle.com:80/ 


    And the WayBack machine is just one of many examples of timelines – the presentation of data organized by date. We all know how pictures say more than many words. And how tables of data are frequently much less accessible to users than to-the-point visualizations. For some reason, data associated with moments in time has always had a special interest for me. As do features that are about time – such as Flashback Query, 12c Temporal Database and SYSDATE (or better yet: SYSTIMESTAMP).

    To present such time-based data in a way that reveals the timeline and the historical thread that resides in the data, we can make use of the Timeline component that is available in:

    In JET:

    In ADF:


    In Data Visualization Cloud:

    Note that in all cases it does not take much more than a dataset with a date (or date-time) attribute, one or more attributes to create a label from, and perhaps an attribute to categorize by. A simple select ename, job, hiredate from emp suffices.

    The post ODC Appreciation Day : Timeline component in Oracle JET, Data Visualization Cloud, APEX and ADF DVT: #ThanksODC appeared first on AMIS Oracle and Java Blog.
