Amis Blog

Friends of Oracle and Java

First encounters of a happy kind – rich web client application development with Vue.js

Sun, 2017-07-16 01:39

Development of rich web applications can be done in various ways, using one or more of many frameworks. In the end it all boils down to HTML(5), CSS and JavaScript, run and interpreted by the browser. But the exact way of getting there differs. Server-side oriented web applications with .NET and Java EE (Servlet, JSP, JSF), as well as PHP, Python and Ruby, have long been the most important way of architecting web applications. However, with the power of today’s browsers, the advanced state of HTML5 and JavaScript and the high degree of standardization across browsers, it now almost goes without saying that web applications are implemented with a rich client side that interacts with a backend only to a very limited degree, typically just to retrieve or pass data or to enlist external services and complex backend operations. What client/server did to terminal-based computing in the early nineties, the fat browser is now doing to three-tier web computing with its heavy focus on the server side.

The most prominent frameworks for developing these fat browser-based clients are Angular and Angular 2, React.js and Ember, complemented by jQuery and a plethora of other libraries, components and frameworks (see for example this list of top 9 frameworks). And then there is Vue.js. To be honest, I am not sure where Vue ranks in all the trends and StackOverflow comparisons. However, I did decide to take a quick look at Vue.js – and I liked what I saw.

From the Vue website:

Vue (pronounced /vjuː/, like view) is a progressive framework for building user interfaces. Unlike other monolithic frameworks, Vue is designed from the ground up to be incrementally adoptable. The core library is focused on the view layer only, and is very easy to pick up and integrate with other libraries or existing projects. On the other hand, Vue is also perfectly capable of powering sophisticated Single-Page Applications when used in combination with modern tooling and supporting libraries.

I have never really taken to Angular. It felt overly complex and I never particularly liked it. Perhaps I should give it another go – now that my understanding of modern web development has evolved. Maybe now I am finally ready for it. Instead, I checked out Vue.js and it made me more than a little happy. I smiled as I read through the introductory guide, because it made sense. The pieces fit together. I understand the purpose of the main moving pieces and I enjoy trying them out. The two way data binding is fun. The encapsulation of components, passing down properties, passing up events – I like that too. The HTML syntax, the use of templates, the close fit with “standard” HTML. It somehow agrees with me.

Note: it is still early days and I have not yet built a serious application with Vue. But I thought I should share some of my excitement.

The creator of Vue.js, Evan You ( http://evanyou.me/ ), writes about Vue’s origins:

I started Vue as a personal project when I was working at Google Creative Labs in 2013. My job there involved building a lot of UI prototypes. After hand-rolling many of them with vanilla JavaScript and using Angular 1 for a few, I wanted something that captured the declarative nature of Angular’s data binding, but with a simpler, more approachable API. That’s how Vue started.

And that is what appealed to me.

The first thing I did to get started with Vue.js was to read through the Introductory Guide for Vue.js 2.0: https://vuejs.org/v2/guide/ .

Component Tree

It is a succinct tour and explanation, starting at the basics and quickly coming round to the interesting challenges. Most examples in the guide work inline – and using the Vue.js devtools extension for Google Chrome it is even easier to inspect what is going on in the runtime application.

The easiest way to try out Vue.js (at its simplest) is using the JSFiddle Hello World example.

Next, I read through and followed the example of a more interesting Vue application in this article that shows data (News stories) retrieved from a public REST API (https://newsapi.org):

This example explains in a very enjoyable way how two components are created – news source selection and news story list from the selected source – as encapsulated, independent components that still work together. Both components interact with the REST API to fetch their data. The article starts with instructions on how to install the Vue command line tool and initialize a new project with a generated scaffold. If Node and NPM are already installed, you will be up and running with the hello world of Vue applications in less than 5 minutes.
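For reference, the 2017-era vue-cli flow the article walks through looks roughly like this; the project name vue-news is my own invention, and the commands are shown as comments because they need npm and network access:

```shell
# Sketch of the vue-cli scaffold flow (assumes Node and npm are installed):
#   npm install -g vue-cli        # install the Vue command line tool
#   vue init webpack vue-news     # scaffold a webpack-based project ("vue-news" is a made-up name)
#   cd vue-news && npm install    # fetch the project dependencies
#   npm run dev                   # dev server, by default on http://localhost:8080
echo "vue-cli scaffold sketch"
```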

Vue and Oracle JET

One other line of investigation is how Vue.js can be used in an Oracle JET application, to complement and perhaps even replace Knockout. More on that in a later article.

The post First encounters of a happy kind – rich web client application development with Vue.js appeared first on AMIS Oracle and Java Blog.

Running any Node application on Oracle Container Cloud Service

Sun, 2017-07-16 00:32

In an earlier article, I discussed the creation of a generic Docker container image that runs any Node.js application based on sources for that application on GitHub. When the container is started, the GitHub URL is passed in as a parameter and the container will download the sources and run the application. Using this generic image, you can run your Node application anywhere you can run a Docker container. One of the places where you can run a Docker container is the Oracle Container Cloud Service (OCCS) – a service that offers a platform for managing your container landscape. In this article, I will show how I used OCCS to run my generic Docker image for running a Node application and how I configured the service to run a specific Node application from GitHub.

Getting started with OCCS is described very well in an article by my colleague Luc Gorissen on this same blog: Docker, WebLogic Image on Oracle Container Cloud Service. I used his article to get started myself.

The steps are:

  • create OCCS Service instance
  • configure OCCS instance (with Docker container image registry)
  • Create a Service for the desired container image (the generic Node application runner) – this includes configuring the Docker container parameters such as port mapping and environment variables
  • Deploy the Service (run a container instance)
  • Check the deployment (status, logs, assigned public IP)
  • Test the deployment – check if the Node application is indeed available

 

Create OCCS Service instance

Assuming you have an Oracle Public Cloud account with a subscription to OCCS, go to the Dashboard for OCCS and click on Create Service.

Configure the service instance:

 


However, do not make it too small (!) (Oracle Cloud does not come in small portions):


So now with the minimum allowed data volume size (for a stateless container!)


This time I pass the validations:


And the Container Cloud Service instance is created:


 

Configure OCCS instance (with Docker container image registry)

After some time, when the instance is ready, I can access it:


It is pretty sizable as you can see.

Let’s access the Container console.


The Dashboard gives an overview of the current status, the actual deployments (none yet) and access to Services, Stacks, Containers, Images and more.


One of the first things to do is to configure a (Container Image) Registry – for example a local registry or an account on Docker Hub. I use my own Docker Hub account, where I have saved the container images from which I need to create containers in the Oracle Container Cloud:

My details are validated:


The registry is added:


 

Create a Service for a desired container image

Services are container images along with the configuration to be used for running containers. Oracle Container Cloud comes with a number of popular container images already configured as services. I want to add another service for my own image: the generic Node application runner. For this I select the image from my Docker Hub account, followed by configuring the Docker container parameters such as port mapping and environment variables.


The Service editor is the form where you define the image (from one of the configured registries), the name of the service (which represents the combination of the image with a set of configuration settings that make it into a specific service) and of course those configuration settings themselves – port mappings, environment variables, volumes etc.


Note: I am creating a service for the image that can run any Node application that is available in GitHub (as described here: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/ )
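For orientation, the Service definition corresponds roughly to a plain docker run. A sketch, in which the image name, the repository URL and the GITHUB_URL variable name are placeholders (the real ones are in the linked article; only APP_PORT=8080 is confirmed later in this post), shown as comments since it needs a Docker host:

```shell
# Hypothetical docker run equivalent of the OCCS Service definition:
#   docker run -d -p 8005:8080 \
#       -e APP_PORT=8080 \
#       -e GITHUB_URL=https://github.com/<account>/<node-app-repo> \
#       <dockerhub-account>/node-app-runner
echo "docker run sketch"
```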

Deploy the Service (run a container instance)

After the service has been created, it is available as the blueprint to run new containers from. This is done through a Deployment, which ties together a Service with some runtime settings around scaling, load balancing and the like:


Set the deployment details for the new deployment of this service:


After completing these details, press Deploy to go ahead and run the new deployment; in this case it consists of a single instance (boring….) but it could have been more involved.


The deployment is still starting.

A little later (a few seconds) the container is running:


Check some details:


To check the deployment (status, logs, assigned IP), click on the container name:


Anything written to the console inside the container is accessible from the Logs:


 

To learn about the public IP address at which the application is exposed, we need to turn to the Hosts tab.

Monitor Hosts


Drill down on one specific host:


and learn its public IP address, where we can access the application running in the deployed container.

Test the deployment – check if the Node application is indeed available

With the host’s public IP address and the knowledge that port 8080 inside the container (remember, environment variable APP_PORT was defined as 8080 and passed to the generic Node application runner) is mapped to port 8005 externally, we can now invoke the application running inside the container deployed on the Container Cloud Service from our local browser.
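The call pattern, with a hypothetical public IP standing in for the one shown on the Hosts tab:

```shell
# Hypothetical values: HOST_IP comes from the Hosts tab, 8005 is the external
# port the Service maps onto the container's APP_PORT (8080).
HOST_IP=203.0.113.10
EXT_PORT=8005
echo "curl http://${HOST_IP}:${EXT_PORT}/"
```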

 


 

And there is the output of the application (I never said it would be spectacular…)


 

Conclusion

After having gotten used to the sequence of actions:

  • configure registry (probably only once)
  • configure a service (for every container image plus specific setup of configuration parameters, including typical Docker container settings such as port mapping, volumes, environment variables)
  • define and run a deployment (from a service) with scaling factor and other deployment details
  • get hold of host public IP address to access the application in the container

Oracle Container Cloud Service provides a very smooth experience that compares favorably with other container cloud services and management environments I have seen. From a developer’s perspective at least, OCCS does a great job. It is a little too early to say much about the Ops side of things.

The post Running any Node application on Oracle Container Cloud Service appeared first on AMIS Oracle and Java Blog.

AWS – Build your own Oracle Linux 7 AMI in the Cloud

Fri, 2017-07-14 14:37

I always like to know what is installed on the servers that I need to use for database or WebLogic installs, whether in the Oracle Cloud or in any other cloud. One way to know is to build your own image that is used to start your instances. My latest post was about building my own image for the Oracle Cloud (IaaS), but I could only get it to work with Linux 6. Whatever I tried with Linux 7, it wouldn’t start in a way that I could log on to it, and there was no way to see what was wrong, not even when mounting the boot disk on another instance after a test boot. My trial ran out before I could get it to work and a new trial had other problems.

Since we have an AWS account, I could try to do the same in AWS EC2 when I had some spare time. A few years back I had built Linux 6 AMIs via a process that felt a bit complicated, but it worked for a PV kernel. For Linux 7 I couldn’t find any examples on the web with enough detail to really get it working. But while I was studying for my Oracle VM 3.0 for x86 Certified Implementation Specialist exam, I realized what must have been the problem. Below follow my notes on how to build my own Oracle Linux 7.3 AMI for EC2.

General Steps:
  1. Create a new Machine in VirtualBox
  2. Install Oracle Linux 7.3 on it
  3. Configure it and install some extra packages
  4. Clean your soon to be AMI
  5. Export your VirtualBox machine as an OVA
  6. Create an S3 bucket and upload your OVA
  7. Use aws cli to import your image
  8. Start an instance from your new AMI, install the UEKR3 kernel.
  9. Create a new AMI from that instance in order to give it a sensible name

The nitty-gritty details:

Ad 1) Create a new Machine in VirtualBox

Create a new VirtualBox machine and start typing the name as “OL”, which sets the type to Linux and the version to Oracle (64 bit). Pick a name you like; I chose OL73. I kept the memory as it was (1024M). Create a hard disk: 10GB dynamically allocated (VDI) worked for me. I disabled the audio as I had no use for it and made sure one network interface was available. I selected the NatNetwork type because that gives the VM access to the network and lets me access it via a forwarding rule on just one interface. You need to log on via the VirtualBox console first to get the IP address; then you can use another preferred terminal to log in. I like PuTTY.
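The same machine can also be created from the command line with VBoxManage. A sketch under the settings described above (the VM and disk names are my own choices), shown as comments since it needs a VirtualBox installation:

```shell
# Hypothetical VBoxManage equivalent of the wizard steps above:
#   VBoxManage createvm --name OL73 --ostype Oracle_64 --register
#   VBoxManage modifyvm OL73 --memory 1024 --audio none --nic1 natnetwork
#   VBoxManage createmedium disk --filename OL73.vdi --size 10240   # 10GB dynamic VDI
#   VBoxManage storagectl OL73 --name SATA --add sata
#   VBoxManage storageattach OL73 --storagectl SATA --port 0 --device 0 \
#       --type hdd --medium OL73.vdi
echo "VBoxManage sketch"
```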

Attach the DVD with the Linux you want to use, I like Oracle Linux (https://otn.oracle.com), and start the VM.

Ad 2) Install Oracle Linux 7.3 on it

When you get the installation screen, do not choose “Install Oracle Linux 7.3” but use TAB to add “ net.ifnames=0” to the boot parameters (note the extra space) and press Enter.

Choose the language you need, English (United States) with a us keyboard layout works for me. Go to the next screen.

Before you edit “Date & Time” edit the network connection (which is needed for NTP).

Notice that the interface has the name eth0 and is disconnected. Turn the eth0 on by flipping the switch

And notice the IP address etc. get populated:

Leave the host name as it is (localhost.localdomain) because your cloud provider will change anything you set here anyway, and press the configure button. Then choose the General tab to check “Automatically connect to this network when it is available”, keep the settings on the Ethernet tab as they are, the same for 802.1X Security tab, DCB tab idem. On the IPv4 Settings tab, leave “Method” on Automatic (DHCP) and check “Require IPv4 addressing for this connection to complete”. On the IPv6 Settings tab change “Method” to Ignore and press the “Save” button and then press “Done”.

Next change the “Date & Time” settings to your preferred settings and make sure that “Network Time” is on and configured. Then press “Done”.

Next you have to press “Installation Destination”

Now if the details are in accordance with what you want press “Done”.

Your choice here has impact on what you can expect from the “cloud-init” tools.

For example: later on you can launch an instance with this soon-to-be AMI and start it with, say, a 20GiB disk instead of the 10GiB disk this image now has. The extra 10GiB can be used via a new partition and adding that to an LVM pool, which requires manual actions. But if you expect the cloud-init tools to resize your partition and extend the filesystem at first launch to make use of the extra 10GiB, then you need to change a few things.

Then press “Done” and you get guided through another menu:

Change LVM to “Standard Partition”

And then create the mount points you need by pressing “+” or click the blue link:

Now what you get are 3 partitions on your disk (/dev/sda). Notice that “/” is sda3 and is the last partition. When you choose this layout in your image, the cloud-init utils will resize that partition to use the extra 10GiB and extend the filesystem on it as well. It makes sense that they can only resize the last partition of the disk. This means that your swap size is fixed between these partitions and can only be increased on a different disk (or volume, as it is called in EC2) that you need to add to your instance when launching (or afterwards), leaving you with a gap of 1024MiB that is not very useful.
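To make the two growth paths concrete, here is a sketch with hypothetical device names, shown as comments since the commands only make sense on the actual instance (the “ol” volume group name is the Oracle Linux 7 default):

```shell
# Path A: "/" is the LAST plain partition - cloud-utils-growpart handles it at first boot:
#   growpart /dev/xvda 3        # grow partition 3 into the extra space
#   xfs_growfs /                # grow the XFS root filesystem to match
# Path B: LVM layout (my choice) - claim the extra space manually with a new partition:
#   parted /dev/xvda -- mkpart primary 10GiB 100%
#   pvcreate /dev/xvda3 && vgextend ol /dev/xvda3
#   lvextend -r -l +100%FREE /dev/ol/root    # -r also grows the filesystem
echo "disk growth sketch"
```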

You might know what memory size the instances you want to use this image for will have, and create the necessary swap up front (and maybe increase the disk from 10GiB to a size that caters for the extra swap needed).

I like LVM, so I chose to partition automatically and will use the LVM utils to claim the extra space by creating a third partition.

The other options I kept default:

And press “Begin Installation”. You will then see:

Set the root password to something you will remember; later I will disable it via cloud-init. There is no need to create another user: cloud-init will take care of that as well.

I ignored the warning message and pressed “Done” again.

Press the “Reboot” button when you are asked to, and when the machine restarts select the standard kernel (not UEK); this is needed for the Amazon VMImport tool. You have less than 5 seconds to stop the default (UEK) kernel from booting.

If you missed it just restart the VM.

Ad 3) Configure it and install some extra packages

Login with your preferred terminal program via NatNetwork (make sure you have a forwarding rule for ssh to the IP address you wrote down)

 

or use the VirtualBox console. If you forgot to write the IP down you can still find it via the VirtualBox console session:

You might have noticed that my IP address changed. That is because I forgot to set the network in VirtualBox to NatNetwork when making the screenshots. As you can see the interface name is eth0 as expected. If you forgot to set the boot parameter above you need to do some extra work in the Console to make sure that eth0 is used.

Check the grub settings:

cat /etc/default/grub

And look at GRUB_CMDLINE_LINUX (check that net.ifnames=0 is in there), and at GRUB_TIMEOUT. You might want to change the timeout from 5 seconds to give yourself a bit more time. The AWS VMImport tool will change it to 30 seconds anyway.

If you made some changes, you need to rebuild grub via:

grub2-mkconfig -o /boot/grub2/grub.cfg

Change the network interface settings:

vi /etc/sysconfig/network-scripts/ifcfg-eth0
Make it look like this:

TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
NAME=eth0
DEVICE=eth0
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes

Change dracut.conf. *** This is very important: in VirtualBox the XEN drivers do not get installed in the initramfs image, and that will prevent your AMI from booting in AWS if it is not fixed ***

vi /etc/dracut.conf

adjust the following two lines:

# additional kernel modules to the default
#add_drivers+=""

to:

# additional kernel modules to the default
add_drivers+="xen-blkfront xen-netfront"

Temporarily change default kernel:

(AWS VMImport has issues when the UEK kernels are installed or even present)

vi /etc/sysconfig/kernel

change:

DEFAULTKERNEL=kernel-uek

to:

DEFAULTKERNEL=kernel

Remove the UEK kernel:

yum erase -y kernel-uek kernel-uek-firmware

Check the saved_entry setting of grub:

cat /boot/grub2/grubenv
or: grubby --default-kernel

If needed set it to the RHCK (RedHat Compatible Kernel) via:

grub2-set-default <nr>

Find the <nr> to use via:

grubby --info=ALL

Use the <nr> of index=<nr> where kernel=/xxxx lists the RHCK (not a UEK kernel).

Rebuild initramfs to contain the xen drivers for all the installed kernels:

rpm -qa kernel | sed 's/^kernel-//' | xargs -I {} dracut -f /boot/initramfs-{}.img {}

Verify that the xen drivers are indeed available:

rpm -qa kernel | sed 's/^kernel-//' | xargs -I {} lsinitrd -k {} | grep -i xen
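The sed step in both pipelines just strips the leading package name, leaving the bare version string that names the matching initramfs file. For example, with one installed kernel (the version shown here is illustrative):

```shell
# "kernel-<version>" becomes "<version>", which plugs into /boot/initramfs-<version>.img
echo "kernel-3.10.0-514.26.2.el7.x86_64" | sed 's/^kernel-//'
```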

Yum repo adjustments:

vi /etc/yum.repos.d/public-yum-ol7.repo

Disable: ol7_UEKR4 and ol7_UEKR3.
You don’t want to get those kernels back with a yum update just yet.
Enable: ol7_optional_latest, ol7_addons

Install deltarpm, system-storage-manager and wget:

yum install -y deltarpm system-storage-manager wget

(Only wget is really necessary to enable/download the EPEL repo. The others are useful)

Change to a directory where you can store the rpm and install it. For example:

cd ~
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -Uvh epel-release-latest-7.noarch.rpm

Install rlwrap (useful tool) and the necessary cloud tools:

yum install -y rlwrap cloud-init cloud-utils-growpart

Check your Firewall settings (SSH should be enabled!):

firewall-cmd --get-default-zone
firewall-cmd --zone=public --list-all

You should see something like for your default-zone:
interfaces: eth0
services: dhcpv6-client ssh

Change SELinux to permissive (might not be really needed, but I haven’t tested it without this):

vi /etc/selinux/config
change: SELINUX=enforcing
to: SELINUX=permissive

Edit cloud.cfg:

vi /etc/cloud/cloud.cfg
change: ssh_deletekeys:    0
to: ssh_deletekeys:   1

change:

system_info:
  default_user:
    name: cloud-user

to:

system_info:
  default_user:
    name: ec2-user

Now cloud.cfg should look like this:

users:
 - default

disable_root: 1
ssh_pwauth:   0

mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
resize_rootfs_tmp: /dev
ssh_deletekeys:   1
ssh_genkeytypes:  ~
syslog_fix_perms: ~

cloud_init_modules:
 - migrator
 - bootcmd
 - write-files
 - growpart
 - resizefs
 - set_hostname
 - update_hostname
 - update_etc_hosts
 - rsyslog
 - users-groups
 - ssh

cloud_config_modules:
 - mounts
 - locale
 - set-passwords
 - yum-add-repo
 - package-update-upgrade-install
 - timezone
 - puppet
 - chef
 - salt-minion
 - mcollective
 - disable-ec2-metadata
 - runcmd

cloud_final_modules:
 - rightscale_userdata
 - scripts-per-once
 - scripts-per-boot
 - scripts-per-instance
 - scripts-user
 - ssh-authkey-fingerprints
 - keys-to-console
 - phone-home
 - final-message

system_info:
  default_user:
    name: ec2-user
    lock_passwd: true
    gecos: Oracle Linux Cloud User
    groups: [wheel, adm, systemd-journal]
    sudo: ["ALL=(ALL) NOPASSWD:ALL"]
    shell: /bin/bash
  distro: rhel
  paths:
    cloud_dir: /var/lib/cloud
    templates_dir: /etc/cloud/templates
  ssh_svcname: sshd

# vim:syntax=yaml

With this cloud.cfg you will get new ssh keys for the server when you deploy a new instance and a user “ec2-user” that has passwordless sudo rights to root; direct ssh access for root is disabled, as is password authentication for ssh.

**** Remember: when you reboot now, cloud-init will kick in and only console access to root will be available. Ssh to root is disabled ****
**** because you do not have an http server running that serves ssh keys for the new ec2-user that cloud-init can use ****
**** It might be prudent to validate that your cloud.cfg is a valid yaml file via http://www.yamllint.com/ ****
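Alternatively, since cloud-init itself depends on PyYAML, the same box can lint the file locally instead of pasting it into a website. A sketch, shown as a comment because PyYAML is only present once cloud-init is installed:

```shell
# Local alternative to yamllint.com (requires cloud-init/PyYAML on the box):
#   python -c 'import yaml; yaml.safe_load(open("/etc/cloud/cloud.cfg"))' \
#       && echo "cloud.cfg parses as valid YAML"
echo "yaml lint sketch"
```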

Check for the latest packages and update:

yum check-update
yum update -y

Ad 4) Clean your soon-to-be AMI

You might want to clean the VirtualBox machine of logfiles and executed commands etc:

rm -rf  /var/lib/cloud/
rm -rf /var/log/cloud-init.log
rm -rf /var/log/cloud-init-output.log

yum -y clean packages
rm -rf /var/cache/yum
rm -rf /var/lib/yum

rm -rf /var/log/messages
rm -rf /var/log/boot.log
rm -rf /var/log/dmesg
rm -rf /var/log/dmesg.old
rm -rf /var/log/lastlog
rm -rf /var/log/yum.log
rm -rf /var/log/wtmp

find / -name .bash_history -exec rm -rf {} +
find / -name .Xauthority -exec rm -rf {} +
find / -name authorized_keys -exec rm -rf {} +

history -c
shutdown -h now

Ad 5) Export your VirtualBox machine as an OVA

In VirtualBox Manager, choose “File > Export Appliance”, select the virtual machine you just shut down, and if needed change the location of the OVA file to be created.

 

Ad 6) Create an S3 bucket and upload your OVA

Log in to your AWS console choose the region where you want your AMI to be created and create a bucket there (or re-use one that you already have):

https://console.aws.amazon.com/s3/home?region=eu-west-1

(I used the region eu-west-1)

Set the properties you want; I kept the default properties and permissions.

Then press the create button.

 

Ad 7) Use aws cli to import your image

Before you can import the OVA file, you need to put it in the created bucket. You can upload it via the browser or use the aws cli. I prefer the aws cli because it always works; the browser upload gave me problems.

How to install the command line interface is described here: http://docs.aws.amazon.com/cli/latest/userguide/installing.html

On an Oracle Linux 7 machine it comes down to:

yum install -y python34.x86_64 python34-pip.noarch
pip3 install --upgrade pip
pip install --upgrade awscli
aws --version

Then it is necessary to configure it, which is basically (http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html):

aws configure

And answer the questions by supplying your credentials and your preferences. (The credentials below are fake.)

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: eu-west-1
Default output format [None]: json

The answers will be saved in two files:

~/.aws/credentials
~/.aws/config

To test the access try to do a listing of your bucket:

aws s3 ls s3://amis-share

Uploading the generated OVA file is then as simple as:

aws s3 cp /file_path/OL73.ova s3://amis-share

The time it takes depends on your upload speed.

Create the necessary IAM role and policy (http://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html):

Create a trust-policy.json file:

vi trust-policy.json
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": { "Service": "vmie.amazonaws.com" },
         "Action": "sts:AssumeRole",
         "Condition": {
            "StringEquals":{
               "sts:Externalid": "vmimport"
            }
         }
      }
   ]
}

Create the IAM role:

aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/ec2-user/trust-policy.json

Create the role-policy.json file, changing it to use your own S3 bucket (amis-share/*):

vi role-policy.json
{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource": [
            "arn:aws:s3:::amis-share"
         ]
      },
      {
         "Effect": "Allow",
         "Action": [
            "s3:GetObject"
         ],
         "Resource": [
            "arn:aws:s3:::amis-share/*"
         ]
      },
      {
         "Effect": "Allow",
         "Action":[
            "ec2:ModifySnapshotAttribute",
            "ec2:CopySnapshot",
            "ec2:RegisterImage",
            "ec2:Describe*"
         ],
         "Resource": "*"
      }
   ]
}

aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/ec2-user/role-policy.json

Now you should be able to import the OVA.

Prepare a json file with the following contents (adjust to your own situation):

cat imp_img.json 
{
    "DryRun": false,
    "Description": "OL73 OVA",
    "DiskContainers": [
        {
            "Description": "OL73 OVA",
            "Format": "ova",
            "UserBucket": {
                "S3Bucket": "amis-share",
                "S3Key": "OL73.ova"
            }
        }
    ],
    "LicenseType": "BYOL",
    "Hypervisor": "xen",
    "Architecture": "x86_64",
    "Platform": "Linux",
    "ClientData": {
        "Comment": "OL73"
    }
}

Then start the actual import job:

aws ec2 import-image --cli-input-json file:///home/ec2-user/imp_img.json

The command returns the name of the import job, which you can then use to get the progress:

aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgotr2g7

Or in a loop:

while true; do sleep 60; date; aws ec2 describe-import-image-tasks --import-task-ids import-ami-fgotr2g7; done

Depending on the size of your OVA it takes some time to complete. An example output is:

{
    "ImportImageTasks": [
        {
            "StatusMessage": "converting",
            "Status": "active",
            "LicenseType": "BYOL",
            "SnapshotDetails": [
                {
                    "DiskImageSize": 1470183936.0,
                    "Format": "VMDK",
                    "UserBucket": {
                        "S3Bucket": "amis-share",
                        "S3Key": "OL73.ova"
                    }
                }
            ],
            "Platform": "Linux",
            "ImportTaskId": "import-ami-fgotr2g7",
            "Architecture": "x86_64",
            "Progress": "28",
            "Description": "OL73 OVA"
        }
    ]
}
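Instead of reading the full JSON on every loop iteration, you can pull out just the interesting fields with the stock python (no jq needed). A sketch, fed here with a canned response instead of a live aws call:

```shell
# Extract Status and Progress from a describe-import-image-tasks response;
# in real use, replace the canned string with: $(aws ec2 describe-import-image-tasks ...)
aws_output='{"ImportImageTasks":[{"Status":"active","Progress":"28"}]}'
python3 -c '
import json, sys
task = json.loads(sys.argv[1])["ImportImageTasks"][0]
print(task["Status"], task["Progress"])
' "$aws_output"
```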

Example of an error:

{
    "ImportImageTasks": [
        {
            "SnapshotDetails": [
                {
                    "DiskImageSize": 1357146112.0,
                    "UserBucket": {
                        "S3Key": "OL73.ova",
                        "S3Bucket": "amis-share"
                    },
                    "Format": "VMDK"
                }
            ],
            "StatusMessage": "ClientError: Unsupported kernel version 3.8.13-118.18.4.el7uek.x86_64",
            "ImportTaskId": "import-ami-fflnx4fv",
            "Status": "deleting",
            "LicenseType": "BYOL",
            "Description": "OL73 OVA"
        }
    ]
}

Once the import is successful you can find your AMI in your EC2 Console:

Unfortunately, no matter what Description or Comment you supply in the json file, the AMI is only recognized via the name of the import job: import-ami-fgotr2g7. As I want to use the UEK kernel, I need to start an instance from this AMI and use that as a new AMI; via that process (step 9) I can supply a better name. Make a note of the snapshots and volumes that were created by this import job: you might want to remove them later to prevent storage costs for something you no longer need.

 

Ad 8) Start an instance from your new AMI, install the UEKR3 kernel

I want an AMI to run Oracle software, and I want the UEK kernel that has support. UEKR4 wasn’t supported for some of the software I recently worked with, which left me with the UEKR3 kernel.

Login to your new instance as the ec2-user with your preferred ssh tool and use sudo to become root:

sudo su -

Enable Yum Repo UEKR3

vi /etc/yum.repos.d/public-yum-ol7.repo

In the ol7_UEKR3 section, change enabled=0 to enabled=1.

Change the default kernel back to UEK:

vi /etc/sysconfig/kernel
change:
DEFAULTKERNEL=kernel
To:
DEFAULTKERNEL=kernel-uek

Update the kernel:

yum check-update
yum install kernel-uek.x86_64

Notice the changes in GRUB_CMDLINE_LINUX that were made by the import process:

cat /etc/default/grub

Notice some changes:

GRUB_TIMEOUT=30
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=ol/root rd.lvm.lv=ol/swap rhgb quiet net.ifnames=0 console=ttyS0"
GRUB_DISABLE_RECOVERY="true"

To verify which kernel will be booted next time you can use:

cat /boot/grub2/grubenv
grubby --default-kernel
grubby --default-index
grubby --info=ALL

Clean the instance again and shut it down in order to create a new AMI:

rm -rf  /var/lib/cloud/
rm -rf /var/log/cloud-init.log
rm -rf /var/log/cloud-init-output.log

yum -y clean packages
rm -rf /var/cache/yum
rm -rf /var/lib/yum

rm -rf /var/log/messages
rm -rf /var/log/boot.log
rm -rf /var/log/dmesg
rm -rf /var/log/dmesg.old
rm -rf /var/log/lastlog
rm -rf /var/log/yum.log
rm -rf /var/log/wtmp

find / -name .bash_history -exec rm -rf {} +
find / -name .Xauthority -exec rm -rf {} +
find / -name authorized_keys -exec rm -rf {} +

history -c
shutdown -h now

Ad 9) Create a new AMI from that instance in order to give it a sensible name

Use the instance id of the instance that you just shut down: i-050357e3ecce863e2 to create a new AMI.

To generate a skeleton json file:

aws ec2 create-image --instance-id i-050357e3ecce863e2 --generate-cli-skeleton

Edit the file to your needs or liking:

vi cr_img.json
{
"DryRun": false,
"InstanceId": "i-050357e3ecce863e2",
"Name": "OL73 UEKR3 LVM",
"Description": "OL73 UEKR3 LVM 10GB disk with swap and root on LVM thus expandable",
"NoReboot": true
}
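Instead of editing the skeleton by hand, you could also generate cr_img.json from a heredoc and sanity-check it before feeding it to the CLI. A sketch using the values from this example:

```shell
#!/bin/sh
# Generate the create-image input file from a variable.
instance_id=i-050357e3ecce863e2
cat > cr_img.json <<EOF
{
  "DryRun": false,
  "InstanceId": "$instance_id",
  "Name": "OL73 UEKR3 LVM",
  "Description": "OL73 UEKR3 LVM 10GB disk with swap and root on LVM thus expandable",
  "NoReboot": true
}
EOF

# Sanity-check that the result is valid JSON before using it.
if command -v python3 > /dev/null; then
  python3 -m json.tool cr_img.json > /dev/null && echo "cr_img.json is valid JSON"
fi
```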

And create the AMI:

aws ec2 create-image --cli-input-json file:///home/ec2-user/cr_img.json
{
"ImageId": "ami-27637b41"
}

It takes a few minutes for the AMI to become visible in the AWS EC2 web console.

Don’t forget to:

  • Deregister the AMI generated by VMImport
  • Delete the corresponding snapshot
  • Terminate the instance you used to create the new AMI
  • Delete the volumes of that instance, if they are not deleted on termination (expand the info box shown in AWS when you terminate the instance to see which volume it is, e.g.: The following volumes are not set to delete on termination: vol-0150ca9702ea0fa00)
  • Remove the OVA from your S3 bucket if you don’t need it for something else.
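The checklist above maps to a handful of AWS CLI calls. A sketch that writes them to a script you can review before running (all ids except the volume id mentioned above are placeholders):

```shell
#!/bin/sh
# Write the cleanup commands to a script instead of running them,
# so nothing is deleted by accident. Replace the placeholder ids
# (ami-/snap-/i-/bucket) with the ones from your own import job.
cat > cleanup.sh <<'EOF'
aws ec2 deregister-image --image-id ami-00000000          # AMI from VMImport
aws ec2 delete-snapshot --snapshot-id snap-00000000       # its snapshot
aws ec2 terminate-instances --instance-ids i-00000000     # the build instance
aws ec2 delete-volume --volume-id vol-0150ca9702ea0fa00   # leftover volume
aws s3 rm s3://your-bucket/OL73.ova                       # the uploaded OVA
EOF
cat cleanup.sh
```

After reviewing the generated file, run it with sh cleanup.sh.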

Launch an instance of your new AMI and start to use it.


The post AWS – Build your own Oracle Linux 7 AMI in the Cloud appeared first on AMIS Oracle and Java Blog.

Create a 12c physical standby database on ODA X5-2

Thu, 2017-07-06 07:06

ODA X5-2 simplifies and speeds up the creation of a 12c database quite considerably with oakcli. You can also take advantage of this command in the creation of physical standby databases, as I discovered when I had to set up Data Guard on as many as 5 production and 5 acceptance databases within a very short time.

I used the "oakcli create database …" command to create both primary and standby databases really fast, and went on from there to set up a Data Guard Broker configuration in max availability mode. Normally you would duplicate a primary database onto a skeleton standby database that has no data or redo files of its own and starts up with a pfile; working with 2 fully configured databases is a bit different. You do not have to change the db_unique_name after the RMAN duplicate, which proved to be quite an advantage, and the duplicate itself doesn't have to handle any spfile adaptations because the spfile is already there. But you may get stuck with some obsolete data and redo files of the original standby database that can fill up the filesystem. However, as long as you remove these files in time, just before the RMAN duplicate, this isn't much of an issue.

What I did to create 12c primary database ABCPRD1 on one ODA and physical standby database ABCPRD2 on a second ODA follows below. Nodes on oda1 are oda10 and oda11; nodes on oda2 are oda20 and oda21. The nodes I will use are oda10 and oda20.

-1- Create parameterfile on oda10 and oda20
oakcli create db_config_params -conf abcconf
-- parameters:
-- Database Block Size  : 8192
-- Database Language    : AMERICAN
-- Database Characterset: WE8MSWIN1252
-- Database Territory   : AMERICA
-- Component Language   : English
-- NLS Characterset     : AL16UTF16
file is saved as: /opt/oracle/oak/install/dbconf/abcconf.dbconf

-2- Create database ABCPRD1 on oda10 and ABCPRD2 on oda20
oda10 > oakcli create database -db ABCPRD1 -oh OraDb12102_home1 -params abcconf
oda20 > oakcli create database -db ABCPRD2 -oh OraDb12102_home1 -params abcconf
-- Root  password: ***
-- Oracle  password: ***
-- SYSASM  password - During deployment the SYSASM password is set to 'welcome1' - : ***
-- Database type: OLTP
-- Database Deployment: EE - Enterprise Edition
-- Please select one of the following for Node Number >> 1
-- Keep the data files on FLASH storage: N
-- Database Class: odb-02  (2 cores,16 GB memory)

-3- Set db_name to ABCPRD for both databases... this is a prerequisite for Data Guard
oda10 > sqlplus / as sysdba
oda10 > shutdown immediate;
oda10 > startup mount
oda10 > ! nid TARGET=sys/*** DBNAME=ABCPRD SETNAME=YES
oda10 > Change database name of database ABCPRD1 to ABCPRD? (Y/[N]) => Y
oda10 > exit

oda20 > sqlplus / as sysdba
oda20 > shutdown immediate;
oda20 > startup mount
oda20 > ! nid TARGET=sys/*** DBNAME=ABCPRD SETNAME=YES
oda20 > Change database name of database ABCPRD2 to ABCPRD? (Y/[N]) => Y
oda20 > exit

-4- Set db_name of both databases in their respective spfile as well as ODA cluster,
    and reset the db_unique_name after startup back from ABCPRD to ABCPRD1|ABCPRD2
oda10 > sqlplus / as sysdba    
oda10 > startup mount
oda10 > alter system set db_name=ABCPRD scope=spfile;
oda10 > alter system set service_names=ABCPRD1 scope=spfile;
oda10 > ! srvctl modify database -d ABCPRD1 -n ABCPRD
oda10 > shutdown immediate
oda10 > startup
oda10 > alter system set db_unique_name=ABCPRD1 scope=spfile;
oda10 > shutdown immediate;
oda10 > exit

oda20 > sqlplus / as sysdba    
oda20 > startup mount
oda20 > alter system set db_name=ABCPRD scope=spfile;
oda20 > alter system set service_names=ABCPRD2 scope=spfile;
oda20 > ! srvctl modify database -d ABCPRD2 -n ABCPRD
oda20 > shutdown immediate
oda20 > startup
oda20 > alter system set db_unique_name=ABCPRD2 scope=spfile;
oda20 > shutdown immediate;
oda20 > exit

-5- Startup both databases from the cluster.
oda10 > srvctl start database -d ABCPRD1
oda20 > srvctl start database -d ABCPRD2

Currently, 2 identically configured databases are active with the same db_name, which is a first condition for the Data Guard Broker configuration that follows. By just matching the db_name between the databases and keeping the db_unique_name as it was, the ASM database and diagnostic directory names remain as they are.

Also, the spfile entry in the cluster continues to point to the correct directory and file, as does the init.ora in $ORACLE_HOME/dbs. Because the standby starts with an existing and correctly configured spfile, you no longer need to retrieve it from the primary. This simplifies and reduces the RMAN duplicate code to just a one-line command, apart from login and channel allocation.

-6- Add Net Service Names for ABCPRD1 and ABCPRD2 to your tnsnames.ora on oda10 and oda20
ABCPRD1_DGB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oda10)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ABCPRD1_DGB)
    )
  )

ABCPRD2_DGB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = oda20)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ABCPRD2_DGB)
    )
  )

-7- Add as a static service to listener.ora on oda10 and oda20
oda10 > SID_LIST_LISTENER =
oda10 >   (SID_LIST =
oda10 >     (SID_DESC =
oda10 >       (GLOBAL_DBNAME = ABCPRD1_DGB)
oda10 >       (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
oda10 >       (SID_NAME = ABCPRD1)
oda10 >     ) 
oda10 >   )        

oda20 > SID_LIST_LISTENER =
oda20 >   (SID_LIST =
oda20 >     (SID_DESC =
oda20 >       (GLOBAL_DBNAME = ABCPRD2_DGB)
oda20 >       (ORACLE_HOME = /u01/app/oracle/product/12.1.0.2/dbhome_1)
oda20 >       (SID_NAME = ABCPRD2)
oda20 >     ) 
oda20 >   )

-8- Restart listener from cluster on oda10 and oda20
oda10 > srvctl stop listener
oda10 > srvctl start listener

oda20 > srvctl stop listener
oda20 > srvctl start listener

-9- Create 4 standby logfiles on oda10 only (1 more than the number of redo log groups, each with just 1 member)
    The RMAN duplicate takes care of the standby logfiles on oda20, so don't create them there now
oda10 > alter database add standby logfile thread 1 group 4 size 4096M;
oda10 > alter database add standby logfile thread 1 group 5 size 4096M;
oda10 > alter database add standby logfile thread 1 group 6 size 4096M;
oda10 > alter database add standby logfile thread 1 group 7 size 4096M;
oda10 > exit

-10- Start RMAN duplicate from oda20
oda20 > srvctl stop database -d ABCPRD2
oda20 > srvctl start database -d ABCPRD2 -o nomount
oda20 > *****************************************************************************
oda20 > ********* !!! REMOVE EXISTING DATA AND REDO FILES OF ABCPRD2 NOW !!! *********
oda20 > *****************************************************************************
oda20 > rman target sys/***@ABCPRD1 auxiliary sys/***@ABCPRD2
oda20 > .... RMAN> 
oda20 > run {
oda20 > allocate channel d1 type disk;
oda20 > allocate channel d2 type disk;
oda20 > allocate channel d3 type disk;
oda20 > allocate auxiliary channel stby1 type disk;
oda20 > allocate auxiliary channel stby2 type disk;
oda20 > duplicate target database for standby nofilenamecheck from active database;
oda20 > }
oda20 > exit

And there you are… primary database ABCPRD1 in open read-write mode and standby database ABCPRD2 in mount mode. The only things left to do now are the Data Guard Broker setup, and activating flashback and force logging on both databases.

-11- Setup broker files in shared storage (ASM) and start brokers on oda10 and oda20
oda10 > sqlplus / as sysdba
oda10 > alter system set dg_broker_config_file1='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD1/ABCPRD1/dr1ABCPRD1.dat' scope=both; 
oda10 > alter system set dg_broker_config_file2='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD1/ABCPRD1/dr2ABCPRD1.dat' scope=both;
oda10 > alter system set dg_broker_start=true scope=both;
oda10 > exit

oda20 > sqlplus / as sysdba
oda20 > alter system set dg_broker_config_file1='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD2/ABCPRD1/dr1ABCPRD2.dat' scope=both; 
oda20 > alter system set dg_broker_config_file2='/u02/app/oracle/oradata/datastore/.ACFS/snaps/ABCPRD2/ABCPRD1/dr2ABCPRD2.dat' scope=both;
oda20 > alter system set dg_broker_start=true scope=both;
oda20 > exit

-12- Create broker configuration from oda10
oda10 > dgmgrl sys/***
oda10 > create configuration abcprd as primary database is abcprd1 connect identifier is abcprd1_dgb;
oda10 > edit database abcprd1 set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oda10)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ABCPRD1_DGB)(INSTANCE_NAME=ABCPRD1)(SERVER=DEDICATED)))';
oda10 > add database abcprd2 as connect identifier is abcprd2_dgb maintained as physical;
oda10 > edit database abcprd2 set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oda20)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ABCPRD2_DGB)(INSTANCE_NAME=ABCPRD2)(SERVER=DEDICATED)))';
oda10 > enable configuration;
oda10 > edit database abcprd2 set state=APPLY-OFF;
oda10 > exit

-13- Enable flashback and force logging on both primary and standby database
oda10 > sqlplus / as sysdba
oda10 > alter database force logging;
oda10 > alter database flashback on;
oda10 > exit

oda20 > sqlplus / as sysdba
oda20 > alter database force logging;
oda20 > alter database flashback on;
oda20 > exit
oda20 > srvctl stop database -d abcprd2
oda20 > srvctl start database -d abcprd2 -o mount

oda10 > srvctl stop database -d abcprd1
oda10 > srvctl start database -d abcprd1

-14- Configure max availability mode from oda10
oda10 > dgmgrl sys/*** 
oda10 > edit database abcprd2 set state=APPLY-ON;
oda10 > edit database abcprd1 set property redoroutes='(LOCAL : abcprd2 SYNC)';
oda10 > edit database abcprd2 set property redoroutes='(LOCAL : abcprd1 SYNC)';
oda10 > edit configuration set protection mode as maxavailability;
oda10 > show database abcprd1 InconsistentProperties;
oda10 > show database abcprd2 InconsistentProperties;
oda10 > show configuration
oda10 > validate database abcprd2;
oda10 > exit

You should now have a valid 12c Max Availability Data Guard configuration, but you'd better test it thoroughly with some switchovers and a failover before taking it into production. Have fun!

The post Create a 12c physical standby database on ODA X5-2 appeared first on AMIS Oracle and Java Blog.

Virtualization on the Oracle Database Appliance S, M, L

Wed, 2017-07-05 15:44

One of the great advantages of the Oracle Database Appliance HA is the possibility of virtualization through Oracle VM. This virtualization wasn't possible for the other members of the Oracle Database Appliance family. Until now.

In patch 12.1.2.11.0, which has recently been released for the ODA S, M and L, virtualization is possible… through KVM. Is this a shocking change? No, KVM has been part of Linux for more than 10 years now. Desirable? Yes, I think so, and worth giving a bit of attention in this blog post.

You can read a very, very short announcement in the documentation of the Oracle Database Appliance.

Oracle has promised more information (including step-by-step guide) will be released very soon.

When installing the patch, Oracle Linux KVM will be installed, and there's no need for re-imaging your system as with the Oracle Database Appliance HA. With KVM it's possible to run applications on the ODA S, M and L, and in that way isolate the databases from the applications in terms of life cycle management.

In my opinion this could be a great solution for some customers consolidating their software, and for ISVs creating a solution-in-a-box.

 

But… (there's always a but) as I understand it (I haven't tested it yet), there are a few limitations:

– You may only use the Linux OS in the guest VM

– There's no support for installing an Oracle database in the guest VM

– Related to that, there's no capacity-on-demand for databases or applications in the guest VM

 

So the usability of this new feature may seem limited for now, but testing and using the feature has just begun!

The next big release will be in Feb/March 2018:

  • Databases in the VM’s
  • Each database will be running in its own VM
  • VM hard-partitioning support for licensing
  • Windows support

I’m very curious how Oracle will handle the standardization in the Oracle Database Appliance family in the future:

– ODACLI versus OAKCLI

– OracleVM versus KVM

– Web console user interface vs command-line

Will they merge, and if so, in what direction? Or will a new rising technology take the lead?

 

Regardz.

 

Resources:

Oracle Database Appliance Documentation: https://docs.oracle.com/cd/E86648_01/doc.121/e85591/managing-database-appliance-virtualized-platform.htm#GUID-971B6555-B1A6-4500-8187-C085989F25A9

The post Virtualization on the Oracle Database Appliance S, M, L appeared first on AMIS Oracle and Java Blog.

SSL/TLS: How to choose your cipher suite

Tue, 2017-07-04 11:00

For SSL/TLS connections, cipher suites determine to a large extent how secure the connection will be. A cipher suite is a named combination of authentication, encryption, message authentication code (MAC) and key exchange algorithms used to negotiate the security settings (here). But what does this mean, and how do you choose a secure cipher suite? The area of TLS is quite extensive and I cannot cover it in its entirety in a single blog post, but I will provide some general recommendations based on several articles researched online. At the end of the post I'll provide some suggestions for strong ciphers for JDK8.

Introduction

First I'll introduce what a cipher suite is and how it is agreed upon by client and server. Next I'll explain several considerations which can be relevant when choosing which cipher suites to use.

What does the name of a cipher suite mean?

The names of the cipher suites can be a bit confusing. You see for example a cipher suite called: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 in the SunJSSE list of supported cipher suites. You can break this name into several parts:

  • TLS: transport layer security (duh..) (see http://www.jscape.com/blog/cipher-suites)
  • ECDHE: The key exchange algorithm is ECDHE (Elliptic-curve Diffie–Hellman, ephemeral).
  • ECDSA: The authentication algorithm is ECDSA (Elliptic Curve Digital Signature Algorithm): the server authenticates itself with an elliptic-curve key, and the certificate authority signs the corresponding public key. ECDSA is, for example, what Bitcoin uses.
  • WITH_AES_256_CBC: This is used to encrypt the message stream (AES = Advanced Encryption Standard, CBC = Cipher Block Chaining). The number 256 indicates the key size in bits; the AES block size itself is always 128 bits.
  • SHA384: This is the so-called message authentication code (MAC) algorithm (SHA = Secure Hash Algorithm). It is used to create a message digest, or hash, of a block of the message stream, which can be used to detect whether the message contents have been altered. The number indicates the size of the hash in bits; larger is more secure.

If the key exchange algorithm or the authentication algorithm is not explicitly specified, RSA is assumed. See for example here for a useful explanation of cipher suite naming.
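The naming scheme above can be taken apart mechanically: everything before _WITH_ is protocol plus key exchange/authentication, everything after is encryption plus MAC. A small shell sketch using only parameter expansion (for suites without an explicit key exchange/authentication part, RSA is assumed, as noted above):

```shell
#!/bin/sh
suite=TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384

left=${suite%%_WITH_*}   # protocol + key exchange + authentication
right=${suite#*_WITH_}   # encryption + MAC

protocol=${left%%_*}     # first token: TLS
kx_auth=${left#*_}       # remainder: ECDHE_ECDSA
mac=${right##*_}         # last token: SHA384
enc=${right%_*}          # remainder: AES_256_CBC

echo "protocol=$protocol kx/auth=$kx_auth enc=$enc mac=$mac"
```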

What are your options

First it is a good idea to look at what your options are. This depends on the (client and server) technology used. If, for example, you are using Java 8, you can look here (SunJSSE) for supported cipher suites. If you want to enable the strongest ciphers available to JDK 8, you need to install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files (here). You can find a large list of cipher suites and which JDK version supports them (up to Java 8 in the case of the Java 8 documentation). Node.js uses OpenSSL for cipher suite support. This library supports a large array of cipher suites. See here.

How determining a cipher suite works

They are listed in preference order. How does that work? During the handshake phase of establishing a TLS/SSL connection, the client sends the cipher suites it supports to the server. The server chooses the cipher to use based on its own preference order and what the client supports.


This works quite efficiently, but a problem can arise when:

  • There is no overlap in ciphers the client and server can speak
  • The only overlap between client and server supported cipher is a cipher which provides poor or no encryption

This is illustrated in the image below. The language represents the cipher suite; the order/preference specifies the encryption strength. In the first illustration, client and server can both speak English, so the server chooses English. In the second image, the only overlapping language is French. French might not be ideal to speak, but the server has no other choice in this case than to accept speaking French or to refuse talking to the client.

Thus it is good practice for the server to only select specific ciphers which conform to your security requirements, but of course do take client compatibility into account.
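The selection process described above can be sketched in a few lines of shell; the suite lists here are made up for illustration. Note that the server's own preference order decides, which is exactly why weak suites should not be enabled on the server at all:

```shell
#!/bin/sh
# Server preference list (strongest first) and the client's offer.
server_prefs="TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_RC4_128_SHA"
client_offer="TLS_RSA_WITH_RC4_128_SHA TLS_RSA_WITH_AES_256_GCM_SHA384"

# Walk the server's list and pick the first suite the client also offers.
chosen=""
for s in $server_prefs; do
  for c in $client_offer; do
    [ "$s" = "$c" ] && { chosen=$s; break 2; }
  done
done

if [ -n "$chosen" ]; then
  echo "negotiated: $chosen"
else
  echo "handshake failure: no common cipher suite"
fi
```

Here the server skips its favourite ECDHE suite (the client does not offer it) and settles on its second choice rather than the weak RC4 suite the client listed first.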

How to choose a cipher suite

Basics

Check which cipher suites are supported

There are various mechanisms to check which ciphers are supported. For cloud services or websites you can use SSLLabs. For internal server checking, you can use various scripts available online such as this one or this one.

TLS 1.2

Of course you only want TLS 1.2 cipher suites since older TLS and SSL versions contain security liabilities. Within TLS 1.2 there is a lot to choose from. OWASP provides a good overview of which ciphers to choose here (‘Rule – Only Support Strong Cryptographic Ciphers’). Wikipedia provides a nice overview of (among other things) TLS 1.2 benefits such as GCM (Galois/Counter Mode) support which provides integrity checking.

Disable weak ciphers

As indicated before, if weak ciphers are enabled, they might be used, making you vulnerable. You should disable weak ciphers like those with DSS, DSA, DES/3DES, RC4, MD5, SHA1, null or anon in the name. See for example here and here. For example, do not use DSA/DSS: they become very weak if a bad entropy source is used during signing (here). For the other weak ciphers, similar liabilities can be looked up.

How to determine the key exchange algorithm

Types

There are several types of keys you can use. For example:

  • ECDHE: Use elliptic curve diffie-hellman (DH) key exchange (ephemeral). One key is used for every exchange. This key is generated for every request and does not provide authentication like ECDH which uses static keys.
  • RSA: Use RSA key exchange. Generating DH symmetric keys is faster than RSA symmetric keys. DH also currently seems more popular. DH and RSA keys solve different challenges. See here.
  • ECDH: Use elliptic curve diffie-hellman key exchange. One key is for the entire SSL session. The static key can be used for authentication.
  • DHE: Use normal diffie-hellman key. One key is used for every exchange. Same as ECDHE but a different algorithm is used for the calculation of shared secrets.

There are other key algorithms but the above ones are most popular. A single server can host multiple certificates such as ECDSA and RSA certificates. Wikipedia is an example. This is not supported by all web servers. See here.

Forward secrecy

Forward secrecy means that if a private key is compromised, past messages which were sent cannot also be decrypted. Read here. Thus it is beneficial for your security to have perfect forward secrecy (PFS).

The difference between ECDHE/DHE and ECDH is that for ECDH one key is used for the duration of the SSL session (which can be used for authentication), while with ECDHE/DHE a distinct key is used for every exchange. Since this key is not a certificate/public key, no authentication can be performed. An attacker could use their own key (here). Thus when using ECDHE/DHE, you should also implement client key validation on your server (2-way SSL) to provide authentication.

ECDHE and DHE give forward secrecy while ECDH does not. See here. ECDHE is significantly faster than DHE (here). There are rumors that the NSA can break DHE keys and ECDHE keys are preferred (here). On other sites it is indicated DHE is more secure (here). The calculation used for the keys is also different. DHE is prime field Diffie Hellman. ECDHE is Elliptic Curve Diffie Hellman. ECDHE can be configured. ECDHE-ciphers must not support weak curves, e.g. less than 256 bits (see here).

Certificate authority

The certificate authority you use to get a certificate from to sign the key can have limitations. For example, RSA certificates are very common while ECDSA is gaining popularity. If you use an internal certificate authority, you might want to check it is able to generate ECDSA certificates and use them for signing. For compatibility, RSA is to be preferred.

How to determine the message encryption mechanism

As a rule of thumb: AES_256 or above is quite common and considered secure. 3DES, EDE and RC4 should be avoided.

The difference between CBC and GCM

GCM provides both encryption and integrity checking (using a nonce for hashing) while CBC only provides encryption (here). You cannot use the same nonce with the same key to encrypt twice when using GCM. This protects against replay attacks. GCM is supported from TLS 1.2.

How to choose your hashing algorithm

MD5 (here) and SHA-1 (here) are old and should not be used anymore. As a rule of thumb, SHA-256 or above can be considered secure.
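A quick way to see the digest sizes involved, using the standard coreutils tools (digest lengths shown in hex characters, 4 bits per character):

```shell
#!/bin/sh
msg="hello"
sha1=$(printf '%s' "$msg" | sha1sum | awk '{print $1}')
sha256=$(printf '%s' "$msg" | sha256sum | awk '{print $1}')

# SHA-1 yields 160 bits = 40 hex characters; SHA-256 yields 256 bits = 64.
echo "sha1:   ${#sha1} hex chars"
echo "sha256: ${#sha256} hex chars"
```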

Finally

Considerations

Choosing a cipher suite can be a challenge. Several considerations play a role in making the correct choice here. Just to name a few:

  • Capabilities of server, client and certificate authority (required compatibility); you would choose a different cipher suite for an externally exposed website (which needs to be compatible with all major clients) than for internal security

  • Encryption/decryption performance
  • Cryptographic strength; type and length of keys and hashes
  • Required encryption features; such as prevention of replay attacks, forward secrecy
  • Complexity of implementation; can developers and testers easily develop servers and clients supporting the cipher suite?

Sometimes even legislation plays a role since some of the stronger encryption algorithms are not allowed to be used in certain countries (we will not guess for the reason but you can imagine).

Recommendation

Based on the above I can recommend some strong cipher suites to be used for JDK8 in preference order:

  • TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
  • TLS_RSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
  • TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
  • TLS_DHE_RSA_WITH_AES_256_GCM_SHA384

My personal preference would be to use TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 as it provides

  • Integrity checking: GCM
  • Perfect forward secrecy: ECDHE
  • Uses strong encryption: AES_256
  • Uses a strong hashing algorithm: SHA384
  • It uses a key signed by an RSA certificate authority, which is supported by most internal certificate authorities.

Since ECDHE does not provide authentication, you should tell the server to verify client certificates (implement 2-way SSL).

The post SSL/TLS: How to choose your cipher suite appeared first on AMIS Oracle and Java Blog.

How to start with Amazon cloud server

Mon, 2017-07-03 08:56

Just created an Amazon account and want to create a first server? Use the interactive guide (Launch Instance button) to create your own Oracle server within 5 minutes. Here are some practical notes on creating a new instance.

After logging in to AWS, select a region nearby, for the speed of network traffic. At the upper right of the webpage you can select a region. When you want to develop, the US East region is the region to select; creating an Oracle environment may be done in a region of your choice. With Amazon it is also good to know that pricing per server differs between regions. Before you start, you can check on the following link where you can get the best price for your environment (on demand).

https://aws.amazon.com/ec2/pricing/on-demand/

  • For a first Oracle environment you'd better choose an existing Amazon Machine Image (AMI). To create a new instance, press the Launch Instance button on the dashboard. In several steps you will be guided through creating a new instance. For this trial we show you how to create an Oracle environment for test usage.
  • For our first server I use a predefined image created by a colleague; there are several predefined Machine Images available. On the first tab we choose a Linux image. When creating a server for an Oracle database, it is also possible to start with RDS (Oracle database). On the second tab we can select an instance type. Which one depends on the software you want to install on the instance; for Oracle middleware applications such as Database or WebLogic, an instance with 2 cores and 4 or 8 GB of memory is suitable. Below is an explanation of the codes used by Amazon:

T<number> generic usage for development and test environments

M<number> generic usage for production environments

C<number> CPU-intensive usage

G<number> graphical solutions such as video streaming

R<number> memory-intensive systems

I<number> I/O-intensive systems

Costs per instance per hour are on the website; see the link above.

  • On the 3rd tab only the IAM role has to be set. Create a new one if you don't have one already. When creating a new one, select one for Amazon EC2 and then AdministratorAccess for your own environment. When saved, you have to push the refresh button before it is available in the dropdown box. Leave everything else as is to avoid additional costs.
  • On the 4th tab you can add storage to your instance. Select a separate disk instead of enlarging the existing one; this is better for an Oracle environment. So press the Add New Volume button for more disk space. Volume type EBS is right; only change the size you want to use. Volume type GP2 stands for General Purpose (see picture below).

  • An extra volume appended to the instance survives a reboot, keeping the instance reusable; otherwise you would have to install the Oracle software again.
  • The next tab is for adding tags to your Oracle environment. The Name tag will be displayed directly on the dashboard when looking at the instances. Other tags are optional, but very helpful for colleagues: department and name or id of the owner are good examples.
  • On the 6th tab you have to configure a security group; you want to avoid access for anyone, from anywhere, on default port 22. When you select the My IP option in the source drop-down menu, only your own IP will be allowed to connect on port 22. Other ports can be configured as well. Oracle database or WebLogic will use different ports, so you have to configure those too.
  • On the last tab, review and launch by pressing the Launch button. You will be asked to select or create a key; if this is your first server, create a key for use with PuTTY or other ssh applications. A private key will be generated, which you have to store carefully, because it is handed out only once. For using the key with PuTTY, see the following link: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/putty.html
  • You might also reuse a key if you already have one
  • Your system will be ready for use after several minutes (state = running). First run a yum update on the system by typing: sudo yum update -y
  • Now the system is ready to install Oracle software and create an Oracle database or WebLogic environment

The post How to start with Amazon cloud server appeared first on AMIS Oracle and Java Blog.

Remove Deploy Target from JDeveloper

Mon, 2017-07-03 08:06

When you use JDeveloper and deploy a project from JDeveloper to a server, JDeveloper remembers this, so you can easily deploy to it again. You can do this by right-clicking on the project and choosing Deploy and then the numbered deploy target of your choice (or from the menu bar: Build -> Deploy -> deploy target). But how do you remove such a deploy target if you don't want to use it anymore?

Take for example the screenshot below where you notice two deploy targets, “1… to SOADev” and “2… to SOAAcc”.
JDeveloper Deploy Targets
Suppose I want to remove "2… to SOAAcc" from the list.
Unfortunately, removing the server connection "SOAAcc" in JDeveloper didn't remove it from the list of deploy targets in the projects, and I could not find any other way of removing it in JDeveloper itself.

So I scanned for files containing "SOAAcc", and it turns out they are configured in user cached project files. On my Windows laptop these files are located in the folder C:\Users\[username]\AppData\Roaming\JDeveloper\system12.2.1.0.42.151011.0031\o.ide\projects.
You will find a cached project file for each project in JDeveloper, at least the ones that have a deploy target.
They have the same name as the project, followed by a hash, and the extension .jpr; in my case the file is named "OtmTripExecution602704b7.jpr". This file turns out to be an XML file, and in it there are two "hash" elements for "SOAAcc", see the screenshot below. Removing them both removes the target in JDeveloper (be sure JDeveloper is closed when you do this).
Remove Deploy Target XML
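To find out which cached project files still mention a stale connection name, a grep over the projects directory helps. A sketch against a fabricated sample tree (the real location is the %APPDATA%\JDeveloper\system<version>\o.ide\projects folder mentioned above, and the real .jpr files are full XML documents, not the one-line stand-ins used here):

```shell
#!/bin/sh
# Fabricate a sample projects directory with two cached .jpr files.
projects=./projects-demo
mkdir -p "$projects"
printf '<hash n="SOAAcc"/>\n' > "$projects/OtmTripExecution602704b7.jpr"
printf '<hash n="SOADev"/>\n' > "$projects/Other0a1b2c3d.jpr"

# List the cached project files that still reference the stale connection.
grep -l 'SOAAcc' "$projects"/*.jpr
```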

The post Remove Deploy Target from JDeveloper appeared first on AMIS Oracle and Java Blog.

after 27+ years in Oracle land I am forced to patch sqlplus

Fri, 2017-06-30 06:25

When starting to use sqlplus 12.2 I noticed that my SQL prompt was not changing to what login.sql told it to be. This did not happen in sqlplus 12.1 or lower versions.
Maybe this is a bug, maybe a new feature, I thought. The behaviour of sqlplus has indeed changed in 12.2 according to the documentation: sqlplus 12.2 no longer looks in the local directory (i.e. where you start sqlplus) for a login.sql file to run, but only looks for .sql files in directories indicated by environment variables (SQLPATH on Windows and ORACLE_PATH on Linux). However, even when setting these environment variables to the proper values, sqlplus still did not run my login.sql automatically. Ok, then I'll create an SR with Oracle Support. They confirmed that this odd behaviour is indeed a bug and that a patch is available for sqlplus: PATCH:25804573. So now I finally have a reason to patch sqlplus!

The post after 27+ years in Oracle land I am forced to patch sqlplus appeared first on AMIS Oracle and Java Blog.

ETL using Oracle DBMS_HS_PASSTHROUGH and SQL Server

Mon, 2017-06-12 10:04

While I prefer a “loosely coupled architecture” for replication between Oracle and SQL Server, sometimes a direct (database) link cannot be avoided. By using DBMS_HS_PASSTHROUGH for data extraction, the two other ETL processes (transformation and load) can be configured and administered with more flexibility, providing an almost acceptable level of “loosely coupled processing“.
Consider this as a really simple ETL config:

    Extract: Select SQL Server data with native SQL, using DBMS_HS_PASSTHROUGH and a PIPELINED function.
    Transform: Define a view on top of the function and transform column_names and column datatypes correctly.
    Load: SQL> insert into oracle_table select * from oracle_view;

When you use DBMS_HS_PASSTHROUGH, Oracle doesn’t interpret the data you receive from SQL Server. By default this interpretation is done by the dg4odbc process, and the performance benefit of bypassing this process is considerable. Also, you’re not restricted by the limitations of dg4odbc and can transform the data into anything you need.

Like dg4odbc, DBMS_HS_PASSTHROUGH depends on Heterogeneous Services (a component built into the Oracle database) to provide the connectivity between Oracle and SQL Server. Installation of unixODBC and a FreeTDS driver on Linux is required to set up the SQL Server datasource… installation and configuration steps can be found here and here. DBMS_HS_PASSTHROUGH is invoked through an Oracle database link. The package conceptually resides at SQL Server but, in reality, calls to this package are intercepted and mapped to one or more Heterogeneous Services calls. The FreeTDS driver, in turn, maps these calls to the API of SQL Server. More about DBMS_HS_PASSTHROUGH here.

Next is a short example of how to set up data extraction from SQL Server with DBMS_HS_PASSTHROUGH, with data transformation inside the definition of a view. In this example the SQL Server column names differ from the ones in Oracle in case, length and/or name and/or datatype, and are transformed by the view. NLS_DATE_FORMAT synchronization is an exception… it’s done in the extract package itself. The reason is that all dates in this particular SQL Server database use a specific format, and it doesn’t really obscure the code. But if you choose to keep all transformation code out of the extract package, you could create types with VARCHAR2s only, and put all your to_number, to_date and to_timestamp conversion code in the view definition.

Extract

-- create Oracle types for uninterpreted SQL Server data
CREATE OR REPLACE TYPE E01_REC 
AS OBJECT(
  C01    NUMBER(8),
  C02    VARCHAR2(25 CHAR),
  C03    VARCHAR2(3 CHAR),
  C04    NUMBER(8),
  C05    DATE,
  C06    DATE );
/

CREATE OR REPLACE TYPE E01_TAB AS TABLE OF E01_REC;
/

-- create the extract package
CREATE OR REPLACE PACKAGE E AUTHID DEFINER AS
--------------------------------------------------------- 
  FUNCTION E01 RETURN E01_TAB PIPELINED;
---------------------------------------------------------
END E;
/

-- create the extract package body
CREATE OR REPLACE PACKAGE BODY E AS
  v_cursor   BINARY_INTEGER;  
  v_out_e01  E01_REC:=E01_REC(NULL,NULL,NULL,NULL,NULL,NULL);
-------------------------------------------------------------------------
  v_stat_e01 VARCHAR2(1000):= 'Select SiteID
                                   , SiteName
                                   , SiteMnemonic
                                   , PointRefNumber
                                   , OpeningDate
                                   , ClosingDate
                               From ObjSite';
                                 
-------------------------------------------------------------------------
FUNCTION E01
RETURN E01_TAB PIPELINED
  IS
BEGIN
  execute immediate 'alter session set NLS_DATE_FORMAT = ''YYYY-MM-DD HH24:MI:SS'' ';
  v_cursor := DBMS_HS_PASSTHROUGH.OPEN_CURSOR@<DBLINK>;
  DBMS_HS_PASSTHROUGH.PARSE@<DBLINK>(v_cursor,v_stat_e01);
  WHILE DBMS_HS_PASSTHROUGH.FETCH_ROW@<DBLINK>(v_cursor) > 0
    LOOP
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,1,v_out_e01.c01);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,2,v_out_e01.c02);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,3,v_out_e01.c03);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,4,v_out_e01.c04);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,5,v_out_e01.c05);
      DBMS_HS_PASSTHROUGH.GET_VALUE@<DBLINK>(v_cursor,6,v_out_e01.c06);
    PIPE ROW(v_out_e01);
    END LOOP;
  DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@<DBLINK>(v_cursor);
  RETURN;
EXCEPTION
  WHEN NO_DATA_NEEDED THEN
    DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@<DBLINK>(v_cursor);
  WHEN OTHERS THEN
    DBMS_HS_PASSTHROUGH.CLOSE_CURSOR@<DBLINK>(v_cursor);
    DBMS_OUTPUT.PUT_LINE(SQLERRM||'--'||DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
    RAISE;
END E01;
------------------------------------------------------------------------
END E;
/

Transform

CREATE OR REPLACE FORCE VIEW SITE_VW
AS 
SELECT TO_NUMBER(C01) SITEID,
       C02            STATIONNAME,
       C03            SITEMNEMONIC,
       TO_NUMBER(C04) STATIONID,
       C05            OPENINGDATE,
       C06            CLOSINGDATE
FROM TABLE(E.E01);

Load

INSERT INTO SITE SELECT * FROM SITE_VW;
COMMIT;

The post ETL using Oracle DBMS_HS_PASSTHROUGH and SQL Server appeared first on AMIS Oracle and Java Blog.

Oracle SOA Suite: Want performance? Don’t log so much and clean up your database!

Fri, 2017-06-02 11:12

The Oracle SOA Suite infrastructure, especially composites, use the database intensively. Not only are the process definitions stored in the database, also a lot of audit information gets written there. The SOA infrastructure database, if not well managed, will grow and will eventually have detrimental effects on performance. In this blog post I will give some quick suggestions that will help you increase performance of your SOA Suite infrastructure on the database side by executing some simple scripts. These are some suggestions I have seen work at different customers. Not only do they help managing the SOA Suite data in the database, they will also lead to better SOA Suite performance.

Do not log too much!

Less data is faster. If you can limit database growth, management becomes easier.

  • Make sure the auditlevel of your processes is set to production level in production environments.
  • Think about the BPEL setting inMemoryOptimization. This can only be set for processes that do not contain any dehydration points, such as receive, wait, onMessage and onAlarm activities. If set to true, the completionPersistPolicy setting can be used to tweak what to do after completion of the process, for example to only save information about faulted instances in the dehydration store. In 12c this setting is part of the ‘Oracle Integration Continuous Availability’ feature and uses Coherence.

Start with a clean slate regularly

Especially for development environments it is healthy to regularly truncate all the major SOAINFRA tables. The script to do this is supplied by Oracle: MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/truncate/truncate_soa_oracle.sql

The effect of executing this script is that all instance data is gone: all tasks, long-running BPM processes, long-running BPEL processes, and recoverable errors. In short, everything except the definitions. The performance gain from executing the script can be significant. Consider, for example, running the script at the end of every sprint to start with a clean slate.

Delete instances

Oracle has provided scripts to remove old instances. These are scheduled by default in a clean installation of 12c. If you upgraded from 11g to 12c, this scheduling is not enabled by default. The auto-purge feature of 12c is described here.

What this feature does is execute the standard supplied purge scripts: MW_HOME/SOA_ORACLE_HOME/rcu/integration/soainfra/sql/soa_purge/soa_purge_scripts.sql

In a normal SOA Suite 12c installation you can also find the scripts in MW_HOME/SOA_ORACLE_HOME/common/sql/soainfra/sql/oracle

In 12c installations, patched purge scripts for older versions are also supplied. I would use the newest version of the scripts, since the patches sometimes fix logic that could cause data inconsistencies with consequences later on, for example during migrations.

What the scripts do is nicely described here. These scripts only remove instances you should not miss: running instances and instances which can be recovered are not deleted. In the script you can specify for how long data should be retained.

You should schedule this and run it daily. The shorter the period you keep information, the more you can reduce your SOAINFRA space usage and the better the performance of the database will be.

An example of how to execute the script:

DECLARE
MAX_CREATION_DATE TIMESTAMP;
MIN_CREATION_DATE TIMESTAMP;
BATCH_SIZE        INTEGER;
MAX_RUNTIME       INTEGER;
RETENTION_PERIOD  TIMESTAMP;
BEGIN
MIN_CREATION_DATE := TO_TIMESTAMP(TO_CHAR(sysdate-2000, 'YYYY-MM-DD'),'YYYY-MM-DD');
MAX_CREATION_DATE := TO_TIMESTAMP(TO_CHAR(sysdate-30, 'YYYY-MM-DD'),'YYYY-MM-DD');
RETENTION_PERIOD  := TO_TIMESTAMP(TO_CHAR(sysdate-29, 'YYYY-MM-DD'),'YYYY-MM-DD');
MAX_RUNTIME       := 180;
BATCH_SIZE        := 250000;

SOA.DELETE_INSTANCES(
MIN_CREATION_DATE    => MIN_CREATION_DATE,
MAX_CREATION_DATE    => MAX_CREATION_DATE,
BATCH_SIZE           => BATCH_SIZE,
MAX_RUNTIME          => MAX_RUNTIME,
RETENTION_PERIOD     => RETENTION_PERIOD,
PURGE_PARTITIONED_COMPONENT => FALSE);

END;
/

The script also has a variant which can be executed in parallel (which is faster) but that requires extra grants for the SOAINFRA database user.

Shrink space Tables

Deleting instances will not free up space on the filesystem of the server, nor does it ensure that the data is not fragmented over many tablespace segments. Oracle does not provide standard scripts for this, but does tell you it is a good idea and explains why here (9.5.2). In addition you can rebuild indexes. You should also, of course, gather statistics on the schema daily.
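As a sketch of what such a shrink script ends up executing, the statement generation can be expressed as follows. The table and index names passed in are placeholders, XX_SOAINFRA stands for your prefixed schema, and note that SHRINK SPACE requires row movement to be enabled on the table first:

```python
# Sketch: build the shrink/rebuild DDL for a set of SOAINFRA segments.
# Table and index names are illustrative; pass your own lists.
def shrink_statements(tables, indexes, schema="XX_SOAINFRA"):
    stmts = []
    for t in tables:
        # SHRINK SPACE fails unless row movement is enabled first
        stmts.append(f"ALTER TABLE {schema}.{t} ENABLE ROW MOVEMENT")
        stmts.append(f"ALTER TABLE {schema}.{t} SHRINK SPACE")
    stmts += [f"ALTER INDEX {schema}.{i} REBUILD" for i in indexes]
    return stmts
```

Feeding the generated statements to the database (for example via sqlplus) is left to your own scheduling mechanism.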

For 11g you can use this script to shrink space for tables and rebuild indexes. You should execute it as XX_SOAINFRA, where XX is your schema prefix.

LOBs

LOB columns are stored outside of the tables and can be shrunk separately. In the script below you should replace XX_SOAINFRA with your SOAINFRA schema. The script explicitly drops BRDECISIONINSTANCE_INDX5, since the table can become quite large in development environments and you cannot shrink it with the index still on it. This script might also overlap with the script above for tables with LOB columns. It only shrinks LOB segments of large tables where the LOB columns take more than 100 MB of space.

Other database suggestions

Redo log size

Not directly related to cleaning, but related to SOAINFRA space management. The Oracle database uses so-called redo-log files to store all changes to the database. In case of a database instance failure, the database can use these redo-log files to recover. Usually there are two or more redo-log files. These files are rotated: if one is full, the database moves to the next. When the last one is filled, it goes back to the first one, overwriting old data. Read more about redo logs here. Rotating a redo-log file takes some time. When the redo-log files are small, they are rotated a lot. The following provides some suggestions for analyzing whether increasing the size will help you. I’ve seen default values of 3 redo-log files of 100Mb. Oracle recommends 3 groups of 2Gb each here.

Clean up long running and faulted instances!

The regular cleaning scripts which you might run on production do not clean instances that share an ECID with an instance which cannot be cleaned, because the latter is, for example, still running or recoverable. If you have many such processes, you might gain a lot by, for example, restarting the running processes with a new ECID. You do have to build that functionality yourself, though. You should also think about keeping track of time for tasks. If a certain task is supposed to be open for only a month, let it expire after a month. If you do not check this, you might end up with large numbers of tasks which remain open. This means the instance which created the task will remain open, which in turn means you cannot undeploy the version of the process which has this task running. Life-cycle management is a thing!

Finally SOAINFRA is part of the infrastructure

Oracle SOA Suite logs a lot of audit information in the SOAINFRA database. You might be tempted to join that information to other business data directly on database level. This is not a smart thing to do.

If the information in the SOAINFRA database is used to, for example, query BPM processes or tasks – especially when this information is joined over a database link to another database with additional business data – you have introduced a time bomb. Performance will be directly linked to the amount of data in the SOAINFRA database, especially with long-running processes and tasks. You have now introduced a potential performance bottleneck not only for all your SOA composites but also for other parts of your application.

It is not a system of record

Secondly, the business might demand you keep the information for a certain period. Eventually they might even want to keep the data forever and use it for audits of historical records. This greatly interferes with purging strategies, which are required if you want your environment to keep performing well. If the business considers certain information important to keep, create a separate table and store the relevant information there.

The post Oracle SOA Suite: Want performance? Don’t log so much and clean up your database! appeared first on AMIS Oracle and Java Blog.

Open Letter to Oracle Professionals – treat yourself to two days of getting acquainted with your future – June 15 and 16, 2017, at the OGh Tech Experience 2017

Fri, 2017-06-02 00:25

Dear Oracle professional,

Whether you are an Oracle Database developer, DBA, ADF developer, BI specialist or SOA Suite specialist, curiosity always keeps itching – I hope. You want to get better at your craft, learn new things, at the very least keep up, and keep developing yourself. It is quite hard to make time and room for that amid the hectic pace and pressure of your daily work. An occasional week at Oracle OpenWorld or a conference in some faraway place would be nice, but is probably not that easy to arrange with your manager, your home front or your accountant.

On June 15 and 16, 2017, that international conference comes to you. On those days the Tech Experience conference, organized by OGh, takes place in De Rijtuigenloods in Amersfoort. More than 70 speakers, from the Netherlands and various other countries, present 90 sessions – among them Oracle ACEs and ACE Directors and several appealing experts from Oracle Corporation. A great deal of concrete, practical knowledge walks around there. This event offers you the opportunity to go broad and deep on all kinds of topics, in a very short time and very close to home, and to learn more about what you work on or run into in your daily practice.

The main themes of the conference are:

    • Database – both database administration and database development (SQL, PL/SQL)
    • Engineered Systems
    • Web & Mobile – User eXperience and User Interface development with, among others, ADF, Oracle JET and rich client JavaScript web development frameworks
    • Integration & API Management – web services, SOA (Suite), BPM (Suite), API management
    • Business Intelligence
    • Middleware Platform (CAF) – WebLogic & JCS

The cloud comes up in various sessions, security and architecture are recurring elements, and real-world applications play a major role. Many sessions also include demos.

The conference is small-scale, accessible and intimate enough to easily approach speakers and other attendees, ask for tips and solutions, and exchange experiences. The sessions generally go into depth, with speakers who are hands-on specialists in their subject. In these two very intensive days you can learn more about topics you already work with in practice – and hear how your peers approach things and solve challenges. You also get the chance to gather inspiration, develop a feel for future developments and the opportunities that arise from them, and experience the cross-pollination between different technologies.

I would wholeheartedly advise you to treat yourself to these two days. To take a step in your knowledge and your awareness of everything that is going on and coming up. And to experience what it is like to be together with hundreds of peers, exchange experiences and compare plans. An international conference, in your front yard. That opportunity does not come along very often, so seize it.

Details about the agenda can be found at https://www.ogh.nl/techex/, along with information about the impressive line-up of speakers, details of the sessions, the venue and the registration form. The cost for two days is 475 euros (175 euros for OGh members). Two colleagues can each attend one of the two days on a single registration.

I have attended quite a number of Oracle conferences in my career, and it is sometimes hard to explain to others – who were not there – what is so special and valuable about such an event. A few keywords: energy, enthusiasm, inspiration, fun, confirmation and recognition, sharing, connection, expertise, depth, and the interaction with like-minded souls from all kinds of organizations, countries and cultures. An international conference where the top specialists in various fields are present and directly approachable adds an important dimension to my daily work. Not all of that can be expressed concretely in value – but I sincerely hope you find room in your agenda and your budget to experience for yourself, on June 15 and 16 at the Tech Experience 2017 conference, what I find so special and valuable. My colleague Job Oprel recently wrote a blog article in which he also describes the value of attending a conference; read his story too: Why go to a seminar or conference?

NB: every now and then I speak to Oracle specialists who would have liked to attend last year’s AMIS25 Beyond the Horizon conference, but assumed their manager would not approve – in some cases without even asking. To help convince your manager I have written a second open letter, addressed to your manager: https://technology.amis.nl/2017/06/02/oproep-aan-managers-van-oracle-professionals-gun-je-team-de-tech-experience-2017-15-16-juni/ . If you are convinced, go to your manager and, if need be, let her or him read my open letter. As an Oracle professional you simply ought to attend this event. Make sure you are there.

 

Warm regards, and I hope to see you on June 15 and/or 16 in Amersfoort.

Lucas Jellema

The post Open Letter to Oracle Professionals – treat yourself to two days of getting acquainted with your future – June 15 and 16, 2017, at the OGh Tech Experience 2017 appeared first on AMIS Oracle and Java Blog.

Materials for Workshop Microservices Choreography with Kubernetes, Docker, Kafka and Node.JS

Fri, 2017-06-02 00:00

Thursday June 1st – yet another community event at AMIS, this one dedicated to microservices. What are microservices, and why do we think they are interesting? How are they different? How can they be implemented and how do you deploy them? What is a microservices platform and what generic capabilities should such a platform offer? How can we make microservices act together – to perform a workflow – if they are to be isolated and unaware of each other? These are some of the questions that we discussed.

image

Through demonstrations with Kubernetes and Docker, with Node.JS, Kafka and Redis as implementing technologies, we discussed a possible implementation of microservices choreography – stateless, horizontally scalable microservices participating in a workflow driven by events, without any direct interaction. This figure visualizes the topology that we discussed and subsequently worked on during a hands-on workshop:

image

 

All materials for the presentation and the workshop are on GitHub: https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017 – and all required software is open source and freely downloadable.

image

Feel free to give the workshop a spin.

During Oracle Code London I presented a shorter version of the presentation that was part of this workshop. You can watch that presentation from Oracle Code London on YouTube (https://www.youtube.com/watch?v=5Nf7acMU5WA&index=6&list=PLTwx5YGQHdjlLm8BP5l0Zig6BL6YUn2-6):

The post Materials for Workshop Microservices Choreography with Kubernetes, Docker, Kafka and Node.JS appeared first on AMIS Oracle and Java Blog.

Appeal to Managers of Oracle Professionals: treat your team to the Tech Experience 2017 (June 15 & 16)

Thu, 2017-06-01 23:23

Dear Manager or Team Leader of Oracle professionals,

If you have Oracle professionals on your team – whether they are Oracle DBAs, PL/SQL developers or middleware developers – personal growth and knowledge development is in all cases a continuous ambition. For your people – and if they occasionally forget that themselves, then at least for you as their coach. In that context, mid-June brings a not-to-be-missed opportunity to visit an event of international stature in two days, close to home, and to intensively soak up inspiration, knowledge and experience on all kinds of topics there. And also to learn about the daily problems and challenges of fellow professionals, and pick up solutions.

The Oracle Gebruikersgroep Holland (OGh) is organizing the Tech Experience 2017 conference on June 15 and 16 in the Rijtuigenloods in Amersfoort. The conference comprises 90 sessions by more than 70 top speakers from the Netherlands and the rest of the world, including Oracle ACEs and ACE Directors and several appealing experts from Oracle Corporation.

The main themes of the conference are:

  • Database – both database administration and database development (SQL, PL/SQL)
  • Engineered Systems
  • Web & Mobile – User eXperience and User Interface development with, among others, ADF, Oracle JET and rich client JavaScript web development frameworks
  • Integration & API Management – web services, SOA (Suite), BPM (Suite), API management
  • Business Intelligence
  • Middleware Platform (CAF) – WebLogic, Exalogic

The cloud comes up in various sessions, security and architecture are recurring elements, and real-world applications play a major role.

The conference is small-scale, accessible and intimate enough to easily approach speakers and other visitors and ask for tips, solutions and experiences. The sessions generally go into depth, with speakers who are hands-on specialists in their subject, and have the value of a mini-training. And more than a narrowly targeted training, this is an event where gathering inspiration and the cross-pollination between different fields provide the extra value.

Attendees get the opportunity, in two very intensive days, to learn more about topics they already work with in practice – and to hear how experienced practitioners approach things and solve challenges. It is also a great opportunity to look ahead at topics that may already be on their way, or soon will be. Caught up in the rush of the day and a constant focus, IT professionals sometimes end up in a kind of tunnel – the Tech Experience offers the chance to break out of it.

As CTO of AMIS I am not objective. I am wildly enthusiastic about this conference: the program and the speakers, the topics, and the energy we are going to generate and share during those two days. This highlight provides inspiration and enthusiasm for Oracle specialists to draw on for a long time. So my colleagues should go there, in my opinion. And as far as I am concerned, that goes for all Oracle professionals in the Netherlands – so certainly also for your staff. Grant them this piece of education; if need be, make them set aside the daily work for two days and explore the world (of Oracle). In Amersfoort, at the Tech Experience conference.

Details about the agenda can be found at https://www.ogh.nl/techex/, along with information about the impressive line-up of speakers, details of the sessions, the venue and the registration form. The cost for two days is 475 euros (175 euros for OGh members). Two colleagues can each attend one of the two days on a single registration.

Kind regards,

Lucas Jellema

CTO of AMIS

The post Appeal to Managers of Oracle Professionals: treat your team to the Tech Experience 2017 (June 15 & 16) appeared first on AMIS Oracle and Java Blog.

Configuring Oracle Traffic Director 12c with WebGate

Sun, 2017-05-28 15:01

At a recent customer install, I was faced with configuring Oracle Traffic Director (OTD) 12.2.1.2.0 Webgate with Oracle Access Manager.

Deploying Webgate on OTD 12c is very well described in the documentation; see A Configuring OAM Agent (WebGate) for Oracle Traffic Director 12.2.1.2.

There is, however, a flaw in the documentation. I came across it when I reached the point where Webgate gets configured in the conf files of OTD.

When you configure Webgate for OTD 12c, the OTD conf files such as magnus.conf and virtual-server-*-obj.conf are updated (on a collocated installation).
If you follow the documentation to the letter, you will end up with conf files that either have no WebGate configuration in them, or with the configuration dedicated to the first OTD instance on both servers. In the latter case, the second instance will no longer start.
I created a Service Request with Oracle Support to address the issue. They didn’t have a solution for the problem and I ended up being bounced between the OTD and the WebGate support teams. Finally one of the guys from the WebGate team really tried to help me, but couldn’t resolve the issue. So I went ahead and solved the problem myself, as I will describe below.

When you reach step 5 of the documentation, A.2 Configuring Oracle Traffic Director 12c WebGate:

Change the EditObjConf line as follows

./EditObjConf -f Domain_Home/config/fmwconfig/components/
OTD/otd_configuration_name/config/virtual_server_name-obj.conf -w webgate_instanceDirectory [-oh Oracle_Home] -ws otd

For example

OTD Configuration Name: TST1
OTD Instance 1: otd_TST1_host1.domain.local
OTD Instance 2: otd_TST1_host2.domain.local
Domain home: /u01/app/oracle/config/domains/otd_domain_tst

./EditObjConf -f /u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/
OTD/TST1/config/virtual-server-tst1-obj.conf -w /u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/
OTD/instances/otd_TST1_host1.domain.local -oh $ORACLE_HOME -ws otd

Where TST1 is the name of the configuration and host1.domain.local is the name of the first server.
This will change the magnus.conf and virtual-server-tst1-obj.conf for Webgate.
In virtual-server-tst1-obj.conf there are no instance specific references.
However in the magnus.conf there are references to the first instance, since this is the one that we used with EditObjConf.

This is what the magnus.conf in the OTD configuration section (on global level) looks like after EditObjConf command.
Notice the hardcoded instance name in four places.

less /u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/TST1/config/magnus.conf
#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#

Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules"
funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401,OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local"
#ESSO#Init fn="load-modules"
funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local"
Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"

# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.
#ESSO#Init fn="EssoBasicAuthInit"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN

Leaving it like this will result in the hardcoded instance name being distributed to all instances. Hence only one instance would start.

Now how to fix this.

Open magnus.conf with an editor.

Replace the hardcoded instance name with a variable called ${INSTANCE_NAME}.
(I picked up the existence of this variable from the server.xml, which also lives at the OTD configuration level and gets distributed to all instances.)
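The substitution itself is mechanical, so you could also script it. A sketch in Python; the instance name is the one from this post’s example and the file content is whatever magnus.conf your EditObjConf run produced:

```python
# Sketch: replace the hardcoded OTD instance name in magnus.conf content
# with the ${INSTANCE_NAME} variable, making the file instance-neutral.
def parametrize_magnus(conf_text, instance_name="otd_TST1_host1.domain.local"):
    return conf_text.replace(instance_name, "${INSTANCE_NAME}")
```

Read the file, pass its content through this function, and write it back (with JDeveloper-style caution: take a backup first).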

In our example the magnus.conf now looks like this.

#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#
Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules"
funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401,OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}"
#ESSO#Init fn="load-modules"
funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}"
Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"
# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.

#ESSO#Init fn="EssoBasicAuthInit"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd" 
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/${INSTANCE_NAME}" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN
Now, to distribute these files:

Open Enterprise Manager Fusion Middleware Control 12c and go to the OTD Configuration.

Go to Virtual Server section and click Lock and Edit

EM will show the Pull Components Changes bar.

DON’T pull the changes!
This will replace the conf files of the configuration with those currently in use by the instances.

Instead, make a minor, insignificant change in the configuration.
For example, add a hostname to the Virtual Server Settings. (We will remove it later.)
Now activate the changes.
Again, don't pull the changes.

Discard the Instance Changes and Activate Changes.

Again Discard Changes

And finally, Discard Changes to distribute the correct conf files to the instances.

Now let’s look at the magnus.conf on both instances. (We already know that the virtual-server-tst1-obj.conf is the same everywhere)

On Instance 1

#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#
Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules"
funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401,OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/
components/OTD/instances/otd_TST1_host1.domain.local"
#ESSO#Init fn="load-modules" funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host1.domain.local" Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"

# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.

#ESSO#Init fn="EssoBasicAuthInit"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host1.domain.local" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN

And on Instance 2

#
# Copyright (c) 2011, 2016, Oracle and/or its affiliates. All rights reserved.
#
Init fn="load-modules" shlib="libwebapp-firewall.so"

# Oracle WebGate Init FNs start #WGINITFN
Init fn="load-modules" funcs="OBWebGate_Init,OBWebGate_Authent,OBWebGate_Control,
OBWebGate_Err,OBWebGate_Handle401,OBWebGate_Response"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local"
#ESSO#Init fn="load-modules"
funcs="EssoBasicAuthInit,EssoBasicAuth,EssoClean"
shlib="/u01/app/oracle/product/otd1221/webgate/otd/lib/webgate.so"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local"
Init fn="OBWebGate_Init"
obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local" Mode="PEER"
#WebGateLockFileDir="<some_local_dir>"

# WebGateLockFileDir: Optional directive specifying the location to create
# webgate lock files.
#
# If configured, then all webgate lock files will be created under
# <WebGateLockFileDir>/<Hash of WebGateInstancedir>. The hash subdir is to
# ensure uniqueness for each webserver instance and avoid locking conflicts
# if two different instances have configured the directive with same value.
#
# If the dir does not exist before, will try to create it first. If dir
# creation failed or the directive not configured, webgate falls back to old
# model, i.e. use same location as original file that lock is based upon.
#
# This directive is useful when webgate instance is located on NFS mounted
# disks and performance greatly impacted. Configure it to local dir will solve
# the issue.

#ESSO#Init fn="EssoBasicAuthInit"

obinstalldir="/u01/app/oracle/product/otd1221/webgate/otd"
obinstancedir="/u01/app/oracle/config/domains/otd_domain_tst/config/fmwconfig/components/OTD/instances/otd_TST1_host2.domain.local" Mode="PEER"
# Oracle WebGate Init FNs end #WGINITFN

The files look good on both instances.

Now Restart Instances

Validate Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host2.domain.local
Validate Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host1.domain.local
------------------------------------------------
Perform Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/TST1
Perform Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host2.domain.local
Perform Restart operation on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host1.domain.local
------------------------------------------------
Checking operation status on target /Domain_otd_domain_tst/otd_domain_tst/TST1
Operation Restart on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host2.domain.local succeeded
Operation Restart on target /Domain_otd_domain_tst/otd_domain_tst/otd_TST1_host1.domain.local succeeded

 

Now you’re good to go with WebGate correctly configured on OTD 12c.

I put the solution in the service request and got thanks from the guys at Oracle Support. They told me they were going to change the documentation to match my solution. It is always nice to get this kind of appreciation from them.

The post Configuring Oracle Traffic Director 12c with WebGate appeared first on AMIS Oracle and Java Blog.

Docker, WebLogic Image on Microsoft Azure Container Service

Wed, 2017-05-24 09:37

This blog series shows how to get started with WebLogic and Docker – in 3 different Clouds:

This blog is running a WebLogic Docker Container image from the Docker Hub registry on the Microsoft Azure Container Service.

Starting point & Outline

Starting point for this blog is:

  • A computer running Ubuntu and with Docker installed
  • A WebLogic Docker Container Image in a private Docker Hub repository, as described in this blog
  • Access to Microsoft Azure, e.g. via a trial subscription

The blog itself consists of 2 main parts:

  1. Create an Azure Container Service
  2. Run the container image from the Docker Hub repository on the created Cloud Service

 

Create an Azure Container Service

The Azure Container Service offers the choice between using Docker Swarm, DC/OS, or Kubernetes for orchestration/management of the Docker container solution. For our specific use case, I picked Docker Swarm.

The high level steps for creating your Azure Container Service are:

  • create an SSH RSA public key
  • deploy an Azure Container Service cluster (via the Azure portal), by using an Azure Resource Manager template (for Docker Swarm)

Let’s get started.

  • create an SSH RSA public key

Log in into the Ubuntu machine and generate the key:

developer@developer-VirtualBox:~$ ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/developer/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/developer/.ssh/id_rsa.

Your public key has been saved in /home/developer/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:Lpm8BrZoQscz1E6Maq9J0WdjjLjHAP5fxZXBdlrdzMY developer@developer-VirtualBox

The key's randomart image is:

+---[RSA 2048]----+
|          ..  .+.|
|           ooo .E|
|.   +     .o+  . |
|o ooo+  . ..     |
| =+oo*  So       |
| +*=*o.+.        |
|ooo*oo=..        |
|o =.o oo         |
| =.  o.          |
+----[SHA256]-----+
developer@developer-VirtualBox:~$
  • deploy an Azure Container Service cluster (via the Azure portal), by using an Azure Resource Manager template (for Docker Swarm)

First, log in to the Microsoft Azure Portal at http://portal.azure.com. Here, click the + sign, select ‘Containers’ and then ‘Azure Container Service’:

The next screen appears:

Click the Create button.

Complete the settings like shown in the figure above and click OK to move to the next page:

For the Master configuration, complete the settings as shown above. Use the SSH key that was created in step (1). Note that the ‘Master’ is the Manager node in a Docker Swarm. One Master node is enough for this simple configuration. Next, click OK.

That brings us to the Agent configuration page, where an Agent is actually a Docker Swarm Worker node. We need only 1 agent. For the Virtual Machine size, the DS2 profile is chosen, which has 2 CPU cores and 7 GB of RAM. That should be enough to run the WebLogic container on.

Click OK to continue:

Review the Summary page, and then click OK to start creation of your Azure Container Service:

After some time, your Azure Container Service will be created!

 

Run the WebLogic image from Docker Hub registry

Now, we will start to get the WebLogic image from the Docker Hub registry running on the Azure Container Service. This is achieved from the command line of our local Ubuntu machine – the one that also has Docker installed.

The following steps will be done:

  • Make a tunnel to the Master node
  • Run the WebLogic container
  • Add a LoadBalancer rule for forwarding port 7001
  • Test

Let’s get started.

  • Make an SSH tunnel to the Master node

From the local Ubuntu machine, make an SSH tunnel to the Master node. In that way, docker commands from the Ubuntu machine will be handled by the Docker Swarm manager on the Master node. First, establish the tunnel, set the Docker host that will be used by the local Ubuntu machine and then list the images. The output of the ‘docker images list’ command shows what Docker images are available on the Master node – in our case: none (as this is a fresh installation):

developer@developer-VirtualBox:~$ ssh -fNL 2375:localhost:2375 -p 2200 lgorisse@lgomgmt.northeurope.cloudapp.azure.com

developer@developer-VirtualBox:~$ export DOCKER_HOST=:2375

developer@developer-VirtualBox:~$ docker images list

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

developer@developer-VirtualBox:~$
  • Run the WebLogic container

Now, give the commands below on the local Ubuntu machine to start the WebLogic container on the Azure Container Service. The WebLogic container will run on the agent machine.

The following steps have to be done:

  • Login into the Docker Hub, using your Docker Hub account. This is necessary because the Docker Hub registry that I used is a private registry (= password protected)
  • Pull the WebLogic Container image with the docker pull command
  • Run the image with the port mapping for port 7001
  • Notice in the output below that I also had a yeasy/simple-web container running
developer@developer-VirtualBox:~$ docker login

Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.

Username: lgorissen

Password:

Login Succeeded

developer@developer-VirtualBox:~$ docker pull lgorissen/myfirstweblogic

Using default tag: latest

swarm-agent-595FAAF3000001: Pulling lgorissen/myfirstweblogic:latest... : downloaded

developer@developer-VirtualBox:~$ docker run -d -p 7001:7001 lgorissen/myfirstweblogic

6506b88dc7cc166df55c470e4e7f9732cfb55353c8a1a84d8048c7689c886a7c

developer@developer-VirtualBox:~$ docker container ps

CONTAINER ID        IMAGE                       COMMAND                  CREATED                  STATUS              PORTS                                                   NAMES

6506b88dc7cc        lgorissen/myfirstweblogic   "startWebLogic.sh"       Less than a second ago   Up 21 seconds       5556/tcp, 7002/tcp, 8453/tcp, 10.0.0.5:7001->7001/tcp   swarm-agent-595FAAF3000001/competent_allen

fd4c32b1fd19        yeasy/simple-web            "/bin/sh -c 'pytho..."   8 minutes ago            Up 8 minutes        10.0.0.5:80->80/tcp                                     swarm-agent-595FAAF3000001/pensive_pike

developer@developer-VirtualBox:~$
  • Add a LoadBalancer rule for forwarding port 7001

In this set-up, the Agent machine can’t be reached on port 7001. A rule that forwards traffic for this port has to be added to the Load Balancer. Creating that rule can be done in the Azure Portal. First, look up the resource groups in the portal:

Open the weblogic-demo resource group by clicking on it:

Click on the Load balancer for the agent (remember, our WebLogic container is running on the agent virtual machine). That brings up the Load balancer screen:

Click on the Add sign to add a new load balancer rule for port 7001:

Enter front-end, back-end and port numbers to establish the new route and save it.

 

  • Test

Again look into the resource groups in the Azure Portal to find the public dns name for the agents:

From the figure above, we see that the url to use for accessing the WebLogic container is: http://lgoagents.northeurope.cloudapp.azure.com:7001/

Nifty :-S

The post Docker, WebLogic Image on Microsoft Azure Container Service appeared first on AMIS Oracle and Java Blog.

Docker, WebLogic Image on Oracle Container Cloud Service

Wed, 2017-05-24 09:36

This blog series shows how to get started with WebLogic and Docker – in 3 different Clouds:

Starting point & Outline

Starting point for this blog is:

  • A computer with a browser
  • A WebLogic Docker Container Image in a private Docker Hub repository, as described in this blog [todo: make reference]
  • Identity Domain Administrator access to the Oracle Public Cloud, e.g. via a trial account

The blog itself consists of 2 main parts:

  1. Create a Container Cloud Service
  2. Run the container image from the Docker Hub repository on the created Cloud Service
Create a Container Cloud Service

First step is to create an Oracle Container Cloud Service. Start by logging in as Identity Domain Administrator. When you have a trial account, you will have received a mail like the one below:

Login into MyServices Administration using the url from the mail as shown above:

Click on ‘Create Instance’ to start provisioning of the Container Cloud Service. A pop-up is now shown:

Pick the Container (cloud service), which brings you to the first page of the wizard for creation of the Container Cloud Service:

Click ‘Create Service’ button:

On this page, create the SSH Public Key: click the Edit button and select ‘Create a New Key’:

Click Enter:

Download the key. Continue by clicking Next. That brings you to the overview page:

Review the data and then click ‘Create’ to start creation of the Container Cloud Service:

Creating the service takes several minutes – mine took 11. The page will then look like this:

Click on the ‘WebLogicService’, which brings you to a more detailed overview of your created Container Cloud Service:

Like shown in the above picture: go to the ‘Container Console’:

Login with the admin username and password that you entered when creating the Container Cloud Service. This will now bring you to the Dashboard of the Cloud Container Service named WebLogicService:

In this console, the main concepts of the Oracle Container Cloud Service are clearly visible:

  • Task: action created in the Oracle Container Cloud Service as a response to your requests
  • Event: individual, discrete operations on the Oracle Container Cloud Service
  • Service: comprises the information required for running a Docker image on a host
  • Stack: comprises the configuration for running a set of services as a single entity
  • Deployment: a deployed service or stack on the Oracle Container Cloud Service
  • Container: a Docker container, i.e. a process created to run a Docker image
  • Image: a Docker image
  • Hosts: the Oracle Compute virtual machines that are managed by the Oracle Container Cloud Service (also: worker nodes)
  • Resource Pools: a combination of hosts into groups of compute resources
  • Registry: a Docker registry, i.e. a system for storing and sharing Docker images
  • Tags: labels for organizing resource pools and the hosts within them (used for management)

 

Run the WebLogic image from Docker Hub registry

With the Oracle Container Cloud Service up and running, we can start running the WebLogic image that we have in Docker Hub. The following has to be done:

  1. Update the registry to access the private Docker Hub repository where the WebLogic image is stored
  2. Create a Service for the WebLogic container image
  3. Deploy the Service
  4. Test the deployment

Steps are shown below:

  • Update the registry to access the private Docker Hub repository where the WebLogic image is stored

Goto registries:

Click the Edit button and add the authorization details for the private Docker Hub registry:

 

  • Create a Service for the WebLogic container image

Next, we’ll create a Service that describes how we want to deploy the WebLogic container image. Goto the Services page and click on the New Service button:

In the Service Editor pop-up, enter the values like shown below. Note the port mapping settings!

Click Save and note that the Service is added:

 

  • Deploy the Service

 

In the previous screen, click the green Deploy button to start deployment of the service.

Accept the default settings and click Deploy. After some time, i.e. when the deployment has completed, the Deployments tab will be colored green:

 

  • Test the deployment

First, check that the image has actually been pulled in on your host, by looking at the Images tab:

Then, check for the container to be up-and-running in the Containers tab:

Now, as a final check, you’ll want to log in to the WebLogic console.

First, go to the Hosts tab, and find the public ip address:

Then, point your browser to the familiar url, here: http://129.150.70.46:7001/console

… and login to observe that the WebLogic server has the developer_domain, which is a clear indication that it is indeed running in our container from Docker Hub.

 

The post Docker, WebLogic Image on Oracle Container Cloud Service appeared first on AMIS Oracle and Java Blog.

Node.js run from GitHub in Generic Docker Container backed by Dockerized Redis Cache

Sun, 2017-05-21 23:41

In a previous article I talked about a generic Docker Container Image that can be used to run any Node.js application directly from GitHub or some other Git instance by feeding the Git repo url as Docker run parameter (see https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/). In this article, I create a simple Node.js application that will be pushed to GitHub and run in that generic Docker container. It will use a Redis cache that is running in a separate Docker Container.

image

The application does something simple: each incoming HTTP request increments a request counter, and the current value of that counter is returned. The earlier implementation of this functionality used a local Node.js variable to keep track of the request count. This approach had two spectacular flaws: horizontal scalability (adding instances of the application fronted by a load balancer of sorts) led to strange results, because each instance kept its own request counter; and a restart of the application caused the count to be reset. The incarnation we discuss in this article uses a Redis cache as a shared store for the request counter, one that will also survive a restart of the Node.js application instances. Note: of course this means Redis becomes a single point of failure, unless we cluster Redis too and/or use a persistent file as backup. Both options are available but are out of scope for this article.

Sources for this article can be found on GitHub: https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017/tree/master/part1 .

Run Redis

To run a Docker Container with a Redis cache instance, we only have to execute this statement:

docker run -d --name redis -p 6379:6379 redis

We run a container based on the Docker image called redis. The container is also called redis and its internal port 6379 is exposed and mapped to port 6379 on the host. That is all it takes. The image is pulled and the container is started.

image

Create Node.js Application RequestCounter – Talking to Redis

To talk to Redis from a Node.js application, there are several modules available. The most common and generic one seems to be called redis. To use it, I have to install it with npm:

npm install redis --save

image

To leverage Redis in my application code, I need to require('redis') and create a client connection. For that, I need the host and port for the Redis instance. The port was specified when we started the Docker container for Redis (6379) and the host ip is the ip of the Docker machine (I am running Docker Tools on Windows).

Here is the naïve implementation of the request counter, backed by Redis. Naïve, because it does not cater for race conditions between multiple instances that could each read the current counter value from Redis, each increase it and write it back, causing one or more counts to be lost. Note that REDIS_HOST and REDIS_PORT can be specified through environment variables (read with process.env.<name of variable>).

//respond to HTTP requests with response: count of number of requests
// invoke from browser or using curl:  curl http://127.0.0.1:PORT
var http = require('http');
var redis = require("redis");

var redisHost = process.env.REDIS_HOST ||"192.168.99.100" ;
var redisPort = process.env.REDIS_PORT ||6379;

var redisClient = redis.createClient({ "host": redisHost, "port": redisPort });

var PORT = process.env.APP_PORT || 3000;

var redisKeyRequestCounter = "requestCounter";

var server = http.createServer(function handleRequest(req, res) {
    var requestCounter = 0;

    redisClient.get(redisKeyRequestCounter, function (err, reply) {
        if (err) {
            res.write('Request Count (Version 3): ERROR ' + err);
            res.end();
        } else {
            if (!reply || reply == null) {
                console.log("no value found yet");
                redisClient.set(redisKeyRequestCounter, requestCounter);
            } else {
                requestCounter = Number(reply) + 1;
                redisClient.set(redisKeyRequestCounter, requestCounter);
            }
            res.write('Request Count (Version 3): ' + requestCounter);
            res.end();
        }
    })
}).listen(PORT);

    //        redisClient.quit();

console.log('Node.JS Server running on port ' + PORT + ' for version 3 of requestCounter application, powered by Redis.');

 

Run the Node.JS Application talking to Redis

The Node.js application can be run locally – from the command line directly on the Node.js runtime.

Alternatively, I have committed and pushed the application to GitHub. Now I can run it using the generic Docker Container Image lucasjellema/node-app-runner that I prepared in this article: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/ using a single startup command:

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8015:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-3.js" -e "REDIS_HOST=127.0.0.1" -e "REDIS_PORT=6379" lucasjellema/node-app-runner

This command passes relevant values as environment variables – such as the GitHub Repo url, the directory in that repo and the exact script to run, and also the host and port for Redis as well as the port that the Node.js application should listen at for requests. In the standard Docker way, the internal port (8080) is mapped to the external port (8015).

 

The application can be accessed from the browser:

image

 

Less Naïve Implementation using Redis Watch and Multi for Optimistic Locking

Although the code shown above seems to be working, it is not robust. When scaling out, multiple instances can race against each other and overwrite each other’s changes in Redis, because no locking has been implemented. Based on this article: https://blog.yld.io/2016/11/07/node-js-databases-using-redis-for-fun-and-profit/#.WSGEWtwlGpo I have extended the code with an optimistic locking mechanism. Additionally, the treatment of client connections is improved – reducing the chance of leaking connections.

//respond to HTTP requests with response: count of number of requests
// invoke from browser or using curl:  curl http://127.0.0.1:PORT
// use an optimistic locking strategy to prevent race conditions between multiple clients updating the requestCount at the same time
// based on https://blog.yld.io/2016/11/07/node-js-databases-using-redis-for-fun-and-profit/#.WSGEWtwlGpo 
var http = require('http');
var Redis = require("redis");

var redisHost = process.env.REDIS_HOST || "192.168.99.100";
var redisPort = process.env.REDIS_PORT || 6379;

var PORT = process.env.APP_PORT || 3000;

var redisKeyRequestCounter = "requestCounter";

var server = http.createServer(function handleRequest(req, res) {
    increment(redisKeyRequestCounter, function (err, newValue) {
        if (err) {
            res.write('Request Count (Version 3): ERROR ' + err);
            res.end();
        } else {
            res.write('Request Count (Version 3): ' + newValue);
            res.end();
        }
    })
}).listen(PORT);


function _increment(key, cb) {
    var replied = false;
    var newValue;

    var redis = Redis.createClient({ "host": redisHost, "port": redisPort });
    // if the key does not yet exist, then create it with a value of zero associated with it
    redis.setnx(key, 0);
    redis.once('error', done);
    // ensure that if anything changes to the key-value pair in Redis (from a different connection), this atomic operation will fail
    redis.watch(key);
    redis.get(key, function (err, value) {
        if (err) {
            return done(err);
        }
        newValue = Number(value) + 1;
        // either watch tells no change has taken place and the set goes through, or this action fails
        redis.multi().
            set(key, newValue).
            exec(done);
    });

    function done(err, result) {
        redis.quit();

        if (!replied) {
            if (!err && !result) {
                err = new Error('Conflict detected');
            }

            replied = true;
            cb(err, newValue);
        }
    }
}

function increment(key, cb) {
    _increment(key, callback);

    function callback(err, result) {
        if (err && err.message == 'Conflict detected') {
            _increment(key, callback);
        }
        else {
            cb(err, result);
        }
    }
}

console.log('Node.JS Server running on port ' + PORT + ' for version 3 of requestCounter application, powered by Redis.');

This Node.js application is run in exactly the same way as the previous one, using requestCounter-4.js as APP_STARTUP rather than requestCounter-3.js.

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8015:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-4.js" -e "REDIS_HOST=127.0.0.1" -e "REDIS_PORT=6379" lucasjellema/node-app-runner

image

The post Node.js run from GitHub in Generic Docker Container backed by Dockerized Redis Cache appeared first on AMIS Oracle and Java Blog.

Running Node.js applications from GitHub in generic Docker Container

Sun, 2017-05-21 06:27

This article shows how I create a generic Docker Container Image to run any Node.JS application based on sources for that application on GitHub. The usage of this image is shown in this picture:

 

image

Any Node.JS application in any public GitHub repo can be run using this Docker Container Image. When a container is run from this image, the url for the GitHub Repo is passed in as environment variable – as well as (optionally) the directory in the repo to run the application from, the name of the file to run and the specific version of the Node runtime to use. An example of the command line to use:

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8005:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter.js" lucasjellema/node-app-runner

This command will run the script requestCounter.js in the part1 directory in the repo found in GitHub at the URL specified. It passes an additional environment variable APP_PORT to the runtime – to be used in the node application (process.env.APP_PORT). It maps port 8080 inside the container to port 8005 on the host in the standard Docker way.

To run an entirely different Node.js application, I can use this command:

docker run -e "GIT_URL=https://github.com/lucasjellema/nodejs-serversentevents-quickstart" -p 8010:8888 -e "PORT=8888" -e "APP_HOME=." -e "APP_STARTUP=app.js" lucasjellema/node-app-runner

The same image is started, passing a different GIT_URL and different instructions regarding the directory and the script to run – and also a different environment variable called PORT.

Note: this work is based on the Docker Image created by jakubknejzlik – see https://hub.docker.com/r/jakubknejzlik/docker-git-node-app/ and https://github.com/jakubknejzlik/docker-git-node-app/blob/master/Dockerfile.

My own sources are part of the GitHub Repository at https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017 – with resources for a workshop on Microservices, Choreography, Docker, Kubernetes, Node.jS, Kafka and more.

The steps described in this article:


1. Dockerfile to build the container

2. bootstrap.sh file to run when the container is started

3. Build container image

4. Push image to public Docker Hub Registry (https://hub.docker.com/r/lucasjellema/node-app-runner/)

(5. Create Node.js application and push to GitHub)

6. Run Node.js application from GitHub repository by starting a Docker container from the image created in the previous steps

7. (optional) Turn a container for a specific application into a dedicated image, for faster startup

 

1. Dockerfile to build the container

The Dockerfile is shown here:

FROM node

ENV NODE_VERSION stable
ENV NPM_SCRIPT start
ENV GIT_URL https://github.com/heroku/node-js-sample
ENV APP_PORT 3000

ENV APP_HOME .
ENV APP_STARTUP ""
# JUST_RUN specifies whether node should be installed and git should be cloned
ENV JUST_RUN N

COPY ./docker-work /code

WORKDIR /code

#RUN chown -R app:app /code/*
RUN chmod +x /code/bootstrap.sh

RUN npm install -g n --silent
RUN n stable

ENTRYPOINT ["/code/bootstrap.sh"]

It starts from the Docker image node – the official base image (see https://hub.docker.com/_/node/ for details). The Dockerfile defines a number of environment variables with default values; these values can be overridden when a container is run. The contents of the directory docker-work (under the current working directory) are copied into the directory /code inside the image. The file bootstrap.sh – which is in that docker-work directory – is made executable. The NPM package n (https://www.npmjs.com/package/n) is installed for version management of Node.js, and the currently stable release of Node.js is installed – in addition to the version of Node.js shipped in the node base image. Finally, the entrypoint is set to bootstrap.sh – meaning that when a container is started from the image, this file will be executed.

 

2. bootstrap.sh file to run when the container is started

The file bootstrap.sh is executed when the container is started. This file takes care of:

* installing a specific version of the Node.js runtime if required

* cloning the Git repository – to bring the application sources into the container

* installing all required node modules by running npm install

* running the Node.js application

The file uses a number of environment variables for these actions:

– NODE_VERSION – if a specific version of Node runtime is required

– GIT_URL – the URL to the Git repository that contains the application sources

– APP_HOME – the directory within the repository that contains package.json and the start script for the application to run

– APP_STARTUP – the file that should be executed (node $APP_STARTUP); when this parameter is not passed, the application is started with npm start – based on the start script in package.json

– JUST_RUN – when this variable has the value Y, the container will not attempt to install a different version of Node.js, nor will it clone the Git repository (again) or install dependencies

– NPM_SCRIPT – the npm script to run when APP_STARTUP is not passed (defaults to start)

– YARN_INSTALL – when this variable has the value 1, dependencies are installed with yarn instead of npm

#!/bin/bash

if [ "$JUST_RUN" = "N" ]; then
  echo switching node to version $NODE_VERSION
  n $NODE_VERSION --quiet
fi

echo node version: `node --version`

if [ "$JUST_RUN" = "N" ]; then
  git clone $GIT_URL app
fi

cd app

cd $APP_HOME
echo Application Home: $APP_HOME

if [ "$JUST_RUN" = "N" ]; then
  if [ "$YARN_INSTALL" = "1" ]; then
    yarn install --production --silent
  else
    npm install --production --silent
  fi
fi

if [ "$APP_STARTUP" = "" ]; then
  npm run $NPM_SCRIPT
else
  node $APP_STARTUP
fi

 

3. Build container image

In my environment, I am working on a Windows 7 laptop on which I have installed Docker Tools. I am running commands in the Docker Tools Quickstart terminal (formerly boot2docker): the Docker client talks to a Docker engine that Docker Machine runs in a small Linux VM inside Oracle VirtualBox.

Using this command I build the container image from the Dockerfile:

docker build -t lucasjellema/node-app-runner .


To inspect whether the image is set up correctly, I can run a container with a Bash shell as entrypoint and check the contents of the file system:

docker run -it --entrypoint /bin/bash lucasjellema/node-app-runner

I can now try out the image, using a command like this:

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8004:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter.js" lucasjellema/node-app-runner


This runs a container that clones the Git repository at the indicated URL into directory /code/app, navigates into directory /code/app/part1, performs an npm install to get the required modules and runs requestCounter.js with Node.js – listening at port 8080 inside the container, with HTTP requests to port 8004 on the host forwarded to it.

In order to access the application from my Windows host, I need to know the IP address of the Docker Machine VM – the Linux VM that runs the Docker engine inside VirtualBox. This is done using:

docker-machine ip default

which will return the IP address assigned to the VM.


I can then access the Node.js application at http://IP_ADDRESS:8004.

 


 

4. (optional) Push image to public Docker Hub Registry (https://hub.docker.com/r/lucasjellema/node-app-runner/)

The image has proven itself, and we can now push it to a public or private registry. To push to Docker Hub:

docker login

docker push lucasjellema/node-app-runner


5. Create Node.js application and push to GitHub

Any Node.js application in a public GitHub repository can be run this way; the examples below use applications from the workshop repository introduced earlier.

6. Run Node.js application from GitHub repository by starting a Docker container from the image created in the previous steps

I have several Node.js applications that I would like to run – each in its own container, listening on its own port. This is now very simple and straightforward: several calls to docker run, each with different values for GIT_URL, APP_HOME and APP_STARTUP – as well as APP_PORT or PORT.

For example – run three containers in parallel:

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8001:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter.js" lucasjellema/node-app-runner

docker run -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8005:8080 -e "APP_HOME=part1" -e "APP_STARTUP=requestCounter-2.js" lucasjellema/node-app-runner

docker run -e "GIT_URL=https://github.com/lucasjellema/nodejs-serversentevents-quickstart" -p 8010:8888 -e "PORT=8888" -e "APP_HOME=." -e "APP_STARTUP=app.js" lucasjellema/node-app-runner

We can look at the logging from a container:

docker logs <container id>

We can stop each container:

docker stop <container id>

list all containers – running and stopped:

docker container ls --all

restart a container (now the time to restart is very short):

docker start <container id>

7. Turn Container into Image

Note: it is easy to turn one of these containers – running a specific Node.js application – into an image of its own, from which subsequent containers can be run. Such an image contains the correct version of Node.js as well as the application and all its dependent modules – allowing for a faster startup time. The steps:

docker commit CONTAINER_ID NAME_OF_IMAGE

for example:

docker commit a771 request-counter

Subsequently, we can run a container based on this image; note that this time we do not specify the GIT_URL – the application and all node_modules are baked into the image. The environment variables used in bootstrap.sh and in the application can still be passed. The startup time for this container should be very short, since hardly any preparation needs to be performed:

docker run -e "APP_PORT=8080" -p 8004:8080 -e "APP_HOME=part1" -e "JUST_RUN=Y" -e "APP_STARTUP=requestCounter.js" request-counter

 

Notes

Note: remove old containers

list exited containers:

docker ps -aq -f status=exited

remove them (http://blog.yohanliyanage.com/2015/05/docker-clean-up-after-yourself/)

docker rm -v $(docker ps -a -q -f status=exited)

remove dangling images

list them:

docker images -f "dangling=true" -q

Remove them:

docker rmi $(docker images -f "dangling=true" -q)

The post Running Node.js applications from GitHub in generic Docker Container appeared first on AMIS Oracle and Java Blog.

Sequential Asynchronous calls in Node.JS – using callbacks, async and ES6 Promises

Thu, 2017-05-18 04:38

One of the challenges with programming in JavaScript (ECMAScript) in general, and Node.js in particular, is having to deal with asynchronous operations. Whenever a call is made to a function that handles the request asynchronously, care has to be taken to receive the result from that function in an asynchronous fashion. Additionally, we have to ensure that the program flow does not continue prematurely – only those steps that can be performed without the result from the function call can proceed. Orchestrating multiple asynchronous calls – some of them sequential or chained, others possibly running in parallel – and gathering the results from those calls in the proper way is not trivial.

Traditionally, we used callback functions to program the asynchronous interaction: the caller passes a reference to a function to the asynchronous operation, and when the asynchronous operation is done, it invokes this callback function to hand over the outcome. The callback function then takes over and continues the flow of the program. A simple example of a callback function is seen whenever an action is scheduled for execution using setTimeout():

setTimeout(function () {
  console.log("Now I am doing my thing ");
}, 1000);

or perhaps more explicitly:

function cb() {
  console.log("Now I am doing my thing ");
}

setTimeout(cb, 1000);
Chain of Asynchronous Actions

With multiple mutually dependent (chained) calls, using callback functions results in nested program logic that quickly becomes hard to read, debug and maintain. An example is shown here:


 

Function readElementFromJsonFile does what its name says: it reads the value of a specific element from the file specified in the input parameter. It does so asynchronously, and it calls the callback function to return the result when it has been obtained. Using this function, we are after a final value at the end of a chain of files. Starting with file step1.json, we read the nextfile element, which indicates the next file to read – in this case step2.json. This file in turn indicates that nextStep.json should be inspected, and so on. Clearly we have a chain of asynchronous actions, where each action's output provides the input for the next action.

In classic callback oriented JavaScript, the code for the chain of calls looks like this – the nested structure we have come to expect from using callback functions to handle asynchronous situations:

// the classic approach with nested callbacks
var fs = require('fs');
var step1 = "/step1.json";

function readElementFromJsonFile(fileName, elementToRead, cb) {
    var elementToRetrieve = 'nextfile';
    if (elementToRead) {
        elementToRetrieve = elementToRead;
    }
    console.log('file to read from ' + fileName);
    fs.readFile(__dirname + '/' + fileName, "utf8", function (err, data) {
        var element = "";
        if (err) return cb(err);
        try {
            element = JSON.parse(data)[elementToRetrieve];
        } catch (e) {
            return cb(e);
        }
        console.log('value of element read = ' + element);
        cb(null, element);
    });
}//readElementFromJsonFile

readElementFromJsonFile(step1, null, function (err, data) {
    if (err) return err;
    readElementFromJsonFile(data, null, function (err, data) {
        if (err) return err;
        readElementFromJsonFile(data, null, function (err, data) {
            if (err) return err;
            readElementFromJsonFile(data, null, function (err, data) {
                if (err) return err;
                readElementFromJsonFile(data, 'actualValue', function (err, data) {
                    if (err) return err;
                    console.log("Final value = " + data);
                });
            });
        });
    });
});

The arrival of the Promise in ES6 – a native language mechanism, available in recent versions of Node.js – makes things more organized, readable and maintainable. The function readElementFromJsonFile() now returns a Promise: a placeholder for the eventual result of the asynchronous operation. Even though the result will only be provided through the Promise object at a later moment, we can program as if the Promise represents that result right now – and we can specify in our code what to do when the function delivers on its Promise (by calling the built-in function resolve inside the Promise).

The resolution of a Promise produces a value – in the case of function readElementFromJsonFile, the value read from the file. The then() operation that is executed when the Promise is resolved calls the function it was given as a parameter, passing the resolution value as input. In the code sample below we see readElementFromJsonFile(parameters).then(readElementFromJsonFile). This means: when the Promise returned from the first call to the function is resolved, call the function again, this time using the outcome of the first call as input to the second call. The fourth then is a little more explicit: in the final call to readElementFromJsonFile we need to pass not just the outcome of the previous call but also the name of the element to read from the file, so we use an anonymous function that takes the resolution result as input and calls the function with the additional parameter. Something similar happens with the final then, where the result of the previous call is simply printed to the output.

The code for our example of subsequently and asynchronously reading the files becomes:

var fs = require('fs');
var step1 = "step1.json";

function readElementFromJsonFile(fileName, elementToRead) {
    return new Promise((resolve, reject) => {
        var elementToRetrieve = 'nextfile';
        if (elementToRead) {
            elementToRetrieve = elementToRead;
        }
        console.log('file to read from ' + fileName);
        fs.readFile(__dirname + '/' + fileName, "utf8", function (err, data) {
            var element = "";
            if (err) return reject(err);
            try {
                element = JSON.parse(data)[elementToRetrieve];
            } catch (e) {
                return reject(e);
            }
            console.log('element read = ' + element);
            resolve(element);
        });
    })// promise
}

readElementFromJsonFile(step1)
    .then(readElementFromJsonFile)
    .then(readElementFromJsonFile)
    .then(readElementFromJsonFile)
    .then(function (filename) { return readElementFromJsonFile(filename, 'actualValue') })
    .then(function (value) { console.log('Value read after processing five files = ' + value); })
Scheduled Actions as Promise or how to Promisify setTimeout

The built-in setTimeout() expects a callback function; it does not return a Promise. Something like:

setTimeout(1000).then(myFunc)

would be nice but does not exist.

This entry on Stackoverflow has a nice solution for working with setTimeout Promise style:

function delay(t) {
   return new Promise(function(resolve) { 
       setTimeout(resolve, t)
   });
}

function myFunc() {
    console.log('At last I can work my magic!');
}

delay(1000).then(myFunc);
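Because delay() returns a Promise, scheduled actions can themselves be chained sequentially with then(), just like the file reads earlier in this article:

```javascript
// delay() as shown above, repeated here so this sketch is self-contained
function delay(t) {
  return new Promise(function (resolve) {
    setTimeout(resolve, t);
  });
}

// chain two delays: the second one starts only when the first has resolved
delay(100)
  .then(function () {
    console.log('100 ms have passed');
    return delay(100); // returning a Promise keeps the chain sequential
  })
  .then(function () {
    console.log('another 100 ms have passed');
  });
```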

The post Sequential Asynchronous calls in Node.JS – using callbacks, async and ES6 Promises appeared first on AMIS Oracle and Java Blog.

Pages