Amis Blog

Friends of Oracle and Java

ORDS: Installation and Configuration

Fri, 2018-03-30 09:57

In my job as system administrator/DBA/integrator I was challenged to implement smoketesting using REST calls. Implementing REST in combination with WebLogic is pretty easy. But then we wanted to extend smoketesting to the database. For example, we wanted to know whether the database version and patch level were at the level required throughout the complete DTAP environment. Another example is checking the existence of required database services. As it turns out, Oracle has a feature called ORDS – Oracle REST Data Services – to accomplish this.

ORDS can be installed in two different scenarios: in standalone mode on the database server, or in combination with an application server such as WebLogic Server, GlassFish Server, or Tomcat.

This article gives a short introduction to ORDS. It then shows how to install ORDS in a way that is suitable for a production environment, using WebLogic Server 12c and an Oracle 12c database, as we did for our smoketesting application.

We chose WebLogic Server to deploy the ORDS application because we already used WebLogic's REST feature for smoketesting the application and WebLogic resources, and for high availability reasons, because we use an Oracle RAC database. Running in standalone mode would also have raised additional security issues around port configuration.

Terminology

REST: Representational State Transfer. An architectural style that provides interoperability between computer systems on the Internet.

ORDS: Oracle REST Data Services. Oracle’s implementation of RESTful services against the database.

RESTful service: an HTTP web service that follows the REST architectural principles. Access to and/or manipulation of web resources is done using a uniform and predefined set of stateless operations.

ORDS Overview

ORDS makes it easy to develop a REST interface/service for relational data. This relational data can be stored in either an Oracle database, an Oracle 12c JSON Document Store, or an Oracle NoSQL database.

A mid-tier Java application called ORDS maps HTTP(S) requests (GET, PUT, POST, DELETE, …) to database transactions and returns the results in JSON format.
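
For example, once a REST service has been defined, a simple HTTP GET is translated into a query and the result set comes back as JSON. The endpoint and data below are purely hypothetical, just to illustrate the mapping (the items/hasMore structure is what ORDS typically returns for collections):

$ curl http://webserver01.localdomain:7001/ords/hr/employees/
{
  "items": [
    { "employee_id": 100, "last_name": "King" },
    { "employee_id": 101, "last_name": "Kochhar" }
  ],
  "hasMore": false
}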

[Diagram: ORDS request/response flow]

Installation Process

The overall process of installing and configuring ORDS is very simple.

  1. Download the ORDS software
  2. Install the ORDS software
  3. Make the setup configuration changes
  4. Run the ORDS setup
  5. Make a mapping between the URL and the ORDS application
  6. Deploy the ORDS Java application

Download the ORDS software

Downloading the ORDS software can be done from the Oracle Technology Network. I used version ords.3.0.12.263.15.32.zip, downloaded from:
http://www.oracle.com/technetwork/developer-tools/rest-data-services/downloads/index.html

Install the ORDS software

The ORDS software is installed on the WebLogic server running the Administration console. Create an ORDS home directory and unzip the software.

Here are the steps on Linux

$ mkdir -p /u01/app/oracle/product/ords
$ cp -p ords.3.0.12.263.15.32.zip /u01/app/oracle/product/ords
$ cd /u01/app/oracle/product/ords
$ unzip ords.3.0.12.263.15.32.zip

Make the setup configuration changes: the ords_params.properties file

Under the ORDS home directory a couple of subdirectories are created. One of them, called params, holds a file named ords_params.properties. This file contains the default parameters that are used during a silent installation. For any parameters not specified in this file, ORDS interactively asks you for the values.

In this article I go for a silent installation. Here are the default parameters and the values I configured for this installation:

Parameter                    Default Value       Configured Value
db.hostname                  -                   dbserver01.localdomain
db.port                      1521                1521
db.servicename               -                   ords_requests
db.username                  APEX_PUBLIC_USER    APEX_PUBLIC_USER
migrate.apex.rest            false               false
plsql.gateway.add            false               false
rest.services.apex.add       false               false
rest.services.ords.add       true                true
schema.tablespace.default    SYSAUX              ORDS
schema.tablespace.temp       TEMP                TEMP
standalone.http.port         8080                8080
user.public.password         -                   Ords4Ever!
user.tablespace.default      USERS               ORDS
user.tablespace.temp         TEMP                TEMP
sys.user                     -                   SYS
sys.password                 -                   Oracle123
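
Putting these values together, the relevant part of my ords_params.properties looks roughly like this (a sketch; the hostname, service name and passwords are specific to my environment):

db.hostname=dbserver01.localdomain
db.port=1521
db.servicename=ords_requests
db.username=APEX_PUBLIC_USER
migrate.apex.rest=false
plsql.gateway.add=false
rest.services.apex.add=false
rest.services.ords.add=true
schema.tablespace.default=ORDS
schema.tablespace.temp=TEMP
standalone.http.port=8080
user.public.password=Ords4Ever!
user.tablespace.default=ORDS
user.tablespace.temp=TEMP
sys.user=SYS
sys.password=Oracle123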

NOTE

As you see, I refer to a tablespace ORDS for the installation of the metadata objects. Don’t forget to create this tablespace before continuing.
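
A minimal example of creating that tablespace (the datafile path and sizes are assumptions; adjust them to your environment):

$ sqlplus / as sysdba <<EOF
CREATE TABLESPACE ords
  DATAFILE '/u01/app/oracle/oradata/ORCL/ords01.dbf'
  SIZE 100M AUTOEXTEND ON NEXT 10M;
EOF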

NOTE

The parameters sys.user and sys.password are removed from the ords_params.properties file after running the setup (see later on in this article)

NOTE

The password for parameter user.public.password is obscured after running the setup (see later on in this article)

NOTE

As you can see, several parameters refer to APEX. APEX is a great tool for rapidly developing sophisticated applications. Although you can run ORDS together with APEX, you don't have to: ORDS runs perfectly well without an APEX installation.

Configuration Directory

I create an extra directory called config, directly under the ORDS home directory, to hold all configuration data used during the setup.

$ mkdir config
$ java -jar ords.war configdir /u01/app/oracle/product/ords/config
$ # Check what value of configdir has been set!
$ java -jar ords.war configdir

Run the ORDS setup

After all configuration is done, you can run the setup, which installs the Oracle metadata objects necessary for running ORDS in the database. The setup creates two schemas:

  • ORDS_METADATA
  • ORDS_PUBLIC_USER

The setup is run in silent mode, which uses the parameter values previously set in the ords_params.properties file.

$ mkdir -p /u01/app/oracle/logs/ORDS
$ java -jar ords.war setup --database ords --logDir /u01/app/oracle/logs/ORDS --silent

Make a mapping between the URL and the ORDS application

After running the setup, ORDS required objects are created inside the database. Now it’s time to make a mapping from the request URL to the ORDS interface in the database.

$ java -jar ords.war map-url --type base-path /ords ords

Here a mapping is made between the request URL from the client to the ORDS interface in the database. The /ords part after the base URL is used to map to a database connection resource called ords.

So the request URL will look something like this:

http://webserver01.localdomain:7001/ords/

Where http://webserver01.localdomain:7001 is the base URL.

Deploy the ORDS Java application

Right now all changes and configurations are done. It's time to deploy the ORDS Java application to the WebLogic Server. Here I use WLST to deploy the ORDS Java application, but you can do it via the Administration Console as well, whichever you prefer.

$ wlst.sh
$ connect('weblogic','welcome01','t3://webserver01.localdomain:7001')
$ progress = deploy('ords','/u01/app/oracle/product/ords/ords.war','AdminServer')
$ disconnect()
$ exit()

And your ORDS installation is ready for creating REST services!
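
As a quick sanity check you can request the base URL; any HTTP response from ORDS (even a 404, since no REST service has been defined yet) shows the deployment is reachable. The URL is, of course, specific to my environment:

$ curl -i http://webserver01.localdomain:7001/ords/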

NOTE

After deployment of the ORDS Java application, its state should be Active and its health OK. You might need to restart the Managed Server!

Deinstallation of ORDS

As the installation of ORDS is pretty simple, deinstallation is even simpler. The installation involves the creation of two schemas in the database and a deployment of ORDS on the application server. The deinstall process is the reverse.

  1. Undeploy ORDS from WebLogic Server
  2. Deinstall the database schemas using

    $ java -jar ords.war uninstall

    In effect this removes the two schemas from the database.

  3. Optionally remove the ORDS installation directories
  4. Optionally remove the ORDS tablespace from the database

Summary

The installation of ORDS is pretty simple. You don't need any extra licenses to use ORDS, and ORDS can be installed without installing APEX. You can run ORDS standalone, or use a Java EE application server like WebLogic Server, GlassFish Server, or Apache Tomcat (note that you may need additional licenses for the use of such application servers).

Hope this helps!


Upgrade of Oracle Restart/SIHA from 11.2 to 12.2 fails with CRS-2415

Thu, 2018-03-29 10:26

We are in the process of upgrading our Oracle Clusters and SIHA/Restart systems to Oracle 12.2.0.1

The upgrade of the Grid-Infra home on an Oracle SIHA/Restart system from 11.2.0.4 to 12.2.0.1 fails when running rootupgrade.sh with the error message:

CRS-2415: Resource 'ora.asm' cannot be registered because its owner 'root' is not the same as the Oracle Restart user 'oracle'

We start the upgrade to 12.2.0.1 (with Jan2018 RU patch) as:
$ ./gridSetup.sh -applyPSU /app/software/27100009

The installation and relink of the software look correct. However, when running rootupgrade.sh as the root user, as part of the post-installation, the script ends with:

2018-03-28 11:20:27: Executing cmd: /app/gi/12201_grid/bin/crsctl query has softwareversion
2018-03-28 11:20:27: Command output:
> Oracle High Availability Services version on the local node is [12.2.0.1.0]
>End Command output
2018-03-28 11:20:27: Version String passed is: [Oracle High Availability Services version on the local node is [12.2.0.1.0]]
2018-03-28 11:20:27: Version Info returned is : [12.2.0.1.0]
2018-03-28 11:20:27: Got CRS softwareversion for su025p074: 12.2.0.1.0
2018-03-28 11:20:27: The software version on su025p074 is 12.2.0.1.0
2018-03-28 11:20:27: leftVersion=11.2.0.4.0; rightVersion=12.2.0.0.0
2018-03-28 11:20:27: [11.2.0.4.0] is lower than [12.2.0.0.0]
2018-03-28 11:20:27: Disable the SRVM_NATIVE_TRACE for srvctl command on pre-12.2.
2018-03-28 11:20:27: Invoking “/app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first”
2018-03-28 11:20:27: trace file=/app/oracle/crsdata/su025p074/crsconfig/srvmcfg1.log
2018-03-28 11:20:27: Executing cmd: /app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first
2018-03-28 11:21:02: Command output:
> PRCA-1003 : Failed to create ASM asm resource ora.asm
> PRCR-1071 : Failed to register or update resource ora.asm
> CRS-2415: Resource 'ora.asm' cannot be registered because its owner 'root' is not the same as the Oracle Restart user 'oracle'.
>End Command output
2018-03-28 11:21:02: “upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first” failed with status 1.
2018-03-28 11:21:02: Executing cmd: /app/gi/12201_grid/bin/clsecho -p has -f clsrsc -m 180 “/app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first”
2018-03-28 11:21:02: Command

The rootupgrade.sh script is run as the root user as prescribed, but root cannot add the ASM resource.
This leaves the installation unfinished.

There is no description in the Oracle Knowledge Base; however, according to Oracle Support this problem is caused by the unpublished Bug 25183818: SIHA 11204 UPGRADE TO MAIN IS FAILING.

As of March 2018, no workaround or software patch is available yet.


Dbvisit Standby upgrade

Wed, 2018-03-28 10:00
Upgrading to Dbvisit Standby 8.0.x

Dbvisit provides upgrade documentation which is detailed and in principle correct, but it only describes the upgrade process from the viewpoint of an installation on a single host.
I upgraded Dbvisit Standby at a customer's site, in a running configuration with several hosts and several primary and standby databases. By trial and error, and with the help of Dbvisit support, I found some additional steps and points of advice that I think may be of help to others.
This document describes the upgrade process for a working environment and provides information and advice in addition to the upgrade documentation; those additions are clearly marked in red throughout the blog. Also, the steps of the upgrade process have been rearranged in a more logical order.
It is assumed that the reader is familiar with basic Dbvisit concepts and processes.

Configuration

The customer’s configuration that was upgraded is as follows:

  • Dbvisit 8.0.14
  • 4 Linux OEL 6 hosts running Dbvisit Standby
  • 6 databases in Dbvisit Standby configuration distributed among the hosts
  • 1 Linux OEL 7 host running Dbvisit Console
  • DBVISIT_BASE: /usr/dbvisit
  • Dbvctl running in daemon mode
Dbvisit upgrade overview

The basic steps that are outlined in the Dbvisit upgrade documentation are as follows:

  1. Stop your Dbvisit Schedules if you have any running.
  2. Stop or wait for any Dbvisit processes that might still be executing.
  3. Backup the Dbvisit Base location where your software is installed.
  4. Download the latest version from www.dbvisit.com.
  5. Extract the install files into a temporary folder, example /home/oracle/8.0.
  6. Start the Installer and select to install the required components.
  7. Once the update is complete, you can remove the temporary install folder where the installer was extracted.
  8. It is recommended to run a manual send/apply of logs once an upgrade is complete.
  9. Re-enable any schedules.

During the actual upgrade we deviated significantly from this: steps were rearranged, added and changed slightly.

  1. Download the latest available version of Dbvisit and make it available on all servers.
  2. Make a note of the primary host for each Dbvisit standby configuration.
  3. Stop dbvisit processes.
  4. Backup the Dbvisit Base location where your software is installed.
  5. Upgrade the software.
  6. Start dbvagent and dbvnet.
  7. Upgrade the DDC configuration files.
  8. Restart dbvserver.
  9. Update DDCs in Dbvisit Console.
  10. Run a manual send/apply of logs.
  11. Restart Dbvisit standby processes.

In the following sections these steps are explained in more detail.

Dbvisit Standby upgrade

Here follow, in detail, the steps that in my view should be taken for a Dbvisit upgrade, based on the experience gained during the actual upgrade.

  1. Download the latest available version of Dbvisit and make it available on all servers.
    In our case I put it in /home/oracle/upgrade on all hosts. The versions used were 8.0.18 for Oracle Enterprise Linux 6 and 7:

    dbvisit-standby8.0.18-el6.zip
    dbvisit-standby8.0.18-el7.zip
    
  2. Make a note of the primary hosts for each Dbvisit standby configuration.
    You will need this information later in step 7. It is possible to get the information from the DDC .env files, but in our case it is easier to get it from the Dbvisit console.
    If you need to get them from the DDC .env files look for the SOURCE parameter. Say we have a database db1:

    [root@dbvhost04 conf]# cd /usr/dbvisit/standby/conf/
    [root@dbvhost04 conf]# grep "^SOURCE" dbv_db1.env
    SOURCE = dbvhost04
    
  3. Stop dbvisit processes.
    The Dbvisit upgrade manual assumes you schedule dbvctl from cron. In our situation the dbvctl processes were running in daemon mode, so it was easiest to stop them from the Dbvisit console. Go to Main Menu -> Database Actions -> Daemon Actions -> select both hosts in turn and choose stop.
    Dbvagent, dbvnet and, on the Dbvisit console host, dbvserver can be stopped as follows:

    cd /usr/dbvisit/dbvagent
    ./dbvagent -d stop
    cd /usr/dbvisit/dbvnet
    ./dbvnet -d stop
    cd /usr/dbvisit/dbvserver
     ./dbvserver -d stop
    

    Do this on all hosts. Dbvisit support advises that all hosts in a configuration be upgraded at the same time; there is no rolling upgrade or anything similar.
    Check before proceeding if all processes are down.

  4. Backup the Dbvisit Base location where your software is installed.
    The Dbvisit upgrade manual marks this step as optional – but recommended. In my view it is not optional.
    You can simply tar everything under DBVISIT_BASE for later use.
  5. Upgrade the software.
    Extract the downloaded software and run the included installer. It shows you which version you already have and which version is available in the downloaded software. Choose the correct install option to upgrade. Below you can see the upgrade of one of the OEL 6 database hosts running Dbvisit Standby:

    cd /home/oracle/upgrade
    <unzip and untar the correct version from /home/oracle/upgrade>
    cd dbvisit/installer/
    ./install-dbvisit
    
    -----------------------------------------------------------
        Welcome to the Dbvisit software installer.
    -----------------------------------------------------------
    
        It is recommended to make a backup of our current Dbvisit software
        location (Dbvisit Base location) for rollback purposes.
        
        Installer Directory /home/oracle/upgrade/dbvisit
    
    >>> Please specify the Dbvisit installation directory (Dbvisit Base).
     
        The various Dbvisit products and components - such as Dbvisit Standby, 
        Dbvisit Dbvnet will be installed in the appropriate subdirectories of 
        this path.
    
        Enter a custom value or press ENTER to accept default [/usr/dbvisit]: 
         >     DBVISIT_BASE = /usr/dbvisit 
    
        -----------------------------------------------------------
        Component      Installer Version   Installed Version
        -----------------------------------------------------------
        standby        8.0.18_0_gc6a0b0a8  8.0.14.19191                                      
        dbvnet         8.0.18_0_gc6a0b0a8  2.0.14.19191                                      
        dbvagent       8.0.18_0_gc6a0b0a8  2.0.14.19191                                      
        dbvserver      8.0.18_0_gc6a0b0a8  not installed                                     
    
        -----------------------------------------------------------
     
        What action would you like to perform?
           1 - Install component(s)
           2 - Uninstall component(s)
           3 - Exit
        
        Your choice: 1
    
        Choose component(s):
           1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
           2 - Dbvisit Standby Core (Command Line Interface)
           3 - Dbvnet (Dbvisit Network Communication) 
           4 - Dbvagent (Dbvisit Agent)
           5 - Dbvserver (Dbvisit Central Console) - Not available on Solaris/AIX
           6 - Exit Installer
        
        Your choice: 1
    
    -----------------------------------------------------------
        Summary of the Dbvisit STANDBY configuration
    -----------------------------------------------------------
        DBVISIT_BASE /usr/dbvisit 
    
        Press ENTER to continue 
    -----------------------------------------------------------
        About to install Dbvisit STANDBY
    -----------------------------------------------------------
    
        Component standby installed. 
    
        Press ENTER to continue 
    -----------------------------------------------------------
        About to install Dbvisit DBVNET
    -----------------------------------------------------------
    
    Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/cert.pem to /usr/dbvisit/dbvnet/conf/cert.pem
    
    Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/ca.pem to /usr/dbvisit/dbvnet/conf/ca.pem
    
    Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/prikey.pem to /usr/dbvisit/dbvnet/conf/prikey.pem
    
    Copied file /home/oracle/upgrade/dbvisit/dbvnet/dbvnet to /usr/dbvisit/dbvnet/dbvnet
    
    Copied file /usr/dbvisit/dbvnet/conf/dbvnetd.conf to /usr/dbvisit/dbvnet/conf/dbvnetd.conf.201802201235
    
        DBVNET config file updated 
    
    
        Press ENTER to continue 
    -----------------------------------------------------------
        About to install Dbvisit DBVAGENT
    -----------------------------------------------------------
    
    Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/cert.pem to /usr/dbvisit/dbvagent/conf/cert.pem
    
    Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/ca.pem to /usr/dbvisit/dbvagent/conf/ca.pem
    
    Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/prikey.pem to /usr/dbvisit/dbvagent/conf/prikey.pem
    
    Copied file /home/oracle/upgrade/dbvisit/dbvagent/dbvagent to /usr/dbvisit/dbvagent/dbvagent
    
    Copied file /usr/dbvisit/dbvagent/conf/dbvagent.conf to /usr/dbvisit/dbvagent/conf/dbvagent.conf.201802201235
    
        DBVAGENT config file updated 
    
    
        Press ENTER to continue 
    
        -----------------------------------------------------------
        Component      Installer Version   Installed Version
        -----------------------------------------------------------
        standby        8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
        dbvnet         8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
        dbvagent       8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
        dbvserver      8.0.18_0_gc6a0b0a8  not installed                                     
    
        -----------------------------------------------------------
     
        What action would you like to perform?
           1 - Install component(s)
           2 - Uninstall component(s)
           3 - Exit
        
        Your choice: 3
    
  6. Start dbvagent and dbvnet.
    For the next step dbvagent and dbvnet need to be running.  In our case we had an init script which started both:

    cd /etc/init.d
    ./dbvisit start
    

    Otherwise do something like:

    cd /usr/dbvisit/dbvnet
    ./dbvnet -d start
    cd /usr/dbvisit/dbvagent
    ./dbvagent -d start
    

    The upgrade documentation at this point refers to section 5 of the Dbvisit Standby Networking chapter of the Dbvisit 8.0 user guide: Testing Dbvnet Communication. It describes some tests to verify that dbvnet is working. As the upgrade documentation rightly points out, it is important to test this before proceeding.
    Do on all database hosts:

    [oracle@dbvhost04 init.d]$ cd /usr/dbvisit/dbvnet/
    [oracle@dbvhost04 dbvnet]$ ./dbvnet -e "uname -n"
    dbvhost01
    [oracle@dbvhost04 dbvnet]$ ./dbvnet -f /tmp/dbclone_extract.out.err -o /tmp/testfile
    [oracle@dbvhost04 dbvnet]$ cd /usr/dbvisit/standby
    [oracle@dbvhost04 standby]$ ./dbvctl -f system_readiness
    
    Please supply the following information to complete the test.
    Default values are in [].
    
    Enter Dbvisit Standby location on local server: [/usr/dbvisit]:
    Your input: /usr/dbvisit
    
    Is this correct? <Yes/No> [Yes]:
    
    Enter the name of the remote server: []: dbvhost01
    Your input: dbvhost01
    
    Is this correct? <Yes/No> [Yes]:
    
    Enter Dbvisit Standby location on remote server: [/usr/dbvisit]:
    Your input: /usr/dbvisit
    
    Is this correct? <Yes/No> [Yes]:
    
    Enter the name of a file to transfer relative to local install directory
    /usr/dbvisit: [standby/doc/README.txt]:
    Your input: standby/doc/README.txt
    
    Is this correct? <Yes/No> [Yes]:
    
    Choose copy method:
    1)   /usr/dbvisit/dbvnet/dbvnet
    2)   /usr/bin/scp
    Please enter choice [1] :
    
    Is this correct? <Yes/No> [Yes]:
    
    Enter port for method /usr/dbvisit/dbvnet/dbvnet: [7890]:
    Your input: 7890
    
    Is this correct? <Yes/No> [Yes]:
    -------------------------------------------------------------
    Testing the network connection between local server and remote server dbvhost01.
    -------------------------------------------------------------
    Settings
    ========
    Remote server                                          =dbvhost01
    Dbvisit Standby location on local server               =/usr/dbvisit
    Dbvisit Standby location on remote server              =/usr/dbvisit
    Test file to copy                                      =/usr/dbvisit/standby/doc/README.txt
    Transfer method                                        =/usr/dbvisit/dbvnet/dbvnet
    port                                                   =7890
    -------------------------------------------------------------
    Checking network connection by copying file to remote server dbvhost01...
    -------------------------------------------------------------
    Trace file /usr/dbvisit/standby/trace/58867_dbvctl_f_system_readiness_201803201304.trc
    
    File copied successfully. Network connection between local and dbvhost01
    correctly configured.
    
  7. Upgrade the DDC configuration files.
    Having upgraded the software, the Dbvisit Standby Configuration (DDC) files, which are located in DBVISIT_BASE/standby/conf on the database hosts, now need to be upgraded.
    Do this once for each standby configuration, and only on the primary host. If you do it on the secondary host you will get an error and all DDC configuration files will be deleted!
    So if we have a database db1 in a Dbvisit standby configuration, with database host dbvhost1 running the primary database (source in Dbvisit terminology) and database host dbvhost2 running the standby database (destination in Dbvisit terminology), we do the following on dbvhost1 only:

    cd /usr/dbvisit/standby
    ./dbvctl -d db1 -o upgrade
    
  8. Restart dbvserver.
    In our configuration the next step is to restart dbvserver to re-enable the Dbvisit Console.

    cd /usr/dbvisit/dbvserver
    ./dbvserver -d start
    
  9. Update DDCs in Dbvisit Console.
    After the upgrade the configurations need to be updated in the Dbvisit Console. Go to Manage Configurations; the status field will show an error and the edit configuration button is replaced with an update button.
    Update the DDC for each configuration on that screen.
  10. Run a manual send/apply of logs.
    In our case this was easiest done from the Dbvisit console again: Main Menu -> Database Actions -> send logs button, followed by the apply logs button.
    Do this for each configuration and check for errors before continuing.
  11. Restart Dbvisit standby processes.
    In our case we restarted the dbvctl processes in daemon mode from the Dbvisit Console. Go to Main Menu -> Database Actions -> Daemon Actions -> select both hosts in turn and choose start.
References

Linux – Upgrade from Dbvisit Standby version 8.0.x
Dbvisit Standby Networking – Dbvnet – 5. Testing Dbvnet Communication


Getting started with git behind a company proxy

Sun, 2018-03-25 11:50

For the past few months I've been working with git to store our Infrastructure as Code in GitHub. I don't want to type in my password every time, and I don't like passwords saved in clear text, so I prefer ssh over https. But when working behind a proxy that doesn't allow traffic over port 22 (ssh), I had to spend some time to get things working. Without a proxy there is nothing to it.

First some background information. We connect to a "stepping stone" server that runs some version of Windows as the OS, and then use Putty to connect to our Linux host where we work on our code.

 

Network background

Our connection to Internet is via the proxy, but the proxy doesn’t allow traffic over port 22 (ssh/git). It does however allow traffic over port 80 (http) or 443 (https).

So the goal here is to:
  1. use a public/private key pair to authenticate myself at GitHub.com
  2. route traffic to GitHub.com via the proxy
  3. reroute port 22 to port 443
Generate a public/private key pair.

This can be done at the Linux prompt, but then you either need to type your passphrase every time you use git (or have it cached in Linux), or use a key pair without a passphrase. I wanted to take this one step further and use the Putty Authentication Agent (Pageant.exe) to cache my private key and forward authentication requests over Putty to Pageant.

With Putty Key Generator (puttygen.exe) you generate a public/private key pair. Just start the program and press the generate button.

[Screenshot: PuTTY Key Generator]

You then need to generate some entropy by moving the mouse around:

[Screenshot: PuTTY Key Generator – generating entropy]

And in the end you get something like this:

[Screenshot: PuTTY Key Generator – generated key pair]

Ad 1) you should use a descriptive name like “github <accountname>”

Ad 2) you should use a sentence to protect your private key. Mind you: If you do not use a caching mechanism you need to type it in frequently

Ad 3) you should save your private key somewhere you consider safe. (It should not be accessible for other people)

Ad 4) you copy this whole text field (starting with ssh-rsa, in this case, up to and including the Key comment "rsa-key-20180325", which is repeated in that text field)

Once you have copied the public key you need to add it to your account at github.com.

Adding the public key in github.com

Log in to github.com and click on your icon:

[Screenshot: GitHub account menu]

Choose “Settings” and go to “SSH and GPG keys”:

[Screenshot: GitHub Settings – SSH and GPG keys]

There you press the “Add SSH key” button and you get to the next screen:

[Screenshot: Add new SSH key]

Give the Title a descriptive name so you can recognize/remember what you generated this key for, and paste the copied public key into the Key field. Then press Add SSH key, which results in something like this:

[Screenshot: SSH and GPG keys overview]

In your case the picture of the key will not be green but black as you haven’t used it yet. In case you no longer want this public/private key pair to have access to your github account you can Delete it here as well.

So now you can authenticate yourself with a private key that gets checked against the public key you uploaded to GitHub.

You can test that on a machine that has direct access to Internet and is able to use port 22 (For example a VirtualBox VM on your own laptop at home).

Route git traffic to github.com via the Proxy and change the port.

On the Linux server behind the company firewall, logged on with your own account, you need to go to the ".ssh" directory in your home dir. If it isn't there yet, you haven't used ssh on that machine yet (running ssh <you>@<linuxserver> once and cancelling the login is enough to create it). Change to that directory and create a file called "config" with the following contents:

# github.com
Host github.com
    Hostname ssh.github.com
    ProxyCommand nc -X connect -x 192.168.x.y:8080 %h %p
    Port 443
    ServerAliveInterval 20
    User git

#And if you use gitlab as well the entry should be like:
# gitlab.com
Host gitlab.com
    Hostname altssh.gitlab.com
    Port    443
    ProxyCommand    /usr/bin/nc -X connect -x 192.168.x.y:8080 %h %p
    ServerAliveInterval 20
    User  git

This is the part where you define that ssh calls to the server github.com should be rerouted to the proxy server 192.168.x.y on port 8080 (change that to your proxy details), and that the target server should not be github.com but ssh.github.com. That is the server where GitHub allows you to use the git/ssh protocol over port 443, the https port. I've added the example for gitlab as well; there the hostname should be changed to altssh.gitlab.com, as is done in the config above.

“nc” or “/usr/bin/nc” is the utility Netcat that does the work of changing hostname and port number for us. On our RedHat Linux 6 server it is installed by default.

The ServerAliveInterval 20 makes sure the connection is kept alive by sending a packet every 20 seconds, to prevent a "broken pipe". And User git makes sure you will not connect to github.com as your local Linux user but as the user git.

But two things still need to be done:

  1. Add your private key to Putty Authentication Agent
  2. Allow the Putty session to your Linux host to use Putty Authentication Agent
Add your private key to Putty Authentication Agent

On your stepping stone server, start the Putty Authentication Agent (Pageant.exe) and right-click on its icon (usually somewhere at the bottom right of your screen):

[Screenshot: Pageant system tray menu]

Select View Keys to see the keys already loaded, or press Add Key to add your newly created private key. You are asked to type your passphrase. Via View Keys you can check that the key was loaded:

[Screenshot: Pageant Key List]

The obfuscated part shows the key fingerprint; the text to the right of that is the Key Comment you used. If the comment is long, not all of the text is visible, so make sure the Key Comment is distinguishable by its first part.

If you want to use the same key for authentication on the Linux host, then put the public key part in a file called "authorized_keys". This file should be located in the ".ssh" directory and have rw permissions for your local user only (chmod 0600 authorized_keys). If you need or want a different key pair for that, make sure you load the corresponding private key as well.

Allow the Putty session to your Linux host to use Putty Authentication Agent

The Putty session that you use to connect to the Linux host needs to have the following checked:

[Screenshot: PuTTY Configuration – SSH Auth settings]

Thus for the session go to "Connection" –> "SSH" –> "Auth" and check "Allow agent forwarding" to allow your terminal session on the Linux host to forward authentication requests for GitHub (or gitlab) to the Pageant process on the stepping stone server. For that last part you also need to have checked the box "Attempt authentication using Pageant".

Now you are all set to clone a GitHub repository on your Linux host and use key authentication.
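
Before cloning anything you can verify the whole chain (agent forwarding, proxy, port rewrite and key authentication) with GitHub's test endpoint; on success it greets you with your account name:

$ ssh -T git@github.com
Hi <accountname>! You've successfully authenticated, but GitHub does not provide shell access.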

Clone a git repository using the git/ssh protocol

Browse to GitHub.com, select the repository you have access to with your GitHub account (if it is a private repo), press the “Clone or download” button and make sure you select “Clone with SSH”. See the picture below.

[Screenshot: GitHub Clone or download dialog]

Press the clipboard icon to copy the line starting with “git@github.com” and ending with “.git”.
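
The clone command then looks like this (the account and repository names are hypothetical):

$ git clone git@github.com:youraccount/yourrepo.git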

That should work now (like it did for me).

HTH Patrick

P.S. If you need to authenticate your connection with the proxy service you probably need to have a look at the manual pages of “nc”. Or google it. I didn’t have to authenticate with the proxy service so I didn’t dive into that.


How to fix Dataguard FAL server issues

Wed, 2018-03-21 06:23

One of my clients had an issue with their Dataguard setup: after they had to move tables and rebuild indexes, the redo transport to their standby databases failed. The standby databases complained about not being able to fetch archivelogs from the primary database. In this short blog I will explain what happened, how I diagnosed the issue, and how I fixed it.

 

The situation

Below you can see a diagram of the setup: a primary site with both a primary database and a standby database, and a remote site with two standby databases that both get their redo stream from the primary database.

[Diagram: Dataguard setup with one local and two remote standby databases]

This setup was working well for the company, but having two redo streams going to the remote site over limited bandwidth can give issues when doing massive data manipulation. When the need arrived for massive table movements and rebuilding of indexes, the generated redo was too much for the WAN link and even for the local standby database. After the client tried for several days to fix the standby databases, my help was requested because the standby databases were not able to fix the gaps in the redo stream.

 

The issues

While analyzing the issues I found that the standby databases failed to fetch archived logs from the primary database. Usually you can fix this by using RMAN to supply the primary database with the archived logs needed for the standby, because in most cases the issue is that the archived logs have been deleted on the primary database. The client's own DBA had already supplied the required archived logs, so the message was kind of misleading: the archived logs were there, but the primary didn't seem to be able to supply them.

When checking the alert log of the primary database there was no obvious sign that anything was going on or going wrong. While searching for more information I discovered that the default setting for the parameter log_archive_max_processes is 4. This setting controls the number of processes available for archiving, redo transport and FAL servers. Now take a quick look at the drawing above and start counting with me: at least one for local archiving, and three for the redo transport to the three standby databases. So when one of the standby databases wants to fetch archived logs to fill a gap, it may not be able to request this from the primary database. Time to fix it:

 

ALTER SYSTEM SET log_archive_max_processes=30 scope=both;

Now the fetching started working better, but I discovered some strange behaviour: the standby database closest to the primary database was still not able to fetch archived logs from the primary. The two remote standby databases were actually fetching some archived logs, so that's an improvement… but still, the alert log of the primary database was quite silent. Fortunately Oracle provides us with another server parameter: log_archive_trace. This setting enables extra logging for certain subprocesses. Add the values from the documentation to enable the desired logging: in this case 2048 and 128, for FAL server logging and redo transport logging respectively.

ALTER SYSTEM SET log_archive_trace=2176 scope=both;

With this setting I was able to see that all 26 other archiver processes were busy supplying one of the standby databases with archived logs. It seems that the database that's furthest behind gets the first go at the primary database… Anyway, my first instinct was to fix the local standby database first, so it would be available for failover: by stopping the remote standby databases, the local standby database was able to fetch archived logs from the primary database again. The next step was to start the other standby databases; to speed things up I started the first one, and only after that database had resolved its archive log gap did I start the second one.
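
To follow the progress and check whether a standby database still has a gap, a quick query on the standby database helps (run for example from sqlplus as sysdba on the standby):

SQL> SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

An empty result means no gap is currently detected in the redo stream.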

 

In conclusion, it's important to tune the settings for your environment: set log_archive_max_processes appropriately and set your log level so you can see what's going on.

Please mind that both of these settings are also managed by the Dataguard Broker. To prevent warnings from the Dataguard Broker, make sure you set these parameters via dgmgrl:

edit database <<primary>> set property LogArchiveTrace=2176;
edit database <<primary>> set property LogArchiveMaxProcesses=30;


Handle a GitHub Push Event from a Web Hook Trigger in a Node application

Tue, 2018-03-20 11:57

My requirement in this case: a push of one or more commits to a GitHub repository needs to trigger a Node application that inspects the commits and, when specific conditions are met, downloads the contents of the commit.

[Diagram: GitHub push event triggering the Node application via a webhook]

I have implemented this functionality using a Node application – primarily because it offers me an easy way to create a REST end point that I can configure as a WebHook in GitHub.

Implementing the Node application

The requirements for a REST endpoint that can be configured as a webhook endpoint are quite simple: handle a POST request; no response is required. I can do that!

In my implementation, I inspect the push event, extract some details about the commits it contains and write the summary to the console. The code is quite straightforward and self explanatory; it can easily be extended to support additional functionality:

app.post('/github/push', function (req, res) {
  var githubEvent = req.body
  // - githubEvent.head_commit is the last (and frequently the only) commit
  // - githubEvent.pusher is the user of the pusher pusher.name and pusher.email
  // - timestamp of final commit: githubEvent.head_commit.timestamp
  // - branch:  githubEvent.ref (refs/heads/master)

  var commits = {}
  if (githubEvent.commits)
    commits = githubEvent.commits.reduce(
      function (agg, commit) {
        agg.messages = agg.messages + commit.message + ";"
        agg.filesTouched = agg.filesTouched.concat(commit.added).concat(commit.modified).concat(commit.removed)
          .filter(file => file.indexOf("src/js/jet-composites/input-country") > -1)
        return agg
      }
      , { "messages": "", "filesTouched": [] })

  var push = {
    "finalCommitIdentifier": githubEvent.after,
    "pusher": githubEvent.pusher,
    "timestamp": githubEvent.head_commit.timestamp,
    "branch": githubEvent.ref,
    "finalComment": githubEvent.head_commit.message,
    "commits": commits
  }
  console.log("WebHook Push Event: " + JSON.stringify(push))
  if (push.commits.filesTouched.length > 0) {
    console.log("This commit involves changes to the input-country component, so let's update the composite component for it ")
    var compositeName = "input-country"
    compositeloader.updateComposite(compositeName)
  }

  var response = push
  res.json(response)
})
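
To test the endpoint locally without involving GitHub, you can simulate a minimal push payload with curl. The port (3000) and file path are assumptions, and the field names follow the GitHub push event format used in the code above (note that compositeloader.updateComposite is defined elsewhere in the application):

curl -X POST http://localhost:3000/github/push \
  -H "Content-Type: application/json" \
  -d '{ "ref": "refs/heads/master", "after": "d6f2c3a",
        "pusher": { "name": "tester", "email": "tester@example.com" },
        "head_commit": { "message": "test commit", "timestamp": "2018-03-20T11:00:00Z" },
        "commits": [ { "message": "test commit",
                       "added": ["src/js/jet-composites/input-country/loader.js"],
                       "modified": [], "removed": [] } ] }'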

Configuring the WebHook in GitHub

A web hook can be configured in GitHub for any of your repositories. You indicate the endpoint URL, the type of event that should trigger the web hook and optionally a secret. See my configuration:

[Screenshot: GitHub webhook configuration]

Trying out the WebHook and receiving Node application

In this particular case, the Node application is running locally on my laptop. I have used ngrok to expose the local application on a public internet address:

[Screenshot: ngrok exposing the local application]

(note: this is the address you saw in the web hook configuration)

I have committed and pushed a small change in a file in the repository on which the webhook is configured:

[Screenshot: commit pushed to the repository]

The ngrok agent has received the WebHook request:

[Screenshot: ngrok agent receiving the WebHook request]

The Node application has received the push event and has done its processing:

[Screenshot: Node application console output]


Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller

Mon, 2018-03-19 00:58

The requirement is simple: a Node JS application that receives HTTP requests, forwards (some of) them to other hosts, and subsequently returns the responses it receives to the original caller.

[Diagram: Node proxy forwarding HTTP requests and responses]

This can be used in many situations – to ensure all resources loaded in a web application come from the same host (one way to handle CORS), to have content in IFRAMEs loaded from the same host as the surrounding application or to allow connection between systems that cannot directly reach each other. Of course, the proxy component does not have to be the dumb and mute intermediary – it can add headers, handle faults, perform validation and keep track of the traffic. Before you know it, it becomes an API Gateway…

This article shows a very simple example of a proxy that I want to use for the following purpose: I create a Rich Web Client application (Angular, React, Oracle JET) – and some of the components used are owned and maintained by an external party. Instead of adding those sources to the server that serves the static sources of the web application, I use the proxy to retrieve these specific sources from their real origin (either a live application, a web server or even a Git repository). This allows me to have the latest sources of these components at any time, without redeploying my own application.

The proxy component is of course very simple and straightforward. And I am sure it can be much improved upon. For my current purposes, it is good enough.

The Node application consists of the file www, which is started with npm start through package.json. This file does some generic initialization of Express (such as defining the port on which to listen) and then defers to app.js for all request handling. In app.js, a static file server is configured to serve files from the local /public subdirectory (using express.static).

www:

var app = require('../app');
var debug = require('debug')(' :server');
var http = require('http');

var port = normalizePort(process.env.PORT || '3000');
app.set('port', port);
var server = http.createServer(app);
server.listen(port);
server.on('error', onError);
server.on('listening', onListening);

function normalizePort(val) {
  var port = parseInt(val, 10);

  if (isNaN(port)) {
    // named pipe
    return val;
  }

  if (port >= 0) {
    // port number
    return port;
  }

  return false;
}

function onError(error) {
  if (error.syscall !== 'listen') {
    throw error;
  }

  var bind = typeof port === 'string'
    ? 'Pipe ' + port
    : 'Port ' + port;

  // handle specific listen errors with friendly messages
  switch (error.code) {
    case 'EACCES':
      console.error(bind + ' requires elevated privileges');
      process.exit(1);
      break;
    case 'EADDRINUSE':
      console.error(bind + ' is already in use');
      process.exit(1);
      break;
    default:
      throw error;
  }
}

function onListening() {
  var addr = server.address();
  var bind = typeof addr === 'string'
    ? 'pipe ' + addr
    : 'port ' + addr.port;
  debug('Listening on ' + bind);
}

package.json:

{
  "name": "jet-on-node",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "body-parser": "~1.18.2",
    "cookie-parser": "~1.4.3",
    "debug": "~2.6.9",
    "express": "~4.15.5",
    "morgan": "~1.9.0",
    "pug": "2.0.0-beta11",
    "request": "^2.85.0",
    "serve-favicon": "~2.4.5"
  }
}

app.js:

var express = require('express');
var path = require('path');
var favicon = require('serve-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

const http = require('http');
const url = require('url');
const fs = require('fs');
const request = require('request');

var app = express();
// uncomment after placing your favicon in /public
//app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());

// define static resource server from local directory public (for any request not otherwise handled)
app.use(express.static(path.join(__dirname, 'public')));

app.use(function (req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
  next();
});

// catch 404 and forward to error handler
app.use(function (req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// error handler
app.use(function (err, req, res, next) {
  // set locals, only providing error in development
  res.locals.message = err.message;
  res.locals.error = req.app.get('env') === 'development' ? err : {};

  // render the error page
  res.status(err.status || 500);
  res.json({
    message: err.message,
    error: err
  });
});

module.exports = app;

Then the interesting bit: requests for URL /js/jet-composites/* are intercepted. Instead of having those requests also handled by serving local resources (from directory public/js/jet-composites/*), the requests are interpreted and routed to an external host. The responses from that host are returned to the requester. To the requesting browser, there is no distinction between resources served locally as static artifacts from the local file system and resources retrieved through these redirected requests.

// any request at /js/jet-composites (for resources in that folder)
// should be intercepted and redirected
var compositeBasePath = '/js/jet-composites/'
app.get(compositeBasePath + '*', function (req, res) {
  var requestedResource = req.url.substr(compositeBasePath.length)
  // parse URL
  const parsedUrl = url.parse(requestedResource);
  // extract URL path
  let pathname = `${parsedUrl.pathname}`;
  // maps file extension to MIME types
  const mimeType = {
    '.ico': 'image/x-icon',
    '.html': 'text/html',
    '.js': 'text/javascript',
    '.json': 'application/json',
    '.css': 'text/css',
    '.png': 'image/png',
    '.jpg': 'image/jpeg',
    '.wav': 'audio/wav',
    '.mp3': 'audio/mpeg',
    '.svg': 'image/svg+xml',
    '.pdf': 'application/pdf',
    '.doc': 'application/msword',
    '.eot': 'application/vnd.ms-fontobject',
    '.ttf': 'application/font-sfnt'
  };

  handleResourceFromCompositesServer(res, mimeType, pathname)
})

async function handleResourceFromCompositesServer(res, mimeType, requestedResource) {
  var reqUrl = "http://yourhost:theport/applicationURL/" + requestedResource
  // fetch resource and return
  var options = url.parse(reqUrl);
  options.method = "GET";
  options.agent = false;

  // options.headers['host'] = options.host;
  http.get(reqUrl, function (serverResponse) {
    console.log('<== Received res for', serverResponse.statusCode, reqUrl);
    console.log('\t-> Request Headers: ', options);
    console.log(' ');
    console.log('\t-> Response Headers: ', serverResponse.headers);

    serverResponse.pause();

    serverResponse.headers['access-control-allow-origin'] = '*';

    switch (serverResponse.statusCode) {
      // pass through. we're not too smart here...
      case 200: case 201: case 202: case 203: case 204: case 205: case 206:
      case 304:
      case 400: case 401: case 402: case 403: case 404: case 405:
      case 406: case 407: case 408: case 409: case 410: case 411:
      case 412: case 413: case 414: case 415: case 416: case 417: case 418:
        res.writeHeader(serverResponse.statusCode, serverResponse.headers);
        serverResponse.pipe(res, { end: true });
        serverResponse.resume();
        break;

      // fix host and pass through
      // (PORT is assumed to hold the port this proxy listens on, defined elsewhere)
      case 301:
      case 302:
      case 303:
        serverResponse.statusCode = 303;
        serverResponse.headers['location'] = 'http://localhost:' + PORT + '/' + serverResponse.headers['location'];
        console.log('\t-> Redirecting to ', serverResponse.headers['location']);
        res.writeHeader(serverResponse.statusCode, serverResponse.headers);
        serverResponse.pipe(res, { end: true });
        serverResponse.resume();
        break;

      // error everything else
      default:
        var stringifiedHeaders = JSON.stringify(serverResponse.headers, null, 4);
        serverResponse.resume();
        res.writeHeader(500, {
          'content-type': 'text/plain'
        });
        res.end(process.argv.join(' ') + ':\n\nError ' + serverResponse.statusCode + '\n' + stringifiedHeaders);
        break;
    }

    console.log('\n\n');
  });
}
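
A quick way to try out the proxy behaviour, assuming the application listens on port 3000 (the component resource name is hypothetical):

$ curl -i http://localhost:3000/js/jet-composites/input-country/loader.js

This response comes from the external composites server, while a request for any other path is served as a static file from the local public directory.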

Resources

Express Tutorial Part 2: Creating a skeleton website - https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/skeleton_website

Building a Node.js static file server (files over HTTP) using ES6+ - http://adrianmejia.com/blog/2016/08/24/Building-a-Node-js-static-file-server-files-over-HTTP-using-ES6/

How To Combine REST API calls with JavaScript Promises in node.js or OpenWhisk - https://medium.com/adobe-io/how-to-combine-rest-api-calls-with-javascript-promises-in-node-js-or-openwhisk-d96cbc10f299

Node script to forward all http requests to another server and return the response with an access-control-allow-origin header. Follows redirects. - https://gist.github.com/cmawhorter/a527a2350d5982559bb6

5 Ways to Make HTTP Requests in Node.js - https://www.twilio.com/blog/2017/08/http-requests-in-node-js.html


Create a Node JS application for Downloading sources from GitHub

Sun, 2018-03-18 16:26

My objective: create a Node application to download sources from a repository on GitHub. I want to use this application to read a simple package.json-like file (that describes which reusable components, from which GitHub repositories, the application depends on) and download all required resources from GitHub and store them in the local file system. This by itself may not seem very useful. However, it is a stepping stone toward a facility that updates application components at run time, triggered by GitHub WebHooks.
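
Such a dependency descriptor could look like this (a hypothetical sketch; the fields mirror the github object used in the code below, and the second repository name is invented):

{
  "components": [
    { "owner": "lucasjellema", "repo": "WebAppIframe2ADFSynchronize", "branch": "master" },
    { "owner": "lucasjellema", "repo": "another-component-repo", "tag": "v1.2" }
  ]
}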

I am making use of the Octokit Node JS library to interact with the REST APIs of GitHub. The code I have created will:

  • fetch the meta-data for all items in the root folder of a GitHub Repo (at the tip of a specific branch, or at a specific tag or commit identifier)
  • iterate over all items:
    • download the contents of the item if it is a file and create a local file with the content (catering for large files and for binary files)
    • create a local directory for each item in the GitHub repo that is a directory, then recursively process the contents of that directory on GitHub

An example of the code in action:

A randomly selected GitHub repo (at https://github.com/lucasjellema/WebAppIframe2ADFSynchronize):

[Screenshot: GitHub repository]

The local target directory is empty at the beginning of the action:

[Screenshot: empty local target directory]

Run the code:

[Screenshot: running the Node application]

And the content is downloaded and written locally:

[Screenshot: downloaded content in the local target directory]

Note: the code could easily provide an execution report with details such as file size, download result, last change date, etc.; it is currently very straightforward. Note: the gitToken is something you need to get hold of yourself in the GitHub dashboard: https://github.com/settings/tokens . Without a token the code will still work, but you will be bound to the GitHub rate limit (about 60 requests per hour).
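
You can check your current rate-limit status with a quick call to the GitHub API (use your own token value):

$ curl -H "Authorization: token YourToken" https://api.github.com/rate_limit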

const octokit = require('@octokit/rest')() 
const fs = require('fs');

var gitToken = "YourToken"

octokit.authenticate({
    type: 'token',
    token: gitToken
})

var targetProjectRoot = "C:/data/target/" 
var github = { "owner": "lucasjellema", "repo": "WebAppIframe2ADFSynchronize", "branch": "master" }

downloadGitHubRepo(github, targetProjectRoot)

async function downloadGitHubRepo(github, targetDirectory) {
    console.log(`Installing GitHub Repo ${github.owner}\\${github.repo}`)
    var repo = github.repo;
    var path = ''
    var owner = github.owner
    var ref = github.commit ? github.commit : (github.tag ? github.tag : (github.branch ? github.branch : 'master'))
    processGithubDirectory(owner, repo, ref, path, path, targetDirectory)
}

// let's assume that if the name ends with one of these extensions, we are dealing with a binary file:
const binaryExtensions = ['png', 'jpg', 'tiff', 'wav', 'mp3', 'doc', 'pdf']
var maxSize = 1000000;
function processGithubDirectory(owner, repo, ref, path, sourceRoot, targetRoot) {
    octokit.repos.getContent({ "owner": owner, "repo": repo, "path": path, "ref": ref })
        .then(result => {
            var targetDir = targetRoot + path
            // check if targetDir exists 
            checkDirectorySync(targetDir)
            result.data.forEach(item => {
                if (item.type == "dir") {
                    processGithubDirectory(owner, repo, ref, item.path, sourceRoot, targetRoot)
                } // if directory
                if (item.type == "file") {
                    if (item.size > maxSize) {
                        var sha = item.sha
                        octokit.gitdata.getBlob({ "owner": owner, "repo": repo, "sha": item.sha }
                        ).then(result => {
                            var target = `${targetRoot + item.path}`
                            fs.writeFile(target
                                , Buffer.from(result.data.content, 'base64').toString('utf8'), function (err, data) { })
                        })
                            .catch((error) => { console.log("ERROR BIGGA" + error) })
                        return;
                    }// if bigga
                    octokit.repos.getContent({ "owner": owner, "repo": repo, "path": item.path, "ref": ref })
                        .then(result => {
                            var target = `${targetRoot + item.path}`
                            if (binaryExtensions.includes(item.path.slice(-3))) {
                                fs.writeFile(target
                                    , Buffer.from(result.data.content, 'base64'), function (err, data) { reportFile(item, target) })
                            } else
                                fs.writeFile(target
                                    , Buffer.from(result.data.content, 'base64').toString('utf8'), function (err, data) { if (!err) reportFile(item, target); else console.log('Fuotje ' + err) })

                        })
                        .catch((error) => { console.log("ERROR " + error) })
                }// if file
            })
        }).catch((error) => { console.log("ERROR XXX" + error) })
}//processGithubDirectory

function reportFile(item, target) {
    console.log(`- installed ${item.name} (${item.size} bytes )in ${target}`)
}

function checkDirectorySync(directory) {
    try {
        fs.statSync(directory);
    } catch (e) {
        fs.mkdirSync(directory);
        console.log("Created directory: " + directory)
    }
}
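
As mentioned in the note at the top, an execution report is easy to add. Below is a minimal sketch of what that could look like – the downloadStats object and recordFile function are illustrative additions, not part of the code above:

// collect simple statistics per downloaded file
var downloadStats = { fileCount: 0, byteCount: 0 }

function recordFile(item) {
    downloadStats.fileCount++
    downloadStats.byteCount += item.size
}

// call recordFile(item) from within reportFile; at the end of a run the totals can be printed:
// console.log(`Downloaded ${downloadStats.fileCount} files (${downloadStats.byteCount} bytes in total)`)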

Resources

Octokit REST API Node JS library: https://github.com/octokit/rest.js 

API Documentation for Octokit: https://octokit.github.io/rest.js/#api-Repos-getContent

The post Create a Node JS application for Downloading sources from GitHub appeared first on AMIS Oracle and Java Blog.

Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu

Sun, 2018-03-18 08:53

Spring Boot is great for running inside a Docker container. Spring Boot applications ‘just run’. A Spring Boot application has an embedded servlet engine making it independent of application servers. There is a Spring Boot Maven plugin available to easily create a JAR file which contains all required dependencies. This JAR file can be run with a single command-line like ‘java -jar SpringBootApp.jar’. For running it in a Docker container, you only require a base OS and a JDK. In this blog post I’ll give examples on how to get started with different OSs and different JDKs in Docker. I’ll finish with an example on how to build a Docker image with a Spring Boot application in it.

Getting started with Docker

Installing Docker

Of course you need a Docker installation. I won't go into detail here, but the short version:

Oracle Linux 7
yum-config-manager --enable ol7_addons
yum-config-manager --enable ol7_optional_latest
yum install docker-engine
systemctl start docker
systemctl enable docker
Ubuntu
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
apt-get update
apt-get install docker-ce

You can add a user to the docker group or grant it sudo rights for the docker command. Either option effectively allows the user to become root on the host OS, though.

Running a Docker container

See below for commands you can execute to start containers in the foreground or background and access them. For ‘mycontainer’ in the below examples, you can fill in a name you like. The image names can be found in the descriptions further below; for example container-registry.oracle.com/os/oraclelinux:7 for an Oracle Linux 7 image from the Oracle Container Registry, or store/oracle/serverjre:8 for a JRE image from the Docker Store.

If you are using the Oracle Container Registry (for example to obtain Oracle JDK or Oracle Linux docker images) you first need to

  • go to container-registry.oracle.com and enable your OTN account to be used
  • go to the product you want to use and accept the license agreement
  • do docker login -u username -p password container-registry.oracle.com

If you are using the Docker Store, you first need to

  • go to store.docker.com and create an account
  • find the image you want to use. Click Get Content and accept the license agreement
  • do docker login -u username -p password

To start a container in the foreground

docker run --name mycontainer -it imagename /bin/sh

To start a container in the background

docker run --name mycontainer -d imagename tail -f /dev/null

To ‘enter’ a running container:

docker exec -it mycontainer /bin/sh

/bin/sh exists in Alpine Linux, Oracle Linux and Ubuntu. For Oracle Linux and Ubuntu you can also use /bin/bash. ‘tail -f /dev/null’ is used to keep a ‘bare OS’ container – which has no other running processes – running. A suggestion from here.

Cleaning up

Good to know is how to clean up your images/containers after having played around with them. See here.

#!/bin/bash
# Delete all containers
docker rm $(docker ps -a -q)
# Delete all images
docker rmi $(docker images -q)
Options for JDK

Of course there are more options for running JDKs in Docker containers. These are just some of the more commonly used ones.

Oracle JDK on Oracle Linux

When you’re running in the Oracle Cloud, you have probably noticed the OS running beneath it is often Oracle Linux (and currently also often version 7.x). When for example running Application Container Cloud Service, it uses the Oracle JDK. If you want to run in a similar environment locally, you can use Docker images. Good to know is that the Oracle Server JRE contains more than a regular JRE but less than a complete JDK. Oracle recommends using the Server JRE whenever possible instead of the JDK since the Server JRE has a smaller attack surface. Read more here. For questions about support and roadmap, read the following blog.

store.docker.com

The steps to obtain Docker images for Oracle JDK / Oracle Linux from store.docker.com are as follows:

Create an account on store.docker.com. Go to https://store.docker.com/images/oracle-serverjre-8. Click Get Content. Accept the agreement and you’re ready to login, pull and run.

#use the store.docker.com username and password
docker login -u yourusername -p yourpassword
docker pull store/oracle/serverjre:8

To start in the foreground:

docker run --name jre8 -it store/oracle/serverjre:8 /bin/bash
container-registry.oracle.com

You can use the image from the container registry. First, same as for just running the OS, enable your OTN account and login.

#use your OTN username and password
docker login -u yourusername -p yourpassword container-registry.oracle.com

docker pull container-registry.oracle.com/java/serverjre:8

#To start in the foreground:
docker run --name jre8 -it container-registry.oracle.com/java/serverjre:8 /bin/bash
OpenJDK on Alpine Linux

When running Docker containers, you want them to be as small as possible to allow quick starting, stopping, downloading, scaling, etc. Alpine Linux is a suitable Linux distribution for small containers and is being used quite often. There can be some thread-related challenges with Alpine Linux though. See for example here and here.

Running OpenJDK in Alpine Linux in a Docker container is easier than you might think. You don't need a specific account or login for this.

When you pull openjdk:8, you will get a Debian 9 image. In order to run on Alpine Linux, you can do

docker pull openjdk:8-jdk-alpine

Next you can do

docker run --name openjdk8 -it openjdk:8-jdk-alpine /bin/sh
Zulu on Ubuntu Linux

You can also consider OpenJDK-based JDKs like Azul’s Zulu. This works mostly the same; only the image name differs, something like ‘azul/zulu-openjdk:8’. The Zulu images are Ubuntu based.
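
For example – following the same pattern as above; no account or login is needed:

docker pull azul/zulu-openjdk:8
docker run --name zulu8 -it azul/zulu-openjdk:8 /bin/bash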

Do it yourself

Of course you can create your own image with a JDK. See for example here. This requires you to download the JDK and build the image yourself. This is quite easy though.

Spring Boot in a Docker container

Creating a container with a Spring Boot application based on an image which already has a JDK in it is easy. This is described here. You can create a simple Dockerfile like:

FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE
ADD ${JAR_FILE} app.jar
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

The FROM image can also be an Oracle JDK or Zulu JDK image as mentioned above.
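
To build an image from this Dockerfile directly – without the Maven plugin described below – you can pass the location of the JAR file as a build argument. The JAR and image names here are just examples:

docker build --build-arg JAR_FILE=target/myapp.jar -t myrepo/myapp .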

Then add a dependency on the com.spotify dockerfile-maven-plugin and some configuration to your pom.xml file to automate building the Docker image once you have the Spring Boot JAR file. See for a complete example pom.xml and Dockerfile also here. The relevant part of the pom.xml file is below.

<build>
<finalName>accs-cache-sample</finalName>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
<plugin>
<groupId>com.spotify</groupId>
<artifactId>dockerfile-maven-plugin</artifactId>
<version>1.3.6</version>
<configuration>
<repository>${docker.image.prefix}/${project.artifactId}</repository>
<buildArgs>
<JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
</buildArgs>
</configuration>
</plugin>
</plugins>
</build>

To actually build the Docker image, which allows using it locally, you can do:

mvn install dockerfile:build

If you want to distribute it (allow others to easily pull and run it), you can push it with

mvn install dockerfile:push

This will of course only work if you’re logged in as maartensmeets and only for Docker hub (for this example). The below screenshot is after having pushed the image to hub.docker.com. You can find it there since it is public.

You can then do something like

docker run -t maartensmeets/accs-cache-sample:latest

The post Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu appeared first on AMIS Oracle and Java Blog.

Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application

Wed, 2018-03-14 10:24

Spring Boot allows you to quickly develop microservices. Application Container Cloud Service (ACCS) allows you to easily host Spring Boot applications. Oracle provides an Application Cache based on Coherence which you can use from applications deployed to ACCS. In order to use the Application Cache from Spring Boot, Oracle provides an open source Java SDK. In this blog post I’ll give an example on how you can use the Application Cache from Spring Boot using this SDK. You can find the sample code here.

Using the Application Cache Java SDK

Create an Application Cache

You can use a web-interface to easily create a new instance of the Application Cache. A single instance can contain multiple caches. A single application can use multiple caches but only a single cache instance. Multiple applications can use the same cache instance and caches. Mind that the application and the application cache must be deployed in the same region in order to allow connectivity. Also, do not use the ‘-’ character in your cache name, since otherwise the LBaaS configuration will fail.

Use the Java SDK

Spring Boot applications commonly use an architecture which defines abstraction layers. External resources are exposed through a controller. The controller uses services. These services provide operations to execute specific tasks. The services use repositories for their connectivity / data access objects. Entities are the POJOs which are exchanged/persisted and exposed, for example as REST resources in a controller. In order to connect to the cache, the repository seems like a good location. Which repository to use (a persistent back-end like a database, or the application cache repository) can be decided by the service, and this can differ per operation. Get operations, for example, might directly use the cache repository (which could fall back to other sources if it can't find its data), while you might want to do Put operations in both the persistent backend and the cache. See for an example here.

In order to gain access to the cache, first a session needs to be established. The session can be obtained from a session provider; this can be a local session provider or a remote session provider. The local session provider can be used for local development. It can be created with an expiry which indicates the validity period of items in the cache. When developing/testing, you might try setting this to ‘never expires’, since otherwise you might not be able to find entries which you expect to be there. I have not looked further into this issue or created a service request for it, nor do I know if this is only an issue with the local session provider. See for sample code here or here.

When creating a session, you also need to specify the protocol to use. When using the Java SDK, you can (at the moment) choose from GRPC and REST. GRPC might be more challenging to implement without an SDK in for example Node.js code, but I have not tried this. I have not compared the performance of the 2 protocols. Another difference is that the application uses different ports and URLs to connect to the cache. You can see how to determine the correct URL / protocol from ACCS environment variables here.

The ACCS Application Cache Java SDK allows you to add a Loader and a Serializer class when creating a Cache object. The Loader class is invoked when a value cannot be found in the cache. This allows you to fetch objects which are not in the cache. The Serializer is required so the object can be transferred via REST or GRPC. See the samples referenced at the end of this article for concrete code.

Injection

Mind that when using Spring Boot you do not want to create instances of objects directly by doing something like: MyClass bla = new MyClass(). You want to let Spring handle this by using the @Autowired annotation.

Do mind though that the @Autowired annotation assigns instances to variables after the constructor of the instance is executed. If you want to use the @Autowired variables after your constructor but before executing other methods, you should put them in a @PostConstruct annotated method. See also here. See for a concrete implemented sample here.
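
A generic sketch of that pattern is shown below; the CacheRepository class, the CacheSettings bean and its getCacheName method are made up for this illustration and are not part of the SDK:

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class CacheRepository {

    @Autowired
    private CacheSettings cacheSettings; // populated by Spring after construction

    public CacheRepository() {
        // cacheSettings is still null here: @Autowired fields are assigned after the constructor runs
    }

    @PostConstruct
    public void init() {
        // at this point all @Autowired fields are guaranteed to be set
        System.out.println("Using cache " + cacheSettings.getCacheName());
    }
}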

Connectivity

The Application cache can be restarted at certain times (e.g. maintenance like patching, scaling) and there can be connectivity issues due to other reasons. In order to deal with that it is a good practice to make the connection handling more robust by implementing retries. See for example here.
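
A minimal sketch of such a retry wrapper – plain Java, not taken from the SDK or the linked example:

import java.util.function.Supplier;

public class Retry {
    // execute an action up to maxAttempts times, waiting waitMillis between attempts
    public static <T> T withRetry(Supplier<T> action, int maxAttempts, long waitMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure and try again after a short wait
                try { Thread.sleep(waitMillis); } catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
            }
        }
        throw last; // all attempts failed
    }
}

A cache lookup could then be wrapped as, for example, withRetry(() -> cache.get(key), 5, 1000).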

Deploy a Spring Boot application to ACCS

Create a deployable

In order to deploy an application to ACCS, you need to create a ZIP file in a specific format. In this ZIP file there should at least be a manifest.json file which describes (amongst other things) how to start the application. You can read more here. If you have environment specific properties, binding information (such as which cache to use) and environment variables, you can create a deployment.json file. In addition to those metadata files, there of course needs to be the application itself. In case of Spring Boot, this is a large JAR file which contains all dependencies. You can create this file with the spring-boot-maven-plugin. The ZIP itself is most easily composed with the maven-assembly-plugin.

Deploy to ACCS

There are 2 major ways (next to directly using the APIs with for example cURL) in which you can deploy to ACCS: manually or by using the Developer Cloud Service. The process to do this from Developer Cloud Service is described here; it is quicker (allowing, for example, redeployment on Git commit) and more flexible. The manual procedure is outlined globally below. An important thing to mind is that if you deploy the same application under the same name several times, you might encounter issues with the application not being replaced by the new version. In that case you can do 2 things. You can deploy under a different name every time – the name of the application however is reflected in the URL, which could cause issues for users of the application. Another way is to remove the files from the Storage Cloud Service before redeployment, so you are sure the deployable that ends up in ACCS is the most recent version.

Manually

Create a new Java SE application.

 

Upload the previously created ZIP file.

References

Introducing Application Cache Client Java SDK for Oracle Cloud

Caching with Oracle Application Container Cloud

Complete working sample Spring Boot on ACCS with Application Cache (as soon as an SR is resolved)

A sample of using the Application Cache Java SDK. Application is Jersey based

The post Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application appeared first on AMIS Oracle and Java Blog.

ADF Performance Tuning: Avoid a Long Browser Load Time

Wed, 2018-03-07 04:10

It is not always easy to troubleshoot ADF performance problems – it is often complicated. Many parts need to be measured, analyzed and considered. While looking for performance problems at the usual suspects (ADF application, database, network), the real problem can also be found in the often overlooked browser load time. The browser load time is just as important a part of the HTTP request and response handling as the time spent in the application server, database and network. It can add a few seconds on top of the server and network processing time before the end-user receives the HTTP response and can continue working – especially if the browser needs to build a very ‘rich’ ADF page, with a very large DOM tree to build and process. The end-user then has to wait for seconds, even in modern browsers such as Google Chrome, Firefox and Microsoft Edge. Often this is caused by a ‘bad’ page design in which too many ADF components are rendered and displayed at the same time; too many table columns and rows, but also too many other components, can cause a slow browser load time. This blog shows an example, analyzes the browser load time in the ADF Performance Monitor, and suggests simple page design considerations to prevent a long browser load time.

Read more on adfpm.com – our new website on the ADF Performance Monitor.

The post ADF Performance Tuning: Avoid a Long Browser Load Time appeared first on AMIS Oracle and Java Blog.

Get going with Project Fn on a remote Kubernetes Cluster from a Windows laptop–using Vagrant, VirtualBox, Docker, Helm and kubectl

Sun, 2018-03-04 14:08

image

The challenge I describe in this article is quite specific. I have a Windows laptop. I have access to a remote Kubernetes cluster (on Oracle Cloud Infrastructure). I want to create Fn functions and deploy them to an Fn server running on that Kubernetes (k8s from now on) environment and I want to be able to execute functions running on k8s from my laptop. That’s it.

In this article I will take you on a quick tour of what I did to get this to work:

  • Use vagrant to spin up a VirtualBox VM based on a Debian Linux image and set up with Docker Server installed. Use SSH to enter the Virtual Machine and install Helm (a Kubernetes package installer) – both client (in the VM) and server (called Tiller, on the k8s cluster). Also install kubectl in the VM.
  • Then install Project Fn in the VM. Also install Fn to the Kubernetes cluster, using the Helm chart for Fn (this will create a series of Pods and Services that make up and run the Fn platform).
  • Still inside the VM, create a new Fn function. Then, deploy this function to the Fn server on the Kubernetes cluster. Run the function from within the VM – using kubectl to set up port forwarding for local calls to requests into the Kubernetes cluster.
  • On the Windows host (the laptop, outside the VM) we can also run kubectl with port forwarding and invoke the Fn function on the Kubernetes cluster.
  • Finally, I show how to expose the fn-api service from the Kubernetes cluster on an external IP address. Note: the latter is nice for demos, but compromises security in a major way.

All in all, you will see how to create, deploy and invoke an Fn function – using a Windows laptop and a remote Kubernetes cluster as the runtime environment for the function.

The starting point:

image

a laptop running Windows, with VirtualBox and Vagrant installed, and a remote Kubernetes cluster (which could be in some cloud, such as the Oracle Container Engine Cloud that I am using, or could be minikube).

Step One: Prepare Virtual Machine

Create a Vagrantfile – for example this one: https://github.com/lucasjellema/fn-on-kubernetes-from-docker-in-vagrant-vm-on-windows/blob/master/vagrantfile:

Vagrant.configure("2") do |config|
  
config.vm.provision "docker"

config.vm.define "debiandockerhostvm"
# https://app.vagrantup.com/debian/boxes/jessie64
config.vm.box = "debian/jessie64"
config.vm.network "private_network", ip: "192.168.188.105"
 

config.vm.synced_folder "./", "/vagrant", id: "vagrant-root",
       owner: "vagrant",
       group: "www-data",
       mount_options: ["dmode=775,fmode=664"],
       type: ""
         
config.vm.provider :virtualbox do |vb|
   vb.name = "debiananddockerhostvm"
   vb.memory = 4096
   vb.cpus = 2
   vb.customize ["modifyvm", :id, "--natdnshostresolver1","on"]
   vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
  
end

This Vagrantfile will create a VM with VirtualBox called debiandockerhostvm – based on the VirtualBox image debian/jessie64. It exposes the VM to the host laptop at IP 192.168.188.105 (you can safely change this). It maps the local directory that contains the Vagrantfile into the VM, at /vagrant. This allows us to easily exchange files between Windows host and Debian Linux VM. The instruction “config.vm.provision “docker”” ensures that Docker is installed into the Virtual Machine.

To actually create the VM, open a command line and navigate to the directory that contains the Vagrant file. Then type “vagrant up”. Vagrant starts running and creates the VM, interacting with the VirtualBox APIs. When the VM is created, it is started.

From the same command line, using “vagrant ssh”, you can now open a terminal window in the VM.

To further prepare the VM, we need to install Helm and kubectl. Helm is installed in the VM (client) as well as in the Kubernetes cluster (the Tiller server component).

Here are the steps to perform inside the VM (see step 1):

######## kubectl

# download and extract the kubectl binary 
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

# set the executable flag for kubectl
chmod +x ./kubectl

# move the kubectl executable to the bin directory
sudo mv ./kubectl /usr/local/bin/kubectl

# assuming that the kubeconfig file with details for Kubernetes cluster is available On the Windows Host:
# Copy the kubeconfig file to the directory that contains the Vagrantfile and from which vagrant up and vagrant ssh were performed
# note: this directory is mapped into the VM to directory /vagrant

#Then in VM - set the proper Kubernetes configuration context: 
export KUBECONFIG=/vagrant/kubeconfig

#now inspect the succesful installation of kubectl and the correct connection to the Kubernetes cluster 
kubectl cluster-info


########  HELM
#download the Helm installer
curl -LO  https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz

#extract the Helm executable from the archive
tar -xzf helm-v2.8.1-linux-amd64.tar.gz

#set the executable flag on the Helm executable
sudo chmod +x  ./linux-amd64/helm

#move the Helm executable to the bin directory - as helm
sudo mv ./linux-amd64/helm /usr/local/bin/helm

#test the successful installation of helm
helm version

###### Tiller

#Helm has a server side companion, called Tiller, that should be installed into the Kubernetes cluster
# this is easily done by executing:
helm init

# an easy test of the Helm/Tiller set up can be run (as described in the quickstart guide)
helm repo update              

helm install stable/mysql

helm list

# now inspect in the Kubernetes Dashboard the Pod that should have been created for the MySQL Helm chart

# clean up after yourself:
helm delete <name of the release of MySQL>

When this step is complete, the environment looks like this:

image

Step Two: Install Project Fn – in VM and on Kubernetes

Now that we have prepared our Virtual Machine, we can proceed with adding the Project Fn command line utility to the VM and the Fn platform to the Kubernetes cluster. The former is a simple local installation of a binary file. The latter is an even simpler installation of a Helm chart. Here are the steps that you should go through inside the VM (also see step 2):

# 1A. download and install Fn locally inside the VM
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh

#note: this previous statement failed for me; I went through the following steps as a workaround
# 1B. create install script
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install > install.sh
# make script executable
chmod u+x install.sh
# execute script - as sudo
sudo ./install.sh

# 1C. and if that fails, you can manually manipulate the downloaded executable:
sudo mv /tmp/fn_linux /usr/local/bin/fn
sudo chmod +x /usr/local/bin/fn

# 2. when the installation was done through one of the  methods listed, test the success by running  
fn --version


# 3. Server side installation of Fn to the Kubernetes Cluster
# details in https://github.com/fnproject/fn-helm

# Clone the GitHub repo with the Helm chart for fn; sources are downloaded into the fn-helm directory
git clone git@github.com:fnproject/fn-helm.git && cd fn-helm

# Install chart dependencies from requirements.yaml in the fn-helm directory:
helm dep build fn

#To install the Helm chart with the release name my-release into Kubernetes:
helm install --name my-release fn

# to verify the cluster server side installation you could run the following statements:
export KUBECONFIG=/vagrant/kubeconfig

#list all pods for app my-release-fn
kubectl get pods --namespace default -l "app=my-release-fn"

When the installation of Fn has been done, the environment can be visualized as shown below:

image

You can check in the Kubernetes Dashboard to see what has been created from the Helm chart:

image

Or on the command line:

image

Step Three: Create, Deploy and Run Fn Functions

We now have a ready to run environment – client side VM and server side Kubernetes cluster – for creating Fn functions – and subsequently deploying and invoking them.

Let’s now go through these three steps, starting with the creation of a new function called shipping-costs, created in Node.

docker login

export FN_REGISTRY=lucasjellema

mkdir shipping-costs

cd shipping-costs

fn init --name shipping-costs --runtime  node

# this creates the starting point of the Node application (package.json and func.js) as well as the Fn meta data file (func.yaml) 

# now edit the func.js file (and add dependencies to package.json if necessary)

#The extremely simple implementation of func.js looks like this:
var fdk=require('@fnproject/fdk');

fdk.handle(function(input){
  var name = 'World';
  if (input.name) {
    name = input.name;
  }
  var response = {'message': 'Hello ' + name, 'input': input}
  return response
})

#This function receives an input parameter (from a POST request this would be the body contents, typically a JSON document)
# the function returns a result, a JSON document with the message and the input document returned in its entirety

After this step, the function exists in the VM – not anywhere else yet. Some other functions could already have been deployed to the Fn platform on Kubernetes.

image

This function shipping-costs should now be deployed to the K8S cluster, as that was one of our major objectives.

export KUBECONFIG=/vagrant/kubeconfig

# retrieve the name of the Pod running the Fn API
kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}"

# retrieve the name of the Pod running the Fn API and assign to environment variable POD_NAME
export POD_NAME=$(kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}")
echo $POD_NAME    


# set up kubectl port-forwarding; this ensures that any local requests to port 8080 are forwarded by kubectl to the pod specified in this command, on port 80
# this basically creates a shortcut or highway from the VM right into the heart of the K8S cluster; we can leverage this highway for deployment of the function
kubectl port-forward --namespace default $POD_NAME 8080:80 &

#now we inform Fn that deployment activities can be directed at port 8080 of the local host, effectively to the pod $POD_NAME on the K8S cluster
export FN_API_URL=http://127.0.0.1:8080
export FN_REGISTRY=lucasjellema
docker login

#perform the deployment of the function from the directory that contains the func.yaml file
#functions are organized in applications; here the name of the application is set to soaring-clouds-app
fn deploy --app soaring-clouds-app

Here is what the deployment looks like in the terminal window in the VM (I have left out the steps: docker login, set FN_API_URL and set FN_REGISTRY).

image


After deploying function shipping-costs, it now exists on the Kubernetes cluster – inside the fn-api Pod (where a Docker container is running for each of the functions):

image

To invoke the functions, several options are available. The function can be invoked from within the VM, using cURL to the function’s endpoint – leveraging kubectl port forwarding as before. We can also apply kubectl port forwarding on the laptop – and use any tool that can invoke HTTP endpoints – such as Postman – to call the function.

If we want clients without kubectl port forwarding – and even completely without knowledge of the Kubernetes cluster – to invoke the function, that can be done as well, by exposing an external IP for the service on K8S for fn-api.

image

First, let’s invoke the function from within the VM.

export KUBECONFIG=/vagrant/kubeconfig

# retrieve the name of the Pod running the Fn API
kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}"

# retrieve the name of the Pod running the Fn API and assign to environment variable POD_NAME
export POD_NAME=$(kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}")
echo $POD_NAME    


# set up kubectl port-forwarding; this ensures that any local requests to port 8080 are forwarded by kubectl to the pod specified in this command, on port 80
# this basically creates a shortcut or highway from the VM right into the heart of the K8S cluster; we can leverage this highway for deployment of the function
kubectl port-forward --namespace default $POD_NAME 8080:80 &


curl -X POST \
  http://127.0.0.1:8080/r/soaring-clouds-app/shipping-costs \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -H 'Postman-Token: bb753f9f-9f63-46b8-85c1-8a1428a2bdca' \
  -d '{"X":"Y"}'



# on the Windows laptop host
set KUBECONFIG=c:\data\2018-soaring-keys\kubeconfig

kubectl port-forward --namespace default <name of pod> 8080:80 &


kubectl port-forward --namespace default my-release-fn-api-frsl5 8085:80 &

image

Now, try to call the function from the laptop host. This assumes that on the host we have both kubectl and the kubeconfig file that we also use in the VM.

First we have to set the KUBECONFIG environment variable to refer to the kubeconfig file. Then we set up kubectl port forwarding just like in the VM, in this case forwarding port 8085 to the Kubernetes Pod for the Fn API.

image


When this is done, we can make calls to the shipping-costs functions on the localhost, port 8085: endpoint http://127.0.0.1:8085/r/soaring-clouds-app/shipping-costs

image

This still requires the client to be aware of Kubernetes: have the kubeconfig file and the kubectl client. We can make it possible to directly invoke Fn functions from anywhere without using kubectl. We do this by exposing an external IP directly on the service for Fn API on Kubernetes.

The simplest way of making this happen is through the Kubernetes dashboard.

Run the dashboard:

image

and open it in a local browser at http://127.0.0.1:8001/ui .

Edit the configuration of the service for fn-api:

image

Change type ClusterIP to LoadBalancer. This instructs Kubernetes to externally expose this Service – and assign an external IP address to it. Click on Update to make the change real.

image

After a little while, the change will have been processed and we can find an external endpoint for the service.

image

Now we (and anyone who has this IP address) can invoke the Fn function shipping-costs directly using this external IP address:

image
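
In cURL terms the call would look something like this – substitute the external IP address assigned to the service; this assumes the service is exposed on the default HTTP port:

curl -X POST http://<external-ip>/r/soaring-clouds-app/shipping-costs \
  -H 'Content-Type: application/json' \
  -d '{"name":"Jane"}'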

Summary

This article showed how to start with a standard Windows laptop – with only VirtualBox and Vagrant as special components. Through a few simple, largely automated steps, we created a VM that allows us to create Fn functions and to deploy those functions to a Kubernetes cluster, onto which we have also deployed the Fn server platform. The article provides all sources and scripts and demonstrates how to create, deploy and invoke a specific function.

Resources

Sources for this article in GitHub: https://github.com/lucasjellema/fn-on-kubernetes-from-docker-in-vagrant-vm-on-windows

Vagrant home page: https://www.vagrantup.com/

VirtualBox home page: https://www.virtualbox.org/ 

Quickstart for Helm: https://docs.helm.sh/using_helm/#quickstart-guide

Fn Project Helm Chart for Kubernetes – https://medium.com/fnproject/fn-project-helm-chart-for-kubernetes-e97ded6f4f0c

Installation instruction for kubectl – https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl

Project Fn – Quickstart – https://github.com/fnproject/fn#quickstart

Tutorial for Fn with Node: https://github.com/fnproject/fn/tree/master/examples/tutorial/hello/node

Kubernetes – expose external IP address for a Service – https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/

Use Port Forwarding to Access Applications in a Cluster – https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/

AMIS Technology Blog – Rapid first few steps with Fn – open source project for serverless functions – https://technology.amis.nl/2017/10/19/rapid-first-few-steps-with-fn-open-source-project-for-serverless-functions/

AMIS Technology Blog – Create Debian VM with Docker Host using Vagrant–automatically include Guest Additions – https://technology.amis.nl/2017/10/19/create-debian-vm-with-docker-host-using-vagrant-automatically-include-guest-additions/

The post Get going with Project Fn on a remote Kubernetes Cluster from a Windows laptop–using Vagrant, VirtualBox, Docker, Helm and kubectl appeared first on AMIS Oracle and Java Blog.

Creating RMAN backups using Commvault

Thu, 2018-03-01 09:35

One of our customers decided to migrate to Commvault for creating their backups. First they started with OS file backups, but eventually they also wanted to create backups of the Oracle database with Commvault.
Commvault is not replacing RMAN. In fact Commvault generates RMAN commands/scripts which are run on the Oracle database. It can make backups to a storage server.

In this blog article I would like to describe a possible way to make RMAN backups with Commvault. The method described here makes full, incremental and archivelog backups to external storage and also makes an Oracle Recommended Backup in the local FRA of the database. So in fact a twofold backup approach is used here: the first is a classical full and incremental backup approach to external storage; the other is an Oracle Recommended Backup approach to the “local” FRA.

We implement this approach by two Commvault jobs:
1. Full Incremental. This one runs with a frequency of every day.
2. Incremental. This one runs every 4 hours.
The terms used in Commvault are a little bit different from the ones used in Oracle. Therefore these terms can confuse the “standard” Oracle dba a little bit.

The full incremental backup job we execute consists of the following steps:
1. Full backup: to external commvault storage

a. full backup = incremental level 0 backup to external commvault storage
b. autobackup controlfile to external commvault storage

2. Oracle Recommended backup: to FRA on local server

a. backup incremental level 1 for recover of copy: to FRA
b. recover of copy of database: in FRA
c. autobackup controlfile: to FRA

3. backup of archivelog: to external commvault storage

a. backup of archivelog: to external commvault storage
b. autobackup controlfile: to external commvault storage

The incremental backup job consists of:
1. Incremental level 1 backup: to external commvault storage

a. incremental level 1 backup to external commvault storage
b. autobackup controlfile to external commvault storage

2. Oracle Recommended backup: to FRA on local server

a. backup incremental level 1 for recover of copy: to FRA
b. recover of copy of database: in FRA
c. autobackup controlfile: to FRA

3. backup of archivelog: to external commvault storage

a. backup of archivelog: to external commvault storage
b. autobackup controlfile: to external commvault storage
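
For reference: the “Oracle Recommended Backup” part of these jobs corresponds to RMAN commands along the following lines – a sketch of the strategy, not the exact script Commvault generates; the COPY_DB tag is configured later in this article:

RUN {
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'COPY_DB' DATABASE;
  RECOVER COPY OF DATABASE WITH TAG 'COPY_DB';
}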

 

Creating a backup job in Commvault:

The following steps should be executed to implement this backup strategy:

First the Commvault console should be started in a browser. That should be done by opening the following URL:

http://[servername].[domainname]/console/

You then should provide your credentials in order to login.

step 1: configure storage policies

Go to Client Computer Groups, select Oracle and then select the server you want to configure, in this case puhora0004. Then select Oracle under puhora0004. Click with the right mouse button on Oracle (the branch under puhora0004) and select properties:

The next screen will be shown:

The DATA Storage Policy and Log Storage Policy should be entered. The system engineer responsible for the Commvault system already made a storage policy for Oracle named SP_NDC1_Oracle. So we entered this storage policy in these fields. See the screenprint above. Click OK.

Back in the last screen select the database under the puhora0004-Oracle branch. In this example it is the PRIMRP1P database.

Click with the right mouse button on this database and select properties. Click on the tab Storage Device. Then select “SP_NDC1_Oracle” in the field Storage Policy.

Click OK.

In the tab on the right select default:

Click with the right mouse button and select Properties. Select tab Storage Device. Select Storage Policy: SP_NDC1_Oracle:

 

Select tab Advanced  and if necessary sub tab Backup Arguments
Enter the field Backup Tag. In this example this is: COPY_DB :

 

Select sub tab Options and select Merge Incremental Image Copies:
(this one switches on Oracle Recommended Backup)

 

Choose tab Content. Select Selective Online Full:

The Commvault version we used contained a bug that causes the storage policy under Oracle to disappear if you configure the storage policy in one of the underlying databases. So after all storage policies have been applied, you should check that they are all still there.

 

Step 2: Create a schedule policy

In the left pane go to Policies and then choose Schedule Policies. With the right mouse button, select New Schedule Policy:

Provide your New Schedule Policy with a name.

Click on Agent type, select Oracle. And then click on the blue colored word Select:

Click on Add:
Select Full:

Choose tab Schedule Pattern: Fill this window for example with the following information:

Click OK.

You have now made a schedule for the full backup.
Choose your Start Time with care: your incremental backup should be finished before the full backup starts. If an incremental backup is still running at the scheduled start time of your full backup, the full backup will not start at all.

Next you make a schedule for the incremental backup.

Click Add, select Incremental:

Choose Schedule Pattern. Fill this window for example with the following information:

Click OK.
Next, click on tab Associations:

Under the Oracle branch, look up the correct servers and databases; in this example these are the puhora0004, puhora0005 and puthkd17 servers.
Under the requested database, select the option default.

Click OK.

We have now made two scheduled jobs (full and incremental) that create backups of 3 databases running on 3 different servers.

You could also decide to make a schedule which creates backups of all databases on 1 particular server.

How to view your backup history

You can view which backups have been made of your database with Commvault. To do that, select in the left pane Client Computer Groups – Oracle – [servername – in our example puhora0004] – Oracle – [database name – in our example PRIMRP1P]. Click on PRIMRP1P with the right mouse button and select View and then Backup History.

Click OK on the following screen called Backup History Filter. Then you will see an overview of the backups that were made on this database.

You can view the status of the job: Completed or Failed. You can see the type of backup: Full or Incremental. The Start Time, End Time and Duration can also be viewed on this screen.

It is also possible to view the rman log of a backup job. You can do this as follows:
Click with your right mouse button on the backup you want to see and choose View RMAN log:

Then you can read the log of the RMAN backup:

 

The post Creating RMAN backups using Commvault appeared first on AMIS Oracle and Java Blog.

Some of my Solutions for challenges with Oracle JET

Tue, 2018-02-27 08:46

This article is not some sophisticated treatise on Oracle JET fundamentals. It is merely a collection of challenges I had to deal with and found solutions for – solutions that work, even if they are perhaps not the best approach around. This article is first of all a personal notebook. If you can get anything useful from it, then by all means take it and enjoy it.

The code for the application referenced in this article can be found on GitHub: https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel.

How to define a global context that is accessible from all modules?

The challenge is a simple one: I want to be able to set a value in one module and have access to that value in other modules. For example: when I enter my username in one module

image

I want to make that value available in the very root of the application (index.html and ViewModel appController.js) as well in a second module, called dashboard (accessible through the Home tab):

image

Here I was helped by Geertjan’s article: https://blogs.oracle.com/geertjan/intermodular-communication-in-oracle-jet-part-1

The username field in the customers module is bound to the observable self.username. When the login button is clicked, a function loginButtonClick is invoked on the ViewModel. This function reads the observable's value, retrieves the root ViewModel (using the Knockout feature dataFor) and sets the username on the global variable – an observable userLogin defined on the root ViewModel. It also sets the observable userLoggedIn on the Root ViewModel – a flag that for example controls tabs in the navigation list.

    function CustomersViewModel() {
      var self = this;
      self.username = ko.observable("You");
      self.password = ko.observable();

      self.loginButtonClick = function (event) {       
        var rootViewModel = ko.dataFor(document.getElementById('globalBody'));
        rootViewModel.userLogin(self.username());
        rootViewModel.userLoggedIn("Y");
        return true;
      }

The definitions in the Root ViewModel (in appController.js):

 function ControllerViewModel() {
      var self = this;
      // Header
      // Application Name used in Branding Area
      self.appName = ko.observable("Soaring through the Clouds Webshop Portal");
      // User Info used in Global Navigation area
      self.userLogin = ko.observable("Not yet logged in");
      self.userLoggedIn = ko.observable("N");

How to make Tabs (Navigation List Items) conditionally displayed?

In this application, the tabs to be displayed depend on whether or not the user has logged in. My challenge in this case: how do I display the tabs (items in an oj-navigation-list component) based on a condition?

image

The items are rendered by a template specified on the item.renderer attribute of the oj-navigation-list.

It turns out that by data-binding the visible attribute on the outermost HTML element in the template, I can have the tabs rendered based on a logical expression referencing the global (root) variable that indicates whether or not the user is logged in:

 <!-- Template for rendering navigation items -->
    <script type="text/html" id="navTemplate">      
      <li data-bind="visible: (!$data['loggedInOnly']|| $root.userLoggedIn() =='Y')"><a href="#">
          <span data-bind="css: $data['iconClass']"></span>
          <!-- ko text: $data['name'] --> <!--/ko-->
        </a></li> 
    </script>

The definition of the navigation list items is in appController.js; the array navData contains the items that can be turned into tabs. Each item defines the name (the label displayed to the user) as well as the associated module and the iconClass. I have added an optional property loggedInOnly that indicates whether a tab should be displayed only when the user is logged in; this property is used in the expression in the template shown above.

      // Navigation setup
      var navData = [
        {
          name: 'Home', id: 'dashboard', loggedInOnly: false,
          iconClass: 'oj-navigationlist-item-icon demo-icon-font-24 demo-chart-icon-24'
        },
        {
          name: 'Browse Catalog', id: 'products',
          iconClass: 'oj-navigationlist-item-icon demo-icon-font-24 demo-fire-icon-24'
        },
        {
          name: 'Browse Orders', id: 'orders', loggedInOnly: true,
          iconClass: 'oj-navigationlist-item-icon demo-icon-font-24 demo-people-icon-24'
        },
        {
          name: 'Your Profile', id: 'customers', loggedInOnly: true,
          iconClass: 'oj-navigationlist-item-icon demo-icon-font-24 demo-info-icon-24'
        }
      ];
      self.navDataSource = new oj.ArrayTableDataSource(navData, { idAttribute: 'id' });

How to dynamically set the label of oj-option elements

The drop down menu contains an item that changes its label, depending on whether the user is logged in.

image

I wanted to have two items and control their visibility – for some reason I got sidetracked and solved it a little differently.

I have defined a <span> element inside the oj-option and defined the text attribute through a data binding. This binding subsequently uses a ternary expression to determine which label to display:

              <oj-menu id="menu1" slot="menu" style="display:none" on-oj-action="[[menuItemAction]]">
                <oj-option id="help" value="help">Help</oj-option>
                <oj-option id="about" value="about">About</oj-option>
                <oj-option id="sign" value="sign">
                  <span data-bind="text: (userLoggedIn() =='Y'?'Sign Out':'Sign In/Sign Up')"></span>
                </oj-option>
              </oj-menu>

How to react to the User Clicking on a Menu Option in an oj-menu component

Not surprisingly, when the user clicks on a menu item in the drop down menu shown above, the application should respond somehow. I was wondering how to trigger my code for the click-a-menu-item event. Then I found out about the on-oj-action attribute on oj-menu.

<oj-menu id="menu1" slot="menu" style="display:none" on-oj-action="[[menuItemAction]]">
   <oj-option id="help" value="help">Help</oj-option>
   ...

It refers to a function in the ViewModel that can take an event and, from that event's path[0] element, get the id of the selected oj-option item. It can then do whatever needs to be done.

 self.menuItemAction = function (event) {
        var selectedMenuOption = event.path[0].id
        console.log(selectedMenuOption);
        if (selectedMenuOption == "sign") {
           ....

How to programmatically navigate to a module – by activating the Router

When the Sign In/Up option is selected in the menu above, I want the application to navigate to the customers module, where the user can login:

image

This navigation is to be done programmatically – in the function handling the click on menu item event. The programmatic manipulation of the router turns out to be extremely simple in the function:

 self.menuItemAction = function (event) {
        var selectedMenuOption = event.path[0].id
        if (selectedMenuOption == "sign") {
          if (self.userLoggedIn() == "N") {
            // navigate to the module that allows us to sign in
            oj.Router.rootInstance.go('customers');
          } else {
            // sign off
            self.userLogin("Not yet logged in");
            self.userLoggedIn("N");
            oj.Router.rootInstance.go('dashboard');
          }
        }

This is all it takes to present the customers.html center page. This of course depends on the module binding in index.html:

<div role="main" class="oj-web-applayout-max-width oj-web-applayout-content" data-bind="ojModule: router.moduleConfig">
</div>

and the module definitions in the ViewModel appController.js

     // Router setup
      self.router = oj.Router.rootInstance;
      self.router.configure({
        'dashboard': { label: 'Dashboard', isDefault: true },
        'products': { label: 'Products' },
        'orders': { label: 'Orders' },
        'customers': { label: 'Customers' }
      });
      oj.Router.defaults['urlAdapter'] = new oj.Router.urlParamAdapter();

The post Some of my Solutions for challenges with Oracle JET appeared first on AMIS Oracle and Java Blog.

Oracle JET Web Applications – Automating Build, Package and Deploy (to Application Container Cloud) using a Docker Container

Mon, 2018-02-26 07:06

The essential message of this article is the automation for Oracle JET application of the flow from source code commit to a running application on Oracle Application Container Cloud, as shown in this picture:

image

I will describe the inside of the “black box” (actually light blue in this picture) where the build, package and deploy are done for an Oracle JET application.

The outline of the approach: a Docker Container is started in response to the code commit. This container contains all tooling that is required to perform the necessary actions, including the scripts to actually run those actions. When the application has been deployed (or the resulting package is stored in an artifact repository) the container can be stopped. This approach is very clean – intermediate products that are created during the build process simply vanish along with the container. A fresh container is started for the next iteration.

Note: the end to end build and deploy flow takes about 2 to 3 minutes on my environment. That obviously would be horrible for a simple developer round trip, but is actually quite acceptable for this type of ‘formal’ release to the shared cloud environment. This approach and this article are heavily inspired by this article (Deploy your apps to Oracle Cloud using PaaS Service Manager CLI on Docker) on Medium by Abhishek Gupta (who writes many very valuable articles, primarily around microservices and Oracle PaaS services such as Application Container Cloud).

Note: this article focuses on final deployment of the JET application to Application Container Cloud. It would however be quite simple to modify (in fact, to simplify) the build container to not deploy the final ZIP file to Application Container Cloud, but instead push the file to an artifact repository or deploy to some other type of runtime platform. It would not be very hard to take the ZIP file and create a fresh Docker container with that file that can be deployed on a Kubernetes cluster or any Docker runtime such as Oracle Container Cloud.

The sources – including a sample JET Application – are in this GitHub repo: https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel .

The steps I describe in this article are:

  • preparation of the Docker Container that will do the build-package-deploy actions
  • preparation of the Oracle JET application – to be turned from a locally run, developer only client side web application into a stand-alone runnable enterprise web app with server side platform (Node with Express)
  • creation of the build script that will run inside the container and orchestrate the actions by the available tools to take the source all the way to the cloud
  • putting it all together

 

1. Preparation of the Docker Container that will do the build-package-deploy actions

The first step is the composition of the Docker Container. For this step, I have made good use of Abhishek’s article and the dockerfile he proposes in that article. I complemented Abhishek’s Dockerfiles with the tooling required for building Oracle JET applications.

A visual presentation of what the Docker Container will contain – and the steps made to put it together – is shown below:

image

Note: it is fun to bake Docker Container Images completely through a Docker file – and it is certainly convenient to share the instructions for creating a Docker Container image in the form of a Docker file. However, when the steps are too complex to automate through a Docker file, there is a simple alternative: build as much of the container as you can through a Docker file. Then run the container and complete it through manual steps. Finally, when the container does what you need it to do, you can commit the state of the container as your reusable container image. And perhaps at that point, you can try to extend the Docker file with some of the manual steps, if you feel that maintaining the image will be a frequently recurring task.

The Docker build file that I finally put together is included below. The key steps:

  • the container is based on the “python:3.6.2-alpine3.6” image; this is done mainly because the PSM (Oracle PaaS Service Manager command line tool requires a Python runtime environment)
  • the apk package manager for Alpine Linux is used several times to add required packages to the image; it adds curl, zip, nodejs, nodejs-npm, bash, git and openssh
  • download and install the Oracle PSM command line tool (a Python application)
  • set up PSM for the target identity domain and user
  • install the Oracle JET Command Line tool that will be used for building the JET web application
  • copy the script build-app.sh that will be executed to run the end-to-end build-package-deploy flow

 

# extended from https://medium.com/oracledevs/quick-start-docker-ized-paas-service-manager-cli-f54eaf4ebcc7
# added npm, ojet-cli and git

FROM python:3.6.2-alpine3.6

ARG USERNAME
ARG PASSWORD
ARG IDENTITY_DOMAIN
ARG PSM_USERNAME
ARG PSM_PASSWORD
ARG PSM_REGION
ARG PSM_OUTPUT


WORKDIR "/oracle-cloud-psm-cli/"

RUN apk add --update curl && \
    rm -rf /var/cache/apk/*

RUN curl -X GET -u $USERNAME:$PASSWORD -H X-ID-TENANT-NAME:$IDENTITY_DOMAIN https://psm.us.oraclecloud.com/paas/core/api/v1.1/cli/$IDENTITY_DOMAIN/client -o psmcli.zip && \
	pip3 install -U psmcli.zip 

COPY psm-setup-payload.json .
RUN psm setup -c psm-setup-payload.json

RUN apk add --update nodejs nodejs-npm
RUN apk add --update zip

RUN npm install -g @oracle/ojet-cli

RUN apk update && apk upgrade &&  apk add --no-cache bash git openssh

COPY build-app.sh .

CMD ["/bin/sh"]

Use this command to build the container:

docker build --build-arg USERNAME="your ACC cloud username" --build-arg PASSWORD="the ACC cloud password" --build-arg IDENTITY_DOMAIN="your identity domain" --build-arg PSM_REGION="us" --build-arg PSM_OUTPUT="json" -t psm-cli .

assuming that this command is run in the directory where the docker file is located.

This will create a container and tag it as image psm-cli. When this command completes, you can find the container image by running "docker images". Subsequently, you can run a container based on the image: "docker run --rm -it psm-cli"

     

    2. Preparation of the Oracle JET application

    When developing a JET (4.x) application, we typically use the Oracle JET CLI – the command line tool that helps us to quickstart a new application, create composite components and serve the application locally to a browser as we are developing it, with instant updates on any file change. The JET CLI is also used to build the application for release. The result of this step is the complete set of files needed to run the JET application in the browser. In order to actually offer the JET application to end users, it has to be served from a ‘web serving’ platform component – such as nginx or a backend in Python, Java or Node. Frequently, the JET application will also require some server side facilities, which the backend that serves the static JET application resources can provide as well. For that reason, I select a JET serving backend that I can easily leverage for these server side facilities; for me, this is currently Node.

    In order to create a self running JET application for the JET application built in the pipeline discussed in this article, I have added a simple Node & Express backend.

    I have used npm to create a new Node application (npm init jet-on-node). I have next created the directory bin and the file www. This file is the main entry point into the Node application that serves the JET application; it delegates most work to the module app that is loaded from file app.js in the root of this Node application, path /jet-on-node.

     

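    The screenshot of the file is lost in this feed; below is a minimal sketch of what bin/www does – an assumed reconstruction, not the exact file:

    // bin/www - entry point: load the app module and serve it over HTTP
    var app = require('../app');
    var http = require('http');

    // ACCS injects the PORT environment variable; fall back to 3000 for local runs
    var port = process.env.PORT || 3000;
    http.createServer(app).listen(port);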

    All static resources that the browser can access (including the JET application) go into the folder /jet-on-node/public. Module app defines – through Express – that requests for public resources (requests not handled by one of the URL path handlers) are taken care of by serving resources from the directory /public. Module app can handle other HTTP requests – for example from the JET application – and it could also implement the backend for Server Sent Events or WebSockets. Currently it handles the REST GET request to path “/about” that returns some key data for the application:

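    A minimal sketch of what this could look like in app.js – assuming Express 4.x; the exact fields returned by /about are illustrative:

    var express = require('express');
    var app = express();

    // serve all static resources - including the built JET application - from /public
    app.use(express.static(__dirname + '/public'));

    // REST endpoint that returns some key data for the application
    app.get('/about', function (req, res) {
        res.json({
            name: 'jet-on-node',
            version: process.env.APP_VERSION || '0.0.0', // APP_VERSION is a hypothetical environment variable
            node: process.version
        });
    });

    module.exports = app;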

    The dependencies for the jet-on-node application are defined in package.json; during the build process of the final application, we will use “npm install” to add the required server side node modules.

    At this point, we have extended our code base with a simple landing platform for the JET application that can serve the application at runtime. All that remains is to take all content under the /web directory and copy it to the jet-on-node/public folder. Then we can run the application using “npm start” in directory jet-on-node. This will execute the start script in file package.json – which is defined as “node ./bin/www”.

     

    3. Creation of the build script that will run inside the container and orchestrate the actions

    The JET build container is available. The JET application is available from a Git repository (in my example in GitHub). A number of steps are now required to go to a running application on Application Container Cloud. The first steps are shown below:

     

    image

    1. Clone the Git repo that contains the JET application (or pull the latest sources or a specific tag)

    2. Install all modules required by the JET application – by running npm install

    3. Use the Oracle JET command line utility to build the application for release: ojet build --release

    After this step, all run time artifacts – including the JET libraries – are in the /web directory. These next steps turn these artifacts into a running application:

    4. Copy the contents of /web to /jet-on-node/public

    5. Install the modules required for the server side Node application by running npm install in directory jet-on-node

    6. Create a single zip file for all artifacts in the /jet-on-node directory – that includes both the JET application and its server side backend Node application. This zip file is the release artifact for the JET application. As such, it can be pushed to an artifact repository or deployed to some other platform.

    7. Engage the psm command line interface (Oracle PaaS Service Manager CLI) to deploy the zip file to the Application Container Cloud, for which psm was already configured during the creation of the build container.

    Note: the files manifest.json and deployment.json in the root of jet-on-node provide instructions to PSM and Application Container Cloud regarding the run time settings for this application – including the runtime version of Node, the command for starting the application, the runtime memory per instance and the number of instances as well as the values of environment variables to be passed to the application.

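    For illustration – and assuming the standard ACC conventions – these two files could look roughly like this (all values are just examples):

    manifest.json:

    {
        "runtime": { "majorVersion": "8" },
        "command": "node ./bin/www",
        "release": { "version": "1.2.1" }
    }

    deployment.json:

    {
        "memory": "1G",
        "instances": "1",
        "environment": {
            "APP_VERSION": "v1.2.1"
        }
    }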

    The shell-script build-app.sh (you may have to explicitly make this script executable, using “chmod u+x build-app.sh”) performs the steps described above (although perhaps not in the optimal way – feel free to fine tune and improve and let me know about it).

    #git clone https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel
    # cd webshop-portal-soaring-through-the-cloud-native-sequel
    
    git pull
    wait
    
    npm install
    wait
    ojet build --release
    wait
    cp -a ./web/. ./jet-on-node/public
    wait
    cd jet-on-node
    wait
    npm install
    wait
    zip -r webshop.zip .
    wait
    cd /oracle-cloud-psm-cli/webshop-portal-soaring-through-the-cloud-native-sequel/jet-on-node
    
    psm accs push -n SoaringWebshopPortal -r node -s hourly -d deployment.json -p webshop.zip
    

    The end-to-end flow through the build container during the release of the latest version of the JET application is: pull the sources, npm install for the JET application, ojet build --release, copy the web artifacts into jet-on-node/public, npm install for the backend, zip everything and psm accs push to Application Container Cloud.

     

    4. Putting it all together

    I will now try to demonstrate how this all works together. In order to do so, I will go through these steps – and illustrate them with screenshots:

    • make a change in the JET application
    • commit and push the change (to GitHub)
    • run the Docker build container psm-cli
    • run the script build-app.sh
    • wait for about three minutes (check the output in the build container and the application status in the ACC console)
    • access the updated Web Application

    The starting point is the application as currently deployed (version v1.2.0).

    1. Make a change

    The word Shopping Basket – next to the icon – seems superfluous, so I will remove it. And I will increase the version number from v1.2.0 to v1.2.1.


     

      2. commit and push the change (to GitHub)


      The change is accepted in GitHub.


       

      3. Run the Docker build container psm-cli

      Run the Docker Quickstart Terminal (I am on Windows) and perform: "docker run --rm -it psm-cli"


       

      At this point, I lack a little bit of automation. The manual step I need to take (just the first time round) is to clone the JET application’s Git repository:

      git clone https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel

      and to move to the created directory

      cd webshop-portal-soaring-through-the-cloud-native-sequel/

      and to make the file build-app.sh executable:

      chmod u+x build-app.sh


      Note: As long as the container keeps running, I only have to run “git pull” and “./build-app.sh” for each next update to the JET application. The next step would be to configure a web hook that is triggered by the relevant commit in the GitHub repository.

       

      4. run the script build-app.sh

      ./build-app.sh


      wait for about three minutes (check the output in the build container and the application status in the ACC console)

       

      5. access the updated Web Application


      As you can see, after committing and pushing the change, I only had to run a single command to get the application fully rebuilt and redeployed. After stopping the Docker container, no traces remain of the build process. And I can easily share the container image with my team members to build the same application or update it to also build other or additional JET applications.

       

      Resources

      The inspirational article by Abhishek Gupta: https://medium.com/oracledevs/quick-start-docker-ized-paas-service-manager-cli-f54eaf4ebcc7

      The sources – including a sample JET Application – are in this GitHub repo: https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel .

      Oracle JET Command Line Interface: https://github.com/oracle/ojet-cli

      Docs on the Oracle PSM (PaaS Service Manager) CLI: https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/abouit-paas-service-manager-command-line-interface.html

      Node & Express Tutorial Part 2: Creating a skeleton website: https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/skeleton_website

      Serving Public Files with Express – https://expressjs.com/en/starter/static-files.html

      Documentation for Oracle Application Container Cloud: https://docs.oracle.com/en/cloud/paas/app-container-cloud/dvcjv/getting-started-oracle-application-container-cloud-service.html

       

        The post Oracle JET Web Applications – Automating Build, Package and Deploy (to Application Container Cloud) using a Docker Container appeared first on AMIS Oracle and Java Blog.

        Pure Client Side Event Exchange between ADF Taskflows and Rich Client Web Applications such as Oracle JET, Angular and React

        Fri, 2018-02-23 03:41

        For one of our current projects I have done some explorations into the combination of ADF (and WebCenter Portal in our specific case) with JET. Our customer has existing investments in WC Portal and many ADF Taskflows and is now switching to JET as a WebApp implementation technology – for reasons of better user experience and especially better availability of developers. I believe that this is a situation that many organizations are in or are contemplating (including those who want to extend Oracle EBusiness Suite or Fusion Apps). This is not the ideal green field technology mix of course. However, if either WebCenter Portal (heavily steeped in ADF) or an existing enterprise ADF application are the starting point for new UI requirements, you are bound to end up with a combination of ADF and the latest and greatest technology used for building those requirements.

        We have to ensure that the rich client based ‘Portlets’ are nicely embedded in the ADF host environment. We also have to take care that events triggered by user actions in the ADF UI areas are communicated to the embedded rich client based UI areas in the page and lead to appropriate actions over there – and the same for actions in the embedded UI areas and events flowing in the other direction.

        In two previous articles (Publish Events from any Web Application in IFRAME to ADF Applications and Communicate events in ADF based UI areas to embedded Rich Client Applications such as Oracle JET, Angular and React), I have described how events in the embedded area are communicated to the ADF side of the fence and lead to UI synchronization and similarly how events in the traditional ADF based UIs are communicated to the embedded areas and trigger the appropriate synchronization. The implementation described in these articles is based on pure, native ADF mechanisms such as server listener, contextual event and partial page refresh in combination with the standard HTML5 mechanism for publishing events on embedded IFRAME windows. The route described using these out of the box mechanisms is robust, proven and very decoupled. It allows run time configuration in WebCenter Portal (wiring of taskflows leveraging the contextual event). This route is also somewhat heavyhanded; it is not very fast – depending on network latency to the backend – and it puts additional load on the application server.

        There is a fast, light-weight alternative to the use of contextual (server side) events for communication between areas in an ADF based web page. One that can help with interaction between ADF based areas and non-ADF areas (JET, React, Angular) – but also with interactions between two or more pure ADF areas. An alternative that I believe should be part of the native ADF framework – but is not. This alternative is: the client side event bus.

        The client side event bus is a very simple, pure JavaScript client side component – that I have introduced in an earlier article. In essence, it works as follows.

        The client side event bus is loaded in the outermost page and will be available throughout the lifetime of the application. It has a registry of event subscriptions that each consist of the name of the event type and a function reference to the function that should be called to handle the event. Each UI area produced from an ADF Taskflow can contain JavaScript snippets that create event handlers (JavaScript functions) and subscribe those with the event bus for a specific event type. Finally, each UI area can publish an event to the event bus whenever something happens that is worth publishing. Of course this is somewhat loosely stated – we should document with some rigor the client side events that each UI area will publish – and will consume – just like the contextual (server side) events with taskflows. It is my recommendation that for the ADF application as a whole, an event registry is maintained that describes all events that can be published – client side or contextual server side – along with the payload for each event.

        Let’s make use of this client side event bus for the following use case:

        The ADF application embeds a client side web application in an IFRAME in an ADF Taskflow – ADF-JET-Container-taskflow. The application contains a second taskflow – ADF-X-taskflow – that is pure ADF, no embedding whatsoever. The challenge: an event taking place in the client side UI area produced from ADF-X-taskflow should have an effect in the client side web application – plain HTML5 or Oracle JET – in the IFRAME in the UI area produced from the other ADF Taskflow, and we want this effect to be produced as quickly and smoothly as possible and given the nature of the event and the effect there is no need for server side involvement. In this case, using contextual events is almost wasteful – it is not simple to implement, it is not efficient or fast to execute and it does not buy us anything in terms of additional security, scalability or functionality. So let’s use this client side event bus.

        The steps to implement – on top of the ADF application with the index.jsf page, the two ADF Taskflows with their respective views and the embedded IFRAME plus web application – are as follows:

        (note: all code can be found on GitHub: https://github.com/lucasjellema/WebAppIframe2ADFSynchronize/releases/tag/v3.0)

         

        1. Create JavaScript library adf-client-event-bus.js with the functionality to record subscriptions and forward published events to the event handlers for the specific event types

        var subscriptions = {};
        function publishEvent( eventType, payload) {
           console.log('Event published of type '+eventType);
           console.log('Event payload'+JSON.stringify(payload));
            // find all subscriptions for this event type
           if (subscriptions[eventType]) { 
            // loop over subscriptions and invoke callback function for each subscription
            for (var i = 0; i < subscriptions[eventType].length; i++) {
               var callback = subscriptions[eventType][i];
               try {
                 callback(payload);
               }
               catch (err) {
                   console.log("Error in calling callback function to handle event. Error: "+err.message);
               }
            }//for 
           }//if     
            
        }// publishEvent
        // register an interest in an eventType by providing a callback function that takes a payload parameter
        function subscribeToEvent( eventType, callback) {
           if (!subscriptions[eventType]) { subscriptions[eventType]= [ ]};
           subscriptions[eventType].push(callback);
           console.log('added subscription for eventtype '+eventType);
        }//subscribeToEvent
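
        For example, a taskflow could use the bus like this (event type and payload are illustrative):

        // subscribe: register a handler for countrySelectionEvent
        subscribeToEvent("countrySelectionEvent", function (payload) {
            console.log("country is now " + payload.selectedCountry);
        });
        // publish: notify all subscribers of a country selection
        publishEvent("countrySelectionEvent", { "selectedCountry" : "France", "sourceTaskFlow" : "demo" });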
        

        2. Add adf-client-event-bus.js to the main index.jsf page.

        <af:resource type="javascript" source="/resources/js/adf-client-event-bus.js"/>
        

        3. Add client listener to the input component on which the event of interest takes place. In this case: a selectOneChoice from which the user selects a country in view.jsff in taskflow ADF-X-taskflow;

        <af:selectOneChoice label="Choose a country" id="soc1" autoSubmit="false" valueChangeListener="#{pageFlowScope.detailsBean.countryChangeHandler}">
            <af:selectItem label="The Netherlands" value="nl" id="si1"/>
            <af:selectItem label="Germany" value="de" id="si2"/>
            <af:selectItem label="United Kingdom of Great Brittain and Northern Ireland" value="uk" id="si3"/>
            <af:selectItem label="United States of America" value="us" id="si4"/>
            <af:selectItem label="Spain" value="es" id="si5"/>
            <af:selectItem label="Norway" value="no" id="si6"/>
            <af:clientListener method="countrySelectionListener" type="valueChange"/>
        </af:selectOneChoice>
        

        The client listener is configured to invoke the client side JavaScript function countrySelectionListener.

        4. Add function countrySelectionListener  to the adf-x-taskflow-client.js JavaScript library that is associated with the page(s) in the ADF X taskflow; this function publishes the client side event countrySelectionEvent

        function countrySelectionListener(event) {
            var selectOneChoice = event.getSource();
            var newValue = selectOneChoice.getSubmittedValue();
            var selectItems= selectOneChoice.getSelectItems();
            var selectedItem = selectItems[newValue];
            publishEvent("countrySelectionEvent", 
            {
                "selectedCountry" : selectedItem._label, "sourceTaskFlow" : "ADF-X-taskflow"
            });
        }
        

        5. Add function handleCountrySelection to the adf-jet-client-app.js JavaScript library that is associated with the JETView.jsff container page in the ADF-JET-Container-taskflow; this function will handle the client event countrySelectionEvent by posting an event message to the IFRAME that contains the client side web application. Also add the call to subscribe this function with the client event bus for events of this type:

        subscribeToEvent("countrySelectionEvent", handleCountrySelection);
        function handleCountrySelection(payload) {
            var country= payload.selectedCountry;
            var message = {
                'eventType' : 'countryChanged', 'payload' : country
            };
            postMessageToJETIframe(message);
        }
        //handleCountrySelection
        

        6. Add JavaScript code in view.xhtml in the client side web app to process an incoming message event of type countryChanged. This event will trigger an update in the UI.

        <html xmlns="http://www.w3.org/1999/xhtml">
            <head>
                <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
                <title>Client Side Web App</title>
                <script>
                    function init() {
                        // attach listener to receive message from parent; this is not required for sending messages to the parent window
                        window.addEventListener("message", function (event) {
                            console.log("Iframe receives message from parent" + event.data);
                            if (event.data && event.data.eventType == 'countryChanged' && event.data.payload) {
                                var countrySpan = document.getElementById('currentCountry');
                                countrySpan.innerHTML = "Fresh Country: " + event.data.payload;
                            }
                        },
                        false);
                    }
                    //init
                    document.addEventListener("DOMContentLoaded", function (event) {
                        init();
                    });
                </script>
            </head>
            <body>
                <h2>Client Web App</h2>
                Country is
                <span id="currentCountry"></span>
            </body>
        </html>
        


         

        When the user selects a country using the dropdown list in area ADF X, the selected country name is displayed almost instantaneously in the IFRAME area based on the rich client web application.

        Client Side Event Flow from Embedded Web Application (in IFRAME) to ADF powered Area

        Our story would not be complete if we did not also discuss the flow from the embedded UI area to the ADF based UI. It is very similar of course to what we described above. The event originates in the web application and is communicated from within the IFRAME to the parent window and handled by a JavaScript handler loaded for the ADF JET Container Taskflow. This handler publishes a client event with the client side event bus. In this case, a subscription for this event was created from the adf-x-taskflow-client.js library, subscribing a handler function handleDeepMessageSelection that updates the client side message component.

        The detailed steps and code snippets:

        1. Add code in view.xhtml to publish a message to the parent window with the message entered by a user in the text field

        <html xmlns="http://www.w3.org/1999/xhtml">
            <head>
                <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
                <title>Client Side Web App</title>
                <!-- <script src="client-web-app-lib.js"></script> -->
                <script>
                    function callParent() {
                        console.log('send message from Web App to parent window');
                        var jetinputfield = document.getElementById('jetinputfield');
                        var inputvalue = jetinputfield.value;
                        var message = {
                            "message" : {
                                "value" : inputvalue
                               ,"eventType" : "deepMessage"
                            },
                            "mydata" : {
                                "param1" : 42, "param2" : "train"
                            }
                        };
                        // here we can restrict which parent page can receive our message
                        // by specifying the origin that this page should have
                        var targetOrigin = '*';
                        parent.postMessage(message, targetOrigin);
                    }
                    //callParent
                </script>
            </head>
            <body>
                <h2>Client Web App</h2>
                <input id="jetinputfield" type="text" value="Default"/>
                <a href="#" onclick="callParent()">Send Message</a>
            </body>
        </html>
        

        2. Attach message event listener in the JavaScript library adf-jet-client-app.js for the message events (for example from the embedded IFRAME); in this handler, publish the event as a deepMessageEvent on the client side event bus

        function init() {
            window.addEventListener("message", function (event) {
                console.log("Parent receives message from iframe " + JSON.stringify(event.data));
                var data = event.data;
                var message = data["message"];
        
                if (data && message) {
                    if (message['eventType'] == 'deepMessage') {
                        console.log("ADF JET Container Taskflow received deep message event from web App");
                        var messageValue = message.value;
                        publishEvent("deepMessageEvent", 
                        {
                            "message" : messageValue
                           ,"sourceTaskFlow" : "ADF-JET-container-taskflow"
                           ,"eventOrigin" : "JET:jet-embedded"
                        });
                    }
                }
            },
            false);
        }
        
        document.addEventListener("DOMContentLoaded", function (event) {
            init();
        });
        

        3. From the adf-x-taskflow-client.js library, subscribe a function as event handler for the deepMessageEvent with the client side event bus

        subscribeToEvent("deepMessageEvent", handleDeepMessageSelection);
        
        function handleDeepMessageSelection(payload) {
            console.log("DeepMessageEvent consumed in ADF X Taskflow" + JSON.stringify(payload));
            var message = payload.message;
            // find inputText component using its fake styleClass: messageInputHandle
            //         <af:inputText label="Message" id="it1" columns="120" rows="1" styleClass="messageInputHandle"/>
            var msgInputFieldId = document.getElementsByClassName("messageInputHandle")[0].id;
            var msgInputText = AdfPage.PAGE.findComponentByAbsoluteId(msgInputFieldId);
            msgInputText.setValue(message);    
        }
        

        This function extracts the value of the message and sets an inputText component with that value – on the client.

         

        This completes the end to end flow: from an event in the embedded web application, via postMessage to the parent window, through the client side event bus, to the handler in the ADF X taskflow that updates the inputText component.

         

        Client Side Interaction with JET application

        The interaction as described above with a plain HTML5 web application embedded in an ADF application is not any different when the embedded application is an Oracle JET application. As an example, consider a JET application embedded in the JET Client area. It consumes two client side events from the ADF parent environment: countrySelection and colorSelection. It publishes an event itself: browserSelectionEvent. All interaction around these events with the client side event bus is taken care of by the ADF JET Container Taskflow. All interaction between the JET application and the ADF JET Container Taskflow is handled through the postMessage mechanism on the IFRAME’s content window and its parent window.


        The salient code snippets in the JET application are:

        The ViewModel:

        
        define(
            ['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'ojs/ojinputtext', 'ojs/ojselectcombobox'
            ],
            function (oj, ko, $) {
                'use strict';
                function WorkareaViewModel() {
                    var self = this;
                    // initialize two country observables
                    self.country = ko.observable("Italy");
                    self.color = ko.observable("Greenish");
                    self.browser = ko.observable("Chrome");
        
                    self.callParent = function (message) {
                        console.log('send message from Web App to parent window');
                        // here we can restrict which parent page can receive our message
                        // by specifying the origin that this page should have
                        var targetOrigin = '*';
                        parent.postMessage(message, targetOrigin);
        
                    }
        
                    self.browserChangedListener = function (event) {
                        var newBrowser = event.detail.value;
                        var oldBrowser = event.detail.previousValue;
        
                        console.log("browser  changed to:" + newBrowser);
                        var message = {
                            "message": {
                                "eventType": "browserChanged",
                                "value": newBrowser
                            }
                        };
                        self.callParent(message);
        
                    }
        
            self.init = function () {
                // attach listener to receive message from parent; this is not required for sending messages to the parent window
                window.addEventListener("message", function (event) {
                    console.log("Iframe receives message from parent" + event.data);
                    if (event.data && event.data.eventType == 'countryChanged' && event.data.payload) {
                        self.country(event.data.payload);
                    }
                    if (event.data && event.data.eventType == 'colorChanged' && event.data.payload) {
                        self.color(event.data.payload);
                    }
                }, false);
            } //init
        
                    $(document).ready(function () { self.init(); })
                }
        
                return new WorkareaViewModel();
            }
        );
        

        The View:

        
        <h2>Workarea</h2>
        
        <div>
            <oj-label for="country-input">Country</oj-label>
            <oj-input-text id="country-input" value="{{country}}"></oj-input-text>
            <h4 data-bind="text: country"></h4>
            <oj-label for="color-input">Color</oj-label>
            <oj-input-text id="color-input" value="{{color}}"></oj-input-text>
            <h4 data-bind="text: color"></h4>
            <oj-label for="combobox">Browser Type Selection</oj-label>
            <oj-combobox-one id="combobox" value="{{browser}}" on-value-changed="{{browserChangedListener}}" style="max-width:20em">
                <oj-option value="Internet Explorer">Internet Explorer</oj-option>
                <oj-option value="Firefox">Firefox</oj-option>
                <oj-option value="Chrome">Chrome</oj-option>
                <oj-option value="Opera">Opera</oj-option>
                <oj-option value="Safari">Safari</oj-option>
            </oj-combobox-one>
        </div>
        
        

         

        The corresponding code in the ADF JET client app library:

        var jetIframeClientId = "";
        
        function init() {
            window.addEventListener("message", function (event) {
                console.log("Parent receives message from iframe " + JSON.stringify(event.data));
                var data = event.data;
                var message = data["message"];
        
                if (data && message) {
                    if (message['eventType'] == 'browserChanged') {
                        console.log("ADF JET Container Taskflow received browser changed event from JET App");
                        var browser = message.value;
                        publishEvent("browserSelectionEvent", 
                        {
                            "selectedBrowser" : browser
                           ,"sourceTaskFlow" : "ADF-JET-container-taskflow"
                           ,"eventOrigin" : "JET:jet-embedded"
                        });
                    }
                }
            },
            false);
        }
        
        document.addEventListener("DOMContentLoaded", function (event) {
            init();
        });
        
        function findIframeWithIdEndingWith(idEndString) {
            var iframe;
            var iframeHtmlCollectionArray = document.getElementsByTagName("iframe");
            //http://clubmate.fi/the-intuitive-and-powerful-foreach-loop-in-javascript/#Looping_HTMLCollection_or_a_nodeList_with_forEach
            [].forEach.call(iframeHtmlCollectionArray, function (el, i) {
                if (el.id.endsWith(idEndString)) {
                    iframe = el;
                }
            });
            return iframe;
        }
        
        function processCountryChangedEvent(newCountry) {
            console.log("Client Side handling of Country Changed event; now transfer to IFRAME");
        
            var message = {
                'eventType' : 'countryChanged', 'payload' : newCountry
            };
            postMessageToJETIframe(message);
        }
        
        function postMessageToJETIframe(message) {
            var iframe = findIframeWithIdEndingWith('jetIframe::f');
            var targetOrigin = '*';
            iframe.contentWindow.postMessage(message, targetOrigin);
        }
        
        subscribeToEvent("colorSelectionEvent", handleColorSelection);
        
        function handleColorSelection(payload) {
            console.log("ColorSelectionEvent consumed " + JSON.stringify(payload));
            var color = payload.selectedColor;
            console.log("selected color " + color);
            var message = {
                'eventType' : 'colorChanged', 'payload' : color
            };
            postMessageToJETIframe(message);
        }
        //handleColorSelection
        
        
        subscribeToEvent("countrySelectionEvent", handleCountrySelection);
        function handleCountrySelection(payload) {
            var country= payload.selectedCountry;
            var message = {
                'eventType' : 'countryChanged', 'payload' : country
            };
            postMessageToJETIframe(message);
        }
        //handleCountrySelection
        
        
        Resources

        Sources for this article: https://github.com/lucasjellema/WebAppIframe2ADFSynchronize. (note: this repository also contains the code for the flows from and to the JET IFRAME to and from the ADF Taskflow X via the server side – the traditional ADF approach)

        Blog Client Side Event Bus in Rich ADF Web Applications – for easier, faster decoupled interaction across regions : https://technology.amis.nl/2017/01/11/client-side-event-bus-in-rich-adf-web-applications-for-easier-faster-decoupled-interaction-across-regions/

        Docs on postMessage: https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage

        The post Pure Client Side Event Exchange between ADF Taskflows and Rich Client Web Applications such as Oracle JET, Angular and React appeared first on AMIS Oracle and Java Blog.

        Java: How to fix Spring @Autowired annotation not working issues

        Thu, 2018-02-22 12:48

        Spring is a powerful framework, but it requires some skill to use efficiently. When I started working with Spring a while ago (actually Spring Boot to develop microservices) I encountered some challenges related to dependency injection and using the @Autowired annotation. In this blog I’ll explain the issues and possible solutions. Do note that since I do not have a long history with Spring, the provided solutions might not be the best ones.

        Introduction @Autowired

        In Spring 2.5 (2007), a new feature became available, namely the @Autowired annotation. What this annotation basically does is provide an instance of a class when you request it in for example an instance variable of another class. You can do things like:

        
        @Autowired
        MyClass myClass;
        
        

        This causes myClass to automagically be assigned an instance of MyClass if certain requirements are met.

        How does it know which classes can provide instances? The Spring Framework does this by performing a scan of components when the application starts. In Spring Boot the @SpringBootApplication provides this functionality. You can use the @ComponentScan annotation to tweak this behavior if you need to. Read more here.

        The classes of which instances are acquired also have to be known to the Spring framework (to be picked up by the ComponentScan), so they require some Spring annotation such as @Component, @Repository, @Service, @Controller or @Configuration. Spring manages the life-cycle of instances of those classes. They are known in the Spring context and can be used for injection.

        Order of execution

        When a constructor of a class is called, the @Autowired instance variables do not contain their values yet. If you are dependent on them for the execution of specific logic, I suggest you use the @PostConstruct annotation. This annotation allows a specific method to be executed after construction of the instance and also after all the @Autowired instances have been injected.
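
        A minimal sketch – MyClass and its doSomething method are just illustrations:

        @Component
        public class MyStartupBean {

            @Autowired
            MyClass myClass;

            // runs after construction and after all @Autowired members have been injected
            @PostConstruct
            public void init() {
                myClass.doSomething(); // safe here: myClass has been injected
            }
        }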

        Multiple classes which fit the @Autowired bill

        If you create an instance of a class implementing an interface and there are multiple classes implementing that interface, you can use different techniques to let it determine the correct one. Read here.

        You can indicate a @Primary candidate for @Autowired. This sets a default class to be wired. Some other alternatives are to use @Resource, @Qualifier or @Inject. Read more here. @Autowired is Spring specific. The others are not.

        You can for example name a @Component like:

        
        @Component("beanName1")
        public class MyClass1 implements InterfaceName {
        }
        
        @Component("beanName2")
        public class MyClass2 implements InterfaceName {
        }
        
        

        And use it in an @Autowired like

        
        @Autowired
        @Qualifier("beanName1")
        InterfaceName myImpl;
        
        

        myImpl will get an instance of MyClass1.

        When @Autowired doesn’t work

        There are several reasons @Autowired might not work.

        When a new instance is created not by Spring but by for example manually calling a constructor, the instance of the class will not be registered in the Spring context and thus not available for dependency injection. Also when you use @Autowired in the class of which you created a new instance, the Spring context will not be known to it and thus most likely this will also fail.
        Another reason can be that the class in which you want to use @Autowired is not picked up by the ComponentScan. This can basically be for one of two reasons:

        • The package is outside the ComponentScan search path. Move the package to a scanned location or configure the ComponentScan to fix this (see the sketch after this list).
        • The class in which you want to use @Autowired does not have a Spring annotation. Add one of the following annotations to the class: @Component, @Repository, @Service, @Controller, @Configuration. They have different behaviors so choose carefully! Read more here.
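
        As an illustration, a minimal sketch of explicitly configuring the scan path in a Spring Boot application (the package names are hypothetical):

        @SpringBootApplication
        @ComponentScan(basePackages = { "com.example.app", "com.example.shared" })
        public class Application {
            public static void main(String[] args) {
                SpringApplication.run(Application.class, args);
            }
        }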
        Instances created not by Spring

        Autowired is cool! It makes certain things very easy. Instances created not by Spring are a challenge and stand between you and @Autowired. How do you deal with this?

        Do not create your own instances; let Spring handle it

        If you can do this (refactor), it is the easiest way to go. If you need to deal with instances created not by Spring, there are some workarounds available below, but most likely, they will have unexpected side-effects. It is easy to add Spring annotations, have the class be picked up by the ComponentScan and let instances be @Autowired when you need it. This avoids you having to create new instances regularly or having to forward them through a call stack.

        Not like this
        
        //Autowired annotations will not work inside MyClass. Other classes who want to use MyClass have to create their own instances or you have to forward this one.
        
        public class MyClass {
        }
        
        public class MyParentClass {
        MyClass myClass = new MyClass();
        }
        
        
        But like this

        Below is how you can refactor this in order to Springify it.

        
        //@Component makes sure it is picked up by the ComponentScan (if it is in the right package). This allows @Autowired to work in other classes for instances of this class
        @Component
        public class MyClass {
        }
        
        //@Service makes sure the @Autowired annotation is processed
        @Service
        public class MyParentClass {
        //myClass is assigned an instance of MyClass
        @Autowired
        MyClass myClass;
        }
        
        
        Manually force Autowired to be processed

        If you want to manually create a new instance and force the @Autowired annotations used inside it to be processed, you can obtain the SpringApplicationContext (see here) and do the following (from here):

        
        B bean = new B();
        AutowireCapableBeanFactory factory = applicationContext.getAutowireCapableBeanFactory();
        factory.autowireBean( bean );
        factory.initializeBean( bean, "bean" );
        
        

        initializeBean processes the @PostConstruct annotation. There is some discussion, though, on whether this breaks the inversion of control principle. Read for example here.

        Manually add the bean to the Spring context

        If you not only want the @Autowired annotations to be processed inside the bean, but also want to make the new instance available for autowiring into other instances, it needs to be present in the SpringApplicationContext. You can obtain the SpringApplicationContext by implementing ApplicationContextAware (see here) and use that to register the bean. A nice example of such a ‘dynamic Spring bean’ can be found here and here. There are other flavors which provide pretty similar functionality. For example here.
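
        A minimal sketch of such a registration – assuming a ConfigurableApplicationContext and with all names illustrative:

        @Component
        public class BeanRegistrar implements ApplicationContextAware {

            private ConfigurableApplicationContext context;

            @Override
            public void setApplicationContext(ApplicationContext applicationContext) {
                this.context = (ConfigurableApplicationContext) applicationContext;
            }

            // registers a manually created instance so it can be autowired into other beans by name
            public void register(String name, Object bean) {
                context.getBeanFactory().registerSingleton(name, bean);
            }
        }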

        The post Java: How to fix Spring @Autowired annotation not working issues appeared first on AMIS Oracle and Java Blog.

        Set up continuous application build and delivery from Git to Kubernetes with Oracle Wercker

        Thu, 2018-02-22 03:22

        It is nice: push code to a branch in a Git repository and after a little while find the freshly built application up and running in the live environment. That is exactly what Wercker can do for me.


        The Oracle + Wercker Cloud service allows me to define applications based on Git repositories. For each application, one or more workflows can be defined composed out of one or more pipelines (steps). A workflow can be triggered by a commit on a specific branch in the Git repository. A pipeline can do various things – including: build a Docker container from the sources as runtime for the application, push the Docker container to a container registry and deploy containers from this container registry to a Kubernetes cluster.

        In this article, I will show the steps I went through to set up the end to end workflow for a Node JS application that I had developed and tested locally and then pushed to a repository on GitHub. This end to end workflow is triggered by any commit to the master branch. It builds the application runtime container, stores it and deploys it to a Kubernetes Cluster running on Oracle Cloud Infrastructure (the Container Engine Cloud).

        The starting point is the application – eventmonitor-microservice-soaring-clouds-sequel – in the GitHub Repository at: https://github.com/lucasjellema/eventmonitor-microservice-soaring-clouds-sequel . I already have a free account on Wercker (http://www.wercker.com/).

        The steps:

        1. Add an Application to my Wercker account


        2. Step through the Application Wizard:


        Select GitHub (in my case).

        Since I am logged in into Wercker using my GitHub account details, I get presented a list of all my repositories. I select the one that holds the code for the application I am adding:


        Accept checking out the code without SSH key:


        Step 4 presents the configuration information for the application. Press Create to complete the definition of the application.


        The successful creation of the application is indicated.


        3. Define the build steps in a wercker.yml

        The build steps that Wercker executes are described by a wercker.yml file. This file is expected in the root of the source repository.

        Wercker offers help with the creation of the build file. For a specific language, it can generate the skeleton wercker.yml file that already refers to the base box (a language specific runtime) and has the outline for the steps to build and push a container.


        In my case, I have created the wercker.yml file manually and already included it in my source repo.

        The outline of that file is sketched below – a simplified reconstruction; the step names and options are assumptions based on standard Wercker steps, and the actual file in the source repo differs in its details.

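        box: node:8

        build:
          steps:
            - npm-install

        push-to-releases:
          steps:
            - internal/docker-push:
                username: $DOCKER_USERNAME                # assumption: registry credentials come from environment variables
                password: $DOCKER_PASSWORD
                repository: $DOCKER_REPO
                tag: $WERCKER_GIT_BRANCH-$WERCKER_GIT_COMMIT

        deploy-to-oke:
          steps:
            - bash-template                               # substitutes environment variables in the *.template files
            - kubectl:
                server: $KUBERNETES_MASTER
                token: $KUBERNETES_TOKEN
                insecure-skip-tls-verify: true
                command: apply -f kubernetes-deployment.yml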

        Based on the box node8 (the base container image), it defines three building blocks: build, push-to-releases and deploy-to-oke. The first one is standard for Node applications and builds the application (well, it gathers all node modules). The second one takes the resulting container image from the first step and pushes it to the Wercker Container Registry with a tag composed from the branch name and the git commit id. The third one is a little more elaborate. It takes the container image from the Wercker registry and creates a Kubernetes deployment that is subsequently pushed to the Kubernetes cluster that is indicated by the environment variables KUBERNETES_MASTER and KUBERNETES_TOKEN.

        4. Define Pipelines and Workflow

        In the Wercker console, I can define workflows for my application. These workflows consist of pipelines, organized in a specific sequence. Each pipeline is triggered by the completion of the previous one. The first pipeline is typically triggered by a commit event in the source repository.


         

        Before I can compose the workflow I need, I first have to set up the pipelines – corresponding to the build steps in the wercker.yml file in the application source repo. Click on Add new pipeline.

        Define the name for the new pipeline (anything you like) and the name of the YML Pipeline – this one has to correspond exactly with the name of the building block in the wercker.yml file.


        Click on Create.

        Next, create a pipeline for the "deploy-to-oke" step in the YML file


        Press Create to also create this pipeline.

        With all three pipelines available, we can complete the workflow.


        Click on the plus icon to add a step in the workflow. Associate this step with the pipeline push-docker-image-to-releases.

        Next, add a step for the final pipeline.


        This completes the workflow. If you now commit code to the master branch of the GitHub repo, the workflow will be triggered and will start to execute. The execution will fail however: the wercker.yml file contains various references to variables that need to be defined for the application (or the workflow or even the individual pipeline) before the workflow can be successful.


        Crucial in making the deployment to Kubernetes successful are the files kubernetes-deployment.yml.template and ingress.yml.template. These files are used as templates for the Kubernetes deployment and ingress definitions that are applied to Kubernetes. They define important details – a simplified sketch follows the list below – such as:

        • Container Image in the Wercker Container Registry to create the Pod for
        • Port(s) to be exposed from each Pod
        • Environment variables to be published inside the Pod
        • URL path at which the application’s endpoints are accessed (in ingress.yml.template)

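        A simplified sketch of what kubernetes-deployment.yml.template could contain (name, port and variables are illustrative; the ${...} placeholders are substituted during the deploy pipeline):

        apiVersion: extensions/v1beta1
        kind: Deployment
        metadata:
          name: eventmonitor-ms
        spec:
          replicas: 1
          template:
            metadata:
              labels:
                app: eventmonitor-ms
            spec:
              containers:
              - name: eventmonitor-ms
                image: ${DOCKER_REPO}:${WERCKER_GIT_BRANCH}-${WERCKER_GIT_COMMIT}   # the image pushed in the previous pipeline
                ports:
                - containerPort: 8080      # port exposed from the pod
                env:
                - name: APP_VERSION        # environment variable published inside the pod
                  value: "${APP_VERSION}"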

        5. Define environment variables

        Click on the Environment tab. Set values for all the variables used in the wercker.yml file. Some of these define the Kubernetes environment to which deployment should take place, others provide values that are injected into the Kubernetes Pod and made available as environment variables to the application at run time.

        6. Trigger a build of the application

        At this point, the application is truly ready to be built and deployed. One way to trigger this is by committing something to the master branch. Another option is to trigger the build directly from the Wercker console.


        The build is triggered. The output from each step is available in the console.

        When the build is done, the console reflects the result.

        Each pipeline can be clicked to inspect details for all individual steps, for example the deployment to Kubernetes. Each step can be expanded for even more details.

        In these details, we can find the values that have been injected for the environment variables.

        7. Access the live application

        This final step is not specific to Wercker. It is however the icing on the cake – to make actual use of the application.

        The ingress definition for the application specifies a path mapping roughly like the following sketch (service name and port are assumptions):

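        apiVersion: extensions/v1beta1
        kind: Ingress
        metadata:
          name: eventmonitor-ms-ingress
        spec:
          rules:
          - http:
              paths:
              - path: /eventmonitor-ms/app
                backend:
                  serviceName: eventmonitor-ms-service   # assumption: the service fronting the deployment
                  servicePort: 8080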

        This means that the application can be accessed at the endpoint for the K8S ingress at the path /eventmonitor-ms/app/.

        Given the external IP address for the ingress service, I can now access the application.

        Note: /health is one of the operations supported by the application.

        8. Change the application and Roll out the Change – the ultimate proof

        The real proof of this pipeline is in changing the application and having that change rolled out as a result of the Git commit.

        I make a tiny change, commit the change to GitHub and push the changes. Almost immediately, the workflow is triggered.

        After a minute or so, the workflow is complete and the updated application is live on Kubernetes.

        Check the live logs in the Pod, and access the application again – now showing the updated version.

        The post Set up continuous application build and delivery from Git to Kubernetes with Oracle Wercker appeared first on AMIS Oracle and Java Blog.

        Communicate events in ADF based UI areas to embedded Rich Client Applications such as Oracle JET, Angular and React

        Wed, 2018-02-21 02:33

        For one of our current projects I have done some explorations into the combination of ADF (and WebCenter Portal in our specific case) with JET. Our customer has existing investments in WC Portal and many ADF Taskflows and is now switching to JET as a WebApp implementation technology – for reasons of better user experience and especially better availability of developers. I believe that this is a situation that many organizations are in or are contemplating (including those who want to extend Oracle EBusiness Suite or Fusion Apps). This is not the ideal green field technology mix of course. However, if either WebCenter Portal (heavily steeped in ADF) or an existing enterprise ADF application are the starting point for new UI requirements, you are bound to end up with a combination of ADF and the latest and greatest technology used for building those requirements.

        We have to ensure that the rich client based ‘Portlets’ are nicely embedded in the ADF host environment. We also have to take care that events triggered by user actions in the ADF UI areas are communicated to the embedded rich client based UI areas in the page and lead to appropriate actions over there.

        In a previous article, I have described how events in the embedded area are communicated to the ADF side of the fence and lead to UI synchronization: Publish Events from any Web Application in IFRAME to ADF Applications. In the current article, I describe the reverse route: events in the ADF based areas on the page are communicated to the rich client based UI and trigger the appropriate synchronization. The implementation is suitably decoupled, using ADF mechanisms such as server listener, contextual event, partial page refresh and the standard HTML5 mechanism for publishing events on embedded IFRAME windows.

         

        It is quite likely that an IFRAME is used as a container for the new UI components.

        The UI components created in ADF and those built in other technologies are fairly well isolated from each other, through the use of the IFRAME. However, in certain instances, the isolation has to be pierced. When a user performs an action in one UI component, it is quite possible that this action should have an effect in another UI area in the same page. The other area may need to refresh (get latest data), synchronize (align with the selection), navigate, etc. We need a solid, decoupled way of taking an event in the ADF based UI area to the UI sections embedded in IFRAMEs and based on one of the proponents of the latest UI technology.

        This article describes such an approach – one that allows our ADF side of the User Interface to send events in a well defined way to the JET, React or Angular UI component and thus make these areas play nice with each other after all.


         

        Note: where it says JET below, it could also say Angular, Vue or React – or even plain HTML5.

        The steps are:

        1. A user action is performed and the event to be published is identified in the ADF UI – the ADF X taskflow in the figure.
        2. Several options are available for the communication to the server – from an auto-submit enabled input component with a value change listener associated with a managed bean to a UI component with a client listener that leverages a server listener to queue a custom event to be sent to the server – also ending up in a managed bean
        3. The managed bean, defined in the context of the ADF Taskflow X, gets hold of the binding container for the current view, gets hold of the publishEvent method binding and executes that binding
        4. The publishEvent method binding is specified in the page definition for the current page. It invokes method publishEvent on the Data Control EventPublisherBean that was created for the POJO EventPublisherBean. The method binding in the page definition contains an events element that specifies that execution of this method binding will result in the publication of a contextual event called CountrySelectedEvent that takes the result from the method publishEvent as its payload.

          At this point, we leave the ADF X Taskflow. It has done its duty by reporting the event that took place in the client. It is available at the heart of the ADF framework, ready to be processed by one or more consumers – that each have to take care of refreshing their own UI if so desired.

        5. The contextual event CountryChangedEvent is consumed in method handleCountryChangedEvent in POJO EventConsumer. A Data Control is created for this POJO. A method action is configured for handleCountryChangedEvent in the page definition for the view in JET Container ADF Taskflow. This page definition also contains an eventMap element that specifies that the CountryChangedEvent event is to be handled by method binding handleCountryChangedEvent. The method binding specifies a single parameter that will receive the payload of the event (from the EL Expression ${payLoad} for attribute NDValue) – see the sketch after this list.
        6. The EventConsumer receives the payload of the event and writes a JavaScript snippet to be executed in the browser at the end of the partial page request processing.
        7. The JavaScript snippet, written by EventConsumer, is executed in the client; it invokes function processCountryChangedEvent (loaded in JS library adf-jet-client-app.js) and pass the payload of the countrychanged event.
        8. Function processCountryChangedEvent locates the IFRAME element that contains the target client application and posts a message on its content window – carrying the event’s payload
        9. A message event handler defined in the IFRAME, in the JET application, consumes the message, extracts the event’s payload and processes it in the appropriate way – probably synchronizing the UI in some way or other.At this point, all effects that the action in ADF X area should have in the JET application in the IFRAME have been achieved.
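
        As announced in step 2, the alternative to the auto-submit component is an af:clientListener that queues a custom event for an af:serverListener. The client-side half of that option is a small JavaScript function. A minimal sketch – the event name "countrySelection" and the use of getSubmittedValue are assumptions, and the af:serverListener wiring on the component is not shown:

        // client listener function, referenced from an af:clientListener on the component;
        // it queues a custom event that an af:serverListener of type "countrySelection"
        // routes to a managed bean method on the server
        function handleCountrySelection(event) {
            var component = event.getSource();
            AdfCustomEvent.queue(component, "countrySelection",
                                 { payload : component.getSubmittedValue() },
                                 true); // deliver immediately
            event.cancel(); // suppress default client-side event processing
        }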


        And now for some real code.

        Starting point:

        • an ADF Web Application (it may have a Model, such as ADF BC, but it does not have to)
          • an index.jsf page – the home page of the application
          • the ADF JET Container Taskflow with a JETView.jsff that has the embedded IFRAME that loads the index.html
          • a jet-web-app folder with an index.html – to represent the JET application (note: for now it is just a plain HTML5 application)
          • the ADF X Taskflow with a view.jsff page – representing the existing WC Portal or ADF ERP application

         


         

        From ADF X Taskflow to ADF Contextual Event

        The page view.jsff contains a selectOneChoice component


         

        Users can select a country.

        The component has autoSubmit set to true – which means that when the selection changes, the change is submitted (in an AJAX request) to the server. A valueChangeListener has been configured as well – referring to the detailsBean managed bean that is defined in the ADF-X-taskflow.

        <?xml version='1.0' encoding='UTF-8'?>
        <ui:composition xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
            <af:panelHeader text="Classic ADF X Taskflow" id="ph1">
                <af:selectOneChoice label="Choose a country" id="soc1" autoSubmit="true"
                                    valueChangeListener="#{pageFlowScope.detailsBean.countryChangeHandler}">
                    <af:selectItem label="The Netherlands" value="nl" id="si1"/>
                    <af:selectItem label="Germany" value="de" id="si2"/>
                    <af:selectItem label="United Kingdom of Great Brittain and Northern Ireland" value="uk" id="si3"/>
                    <af:selectItem label="United States of America" value="us" id="si4"/>
                    <af:selectItem label="Spain" value="es" id="si5"/>
                    <af:selectItem label="Norway" value="no" id="si6"/>
                </af:selectOneChoice>
            </af:panelHeader>
        </ui:composition>
        
        


        The detailsBean is defined for the ADF-X-taskflow:

        <?xml version="1.0" encoding="windows-1252" ?>
        <adfc-config xmlns="http://xmlns.oracle.com/adf/controller" version="1.2">
          <task-flow-definition id="ADF-X-taskflow">
            <default-activity>view</default-activity>
            <data-control-scope>
              <shared/>
            </data-control-scope>
            <managed-bean id="__1">
              <managed-bean-name>detailsBean</managed-bean-name>
              <managed-bean-class>nl.amis.frontend.jet2adf.view.adfX.DetailsBean</managed-bean-class>
              <managed-bean-scope>pageFlow</managed-bean-scope>
            </managed-bean>
            <view id="view">
              <page>/view.jsff</page>
            </view>
            <use-page-fragments/>
          </task-flow-definition>
        </adfc-config>
        

        The bean is based on class DetailsBean. The relevant method here is countryChangeHandler:

            public void countryChangeHandler(ValueChangeEvent valueChangeEvent) {
                System.out.println("Country Changed to = " + valueChangeEvent.getNewValue());
                // find operation binding publishEvent and execute in order to publish contextual event
                BindingContainer bindingContainer = BindingContext.getCurrent().getCurrentBindingsEntry();
                OperationBinding method = bindingContainer.getOperationBinding("publishEvent");
                method.getParamsMap().put("payload", valueChangeEvent.getNewValue());
                method.execute();
            }
        


        This method gets hold of the binding container for the current page (view.jsff) and, within it, of the method action publishEvent:

        <?xml version="1.0" encoding="UTF-8" ?>
        <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="12.2.1.9.14" id="viewPageDef"
                        Package="nl.amis.frontend.jet2adf.view.pageDefs">
            <parameters/>
            <executables>
                <variableIterator id="variables"/>
            </executables>
            <bindings>
                <methodAction id="publishEvent" RequiresUpdateModel="true" Action="invokeMethod" MethodName="publishEvent"
                              IsViewObjectMethod="false" DataControl="EventPublisherBean"
                              InstanceName="bindings.publishEvent.dataControl.dataProvider"
                              ReturnName="data.EventPublisherBean.methodResults.publishEvent_publishEvent_dataControl_dataProvider_publishEvent_result">
                    <NamedData NDName="payload" NDType="java.lang.Object"/>
                    <events xmlns="http://xmlns.oracle.com/adfm/contextualEvent">
                        <event name="CountryChangedEvent"/>
                    </events>
                </methodAction>
            </bindings>
        </pageDefinition>
        

        This Page Definition defines the method action and specifies that execution of that method action publishes the Contextual Event CountryChangedEvent.

         


        From ADF Contextual Event to JET Application

        A method action is configured for method handleCountryChangedEvent in data control EventConsumer – created for POJO EventConsumer – in the page definition for the view in the JET Container ADF Taskflow. This page definition also contains an eventMap element that specifies that the CountryChangedEvent event is to be handled by the method binding handleCountryChangedEvent. The method binding specifies a single parameter that receives the payload of the event (from the EL Expression ${payLoad} for attribute NDValue).

        Here is the code for the Page Definition for the JETView.jsff:

        <?xml version="1.0" encoding="UTF-8" ?>
        <pageDefinition xmlns="http://xmlns.oracle.com/adfm/uimodel" version="12.2.1.9.14" id="JETViewPageDef"
                        Package="nl.amis.frontend.jet2adf.view.pageDefs">
            <parameters/>
            <executables>
                <variableIterator id="variables"/>
            </executables>
            <bindings>
                 <methodAction id="handleCountryChangedEvent" RequiresUpdateModel="true" Action="invokeMethod"
                              MethodName="handleCountryChangedEvent" IsViewObjectMethod="false" DataControl="EventConsumer"
                              InstanceName="bindings.handleCountryChangedEvent.dataControl.dataProvider">
                    <NamedData NDName="payload" NDValue="${payLoad}" NDType="java.lang.Object"/>
                </methodAction>
            </bindings>
            <eventMap xmlns="http://xmlns.oracle.com/adfm/contextualEvent">
                <event name="CountryChangedEvent">
                    <producer region="*">
                    <!-- http://www.jobinesh.com/2014/05/revisiting-contextual-event-dynamic.html -->
                        <consumer handler="handleCountryChangedEvent" refresh="false"/>
                    </producer>
                </event>
            </eventMap>
        </pageDefinition>
        

        Note: the refresh attribute in the consumer element is crucial: it specifies that the page should not be refreshed when the event is consumed. The default is that the page does refresh; in our case that would mean that the IFRAME refreshes and reloads the JET application, which is then reinitialized and loses all its state.

        And here is the EventConsumer class – for which a Data Control has been created – that handles the CountryChangedEvent:

        package nl.amis.frontend.jet2adf.view.adfjetclient;
        
        import javax.faces.context.FacesContext;
        
        import org.apache.myfaces.trinidad.render.ExtendedRenderKitService;
        import org.apache.myfaces.trinidad.util.Service;
        
        public class EventConsumer {
            public EventConsumer() {
                super();
            }
        
            public void handleCountryChangedEvent(Object payload) {
                System.out.println(">>>>>> Consume Event: " + payload);
                writeJavaScriptToClient("console.log('CountryChangeEvent was consumed; the new country value = "+payload+"'); processCountryChangedEvent('"+payload+"');");
              }
        
            //generic, reusable helper method to call JavaScript on a client
            private void writeJavaScriptToClient(String script) {
                FacesContext fctx = FacesContext.getCurrentInstance();
                ExtendedRenderKitService erks = null;
                erks = Service.getRenderKitService(fctx, ExtendedRenderKitService.class);
                erks.addScript(fctx, script);
            }
        }
        

        The contextual event CountryChangedEvent is consumed in method handleCountryChangedEvent in this POJO EventConsumer. It receives the payload of the event and writes a JavaScript snippet to be executed in the browser at the end of the partial page request processing, using the ExtendedRenderKitService in the ADF framework.

        The JavaScript snippet, written by EventConsumer:

          console.log('CountryChangeEvent was consumed; the new country value = uk'); 
          processCountryChangedEvent('uk');
        

        It is executed in the client; it invokes function processCountryChangedEvent (loaded in JS library adf-jet-client-app.js) and passes the payload of the countrychanged event (that is: the country code for the selected country).

        Function processCountryChangedEvent locates the IFRAME element that contains the target client application and posts a message on its content window – carrying the event’s payload:

        function findIframeWithIdEndingWith(idEndString) {
            var iframe;
            var iframeHtmlCollectionArray = document.getElementsByTagName("iframe");
            //http://clubmate.fi/the-intuitive-and-powerful-foreach-loop-in-javascript/#Looping_HTMLCollection_or_a_nodeList_with_forEach
            [].forEach.call(iframeHtmlCollectionArray, function (el, i) {
                if (el.id.endsWith(idEndString)) {
                    iframe = el;
                }
            });
            return iframe;
        }
        
        function processCountryChangedEvent(newCountry) {
            console.log("Client Side handling of Country Changed event; now transfer to IFRAME");    
            var iframe = findIframeWithIdEndingWith('jetIframe::f');
            var targetOrigin = '*';
            iframe.contentWindow.postMessage({'eventType':'countryChanged','payload':newCountry}, targetOrigin);
        }
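
        Note: the function above posts with targetOrigin '*', so any document that happens to be loaded in the IFRAME can receive the payload. When the origin that serves the JET application is known, it is safer to pin it. A minimal variation of the function – https://jet.example.com is an assumed origin, not part of the sample application:

        function processCountryChangedEventSafely(newCountry) {
            var iframe = findIframeWithIdEndingWith('jetIframe::f');
            // restrict delivery to the (assumed) origin that serves the JET application;
            // the browser silently drops the message if the IFRAME holds a document from another origin
            var targetOrigin = 'https://jet.example.com';
            iframe.contentWindow.postMessage({'eventType' : 'countryChanged', 'payload' : newCountry}, targetOrigin);
        }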
        

        A message event handler defined in the IFRAME, in the JET application, consumes the message, extracts the event’s payload and processes it.

                  function init() {
                      // attach listener to receive message from parent; this is not required for sending messages to the parent window
                      window.addEventListener("message", function (event) {
                          console.log("Iframe receives message from parent" + event.data);
                          if (event.data &amp;&amp; event.data.eventType == 'countryChanged' &amp;&amp; event.data.payload) {
                              var countrySpan = document.getElementById('currentCountry');
                              countrySpan.innerHTML = "Fresh Country: " + event.data.payload;
                          }
                      },
                      false);
                  }
                  //init
                  document.addEventListener("DOMContentLoaded", function (event) {
                      init();
                  });
        

        It receives the event and reads its data property; because postMessage transfers the object using the structured clone algorithm, event.data is already a JavaScript object and no JSON parsing is required. The handler extracts the country from the payload property of the event data. It then locates a SPAN element in the DOM and updates its innerHTML property, which updates the UI.

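        The handler above accepts a message from any parent window. As a hardening sketch that mirrors the targetOrigin note earlier, the same listener can check event.origin – where https://adf-host.example.com stands in for the (assumed) origin that hosts the ADF page:

        window.addEventListener("message", function (event) {
            // ignore messages that do not originate from the (assumed) ADF host
            if (event.origin !== 'https://adf-host.example.com') {
                return;
            }
            if (event.data && event.data.eventType == 'countryChanged' && event.data.payload) {
                document.getElementById('currentCountry').innerHTML = "Fresh Country: " + event.data.payload;
            }
        }, false);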

        Here are the salient details of the index.html of the embedded web application:

            <body>
                <h2>Client Web App</h2>
                <p>
                    Country is 
                    <span id="currentCountry"></span>
                </p>
            </body>
        


         


        Resources

        Sources for this article: https://github.com/lucasjellema/WebAppIframe2ADFSynchronize (note: this repository also contains the code for the flow from the JET IFRAME back to the ADF Taskflow X).

        Docs on postMessage: https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage

        Blog ADF: (re-)Introducing Contextual Events in several simple steps: https://technology.amis.nl/2013/03/14/adf-re-introducing-contextual-events-in-several-simple-steps/

        Blog Revisiting Contextual Event: Dynamic Event Producer, Manual Region Refresh, Conditional Event Subscription and using Managed Bean as Event Handler (with the crucial hint regarding suppressing the automatic refresh of pages after consuming a contextual event): http://www.jobinesh.com/2014/05/revisiting-contextual-event-dynamic.html

        The post Communicate events in ADF based UI areas to embedded Rich Client Applications such as Oracle JET, Angular and React appeared first on AMIS Oracle and Java Blog.

        Introducing Elastic Search NoSQL to Oracle SQL developers – comparing dozens of ElasticSearch and SQL operations (a bit like Rosetta)

        Tue, 2018-02-20 07:47

        Even for organizations with strong roots in relational databases such as Oracle RDBMS, there may be valuable opportunities for leveraging additional data sources, for example to support special (search) use cases. Elastic Search (Index) is one of those data stores that can add value – for example to provide powerful search capabilities to web applications, to handle metrics and logging output from live applications or to collect and analyze any data set in your landscape. The Elastic Stack furthermore consists of Kibana (visualizations/dashboards) and Logstash & Beats (gathering data, for example through harvesting log files).

        For developers with SQL at their fingertips, after sometimes decades of relational querying, it can be a little challenging to get started with Elastic Search. Especially for these developers, I have compiled a Postman collection (the interface to Elastic Search is a REST API) and a PowerPoint presentation. These two cover over two dozen operations – index management (DDL) and data manipulation (DML) as well as searches – with the familiar Employees and Departments data set. The presentation lists these operations side by side: the left hand side of each slide shows the action in Elastic Search and the right hand side the more familiar Oracle SQL syntax. By showing equivalent statements in a well known language next to the language still to be grasped, I hope to help Oracle SQL developers get kick-started with Elastic Search.

        The searches make use of stored scripts, geo_point and geospatial operators, text searches, aggregations, highlighting, sorting, limiting, etc.
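
        To give a flavour of this side-by-side idea, here is a hypothetical pairing (not lifted from the slides; the index and field names are assumptions): the SQL statement select * from employees where job_id = 'SA_REP' order by salary desc corresponds roughly to the following Elastic Search call:

        // hypothetical counterpart of: select * from employees where job_id = 'SA_REP' order by salary desc
        fetch('http://localhost:9200/employees/_search', {
            method : 'POST',
            headers : { 'Content-Type' : 'application/json' },
            body : JSON.stringify({
                query : { term : { job_id : 'SA_REP' } }, // exact match; assumes job_id is mapped as a keyword field
                sort  : [ { salary : 'desc' } ]           // order by salary, descending
            })
        }).then(function (response) { return response.json(); })
          .then(function (result) { console.log(result.hits.hits); }); // the matching employee documents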

        Note: the GitHub repository (https://github.com/lucasjellema/sig-elasticsearch-february-2018) also contains hands-on labs that you could make use of to get more acquainted with both Elastic Search and Kibana.

        Resources

        Sources: https://github.com/lucasjellema/sig-elasticsearch-february-2018

        Slides: https://www.slideshare.net/lucasjellema/comparing-30-elastic-search-operations-with-oracle-sql-statements

        The post Introducing Elastic Search NoSQL to Oracle SQL developers – comparing dozens of ElasticSearch and SQL operations (a bit like Rosetta) appeared first on AMIS Oracle and Java Blog.
