Amis Blog

Friends of Oracle and Java

A DBA’s first steps in Jenkins

Thu, 2018-04-12 02:56

My customer wanted an automated way to refresh an application database to a known state, to be operated by non-technical personnel. As a DBA I know a lot about scripting and can build some small web interfaces, but why bother when there are readily available tools like Jenkins? Jenkins is mostly a CI/CD developer thing that for a classical DBA is a bit of magic. I decided to try this tool to script the refresh of my application.


Getting started

First, fetch the Jenkins distribution from https://jenkins-ci.org; I used the latest version of jenkins.war. Place the jenkins.war file in a desired location and you are almost set to go. Set the environment variable JENKINS_HOME to a sane value, or else your Jenkins settings, data and work directory will end up in $HOME/.jenkins/

Start Jenkins by using the following commandline:

java -jar jenkins.war --httpPort=8024

You may want to make a start script to automate this step. Please note the --httpPort argument: choose an available port number (and make sure the firewall is opened for this port).
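
A minimal start script could look like the sketch below; the paths and the log file location are assumptions, only the --httpPort value matches the command above:

#!/bin/sh
# start Jenkins in the background (example paths, adjust to your environment)
export JENKINS_HOME=/u01/app/jenkins_home
cd /u01/app/jenkins
nohup java -jar jenkins.war --httpPort=8024 > jenkins.log 2>&1 &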

When starting Jenkins for the first time it creates a password that it will show in the standard output. You need this password when you open the Jenkins web interface for the first time. After logging in, install the recommended plugins; this set should at least include the Pipeline plugin. The next step will create your admin user account.
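
If you missed the generated password in the standard output, Jenkins also writes it to a file under JENKINS_HOME, so (assuming the JENKINS_HOME set earlier) you can retrieve it with:

cat $JENKINS_HOME/secrets/initialAdminPassword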

Creating a Pipeline build job

Navigate to “New Item” to start creating your first pipeline. Type a descriptive name and choose Pipeline as the item type.

myfirstpipeline

After creating the job, you can start building the pipeline. In my case I needed about four steps: stopping the Weblogic servers,
clearing the schemas, importing the schemas and fixing stuff, and finally starting Weblogic again.

The Pipeline scripting language is quite extensive. I only used the bare minimum of the possibilities, but at least it gets my job done. The actual code can be entered in the configuration of the job, in the Pipeline script field. A more advanced option could be to retrieve your Pipeline code (plus additional scripts) from an SCM like Git or Bitbucket.

empty_pipeline

 

The code below is my actual code to allow the refresh of the application:

pipeline {
    agent any
    stages {
        stage ('Stop Weblogic') {
            steps { 
                echo 'Stopping Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/stopServers.py'
            }
        }
        stage ( 'Drop OWNER') {
            steps {
                echo "Dropping the Owner"
                sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus /@theSID @ scripts/drop_tables.sql"'
            }
        }
        stage ( 'Import OWNER' ) {
            steps {
                echo 'Importing OWNER'
                sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; impdp /@theSID directory=thedirforyourdump \
                            dumpfile=Youknowwhichfiletoimport.dmp \
                            logfile=import-`date +%F-%h%m`.log \
                            schemas=ONLY_OWNER,THE_OTHER_OWNER,SOME_OTHER_REQUIRED_SCHEMA \
                            exclude=USER,SYNONYM,VIEW,TYPE,PACKAGE,PACKAGE_BODY,PROCEDURE,FUNCTION,ALTER_PACKAGE_SPEC,ALTER_FUNCTION,ALTER_PROCEDURE,TYPE_BODY"', returnStatus: true

				 echo 'Fixing invalid objects'           
                 sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus / as sysdba @?/rdbms/admin/utlrp"'    
				 
                 echo 'Gathering statistics in the background'
                 sh script: 'ssh dbhost01 "export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv -s ; sqlplus /@theSID @ scripts/refresh_stats.sql"'
            }
        }
        stage ( 'Start Weblogic' ) {
            steps {
                echo 'Starting Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/startServers_turbo.py'
            }
        }
    }
}

In this script you can see the four global steps, but some steps are more involved. In this situation I decided not to completely drop the schemas associated with the application, because the dump file could come from a different environment with different passwords. Additionally, I only import the known schemas here: if the supplied dump file accidentally contains additional schemas, the errors in the log would be enormous, because the user accounts are not created during the import stage.

When the job is saved, you can try a Build. This will run your job, and you can monitor the console output to see how your job is doing.

SQL*Plus with wallet authentication

The observant types among you may have noticed that I used a wallet for authentication with SQL*Plus and impdp. As this tool would be used by people who should not get DBA passwords, using a password on the command line is not recommended: note that all the commands above and their output would be logged in plain text. So I decided to use a wallet for the account information. Most steps are well documented, but I found that making the wallet auto-login capable (so you do not need to type a wallet password all the time) was documented using the GUI tool, but not the command-line tool. Luckily there are ways of doing that on the command line.

mkdir -p $ORACLE_HOME/network/admin/wallet
mkstore -wrl $ORACLE_HOME/network/admin/wallet/ -create
mkstore -wrl $ORACLE_HOME/network/admin/wallet -createCredential theSID_system system 'YourSuperSekritPassword'
orapki wallet create -wallet $ORACLE_HOME/network/admin/wallet -auto_login
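
To verify that the credential ended up in the wallet, you can list the stored credentials (mkstore will prompt for the wallet password):

mkstore -wrl $ORACLE_HOME/network/admin/wallet -listCredential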

sqlnet.ora needs to contain some information so the wallet can be found:

WALLET_LOCATION =
  (SOURCE =    (METHOD = FILE)
   (METHOD_DATA =      (DIRECTORY = <<ORACLE_HOME>>/network/admin/wallet)    )  )
SQLNET.WALLET_OVERRIDE = TRUE

Also make sure a tnsnames entry is added for your wallet credential name (above: theSID_system). Now using sqlplus /@theSID_system should connect you to the database as the configured user.
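
A minimal tnsnames.ora entry for the credential could look like the sketch below; the host, port and service name are assumptions for this example:

theSID_system =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = theSID))
  )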

Asking Questions

The first job was quite static: always the same dump, or I would need to edit the pipeline code to change the named dump file… not as flexible as I would like. Can Jenkins help me here? Luckily, YES:

    def dumpfile
    def dbhost = 'theHost'
    def dumpdir = '/u01/oracle/admin/THESID/dpdump'

    pipeline {
    agent any
    stages {
        stage ('Choose Dumpfile') {
            steps {
                script {
                    def file_collection
                    file_collection = sh script: "ssh $dbhost 'cd $dumpdir; ls *X*.dmp *x*.dmp 2>/dev/null'", returnStdout: true
                    dumpfile = input message: 'Choose the right dump', ok: 'This One!', parameters: [choice(name: 'dump file', choices: "${file_collection}", description: '')]
                }
            }
        }
        stage ('Stop Weblogic') {
            steps { 
                echo 'Stopping Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/stopServers.py'
            }
        }
        stage ( 'Drop OWNER') {
            steps {
                echo "Dropping Owner"
                sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; sqlplus /@theSID @ scripts/drop_tables.sql'"
            }
        }
        stage ( 'Import OWNER' ) {
            steps {
                echo 'Import OWNER'
                sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; impdp /@theSID directory=dump \
                            dumpfile=$dumpfile \
                            logfile=import-`date +%F@%H%M%S`.log \
                            schemas=MYFAVOURITE_SCHEMA,SECONDOWNER \
                            exclude=USER,SYNONYM,VIEW,TYPE,PACKAGE,PACKAGE_BODY,PROCEDURE,FUNCTION,ALTER_PACKAGE_SPEC,ALTER_FUNCTION,ALTER_PROCEDURE,TYPE_BODY'", returnStatus: true
                            
                 sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; sqlplus / as sysdba @?/rdbms/admin/utlrp'"

                 sh script: "ssh $dbhost 'export ORACLE_SID=theSID; export ORAENV_ASK=no;\
                            source oraenv; sqlplus /@theSID @ scripts/refresh_stats.sql'"
            }
        }
        stage ( 'Start Weblogic' ) {
            steps {
                echo 'Starting Weblogic'
                sh script: '/u01/app/oracle/product/wls12212/oracle_common/common/bin/wlst.sh /home/oracle/scripts/startServers_turbo.py'
            }
        }
    }
}

The first stage actually looks at the location where all the dump files are to be found and does an ls on it. This listing is then stored in a variable that is split into choices. The running job will wait for input, so no harm is done until the choice is made.

Starting a build like this will pause; you can see that when looking at the latest running build in the build queue.

When clicking the link, the choice can be made (or the build can be aborted).

The post A DBA’s first steps in Jenkins appeared first on AMIS Oracle and Java Blog.

First steps with Docker Checkpoint – to create and restore snapshots of running containers

Sun, 2018-04-08 01:31

Docker containers can be stopped and started again. Changes made to the file system in a running container will survive this deliberate stop-and-start cycle. Data in memory and running processes obviously do not. A container that crashes cannot just be restarted and will have a file system in an undetermined state, if it can be restarted at all. When you start a container after it was stopped, it will go through its full startup routine. If heavy-duty processes need to be started – such as a database server process – this startup time can be substantial, as in many seconds or dozens of seconds.

Linux has a mechanism called CRIU or Checkpoint/Restore In Userspace. Using this tool, you can freeze a running application (or part of it) and checkpoint it as a collection of files on disk. You can then use the files to restore the application and run it exactly as it was during the time of the freeze. See https://criu.org/Main_Page for details. Docker CE has (experimental) support for CRIU. This means that using straightforward docker commands we can take a snapshot of a running container (docker checkpoint create <container name> <checkpointname>). At a later moment, we can start this snapshot as the same container (docker start --checkpoint <checkpointname> <container name>) or as a different container.

The container that is started from a checkpoint is in the same state – memory and processes – as the container was when the checkpoint was created. Additionally, the startup time of the container from the snapshot is very short (subsecond); for containers with fairly long startup times – this rapid startup can be a huge boon.

In this article, I will tell about my initial steps with CRIU and Docker. I got it to work, though I ran into an issue with recent versions of Docker CE (17.12 and 18.x), so I went back to Docker CE 17.04. I also ran into an issue with an older version of CRIU, so I built the currently latest version of CRIU (3.8.1) instead of the one shipped in the Ubuntu Xenial 64 distribution (2.6).

I will demonstrate how I start a container that clones a GitHub repository and starts a simple REST API as a Node application; this takes 10 or more seconds. This application counts the number of GET requests it handles (by keeping some memory state). After handling a number of requests, I create a checkpoint for this container. Next, I make a few more requests, all the while watching the counter increase. Then I stop the container and start a fresh container from the checkpoint. The container is running lightning fast – within 700 ms – so it clearly leverages the container state at the time of creating the snapshot. It continues counting requests at the point where the snapshot was created, apparently inheriting its memory state. Just as expected and desired.

Note: a checkpoint does not capture changes in the file system made in a container. Only the memory state is part of the snapshot.

Note 2: Kubernetes does not yet provide support for checkpoints. That means that a pod cannot start a container from a checkpoint.

In a future article I will describe a use case for these snapshots – in automated test scenarios and complex data sets.

The steps I went through (on my Windows 10 laptop using Vagrant 2.0.3 and VirtualBox 5.2.8):

  • use Vagrant to create an Ubuntu 16.04 LTS (Xenial) Virtual Box VM with Docker CE 18.x
  • downgrade Docker from 18.x to 17.04
  • configure Docker for experimental options
  • install CRIU package
  • try out simple scenario with Docker checkpoint
  • build CRIU latest version
  • try out somewhat more complex scenario with Docker checkpoint (that failed with the older CRIU version)

 

Create Ubuntu 16.04 LTS (Xenial) Virtual Box VM with Docker CE 18.x

My Windows 10 laptop already has Vagrant 2.0.3 and Virtual Box 5.2.8. Using the following vagrantfile, I create the VM that is my Docker host for this experiment:
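
A rough sketch of such a vagrantfile is shown below; the box name, the VM resources and the private network IP (the address used later in this article) are assumptions, and the Docker provisioner installs Docker CE in the VM:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.network "private_network", ip: "192.168.188.106"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096
    vb.cpus   = 2
  end
  # the Docker provisioner installs Docker CE inside the VM
  config.vm.provision "docker"
end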

 

After creating (and starting) the VM with

vagrant up

I connect into the VM with

vagrant ssh

ending up at the command prompt, ready for action.

And just to make sure we are pretty much up to date, I run

sudo apt-get upgrade

image

Downgrade Docker CE to Release 17.04

At the time of writing there is an issue with recent Docker versions (at least 17.09 and higher – see https://github.com/moby/moby/issues/35691) and for that reason I downgrade to version 17.04 (as described here: https://forums.docker.com/t/how-to-downgrade-docker-to-a-specific-version/29523/4).

First remove the version of Docker installed by the vagrant provider:

sudo apt-get autoremove -y docker-ce \
&& sudo apt-get purge docker-ce -y \
&& sudo rm -rf /etc/docker/ \
&& sudo rm -f /etc/systemd/system/multi-user.target.wants/docker.service \
&& sudo rm -rf /var/lib/docker \
&&  sudo systemctl daemon-reload

then install the desired version:

sudo apt-cache policy docker-ce

sudo apt-get install -y docker-ce=17.04.0~ce-0~ubuntu-xenial

 

    Configure Docker for experimental options

    Support for checkpoints leveraging CRIU is an experimental feature in Docker. In order to make use of it, the experimental options have to be enabled. This is done as follows (as described in https://stackoverflow.com/questions/44346322/how-to-run-docker-with-experimental-functions-on-ubuntu-16-04):

     

    sudo nano /etc/docker/daemon.json
    

    add

    {
    "experimental": true
    }
    

    Press CTRL+X, select Y and press Enter to save the new file.

    restart the docker service:

    sudo service docker restart
    

    Check with

    docker version
    

    if experimental is indeed enabled.
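
    The relevant line is easy to spot by filtering the output; when the experimental options are active, the server section reports Experimental: true:

    docker version | grep -i experimental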

     

    Install CRIU package

    The simple approach with CRIU – how it should work – is by simply installing the CRIU package:

    sudo apt-get install criu
    

    (see for example in https://yipee.io/2017/06/saving-and-restoring-container-state-with-criu/)

    This installation results for me in version 2.6 of the CRIU package. For some actions that proves sufficient, and for others it turns out to be not enough.

    image

     

    Try out simple scenario with Docker checkpoint on CRIU

    At this point we have Docker 17.04, Ubuntu 16.04 with CRIU 2.6. And that combination can give us a first feel for what the Docker Checkpoint mechanism entails.

    Run a simple container that writes a counter value to the console once every second (and then increases the counter)

    docker run --security-opt=seccomp:unconfined --name cr -d busybox /bin/sh -c 'i=0; while true; do echo $i; i=$(expr $i + 1); sleep 1; done'
    

    check on the values:

    docker logs cr
    

    create a checkpoint for the container:

    docker checkpoint create  --leave-running=true cr checkpoint0
    

    image

    leave the container running for a while and check the logs again

    docker logs cr
    


    now stop the container:

    docker stop cr
    

    and restart/recreate the container from the checkpoint:

    docker start --checkpoint checkpoint0 cr
    

    Check the logs:

    docker logs cr
    

    You will find that the log is resumed at the value (19) where the checkpoint was created:


     

    Build CRIU latest version

    When I tried a more complex scenario (see next section) I ran into this issue. I could work around that issue by building the latest version of CRIU on my Ubuntu Docker Host. Here are the steps I went through to accomplish that – following these instructions: https://criu.org/Installation.

    First, remove the currently installed CRIU package:

    sudo apt-get autoremove -y criu \
    && sudo apt-get purge criu -y
    

    Then, prepare the build environment:

    sudo apt-get install build-essential \
    && sudo apt-get install gcc   \
    && sudo apt-get install libprotobuf-dev libprotobuf-c0-dev protobuf-c-compiler protobuf-compiler python-protobuf \
    && sudo apt-get install pkg-config python-ipaddr iproute2 libcap-dev  libnl-3-dev libnet-dev --no-install-recommends
    

    Next, clone the GitHub repository for CRIU:

    git clone https://github.com/checkpoint-restore/criu
    

    Navigate into the criu directory that contains the code base

    cd criu
    

    and build the criu package:

    make
    

    When make is done, I can run CRIU:

    sudo ./criu/criu check
    

    to see if the installation is successful. The final message printed should be: Looks Good (despite perhaps one or more warnings).

    Use

    sudo ./criu/criu -V
    

    to learn about the version of CRIU that is currently installed.

    Note: the CRIU instructions describe the following steps to install criu system wide. This does not seem to be needed in order for Docker to leverage CRIU from the docker checkpoint commands.

    sudo apt-get install asciidoc  xmlto
    sudo make install
    criu check
    

    Now we are ready to take on the more complex scenario that failed before with an issue in the older CRIU version.

    A More complex scenario with Docker Checkpoint

    This scenario failed with the older CRIU version – probably because of this issue. I could work around that issue by building the latest version of CRIU on my Ubuntu Docker Host.

      In this case, I run a container based on a Docker Container image for running any Node application that is downloaded from a GitHub Repository. The Node application that the container will download and run handles simple HTTP GET requests: it counts requests and returns the value of the counter as the response to the request. This container image and this application were introduced in an earlier article: https://technology.amis.nl/2017/05/21/running-node-js-applications-from-github-in-generic-docker-container/

      Here you see the command to run the container – to be called reqctr2:

      docker run --name reqctr2 -e "GIT_URL=https://github.com/lucasjellema/microservices-choreography-kubernetes-workshop-june2017" -e "APP_PORT=8080" -p 8005:8080 -e "APP_HOME=part1"  -e "APP_STARTUP=requestCounter.js"   lucasjellema/node-app-runner
      

      image

      It takes about 15 seconds for the application to start up and handle requests.

      Once the container is running, requests can be sent from outside the VM – from a browser running on my laptop for example – to be handled by the container, at http://192.168.188.106:8005/.
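
      Any HTTP client works just as well; for example, a plain curl call to the same address returns the current value of the counter:

      curl http://192.168.188.106:8005/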

      After a number of requests, the counter is at 21:

      image

      At this point, I create a checkpoint for the container:

      docker checkpoint create  --leave-running=true reqctr2 checkpoint1
      

      image

      I now make a few additional requests in the browser, bringing the counter to a higher value:

      image

      At this point, I stop the container – and subsequently start it again from the checkpoint:

      docker stop reqctr2
      docker start --checkpoint checkpoint1 reqctr2
      

      image

      It takes less than a second for the container to continue running.

      When I make a new request, I do not get 1 as a value (as would be the result from a fresh container), nor is it 43 (the result I would get if the previous container were still running). Instead, I get

      image

      This is the next value, starting from the state of the container that was captured in the snapshot. Note: because I make the GET request from the browser and the browser also tries to retrieve the favicon, the counter is increased by two every single time I press refresh in the browser.

      Note: I can get a list of all checkpoints that have been created for a container. Clearly, I should put some more effort in a naming convention for those checkpoints:

      docker checkpoint ls reqctr2
      

      image

      The flow I went through in this scenario can be visualized like this:

      image

      The starting point: Windows laptop with Vagrant and Virtual Box. A VM has been created by Vagrant with Docker inside. The correct version of Docker and of the CRIU package have been set up.

      Then these steps are run through:

      1. Start Docker container based on an image with Node JS runtime
      2. Clone GitHub Repository containing a Node JS application
      3. Run the Node JS application – ready for HTTP Requests
      4. Handle HTTP Requests from a browser on the Windows Host machine
      5. Create a Docker Checkpoint for the container – a snapshot of the container state
      6. The checkpoint is saved on the Docker Host – ready for later use
      7. Start a container from the checkpoint. This container starts instantaneously, no GitHub clone and application startup are required; it resumes from the state at the time of creating the checkpoint
      8. The container handles HTTP requests – just like its checkpointed predecessor

       

      Resources

      Sources are in this GitHub repo: https://github.com/lucasjellema/docker-checkpoint-first-steps

      Article on CRIU: http://www.admin-magazine.com/Archive/2014/22/Save-and-Restore-Linux-Processes-with-CRIU

      Also: on CRIU and Docker: https://yipee.io/2017/06/saving-and-restoring-container-state-with-criu/.

      Docs on Checkpoint and Restore in Docker: https://github.com/docker/cli/blob/master/experimental/checkpoint-restore.md

       

      Home of CRIU: https://criu.org/Main_Page and page on Docker support: https://criu.org/Docker; install CRIU package on Ubuntu: https://criu.org/Packages#Ubuntu

      Install and Build CRIU Sources: https://criu.org/Installation

       

      Docs on Vagrant’s Docker provisioning: https://www.vagrantup.com/docs/provisioning/docker.html

      Article on downgrading Docker : https://forums.docker.com/t/how-to-downgrade-docker-to-a-specific-version/29523/4

      Configure Docker for experimental options: https://stackoverflow.com/questions/44346322/how-to-run-docker-with-experimental-functions-on-ubuntu-16-04

      Issue with Docker and Checkpoints (at least in 17.09-18.03): https://github.com/moby/moby/issues/35691

      The post First steps with Docker Checkpoint – to create and restore snapshots of running containers appeared first on AMIS Oracle and Java Blog.

      Regenerate Oracle VM Manager repository database

      Fri, 2018-04-06 02:01

      Some quick notes to regenerate a corrupted Oracle VM manager repository database.

      How did we discover the corruption?
      The MySQL repository database was increasing in size; the file “OVM_STATISTIC.ibd” was 62G. We also found the following error messages in the “AdminServer.log” logfile:

      ####<2018-02-13T07:52:17.339+0100> <Error> <com.oracle.ovm.mgr.task.ArchiveTask> <ovmm003.gemeente.local> <AdminServer> <Scheduled Tasks-12> <<anonymous>> <> <e000c2cc-e7fe-4225-949d-25d2cdf0b472-00000004> <1518504737339> <BEA-000000> <Archive task exception:
      com.oracle.odof.exception.ObjectNotFoundException: No such object (level 1), cluster is null: <9208>

      Regenerate steps
      – Stop the OVM services
      restore_db_1
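
      On the manager host this comes down to stopping the Oracle VM Manager service as root; the service name below is an assumption and can differ per OVM release:

      service ovmm stop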

      – Delete the Oracle VM Manager repository database
      restore_db_2

      /u01/app/oracle/ovm-manager-3/ovm_upgrade/bin/ovm_upgrade.sh --deletedb --dbuser=ovs --dbpass=<PASSWORD> --dbhost=localhost --dbport=49500 --dbsid=ovs

      restore_db_3

      – Generate replacement certificate
      restore_db_4

      – Start the OVM services and generate new certificates
      restore_db_5

      – Restart OVM services
      restore_db_6

      – Repopulate the database by discovering the Oracle VM Servers
      restore_db_7.png

      – Restore simple names
      Copy the restoreSimpleName script to /tmp, see Oracle Support note: 2129616.1
      restore_db_8

      Resources
      [OVM] Issues with huge OVM_STATISTIC.ibd used as OVM_STATISTIC Table. (Doc ID 2216441.1)
      Oracle VM: How To Regenerate The OVM 3.3.x/3.4.x DB (Doc ID 2038168.1)
      Restore OVM Manager “Simple Names” After a Rebuild/Reinstall (Doc ID 2129616.1)

      The post Regenerate Oracle VM Manager repository database appeared first on AMIS Oracle and Java Blog.

      First steps with REST services on ADF Business Components

      Sat, 2018-03-31 10:20

      Recently we had a challenge at a customer for which ADF REST resources on Business Components were the perfect solution.

      Our application is built in Oracle JET and of course we wanted nice REST services to communicate with. Because our data is stored in an Oracle database we needed an implementation to easily access the data from JET. We decided on using ADF and Business Components to achieve this. Of course there are alternative solutions available but because our application runs as a portal in Webcenter Portal, ADF was already in our technology stack. I would like to share some of my first experiences with this ADF feature. We will be using ADF 12.2.1.1.

      In this introduction we will create a simple application, the minimal required set of business components and a simple REST service. There are no prerequisites to start using the REST functionality in ADF. If you create a custom application you can choose to add the feature for REST Services, but it is not necessary. Start with making a simple EO and VO:

      image

      Before you can create any REST services, you need to define your first release version. The versions of REST resources are managed in the adf-config.xml. Go to this file, open the Release Versions tab and create version 1. The internal name is automatically configured based on your input:

      image

      Your application is now ready for your first service. Go to the Web Service tab of the Application Module and then the REST tab. Click the green plus icon to add a resource. Your latest version will automatically be selected. Choose an appropriate name and press OK.

      image

      ADF will create a config file for your resource (based on the chosen ViewObject), a resourceRegistry that will manage all resources in your application and a new RESTWebService project that you can use to start the services. The config file automatically opens and you can now further configure your resource.

      image

      In the wizard Create Business Components from Tables, there is a REST Resources step in which you can immediately define some resources on View Objects. Using this option always gives me an addPageDefinitionUsage error, even when creating the simplest service:

      image

      After ignoring this error, several things go wrong (what a surprise). The REST resource is created in a separate folder (not underneath the Application Module), it is not listed as a REST resource in the Application Module and finally it doesn’t work. All in all not ideal. I haven’t been able to figure out what happens but I would recommend avoiding this option (at least in this version of JDeveloper).

      There are two important choices to make before starting your service. You have to decide which REST actions will be allowed, and what attributes will be exposed.

      Setting the actions is simple. On the first screen of your config file are several checkboxes for the actions, Create, Delete and Update. By default they are all allowed on your service. Make sure to uncheck all actions that you don’t want to allow on your service. This provides for better security.

      image

      Limiting the exposed attributes can be done in two ways. You can hide attributes on the ViewObject for all REST services on that VO. This is a secure and convenient way if you know an attribute should never be open to a user.

      image

      Another way of configuring attributes for your REST services is creating REST shapes. This is a powerful feature that can be accessed from the ViewObject screen. You can make shapes independent of specific resources and apply them whenever you want. To create a shape, go to the ViewObject and to the tab Service Shaping. Here you can add a shape with the green plus-icon. Keep in mind that the name you choose for your shape will be a suffix to the name of your ViewObject. After creating the shape, you can use the shuttle to remove attributes.

      image

      The newly created shape will have its own configuration file in a different location but you can only change it in the ViewObject configuration.

      image

      After the shape is created, it can now be added to your REST service. To do this, use the Attributes tab in your Resource file, select the shape and you see the attribute shuttle is updated automatically.

      image

      You are now ready to start your service. Right-click on the RESTWebService project and run. If you have done everything right, JDeveloper will show you the url where your services are running. Now you can REST easily.
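
      A first smoke test can be done with any HTTP client once the RESTWebService project is running; the URL below is only a sketch in which host, port and context root are assumptions, the version segment is the name of the release version defined earlier, and MyResource stands for the resource name you chose:

      curl http://localhost:7101/MyApp-RESTWebService-context-root/rest/1/MyResource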

      The post First steps with REST services on ADF Business Components appeared first on AMIS Oracle and Java Blog.

      ORDS: Installation and Configuration

      Fri, 2018-03-30 09:57

      In my job as system administrator/DBA/integrator I was challenged to implement smoketesting using REST calls. Implementing REST in combination with WebLogic is pretty easy. But then we wanted to extend smoketesting to the database. For example we wanted to know if the database version and patch level were at the required level as was used throughout the complete DTAP environment. Another example is the existence of required database services. As it turns out Oracle has a feature called ORDS – Oracle REST Data Service – to accomplish this.

      You can install ORDS in two different scenarios: in standalone mode on the database server, or in combination with an application server such as WebLogic Server, Glassfish Server, or Tomcat.

      This article will give a short introduction to ORDS. It then shows you how to install ORDS feasible for a production environment using WebLogic Server 12c and an Oracle 12c database as we have done for our smoketesting application.

      We’ve chosen WebLogic Server to deploy the ORDS application because we already used WebLogic’s REST feature for smoketesting the application and WebLogic resources, and for high-availability reasons because we use an Oracle RAC database. Also, running in stand-alone mode would lead to additional security issues for port configurations.

      Terminology

      REST: Representational State Transfer. It provides interoperability on the Internet between computer systems.

      ORDS: Oracle REST Data Services. Oracle’s implementation of RESTful services against the database.

      RESTful service: an http web service that follows the REST architecture principles. Access to and/or manipulation of web resources is done using a uniform and predefined set of stateless operators.

      ORDS Overview

      ORDS makes it easy to develop a REST interface/service for relational data. This relational data can be stored in either an Oracle database, an Oracle 12c JSON Document Store, or an Oracle NoSQL database.

      A mid-tier Java application called ORDS maps HTTP(S) requests (GET, PUT, POST, DELETE, …) to database transactions and returns results in JSON format.

      ORDS Request Response Flow

      Installation Process

      The overall process of installing and configuring ORDS is very simple.

      1. Download the ORDS software
      2. Install the ORDS software
      3. Make some setup configurational changes
      4. Run the ORDS setup
      5. Make a mapping between the URL and the ORDS application
      6. Deploy the ORDS Java application

      Download the ORDS software

      Downloading the ORDS software can be done from the Oracle Technology Network. I used version ords.3.0.12.263.15.32.zip. I downloaded it from Oracle Technet:
      http://www.oracle.com/technetwork/developer-tools/rest-data-services/downloads/index.html

      Install the ORDS software

      The ORDS software is installed on the WebLogic server running the Administration console. Create an ORDS home directory and unzip the software.

      Here are the steps on Linux

      $ mkdir -p /u01/app/oracle/product/ords
      $ cp -p ords.3.0.12.263.15.32.zip /u01/app/oracle/product/ords
      $ cd /u01/app/oracle/product/ords
      $ unzip ords.3.0.12.263.15.32.zip

      Make some setup configurational changes

      ords_params.properties File

      Under the ORDS home directory a couple of subdirectories are created. One subdirectory is called params. This directory holds a file called ords_params.properties, which contains some default parameters that are used during the installation and is used for silent installation. In case any parameters aren’t specified in this file, ORDS interactively asks you for the values.

      In this article I go for a silent installation. Here are the default parameters and the ones I set for installing

      Parameter                  Default Value      Configured Value
      -------------------------  -----------------  ----------------------
      db.hostname                                   dbserver01.localdomain
      db.port                    1521               1521
      db.servicename                                ords_requests
      db.username                APEX_PUBLIC_USER   APEX_PUBLIC_USER
      migrate.apex.rest          false              false
      plsql.gateway.add          false              false
      rest.services.apex.add     false
      rest.services.ords.add     true               true
      schema.tablespace.default  SYSAUX             ORDS
      schema.tablespace.temp     TEMP               TEMP
      standalone.http.port       8080               8080
      user.public.password                          Ords4Ever!
      user.tablespace.default    USERS              ORDS
      user.tablespace.temp       TEMP               TEMP
      sys.user                                      SYS
      sys.password                                  Oracle123
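
      For reference, a params/ords_params.properties filled with the configured values from the table above would look roughly like this (a sketch, not a verbatim copy of the actual file):

      db.hostname=dbserver01.localdomain
      db.port=1521
      db.servicename=ords_requests
      db.username=APEX_PUBLIC_USER
      migrate.apex.rest=false
      plsql.gateway.add=false
      rest.services.apex.add=false
      rest.services.ords.add=true
      schema.tablespace.default=ORDS
      schema.tablespace.temp=TEMP
      standalone.http.port=8080
      user.public.password=Ords4Ever!
      user.tablespace.default=ORDS
      user.tablespace.temp=TEMP
      sys.user=SYS
      sys.password=Oracle123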

      NOTE

      As you see, I refer to a tablespace ORDS for the installation of the metadata objects. Don’t forget to create this tablespace before continuing.

      NOTE

      The parameters sys.user and sys.password are removed from the ords_params.properties file after running the setup (see later on in this article)

      NOTE

      The password for parameter user.public.password is obscured after running the setup (see later on in this article)

      NOTE

      As you can see there are many parameters that refer to APEX. APEX is a great tool for rapidly developing very sophisticated applications nowadays. Although you can run ORDS together with APEX, you don’t have to. ORDS runs perfectly without an APEX installation.

      Configuration Directory

      I create an extra directory to hold all configuration data, called config, directly under the ORDS home directory. All configuration data used during the setup is stored here.

      $ mkdir config
      $ java -jar ords.war configdir /u01/app/oracle/product/ords/config
      $ # Check what value of configdir has been set!
      $ java -jar ords.war configdir

      Run the ORDS setup

      After all configuration is done, you can run the setup, which installs the Oracle metadata objects necessary for running ORDS in the database. The setup creates 2 schemas called:

      • ORDS_METADATA
      • ORDS_PUBLIC_USER

      The setup is run in silent mode, which uses the parameter values previously set in the ords_params.properties file.

      $ mkdir -p /u01/app/oracle/logs/ORDS
      $ java -jar ords.war setup --database ords --logDir /u01/app/oracle/logs/ORDS --silent

      Make a mapping between the URL and the ORDS application

      After running the setup, ORDS required objects are created inside the database. Now it’s time to make a mapping from the request URL to the ORDS interface in the database.

      $ java -jar ords.war map-url --type base-path /ords ords

      Here a mapping is made between the request URL from the client to the ORDS interface in the database. The /ords part after the base URL is used to map to a database connection resource called ords.

      So the request URL will look something like this:

      http://webserver01.localdomain:7001/ords/

      Where http://webserver01.localdomain:7001 is the base path.

      Deploy the ORDS Java application

      Right now all changes and configurations are done. It’s time to deploy the ORDS Java application against the WebLogic Server. Here I use wlst to deploy the ORDS Java application, but you can do it via the Administration Console as well, whatever you like.

      $ wlst.sh
      $ connect('weblogic','welcome01','t3://webserver01.localdomain:7001')
      $ progress = deploy('ords','/u01/app/oracle/product/ords/ords.war','AdminServer')
      $ disconnect()
      $ exit()

      And your ORDS installation is ready for creating REST services!
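
      As a quick check that the deployment and the URL mapping respond, you can request the base path; the exact HTTP status depends on what you have defined so far, the point is that ORDS answers instead of the connection being refused:

      curl -i http://webserver01.localdomain:7001/ords/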

      NOTE

      After deployment of the ORDS Java application, its state should be Active and health OK. You might need to restart the Managed Server!

      Deinstallation of ORDS

      As the installation of ORDS is pretty simple, deinstallation is even simpler. The installation involves the creation of 2 schemas in the database and a deployment of ORDS on the application server. The deinstall process is the reverse.

      1. Undeploy ORDS from WebLogic Server
      2. Deinstall the database schemas using

        $ java -jar ords.war uninstall

        In effect this removes the 2 schemas from the database

      3. Optionally remove the ORDS installation directories
      4. Optionally remove the ORDS tablespace from the database

      Summary

      The installation of ORDS is pretty simple. You don’t need to get any extra licenses to use ORDS. ORDS can be installed without installing APEX. You can run ORDS stand-alone, or use a J2EE webserver like WebLogic Server, Glassfish Server, or Apache Tomcat, although you will need additional licenses for the use of these webservers.

      Hope this helps!

      The post ORDS: Installation and Configuration appeared first on AMIS Oracle and Java Blog.

      Upgrade of Oracle Restart/SIHA from 11.2 to 12.2 fails with CRS-2415

      Thu, 2018-03-29 10:26

      We are in the process of upgrading our Oracle Clusters and SIHA/Restart systems to Oracle 12.2.0.1

      The upgrade of the Grid-Infra home on an Oracle SIHA/Restart system from 11.2.0.4 to 12.2.0.1 fails when
      running rootupgrade.sh with the error message:

      CRS-2415: Resource 'ora.asm' cannot be registered because its owner 'root' is not the same as the Oracle Restart user 'oracle'

      We start the upgrade to 12.2.0.1 (with Jan2018 RU patch) as:
      $ ./gridSetup.sh -applyPSU /app/software/27100009

      The installation and relink of the software look correct.
      However, when running rootupgrade.sh as the root user, as part of the post-installation,
      the script ends with:

      2018-03-28 11:20:27: Executing cmd: /app/gi/12201_grid/bin/crsctl query has softwareversion
      2018-03-28 11:20:27: Command output:
      > Oracle High Availability Services version on the local node is [12.2.0.1.0]
      >End Command output
      2018-03-28 11:20:27: Version String passed is: [Oracle High Availability Services version on the local node is [12.2.0.1.0]]
      2018-03-28 11:20:27: Version Info returned is : [12.2.0.1.0]
      2018-03-28 11:20:27: Got CRS softwareversion for su025p074: 12.2.0.1.0
      2018-03-28 11:20:27: The software version on su025p074 is 12.2.0.1.0
      2018-03-28 11:20:27: leftVersion=11.2.0.4.0; rightVersion=12.2.0.0.0
      2018-03-28 11:20:27: [11.2.0.4.0] is lower than [12.2.0.0.0]
      2018-03-28 11:20:27: Disable the SRVM_NATIVE_TRACE for srvctl command on pre-12.2.
      2018-03-28 11:20:27: Invoking “/app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first”
      2018-03-28 11:20:27: trace file=/app/oracle/crsdata/su025p074/crsconfig/srvmcfg1.log
      2018-03-28 11:20:27: Executing cmd: /app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first
      2018-03-28 11:21:02: Command output:
      > PRCA-1003 : Failed to create ASM asm resource ora.asm
      > PRCR-1071 : Failed to register or update resource ora.asm
      > CRS-2415: Resource ‘ora.asm’ cannot be registered because its owner ‘root’ is not the same as the Oracle Restart user ‘oracle’.
      >End Command output
      2018-03-28 11:21:02: “upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first” failed with status 1.
      2018-03-28 11:21:02: Executing cmd: /app/gi/12201_grid/bin/clsecho -p has -f clsrsc -m 180 “/app/gi/12201_grid/bin/srvctl upgrade model -s 11.2.0.4.0 -d 12.2.0.1.0 -p first”
      2018-03-28 11:21:02: Command

      The rootupgrade.sh script is run as the root user as prescribed, but root cannot add the ASM resource.
      This leaves the installation unfinished.

      There is no description in the Oracle Knowledge Base; however, according to Oracle Support this problem is
      caused by the unpublished Bug 25183818 : SIHA 11204 UPGRADE TO MAIN IS FAILING.

      As of March 2018, no workaround or software patch is yet available.

      The post Upgrade of Oracle Restart/SIHA from 11.2 to 12.2 fails with CRS-2415 appeared first on AMIS Oracle and Java Blog.

      Dbvisit Standby upgrade

      Wed, 2018-03-28 10:00
      Upgrading to Dbvisit Standby 8.0.x

      Dbvisit provides upgrade documentation which is detailed and in principle correct, but it only describes the upgrade process from the viewpoint of an installation on a single host.
      I upgraded Dbvisit Standby at a customer’s site with Dbvisit Standby in a running configuration with several hosts and several primary and standby databases. I found, by trial and error and with the help of Dbvisit support, some additional steps and points of advice that I think may be of help to others.
      This document describes the upgrade process for a working environment and provides information and advice in addition to the upgrade documentation. Those additions will be clearly marked in red throughout the blog. Also the steps of the upgrade process have been rearranged in a more logical order.
      It is assumed that the reader is familiar with basic Dbvisit concepts and processes.

      Configuration

      The customer’s configuration that was upgraded is as follows:

      • Dbvisit 8.0.14
      • 4 Linux OEL 6 hosts running Dbvisit Standby
      • 6 databases in Dbvisit Standby configuration distributed among the hosts
      • 1 Linux OEL 7 host running Dbvisit Console
      • DBVISIT_BASE: /usr/dbvisit
      • Dbvctl running in Daemon mode
      Dbvisit upgrade overview

      The basic steps that are outlined in the Dbvisit upgrade documentation are as follows:

      1. Stop your Dbvisit Schedules if you have any running.
      2. Stop or wait for any Dbvisit processes that might still be executing.
      3. Backup the Dbvisit Base location where your software is installed.
      4. Download the latest version from www.dbvisit.com.
      5. Extract the install files into a temporary folder, example /home/oracle/8.0.
      6. Start the Installer and select to install the required components.
      7. Once the update is complete, you can remove the temporary install folder where the installer was extracted.
      8. It is recommended to run a manual send/apply of logs once an upgrade is complete.
      9. Re-enable any schedules.

      During the actual upgrade we deviated significantly from this: steps were rearranged, added and changed slightly.

      1. Download the latest available version of Dbvisit and make it available on all servers.
      2. Make a note of the primary host for each Dbvisit standby configuration.
      3. Stop dbvisit processes.
      4. Backup the Dbvisit Base location where your software is installed.
      5. Upgrade the software.
      6. Start dbvagent and dbvnet.
      7. Upgrade the DDC configuration files.
      8. Restart dbvserver.
      9. Update DDCs in the Dbvisit Console.
      10. Run a manual send/apply of logs.
      11. Restart Dbvisit standby processes.

      In the following sections these steps will be explained in more detail.

      Dbvisit Standby upgrade

      Here follow the detailed steps that in my view should be taken for a Dbvisit upgrade, based on the experience gained during the actual upgrade.

      1. Download the latest available version of Dbvisit and make it available on all servers.
        In our case I put it in /home/oracle/upgrade on all hosts. The versions used were 8.0.18 for Oracle Enterprise Linux 6 and 7:

        dbvisit-standby8.0.18-el6.zip
        dbvisit-standby8.0.18-el7.zip
        
      2. Make a note of the primary hosts for each Dbvisit standby configuration.
        You will need this information later in step 7. It is possible to get the information from the DDC .env files, but in our case it is easier to get it from the Dbvisit console.
        If you need to get them from the DDC .env files look for the SOURCE parameter. Say we have a database db1:

        [root@dbvhost04 conf]# cd /usr/dbvisit/standby/conf/
        [root@dbvhost04 conf]# grep "^SOURCE" dbv_db1.env
        SOURCE = dbvhost04
        
      3. Stop dbvisit processes.
        The Dbvisit upgrade manual assumes you schedule dbvctl from cron. In our situation the dbvctl processes were running in Daemon mode, so it was easiest to stop them from the Dbvisit console. Go to Main Menu -> Database Actions -> Daemon Actions -> select both hosts in turn and choose stop.
        Dbvagent, dbvnet and, on the Dbvisit console host, dbvserver can be stopped as follows:

        cd /usr/dbvisit/dbvagent
        ./dbvagent -d stop
        cd /usr/dbvisit/dbvnet
        ./dbvnet -d stop
        cd /usr/dbvisit/dbvserver
         ./dbvserver -d stop
        

        Do this on all hosts. Dbvisit support advises that all hosts in a configuration be upgraded at the same time. There is no rolling upgrade or anything similar.
        Before proceeding, check that all processes are down.

      4. Backup the Dbvisit Base location where your software is installed.
        The Dbvisit upgrade manual marks this step as optional – but recommended. In my view it is not optional.
        You can simply tar everything under DBVISIT_BASE for later use.
      5. Upgrade the software.
        Extract the downloaded software and run the included installer. It will show you which version you already have and which version is available in the downloaded software. Choose the correct install option to upgrade. Below you can see the upgrade of one of the OEL 6 database hosts running Dbvisit Standby:

        cd /home/oracle/upgrade
        <unzip and untar the correct version from /home/oracle/upgrade>
        cd dbvisit/installer/
        ./install-dbvisit
        
        -----------------------------------------------------------
            Welcome to the Dbvisit software installer.
        -----------------------------------------------------------
        
            It is recommended to make a backup of our current Dbvisit software
            location (Dbvisit Base location) for rollback purposes.
            
            Installer Directory /home/oracle/upgrade/dbvisit
        
        >>> Please specify the Dbvisit installation directory (Dbvisit Base).
         
            The various Dbvisit products and components - such as Dbvisit Standby, 
            Dbvisit Dbvnet will be installed in the appropriate subdirectories of 
            this path.
        
            Enter a custom value or press ENTER to accept default [/usr/dbvisit]: 
             >     DBVISIT_BASE = /usr/dbvisit 
        
            -----------------------------------------------------------
            Component      Installer Version   Installed Version
            -----------------------------------------------------------
            standby        8.0.18_0_gc6a0b0a8  8.0.14.19191                                      
            dbvnet         8.0.18_0_gc6a0b0a8  2.0.14.19191                                      
            dbvagent       8.0.18_0_gc6a0b0a8  2.0.14.19191                                      
            dbvserver      8.0.18_0_gc6a0b0a8  not installed                                     
        
            -----------------------------------------------------------
         
            What action would you like to perform?
               1 - Install component(s)
               2 - Uninstall component(s)
               3 - Exit
            
            Your choice: 1
        
            Choose component(s):
               1 - Core Components (Dbvisit Standby Cli, Dbvnet, Dbvagent)
               2 - Dbvisit Standby Core (Command Line Interface)
               3 - Dbvnet (Dbvisit Network Communication) 
               4 - Dbvagent (Dbvisit Agent)
               5 - Dbvserver (Dbvisit Central Console) - Not available on Solaris/AIX
               6 - Exit Installer
            
            Your choice: 1
        
        -----------------------------------------------------------
            Summary of the Dbvisit STANDBY configuration
        -----------------------------------------------------------
            DBVISIT_BASE /usr/dbvisit 
        
            Press ENTER to continue 
        -----------------------------------------------------------
            About to install Dbvisit STANDBY
        -----------------------------------------------------------
        
            Component standby installed. 
        
            Press ENTER to continue 
        -----------------------------------------------------------
            About to install Dbvisit DBVNET
        -----------------------------------------------------------
        
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/cert.pem to /usr/dbvisit/dbvnet/conf/cert.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/ca.pem to /usr/dbvisit/dbvnet/conf/ca.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/conf/prikey.pem to /usr/dbvisit/dbvnet/conf/prikey.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvnet/dbvnet to /usr/dbvisit/dbvnet/dbvnet
        
        Copied file /usr/dbvisit/dbvnet/conf/dbvnetd.conf to /usr/dbvisit/dbvnet/conf/dbvnetd.conf.201802201235
        
            DBVNET config file updated 
        
        
            Press ENTER to continue 
        -----------------------------------------------------------
            About to install Dbvisit DBVAGENT
        -----------------------------------------------------------
        
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/cert.pem to /usr/dbvisit/dbvagent/conf/cert.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/ca.pem to /usr/dbvisit/dbvagent/conf/ca.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/conf/prikey.pem to /usr/dbvisit/dbvagent/conf/prikey.pem
        
        Copied file /home/oracle/upgrade/dbvisit/dbvagent/dbvagent to /usr/dbvisit/dbvagent/dbvagent
        
        Copied file /usr/dbvisit/dbvagent/conf/dbvagent.conf to /usr/dbvisit/dbvagent/conf/dbvagent.conf.201802201235
        
            DBVAGENT config file updated 
        
        
            Press ENTER to continue 
        
            -----------------------------------------------------------
            Component      Installer Version   Installed Version
            -----------------------------------------------------------
            standby        8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
            dbvnet         8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
            dbvagent       8.0.18_0_gc6a0b0a8  8.0.18_0_gc6a0b0a8                                
            dbvserver      8.0.18_0_gc6a0b0a8  not installed                                     
        
            -----------------------------------------------------------
         
            What action would you like to perform?
               1 - Install component(s)
               2 - Uninstall component(s)
               3 - Exit
            
            Your choice: 3
        
      6. Start dbvagent and dbvnet.
For the next step, dbvagent and dbvnet need to be running. In our case we had an init script which started both:

        cd /etc/init.d
        ./dbvisit start
        

        Otherwise do something like:

        cd /usr/dbvisit/dbvnet
        ./dbvnet -d start 
        cd /usr/dbvisit/dbvagent
        ./dbvagent -d start
        

        The upgrade documentation at this point refers to section 5 of the Dbvisit Standby Networking chapter of the Dbvisit 8.0 user guide: Testing Dbvnet Communication. It describes a number of tests to verify that dbvnet is working. As the upgrade documentation rightly points out, it is important to run these tests before proceeding.
        Do on all database hosts:

        [oracle@dbvhost04 init.d]$ cd /usr/dbvisit/dbvnet/
        [oracle@dbvhost04 dbvnet]$ ./dbvnet -e "uname -n"
        dbvhost01
        [oracle@dbvhost04 dbvnet]$ ./dbvnet -f /tmp/dbclone_extract.out.err -o /tmp/testfile
        [oracle@dbvhost04 dbvnet]$ cd /usr/dbvisit/standby
        [oracle@dbvhost04 standby]$ ./dbvctl -f system_readiness
        
        Please supply the following information to complete the test.
        Default values are in [].
        
        Enter Dbvisit Standby location on local server: [/usr/dbvisit]:
        Your input: /usr/dbvisit
        
        Is this correct? <Yes/No> [Yes]:
        
        Enter the name of the remote server: []: dbvhost01
        Your input: dbvhost01
        
        Is this correct? <Yes/No> [Yes]:
        
        Enter Dbvisit Standby location on remote server: [/usr/dbvisit]:
        Your input: /usr/dbvisit
        
        Is this correct? <Yes/No> [Yes]:
        
        Enter the name of a file to transfer relative to local install directory
        /usr/dbvisit: [standby/doc/README.txt]:
        Your input: standby/doc/README.txt
        
        Is this correct? <Yes/No> [Yes]:
        
        Choose copy method:
        1)   /usr/dbvisit/dbvnet/dbvnet
        2)   /usr/bin/scp
        Please enter choice [1] :
        
        Is this correct? <Yes/No> [Yes]:
        
        Enter port for method /usr/dbvisit/dbvnet/dbvnet: [7890]:
        Your input: 7890
        
        Is this correct? <Yes/No> [Yes]:
        -------------------------------------------------------------
        Testing the network connection between local server and remote server dbvhost01.
        -------------------------------------------------------------
        Settings
        ========
        Remote server                                          =dbvhost01
        Dbvisit Standby location on local server               =/usr/dbvisit
        Dbvisit Standby location on remote server              =/usr/dbvisit
        Test file to copy                                      =/usr/dbvisit/standby/doc/README.txt
        Transfer method                                        =/usr/dbvisit/dbvnet/dbvnet
        port                                                   =7890
        -------------------------------------------------------------
        Checking network connection by copying file to remote server dbvhost01...
        -------------------------------------------------------------
        Trace file /usr/dbvisit/standby/trace/58867_dbvctl_f_system_readiness_201803201304.trc
        
        File copied successfully. Network connection between local and dbvhost01
        correctly configured.
        
      7. Upgrade the DDC configuration files.
        Now that the software has been upgraded, the Dbvisit Standby Configuration (DDC) files, which are located in DBVISIT_BASE/standby/conf on the database hosts, need to be upgraded as well.
        Do this once for each standby configuration and only on the primary host. If you do it on the secondary host you will get an error and all DDC configuration files will be deleted!
        So if we have a database db1 in a Dbvisit standby configuration with database host dbvhost1 running the primary database (source in Dbvisit terminology) and database host dbvhost2 running the standby database (destination in Dbvisit terminology), we do the following on dbvhost1 only:

        cd /usr/dbvisit/standby
        ./dbvctl -d db1 -o upgrade
        
      8. Restart dbvserver.
        In our configuration the next step is to restart dbvserver to re-enable the Dbvisit Console.

        cd /usr/dbvisit/dbvserver
        ./dbvserver -d start
        
      9. Update DDCs in the Dbvisit Console.
        After the upgrade the configurations need to be updated in the Dbvisit Console. Go to Manage Configurations; the status field will show an error and the edit configuration button is replaced with an update button.
        Update the DDC for each configuration on that screen.
      10. Run a manual send/apply of logs.
        In our case this was most easily done from the Dbvisit Console again: Main Menu -> Database Actions -> the Send Logs button, followed by the Apply Logs button.
        Do this for each configuration and check for errors before continuing.
      11. Restart Dbvisit standby processes.
        In our case we restarted the dbvctl processes in daemon mode from the Dbvisit Console. Go to Main Menu -> Database Actions -> Daemon Actions -> select both hosts in turn and choose start.
      References

      Linux – Upgrade from Dbvisit Standby version 8.0.x
      Dbvisit Standby Networking – Dbvnet – 5. Testing Dbvnet Communication

      The post Dbvisit Standby upgrade appeared first on AMIS Oracle and Java Blog.

      Getting started with git behind a company proxy

      Sun, 2018-03-25 11:50

      For a few months now I’ve been involved in working with git to save our Infrastructure as Code in GitHub. I don’t want to have to type in my password every time, and I do not like passwords saved in clear text, so I prefer ssh over https. But when working behind a proxy that doesn’t allow traffic over port 22 (ssh), I had to spend some time to get it working. Without a proxy there is nothing to it.

      First some background information. We connect to a “stepping stone” server that has some version of Windows as the O.S. and then use Putty to connect to our Linux host where we work on our code.

       

      Network background

      Our connection to Internet is via the proxy, but the proxy doesn’t allow traffic over port 22 (ssh/git). It does however allow traffic over port 80 (http) or 443 (https).

      So the goal here is to:
      1. use a public/private key pair to authenticate myself at GitHub.com
      2. route traffic to GitHub.com via the proxy
      3. reroute port 22 to port 443
      Generate a public/private key pair.

      This can be done on the Linux prompt but then you either need to type your passphrase every time you use git (or have it cached in Linux), or use a key pair without a passphrase. I wanted to take this one step further and use Putty Authentication Agent (Pageant.exe) to cache my private key and forward authentication requests over Putty to Pageant.

      With Putty Key Generator (puttygen.exe) you generate a public/private key pair. Just start the program and press the generate button.

      2018-03-25 16_35_08-keygen

      You then need to generate some entropy by moving the mouse around:

      2018-03-25 16_39_08-PuTTY Key Generator

      And in the end you get something like this:

      2018-03-25 16_41_25-PuTTY Key Generator

      Ad 1) you should use a descriptive name like “github <accountname>”

      Ad 2) you should use a sentence to protect your private key. Mind you: if you do not use a caching mechanism, you need to type it in frequently.

      Ad 3) you should save your private key somewhere you consider safe. (It should not be accessible for other people)

      Ad 4) you copy this whole text field (starting with ssh-rsa in this case up to and including the Key comment “rsa-key-20180325” which is repeated in that text field)

      Once you have copied the public key you need to add it to your account at github.com.

      Adding the public key in github.com

      Log in to github.com and click on your icon:

      2018-03-25 17_03_03-github

      Choose “Settings” and go to “SSH and GPG keys”:

      2018-03-25 17_03_14-Your Profile

      There you press the “Add SSH key” button and you get to the next screen:

      2018-03-25 17_08_16-Add new SSH keys

      Give the Title a descriptive name so you can recognize/remember what you generated this key for, and paste the copied public key into the Key field. Then press Add SSH key, which results in something like this:

      2018-03-25 17_11_43-SSH and GPG keys

      In your case the picture of the key will not be green but black as you haven’t used it yet. In case you no longer want this public/private key pair to have access to your github account you can Delete it here as well.

      So now you can authenticate yourself with a private key that is verified against the public key you uploaded to GitHub.

      You can test that on a machine that has direct access to Internet and is able to use port 22 (For example a VirtualBox VM on your own laptop at home).

      Route git traffic to github.com via the Proxy and change the port.

      On the Linux server behind the company firewall, when logged on with your own account, you need to go to the “.ssh” directory. If it isn’t there yet, you haven’t used ssh on that machine yet (ssh <you>@<linuxserver> is enough; just cancel the login). So change directory to .ssh in your home dir. Create a file called “config” with the following contents:

      # github.com
      Host github.com
          Hostname ssh.github.com
          ProxyCommand nc -X connect -x 192.168.x.y:8080 %h %p
          Port 443
          ServerAliveInterval 20
          User git
      
      #And if you use gitlab as well the entry should be like:
      # gitlab.com
      Host gitlab.com
          Hostname altssh.gitlab.com
          Port    443
          ProxyCommand    /usr/bin/nc -X connect -x 192.168.x.y:8080 %h %p
          ServerAliveInterval 20
          User  git
      

      This is the part where you define that ssh calls to the server github.com should be rerouted to the proxy server 192.168.x.y on port 8080 (change that to your proxy details), and that the target server should not be github.com but ssh.github.com. That is the server where GitHub allows you to use the git or ssh protocol over https (port 443). I’ve added the example for gitlab as well. There the hostname should be changed to altssh.gitlab.com, as is done in the config above.

      “nc” or “/usr/bin/nc” is the Netcat utility, which sets up the connection through the proxy for us. On our RedHat Linux 6 server it is installed by default.

      The ServerAliveInterval 20 makes sure that the connection is kept alive by sending a packet every 20 seconds to prevent a “broken pipe”. And the User git makes sure you will not connect as your local Linux user to github.com but as user git.

      But two things still need to be done:

      1. Add your private key to Putty Authentication Agent
      2. Allow the Putty session to your Linux host to use Putty Authentication Agent
      Add your private key to Putty Authentication Agent

      On your “Stepping Stone Server” start the Putty Authentication Agent (Pageant.exe) and right-click on its icon (usually somewhere at the bottom right of your screen).

      2018-03-25 17_49_49-

      Select View Keys to see the keys already loaded or press Add Key to add your newly created private key. You get asked to type your passphrase. Via View Keys you can check if the key was loaded:

      2018-03-25 17_56_06-Pageant Key List

      The obfuscated part shows the key fingerprint and the text to the right of that is the Key Comment you used. If the comment is longer, not all of the text is visible. Thus make sure the Key Comment is distinguishable in its first part.

      If you want to use the same key for authentication on the Linux host, then put the Public key part in a file called “authorized_keys”. This file should be located in the “.ssh” directory and have rw permissions for your local user only (chmod 0600 authorized_keys) and nobody else. If you need or want a different key pair for that make sure you load the corresponding private key as well.

      Allow the Putty session to your Linux host to use Putty Authentication Agent

      The Putty session that you use to connect to the Linux host needs to have the following checked:

      2018-03-25 18_08_03-PuTTY Configuration

      Thus for the session go to “Connection” -> “SSH” -> “Auth” and check “Allow agent forwarding” to allow your terminal session on the Linux host to forward the authentication request with GitHub (or gitlab) to be handled by your Pageant process on the Stepping Stone server. For that last part you need to have checked the box “Attempt authentication using Pageant”.

      Now you are all set to clone a GitHub repository on your Linux host and use key authentication.

      Clone a git repository using the git/ssh protocol

      Browse to GitHub.com, select the repository you have access to with your GitHub account (if it is a private repo), press the “Clone or download” button and make sure you select “Clone with SSH”. See the picture below.

      2018-03-25 18_18_41-git

      Press the clipboard icon to copy the line starting with “git@github.com” and ending with “.git”.

      That should work now (like it did for me).

      HTH Patrick

      P.S. If you need to authenticate your connection with the proxy service you probably need to have a look at the manual pages of “nc”. Or google it. I didn’t have to authenticate with the proxy service so I didn’t dive into that.

      The post Getting started with git behind a company proxy appeared first on AMIS Oracle and Java Blog.

      How to fix Dataguard FAL server issues

      Wed, 2018-03-21 06:23

      One of my clients had an issue with their Dataguard setup, after having to move tables and rebuild indexes the transport to their standby databases failed. The standby databases complained about not being able to fetch archivelogs from the primary database. In this short blog I will explain what happened and how I diagnosed the issue and fixed it.

       

      The situation

      Below you can see a diagram of the setup: a primary site with both a primary database and a standby database. At the remote site there are two standby databases, both of which get their redo stream from the primary database.

      DG_situation

      This setup was working well for the company, but having two redo streams going to the remote site over limited bandwidth can give issues when doing massive data manipulation. When the need arose for massive table movements and rebuilding of indexes, the generated redo was too much for the WAN link and also for the local standby database. After several days of trying to fix the standby databases, my help was requested because they were not able to resolve the gaps in the redo stream.

       

      The issues

      While analyzing the issues I found that the standby databases failed to fetch archived logs from the primary database. Usually you can fix this by using RMAN to supply the primary database with the archived logs needed for the standby, because in most cases the issue is that the archived logs have been deleted on the primary database. The client’s own DBA had already supplied the required archived logs, so the message was somewhat misleading: the archived logs were there, but the primary did not seem to be able to supply them.

      When checking the alert log of the primary database there was no obvious sign that anything was going on or going wrong. While searching for more information I discovered that the default setting for the parameter log_archive_max_processes is 4. This setting controls the number of processes available for archiving, redo transport and FAL servers. Now take a quick look at the drawing above and start counting with me: at least one for local archiving, and three for the redo transport to the three standby databases. So when one of the standby databases wants to fetch archived logs to fill in a gap, it may not be able to request this from the primary database. So time to fix it:

       

      ALTER SYSTEM SET log_archive_max_processes=30 scope=both;
      

      Now the fetching started working better, but I discovered some strange behaviour: the standby database closest to the primary database was still not able to fetch archived logs from the primary. The two remote standby databases were actually fetching some archived logs, so that’s an improvement… but still, the alert log of the primary database was quite silent. Fortunately Oracle provides us with another server parameter: log_archive_trace. This setting enables extra logging for certain subprocesses. Add the values from the linked documentation to get the desired logging: in this case 2048 and 128 for FAL server logging and redo transport logging.

      ALTER SYSTEM SET log_archive_trace=2176 scope=both;
      

      With this setting I was able to see that all 26 other archiver processes were busy supplying one of the standby databases with archived logs. It seems that the database that is furthest behind gets the first go at the primary database… Anyway, my first instinct was to fix the local standby database first so it would be available for failover. By stopping the remote standby databases, the local standby database was able to fetch archived logs from the primary database again. The next step was to start the other standby databases; to speed things up I started the first one and only after it had resolved its archive log gap did I start the second one.

       

      In conclusion, it’s important that you tune your settings for your environment: set log_archive_max_processes as appropriate and set your log level so you see what’s going on.

      Please mind that both of these settings are also managed by the Dataguard Broker. To prevent warnings from the Dataguard Broker, make sure you set these parameters via dgmgrl:

      edit database <<primary>> set property LogArchiveTrace=2176;
      edit database <<primary>> set property LogArchiveMaxProcesses=30;
      

      The post How to fix Dataguard FAL server issues appeared first on AMIS Oracle and Java Blog.

      Handle a GitHub Push Event from a Web Hook Trigger in a Node application

      Tue, 2018-03-20 11:57

      My requirement in this case: a push of one or more commits to a GitHub repository needs to trigger a Node application that inspects the commits and, when specific conditions are met, downloads their contents.

      image

      I have implemented this functionality using a Node application – primarily because it offers me an easy way to create a REST end point that I can configure as a WebHook in GitHub.

      Implementing the Node application

      The requirements for a REST endpoint that can be configured as a webhook endpoint are quite simple: handle a POST request – no specific response is required. I can do that!

      In my implementation, I inspect the push event, extract some details about the commits it contains and write the summary to the console. The code is quite straightforward and self explanatory; it can easily be extended to support additional functionality:

      app.post('/github/push', function (req, res) {
        var githubEvent = req.body
        // - githubEvent.head_commit is the last (and frequently the only) commit
        // - githubEvent.pusher is the user who pushed (pusher.name and pusher.email)
        // - timestamp of final commit: githubEvent.head_commit.timestamp
        // - branch:  githubEvent.ref (refs/heads/master)
      
        var commits = {}
        if (githubEvent.commits)
          commits = githubEvent.commits.reduce(
            function (agg, commit) {
              agg.messages = agg.messages + commit.message + ";"
              agg.filesTouched = agg.filesTouched.concat(commit.added).concat(commit.modified).concat(commit.removed)
                .filter(file => file.indexOf("src/js/jet-composites/input-country") > -1)
              return agg
            }
            , { "messages": "", "filesTouched": [] })
      
        var push = {
          "finalCommitIdentifier": githubEvent.after,
          "pusher": githubEvent.pusher,
          "timestamp": githubEvent.head_commit.timestamp,
          "branch": githubEvent.ref,
          "finalComment": githubEvent.head_commit.message,
          "commits": commits
        }
        console.log("WebHook Push Event: " + JSON.stringify(push))
        if (push.commits.filesTouched.length > 0) {
          console.log("This commit involves changes to the input-country component, so let's update the composite component for it ")
          var compositeName = "input-country"
          compositeloader.updateComposite(compositeName)
        }
      
        var response = push
        res.json(response)
      })
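
      For completeness, here is a minimal sketch of the Express wiring such a handler assumes; the port, the module layout and the stand-in for the composite loader are hypothetical placeholders, not part of the original application.

      const express = require('express')
      const bodyParser = require('body-parser')

      // stand-in for the author's composite loader used in the handler above (hypothetical)
      const compositeloader = { updateComposite: name => console.log('updating composite ' + name) }

      const app = express()
      app.use(bodyParser.json())   // needed so req.body contains the parsed push event payload

      // ... the app.post('/github/push', ...) handler from the listing above goes here ...

      app.listen(3000, () => console.log('listening for GitHub webhook events on port 3000'))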
      
      

      Configuring the WebHook in GitHub

      A web hook can be configured in GitHub for any of your repositories. You indicate the endpoint URL, the type of event that should trigger the web hook and optionally a secret. See my configuration:

      image
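
      If you also fill in the Secret field, GitHub signs each delivery with an X-Hub-Signature header containing an HMAC-SHA1 of the raw request body. A hedged sketch of verifying that signature in the Node application could look like this; the environment variable name is my own choice, and the raw body is captured by passing a verify callback to bodyParser.json:

      const crypto = require('crypto')

      // capture the raw body bytes while parsing JSON, so the signature can be recomputed over the exact payload
      app.use(bodyParser.json({ verify: (req, res, buf) => { req.rawBody = buf } }))

      function isValidSignature(req, secret) {
        const received = req.headers['x-hub-signature'] || ''            // e.g. 'sha1=ab12...'
        if (!received || !req.rawBody) return false
        const expected = 'sha1=' + crypto.createHmac('sha1', secret)
                                         .update(req.rawBody)
                                         .digest('hex')
        return received.length === expected.length &&
               crypto.timingSafeEqual(Buffer.from(received), Buffer.from(expected))
      }

      // at the top of the /github/push handler:
      // if (req.headers['x-github-event'] !== 'push') return res.status(200).end()   // ignore other event types
      // if (!isValidSignature(req, process.env.GITHUB_WEBHOOK_SECRET)) return res.status(401).end()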

      Trying out the WebHook and receiving Node application

      In this particular case, the Node application is running locally on my laptop. I have used ngrok to expose the local application on a public internet address:

      image

      (note: this is the address you saw in the web hook configuration)

      I have committed and pushed a small change in a file in the repository on which the webhook is configured:

      image

      The ngrok agent has received the WebHook request:

      image

      The Node application has received the push event and has done its processing:

      image

      The post Handle a GitHub Push Event from a Web Hook Trigger in a Node application appeared first on AMIS Oracle and Java Blog.

      Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller

      Mon, 2018-03-19 00:58

      The requirement is simple: a Node JS application that receives HTTP requests and forwards (some of) them to other hosts, and subsequently returns the responses it receives to the original caller.

      image

      This can be used in many situations – to ensure all resources loaded in a web application come from the same host (one way to handle CORS), to have content in IFRAMEs loaded from the same host as the surrounding application or to allow connection between systems that cannot directly reach each other. Of course, the proxy component does not have to be the dumb and mute intermediary – it can add headers, handle faults, perform validation and keep track of the traffic. Before you know it, it becomes an API Gateway…

      In this article I give a very simple example of a proxy that I want to use for the following purpose: I create a Rich Web Client application (Angular, React, Oracle JET) – and some of the components used are owned and maintained by an external party. Instead of adding the sources to the server that serves the static sources of the web application, I use the proxy to retrieve these specific sources from their real origin (either a live application, a web server or even a Git repository). This allows me to have the latest sources of these components at any time, without redeploying my own application.

      The proxy component is of course very simple and straightforward. And I am sure it can be much improved upon. For my current purposes, it is good enough.

      The Node application consists of the file www that is initialized with npm start through package.json. This file does some generic initialization of Express (such as defining the port on which to listen). Then it defers to app.js for all request handling. In app.js, a static file server is configured to serve files from the local /public subdirectory (using express.static).

      www:

      var app = require('../app');
      var debug = require('debug')(' :server');
      var http = require('http');

      var port = normalizePort(process.env.PORT || '3000');
      app.set('port', port);
      var server = http.createServer(app);
      server.listen(port);
      server.on('error', onError);
      server.on('listening', onListening);

      function normalizePort(val) {
        var port = parseInt(val, 10);

        if (isNaN(port)) {
          // named pipe
          return val;
        }

        if (port >= 0) {
          // port number
          return port;
        }

        return false;
      }

      function onError(error) {
        if (error.syscall !== 'listen') {
          throw error;
        }

        var bind = typeof port === 'string'
          ? 'Pipe ' + port
          : 'Port ' + port;

        // handle specific listen errors with friendly messages
        switch (error.code) {
          case 'EACCES':
            console.error(bind + ' requires elevated privileges');
            process.exit(1);
            break;
          case 'EADDRINUSE':
            console.error(bind + ' is already in use');
            process.exit(1);
            break;
          default:
            throw error;
        }
      }

      function onListening() {
        var addr = server.address();
        var bind = typeof addr === 'string'
          ? 'pipe ' + addr
          : 'port ' + addr.port;
        debug('Listening on ' + bind);
      }

      package.json:

      {
      "name": "jet-on-node",
      "version": "0.0.0",
      "private": true,
      "scripts": {
      "start": "node ./bin/www"
      },
      "dependencies": {
      "body-parser": "~1.18.2",
      "cookie-parser": "~1.4.3",
      "debug": "~2.6.9",
      "express": "~4.15.5",
      "morgan": "~1.9.0",
      "pug": "2.0.0-beta11",
      "request": "^2.85.0",
      "serve-favicon": "~2.4.5"
      }
      }

      app.js:

      var express = require('express');
      var path = require('path');
      var favicon = require('serve-favicon');
      var logger = require('morgan');
      var cookieParser = require('cookie-parser');
      var bodyParser = require('body-parser');

      const http = require('http');
      const url = require('url');
      const fs = require('fs');
      const request = require('request');

      var app = express();
      // uncomment after placing your favicon in /public
      //app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
      app.use(logger('dev'));
      app.use(bodyParser.json());
      app.use(bodyParser.urlencoded({ extended: false }));
      app.use(cookieParser());

      // define static resource server from local directory public (for any request not otherwise handled)
      app.use(express.static(path.join(__dirname, 'public')));

      app.use(function (req, res, next) {
        res.header("Access-Control-Allow-Origin", "*");
        res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
        next();
      });

      // catch 404 and forward to error handler
      app.use(function (req, res, next) {
        var err = new Error('Not Found');
        err.status = 404;
        next(err);
      });

      // error handler
      app.use(function (err, req, res, next) {
        // set locals, only providing error in development
        res.locals.message = err.message;
        res.locals.error = req.app.get('env') === 'development' ? err : {};

        // render the error page
        res.status(err.status || 500);
        res.json({
          message: err.message,
          error: err
        });
      });

      module.exports = app;

      Then the interesting bit: requests for URL /js/jet-composites/* are intercepted: instead of having those requests also handled by serving local resources (from directory public/js/jet-composites/*), the requests are interpreted and routed to an external host. The responses from that host are returned to the requester. To the requesting browser, there is no distinction between resources served locally as static artifacts from the local file system and resources retrieved through these redirected requests.

      // any request at /js/jet-composites (for resources in that folder)
      // should be intercepted and redirected
      var compositeBasePath = '/js/jet-composites/'
      app.get(compositeBasePath + '*', function (req, res) {
        var requestedResource = req.url.substr(compositeBasePath.length)
        // parse URL
        const parsedUrl = url.parse(requestedResource);
        // extract URL path
        let pathname = `${parsedUrl.pathname}`;
        // maps file extension to MIME types
        const mimeType = {
          '.ico': 'image/x-icon',
          '.html': 'text/html',
          '.js': 'text/javascript',
          '.json': 'application/json',
          '.css': 'text/css',
          '.png': 'image/png',
          '.jpg': 'image/jpeg',
          '.wav': 'audio/wav',
          '.mp3': 'audio/mpeg',
          '.svg': 'image/svg+xml',
          '.pdf': 'application/pdf',
          '.doc': 'application/msword',
          '.eot': 'application/vnd.ms-fontobject',
          '.ttf': 'application/font-sfnt'
        };

        handleResourceFromCompositesServer(res, mimeType, pathname)
      })

      async function handleResourceFromCompositesServer(res, mimeType, requestedResource) {
        var reqUrl = "http://yourhost:theport/applicationURL/" + requestedResource
        // fetch resource and return
        var options = url.parse(reqUrl);
        options.method = "GET";
        options.agent = false;

        // options.headers['host'] = options.host;
        http.get(reqUrl, function (serverResponse) {
          console.log('<== Received res for', serverResponse.statusCode, reqUrl);
          console.log('\t-> Request Headers: ', options);
          console.log(' ');
          console.log('\t-> Response Headers: ', serverResponse.headers);

          serverResponse.pause();

          serverResponse.headers['access-control-allow-origin'] = '*';

          switch (serverResponse.statusCode) {
            // pass through. we're not too smart here...
            case 200: case 201: case 202: case 203: case 204: case 205: case 206:
            case 304:
            case 400: case 401: case 402: case 403: case 404: case 405:
            case 406: case 407: case 408: case 409: case 410: case 411:
            case 412: case 413: case 414: case 415: case 416: case 417: case 418:
              res.writeHeader(serverResponse.statusCode, serverResponse.headers);
              serverResponse.pipe(res, { end: true });
              serverResponse.resume();
              break;

            // fix host and pass through; PORT is the port this proxy application itself listens on
            case 301:
            case 302:
            case 303:
              serverResponse.statusCode = 303;
              serverResponse.headers['location'] = 'http://localhost:' + PORT + '/' + serverResponse.headers['location'];
              console.log('\t-> Redirecting to ', serverResponse.headers['location']);
              res.writeHeader(serverResponse.statusCode, serverResponse.headers);
              serverResponse.pipe(res, { end: true });
              serverResponse.resume();
              break;

            // error everything else
            default:
              var stringifiedHeaders = JSON.stringify(serverResponse.headers, null, 4);
              serverResponse.resume();
              res.writeHeader(500, {
                'content-type': 'text/plain'
              });
              res.end(process.argv.join(' ') + ':\n\nError ' + serverResponse.statusCode + '\n' + stringifiedHeaders);
              break;
          }

          console.log('\n\n');
        });
      }
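
      As a side note: since the request module is already listed in package.json, a much more compact (though less controlled) variant is to simply pipe the remote response straight back to the caller. This is only a sketch – the redirect rewriting and the detailed error reporting of the code above are lost, and the remote base URL is again a placeholder:

      const request = require('request')

      const compositeBasePath = '/js/jet-composites/'
      const remoteBase = 'http://yourhost:theport/applicationURL/'   // placeholder, as in the code above

      app.get(compositeBasePath + '*', function (req, res) {
        const requestedResource = req.url.substr(compositeBasePath.length)
        // request copies status code, headers and body of the remote response onto res
        req.pipe(request(remoteBase + requestedResource))
           .on('error', err => res.status(502).send('proxy error: ' + err.message))
           .pipe(res)
      })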

      Resources

      Express Tutorial Part 2: Creating a skeleton website - https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/skeleton_website

      Building a Node.js static file server (files over HTTP) using ES6+ - http://adrianmejia.com/blog/2016/08/24/Building-a-Node-js-static-file-server-files-over-HTTP-using-ES6/

      How To Combine REST API calls with JavaScript Promises in node.js or OpenWhisk - https://medium.com/adobe-io/how-to-combine-rest-api-calls-with-javascript-promises-in-node-js-or-openwhisk-d96cbc10f299

      Node script to forward all http requests to another server and return the response with an access-control-allow-origin header. Follows redirects. - https://gist.github.com/cmawhorter/a527a2350d5982559bb6

      5 Ways to Make HTTP Requests in Node.js - https://www.twilio.com/blog/2017/08/http-requests-in-node-js.html

      The post Node & Express application to proxy HTTP requests – simply forwarding the response to the original caller appeared first on AMIS Oracle and Java Blog.

      Create a Node JS application for Downloading sources from GitHub

      Sun, 2018-03-18 16:26

      My objective: create a Node application to download sources from a repository on GitHub. I want to use this application to read a simple package.json-like file (that describes which reusable components – from which GitHub repositories – the application depends on) and download all required resources from GitHub and store them in the local file system. This by itself may not seem very useful. However, it is a stepping stone on the road to a facility for run-time updates of application components, triggered by GitHub WebHooks.

      I am making use of the Octokit Node JS library to interact with the REST APIs of GitHub. The code I have created will:

      • fetch the meta-data for all items in the root folder of a GitHub Repo (at the tip of a specific branch, or at a specific tag or commit identifier)
      • iterate over all items:
        • download the contents of the item if it is a file and create a local file with the content (and cater for large files and for binary files)
        • create a local directory for each item in the GitHub repo that is a directory, then recursively process the contents of that directory on GitHub

      An example of the code in action:

      A randomly selected GitHub repo (at https://github.com/lucasjellema/WebAppIframe2ADFSynchronize):

      image

      The local target directory is empty at the beginning of the action:

      SNAGHTML8180706

      Run the code:

      image

      And the content is downloaded and written locally:

      image

      Note: the code could easily provide an execution report with details such as file size, download, last change date etc. It is currently very straightforward. Note: the gitToken is something you need to get hold of yourself in the GitHub dashboard: https://github.com/settings/tokens . Without a token, the code will still work, but you will be bound to the GitHub rate limit (of about 60 requests per hour).

      const octokit = require('@octokit/rest')() 
      const fs = require('fs');
      
      var gitToken = "YourToken"
      
      octokit.authenticate({
          type: 'token',
          token: gitToken
      })
      
      var targetProjectRoot = "C:/data/target/" 
      var github = { "owner": "lucasjellema", "repo": "WebAppIframe2ADFSynchronize", "branch": "master" }
      
      downloadGitHubRepo(github, targetProjectRoot)
      
      async function downloadGitHubRepo(github, targetDirectory) {
          console.log(`Installing GitHub Repo ${github.owner}\\${github.repo}`)
          var repo = github.repo;
          var path = ''
          var owner = github.owner
          var ref = github.commit ? github.commit : (github.tag ? github.tag : (github.branch ? github.branch : 'master'))
          processGithubDirectory(owner, repo, ref, path, path, targetDirectory)
      }
      
      // let's assume that if the name ends with one of these extensions, we are dealing with a binary file:
      const binaryExtensions = ['png', 'jpg', 'tiff', 'wav', 'mp3', 'doc', 'pdf']
      var maxSize = 1000000;
      function processGithubDirectory(owner, repo, ref, path, sourceRoot, targetRoot) {
          octokit.repos.getContent({ "owner": owner, "repo": repo, "path": path, "ref": ref })
              .then(result => {
                  var targetDir = targetRoot + path
                  // check if targetDir exists 
                  checkDirectorySync(targetDir)
                  result.data.forEach(item => {
                      if (item.type == "dir") {
                          processGithubDirectory(owner, repo, ref, item.path, sourceRoot, targetRoot)
                      } // if directory
                      if (item.type == "file") {
                          if (item.size > maxSize) {
                              var sha = item.sha
                              octokit.gitdata.getBlob({ "owner": owner, "repo": repo, "sha": item.sha }
                              ).then(result => {
                                  var target = `${targetRoot + item.path}`
                                  fs.writeFile(target
                                      , Buffer.from(result.data.content, 'base64').toString('utf8'), function (err, data) { })
                              })
                                  .catch((error) => { console.log("ERROR BIGGA" + error) })
                              return;
                          }// if bigga
                          octokit.repos.getContent({ "owner": owner, "repo": repo, "path": item.path, "ref": ref })
                              .then(result => {
                                  var target = `${targetRoot + item.path}`
                                  if (binaryExtensions.includes(item.path.slice(-3))) {
                                      fs.writeFile(target
                                          , Buffer.from(result.data.content, 'base64'), function (err, data) { reportFile(item, target) })
                                  } else
                                      fs.writeFile(target
                                          , Buffer.from(result.data.content, 'base64').toString('utf8'), function (err, data) { if (!err) reportFile(item, target); else console.log('Fuotje ' + err) })
      
                              })
                              .catch((error) => { console.log("ERROR " + error) })
                      }// if file
                  })
              }).catch((error) => { console.log("ERROR XXX" + error) })
      }//processGithubDirectory
      
      function reportFile(item, target) {
          console.log(`- installed ${item.name} (${item.size} bytes )in ${target}`)
      }
      
      function checkDirectorySync(directory) {
          try {
              fs.statSync(directory);
          } catch (e) {
              fs.mkdirSync(directory);
              console.log("Created directory: " + directory)
          }
      }
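
      The package.json-like descriptor file mentioned in the introduction is not worked out in this post. A minimal sketch of what it could look like and how the code above could consume it is shown below; the file name dependencies.json, its target property and the GITHUB_TOKEN environment variable are my own inventions, not part of the original code:

      // dependencies.json could look like:
      // [ { "owner": "lucasjellema", "repo": "WebAppIframe2ADFSynchronize", "branch": "master",
      //     "target": "C:/data/target/" } ]

      var gitToken = process.env.GITHUB_TOKEN              // avoids hard-coding the token in the source
      if (gitToken) octokit.authenticate({ type: 'token', token: gitToken })

      var dependencies = JSON.parse(fs.readFileSync('dependencies.json', 'utf8'))
      dependencies.forEach(dep => downloadGitHubRepo(dep, dep.target))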
      
      

      Resources

      Octokit REST API Node JS library: https://github.com/octokit/rest.js 

      API Documentation for Octokit: https://octokit.github.io/rest.js/#api-Repos-getContent

      The post Create a Node JS application for Downloading sources from GitHub appeared first on AMIS Oracle and Java Blog.

      Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu

      Sun, 2018-03-18 08:53

      Spring Boot is great for running inside a Docker container. Spring Boot applications ‘just run’. A Spring Boot application has an embedded servlet engine making it independent of application servers. There is a Spring Boot Maven plugin available to easily create a JAR file which contains all required dependencies. This JAR file can be run with a single command-line like ‘java -jar SpringBootApp.jar’. For running it in a Docker container, you only require a base OS and a JDK. In this blog post I’ll give examples on how to get started with different OSs and different JDKs in Docker. I’ll finish with an example on how to build a Docker image with a Spring Boot application in it.

      Getting started with Docker

      Installing Docker

      Of course you need a Docker installation. I’ll not get into details here but;

      Oracle Linux 7
      yum-config-manager --enable ol7_addons
      yum-config-manager --enable ol7_optional_latest
      yum install docker-engine
      systemctl start docker
      systemctl enable docker
      Ubuntu
      curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
      add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
      apt-get update
      apt-get install docker-ce

      You can add a user to the docker group or give it sudo docker rights. They do allow the user to become root on the host-OS though.

      Running a Docker container

      See below for commands you can execute to start containers in the foreground or background and access them. For ‘mycontainer’ in the below examples, you can fill in a name you like. The name of the image can be found in the description further below. This can be, for example, container-registry.oracle.com/os/oraclelinux:7 for an Oracle Linux 7 image when using the Oracle Container Registry, or store/oracle/serverjre:8 for a JRE image from the Docker Store.

      If you are using the Oracle Container Registry (for example to obtain Oracle JDK or Oracle Linux docker images) you first need to

      • go to container-registry.oracle.com and enable your OTN account to be used
      • go to the product you want to use and accept the license agreement
      • do docker login -u username -p password container-registry.oracle.com

      If you are using the Docker Store, you first need to

      • go to store.docker.com and create an account
      • find the image you want to use. Click Get Content and accept the license agreement
      • do docker login -u username -p password

      To start a container in the foreground

      docker run --name mycontainer -it imagename /bin/sh

      To start a container in the background

      docker run --name mycontainer -d imagename tail -f /dev/null

      To ‘enter’ a running container:

      docker exec -it mycontainer /bin/sh

      /bin/sh exists in Alpine Linux, Oracle Linux and Ubuntu. For Oracle Linux and Ubuntu you can also use /bin/bash. ‘tail -f /dev/null’ is used to start a ‘bare OS’ container with no other running processes to keep it running. A suggestion from here.

      Cleaning up

      Good to know is how to clean up your images/containers after having played around with them. See here.

      #!/bin/bash
      # Delete all containers
      docker rm $(docker ps -a -q)
      # Delete all images
      docker rmi $(docker images -q)
      Options for JDK

      Of course there are more options for running JDKs in Docker containers. These are just some of the more commonly used.

      Oracle JDK on Oracle Linux

      When you’re running in the Oracle Cloud, you have probably noticed the OS running beneath it is often Oracle Linux (and currently also often version 7.x). When for example running Application Container Cloud Service, it uses the Oracle JDK. If you want to run in a similar environment locally, you can use Docker images. Good to know is that the Oracle Server JRE contains more than a regular JRE but less than a complete JDK. Oracle recommends using the Server JRE whenever possible instead of the JDK since the Server JRE has a smaller attack surface. Read more here. For questions about support and roadmap, read the following blog.

      store.docker.com

      The steps to obtain Docker images for Oracle JDK / Oracle Linux from store.docker.com are as follows:

      Create an account on store.docker.com. Go to https://store.docker.com/images/oracle-serverjre-8. Click Get Content. Accept the agreement and you’re ready to login, pull and run.

      #use the store.docker.com username and password
      docker login -u yourusername -p yourpassword
      docker pull store/oracle/serverjre:8

      To start in the foreground:

      docker run --name jre8 -it store/oracle/serverjre:8 /bin/bash
      container-registry.oracle.com

      You can use the image from the container registry. First, same as for just running the OS, enable your OTN account and login.

      #use your OTN username and password
      docker login -u yourusername -p yourpassword container-registry.oracle.com
      
      docker pull container-registry.oracle.com/java/serverjre:8
      
      #To start in the foreground:
      docker run --name jre8 -it container-registry.oracle.com/java/serverjre:8 /bin/bash
      OpenJDK on Alpine Linux

      When running Docker containers, you want them to be as small as possible to allow quick starting, stopping, downloading, scaling, etc. Alpine Linux is a suitable Linux distribution for small containers and is being used quite often. There can be some thread-related challenges with Alpine Linux though. See for example here and here.

      Running OpenJDK on Alpine Linux in a Docker container is easier than you might think. You don’t require any specific accounts for this, and no login is needed.

      When you pull openjdk:8, you will get a Debian 9 image. In order to run on Alpine Linux, you can do

      docker pull openjdk:8-jdk-alpine

      Next you can do

      docker run --name openjdk8 -it openjdk:8-jdk-alpine /bin/sh
      Zulu on Ubuntu Linux

      You can also consider OpenJDK-based JDKs like Azul’s Zulu. This works mostly the same; only the image name is different – something like ‘azul/zulu-openjdk:8’. The Zulu images are Ubuntu based.

      Do it yourself

      Of course you can create your own image with a JDK. See for example here. This requires you to download the JDK code and build the image yourself. This is quite easy though.

      Spring Boot in a Docker container

      Creating a container with a Spring Boot application based on an image which already has a JDK in it, is easy. This is described here. You can create a simple Dockerfile like:

      FROM openjdk:8-jdk-alpine
      VOLUME /tmp
      ARG JAR_FILE
      ADD ${JAR_FILE} app.jar
      ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]

      The FROM image can also be an Oracle JDK or Zulu JDK image as mentioned above.

      And add a dependency on com.spotify.dockerfile-maven-plugin and some configuration to your pom.xml file to automate building the Docker image once you have the Spring Boot JAR file. See for a complete example pom.xml and Dockerfile also here. The relevant part of the pom.xml file is below.

      <build>
        <finalName>accs-cache-sample</finalName>
        <plugins>
          <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
          </plugin>
          <plugin>
            <groupId>com.spotify</groupId>
            <artifactId>dockerfile-maven-plugin</artifactId>
            <version>1.3.6</version>
            <configuration>
              <repository>${docker.image.prefix}/${project.artifactId}</repository>
              <buildArgs>
                <JAR_FILE>target/${project.build.finalName}.jar</JAR_FILE>
              </buildArgs>
            </configuration>
          </plugin>
        </plugins>
      </build>

      To actually build the Docker image, which allows using it locally, you can do:

      mvn install dockerfile:build

      If you want to distribute it (allow others to easily pull and run it), you can push it with

      mvn install dockerfile:push

      This will of course only work if you’re logged in as maartensmeets and only for Docker hub (for this example). The below screenshot is after having pushed the image to hub.docker.com. You can find it there since it is public.

      You can then do something like

      docker run -t maartensmeets/accs-cache-sample:latest

      The post Running Spring Boot in a Docker container on OpenJDK, Oracle JDK, Zulu on Alpine Linux, Oracle Linux, Ubuntu appeared first on AMIS Oracle and Java Blog.

      Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application

      Wed, 2018-03-14 10:24

      Spring Boot allows you to quickly develop microservices. Application Container Cloud Service (ACCS) allows you to easily host Spring Boot applications. Oracle provides an Application Cache based on Coherence which you can use from applications deployed to ACCS. In order to use the Application Cache from Spring Boot, Oracle provides an open source Java SDK. In this blog post I’ll give an example on how you can use the Application Cache from Spring Boot using this SDK. You can find the sample code here.

      Using the Application Cache Java SDK

      Create an Application Cache

      You can use a web interface to easily create a new instance of the Application Cache. A single instance can contain multiple caches. A single application can use multiple caches but only a single cache instance. Multiple applications can use the same cache instance and caches. Mind that the application and the application cache must be deployed in the same region in order to allow connectivity. Also do not use the ‘-‘ character in your cache name, since the LBaaS configuration will fail otherwise.

      Use the Java SDK

      Spring Boot applications commonly use an architecture which defines abstraction layers. External resources are exposed through a controller. The controller uses services. These services provide operations to execute specific tasks. The services use repositories for their connectivity / data access objects. Entities are the POJO’s which are exchanged/persisted and exposed for example as REST in a controller. In order to connect to the cache, the repository seems like a good location. Which repository to use (a persistent back-end like a database or for example the application cache repository) can be handled by the service. Per operation this can differ. Get operations for example might directly use the cache repository (which could use other sources if it can’t find its data) while you might want to do Put operations in both the persistent backend as well as in the cache. See for an example here.

      In order to gain access to the cache, first a session needs to be established. The session can be obtained from a session provider. The session provider can be a local session provider or a remote session provider. The local session provider can be used for local development. It can be created with an expiry which indicates the validity period of items in the cache. When developing / testing, you might try setting this to ‘never expires’ since otherwise you might not be able to find entries which you expect to be there. I have not looked further into this issue or created a service request for it. Nor do I know if this is only an issue with the local session provider. See for sample code here or here.

      When creating a session, you also need to specify the protocol to use. When using the Java SDK, you can (at the moment) choose from GRPC and REST. GRPC might be more challenging to implement without an SDK in for example Node.js code, but I have not tried this. I have not compared the performance of the 2 protocols. Another difference is that the application uses different ports and URLs to connect to the cache. You can see how to determine the correct URL / protocol from ACCS environment variables here.

      The ACCS Application Cache Java SDK allows you to add a Loader and a Serializer class when creating a Cache object. The Loader class is invoked when a value cannot be found in the cache. This allows you to fetch objects which are not in the cache. The Serializer is required so the object can be transferred via REST or GRPC. You might do something like below.

      Injection

      Mind that when using Spring Boot you do not want to create instances of objects by directly doing something like: Class bla = new Class(). You want to let Spring handle this by using the @Autowired annotation.

      Do mind though that the @Autowired annotation assigns instances to variables only after the constructor of the instance has been executed. If you want to use the @Autowired variables after your constructor but before executing other methods, you should put that code in a @PostConstruct annotated method. See also here. See for a concrete implemented sample here.

      Connectivity

      The Application cache can be restarted at certain times (e.g. maintenance like patching, scaling) and there can be connectivity issues due to other reasons. In order to deal with that it is a good practice to make the connection handling more robust by implementing retries. See for example here.

      Deploy a Spring Boot application to ACCS

      Create a deployable

      In order to deploy an application to ACCS, you need to create a ZIP file in a specific format. In this ZIP file there should at least be a manifest.json file which describes (amongst other things) how to start the application. You can read more here. If you have environment specific properties, binding information (such as which cache to use) and environment variables, you can create a deployment.json file. In addition to those metadata files, there of course needs to be the application itself. In case of Spring Boot, this is a large JAR file which contains all dependencies. You can create this file with the spring-boot-maven-plugin. The ZIP itself is most easily composed with the maven-assembly-plugin.

      Deploy to ACCS

      There are two major ways (next to directly using the APIs with, for example, curl) in which you can deploy to ACCS. You can do this manually or use the Developer Cloud Service. The process to do this from Developer Cloud Service is described here. This is quicker (it allows redeployment on a Git commit, for example) and more flexible. The steps below globally describe the manual procedure. An important thing to mind is that if you deploy the same application under the same name several times, you might encounter issues with the application not being replaced with a new version. In this case you can do two things: deploy under a different name every time (the name of the application, however, is reflected in the URL and this could cause issues for users of the application), or remove the files from the Storage Cloud Service before redeployment so you are sure the most recent version of the deployable ends up in ACCS.

      Manually

      Create a new Java SE application.

       

      Upload the previously created ZIP file

      References

      Introducing Application Cache Client Java SDK for Oracle Cloud

      Caching with Oracle Application Container Cloud

      Complete working sample Spring Boot on ACCS with Application cache (as soon as a SR is resolved)

      A sample of using the Application Cache Java SDK. Application is Jersey based

      The post Application Container Cloud Service (ACCS): Using the Application Cache from a Spring Boot application appeared first on AMIS Oracle and Java Blog.

      ADF Performance Tuning: Avoid a Long Browser Load Time

      Wed, 2018-03-07 04:10

      It is not always easy to troubleshoot ADF performance problems – it is often complicated. Many parts need to be measured, analyzed and considered. While looking for performance problems at the usual suspects (ADF application, database, network), the real problem can also be found in the often overlooked browser load time. The browser load time is just as important a part of the handling of an HTTP request and response as the time spent in the application server, database and network. The browser load time can add a few seconds on top of the server and network processing time before the end user receives the HTTP response and can continue with his work. This is especially the case if the browser needs to build a very ‘rich’ ADF page, where it has to build and process a very large DOM tree. The end user then needs to wait for seconds, even in modern browsers such as Google Chrome, Firefox and Microsoft Edge. Often this is caused by a ‘bad’ page design where too many ADF components are rendered and displayed at the same time; too many table columns and rows, but also too many other components, can cause a slow browser load time. This blog shows an example, analyzes the browser load time in the ADF Performance Monitor, and suggests simple page design considerations to prevent a long browser load time.

      Read more on adfpm.com – our new website on the ADF Performance Monitor.

      The post ADF Performance Tuning: Avoid a Long Browser Load Time appeared first on AMIS Oracle and Java Blog.

      Get going with Project Fn on a remote Kubernetes Cluster from a Windows laptop–using Vagrant, VirtualBox, Docker, Helm and kubectl

      Sun, 2018-03-04 14:08


      The challenge I describe in this article is quite specific. I have a Windows laptop. I have access to a remote Kubernetes cluster (on Oracle Cloud Infrastructure). I want to create Fn functions and deploy them to an Fn server running on that Kubernetes (k8s from now on) environment and I want to be able to execute functions running on k8s from my laptop. That’s it.

      In this article I will take you on a quick tour of what I did to get this to work:

      • Use vagrant to spin up a VirtualBox VM based on a Debian Linux image and set up with Docker Server installed. Use SSH to enter the Virtual Machine and install Helm (a Kubernetes package installer) – both client (in the VM) and server (called Tiller, on the k8s cluster). Also install kubectl in the VM.
      • Then install Project Fn in the VM. Also install Fn to the Kubernetes cluster, using the Helm chart for Fn (this will create a series of Pods and Services that make up and run the Fn platform).
      • Still inside the VM, create a new Fn function. Then, deploy this function to the Fn server on the Kubernetes cluster. Run the function from within the VM – using kubectl to set up port forwarding for local calls to requests into the Kubernetes cluster.
      • On the Windows host (the laptop, outside the VM) we can also run kubectl with port forwarding and invoke the Fn function on the Kubernetes cluster.
• Finally, I show how to expose the fn-api service on Kubernetes on an external IP address. Note: the latter is nice for demos, but compromises security in a major way.

      All in all, you will see how to create, deploy and invoke an Fn function – using a Windows laptop and a remote Kubernetes cluster as the runtime environment for the function.

The starting point: a laptop running Windows, with VirtualBox and Vagrant installed, and a remote Kubernetes cluster (this could be in some cloud – such as the Oracle Container Engine Cloud that I am using – or minikube).

      Step One: Prepare Virtual Machine

      Create a Vagrantfile – for example this one: https://github.com/lucasjellema/fn-on-kubernetes-from-docker-in-vagrant-vm-on-windows/blob/master/vagrantfile:

      Vagrant.configure("2") do |config|
        
      config.vm.provision "docker"
      
      config.vm.define "debiandockerhostvm"
      # https://app.vagrantup.com/debian/boxes/jessie64
      config.vm.box = "debian/jessie64"
      config.vm.network "private_network", ip: "192.168.188.105"
       
      
      config.vm.synced_folder "./", "/vagrant", id: "vagrant-root",
             owner: "vagrant",
             group: "www-data",
             mount_options: ["dmode=775,fmode=664"],
             type: ""
               
      config.vm.provider :virtualbox do |vb|
         vb.name = "debiananddockerhostvm"
         vb.memory = 4096
         vb.cpus = 2
         vb.customize ["modifyvm", :id, "--natdnshostresolver1","on"]
         vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
      end
        
      end
      

      This Vagrantfile will create a VM with VirtualBox called debiandockerhostvm – based on the VirtualBox image debian/jessie64. It exposes the VM to the host laptop at IP 192.168.188.105 (you can safely change this). It maps the local directory that contains the Vagrantfile into the VM, at /vagrant. This allows us to easily exchange files between Windows host and Debian Linux VM. The instruction “config.vm.provision “docker”” ensures that Docker is installed into the Virtual Machine.

      To actually create the VM, open a command line and navigate to the directory that contains the Vagrant file. Then type “vagrant up”. Vagrant starts running and creates the VM, interacting with the VirtualBox APIs. When the VM is created, it is started.

      From the same command line, using “vagrant ssh”, you can now open a terminal window in the VM.
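In short, from the directory that contains the Vagrantfile:

# create and start the VM defined in the Vagrantfile
vagrant up

# open an SSH session into the running VM
vagrant ssh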

      To further prepare the VM, we need to install Helm and kubectl. Helm is installed in the VM (client) as well as in the Kubernetes cluster (the Tiller server component).

Here are the steps to perform inside the VM (see step 1):

      ######## kubectl
      
      # download and extract the kubectl binary 
      curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
      
      # set the executable flag for kubectl
      chmod +x ./kubectl
      
      # move the kubectl executable to the bin directory
      sudo mv ./kubectl /usr/local/bin/kubectl
      
      # assuming that the kubeconfig file with details for Kubernetes cluster is available On the Windows Host:
      # Copy the kubeconfig file to the directory that contains the Vagrantfile and from which vagrant up and vagrant ssh were performed
      # note: this directory is mapped into the VM to directory /vagrant
      
      #Then in VM - set the proper Kubernetes configuration context: 
      export KUBECONFIG=/vagrant/kubeconfig
      
#now inspect the successful installation of kubectl and the correct connection to the Kubernetes cluster
      kubectl cluster-info
      
      
      ########  HELM
      #download the Helm installer
      curl -LO  https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz
      
      #extract the Helm executable from the archive
      tar -xzf helm-v2.8.1-linux-amd64.tar.gz
      
      #set the executable flag on the Helm executable
      sudo chmod +x  ./linux-amd64/helm
      
      #move the Helm executable to the bin directory - as helm
      sudo mv ./linux-amd64/helm /usr/local/bin/helm
      
#test the successful installation of helm
      helm version
      
      ###### Tiller
      
      #Helm has a server side companion, called Tiller, that should be installed into the Kubernetes cluster
      # this is easily done by executing:
      helm init
      
      # an easy test of the Helm/Tiller set up can be run (as described in the quickstart guide)
      helm repo update              
      
      helm install stable/mysql
      
      helm list
      
      # now inspect in the Kubernetes Dashboard the Pod that should have been created for the MySQL Helm chart
      
      # clean up after yourself:
      helm delete <name of the release of MySQL>
      

When this step is complete, the environment is ready: Docker, kubectl and Helm are available inside the VM and Tiller is running on the Kubernetes cluster.

      Step Two: Install Project Fn – in VM and on Kubernetes

Now that we have prepared our Virtual Machine, we can proceed with adding the Project Fn command line utility to the VM and the Fn platform to the Kubernetes cluster. The former is a simple local installation of a binary file. The latter is an even simpler installation of a Helm Chart. Here are the steps that you should go through inside the VM (also see step 2):

      # 1A. download and install Fn locally inside the VM
      curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh
      
      #note: this previous statement failed for me; I went through the following steps as a workaround
      # 1B. create install script
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install > install.sh
      # make script executable
      chmod u+x install.sh
      # execute script - as sudo
      sudo ./install.sh
      
      # 1C. and if that fails, you can manually manipulate the downloaded executable:
      sudo mv /tmp/fn_linux /usr/local/bin/fn
      sudo chmod +x /usr/local/bin/fn
      
      # 2. when the installation was done through one of the  methods listed, test the success by running  
      fn --version
      
      
      # 3. Server side installation of Fn to the Kubernetes Cluster
      # details in https://github.com/fnproject/fn-helm
      
      # Clone the GitHub repo with the Helm chart for fn; sources are downloaded into the fn-helm directory
      git clone git@github.com:fnproject/fn-helm.git && cd fn-helm
      
      # Install chart dependencies from requirements.yaml in the fn-helm directory:
      helm dep build fn
      
      #To install the Helm chart with the release name my-release into Kubernetes:
      helm install --name my-release fn
      
      # to verify the cluster server side installation you could run the following statements:
      export KUBECONFIG=/vagrant/kubeconfig
      
      #list all pods for app my-release-fn
      kubectl get pods --namespace default -l "app=my-release-fn"
      

When the installation of Fn has been done, the Fn CLI is available inside the VM and the Fn platform is running on the Kubernetes cluster.

You can check in the Kubernetes Dashboard – or on the command line, for example with the kubectl get pods statement shown above – what has been created from the Helm chart.

      Step Three: Create, Deploy and Run Fn Functions

      We now have a ready to run environment – client side VM and server side Kubernetes cluster – for creating Fn functions – and subsequently deploying and invoking them.

      Let’s now go through these three steps, starting with the creation of a new function called shipping-costs, created in Node.

      docker login
      
      export FN_REGISTRY=lucasjellema
      
      mkdir shipping-costs
      
      cd shipping-costs
      
      fn init --name shipping-costs --runtime  node
      
      # this creates the starting point of the Node application (package.json and func.js) as well as the Fn meta data file (func.yaml) 
      
      # now edit the func.js file (and add dependencies to package.json if necessary)
      
      #The extremely simple implementation of func.js looks like this:
      var fdk=require('@fnproject/fdk');
      
      fdk.handle(function(input){
        var name = 'World';
        if (input.name) {
          name = input.name;
        }
        response = {'message': 'Hello ' + name, 'input':input}
        return response
      })
      
      #This function receives an input parameter (from a POST request this would be the body contents, typically a JSON document)
      # the function returns a result, a JSON document with the message and the input document returned in its entirety
      

      After this step, the function exists in the VM – not anywhere else yet. Some other functions could already have been deployed to the Fn platform on Kubernetes.


      This function shipping-costs should now be deployed to the K8S cluster, as that was one of our major objectives.

      export KUBECONFIG=/vagrant/kubeconfig
      
      # retrieve the name of the Pod running the Fn API
      kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}"
      
      # retrieve the name of the Pod running the Fn API and assign to environment variable POD_NAME
      export POD_NAME=$(kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}")
      echo $POD_NAME    
      
      
      # set up kubectl port-forwarding; this ensures that any local requests to port 8080 are forwarded by kubectl to the pod specified in this command, on port 80
      # this basically creates a shortcut or highway from the VM right into the heart of the K8S cluster; we can leverage this highway for deployment of the function
      kubectl port-forward --namespace default $POD_NAME 8080:80 &
      
      #now we inform Fn that deployment activities can be directed at port 8080 of the local host, effectively to the pod $POD_NAME on the K8S cluster
      export FN_API_URL=http://127.0.0.1:8080
      export FN_REGISTRY=lucasjellema
      docker login
      
      #perform the deployment of the function from the directory that contains the func.yaml file
      #functions are organized in applications; here the name of the application is set to soaring-clouds-app
      fn deploy --app soaring-clouds-app
      

Here is what the deployment looks like in the terminal window in the VM (I have left out the steps: docker login, set FN_API_URL and set FN_REGISTRY).


After deploying function shipping-costs, it now exists on the Kubernetes cluster – inside the fn-api Pod (where a Docker container runs for each of the functions).

To invoke the function, several options are available. The function can be invoked from within the VM, using cURL against the function’s endpoint – leveraging kubectl port forwarding as before. We can also apply kubectl port forwarding on the laptop – and use any tool that can invoke HTTP endpoints – such as Postman – to call the function.

      If we want clients without kubectl port forwarding – and even completely without knowledge of the Kubernetes cluster – to invoke the function, that can be done as well, by exposing an external IP for the service on K8S for fn-api.

First, let’s invoke the function from within the VM.

      export KUBECONFIG=/vagrant/kubeconfig
      
      # retrieve the name of the Pod running the Fn API
      kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}"
      
      # retrieve the name of the Pod running the Fn API and assign to environment variable POD_NAME
      export POD_NAME=$(kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}")
      echo $POD_NAME    
      
      
      # set up kubectl port-forwarding; this ensures that any local requests to port 8080 are forwarded by kubectl to the pod specified in this command, on port 80
      # this basically creates a shortcut or highway from the VM right into the heart of the K8S cluster; we can leverage this highway for deployment of the function
      kubectl port-forward --namespace default $POD_NAME 8080:80 &
      
      
      curl -X POST \
        http://127.0.0.1:8080/r/soaring-clouds-app/shipping-costs \
        -H 'Cache-Control: no-cache' \
        -H 'Content-Type: application/json' \
        -H 'Postman-Token: bb753f9f-9f63-46b8-85c1-8a1428a2bdca' \
        -d '{"X":"Y"}'
      
      
      
      # on the Windows laptop host
      set KUBECONFIG=c:\data\2018-soaring-keys\kubeconfig
      
      kubectl port-forward --namespace default <name of pod> 8080:80 &
      
      
      kubectl port-forward --namespace default my-release-fn-api-frsl5 8085:80 &
      
      


      Now, try to call the function from the laptop host. This assumes that on the host we have both kubectl and the kubeconfig file that we also use in the VM.

      First we have to set the KUBECONFIG environment variable to refer to the kubeconfig file. Then we set up kubectl port forwarding just like in the VM, in this case forwarding port 8085 to the Kubernetes Pod for the Fn API.



When this is done, we can make calls to the shipping-costs function on localhost, port 8085: endpoint http://127.0.0.1:8085/r/soaring-clouds-app/shipping-costs

This still requires the client to be aware of Kubernetes: it has to have the kubeconfig file and the kubectl client. We can make it possible to directly invoke Fn functions from anywhere, without using kubectl. We do this by exposing an external IP directly on the service for the Fn API on Kubernetes.

      The simplest way of making this happen is through the Kubernetes dashboard.

Run the dashboard and open it in a local browser at http://127.0.0.1:8001/ui.

      Edit the configuration of the service for fn-api:


      Change type ClusterIP to LoadBalancer. This instructs Kubernetes to externally expose this Service – and assign an external IP address to it. Click on Update to make the change real.
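If you prefer the command line over the dashboard, the same change can be made with kubectl (a sketch; the service name my-release-fn-api is an assumption derived from the Helm release name, so check the actual name first):

# look up the exact name of the Fn API service
kubectl get svc --namespace default

# switch the service type from ClusterIP to LoadBalancer (the service name is an assumption)
kubectl patch svc my-release-fn-api --namespace default -p '{"spec": {"type": "LoadBalancer"}}'

# after a little while the EXTERNAL-IP column shows the assigned address
kubectl get svc my-release-fn-api --namespace default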


After a little while, the change will have been processed and we can find an external endpoint for the service.


      Now we (and anyone who has this IP address) can invoke the Fn function shipping-costs directly using this external IP address:
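For example, with the same payload as before (replace the placeholder with the external IP reported for the service):

curl -X POST \
  http://<EXTERNAL_IP>/r/soaring-clouds-app/shipping-costs \
  -H 'Content-Type: application/json' \
  -d '{"X":"Y"}'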


      Summary

This article showed how to start with a standard Windows laptop – with only VirtualBox and Vagrant as special components. Through a few simple, largely automated steps, we created a VM that allows us to create Fn functions and to deploy those functions to a Kubernetes cluster, onto which we have also deployed the Fn server platform. The article provides all sources and scripts and demonstrates how to create, deploy and invoke a specific function.

      Resources

      Sources for this article in GitHub: https://github.com/lucasjellema/fn-on-kubernetes-from-docker-in-vagrant-vm-on-windows

      Vagrant home page: https://www.vagrantup.com/

      VirtualBox home page: https://www.virtualbox.org/ 

      Quickstart for Helm: https://docs.helm.sh/using_helm/#quickstart-guide

      Fn Project Helm Chart for Kubernetes – https://medium.com/fnproject/fn-project-helm-chart-for-kubernetes-e97ded6f4f0c

      Installation instruction for kubectl – https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl

      Project Fn – Quickstart – https://github.com/fnproject/fn#quickstart

      Tutorial for Fn with Node: https://github.com/fnproject/fn/tree/master/examples/tutorial/hello/node

      Kubernetes – expose external IP address for a Service – https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/

      Use Port Forwarding to Access Applications in a Cluster – https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/

      AMIS Technology Blog – Rapid first few steps with Fn – open source project for serverless functions – https://technology.amis.nl/2017/10/19/rapid-first-few-steps-with-fn-open-source-project-for-serverless-functions/

      AMIS Technology Blog – Create Debian VM with Docker Host using Vagrant–automatically include Guest Additions – https://technology.amis.nl/2017/10/19/create-debian-vm-with-docker-host-using-vagrant-automatically-include-guest-additions/

      The post Get going with Project Fn on a remote Kubernetes Cluster from a Windows laptop–using Vagrant, VirtualBox, Docker, Helm and kubectl appeared first on AMIS Oracle and Java Blog.

      Creating RMAN backups using Commvault

      Thu, 2018-03-01 09:35

One of our customers decided to migrate to Commvault for creating their backups. First they started with OS file backups, but eventually they also wanted to create backups of the Oracle database with Commvault.
Commvault does not replace RMAN. In fact, Commvault generates RMAN commands/scripts which are run against the Oracle database. It can make backups to a storage server.

In this blog article I would like to describe a possible way to make RMAN backups with Commvault. The method described here makes full, incremental and archivelog backups to external storage and also makes an Oracle Recommended Backup in the local FRA of the database. So in fact a twofold backup approach is used here: the first is a classical full and incremental backup approach to external storage; the other is an Oracle Recommended Backup approach to the “local” FRA.

We implement this approach with two Commvault jobs:
1. Full Incremental. This one runs once a day.
2. Incremental. This one runs every 4 hours.
The terms used in Commvault differ a little from the ones used in Oracle, so they can confuse the “standard” Oracle DBA a bit. A sketch of the RMAN statements that the Oracle Recommended Backup part corresponds to follows the two job outlines below.

      The full incremental backup job we execute consists of the following steps:
      1. Full backup: to external commvault storage

      a. full backup = incremental level 0 backup to external commvault storage
      b. autobackup controlfile to external commvault storage

      2. Oracle Recommended backup: to FRA on local server

      a. backup incremental level 1 for recover of copy: to FRA
      b. recover of copy of database: in FRA
      c. autobackup controlfile: to FRA

      3. backup of archivelog: to external commvault storage

      a. backup of archivelog: to external commvault storage
      b. autobackup controlfile: to external commvault storage

      The incremental backup job consists of:
      1. Incremental level 1 backup: to external commvault storage

      a. incremental level 1 backup to external commvault storage
      b. autobackup controlfile to external commvault storage

      2. Oracle Recommended backup: to FRA on local server

      a. backup incremental level 1 for recover of copy: to FRA
      b. recover of copy of database: in FRA
      c. autobackup controlfile: to FRA

      3. backup of archivelog: to external commvault storage

      a. backup of archivelog: to external commvault storage
      b. autobackup controlfile: to external commvault storage
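As announced above, this is a sketch of the kind of RMAN statements the Oracle Recommended Backup part of these jobs corresponds to, using the COPY_DB tag that is configured later in this article; the script Commvault actually generates will differ in its details:

rman target / <<EOF
RUN {
  # incremental level 1 backup that can be merged into the datafile image copies
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'COPY_DB' DATABASE;
  # roll the image copies in the FRA forward with that incremental backup
  RECOVER COPY OF DATABASE WITH TAG 'COPY_DB';
}
EOF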

       

      Creating a backup job in Commvault:

      The following steps should be executed to implement this backup strategy:

      First the Commvault console should be started in a browser. That should be done by opening the following URL:

      http://[servername].[domainname]/console/

      You then should provide your credentials in order to login.

      step 1: configure storage policies

Go to Client Computer Groups, select Oracle and then select the server you want to configure, in this case puhora0004. Then select Oracle under puhora0004. Click with the right mouse button on Oracle (the branch under puhora0004) and select Properties:

      The next screen will be shown:

The DATA Storage Policy and Log Storage Policy should be entered. The system engineer responsible for the Commvault system already made a storage policy for Oracle named SP_NDC1_Oracle, so we entered this storage policy in these fields. See the screenprint above. Click OK.

Back in the last screen, select the database under the puhora0004 – Oracle branch. In this example it is the PRIMRP1P database.

      Click with the right mouse button on this database and select properties. Click on tab Storage Device. Then select in the field Storage Policy  … “SP_NDC1_Oracle”.

      Click OK.

      In the tab on the right select default:

      Click with the right mouse button and select Properties. Select tab Storage Device. Select Storage Policy: SP_NDC1_Oracle:

       

Select the tab Advanced and, if necessary, the sub tab Backup Arguments.
Enter the field Backup Tag. In this example this is COPY_DB:

       

      Select sub tab Options and select Merge Incremental Image Copies:
      (this one switches on Oracle Recommended Backup)

       

      Choose tab Content. Select Selective Online Full:

In the Commvault version we used, there was a bug that causes the storage policy under Oracle to disappear if you configure the storage policy in one of the underlying databases. So after all the storage policies have been applied, you should check that they are all still there.

       

      Step 2: Create a schedule policy

In the left pane go to Policies and then choose Schedule Policies. Click with the right mouse button and select New Schedule Policy:

      Provide your New Schedule Policy with a name.

      Click on Agent type, select Oracle. And then click on the blue colored word Select:

      Click on Add:
      Select Full:

      Choose tab Schedule Pattern: Fill this window for example with the following information:

      Click OK.

      You have now made a schedule for the full backup.
Choose your Start Time with care: the incremental backup should be finished before the Full backup starts. If the incremental backup is still running at the scheduled start time of the Full backup, the Full backup will not start at all. So make sure the incremental backup will not be running at the Start Time you choose for the Full backup.

      Next you make a schedule for the incremental backup.

      Click Add, select Incremental:

      Choose Schedule Pattern. Fill this window for example with the following information:

      Click OK.
      Next, click on tab Associations:

Under the Oracle branch, look up the correct servers and databases;
in this example these are the puhora0004, puhora0005 and puthkd17 servers.
Under the requested database, select the option default.

      Click OK.

We have now made two scheduled jobs (full and incremental) that create backups of 3 databases running on 3 different servers.

You could also decide to make a schedule which creates backups of all databases on one particular server.

      How to view your backup history

You can view which backups have been made on your database with Commvault. To do that, select in the left pane Client Computer Groups – Oracle – [servername – in our example puhora0004] – Oracle – [database name – in our example PRIMRP1P]. Click on PRIMRP1P with the right mouse button and select View and then Backup History.

      Click OK on the following screen called Backup History Filter. Then you will see an overview of the backups that were made on this database.

You can view the status of the job (Completed or Failed) and the type of backup (Full or Incremental). The Start Time, End Time and Duration can also be viewed on this screen.

It is also possible to view the RMAN log of a backup job. You can do this as follows:
Click with your right mouse button on the backup you want to see and choose View RMAN log:

      Then you can read the log of the RMAN backup:

       

      The post Creating RMAN backups using Commvault appeared first on AMIS Oracle and Java Blog.

      Some of my Solutions for challenges with Oracle JET

      Tue, 2018-02-27 08:46

This article is not some sophisticated treatise on Oracle JET fundamentals. It is merely a collection of challenges I had to deal with and found solutions for – solutions that work, even if they are perhaps not the best approach around. This article is first of all a personal notebook. If you can get anything useful from it, then by all means take it and enjoy it.

      The code for the application referenced in this article can be found on GitHub: https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel.

      How to define a global context that is accessible from all modules?

      The challenge is a simple one: I want to be able to set a value in one module and have access to that value in other modules. For example: when I enter my username in one module


I want to make that value available in the very root of the application (index.html and ViewModel appController.js) as well as in a second module, called dashboard (accessible through the Home tab):


      Here I was helped by Geertjan’s article: https://blogs.oracle.com/geertjan/intermodular-communication-in-oracle-jet-part-1

The username field in the customers module is bound to the observable self.username. When the login button is clicked, a function loginButtonClick is invoked on the ViewModel. This function reads the observable’s value, retrieves the root ViewModel (using the Knockout feature dataFor) and sets the username on the global variable – an observable userLogin defined on the root ViewModel. It also sets the observable userLoggedIn on the root ViewModel – a flag that for example controls tabs in the navigation list.

          function CustomersViewModel() {
            var self = this;
            self.username = ko.observable("You");
            self.password = ko.observable();
      
            self.loginButtonClick = function (event) {       
              var rootViewModel = ko.dataFor(document.getElementById('globalBody'));
              rootViewModel.userLogin(self.username());
              rootViewModel.userLoggedIn("Y");
              return true;
            }
      

      The definitions in the Root ViewModel (in appController.js):

       function ControllerViewModel() {
            var self = this;
            // Header
            // Application Name used in Branding Area
            self.appName = ko.observable("Soaring through the Clouds Webshop Portal");
            // User Info used in Global Navigation area
            self.userLogin = ko.observable("Not yet logged in");
            self.userLoggedIn = ko.observable("N");
      

      How to make Tabs (Navigation List Items) conditionally displayed?

In this application, the tabs to be displayed depend on whether or not the user has logged in. My challenge in this case: how to display the tabs (items in an oj-navigation-list component) based on a condition?


      The items are rendered by a template specified on the item.renderer attribute of the oj-navigation-list.

      It turns out that by data-binding the visible attribute on the outermost HTML element in the template, I can have the tabs rendered based on a logical expression referencing the global (root) variable that indicates whether or not the user is logged in:

       <!-- Template for rendering navigation items -->
          <script type="text/html" id="navTemplate">      
            <li data-bind="visible: (!$data['loggedInOnly']|| $root.userLoggedIn() =='Y')"><a href="#">
                <span data-bind="css: $data['iconClass']"></span>
                <!-- ko text: $data['name'] --> <!--/ko-->
              </a></li> 
          </script>
      

      The definition of the navigation list items is in appController.js; the array navData contains the items that can be turned into tabs. Each item defined the name (the label displayed to the user) as well as the associated module and the iconClass. I have added an optional property  loggedInOnly – that indicates whether or not a tab should be displayed only when the user is logged in; this property is used in the expression in the template shown overhead.

            // Navigation setup
            var navData = [
              {
                name: 'Home', id: 'dashboard', loggedInOnly: false,
                iconClass: 'oj-navigationlist-item-icon demo-icon-font-24 demo-chart-icon-24'
              },
              {
                name: 'Browse Catalog', id: 'products',
                iconClass: 'oj-navigationlist-item-icon demo-icon-font-24 demo-fire-icon-24'
              },
              {
                name: 'Browse Orders', id: 'orders', loggedInOnly: true,
                iconClass: 'oj-navigationlist-item-icon demo-icon-font-24 demo-people-icon-24'
              },
              {
                name: 'Your Profile', id: 'customers', loggedInOnly: true,
                iconClass: 'oj-navigationlist-item-icon demo-icon-font-24 demo-info-icon-24'
              }
            ];
            self.navDataSource = new oj.ArrayTableDataSource(navData, { idAttribute: 'id' });
      

      How to dynamically set the label of oj-option elements

      The drop down menu contains an item that changes its label, depending on whether the user is logged in.


      I wanted to have two items and control their visibility – for some reason I got sidetracked and solved it a little differently.

      I have defined a <span> element inside the oj-option and defined the text attribute through a data binding. This binding subsequently uses a ternary expression to determine which label to display:

                    <oj-menu id="menu1" slot="menu" style="display:none" on-oj-action="[[menuItemAction]]">
                      <oj-option id="help" value="help">Help</oj-option>
                      <oj-option id="about" value="about">About</oj-option>
                      <oj-option id="sign" value="sign">
                        <span data-bind="text: (userLoggedIn() =='Y'?'Sign Out':'Sign In/Sign Up')"></span>
                      </oj-option>
                    </oj-menu>
      

      How to react to the User Clicking on a Menu Option in a oj-menu component

Not surprisingly, when the user clicks on a menu item in the drop down menu shown overhead, the application should respond somehow. I was wondering how to trigger my code for the click-a-menu-item event. Then I found out about the on-oj-action attribute on oj-menu.

      <oj-menu id="menu1" slot="menu" style="display:none" on-oj-action="[[menuItemAction]]">
         <oj-option id="help" value="help">Help</oj-option>
         ...
      

It refers to a function in the ViewModel – that can take an event and, from that event’s path[0] element, get the id of the selected oj-option item. It can then do whatever needs to be done.

       self.menuItemAction = function (event) {
              var selectedMenuOption = event.path[0].id
              console.log(selectedMenuOption);
              if (selectedMenuOption == "sign") {
                 ....
      

      How to programmatically navigate to a module – by activating the Router

      When the Sign In/Up option is selected in the menu above, I want the application to navigate to the customers module, where the user can login:


      This navigation is to be done programmatically – in the function handling the click on menu item event. The programmatic manipulation of the router turns out to be extremely simple in the function:

       self.menuItemAction = function (event) {
              var selectedMenuOption = event.path[0].id
              if (selectedMenuOption == "sign") {
                if (self.userLoggedIn() == "N") {
                  // navigate to the module that allows us to sign in
                  oj.Router.rootInstance.go('customers');
                } else {
                  // sign off
                  self.userLogin("Not yet logged in");
                  self.userLoggedIn("N");
                  oj.Router.rootInstance.go('dashboard');
                }
              }
      

      This is all it takes to present the customers.html center page. This of course depends on the module binding in index.html:

      <div role="main" class="oj-web-applayout-max-width oj-web-applayout-content" data-bind="ojModule: router.moduleConfig">
      </div>
      

      and the module definitions in the ViewModel appController.js

           // Router setup
            self.router = oj.Router.rootInstance;
            self.router.configure({
              'dashboard': { label: 'Dashboard', isDefault: true },
              'products': { label: 'Products' },
              'orders': { label: 'Orders' },
              'customers': { label: 'Customers' }
            });
            oj.Router.defaults['urlAdapter'] = new oj.Router.urlParamAdapter();
      

      The post Some of my Solutions for challenges with Oracle JET appeared first on AMIS Oracle and Java Blog.

      Oracle JET Web Applications – Automating Build, Package and Deploy (to Application Container Cloud) using a Docker Container

      Mon, 2018-02-26 07:06

      The essential message of this article is the automation for Oracle JET application of the flow from source code commit to a running application on Oracle Application Container Cloud, as shown in this picture:


      I will describe the inside of the “black box” (actually light blue in this picture) where the build, package and deploy are done for an Oracle JET application.

The outline of the approach: a Docker Container is started in response to the code commit. This container contains all tooling that is required to perform the necessary actions, including the scripts to actually run those actions. When the application has been deployed (or the resulting package is stored in an artifact repository) the container can be stopped. This approach is very clean – intermediate products that are created during the build process simply vanish along with the container. A fresh container is started for the next iteration.

      Note: the end to end build and deploy flow takes about 2 to 3 minutes on my environment. That obviously would be horrible for a simple developer round trip, but is actually quite acceptable for this type of ‘formal’ release to the shared cloud environment. This approach and this article are heavily inspired by this article (Deploy your apps to Oracle Cloud using PaaS Service Manager CLI on Docker) on Medium by Abhishek Gupta (who writes many very valuable articles, primarily around microservices and Oracle PaaS services such as Application Container Cloud).

Note: this article focuses on final deployment of the JET application to Application Container Cloud. It would however be quite simple to modify (in fact to simplify) the build container to not deploy the final ZIP file to Application Container Cloud, but instead push the file to an artifact repository or deploy to some other type of runtime platform. It would not be very hard to take the ZIP file and create a fresh Docker Container with that file that can be deployed on a Kubernetes cluster or any Docker runtime such as Oracle Container Cloud.

      The sources – including a sample JET Application – are in this GitHub repo: https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel .

      The steps I describe in this article are:

      • preparation of the Docker Container that will do the build-package-deploy actions
      • preparation of the Oracle JET application – to be turned from a locally run, developer only client side web application into a stand-alone runnable enterprise web app with server side platform (Node with Express)
      • creation of the build script that will run inside the container and orchestrate the actions by the available tools to take the source all the way to the cloud
      • putting it all together

       

      1. Preparation of the Docker Container that will do the build-package-deploy actions

      The first step is the composition of the Docker Container. For this step, I have made good use of Abhishek’s article and the dockerfile he proposes in that article. I complemented Abhishek’s Dockerfiles with the tooling required for building Oracle JET applications.

      A visual presentation of what the Docker Container will contain – and the steps made to put it together – is shown below:


Note: it is fun to bake Docker Container Images completely through a Docker file – and it is certainly convenient to share the instructions for creating a Docker Container image in the form of a Docker file. However, when the steps are complex to automate through a Docker file, there is a simple alternative: build as much of the container as you can through a Docker file. Then run the container and complete it through manual steps. Finally, when the container does what you need it to do, you can commit the state of the container as your reusable container image. And perhaps at this point, you can try to extend the Docker file with some of the manual steps, if you feel that maintaining the image will be a frequently recurring task.

      The Docker build file that I finally put together is included below. The key steps:

• the container is based on the “python:3.6.2-alpine3.6” image; this is done mainly because the PSM (Oracle PaaS Service Manager) command line tool requires a Python runtime environment
      • the apk package manager for Alpine Linux is used several times to add required packages to the image; it adds curl, zip, nodejs, nodejs-npm, bash, git and openssh
      • download and install the Oracle PSM command line tool (a Python application)
      • set up PSM for the target identity domain and user
      • install the Oracle JET Command Line tool that will be used for building the JET web application
      • copy the script build-app.sh that will be executed to run the end-to-end build-package-deploy flow

       

      # extended from https://medium.com/oracledevs/quick-start-docker-ized-paas-service-manager-cli-f54eaf4ebcc7
      # added npm, ojet-cli and git
      
      FROM python:3.6.2-alpine3.6
      
      ARG USERNAME
      ARG PASSWORD
      ARG IDENTITY_DOMAIN
      ARG PSM_USERNAME
      ARG PSM_PASSWORD
      ARG PSM_REGION
      ARG PSM_OUTPUT
      
      
      WORKDIR "/oracle-cloud-psm-cli/"
      
      RUN apk add --update curl && \
          rm -rf /var/cache/apk/*
      
      RUN curl -X GET -u $USERNAME:$PASSWORD -H X-ID-TENANT-NAME:$IDENTITY_DOMAIN https://psm.us.oraclecloud.com/paas/core/api/v1.1/cli/$IDENTITY_DOMAIN/client -o psmcli.zip && \
      	pip3 install -U psmcli.zip 
      
COPY psm-setup-payload.json .
      RUN psm setup -c psm-setup-payload.json
      
      RUN apk add --update nodejs nodejs-npm
      RUN apk add --update zip
      
      RUN npm install -g @oracle/ojet-cli
      
      RUN apk update && apk upgrade &&  apk add --no-cache bash git openssh
      
      COPY build-app.sh .
      
      CMD ["/bin/sh"]
      

      Use this command to build the container:

docker build --build-arg USERNAME="your ACC cloud username" --build-arg PASSWORD="the ACC cloud password" --build-arg IDENTITY_DOMAIN="your identity domain" --build-arg PSM_REGION="us" --build-arg PSM_OUTPUT="json" -t psm-cli .

      assuming that this command is run in the directory where the docker file is located.

This will create a container and tag it as image psm-cli. When this command completes, you can find the container image by running “docker images”. Subsequently, you can run a container based on the image: “docker run --rm -it psm-cli”

         

        2. Preparation of the Oracle JET application

When developing a JET (4.x) application, we typically use the Oracle JET CLI – the command line tool that helps us to quickstart a new application, create composite components, and serve the application locally to a browser while we are developing it, with instant updates on any file change. The JET CLI is also used to build the application for release. The result of this step is the complete set of files needed to run the JET application in the browser. In order to actually offer the JET application to end users, it has to be served from a ‘web serving’ platform component – such as nginx or a backend in Python, Java or Node. Frequently, the JET application will require some server side facilities that the backend serving the static JET application resources can also provide. For that reason, I select a JET serving backend that I can easily leverage for these server side facilities; for me, this is currently Node.

        In order to create a self running JET application for the JET application built in the pipeline discussed in this article, I have added a simple Node & Express backend.

I have used npm to create a new Node application (npm init jet-on-node). I have next created the directory bin and the file www. This file is the main entry point into the Node application that serves the JET application; it delegates most work to module app, which is loaded from file app.js in the root of this Node application, path /jet-on-node.

         


All static resources that the browser can access (including the JET application) go into the folder /jet-on-node/public. Module app defines – through Express – that requests for public resources (requests not handled by one of the URL path handlers) are taken care of by serving resources from the directory /public. Module app can handle other HTTP requests – for example from the JET application – and it could also implement the backend for Server Sent Events or WebSockets. Currently it handles the REST GET request to path “/about” that returns some key data for the application.


The dependencies for the jet-on-node application are defined in package.json; during the build process of the final application, we will use “npm install” to add the required server side Node modules.

        At this point, we have extended our code base with a simple landing platform for the JET application that can serve the application at runtime. All that remains is to take all content under the /web directory and copy it to the jet-on-node/public folder. Then we can run the application using “npm start” in directory jet-on-node. This will execute the start script in file package.json – which is defined as “node ./bin/www”.
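For a quick local smoke test this boils down to the following (a sketch; the port depends on what bin/www configures, 3000 is merely the common Express default and an assumption here):

# copy the built JET artifacts into the public folder of the Node application
cp -a ./web/. ./jet-on-node/public

# install the server side dependencies and start the Node backend
cd jet-on-node
npm install
npm start &

# check the server side /about resource (the port is an assumption)
curl http://localhost:3000/about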

         

        3. Creation of the build script that will run inside the container and
        orchestrate the actions

        The JET build container is available. The JET application is available from a Git repository (in my example in GitHub). A number of steps are now required to go to a running application on Application Container Cloud. The first steps are shown below:

         


        1. Clone the Git repo that contains the JET application (or pull the latest sources or a specific tag)

        2. Install all modules required by the JET application – by running npm install

3. Use the Oracle JET command line utility to build the application for release: ojet build --release

        After this step, all run time artifacts – including the JET libraries – are in the /web directory. These next steps turn these artifacts into a running application:

        4. Copy the contents of /web to /jet-on-node/public

        5. Install the modules required for the server side Node application by running npm install in directory jet-on-node

        6. Create a single zip file for all artifacts in the /jet-node directory – that includes both the JET application and its server side backend Node application. This zip-file is the release artifact for the JET application. As such, it can be pushed to an artifact repository or deployed to some other platform.

        7. Engage psm command line interface (Oracle PaaS Service Manager CLI) to perform deployment of the zip file to the Application Container Cloud for which psm already as configured during the creation of the build container.

        Note: the files manifest.json and deployment.json in the root of jet-on-node provide instructions to PSM and Application Container Cloud regarding the run time settings for this application – including the runtime version of Node, the command for starting the application, the runtime memory per instance and the number of instances as well as the values of environment variables to be passed to the application.
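For illustration, the two files could look roughly like this for the Node application (a sketch with assumed values; check the GitHub repo and the ACCS documentation for the exact content):

# manifest.json: runtime version and start command (the values are assumptions)
cat > manifest.json <<EOF
{
  "runtime": { "majorVersion": "8" },
  "command": "npm start"
}
EOF

# deployment.json: memory, instance count and environment variables (the values are assumptions)
cat > deployment.json <<EOF
{
  "memory": "1G",
  "instances": "1",
  "environment": { "APP_VERSION": "1.2.1" }
}
EOF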


        The shell-script build-app.sh (you may have to explicitly make this script executable, using “chmod u+x build-app.sh”) performs the steps described above (although perhaps not in the optimal way – feel free to fine tune and improve and let me know about it).

        #git clone https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel
        # cd webshop-portal-soaring-through-the-cloud-native-sequel
        
        git pull
        wait
        
        npm install
        wait
        ojet build --release
        wait
        cp -a ./web/. ./jet-on-node/public
        wait
        cd jet-on-node
        wait
        npm install
        wait
        zip -r webshop.zip .
        wait
        cd /oracle-cloud-psm-cli/webshop-portal-soaring-through-the-cloud-native-sequel/jet-on-node
        
        psm accs push -n SoaringWebshopPortal -r node -s hourly -d deployment.json -p webshop.zip
        

        The end-to-end flow through the build container during the release of the latest version of the JET application can now be depicted like this:


         

        4. Putting it all together

        I will now try to demonstrate how this all works together. In order to do so, I will go through these steps – and illustrate them with screenshots:

        • make a change in the JET application
        • commit and push the change (to GitHub)
        • run the Docker build container psm-cli
        • run the script build-app.sh
        • wait for about three minutes (check the output in the build container and the application status in the ACC console)
        • access the updated Web Application

        The starting point for the application:


        1. Make a change

The label Shopping Basket – next to the icon – seems superfluous, so I will remove it. And I will increase the version number from v1.2.0 to v1.2.1.


         

          2. commit and push the change (to GitHub)


          The change is accepted in GitHub:


           

          3. Run the Docker build container psm-cli

Run the Docker Quickstart Terminal (I am on Windows) and perform: “docker run --rm -it psm-cli”


           

          At this point, I lack a little bit of automation. The manual step I need to take (just the first time round) is to clone the JET application’s Git repository:

git clone https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel

          and to move to the created directory

          cd webshop-portal-soaring-through-the-cloud-native-sequel/

          and to make the file build-app.sh executable:

          chmod u+x build-app.sh


Note: as long as the container keeps running, I only have to run “git pull” and “./build-app.sh” for every next update to the JET application. The next step would be to configure a web hook that is triggered by the relevant commit in the GitHub repository.
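As a small step towards that automation, the whole flow can also be run as a one-shot container invocation (a sketch, assuming the psm-cli image and the build-app.sh script described above):

# clone, build and deploy in a fresh container and throw the container away afterwards
docker run --rm psm-cli /bin/bash -c "git clone https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel && cd webshop-portal-soaring-through-the-cloud-native-sequel && chmod u+x build-app.sh && ./build-app.sh"

A web hook receiver could then fire exactly this command on every relevant commit.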

           

          4. run the script build-app.sh

          ./build-app.sh


Wait for about three minutes; check the output in the build container and the application status in the ACC console.

           

          5. access the updated Web Application


          As you can see, after committing and pushing the change, I only had to run a simple command line command to get the application fully rebuilt and redeployed. After stopping the Docker container, no traces remain of the build process. And I can easily share the container image with my team members to build the same application or update to also build other or additional JET applications.

           

          Resources

          The inspirational article by Abhishek Gupta: https://medium.com/oracledevs/quick-start-docker-ized-paas-service-manager-cli-f54eaf4ebcc7

          The sources – including a sample JET Application – are in this GitHub repo: https://github.com/lucasjellema/webshop-portal-soaring-through-the-cloud-native-sequel .

          Oracle JET Command Line Interface: https://github.com/oracle/ojet-cli

          Docs on the Oracle PSM (PaaS Service Manager) CLI: https://docs.oracle.com/en/cloud/paas/java-cloud/pscli/abouit-paas-service-manager-command-line-interface.html

          Node & Express Tutorial Part 2: Creating a skeleton website: https://developer.mozilla.org/en-US/docs/Learn/Server-side/Express_Nodejs/skeleton_website

          Serving Public Files with Express – https://expressjs.com/en/starter/static-files.html

          Documentation for Oracle Application Container Cloud: https://docs.oracle.com/en/cloud/paas/app-container-cloud/dvcjv/getting-started-oracle-application-container-cloud-service.html

           

            The post Oracle JET Web Applications – Automating Build, Package and Deploy (to Application Container Cloud) using a Docker Container appeared first on AMIS Oracle and Java Blog.

            Pure Client Side Event Exchange between ADF Taskflows and Rich Client Web Applications such as Oracle JET, Angular and React

            Fri, 2018-02-23 03:41

For one of our current projects I have done some explorations into the combination of ADF (and WebCenter Portal in our specific case) with JET. Our customer has existing investments in WC Portal and many ADF Taskflows and is now switching to JET as a WebApp implementation technology – for reasons of better user experience and especially better availability of developers. I believe that this is a situation that many organizations are in or are contemplating (including those who want to extend Oracle E-Business Suite or Fusion Apps). This is not the ideal green field technology mix of course. However, if either WebCenter Portal (heavily steeped in ADF) or an existing enterprise ADF application is the starting point for new UI requirements, you are bound to end up with a combination of ADF and the latest and greatest technology used for building those requirements.

We have to ensure that the rich client based ‘Portlets’ are nicely embedded in the ADF host environment. We also have to take care that events triggered by user actions in the ADF UI areas are communicated to the embedded rich client based UI areas in the page and lead to appropriate actions over there – and the same for actions in the embedded UI areas and events flowing in the other direction.

In two previous articles (Publish Events from any Web Application in IFRAME to ADF Applications and Communicate events in ADF based UI areas to embedded Rich Client Applications such as Oracle JET, Angular and React), I have described how events in the embedded area are communicated to the ADF side of the fence and lead to UI synchronization, and similarly how events in the traditional ADF based UIs are communicated to the embedded areas and trigger the appropriate synchronization. The implementation described in these articles is based on pure, native ADF mechanisms such as server listener, contextual event and partial page refresh, in combination with the standard HTML5 mechanism for publishing events on embedded IFRAME windows. The route described using these out of the box mechanisms is robust, proven and very decoupled. It allows run time configuration in WebCenter Portal (wiring of taskflows leveraging the contextual event). This route is also somewhat heavy-handed; it is not very fast – depending on network latency to the backend – and it puts additional load on the application server.

            There is a fast, light-weight alternative to the use of contextual (server side) events for communication between areas in an ADF based web page. One that can help with interaction between ADF based areas and non-ADF areas (JET, React, Angular) – but also with interactions between two or more pure ADF areas. An alternative that I believe should be part of the native ADF framework – but is not. This alternative is: the client side event bus.

            The client side event bus is a very simple pure JavaScript client side component – that I have introduced in an earlier article. In essence, this is what it is:


            The client side event bus is loaded in the outermost page and will be available throughout the lifetime of the application. It has a registry of event subscriptions that each consist of the name of the event type and a function reference to the function that should be called to handle the event. Each UI area produced from an ADF Taskflow can contain JavaScript snippets that create event handlers (JavaScript functions) and subscribe those with the event bus for a specific event type. Finally, each UI area can publish an event to the event bus whenever something happens that is worth publishing. Of course this is somewhat loosely stated – we should document with some rigor the client side events that each UI area will publish – and will consume – just like the contextual (server side) events with taskflows. It is my recommendation that for the ADF application as a whole, an event registry is maintained that describes all events that can be published – client side or contextual server side – along with the payload for each event.

            Let’s make use of this client side event bus for the following use case:

            The ADF application embeds a client side web application in an IFRAME in an ADF Taskflow – ADF-JET-Container-taskflow. The application contains a second taskflow – ADF-X-taskflow – that is pure ADF, no embedding whatsoever. The challenge: an event taking place in the client side UI area produced from ADF-X-taskflow should have an effect in the client side web application – plain HTML5 or Oracle JET – in the IFRAME in the UI area produced from the other ADF Taskflow, and we want this effect to be produced as quickly and smoothly as possible and given the nature of the event and the effect there is no need for server side involvement. In this case, using contextual events is almost wasteful – it is not simple to implement, it is not efficient or fast to execute and it does not buy us anything in terms of additional security, scalability or functionality. So let’s use this client side event bus.

            The steps to implement – on top of the ADF application with the index.jsf page, the two ADF Taskflows with their respective views and the embedded IFRAME plus web application – are as follows:

            (note: all code can be found on GitHub: https://github.com/lucasjellema/WebAppIframe2ADFSynchronize/releases/tag/v3.0)

             

            1. Create JavaScript library adf-client-event-bus.js with the functionality to record subscriptions and forward published events to the event handlers for the specific event types

            var subscriptions = {};

            // publish an event: invoke the callback of every subscription registered for this event type
            function publishEvent(eventType, payload) {
                console.log('Event published of type ' + eventType);
                console.log('Event payload ' + JSON.stringify(payload));
                // find all subscriptions for this event type
                if (subscriptions[eventType]) {
                    // loop over subscriptions and invoke the callback function for each subscription
                    for (var i = 0; i < subscriptions[eventType].length; i++) {
                        var callback = subscriptions[eventType][i];
                        try {
                            callback(payload);
                        }
                        catch (err) {
                            console.log("Error in calling callback function to handle event. Error: " + err.message);
                        }
                    } //for
                } //if
            } //publishEvent

            // register an interest in an eventType by providing a callback function that takes a payload parameter
            function subscribeToEvent(eventType, callback) {
                if (!subscriptions[eventType]) { subscriptions[eventType] = []; }
                subscriptions[eventType].push(callback);
                console.log('added subscription for eventtype ' + eventType);
            } //subscribeToEvent
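            Note that this library only ever adds subscriptions. For this example that is fine, because the subscribing taskflows live as long as the page; if your UI areas are refreshed or discarded, you may want to clean up as well. A possible unsubscribe helper, not part of the library used in this article, merely a sketch against the same subscriptions structure:

            // remove a previously registered callback for an eventType (sketch, not in the original library)
            function unsubscribeFromEvent(eventType, callback) {
                if (subscriptions[eventType]) {
                    var index = subscriptions[eventType].indexOf(callback);
                    if (index > -1) {
                        subscriptions[eventType].splice(index, 1);
                        console.log('removed subscription for eventtype ' + eventType);
                    }
                }
            } //unsubscribeFromEvent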
            

            2. Add adf-client-event-bus.js to the main index.jsf page.

                    <af:resource type="javascript" source="/resources/js/adf-client-event-bus.js"/>
            

            3. Add a client listener to the input component on which the event of interest takes place. In this case: a selectOneChoice from which the user selects a country in view.jsff in taskflow ADF-X-taskflow:

                    <af:selectOneChoice label="Choose a country" id="soc1" autoSubmit="false" valueChangeListener="#{pageFlowScope.detailsBean.countryChangeHandler}">
                        <af:selectItem label="The Netherlands" value="nl" id="si1"/>
                        <af:selectItem label="Germany" value="de" id="si2"/>
                        <af:selectItem label="United Kingdom of Great Britain and Northern Ireland" value="uk" id="si3"/>
                        <af:selectItem label="United States of America" value="us" id="si4"/>
                        <af:selectItem label="Spain" value="es" id="si5"/>
                        <af:selectItem label="Norway" value="no" id="si6"/>
                        <af:clientListener method="countrySelectionListener" type="valueChange"/>
                    </af:selectOneChoice>
            

            The client listener is configured to invoke the client side JavaScript function countrySelectionListener.

            4. Add the function countrySelectionListener to the adf-x-taskflow-client.js JavaScript library that is associated with the page(s) in the ADF X taskflow; this function publishes the client side event countrySelectionEvent.

            function countrySelectionListener(event) {
                var selectOneChoice = event.getSource();
                var newValue = selectOneChoice.getSubmittedValue();
                var selectItems= selectOneChoice.getSelectItems();
                var selectedItem = selectItems[newValue];
                publishEvent("countrySelectionEvent", 
                {
                    "selectedCountry" : selectedItem._label, "sourceTaskFlow" : "ADF-X-taskflow"
                });
            }
            

            5. Add function handleCountrySelection to the adf-jet-client-app.js JavaScript library that is associated with the JETView.jsff container page in the ADF-JET-Container-taskflow; this function will handle the client event countrySelectionEvent by posting an event message to the IFRAME that contains the client side web application. Also add the call to subscribe this function with the client event bus for events of this type:

            subscribeToEvent("countrySelectionEvent", handleCountrySelection);
            function handleCountrySelection(payload) {
                var country= payload.selectedCountry;
                var message = {
                    'eventType' : 'countryChanged', 'payload' : country
                };
                postMessageToJETIframe(message);
            }
            //handleCountrySelection
            

            6. Add JavaScript code in view.xhtml in the client side web app to process an incoming message event of type countryChanged. This event will trigger an update in the UI.

            <html xmlns="http://www.w3.org/1999/xhtml">
                <head>
                    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
                    <title>Client Side Web App</title>
                    <script>
                        function init() {
                            // attach listener to receive message from parent; this is not required for sending messages to the parent window
                            window.addEventListener("message", function (event) {
                                console.log("Iframe receives message from parent" + event.data);
                                if (event.data && event.data.eventType == 'countryChanged' && event.data.payload) {
                                    var countrySpan = document.getElementById('currentCountry');
                                    countrySpan.innerHTML = "Fresh Country: " + event.data.payload;
                                }
                            },
                            false);
                        }
                        //init
                        document.addEventListener("DOMContentLoaded", function (event) {
                            init();
                        });
                    </script>
                </head>
                <body>
                    <h2>Client Web App</h2>
                    Country is
                    <span id="currentCountry"></span>
                </body>
            </html>
            

            image

             

            When the user selects a country using the dropdown list in the ADF X area, the selected country name is displayed almost instantaneously in the IFRAME area that hosts the rich client web application.

            Client Side Event Flow from Embedded Web Application (in IFRAME) to ADF powered Area

            Our story would not be complete if we did not also discuss the flow from the embedded UI area to the ADF based UI. It is very similar of course to what we described above. The event originates in the web application and is communicated from within the IFRAME to the parent window and handled by a JavaScript handler loaded for the ADF JET Container Taskflow. This handler publishes a client event with the client side event bus. In this case, a subscription for this event was created from the adf-x-taskflow-client.js library, subscribing a handler function handleDeepMessageSelection that updates the client side message component.

            The detailed steps and code snippets:

            1. Add code in view.xhtml to publish a message to the parent window with the message entered by a user in the text field

            <html xmlns="http://www.w3.org/1999/xhtml">
                <head>
                    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
                    <title>Client Side Web App</title>
                    <!-- <script src="client-web-app-lib.js"></script> -->
                    <script>
                        function callParent() {
                            console.log('send message from Web App to parent window');
                            var jetinputfield = document.getElementById('jetinputfield');
                            var inputvalue = jetinputfield.value;

                            var message = {
                                "message" : {
                                    "value" : inputvalue
                                   ,"eventType" : "deepMessage"
                                },
                                "mydata" : {
                                    "param1" : 42, "param2" : "train"
                                }
                            };
                            // here we can restrict which parent page can receive our message
                            // by specifying the origin that this page should have
                            var targetOrigin = '*';
                            parent.postMessage(message, targetOrigin);
                        }
                        //callParent
                    </script>
                </head>
                <body>
                    <h2>Client Web App</h2>
                    <input id="jetinputfield" type="text" value="Default"/>
                    <a href="#" onclick="callParent()">Send Message</a>
                </body>
            </html>
            

            2. Attach a message event listener in the JavaScript library adf-jet-client-app.js for message events (for example from the embedded IFRAME); in this handler, publish the event as a deepMessageEvent on the client side event bus

            function init() {
                window.addEventListener("message", function (event) {
                    console.log("Parent receives message from iframe " + JSON.stringify(event.data));
                    var data = event.data;
                    var message = data["message"];

                    if (data && message) {
                        if (message['eventType'] == 'deepMessage') {
                            console.log("ADF JET Container Taskflow received deep message event from web App");
                            // publish the message value on the client side event bus as a deepMessageEvent
                            publishEvent("deepMessageEvent",
                            {
                                "message" : message.value
                               ,"sourceTaskFlow" : "ADF-JET-container-taskflow"
                               ,"eventOrigin" : "JET:jet-embedded"
                            });
                        }
                    }
                },
                false);
            }

            document.addEventListener("DOMContentLoaded", function (event) {
                init();
            });
            

            3. From the adf-x-taskflow-client.js library, subscribe a function as event handler for the deepMessageEvent with the client side event bus

            subscribeToEvent("deepMessageEvent", handleDeepMessageSelection);
            
            function handleDeepMessageSelection(payload) {
                console.log("DeepMessageEvent consumed in ADF X Taskflow" + JSON.stringify(payload));
                var message = payload.message;
                // find inputText component using its fake styleClass: messageInputHandle
                //         <af:inputText label="Message" id="it1" columns="120" rows="1" styleClass="messageInputHandle"/>
                var msgInputFieldId = document.getElementsByClassName("messageInputHandle")[0].id;
                var msgInputText = AdfPage.PAGE.findComponentByAbsoluteId(msgInputFieldId);
                msgInputText.setValue(message);    
            }
            

            This function extracts the value of the message and sets an inputText component with that value – on the client.

             

            The end to end flow looks like this:

             

             

            image

             

            Client Side Interaction with JET application

            The interaction as described above with a plain HTML5 web application embedded in an ADF application is not any different when the embedded application is an Oracle JET application. The figure below shows an example of a JET application embedded in the JET Client area. It consumes two client side events from the ADF parent environment: countrySelectionEvent and colorSelectionEvent. It publishes an event of its own: browserSelectionEvent. All interaction around these events with the client side event bus is taken care of by the ADF JET Container Taskflow. All interaction between the JET application and the ADF JET Container Taskflow is handled through the postMessage mechanism on the IFRAME’s content window and its parent window.

            image

            The salient code snippets in the JET application are:

            The ViewModel:

            
            define(
                ['ojs/ojcore', 'knockout', 'jquery', 'ojs/ojknockout', 'ojs/ojinputtext', 'ojs/ojselectcombobox'
                ],
                function (oj, ko, $) {
                    'use strict';
                    function WorkareaViewModel() {
                        var self = this;
                        // initialize the observables for country, color and browser
                        self.country = ko.observable("Italy");
                        self.color = ko.observable("Greenish");
                        self.browser = ko.observable("Chrome");
            
                        self.callParent = function (message) {
                            console.log('send message from Web App to parent window');
                            // here we can restrict which parent page can receive our message
                            // by specifying the origin that this page should have
                            var targetOrigin = '*';
                            parent.postMessage(message, targetOrigin);
            
                        }
            
                        self.browserChangedListener = function (event) {
                            var newBrowser = event.detail.value;
                            var oldBrowser = event.detail.previousValue;
            
                            console.log("browser  changed to:" + newBrowser);
                            var message = {
                                "message": {
                                    "eventType": "browserChanged",
                                    "value": newBrowser
                                }
                            };
                            self.callParent(message);
            
                        }
            
                        self.init = function () {
                            // attach listener to receive message from parent; this is not required for sending messages to the parent window
                            window.addEventListener("message", function (event) {
                                console.log("Iframe receives message from parent" + event.data);
                                if (event.data && event.data.eventType == 'countryChanged' && event.data.payload) {
                                    self.country(event.data.payload);
                                }
                                if (event.data && event.data.eventType == 'colorChanged' && event.data.payload) {
                                    self.color(event.data.payload);
                                }
                            },
                            false);
                        } //init

                        $(document).ready(function () { self.init(); })
                    }
            
                    return new WorkareaViewModel();
                }
            );
            

            The View:

            
            <h2>Workarea</h2>

            <div>
                <oj-label for="country-input">Country</oj-label>
                <oj-input-text id="country-input" value="{{country}}"></oj-input-text>

                <h4 data-bind="text: country"></h4>

                <oj-label for="color-input">Color</oj-label>
                <oj-input-text id="color-input" value="{{color}}"></oj-input-text>

                <h4 data-bind="text: color"></h4>

                <oj-label for="combobox">Browser Type Selection</oj-label>
                <oj-combobox-one id="combobox" value="{{browser}}" on-value-changed="{{browserChangedListener}}" style="max-width:20em">
                    <oj-option value="Internet Explorer">Internet Explorer</oj-option>
                    <oj-option value="Firefox">Firefox</oj-option>
                    <oj-option value="Chrome">Chrome</oj-option>
                    <oj-option value="Opera">Opera</oj-option>
                    <oj-option value="Safari">Safari</oj-option>
                </oj-combobox-one>
            </div>
            
            

             

            The corresponding code in the ADF JET client app library:

            var jetIframeClientId = "";
            
            function init() {
                window.addEventListener("message", function (event) {
                    console.log("Parent receives message from iframe " + JSON.stringify(event.data));
                    var data = event.data;
                    var message = data["message"];

                    if (data && message) {
                        if (message['eventType'] == 'browserChanged') {
                            console.log("ADF JET Container Taskflow received browser changed event from JET App");
                            var browser = message.value;
                            // publish the browser selection on the client side event bus
                            publishEvent("browserSelectionEvent",
                            {
                                "selectedBrowser" : browser
                               ,"sourceTaskFlow" : "ADF-JET-container-taskflow"
                               ,"eventOrigin" : "JET:jet-embedded"
                            });
                        }
                    }
                },
                false);
            }

            document.addEventListener("DOMContentLoaded", function (event) {
                init();
            });
            
            function findIframeWithIdEndingWith(idEndString) {
                var iframe;
                var iframeHtmlCollectionArray = document.getElementsByTagName("iframe");
                //http://clubmate.fi/the-intuitive-and-powerful-foreach-loop-in-javascript/#Looping_HTMLCollection_or_a_nodeList_with_forEach
                [].forEach.call(iframeHtmlCollectionArray, function (el, i) {
                    if (el.id.endsWith(idEndString)) {
                        iframe = el;
                    }
                });
                return iframe;
            }
            
            function processCountryChangedEvent(newCountry) {
                console.log("Client Side handling of Country Changed event; now transfer to IFRAME");
            
                var message = {
                    'eventType' : 'countryChanged', 'payload' : newCountry
                };
                postMessageToJETIframe(message);
            }
            
            function postMessageToJETIframe(message) {
                var iframe = findIframeWithIdEndingWith('jetIframe::f');
                var targetOrigin = '*';
                iframe.contentWindow.postMessage(message, targetOrigin);
            }
            
            subscribeToEvent("colorSelectionEvent", handleColorSelection);
            
            function handleColorSelection(payload) {
                console.log("ColorSelectionEvent consumed " + JSON.stringify(payload));
                var color = payload.selectedColor;
                console.log("selected color " + color);
                var message = {
                    'eventType' : 'colorChanged', 'payload' : color
                };
                postMessageToJETIframe(message);
            }
            //handleColorSelection
            
            
            subscribeToEvent("countrySelectionEvent", handleCountrySelection);
            function handleCountrySelection(payload) {
                var country= payload.selectedCountry;
                var message = {
                    'eventType' : 'countryChanged', 'payload' : country
                };
                postMessageToJETIframe(message);
            }
            //handleCountrySelection
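
            One final remark on the code above: all postMessage calls use '*' as the target origin, and the message listeners accept messages from any window. That is convenient for a demo, but in a real deployment you would typically restrict both sides to the origins you trust. A sketch of what that could look like, with https://webapp.example.com as a placeholder for wherever the embedded application is actually served from:

            // placeholder origin of the embedded web application - adjust to your deployment
            var trustedOrigin = "https://webapp.example.com";

            function postMessageToJETIframe(message) {
                var iframe = findIframeWithIdEndingWith('jetIframe::f');
                // only a window whose origin matches trustedOrigin will receive the message
                iframe.contentWindow.postMessage(message, trustedOrigin);
            }

            // and on the receiving side: ignore messages from unexpected origins
            window.addEventListener("message", function (event) {
                if (event.origin !== trustedOrigin) {
                    return;
                }
                // ... handle event.data as shown above
            }, false);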
            
            
            Resources

            Sources for this article: https://github.com/lucasjellema/WebAppIframe2ADFSynchronize. (Note: this repository also contains the code for the flows from the JET IFRAME to the ADF Taskflow X and back via the server side – the traditional ADF approach.)

            Blog Client Side Event Bus in Rich ADF Web Applications – for easier, faster decoupled interaction across regions : https://technology.amis.nl/2017/01/11/client-side-event-bus-in-rich-adf-web-applications-for-easier-faster-decoupled-interaction-across-regions/

            Docs on postMessage: https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage

            The post Pure Client Side Event Exchange between ADF Taskflows and Rich Client Web Applications such as Oracle JET, Angular and React appeared first on AMIS Oracle and Java Blog.
