Amis Blog

Friends of Oracle and Java

Oracle Open World; day 2 – highlights

Tue, 2017-10-03 01:45


Oracle Open World really starts on Monday: a whole lot of general sessions with announcements and strategic directions on product / service level. In this post I’ll try to summarize the highlights of this day. Quite hard, as there are a whole lot of interesting sessions which overlap; there is always the feeling I’m missing something. And, as a surprise, a very interesting session at the very end of the day.

It started with the keynote of Mark Hurd. No big news, unless you count the revisiting of his predictions of a year ago. These still hold true:

– By 2025 the number of corporate-owned data centers will decrease by 80%

– 80% of production apps will be in the cloud by 2025


Hardware: not shouted out loud, but there’s a new generation of:

– Engineered systems, the X7 line. Engineered for Oracle Cloud Infrastructure (but also for on-premises)

– Sparc servers with the Sparc M8 processor.


As mentioned earlier, Oracle Management Cloud is becoming more dominant as ‘single pane of glass’.


Software: Wim Coekaerts

– Solaris will be supported and developed for a long time!


Oracle Kubernetes Cloud has been launched!

A few slides about key announcements whose value I really can’t judge yet:



The last session was about the new features of Database 18c (December 2017), quite interesting. A few enhancements:

– Performance enhancements through low-latency memory transactions, non-volatile memory support and in-memory column store improvements. Mainly for OLTP and IoT workload improvements.

– For Data Warehousing and Big Data there are new features such as In-Memory for external tables, machine learning algorithms and an alter table merge function.

– Per-PDB switchover

– Centrally managed users in Active Directory

– Improved JSON support



– A REST API to provide instance management and monitoring

– Official Docker support (except for RAC, which is coming)

– SQLcl got more attention

– Gold Images as a new installation approach: delivered as zip or tar file, Docker image or VirtualBox image. Installation through RPM.

– In the first quarter of 2018, 18c XE will be launched, including PDBs; probably 2 GB of memory, 2 CPUs and 12 GB of storage (compressed; net 40 GB). Meant for students and proofs of concept, not for production.



The post Oracle Open World; day 2 – highlights appeared first on AMIS Oracle and Java Blog.

Oracle Open World 2017; day 1 observations

Mon, 2017-10-02 09:41

Just a quick note about day 1 at Oracle Open World. This Sunday is traditionally filled with presentations by user groups, customers and product management, and at the end of the day the welcome keynote. There is no really exciting news in the presentations; that will have to wait until the keynotes. But some observations can already be made.


The big news at the keynote – the Autonomous Database – was not that big anymore: Larry Ellison announced it a week ago, and I did a wrap-up yesterday.

A good summary of the keynote has been written by Business Insider, and here are the highlights on video.

The most important slide of the keynote regarding the changing role of the DBA is included in this post: less time on administration, more time on innovation. Oracle 18c requires no DBA, is highly available and autotunes queries.


So what about the presentations I went to on Sunday? Just a few observations (very limited in scope, of course):

– The phrase ‘Single Pane of Glass’, which was used for Enterprise Manager 13c a while ago, is now being used for the Oracle Management Cloud (OMC). The context and scope are however quite different: OMC is strategically meant for monitoring and managing a complete hybrid cloud environment, including Azure, Amazon and the on-premises environment.

– The word ‘management’ has been inserted in the slides of OMC. Not just monitoring anymore. What are the consequences for Oracle Enterprise Manager?

– Security is on the agenda

– Machine Learning is trending.

– Some services are barely present at this OpenWorld: Oracle Enterprise Manager, Exa-systems (not the cloud services), the WebLogic platform and the Oracle Database Appliance; in summary, a lot of hardware and on-premises management. And when hardware is mentioned, it is just a step towards the final goal: the cloud. With one big exception: the Oracle Cloud Machine for Cloud@Customer.

– A lot of sessions with one theme: journey to the cloud.


As mentioned before, just a quick note for now. Regards.


Announcement autonomous database:

Business insider:

Video highlights:

The post Oracle Open World 2017; day 1 observations appeared first on AMIS Oracle and Java Blog.

The day before OOW17; infra expectations

Sat, 2017-09-30 19:01

Oracle Open World 2017 starts tomorrow, and as a platinum partner of Oracle we – AMIS Services – are obliged to keep ourselves and our customers informed about the roadmap of Oracle products.

And of course to translate this to added business value for our customers. In short: what to pick from all the coming announcements, new features, cloudiness and so on. All the arrows point towards the Oracle Cloud products of course, but first we’ll have to find the answers to two questions: ‘why should a customer go to the cloud?’ and, if yes, ‘what role does Oracle Cloud play in this?’

This week will be – as in recent years – full of announcements. It started early this year with Larry’s pre-announcement last week about the ‘autonomous database’, Bring Your Own License to the Cloud and Universal Credits.

As a reminder, here is a wrap-up of Larry Ellison’s presentation:

– Lowest price for IaaS: the same price as Amazon, but faster, thus cheaper.

– Highest rate of automation in PaaS, which means the lowest TCO. The goal is to guarantee a 50% lower TCO than Amazon.

– The ‘autonomous database’ will be available in December, based on machine learning. This should eliminate human labor (the DBA).

– Service Level Agreement of 99.995%; this amounts to at most 30 minutes of planned and unplanned downtime a year.

– Bring Your Own License for PaaS, 94% cheaper than the old price. This should lower the threshold to use the Oracle Cloud.

– It’s becoming possible to buy Universal Credits, no longer linked to a single Cloud product.

This wrap-up pretty much covers my agenda or, if you will, my focus for this Oracle Open World: primarily TCO, (cloud) platforms, databases, middleware and engineered systems.

Oracle has made some pretty smart moves, which will influence products and topics like:

– Engineered Systems. In general: is there still a business case for buying on-premises hardware?

– The role and future of the DBA. DBAKevlar wrote a nice blog about this.

– Life Cycle Management of the platform. How to cope with the management of a hybrid environment, including these new autonomous databases, and containers?

– Databases. There will be a lot of new features in Oracle 18c.

Tomorrow the exhausting week starts, and I’m not even presenting, nor an ACE or Development Champion! Just a mortal visitor, attending a lot of presentations, keynotes, meetings with product management, network events, appreciation events and dinners with Oracle representatives.

Respect for my colleagues Lucas Jellema and Robert van Mölken. Together they are involved with (or are presenting) 9 presentations.

I also intend to write quite regularly about the things I encounter on my journey this week.



Larry Ellison’s announcement:

DBAKevlar’s blog:

Lucas Jellema and Robert van Mölken presentations / involvements:

The post The day before OOW17; infra expectations appeared first on AMIS Oracle and Java Blog.

Oracle Applications Cloud User Experience Strategy Day– Directions for User Experience

Wed, 2017-09-27 18:02

Today – Wednesday 27th of September – saw close to 50 people gathering for the OAUX (Oracle Applications User Experience) Strategy Day. Some attendees joined from remote locations on three continents, while most of us had assembled in the UX Spaces Lab at Oracle’s Redwood Shores HQ – equipped with some interesting video and audio equipment.

Some important themes for this day:

  • The key message of Simplicity, Mobility and Extensibility is continued; simplicity means: a user experience that is to the point, only drawing a user’s attention to relevant items, only presenting meaningful data and allowing a task to be handled most efficiently.

    In order to achieve this simplicity, quite a bit of smartness is required: user context interpreted by smart apps leads to a simple UX, with chat, voice input, conversational UIs and fully automated processes at the pinnacle. Machine learning is at the heart of this smartness – deriving information from the context, and presenting relevant choices and defaults based on both context and historical patterns.

  • Enterprise Mobility is a key element in the user experience – with a consistent experience yet tailored to the device (one size does not fit all) and the ability to start tasks on one device and continue with them on a different device at a later point in time. The experience should be light on data: only show the absolutely essential information.

  • The latest Oracle Cloud Applications Release – R13 – has some evolution in the UX and UI.

  • There is a move away from using icons for navigating the application – more towards search & notifications. The ability to tailor the look & feel (theming, logo, heading, integrating external UIs) has improved substantially.


  • Conversational UI for the Enterprise is rapidly becoming relevant. Conversational UI for the enterprise complements and replaces the current web & mobile UI – for quick, simple mini-transactions and smart capture. The OAUX team discerns four categories of interactions that conversational interfaces are initially most likely to be used for: Do (quick decisions, approvals, data submission), Lookup (get information), Go To (use conversation as the starting point for a deep-link, context-rich navigation to a dedicated application component) and Decision Making (provide recommendations and guidance to users).

    Some examples of conversational UIs – low-threshold user-to-system interaction for simple questions, requests, actions and submissions:


    Jeremy Ashley introduced the term JIT UI – just in time UI: widgets (buttons, selection lists) that are mixed in with the text based conversational UI (aka chat) to allow easy interaction when relevant; this could also include dynamically generated visualizations for more complex presentation of data.

    The OAUX team makes an RDK (Rapid Development Kit) available for Conversational UI – or actually the first half of the RDK: the part that deals with designing the conversational UI. The part about the actual implementation will follow with the launch of the Oracle Intelligent Bot Cloud Service and associated technology and tooling.


    This new RDK can be found at : . It contains many guidelines on designing conversations – about how to address users, what information and interaction to provide.

  • Another brand new RDK is soon to be released for Oracle JET – aligned with JET 4.0, which is to be released next week at Oracle OpenWorld 2017. This RDK supports development of Oracle JET rich client applications with the same look and feel as the R13 ADF-based Oracle SaaS apps. Assuming that there will be a long period of coexistence between ADF-based frontends and Oracle JET-powered user interfaces, it seems important to be able to develop an experience in JET that is very similar to the one users are already used to in the existing SaaS applications.


    Additionally, the JET RDK will provide guidelines on how to develop JET applications. These guidelines were created in collaboration between the SaaS foundation and development teams, the JET product development team and the OAUX team. They are primarily targeted at Oracle’s own development teams that embrace JET for building SaaS app components, and at other developers creating extensions on top of Oracle SaaS. However, these guidelines are very useful for any development team that uses JET for developing any application. The guidance provided by the RDK resources – as well as potentially the reusable components provided as part of the RDK – embodies best practices and the intent of the JET team, and provides a relevant head start to teams that otherwise would have to reinvent the wheel.

    Here is a screenshot of the sample JET application (R13 style) provided with the RDK:


  • Updates – aligned with Cloud Apps Release 13 – are released for MAF and ADF. Go to to find all resources

    Here is a screenshot of the ADF demo application provided with the ADF RDK:

Some other observations

Any data in a user interface has to be justified. Why should it be there? What will you use it for? What happens if it is not shown? Less is more (or at least: better)

Different generations of users prefer different styles of navigation & interaction; ideally the UX is personalized to cater for that.

An overview of all activities of the OAUX team during Oracle OpenWorld 2017:


The post Oracle Applications Cloud User Experience Strategy Day– Directions for User Experience appeared first on AMIS Oracle and Java Blog.

Oracle SOA Suite and WebLogic: Overview of key and keystore configuration

Sun, 2017-09-24 09:31

Keystores and the keys within can be used for security on the transport layer and application layer in Oracle SOA Suite and WebLogic Server. Keystores hold private keys (identity) but also public certificates (trust). This is important when WebLogic / SOA Suite acts as the server but also when it acts as the client. In this blog post I’ll explain the purpose of keystores, the different keystore types available and which configuration is relevant for which keystore purpose.

Why use keys and keystores?

The below image (from here) illustrates the TCP/IP model and how its different layers map to the OSI model. When in the elaboration below I talk about the application and transport layers, I mean the TCP/IP model layers, and more specifically HTTP.

The two main reasons why you might want to employ keystores are that

  • you want to enable security measures on the transport layer
  • you want to enable security measures on the application layer

Almost all of the below mentioned methods/techniques require the use of keys, and you can imagine that the correct configuration of these keys within SOA Suite and WebLogic Server is very important. They determine which clients can be trusted, how services can be called and also how outgoing calls identify themselves.

You could think transport layer and application layer security are two completely separate things. Often, though, they are not that separate. The combination of transport layer and application layer security has some limitations, and often the same products / components are used to configure both.

  • Double encryption is not allowed. See here: ‘U.S. government regulations prohibit double encryption’. Thus you are not allowed to do encryption on the transport layer and the application layer at the same time. This does not mean it is technically impossible, but you might encounter some product restrictions, since, you know, Oracle is a U.S. company.
  • Oracle Web Services Manager (OWSM) allows you to configure policies that check whether transport layer security is used (HTTPS in this case) and is also used to configure application level security. It is quite common that a single product is used to perform both transport layer and application layer security; API gateway products such as Oracle API Platform Cloud Service are another example.
Transport layer (TLS)

Cryptography is achieved by using keys from keystores. On the transport layer you can achieve authentication, integrity, confidentiality and reliability.

You can read more on TLS in SOA Suite here.

Application layer

On the application level you can achieve similar feats (authentication, integrity, confidentiality, reliability), however often more fine-grained: for example on user level or for a specific part of a message, instead of on host level or for the entire connection. Performance is usually not as good as with transport layer security, because the checks that need to be performed can require actual parsing of messages, instead of securing the transport (HTTP) connection as a whole regardless of what passes through. The implementation depends on the application technologies used and is thus quite variable.

  • Authentication by using security tokens such as for example
    • SAML. SAML tokens can be used in WS-Security headers for SOAP and in plain HTTP headers for REST.
    • JSON Web Tokens (JWT) and OAuth are also examples of security tokens
    • Certificate tokens in different flavors can be used which directly use a key in the request to authenticate.
    • Digest authentication can also be considered. Using digest authentication, a username-password token is created which is sent using WS-Security headers.
  • Security and reliability by using message protection. Message protection consists of measures to achieve message confidentiality and integrity. This can be achieved by
    • signing. XML Signature can be used for SOAP messages and is part of the WS Security standard. Signing can be used to achieve message integrity.
    • encrypting. Encrypting can be used to achieve confidentiality.
Types of keystores

There are two types of keystores in use in WebLogic Server / OPSS: JKS keystores and KSS keystores. The main differences are summarized in the table below:


First there are JKS keystores. These are Java keystores which are saved on the filesystem. JKS keystores can be edited using the keytool command, which is part of the JDK. There is no direct support for editing JKS keystores from WLST, the WebLogic Console or Fusion Middleware Control. You can however use WLST to configure which JKS file to use. For example see here

cd ('Servers/myserver/ServerMBean/myserver')



Keys in JKS keystores can have passwords, as can the keystores themselves. If you use JKS keystores in OWSM policies, you are required to configure the key passwords in the credential store framework (CSF). These can be put in the map and can be called keystore-csf-key, enc-csf-key and sign-csf-key. Read more here. In a clustered environment you should make sure all nodes can access the configured keystores/keys, for example by putting them on shared storage.


OPSS also offers KeyStoreService (KSS) keystores. These are saved in a database, in an OPSS schema which is created by running the RCU (Repository Creation Utility) during installation of the domain. KSS keystores are the default keystores since WebLogic Server 12.1.2 (and thus for SOA Suite since 12.1.3). KSS keystores can be configured to determine access to keys either by policies or by passwords. OWSM does not support using a KSS keystore which is protected with a password (see here: ‘Password protected KSS keystores are not supported in this release’); thus for OWSM, the KSS keystore should be configured to use policy-based access.

KSS keys cannot be configured to have a password, and using keys from a KSS keystore in OWSM policies thus does not require you to configure credential store framework (CSF) passwords to access them. KSS keystores can be edited from Fusion Middleware Control, by using WLST scripts or even by using a REST API (here). You can for example import JKS files quite easily into a KSS store with WLST using something like:

svc = getOpssService(name='KeyStoreService')
svc.importKeyStore(appStripe='mystripe', name='keystore2', password='password',aliases='myOrakey', keypasswords='keypassword1', type='JKS', permission=true, filepath='/tmp/file.jks')
Where and how are keystores / keys configured

As mentioned above, keys within keystores are used to achieve transport security and application security for various purposes. If we translate this to Oracle SOA Suite and WebLogic Server:

Transport layer

Incoming
  • Keys are used to achieve TLS connections between different components of the SOA Suite such as Admin Servers, Managed Servers, Node Managers. The keystore configuration for those can be done from the WebLogic Console for the servers and manually for the NodeManager. You can configure identity and trust this way and if the client needs to present a certificate of its own so the server can verify its identity. See for example here on how to configure this.
  • Keys are used to allow clients to connect to servers via a secure connection (in general, so not specific to communication between WebLogic Server components). This configuration can be done in the same place as above, with the only difference that no manual editing of files on the filesystem is required (since no NodeManager is involved here).

Outgoing

Composites (BPEL, BPM)

Keys are used to achieve TLS connections to different systems from the SOA Suite. The SOA Suite acts as the client here. The configuration of the identity keystore can be done from Fusion Middleware Control by setting the KeystoreLocation MBean. See the image below. Credential store entries need to be added to store the identity keystore password and key password. Storing the key password is not required if it is the same as the keystore password. The credential keys to create for this are SOA/KeystorePassword and SOA/KeyPassword (with the user being the same as the key alias from the keystore to use). In addition, components also need to be configured to use a key to establish identity: in the composite.xml, the property oracle.soa.two.way.ssl.enabled can be used to enable outgoing two-way SSL from a composite.

Setting SOA client identity store for 2-way SSL


Specifying the SOA client identity keystore and key password in the credential store

Service Bus

The Service Bus configuration for outgoing SSL connections is quite different from the composite configuration. The blog here nicely describes the locations where to configure the keystores and keys. In the WebLogic Server console, you create a PKICredentialMapper which refers to the keystore and also contains the keystore password configuration. From the Service Bus project, a ServiceKeyProvider can be configured which uses the PKICredentialMapper and contains the configuration for the key and key password to use. The ServiceKeyProvider configuration needs to be done from the Service Bus console, since JDeveloper cannot resolve the credential mapper.

To summarize the above:

Overwriting keystore configuration with JVM parameters

You can override the keystores used with JVM system parameters such as javax.net.ssl.keyStore, javax.net.ssl.keyStorePassword, javax.net.ssl.trustStore and javax.net.ssl.trustStorePassword in, for example, the setDomainEnv script. These will override the WebLogic Server configuration, but not the OWSM configuration (the application layer security described below). Thus if you specify an alternative truststore on the command line, this will not influence HTTP connections going from SOA Suite to other systems, even when message protection (using WS-Security), which uses keys and checks trust, has been enabled. It will influence HTTPS connections though. For more detail on the above see here.

Application layer

Keys can be used by OWSM policies to for example achieve message protection on the application layer. This configuration can be done from Fusion Middleware Control.

The OWSM run time does not use the WebLogic Server keystore that is configured using the WebLogic Server Administration Console and used for SSL. The keystore which OWSM uses by default since 12.1.2 is kss://owsm/keystore, which can be configured from the OWSM Domain configuration. See the section on keystore types above for the difference between KSS and JKS keystores.

OWSM keystore contents and management from FMW Control

OWSM keystore domain config

In order for OWSM to use JKS keystores/keys, credential store framework (CSF) entries need to be created which contain the keystore and key passwords. The OWSM policy configuration determines the key alias to use. For KSS keystores/keys no CSF passwords to access keystores/keys are required since OWSM does not support KSS keystores with password and KSS does not provide a feature to put a password on keys.

Identity for outgoing connections (application policy level, e.g. signing and encryption keys) is established by using OWSM policy configuration. Trust for SAML/JWT (secure token service and client) can be configured from the OWSM Domain configuration.

Finally: this is only the tip of the iceberg

There is a lot to tell in the area of security. Zooming in on transport and application layer security, there is also a wide range of options and do’s and don’ts. I have not talked about the different choices you can make when configuring application or transport layer security. The focus of this blog post has been to provide an overview of keystore configuration/usage and thus I have not provided much detail. If you want to learn more on how to achieve good security on your transport layer, read here. To configure 2-way SSL using TLS 1.2 on WebLogic / SOA Suite, read here. Application level security is a different story altogether and can be split up in a wide range of possible implementation choices.

Different layers in the TCP/IP model

If you want to achieve solid security, you should look at all layers of the TCP/IP model and not just at the transport and application layers. It also helps if you use different security zones and divide your network so that, for example, your development environment cannot accidentally access your production environment, or the other way around.

Final thoughts on keystore/key configuration in WebLogic/SOA Suite

When diving into the subject, I realized that using and configuring keys and keystores can be quite complex. The reason for this is that for every purpose of a key/keystore, configuration in a different location seems to be required. It would be nice if that were all, but sometimes configuration overlaps: for example, the truststore used by WebLogic Server is also used by SOA Suite. This feels inconsistent, since for outgoing calls composites and Service Bus use entirely different configuration. It would be nice if it could be made a bit more consistent and, as a result, simpler.

The post Oracle SOA Suite and WebLogic: Overview of key and keystore configuration appeared first on AMIS Oracle and Java Blog.

Setting up Oracle Event Hub (Apache Kafka) Cloud Service and Pub & Sub from local Node Kafka client

Thu, 2017-09-21 13:43

Oracle offers an Event Hub Cloud Service – an enterprise-grade Apache Kafka instance – with large numbers of partitions and topics, (retained) messages and distributed nodes. Setting up this cloud service is simple enough, especially once you know what to do, as I will demonstrate in this article. In order to communicate with this Event Hub from a local client – in this case created with Node – we need to open up some ports for access from the public internet.

The steps I went through:

  • create Oracle Event Hub Cloud Service – Platform instance (the Kafka & Zookeeper instance)
  • create Oracle Event Hub Cloud Service – Service instance (a Topic)
  • create two network rules to allow access from the public internet to port 2181 (Zookeeper) and port 6667 (Kafka server)
  • create two Node applications using the kafka-node package – one to produce and one to consume (based on the work done by Kunal Rupani of Oracle)

Note: the Event Hub Cloud Service is a metered service, billed per hour. The cost for the smallest shape is around $0.70 per hour (or $200 per month non-metered).
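Some quick arithmetic on those prices (my own, assuming the instance runs 24x7):

```javascript
// Metered: ~$0.70 per hour; non-metered: ~$200 per month
const hourlyRate = 0.70;
const hoursPerMonth = 24 * 30;                       // ~720 hours
const meteredPerMonth = hourlyRate * hoursPerMonth;  // ~$504
console.log(meteredPerMonth.toFixed(0));
// Break-even: the non-metered price wins once the instance runs more than
// 200 / 0.70 ≈ 286 hours per month.
console.log(Math.round(200 / hourlyRate));
```

In other words, a continuously running metered instance costs roughly 2.5 times the non-metered price, so metering only pays off for instances that are up part of the time.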

Create Oracle Event Hub Cloud Service – Platform instance (the Kafka & Zookeeper instance)






Create Oracle Event Hub Cloud Service – Service instance (a Topic)

Switch to the Big Data Compute CS and select the service category Oracle Event Hub Cloud Service – Topics.






Create two network rules to allow access from the public internet to port 2181 (Zookeeper) and port 6667 (Kafka server)





Create two Node applications using the kafka-node package

one to produce and one to consume (based on the work done by Kunal Rupani of Oracle)

    Note: the topic was created as microEventBus. The actual name under which it is accessed is partnercloud17-microEventBus (the identity domain is used as a prefix to the topic name).

    The package.json contains the dependency on kafka-node:


    The straightforward code for producing to the Event Hub Kafka Topic:

    var EVENT_HUB_PUBLIC_IP = '129.xxxxxxxx';
    var TOPIC_NAME = 'partnercloud17-microEventBus';
    var ZOOKEEPER_PORT = 2181;
    var kafka = require('kafka-node');
    var Producer = kafka.Producer;
    var client = new kafka.Client(EVENT_HUB_PUBLIC_IP + ':' + ZOOKEEPER_PORT);
    var producer = new Producer(client);
    let payloads = [
      { topic: TOPIC_NAME, messages: '*', partition: 0 }
    ];
    setInterval(function () {
      console.log('called about every 1 second');
      producer.send(payloads, function (err, data) {
        if (err) {
          console.error(err);
        } else {
          console.log(data);
        }
      });
      // grow the message up to 10 characters, then start over
      if (payloads[0].messages.length < 10) {
        payloads[0].messages = payloads[0].messages + "*";
      } else {
        payloads[0].messages = "*";
      }
    }, 1000);

    And the similar code for consuming. Note that the producer talks to Zookeeper (port 2181) and the consumer to the Kafka Server itself (port 6667):

    var EVENT_HUB_PUBLIC_IP = '129xxxxxxx';
    var TOPIC_NAME = 'partnercloud17-microEventBus';
    var KAFKA_SERVER_PORT = 6667;
    var kafka = require('kafka-node'),
        Consumer = kafka.Consumer,
        client = new kafka.KafkaClient({ "kafkaHost": EVENT_HUB_PUBLIC_IP + ':' + KAFKA_SERVER_PORT }),
        consumer = new Consumer(
            client,
            [
                { topic: TOPIC_NAME, offset: 1 }
            ],
            {
                autoCommit: false,
                fromOffset: true
            }
        );
    consumer.on('message', function (message) {
        console.log(message);
    });

    Here is the output for producing:


    and here for consuming:


The post Setting up Oracle Event Hub (Apache Kafka) Cloud Service and Pub & Sub from local Node Kafka client appeared first on AMIS Oracle and Java Blog.

Getting started with Oracle JET: a CRUD service

Mon, 2017-09-18 10:25

AMIS has recently set up a brand new Enterprise Web Application team, of which I am proud to be a member. We will be working on front-end development using a variety of JavaScript-based frameworks. As a first framework, we are currently investigating Oracle JET. After working through the Oracle JET MOOC and a Knockout.js tutorial, we have begun to build a meeting organisation app in order to get some more hands-on experience. This app is initially intended to have users and meetings, allowing users to create, show, update and delete meetings, with their authorization dependent on authentication. This blog post is intended to show you some of the discoveries we made while working on this project.

Getting started

To keep things simple, we started off by setting up a FeathersJS REST API as a back-end with two models, one for meetings and one for users, as well as some seeds to provide initial data for these models. The next step was to make this data available in the JET app, across any of the components which might want to make use of it. This was done through the JET model and collection system, in which a model is a single data record, while a collection contains multiple records. Since we want the CRUD functionality to be available to different JET components, we placed the code in a separate service as shown below. As you can see, this service defines the model and collection as well as providing an instance of the collection. The parseMeeting and parseSaveMeeting functions and attributes are optional and can be left out if you do not want to change the names of any attributes between back-end and front-end. The comparator attribute of the collection is used to order the different models.

    CRUD actions

    After setting up the model and collection, we added CRUD functionality. JET models and collections provide functions which take care of the communication with the back-end. For example, a plain old JavaScript object can be passed to the create function, along with success and error handlers. The create function is called on the collection which was instantiated earlier, adding the new model to the collection for quick availability in the front-end as well as making a call to the back-end to create the object in the database. Specific headers can be used, for example to pass an authorization token.

    For the fetch and delete methods, we instantiate a new model and set its ID to the ID of the object we want to affect. The relevant function is then called on this model. In the case of the fetch function, the data in the model is mapped to a plain old JavaScript object, which is returned. In the success handler of the delete function we remove the destroyed meeting from the collection, automatically updating all front-end components which rely on its data. The update function accepts both plain JavaScript objects and JET models.

    Calling the service

    Setting the service up like this gave us an easy-to-use way to interact with our back-end, as well as allowing us to clean up the code of our components and remove duplication. Here is an example of how we call this service from our meetings.js. Other services can be set up in the same way, or multiple services could even inherit from the same base service.



    The post Getting started with Oracle JET: a CRUD service appeared first on AMIS Oracle and Java Blog.

    Hey Mum, I am a Citizen Data Scientist with Oracle Data Visualization Cloud (and you can be one too)

    Sun, 2017-09-10 03:42

    One of the Oracle Public Cloud Services I have seen mouthwatering demos of, but had not actually tried out myself, is Oracle Data Visualization Cloud. I had several triggers to at last give it a try – and I am glad I did. In this article a brief report of my first experiences with this cloud service, which aims to provide the business user – aka citizen data scientist – with the means to explore data and come up with insights and meaningful visualizations that can easily be shared across the team, department or enterprise.

    I got myself a 30-day trial to the cloud service, uploaded a simple Excel document with a publicly available dataset on the countries of the world and started to play around. It turned out to be quite simple – and a lot of fun – to come up with some interesting findings and visualizations. No technical skills required – certainly not any beyond an average Excel user.


    • get a trial account – it took about 5 hours from my initial request for a trial to the moment the trial was confirmed and the service had been provisioned
    • enter the Data Viz CS and add a new Data Source (from the Excel file with country data)
    • do a little data preparation (assign attributes to be used as measures) on the data source
    • create a new project; try out Smart Insights for initial data exploration, to get a feel for attributes
    • create some visualizations – get a feel for the data and for what DVCS can do ; many different types of visualizations, various options for filtering, embellishing, highlighting; many dimensions to be included in a visualization
    • try a narrative – a dossier with multiple visualizations, to tell my story

    In a few hours, you get a very good feel for what can be done.


    Create Countries as my new data source

    First steps: download countries.csv from  . Create an Excel workbook from this data. Note: I first tried to upload the raw csv file directly, but that ended with an internal error.




    The data that was imported is shown. Now is a good moment to set the record straight – any metadata defined at this point is inherited by projects and visualizations that use this data. For example: if we want to calculate with attributes and use them as values – for the size of bubbles and stacks and to plot a line – we have to identify those attributes as measures.


    Once the data source is set up – we can create our first project based on it by simply clicking on it. Note: the PROPOSED_ACTS and ACT_ALBUMS data sources are based on database tables in an Oracle DBaaS instance to which I first created a connection (simply with host, port, service name and username & password).



    My First Project – Data Preparation & Initial Exploration

    Here is the Data Preparation page in the project. We can review the data, see what is there, modify the definition of attributes, prescribe a certain treatment (conversion) of data, add (derived) attributes etc.


    If we click on the Visuals icon in the upper right hand corner, we get a first stab at visualization of some of the data in this data source. Out of the box – just based on how DVCS interprets the data and the various attributes:


    For example the number of countries per continent. Note how we can select different measures from the dropdown list – for example area:


    This tells us that Asia is the largest continent in landmass, followed by Africa; Russia is the largest country; and the combined size of all countries using a currency called Dollar is the largest, with Ruble-using countries (probably just one) as runner-up. Note: all of this – out of the box. I am 10 minutes into my exploration of DVCS!


    First Visualizations – Let’s try a Few Things

    Go to the second tab – Visualize.

    Drag an attribute – or multiple attributes – to the canvas.



    The default visualization is a pivot table that presents the data for the selected attribute. Drag two more attributes to the canvas:



    The result is a matrix of data – with the measure (area) in the cells:


    In order to prepare the visualization for presentation and sharing, we can do several things – such as removing columns or rows of data that are not relevant, or setting a color to highlight a cell:



    Define range filters on selected attributes – for example, filter on countries with at least a 45M population:



    When the filter has been set, the matrix adapts:


    We can try out different styles of visualization – what about a map?



    DVCS recognizes the names of continents and countries as geographical indications and can represent them on a map, using color for total area. Let’s remove continent from the Category dimensions, and let’s set bubble size for population:


    If we are interested in population density, we can add a calculated value:
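    The exact expression syntax depends on the attribute names in your data source; with this countries dataset, a density measure in the calculation editor would look roughly like (attribute names are from my upload, so treat this as illustrative):

    ```
    POPULATION / AREA
    ```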


    Some more examples:

    Select countries by size per continent – in a horizontal stack chart:


    and a treemap – with population added in as additional attribute represented through color:



    Any visualization we like and want to include in our final narrative can be saved as an insight:


    On the Narrate tab – we can include these insights in a meaningful order to tell our story through the data visualizations. Also see Building Stories.



    Oracle Data Visualization Cloud Service:  (at $75.00 / Named User / Month with a minimum of 5 users, a fairly friendly-priced offering)

    The country data is downloaded from

    Documentation on Oracle Data Visualization Cloud Service: 

    Documentation on Building Stories:

    The post Hey Mum, I am a Citizen Data Scientist with Oracle Data Visualization Cloud (and you can be one too) appeared first on AMIS Oracle and Java Blog.

    Oracle OpenWorld 2017 Review sessie op 31 oktober 2017 – met aandacht voor ….

    Thu, 2017-09-07 06:37

    From October 2nd to 5th, Oracle OpenWorld 2017 takes place in San Francisco. During this week, Oracle's near-term – and longer-term – future will be laid out: the roadmaps for the products in the portfolio, and also the dead-end streets and ends-of-the-road. The big themes for Oracle, the progress since last year and the ambitions for the year ahead. Customer stories about results achieved, demonstrations of new products and features. A week for inspiration, contemplation, and critical questions.

    On October 31st the AMIS team will – as it does every year – report on its findings at Oracle OpenWorld. In a session packed with information, you will get in a few hours a good picture of what the 2000 sessions of Oracle OpenWorld have made clear in broad strokes – and of where Oracle is heading.

    You can already register for this session (and do so quickly, because it is usually packed): 

    As a preview of what to expect, you can have a look at the slides of last year's review session: Oracle OpenWorld 2016 Review – Slides 


    Below is a list of themes and questions that we will explore during Oracle OpenWorld 2017 and report on in the review. Note: if you have additional topics or questions – let us know in a comment on this article.

    · Oracle's maturity as a cloud vendor: a true pay-as-you-go subscription model per cloud capability and per business value (unit), with the ability to make a concrete TCO calculation; also: the state of Oracle's Cloud Operations – are they truly "cloud scale" in terms of scalability, availability and degree of automation? Is Oracle succeeding in turning the PaaS Cloud into "One integrated suite of cloud (native) capabilities" – "cloud first, comprehensive, integrated, open" as announced? Is automated operations of Oracle Cloud environments by their users possible (scripting, scheduling, monitoring, …)?

    · How good is the match between Oracle and its portfolio on the one hand and mid-sized companies on the other? Is Oracle increasingly only interesting for – and interested in – the top-100 enterprises of the world? Or does the cloud perhaps bring about a 'democratization' of Oracle and a better fit with companies below the absolute top?

    · Organizations use products from multiple vendors for their IT. That will be no different in the cloud. An important criterion in the selection of cloud services will be: how well can they be applied in a hybrid landscape of multiple environments with different cloud services from different vendors.

    · Security – what does Oracle have to offer to bring secure IT closer? Where does the Identity Cloud Service stand at this moment? Is there still a future for the Oracle Identity & Access Management Suite? What do the "security in silicon" facilities offer?

    · The value of IT is realized in operations. The business requires flexible and rapid evolution of IT solutions. How does Oracle support the DevOps movement that has emerged to realize agile development, continuous delivery and efficient operations across the boundaries of technology, platforms and data center location (Oracle Management Cloud is one link in that chain)?

    · Current architecture trends and their consequences for technology and implementation, such as microservices, serverless functions and containers, "the real 3rd tier" (rich web client) and REST APIs, the changing role of the relational (enterprise) database, the rise of Hadoop, NoSQL and event sourcing & CQRS

    · Oracle's flagship product is (still) the Oracle Database. The most recent release is 12c Release 2 (September 2016 in the cloud, March 2017 for on-premises) with important features such as In-Memory, multitenancy, sharding and native JSON. How is the adoption of that release, what are the experiences with the main features, what can we learn about migration? What does the roadmap towards Oracle Database 18 and 19 look like (instead of a major 13c release there will be yearly releases)?

    · The rise of (niche) SaaS applications and the facilities to enrich and integrate standard applications across vendor and environment boundaries

    · The application of Machine Learning in Oracle products and with Oracle technology

    · With IoT (Internet of Things) the physical world – small and large – comes within reach, partly even in (near) real time; how does Oracle enable IoT – from device and sensor to dashboard and automatic recommendations and actions?

    · Serverless computing is an important new way to use IaaS resources: stateless functions that react to triggers – such as HTTP requests or events – and then do their work, potentially with many instances side by side. This is the true pay-per-use model – in which you do not pay for an idle server waiting for traffic to handle – and it is horizontally scalable. Oracle is introducing Oracle Functions as a counterpart to AWS Lambda and Azure Functions. We will discuss how rich and usable Oracle Functions are.

    · The state of Infrastructure as a Service: can we start closing data centers and putting hardware out with the garbage? Does Oracle have the Generation 2 Infrastructure announced last year in such shape that the motto "bigger, better, cheaper than AWS" is being delivered on – and we will start consuming IaaS en masse? How is the "cloud@customer" program (also known as "the cloud machine") doing?

    · Is there still life in Oracle's on-premises products? Here we look at hardware (the engineered systems and the appliances – including the Exadata SL with Sparc and Linux, and Software & Security in Silicon) and at platform software such as Database, SOA Suite, BPM Suite and other middleware components

    · How is the Low Code revolution progressing? Can Citizen Developers (& Data Scientists) get on with Oracle's products – such as Visual Builder CS, APEX and Data Visualization CS?

    · PaaS Cloud – One integrated suite of cloud (native) capabilities – cloud first, comprehensive, integrated, open

    · JavaOne takes place at the same time as Oracle OpenWorld. Java 9 will (finally) see the light of day and Java EE 8 is now also largely taking shape. We will report on the temperature of the Java ecosystem – also in relation to modern architectures, rich client web applications and competing platforms such as Node, Python and Go.

    · Modern User eXperience – users interact with IT systems in more and more ways: through different devices (desktop, smartphone, tablet, watch) – possibly interchangeably – via various protocols (mouse/keyboard, touch, voice) and channels (application, app, notification, chat (Slack/Facebook Messenger), SMS, voice message, email). Which developments does Oracle lead | follow | advise? How is the Alta UI evolving and what is Alta's support for different technologies? What resources does Oracle offer for realizing human-application interaction? Where do Oracle JET and ADF stand at this moment?

    · BlockChain – a mechanism for distributed, encrypted document stores – has the potential to bring about major changes in various industries – especially around bringing parties closer and more directly together, and eliminating the "men in the middle". Oracle has announced that it will embrace BlockChain. What this entails, how the integration with Oracle PaaS and SaaS will take shape, and which possibilities will arise – we will certainly address.

    · Acquisitions, roadmaps, partnerships, personnel news, announcements, innovation, surprises

    The post Oracle OpenWorld 2017 Review sessie op 31 oktober 2017 – met aandacht voor …. appeared first on AMIS Oracle and Java Blog.

    Rapid and free cloud deployment of Node applications and Docker containers with Now by ZEIT

    Fri, 2017-09-01 03:10

    I was tipped off about this now service from ZEIT: . A cloud service with a free tier that allows command-line deployment of a simple static website, any Node application or any Docker container to a cloud environment where the application is publicly accessible. Depending on resources consumed and the number of applications required, you may need to upgrade to higher service tiers (starting at $15/month). Note: for personal research and small-team development, the free (or OSS) tier will probably suffice.

    I decided to give it a spin.


    1. Download now command line tool – in my case for Windows – 

    2. Install command line tool


    3. Open the command line and login to now

    now login


    This will prompt you for an email address, trigger an email sent to that address and wait for you to click on the link in that email to confirm your human nature and email identity.

    4. Navigate to a directory that contains a Node application (or a Docker build file + resources, or a simple static web site); the now website provides samples – and of course any Node application will do.

    A quick and dirty new Express/Node application:






    5. Deploy the application:



    You will get a unique URL for this deployment of the application. Note: your code will be launched into a 64-bit Node.js environment (the latest release of Node) running on Alpine Linux.

    Deployment is not super fast – I experienced 30 secs to 4 minutes.

    6. Using the URL – we can access the ‘dashboard’ for the application:


    Both the log files:


    and the sources are available:


    And of course the application itself can be accessed, once deployment is complete:


    7. If you make changes to the sources of the application


    and want to redeploy, you simply execute again:


    This results in a new unique URL – for this new deployment.


    The previous deployment remains available at its original URL. Presumably, now considers the application deployments similar to serverless functions and only spins them up when there is demand. Old deployments will – most of the time – soon run out of steam and only consume storage after having gone to sleep. Having older deployments which are not active costs you nothing. You can actively remove those old deployments with now remove [url of deployment].




    We can specify custom actions to be performed for installation or starting the Node application through instructions in the package.json file (build-now instead of build and start-now instead of start)
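    A minimal sketch of such a package.json – the now-specific script names follow the description above; the app name, commands and dependency are hypothetical:

    ```json
    {
      "name": "my-now-app",
      "version": "1.0.0",
      "scripts": {
        "build-now": "node tools/prepare-assets.js",
        "start-now": "node server.js"
      },
      "dependencies": {
        "express": "^4.15.4"
      }
    }
    ```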

    At runtime, the only writable location will be /tmp. Temporary data can be saved inside this directory, but note that it will be removed each time the deployment goes to sleep and wakes up again. Consequently, it is not safe to use as a file-based database.

    Now can deploy code directly from Git(Hub, BitBucket and GitLab): using now <username>/<repository>

    During deployment with now, we can pass environment variables for the runtime execution environment of the application; this can be done on the command line or through a local now.json configuration file.
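    A now.json along these lines would do it – the variable names here are purely illustrative; on the command line the equivalent would be something like now -e NODE_ENV=production:

    ```json
    {
      "env": {
        "NODE_ENV": "production",
        "LOG_LEVEL": "info"
      }
    }
    ```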

    Scaling can be configured for the application – specifying the (max and min) number of concurrent instances that can be running to handle the load; scaling beyond three instances (and fixed scaling) can only be done on the paid plans.

    If you want, you can now continue to buy a domain name for a friendly URL and associate that with the application you have just deployed. One of the ways for ZEIT to make money, I guess, is through this low-threshold domain name selling.



    Five Minute Guide to now – 

    Real time deployments with Zeit’s Now 


    Plans & Pricing




    Dashboard/Activity Stream


    The post Rapid and free cloud deployment of Node applications and Docker containers with Now by ZEIT appeared first on AMIS Oracle and Java Blog.

    Serverless computing with Azure Functions – interaction with Event Hub

    Thu, 2017-08-31 01:16

    In a previous article, I described my first steps with Azure Functions – one of the implementation mechanisms for serverless computing: Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function. Functions can be triggered in many ways – by HTTP requests, the clock (scheduled), by database modifications and by events. In this article, I will look at a Function that is triggered by an event on the Azure Event Hub. I will also show how a function (triggered by an HTTP request) can write to Event Hub.

    Functions can have triggers and input bindings. A trigger is what causes the function to run – and it can carry a payload. An input binding is a declarative definition of data that the function has (read) access to during execution. Functions can also have output bindings – one for each of the channels to which they write results.


    The first steps: arrange an Azure account and create an Event Hubs namespace – the context in which to create individual event hubs (the latter are comparable to Kafka topics).

    On the Event Hub side of the world:

    • Create Event Hub
    • Create Shared Access Policy
    • Get Connection String URL for the shared access policy

    In Azure Functions –

    • At the function app level: Create Connection String for Connection String URL copied from shared access policy
    • Create a function based on the template Data Processing/JavaScript/EventHub Trigger – a JavaScript function triggered by a message on the indicated Event Hub in the Event Hub namespace addressed through the connection string; save and (test) run the function (this will publish an event to the event hub)
    • Optionally: create a second function, for example triggered by an HTTP Request, and have it write to an output binding to the Event Hub; in that case, an HTTP request to the second function will indirectly – through Event Hub – cause the first function to be executed


    In Event Hub Namespace

    Create Event Hub GreetingEvents. Set the name and accept all defaults. Press Create.




    Once the Event Hub creation is complete, we can inspect the details – such as 1 Consumer Group, 2 Partitions and 1 Day message retention:


    This is our current situation:



    Now return to the overview and click on the link Connection Strings. We need it to create a connection from the Azure Functions app to the Event Hub Namespace, using the URL for the Shared Policy we want to leverage for that connection.


    Click on Connection Strings to bring up a list of Shared Policies. Click on the Shared Policy to use for accessing the Event Hub namespace from Azure Functions.


    Click the copy button to copy the RootManageSharedAccessKey connection string to the clipboard.

    In Azure Function App

    In order for the Function to access the Event Hub [Namespace], the connection string to the Event Hub [Namespace] needs to be configured as an app setting in the function app [the context in which the Function to be triggered by Event Hub is created]. Note: that is the value in the clipboard.


    Scroll down.


    Create Connection String to Event Hub Namespace using the value in the clipboard



    Save changes in function app


    At this point, a link is established between the function app (context) and the Event Hub Namespace. Any function in the app can link to any event hub in the namespace.



    Create Function to be Triggered by Event

    With the connection string in place, we can create a function that is executed when an event is published on Event Hub greetingevents. That is done like this:


    Type the name of the function, click on the link new and select event hub greetingevents to associate the function with:




    Click on create.

    The function is created – including the template code:



    The configuration of the function is defined in the file function.json. Its contents can be inspected and edited:


    The value of connection is a reference to an app setting that was created when the function was created, based on the connection string to the Event Hub Namespace.
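    For an event-hub-triggered function, the function.json contents look roughly like the sketch below – the binding type and properties follow the Azure Functions binding schema, but the app setting name in connection is illustrative (yours is generated from your namespace and policy names):

    ```json
    {
      "bindings": [
        {
          "type": "eventHubTrigger",
          "name": "myEventHubMessage",
          "direction": "in",
          "path": "greetingevents",
          "connection": "my-eventhub-ns_RootManageSharedAccessKey_EVENTHUB"
        }
      ],
      "disabled": false
    }
    ```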

    Click on Save and Run. A test event is published to the Event Hub greetingevents. In the log window – we can see the function reacting to that event. So we have lift off for our function – it is triggered by an event (and therefore presumably by all events) on the Event Hub and processes these events according to the (limited) logic it currently contains.


    The set up looks like this:image



    Publish to Event Hub from Azure Function


    To make things a little bit more interesting we will make the Azure Function that was introduced in a previous article for handling HTTP Request “events” also produce output to the Event Hub greetingevents. This means that any HTTP request sent to function HttpTriggerJS1 leads to an event published to Event Hub greetingevents and in turn to function EventHubTrigger-GreetingEvents being triggered.



    To add this additional output flow to the function, first open the Integration tab for the function and create a new Output Binding, of type Azure Event Hubs. Select the connection string and the target Event Hub – greetingevents. Define the name of the context parameter that provides the value to be published to the Event Hub – outputEventHubMessage:


    We now need to modify the code of the function to actually set the value of this context parameter called outputEventHubMessage:
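    The change is small: assign a value to the binding on context.bindings. A sketch of what the modified HttpTriggerJS1 might look like – the shape of the incoming request handling is an assumption, and the mock invocation at the end only exists to make the sketch self-contained:

    ```javascript
    // Sketch of the HTTP-triggered function, extended to publish to the Event Hub
    // output binding named 'outputEventHubMessage' (as configured on the Integrate tab).
    module.exports = function (context, req) {
      const name = (req.query && req.query.name) || (req.body && req.body.name) || 'world';

      // Setting the binding property is all that is needed to publish the event
      context.bindings.outputEventHubMessage = 'Greeting for ' + name;

      // Regular HTTP response for the caller
      context.res = { status: 200, body: 'Hello ' + name };
      context.done();
    };

    // Local smoke test with a mock context object (not part of the Azure runtime):
    const mockContext = { bindings: {}, done: function () {} };
    module.exports(mockContext, { query: { name: 'AMIS' } });
    console.log(mockContext.bindings.outputEventHubMessage); // Greeting for AMIS
    ```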


    At this point, we can test the function – and see how it sends the event


    that indirectly triggers our former function.

    When the HTTP Request is sent to the function HttpTriggerJS1 from Postman for example


    The function returns it response and also publishes the event. We can tell, because in the logging for function EventHubTrigger-GreetingEvents we see the name sent as parameter to the HttpTriggerJS1 function.

    (Note: In this receiving function, I have added the line in red to see the contents of the event message.)





    Azure Function – Event Hub binding – 

    Azure Documentation on Configuring App Settings – 

    Azure Event Hubs Overview – 

    Azure Functions Triggers and Binding Concepts –

    The post Serverless computing with Azure Functions – interaction with Event Hub appeared first on AMIS Oracle and Java Blog.

    Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function

    Wed, 2017-08-30 03:59

    If your application does not have internal state – and sometimes it is handling peak loads of requests while at other times it is not doing any work at all, why then should there be one or even more instances of the application (plus container and/or server) continuously and dedicatedly up and running for the application? For peak loads – a single instance is nowhere near enough. For times without any traffic, even a single instance is too much – and yet you pay for it.

    Serverless computing – brought to prominence with AWS Lambda – is an answer to this. It is defined on Wikipedia as a “cloud execution model” in which “the cloud provider dynamically manages the allocation of machine resources”. The subscriber to the cloud service provides the code to execute and specifies the events that should trigger execution. The cloud provider takes care of running that code whenever the event occurs. Pricing is based on the combination of the resources used (memory, possibly CPU) and the time it takes to execute the function. No compute node is permanently associated with the function, and any function [execution] instance can run on a different virtual server (so it is not really serverless in a strict sense – a server is used for running the function, but it can be a different server with each execution). Of course, function instances can still have and share state by using a cache or backend data store of some kind.

    The Serverless Function model can be used for processing events (a very common use case) but also for handling HTTP requests and therefore for implementing REST APIs or even stateless web applications. Implementation languages for serverless functions differ a little across cloud providers. Common runtimes are Node, Python, Java and C#. Several cloud vendors provide a form of Serverless Computing – AWS with Lambda, Microsoft with Azure Functions, Google with Google Cloud Functions and IBM with Bluemix FaaS (Function as a Service). Oracle announced Oracle [Cloud] Functions at Oracle OpenWorld 2016 (Oracle Functions – Serverless architecture on the Oracle PaaS Cloud) and is expected to actually launch the service (including support for orchestration of distributed serverless functions) around Oracle OpenWorld 2017 (October 2017) – see for example the list of sessions at OOW2017 on Serverless.

    Note: monitoring the execution of the functions, collecting run time metrics and doing debugging on issues can be a little challenging. Special care should be taken when writing the functions – as for example there is no log file written on the server on which the code executes.

    In this article, I briefly show an example of working with Serverless Computing using Azure Functions.

    Steps for implementing a Function:

    • arrange Azure cloud account
    • create Function App as context for Functions
    • create Function
    • trigger Function – cause the events that trigger the Function.
    • inspect the result from the function
    • monitor the function execution

    Taking an existing Azure Cloud Account, the first step is to create a Function App in your Azure subscription – as a context to create individual functions in (“You must have a function app to host the execution of your functions. A function app lets you group functions as a logic unit for easier management, deployment, and sharing of resources.”).


    I will not discuss the details for this step – they are fairly trivial (see for example this instruction:

    Quick Overview of Steps

    Navigate into the function app:


    Click on plus icon to create a new Function:


    Click on goto quickstart for the easiest way in


    Select scenario WebHook + API; select JavaScript as the language. Note: the JavaScript runtime environment is Node 6.5 at the time of writing (August 2017).

    Click on Create this function.


    The function is created – with a name I cannot influence


    When the function was created, two files were created: index.js and function.json. We can inspect these files by clicking on the View Files tab:
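    The generated index.js is roughly the following (reproduced from memory, so treat it as a sketch of the template rather than its exact contents); the mock invocation at the end is only there to make the sketch self-contained outside the Azure runtime:

    ```javascript
    // Approximation of the generated WebHook + API JavaScript template
    module.exports = function (context, req) {
      context.log('JavaScript HTTP trigger function processed a request.');
      if ((req.query && req.query.name) || (req.body && req.body.name)) {
        context.res = { body: 'Hello ' + (req.query.name || req.body.name) };
      } else {
        context.res = { status: 400, body: 'Please pass a name on the query string or in the request body' };
      }
      context.done();
    };

    // Quick local check with a mock context object:
    const ctx = { log: console.log, done: function () {} };
    module.exports(ctx, { query: { name: 'AMIS' } });
    console.log(ctx.res.body); // Hello AMIS
    ```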


    The function.json file is a configuration file where we specify generic meta-data about the function.

    The integration tab shows the triggering event(s) for this function – configured for HTTP requests.


    The manage tab allows us to define environment variables to pass into the function's runtime execution environment:


    The Monitor tab allows us to monitor executions of the Function and the logging they produce:


    Return to the main tab with the function definition. Make a small change in the template code – to make it my own function; then click on Save & Run to store the modified definition and make a test call to the Function:


    The result of the test call is shown on the right as well in the logging tab at the bottom of the page:


    To invoke the function outside the Azure Cloud environment, click on Get Function URL.


    Click on the icon to copy the URL to the clipboard.

    Open a browser, paste the URL and add the name query parameter:


    In Postman we can also make a test call:


    Both these calls are from my laptop without any special connection to the Azure Cloud. You can make that same call from your environment. The function is triggerable – and when an HTTP request is received to hand to the function, Azure will assign it a run time environment in which to execute the JavaScript code. Pretty cool.

    The logging shows the additional instances of the function:


    From within the function, we can write output to the logging. All function execution instances write to the same pile of logging, from within their own execution environments:


    Now Save & Run again – and see the log line written during the function execution:


    Functions lets you define the threshold trace level for writing to the console, which makes it easy to control the way traces are written to the console from your functions. You can set the trace-level threshold for logging in the host.json file, or turn it off.
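    For example, the console trace level can be set in host.json along these lines (a sketch based on the v1 host.json schema; "verbose" shows all traces, "off" disables console logging):

```json
{
  "tracing": {
    "consoleLevel": "verbose"
  }
}
```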

    The Monitor tab provides an overview of all executions of the function, including the not so happy ones (I made a few coding mistakes that I did not share). For each instance, the specific logging and execution details are available:



    Debug Console and Package Management

    At the URL https://<function_app_name> we can access a management/development console where we can perform advanced operations regarding application deployment and configuration:


    The CMD console looks like this:


    NPM packages and Node modules can be added to a JavaScript Function. See for details : 

    A less obvious feature of the CMD console is the ability to drag files from my local Windows operating system into the browser – such as the package.json shown in this figure:


    Note: You should define a package.json file at the root of your function app. Defining the file lets all functions in the app share the same cached packages, which gives the best performance. If a version conflict arises, you can resolve it by adding a package.json file in the folder of a specific function.
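    Such a shared package.json could look like this minimal sketch (the app name and the lodash dependency are purely illustrative, not taken from the article):

```json
{
  "name": "my-function-app",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^4.17.4"
  }
}
```

    After dropping the file in and running npm install from the console, all functions in the app can require the listed packages.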


    Creating a JavaScript (Node) Function in Azure Functions is pretty straightforward. The steps are logical, the environment reacts intuitively and smoothly. Good fun working with this.

    I am looking forward to Oracle’s Cloud service for serverless computing – to see if it provides a similarly good experience, and perhaps even more. More on that next month, I hope.

    Next steps for me: trigger Azure Functions from other events than HTTP Requests and leveraging NPM packages from my Function. Perhaps also trying out Visual Studio as the development and local testing environment for Azure Functions.



    FAQ on AWS Lambda –

    Wikipedia on Serverless Computing –

    Oracle announced Oracle [Cloud] Functions at Oracle OpenWorld 2016  – Oracle Functions – Serverless architecture on the Oracle PaaS Cloud

    Sessions at Oracle OpenWorld 2017 on Serverless Computing (i.e. Oracle Functions) –  list of session at OOW2017 on Serverless

    Azure Functions – Create your first Function – 

    Azure Functions Documentation – 

    Azure Functions HTTP and webhook bindings –

    Azure Functions JavaScript developer guide –

    How to update function app files – package.json, project.json, host.json –

    The post Serverless Computing – Function as a Service (FaaS) – with Azure Functions – first small steps with a Node/JavaScript function appeared first on AMIS Oracle and Java Blog.

    Creating JSFiddle for Oracle JET snippet – using additional modules

    Tue, 2017-08-29 02:14

    My objective in this article: describe how I (and therefore you) can use JSFiddle to create running, shared samples of Oracle JET code. This is useful for question on the JET Forum or on StackOverflow and also as demo/illustration along a blog post or other publication. JSFiddle is an IDE-like web site that allows us to create mini-applications consisting of CSS, HTML and JavaScript resources and run these client side applications in the browser. We can edit the code and re-run. We can easily embed JSFiddle components in articles and we can share JSFiddle entries simply by sharing the URL.

    In order to create Oracle JET fiddles, we need a template that takes care of all the scaffolding – the basic dependencies (CSS and JavaScript) that we always need. Ideally, by using the template, we can focus on the code that is specific to the sample we want to create as a fiddle.

    The original JSFiddle that I used as a starting point is from John Brock … ehm, Peppertech:

    As an external resource the fiddle loads requireJS:

    All other required JavaScript modules are loaded by requireJS – as instructed in the configuration of the paths property in requirejs.config. The modules include Oracle JET (core, translation, drag and drop), jQuery, Hammer, Knockout and ES6 Promise.


    The custom JavaScript for the specific JET snippet we want to demonstrate in the Fiddle goes into the main function that is passed to require at the beginning of the Fiddle – along with the list of modules required by the main function. This function defines the ViewModel and applies data bindings through knockout, linking the ViewModel to an HTML element.

    If we have additional custom JavaScript that is in a separate JavaScript files, we can access these as external dependencies that are added to the fiddle. Note that JSFiddle will only access resources from Content Delivery Networks; we can make use of a trick to add our own custom JavaScript resources to the fiddle:

    • store the files on GitHub
    • create a CDN-style URL to each file, for example using RawGit (a site that serves GitHub project files through MaxCDN)
    • add the URL as external resource to the fiddle

    Any file added in this fashion is loaded by JSFiddle when the fiddle is executed.

    In my case, I want to load a custom module through require.js. In that case, I do not have to add the file that contains the module definition to the JSFiddle as an external resource: I can have require.js load the resource directly from the CDN URL. (Note: loading the file from the raw GitHub URL does not work: “Refused to execute script from ‘’ because its MIME type (‘text/plain’) is not executable, and strict MIME type checking is enabled.”)

    My custom module is on GitHub:


    I copy the URL to the clipboard. Then, on the RawGit site, I paste the URL:


    I then copy the CDN style URL to the clipboard. In JSFiddle I can add this URL path to the code – in function _getCDNPath(paths) . Note: I remove the actual name of the file, so the path itself refers to the directory. In this directory, there could be multiple modules.


    Finally the module is required into fiddle through:


    Here I refer to custom-modules/my-module which resolves to the module defined in file my-module.js in the [GitHub] directory referred to by the CDN Url added to newPaths.

    The full fiddle looks like this – hardly anything specific, just a tiny little bit of data binding to the ViewModel:


    This fiddle now becomes my starting point for any future fiddle for Oracle JET 3.2, as shown below.

    Create New Fiddle from Template

    To create any Oracle JET fiddle, I can now (and you can do that as well) go to my template fiddle and click on Fork.


    A new fiddle is created as a clone of the template. I should update the meta data of the fiddle (as to not get confused myself) and can then create the example I want. Here I show a very basic popup example:



    The resulting fiddle: – created as a clone of the template fiddle, extended with a few lines of code to create the intended effect.

    The two fiddles show up on my public JSFiddle Dashboard:



    Fiddles can be embedded in articles and other publications. Open the embed option in the top menu and copy the embed-code or link:


    Then use that code in the source of whatever you want to embed the fiddle into. For example – down here:



    Jim Marion’s blog article


    Source in GitHub:

    The starting point fiddle by PepperTech:

    The final resulting fiddle with the JET Tooltip example:

    My public JSFiddle Dashboard:

    The post Creating JSFiddle for Oracle JET snippet – using additional modules appeared first on AMIS Oracle and Java Blog.

    Oracle JET Nested Data Grid for presenting Hierarchical Data Sets – with cell popup, collapse and expand, filter and zoom

    Mon, 2017-08-28 01:27

    As part of a SaaS Enablement project we are currently working on for a customer using Oracle JET, a requirement came up to present a hierarchical data set – in a way that quickly provides an overview, allows access to detail information and offers the ability to focus on specific areas of the hierarchy. The data describes planning and availability of employees, and the hierarchy is by time: months, weeks and days. One presentation that would fit the requirements bill was a spreadsheet-like data grid with employees in the rows, the time hierarchy in [nested] columns and the hours planned and available in the cells. A popup that appears when the mouse hovers over a cell presents detail information on the planned activities for that day and employee. Something like this:



    This article will not describe in detail how I implemented this functionality using Oracle JET – although I did and all the source code is available in GitHub: .

    This article is a brief testament to the versatility of Oracle JET [and modern browsers and the JavaScript ecosystem] and the strength of the Oracle JET UI component set as well as its documentation – both the Cookbook for the components and the JS API documentation. They allowed me to quickly turn the requirements into working code. And throw in some extra functionality while I was at it.

    When I first looked at the requirements, it was not immediately clear to me that JET would be able to easily take on this challenge. I certainly did not rush out to give estimates to our customer – depending on the shoulders we could stand on, this could be a job for weeks or more. However, browsing through the Oracle JET Cookbook, it did not take too long to identify the data grid (the smarter sibling of the table component) as the obvious starting point. And to our good fortune, the Cookbook has a recipe for Data Grid with Nested Headers:

    With this recipe – which includes source code – as a starting point, it turned out to be quite straightforward to plug in our own data, rewrite the code from the recipe to handle our specific data structure and add custom cell styling. When that was done – rather easily – it was very seductive to start adding some features, both to take on the challenge and to further wow our customer.

    Because the data set presented in the grid is potentially quite large, it is convenient to have ways to narrow down what is shown. An intuitive way with hierarchical data is to collapse branches of the data set that are currently not relevant. So we added collapse icons to the month column headers; when a month is collapsed, the icon changes to an expand icon. Clicking the icon has the expected effect of collapsing or expanding all weeks and days under the month. From here it is a small step to allow all months to be collapsed or expanded with a single user action – so we added icons and supporting logic to make that happen.


    Also intuitive is the ability to drill down or zoom into a specific column – either a month or a week. We added that feature too – by allowing the month name or week number in the column header to be clicked upon. When that happens, all data outside the selected month and week are hidden.



    Finally, and very rough at present, we added a search field. The user can currently enter a number; this is interpreted as the number of the month to filter on. However, it would not be hard to interpret the search value more intelligently – also filtering on cell content for example.



    Did we not have any challenges? Well, not major stumbling blocks. Some of the topics that took a little longer to deal with:

    • understand the NestedHeaderDataGridDataSource code and find out where and how to customize for our own needs
    • create a custom module and use require to make it available in our ViewModel (see
    • use of Cell template and knock out tag for conditional custom cell rendering
    • capture the mouseover event in the cell template and pass the event and the cell (data) context to a custom function
    • generate a unique id for span element in cell in order to have an identifiable DOM element to attach the popup to
    • programmatically notify all subscribers to KnockOut observables that the observable has been updated (and the data grid component should refresh) (use function valueHasMutated() on observable)
    • programmatically manipulate the contents of the popup
    • take the input from the search field and use it (instantly) to filter and refresh the data grid
    • include images in a in GitHub (yes, that is a very different topic from Oracle JET)
    • create an animated gif (and that too) – see the result below, using

    I hope to describe these in subsequent blog posts.


    This animated gif gives an impression of what the prototype I put together does. In short:

    – present all data in a hierarchical grid

    – show a popup with cell details when hovering over a cell

    – collapse (and expand) months (by clicking the icon)

    – drill down on (zoom into) a single month or week (by clicking the column header)

    – collapse or expand all months and weeks

    – filter the data in the grid by entering a value in the search field





    Final Words

    In the end, the customer decided to have us use the Gantt Chart to present the hierarchical data. Mainly because of its greater visual appeal (the data grid looks too much like Excel) and the more fluid ability to zoom in and out. I am sure our explorations of the data grid will come in handy some other time. And if not, they have been interesting and fun.


    Source code for the article (and the nested header data grid component):

    Oracle JET JavaScript API Documentation

    – Popup –

    – Datagrid –

    Oracle JET Cookbook:

    – Popup Tooltip –

    – Nested Headers with Data Grid –

    – CRUD with Data Grid –

    Documentation for KnockOut –

    Documentation for RequireJS –

    The foundation of JS Fiddles for JET 3.2 –

    Blog Article on AMIS Blog – Oracle JET – Filtering Rows in Table with Multiselect and Search Field Filters

    The post Oracle JET Nested Data Grid for presenting Hierarchical Data Sets – with cell popup, collapse and expand, filter and zoom appeared first on AMIS Oracle and Java Blog.

    R and the Oracle database: Using dplyr / dbplyr with ROracle on Windows 10

    Wed, 2017-08-23 10:14

    R uses data extensively. Data often resides in a database. In this blog I will describe installing and using dplyr, dbplyr and ROracle on Windows 10 to access data from an Oracle database and use it in R.

    Accessing the Oracle database from R

    dplyr makes the most common data manipulation tasks in R easier. dplyr can use dbplyr, which provides a translation from the dplyr verbs to SQL queries. dbplyr 1.1.0 was released on 2017-06-27. See here. It uses the DBI (R Database Interface). This interface is implemented by various drivers, such as ROracle. ROracle is an Oracle driver based on OCI (Oracle Call Interface), a high-performance native C interface for connecting to the Oracle Database.

    Installing ROracle on Windows 10

    I encountered several errors when installing ROracle in Windows 10 on R 3.3.3. The steps to take to do this right in one go are the following:

    • Determine your R platform architecture. 32 bit or 64 bit. For me this was 64 bit
    • Download and install the oracle instant client with the corresponding architecture (here). Download the basic and SDK files. Put the sdk file from the sdk zip in a subdirectory of the extracted basic zip (at the same level as vc14)
    • Download and install RTools (here)
    • Set the OCI_LIB64 or OCI_LIB32 variables to the instant client path
    • Set the PATH variable to include the location of oci.dll
    • Install ROracle (install.packages(“ROracle”) in R)
    Encountered errors
    Warning in install.packages :
     package ‘’ is not available (for R version 3.3.3)

    You probably tried to install the ROracle package which Oracle provides on an R version which is too new (see here). This will not work on R 3.3.3. You can compile ROracle on your own or use the (older) R version Oracle supports.

    Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’ These will not be installed

    This can be fixed by installing RTools (here). This will install all the tools required to compile sources on a Windows machine.

    Next you will get the following question:

    Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’
    Do you want to attempt to install these from sources?

    If you say y, you will get the following error:

    installing the source package ‘ROracle’
    trying URL ''
    Content type 'application/x-gzip' length 308252 bytes (301 KB)
    downloaded 301 KB
    * installing *source* package 'ROracle' ...
    ** package 'ROracle' successfully unpacked and MD5 sums checked
    ERROR: cannot find Oracle Client.
     Please set OCI_LIB64 to specify its location.

    In order to fix this, you can download and install the Oracle Instant Client (the basic and SDK downloads).

    Mind that when running a 64 bit version of R, you also need a 64 bit version of the instant client. You can check with the R version command. In my case: Platform: x86_64-w64-mingw32/x64 (64-bit). Next you have to set the OCI_LIB64 variable (for 64 bit; else OCI_LIB32) to the instant client path.

    Next it will fail with something like:

    Error in inDL(x, as.logical(local), as.logical(now), ...) :
     unable to load shared object 'ROracle.dll':
     LoadLibrary failure: The specified module could not be found.

    This is caused when oci.dll from the instant client is not in the path environment variable. Add it and it will work! (at least it did on my machine). The INSTALL file from the ROracle package contains a lot of information about different errors which can occur during installation. If you encounter any other errors, be sure to check it.

    How a successful 64 bit compilation looks
    > install.packages("ROracle")
    Installing package into ‘C:/Users/maart_000/Documents/R/win-library/3.3’
    (as ‘lib’ is unspecified)
    Package which is only available in source form, and may need compilation of C/C++/Fortran: ‘ROracle’
    Do you want to attempt to install these from sources?
    y/n: y
    installing the source package ‘ROracle’
    trying URL ''
    Content type 'application/x-gzip' length 308252 bytes (301 KB)
    downloaded 301 KB
    * installing *source* package 'ROracle' ...
    ** package 'ROracle' successfully unpacked and MD5 sums checked
    Oracle Client Shared Library 64-bit - Operating in Instant Client mode.
    found Instant Client C:\Users\maart_000\Desktop\instantclient_12_2
    found Instant Client SDK C:\Users\maart_000\Desktop\instantclient_12_2/sdk/include
    copying from C:\Users\maart_000\Desktop\instantclient_12_2/sdk/include
    ** libs
    Warning: this package has a non-empty '' file,
    so building only the main architecture
    c:/Rtools/mingw_64/bin/gcc  -I"C:/PROGRA~1/R/R-33~1.3/include" -DNDEBUG -I./oci    -I"d:/Compiler/gcc-4.9.3/local330/include"     -O2 -Wall  -std=gnu99 -mtune=core2 -c rodbi.c -o rodbi.o
    c:/Rtools/mingw_64/bin/gcc  -I"C:/PROGRA~1/R/R-33~1.3/include" -DNDEBUG -I./oci    -I"d:/Compiler/gcc-4.9.3/local330/include"     -O2 -Wall  -std=gnu99 -mtune=core2 -c rooci.c -o rooci.o
    c:/Rtools/mingw_64/bin/gcc -shared -s -static-libgcc -o ROracle.dll tmp.def rodbi.o rooci.o C:\Users\maart_000\Desktop\instantclient_12_2/oci.dll -Ld:/Compiler/gcc-4.9.3/local330/lib/x64 -Ld:/Compiler/gcc-4.9.3/local330/lib -LC:/PROGRA~1/R/R-33~1.3/bin/x64 -lR
    installing to C:/Users/maart_000/Documents/R/win-library/3.3/ROracle/libs/x64
    ** R
    ** inst
    ** preparing package for lazy loading
    ** help
    *** installing help indices
    ** building package indices
    ** testing if installed package can be loaded
    * DONE (ROracle)
    Testing ROracle

    You can read the ROracle documentation here. Oracle has been so kind as to provide developer VMs to play around with the database. You can download them here. I used the ‘Database App Development VM’.

    After installation of ROracle you can connect to the database and for example fetch employees from the EMP table. See for example below (make sure you also have DBI installed).

    drv <- dbDriver("Oracle")
    host <- "localhost"
    port <- "1521"
    sid <- "orcl12c"
    connect.string <- paste(
    "(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
    "(CONNECT_DATA=(SID=", sid, ")))", sep = "")
    con <- dbConnect(drv, username = "system", password = "oracle", dbname = connect.string, prefetch = FALSE,
    bulk_read = 1000L, stmt_cache = 0L, external_credentials = FALSE,
    sysdba = FALSE)
    dbReadTable(con, "EMP")

    This will yield the data in the EMP table.

      EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
    1 7698 BLAKE MANAGER 7839 1981-05-01 00:00:00 2850 NA 30
    2 7566 JONES MANAGER 7839 1981-04-02 00:00:00 2975 NA 20
    3 7788 SCOTT ANALYST 7566 1987-04-19 00:00:00 3000 NA 20
    4 7902 FORD ANALYST 7566 1981-12-02 23:00:00 3000 NA 20
    5 7369 SMITH CLERK 7902 1980-12-16 23:00:00 800 NA 20
    6 7499 ALLEN SALESMAN 7698 1981-02-19 23:00:00 1600 300 30
    7 7521 WARD SALESMAN 7698 1981-02-21 23:00:00 1250 500 30
    8 7654 MARTIN SALESMAN 7698 1981-09-27 23:00:00 1250 1400 30
    9 7844 TURNER SALESMAN 7698 1981-09-08 00:00:00 1500 0 30
    10 7876 ADAMS CLERK 7788 1987-05-23 00:00:00 1100 NA 20
    11 7900 JAMES CLERK 7698 1981-12-02 23:00:00 950 NA 30
    Using dplyr

    dplyr uses dbplyr and it makes working with database data a lot easier. You can see an example here.

    Installing dplyr and dbplyr in R is easy:

    install.packages(c("dplyr", "dbplyr"))
    Various functions are provided to work with data.frames, a popular R datatype, in combination with data from the database. dplyr also provides an abstraction above SQL, which makes querying easier for non-SQL coders. You can compare it in some ways to Hibernate, which makes working with databases easier from the Java object world.

    Some functions dplyr provides:

    • filter() to select cases based on their values.
    • arrange() to reorder the cases.
    • select() and rename() to select variables based on their names.
    • mutate() and transmute() to add new variables that are functions of existing variables.
    • summarise() to condense multiple values to a single value.
    • sample_n() and sample_frac() to take random samples.

    I’ll use the same example data as with the above sample which uses plain ROracle

    #below are required to make the translation done by dbplyr to SQL produce working Oracle SQL
    sql_translate_env.OraConnection <- dbplyr:::sql_translate_env.Oracle
    sql_select.OraConnection <- dbplyr:::sql_select.Oracle
    sql_subquery.OraConnection <- dbplyr:::sql_subquery.Oracle
    drv <- dbDriver("Oracle")
    host <- "localhost"
    port <- "1521"
    sid <- "orcl12c"
    connect.string <- paste(
    "(ADDRESS=(PROTOCOL=tcp)(HOST=", host, ")(PORT=", port, "))",
    "(CONNECT_DATA=(SID=", sid, ")))", sep = "")
    con <- dbConnect(drv, username = "system", password = "oracle", dbname = connect.string, prefetch = FALSE,
    bulk_read = 1000L, stmt_cache = 0L, external_credentials = FALSE,
    sysdba = FALSE)
    emp_db <- tbl(con, "EMP")

    The output is something like:

    # Source: table<EMP> [?? x 8]
    # Database: OraConnection
      EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
      <int> <chr> <chr> <int> <dttm> <dbl> <dbl> <int>
    1 7839 KING PRESIDENT NA 1981-11-16 23:00:00 5000 NA 10
    2 7698 BLAKE MANAGER 7839 1981-05-01 00:00:00 2850 NA 30
    3 7782 CLARK MANAGER 7839 1981-06-09 00:00:00 2450 NA 10
    4 7566 JONES MANAGER 7839 1981-04-02 00:00:00 2975 NA 20
    5 7788 SCOTT ANALYST 7566 1987-04-19 00:00:00 3000 NA 20
    6 7902 FORD ANALYST 7566 1981-12-02 23:00:00 3000 NA 20
    7 7369 SMITH CLERK 7902 1980-12-16 23:00:00 800 NA 20
    8 7499 ALLEN SALESMAN 7698 1981-02-19 23:00:00 1600 300 30
    9 7521 WARD SALESMAN 7698 1981-02-21 23:00:00 1250 500 30
    10 7654 MARTIN SALESMAN 7698 1981-09-27 23:00:00 1250 1400 30
    # ... with more rows

    If I now want to select specific records, I can do something like:

    emp_db %>% filter(DEPTNO == "10")

    Which will yield

    # Source: lazy query [?? x 8]
    # Database: OraConnection
      EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
      <int> <chr> <chr> <int> <dttm> <dbl> <dbl> <int>
    1 7839 KING PRESIDENT NA 1981-11-16 23:00:00 5000 NA 10
    2 7782 CLARK MANAGER 7839 1981-06-09 00:00:00 2450 NA 10
    3 7934 MILLER CLERK 7782 1982-01-22 23:00:00 1300 NA 10

    A slightly more complex query:

    emp_db %>%
    group_by(DEPTNO) %>%
    summarise(EMPLOYEES = count())

    Will result in the number of employees per department:

    # Source: lazy query [?? x 2]
    # Database: OraConnection
      DEPTNO EMPLOYEES
      <int> <dbl>
    1 30 6
    2 20 5
    3 10 3

    You can see the generated query by:

    emp_db %>%
    group_by(DEPTNO) %>%
    summarise(EMPLOYEES = count()) %>% show_query()

    Will result in

    FROM ("EMP")

    If I want to take a random sample from the dataset to perform analyses on, I can do:

    sample_n(as_data_frame(emp_db), 10)

    Which could result in something like:

    # A tibble: 10 x 8
      EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
      <int> <chr> <chr> <int> <dttm> <dbl> <dbl> <int>
    1 7844 TURNER SALESMAN 7698 1981-09-08 00:00:00 1500 0 30
    2 7499 ALLEN SALESMAN 7698 1981-02-19 23:00:00 1600 300 30
    3 7566 JONES MANAGER 7839 1981-04-02 00:00:00 2975 NA 20
    4 7654 MARTIN SALESMAN 7698 1981-09-27 23:00:00 1250 1400 30
    5 7369 SMITH CLERK 7902 1980-12-16 23:00:00 800 NA 20
    6 7902 FORD ANALYST 7566 1981-12-02 23:00:00 3000 NA 20
    7 7698 BLAKE MANAGER 7839 1981-05-01 00:00:00 2850 NA 30
    8 7876 ADAMS CLERK 7788 1987-05-23 00:00:00 1100 NA 20
    9 7934 MILLER CLERK 7782 1982-01-22 23:00:00 1300 NA 10
    10 7782 CLARK MANAGER 7839 1981-06-09 00:00:00 2450 NA 10

    Executing the same command again will result in a different sample.


    There are multiple ways to get data to and from the Oracle database and perform actions on them. Oracle provides Oracle R Enterprise. Oracle R Enterprise is a component of the Oracle Advanced Analytics Option of Oracle Database Enterprise Edition. You can create R proxy objects in your R session from database-resident data. This allows you to work on database data in R while the database does most of the computations. Another feature of Oracle R Enterprise is an R script repository in the database and there is also a feature to allow execution of R scripts from within the database (embedded), even within SQL statements. As you can imagine this is quite powerful. More on this in a later blog!

    The post R and the Oracle database: Using dplyr / dbplyr with ROracle on Windows 10 appeared first on AMIS Oracle and Java Blog.

    Oracle JET – Filtering Rows in Table with Multiselect and Search Field Filters

    Sun, 2017-08-20 04:33


    A common requirement in any web application: allow the user to quickly drill down to records of interest by specifying relevant filters. The figure above shows two ways of setting filters: by selecting from the [limited number of] existing values in a certain column – here Location – and by specifying a search string whose value should occur in the records to be displayed after filtering.

    Oracle JET is a toolkit that supports development of rich web applications. With Oracle JET too this filtering feature is a common requirement. In this article I take a brief look at how to:

    • create the multi select element and how to populate it with data from the Location attribute of the records in the table
    • handle a (de)selection event in the multi select – leading to filtering of the records shown in the table
    • create the search field and intercept changes in the search field
    • handle resetting the search field
    • invoking the REST API when the search field has changed

    I am not claiming to present the best possible way to implement this functionality. I am not fluent enough in JET to make such a claim, and I have seen too many different implementations in Oracle documentation, blog articles, tutorials etc. to be able to point out the one approach that stands out (for the current JET release). However, the implementation I demonstrate here seems good enough as a starting point.

    The HRM module is a tab I have added to the Work Better demo application. It has its own ViewModel (hrm.js) and its own HTML view (hrm.html). I have implemented a very simple REST API in Node (http://host:port/departments?name=)  that provides the departments in a JSON document.

    Sources are in this Gist:

    Starting Point

    The starting point in this article is a simple JET application with a tab that contains a table displaying Department records retrieved from a REST API. The implementation of this application is not very special and is not the topic of this article.


    The objective of this article is to show how to add the capability to filter the records in this table – first by selecting the locations for which departments should be shown, using a multiselect widget. The filtering takes place on the client, against the set of departments retrieved from the backend service. The second step adds filtering by name using a search field. This level of filtering is performed by the server that exposes the REST API.


    Create and Populate the Multiselect Element for Locations

    The multiselect element in this case is the Oracle JET ojSelect component (see cookbook). The element shows a dropdown list of options that can be selected, displays the currently selected options and allows selected options to be deselected.


    The HTML used to add the multiselect component to the page is shown here:

    <label for="selectLocation">Locations</label>
    <select id="selectLocation" data-bind="ojComponent: { component: 'ojSelect' , options: locationOptions, multiple: true , optionChange:optionChangedHandler, rootAttributes: {style:'max-width:20em'}}">  

    The options attribute references the locationOptions property of the ViewModel that returns the select(able) option values – more on that later. The attribute multiple is set to true to allow multiple values to be selected and the optionChange attribute references the optionChangedHandler, a function in the ViewModel that handles option change events that are published whenever options are selected or deselected.

    When the Departments have been fetched from the REST API, the locationOptions are populated by identifying the unique values of the Location attribute across all Department records. Subsequently, all locations are set as selected values on the select component – since we start out with an unfiltered set of departments. The function handleDepartmentsFetch is called whenever fresh data has been fetched from the REST API.

    // values for the locations shown in the multiselect
    self.locationOptions = ko.observableArray([]);
    self.handleDepartmentsFetch = function (collection) {
        // collect distinct locations from the fetched records
        var locations = collection.pluck('Location'); // get all values for Location attribute
        // distill distinct values
        var locationData = new Set(locations.filter((elem, index, arr) => arr.indexOf(elem) === index));
        // rebuild locationOptions
        var uniqueLocationsArray = [];
        for (let location of locationData) {
            uniqueLocationsArray.push({ 'value': location, 'label': location });
        }
        ko.utils.arrayPushAll(self.locationOptions(), uniqueLocationsArray);
        // tell the observers that this observable array has been updated
        // (as a result, the multiselect UI component will be refreshed)
        self.locationOptions.valueHasMutated();
        // set the selected locations on the select component based on all distinct locations available
        $("#selectLocation").ojSelect({ "value": Array.from(locationData) });
    };

    I did not succeed in setting the selected values on the select component by updating an observable array that backs the value attribute of the ojSelect component. As a workaround, I now manipulate the ojSelect component programmatically, through a jQuery selection ($("#selectLocation")).


    Handle a (de)selection event in the multi select

    When the user changes the set of selected values in the Locations multiselect, we want the set of departments shown in the table to be updated – narrowed down or expanded, depending on whether a location was removed or added to the selected items.

    The ojSelect component has the optionChange attribute that in this case references the function optionChangedHandler. This function inspects the type of option change (does it equal "value"?) and then invokes function prepareFilteredDepartmentsCollection, passing the self.deppies collection that was initialized during the fetch from the REST API. That function clones the collection of all departments fetched from the REST API and subsequently filters it based on the selectedLocations.

    // returns an array of the values of the currently selected options in select component with id selectLocation
    self.getCurrentlySelectedLocations = function () {
        return $("#selectLocation").ojSelect("option", "value");
    };

    self.optionChangedHandler = function (event, data) {
        if (data.option == "value") {
            // REFILTER the data in self.deppies into the collection backing the table
            self.prepareFilteredDepartmentsCollection(self.deppies, self.getCurrentlySelectedLocations());
        }
    };

    // prepare (possibly filtered) set of departments and set data source for table
    self.prepareFilteredDepartmentsCollection = function (collection, selectedLocations) {
        if (collection) {
            // prepare filteredDepartmentsCollection
            var filteredDepartmentsCollection = collection.clone();
            var selectedLocationsSet = new Set(selectedLocations);
            var toFilter = [];
            // find all models in the collection that do not comply with the selected locations
            for (var i = 0; i < filteredDepartmentsCollection.size(); i++) {
                var deptModel =;
                if (!selectedLocationsSet.has(deptModel.attributes.Location)) {
                    toFilter.push(deptModel);
                }
            }
            // remove all departments that do not qualify according to the locations that are (not) selected
            filteredDepartmentsCollection.remove(toFilter);
            // update data source with fresh data and inform any observers of data source (such as the table component)
                new oj.CollectionTableDataSource(filteredDepartmentsCollection));
            self.dataSource.valueHasMutated();
        }// if (collection)
    };

    When the collection of filtered departments is created, the self.dataSource is refreshed with a new CollectionTableDataSource. With the call to self.dataSource.valueHasMutated(), we explicitly trigger subscribers to the dataSource – the Table component.
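    The refresh pattern (replace the value inside an observable, then call valueHasMutated() to notify subscribers) can be illustrated with a tiny stand-in for an observable. This is a sketch only, not the actual Knockout implementation:

```javascript
// Minimal stand-in for a Knockout-style observable (illustration only,
// not the real ko.observable implementation).
function observable(initialValue) {
  var value = initialValue;
  var subscribers = [];
  function obs(newValue) {
    if (arguments.length === 0) return value; // read
    value = newValue;                         // write...
    obs.valueHasMutated();                    // ...notifies automatically
  }
  obs.subscribe = function (fn) { subscribers.push(fn); };
  obs.valueHasMutated = function () {
    subscribers.forEach(function (fn) { fn(value); });
  };
  return obs;
}

var dataSource = observable({ rows: 3 });
var notifications = 0;
dataSource.subscribe(function () { notifications++; });

dataSource().rows = 2;        // in-place mutation: subscribers see nothing
console.log(notifications);   // 0
dataSource.valueHasMutated(); // explicit notification, as done for self.dataSource
console.log(notifications);   // 1
```

    This is why the explicit call matters whenever the wrapped object is mutated in place rather than replaced.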


    Create the search field and Intercept Changes in the Search Field

    The search field is simply an inputText element with some decoration. Associated with the search field is a button to reset (clear) the search field.


    The HTML code for these elements is:

    <div class="oj-flex-item oj-sm-8 ">
        <div class="oj-flex-item" style="max-width: 400px; white-space: nowrap">
            <input aria-label="search box" placeholder="search" data-bind="value: nameSearch, valueUpdate: 'afterkeydown' , ojComponent: {component: 'ojInputText', rootAttributes:{'style':'max-width:100%;'} }" />
            <div id="searchIcon" class="demo-icon-sprite demo-icon-search demo-search-position"></div>
            <button id="clearButton" data-bind="click: clearClick, ojComponent: { component: 'ojButton', label: 'Clear', display: 'icons', chroming: 'half', icons: {start:'oj-fwk-icon oj-fwk-icon-cross03'}}"></button>
        </div>
    </div>

    The search field is bound to nameSearch, an observable in the ViewModel. When the user edits the contents of the search field, the observable is updated and any subscribers are triggered. One such subscriber is a computed Knockout function that has a dependency on nameSearch. When that function is triggered – by a change in the value of nameSearch – it checks whether the search string consists of three or more characters and, if so, it triggers a new fetch of departments from the REST API by calling function fetchDepartments().

    // bound to search field
    self.nameSearch = ko.observable('');
    // this computed function is implicitly subscribed to self.nameSearch; any changes in the search field will trigger this function
    self.search = ko.computed(function () { // (name of the computed is assumed)
        var searchString = self.nameSearch();
        if (searchString.length > 2) {
            fetchDepartments();
        }
    });

    function getDepartmentsURL(operation, collection, options) {
        var url = dataAPI_URL + "?name=" + self.nameSearch();
        return url;
    }

    Function getDepartmentsURL() is invoked just prior to fetching the Departments. It returns the URL to use for fetching from the REST API. This function will add a query parameter to the URL with the value of the nameSearch observable.
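    One thing to watch in such a URL builder is escaping: the search string is concatenated verbatim, so characters like & or spaces in the search field would corrupt the query string. A hardened variant would encode the parameter; this is a sketch, and buildDepartmentsURL is a hypothetical helper, not part of the original source:

```javascript
// Hardened variant of the URL builder: encode the user-supplied search
// string so special characters cannot break the query string.
function buildDepartmentsURL(baseURL, nameSearch) {
  return baseURL + "?name=" + encodeURIComponent(nameSearch);
}

console.log(buildDepartmentsURL("/departments", "R&D"));
// → /departments?name=R%26D
```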


    Handle Resetting the Search Field

    The Clear button – shown in the previous HTML snippet – is associated with a click event handler: function clearClick. This function resets the nameSearch observable and explicitly declares its value updated – in order to trigger subscribers to the nameSearch observable. One such subscriber is the computed function introduced earlier, which will be triggered and will go ahead with refetching the departments from the REST API.

    // event handler for reset button (for search field)
    self.clearClick = function (data, event) {
        self.nameSearch('');
        self.nameSearch.valueHasMutated();
        return true;
    };
    The REST API

    The REST API is implemented with Node and Express. It is extremely simple; initially it just returns the contents of a static file (departments.json) with department records. It is slightly extended to handle the name query parameter, to only return selected departments. Note that this implementation is not the most efficient. For the purpose of this article, it will do the job.


    var express = require('express');
    var app = express();
    var departments = JSON.parse(require('fs').readFileSync('./departments.json', 'utf8'));
    // locations is an array of location names (defined elsewhere in the original source)
    // add a (random) location to each record
    for (var i = 0; i < departments.length; i++) {
        departments[i].location = locations[Math.floor(Math.random() * locations.length)];
    }

    app.get('/departments', function (req, res) {
        var nameFilter =; // read query parameter name (/departments?name=VALUE)
        // filter departments by the name filter
        res.send(
            departments.filter(function (department, index, departments) {
                return !nameFilter || department.DEPARTMENT_NAME.toLowerCase().indexOf(nameFilter) > -1;
            })
        ); // using send to stringify and set content-type
    });
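    The matching rule used by the handler (an empty filter matches everything, otherwise a case-insensitive substring match) can be pulled out into a pure predicate that is easy to verify in isolation. A sketch with made-up sample records; note that, unlike the handler above, this version also lowercases the filter value:

```javascript
// Pure predicate implementing the same matching rule as the /departments
// handler: empty filter matches all, otherwise case-insensitive substring.
function matchesName(department, nameFilter) {
  return !nameFilter ||
    department.DEPARTMENT_NAME.toLowerCase().indexOf(nameFilter.toLowerCase()) > -1;
}

var sample = [
  { DEPARTMENT_NAME: "SALES" },
  { DEPARTMENT_NAME: "Research" },
  { DEPARTMENT_NAME: "IT Support" }
];
console.log(sample.filter(function (d) { return matchesName(d, "re"); }).length); // 1
console.log(sample.filter(function (d) { return matchesName(d, ""); }).length);   // 3
```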
      Complete Source Code GIST

      Putting all source code together:





      Sources for this article in GitHub Gist:

      JET Cookbook on Multiselect

      JET Cookbook on Table and Filtering –

      Blog post by Andrejus Baranovskis – Oracle JET Executing Dynamic ADF BC REST URL

      JET Documentation on Collection  and its API Documentation.

      Knockout Documentation on Computed Observables and Observables

      JavaScript Gist on removing duplicates from an array –

      JavaScript Filter, Map and Reduce on Arrays:

      Oracle JET Cookbook – Recipe on Filtering

      The post Oracle JET – Filtering Rows in Table with Multiselect and Search Field Filters appeared first on AMIS Oracle and Java Blog.

      ODA X6-2M – How to create your own ACFS file system

      Mon, 2017-08-14 15:12

      In this post I will explain how to create your own ACFS file system (on the command line) that you can use to (temporarily) store data.

      So you have this brand new ODA X6-2M and need to create or migrate some databases to it. Thus you need space to store data that you will import into the new databases – or for some other purpose. The ODA X6-2M comes with lots of space in the form of (at least) two 3.2 TB NVMe disks, but those have been formatted as ASM disks when you executed the odacli create-appliance command, or when you used the GUI to deploy the ODA.

      If you opted for “External Backups”, most of the disk space will have been allocated to the +DATA ASM diskgroup, with the remainder in the +RECO diskgroup.

      Thus you need to decide which diskgroup you will use to create an ACFS file system on. Since we have 80% of space allocated to +DATA I decided to use some of that.

      Logon to your ODA as root and make a mount point that you will use:

      as root:
      mkdir /migration

      Then su to user grid and set the ASM environment:

      su - grid
      . oraenv
      [+ASM1] <press enter>

      The command below will use asmca to create a volume called migration on the ASM DiskGroup +DATA with initial allocation of 50 GB.

      asmca -silent -createVolume -volumeName migration -volumeDiskGroup DATA -volumeSizeGB 50

      Then you need to find the name of the volume you created in order to create an ACFS file system on it:

      asmcmd volinfo -G DATA migration | grep -oE '/dev/asm/.*'

      Let’s assume that the above command returned:

      /dev/asm/migration-46
      Then you can use the following command to create an ACFS file system on that volume and mount it on /migration:

      asmca -silent -createACFS -acfsVolumeDevice /dev/asm/migration-46 -acfsMountPoint /migration

      Next you need to run the generated script as a privileged user (aka root), as the message you get when executing the previous step tells you:


      To check the system for the newly created file system use:

      df -h /migration

      To get the details of the created file system use:

      acfsutil info fs /migration

      Or to just check the autoresize parameter, autoresizemax or autoresizeincrement use:

      /sbin/acfsutil info fs -o autoresize /migration
      /sbin/acfsutil info fs -o autoresizemax /migration
      /sbin/acfsutil info fs -o autoresizeincrement /migration

      To set the autoresize on with an increment of 10GB:

      /sbin/acfsutil size -a 10G /migration

      And to verify that it worked as expected:

      acfsutil info fs /migration
      acfsutil info fs -o autoresize /migration

      To use the file system as the oracle user you might want to set the permissions and ownership:

      ls -sla /migration
      chown oracle:oinstall /migration
      chmod 775 /migration
      ls -sla /migration

      And you are good to go!

      Of course you can also use the GUI to do this, then just start asmca as the grid user without the parameters and follow similar steps but then in the GUI.

      HTH – Patrick

      The post ODA X6-2M – How to create your own ACFS file system appeared first on AMIS Oracle and Java Blog.

      Adding a Cross Instance, Cross Restarts and Cross Application Cache to Node Applications on Oracle Application Container Cloud

      Sat, 2017-08-12 06:00

      In a previous post I described how to do Continuous Integration & Delivery from Oracle Developer Cloud to Oracle Application Container Cloud on simple Node applications: Automating Build and Deployment of Node application in Oracle Developer Cloud to Application Container Cloud. In this post, I am going to extend that very simple application with the functionality to count requests. With every HTTP request to the application, a counter is incremented and the current counter value is returned in the response.


      The initial implementation is a very naïve one: the Node application contains a global variable that is increased for each request that is handled. This is naïve because:

      • multiple instances run concurrently and each keeps its own count; because of load balancing, subsequent requests are handled by various instances and the responses show a somewhat irregular request-counter pattern; the total number of requests is not known – each instance only has a subtotal for that instance
      • when the application is restarted – or even a single instance is restarted or added – the request counter for each instance involved is reset

      Additionally, the request count value is not available outside the Node application, and it can only be retrieved by calling the application – which in turn increases the count.
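      The effect of per-instance counters behind a round-robin load balancer is easy to simulate. The sketch below stands in for two application instances; it is an illustration, not ACCS code:

```javascript
// Simulate two application instances, each with its own in-memory counter,
// behind a round-robin load balancer.
function makeInstance() {
  var requestCounter = 0;
  return function handleRequest() { return ++requestCounter; };
}

var instances = [makeInstance(), makeInstance()];
var responses = [];
for (var i = 0; i < 6; i++) {
  responses.push(instances[i % instances.length]()); // round robin
}
console.log(responses.join(',')); // 1,1,2,2,3,3 - an irregular pattern, no true total
```

      Six requests yield the counter sequence 1,1,2,2,3,3: each instance reports only its own subtotal.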

      A much better implementation would be one that uses a cache – that is shared by the application instances and that survives application (instance) restarts. This would also potentially make the request count value available to other microservices that can access the same cache – if we allow that to happen.

      This post demonstrates how an Application Cache can be set up on Application Container Cloud Service and how it can be leveraged from a Node application. It shows that the request counter will be shared across instances and survives redeployments and restarts.


      Note: there is still the small matter of race conditions, which is not addressed in this simple example: read, update and write are not performed as an atomic operation and no locking has been implemented.
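      One way to close that race is a compare-and-set retry loop. The sketch below demonstrates the pattern against an in-memory stand-in for the cache; the makeCache and compareAndSet primitives are hypothetical and not part of the ACCS caching REST API:

```javascript
// In-memory stand-in for a shared cache offering a compare-and-set primitive.
function makeCache() {
  var store = {};
  return {
    get: function (key) { return store[key]; },
    // write only if the current value still equals `expected`
    compareAndSet: function (key, expected, newValue) {
      if (store[key] === expected) { store[key] = newValue; return true; }
      return false;
    }
  };
}

// Atomic increment: retry until no other writer interfered between read and write.
function incrementCounter(cache, key) {
  for (;;) {
    var current = cache.get(key);
    var next = (current || 0) + 1;
    if (cache.compareAndSet(key, current, next)) return next;
  }
}

var cache = makeCache();
for (var i = 0; i < 5; i++) incrementCounter(cache, "requestCount");
console.log(cache.get("requestCount")); // 5
```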

      The steps are:

      • Add (naïve) request counting capability to greeting microservice
      • Demonstrate shortcomings upon multiple requests (handled by multiple instances) and by instance restart
      • Implement Application Cache
      • Add Application Cache service binding to ACCS Deployment profile for greeting in Developer Cloud Service
      • Utilize Application Cache in greeting microservice
      • Redeploy greeting microservice and demonstrate that request counter is shared and preserved

      Sources for this article are in GitHub: .

      Add (naïve) request counting capability to greeting microservice

      The very simple HTTP request handler is extended with a global variable requestCounter that is displayed and incremented for each request:


      It’s not hard to demonstrate shortcomings upon multiple requests (handled by multiple instances) :


      Here we see how subsequent requests are handled (apparently) by two different instances that each have their own, independently increasing count.

      After application restart, the count is back to the beginning.

      Implement Application Cache

      To configure an Application Cache we need to work from the Oracle Application Container Cloud Service console.



      Specify the details – the name and possibly the sizing:



      Press Create and the cache will be created:


      I got notified about its completion by email:



      Add Application Cache service binding to ACCS Deployment profile for greeting in Developer Cloud Service

      In order to be able to access the cache from within an application on ACCS, the application needs a service binding to the Cache service. This can be configured in the console (manually) as well as via the REST API, psm cli and the deployment descriptor in the Deployment configuration in Developer Cloud Service.

      Manual configuration through the web ui looks like this:


      or though a service binding:



      and applying the changes:



      I can then utilize the psm command line interface to inspect the JSON definition of the application instance on ACCS and so learn how to edit the deployment.json file with the service binding for the application cache. First set up psm:


      And inspect the greeting application:

      psm accs app -n greeting -o verbose -of json


      to learn about the JSON definition for the service binding:


      Now I know how to update the deployment descriptor in the Deployment configuration in Developer Cloud Service:


      The next time this deployment is performed, the service binding to the application cache is configured.

      Note: the credentials for accessing the application cache have to be provided and yes, horrible as it sounds and is, the password is in clear text!

      It seems that the credentials are not required. The value of password is now BogusPassword – which is not the true value of my password – and still accessing the cache works fine. Presumably the fact that the application is running inside the right network domain qualifies it for accessing the cache.

      The Service Binding makes the following environment variable available to the application – populated at runtime by the ACCS platform:


      Utilize Application Cache in greeting microservice

      The simplest way to make use of the service binding’s environment variable is demonstrated here (note that this does not yet actually use the cache):


      and the effect on requests:


      Now to actually interact with the cache – through REST calls as explained here – we will use the node module node-rest-client. This module is added to the application using:

      npm install node-rest-client --save


      Note: this instruction will update package.json and download the module code. Only the changed package.json is committed to the git repository. When the application is next built in Developer Cloud Service, it will perform npm install prior to zipping the Node application into a single archive. That action of npm install ensures that the sources of node-rest-client are downloaded and will get added to the file that is deployed to ACCS.

      Using this module, the app.js file is extended to read from and write to the application cache. See the changed code below (also in GitHub):

      var http = require('http');
      var Client = require("node-rest-client").Client;
      var version = '1.2.3';
      // Read Environment Parameters
      var port = Number(process.env.PORT || 8080);
      var greeting = process.env.GREETING || 'Hello World!';
      var requestCounter = 0;

      var server = http.createServer(function (request, response) {
        getRequestCounter(function (value) {
          requestCounter = (value ? value + 1 : requestCounter + 1);
          // put new value in cache - but do not wait for a response
          console.log("write value to cache " + requestCounter);
          writeRequestCounter(requestCounter);
          response.writeHead(200, { "Content-Type": "text/plain" });
          response.end("Version " + version + " says an unequivocal: " + greeting
                      + ". Request counter: " + requestCounter + ". \n");
        });
      });
      server.listen(port);

      // functionality for cache interaction
      var CCSHOST = process.env.CACHING_INTERNAL_CACHE_URL;
      var baseCCSURL = 'http://' + CCSHOST + ':8080/ccs';
      var cacheName = "greetingCache";
      var client = new Client();
      var keyString = "requestCount";

      function getRequestCounter(callback) {
        // GET the current value from the cache (URL pattern: /ccs/{cache}/{key})
        client.get(baseCCSURL + '/' + cacheName + '/' + keyString,
          function (data, rawResponse) {
            var value;
            // If nothing there, return not found
            if (rawResponse.statusCode == 404) {
              console.log("nothing found in the cache");
              value = null;
            } else {
              // Note: data is a Buffer object.
              console.log("value found in the cache " + data.toString());
              value = JSON.parse(data.toString()).requestCounter;
            }
            callback(value);
          });
      }

      function writeRequestCounter(requestCounter) {
        var args = {
          data: { "requestCounter": requestCounter },
          headers: { "Content-Type": "application/json" }
        };
        client.put(baseCCSURL + '/' + cacheName + '/' + keyString, args,
          function (data, rawResponse) {
            // Proper response is 204, no content.
            if (rawResponse.statusCode == 204) {
              console.log("Successfully put in cache " + JSON.stringify(data));
            } else {
              console.error("Error in PUT " + rawResponse);
              console.error('writeRequestCounter returned error '.concat(rawResponse.statusCode.toString()));
            }
          });
      }// writeRequestCounter

      Redeploy greeting microservice and demonstrate that request counter is shared and preserved

      When we make multiple invocations to the greeting service, we see a consistently increasing series of count values:


      Even when the application is restarted or redeployed, the request count is preserved and when the application becomes available again, we simply resume counting.

      The logs from the two ACCS application instances provide insight into what takes place – how load balancing makes these instances handle requests intermittently – and how they read each other’s results from the cache:




      Sources for this article are in GitHub: .

      Blog article by Mike Lehmann, announcing the Cache feature on ACCS:

      Documentation on ACCS Caches:

      Tutorials on cache enabling various technology based applications on ACCS:

      Tutorial on Creating a Node.js Application Using the Caching REST API in Oracle Application Container Cloud Service

      Public API Docs for Cache Service –

      Using psm to retrieve deployment details of ACCS application: (to find out how Application Cache reference is defined)

      The post Adding a Cross Instance, Cross Restarts and Cross Application Cache to Node Applications on Oracle Application Container Cloud appeared first on AMIS Oracle and Java Blog.

      Oracle Mobile Cloud Service (MCS): Overview of integration options

      Fri, 2017-08-11 04:40

      Oracle Mobile Cloud Service has a lot of options which allows it to integrate with other services and systems. Since it runs JavaScript on Node.js for custom APIs, it is very flexible.

      Some features extend its own functionality, such as the Firebase configuration option to send notifications to mobile devices, while for example the connectors allow wizard driven integration with other systems. The custom API functionality, running on a recent Node.js version, ties it all together. In this blog article I’ll provide a quick overview and some background of the integration options of MCS.

      MCS is very well documented here and there are many YouTube videos available explaining/demonstrating various MCS features here. So if you want to know more, I suggest looking at those.

      Some recent features

      Oracle is working hard on improving and expanding MCS functionality. For the latest improvements to the service see the following page. Some highlights I personally appreciate of the past half year which will also get some attention in this blog:

      • Zero footprint SSO (June 2017)
      • Swagger support in addition to RAML for the REST connector (April 2017)
      • Node.js version v6.10.0 support (April 2017)
      • Support for Firebase (FCM) to replace GCM (December 2016)
      • Support for third party tokens (December 2016)
      Feature integration

      Notification support

      In general there are two options for sending notifications from MCS. Integrating with FCM and integrating with Syniverse. Since they are third party suppliers, you should compare these options (license, support, performance, cost, etc) before choosing one of them.

      You can also use any other notification provider if it offers a REST interface by using the REST connector. You will not get much help in configuring it through the MCS interface though; it will be a custom implementation.

      Firebase Cloud Messaging / Google Cloud Messaging

      Notification support is implemented by integrating with Google cloud messaging products. Google Cloud Messaging (GCM) is being replaced with Firebase Cloud Messaging (FCM) in MCS. GCM has been deprecated by Google for quite a while now so this is a good move. You do need a Google Cloud Account though and have to purchase their services in order to use this functionality. See for example here on how to implement this from a JET hybrid application.


      Syniverse

      Read more on how to implement this here. You first have to create a Syniverse account. Next subscribe to the Syniverse Messaging Service, register the app and get credentials. These credentials can then be registered in MCS, under client management.


      Beacon support

      Beacons broadcast packets which can be detected over Bluetooth by mobile devices. The packet structure the beacons broadcast can differ. There are samples available for iBeacon, altBeacon and Eddystone, but others can be added if you know the corresponding packet structure. See the following presentation for some background on beacons and how they can be integrated in MCS. How to implement this for an Android app can be watched here.


      Client support

      MCS comes with several SDKs which provide easy integration of a client with MCS APIs. Available client SDKs are iOS, Android, Windows, Web (plain JavaScript). These SDKs provide an easy alternative to using the raw MCS REST APIs. They provide a wrapper for the APIs and provide easy access in the respective language the client uses.

      Authentication options (incoming)

      SAML, JWT

      Third party token support for SAML and JWT is available. Read more here. A token exchange is available as part of MCS which creates MCS tokens from third party tokens based on specifically defined mappings. These MCS tokens can be used by clients in subsequent requests. This does require some work on the client side, but the SDKs of course help with this.

      Facebook Login

      Read here for an example on how to implement this in a hybrid JET application.

      OAuth2 and Basic authentication support.

      No third party OAuth tokens are supported. This is not strange since the OAuth token does not contain user data and MCS needs a way to validate the token. MCS provides its own OAuth2 STS (Secure Token Service) to create tokens for MCS users. Read more here.

      Oracle Enterprise Single Sign-on support.

      Read here. This is not to be confused with the Oracle Enterprise Single Sign-on Suite (ESSO). This is browser based authentication of Oracle Cloud users which are allowed access to MCS.

      These provide the most common web authentication methods. Especially the third party SAML and JWT support provides for many integration options with third party authentication providers. OKTA is given as an example in the documentation.

      Application integration: connectors

      MCS provides connectors which allow wizard driven configuration in MCS. Connectors are used for outgoing calls. There is a connector API available which makes it easy to interface with the connectors from custom JavaScript code. The connectors support the use of Oracle Credential Store Framework (CSF) keys and certificates. TLS versions up to TLS 1.2 are supported; you are of course warned that older versions might not be secure. The requests the connectors make are over HTTP, since no other transports are currently supported directly. You can of course use REST APIs and ICS as wrappers should you need to.

      Connector security settings

      For the different connectors, several Oracle Web Service Security Manager (OWSM) policies are used. See here. These allow you to configure several security settings and for example allow usage of WS Security and SAML tokens for outgoing connections. The policies can be configured with security policy properties. See here.


      It is recommended to use the REST connector instead of doing calls directly from your custom API code, because the connectors integrate well with MCS and provide security and monitoring benefits – for example, out of the box analytics.


      The SOAP connector can do a transformation from SOAP to JSON and back to make working with the XML easier in JavaScript code. This has some limitations however:

      Connector scope

      There are also some general limitations defined by the scope of the API of the connector:

      • Only SOAP version 1.1 and WSDL version 1.2 are supported.
      • Only the WS-Security standard is supported. Other WS-* standards, such as WS-RM or WS-AT, aren’t supported.
      • Only document style and literal encoding are supported.
      • Attachments aren’t supported.
      • Of the possible combinations of input and output message operations, only input-output operations and input-only operations are supported. These operations are described in the Web Services Description Language (WSDL) Version 1.2 specification.
      Transformation limitations

      • The transformation from SOAP (XML) to JSON has limitations with the following constructs:
      • A choice group with child elements belonging to different namespaces having the same (local) name. This is because JSON doesn’t have any namespace information.
      • A sequence group with child elements having duplicate local names. For example, <Parent><ChildA/><ChildB/>…<ChildA/>…</Parent>. This translates to an object with duplicate property names, which isn’t valid.
      • XML Schema Instance (xsi) attributes aren’t supported.
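      The duplicate-local-name limitation follows directly from how JSON objects behave: duplicate property names cannot be represented, as a quick check in Node shows:

```javascript
// JSON objects cannot carry duplicate property names: on parsing,
// lexically later values overwrite earlier ones (per ECMA-262 JSON.parse).
var parsed = JSON.parse('{"Child": "A", "Child": "B"}');
console.log(parsed.Child);               // "B" - the value "A" is silently lost
console.log(Object.keys(parsed).length); // 1
```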
      Integration Cloud Service connector

      Read more about this connector here. This connector allows you to call ICS integrations. You can connect to your ICS instance and select an integration from a drop-down menu. For people who also use ICS in their cloud architecture, this will probably be the most common used connector.

      Fusion Applications connector

      Read more about this connector here. The flow looks similar to that of the ICS Cloud Adapters (here). In short, you authenticate, a resource discovery is done and local artifacts are generated which contain the connector configuration. At runtime this configuration is used to access the service. The wizard driven configuration of the connector is a great strength. MCS does not provide the full range of cloud adapters as is available in ICS and SOA CS.

      Finally

      Flexibility

      Oracle Mobile Cloud Service allows you to define custom APIs using JavaScript code. Oracle Mobile Cloud Service V17.2.5-201705101347 runs Node.js version v6.10.0 and OpenSSL version 1.0.2k (process.versions) which are quite new! Because a new OpenSSL version is supported, TLS 1.2 ciphers are also supported and can be used to create connections to other systems. This can be done from custom API code or by configuring the OWSM settings in the connector configuration. It runs on Oracle Enterprise Linux 6 kernel 2.6.39-400.109.6.el6uek.x86_64 (JavaScript: os.release()). Most JavaScript packages will run on this version so few limitations there.

      ICS also provides an option to define custom JavaScript functions (see here). I haven’t looked at the engine used in ICS, but I doubt it is a full blown Node.js instance; I suspect (please correct me if I’m wrong) that a JVM JavaScript engine is used, as in SOA Suite / SOA CS. That provides less functionality and performance compared to Node.js instances.

      What is missing?

      Integration with other Oracle Cloud services

      Mobile Cloud Service does lack out of the box integration options with other Oracle Cloud services. Only 4 HTTP based connectors are available. Thus if you want to integrate with an Oracle Cloud database (a different one than the one provided), you have to use the external DB’s REST API (with the REST connector or from custom API code), or use for example the Integration Cloud Service connector or the Application Container Cloud Service to wrap the database functionality. This of course requires a license for the respective services.

      Cloud adapters

      A Fusion Applications Connector is present in MCS. Also OWSM policies are used in MCS. It would therefore not be strange if MCS would be technically capable of running more of the Cloud adapters which are present in ICS. This would greatly increase the integration options for MCS.

      Mapping options for complex payloads

      Related to the above, if the payloads become large and complex, mapping fields also becomes more of a challenge. ICS does a better job at this than MCS currently. It has a better mapping interface and provides mapping suggestions.

      The post Oracle Mobile Cloud Service (MCS): Overview of integration options appeared first on AMIS Oracle and Java Blog.