Andrejus Baranovski

Blog about Oracle technology

Amazon SageMaker Model Endpoint Access from Oracle JET

Tue, 2018-11-13 10:54
If you are implementing a machine learning model with Amazon SageMaker, you will likely want to know how to access the trained model from the outside. There is a good article on the AWS Machine Learning Blog related to this topic - Call an Amazon SageMaker model endpoint using Amazon API Gateway and AWS Lambda. I went through the described steps and implemented a REST API for my own model. I went one step further and tested the API call from a JavaScript application implemented with Oracle JET, the free and open source JavaScript toolkit.

I will not go deep into the machine learning part in this post; I will focus exclusively on the AWS SageMaker endpoint. I'm using the Jupyter notebook from Chapter 2 of the book Machine Learning for Business. At the end of the notebook, once the machine learning model is created, we initialize an AWS endpoint (name: order-approval). Think of it as a kind of access point - through this endpoint we can call the prediction function:


Wait around 5 minutes until the endpoint starts. Then you should see the endpoint entry in SageMaker:


How do we expose the endpoint so it is accessible from outside? Through AWS Lambda and AWS API Gateway.

AWS Lambda

Go to the AWS Lambda service and create a new function. I already have a function, with Python 3.6 set as the runtime. AWS Lambda acts as a proxy between the endpoint and the API. This is the place where we can prepare input data and parse the response, before returning it to the API:


The function must be granted a role to access SageMaker resources:


This is the function implementation. The endpoint name is moved out into an environment variable. The function gets the input, calls the SageMaker endpoint and does some minimal processing of the response:
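The original snippet is shown as a screenshot; below is a minimal sketch of what such a handler can look like, assuming the payload shape from the referenced AWS blog article (the data key and CSV body are assumptions, not the exact code):

import os
import json
import boto3

# endpoint name is read from a Lambda environment variable
ENDPOINT_NAME = os.environ['ENDPOINT_NAME']
runtime = boto3.client('runtime.sagemaker')

def lambda_handler(event, context):
    # 'data' key and CSV body are assumptions based on the referenced AWS article
    payload = event['data']
    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='text/csv',
                                       Body=payload)
    # minimal processing: decode the prediction and return it to API Gateway
    result = json.loads(response['Body'].read().decode())
    return {'approval': result}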


We can test the Lambda function with a test payload. The payload I'm using is an encoded list of parameters for the machine learning model. The parameters describe a purchase order, and the model decides if manual approval is required or not. The decision rule: if the PO was raised by someone not from IT, but they order an IT product - manual approval is required. Read more about it in the book mentioned above. Test payload data:


Run a test execution - the model responds that manual approval for the PO is required:


AWS API Gateway

The final step is to define the API Gateway. The client will call the Lambda function through the API:


I have defined a REST resource and POST method for the API Gateway. The client request goes through the API call and is then directed to the Lambda function, which calls SageMaker for a prediction based on the client input data:


The POST method is set to call the Lambda function (the function with this name was created above):


Once the API is deployed, we get a URL. Make sure to add the REST resource name at the end. From Oracle JET we can use a simple jQuery call to execute the POST method. Once the asynchronous response is received, we display a notification message:
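The jQuery snippet itself is shown as a screenshot; as a language-neutral illustration, here is the same POST sketched in Python (the invoke URL and payload are placeholders):

import requests

# hypothetical invoke URL produced by the API Gateway deployment,
# with the REST resource name appended at the end
API_URL = 'https://<api-id>.execute-api.us-east-1.amazonaws.com/test/order-approval'

# placeholder payload - an encoded list of PO parameters for the model
response = requests.post(API_URL, json={'data': '...encoded PO features...'})
print(response.status_code, response.json())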


Oracle JET displays the prediction received from SageMaker - manual review is required for the current PO:


Download the Oracle JET sample application with the AWS SageMaker API call from my GitHub repo.

Introduction to Oracle Digital Assistant Dialog Flow

Fri, 2018-11-09 08:59
Oracle Digital Assistant is the new name for Oracle Chatbot. Actually, it is not only a new name - from now on, chatbot functionality is extracted into a separate cloud service - the Oracle Digital Assistant (ODA) Cloud Service. It now runs separately, no longer as part of Oracle Mobile Cloud Service. I think this is a strong move forward - it should make the ODA service lighter, easier to use and more attractive to someone who is not an Oracle Mobile Cloud Service customer.

I was playing around with the dialog flow definition in ODA and would like to share a few lessons learned. I exported my bot definition from ODA and uploaded it to a GitHub repo for your reference.

When a new bot is created in the ODA service, first of all you need to define a list of intents and provide sample phrases for each intent. Based on this information, the algorithm trains and creates a machine learning model for user input classification:


ODA gives us a choice - to use a simpler linguistics-based model or a machine learning algorithm. In my simple example I was using the first one:


The intent is assigned entities:


Think of an entity as a type, which defines a single value of a certain basic type or a list of values. Entities define the types of dialog flow variables:


The key part of a bot implementation is the dialog flow. This is where you define rules for how to handle intents and how to process conversation context. Currently ODA doesn't provide a UI to manage the dialog flow; you will need to type the rules by hand (if your bot logic is complex, you can probably create the YAML structure outside of ODA). I would highly recommend reading the ODA dialog flow guide, as this is the most complex part of bot implementation - The Dialog Flow Definition.

The dialog flow definition is based on two main parts - context variables and states. Context variables are where you define the variables accessible in the bot context. As you can see, it is possible to use either basic types or our own defined type (entity). The type nlpresult is built-in; a variable of this type receives the classified intent information:


The states part defines a sequence of stops (or dialogs); the bot transitions from one stop to another during the conversation with the user. Each stop points to a certain component; there is a number of built-in components, and you can use custom components too (to call a REST service, for example). In the example below, the user types submit project hours. This triggers classification, and the result is handled by System.Intent, from where the conversation flow starts - it goes to the dialog where the user should select a project from the list. While the conversation flow stays in context, we don't need to classify user input, because we treat user answers as input variables:


As soon as the user selects a project, the flow transitions to the next stop, selecttask, where we ask the user to select a task:


When a task is selected, we go to the next stop, to select the time spent on this task. See how we reference previous answers in the current prompt text - we can refer to and display a previous answer through an expression:


Finally we ask a question - whether the user wants to type more details about the task. By default all stops are executed in sequential order from top to bottom; if the transition is empty, the next stop executes - confirmtaskdetails in this case. The next stop is conditional (System.ConditionEquals component); depending on the user's answer it chooses which stop to execute next:


If the user chooses Yes, the flow goes to the next stop, where the user needs to type text (System.Text component):


At the end we print the task logging information and ask if the user wants to continue. If the answer is No, we stop the context flow; otherwise we ask the user what to do next:


Once we are out of the conversation context, when the user types a sentence it is classified to recognize a new intent, and the flow continues:


I hope this gives you a good introduction to bot dialog flow implementation in the Oracle Digital Assistant service.

Managing Persisted State for Oracle JET Web Component Variable with Writeback Property

Thu, 2018-11-08 01:03
Starting from JET 6.0.0, Composite Components (CCA) are renamed to Web Components (I like the new name more, it sounds simpler to me). In today's post I will talk about the Web Component writeback property and why it is important.

All variables (observable or not) defined inside a Web Component will be reset when navigating away from and back to the module where the Web Component is included. This means you can't store any values inside a Web Component - these values will be lost during navigation. Each time we navigate back to the module, all Web Components used inside that module are reloaded: the JS script for the Web Component is reloaded and the variables are re-initialized, losing their previous values. This behaviour is specific to Web Components; values of variables created in the owning module will not be reset.

If you want to keep a Web Component variable value, you will need to store the variable state outside of the Web Component. This can be achieved using a Web Component property with writeback support.

Let's see how the Web Component behaves at runtime. Source code is available in my GitHub repo.

Here I have a basic Web Component included in the dashboard module:


The Web Component doesn't implement anything except a JET switcher. Once the switcher state is changed, a variable is updated in the JS script:


The variable which holds the switcher state in the Web Component:


The Web Component is reloaded each time we navigate away and come back to the module - this means variables will be reset. This is how it looks: imagine we open the module for the first time, the switcher position is OFF:


Change it to ON:


Navigate to any other module and come back - you will see that the switcher is reset back to the default OFF state, which means the variable was reset (otherwise we would see the ON state):


If you want to keep the variable state, it should be maintained outside of the Web Component. To achieve this, create a Web Component property to hold the variable value, and make sure to define this property with writeback support:


For debugging purposes, add logging to the Web Component - this will help to see when it is reloaded:


The switcher variable must be initialized from the Web Component property. The very first time it will be empty, but as soon as the user changes the switcher state, the next time the Web Component is reloaded it will be assigned the value which was selected before:


When the switcher state is changed, we need to handle this event and make sure the Web Component property is updated with the new value:


The writeback property must be assigned an observable variable created in the module. The variable reference must be made writable with {{}} brackets:


Once the value changes inside the Web Component, the change is propagated up to the observable variable defined in the module. The next time we navigate away and come back to the module, we pass the most recent value to the Web Component:


This is how it works now. Load the module and change the switcher state (see in the log - the Web Component was loaded once):


Navigate to any other module:


Come back to the module where the Web Component is included. See in the log - the Web Component is reloaded, but the switcher variable value is not lost, because it was saved to the module observable variable through the Web Component writeback property:

Machine Learning - Getting Data Into Right Shape

Wed, 2018-11-07 06:17
When you build a machine learning model, start with the data - make sure the input data is prepared well and represents the true state of what you want the machine learning model to learn. Data preparation takes time, but don't hurry - quality data is key to machine learning success. In this post I will go through the essential steps required to bring data into the right shape to feed into a machine learning algorithm.

The sample dataset and Python notebook for this post can be downloaded from my GitHub repo.

Each row in the dataset represents an invoice sent to a customer. The original dataset extracted from the ERP system comes with five columns:

customer - customer ID
invoice_date - date when invoice was created
payment_due_date - expected invoice payment date
payment_date - actual invoice payment date
grand_total - invoice total


invoice_risk_decision - 0/1 value column which describes current invoice risk. The goal of the machine learning model will be to identify risk for future invoices, based on the risk estimated for historical invoice data.

There are two types of features - categorical and continuous:

categorical - often text rather than numbers; something that represents distinct groups/types
continuous - numbers

Machine learning algorithms typically work with numbers. This means we need to transform all categorical features into continuous ones. For example, grand_total is a continuous feature, but the dates and customer ID are not.

A date can be converted to continuous features by breaking it into multiple columns. Here is an example of breaking invoice_date into multiple continuous features (year, quarter, month, week, day of year, day of month, day of week):
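A minimal Pandas sketch of this transformation (the file name is hypothetical):

import pandas as pd

# hypothetical file name; parse invoice_date as a datetime column
df = pd.read_csv('invoices.csv', parse_dates=['invoice_date'])

# break invoice_date into multiple continuous features
dt = df['invoice_date'].dt
df['invoice_date_year'] = dt.year
df['invoice_date_quarter'] = dt.quarter
df['invoice_date_month'] = dt.month
df['invoice_date_week'] = dt.isocalendar().week
df['invoice_date_day_of_year'] = dt.dayofyear
df['invoice_date_day_of_month'] = dt.day
df['invoice_date_day_of_week'] = dt.dayofweek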


Using this approach, all date columns can be transformed into continuous features. The customer ID column can be converted into a matrix of 0/1 values: each unique value is moved into a separate column and assigned 1, while all other columns in that row are assigned 0. This transformation can be done with the Python library Pandas, as we will see later.

You may or may not have decision values for your data - this depends on how the data was collected and what process was implemented in the ERP app to collect it. The decision column (invoice_risk_decision) represents the business rule we want the machine learning model to learn. See the 0/1 values assigned to this column:


Rule description:

0 - invoice was paid on time, payment_date is less than or equal to payment_due_date
0 - invoice wasn't paid on time, but the total is less than the average of all invoice totals and the payment delay is less than or equal to 10% of the current customer's average
1 - all other cases, indicating high invoice payment risk

I would recommend saving the data in CSV format. Once the data is prepared, we can load it in a Python notebook:


I'm using the Pandas library (imported as pd) to load data from the file into a data frame. The head() function prints the first five rows of the data frame (size 5x24):


We can show the number of rows with 0/1 values - this helps to understand how the data set is constructed. We see that more than half the rows represent invoices without payment risk:
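A sketch of these two steps (the file name is hypothetical):

import pandas as pd

# load the prepared CSV into a data frame
df = pd.read_csv('invoices.csv')

# head() prints the first five rows of the data frame
print(df.head())

# count rows per decision value - 0 (no risk) vs 1 (risk)
print(df['invoice_risk_decision'].value_counts())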


The customer ID column is not a number, so we need to convert it. We will use the Pandas get_dummies function for this task. It turns every unique value into a column and places 0 or 1 depending on whether the row contains the value (this increases the dataset width):
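Continuing with the df data frame loaded above, a sketch of the transformation:

# turn each unique customer ID into its own 0/1 column
df = pd.get_dummies(df, columns=['customer'])
print(df.columns)  # dataset width increased by one column per customer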


The original customer column is gone; now we have one column per customer. If the customer with ID = 4 is located in a given row, 1 is set:


Finally we can check the correlation between the decision column, invoice_risk_decision, and the other columns in the dataset. Correlation shows which columns the machine learning algorithm will rely on to predict a value based on the other columns. Here is the correlation for our dataset (all columns with more than 10% correlation):
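A sketch of the correlation check, continuing with the same data frame (all columns are numeric at this point):

# correlation of each column with the decision column;
# keep columns where absolute correlation exceeds 10%
corr = df.corr()['invoice_risk_decision']
print(corr[corr.abs() > 0.1].sort_values(ascending=False))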


As you can see, all date columns have high correlation, as does grand_total. Our rule says that invoice payment risk is low if the invoice amount is less than the average of all totals - that's why the correlation with grand_total exists.

The customer with ID = 11 is the one with the largest number of invoices; correlation for this customer is higher than for the others, as expected.

TypeScript Example in Oracle JET 6.0.0

Wed, 2018-10-31 09:48
JET 6.0.0 officially supports TypeScript - great news. If you are building a large JavaScript application with JET, it will be much easier to manage the code with TypeScript: it does type checking and reports code errors at build time. Logic can be encapsulated into classes with inheritance. Read more about classes support in TypeScript.

In this post I will share a simple JET application enabled with TypeScript support. The sample application can be downloaded from the GitHub repo. Before running it with ojet serve, make sure to execute ojet restore to install all dependent modules.

If you want to add TypeScript support to a new JET app, this can be achieved with an npm command, executed in the application root:

npm install @types/oracle__oraclejet

I would recommend using Microsoft Visual Studio Code for Oracle JET development with TypeScript. The IDE comes with very good TypeScript support - autocompletion, debugging - I'm sure it will make JET development faster.

To be able to use TypeScript, install it globally with this command (read more about the various options - TypeScript setup):

npm install -g typescript

The first step is to add tsconfig.json to the root folder of the JET app. This configuration file enables TypeScript support in the JET app. You can copy tsconfig.json from the JET in TypeScript guide. I have updated outDir to match my app folder structure; this writes the JS files produced from TypeScript directly into the standard JET folder with JS files, overriding the JS module:


Next we create a new TypeScript file (extension .ts) under the typescripts folder. The file name should match the existing JS module file name, so that the JS target file is overridden during the TypeScript build:


TypeScript reports code errors at build time - for example, a function name that is not found:


Visual Studio Code provides auto completion for JET code; for example, it helps to import a module:


In TypeScript we can define classes. Variables can be created as objects of a certain class - this helps to define input parameter types and do strict type checks when passing these variables into functions. Study this simple code example written in TypeScript, and note how the observable variable is defined:


Visual Studio Code offers a build command to translate TypeScript code into JS:


Once the build completes, we get the translated JS code associated with the JET module. Take a look at how the class was translated, and how the callAction function was translated with its event input parameter:


The HTML part of the JET module remains the same as without TypeScript:


The observable variable change is handled in TypeScript:


The action listener is invoked and the function with a class-typed parameter is called:

ADF 19 Demo from Oracle Open World San Francisco

Tue, 2018-10-30 07:40
ADF 19 was announced by Shay Shmeltzer at OOW'18. Expect many bug fixes and improvements in this release. I have recorded two videos demonstrating:

1. Client side responsive layout
2. Vertical tabs with text labels
3. ADF list with swipe option
4. New client side date components
5. Client LOVs with search and custom result list

Part I demo:


Part II demo:


Slides from the session:

1. Oracle ADF 19 - What's Next


2. What's New in ADF Faces

ADF Task Flow Performance Boost with JET UI Shell Wrapper

Fri, 2018-10-19 01:09
An ADF application with UI Shell and ADF Task Flows rendered in dynamic tabs does not offer an instant switch from one tab to another. That's because the tab switch request goes to the server, and the tab switch only happens when the browser gets the response. There is more to this - even if a tab in ADF is not currently active (the tab is not disclosed), its content (e.g. a region rendered from an ADF Task Flow) may still participate in request processing. If the user opens many tabs, this can result in slightly slower request processing overall.

ADF allows rendering ADF Task Flows directly by accessing them through a URL, if the Task Flow is configured with page support at the root level. Since an ADF Task Flow can be accessed by URL, we can include it in an iframe. Imagine using an iframe for each tab and rendering ADF Task Flows inside. This enables independent processing for the ADF Task Flow in each tab, similar to opening them in separate browser tabs.

The iframe can be managed in Oracle JET using plain JavaScript and HTML. My sample implements dynamic JET tabs with iframe support; the iframe renders an ADF Task Flow. While navigating between tabs, I simply hide/show iframes - this keeps the state of each ADF Task Flow and returns to the same state when the tab is opened again. The huge advantage here: tab navigation and switching between tabs with ADF Task Flows is very fast - it takes only client-side processing time. Look at this recorded gif, where I navigate between tabs with ADF content:


The main functions are listed below.

1. Add dynamic iframe. Here we check if a frame for the given ADF Task Flow is already created; if not, we create it and append it to the HTML element:


2. Select iframe when switching tabs. Hide all frames first, then select the frame which belongs to the selected tab:


3. Remove iframe. The frame is removed when the tab is closed:


4. Select frame after removal. This method sets focus to the next frame, after the current tab was removed:


We control when an iframe or a regular JET module is rendered by using a flag computed function assigned to the main div:


In this app I have defined static URLs for the displayed ADF Task Flows. The same could be loaded by fetching a menu, etc.:


To be able to load an ADF Task Flow by URL, make sure to use an ADF Task Flow with a page (you can include an ADF region with fragments in that page). Set the url-invoke-allowed property:


This is how it looks. By default, the JET dashboard module is displayed; select an item from the menu list to load a tab with an ADF Task Flow:


JET tab rendering an iframe with an ADF table:


You can monitor ADF content loading in the iframe within the JET application:


JET tab rendering an iframe with an ADF form:


Download the sample app from the GitHub repository.

Oracle Offline Persistence Toolkit - Applying Server Changes

Wed, 2018-10-10 09:00
This is my final post related to Oracle Offline Persistence Toolkit. I will show a simple example which explains how to apply server changes when a data conflict comes up. Read the previous post - Oracle Offline Persistence Toolkit - Submitting Client Changes.

Applying server changes is easier than applying client changes. You need to remove the failed request from the sync queue and fetch the server data to the client by key.

Example of a data conflict during sync:


The user decides to cancel his changes and bring the data from the server. A GET is executed to fetch the latest data and push it to the client:


In the JS code, first of all we remove the request from the sync queue; in the promise we read the key value for that request and then refetch the data:


Download the sample code from the GitHub repository.

Oracle Offline Persistence Toolkit - Submitting Client Changes

Sat, 2018-10-06 20:30
One of the key topics related to Oracle Offline Persistence Toolkit is submitting client changes to the backend when a data conflict exists. If data was updated on the backend while the client was offline, and the client wants to submit his changes, we inform him about the conflict and ask what he really wants to do. If the client chooses to submit the changes, we should push the client changes to the backend with the latest change indicator.

There is a special case, when the client updates the same data multiple times while offline. During online sync we need to make sure the change indicator is retrieved in the after-sync listener and applied in the before-sync listener, so that subsequent requests execute correctly. Check my previous post about the before request sync listener - Oracle Offline Persistence Toolkit - Before Request Sync Listener.

Example - let's update a record and submit the change to the backend:


Assume another user is offline and updates the same record:


The user updates the same record again, before going online. Now we have two requests in the sync queue:


Once online, sync is executed and we get a conflict for the first request (the same row was already updated by another user). At this moment, the after-sync listener gets info about the conflict and caches the latest change indicator value returned from the backend. If the user decides to apply his changes, the request is removed, a new request is constructed with the latest change indicator value received from the backend, and this request is inserted into the sync queue:


If the same record was updated multiple times, the second request will fail too - because this request hasn't yet been updated with the latest change indicator:


Assuming the user decided to apply the changes from the second request too, we update the request with the latest change indicator and submit it for sync. In the after-sync listener, the change indicator value stored in the local cache is updated.

Successful sync with change indicator = 296:


The new change indicator value will be retrieved in the after-sync listener and applied in the before-sync listener for the second request, updating the same data row:


Here is the code which allows the user to apply changes to the backend. We remove the failed request, update it and create a new request in the sync queue, resuming the sync process:


Download the sample code for the described use case from my GitHub repository.

Oracle Offline Persistence Toolkit - Before Request Sync Listener

Tue, 2018-10-02 15:09
One more post from me related to Oracle Offline Persistence Toolkit. I already described how the after request listener can be useful to read response data after sync - Oracle Offline Persistence Toolkit - After Request Sync Listener. Today I will explain when the before request listener can be useful. Same as the after request listener, it is defined during persistence manager registration:


The before request listener must return a promise, and we can control the resolved action. For example, if there is no need to update the request, we simply return continue. We would need to update the request if the same row is updated multiple times during sync: the change indicator value must be updated in the request payload. We read the latest change indicator value from an array initialised in the after request listener. The request payload is converted to JSON, the value is updated, and then we construct a new request and resolve with replay. The API allows providing a new request, replacing the original:


Here is the use case. While offline - update a value:


While remaining offline, update the same value again:


We can trace the executed requests during sync, when going online. The first request, initiated by the first change, is using change indicator value 292:


The second request is using the updated change indicator value 293:


Without the before and after request listener logic, the second request would execute with the same change indicator value as the first one. This would lead to a data conflict on the backend.

Sample application code is available on GitHub.

Oracle Offline Persistence Toolkit - After Request Sync Listener

Fri, 2018-09-28 11:15
In my previous post we learned how to handle replay conflicts - Oracle Offline Persistence Toolkit - Reacting to Replay Conflict. Another important thing to know: how to handle the response from a request which was replayed during sync (we are talking here about PATCH). It is not as obvious as handling the response from a direct REST call in a callback (there is no callback for a response which is synchronised later). You may wonder why you would need to handle the response after a successful sync. Well, there could be multiple reasons - for instance, you may read the returned value and update the value stored on the client.

The listener is registered in the Persistence Manager configuration, by adding an event listener of type syncRequest for the given endpoint:


This is the listener code. We get the response, read the change indicator value (it was updated on the backend and the new value is returned in the response) and store it locally on the client. Additionally we maintain an array mapping the change indicator value to the updated row ID (in my next post I will explain why this is needed). The after request listener must return a promise:


At runtime, when request sync is executed, you should see a message printed in the log showing the new change indicator value:


Double check the payload, to make sure the request was submitted with the previous value:


Check the response - you will see the new value for the change indicator (same as in the after request listener):


Sample code can be downloaded from the GitHub repository.

Oracle Offline Persistence Toolkit - Reacting to Replay Conflict

Sat, 2018-09-22 08:01
This is the next post related to Oracle Offline Persistence Toolkit. Check my previous writing on the same subject - Implementing Handle Patch Method in JET Offline Toolkit. Read more about the toolkit in the GitHub repo.

When the application goes online, we call the synchronisation method. If at least one of the requests fails, synchronisation is stopped and the error callback is invoked, where we can handle the failure. In the error callback, we check if the failure is related to a conflict - in that case we open a dialog where the user decides what to do (force client changes or take server changes). We read the latest change indicator value from the response in the error callback (to apply it if the user decides to force client changes in the next request):


The dialog is simple - it displays dynamic text for the conflicted value and provides the user with a choice of actions:


Let's see how it works.

User A edits the value Lex and saves it to the backend:


User B is offline, edits the same value and saves it in local storage:


We can check it in the log - the changed value was stored in local storage:


When going online, pending requests logged offline are re-executed. Obviously the above request will fail, because the same value was changed by another user. A conflict is reported:


The PATCH operation fails with a 409 conflict code:


The user will be asked how to proceed: apply his changes and override the changes in the backend, or the opposite - take the changes from the backend and bring them to the client:


I will explain how to implement these actions in my next post. In the meantime you can study the complete application available in the GitHub repo.

Query Logic Implementation in VBCS for ADF BC REST

Wed, 2018-09-19 14:19
Oracle Visual Builder Cloud Service allows defining external REST service connections. In this post I will explain how to implement query logic against such a service. The connection is defined for an ADF BC REST service.

The wizard provides an option to add query parameters, both static and dynamic. I have set one static parameter, onlyData=true, so the service returns data only. I have also created multiple dynamic parameters; the one used in this use case is the q parameter. This parameter accepts a query expression to filter data. Later, in a VBCS action chain, I will assign a value to this parameter and the service will be re-executed to bring filtered data:


Search form elements will be assigned page scope variables, to hold user query input. On search button click, a VBCS action chain is invoked to read these values and update the query parameter. Page scope variables:


Variables firstNameQueryVar and lastNameQueryVar are assigned to search form fields; here is an example:


The search button invokes an action chain:


The action chain does two things - it calls a JS function to construct the query parameter, then assigns the returned value to the REST service query parameter to execute the search:


The JS function is mapped to accept input parameters from the search form input fields:


JS function code - the parameters are joined into an ADF BC REST query string:


The JS function result is mapped to a page scope variable - the result is assigned to this variable:


The REST service query parameter q is assigned this value. Once the value changes, the query is automatically re-executed:


In my next post I will explain how to implement filtering and pagination with a transformation function, on top of the service connection:


The VBCS sample application code is available on GitHub (if you download the ZIP from GitHub, make sure to extract it and create a new archive including the extracted content directly, without the top folder).

Implementing Handle Patch Method in JET Offline Toolkit

Wed, 2018-09-12 13:41
When executing PATCH requests offline, JET Offline Persistence Toolkit records the request and syncs it to the backend once online. But it does not update the data stored in the cache - this is by design. Since cached data is not updated, search queries against the offline cache would not bring results based on the latest changes. To solve this, we need to implement the cache update ourselves, by providing a handle patch method.

Handle patch is configured through the requestHandlerOverride property while registering the persistence manager:


Sample implementation of handle patch. This method is invoked only when PATCH is executed while offline. We must read information from the request and pass it to the cache store: search for the entry in the cache by key, update the record and write the info back to the store:


Let's do an offline test - switch the browser tab to be offline (you can do it in Chrome browser developer tools). Do a search and check the log from JET Offline Persistence Toolkit - it executes the search automatically against the cache store:


Update the same record while offline - the PATCH request will be recorded for later synchronisation. Our handle patch method is invoked to write the changes to the cache store:


You will notice in the log the actions executed by the handle patch method. It finds the record by key in the cache and updates it:


Search by the updated value - it is found and returned from the cache store:


The code is available in the GitHub repository.

ADF BC REST Query and SQL Nesting Control Solution

Thu, 2018-08-16 15:04
I will talk about an expert mode View Object (with hand-written SQL); this View Object is created based on a SQL join. So that's my use case for today's example. I will describe an issue related to the generated SQL statement and give a hint how to solve it. This is particularly useful if you want to expose a complex VO (SQL with joins and calculated totals) over an ADF BC REST service and then run queries against this REST resource.

Code is available in my GitHub repository.

Here is the SQL join and the expert mode VO (the one where you can modify SQL by hand):


This VO is exposed through ADF BC REST; I will not go through those details, you can find more info about it online. Once the application is running, the REST resource is accessible through GET. ADF BC REST syntax allows passing a query string along with the REST request; here I'm filtering based on StreetAddress='ABC':
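For illustration, the same GET can be sketched with Python requests (the service URL and credentials are placeholders):

import requests

# hypothetical ADF BC REST resource URL exposing the expert mode VO
url = 'http://host:port/restapp/rest/v1/JobsData'

# q holds the ADF BC REST filter expression
response = requests.get(url,
                        params={'q': "StreetAddress='ABC'"},
                        auth=('user', 'password'))
print(response.json())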


On the backend this works OK by default and generates a nested query (this is expected behaviour for expert mode VOs - all additional criteria clauses are added through SQL wrapping). While such a query executes just fine, this is not what we want in some use cases. If we calculate totals or average aggregated values in SQL, we don't want it to be wrapped:


To prevent SQL wrapping we can call an ADF BC API method in the VO constructor:


While this probably works with regular ADF BC, it doesn't work with criteria coming from ADF BC REST. The SQL query is generated with two WHERE clauses after query nesting was disabled:


My proposed solution - override the executeQueryForCollection method, do some parsing to change the second WHERE to AND, apply the changed query string and then call super:


This trick helps, and the query is generated as we would expect - the criteria added from the ADF BC REST query call are appended at the end of the WHERE clause:

Flow Navigation Menu Control in Oracle VBCS

Sun, 2018-08-12 14:33
Oracle VBCS allows us to build multiple flows within an application. This is great - it helps to split application logic into smaller modules. However, VBCS doesn't offer (in the current version) declarative support to build a menu structure to navigate between the flows. Luckily this requirement can be achieved in a few simple steps; please read John Ceccarelli's post - Adding a Navigation Bar to a VBCS Application. I thought to go through the instructions listed by John and test them out - today's post is based on this. In my next posts I will take a look at how to replace the navigation bar menu structure with something more advanced, for example a menu slider on the left.

I think VBCS has great potential as a declarative JavaScript development IDE. I see many concepts similar to other Oracle declarative development tools, e.g. Forms and Oracle ADF. VBCS runs Oracle JET - everything you build in VBCS is Oracle JET. Oracle takes care of upgrading the Oracle JET version in VBCS; I have applied a recent patch (by a click of a button) and the latest JET version is available within our VBCS environment:


Coming back to flows in VBCS: we can create as many flows as we want. Each flow can be based on one or multiple fragments (HTML/JS modules). Here I have created three flows, each with a single fragment:


We can select a flow and this brings us to the flow diagram, where we can implement navigation between flow elements/fragments:


Fragment - this is where the UI part is done:


So that's it about flows and fragments. For someone with an ADF background, this sounds very similar to task flows and fragments. Next we should see how to implement flow navigation, to be able to select a flow from the top menu. A VBCS application comes with a so-called shell page. This page is the top UI wrapper, which contains the application name, logged-in user info, etc. Here we can implement a top level menu, which navigates through application flows:


There must be a default flow, displayed once the application is loaded. The default flow is set in the settings of the shell page. Go to settings and choose the default flow, dashboard-flow in my case:


Next we need to add a JET component - navigation list - to the shell page, to render the menu UI. You can do it by drag and drop, but it is easier to switch the shell page to source view and add the navigation list HTML portion manually (you can copy-paste it from the source code uploaded to GitHub, see the link at the end of this post) - the highlighted HTML renders the menu bar to navigate between flows:


Initially you will notice an error related to the JET navigation list not being recognised - we need to import it. Another error - the selection listener is not found; we will implement it.

To import the JET navigation list component, go to the source implementation of the shell page and add oj-navigation-list in the component imports section - this solves the issue with the unknown navigation list entry:


To execute an action in VBCS, we must create an Action Chain. Create an Action Chain within the shell page - navigateToPage:


We need an input parameter - the flow name we want to navigate to. Create a variable in the Action Chain - currentFlow:


Add an action of type Navigate to the Action Chain; this triggers the navigation logic:


Go to the Action Chain source and add "page": "{{$variables.currentFlow}}" under actions. This forces navigation to the flow which is passed through the parameter:


Finally we create a navigation list selection event (within the shell page). This event triggers the action chain created above and passes the current flow ID. We must create a custom event, and its name should match the event name defined for the JET navigation list in HTML (see above):


Choose to create a custom event (it didn't work for me in Chrome, only in the Safari browser - a VBCS bug?) and provide the same name as in the navigation list component listener:


Choose our navigation Action Chain to be triggered from this event:


Just a reminder - the event is called from the navigation list selection:


The event passes the flow ID from the currently selected tab item:


At runtime, the dashboard flow is loaded by default:


We can switch to Jobs, etc.:


Download the exported VBCS app (runnable only in VBCS) from the GitHub repo.

Oracle Offline Persistence Toolkit - Controlling Online Replay

Thu, 2018-08-09 13:25
A few months ago I posted about the Oracle Offline Persistence toolkit, which integrates well with Oracle JET (the JavaScript toolkit from Oracle) - Oracle JET Offline Persistence Toolkit - Offline Update Handling. I'm back to this topic with the sample application upgraded to JET 5.1 and the offline toolkit upgraded to 1.1.5. In this post I will describe how to control online replay by filtering out some of the requests, to be excluded from replay.

Source code is available on GitHub. Below I describe the changes and functionality in the latest commit.

To test online replay, go offline and execute some actions in the sample app - change a few records, try to search by first name, and try to use the page navigation buttons. You will be able to save changes in offline mode, but if this is your first time loading the app and data from other pages wasn't fetched yet, page navigation will not bring any new results in offline mode (make sure to load more records while online and then go offline):


In the online replay manager, I'm intentionally filtering out GET requests. Once going online, I replay only PATCH requests. This is done mainly as a test, to learn how to control the replay process. PATCH requests are executed during replay:


Printing in the log each GET request which was removed from the replay loop:


Replay implementation (I would recommend reading the Offline Persistence Toolkit usage doc for more info):


This code is executed after the transition to online status. Calling the getSyncLog method from the Sync Manager returns a list of requests pending replay; the promise resolves with an array of requests waiting for online replay. I have marked the function async - this allows implementing a sequential loop, where each GET request is removed one by one, in order. This is needed since removeRequest from the Sync Manager executes in a promise, and a plain loop would complete too late - after we pass the replay phase. Read more about sequential loop implementation in JS when promises are used - JavaScript - Method to Call Backend Logic in Sequential Loop. Once all GET requests are removed, we execute the sync method, which forces all remaining requests in the queue to be replayed.

Data Conflict Solution for ADF BC REST with Versioning

Mon, 2018-08-06 10:06
I would like to share a sample solution for data conflict processing in ADF BC REST using versioning. When multiple users are concurrently editing the same data row, it is important to inform the user before overriding changes already committed by another user. There are other approaches to implement data conflict control - you should evaluate whether the solution explained below suits your use case before applying it.

Sample code can be obtained from the GitHub repository.

I'm using a custom change indicator property to evaluate whether client data is expired. The change indicator value is sent to the client together with the request data. A PATCH request must include the current client side change indicator value. If the change indicator matches the value in the backend, the PATCH is allowed; otherwise a new change indicator is returned to the client and the response is marked with a 409 Conflict status code. Based on this, the client can decide either to resubmit the PATCH request with the new change indicator, overwriting the current data in the DB, or to refresh the client side data and try to submit the changes later.
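A client-side sketch of this protocol in Python (the endpoint URL and attribute names are assumptions for illustration):

import requests

# hypothetical endpoint and attribute names, illustrating the protocol
URL = 'http://host:port/restapp/rest/v1/Employees/101'

def patch_row(payload, change_indicator):
    payload['ChangeIndicator'] = change_indicator
    r = requests.patch(URL, json=payload, auth=('user', 'password'))
    if r.status_code == 409:
        # conflict: another user changed the row; the backend returns the
        # latest indicator - refresh data, or resubmit with it to override
        return False, r.json()['ChangeIndicator']
    # 200 OK: keep the new indicator for the next PATCH on this row
    return True, r.json()['ChangeIndicator']

ok, indicator = patch_row({'Salary': 5000}, change_indicator=294)
if not ok:
    # user chose to force client changes with the latest indicator
    ok, indicator = patch_row({'Salary': 5000}, change_indicator=indicator)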

In this example, PATCH was executed with a valid change indicator and the response status is 200 OK. A new change indicator value is returned to the client (it should be submitted with the next PATCH call for the current row):


To test a data change conflict, I go directly to the DB and change the same record. The change indicator is updated too:


The client doesn't know about the change indicator update (the data was changed by another user). The client includes the currently known change indicator value and executes PATCH. This results in a 409 Conflict status. The backend returns the latest change indicator value in the response:


The data wasn't updated - the PATCH request was stopped on the backend:


The client now knows the latest change indicator value and can submit it again - this time successfully (no one else changed the data in the meantime):


Status 200 OK is returned, along with a new change indicator value. The data is changed in the DB as expected:


The backend implementation is not complex. You need a DB trigger, which gets a value from a DB sequence and assigns it to each changed row:


ADF BC REST includes the change indicator attribute; it is marked with Refresh on Update support. This allows getting the latest value assigned by the DB trigger and returning it to the client:


In the doDML method we compare the change indicator attribute value currently stored in the DB with the one which comes from the client. If the values do not match (the client doesn't have the latest value), the update is not allowed:


When the update is not allowed, we also must change the HTTP response code to 409 Conflict. This allows executing the error callback on the client side and taking the required action to process the data conflict on the client. The HTTP response code is set from a custom ADF BC REST filter:

Text Classification with Deep Neural Network in TensorFlow - Simple Explanation

Mon, 2018-07-30 13:05
Text classification implementation with TensorFlow can be simple. One of the areas where text classification can be applied is chatbot text processing and intent resolution. In this post I will describe, step by step, how to build a TensorFlow model for text classification and how classification is done. Please refer to my previous post on a similar topic - Contextual Chatbot with TensorFlow, Node.js and Oracle JET - Steps How to Install and Get It Working. I would also recommend going through this great post about chatbot implementation - Contextual Chatbots with Tensorflow.

Complete source code is available in the GitHub repo (refer to the steps described in the blog referenced above).

Text classification implementation:

Step 1: Preparing Data
  • Tokenise patterns into arrays of words
  • Lower case and stem all words. Example: Pharmacy = pharm. This attempts to represent related words with one token
  • Create a list of classes - intents
  • Create a list of documents - combinations of patterns and intents
Python implementation:
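The notebook code is shown as a screenshot; here is a sketch along the lines of the referenced chatbot tutorial (the intents.json file and its structure are assumptions):

import json
import nltk
from nltk.stem.lancaster import LancasterStemmer

# nltk.download('punkt') may be required once for the tokenizer
stemmer = LancasterStemmer()

# hypothetical intents file: {"intents": [{"tag": ..., "patterns": [...]}]}
with open('intents.json') as f:
    intents = json.load(f)

words, classes, documents = [], [], []
ignore_words = ['?']

for intent in intents['intents']:
    for pattern in intent['patterns']:
        w = nltk.word_tokenize(pattern)          # tokenise pattern into words
        words.extend(w)
        documents.append((w, intent['tag']))     # document = (pattern, intent)
        if intent['tag'] not in classes:
            classes.append(intent['tag'])        # list of classes - intents

# lower case and stem all words, drop duplicates
words = sorted(set(stemmer.stem(w.lower()) for w in words if w not in ignore_words))
classes = sorted(set(classes))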


Step 2: Preparing TensorFlow Input
  • [X: [0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, ...N], Y: [0, 0, 1, 0, 0, 0, ...M]]
  • [X: [0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, ...N], Y: [0, 0, 0, 1, 0, 0, ...M]]
  • Array representing the pattern with 0/1. N = vocabulary size. 1 when the word at this position in the vocabulary matches a word from the pattern
  • Array representing the intent with 0/1. M = number of intents. 1 when the intent position in the list of intents/classes matches the current intent
Python implementation:
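A sketch of this step, continuing from Step 1:

import random

training = []
output_empty = [0] * len(classes)

for pattern_words, tag in documents:
    stemmed = [stemmer.stem(w.lower()) for w in pattern_words]
    # X: 1 where the vocabulary word occurs in the pattern (N = vocabulary size)
    bag = [1 if w in stemmed else 0 for w in words]
    # Y: 1 at the position of the matching intent (M = number of intents)
    output_row = list(output_empty)
    output_row[classes.index(tag)] = 1
    training.append((bag, output_row))

random.shuffle(training)
train_x = [row[0] for row in training]
train_y = [row[1] for row in training]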


Step 3: Training Neural Network
  • Use tflearn - deep learning library featuring a higher-level API for TensorFlow
  • Define X input shape - equal to word vocabulary size
  • Define two layers with 8 hidden neurons each - optimal for this text classification task (based on experiments)
  • Define Y input shape - equal to number of intents
  • Apply regression to find the best equation parameters
  • Define Deep Neural Network model (DNN)
  • Run model.fit to construct classification model. Provide X/Y inputs, number of epochs and batch size
  • For each epoch, multiple operations are executed to find optimal model parameters to classify future input converted to an array of 0/1
  • Batch size
    • A smaller batch size requires less memory - especially important for datasets with a large vocabulary
    • Networks typically train faster with smaller batches, as weights and network parameters are updated after each propagation
    • The smaller the batch, the less accurate the estimate of the gradient (the function which describes the data) can be
Python implementation:
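A sketch of this step with tflearn, continuing from Step 2:

import tensorflow as tf
import tflearn

tf.reset_default_graph()  # tflearn builds on the TF 1.x graph API

# X input shape equals vocabulary size, Y shape equals number of intents
net = tflearn.input_data(shape=[None, len(train_x[0])])
net = tflearn.fully_connected(net, 8)      # two hidden layers with 8 neurons
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, len(train_y[0]), activation='softmax')
net = tflearn.regression(net)              # find the best equation parameters

# define the Deep Neural Network model and train it
model = tflearn.DNN(net)
model.fit(train_x, train_y, n_epoch=1000, batch_size=8, show_metric=True)
model.save('model.tflearn')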


Step 4: Initial Model Testing
  • Tokenise input sentence - split it into array of words
  • Create bag of words (array with 0/1) for the input sentence - array equal to the size of vocabulary, with 1 for each word found in input sentence
  • Run model.predict with given bag of words array, this will return probability for each intent
Python implementation:
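A sketch of this step, continuing from the previous ones (the test sentence is made up):

import numpy as np

def bow(sentence, words):
    # tokenise the input sentence and stem, same as the training vocabulary
    sentence_words = [stemmer.stem(w.lower()) for w in nltk.word_tokenize(sentence)]
    # bag of words: array the size of the vocabulary, 1 per word found
    bag = [0] * len(words)
    for s in sentence_words:
        for i, w in enumerate(words):
            if w == s:
                bag[i] = 1
    return np.array(bag)

# model.predict returns a probability for each intent
p = bow('do you take cash?', words)
print(model.predict([p]))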


Step 5: Reuse Trained Model
  • For better reusability, it is recommended to create a separate TensorFlow notebook to handle classification requests
  • We can reuse the previously created DNN model by loading it together with the data persisted through pickle
Python implementation:
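A sketch of the separate classification notebook, assuming the training notebook persisted words/classes/training data with pickle (e.g. via pickle.dump after Step 3) and saved the model as model.tflearn:

import pickle
import tensorflow as tf
import tflearn

# restore vocabulary, classes and training data persisted after training
data = pickle.load(open('training_data', 'rb'))
words, classes = data['words'], data['classes']
train_x, train_y = data['train_x'], data['train_y']

# rebuild the identical network from Step 3, then load the persisted model
tf.reset_default_graph()
net = tflearn.input_data(shape=[None, len(train_x[0])])
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, len(train_y[0]), activation='softmax')
net = tflearn.regression(net)
model = tflearn.DNN(net)
model.load('./model.tflearn')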


Step 6: Text Classification
  • Define a REST interface, so that the function is accessible outside TensorFlow
  • Convert the incoming sentence into a bag of words array and run model.predict
  • Consider results with probability higher than 0.25, to filter out noise
  • Return multiple identified intents (if any), together with the assigned probability
Python implementation:
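A sketch of this step, continuing from the previous ones; Flask is an assumption here for the REST interface, the original may use a different framework:

from flask import Flask, request, jsonify

app = Flask(__name__)  # Flask is an assumption for the REST interface
ERROR_THRESHOLD = 0.25

def classify(sentence):
    results = model.predict([bow(sentence, words)])[0]
    # keep intents with probability above 0.25 to filter noise
    results = [(classes[i], float(r)) for i, r in enumerate(results)
               if r > ERROR_THRESHOLD]
    # return identified intents sorted by assigned probability
    return sorted(results, key=lambda x: x[1], reverse=True)

@app.route('/classify', methods=['POST'])
def classify_endpoint():
    # expects a JSON body like {"sentence": "..."}
    return jsonify(classify(request.json['sentence']))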

Oracle VBCS - Pay As You Go Cloud Model Experience Explained

Thu, 2018-07-19 14:03
If you are considering using the VBCS cloud service from Oracle, this post may be useful. I will share my experience with the pay as you go model.

Two payment models are available:

1. Pay As You Go - good when accessing VBCS from time to time. Can be terminated at any time
2. Monthly Flex - good when you need to run VBCS 24/7. Requires commitment, can't be terminated at any time

When you create an Oracle Cloud account, you initially get a 30 day free trial period. At the end of that period (or earlier), you can upgrade to a billable plan. To upgrade, go to account management and choose to upgrade the promotional offer - you will be given the choice of Pay As You Go or Monthly Flex:


As soon as you upgrade to Pay As You Go, you will start seeing the monthly usage amount in the dashboard. It also shows hourly usage of the VBCS instance - this is what you are billed for:


Click on the monthly usage amount to see a detailed view of each service billing. When the VBCS instance is stopped (in the case of Pay As You Go), you are billed only for hardware storage (Compute Classic) - this is a relatively small amount:


There are two options for creating a VBCS instance - either autonomous VBCS or customer managed VBCS. To be able to stop/start the VBCS instance and avoid billing when the instance is not used (in the case of Pay As You Go), make sure to go with customer managed VBCS. In this example, the VBCS instance was used only for 1 hour and then stopped; it can be started again at any time:


To manage the VBCS instance, navigate to the Oracle Cloud Stack UI. From here you can start/stop both DB and VBCS in a single action. It is not enough to stop VBCS - make sure to stop the DB too, if you are not using it:
