Christopher Jones

Oracle Blogs

So you want to use JSON in Oracle Database with Node.js?

Wed, 2018-08-15 05:52

The JavaScript JSON.parse() and JSON.stringify() methods make it easy to work with JavaScript objects in Node.js and store them in Oracle Database using the node-oracledb module.

I'll start with some examples showing a simple, naive implementation which you can use with all versions of Oracle Database. Then I'll go on to show some of the great JSON functionality introduced in Oracle Database 12.1.0.2.

The examples below use the async/await syntax available from Node.js 7.6, but they can be rewritten to use promises or callbacks if you have an older version of Node.js.

Storing JSON as character data in Oracle Database 11.2

At its simplest, you can store JSON as character strings, such as in the column C of MYTAB:

CREATE TABLE mytab (k NUMBER, c CLOB);

Using a CLOB means we don't need to worry about the length restrictions of a VARCHAR2.

A JavaScript object like myContent can easily be inserted into Oracle Database with the node-oracledb module by stringifying it:

const oracledb = require('oracledb');

let connection, myContent, json, result;

async function run() {
  try {
    connection = await oracledb.getConnection(
      {user: "hr", password: "welcome", connectString: "localhost/orclpdb"});

    myContent = {name: "Sally", address: {city: "Melbourne"}};
    json = JSON.stringify(myContent);

    result = await connection.execute(
      'insert into mytab (k, c) values (:kbv, :cbv)',
      { kbv: 1, cbv: json });
    console.log('Rows inserted: ' + result.rowsAffected);
  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();

If you are just inserting one record you may want to autocommit, but make sure you don't unnecessarily commit, or break transactional consistency by committing a partial set of data:

myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);

result = await connection.execute(
  'insert into mytab (k, c) values (:kbv, :cbv)',
  { kbv: 1, cbv: json },
  { autoCommit: true });
console.log('Rows inserted: ' + result.rowsAffected);

The output is:

Rows inserted: 1

To retrieve the JSON content you have to use a SQL query. This is fine when you only need to look up records by their keys:

result = await connection.execute(
  'select c from mytab where k = :kbv',
  { kbv: 1 },  // the key to find
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

The fetchInfo clause is used to return the CLOB as a string. This is simpler and generally faster than the default, streamed access method for LOBs. (Streaming is great for huge data streams such as videos.)

The JSON.parse() call converts the JSON string into a JavaScript object so fields can be accessed like 'js.address.city'.

Output is:

Name is: Sally
City is: Melbourne

Code gets trickier if you need to match JSON keys in the query. You need to write your own matching functionality using LOB methods like dbms_lob.instr():

result = await connection.execute(
  'select c from mytab where dbms_lob.instr(c, \'"name":"\' || :cbv ||\'"\') > 0',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

You can see this could be slow to execute, error-prone to get right, and very hard to work with when the JSON is highly nested. But there is a solution . . .

Oracle Database 12c JSON

With Oracle 12.1.0.2 onward you can take advantage of Oracle's JSON functionality. Data is stored as VARCHAR2 or LOB so the node-oracledb code is similar to the naive storage solution above. However, in the database, extensive JSON functionality provides tools for data validation, indexing and matching, for working with GeoJSON, and even for working with relational data. Check the JSON Developer's Guide for more information. You may also be interested in some of the JSON team's blog posts.

To start with, when you create a table, you can specify that a column should be validated so it can contain only JSON:

c CLOB CHECK (c IS JSON)) LOB (c) STORE AS (CACHE)

In this example I also take advantage of Oracle 12c's 'autoincrement' feature called 'identity columns'. This automatically creates a monotonically increasing sequence number for the key. The complete CREATE TABLE statement used for following examples is:

CREATE TABLE myjsontab (
  k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
  c CLOB CHECK (c IS JSON))
  LOB (c) STORE AS (CACHE);

Strictly speaking, since I know my application will insert valid JSON, I could have improved database performance by creating the table without the CHECK (c IS JSON) clause. However, if you don't know where your data is coming from, letting the database do validation is wise.
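
For example, with the check constraint in place the database rejects malformed documents at insert time. This is just a minimal sketch; the exact error text varies, but it is a check constraint violation such as ORA-02290:

try {
  await connection.execute(
    'insert into myjsontab (c) values (:cbv)',
    { cbv: '{"name": "Sally"' });  // malformed JSON: missing closing brace
} catch (err) {
  console.error(err.message);     // expect a check constraint violation
}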

Inserting JavaScript object data uses the same stringification as in the previous section. Since we don't need to supply a key now, we can use a DML RETURNING clause to get the new key's autoincremented value:

myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);

result = await connection.execute(
  'insert into myjsontab (c) values (:cbv) returning k into :kbv',
  { cbv: json,
    kbv: { type: oracledb.NUMBER, dir: oracledb.BIND_OUT } },
  { autoCommit: true });
console.log('Data key is: ' + result.outBinds.kbv);

This inserts the data and returns the key of the new record. The output is:

Data key is: 1

To extract data by the key, a standard SQL query can be used, identical to the naive CLOB implementation previously shown.
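
For instance, a key lookup against MYJSONTAB would be a sketch along these lines, mirroring the earlier code:

result = await connection.execute(
  'select c from myjsontab where k = :kbv',
  { kbv: 1 },
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
} else {
  console.log('No rows fetched');
}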

Oracle Database's JSON functionality really comes into play when you need to match attributes of the JSON string. You may even decide not to have a key column. Using Oracle 12.2's 'dotted' query notation you can do things like:

result = await connection.execute(
  'select c from myjsontab t where t.c.name = :cbv',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

Output is:

Name is: Sally
City is: Melbourne

(If you use Oracle Database 12.1.0.2, then the dotted notation used in the example needs to be replaced with a path expression; see the JSON manual for the syntax.)
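
A hypothetical 12.1.0.2 equivalent of the dotted query, written with a JSON_VALUE path expression, might look like this sketch:

result = await connection.execute(
  "select c from myjsontab where json_value(c, '$.name') = :cbv",
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.STRING } }});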

Other JSON functionality is usable, for example to find any records that have an 'address.city' field:

select c FROM myjsontab where json_exists(c, '$.address.city')

If you have relational tables, Oracle Database 12.2 has a JSON_OBJECT function that is a great way to convert relational table data to JSON:

result = await connection.execute(
  `select json_object('deptId' is d.department_id, 'name' is d.department_name) department
   from departments d
   where department_id < :did`,
  { did: 50 },
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  for (var i = 0; i < result.rows.length; i++) {
    console.log("Department: " + result.rows[i][0]);
    js = JSON.parse(result.rows[i][0]);
    console.log('Department Name is: ' + js.name);
  }
} else {
  console.log('No rows fetched');
}

Output is:

Department: {"deptId":10,"name":"Administration"}
Department Name is: Administration
Department: {"deptId":20,"name":"Marketing"}
Department Name is: Marketing
Department: {"deptId":30,"name":"Purchasing"}
Department Name is: Purchasing
Department: {"deptId":40,"name":"Human Resources"}
Department Name is: Human Resources

If you are working with JSON tables that use BLOB storage instead of CLOB, for example:

CREATE TABLE myjsonblobtab (
  k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
  c BLOB CHECK (c IS JSON))
  LOB (c) STORE AS (CACHE);

Then you need to bind a Buffer for insert:

myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);
b = Buffer.from(json, 'utf8');

result = await connection.execute(
  'insert into myjsonblobtab (k, c) values (:kbv, :cbv)',
  { kbv: 1, cbv: b },
  { autoCommit: true });
console.log('Rows inserted: ' + result.rowsAffected);

Querying needs to return a Buffer too:

result = await connection.execute(
  'select c from myjsonblobtab t where t.c.name = :cbv',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.BUFFER } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0].toString('utf8'));
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

A final JSON tip

One final tip is to avoid JSON.parse() if you don't need it. An example is where you need to pass a JSON string to a web service or browser. You may be able to pass the JSON string returned from a query directly. In some cases the JSON string may need its own key, in which case simple string concatenation may be effective. In this example, the Oracle Locator method returns a GeoJSON string:

result = await connection.execute(
  `select sdo_util.to_geojson(
      sdo_geometry(2001, 8307, sdo_point_type(-90, 20, null), null, null)) as c
   from dual`,
  { },  // no binds
  { fetchInfo: {"C": {type: oracledb.STRING } }});

json = '{"geometry":' + result.rows[0][0] + '}';
console.log(json);

The concatenation above avoids the overhead of a parse and re-stringification:

js = JSON.parse(result.rows[0][0]);
jstmp = {geometry: js};
json = JSON.stringify(jstmp);

Summary

The JavaScript JSON.parse() and JSON.stringify() methods make it easy to work with JSON in Node.js and Oracle Database. Combined with node-oracledb's ability to work with LOBs as Node.js Strings, database access is very efficient. Oracle Database 12.1.0.2's JSON features make JSON operations in the database simple. Advances in Oracle Database 12.2 and 18c further improve the functionality and usability.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Issues and questions about node-oracledb can be posted on GitHub.

The Oracle JSON Developer's Guide is here.

Some New Features of Oracle Instant Client 18.3

Mon, 2018-07-30 05:52

We released Oracle Database 18.3 for Linux last week. It and the "full" Oracle Client are downloadable from here. Read this to find out about some of the new database features.

Many of the readers of my blog have an equal interest in the "client side". You'll be happy to hear that Oracle Instant Client 18.3 for Linux 64-bit and 32-bit is also available. Instant Client is just a rebundling of the Oracle client libraries and some tools. They are the same ones available with an Oracle Database installation or the "full" Oracle Client installation, but installation is much simpler: you just unzip a file or install an RPM package on Linux, and use them to connect your applications to Oracle Database.

The "Oracle Client", in whatever install footprint you choose, covers a number of technologies and provides a lot of language APIs. The Instant Client packages contain these APIs and selected tools like SQL*Plus and Data Pump. I'll let those teams blow their own trumpets about the new release. Here I'll talk about some of the Oracle Client functionality that benefits the Oracle Call Interface (OCI) API for C programs, and all the scripting languages that use OCI:

  • My wider group's most exciting project in 18.3 is the Connection Manager (CMAN) Traffic Director Mode, whose sub-location in the Oracle manual is a sign of how transparent the feature is, not an indication of the huge engineering effort that went into it. CMAN in Traffic Director Mode is a proxy between the database clients and the database instances. Supported OCI clients from Oracle Database 11g Release 2 (11.2) and later can connect to CMAN to get improved high availability (HA) for planned and unplanned database server outages, connection multiplexing support, and load balancing.

Cherry picking some notable Oracle Client 18c features that are available via OCI:

  • You probably know that Oracle Database 18c is really just a re-badged 12.2.0.2. Due to the major version number change and the new release strategy, there is a new OCIServerRelease2() call to get the database version number. The old OCIServerRelease() function will give just the base release information so use the new function to get the actual DB patch level. Why? Let's just say there were robust discussions about the upgrade and release cycles, and about handling the "accelerated" version change across the whole database product suite and how things like upgrade tools were affected.

  • Extracting Instant Client 18.3 ZIP files now pre-creates symbolic links for the C and C++ client libraries on relevant operating systems. Yay! One fewer install step.

  • Instant Client now also pre-creates a network/admin sub-directory to show where you can put any optional network and other configuration files such as tnsnames.ora, sqlnet.ora, ldap.ora, and oraaccess.xml. This directory will be used by default for any application that loads the related Oracle Client libraries.

  • Support for Client Result Caching with dynamic binds where descriptors are not involved and the bind length is less than 32768. Since scripting languages tend to use dynamic binds for character data this could be a big performance win for your lookup table queries.

  • Unique ID generation improvements. One long-standing gotcha, particularly in some hosted or cloud environments, was an error when Oracle applications tried to generate a unique key for your client. This manifested itself as an Oracle error when you tried to start a program. Workarounds included adding a hostname to /etc/hosts. There were improvements in Oracle Client 18c for unique key generation, so the problem should be less common.

  • A new call timeout parameter can be enabled for C applications. This applies to post-connection round-trips to the database, making it easier to interrupt long running calls and satisfy application quality of service requirements. After you connect, each OCI call may make one or more round-trips to Oracle Database:

    • If the time from the start of any one round-trip to the completion of that same round-trip exceeds the call timeout milliseconds, then the operation is halted and an Oracle error is returned.

    • In the case where an OCI call requires more than one round-trip and each round-trip takes less than the specified number of milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds the call timeout value.

    • If no round-trip is required, the operation will never be interrupted.

    After a timeout has occurred, the connection must be cleaned up. This is allowed to run for the same amount of time as specified for the original timeout. For very small timeouts, if the cleanup fails, then an ORA-3114 is returned and the connection must be released. However if the cleanup is successful then an ORA-3136 is returned and the application can continue using the connection.

    You can see this will be most useful for interrupting SQL statements whose "execute" phase may take some time.

  • The OCI Session pool underlies many application connection pools (and if it doesn't underlie yours, then it should - ask me why). Improvements in 18c session pooling include some usability "do-what-I-mean" parameter size check tweaks, internal lock improvements, and a new attribute OCI_ATTR_SPOOL_MAX_USE_SESSION.

    One other change that was much debated during development is the OCISessionGet() behavior of OCI_SPOOL_ATTRVAL_NOWAIT mode when a pool has to grow. Prior to 18c, even though it was a 'no wait' operation, getting a connection would actually wait for the pool to grow. Some users didn't like this. Since creating connections could take a few moments they had no way to control the quality of service. Now in 18c the mode doesn't wait - if there's no free connection immediately available, then control is returned to the application with an error. If you are impacted by the new behavior, then look at using alternative session acquire modes like OCI_SPOOL_ATTRVAL_TIMEDWAIT. Or better, keep your pool a constant size so it doesn't need to grow, which is what is recommended by Oracle's Real World Performance Group.

  • SODA support. Simple Oracle Document Access (SODA), which was previously only available via JDBC, is now available in OCI. Yum. Let's see what we can do with this now that it's in C. More on this later.

I hope this has given you a taste of some Oracle Client 18c changes and given you links to explore more. Don't forget that much new database functionality is available to clients transparently or via SQL and PL/SQL.

Finally, remember that Oracle has client-server version interoperability so 18c OCI programs can connect to Oracle Database 11.2 or later. It's time to upgrade your client!

Python cx_Oracle 6.4 Brings a World Cup of Improvements

Mon, 2018-07-02 19:58

cx_Oracle logo

cx_Oracle 6.4, the extremely popular Oracle Database interface for Python, is now Production on PyPI.

cx_Oracle is an open source package that conforms to the Python Database API specification with many additions to support Oracle advanced features.

At a nicely busy time of year, cx_Oracle 6.4 has landed. To keep it brief I'll point you to the release notes since there have been quite a number of improvements. Some of those will significantly help your apps.

A few things to note:

  • Improvements to Continuous Query Notification and Advanced Queuing notifications

  • Improvements to session pooling

  • A new encodingErrors setting to choose how to handle decoding corrupt character data queried from the database

  • You can now use a cursor as a context manager:

    with conn.cursor() as c:
        c.execute("SELECT * FROM DUAL")
        result = c.fetchall()
        print(result)
cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html

Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: github.com/oracle/python-cx_Oracle

Demo: GraphQL with node-oracledb

Thu, 2018-06-21 09:18

Some of our node-oracledb users recently commented they have moved from REST to GraphQL so I thought I'd take a look at what it is all about.

I can requote the GraphQL talking points with the best of them, but things like "Declarative Data Fetching" and "a schema with a defined type system is the contract between client and server" are easier to understand with examples.

In brief, GraphQL:

  • Provides a single endpoint that responds to queries. No need to create multiple endpoints to satisfy varying client requirements.

  • Has more flexibility and efficiency than REST. Being a query language, you can adjust which fields are returned by queries, so less data needs to be transferred. You can parameterize the queries, for example to alter the number of records returned - all without changing the API or needing new endpoints.

Let's look at the payload of a GraphQL query. This query with the root field 'blog' asks for the blog with id of 2. Specifically it asks for the id, the title and the content of that blog to be returned:

{ blog(id: 2) { id title content } }

The response from the server would contain the three requested fields, for example:

{ "data": { "blog": { "id": 2, "title": "Blog Title 2", "content": "This is blog 2" } } }

Compare that result with this query that does not ask for the title:

{ blog(id: 2) { id content } }

With the same data, this would give:

{ "data": { "blog": { "id": 2, "content": "This is blog 2" } } }

So, unlike REST, we can choose what data needs to be transferred. This makes client development more flexible.

Let's look at some code. I came across this nice intro blog post today which shows a basic GraphQL server in Node.js. For simplicity its data store is an in-memory JavaScript object. I changed it to use an Oracle Database backend.

The heart of GraphQL is the type system. For the blog example, a type 'Blog' is created in our Node.js application with three obvious values and types:

type Blog { id: Int!, title: String!, content: String! }

The exclamation mark means a field is required.

The part of the GraphQL Schema to query a blog post by id is specified in the root type 'Query':

type Query { blog(id: Int): Blog }

This defines a capability to query a single blog post and return the Blog type we defined above.

We may also want to get all blog posts, so we add a "blogs" field to the Query type:

type Query {
  blog(id: Int): Blog
  blogs: [Blog],
}

The square brackets indicate that a list of Blogs is returned.

A query to get all blogs would be like:

{ blogs { id title content } }

You can see that the queries include the 'blog' or 'blogs' field. We can pass all queries to the one endpoint and that endpoint will determine how to handle each. There is no need for multiple endpoints.

To manipulate data requires some 'mutations', typically making up the CUD of CRUD:

input BlogEntry {
  title: String!,
  content: String!
}

type Mutation {
  createBlog(input: BlogEntry): Blog!,
  updateBlog(id: Int, input: BlogEntry): Blog!,
  deleteBlog(id: Int): Blog!
}

To start with, the "input" type allows us to define input parameters that will be supplied by a client. Here a BlogEntry contains just a title and content. There is no id, since that will be automatically created when a new blog post is inserted into the database.

In the mutations, you can see a BlogEntry type is in the argument lists for the createBlog and updateBlog fields. The deleteBlog field just needs to know the id to delete. The mutations all return a Blog. An example of using createBlog is shown later.

Combined, we represent the schema in Node.js like:

const typeDefs = `
type Blog {
  id: Int!,
  title: String!,
  content: String!
}
type Query {
  blogs: [Blog],
  blog(id: Int): Blog
}
input BlogEntry {
  title: String!,
  content: String!
}
type Mutation {
  createBlog(input: BlogEntry): Blog!,
  updateBlog(id: Int, input: BlogEntry): Blog!,
  deleteBlog(id: Int): Blog!
}`;

This is the contract, defining the data types and available operations.

In the backend, I decided to use Oracle Database 12c's JSON features. Needless to say, using JSON gives developers the power to modify and improve the schema during the life of an application:

CREATE TABLE blogtable (blog CLOB CHECK (blog IS JSON));

INSERT INTO blogtable VALUES (
  '{"id": 1, "title": "Blog Title 1", "content": "This is blog 1"}');
INSERT INTO blogtable VALUES (
  '{"id": 2, "title": "Blog Title 2", "content": "This is blog 2"}');
COMMIT;

CREATE UNIQUE INDEX blog_idx ON blogtable b (b.blog.id);

CREATE SEQUENCE blog_seq START WITH 3;

Each field of the JSON strings corresponds to the values of the GraphQL Blog type. (The 'dotted' notation syntax I'm using in this post requires Oracle DB 12.2, but can be rewritten for 12.1.0.2.)
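
For example, a 12.1.0.2 rewrite of an id lookup could use a JSON_VALUE path expression instead of the dotted notation. This is just a sketch, so check the syntax against your database version:

SELECT b.blog
FROM blogtable b
WHERE json_value(b.blog, '$.id' RETURNING NUMBER) = :id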

The Node.js ecosystem has some powerful modules for GraphQL. The package.json is:

{
  "name": "graphql-oracle",
  "version": "1.0.0",
  "description": "Basic demo of GraphQL with Oracle DB",
  "main": "graphql_oracle.js",
  "keywords": [],
  "author": "christopher.jones@oracle.com",
  "license": "MIT",
  "dependencies": {
    "oracledb": "^2.3.0",
    "express": "^4.16.3",
    "express-graphql": "^0.6.12",
    "graphql": "^0.13.2",
    "graphql-tools": "^3.0.2"
  }
}

If you want to see the full graphql_oracle.js file it is here.

Digging into it, the application has some 'Resolvers' to handle the client calls. From Dhaval Nagar's demo, I modified these resolvers to invoke new helper functions that I created:

const resolvers = {
  Query: {
    blogs(root, args, context, info) {
      return getAllBlogsHelper();
    },
    blog(root, {id}, context, info) {
      return getOneBlogHelper(id);
    }
  },
  [ . . . ]
};

To conclude the GraphQL part of the sample, the GraphQL and Express modules hook up the schema type definition from above with the resolvers, and start an Express app:

const schema = graphqlTools.makeExecutableSchema({typeDefs, resolvers});

app.use('/graphql', graphql({
  graphiql: true,
  schema
}));

app.listen(port, function() {
  console.log('Listening on http://localhost:' + port + '/graphql');
})

On the Oracle side, we want to use a connection pool, so the first thing the app does is start one:

await oracledb.createPool(dbConfig);

The helper functions can get a connection from the pool. For example, the helper to get one blog is:

async function getOneBlogHelper(id) {
  let sql = 'SELECT b.blog FROM blogtable b WHERE b.blog.id = :id';
  let binds = [id];
  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds);
  await conn.close();
  return JSON.parse(result.rows[0][0]);
}

The JSON.parse() call nicely converts the JSON string that is stored in the database into the JavaScript object to be returned.
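
The getAllBlogsHelper() function used by the 'blogs' resolver follows the same pattern; the real implementation is in the linked graphql_oracle.js file, but a hypothetical sketch looks like:

async function getAllBlogsHelper() {
  let conn = await oracledb.getConnection();
  let result = await conn.execute('SELECT b.blog FROM blogtable b');
  await conn.close();
  return result.rows.map(r => JSON.parse(r[0]));  // one JavaScript object per row
}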

Starting the app and loading the endpoint in a browser gives a GraphiQL IDE. After entering the query on the left and clicking the 'play' button, the middle pane shows the returned data. The right hand pane gives the API documentation:

To insert a new blog, the createBlog mutation can be used:
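
A mutation along these lines (the values are just for illustration) creates a post and asks for the stored fields back:

mutation {
  createBlog(input: {title: "Blog Title 3", content: "This is blog 3"}) {
    id
    title
    content
  }
}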

If you want to play around more, I've put the full set of demo-quality files for you to hack on here. You may want to look at the GraphQL introductory videos, such as this comparison with REST.

To finish, GraphQL has the concept of real time updates with subscriptions, something that ties in well with the Continuous Query Notification feature of node-oracledb 2.3. Yay - something else to play with! But that will have to wait for another day. Let me know if you beat me to it.

Node-oracledb 2.3 with Continuous Query Notifications is on npm

Fri, 2018-06-08 01:48

Release announcement: Node-oracledb 2.3.0, the Node.js module for accessing Oracle Database, is on npm.

Top features: Continuous Query Notifications. Heterogeneous Connection Pools.

 

 

Our 2.x release series continues with some interesting improvements: Node-oracledb 2.3 is now available for your pleasure. Binaries for the usual platforms are available for Node.js 6, 8, and 10; source code is available on GitHub. We are not planning on releasing binaries for Node.js 4 or 9 due to the end of life of Node.js 4, and the release of Node.js 10.

The main new features in node-oracledb 2.3 are:

  • Support for Oracle Database Continuous Query Notifications, allowing JavaScript methods to be called when database changes are committed. This is a cool feature useful when applications want to be notified that some data in the database has been changed by anyone.

    I recently posted a demo showing CQN and Socket.IO keeping a notification area of a web page updated. Check it out.

    The new node-oracledb connection.subscribe() method is used to register a Node.js callback method, and the SQL query that you want to monitor. It has two main modes: for object-level changes, and for query-level changes. These allow you to get notifications whenever an object changes, or when the result set from the registered query would be changed, respectively. There are also a bunch of configuration options for the quality-of-service and other behaviors.

    It's worth noting that CQN requires the database to establish a connection back to your node-oracledb machine. Commonly this means that your node-oracledb machine needs a fixed IP address, but it all depends on your network setup.

    Oracle Database CQN was designed for infrequently modified tables, so make sure you test your system scalability.

  • Support for heterogeneous connection pooling and for proxy support in connection pools. This allows each connection in the pool to use different database credentials.

    Some users migrating to node-oracledb had schema architectures that made use of this connection style for data encapsulation and auditing. Note that making use of the existing clientId feature may be a better fit for new code, or code that does mid-tier authentication. A minimal sketch of a heterogeneous pool appears after this list.

  • A Pull Request from Danilo Silva landed, making it possible for Windows users to build binaries for self-hosting. Thanks Danilo! Previously this was only possible on Linux and macOS.

  • Support for 'fetchAsString' and 'fetchInfo' to allow fetching RAW columns as hex-encoded strings.
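
Returning to the heterogeneous pooling item above, a minimal sketch of supplying per-connection credentials (the user, password, and connect string here are placeholders) could look like:

const pool = await oracledb.createPool({
  connectString: "localhost/orclpdb",
  homogeneous: false   // allow different credentials for each connection
});

const connection = await pool.getConnection({ user: "hr", password: "welcome" });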

See the CHANGELOG for the bug fixes and other changes.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

Issues and questions about node-oracledb can be posted on GitHub.

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

ODPI-C 2.4 has been released

Wed, 2018-06-06 16:44
ODPI-C logo

Release 2.4 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub.

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

Top features: Better database notification support. New pool timeout support.

 

I'll keep this brief. See the Release Notes for all changes.

  • Support for Oracle Continuous Query Notification and Advanced Queuing notifications was improved. Notably, replacement subscribe and unsubscribe methods were introduced to make usage more flexible. Support for handling AQ notifications was added, so now you can get notified when there is a message to dequeue. And settings for the listening IP address, for notification grouping, and to let you check the registration status are now available.

  • Some additional timeout options for connection pools were exposed.

  • Some build improvements were made: the SONAME is set in the shared library on *ix platforms. There is also a new Makefile 'install' target that installs using a standard *ix footprint.

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Installation Instructions: oracle.github.io/odpi/doc/installation.html

Report issues and discuss: https://github.com/oracle/odpi/issues

Demo: Oracle Database Continuous Query Notification in Node.js

Sat, 2018-06-02 08:32

Native Oracle Database Continuous Query Notification (CQN) code has landed in the node-oracledb master branch on GitHub. If you want to play with it, but don't want to wait for the next binary node-oracledb release, you can compile node-oracledb yourself and play with this demo.

 

 

Some of you may already be using CQN via its PL/SQL APIs. The new, native support in node-oracledb makes it all so much nicer. Check out the development documentation for connection.subscribe() and the 'user manual'. There are a couple of examples cqn1.js and cqn2.js available, too.

CQN allows JavaScript methods to be called when database changes are committed by any transaction. You enable it in your node-oracledb app by registering a SQL query. CQN has two main modes: object-level and query-level. The former sends notifications (i.e. calls your nominated JavaScript method) when changes are made to database objects used in your registered query. The query-level mode only sends notifications when database changes are made that would impact the result set of the query, e.g. the WHERE clause is respected.

If you're not using CQN, then you might wonder when you would. For infrequently updated tables you can get CQN to generate notifications on any data or table change. I can see how query-level mode might be useful for proactive auditing to send alerts when an unexpected, but valid, value is inserted or deleted from a table. For tables with medium levels of updates, CQN allows grouping of notifications by time, which is a way of reducing load by preventing too many notifications being generated in too short a time span. But, as my colleague Dan McGhan points out, if you know the table is subject to a lot of change, then your apps will be better off simply polling the table and avoiding any CQN overhead. Note that CQN was designed to be used for relatively infrequently updated tables.

DEMO APP

I've thrown together a little app that uses CQN and Socket.IO to refresh a message notification area on a web page. It's really just a simple smush of the Socket.IO intro example and the node-oracledb CQN examples.

There is a link to all the code in the next section of this post; I'll just show snippets inline here. I'm sure Dan will update his polished 'Real-time Data' example soon, but until then here is my hack code. It uses Node.js 8's async/await style - you can rewrite it if you have an older Node.js version.

One thing about CQN is that the node-oracledb computer must be resolvable by the Database computer; typically this means having a fixed IP address which may be an issue with laptops and DHCP. Luckily plenty of other cases work too. For example, I replaced my Docker web service app with a CQN example and didn't need to do anything with ports or identifying IP addresses. I'll leave you to decide how to run it in your environment. There are CQN options to set the IP address and port to listen on, which may be handy.

The demo premise is a web page with a message notification area that always shows the five most recent messages in a database table. The messages are being inserted into that table by other apps (I'll just use SQL*Plus to do these inserts) and the web page needs to be updated with them only when there is a change. I'm just using dummy data and random strings:

To see how it fits together, look at this no-expense-spared character graphic showing the four components: SQL*Plus, the database, the browser and the Node.js app:

SQL*PLUS:                   DATABASE:
insert into msgtable >----> msgtable >-----CQN-notification------------------+
commit                                                                       |
                                                                             |
BROWSER: <-------+          NODE.JS APP:                                     |
| 5 Message |    |          URL '/' serves index.html                        |
| 4 Message |    |                                                           |
| 3 Message |    |          CQN:                                             |
| 2 Message |    |          subscribe to msgtable with callback myCallback   |
| 1 Message |    |                                                           |
                 |          myCallback: <------------------------------------+
                 |            query msgtable
                 +-----------< send rows to browser to update the DOM

The app (bottom right) serves the index page to the browser. It connects to the DB and uses CQN to register interest in msgtable. Any data change in the table from SQL*Plus (top left) triggers a CQN notification from the database to the application, and the callback is invoked. This callback queries the table and uses Socket.IO to send the latest records to the browser, which updates the index.html DOM.

The first thing is to get your DBA (i.e. log in as the SYSTEM user) to give you permission to get notifications:

GRANT CHANGE NOTIFICATION TO cj;

We then need a table that our app will get notifications about, and then query to get the latest messages:

CREATE TABLE cj.msgtable (
  k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
  message VARCHAR(100)
);

The column K is an Oracle Database 12c identity column that will automatically get a unique number inserted whenever a new message is inserted. In older database versions you would create a sequence and trigger to do the same.
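
A hypothetical pre-12c equivalent (11g syntax shown; older versions need a SELECT ... INTO in the trigger body) would be something like:

CREATE SEQUENCE msgtable_seq;

CREATE OR REPLACE TRIGGER msgtable_bi
  BEFORE INSERT ON msgtable FOR EACH ROW
BEGIN
  :NEW.k := msgtable_seq.NEXTVAL;
END;
/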

The little SQL script I use to insert data (and trigger notifications) is:

INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10));
COMMIT;

The Node.js app code is more interesting, but not complex. Here is the code that registers the query:

conn = await oracledb.getConnection();

await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM msgtable"
});
console.log("CQN subscription created");

Although CQN has various options to control its behavior, here I keep it simple - I just want to get notifications when any data change to msgtable is committed.
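
If query-level notifications were wanted instead, a qos option could be added to the subscribe call. A sketch, assuming the SUBSCR_QOS_QUERY constant:

await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM msgtable WHERE k > 100",
  qos: oracledb.SUBSCR_QOS_QUERY  // notify only when the query's result set changes
});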

When the database sends a notification, the method 'myCallback' will get a message, the contents of which will vary depending on the subscription options. Since I know the callback is invoked when any table data has changed, I ignore the message contents and go ahead and query the table. The rows are then stringified and, by the magic of Socket.IO, sent to the web page:

async function myCallback(message) {
  let rows = await getData();                // query the msgtable
  io.emit('message', JSON.stringify(rows));  // update the web page
}

The helper function to query the table is obvious:

async function getData() {
  let sql = `SELECT k, message
             FROM msgtable
             ORDER BY k DESC
             FETCH NEXT :rowcount ROWS ONLY`;
  let binds = [5];  // get 5 most recent messages
  let options = { outFormat: oracledb.OBJECT };
  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds, options);
  await conn.close();
  return result.rows;
}

At the front end, the HTML for the web page contains a 'messages' element that is populated by jQuery code when a message is received by Socket.IO:

<ul id="messages"></ul>

<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.1.1/socket.io.js"></script>
<script src="https://code.jquery.com/jquery-3.3.1.js"></script>
<script>
  $(function () {
    var socket = io();
    socket.on('message', function(msg){
      $('#messages').empty();
      $.each(JSON.parse(msg), function(idx, obj) {
        $('#messages').append($('<li>').text(obj.K + ' ' + obj.MESSAGE));
      });
    });
  });
</script>

You can see that the JSON string received from the app server is parsed and the K and MESSAGE fields of each row object (corresponding to the table columns of the same names) are inserted into the DOM in an unordered list.

That's it.

DEMO IN ACTION

To see it in action, extract the code and install the dependencies:

cjones@mdt:~/n/cqn-sockets$ npm install
npm WARN CQN-Socket-Demo@0.0.1 No repository field.
npm WARN CQN-Socket-Demo@0.0.1 No license field.
added 86 packages in 2.065s

I cheated a bit there and didn't show node-oracledb compiling. Once a production release of node-oracledb is made, you should edit the package.json dependency to use its pre-built binaries. Until then, node-oracledb code will be downloaded and compiled - check the instructions for compiling.

Edit server.js and set your database credentials - or set the referenced environment variables:

let dbConfig = {
  user: process.env.NODE_ORACLEDB_USER,
  password: process.env.NODE_ORACLEDB_PASSWORD,
  connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING,
  events: true  // CQN needs events mode
}

Then start the app server:

cjones@mdt:~/n/cqn-sockets$ npm start

> CQN-Socket-Demo@0.0.1 start /home/cjones/n/cqn-sockets
> node server.js

CQN subscription created
Listening on http://localhost:3000

Then load http://localhost:3000/ in a browser. Initially the message pane is blank - I left bootstrapping it as an exercise for the reader.

Start SQL*Plus in a terminal window and create a message:

SQL> INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10));
SQL> COMMIT;

Every time data is committed to msgtable, the message list on the web page is automatically updated:

If you don't see messages, review Troubleshooting CQN Registrations. The common problems will be network related: the node-oracledb machine must be resolvable, the port must be open etc.

Try it out and let us know how you go. Remember you are using development code that just landed, so there may be a few rough edges.

Python and cx_Oracle RPMs are available from yum.oracle.com

Tue, 2018-05-29 21:04

cx_Oracle logo

This is worth cross posting: Getting Started with Python Development on Oracle Linux

Our Oracle Linux group has made Python and cx_Oracle RPMs available for a while. They recently launched a new landing page with nice, clear instructions on how to install various versions of Python, and how to install the cx_Oracle interface for Oracle Database. Check the link above.

Reflecting Changes in Business Objects in UI Tables with Visual Builder

Mon, 2018-05-21 13:14

While the quick start wizards in Visual Builder Cloud Service (VBCS) make it very easy to create tables and other UI components and bind them to business objects, it is good to understand what is going on behind the scenes, and what the wizards actually do. Knowing this will help you achieve things that we still don't have wizards for.

For example - let's suppose you created a business object and then created a UI table that shows the fields from that business object in your page. You probably used the "Add Data" quick start wizard to do that. But then you remembered that you need one more column added to your business object. However, after you add that one to the BO, you'll notice it is not automatically shown in the UI. That makes sense, since we don't want to automatically show all the fields in a BO in the UI.

But how do you add this new column to the UI?

The table's Add Data wizard will be disabled at this point - so is your only option to drop and recreate the UI table? Of course not!

 

If you look into the table properties you'll see it is based on a page-level ServiceDataProvider (SDP for short) variable. This is a special type of object that the wizards create to represent collections. If you look at the variable, you'll see that it returns data using a specific type. Note that the type is defined at the flow level - if you look at the type definition you'll see where the fields that make up the object are defined.

Type Definition

It is very easy to add a new field here - and modify the type to include the new column you added to the BO. Just make sure you are using the column's id - and not its title - when you define the new field in the items array.

Now back in the UI you can easily modify the code of the table to add one more column that will be hooked up to this new field in the SDP that is based on the type.

Sounds complex? It really isn't - here is a 3 minute video showing the whole thing end to end:

As you see, a little understanding of the way VBCS works makes it easy to go beyond the wizards and achieve anything.

European Privacy Requirements: Considerations for Retailers

Mon, 2018-05-21 11:52

When retailers throughout Europe adopt a new set of privacy and security regulations this week, it will be the first major revision of data protection guidelines in more than 20 years. The 2018 regulations address personal as well as financial data, and require that retailers use systems already designed to fulfill these protections by default.

In 1995, the European Commission adopted a Data Protection Directive that regulates the processing of personal data within the European Union. This gave rise to 27 different national data regulations, all of which remain intact today. In 2012, the EC announced that it would supersede these national regulations and unify data protection law across the EU by adopting a new set of requirements called the General Data Protection Regulation (GDPR).

The rules apply to any retailer selling to European consumers. The GDPR, which takes effect May 25, 2018, pertains to any company doing business in, or with citizens of, the European Union, and to both new and existing products and services. Organizations found to be in violation of the GDPR will face a steep penalty of 20 million euros or four percent of their gross annual revenue, whichever is greater.

Retailers Must Protect Consumers While Personalizing Offers

GDPR regulations will encompass personal as well as financial data, including much of the data found in a robust customer engagement system, CRM, or loyalty program. It also includes information not historically considered to be personal data: device IDs, IP addresses, log data, geolocation data, and, very likely, cookies.

For the majority of retailers relying on customer data to personalize offers, it is critically important to understand how to fulfill GDPR requirements and execute core retail, customer, and marketing operations. Developing an intimate relationship with consumers and delivering personalized offers means tapping into myriad data sources.

This can be done, but systems must be GDPR-compliant by design and by default. A key concept underlying the GDPR is Privacy by Design (PBD), which essentially stipulates that systems be designed to minimize the amount of personal data they collect. Beginning this week, Privacy by Design features will become a regulatory requirement for both Oracle and our customers and GDPR stipulates that these protections are, by default, turned on.

Implementing Security Control Features

While the GDPR requires “appropriate security and confidentiality,” exact security controls are not specified. However, a number of security control features are discussed in the text and will likely be required for certain types of data or processing. Among them are multi-factor authentication for cloud services, customer-configurable IP whitelisting, granular access controls (by record, data element, data type, or logs), encryption, anonymization, and tokenization.

Other security controls likely to be required are “separation of duties” (a customer option requiring two people to perform certain administrative tasks); customer options for marking some fields as sensitive and restricted; limited access on the part of the data controller (i.e. Oracle) to customer information; displaying only a portion of a data field; and the permanent removal of portions of a data element.

Summary of Critical GDPR Requirements

The GDPR includes a number of recommendations and requirements governing users’ overall approach to data gathering and use. Among the more important are:

  • Minimization. Users are required to minimize the amount of data used, length of time it is stored, the number of people who have access to it, and the extent of that access.
  • Retention and purging. Data may be retained for only as long as reasonably necessary. This applies in particular to personal data, which should be processed only if the purpose of processing cannot reasonably be fulfilled by other means. Services must delete customer data on completion of the services.
  • Exports and portability. End users must be provided with copies of their data in a structured, commonly used digital format. Customers will be required to allow end users to send data directly to a competing service provider for some services.
  • Access, correction, and deletion. End users may request access to, correction of, and deletion of the data they store in any service. Users may have a “right to be forgotten”, a right to have all their data erased.
  • Notice and consent. When information is collected, end-user notice and consent for data processing is generally required.
  • Backup and disaster recovery. Timely availability of end-user data must be ensured.

Are you prepared?

Oracle is prepared for the EU General Data Protection Regulation (GDPR) that was adopted by the European Parliament in April 2016 and will become effective on May 25, 2018. We welcome the positive changes it is expected to bring to our service offerings by providing a consistent and unified data protection regime for businesses across Europe. Oracle is committed to helping its customers address the GDPR’s new requirements that are relevant to our service offerings, including any applicable processor accountability requirements.

Our customers can rest assured that Oracle Retail’s omnichannel suite will empower them to continue delivering personalized customer experiences that meet complex global data privacy regulations. Contact Oracle Retail to learn more about Oracle systems, services and GDPR compliance: oneretailvoice_ww@oracle.com

 

 

 

 

New Oracle E-Business Suite Person Data Removal Tool Now Available

Mon, 2018-05-21 10:27

Oracle is pleased to announce the availability of the Oracle E-Business Suite Person Data Removal Tool, designed to remove (obfuscate) data associated with people in E-Business Suite systems. Customers can apply the tool to select information in their E-Business Suite production systems to help address internal operational and external regulatory requirements, such as the EU General Data Protection Regulation (GDPR).

For more details, see:

DP World Extends Strategic Collaboration with Oracle to Accelerate Global Digital ...

Mon, 2018-05-21 09:56

Global trade enabler DP World has extended its partnership with Oracle to implement its digital transformation programme that supports its strategy to develop complementary sectors in the global supply chain such as industrial parks, free zones and logistics. 

 

Suhail Al Banna, Senior Vice President, DP World, Middle East and Africa Region; Arun Khehar, Senior Vice President – Business Applications, ECEMEA, Oracle; Mohammed Al Muallem, CEO and Managing Director, DP World, UAE Region and CEO, JAFZA.


 

The move follows an announcement by DP World earlier this year to use the Oracle Cloud Suite of Applications to drive business transformation. Oracle Consulting will now implement the full suite of Fusion Enterprise Resource Planning (ERP), Human Capital Management (HCM) and Enterprise Performance Management (EPM) Cloud solutions using its True Cloud methodology. The technology roll out across the Group has already started, with the Group’s UAE Region and Middle East and Africa Region the first to sign up.

Teo Chin Seng, Senior Vice President IT, DP World Group, said: “Our focus on building our digital capability follows our vision to become a digitised global trade enabler and we are working to achieve a new operational efficiency level while creating value for our stakeholders.”

Arun Khehar, Senior Vice President – Business Applications, ECEMEA, Oracle, said: “Following the recent announcement of our strategic partnership to help DP World drive its global digital transformation with our best-in-class Cloud Suite of Applications (SaaS), we are proud to extend our collaboration by leveraging the deep expertise of Oracle Consulting to drive this large scale project. We are confident that this strategic cloud deployment will help them deliver the next level of innovation and differentiation.”

The Oracle Consulting team is focused exclusively on Oracle Cloud solutions and staffed with more than 7,000 experts in 175 countries serving more than 20 million users to help organizations implement Oracle Cloud in an efficient and cost-effective manner.

 

Further press releases: Oracle Middle East Newsroom

If You Are Struggling With GDPR, Then You Are Not Alone

Mon, 2018-05-21 08:00

Well, it's only 5 days to go until the infamous GDPR deadline of 25th May 2018 and you can certainly see the activity accelerating.

You would have thought that with the deadline so close, most organisations would be sitting back, relaxing, safe in the knowledge that they have had 2 years to prepare for GDPR, and therefore, are completely ready for it. It's true, some organisations are prepared and have spent the last 24 months working hard to meet the regulations. Sadly, there is also a significant proportion of companies that aren't quite ready. Some, because they have left it too late. Others, by choice.

Earlier this week I had the pleasure of being invited to sit on a panel discussing GDPR at Equinix's Innovation through Interconnection conference in London.

As with most panels, we had a very interesting discussion, talking about all aspects of GDPR including readiness, data sovereignty, healthcare, the role of Cloud, and the dreaded Brexit!

I have written before about GDPR, but this time I thought I would take a bit of time to summarise three of the more interesting discussion topics from the panel, particularly areas where I feel companies are struggling.

Are you including all of the right personal data?

There is a clear recognition that an organisation's customer data is in scope for GDPR. Indeed, my own personal email account has been inundated with opt-in consent emails from loads of companies, many of whom I had forgotten even had my data. Clearly, companies are making sure that they are addressing GDPR for their customers. However, I think there is a general concern that some organisations are missing some of the data, especially internal data, such as that of their employees. HR data is just as important when it comes to GDPR. I see some companies paying far less attention to this area than to their customers' data.

Does Cloud help or hinder GDPR compliance?

A lot was discussed on the panel around the use of cloud. Personally, I think that cloud can be a great enabler, taking away some of the responsibility and overhead of implementing security controls, processes, and procedures and allowing the Data Processor (the Cloud Service Provider) to bring all of their experience, skill and resources into delivering you a secure environment. Of course, the use of Cloud also changes the dynamic. As the Data Controller, an organisation still has plenty of their own responsibility, including that of the data itself. Therefore, putting your systems and data into the Cloud doesn't allow you to wash your hands of the responsibility. However, it does allow you to focus on your smaller, more focused areas of responsibility. You can read more about shared responsibility from Oracle's CISO, Gail Coury in this article. Of course, you need to make sure you pick the right cloud service provider to partner with. I'm sure I must have mentioned before that Oracle does Cloud and does it extremely well.

What are the real challenges customers are facing with GDPR?

I talk to lots of customers about GDPR and my observations were acknowledged during the panel discussion. Subject access rights are causing lots of headaches. To put it simply, I think we can break GDPR down into two main areas: Information Security and Subject Access Rights. Organisations have been implementing Information Security for many years (to varying degrees), especially if they have been subject to other legislation like PCI, HIPAA, and SOX. However, whilst the UK Data Protection Act has always had principles around data subjects, GDPR really brings that front and centre. Implementing many of the principles associated with data subjects, i.e. me and you, can mean changes to applications, implementing new processes, identifying sources of data across an organisation etc. None of this is proving simple.

On a similar theme, responding to subject access rights due to this spread of data across an organisation is worrying many company service desks, concerned that come 25th May, they will be inundated with requests they cannot fulfil in a timely manner.

Oh and of course, that's before you even get to paper-based and unstructured data, which is proving to be a whole new level of challenge.

I could continue, but the above 3 areas are some of the main topics I am hearing over and over again with the customers I talk to. Hopefully, everyone has realised that there is no silver bullet for achieving GDPR compliance, and, for those companies who won't be ready in 5 days' time, I hope you at least have a strong plan in place.

Experience, Not Conversion, is the Key to the Switching Economy

Mon, 2018-05-21 08:00

In a world increasingly defined by instant-gratification, the demand for positive and direct shopping experiences has risen exponentially. Today’s always-on customers are drawn to the most convenient products and services available. As a result, we are witnessing higher customer switching rates, with consumers focusing more on convenience than on branding, reputation, or even on price.  

In this switching economy – where information and services are always just a click away –  we tend to reach for what suits our needs in the shortest amount of time. This shift in decision making has made it harder than ever for businesses to build loyalty among their customers and to guarantee repeat purchases. According to recent research, only 1 in 5 consumers now consider it a hassle to switch between brands, while a third would rather shop for better deals than stay loyal to a single organization. 

What's Changed? 

The consumer mindset for one. And the switching tools available to customers have also changed. Customers now have the ability to research extensively before they purchase, with access to reviews and price comparison sites often meaning that consumers don’t even make it to your website before being captured by a competitor.

This poses a serious concern for those brands that have devoted their time – and marketing budgets – to building great customer experiences across their websites. 

Clearly this is not to say that on-site experiences aren’t important, but rather that they are only one part of the wider customer journey. In an environment as complex and fast-moving as the switching economy, you must take a more omnichannel approach to experience, examining how your websites, mobile apps, customer service teams, external reviews and in-store experiences all shape customers’ perceptions of your brand.

What Still Needs to Change?

Only by getting to know your customers across all of these different channels can you future-proof your brand in the switching economy. To achieve this, you must establish a new set of metrics that go beyond website conversion. The days of conversion optimization being viewed as the secret sauce for competitive differentiation are over; now brands must recognize that high conversion rates are not necessarily synonymous with a great customer experience – or lifetime loyalty. 

Today, the real measure of success does not come from conversion, but from building a true understanding of your customers – across every touchpoint in the omnichannel journey. Through the rise of experience analytics, you finally have the tools and technologies needed to understand customers in this way, and to tailor all aspects of your brand to maximize convenience, encourage positive mindsets and pre-empt when your customers are planning to switch to a different brand. 

It is only through this additional layer of insight that businesses and brands will rebuild the notion of customer loyalty, and ultimately, overcome the challenges of the switching economy. 

Want to learn more about simplifying and improving the customer experience? Read Customer Experience Simplified: Deliver The Experience Your Customers Want to discover how to provide customer experiences that are managed as carefully as the product, the price, and the promotion of the marketing mix.


See What Your Guests Think with Data Visualization

Mon, 2018-05-21 06:00

As we approach the end of May, thoughts of summer and vacations begin. Naturally, a key component is finding the best place to stay, and often that means considering the hotel options at your chosen destination. But what’s the best way to decide? That’s why reading reviews is so important.

And that brings us to the latest blog in the series of taking datasets from ‘less typical’ sources and analyzing them with Oracle Data Visualization. Here, we’ve pulled reviews from Booking.com into a dataset and visualized it to see how we – the general public – rate the hotels we stay in.

Working with Ismail Syed, pre-sales intern, and Harry Snart, pre-sales consultant, both from Oracle UK, we ran the analysis and created visualizations. We decided to look at the most common words used in positive and negative reviews, see how long each type of review tends to be – and work out which countries are the most discerning when they give their feedback.

So, what are the main irritations when we go away? Conversely - what's making a good impression?

Words of discontent

First, we wanted to combine the most commonly used words in positive reviews with those most commonly used in negative reviews. You can see these in the stacked bar chart below. Interestingly, 'room' and 'staff' appear in both the positive and negative comment lists. However, there are far more positive reviews around staff than negative ones, and likewise a lot more negative reviews around the room than positive ones.

It seems, then, that across the board guests rate the customer service they receive more highly than the standard of their rooms – implying that an effective way to boost guest retention would be to start by improving the rooms. The small size of rooms in particular drew complaints, and that’s a tough fix, but guests were more upset about the standard of the beds, bathrooms and toilets, which can be updated rather more easily.

You’ll also notice 'breakfast' appears prominently in both the positive and negative word clouds – so a more achievable fix could be to start there. A bad breakfast can leave a bad taste, but a good one is obviously remembered. 
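If you want to sanity-check this kind of word-frequency finding outside a visualization tool, a few lines of Node.js will do it. The sketch below is purely illustrative: the positiveText and negativeText field names are ones we have invented here, so adjust them to match however your copy of the review data is actually structured.

```javascript
// Illustrative only: count the most common words in positive and negative
// review text. The field names below are invented; rename them to match
// your own export of the review data.
const reviews = [
  { positiveText: 'Friendly staff and a great breakfast', negativeText: 'The room was tiny' },
  { positiveText: 'Lovely staff', negativeText: 'The bed was uncomfortable and the room was noisy' }
  // ...load the full dataset here instead of these inline samples
];

function topWords(texts, limit = 10) {
  const counts = {};
  for (const text of texts) {
    for (const word of text.toLowerCase().match(/[a-z]+/g) || []) {
      counts[word] = (counts[word] || 0) + 1;
    }
  }
  // Sort by frequency, most common first
  return Object.entries(counts).sort((a, b) => b[1] - a[1]).slice(0, limit);
}

console.log('Top positive words:', topWords(reviews.map(r => r.positiveText)));
console.log('Top negative words:', topWords(reviews.map(r => r.negativeText)));
```

In practice you would also strip out stop words ('the', 'was', 'and') before charting, otherwise they swamp the interesting words like 'room' and 'staff'.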

Who’ll give a good review?

Next, we wanted to see who the most complimentary reviewers were, by nationality. While North Americans, Australians and Kyrgyz (highlighted in green) tend to leave the most favorable reviews, hotels have a harder time impressing those from Madagascar, Nepal and Mali (in red). Europeans sit somewhere in the middle – except for Bosnia and Herzegovina, who like to leave an upbeat review.   

Next, we wanted to see who is the most verbose in their feedback – the negative reviewers or the positive reviewers – and which countries leave the longest posts.

Are shorter reviews sweeter?

Overall, negative reviews were slightly longer, but only by a small amount – contrary to the popular belief that we tend to ‘rant’ more when we’re perturbed about something. People from Trinidad and Tobago left the longest positive reviews, at an average of 29 words. Those from Belarus, the USA and Canada followed as the wordiest positive reviewers. On the flip side, Romanians, Swedes, Russians and Germans had a lot to say about their bad experiences – leaving an average of 22 words expressing their displeasure.

It's business, but also personal...

Clearly, data visualization doesn't need to be a tool just for the workplace; you can deploy it to gain insight into other areas of life as well – including helping you prepare for some valuable time off.

If you’re an IT leader in your organization and need to enable insights for everyone across the business, you should consider a complete, connected and collaborative analytics platform like Oracle Analytics Cloud. Why not find out a bit more and get started for free?

Simply interested in visual analysis of your own data? Why not see what you can find out by taking a look at our short demo and signing up for an Oracle Data Visualization trial?

Either way, make sure you and your business take a vacation from spreadsheets and discover far more from your data through visualization.

HR today: right skills, right place, right time, right price

Mon, 2018-05-21 05:49

The only constant in today’s work environment is change. If you’re going to grow and stay competitive in this era of digital transformation, your business has to keep up—and HR must too.

A wide range of factors means that HR constantly has to grow and transform—changing demographics, new business models, economic uncertainty, evolving employee expectations, the bring-your-own-device revolution, increased automation, AI, the relentless search for cost savings, and more.

Things are different today. In the past, business change processes typically had a start and target end date, with specific deliverables that were defined in advance. Now change is open-ended, and its objectives evolve over time—based on the world as it is, rather than a set of assumptions. An agile model for transformation is therefore essential, along with a decision-making process that can survive constant change.

The fact is that people are still—and will always be—the most important part of any business, so HR has to be closely aligned to your overall business goals, delivering benefits to the whole organisation. Every move your HR team makes should be focused on how to deliver the right skills in the right place, at the right time and at the right price, to achieve your business’s goals.

 

Workforce planning

To manage your workforce effectively as the needs of your business change, you need to know what talent you have, where it’s located—and also what skills you are likely to need in the future. It’s much easier to fill skills gaps when you can see, or anticipate, them.

 

Deliver maximum value from your own people

And it’s much easier to do that if you’ve already nurtured a culture of personal improvement. Giving people new opportunities to learn and develop, and a sense of control over their own careers, will help you maintain up-to-date skills within your business and identify the best candidates—whether for promotion, relocation within the company or taking on specific roles. It should also enable them to pursue areas of personal interest, train for qualifications, or perhaps work flexibly—all of which will improve loyalty and morale.

You can also look for skills gaps that you absolutely must recruit externally to fill, and understand how best to do that, especially at short notice. What are the most cost-efficient and effective channels, for example? You might consider whether offshoring for skills is helpful, or maintaining a base of experienced temporary workers that you can call on.

 

Unknown unknowns

Yet these are all known gaps. Organisations now have to consider recruiting people for unknown jobs too. Some estimates suggest that as many as two-thirds of primary school children will end up working in jobs that don’t yet exist. So what new roles are being created in your industry, and how are you selecting people who will be able to grow into them?

 

Maximise the value of your HR function

Your HR organisation must be both capable of and ready to support these changes, and that means three things. First, the strategic workforce planning activities described above, supported by modern data and analytics. Next, HR has to provide the very best employee experience possible, enabling personal development and support. Finally, it needs to be able to support the process of constant change itself, and move to a more agile way of operating.

 

Get the culture right

Creating and nurturing a strong culture is essential here, and that relies on close co-ordination between HR, line managers and employees. Having a core system of record on everyone’s roles and various skills supports all these objectives, and can help you to grow your business through the modern era of change.

 

Essential enablers for implementing a modern product strategy

Mon, 2018-05-21 05:49

Continuous improvement across your entire mix of products and services is essential to innovate and stay competitive nowadays. Digital disruption requires companies to transform, successfully manage a portfolio of profitable offerings, and deliver unprecedented levels of innovation and quality. But creating your product portfolio strategy is only the first part—four key best practices are necessary to successfully implement it.

New technologies—the Internet of Things (IoT), Big Data, Social Media, 3D printing, and digital collaboration and modelling tools—are creating powerful opportunities to innovate. Increasingly customer-centric propositions are being delivered ‘as-a-service’ via the cloud, with just-in-time fulfilment joining up multiple parts of the supply chain. Your products and services have to evolve continually to keep up, generating massive amounts of data that have to be fed back in to inform future development.

 

Common language

To minimise complexity, it’s essential that there is just one context for all communication. You therefore need a standardised—and well-understood—enterprise product record that acts as a common denominator for your business processes. And that means every last piece of information—from core service features to how your product uses IoT sensors; from business processes to your roadmap for innovation, and all other details—gets recorded in one place, in the same way, for every one of your products, from innovation through development to commercialisation.

That will make it far easier for you to collect and interpret product information; define service levels and deliver on them; support new business models, and manage the overall future design of your connected offerings. Moreover, it enables your product development methods to become more flexible, so they can be updated more frequently, enabled by innovations in your supply chain, supported more effectively by IT, and improved over time.

 

Greater quality control in the digital world…

By including form, fit and function rules—that describe the characteristics of your product, or part of it—within the product record, you add a vital layer of change control. It enables you to create a formal approvals process for quality assurance. For example, changes made in one area—whether to a product or part of it—may create problems in other areas. The form, fit and function rules force you to perform cross-functional impact analyses and ensure you’re aware of any consequences.

As part of this, you can run simulations with ‘digital twins’ to predict changes in performance and product behaviour before anything goes wrong. This has major cost-saving implications, enabling far more to be understood at the drawing-board stage. Moreover, IoT applications can be leveraged to help product teams test, and gather data from, your connected assets or production facilities.

 

Transparency and effective communications

The enterprise product record should also contain a full audit trail of decisions about the product, including data from third parties, and from your supply chain. The objective is full traceability from the customer perspective—with evidence of regulatory compliance, provenance of preferred suppliers, and fully-auditable internal quality processes. Additionally, it’s often helpful to be able to prove the safety and quality of your product and processes, as that can be a key market differentiator. Powerful project management and social networking capabilities support the collaborative nature of the innovation process.

 

Lean and efficient

Overall, your innovation platform should be both lean and efficient, based on the continual iteration of the following key stages:

  • Ideation, where you capture, collaborate and analyse ideas
  • Proposal, where you create business cases and model potential features
  • Requirements, where you evaluate, collaborate and manage product needs
  • Concepts, where you accelerate product development and define structures
  • Portfolio analysis, where you revise and optimise your product investment
  • Seamless integration with downstream ERP and supply chain processes

 

The result: Powerful ROI

Being able to innovate effectively in a digital supply chain delivers returns from both top-line growth—with increased revenues and market share—and reduced costs from improved safety, security, sustainability and fewer returns.

 

 

Cloud: Look before you leap—and discover unbelievable new agility

Mon, 2018-05-21 05:48

All around the world, finance teams are now fully embracing the cloud to simplify their operations. The heady allure of reduced costs, increased functionality, and other benefits is driving the migration. Yet what’s getting people really excited is the unexpected flush of new business agility they experience after they’ve made the change.

At long last, the cloud is becoming accepted as the default environment to simplify ERP and EPM. Fifty-six percent* of finance teams have already moved to the cloud—or will do so within the next year—and 24% more plan to move at some point soon.

 

Major cost benefits in the cloud

Businesses are making the change to enjoy a wide range of benefits. According to a recent survey by Oracle*, reducing costs is (predictably) the main motivation, with improved functionality in second place; culture, timing and the ability to write off existing investments are also key factors. The financial motivation breaks down into a desire to avoid infrastructure investment and on-premises upgrades, and to achieve a lower total cost of ownership.

And Cloud is delivering on its promise in all these areas—across both ERP and EPM, 70% say they have experienced economic benefits after moving to the cloud.

 

Leap for joy at cloud agility

But the biggest overall benefit of moving to the cloud—quoted by 85% of those who have made the change—is staying current on technology. Moreover, 75% say that cloud improves usability, 71% say it increases flexibility and 68% say that it enables them to deploy faster. Financial gain is the top motivation for moving to the cloud, but that’s only the fourth-ranked advantage overall once there. It turns out that the main strengths of the cloud are in areas that help finance organisations improve business agility.

These are pretty amazing numbers. Until fairly recently, it would have been unheard of for any decent-sized organisation to consider migrating its core ERP or EPM systems without a very, very good reason. Now, the majority of companies believe that the advantages of such a move—and specifically, of moving to the cloud—outweigh any downside.

 

The commercial imperative

Indeed, the benefits are increasingly viewed as a competitive necessity. Cloud eliminates the old cycle of new system launches every two or three years—replacing it with incremental upgrades several times a year, and easy, instant access to additional features and capabilities.

And that is, no doubt, what’s behind the figures above. Finance professionals have an increasingly strong appetite to experiment with and exploit the latest technologies. AI, robotic process automation, internet of things, intelligent bots, augmented reality and blockchain are all being evaluated and used by significant numbers of organisations.

They’re improving efficiency in their day-to-day operations, joining-up operating processes across their business and reducing manual effort (and human error) through increased automation. Moreover, AI is increasingly being applied to analytics to find answers to compelling new questions that were, themselves, previously unthinkable—providing powerful new strategic insights.

Finance organisations are becoming more agile—able to think smarter, work more flexibly, and act faster using the very latest technical capabilities.

 

But it’s only available via cloud-based ERP and EPM

Increasingly, all these advances are only being developed as part of cloud-based platforms. And more and more advanced features are filtering down to entry-level cloud solutions—at least in basic form—encouraging finance people everywhere to experiment with what’s possible. That means that if you’re not yet using these tools in the cloud, you’re most likely falling behind the competitors that are—both from the broader business perspective and in terms of internal operating competency.

The cloud makes it simple to deploy, integrate and experiment with new capabilities, alongside whatever you may already have in place. It has become the new normal in finance. It seems like we’re now at a watershed moment where those that embrace the potential of cloud will accelerate away from those that do not, and potentially achieve unassailable new operating efficiencies.

The good news is that it’s easy to get started. According to a 2017 MIT Technology Review report, 86% of those making the transition to the cloud said that the costs were in line with or better than expected, and 87% said the same of the timeframe.

_______

* Except where stated otherwise, all figures in this article are taken from ‘Combined ERP and EPM Cloud Trends for 2018’, Oracle, 2018.

 

You’ve got to start with the customer experience

Mon, 2018-05-21 05:47

Visionary business leader Steve Jobs once remarked: ‘You’ve got to start with the customer experience and work backwards to the technology.’ From someone who spent his life creating definitive customer experiences in technology itself, these words should carry some weight—and are as true today as ever.

The fact is that customer experience is a science, and relevance is its key goal. A powerful customer experience is essential to compete today. And relevance is what cuts through the noise of the market to actually make the connection with customers.

 

The fundamentals of success

To transform their customer experience, companies need to be able to streamline their processes and create innovative experiences. They also have to be able to deliver on them by connecting all their internal teams, so that they always speak with one consistent voice.

But that’s only part of the story. Customers have real choice today. They’re inundated with similar messages to yours and are becoming increasingly discerning in their tastes.

Making yourself relevant depends on the strength of your offering and content, and the effectiveness of your audience targeting. It also depends on your technical capabilities. Many of your competitors will already be experimenting with powerful new technologies to increase loyalty and drive stronger margins.

 

The value of data

Learning to collect and use relevant customer data is essential. Data is the lifeblood of modern business—it’s the basis of being able to deliver any kind of personalised service on a large scale. Businesses need to use data to analyse behaviour, create profiles for potential new customers, build propositions around those target personas and then deliver a compelling experience. They also need to continually capture new data at every touchpoint to constantly improve their offerings.

Artificial intelligence (AI) and machine learning (ML) have a key role to play both in the analysis of the data and also in the automation of the customer experience. These technologies are developing at speed to enable us to improve our data analysis, pre-empt changing customer tastes and automate parts of service delivery.

 

More mature digital marketing

You can also now add all kinds of technologies to the customer experience mix that are straight out of sci-fi. The internet of things (IoT) is here, with connected devices providing help in all kinds of areas—from keeping you on the right road to telling you when your vehicle needs maintenance, from providing updates on your order status to delivering personal service wherever you are, and much more—enabling you to drive real transformation.

Moreover, intelligent bots are making it much easier to provide high-quality, cost-effective, round-the-clock customer support—able to deal with a wide range of issues—and using ML to improve their own performance over time.

Augmented reality makes it possible to add contextual information, based on your own products and services, to real-world moments. So, if you’re a car manufacturer you may wish to provide help with simple roadside repairs (e.g. changing a tire) via a smartphone app.

 

Always omnichannel

Finally, whether at the pre-sale or delivery stage, your customer experience platform must give you the ability to deliver consistency at every touchpoint. Whatever the channel, whatever the time, whatever the context, your customers should feel that they are dealing with a single business speaking with one voice.

Indeed, as Michael Schrage, who writes for the Harvard Business Review, said: ‘Innovation is an investment in the capabilities and competencies of your customers. Your future depends on their future.’ So you have to get as close as possible to your customers to learn what they want today, and understand what experiences they are likely to want tomorrow. Work backwards from that and use any technology that can help you deliver it.

How APIs help make application integration intelligent

Mon, 2018-05-21 05:47

Artificial intelligence (AI) represents a technology paradigm shift, with the potential to completely revolutionise the way people work over the next few years. Application programming interfaces (APIs) are crucially important in enabling the rapid development of these AI applications. Conversely, AI is also being used to validate APIs themselves, and to analyse and optimise their performance.

Wikipedia defines an API as a ‘set of subroutine definitions, protocols and tools for building application software’. In slightly less dry terms, an API is basically a gateway to the core capabilities of an application, enabling that functionality to be built into other software. So, for example, if you were creating an app that needed to show geographic location, you might choose to use the Google Maps API. It’s obviously much easier, faster and more future-proof to do that than to build your own mapping application from scratch.

 

How APIs are used in AI

And that’s the key strength of APIs—they’re a hugely efficient way of enabling networked systems to communicate and draw on each other’s functionality, offering major benefits for creating AI applications.

Artificially intelligent machine ‘skills’ are, of course, just applications that can be provided as APIs. So if you ask your voice-activated smart device—whether it’s Siri, Cortana, Google Assistant, or any of the rest—what time you can get to the Town Hall via bus, its response will depend on various skills that might include:

  • Awareness of where you are—from a geo-location API
  • Knowledge of bus routes and service delays in your area—from a publicly available bus company API
  • Tracking of general traffic and passenger levels—from APIs that show user locations provided by mobile device manufacturers
  • Being able to find the town hall—from a mapping API

None of these APIs needs to know anything about the others. Each simply takes information in a pre-defined format and outputs data in its own way. The AI application itself has to understand each API’s data parameters, tie all these skills together, apply the intelligence and then process the data.
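To make that concrete, here is a minimal Node.js sketch of an application stitching several independent services together in the spirit of the bus-journey example. Every endpoint, query parameter and response field below is hypothetical (real geo-location, mapping and transport APIs will look different), so treat it as an illustration of the orchestration pattern rather than working integration code.

```javascript
// Hypothetical example of combining independent APIs, in the spirit of the
// bus-journey scenario above. All URLs and response fields are made up.
// Uses the global fetch available in Node.js 18+; older versions can use node-fetch.
async function whenCanIReachTownHall() {
  // The geo-location and mapping services know nothing about each other,
  // so we can call them in parallel.
  const [here, townHall] = await Promise.all([
    fetch('https://api.example-geo.com/locate').then(r => r.json()),              // { lat, lng }
    fetch('https://api.example-maps.com/place?q=town+hall').then(r => r.json())   // { lat, lng }
  ]);

  // The bus company API only needs two coordinates; it has no idea where they came from.
  const routes = await fetch(
    `https://api.example-bus.com/routes?from=${here.lat},${here.lng}` +
    `&to=${townHall.lat},${townHall.lng}`
  ).then(r => r.json()); // assume [{ line, etaMinutes }, ...]

  // The "intelligence" lives here: the application combines the answers.
  const fastest = routes.reduce((a, b) => (a.etaMinutes <= b.etaMinutes ? a : b));
  return `Take the ${fastest.line} bus; expect to arrive in about ${fastest.etaMinutes} minutes.`;
}

whenCanIReachTownHall().then(console.log).catch(console.error);
```

The point of the pattern is that each service stays self-contained: swapping the mapping provider, say, changes only one call, not the whole application.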

 

Everything is possible

That means you can combine the seemingly infinite number of APIs that exist in any way you like, giving you the power to produce highly advanced applications—and create unique sources of value for your business. You could potentially build apps to enhance the customer experience, improve your internal processes, and analyse data more effectively to strengthen decision making—and perhaps even identify whole new areas of business to get into.

 

How AI is being used to improve APIs

APIs are the ideal way of getting information into AI applications and of helping to streamline analytics—yet artificial intelligence also has a vital role to play within API development itself. For example, AI can be used to automatically create, validate and maintain API software development kits (the client libraries that make an API usable from multiple different programming languages).

AI can also be used to monitor API traffic. By analysing calls to APIs using intelligent algorithms, you can identify problems and trends, potentially helping you tailor and improve the APIs over time. Indeed, AI can be used to analyse internal company system APIs, for example, helping you score sales leads, predict customer behaviour, optimise elements of your supply chain, and much more.
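As a toy illustration of the 'identify problems and trends' idea, the sketch below flags unusually slow API calls with a simple z-score check over a batch of collected response times. It is a deliberate simplification (real API monitoring would use streaming statistics or a trained model over much richer telemetry), but it shows the shape of the technique.

```javascript
// Toy sketch: flag API calls whose response time is unusually far above the
// mean of a sample. Real API monitoring would use streaming statistics or an
// ML model; this just illustrates the idea.
function findSlowCalls(responseTimesMs, zThreshold = 2.5) {
  const n = responseTimesMs.length;
  const mean = responseTimesMs.reduce((sum, t) => sum + t, 0) / n;
  const variance = responseTimesMs.reduce((sum, t) => sum + (t - mean) ** 2, 0) / n;
  const stdDev = Math.sqrt(variance);
  if (stdDev === 0) return []; // all calls identical: nothing unusual to report
  return responseTimesMs.filter(t => (t - mean) / stdDev > zThreshold);
}

// Made-up latency samples (milliseconds) for one API endpoint
const latencies = [120, 130, 118, 125, 122, 940, 127, 119];
console.log('Suspicious response times (ms):', findSlowCalls(latencies)); // [ 940 ]
```

Feeding the flagged calls into a dashboard, or correlating them with deployments and traffic spikes, is where the real trend analysis begins.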

 
