Christopher Jones


Historical Links for Oracle Net Services aka SQL*Net

Wed, 2018-12-12 00:06

Sometimes you just have to do a cleanup, but it seems a waste not to keep information that may still be valuable for the historical record. This is one such time. Here are some of the older resources about Oracle Net Services, aka SQL*Net, taken from a web page that is about to be revamped.

Of course, if you want current information, visit the Oracle Net Services page, the Net Services Reference, or the latest Net Services Administrator's Guide.

Overview

Oracle Net Services provides a scalable, secure, and easy-to-use high-availability network infrastructure for Oracle environments. It eases the complexities of network configuration and management, maximizes performance, and improves network security and diagnostic capabilities, as summarized below.

  • Connectivity: Oracle Net Services enables a network session from a client application to an Oracle database server. Once a network session is established, Oracle Net acts as the data courier for both the client application and the database server. It is responsible for establishing and maintaining the connection between the client application and database server, as well as exchanging messages between them.
  • Manageability: encompasses location transparency, centralized configuration and management, and quick installation and configuration. Location transparency allows a database client to identify the target database server; to achieve this, several naming methods are available: Oracle Net directory naming, local naming (TNSNAMES.ORA), host naming, and external naming.
  • Centralized Configuration and Management: allows administrators, in large network environments, to easily access a centralized repository (i.e., LDAP-compliant directory server such as Oracle Internet Directory) to specify and modify the network configuration.
  • Quick Installation and Configuration: networking components for the Oracle database server and clients are preconfigured for most environments. The Oracle database service is resolved using various naming methods. As a result, clients and servers are ready to immediately connect when installed.
  • Performance and Scalability: features such as Database Resident Connection Pool (connection pooling), Shared Server (session multiplexing), and scalable event models (polling) enable performance and high scalability.
  • Network Security: Oracle Net Services enables database access control using features of firewall access control and protocol access control.
  • Diagnosability: a diagnostic and performance analysis tool, Trace Assistant, provides detailed information about the source and context of problems as they arise.
What's New in Oracle Database 12c

The new features for Net Services in Oracle Database 12c include:

  • Internet Protocol Version 6 (IPv6) Addressing and Connectivity support over Public VIPs to Real Application Clusters (please refer to the IPv6 Statement of Direction below for more information)
  • Networking support for new architectures, such as Database Consolidation and Global Data Services
  • Large Session Data Unit (SDU) sizes, with a new upper limit of 2 MB. The larger SDU size can be used to achieve better utilization of available bandwidth in networks that have high bandwidth delay products and host resources, according to application characteristics.
  • Advanced Network Compression. This feature can be used to reduce the amount of data transmitted over the network. New parameters for the sqlnet.ora file enable compression and selection of the preferred compression level. Please refer to the Advanced Network Compression white paper below for more information.
  • Dead Connection Detection has been enhanced to reduce the amount of time taken to detect terminated connections. The SQLNET.EXPIRE_TIME parameter must be set in the sqlnet.ora file to detect terminated clients. If the system supports TCP keepalive tuning, then Oracle Net Services automatically uses the enhanced detection model, and tunes the TCP keepalive parameters at a per-connection level. (A combined sqlnet.ora sketch covering the SDU, compression, and SQLNET.EXPIRE_TIME parameters appears after this list.)
  • Intelligent Client Connection establishment feature reduces the priority of connection attempts to unresponsive nodes in the address string of connect descriptors. No configuration changes are required to use this feature.
  • Incident Generation for Process Failures for the following Oracle Net Services components has been added:
    • Oracle Net Listener
    • Listener control utility (LSNRCTL)
    • Connection Manager Administration (CMADMIN)
    • Oracle Connection Manager gateway (CMGW)
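
As an illustration of the sqlnet.ora parameters mentioned in the list above, here is a minimal sketch; the values shown are placeholders for illustration only, not recommendations:

# sqlnet.ora fragment (illustrative values only)
DEFAULT_SDU_SIZE = 2097152               # larger Session Data Unit, in bytes (up to 2 MB in 12c)
SQLNET.COMPRESSION = on                  # enable Advanced Network Compression
SQLNET.COMPRESSION_LEVELS = (low, high)  # preferred compression levels
SQLNET.EXPIRE_TIME = 10                  # Dead Connection Detection probe interval, in minutes
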
What's New in Oracle Database 11gR1 & 11gR2

The new features for Net Services in Oracle Database 11g Release 2 include:

  • Internet Protocol Version 6 (IPv6) Addressing and Connectivity for single-instance databases (please refer to the IPv6 Statement of Direction below for more information)
  • Oracle Restart in single-instance environments for restarting the listener, the database, and other Oracle components following a hardware/software failure or host restart.
  • Support for CONNECT_TIMEOUT and TRANSPORT_CONNECT_TIMEOUT for each IP address
  • CIDR notation and wildcard support for valid node checking (see the configuration sketch below)
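
To make a couple of the items above concrete, here is a minimal configuration sketch; the host names, networks, and timeout values are placeholders for illustration only:

# sqlnet.ora: valid node checking with CIDR and wildcard notation
TCP.VALIDNODE_CHECKING = yes
TCP.INVITED_NODES = (192.168.1.0/24, *.example.com)

# tnsnames.ora: per-connection timeouts in a connect descriptor
SALES =
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 10)
    (TRANSPORT_CONNECT_TIMEOUT = 3)
    (ADDRESS = (PROTOCOL = TCP)(HOST = sales-server.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = sales.example.com)))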

The new features for Net Services in Oracle Database 11g Release 1 include:

  • Support for authenticated LDAP access for Net Naming
  • Performance improvements
  • Fastpath for common usage scenarios, enabled transparently (no configuration required)
  • Efficient network support for bulk/large data transfers (such as SecureFile LOBs)
  • Efficient event dispatch mechanisms for PHP and process-based systems (enabled transparently, no configuration required)
  • Fast reconnects for high availability: efficient detection of terminated instances and connect-time failover
  • Support for Database Resident Connection Pool
  • Enhancements to the Easy Connect Naming method

For more details, see the Oracle Database Net Services Administrator's Guide.

Technical Information

Oracle OpenWorld

Oracle Database 12c

Oracle Database 11gR2

Oracle Database 11gR1

Oracle Database 10g

Oracle Database 9i

ODPI-C: A Light Weight Driver for Oracle Database

Tue, 2018-12-11 18:52

This is a guest post by Oracle’s Vaddeti Karthik Raju, a contributor to the ODPI-C project.

What is ODPI-C ?

Oracle Database Programming Interface for C (ODPI-C) is an open source library of C code that simplifies the use of common Oracle Call Interface (OCI) features for Oracle Database drivers and user applications written in C or C++. It is a wrapper over OCI that makes applications and language interfaces easier to develop.

For complete details of ODPI-C, visit the homepage. Here you can find a list of all its powerful features, links to the source code, the documentation, and some samples.

Why ODPI-C ?

Since ODPI-C is a wrapper on top of OCI and exposes common OCI functionality, why use ODPI-C? Though OCI is highly efficient and flexible, it requires a lot of code and technical skill to use well. ODPI-C reduces the amount of code and the skill level required, which is particularly useful for language driver creators. Thus it reduces the amount of time required for a developer to implement new Oracle features in a particular language or application.

User applications and language drivers have been written with ODPI-C. Drivers currently using ODPI-C include:

Oracle Drivers

  • cx_Oracle Python interface

  • node-oracledb Node.js module

Third-party Drivers

  • go-goracle Go Driver

  • mirmir Rust Bindings

  • odpic-raw Haskell Raw Bindings

  • ruby-ODPI Ruby Interface

  • rust-oracle Driver for Rust

Common setup

Let us go through a few example programs that demonstrate how to perform operations with ODPI-C.

For installation of ODPI-C, visit the installation instructions. In summary, the supplied Makefile or Makefile.win32 can be used to build a shared library, which can be linked with samples and tests. Once the library has been built, locate the directory it is in and add the directory name to your system library search path, e.g. PATH on Windows or LD_LIBRARY_PATH on Linux. You'll also need some Oracle client libraries, such as from the Oracle Instant Client "Basic" package.

Before going further, create the database objects using the code below. Here I am using the database credentials scott/tiger@localhost/orclpdb:

$ sqlplus scott/tiger@localhost/orclpdb

create table TestTable (
    IntCol     number(9) not null,
    StringCol  varchar2(100)
);

Include the headers, defines and printError() function below in all the programs shown in this blog post: this code is common to all of them. Make sure that you connect to the same schema where you created the table.

#include <dpi.h>
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

#define USER            "scott"
#define PASSWORD        "tiger"
#define CONNECT_STRING  "localhost/orclpdb"

static dpiContext *gContext = NULL;
static dpiErrorInfo gErrorInfo;

//-----------------------------------------------------------------------------
// printError()
//   Prints the error message. The error is first fetched from the global DPI
//   context.
//-----------------------------------------------------------------------------
int printError(void)
{
    if (gContext)
        dpiContext_getError(gContext, &gErrorInfo);
    fprintf(stderr, "  [FAILED]\n");
    fprintf(stderr, "    FN: %s\n", gErrorInfo.fnName);
    fprintf(stderr, "    ACTION: %s\n", gErrorInfo.action);
    fprintf(stderr, "    MSG: %.*s\n", gErrorInfo.messageLength,
            gErrorInfo.message);
    fflush(stderr);
    return DPI_FAILURE;
}

Inserting

The program insert.c shows how to perform a simple insert operation that adds a couple of records to the table.

// insert.c
int main()
{
    const char *insertSql = "insert into TestTable values (:1, :2)";
    dpiData intColValue, stringColValue;
    dpiConn *conn;
    dpiStmt *stmt;

    // create context
    if (dpiContext_create(DPI_MAJOR_VERSION, DPI_MINOR_VERSION, &gContext,
            &gErrorInfo) < 0)
        return printError();

    // create connection
    if (dpiConn_create(gContext, USER, strlen(USER), PASSWORD,
            strlen(PASSWORD), CONNECT_STRING, strlen(CONNECT_STRING), NULL,
            NULL, &conn) < 0)
        return printError();

    // prepare insert statement for execution
    if (dpiConn_prepareStmt(conn, 0, insertSql, strlen(insertSql), NULL, 0,
            &stmt) < 0)
        return printError();

    // create first row
    dpiData_setInt64(&intColValue, 1);
    if (dpiStmt_bindValueByPos(stmt, 1, DPI_NATIVE_TYPE_INT64,
            &intColValue) < 0)
        return printError();
    dpiData_setBytes(&stringColValue, "Test data 1", strlen("Test data 1"));
    if (dpiStmt_bindValueByPos(stmt, 2, DPI_NATIVE_TYPE_BYTES,
            &stringColValue) < 0)
        return printError();
    if (dpiStmt_execute(stmt, DPI_MODE_EXEC_DEFAULT, NULL) < 0)
        return printError();

    // create second row
    dpiData_setInt64(&intColValue, 2);
    if (dpiStmt_bindValueByPos(stmt, 1, DPI_NATIVE_TYPE_INT64,
            &intColValue) < 0)
        return printError();
    dpiData_setBytes(&stringColValue, "Test data 2", strlen("Test data 2"));
    if (dpiStmt_bindValueByPos(stmt, 2, DPI_NATIVE_TYPE_BYTES,
            &stringColValue) < 0)
        return printError();
    if (dpiStmt_execute(stmt, DPI_MODE_EXEC_DEFAULT, NULL) < 0)
        return printError();

    // commit changes
    if (dpiConn_commit(conn) < 0)
        return printError();

    // cleanup
    dpiStmt_release(stmt);
    dpiConn_release(conn);

    return DPI_SUCCESS;
}

Remember to add the headers, defines and printError() to insert.c before compiling it. I recommend using the Makefile or Makefile.win32 from the samples directory to help you compile.

Let us walk through each function of the code:

  1. Create the ODPI-C context which is the first function that must be called in all ODPI-C applications (dpiContext_create()).

  2. Create a standalone connection to the database (dpiConn_create()).

  3. Prepare a statement for execution (dpiConn_prepareStmt()).

  4. Populate data in structures and bind it to the statement (dpiData_setInt64(), dpiData_setBytes(), dpiStmt_bindValueByPos()).

  5. Execute the statement (dpiStmt_execute()).

  6. Repeat steps 4 and 5 for the second row.

  7. Commit the data (dpiConn_commit()).

  8. Clean up (dpiStmt_release(), dpiConn_release()).

Inserting multiple records

In insert.c we saw how to insert a single record at a time. If you want to insert multiple records into a table, you can use the dpiStmt_executeMany() function. In insertmany.c we will add multiple records to the table more efficiently by using an array bind operation.

// insertmany.c
int main()
{
    const char *insertSql = "insert into TestTable values (:1, :2)";
    dpiData *intData, *strData;
    uint32_t numRows = 5, i;
    dpiVar *intVar, *strVar;
    char buffer[100];
    dpiConn *conn;
    dpiStmt *stmt;

    // create context
    if (dpiContext_create(DPI_MAJOR_VERSION, DPI_MINOR_VERSION, &gContext,
            &gErrorInfo) < 0)
        return printError();

    // create connection
    if (dpiConn_create(gContext, USER, strlen(USER), PASSWORD,
            strlen(PASSWORD), CONNECT_STRING, strlen(CONNECT_STRING), NULL,
            NULL, &conn) < 0)
        return printError();

    // prepare and bind insert statement
    if (dpiConn_prepareStmt(conn, 0, insertSql, strlen(insertSql), NULL, 0,
            &stmt) < 0)
        return printError();
    if (dpiConn_newVar(conn, DPI_ORACLE_TYPE_NUMBER, DPI_NATIVE_TYPE_INT64,
            numRows, 0, 0, 0, NULL, &intVar, &intData) < 0)
        return printError();
    if (dpiStmt_bindByPos(stmt, 1, intVar) < 0)
        return printError();
    if (dpiConn_newVar(conn, DPI_ORACLE_TYPE_VARCHAR, DPI_NATIVE_TYPE_BYTES,
            numRows, 100, 1, 0, NULL, &strVar, &strData) < 0)
        return printError();
    if (dpiStmt_bindByPos(stmt, 2, strVar) < 0)
        return printError();

    // populate some dummy data
    for (i = 0; i < numRows; i++) {
        dpiData_setInt64(&intData[i], i + 1);
        sprintf(buffer, "Test data %d", i + 1);
        if (dpiVar_setFromBytes(strVar, i, buffer, strlen(buffer)) < 0)
            return printError();
    }

    // perform execute many
    if (dpiStmt_executeMany(stmt, DPI_MODE_EXEC_DEFAULT, numRows) < 0)
        return printError();

    // commit changes
    if (dpiConn_commit(conn) < 0)
        return printError();

    // cleanup
    if (dpiVar_release(intVar) < 0)
        return printError();
    if (dpiVar_release(strVar) < 0)
        return printError();
    dpiStmt_release(stmt);
    dpiConn_release(conn);

    return DPI_SUCCESS;
}

Let us walk through each function of the code:

  1. Create ODPI-C context which is the first function that must be called in all ODPI-C applications (dpiContext_create()).

  2. Create standalone connection to the database (dpiConn_create()).

  3. Prepare statement for execution (dpiConn_prepareStmt()).

  4. Create variables and bind them to the statement by position. (dpiConn_newVar(), dpiStmt_bindByPos()).

  5. Populate data into bound variables (dpiData_setInt64(), dpiVar_setFromBytes()).

  6. Execute the statement by specifying the number of times (dpiStmt_executeMany()).

  7. Commit the data (dpiConn_commit()).

  8. Clean up (dpiVar_release(), dpiStmt_release(), dpiConn_release()).

Fetching

So far we have seen how to insert data into a table. In the following program we will see how to fetch records from the table. To make sure some records exist in the table, run insertmany before running fetch.

// fetch.c
int main()
{
    const char *selectSql = "select IntCol, StringCol from TestTable";
    dpiData *intColValue, *stringColValue;
    dpiNativeTypeNum nativeTypeNum;
    uint32_t bufferRowIndex;
    dpiConn *conn;
    dpiStmt *stmt;
    int found;

    // create context
    if (dpiContext_create(DPI_MAJOR_VERSION, DPI_MINOR_VERSION, &gContext,
            &gErrorInfo) < 0)
        return printError();

    // create connection
    if (dpiConn_create(gContext, USER, strlen(USER), PASSWORD,
            strlen(PASSWORD), CONNECT_STRING, strlen(CONNECT_STRING), NULL,
            NULL, &conn) < 0)
        return printError();

    // prepare select statement
    if (dpiConn_prepareStmt(conn, 0, selectSql, strlen(selectSql), NULL, 0,
            &stmt) < 0)
        return printError();
    if (dpiStmt_execute(stmt, DPI_MODE_EXEC_DEFAULT, NULL) < 0)
        return printError();

    // fetch rows
    while (1) {
        if (dpiStmt_fetch(stmt, &found, &bufferRowIndex) < 0)
            return printError();
        if (!found)
            break;
        if (dpiStmt_getQueryValue(stmt, 1, &nativeTypeNum, &intColValue) < 0)
            return printError();
        if (dpiStmt_getQueryValue(stmt, 2, &nativeTypeNum,
                &stringColValue) < 0)
            return printError();
        printf("Int = %ld String = '%.*s'\n", intColValue->value.asInt64,
                stringColValue->value.asBytes.length,
                stringColValue->value.asBytes.ptr);
    }

    // cleanup
    dpiStmt_release(stmt);
    dpiConn_release(conn);

    return DPI_SUCCESS;
}

When I run the program, the output is:

Int = 1 String = 'Test data 1'
Int = 2 String = 'Test data 2'
Int = 3 String = 'Test data 3'
Int = 4 String = 'Test data 4'
Int = 5 String = 'Test data 5'

Let us walk through each function of the code:

  1. Create ODPI-C context which is the first function that must be called in all ODPI-C applications (dpiContext_create()).

  2. Create standalone connection to the database (dpiConn_create()).

  3. Prepare statement for execution (dpiConn_prepareStmt()).

  4. Execute the statement (dpiStmt_execute()).

  5. Fetch the row and get column values (dpiStmt_fetch(), dpiStmt_getQueryValue()).

  6. Clean up (dpiStmt_release(), dpiConn_release()).

Scrolling

In the above example rows are retrieved in order from the statement until the rows are exhausted.

Now let us see an example on how to scroll between rows using the dpiStmt_scroll() function. To make sure some records exist in the table, run insertmany before running this program.

// scroll.c
int main()
{
    const char *sql = "select IntCol from TestTable order by IntCol";
    dpiNativeTypeNum nativeTypeNum;
    uint32_t numQueryColumns;
    uint32_t bufferRowIndex;
    dpiData *data;
    dpiStmt *stmt;
    dpiConn *conn;
    int found;

    // create context
    if (dpiContext_create(DPI_MAJOR_VERSION, DPI_MINOR_VERSION, &gContext,
            &gErrorInfo) < 0)
        return printError();

    // create connection
    if (dpiConn_create(gContext, USER, strlen(USER), PASSWORD,
            strlen(PASSWORD), CONNECT_STRING, strlen(CONNECT_STRING), NULL,
            NULL, &conn) < 0)
        return printError();

    // prepare select statement
    if (dpiConn_prepareStmt(conn, 1, sql, strlen(sql), NULL, 0, &stmt) < 0)
        return printError();

    // number of rows fetched from the database
    if (dpiStmt_setFetchArraySize(stmt, 5) < 0)
        return printError();
    if (dpiStmt_execute(stmt, DPI_MODE_EXEC_DEFAULT, &numQueryColumns) < 0)
        return printError();

    // set the relative position to scroll
    if (dpiStmt_scroll(stmt, DPI_MODE_FETCH_RELATIVE, 4, 0) < 0)
        return printError();

    // fetch the row
    if (dpiStmt_fetch(stmt, &found, &bufferRowIndex) < 0)
        return printError();
    if (dpiStmt_getQueryValue(stmt, 1, &nativeTypeNum, &data) < 0)
        return printError();
    printf("\nIntCol = %ld\n", data->value.asInt64);

    // cleanup
    dpiStmt_release(stmt);
    dpiConn_release(conn);

    return DPI_SUCCESS;
}

When I run scroll the output is:

IntCol = 5

Let us walk through each function of the code:

  1. Create ODPI-C context which is the first function that must be called in all ODPI-C applications (dpiContext_create()).

  2. Create standalone connection to the database (dpiConn_create()).

  3. Prepare statement for execution (dpiConn_prepareStmt()).

  4. Set the fetch array size (dpiStmt_setFetchArraySize()).

  5. Execute the statement (dpiStmt_execute()).

  6. Scroll the statement to the specified position (dpiStmt_scroll()).

  7. Fetch the row and get column value (dpiStmt_fetch(), dpiStmt_getQueryValue()).

  8. Clean up (dpiStmt_release(), dpiConn_release()).

Wrap up

ODPI-C has a number of advanced features letting you create powerful applications.

For more ODPI-C tests and samples visit the tests and samples directories.

For complete documentation details visit the ODPI-C documentation page.

If you have questions, feel free to contact us on GitHub.

Vaddeti Karthik Raju is a Senior Member of Technical Staff at Oracle Bengaluru. He contributes to a number of projects, including ODPI-C.

PHP OCI8 2.2.0 has database call timeouts

Mon, 2018-12-10 22:55

Hot on the heels of PHP 7.3 being released, I've just pushed PHP OCI8 2.2.0 to PECL.

This release will install on PHP 7.0 through PHP 7.3.

One new feature available when OCI8 is compiled with Oracle Client 18c libraries is a database call timeout. A new oci_set_call_timeout() function is available. This will interrupt long-running calls to the database, allowing better control over application behavior.
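
As a minimal sketch of the new function in use (the connection credentials and the procedure name are illustrative assumptions, not part of this release):

<?php

// Connect as usual; OCI8 2.2 built with Oracle Client 18c libraries is assumed
$c = oci_connect('hr', 'welcome', 'localhost/orclpdb');

// Limit each round-trip to the database on this connection to 5000 milliseconds
oci_set_call_timeout($c, 5000);

$s = oci_parse($c, 'begin some_long_running_procedure(); end;');  // hypothetical procedure

// If a round-trip exceeds the timeout, oci_execute() fails and oci_error()
// describes the error
if (!oci_execute($s)) {
    $e = oci_error($s);
    trigger_error('Call failed: ' . $e['message'], E_USER_ERROR);
}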

Install with 'pecl install oci8'. Note that currently the Windows builds are not using 18c, so do not have oci_set_call_timeout(). (Yet another reason to move to Linux!)

The OCI8 2.2 code line will also be included in the future PHP 7.2.14 and PHP 7.3.1 full software releases.

Looking for documentation? As well as php.net/oci8, there is still a lot of useful information in The Underground PHP and Oracle Manual.

"Python and Oracle Database on the Table" Conference Recording

Sun, 2018-10-28 10:20

Anthony Tuininga's recent Oracle Code One conference session on Python cx_Oracle best practices was recorded and is available on YouTube.

The slides are here.

cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html

Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: github.com/oracle/python-cx_Oracle

Questions: github.com/oracle/python-cx_Oracle/issues

Oracle Instant Client-based Middle Tiers Such As Docker Without a Click-through

Sun, 2018-10-28 05:13

The Oracle Linux group is pushing forward hard and their latest news is that Instant Client RPMs for Oracle Linux are downloadable for Oracle Linux 7 and Oracle Linux 6 without needing a click-through. This means you can now more easily provision machines using an automated install process. Sergio's blog post has the announcement and steps to use the packages.

Our Instant Client 18.3 Dockerfile has also been updated. As you'd expect, 'docker build' now does the download and install of Instant Client. Since 18.3 is effectively the renamed 12.2.0.2 and you can connect to Oracle Database 11.2 or later (the same as with Instant Client 12.2), using the new Dockerfile is an obvious win.

What are you doing at Oracle OpenWorld and Code One 2018?

Tue, 2018-10-09 02:11

The time for Oracle OpenWorld and CodeOne conferences is fast approaching.  These two conferences run concurrently in San Francisco over October 22 - 25.  If you are flying in, you will have already sorted out your conference tickets, but if you are local there are options from the free Discover pass (search the page for 'Discover') on upwards that you should take advantage of.

There are plenty of things to attend and do - you can get your hands dirty in any number of ways! There are so many sessions running that you need to keep a clear head.  If you want to see sessions that are related to the areas this blog covers check out our 'Focus On' document.  There is a short URL: https://tinyurl.com/AppDevOOW18 which will also take you there.  This document has the abstracts and will be up to date if there are any changes, but for ease of reference here is the current list of wonderful events:

HOL (Hands-on Lab) Sessions

Python and Oracle Database: Scripting for the Future - BYOL [HOL5052]
Monday, Oct 22, 9:00 a.m. - 11:00 a.m. | Moscone West - Overlook 2A (HOL)

Python and Oracle Database 18c: Scripting for the Future [HOL6329]
Tuesday, Oct 23, 2:15 p.m. - 3:15 p.m. | Marriott Marquis (Yerba Buena Level) - Salon 3/4

Developer Sessions

Getting Started with R and ROracle for Data Science with Oracle Database [DEV5049]
Monday, Oct 22, 11:30 a.m. - 12:15 p.m. | Moscone West - Room 2016

The Go Language: Principles and Practices for Oracle Database [DEV5047]
Monday, Oct 22, 12:30 p.m. - 1:15 p.m. | Moscone West - Room 2001

How to Build Geospatial Analytics with Python and Oracle Database [DEV5185]
Monday, Oct 22, 1:30 p.m. - 2:15 p.m. | Moscone West - Room 2003

How Does Ruby on Rails Work with MySQL, Oracle Database, and Other Databases [DEV4948]
Monday, Oct 22, 2:30 p.m. - 3:15 p.m. | Moscone West - Room 2014

Getting Started with GraphQL APIs on Oracle Database with Node.js [DEV4879]
Tuesday, Oct 23, 11:30 a.m. - 12:15 p.m. | Moscone West - Room 2012

Product Training Sessions

Oracle Net Services: Best Practices for Database Performance and High Availability [TRN4073]
Monday, Oct 22, 3:45 p.m. - 4:30 p.m. | Moscone West - Room 3009

A Database Proxy for Transparent High Availability, Performance, Routing, and Security [TRN4070]
Wednesday, Oct 24, 11:15 a.m. - 12:00 p.m. | Moscone West - Room 3009

Application High Availability Best Practices and New Features [TRN4078]
Thursday, Oct 25, 10:00 a.m. - 10:45 a.m. | Moscone West - Room 3009

Using Location in Cloud Applications with Python, Node.js, and More [TRN4089]
Thursday, Oct 25, 11:00 a.m. - 11:45 a.m. | Moscone West - Room 3001

Building Real-Time Data in Web Applications with Node.js [TRN4081]
Thursday, Oct 25, 1:00 p.m. - 1:45 p.m. | Moscone West - Room 3009

IoT for Oracle Database: Soldering, Python, and a Little PL/SQL [TRN4077]
Thursday, Oct 25, 2:00 p.m. - 2:45 p.m. | Moscone West - Room 3009

Tips and Tricks Sessions

Python and Oracle Database on the Table [TIP4076]
Tuesday, Oct 23, 12:30 p.m. - 1:15 p.m. | Moscone West - Room 3009

Node.js: Async Data In and Data Out with Oracle Database [TIP4080]
Thursday, Oct 25, 11:00 a.m. - 11:45 a.m. | Moscone West - Room 3009

Performance and Scalability Techniques for Oracle Database Applications [TIP4075]
Thursday, Oct 25, 12:00 p.m. - 12:45 p.m. | Moscone West - Room 3009

Meet the Experts

Node.js, Python, PHP, and Go with Oracle Database [MTE6765]
Wednesday, Oct 24, 3:00 p.m. - 3:50 p.m. | Moscone West - The Hub - Lounge B

Demos

We'll also have a demo booth in one of the demo halls:

Python, Node.js, Go, C, and C++ Application Development for Oracle Database [APD-A03]

Other Happenings

Some other 'Focus On' Documents from my wider group are:


node-oracledb 3.0 Introduces SODA Document Storage

Mon, 2018-10-01 17:11


Release announcement: Node-oracledb 3.0.0, the Node.js module for accessing Oracle Database, is on npm.

Top features: Simple Oracle Document Access (SODA) preview, Connection Pool draining, Call timeouts.


Node-oracledb 3 has been released, which is very exciting. This release adds support for some Oracle Database and Oracle Client 18c features.

As with earlier versions, node-oracledb 3 is usable with Oracle Client libraries 11.2 onwards. This allows it to connect to Oracle Database 9.2 or later, depending on the client version. But only when using Oracle Client 18.3 libraries and connected to Oracle Database 18.3 will you get the latest and greatest set of Oracle features, which we are pleased to bring you.

Here are the highlights of this node-oracledb 3.0 release:

  • Oracle Simple Document Access (SODA) support - an exciting addition to the standard relational access model is the new set of APIs for Oracle SODA. See the section lower down for examples.

  • A new drainTime argument to pool.close() allows pools to be force-closed after a specified number of seconds. This feature was a contribution from Danilo Silva. Thanks Danilo!

    When a pool is closed with a given drainTime, any subsequent getConnection() calls will fail, but connections currently in use will continue working. This allows code to complete and outstanding transactions to be committed. When no connections are in use, or at the end of the drain time (whichever occurs first), the pool and all its connections will be forcibly closed.

    Although the words 'forcibly closed' seem harsh, the drain time actually allows more graceful shutdown of applications, so that users will get clear messages that the pool is closing (or has closed), and letting the database-side sessions be cleanly freed without waiting for them to timeout. The drainTime can also be zero, forcing the immediate close of all sessions - this is handy when you want to kill an app but be nice to the database.

  • Installation of the pre-built node-oracledb binaries got a bit easier with basic proxy authentication support. Also the 'npm config' proxy value is now used when installing if there are no proxy environment variables set. These changes were a contribution from Cemre Mengu. Thank you Cemre!

  • Added a connection.callTimeout property to interrupt long-running database calls. This is available when node-oracledb 3 is using Oracle Client libraries version 18.1, or later, regardless of Oracle Database version. (A combined usage sketch covering pool draining, call timeouts, and pool pinging appears after this list.)

    The call timeout is a useful backstop to prevent out-of-control SQL and PL/SQL statement execution. For background, the main code layer beneath node-oracledb's implementation is Oracle Call Interface. This API handles all the network connectivity to Oracle Database. For each OCI function executed by node-oracledb, zero or more 'round-trips' to the database can occur - calling the database and getting a response back.

    The callTimeout value applies to each round-trip individually, not to the sum of all round-trips. Time spent processing in node-oracledb before or after the completion of each round-trip is not counted.

    • If the time from the start of any one round-trip to the completion of that same round-trip exceeds callTimeout milliseconds, then the operation is halted and an error is returned.

    • In the case where a node-oracledb operation requires more than one round-trip and each round-trip takes less than callTimeout milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds callTimeout.

    • If no round-trip is required, the operation will never be interrupted.

    When callTimeout is exceeded, node-oracledb attempts to clean up the internal connection state. The cleanup is allowed to take another callTimeout milliseconds.

    If the cleanup was successful, a "DPI-1067: call timeout of N ms exceeded with ORA-XXX" error will be returned and the application can continue to use the connection.

    For small values of callTimeout, the connection cleanup may not complete successfully within the additional callTimeout period. In this case an ORA-3114 is returned and the connection will no longer be usable. It should be closed.

  • On Windows, node-oracledb will now attempt to load the Oracle Client libraries from the 'node_modules\oracledb\build\Release' directory before doing the standard Windows library directory search i.e. of the PATH directories.

    This new feature could be useful if you are bundling up applications on Windows and want to include the Oracle Instant Client. By putting the client libraries in the 'node_modules\oracledb\build\Release' directory there is no need to set PATH, no need to worry about users changing PATH, and no need to worry about having multiple versions of Oracle Client libraries in PATH.

  • poolPingInterval functionality has been 're-enabled' when using the connection pool with Oracle Client libraries 12.2, or later. Previously it was deliberately not enabled with these versions, since an internal check in those Oracle clients is very efficient for seeing if the network has dropped out. However users were unhappy that the internal check does not identify connections that are unusable because they have exceeded database session resource limits (e.g. return ORA-02396 when used), or have been explicitly closed by DBAs (e.g. return ORA-00028). This is where poolPingInterval helps.

    This change can make your apps seem more highly available but there is a drawback: your apps may be silently reconnecting more than is optimal, and you might not be aware of connection storms if a large pool needs to be re-established. You should monitor AWR reports to see if connections are occurring too frequently, and then work with your network and DBA administrators to prevent idle sessions being killed.
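
To make the pool draining, call timeout, and pool ping descriptions above concrete, here is a minimal sketch; the credentials, pool settings, and timeout values are illustrative assumptions, not recommendations:

const oracledb = require('oracledb');

async function run() {
  // Credentials and pool settings are placeholders for illustration
  const pool = await oracledb.createPool({
    user: 'hr',
    password: 'welcome',
    connectString: 'localhost/orclpdb',
    poolPingInterval: 60          // ping connections idle in the pool for 60+ seconds
  });

  const connection = await pool.getConnection();
  connection.callTimeout = 5000;  // interrupt any round-trip longer than 5 seconds

  try {
    const result = await connection.execute('select count(*) from all_objects');
    console.log(result.rows[0][0]);
  } finally {
    await connection.close();     // release the connection back to the pool
  }

  await pool.close(10);           // drainTime: give in-use connections 10 seconds
}

run();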

These are just the highlights. For other changes and improvements see the CHANGELOG. But read on to hear more about SODA . . . .

Simple Oracle Document Access (SODA) in node-oracledb

Oracle Simple Document Access (SODA) is a set of NoSQL-style APIs that let you create and store collections of documents (in particular JSON) in Oracle Database, retrieve them, and query them, without needing to know SQL. SODA support is now available in node-oracledb 3 when using Oracle Database 18.3 and Oracle Client 18.3, or higher. SODA APIs are also available for Python, C, Java, PL/SQL and via REST, so it is widely accessible and bound to be a useful tool in your data toolkit. We are currently labelling node-oracledb 3 SODA APIs as a 'preview' but, with a future version of the Oracle Client libraries, this will change.

The class diagram of node-oracledb shows the separation of the relational and SODA worlds:

node-oracledb class overview diagram

In reality, SODA is backed by Oracle Database tables, providing a well known, secure, and efficient storage solution. You could access those tables via SQL but this would rarely be needed, perhaps for some advanced Oracle Database functionality such as analytics for reporting.

Instead you will almost certainly just use the new classes and methods. Once a DBA has granted you the SODA_APP privilege, you can simply create collections and store documents (e.g. JSON) in them. Some basic examples are:

// Create the parent object for SODA.
soda = await connection.getSodaDatabase();

// Create a new SODA collection, if it doesn't exist.
// This will open an existing collection, if the name is already in use.
collection = await soda.createCollection("mycollection");

// Insert a document.
// A system generated key is created by default.
content = {name: "Matilda", address: {city: "Melbourne"}};
doc = await collection.insertOneAndGet(content);
key = doc.key;
console.log("The key of the new SODA document is: ", key);

You can then get documents back via a key look up, or by a search. A key lookup is straightforward:

// Fetch the document back
doc = await collection.find().key(key).getOne(); // A SodaDocument
content = doc.getContent();                      // A JavaScript object

console.log('Retrieved SODA document as an object:');
console.log(content);

For documents that can be converted to JSON you can alternatively get them as a string:

content = doc.getContentAsString();              // A JSON string

console.log('Retrieved SODA document as a string:');
console.log(content);

The find() method is an operation builder, with methods that allow progressively limiting criteria to be set, reducing the set of documents that are then operated on by a terminal method such as getOne(), getDocuments() or count(), amongst others.

With JSON documents, a complete filtering specification language can be used for query-by-example (QBE) to find documents. A brief example is:

// Find all documents with city names starting with 'S'
documents = await collection.find()
  .filter({"address.city": {"$like": "S%"}})
  .getDocuments();

for (let i = 0; i < documents.length; i++) {
  content = documents[i].getContent();
  console.log('  city is: ', content.address.city);
}
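
The other terminal methods follow the same builder pattern; for example, a count of the matching documents (a sketch reusing the collection above) is:

// Count the documents with city names starting with 'S'
result = await collection.find()
  .filter({"address.city": {"$like": "S%"}})
  .count();
console.log('Number of matching documents: ' + result.count);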

A runnable example is in soda1.js

Check out the node-oracledb SODA manual section and the Introduction to Simple Oracle Document Access (SODA) manual to see its power and simplicity.

You don't have Oracle Database 18.3 yet? Get it from here. Or you may be interested in using JSON with older versions of Oracle Database.

Summary

We are continuing to introduce important features to node-oracledb to make your development experience better. We have a long wishlist and will continue our work. Contributions from the community are always welcome, and we thank the people who have contributed to this and previous releases of node-oracledb for helping to make it better.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

Issues and questions about node-oracledb can be posted on GitHub or Slack (link to join Slack).

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

Follow us on Twitter or Facebook.

Node.js node-oracledb: "[POLL] Most Wanted Enhancement"

Wed, 2018-09-26 22:08

Danilo Silva, a community member much involved with node-oracledb development, has opened a poll to identify the most requested node-oracledb features.  This will help prioritize our development efforts. 

You can vote here!

Danilo most recently contributed connection pool draining support to node-oracledb, which will be available with node-oracledb 3.0.

ODPI-C 3.0 Introduces SODA Document Storage

Fri, 2018-09-14 06:59
Release 3.0 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

This release introduces support for Simple Oracle Document Access (SODA) when using Oracle Database 18c. SODA provides a non-SQL API for storing and accessing documents. Commonly documents are JSON, but other types can also be used. In this release the SODA API is a Preview. More details about SODA are in the companion release announcement for Python cx_Oracle 7.0.

Also introduced is a call timeout feature for Oracle Client 18c users. This allows applications direct control over how long database operations are allowed to run, making it easier for applications to control outcomes and keep control of user interaction. Again, details are in the cx_Oracle announcement.

If you're creating Windows applications for distribution, a change to how Oracle client libraries are located will be helpful. ODPI-C will try to load the Oracle client from the same directory as the ODPI-C binary, before defaulting to the standard search, i.e. using PATH. This means you can bundle a specific version of Instant Client with your application and know that it will be used in preference to any other Oracle libraries on the system.

There is a raft of other tweaks and improvements, which can be found in the release notes.

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Installation Instructions: oracle.github.io/odpi/doc/installation.html

Report issues and discuss: https://github.com/oracle/odpi/issues

Python cx_Oracle 7 Introduces SODA Document Storage

Thu, 2018-09-13 22:48

cx_Oracle 7.0, the extremely popular Oracle Database interface for Python, is now Production on PyPI.

cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features.

Anthony Tuininga has just released cx_Oracle 7.0. This release brings some key technologies and new features to the Python developer:

  • Oracle Simple Document Access (SODA) support - an exciting addition to the standard relational access model is the new set of APIs for Oracle SODA. See below.

  • Added Connection.callTimeout to support call timeouts when cx_Oracle is using Oracle Client 18.1 and higher. This is a useful backstop to prevent out-of-control SQL and PL/SQL statement execution.

    The main code layer beneath cx_Oracle's implementation is Oracle Call Interface. This API handles all the network connectivity to Oracle Database. For each OCI function executed by cx_Oracle, zero or more 'round-trips' to the database can occur - calling the database and getting a response back.

    The callTimeout value applies to each round-trip individually, not to the sum of all round-trips. Time spent processing in cx_Oracle before or after the completion of each round-trip is not counted.

    • If the time from the start of any one round-trip to the completion of that same round-trip exceeds callTimeout milliseconds, then the operation is halted and error "DPI-1067: call timeout of N ms exceeded with ORA-XXX" is returned.

    • In the case where a cx_Oracle operation requires more than one round-trip and each round-trip takes less than callTimeout milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds callTimeout.

    • If no round-trip is required, the operation will never be interrupted.

    After a timeout occurs, cx_Oracle attempts to clean up the internal connection state. The cleanup is allowed to take another callTimeout milliseconds.

    If the cleanup was successful, the DPI-1067 error will be returned and the application can continue to use the connection.

    For small values of callTimeout, the connection cleanup may not complete successfully within the additional callTimeout period. In this case an ORA-3114 is returned and the connection will no longer be usable. It should be closed.

  • Added support for closing a session pool via the function SessionPool.close(). This is useful for being 'nice' to the database and making sure that database sessions are not left dangling until the database cleans them up. In particular the optional 'force' argument is handy when you need to suddenly halt a Python application and immediately free all the sessions in the database. (A short sketch combining this with callTimeout and the new connection context manager behavior appears after this list.)

  • Added support for getting the contents of a SQL collection object as a dictionary, where the keys are the indices of the collection and the values are the elements of the collection. See function Object.asdict().

  • On Windows, cx_Oracle will now attempt to load the Oracle client libraries from the same directory as the cx_Oracle module before doing the standard Windows library location search, e.g. in the directories in the PATH environment variable. This new feature could be useful if you are bundling up applications and want to include the Oracle Instant Client. By putting the client in the same directory as the cx_Oracle library there is no need to set PATH, no need to worry about users changing PATH, and no need to worry about having multiple versions of Oracle client libraries in PATH.

  • A change in cx_Oracle 7 is that when a DML RETURNING statement is executed, variables bound to it will return an array when calling Variable.getvalue(). Attempts to set cx_Oracle.__future__.dml_ret_array_val are now ignored.

  • When a connection is used as a context manager, the connection is now closed when the block ends. Attempts to set cx_Oracle.__future__.ctx_mgr_close are now ignored.
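
Here is a minimal sketch tying a few of these together; the credentials, pool sizing, and timeout value are illustrative assumptions, not recommendations:

import cx_Oracle

# Credentials and pool settings are placeholders for illustration
pool = cx_Oracle.SessionPool(user="hr", password="welcome",
                             dsn="localhost/orclpdb", min=2, max=10, increment=1)

# The connection is closed (released back to the pool) when the block ends
with pool.acquire() as connection:
    # Interrupt any round-trip that takes longer than 5 seconds
    # (requires Oracle Client 18.1 or later)
    connection.callTimeout = 5000          # milliseconds

    cursor = connection.cursor()
    cursor.execute("select count(*) from all_objects")
    print(cursor.fetchone()[0])

# Close the pool; force=True frees sessions even if they are still in use
pool.close(force=True)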

The full release notes show the other new features and changes. Review this list before you upgrade:

python -m pip install cx_Oracle --upgrade

SODA in Python cx_Oracle

Oracle Simple Document Access (SODA) support was originally introduced in Java and recently exposed to C. Python support for SODA is now available in cx_Oracle 7 when using Oracle client 18.3 libraries and connecting to Oracle Database 18.1 or higher. SODA is all hot and new and under rapid development. For this cx_Oracle release we're labelling SODA support as a 'preview'. With a future version of the Oracle Client libraries this will change.

SODA is typically used to store JSON documents in Oracle Database, but has flexibility to let you store other types of content.

Once a DBA has granted you the SODA_APP privilege, you can simply create collections and store documents in them. Some basic examples are:

# Create the parent object for SODA
soda = connection.getSodaDatabase()

# Create a new SODA collection
# This will open an existing collection, if the name is already in use.
collection = soda.createCollection("mycollection")

# Insert a document
# A system generated key is created by default.
content = {'name': 'Matilda', 'address': {'city': 'Melbourne'}}
doc = collection.insertOneAndGet(content)
key = doc.key
print('The key of the new SODA document is: ', key)

You can then get documents back via a key look up, or by a search. A key lookup is straightforward:

# Fetch the document back
doc = collection.find().key(key).getOne()  # A SodaDocument
content = doc.getContent()                 # A Python dictionary

print('Retrieved SODA document dictionary is:')
print(content)

For documents that can be converted to JSON you can alternatively get them as a string:

content = doc.getContentAsString()         # A JSON string

print('Retrieved SODA document string is:')
print(content)

The find() method is an operation builder, with methods that allow progressive filtering criteria to be set, limiting the set of documents that are then operated on by a terminal method such as getOne(), getDocuments() and count().

With JSON documents, a complete filtering specification language can be used to pattern match documents. A brief example is:

# Find all documents with names like 'Ma%'
print("Names matching 'Ma%'")
documents = collection.find().filter({'name': {'$like': 'Ma%'}}).getDocuments()
for d in documents:
    content = d.getContent()
    print(content["name"])
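
The same builder works with the other terminal methods; for instance, counting the matching documents (a sketch reusing the collection above):

# Count the documents with names like 'Ma%'
numDocs = collection.find().filter({'name': {'$like': 'Ma%'}}).count()
print('Number of matching documents:', numDocs)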

A runnable example is in SodaBasic.py

Check out the cx_Oracle SODA manual and the Introduction to Simple Oracle Document Access (SODA) manual to see its power and simplicity.

Check it out!

PS the photo is one I took last weekend on a beach in Australia, which is coming into Spring. I thought you'd like it better than corporate clip art.

cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html

Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: github.com/oracle/python-cx_Oracle

Node.js on Oracle Linux: It's So Easy

Mon, 2018-09-10 18:39
This is a quick placeholder cross-post in the 'so I can find it again' category: Sergio Leunissen from our Linux group has a blog post on using Node.js and node-oracledb on Oracle Linux. You can find the post here. The Oracle Linux RPM packages make it all so easy!

So you want to use JSON in Oracle Database with Node.js?

Wed, 2018-08-15 05:52

The JavaScript JSON.parse() and JSON.stringify() methods make it easy to work with JavaScript objects in Node.js and store them in Oracle Database using the node-oracledb module.

I'll start with some examples showing a simple, naive, implementation which you can use with all versions of Oracle Database. Then I'll go on to show some of the great JSON functionality introduced in Oracle Database 12.1.0.2.

The examples below use the async/await syntax available in Node 7.6, but they can be rewritten to use promises or callbacks, if you have an older version of Node.js.

Storing JSON as character data in Oracle Database 11.2

At the simplest, you can store JSON as character strings, such as in the column C of MYTAB:

CREATE TABLE mytab (k NUMBER, c CLOB);

Using a CLOB means we don't need to worry about the length restrictions of a VARCHAR2.

A JavaScript object like myContent can easily be inserted into Oracle Database with the node-oracledb module by stringifying it:

const oracledb = require('oracledb');

let connection, myContent, json, result;

async function run() {
  try {
    connection = await oracledb.getConnection(
      {user: "hr", password: "welcome", connectString: "localhost/orclpdb"});

    myContent = {name: "Sally", address: {city: "Melbourne"}};
    json = JSON.stringify(myContent);

    result = await connection.execute(
      'insert into mytab (k, c) values (:kbv, :cbv)',
      { kbv: 1, cbv: json }
    );
    console.log('Rows inserted: ' + result.rowsAffected);

  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();

If you are just inserting one record you may want to autocommit, but make sure you don't unnecessarily commit, or break transactional consistency by committing a partial set of data:

myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);

result = await connection.execute(
  'insert into mytab (k, c) values (:kbv, :cbv)',
  { kbv: 1, cbv: json },
  { autoCommit: true}
);
console.log('Rows inserted: ' + result.rowsAffected);

The output is:

Rows inserted: 1

To retrieve the JSON content you have to use a SQL query. This is fine when you only need to lookup records by their keys:

result = await connection.execute(
  'select c from mytab where k = :kbv',
  { kbv: 1 },   // the key to find
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

The fetchInfo clause is used to return the CLOB as a string. This is simpler and generally faster than the default, streamed access method for LOBs. (Streaming is great for huge data streams such as videos.)

The JSON.parse() call converts the JSON string into a JavaScript object so fields can be accessed like 'js.address.city'.

Output is:

Name is: Sally
City is: Melbourne

Code gets trickier if you need to match JSON keys in the query. You need to write your own matching functionality using LOB methods like dbms_lob.instr():

result = await connection.execute(
  'select c from mytab where dbms_lob.instr(c, \'"name":"\' || :cbv ||\'"\') > 0',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

You can see this could be slow to execute, error prone to do correctly, and very hard to work with when the JSON is highly nested. But there is a solution . . .

Oracle Database 12c JSON

With Oracle 12.1.0.2 onward you can take advantage of Oracle's JSON functionality. Data is stored as VARCHAR2 or LOB so the node-oracledb code is similar to the naive storage solution above. However, in the database, extensive JSON functionality provides tools for data validation, indexing and matching, for working with GeoJSON, and even for working with relational data. Check the JSON Developer's Guide for more information. You may also be interested in some of the JSON team's blog posts.

To start with, when you create a table, you can specify that a column should be validated so it can contain only JSON:

c CLOB CHECK (c IS JSON)) LOB (c) STORE AS (CACHE)

In this example I also take advantage of Oracle 12c's 'autoincrement' feature called 'identity columns'. This automatically creates a monotonically increasing sequence number for the key. The complete CREATE TABLE statement used for following examples is:

CREATE TABLE myjsontab (
    k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
    c CLOB CHECK (c IS JSON))
  LOB (c) STORE AS (CACHE);

Strictly speaking, since I know my application will insert valid JSON, I could have improved database performance by creating the table without the CHECK (c IS JSON) clause. However, if you don't know where your data is coming from, letting the database do validation is wise.

Inserting JavaScript object data uses the same stringification as the previous section. Since we don't need to supply a key now, we can use a DML RETURNING clause to get the new key's autoincremented value:

myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);

result = await connection.execute(
  'insert into myjsontab (c) values (:cbv) returning k into :kbv',
  { cbv: json,
    kbv: { type: oracledb.NUMBER, dir: oracledb.BIND_OUT } },
  { autoCommit: true}
);
console.log('Data key is: ' + result.outBinds.kbv);

This inserts the data and returns the key of the new record. The output is:

Data key is: 1

To extract data by the key, a standard SQL query can be used, identical to the naive CLOB implementation previously shown.

Oracle Database's JSON functionality really comes into play when you need to match attributes of the JSON string. You may even decide not to have a key column. Using Oracle 12.2's 'dotted' query notation you can do things like:

result = await connection.execute(
  'select c from myjsontab t where t.c.name = :cbv',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

Output is:

Name is: Sally
City is: Melbourne

(If you use Oracle Database 12.1.0.2, then the dotted notation used in the example needs to be replaced with a path expression, see the JSON manual for the syntax).

Other JSON functionality is usable, for example to find any records that have an 'address.city' field:

select c FROM myjsontab where json_exists(c, '$.address.city')

If you have relational tables, Oracle Database 12.2 has a JSON_OBJECT function that is a great way to convert relational table data to JSON:

result = await connection.execute(
  `select json_object('deptId' is d.department_id, 'name' is d.department_name) department
   from departments d
   where department_id < :did`,
  { did: 50 },
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  for (var i = 0; i < result.rows.length; i++) {
    console.log("Department: " + result.rows[i][0]);
    js = JSON.parse(result.rows[i][0]);
    console.log('Department Name is: ' + js.name);
  }
} else {
  console.log('No rows fetched');
}

Output is:

Department: {"deptId":10,"name":"Administration"}
Department Name is: Administration
Department: {"deptId":20,"name":"Marketing"}
Department Name is: Marketing
Department: {"deptId":30,"name":"Purchasing"}
Department Name is: Purchasing
Department: {"deptId":40,"name":"Human Resources"}
Department Name is: Human Resources

If you are working with JSON tables that use BLOB storage instead of CLOB, for example:

CREATE TABLE myjsonblobtab (
    k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
    c BLOB CHECK (c IS JSON))
  LOB (c) STORE AS (CACHE);

Then you need to bind a Buffer for insert:

myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);
b = Buffer.from(json, 'utf8');

result = await connection.execute(
  'insert into myjsonblobtab (k, c) values (:kbv, :cbv)',
  { kbv: 1, cbv: b },
  { autoCommit: true}
);
console.log('Rows inserted: ' + result.rowsAffected);

Querying needs to return a Buffer too:

result = await connection.execute(
  'select c from myjsonblobtab t where t.c.name = :cbv',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.BUFFER } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0].toString('utf8'));
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

A final JSON tip

One final tip is to avoid JSON.parse() if you don't need it. An example is where you need to pass a JSON string to a web service or browser. You may be able to pass the JSON string returned from a query directly. In some cases the JSON string may need its own key, in which case simple string concatenation may be effective. In this example, the Oracle Locator method returns a GeoJSON string:

result = await connection.execute(
  `select sdo_util.to_geojson(
      sdo_geometry(2001, 8307, sdo_point_type(-90, 20, null), null, null)) as c
   from dual`,
  { },  // no binds
  { fetchInfo: {"C": {type: oracledb.STRING } }});

json = '{"geometry":' + result.rows[0][0] + '}';
console.log(json);

The concatenation above avoids the overhead of a parse and re-stringification:

js = JSON.parse(result.rows[0][0]);
jstmp = {geometry: js};
json = JSON.stringify(jstmp);

Summary

The JavaScript JSON.parse() and JSON.stringify() methods make it easy to work with JSON in Node.js and Oracle Database. Combined with node-oracledb's ability to work with LOBs as Node.js Strings, database access is very efficient. Oracle Database 12.1.0.2's JSON features make JSON operations in the database simple. Advances in Oracle Database 12.2 and 18c further improve the functionality and usability.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Issues and questions about node-oracledb can be posted on GitHub.

The Oracle JSON Developer's Guide is here.

Some New Features of Oracle Instant Client 18.3

Mon, 2018-07-30 05:52

We released Oracle Database 18.3 for Linux last week. It and the "full" Oracle Client are downloadable from here. Read this to find out about some of the new database features.

Many of the readers of my blog have an equal interest in the "client side". You'll be happy that Oracle Instant Client 18.3 for Linux 64-bit and 32-bit is also available. Instant Client is just a rebundling of the Oracle client libraries and some tools. They are the same ones available with an Oracle Database installation or the "full" Oracle Client installation but installation is much simpler: you just unzip a file, or install an RPM package on Linux and use them to connect your applications to Oracle Database.

The "Oracle Client", in whatever install footprint you choose, covers a number of technologies and provides a lot of language APIs. The Instant Client packages contain these APIs and selected tools like SQL*Plus and Data Pump. I'll let those teams blow their own trumpets about the new release. Here I'll talk about some of the Oracle Client functionality that benefits the Oracle Oracle Call Interface (OCI) API for C programs, and all the scripting languages that use OCI:

  • My wider group's most exciting project in 18.3 is the Connection Manager (CMAN) Traffic Director mode, whose sub-location in the Oracle manual is a sign of how transparent the feature is, not an indication of the huge engineering effort that went into it. CMAN in Traffic Director Mode is a proxy between the database clients and the database instances. Supported OCI clients from Oracle Database 11g Release 2 (11.2) and later can connect to CMAN to get improved high availability (HA) for planned and unplanned database server outages, connection multiplexing support, and load balancing.

Cherry picking some notable Oracle Client 18c features that are available via OCI:

  • You probably know that Oracle Database 18c is really just a re-badged 12.2.0.2. Due to the major version number change and the new release strategy, there is a new OCIServerRelease2() call to get the database version number. The old OCIServerRelease() function will give just the base release information so use the new function to get the actual DB patch level. Why? Let's just say there were robust discussions about the upgrade and release cycles, and about handling the "accelerated" version change across the whole database product suite and how things like upgrade tools were affected.

  • Extracting Instant Client 18.3 ZIP files now pre-creates symbolic links for the C and C++ client libraries on relevant operating systems. Yay! One fewer install step.

  • Instant Client now also pre-creates a network/admin sub-directory to show where you can put any optional network and other configuration files such as tnsnames.ora, sqlnet.ora, ldap.ora, and oraaccess.xml. This directory will be used by default for any application that loads the related Oracle Client libraries.

  • Support for Client Result Caching with dynamic binds where descriptors are not involved and the bind length is less than 32768. Since scripting languages tend to use dynamic binds for character data this could be a big performance win for your lookup table queries.

  • Unique ID generation improvements. One little old gotcha, particularly in some hosted or cloud environments, was errors when Oracle applications tried to generate a unique key for your client. This manifested itself as an Oracle error when you tried to start a program. Workarounds included adding a hostname to /etc/hosts. There were improvements in Oracle Client 18c for unique key generation, so the problem should be less common.

  • A new call timeout parameter can be enabled for C applications. This applies to post-connection round-trips to the database, making it easier to interrupt long-running calls and satisfy application quality-of-service requirements. After you connect, each OCI call may make one or more round-trips to Oracle Database:

    • If the time from the start of any one round-trip to the completion of that same round-trip exceeds the call timeout milliseconds, then the operation is halted and an Oracle error is returned.

    • In the case where an OCI call requires more than one round-trip and each round-trip takes less than the specified number of milliseconds, then no timeout will occur, even if the sum of all round-trip calls exceeds the call timeout value.

    • If no round-trip is required, the operation will never be interrupted.

    After a timeout has occurred, the connection must be cleaned up. This is allowed to run for the same amount of time as specified for the original timeout. For very small timeouts, if the cleanup fails, then an ORA-3114 is returned and the connection must be released. However if the cleanup is successful then an ORA-3136 is returned and the application can continue using the connection.

    You can see this will be most useful for interrupting SQL statements whose "execute" phase may take some time. A minimal node-oracledb sketch of the same idea follows this list.

  • The OCI Session pool underlies many application connection pools (and if it doesn't underlie yours, then it should - ask me why). Improvements in 18c session pooling include some usability "do-what-I-mean" parameter size check tweaks, internal lock improvements, and a new attribute OCI_ATTR_SPOOL_MAX_USE_SESSION.

    One other change that was much debated during development is the OCISessionGet() behavior of OCI_SPOOL_ATTRVAL_NOWAIT mode when a pool has to grow. Prior to 18c, even though it was a 'no wait' operation, getting a connection would actually wait for the pool to grow. Some users didn't like this: since creating connections could take a few moments, they had no way to control the quality of service. Now in 18c the mode doesn't wait - if there's no free connection immediately available, then control is returned to the application with an error. If you are impacted by the new behavior, then look at using alternative session acquire modes like OCI_SPOOL_ATTRVAL_TIMEDWAIT. Or better, keep your pool a constant size so it doesn't need to grow, which is what Oracle's Real World Performance Group recommends (see the pool sketch after this list).

  • SODA support. Simple Oracle Document Access (SODA) that was previously only available via JDBC is now available in OCI. Yum. Let's see what we can do with this now it's in C. More on this later.
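The call timeout is an OCI-level setting, but OCI-based drivers typically surface it too. As a minimal sketch only, assuming a node-oracledb version that exposes it as a connection.callTimeout property and using placeholder credentials (check your driver documentation for the exact name and availability):

const oracledb = require('oracledb');

async function run() {
  // Placeholder credentials and connect string - substitute your own
  const connection = await oracledb.getConnection({
    user: 'hr',
    password: 'welcome',
    connectString: 'localhost/orclpdb1'
  });

  // Each post-connection round-trip may take at most 5000 milliseconds
  connection.callTimeout = 5000;

  try {
    const result = await connection.execute('SELECT COUNT(*) FROM all_objects');
    console.log(result.rows);
  } catch (err) {
    // A round-trip that exceeds the timeout is interrupted and surfaces here as an error
    console.error(err.message);
  } finally {
    await connection.close();
  }
}

run();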
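And on the pool-sizing advice above, here is a minimal node-oracledb sketch of a constant-size pool so that growth never happens; the credentials and connect string are placeholders:

const oracledb = require('oracledb');

async function initPool() {
  // poolMin equals poolMax and poolIncrement is 0, so all connections are
  // opened up front and the pool never grows or shrinks
  await oracledb.createPool({
    user: 'hr',                           // placeholder credentials
    password: 'welcome',
    connectString: 'localhost/orclpdb1',
    poolMin: 10,
    poolMax: 10,
    poolIncrement: 0
  });
}

initPool();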

I hope this has given you a taste of some Oracle Client 18c changes and given you links to explore more. Don't forget that much new database functionality is available to clients transparently or via SQL and PL/SQL.

Finally, remember that Oracle has client-server version interoperability so 18c OCI programs can connect to Oracle Database 11.2 or later. It's time to upgrade your client!

Python cx_Oracle 6.4 Brings a World Cup of Improvements

Mon, 2018-07-02 19:58


cx_Oracle 6.4, the extremely popular Oracle Database interface for Python, is now Production on PyPI.

cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features.

At a nicely busy time of year, cx_Oracle 6.4 has landed. To keep it brief I'll point you to the release notes, since there have been quite a number of improvements. Some of those will significantly help your apps.

A few things to note:

  • Improvements to Continuous Query Notification and Advanced Queuing notifications

  • Improvements to session pooling

  • A new encodingErrors setting to choose how to handle decoding corrupt character data queried from the database

  • You can now use a cursor as a context manager:

    with conn.cursor() as c:
        c.execute("SELECT * FROM DUAL")
        result = c.fetchall()
        print(result)
cx_Oracle References

Home page: oracle.github.io/python-cx_Oracle/index.html

Installation instructions: cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: github.com/oracle/python-cx_Oracle

Demo: GraphQL with node-oracledb

Thu, 2018-06-21 09:18

Some of our node-oracledb users recently commented they have moved from REST to GraphQL so I thought I'd take a look at what it is all about.

I can requote the GraphQL talking points with the best of them, but things like "Declarative Data Fetching" and "a schema with a defined type system is the contract between client and server" are easier to understand with examples.

In brief, GraphQL:

  • Provides a single endpoint that responds to queries. No need to create multiple endpoints to satisfy varying client requirements.

  • Has more flexibility and efficiency than REST. Being a query language, you can adjust which fields are returned by queries, so less data needs to be transferred. You can parameterize the queries, for example to alter the number of records returned - all without changing the API or needing new endpoints.

Let's look at the payload of a GraphQL query. This query with the root field 'blog' asks for the blog with id of 2. Specifically it asks for the id, the title and the content of that blog to be returned:

{ blog(id: 2) { id title content } }

The response from the server would contain the three requested fields, for example:

{ "data": { "blog": { "id": 2, "title": "Blog Title 2", "content": "This is blog 2" } } }

Compare that result with this query that does not ask for the title:

{ blog(id: 2) { id content } }

With the same data, this would give:

{ "data": { "blog": { "id": 2, "content": "This is blog 2" } } }

So, unlike REST, we can choose what data needs to be transferred. This gives client developers more flexibility.

Let's look at some code. I came across this nice intro blog post today which shows a basic GraphQL server in Node.js. For simplicity its data store is an in-memory JavaScript object. I changed it to use an Oracle Database backend.

The heart of GraphQL is the type system. For the blog example, a type 'Blog' is created in our Node.js application with three obvious values and types:

type Blog { id: Int!, title: String!, content: String! }

The exclamation mark means a field is required.

The part of the GraphQL Schema to query a blog post by id is specified in the root type 'Query':

type Query { blog(id: Int): Blog }

This defines a capability to query a single blog post and return the Blog type we defined above.

We may also want to get all blog posts, so we add a "blogs" field to the Query type:

type Query {
  blog(id: Int): Blog
  blogs: [Blog]
}

The square brackets indicate that a list of Blogs is returned.

A query to get all blogs would be like:

{ blogs { id title content } }

You can see that the queries include the 'blog' or 'blogs' field. We can pass all queries to the one endpoint and that endpoint will determine how to handle each. There is no need for multiple endpoints.

To manipulate data requires some 'mutations', typically making up the CUD of CRUD:

input BlogEntry {
  title: String!,
  content: String!
}

type Mutation {
  createBlog(input: BlogEntry): Blog!,
  updateBlog(id: Int, input: BlogEntry): Blog!,
  deleteBlog(id: Int): Blog!
}

To start with, the "input" type allows us to define input parameters that will be supplied by a client. Here a BlogEntry contains just a title and content. There is no id, since that will be automatically created when a new blog post is inserted into the database.

In the mutations, you can see a BlogEntry type is in the argument lists for the createBlog and updateBlog fields. The deleteBlog field just needs to know the id to delete. The mutations all return a Blog. An example of using createBlog is shown later.

Combined, we represent the schema in Node.js like:

const typeDefs = `
  type Blog {
    id: Int!,
    title: String!,
    content: String!
  }
  type Query {
    blogs: [Blog],
    blog(id: Int): Blog
  }
  input BlogEntry {
    title: String!,
    content: String!
  }
  type Mutation {
    createBlog(input: BlogEntry): Blog!,
    updateBlog(id: Int, input: BlogEntry): Blog!,
    deleteBlog(id: Int): Blog!
  }`;

This is the contract, defining the data types and available operations.

In the backend, I decided to use Oracle Database 12c's JSON features. Needless to say, using JSON gives developers the power to modify and improve the schema during the life of an application:

CREATE TABLE blogtable (blog CLOB CHECK (blog IS JSON));

INSERT INTO blogtable VALUES (
  '{"id": 1, "title": "Blog Title 1", "content": "This is blog 1"}');
INSERT INTO blogtable VALUES (
  '{"id": 2, "title": "Blog Title 2", "content": "This is blog 2"}');
COMMIT;

CREATE UNIQUE INDEX blog_idx ON blogtable b (b.blog.id);
CREATE SEQUENCE blog_seq START WITH 3;

Each field of the JSON strings corresponds to the values of the GraphQL Blog type. (The 'dotted' notation syntax I'm using in this post requires Oracle DB 12.2, but can be rewritten for 12.1.0.2.)

The Node.js ecosystem has some powerful modules for GraphQL. The package.json is:

{ "name": "graphql-oracle", "version": "1.0.0", "description": "Basic demo of GraphQL with Oracle DB", "main": "graphql_oracle.js", "keywords": [], "author": "christopher.jones@oracle.com", "license": "MIT", "dependencies": { "oracledb": "^2.3.0", "express": "^4.16.3", "express-graphql": "^0.6.12", "graphql": "^0.13.2", "graphql-tools": "^3.0.2" } }

If you want to see the full graphql_oracle.js file it is here.

Digging into it, the application has some 'Resolvers' to handle the client calls. From Dhaval Nagar's demo, I modified these resolvers to invoke new helper functions that I created:

const resolvers = {
  Query: {
    blogs(root, args, context, info) {
      return getAllBlogsHelper();
    },
    blog(root, {id}, context, info) {
      return getOneBlogHelper(id);
    }
  },
  [ . . . ]
};

To conclude the GraphQL part of the sample, the GraphQL and Express modules hook up the schema type definition from above with the resolvers, and start an Express app:

const schema = graphqlTools.makeExecutableSchema({typeDefs, resolvers});

app.use('/graphql', graphql({
  graphiql: true,
  schema
}));

app.listen(port, function() {
  console.log('Listening on http://localhost:' + port + '/graphql');
})

On the Oracle side, we want to use a connection pool, so the first thing the app does is start one:

await oracledb.createPool(dbConfig);

The helper functions can get a connection from the pool. For example, the helper to get one blog is:

async function getOneBlogHelper(id) {
  let sql = 'SELECT b.blog FROM blogtable b WHERE b.blog.id = :id';
  let binds = [id];
  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds);
  await conn.close();
  return JSON.parse(result.rows[0][0]);
}

The JSON.parse() call nicely converts the JSON string that is stored in the database into the JavaScript object to be returned.
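The mutation resolvers elided above call similar helpers. As an illustration only (the linked graphql_oracle.js is the authoritative version), a createBlog helper could be sketched like this, reusing the blog_seq sequence created earlier:

async function createBlogHelper(input) {
  let conn = await oracledb.getConnection();

  // Get a new id from the sequence created earlier
  let result = await conn.execute('SELECT blog_seq.NEXTVAL FROM dual');
  let id = result.rows[0][0];

  // Build the document and insert it as a JSON string
  let blog = { id: id, title: input.title, content: input.content };
  await conn.execute(
    'INSERT INTO blogtable (blog) VALUES (:b)',
    { b: JSON.stringify(blog) },
    { autoCommit: true });

  await conn.close();
  return blog;  // matches the Blog type the mutation promises to return
}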

Starting the app and loading the endpoint in a browser gives a GraphiQL IDE. After entering the query on the left and clicking the 'play' button, the middle pane shows the returned data. The right hand pane gives the API documentation:

To insert a new blog, the createBlog mutation can be used:
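For example, a client could send that mutation to the demo's /graphql endpoint over HTTP. This is a hedged sketch only: the node-fetch package is just one possible HTTP client, and port 3000 is the demo's assumed port:

const fetch = require('node-fetch');  // any HTTP client would do

async function createBlog() {
  const mutation = `
    mutation {
      createBlog(input: { title: "Blog Title 3", content: "This is blog 3" }) {
        id
        title
        content
      }
    }`;

  const response = await fetch('http://localhost:3000/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: mutation })
  });

  // The response contains the newly created Blog, including its generated id
  console.log(await response.json());
}

createBlog();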

If you want to play around more, I've put the full set of demo-quality files for you to hack on here. You may want to look at the GraphQL introductory videos, such as this comparison with REST.

To finish, GraphQL has the concept of real time updates with subscriptions, something that ties in well with the Continous Query Notification feature of node-oracledb 2.3. Yay - something else to play with! But that will have to wait for another day. Let me know if you beat me to it.

Node-oracledb 2.3 with Continuous Query Notifications is on npm

Fri, 2018-06-08 01:48

Release announcement: Node-oracledb 2.3.0, the Node.js module for accessing Oracle Database, is on npm.

Top features: Continuous Query Notifications. Heterogeneous Connection Pools.

 

 

Our 2.x release series continues with some interesting improvements: Node-oracledb 2.3 is now available for your pleasure. Binaries for the usual platforms are available for Node.js 6, 8, and 10; source code is available on GitHub. We are not planning on releasing binaries for Node.js 4 or 9 due to the end of life of Node.js 4, and the release of Node.js 10.

The main new features in node-oracledb 2.3 are:

  • Support for Oracle Database Continuous Query Notifications, allowing JavaScript methods to be called when database changes are committed. This is a cool feature useful when applications want to be notified that some data in the database has been changed by anyone.

    I recently posted a demo showing CQN and Socket.IO keeping a notification area of a web page updated. Check it out.

    The new node-oracledb connection.subscribe() method is used to register a Node.js callback method, and the SQL query that you want to monitor. It has two main modes: for object-level changes, and for query-level changes. These allow you to get notifications whenever an object changes, or when the result set from the registered query would be changed, respectively. There are also a bunch of configuration options for the quality-of-service and other behaviors.

    It's worth noting that CQN requires the database to establish a connection back to your node-oracledb machine. Commonly this means that your node-oracledb machine needs a fixed IP address, but it all depends on your network setup.

    Oracle Database CQN was designed for infrequently modified tables, so make sure you test your system scalability.

  • Support for heterogeneous connection pooling and for proxy support in connection pools. This allows each connection in the pool to use different database credentials. A minimal sketch follows this list.

    Some users migrating to node-oracledb had schema architectures that made use of this connection style for data encapsulation and auditing. Note that making use of the existing clientId feature may be a better fit for new code, or code that does mid-tier authentication.

  • A Pull Request from Danilo Silva landed, making it possible for Windows users to build binaries for self-hosting. Thanks Danilo! Previously this was only possible on Linux and macOS.

  • Support for 'fetchAsString' and 'fetchInfo' to allow fetching RAW columns as hex-encoded strings.
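Here is the heterogeneous pooling sketch mentioned above, assuming node-oracledb 2.3's homogeneous: false pool option and per-getConnection() credentials; the user names, passwords, and connect string are placeholders:

const oracledb = require('oracledb');

async function run() {
  // A heterogeneous pool: credentials are supplied when a connection is
  // requested, not when the pool is created
  const pool = await oracledb.createPool({
    connectString: 'localhost/orclpdb1',  // placeholder
    homogeneous: false
  });

  // Each getConnection() call can use different database credentials
  const conn1 = await pool.getConnection({ user: 'hr', password: 'hrpassword' });
  const conn2 = await pool.getConnection({ user: 'scott', password: 'tiger' });

  // ... use the connections ...

  await conn1.close();
  await conn2.close();
  await pool.close();
}

run();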

See the CHANGELOG for the bug fixes and other changes.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

Issues and questions about node-oracledb can be posted on GitHub.

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

ODPI-C 2.4 has been released

Wed, 2018-06-06 16:44

Release 2.4 of Oracle Database Programming Interface for C (ODPI-C) is now available on GitHub.

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

Top features: Better database notification support. New pool timeout support.

 

I'll keep this brief. See the Release Notes for all changes.

  • Support for Oracle Continuous Query Notification and Advanced Queuing notifications was improved. Notably, replacement subscribe and unsubscribe methods were introduced to make usage more flexible. Support for handling AQ notifications was added, so now you can be notified when there is a message to dequeue. And settings for the listening IP address, for notification grouping, and to let you check the registration status are now available.

  • Some additional timeout options for connection pools were exposed.

  • Some build improvements were made: the SONAME is set in the shared library on *ix platforms. There is also a new Makefile 'install' target that installs using a standard *ix footprint.

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Installation Instructions: oracle.github.io/odpi/doc/installation.html

Report issues and discuss: https://github.com/oracle/odpi/issues

Demo: Oracle Database Continuous Query Notification in Node.js

Sat, 2018-06-02 08:32

Native Oracle Database Continuous Query Notification (CQN) code has landed in the node-oracledb master branch on GitHub. If you want to play with it, but don't want to wait for the next binary node-oracledb release, you can compile node-oracledb yourself and play with this demo.

 

 

Some of you may already be using CQN via its PL/SQL APIs. The new, native support in node-oracledb makes it all so much nicer. Check out the development documentation for connection.subscribe() and the 'user manual'. There are a couple of examples cqn1.js and cqn2.js available, too.

CQN allows JavaScript methods to be called when database changes are committed by any transaction. You enable it in your node-oracledb app by registering a SQL query. CQN has two main modes: object-level and query-level. The former sends notifications (i.e. calls your nominated JavaScript method) when changes are made to database objects used in your registered query. The query-level mode only sends notifications when database changes are made that would impact the result set of the query, e.g. the WHERE clause is respected.

If you're not using CQN, then you might wonder when you would. For infrequently updated tables you can get CQN to generate notifications on any data or table change. I can see how query-level mode might be useful for proactive auditing to send alerts when an unexpected, but valid, value is inserted or deleted from a table. For tables with medium levels of updates, CQN allows grouping of notifications by time, which is a way of reducing load by preventing too many notifications being generated in too short a time span. But, as my colleague Dan McGhan points out, if you know the table is subject to a lot of change, then your apps will be better off simply polling the table and avoiding any CQN overhead. Note that CQN was designed to be used for relatively infrequently updated tables.

DEMO APP

I've thrown together a little app that uses CQN and Socket.IO to refresh a message notification area on a web page. It's really just a simple smush of the Socket.IO intro example and the node-oracledb CQN examples.

There is a link to all the code in the next section of this post; I'll just show snippets inline here. I'm sure Dan will update his polished 'Real-time Data' example soon, but until then here is my hack code. It uses Node.js 8's async/await style - you can rewrite it if you have an older Node.js version.

One thing about CQN is that the node-oracledb computer must be resolvable by the Database computer; typically this means having a fixed IP address which may be an issue with laptops and DHCP. Luckily plenty of other cases work too. For example, I replaced my Docker web service app with a CQN example and didn't need to do anything with ports or identifying IP addresses. I'll leave you to decide how to run it in your environment. There are CQN options to set the IP address and port to listen on, which may be handy.

The demo premise is a web page with a message notification area that always shows the five most recent messages in a database table. The messages are being inserted into that table by other apps (I'll just use SQL*Plus to do these inserts) and the web page needs to be updated with them only when there is a change. I'm just using dummy data and random strings:

To see how it fits together, look at this no-expense-spared character graphic showing the four components: SQL*Plus, the database, the browser and the Node.js app:

SQL*PLUS:
  insert into msgtable
  commit
      |
      v
DATABASE:
  msgtable
      |
      v   (CQN notification)
NODE.JS APP:
  URL '/' serves index.html
  CQN: subscribe to msgtable with callback myCallback
  myCallback:
    query msgtable
    send rows to browser to update the DOM
      |
      v
BROWSER:
  | 5 Message |
  | 4 Message |
  | 3 Message |
  | 2 Message |
  | 1 Message |

The app (bottom right) serves the index page to the browser. It connects to the DB and uses CQN to register interest in msgtable. Any data change in the table from SQL*Plus (top left) triggers a CQN notification from the database to the application, and the callback is invoked. This callback queries the table and uses Socket.IO to send the latest records to the browser, which updates the index.html DOM.

The first thing is to get your DBA (i.e. log in as the SYSTEM user) to give you permission to get notifications:

GRANT CHANGE NOTIFICATION TO cj;

We then need a table that our app will get notifications about, and then query to get the latest messages:

CREATE TABLE cj.msgtable (
  k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1),
  message VARCHAR(100)
);

The column K is an Oracle Database 12c identity column that will automatically get a unique number inserted whenever a new message is inserted. In older database versions you would create a sequence and trigger to do the same.

The little SQL script I use to insert data (and trigger notifications) is:

INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10));
COMMIT;

The Node.js app code is more interesting, but not complex. Here is the code that registers the query:

conn = await oracledb.getConnection();

await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM msgtable"
});
console.log("CQN subscription created");

Although CQN has various options to control its behavior, here I keep it simple - I just want to get notifications when any data change to msgtable is committed.
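For comparison, a query-level registration is only a small change. This is a sketch only, assuming node-oracledb's SUBSCR_QOS_QUERY constant and the binds option of subscribe(); the WHERE clause is just an example:

await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM msgtable WHERE k > :bv",
  binds: [100],
  qos: oracledb.SUBSCR_QOS_QUERY  // notify only when the query's result set would change
});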

When the database sends a notification, the method 'myCallback' will get a message, the contents of which will vary depending on the subscription options. Since I know the callback is invoked when any table data has changed, I ignore the message contents and go ahead and query the table. The rows are then stringified and, by the magic of Socket.IO, sent to the web page:

async function myCallback(message) {
  let rows = await getData();                // query the msgtable
  io.emit('message', JSON.stringify(rows));  // update the web page
}

The helper function to query the table is obvious:

async function getData() {
  let sql = `SELECT k, message
             FROM msgtable
             ORDER BY k DESC
             FETCH NEXT :rowcount ROWS ONLY`;
  let binds = [5];  // get 5 most recent messages
  let options = { outFormat: oracledb.OBJECT };
  let conn = await oracledb.getConnection();
  let result = await conn.execute(sql, binds, options);
  await conn.close();
  return result.rows;
}

At the front end, the HTML for the web page contains a 'messages' element that is populated by JQuery code when a message is received by Socket.IO:

<ul id="messages"></ul> <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.1.1/socket.io.js"></script> <script src="https://code.jquery.com/jquery-3.3.1.js"></script> <script> $(function () { var socket = io(); socket.on('message', function(msg){ $('#messages').empty(); $.each(JSON.parse(msg), function(idx, obj) { $('#messages').append($('<li>').text(obj.K + ' ' + obj.MESSAGE)); }); }); }); </script>

You can see that the JSON string received from the app server is parsed and the K and MESSAGE fields of each row object (corresponding to the table columns of the same names) are inserted into the DOM in an unordered list.

That's it.

DEMO IN ACTION

To see it in action, extract the code and install the dependencies:

cjones@mdt:~/n/cqn-sockets$ npm install
npm WARN CQN-Socket-Demo@0.0.1 No repository field.
npm WARN CQN-Socket-Demo@0.0.1 No license field.

added 86 packages in 2.065s

I cheated a bit there and didn't show node-oracledb compiling. Once a production release of node-oracledb is made, you should edit the package.json dependency to use its pre-built binaries. Until then, node-oracledb code will be downloaded and compiled - check the instructions for compiling.

Edit server.js and set your database credentials - or set the referenced environment variables:

let dbConfig = {
  user: process.env.NODE_ORACLEDB_USER,
  password: process.env.NODE_ORACLEDB_PASSWORD,
  connectString: process.env.NODE_ORACLEDB_CONNECTIONSTRING,
  events: true  // CQN needs events mode
}

Then start the app server:

cjones@mdt:~/n/cqn-sockets$ npm start

> CQN-Socket-Demo@0.0.1 start /home/cjones/n/cqn-sockets
> node server.js

CQN subscription created
Listening on http://localhost:3000

Then load http://localhost:3000/ in a browser. Initially the message pane is blank - I left bootstrapping it as an exercise for the reader.
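One possible approach to that exercise, sketched here under the assumption that the Socket.IO server object io and the getData() helper above are in scope, is to push the current rows to each browser when it connects:

io.on('connection', async function (socket) {
  // Send the current messages to a newly connected browser
  let rows = await getData();
  socket.emit('message', JSON.stringify(rows));
});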

Start SQL*Plus in a terminal window and create a message:

SQL> INSERT INTO msgtable (message) VALUES (DBMS_RANDOM.STRING('A', 10));
SQL> COMMIT;

Every time data is committed to msgtable, the message list on the web page is automatically updated:

If you don't see messages, review Troubleshooting CQN Registrations. The common problems will be network related: the node-oracledb machine must be resolvable, the port must be open etc.

Try it out and let us know how you go. Remember you are using development code that just landed, so there may be a few rough edges.

Python and cx_Oracle RPMs are available from yum.oracle.com

Tue, 2018-05-29 21:04


This is worth cross posting: Getting Started with Python Development on Oracle Linux

Our Oracle Linux group has made Python and cx_Oracle RPMs available for a while. They recently launched a new landing page with nice, clear instructions on how to install various versions of Python, and how to install the cx_Oracle interface for Oracle Database. Check the link above.
