Feed aggregator

nls_date_format difference between v$parameter and database_properties

Tom Kyte - Thu, 2017-12-14 04:06
When I query v$parameter, nls_date_format is YYYY-MM-DD. When I query database_properties, nls_date_format is DD-MON-RR. Why are they different? Could this cause problems?
Categories: DBA Blogs

Latest Updates + FREE Training This Week

Online Apps DBA - Thu, 2017-12-14 03:38

In this week's update, you will find: 1. Oracle GoldenGate 12c Administration Training Review   1.1 GoldenGate Day 1 Architecture Overview & Installation: Lessons Learned & Key Takeaways   1.2 GoldenGate Day 2 Processes Configuration & Replication Setup: Lessons Learned & Key Takeaways 2. Oracle Apps DBA – Troubleshoot/Debug Long Running Concurrent Request in Oracle EBS (R12/11i) 3. Oracle SOA Suite […]

The post Latest Updates + FREE Training This Week appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Node-oracledb v2 Query Methods and Fetch Tuning

Christopher Jones - Thu, 2017-12-14 01:09

For our Node.js node-oracledb v2 add-on for Oracle Database we revamped its lower data access layer and made it into a standalone project, ODPI-C, which is being reused by several other language APIs. ODPI-C gave us a reason and an opportunity to simplify some of the internal query-handling code in node-oracledb.

To recap, node-oracledb has four ways to execute queries against an Oracle Database. These are the same in version 1 and version 2:

  • Direct Fetches - these are non-ResultSet, non-queryStream() fetches. All rows are returned in one big array, limited to maxRows (v2 allows an unlimited array size).

  • ResultSet getRow() - return one row on each call until all rows are returned.

  • ResultSet getRows(numRows) - return batches of rows in each call until all rows are returned.

  • queryStream() - stream rows until all rows are returned.

The changes in node-oracledb v2 are:

  • Enhanced direct fetches to allow an unlimited number of rows to be fetched, and made this the default. This occurs when maxRows = 0.

  • Replaced prefetchRows (previously used for internal fetch buffering and tuning) with a new property fetchArraySize; the default size is 100.

    fetchArraySize affects direct fetches, ResultSet getRow() and queryStream().

  • getRows(numRows,...) internal fetch buffering is now only tuned by the numRows value. Previously prefetchRows could also affect internal buffering.

  • queryStream() now uses fetchArraySize for internal buffer sizing.

  • Implemented getRow() in JavaScript for better performance, using fetchArraySize to tune internal buffer sizes.

The change in v2 that I want to discuss is how a 'direct fetch' does its internal buffering.

To make fetching an unlimited number of rows feasible for direct fetches, data is now internally fetched from Oracle Database in batches of size fetchArraySize and then concatenated to form one big array of results that is returned in the callback. This lets you use fetchArraySize to tune fetch performance. In node-oracledb v1, one big array of size maxRows was allocated before the data was fetched from the database. (With node-oracledb v2 you can have the same behavior, if you really want it, by setting fetchArraySize = maxRows, where maxRows > 0).
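For example, fetchArraySize can simply be passed in the execute() options of a direct fetch. The minimal sketch below is modelled on the scripts at the end of this post and assumes the same dbconfig.js file; it fetches internally in batches of 1000 rows while still returning a single combined array.

// Sketch: tuning a direct fetch with fetchArraySize
var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');   // same connection settings as the scripts below

oracledb.getConnection(dbConfig, function(err, connection) {
  if (err) throw err;
  connection.execute(
    'select * from all_objects where rownum <= 50000',
    [],                        // no bind variables
    { fetchArraySize: 1000 },  // internal fetches from the database happen in batches of 1000 rows
    // setting { fetchArraySize: 50000, maxRows: 50000 } instead would mimic v1's single big allocation
    function(err, result) {
      if (err) throw err;
      console.log('Fetched ' + result.rows.length + ' rows');
      connection.close(function(err) { if (err) console.error(err.message); });
    });
});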

Let's look at two different scenarios that fetch 50000 rows with fetchArraySize of 100 (the default) and 1000. The code is at the end of the post.

Direct fetch:  rows: 50000, batch size: 100, seconds: 4.054
Direct fetch:  rows: 50000, batch size: 1000, seconds: 1.643

You can see in this case (with a local database) that increasing fetchArraySize improved performance. There may be various factors for this, but the common one is reduced network cost and reduced load on the database because there were fewer 'round trips' to get batches of records. Each query and environment will be different, and each will require its own tuning.

The benefits of using fetchArraySize for direct fetches are:

  • Performance of batching and network transfer of data can be tuned.

  • Memory can incrementally grow when the number of query rows is unknown, or varies from execution to execution. A single large chunk of memory (based on maxRows in v1) doesn't need to be pre-allocated to handle the 'worst case' of a large number of rows.

There are two drawbacks with direct fetches:

  • One big array of results is needed. This is the same in v1 and v2.

  • The concatenation of batches of records can use more memory than the final array requires, and can cause fragmentation.

Let's look at timings for all query methods. This is one run; there was expected variability each time I ran the scripts. The 'batch size' number is numRows for getRows(numRows) calls, or fetchArraySize for the other fetch methods.

Direct fetch:        rows: 50000, batch size: 100, seconds: 4.054
ResultSet getRow():  rows: 50000, batch size: 100, seconds: 1.625
ResultSet getRows(): rows: 50000, batch size: 100, seconds: 1.586
queryStream():       rows: 50000, batch size: 100, seconds: 1.691
Direct fetch:        rows: 50000, batch size: 1000, seconds: 1.643
ResultSet getRow():  rows: 50000, batch size: 1000, seconds: 1.471
ResultSet getRows(): rows: 50000, batch size: 1000, seconds: 1.327
queryStream():       rows: 50000, batch size: 1000, seconds: 1.415

The ResultSet and queryStream() methods don't have to store all rows in memory at once, so there is less memory management involved. The outlier is obviously the first result: the overhead of concatenating many small chunks of memory into one big array is significant. We have some ideas on what we can do inside node-oracledb to improve this a bit, but that is a future project to be investigated, and it can't solve the first drawback that all rows eventually have to be held in memory at the same time.

The conclusion is to use ResultSets or streaming for large numbers of rows. This is the same recommendation we gave for v1.

For small numbers of rows, the various query methods perform pretty much the same. The timings are so short that you can treat the differences in the one run shown below as noise. Here each query only returned 1 row:

Direct fetch:        rows: 1, batch size: 100, seconds: 0.011
ResultSet getRow():  rows: 1, batch size: 100, seconds: 0.012
ResultSet getRows(): rows: 1, batch size: 100, seconds: 0.013
queryStream():       rows: 1, batch size: 100, seconds: 0.013
Direct fetch:        rows: 1, batch size: 1, seconds: 0.012
ResultSet getRow():  rows: 1, batch size: 1, seconds: 0.012
ResultSet getRows(): rows: 1, batch size: 1, seconds: 0.013
queryStream():       rows: 1, batch size: 1, seconds: 0.013

Although the timings are small, I suggest using a small fetchArraySize or numRows if you are querying a small handful of rows, particularly if the number of rows is known (such as 1). This reduces the amount of memory that needs to be allocated throughout node-oracledb, the Oracle client libraries, and also in the database.
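As a rough illustration (again assuming the dbconfig.js used by the scripts below), a query known to return a single row can request minimal buffers:

// Sketch: a single-row query with small internal buffers
var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');   // placeholder connection settings

oracledb.getConnection(dbConfig, function(err, connection) {
  if (err) throw err;
  connection.execute(
    'select sysdate from dual',
    [],
    { maxRows: 1, fetchArraySize: 1 },  // only one row is expected, so keep the buffers minimal
    function(err, result) {
      if (err) throw err;
      console.log(result.rows[0]);
      connection.close(function(err) { if (err) console.error(err.message); });
    });
});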

References

Node-oracledb documentation is here.

If you are currently using node-oracledb v1, you may be interested in the documentation on Migrating from node-oracledb 1.13 to node-oracledb 2.0.

Code

Here are the rudimentary test scripts I used. The ResultSet code originated in the v1 examples from https://jsao.io/2015/07/an-overview-of-result-sets-in-the-nodejs-driver/

The config.js file is at the end. The dbconfig.js file is the same as in the examples.

The timings include statement execution in the DB, though this is not controlled by node-oracledb. With direct fetches there isn't a way in JavaScript to distinguish the query execute cost from the data fetch costs that fetchArraySize and numRows control.
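With a ResultSet, by contrast, the execute and fetch phases can be timed separately, at least approximately. A minimal sketch (assuming the same dbconfig.js) is:

// Sketch: roughly separating execute time from fetch time with a ResultSet
var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');

oracledb.getConnection(dbConfig, function(err, connection) {
  if (err) throw err;
  var t0 = Date.now();
  connection.execute(
    'select * from all_objects where rownum <= 50000',
    [],
    { resultSet: true },
    function(err, results) {
      if (err) throw err;
      console.log('Execute: ' + (Date.now() - t0) / 1000 + ' seconds');
      var t1 = Date.now();
      function fetchLoop() {
        results.resultSet.getRows(1000, function(err, rows) {
          if (err) throw err;
          if (rows.length) { fetchLoop(); return; }   // keep fetching until the result set is exhausted
          console.log('Fetch: ' + (Date.now() - t1) / 1000 + ' seconds');
          results.resultSet.close(function(err) {
            if (err) console.error(err.message);
            connection.close(function(err) { if (err) console.error(err.message); });
          });
        });
      }
      fetchLoop();
    });
});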

Direct Fetch

// direct fetch

var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');
var config = require('./config.js');

oracledb.getConnection(
  dbConfig,
  function(err, connection) {
    if (err) throw err;

    var rowsProcessed = 0;
    var startTime = Date.now();

    connection.execute(
      'select * from all_objects where rownum <= :mr',
      [ config.maxRows ],
      { fetchArraySize: config.batchSize },
      function(err, results) {
        if (err) throw err;

        rowsProcessed = results.rows.length;

        // do work on the rows here

        var t = ((Date.now() - startTime)/1000);
        console.log('Direct fetch: rows: ' + rowsProcessed +
                    ', batch size: ' + config.batchSize + ', seconds: ' + t);

        connection.release(function(err) {
          if (err) console.error(err.message);
        });
      });
  });

ResultSet getRow()

// ResultSet getRow()

var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');
var config = require('./config.js');

oracledb.getConnection(
  dbConfig,
  function(err, connection) {
    if (err) throw err;

    var rowsProcessed = 0;
    var startTime = Date.now();

    connection.execute(
      'select * from all_objects where rownum <= :mr',
      [ config.maxRows ],
      { resultSet: true, fetchArraySize: config.batchSize },
      function(err, results) {
        if (err) throw err;

        function processResultSet() {
          results.resultSet.getRow(function(err, row) {
            if (err) throw err;
            if (row) {
              rowsProcessed++;
              // do work on the row here
              processResultSet();  // try to get another row from the result set
              return;              // exit recursive function prior to closing result set
            }
            var t = ((Date.now() - startTime)/1000);
            console.log('ResultSet getRow(): rows: ' + rowsProcessed +
                        ', batch size: ' + config.batchSize + ', seconds: ' + t);
            results.resultSet.close(function(err) {
              if (err) console.error(err.message);
              connection.release(function(err) {
                if (err) console.error(err.message);
              });
            });
          });
        }

        processResultSet();
      }
    );
  }
);

ResultSet getRows()

// ResultSet getRows()

var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');
var config = require('./config.js');

oracledb.getConnection(
  dbConfig,
  function(err, connection) {
    if (err) throw err;

    var rowsProcessed = 0;
    var startTime = Date.now();

    oracledb.fetchArraySize = 1;  // has no effect on getRows(); its buffering is tuned solely by numRows

    connection.execute(
      'select * from all_objects where rownum <= :mr',
      [ config.maxRows ],
      { resultSet: true },
      function(err, results) {
        if (err) throw err;

        function processResultSet() {
          results.resultSet.getRows(config.batchSize, function(err, rows) {
            if (err) throw err;
            if (rows.length) {
              rows.forEach(function(row) {
                rowsProcessed++;
                // do work on the row here
              });
              processResultSet();  // try to get more rows from the result set
              return;              // exit recursive function prior to closing result set
            }
            var t = ((Date.now() - startTime)/1000);
            console.log('ResultSet getRows(): rows: ' + rowsProcessed +
                        ', batch size: ' + config.batchSize + ', seconds: ' + t);
            results.resultSet.close(function(err) {
              if (err) console.error(err.message);
              connection.release(function(err) {
                if (err) console.error(err.message);
              });
            });
          });
        }

        processResultSet();
      });
  });

queryStream()

// queryStream()

var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');
var config = require('./config.js');

oracledb.getConnection(
  dbConfig,
  function(err, connection) {
    if (err) throw err;

    var rowsProcessed = 0;
    var startTime = Date.now();

    var stream = connection.queryStream(
      'select * from all_objects where rownum <= :mr',
      [ config.maxRows ],
      { fetchArraySize: config.batchSize }
    );

    stream.on('data', function (data) {
      // do work on the row here
      rowsProcessed++;
    });

    stream.on('end', function () {
      var t = ((Date.now() - startTime)/1000);
      console.log('queryStream(): rows: ' + rowsProcessed +
                  ', batch size: ' + config.batchSize + ', seconds: ' + t);
      connection.close(function(err) {
        if (err) console.error(err.message);
      });
    });
  });

The Configuration File

// config.js

var maxRows;    // number of rows to query
var batchSize;  // batch size for fetching rows

maxRows = 50000;
batchSize = 1000;

exports.maxRows = maxRows;
exports.batchSize = batchSize;

node-oracledb 2.0 with pre-built binaries is on npm

Christopher Jones - Thu, 2017-12-14 00:54

Release announcement: Node-oracledb 2.0.15, the Node.js add-on for Oracle Database, is now on npm for general use.

Top features: Pre-built binaries, Query fetch improvements

It's time to shake things up. Node-oracledb version 1 has been stable for a while. Our tight, hard working team is now proud to deliver node-oracledb 2 to you. With improvements throughout the code and documentation, this release is looking great. There are now over 3000 functional tests, as well as solid stress tests we run in various environments under Oracle's internal testing infrastructure.

Review the CHANGELOG for all changes. For information on migrating see Migrating from node-oracledb 1.13 to node-oracledb 2.0.

node-oracledb v2 highlights
  • node-oracledb 2.0 is the first release to have pre-built binaries. These are provided for convenience and will make life a lot easier, particularly for Windows users.

    Binaries for Node 4, 6, 8 and 9 are available for Windows 64-bit, macOS 64-bit, and Linux 64-bit (built on Oracle Linux 6).

    Simply add oracledb to your package.json dependencies or manually install with:

    npm install oracledb

    (Yes, Oracle Client libraries are still required separately - these do all the heavy lifting.)

    We've got great error messages when a binary isn't available, and have improved the messages when require('oracledb') fails. However, Linux users with older glibc libraries may get runtime errors; in this case you should continue building node-oracledb from source, see below.

    There is support for hosting binary packages on your own internal web server, which will be great for users with a large number of deployments. See package/README.

    This is the first release to use the ODPI-C abstraction layer, which is also used by Python's cx_Oracle 6.x API, as well as by third-party APIs for other languages. Using ODPI-C is the main change that allowed node-oracledb 2.0 binaries to be distributed. As another consequence of ODPI-C, any node-oracledb 2 binary will run with any of the Oracle client 11.2, 12.1 or 12.2 libraries without needing recompilation. This improves portability when node-oracledb builds are copied between machines. Since the available Oracle functionality will vary with different Oracle Client (and Oracle Database!) versions, it's important to test your applications using your intended deployment environment.

  • The driver can still be built from source code if you need, or want, to do so. Compiling is now simpler than in version 1, since you no longer need Oracle header files, and no longer need OCI_*_DIR environment variables.

    To build from source you need to pull the code from a GitHub branch or tag - generally the most recent release tag is what you want. Make sure you have Python 2.7, the 'git' utility, and a compiler, and add oracle/node-oracledb.git#v2.0.15 to your package.json dependencies. Or manually run the install:

    npm install oracle/node-oracledb.git#v2.0.15

    Users without 'git', or with older versions of Node that won't pull the ODPI submodule code, can use the source package:

    npm install https://github.com/oracle/node-oracledb/releases/download/v2.0.15/oracledb-src-2.0.15.tgz

    I've noticed GitHub can be a little slow to download the source before compilation can begin, so bear with it.

  • Improved query handling:

    • Enhanced direct fetches to allow an unlimited number of rows to be fetched, and made unlimited the default for this method. The existing ResultSet and streaming methods are still recommended for large numbers of rows.

    • Since ODPI-C internally uses 'array fetches' instead of 'prefetching' (both are underlying methods for buffering and tuning, differing only in where the buffering takes place, and both are transparent to applications), we've replaced prefetchRows with a new, almost equivalent property fetchArraySize.

    • We've moved the buffering of rows for getRow() into JavaScript for better performance. It no longer needs to call down into lower layers as often.

  • We tightened up some resource handling to make sure applications don't leak resources. If you inadvertently try to close a connection while a LOB or ResultSet is still open, you will see the new error DPI-1054; see the sketch after this list.

  • The node-oracledb size restrictions on LOB fetchAsString and fetchAsBuffer queries, and also on LOB binds, have been removed. In node-oracledb 1 these were particularly low when Oracle 11gR2 client libraries were used, so this should be welcome for people who have not updated the Oracle client. Node.js and V8 will still limit sizes, so continue to use the Stream interface for large LOBs.

  • Added support for ROWID and UROWID. Data is fetched as a String.

  • Added support for fetching columns types LONG (as String) and LONG RAW (as Buffer).

  • Added support for TIMESTAMP WITH TIME ZONE date type. These are mapped to a Date object in node-oracledb using LOCAL TIME ZONE. The TIME ZONE component is not available in the Date object.

  • Added query support for NCHAR, NVARCHAR2 and NCLOB columns. Binding for DML may not insert data correctly, depending on the database character set and the database national character set.
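As mentioned in the resource-handling item above, the way to avoid DPI-1054 is to close dependent resources such as ResultSets (and LOBs) before closing the connection. A minimal sketch of that ordering, with a placeholder query and the usual dbconfig.js assumed, is:

// Sketch: close the ResultSet before the connection to avoid DPI-1054
var oracledb = require('oracledb');
var dbConfig = require('./dbconfig.js');   // placeholder connection settings

oracledb.getConnection(dbConfig, function(err, connection) {
  if (err) throw err;
  connection.execute(
    'select owner, object_name from all_objects where rownum <= 10',
    [],
    { resultSet: true },
    function(err, results) {
      if (err) throw err;
      results.resultSet.getRows(10, function(err, rows) {
        if (err) throw err;
        console.log(rows.length + ' rows fetched');
        results.resultSet.close(function(err) {   // close the ResultSet first...
          if (err) console.error(err.message);
          connection.close(function(err) {        // ...then the connection
            if (err) console.error(err.message);
          });
        });
      });
    });
});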

Plans for Version 1

Our stated plan was to cease formal support for version 1 when Node 4 LTS maintenance ended in April 2018. We're pleased 1.13.1 has been stable for some time, and we are not anticipating needing any further node-oracledb 1 releases, unless exceptional circumstances occur.

Plans for Version 2

We are now testing with Node 4, 6, 8 and 9. This list will, of course, change as new Node versions are released. The pre-built binaries will also change and availability is not guaranteed.

ODPI-C forms a solid base to extend node-oracledb. Users of Python cx_Oracle 6, which is also based on ODPI-C, are appreciating all the advanced Oracle features that are available. Many of these features have also been requested by node-oracledb users. As with any open source project, there are no hard and fast guarantees for node-oracledb, but you can see the general direction we'd like to head in. Pull Requests are welcome.

One little thing you might be unaware of is that along the way we have been testing (and helping create) the next major release of Oracle Database, so sometimes we've had to take detours from direct node-oracledb work in order to move the whole of Oracle along. Whether we work on the "client" or the "server", we look forward to bringing you great things in future releases.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

For information on migrating see Migrating from node-oracledb 1.13 to node-oracledb 2.0.

Issues and questions about node-oracledb can be posted on GitHub.

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

Using sed to backup file and remove lines

Michael Dinh - Wed, 2017-12-13 19:13
[oracle@racnode-dc1-2 ~]$ cd /u01/app/oracle/12.1.0.2/db1/rdbms/log/

--- DDL will fail since datafile is hard coded!
[oracle@racnode-dc1-2 log]$ cat tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs1.260.962253853' RESIZE 152043520;
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 213909504
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 26214400 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs2.265.962254263' RESIZE 235929600;
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;

--- Remove ALTER and RESIZE from sql file.
--- Most likely the incorrect way to do this since TBS may be undersized.

12.2 Datapump Improvements actually does this the right way.

[oracle@racnode-dc1-2 log]$ sed -i.bak '/ALTER DATABASE DATAFILE\|RESIZE/ d' tablespaces_ddl.sql

[oracle@racnode-dc1-2 log]$ ls -l tablespace*
-rw-r--r-- 1 oracle dba 1214 Dec 14 02:03 tablespaces_ddl.sql
-rw-r--r-- 1 oracle dba 1488 Dec 14 01:45 tablespaces_ddl.sql.bak

[oracle@racnode-dc1-2 log]$ diff tablespaces_ddl.sql tablespaces_ddl.sql.bak
14a15,16
>    ALTER DATABASE DATAFILE
>   '+DATA/HAWK/DATAFILE/undotbs1.260.962253853' RESIZE 152043520;
24a27,28
>    ALTER DATABASE DATAFILE
>   '+DATA/HAWK/DATAFILE/undotbs2.265.962254263' RESIZE 235929600;
32a37,38
>    ALTER DATABASE DATAFILE
>   '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;

[oracle@racnode-dc1-2 log]$ cat tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 213909504
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 26214400 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

[oracle@racnode-dc1-2 log]$

12.2 Datapump Improvements

Michael Dinh - Wed, 2017-12-13 19:00

The datafile for tablespace USERS was resized to 5242880.

12.2.0.1.0
The 5242880 size is part of the CREATE TABLESPACE statement.

CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 5242880
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

12.1.0.2.0
The 5242880 size is applied by a separate ALTER DATABASE DATAFILE ... RESIZE statement.

Why is this important?
Manual intervention is no longer required to get the correct datafile sizes.

 
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;

Test Case:

01:01:05 SYS @ owl:>select bytes,tablespace_name,autoextensible,maxbytes from dba_data_files where tablespace_name='USERS';

     BYTES TABLESPACE_NAME                AUT   MAXBYTES
---------- ------------------------------ --- ----------
   5242880 USERS                          YES 3.4360E+10

01:01:57 SYS @ owl:>alter tablespace users add datafile size 4m;

Tablespace altered.

01:02:43 SYS @ owl:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         7    5242880
         5    4194304

01:04:09 SYS @ owl:>alter database datafile 5 resize 5242880;

Database altered.

01:05:08 SYS @ owl:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         7    5242880
         5    5242880

01:05:15 SYS @ owl:>

+++++++++++

[oracle@db-asm-1 ~]$ expdp parfile=expdp_tbs.par

Export: Release 12.2.0.1.0 - Production on Thu Dec 14 01:31:12 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Starting "SYS"."SYS_EXPORT_FULL_01":  /******** AS SYSDBA parfile=expdp_tbs.par
W-1 Startup took 1 seconds
W-1 Processing object type DATABASE_EXPORT/TABLESPACE
W-1      Completed 3 TABLESPACE objects in 0 seconds
W-1 Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  /u01/app/oracle/admin/owl/dpdump/tbs.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Thu Dec 14 01:31:19 2017 elapsed 0 00:00:06

+++++++++++

[oracle@db-asm-1 ~]$ impdp parfile=impdp_tbs.par

Import: Release 12.2.0.1.0 - Production on Thu Dec 14 01:32:51 2017

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
W-1 Startup took 0 seconds
W-1 Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_SQL_FILE_FULL_01":  /******** AS SYSDBA parfile=impdp_tbs.par
W-1 Processing object type DATABASE_EXPORT/TABLESPACE
W-1      Completed 3 TABLESPACE objects in 0 seconds
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Thu Dec 14 01:32:54 2017 elapsed 0 00:00:02

+++++++++++

[oracle@db-asm-1 ~]$ cat /u01/app/oracle/admin/owl/dpdump/tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 73400320
  AUTOEXTEND ON NEXT 73400320 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;


CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 33554432
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 5242880
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;


[oracle@db-asm-1 ~]$

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

01:40:59 SYS @ hawk2:>select bytes,tablespace_name,autoextensible,maxbytes from dba_data_files where tablespace_name='USERS';

     BYTES TABLESPACE_NAME                AUT   MAXBYTES
---------- ------------------------------ --- ----------
   5242880 USERS                          YES 3.4360E+10

01:41:17 SYS @ hawk2:>alter tablespace users add datafile size 4m;

Tablespace altered.

01:41:24 SYS @ hawk2:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         6    5242880
         2    4194304


01:41:34 SYS @ hawk2:>alter database datafile 2 resize 5242880;

Database altered.

01:41:56 SYS @ hawk2:>select file_id,bytes from dba_data_files where tablespace_name='USERS';

   FILE_ID      BYTES
---------- ----------
         6    5242880
         2    5242880

01:42:02 SYS @ hawk2:>

++++++++++

[oracle@racnode-dc1-2 ~]$ expdp parfile=expdp_tbs.par

Export: Release 12.1.0.2.0 - Production on Thu Dec 14 01:43:19 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

Starting "SYS"."SYS_EXPORT_FULL_01":  /******** AS SYSDBA parfile=expdp_tbs.par
Startup took 12 seconds
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 0 KB
Processing object type DATABASE_EXPORT/TABLESPACE
     Completed 4 TABLESPACE objects in 2 seconds
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  /u01/app/oracle/12.1.0.2/db1/rdbms/log/tbs.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Thu Dec 14 01:44:34 2017 elapsed 0 00:00:43

++++++++++

[oracle@racnode-dc1-2 ~]$ impdp parfile=impdp_tbs.par

Import: Release 12.1.0.2.0 - Production on Thu Dec 14 01:45:48 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
Startup took 1 seconds
Master table "SYS"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_SQL_FILE_FULL_01":  /******** AS SYSDBA parfile=impdp_tbs.par
Processing object type DATABASE_EXPORT/TABLESPACE
     Completed 4 TABLESPACE objects in 1 seconds
Job "SYS"."SYS_SQL_FILE_FULL_01" successfully completed at Thu Dec 14 01:45:57 2017 elapsed 0 00:00:05

[oracle@racnode-dc1-2 ~]$

++++++++++

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oracle/12.1.0.2/db1/rdbms/log/tablespaces_ddl.sql
-- CONNECT SYS
ALTER SESSION SET EVENTS '10150 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10904 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10407 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '10851 TRACE NAME CONTEXT FOREVER, LEVEL 1';
ALTER SESSION SET EVENTS '22830 TRACE NAME CONTEXT FOREVER, LEVEL 192 ';
-- new object type path: DATABASE_EXPORT/TABLESPACE
CREATE UNDO TABLESPACE "UNDOTBS1" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 5242880 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs1.260.962253853' RESIZE 152043520;
CREATE TEMPORARY TABLESPACE "TEMP" TEMPFILE
  SIZE 213909504
  AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;
CREATE UNDO TABLESPACE "UNDOTBS2" DATAFILE
  SIZE 26214400
  AUTOEXTEND ON NEXT 26214400 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/undotbs2.265.962254263' RESIZE 235929600;
  
CREATE TABLESPACE "USERS" DATAFILE
  SIZE 5242880
  AUTOEXTEND ON NEXT 1310720 MAXSIZE 32767M,
  SIZE 4194304
  LOGGING ONLINE PERMANENT BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT
 NOCOMPRESS  SEGMENT SPACE MANAGEMENT AUTO;

   ALTER DATABASE DATAFILE
  '+DATA/HAWK/DATAFILE/users.269.962674885' RESIZE 5242880;
[oracle@racnode-dc1-2 ~]$

Google dashboard for lazy businessman

Nilesh Jethwa - Wed, 2017-12-13 15:20

Once you start your own business or startup, the very first thing that comes to mind is "How many visitors did my website receive?"

Google Analytics provides tons of metrics and it becomes quite a chore to keep up with all the dashboard pages and filter options. As a small-business owner myself, I went through a phase where I ended up spending a significant amount of time checking out Google Analytics.

To save time and mental energy on a daily routine task, I asked "What are the most basic metrics I need to measure from Google Analytics?"

The answer pretty much came down to a need for a "one-page dashboard that displays the various metrics".

Read more at http://www.infocaptor.com/dashboard/what-are-the-bare-minimum-traffic-metrics-that-i-can-track-easily

Docker-CE: Setting up a tomcat in less than a minute and running your JSP...

Dietrich Schroff - Wed, 2017-12-13 15:14
Last time I wrote about the processes and files of a docker container hosting the docker example myapp.py.
The next step was to run a tomcat with a small application inside.

This can be done with these commands:
  1. Get tomcat from the docker library:
    # docker pull tomcat
    Using default tag: latest
    latest: Pulling from library/tomcat
    3e17c6eae66c: Pull complete
    fdfb54153de7: Pull complete
    a4ca6e73242a: Pull complete
    5161d2a139e2: Pull complete
    7659b327f9ec: Pull complete
    ce47e69f11ad: Pull complete
    7d946df3a3d8: Pull complete
    a57cba73d797: Pull complete
    7e6f56cdb523: Pull complete
    06e4787b3ca5: Pull complete
    c760cb7e43cb: Pull complete
    ad6d0815df5c: Pull complete
    d7e1da09fc22: Pull complete
    Digest: sha256:a069d49c414bad0d98f5a4d7f9b7fdd318ccc451dc535084480c8aead68272d2
    Status: Downloaded newer image for tomcat:latest
  2. Test the tomcat:
    # docker run -p 4000:8080 tomcat
    20-Nov-2017 20:38:11.754 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version:        Apache Tomcat/8.5.23
    20-Nov-2017 20:38:11.762 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built:          Sep 28 2017 10:30:11 UTC
    20-Nov-2017 20:38:11.762 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number:         8.5.23.0
    ....
    ....
    org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["http-nio-8080"]
    20-Nov-2017 20:41:59.928 INFO [Thread-5] org.apache.coyote.AbstractProtocol.destroy Destroying ProtocolHandler ["ajp-nio-8009"]
     

This was easy.
Now create your JSP and run it:
  1. create a directory
    mkdir tomcatstatus
  2. create a jsp inside this directory
    vi tomcatstatus/index.jsp
    and insert the following content:
    <%@ page language="java" import="java.util.*" %>

    Host name: <%= java.net.InetAddress.getLocalHost().getHostName() %>
    Server Version: <%= application.getServerInfo() %>
    Servlet Version: <%= application.getMajorVersion() %>.<%= application.getMinorVersion() %>
    JSP Version: <%= JspFactory.getDefaultFactory().getEngineInfo().getSpecificationVersion() %>
  3. Run docker
    docker run -v /home/schroff/tomcatstatus:/usr/local/tomcat/webapps/status -p 4000:8080 tomcat
  4. Connect to port 4000:

Wow - I am really stunned how fast the tomcat was set up and the JSP was launched. No installation of Java (ok, this is only an apt install) and no setup procedure for Apache Tomcat (ok, this is just a tar -zxvf). But if I want to run more than one installation, Docker is faster than repeating the installation or copying files. Really cool!


(One thing I forgot: installation of Docker onto your server.)

Both talks accepted for Collaborate 18

Bobby Durrett's DBA Blog - Wed, 2017-12-13 10:35

IOUG accepted both my Toastmasters and Python talks for Collaborate 18. RMOUG also accepted them both, so I will be doing these two talks in both February and April. I am a little surprised because I have had talks rejected by IOUG in the past. There are a lot of great speakers competing for speaking slots. This is my first time for RMOUG so I did not know how hard it would be to get a talk accepted. I put both talks in for both conferences not knowing if either would be accepted, and both were accepted at both conferences! So, 2018 will be a busier year than normal for me in terms of speaking at conferences.

My last conference was two years ago at Oracle OpenWorld, where I spoke about Delphix, a tool that I use with the Oracle database. Next year I'm talking about two things that I feel passionate about. The talks are not about the Oracle database, but they are about things I have learned that have helped me in my Oracle database work. They are about how Toastmasters has helped me improve my speaking and leadership skills, and about why the Python programming language has become my favorite general-purpose scripting tool.

I am looking forward to giving the talks. If you are able to attend one of the conferences maybe you could check out one of my talks. Fun.

Bobby

Categories: DBA Blogs

RMAN Parallelism question

Tom Kyte - Wed, 2017-12-13 09:26
Tom, I am reading the Oracle Database Backup and Recovery Advanced User's Guide, 10g Release 2 (10.2), Part Number B14191-02, http://download-east.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconfg003.htm#sthref521 The document says: 1) "As a...
Categories: DBA Blogs

partitioned index

Tom Kyte - Wed, 2017-12-13 09:26
Hi Tom, please explain the differences between global partitioned index and local partitioned index. Thanks!
Categories: DBA Blogs

Oracle MOOC: Developing Chatbots with Oracle Intelligent Bots

Oracle Intelligent Bots with Oracle Mobile Cloud Services gives you the ability to create an artificially intelligent bot that can converse with your users, to determine their intent and perform...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How to verify the parallel execution in a custom User-Defined Aggregate Function

Tom Kyte - Tue, 2017-12-12 15:06
I want to verify the parallel execution in my User-Defined Aggregate Function. So I put some DBMS_OUTPUT inside the code - but it seems not to work correctly... You can reproduce the behaviour by simply creating the example user-defined aggregate fu...
Categories: DBA Blogs

How to concatenate string having length greater than 4000 characters

Tom Kyte - Tue, 2017-12-12 15:06
I am using the xmlagg function to concatenate records with comma-separated values. But I am getting an error when the column count is more than 300. I am getting the below error: Error starting at line : 8 in command - select rtrim (xmlagg (xmlelement (e...
Categories: DBA Blogs

Shrink partition table

Tom Kyte - Tue, 2017-12-12 15:06
Hi, shrink table doesn't defragment partitioned tables. I executed: alter table test_table_1 shrink space, and checked the wasted space before and after with the query below, but the values are identical. select table_name,rou...
Categories: DBA Blogs

Imdp xml schema date format issue ORA-01858

Tom Kyte - Tue, 2017-12-12 15:06
Hi, I exported a schema from Oracle 11.2.0.1 and trying to import it in Oracle 12c. My Oracle schema contains xml schema and xmltype columns in a table. My xml fragment is <code><Tag0> <Tag1> <Tag2 Id="10202" date1="2017-11-15T13:36:34.00000...
Categories: DBA Blogs

Removal of Archive Files when using OS to backup offline DB

Tom Kyte - Tue, 2017-12-12 15:06
I have been thrown in at the deep end and given an Oracle DB to look after. I have no prior experience of Oracle so everything I am doing is new and a massive learning curve. The current DB data set is approx 400GB and we are working with the appl...
Categories: DBA Blogs

cx_Oracle 6.1 with Oracle Database Sharding Support is now Production on PyPI

Christopher Jones - Tue, 2017-12-12 14:45

cx_Oracle 6.1, the extremely popular Oracle Database interface for Python, is now Production on PyPI.


cx_Oracle is an open source package that covers the Python Database API specification with many additions to support Oracle advanced features.

In the words of the creator and maintainer, Anthony Tuininga: The cx_Oracle 6.1 release has a number of enhancements building upon the release of 6.0 made a few months ago. Topping the list is support for accessing sharded databases via new shardingkey and supershardingkey parameters for connections and session pools. Support for creating connections using the SYSBACKUP, SYSDG, SYSKM and SYSRAC roles was also added, as was support for identifying the id of the transaction which spawned a subscription message. For those on Windows, improved error messages were created for when the wrong architecture Oracle Client is in the PATH environment variable. Improvements were also made to the debugging infrastructure and a number of bugs were squashed. The test suite has also been expanded. See the full release notes for more information.

cx_Oracle References

Home page: https://oracle.github.io/python-cx_Oracle/index.html

Installation instructions: http://cx-oracle.readthedocs.io/en/latest/installation.html

Documentation: http://cx-oracle.readthedocs.io/en/latest/index.html

Release Notes: http://cx-oracle.readthedocs.io/en/latest/releasenotes.html

Source Code Repository: https://github.com/oracle/python-cx_Oracle

Does pg_upgrade in check mode raise a failure when the old cluster is running?

Yann Neuhaus - Tue, 2017-12-12 14:31

Today I had the pleasure of having Bruce Momjian in my session about PostgreSQL Upgrade Best Practices at the IT Tage 2017 in Frankfurt. While browsing through the various options you have for upgrading, there was one slide where I claimed that the old cluster needs to be down before you run pg_upgrade in check mode, as you will hit a (non-critical) failure message otherwise. Let's see if that really is the case or whether I did something wrong…

To start with, let's initialize a new 9.6.2 cluster:

postgres@pgbox:/home/postgres/ [PG962] initdb --version
initdb (PostgreSQL) 9.6.2 dbi services build
postgres@pgbox:/home/postgres/ [PG962] initdb -D /tmp/aaa
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     de_CH.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/aaa ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /tmp/aaa -l logfile start

Start that:

postgres@pgbox:/home/postgres/ [PG962] pg_ctl -D /tmp/aaa -l logfile start
postgres@pgbox:/home/postgres/ [PG962] psql -c "select version()" postgres
                                                           version                                                           
-----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6.2 dbi services build on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-11), 64-bit
(1 row)

Time: 0.861 ms

To be able to upgrade we'll need a new cluster, so:

postgres@pgbox:/home/postgres/ [PG10] initdb --version
initdb (PostgreSQL) 10.0 dbi services build
postgres@pgbox:/home/postgres/ [PG10] initdb -D /tmp/bbb
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.UTF-8
  CTYPE:    en_US.UTF-8
  MESSAGES: en_US.UTF-8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     de_CH.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /tmp/bbb ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /tmp/bbb -l logfile start

We’ll not start that one but will just run pg_upgrade in check mode from the new binaries:

postgres@pgbox:/home/postgres/ [PG10] pg_upgrade --version
pg_upgrade (PostgreSQL) 10.0 dbi services build
postgres@pgbox:/home/postgres/ [PG10] export PGDATAOLD=/tmp/aaa
postgres@pgbox:/home/postgres/ [PG10] export PGDATANEW=/tmp/bbb
postgres@pgbox:/home/postgres/ [PG10] export PGBINOLD=/u01/app/postgres/product/96/db_2/bin/
postgres@pgbox:/home/postgres/ [PG10] export PGBINNEW=/u01/app/postgres/product/10/db_0/bin/
postgres@pgbox:/home/postgres/ [PG10] pg_upgrade -c

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
...

… and here we go. From the log:

postgres@pgbox:/home/postgres/ [PG10] cat pg_upgrade_server.log

-----------------------------------------------------------------
  pg_upgrade run on Tue Dec 12 21:23:43 2017
-----------------------------------------------------------------

command: "/u01/app/postgres/product/96/db_2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/tmp/aaa" -o "-p 50432 -c autovacuum=off -c autovacuum_freeze_max_age=2000000000  -c listen_addresses='' -c unix_socket_permissions=0700" start >> "pg_upgrade_server.log" 2>&1
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....FATAL:  lock file "postmaster.pid" already exists
HINT:  Is another postmaster (PID 2194) running in data directory "/tmp/aaa"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

So, @Bruce: Something to improve :)
Again: It was a pleasure to have you there and I hope we’ll meet again at one of the conferences in 2018.

 

The post Does pg_upgrade in check mode raise a failure when the old cluster is running? appeared first on the dbi services blog.

ODPI-C 2.1 is now available for all your Oracle Database Access Needs

Christopher Jones - Tue, 2017-12-12 13:56

Anthony Tuininga has just released version 2.1 of the Oracle Database Programming Interface for C (ODPI-C) on GitHub.

ODPI-C is an open source library of C code that simplifies access to Oracle Database for applications written in C or C++.

Key new feature: support for accessing sharded databases.

 

In the words of Anthony: This release has a number of small enhancements intended to build upon the release of 2.0 made a few months ago. Topping the list is support for accessing sharded databases, a new feature in Oracle Database 12.2. Support for creating connections using the SYSBACKUP, SYSDG, SYSKM and SYSRAC roles was also added, as was support for identifying the id of the transaction which spawned a subscription message. For those on Windows, improved error messages were created for when the wrong architecture Oracle Client is in the PATH environment variable. Improvements were also made to the debugging infrastructure. The test suite has also been expanded considerably. A number of bugs were squashed to improve ODPI-C and in preparation for the upcoming releases of cx_Oracle 6.1 and node-oracledb 2.0, both of which use ODPI-C. See the full release notes for more information.

ODPI-C References

Home page: https://oracle.github.io/odpi/

Code: https://github.com/oracle/odpi

Documentation: https://oracle.github.io/odpi/doc/index.html

Release Notes: https://oracle.github.io/odpi/doc/releasenotes.html

Report issues and discuss: https://github.com/oracle/odpi/issues
