Feed aggregator

How to Improve the Performance of Group By with Having

Tom Kyte - Wed, 2018-02-14 19:06
I have a table t containing three fields: accountno, tran_date and amount, and amount is always > 0. There are many records for each accountno. I want to select all the accountnos where sum(amount) > 100. The simple query is like this <code>...
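A minimal sketch of the kind of query described (table and column names are taken from the question; this is a guess at the elided statement, not the asker's exact SQL):

select accountno, sum(amount)
from   t
group by accountno
having sum(amount) > 100;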
Categories: DBA Blogs

The Tale of the Ardent SpringCleaning Maniac (ASM)

Amis Blog - Wed, 2018-02-14 17:55

or

If you want to get rid of stuff, you can always do a good spring-cleaning. Or you can do what I do. Move.

Ellen DeGeneres

In order to prepare for a network reorganization we needed to move our current Oracle Enterprise Manager machine into another network segment. So we grabbed the chance: instead of just moving the existing OEM12c Rel.3 machine, we decided to install a new Oracle Enterprise Manager 13(.2.2.0.0) onto two Linux Red Hat 6.9 machines and get rid of the SPOF (single point of failure) as well, in one elegant move.

Little did we know that we would almost cripple our RAC-clusters by just preparing for the new agents.

Here comes the story of this little adventure.

When we were finally ready to move the monitored targets into the new OMS, I started to install second agents on all the hosts, so we could still quickly switch back to EM12c in case we hit a major roadblock with 13c. I started with our single-instance development and test machines and then tentatively began to install on some acceptance machines. Most of our acceptance and production databases are 2-node RAC 11.2.0.4 with (active) Data Guard, meaning each database consists of four instances on four different machines. Except for some connectivity problems in the network, so far everything went smoothly.

In order to install the new agents I had to check each machine upfront for the required 1 GB of free space in /tmp. Often, I had to clean out a lot of old stuff from years ago: long-forgotten Oracle install logs, spool files from the developers’ ad-hoc SQL queries, RPMs downloaded by the Linux administrators, never-checked Puppet logs and so on. I was intrigued that in all cases node 2 was more cluttered than node 1.

By the way: 1 GB was not even enough; I would advise at least 1.3 GB of free space in /tmp for the 13c agents.

At first, I removed only old files owned by user oracle before I even considered removing root files or stuff other users had left behind years ago. And I did not touch Oracle files and directories younger than four weeks. Often that was sufficient to clear enough space to install the agent.
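A rough sketch of the kind of check involved (the thresholds and options are my own illustration, not an exact replay of what I ran), listing candidates rather than deleting them, and skipping hidden entries:

df -h /tmp
find /tmp -maxdepth 1 ! -name '.*' -user oracle -mtime +28 -exec ls -ldh {} \;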

But some machines were particularly cluttered, and I was happily cleaning away until I had freed enough space in /tmp to install the agent via the GUI. The first node of the first cluster was ready at about 2018-01-29 15:10h, followed by node 2 – after the necessary cleanup in /tmp – at about 15:45h. The newly installed agents on the cluster worked fine and produced entries about “Compliance …”, which were quickly suppressed for the target. Everything seemed fine and I went home quite contentedly.

The next morning we noticed that one of the databases on this cluster behaved strangely and complained about not being able to complete its backup due to unrecognized files in the FRA. What?! Some hours later, other databases on this node likewise began to issue events in OMS13c, and slowly hell broke loose – which reminded us of ASM issues we had had some weeks earlier after some updates. But the only change this machine had undergone lately was the new EM agent. No relation with ASM, forget about that idea…

We took a look into the alert log of the first complaining database (primary) and it looked like this (DB and instance names changed):

2018-01-29 22:00:00.031000 +01:00

Setting Resource Manager plan SCHEDULER[0x32D9]:DEFAULT_MAINTENANCE_PLAN via scheduler window

Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter

Starting background process VKRM

VKRM started with pid=43, OS id=13348

Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m000_13400.trc:

ORA-01114: IO error writing block to file (block # )

2018-01-29 22:18:11.242000 +01:00

Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m001_35726.trc:

ORA-01114: IO error writing block to file (block # )

2018-01-29 22:28:11.814000 +01:00

Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m001_53927.trc:

ORA-01114: IO error writing block to file (block # )

2018-01-29 22:38:12.346000 +01:00

Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m001_65572.trc:

ORA-01114: IO error writing block to file (block # )

2018-01-29 22:48:12.872000 +01:00

Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m001_75911.trc:

ORA-01114: IO error writing block to file (block # )

2018-01-29 22:58:13.371000 +01:00

Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m001_86086.trc:

ORA-01114: IO error writing block to file (block # )

2018-01-29 23:05:10.655000 +01:00

Thread 2 advanced to log sequence 917 (LGWR switch)

Current log# 3 seq# 917 mem# 0: +DATA01/db1_prim/onlinelog/group_3.294.918479161

Current log# 3 seq# 917 mem# 1: +FRA01/db1_prim/onlinelog/group_3.306.918479161

2018-01-29 23:08:02.093000 +01:00

WARNING: ASM communication error: op 11 state 0x50 (3113)

ERROR: slave communication error with ASM

Unable to create archive log file ‘+FRA01’

Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_arc0_18470.trc:

ORA-19816: WARNING: Files may exist in db_recovery_file_dest that are not known to database.

ORA-17502: ksfdcre:4 Failed to create file +FRA01

ORA-03113: end-of-file on communication channel

Process ID:

Session ID: 708 Serial number: 1895

*************************************************************

WARNING: A file of type ARCHIVED LOG may exist in

db_recovery_file_dest that is not known to the database.

Use the RMAN command CATALOG RECOVERY AREA to re-catalog

any such files. If files cannot be cataloged, then manually

delete them using OS command. This is most likely the

result of a crash during file creation.

*************************************************************

ARC0: Error 19504 Creating archive log file to ‘+FRA01’

ARCH: Archival stopped, error occurred. Will continue retrying

ORACLE Instance DB1_PRIM2 – Archival Error

ORA-16038: log 4 sequence# 916 cannot be archived

ORA-19504: failed to create file “”

ORA-00312: online log 4 thread 2: ‘+DATA01/db1_prim/onlinelog/group_4.295.918479161’

ORA-00312: online log 4 thread 2: ‘+FRA01/db1_prim/onlinelog/group_4.307.918479163’
WARNING: A file of type ARCHIVED LOG may exist in …

Remember, the installation of the agent was in the afternoon at about 4 o’clock, the trouble seemed to have started at 22:00h, and not all databases on the cluster seemed to have a problem with ASM or their FRAs. The connection between the FRA trouble and the agent was not obvious, I would say.

At that moment we did not ‘see’ the WARNING: ASM communication error: op 11 state 0x50 (3113) but reacted to the archival error. But of course, that did not solve the problem.

When we had a look into the ASM-log on node2 of this cluster, it revealed that its ASM instance had crashed with ORA-29701… while node1 still functioned normally!

A quick search on MOS for “ORA-29701” resulted in nothing that seemed to fit our problem, until I widened the search on the net and almost accidentally found a website about Exadata, describing an ORA-29701 error of ASM and giving advice about “restoring a lost socket file” by trying to restart the complete CRS stack on the node – and if that didn’t do the trick, by rebooting the server. The latter was a little tricky in this environment and takes some time to get approval, so I was prepared to do whatever was necessary to solve this problem, as long as it did not involve a reboot of the server.

But a lost socket file?!? They used to be in /var/tmp on Linux, if I was not mistaken, but I only touched /tmp…

I decided to go along with this advice and, as the grid owner, stopped the CRS stack on the unhealthy node with crsctl stop crs – and with it all database instances on that node, which were still limping along as well (or badly) as they could on one node.
Then I checked whether all processes were really down, which they weren’t. Therefore I wielded the kill -9 sword until everything that had anything to do with CRS or Oracle was gone from the process list (do I hear some deep groans about this?). And started anew with crsctl start crs.
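In outline, the sequence looked like this (a sketch from memory, to be run with the appropriate privileges – and certainly not a recommended standard procedure, especially the kill -9 part):

crsctl stop crs                        # stop the stack on the unhealthy node
ps -ef | grep -Ei 'crs|asm|ora'        # check what is still hanging around
kill -9 <remaining PIDs>               # last resort, see the groans above
crsctl start crs                       # bring everything back up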
Luckily, the result was a node with CRS, ASM, listeners and all database instances started as if the machine just came back from a successful and smooth reboot. And even a new /tmp/.oracle directory … one of the older directories I had cleaned up in /tmp to make room for the agent installs!
Phew, blessed are the innocent! It all worked again.

But what had caused all this trouble? According to this article, the socket files for Linux can be found in a hidden directory called .oracle. Depending on the *NIX dialect, that could be under /tmp/.oracle, /usr/tmp/.oracle or /var/tmp/.oracle. Under Linux x86_64 this directory is officially under /var/tmp, but Oracle also hides a copy under /tmp/.oracle and stores socket files of the Oracle Net Listener and Cluster Health Monitor in there (see DOC ID 1322234.1).
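If you want to see what lives there on your own systems, a quick read-only look at the candidate locations mentioned above is enough:

ls -la /tmp/.oracle /var/tmp/.oracle /usr/tmp/.oracle 2>/dev/null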

The article also cited DOC ID 391790.1, which is not among the first 10 documents presented when you query MOS for “ORA-29701” – a document that would have helped us find the culprit far quicker!

What an irony – later that day, Jan 30th 2018, Oracle released DOC ID 370605.1, which tries to remind us all again:
Never delete any hidden .oracle directory you come across, or fear for your cluster!

Have fun!

 

The post The Tale of the Ardent SpringCleaning Maniac (ASM) appeared first on AMIS Oracle and Java Blog.

node-oracledb 2.1 is now available from npm

Christopher Jones - Wed, 2018-02-14 17:12

Release announcement: Node-oracledb 2.1.0, the Node.js module for accessing Oracle Database, is on npm.

Top features: privileged connections such as SYSDBA, and a new 'destroy()' method on the 'queryStream()' Stream.

After the big refactoring of node-oracledb 2.0, we're pleased to push node-oracledb forward with the release of 2.1.0.

 

Changes in node-oracledb 2.1.0 include:

  • Support for SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, SYSKM, and SYSRAC privileges in standalone connections. You can now connect like:

    oracledb.getConnection(
      { user: 'sys',
        password: 'secret',
        connectString: 'localhost/orclpdb',
        privilege: oracledb.SYSDBA },
      function(err, connection) {
        if (err) console.error(err);
        else console.log('I have power');
      }
      // . . .
    );
  • The internal implementations of 'direct fetches' and 'queryStream()' have been simplified by reusing the ResultSet changes of 2.0. You might notice speedups in some cases.

  • The 'queryStream()' result Stream now supports Node 8's destroy() method, which obsoletes the previous experimental '_close()' method of node-oracledb.

    If you are using '_close()' in Node 8 or later, you should (see the sketch after this list):

    • Change the method name from '_close()' to 'destroy()'.

    • Stop passing a callback.

    • Optionally pass an error.
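    A minimal before/after sketch (the stream variable name is illustrative):

    // before (node-oracledb 2.0): experimental method taking a callback
    stream._close(function(err) { if (err) console.error(err); });

    // after (node-oracledb 2.1 on Node 8+): no callback, optional error
    stream.destroy();
    // or: stream.destroy(new Error('stopping early'));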

  • Improved the Error object with new 'errorNum' and 'offset' properties. The properties will be included when you get errors from Oracle Database such as this:

    connection.execute(
      "select wrong_col from departments",
      function(err, result) {
        if (err) console.error(err);
        else console.log(result.rows);
      });

    The output is:

    { Error: ORA-00904: "WRONG_COL": invalid identifier
      errorNum: 904,
      offset: 7 }

    The offset is relative to the start of the SQL statement. For non SQL errors, the offset will be 0.

  • New 'versionSuffix' and 'versionString' properties to the oracledb object to aid showing the release status and version. The 'versionSuffix' attribute will be an empty string for production releases, but may be something like '-dev' or '-beta.1' for other builds.

    The existing 'version' attribute is great for runtime comparisons, but not as handy as the new 'versionString' attribute for displaying to users.

  • With a code contribution from Sagie Gur-Ari an annoying little gotcha was fixed. Now 'connectionString' is an alias for 'connectString'.

    oracledb.getConnection(
      { user: 'hr',
        password: 'welcome',
        connectionString: 'localhost/orclpdb', },
      function(err, connection) {
        if (err) console.error(err);
        else console.log('I am HR');
      } );
  • The underlying data access layer ODPI-C had some improvements which flow through to node-oracledb.

    • Connections can now be closed when ResultSets and Lobs are open. This removes the DPI-1054 error you might have seen if you closed resources in the 'wrong' order.

    • At connection close there is no longer an unconditional rollback sent to the database. Instead ODPI-C makes use of some internal state to avoid the overhead of the 'round trip' to the database when it is known that there is no open transaction.

  • Node-oracledb 2.1 no longer compiles with the long-obsolete Node 0.10 or 0.12 versions. We'd stopped testing and claiming support for these versions a while back, but technically compilation had been possible until now. Updates to NAN API usage and other cleanups have led to this change.

  • Our testing and documentation improved, as always.

See the CHANGELOG for other changes in node-oracledb 2.1.

Finally, you may be interested to know about these cool new resources:

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

For information on migrating see Migrating from node-oracledb 2.0 to node-oracledb 2.1.

Issues and questions about node-oracledb can be posted on GitHub.

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

Full page logging in Postgres and Oracle

Yann Neuhaus - Wed, 2018-02-14 14:13

In my opinion, the volume of logging (aka redo log, aka xlog, aka WAL) is the most important factor for OLTP performance, availability and scalability, for several reasons:

  • This is the only structure where disk latency is a mandatory component of response time
  • This is a big part of the total volume of backups
  • This is sequential by nature, and very difficult to scale by parallelizing

In this post, I look at the volume of logging generated by some DML in Postgres and Oracle. I know Oracle quite well and am just starting to look at Postgres. The comparison here is not a contest but a way to understand better. For example, the default behavior of Postgres, with full_page_writes=on, is very similar to Oracle ‘begin backup’ mode. The comparison makes no sense for most Postgres DBAs, but it probably helps Oracle DBAs to understand it.

Measure WAL segment writes

Here is how I measured the volume of transaction log written: start the Postgres server with ‘strace -f’ and parse with ‘awk’ the open(), write() and close() calls:

sudo su postgres <<'END'
export PGDATA=/u01/pgdata
/usr/pgsql-10/bin/pg_ctl stop
strace -e trace=open,close,write,recvfrom -f /usr/pgsql-10/bin/pg_ctl start 2>&1 | awk '
/^[^[]/{
$0="[pid MAIN] "$0
}
/strace: Process [0-9][0-9]* attached/{
sub(/^.*strace: /,"strace: ") ; "ps -o cmd -hp " $3 |& getline proc[$3"]"] ; print "" ; print $0,proc[$3"]"] }
/open[(].*pg_wal[/].* = [0-9]*$/{
z=$0 ; gsub(qq," ") ; fd_wal[$2 $NF]=$4
}
/checkpoint;/{
total_written_wal=0
}
/write[(]/{
#pid=$2 ; sub("]","",$2) ; "ps -o cmd -hp " p |& getline proc[p"]"]
z=$0 ; gsub("[(,]"," ") ; if ( fd_wal[$2 $4]>0 ) { written_wal[$2 $4]=written_wal[$2 $4]+$NF ; total_written_wal=total_written_wal+$NF } next
}
/close[(]/{
pid=$2 ; sub("[^0-9]","",pid) ;
z=$0 ; gsub("[()]"," ") ; if ( ( fd_wal[$2 $4]!="" ) && ( written_wal[$2 $4] > 0 ) ) {
printf " ( written %d bytes to %s -> total WAL segments: %.2f MB ) cmd=%s\n",written_wal[$2 $4],fd_wal[$2 $4],total_written_wal/1024/1024 , proc[$2] ; fd_wal[$2 $4]=""
} next
}
' qq='"'
END

Do not do that in production. This is experimentation in a lab. Do not attach strace to a critical process in production.

There’s probably an easier way to get the same information, maybe with the Postgres activity statistics, or through a size-counting archive_command, so please don’t hesitate to comment. Anyway, from the ‘write()’ calls I am sure that I’m counting exactly what I want: the volume of logging written to disk. As an Oracle DBA used to LogWriter and its slave threads managing all writes, I started by tracing only the WAL writer process but quickly realized that part of the logging is written directly by my server process.

Postgres: insert

I create a table with some numbers and a 100 bytes character string.

create table demo as select generate_series a,generate_series b,generate_series c,generate_series d,generate_series e,generate_series f,lpad('x',100,'x') g from generate_series(0,0);

The first operation I test is the insert of 1 million rows.

insert into demo select generate_series a,generate_series b,generate_series c,generate_series d,generate_series e,generate_series f,lpad('x',100,'x') g from generate_series(1,1000000);
( written 4349952 bytes to pg_wal/000000010000000A0000005F -> total WAL segments: 4.16 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 8192 bytes to pg_wal/000000010000000A0000005F -> total WAL segments: 9.00 MB ) cmd=postgres: wal writer process
( written 17735680 bytes to pg_wal/000000010000000A00000060 -> total WAL segments: 20.07 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 3309568 bytes to pg_wal/000000010000000A00000060 -> total WAL segments: 31.40 MB ) cmd=postgres: wal writer process
( written 33783808 bytes to pg_wal/000000010000000A00000061 -> total WAL segments: 36.03 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 3997696 bytes to pg_wal/000000010000000A00000061 -> total WAL segments: 39.80 MB ) cmd=postgres: wal writer process
( written 49676288 bytes to pg_wal/000000010000000A00000062 -> total WAL segments: 51.19 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 65273856 bytes to pg_wal/000000010000000A00000063 -> total WAL segments: 66.06 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 79364096 bytes to pg_wal/000000010000000A00000064 -> total WAL segments: 82.04 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 6660096 bytes to pg_wal/000000010000000A00000064 -> total WAL segments: 82.39 MB ) cmd=postgres: wal writer process
( written 88285184 bytes to pg_wal/000000010000000A00000065 -> total WAL segments: 98.02 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 14491648 bytes to pg_wal/000000010000000A00000065 -> total WAL segments: 106.82 MB ) cmd=postgres: wal writer process
( written 101703680 bytes to pg_wal/000000010000000A00000066 -> total WAL segments: 113.99 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 17825792 bytes to pg_wal/000000010000000A00000066 -> total WAL segments: 117.19 MB ) cmd=postgres: wal writer process
( written 115769344 bytes to pg_wal/000000010000000A00000067 -> total WAL segments: 128.20 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 18661376 bytes to pg_wal/000000010000000A00000067 -> total WAL segments: 135.09 MB ) cmd=postgres: wal writer process
( written 19824640 bytes to pg_wal/000000010000000A00000068 -> total WAL segments: 144.17 MB ) cmd=postgres: wal writer process
( written 131350528 bytes to pg_wal/000000010000000A00000068 -> total WAL segments: 148.16 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 27435008 bytes to pg_wal/000000010000000A00000069 -> total WAL segments: 159.80 MB ) cmd=postgres: wal writer process
( written 140132352 bytes to pg_wal/000000010000000A00000069 -> total WAL segments: 159.80 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
INSERT 0 1000000

You can see that my ‘strace|awk’ script is running in the background and has counted about 160 MB of logging, partly from the ‘postgres: wal writer process’ and partly from ‘postgres: demo demo 192.168.56.122(38013)’ serving my connection.

The relation size as stored on disk is about 150 MB:

analyze demo;
ANALYZE
select relname, relnamespace, reltype, reloftype, relowner, relam, relfilenode, reltablespace, relpages, reltuples, relallvisible, relpages*8/1024 MB from pg_class where relname = 'demo';
relname | relnamespace | reltype | reloftype | relowner | relam | relfilenode | reltablespace | relpages | reltuples | relallvisible | mb
---------+--------------+---------+-----------+----------+-------+-------------+---------------+----------+-----------+---------------+-----
demo | 2200 | 25157 | 0 | 16385 | 0 | 25155 | 0 | 19231 | 1e+06 | 0 | 150
(1 row)

This makes sense. An insert has to write all new data into the log in order to be able to recover the pages until they are checkpointed.

Note that I have no index on this table for this test.

Postgres: update

I’m now updating one column for all rows.

update demo set b=b+1;
( written 150528000 bytes to pg_wal/000000010000000A0000006A -> total WAL segments: 4.01 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 162693120 bytes to pg_wal/000000010000000A0000006B -> total WAL segments: 17.84 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 29769728 bytes to pg_wal/000000010000000A0000006B -> total WAL segments: 28.44 MB ) cmd=postgres: wal writer process
...
( written 84287488 bytes to pg_wal/000000010000000A00000081 -> total WAL segments: 343.65 MB ) cmd=postgres: wal writer process
( written 453705728 bytes to pg_wal/000000010000000A00000082 -> total WAL segments: 347.36 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
UPDATE 1000001

I touched only a small part of the volume in bytes, but I touched all rows and all pages. And, even if only a few bytes are modified, Postgres logs the whole page to protect against fractured blocks in case of a crash (pages partially written). So that’s about 150 MB. But Postgres does not update rows in-place. The whole row is inserted in its new version, which means the whole volume again – another 150 MB. If we look at the size of the table, we can see 300 MB of pages:

analyze demo;
ANALYZE
select relname, relnamespace, reltype, reloftype, relowner, relam, relfilenode, reltablespace, relpages, reltuples, relallvisible, relpages*8/1024 MB from pg_class where relname = 'demo';
relname | relnamespace | reltype | reloftype | relowner | relam | relfilenode | reltablespace | relpages | reltuples | relallvisible | mb
---------+--------------+---------+-----------+----------+-------+-------------+---------------+----------+-------------+---------------+-----
demo | 2200 | 25157 | 0 | 16385 | 0 | 25155 | 0 | 38462 | 1.21882e+06 | 0 | 300
(1 row)

So this update has generated even more logging: 347 MB.

Postgres: sparse update

Now updating only 1 row out of ten, still one column only:

update demo set b=b+1 where mod(a,10)=1;
( written 89923584 bytes to pg_wal/000000010000000A00000083 -> total WAL segments: 13.88 MB ) cmd=postgres: wal writer process
( written 469123072 bytes to pg_wal/000000010000000A00000084 -> total WAL segments: 22.98 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
...
( written 563576832 bytes to pg_wal/000000010000000A0000008D -> total WAL segments: 151.07 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 130940928 bytes to pg_wal/000000010000000A0000008D -> total WAL segments: 151.27 MB ) cmd=postgres: wal writer process
UPDATE 100000
analyze demo;

So, 10% of the rows had to be copied to their new version, which adds another 15 MB to the table size.

analyze demo;
ANALYZE
select relname, relnamespace, reltype, reloftype, relowner, relam, relfilenode, reltablespace, relpages, reltuples, relallvisible, relpages*8/1024 MB from pg_class where relname = 'demo';
relname | relnamespace | reltype | reloftype | relowner | relam | relfilenode | reltablespace | relpages | reltuples | relallvisible | mb
---------+--------------+---------+-----------+----------+-------+-------------+---------------+----------+-------------+---------------+-----
demo | 2200 | 25157 | 0 | 16385 | 0 | 25155 | 0 | 40385 | 1.07267e+06 | 0 | 315
(1 row)

For these additional 15 MB, half of the table pages had to be modified (the current version having to point to the new version), and the logging generated was 150 MB. Because of MVCC at the tuple level – doing something similar to ‘chained rows’ and ‘row migration’ for all updates – and because of full page logging, even sparse updates generate a lot of log writes.

Postgres: delete

Here is a delete of those million rows:

delete from demo;
( written 576364544 bytes to pg_wal/000000010000000A0000008E -> total WAL segments: 6.44 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 134930432 bytes to pg_wal/000000010000000A0000008E -> total WAL segments: 6.73 MB ) cmd=postgres: wal writer process
( written 589225984 bytes to pg_wal/000000010000000A0000008F -> total WAL segments: 18.70 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
...
( written 162054144 bytes to pg_wal/000000010000000A00000099 -> total WAL segments: 184.70 MB ) cmd=postgres: wal writer process
( written 740352000 bytes to pg_wal/000000010000000A0000009A -> total WAL segments: 189.80 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
DELETE 1000001
( written 163217408 bytes to pg_wal/000000010000000A0000009A -> total WAL segments: 196.22 MB ) cmd=postgres: wal writer process

Marking tuples as deleted does not increase the table:

analyze demo;
ANALYZE
select relname, relnamespace, reltype, reloftype, relowner, relam, relfilenode, reltablespace, relpages, reltuples, relallvisible, relpages*8/1024 MB from pg_class where relname = 'demo';
relname | relnamespace | reltype | reloftype | relowner | relam | relfilenode | reltablespace | relpages | reltuples | relallvisible | mb
---------+--------------+---------+-----------+----------+-------+-------------+---------------+----------+-----------+---------------+-----
demo | 2200 | 25157 | 0 | 16385 | 0 | 25155 | 0 | 40385 | 275837 | 0 | 315
(1 row)

But all current tuples have to be marked as deleted and not visible once the transaction is committed. This touches all pages for the current version, which is more than 150 MB of logging here.

Postgres: vacuum

After two updates and a delete, I have old tuples in this table. It seems that VACUUM does not generate any logging:

vacuum demo;
( written 762445824 bytes to pg_wal/000000010000000A0000009B -> total WAL segments: 14.67 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
VACUUM

My guess (but remember that I am a newbie in Postgres) is that in case of a crash occurring before the next checkpoint we will just have to vacuum again. But this is not what was answered in the postgres-general list a few years ago.

Note that full page logging is not necessary for all changes, but only for the first change to a page after a checkpoint. This is sufficient to cover future write failures because recovery will start from there. Once the full page is logged, a change vector is sufficient for further recovery. However, I had the same amount of WAL, 15 MB, when vacuuming after a checkpoint.
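For reference, the check was simply to force a checkpoint and then repeat the operation while watching the WAL counter (a sketch of the idea, using the same table as above):

checkpoint;
vacuum demo;   -- roughly the same ~15 MB of WAL observed right after the checkpoint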

Oracle: insert

Let’s do some similar things in Oracle, whose MVCC implementation is completely different: at block level, with undo logging.

SQL> create table demo as select rownum a,rownum b,rownum c,rownum d,rownum e,rownum f,lpad('x',100,'x') g from xmltable('0 to 0');
Table created.

I have shown in a previous post how I get the delta values from V$MYSTAT joined to V$STATNAME using (STATISTIC#) for ‘redo size’, so there is no need to strace here. We could also see the same result by measuring the writes to the redo log groups (without double-counting the multiplexed members).
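The _mystat_diff.sql script itself is not reproduced in this post; the idea is a query like the following, run before and after each statement so the deltas can be computed (a sketch, not the exact script):

select name, value
  from v$mystat join v$statname using (statistic#)
 where name in ('redo size','undo change vector size','db block changes','redo entries');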

SQL> insert into demo select rownum a,rownum b,rownum c,rownum d,rownum e,rownum f,lpad('x',100,'x') g from xmltable('1 to 1000000');
1000001 rows inserted.
 
SQL> @ _mystat_diff.sql
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
141,342 155,218,876 4,380,448 104,411

This is about 150MB, which is the volume of the table:

SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
PL/SQL procedure successfully completed.
SQL> select table_name,num_rows,blocks,(blocks*block_size/1024/1024) MB, avg_row_len from user_tables join dba_tablespaces using(tablespace_name) where table_name='DEMO';
 
TABLE_NAME NUM_ROWS BLOCKS MB AVG_ROW_LEN
------------------------------ ---------- ---------- ---------- -----------
DEMO 1000001 19280 150.625 131

Conclusion for inserts: all databases have to log all the inserted data in order to be protected from an instance crash. Note that Oracle has a way to insert directly into the datafiles, bypassing the buffer cache, which reduces the logging required for crash recovery. But I’m not doing bulk inserts here.

Oracle: update

The update in Oracle is done in-place. There is no need to copy the whole row (except in the rare cases where the row grows and no longer fits into the block). However, the old value of the column must be copied into the UNDO segment for MVCC. This is why we see 46 MB of ‘undo change vector size’ here.

SQL> update demo set b=b+1;
1000001 rows updated.
SQL> commit;
Commit complete.
SQL> @ _mystat_diff.sql
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
170,777 105,301,308 48,641,772 82,221

The UNDO is only the change vector, not the full block. If you read about copies of full blocks to rollback segments, it is a confusion either from veterans of Oracle 5 or a misunderstanding of flashback features. The UNDO is stored in segments and written first into the buffer cache, so it is protected by redo logging: about 46 MB of the redo is actually the redo for the undo vectors. The other 54 MB of redo is the new value of the update.

Oracle: sparse update

The logging of change vectors rather than full pages is even cheaper with sparse updates:

SQL> update demo set b=b+1 where mod(a,10)=1;
100001 rows updated.
SQL> commit;
Commit complete.
SQL> @ _mystat_diff.sql
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
56,583 15,414,328 6,111,608 36,921

The volume of undo and redo generated is only 15 MB here, including 6 MB of undo vectors. This is really optimized and this is one reason why you should update only the columns changed (and not use the default non-dynamic update of Hibernate for example).

Oracle: delete

The delete has to mark all rows as deleted, and because the space can immediately be reused, the whole row must be logged into the UNDO; this in turn has to be logged into the REDO, so the delete generates a lot of logging:

SQL> delete from demo;
1000001 rows deleted.
SQL> commit;
Commit complete.
SQL> @ _mystat_diff.sql
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
2,124,823 403,755,892 240,302,088 1,093,821

I have no indexes here. With indexes, all index entries have to be marked as deleted, and this generates undo and redo vectors because MVCC in Oracle is at block level: each block modification – for table or index – has to be logged.

Deleting a lot of rows is an expensive operation in Oracle. For bulk purges, it is often better to truncate and insert /*+ append */ when possible (as in a non-atomic materialized view refresh). Partitioning also helps here, for example to purge old data when the table is partitioned by date.

Postgres without full page logging

Given the huge overhead, is full page logging really required? There are plans to avoid it, mentioned in the Postgres ToDo wiki, or at least to keep it only short-term for crash recovery and not for media recovery. Another possibility is to implement a checksum on the blocks so that fractured blocks can be detected. Then, once detected, the fractured blocks would not need full page logging to recover them if we can restore a previous backup. This takes longer to recover, but can be acceptable given the low probability of this kind of failure. In addition, when you have a physical standby synchronized with log shipping, you have an easy way to recover without having to restore files. But you need a checksum to detect the problem.

Without a checksum, the problem is the detection of partial writes. But if you trust your storage, and if you fail over to the standby in case of a crash, you may accept setting full_page_writes=off – and this is what I did here.
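For such a test, the parameter can be changed without restarting the server (full_page_writes only needs a reload); a minimal sketch:

alter system set full_page_writes = off;
select pg_reload_conf();
show full_page_writes;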


insert into demo select generate_series a,generate_series b,generate_series c,generate_series d,generate_series e,generate_series f,lpad('x',100,'x') g from generate_series(1,1000000);
...
( written 125255680 bytes to pg_wal/000000010000000A000000E3 -> total WAL segments: 140.65 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
INSERT 0 1000000

The insert still has to log all the new data: 140 MB.


update demo set b=b+1;
...
( written 72613888 bytes to pg_wal/000000010000000A000000F2 -> total WAL segments: 213.02 MB ) cmd=postgres: wal writer process
UPDATE 1000001

The update has to log only what is modified, but because of the Postgres MVCC implementation, the whole row has to be written in its new version, and the old one has its pointer updated: 210 MB here.


update demo set b=b+1 where mod(a,10)=1;
( written 305709056 bytes to pg_wal/000000010000000A000000F3 -> total WAL segments: 1.96 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
( written 72613888 bytes to pg_wal/000000010000000A000000F3 -> total WAL segments: 5.62 MB ) cmd=postgres: wal writer process
( written 75718656 bytes to pg_wal/000000010000000A000000F4 -> total WAL segments: 9.65 MB ) cmd=postgres: wal writer process
( written 310665216 bytes to pg_wal/000000010000000A000000F4 -> total WAL segments: 9.65 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
UPDATE 100000

The sparse update benefits from logging only the changed rows: 10 MB here. This one is even smaller than with Oracle because there’s no UNDO to write here: the old values stay in-place.


delete from demo;
( written 323256320 bytes to pg_wal/000000010000000A000000F5 -> total WAL segments: 11.27 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
( written 338829312 bytes to pg_wal/000000010000000A000000F6 -> total WAL segments: 26.92 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
( written 76562432 bytes to pg_wal/000000010000000A000000F6 -> total WAL segments: 31.41 MB ) cmd=postgres: wal writer process
( written 345415680 bytes to pg_wal/000000010000000A000000F7 -> total WAL segments: 39.73 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
( written 83410944 bytes to pg_wal/000000010000000A000000F7 -> total WAL segments: 40.41 MB ) cmd=postgres: wal writer process
DELETE 1000001

The delete is cheap when full_page_writes=off because only the visibility is changed while the data remains (until committed and vacuumed). If you have a lot of rows to delete, then consider setting full_page_writes=off and be sure to have a backup to restore in case of a crash.

Oracle full page logging in backup mode

So, by default Oracle does not need to protect against fractured blocks, because they can be detected. If the storage crashes while a block is partially written, the block is corrupt. Thanks to the checksum, this corruption will be detected during recovery (or even earlier, depending on DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT). The redo alone is not sufficient, as it contains only change vectors, but you can recover from the last backup and Oracle can do a simple block recovery. This recovery can also be done from the standby database.

However, full page logging does exist in Oracle. When running a backup with a non-Oracle tool that is not aware of block checksums, you need to enclose the copy or snapshot between ‘begin backup’ and ‘end backup’. You do this because an online backup may read partially updated blocks and, without the checksum, cannot detect them. A corrupt backup is not very useful, and this is why this backup mode generates more redo, to be able to recover such blocks. This is very similar to full page logging: the redo generated for the first modification of the buffer stores the whole block. Subsequent modifications, until the buffer is checkpointed, need only the change vectors.
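For reference, backup mode is switched on and off around the external copy or snapshot like this (database-wide syntax shown; it can also be done per tablespace):

alter database begin backup;
-- run the external file copy or storage snapshot here
alter database end backup;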

I think the first article I’ve ever written was a description of the Oracle backup mode. And it is still visible thanks to archive.org only because it was published on… Google Knol!

So, here is the same run with Oracle in backup mode.

Insert does not change a lot as it fills full blocks:

SQL> insert into demo select rownum a,rownum b,rownum c,rownum d,rownum e,rownum f,lpad('x',100,'x') g from xmltable('1 to 1000000');
1000000 rows created.
 
SQL> @ _mystat_diff.sql
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
141,376 156,527,072 4,380,448 124,195

A full update of one column generates the same undo, but more than 2x the redo because of full page logging:

SQL> update demo set b=b+1;
1000001 rows updated.
 
SQL> commit;
Commit complete.
 
SQL> @ _mystat_diff.sql
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
170,778 238,317,632 48,641,772 104,640

Sparse update is exactly the same as full update because this 10% touches all pages:

SQL> update demo set b=b+1 where mod(a,10)=1;
100001 rows updated.
 
SQL> commit;
Commit complete.
 
SQL> @ _mystat_diff.sql
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
319,622 240,502,284 17,832,196 192,815

Delete generates even more because there’s all the UNDO in addition to all the data pages:

SQL> delete from demo;
1000001 rows deleted.
 
SQL> commit;
Commit complete.
 
SQL> @ _mystat_diff.sql
 
db block changes redo size undo change vector size redo entries
---------------- ---------------- ----------------------- ----------------
2,125,285 558,510,928 240,303,768 1,143,131

So what?

Beyond the very different implementations of Postgres and Oracle, we can see that we have flexibility: the large logging generated by Postgres by default may be reduced in some cases, and the minimal logging which is the default for Oracle may be larger in some situations. The most important thing, as with all technologies, is to understand how it works. Only then can you make the right choice to balance performance, availability, and cost. Understanding how it works means reading the docs (how it is supposed to work) and testing (how it actually works). With Oracle there’s additional information from a huge community testing and using it for decades. With Postgres, as with all open source projects, the source code with its comments is amazing documentation.

 

Cet article Full page logging in Postgres and Oracle est apparu en premier sur Blog dbi services.

Oracle Cloud Applications Help Providence St. Joseph Health Deliver on its Digital Innovation Vision

Oracle Press Releases - Wed, 2018-02-14 11:00
Press Release
Oracle Cloud Applications Help Providence St. Joseph Health Deliver on its Digital Innovation Vision Oracle Cloud Applications improves organizational efficiency, delivery of services, and patient outcomes for non-profit healthcare organizations

Redwood Shores, Calif.—Feb 14, 2018

Oracle today announced that Providence St. Joseph Health has selected Oracle Cloud Applications to advance its critical business systems and streamline operational processes.

Providence St. Joseph Health needed a modern business platform that could scale to support its changing needs. With Oracle Enterprise Resource Planning (ERP) Cloud, Oracle Enterprise Performance Management (EPM) Cloud, Oracle Human Capital Management (HCM) Cloud, and Oracle Supply Chain Management (SCM) Cloud, Providence St. Joseph Health will be able to combine finance, employee, and supplier data to help improve productivity, reduce costs, and enhance visibility within the organization.

“We are focused on creating smarter and more efficient systems for managing our growing system, improving planning and forecasting, and enabling agile decision-making,” said Jo Ann Escasa-Haigh, Executive Vice President & Chief Financial Officer, Operations at Providence St. Joseph Health. “We are excited to partner with Oracle to deliver a solution to match our expanding needs.”

Oracle ERP Cloud’s complete, modern, and innovative platform will help Providence St. Joseph Health transform its finance function by enabling increased productivity, lower costs, and improved controls. Oracle EPM Cloud will enable Providence St. Joseph to analyze data for accurate forecasting and budget planning. Oracle SCM Cloud will help increase the efficiency of operations by enforcing procurement policies, providing real-time insights, and automating demand/supply planning. Oracle HCM Cloud will provide insightful data and tools to help Providence St. Joseph Health optimize talent management, gain complete workforce insights, and increase operational efficiency to drive a better employee experience.

“Oracle Cloud Applications will support Providence St. Joseph Health in driving our vision of digital innovation in our business processes, agility in our deployment of solutions, and long-term value and scalability,” adds Janice Newell, Executive Vice President, Information Services & Chief Information Officer at Providence St. Joseph Health.

“Technology plays an important role in the healthcare industry, from helping offices run more efficiently to enabling hospital staff to deliver more accurate diagnoses and save more lives,” said Steve Miranda, Executive Vice President, Applications Product Development, Oracle. “Oracle Cloud Applications will significantly improve business systems and organizational efficiency to help Providence St. Joseph Health provide superior patient care and better respond to community health needs.”

Additional Information

For additional information on Oracle ERP Cloud applications, visit Oracle Enterprise Resource Planning (ERP) Cloud’s Facebook and Twitter or the Modern Finance Leader blog.

More information on Oracle HCM Cloud can be found on the Modern HR in the Cloud blog, follow @OracleHCM on Twitter or Facebook.

For additional information on Oracle SCM Cloud, visit Facebook, Twitter or the Oracle SCM blog.

To learn more about Providence St. Joseph Health, visit psjhealth.org.

Contact Info
Evelyn Tam
Oracle PR
1.650.506.5936
evelyn.tam@oracle.com
Nisha Morris
Providence St. Joseph Health PR
1.714.414.3232
nisha.morris@stjoe.org
About Providence St. Joseph Health

Providence St. Joseph Health is committed to improving the health of the communities it serves, especially those who are poor and vulnerable. Including 50 hospitals, 829 physician clinics, senior services, supportive housing and many other health and educational services, the health system and its partners employ more than 111,000 caregivers (employees) serving communities across seven states – Alaska, California, Montana, New Mexico, Oregon, Texas and Washington. With system offices based in Renton, Wash. and Irvine, Calif., the Providence St. Joseph Health family of organizations works together to meet the needs of its communities, both today and into the future. For more information about Providence St. Joseph Health, visit psjhealth.org.

About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Evelyn Tam

  • 1.650.506.5936

Nisha Morris

  • 1.714.414.3232

Announcing Packer Builder for Oracle Cloud Infrastructure Classic

OTN TechBlog - Wed, 2018-02-14 10:30

HashiCorp Packer 1.2.0 adds native support for building images on Oracle Cloud Infrastructure Classic.

Packer is an open source tool for creating machine images across multiple platforms from a single source configuration. With the new oracle-classic builder, Packer can now build new application images directly on Oracle Classic Compute, similar to the oracle-oci builder. New images can be created from an Oracle-provided base OS image, an existing private image, or an image that has been installed from the Oracle Cloud Marketplace.

Note: Packer can also create Oracle Cloud Infrastructure Classic compatible machine images using the VirtualBox builder – an approach that remains useful when building new base OS images from ISOs; see Creating Oracle Compute Cloud Virtual Machine Images using Packer.

oracle-classic Builder Example

This example creates a new image with Redis installed, using an existing Ubuntu image as the base OS.

Create a packer configuration file redis.json
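The JSON itself did not survive the feed; a minimal sketch of what such a template looks like follows. The builder type is oracle-classic, but the field names are given as I recall them from the Packer documentation and all values are placeholders – verify both against the current docs before use.

{
  "builders": [{
    "type": "oracle-classic",
    "api_endpoint": "https://api-z00.compute.my-region.oraclecloud.com/",
    "identity_domain": "my-identity-domain",
    "username": "user@example.com",
    "password": "my-password",
    "source_image_list": "/Compute-my-identity-domain/user@example.com/Ubuntu.16.04",
    "dest_image_list": "packer-redis",
    "shape": "oc3",
    "image_name": "redis-{{timestamp}}",
    "ssh_username": "ubuntu"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": ["sudo apt-get update", "sudo apt-get install -y redis-server"]
  }]
}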

Now run Packer to build the image
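The build is then a single command (the file name follows from the step above):

packer build redis.json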

After Packer completes, the new image is available in the Compute Classic console for launching new instances.

See also

For building Oracle Cloud Infrastructure images see:

Backup and Restore PostgreSQL with PgBackRest I

Yann Neuhaus - Wed, 2018-02-14 09:58

Many tools can be used to back up PostgreSQL databases. In this blog I will talk about PgBackRest, a simple tool that can be used to back up and restore a PostgreSQL database. Full, differential, and incremental backups are supported.
In this first blog I will present a basic configuration of PgBackRest. Our configuration is composed of only one cluster, and PgBackRest is installed on the server hosting the database. The goal is to explain a first use of PgBackRest.
Below is our configuration:
Server with Oracle Linux 7
PostgreSQL 10.1
PgBackRest 1.28
We assume that the Linux box and PostgreSQL 10.1 are already installed. So let’s install PgBackRest.

[root@pgserver ~]# yum search pgbackrest
Loaded plugins: langpacks, ulninfo
=========================== N/S matched: pgbackrest ============================
pgbackrest.noarch : Reliable PostgreSQL Backup & Restore
pgbackrest.x86_64 : Reliable PostgreSQL Backup & Restore
Name and summary matches only, use "search all" for everything

And then we can install PgBackRest
[root@pgserver ~]# yum install pgbackrest.x86_64
Afterwards we can check the installation using the pgbackrest command

[postgres@pgserver ~]$ /usr/bin/pgbackrest
pgBackRest 1.28 - General help
Usage:
pgbackrest [options] [command] Commands:
archive-get Get a WAL segment from the archive.
archive-push Push a WAL segment to the archive.
backup Backup a database cluster.
check Check the configuration.
expire Expire backups that exceed retention.
help Get help.
info Retrieve information about backups.
restore Restore a database cluster.
stanza-create Create the required stanza data.
stanza-delete Delete a stanza.
stanza-upgrade Upgrade a stanza.
start Allow pgBackRest processes to run.
stop Stop pgBackRest processes from running.
version Get version.
Use 'pgbackrest help [command]' for more information.

The configuration of PgBackRest is very easy: it consists of a pgbackrest.conf configuration file that must be edited. In my case the file is located in /etc. As mentioned, we will use a very basic configuration file.
Below are the contents of my configuration file:

[root@pgserver etc]# cat pgbackrest.conf
[global]
repo-path=/var/lib/pgbackrest
[clustpgserver]
db-path=/var/lib/pgsql/10/data
retention-full=2
[root@pgserver etc]#

In the file above,
• repo-path is where backups will be stored,
• clustpgserver is the name of my cluster stanza (you are free to choose the name). A stanza is the configuration for a PostgreSQL database cluster that defines where it is located, how it will be backed up, archiving options, etc.
• db-path is the path of my database files,
• retention-full: keep 2 full backups.
A complete list can be found here
Once the configuration file is done, we can create the stanza with the stanza-create command. Note that my PostgreSQL cluster is using port 5435.

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 stanza-create
2018-02-08 14:01:49.293 P00 INFO: stanza-create command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-08 14:01:50.707 P00 INFO: stanza-create command end: completed successfully
[postgres@pgserver ~]$

After creating the stanza, we can verify that the configuration is fine using the check command

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 check
2018-02-08 14:03:42.095 P00 INFO: check command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-08 14:03:48.805 P00 INFO: WAL segment 00000001000000000000000C successfully stored in the archive at '/var/lib/pgbackrest/archive/clustpgserver/10-1/0000000100000000/00000001000000000000000C-c387b901a257bac304f27865478fd9f768de83d6.gz'
2018-02-08 14:03:48.808 P00 INFO: check command end: completed successfully
[postgres@pgserver ~]$

Since we have not yet taken any backup with PgBackRest, the info command for the backups returns an error

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info info
stanza: clustpgserver
status: error (no valid backups)
db (current)
wal archive min/max (10-1): 00000001000000000000000C / 00000001000000000000000C
[postgres@pgserver ~]$

Now let’s take a backup

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 backup
2018-02-08 14:06:52.706 P00 INFO: backup command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-path=/var/lib/pgbackrest --retention-full=2 --stanza=clustpgserver
WARN: no prior backup exists, incr backup has been changed to full
2018-02-08 14:06:54.734 P00 INFO: execute non-exclusive pg_start_backup() with label "pgBackRest backup started at 2018-02-08 14:06:53": backup begins after the next regular checkpoint completes
2018-02-08 14:06:55.159 P00 INFO: backup start archive = 00000001000000000000000E, lsn = 0/E000060
2018-02-08 14:07:09.867 P01 INFO: backup file /var/lib/pgsql/10/data/base/13805/1255 (592KB, 2%) checksum 61f284092cabf44a30d1442ef6dd075b2e346b7f


2018-02-08 14:08:34.709 P00 INFO: expire command begin 1.28: --log-level-console=info --repo-path=/var/lib/pgbackrest --retention-archive=2 --retention-full=2 --stanza=clustpgserver
2018-02-08 14:08:34.895 P00 INFO: full backup total < 2 - using oldest full backup for 10-1 archive retention
2018-02-08 14:08:34.932 P00 INFO: expire command end: completed successfully
[postgres@pgserver ~]$

We can see that by default PgBackRest will try to do an incremental backup. But as there is no full backup yet, a full backup is done instead. Once a full backup exists, all future backups will be incremental unless we specify the backup type.

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 backup
2018-02-08 14:26:25.590 P00 INFO: backup command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-path=/var/lib/pgbackrest --retention-full=2 --stanza=clustpgserver
2018-02-08 14:26:29.314 P00 INFO: last backup label = 20180208-140653F, version = 1.28
2018-02-08 14:26:30.135 P00 INFO: execute non-exclusive pg_start_backup() with label "pgBackRest backup started at 2018-02-08 14:26:26": backup begins after the next regular checkpoint completes
...
2018-02-08 14:27:01.408 P00 INFO: expire command begin 1.28: --log-level-console=info --repo-path=/var/lib/pgbackrest --retention-archive=2 --retention-full=2 --stanza=clustpgserver
2018-02-08 14:27:01.558 P00 INFO: full backup total < 2 - using oldest full backup for 10-1 archive retention
2018-02-08 14:27:01.589 P00 INFO: expire command end: completed successfully
[postgres@pgserver ~]$
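PgBackRest also supports differential backups; the same --type option accepts diff (shown here as a sketch, not run in this demo):

pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 --type=diff backup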

If we want to perform another full backup, we can specify the option --type=full

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 --type=full backup
2018-02-08 14:30:05.961 P00 INFO: backup command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-path=/var/lib/pgbackrest --retention-full=2 --stanza=clustpgserver --type=full
2018-02-08 14:30:08.472 P00 INFO: execute non-exclusive pg_start_backup() with label "pgBackRest backup started at 2018-02-08 14:30:06": backup begins after the next regular checkpoint completes
2018-02-08 14:30:08.993 P00 INFO: backup start archive = 000000010000000000000012, lsn = 0/12000028
….
….

To get info about our backups:
[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver info
stanza: clustpgserver
status: ok
db (current)
wal archive min/max (10-1): 00000001000000000000000E / 000000010000000000000012
full backup: 20180208-140653F
timestamp start/stop: 2018-02-08 14:06:53 / 2018-02-08 14:08:19
wal start/stop: 00000001000000000000000E / 00000001000000000000000E
database size: 23.2MB, backup size: 23.2MB
repository size: 2.7MB, repository backup size: 2.7MB
incr backup: 20180208-140653F_20180208-142626I
timestamp start/stop: 2018-02-08 14:26:26 / 2018-02-08 14:26:52
wal start/stop: 000000010000000000000010 / 000000010000000000000010
database size: 23.2MB, backup size: 8.2KB
repository size: 2.7MB, repository backup size: 472B
backup reference list: 20180208-140653F
full backup: 20180208-143006F
timestamp start/stop: 2018-02-08 14:30:06 / 2018-02-08 14:31:30
wal start/stop: 000000010000000000000012 / 000000010000000000000012
database size: 23.2MB, backup size: 23.2MB
repository size: 2.7MB, repository backup size: 2.7MB
[postgres@pgserver ~]$

Now that we have seen how to perform backups with PgBackRest, let’s see how to restore.
First let’s identify the directory of our database files:

[postgres@pgserver ~]$ psql
psql (10.1)
Type "help" for help.
postgres=# show data_directory ;
data_directory
------------------------
/var/lib/pgsql/10/data
(1 row)
postgres=#

And let’s remove all files in the directory

[postgres@pgserver data]$ pwd
/var/lib/pgsql/10/data
[postgres@pgserver data]$ ls
base pg_dynshmem pg_notify pg_stat_tmp pg_wal postmaster.pid
current_logfiles pg_hba.conf pg_replslot pg_subtrans pg_xact
global pg_ident.conf pg_serial pg_tblspc postgresql.auto.conf
log pg_logical pg_snapshots pg_twophase postgresql.conf
pg_commit_ts pg_multixact pg_stat PG_VERSION postmaster.opts
[postgres@pgserver data]$ rm -rf *
[postgres@pgserver data]$

Now if we try to connect, of course we will get errors

[postgres@pgserver data]$ psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5435"?
[postgres@pgserver data]$

So let’s restore with the PgBackRest restore command

[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info restore
2018-02-08 14:52:01.845 P00 INFO: restore command begin 1.28: --db1-path=/var/lib/pgsql/10/data --log-level-console=info --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-08 14:52:03.490 P00 INFO: restore backup set 20180208-143006F
2018-02-08 14:52:21.904 P01 INFO: restore file /var/lib/pgsql/10/data/base/13805/1255 (592KB, 2%) checksum 61f284092cabf44a30d1442ef6dd075b2e346b7f
….
….
2018-02-08 14:53:21.186 P00 INFO: write /var/lib/pgsql/10/data/recovery.conf
2018-02-08 14:53:23.948 P00 INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
2018-02-08 14:53:28.258 P00 INFO: restore command end: completed successfully
[postgres@pgserver ~]$

At the end of the restore, a recovery.conf file is created in the data directory

[postgres@pgserver data]$ cat recovery.conf
restore_command = '/usr/bin/pgbackrest --log-level-console=info --stanza=clustpgserver archive-get %f "%p"'

Now we can restart the PostgreSQL cluster

[postgres@pgserver data]$ pg_ctl start
waiting for server to start....2018-02-08 14:57:06.519 CET [4742] LOG: listening on IPv4 address "0.0.0.0", port 5435
2018-02-08 14:57:06.522 CET [4742] LOG: listening on IPv6 address "::", port 5435
2018-02-08 14:57:06.533 CET [4742] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5435"
2018-02-08 14:57:06.551 CET [4742] LOG: listening on Unix socket "/tmp/.s.PGSQL.5435"
2018-02-08 14:57:06.645 CET [4742] LOG: redirecting log output to logging collector process
2018-02-08 14:57:06.645 CET [4742] HINT: Future log output will appear in directory "log".
...... done
server started

And then connect

[postgres@pgserver data]$ psql
psql (10.1)
Type "help" for help.
postgres=#

Conclusion
In this blog we have shown, with a simple configuration, how to perform backups and restores using PgBackRest. This basic configuration can help for a first use of PgBackRest. In future articles we will go further into more advanced uses of this tool.

 

Cet article Backup and Restore PostgreSQL with PgBackRest I est apparu en premier sur Blog dbi services.

Join Factorization

Jonathan Lewis - Wed, 2018-02-14 09:38

This item is, by a roundabout route, a follow-up to yesterday’s note on a critical difference in cardinality estimates that appeared if you used the coalesce() function in its simplest form as a substitute for the nvl() function. Connor McDonald wrote a followup note about how using the nvl() function in a suitable predicate could lead to Oracle splitting a query into a UNION ALL (in version 12.2), which led me to go back to a note I’d written on the same topic about 10 years earlier where the precursor of this feature already existed but used CONCATENATION instead of OR-EXPANSION. The script I’d used for my earlier article was actually one I’d written in February 2003 and tested fairly regularly since – which brings me to this article, because I finally tested my script against 12.2.0.1 to discover a very cute bit of optimisation.

The business of splitting a query into two parts can be used even when the queries are more complex and include joins (this doesn’t always happen automatically and sometimes has to be hinted, but that can be a costs/statistics thing). For example, from 12.1.0.2 – a query and its execution plan:


select
        *
from
        t1, t2
where
        t1.v1 = nvl(:v1,t1.v1)
and     t2.n1 = t1.n1
;

---------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |         |  1001 |   228K|    11   (0)| 00:00:01 |
|   1 |  CONCATENATION                          |         |       |       |            |          |
|*  2 |   FILTER                                |         |       |       |            |          |
|*  3 |    HASH JOIN                            |         |  1000 |   228K|     8   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL                   | T2      |  1000 |   106K|     4   (0)| 00:00:01 |
|*  5 |     TABLE ACCESS FULL                   | T1      |  1000 |   122K|     4   (0)| 00:00:01 |
|*  6 |   FILTER                                |         |       |       |            |          |
|   7 |    NESTED LOOPS                         |         |     1 |   234 |     3   (0)| 00:00:01 |
|   8 |     NESTED LOOPS                        |         |     1 |   234 |     3   (0)| 00:00:01 |
|   9 |      TABLE ACCESS BY INDEX ROWID BATCHED| T1      |     1 |   125 |     2   (0)| 00:00:01 |
|* 10 |       INDEX RANGE SCAN                  | T1_IDX1 |     1 |       |     1   (0)| 00:00:01 |
|* 11 |      INDEX UNIQUE SCAN                  | T2_PK   |     1 |       |     0   (0)| 00:00:01 |
|  12 |     TABLE ACCESS BY INDEX ROWID         | T2      |     1 |   109 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(:V1 IS NULL)
   3 - access("T2"."N1"="T1"."N1")
   5 - filter("T1"."V1" IS NOT NULL)
   6 - filter(:V1 IS NOT NULL)
  10 - access("T1"."V1"=:V1)
  11 - access("T2"."N1"="T1"."N1")

You can see in this plan how Oracle has split the query into two queries combined through concatenation with FILTER operations at lines 2 (:v1 is null) and 6 (:v1 is not null) to allow the runtime engine to execute only the appropriate branch. You’ll also note that each branch can be optimised separately and in this case the two branches get dramatically different paths because of the enormous difference in the estimated volumes of data.

So let’s move up to 12.2.0.1 and see what happens to this query – but first I’m going to execute a naughty “alter session…”:


------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |  1001 |   180K|    11   (0)| 00:00:01 |
|   1 |  VIEW                                    | VW_ORE_F79C84EE |  1001 |   180K|    11   (0)| 00:00:01 |
|   2 |   UNION-ALL                              |                 |       |       |            |          |
|*  3 |    FILTER                                |                 |       |       |            |          |
|   4 |     NESTED LOOPS                         |                 |     1 |   234 |     3   (0)| 00:00:01 |
|   5 |      NESTED LOOPS                        |                 |     1 |   234 |     3   (0)| 00:00:01 |
|   6 |       TABLE ACCESS BY INDEX ROWID BATCHED| T1              |     1 |   125 |     2   (0)| 00:00:01 |
|*  7 |        INDEX RANGE SCAN                  | T1_IDX1         |     1 |       |     1   (0)| 00:00:01 |
|*  8 |       INDEX UNIQUE SCAN                  | T2_PK           |     1 |       |     0   (0)| 00:00:01 |
|   9 |      TABLE ACCESS BY INDEX ROWID         | T2              |     1 |   109 |     1   (0)| 00:00:01 |
|* 10 |    FILTER                                |                 |       |       |            |          |
|* 11 |     HASH JOIN                            |                 |  1000 |   228K|     8   (0)| 00:00:01 |
|  12 |      TABLE ACCESS FULL                   | T2              |  1000 |   106K|     4   (0)| 00:00:01 |
|* 13 |      TABLE ACCESS FULL                   | T1              |  1000 |   122K|     4   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(:V1 IS NOT NULL)
   7 - access("T1"."V1"=:V1)
   8 - access("T2"."N1"="T1"."N1")
  10 - filter(:V1 IS NULL)
  11 - access("T2"."N1"="T1"."N1")
  13 - filter("T1"."V1" IS NOT NULL)

There’s nothing terribly exciting about the change – except for the disappearance of the CONCATENATION operator and the appearance of the VIEW and UNION ALL operators to replace it (plus you’ll see that the two branches appear in the opposite order in the plan). But let’s try again, without doing that “alter session…”:


--------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                    |  1001 |   229K|    10   (0)| 00:00:01 |
|*  1 |  HASH JOIN                              |                    |  1001 |   229K|    10   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL                     | T2                 |  1000 |   106K|     4   (0)| 00:00:01 |
|   3 |   VIEW                                  | VW_JF_SET$A2355C8B |  1001 |   123K|     6   (0)| 00:00:01 |
|   4 |    UNION-ALL                            |                    |       |       |            |          |
|*  5 |     FILTER                              |                    |       |       |            |          |
|*  6 |      TABLE ACCESS FULL                  | T1                 |  1000 |   122K|     4   (0)| 00:00:01 |
|*  7 |     FILTER                              |                    |       |       |            |          |
|   8 |      TABLE ACCESS BY INDEX ROWID BATCHED| T1                 |     1 |   125 |     2   (0)| 00:00:01 |
|*  9 |       INDEX RANGE SCAN                  | T1_IDX1            |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N1"="ITEM_1")
   5 - filter(:V1 IS NULL)
   6 - filter("T1"."V1" IS NOT NULL)
   7 - filter(:V1 IS NOT NULL)
   9 - access("T1"."V1"=:V1)

The plan now shows a VIEW which is a UNION ALL involving only table t1 in both its branches. The result set from the view is then used as the probe table of a hash join with t2. You’ll note that the name of the view is now VW_JF_SET$A2355C8B – that’s JF for “Join Factorization”, and the alter session I executed to get the first plan was to disable the feature: alter session set "_optimizer_join_factorization" = false;

Join factorization can occur when the optimizer sees a union all view with some tables that are common to both (all) branches of the query, and finds that it can move those tables outside the query while getting the same end result at a lower cost. In this case it happens to be a nice example of how the optimizer can transform and transform again to get to the lowest cost plan.

It’s worth noting that Join Factorization has been around since 11.2.x.x, and Or Expansion has been around for even longer – but it’s not until 12.2 that nvl() transforms through Or Expansion, which allows it to transform through Join Factorization.

You’ll note, by the way, that with this plan we always do a full tablescan of t2, whereas with just Or-Expansion it’s a potential threat that may never (or hardly ever) be realised.  That’s a point to check if you find that the transformation starts to appear inappropriately on an upgrade. There is a hint to disable the feature for a query, but it’s not trivial to get it right, so if you do need to block the feature the smart hint (or SQL Patch) would be opt_param('_optimizer_join_factorization' 'false').

Footnote:

If you want to run the experiments yourself, here’s the script I used to generate the data. It’s more complicated than it needs to be because I use the same tables in several different tests:

rem
rem     Script:         null_plan_122.sql
rem     Author:         Jonathan Lewis
rem     Dated:          February 2018
rem     Purpose:
rem
rem     Last tested
rem             12.2.0.1        Join Factorization
rem             12.1.0.2        Concatenation
rem
rem

drop table t2;
drop table t1;

-- @@setup  -- various set commands etc.

create table t1 (
        n1              number(5),
        n2              number(5),
        v1              varchar2(10),
        v2              varchar2(10),
        v3              varchar2(10),
        v4              varchar2(10),
        v5              varchar2(10),
        padding         varchar2(100),
        constraint t1_pk primary key(n1)
);

insert into t1
select
        rownum,
        rownum,
        rownum,
        trunc(100 * dbms_random.value),
        trunc(100 * dbms_random.value),
        trunc(100 * dbms_random.value),
        trunc(100 * dbms_random.value),
        rpad('x',100)
from all_objects
where
        rownum <= 1000 -- > comment to avoid WordPress format mess
;

create unique index t1_n2 on t1(n2);

create index t1_idx1 on t1(v1);
create index t1_idx2 on t1(v2,v1);
create index t1_idx3 on t1(v3,v2,v1);

create table t2 (
        n1              number(5),
        v1              varchar2(10),
        padding         varchar2(100),
        constraint t2_pk primary key(n1)
);

insert into t2
select
        rownum,
        rownum,
        rpad('x',100)
from all_objects
where
        rownum <= 1000     -- > comment to avoid WordPress format mess
;

create index t2_idx on t2(v1);

begin dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

variable n1 number
variable n2 number
variable v1 varchar2(10)
variable v2 varchar2(10)
variable v3 varchar2(10)

exec :n1 := null
exec :n2 := null
exec :v1 := null
exec :v2 := null
exec :v3 := null

spool null_plan_122

set autotrace traceonly explain

prompt  ============================================
prompt  One colx = nvl(:b1,colx) predicate with join
prompt  ============================================

select
        *
from
        t1, t2
where
        t1.v1 = nvl(:v1,t1.v1)
and     t2.n1 = t1.n1
;

alter session set "_optimizer_join_factorization" = false;

select
        *
from
        t1, t2
where
        t1.v1 = nvl(:v1,t1.v1)
and     t2.n1 = t1.n1
;

alter session set "_optimizer_join_factorization" = true;

set autotrace off

spool off

Caesars Entertainment Transforms Its Iconic Business with Oracle

Oracle Press Releases - Wed, 2018-02-14 07:15
Press Release
Caesars Entertainment Transforms Its Iconic Business with Oracle World-leading gaming and entertainment company selects Oracle Cloud Applications to transform business processes

Redwood Shores, Calif.—Feb 14, 2018

Caesars Entertainment Corporation, the world’s most diversified casino entertainment company, has selected Oracle Cloud Applications to improve the experiences of its guests and employees. With Oracle Cloud Applications, Caesars Entertainment has increased business agility and reduced costs by streamlining financial processes and improving employee productivity and engagement across its entire business operations, which includes Harrah’s, Caesars and Horseshoe.

To modernize business processes while continuing to provide its guests with unsurpassed service across 47 properties in five countries, Caesars Entertainment needed to completely rethink its existing business systems. With Oracle Enterprise Resource Planning (ERP) Cloud  and Oracle Human Capital Management (HCM) Cloud services, Caesars Entertainment has transformed its business by connecting 650 disparate systems to a cloud-based solution that unifies business and employee data on a modern, unified platform.

“We are always looking at how we can provide guests with the best possible services and products,” said Keith Causey, senior vice president and chief accounting officer, Caesars Entertainment. “While we have traditionally focused on the guest experience, a critical part of that process involves the business applications we use to run our organization. Oracle Cloud Applications enabled us to modernize our financial and HR systems so that we could quickly and easily embrace industry best practices, connect disparate applications and data sets and improve productivity.”

Oracle Cloud Applications will enable Caesars Entertainment to benefit from a complete and fully integrated suite of business applications. With Oracle ERP Cloud, Caesars Entertainment will be able to increase productivity, lower costs and improve controls by eliminating spreadsheets and manual processes. Oracle HCM Cloud will enable Caesars Entertainment to find, grow and retain the best talent and achieve complete workforce insights by streamlining access to internal and external talent and delivering advanced reporting and analytics.

“Caesars Entertainment has become part of the American culture, earning its place in the minds of millions worldwide,” said Steve Miranda, executive vice president of applications development at Oracle. “With Oracle Cloud Applications, Caesars Entertainment has embraced modern finance and HR best practices, increased productivity and dedicated more resources to its ongoing mission to provide guests with amazing experiences.”

Contact Info
Evelyn Tam
Oracle PR
+1.650.506.5936
evelyn.tam@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Evelyn Tam

  • +1.650.506.5936

Report: Cloud Services to Increase U.S. GDP by $2 Trillion over the Next Decade

Oracle Press Releases - Wed, 2018-02-14 07:00
Press Release
Report: Cloud Services to Increase U.S. GDP by $2 Trillion over the Next Decade New research from Dr. Michael Mandel, senior fellow at the Mack Institute of Innovation Management at Wharton, predicts cloud accessibility to technologies such as blockchain, AI, and automation, will power the next big wave of U.S. productivity

Oracle Modern Finance Experience, New York, NY—Feb 14, 2018

Oracle (NYSE: ORCL) today published a report examining the potential impact of cloud services on the United States economy. Authored by Dr. Michael Mandel, senior fellow at the Mack Institute of Innovation Management at the Wharton School and commissioned by Oracle, the report estimates that a cumulative US$2 trillion will be added to U.S. Gross Domestic Product over the next ten years as a result of the productivity and innovation gains that cloud services will deliver. These gains will be attained through the widespread diffusion of advanced technologies such as blockchain, artificial intelligence, cognitive computing, machine learning, and intelligent automation; as well as industry best practices.

The report, titled “Intelligent Finance: How CFOs Can Lead the Coming Productivity Boom,” builds upon a 2017 study conducted by Dr. Mandel on behalf of the Technology CEO Council. It provides CFOs and finance leaders with a deeper understanding of the potential productivity and profitability gains that can be realized by embracing cloud services and the emerging technologies and best practices they contain.

Dr. Mandel’s research shows a widening productivity divide between the organizations and industries that have invested in software technologies and those that haven’t. But, the report predicts cloud services will close the productivity gap as organizations and industries that have traditionally lagged in technology adoption begin to take advantage of more cost effective and accessible cloud-based solutions. These gains will benefit workers, shareholders, and the broader economy. 

“The cloud era will give low-productivity organizations and industries access to the same technology and best practices that companies in high-productivity industries benefit from,” said Dr. Michael Mandel. “By standardizing and automating routine tasks, the lower producers will increase efficiency and reduce the cost of many processes, which will help them self-fund further investments in technology, develop new capabilities, and redeploy and hire resources for higher-level and better-paid tasks.”

One low-productivity industry that is well-poised to take advantage of emerging cloud technologies is healthcare—a $3 trillion industry.

“Cloud services now provide us with the basis for integrating our financial data with population health data,” said Michael Murray, senior vice president and chief financial officer at Blue Shield of California, which provides healthcare coverage for more than four million Americans. “Through data analytics and better population health management, the United States could remove hundreds of millions of dollars in costs out of the healthcare system. It’s a huge economic opportunity and will ultimately enhance clinical quality and patient outcomes.”

“The potential impact of emerging technologies on society cannot be understated,” noted Dave Donatelli, executive vice president, Cloud Business Group, Oracle. “In addition to creating new jobs and industries based on technology innovations in areas such as 3D printing, artificial intelligence, and cognitive computing, new technologies can help address some of our country’s basic needs. For example, many Americans are struggling to afford the high cost of healthcare coverage and basic medical expenses. Imagine if technology could predict illness and health risks to keep the population healthier. Similarly, cloud-based technologies have the potential to bring down the cost of education, using artificial intelligence, bots, and IoT to create more efficient institutions and better student outcomes.”

As part of the research, Dr. Mandel conducted in-depth interviews with CEOs, CFOs, and other top executives at companies in key industries identified as essential to U.S. economic growth.

Oracle customers who participated in the report, include:

  • Blue Shield of California (healthcare)
  • Carbon (manufacturing)
  • ConnectOne Bank (financial services)
  • FairfieldNodal (oil & gas)
  • Oracle (high tech)
  • Shawnee State University (education)
  • The Wonderful Company (retail/consumer)


The full report can be downloaded at www.oracle.com/intelligent-finance-report.

Contact Info
Evelyn Tam
Oracle PR
1.650.506.5936
evelyn.tam@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.

Statements in this article relating to Oracle’s future plans, expectations, beliefs, and intentions are “forward-looking statements” and are subject to material risks and uncertainties. Such statements are based on Oracle’s current expectations and assumptions, some of which are beyond Oracle’s control. All information in this article is current as of October 2, 2017 and Oracle undertakes no duty to update any statement in light of new information or future events.

Talk to a Press Contact

Evelyn Tam

  • 1.650.506.5936

SQL Server on Docker and network bridge considerations

Yann Neuhaus - Wed, 2018-02-14 06:46

Let’s continue with this blog post series about SQL Server and Docker. A couple of days ago, I was at a customer site that had already implemented SQL Server 2017 on Linux as Docker containers. It was definitely a very interesting day with a lot of customer experience and feedback, and we discussed many architecture scenarios.

The interesting point here is that I was able to compare with a previous customer who had been using Docker containers for a while in a completely different way. Indeed, my new customer implemented a Docker infrastructure exclusively based on SQL Server containers, whereas the older one had already containerized its applications, which were connected to an external, non-containerized SQL Server environment.

Use case 1 – Containerized apps and virtualized SQL Server environments
Use case 2 – SQL Server containers and virtualized applications

 

In this blog post I want to focus on the first use case in terms of networks.

Connecting to an outside SQL Server (from a Docker perspective) is probably an intermediate solution for many customers who already deal with mission-critical environments implying very restrictive high-availability scenarios and where very high performance is required as well. Don’t get me wrong: I’m not saying Docker is not designed for mission-critical scenarios, but let’s say that fear of the unknown, as with virtualization before, is still predominant, at least for this kind of scenario. I always keep in mind the recurring customer question: is Docker ready for production and for databases? Connecting to a non-containerized SQL Server environment may make sense here, at least to speed up container adoption. That’s my guess, but feel free to comment with your thoughts!

So, in this context we may use different Docker network topologies. I spent some time studying and discussing with customers the network topologies implemented in their contexts. For simple Docker infrastructures (without orchestrators like Swarm or Kubernetes), Docker bridges seem to be predominant, with either Docker0 bridges or user-defined bridges.

 

  • Docker default bridge (Docker0)

For very limited Docker topologies, the default network settings will probably be sufficient with the Docker0 bridge. This is probably the case for my latest customer with only 5 SQL Server containers on top of one Docker engine. By default, each container created without any network specification (and without any Docker engine setting customization) will have one network interface sitting on the docker0 bridge, with an IP from the 172.17.0.0/16 CIDR or whichever CIDR you have configured Docker to use. But have you ever wondered what exactly a bridge is in the Docker world?
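
Before relying on those defaults, it can be worth double-checking which subnet and gateway the Docker0 bridge really uses on your host. A minimal verification sketch with the standard Docker CLI (no assumptions beyond a default installation):

$ sudo docker network ls
$ sudo docker network inspect bridge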

Let’s have a deeper look at it with a very simple example: one Docker engine that hosts two containers, each based on microsoft/mssql-tools, and one outside SQL Server that runs on top of a Hyper-V virtual machine. The picture below shows some network details that I will explain later in this blog post.

blog 128 - 3 - docker network bridge

My two containers can communicate with each other because they sit on the same network bridge, and they are also able to communicate with my database server through NAT. IP masquerading and IP forwarding are enabled on my Docker host.
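
If a container cannot reach the outside SQL Server, it is worth verifying that forwarding and masquerading are really in place on the host. A quick check sketch (assuming iptables is the active firewall backend on this host):

$ sysctl net.ipv4.ip_forward
$ sudo iptables -t nat -S POSTROUTING | grep -i masquerade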

$ sudo docker run -tid --name docker1 microsoft/mssql-tools
77b501fe29af322dd2d1da2824d339a60ba3080c1e61a2332b3cf563755dd3e3

$ sudo docker run -tid --name docker2 microsoft/mssql-tools
3f2ba669591a1889068240041332f02faf970e3adc85619adbf952d5c135d3f4

$ sudo docker ps
CONTAINER ID        IMAGE                   COMMAND                  CREATED             STATUS              PORTS               NAMES
3f2ba669591a        microsoft/mssql-tools   "/bin/sh -c /bin/bash"   7 seconds ago       Up 6 seconds                            docker2
77b501fe29af        microsoft/mssql-tools   "/bin/sh -c /bin/bash"   11 seconds ago      Up 10 seconds                           docker1

 

Let’s take a look at the network configuration of each container. As a reminder, each network object represents a layer 2 broadcast domain with a layer 3 subnet as shown below. Each container is attached to a network through a specific endpoint.

$ sudo docker inspect docker1
[
"Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "985f25500e3d0c55d419790f1ac446f92c8d1090dddfd69987a52aab0717e630",
                    "EndpointID": "bd82669031ad87ddcb61eaa2dad823d89ca86cae92c4034d4925009aae634c14",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
]

$sudo docker inspect docker2
[
"Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.3",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:03",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "985f25500e3d0c55d419790f1ac446f92c8d1090dddfd69987a52aab0717e630",
                    "EndpointID": "140cd8764506344958e9a9725d1c2513f67e56b2c4a1fc67f317c3e555764c1e",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:03",
                    "DriverOpts": null
                }
            }
]

 

To summarize, two IP addresses have been assigned to the docker1 container (172.17.0.2) and the docker2 container (172.17.0.3) from the address range defined for the Docker0 bridge by the Docker internal IPAM module. Each network interface is created with its own MAC address, and the gateway IP address (172.17.0.1) for both containers corresponds to the Docker0 bridge interface.

$ sudo ip a show docker0
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:2a:d0:7e:76 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:2aff:fed0:7e76/64 scope link
       valid_lft forever preferred_lft forever

 

Let’s try to connect from both containers to my SQL Server database:

$ sudo docker exec -it docker1 bash
…
$ sudo docker exec -it docker2 bash
...

 

Then on each container let’s run the following sqlcmd command:

sqlcmd -S 192.168.40.30,1450 -Usa -Ptoto

 

Finally, let's switch to the SQL Server instance and get a picture of the existing connections (IP address 192.168.40.30 and port 1450).

SELECT 
	c.client_net_address,
	c.client_tcp_port,
	c.local_net_address,
	c.protocol_type,
	c.auth_scheme,
	s.program_name,
	s.is_user_process
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions AS s ON c.session_id = s.session_id
WHERE client_net_address <> '<local machine>'

 

blog 128 - 4 - docker network bridge sqlcmd

We may notice that the client IP address is the same for both containers (192.168.40.50), indicating that we are using NAT to connect from each container.

Let’s go back to the Docker engine network configuration. After creating my 2 containers, we may notice the creation of 2 additional network interfaces.

$ ip a show | grep veth*
12: veth45297ff@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
14: veth46a8316@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP

 

What are they? At this point, we are entering the Linux network namespace world. You can read further technical details on the internet, but to keep network namespace concepts simple, I would say they allow running separate network instances (including their own routing tables) that operate independently of each other. In other words, they are a way to isolate different networks from each other while using the same physical network device. Assuming we are using Docker bridge networks, when we create a container, a dedicated network namespace is created in the background that includes a virtual ethernet interface; virtual ethernet interfaces come in interconnected pairs. In fact, a virtual ethernet pair acts as a tube that connects a Docker container namespace (in this context) to the outside world via the default / global namespace where the physical interface exists.

Before digging further into virtual interface details, note that by default Docker doesn't expose network namespace information because it uses its own libcontainer, and the microsoft/mssql-tools Docker image is based on a minimal Linux image that doesn't include the network tools needed to easily show virtual interface information. So, a workaround is to expose a Docker container namespace to the host.

First we have to find out the process id of the container and then link its corresponding proc namespace into the /var/run/netns host directory (creating the directory first if it doesn't exist), as shown below:

$ sudo docker inspect --format '{{.State.Pid}}' docker1
2094
$ sudo ln -s /proc/2094/ns/net /var/run/netns/ns-2094

 

Then we may use the ip netns command to extract the network information:

$ sudo ip netns
ns-2094 (id: 0)
$ sudo ip netns exec ns-2094 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0

 

Here we go. The interesting information is the container network interface 11: eth0@if12

So, one end of the pair is the eth0 interface in the Docker container and the “outside” end corresponds to interface number 12. On the host, interface 12 corresponds to the virtual ethernet adapter veth45297ff. Note that we may also identify the peer corresponding to the container interface (@if11).

$ ip a | grep "^12"
12: veth45297ff@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
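
As a side note, the peer index of a veth interface can also be read directly from the host side with ethtool, which is handy to map a host veth back to its container interface without entering any namespace. A quick sketch, assuming ethtool is installed and reusing the veth name shown above:

$ sudo ethtool -S veth45297ff | grep peer_ifindex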

 

Finally, let’s take a look at the bridge used by the virtual ethernet adapter veth45297ff

$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02422ad07e76       no              veth45297ff
                                                        veth46a8316

 

The other veth (46a8316) corresponds to my second docker2 container.

 

  • User-defined network bridges

But as said previously, using the Docker0 bridge is only suitable for very limited scenarios. User-defined bridges are more common in more complex scenarios like microservice applications because they offer better isolation between containers and the outside world as well as better manageability and customization. At this stage we could also introduce macvlan networks, but probably in the next blog post…

For example, let’s say you want to create two isolated network bridges for a 3-tier application. The users will access the web server (through the exposed port) via the first network (frontend-server). At the same time, you also want to prevent containers that sit on this network from making connections to the outside world. The second network (backend-server) will host containers that must have access to both the outside SQL Server database and the web server.

blog 128 - 5 - docker network bridge segregation

User-defined networks are a good solution to address these requirements. Let's create two user-defined networks. Note that by default containers may make connections to the outside world, but the outside world is not able to make connections to the containers without exposed listening ports. This is why I disabled IP masquerading (com.docker.network.bridge.enable_ip_masquerade=false) for the frontend-server network, to meet the above requirements.

$sudo docker network create \
    --driver bridge \
    --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  backend-server  
$sudo docker network create \
    --driver bridge \
    --subnet 172.19.0.0/16 \
    --gateway 172.19.0.1 \
    --opt com.docker.network.bridge.enable_ip_masquerade=false \
  frontend-server
$ sudo docker network ls 
NETWORK ID          NAME                DRIVER              SCOPE
5c6f48269d2b        backend-server      bridge              local
985f25500e3d        bridge              bridge              local
b1fbde4f4674        frontend-server     bridge              local
ad52b859e3f9        host                host                local
1beda56f93d3        none                null                local

 

Let’s now take a look at the corresponding iptables masquerading rules on my host machine:

$ sudo iptables -t nat -L -n | grep -i "masquerade"
MASQUERADE  all  --  172.20.0.0/16        0.0.0.0/0
MASQUERADE  all  --  172.17.0.0/16        0.0.0.0/0

 

You may notice that only the Docker0 (172.17.0.0/16) and backend-server (172.20.0.0/16) bridges are allowed for IP masquerading.

Then let's create three containers: the first two (docker1 and docker2) will sit on the frontend-server network and the third one (docker3) on the backend-server network. For convenience, I set up fixed hostnames for each container. This time I also used a different Ubuntu image that provides all the necessary network tools, including the ping command.

$ sudo docker run -d --rm --name=docker1 --hostname=docker1 --net=frontend-server -it smakam/myubuntu:v6 bash

$ sudo docker run -d --rm --name=docker2 --hostname=docker2 --net=frontend-server -it smakam/myubuntu:v6 bash

$sudo docker run -d --rm --name=docker3 --hostname=docker3 --net=backend-server -it smakam/myubuntu:v6 bash

$ sudo docker ps
CONTAINER ID        IMAGE                COMMAND             CREATED             STATUS              PORTS               NAMES
225ee13c38f7        smakam/myubuntu:v6   "bash"              2 minutes ago       Up 2 minutes                            docker3
d95014602fe2        smakam/myubuntu:v6   "bash"              4 minutes ago       Up 4 minutes                            docker2
1d9645f61245        smakam/myubuntu:v6   "bash"              4 minutes ago       Up 4 minutes                            docker1

 

First, probably one of the biggest advantages of using user-defined networks (unlike the Docker0 bridge) is automatic DNS resolution between containers on the same user-defined network on the same host (this is the default behavior, but you can override DNS settings by specifying the --dns parameter at container creation time). In fact, for user-defined networks Docker provides an embedded DNS server that resolves container names to IP addresses as containers are added to or removed from the network.
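
To see where this name resolution comes from, you can look at the resolver configuration inside a container attached to the user-defined network. A quick sketch reusing the docker1 container created above (the nameserver you should see is Docker's embedded DNS, typically 127.0.0.11):

$ sudo docker exec -it docker1 cat /etc/resolv.conf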

As expected, I can ping the docker2 container from the docker1 container and vice versa, but the same doesn't apply between docker1 and docker3 or between docker2 and docker3, because they are not sitting on the same network bridge.

$ sudo docker exec -ti docker1 ping -c2 docker2
PING docker2 (172.19.0.3) 56(84) bytes of data.
64 bytes from docker2.frontend-server (172.19.0.3): icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from docker2.frontend-server (172.19.0.3): icmp_seq=2 ttl=64 time=0.058 ms
…
$ sudo docker exec -ti docker2 ping -c2 docker1
PING docker1 (172.19.0.2) 56(84) bytes of data.
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=2 ttl=64 time=0.054 ms
...
$ sudo docker exec -ti docker1 ping -c2 docker3
ping: unknown host docker3
...

 

From a network perspective, on the host we may notice the creation of two additional bridge interfaces and 3 virtual Ethernet adapters after the creation of the containers.

$ brctl show
bridge name     bridge id               STP enabled     interfaces
br-5c6f48269d2b         8000.0242ddad1660       no              veth79ae355
br-b1fbde4f4674         8000.02424bebccdd       no              vethb66deb8
                                                        vethbf4ab2d
docker0         8000.02422ad07e76       no
$ ip a | egrep "^[1-9][1-9]"
25: br-5c6f48269d2b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
28: br-b1fbde4f4674: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
58: vethb66deb8@if57: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-b1fbde4f4674 state UP
64: veth79ae355@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-5c6f48269d2b state UP

 

If I want to make the docker3 container reachable from the docker2 container, I can simply connect the latter to the corresponding network as shown below:

$ sudo docker network connect backend-server docker2

$ sudo docker inspect docker2
[
"Networks": {
                "backend-server": {
                    "IPAMConfig": {},
                    "Links": null,
                    "Aliases": [
                        "d95014602fe2"
                    ],
                    "NetworkID": "5c6f48269d2b752bf1f43efb94437957359c6a72675380c16e11b2f8c4ecaaa1",
                    "EndpointID": "4daef42782b22832fc98485c27a0f117db5720e11d806ab8d8cf83e844ca6b81",
                    "Gateway": "172.20.0.1",
                    "IPAddress": "172.20.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:14:00:03",
                    "DriverOpts": null
                },
                "frontend-server": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "d95014602fe2"
                    ],
                    "NetworkID": "b1fbde4f4674386a0e01b7ccdee64ed8b08bd8505cd7f0021487d32951035570",
                    "EndpointID": "651ad7eaad994a06658941cda7e51068a459722c6d10850a4b546382c44fff86",
                    "Gateway": "172.19.0.1",
                    "IPAddress": "172.19.0.3",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:13:00:03",
                    "DriverOpts": null
                }
            }
]

 

You may notice the container is now connected to both the frontend-server and backend-server networks, thanks to an additional network interface created at the same time.

$ sudo docker exec -it docker2 ip a show | grep eth
59: eth0@if60: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:13:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.19.0.3/16 brd 172.19.255.255 scope global eth0
68: eth2@if69: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:14:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.20.0.3/16 brd 172.20.255.255 scope global eth2

 

Pinging both the docker1 and docker3 containers from the docker2 container is now successful.

$ sudo docker exec -it docker2 ping -c2 docker1
PING docker1 (172.19.0.2) 56(84) bytes of data.
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from docker1.frontend-server (172.19.0.2): icmp_seq=2 ttl=64 time=0.052 ms
…
$ sudo docker exec -it docker2 ping -c2 docker3
PING docker3 (172.20.0.2) 56(84) bytes of data.
64 bytes from docker3.backend-server (172.20.0.2): icmp_seq=1 ttl=64 time=0.082 ms
64 bytes from docker3.backend-server (172.20.0.2): icmp_seq=2 ttl=64 time=0.054 ms
…

 

In this blog post, we took a first look at Docker network bridges and the use cases we may have to deal with for SQL Server instances, depending on the context. As a reminder, user-defined networks allow defining fine-grained policy rules to interconnect containers on different subnets. This is basically what we may want to achieve with microservice applications. Indeed, such applications include some components that need to span multiple networks (backend and frontend networks), whereas others should be isolated (even from the outside) according to their role.

Happy containerization!

Cet article SQL Server on Docker and network bridge considerations est apparu en premier sur Blog dbi services.

Could you trust option_packs_usage_statistics.sql ?

Yann Neuhaus - Wed, 2018-02-14 04:17
Introduction

As a former Oracle LMS qualified auditor, my opinion is sometimes requested before/during/after an Oracle LMS audit, or simply to reassure a customer that his Oracle database is 100% in conformity with Oracle licensing policy. Even if

“The Oracle License Management Services (LMS) Group is the only Oracle group authorized to review and provide opinions on compliance status and will provide guidance, education and impartial opinions on a customer or partner’s compliance state. For more information please visit the following website: http://www.oracle.com/corporate/lms.”

I very much hope that you will find interesting tips in this blog.

Most of the time, when a customer would like to check which Oracle options are used by his database infrastructure, he uses the well-known script “option_packs_usage_statistics.sql”. dbi services checked the options detected by this script provided by Oracle (My Oracle Support DOC ID 1317265.1). Depending on your database usage this script will detect the usage of different options, but can you really trust the output of this script, and how should you interpret it?

Could we trust the output of option_packs_usage_statistics.sql ?

The answer is quite easy and short: NO you can’t !

Why? Because, as with any software, there are bugs, and these bugs lead to false positive detections. The good news is that some of these false positives are documented on My Oracle Support. Indeed, the script options_packs_usage_statistics.sql used and provided by Oracle has 14 documented bugs (My Oracle Support Doc ID 1309070.1) and some other undocumented bugs (e.g. My Oracle Support BUG 17164904). These bugs are related to:

1.    Bug 11902001 – Exclude default users for Feature usage tracking for Securefiles option
2.    Bug 11902142 – Exclude default users for Feature usage tracking for Advanced Compression option
3.    Bug 19618850 – SOLUTION TO PREVENT UNINTENTED ORACLE OPTION USAGE
4.    Query against DBA_FEATURE_USAGE_STATISTICS is not a true test for use of SDO
5.    Bug 16088534 : RMAN default Backup BZIP2 Compression feature is reported wrongly as as an Advanced Compression feature
6.    Bug 22122625 – GETTING FALSE POSITIVES ON USAGE OF ADVANCED INDEX COMPRESSION
7.    Bug 24844549 – ADVANCED INDEX COMPRESSION SHOWS USAGE IN DBA_FEATURE_USAGE_STATISTICS WITH HCC
8.    Bug 16859747 – DBA_FEATURE_USAGE_STATISTICS SHOWS INCORRECT USAGE FOR HEAPCOMPRESSION
9.    Bug 16563444 – HEAT MAP FEATURE USAGE TRACKING IS NOT CORRECT
10.    Bug 19317899 – IMC: IN-MEMORY OPTION IS REPORTED AS BEING USED EVEN INMEMORY_SIZE IS 0
11.    Bug 19308780 – DO NOT FEATURE TRACK OBJECTS FOR IM WHEN INMEMORY_SIZE = 0
12.    Bug 21248059 – DBA_FEATURE_USAGE_STATISTICS BUG IN TRACKING “HYBRID COLUMNAR COMPRESSION” FEAT
13.    Bug 25661076 – DBA_FEATURE_USAGE_STATISTICS INCORRECTLY SHOWS SPATIAL USAGE IN 12C
14.    Bug 23734270 – DBA_FEATURE_USAGE_STATISTICS SHOWS PERMANENT USAGE OF REAL-TIME SQL MONITORING

These bugs may lead to the detection of features such as: Automatic Maintenance – SQL Tuning Advisor & Automatic SQL Tuning Advisor, Real-Time SQL Monitoring, Advanced Security – Oracle Utility Datapump (Export) and Oracle Utility Datapump (Import), Advanced Compression – Heat Map, Advanced Compression – Oracle Utility Datapump (Export) and Oracle Utility Datapump (Import), and so on.

Of course these bugs make the analysis of real option usage especially difficult, even for an experienced database administrator. Additionally, an Oracle database in version 12 can make use of options in maintenance windows without manual activation. That is the case, for instance, for options such as Automatic Maintenance – SQL Tuning Advisor and Automatic SQL Tuning Advisor.

14. Bug 23734270 – DBA_FEATURE_USAGE_STATISTICS SHOWS PERMANENT USAGE OF REAL-TIME SQL MONITORING
On a freshly created 12c database, DBA_FEATURE_USAGE_STATISTICS shows usage of Real-Time SQL Monitoring even if no reports have been run from OEM pages or with DBMS_SQL_MONITOR.
Reason :SQL Monitor reports are automatically generated and saved in AWR but should be considered as system usage.
This behavior is the same for all 12 releases and is not present in 11.2. – Extract of My Oracle Support Bug 23734270

Even if the LMS team is not using the option_packs_usage_statistics.sql script, the output of the LMS_Collection_Tool (ReviewLite.sql) is quite similar. The direct consequence, in case of an Oracle LMS audit, is that the auditor could detect options that you simply never used, and you will have to provide proof of non-usage… if not, you will have to pay the invoice following the final LMS report, as stated in your LMS preliminary/final report.

“Based upon the information provided to License Management Services, the following licensing issues need to be resolved within 30 days from the date of the Final Report.”

“In accordance to Oracle compliance policies, backdated support charges are due for the period of unlicensed usage of Oracle Programs.
Please provide your feedback on this preliminary report within 10 days from the presentation of this report.”- extract of an Oracle LMS audit

Even if I do not have hundreds of cases where the LMS department made a wrong detection, I have concrete stories where the LMS team detected some false positives. The last case was related to the detection of more than 700 usages of Advanced Compression due to unpublished BUG 17164904. Thanks to the MetaLink Doc ID 1993134.1, the bug is explained:

In 12.1.0.1,  the compression counter is incorrectly incremented (COMPRESSCNT=1) for compression=metadata_only (either explicitly or by default) due to unpublished BUG 17164904 – INCORRECT FEATURE USAGE STATISTICS FOR DATA PUMP COMPRESSION, fixed with 12.1.0.2.

How to interpret the output of option_packs_usage_statistics.sql ?

Sometimes this script can report nonsensical option usage. That is the case, for instance, for features provided only since database version 12c but detected on your old version 11g database. In such a case, simply edit the option_packs_usage_statistics.sql script and have a look at the comments. A perfect example of that is the detection of Heat Map usage in database version 11g, whereas this option is only available since version 12c. You can see below another example of wrong option detection, related to “Automatic Maintenance – SQL Tuning Advisor” and “Automatic SQL Tuning Advisor”:


SELECT 'Tuning Pack'                                         , 'Automatic Maintenance - SQL Tuning Advisor'              , '^12\.'                      , 'INVALID' from dual union all  -- system usage in the maintenance window
SELECT 'Tuning Pack'                                         , 'Automatic SQL Tuning Advisor'                            , '^11\.2|^12\.'               , 'INVALID' from dual union all  -- system usage in the maintenance window
SELECT 'Tuning Pack'                                         , 'Real-Time SQL Monitoring'                                , '^11\.2'                     , ' '       from dual union all

This INVALID condition indicates that the detection of this option is due to system usage in the maintenance window in version 12 (Automatic Maintenance – SQL Tuning Advisor) and in versions 11.2 and 12 for Automatic SQL Tuning Advisor. This is also explained a few lines further down in the option_packs_usage_statistics.sql script:


where nvl(CONDITION, '-') != 'INVALID'                   -- ignore features for which licensing is not required without further conditions
    and not (CONDITION = 'C003' and CON_ID not in (0, 1))  -- multiple PDBs are visible only in CDB$ROOT; PDB level view is not relevant
)

In such a case the option does not have to be considered, since the normal behavior of an Oracle database in version 12 is to use this option in the maintenance window. This is just one example to illustrate that some detected options do not have to be licensed, as explained in the script.
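
A quick way to locate every feature flagged with such a condition, before digging into an individual result, is simply to search the script for it. A minimal sketch, assuming the script file name used in this post is present in the current directory:

grep -n "INVALID" option_packs_usage_statistics.sql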

Conclusion

I very much hope that this blog helps you get a better understanding of how to detect what your database infrastructure really uses in terms of Oracle options. Anyway, if you are convinced that you do not use an Oracle database option despite the output of scripts such as option_packs_usage_statistics.sql or ReviewLite.sql which claims the opposite, have a look at My Oracle Support. Look for bugs related to wrong detection of this feature and, with a little bit of luck, you will find something interesting. Oracle is definitely engineered for heroes…

Oracle Engineered For Heroes

 

Cet article Could you trust option_packs_usage_statistics.sql ? est apparu en premier sur Blog dbi services.

Three Quick Tips API Platform CS - Gateway Installation (Part 1)

OTN TechBlog - Tue, 2018-02-13 16:00

This blog post assumes some prior knowledge of API Platform Cloud Service and pertains to the on-premises gateway installation steps. Here we list three useful tips (applicable to 18.1.3+), arranged in no particular order:

  • Before installing the gateway, make sure you have the correct values for "listenIpAddress" and "publishAddress".  This can be done by the following checklist (Linux only):
    • Does the command "hostname -f" return a valid value ?
    • Does the command "ifconfig" list the IP addresses properly?
    • Do you have additional firewall/network policies that may prevent communication with management tier?
    • Do you authoritatively know the internal and public ip/addresses to be used for the gateway node?

            If you do not know the answers to any of the questions, please contact your network administrator.

           If you see issues with the gateway server not starting up properly, incorrect values of "listenIpAddress" and "publishAddress" could be the cause.

  • Before running the "creategateway" action (or any other action involving the "creategateway" like "create-join" for example), do make sure that the management tier is accessible. You can use something like:
    • wget "<http/https>://<management_portal_host>:<management_portal_port>/apiplatform"
    • curl "<http/https>://<management_portal_host>:<management_portal_port>/apiplatform"

           If the above steps fail, then "creategateway" will also not work, so the questions to ask are:

  1. Do we need a proxy?
  2. If we have already specified a proxy, is it the correct proxy?
  3. In case we need a proxy, have we set the "managementServiceConnectionProxy" property in gateway-props.json?

Moreover, it is better if we set the http_proxy/https_proxy to the correct proxy, if proxies are applicable.
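
For example, the proxy can simply be exported in the shell session used to run the gateway installer actions. A minimal sketch (the proxy host and port below are purely illustrative placeholders):

export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080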

  • Know your log locations; please refer to the following list:
    • Logs for troubleshooting "install" or  "configure" actions , we have to refer to <install_dir>/logs directory.
    • Logs for troubleshooting "start" or "stop" actions, we have to refer to <install_dir>/domain/<gateway_name>/(start*.out|(stop*.out)).
    • Logs for troubleshooting "create-join"/"join" actions, we have to refer to <install_dir>/logs directory.
    • To troubleshoot issues post installation (i.e. after the physical node has joined the gateway), we can refer to <install_dir>/domain/<gateway_name>/apics/logs directory. 

We will try to post more tips in the coming weeks, so stay tuned and happy API Management.            

Oracle Expands its Global Startup Ecosystem

Oracle Press Releases - Tue, 2018-02-13 13:48
Press Release
Oracle Expands its Global Startup Ecosystem New “Virtual” Global Scaleup Program Launches; Residential Startup Program Expands to North America with Austin, Texas Location

Redwood Shores, Calif.—Feb 13, 2018

Oracle today announced the expansion of its global startup ecosystem in an effort to increase the impact and support for the wider startup community, reach more entrepreneurs worldwide, and drive cloud adoption and innovation. The expansion includes the launch of a new virtual-style, non-residential global program, named Oracle Scaleup Ecosystem, as well as the addition of Austin to the residential Oracle Startup Cloud Accelerator program. The addition of Austin brings the residential program to North America and expands the accelerator’s reach to nine total global locations.

“In 2017, we launched eight residential programs ahead of schedule and attracted almost 4,000 global startups for only 40 program slots—a clear indication of the tremendous demand,” said Reggie Bradford, Oracle senior vice president, Startup Ecosystem and Accelerator. “Oracle Scaleup Ecosystem is a new global program that allows us to reach more innovators and entrepreneurs, regardless of location, including later-stage scaleup companies who need access to Oracle Cloud solutions and resources without the hands-on offerings our residential program provides. We’re building an ecosystem that enables tangible business value, customer growth and revenue—for the startups, our customers and Oracle.”

Oracle’s global startup mission is to provide enriching, collaborative partnerships to enable next-generation growth and drive cloud-based innovation for startups throughout all stages of their journey. To that end, Oracle offers residential and non-residential startup programs that power cloud-based technology innovation and enable co-creation and co-innovation across the startups, customers, partners and Oracle.

Oracle Scaleup Ecosystem is the new non-residential, virtual-style program designed for startups and venture capital and private equity portfolio companies to enable hypergrowth and scale. Oracle’s Scaleup program is collaborating with leading PE and VC firms and will target high-growth entities across EMEA, JAPAC, and the Americas, as well as a select number of investment groups and strategic partners. The program offers mentoring, R&D support, marketing/sales enablement, migration assistance, cloud credits and discounts, and access to Oracle’s customer and product ecosystems.

"We are always exploring new opportunities and resources that can accelerate innovation and growth for our portfolio companies," said Steve Herrod, managing director at General Catalyst. "Oracle has a formidable cloud-based technology stack, product expertise and a thriving customer and partner base. Collaborating with the Oracle Scaleup program has the potential to benefit both our companies and the broader global technology ecosystem."

Industry veteran and Amazon Web Services (AWS) alum Jason Williamson has been tapped by Bradford to lead the Oracle Scaleup Ecosystem program. Williamson helped launch private equity ecosystem initiatives at AWS. Also an author, professor, entrepreneur and former Oracle employee, Williamson brings a wealth of knowledge and unique skillset to lead Oracle’s Scaleup program.

“Lightspeed works closely with global enterprise technology companies to provide resources and access to Lightspeed’s portfolio companies,” said Sunil Rao, Partner, Business Services, Lightspeed India Partners. “Oracle’s Scaleup program is aptly timed for the emerging ecosystem of enterprise software and SaaS vendors, and access to Oracle's cloud solutions, global customers, partner footprint and industry experts who work closely with startups is super useful.”

“Working with Oracle has helped us fast-track growth with business development and technology enhancements,” said Rich Joffe, CEO, Stella.ai, an AI-based recruiting marketplace platform. “Relationships with Oracle Taleo and now Oracle Scaleup will enhance our performance and capabilities as our business rapidly grows globally.”

Austin is the newest addition to Oracle’s residential startup program, Oracle Startup Cloud Accelerator, bringing the program to North America for a total of nine locations worldwide: Austin, Bangalore, Bristol, Mumbai, Delhi, Paris, Sao Paulo, Singapore and Tel Aviv. The program will select five to six startups per cohort, supporting two cohorts a year. Selected companies will be entitled to hands-on technical and business mentoring by Oracle and industry experts, state-of-the-art technology with free Oracle Cloud credits, full access to a dedicated co-working space, as well as access to Oracle’s vast global ecosystem of startup peers, customers, investors and partners. More program details will be announced in the coming month. Interested startups can sign up to receive more information at oracle.com/startup/TX.

Applications for Oracle Scaleup Ecosystem are accepted on a rolling basis at oracle.com/startup/scaleup.

Contact Info
Julia Allyn
Oracle Corporate Communications
+1.650.607.1338
julia.allyn@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe, and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 

Talk to a Press Contact

Julia Allyn

  • +1.650.607.1338

When is the Best Time to Apply AD & TXK Patches?

Steven Chan - Tue, 2018-02-13 13:36

E-Business Suite 12.2's Online Patching feature allows you to apply patches to your environment while it is still running. For example, you can apply patches to Financials while your end-users are entering bills in Accounts Receivable. These patches are applied using our Applications DBA (AD) and EBS Technology Stack (TXK) tools.

A reader recently asked whether it's advisable to apply our latest EBS 12.2.7 update at the same time as our AD-TXK Delta 10 updates. This is a great question, since it touches on both our goals for the AD and TXK tools and our internal testing practices for new updates and patches.

The short answer: You should apply the latest AD and TXK infrastructure tools in an initial patching cycle, and then apply EBS updates and patches (e.g. EBS 12.2.7) in a later patching cycle.

Why?

There are two reasons:

  1. Latest AD/TXK tools are better. The latest AD and TXK tool updates always include new patching-related features, as well as critical fixes for stability, security, and performance. For example, the September 2017 AD and TXK Delta 10 updates included new features for parallel index optimization, seed data handling, synchronization of theme files on cutover, and more.
     
    If you have the latest patching tools in place, the latest AD/TXK enhancements, improvements, and fixes reduce risk during the patching process itself. 
     
  2. Updates are always tested with the latest AD/TXK tools.  We always test the installation of our latest EBS updates (e.g. EBS 12.2.7) using the latest AD and TXK patches available at the time; we do not generally have the resources to test new EBS patches with older patching tools. In other words, you may encounter unexpected issues if you apply new patches using older patching tools.

Do new AD and TXK patches trigger functional re-testing?

New AD and TXK patches do not change any aspects of how EBS functional products work.  These infrastructure tools affect how patches are applied, not how patches work.  

In other words, having the latest AD/TXK updates may reduce the risk of applying, say, a new HRMS patch, but it will not affect any Human Resources-related functionality at all.

Likewise, applying the latest AD/TXK update does not affect the functionality of any existing EBS products. Infrastructure tools only affect the patching process, not how installed EBS products work. 

How do I find the latest AD and TXK tools?

Check the following reference:

You should periodically check Document 1617461.1 on My Oracle Support for updates, which are made as required. For example, new bundled fixes or critical one-off patches may become available, or the release update packs may be superseded by newer versions.


Categories: APPS Blogs

How we build our customized PostgreSQL Docker image

Yann Neuhaus - Tue, 2018-02-13 13:21

Docker is becoming more and more popular these days and a lot of companies are starting to really use it. On one project we decided to build our own customized Docker image instead of using the official PostgreSQL one. The main reason is that we wanted to compile from source so that we only get what is really required. Why have PostgreSQL compiled with tcl support when nobody will ever use it? Here is how we did it …

To dig in right away, this is the simplified Dockerfile:

FROM debian

# make the "en_US.UTF-8" locale so postgres will be utf-8 enabled by default
ENV LANG en_US.utf8
ENV PG_MAJOR 10
ENV PG_VERSION 10.1
ENV PG_SHA256 3ccb4e25fe7a7ea6308dea103cac202963e6b746697366d72ec2900449a5e713
ENV PGDATA /u02/pgdata
ENV PGDATABASE="" \
    PGUSERNAME="" \
    PGPASSWORD=""

COPY docker-entrypoint.sh /

RUN set -ex \
        \
        && apt-get update && apt-get install -y \
           ca-certificates \
           curl \
           procps \
           sysstat \
           libldap2-dev \
           libpython-dev \
           libreadline-dev \
           libssl-dev \
           bison \
           flex \
           libghc-zlib-dev \
           libcrypto++-dev \
           libxml2-dev \
           libxslt1-dev \
           bzip2 \
           make \
           gcc \
           unzip \
           python \
           locales \
        \
        && rm -rf /var/lib/apt/lists/* \
        && localedef -i en_US -c -f UTF-8 en_US.UTF-8 \
        && mkdir /u01/ \
        \
        && groupadd -r postgres --gid=999 \
        && useradd -m -r -g postgres --uid=999 postgres \
        && chown postgres:postgres /u01/ \
        && mkdir -p "$PGDATA" \
        && chown -R postgres:postgres "$PGDATA" \
        && chmod 700 "$PGDATA" \
        \
        && curl -o /home/postgres/postgresql.tar.bz2 "https://ftp.postgresql.org/pub/source/v$PG_VERSION/postgresql-$PG_VERSION.tar.bz2" \
        && echo "$PG_SHA256 /home/postgres/postgresql.tar.bz2" | sha256sum -c - \
        && mkdir -p /home/postgres/src \
        && chown -R postgres:postgres /home/postgres \
        && su postgres -c "tar \
                --extract \
                --file /home/postgres/postgresql.tar.bz2 \
                --directory /home/postgres/src \
                --strip-components 1" \
        && rm /home/postgres/postgresql.tar.bz2 \
        \
        && cd /home/postgres/src \
        && su postgres -c "./configure \
                --enable-integer-datetimes \
                --enable-thread-safety \
                --with-pgport=5432 \
                --prefix=/u01/app/postgres/product/$PG_VERSION \
                --with-ldap \
                --with-python \
                --with-openssl \
                --with-libxml \
                --with-libxslt" \
        && su postgres -c "make -j 4 all" \
        && su postgres -c "make install" \
        && su postgres -c "make -C contrib install" \
        && rm -rf /home/postgres/src \
        \
        && apt-get update && apt-get purge --auto-remove -y \
           libldap2-dev \
           libpython-dev \
           libreadline-dev \
           libssl-dev \
           libghc-zlib-dev \
           libcrypto++-dev \
           libxml2-dev \
           libxslt1-dev \
           bzip2 \
           gcc \
           make \
           unzip \
        && apt-get install -y libxml2 \
        && rm -rf /var/lib/apt/lists/*

ENV LANG en_US.utf8
USER postgres
EXPOSE 5432
ENTRYPOINT ["/docker-entrypoint.sh"]

We based the image on the latest Debian image; that is line 1. The following lines define the PostgreSQL version we will use and set some environment variables we will use later. What follows is basically installing all the packages required for building PostgreSQL from source, adding the operating system user and group, preparing the directories, fetching the PostgreSQL source code, and then configure, make and make install. Pretty much straightforward. Finally, to shrink the image, we remove all the packages that are no longer required after PostgreSQL has been compiled and installed.
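
A quick sketch of how the image could be built, assuming the Dockerfile above and the docker-entrypoint.sh script shown below sit in the same build context (the image name and tag are just examples):

# build from the directory containing the Dockerfile and docker-entrypoint.sh
docker build -t custom-postgres:10.1 .

The build takes a while because PostgreSQL is compiled from source inside the image.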

The final setup of the PostgreSQL instance happens in the docker-entrypoint.sh script which is referenced at the very end of the Dockerfile:

#!/bin/bash

# these are the environment variables that need to be set
PGDATA=${PGDATA}/${PG_MAJOR}
PGHOME="/u01/app/postgres/product/${PG_VERSION}"
PGAUTOCONF=${PGDATA}/postgresql.auto.conf
PGHBACONF=${PGDATA}/pg_hba.conf
PGDATABASENAME=${PGDATABASE}
PGUSERNAME=${PGUSERNAME}
PGPASSWD=${PGPASSWORD}

# create the database and the user
_pg_create_database_and_user()
{
    ${PGHOME}/bin/psql -c "create user ${PGUSERNAME} with login password '${PGPASSWD}'" postgres
    ${PGHOME}/bin/psql -c "create database ${PGDATABASENAME} with owner = ${PGUSERNAME}" postgres
}

# start the PostgreSQL instance
_pg_prestart()
{
    ${PGHOME}/bin/pg_ctl -D ${PGDATA} -w start
}

# start postgres and do not disconnect
# required for docker
_pg_start()
{
    ${PGHOME}/bin/postgres "-D" "${PGDATA}"
}

# stop the PostgreSQL instance
_pg_stop()
{
    ${PGHOME}/bin/pg_ctl -D ${PGDATA} stop -m fast
}

# initdb a new cluster
_pg_initdb()
{
    ${PGHOME}/bin/initdb -D ${PGDATA} --data-checksums
}


# adjust the postgresql parameters
_pg_adjust_config() {
    # PostgreSQL parameters
    echo "shared_buffers='128MB'" >> ${PGAUTOCONF}
    echo "effective_cache_size='128MB'" >> ${PGAUTOCONF}
    echo "listen_addresses = '*'" >> ${PGAUTOCONF}
    echo "logging_collector = 'on'" >> ${PGAUTOCONF}
    echo "log_truncate_on_rotation = 'on'" >> ${PGAUTOCONF}
    echo "log_filename = 'postgresql-%a.log'" >> ${PGAUTOCONF}
    echo "log_rotation_age = '1440'" >> ${PGAUTOCONF}
    echo "log_line_prefix = '%m - %l - %p - %h - %u@%d '" >> ${PGAUTOCONF}
    echo "log_directory = 'pg_log'" >> ${PGAUTOCONF}
    echo "log_min_messages = 'WARNING'" >> ${PGAUTOCONF}
    echo "log_autovacuum_min_duration = '60s'" >> ${PGAUTOCONF}
    echo "log_min_error_statement = 'NOTICE'" >> ${PGAUTOCONF}
    echo "log_min_duration_statement = '30s'" >> ${PGAUTOCONF}
    echo "log_checkpoints = 'on'" >> ${PGAUTOCONF}
    echo "log_statement = 'none'" >> ${PGAUTOCONF}
    echo "log_lock_waits = 'on'" >> ${PGAUTOCONF}
    echo "log_temp_files = '0'" >> ${PGAUTOCONF}
    echo "log_timezone = 'Europe/Zurich'" >> ${PGAUTOCONF}
    echo "log_connections=on" >> ${PGAUTOCONF}
    echo "log_disconnections=on" >> ${PGAUTOCONF}
    echo "log_duration=off" >> ${PGAUTOCONF}
    echo "client_min_messages = 'WARNING'" >> ${PGAUTOCONF}
    echo "wal_level = 'replica'" >> ${PGAUTOCONF}
    echo "hot_standby_feedback = 'on'" >> ${PGAUTOCONF}
    echo "max_wal_senders = '10'" >> ${PGAUTOCONF}
    echo "cluster_name = '${PGDATABASENAME}'" >> ${PGAUTOCONF}
    echo "max_replication_slots = '10'" >> ${PGAUTOCONF}
    echo "work_mem=8MB" >> ${PGAUTOCONF}
    echo "maintenance_work_mem=64MB" >> ${PGAUTOCONF}
    echo "wal_compression=on" >> ${PGAUTOCONF}
    echo "max_wal_senders=20" >> ${PGAUTOCONF}
    echo "shared_preload_libraries='pg_stat_statements'" >> ${PGAUTOCONF}
    echo "autovacuum_max_workers=6" >> ${PGAUTOCONF}
    echo "autovacuum_vacuum_scale_factor=0.1" >> ${PGAUTOCONF}
    echo "autovacuum_vacuum_threshold=50" >> ${PGAUTOCONF}
    # Authentication settings in pg_hba.conf
    echo "host    all             all             0.0.0.0/0            md5" >> ${PGHBACONF}
}

# initialize and start a new cluster
_pg_init_and_start()
{
    # initialize a new cluster
    _pg_initdb
    # set params and access permissions
    _pg_adjust_config
    # start the new cluster
    _pg_prestart
    # set username and password
    _pg_create_database_and_user
}

# check if $PGDATA exists
if [ -e ${PGDATA} ]; then
    # when $PGDATA exists we need to check if there are files
    # because when there are files we do not want to initdb
    if [ -e "${PGDATA}/base" ]; then
        # when there is the base directory this
        # probably is a valid PostgreSQL cluster
        # so we just start it
        _pg_prestart
    else
        # when there is no base directory then we
        # should be able to initialize a new cluster
        # and then start it
        _pg_init_and_start
    fi
else
    # create PGDATA first (initdb accepts an existing empty directory)
    mkdir -p ${PGDATA}
    # initialize and start the new cluster
    _pg_init_and_start
    # create the log directory for the logging collector
    mkdir -p ${PGDATA}/pg_log
fi
# restart and do not disconnect from the postgres daemon
_pg_stop
_pg_start

The important point here is that PGDATA is a persistent volume that is mounted into the Docker container. When the container comes up we need to check whether something that looks like a PostgreSQL data directory is already there. If it is, we just start the instance with what is there; if nothing is there, we create a new cluster. Remember: this is just a template and you might need to do more checks in your case. The same is true for what we add to pg_hba.conf here: this is nothing you should do on real systems, but it can be handy for testing.
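
A minimal usage sketch (the image name, host directory and credentials are examples only): the environment variables defined in the Dockerfile and a host volume for PGDATA are passed in at run time.

# the host directory must be writable by uid/gid 999, the postgres user created in the image
docker run -d --name pg10 \
    -e PGDATABASE=mydb \
    -e PGUSERNAME=myuser \
    -e PGPASSWORD=secret \
    -v /srv/pgdata:/u02/pgdata \
    -p 5432:5432 \
    custom-postgres:10.1

On the first start the entrypoint script initializes a new cluster under /u02/pgdata/10 and creates the database and user; on subsequent starts it detects the existing base directory and simply starts the instance.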

Hope this helps …

 

This article How we build our customized PostgreSQL Docker image appeared first on Blog dbi services.

Oracle IoT Cloud for Industry 4.0 Helps Organizations Make Dramatic Process Improvements for More Intelligent Supply Chains

Oracle Press Releases - Tue, 2018-02-13 10:30
Press Release
Oracle IoT Cloud for Industry 4.0 Helps Organizations Make Dramatic Process Improvements for More Intelligent Supply Chains New Augmented Reality, Machine Vision, Digital Twin and Automated Data Science capabilities enhance production, logistics, warehousing and maintenance

Oracle Modern Finance Experience, New York, NY—Feb 13, 2018

Empowering modern businesses to improve production intelligence and market responsiveness, Oracle today unveiled new Industry 4.0 capabilities for Oracle Internet of Things (IoT) Cloud. The advanced monitoring and analytics capabilities of the new offering enable organizations to improve efficiency, reduce costs, and identify new sources of revenue through advanced tracking of assets, workers, and vehicles; real-time issue detection; and predictive analytics.

According to The Economist Intelligence Unit, 63 percent of manufacturers have either undergone substantial digital transformation or are in the process of transforming parts of their organization, and 19 percent are developing transformation strategies. To remain competitive in the modern economy, businesses need to leverage new technologies and data to modernize their supply chains and improve visibility, predictive insights, and automation through connected workflows.

With new augmented reality, machine vision, digital twin and data science capabilities, Oracle IoT Cloud enables organizations to gain rich insight into the performance of assets, machines, workers, and vehicles so they can optimize their supply chain, manufacturing, and logistics; reduce time to market for new products; and enable new business models.

“IoT is the great enabler of Industry 4.0’s potential, providing real-time visibility and responsiveness at every step of the production process – from raw materials to customer fulfillment,” said Bhagat Nainani, group vice president, IoT Applications at Oracle.

“Oracle empowers organizations to create smart factories and modern supply chains with seamless interaction models between business applications and physical equipment. By receiving real-time data streams enhanced with predictive insights, our IoT applications provide intelligent business processes that deliver quick ROI.”

Today’s expansion follows the recent announcement of artificial intelligence, digital thread and digital twin for supply chain, as well as industry-specific solutions for Oracle IoT Cloud. Oracle IoT Cloud is offered both as Software-as-a-Service (SaaS) applications and as Platform-as-a-Service (PaaS) offerings, enabling a high degree of adaptability for even the most demanding implementations.

“We plan to leverage Oracle IoT Cloud and its machine learning capabilities to automatically analyze information gathered from the robot and process-monitoring systems. These analytics could help Noble identify ways to reduce cycle time, improve the manufacturing process, enhance product quality, and cut downtime,” said Scott Rogers, technical director at Noble Plastics.

Oracle plans to add the new capabilities across the entire range of IoT Cloud Applications – Asset Monitoring, Production Monitoring, Fleet Monitoring, Connected Worker, and Service Monitoring for Connected Assets:

  • Digital Twin: Enables remote users to monitor the health of assets and prevent failures before they occur, as well as running simulations of “what-if” scenarios in the context of the business processes. With Digital Twin, organizations have a new operational paradigm to interact with the physical world, allowing lower operational and capital expenditures, minimizing downtime, and optimizing asset performance.
  • Augmented Reality: Gives operators and plant managers the ability to view operational metrics and related equipment information in the context of the physical asset for faster troubleshooting and assisted maintenance. In addition, the use of AR in training technicians reduces errors and on-boarding time, and improves user productivity.
  • Machine Vision: Provides detailed non-intrusive visual inspections, which can detect defects invisible to the naked eye, at high speed and scale. Following the rapid inspection, Machine Vision sets in motion appropriate corrective actions when anomalies and errors are spotted.
  • Auto Data Science: Automated business-specific data science and artificial intelligence algorithms continuously analyze asset utilization, production yield and quantity, inventory, fleet performance, as well as worker safety concerns, to predict issues before they arise. Auto Data Science features enable users to see performance metrics of each step in the modern supply chain with the ability to drill down into specific issues at each location without employing an army of data scientists.
 

Oracle IoT Cloud enables companies to monitor capital intensive assets to reduce downtime and servicing costs, and track utilization for accurate lifecycle insights and asset depreciation data, which improves enterprise procurement efficiency. The rich pool of data created by sensors within products enables organizations to offer their products as a service, gain insight into how customers are using their products, and offer improved value-added services that drive new sources of revenue.

Contact Info
Vanessa Johnson
Oracle PR
+1.650.607.1692
vanessa.n.johnson@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Vanessa Johnson

  • +1.650.607.1692

Oracle Powers the Next Generation of Finance with New Artificial Intelligence Applications

Oracle Press Releases - Tue, 2018-02-13 10:15
Press Release
Oracle Powers the Next Generation of Finance with New Artificial Intelligence Applications Intelligent applications drive smarter finance decisions, increase efficiencies, and improve collaboration for higher revenues and reduced costs

Modern Finance Experience 2018, New York, NY—Feb 13, 2018

To empower the office of the CFO with data-driven insights they need to adapt to change, capitalize on new market opportunities, and increase profitability, Oracle today announced new artificial intelligence-based applications for finance. By applying advanced data science and machine learning to data from the Oracle Data Cloud and from partners, the new Oracle Adaptive Intelligent Applications for Enterprise Resource Planning (ERP) help automate transactional work and streamline business processes.

Oracle Adaptive Intelligent Applications for ERP are designed to enhance existing applications, including financials, procurement, enterprise performance management, order management, and manufacturing within the ERP Cloud suite.

CFOs and modern finance professionals are under pressure to increase the agility and effectiveness of their organizations. As such, they need to constantly monitor and assess what is working and what is not and redeploy resources for maximum returns.

“To increase their agility, organizations need to leverage the right tools to help improve process efficiency and uncover insights that can guide a business forward,” said Rondy Ng, senior vice president, Oracle Applications Development. “Oracle helps finance leaders drive business transformation with ready-to-go applications that combine advanced machine learning with the industry’s most comprehensive suite of cloud applications to deliver immediate value and results.”

With Oracle Adaptive Intelligent Applications for ERP, finance leaders can benefit from:

  • Better insight: Applying analytics and artificial intelligence to finance can improve performance and increase agility across payables, receivables, procurement, and fiscal period close processes. Intelligent applications are also able to provide suggested actions to help mitigate vendor risk and fraud activity by detecting exceptions in vendor selection criteria.
  • Greater efficiency: Robotic process automation and artificial intelligence capabilities enable touchless transaction processing, minimizing the chance of human error.
  • Smarter business outcomes: Oracle delivers immediate impact by infusing machine learning across the entire suite of business applications; this is done by leveraging data from the Oracle Data Cloud and from partners to derive insights across multiple channels and platforms, including finance, HR, and project management to support strategic business decision-making.
  • Increased influence: The rich insights available to finance leaders via artificial intelligence empower CFOs to anticipate what comes next for the business and to make wise decisions, increasing the influence of the CFO and finance team in the organization.

For example, using Oracle Adaptive Intelligent Applications for ERP can help a finance team at a large national retail brand collect first-party data on their suppliers, such as supplier purchase history, percentage of revenue, discounts taken with third-party data on supplier revenue, credit score, and other company attributes. The finance organization can then decide which suppliers to double down on and which to cease doing business with for maximum cost savings, while maintaining quality standards. The ability to quickly fine tune the business based on data-driven insights will increase the finance function’s value in the organization and CEOs will increasingly rely on the CFO and finance team for strategic recommendations to improve business performance.

By applying advanced data science and machine learning to Oracle’s web-scale data and an organization’s own data, the new Adaptive Intelligent Apps can react, learn, and adapt in real time based on historical and dynamic data, while continuously delivering better business insights.

According to the Gartner report, “Impacts of Artificial Intelligence on Financial Management Applications,” written by Nigel Rayner and Christopher Iervolino, “The transformational potential of AI in financial management applications will come in the next two to three years as more AI technologies are embedded directly into financial management processes to automate complex, non-routine activities with little or no human intervention. Also, using AI to improve the accuracy and effectiveness of financial forecasting and planning will transform these processes.”1

The Oracle Adaptive Intelligent Apps are built into the existing Oracle Cloud Applications to deliver the industry’s most powerful AI-based modern business applications across finance, human resources, supply chain and manufacturing, commerce, customer service, marketing, and sales. The apps are powered by insights from the Oracle Data Cloud, which is the largest third-party data marketplace in the world with a collection of more than 5 billion global consumer and business IDs and more than 7.5 trillion data points collected monthly.

1 Gartner Report: “Impacts of Artificial Intelligence on Financial Management Applications,” Analysts Nigel Rayner, Christopher Iervolino, Published November 7, 2017.

Contact Info
Evelyn Tam
Oracle PR
1.650.506.5936
evelyn.tam@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation.

Statements in this article relating to Oracle’s future plans, expectations, beliefs, and intentions are “forward-looking statements” and are subject to material risks and uncertainties. Such statements are based on Oracle’s current expectations and assumptions, some of which are beyond Oracle’s control. All information in this article is current as of October 2, 2017 and Oracle undertakes no duty to update any statement in light of new information or future events.

Talk to a Press Contact

Evelyn Tam

  • 1.650.506.5936

Using Let's Encrypt with Oracle Linux in Oracle Cloud Infrastructure

Wim Coekaerts - Tue, 2018-02-13 10:13

I stole Sergio's headline here and I am just going to link to his blog :)...

Sergio wrote up a how-to on using a let's encrypt cert and installing it on OL using nginx in an Oracle Cloud instance created and deployed with Terraform.

That's a lot of words right there, but it should demonstrate a few things:

  • All the extra packages we have been publishing of late in the Oracle Linux EPEL (Extra Packages for Enterprise Linux) mirror. (yes they're the same packages but they're built on Oracle Linux, the packages are signed by us and they're on the same yum repo so you don't have to install separate files to get to it.) This includes certbot etc.. that you need for this.
  • The convenience of having terraform and terraform-provider-oci RPMs to easily get going without downloading anything from elsewhere (a quick install sketch follows below).
  • Integration of Oracle Linux yum servers inside Oracle Cloud Infrastructure for fast and easy access with no external network traffic charges.
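
As a rough sketch (the package names are taken from the points above; which repositories you need to enable may differ per Oracle Linux release and region), getting the tooling onto an OL instance looks something like this:

# certbot is in the Oracle Linux EPEL mirror; terraform and terraform-provider-oci are shipped as RPMs as mentioned above
sudo yum install -y certbot terraform terraform-provider-oci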

So you can find his blog here.

Oracle Transforms Enterprise Data Management

Oracle Press Releases - Tue, 2018-02-13 10:00
Press Release
Oracle Transforms Enterprise Data Management New Oracle Enterprise Data Management Cloud extends industry leading enterprise performance management suite

Oracle Modern Finance Experience, New York, NY—Feb 13, 2018

To meet growing demand from customers engaged in transformation efforts and to improve business agility, Oracle today announced Oracle Enterprise Data Management Cloud. Part of Oracle Enterprise Performance Management (EPM) Cloud, the new offering provides a single platform for easy management of critical enterprise data assets (such as the Chart of Accounts), and improved data integrity and alignment.

Today’s rapidly changing business environment presents multiple data alignment challenges. Cloud adoption, mergers and acquisitions, reorganizations and restructuring can create data inconsistencies that require finance teams to reconcile disparate data sets and associated metadata. Changes to application metadata, dimensions, hierarchies, mappings and related attributes are often handled manually through spreadsheets, email, and in-person meetings.

To help finance leaders eliminate manual errors and inconsistencies, create a single view of all enterprise data, and realize their vision for front and back-office business transformation, Oracle Enterprise Data Management Cloud provides centralized, self-service enterprise data maintenance, and data sharing and rationalization.

“As organizations grow and evolve, business and finance leaders face an increasingly complex range of challenges in managing and governing their enterprise data assets that cannot be successfully addressed through traditional approaches,” said Hari Sankar, group vice president, EPM product management at Oracle. “With Oracle Enterprise Data Management Cloud, we are providing a modern platform that streamlines business transformation efforts and enables organizations to maintain data integrity, accuracy and consistency across all their applications – in the cloud and on-premises.”

Key benefits that Oracle Enterprise Data Management Cloud can provide include:

  • Faster cloud adoption: Migrate and map enterprise data elements and on-going changes across public, private and hybrid cloud environments from Oracle or third parties.
  • Enhanced business agility: Drive faster business transformation through modeling M&A scenarios, reorganizations and restructuring, chart of accounts standardization and redesign.
  • Better alignment of enterprise applications: Manage on-going changes across front-office, back-office and performance management applications through self-service enterprise data maintenance, sharing and rationalization.
  • System of reference for all your enterprise data: Support enterprise data across business domains including: master data, reference data, dimensions, hierarchies, business taxonomies, associated relationships, mappings and attributes across diverse business contexts.

The addition of Oracle Enterprise Data Management Cloud rounds out Oracle’s industry-leading EPM Cloud suite, which has been adopted by thousands of organizations around the world. The new offering has already garnered significant attention with customers such as SunTrust Bank, Baha Mar, Diversey, and others selecting the service to support their business transformation efforts.

Additional Information

For additional information on Oracle EPM Cloud, visit Oracle Enterprise Performance Management (EPM) Cloud’s Facebook and Twitter or the Modern Finance Leader blog.

Contact Info
Evelyn Tam
Oracle PR
1.650.506.5936
evelyn.tam@oracle.com
About Oracle

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.

Talk to a Press Contact

Evelyn Tam

  • 1.650.506.5936
