# Feed aggregator

### Non unique clustered index

Tom Kyte - Thu, 2018-02-15 13:26
Hi Tom, We have a table with a non-incremental composite key of (dialaog_id, insertion_date). This creates problems since we have insertions in the middle of the leaves. Data has to be relocated. We cannot use incremental keys since this cre...
Categories: DBA Blogs

### prerequisites for expdp/impdp

Tom Kyte - Thu, 2018-02-15 13:26
Hi, I want to transfer data between test systems; the version is Oracle Database 12c Release 12.1.0.1.0 - 64bit Production. I wrote scripts for a full export and import (expdp/impdp). When importing I get many errors because objects do not exist...
Categories: DBA Blogs

### Skipping the incorrect email Ids

Tom Kyte - Thu, 2018-02-15 13:26
Hi Tom, I am not too sure whether this question has been asked earlier or not but here it goes. I have a setup where pl/sql code is generating reports on a daily, monthly basis; all the reports are in .csv format, the reports generated ar...
Categories: DBA Blogs

### SQL Server Management Studio 17.4: Vulnerability Assessment

Yann Neuhaus - Thu, 2018-02-15 10:01

SQL Server Management Studio is a well-known integrated environment used to manage SQL Server infrastructure.
This new version 17.4 supports SQL Server from 2008 up to 2017. It enhances existing features like Showplan and XE Profiler (complete list here), but also adds an interesting new one: Vulnerability Assessment.

Vulnerability Assessment scans a database to help you track security holes and deviations. The rules defining those deviations are based on Microsoft SQL Server best practices.
Let's have a look at this new feature.

Once SSMS 17.4 is installed, choose the database you want to scan, right-click it and select Tasks, Vulnerability Assessment and Scan For Vulnerabilities…:

Select where you want to save the scan report and click OK:

Once you click OK, the scan starts:

At the end of the scan, the Vulnerability Assessment Results are displayed:

The report is displayed in a Management Studio pane showing the number of checks that were run and how many issues were found at each risk level, from Low to High, along with links to SQL Server security best practices.
Review each failed check to confirm whether it is a real security issue for your environment.
For each failed check you get a description of the issue, its impact, the rule query that was applied, and a possible remediation script:

It is also possible to approve results as a baseline, even if they are flagged as a potential risk. Subsequent scans will then treat results matching the baseline as passed.

Once the issues are fixed or the baseline is set, the Vulnerability Assessment can be run again to verify the effect of the actions performed:

This new feature integrated into Management Studio gives you the ability to check that all your databases have a good level of security, and also to maintain that level over time.
A great new feature!

This article, SQL Server Management Studio 17.4: Vulnerability Assessment, appeared first on Blog dbi services.

### Backup and Restore PostgreSQL with PgBackRest II

Yann Neuhaus - Thu, 2018-02-15 09:41

In a previous blog I showed basic usage of PgBackRest, a tool to back up and restore PostgreSQL databases. In this blog I am going to cover some of this tool's more useful features, with practical examples of tasks we can do with it. Of course, the official documentation remains the best source of knowledge.

Encryption
Nowadays encryption of backups is critical and mandatory for many companies. PgBackRest allows us to encrypt the repository where backups are stored. A passphrase is used to encrypt/decrypt the files of the repository. As you may already know, it is recommended to use a strong passphrase. In the following demonstration we use openssl to generate one.
```
[postgres@pgserver ~]$ openssl rand -base64 48
FhXg7oW2pZb9UICZ4iYZPn3X4I6fF0ni7IL6QjaB1IL8qz4LIrP+GW+XqCZqIi3w
```
Once the passphrase is generated, we update the PgBackRest configuration file with two options: repo-cipher-pass and repo-cipher-type.
```
[postgres@pgserver clustpgserver]$ cat /etc/pgbackrest.conf
[global]
repo-path=/var/lib/pgbackrest
repo-cipher-pass=FhXg7oW2pZb9UICZ4iYZPn3X4I6fF0ni7IL6QjaB1IL8qz4LIrP+GW+XqCZqIi3w
repo-cipher-type=aes-256-cbc

[clustpgserver]
db-path=/var/lib/pgsql/10/data
retention-full=2
```

The next step is to create the stanza:

```
[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 stanza-create
2018-02-13 13:54:50.447 P00 INFO: stanza-create command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-13 13:55:04.520 P00 INFO: stanza-create command end: completed successfully
```

As we can see, the system automatically detects that the repository is encrypted and rewrites the command to include the --repo-cipher-pass and --repo-cipher-type options. After the creation of the stanza, we can check its status:

```
[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 check
2018-02-13 13:56:08.999 P00 INFO: check command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-13 13:57:08.026 P00 INFO: WAL segment 00000002000000000000004C successfully stored in the archive at '/var/lib/pgbackrest/archive/clustpgserver/10-1/0000000200000000/00000002000000000000004C-f5ced60cd351d74a91c9ce2e913b761144165e28.gz'
2018-02-13 13:57:08.030 P00 INFO: check command end: completed successfully
```
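As an aside, the cipher configured above is standard aes-256-cbc, so the effect of the passphrase can be illustrated with openssl alone. The sketch below is my own stand-alone illustration (the /tmp file names are hypothetical and have nothing to do with pgBackRest internals):

```shell
# Stand-alone aes-256-cbc illustration; file paths are hypothetical examples.
pass=$(openssl rand -base64 48)          # same generator used for repo-cipher-pass
echo "backup metadata" > /tmp/demo_info.txt
openssl enc -aes-256-cbc -pbkdf2 -pass "pass:$pass" \
        -in /tmp/demo_info.txt -out /tmp/demo_info.enc
# Only the matching passphrase recovers the plaintext:
decrypted=$(openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:$pass" -in /tmp/demo_info.enc)
echo "$decrypted"
```

pgBackRest handles all of this transparently; the point is only that, without the passphrase, the encrypted repository files are unreadable.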
Everything seems fine, so let's run a backup. Note that the output is truncated:
```
[postgres@pgserver ~]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 backup
2018-02-13 14:01:40.012 P00 INFO: backup command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --retention-full=2 --stanza=clustpgserver
WARN: no prior backup exists, incr backup has been changed to full
2018-02-13 14:01:54.118 P00 INFO: execute non-exclusive pg_start_backup() with label "pgBackRest backup started at 2018-02-13 14:01:52": backup begins after the next regular checkpoint completes
...
type=aes-256-cbc --repo-path=/var/lib/pgbackrest --retention-archive=2 --retention-full=2 --stanza=clustpgserver
2018-02-13 14:35:08.281 P00 INFO: full backup total < 2 - using oldest full backup for 10-1 archive retention
2018-02-13 14:35:08.801 P00 INFO: expire command end: completed successfully
```
In a non-encrypted repository, the backup.info file can be read. With encryption enabled, it no longer can.
```
[postgres@pgserver clustpgserver]$ less /var/lib/pgbackrest/backup/clustpgserver/backup.info
"/var/lib/pgbackrest/backup/clustpgserver/backup.info" may be a binary file.  See it anyway?
```

And using the strings command, we can see that the file is encrypted:

```
[postgres@pgserver clustpgserver]$ strings /var/lib/pgbackrest/backup/clustpgserver/backup.info
Salted__Fx
.;Ru
cz4@
do:t
\pi3"E
VUSO
}a.R*
Wx5M
,?,W
3CXWB
```

From now on, backups cannot be used unless the passphrase is provided.

Restore in another location
PgBackRest allows restoring to another location. This can be useful if we want to duplicate our cluster on the same server or to another server. In the following demonstration, let's duplicate on the same server. The data directory of the source cluster is /var/lib/pgsql/10/data:

```
postgres=# show data_directory;
     data_directory
------------------------
 /var/lib/pgsql/10/data
(1 row)
```

To duplicate to a new data directory, /u01/devdata for example, the --db-path option is used:

```
[postgres@pgserver log]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-path=/u01/devdata restore
...
2018-02-14 09:40:05.755 P01 INFO: restore file /u01/devdata/base/1/13657 (0B, 100%)
2018-02-14 09:40:05.773 P01 INFO: restore file /u01/devdata/base/1/13652 (0B, 100%)
2018-02-14 09:40:05.811 P01 INFO: restore file /u01/devdata/base/1/13647 (0B, 100%)
2018-02-14 09:40:05.983 P01 INFO: restore file /u01/devdata/base/1/13642 (0B, 100%)
2018-02-14 09:40:06.067 P00 INFO: write /u01/devdata/recovery.conf
2018-02-14 09:40:14.403 P00 INFO: restore global/pg_control (performed last to ensure aborted restores cannot be started)
2018-02-14 09:40:30.187 P00 INFO: restore command end: completed successfully
```
After the duplication, don't forget to change the port (as we are on the same server) and then start the new cluster:
```
postgres=# show data_directory ;
 data_directory
----------------
 /u01/devdata
(1 row)
```
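The port change can be sketched as follows. This is a hypothetical illustration run against a scratch copy of postgresql.conf (port 5436 is an arbitrary free port; the real target directory here was /u01/devdata and the source cluster listens on 5435):

```shell
# Hypothetical sketch: a scratch directory stands in for the restore target.
PGDATA_NEW=$(mktemp -d)
echo "port = 5435" > "$PGDATA_NEW/postgresql.conf"      # setting inherited from the source
sed -i 's/^port = .*/port = 5436/' "$PGDATA_NEW/postgresql.conf"
grep '^port' "$PGDATA_NEW/postgresql.conf"
# on a real restore target, you would then run: pg_ctl -D /u01/devdata start
```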

Restore specific databases
With PgBackRest, we can restore specific user databases. Note that built-in databases (template0, template1 and postgres) are always restored.
Let's show an example. Our source cluster currently has two user databases: test and sandbox.
```
sandbox=# \l
                              List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 sandbox   | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
(5 rows)
```
In sandbox we have a table mytab with 2 rows
```
sandbox=# \c sandbox
You are now connected to database "sandbox" as user "postgres".
sandbox=# table mytab;
 id
----
  1
  2
(2 rows)
```
Now let's restore the cluster with only the test database; the --db-include option is used:
```
[postgres@pgserver log]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-path=/u01/devdata --db-include=test restore
2018-02-14 10:11:00.948 P00 INFO: restore command begin 1.28: --db-include=test=1 --db1-path=/u01/devdata --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --stanza=clustpgserver
2018-02-14 10:11:05.137 P00 INFO: restore backup set 20180214-095439F_20180214-100446I
2018-02-14 10:11:25.110 P00 INFO: remap $PGDATA directory to /u01/devdata
...
```
After the restore completes, let's start the new cluster and verify which databases are present.
```
[postgres@pgserver devdata]$ psql -p 5436
psql (10.1)
Type "help" for help.

postgres=# \l
                              List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 sandbox   | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
(5 rows)
```

What?! The sandbox database is still present despite the use of the --db-include=test option. But if we try to connect to it, we get an error:

```
postgres=# \c sandbox
FATAL: relation mapping file "base/24581/pg_filenode.map" contains invalid data
Previous connection kept
```

And if we compare, at OS level, the size of the database's files on the source cluster and on the target:

```
[postgres@pgserver log]$ du -sh /var/lib/pgsql/10/data/base/24581
7.8M /var/lib/pgsql/10/data/base/24581
[postgres@pgserver log]$ du -sh /u01/devdata/base/24581
16K  /u01/devdata/base/24581
```
We can see that at the target cluster, sandbox uses less disk space during the selective restore than it would have if the entire database had been restored. To finish the selective restore, we have to manually drop the sandbox database. Indeed PgBackRest cannot automatically drop the database because the cluster is not accessible until the recovery process finishes.
```
postgres=# drop database sandbox;
DROP DATABASE
```

Automatic cleanup of expired backups
Another nice feature of PgBackRest is that expired backups are automatically removed.
If we check our pgbackrest.conf file, we see that retention-full is set to 2. This means that two full backups will be retained; so when we take a third full backup, the oldest full backup and all of its corresponding incremental and differential backups will be expired and removed.
```
[postgres@pgserver log]$ cat /etc/pgbackrest.conf
[global]
repo-path=/var/lib/pgbackrest
repo-cipher-pass=FhXg7oW2pZb9UICZ4iYZPn3X4I6fF0ni7IL6QjaB1IL8qz4LIrP+GW+XqCZqIi3w
repo-cipher-type=aes-256-cbc

[clustpgserver]
db-path=/var/lib/pgsql/10/data
retention-full=2
```
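The expiry rule can be illustrated with a small stand-alone shell sketch. This is my own toy model of the idea, not pgBackRest's actual code: keep the two newest full backups and expire older fulls together with the incrementals that depend on them (dependent backups share the full backup's label prefix):

```shell
# Toy model of retention-full=2 over the backup labels seen in this post.
backups="20180213-140152F
20180213-140152F_20180213-152509I
20180214-095439F
20180214-095439F_20180214-100446I
20180214-105603F"
# Full backups have no '_' in the label; keep the two newest.
keep_fulls=$(printf '%s\n' "$backups" | grep -v '_' | sort | tail -n 2)
result=$(printf '%s\n' "$backups" | while read -r b; do
  full=${b%%_*}                       # an incremental inherits its full's fate
  if printf '%s\n' "$keep_fulls" | grep -qx "$full"; then
    echo "keep   $b"
  else
    echo "expire $b"
  fi
done)
echo "$result"
```

Running this marks 20180213-140152F and its incremental as expired, which is exactly what pgBackRest reports after the third full backup below.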
Let's do a quick demonstration. Currently we have two full backups:
```
[postgres@pgserver log]$ pgbackrest --stanza=clustpgserver info
stanza: clustpgserver
    status: ok

    db (current)
        wal archive min/max (10-1): 00000002000000000000004E / 000000020000000000000056

        full backup: 20180213-140152F
            timestamp start/stop: 2018-02-13 14:01:52 / 2018-02-13 14:32:00
            wal start/stop: 00000002000000000000004E / 00000002000000000000004E
            database size: 577MB, backup size: 577MB
            repository size: 28.8MB, repository backup size: 28.8MB

        incr backup: 20180213-140152F_20180213-152509I
            timestamp start/stop: 2018-02-14 09:31:03 / 2018-02-14 09:33:17
            wal start/stop: 000000020000000000000052 / 000000020000000000000052
            database size: 30.7MB, backup size: 285.3KB
            repository size: 3.6MB, repository backup size: 24.3KB
            backup reference list: 20180213-140152F

        full backup: 20180214-095439F
            timestamp start/stop: 2018-02-14 09:54:39 / 2018-02-14 09:58:53
            wal start/stop: 000000020000000000000054 / 000000020000000000000054
            database size: 30.7MB, backup size: 30.7MB
            repository size: 3.6MB, repository backup size: 3.6MB

        incr backup: 20180214-095439F_20180214-100446I
            timestamp start/stop: 2018-02-14 10:04:46 / 2018-02-14 10:07:43
            wal start/stop: 000000020000000000000056 / 000000020000000000000056
            database size: 38.3MB, backup size: 7.6MB
            repository size: 4.5MB, repository backup size: 928.5KB
            backup reference list: 20180214-095439F
```
And we can confirm this by executing a simple ls in the repository where backups are stored:
```
[postgres@pgserver clustpgserver]$ ls -ld *
drwxr-x---. 3 postgres postgres   69 Feb 13 14:32 20180213-140152F
drwxr-x---. 3 postgres postgres   69 Feb 14 09:33 20180213-140152F_20180213-152509I
drwxr-x---. 3 postgres postgres   69 Feb 14 09:59 20180214-095439F
drwxr-x---. 3 postgres postgres   69 Feb 14 10:07 20180214-095439F_20180214-100446I
drwxr-x---. 3 postgres postgres   17 Feb 13 14:33 backup.history
-rw-r-----. 1 postgres postgres 2992 Feb 14 10:08 backup.info
-rw-r-----. 1 postgres postgres 2992 Feb 14 10:08 backup.info.copy
lrwxrwxrwx. 1 postgres postgres   33 Feb 14 10:08 latest -> 20180214-095439F_20180214-100446I
```

Now let's take a third full backup:

```
[postgres@pgserver clustpgserver]$ pgbackrest --stanza=clustpgserver --log-level-console=info --db-port=5435 --type=full backup
2018-02-14 10:55:52.250 P00 INFO: backup command begin 1.28: --db1-path=/var/lib/pgsql/10/data --db1-port=5435 --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --retention-full=2 --stanza=clustpgserver --type=full
...
2018-02-14 11:19:02.001 P00 INFO: backup command end: completed successfully
2018-02-14 11:19:02.107 P00 INFO: expire command begin 1.28: --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --retention-archive=2 --retention-full=2 --stanza=clustpgserver
2018-02-14 11:19:02.928 P00 INFO: expire full backup set: 20180213-140152F, 20180213-140152F_20180213-152509I
2018-02-14 11:22:08.759 P00 INFO: remove expired backup 20180213-140152F_20180213-152509I
2018-02-14 11:22:09.000 P00 INFO: remove expired backup 20180213-140152F
2018-02-14 11:22:49.387 P00 INFO: expire command end: completed successfully
```

We can see that at the end of the backup, the oldest backups are expired and removed.
We can also confirm this by listing the files in the repository:

```
[postgres@pgserver clustpgserver]$ ls -ld *
drwxr-x---. 3 postgres postgres   69 Feb 14 09:59 20180214-095439F
drwxr-x---. 3 postgres postgres   69 Feb 14 10:07 20180214-095439F_20180214-100446I
drwxr-x---. 3 postgres postgres   69 Feb 14 11:13 20180214-105603F
drwxr-x---. 3 postgres postgres   17 Feb 13 14:33 backup.history
-rw-r-----. 1 postgres postgres 2320 Feb 14 11:19 backup.info
-rw-r-----. 1 postgres postgres 2320 Feb 14 11:20 backup.info.copy
lrwxrwxrwx. 1 postgres postgres   16 Feb 14 11:14 latest -> 20180214-105603F
```

Point-in-Time Recovery
PgBackRest can also perform point-in-time recovery. Let's drop the table article in the database test:

```
test=# table article;
   nom
---------
 printer
(1 row)

test=# select now();
              now
-------------------------------
 2018-02-14 11:39:28.024378+01
(1 row)

test=# drop table article;
DROP TABLE

test=# table article;
ERROR:  relation "article" does not exist
LINE 1: table article;
              ^
```

Now let's restore to just before the table was dropped, say 2018-02-14 11:39:28. Since we have several backup sets, we must restore from a backup taken before the table was dropped. Checking our backups, this means restoring from the full backup 20180214-105603F:
```
[postgres@pgserver devdata]$ pgbackrest --stanza=clustpgserver --log-level-console=info info
stanza: clustpgserver
    status: ok

    db (current)
        wal archive min/max (10-1): 000000020000000000000054 / 00000002000000000000005A

        full backup: 20180214-095439F
            timestamp start/stop: 2018-02-14 09:54:39 / 2018-02-14 09:58:53
            wal start/stop: 000000020000000000000054 / 000000020000000000000054
            database size: 30.7MB, backup size: 30.7MB
            repository size: 3.6MB, repository backup size: 3.6MB

        incr backup: 20180214-095439F_20180214-100446I
            timestamp start/stop: 2018-02-14 10:04:46 / 2018-02-14 10:07:43
            wal start/stop: 000000020000000000000056 / 000000020000000000000056
            database size: 38.3MB, backup size: 7.6MB
            repository size: 4.5MB, repository backup size: 928.5KB
            backup reference list: 20180214-095439F

        full backup: 20180214-105603F
            timestamp start/stop: 2018-02-14 10:56:03 / 2018-02-14 11:12:26
            wal start/stop: 000000020000000000000058 / 000000020000000000000058
            database size: 38.3MB, backup size: 38.3MB
            repository size: 4.5MB, repository backup size: 4.5MB

        incr backup: 20180214-105603F_20180214-121044I
            timestamp start/stop: 2018-02-14 12:10:44 / 2018-02-14 12:15:14
            wal start/stop: 00000002000000000000005A / 00000002000000000000005A
            database size: 38.3MB, backup size: 1.1MB
            repository size: 4.5MB, repository backup size: 140.8KB
            backup reference list: 20180214-105603F
```

For the restore we use the --set option, which allows us to specify the backup set to restore from.
Note also the use of --type=time and --target:

```
[postgres@pgserver log]$ pgbackrest --stanza=clustpgserver --log-level-console=info --type=time "--target=2018-02-14 11:39:28.024378+01" --db-path=/u01/devdata --set=20180214-105603F restore
2018-02-14 13:36:50.848 P00 INFO: restore command begin 1.28: --db1-path=/u01/devdata --log-level-console=info --repo-cipher-pass= --repo-cipher-type=aes-256-cbc --repo-path=/var/lib/pgbackrest --set=20180214-105603F --stanza=clustpgserver "--target=2018-02-14 11:39:28.024378+01" --type=time
2018-02-14 13:37:03.406 P00 INFO: restore backup set 20180214-105603F
...
```
At the end of the restore, let's look at the contents of the generated recovery.conf file:
```
[postgres@pgserver devdata]$ cat recovery.conf
restore_command = '/usr/bin/pgbackrest --db1-path=/u01/devdata --log-level-console=info --stanza=clustpgserver archive-get %f "%p"'
recovery_target_time = '2018-02-14 11:39
```

If we start our new cluster, we can see in the log files that PITR is starting:

```
2018-02-14 13:54:23.824 CET [10049] LOG: starting point-in-time recovery to 2018-02-14 11:39:28.024378+01
```

And once the recovery has finished, we can verify that the table article is present:

```
postgres=# \c test
You are now connected to database "test" as user "postgres".
test=# \d article
                    Table "public.article"
 Column |         Type          | Collation | Nullable | Default
--------+-----------------------+-----------+----------+---------
 nom    | character varying(50) |           |          |

test=# table article;
   nom
---------
 printer
(1 row)
```

Conclusion: In this blog I covered some features of PgBackRest. As already mentioned, there are many, many other options in this wonderful tool; the official documentation gives more information. In future blogs we will explore more advanced configuration.

This article, Backup and Restore PostgreSQL with PgBackRest II, appeared first on Blog dbi services.

### 12c DataGuard / Broker Pitfalls

Michael Dinh - Thu, 2018-02-15 07:01

In a broker configuration, you use the DGConnectIdentifier property to specify a connect identifier for each database. The connect identifier for a database must:

- Allow all other databases in the configuration to reach it.
- Allow all instances of an Oracle RAC database to be reached.
- Specify a service that all instances dynamically register with the listeners so that connect-time failover on an Oracle RAC database is possible.

The service should NOT be one that is defined and managed by Oracle Clusterware. A static service needs to be defined and registered only if Oracle Clusterware or Oracle Restart is not being used.
Otherwise, by default, the broker assumes a static service name of db_unique_name_DGMGRL.db_domain and expects the listener to have been started with the following content in the listener.ora file:

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host_name)(PORT = port_num))
    )
  )

SID_LIST_LISTENER=
  (SID_LIST=
    (SID_DESC=
      (GLOBAL_DBNAME=db_unique_name_DGMGRL.db_domain)
      (ORACLE_HOME=oracle_home)
      (SID_NAME=sid_name)
      (ENVS="TNS_ADMIN=oracle_home/network/admin")
    )
  )
```

As of Oracle Database 12c Release 1 (12.1), for all databases to be added to a broker configuration, any LOG_ARCHIVE_DEST_n parameters that have the SERVICE attribute set, but not the NOREGISTER attribute, must be cleared.

Create Configuration Failing with ORA-16698 (Doc ID 1582179.1)
Oracle Data Guard Installation

### Oracle Honors Change Agents of Finance at Oracle Modern Finance Experience

Oracle Press Releases - Thu, 2018-02-15 07:00

Press Release
Oracle Honors Change Agents of Finance at Oracle Modern Finance Experience
Awards recognize organizations using modern cloud technologies to drive business transformation

Oracle Modern Finance Experience, New York—Feb 15, 2018

To promote best practices and celebrate the success organizations are achieving by modernizing financial and operational processes in the cloud, Oracle (NYSE: ORCL) today announced the winners of the 2018 Change Agents of Finance Awards. The awards recognize top organizations and business leaders that are driving growth by embracing Oracle cloud solutions and modern best practices to transform the finance function and other critical business processes. The 2018 winners include Beall’s Inc., Caesars Entertainment, Dropbox, Hilton, National Rural Utilities Cooperative Finance Corporation (NRUCFC), Orange, Park Hotels & Resorts, and Wilmington Savings Fund Society (WSFS).

The 2018 Change Agents of Finance Awards is a global program encompassing 10 categories that represent the best of twenty-first century finance.
Finalists were nominated and voted for by their finance peers based on their ability to deliver tangible business results through the application of innovative cloud technologies to their unique business case.

“The Change Agents of Finance Awards recognize visionary leaders that are reimagining their finance functions in order to thrive and grow in today’s digital economy,” said Doug Kehring, executive vice president and Chief of Staff, Oracle. “This year’s winners are industry pioneers leveraging Oracle ERP Cloud and Oracle EPM Cloud solutions to differentiate their organizations and deliver outstanding results.”

Winners of the 2018 Oracle Change Agents of Finance Awards are:

• Frugal Finance Award: Wilmington Savings Fund Society (WSFS), Dominic C. Canuso, EVP and Chief Financial Officer
• Better Together Award: Park Hotels & Resorts, Darren Robb, Senior Vice President and Chief Accounting Officer
• Fast Finance Award: Caesars Entertainment, Michael Mann, VP of Transformation
• Modern Close Award: National Rural Utilities Cooperative Finance Corporation (NRUCFC), Lori Culp, Assistant Controller
• Roll With It Award: Hilton, Anand Naimpally, SVP, Finance
• Shared Services Superstar Award: Orange, Djamel Benallal, Head of Finance Group Solutions
• Crystal Ball Award: Dropbox, Annie Ogata, Corporate FP&A Lead
• Oracle Billion-Dollar Story Award: Orange, Christophe Eouzan, Chief Accounting Officer
• Top Talent Award: Beall’s Inc., Brian Crowley, CFO

For additional information on these winners, please visit the Oracle Change Agents of Finance Awards website.

Contact Info
Evelyn Tam
Oracle PR
+1.650.506.5936
evelyn.tam@oracle.com

About Oracle
The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), please visit us at oracle.com.
Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact
Evelyn Tam • +1.650.506.5936
Follow Oracle Corporate

### Oracle Buys Zenedge

Oracle Press Releases - Thu, 2018-02-15 06:04

Press Release
Oracle Buys Zenedge
Adds Leading Cloud-Based Network and Infrastructure Security Capabilities to Protect Enterprises from Today’s Complex Digital Threats

Redwood Shores, Calif.—Feb 15, 2018

Oracle today announced that it has signed an agreement to acquire Zenedge, which helps secure critical IT systems deployed via cloud, on-premise or hybrid hosting environments. Customers leverage Zenedge’s Web Application Firewall (WAF) and Distributed Denial of Service (DDoS) mitigation products to secure their applications, networks, databases and APIs from malicious Internet traffic. Powered by artificial intelligence (AI), Zenedge’s products and 24/7 virtual Security Operations Center (SOC) defend over 800,000 web properties and networks globally.

Oracle delivers a comprehensive set of subscription-based cloud infrastructure services that enables businesses to run any workload in an enterprise-grade cloud managed, hosted and supported by Oracle. Zenedge expands Oracle Cloud Infrastructure and Oracle’s Domain Name System (DNS) capabilities, adding innovative application and network protection that augments existing Oracle security services and partnerships. Together, Oracle and Zenedge will allow enterprises to adopt cloud services without compromising performance, cost, control or security through an expanded Oracle Cloud Infrastructure platform.

“Customers demand enterprise-grade infrastructure to run their critical business systems in the cloud,” said Don Johnson, Senior Vice President of Product Development, Oracle. “Oracle Cloud Infrastructure as a Service delivers leading cloud services to address those needs.
The combination with Zenedge equips Oracle Cloud Infrastructure with integrated, next-generation network and infrastructure security, to address modern security threats.”

“Customers achieve tremendous results with Zenedge’s innovative WAF and DDoS mitigation products, from a 99% reduction in illicit website traffic to a 99.75% improvement in page load times,” said Yuri Frayman, CEO of Zenedge. “We could not be more enthusiastic about joining forces with the leader in enterprise-grade cloud infrastructure, and delivering similar results to even more customers at scale.”

More information about this announcement is available at www.oracle.com/zenedge.

Contact Info
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com

Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com

About Oracle
The Oracle Cloud offers complete SaaS application suites for ERP, SCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE: ORCL), visit www.oracle.com.

Trademarks
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Oracle is currently reviewing the existing Zenedge product roadmap and will be providing guidance to customers in accordance with Oracle’s standard product communication policies. Any resulting features and timing of release of such features as determined by Oracle’s review of Zenedge’s product roadmap are at the sole discretion of Oracle. All product roadmap information, whether communicated by Zenedge or by Oracle, does not represent a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. It is intended for information purposes only, and may not be incorporated into any contract.
Cautionary Statement Regarding Forward-Looking Statements This document contains certain forward-looking statements about Oracle and Zenedge, including statements that involve risks and uncertainties concerning Oracle’s proposed acquisition of Zenedge, anticipated customer benefits and general business outlook. When used in this document, the words “anticipates”, “can”, “will”, “look forward to”, “expected” and similar expressions and any other statements that are not historical facts are intended to identify those assertions as forward-looking statements. Any such statement may be influenced by a variety of factors, many of which are beyond the control of Oracle or Zenedge, that could cause actual outcomes and results to be materially different from those projected, described, expressed or implied in this document due to a number of risks and uncertainties. Potential risks and uncertainties include, among others, the possibility that the transaction will not close or that the closing may be delayed, the anticipated synergies of the combined companies may not be achieved after closing, the combined operations may not be successfully integrated in a timely manner, if at all, general economic conditions in regions in which either company does business may deteriorate and/or Oracle or Zenedge may be adversely affected by other economic, business, and/or competitive factors. Accordingly, no assurances can be given that any of the events anticipated by the forward-looking statements will transpire or occur, or if any of them do so, what impact they will have on the results of operations or financial condition of Oracle or Zenedge. You are cautioned to not place undue reliance on forward-looking statements, which speak only as of the date of this document. Neither Oracle nor Zenedge is under any duty to update any of the information in this document. 
Talk to a Press Contact
Deborah Hellinger • +1.212.508.7935
Ken Bond • +1.650.607.0349
Follow Oracle Corporate

### Oracle Partner Hub is Now in Kenya

Oracle Partner Hub Now, in Kenya! Dear Valued Partner, We are pleased to invite you to the Launch of the Oracle Partner HUB, Oracle’s Cloud Centre of Excellence for Kenya and East... We share our skills to maximize your revenue!
Categories: DBA Blogs

### (DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS_LIST=(FAILOVER=YES)(LOAD_BALANCE=NO)

Yann Neuhaus - Thu, 2018-02-15 00:02

Do you have complex connection strings with DESCRIPTION_LIST, DESCRIPTION, ADDRESS_LIST, ADDRESS and a nice combination of FAILOVER and LOAD_BALANCE? You probably checked the documentation, which tells you that FAILOVER=YES is the default at all levels, but LOAD_BALANCE=YES is the default only for DESCRIPTION_LIST. But when disaster recovery and availability are concerned, the documentation is not sufficient: I want to test it. And here is how I do it.

I don't want to test it with the real configuration and stop the different instances, and I don't need to. My way to test an address list is to define a tnsnames.ora with the connection string, such as the following:

```
NET_SERVICE_NAME=
 (DESCRIPTION_LIST=
  (DESCRIPTION=
   (CONNECT_DATA=(SERVICE_NAME=pdb1))
   (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=101))
   (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=102))
  )
  (DESCRIPTION=
   (CONNECT_DATA=(SERVICE_NAME=pdb1))
   (ADDRESS_LIST=
    (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=201))
    (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=202))
   )
  )
 )
```

I used localhost because I know it's there and I don't want to wait for the TCP timeout. But I use fake ports, which do not exist. So, finally, a connection will never be established, but I will be able to see all the addresses that are tried.
I check them with strace on the connect() system call, with the following script:

```
for i in {1..10}
do
 TNS_ADMIN=/tmp strace -T -e trace=connect sqlplus -s -L sys/oracle@NET_SERVICE_NAME as sysdba <<< "" 2>&1 | awk '
  /sa_family=AF_INET, sin_port=htons/{ gsub(/[()]/," ") ; printf "%s ",$5 }
  END{ print "" }
 '
done | sort | uniq
```

So, I used meaningful numbers for my fake ports: 101 and 102 for the addresses in the first description of the description list, and 201 and 202 for the address list in the second description. The awk script shows the sequence that was tried. And, because of the random round robin, I run the whole thing in a loop several times to see all patterns, aggregated by sort | uniq.
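If you prefer not to depend on awk, the same extraction can be sketched in Python (hypothetical helper names; the only assumption is strace’s usual formatting of `sockaddr_in` structures in `connect()` calls):

```python
import re
from collections import Counter

# strace prints connect() calls like:
# connect(9, {sa_family=AF_INET, sin_port=htons(101), sin_addr=...}, 16) = -1 ECONNREFUSED
PORT_RE = re.compile(r"sa_family=AF_INET, sin_port=htons\((\d+)\)")

def ports_of_attempt(strace_output):
    """Return the TCP ports tried, in order, for one sqlplus run."""
    return [int(m.group(1)) for m in PORT_RE.finditer(strace_output)]

def pattern_counts(runs):
    """Aggregate the per-run port sequences, like sort | uniq -c."""
    return Counter(tuple(ports_of_attempt(r)) for r in runs)
```

Running each `sqlplus` attempt under `strace -e trace=connect` and feeding the captured output to `pattern_counts()` gives the same aggregated view as the shell pipeline above.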

So here is the result from the connection string above using the defaults for load balancing and failover:
```
101 102 201 202
201 202 101 102
```

The sequence within the address list is always in order (101,102 and 201,202) because LOAD_BALANCE=NO is the default there. But I have two combinations for the descriptions because LOAD_BALANCE=YES is the default in DESCRIPTION_LIST. Finally, all addresses are tried because FAILOVER=YES is the default at all levels.
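The observed behavior can be modeled in a few lines of Python. This is a sketch of the rules as described above, not of the actual Oracle client code: LOAD_BALANCE picks a random permutation at its level, and FAILOVER decides whether more than the first pick is attempted.

```python
import random

def candidate_order(entries, load_balance, failover):
    """Return the sequence of leaf addresses (ports) that would be attempted.

    entries: a list of ports, or of nested (sub_entries, load_balance, failover)
    tuples representing a DESCRIPTION or ADDRESS_LIST.
    """
    order = list(entries)
    if load_balance:
        random.shuffle(order)   # random round robin
    if not failover:
        order = order[:1]       # without failover, only the first pick is tried
    result = []
    for e in order:
        if isinstance(e, tuple):
            sub, lb, fo = e
            result.extend(candidate_order(sub, lb, fo))
        else:
            result.append(e)
    return result
```

With the defaults (description list: load balance and failover on; each description: load balance off, failover on), the model produces exactly the two patterns observed above: 101 102 201 202 or 201 202 101 102.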

If I define LOAD_BALANCE at all levels, such as:
```
NET_SERVICE_NAME=
 (DESCRIPTION_LIST=(FAILOVER=YES)(LOAD_BALANCE=YES)
  (DESCRIPTION=(FAILOVER=YES)(LOAD_BALANCE=YES)
   (CONNECT_DATA=(SERVICE_NAME=pdb1))
   (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=101))
   (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=102))
  )
  (DESCRIPTION=(FAILOVER=YES)
   (CONNECT_DATA=(SERVICE_NAME=pdb1))
   (ADDRESS_LIST=(FAILOVER=YES)(LOAD_BALANCE=YES)
    (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=201))
    (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=202))
   )
  )
 )
```

The result shows that all combinations can be tried in any order:
```
101 102 201 202
101 102 202 201
102 101 201 202
102 101 202 201
201 202 101 102
201 202 102 101
202 201 101 102
202 201 102 101
```

By running it in a large loop you will confirm that any address will be tried at most once.

FAILOVER

Now, If I set FAILOVER=NO within the first description:
```
NET_SERVICE_NAME=
 (DESCRIPTION_LIST=(FAILOVER=YES)(LOAD_BALANCE=NO)
  (DESCRIPTION=(FAILOVER=NO)(LOAD_BALANCE=YES)
   (CONNECT_DATA=(SERVICE_NAME=pdb1))
   (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=101))
   (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=102))
  )
  (DESCRIPTION=(LOAD_BALANCE=NO)
   (CONNECT_DATA=(SERVICE_NAME=pdb1))
   (ADDRESS_LIST=(FAILOVER=YES)(LOAD_BALANCE=YES)
    (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=201))
    (ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=202))
   )
  )
 )
```

the first attempt can be 101 or 102 (because of LOAD_BALANCE) but only one of them will be tried in this address list, because of no failover. Then the second description is attempted (because FAILOVER=YES at description list level), with all addresses there (because of LOAD_BALANCE=YES). The result of all possible combinations is:

```
101 201 202
102 201 202
102 202 201
```

So here it is. You can test any complex connection description to check what the possible connections will be and in which order they will be tried. From this, you can infer what will happen with a real configuration: the wait for TCP timeout for addresses tested on hosts that are not up, and the load balancing given by the different possible combinations.

### Thanks US Bank for all these appreciation eCards. Thanks Daniel Hoke !!

Arvind Jain - Wed, 2018-02-14 22:29
Sent: Monday, June 12, 2017 2:24 PM
To: Arvind Jain
From: Daniel Hoke
Core Value: We power potential

Sunday was the official launch of the next best action engine. The NBA engine powers potential for U.S. Bank and helps us to stay a step ahead. This is new for U.S. Bank and very much on the leading edge for the financial services industry.

This project was initiated in 2015 and thanks to your work, it launched on schedule with all the expected features. This was a big undertaking, including having to first upgrade the CRM system to Microsoft Dynamics.

The point of all of my summary explanation is that the NBA launch is a big deal to U.S. Bank!
I am writing to personally recognize your role in making this happen. Thank you for your work behind the NBA launch!!!

Daniel Hoke

Keep the recognition going, click here to access The Best In Us Appreciation System.

### When it comes to issuing eCards nobody beats US Bank

Arvind Jain - Wed, 2018-02-14 22:26

Sent: Tuesday, June 13, 2017 3:24 PM
To: Arvind Jain
From: Deepak Nair
Core Value: We stay a step ahead

Thank you for your commitment to helping us close out ETM testing. Because of your dedication, even on weekends to help us move this forward, we have delivered with such high speed and quality. This was a great sign of team work and dedication. Thank you for all the support and help!

Deepak

Keep the recognition going, click here to access The Best In Us Appreciation System.

### Materialized view fast refresh with joins - what work is done where?

Tom Kyte - Wed, 2018-02-14 19:06
If I have two tables, both with materialized view logs, and I join them together in a fast refreshable materialized view, when that MV is refreshed, where is the work done to eliminate records that have not been deleted, but should no longer be retur...
Categories: DBA Blogs

### Print sub-total and total name in SQL

Tom Kyte - Wed, 2018-02-14 19:06
Hi Team Code sample provided in LiveSQL link. Expected Result: <code> Mgmt1 State1 18900 1786 Mgmt1 State2 28900 2786 Mgmt1 SUB_TOTAL 47800 4572 Mgmt2 State1 38900 3786 Mgmt2 State2 48900 4786 Mgmt2 SUB-TOTAL 87800 8572 ...
Categories: DBA Blogs

### ALL_ROWS vs FIRST_ROWS_10

Tom Kyte - Wed, 2018-02-14 19:06
Hello Team, An SQL(complex one and there are 10+ tables in join) which is called by Siebel application is set with Session parameter (ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_10) which took around 55 seconds to show the result as "No record f...
Categories: DBA Blogs

### How to Improve the Performance of Group By with Having

Tom Kyte - Wed, 2018-02-14 19:06
I have a table t containing three fields accountno, tran_date, amount and amount is always > 0. . There are many records for each accountno. I want to select all the accountnos where the sum(amount) > 100. The simple query is like this <code>...
Categories: DBA Blogs

### The Tale of the Ardent SpringCleaning Maniac (ASM)

Amis Blog - Wed, 2018-02-14 17:55

or

If you want to get rid of stuff, you can always do a good spring-cleaning. Or you can do what I do. Move.

In order to prepare for a network reorganization we needed to move our current Oracle Enterprise Manager machine into another network segment. So we grabbed the chance, and instead of just moving the existing OEM12c Rel.3 machine we decided to install a new Oracle Enterprise Manager 13(.2.2.0.0) onto two Linux RedHat 6.9 machines and get rid of the SPOF (single point of failure) as well in one elegant movement.

Little did we know that we would almost cripple our RAC-clusters by just preparing for the new agents.

Here comes the story of this little adventure.

When we were finally ready to move the monitored targets into the new OMS, I started to install second agents on all the hosts, so we still could quickly switch back to EM12c in case we hit a major road block with 13c. Starting with our single instance development- and test machines and then tentatively I began to install on some acceptance machines. Most of our acceptance and production databases are 2-node RAC 11.2.0.4 with (active) DataGuard, meaning each database consisting of 4 instances on four different machines. Except for some connectivity problems in the network, so far all went smoothly.

In order to install the new agents I had to check each machine upfront for the required 1Gb free space in /tmp. Often, I had to clean out a lot of old stuff from years ago like long-forgotten Oracle install logs, spoolfiles from SQL adhoc-queries of the developers, downloaded rpm’s of the Linux administrators, never-checked Puppet logs and so on. I was intrigued that in all cases node 2 was more cluttered than node 1.

BTW: 1 GB was not even enough; I would advise at least 1.3 GB of free space in /tmp for the 13c agents.

At first, I removed only all old files of user oracle before I even considered removing root files or stuff other users had left years ago. And I did not touch Oracle files and directories younger than 4 weeks. Often that was sufficient to clear enough space to be able to install the agent.

But there were machines which were particularly cluttered, and I was happily cleaning away until I had freed enough space in /tmp to be able to install the agent via the GUI. The first node of the first cluster I installed was ready at about 2018-01-29 15:10h, followed by node 2, due to the necessary cleanup in /tmp, at about 15:45h. The newly installed agents on the cluster worked fine and produced entries about “Compliance …” which were quickly suppressed for the target. Everything seemed fine and I went home quite contented.

The next morning we noted that one of the databases on this cluster behaved strangely and complained about not being able to complete the backup due to unrecognized files in the FRA. What?! Some hours later, other databases on this node suddenly began likewise to issue events in OMS13c and slowly hell broke loose, which reminded us of ASM issues we had some weeks earlier after some updates. But the only change this machine had undergone lately was the new EM agent. No relation to ASM, forget about that idea….

We took a look into the alert log of the first complaining database (primary) and it looked like this (DB and instance names changed):

```
2018-01-29 22:00:00.031000 +01:00
Setting Resource Manager plan SCHEDULER[0x32D9]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Starting background process VKRM
VKRM started with pid=43, OS id=13348
Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m000_13400.trc:
ORA-01114: IO error writing block to file (block # )
2018-01-29 22:18:11.242000 +01:00
Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m001_35726.trc:
ORA-01114: IO error writing block to file (block # )
2018-01-29 22:28:11.814000 +01:00
Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m001_53927.trc:
ORA-01114: IO error writing block to file (block # )
2018-01-29 22:38:12.346000 +01:00
Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m001_65572.trc:
ORA-01114: IO error writing block to file (block # )
2018-01-29 22:48:12.872000 +01:00
Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m001_75911.trc:
ORA-01114: IO error writing block to file (block # )
2018-01-29 22:58:13.371000 +01:00
Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_m001_86086.trc:
ORA-01114: IO error writing block to file (block # )
2018-01-29 23:05:10.655000 +01:00
Current log# 3 seq# 917 mem# 0: +DATA01/db1_prim/onlinelog/group_3.294.918479161
Current log# 3 seq# 917 mem# 1: +FRA01/db1_prim/onlinelog/group_3.306.918479161
2018-01-29 23:08:02.093000 +01:00
WARNING: ASM communication error: op 11 state 0x50 (3113)
ERROR: slave communication error with ASM
Unable to create archive log file '+FRA01'
Errors in file /u01/oracle/diag/rdbms/db1_prim/DB1_PRIM2/trace/DB1_PRIM2_arc0_18470.trc:
ORA-19816: WARNING: Files may exist in db_recovery_file_dest that are not known to database.
ORA-17502: ksfdcre:4 Failed to create file +FRA01
ORA-03113: end-of-file on communication channel
Process ID:
Session ID: 708 Serial number: 1895
*************************************************************
WARNING: A file of type ARCHIVED LOG may exist in
db_recovery_file_dest that is not known to the database.
Use the RMAN command CATALOG RECOVERY AREA to re-catalog
any such files. If files cannot be cataloged, then manually
delete them using OS command. This is most likely the
result of a crash during file creation.
*************************************************************
ARC0: Error 19504 Creating archive log file to '+FRA01'
ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance DB1_PRIM2 - Archival Error
ORA-16038: log 4 sequence# 916 cannot be archived
ORA-19504: failed to create file ""
ORA-00312: online log 4 thread 2: '+DATA01/db1_prim/onlinelog/group_4.295.918479161'
ORA-00312: online log 4 thread 2: '+FRA01/db1_prim/onlinelog/group_4.307.918479163'
```

Remember the installation of the agent was in the afternoon at about 4 o’clock and the trouble seemed to have started at 22:00h and not all databases on the cluster seem to have a problem with ASM or their FRA’s. The connection between FRA-trouble and agent was not obvious, I would say.

At that moment we did not ‘see’ the WARNING: ASM communication error: op 11 state 0x50 (3113) but reacted to the archival error. But of course, that did not solve the problem.

When we had a look into the ASM-log on node2 of this cluster, it revealed that its ASM instance had crashed with ORA-29701… while node1 still functioned normally!

A quick search on MOS for “ORA-29701” resulted in nothing that seemed to fit our problem, until I widened the search on the net and almost accidentally found a website about EXADATA, describing an ORA-29701 error of ASM and giving advice about “restoring a lost socket file” by trying to restart the complete CRS stack on the node. And if that didn’t do the trick, try rebooting the server. The latter was a little tricky in this environment and takes some time to get approval, so I was prepared to do whatever was necessary, as long as it did not involve a reboot of the server.

But a lost socket file?!? They used to be in /var/tmp on Linux, if I was not mistaken, but I only touched /tmp…

I decided to go along with this advice and stopped the CRS stack on the unhealthy node and therefore all database instances on it, which were still limping on as good or bad as they could on one node, as grid owner with crsctl stop crs.
Then I checked if all processes were really down, which they weren’t. Therefore I wielded the kill-9–sword until everything which had anything to do with CRS or oracle was gone from the process list (Do I hear some deep groans about this?). And started anew with crsctl start crs.
Luckily, the result was a node with CRS, ASM, listeners and all database instances started as if the machine just came back from a successful and smooth reboot. And even a new /tmp/.oracle directory … one of the older directories I had cleaned up in /tmp to make room for the agent installs!
Pfew, blessed are the innocent! It all worked again.

But what had caused all this trouble? According to this article, the socket files for Linux can be found in a hidden directory called .oracle. Depending on the *NIX dialect, that could be under /tmp/.oracle, /usr/tmp/.oracle or /var/tmp/.oracle. Under Linux x86_64 this directory is officially under /var/tmp, but Oracle also hides a copy under /tmp/.oracle and stores socket files of the Oracle Net Listener and Cluster Health Monitor in there (see DOC ID 1322234.1).

The article also cited DOC ID 391790.1, which is not among the first 10 documents presented when you query MOS for “ORA-29701”, which would have helped to find the culprit far quicker!

What an irony that later that day, Jan 30th 2018, Oracle released DOC ID 370605.1, which tries to remind us all again:
Never delete any hidden .oracle directory you come across or fear for your cluster!

Have fun!

The post The Tale of the Ardent SpringCleaning Maniac (ASM) appeared first on AMIS Oracle and Java Blog.

### node-oracledb 2.1 is now available from npm

Christopher Jones - Wed, 2018-02-14 17:12

Release announcement: Node-oracledb 2.1.0, the Node.js module for accessing Oracle Database, is on npm.

Top features: privileged connections such as SYSDBA; a new 'queryStream()' Stream 'destroy()' method.

After the big refactoring of node-oracledb 2.0, we're pleased to push node-oracledb forward with the release of 2.1.0.

Changes in node-oracledb 2.1.0 include:

• Support for SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, SYSKM, and SYSRAC privileges in standalone connections. You can now connect like:

```javascript
oracledb.getConnection(
  {
    user: 'sys',
    password: 'secret',
    connectString: 'localhost/orclpdb',
    privilege: oracledb.SYSDBA
  },
  function(err, connection) {
    if (err)
      console.error(err);
    else
      console.log('I have power');
  }
  // . . .
);
```
• The internal implementations of 'direct fetches' and 'queryStream()' have been simplified by reusing the ResultSet changes of 2.0. You might notice speedups in some cases.

• The 'queryStream()' result Stream now supports Node 8's destroy() method, which obsoletes the previous experimental '_close()' method of node-oracledb.

If you are using '_close()' in Node 8 or later, you should:

• Change the method name from '_close()' to 'destroy()'.

• Stop passing a callback.

• Optionally pass an error.

• Improved the Error object with new 'errorNum' and 'offset' properties. The properties will be included when you get errors from Oracle Database such as this:

```javascript
connection.execute(
  "select wrong_col from departments",
  function(err, result) {
    if (err)
      console.error(err);
    else
      console.log(result.rows);
  });
```

The output is:

```
{ Error: ORA-00904: "WRONG_COL": invalid identifier
  errorNum: 904,
  offset: 7 }
```

The offset is relative to the start of the SQL statement. For non-SQL errors, the offset will be 0.

• New 'versionSuffix' and 'versionString' properties to the oracledb object to aid showing the release status and version. The 'versionSuffix' attribute will be an empty string for production releases, but may be something like '-dev' or '-beta.1' for other builds.

The existing 'version' attribute is great for runtime comparisons, but not as handy as the new 'versionString' attribute for displaying to users.

• With a code contribution from Sagie Gur-Ari an annoying little gotcha was fixed. Now 'connectionString' is an alias for 'connectString'.

```javascript
oracledb.getConnection(
  {
    user: 'hr',
    password: 'welcome',
    connectionString: 'localhost/orclpdb',
  },
  function(err, connection) {
    if (err)
      console.error(err);
    else
      console.log('I am HR');
  }
);
```
• The underlying data access layer ODPI-C had some improvements which flow through to node-oracledb.

• Connections can now be closed when ResultSets and Lobs are open. This removes the DPI-1054 error you might have seen if you closed resources in the 'wrong' order.

• At connection close there is no longer an unconditional rollback sent to the database. Instead ODPI-C makes use of some internal state to avoid the overhead of the 'round trip' to the database when it is known that there is no open transaction.

• Node-oracledb 2.1 no longer compiles with the long-obsolete Node 0.10 or 0.12 versions. We'd stopped testing and claiming support for these versions a while back, but technically compilation had been possible until now. Updates to NAN API usage and other cleanups have led to this change.

• Our testing and documentation improved, as always.

See the CHANGELOG for other changes in node-oracledb 2.1

Finally, you may be interested to know about these cool new resources:

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Node-oracledb change log is here.

For information on migrating see Migrating from node-oracledb 2.0 to node-oracledb 2.1.

Issues and questions about node-oracledb can be posted on GitHub.

Finally, contributions to node-oracledb are more than welcome, see CONTRIBUTING.

### Full page logging in Postgres and Oracle

Yann Neuhaus - Wed, 2018-02-14 14:13

In my opinion, the volume of logging (aka redo log, aka xlog, aka WAL) is the most important factor for OLTP performance, availability and scalability, for several reasons:

• This is the only structure where disk latency is a mandatory component of response time
• This is a big part of the total volume of backups
• This is sequential by nature, and very difficult to scale by parallelizing

In this post, I look at the volume of logging generated by some DML in Postgres and Oracle. I know Oracle quite well and am just starting to look at Postgres. The comparison here is not a contest but a way to better understand. For example, the default behavior of Postgres, with full_page_writes=on, is very similar to Oracle ‘begin backup’ mode. The comparison makes no sense for most Postgres DBAs, but it probably helps Oracle DBAs to understand it.

Measure WAL segment writes

Here is how I measured the volume of transaction log written: start the Postgres server with ‘strace -f’ and parse with ‘awk’ the open(), write() and close() calls:
```
sudo su postgres <<'END'
export PGDATA=/u01/pgdata
/usr/pgsql-10/bin/pg_ctl stop
strace -e trace=open,close,write,recvfrom -f /usr/pgsql-10/bin/pg_ctl start 2>&1 | awk '
/^[^[]/{ $0="[pid MAIN] "$0 }
/strace: Process [0-9][0-9]* attached/{ sub(/^.*strace: /,"strace: ") ; "ps -o cmd -hp " $3 |& getline proc[$3"]"] ; print "" ; print $0,proc[$3"]"] }
/open[(].*pg_wal[/].* = [0-9]*$/{ z=$0 ; gsub(qq," ") ; fd_wal[$2$NF]=$4 }
/checkpoint;/{ total_written_wal=0 }
/write[(]/{
 #pid=$2 ; sub("]","",$2) ; "ps -o cmd -hp " p |& getline proc[p"]"]
 z=$0 ; gsub("[(,]"," ") ; if ( fd_wal[$2$4]>0 ) { written_wal[$2$4]=written_wal[$2$4]+$NF ; total_written_wal=total_written_wal+$NF }
 next
}
/close[(]/{
 pid=$2 ; sub("[^0-9]","",pid) ; z=$0 ; gsub("[()]"," ") ; if ( ( fd_wal[$2$4]!="" ) && ( written_wal[$2$4] > 0 ) ) { printf " ( written %d bytes to %s -> total WAL segments: %.2f MB ) cmd=%s\n",written_wal[$2$4],fd_wal[$2$4],total_written_wal/1024/1024 , proc[$2] ; fd_wal[$2 $4]="" }
 next
}
' qq='"'
END
```

Do not do that in production. This is experimentation in a lab. Do not attach strace to a critical process in production. There’s probably an easier way to get the same information, maybe with postgres activity statistics, or through a size-counting archive_command, so please don’t hesitate to comment. Anyway, from the ‘write()’ calls I am sure that I’m counting exactly what I want: the volume of logging written to disk. As an Oracle DBA used to LogWriter and its slave threads managing all writes, I started to trace only the WAL writer process but quickly realized that part of the logging is directly written by my server process.

Postgres: insert

I create a table with some numbers and a 100-byte character string:

```
create table demo as select generate_series a,generate_series b,generate_series c,generate_series d,generate_series e,generate_series f,lpad('x',100,'x') g from generate_series(0,0);
```

The first operation I test is the insert of 1 million rows.
```
insert into demo select generate_series a,generate_series b,generate_series c,generate_series d,generate_series e,generate_series f,lpad('x',100,'x') g from generate_series(1,1000000);

( written 4349952 bytes to pg_wal/000000010000000A0000005F -> total WAL segments: 4.16 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 8192 bytes to pg_wal/000000010000000A0000005F -> total WAL segments: 9.00 MB ) cmd=postgres: wal writer process
( written 17735680 bytes to pg_wal/000000010000000A00000060 -> total WAL segments: 20.07 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 3309568 bytes to pg_wal/000000010000000A00000060 -> total WAL segments: 31.40 MB ) cmd=postgres: wal writer process
( written 33783808 bytes to pg_wal/000000010000000A00000061 -> total WAL segments: 36.03 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 3997696 bytes to pg_wal/000000010000000A00000061 -> total WAL segments: 39.80 MB ) cmd=postgres: wal writer process
( written 49676288 bytes to pg_wal/000000010000000A00000062 -> total WAL segments: 51.19 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 65273856 bytes to pg_wal/000000010000000A00000063 -> total WAL segments: 66.06 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 79364096 bytes to pg_wal/000000010000000A00000064 -> total WAL segments: 82.04 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 6660096 bytes to pg_wal/000000010000000A00000064 -> total WAL segments: 82.39 MB ) cmd=postgres: wal writer process
( written 88285184 bytes to pg_wal/000000010000000A00000065 -> total WAL segments: 98.02 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 14491648 bytes to pg_wal/000000010000000A00000065 -> total WAL segments: 106.82 MB ) cmd=postgres: wal writer process
( written 101703680 bytes to pg_wal/000000010000000A00000066 -> total WAL segments: 113.99 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 17825792 bytes to pg_wal/000000010000000A00000066 -> total WAL segments: 117.19 MB ) cmd=postgres: wal writer process
( written 115769344 bytes to pg_wal/000000010000000A00000067 -> total WAL segments: 128.20 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 18661376 bytes to pg_wal/000000010000000A00000067 -> total WAL segments: 135.09 MB ) cmd=postgres: wal writer process
( written 19824640 bytes to pg_wal/000000010000000A00000068 -> total WAL segments: 144.17 MB ) cmd=postgres: wal writer process
( written 131350528 bytes to pg_wal/000000010000000A00000068 -> total WAL segments: 148.16 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 27435008 bytes to pg_wal/000000010000000A00000069 -> total WAL segments: 159.80 MB ) cmd=postgres: wal writer process
( written 140132352 bytes to pg_wal/000000010000000A00000069 -> total WAL segments: 159.80 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
INSERT 0 1000000
```

You can see that my ‘strace|awk’ script is running in the background and has counted about 160 MB of logging, partly from the ‘postgres: wal writer process’ and partly from ‘postgres: demo demo 192.168.56.122(38013)’ serving my connection. The relation size as stored on disk is about 150 MB:

```
analyze demo;
ANALYZE
select relname, relnamespace, reltype, reloftype, relowner, relam, relfilenode, reltablespace, relpages, reltuples, relallvisible, relpages*8/1024 MB from pg_class where relname = 'demo';

 relname | relnamespace | reltype | reloftype | relowner | relam | relfilenode | reltablespace | relpages | reltuples | relallvisible | mb
---------+--------------+---------+-----------+----------+-------+-------------+---------------+----------+-----------+---------------+-----
 demo    |         2200 |   25157 |         0 |    16385 |     0 |       25155 |             0 |    19231 |     1e+06 |             0 | 150
(1 row)
```

This makes sense. An insert has to write all new data into the log in order to be able to recover the pages until they are checkpointed. Note that I have no index on this table for this test.
Postgres: update

I’m now updating one column for all rows.

```
update demo set b=b+1;

( written 150528000 bytes to pg_wal/000000010000000A0000006A -> total WAL segments: 4.01 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 162693120 bytes to pg_wal/000000010000000A0000006B -> total WAL segments: 17.84 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 29769728 bytes to pg_wal/000000010000000A0000006B -> total WAL segments: 28.44 MB ) cmd=postgres: wal writer process
...
( written 84287488 bytes to pg_wal/000000010000000A00000081 -> total WAL segments: 343.65 MB ) cmd=postgres: wal writer process
( written 453705728 bytes to pg_wal/000000010000000A00000082 -> total WAL segments: 347.36 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
UPDATE 1000001
```

I touched only a small part of the volume in bytes, but I touched all rows and all pages. And, even if only a few bytes are modified, Postgres logs the whole page to protect from fractured blocks in case of crash (pages partially written). So that’s about 150 MB. But Postgres does not update rows in place. The whole row is inserted in its new version, which means the whole volume again, another 150 MB. If we look at the size of the table, we can see 300 MB of pages:

```
analyze demo;
ANALYZE
select relname, relnamespace, reltype, reloftype, relowner, relam, relfilenode, reltablespace, relpages, reltuples, relallvisible, relpages*8/1024 MB from pg_class where relname = 'demo';

 relname | relnamespace | reltype | reloftype | relowner | relam | relfilenode | reltablespace | relpages |  reltuples  | relallvisible | mb
---------+--------------+---------+-----------+----------+-------+-------------+---------------+----------+-------------+---------------+-----
 demo    |         2200 |   25157 |         0 |    16385 |     0 |       25155 |             0 |    38462 | 1.21882e+06 |             0 | 300
(1 row)
```

So this update has generated even more logging: 347 MB.
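This can be put into a rough model. The following is my own simplification for intuition, not the actual WAL record format: with full_page_writes=on, the first change to a page after a checkpoint logs the whole 8 KB page image, and every change additionally logs a small record.

```python
PAGE_SIZE = 8192  # Postgres default block size

def wal_estimate(pages_touched, changes, change_record_bytes, full_page_writes=True):
    """Rough WAL volume: one full page image per page first touched after a
    checkpoint (when full_page_writes=on), plus a small record per change.
    A simplification for intuition only."""
    fpi = pages_touched * PAGE_SIZE if full_page_writes else 0
    return fpi + changes * change_record_bytes
```

For the update above, the 19231 full page images alone account for about 150 MB (19231 × 8192 bytes); the new row versions and change records explain the rest of the 347 MB observed.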
Postgres: sparse update

Now updating only 1 row out of ten, still one column only:

```
update demo set b=b+1 where mod(a,10)=1;

( written 89923584 bytes to pg_wal/000000010000000A00000083 -> total WAL segments: 13.88 MB ) cmd=postgres: wal writer process
( written 469123072 bytes to pg_wal/000000010000000A00000084 -> total WAL segments: 22.98 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
...
( written 563576832 bytes to pg_wal/000000010000000A0000008D -> total WAL segments: 151.07 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 130940928 bytes to pg_wal/000000010000000A0000008D -> total WAL segments: 151.27 MB ) cmd=postgres: wal writer process
UPDATE 100000
```

So, 10% of the rows had to be copied to their new version, which brings the table size to an additional 15 MB:

```
analyze demo;
ANALYZE
select relname, relnamespace, reltype, reloftype, relowner, relam, relfilenode, reltablespace, relpages, reltuples, relallvisible, relpages*8/1024 MB from pg_class where relname = 'demo';

 relname | relnamespace | reltype | reloftype | relowner | relam | relfilenode | reltablespace | relpages |  reltuples  | relallvisible | mb
---------+--------------+---------+-----------+----------+-------+-------------+---------------+----------+-------------+---------------+-----
 demo    |         2200 |   25157 |         0 |    16385 |     0 |       25155 |             0 |    40385 | 1.07267e+06 |             0 | 315
(1 row)
```

For these additional 15 MB, half of the table pages had to be modified (the current version having to point to the new version), and the logging generated was 150 MB. Because of MVCC at tuple level, doing something similar to ‘chained rows’ and ‘row migration’ for all updates, and because of full page logging, even sparse updates generate a lot of log writes.
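The 150 MB of logging for a 10% update follows from simple arithmetic on the numbers above (a back-of-envelope sketch, assuming rows are stored roughly in insertion order):

```python
ROWS = 1_000_000
PAGES = 19231      # relpages of the original heap, from the pg_class output above
PAGE_SIZE = 8192

rows_per_page = ROWS / PAGES           # about 52 rows per 8 KB page
updated_per_page = rows_per_page / 10  # mod(a,10)=1 hits about 5 rows per page

# Since every page holds several updated rows, essentially every original page
# is touched and gets a full page image after the checkpoint:
full_page_mb = PAGES * PAGE_SIZE / 1024 ** 2
```

With about five updated rows per page, no page escapes the update, so the full-page images alone are around 150 MB even though only 10% of the rows changed.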
Postgres: delete

Here is a delete of those million rows:

```
delete from demo;

( written 576364544 bytes to pg_wal/000000010000000A0000008E -> total WAL segments: 6.44 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
( written 134930432 bytes to pg_wal/000000010000000A0000008E -> total WAL segments: 6.73 MB ) cmd=postgres: wal writer process
( written 589225984 bytes to pg_wal/000000010000000A0000008F -> total WAL segments: 18.70 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
...
( written 162054144 bytes to pg_wal/000000010000000A00000099 -> total WAL segments: 184.70 MB ) cmd=postgres: wal writer process
( written 740352000 bytes to pg_wal/000000010000000A0000009A -> total WAL segments: 189.80 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
DELETE 1000001
( written 163217408 bytes to pg_wal/000000010000000A0000009A -> total WAL segments: 196.22 MB ) cmd=postgres: wal writer process
```

Marking tuples as deleted does not increase the table:

```
analyze demo;
ANALYZE
select relname, relnamespace, reltype, reloftype, relowner, relam, relfilenode, reltablespace, relpages, reltuples, relallvisible, relpages*8/1024 MB from pg_class where relname = 'demo';

 relname | relnamespace | reltype | reloftype | relowner | relam | relfilenode | reltablespace | relpages | reltuples | relallvisible | mb
---------+--------------+---------+-----------+----------+-------+-------------+---------------+----------+-----------+---------------+-----
 demo    |         2200 |   25157 |         0 |    16385 |     0 |       25155 |             0 |    40385 |    275837 |             0 | 315
(1 row)
```

But all current tuples have to be marked as deleted and not visible once the transaction is committed. This touches all pages for the current version, which is more than 150 MB of logging here.

Postgres: vacuum

After two updates and a delete, I have old tuples in this table.
It seems that VACUUM does not generate any logging:

 vacuum demo;
 ( written 762445824 bytes to pg_wal/000000010000000A0000009B -> total WAL segments: 14.67 MB ) cmd=postgres: demo demo 192.168.56.122(38013)
 VACUUM

My guess (but remember that I am a newbie in Postgres) is that in case of a crash occurring before the next checkpoint, we would just have to vacuum again. But this is not what was answered in the pgsql-general list a few years ago. Note that full page logging is not necessary for all changes, but only for the first change to a page after it was read from disk following a checkpoint. This is sufficient to cover future write failures because recovery will start from there. Once the full page is logged, a change vector is sufficient for further recovery. However, I got the same amount of WAL, 15 MB, when vacuuming after a checkpoint.

Oracle: insert

Let's do some similar things in Oracle, whose MVCC implementation is completely different: at block level, with undo logging.

 SQL> create table demo as select rownum a,rownum b,rownum c,rownum d,rownum e,rownum f,lpad('x',100,'x') g from xmltable('0 to 0');
 Table created.

I have explained in a previous post how I get the delta values from V$MYSTAT join V$STATNAME using (STATISTIC#) for 'redo size', so no need to strace here. But we can see the same result by measuring the writes to the redo log groups (taking care not to double count the multiplexed members).
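As an aside, the Postgres WAL volumes measured above with strace could also be read from within the database, by bracketing a statement with LSN reads. A psql sketch, assuming Postgres 10 function names:

```sql
-- psql sketch: measure the WAL generated by one statement
select pg_current_wal_lsn() as lsn_before \gset
update demo set b=b+1;
select pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), :'lsn_before')) as wal_generated;
```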
 SQL> insert into demo select rownum a,rownum b,rownum c,rownum d,rownum e,rownum f,lpad('x',100,'x') g from xmltable('1 to 1000000');
 1000001 rows inserted.

 SQL> @ _mystat_diff.sql

 db block changes        redo size undo change vector size     redo entries
 ---------------- ---------------- ----------------------- ----------------
          141,342      155,218,876               4,380,448          104,411

This is about 150 MB, which is the volume of the table:
 SQL> exec dbms_stats.gather_table_stats(user,'DEMO');
 PL/SQL procedure successfully completed.

 SQL> select table_name,num_rows,blocks,(blocks*block_size/1024/1024) MB, avg_row_len from user_tables join dba_tablespaces using(tablespace_name) where table_name='DEMO';

 TABLE_NAME                       NUM_ROWS     BLOCKS         MB AVG_ROW_LEN
 ------------------------------ ---------- ---------- ---------- -----------
 DEMO                              1000001      19280    150.625         131

Conclusion for inserts: all databases have to log the whole data inserted in order to be protected from an instance crash. Note that Oracle has a way to insert directly into the datafiles, bypassing the buffer cache, which reduces the logging required for crash recovery. But I'm not doing bulk inserts here.
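As a quick sanity check (my arithmetic, using the statistics shown above), the redo volume of the insert is essentially the raw volume of the inserted rows plus block and redo-entry overhead:

```python
# Compare the redo generated by the insert with the raw size of the inserted
# rows. Figures are taken from the statistics shown above.
ROWS = 1_000_001
AVG_ROW_LEN = 131          # avg_row_len from user_tables
REDO_BYTES = 155_218_876   # 'redo size' for the insert

raw_mb = ROWS * AVG_ROW_LEN / 1024 / 1024
redo_mb = REDO_BYTES / 1024 / 1024
print(f"row data ~{raw_mb:.0f} MB, redo ~{redo_mb:.0f} MB")
# row data ~125 MB, redo ~148 MB: the redo is the data plus overhead
```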

Oracle: update

The update in Oracle is done in-place. There is no need to copy the whole row (except in the rare cases where the row grows and no longer fits in its block). However, the old value of the column must be copied for MVCC, into the UNDO segment. This is why we see 46 MB of 'undo change vector size' here.
 SQL> update demo set b=b+1;
 1000001 rows updated.

 SQL> commit;
 Commit complete.

 SQL> @ _mystat_diff.sql

 db block changes        redo size undo change vector size     redo entries
 ---------------- ---------------- ----------------------- ----------------
          170,777      105,301,308              48,641,772           82,221
The UNDO is only the change vector, not the full block. If you read about copies of full blocks to rollback segments, it is a confusion, either from veterans of Oracle 5 or from a misunderstanding of the flashback features. Because the UNDO is stored in segments, written first into the buffer cache, it is itself protected by redo logging, so about 46 MB of the redo is actually the redo of the undo vectors. The other 54 MB of redo is the new values of the update.
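That split can be checked against the statistics above (my arithmetic):

```python
# Split the redo of the full update into redo protecting the undo vectors and
# redo carrying the new column values. Figures from the statistics above.
REDO_BYTES = 105_301_308   # 'redo size'
UNDO_BYTES = 48_641_772    # 'undo change vector size'

undo_mb = UNDO_BYTES / 1024 / 1024
new_values_mb = (REDO_BYTES - UNDO_BYTES) / 1024 / 1024
print(f"redo of undo ~{undo_mb:.0f} MB, redo of new values ~{new_values_mb:.0f} MB")
# redo of undo ~46 MB, redo of new values ~54 MB
```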

Oracle: sparse update

The logging of change vectors rather than full pages is even cheaper with sparse updates:
 SQL> update demo set b=b+1 where mod(a,10)=1;
 100001 rows updated.

 SQL> commit;
 Commit complete.

 SQL> @ _mystat_diff.sql

 db block changes        redo size undo change vector size     redo entries
 ---------------- ---------------- ----------------------- ----------------
           56,583       15,414,328               6,111,608           36,921
The volume of undo and redo generated is only 15 MB here, including 6 MB of undo vectors. This is really optimized, and it is one reason why you should update only the columns that have changed (and not use the default non-dynamic update of Hibernate, for example).

Oracle: delete

The delete has to mark all rows as deleted, and because the space can immediately be reused, the whole row must be logged into the UNDO. That, in turn, has to be logged into the REDO, so the delete generates a lot of logging:
 SQL> delete from demo;
 1000001 rows deleted.

 SQL> commit;
 Commit complete.

 SQL> @ _mystat_diff.sql

 db block changes        redo size undo change vector size     redo entries
 ---------------- ---------------- ----------------------- ----------------
        2,124,823      403,755,892             240,302,088        1,093,821

I have no indexes here. With indexes, all index entries have to be marked as deleted as well, and this generates undo and redo vectors because MVCC in Oracle is at block level: each block modification, for table or index, has to be logged.

Deleting a lot of rows is an expensive operation in Oracle. For bulk purges, it is often better to truncate and insert /*+ append */ when possible (as in a non-atomic materialized view refresh). Partitioning also helps, for example to drop old data when the table is partitioned on date.
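A hedged sketch of that purge pattern (the table name, column, and retention predicate are illustrative, not from the post):

```sql
-- Keep only the recent rows, then rebuild with direct-path insert
-- to minimize undo and redo.
create table demo_keep as select * from demo where insertion_date >= sysdate - 90;
truncate table demo;
insert /*+ append */ into demo select * from demo_keep;
commit;
drop table demo_keep;
```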

Postgres without full page logging

Given the huge overhead, is full page logging really required? There are plans to avoid it, mentioned in the Postgres ToDo wiki, or at least to keep it only short term for crash recovery and not for media recovery. Another possibility is to implement a checksum on the blocks so that fractured blocks can be detected. Then, when one is detected, it may not need full page logging to be recovered if we can restore a previous backup. This takes longer to recover, but can be acceptable given the low probability of this kind of failure. In addition, when you have a physical standby synchronized with log shipping, you have an easy way to recover without having to restore files. But you need a checksum to detect the problem.

Without a checksum, the problem is the detection of partial writes. But if you trust your storage, and if you fail over to the standby in case of a crash, you may accept setting full_page_writes=off, and this is what I did here.
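For reference, this is a reloadable setting (a sketch; weigh the torn-page risk discussed above before doing this in production):

```sql
-- full_page_writes can be changed without a restart
ALTER SYSTEM SET full_page_writes = off;
SELECT pg_reload_conf();
SHOW full_page_writes;
```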

 insert into demo select generate_series a,generate_series b,generate_series c,generate_series d,generate_series e,generate_series f,lpad('x',100,'x') g from generate_series(1,1000000);
 ...
 ( written 125255680 bytes to pg_wal/000000010000000A000000E3 -> total WAL segments: 140.65 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
 INSERT 0 1000000
The insert still has to log all the new data: 140 MB.

 update demo set b=b+1;
 ...
 ( written 72613888 bytes to pg_wal/000000010000000A000000F2 -> total WAL segments: 213.02 MB ) cmd=postgres: wal writer process
 UPDATE 1000001

The update has to log only what is modified, but because of the Postgres MVCC implementation, the whole row has to be written in its new version, and the old one has its pointer updated: 210 MB here.

 update demo set b=b+1 where mod(a,10)=1;
 ( written 305709056 bytes to pg_wal/000000010000000A000000F3 -> total WAL segments: 1.96 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
 ( written 72613888 bytes to pg_wal/000000010000000A000000F3 -> total WAL segments: 5.62 MB ) cmd=postgres: wal writer process
 ( written 75718656 bytes to pg_wal/000000010000000A000000F4 -> total WAL segments: 9.65 MB ) cmd=postgres: wal writer process
 ( written 310665216 bytes to pg_wal/000000010000000A000000F4 -> total WAL segments: 9.65 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
 UPDATE 100000

The sparse update benefits from logging only the changed rows: 10 MB here. This is even smaller than with Oracle because there is no UNDO to write: the old values stay in place.

 delete from demo;
 ( written 323256320 bytes to pg_wal/000000010000000A000000F5 -> total WAL segments: 11.27 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
 ( written 338829312 bytes to pg_wal/000000010000000A000000F6 -> total WAL segments: 26.92 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
 ( written 76562432 bytes to pg_wal/000000010000000A000000F6 -> total WAL segments: 31.41 MB ) cmd=postgres: wal writer process
 ( written 345415680 bytes to pg_wal/000000010000000A000000F7 -> total WAL segments: 39.73 MB ) cmd=postgres: demo demo 192.168.56.122(38109)
 ( written 83410944 bytes to pg_wal/000000010000000A000000F7 -> total WAL segments: 40.41 MB ) cmd=postgres: wal writer process
 DELETE 1000001

The delete is cheap when full_page_writes=off because only the visibility is changed; the data remains until the delete is committed and the table vacuumed. If you have a lot of rows to delete, you might consider setting full_page_writes=off, but be sure to have a backup to restore in case of a crash.

Oracle full page logging in backup mode

So, Oracle by default does not need full page logging to protect from fractured blocks, because they can be detected. If the storage crashes while a block is partially written, the block is corrupt. Thanks to the checksum, this corruption will be detected during recovery (or even earlier, depending on DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT). The redo is not sufficient, as it contains only change vectors, but you can restore the block from the last backup and Oracle can then do a simple block recovery. This recovery can also be done from the standby database.
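A toy illustration of the detection mechanism (not Oracle's actual checksum algorithm): a checksum stored with the block no longer matches when only part of the block reached disk.

```python
import zlib

PAGE_SIZE = 8192
page = bytes(range(256)) * (PAGE_SIZE // 256)   # a full 8 KB block image
stored_checksum = zlib.crc32(page)              # checksum kept with the block

# Simulate a fractured block: only the first half was written before the
# crash; the second half still contains stale content (zeros here).
fractured = page[:PAGE_SIZE // 2] + bytes(PAGE_SIZE // 2)

print(zlib.crc32(fractured) == stored_checksum)  # False: corruption is detected
```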

However, full page logging exists in Oracle. When running backups with a non-Oracle tool that is not aware of the block checksum, you need to enclose the copy or snapshot between 'begin backup' and 'end backup'. You do this because an online backup may read partially updated blocks and, without the checksum, cannot detect them. A corrupt backup is not very useful, and this is why this backup mode generates more redo, to be able to recover such blocks. This is very similar to full page logging: the redo generated for the first modification of a buffer stores the whole block. Subsequent modifications, until the buffer is checkpointed, need only the change vectors.
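The commands around such a snapshot look like this (database-wide syntax, available since 10g; a sketch only):

```sql
ALTER DATABASE BEGIN BACKUP;
-- take the storage snapshot or copy the datafiles with the external tool here
ALTER DATABASE END BACKUP;
```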

I think the first article I've ever written was a description of the Oracle backup mode. And it is only still visible thanks to archive.org, because it was published on… Google Knol!

So, here is the same run with Oracle in backup mode.

Insert does not change a lot as it fills full blocks:
 SQL> insert into demo select rownum a,rownum b,rownum c,rownum d,rownum e,rownum f,lpad('x',100,'x') g from xmltable('1 to 1000000');
 1000000 rows created.

 SQL> @ _mystat_diff.sql

 db block changes        redo size undo change vector size     redo entries
 ---------------- ---------------- ----------------------- ----------------
          141,376      156,527,072               4,380,448          124,195

Full update of one column generates the same undo, but more than twice the redo because of full page logging:
 SQL> update demo set b=b+1;
 1000001 rows updated.

 SQL> commit;
 Commit complete.

 SQL> @ _mystat_diff.sql

 db block changes        redo size undo change vector size     redo entries
 ---------------- ---------------- ----------------------- ----------------
          170,778      238,317,632              48,641,772          104,640

Sparse update generates almost exactly the same redo as the full update, because these 10% of rows touch all pages:
 SQL> update demo set b=b+1 where mod(a,10)=1;
 100001 rows updated.

 SQL> commit;
 Commit complete.

 SQL> @ _mystat_diff.sql

 db block changes        redo size undo change vector size     redo entries
 ---------------- ---------------- ----------------------- ----------------
          319,622      240,502,284              17,832,196          192,815

Delete generates even more because there is all the UNDO in addition to all the data pages:
 SQL> delete from demo;
 1000001 rows deleted.

 SQL> commit;
 Commit complete.

 SQL> @ _mystat_diff.sql

 db block changes        redo size undo change vector size     redo entries
 ---------------- ---------------- ----------------------- ----------------
        2,125,285      558,510,928             240,303,768        1,143,131

So what?

Beyond the very different implementations of Postgres and Oracle, we can see that we have flexibility: the heavy logging that Postgres generates by default can be reduced in some cases, and the minimal logging that is the default in Oracle can be larger in some situations. The most important thing, as with all technologies, is to understand how it works. Only then can you make the right choice to balance performance, availability, and cost. Understanding how it works means reading the docs (how it is supposed to work) and testing (how it actually works). With Oracle, there is additional information from a huge community that has been testing and using it for decades. With Postgres, as with all open source projects, the source code with its comments is an amazing documentation.

The article Full page logging in Postgres and Oracle appeared first on Blog dbi services.

### Oracle Cloud Applications Help Providence St. Joseph Health Deliver on its Digital Innovation Vision

Oracle Press Releases - Wed, 2018-02-14 11:00
Press Release
Oracle Cloud Applications Help Providence St. Joseph Health Deliver on its Digital Innovation Vision

Oracle Cloud Applications improves organizational efficiency, delivery of services, and patient outcomes for non-profit healthcare organizations

Redwood Shores, Calif.—Feb 14, 2018

Oracle today announced that Providence St. Joseph Health has selected Oracle Cloud Applications to advance its critical business systems and streamline operational processes.

Providence St. Joseph Health needed a modern business platform that could scale to support its changing needs. With Oracle Enterprise Resource Planning (ERP) Cloud, Oracle Enterprise Performance Management (EPM) Cloud, Oracle Human Capital Management (HCM) Cloud, and Oracle Supply Chain Management (SCM) Cloud, Providence St. Joseph Health will be able to combine finance, employee, and supplier data to help improve productivity, reduce costs, and enhance visibility within the organization.

“We are focused on creating smarter and more efficient systems for managing our growing system, improving planning and forecasting, and enabling agile decision-making,” said Jo Ann Escasa-Haigh, Executive Vice President & Chief Financial Officer, Operations at Providence St. Joseph Health. “We are excited to partner with Oracle to deliver a solution to match our expanding needs.”

Oracle ERP Cloud’s complete, modern, and innovative platform will help Providence St. Joseph Health transform its finance function by enabling increased productivity, lower costs, and improved controls. Oracle EPM Cloud will enable Providence St. Joseph to analyze data for accurate forecasting and budget planning. Oracle SCM Cloud will help increase the efficiency of operations by enforcing procurement policies, providing real-time insights, and automating demand/supply planning. Oracle HCM Cloud will provide insightful data and tools to help Providence St. Joseph Health optimize talent management, gain complete workforce insights, and increase operational efficiency to drive a better employee experience.

“Oracle Cloud Applications will support Providence St. Joseph Health in driving our vision of digital innovation in our business processes, agility in our deployment of solutions, and long-term value and scalability,” adds Janice Newell, Executive Vice President, Information Services & Chief Information Officer at Providence St. Joseph Health.

“Technology plays an important role in the healthcare industry, from helping offices run more efficiently to enabling hospital staff to deliver more accurate diagnoses and save more lives,” said Steve Miranda, Executive Vice President, Applications Product Development, Oracle. “Oracle Cloud Applications will significantly improve business systems and organizational efficiency to help Providence St. Joseph Health provide superior patient care and better respond to community health needs.”

For additional information on Oracle ERP Cloud applications, visit Oracle Enterprise Resource Planning (ERP) Cloud’s Facebook and Twitter or the Modern Finance Leader blog.

Contact Info
Evelyn Tam
Oracle PR
1.650.506.5936
evelyn.tam@oracle.com
Nisha Morris
Providence St. Joseph Health PR
1.714.414.3232
nisha.morris@stjoe.org

Providence St. Joseph Health is committed to improving the health of the communities it serves, especially those who are poor and vulnerable. Including 50 hospitals, 829 physician clinics, senior services, supportive housing and many other health and educational services, the health system and its partners employ more than 111,000 caregivers (employees) serving communities across seven states – Alaska, California, Montana, New Mexico, Oregon, Texas and Washington. With system offices based in Renton, Wash. and Irvine, Calif., the Providence St. Joseph Health family of organizations works together to meet the needs of its communities, both today and into the future. For more information about Providence St. Joseph Health, visit psjhealth.org.

The Oracle Cloud offers complete SaaS application suites for ERP, HCM and CX, plus best-in-class database Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) from data centers throughout the Americas, Europe and Asia. For more information about Oracle (NYSE:ORCL), please visit us at oracle.com

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle Corporation.
