Yann Neuhaus

dbi services technical blog

Introducing high-availability with SQL Server on Linux

Sun, 2017-01-15 11:39

A couple of months ago, I did my first installation of SQL Server on Linux and wrote about it in this blog post. So now it's time to start the new year by talking about high availability on Linux with SQL Server vNext. Running standalone instances will be suitable for some scenarios, but I guess introducing high availability for critical environments remains mandatory. Currently, CTP1 supports installing a SQL Server Failover Cluster Instance on top of the RHEL HA Add-on, which is based on Pacemaker. This is a good start, although I hope to also see availability groups in the future (maybe in a future CTP version). In this blog post I will not go into the details of my installation process, because some steps will certainly change over time, but I would like to share my notes (or feelings) about this new way to achieve high availability in the SQL Server on Linux world.

I performed the installation by using the same infrastructure exposed on the Microsoft documentation. As usual, my environment is fully virtualized with Hyper-V.

 

[Image: SQL Server on Linux FCI architecture]

So, the first step consisted of installing 3 virtual machines based on a CentOS 7 distribution (the Microsoft documentation is intended for RHEL 7, but CentOS 7 is perfectly suitable in our case). The first two host the cluster and SQL Server; I followed the same installation process to install SQL Server on Linux, no change here. The third one is the NFS server. So let's look at my storage configuration:

[mikedavem@nfs ~]$ sudo fdisk -l | grep -i sd
Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200    41943039    19921920   8e  Linux LVM
Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
/dev/sdb1            2048    20971519    10484736   83  Linux
Disk /dev/sdc: 10.7 GB, 10737418240 bytes, 20971520 sectors
/dev/sdc1            2048    20971519    10484736   83  Linux

 

The corresponding partition /dev/sdb1 (ext4 formatted) needs to be mounted automatically by the system and will be used as shared storage by the NFS server afterwards.

[mikedavem@nfs ~]$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Jan 12 21:46:34 2017
#
…

/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=e4f5fc0b-1fd4-4e18-b655-a76b87778b73 /boot                   xfs     defaults        0 0
/dev/mapper/cl-swap     swap                    swap    defaults        0 0
/dev/sdb1       /mnt/sql_data_nfs       ext4    auto,user,rw    0 0
/dev/sdc1       /mnt/sql_log_nfs        ext4    auto,user,rw    0 0

 

Then my NFS server will expose the shared directories /mnt/sql_data_nfs and /mnt/sql_log_nfs to the cluster layer.

[mikedavem@nfs ~]$ cat /etc/exports
/mnt/sql_data_nfs 192.168.5.0/24(rw,sync,no_subtree_check,no_root_squash)
/mnt/sql_log_nfs  192.168.5.0/24(rw,sync,no_subtree_check,no_root_squash)

 

We will focus only on the directory /mnt/sql_data_nfs in this case.

[mikedavem@nfs ~]$ showmount -e
Export list for nfs.dbi-services.com:
/mnt/sql_log_nfs  192.168.5.0/24
/mnt/sql_data_nfs 192.168.5.0/24
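For a quick manual test from a cluster node, such an export can be mounted like this (a hedged sketch; the server name comes from the showmount output above, and the mount point is only an example):

[mikedavem@linux01 ~]$ sudo mkdir -p /mnt/sql_data_test
[mikedavem@linux01 ~]$ sudo mount -t nfs nfs.dbi-services.com:/mnt/sql_data_nfs /mnt/sql_data_test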

 

That's it. My directory is ready to be used by my SQL Server cluster nodes as shared storage for my databases. Let's continue with the second step: we need to install the underlying cluster infrastructure, which includes components such as Pacemaker (the resource manager) and Corosync (the communication layer between the cluster nodes).
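For reference, here is a hedged sketch of the cluster bootstrap commands on CentOS/RHEL 7 (package names and pcs syntax as of that release; the cluster and node names match my configuration below, so adapt them and the hacluster password to your environment):

sudo yum install -y pacemaker pcs resource-agents
sudo systemctl enable pcsd
sudo systemctl start pcsd
sudo pcs cluster auth linux01.dbi-services.test linux02.dbi-services.test -u hacluster
sudo pcs cluster setup --name linux_cluster linux01.dbi-services.test linux02.dbi-services.test
sudo pcs cluster start --all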

[Image: Pacemaker architecture]

                                        From Microsoft documentation

By reading the Pacemaker documentation on the web, I could find concepts similar to those we have with Microsoft and the Windows Failover Cluster feature. After facing some configuration issues, here is my final configuration:

  • Two cluster nodes (linux01.dbi-services.test / linux02.dbi-services.test)
  • Two resources that concern my SQL Server FCI (sqllinuxfci resource + virtualip resource)
[mikedavem@linux01 ~]$ sudo pcs status

Cluster name: linux_cluster
Stack: corosync
Current DC: linux01.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum

Last updated: Sat Jan 14 19:53:55 2017          Last change: Sat Jan 14 17:28:36 2017 
by root via crm_resource on linux01.dbi-services.test

2 nodes and 2 resources configured

Online: [ linux01.dbi-services.test linux02.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::sql:fci): Started linux02.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Started linux02.dbi-services.test

...

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

As said previously, I faced some issues during the cluster installation process. The first one concerned a typo in the Microsoft documentation (at least the command did not work in my case). I was not able to create my SQL Server resource after installing the mssql-server-ha package. Indeed, according to the Microsoft documentation, we need to create a SQL Server resource based on the ocf:mssql:fci resource agent. However, after some investigation, I figured out that this definition doesn't exist in the current OCF resource agent folder hierarchy (see below). In my case, I had to change the definition to ocf:sql:fci.

[mikedavem@linux01 ~]$ ls -l /usr/lib/ocf/resource.d/

total 16
drwxr-xr-x. 2 root root 4096 Jan 12 19:31 heartbeat
drwxr-xr-x. 2 root root 4096 Jan 12 19:31 openstack
drwxr-xr-x. 2 root root 4096 Jan 12 19:31 pacemaker
drwxr-xr-x. 2 root root 4096 Jan 14 10:55 sql

[mikedavem@linux01 ~]$ ls -l /usr/lib/ocf/resource.d/sql/
total 20
-rw-r--r--. 1 root root  3473 Jan 14 10:58 cfg
-rwxr-xr-x. 1 root root 15979 Dec 16 02:09 fci
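With the corrected agent name, the resource creation looks like this (a hedged sketch: the virtual IP matches the /etc/hosts entry shown later in this post, and the timeout values are assumptions borrowed from typical FCI setups):

sudo pcs resource create sqllinuxfci ocf:sql:fci op start timeout=600s op stop timeout=600s
sudo pcs resource create virtualip ocf:heartbeat:IPaddr2 ip=192.168.5.19
sudo pcs constraint colocation add virtualip with sqllinuxfci INFINITY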

 

Let's say it was also a good opportunity to understand what an OCF resource agent is. Coming from the Microsoft world, I quickly figured out that OCF resource agents correspond to what we call resource DLLs in a Windows Failover Cluster. In addition, according to the Linux documentation, I noticed similar concepts such as the functions and entry points that a Windows resource DLL is expected to implement. Very interesting!

The second issue concerned a strange behavior when the failover of my SQL Server resource occurred. I first noticed the following messages:

Failed Actions:
* sqllinuxfci_start_0 on linux02.dbi-services.test 'unknown error' (1): call=16, status=complete, exitreason='SQL server crashed during startup.',
    last-rc-change='Sat Jan 14 17:35:30 2017', queued=0ms, exec=34325ms

Then I moved on to the SQL Server error log to try to find some clues about this issue (the SQL Server error log is always your friend in this case):

[mikedavem@linux01 sql]$ sudo cat /var/opt/mssql/log/errorlog


2017-01-14 14:38:55.50 spid5s      Error: 17204, Severity: 16, State: 1.
2017-01-14 14:38:55.50 spid5s      FCB::Open failed: Could not open file C:\var\opt\mssql\data\mastlog.ldf for file number 2.  OS error: 2(The system cannot find the file specified.).
2017-01-14 14:38:55.50 spid5s      Error: 5120, Severity: 16, State: 101.
2017-01-14 14:38:55.50 spid5s      Unable to open the physical file "C:\var\opt\mssql\data\mastlog.ldf". Operating system error 2: "2(The system cannot find the file specified.)".
…

That's the point. My SQL Server engine was not able to open the master database because it could not find the specified path. As an aside, you may notice the path used by SQL Server in the error message: a Windows-style path which includes a drive letter! Very surprising, but I'm sure it will be changed in the near future. For the purpose of my tests, I had no choice but to change the folder permissions to 777 to get my SQL Server instance to start. One point to investigate, of course, because this will not meet security policy rules in a production environment.
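For the record, the workaround looked like this (hedged, and definitely not production-grade; proper ownership for the mssql user should be worked out instead):

[mikedavem@linux01 ~]$ sudo chmod 777 /var/opt/mssql/data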

The third one concerned IP and hostname resolution. I had to add the cluster IP addresses and hostnames into the /etc/hosts file on each cluster node to get them to resolve correctly, as follows:

[mikedavem@linux01 sql]$ cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.5.17    linux01.dbi-services.test linux01
192.168.5.18    linux02.dbi-services.test linux02
192.168.5.19    sqllinuxfci.dbi-services.test sqllinuxfci

 

Finally, after fixing the previous issues, I performed some failover tests (moving resources and bringing a cluster node offline) that ran successfully.

  • Move resource
[mikedavem@linux01 sql]$ sudo pcs resource move sqllinuxfci linux01.dbi-services.test

[mikedavem@linux01 sql]$ sudo pcs status

…
2 nodes and 2 resources configured

Online: [ linux01.dbi-services.test linux02.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::sql:fci): Started linux01.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Started linux01.dbi-services.test

 

  • Simulate failure node LINUX02.dbi-services.test
[mikedavem@linux01 ~]$ sudo pcs status

Cluster name: linux_cluster
Stack: corosync
Current DC: linux01.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Sun Jan 15 10:59:14 2017          Last change: Sun Jan 15 10:56:54 2017 by root via crm_resource on linux01.dbi-services.test

2 nodes and 2 resources configured

Online: [ linux01.dbi-services.test ]
OFFLINE: [ linux02.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::sql:fci): Started linux01.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Started linux01.dbi-services.test

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

Another “bug” I noticed is that the SERVERPROPERTY() function output does not correctly show my FCI name. Probably a mistake that will be resolved in the next CTPs. Be patient…
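A hedged way to check this output from any machine with the SQL Server command-line tools installed (the virtual server name and sa login are those of my lab):

sqlcmd -S sqllinuxfci -U sa -Q "SELECT SERVERPROPERTY('ServerName');"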

[Image: SERVERPROPERTY output]

My final thoughts

Here are my feelings after playing a little bit with my new infrastructure.

Based on this first installation, we must face the facts: building a SQL Server FCI infrastructure is a faster process on Linux than on Windows, but I prefer to be prudent and not draw hasty conclusions. Let's see what we get in the RTM release. One important thing I noticed, for example, is that there is no explicit cluster validation compared to Windows, at least in appearance. In fact, if we take a closer look at the cluster side, we already have some validation steps during the cluster creation (node authentication and cluster setup). However, I didn't see any validation step on the SQL Server side (compared to Windows), except basic verifications which include verifying that the standalone instances are able to start and share the same storage.

Moreover, another important point we may notice is that we don't need to set up DNS servers to run the cluster infrastructure. During my tests I didn't use one (hostname resolution was done only from the /etc/hosts file), but as soon as I had to connect to my infrastructure from remote computers, DNS resolution became almost mandatory :)

Finally, there are plenty of tests to perform to understand how the cluster layer as well as the cluster resources behave.

Well, there is still a way to go in order to complete all my work on Linux. To be continued…

This article Introducing high-availability with SQL Server on Linux appeared first on the dbi services blog.

2016 is over and it was great, 2017 will be even more exciting in the open source area

Fri, 2017-01-13 10:05

Disclaimer: This will not be a technical post at all. The goal is to provide a short overview of what we did in 2016 and what we plan to do in 2017 when it comes to our Open Infrastructure division. We’ll start with a few numbers then look at the events we did and some of the projects we completed in 2016. Here we go …

We had a total of 90 blog posts in 2016 covering the following areas: Linux, MySQL/MariaDB, MongoDB, Cassandra, Mirantis, PostgreSQL, EnterpriseDB, Docker, Ansible, Amazon AWS. Here is the list for 2016:

installing-edb-postgres-advanced-server-9-5
the-postgres-plus-cloud-database
avoiding-access-to-the-public-schema-in-postgresql
using-the-official-postgresql-yum-repositories
external-tables-in-postgresql
postgresql-on-amazon-rds-loading-the-beast
postgresql-on-amazon-rds-adding-a-replica-to-the-beast
postgresql-on-amazon-rds-configuring-the-beast
postgresql-on-amazon-rds-securing-the-beast
postgresql-on-amazon-rds-setting-up-the-beast
the-dbi-services-postgresql-reference-architecture-2-the-community-approach
edb-postgres-advanced-server-9-5-new-features-profiles
install-pljava-in-postgresql-9-5-1
feeding-blogsrrs-items-directly-into-your-postgresql-database
pre-warming-the-buffer-cache-in-postgresql
transactional-ddl
install-pljava-in-postgres-plus-advanced-server-9-5
launching-a-vm-with-the-amazon-aws-command-line-tools
linux-how-to-check-the-exit-status-of-several-piped-commands
lvm-how-to-extend-a-volume-group
linux-quick-tip-what-is-the-local-time-in-kolkata
mongodb-installation
introduction-to-mongodb
getting-started-with-ansible-preparations
the-almost-same-sample-schema-for-all-major-relational-databases-4-mssql
connecting-your-postgresql-instance-to-a-mariadbmysql-instance
the-almost-same-sample-schema-for-all-major-relational-databases-3-mysqlmariadb
maintenance-scenarios-with-edb-failover-manager-3-witness-node
maintenance-scenarios-with-edb-failover-manager-2-primary-node
the-almost-same-sample-schema-for-all-major-relational-databases-2-oracle
the-almost-same-sample-schema-for-all-major-relational-databases-1-postgresql
stay-tuned-with-kernel-parameters
maintenance-scenarios-with-edb-failover-manager-1-standby-node
a-look-at-postgresql-9-6-psql-gexec-2
a-look-at-postgresql-9-6-killing-idle-transactions-automatically
a-look-at-postgresql-9-6-progress-reporting-for-vacuum-operations
a-look-at-postgresql-9-6-the-wait-interface
happy-birthday-postgresql
connecting-your-postgresql-instance-to-a-microsoft-sql-server-instance
interested-in-the-most-advanced-open-source-database-where-you-can-meet-us-in-june
postgresql-as-a-central-reporting-hub-yes-of-course
swiss-pgday-2016-slides-are-online
which-parameter-changes-do-require-a-restart-of-my-postgresql-instance
shrinking-oracle-vm-virtualbox-with-zerofree
elasticsearch-kibana-logstash-and-filebeat-centralize-all-your-database-logs-and-even-more
getting-started-with-ansible-creating-the-postgresql-instance
getting-started-with-ansible-download-the-postgresql-sources-compile-and-install
getting-started-with-ansible-installing-os-packages-creating-groups-and-users/
gettin-ansible-up-and-running-on-a-sles-12-sp1-host
how-to-do-a-filesystem-resize-ext3ext4-on-redhat-running-on-vmware
running-postgresql-on-zfs-on-linux-fun-with-snapshots-and-clones
running-postgresql-on-zfs-on-linux
connecting-your-postgresql-instance-to-an-oracle-database-debian-version
the-dbi-open-infrastructure-technology-division
sharding-with-postgresql
what-the-hell-are-these-template0-and-template1-databases-in-postgresql
securing-your-connections-to-postgresql-by-using-ssl
edb-failover-manager-2-1-two-new-features
edb-failover-manager-2-1-upgrading
disable-red-hat-7-transparent-hugepages
auditing-in-postgresql
understanding-row-level-security-on-postgresql
mariadb-audit-plugin
how-to-install-mirantis-openstack-9-0-using-virtualbox-part-1-3
how-to-install-mirantis-openstack-9-0-using-virtualbox-part-1-2
how-to-install-mirantis-openstack-9-0-using-virtualbox-part-1
how-to-patch-postgres-plus-advanced-server-in-a-standby-configuration
running-a-cassandra-cluster-in-a-single-server
how-to-patch-postgres-plus-advanced-server
apache-cassandra-overview
running-postgresql-on-zfs-on-linux-compression
can-i-do-it-with-postgresql-5-generating-ddl-commands
can-i-do-it-with-postgresql-4-external-tables
can-i-do-it-with-postgresql-3-tablespaces
can-i-do-it-with-postgresql-2-dual
can-i-do-it-with-postgresql-1-restore-points
mongodb-installation-on-windows
managing-my-amazon-web-services-redhat-instance
linux-instance-in-amazon-web-services-aws
edb-postgres-advanced-server-9-6-beta-released
can-i-do-it-with-postgresql-8-transportable-tablespaces
getting-started-with-docker-2-building-your-own-base-image
getting-started-with-docker-1-overview-and-installation
enterprisedb-backup-and-recovery-tool-bart
installing-postgresql-9-6-1-with-enterprisedb-installer
can-i-do-it-with-postgresql-7-partitioning
oel-7-project-quotas-on-oracle-homes-with-xfs-on-oracle-linux-7
oel-7-how-to-disable-ipv6-on-oracle-linux-7
from-mysql-oracle-to-postgres-using-the-edb-migration-toolkit
can-i-do-it-with-postgresql-6-server-programming

Quite a lot of information was shared. For 2017 you should see more posts in the Cloudera and OpenStack areas as well as the private DBaaS service we are currently working on.

But obviously we did not only write blogs :) We organized two PostgreSQL events, we sponsored and gave two talks at the Swiss PGDay 2016, and we had a talk at the IT Tage 2016 in Frankfurt.
For 2017 we'll be organizing an Oracle · Open Infrastructure · Cloudera & Big Data Appliance event, we'll be at the Swiss PGDay 2017, and we'll probably be at the IT Tage in Frankfurt again. If nothing goes wrong we'll be at M|17 in New York. This is what is currently scheduled; maybe there will be even more.

For the projects we did in 2016 there was a lot of stuff: we did highly available EnterpriseDB PPAS projects, we did highly available community PostgreSQL projects, we did a MongoDB project and we even did a PostgreSQL project on Windows. We implemented Ansible at a customer to automate the PostgreSQL deployments. There have been several MariaDB and MySQL Galera cluster implementations as well as MySQL Cluster Control setups. Out of all these projects we got several SLAs signed for MySQL as well as for PostgreSQL, which is a great thing as it proves that our customers trust us. On top of that we certified 7 of our service desk people on EnterpriseDB Postgres Plus (EDB Postgres Advanced Server 9.5 Professional). Nobody else in the D-A-CH region has achieved that so far.
For 2017 there are already some projects in the pipeline: one huge migration from another vendor to PostgreSQL/EDB, and a migration from another vendor to community PostgreSQL. We will assist a customer in building a highly available, extremely flexible PostgreSQL deployment for their web infrastructure. For two other projects we'll migrate existing PostgreSQL deployments to PostgreSQL 9.5, and there will be one huge 6TB migration from PostgreSQL 9.2 to a recent release. And this is just what is known today.

Workshops: in 2016 we released our PostgreSQL DBA Essentials workshop, which has been given four times since then. The MySQL DBA Essentials workshop was also given several times last year.
For 2017 the MySQL workshop will be updated and we plan to release a brand new MongoDB workshop.

In 2016 we released DMK for PostgreSQL and NoSQL.
For 2017 all DMK releases will be updated with the latest and greatest we learned at our customers. One big announcement will be an appliance based on open source products. I cannot tell you much right now, but there will be some news in the coming weeks.

You can see that we invested a lot in these areas in 2016, and 2017 will be no exception. The trend is even going up.

Finally, my personal wish list for 2017: dare to be more open, more flexible, and have fun when you work in your IT landscape. Great things can be built based on open source tools.

PS: To my colleagues: I am sure I forgot one or the other thing we did in 2016 which needs to be mentioned here. Just leave a comment to complete the list :)

Happy new 2017
Daniel

 

This article 2016 is over and it was great, 2017 will be even more exciting in the open source area appeared first on the dbi services blog.

dbi Tail ranked as the 4th best alternative!

Fri, 2017-01-13 08:19

Dear colleagues,

Since the publication of dbi Tail one year ago, lots of people have downloaded it, and some reporters wrote articles about it.

We have received a lot of positive feedback, and we would like to thank you all for your attention!

 

The best article is from the Softpedia website, where the author wrote an interesting description of the tool, as you can see:

http://www.softpedia.com/get/System/File-Management/dbi-Tail.shtml

 

More interestingly, in June the tool was listed alongside the reference on the “AlternativeTo” website. Since then, the tail from dbi services has been moving higher and higher, almost without interruption.

At one point in time, it was ranked as the 3rd best alternative to the reference. Today, it is in 4th position. What a good feeling!

 

During the past year the development of the tail continued, and new features were integrated. The biggest one was support for SSH connections using public key authentication, which makes it possible to connect without any password, just by using a trusted user's public key.

The other one is the ability to quickly navigate between the several “tree” files present in your “etc” folder. This enhances the user experience by grouping the monitored log files for one context, and switching to another context (another “tree” file) takes just one click.

 

In the fresh 1.3 version of dbi Tail, you can also benefit from some bug fixes and enhancements, especially for the Linux environment.

Enjoy continuing to use dbi tail, and do not hesitate to provide your feedback or to like it as well:

http://alternativeto.net/software/baretail/

 

dbi Tail is a living open source project, and it will continue to be in the future!

https://github.com/pschweitz/DBITail/releases

 

Cheers,

Philippe

 

[Image: dbi Tail 1.3]

This article dbi Tail ranked as the 4th best alternative! appeared first on the dbi services blog.

Dataguard Oracle 12.2: Support for Multiple Observers

Tue, 2017-01-10 11:53

With Oracle 12cR2 it is now possible to configure multiple observers within a single Oracle Data Guard broker configuration.
Multiple observers provide an immediate benefit for High Availability. If one observer fails, there are additional observers that can continue to monitor the status of the configuration.
In this blog we are going to talk about this new feature.

Our configuration has 3 servers:
primaserver.localdomain with db_unique_name ORCL_SITE
standserver1.localdomain with db_unique_name ORCL_SITE1
standserver2.localdomain with db_unique_name ORCL_SITE2

With Oracle 12cR2 you can register up to three observers to monitor a single Data Guard broker configuration. Each observer is identified by a name that you supply when you issue the START OBSERVER command.

Let's start with a configuration where Fast-Start Failover is disabled:

DGMGRL> show configuration;
Configuration - ORCL_DR
Protection Mode: MaxProtection
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - Physical standby database
ORCL_SITE2 - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS (status updated 11 seconds ago)
DGMGRL>

And let’s start an observer on each server

DGMGRL> connect sys/root@orcl_site
Connected to "ORCL_SITE"
Connected as SYSDBA.
DGMGRL> start observer prima_observer
[W000 01/05 10:46:03.89] FSFO target standby is (empty)


DGMGRL> connect sys/root@ORCL_SITE1
Connected to "ORCL_SITE1"
Connected as SYSDBA.
DGMGRL> start observer standserver1_observer
[W000 01/05 10:47:19.22] FSFO target standby is (empty)


DGMGRL> connect sys/root@orcl_site2
Connected to "ORCL_SITE2"
Connected as SYSDBA.
DGMGRL> start observer standserver2_observer
[W000 01/05 10:48:25.86] FSFO target standby is (empty)

When fast-start failover is disabled, the observer does not need to coordinate fast-start failover, so all observers are equivalent and have the same functionality, as shown below:

DGMGRL> show observer;
Configuration - ORCL_DR
Fast-Start Failover: DISABLED
Observer "prima_observer"
Host Name: primaserver.localdomain
Last Ping to Primary: 17 seconds ago
Observer "standserver1_observer"
Host Name: standserver1.localdomain
Last Ping to Primary: 2 seconds ago
Observer "standserver2_observer"
Host Name: standserver2.localdomain
Last Ping to Primary: 25 seconds ago
DGMGRL>

Now let’s enable the Fast Start Failover

DGMGRL> enable FAST_START FAILOVER
Enabled.


DGMGRL> show configuration
Configuration - ORCL_DR
Protection Mode: MaxProtection
Members:
ORCL_SITE - Primary database
ORCL_SITE1 - (*) Physical standby database
ORCL_SITE2 - Physical standby database
Fast-Start Failover: ENABLED
Configuration Status:
SUCCESS (status updated 43 seconds ago)
DGMGRL>

Now that FSFO is enabled, we have a master observer and backup observers. The master observer works in the same manner as a single observer did prior to the advent of multiple observers in Oracle Database 12c Release 2 (12.2.0.1).
Only the master observer can coordinate fast-start failover with Data Guard broker. All other registered observers are considered to be backup observers. The master observer is randomly chosen by the primary and the standby.

DGMGRL> show observer;
Configuration - ORCL_DR
Primary: ORCL_SITE
Target: ORCL_SITE1
Observer "prima_observer" - Master
Host Name: primaserver.localdomain
Last Ping to Primary: 3 seconds ago
Last Ping to Target: 0 seconds ago
Observer "standserver1_observer" - Backup
Host Name: standserver1.localdomain
Last Ping to Primary: 2 seconds ago
Last Ping to Target: 3 seconds ago
Observer "standserver2_observer" - Backup
Host Name: standserver2.localdomain
Last Ping to Primary: 2 seconds ago
Last Ping to Target: 2 seconds ago
DGMGRL>

If we try to stop the master observer, we will get an error

DGMGRL> stop observer prima_observer
Error: ORA-16878: master observer cannot be stopped
Failed.
DGMGRL>

This means that if for some reason we want to stop the observer that is currently the master, we first have to promote a backup observer to be the new master. The SET MASTEROBSERVER command is used to manually change the master observer:

DGMGRL> SET MASTEROBSERVER to standserver2_observer
Sent the proposed master observer to the data guard broker configuration.
Please run SHOW OBSERVER to see if master observer switch actually happens.
DGMGRL>

We can see that now the new master is standserver2_observer

DGMGRL> show observer;
Configuration - ORCL_DR
Primary: ORCL_SITE
Target: ORCL_SITE1
Observer "standserver2_observer" - Master
Host Name: standserver2.localdomain
Last Ping to Primary: 0 seconds ago
Last Ping to Target: 1 second ago
Observer "prima_observer" - Backup
Host Name: primaserver.localdomain
Last Ping to Primary: 0 seconds ago
Last Ping to Target: 2 seconds ago
Observer "standserver1_observer" - Backup
Host Name: standserver1.localdomain
Last Ping to Primary: 1 second ago
Last Ping to Target: 1 second ago
DGMGRL>

Now if we crash the server hosting the master observer (i.e. standserver2.localdomain), one of the backup observers automatically becomes the new master.

DGMGRL> show observer;
Configuration - ORCL_DR
Primary: ORCL_SITE
Target: ORCL_SITE1
Observer "prima_observer" - Master
Host Name: primaserver.localdomain
Last Ping to Primary: 2 seconds ago
Last Ping to Target: 1 second ago
Observer "standserver1_observer" - Backup
Host Name: standserver1.localdomain
Last Ping to Primary: 2 seconds ago
Last Ping to Target: 0 seconds ago
Observer "standserver2_observer" - Backup
Host Name: standserver2.localdomain
Last Ping to Primary: 119 seconds ago
Last Ping to Target: 84 seconds ago

What happens if we try to start a fourth observer?

DGMGRL> connect sys/root@ORCL_SITE1
Connected to "ORCL_SITE1"
Connected as SYSDBA.
DGMGRL> start observer fourth_observer
Error: ORA-16647: could not start more than three observers
Failed to start the observer.
DGMGRL>

As we see above, we cannot have more than 3 observers.

One important thing about multiple observers is that they only provide more availability for the observer itself, not for the FSFO configuration. This means that if we lose the primary database and the master observer at the same time, no fast-start failover will be initiated.

Let's illustrate this with some configurations.

Case 1: master observer and primary database are located in the same datacenter
[Image: case 1 diagram]
Fast-Start Failover will not occur if we lose the primary datacenter.

Case 2: master observer and primary database are located in different datacenters
[Image: case 2 diagram]
Fast-Start Failover will occur if we lose the primary datacenter.

Case 3: master observer and primary database are located in different datacenters
[Image: case 3 diagram]
Fast-Start Failover will occur if we lose the primary datacenter.

Conclusion: support for multiple observers is a very nice feature. When using it, the master observer and the primary database should not be located in the same datacenter.

 

This article Dataguard Oracle 12.2: Support for Multiple Observers appeared first on the dbi services blog.

ORA-12801 and ORA-12154 with OracleRestart / GRID / RAC on Oracle Database 12c

Tue, 2017-01-10 11:39
Symptom/Analysis:

Using Oracle 12c in a RAC environment, you may encounter the following errors:

ORA-12801: error signaled in parallel query server P000, instance rac2.localdomain:DBRAC2 (2)
ORA-12154: TNS:could not resolve the connect identifier specified

In this article, we will present an issue inspired by a real case:

olivier@olivier-ThinkPad-T550 ~ $ sqlplus sys/***@DBRAC1 as sysdba
SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 9 15:03:42 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> 
select count(*) from DBA_OBJECTS@DBLINK_ONE_MYHOME_TNS, SCOTT.OBJECTS;
exit
ERROR at line 1:
ORA-12801: error signaled in parallel query server P000, instance rac2.localdomain:DBRAC2 (2)
ORA-12154: TNS:could not resolve the connect identifier specified

The following queries return answers with no error:

SQL> select count(*) from DBA_OBJECTS@DBLINK_ONE_MYHOME_TNS;
COUNT(*)
----------
20342
SQL> select count(*) from SCOTT.OBJECTS;
COUNT(*)
----------
90951

Strange!
select * from A is working.
select * from B is working.
select * from A,B is not working.

Let's check if the TNS_ADMIN Oracle environment variable is set in the session.
We will use the dbms_system.get_env procedure:

dbms_system.get_env()
SQL> 
set serveroutput on
DECLARE
RetVal VARCHAR2(4000);
BEGIN
dbms_system.get_env('TNS_ADMIN', RetVal);
dbms_output.put_line('TNS_ADMIN: '||RetVal);
END;
/
SQL> 2 3 4 5 6 7
TNS_ADMIN: /u01/app/MyNetworkAdmin/
PL/SQL procedure successfully completed.
TNS_ADMIN is set correctly.

The dblink is working and is correctly created:

dba_db_links
SQL> SELECT owner, db_link, username, host FROM   dba_db_links ORDER BY owner, db_link;
OWNER	DB_LINK 		  USERNAME   HOST
------- ------------------------- ---------- --------------------
PUBLIC	DBLINK_ONE_MYHOME_TNS	  SYSTEM     DBONE-MYHOME-TNS
DBMS_METADATA.GET_DDL
SQL> SELECT DBMS_METADATA.GET_DDL('DB_LINK','DBLINK_ONE_MYHOME_TNS','PUBLIC') ddl from dual;
DDL
-------------------------------------------------------------------------
CREATE PUBLIC DATABASE LINK "DBLINK_ONE_MYHOME_TNS" 
   CONNECT TO "SYSTEM" IDENTIFIED BY VALUES ':1' USING 'DBONE-MYHOME-TNS'
/u01/app/MyNetworkAdmin/tnsnames.ora
DBONE-MYHOME-TNS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = X.X.X.X)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SID = ONE)
    )
  )

So what?
Let’s print the SQL plan:

SQL> explain plan for
2 select count(*) from DBA_OBJECTS@DBLINK_ONE_MYHOME_TNS,SCOTT.OBJECTS;
Explained
SQL> select * from table(dbms_xplan.display);
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------
Plan hash value: 1869185832
-------------------------------------------------------------------------------------------------------------
| Id  | Operation		 | Name        | Rows  | Cost (%CPU)| Time     | TQ/Ins |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	 |	       |     1 | 19591	 (2)| 00:00:01 |	|      |	    |
|   1 |  SORT AGGREGATE 	 |	       |     1 |	    |	       |	|      |	    |
|   2 |   PX COORDINATOR	 |	       |       |	    |	       |	|      |	    |
|   3 |    PX SEND QC (RANDOM)	 | :TQ10001    |     1 |	    |	       |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE	 |	       |     1 |	    |	       |  Q1,01 | PCWP |	    |
|   5 |      MERGE JOIN CARTESIAN|	       |  1810M| 19591	 (2)| 00:00:01 |  Q1,01 | PCWP |	    |
|   6 |       BUFFER SORT	 |	       |       |	    |	       |  Q1,01 | PCWC |	    |
|   7 |        PX RECEIVE	 |	       | 19911 |    65	 (0)| 00:00:01 |  Q1,01 | PCWP |	    |
|   8 | 	PX SEND BROADCAST| :TQ10000    | 19911 |    65	 (0)| 00:00:01 |	| S->P | BROADCAST  |
|   9 | 	 REMOTE 	 | DBA_OBJECTS | 19911 |    65	 (0)| 00:00:01 | DBLIN~ | R->S |	       |
|  10 |       BUFFER SORT	 |	       | 90951 | 19526	 (2)| 00:00:01 |  Q1,01 | PCWP |	    |
|  11 |        PX BLOCK ITERATOR |	       | 90951 |    15	 (0)| 00:00:01 |  Q1,01 | PCWC |	    |
|  12 | 	TABLE ACCESS FULL| OBJECTS     | 90951 |    15	 (0)| 00:00:01 |  Q1,01 | PCWP |	    |
-------------------------------------------------------------------------------------------------------------
Remote SQL Information (identified by operation id):
----------------------------------------------------
9 - SELECT 0 FROM "DBA_OBJECTS" "DBA_OBJECTS" (accessing 'DBLINK_ONE_MYHOME_TNS' )
Note
-----
- Degree of Parallelism is 4 because of table property
29 rows selected.
 

Parallelism is enabled for the query.
Let's disable it!

SQL> ALTER SESSION disable parallel query;

Let’s run our query again:

SQL> select count(*) from DBA_OBJECTS@DBLINK_ONE_MYHOME_TNS,SCOTT.OBJECTS;
  COUNT(*)
----------
1850125242

It is working now, but without using parallelism features.

Problem:

In fact, the problem comes from the TNS_ADMIN environment variable, which is not (or not correctly) set for the PX server processes:

When parallelism is enabled, the PX servers are doing all the work (as shown in our parallel plan above).
On the server, PX server processes can easily be identified at the OS level. On Linux they are the ora_p*** processes:
Source: http://www.oracle.com/technetwork/database/bi-datawarehousing/twp-parallel-execution-fundamentals-133639.pdf

[oracle@rac1 admin]$ ps -ef | grep ora_p0
oracle 25803 1 0 11:21 ? 00:00:00 ora_p000_DBRAC1
oracle 25805 1 0 11:21 ? 00:00:00 ora_p001_DBRAC1
oracle 25807 1 0 11:21 ? 00:00:00 ora_p002_DBRAC1
oracle 25809 1 0 11:21 ? 00:00:00 ora_p003_DBRAC1
oracle 28021 1 0 14:25 ? 00:00:00 ora_p004_DBRAC1
oracle 28023 1 0 14:25 ? 00:00:00 ora_p005_DBRAC1
oracle 28025 1 0 14:25 ? 00:00:00 ora_p006_DBRAC1
oracle 28027 1 0 14:25 ? 00:00:00 ora_p007_DBRAC1
oracle 28029 1 0 14:25 ? 00:00:00 ora_p008_DBRAC1
oracle 28031 1 0 14:25 ? 00:00:00 ora_p009_DBRAC1
oracle 28033 1 0 14:25 ? 00:00:00 ora_p00a_DBRAC1
oracle 28035 1 0 14:25 ? 00:00:00 ora_p00b_DBRAC1
oracle 28037 1 0 14:25 ? 00:00:00 ora_p00c_DBRAC1
oracle 28039 1 0 14:25 ? 00:00:00 ora_p00d_DBRAC1
oracle 28041 1 0 14:25 ? 00:00:00 ora_p00e_DBRAC1
oracle 28047 1 0 14:25 ? 00:00:00 ora_p00f_DBRAC1

The file /proc/25803/environ contains the Oracle environment variables set for the P000 process.
The command “sudo strings /proc/25803/environ | grep TNS_” gives no result.
Source: https://blogs.oracle.com/myoraclediary/entry/how_to_check_environment_variables

If we check the environ file of the listener processes, we can see that the TNS_ADMIN is correctly set.

[oracle@rac1 ~]$ ps -ef | grep lsn
oracle 2731 12705 0 14:49 pts/0 00:00:00 grep --color=auto lsn
oracle 4176 1 0 08:57 ? 00:00:00 /u01/app/grid/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
oracle 4309 1 0 08:57 ? 00:00:00 /u01/app/grid/bin/tnslsnr LISTENER_SCAN2 -no_crs_notify -inherit
oracle 4320 1 0 08:57 ? 00:00:00 /u01/app/grid/bin/tnslsnr LISTENER_SCAN3 -no_crs_notify -inherit
oracle 9059 1 0 12:01 ? 00:00:00 /u01/app/grid/bin/tnslsnr LISTENER -no_crs_notify -inherit
[oracle@rac1 ~]$ sudo strings /proc/9059/environ | grep TNS_
TNS_ADMIN=/u01/app/MyNetworkAdmin

This small environ.sh script (for the lazy ones) lists the Oracle environment variables set for all the PX server processes and pmon:

environ.sh
#!/bin/bash
# List the Oracle environment variables of the pmon and PX server (ora_p00*)
# processes by reading /proc/<pid>/environ (run with sudo to be able to
# read the environ files of the oracle user's processes).
mypspmon='/tmp/pspmon'
myprocess1='pmon_DB'
myprocess2='ora_p00'
# Collect the PIDs of the matching processes.
ps -ef | grep $myprocess1 | awk '{print $2}' > $mypspmon
ps -ef | grep $myprocess2 | awk '{print $2}' >> $mypspmon
while read ligne
do
  myenvironfile="/proc/${ligne}/environ"
  # A listed PID may already be gone (e.g. the grep matching itself),
  # so check that the environ file still exists.
  if [ -e "$myenvironfile" ]
  then
    strings "$myenvironfile"
  fi
done < $mypspmon

“sudo ./environ.sh | grep TNS” gives no result.
The TNS_ADMIN Oracle environment variable is not set for the PX server processes that are spawned to handle the parallel queries.

Solution:
  1. Be sure to set TNS_ADMIN with srvctl!
     Source: https://docs.oracle.com/cd/B19306_01/rac.102/b14197/srvctladmin.htm#i1010191

    [oracle@rac1 ~]$ srvctl setenv listener -l LISTENER -t TNS_ADMIN='/u01/app/MyNetworkAdmin/'
    [oracle@rac1 ~]$ srvctl setenv database -d DBRAC -t TNS_ADMIN='/u01/app/MyNetworkAdmin/'
    

    Let's check if the variables are correctly set:

    [oracle@rac1 ~]$ srvctl getenv listener -l LISTENER -t TNS_ADMIN
    
    LISTENER:
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    
    [oracle@rac1 ~]$ srvctl getenv database -d DBRAC -t TNS_ADMIN
    
    DBRAC:
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    

    TNS_ADMIN seems to be correctly set, but we still receive ORA-12801 and ORA-12154 errors.
    Moreover, “sudo ./environ.sh | grep TNS” still gives no result!
    You need to restart the database for the TNS_ADMIN environment variable to be thoroughly set up!

    [oracle@rac1 admin]$ srvctl stop listener -l LISTENER
    [oracle@rac1 admin]$ srvctl start listener -l LISTENER
    [oracle@rac1 admin]$ srvctl stop database -d DBRAC
    [oracle@rac1 admin]$ srvctl start database -d DBRAC
    
    [oracle@rac1 admin]$ srvctl status listener -l LISTENER
    
    Listener LISTENER is enabled
    Listener LISTENER is running on node(s): rac1,rac2
    
    [oracle@rac1 admin]$ srvctl status database -d DBRAC
    
    Instance DBRAC1 is running on node rac1
    Instance DBRAC2 is running on node rac2
    

    Now our “sudo ./environ.sh | grep TNS” command lists the TNS_ADMIN environment variable used by our pmon and PX server processes.

    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    [...]
    TNS_ADMIN=/u01/app/MyNetworkAdmin/
    

    And our query is working using parallelism features.

  2. Other solutions exist. One of them consists in modifying the USR_ORA_ENV cluster resource attribute values with crsctl:

    [oracle@rac1 admin]$ crsctl modify resource ora.dbrac.db -attr "USR_ORA_ENV=TNS_ADMIN=/u01/app/MyNetworkAdmin/"
    
    CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command.
    

    In 12c, just add the -unsupported flag at the end to avoid the CRS-4995 error:
    [oracle@rac1 admin]$ crsctl modify resource ora.dbrac.db -attr "USR_ORA_ENV=TNS_ADMIN=/u01/app/MyNetworkAdmin/" -unsupported

Comments:
  1. This issue is inspired by a real case, and you can easily reproduce it. The SCOTT.OBJECTS table was created like this:

    SCOTT.OBJECTS
    
    create table SCOTT.OBJECTS as select * from DBA_OBJECTS;
    alter table SCOTT.OBJECTS parallel 32;
    

    Add an entry in your tnsnames.ora and create a dblink.
    You can remove the TNS_ADMIN environment setting with the “srvctl unsetenv database -d database -t TNS_ADMIN” command, and restart.

  2. On the server, at the OS level, if ORACLE_HOME is set and TNS_ADMIN is not set, then $ORACLE_HOME/network/admin is used to locate tnsnames.ora. So it can be interesting to use symlinks (ln -s) in $ORACLE_HOME/network/admin to point to and use the same tnsnames.ora file when using sqlplus from the server:

    [oracle@rac1 admin]$ pwd
    
    /u01/app/oracle/product/12.1.0.2/db_1/network/admin
    
    [oracle@rac1 admin]$ ll
    
    total 0
    lrwxrwxrwx 1 oracle oinstall 36 Jan 10 09:53 listener.ora -> /u01/app/MyNetworkAdmin/listener.ora
    lrwxrwxrwx 1 oracle oinstall 34 Jan 10 09:53 sqlnet.ora -> /u01/app/MyNetworkAdmin/sqlnet.ora
    lrwxrwxrwx 1 oracle oinstall 36 Jan 10 09:53 tnsnames.ora -> /u01/app/MyNetworkAdmin/tnsnames.ora
    

But remember: although the TNS_ADMIN environment variable is set at the OS level when starting the instance, you need to set TNS_ADMIN with srvctl or crsctl beforehand!
If not, you may encounter ORA-12154 errors.
And not only for PDML!
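A quick hedged one-liner to verify that the running background processes actually see the variable (using the pmon process of DBRAC1 as an example):

[oracle@rac1 ~]$ sudo strings /proc/$(pgrep -f ora_pmon_DBRAC1)/environ | grep TNS_ADMIN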

 

This article ORA-12801 and ORA-12154 with OracleRestart / GRID / RAC on Oracle Database 12c appeared first on the dbi services blog.

Oracle 12cR2 : Partitioning improvements – online conversion of a non-partitioned table to a partitioned table

Fri, 2017-01-06 04:22

It's time for a new blog about partitioning improvements in 12cR2.
After auto-list partitioning and multi-column list partitioning & read-only partitions, I'll demonstrate how we can easily convert a non-partitioned table to a partitioned table… online!

As a reminder, the way to convert a table to a partitioned table was to use the DBMS_REDEFINITION package (from 9i to 12cR1), which requires a few steps (a condensed sketch follows the list):

  1. Verify that the table is a candidate for redefinition :
    DBMS_REDEFINITION.CAN_REDEF_TABLE
  2. Create an interim partitioned table
  3. Start the redefinition :
    DBMS_REDEFINITION.START_REDEF_TABLE
  4. Copy dependent objects to the new table (grants, constraints, …) :
    DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS
  5. Synchronize the interim table (optional) :
    DBMS_REDEFINITION.SYNC_INTERIM_TABLE
  6. Complete the redefinition :
    DBMS_REDEFINITION.FINISH_REDEF_TABLE
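A condensed, hedged sketch of those steps (the names are illustrative: SALES is the source table, and the interim table SALES_PART must already exist with the target partitioning scheme):

DECLARE
  l_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'SALES');
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'SALES', 'SALES_PART');
  -- copy grants, constraints, triggers, ...; errors are counted in l_errors
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'SALES', 'SALES_PART',
                                          num_errors => l_errors);
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'SALES', 'SALES_PART');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'SALES', 'SALES_PART');
END;
/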

With 12cR2, the conversion can be done in one shot :

Non-partitioned table creation :
SQL> CREATE TABLE sales_non_part
(
sales_id NUMBER(5),
sales_name VARCHAR2(30),
sales_city VARCHAR2(20),
sales_amount NUMBER(10)
);
Table created.

Data insertion :
SQL> INSERT INTO sales_non_part VALUES (1, 'JOHN', 'DELEMONT', 10);
1 row created.


SQL> INSERT INTO sales_non_part VALUES (2, 'JANE', 'BERN', 150);
1 row created.


SQL> INSERT INTO sales_non_part VALUES (3, 'JACK', 'NYON', 20);
1 row created.

Before starting the conversion, I'll enable lock tracing to prove that the operation is done online.
You can take note of the new statement in 12cR2 to enable it:
SQL> ALTER SESSION SET events='trace[ksq] disk medium';


Session altered.

(The “old” one was: alter session set events='10704 trace name context forever, level 3';)

Let’s start the conversion of the sales_non_part table :
SQL> ALTER TABLE sales_non_part MODIFY PARTITION BY LIST (sales_city) AUTOMATIC (PARTITION p_delemont VALUES ('DELEMONT')) ONLINE;


Table altered.

Looks good. A quick check of the partitions:
SQL> SELECT partition_name, high_value FROM dba_tab_partitions WHERE table_name = 'SALES_NON_PART';


PARTITION_NAME HIGH_VALUE
-------------------- ------------------------------
P_DELEMONT 'DELEMONT'
SYS_P5537 'BERN'
SYS_P5538 'NYON'

The table is now list-partitioned with the AUTOMATIC option.
Now we can check the locks that were applied on the table during the conversion, first by finding the object IDs of the table and of the partitions:
SQL> SELECT TO_CHAR(object_id,'0XXXXXXX'), object_name, object_type FROM all_objects WHERE owner='SYS' and object_name='SALES_NON_PART' ORDER BY 2;


TO_CHAR(O OBJECT_NAME OBJECT_TYPE
--------- --------------- ------------------------------
00011CD1 SALES_NON_PART TABLE
00011CE0 SALES_NON_PART TABLE PARTITION
00011CE1 SALES_NON_PART TABLE PARTITION
00011CE2 SALES_NON_PART TABLE PARTITION

Then by looking for the generated trace file :
SQL> SELECT tracefile FROM v$process WHERE addr=(SELECT paddr FROM v$session WHERE sid=sys_context('USERENV','SID'));


TRACEFILE
----------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_8223.trc

And finally by checking inside the file which kinds of locks were held on the objects:
SQL> host cat /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_8223.trc | grep -e 00011CD1 -e 00011CE0 -e 00011CE1 -e 00011CE2;


2016-12-16 11:22:09.821*:ksq.c@9006:ksqgtlctx(): *** TM-00011CD1-00000000-00000000-00000000 mode=3 flags=0x401 why=165 timeout=0 ***
2016-12-16 11:22:09.871*:ksq.c@9006:ksqgtlctx(): *** OD-00011CD1-00000000-00000000-00000000 mode=6 flags=0x10401 why=264 timeout=0 ***
2016-12-16 11:22:10.283*:ksq.c@9006:ksqgtlctx(): *** DL-00011CE0-00000000-00000000-00000000 mode=3 flags=0x10001 why=207 timeout=0 ***
2016-12-16 11:22:10.283*:ksq.c@9006:ksqgtlctx(): *** DL-00011CE1-00000000-00000000-00000000 mode=3 flags=0x10001 why=207 timeout=0 ***
2016-12-16 11:22:10.283*:ksq.c@9006:ksqgtlctx(): *** DL-00011CE2-00000000-00000000-00000000 mode=3 flags=0x10001 why=207 timeout=0 ***

One TM lock in mode=3 (Row Exclusive) was held on the table, so DML operations stay possible during the conversion!
Conclusion: the conversion is really done online :-)

Information concerning the other locks:
OD (Online DDL) lock: used internally to allow truly online DDL operations
DL: lock to prevent index DDL during direct load
TM: synchronizes accesses to an object

 

This article Oracle 12cR2: Partitioning improvements – online conversion of a non-partitioned table to a partitioned table appeared first on the dbi services blog.

Dealing with WSFC 2016 quarantine state and availability groups

Thu, 2017-01-05 15:12

I was playing with my lab environment, which includes Windows Server 2016 and SQL Server 2016, and I noticed an interesting scenario while testing cluster node failures. After simulating some network outage scenarios, I was not able to bring my cluster node back online immediately in the traditional way. A quick look at the cluster manager led me to notice something new:

[Image: WSFC new quarantine state]

A quarantined state value… interesting! But what does it mean exactly?

According to this Microsoft blog from the Server and Management team, this is a new feature shipped with Windows Server 2016 that concerns Virtual Machine Compute Resiliency. To cut a long story short, this feature is intended to improve the response of the Windows Failover Cluster to transient failures. Initially it was designed for Hyper-V virtual machine resiliency, and it has been introduced to avoid the “flapping node” phenomenon, which may negatively impact the overall cluster health.

To illustrate the benefit of this feature, let's go back to my failure scenario, which consisted of simulating random network failures. Experiencing such a scenario in a production environment may lead to an unstable cluster state, including quorum loss in the worst case. In fact, after 3 failure events, the concerned cluster node was quarantined, which we can confirm by looking at the following new parameters (QuarantineThreshold is our concern here). The other parameter, QuarantineDuration, defines the timeframe during which the cluster node is not able to join the cluster, so basically two hours.

[Image: WSFC quarantine parameters]
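For reference, a hedged way to read those values with PowerShell (the screenshot above shows the same information; QuarantineThreshold and QuarantineDuration are cluster common properties in Windows Server 2016):

(Get-Cluster).QuarantineThreshold
(Get-Cluster).QuarantineDuration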

In my case, the WIN20169SQL16 node will not automatically rejoin the cluster until 22:56:38.

[Image: WSFC quarantine error log]

What about my availability group? Well, no surprise here: a quarantined cluster node means a disconnected availability replica and a synchronization issue, as confirmed below:

 

[Image: availability group state]

We may notice a corresponding error number 10060 with the message An error occurred while receiving data: '10060(A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.)'. There is no specific message about the quarantine state in the SQL Server error log. From the secondary replica, I got the following sample message in the SQL Server error log:

[Image: secondary replica error log]

SQL Server is waiting for the cluster node to start and rejoin the WSFC. In short, the overall availability health state will not change while the node is quarantined. So, depending on your context, this may be a good thing until you fix the related issue on the concerned cluster node. Fortunately, as stated in the Microsoft documentation, we do not have to wait for the quarantine period to finish: we can clear the state with the following PowerShell command:

[Image: starting the cluster node from PowerShell]
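For reference, a hedged sketch of the command shown in the screenshot (run from an elevated PowerShell session; the -ClearQuarantine switch is new in Windows Server 2016):

Start-ClusterNode -Name WIN20169SQL16 -ClearQuarantine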

Here we go! The cluster health state got back to normal:

[Image: WSFC final state]

Happy clustering!


This article Dealing with WSFC 2016 quarantine state and availability groups appeared first on the dbi services blog.

12cR2: no cardinality feedback for small queries

Thu, 2017-01-05 09:52

This follows the comments on Clemens Bleile's post about SQL Plan Directives in Oracle 12.2, where he discovered a new behavior that is not documented. The feature is good: it bypasses the cardinality feedback overhead when the query is fast anyway.

Feature

In his post about SQL Plan Directive changes in 12cR2, Clemens Bleile determined that the new feature coming with bug 23596611 was responsible for the bypass of automatic reoptimization (the 12c cardinality feedback) in some cases.

The description gives the idea that the reason for the bypass of cardinality feedback is 'small queries':

SQL> select BUGNO,SQL_FEATURE,DESCRIPTION,OPTIMIZER_FEATURE_ENABLE from v$system_fix_control where bugno=23596611;
 
BUGNO SQL_FEATURE DESCRIPTION OPTIMIZER_FEATURE_ENABLE
---------- ------------------------------ -------------------------------------- -------------------------
23596611 QKSFM_STATS_FEEDBACK_23596611 No feedback for small queries 12.2.0.1

This is a new concept, and in the traces there is nothing about 'small queries'. Here is how to get more information.

Function

First, I’m looking for the ‘small queries’ text in the oracle executable:

$ strings $ORACLE_HOME/bin/oracle | grep "small query"
kkocfbCheckCardEst: No feedback for small query (BG %f, CPU %f (ms)
kkocfbCompareExecStats: skipping cost comparision forsmall query (BG %f, CPU %f (ms)

Good. This looks like there is a way to trace some information about that bypass decision, and there's a clue that 'small queries' are evaluated against a buffer gets (BG) and CPU time (ms) threshold. More than that, we have the name of the C function that traces this: kkocfbCompareExecStats.

kkocfb probably means:

@OracleSK @FranckPachot @johnnyq72 it’s kernel kompile costing cardinality feedback probably. and it seems there is a trace function…

— Frits Hoogland (@fritshoogland) January 4, 2017

and then I’m looking for a way to get this trace. I didn’t know which event can trace that, but others know:

@fritshoogland @OracleSK @FranckPachot @johnnyq72 are you talking about event 10507 level 512?

— Mauro Pagano (@Mautro) January 4, 2017

However there is another way to get an idea of the events that are checked by a function.
The idea is described here: http://yong321.freeshell.org/computer/OracleFuncCalls.html

Event

We are not allowed to disassemble Oracle code (this is in the licensing rules that you accept when you download Oracle), so I'll not do it here.
If you think you have a special permission to do it, just run:
gdb oracle <<<"disas kkocfbCheckCardEst"

I can't show the disassembled code, so here is how to get the list of the events that are checked by the kkocfbCheckCardEst function:
$ gdb oracle <<<"disas kkocfbCheckCardEst" | awk --non-decimal-data '/mov .*,%edi$/{gsub(/[$,]/," ");a=$4}/EventRdbmsErr/{printf "%d\n", a}' | sort -u
10507

The information we have about this event does not describe all what can be traced:

$ oerr ora 10507
10507, 00000, "Trace bind equivalence logic"
// *Cause:
// *Action:

From that, I don't know at which level to set the event, but in case of doubt, and because levels are often bit flags, you can set it to the maximum 0xFFFFFFFF:


alter session set events '10507 trace name context forever, level 4294967295';
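(In practice, as suggested in the tweet above, level 512 seems sufficient for these messages: alter session set events '10507 trace name context forever, level 512';)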

trace when cardinality feedback is bypassed

When I run a query whose execution does fewer than 100 buffer gets, I get the following trace with event 10507 level 512:


****** Begin CFB Hints (APA) [sql_id=a71qw8t17qpqq] xsc=0x7fca901f0e30 ctx=0x9415d4f8 ******
Dumping Hints
=============
*********** End CFB Hints (APA) ***********
kkocfbInitCardFdbkCompCtx [sql_id=a71qw8t17qpqq] monitor=y
kkocfbCheckCardEst: No feedback for small query (BG 0.000000, CPU 0.000000 (ms)

The message is clear: no feedback for small query. The buffer gets (BG) and CPU figures show 0 here, but in my test case, increasing the size of the table, I have seen that 100 buffer gets is the threshold:


bq4fc1rdx97av is_reoptimizable: N buffer_gets: 95 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 95 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 96 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 96 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 97 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 97 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 98 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 98 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 99 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: N buffer_gets: 99 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 100 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 100 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 101 cpu_time: 1000
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 101 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 102 cpu_time: 0
bq4fc1rdx97av is_reoptimizable: Y buffer_gets: 102 cpu_time: 1000

trace when query becomes reoptimizable

When the query reads more than 100 buffer gets, the cursor is marked as reoptimizable and here is the trace:


****** Begin CFB Hints (APA) [sql_id=a71qw8t17qpqq] xsc=0x7fed7dc2ca40 ctx=0x9418f4f8 ******
Dumping Hints
=============
*********** End CFB Hints (APA) ***********
kkocfbInitCardFdbkCompCtx [sql_id=a71qw8t17qpqq] monitor=y
Reparsing due to card est...
@=0x95426550 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
kkocfbCheckCardEst [sql_id=a71qw8t17qpqq] reparse=y ecs=n efb=n ost=n fbs=n
*********** Begin Dump Context (kkocfbCheckCardEst) [sql_id=a71qw8t17qpqq cpcnt=0] ***********
@=0x95426550 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
*********** End Dump Context ***********

act=500 is the actual number of rows (A-Rows) and est=63 is the estimated one (E-Rows), and this is how the misestimate is detected.

trace when reoptimized query is run again

At that point we can see the feedback as OPT_ESTIMATE hints in v$sql_reoptimization_hints.
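A hedged example of querying that view for our statement:

select sql_id, child_number, hint_text
from v$sql_reoptimization_hints
where sql_id = 'a71qw8t17qpqq';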

When we run the same query another time, it is parsed again with those hints, fixing the cardinality to the A-Rows of the previous execution.

Here is the trace for this second execution.


kkoarCopyCtx: [sql_id=a71qw8t17qpqq] origin=CFB old=0x63904140 new=0x7fcda3716d40 copyCnt=1 copyClient=y
**************************************************************
kkocfbCopyBestEst: Best Stats
Exec count: 1
CR gets: 279
CU gets: 4
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 6998 (us)
CPU Time: 2000 (us)
User I/O Time: 0 (us)
*********** Begin Dump Context (kkocfbCopyBestEst) **********
*********** End Dump Context ***********
kkocfbCopyCardCtx: oldCtx Dumping string mapping
----------------------
kkocfbCopyCardCtx: newCtx Dumping string mapping
----------------------
**************************************************************
kkocfbCopyCardCtx: Best stats
Exec count: 1
CR gets: 279
CU gets: 4
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 6998 (us)
CPU Time: 2000 (us)
User I/O Time: 0 (us)
**** Begin Dump Context (kkocfbCopyCardCtx - best est) ****
********************* End Dump Context ********************
*********** Begin Dump Context (Copy:old) [sql_id=a71qw8t17qpqq cpcnt=0] ***********
@=0x6446dc58 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
*********** End Dump Context ***********
*********** Begin Dump Context (Copy:new) [sql_id=a71qw8t17qpqq cpcnt=0] ***********
@=0x7fcda3716a78 type=1 nodeid=1 monitor=Y halias="DEMO_TABLE" loc="SEL$1" act=500 min=0 est=63 next=(nil)
*********** End Dump Context ***********
kkoarReparse: xsc=0x7fcda3672b98 kxsAutoReoptCtx=0x7fcda3716d40
kkocfbAddCardHints: Dumping string mapping
----------------------
******** Begin CFB Hints [sql_id=a71qw8t17qpqq] xsc=0x7fcda3672b98 ********
Dumping Hints
=============
atom_hint=(@=0x7fcda37831e8 err=0 resol=0 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DEMO_TABLE"@"SEL$1" ROWS=500.000000 ) )
********** End CFB Hints **********

You can see the OPT_ESTIMATE hints here.


****** Begin CFB Hints (APA) [sql_id=a71qw8t17qpqq] xsc=0x7fcda3672b98 ctx=0x8a274118 ******
Dumping Hints
=============
atom_hint=(@=0x7fcda37831e8 err=0 resol=1 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DEMO_TABLE"@"SEL$1" ROWS=500.000000 ) )
atom_hint=(@=0x7fcda3782d10 err=0 resol=1 used=0 token=1018 org=6 lvl=3 txt=OPT_ESTIMATE (@"SEL$1" TABLE "DEMO_TABLE"@"SEL$1" ROWS=500.000000 ) )
*********** End CFB Hints (APA) ***********
kkocfbInitCardFdbkCompCtx [sql_id=a71qw8t17qpqq] monitor=y
**************************************************************
kkocfbCopyBestEst: Best Stats
Exec count: 1
CR gets: 279
CU gets: 4
Disk Reads: 0
Disk Writes: 0
IO Read Requests: 0
IO Write Requests: 0
Bytes Read: 0
Bytes Written: 0
Bytes Exchanged with Storage: 0
Bytes Exchanged with Disk: 0
Bytes Simulated Read: 0
Bytes Simulated Returned: 0
Elapsed Time: 6998 (us)
CPU Time: 2000 (us)
User I/O Time: 0 (us)
*********** Begin Dump Context (kkocfbCopyBestEst) **********
*********** End Dump Context ***********
kkocfbCheckCardEst [sql_id=a71qw8t17qpqq] reparse=n ecs=n efb=y ost=n fbs=n

So what?

With those adaptive features, it is good to be able to trace the decisions in order to understand and reproduce the problems we encounter. Event 10507 is very useful: it is the execution-time counterpart of event 10053, which explains compile-time decisions. And in the latest versions, the optimizer is more and more present at execution time.

We have always seen problems coming from cardinality feedback. Most of them come from bad statistics or a data model where there is no single optimal access plan. The CBO then keeps trying to find a better plan, and sometimes the best is the enemy of the good.
Now in 12.2 you have a very good way to avoid the cardinality feedback side effects: tune your queries and your data model so that the critical queries read fewer than 100 logical reads.
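
To get an idea of which queries sit around that threshold, a quick check like the following can help (a sketch: the 80 to 200 range brackets the 100 buffer gets threshold, adapt it to your context):

sqlplus -s / as sysdba <<'EOF'
select sql_id, executions, round(buffer_gets/executions) gets_per_exec
from v$sql
where executions > 0
and buffer_gets/executions between 80 and 200
order by 3;
EOF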

 


NLS defaults for LANGUAGE and TERRITORY

Wed, 2017-01-04 07:09

When you set NLS_LANGUAGE, the values of NLS_DATE_LANGUAGE and NLS_SORT are set to their defaults for the LANGUAGE you set.
When you set NLS_TERRITORY, the values of NLS_DATE_FORMAT, NLS_NUMERIC_CHARACTERS, NLS_CURRENCY and NLS_ISO_CURRENCY are set to the defaults for the TERRITORY you set. For reference, I've listed the defaults in this post.

Here, for each value found in V$NLS_VALID_VALUES, I set the session parameter and display the derived values, as well as an example date.

TERRITORY

The TERRITORY sets the decimal and thousands separators, the currency, and the date format. I display the short date ('DS' format) for the last day of 2016.

SQL> with
2 function nls_territory_defaults(t varchar2) return varchar2 as
3 s varchar2(2017):='';
4 begin
5 execute immediate 'alter session set nls_territory='''||t||'''';
6 for i in (select * from nls_session_parameters where parameter in ('NLS_DATE_FORMAT','NLS_NUMERIC_CHARACTERS','NLS_CURRENCY','NLS_ISO_CURRENCY') order by parameter desc)
7 loop
8 s:=s||lpad(i.parameter,20)||'='||rpad(i.value,20);
9 end loop;
10 return s||' example: '||to_char(trunc(sysdate,'yy')-1,'ds');
11 end;
12 select rpad(value,20)||nls_territory_defaults(value) "TERRITORY default NLS settings" from v$nls_valid_values where parameter='TERRITORY' order by 1
13 /
 
TERRITORY default NLS settings
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
AFGHANISTAN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=AFGHANISTAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=؋ example: 31/12/16
ALBANIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ALBANIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=Lek example: 31.12.2016
ALGERIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ALGERIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ج. example: 31/12/2016
AMERICA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=AMERICA NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=$ example: 12/31/2016
ARGENTINA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ARGENTINA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=$ example: 31/12/2016
ARMENIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ARMENIA NLS_DATE_FORMAT=DD.fmMM.RRRR NLS_CURRENCY=դր. example: 31.12.2016
AUSTRALIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=AUSTRALIA NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=$ example: 31/12/2016
AUSTRIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=AUSTRIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=€ example: 31.12.2016
AZERBAIJAN NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=AZERBAIJAN NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=₼ example: 31.12.2016
BAHAMAS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BAHAMAS NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=B$ example: 31/12/2016
BAHRAIN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BAHRAIN NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ب. example: 31/12/2016
BANGLADESH NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BANGLADESH NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=৳ example: 31-12-2016
BELARUS NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=BELARUS NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=р. example: 31.12.2016
BELGIUM NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BELGIUM NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
BELIZE NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BELIZE NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=BZ$ example: 12/31/2016
BERMUDA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BERMUDA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=BD$ example: 31/12/2016
BOLIVIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BOLIVIA NLS_DATE_FORMAT=fmDD-MM-RR NLS_CURRENCY=B$ example: 12-31-2016
BOSNIA AND HERZEGOVINLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BOSNIA AND HERZEGOVI NLS_DATE_FORMAT=fmMM.DD.RRRR NLS_CURRENCY=КМ example: 12.31.2016
BRAZIL NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=BRAZIL NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=R$ example: 31/12/2016
BULGARIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=BULGARIA NLS_DATE_FORMAT=RRRR-MM-DD NLS_CURRENCY=лв example: 31.12.2016
CAMBODIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=CAMBODIA NLS_DATE_FORMAT=RRRR-fmMM-DD NLS_CURRENCY=៛ example: 2016-12-31
CAMEROON NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CAMEROON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
CANADA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CANADA NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=$ example: 2016-12-31
CATALONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CATALONIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31-12-16
CHILE NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CHILE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31-12-2016
CHINA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=CHINA NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=¥ example: 2016-12-31
CIS NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CIS NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=р. example: 31.12.16
COLOMBIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=COLOMBIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
CONGO BRAZZAVILLE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CONGO BRAZZAVILLE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
CONGO KINSHASA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=CONGO KINSHASA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=FrCD example: 31/12/16
COSTA RICA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=COSTA RICA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C example: 31/12/2016
CROATIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CROATIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=kn example: 31.12.16
CYPRUS NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CYPRUS NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=£ example: 31/12/2016
CZECH REPUBLIC NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CZECH REPUBLIC NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Kč example: 31.12.2016
CZECHOSLOVAKIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=CZECHOSLOVAKIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Kčs example: 31.12.16
DENMARK NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=DENMARK NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=kr example: 31-12-2016
DJIBOUTI NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=DJIBOUTI NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ف.ج. example: 31/12/2016
ECUADOR NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ECUADOR NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=$ example: 31/12/2016
EGYPT NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=EGYPT NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ج.م. example: 31/12/2016
EL SALVADOR NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=EL SALVADOR NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C example: 31/12/2016
ESTONIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=ESTONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr example: 31.12.2016
ETHIOPIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ETHIOPIA NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=ብር example: 12/31/2016
FINLAND NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=FINLAND NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=€ example: 31.12.2016
FRANCE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=FRANCE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
FYR MACEDONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=FYR MACEDONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=ден. example: 31.12.2016
GABON NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=GABON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
GERMANY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=GERMANY NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=€ example: 31.12.2016
GREECE NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=GREECE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
GUATEMALA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=GUATEMALA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=Q example: 31/12/2016
HONDURAS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=HONDURAS NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=L example: 31.12.2016
HONG KONG NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=HONG KONG NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=$ example: 31/12/2016
HUNGARY NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=HUNGARY NLS_DATE_FORMAT=RR-MON-DD NLS_CURRENCY=Ft example: 2016.12.31.
ICELAND NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ICELAND NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr. example: 31.12.2016
INDIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=INDIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=Rs example: 31/12/2016
INDONESIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=INDONESIA NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=Rp example: 31-12-2016
IRAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRAN NLS_DATE_FORMAT=RRRR/fmMM/fmDD NLS_CURRENCY=ريال example: 2016/12/31
IRAQ NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRAQ NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ع. example: 31/12/2016
IRELAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=IRELAND NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=€ example: 31/12/2016
ISRAEL NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ISRAEL NLS_DATE_FORMAT=DD-MON-RRRR NLS_CURRENCY=ש"ח example: 31/12/2016
ITALY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ITALY NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=€ example: 31/12/2016
IVORY COAST NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=IVORY COAST NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=CFA example: 31/12/16
JAPAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=JAPAN NLS_DATE_FORMAT=RR-MM-DD NLS_CURRENCY=¥ example: 2016/12/31
JORDAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=JORDAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ا.‏ example: 31/12/2016
KAZAKHSTAN NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=KAZAKHSTAN NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=KZT example: 31.12.2016
KENYA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KENYA NLS_DATE_FORMAT=fmMM/DD/RRRR NLS_CURRENCY=S example: 12/31/2016
KOREA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KOREA NLS_DATE_FORMAT=RR/MM/DD NLS_CURRENCY=₩ example: 2016/12/31
KUWAIT NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=KUWAIT NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ك. example: 31/12/2016
LAOS NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LAOS NLS_DATE_FORMAT=DD/fmMM/RRRR NLS_CURRENCY=₭ example: 31/12/2016
LATVIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=LATVIA NLS_DATE_FORMAT=RRRR.MM.DD NLS_CURRENCY=Ls example: 2016.12.31
LEBANON NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LEBANON NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ل.ل. example: 31/12/2016
LIBYA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=LIBYA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.ل. example: 31/12/2016
LITHUANIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=LITHUANIA NLS_DATE_FORMAT=RRRR.MM.DD NLS_CURRENCY=€ example: 2016-12-31
LUXEMBOURG NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=LUXEMBOURG NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31.12.2016
MACEDONIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=MACEDONIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=den. example: 31.12.2016
MALAYSIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALAYSIA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=RM example: 31/12/2016
MALDIVES NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALDIVES NLS_DATE_FORMAT=fmDD/fmMM/RR NLS_CURRENCY=ރ. example: 12/31/16
MALTA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MALTA NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=€ example: 12/31/2016
MAURITANIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MAURITANIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ا.م. example: 31/12/2016
MEXICO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MEXICO NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
MONTENEGRO NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=MONTENEGRO NLS_DATE_FORMAT=DD.MM.RRRR. NLS_CURRENCY=€ example: 31.12.2016.
MOROCCO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=MOROCCO NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.م. example: 31/12/2016
NEPAL NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NEPAL NLS_DATE_FORMAT=fmDD/MM/RRRR NLS_CURRENCY=र example: 12/31/2016
NEW ZEALAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NEW ZEALAND NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
NICARAGUA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NICARAGUA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=C$ example: 31/12/2016
NIGERIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=NIGERIA NLS_DATE_FORMAT=DD/fmMM/RRRR NLS_CURRENCY=₦ example: 31/12/2016
NORWAY NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=NORWAY NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=kr example: 31.12.2016
OMAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=OMAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ع. example: 31/12/2016
PAKISTAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PAKISTAN NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=PKR example: 31/12/2016
PANAMA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PANAMA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=B/ example: 12/31/2016
PARAGUAY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PARAGUAY NLS_DATE_FORMAT=fmDD/MM/RR NLS_CURRENCY=G example: 12/31/16
PERU NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PERU NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=S/ example: 31/12/2016
PHILIPPINES NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PHILIPPINES NLS_DATE_FORMAT=MM/DD/RRRR NLS_CURRENCY=Php example: 12/31/2016
POLAND NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=POLAND NLS_DATE_FORMAT=RR/MM/DD NLS_CURRENCY=zł example: 2016-12-31
PORTUGAL NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=PORTUGAL NLS_DATE_FORMAT=RR.MM.DD NLS_CURRENCY=€ example: 31-12-2016
PUERTO RICO NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=PUERTO RICO NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=$ example: 31/12/2016
QATAR NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=QATAR NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ق. example: 31/12/2016
ROMANIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=ROMANIA NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=LEI example: 31.12.2016
RUSSIA NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=RUSSIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=₽ example: 31.12.2016
SAUDI ARABIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SAUDI ARABIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.س. example: 31/12/2016
SENEGAL NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=SENEGAL NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=CFA example: 31/12/2016
SERBIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SERBIA NLS_DATE_FORMAT=DD.MM.RRRR. NLS_CURRENCY=RSD example: 31.12.2016.
SERBIA AND MONTENEGRNLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SERBIA AND MONTENEGR NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=din. example: 31.12.2016
SINGAPORE NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SINGAPORE NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=S$ example: 31/12/2016
SLOVAKIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SLOVAKIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=Sk example: 31.12.16
SLOVENIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SLOVENIA NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=SIT example: 31.12.16
SOMALIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SOMALIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ش.ص. example: 31/12/2016
SOUTH AFRICA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SOUTH AFRICA NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=R example: 2016/12/31
SPAIN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SPAIN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=€ example: 31/12/2016
SRI LANKA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SRI LANKA NLS_DATE_FORMAT=RRRR/fmMM/fmDD NLS_CURRENCY=ريال example: 2016/12/31
SUDAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SUDAN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ج.س. example: 31/12/2016
SWEDEN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=SWEDEN NLS_DATE_FORMAT=RRRR-MM-DD NLS_CURRENCY=Kr example: 2016-12-31
SWITZERLAND NLS_NUMERIC_CHARACTE=.' NLS_ISO_CURRENCY=SWITZERLAND NLS_DATE_FORMAT=DD.MM.RR NLS_CURRENCY=SFr. example: 31.12.2016
SYRIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=SYRIA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ل.س. example: 31/12/2016
TAIWAN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TAIWAN NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=NT$ example: 2016/12/31
TANZANIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TANZANIA NLS_DATE_FORMAT=fmMM/DD/RRRR NLS_CURRENCY=TSh example: 12/31/2016
THAILAND NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=THAILAND NLS_DATE_FORMAT=DD MON RRRR NLS_CURRENCY=฿ example: 31/12/2016
THE NETHERLANDS NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=THE NETHERLANDS NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=€ example: 31-12-2016
TUNISIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=TUNISIA NLS_DATE_FORMAT=DD-MM-RR NLS_CURRENCY=د.ت. example: 31/12/2016
TURKEY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=TURKEY NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=₺ example: 31.12.2016
UGANDA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UGANDA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=CFA example: 31/12/2016
UKRAINE NLS_NUMERIC_CHARACTE=, NLS_ISO_CURRENCY=UKRAINE NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=грн. example: 31.12.2016
UNITED ARAB EMIRATESNLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UNITED ARAB EMIRATES NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=د.إ. example: 31/12/2016
UNITED KINGDOM NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=UNITED KINGDOM NLS_DATE_FORMAT=DD-MON-RR NLS_CURRENCY=£ example: 31/12/2016
URUGUAY NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=URUGUAY NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=NU$ example: 31/12/2016
UZBEKISTAN NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=UZBEKISTAN NLS_DATE_FORMAT=DD/MON/RR NLS_CURRENCY=UZS example: 31/DEC/16
VENEZUELA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=VENEZUELA NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=Bs example: 31/12/2016
VIETNAM NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=VIETNAM NLS_DATE_FORMAT=DD-MM-RRRR NLS_CURRENCY=₫ example: 31-12-2016
YEMEN NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=YEMEN NLS_DATE_FORMAT=DD/MM/RR NLS_CURRENCY=ر.ي. example: 31/12/2016
YUGOSLAVIA NLS_NUMERIC_CHARACTE=,. NLS_ISO_CURRENCY=YUGOSLAVIA NLS_DATE_FORMAT=DD.MM.RRRR NLS_CURRENCY=din. example: 31.12.2016
ZAMBIA NLS_NUMERIC_CHARACTE=., NLS_ISO_CURRENCY=ZAMBIA NLS_DATE_FORMAT=DD/MM/RRRR NLS_CURRENCY=ZK example: 31/12/2016

LANGUAGE

The LANGUAGE sets the linguistic sorting and the language used for words in dates. I display the long date ('DL' format) for the last day of 2016.

SQL> with
2 function nls_language_defaults(t varchar2) return varchar2 as
3 s varchar2(2017):='';
4 begin
5 execute immediate 'alter session set nls_language='''||t||'''';
6 for i in (select * from nls_session_parameters where parameter in ('NLS_DATE_LANGUAGE','NLS_SORT') order by parameter desc)
7 loop
8 s:=s||lpad(i.parameter,20)||'='||rpad(i.value,20);
9 end loop;
10 return s||' example: '||to_char(trunc(sysdate,'yy')-1,'dl');
11 end;
12 select rpad(value,20)||nls_language_defaults(value) "LANGUAGE default NLS settings" from v$nls_valid_values where parameter='LANGUAGE' order by 1
13 /
 
LANGUAGE default NLS settings
-----------------------------------------------------------------------------------------------------------------------------------------------
ALBANIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=ALBANIAN example: E Shtunë, Dhjetor 31, 2016
AMERICAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=AMERICAN example: Saturday, December 31, 2016
AMHARIC NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=AMHARIC example: ቅዳሜ, ዲሴምበር 31, 2016
ARABIC NLS_SORT=ARABIC NLS_DATE_LANGUAGE=ARABIC example: السبت, ديسمبر 31, 2016
ARMENIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=ARMENIAN example: Շաբա, Դեկտեմբեր 31, 2016
ASSAMESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=ASSAMESE example: শনিবাৰ, ডিচেম্বৰ 31, 2016
AZERBAIJANI NLS_SORT=AZERBAIJANI NLS_DATE_LANGUAGE=AZERBAIJANI example: Şənbə, Dekabr 31, 2016
BANGLA NLS_SORT=BINARY NLS_DATE_LANGUAGE=BANGLA example: শনিবার, ডিসেম্বর 31, 2016
BELARUSIAN NLS_SORT=RUSSIAN NLS_DATE_LANGUAGE=BELARUSIAN example: Субота, Сьнежань 31, 2016
BENGALI NLS_SORT=BENGALI NLS_DATE_LANGUAGE=BENGALI example: শনিবার, ডিসেমর 31, 2016
BRAZILIAN PORTUGUESE NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=BRAZILIAN PORTUGUESE example: Sábado, Dezembro 31, 2016
BULGARIAN NLS_SORT=BULGARIAN NLS_DATE_LANGUAGE=BULGARIAN example: Събота, Декември 31, 2016
CANADIAN FRENCH NLS_SORT=CANADIAN_M NLS_DATE_LANGUAGE=CANADIAN FRENCH example: Samedi, Décembre 31, 2016
CATALAN NLS_SORT=CATALAN NLS_DATE_LANGUAGE=CATALAN example: Dissabte, Desembre 31, 2016
CROATIAN NLS_SORT=CROATIAN NLS_DATE_LANGUAGE=CROATIAN example: Subota, Prosinac 31, 2016
CYRILLIC KAZAKH NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC KAZAKH example: Сенбі, Желтоқсан 31, 2016
CYRILLIC SERBIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC SERBIAN example: Субота, Децембар 31, 2016
CYRILLIC UZBEK NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=CYRILLIC UZBEK example: Шанба, Декабр 31, 2016
CZECH NLS_SORT=CZECH NLS_DATE_LANGUAGE=CZECH example: Sobota, Prosinec 31, 2016
DANISH NLS_SORT=DANISH NLS_DATE_LANGUAGE=DANISH example: Lørdag, December 31, 2016
DARI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=DARI example: شنبه, دسمبر 31, 2016
DIVEHI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=DIVEHI example: ހޮނިހިރު, ޑިސެމްބަރ 31, 2016
DUTCH NLS_SORT=DUTCH NLS_DATE_LANGUAGE=DUTCH example: Zaterdag, December 31, 2016
EGYPTIAN NLS_SORT=ARABIC NLS_DATE_LANGUAGE=EGYPTIAN example: السبت, ديسمبر 31, 2016
ENGLISH NLS_SORT=BINARY NLS_DATE_LANGUAGE=ENGLISH example: Saturday, December 31, 2016
ESTONIAN NLS_SORT=ESTONIAN NLS_DATE_LANGUAGE=ESTONIAN example: Laupäev, Detsember 31, 2016
FINNISH NLS_SORT=FINNISH NLS_DATE_LANGUAGE=FINNISH example: Lauantai, Joulukuu 31, 2016
FRENCH NLS_SORT=FRENCH NLS_DATE_LANGUAGE=FRENCH example: Samedi, Décembre 31, 2016
GERMAN NLS_SORT=GERMAN NLS_DATE_LANGUAGE=GERMAN example: Samstag, Dezember 31, 2016
GERMAN DIN NLS_SORT=GERMAN NLS_DATE_LANGUAGE=GERMAN DIN example: Samstag, Dezember 31, 2016
GREEK NLS_SORT=GREEK NLS_DATE_LANGUAGE=GREEK example: Σάββατο, Δεκέμβριος 31, 2016
GUJARATI NLS_SORT=BINARY NLS_DATE_LANGUAGE=GUJARATI example: શનિવાર, ડિસેમ્બર 31, 2016
HEBREW NLS_SORT=HEBREW NLS_DATE_LANGUAGE=HEBREW example: שבת, דצמבר 31, 2016
HINDI NLS_SORT=BINARY NLS_DATE_LANGUAGE=HINDI example: शनिवार, दिसम्बर 31, 2016
HUNGARIAN NLS_SORT=HUNGARIAN NLS_DATE_LANGUAGE=HUNGARIAN example: Szombat, December 31, 2016
ICELANDIC NLS_SORT=ICELANDIC NLS_DATE_LANGUAGE=ICELANDIC example: Laugardagur, Desember 31, 2016
INDONESIAN NLS_SORT=INDONESIAN NLS_DATE_LANGUAGE=INDONESIAN example: Sabtu, Desember 31, 2016
IRISH NLS_SORT=BINARY NLS_DATE_LANGUAGE=IRISH example: Dé Sathairn, Nollaig 31, 2016
ITALIAN NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=ITALIAN example: Sabato, Dicembre 31, 2016
JAPANESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=JAPANESE example: 土曜日, 12月 31, 2016
KANNADA NLS_SORT=BINARY NLS_DATE_LANGUAGE=KANNADA example: ಶನಿವಾರ, ಡಿಸೆಂಬರ್ 31, 2016
KHMER NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=KHMER example: ថ្ងៃសៅរ៍, ធ្នូ 31, 2016
KOREAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=KOREAN example: 토요일, 12월 31, 2016
LAO NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LAO example: ວັນເສົາ, ທັນວາ 31, 2016
LATIN AMERICAN SPANI NLS_SORT=SPANISH NLS_DATE_LANGUAGE=LATIN AMERICAN SPANI example: Sábado, Diciembre 31, 2016
LATIN BOSNIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LATIN BOSNIAN example: Subota, Decembar 31, 2016
LATIN SERBIAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=LATIN SERBIAN example: Subota, Decembar 31, 2016
LATIN UZBEK NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=LATIN UZBEK example: Shanba, Dekabr 31, 2016
LATVIAN NLS_SORT=LATVIAN NLS_DATE_LANGUAGE=LATVIAN example: Sestdiena, Decembris 31, 2016
LITHUANIAN NLS_SORT=LITHUANIAN NLS_DATE_LANGUAGE=LITHUANIAN example: Šeštadienis, Gruodžio 31, 2016
MACEDONIAN NLS_SORT=BINARY NLS_DATE_LANGUAGE=MACEDONIAN example: Сабота, Декември 31, 2016
MALAY NLS_SORT=MALAY NLS_DATE_LANGUAGE=MALAY example: Sabtu, Disember 31, 2016
MALAYALAM NLS_SORT=BINARY NLS_DATE_LANGUAGE=MALAYALAM example: ശനിആഴ്ച, ഡിസംബര് 31, 2016
MALTESE NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=MALTESE example: Is-Sibt, Diċembru 31, 2016
MARATHI NLS_SORT=BINARY NLS_DATE_LANGUAGE=MARATHI example: शनिवार, डिसेंबर 31, 2016
MEXICAN SPANISH NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=MEXICAN SPANISH example: Sábado, Diciembre 31, 2016
NEPALI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=NEPALI example: शनिबार, डिसेम्बर 31, 2016
NORWEGIAN NLS_SORT=NORWEGIAN NLS_DATE_LANGUAGE=NORWEGIAN example: Lørdag, Desember 31, 2016
ORIYA NLS_SORT=BINARY NLS_DATE_LANGUAGE=ORIYA example: ଶନିବାର, ଡିସେମ୍ବର 31, 2016
PERSIAN NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=PERSIAN example: شنبه, دسامبر 31, 2016
POLISH NLS_SORT=POLISH NLS_DATE_LANGUAGE=POLISH example: Sobota, Grudzień 31, 2016
PORTUGUESE NLS_SORT=WEST_EUROPEAN NLS_DATE_LANGUAGE=PORTUGUESE example: Sábado, Dezembro 31, 2016
PUNJABI NLS_SORT=BINARY NLS_DATE_LANGUAGE=PUNJABI example: ਸ਼ਨਿਚੱਰਵਾਰ, ਦਸੰਬਰ 31, 2016
ROMANIAN NLS_SORT=ROMANIAN NLS_DATE_LANGUAGE=ROMANIAN example: Sâmbătă, Decembrie 31, 2016
RUSSIAN NLS_SORT=RUSSIAN NLS_DATE_LANGUAGE=RUSSIAN example: Суббота, Декабрь 31, 2016
SIMPLIFIED CHINESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=SIMPLIFIED CHINESE example: 星期六, 12月 31, 2016
SINHALA NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=SINHALA example: සෙනසුරාදා, දෙසැම්බර් 31, 2016
SLOVAK NLS_SORT=SLOVAK NLS_DATE_LANGUAGE=SLOVAK example: Sobota, December 31, 2016
SLOVENIAN NLS_SORT=SLOVENIAN NLS_DATE_LANGUAGE=SLOVENIAN example: Sobota, December 31, 2016
SPANISH NLS_SORT=SPANISH NLS_DATE_LANGUAGE=SPANISH example: Sábado, Diciembre 31, 2016
SWAHILI NLS_SORT=GENERIC_M NLS_DATE_LANGUAGE=SWAHILI example: Jumamosi, Desemba 31, 2016
SWEDISH NLS_SORT=SWEDISH NLS_DATE_LANGUAGE=SWEDISH example: Lördag, December 31, 2016
TAMIL NLS_SORT=BINARY NLS_DATE_LANGUAGE=TAMIL example: சனிக்கிழமை, டிசம்பர் 31, 2016
TELUGU NLS_SORT=BINARY NLS_DATE_LANGUAGE=TELUGU example: శనివారం, డిసెంబర్ 31, 2016
THAI NLS_SORT=THAI_DICTIONARY NLS_DATE_LANGUAGE=THAI example: เสาร์, ธันวาคม 31, 2016
TRADITIONAL CHINESE NLS_SORT=BINARY NLS_DATE_LANGUAGE=TRADITIONAL CHINESE example: 星期六, 12月 31, 2016
TURKISH NLS_SORT=TURKISH NLS_DATE_LANGUAGE=TURKISH example: Cumartesi, Aralık 31, 2016
UKRAINIAN NLS_SORT=UKRAINIAN NLS_DATE_LANGUAGE=UKRAINIAN example: Субота, Грудень 31, 2016
VIETNAMESE NLS_SORT=VIETNAMESE NLS_DATE_LANGUAGE=VIETNAMESE example: Thứ Bảy, Tháng MườI Hai 31, 2016

NLS_LANG

Note that you can also set the NLS settings with environment variables, but only if NLS_LANG is set. This means that you must set NLS_LANG first (which sets the LANGUAGE and TERRITORY), and then set, for example, NLS_DATE_FORMAT.

Here is an example.
I set NLS_DATE_FORMAT from the environment:

[oracle@VM102 data]$ export NLS_DATE_FORMAT=DL

but the date format is not taken into account in my session:
[oracle@VM102 data]$ sqlplus -s / as sysdba <<< "select sysdate from dual;"
 
SYSDATE
---------
04-JAN-17

Now I set NLS_LANG (you don't need to specify a LANGUAGE, TERRITORY or CHARACTERSET; just put the separators '_' and '.'):

[oracle@VM102 data]$ export NLS_LANG="_."

And now my NLS_DATE_FORMAT is taken into account

[oracle@VM102 data]$ sqlplus -s / as sysdba <<< "select sysdate from dual;"
 
SYSDATE
-----------------------------
Wednesday, January 04, 2017

In general, you should set NLS_LANG on any client, to the LANGUAGE and TERRITORY you want to display to the client (or store in a file, or have managed by the application server).
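
For example, a typical client setup could look like this (the values are illustrative; pick the LANGUAGE, TERRITORY and character set that match your client):

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export NLS_DATE_FORMAT="DL"
sqlplus -s / as sysdba <<< "select sysdate from dual;"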

I wish you a Happy New Year 2017.

 


Oracle 12cR2 – DataGuard Switchover with Oracle Wallets

Wed, 2017-01-04 06:48

I would like to make my DataGuard environment more secure by eliminating the typing of “connect sys/Manager1” in my DGMGRL commands, especially the ones that I have in my scripts. For example:

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl <<-EOF
> connect sys/Manager1
> show configuration verbose;
> EOF

or something like that:

oracle@dbidg01:/u01/app/oracle/local/dg/ [DBIT122] cat show_config.dg
connect sys/Manager1;
show configuration;

oracle@dbidg01:/u01/app/oracle/local/dg/ [DBIT122] dgmgrl @show_config.dg
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 12:54:11 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "DBIT122_SITE1"
Connected as SYSDG.

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 39 seconds ago)

HINT: Be aware that “dgmgrl [<options>] @script_file_name” is a new feature with the Broker in Oracle 12.2. It was not possible to use “dgmgrl @script” beforehand.

OK, so how can I make my scripts more secure? Of course, by using wallets, as we already did with the observer configuration. See http://blog.dbi-services.com/oracle-12cr2-how-to-setup-dataguard-observer-with-oracle-wallets/
However, this time I also want to do the switchover and other operations with wallets.

So, let's create the necessary wallets for the SYS user on the primary and the standby.

-- Primary

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS             
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

-- Standby

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

And of course, we have to test the connections to see if everything is working correctly.

-- Primary
sqlplus /@DBIT122_SITE1 as sysdba

sqlplus /@DBIT122_SITE2 as sysdba

DGMGRL> connect /@DBIT122_SITE1

DGMGRL> connect /@DBIT122_SITE2


-- Standby

sqlplus /@DBIT122_SITE1 as sysdba

sqlplus /@DBIT122_SITE2 as sysdba

DGMGRL> connect /@DBIT122_SITE1

DGMGRL> connect /@DBIT122_SITE2

So far, so good. My connections with the wallets work from the primary to the standby and the other way around. Now, let's try to do a DataGuard switchover with wallets.

DGMGRL> connect /@DBIT122_SITE1
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
DGMGRL>
DGMGRL>
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 58 seconds ago)

DGMGRL> SWITCHOVER TO 'DBIT122_SITE2';
Performing switchover NOW, please wait...
Operation requires a connection to database "DBIT122_SITE2"
Connecting ...
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up instance "DBIT122" of database "DBIT122_SITE1"

DGMGRL>

Oops … that doesn't look good. It says “invalid username/password”, but everything worked beforehand. That output does not give me much information, so let's try the whole thing again in debug mode with dgmgrl -debug.

oracle@dbidg01:/u01/app/oracle/network/admin/ [DBIT122] dgmgrl -debug
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 11:04:21 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE2
[W000 01/04 11:04:34.32] Connecting to database using DBIT122_SITE2.
[W000 01/04 11:04:34.33] Attempt logon as SYSDG
[W000 01/04 11:04:35.42] Attempt logon as SYSDBA
[W000 01/04 11:04:35.47] Executing query [select db_unique_name from v$database].
[W000 01/04 11:04:35.47] Query result is 'DBIT122_SITE2'
Connected to "DBIT122_SITE2"
[W000 01/04 11:04:35.47] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:04:35.47] Oracle database version is '12.2.0.1.0'
Connected as SYSDBA.
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 49 seconds ago)

DGMGRL> switchover to 'DBIT122_SITE2';
Performing switchover NOW, please wait...
New primary database "DBIT122_SITE2" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE1"
Starting instance "DBIT122"...
[W000 01/04 11:05:04.99] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:05:04.99] Attempt logon as SYSDG
[W000 01/04 11:05:06.04] Attempt logon as SYSDBA
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

Please complete the following steps to finish switchover:
        start up and mount instance "DBIT122" of database "DBIT122_SITE1"

DGMGRL>

What is happening here? The broker is not using the connect string “DBIT122_SITE1”; it is using the full description list “(DESCRIPTION=(ADDRESS=…)”. And when I look up my credentials in the wallet, I see only entries for “DBIT122_SITE1 SYS” and “DBIT122_SITE2 SYS”.

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat

The solution here is to add credentials for the description list taken from the StaticConnectIdentifier property.

DGMGRL> show database 'DBIT122_SITE1' StaticConnectIdentifier;
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))'

DGMGRL> show database 'DBIT122_SITE2' StaticConnectIdentifier;
  StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))'
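
As an alternative sketch (not the path I follow here), you could instead point StaticConnectIdentifier at the existing TNS aliases so that the broker reuses the wallet entries already in place. This only works if those aliases resolve to the statically registered _DGMGRL services, otherwise the broker cannot restart a stopped instance:

DGMGRL> edit database 'DBIT122_SITE1' set property StaticConnectIdentifier='DBIT122_SITE1';
DGMGRL> edit database 'DBIT122_SITE2' set property StaticConnectIdentifier='DBIT122_SITE2';

I prefer to keep the default identifiers and add wallet entries for the full description lists instead.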

OK, let's add the new credentials to our wallets. Be careful to specify them exactly as they show up in the StaticConnectIdentifier.

-- Primary

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg01:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
5: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
4: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat


-- Standby

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED)))' SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg02:/home/oracle/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
5: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
4: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg01)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE1_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))) SYS
3: DBIT122_SITE2 SYS
2: DBIT122_SITE1 SYS
1: rcat rcat

After everything is set up, let's try the switchover again in debug mode.

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl -debug
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Wed Jan 4 11:22:38 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE1
[W000 01/04 11:22:47.94] Connecting to database using DBIT122_SITE1.
[W000 01/04 11:22:47.94] Attempt logon as SYSDG
[W000 01/04 11:22:49.02] Attempt logon as SYSDBA
[W000 01/04 11:22:49.06] Executing query [select db_unique_name from v$database].
[W000 01/04 11:22:49.06] Query result is 'DBIT122_SITE1'
Connected to "DBIT122_SITE1"
[W000 01/04 11:22:49.06] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:22:49.06] Oracle database version is '12.2.0.1.0'
Connected as SYSDBA.
DGMGRL> switchover to 'DBIT122_SITE1';
Performing switchover NOW, please wait...
New primary database "DBIT122_SITE1" is opening...
Operation requires start up of instance "DBIT122" on database "DBIT122_SITE2"
Starting instance "DBIT122"...
[W000 01/04 11:23:18.07] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:18.07] Attempt logon as SYSDG
[W000 01/04 11:23:19.15] Attempt logon as SYSDBA
[W000 01/04 11:23:20.23] Executing query [select db_unique_name from v$database].
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

ORACLE instance started.
[W000 01/04 11:23:36.03] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:36.03] Attempt logon as SYSDG
[W000 01/04 11:23:37.13] Attempt logon as SYSDBA
[W000 01/04 11:23:37.17] Executing query [select db_unique_name from v$database].
ORA-01507: database not mounted

[W000 01/04 11:23:37.20] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:23:37.20] Oracle database version is '12.2.0.1.0'
[W000 01/04 11:23:37.20] Executing statement [alter database mount].
[W000 01/04 11:23:42.66] Statement [alter database mount] executed successfully.
Database mounted.
[W000 01/04 11:23:42.66] Checking for bootstrap done...
[W000 01/04 11:23:42.67] Connecting to database using (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbidg02)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=DBIT122_SITE2_DGMGRL)(UR=A)(INSTANCE_NAME=DBIT122)(SERVER=DEDICATED))).
[W000 01/04 11:23:42.67] Attempt logon as SYSDG
[W000 01/04 11:23:43.77] Attempt logon as SYSDBA
[W000 01/04 11:23:43.82] Executing query [select db_unique_name from v$database].
[W000 01/04 11:23:43.83] Query result is 'DBIT122_SITE2'
Connected to "DBIT122_SITE2"
[W000 01/04 11:23:43.83] Checking broker version [BEGIN :version := dbms_drs.dg_broker_info('VERSION'); END;].
[W000 01/04 11:23:43.83] Oracle database version is '12.2.0.1.0'
[W000 01/04 11:23:55.85] Done waiting for bootstrap after 0 retries
Switchover succeeded, new primary is "DBIT122_SITE1"
DGMGRL>
DGMGRL> show configuration;

Configuration - DBIT122

  Protection Mode: MaxAvailability
  Members:
  DBIT122_SITE1 - Primary database
    DBIT122_SITE2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 60 seconds ago)

DGMGRL>

Now it worked perfectly. I can run my switchover operations with wallets, and my scripts no longer need a password in clear text, like the following:

oracle@dbidg01:/home/oracle/ [DBIT122] dgmgrl <<-EOF
> connect /@DBIT122_SITE1
> show configuration verbose;
> EOF

Conclusion

Doing DataGuard switchovers with wallets works perfectly if the setup is done correctly, and besides that, you can eliminate a lot of clear-text passwords that you might have lying around.

Cheers,

William

 


Oracle 12cR2 – How to Setup DataGuard observer with Oracle Wallets

Tue, 2017-01-03 09:12

I am not a big fan of having passwords in clear text lying around. This applies not only to application servers, but also to my Data Guard observer.

I have a script for starting the observer that reads a config file, dgobserver.cfg, and this file contains the username, password and connect string for my primary and standby databases.

#*************************************************************
# Connection string to the primary
ConnectStringPrim="sys/Manager1@DBIT122_SITE1"

#*************************************************************
# Connection string to the Standby
ConnectStringStdb="sys/Manager1@DBIT122_SITE2"

However, I don't want to have these passwords in clear text anymore, so I set up wallets for that purpose on the observer host.

To set up the wallet connection, we need to:

  • Create a wallet directory
  • Adjust the sqlnet.ora on the observer
  • Create the wallet and the credentials
  • Test the connections via wallets
  • Adjust the dgobserver.cfg file
  • Test a Fast Start Failover

Create a directory /u01/app/oracle/admin/wallets and add the following to your sqlnet.ora file:

WALLET_LOCATION =
   (SOURCE =
      (METHOD = FILE)
      (METHOD_DATA = (DIRECTORY = /u01/app/oracle/admin/wallets))
)

SQLNET.WALLET_OVERRIDE = TRUE

Now, create the wallet and the credentials

oracle@dbidg03:/u01/app/oracle/network/admin/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -create
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter password:
Enter password again:


oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE1 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -createCredential DBIT122_SITE2 SYS
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Your secret/Password is missing in the command line
Enter your secret/Password:
Re-enter your secret/Password:
Enter wallet password:


oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] mkstore -wrl /u01/app/oracle/admin/wallets -listCredential           
Oracle Secret Store Tool : Version 12.2.0.1.0
Copyright (c) 2004, 2016, Oracle and/or its affiliates. All rights reserved.

Enter wallet password:
List credential (index: connect_string username)
2: DBIT122_SITE2 SYS
1: DBIT122_SITE1 SYS

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] ls -l
total 8
-rw------- 1 oracle oinstall 957 Jan  3 13:57 cwallet.sso
-rw------- 1 oracle oinstall   0 Jan  3 13:56 cwallet.sso.lck
-rw------- 1 oracle oinstall 912 Jan  3 13:57 ewallet.p12
-rw------- 1 oracle oinstall   0 Jan  3 13:56 ewallet.p12.lck

 

After everything was successfully set up, it is time to test the connections via wallets, with sqlplus and with dgmgrl.

oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] sqlplus /@DBIT122_SITE1 as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 3 13:59:07 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122] sqlplus /@DBIT122_SITE2 as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 3 13:59:12 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
oracle@dbidg03:/u01/app/oracle/admin/wallets/ [DBIT122]

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] dgh
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Tue Jan 3 14:00:05 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /@DBIT122_SITE1
Connected to "DBIT122_SITE1"
Connected as SYSDBA.
DGMGRL> connect /@DBIT122_SITE2
Connected to "DBIT122_SITE2"
Connected as SYSDBA.
DGMGRL> exit

 

Looks good so far. Now let's adjust the dgobserver.cfg file and start the observer.

-- adjust the dgobserver.cfg file

#*************************************************************
# Connection string to the primary
ConnectStringPrim="/@DBIT122_SITE1"

#*************************************************************
# Connection string to the Standby
ConnectStringStdb="/@DBIT122_SITE2"

-- start the observer

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] dgobserver.ksh start DBIT122
2017-01-03_14:01:02::dgobserver.ksh::SetOraEnv          ::INFO ==> Environment: DBIT122 (/u01/app/oracle/product/12.2.0/dbhome_1)
2017-01-03_14:01:03::dgobserver.ksh::StatusObserver     ::INFO ==> Observer Stopped
2017-01-03_14:01:04::dgobserver.ksh::StartObserver      ::INFO ==> Connection to the primary database
2017-01-03_14:01:04::dgobserver.ksh::DoCommand          ::INFO ==> Start observer file='/u01/app/oracle/admin/DBIT122/etc/fsfo_DBIT122.dat
2017-01-03_14:01:06::dgobserver.ksh::StatusObserver     ::INFO ==> Observer running
2017-01-03_14:01:07::dgobserver.ksh::CleanExit          ::INFO ==> Program exited with ExitCode : 0

oracle@dbidg03:/u01/app/oracle/admin/DBIT122/etc/ [DBIT122] ps -ef | grep dgmgrl | grep -v grep
oracle 9186 1 0 14:01 pts/0 00:00:00 dgmgrl -logfile /u01/app/oracle/admin/DBIT122/log/dgobserver.log -silent start observer file='/u01/app/oracle/admin/DBIT122/etc/fsfo_DBIT122.dat'

After everything is set up and done, it is time for the fun part. Let's initiate a Fast-Start Failover by shutting down the primary with ABORT.

SQL> shutdown abort
ORACLE instance shut down.


-- observer log 

...
14:04:49.10  Tuesday, January 03, 2017
Initiating Fast-Start Failover to database "DBIT122_SITE2"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "DBIT122_SITE2"
14:04:58.85  Tuesday, January 03, 2017
...

14:07:39.04  Tuesday, January 03, 2017
Initiating reinstatement for database "DBIT122_SITE1"...
Reinstating database "DBIT122_SITE1", please wait...
Reinstatement of database "DBIT122_SITE1" succeeded
14:08:33.19  Tuesday, January 03, 2017

...

Cool, Fast-Start Failover and the reinstatement worked as expected.
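
To double-check the end state from the observer host, a one-liner like this is enough (a sketch using the wallet connection):

dgmgrl /@DBIT122_SITE2 "show configuration"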

Conclusion

With Oracle wallets, I can make my DataGuard observer a little bit more secure by eliminating the passwords in clear text.

 

 


MS Cloud Summit Paris 2017

Mon, 2017-01-02 15:10

A new year begins, and 2017 should be a good vintage in the database field, especially considering the announcements Microsoft made at the end of 2016. Among other things, there was talk of the next vNext of SQL Server, which will be ported to Linux, as well as promising new features. As usual, the dbi services team will try to cover the various topics over the course of the year. Don't hesitate to come back from time to time to take a look at new posts!

But before talking about the next version of SQL Server, let's focus on the current one, SQL Server 2016, which already offers interesting improvements and new possibilities in several areas, whether performance, high availability, security or many others. Moreover, another important piece of news that will interest most of our customers is the recent release of SP1 for SQL Server 2016, which harmonizes the feature surface across the different editions. It will notably be possible to use compression, partitioning or columnstore indexes with a Standard edition, for example. I am no fortune teller, but I do not think I am wrong in saying that this year I will see a few 2016 infrastructures spring up in our customers' ecosystems!

In any case, to get off on the right foot with this new version of SQL Server, a French-speaking event to put on your calendar is the MS Cloud Summit, which will take place in Paris starting January 23. This event is meant to be much broader than the Journées SQL Server that you certainly already know: the MS Cloud Summit means 600 expected attendees and 7 tracks with 72 sessions around the cloud and hybrid and on-premises scenarios. For my part, I will be there with dbi services, and I will have the pleasure of presenting the new capabilities you can benefit from in terms of high availability and disaster recovery with the winning combination of Windows Server 2016 and SQL Server 2016.

blog 112 - session

I look forward to seeing you there. Until then, I wish you a very happy New Year 2017!

 

 


IT-Tage 2016 Informatik aktuell: feedback

Fri, 2016-12-23 09:07

Today, to finish the year, I post a brief personal impression of the IT-Tage 2016 in Frankfurt at the Hotel Maritim, where I was also a speaker.

IMG_3808

I presented 2 sessions on SQL Server: “SQL Server Errorlog Entmystifizierung” & “SQL Server 2016: Neue Sicherheitsfunktionen”.
I wasn’t the only one from dbi services who spoke at that conference:

  • David Barbarin with also 2 sessions: “SQL Server – Locks, latches and spinlocks” & “SQL Server 2016 Availability Group Enhancements”
  • Clemens Bleile with 1 session: “SQL Plan Directives: Neuigkeiten in 12.2. Produktions-Ausführungspläne in Testumgebungen reproduzieren”
  • Philippe Schweitzer with 1 session: “Feasibility study for building a software factory based on GIT repository”
  • Daniel Westermann with 1 session: “Breaking the deadlock: Migrating from proprietary databases to PostgreSQL”

You can already download all presentations on this link.

After my presentation day, I had the opportunity to attend a very interesting session by Oliver Hock: “Ein Prozess lernt laufen: LEGO-Mindstorms-Steuerung mit BPMN”. With a LEGO Mindstorms kit, he showed how to solve a Rubik's cube.

IMG_3823

This session is also on YouTube; watch the demo at the end (the last 60 seconds). It was very nice! ;-)

I would like to thank the entire team of Informatik Aktuell, who put together a smooth and interesting event.

I hope that I can go again next year, present new sessions and attend other interesting ones…

In the evening, you could also enjoy the Christmas market, which is two metro stops from the hotel. IMG_3810

I wish you a merry Christmas and, as we say in Alsace: “A guetta rutsch ins neja Johr!”

 

Cet article IT-Tage 2016 Informatik aktuell: feedback est apparu en premier sur Blog dbi services.

Migrating your existing availability group infrastructure to 2016

Fri, 2016-12-23 04:57

Do you already have a plan to migrate your old SQL Server 2012 availability group infrastructure to Windows 2016 and SQL Server 2016? In a previous post, I talked about distributed availability groups and cross-cluster migration scenarios, but this time the game is not the same because we want to achieve an in-place upgrade of the existing AG infrastructure. This question will probably be a concern next year, and if we take a closer look at the improvements shipped with the new versions of Windows and SQL Server, we will not be disappointed for sure.

Indeed, cluster rolling upgrade is a new feature of Windows Server 2016 which allows us to migrate smoothly (and almost transparently) the WSFC side of the existing database infrastructure. On the other side, upgrading high-available replicas from SQL Server 2012 to SQL Server 2016 is also possible without completely reinstalling the availability group infrastructure. Thus, we may benefit from a temporary, mixed infrastructure on both sides to reduce the outage timeframe of our applications. I can think of some customers where this could be helpful regarding their business and their corresponding SLAs.

So let's just demonstrate this kind of scenario with a classic customer availability group infrastructure. Most availability groups implemented in my area consist of two replicas, meaning a WSFC with 2 cluster nodes at the low level of the global architecture, as shown below:

blog 111 - 00 - initial infra WSFC2012

So the question is how to achieve the migration of the above infrastructure from the 2012 version to the 2016 version (both Windows and SQL Server) with low downtime? Well, one solution would consist of preparing and adding two temporary extra nodes which would run Windows Server 2016 and SQL Server 2012 in a first step. Unfortunately, we cannot directly mix two different versions of SQL Server in an existing availability group yet. We have to execute an extra step to upgrade, one by one, each replica we want to run on SQL Server 2016. Having two extra nodes will allow us to prepare our migration smoothly without impacting the existing high-available infrastructure.

So let's begin with adding 2 nodes with the Windows Server 2016 version. Basically, we may use either the GUI or PowerShell cmdlets for that.

The initial scenario (2012 version) is as follows:

blog 111 - 0 - initial config cluster

Let’s add the two extra cluster nodes which run on Windows Server 2016. According to Microsoft technet procedure, the key point is to perform this action from a Windows Server 2016 node exclusively.

The PowerShell cmdlet used is the same as in the previous version, and I executed it for the two additional nodes (WIN20168SQL16 and WIN20169SQL16) which both run on Windows Server 2016. Just remember to exclude the two extra nodes from the quorum vote to avoid impacting the existing configuration. A minimal sketch follows below.
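A minimal PowerShell sketch of those two actions (the node names are the ones from my lab; adapt them to your environment):

# add the two Windows Server 2016 nodes without touching the storage
Add-ClusterNode -Name "WIN20168SQL16" -NoStorage
Add-ClusterNode -Name "WIN20169SQL16" -NoStorage

# exclude the new nodes from the quorum vote so the existing configuration is untouched
(Get-ClusterNode -Name "WIN20168SQL16").NodeWeight = 0
(Get-ClusterNode -Name "WIN20169SQL16").NodeWeight = 0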

blog 111 - 1 - Add Cluster Node 2016

However, we may notice the new cluster functional level property, as shown below. The value is equal to 8, meaning that the cluster has switched to a temporary / mixed mode because at the moment we have cluster nodes on both the 2012 and 2016 versions.

blog 111 - 2 - Cluster Functional Level 2016
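To check the level yourself, a quick sketch (run from any cluster node with the FailoverClusters module available):

Get-Cluster | Select-Object Name, ClusterFunctionalLevel
# 8 = mixed 2012/2016 mode, 9 = Windows Server 2016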

The transitioned infrastructure now includes 4 nodes. The first 2 nodes run on Windows Server 2012 whereas the last 2 nodes run on Windows Server 2016. An availability group runs on top of the first 2 nodes, and two additional replicas are ready to be enrolled into the existing infrastructure.

blog 111 - 21 - transitioned WSFC

So now let's move on to the SQL Server side and add the 2 additional replicas.

The initial AG scenario is as follows:

blog 111 - 3 - Initial AG 2012

After adding the two replicas in asynchronous mode, we get the following picture:

blog 111 - 4 - Add replica 2016 to AG 2012
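For reference, a hedged T-SQL sketch of this step; the endpoint URLs and port are assumptions based on my lab setup, not taken from the original commands:

-- add the two SQL Server 2012 instances on the 2016 nodes as asynchronous replicas
ALTER AVAILABILITY GROUP [dummygrp]
ADD REPLICA ON N'WIN20168SQL16\SQL12'
WITH ( ENDPOINT_URL = N'TCP://WIN20168SQL16:5022',
       AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
       FAILOVER_MODE = MANUAL );
GO

ALTER AVAILABILITY GROUP [dummygrp]
ADD REPLICA ON N'WIN20169SQL16\SQL12'
WITH ( ENDPOINT_URL = N'TCP://WIN20169SQL16:5022',
       AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
       FAILOVER_MODE = MANUAL );
GO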

At this step, we now have to upgrade the newly added replicas to SQL Server 2016. One important thing to keep in mind here is that we absolutely have to prevent a failover to an upgraded replica before all the new secondary replicas have been upgraded. Indeed, according to the Microsoft documentation, an upgraded primary replica can no longer ship logs to any secondary replica whose SQL Server instance has not yet been upgraded to the same version.

So in my case, I upgraded first the WIN20169SQL16\SQL12 replica and then the WIN20168SQL16\SQL12 as shown below:

blog 111 - 5- AG config after upgrade node

The new transitioned infrastructure is shown in the picture below:

blog 111 -51- Transitioned infrastructure

 

The next part of the migration includes a short downtime. The previous steps did not imply any outage so far.

Basically the next part of the procedure will include the following steps:

  • Choose the next replica in 2016 version that will be involved as primary and change its replication mode to synchronous in order to prevent losing data
  • Failover the availability group to this replica (at this step old replicas in 2012 version will not be synchronized because the new primary replica may no longer ship logs to them as said previously)

blog 111 - 5- AG health state after upgrade node

  • Change the replication mode of the second replica in 2016 to synchronous to meet the initial configuration
  • Remove old replicas in 2012 from the availability group

blog 111 -52- Transitioned infrastructure

The T-SQL script was as follows in my case:

:CONNECT WIN20121SQL16\SQL12

-- Change temporary replication to synchronous
-- for next SQL Server 2016 primary replica 
USE [master]
GO
ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20168SQL16\SQL12' 
WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT)
GO

:CONNECT WIN20168SQL16\SQL12

-- Initiate failover to next SQL Server 2016 primary replica  
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
FAILOVER;
GO


:CONNECT WIN20168SQL16\SQL12

-- Change replication of the second SQL Server 2016 replica
-- back to synchronous with automatic failover
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20169SQL16\SQL12' 
WITH ( AVAILABILITY_MODE = SYNCHRONOUS_COMMIT );

ALTER AVAILABILITY GROUP [dummygrp]
MODIFY REPLICA ON N'WIN20169SQL16\SQL12' 
WITH ( FAILOVER_MODE = AUTOMATIC );


:CONNECT WIN20168SQL16\SQL12

-- Remove old replicas (2012)
USE [master]
GO

ALTER AVAILABILITY GROUP [dummygrp]
REMOVE REPLICA ON N'WIN20121SQL16\SQL12';
GO

ALTER AVAILABILITY GROUP [dummygrp]
REMOVE REPLICA ON N'WIN20122SQL16\SQL12';
GO

 

Let's go back to the cluster side; the next step will consist of removing the old cluster nodes from the WSFC by using the usual commands such as Remove-ClusterNode.

blog 111 - 6- Remove old cluster nodes
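A minimal sketch, run from one of the remaining 2016 nodes (the old node host names are assumptions based on my lab naming):

Remove-ClusterNode -Name "WIN20121SQL16" -Force
Remove-ClusterNode -Name "WIN20122SQL16" -Force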

… And we may finally update the cluster functional level to 9 (2016 version). Just be aware that upgrading the cluster functional level to 2016 makes the process irreversible. Reverting back to the initial configuration will simply not be possible, or at least it would require extra steps and a longer downtime as well.

blog 111 - 7 - Update cluster functionalLevel 2016
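The upgrade itself is a single cmdlet; a quick sketch:

# irreversible once applied; make sure all nodes run Windows Server 2016 first
Update-ClusterFunctionalLevel
Get-Cluster | Select-Object Name, ClusterFunctionalLevel   # should now report 9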

And let’s get the final view of our upgraded availability group dashboard:

blog 111 - 8 - AG 2016 dashboard

The listener stays the same and the whole operation is almost transparent to applications.

 

Bottom line

In this blog post we had a glimpse of the new capabilities of both Windows 2016 and SQL Server 2016 in terms of rolling upgrades. Of course, reality will probably be a little more complicated when introducing other parameters such as the customer context, the number of availability groups, the performance impact of adding temporary replicas, external dependencies and so on. But these features seem promising and may be very helpful for future migration scenarios. I'm looking forward to experimenting with them at customer sites!

Happy upgrade!

 

 

Cet article Migrating your existing availability group infrastructure to 2016 est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 8 – Transportable tablespaces

Fri, 2016-12-23 01:47

My colleague Franck posted a comment to one of my last “Can I do it with PostgreSQL” blog posts. The comment he posted is: “Here is an idea for a future “Can I do it with PostgreSQL?”. My favorite Oracle feature: transportable tablespaces.” When you've read my post about how PostgreSQL implements tablespaces then you probably already know the answer: No, you cannot do this in PostgreSQL. Having thought about this for some time, I decided to check the Oracle documentation for what transportable tablespaces are good for. I know the basics and how you can do it in Oracle, but better check twice :)

According to the documentation, the main goal of transportable tablespaces is to transport data. This means you can transport one or more tablespaces (which then become a transportable tablespace set) from one host to another and then “plug” that set into an existing database (as long as the tablespaces are self-contained). One great thing you can do with it, depending on the use case, is that the target database does not need to have the same standard block size as the source database. This means you can transport a tablespace with a 16k block size to a database with a default block size of 8k. This is another thing you cannot do in PostgreSQL: in PostgreSQL the block size is set at configure/compile time. Once you have the compiled binaries you cannot change that afterwards.

Probably the greatest benefit of transportable tablespaces is that they save you time when moving your data around. You just copy the data files that make up your tablespace(s) and then use expdp/impdp for the metadata, that's it. When you go for the multi-tenant architecture you can use transportable tablespaces to make a non-CDB a pluggable database on the target, too. For a more detailed introduction you can check the documentation (linked above).

Back to PostgreSQL: What options do you have for transporting your data from one database to another?

The tool of choice for most cases probably is pg_dump. The big advantage of pg_dump is that you can use it over the network and directly write everything you want to export into the target database using a pipe:

pg_dump -C -h [SOURCE_HOST] -U [USER] [DATABASE] | psql -h [TARGETHOST] -U [TARGETUSER] [TARGETDATABASE]

You can even do that using parallel processes when you combine pg_dump with pg_restore although using a pipe is not supported in that case. To demonstrate this we’ll need some sample data (I am using PostgreSQL 9.5.4 as the source):

postgres@pgbox:/home/postgres/ [PG954] psql
psql (9.5.4 dbi services build)
Type "help" for help.

(postgres@[local]:5438) [postgres] > create database test;
CREATE DATABASE
(postgres@[local]:5438) [postgres] > \c test
You are now connected to database "test" as user "postgres".
(postgres@[local]:5438) [test] > create table t1 ( a int, b int );
CREATE TABLE
(postgres@[local]:5438) [test] > create table t2 ( a int, b int );
CREATE TABLE
(postgres@[local]:5438) [test] > insert into t1 values (generate_series(1,1000000), generate_series(1,1000000));
INSERT 0 1000000
(postgres@[local]:5438) [test] > insert into t2 select * from t1;
INSERT 0 1000000

Let's say I want to copy the data to a PostgreSQL 9.6.1 instance on the same host. How can I do that? Quite easily:

postgres@pgbox:/home/postgres/ [PG961] pg_dump -V
pg_dump (PostgreSQL) 9.6.1 dbi services build
postgres@pgbox:/home/postgres/ [PG961] mkdir /var/tmp/exp
postgres@pgbox:/home/postgres/ [PG961] pg_dump -h localhost -p 5438 -C -c -F d -j 2 -f /var/tmp/exp/ test
postgres@pgbox:/home/postgres/ [PG961] pg_restore -h localhost -p 5439 -d postgres -F d -C /var/tmp/exp/
postgres@pgbox:/home/postgres/ [PG961] psql -c "\l" postgres
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
(4 rows)

postgres@pgbox:/home/postgres/ [PG961] psql -c "\d" test
        List of relations
 Schema | Name | Type  |  Owner   
--------+------+-------+----------
 public | t1   | table | postgres
 public | t2   | table | postgres
(2 rows)

Not a big deal and possible over the network, too.

Another option would be to use pg_basebackup to create a completely new instance from the source (source and target need to be on the same PostgreSQL release in this case). Again, this is possible over the network.
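A minimal sketch of such a call (placeholders in brackets; -X stream includes the WAL needed to make the copy consistent):

pg_basebackup -h [SOURCE_HOST] -U [REPLICATION_USER] -D [NEW_DATA_DIRECTORY] -X stream -P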

You can even use rsync to copy the whole cluster to a new host and then do a second rsync while the source is down for a short time. The second pass will copy only the files that changed since the first rsync and will probably be very fast, but you will need a downtime of the source for it.
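A hedged sketch of that two-pass approach (the data directory path and the target host name are assumptions):

# first pass while the source cluster is still running
rsync -a /u02/pgdata/PG1/ targethost:/u02/pgdata/PG1/
# stop the source, then run the second, much faster pass
pg_ctl -D /u02/pgdata/PG1 stop -m fast
rsync -a /u02/pgdata/PG1/ targethost:/u02/pgdata/PG1/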

There are other methods for moving your data around in PostgreSQL but the above are the most popular.

Hope this helps …

 

Cet article Can I do it with PostgreSQL? – 8 – Transportable tablespaces est apparu en premier sur Blog dbi services.

SQL Plan Directives in 12cR2. Behavior is different than in 12cR1.

Wed, 2016-12-21 12:04

During the Frankfurter IT-Tage I gave a presentation about SQL Plan Directives and how to reproduce, in a test environment, a testcase for a plan that used SQL Plan Directives. In that context I did a couple of tests with 12cR2 (12.2.0.1) and wanted to blog about the change in behavior in that release.

Here is my testcase I did initially run in 12.1.0.2:

First I created a table with 4 columns: A, B, C and D, and loaded it with 1000 rows:


10:25:27 demo@GEN12102> create table DEMO_TABLE as
10:25:27 2 select mod(rownum,2) a ,mod(rownum,2) b ,mod(rownum,2) c ,mod(rownum,2) d
10:25:27 3 from dual connect by level <=1000;
 
10:25:30 demo@GEN12102> select * from DEMO_TABLE;
A B C D
---------- ---------- ---------- ----------
1 1 1 1
...
0 0 0 0
1 1 1 1
0 0 0 0

I.e. there is a correlation between the columns (either all columns have the value 0 or all have the value 1).

Due to the new feature of stats gathering during CTAS and INSERT APPEND into empty tables in 12c the table has stats already:


10:28:29 demo@GEN12102> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics
10:28:29 2 where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:25:27
B 2 .5 10:25:27
C 2 .5 10:25:27
D 2 .5 10:25:27

Let’s run a query:


10:29:40 demo@GEN12102> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:29:40 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0
0 0 0 0
 
10:29:40 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------

There is a misestimation of the calculated cardinality (E-Rows) versus actual cardinality (A-Rows) of a factor 8 (63:500).
Oracle detected that and put the cursor on IS_REOPTIMIZABLE=Y:


10:32:49 demo@GEN12102> select sql_id,child_number,is_reoptimizable from v$sql
10:32:49 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 Y

Executing the SQL again uses statistics feedback:


10:33:42 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Note
-----
- statistics feedback used for this statement

The computed cardinality is correct now.

After flushing the SQL Plan Directives to the SYSAUX-tablespace I can see them with the internal_state NEW:


10:34:37 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:34:39 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used
10:34:39 2 from dba_sql_plan_directives where directive_id in(
10:34:39 3 select directive_id from dba_sql_plan_dir_objects where owner='DEMO'
10:34:39 4 ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

Executing a different SQL with the same type of predicate will use the SQL Plan Directive and dynamically gather stats:


10:35:54 demo@GEN12102> select /*+ gather_plan_statistics */ count(*) from DEMO_TABLE
10:35:54 2 where a=0 and b=0 and c=0 and d=0;
 
COUNT(*)
----------
500
 
10:35:54 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

The internal_state of the directive changed to MISSING_STATS:


10:37:18 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:37:20 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner='DEMO' ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>MISSING_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

The SPD is still being used until fresh stats are gathered. Oracle will then create extended (multi-column) statistics:


10:38:27 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
10:38:27 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
 
10:38:27 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
10:39:42 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:40:01 demo@GEN12102> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:25:27
B 2 .5 10:25:27
C 2 .5 10:25:27
D 2 .5 10:25:27
 
10:40:03 demo@GEN12102> exec dbms_stats.gather_table_stats(user,'DEMO_TABLE',options=>'GATHER AUTO',no_invalidate=>false);
 
10:40:05 demo@GEN12102> select column_name,num_distinct,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT LAST_ANA
------------ ------------ --------
A 2 10:40:04
B 2 10:40:04
C 2 10:40:04
D 2 10:40:04
SYS_STSPJNMI 2 10:40:04
 
10:40:05 demo@GEN12102> select extension_name,extension from user_stat_extensions where table_name='DEMO_TABLE';
 
EXTENSION_NAME
---------------
EXTENSION
----------------------------------------------------------------------------------------------------
SYS_STSPJNMIY_S
("A","B","C","D")

Running a SQL again still dynamically gathers stats, but will change the internal state to HAS_STATS:


10:40:37 demo@GEN12102> select /*+ gather_plan_statistics */ distinct b,c,d,a from DEMO_TABLE
10:40:37 2 where a=0 and b=0 and c=0 and d=0;
 
B C D A
---------- ---------- ---------- ----------
0 0 0 0
 
10:40:37 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
10:40:56 10:40:56

A next execution will use the extended stats instead of the directives:


10:45:10 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
10:45:10 2 where a=1 and b=1 and c=1 and d=1;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
 
10:45:10 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

Dropping the extended stats will result in a cardinality misestimate and set the directive to PERMANENT:


10:45:58 demo@GEN12102> exec dbms_stats.drop_extended_stats(user,'DEMO_TABLE','("A","B","C","D")');
 
10:46:19 demo@GEN12102> select /*+ gather_plan_statistics */ distinct a from DEMO_TABLE
10:46:19 2 where a=1 and b=1 and c=1 and d=1;
 
A
----------
1
 
10:46:19 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))
 
10:47:10 demo@GEN12102> exec dbms_spd.flush_sql_plan_directive;
10:47:10 demo@GEN12102> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ---------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
615512554562037875 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:34:36
<internal_state>PERMANENT</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

Future executions of such SQL will use the directive again:


10:47:46 demo@GEN12102> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

That’s how it worked in 12.1.0.2. Here’s the behavior of 12.2.0.1:

First of all I have to enable adaptive statistics and switch on the preference to automatically create extended statistics:


10:49:22 demo@GEN12201> alter system set optimizer_adaptive_statistics=true;
 
10:49:22 demo@GEN12201> exec DBMS_STATS.SET_PARAM ('AUTO_STAT_EXTENSIONS','ON');

Creating the table with the appropriate stats is the same as in 12.1.:


10:50:23 demo@GEN12201> create table DEMO_TABLE as
10:50:23 2 select mod(rownum,2) a ,mod(rownum,2) b ,mod(rownum,2) c ,mod(rownum,2) d
10:50:23 3 from dual connect by level <=1000;
 
10:51:02 demo@GEN12201> select * from DEMO_TABLE;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
...
1 1 1 1
0 0 0 0
 
10:51:31 demo@GEN12201> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics
10:51:31 2 where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:50:23
B 2 .5 10:50:23
C 2 .5 10:50:23
D 2 .5 10:50:23

Let’s run the first query:


10:51:42 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:51:42 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
..
0 0 0 0
0 0 0 0
 
10:51:42 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

As in 12.1. we have the misestimate in the computed cardinality (E-Rows) versus actual cardinality (A-Rows).

But here comes the first difference to 12.1.: The cursor is not marked as reoptimizable:


10:53:31 demo@GEN12201> select sql_id,child_number,is_reoptimizable from v$sql
10:53:31 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 N

I have to execute the SQL a second time to make the cursor IS_REOPTIMIZABLE=Y:


10:54:27 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:54:27 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0

The computed cardinality is still wrong:


10:54:27 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

But the cursor is on IS_REOPTIMIZABLE=Y now:


10:54:27 demo@GEN12201> select sql_id,child_number,is_reoptimizable from v$sql
10:54:27 2 where sql_id='gbwsdbt5usvt6';
 
SQL_ID CHILD_NUMBER IS_REOPTIMIZABLE
------------- ------------ ----------------
gbwsdbt5usvt6 0 Y

However, statistics feedback is NOT USED in 12.2.:


10:56:33 demo@GEN12201> select /*+ gather_plan_statistics */ * from DEMO_TABLE
10:56:33 2 where a=0 and b=0 and c=0 and d=0;
 
A B C D
---------- ---------- ---------- ----------
0 0 0 0
...
0 0 0 0
 
10:56:33 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 500 |00:00:00.01 | 44 |
|* 1 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 63 | 500 |00:00:00.01 | 44 |
------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
--------------------------------------------------
 
1 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))

I.e. the cardinality is still wrong and the Note saying “statistics feedback used for this statement” is not there.
But I can see a new SPD created:


10:58:37 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
10:58:39 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used
10:58:39 2 from dba_sql_plan_directives where directive_id in(
10:58:39 3 select directive_id from dba_sql_plan_dir_objects where owner='DEMO'
10:58:39 4 ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>

And it’s being used:


10:59:08 demo@GEN12201> select /*+ gather_plan_statistics */ count(*) from DEMO_TABLE
10:59:08 2 where a=0 and b=0 and c=0 and d=0;
 
COUNT(*)
----------
500
 
10:59:08 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT AGGREGATE | | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement

The new feature in 12cR2 is the created Directive of type DYNAMIC_SAMPLING_RESULT:


10:59:31 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
10:59:31 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in( select directive_id from dba_sql_plan_dir_objects where owner='DEMO' ) order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING USABLE SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>MISSING_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

I.e. the result of the dynamically sampled data is stored in the repository. The SQL_ID 7fp7c6kcgmzux of a stripped dynamic sampling query is stored, so that the result can be reused by other queries in the future. I.e. Oracle just has to generate the SQL_ID of the dynamic sampling query and can use a prior result if it finds it in the repository.

As the internal state of the directive is on MISSING_STATS and the DBMS_STATS-preference AUTO_STAT_EXTENSIONS is set to ON, Oracle will create extended stats when gathering stats next time:


11:02:17 demo@GEN12201> select column_name,num_distinct,density,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT DENSITY LAST_ANA
------------ ------------ ---------- --------
A 2 .5 10:50:23
B 2 .5 10:50:23
C 2 .5 10:50:23
D 2 .5 10:50:23
 
11:04:10 demo@GEN12201> exec dbms_stats.gather_table_stats(user,'DEMO_TABLE',options=>'GATHER AUTO',no_invalidate=>false);
 
11:04:11 demo@GEN12201> select column_name,num_distinct,last_analyzed from user_tab_col_statistics where table_name='DEMO_TABLE' order by 1;
 
COLUMN_NAME NUM_DISTINCT LAST_ANA
------------ ------------ --------
A 2 11:04:10
B 2 11:04:10
C 2 11:04:10
D 2 11:04:10
SYS_STSPJNMI 2 11:04:10
 
11:04:11 demo@GEN12201> select extension_name,extension from user_stat_extensions where table_name='DEMO_TABLE';
 
EXTENSION_NAME
---------------
EXTENSION
----------------------------------------------------------------------------------------------------
SYS_STSPJNMIY_S
("A","B","C","D")

Once a query is executed again, the internal state changes to HAS_STATS (same as in 12.1.):


11:04:33 demo@GEN12201> select /*+ gather_plan_statistics */ distinct b,c,d,a from DEMO_TABLE
11:04:33 2 where a=0 and b=0 and c=0 and d=0;
 
B C D A
---------- ---------- ---------- ----------
0 0 0 0
 
11:04:33 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=0 AND "B"=0 AND "C"=0 AND "D"=0))
 
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- 1 Sql Plan Directive used for this statement
 
11:04:35 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:04:35 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

The next query uses the stats instead of the SPD:


11:05:23 demo@GEN12201> select /*+ gather_plan_statistics */ distinct a,b,c,d from DEMO_TABLE
11:05:23 2 where a=1 and b=1 and c=1 and d=1;
 
A B C D
---------- ---------- ---------- ----------
1 1 1 1
 
11:05:23 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 500 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

When dropping the extended stats the estimated cardinality is wrong again:


11:05:49 demo@GEN12201> exec dbms_stats.drop_extended_stats(user,'DEMO_TABLE','("A","B","C","D")');
 
11:05:57 demo@GEN12201> select /*+ gather_plan_statistics */ distinct a from DEMO_TABLE
11:05:57 2 where a=1 and b=1 and c=1 and d=1;
 
A
----------
1
 
11:05:57 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

However, the SPD won't change its state to PERMANENT as in 12.1.:


11:06:38 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:06:38 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

I.e. future queries will have the misestimate, because the SPD remains on HAS_STATS:


11:07:16 demo@GEN12201> select /*+ gather_plan_statistics */ distinct b from DEMO_TABLE
11:07:16 2 where a=1 and b=1 and c=1 and d=1;
 
B
----------
1
 
11:07:16 demo@GEN12201> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'));
 
-------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 11 |
| 1 | SORT UNIQUE NOSORT| | 1 | 1 | 1 |00:00:00.01 | 11 |
|* 2 | TABLE ACCESS FULL| DEMO_TABLE | 1 | 62 | 500 |00:00:00.01 | 11 |
-------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
2 - filter(("A"=1 AND "B"=1 AND "C"=1 AND "D"=1))

REMARK: Interesting is the E-Rows value of 62 instead of 63 before, but I haven’t checked the reason for that yet.


11:07:46 demo@GEN12201> exec dbms_spd.flush_sql_plan_directive;
11:07:46 demo@GEN12201> select directive_id,type,state,reason,notes,created,last_modified,last_used from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO') order by created;
 
DIRECTIVE_ID TYPE STATE REASON
----------------------- ----------------------- ---------- ------------------------------------
NOTES CREATED
------------------------------------------------------------------------------------------ --------
LAST_MOD LAST_USE
-------- --------
18321513813433659475 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE
<spd_note> 10:58:35
<internal_state>HAS_STATS</internal_state>
<redundant>NO</redundant>
<spd_text>{EC(DEMO.DEMO_TABLE)[A, B, C, D]}</spd_text>
</spd_note>
 
14490288466401772154 DYNAMIC_SAMPLING_RESULT USABLE VERIFY CARDINALITY ESTIMATE
<spd_note> 10:59:08
<internal_state>NEW</internal_state>
<redundant>NO</redundant>
<spd_text>{(DEMO.DEMO_TABLE, num_rows=1000) - (SQL_ID:7fp7c6kcgmzux, T.CARD=500[-2 -2])}
</spd_text>
</spd_note>

So in summary there are several changes in 12.2. compared to 12.1.:

1.) Statistics feedback won’t mark the cursor for reoptimization after the first execution. It needs 2 executions with a cardinality misestimate to mark the cursor IS_REOPTIMIZABLE=Y
2.) If the cursor is marked IS_REOPTIMIZABLE=Y it won’t be parsed again with additional information from cardinality feedback. In 12.2. only a SQL Plan Directive is being created.
3.) When executing a query which can use a SQL Plan Directive the first time, a new SQL Plan Directive of type DYNAMIC_SAMPLING_RESULT will be created. Future queries can use that result.
4.) Dropping automatically created extended stats won’t change the state of the SPD from HAS_STATS to PERMANENT with next execution of a query (and flushing the SPD).

The bottom line is that even if you enable SPDs in 12.2 like in 12.1 (with optimizer_adaptive_statistics = TRUE and DBMS_STATS.SET_PARAM (‘AUTO_STAT_EXTENSIONS’,’ON’)), the behavior in 12.2 is different. I would describe the change of using statistics feedback only on the 2nd execution of a query to then create an SPD, and of not considering the feedback for the next parse, as a more conservative approach towards more stable plans, being less “adaptive” than in 12.1.

 

Cet article SQL Plan Directives in 12cR2. Behavior is different than in 12cR1. est apparu en premier sur Blog dbi services.

Documentum story – Add the Java urandom inside the wlst.sh script to improve the performance of all WLST sessions

Tue, 2016-12-20 04:13

WebLogic Server takes a long time (e.g. over 15 minutes) to start up. Can the performance be improved?
Using the /dev/urandom option in the WebLogic Java Virtual Machine parameters of the startup script can be the solution.
In fact, /dev/random is a blocking device: during times of low entropy (when there are not enough random bits left in it), /dev/random will block any process reading from it (and hence hang) until more random bits have been generated. /dev/urandom, in contrast, never blocks, which is why we point the JVM to it.
So, if you are running some custom WLST scripts to start up your WebLogic infrastructure, you should know that there are a few things you could try to improve performance.

This change improves the performance of all your WLST sessions. There is no functional impact in modifying the wlst.sh script.

1- Login to the Weblogic Server.
2- Use the Unix (installation owner) account:
3- Go to the following directory using:

cd $ORACLE_HOME/oracle_common/common/bin

4- Edit the wlst.sh script with vi editor
5- Add the following option in JVM_ARGS

 JVM_ARGS="-Dprod.props.file='${WL_HOME}'/.product.properties ${WLST_PROPERTIES} ${JVM_D64} ${UTILS_MEM_ARGS} ${COMMON_JVM_ARGS} ${CONFIG_JVM_ARGS} -Djava.security.egd=file:///dev/./urandom" 

 
 
6- Please note that if you specifically want to reduce the startup time of your scripts, you can skip WLS module scanning by running:

java weblogic.WLST -skipWLSModuleScanning <script>.py 

It can be an advantage in some cases.

Other ways to increase the performance of your WLS:

1- This blog deals especially with the urandom parameter.
Be aware that there are other ways to tune your WebLogic Server(s).
Just to give you an overview of possible changes:
Set Java parameters in the WebLogic Server start script or directly in JAVA_OPTS:
• urandom (see above)
• java heap size :
For higher performance throughput, set (if possible) the minimum java heap size equal to the maximum heap size. For example:

"$JAVA_HOME/bin/java" -server –Xms512m –Xmx512m -classpath $CLASSPATH ... 

 
2- Think about the number of Work Managers you really need:
the default Work Manager might not fit every application's needs.
In that case, you will have to create a custom Work Manager, as sketched below.
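As an illustration only, a custom Work Manager with a maximum threads constraint could be created with a WLST sketch along these lines (credentials, domain name, server name and object names are assumptions, not part of the original procedure):

# WLST sketch: create a Work Manager plus a max threads constraint and target both
connect('weblogic', '<password>', 't3://localhost:7001')
edit()
startEdit()
cd('/SelfTuning/mydomain')
wm = cmo.createWorkManager('MyAppWorkManager')
mtc = cmo.createMaxThreadsConstraint('MyAppMaxThreads')
mtc.setCount(16)
wm.setMaxThreadsConstraint(mtc)
srv = getMBean('/Servers/MyManagedServer')
wm.addTarget(srv)
mtc.addTarget(srv)
save()
activate()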

3- Remember to tune the stuck thread detection properly.
WLS detects when a thread becomes “stuck”, but you have to know that a stuck thread can neither finish its current work nor accept new work.
So it directly and quickly impacts the performance of the machine.
 
4- Don't forget the basics: operating system tuning and/or network tuning.

 

Cet article Documentum story – Add the Java urandom inside the wlst.sh script to improve the performance of all WLST sessions est apparu en premier sur Blog dbi services.

Oracle 12cR2 – DataGuard and TCP.VALID_NODE_CHECKING

Tue, 2016-12-20 03:58

Security is quite an important topic, especially when running Oracle in the cloud. To make your DataGuard setup a little bit more secure, you can enable the TCP.VALIDNODE_CHECKING feature. I call it the SQL*Net Firewall.

Enabling the SQL*Net Firewall is quite straightforward. There are only 3 sqlnet.ora parameters involved:

  • TCP.VALIDNODE_CHECKING
  • TCP.INVITED_NODES
  • TCP.EXCLUDED_NODES

The parameter TCP.VALIDNODE_CHECKING turns this feature on or off. If enabled, then the incoming connections are allowed only if they originate from a node that conforms to the list specified by TCP.INVITED_NODES or TCP.EXCLUDED_NODES parameters.

The parameter TCP.INVITED_NODES lists all clients that are allowed to access the database.

The parameter TCP.EXCLUDED_NODES specifies which clients are not allowed to access the database. In fact, they cannot even reach the listener.

Some important notes before you start configuring your valid node checking
  • All host names in TCP.INVITED_NODES or TCP.EXCLUDED_NODES must be resolvable or the listener will not start. In case a host name is not resolvable, the start of the listener fails with the error message: TNS-00584: Valid node checking configuration error. In the listener trace file, you can identify exactly which host name is causing the issue. In my case it is the host dbidg04 which is not resolvable. Enabling listener tracing is very useful, especially if you have a long list of names and IP addresses (see the sketch after this list for the relevant listener.ora parameters).
  [20-DEC-2016 08:18:35:894] nlvlloadp: Adding host dbidg04
  [20-DEC-2016 08:18:35:894] snlinGetAddrInfo: entry
  [20-DEC-2016 08:18:35:937] snlinGetAddrInfo: getaddrinfo() failed with error -2
  [20-DEC-2016 08:18:35:938] snlinGetAddrInfo: exit
  [20-DEC-2016 08:18:35:938] nlvlloadp: Error 7001 while resolving host 17712412
  [20-DEC-2016 08:18:35:938] nlvlloadp: exit
  [20-DEC-2016 08:18:35:938] ntvllt: Problem in Loading tcp.invited_nodes
  • Any change to the values of TCP.VALIDNODE_CHECKING, TCP.INVITED_NODES or TCP.EXCLUDED_NODES requires the listener to be stopped and started. Just reloading the listener can be very dangerous in some situations. See also Bug 22194469 : TCP.INVITED_NODES IS DISABLED AFTER RELOADING LISTENER
  • The TCP.INVITED_NODES list takes precedence over TCP.EXCLUDED_NODES if both lists are used. E.g. if the following entries are present, the host 192.168.56.203 is still allowed to connect.
  TCP.INVITED_NODES  = (127.0.0.1, 192.168.56.201, 192.168.56.202, 192.168.56.203)
  TCP.EXCLUDED_NODES = (192.168.56.203)
  • All entries in TCP.INVITED_NODES or TCP.EXCLUDED_NODES have to be on one line. In case the entries are not on one line, the listener will not start and you will receive the following error: TNS-00583: Valid node checking: unable to parse configuration parameters. E.g. the following list is invalid:
  TCP.INVITED_NODES = (127.0.0.1,
  192.168.56.201,
  192.168.56.202)
  • Adding empty placeholder strings like (comma, space, comma) is possible; however, I would not recommend it.
TCP.INVITED_NODES = (127.0.0.1, 192.168.56.201, 192.168.56.202, , ,)
  • Entries can be added via Net Manager as well (Local -> Profile -> General -> Access Rights). HINT: if you have some special formatting or comments in your sqlnet.ora, netmgr might reorganize your file, so better make a copy beforehand. ;-)
  • In case you work with SCAN and the GRID Infrastructure GRID_HOME/network/admin/sqlnet.ora file, then don’t forget to add your NODE VIP’s and SCAN VIP’s to your TCP.INVITED_NODES list
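Regarding the listener tracing and logging mentioned in the first note above, a minimal listener.ora sketch could look like this (the listener name LISTENER and the directories are assumptions; with ADR enabled, the log and trace locations are managed automatically):

# listener.ora
LOGGING_LISTENER = ON
TRACE_LEVEL_LISTENER = ADMIN
# explicit locations only apply when ADR is disabled (DIAG_ADR_ENABLED_LISTENER = OFF)
LOG_DIRECTORY_LISTENER = /u01/app/oracle/network/log
TRACE_DIRECTORY_LISTENER = /u01/app/oracle/network/trace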

By knowing all that, we can start configuring our SQL*Net Firewall for our DataGuard environment. In a minimum configuration, we need to add the localhost, all hosts involved in the DataGuard configuration (Primary, Standby 1, standby 2, observer and so on) and of course the application server. Right now I do have my primary (192.168.56.201), my standby (192.168.56.202) and the localhost in the invited nodes list. The application server 192.168.56.203 is missing.

TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (127.0.0.1, 192.168.56.201, 192.168.56.202)

If I try to connect now from the application server, I do get the following error: TNS-12547: TNS:lost contact. As you can see, not even a tnsping is possible.

oracle@dbidg03:/home/oracle/ [DBIT122] tnsping DBIT122_SITE1

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 20-DEC-2016 09:11:51

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = dbidg01)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = DBIT122_SITE1_DGMGRL)))
TNS-12547: TNS:lost contact

A sqlplus connection gives you the same error:

oracle@dbidg03:/home/oracle/ [DBIT122] sqlplus system@DBIT122_SITE1

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 20 09:16:00 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password:
ERROR:
ORA-12547: TNS:lost contact

The TNS: lost contact can mean anything. Turning on listener logging gives you a much clearer error message saying that an incoming connect from 192.168.56.203 is rejected.

Incoming connection from 192.168.56.203 rejected
20-DEC-2016 09:11:51 * 12546
TNS-12546: TNS:permission denied
 TNS-12560: TNS:protocol adapter error
  TNS-00516: Permission denied

Through listener logging, you can monitor all incoming connections that are rejected. That's quite cool from my point of view. Now … let's add the application server (192.168.56.203) to the list and restart the listener (a stop/start, not a reload, as shown below). Don't forget to do it on the Primary and the Standby.

TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODES = (127.0.0.1, 192.168.56.201, 192.168.56.202, 192.168.56.203)
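The restart itself is a plain stop/start of the listener:

lsnrctl stop
lsnrctl start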

Now my application can tnsping and connect.

oracle@dbidg03:/home/oracle/ [DBIT122] tnsping DBIT122_SITE1

TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 20-DEC-2016 09:21:01

Copyright (c) 1997, 2016, Oracle.  All rights reserved.

Used parameter files:
/u01/app/oracle/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = dbidg01)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = DBIT122_SITE1_DGMGRL)))
OK (0 msec)
oracle@dbidg03:/home/oracle/ [DBIT122]
oracle@dbidg03:/home/oracle/ [DBIT122]
oracle@dbidg03:/home/oracle/ [DBIT122] sqlplus system@DBIT122_SITE1

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 20 09:21:10 2016

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password:
Last Successful login time: Mon Dec 19 2016 10:44:52 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>

 

Conclusion

Setting up valid node checking is quite easy, and the listener log gives you clear error messages if there are any rejected hosts. Make sure that you have a DataGuard test environment where you can test this feature, including switchover, failover and so on. And, very importantly, verify your name resolution before you add any host name to the invited node list. The only drawback I see is that the listener has to be stopped and started, meaning that new connections are not possible for a very short period of time.


This article, Oracle 12cR2 – DataGuard and TCP.VALID_NODE_CHECKING, first appeared on Blog dbi services.

Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG

Tue, 2016-12-20 02:00

Ref: EMC article number 000335987.
The last publication date is Sat Feb 20 21:39:14 GMT 2016. Here is the link: https://support.emc.com/kb/335987

After upgrading from 6.7.x to 7.2, the following warning message is logged in the JMS log files: com.documentum.fc.common.DfNewInterprocessLockImpl – [DFC_FILE_LOCK_ACQUIRE_WARN] Failed to acquire lock proceeding ahead with no lock java.nio.channels.OverlappingFileLockException at sun.nio.ch.SharedFileLockTable.checkList(FileLockTable.java:255)

In order to avoid this warning, EMC has provided a solution (SR #69856498) that will be described below:

By default, the ACS and ServerApps dfc.properties point to $DOCUMENTUM_SHARED/config/dfc.properties.

The fix consists in adding a separate ‘dfc.data.dir’ cache folder location to the ACS and ServerApps dfc.properties.
After a Java Method Server restart, separate cache folders are created inside $DOCUMENTUM_SHARED/jboss7.1.1/server and the WARNING messages are gone from acs.log.

In fact, this is just a warning that someone else has acquired a lock on the physical file (in this case dfc.keystore). Since ServerApps (Method Server) and ACS invoke DFC simultaneously, both try to acquire a lock on the dfc.keystore file and Java throws an OverlappingFileLockException. DFC then warns that it could not lock the file and proceeds without the lock. Ideally this should be just an info message in this case, where the file lock is acquired for read-only access. But the same logic is used by other functionality like registry updates and BOF cache updates, where this failure should be treated as a genuine warning or error. Going forward, engineering will have to correct this code by taking appropriate actions for each functionality. There is no functional impact in using a different data directory folder.

Please proceed as below to solve it:

  • Login to the Content Server
  • Change the current user to dmadmin (the administrator account)
  • Create some folders using:
 mkdir $DOCUMENTUM_SHARED/acs
 mkdir $DOCUMENTUM_SHARED/ServerApps
 mkdir $DOCUMENTUM_SHARED/bpm

 

  • Update all necessary dfc.properties files (with vi editor):

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/acs

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/ServerApps

===============================================================================================================================

$DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/dfc.properties

⇒ Add at the end of this file the following line:

dfc.data.dir=$DOCUMENTUM_SHARED/bpm

===============================================================================================================================
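
Before restarting, a quick grep confirms that each dfc.properties now carries its own data directory (a sketch):

grep -H "dfc.data.dir" \
  $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/acs.ear/lib/configs.jar/dfc.properties \
  $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/ServerApps.ear/APP-INF/classes/dfc.properties \
  $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/deployments/bpm.ear/APP-INF/classes/dfc.properties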

  • Verify that the recently created folders are empty using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

 

  • Restart the JMS using:
sh -c "cd $DOCUMENTUM_SHARED/jboss7.1.1/server;./stopMethodServer.sh"
sh -c "$DOCUMENTUM_SHARED/jboss7.1.1/server/startMethodServer.sh"

 

Verification
  • Verify that the recently created folders are now populated with default files and folders using:
cd $DOCUMENTUM_SHARED
ls -l acs/ ServerApps/ bpm/

The folders must no longer be empty now.

  • Disconnect from the Content Server.
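
Optionally, a quick grep confirms that the warning is gone (a sketch; the acs.log location below is an assumption and may differ in your setup):

# count occurrences after the restart; the count should stay at 0
grep -c "DFC_FILE_LOCK_ACQUIRE_WARN" $DOCUMENTUM_SHARED/jboss7.1.1/server/DctmServer_MethodServer/logs/acs.log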

 

Using this procedure, you won’t see this WARNING message anymore.
Regards,

Source : EMC article number : 000335987

 

This article, Documentum story – How to avoid “DFC_FILE_LOCK_ACQUIRE_WARN” messages in Java Method Server (jms) LOG, first appeared on Blog dbi services.

Oracle 12cR2: AWR views in multitenant

Mon, 2016-12-19 13:42

In a previous post I explained how the AWR views have changed between 12.1.0.1 and 12.1.0.2 and now in 12.2.0.1 they have changed again. This is a good illustration of multitenant object link usage.

What’s new in 12cR2 is the ability to run AWR snapshots at CDB or PDB level. I really think that it makes more sense to read an AWR report at CDB level because it’s about analysing the system (=instance) activity. But with PDBaaS I can understand the need to give a report to analyse PDB sessions, resources and statements; a quick sketch below shows how a PDB-level snapshot is taken.
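
As an illustration, a PDB-level snapshot can be taken manually like this (a sketch; dbms_workload_repository is the standard package, and awr_pdb_autoflush_enabled is the 12.2 parameter that allows automatic snapshots inside a PDB):

SQL> alter session set container=PDB1;
SQL> exec dbms_workload_repository.create_snapshot;
SQL> -- optional, only needed for automatic snapshots at PDB level:
SQL> alter system set awr_pdb_autoflush_enabled=true;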

I’ll start with the conclusion: a map of the AWR views showing which ones read from CDB-level snapshots, from PDB snapshots, or from both:

[Diagram: map of the AWR views, showing which ones read CDB-level snapshots, PDB-level snapshots, or both]
I’ll explain AWR reports in a future post. Basically, when you run awrrpt.sql from CDB$ROOT you get CDB snapshots, and when you run it from a PDB you have the choice.

In the diagram above, just follow the arrows to know whether a view reads from the PDB, the CDB, or both. You see two switches between the root and the PDB: a data link for one way and a common view for the other way. Note that all of them are metadata links, so the switch also occurs at parse time.

WRM$_

Let’s start from the table where AWR snapshots are stored:


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name='WRM$_SNAPSHOT';
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
----- ------------------------------ ----------------------- ------------------
SYS WRM$_SNAPSHOT TABLE METADATA LINK

This is a table. METADATA LINK means that the structure is the same in all containers, but data is different.

I have the following containers:

SQL> select con_id,dbid,name from v$containers;
 
CON_ID DBID NAME
---------- ---------- ------------------------------
1 904475458 CDB$ROOT
2 2066620152 PDB$SEED
3 2623271973 PDB1

From CDB$ROOT I see data for the CDB:

SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

and from PDB I see snapshots taken from PDB:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from WRM$_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

So remember: CDB$ROOT has 91 snapshots with DBID=904475458 and PDB1 has 79 snapshots with DBID=2623271973.

AWR_ROOT_ and AWR_PDB_

Views on WRM$_SNAPSHOT are referenced in DBA_DEPENDENCIES:


SQL> select owner,name,type from dba_dependencies where referenced_name='WRM$_SNAPSHOT' and type like 'VIEW';
 
OWNER NAME TYPE
----- ------------------------------ -------------------
SYS AWR_ROOT_SNAPSHOT VIEW
SYS AWR_ROOT_SYSSTAT VIEW
SYS AWR_ROOT_ACTIVE_SESS_HISTORY VIEW
SYS AWR_ROOT_ASH_SNAPSHOT VIEW
SYS AWR_PDB_SNAPSHOT VIEW
SYS AWR_PDB_ACTIVE_SESS_HISTORY VIEW
SYS AWR_PDB_ASH_SNAPSHOT VIEW

I’m interested in views that show snapshot information: AWR_ROOT_SNAPSHOT and AWR_PDB_SNAPSHOT


SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') order by 3;
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
------ ------------------------------ ----------------------- ------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM METADATA LINK
PUBLIC AWR_PDB_SNAPSHOT SYNONYM METADATA LINK
SYS AWR_ROOT_SNAPSHOT VIEW DATA LINK
SYS AWR_PDB_SNAPSHOT VIEW METADATA LINK

Besides the synonyms, we have a metadata link view AWR_PDB_SNAPSHOT and a data link view AWR_ROOT_SNAPSHOT. The DATA LINK sharing means that the view switches to CDB$ROOT when queried from a PDB. Here is the definition:


SQL> select owner,view_name,container_data,text from dba_views where view_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS AWR_ROOT_SNAPSHOT Y select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0
 
SYS AWR_PDB_SNAPSHOT N select snap_id, dbid, instance_number, startup_time,
begin_interval_time, end_interval_time,
flush_elapsed, snap_level, error_count, snap_flag, snap_timezone,
decode(con_dbid_to_id(dbid), 1, 0, con_dbid_to_id(dbid)) con_id
from WRM$_SNAPSHOT
where status = 0

Same definition. The difference is that AWR_PDB_SNAPSHOT reads from the current container, while AWR_ROOT_SNAPSHOT, being a DATA LINK, always reads from CDB$ROOT.

This is what we can see:

SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from AWR_ROOT_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

This query when run in PDB1 displays the 91 snapshots from the CDB.

SQL> select dbid,count(*) from AWR_PDB_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79

This one shows what is in the current container.

Those are the views used by the AWR report, depending on the AWR location choice. But what about the DBA_HIST_ views that we know and use from previous versions?

DBA_HIST_ and CDB_HIST_

I continue to follow the dependencies:

SQL> select owner,name,type from dba_dependencies where referenced_name in ('AWR_ROOT_SNAPSHOT','AWR_PDB_SNAPSHOT') and name like '%SNAPSHOT' order by 3;
 
OWNER NAME TYPE
------ ------------------------------ -------------------
PUBLIC AWR_ROOT_SNAPSHOT SYNONYM
PUBLIC AWR_PDB_SNAPSHOT SYNONYM
SYS DBA_HIST_SNAPSHOT VIEW
SYS CDB_HIST_SNAPSHOT VIEW
 
SQL> select owner,object_name,object_type,sharing from dba_objects where object_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER OBJECT_NAME OBJECT_TYPE SHARING
------ ------------------------------ ----------------------- ------------------
SYS DBA_HIST_SNAPSHOT VIEW METADATA LINK
SYS CDB_HIST_SNAPSHOT VIEW METADATA LINK
PUBLIC DBA_HIST_SNAPSHOT SYNONYM METADATA LINK
PUBLIC CDB_HIST_SNAPSHOT SYNONYM METADATA LINK

Here are the views I’m looking for. They are metadata links only, not data links, which means that they do not switch to CDB$ROOT.

But there’s more in the view definition:

SQL> select owner,view_name,container_data,text from dba_views where view_name in ('CDB_HIST_SNAPSHOT','DBA_HIST_SNAPSHOT');
 
OWNER VIEW_NAME C TEXT
------ ------------------------------ - --------------------------------------------------------------------------------
SYS DBA_HIST_SNAPSHOT N select "SNAP_ID","DBID","INSTANCE_NUMBER","STARTUP_TIME","BEGIN_INTERVAL_TIME","
END_INTERVAL_TIME","FLUSH_ELAPSED","SNAP_LEVEL","ERROR_COUNT","SNAP_FLAG","SNAP_
TIMEZONE","CON_ID" from AWR_ROOT_SNAPSHOT
 
SYS CDB_HIST_SNAPSHOT Y SELECT k."SNAP_ID",k."DBID",k."INSTANCE_NUMBER",k."STARTUP_TIME",k."BEGIN_INTERV
AL_TIME",k."END_INTERVAL_TIME",k."FLUSH_ELAPSED",k."SNAP_LEVEL",k."ERROR_COUNT",
k."SNAP_FLAG",k."SNAP_TIMEZONE",k."CON_ID", k.CON$NAME, k.CDB$NAME FROM CONTAINE
RS("SYS"."AWR_PDB_SNAPSHOT") k

DBA_HIST_SNAPSHOT is a simple view on AWR_ROOT_SNAPSHOT which, as we have seen above, always shows snapshots from the CDB:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91
 
SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from DBA_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
904475458 91

Then CDB_HIST_SNAPSHOT reads AWR_PDB_SNAPSHOT, which shows the current container’s snapshots. But this view is a COMMON DATA one, using the CONTAINERS() function. This means that when it is queried from CDB$ROOT by a common user, data from all open containers is retrieved:


SQL> alter session set container=CDB$ROOT;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79
904475458 91

However, from a PDB you cannot see anything else:

SQL> alter session set container=PDB1;
Session altered.
 
SQL> select dbid,count(*) from CDB_HIST_SNAPSHOT group by dbid;
 
DBID COUNT(*)
---------- ----------
2623271973 79
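
As a side note, the CONTAINERS() clause that CDB_HIST_SNAPSHOT relies on can also be used directly. A minimal sketch, run as a common user from CDB$ROOT (WRM$_SNAPSHOT used here because we know it exists in every container):

SQL> select con_id, count(*)
     from containers(sys.WRM$_SNAPSHOT)
     group by con_id;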

So what?

Multitenant adds a new dimension to the dictionary views and we must be aware of it. However, compatibility is preserved: the scripts we used to run against the DBA_HIST views should still work. Don’t forget to always join on DBID and INSTANCE_NUMBER in addition to SNAP_ID, so that your scripts keep working in RAC and across containers, as in the sketch below.
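
For example, a snapshot-to-statistics join that keeps working in RAC and across containers could look like this (a sketch against the classic DBA_HIST views):

SQL> select s.begin_interval_time, t.value
     from dba_hist_snapshot s
     join dba_hist_sysstat t
       on t.dbid=s.dbid
      and t.instance_number=s.instance_number
      and t.snap_id=s.snap_id
     where t.stat_name='user commits'
     order by s.begin_interval_time;
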
In 12.2 you can do the same for your application: use metadata links, data links, and common views for your tables. But remember to keep it simple…

 

This article, Oracle 12cR2: AWR views in multitenant, first appeared on Blog dbi services.
