
DBA Blogs

Step-by-Step Guide to January 2016 PSU Patch Apply on 12c Grid and RDBMS Homes in Linux

Pythian Group - Fri, 2016-02-12 07:52

The following step-by-step action plan is for a single-instance database stored on ASM in 12.1.0.2 on Linux (OEL 6 64-bit in this case).

Step 1: Update the OPATCH utility (ETA: 15 min)

 

For Database home:

 

$ unzip p6880880_121010_LINUX.zip -d /u01/app/oracle/product/12.1.0/db_1

$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch version

 

For Grid home:

 

$ unzip p6880880_121010_LINUX.zip -d /u01/app/oracle/12.1.0.2/grid

$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch version

Step 2: Create ocm.rsp file (ETA: 5 min)

 

Note: press the Enter/Return key without providing any input, then say Yes.

 

$ export ORACLE_HOME=/u01/app/oracle/12.1.0.2/grid

$ $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /stage/ocm.rsp

Step 3: Validation of Oracle Inventory (ETA: 5 min)

 

Before beginning the patch application, check the consistency of the inventory information for the GI home and each database home to be patched. Run the following command as the respective Oracle home owner to check the consistency.

 

For database home:

 

$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/db_1

 

For Grid home:

 

$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/12.1.0.2/grid

 

If this command succeeds, it lists the Oracle components that are installed in the home. Save the output so you have a record of the status prior to the patch apply.

Step 4: Stage the Patch (ETA: 5 min)

 

$ mkdir /stage/PSUpatch

$ cp /stage/p22191349_121020_Linux-x86-64.zip /stage/PSUpatch

 

Check that the directory contains only the patch zip.

$ cd /stage/PSUpatch

$ ls

 

Unzip the patch as the grid home owner.

 

$ unzip p22191349_121020_Linux-x86-64.zip

Step 5: One-off Patch Conflict Detection and Resolution (ETA: 10 min)

 

Run it as the root user:

 

/u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp

 

It will ask you to roll back identical patches, like this:

 

Analyzing patch(es) on "/u01/app/oracle/12.1.0.2/grid" ...

Patch "/stage/PSUpatch/22191349/21436941" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.

Patch "/stage/PSUpatch/22191349/21948341" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.

Patch "/stage/PSUpatch/22191349/21948344" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.

Patch "/stage/PSUpatch/22191349/21948354" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.

 

So first roll back the above four patches, as the grid home owner, from the grid home:

 

opatch rollback -id 21948354 -local -oh /u01/app/oracle/12.1.0.2/grid (Repeat for all 4 patches)
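For instance, a minimal rollback loop over the four conflicting patch IDs reported above (a sketch; -silent skips OPatch's interactive confirmation, and the paths assume the grid home used throughout this post):

$ export GRID_HOME=/u01/app/oracle/12.1.0.2/grid
$ for p in 21948354 21948344 21948341 21436941; do
>   $GRID_HOME/OPatch/opatch rollback -id $p -local -silent -oh $GRID_HOME
> done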

 

Note: in some cases, oddly, I had to shut down the HAS services as the root user before the patch rollback, using:

 

/u01/app/oracle/12.1.0.2/grid/bin/crsctl stop has -f

 

After this, run the analyze command again:

 

/u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp

 

If the analyze command fails, then run this as the root user:

 

$ORA_GI_HOME/crs/install/roothas.pl -postpatch

 

It will start the HAS services too.
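Before re-running the analysis you can confirm that the HAS stack is back up (a quick check; CRS-4638 is the expected message when it is online):

$ /u01/app/oracle/12.1.0.2/grid/bin/crsctl check has
CRS-4638: Oracle High Availability Services is online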

 

Then run the analyze command again as given above. It will show something like:

 

Analyzing patch(es) on "/u01/app/oracle/12.1.0.2/grid" ...

Patch "/stage/PSUpatch/22191349/21436941" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.

Patch "/stage/PSUpatch/22191349/21948341" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.

Patch "/stage/PSUpatch/22191349/21948344" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.

Patch "/stage/PSUpatch/22191349/21948354" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.

 

Now you are good to apply the patch. Proceed to the next step.

 

 

 

Step 6: Apply the Patch (ETA: 60 min)

Note: this should apply the patch in both the GI and RDBMS homes, but it is unreliable in that respect, so after it completes we need to check opatch lsinventory to make sure the patches were also applied in the RDBMS home.

 

As the root user, execute the following command:

 

# /u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -ocmrf /stage/ocm.rsp

 

If it doesn't apply in the RDBMS home, then run:

 

/u01/app/oracle/product/12.1.0/db_1/OPatch/opatchauto apply /stage/PSUpatch/22191349 -oh /u01/app/oracle/product/12.1.0/db_1 -ocmrf /stage/ocm.rsp

 

Make sure the above applies both the OCW and PSU patches; you can verify that from opatch lsinventory. If only the OCW patch is present in the output and the PSU is missing (which is likely the case), then shut down the database and issue the following as the oracle database owner from the Oracle home:

 

/u01/app/oracle/product/12.1.0/db_1/OPatch/opatch apply -oh /u01/app/oracle/product/12.1.0/db_1 -local /stage/PSUpatch/22191349/21948354

Step 7: Loading Modified SQL Files into the Database (ETA: 60 min)

 

% sqlplus /nolog

SQL> Connect / as sysdba

SQL> startup

SQL> quit

% cd $ORACLE_HOME/OPatch

% ./datapatch -verbose

Step 8: Check for the list of patches applied to the database (ETA: 5 min)

 

SQL> select action_time, patch_id, patch_uid, version, status, bundle_series, description from dba_registry_sqlpatch;
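As a final cross-check at the binary level, you can grep the inventories of both homes for the constituent patch IDs of 22191349 (a sketch using the paths from this post; which sub-patches appear in which home depends on the bundle's composition):

$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch lsinventory -oh /u01/app/oracle/12.1.0.2/grid | grep -E "21436941|21948341|21948344|21948354"
$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch lsinventory -oh /u01/app/oracle/product/12.1.0/db_1 | grep -E "21436941|21948341|21948344|21948354"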

Categories: DBA Blogs

New Round of Let’s Talk Database Events

Richard Foote - Thu, 2016-02-11 18:45
I’ll be presenting a new round of “Let’s Talk Database” events around Australia and NZ next month. These are free events but have often “sold out” in the past so booking early is recommended to avoid disappointment. All events run between 9:00am – 12:30pm and are followed by a networking lunch. Currently, the confirmed events are: Sydney – Tuesday, […]
Categories: DBA Blogs

Log Buffer #460: A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2016-02-11 14:34

This Log Buffer Edition covers blog posts from Oracle, SQL Server and MySQL for this week.

Oracle:

APEX shuttle item with one direction

Wondering about which tasks and work products are essential for your project?

Using Spark(Scala) and Oracle Big Data Lite VM for Barcode & QR Detection

Cloning 10.2.0.3 Oracle Home on fully patched 11.31 HP-UX hangs

An UNDO in a PDB in Oracle 12c?

SQL Server:

SQL Azure Performance Benchmarking

Monitoring In-Memory OLTP: What’s Important?

Find and Remove Duplicate Records SQL Server

A Database to Diagram For

Getting started with R scripts and R visuals in Power BI Desktop

MySQL:

MySQL Support People – Percona Support

How to Install Redmine 3 with Nginx on Ubuntu 15.10

The magical abandoned .ibd files after crash recovery with MySQL 5.7.10

How To Speed Up MySQL Restart (hint: just like before, but this time for real)

OmniSQL – Massively Parallel Query Execution

Categories: DBA Blogs

Oracle RAC on Azure

Pythian Group - Thu, 2016-02-11 14:31

Microsoft Azure provides an acceptable and affordable platform for a training environment. I am an Oracle DBA, and use it to test functionality, new technologies, and features of different Oracle products. Azure supplies a template for Oracle Linux that can be used to run a single database, but when we try to create an Oracle RAC, we hit two major issues.

First, the Azure virtual network doesn't support multicast and, as a result, cannot be used for the interconnect. The second issue is shared storage. Azure provides shared file storage that you can access using the SMB 2 protocol, but it isn't exactly what we need for RAC. How can we solve or work around those problems? I will share my experience and show how I set up a RAC on Azure.

For a two-node RAC we first need to create at least two virtual machines for the cluster nodes. I've chosen Oracle Linux 6.4 from the Azure Marketplace. I decided to create the machines with two network interfaces, where one will be used for the public network and the other for the private interconnect. Here is my blog post on how to create a VM with two network interfaces. It may not be strictly necessary, since you can fork a virtual interface out of your only public network, but I decided to go this way and create cluster nodes with two interfaces.

Here is output for the network from the first node:

[root@oradb5 network-scripts]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0D:3A:11:A3:71
inet addr:10.0.1.11 Bcast:10.0.1.255 Mask:255.255.254.0
inet6 addr: fe80::20d:3aff:fe11:a371/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:776 errors:0 dropped:0 overruns:0 frame:0
TX packets:789 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:96068 (93.8 KiB) TX bytes:127715 (124.7 KiB)

eth1 Link encap:Ethernet HWaddr 00:0D:3A:11:AC:92
inet addr:10.0.2.11 Bcast:10.0.3.255 Mask:255.255.254.0
inet6 addr: fe80::20d:3aff:fe11:ac92/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:722 (722.0 b) TX bytes:1166 (1.1 KiB)

We need to install the oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64 rpm. It will install all required packages and set up the kernel parameters and limits for the oracle user on our boxes:

yum install oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64

The next step is to enable multicast support on the network for the interconnect. You can read how to enable multicast support in my other blog post. As a result you get a network interface, edge0, which can now be used for our private network. Here is the ifconfig output after creating the virtual interface with multicast support:

[root@oradb5 ~]# ifconfig
edge0 Link encap:Ethernet HWaddr 9E:1A:D8:0B:94:EF
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::9c1a:d8ff:fe0b:94ef/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:238 (238.0 b)

eth0 Link encap:Ethernet HWaddr 00:0D:3A:11:A3:71
inet addr:10.0.1.11 Bcast:10.0.1.255 Mask:255.255.254.0
inet6 addr: fe80::20d:3aff:fe11:a371/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:118729 errors:0 dropped:0 overruns:0 frame:0
TX packets:62523 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:143705142 (137.0 MiB) TX bytes:20407664 (19.4 MiB)

eth1 Link encap:Ethernet HWaddr 00:0D:3A:11:AC:92
inet addr:10.0.2.11 Bcast:10.0.3.255 Mask:255.255.254.0
inet6 addr: fe80::20d:3aff:fe11:ac92/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9 errors:0 dropped:0 overruns:0 frame:0
TX packets:271 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1274 (1.2 KiB) TX bytes:43367 (42.3 KiB)

I've used the multicast test tool from the Oracle Support document:

Grid Infrastructure Startup During Patching, Install or Upgrade May Fail Due to Multicasting Requirement (Doc ID 1212703.1)

The check was successful:

[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i edge0
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 16:22:12 | Multicast Succeeded for edge0 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 16:22:13 | Multicast Succeeded for edge0 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

So, we have overcome the first obstacle and now need shared storage for our RAC. We have at least a couple of options here, and I believe somebody can advise us on others. We can use NFS-based shared storage, or we can use iSCSI. You may choose something from the Azure Marketplace like SoftNAS or StoneFly, or you may decide to create your own solution. In my case I just fired up another Oracle Linux VM, added a couple of storage disks to it using the portal, and then set up an NFS server on that machine. Here is the high-level description:
We create a Linux-based VM on Azure using the Oracle Linux 6.4 template from the Marketplace. The size will be dictated by your requirements. I called the machine oradata.
I've added a 20 GB disk to the oradata machine through the Azure portal, and created a partition and filesystem on it:

[root@oradata ~]# fdisk -l
[root@oradata ~]# fdisk /dev/sdc
[root@oradata ~]# mkfs.ext4 /dev/sdc1
[root@oradata ~]# mkdir /share
[root@oradata ~]# mkdir /share/oradata1
[root@oradata ~]# e2label /dev/sdc1 sharedoradata1
[root@oradata ~]# vi /etc/fstab
[root@oradata ~]# mount -a
[root@oradata ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 7.4G 1.4G 5.7G 19% /
tmpfs 1.7G 0 1.7G 0% /dev/shm
/dev/sda1 485M 50M 410M 11% /boot
/dev/sda2 2.0G 67M 1.9G 4% /tmp
/dev/sdc1 20G 4.2G 15G 23% /share/oradata1
/dev/sdb1 60G 180M 56G 1% /mnt/resource
[root@oradata ~]#
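The /etc/fstab entry edited above isn't shown in the listing; a minimal sketch using the label created with e2label (the mount options here are assumptions, adjust to taste) could be:

LABEL=sharedoradata1  /share/oradata1  ext4  defaults  0 0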

Installed necessary utilities using yum:
[root@oradata ~]# yum install nfs-utils
Configured NFS server on the box:

[root@oradata ~]# chkconfig nfs on
[root@oradata ~]# vi /etc/exports
[root@oradata ~]# cat /etc/exports
/share/oradata1 10.0.0.0/23(rw,sync,no_root_squash)
[root@oradata ~]# service nfs restart
[root@oradata ~]# showmount -e
Export list for oradata:
/share/oradata1 10.0.0.0/23

Configure or stop the firewall (you may need to do this on your cluster nodes as well):

[root@oradata ~]# service iptables stop
iptables: Flushing firewall rules: [ OK ]
iptables: Setting chains to policy ACCEPT: filter [ OK ]
iptables: Unloading modules: [ OK ]
[root@oradata ~]# chkconfig iptables off
[root@oradata ~]#

On your cluster nodes you need to add the mount point for your shared storage to /etc/fstab and mount it.

[root@oradb5 ~]# vi /etc/fstab
[root@oradb5 ~]# cat /etc/fstab | grep nfs
oradata:/share/oradata1 /u02/oradata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,actimeo=0,vers=3,timeo=600 0 0
[root@oradb5 ~]# mount -a
[root@oradb5 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 7.4G 2.5G 4.6G 36% /
tmpfs 3.5G 0 3.5G 0% /dev/shm
/dev/sda1 485M 69M 391M 15% /boot
/dev/sda2 2.0G 86M 1.8G 5% /tmp
/dev/sdc1 60G 12G 45G 21% /u01/app
/dev/sdb1 281G 191M 267G 1% /mnt/resource
oradata:/share/oradata1
20G 4.2G 15G 23% /u02/oradata
[root@oradb5 ~]# mount | grep /u02/oradata | grep -v grep
oradata:/share/oradata1 on /u02/oradata type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,actimeo=0,vers=3,timeo=600,addr=10.0.1.101)
[root@oradb5 ~]#

Now we have the required storage for the OCR and voting disks, and networks for public and interconnect traffic, and we can install our cluster.
We need to correct the /etc/hosts file on both nodes (you may choose to use the Azure DNS service instead).

[oracle@oradb5 ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.1.11 oradb5
10.0.1.12 oradb6
10.0.1.15 oradb5-vip
10.0.1.16 oradb6-vip
10.0.1.19 oradb-clst-scan
192.168.1.1 oradb5-priv
192.168.1.2 oradb6-priv
10.0.1.101 oradata
[oracle@oradb5 ~]$

You can see I set up the public, VIP and SCAN addresses in the hosts file. Of course this is not acceptable for any production implementation, or if you want to have more than one SCAN address. As I've already mentioned above, you can use DNS for a proper installation.
We copy the required software to one of the nodes, unpack it, and create a response file for the installation like this:

[oracle@oradb5 ~]$ cat grid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.1.0
ORACLE_HOSTNAME=oradb5
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/12.1.0/grid
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=dba
oracle.install.crs.config.gpnp.scanName=oradb-clst-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterType=STANDARD
oracle.install.crs.config.clusterName=oradb-clst
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.gpnp.gnsClientDataFile=
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.clusterNodes=oradb5:oradb5-vip,oradb6:oradb6-vip
oracle.install.crs.config.networkInterfaceList=eth0:10.0.0.0:1,eth1:10.0.2.0:3,edge0:192.168.1.0:2
oracle.install.crs.config.storageOption=FILE_SYSTEM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=/u02/oradata/voting/vdsk1,/u02/oradata/voting/vdsk2,/u02/oradata/voting/vdsk3
oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=NORMAL
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=/u02/oradata/ocr/ocrf1
oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=EXTERNAL
oracle.install.crs.config.useIPMI=false
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.SYSASMPassword=
oracle.install.asm.diskGroup.name=
oracle.install.asm.diskGroup.redundancy=
oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.diskGroup.disks=
oracle.install.asm.diskGroup.diskDiscoveryString=
oracle.install.asm.monitorPassword=
oracle.install.asm.ClientDataFile=
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsHost=
oracle.install.config.omsPort=0
oracle.install.config.emAdminUser=
oracle.install.config.emAdminPassword=

The file can be used for a silent installation; you may choose instead to run runInstaller in GUI mode.
To run the installation in silent mode you just need to go to the unpacked software directory and run:

[oracle@oradb5 grid]$ ./runInstaller -silent -responseFile /home/oracle/grid.rsp -ignoreSysPrereqs -ignorePrereq
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 415 MB. Actual 1350 MB Passed
Checking swap space: 0 MB available, 150 MB required. Failed <<<<
>>> Ignoring required pre-requisite failures. Continuing…

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-02-01_09-41-01AM. Please wait …

You've of course noticed that I ran the installation ignoring prerequisite checks. As a matter of fact, I first ran it without ignoring them, reviewed the failing checks, made the necessary adjustments for those I decided were important, and left the others as they were. As an example, my /etc/resolv.conf file was different due to settings on the DHCP server, and so on. I advise applying common sense and your own knowledge to decide which checks are important for you and which can be ignored.
The installation will complete, and all you need to run is a couple of scripts to finish the configuration.

As a root user, execute the following script(s):
1. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[oradb5, oradb6]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.

Successfully Setup Software.
As install user, execute the following script to complete the configuration.
1. /u01/app/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=

Note:
1. This script must be run on the same host from where installer was run.
2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

We run root.sh on each node, one by one, as the root user, and execute the configToolAllCommands script as the oracle user on the node from which we ran the installation. The response file is required if we specified passwords for ASM, ASM monitoring or DBCA. Here is an example of the file contents:

oracle.assistants.server|S_SYSPASSWORD=welcome1
oracle.assistants.server|S_SYSTEMPASSWORD=welcome1
oracle.assistants.server|S_DBSNMPPASSWORD=welcome1
oracle.assistants.server|S_PDBADMINPASSWORD=welcome1
oracle.assistants.server|S_EMADMINPASSWORD=welcome1
oracle.assistants.server|S_ASMSNMPPASSWORD=welcome1

Change the file's permissions to 600 before running the script:

[oracle@oradb5 grid]$ vi /home/oracle/cfgrsp.properties
[oracle@oradb5 grid]$ chmod 600 /home/oracle/cfgrsp.properties

We don't have any ASM in our installation, or a BMS console, but I will leave the file nevertheless, just for reference.
Here is the output of what we ran on our system:

[root@oradb5 ~]# /u01/app/12.1.0/grid/root.sh
Check /u01/app/12.1.0/grid/install/root_oradb5_2016-02-01_10-21-07.log for the output of root script
....

[root@oradb6 ~]# /u01/app/12.1.0/grid/root.sh
Check /u01/app/12.1.0/grid/install/root_oradb6_2016-02-01_10-38-50.log for the output of root script
….

[oracle@oradb5 grid]$ /u01/app/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/home/oracle/cfgrsp.properties
Setting the invPtrLoc to /u01/app/12.1.0/grid/oraInst.loc

perform – mode is starting for action: configure
….

Keep in mind that configToolAllCommands should also create the management database in your cluster. If it failed for some reason, you can try to recreate it using dbca in silent mode, like this:

/u01/app/12.1.0/grid/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType FS -datafileDestination /u02/oradata/ocr/oradb-clst/mgmtdb -datafileJarLocation /u01/app/12.1.0/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck -oui_internal

The RAC is created, and now it can be used for application high availability or for database tests. You may install the database software on the RAC using either the GUI installer or silent mode, but don't forget to specify the cluster nodes during installation. I would also like to mention that I would not recommend running this as a production system, but it is quite suitable for tests, or for experiments if you want to verify or troubleshoot some RAC-specific features.

Categories: DBA Blogs

AUDIT DDLS in database with trigger

Learn DB Concepts with me... - Wed, 2016-02-10 21:37


-- Simple trigger to audit basic schema changes:

--- CREATE TABLE TO STORE AUDIT DATA

CREATE TABLE DDL_AUDIT_LOG
(
  STAMP DATE
, USERNAME VARCHAR2(30 BYTE)
, OSUSER VARCHAR2(30 BYTE)
, MACHINE VARCHAR2(30 BYTE)
, TERMINAL VARCHAR2(30 BYTE)
, OPERATION VARCHAR2(30 BYTE)
, OBJTYPE VARCHAR2(30 BYTE)
, OBJNAME VARCHAR2(30 BYTE)
, OBJ_OWNER VARCHAR2(30 BYTE)
) TABLESPACE USERS ;

-- NOW CREATE TRIGGER TO AUDIT CHANGES

CREATE OR REPLACE TRIGGER AUDIT_DDL_CHANGES
   AFTER CREATE OR DROP OR ALTER
      ON ATOORPU.SCHEMA  -- Change ATOORPU to your schema name!!!
      -- ON DATABASE
BEGIN
  INSERT INTO ddl_audit_log VALUES
        (SYSDATE,
         SYS_CONTEXT('USERENV', 'SESSION_USER'),
         SYS_CONTEXT('USERENV', 'OS_USER'),
         SYS_CONTEXT('USERENV', 'HOST'),
         SYS_CONTEXT('USERENV', 'TERMINAL'),
         ORA_SYSEVENT,
         ORA_DICT_OBJ_TYPE,
         ORA_DICT_OBJ_NAME,
         ORA_DICT_OBJ_OWNER
        );
END;
/
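A quick smoke test of the trigger could look like this (a sketch; the test table name is made up, the trigger must be enabled, and you must be connected as the audited schema):

SQL> CREATE TABLE trigger_test_tab (id NUMBER);
SQL> SELECT stamp, username, operation, objtype, objname FROM ddl_audit_log ORDER BY stamp;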

Sample output: [image: Sample Audit Table Output]
Categories: DBA Blogs

IMPDP SHOW=Y, sqlfile=test.sql

Learn DB Concepts with me... - Wed, 2016-02-10 21:35
Most of the legacy-mode options will work in Data Pump via their corresponding options. In legacy mode (the imp utility) we have the show=y option to spool the content of the dump file into SQL scripts without doing the actual import. The same option can be passed to Data Pump impdp as well, but it will get replaced with the sqlfile option.
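In other words, the two invocations below should be equivalent in effect; Data Pump silently rewrites the first into the second (the file and directory names follow the example later in this post):

# Legacy syntax, accepted by impdp and translated automatically:
$ impdp atoorpu dumpfile=test1.dmp show=y

# Native Data Pump syntax -- writes the DDL to a script instead of importing:
$ impdp atoorpu directory=DPUMP dumpfile=test1.dmp sqlfile=TEST1.sql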




oracle@Linux01:[/u01/app/oracle/dpump] $ expdp atoorpu directory=DPUMP dumpfile=test1.dmp logfile=test1.log tables=TEST

Export: Release 11.2.0.4.0 - Production on Wed Feb 3 15:22:00 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "ATOORPU"."SYS_EXPORT_TABLE_01":  atoorpu/******** directory=DPUMP dumpfile=test1.dmp logfile=test1.log tables=TEST
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ATOORPU"."TEST"                            5.875 KB       5 rows
Master table "ATOORPU"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ATOORPU.SYS_EXPORT_TABLE_01 is:
  /u01/app/oracle/dpump/test1.dmp
Job "ATOORPU"."SYS_EXPORT_TABLE_01" successfully completed at Wed Feb 3 15:22:13 2016 elapsed 0 00:00:07

oracle@Linux01:[/u01/app/oracle/dpump] $ ls -ll test1.dmp
-rw-r-----. 1 oracle oinstall 114688 Feb  3 15:22 test1.dmp
oracle@Linux01:[/u01/app/oracle/dpump] $ impdp atoorpu directory=DPUMP dumpfile=test1.dmp sqlfile=TEST1.sql

Import: Release 11.2.0.4.0 - Production on Wed Feb 3 15:22:35 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "ATOORPU"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "ATOORPU"."SYS_SQL_FILE_FULL_01":  atoorpu/******** directory=DPUMP dumpfile=test1.dmp sqlfile=TEST1.sql
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "ATOORPU"."SYS_SQL_FILE_FULL_01" successfully completed at Wed Feb 3 15:22:42 2016 elapsed 0 00:00:01

oracle@Linux01:[/u01/app/oracle/dpump] $ ls -ltr
-rw-rw-r--. 1 atoorpu atoorpu      18870 Feb  3 09:38 test.log
-rw-r--r--. 1 oracle  oinstall      1325 Feb  3 15:22 test1.log
-rw-r-----. 1 oracle  oinstall    114688 Feb  3 15:22 test1.dmp
-rw-r--r--. 1 oracle  oinstall      3800 Feb  3 15:22 TEST1.sql
-rw-r--r--. 1 oracle  oinstall       933 Feb  3 15:22 import.log


If you try to use SHOW=Y, you will see the message below in the output.
oracle@Linux01:[/u01/app/oracle/dpump] $ impdp atoorpu dumpfile=test1.dmp SHOW=Y

Import: Release 11.2.0.4.0 - Production on Wed Feb 3 15:17:27 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "show=TRUE" Location: Command Line, Replaced with: "sqlfile=test1.sql"
Legacy Mode has set nologfile=true parameter.


Categories: DBA Blogs

Opatchauto Session failed: Parameter validation failed

Pakistan's First Oracle Blog - Wed, 2016-02-10 20:12
While applying a PSU on the Grid home in 12c, you might have to roll back a few patches due to patch conflicts before you can apply the PSU.

After rolling back the patches from the grid home, when you try to run the opatch analyze command again, you might encounter the following error:





[root ~]# $Grid_Home/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.10
OUI Version        : 12.1.0.2.0
Running from       : $Grid_Home

opatchauto log file: $Grid_Home/cfgtoollogs/opatchauto/22191349/opatch_gianalyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.
OCM RSP file has been ignored in analyze mode.

Clusterware is either not running or not configured. You have the following 2 options:
1. Configure and start the Clusterware on this node and re-run the tool
2. Run the tool with '-oh ' to first patch the Grid Home, then invoke tool with '-database ' or '-oh ' to patch the RAC home

Parameter Validation: FAILED

Opatchauto Session failed: Parameter validation failed
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Stream closed
        at oracle.opatchauto.gi.GILogger.writeWithoutTimeStamp(GILogger.java:432)
        at oracle.opatchauto.gi.GILogger.printStackTrace(GILogger.java:447)
        at oracle.opatchauto.gi.OPatchauto.main(OPatchauto.java:97)
Caused by: java.io.IOException: Stream closed
        at java.io.BufferedWriter.ensureOpen(BufferedWriter.java:98)
        at java.io.BufferedWriter.write(BufferedWriter.java:203)
        at java.io.Writer.write(Writer.java:140)
        at oracle.opatchauto.gi.GILogger.writeWithoutTimeStamp(GILogger.java:426)
        ... 2 more

opatchauto failed with error code 1.
Then, if you try to start the HAS services, you get the following error:
[root ~]# $Grid_Home/bin/crsctl start has
CRS-6706: Oracle Clusterware Release patch level ('3749979535') does not match Software patch level ('2278979115'). Oracle Clusterware cannot be started.
CRS-4000: Command Start failed, or completed with errors.
SOLUTION:
So in order to resolve this, you need to issue the following command as the root user:

$ORA_GI_HOME/crs/install/roothas.pl -postpatch
It will start the HAS services too.
Then run the analyze command again as given above, and it will work.
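To verify that the release and software patch levels now match before retrying, the 12c crsctl query verbs can be used (a hedged sketch):

[root ~]# $Grid_Home/bin/crsctl query crs releasepatch
[root ~]# $Grid_Home/bin/crsctl query crs softwarepatch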



Categories: DBA Blogs

Partner Webcast – Oracle PaaS: Oracle Mobile Cloud Service Development

Mobile computing has experienced explosive growth in the past decade, and this is just the beginning. At the heart of any organization's digital strategy, Mobile is the primary screen and engagement...

Categories: DBA Blogs


Resolving Update-Update Conflict in Peer-to-Peer Replication

Pythian Group - Mon, 2016-02-08 15:20

Recently I received a hand-off ticket about a replication issue. The system had been configured with peer-to-peer replication.

One of the subscribers was throwing an error that read: "A conflict of type 'Update-Update' was detected at peer 4 between peer 100 (incoming), transaction id 0x000000000011a1c3 and peer 100 (on disk)". While working to resolve the issue, I noticed that it wasn't showing any records in the msrepl_errors table or the conflict_dbo_table.

Here again, the Error Logs help, as they have the complete details logged, which helps us identify the table name and the exact error with the record. If that hadn't been the case, I would have followed the replication troubleshooting method described in KB 3066750 to fix the issue.

With the information we had in hand, we reached out to the customer and resolved the issue by fixing the data manually. I would like to mention that there are two ways conflicts can be handled in P2P replication:

1) Manually fix the conflict/data issue
2) Let the winner node take precedence over the data/conflict

In P2P replication, at the time of configuration we have the option to choose which node will take precedence and be declared the winner. This is decided by way of the originator_id: the highest originator_id wins. We have to decide this carefully because, once the setup is done and an originator_id is allotted, it cannot be altered later.
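For reference, conflict detection and the continue-on-conflict behavior are controlled per publication through a system stored procedure; a hedged sketch (the publication name is hypothetical and the parameter values are illustrative only):

-- Enable P2P conflict detection and keep the distribution agent running
-- on conflict, letting the highest originator_id win (hypothetical publication).
EXEC sp_configure_peerconflictdetection
    @publication = N'MyP2PPublication',
    @action = N'enable',
    @continue_onconflict = N'true';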

Here are a few reference articles that will help you understand this topic better:

https://msdn.microsoft.com/en-IN/library/bb934199%28v=sql.105%29.aspx
https://msdn.microsoft.com/en-IN/library/ms151191%28v=sql.105%29.aspx
http://blogs.msdn.com/b/change_sql_virtual_ip_from_dhcp_to_static_ip/archive/2015/11/04/conflicts-in-peer-to-peer-replication.aspx
http://blogs.msdn.com/b/repltalk/archive/2010/02/07/repltalk-start-here.aspx

 

Categories: DBA Blogs

New ORAchk 12.1.0.2.6 beta

Pythian Group - Mon, 2016-02-08 15:17

 

Oracle recently released the new beta 12.1.0.2.6 version of the ORAchk utility. If you are an Oracle DBA and are not yet familiar with the utility, I advise you to try it out. In short, it is a proactive tool that scans your system for known issues and provides an excellent report in HTML format. In addition, you get a collection manager to manage reports for multiple databases, an upgrade-readiness check, and other features. I strongly recommend trying the utility and using it regularly.
You can download the new version of ORAchk, the Health Check Catalog, and all related support files and guides from Oracle Support (Document 1268927.2). Simply unzip ORAchk to a directory and run orachk, preferably as root, since that allows it to execute all system-wide checks. Here is an example:

[oracle@bigdatalite u01]$ mkdir orachk
[oracle@bigdatalite u01]$ cd orachk/
[oracle@bigdatalite orachk]$ unzip ../distr/orachk121026.zip
Archive: ../distr/orachk121026.zip
inflating: CollectionManager_App.sql
inflating: sample_user_defined_checks.xml
creating: .cgrep/
................
[oracle@bigdatalite orachk]$ su -
[root@bigdatalite ~]# cd /u01/orachk/
[root@bigdatalite orachk]# ./orachk

At the end you get an HTML report and a zip file with the results of all executed checks:

Detailed report (html) – /u01/orachk/orachk_bigdatalite_orcl_012816_151905/orachk_bigdatalite_orcl_012816_151905.html

UPLOAD(if required) – /u01/orachk/orachk_bigdatalite_orcl_012816_151905.zip

The report looks really good, is split into different sections, and allows you to hide or show checks based on their status.

I compared the new 12.1.0.2.6 version against 12.1.0.2.5. The execution time for the new version was 3 minutes versus 8 minutes for the old one. The new report format is much more usable; you don't need to jump back and forth, since the result for every check expands in place.
If you haven't used the utility so far, I highly recommend you download it and try it out.
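As a pointer, the upgrade-readiness check mentioned above can be run on its own; to the best of my recollection the invocation is the following, but check the user guide bundled with the download:

[root@bigdatalite orachk]# ./orachk -u -o pre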

Categories: DBA Blogs

Log Buffer #459: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2016-02-08 14:40

This Log Buffer Edition arranges a few tips and tricks from the blogs of Oracle, SQL Server and MySQL.

Oracle:

Oracle® Solaris innovation is due in part to the UNIX® standard (1), the test suites (2) and the certification (3). By conforming to the standard, using the test suites and driving to certification, Oracle® Solaris software engineers can rely on stable interfaces and an assurance that any regressions will be found quickly, given more than 50,000 test cases.

Building on the program established last year to provide evaluation copies of popular FOSS components to Solaris users, the Solaris team has announced the immediate availability of additional and newer software, ahead of official Solaris releases.

Tracing in Oracle Reports 12c.

Issues with Oracle Direct NFS.

An interesting observation came up on the Oracle-L list server a few days ago that demonstrated how clever the Oracle software is at minimising run-time work, and how easy it is to think you know what an execution plan means when you haven’t actually thought through the details – and the details might make a difference to performance.

SQL Server:

Manipulating Filetable Files Programatically

Auto-suggesting foreign keys and data model archaeology

Create/write to an Excel 2007/2010 spreadsheet from an SSIS package.

Tabular vs Multidimensional models for SQL Server Analysis Services.

The PoSh DBA – Towards the Re-usable PowerShell Script.

MySQL:

MyRocks vs InnoDB with Linkbench over 7 days.

MySQL has been able to harness the potential of more powerful (CPU) and larger (RAM, disk space) …

Setup a MongoDB replica/sharding set in seconds.

MySQL 5.7 makes secure connections easier with streamlined key generation for both MySQL Community and MySQL Enterprise, improves security by expanding support for TLSv1.1 and TLSv1.2, and helps administrators assess whether clients are connecting securely or not.

While EXPLAIN shows the selected query plan for a query, optimizer trace will show you WHY the particular plan was selected. From the trace you will be able to see what alternative plans were considered, the estimated costs of different plans, and what decisions were made during query optimization.

Categories: DBA Blogs

Oracle Big Data 2016 Implementation Boot Camp (19 Feb to 04 March)

February 2016 ...

Categories: DBA Blogs

Trace Files -- 11b : Using DBMS_SQLDIAG to trace the Optimization of an SQL Statement

Hemant K Chitale - Sun, 2016-02-07 07:46
My previous blog post covered using ALTER SYSTEM/SESSION to set tracing for a specific SQL_ID that has been determined in advance. The SQL may then be executed in the future, after the ALTER SYSTEM/SESSION.

Here is a method for an SQL that has already been executed.

SQL> select count(*) from all_objects_many_list
2 where created > sysdate-365;

COUNT(*)
----------
25548

SQL> begin
2 dbms_sqldiag.dump_trace(p_sql_id=>'b086mzzp82x7w',
3 p_component=>'Optimizer',
4 p_file_id=>'OPT_TRACE_b086mzzp82x7w');
5 end;
6 /

PL/SQL procedure successfully completed.

SQL>
SQL> select value from v$diag_info where name = 'Default Trace File';

VALUE
------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_2987_OPT_TRACE_b086mzzp82x7w.trc

SQL>


Let's review the trace file.

Registered qb: SEL$1 0x99b9000 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

SPM: statement not found in SMB
SPM: statement not a candidate for auto-capture

**************************
Automatic degree of parallelism (ADOP)
**************************
Automatic degree of parallelism is disabled: Parameter.

PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=0b1t991khf449) -----
/* SQL Analyze(148,0) */ select count(*) from all_objects_many_list
where created > sysdate-365
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
0x6be0d530 145 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be0d530 12098 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be6b738 1229 package body SYS.DBMS_SQLDIAG
0x6becaea8 2 anonymous block
*******************************************
Legend
The following abbreviations are used by optimizer trace.
CBQT - cost-based query transformation
JPPD - join predicate push-down
OJPPD - old-style (non-cost-based) JPPD
FPD - filter push-down
PM - predicate move-around
CVM - complex view merging
SPJ - select-project-join
SJC - set join conversion
SU - subquery unnesting
OBYE - order by elimination
OST - old style star transformation
ST - new (cbqt) star transformation
CNT - count(col) to count(*) transformation
JE - Join Elimination
JF - join factorization
CBY - connect by
SLP - select list pruning
DP - distinct placement
qb - query block
LB - leaf blocks
DK - distinct keys
LB/K - average number of leaf blocks per key
DB/K - average number of data blocks per key
CLUF - clustering factor
NDV - number of distinct values
Resp - response cost
Card - cardinality
Resc - resource cost
NL - nested loops (join)
SM - sort merge (join)
HA - hash (join)
CPUSPEED - CPU Speed
IOTFRSPEED - I/O transfer speed
IOSEEKTIM - I/O seek time
SREADTIM - average single block read time
MREADTIM - average multiblock read time
MBRC - average multiblock read count
MAXTHR - maximum I/O system throughput
SLAVETHR - average slave I/O throughput
dmeth - distribution method
1: no partitioning required
2: value partitioned
4: right is random (round-robin)
128: left is random (round-robin)
8: broadcast right and partition left
16: broadcast left and partition right
32: partition left using partitioning of right
64: partition right using partitioning of left
256: run the join in serial
0: invalid distribution method
sel - selectivity
ptn - partition
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
*************************************
PARAMETERS WITH ALTERED VALUES
******************************
Compilation Environment Dump
is_recur_flags = 8
Bug Fix Control Environment


*************************************
PARAMETERS WITH DEFAULT VALUES
******************************
Compilation Environment Dump
optimizer_mode_hinted = false
optimizer_features_hinted = 0.0.0
parallel_execution_enabled = true
parallel_query_forced_dop = 0
parallel_dml_forced_dop = 0
parallel_ddl_forced_degree = 0
parallel_ddl_forced_instances = 0
_query_rewrite_fudge = 90
optimizer_features_enable = 11.2.0.4
_optimizer_search_limit = 5
.... continued to a long list of parameters ........
....................................................
Bug Fix Control Environment
fix 3834770 = 1
fix 3746511 = enabled
fix 4519016 = enabled
.... continued to a long list of bug fixes ........
...................................................
***************************************
PARAMETERS IN OPT_PARAM HINT
****************************
***************************************
Column Usage Monitoring is ON: tracking level = 1
***************************************

Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for 0b1t991khf449.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
OBYE: Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE: OBYE bypassed: no order by to eliminate.
CVM: Considering view merge in query block SEL$1 (#0)
OJE: Begin: find best directive for query block SEL$1 (#0)
OJE: End: finding best directive for query block SEL$1 (#0)
CNT: Considering count(col) to count(*) on query block SEL$1 (#0)
*************************
Count(col) to Count(*) (CNT)
*************************
CNT: COUNT() to COUNT(*) not done.
query block SEL$1 (#0) unchanged
Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for 0b1t991khf449.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
SU: Considering subquery unnesting in query block SEL$1 (#0)
********************
Subquery Unnest (SU)
********************
SJC: Considering set-join conversion in query block SEL$1 (#0)
*************************
Set-Join Conversion (SJC)
*************************
SJC: not performed
PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
PM: PM bypassed: Outer query contains no views.
PM: PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
"ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
try to generate transitive predicate from check constraints for query block SEL$1 (#0)
finally: "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365

apadrv-start sqlid=398332482954924169
:
call(in-use=2008, alloc=16344), compile(in-use=56240, alloc=59416), execution(in-use=2456, alloc=4032)

*******************************************
Peeked values of the binds in SQL statement
*******************************************

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "HEMANT"."ALL_OBJECTS_MANY_LIST" "ALL_OBJECTS_MANY_LIST" WHERE "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
kkoqbc: optimizing query block SEL$1 (#0)

:
call(in-use=2056, alloc=16344), compile(in-use=57320, alloc=59416), execution(in-use=2456, alloc=4032)

kkoqbc-subheap (create addr=0x7f4409c4fb18)
****************
QUERY BLOCK TEXT
****************
select count(*) from all_objects_many_list
where created > sysdate-365
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=0 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
Using NOWORKLOAD Stats
CPUSPEEDNW: 937 millions instructions/sec (default is 100)
IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
IOSEEKTIM: 10 milliseconds (default is 10)
MBRC: NO VALUE blocks (default is 8)


And here is the actual information about how the Costing is done and Execution Plan determined.

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
#Rows: 7197952 #Blks: 98279 AvgRowLen: 93.00 ChainCnt: 0.00
Index Stats::
Index: ALL_OBJ_M_L_CRTD_NDX Col#: 7
LVLS: 2 #LB: 19093 #DK: 1232 LB/K: 15.00 DB/K: 351.00 CLUF: 432893.00
Access path analysis for ALL_OBJECTS_MANY_LIST
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]
Column (#7): CREATED(
AvgLen: 8 NDV: 1232 Nulls: 0 Density: 0.000812 Min: 2455803 Max: 2457343
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
Card: Original: 7197952.000000 Rounded: 1313133 Computed: 1313133.42 Non Adjusted: 1313133.42
Access Path: TableScan
Cost: 27174.11 Resp: 27174.11 Degree: 0
Cost_io: 26619.00 Cost_cpu: 6242311042
Resp_io: 26619.00 Resp_cpu: 6242311042
Access Path: index (index (FFS))
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 5173.00 resc_cpu: 4598699894
ix_sel: 0.000000 ix_sel_with_filters: 1.000000
Access Path: index (FFS)
Cost: 5581.95 Resp: 5581.95 Degree: 1
Cost_io: 5173.00 Cost_cpu: 4598699894
Resp_io: 5173.00 Resp_cpu: 4598699894
Access Path: index (IndexOnly)
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 3486.00 resc_cpu: 287452140
ix_sel: 0.182432 ix_sel_with_filters: 0.182432
Cost: 3511.56 Resp: 3511.56 Degree: 1
Best:: AccessPath: IndexRange
Index: ALL_OBJ_M_L_CRTD_NDX
Cost: 3511.56 Degree: 1 Resp: 3511.56 Card: 1313133.42 Bytes: 0

***************************************


OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]#0
***********************
Best so far: Table#: 0 cost: 3511.5623 card: 1313133.4203 bytes: 10505064
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Enumerating distribution method (advanced)

Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
Final cost for query block SEL$1 (#0) - All Rows Plan:
Best join order: 1
Cost: 3511.5623 Degree: 1 Card: 1313133.0000 Bytes: 10505064
Resc: 3511.5623 Resc_io: 3486.0000 Resc_cpu: 287452140
Resp: 3511.5623 Resp_io: 3486.0000 Resc_cpu: 287452140
kkoqbc-subheap (delete addr=0x7f4409c4fb18, in-use=26384, alloc=32840)
kkoqbc-end:
:
call(in-use=8664, alloc=49288), compile(in-use=59856, alloc=63560), execution(in-use=2456, alloc=4032)

kkoqbc: finish optimizing query block SEL$1 (#0)
apadrv-end
:
call(in-use=8664, alloc=49288), compile(in-use=60768, alloc=63560), execution(in-use=2456, alloc=4032)


Starting SQL statement dump

user_id=87 user_name=HEMANT module=SQL*Plus action=
sql_id=0b1t991khf449 plan_hash_value=1689651126 problem_type=3
----- Current SQL Statement for this session (sql_id=0b1t991khf449) -----
/* SQL Analyze(148,0) */ select count(*) from all_objects_many_list
where created > sysdate-365
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
0x6be0d530 145 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be0d530 12098 package body SYS.DBMS_SQLTUNE_INTERNAL
0x6be6b738 1229 package body SYS.DBMS_SQLDIAG
0x6becaea8 2 anonymous block
sql_text_length=96
sql=/* SQL Analyze(148,0) */ select count(*) from all_objects_many_list
where created > sysdate-365
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
-------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3512 | |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX RANGE SCAN | ALL_OBJ_M_L_CRTD_NDX| 1282K | 10M | 3512 | 00:00:43 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("CREATED">SYSDATE@!-365)

Content of other_xml column
===========================
db_version : 11.2.0.4
parse_schema : HEMANT
plan_hash : 1689651126
plan_hash_2 : 1742296710
Outline Data:
/*+
BEGIN_OUTLINE_DATA

*** 2016-02-07 21:29:15.838
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
DB_VERSION('11.2.0.4')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX(@"SEL$1" "ALL_OBJECTS_MANY_LIST"@"SEL$1" ("ALL_OBJECTS_MANY_LIST"."CREATED"))
END_OUTLINE_DATA
*/

Optimizer state dump:
Compilation Environment Dump
optimizer_mode_hinted = false
optimizer_features_hinted = 0.0.0
parallel_execution_enabled = true
parallel_query_forced_dop = 0
parallel_dml_forced_dop = 0
parallel_ddl_forced_degree = 0
parallel_ddl_forced_instances = 0
_query_rewrite_fudge = 90
optimizer_features_enable = 11.2.0.4
_optimizer_search_limit = 5
...... long list of optimizer parameters ...........
.....................................................
Bug Fix Control Environment
fix 3834770 = 1
fix 3746511 = enabled
fix 4519016 = enabled
...... long list of Bug Fixes .......................
.....................................................

Query Block Registry:
SEL$1 0x99b9000 (PARSER) [FINAL]

:
call(in-use=11728, alloc=49288), compile(in-use=90704, alloc=155568), execution(in-use=6408, alloc=8088)

End of Optimizer State Dump
Dumping Hints
=============
====================== END SQL Statement Dump ======================


So, this is also a supported method. The DBMS_SQLDIAG package is available from 11.2 onwards.
Unfortunately, however, DUMP_TRACE is not documented! (See the 11.2 documentation on DBMS_SQLDIAG.)

Note: If the SQL statement and/or plan have already been aged out or purged from the Shared Pool, DUMP_TRACE would, obviously, not be able to print anything.
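
For reference, here is a minimal sketch of the kind of call that produced the dump above. Hedge heavily: since DUMP_TRACE is undocumented, the parameter names below are the ones commonly reported in the field rather than a guaranteed interface, and the file tag MY_TRACE is an arbitrary label of my own.

$ sqlplus / as sysdba <<EOF
-- sql_id taken from the dump above; p_component can reportedly be
-- 'Optimizer' or 'Compiler'; MY_TRACE is just a tag for the trace file name
exec DBMS_SQLDIAG.DUMP_TRACE(p_sql_id => '0b1t991khf449', p_child_number => 0, p_component => 'Optimizer', p_file_id => 'MY_TRACE')
EOF

The resulting trace file should appear in the usual diagnostic trace directory, with the p_file_id value embedded in its name.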

Categories: DBA Blogs

SQL On The Edge #8 – SQL Server Mobile Report Publisher

Pythian Group - Fri, 2016-02-05 14:07

One of the cool things about SQL Server is that it comes bundled with all the Business Intelligence services under the core database engine license. Reporting Services (which includes the Mobile Report Publisher), Analysis Services, and Integration Services are all integrated and ready to go as soon as you install SQL Server. This has not only made it cost-efficient for many organizations to deploy BI, but has also contributed to wide adoption among the SQL Server customer base.

 

What is the Mobile Report Publisher?

Currently in preview, the Mobile Report Publisher is a new report and dashboard editor that publishes reports to Reporting Services, and it’s part of the bigger roadmap that Microsoft has for its on-premises Business Intelligence story. We all know that in the cloud, Power BI has been getting a large amount of investment, but on-premises there was a gap that kept getting wider and wider, until now.

With this upcoming SQL 2016 release, the Microsoft team is focusing on bringing Reporting Services into the future as a one-stop integrated solution for BI deployment, so that cloud BI or a competitor’s product (*cough* Tableau *cough*) are not the only modern alternatives.

This Reporting Services refactor is the biggest change made to the product since SQL Server 2005 was released over 10 years ago. Leveraging the best parts of the Datazen acquisition, the Microsoft team is looking to provide a cohesive BI story that integrates web-based and mobile reports, Report Builder reports, and Power View-style modern visuals.


How is it different?

You’re probably used to working with SSRS Report Builder and are wondering what the idea is with the Mobile Report Publisher. The demo below will make it very clear, but let’s just say that reports have now been split into two types:

1. Paginated reports: this is the ‘legacy’ style of report built by Report Builder. It looks flatter, has the same controls as before, and is fully compatible with all the existing reports you have already deployed.

2. Mobile reports: even though the name says ‘mobile’, these reports work just as well on desktop, tablet, and mobile. If you’re familiar with web design, the appropriate term would be that these reports are “responsive”: they can be designed once and will display nicely across devices. This is the new experience that the Mobile Report Publisher targets.

 

Where do we go from here?

Right now the current version is SQL Server 2016 CTP 3.2. Several more versions will be released before SQL 2016 goes RTM later this year. Currently, a lot of the planned functionality is not yet there in the main SSRS interface, and you will often be asked to switch to the “classic” view.

The Mobile Report Publisher experience is also very much targeted at creating a visualization, and there’s no data modeling to be found. You pretty much need to have your datasets pre-made and in the format the tool expects, or you won’t get very far. Hopefully, at some point the team will add modeling capabilities like the robust experience we already have in the Power BI Desktop tool.

If you want to practice and get your feet wet with this release, you can do it now, for free, by visiting SQL Server 2016 CTP 3.2. You can also preview the Mobile Report Publisher. Like I said before, this is a big departure, so most people will want to train up to be ready when the new versions go live.

 

Demo

In the demo below I’m going to walk through the new SSRS interface preview in CTP 3.2, and then create a quick dashboard for both desktop and mobile consumption using the publisher. Let’s check it out!

Discover more about our expertise with SQL Server in the Cloud.

Categories: DBA Blogs

General troubleshooting lessons from recent Delphix issue

Bobby Durrett's DBA Blog - Fri, 2016-02-05 11:25

Delphix support helped me resolve an issue yesterday, and the experience gave me the idea of writing this post about several general computer troubleshooting tips that I have learned down through the years. Never mind that I ignored these lessons during this particular problem; this is more of a “do as I say” and not a “do as I do” story. Actually, sometimes I remember these lessons. I didn’t do so well this week. But the several mistakes that I made resolving this recent Delphix issue motivate me to write this post and, if nothing else, remind myself of the lessons I’ve learned in the past about how to resolve a computer problem.

Don’t panic!

I’m reminded of the friendly advice on the cover of the Hitchhiker’s Guide to the Galaxy: “Don’t panic!” Yesterday at 4:30 pm, I had rebooted the Delphix virtual machine and then, in a panic, had the Unix team reboot the HP Unix target server. But still I could not bring up any of the Delphix VDBs. We had people coming over to our house for dinner that night, and I was starting to worry that I would be working on this issue all night. I ended up getting out of the office by 5:30 pm and had a great dinner with friends. What was I so stressed about? Even the times that I have been up all night, it didn’t kill me. Usually the all-night issues lead to me learning things anyway.

Trust support

The primary mistake that I made was to get my mind fixed on a solution to the problem instead of working with Delphix support and trusting them to guide us to the solution. We had a number of system issues due to a recent network issue, and I got my mind set on the idea that my Delphix issue was due to some network hangup. I feel sorry for our network team, because it seems like the first thought people have any time there is an issue is that it is a “network issue”. I should know better. How many times have I been working on issues where everyone says it is a “database issue”, and I’m annoyed because I know the issue is somewhere else and they don’t believe me when I point to things outside the database? Anyway, I opened a case with Delphix on Monday when I couldn’t get a VDB to come down: it just hung for 5 minutes until it gave me an error. I assumed that it was a network hangup and got fixated on rebooting the Delphix VM. Ack! Ultimately, I ended up working with two helpful and capable people in Delphix support, and they resolved the issue, which was not what I thought at all. There are times to disagree with support and push for your own solution, but in this case I did that too early, and I was dead wrong.

Keep it simple

I’ve heard people refer to Occam’s razor, which I translate into computer terms as “look for simple problems first”. Instead of fixing my mind on some vague network issue where the hardware is not working properly, how about assuming that all the hardware and software is working normally and then thinking about what problems might cause my symptoms? I can’t remember how many times this has bitten me. There is almost always some simple explanation. In this case I had made a change to a Unix shell script that runs when someone logs in as the oracle user. This caused Delphix to no longer be able to do anything with the VDBs on that server. Oops! It was a simple blunder, no big deal. But I’m kicking myself for not first thinking about a simple problem like a script change instead of focusing on something more exotic.

What changed?

I found myself saying the same dumb thing that I’ve heard people say to me all the time: “nothing changed”. In this case I said something like “this has worked fine for 3 years now and nothing has changed”. The long-suffering and patient Delphix support folks never called me on this, but I was dead wrong. Something had to have changed for something that was working to stop working. I should have spent time looking at the various parts of our Delphix setup to see if anything had changed before I contacted support. All I had to do was look at the timestamp on our login script, and I would have seen that something had recently changed.

Understand how it all works

I think my Delphix skills are a little rusty. We just started a new expansion project to add new database sources to Delphix, but it has been a couple of years since I’ve done any heavy configuration and troubleshooting, and I used to have a better feel for how all the pieces fit together. I should have thought about what must have gone on behind the scenes when I asked Delphix to stop a VDB and it hung for 5 minutes. What steps was it doing? Where in the process could the breakdown be occurring? Delphix support did follow this type of reasoning to find the issue: they manually tried some of the steps that the Delphix software would do automatically until they found the problem. If I had stopped to think about the pieces of the process, I could have done the same. This has been a powerful approach to solving problems all through my career. I think about resolving PeopleSoft issues: it just helps to understand how things work. For example, if you understand how the PeopleSoft login process works, you can debug login issues by checking each step of the process for possible problems. The same is true for Oracle logins from clients. In general, the more you understand all the pieces of a computer system, down to the transistors on the chips, the better chance you have of visualizing where the problem might be.

Well, I can’t think of any other pearls of wisdom from this experience, but I thought I would write these down while they were on my mind. Plus, I go on call Monday morning, so I need to keep these lessons in mind as I resolve any upcoming issues. Thanks to Delphix support for their good work on this issue.

Categories: DBA Blogs

Network multicast support on Azure

Pythian Group - Thu, 2016-02-04 15:07

 

Today I would like to talk about multicast support on Azure and how to make it work. While it’s not the most commonly required feature in a virtual environment, some applications nevertheless require multicast support on the network. The perfect example is Oracle RAC, where multicast is required starting from version 11.2.0.2: it is used for Highly Available IP (HAIP) on the interconnect. If you’re thinking about building a training environment with Oracle RAC on Azure, you will need multicast support.
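
As a quick aside: if you already have a RAC system at hand, one way to see HAIP in action is to query v$cluster_interconnects (a sketch, not Azure-specific; on a HAIP-enabled cluster the private interconnect shows up with a link-local 169.254.x.x address):

$ sqlplus -s / as sysdba <<EOF
-- HAIP-backed interconnects report link-local 169.254.x.x addresses
select name, ip_address, is_public, source from v\$cluster_interconnects;
EOF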

How can we check whether it works? First, you can check whether it’s supported by your kernel using the netstat utility.

[root@oradb5 ~]# netstat -g | grep mcast
lo 1 all-systems.mcast.net
eth0 1 all-systems.mcast.net
eth1 1 all-systems.mcast.net

You can see that all my interfaces are ready for multicast support. That’s fine, but how can we check whether it works on our network? We can use either the iperf utility or a Perl script created by Oracle. You can download the script from Oracle Support if you have an account; see the note “How to Validate Network and Name Resolution Setup for the Clusterware and RAC (Doc ID 1054902.1)”.
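
If you don’t have a My Oracle Support account, a rough equivalent of that test can be improvised with iperf. This is only a sketch: the group address and options below assume the classic iperf2 UDP mode, and the multicast group is arbitrary. Start the listener on one node, then the sender on the other:

[root@oradb6 ~]# iperf -s -u -B 230.0.1.0 -i 1
[root@oradb5 ~]# iperf -c 230.0.1.0 -u -T 32 -t 3 -i 1

If the listener reports incoming datagrams, multicast works for that group. For the rest of this post, though, I will stick with the Oracle script.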
Here’s my setup: two A3-size Azure VMs with Oracle Linux 6 and two network interfaces each. The VM hostnames were oradb5 and oradb6. You can check out my blog on how to make an Azure VM with two network interfaces here. The second interface, eth1, is the one where we are going to enable multicast.

I ran the mcasttest.pl script and got this:

[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i eth1
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 15:05:23 | Multicast Failed for eth1 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 15:05:53 | Multicast Failed for eth1 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

The output clearly tells us that we don’t have multicast support for either the 230.0.1.0 or the 224.0.0.251 multicast address.

What does the Virtual Network FAQ for Azure tell us about it?
Here is the answer:

Do VNets support multicast or broadcast?
No. We do not support multicast or broadcast.
What protocols can I use within VNets?
You can use standard IP-based protocols within VNets. However, multicast, broadcast, IP-in-IP encapsulated packets and Generic Routing Encapsulation (GRE) packets are blocked within VNets. Standard protocols that work include:
* TCP
* UDP
* ICMP

So, we need a workaround. Luckily, we have one. Some time ago, while discussing RAC on Amazon AWS, I was pointed to an article written by my former colleague Jeremiah Wilton, in which he described how he worked around the same problem on Amazon. You can read the article here. I decided to give it a try and see if it works for Azure.

We are going to use n2n, a peer-to-peer VPN provided by ntop. They have mentioned that development of the product has been put on hold, but the tool is still widely used and provides an acceptable solution to our problem. I used Stuart Buckell’s article on how to set it up, and it worked for me. We could just use an already precompiled package, but compiling the utility from source gives us the opportunity to disable encryption and compression, or to change any other options.

Here is what I’ve done:
Installed the kernel headers to be able to compile n2n:

[root@oradb5 n2n_v2]# yum install kernel-headers
ol6_UEK_latest | 1.2 kB 00:00
ol6_u4_base | 1.4 kB 00:00
ol6_u4_base/primary | 2.7 MB 00:00
ol6_u4_base 8396/8396
Setting up Install Process
Resolving Dependencies
....

Installed the subversion utility:

[root@oradb5 /]# yum install subversion.x86_64
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package subversion.x86_64 0:1.6.11-15.el6_7 will be installed
.............

Downloaded the sources using svn:

[root@oradb5 /]# svn co https://svn.ntop.org/svn/ntop/trunk/n2n
Error validating server certificate for 'https://svn.ntop.org:443':
- The certificate hostname does not match.
Certificate information:
- Hostname: shop.ntop.org
- Valid: from Sun, 15 Nov 2015 00:00:00 GMT until Wed, 14 Nov 2018 23:59:59 GMT
- Issuer: COMODO CA Limited, Salford, Greater Manchester, GB
- Fingerprint: fb:a6:ff:a7:58:f3:9d:54:24:45:e5:a0:c4:04:18:d5:58:91:e0:34
(R)eject, accept (t)emporarily or accept (p)ermanently? p
A n2n/n2n_v1
A n2n/n2n_v1/lzodefs.h
A n2n/n2n_v1/README
...............

Disabled encryption and compression, following this article.
Changed directory to n2n/n2n_v2 and compiled it:

[root@oradb5 n2n_v2]# make
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c n2n.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c n2n_keyfile.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c wire.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c minilzo.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c twofish.c
..............................

Copied the files to the /usr/sbin directory on both of my servers (oradb5 and oradb6):

[root@oradb5 n2n_v2]# cp supernode /usr/sbin/
[root@oradb5 n2n_v2]# cp edge /usr/sbin/

Started the supernode daemon on the first node. We only need it running on one machine, and it can even be a totally different machine. I am using port 1200 for it:

[root@oradb5 ~]# supernode -l 1200
[root@oradb5 ~]#
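
Before going further, a quick sanity check that the daemon is actually listening on the UDP port we chose; you should see a udp socket bound to 0.0.0.0:1200 owned by the supernode process:

[root@oradb5 ~]# netstat -ulnp | grep 1200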

Started the edge daemon on both servers. On oradb5 I am creating a sub-interface with IP 192.168.1.1, providing these parameters:
-E – Accept multicast MAC addresses (default=drop).
-r – Enable packet forwarding through n2n community.
-c – n2n community name the edge belongs to.
-l – our supernode address:port.

[root@oradb5 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.1 -E -r

[root@oradb6 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.2 -E -r

So we get an edge0 interface on both nodes and can use it for connections requiring multicast:

[root@oradb5 ~]# ifconfig edge0
edge0 Link encap:Ethernet HWaddr 52:CD:8E:20:3D:E5
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::50cd:8eff:fe20:3de5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:592 (592.0 b)

[root@oradb5 ~]#

On the second box:

[root@oradb6 ~]# ifconfig edge0
edge0 Link encap:Ethernet HWaddr 7E:B1:F1:41:7B:B7
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::7cb1:f1ff:fe41:7bb7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:362 (362.0 b)

[root@oradb6 ~]#

Now we can run our multicast test again, this time for the edge0 interface, and see whether it works.

[oracle@oradb5 ~]$ cd mcasttest/
[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i edge0
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 16:22:12 | Multicast Succeeded for edge0 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 16:22:13 | Multicast Succeeded for edge0 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

As you can see, the test completed successfully, so the edge0 interface can now be used for any connections requiring multicast support.
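
One caveat before wrapping up: the supernode and edge processes started above will not survive a reboot. A minimal way to persist them, reusing the exact paths, community name, and addresses from above (a sketch only; a proper init script would be cleaner), is to append the commands to /etc/rc.local on each node:

[root@oradb5 ~]# cat >> /etc/rc.local <<'EOF'
/usr/sbin/supernode -l 1200
/usr/sbin/edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.1 -E -r
EOF

On oradb6, drop the supernode line and use -a 192.168.1.2 instead.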

In my next article I will show you how to create an Oracle RAC on Azure using this multicast-enabled interface and shared storage.

Categories: DBA Blogs