
DBA Blogs

Partner Webcast – Oracle PaaS: Exadata Database on Cloud

Cloud computing is known for many things: cost effectiveness, flexibility, scalability. The IT of our generation needs to adapt to the cloud for exactly those reasons. But extreme performance has not...

Categories: DBA Blogs

The Little Big Company With The Huge Heart

Pythian Group - Wed, 2016-02-17 10:26

February is a drag. Maybe, like me, you have already experienced profound failure in following up on your New Year’s Resolutions. Instead of “taking yoga” I ended up “severely injuring my back”. Ah, c’est la vie. Sometimes it’s harder than other times to be positive, and when it’s hard to be positive it’s even harder to be grateful (another one of my New Year’s Resolutions). The good news (or bad news if you’re making excuses for your negativity) is that you can practice gratitude from wherever you are, in whichever state of mind or body you’re in.

Pythian exists because of the people who work here. This isn’t a commentary on the Company’s success, the Company’s Brand or Culture, but rather the actual employees who make this place a great place to be, by doing one thing: practicing kindness.

This building is full of people who are going out of their way on a daily basis to:

a) be generous with their time (and their baking)

b) offer support to a colleague in need

c) tell a joke, take a joke, share a laugh

d) provide you with directions when you inevitably become confused and disoriented on the 5th or 6th floor.

Thankfully I have the life and work experience to know that this is not the norm. Something special is happening here. People are supporting each other, providing personal and professional insights, and I must say that while technical genius is the grease that allows the wheels to spin, there is a ton of emotional intelligence being exchanged by a ton of people, all the time.

If you wouldn’t characterize your experience here in the same light I encourage you to wander around. Get lost looking for finance and smile with the Accountant who keeps chocolate in his desk. Tell someone in L&D about a new hobby you’ve taken up and she might just tell you she’s proud of you. Stumble into the IT room and ask an inane question about your new cell phone that you can barely turn on, it will be met with patience. Tell our CRO a joke, the driest you’ve got, or a dad-joke, he’ll love it.

Talk to our CEO about heavy metal, art, poetry, books (highly recommended points of interest if like me you started here not ever having heard of “Unix”). Ask our VP of Transformation where the printer is, seriously he told me three times and I still don’t know so please also let me know when you know. Tell one of the DBAs that you’re learning French and then avoid them for the next month or so because you’re too nervous to say anything beyond “je ne comprends pas”.

I’ve been given hugs, have given out hugs, one of my colleagues actually made me lunch a few times when I was moving places and my life was in disarray. There are a lot of smiles here and they are contagious.

Happy to be here with all of you. Thanks for being kind.

Categories: DBA Blogs

How to build a Multi-domain cluster with AlwaysOn Availability Groups – SQL Server 2016

Pythian Group - Wed, 2016-02-17 09:30

SQL Server 2016 is making lots of news and promises to be a revolutionary SQL Server version. Speaking of AlwaysOn Availability Groups, a good number of improvements have already been announced, among them Basic Availability Groups. By the way, Hemantgiri, my friend and former Data Platform MVP, has already talked about this in his post.

One of the improvements that got my attention was the ability to have AlwaysOn Availability Groups sit on a multi-domain cluster, or even on a domain-detached cluster. I tested both, and indeed this works very well, but the maintenance is slightly different (read: more difficult).

We need to consider the following points:

  • The cluster manager does not support this; everything should be done using PowerShell.
  • To perform the setup, you need to be connected as a local Administrator.
    • The “Administrator” user password should be the same on all involved nodes.
  • For some tasks, like SQL Server upgrade/update, you will need to run the setup using the local administrator account.
    • You don’t need to use the Administrator account to log in. Use your regular account, hold down the SHIFT key while you right-click the file, and select the Run as different user menu item.
  • You must be using Windows Server 2016 and SQL Server 2016.

The new capability is a joint effort between the SQL Server and Windows development teams and is, in my opinion, a very good option. I see the multi-domain capability as a better option than a domain-detached one, and I already have customers who can benefit very well from this option.

The approach that follows is the same whether you are building a multi-domain or a domain-detached cluster; however, each has some particular details:

For Domain-detached cluster

In this particular case, the Failover Cluster is created without any associated computer objects, and for this reason, each involved node needs to have a defined primary DNS suffix.


For Multi-domain cluster

Make sure that you can reach all the nodes by IP, server name, and FQDN. Take care with firewall rules, as they may interfere with cluster creation. Make sure that the trust relationship between the domains is well defined.
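A quick way to validate name resolution and basic reachability from each node is shown below (a minimal sketch; the target node and port are examples, so substitute your own nodes and the ports you care about):

Resolve-DnsName W2016SRV07.LISBON.local                    # name resolution across domains
Test-NetConnection W2016SRV07.LISBON.local -Port 445       # basic TCP reachability (SMB in this example)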

 

Creating the cluster

In my case, I created a multi-domain cluster, which was also a multi-subnet one. The following command is the basis for creating the cluster:

New-Cluster -Name <Cluster Name> -Node <Nodes to Cluster> -StaticAddress <Cluster IP> -AdministrativeAccessPoint DNS

The catch here is the -AdministrativeAccessPoint DNS. Check the PowerShell New-Cluster command for more options.

To perform this command, open the PowerShell console as Administrator (you can either log in as local Administrator or open the console using the method that I explained earlier in this article), and execute the command. That’s it!

In my test laboratory, I created a cluster containing three nodes:

  • SQLCAFE.local (192.100.100.17)
  • LISBON.local (192.100.200.18)
  • SQLCAFE.local (192.100.100.19)

I executed the following command:

New-Cluster -Name W2016CLT01 -Node W2016SRV06.SQLCAFE.local, W2016SRV07.LISBON.local, W2016SRV08.SQLCAFE.local -StaticAddress 192.100.100.52,192.100.200.52 -NoStorage -AdministrativeAccessPoint Dns

 

After the cluster is created, use PowerShell to perform all the administrative tasks. The following commands are useful:

  • Get-Cluster
  • Get-ClusterNode
  • Get-ClusterGroup
  • Get-ClusterGroup <Group Name> | Get-ClusterResource

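For day-to-day checks these cmdlets chain together naturally; here is a minimal sketch against the lab cluster created above (run it from one of the cluster nodes, or pass -Cluster explicitly as shown):

# List the nodes and their current state
Get-ClusterNode -Cluster W2016CLT01 | Format-Table Name, State

# Show the resources that live in the core cluster group
Get-ClusterGroup -Cluster W2016CLT01 -Name "Cluster Group" | Get-ClusterResource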

Creating an Availability Group

The process of creating the availability group is the same as mentioned previously. Luckily, SQL Server Management Studio has all we need to manage the AG without needing to touch PowerShell; for basic management, SSMS is enough.

After the cluster is created, you can go ahead and enable the AlwaysOn Availability Groups feature. To do this, open SQL Server Configuration Manager and access the SQL Server engine properties. On the AlwaysOn High Availability tab, select the Enable AlwaysOn Availability Groups check box. Click OK and restart the SQL Server engine.


After enabling the feature, just follow the usual procedure and use the wizard in SSMS, T-SQL, or PowerShell to create the Availability Group.
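For reference, below is a bare-bones T-SQL sketch of the kind of statement the wizard generates. The AG name, database name and endpoint URLs are placeholders, and in a multi-domain or domain-detached setup the database mirroring endpoints may need certificate-based authentication:

-- Run on the intended primary replica (AG01, SalesDB and the endpoints are placeholders)
CREATE AVAILABILITY GROUP [AG01]
FOR DATABASE [SalesDB]
REPLICA ON
    N'W2016SRV06' WITH (
        ENDPOINT_URL = N'TCP://W2016SRV06.SQLCAFE.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL),
    N'W2016SRV07' WITH (
        ENDPOINT_URL = N'TCP://W2016SRV07.LISBON.local:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
        FAILOVER_MODE = MANUAL);

-- Then, on each secondary replica (the database must already be restored WITH NORECOVERY):
ALTER AVAILABILITY GROUP [AG01] JOIN;
ALTER DATABASE [SalesDB] SET HADR AVAILABILITY GROUP = [AG01];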

 

More Details

As this is a complex configuration, careful DNS and quorum configuration is recommended to ensure the highest service availability. As a result, DNS replication might be needed. For the quorum, the new Cloud Witness (supported from Windows Server 2016) is recommended.
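If you adopt the Cloud Witness, configuring it is a single PowerShell command; the storage account name and access key below are placeholders for your own Azure storage account:

# Point the cluster quorum at an Azure storage account acting as a Cloud Witness
Set-ClusterQuorum -CloudWitness -AccountName "mystorageaccount" -AccessKey "<storage-account-access-key>"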

 

Summary

When installed on Windows Server 2016, SQL Server 2016 gains more flexibility, including DTC support for Availability Groups and the Cloud Witness. Multi-domain and domain-detached clusters increase the architecture options when using SQL Server. This is just one of the improvements coming with SQL Server 2016. Stay tuned for more news!

 

Categories: DBA Blogs

Links for 2016-02-16 [del.icio.us]

Categories: DBA Blogs

Pythian-led Events: Embracing Disruptive Technologies

Pythian Group - Tue, 2016-02-16 14:24

Today’s senior IT professionals are expected to be experts in everything from emerging technologies to new regulations and business innovation. Keeping up with the latest advances in technology, and understanding how they impact your business, can be challenging.

The impact of today’s disruptive technologies on how we do business led Pythian to introduce Velocity of Innovation, a series of events for senior IT professionals. These are Pythian’s exclusive thought-leadership events, where a moderator-led panel engages in lively conversations around today’s disruptive technologies.

From our Velocity of Innovation series, to CIO summits and CIO forums, Pythian is leading the conversation around technologies that can transform business. Pythian focuses on creating positive outcomes for our clients, such as better, faster software development; high-performing, cost-effective systems; data-driven insights; and the adoption of disruptive technologies for competitive advantage.

Pythian will continue to lead conversations around disruptive technologies, bringing our vendor-agnostic approach to a range of IT challenges. On March 16, Pythian will be participating in the Global Business Intelligence CIO UK Event.

This event brings senior IT professionals together for one day to focus on topics ranging from digital transformation to cyber security.

 

If you are interested in attending an upcoming Velocity of Innovation event in a city near you, please contact events@pythian.com.

Categories: DBA Blogs

Oracle SOA Suite 12c (12.2.1.0.0) - What's New

Oracle SOA Suite’s new release includes features that simplify and reduce the time to develop new integrations, and significantly improve response times. Developers now have the ability to improve...

Categories: DBA Blogs

Trace Files -- 12 : Tracing a Particular Process

Hemant K Chitale - Sun, 2016-02-14 02:59
Besides tracing particular SQL statements, you can also trace by PID (Oracle PID) or Server Process ID (SPID).

SQL> select s.sid, p.pid
2 from v$session s join v$process p
3 on (s.paddr=p.addr)
4 and s.username = 'HEMANT';

SID PID
---------- ----------
19 22

SQL>
SQL> alter system set events 'sql_trace {process: orapid=22}';

System altered.

SQL>
SQL> select s.sid, p.pid, p.spid
2 from v$session s join v$process p
3 on (s.paddr=p.addr)
4 and s.username = 'HR';

SID PID SPID
---------- ---------- ------------------------
14 26 3207

SQL> alter system set events 'sql_trace {process:3207}';

System altered.

SQL>


Tracing for these processes is disabled with:

SQL> alter system set events 'sql_trace {process: orapid=22} off';

System altered.

SQL> alter system set events 'sql_trace {process:3207} off';

System altered.

SQL>


Tracing seems to get disabled after the next SQL, not immediately.

Remember : This is SQL Tracing, not Optimizer Tracing.
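To see where the traced process writes its trace file, V$PROCESS can be queried directly (a quick example reusing the orapid 22 identified above):

SQL> select p.tracefile
  2  from v$process p
  3  where p.pid = 22;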
.
.
.

Categories: DBA Blogs

Automating Oracle Cloud storage container creation

When you request your Oracle Java Cloud Service trial (or attend an Oracle PaaS Workshop), one of the things you need to do to set up your cloud account is create storage containers in the Oracle...

Categories: DBA Blogs

Step-by-Step Guide to January 2016 PSU Patch Apply on 12c Grid and RDBMS Homes in Linux

Pythian Group - Fri, 2016-02-12 07:52

The following step-by-step action plan is for a single instance database stored on ASM in 12.1.0.2 on Linux (OEL 6, 64-bit in this case).

Step 1: Update the OPATCH utility (ETA: 15 minutes)

 

For Database home:

 

$ unzip p6880880_121010_LINUX.zip -d /u01/app/oracle/product/12.1.0/db_1

$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch version

 

For Grid home:

 

$ unzip p6880880_121010_LINUX.zip -d /u01/app/oracle/12.1.0.2/grid

$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch version

Step 2: Create ocm.rsp file (ETA: 5 minutes)

 

Note: Press the Enter/Return key without providing any input, and answer Yes.

 

$ export ORACLE_HOME=/u01/app/oracle/12.1.0.2/grid

$ $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /stage/ocm.rsp

Step 3: Validation of Oracle Inventory (ETA: 5 minutes)

 

Before beginning patch application, check the consistency of inventory information for GI home and each database home to be patched. Run the following command as respective Oracle home owner to check the consistency.

 

For database home:

 

$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/db_1

 

For Grid home:

 

$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/12.1.0.2/grid

 

If this command succeeds, it lists the Oracle components that are installed in the home. Save the output so you have the status prior to the patch apply.

Step 4: Stage the Patch (ETA: 5 minutes)

 

$ mkdir /stage/PSUpatch

$ cp /stage/p22191349_121020_Linux-x86-64.zip /stage/PSUpatch

 

Check that the directory is empty.

$ cd /stage/PSUpatch

$ ls

 

Unzip the patch as grid home owner.

 

$ unzip p22191349_121020_<platform>.zip

Step 5: One-off Patch Conflict Detection and Resolution (ETA: 10 minutes)

 

Run it as the root user:

 

/u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp

 

It will ask you to roll back identical patches, like this:

 

Analyzing patch(es) on “/u01/app/oracle/12.1.0.2/grid” …

Patch “/stage/PSUpatch/22191349/21436941” is already installed on “/u01/app/oracle/12.1.0.2/grid”. Please rollback the existing identical patch first.

Patch “/stage/PSUpatch/22191349/21948341” is already installed on “/u01/app/oracle/12.1.0.2/grid”. Please rollback the existing identical patch first.

Patch “/stage/PSUpatch/22191349/21948344” is already installed on “/u01/app/oracle/12.1.0.2/grid”. Please rollback the existing identical patch first.

Patch “/stage/PSUpatch/22191349/21948354” is already installed on “/u01/app/oracle/12.1.0.2/grid”. Please rollback the existing identical patch first.

 

So first roll back the above four patches by going to their directory and issuing the following as the grid owner from the grid home:

 

opatch rollback -id 21948354 -local -oh /u01/app/oracle/12.1.0.2/grid (Repeat for all 4 patches)

 

Note: In some cases, oddly, I had to shut down the HAS services as the root user before the patch rollback by using:

 

/u01/app/oracle/12.1.0.2/grid/bin/crsctl stop has -f

 

After this again run:

 

/u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp

 

If the analyze command fails, then run the following as the root user:

 

$ORA_GI_HOME/crs/install/roothas.pl -postpatch

 

It will also start the HAS services.

 

Then again run the analyze command as given above:

 

It will show something like:

 

Analyzing patch(es) on “/u01/app/oracle/12.1.0.2/grid” …

Patch “/stage/PSUpatch/22191349/21436941” successfully analyzed on “/u01/app/oracle/12.1.0.2/grid” for apply.

Patch “/stage/PSUpatch/22191349/21948341” successfully analyzed on “/u01/app/oracle/12.1.0.2/grid” for apply.

Patch “/stage/PSUpatch/22191349/21948344” successfully analyzed on “/u01/app/oracle/12.1.0.2/grid” for apply.

Patch “/stage/PSUpatch/22191349/21948354” successfully analyzed on “/u01/app/oracle/12.1.0.2/grid” for apply.

 

Now you are good to apply the patch. Proceed to next step.

 

 

 

Step 6: Apply the Patch (ETA: 60 minutes)

(Note: This should apply the patch in both the GI and RDBMS homes, but it is unreliable in that respect, so after it completes we need to check opatch lsinventory to make sure that it also applied the patches in the RDBMS home.)

 

As root user, execute the following command:

 

# /u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -ocmrf /stage/ocm.rsp

 

In case it doesn’t apply in the RDBMS home, then run:

 

/u01/app/oracle/product/12.1.0/db_1/OPatch/opatchauto apply /stage/PSUpatch/22191349 -oh /u01/app/oracle/product/12.1.0/db_1 -ocmrf /stage/ocm.rsp

 

Make sure the above applies both the OCW and PSU patches. You can verify that from opatch lsinventory. If only the OCW patch is present in the output and no PSU (which is likely the case), then issue the following from the Oracle home as the Oracle database owner, after shutting down the database:

 

/u01/app/oracle/product/12.1.0/db_1/OPatch/opatch apply -oh /u01/app/oracle/product/12.1.0/db_1 -local /stage/PSUpatch/22191349/21948354

Step 7: Loading Modified SQL Files into the Database (ETA: 60 minutes)

 

% sqlplus /nolog

SQL> Connect / as sysdba

SQL> startup

SQL> quit

% cd $ORACLE_HOME/OPatch

% ./datapatch -verbose

Step 8: Check for the list of patches applied to the database (ETA: 5 minutes)

 

SQL> select action_time, patch_id, patch_uid, version, status, bundle_series, description from dba_registry_sqlpatch;

Categories: DBA Blogs

New Round of Let’s Talk Database Events

Richard Foote - Thu, 2016-02-11 18:45
I’ll be presenting a new round of “Let’s Talk Database” events around Australia and NZ next month. These are free events but have often “sold out” in the past so booking early is recommended to avoid disappointment. All events run between 9:00am – 12:30pm and are followed by a networking lunch. Currently, the confirmed events are: Sydney – Tuesday, […]
Categories: DBA Blogs

Log Buffer #460: A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2016-02-11 14:34

This Log Buffer Edition covers blog posts from Oracle, SQL Server and MySQL for this week.

Oracle:

APEX shuttle item with one direction

Wondering about which tasks and work products are essential for your project?

Using Spark(Scala) and Oracle Big Data Lite VM for Barcode & QR Detection

Cloning 10.2.0.3 Oracle Home on fully patched 11.31 HP-UX hangs

An UNDO in a PDB in Oracle 12c?

SQL Server:

SQL Azure Performance Benchmarking

Monitoring In-Memory OLTP: What’s Important?

Find and Remove Duplicate Records SQL Server

A Database to Diagram For

Getting started with R scripts and R visuals in Power BI Desktop

MySQL:

MySQL Support People – Percona Support

How to Install Redmine 3 with Nginx on Ubuntu 15.10

The magical abandoned .ibd files after crash recovery with MySQL 5.7.10

How To Speed Up MySQL Restart (hint: just like before, but this time for real)

OmniSQL – Massively Parallel Query Execution

Categories: DBA Blogs

Oracle RAC on Azure

Pythian Group - Thu, 2016-02-11 14:31

Microsoft Azure provides an acceptable and affordable platform for a training environment. I am an Oracle DBA, and I use it to test functionality, new technologies and features of different Oracle products. Azure supplies a template for Oracle Linux, and it can be used to run a single database, but when we try to create an Oracle RAC we hit two major issues.

First, the Azure virtual network doesn’t support multicast and, as a result, cannot be used for the interconnect. The second issue is shared storage. Azure provides shared file storage, and you can access it using the SMB 2 protocol, but it isn’t exactly what we need for RAC. How can we solve or work around those problems? I will share my experience and show how I set up a RAC on Azure.

For a two-node RAC we first need to create at least two virtual machines for the cluster nodes. I’ve chosen Oracle Linux 6.4 from the Azure Marketplace. I decided to create the machines with two network interfaces, where one will be used for public traffic and the other for the private interconnect. Here is my blog post on how to create a VM with two network interfaces. It may not be strictly necessary, since you can fork a virtual interface out of your only public network, but I decided to go this way and create cluster nodes with two interfaces.

Here is output for the network from the first node:

[root@oradb5 network-scripts]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:0D:3A:11:A3:71
inet addr:10.0.1.11 Bcast:10.0.1.255 Mask:255.255.254.0
inet6 addr: fe80::20d:3aff:fe11:a371/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:776 errors:0 dropped:0 overruns:0 frame:0
TX packets:789 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:96068 (93.8 KiB) TX bytes:127715 (124.7 KiB)

eth1 Link encap:Ethernet HWaddr 00:0D:3A:11:AC:92
inet addr:10.0.2.11 Bcast:10.0.3.255 Mask:255.255.254.0
inet6 addr: fe80::20d:3aff:fe11:ac92/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:722 (722.0 b) TX bytes:1166 (1.1 KiB)

We need to install the oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64 rpm. It will install all required packages and set up the kernel parameters and limits for the oracle user on our boxes:

yum install oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64

The next step is to enable multicast support on the network for the interconnect. You can read how to enable multicast support in my other blog post. As a result you get a network interface edge0, which can now be used for our private network. Here is the output of ifconfig after creating the virtual interface with multicast support:

[root@oradb5 ~]# ifconfig
edge0 Link encap:Ethernet HWaddr 9E:1A:D8:0B:94:EF
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::9c1a:d8ff:fe0b:94ef/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:3 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:238 (238.0 b)

eth0 Link encap:Ethernet HWaddr 00:0D:3A:11:A3:71
inet addr:10.0.1.11 Bcast:10.0.1.255 Mask:255.255.254.0
inet6 addr: fe80::20d:3aff:fe11:a371/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:118729 errors:0 dropped:0 overruns:0 frame:0
TX packets:62523 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:143705142 (137.0 MiB) TX bytes:20407664 (19.4 MiB)

eth1 Link encap:Ethernet HWaddr 00:0D:3A:11:AC:92
inet addr:10.0.2.11 Bcast:10.0.3.255 Mask:255.255.254.0
inet6 addr: fe80::20d:3aff:fe11:ac92/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9 errors:0 dropped:0 overruns:0 frame:0
TX packets:271 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1274 (1.2 KiB) TX bytes:43367 (42.3 KiB)

I’ve used the multicast test tool from the Oracle Support document:

Grid Infrastructure Startup During Patching, Install or Upgrade May Fail Due to Multicasting Requirement (Doc ID 1212703.1)

The check was successful:

[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i edge0
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 16:22:12 | Multicast Succeeded for edge0 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 16:22:13 | Multicast Succeeded for edge0 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

So, we have solved the first obstacle and need to get shared storage for our RAC. We have at least a couple of options here, and I believe somebody can suggest others. We can use NFS-based shared storage, or we can use iSCSI. You may choose something from the Azure Marketplace like SoftNAS or Stonefly, or you may decide to create your own solution. In my case I just fired up another Oracle Linux VM, added a couple of storage disks to it using the portal, and then set up an NFS server on that machine. Here is the high-level description:
We create a Linux-based VM on Azure using the Oracle Linux 6.4 template from the Marketplace. The size will be dictated by your requirements. I called the machine oradata.
I’ve added a 20 GB disk to the oradata machine through the Azure portal, and created a partition and filesystem on it:

[root@oradata ~]# fdisk -l
[root@oradata ~]# fdisk /dev/sdc
[root@oradata ~]# mkfs.ext4 /dev/sdc1
[root@oradata ~]# mkdir /share
[root@oradata ~]# mkdir /share/oradata1
[root@oradata ~]# e2label /dev/sdc1 sharedoradata1
[root@oradata ~]# vi /etc/fstab
[root@oradata ~]# mount -a
[root@oradata ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 7.4G 1.4G 5.7G 19% /
tmpfs 1.7G 0 1.7G 0% /dev/shm
/dev/sda1 485M 50M 410M 11% /boot
/dev/sda2 2.0G 67M 1.9G 4% /tmp
/dev/sdc1 20G 4.2G 15G 23% /share/oradata1
/dev/sdb1 60G 180M 56G 1% /mnt/resource
[root@oradata ~]#

Installed necessary utilities using yum:
[root@oradata ~]# yum install nfs-utils
Configured NFS server on the box:

[root@oradata ~]# chkconfig nfs on
[root@oradata ~]# vi /etc/exports
[root@oradata ~]# cat /etc/exports
/share/oradata1 10.0.0.0/23(rw,sync,no_root_squash)
[root@oradata ~]# service nfs restart
[root@oradata ~]# showmount -e
Export list for oradata:
/share/oradata1 10.0.0.0/23

Configure or stop the firewall (you may need to do this on your cluster nodes as well):

[root@oradata ~]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@oradata ~]# chkconfig iptables off
[root@oradata ~]#

On your cluster nodes you need to add the mount point for the shared storage to /etc/fstab and mount it.

[root@oradb5 ~]# vi /etc/fstab
[root@oradb5 ~]# cat /etc/fstab | grep nfs
oradata:/share/oradata1 /u02/oradata nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,actimeo=0,vers=3,timeo=600 0 0
[root@oradb5 ~]# mount -a
[root@oradb5 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 7.4G 2.5G 4.6G 36% /
tmpfs 3.5G 0 3.5G 0% /dev/shm
/dev/sda1 485M 69M 391M 15% /boot
/dev/sda2 2.0G 86M 1.8G 5% /tmp
/dev/sdc1 60G 12G 45G 21% /u01/app
/dev/sdb1 281G 191M 267G 1% /mnt/resource
oradata:/share/oradata1
20G 4.2G 15G 23% /u02/oradata
[root@oradb5 ~]# mount | grep /u02/oradata | grep -v grep
oradata:/share/oradata1 on /u02/oradata type nfs (rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,noac,actimeo=0,vers=3,timeo=600,addr=10.0.1.101)
[root@oradb5 ~]#

Now we have the required storage for the OCR and voting disks, and networks for public and interconnect traffic, so we can install our cluster.
We need to correct the /etc/hosts file on both nodes (you may choose to use the Azure DNS service instead).

[oracle@oradb5 ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.1.11 oradb5
10.0.1.12 oradb6
10.0.1.15 oradb5-vip
10.0.1.16 oradb6-vip
10.0.1.19 oradb-clst-scan
192.168.1.1 oradb5-priv
192.168.1.2 oradb6-priv
10.0.1.101 oradata
[oracle@oradb5 ~]$

You can see I set up the public, VIP and SCAN addresses in the hosts file. Of course this is not acceptable for any production implementation, or if you want to have more than one SCAN address. As I’ve already mentioned above, you can use DNS for a proper installation.
We copy the required software to one of the nodes, unpack it, and create a response file for the installation, like:

[oracle@oradb5 ~]$ cat grid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v12.1.0
ORACLE_HOSTNAME=oradb5
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/12.1.0/grid
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=dba
oracle.install.asm.OSASM=dba
oracle.install.crs.config.gpnp.scanName=oradb-clst-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterType=STANDARD
oracle.install.crs.config.clusterName=oradb-clst
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.gpnp.gnsClientDataFile=
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=
oracle.install.crs.config.clusterNodes=oradb5:oradb5-vip,oradb6:oradb6-vip
oracle.install.crs.config.networkInterfaceList=eth0:10.0.0.0:1,eth1:10.0.2.0:3,edge0:192.168.1.0:2
oracle.install.crs.config.storageOption=FILE_SYSTEM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.votingDiskLocations=/u02/oradata/voting/vdsk1,/u02/oradata/voting/vdsk2,/u02/oradata/voting/vdsk3
oracle.install.crs.config.sharedFileSystemStorage.votingDiskRedundancy=NORMAL
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=/u02/oradata/ocr/ocrf1
oracle.install.crs.config.sharedFileSystemStorage.ocrRedundancy=EXTERNAL
oracle.install.crs.config.useIPMI=false
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
oracle.install.asm.SYSASMPassword=
oracle.install.asm.diskGroup.name=
oracle.install.asm.diskGroup.redundancy=
oracle.install.asm.diskGroup.AUSize=1
oracle.install.asm.diskGroup.disks=
oracle.install.asm.diskGroup.diskDiscoveryString=
oracle.install.asm.monitorPassword=
oracle.install.asm.ClientDataFile=
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsHost=
oracle.install.config.omsPort=0
oracle.install.config.emAdminUser=
oracle.install.config.emAdminPassword=

The file can be used for a silent installation. You may choose instead to use runInstaller in GUI mode.
To run the installation in silent mode you just need to go to your unpacked software and run:

[oracle@oradb5 grid]$ ./runInstaller -silent -responseFile /home/oracle/grid.rsp -ignoreSysPrereqs -ignorePrereq
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 415 MB. Actual 1350 MB Passed
Checking swap space: 0 MB available, 150 MB required.    Failed <<<<
>>> Ignoring required pre-requisite failures. Continuing…

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2016-02-01_09-41-01AM. Please wait …

You’ve of course noticed that I ran the installation ignoring requirements. As a matter of fact, I first ran it without ignoring them, checked the failing checks, made the necessary adjustments for the checks I decided were important, and left the others as they were. For example, my /etc/resolv.conf file was different due to settings on the DHCP server, and so on. I advise applying common sense and your own knowledge to decide which checks are important for you and which can be ignored.
Your installation will complete, and all you need to run afterwards is a couple of scripts to finish the configuration.

As a root user, execute the following script(s):
1. /u01/app/12.1.0/grid/root.sh

Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[oradb5, oradb6]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.

Successfully Setup Software.
As install user, execute the following script to complete the configuration.
1. /u01/app/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=

Note:
1. This script must be run on the same host from where installer was run.
2. This script needs a small password properties file for configuration assistants that require passwords (refer to install guide documentation).

We run root.sh on each node, one by one, as the root user, and execute the configToolAllCommands script as the oracle user on the node where we ran the installation. The response file is required if we specified passwords for ASM, ASM monitoring, or for DBCA. Here is an example of the file contents:

oracle.assistants.server|S_SYSPASSWORD=welcome1
oracle.assistants.server|S_SYSTEMPASSWORD=welcome1
oracle.assistants.server|S_DBSNMPPASSWORD=welcome1
oracle.assistants.server|S_PDBADMINPASSWORD=welcome1
oracle.assistants.server|S_EMADMINPASSWORD=welcome1
oracle.assistants.server|S_ASMSNMPPASSWORD=welcome1

Change permission for the file to 600 before running the script:

[oracle@oradb5 grid]$ vi /home/oracle/cfgrsp.properties
[oracle@oradb5 grid]$ chmod 600 /home/oracle/cfgrsp.properties

We don’t have any ASM or BMC console in our installation, but I will leave the file here nevertheless, just for reference.
Here is the output of what we ran on our system:

[root@oradb5 ~]# /u01/app/12.1.0/grid/root.sh
Check /u01/app/12.1.0/grid/install/root_oradb5_2016-02-01_10-21-07.log for the output of root script
....

[root@oradb6 ~]# /u01/app/12.1.0/grid/root.sh
Check /u01/app/12.1.0/grid/install/root_oradb6_2016-02-01_10-38-50.log for the output of root script
….

[oracle@oradb5 grid]$ /u01/app/12.1.0/grid/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/home/oracle/cfgrsp.properties
Setting the invPtrLoc to /u01/app/12.1.0/grid/oraInst.loc

perform – mode is starting for action: configure
….

Keep in mind that configToolAllCommands should also create the management database in your cluster. If it somehow failed, you can try to recreate it using dbca in silent mode, like:

/u01/app/12.1.0/grid/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType FS -datafileDestination /u02/oradata/ocr/oradb-clst/mgmtdb -datafileJarLocation /u01/app/12.1.0/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck -oui_internal

The RAC is created and can now be used for application high availability or for database tests. You may install the database software on the RAC either using the GUI installer or in silent mode, but don’t forget to specify the cluster nodes during the installation. I would also like to mention that I would not recommend this as a production system, but it is quite suitable for tests, or for experimenting if you want to verify or troubleshoot some RAC-specific features.
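Before handing the cluster over for testing, it is worth a quick sanity check of the stack from one of the nodes (a minimal sketch; the paths match the grid home used above):

[oracle@oradb5 ~]$ /u01/app/12.1.0/grid/bin/crsctl check cluster -all   # CRS, CSS and EVM status on every node
[oracle@oradb5 ~]$ /u01/app/12.1.0/grid/bin/olsnodes -n -s              # node numbers and Active/Inactive state
[oracle@oradb5 ~]$ /u01/app/12.1.0/grid/bin/crsctl stat res -t          # clusterware resources in tabular form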

Categories: DBA Blogs

AUDIT DDLS in database with trigger

Learn DB Concepts with me... - Wed, 2016-02-10 21:37


-- Simple trigger to audit basic schema changes:

--- CREATE TABLE TO STORE AUDIT DATA

CREATE TABLE DDL_AUDIT_LOG
(
  STAMP DATE
, USERNAME VARCHAR2(30 BYTE)
, OSUSER VARCHAR2(30 BYTE)
, MACHINE VARCHAR2(30 BYTE)
, TERMINAL VARCHAR2(30 BYTE)
, OPERATION VARCHAR2(30 BYTE)
, OBJTYPE VARCHAR2(30 BYTE)
, OBJNAME VARCHAR2(30 BYTE)
, OBJ_OWNER VARCHAR2(30 BYTE)
) TABLESPACE USERS ;

-- NOW CREATE TRIGGER TO AUDIT CHANGES

-- To disable the trigger later: ALTER TRIGGER AUDIT_DDL_CHANGES DISABLE;
CREATE TRIGGER AUDIT_DDL_CHANGES
   AFTER create OR drop OR alter
      ON ATOORPU.SCHEMA  -- Change ATOORPU to your schema name!!!
      -- ON DATABASE
BEGIN
  INSERT INTO ddl_audit_log VALUES
        (SYSDATE,
         SYS_CONTEXT('USERENV', 'SESSION_USER'),
         SYS_CONTEXT('USERENV', 'OS_USER'),
         SYS_CONTEXT('USERENV', 'HOST'),
         SYS_CONTEXT('USERENV', 'TERMINAL'),
         ORA_SYSEVENT,
         ORA_DICT_OBJ_TYPE,
         ORA_DICT_OBJ_NAME,
         ORA_DICT_OBJ_OWNER
        );
END;
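With the trigger in place, recent DDL activity can be reviewed with a simple query against the audit table (a minimal example using the columns defined above):

SELECT stamp, username, osuser, operation, objtype, objname, obj_owner
  FROM ddl_audit_log
 ORDER BY stamp DESC;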

Sample output:

[Sample audit table output]

Categories: DBA Blogs

IMPDP SHOW=Y, sqlfile=test.sql

Learn DB Concepts with me... - Wed, 2016-02-10 21:35
Most of the legacy-mode options will work in Data Pump with their corresponding options. In legacy mode (the imp utility) we have the show=y option to spool the contents of the dump file into SQL scripts without doing the actual import. The same option can be used with Data Pump impdp as well, but it gets replaced with the sqlfile option.




oracle@Linux01:[/u01/app/oracle/dpump] $ expdp atoorpu directory=DPUMP dumpfile=test1.dmp logfile=test1.log tables=TEST

Export: Release 11.2.0.4.0 - Production on Wed Feb 3 15:22:00 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "ATOORPU"."SYS_EXPORT_TABLE_01":  atoorpu/******** directory=DPUMP dumpfile=test1.dmp logfile=test1.log tables=TEST
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "ATOORPU"."TEST"                            5.875 KB       5 rows
Master table "ATOORPU"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for ATOORPU.SYS_EXPORT_TABLE_01 is:
  /u01/app/oracle/dpump/test1.dmp
Job "ATOORPU"."SYS_EXPORT_TABLE_01" successfully completed at Wed Feb 3 15:22:13 2016 elapsed 0 00:00:07

oracle@Linux01:[/u01/app/oracle/dpump] $ ls -ll test1.dmp
-rw-r-----. 1 oracle oinstall 114688 Feb  3 15:22 test1.dmp
oracle@Linux01:[/u01/app/oracle/dpump] $ impdp atoorpu directory=DPUMP dumpfile=test1.dmp sqlfile=TEST1.sql

Import: Release 11.2.0.4.0 - Production on Wed Feb 3 15:22:35 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "ATOORPU"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "ATOORPU"."SYS_SQL_FILE_FULL_01":  atoorpu/******** directory=DPUMP dumpfile=test1.dmp sqlfile=TEST1.sql
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "ATOORPU"."SYS_SQL_FILE_FULL_01" successfully completed at Wed Feb 3 15:22:42 2016 elapsed 0 00:00:01

oracle@Linux01:[/u01/app/oracle/dpump] $ ls -ltr
-rw-rw-r--. 1 atoorpu atoorpu      18870 Feb  3 09:38 test.log
-rw-r--r--. 1 oracle  oinstall      1325 Feb  3 15:22 test1.log
-rw-r-----. 1 oracle  oinstall    114688 Feb  3 15:22 test1.dmp
-rw-r--r--. 1 oracle  oinstall      3800 Feb  3 15:22 TEST1.sql
-rw-r--r--. 1 oracle  oinstall       933 Feb  3 15:22 import.log


If you try to use SHOW=Y, you will see the below message in the output.
oracle@Linux01:[/u01/app/oracle/dpump] $ impdp atoorpu dumpfile=test1.dmp SHOW=Y

Import: Release 11.2.0.4.0 - Production on Wed Feb 3 15:17:27 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "show=TRUE" Location: Command Line, Replaced with: "sqlfile=test1.sql"
Legacy Mode has set nologfile=true parameter.
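As a side note, sqlfile combines nicely with the INCLUDE filter, so you can extract the DDL for a single object type from the dump. A small example (the output file name is arbitrary):

oracle@Linux01:[/u01/app/oracle/dpump] $ impdp atoorpu directory=DPUMP dumpfile=test1.dmp sqlfile=TEST1_indexes.sql include=INDEX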


Categories: DBA Blogs

Opatchauto Session failed: Parameter validation failed

Pakistan's First Oracle Blog - Wed, 2016-02-10 20:12
While applying a PSU on the Grid home in 12c, due to a patch conflict you might have to roll back a few patches before you can apply the PSU.

After rolling back the patches from the grid home, when you try to run the opatchauto analyze command again, you might encounter the following error:





[root ~]# $Grid_Home/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp
OPatch Automation Tool
Copyright (c) 2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.10
OUI Version        : 12.1.0.2.0
Running from       : $Grid_Home

opatchauto log file: $Grid_Home/cfgtoollogs/opatchauto/22191349/opatch_gianalyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.
OCM RSP file has been ignored in analyze mode.

Clusterware is either not running or not configured. You have the following 2 options:
1. Configure and start the Clusterware on this node and re-run the tool
2. Run the tool with '-oh ' to first patch the Grid Home, then invoke tool with '-database ' or '-oh ' to patch the RAC home

Parameter Validation: FAILED

Opatchauto Session failed: Parameter validation failed
Exception in thread "main" java.lang.RuntimeException: java.io.IOException: Stream closed
                at oracle.opatchauto.gi.GILogger.writeWithoutTimeStamp(GILogger.java:432)
                at oracle.opatchauto.gi.GILogger.printStackTrace(GILogger.java:447)
                at oracle.opatchauto.gi.OPatchauto.main(OPatchauto.java:97)
Caused by: java.io.IOException: Stream closed
                at java.io.BufferedWriter.ensureOpen(BufferedWriter.java:98)
                at java.io.BufferedWriter.write(BufferedWriter.java:203)
                at java.io.Writer.write(Writer.java:140)
                at oracle.opatchauto.gi.GILogger.writeWithoutTimeStamp(GILogger.java:426)
                ... 2 more
opatchauto failed with error code 1.
Then, if you try to start the HAS services, you get the following error:
[root ~]# $Grid_Home/bin/crsctl start has
CRS-6706: Oracle Clusterware Release patch level ('3749979535') does not match Software patch level ('2278979115'). Oracle Clusterware cannot be started.
CRS-4000: Command Start failed, or completed with errors.
SOLUTION:
So in order to resolve this, you need to issue the following command as the root user:

$ORA_GI_HOME/crs/install/roothas.pl -postpatch
It will also start the HAS services.
Then run the analyze command again as given above, and it will work.



Categories: DBA Blogs

Partner Webcast – Oracle PaaS: Oracle Mobile Cloud Service Development

Mobile computing has experienced explosive growth in the past decade, and this is just the beginning. At the heart of any organization’s digital strategy, Mobile is the primary screen and engagement...

Categories: DBA Blogs

Step by Step Jan 2016 PSU Patch Apply on 12c Grid and RDBMS Homes in Linux

Pakistan's First Oracle Blog - Tue, 2016-02-09 20:05

The following step-by-step action plan is for a single instance database stored on ASM in 12.1.0.2 on Linux (OEL 6, 64-bit in this case).






Step 1: Update the OPATCH utility (ETA: 15 minutes)

For Database home:

$ unzip p6880880_121010_LINUX.zip -d /u01/app/oracle/product/12.1.0/db_1
$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch version

For Grid home:

$ unzip p6880880_121010_LINUX.zip -d /u01/app/oracle/12.1.0.2/grid
$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch version

Step 2: Create ocm.rsp file (ETA: 5 minutes)

Note: Press the Enter/Return key without providing any input, and answer Yes.

$ export ORACLE_HOME=/u01/app/oracle/12.1.0.2/grid
$ $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /stage/ocm.rsp

Step 3: Validation of Oracle Inventory (ETA: 5 minutes)

Before beginning patch application, check the consistency of inventory information for the GI home and each database home to be patched. Run the following command as the respective Oracle home owner to check the consistency.

For database home:

$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/db_1

For Grid home:

$ /u01/app/oracle/12.1.0.2/grid/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/12.1.0.2/grid

If this command succeeds, it lists the Oracle components that are installed in the home. Save the output so you have the status prior to the patch apply.

Step 4: Stage the Patch (ETA: 5 minutes)

$ mkdir /stage/PSUpatch
$ cp /stage/p22191349_121020_Linux-x86-64.zip /stage/PSUpatch

Check that the directory is empty.

$ cd /stage/PSUpatch
$ ls

Unzip the patch as the grid home owner.

$ unzip p22191349_121020_<platform>.zip

Step 5: One-off Patch Conflict Detection and Resolution (ETA: 10 minutes)

Run it as the root user:

/u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp

It will ask you to roll back identical patches, like this:

Analyzing patch(es) on "/u01/app/oracle/12.1.0.2/grid" ...
Patch "/stage/PSUpatch/22191349/21436941" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.
Patch "/stage/PSUpatch/22191349/21948341" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.
Patch "/stage/PSUpatch/22191349/21948344" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.
Patch "/stage/PSUpatch/22191349/21948354" is already installed on "/u01/app/oracle/12.1.0.2/grid". Please rollback the existing identical patch first.

So first roll back the above four patches by going to their directory and issuing the following as the grid owner from the grid home:

opatch rollback -id 21948354 -local -oh /u01/app/oracle/12.1.0.2/grid (Repeat for all 4 patches)

Note: In some cases, oddly, I had to shut down the HAS services as the root user before the patch rollback by using:

/u01/app/oracle/12.1.0.2/grid/bin/crsctl stop has -f

After this, again run:

/u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -analyze -ocmrf /stage/ocm.rsp

If the analyze command fails, then run the following as the root user:

$ORA_GI_HOME/crs/install/roothas.pl -postpatch

It will also start the HAS services.

Then run the analyze command again as given above. It will show something like:

Analyzing patch(es) on "/u01/app/oracle/12.1.0.2/grid" ...
Patch "/stage/PSUpatch/22191349/21436941" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.
Patch "/stage/PSUpatch/22191349/21948341" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.
Patch "/stage/PSUpatch/22191349/21948344" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.
Patch "/stage/PSUpatch/22191349/21948354" successfully analyzed on "/u01/app/oracle/12.1.0.2/grid" for apply.

Now you are good to apply the patch. Proceed to the next step.

Step 6: Apply the Patch (ETA: 60 minutes)

(Note: This should apply the patch in both the GI and RDBMS homes, but it is unreliable in that respect, so after it completes we need to check opatch lsinventory to make sure that it also applied the patches in the RDBMS home.)

As the root user, execute the following command:

# /u01/app/oracle/12.1.0.2/grid/OPatch/opatchauto apply /stage/PSUpatch/22191349 -ocmrf /stage/ocm.rsp

In case it doesn't apply in the RDBMS home, then run:

/u01/app/oracle/product/12.1.0/db_1/OPatch/opatchauto apply /stage/PSUpatch/22191349 -oh /u01/app/oracle/product/12.1.0/db_1 -ocmrf /stage/ocm.rsp

Make sure the above applies both the OCW and PSU patches. You can verify that from opatch lsinventory. If only the OCW patch is present in the output and no PSU (which is likely the case), then issue the following from the Oracle home as the Oracle database owner, after shutting down the database:

/u01/app/oracle/product/12.1.0/db_1/OPatch/opatch apply -oh /u01/app/oracle/product/12.1.0/db_1 -local /stage/PSUpatch/22191349/21948354

Step 7: Loading Modified SQL Files into the Database (ETA: 60 minutes)

% sqlplus /nolog
SQL> Connect / as sysdba
SQL> startup
SQL> quit
% cd $ORACLE_HOME/OPatch
% ./datapatch -verbose

Step 8: Check for the list of patches applied to the database (ETA: 5 minutes)

SQL> select action_time, patch_id, patch_uid, version, status, bundle_series, description from dba_registry_sqlpatch;
Categories: DBA Blogs