
DBA Blogs

Exadata Vulnerability

Pakistan's First Oracle Blog - Mon, 2015-02-02 19:49
This Exadata vulnerability is related to the glibc GHOST vulnerability (CVE-2015-0235). A heap-based buffer overflow was found in glibc's __nss_hostname_digits_dots() function, which is used by the gethostbyname() and gethostbyname2() glibc function calls.

A remote attacker able to make an application call either of these functions could use this flaw to execute arbitrary code with the permissions of the user running the application.

In order to check if your Exadata system suffers from this vulnerability, use:

[root@server ~]# ./ghostest-rhn-cf.sh
vulnerable
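
Alternatively, as a hedged cross-check (assuming an RPM-based Exadata compute node), you can compare the installed glibc release against the fixed version listed in the MOS note below:

[root@server ~]# rpm -q glibc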

The solution and action plan for this vulnerability are available from My Oracle Support in the following document:

glibc vulnerability (CVE-2015-0235) patch availability for Oracle Exadata Database Machine (Doc ID 1965525.1)
Categories: DBA Blogs

Database Flashback -- 1

Hemant K Chitale - Sun, 2015-02-01 09:25
A first post on Database Flashback.

Enabling Database Flashback in 11.2 non-RAC

SQL*Plus: Release 11.2.0.2.0 Production on Sun Feb 1 23:13:17 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select version, status, database_status
2 from v$instance;

VERSION           STATUS       DATABASE_STATUS
----------------- ------------ -----------------
11.2.0.2.0        OPEN         ACTIVE

SYS>select flashback_on, database_role
2 from v$database;

FLASHBACK_ON       DATABASE_ROLE
------------------ ----------------
NO                 PRIMARY

SYS>
SYS>show parameter db_recovery_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /home/oracle/app/oracle/flash_
                                                 recovery_area
db_recovery_file_dest_size           big integer 3852M
SYS>
SYS>select * from v$flash_recovery_area_usage;

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                        .95                       .94               5
BACKUP PIECE                      28.88                       .12               5
IMAGE COPY                            0                         0               0
FLASHBACK LOG                         0                         0               0
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.

SYS>

So, the above output shows that the database is OPEN but Flashback is not enabled.
Let me enable Flashback now.
SYS>alter database flashback on;

Database altered.

SYS>select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SYS>select * from v$flash_recovery_area_usage;

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                        .95                       .94               5
BACKUP PIECE                      28.88                       .12               5
IMAGE COPY                            0                         0               0
FLASHBACK LOG                       .41                         0               2
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.

SYS>

Immediately after enabling Flashback, Oracle shows usage of the FRA for Flashback Logs.

Note: Although 11.2 allows you to enable Flashback in an OPEN database, I would suggest doing so when the database is not active.
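
Once Flashback is enabled, it is worth keeping an eye on flashback log space as activity builds up. A minimal check (not part of the demo output above):

SYS>select retention_target, flashback_size, estimated_flashback_size
  2  from v$flashback_database_log;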

Categories: DBA Blogs

12c Dataguard: Restore Data File From Service

Oracle in Action - Fri, 2015-01-30 23:15


Starting with Oracle Database 12c, in a Data Guard environment, you can restore data files on a primary (standby) database by connecting to a standby (primary) database over the network.

RMAN restores database files, over the network, from the physical standby (primary) database by using the FROM SERVICE clause of the RESTORE command. The FROM SERVICE clause provides the service name of the physical standby (primary) database from which the files must be restored. During the restore operation, RMAN creates backup sets on the physical standby (primary) database of the files that need to be restored and then transfers these backup sets to the target database over the network.

Optionally, you can use SECTION SIZE to restore files from the source database as multisection backup sets. You can also compress the transferred files by specifying the USING COMPRESSED BACKUPSET clause.
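
For illustration, a minimal sketch combining both options with the tablespace and service name used later in this post (the 500M section size is an arbitrary value):

RMAN> restore tablespace dev1:sample from service 'london'
      using compressed backupset section size 500m;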

Prerequisites for restoring files from a remote host:

  • The password file on the source database and the target database must be the same.
  • The tnsnames.ora file on the target database host must contain an entry that corresponds to the remote database (a sample entry is sketched below).
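
A minimal sketch of such a tnsnames.ora entry for the standby service used in this demonstration (host name and port are illustrative assumptions):

LONDON =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host02)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = london))
  )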

In this post, I will demonstrate the restore of a data file on the primary from the standby using the FROM SERVICE clause of the RMAN RESTORE command.

Current scenario:

  • Primary CDB : Boston
  • Physical Standby CDB : London
  • PDB : Dev1

– Create a new tablespace called sample in PDB dev1 on primary (boston)

BOSTON>alter session set container=dev1;
        create tablespace sample
        datafile       '/u01/app/oracle/oradata/boston/dev1/sample01.dbf'
         size 5m;

– Verify that parameter standby_file_management = auto on the standby database (london)

LONDON>sho parameter standby_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO

– Verify that the datafile for tablespace sample has been created on the physical standby (london)

LONDON>select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/london/system01.dbf
/u01/app/oracle/oradata/london/sysaux01.dbf
/u01/app/oracle/oradata/london/undotbs01.dbf
/u01/app/oracle/oradata/london/pdbseed/system01.dbf
/u01/app/oracle/oradata/london/users01.dbf
/u01/app/oracle/oradata/london/pdbseed/sysaux01.dbf
/u01/app/oracle/oradata/london/dev1/system01.dbf
/u01/app/oracle/oradata/london/dev1/sysaux01.dbf
/u01/app/oracle/oradata/london/dev1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/london/dev1/example01.dbf
/u01/app/oracle/oradata/london/dev1/sample01.dbf

– Create table hr.employees2 in new tablespace sample on primary

BOSTON>sho con_name

CON_NAME
------------------------------
DEV1

BOSTON>create table hr.employees2 tablespace sample
       as select * from hr.employees;
      select count(*) from hr.employees2;

COUNT(*)
----------
107

– To simulate loss of the datafile, rename sample01.dbf to sample01.sav on the primary host

BOSTON>!mv /u01/app/oracle/oradata/boston/dev1/sample01.dbf /u01/app/oracle/oradata/boston/dev1/sample01.sav

– Restart the primary – an error occurs while opening, as the datafile is missing

BOSTON>conn / as sysdba

       shu abort;
       startup
Database mounted.
ORA-01157: cannot identify/lock data file 12 - see DBWR trace file
ORA-01110: data file 12: '/u01/app/oracle/oradata/boston/dev1/sample01.dbf'

– Take the missing datafile offline on the primary and then open the primary database

BOSTON>alter session set container=dev1;
       alter tablespace sample datafile offline;
       alter session set container=cdb$root;
       alter database open;

BOSTON>sho pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 DEV1                           MOUNTED

BOSTON>alter pluggable database dev1 open;

– Connect to primary (boston) using RMAN
[oracle@host01 ~]$ . oraenv
ORACLE_SID = [boston] ?

[oracle@host01 ~]$ rman target /

– Restore the datafile from the physical standby database (london) over the network

RMAN> restore tablespace dev1:sample from service 'london';

Starting restore at 23-JAN-15
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service london
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00012 to /u01/app/oracle/oradata/boston/dev1/sample01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 23-JAN-15

– Recover the restored tablespace using archivelogs available locally on the primary database (boston)

RMAN> recover tablespace dev1:sample;

Starting recover at 23-JAN-15
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 23-JAN-15

– Bring the tablespace online

BOSTON>alter session set container=dev1;
       alter tablespace sample datafile online;
       select count(*) from hr.employees2;

COUNT(*)
----------
107

I hope this post was useful.
Your comments and suggestions are always welcome.

References:

https://docs.oracle.com/database/121/RCMRF/rcmsynta2008.htm#RCMRF149

http://docs.oracle.com/database/121/BRADV/rcmadvre.htm#BRADV681




Categories: DBA Blogs

OTNYathra 2015

Oracle in Action - Fri, 2015-01-30 05:23


The Oracle ACE Directors and Java Champions will be organizing an evangelist event called ‘OTNYathra 2015’ during February 2015, in which a series of 7 conferences will be held across 7 major cities of India in a period of 2 weeks. This event will bring the Oracle community together, spread knowledge and increase networking opportunities in the region. Detailed information about the event can be viewed at http://www.otnyathra.com.

I will be presenting a session on Adaptive Query Optimization on 13th Feb 2015 at FMDI, Sector 17B, IFFCO Chowk, Gurgaon.

Thanks to Sir Murali Vallath and his team for organizing it and giving me an opportunity to present.

Hope to see you there!!

 




Categories: DBA Blogs

Log Buffer #408, A Carnival of the Vanities for DBAs

Pythian Group - Thu, 2015-01-29 21:45

This Log Buffer Edition covers innovative blog posts from various fields of Oracle, MySQL and SQL Server. Enjoy!


Oracle:

A user reported an ORA-01114 and an ORA-27069 in a 3rd party application running against an Oracle 11.1 database.

Oracle SOA Suite 12c: Multithreaded instance purging with the Java API.

Oracle GoldenGate for Oracle Database has introduced several features in Release 12.1.2.1.0.

Upgrade to 12c and Plugin – one fast way to move into the world of Oracle Multitenant.

The Oracle Database Resource Manager is an infrastructure that provides granular control of database resources allocated to users, applications, and services, enabling you to manage multiple workloads that are contending for system and database resources.

SQL Server:

Database ownership is an old topic for SQL Server pros.

Using T-SQL to Perform Z-Score Column Normalization in SQL Server.

The APPLY operator allows you to join a record set with a function, and apply the function to every qualifying row of the table (or view).

Dynamic Management Views (DMVs) are a significant and valuable addition to the DBA’s troubleshooting armory, laying bare previously unavailable information regarding the under-the-covers activity of your database sessions and transactions.

Grant Fritchey reviews Midnight DBA’s Minion Reindex, a highly customizable set of scripts that take on the task of rebuilding and reorganizing your indexes.

MySQL:

It’s A New Year – Take Advantage of What MySQL Has To Offer.

MySQL High Availability and Disaster Recovery.

MariaDB Galera Cluster 10.0.16 now available.

Multi-threaded replication with MySQL 5.6: Use GTIDs!

MySQL and the GHOST: glibc gethostbyname buffer overflow.

Categories: DBA Blogs

From 0 to Cassandra – An Exhaustive Approach to Installing Cassandra

Pythian Group - Thu, 2015-01-29 21:44

All around the Internet you can find lots of guides on how to install Cassandra on almost every Linux distro around. But normally all of this information is based on the packaged versions, and omits some parts that are essential for proper Cassandra functioning.

Note: If you are adding a machine to an existing cluster, please approach this guide with caution and replace the configurations recommended here with the ones you already have on your cluster, especially the Cassandra configuration.

Without further conversation, let's start!

Essentials

Start your machine and install the following:

  • ntp (Packages are normally ntp, ntpdate and ntp-doc)
  • wget (Unless you have your packages copied over via other means)
  • vim (Or your favorite text editor)

Retrieve the following packages: the Oracle Java JDK tarball, the JNA jar, and the Apache Cassandra binary tarball (referred to below as [java_file].tar.gz, jna-VERSION.jar and [cassandra_file].tar.gz).

Installation
Set up NTP

This can be more or less dependent on your system, but the following commands should do it (you can also check this guide):

~$ chkconfig ntpd on
~$ ntpdate pool.ntp.org
~$ service ntpd start
Set up Java (Let’s assume we are doing this in /opt)

Extract Java and install it:

~$ tar xzf [java_file].tar.gz
~$ update-alternatives --install /usr/bin/java java /opt/java/bin/java 1 
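
If the tarball unpacked into a versioned directory rather than /opt/java (for example jdk1.7.0_75, an assumed name based on the version shown below), move it into place first so the /opt/java paths used here resolve:

~$ mv jdk1.7.0_75 /opt/java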

Check that it is installed:

~$ java -version
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)

Let’s put JNA into place:

~$ mv jna-VERSION.jar /opt/java/lib
Set up Cassandra (Let’s assume we are doing this in /opt)

 

Extract Cassandra:

~$ tar xzf [cassandra_file].tar.gz

Create Cassandra Directories:

~$ mkdir /var/lib/cassandra
~$ mkdir /var/lib/cassandra/commitlog
~$ mkdir /var/lib/cassandra/data
~$ mkdir /var/lib/cassandra/saved_caches
~$ mkdir /var/log/cassandra
Configuration

Linux configuration
~$ vim /etc/security/limits.conf

Add the following:

root soft memlock unlimited
root hard memlock unlimited
root - nofile 100000
root - nproc 32768
root - as unlimited

On CentOS, RHEL and OEL, also set the following in /etc/security/limits.d/90-nproc.conf:

* - nproc 32768

Add the following to the sysctl file:

~$ vim /etc/sysctl.conf
vm.max_map_count = 131072

Finally (Reboot also works):

~$ sysctl -p

Firewall: the following ports must be open:

# Internode Ports
7000    Cassandra inter-node cluster communication.
7001    Cassandra SSL inter-node cluster communication.
7199    Cassandra JMX monitoring port.
# Client Ports
9042    Cassandra client port (Native).
9160    Cassandra client port (Thrift).
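
How you open them depends on your firewall. As a rough sketch with iptables (assuming iptables on CentOS/RHEL; tighten the source addresses to your own cluster network):

~$ iptables -A INPUT -p tcp -m multiport --dports 7000,7001,7199,9042,9160 -j ACCEPT
~$ service iptables save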

Note: Some/most guides tell you to disable swap. I think of swap as an acrobat's safety net: it should never have to be put to use, but it exists in case it is needed. As such, I never disable it; I just set a low swappiness (around 10). You can read more about it here and here.
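
Setting that low swappiness is a one-liner, plus an edit to make it persistent across reboots:

~$ sysctl -w vm.swappiness=10
~$ echo "vm.swappiness = 10" >> /etc/sysctl.conf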

Cassandra configuration

Note: Cassandra has a LOT of settings; these are the ones you should always set if you are going live. Lots of them depend on hardware and/or workload. Maybe I'll write a post about them in the near future. In the meantime, you can read about them here.

~$ vim /opt/cassandra/conf/cassandra.yaml

Edit the following fields:

cluster_name: <Whatever you would like to call it>
data_file_directories: /var/lib/cassandra/data
commitlog_directory: /var/lib/cassandra/commitlog

saved_caches_directory: /var/lib/cassandra/saved_caches

# Assuming this is your first node, this should be reachable by other nodes
seeds: "<IP>"

# This is where you listen for intra node communication
listen_address: <IP>

# This is where you listen to incoming client connections
rpc_address: <IP>

endpoint_snitch: GossipingPropertyFileSnitch

Edit the snitch property file:

~$ vim /opt/cassandra/conf/cassandra-rackdc.properties

Add the DC and the RACK the server is in. Ex:

dc=DC1
rack=RAC1

Finally make sure your logs go to /var/log/cassandra:

~$ vim /opt/cassandra/conf/logback.xml
Testing

Start Cassandra

~$ service cassandra start

You should see no errors here; wait a bit, then:

~$ grep  JNA /var/log/cassandra/system.log
INFO  HH:MM:SS JNA mlockall successful

Then check the status of the ring:

~$ nodetool status
Datacenter: DC1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address        Load       Owns   Host ID                               Token                                    Rack
UN  185.10.49.136  140.59 KB  100.0%  5c3c697f-8bfd-4fb2-a081-7af1358b313f  0                                        RAC

Creating a keyspace, a table and inserting some data:

~$ cqlsh xxx.yy.zz.ww

cqlsh> CREATE KEYSPACE sandbox WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'DC1' : 1};
Should give no errors
cqlsh> USE sandbox;
cqlsh:sandbox> CREATE TABLE data (id uuid, data text, PRIMARY KEY (id));
cqlsh:sandbox> INSERT INTO data (id, data) VALUES (c37d661d-7e61-49ea-96a5-68c34e83db3a, 'testing');
cqlsh:sandbox> SELECT * FROM data;

id                                   | data
--------------------------------------+---------
c37d661d-7e61-49ea-96a5-68c34e83db3a | testing

(1 rows)

And we are done, you can start using your Cassandra node!

Categories: DBA Blogs

GGSCI Commands : History and Command (!) recall

DBASolved - Wed, 2015-01-28 11:08

Ever get tired of typing commands?  Ever wish there was a way to recall commands from GGSCI?  Well, you are in luck!  There are two ways this can be done (listed below).

Commands:
1. History
2. ! (command)

The first option, History, is a command that allows you to view a list of the most recent commands executed in the GGSCI interface since the session started.  

Example:

GGSCI> history

Output:

GGSCI Command History

    3: stats ext ext
    4: stats exti
    5: info all
    6: info manager
    7: edit mgr
    8: edit manager
    9: edit params mgr
   10: set editor vi
   11: edit params mgr
   12: history

Notice that the last 10 commands are listed with the history command. This list can be tailored by supplying a number after the history command.

Example:

GGSCI> history [n]

Output:
GGSCI Command History

    9: edit params mgr
   10: set editor vi
   11: edit params mgr
   12: history
   13: history 5

By using a number with the history command, the last N commands will be displayed. In the example above, I wanted the last five commands, so the commands that meet this requirement are displayed, including the history command itself.

Now that we can see a list of the last commands run, how can we execute a command from that list? This is quite simple: just use the command (!) option. An exclamation point (!) in Oracle GoldenGate acts as a command recall/execution operator.

Example:

GGSCI> !

In the example above, the command (!) will execute the last command in the list of commands stored in the session history. By adding a number to the command (!), we can tell GGSCI to execute that specific command.

Example:

GGSCI> ! 5

In the example above, I’m asking to rerun the “info all” command that can be seen in the history earlier in the post.  The output from running the command will be a listing of the current status of the Oracle GoldenGate environment.

Output:

GGSCI (oel.acme.com) 15> ! 5
info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
JAGENT      RUNNING
EXTRACT     RUNNING     EXT         00:00:00      00:00:02  
EXTRACT     RUNNING     EXTI        00:00:06      00:00:09  Description "Integrated Extract"
EXTRACT     RUNNING     PMP         00:00:00      00:00:00
EXTRACT     RUNNING     PMPI        00:00:00      00:00:05  Description "Integrated Pump"

Both of these commands can be used to cut down on typing in the GGSCI utility and make working with it a bit easier.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Log Buffer #407, A Carnival of the Vanities for DBAs

Pythian Group - Tue, 2015-01-27 08:32

This Log Buffer Edition keeps the aim high and brings a few of the best blog posts from Oracle, SQL Server and MySQL.

Oracle:

3 Modes for Moving Data to the BI Applications DW from a Source Application Database.

JSON for APEX Developers.

Neelakanth Nadgir posted a useful utility that prints out various statistics about the ZFS Adaptive Replacement Cache (ARC).

Obtaining Bonus Depreciation Methods for Oracle Fixed Assets.

Existing News – Java Cloud Service just got an update – WebLogic Server 12.1.3

SQL Server:

Tracking Database DDL Changes with SQLVer.

While a diminished level of control is certainly a factor to consider when contemplating migration of on-premises systems to Microsoft Azure, especially when dealing with PaaS resources such as Azure SQL Database, you have a range of techniques at your disposal that allow you to control and monitor both the status of and access to your Azure-resident services.

A Custom Execution Method – Level 19 of the Stairway to Integration Services

When a SQL Server database is operating smoothly and performing well, there is no need to be particularly aware of the transaction log, beyond ensuring that every database has an appropriate backup regime and restore plan in place.

SQL Server Reporting Services Basics: Deploying Reports

MySQL:

Mo’ Data, Mo’ Problems

FromDual.en: FromDual Backup Manager for MySQL 1.2.1 has been released

Tracking MySQL query history in long running transactions

MySQL Cluster 7.4.3 RELEASE CANDIDATE now available

Streaming MariaDB backups in the cloud.

Categories: DBA Blogs

Indexing Points to Remember

Pakistan's First Oracle Blog - Mon, 2015-01-26 18:54
Indexing depends upon the queries in the application.

There is no one-size-fits-all break-even point for indexed versus table scan access. If only a few rows are being accessed, the index will be preferred.

If almost all the rows are being accessed, the full table scan will be preferred. In between these two extremes, your “mileage” will vary.

A concatenated index is more useful if it also supports queries where not all columns are specified. For instance SURNAME, FIRSTNAME is more useful than FIRSTNAME, SURNAME because queries against SURNAME only are more likely to occur than queries against FIRSTNAME only.

Global indexes provide better performance for queries that must span all partitions.
Categories: DBA Blogs

Partner Webcast – Oracle Mobile Security Suite for Mobile Applications

As the workforce becomes increasingly mobile, the need for organizations to control access and protect company data on mobile devices has become more critical than ever. BYOD (Bring Your Own Device)...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Map OS Groups To Administrative Privileges After Installation

Oracle in Action - Wed, 2015-01-21 23:10


During database software installation, the user is prompted to enter the names of the various operating system groups mapping to the various administrative privileges (SYSDBA, SYSOPER, SYSBACKUP, SYSKM, SYSDG). One might map one operating system group to multiple administrative privileges if role separation is not desired. In case the need for role separation arises later, the mapping can be changed by updating the $ORACLE_HOME/rdbms/lib/config.c file and then relinking the Oracle binaries. This post explains the various steps.

While installing the 12.1.0.2 database software on Linux, I had not created OS groups corresponding to the administrative privileges SYSBACKUP, SYSKM and SYSDG. Now I want the OS groups dgdba, backupdba and kmdba to map to the SYSDG, SYSBACKUP and SYSKM administrative privileges respectively.

– Check that groups dgdba, backupdba and kmdba do not exist

[root@host01 etc]# cat /etc/group | grep dba
dba:x:501:oracle

– Create groups dgdba, backupdba and kmdba

#groupadd -g 54321 dgdba
#groupadd -g 54322 backupdba
#groupadd -g 54323 kmdba

– Check that groups dgdba, backupdba and kmdba have been created

[root@host01 etc]# cat /etc/group | grep dba
dba:x:501:oracle
dgdba:x:54321:
backupdba:x:54322:
kmdba:x:54323:

– Create a user test which is a member of dgdba group

[root@host01 /]# useradd test -g oinstall -G dgdba

[root@host01 /]# passwd test
Changing password for user test.
New UNIX password:

– Login as test user

[root@host01 /]# su - test

[test@host01 ~]$ . oraenv
ORACLE_SID = [test] ? orcl

– As the test user, try to connect as sysdg – this fails as the dgdba group has not been mapped to the SYSDG administrative privilege

[test@host01 ~]$ dgmgrl

DGMGRL> connect sysdg/xx
ORA-01017: invalid username/password; logon denied

Warning: You are no longer connected to ORACLE.

– Verify in the configuration file that currently the OS group dba corresponds to the administrative privileges SYSDBA, SYSKM, SYSDG and SYSBACKUP

[oracle@host01 ~]$ cat $ORACLE_HOME/rdbms/lib/config.c |grep define
/* SS_DBA_GRP defines the UNIX group ID for sqldba adminstrative access. */
#define SS_DBA_GRP "dba"
#define SS_OPER_GRP "oper"
#define SS_ASM_GRP ""
#define SS_BKP_GRP "dba"
#define SS_DGD_GRP "dba"
#define SS_KMT_GRP "dba"

– Edit the configuration file so that the OS groups dgdba, backupdba and kmdba map to the SYSDG, SYSBACKUP and SYSKM administrative privileges respectively.

[oracle@host01 ~]$ vi $ORACLE_HOME/rdbms/lib/config.c
#define SS_DBA_GRP "dba"
#define SS_OPER_GRP "oper"
#define SS_ASM_GRP ""
#define SS_BKP_GRP "backupdba"
#define SS_DGD_GRP "dgdba"
#define SS_KMT_GRP "kmdba"

– To relink the Oracle binaries, shut down all Oracle processes of all instances

a. Shut down the listener.

$ lsnrctl stop

b. Shut down all instances.

$ ps -ef |grep pmon |grep -v grep
oracle 11832 1 0 15:21 ? 00:00:00 ora_pmon_orcl

ORCL> shutdown immediate

– Relink the binaries

[oracle@host01 ~]$ cd $ORACLE_HOME/bin; relink all

writing relink log to: /u01/app/oracle/product/12.1.0.2/dbhome_1/install/relink.log

– Now, as the test user, connecting as sysdg succeeds

[test@host01 bin]$ dgmgrl

DGMGRL> connect sysdg/xx
Connected as SYSDG.

– Optionally, modify the existing OS user oracle to become part of the new groups

#usermod -a -G dgdba,backupdba,kmdba oracle

[root@host01 /]# su - oracle

[oracle@host01 ~]$ id
uid=500(oracle) gid=500(oinstall) groups=500(oinstall),501(dba),502(oper),503(asmadmin),54321(dgdba),54322(backupdba),54323(kmdba)

Hope it helps!

Your comments and suggestions are always welcome.

References:

https://community.oracle.com/message/12806120?et=watches.email.thread#12806120

https://www.linkedin.com/groups/Map-OS-Groups-Administrative-Privileges-3698383.S.5964260145260216320?view=&item=5964260145260216320&type=member&gid=3698383&trk=eml-b2_anet_digest-hero-1-hero-disc-disc-0&midToken=AQE9SYOdN_UFjg&fromEmail=fromEmail&ut=1fAfQMlI9DO6A1




Categories: DBA Blogs

Oracle Database 12c Patching: DBMS_QOPATCH, OPATCH_XML_INV, and datapatch

Pythian Group - Mon, 2015-01-19 19:36
Background

Oracle Database 12c brings us many new features, including the long-needed ability to run OPatch and query the patch software installed in the Oracle Home programmatically through SQL using the new DBMS_QOPATCH package.

If you’re a DBA working in an environment where patching consistently among databases is important, this is a welcome enhancement. Sometimes one-off critical bug fix patches are important in the environment. Other times regular and consistent application of the quarterly PSUs is important to the organization (e.g. for regulatory compliance). For the sake of illustration I’m going to use the latter as my need case for the duration of this article, as I’ve regularly run into the situation where clients need PSUs applied properly, regularly, and consistently to their databases.

As a recap, many Oracle patches, including PSUs, require a two-step process to implement properly. First the software change needs to be applied to the associated Oracle Home via the OPatch utility. Secondly it needs to be installed into all databases running in that Oracle Home (possibly updating internal packages and objects).

With Oracle 11g the latest patch is applied within the database using:

SQL> @catbundle.sql psu apply

Oracle 12c introduces a new utility called “datapatch” which replaces the need to run the 11g command shown above.

As with Oracle 11g, you first install the patch into the Oracle Home with all services & databases down. But with Oracle Database 12c, after restarting the database, simply run datapatch from the OPatch directory:

cd $ORACLE_HOME/OPatch
./datapatch -verbose

 

Given that patching is a two-step process, some common DBA questions are:

1) Has a patch been installed in a given database?
2) Has a patch been installed into the $ORACLE_HOME via Opatch but not one or more of the databases?
3) Is the patch in the database but not the $ORACLE_HOME?

You may be thinking that #3 isn’t very likely, but the reality is that it does happen. Circumstances that lead to a database being at a higher patch level than the Oracle Home include:

A) The database was created from an RMAN duplicate from a home with a patch to one without.
B) A Data Guard standby switch-over or fail-over where the primary Oracle Home was patched but the standby home was missed.
C) A 12c PDB that was unplugged from a patched CDB and plugged into an unpatched CDB.
D) The patch was uninstalled via Opatch from the Oracle home without uninstalling from the database (unlikely).

Since patching is a two-step process, historically answering questions such as these has also been a two-step process, requiring the DBA to query the Oracle Home inventory and the database and to manually correlate and reconcile the results. This is cumbersome, prone to human error, and not scalable due to the manual component.

But with Oracle 12c and the new DBMS_QOPATCH package, both can be queried programmatically and joined using SQL, allowing the DBA to easily report on questions like 1, 2, and 3 above via SQL queries, custom extensions to monitoring tools, reporting programs, etc. Or just to report accurately on patch deployments for security compliance reports, general database security assessments, or database health checks.

This is actually quite a significant breakthrough with regards to patch management!

 

How it works

The first thing to note is that in Oracle Database 12c there are some new Directory Objects installed by default:

SQL> select owner, directory_name, directory_path from dba_directories
  2  where directory_name like 'OPATCH%' order by 2;

OWNER        DIRECTORY_NAME       DIRECTORY_PATH
------------ -------------------- --------------------------------------------------------------------------------
SYS          OPATCH_INST_DIR      /u01/app/oracle/product/12.1.0/dbhome_1/OPatch
SYS          OPATCH_LOG_DIR       /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
SYS          OPATCH_SCRIPT_DIR    /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch

SQL>

 

Two of those point to a new QOpatch directory which contains a batch file which runs the OPatch utility:

SQL> !ls /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
qopatch_log.log  qopiprep.bat

SQL> !cat /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch/qopiprep.bat
#!/bin/sh
#
# $Header: rdbms/src/client/tools/qpinv/qopiprep.bat /unix/2 2013/04/04 20:59:27 tbhukya Exp $
#
# qopiprep.bat
#
# Copyright (c) 2012, 2013, Oracle and/or its affiliates. All rights reserved.
#
#    NAME
#      qopiprep.bat - bat file for preprocessor
#
#    DESCRIPTION
#      bat file for external table preprocessor.
#
#    NOTES
#      .
#
#    MODIFIED   (MM/DD/YY)
#    tbhukya     04/03/13 - Bug 16226172 : Forward merge of fix 16076845
#    tbhukya     09/23/12 - Creation
#
cd $ORACLE_HOME
PATH=/bin:/usr/bin
export PATH

# Option: "-retry 0" avoids retries in case of locked inventory.

# Option: "-invPtrLoc" is required for non-central-inventory
# locations. $OPATCH_PREP_LSINV_OPTS which may set by users
# in the environment to configure special OPatch options
# ("-jdk" is another good candidate that may require configuration!).

# Option: "-all" gives information on all Oracle Homes
# installed in the central inventory.  With that information, the
# patches of non-RDBMS homes could be fetched.


$ORACLE_HOME/OPatch/opatch lsinventory -xml  $ORACLE_HOME/QOpatch/xml_file.xml -retry 0 -invPtrLoc $ORACLE_HOME/oraInst.loc >> $ORACLE_HOME/QOpatch/stout.txt
`echo "UIJSVTBOEIZBEFFQBL" >> $ORACLE_HOME/QOpatch/xml_file.xml`
echo `cat $ORACLE_HOME/QOpatch/xml_file.xml`
rm $ORACLE_HOME/QOpatch/xml_file.xml
rm $ORACLE_HOME/QOpatch/stout.txt

 

The description text in this Oracle-provided file states that it’s a batch file used for external table preprocessing. The preprocessor clause was added in Oracle 11g Release 2 (and back-ported to Oracle 11g Release 1) to allow for execution of an external script/file to process data before reading it in the database via an external table. A further explanation of this feature is beyond the scope of this article.

Consequently we know an external table is involved, and it’s not too hard to find and understand it:

SQL> select owner, table_name from dba_external_tables where table_name like 'OPATCH%' order by 1,2;

OWNER        TABLE_NAME
------------ --------------------
SYS          OPATCH_XML_INV

SQL> select dbms_metadata.get_ddl('TABLE','OPATCH_XML_INV','SYS') from dual;

DBMS_METADATA.GET_DDL('TABLE','OPATCH_XML_INV','SYS')
--------------------------------------------------------------------------------

  CREATE TABLE "SYS"."OPATCH_XML_INV"
   (    "XML_INVENTORY" CLOB
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY "OPATCH_SCRIPT_DIR"
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF8
      DISABLE_DIRECTORY_LINK_CHECK
      READSIZE 8388608
      preprocessor opatch_script_dir:'qopiprep.bat'
      BADFILE opatch_script_dir:'qopatch_bad.bad'
      LOGFILE opatch_log_dir:'qopatch_log.log'
      FIELDS TERMINATED BY 'UIJSVTBOEIZBEFFQBL'
      MISSING FIELD VALUES ARE NULL
      REJECT ROWS WITH ALL NULL FIELDS
      (
        xml_inventory    CHAR(100000000)
      )
        )
      LOCATION
       ( "OPATCH_SCRIPT_DIR":'qopiprep.bat'
       )
    )
   REJECT LIMIT UNLIMITED


SQL>

 

Hence we can understand the underlying mechanics of this new feature: it’s based on an external table using a directory object and running an external batch file (through the external table preprocessor option), which then runs the actual OPatch utility.

And we can query that external table directly if we want, though we’ll get an XMLTYPE result:

SQL> select * from OPATCH_XML_INV;

XML_INVENTORY
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8" standalone="yes"?> <InventoryInstance> <ora
cleHome> <UId>OracleHome-0243b34c-d8db-43f7-946b-589110de0ef9</UId> <targetTypeI
d>oracle_home</targetTypeId> <inventoryLocation>/u01/app/oraInventory</inventory
Location>
...
(output truncated)

 

Using DBMS_QOPATCH

Oracle gives us a wrapper API for this new external table (OPATCH_XML_INV) with DBMS_QOPATCH, to make working with the output more manageable.

For example, we can start with some basic information on the inventory using DBMS_QOPATCH.GET_OPATCH_INSTALL_INFO:

SQL> set heading off long 50000
SQL> select dbms_qopatch.get_opatch_install_info from dual;

<oracleHome><UId>OracleHome-a59380fa-5f8e-42df-b624-282f0189ec93</UId><targetTyp
eId>oracle_home</targetTypeId><inventoryLocation>/u01/app/oraInventory</inventor
yLocation><isShared>false</isShared><patchingModel>oneoff</patchingModel><path>/
u01/app/oracle/product/12.1.0/dbhome_1</path><targetTypeId>oracle_home</targetTy
peId></oracleHome>

SQL>

 

But as we can see, the output still isn’t easy on the eyes. Fortunately the XML stylesheet is also available, through the DBMS_QOPATCH.GET_OPATCH_XSLT function. Hence combining the two gives a much more readable output:

SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Inventory         : /u01/app/oraInventory


SQL>

 

The DBMS_QOPATCH package provides many other useful functions and procedures. For example, to see if and when a specified patch was installed, or just to see the list of all the patches installed, by bug number:

SQL> select xmltransform(dbms_qopatch.is_patch_installed('19303936'), dbms_qopatch.get_opatch_xslt) from dual;


Patch Information:
         19303936:   applied on 2014-12-20T13:54:54-07:00


SQL> select xmltransform(dbms_qopatch.get_opatch_bugs, dbms_qopatch.get_opatch_xslt) from dual;


  Bugs fixed:
          19157754  18885870  19303936  19708632  19371175  18618122  19329654
19075256  19074147  19044962  19289642  19068610  18988834  19028800  19561643
19058490  19390567  18967382  19174942  19174521  19176223  19501299  19178851
18948177  18674047  19723336  19189525  19001390  19176326  19280225  19143550
18250893  19180770  19155797  19016730  19185876  18354830  19067244  18845653
18849537  18964978  19065556  19440586  19439759  19024808  18952989  18990693
19052488  19189317  19409212  19124589  19154375  19279273  19468347  19054077
19048007  19248799  19018206  18921743  14643995

SQL>

 

Or to run the equivalent of “opatch lsinventory” but from SQL instead of the OS:

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Querayable Patch Interface 1.0
--------------------------------------------------------------------------------

Oracle Home       : /u01/app/oracle/product/12.1.0/dbhome_1
Inventory         : /u01/app/oraInventory
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 12c                                    12.1.0.2.0
Installed Products ( 135)

Oracle Database 12c                                         12.1.0.2.0
Java Development Kit                                        1.6.0.75.0
...
(output truncated)

 

Putting it all together

The examples above are interesting, but to make this truly useful for patch application querying, monitoring, and reporting, we need to join the output of DBMS_QOPATCH (which shows us what’s installed into the Oracle Home) with the new 12c view DBA_REGISTRY_SQLPATCH (which shows us what patches are applied to the database).

Prior to Oracle 12c to list the PSUs installed into the Oracle home we might use:

SQL> !$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'DATABASE PSU|DATABASE PATCH SET UPDATE'
19303936 19303936 Sat Dec 20 13:54:54 MST 2014 DATABASE PATCH SET UPDATE 12.1.0.2.1 (OCT2014)

SQL>

 

I obtained that result by shelling out from a SQL*Plus session running on the database server. But a centralized SQL-based monitoring or reporting tool likely won’t have that ability: a client-server database connection won’t be able to run OPatch easily, and hence the DBMS_QOPATCH API is required.

To extract the same information from the DBMS_QOPATCH package we need to parse the resulting XMLTYPE output:

SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.*
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription',
 10               applied_date varchar2(30) path 'appliedDate',
 11               sql_patch varchar2(8) path 'sqlPatch',
 12               rollbackable varchar2(8) path 'rollbackable'
 13         ) x;

  PATCH_ID  PATCH_UID
---------- ----------
DESCRIPTION
--------------------------------------------------------------------------------
APPLIED_DATE                   SQL_PATC ROLLBACK
------------------------------ -------- --------
  19303936   18116864
Database Patch Set Update : 12.1.0.2.1 (19303936)
2014-12-20T13:54:54-07:00      true     true


SQL>

 

This is a great result: information on the PSUs installed into the Oracle Home, obtained through SQL and returned to us in an easy-to-read and easy-to-work-with tabular form!

To make this information even more useful, we need to join to the new DBA_REGISTRY_SQLPATCH view, which replaces the DBA_REGISTRY_HISTORY view with respect to which patches have been applied in the database:

SQL> select patch_id, patch_uid, version, status, description
  2  from dba_registry_sqlpatch
  3  where bundle_series = 'PSU';

  PATCH_ID  PATCH_UID VERSION              STATUS
---------- ---------- -------------------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
  19303936   18116864 12.1.0.2             SUCCESS
Database Patch Set Update : 12.1.0.2.1 (19303936)


SQL>

 

Joining these two outputs allows us to easily write queries that report on the PSU patches fully deployed and on ones only partially deployed (whether in the Oracle Home but not the database, or vice versa):

SQL> --
SQL> --   List of PSUs applied to both the $OH and the DB
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.patch_id, x.patch_uid, x.rollbackable, s.status, x.description
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription',
 10               rollbackable varchar2(8) path 'rollbackable'
 11         ) x,
 12         dba_registry_sqlpatch s
 13   where x.patch_id = s.patch_id
 14     and x.patch_uid = s.patch_uid
 15     and s.bundle_series = 'PSU';

  PATCH_ID  PATCH_UID ROLLBACK STATUS
---------- ---------- -------- ---------------
DESCRIPTION
--------------------------------------------------------------------------------
  19303936   18116864 true     SUCCESS
Database Patch Set Update : 12.1.0.2.1 (19303936)


SQL> --
SQL> --   PSUs installed into the $OH but not applied to the DB
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select x.patch_id, x.patch_uid, x.description
  3    from a,
  4         xmltable('InventoryInstance/patches/*'
  5            passing a.patch_output
  6            columns
  7               patch_id number path 'patchID',
  8               patch_uid number path 'uniquePatchID',
  9               description varchar2(80) path 'patchDescription'
 10         ) x
 11  minus
 12  select s.patch_id, s.patch_uid, s.description
 13    from dba_registry_sqlpatch s;

no rows selected

SQL> --
SQL> --   PSUs applied to the DB but not installed into the $OH
SQL> --
SQL> with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
  2  select s.patch_id, s.patch_uid, s.description
  3    from dba_registry_sqlpatch s
  4  minus
  5  select x.patch_id, x.patch_uid, x.description
  6    from a,
  7         xmltable('InventoryInstance/patches/*'
  8            passing a.patch_output
  9            columns
 10               patch_id number path 'patchID',
 11               patch_uid number path 'uniquePatchID',
 12               description varchar2(80) path 'patchDescription'
 13         ) x;

no rows selected

SQL>

Simple queries such as these three can be incorporated into monitoring scripts and reports.

Some other DBMS_QOPATCH functions worth trying include:

set heading off long 50000 pages 9999 lines 180 trims on tab off
select xmltransform(dbms_qopatch.get_opatch_count, dbms_qopatch.get_opatch_xslt) from dual;
select xmltransform(dbms_qopatch.get_opatch_list, dbms_qopatch.get_opatch_xslt) from dual;
select xmltransform(dbms_qopatch.get_pending_activity, dbms_qopatch.get_opatch_xslt) from dual;

set serverout on
exec dbms_qopatch.get_sqlpatch_status;

 

Back-porting to Oracle Database 11g

An interesting question is: “can we back-port this approach to 11g?” And the answer is “absolutely”!

First of all, we need to create the directory objects, external table, and OS batch script. For simplicity I’m keeping the name and structure of each one the same as in 12c, but of course you can adjust them if desired.

Using DBMS_METADATA.GET_DDL on an Oracle 12c database, I get the necessary DDL to add the directory objects and external table to an 11g database (note that I’ve updated the Oracle Home path, but that’s the only change):

CREATE OR REPLACE DIRECTORY "OPATCH_LOG_DIR" AS '/u01/app/oracle/product/11.2.0/dbhome_1/QOpatch';
CREATE OR REPLACE DIRECTORY "OPATCH_SCRIPT_DIR" AS '/u01/app/oracle/product/11.2.0/dbhome_1/QOpatch';

CREATE TABLE "SYS"."OPATCH_XML_INV"
   (    "XML_INVENTORY" CLOB
   )
   ORGANIZATION EXTERNAL
    ( TYPE ORACLE_LOADER
      DEFAULT DIRECTORY "OPATCH_SCRIPT_DIR"
      ACCESS PARAMETERS
      ( RECORDS DELIMITED BY NEWLINE CHARACTERSET UTF8
      DISABLE_DIRECTORY_LINK_CHECK
      READSIZE 8388608
      preprocessor opatch_script_dir:'qopiprep.bat'
      BADFILE opatch_script_dir:'qopatch_bad.bad'
      LOGFILE opatch_log_dir:'qopatch_log.log'
      FIELDS TERMINATED BY 'UIJSVTBOEIZBEFFQBL'
      MISSING FIELD VALUES ARE NULL
      REJECT ROWS WITH ALL NULL FIELDS
      (
        xml_inventory    CHAR(100000000)
      )
        )
      LOCATION
       ( "OPATCH_SCRIPT_DIR":'qopiprep.bat'
       )
    )
   REJECT LIMIT UNLIMITED;

 

Next I need to copy the preprocessor batch script into the 11g home (it can be copied from another machine if necessary):

!cp -r /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch /u01/app/oracle/product/11.2.0/dbhome_1/QOpatch

 

Now I don’t want to install the DBMS_QOPATCH package into a different version of the database: it’s Oracle-supplied “wrapped” code, meaning I can’t modify it, and it probably has 12c dependencies, meaning it would be invalid in Oracle 11g. Instead I performed a SQL trace on DBMS_QOPATCH.GET_OPATCH_LSINVENTORY executing on the 12c database, and from the resulting trace file I see that all it’s doing is:

INSERT INTO OPATCH_XINV_TAB(XML_INVENTORY) SELECT * FROM OPATCH_XML_INV
SELECT XMLTYPE(XML_INVENTORY) FROM OPATCH_XINV_TAB
DELETE FROM OPATCH_XINV_TAB

 

Not exactly complicated. Hence, using the directory objects and external table created above, I can execute the following on the 11g database:

SQL> select version from v$instance;

VERSION
-----------------
11.2.0.4.0

SQL> select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV;

PATCH_OUTPUT
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="US-ASCII"?>
<INVENTORY isStandAlone="false">
 <HEADER>
 <ORACLE_HOME>/u01/app/oracle/product/11.2.0/dbhome_1</ORACLE_HOME>
 <CENTRAL_INVENTORY>/u01/app/oraInventory</CENTRAL_INVENTORY>
 <OPATCH_VERSION>11.2.0.3.6</OPATCH_VERSION>
 <OUI_VERSION>11.2.0.4.0</OUI_VERSION>
 <OUI_LOCATION>/u01/app/oracle/product/11.2.0/dbhome_1/oui</OUI_LOCATION>
 <LOG>/u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch2014-1
2-23_15-08-04PM_1.log</LOG>
 </HEADER>
...
(output truncated)

 

So as we can see, it’s working perfectly back-ported to 11g.

However, again, the XML output isn’t really useful to me; I’m more interested in what PSUs have been installed. That’s easy to check using opatch if I’m on the server:

SQL> !$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | egrep -i 'DATABASE PSU|DATABASE PATCH SET UPDATE'
18522509   18522509  Sun Sep 21 20:58:00 MDT 2014   DATABASE PATCH SET UPDATE 11.2.0.4.3 (INCLUDES CPU
18031668   18031668  Sun Sep 21 20:57:48 MDT 2014   DATABASE PATCH SET UPDATE 11.2.0.4.2 (INCLUDES CPU
17478514   17478514  Sun Sep 21 20:57:38 MDT 2014   DATABASE PATCH SET UPDATE 11.2.0.4.1 (INCLUDES CPU

 

But again, what if I’m not on the server and want to do this programmatically through SQL, build it into a monitoring query or a security compliance report, etc.?

Again we can query the XMLTYPE data and get exactly what we want. Specifically:

SQL> with a as (select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV)
  2  select x.* from a, xmltable(
  3     'INVENTORY/HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug'
  4     passing a.patch_output columns
  5     --row_number for ordinality,
  6     bug_number number path '@number',
  7     bug_description varchar2(256) path '@description'
  8  ) x
  9  where regexp_like(bug_description, '(DATABASE PSU|DATABASE PATCH SET UPDATE)','i')
 10  order by bug_number;

BUG_NUMBER BUG_DESCRIPTION
---------- ------------------------------------------------------------
  17478514 DATABASE PATCH SET UPDATE 11.2.0.4.1 (INCLUDES CPUJAN2014)
  18031668 DATABASE PATCH SET UPDATE 11.2.0.4.2 (INCLUDES CPUAPR2014)
  18522509 DATABASE PATCH SET UPDATE 11.2.0.4.3 (INCLUDES CPUJUL2014)

SQL>

 

Voila! By copying the technique used by the Oracle 12c database, we’ve now performed an OPatch query from SQL against an 11g database. It took just about 2 minutes of setup.

Again this can be super handy if PSU patching is a regular activity for you or if you have to produce security compliance reports.

Unfortunately, though, Oracle 11g records less data in DBA_REGISTRY_HISTORY than 12c does in DBA_REGISTRY_SQLPATCH:

SQL> select comments, action_time from dba_registry_history
  2  where bundle_series like '%PSU' order by action_time;

COMMENTS                       ACTION_TIME
------------------------------ ------------------------------
PSU 11.2.0.4.3                 23-SEP-14 09.21.34.702876 AM

SQL>

 

Hence joining the two is more challenging, but certainly possible. The data (in this case the string “11.2.0.4.3”) is present both in DBA_REGISTRY_HISTORY and in the XQuery output from my back-ported OPATCH_XML_INV external table.
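
As a minimal sketch (assuming the PSU description on both sides always embeds a four-part version string such as "11.2.0.4.3"), the join could look like this:

with inv as (
  -- extract the version string from each PSU bug description in the OPatch inventory
  select regexp_substr(x.bug_description, '\d+\.\d+\.\d+\.\d+') version,
         x.bug_description
    from (select XMLTYPE(XML_INVENTORY) patch_output from OPATCH_XML_INV) a,
         xmltable('INVENTORY/HOST/HOME/ONEOFF_LIST/INTERIM_PATCH/oneoff_inventory/base_bugs/bug'
            passing a.patch_output
            columns bug_description varchar2(256) path '@description') x
   where regexp_like(x.bug_description, '(DATABASE PSU|DATABASE PATCH SET UPDATE)','i'))
select h.comments, h.action_time, i.bug_description
  from dba_registry_history h, inv i
 where h.bundle_series like '%PSU'
   and regexp_substr(h.comments, '\d+\.\d+\.\d+\.\d+') = i.version;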

 

Conclusion

The new DBMS_QOPATCH API is a fantastic improvement in Oracle Database 12c when it comes to patch management (querying/reporting/monitoring). And by understanding the concepts and components added in 12c, they can be manually back-ported to 11g. Both using DBMS_QOPATCH and even the back-port to 11g are actually pretty easy. In fact, the hardest part for me (not being very familiar with XQuery) was coming up with the query to parse the XMLTYPE data.

 

References

http://docs.oracle.com/database/121/ARPLS/d_qopatch.htm

Oracle Recommended Patches — Oracle Database (Doc ID 756671.1)

Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets (Doc ID 1454618.1)

How do you apply a Patchset,PSU or CPU in a Data Guard Physical Standby configuration (Doc ID 278641.1)

Categories: DBA Blogs

Log Buffer #405, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-01-19 19:34

This Log Buffer Edition rides on the wave of new ideas in the database realms. From Oracle technologies through MySQL to the SQL Server, things are piping hot.

Oracle:

The Call for Papers for the Rittman Mead BI Forum 2015 is currently open, with abstract submissions open to January 18th 2015.

The replicat process is the apply process within the Oracle GoldenGate environment.

“SELECT * FROM TABLE” Runs Out Of TEMP Space.

“log file sync” and the MTTR Advisor.

Working with XML files and APEX – Part 3: Detail elements in a row with OUTER JOIN.

SQL Server:

Get started testing your database code with the tSQLt framework.

Archiving Hierarchical, Deleted Transaction Using XML.

As a part of his “Function / Iterator Pairs” mini-series, Business Intelligence architect, Analysis Services Maestro, and author Bill Pearson introduces the DAX COUNT() and COUNTX() functions, discussing similarities and differences.

Free eBook: Fundamentals of SQL Server 2012 Replication

Importance of Statistics and How It Works in SQL Server – Part 1

MySQL:

Is Zero downtime even possible on RDS?

Monitor MySQL Database Users with VividCortex

Django with time zone support and MySQL

Using Perl to send tweets stored in a MySQL database to twitter

When ONLY_FULL_GROUP_BY Won’t See the Query Is Deterministic.

Categories: DBA Blogs

I Have Lots Of Oracle Database Server Power But Performance Is Slow/Bad

Oracle Database parallelism and serialization are what we as Oracle Database Administrators live and die for. You have a screaming fast Oracle Database system and there is lots of computing power available.

But performance is unacceptable; users are screaming, the phone is ringing, and those fancy dashboards are flashing like it's Christmastime.

What is going on?! What can I do about it?! That's what this post is about.
Learn By Doing
Back in December, on the third day of my Oracle Performance Firefighting class, I had each student bring in an AWR report from one of their systems that was giving them problems. (I keep my classes small, giving each student time to do their analysis and time for us to talk about it as a class.)
Get this: Half of the systems had a similar "problem." I think it's important every DBA understands this "problem" because it's more common than most people believe.
There was plenty of computing power, and the key SQL statement they cared about was a batch job. What was the core problem? The quick answer is "serialization," that is, a lack of parallelism. Exploring this using a very large production system AWR report and coming up with solutions is what this post is all about.
Serialization Is Death
In Oracle systems, serialization is death and parallelism is life. Follow this line: business, end user, application designer, DBA, Oracle Database kernel architects (or whatever their title is), OS administrators, OS designers, CPU designers and IO subsystem designers all have something in common: they work hard to parallelize tasks. Just one example: Oracle is designed to have multiple background and foreground processes running in parallel.
But all this parallelization effort can be wasted and minimized if a process turns into a serial work stream (at any level: Oracle, OS, business, etc.). The result is "slowness" because the wall time increases.
Available Power And Slowness Equals Opportunity
When I tune Oracle Database systems, I look for opportunities. And each of my solutions will specifically target an opportunity. When I see unused power and complaints of slowness, I look for ways to increase parallelism. Why? Because having available power combined with slowness likely means a serialization limitation exists.

For sure, serialization may be necessary. Two examples come to mind: Oracle database memory serialization control (think: latches and mutexes) and business rules.

But if I can find a way to increase performance by using up available power by increasing parallelism, I'll likely be able to turn a slow serialization situation into a screaming fast parallelization situation!
How To Recognize A Serially Constrained System
It's easy to recognize a serially constrained system. Ask yourself these two questions. First, is there available CPU or IO power? Second, are there complaints of application "slowness." If the answer to both of these questions is "Yes" then there is likely a serialization issue. Furthermore, the general solution is to use the available resources to our advantage. That is, find areas to increase parallelization, which will use the available resources and improve performance.
If you have the power, use it! What are you saving it for?
(There may be a very good answer to the "saving" but I'll save that for another article.)
Can I Be Out Of CPU And Be Serially Constrained?
Yes. An Oracle Database system can be serially constrained and also be out of OS resources. A great example of this is when there is a raging Oracle memory serialization issue. If you see significant Oracle latching or mutex wait time combined with a raging CPU bottleneck, you likely have a serialization issue... an Oracle Database memory structure access serialization issue.

So, while available power on a "slow" system likely means we have a serially constrained system there are situations in Oracle with a raging CPU bottleneck that also means there is likely a serialization issue.

Find Out: Is There Available CPU Power?
Here is a real-life situation. To simplify, I'm going to focus only on instance number one. Look at instance number one in the picture below.

The above AWR report snippet shows RAC node #1 OS CPU utilization at 15%. This means that over the AWR report snapshot interval, the average CPU utilization was 15%. I never initially trust an AWR report's calculated results; plus, it's good practice to do the math yourself. If you use the super fast busy-idle method I have outlined in THIS POST and detailed in my online seminar, Utilization On Steroids, the utilization calculates to 16% (0.5/(0.5+2.7) = 0.16). So the AWR report's 15% for CPU "% Busy" looks to be correct.
Clearly with an average CPU utilization of 15%, we have an opportunity to use the unused CPU power to our advantage.
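If you want to do the busy-idle math straight from the database rather than from an AWR screenshot, here is a minimal sketch against V$OSSTAT. Keep in mind BUSY_TIME and IDLE_TIME are cumulative since instance startup, so for an interval-based utilization you would sample the values twice and take the difference.

-- Instance-lifetime OS CPU utilization from V$OSSTAT.
-- For an interval figure, snapshot these values twice and diff them.
SELECT ROUND(busy.value / (busy.value + idle.value), 2) AS os_cpu_util
FROM   v$osstat busy,
       v$osstat idle
WHERE  busy.stat_name = 'BUSY_TIME'
AND    idle.stat_name = 'IDLE_TIME';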
Find Out: Is There Available IO Power?
I am looking for fast IO responsiveness. That is, a low response time. A great way to get a quick view of IO subsystem responsiveness is to look at the average wait time for the event, db file sequential read.
The wait event db file sequential read is the time it takes to read a single block synchronously. I like to call it a pure IO read call: a) what time is it? b) make the IO call and wait until you get it, c) what time is it? d) calculate the delta and you have the wait time... and the IO read call response time! If you want more details, I wrote about this HERE, which includes a short video.
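If you would rather pull this number live than from an AWR report, here is a minimal sketch against V$SYSTEM_EVENT. As with V$OSSTAT, the values are cumulative since instance startup, so sample twice for an interval average.

-- Instance-lifetime average single block read time, in milliseconds.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / total_waits / 1000, 2) AS avg_wait_ms
FROM   v$system_event
WHERE  event = 'db file sequential read';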
For our system, let's figure out the single block IO subsystem read response time. Using the same AWR report, here is a screen shot of the Top Time Events.

Again, I'm just going to focus on the first instance. If you look closely (middle right area), you'll see that for instance number one, the average db file sequential read wait time is 2.22ms. That's fast!
There is no way a physical spinning disk is going to return a block in 2.22ms. This means that many of Oracle's single block read calls are being satisfied through some non-Oracle cache, perhaps an OS cache or an IO subsystem cache. We can't tell which, but we do know the block was NOT in Oracle's buffer cache, because a db file sequential read wait means the block was not found in Oracle's buffer cache.
A single block synchronous IO read call averaging 2.22ms means there is available IO read capacity and probably available write capacity as well. Again, just like with the OS CPU subsystem, we have unused power that we will try to use to our advantage.
At this point, I will assume there is also plenty of memory and network capacity available. So, the bottom line is we have a "slow" system combined with available CPU and available IO power. Wow! That is a great situation to be in. I call this, "low hanging fruit."
Real Life: Looking For The "Slow" SQL
At the top of this post, I mentioned that in my Firefighting class in each of the "serialization" cases, there was a key SQL statement that was part of a larger batch process. Keep in mind that at this point in the analysis I did NOT know this. All I knew was that users were complaining and there was plenty of CPU and IO resource available.
Usually, in this situation there is a relatively long running process. There could be lots of quick SQL statements involved, but usually this is not the case. And I'm hoping there is a key long running SQL statement that can be parallelized.
Long running can roughly be translated into "high elapsed time." I have written a number of articles about elapsed time (search my blog for: elapsed time) and even have a free tool with which you can get more than simply the average elapsed time. And I have online seminars that touch on this subject: Tuning Oracle Using An AWR Report and also, Using Skewed Performance Data To Your Advantage. So there are lots of useful resources on this topic.
In the AWR report, I'm going to look closely at the SQL Statistics, in particular the "SQL ordered by Elapsed Time (Global)" section. What I really want are the statistics for instance one only, that is, not global, but that's all I have available. Plus the DBAs will/should know if the key SQL statement(s) are run on instance one. Here's the report.


In the report above, look at the elapsed times (second column from the left). Now looking right, find the "Execs" column, that is, the executions. The executions column is the number of completed executions within this AWR snapshot range. If the execution count is zero, the SQL did not complete during the snapshot interval, that is, before the ending snapshot.
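For a quick cross-check outside of AWR, here is a minimal sketch that pulls the top statements by elapsed time straight from the shared pool via V$SQL. Note that V$SQL shows cumulative figures since each cursor was loaded, not per-snapshot figures, so it is only a rough approximation of the AWR listing.

-- Top 10 SQL by cumulative elapsed time (ELAPSED_TIME is in microseconds).
SELECT *
FROM  (SELECT sql_id,
              ROUND(elapsed_time / 1000000) AS elapsed_sec,
              executions
       FROM   v$sql
       ORDER  BY elapsed_time DESC)
WHERE  ROWNUM <= 10;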
If you're wondering, these top elapsed time SQL statements are involved in batch processing. When I look at this, I see opportunity, fruit waiting to be harvested!
And I love this: Every DBA in the class in this situation said, "Oh! I know about this SQL. It's always causing problems." Now it's time to do something about it!
Real Life: Putting This All Together
We have identified available CPU and IO capacity. And we have identified THE high elapsed time SQL statement. While I'm a pretty laid back kind of guy, at this point I start to apply some pressure. Why? Because the users are complaining, and we have identified an opportunity, the cause of the problem, and the general solutions.
There are two general solutions:
1. Do less work. You want to empty a candy dish faster? Then start with less candy in the dish! If you want a SQL statement to run faster, tune the SQL so it touches fewer blocks.
2. Do the same amount of work, but group the work and run each group at the same time. This is parallelization! This is why the total elapsed time will not decrease (it will probably increase a little) but the wall time will likely decrease... and dramatically! Here is a LINK to a posting that contains a short video demonstrating the difference between elapsed time and wall time.
How To Parallelize (in summary)
There are many different ways to parallelize. But the goal is the same: use the available resources to reduce wall time (not necessarily the elapsed time). Perhaps the application can be redesigned to run in parallel streams. But that can take a very long time and be a real hassle. But in many cases, it's the best long term solution.
If you are short on time, are licensed for Oracle Parallel Query and the SQL has been optimized (oh boy... how many times have we all heard that before), you likely can use Oracle PQ. And of course, even if the SQL is not optimized, you can still run PQ and performance may be fantastic.
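As a simple illustration (not the actual batch SQL from this system), a statement-level PARALLEL hint might look like the sketch below. The SALES table and the degree of 8 are hypothetical; the right degree depends on your available CPU and IO headroom.

-- Hypothetical example: ask the optimizer to scan and aggregate
-- the (made-up) SALES table with a parallel degree of 8.
SELECT /*+ PARALLEL(s, 8) */
       s.order_date,
       SUM(s.amount) AS total_amount
FROM   sales s
GROUP  BY s.order_date;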
By the way, adding faster IO disks or more IO disks (whatever a "disk" is nowadays) will likely NOT work. Remember, the IO subsystem is performing wonderfully.
Thanks for reading and enjoy the mystery of your work!
Craig.
Categories: DBA Blogs

Arizona User Group Meeting tomorrow

Bobby Durrett's DBA Blog - Mon, 2015-01-19 14:07

Here is the link: url

It looks like our meeting tomorrow will be on Oracle 12c new features.  I’m looking forward to participating. :)

– Bobby

Categories: DBA Blogs

Log Buffer #406, A Carnival of the Vanities for DBAs

Pakistan's First Oracle Blog - Mon, 2015-01-19 01:38
This Log Buffer Edition covers blog posts from various bloggers of Oracle, SQL Server and MySQL.

Oracle:

Sync tables: generate MERGE using Unique constraint.
What Hardware and Software Do YOU Want Oracle to Build?
There were a number of new features introduced in Ops Center 12.2.2. One of the shiny ones is an expansion of the backup and recovery capabilities to include Proxy Controllers.
Want to Be a Better Leader? Answer One Question.
Managing a remote Oracle Database instance with “Geographic Edition”.

SQL Server:

Learn how you can use SQLCop to prevent your developers from writing stored procedures that are named sp_ something.
Data Cleaning in SQL 2012 with Data Quality Services.
Stairway to PowerPivot and DAX - Level 9: Function / Iterator Function Pairs: The DAX MAX() and MAXX() Functions.
Options to Improve SQL Server Bulk Load Performance.
Dynamically Create Tables Based on an Access Table

MySQL:

Stored Procedures: critiques and defences.
JSON UDF functions 0.3.3 have been released.
Business Scalability, Operational Efficiency and Competitive Edge with MariaDB MaxScale 1.0 GA.
MySQL 5.7 labs and the HTTP Plugin – inserting, updating and deleting records in MySQL via HTTP.
Hyper-threading – how does it double CPU throughput?

Published on Pythian Blog
Categories: DBA Blogs

A blog on Oracle Standard Edition

Hemant K Chitale - Sun, 2015-01-18 08:40
Here's a blog on Oracle Standard Edition by Ann Sjokvist.

.
.
.
Categories: DBA Blogs

If you are in Cleveland, don't miss our January 23, 2015 NEOOUG meeting!

Grumpy old DBA - Fri, 2015-01-16 14:09
Please join us next Friday for our first regular meeting of 2015. Mark Rabne, our resident Oracle technical geek, will be taking us through major 2014 Oracle technology and application announcements. Kind of a recap of the major Oracle OpenWorld 2014 items, plus some additional ones after that.

It's the usual deal at the Rockside Road Oracle office: free lunch at noon and networking opportunities. The meeting starts at 1 pm.

Our March meeting will be Big Data related (and we have a great announcement coming up on a workshop for our May GLOC 2015 conference).
Here is the info on the Jan 23, 2015 meeting.

I hope to see you there!
Categories: DBA Blogs

Oracle Ace Associate

Oracle in Action - Thu, 2015-01-15 23:44

RSS content

It gives me immense pleasure to share with you the news that I am an "Oracle ACE Associate".

Thanks to the "Oracle ACE Program" for selecting me to receive the Oracle ACE Associate award.

My heart is full of gratitude for Sir Murali Vallath, who nominated me for this.

Thanks to AIOUG for giving me an opportunity to speak during SANGAM 14 and for publishing my white paper "Histograms – Pre-12c and Now" in the Dec 2014 issue of Oracle Connect.

I want to thank my husband for encouraging me, and the readers of my blog for their time, comments and suggestions.

Thank you so much!



Copyright © ORACLE IN ACTION [Oracle Ace Associate], All Rights Reserved. 2015.

The post Oracle Ace Associate appeared first on ORACLE IN ACTION.

Categories: DBA Blogs