

DBASolved
Helping Oracle DBAs solve problems

LogDump and Understanding header information in Oracle GoldenGate Trail Files

Thu, 2014-04-17 08:30

Replication of data is always a fun thing to look at: what is replicating?  Questions such as "How do I get data from server/database A to server/database B, or even to server/database C?" are valid and often asked by management.  Often the simple (knee-jerk) answer is to just set it up and start replicating.  Although Oracle GoldenGate may be simple to set up (for some architectures) to meet the demands of management and the task at hand, problems will arise with the data being replicated.

When problems arise, identifying and resolving the replication issue becomes a critical and time-consuming task.  Oracle GoldenGate provides a few utilities to help diagnose and resolve replication issues.  One such utility is LogDump.  The LogDump utility is used to read the local and remote trail files that support the continuous extraction and replication of transaction changes within the database.

Knowing what trail files are used for is part of the battle when troubleshooting replication issues with Oracle GoldenGate.  How do we use LogDump to read these trail files?  What are we looking for, or at, in a trail file to understand what is going on?  To answer these questions, we need to start the LogDump utility.

To start LogDump, we just need to be in the OGG_HOME and run the logdump command.  The following example shows how to run LogDump.

[oracle@oel oggcore_1]$ pwd
[oracle@oel oggcore_1]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 17185003 17451407

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.


Logdump 22 >

Note: A fresh LogDump session starts at 1, not 22 (Logdump 22).  LogDump remembers session history until you log out of the server.

Once LogDump has been started, we need to open a trail file and set up how we want the information to be displayed.  Commands for LogDump can be displayed by using the “help” command.  In the following code block, we see that we are opening a local trail (lt) file and setting a few environment options.

Note: Trail files (local and remote) are normally prefixed with two (2) letters followed by a six (6) digit string.  In new environments trail files will start at (prefix)000000 (lt000000 or rt000000).

Logdump 15 >open ./dirdat/lt000000
Current LogTrail is /oracle/app/product/12.1.2/oggcore_1/dirdat/lt000000
Logdump 16 >ghdr on
Logdump 17 >detail on
Logdump 18 >detail data
Logdump 19 >usertoken on
Logdump 20 >

The “help” command inside of LogDump provides more options.  The options that we are using in this example are:

  • ghdr on = toggle the record header display (on | off)
  • detail on = toggle detailed data display (on | off | data)
  • detail data = display the detailed column data as well (the data option of detail)
  • usertoken on = show user token information (on | off | detail)

With the LogDump environment set, we can now use the “next (n)” command to see the information in the trail file.

Logdump 20 > n

Once the header output is displayed, we need to understand how to read this information.  Image 1 provides a quick explanation of each major component within a trail file transaction.  We can see the following items for a transaction in the trail file (lt000000):

  • Header Area: Transaction information
  • Date/time and type of transaction
  • Object associated with the transaction
  • Image of transaction (before/after)
  • Columns associated with the transaction
  • Transaction data formatted in Hex
  • Length of the record
  • ASCII  format of the data
  • Record position within the trail file (RBA)

Image 1: Header Information

At this point, we may be asking: why is this important?  Understanding the trail files, and how to find information within them, is an important part of troubleshooting an Oracle GoldenGate environment.

Example: a replicat abends and we need to start it from a given RBA.  Being able to identify the first, next, and last RBA in the trail file helps in understanding why the abend happened and in identifying a starting point for a successful restart.
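As a sketch of that workflow, LogDump can report record positions, and GGSCI can then reposition the replicat. The trail file, RBA value, and replicat name below are made up for illustration, and the parenthetical annotations are not part of the session:

```
Logdump 1 >open ./dirdat/lt000000
Logdump 2 >count                 (record count and ending RBA of the trail)
Logdump 3 >pos 5071              (position at a specific RBA)
Logdump 4 >n                     (display the record at that position)

GGSCI> alter replicat rtst, extseqno 0, extrba 5071
GGSCI> start replicat rtst
```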

In the end, the Oracle GoldenGate environment can be simple yet complex at the same time. Understanding the different components of the environment is very useful and worth the time involved to learn it.


twitter: @dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Monitoring Oracle Golden Gate from SQL Developer

Wed, 2014-04-16 07:16

Last week I was at Collaborate 14 speaking in two sessions; one of the sessions I had done a couple of times before.  The other session was about the different ways of monitoring Oracle GoldenGate (If you are curious about the presentation it can be found here).  While at the conference I ran the idea of monitoring GoldenGate from SQL Developer by a few peers and there seems to be interest.  As for Oracle, this approach to monitoring GoldenGate is not on Oracle’s road map for SQL Developer.

To achieve this goal, XML extensions within SQL Developer are needed.  Using XML extensions, I’ve been able to turn my SQL-based GoldenGate monitoring into a working extension.  The extension is not perfect and still needs some work.  As you can see in Image 1, I can get the status of a GoldenGate process and its associated stats.

Image 1:

The SQL Developer extension for Oracle GoldenGate is available for whoever would like to use it and extend on it.  This extension is included with my other GoldenGate monitoring scripts located here and on my scripts page.

Note: at some point, I will hopefully get this extension uploaded to a Github repository for community digestion.

This extension is meant to help DBAs monitor their GoldenGate environments without having to go directly to the server.  For now, it just gives up/down status and operation stats.  Hopefully, as it matures (as I and others work on it), it will become a robust extension for all Oracle GoldenGate monitoring.


twitter: @dbasolved


Filed under: Golden Gate, Replication
Categories: DBA Blogs

Monitor Oracle Golden Gate from SQL

Mon, 2014-04-07 08:41

One of my presentations at Collaborate 14 this year revolves around the many different ways there are to monitor Oracle GoldenGate.  As I was putting the presentation together, I listed out the different approaches; I have covered a few of them already in earlier posts.  What I want to show you here is how to execute a simple “info all” command and see the results from SQL*Plus or SQL Developer using SQL.

First, a script (shell, Perl, etc.) needs to be written that captures the output of the “info all” command to a text file.  In this case, I’m going to write the text file to /tmp since I’m on Linux.

#!/usr/bin/perl -w
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
use strict;
use warnings;

#Static variables

my $gghome  = "/oracle/app/product/12.1.2/oggcore_1";
my $outfile = "/tmp/gg_process_sqldev.txt";

#Capture the output of "info all" from GGSCI
my @buf = `$gghome/ggsci << EOF
info all
EOF`;

open (GGPROC, ">", $outfile) or die "Unable to open file: $!";
foreach (@buf) {
        no warnings 'uninitialized';
        my ($program, $status, $group, $lagatchkpt, $timesincechkpt) = split(" ");
        # keep only process rows; skip the GGSCI banner and column headings
        next unless defined $program && $program =~ /^(MANAGER|EXTRACT|REPLICAT)$/;
        # the MANAGER row has no group column; reuse the program name
        if ($group eq "") {
            $group = $program;
        }
        if ($lagatchkpt eq "" || $timesincechkpt eq "") {
            $lagatchkpt     = "00:00:00";
            $timesincechkpt = "00:00:00";
        }
        print GGPROC "$program|$status|$group|$lagatchkpt|$timesincechkpt\n";
}
close (GGPROC);

Next, the text file needs to be placed into a table that can be read by SQL*Plus or SQL Developer.  External tables are great for this.

create directory TEMP as '/tmp';
grant read on directory TEMP to PUBLIC;

drop table ggate.os_process_mon;

create table ggate.os_process_mon
(
  process     char(15),
  status      char(15),
  ggroup      char(15),
  lagatchk    char(15),
  timelastchk char(15)
)
organization external
(
  type oracle_loader
  default directory TEMP
  access parameters
  (
    records delimited by newline
    fields terminated by '|'
    missing field values are null
    (
      process     char(15),
      status      char(15),
      ggroup      char(15),
      lagatchk    char(15),
      timelastchk char(15)
    )
  )
  location ('gg_process_sqldev.txt')
)
reject limit unlimited;

select * from ggate.os_process_mon;

Lastly, with these two pieces in place, I can now select the status from SQL*Plus or SQL Developer using SQL.  Image 1 shows a sample from the testing environment I’m building.

Image 1:
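Keep in mind the external table only reflects whatever is currently in /tmp/gg_process_sqldev.txt, so the script has to run on a schedule for the SQL results to stay current.  One way (a sketch; the script path is an assumption) is a cron entry on the GoldenGate server:

```
# refresh the GoldenGate status file every minute (script path is illustrative)
* * * * * /home/oracle/scripts/ > /dev/null 2>&1
```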


twitter: @dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Speaking at Collaborate 2014

Fri, 2014-03-28 08:45

I’m a little behind on updating my blog with images of conferences where I will be speaking (I’ll get to that later and hopefully fix it).  In the meantime, I wanted to let everyone know that I will be speaking at IOUG Collaborate 2014 this year.  IOUG has decided to hold the conference in Las Vegas, NV.  It should be a fun conference; after all, everyone knows the saying “What happens in Vegas…”, well, you get the picture.

Unlike last year, I will not be at the conference all week.  I will be there later in the week for my sessions and then leaving quickly due to work commitments.  All good though, and I hope to see many friends while I’m there.  You may be wondering what sessions I’ll be presenting; here they are:

Thursday, 4/10/2014 @ 12:15 pm
How many ways can you monitor Oracle Golden Gate?

This presentation is a quick look at how you can monitor Oracle GoldenGate within your environment using different approaches.

Friday, 4/11/2014 @ 09:45 am
Oracle Enterprise Manager 12c, Oracle Database 12c, and You!

This presentation is one of my best-received presentations.  As the title explains, it deals with what you can expect from using Oracle Enterprise Manager 12c to manage Oracle Database 12c.  I try to improve this presentation each time; maybe there will be something new that you haven’t seen yet.

On an Oracle Enterprise Manager 12c related note, if you are looking to expand your knowledge of OEM12c, there are a lot of sessions being presented by Oracle and some of my friends at Collaborate 14.  You can use this link to see what OEM sessions are being presented (here).


twitter: @dbasolved


Filed under: General
Categories: DBA Blogs

Open and Migrate Microsoft Access in Oracle SQL Developer 4

Mon, 2014-03-24 12:40

For many people, getting started with databases revolves around using Microsoft Access (MS Access).  MS Access is an entry-level “database” (if you can call it a database) that Microsoft has been putting out for years.  Often people want to move older MS Access “databases” into enterprise databases as they become reliant on the information stored in them.  Oracle has recognized this and has enabled Oracle SQL Developer to interact with MS Access and allow for a quick copy of data from MS Access to Oracle.

I have been a baseball fan for as long as I can remember; however, I don’t dwell on stats and win-loss records.  I honestly just like to watch the game and watch my boys learn the game at a competitive level.  With this in mind I went looking for baseball stats that I could put into a database and use for demo purposes.  What I found was an MS Access “database” full of information from 1996 up through 2013, thanks to Sean Lahman (here).

Now that I have data I want for testing, I really want to test it in Oracle! Using Oracle SQL Developer I’m able to access the data stored in MS Access and with a right-click of the mouse move the data into Oracle. Sounds simple, right!?  Let’s take a look.

The tools I’m using to do this migration are:

  • Microsoft Access 2010
  • Oracle SQL Developer 4.0 (w/ JDK 1.7.0_51)
  • Oracle Database 12c
Setup Microsoft Access

In order to access the data in a MS Access “database”, you need to enable the viewing of system objects in Access. In MS Access 2010 this can be accomplished by doing the following once the MS Access database is open.

Open the options for the database.


Once the Access Options dialog is open, then go to the Navigation button.


After clicking on the navigation button, the Navigation Options dialog will open. On this dialog you want to enable “Show System Objects”.


After enabling “Show System Objects”, click OK all the way out to the “database”. You will notice in the All Access Objects tree there are some system tables that appear to be greyed out.


These are the system tables that Oracle SQL Developer needs access to in order to connect.
Connect to MS Access from SQL Developer

Setting up a connection to MS Access in Oracle SQL Developer is just like setting up a connection for Oracle.  From the Connections dialog, click on the green plus sign.  This will open the connections dialog box.


You will see a tab that says Access.  Click on the tab to open the dialog to use an MS Access MDB file.  Use the Browse button to locate the MDB file and give the connection a name.  The dot in the password field just appears upon connection.  A username is not needed since connections to MS Access are made as Admin by default.


Once connected to the MS Access database from SQL Developer, you will see the connection and tables that are in the database.


From here, you can see all the tables in the database by expanding the TABLE category of the connection.


With the connection to MS Access set and usable, I wanted to move all these tables into an Oracle Database 12c for testing.
Quickly moving data to Oracle

All the baseball data can be quickly moved to Oracle in a few simple steps. No, this process does not involve using the migration features of SQL Developer; instead it is a simple right-click.

Start by highlighting one or more tables you would like to copy to Oracle.


Next perform a right-click on the highlighted tables. You will see an option for Copy to Oracle. Click this option.


After clicking Copy To Oracle, a connection dialog will open. For my Oracle connection, I’m connecting to a PDB with a local user named SCOUT. Click Apply.


At this point, SQL Developer is copying all the metadata and data over to my Oracle 12c PDB under the user SCOUT.


When the process is done copying, I can verify that all the tables are there by looking at my Oracle connection pane and opening the Tables node on the tree.


Now I have all the baseball data I want to play with loaded into Oracle Database 12c (PDB). Let the fun times begin!


twitter: @dbasolved


Filed under: Database
Categories: DBA Blogs

Oracle Data Guard Switchover via DGMGRL vs. #em12c

Wed, 2014-03-19 13:12

When you start to look at high availability and replication of data, many people look at either Oracle Data Guard or Oracle GoldenGate.  Personally, I opt for Oracle GoldenGate; however, it is not always the best fit for smaller environments where cost is a concern.  When cost is a concern, Oracle Data Guard is a good choice and can be used with both Enterprise (EE) and Standard (SE) editions of Oracle Database.  There are multiple options for Oracle Data Guard, i.e. Redo Apply, SQL Apply and Active Data Guard.  All of these options have their pros and cons.

Advice: I’ll say this up front; after configuring Oracle Data Guard from the command line (SQL) and from OEM 12c, I would take OEM 12c if I have the option.  Much easier, personal opinion.  I’ll write a post on this at some point.

In this post I want to cover how Oracle Data Guard can switch over between a primary database and a physical standby database, first from the command line and then from OEM12c.

DGMGRL – Command Line

Let’s take a look at how to perform a switch over from the command line.

In order to perform a switchover, I will be using the Oracle Data Guard Broker (DGMGRL).  With DGMGRL, I can manage the Oracle Data Guard configuration; it provides an easier way to interact with the environment after it is set up.

To start working with DGMGRL, first log into the standby database server.  Then execute DGMGRL and connect to the local database (Image 1).

Image 1:

Before doing anything with Oracle Data Guard, it is good to check the status of the configuration.  This can be done by using the SHOW CONFIGURATION command (Image 2).

Image 2:

In order to switch over to the standby database, I simply need to execute the SWITCHOVER TO <standby database> command (Image 3).

Image 3:
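For reference, the command sequence behind Images 1 through 3 looks roughly like this (the connection details and the database name db11g2tst are from my test setup and purely illustrative):

```
$ dgmgrl sys/******@db11g2tst
DGMGRL> show configuration;
DGMGRL> switchover to 'db11g2tst';
```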

Once the switchover starts, the broker tries to switch everything over to db11g2tst.  During the switchover, I get an error while shutting down the original primary database, and the broker disconnects me from the databases.  Upon looking at the configuration, I see that it is in ERROR status (Image 4).

Image 4:

Looking at this configuration problem, let’s try restarting the database it is complaining about and see if that clears the error.

Image 5:
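The restart in Image 5 is just a bounce of the old primary into mount mode (a sketch; run on the host the broker is complaining about):

```
$ sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup mount
```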

Now that the database has been restarted in mount mode, DGMGRL reports that the switchover is in progress (Image 5).  With the database bounced and placed in mount mode, the switchover was able to complete successfully (Image 6).

Image 6:

Although the switchover was successful, I had to intervene by rebooting the database that was becoming the new standby.  Successful, yes, but I’m not completely happy with the process.  I thought this was supposed to be hands-free?  Let’s take a look at the OEM 12c approach.

OEM12c Approach

Taking a look at how to administer Oracle Data Guard from OEM12c, I will be using the broker again; this time from within OEM 12c. 

Note: The nice thing is that when you configure a standby database with OEM12c, the broker is set up for you.

To access the broker items for Oracle Data Guard in OEM12c, I first have to go to the Database landing page (Targets –> Databases).  In Image 1, I’m looking for my test database (%tst%).  Once the databases are listed, I need to click on the one that is the standby.

Image 1:


After clicking on the standby database, OEM takes me to that database’s landing page.  Just as with any database, I see a set of menus under the database name (Image 2).

Image 2:

At this point, I want to use the Availability menu to access the Data Guard Administration Page (Image 3) (Availability –> Data Guard Administration).

Image 3:

Before OEM will take me to the Data Guard Administration page, I have to log into the standby database.  Since the standby database is in mount mode, the only way to log in is as the SYS AS SYSDBA user.  Image 4 shows that I have already set up a Named Credential for my standby database.

Image 4:

Once logged in to the standby database, the Data Guard Administration page provides a lot of information pertaining to the Data Guard environment (Image 5).

Image 5:


The important part of this page is the Standby Databases section (Image 6).  This section provides all the information needed about the standby database.  In my example, I can quickly see the status of Data Guard, the role it is in, the last received and applied logs, and the estimated time it will take to fail over.

Image 6:


Now that I know what standby databases are available, I can switch over to the selected standby by using the Switchover button (Image 7) in the Standby Databases section.

Image 7:


After clicking the Switchover button, OEM will ask you to log into both hosts that will take part in the switchover (not pictured).  Once logged into both hosts, a confirmation page for the switchover is provided (Image 8).  At the time of the switchover, I have the option to swap monitoring settings as well (check box).  Since I want to swing everything over, I checked the box and then clicked the Yes button.

Image 8:


After clicking Yes, the switchover begins (Image 9).  I can monitor the status of the switchover from the output being displayed in OEM.

Image 9:

Once the switchover is complete, OEM returns me to the Data Guard Administration page, where I can clearly see that everything was successful and that the new standby database is the old primary (Image 10).

Image 10:


I have shown two different ways of using the Data Guard Broker to perform a switchover.  Both methods are valid.  The main difference is that the OEM 12c approach took a few more steps due to the screens involved, whereas with the DGMGRL command line I only had to run one command to switch over.  In the end, everything switched over, and I would leave it to personal preference which approach is used.


twitter: @dbasolved


Filed under: Data Guard, Database, Replication
Categories: DBA Blogs

Use DCLI to check #em12c agents on Exadata – all at once

Tue, 2014-03-18 21:08

For all the benefits of Oracle Enterprise Manager 12c (OEM), there is one drawback when using OEM with Real Application Clusters or Exadata: checking the status of the EM Agent on each node.  When a node is bounced, sometimes the EM Agent does not restart.  When this happens on a single-instance box or a RAC, you normally have to check each node individually and restart the agent.  With Exadata, Oracle has made this check easier by providing the Distributed Command Line Interface (DCLI).  Using DCLI, an administrator can run commands across all the nodes of the Exadata.

Note: DCLI is a shell script that allows commands to be run on multiple machines with a single invocation.

The example in this post checks the Enterprise Manager Agents after rebooting the compute (db) nodes of an Exadata.

After rebooting each of the nodes, I needed to validate that the EM Agents were successfully restarted.  All the agents were installed on each of the nodes in the same directory structure (/u01/app/oracle/product/agent12c), which makes using DCLI a breeze.  Now that I know where the agents are installed, I need to create a file that lists all the compute (db) nodes in the Exadata; this file is named dbs_group.  The last thing I need is a login for each of the nodes.  On all Exadata systems the oracle user can be used to log in to each of the nodes.
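Building the dbs_group file is nothing more than listing the compute node hostnames, one per line (the hostnames below are placeholders for the real node names):

```shell
# create the DCLI node list; one compute node hostname per line
cat > ~/dbs_group << 'EOF'
exadb01
exadb02
exadb03
exadb04
EOF

# quick sanity check on the node count
wc -l < ~/dbs_group    # → 4
```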

Now that I have all the information I need, there are two command line switches to use with the DCLI command: -l (login) and -g (group).

The command that will be used to check all the EM Agents on the Exadata is:


dcli -l oracle -g ~/dbs_group /u01/app/oracle/product/agent12c/core/ status agent

Once I run the DCLI command, I get output for all the EM Agents across all the nodes of the Exadata.

[e###db01:oracle:F####1] /home/oracle
> dcli -l oracle -g ~/dbs_group /u01/app/oracle/product/agent12c/core/ status agent
e###db01: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db01: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db01: —————————————————————
e###db01: Agent Version     :
e###db01: OMS Version       :
e###db01: Protocol Version  :
e###db01: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db01: Agent Binaries    : /u01/app/oracle/product/agent12c/core/
e###db01: Agent Process ID  : 6638
e###db01: Parent Process ID : 6307
e###db01: Agent URL         :
e###db01: Repository URL    :
e###db01: Started at        : 2014-03-16 02:29:10
e###db01: Started by user   : oracle
e###db01: Last Reload       : (none)
e###db01: Last successful upload                       : 2014-03-17 10:15:41
e###db01: Last attempted upload                        : 2014-03-17 10:15:41
e###db01: Total Megabytes of XML files uploaded so far : 3.84
e###db01: Number of XML files pending upload           : 0
e###db01: Size of XML files pending upload(MB)         : 0
e###db01: Available disk space on upload filesystem    : 29.98%
e###db01: Collection Status                            : Collections enabled
e###db01: Heartbeat Status                             : Ok
e###db01: Last attempted heartbeat to OMS              : 2014-03-17 10:16:56
e###db01: Last successful heartbeat to OMS             : 2014-03-17 10:16:56
e###db01: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:56
e###db01: —————————————————————

e###db01: Agent is Running and Ready
e###db02: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db02: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db02: —————————————————————
e###db02: Agent Version     :
e###db02: OMS Version       :
e###db02: Protocol Version  :
e###db02: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db02: Agent Binaries    : /u01/app/oracle/product/agent12c/core/
e###db02: Agent Process ID  : 28588
e###db02: Parent Process ID : 28478
e###db02: Agent URL         :
e###db02: Repository URL    :
e###db02: Started at        : 2014-03-16 02:24:59
e###db02: Started by user   : oracle
e###db02: Last Reload       : (none)
e###db02: Last successful upload                       : 2014-03-17 10:15:40
e###db02: Last attempted upload                        : 2014-03-17 10:15:40
e###db02: Total Megabytes of XML files uploaded so far : 3.18
e###db02: Number of XML files pending upload           : 0
e###db02: Size of XML files pending upload(MB)         : 0
e###db02: Available disk space on upload filesystem    : 38.92%
e###db02: Collection Status                            : Collections enabled
e###db02: Heartbeat Status                             : Ok
e###db02: Last attempted heartbeat to OMS              : 2014-03-17 10:16:59
e###db02: Last successful heartbeat to OMS             : 2014-03-17 10:16:59
e###db02: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:59
e###db02: —————————————————————

e###db02: Agent is Running and Ready
e###db03: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db03: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db03: —————————————————————
e###db03: Agent Version     :
e###db03: OMS Version       :
e###db03: Protocol Version  :
e###db03: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db03: Agent Binaries    : /u01/app/oracle/product/agent12c/core/
e###db03: Agent Process ID  : 5834
e###db03: Parent Process ID : 5733
e###db03: Agent URL         :
e###db03: Repository URL    :
e###db03: Started at        : 2014-03-16 02:42:35
e###db03: Started by user   : oracle
e###db03: Last Reload       : (none)
e###db03: Last successful upload                       : 2014-03-17 10:12:39
e###db03: Last attempted upload                        : 2014-03-17 10:12:39
e###db03: Total Megabytes of XML files uploaded so far : 8.54
e###db03: Number of XML files pending upload           : 0
e###db03: Size of XML files pending upload(MB)         : 0
e###db03: Available disk space on upload filesystem    : 39.97%
e###db03: Collection Status                            : Collections enabled
e###db03: Heartbeat Status                             : Ok
e###db03: Last attempted heartbeat to OMS              : 2014-03-17 10:16:38
e###db03: Last successful heartbeat to OMS             : 2014-03-17 10:16:38
e###db03: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:38
e###db03: —————————————————————

e###db03: Agent is Running and Ready
e###db04: Oracle Enterprise Manager Cloud Control 12c Release 3
e###db04: Copyright (c) 1996, 2013 Oracle Corporation.  All rights reserved.
e###db04: —————————————————————
e###db04: Agent Version     :
e###db04: OMS Version       :
e###db04: Protocol Version  :
e###db04: Agent Home        : /u01/app/oracle/product/agent12c/agent_inst
e###db04: Agent Binaries    : /u01/app/oracle/product/agent12c/core/
e###db04: Agent Process ID  : 5853
e###db04: Parent Process ID : 5734
e###db04: Agent URL         :
e###db04: Repository URL    :
e###db04: Started at        : 2014-03-16 02:53:08
e###db04: Started by user   : oracle
e###db04: Last Reload       : (none)
e###db04: Last successful upload                       : 2014-03-17 10:16:33
e###db04: Last attempted upload                        : 2014-03-17 10:16:33
e###db04: Total Megabytes of XML files uploaded so far : 2.81
e###db04: Number of XML files pending upload           : 0
e###db04: Size of XML files pending upload(MB)         : 0
e###db04: Available disk space on upload filesystem    : 45.75%
e###db04: Collection Status                            : Collections enabled
e###db04: Heartbeat Status                             : Ok
e###db04: Last attempted heartbeat to OMS              : 2014-03-17 10:16:50
e###db04: Last successful heartbeat to OMS             : 2014-03-17 10:16:50
e###db04: Next scheduled heartbeat to OMS              : 2014-03-17 10:17:50
e###db04: —————————————————————

e###db04: Agent is Running and Ready


From the output, I can clearly see that all the EM Agents are monitoring the Exadata and are in running status.  I really like the DCLI command; it makes life simple, especially when needing to check on multiple agents within an environment.  It makes me wonder if I can put DCLI on a traditional RAC system?


twitter: @dbasolved


Filed under: Exadata, OEM
Categories: DBA Blogs

Tuning with ASH Analytics in #em12c

Sun, 2014-03-16 21:06

Performance tuning of the Oracle Database has always been a great thing to learn and a valuable skill for any database administrator.  In a previous post, I talked about how to use the Automatic Workload Repository (AWR) and Active Session History (ASH) to performance tune a database.  In this post, I want to take a look at Active Session History (ASH) Analytics, which can be found in Oracle Enterprise Manager 12c.

Active Session History (ASH) Analytics is used to help the database administrator determine the cause of spikes within database activity.  The ASH Analytics page provides stacked area charts to help visualize the ASH data from various dimensions.  Some of these dimensions are:

  • Wait Class
  • Module
  • Actions
  • SQL ID
  • Instance
  • User Session
  • Consumer Group

In order to use the ASH Analytics page, you first have to access the database home page.  Once on the database home page, you can access ASH Analytics from the Performance menu (Performance –> ASH Analytics) (Image 1).

Note: If you are accessing the ASH Analytics page for the first time, it will ask you to install the PL/SQL package for it.

Image 1:

Once you are on the ASH Analytics page, you will notice three distinct sections of graphs and charts (Image 2). 

Image 2:

The top graph (Image 3) provides a high-level perspective of the top activity during a selected period of time.  The grey shaded box, by default, shows a 5 minute window of activity.  If you need to see a larger timeframe, drag the box in either direction using the handles provided.

Image 3:

When you select a timeframe, the Activity graph below will change to show the activity for that period (Image 4).  The Activity graph looks a lot like the Top Activity page graph.  By highlighting or clicking on the graph or legend (right-hand side of the graph), you can see which wait category is taking the longest.

Example: In the case of the graph in Image 4, the wait is a Configuration (brown) wait along with an Other (pink) wait.  This is due to log writer (LGWR) issues on my small server.

Image 4:

Notice in Image 4 that there are a few options in the upper left corner of the graph.  You see the words “Filter None”.  The graph can be filtered on the dimensions stated earlier.  If you change the view of the graph to the Load Map and then click on one of the waits, the filter will change from Wait Class to the wait you clicked on in the load map.  Image 5 shows that the load map is being filtered by “Wait Event: log buffer space”.

Image 5:

So far, I have selected the timeframe I wanted to see and identified what was taking the most database time during that timeframe (LGWR and Log Buffer Space).  The last section on the ASH Analytics page is actually two different listings that can be filtered by the dimensions already outlined (Images 6 and 7).

In Image 6, you see a list of SQL_IDs that are taking up database time.  You can correlate these SQL_ID activities back to the wait categories in the Activity graph.   At this point, if you wanted to create a SQL Tuning Set or tune the SQL directly, buttons are provided to accomplish these tasks.

Example: I don’t need to tune the SQL because it is a SQL statement I’ve been working with, and I already know that LGWR and Log Buffer Space are the issues.

Image 6:

In Image 7, the output was filtered by Wait Class as the dimension.  As you can tell, the wait taking the longest is the Configuration wait.  Again, this goes right along with the earlier findings shown on the page. 

Image 7:

At this point, I have identified that the Configuration wait (LGWR) event is causing the problem with the performance on the system.  In order to fix this, I would need to look at my redo log sizes and locations to make sure that I’m not trying to perform too many I/O operations over the same I/O interface.
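A quick way to review the current redo log sizes and file locations is a sketch like this against the standard V$LOG and V$LOGFILE views (column formatting omitted):

    select l.group#, l.thread#, l.bytes/1024/1024 as size_mb,
           l.status, f.member
    from   v$log l
    join   v$logfile f on f.group# = l.group#
    order  by l.group#, f.member;

If the logs turn out to be small or the members share the same disks, resizing the logs or spreading the members across storage can relieve the LGWR pressure.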


By adding ASH Analytics to your DBA toolkit, you should be able to quickly identify performance problems.  Couple ASH Analytics with the ASH and AWR Reports and you have a solid performance tuning basis to begin diagnosing problems with the database.


twitter: @dbasolved


Filed under: Database, OEM, Performance
Categories: DBA Blogs

Create a tablespace in #DB12C

Sun, 2014-03-16 14:48

I’ve been messing with Oracle Database 12c today.  As I’ve been working at setting up a few things to test, I realized I needed to create a tablespace or two, and I didn’t have a post on how to create tablespaces with Oracle Database 12c.

Before I can do any testing, I have to setup a few tablespaces for the SLOB testing tool.  Below is how to simply create tablespaces using Oracle Managed Files (OMF) within Oracle Database 12c.

Traditional/Container Database

This is a top-level database.  For a traditional style database (non-CDB), creating a tablespace is just like in previous versions.

create tablespace SLOB datafile size 1M autoextend on next 1m;

With a Container Database (CDB), first make sure you are in the correct container.  This can be done with either SHOW CON_ID or SHOW CON_NAME.


The CDB always has a container ID of 1.  Using the CON_NAME option, the top-level container name is CDB$ROOT.
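From SQL*Plus, the check looks like this when connected to the root container (output trimmed):

    SQL> show con_id

    CON_ID
    ------------------------------
    1

    SQL> show con_name

    CON_NAME
    ------------------------------
    CDB$ROOT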

Now to create a tablespace in the CDB, you need to be logged in as a SYSDBA. Then you can run the same command to create the tablespace as you did for a traditional database.

create tablespace SLOB datafile size 1M autoextend on next 1m;

Pluggable Databases

To create tablespaces in pluggable databases (PDBs), you need to ensure that you are in the correct container you want to create the tablespace for.  PDBs have container IDs greater than 2.  The CON_NAME option is also a good way to ensure that you are in the correct container.
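Switching into a PDB before creating the tablespace can be done with ALTER SESSION (the PDB name pdb1 below is just an example):

    SQL> alter session set container = pdb1;
    SQL> show con_name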


Now that you have confirmed that you are in the correct container, you can create the tablespace.

create tablespace SLOB datafile size 1M autoextend on next 1m;

How to check where the tablespaces are located

Everyone’s first response when checking where tablespaces are located is to look at DBA_TABLESPACES. Within the Database 12c architecture, this view is only valid for the container you are in.  Oracle has provided a few new views to use with Database 12c.  These views are called the CDB views.  If you want to see the tablespaces you created above and which container they are in, the CDB_TABLESPACES view needs to be used.

Set the container back to CDB$ROOT and verify that you are in the correct container.


Now that you are back in the CDB container, let’s use the CDB_TABLESPACES view to see which containers you created a SLOB tablespace in.
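The check itself is a simple query, run as SYSDBA from CDB$ROOT:

    select con_id, tablespace_name
    from   cdb_tablespaces
    where  tablespace_name = 'SLOB'
    order  by con_id;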


As you can see, the SLOB tablespace has been created in containers 1, 3 and 4.  If you want to find the names for the con_ids, these can be found in the CDB_PDBS view.  CON_ID 2 is the seed PDB, which is used for cloning PDBs.
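To map the con_ids to PDB names, a query along these lines works (CDB_PDBS lists only the PDBs, not the root):

    select con_id, pdb_name
    from   cdb_pdbs
    order  by con_id;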


Oracle Enterprise Manager 12c view

Everything that has been shown has used SQL and SQL Developer.  The same information can be found in Oracle Enterprise Manager 12c (OEM) under the container database.  The screenshot below shows the SLOB tablespaces in OEM.



twitter: @dbasolved


Filed under: Database
Categories: DBA Blogs

#EM12c Browser Security – Certificate Installation

Thu, 2014-03-13 09:38

One thing that bugs me is browser security.  When I access Oracle Enterprise Manager 12c (OEM12c) through a browser and get the certificate error (Image 1), it just pushes my buttons.   Why not ship a valid certificate with OEM12c?  In reality, the problem is not with OEM12c; it is actually with the browser you choose to use.  In my case, I use Google Chrome a lot with OEM12c.  So how can I get rid of this security certificate warning?

Image 1:

To get rid of this warning message, the security certificate needs to be installed in the browser.  Where can I get the security certificate? 

Within Google Chrome, I right-click on the lock icon (Image 2).

Image 2:

I’m presented with a menu (Image 3).  From this menu, I can see that the connection has not been verified.

Image 3:

If I click on the “Certificate Information” link, I’m taken to an information dialog about the certificate (Image 4).  The information provided clearly states that the certificate could not be verified up to a trusted certification authority.

Image 4:

Obviously, I need to add the certificate as a trusted authority.  In order to do this, I need to first save the certificate to my hard drive.  This is done from the Details tab (Image 5), where you will see a Copy to File option.

Image 5:

Clicking on the Copy to File option takes me to the Certificate Export wizard (Image 6). Using this wizard, I export the certificate to my desktop.

Image 6:

I typically export the certificate in X.509 format (Base-64) (Image 7). 

Image 7:

Lastly, I save the certificate to my desktop (Image 8):

Image 8:

Finally, close the certificate wizard (Image 9).

Image 9:

Now that I have the certificate saved to my desktop, I can import it into the browser as a trusted authority.  In order to do this in Google Chrome, I need to go to the Settings page (Image 10).

Image 10:


Once on the Settings page, I open Show Advanced Settings (Image 11) and look for HTTPS/SSL (Image 12) to manage certificates.

Image 11:

Image 12:

Clicking the Manage Certificates button will open the Certificates dialog (Image 13). I then need to go to the Trusted Root Certification Authorities tab.  On this tab, I see an Import button that I can use to import the certificate I saved to my desktop.

Image 13:

Using the import wizard (not pictured), I can import the certificate as a trusted certificate.  As part of the import, I receive a Security Warning; clicking Yes will install the certificate (Image 14).

Image 14:

Now that the certificate is installed, the next time I attempt to access OEM12c, I go directly to the login page (Image 15).

Image 15:


If you want to know how to install the certificate for Internet Explorer or Firefox, the steps can be found here.


twitter: @dbasolved


Filed under: OEM
Categories: DBA Blogs

Performance Tuning with AWR & ASH in #em12c

Wed, 2014-03-12 10:42

Oracle monitoring and performance tuning have grown over the years.  With every release of the Oracle database, the monitoring capabilities provided have evolved.  In Oracle 11g, monitoring with Statspack has given way to the Automatic Workload Repository (AWR) and Active Session History (ASH) for near real-time monitoring of items within the Oracle database.  Adding AWR and ASH to your performance toolkit can yield huge time savings in tuning endeavors.

Note: AWR and ASH were also in Oracle 10g, but not as robust as they currently are in Oracle 11g.

Note: If you are interested in how to run AWR and ASH from the command line check out this post by a good friend of mine: here

Instead of focusing on the command line to run AWR and ASH, let’s take a look at how these tools are used through Oracle Enterprise Manager 12c. 

Note: The AWR and ASH tools are covered under the Diagnostic and Tuning Management Packs for the database and OEM.  Before using these tools, make sure you have access to these management packs.

Automatic Workload Repository (AWR)

The Automatic Workload Repository (AWR) automates database statistics gathering by collecting, processing, and maintaining performance statistics for database problem identification and self-tuning.  A key part of AWR is snapshots.  AWR snapshots are gathered by default every hour and queried from memory.  Snapshots can be used to identify performance problems over time.
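If you do not want to wait for the hourly collection, a snapshot can also be taken on demand with the DBMS_WORKLOAD_REPOSITORY package:

    SQL> exec dbms_workload_repository.create_snapshot;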

In order to use AWR, there are two initialization parameters that are relevant:

  • STATISTICS_LEVEL – set to TYPICAL (the default) or ALL to enable statistics gathering for AWR. Setting it to BASIC will disable statistics gathering.
  • CONTROL_MANAGEMENT_PACK_ACCESS – set to DIAGNOSTIC+TUNING (the default) or DIAGNOSTIC to enable diagnostic monitoring.  Setting it to NONE will disable many database features, including ADDM (not covered in this post).
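Both settings can be confirmed quickly from a SQL*Plus session:

    SQL> show parameter statistics_level
    SQL> show parameter control_management_pack_access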
Automatic Workload Repository (AWR) Report

With AWR enabled for the database, an AWR report can be run from within OEM.  While on a database home page within OEM, the AWR report can be accessed from the Performance menu (Performance –> AWR –> AWR Report) (Image 1). 

Image 1:

Before you can run an AWR report, OEM will ask you to log in to the database.  As with anything in OEM, you can use Named Credentials or create new ones (Image 2). 

Image 2:

After logging in, you will be asked whether you want to go “By Baseline” or “By Snapshot”.  Most of the time, “By Snapshot” will be used (Image 3).  This step corresponds to the command-line version of the AWR report when it asks which snapshots you would like to use.  Then click the Generate Report button to build the report.

Tip: Use the magnifying glass to look up the snapshots.  This will open a separate window.

Image 3:

Once the report is generated, it can be read in the OEM interface or saved to a file for later reading or emailing to others.

Reading AWR Report

One thing to notice is the difference between running the AWR report from the command line and from the OEM interface: at the command line, a text-based report can be chosen, while from OEM the report is automatically saved in HTML format.  Although there is a difference in report formats, the report is read the same way no matter what the format is.

The header of the report (Image 4) provides you with general information about the environment the report was gathered from.  It also provides information about the snapshots used to generate the report and the time difference between them.

Image 4:

The next section in the report is the Report Summary.  In the Report Summary, there are a few items of interest.  The first of these is the “Instance Efficiency Percentages” section (Image 5).  In this section, you can get a sense of how well the instance is managing its memory operations.  Ideally, the target should be 100%.

Note: The only time I’m concerned about this area is if any of the values are below 90%. Often this is an indication that the SGA may be too small.

Image 5:

In the same section, I also look at the “Top 5 Timed Foreground Events” section (Image 6).  This section of the report provides you with a list of the current wait events that are taking the most wait time and which wait class they belong to.  At this point, I can see what I should be tuning the environment to prevent.  In the image provided, I can see that I have a wait dealing with high water mark contention (enq: HW – contention). 

From looking at these waits, it appears that I have an object that is possibly having a problem with extents being allocated.  What I need to do next is see if I can find the object that is causing the issue.

Image 6:

Since I know that I have a high water mark contention issue, I need to look at what objects are being written to in the database.  In the “Segment Statistics” section of the report, I want to look at the “Segments by Physical Writes” sub-section (Image 7).  From looking at this section, I can see that the problem is on a LOB object named SYS_LOB0000253792C00002$$.

Image 7:

From looking at the AWR report, I can quickly see the object that is causing the problem with the wait.  I can make an attempt to correct the problem and remove the wait by changing the LOB object in an appropriate way.  In this case, that may be adding extents, looking at moving the LOB to a less-accessed tablespace, etc. 

Active Session History (ASH)

Although I have identified the object that is causing the wait, I want to find the session that is causing it.  This is where I can use the Active Session History (ASH) report. Before diving into the ASH report, let’s take a look at what Active Session History (ASH) actually is.

Active Session History provides a detailed history of session activity by sampling database activity every second.  These samples are taken from memory and then stored in persistent storage (tables).  The sampling that ASH does is at the session level rather than the instance level.  The statistics that are captured are only for active sessions and are directly related to the work being performed by the sessions rather than the entire database.

Active Session History (ASH) reports are used to analyze transient performance problems within the database during a specific time period. 
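Under the covers, the in-memory samples are exposed through the V$ACTIVE_SESSION_HISTORY view; a rough “top events over the last five minutes” query is a sketch like this (the view requires the Diagnostics Pack license):

    select nvl(event, 'ON CPU') as event, count(*) as samples
    from   v$active_session_history
    where  sample_time > sysdate - 5/1440
    group  by nvl(event, 'ON CPU')
    order  by samples desc;

Each row in the view is one second of one active session, so the sample counts approximate seconds of database time.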

Let’s take a look at how the ASH report is run and what it provides.

Active Session History (ASH) Report

In Oracle Enterprise Manager 12c (OEM), the ASH Report is accessed through the “Top Activity” menu (Performance –> Top Activity) (Image 8) for that database. 

Image 8:

Once on the “Top Activity” page, below the graph that is displayed there is a button labeled “Run ASH Report” (Image 9). 

Image 9:

Clicking on the “Run ASH Report” button will bring up the page where you provide specific information about the ASH report you would like to run (Image 10).  Since ASH samples data every second, you can select a beginning date, ending date and associated times that you would like to look at in the report. 

Note: Most times, when I generate an ASH report, I try to keep to the same timeframe as the AWR report.  This provides some consistency to the findings.

Image 10:

Once I have selected the timeframe I want to use, the report can be generated.  Just like the AWR report, the ASH report can be viewed inside of OEM or saved to a file for later reference or sharing with others.

Reading the ASH Report

Let’s read the report now.  As you recall, I found an issue with a high-water mark contention wait.  What I will be looking for in the ASH report is similar information and, possibly, who is behind the wait.

Just like the AWR report, the ASH report has a header section where I can find all the information related to the database, database instance and timing of the report (Image 11).  This section is general in nature, but it also provides similar information on SGA usage, just like the AWR report does.

Image 11:

The next thing I like to look at in the ASH report is the “Top User Events” section (Image 12).  If I have selected the same timeframe as the AWR report, I expect to see similar wait events to those I found earlier.  As you can see, the high-water mark contention wait is at the top of the list again.

Image 12:

Now that I have confirmed that the high-water mark contention wait is the top wait for the sessions during this timeframe, I want to confirm that it is the same object I found in the AWR report.  Information on the object causing the wait can be found in the “Top Event P1/P2/P3 Values” section (Image 13).

Image 13:

In order to confirm that it is the same object, I need to search DBA_EXTENTS with the values from P2 (data file) and P3 (block number).  Depending on how fast the system is, I can quickly identify the object being accessed that is causing the wait.
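A sketch of that DBA_EXTENTS lookup (substitute the P2 and P3 values taken from the report):

    select owner, segment_type, segment_name
    from   dba_extents
    where  file_id = &P2
    and    &P3 between block_id and block_id + blocks - 1;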

In any tuning that is being done, I want to find the SQL statement that the session is using.  This information can also be found in the ASH report, along with the top sessions and whether any sessions are being blocked.


Overall, these two reports (AWR and ASH) provide a wide range of information for anyone to use in performance diagnosis and tuning.   These reports can be run from the command line or from OEM, depending on your comfort level with the tools; however, I recommend that any DBA looking to keep a database in good working order and top performance learn how to use and read these reports.  It will save a lot of time in resolving issues.


twitter: @dbasolved


Filed under: Database, OEM, Performance
Categories: DBA Blogs

Find ILOM(s) for #exadata

Wed, 2014-03-12 06:58

Oracle Exadata can be a complex piece of equipment that is housed in a data center or some other remote location. Once it is all configured and attached to the network, managing it can be done using standard SSH tools.  At times, there is a need to manage the Exadata or any other engineered system with Integrated Lights Out Management (ILOM). 

What is ILOM? 

Integrated Lights Out Manager (ILOM) provides, through a service processor (SP) with its own hardware and software, a way to manage an Exadata machine component; i.e. compute nodes, cell nodes, storage nodes, etc.  The ILOM is pre-installed on these components.

What is ILOM used for? 

ILOM enables you to actively manage and monitor nodes independently of the operating system, providing you with a reliable way to manage Exadata without direct access.

Using ILOM, you can proactively manage the following:

  • Identify hardware errors and faults
  • Remotely control the power of the node
  • View the graphical and non-graphical console of the host
  • View current status of sensors and indicators of the system
  • Identify the hardware configuration of the system
  • Receive alerts that are generated about system events

With the highlights/benefits of ILOM listed, how do you find the addresses for the ILOM?  Normally, this information is provided to you when the Exadata is initially configured.  From time-to-time you will need to look up the addresses to access the ILOM. 

Addresses for any of the Exadata components can be found in the “dbm.dat” file.  This file is located under /opt/oracle.SupportTools/onecommand.  In order to access the “dbm.dat” file, you will need root access.

$ cd /opt/oracle.SupportTools/onecommand
$ cat dbm.dat

Image 1 shows the ILOM excerpt from the dbm.dat file.

Image 1:

Once you have identified the hostname/IP address for ILOM, then it can be used in a web browser to access the ILOM web interface (Image 2).

Image 2:

Once at the login page for the ILOM, it can be accessed using the “root” user and password.


twitter: @dbasolved


Filed under: Exadata
Categories: DBA Blogs

Combat with an enqueue wait event (enq: HW–Contention)

Wed, 2014-03-05 07:24

Recently I’ve been combating a high water mark enqueue wait (enq: HW – contention) on a single node within an Exadata I’m supporting.   I first noticed the wait when I was looking at the performance page for the node in Oracle Enterprise Manager 12c.  What I noticed was a lot of brown-looking spikes (Image 1).  These spikes correspond to a Configuration wait.

Image 1:


When I clicked on Configuration in the legend on the side of the graph, I was taken to the Active Session Waiting: Configuration page (Image 2).  On this page, I can clearly see that the wait event I’m waiting on is the “enq: HW – contention” wait.

Image 2:


Now that I know what wait event I’m waiting on, I needed to figure out what object was causing the wait.  Everyone has a different approach to how they performance tune; I find it easier to identify the objects causing a wait by using the Active Session History (ASH) report.  ASH reports can be run directly from the OEM Active Session Waiting page (the page with the graphs) or from the command line.   Since I like the pretty graphs that OEM provides, I prefer to run ASH from the command line and use the graphs to help identify what direction my tuning is going.

Active Session History (ASH) Report

The Active Session History (ASH) report is handy in helping to identify what object(s) the wait is waiting on.  In order to run the ASH report from the command line, there are a few requirements.  They include:

  • Move to the directory where you would like to save the report

Once these requirements have been met, the ASH report can be run from the command line using the ashrpt.sql script. 

                SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql

When the script starts to run, it will ask for input on what format you would like the report in. For searchability purposes (from the command line), I go with the “Text” option.   Next, it asks for a timeframe (Image 3); provide a timeframe as per the layout listed.

Image 3:

Lastly, it will ask you what you would like to name the report.  I normally prefix it with my initials and an underscore (bc_) (Image 4). 

Image 4:

Once I press Enter, the report runs and the output is stored in the directory I started SQL*Plus from.

Reading the ASH Report

In reading the ASH report, I want to identify the primary object that is holding the “enq: HW – contention” wait.   In order to do this, I need to look at the “Top Event P1/P2/P3 Values” section of the report (Image 5).  Reviewing this section, I see that the top event is “enq: HW – contention”, the same as I saw from the OEM performance pages.  Next, I need to find the values of P2 and P3.   

Image 5:

The values for P2 and P3 tell me what data file (P2) to look at for the block (P3) holding the wait.  The P2 and P3 values for the wait are currently “85” and “16018”.

Finding the Object

Armed with the values for P2 and P3 from the ASH report, I can now locate the object via SQL.  The SQL used to identify the object is as follows:

select /*+ parallel(a, 4) */ a.owner, a.segment_type, a.segment_name
from dba_extents a
where a.file_id = &P2
and &P3 between a.block_id and a.block_id + a.blocks - 1;

Note: I’m using a parallel hint; this is to speed up the identification of the block.  It can help in VLDBs. 

Once the script is done running, it returns the owner, segment_type, and segment_name of the object holding the enq: HW – contention wait (Image 6).  In this case, the wait is on a system-generated LOB index. 

Image 6:

Resolving the Wait

At this point, I have identified the following:

  • The wait: enq: HW – contention
  • The object holding the wait

Next, I want to resolve the wait as fast as I can.  Knowing that the wait is a high watermark (enq: HW – contention) wait event, there needs to be a basic understanding of high watermarks and how extents are allocated.  The basic reason the “enq: HW – contention” wait arises is that additional extents are not being allocated in a timely manner for the data being inserted into the object. 

In order to resolve this wait, additional extents need to be allocated.  The environment I’m working in is using bigfile data files with automatic segment space management (ASSM).  When initially looking into how to increase the extents, I found there are many different ways.  This database using ASSM makes the extent allocation simpler.  To allocate another extent for the object identified, use the following:

             SQL> alter index <owner>.SYS_IL0000253792C00002$$ allocate extent;

The above ALTER statement is syntactically correct; however, the problem is that I was trying to allocate an extent on a system-generated index.  The error message below states that this is not allowed:

Error starting at line : 32 in command -
alter index <OWNER>.SYS_IL0000253792C00002$$ allocate extent
Error report -
SQL Error: ORA-22864: cannot ALTER or DROP LOB indexes
22864. 00000 -  “cannot ALTER or DROP LOB indexes”
*Cause:    An attempt was made to ALTER or DROP a LOB index.
*Action:   Do not operate directly on the system-defined LOB index.
           Perform operations on the corresponding LOB column.

In order to add extents, I needed to identify the object that is using the system generated LOB index.  This information can be found in DBA_LOBS (SQL below).

select owner, table_name, column_name, segment_name, index_name
from dba_lobs
where index_name = 'SYS_IL0000253792C00002$$';

The SQL returns the owner, table_name and the LOB column within the table that is using the system-generated LOB index (Image 7).   Using this information, I can now allocate additional extents to the LOB.

Image 7:

In order to allocate an extent on the object identified, the following was used:

alter table <owner>.<table> modify lob(<column>) (allocate extent (size 5M));

Once this completed, I was expecting the wait (enq: HW – contention) to go away.  After some time, I saw no improvement.  This led me to open a service request (SR) with Oracle Support to try to resolve the issue. 


Opening the SR led me to dig a bit and do some research on extent allocation and LOBs while waiting on Oracle’s answer.  There is a known bug (6376915) related to high watermark enqueue contention for ASSM LOB segments (Note ID: 6376915.8).  This bug was actually fixed, but the fix needs to be “turned on” in later releases.  To turn the fix for this bug on, an event needs to be set in the spfile.


Setting this event to a value between 1 and 1024 identifies the number of chunks to be cleaned up each time a reclamation operation is performed.  In turn, this reduces the number of requests against the high watermark enqueue. 
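For reference, the event documented in the bug note is 44951; the spfile setting looks roughly like this (verify the exact syntax against MOS note 6376915.8 for your release before applying it):

    alter system set event='44951 trace name context forever, level 1024' scope=spfile;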

Once the event has been set in the spfile, the database instance needs to be rebooted to make the change effective. 


The steps in resolving this enqueue event are not difficult; however, when extents need to be allocated manually, it becomes very interesting.  The downside to setting the event, per Oracle Support, is the need to restart the database instance to make the event active.


twitter: @dbasolved


Filed under: Database, Exadata, Performance
Categories: DBA Blogs

Fast Recovery Area Monitoring – Where are the metrics! #em12c

Fri, 2014-02-28 09:10

Oracle Enterprise Manager 12c is a great monitoring tool for the enterprise; I think I’ve said that more than once over the last two years. However, with every release, small yet simple things change, and it is always the small things that will get you.  I had set up monitoring for a client using monitoring templates within OEM12c; everything was being monitored, or so I thought!  I got a call from my client asking why nobody was alerted when the Fast Recovery Area (FRA) filled up due to archive logs.  My initial response was that it should have alerted; I’d look into what happened.

Naturally, the first place I started was with the monitoring template (Enterprise –> Monitoring –> Monitoring Templates –> View/Edit desired template) to check and make sure that the Archive Area Used (%) metric was set. 


The monitoring template for the database instances had the Archive Area Used (%) metric, set to email a warning at 80% full and a critical at 90% full.  Why were the emails not triggered?  The template had been applied to all database instances.


The easiest way to find out what this metric is supposed to do is to look at the reference documentation on supported metrics (here).  This particular metric is listed under Database Instance.  In reading the description of the Archive Area Used (%) metric, I found a note that leads directly to what the issue was.


As the note says, if the database is using the Fast Recovery Area (FRA) for archive logs, then the metrics associated with archive logs do not apply.  The Recovery Area Free Space (%) metric has to be used to monitor the Fast Recovery Area.  OK, simple enough; let’s just add the metric to the template. 

When trying to add Recovery Area Free Space (%) to the template using the Database Instance target type, there are no metrics for the Fast Recovery Area (the image shows a partial list of metric categories).  Where are the Fast Recovery Area metrics?


Again, I go back to the reference guide and look up Fast Recovery metrics.  Section 5.34 of the reference guide has a good bit of information on the metrics related to the Fast Recovery Area, but no definitive answers on where these metrics are stored or how to add them to a template.

At this point, what do I know? 

  1. Archive Area Used (%) cannot be used to monitor the Fast Recovery Area. 
  2. I know which metrics are needed to monitor the Fast Recovery Area, but I cannot find them to add them to a template. 

Maybe “All Metrics” under a database target would shed some light on the situation.

To access “All Metrics” for a database instance, follow Targets –> Databases –> Database Instance.  Once I was at the database instance I wanted to look at, I went to Oracle Database –> Monitoring –> All Metrics.


Once in “All Metrics”, I can see every metric associated with an Oracle database instance.  At the top of the metric tree, there is a search box for finding a metric.  When I search for “Fast”, I find all the Fast Recovery metrics.


Great, I found all the metrics I want related to the Fast Recovery Area.  Now how do I get them into a template so I can set thresholds for monitoring?  Back to the template (Enterprise –> Monitoring –> Monitoring Templates). 

When I edit the template, I notice (and have always noticed) the tabs at the top: General, Metric Thresholds, Other Collected Items, Access.  Normally, I’m only worried about the metrics on the Metric Thresholds tab; since I haven’t had any luck adding the metrics I want, let’s take a look at the “Other Collected Items” tab.


Scrolling down through the “Other Collected Items” tab, I find the Fast Recovery category for metrics.


Apparently, the Fast Recovery metrics are already added to the template; so how do the metrics on the “Other Collected Items” tab work, and can they be alerted against?  Again, back to the documentation.

This time when looking at the documentation, I needed to look up templates to find the answer.  In section 8.2 of the Oracle Enterprise Manager Cloud Control Administrator’s Guide, I found the answer I needed.  Here is why the Fast Recovery Area metrics are not configurable with thresholds:


Oracle has made all the metrics related to the Fast Recovery Area non-metric!  That is right: OEM is gathering the information but not allowing you to alert on it with thresholds!  Although they are part of the template, the template will only gather the information; in the end, I would need to go to “All Metrics” to see the results.


If you want to monitor the Fast Recovery Area and have thresholds against metrics, the solution is to use Metric Extensions.  Metric Extensions allow the end user to create custom metrics for monitoring.  Once a Metric Extension is created, it will be seen in “All Metrics” and can then be added to a monitoring template with thresholds assigned.

Instead of going into how to develop Metric Extensions in this post, I have provided some really great posts below on how to implement and use Metric Extensions.  I have also provided a link to a similar post by Courtney Llamas of Oracle, which shows how the metric extensions are set up.


Almost everyone now is using the Fast Recovery Area to store their backups and archive logs.  Monitoring of this area is critical; however, out of the box, Oracle Enterprise Manager 12c needs to be adjusted to monitor the Fast Recovery Area with the correct metrics.  This slight change in metric monitoring came as a surprise compared with previous editions of OEM.  In the end, OEM is still a good monitoring tool for the enterprise; we just now need to make some small adjustments.


Friendly Oracle Employees – Pete Sharman (would say find him on twitter as well but he doesn’t tweet)

Oracle Enterprise Manager 12c Documentation (


twitter: @dbasolved


Filed under: Database, OEM
Categories: DBA Blogs

OEM Agent Core directory deleted/re-added – What Happened? (EXADATA)

Mon, 2014-02-24 14:43

Ever have one of those days when someone calls and says "We/I accidentally deleted the whole directory; can you get it back for me"?  Well, over the weekend I had that happen with an OEM 12c agent on an Exadata, where the core directory for the agent was deleted by mistake.  Before I could evaluate the situation, I had to reassure the end user that the removal of the core directory under the agent home wasn't a major issue.  The agent was still running in memory and reporting to OEM.  For all intents and purposes, the agent was still monitoring the Exadata node. 

After the end user was assured that the problem could be fixed, the question became:  How can the missing core directory be replaced?

The simplest way would be to reinstall the agent silently; however, this takes a lot of time and effort to get everything working again.  I wanted the shortest possible way to recover the directory so there would not be a huge window of unmonitored time. 

In this post, I want to show you how I recovered the agent's core directory without losing any targets or having to resync the agent from OEM afterwards.

Note:  I have a good post on agent silent installs located here, which is helpful for understanding some of the process that was used.  Although I cover every environment in my post on silent installs, Maaz Anjum covers silent installs for Windows pretty well too; check it out here

As I mention in the note above, I needed to pull the correct agent binaries from the OMS library (outlined in the silent install post).  Once I had the binaries extracted to a temporary location, I needed to edit the response file (agent.rsp).  The response file was edited according to the silent install post.

The values that were changed within the response file were for:


All the values for these variables need to match what the existing agent had (information can be found in OEM under Setup –> Manage Cloud Control –> Agents).


With the response file ready, before I can run a silent install, the currently running agent needs to be stopped.  With the core directory gone, the only way to stop the running agent is to use "ps -ef | grep agent" and "kill -9 <process id>". 

Reminder: there will be two (2) processes that need to be killed: one a Perl process and the other a Java process.  Once the processes are killed, the agent that was running is down.
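The hunt-and-kill step can be sketched as a dry run; the ps listing below is simulated, and the PIDs and paths are illustrative, not from an actual Exadata node:

```shell
# Simulated "ps -ef" output; on a live host you would pipe ps -ef itself.
# The agent runs as two processes: a Perl watchdog (emwd.pl) and a Java process.
ps_output='oracle   4021     1  0 10:02 ?  00:00:01 perl emwd.pl agent /u01/app/oracle/product/agent12c/agent_inst/sysman/log/emagent.nohup
oracle   4055  4021  1 10:02 ?  00:05:12 /usr/java/bin/java ... oracle.sysman.gcagent.tmmain.TMMain'

# Pull the PIDs (second column) of every line mentioning the agent
pids=$(printf '%s\n' "$ps_output" | grep -i agent | awk '{print $2}')

# Print the kill commands instead of running them (dry run)
for pid in $pids; do
  echo "kill -9 $pid"
done
```

On the real host, replace the echo with an actual kill -9 only after confirming the PIDs belong to the agent.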

Once the agent is down, the agent software can be installed using the silent install method.  Now, here is where the install process differs from a normal agent installation.  Earlier I said the core directory under the agent home was deleted.  This means that everything else is still in place; only the core directory of the binaries has to be relinked.  How can I get only the core directory out of the binaries? 

In trying to answer this question, I used the -help option with the script.  The -help option provides a few examples of how to use the script.  I have listed these examples below:

        This command is expected to do the complete agent install and configuration with the provided inputs.

Example2: AGENT_BASE_DIR=/scratch/agt RESPONSE_FILE=/scratch/agent.rsp -softwareOnly -invPtrLoc /scratch/agent/oraInst.loc -debug
        This command is expected to copy the agent bits to the agent base directory.

Example3: AGENT_BASE_DIR=/scratch/agent EM_UPLOAD_PORT=1000 -forceConfigure
        This command is expected to do the agent install and also force the agent configuration even though the oms host and port are not available.

Example4: AGENT_BASE_DIR=/scratch/agent AGENT_INSTANCE_HOME=/scratch/agent/agent_inst -configOnly
        This command is expected to do the agent configuration only with the provided inputs.

Example5: AGENT_BASE_DIR=/scratch/agent s_agentHomeName=myAgent -ignorePrereqs
        This command is expected to skip the prereqs and then continue with the agent deployment also notice in the inventory that instead of the default home name, myAgent home name will be assigned for the agent home.

As I looked at the examples, I noticed Example 2; a software only install.  I decided to give that a try.  Keep in mind all I needed was the core directory.  The command I used to do a software only install was:

./ AGENT_BASE_DIR=/u01/app/oracle/product/agent12c RESPONSE_FILE=/tmp/agent_12030/agent.rsp -softwareOnly

As the deployment started, I noticed that the rebuilds and relinks for all the binaries were being performed on the agent home.  Once the deployment was done updating all the dependencies, it completed successfully and returned me to the command prompt.


The software only deployment of the silent install replaced the missing core directory in the agent home.  Now the only question left was will the agent start?

To test if the agent would start, I needed to go into the agent home:

$ cd /u01/app/oracle/product/agent12c/core/

$ ./emctl status agent

In running the above commands, I was expecting to see the agent status as down, since I had just completed the agent deployment.  What I received instead was an unusual error.  The error was:

$ ./emctl status agent

EM Configuration issue. #DEFAULT_EMSTATE# not found.

In researching this error (#DEFAULT_EMSTATE#) in My Oracle Support (MOS), only two notes were found (1607805.1/1543473.1).  From reading the notes and reviewing the emctl file under the core directory, I identified that this was a configuration problem.  What needed to be done to fix it?
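As a quick sanity check before fixing anything, the unsubstituted placeholder can be spotted directly in the file; a minimal sketch, using a stand-in emctl fragment (the real file, paths, and contents differ):

```shell
# Stand-in for the broken emctl; a software-only deploy can leave this
# template token unsubstituted (path and contents are illustrative)
cat > /tmp/emctl_sample <<'EOF'
EMSTATE=#DEFAULT_EMSTATE#
ORACLE_HOME=/u01/app/oracle/product/agent12c/core
EOF

# Any hit here means the configure step never filled in the agent instance home
grep -n '#DEFAULT_EMSTATE#' /tmp/emctl_sample
```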

To make a long story short, the simplest way to fix this issue was to copy an emctl from another Exadata node.  This was the simplest route because all the nodes have the same agent home configuration.  Once the updated emctl was put in place, I was able to start the agent and get all the information I wanted from it.


With the agent running, my next question was: what was OEM's reaction to the agent being reconfigured/rebuilt this way?  To my surprise, OEM didn't have a problem.  The agent was able to upload with no issues, and OEM reported that no re-syncing was needed.  The only thing I can conclude from this is that the configuration files were never deleted, so once the core directory was relinked, OEM saw everything as it was before the deletion.


twitter: @dbasolved


Filed under: Exadata, Golden Gate
Categories: DBA Blogs

Oracle JDBC Connection Tester

Thu, 2014-02-20 12:08

From time to time, when working at various sites I have to check connections to a database.  If you have been an Oracle DBA for some time, you already know that the listener has to be up and services have to be registered before you can connect from a remote host.  If the listener is up and services are registered, then you should be able to get to the database; so why the need for a connection tester?

Recently, I've been asked to verify connections for a few databases to give management a "warm and fuzzy" about their connections to environments; it also helps in troubleshooting if you know the service name.  I have been asked to verify connections more than once in my career and always wanted a simple way to test without logging into the database host or needing an Oracle client.  Hence this simple JDBC connection tool.

This tool is simple to use; it takes the following to build the URL for connecting to Oracle:

  • Hostname/IP Address
  • Listener Port
  • SID/Service Name
  • Username
  • Password

Note: The password will be passed in clear text!

Once the requirements for the URL are provided, the tool attempts to login to the database with the username and password provided.  If the listener and services are working correctly, the tool will return a successful message.  If not, the tool will provide an error stack with the reason why it failed to connect.
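Under the hood, those inputs reduce to a thin-driver JDBC URL.  A minimal sketch of the service-name form (the hostname, port, and service below are made-up values, not ones from the tool):

```shell
# Illustrative connection details
HOST=db01.example.com
PORT=1521
SERVICE=orcl

# Thin-driver URL in service-name form; for a SID the shape is host:port:SID instead
URL="jdbc:oracle:thin:@//${HOST}:${PORT}/${SERVICE}"
echo "$URL"
```

If a connection fails, comparing the URL the tool built against this shape is a good first check.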

I am making the tool available here if you would like to test or use it.   

In order to run the tool, it needs to be run from the command prompt, and Java should be in your path.  The tool was written against JDK 1.7 and uses the ojdbc6.jar which is provided with Oracle 11g.  The ojdbc6.jar file is also provided in the lib directory after unzipping the tool.  The README.txt file that comes with the tool explains how to run it.

Here is a screenshot of a run done previously.



twitter: @dbasolved


Filed under: Database, General
Categories: DBA Blogs

SPFILE for a CDB vs. a PDB

Fri, 2014-02-14 00:28

Recently, I have had conversations with users about Oracle Database 12c.  One question that often comes up is: how does the database manage the parameters between a container database (CDB) and a pluggable database (PDB)?  In order to answer this question, I had to work with my pluggable database setup in a test environment.  Before changing anything within my database setup, I first made a backup of the current parameters using a tried and true method.

SQL> create pfile=’init12ctest.ora’ from spfile;

Before diving into changing and storing parameters, it is important that you understand the concept of inheritance.  Once you understand inheritance within Oracle Database 12c, you will grasp how parameters can be changed and stored for PDBs.



Inheritance, in the context of Oracle Database 12c, means that the value of a particular parameter in the root (CDB) is inherited by one or more PDBs.  There are parameters that can be changed at the PDB level to override what is being inherited from the CDB.  To identify which parameters can be modified for a PDB, check that the ISPDB_MODIFIABLE column in the V$SYSTEM_PARAMETER view is TRUE.  If ISPDB_MODIFIABLE is TRUE, the parameter inherits the value from the CDB until it is changed with an ALTER SYSTEM SET command at the PDB.

  Setting Parameters

Now, let's work on changing parameters for a PDB.  In order to do this, you need to navigate into a PDB using the ALTER SESSION SET CONTAINER command. 


Once inside the PDB, you need to find which parameters are modifiable.  These parameters can be found using the following SQL statement.  Remember, ISPDB_MODIFIABLE has to be TRUE.


With this list of parameters, pick one at random to change.  Let's change the statistics_level from typical to all for bcpdb1.


By using SCOPE=BOTH, you are setting the parameter both in memory and in the spfile.  This way, on a restart of the database, the PDB will retain the setting.

Now that the parameter has been changed, can you see this change in the parameters for the CDB?  Not at the CDB level, but you can at the PDB level from V$PARAMETER.  If you want to see the parameters that are set system-wide (CDB & PDB), you need to use the V$SYSTEM_PARAMETER view.  The following SQL will display the statistics_level parameter for both the CDB and the PDB (make note of the con_id column).

     WHERE NAME = 'statistics_level'


At this point, you have two different settings for statistics_level depending on which container you are in.  Now, make a backup of the spfile and verify that your settings for the PDB are in fact stored in it.

SQL> CREATE PFILE=’init12ctest1.ora’ FROM SPFILE;

Let’s take a look at the pfile you just created.

bc12c.__oracle_base='/oracle/app'#ORACLE_BASE set from environment
*.dispatchers='(PROTOCOL=TCP) (SERVICE=bc12cXDB)'

Notice that the parameter statistics_level is not included in the output for the pfile.  Why is this?  If you go back to the documentation on Oracle Database 12c (here), you will find a note stating:

Note: A text initialization parameter file (PFILE) cannot contain PDB-specific parameter values.

  Where are the parameters stored

Since the parameters for the PDB are not stored in the spfile, where are they stored then?  In order to find this, you need to take the V$SYSTEM_PARAMETER view apart using GV$FIXED_VIEW_DEFINITION.


The V$SYSTEM_PARAMETER view points you to the GV$SYSTEM_PARAMETER view.  Let’s grab the definition for this view.


The definition for GV$SYSTEM_PARAMETER shows you that the information is coming from the X$KSPPI and X$KSPPSV tables in the data dictionary.  The SQL that defines the GV$SYSTEM_PARAMETER view is listed below.

select x.inst_id,x.indx+1,ksppinm,ksppity,ksppstvl, ksppstdvl, ksppstdf, 
decode(bitand(ksppiflg/65536,3),1,'IMMEDIATE',2,'DEFERRED', 3,'IMMEDIATE','FALSE'), 
decode(bitand(ksppiflg,4),4,'FALSE',
decode(bitand(ksppiflg/65536,3), 0, 'FALSE', 'TRUE')),
decode(bitand(ksppilrmflg/64, 1), 1, 'TRUE', 'FALSE'), 
decode(bitand(ksppilrmflg/268435456, 1), 1, 'TRUE', 'FALSE'), 
ksppdesc, ksppstcmnt, ksppihash, y.con_id
from x$ksppi x, x$ksppsv y
where (x.indx = y.indx)
and  bitand(ksppiflg,268435456) = 0
and  ((translate(ksppinm,'_','#') not like '##%')
and  ((translate(ksppinm,'_','#') not like '#%')
or   (ksppstdf = 'FALSE')
or   (bitand(ksppstvf,5) > 0)))


Now that you know the X tables to use, let’s take a look and see if you can locate the statistics_level parameter for bcpdb1.



Notice that the CON_ID is 3.  If you query CDB_PDBS, you will notice that bcpdb1 has a CON_ID of 3.  At this point, you have located where the value of statistics_level for the PDB is stored.  If you go back to the documentation, you will find references stating that if SCOPE=SPFILE or SCOPE=BOTH was used when setting the parameter, the value will be transferred and stored in the XML file when the PDB is unplugged from the CDB.  Let's test this out.

  Unplug a PDB

To test whether the parameter (statistics_level) is stored in the XML file or the data dictionary, let's see what happens to the parameter when you unplug the PDB.  According to the documentation, when unplugged, the value of statistics_level should be stored in the associated XML file for plugging into a CDB.  Let's unplug BCPDB1.




With the pluggable database BCPDB1 unplugged from the CDB, let's see if the parameter values for statistics_level are still in the data dictionary.



Apparently, the parameter values for statistics_level are gone.  Let’s check the XML file that was created in the /tmp directory to see if the parameter is there.


The values in the XML file appear to be just general settings.  The statistics_level parameter didn't make it into the XML file either. Hmmm…  Let's plug the PDB back in and see if the value comes back to the data dictionary.
  Plug-in the PDB

In order to plug in the PDB, make sure you still have the data files and the XML file needed.  What you are hoping for is that the statistics_level parameter comes back for the PDB, set to a value of ALL. 



With the PDB (BCPDB1) open, let's check and see if the statistics_level parameter is in the data dictionary.



Sure enough, the statistics_level parameter came back.  This means that the parameters for the PDB are stored in the PDB data dictionary. :)



When working with Oracle Database 12c, you have to understand how the initialization parameters are set for the CDB and each of its associated PDBs.  The traditional ways of looking at an SPFILE are only useful for a CDB database, because the CDB is the root level that controls many of the items shared amongst the CDB and PDBs.  In order to fully understand how parameters are set for a PDB, remember that PDBs inherit parameter values from the CDB unless they are overridden at the PDB level. 




twitter: @dbasolved


Filed under: Database
Categories: DBA Blogs

Speaking at RMOUG TD 2014

Tue, 2014-02-04 14:22


Let the conference season begin! 

I will be presenting at RMOUG Training Days 2014 this month in Denver, CO.  The presentation I will be giving is a revised version of my OOW 2013 presentation, Oracle Enterprise Manager 12c, Database 12c and You!  It covers a few basics of Database 12c and how Oracle Enterprise Manager 12c can be used to monitor a Database 12c environment.

Being that this is my first RMOUG conference, I’m looking forward to an exciting 3 days of presentations and networking.   This is going to be fun!


twitter: @dbasolved


Filed under: General
Categories: DBA Blogs

Find what databases you have quickly – #EM12c Inventory Search

Sat, 2014-02-01 22:51

Thought I would write a quick tip on how to find what versions of databases you have in your environment.  Oracle Enterprise Manager 12c provides a quick and easy way to find this information once it is configured within your environment. 

Starting with the Enterprise menu, go to Configuration and then Inventory and Usage Details (Enterprise –> Configuration –> Inventory and Usage Details) (Figure 1).

Figure 1: Inventory and Usage Details Menu Item


After clicking Inventory and Usage Details, you will land on the associated page.  In the drop-down box labeled "Show", select Databases (Figure 2).  This will show you all the databases being monitored by Oracle Enterprise Manager 12c.

Figure 2: Show Menu


Once all the databases are displayed, you can see exactly what you have in your environment (Figure 3).  In this case, the test environment is comprised of 2 11gR2 databases ( & and a set of 12cR1 databases (CDB & 2 PDBs).

Figure 3:  Database Inventory listed (not a great graphic)


Hope this quick tip helps you find additional information about targets in your environment.


twitter: @dbasolved


Filed under: OEM
Categories: DBA Blogs

Database Machine (EXADATA) Schematic problem in OEM12c

Tue, 2014-01-28 11:45

When adding an Exadata to Oracle Enterprise Manager 12c (OEM), the process is pretty easy, yet at times you may run into a problem or two.  Many of these problems can be minimized by using the Oracle Enterprise Manager 12c: Exadata Discovery Cookbook (cookbook).  The cookbook can be found here, so you don't have to search too far for it.

As with any target in OEM, you add the Exadata the same way (Setup –> Add Target –> Add Targets Manually).  I'm not going to go into detail here on adding the Exadata target because it is covered in the cookbook and is also a basic function of OEM.  Just keep in mind that to add an Exadata, you will need the databasemachine.xml or the catalog.xml file.  With the Exadatas I've been adding, I have been using databasemachine.xml.

What I do want to go over is the case where you add an Exadata and the schematic of the Exadata box does not appear in OEM.  In Figure 1, you see that I have two Exadata boxes listed on the Oracle Exadata Database Machines page. 

Figure 1: Oracle Exadata Database Machines


One of these Exadatas was added to OEM without a problem; for the other, the schematic didn't come up.  To make this a bit easier, the top database machine is the one that is missing the schematic.  Figure 2 shows what I'm referring to: only the legend shows up, with no graphic for the machine itself.

Figure 2: Missing schematic


So, the question has to be asked: why did this happen?  Everything was configured correctly!  I was so puzzled by this that I had to ask Oracle for a bit of help.  What we (myself and Oracle) identified was that the databasemachine.xml file had a duplicate entry in it. 

Fixing the databasemachine.xml file is explained in a bit.  First, you need to remove the database machine that is having the problem from OEM.  This can be done from the Database Machine menu (Figure 3). 

Figure 3: Database Machine menu


After clicking the Remove Target… option, you are taken to a page that provides options for removing the Exadata from OEM (Figure 4).  To remove the whole Exadata, the default radio button is fine.  Just make sure you select the option to Unsubscribe SNMP Subscriptions; once you check the box, make sure all the passwords are correct.  Then click the Yes button in the right-hand corner.

Figure 4: Remove Database Machine


When the Exadata is removed, OEM will drop you back into the All Targets page.  At this point, the Exadata has been removed from OEM.

Now, you are back to a clean slate for adding the Exadata to OEM.  Before you rush to re-add the machine, we need to take a look at the databasemachine.xml file.  What you are looking for is any possible duplicates in the file.

Note: The EMOMS.log file on the OEM server helps in resolving this issue.

In reviewing the EMOMS.log, you will want to search for any interactions between OEM and the name of your Exadata.  In searching the EMOMS.log you may find an Oracle error; in my case the error was ORA-00001.  OEM couldn't add a record to the EM_DBM_RACK_COMPONENTS table due to a PK violation (Listing 1).

Listing 1: PK Error in EMOMS.log

ERROR model.DbmDiscovery logp.251 – ORA-00001: unique constraint (SYSMAN.EM_DBM_RACK_COMPONENTS_PK) violated

Knowing that there is a primary key error on the EM_DBM_RACK_COMPONENTS table should lead you to think that the problem is in the databasemachine.xml file.  The databasemachine.xml file can be edited with any text editor; the file is an XML file.  A simple search for each of the values in the <ADMINNAME> tags will expose any duplicates.  Listing 2 shows what I found in the databasemachine.xml file I'm working with.

Listing 2: Duplicates in databasemachine.xml

   <ITEM ID="1">







    <ITEM ID="13">







One of these entries in the databasemachine.xml file has to go; however, do not remove the whole <ITEM> element.  Just remove the name from the <ADMINNAME></ADMINNAME> tags in the second entry.  In this example, <ITEM ID="13"> is the duplicate, and I removed the name from its <ADMINNAME> tags.
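Rather than eyeballing a large file, duplicate <ADMINNAME> values can be flushed out with a quick pipeline; a sketch against a tiny stand-in file (the names and layout are illustrative, not from a real databasemachine.xml):

```shell
# Tiny stand-in for databasemachine.xml; the real file is far larger
cat > /tmp/dbm_sample.xml <<'EOF'
<ITEMS>
  <ITEM ID="1"><ADMINNAME>exadb01-adm</ADMINNAME></ITEM>
  <ITEM ID="13"><ADMINNAME>exadb01-adm</ADMINNAME></ITEM>
  <ITEM ID="2"><ADMINNAME>exadb02-adm</ADMINNAME></ITEM>
</ITEMS>
EOF

# List every ADMINNAME value and report only the ones that repeat
grep -o '<ADMINNAME>[^<]*</ADMINNAME>' /tmp/dbm_sample.xml \
  | sed 's/<[^>]*>//g' \
  | sort | uniq -d
```

Any name printed here is a candidate duplicate to blank out in the second entry.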

After editing the databasemachine.xml file, copy it to all the compute nodes within the Exadata.  This way you make sure that all the machines have the same file.  To do this, use SCP (Listing 3).

Listing 3: SCP to copy to other compute nodes

scp ./databasemachine.xml root@db02:/opt/oracle.SupportTools/onecommand
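To push the corrected file to every compute node in one pass, a loop works well; a dry-run sketch (the node names are illustrative, swap in your own):

```shell
# Compute nodes to receive the corrected file (illustrative names)
NODES="db02 db03 db04"
SRC=./databasemachine.xml
DEST=/opt/oracle.SupportTools/onecommand

# Dry run: print each scp command instead of executing it
for node in $NODES; do
  echo "scp $SRC root@${node}:${DEST}"
done
```

Drop the echo once the target list is confirmed.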

Now, with the databasemachine.xml file in place, you can try to re-add the Exadata to OEM.  Again, keep in mind the easiest way to add the Exadata is to follow the cookbook (here); that way all the points are covered.

Once you rediscover the Exadata in OEM, you should be able to pull up the Database Machine Schematic correctly (Figure 5). 

Figure 5: The correct view of Database Machine Schematic


Now that the Exadata has been imported into OEM successfully, you can add all the associated targets running on it (i.e., databases, ASM, listeners, etc.). 


twitter: @dbasolved


Filed under: Exadata, OEM
Categories: DBA Blogs