DBASolved

Helping Oracle DBAs solve problems

OGG-00212, what a frustrating error.

Thu, 2014-08-14 14:50

Normally, I don't mind errors when I'm working with Oracle GoldenGate (OGG); I actually like getting errors, since they keep me on my toes and give me something to solve.  Clients, on the other hand, do not like errors…LOL.  Solving errors in OGG is normally pretty straightforward with the help of the documentation.  Today, though, I almost have to disagree with the docs.

Today, as I’ve been working on implementing a solution with OGG 11.1.x on the source side and OGG 11.2.x on the target side, this error came up as I was trying to start the OGG 11.1.x Extracts:

OGG-00212  Invalid option for MAP: PMP_GROUP=@GETENV("GGENVIRONMENT".

OGG-00212  Invalid option for MAP:  TOKENS(.

I looked around in the OGG documentation and other resources (online and offline).  Some errors are self-explanatory; that is not the case with OGG-00212.  Looking up the error in the OGG 11.1.x docs was pointless; it didn't exist there.  When I finally found the error in the docs for OGG 11.2.x, the docs say:

OGG-00212: Invalid option for {0}: {1}
Cause: The parameter could not be parsed because the specified option is invalid.
Action: Fix the syntax

Now that the documentation has stated the obvious, how is the error actually corrected?  There is no easy way to correct this error because it is syntax related.  In my case, the error was being thrown because additional spaces were needed in the TABLE mapping.  Silly, I know, but true.

Keep in mind, to fix an OGG-00212 error, especially with OGG 11.1.x or older, remember to add spaces where you may not think one is needed.

Example (causes the error):

TABLE <schema>.SETTINGS,TOKENS( #opshb_info() );

Example (fixed the error):

TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );

Notice the space between the comma (,) and TOKENS, and also between TOKENS and the opening parenthesis.  Those simple changes fixed the OGG-00212 error I was getting.

Hope this helps!

Enjoy!

http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

#GoldenGate Bound Recovery

Sun, 2014-08-03 19:40

Every once in a while when I restart an extract, I see entries in the report file that reference “Bounded Recovery”.  What exactly is “Bounded Recovery”?

First, keep in mind that “Bounded Recovery” is only for Oracle databases!

Second, according to the documentation, “Bounded Recovery” is a component of the general extract checkpointing facility.  This component of the extract guarantees an efficient recovery after an extract stops for any reason, no matter how many uncommitted transactions are currently outstanding.  The Bounded Recovery parameter sets an upper boundary for the maximum amount of time that an extract needs to recover to the point where it stopped before continuing normal processing.

The default setting for “Bounded Recovery” is 4 hours, and the needed recovery information is cached in the OGG_HOME/BR/<extract name> directory.  This can be verified by looking at the report file for my extract named EXT.


2014-07-21 17:26:30 INFO OGG-01815 Virtual Memory Facilities for: BR
 anon alloc: mmap(MAP_ANON) anon free: munmap
 file alloc: mmap(MAP_SHARED) file free: munmap
 target directories:
 /oracle/app/product/12.1.2/oggcore_1/BR/EXT.

Bounded Recovery Parameter:
BRINTERVAL = 4HOURS
BRDIR = /oracle/app/product/12.1.2/oggcore_1

According to the documentation, the default settings for “Bounded Recovery” should be sufficient for most environments.  It is also noted that the “Bounded Recovery” settings shouldn't be changed without the guidance of Oracle Support.

Now that the idea of “Bounded Recovery” has been established, let's try to understand a bit more about how a transaction is recovered in Oracle GoldenGate with the “Bounded Recovery” feature.

At the start of a transaction, Oracle GoldenGate must cache the transaction (even if it contains no data).  The reason for this is the need to support future operations of that transaction.  If the extract hits a commit, the cached transaction is written to the trail file and cleared from memory.  If the extract hits a rollback, the cached transaction is discarded from memory.  As long as an extract is processing a transaction, before a commit or rollback, the transaction is considered an open transaction and will continue to be collected.  If the extract is stopped before it encounters a commit or rollback, all of the cached transaction information has to be recovered before the extract can start again.  This applies to all transactions that were open at the time the extract was stopped.

There are three ways that an extract performs recovery:

  1. If there were no open transactions when the extract was stopped, recovery begins at the current extract read checkpoint (normal recovery).
  2. If there were open transactions whose start points in the log were very close in time to when the extract was stopped, the extract begins its recovery by re-reading the logs from the beginning of the oldest open transaction (also considered a normal recovery).
  3. If there were one or more open transactions that the extract qualified as long-running open transactions, the extract begins its recovery from the last Bounded Recovery checkpoint (Bounded Recovery).

What defines a long-running transaction for Oracle GoldenGate?

Transactions in Oracle GoldenGate are long-running if the transaction has been open longer than one (1) “Bounded Recovery” interval.

A “Bounded Recovery interval” is the amount of time between “Bounded Recovery checkpoints”, which persist the current state and data of the extract to disk.  “Bounded Recovery checkpoints” are used to identify a recovery position between two “Bounded Recovery intervals”.  The extract will pick up from the last “Bounded Recovery checkpoint” instead of processing from the log position where the open long-running transaction first appeared.

What is the maximum Bounded Recovery time?

The maximum bounded recovery time is no more than twice the current “Bounded Recovery checkpoint” interval.  However, the actual recovery time will be dictated by the following:

  1. The time from the last valid Bounded Recovery interval to when the extract was stopped
  2. Utilization of the extract in that period
  3. The percentage of utilization for transactions that were previously written to the trail

Now that the basic details of “Bounded Recovery” have been discussed, how can the settings for “Bounded Recovery” be changed?

“Bounded Recovery” can be changed by updating the extract parameter file with the following parameter:


BR
[, BRDIR directory]
[, BRINTERVAL number {M | H}]
[, BRKEEPSTALEFILES]
[, BROFF]
[, BROFFONFAILURE]
[, BRRESET]

As noted, there are a few options that can be set with the BR parameter.  If I wanted to shorten my “Bounded Recovery” interval and change the directory where the cached information is stored, I can do something similar to this:


--Bound Recovery
BR BRDIR ./dirbr BRINTERVAL 20M

In the example above, I'm changing the directory to a new directory called DIRBR (created manually as part of the subdirectories).  I also changed the interval from 4 hours to 20 minutes.

Note: 20 minutes is the smallest accepted time for the BRINTERVAL parameter.

After adding the BR parameter with options to the extract, the extract needs to be restarted.  Once the extract is up and running, the report file for the extract can be checked to verify that the new parameters have taken effect.


2014-08-02 22:20:54  INFO    OGG-01815  Virtual Memory Facilities for: BR
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /oracle/app/product/12.1.2/oggcore_1/dirbr/BR/EXT.

Bounded Recovery Parameter:
BRINTERVAL = 20M
BRDIR      = ./dirbr

Hopefully, this post provided a better understanding of one of the least understood options within Oracle GoldenGate.

Enjoy!!

About me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Macros and parameter passing macros with #goldengate

Tue, 2014-07-22 10:04

Replicating data can be a time-consuming process to set up.  Fortunately, Oracle GoldenGate provides a few tools to help ease the complexity of setup.  One of these tools is the “macro”.  Macros are used to simplify and automate the work associated with setting up and replicating data.  So what exactly is a macro?  Oracle defines a macro as:

A macro is a built-in automation tool that enables you to call a stored set of processing steps from within the Oracle GoldenGate parameter file.

In a nutshell, a macro is a stored set of commands that are used on a frequent basis, consisting of parameters for anything from simple to complex series of substitutions, calculations, or conversions.  Macros may be written inline in the parameter file or stored in a macro library.

What this post will show you is how to add a macro to a library, accept parameters, and then pass a parameter to the macro from within the parameter file during replication.

The first thing that needs to be done is to set up a standard directory to contain all the macro files.  A directory called “dirmac” needs to be created in the OGG_HOME.

Note: The “dirmac” directory is something that you will need to create with the following command: “mkdir -p $OGG_HOME/dirmac”.  A macro directory can be created anywhere you like; I personally try to keep all OGG items together.

In the macro directory, create a file to use as the macro library.  Ideally, you should have a file for each type of process running in that $OGG_HOME.  Once the file is created, then edit the library file and add the macros desired.


> mkdir -p $OGG_HOME/dirmac
> cd $OGG_HOME/dirmac
> touch <library_name>.mac 
> vi <library_name>.mac 

When the macro library file is open for editing, add the macros that are desired.  Remember, a macro library can house more than one macro.  Below are two examples.  The first example defines macros that set token values that can be called later.  The second example defines a macro that maps tokens and header information to a specific table.


-- Example 1 --

MACRO #src_csn_ts
BEGIN
SRC_CSN_TS = @GETENV('GGHEADER','COMMITTIMESTAMP')
END;

MACRO #src_icnt
BEGIN
ICNT = @GETENV('STATS', 'INSERT')
END;

MACRO #src_ucnt
BEGIN
UCNT = @GETENV('STATS', 'UPDATE')
END;

MACRO #src_dcnt
BEGIN
DCNT = @GETENV('STATS', 'DELETE')
END;

MACRO #src_dmlcnt
BEGIN
DMLCNT = @GETENV('STATS', 'DML')
END;

-- Example 2 --

MACRO #hb_mappings
PARAMS (#src_schema)
BEGIN
MAP #src_schema.RANDOM_VALUES, target SCOTT.GG_REP_OP_STATUS,
INSERTMISSINGUPDATES
COLMAP (
SRC_DB_SCHEMA=@token('SRC_SCHEMA'),
GG_REPLICAT_NAME=@GETENV('GGENVIRONMENT','GROUPNAME'),
TGT_LAST_UPDATE_DT=@DATE('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV('JULIANTIMESTAMP')),
SRC_CSN_TS=@token('SRC_CSN_TS'),
ICNT=@token('ICNT'),
UCNT=@token('UCNT'),
DCNT=@token('DCNT'),
DMLCNT=@token('DMLCNT')
);
END; 

Notice in example 2 the PARAMS statement.  When using macros this can get a bit confusing, since defining a macro uses the hash mark (#) and parameters in the PARAMS statement use the hash mark (#) as well.  Also notice that the parameter #src_schema is used in the MAP statement in the macro.  This is how the value for #src_schema is passed into the macro from the parameter file.

Now, let's take a look at a parameter file.

In my test environment, I have the following processes running:


GGSCI (oel.acme.com) 1> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
JAGENT STOPPED
EXTRACT RUNNING EXT 00:00:09 00:00:09
EXTRACT RUNNING PMP 00:00:00 00:00:07
REPLICAT RUNNING REP 00:00:00 00:00:04

Taking a look at the extract parameter file (capture side): in order to use the macros in example 1, add an INCLUDE statement that references the macro library to the parameter file.  Then, in the TABLE statement, the token macros can be referenced inside the TOKENS clause.


-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

--Specifies the name of the extract process
EXTRACT EXT

--Set Oracle Environment Variables
SETENV (ORACLE_HOME="/oracle/app/product/11.2.0.4/dbhome_1")
SETENV (ORACLE_SID="bc11g")

--Oracle Login
USERID ggate, PASSWORD ggate

--Warns for a long running transaction
WARNLONGTRANS 1h, CHECKINTERVAL 30m

--Trace process info
--TRACE ./dirrpt/trace_ext.trc

--Specifies the location of the remote trail file on target machine
EXTTRAIL ./dirdat/lt

--Ignore transactions for golden gate user
TRANLOGOPTIONS EXCLUDEUSER GGATE

--Resolves the TABLES to be replicated ON START-UP
WILDCARDRESOLVE IMMEDIATE

INCLUDE ./dirmac/ops_info.mac

--Table Mappings
TABLE SCOTT.RANDOM_VALUES, TOKENS(#src_csn_ts(), #src_icnt(), #src_ucnt(), #src_dcnt(), #src_dmlcnt());

Once the extract parameter file is updated, then the extract needs to be restarted. Upon restart of the extract, keep an eye out and make sure the process doesn’t abend.  The VIEW REPORT command can be used to check the report file during startup.  Once the extract starts, you can see how the macro is used and expanded in the parameter file.  From the extract side, the needed info will be captured and placed in the associated trail files.

In the middle, the pump really doesn’t need any changes or restarting.

Now on the replicat (apply) side, in order to use the macro defined in example 2 (above), the replicat parameter file needs to be edited to include the macro library and statements that call the macro.  The next code block shows the contents of my replicat parameter file.


--Specifies the name of the replicat load process.
REPLICAT REP

-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

SETENV (ORACLE_HOME="/oracle/app/product/11.2.0.4/dbhome_1")
SETENV (ORACLE_SID="bc11g")

--Oracle login.
USERID ggate, PASSWORD ggate

--suppress triggers - enable for 11.2.0.2 or later
--DBOPTIONS SUPPRESSTRIGGERS

ALLOWDUPTARGETMAP

--The source ddl and target ddl are identical
ASSUMETARGETDEFS

--Tracing info
--TRACE ./dirrpt/trace_rep.trc

--Specifies name and location of a discard file.
DISCARDFILE ./dirrpt/REP.dsc, append, megabytes 200

--Resolves the TARGETs to be replicated during process start-up
--WILDCARDRESOLVE IMMEDIATE

--Specify error handling rules:
REPERROR(default, discard)
REPERROR(default2, discard)

--Table Mappings
INCLUDE ./dirmac/ops_info.mac

map SCOTT.RANDOM_VALUES, target SCOTT.RANDOM_VALUES_HIST;
#hb_mappings(SCOTT);

You will notice that I have included the INCLUDE statement to call the macro library.  Then the macro (example 2) that does the table mapping for the desired information can be accessed using #hb_mappings() (last line of the parameter file example).  Passing the schema name is as simple as placing it between the parentheses.  As the example above shows, I'm passing SCOTT as the schema I want to use.

Upon restart of the replicat, by looking at the report (VIEW REPORT), I can see where the macro library is read and how the macro is translated into a map statement for the replicat to use.


-- Report Output (summarized) --

map SCOTT.RANDOM_VALUES, target SCOTT.RANDOM_VALUES_HIST;
#hb_mappings(SCOTT);
MAP SCOTT.RANDOM_VALUES, target SCOTT.GG_REP_OP_STATUS,
INSERTMISSINGUPDATES
COLMAP (
SRC_DB_SCHEMA=@token('SRC_SCHEMA'),
GG_REPLICAT_NAME=@GETENV('GGENVIRONMENT','GROUPNAME'),
TGT_LAST_UPDATE_DT=@DATE('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV('JULIANTIMESTAMP')),
SRC_CSN_TS=@token('SRC_CSN_TS'),
ICNT=@token('ICNT'),
UCNT=@token('UCNT'),
DCNT=@token('DCNT'),
DMLCNT=@token('DMLCNT')
);

After the replicat has restarted, the table in the mapping statement (SCOTT.GG_REP_OP_STATUS) can be checked to see if any data was inserted.  Image 1 below shows the output of the data I requested to be replicated using the macro.

Image 1:
image1_macro_params.png


This should have shown you a way to use macros within your replication environment.

Enjoy!!!

http://about.me/dbasolved



Filed under: Golden Gate
Categories: DBA Blogs

Understanding and using tokens in Oracle #GoldenGate

Mon, 2014-07-21 10:53

Recently, I’ve been doing some work with a client where tokens need to be used.  It came to my attention that the basic usage of tokens is misunderstood.  Let’s see if I can clear this up a bit for people reading.

In Oracle GoldenGate, tokens are a way to capture and store data in the header of the trail file (more info on trail headers here).  Once a token has been defined, captured and stored in the header, it can be retrieved, on the apply side, and used in many ways to customize what information is delivered by Oracle GoldenGate.

Defining a token is pretty simple; however, keep these three points in mind:

  1. You define the token and associated data
  2. The token area in the trail file header permits up to a total of 2,000 bytes (token name, associated data, and length of data)
  3. Use the TOKENS option of the TABLE parameter in Extracts

In order to define a token in an extract, the definition should follow this basic syntax:


 TABLE <schema>.<table>, TOKENS( SRC_CSN_TS = @GETENV('GGHEADER','COMMITTIMESTAMP'));

In the example above, the token will be populated with the commit timestamp of the transaction on the table it is defined against.  After restarting the extract, the token (SRC_CSN_TS) will be included in the header of the trail file.

Once the trail file is shipped to the target side and read by the replicat, the token is mapped to a column in the target table.


MAP <schema>.<table>, target <schema>.<table>,
COLMAP (
SRC_CSN_TS=@token('SRC_CSN_TS')
); 

Image 1 is a view of a table where I have mapped the token (SRC_CSN_TS) to a target table column to keep track of the committed timestamps of transactions on the source system.

Image 1:

output_tokens1.png


Tokens are simple to create and use, and they are a powerful feature for mapping data between environments.

Enjoy!!

twitter: @dbasolved

blog: https://dbasolved.com



Filed under: Golden Gate
Categories: DBA Blogs

Speedy #em12c template export

Thu, 2014-07-03 20:50

Whether you use a single OEM and are migrating to a new one, or have multiple OEMs, the need to move templates between environments will arise.  I had this exact problem come up recently at a customer site between an OEM 11g and an OEM 12c.  In order to move the templates, I needed to export multiple monitoring templates using EMCLI.  The command that I used to do individual exports was the following:


./emcli export_template -name="<template name>" -target_type="<target_type>" -output_file="/tmp/<template name>.xml"

If you have only one template to move, the EMCLI command above will work.  If you have more than one template to move, the easiest thing to do is to have the EMCLI command run in a script.  This is the beauty of EMCLI; the ability to interact with OEM at the command line and use it in scripts for repeated executions.  Below is a script that I wrote to export templates based on target_types.

Note: If you need to identify the target_types that are supported by OEM, they can be found in SYSMAN.EM_TARGET_TYPES in the repository.
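
A quick way to pull that list is a simple query against the repository.  This is only a sketch (run it as SYSMAN or a user with access to the SYSMAN schema; the exact column names can vary between OEM releases):

select distinct target_type
from sysman.em_target_types
order by 1;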


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#
use strict;
use warnings;

#Parameters
my $oem_home_bin = "/opt/oracle/app/product/12.1.0.4/middleware/oms/bin";
my @columns = ("", 0, 0, 0, 0);
my @buf;
my $target_type = $ARGV[0];

#Program

if (scalar @ARGV != 1)
{
 print "\nUsage:\n";
 print "perl ./emcli_export_templates.pl <target_type>\n\n";
 print "<target_type> = target type for template being exported\n";
 print "refer to sysman.em_target_types in repository for more info.";
 print "\n";
 exit;
}

system($oem_home_bin.'/emcli login -username=<userid> -password=<password>');
system($oem_home_bin.'/emcli sync');

@buf = `$oem_home_bin/emcli list_templates`;

foreach (@buf)
{
 @columns = split (/ {2,}/, $_);

 if ($columns[2] eq $target_type )
 {
 my $cmd = 'emcli export_template -name="'.$columns[0].'" -target_type="'.$columns[2].'" -output_file="/tmp/'.$columns[0].'.xml"';
 system($oem_home_bin.'/'.$cmd);
 print "Finished export of: $columns[0] template\n";
 }
}

system($oem_home_bin.'/emcli logout');

If you would like to learn more about EMCLI and other ways to use it, have a look at these other blogs:

Ray Smith: https://oramanageability.wordpress.com/
Kellyn Pot’Vin: http://dbakevlar.com/
Seth Miller: http://sethmiller.org/

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

Configure shared storage for #em12c Business Intelligence Publisher (BIP)

Tue, 2014-07-01 07:52

Oracle Enterprise Manager 12c Release 4 has a lot of new features; however, I quickly want to focus on a feature that has been in OEM12c for a while.  This feature is Business Intelligence Publisher (BIP).  BIP has been a part of OEM12c since it was initially released; at first it was a bit cumbersome to get it installed. With the release of 12.1.0.4, the OEM team has done a great job of making the process a lot easier. Although this post is not directly about BIP installation, just understand that the process is easier and details can be found here.

What I want to focus on is how to configure BIP, once installed, to use shared storage.  I don't recall if shared storage was required in earlier versions of OEM12c; however, if you want to share BIP reports between OMS nodes in a high-availability configuration, a shared location is required.  The initial directions for reconfiguring BIP for shared storage can be found here.

In order to allow multiple OMS nodes to support multiple BIP servers, the following command needs to be run:


emctl config oms -bip_shared_storage -config_volume <directory location> -cluster_volume <directory location>

Note: The directory location supplied for the shared location has to be accessible by both OMS nodes.


Example:

emctl config oms -bip_shared_storage -config_volume /oms/BIP/config -cluster_volume /oms/BIP/cluster

When the reconfiguration of BIP begins, you will be asked for the Admin User's password (the WebLogic user) and the SYSMAN password.  Supply these and then wait for the script to complete. Once completed, the CONFIG and CLUSTER directories for BIP will be moved to the location specified.

The new directory locations can be verified from the BIP web page under Administration -> Server Configuration.

In the end, reconfiguring BIP to use shared storage is quite simple.

Enjoy!

twitter: @dbasolved

blog: https://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

Counting the many rows of Oracle GoldenGate

Thu, 2014-06-12 22:29

******NOTICE******
!The code contained in this post is meant to be  used at your own risk!
******NOTICE****** 

With any Oracle GoldenGate replication configuration, it is always good to monitor what is going on.  One thing many people want to know is: how can I validate that every record is being transferred to the target system?  One such way is to use Oracle Veridata; however, sometimes there is not enough time to set up and run Oracle Veridata.  Also, in smaller shops, Oracle Veridata may be a bit costly to start with. How can someone get the “warm-and-fuzzy” feeling when replicating data with Oracle GoldenGate?

Oracle GoldenGate has a lot of good commands that can be used from the GGSCI prompt to check and see what type of transactions have been processed and the totals of those transactions.  What I was recently tasked with was a way to quickly do a validation of the rows between source and target within an Oracle GoldenGate configuration.  One way to do this is to quickly get a count of the rows per table between the source and target systems.  In discussions with a co-worker, it came out that they had a small utility, primarily scripts, that could do the counts between source and target systems.  In reviewing these scripts I saw where I could possibly improve on the process and make it a bit more streamlined and integrated with the target database.

In streamlining the process, I decided to take the main portions of the scripts and rewrite them into a stored procedure that could be used by the Oracle GoldenGate user inside the target database of the replication environment.  The initial stored procedure I came up with can be seen in Code 1 below.

Code 1: Stored procedure for counts


create or replace procedure rowcounts(v_tgtschema in varchar2, v_srcschema in varchar2, v_dblink in varchar2)
is
--Author: Bobby Curtis, Oracle ACE
--Copyright: 2014
--Company: Accenture Enkitec Group
--
v_tgtcount number(16) := 0;
v_srccount number(16) := 0;
v_sqlstmt0 varchar2(1000);
v_sqlstmt1 varchar2(1000);
v_sqlstmt2 varchar2(1000);
begin
 for vtable
 in (select table_name
     from all_tables
     where owner = v_tgtschema
     order by 1)

loop

v_sqlstmt0 := 'select count(*) from '||v_tgtschema||'.'||vtable.table_name;
 --dbms_output.put_line(v_sqlstmt0);
 execute immediate v_sqlstmt0 into v_tgtcount;

 v_sqlstmt1 := 'select count(*) from '||v_srcschema||'.'||vtable.table_name||'@'||v_dblink;
 --dbms_output.put_line(v_sqlstmt1);
 execute immediate v_sqlstmt1 into v_srccount;

v_sqlstmt2 := 'update onetstats set row_cnt_source='|| v_srccount ||', row_cnt_target=' || v_tgtcount || ', end_time=sysdate where schemaname='''||v_tgtschema||''' and tablename='''||vtable.table_name||''' and dataset is null';
 --dbms_output.put_line(v_sqlstmt2);
 execute immediate v_sqlstmt2;

 if (sql%notfound)
 then
     v_sqlstmt2 := 'insert into onetstats (schemaname,tablename,start_time,end_time,row_cnt_source,row_cnt_target,dataset) values ('''||v_tgtschema||''','''||vtable.table_name||''',sysdate,sysdate,' || v_srccount || ',' || v_tgtcount || ', null)';
    --dbms_output.put_line (v_sqlstmt2);
     execute immediate v_sqlstmt2;
 end if;

 commit;
 end loop;
 exception
       when others
       then
              dbms_output.put_line(sqlerrm);
end;

As you can tell from looking at the stored procedure, it uses a table to store the counts for each table in the schema being replicated.  Also notice that a database link is used to access the source server.  The table that stores the count information is just a really simple table with columns that map to the update/insert statements in the stored procedure.  The database link needs to be configured in the local tnsnames.ora on the target server.  Code 2 and Code 3 show examples of these objects.

Code 2: Table for counts


create table &ggate_user..onetstats (
 schemaname varchar2(30),
 tablename varchar2(30),
 start_time date,
 end_time date,
 row_cnt_source number,
 row_cnt_target number,
 dataset number
 )
;

Code 3: Database Link to source


create database link ggcounts connect to &&ggate_user identified by &ggate_user_pwd using 'ggcounts';

The last thing that needs to be done is granting SELECT ON <TABLE> to the Oracle GoldenGate user for each replicated table on the source and target systems.  Once this is done, the stored procedure can be run from SQL*Plus or SQL Developer at any time on the target system to get a rough estimated count of the rows between the source and target databases.
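
As a rough sketch of how the pieces fit together (the table and user names below are hypothetical placeholders; the procedure, ONETSTATS table, and GGCOUNTS database link are the ones created above, and the procedure is run as the GoldenGate user on the target):

-- On both source and target, for each table being compared (names are placeholders)
grant select on scott.emp to ggate;

-- On the target, run the count comparison...
begin
  rowcounts('SCOTT', 'SCOTT', 'GGCOUNTS');
end;
/

-- ...and review the results
select schemaname, tablename, row_cnt_source, row_cnt_target
from onetstats
order by tablename;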

Enjoy!!

twitter: @dbasolved

blog: http://dbasolved.com

 ******NOTICE******
!The code contained in this post is meant to be  used at your own risk!
******NOTICE****** 


Filed under: Golden Gate, Replication
Categories: DBA Blogs

SCN to Timestamp and back

Tue, 2014-06-10 09:37

When working with Oracle GoldenGate, understanding the System Change Number (SCN) is important.  The SCN is an internal number maintained by the database that keeps track of the changes made to the database for recovery purposes.  In many environments, instantiation of an Oracle GoldenGate environment requires knowing where to start the replicat from.

To find the current SCN for the database, Oracle has made this pretty easy.  There is a column in the V$DATABASE view called CURRENT_SCN.


select current_scn from v$database;

Now that the current SCN has been found, it can be used in a Data Pump export parameter file with FLASHBACK_SCN to ensure a consistent copy of the database at that point in time.  Once the import is completed on the target side of GoldenGate, the replicat can be started using the SCN with the ATCSN or AFTERCSN option.

What does the SCN really mean to an Oracle GoldenGate admin, though?  As outlined above, it really is just a point-in-time placeholder that keys admins in on a place to start the replicat from.  At times there may be a need to start the replicat from an adjusted point in time.  In order to do this, it is handy to know how to convert an SCN to a timestamp and back to an SCN.  For this, Oracle provides two functions, SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN.

Using these functions is pretty simple.  The following code blocks demonstrate how to convert from SCN to timestamp and back.

Convert SCN to Timestamp:


select scn_to_timestamp(8697520) from dual;

Convert Timestamp to SCN:


select timestamp_to_scn('10-JUN-14 10.50.55.000000000 AM') from dual;
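
As a small, hypothetical example of the kind of adjustment mentioned above, the two functions can be combined to find the SCN for a point 30 minutes in the past, which could then be supplied to a replicat's AFTERCSN option:

-- SCN corresponding to 30 minutes ago (only works within the database's
-- SCN-to-time mapping window)
select timestamp_to_scn(systimestamp - interval '30' minute) as adjusted_scn
from dual;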

Knowing how to convert an SCN to a timestamp and back to an SCN can be very handy in many different situations and is useful when working with Oracle GoldenGate.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: General, Golden Gate, Replication
Categories: DBA Blogs

#EM12c (12.1.0.4) is released!!!!!

Tue, 2014-06-03 07:08

This is just a quick post while I'm waiting on a few other things to line up today.  For many of us, the long-awaited Release 4 of Oracle Enterprise Manager 12c is out!  This release of OEM 12c has many exciting improvements and some really cool new stuff.  I cannot remember all of them, but one I'm looking forward to is the AWR Warehouse and figuring out how it all works.  Additionally, BIP is finally installed with the OEM product; no more secondary install!

In the meantime, if you have time and want to start looking, working and figuring out what Oracle has provided for the community in this new release, you can download it here.

Go forth and enjoy the new features!  I'm looking forward to writing more new posts related to OEM.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

Oracle #GoldenGate Replicate Apply (Run) Rate

Thu, 2014-05-22 12:16

For a couple of weeks now, I've been trying to figure out a way to identify the size of data for transactions that are getting processed over a given period of time with Oracle GoldenGate.  When I started to think through the process, I keyed in on the Relative Byte Address (RBA).  What is the RBA?  From reading Oracle's GoldenGate documentation, the RBA is mostly a marker within the trail file that identifies the location of a transaction.  This got me to thinking: maybe I can use the RBA to “estimate” the amount of data applied to the target over a period of time (compare 2 RBAs).

Before I ventured off into the unknown, I wanted to verify if there was a method already identified by Oracle.  What I found in MOS was Note ID 1356524.1.  This note deals mostly with how to identify the speed of the extract process.  What I found interesting in this note is that Oracle is using the RBA to help calculate the amount of data being extracted.  With this note in hand, I felt comfortable using the RBA to “estimate” the amount of data being applied by a replicat.

Note:  How to estimate Goldengate extract redo processing speed? (Doc ID 1356524.1)

A few sentences ago, I mentioned that I wanted to compare 2 RBAs to “estimate” the amount of data applied over a period of time.  In order to do this, I need to convert the RBA into a meaningful number.

The following formulas I used to convert the RBA to megabytes and then into the metrics I wanted:


(($sec_rba - $first_rba)/(1024*1024))  <-  find the “estimated” size applied in MB
($mb_min*60)                           <- find the “estimate” size applied over an hour in MB
($mb_hr/(1024))                        <- find the “estimate” size applied in GB for an hour
($gb_hr*24)                            <- find the “estimate” size for a day in GB

Now the question was how I could grab this information from each replicat.  The information I needed could be found by doing an “info replicat <replicat>, detail” (the detail part is not really needed; I just use it to list out all the associated trail files).  The output from the info command looks similar to this:

Info Replicat Output:

image

The thing to keep in mind is that I'm only concerned about two lines in this output.  The first is the “Log Read Checkpoint” line and the second is the line that has the date and RBA number.  In order to gather this information and do the calculations using the RBA, I wrote a Perl script.  The basics of the script are below:


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#Title: gg_run_rate_from_rba.pl
#
use strict;
use warnings;

#Static Variables

my $gghome = "/u01/app/oracle/product/12.1.2/ogg";
my $outfile = "/tmp/gg_replicat_runrates.txt";
my $sleeptime = 60;
my $gguser = "c##ggate";
my $ggpass = "ggate";
my @process = ("replicat");
my $replicat;
my($date1,$curtrail1,$rba1);
my($date2,$curtrail2,$rba2);
my($rate_min, $rate_hr, $rate_gb_hr, $rate_gb_day);

#Program

open (RUNRATES, ">>$outfile") or die "Unable to open file";
foreach my $i(@process)
{
my @process_name = `ps -ef | grep dirprm | grep $i | grep -v grep | awk '{print \$14}'`;   
my @replicats = @process_name;

    foreach (@replicats)
    {
        $replicat = $_;
        chomp($replicat);
        # First snapshot of the current trail, timestamp, and RBA
        ($curtrail1,$date1,$rba1) = check_replicat($gguser, $ggpass, $replicat);
        #print "$curtrail1 -> $date1 -> $rba1\n";
        sleep($sleeptime);
        # Second snapshot after the sleep interval
        ($curtrail2,$date2,$rba2) = check_replicat($gguser, $ggpass, $replicat);
        #print "$curtrail2 -> $date2 -> $rba2\n";
        # Calculate apply rates from the two RBA snapshots
        ($rate_min, $rate_hr, $rate_gb_hr, $rate_gb_day) = calc_rate($rba1,$rba2);

        print RUNRATES "$replicat|$date1|$curtrail1|$rba1|$date2|$curtrail2|$rba2|$rate_min|$rate_hr|$rate_gb_hr|$rate_gb_day\n";
    }
}
close (RUNRATES);

#################
#Sub Programs
#################

sub check_replicat
{
my @buf = `$gghome/ggsci << EOF
dblogin userid $gguser\@pdb2 password $ggpass
info replicat $replicat, detail
EOF`;

my $curtrail;
my $date;
my $rba;

    foreach (@buf)
    {
        if (/Log Read Checkpoint/)
        {
            if (m/(\.\/\w+\/\w+)/g)
            {
                $curtrail = $1;
            }
        }
       
        if (/RBA/)
        {
            if (m/(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/g)
            {
                $date = $1."-".$2."-".$3." ".$4.":".$5.":".$6;
            }
           
            if (m/RBA (\d+)/g)
            {
                $rba = $1;

            }   
        }
    }
    return($curtrail,$date,$rba);
} #end sub check_replicat

sub calc_rate
{
    my $first_rba = $rba1;
    my $sec_rba = $rba2;
 
    my $mb_min = (($sec_rba-$first_rba)/(1024*1024));
    my $mb_hr = ($mb_min*60);
    my $gb_hr = ($mb_hr/(1024));
    my $gb_day = ($gb_hr*24);
    return ($mb_min,$mb_hr,$gb_hr, $gb_day);
} #end sub calc_rate

This script is a bit longer than I would like; however, it captures all of the required information, waits 60 seconds, and then gathers the information again for the replicat it is working on. Once the first and second RBAs are grabbed, the script writes the output to a flat file with the calculations for MB per minute, MB per hour, GB per hour, and GB per day.

Once the flat file has been written, I can use an external table to view this data from SQL (see my other post on monitoring GG from SQL here).  Using the external table, I can see what my run rates are from any SQL-capable tool.  Below is a simple query to pull the data from the external table.
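
For reference, here is a rough sketch of what that external table definition could look like, assuming the pipe-delimited layout written by the Perl script above (the directory name is a placeholder; point it at wherever the flat file is written):

-- Directory pointing at the flat file location used by the script
create or replace directory gghb_dir as '/tmp';

create table gghb.replicat_runrates (
  repgroup     varchar2(30),
  date1        date,
  curtrail1    varchar2(100),
  rba1         number,
  date2        date,
  curtrail2    varchar2(100),
  rba2         number,
  rate_min_mb  number,
  rate_hr_mb   number,
  rate_hr_gb   number,
  rate_day_gb  number
)
organization external (
  type oracle_loader
  default directory gghb_dir
  access parameters (
    records delimited by newline
    fields terminated by '|'
    missing field values are null
    (
      repgroup,
      date1 char(19) date_format date mask "yyyy-mm-dd hh24:mi:ss",
      curtrail1,
      rba1,
      date2 char(19) date_format date mask "yyyy-mm-dd hh24:mi:ss",
      curtrail2,
      rba2,
      rate_min_mb,
      rate_hr_mb,
      rate_hr_gb,
      rate_day_gb
    )
  )
  location ('gg_replicat_runrates.txt')
)
reject limit unlimited;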

Note: Some numbers in the output may be negative.  This is due to the subtraction between RBA2 (smaller) and RBA1 (larger).


select
        repgroup as processgroup,
        to_char(date1, 'DD-MON-YY HH:MI:SS') snap1,
        curtrail1 as snap1_trail,
        rba1 as snap1_rba,
        to_char(date2, 'DD-MON-YY HH:MI:SS') snap2,
        curtrail2 as snap2_trail,
        rba2 as snap2_rba,
        rate_min_mb,
        rate_hr_mb,
        rate_hr_gb,
        rate_day_gb
from
  gghb.replicat_runrates
where
  repgroup = 'REP';

--Output (unformatted)--

PROCESSG SNAP1              SNAP1_TRAIL                     SNAP1_RBA SNAP2              SNAP2_TRAIL                     SNAP2_RBA RATE_MIN_MB RATE_HR_MB RATE_HR_GB RATE_DAY_GB
-------- ------------------ ------------------------------ ---------- ------------------ ------------------------------ ---------- ----------- ---------- ---------- -----------
REP      22-MAY-14 01:38:51 ./dirdat/rt000034                 2905346 22-MAY-14 01:38:51 ./dirdat/rt000034                 3197702         286      17130         17         401
REP      22-MAY-14 01:39:49 ./dirdat/rt000034                 3197702 22-MAY-14 01:39:49 ./dirdat/rt000034                 3521610         316      18979         19         445
REP      22-MAY-14 01:40:50 ./dirdat/rt000034                 3521610 22-MAY-14 01:40:50 ./dirdat/rt000034                 3802260         274      16444         16         385
REP      22-MAY-14 01:41:49 ./dirdat/rt000034                 3802260 22-MAY-14 01:41:49 ./dirdat/rt000034                 4112529         303      18180         18         426
REP      22-MAY-14 01:42:49 ./dirdat/rt000034                 4112529 22-MAY-14 01:42:49 ./dirdat/rt000034                 4463477         343      20563         20         482


Being able to use an external table to view run rates means additional scripts can be written to report on what is going on within the Oracle GoldenGate apply process, allowing administrators a better understanding of their environments.  At the same time, I think this information is valuable in the tuning process of Oracle GoldenGate as environments grow.

Let me know your thoughts and comments on this, because it is always interesting to see how other organizations solve these issues as well.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate
Categories: DBA Blogs

Configure #Oracle #GoldenGate #12c for testing between two or more #db12c PDBs.

Fri, 2014-05-09 13:11

Over the last few weeks I’ve been working and traveling a good bit.  Amongst all the travel I’ve been trying to get time to test a few things.  One test I wanted to get done was using Oracle GoldenGate 12c with Oracle Database 12c between Pluggable Databases (in the same instance).   As of today, I’ve finally gotten my Oracle Virtual Box environment configured and ready, so now I can travel and still test items I want to test.

With setting up this environment, I've identified a few things that may be of interest to the community.  Before I go into what I've had to configure to make this environment work, I have to say that the Oracle documentation for Oracle GoldenGate 12c is decent but still missing a few things.  The organization of the documentation could improve a good bit as well (personal opinion).

Note: As far as the documentation goes, it seems like downloading the PDF is the better option and a bit more detailed in steps compared to the web version.

In my VM environment, I’m using the following software:

  • Oracle Enterprise Linux 6.5
  • Oracle Database 12c Enterprise Edition (12.1.0.1)
  • Oracle GoldenGate 12c (12.1.2)

Linux:

I've just set up a simple Oracle Enterprise Linux 6.5 box.  Not very big, but for a VM it could be considered pretty large.  The VM consists of the following (screen shots provided for each part).

Memory:

3072MB
(could go smaller and may go smaller as I shrink the SGA/PGA requirements for testing)

image

CPU:

2 Processors with 100% execution cap

image

Storage:

Using 7 VMDK disks (file system and ASM disks).
The reason for so many disks is that I'm using Oracle ASM and Oracle Restart to keep the database files and database instance consistent upon restarting the VM.

image

Network:

NAT with port forwarding
For more information on port forwarding, a good blog post for this was written by my friend Maaz Anjum on his blog (VirtualBox, NAT, and Port Forwarding).

image

image

Database:

The database is the currently available version of Oracle Database 12c (12.1.0.1) with Grid Infrastructure, which you can download from Oracle Technology Network (OTN).  Once the software is downloaded, it can be installed within the VM.  There are many different steps that need to be addressed with Oracle Grid Infrastructure 12c to get ASM working right.  In my configuration I'm using ASMLib; however, the Oracle RACSIG has provided some exceptional documentation on how to set up Oracle RAC 12c using Virtual Box VMs with ASM without using ASMLib.  It is a good starting point if needed.

Once the Oracle Database 12c (12.1.0.1) software is installed, a Container Database (CDB) with two Pluggable Databases (PDBs) needs to be created.  The DBCA will allow you to create a CDB with one PDB, and additional PDBs can be created with more runs of DBCA.  The way I created my second PDB was to create the first PDB (PDB1) and configure it with what I needed for Oracle GoldenGate.  Then I cloned PDB1 to create PDB2 with all the settings in place.

I don't want to spend too much time on the cloning process of PDBs in this post; however, Oracle provides a lot of different ways to clone PDBs.  I have listed some common ones below, with a minimal SQL sketch of the clone after the note that follows the list:

  • SQL
  • EM Express
  • SQL Developer (easiest way)

Note: To make cloning of PDBs easy, I recommend staying with Oracle Managed Files (OMF).
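
Here is the sketch of the SQL route (assuming OMF, so no FILE_NAME_CONVERT clause is needed; in 12.1.0.1 the source PDB has to be opened read-only for the clone, and the statements are run from the root container):

-- Put the source PDB in read-only mode for the clone
alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 open read only;

-- Clone PDB1 to PDB2 and open it
create pluggable database pdb2 from pdb1;
alter pluggable database pdb2 open;

-- Return the source PDB to read/write
alter pluggable database pdb1 close immediate;
alter pluggable database pdb1 open;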

GoldenGate:

I will say that configuring Oracle GoldenGate 12c (12.1.2) with Oracle Database 12c (12.1.0.1) using PDBs is the same, yet different than, configuring it with traditional databases (11.2.0.4 and earlier).  I say this because things that we typically use for setting up Oracle GoldenGate change ever so slightly.

GoldenGate User:

One example of what I'm talking about is the Oracle GoldenGate user.  In previous versions, I could create the user, grant DBA rights, and run.  In Oracle Database 12c, it is not quite that simple.  In order to use Oracle GoldenGate with PDBs, a “common user” needs to be created.

Details: I discuss “common users” in detail in this post.

I created my common user with the name “C##GGATE” in the Container Database (CDB). This gives the GoldenGate user access to all the PDBs that are contained within the CDB.
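
For completeness, a minimal sketch of creating such a common user from the root container (the password is a placeholder; the C## prefix is required for common users):

-- Run as a privileged user in CDB$ROOT
create user c##ggate identified by ggate container=all;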

GoldenGate User: Privileges:

With the common user created, I needed to grant permissions to the user so it can do whatever it needs to.  I granted DBA and CDB_DBA along with CONNECT to my common user.  It turns out there is an issue with the CDB_DBA role (I did not investigate fully), but granting DBA and CONNECT worked (to a degree).  So I resorted to the Oracle GoldenGate documentation for what permissions were needed.

Reminder: As I mentioned earlier, the documentation is a bit scattered and trying to find this information took some time.

Although I found the documentation and granted the documented privileges for capture and apply, I still had issues.  Below, I have provided the permissions that I had to explicitly grant to my common user.

Privileges Granted:

The SQL used to find this information is the same on the CDB and all PDBs.  The results returned are for the common user C##GGATE (roles and system privileges).


SELECT distinct DECODE(SA1.GRANTEE#, 1, 'PUBLIC', U1.NAME) as "USER", SUBSTR(U2.NAME,1,20) as "ROLE", 
   /*SUBSTR(SPM.NAME,1,27) as "PRIV",*/ UAM.STATUS 
FROM SYS.SYSAUTH$ SA1, SYS.SYSAUTH$ SA2, SYS.USER$ U1, 
     SYS.USER$ U2, SYS.SYSTEM_PRIVILEGE_MAP SPM,SYS.USER_ASTATUS_MAP UAM
WHERE SA1.GRANTEE# = U1.USER#
  AND SA1.PRIVILEGE# = U2.USER#
  AND U2.USER# = SA2.GRANTEE#
  AND SA2.PRIVILEGE# = SPM.PRIVILEGE
  and U1.ASTATUS = UAM.STATUS# and U1.TYPE# =1
  and U1.name = upper('&user_name')
  ORDER BY 1, 2, 3;

Container Database (CDB):

image

Pluggable Database 1:

image

Pluggable Database 2:

image

You will notice that the roles CONNECT, DBA and RESOURCE are granted at the CDB level and only CONNECT is needed at the PDB level.  This is needed to allow the common user to switch between PDB containers.
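
A quick sketch of those grants, using the names from this post (run as a privileged user; the PDB names are the ones used in my environment):

-- In the root container (CDB$ROOT)
grant connect, resource, dba to c##ggate;

-- In each PDB, only CONNECT is needed
alter session set container = pdb1;
grant connect to c##ggate;

alter session set container = pdb2;
grant connect to c##ggate;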

Although these roles are great for the common user, the documentation also states that the common user has to be granted access to the PDBs through a new procedure.


begin
  DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GGATE');
end;
/

or

begin
  DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GGATE', container=>'all');
end;
/

What the DBMS_GOLDENGATE_AUTH procedure actually grants is a set of system privileges that Oracle states are needed for the common user to interact with the PDBs.  In the end, I think the system privileges that are set through the DBMS_GOLDENGATE_AUTH package are not complete.  Below is a list of privileges that I have granted to my common user (the code block below shows the view I use to find these privileges, created under SYS and run on the CDB and all PDBs).


CREATE OR REPLACE VIEW DBA_USER_PRIVS (USERNAME, ROLENAME, PRIVILEGE) AS
SELECT DECODE(SA1.GRANTEE#, 1, 'PUBLIC', U1.NAME), SUBSTR(U2.NAME,1,20), 
   SUBSTR(SPM.NAME,1,27) 
FROM SYS.SYSAUTH$ SA1, SYS.SYSAUTH$ SA2, SYS.USER$ U1, 
     SYS.USER$ U2, SYS.SYSTEM_PRIVILEGE_MAP SPM
WHERE SA1.GRANTEE# = U1.USER#
  AND SA1.PRIVILEGE# = U2.USER#
  AND U2.USER# = SA2.GRANTEE#
  AND SA2.PRIVILEGE# = SPM.PRIVILEGE
UNION
SELECT U.NAME, NULL, SUBSTR(SPM.NAME,1,27)
FROM SYS.SYSTEM_PRIVILEGE_MAP SPM, SYS.SYSAUTH$ SA, SYS.USER$ U
WHERE SA.GRANTEE#=U.USER# 
  AND SA.PRIVILEGE#=SPM.PRIVILEGE
/  

select * from dba_user_privs where username = 'C##GGATE' and rolename is null;

Container Database (CDB):

image

Pluggable Database 1 (Capture):

image

Pluggable Database 2 (Apply):

image

Lastly, I figured out that I needed to set tablespace quotas both within the CDB and the PDB for the common user.  To make it easy, I just assigned UNLIMITED quotas to the USER tablespace in the CDB and PDB for the common user.

Note: The reason the quota on the USER tablespace is needed is that the default tablespace for C##GGATE is USER in the CDB. The user needs the same access at the PDB level.
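
A minimal sketch of assigning those quotas (the post refers to the USER tablespace; I have assumed the default USERS tablespace name here, so substitute whatever the common user's default tablespace actually is in your environment):

-- In the root container
alter user c##ggate quota unlimited on users;

-- Repeat in each PDB
alter session set container = pdb1;
alter user c##ggate quota unlimited on users;

alter session set container = pdb2;
alter user c##ggate quota unlimited on users;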

Now that the user has been configured, let's get the GoldenGate processes configured and ready to run.

Environment Setup:

Just like any other GoldenGate environment, you will have an Extract, a Replicat, and possibly a Data Pump (optional; I always use one), plus the associated trail files.  The important thing to remember when setting up each process is that there are a few changes in the TABLE and MAP statements.  I have provided my working, uni-directional parameter files below.

Tip: Extracts in Oracle Database 12c against PDBs MUST be set up as integrated extracts.  Classic capture extracts can be configured but will not start against a PDB (and no error message will be logged, either).

Extract:


-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

--Specifies the name of the extract process
EXTRACT ext

--Oracle Login
USERID c##ggate, PASSWORD ggate

--Use ASM API
--Not needed for integrated capture as of 12.1.2
--TRANLOGOPTIONS DBLOGREADER

--Set Oracle Environment Variables
SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/db12c")
SETENV (ORACLE_SID="bc12ctst")

--Warns for a long running transaction
WARNLONGTRANS 5m, CHECKINTERVAL 3m

--Specifies the location of the remote trail file on target machine
EXTTRAIL ./dirdat/lt

--Resolves the TABLES to be replicated ON START-UP
WILDCARDRESOLVE IMMEDIATE

--Table Mappings
SOURCECATALOG PDB1;
TABLE BOBBY.*;

Data Pump (Extract):


-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
-- CHECKPARAMS

--Specifies the name of the extract pump process
EXTRACT pmp

--No logic applied just pass everything extract writes to the trail file to the target trail file
PASSTHRU

--Specifies the target system to connect to
RMTHOST localhost, MGRPORT 15000, COMPRESS

--Specifies the location of the remote trail file on target machine
RMTTRAIL ./dirdat/rt

--Table Mappings
SOURCECATALOG PDB1;
TABLE BOBBY.*;

Replicat:


--Specifies the name of the replicat load process.
REPLICAT rep

-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/db12c")
SETENV (ORACLE_SID="bc12ctst")

--Oracle login.
USERID c##ggate@PDB2, PASSWORD ggate

--DBOPTIONS INTEGRATEDPARAMS(parallel 4)

--The source ddl and target ddl are identical
ASSUMETARGETDEFS

--Specifies name and location of a discard file.
DISCARDFILE ./dirrpt/rep.dsc, append, megabytes 5

--Resolves the TARGETs to be replicated during process start-up
WILDCARDRESOLVE IMMEDIATE

--Table Mappings
map PDB1.BOBBY.*, target BOBBY.*;

Notice in the Extract and Replicat parameter files that the TABLE and MAP statements are slightly different.  In the two files, I use two different ways of associating tables with their PDB.  In the Extract parameter file, the PDB is assigned with the SOURCECATALOG option.  This tells GoldenGate to switch containers and to use PDB1 for the tables.  In the Replicat parameter file, the MAP statement does not use SOURCECATALOG; instead it references the source tables inline in CATALOG.SCHEMA.TABLE format (PDB1.BOBBY.*).  The target side does not need the catalog because the replicat logs in to PDB2 directly.

Well, I hope I gave a good overview of how to set up Oracle GoldenGate 12c (12.1.2) within Oracle Database 12c (12.1.0.1) using PDBs.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, Golden Gate, Replication
Categories: DBA Blogs