
DBASolved

Helping Oracle DBAs solve problems

Macros and parameter passing macros with #goldengate

Tue, 2014-07-22 10:04

Replicating data can be a time-consuming process to set up.  Fortunately, Oracle GoldenGate provides a few tools to help ease the complexity of setup.  One of these tools is the “macro”.  Macros are used to simplify and automate the work associated with setting up and replicating data.  So what exactly is a macro?  Oracle defines a macro as:

A macro is a built-in automation tool that enables you to call a stored set of processing steps from within the Oracle GoldenGate parameter file.

In a nutshell, a macro is a stored set of commands that are used on a frequent basis, consisting of parameters for simple to complex series of substitutions, calculations, or conversions.  Macros may be written inline in the parameter file or stored in a macro library.

What this post will show you is how to add a macro to a library, have it accept parameters, and then pass those parameters in from the parameter file during replication.

The first thing that needs to be done is to set up a standard directory to contain all the macro files.  A directory called “dirmac” needs to be created in the OGG_HOME.

Note: The “dirmac” directory is something you will need to create yourself with the command “mkdir -p $OGG_HOME/dirmac”.  A macro directory can be created anywhere you like; I personally try to keep all OGG items together.

In the macro directory, create a file to use as the macro library.  Ideally, you should have a file for each type of process running in that $OGG_HOME.  Once the file is created, then edit the library file and add the macros desired.


> mkdir -p $OGG_HOME/dirmac
> cd $OGG_HOME/dirmac
> touch <library_name>.mac 
> vi <library_name>.mac 

When the macro library file is open for editing, add the desired macros.  Remember, a macro library can house more than one macro, and the code block below contains two examples.  The first example sets tokens that can be called; the second defines a macro that maps the tokens and header information to a specific table.


-- Example 1 --

MACRO #src_csn_ts
BEGIN
SRC_CSN_TS = @GETENV('GGHEADER','COMMITTIMESTAMP')
END;

MACRO #src_icnt
BEGIN
ICNT = @GETENV('STATS', 'INSERT')
END;

MACRO #src_ucnt
BEGIN
UCNT = @GETENV('STATS', 'UPDATE')
END;

MACRO #src_dcnt
BEGIN
DCNT = @GETENV('STATS', 'DELETE')
END;

MACRO #src_dmlcnt
BEGIN
DMLCNT = @GETENV('STATS', 'DML')
END;

-- Example 2 --

MACRO #hb_mappings
PARAMS (#src_schema)
BEGIN
MAP #src_schema.RANDOM_VALUES, target SCOTT.GG_REP_OP_STATUS,
INSERTMISSINGUPDATES
COLMAP (
SRC_DB_SCHEMA=@token('SRC_SCHEMA'),
GG_REPLICAT_NAME=@GETENV('GGENVIRONMENT','GROUPNAME'),
TGT_LAST_UPDATE_DT=@DATE('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV('JULIANTIMESTAMP')),
SRC_CSN_TS=@token('SRC_CSN_TS'),
ICNT=@token('ICNT'),
UCNT=@token('UCNT'),
DCNT=@token('DCNT'),
DMLCNT=@token('DMLCNT')
);
END; 

Notice the PARAMS statement in Example 2.  This can get a bit confusing, since defining a macro uses the hash mark (#) and the parameters in the PARAMS statement use the hash mark (#) as well.  Also notice that the parameter #src_schema is used in the MAP statement inside the macro; this is how the value for #src_schema is passed into the macro from the parameter file.

Now, let's take a look at a parameter file.

In my test environment, I have the following processes running:


GGSCI (oel.acme.com) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
JAGENT      STOPPED
EXTRACT     RUNNING     EXT         00:00:09      00:00:09
EXTRACT     RUNNING     PMP         00:00:00      00:00:07
REPLICAT    RUNNING     REP         00:00:00      00:00:04

Taking a look at the extract parameter file (capture side): in order to use the macros from Example 1, add an INCLUDE statement that references the macro library to the parameter file.  Then, in the TABLE statement, the macros for the tokens can be referenced inside the TOKENS clause.


-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

--Specifies the name of the extract process
EXTRACT EXT

--Set Oracle Environment Variables
SETENV (ORACLE_HOME="/oracle/app/product/11.2.0.4/dbhome_1")
SETENV (ORACLE_SID="bc11g")

--Oracle Login
USERID ggate, PASSWORD ggate

--Warns for a long running transaction
WARNLONGTRANS 1h, CHECKINTERVAL 30m

--Trace process info
--TRACE ./dirrpt/trace_ext.trc

--Specifies the location of the remote trail file on target machine
EXTTRAIL ./dirdat/lt

--Ignore transactions for golden gate user
TRANLOGOPTIONS EXCLUDEUSER GGATE

--Resolves the TABLES to be replicated ON START-UP
WILDCARDRESOLVE IMMEDIATE

INCLUDE ./dirmac/ops_info.mac

--Table Mappings
TABLE SCOTT.RANDOM_VALUES, TOKENS(#src_csn_ts(), #src_icnt(), #src_ucnt(), #src_dcnt(), #src_dmlcnt());

Once the extract parameter file is updated, the extract needs to be restarted.  Upon restart, keep an eye out and make sure the process doesn’t abend; the VIEW REPORT command can be used to check the report file during startup.  Once the extract starts, you can see in the report how the macro is expanded in the parameter file.  On the extract side, the needed info will be captured and written to the associated trail files.
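
For reference, restarting the extract and then checking its report from GGSCI might look something like this (using the EXT group from my environment):

GGSCI (oel.acme.com) 1> stop extract ext
GGSCI (oel.acme.com) 2> start extract ext
GGSCI (oel.acme.com) 3> view report ext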

In the middle, the pump really doesn’t need any changes or restarting.

Now on the replicat (apply) side: in order to use the macro defined in Example 2 above, the replicat parameter file needs to be edited to include the macro library and the statement that calls the macro.  The next code block shows the contents of my replicat parameter file.


--Specifies the name of the replicat load process.
REPLICAT REP

-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

SETENV (ORACLE_HOME="/oracle/app/product/11.2.0.4/dbhome_1")
SETENV (ORACLE_SID="bc11g")

--Oracle login.
USERID ggate, PASSWORD ggate

--suppress triggers - enable for 11.2.0.2 or later
--DBOPTIONS SUPPRESSTRIGGERS

ALLOWDUPTARGETMAP

--The source ddl and target ddl are identical
ASSUMETARGETDEFS

--Tracing info
--TRACE ./dirrpt/trace_rep.trc

--Specifies name and location of a discard file.
DISCARDFILE ./dirrpt/REP.dsc, append, megabytes 200

--Resolves the TARGETs to be replicated during process start-up
--WILDCARDRESOLVE IMMEDIATE

--Specify error handling rules:
REPERROR(default, discard)
REPERROR(default2, discard)

--Table Mappings
INCLUDE ./dirmac/ops_info.mac

map SCOTT.RANDOM_VALUES, target SCOTT.RANDOM_VALUES_HIST;
#hb_mappings(SCOTT);

You will notice that I have included the INCLUDE statement to call the macro library.  Then the macro from Example 2 that does the table mapping can be invoked using #hb_mappings() (the last line of the parameter file example).  Passing the schema name is as simple as placing it between the parentheses; as the example above shows, I’m passing SCOTT as the schema I want to use.

Upon restart of the replicat, by looking at the report (VIEW REPORT), I can see where the macro library is read and how the macro is translated into a map statement for the replicat to use.


-- Report Output (summarized) --

map SCOTT.RANDOM_VALUES, target SCOTT.RANDOM_VALUES_HIST;
#hb_mappings(SCOTT);
MAP SCOTT.RANDOM_VALUES, target SCOTT.GG_REP_OP_STATUS,
INSERTMISSINGUPDATES
COLMAP (
SRC_DB_SCHEMA=@token('SRC_SCHEMA'),
GG_REPLICAT_NAME=@GETENV('GGENVIRONMENT','GROUPNAME'),
TGT_LAST_UPDATE_DT=@DATE('YYYY-MM-DD HH:MI:SS.FFFFFF','JTS',@GETENV('JULIANTIMESTAMP')),
SRC_CSN_TS=@token('SRC_CSN_TS'),
ICNT=@token('ICNT'),
UCNT=@token('UCNT'),
DCNT=@token('DCNT'),
DMLCNT=@token('DMLCNT')
);

After the replicat has restarted, the table in the mapping statement (SCOTT.GG_REP_OP_STATUS) can be checked to see if any data was inserted.  Image 1 below shows the output of the data I requested to be replicated using the macro.
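
A quick spot check from SQL*Plus or SQL Developer might look something like the following (the column names come from the COLMAP in the macro):

select gg_replicat_name, src_db_schema, src_csn_ts, icnt, ucnt, dcnt, dmlcnt, tgt_last_update_dt
from scott.gg_rep_op_status
order by tgt_last_update_dt;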

Image 1:
image1_macro_params.png


This should have shown you a way to use macros within your replication environment.

Enjoy!!!

http://about.me/dbasolved

 


Filed under: Golden Gate
Categories: DBA Blogs

Understanding and using tokens in Oracle #GoldenGate

Mon, 2014-07-21 10:53

Recently, I’ve been doing some work with a client where tokens need to be used.  It came to my attention that the basic usage of tokens is misunderstood.  Let’s see if I can clear this up a bit for people reading.

In Oracle GoldenGate, tokens are a way to capture and store data in the header of the trail file (more info on trail headers here).  Once a token has been defined, captured and stored in the header, it can be retrieved, on the apply side, and used in many ways to customize what information is delivered by Oracle GoldenGate.

Defining a token is pretty simple; however, keep these three points in mind:

  1. You define the token and associated data
  2. The token area of the trail file header permits up to 2,000 bytes in total (token names, associated data, and length of data)
  3. Use the TOKENS option of the TABLE parameter in extracts

In order to define a token in an extract, the definition should follow this basic syntax:


 TABLE <schema>.<table>, TOKENS( SRC_CSN_TS = @GETENV('GGHEADER','COMMITTIMESTAMP'));

In the example above, the token will be populated with the commit timestamp of each transaction against the table it is defined on.  After restarting the extract, the token (SRC_CSN_TS) will be included in the header of the trail file.

Once the trail file is shipped to the target side and read by the replicat, the token is mapped to a column in the target table.


MAP <schema>.<table>, target <schema>.<table>,
COLMAP (
SRC_CSN_TS=@token('SRC_CSN_TS')
); 
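
If you want to verify that the token actually made it into the trail header before mapping it on the target, the LogDump utility (see the LogDump post further down this page) can display it.  A quick check, assuming a local trail named ./dirdat/lt000000, might look like:

Logdump 1 >open ./dirdat/lt000000
Logdump 2 >ghdr on
Logdump 3 >usertoken detail
Logdump 4 >n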

Image 1 is a view of a table where I have mapped the token (SRC_CSN_TS) to a target column to keep track of the committed timestamps of transactions on the source system.

Image 1:

output_tokens1.png


Tokens are simple to create, use, and are a powerful feature for mapping data between environments.

Enjoy!!

twitter: @dbasolved

blog: https://dbasolved.com

 


Filed under: Golden Gate
Categories: DBA Blogs

Speedy #em12c template export

Thu, 2014-07-03 20:50

Whether you use a single OEM and are migrating to a new one, or have multiple OEMs, the need to move templates between environments will arise.  I had this exact problem come up recently at a customer site between an OEM 11g and an OEM 12c.  In order to move the templates, I needed to export multiple monitoring templates using EMCLI.  The command I used to do individual exports was the following:


./emcli export_template -name="<template name>" -target_type="<target_type>" -output_file="/tmp/<template name>.xml"

If you have only one template to move, the EMCLI command above will work.  If you have more than one template to move, the easiest thing to do is to have the EMCLI command run in a script.  This is the beauty of EMCLI; the ability to interact with OEM at the command line and use it in scripts for repeated executions.  Below is a script that I wrote to export templates based on target_types.

Note: If you need to identify the target_types that are supported by OEM, they can be found in SYSMAN.EM_TARGET_TYPES in the repository.


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#
use strict;
use warnings;

#Parameters
my $oem_home_bin = "/opt/oracle/app/product/12.1.0.4/middleware/oms/bin";
my @columns = ("", 0, 0, 0, 0);
my @buf;
my $target_type = $ARGV[0];

#Program

if (scalar @ARGV != 1)
{
 print "\nUsage:\n";
 print "perl ./emcli_export_templates.pl <target_type>\n\n";
 print "<target_type> = target type for template being exported\n";
 print "refer to sysman.em_target_types in repository for more info.";
 print "\n";
 exit;
}

system($oem_home_bin.'/emcli login -username=<userid> -password=<password>');
system($oem_home_bin.'/emcli sync');

@buf = `$oem_home_bin/emcli list_templates`;

foreach (@buf)
{
 @columns = split (/ {2,}/, $_);

 if ($columns[2] eq $target_type )
 {
 my $cmd = 'emcli export_template -name="'.$columns[0].'" -target_type="'.$columns[2].'" -output_file="/tmp/'.$columns[0].'.xml"';
 system($oem_home_bin.'/'.$cmd);
 print "Finished export of: $columns[0] template\n";
 }
}

system($oem_home_bin.'/emcli logout');
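
To run the script, pass the desired target type as the only argument.  For example, exporting all database-level templates (assuming oracle_database is the target type you are after) would look like the following, with the exported XML files landing in /tmp named after each template:

perl ./emcli_export_templates.pl oracle_database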

If you would like to learn more about EMCLI and other ways to use it have a look at these other blogs:

Ray Smith: https://oramanageability.wordpress.com/
Kellyn Pot’Vin: http://dbakevlar.com/
Seth Miller: http://sethmiller.org/

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

Configure shared storage for #em12c Business Intelligence Publisher (BIP)

Tue, 2014-07-01 07:52

Oracle Enterprise Manager 12c Release 4 has a lot of new features; however, I want to quickly focus on a feature that has been in OEM12c for a while.  This feature is Business Intelligence Publisher (BIP).  BIP has been a part of OEM12c since it was initially released; at first it was a bit cumbersome to get installed.  With the release of 12.1.0.4, the OEM team has done a great job of making the process a lot easier.  Although this post is not directly about BIP installation, just understand that the process is easier and details can be found here.

What I want to focus on is how to configure BIP, once installed, to use shared storage.  I don’t recall if shared storage was required in earlier versions of OEM12c; however, if you want to share BIP reports between OMS nodes in a high-availability configuration, a shared location is required.  The initial directions for reconfiguring BIP for shared storage can be found here.

In order to allow multiple OMS nodes to support multiple BIP servers, the following command needs to be run:


emctl config oms -bip_shared_storage -config_volume <directory location> -cluster_volume <directory location>

Note: The directory location supplied for the shared location has to be accessible by both OMS nodes.


Example:

emctl config oms -bip_shared_storage -config_volume /oms/BIP/config -cluster_volume /oms/BIP/cluster

When the reconfiguration of BIP begins, you will be asked for the Admin User’s password (the WebLogic user) and the SYSMAN password.  Supply these and then wait for the script to complete.  Once completed, the CONFIG and CLUSTER directories for BIP will be moved to the location specified.

The new directory locations can be verified from the BIP web page under Administration -> Server Configuration.

In the end, reconfiguring BIP to use shared storage is quite simple.

Enjoy!

twitter: @dbasolved

blog: https://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

Counting the many rows of Oracle GoldenGate

Thu, 2014-06-12 22:29

******NOTICE******
!The code contained in this post is meant to be  used at your own risk!
******NOTICE****** 

With any Oracle GoldenGate replication configuration, it is always good to monitor what is going on.  One aspect many people want to know about is how to validate that every record is being transferred to the target system.  One such way is to use Oracle Veridata; however, sometimes there is not enough time to set up and run Oracle Veridata.  Also, for smaller shops, Oracle Veridata may be a bit costly to start with.  So how can someone get the “warm-and-fuzzy” feeling when replicating data with Oracle GoldenGate?

Oracle GoldenGate has a lot of good commands that can be used from the GGSCI prompt to check what type of transactions have been processed and the totals of those transactions.  What I was recently tasked with was a way to quickly validate the rows between source and target within an Oracle GoldenGate configuration.  One way to do this is to quickly get a count of the rows per table between the source and target systems.  In discussions with a co-worker, it came out that they had a small utility, primarily scripts, that could do the counts between source and target systems.  In reviewing these scripts I saw where I could improve on the process and make it a bit more streamlined and integrated with the target database.

In streamlining the process, I decided to take the main portions of the scripts and rewrite them into a stored procedure that could be used by the Oracle GoldenGate user inside the target database of the replication environment.  The initial stored procedure I came up with can be seen in Code 1 below.

Code 1: Stored procedure for counts


create or replace procedure rowcounts(v_tgtschema in varchar2, v_srcschema in varchar2, v_dblink in varchar2)
is
--Author: Bobby Curtis, Oracle ACE
--Copyright: 2014
--Company: Accenture Enkitec Group
--
v_tgtcount number(16) := 0;
v_srccount number(16) := 0;
v_sqlstmt0 varchar2(1000);
v_sqlstmt1 varchar2(1000);
v_sqlstmt2 varchar2(1000);
begin
 for vtable
 in (select table_name
     from all_tables
     where owner = v_tgtschema
     order by 1)

loop

 --count rows in the target table
 v_sqlstmt0 := 'select count(*) from '||v_tgtschema||'.'||vtable.table_name;
 --dbms_output.put_line(v_sqlstmt0);
 execute immediate v_sqlstmt0 into v_tgtcount;

 --count rows in the matching source table over the database link
 v_sqlstmt1 := 'select count(*) from '||v_srcschema||'.'||vtable.table_name||'@'||v_dblink;
 --dbms_output.put_line(v_sqlstmt1);
 execute immediate v_sqlstmt1 into v_srccount;

 --update an existing counts row (dataset must be tested with "is null", not "= null")
 v_sqlstmt2 := 'update onetstats set row_cnt_source='|| v_srccount ||', row_cnt_target=' || v_tgtcount || ', end_time=sysdate where schemaname='''||v_tgtschema||''' and tablename='''||vtable.table_name||''' and dataset is null';
 --dbms_output.put_line(v_sqlstmt2);
 execute immediate v_sqlstmt2;

 --no existing row, so insert one
 if (sql%notfound)
 then
     v_sqlstmt2 := 'insert into onetstats (schemaname,tablename,start_time,end_time,row_cnt_source,row_cnt_target,dataset) values ('''||v_tgtschema||''','''||vtable.table_name||''',sysdate,sysdate,' || v_srccount || ',' || v_tgtcount || ', null)';
     --dbms_output.put_line (v_sqlstmt2);
     execute immediate v_sqlstmt2;
 end if;

 commit;
 end loop;
exception
  when others
  then
     dbms_output.put_line(sqlerrm);
end;

As you can tell from looking at the stored procedure, it uses a table to store the counts for each table in the schema being replicated.  Also notice that a database link is used to access the source server.  The table that stores the count information is just a really simple table with columns that map to the update/insert statements in the stored procedure.  The TNS alias used by the database link needs to be configured in the local tnsnames.ora on the target server.  Code 2 and Code 3 show an example of these objects.

Code 2: Table for counts


create table &ggate_user..onetstats (
 schemaname varchar2(30),
 tablename varchar2(30),
 start_time date,
 end_time date,
 row_cnt_source number,
 row_cnt_target number,
 dataset number
 )
;

Code 3: Database Link to source


create database link ggcounts connect to &&ggate_user identified by &ggate_user_pwd using 'ggcounts';

The last thing that needs to be done is granting SELECT on the relevant tables to the Oracle GoldenGate user on the source and target systems.  Once this is done, the stored procedure can be run from SQL*Plus or SQL Developer at any time on the target system to get a rough estimate of the row counts between the source and target databases.
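
Calling the procedure is straightforward.  Assuming the SCOTT schema on both sides and the ggcounts database link from Code 3, a run and a quick check of the results might look like:

exec rowcounts('SCOTT','SCOTT','GGCOUNTS');

select schemaname, tablename, row_cnt_source, row_cnt_target, end_time
from onetstats
order by tablename;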

Enjoy!!

twitter: @dbasolved

blog: http://dbasolved.com

 ******NOTICE******
!The code contained in this post is meant to be  used at your own risk!
******NOTICE****** 


Filed under: Golden Gate, Replication
Categories: DBA Blogs

SCN to Timestamp and back

Tue, 2014-06-10 09:37

When working with Oracle GoldenGate, understanding the System Change Number (SCN) is important.  The SCN is an internal number maintained by the database that keeps track of the changes made to the database for recovery purposes.  In many environments, instantiation of an Oracle GoldenGate environment requires knowing where to start the replicat from.

To find the current SCN for the database, Oracle has made this pretty easy.  There is a column in the V$DATABASE view called CURRENT_SCN.


select current_scn from v$database;

Now that the current SCN has been found, it can be used in a Data Pump export parameter file using FLASHBACK_SCN to ensure a consistent copy of the database as of that point in time.  Once the import is completed on the target side, the replicat can be started using the SCN with the ATCSN or AFTERCSN option.
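
Putting that together, the instantiation export and the replicat start might look something like the following sketch (the SCN value and file names are placeholders only):

expdp system/<password> schemas=SCOTT flashback_scn=8697520 directory=DATA_PUMP_DIR dumpfile=scott_inst.dmp

GGSCI> start replicat rep, aftercsn 8697520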

What does the SCN really mean to an Oracle GoldenGate admin, though?  As outlined above, it really is just a point-in-time placeholder that keys admins in on where to start the replicat.  At times there may be a need to start the replicat from an adjusted point in time.  In order to do this, it is handy to know how to convert an SCN to a timestamp and back.  For this, Oracle provides two functions: SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN.

Using these functions is pretty simple.  The following code blocks demonstrate how to convert from SCN to timestamp and back.

Convert SCN to Timestamp:


select scn_to_timestamp(8697520) from dual;

Convert Timestamp to SCN:


select timestamp_to_scn('10-JUN-14 10.50.55.000000000 AM') from dual;

Knowing how to convert the SCN to Timestamp and back to SCN can be very handy in many different situations and useful when working with Oracle GoldenGate.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: General, Golden Gate, Replication
Categories: DBA Blogs

#EM12c (12.1.0.4) is released!!!!!

Tue, 2014-06-03 07:08

This is just a quick post while I’m waiting on a few other things to line up today.  For many of us, the long-awaited Release 4 of Oracle Enterprise Manager 12c is out!  This release of OEM 12c has many exciting improvements and some really cool new stuff.  I cannot remember all of them, but one I’m looking forward to is the AWR Warehouse and figuring out how it all works.  Additionally, BIP is finally installed with the OEM product; no more secondary install!

In the meantime, if you have time and want to start looking, working and figuring out what Oracle has provided for the community in this new release, you can download it here.

Go forth and enjoy the new features!  I’m looking forward to writing more new posts related to OEM.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: OEM
Categories: DBA Blogs

Oracle #GoldenGate Replicate Apply (Run) Rate

Thu, 2014-05-22 12:16

For a couple of weeks now, I’ve been trying to figure out a way to identify the size of data for transactions that are processed over a given period of time with Oracle GoldenGate.  When I started to think through the process, I keyed in on the Relative Byte Address (RBA).  What is the RBA?  From reading Oracle’s GoldenGate documentation, the RBA is mostly a marker within the trail file that identifies the location of a transaction.  This got me thinking: maybe I can use the RBA to “estimate” the amount of data applied to the target over a period of time (by comparing two RBAs).

Before I ventured off into the unknown, I wanted to verify whether there was a method already identified by Oracle.  What I found in MOS was Note ID 1356524.1.  This note deals mostly with how to identify the speed of the extract process.  What I found interesting in this note is that Oracle is using the RBA to help calculate the amount of data being extracted.  With this note in hand, I felt comfortable using the RBA to “estimate” the amount of data being applied by a replicat.

Note:  How to estimate Goldengate extract redo processing speed? (Doc ID 1356524.1)

A few sentences ago, I mentioned that I wanted to compare two RBAs to “estimate” the amount of data applied over a period of time.  In order to do this, I need to convert the RBA into a meaningful number.

I used the following formulas to convert the RBA difference to megabytes and then into the metrics I wanted:


(($sec_rba - $first_rba)/(1024*1024))  <- "estimated" size applied in MB over the sampling interval (one minute here)
($mb_min*60)                           <- "estimated" size applied over an hour in MB
($mb_hr/(1024))                        <- "estimated" size applied in GB for an hour
($gb_hr*24)                            <- "estimated" size for a day in GB

Now the question was how to grab this information from each replicat.  The information I needed could be found by doing an "info replicat <replicat>, detail" (the detail part is not really needed; I just use it to list out all the associated trail files).  The output from the info command looks similar to this:

Info Replicat Output:


The thing to keep in mind is that I’m only concerned with two lines in this output: the “Log Read Checkpoint” line and the line that has the date and RBA number.  In order to gather this information and do the calculations using the RBA, I wrote a Perl script.  The basics of the script are below:


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#Title: gg_run_rate_from_rba.pl
#
use strict;
use warnings;

#Static Variables

my $gghome = "/u01/app/oracle/product/12.1.2/ogg";
my $outfile = "/tmp/gg_replicat_runrates.txt";
my $sleeptime = 60;
my $gguser = "c##ggate";
my $ggpass = "ggate";
my @process = ("replicat");
my $replicat;
my($date1,$curtrail1,$rba1);
my($date2,$curtrail2,$rba2);
my($rate_min, $rate_hr, $rate_gb_hr, $rate_gb_day);

#Program

open (RUNRATES, ">>$outfile") or die "Unable to open file";
foreach my $i(@process)
{
my @process_name = `ps -ef | grep dirprm | grep $i | grep -v grep | awk '{print \$14}'`;   
my @replicats = @process_name;

    foreach (@replicats)
    {
        $replicat = $_;
        chomp($replicat);
        # check_replicat() and calc_rate() use the globals set above, so one call per sample is enough
        ($curtrail1,$date1,$rba1) = check_replicat();
        #print "$curtrail1 -> $date1 -> $rba1\n";
        sleep($sleeptime);
        ($curtrail2,$date2,$rba2) = check_replicat();
        #print "$curtrail2 -> $date2 -> $rba2\n";
        ($rate_min, $rate_hr, $rate_gb_hr, $rate_gb_day) = calc_rate();
       
        print RUNRATES "$replicat|$date1|$curtrail1|$rba1|$date2|$curtrail2|$rba2|$rate_min|$rate_hr|$rate_gb_hr|$rate_gb_day\n";
    }
}
close (RUNRATES);

#################
#Sub Programs
#################

sub check_replicat
{
my @buf = `$gghome/ggsci << EOF
dblogin userid $gguser\@pdb2 password $ggpass
info replicat $replicat, detail
EOF`;

my $curtrail;
my $date;
my $rba;

    foreach (@buf)
    {
        if (/Log Read Checkpoint/)
        {
            if (m/(\.\/\w+\/\w+)/g)
            {
                $curtrail = $1;
            }
        }
       
        if (/RBA/)
        {
            if (m/(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})/g)
            {
                $date = $1."-".$2."-".$3." ".$4.":".$5.":".$6;
            }
           
            if (m/RBA (\d+)/g)
            {
                $rba = $1;

            }   
        }
    }
    return($curtrail,$date,$rba);
} #end sub check_replicat

sub calc_rate
{
    my $first_rba = $rba1;
    my $sec_rba = $rba2;
 
    my $mb_min = (($sec_rba-$first_rba)/(1024*1024));
    my $mb_hr = ($mb_min*60);
    my $gb_hr = ($mb_hr/(1024));
    my $gb_day = ($gb_hr*24);
    return ($mb_min,$mb_hr,$gb_hr, $gb_day);
} #end sub calc_rate

This script is a bit longer than I would like; however, it captures the required information, waits 60 seconds, and then gathers the information again for the replicat it is working on.  Once the first and second RBAs are grabbed, the script writes the output to a flat file along with the calculations for MB per minute, MB per hour, GB per hour, and GB per day.

Once the flat file has been written, I can use an external table to view this data from SQL (see my other post on monitoring GoldenGate from SQL here).  Using the external table, I can see what my run rates are from any SQL-capable tool.  Below is a simple query to pull the data from the external table.
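
Before the query, here is a rough sketch of what the external table over this flat file could look like (my own sketch, assuming a GGHB schema and a directory object named TMP_DIR pointing at /tmp; the date mask matches the format written by the Perl script):

create table gghb.replicat_runrates (
  repgroup     varchar2(30),
  date1        date,
  curtrail1    varchar2(100),
  rba1         number,
  date2        date,
  curtrail2    varchar2(100),
  rba2         number,
  rate_min_mb  number,
  rate_hr_mb   number,
  rate_hr_gb   number,
  rate_day_gb  number
)
organization external (
  type oracle_loader
  default directory tmp_dir
  access parameters (
    records delimited by newline
    fields terminated by '|'
    missing field values are null
    (
      repgroup,
      date1 char(19) date_format date mask "yyyy-mm-dd hh24:mi:ss",
      curtrail1,
      rba1,
      date2 char(19) date_format date mask "yyyy-mm-dd hh24:mi:ss",
      curtrail2,
      rba2,
      rate_min_mb,
      rate_hr_mb,
      rate_hr_gb,
      rate_day_gb
    )
  )
  location ('gg_replicat_runrates.txt')
)
reject limit unlimited;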

Note: Some numbers in the output may be negative.  This is due to RBA2 being smaller than RBA1, which happens when the replicat rolls over to a new trail file between the two samples.


select
        repgroup as processgroup,
        to_char(date1, 'DD-MON-YY HH:MI:SS') snap1,
        curtrail1 as snap1_trail,
        rba1 as snap1_rba,
        to_char(date2, 'DD-MON-YY HH:MI:SS') snap2,
        curtrail2 as snap2_trail,
        rba2 as snap2_rba,
        rate_min_mb,
        rate_hr_mb,
        rate_hr_gb,
        rate_day_gb
from
  gghb.replicat_runrates
where
  repgroup = 'REP';

--Output (unformatted)--

PROCESSG SNAP1              SNAP1_TRAIL                     SNAP1_RBA SNAP2              SNAP2_TRAIL                     SNAP2_RBA RATE_MIN_MB RATE_HR_MB RATE_HR_GB RATE_DAY_GB
-------- ------------------ ------------------------------ ---------- ------------------ ------------------------------ ---------- ----------- ---------- ---------- -----------
REP      22-MAY-14 01:38:51 ./dirdat/rt000034                 2905346 22-MAY-14 01:38:51 ./dirdat/rt000034                 3197702         286      17130         17         401
REP      22-MAY-14 01:39:49 ./dirdat/rt000034                 3197702 22-MAY-14 01:39:49 ./dirdat/rt000034                 3521610         316      18979         19         445
REP      22-MAY-14 01:40:50 ./dirdat/rt000034                 3521610 22-MAY-14 01:40:50 ./dirdat/rt000034                 3802260         274      16444         16         385
REP      22-MAY-14 01:41:49 ./dirdat/rt000034                 3802260 22-MAY-14 01:41:49 ./dirdat/rt000034                 4112529         303      18180         18         426
REP      22-MAY-14 01:42:49 ./dirdat/rt000034                 4112529 22-MAY-14 01:42:49 ./dirdat/rt000034                 4463477         343      20563         20         482

 

Being able to view run rates through an external table means additional scripts can be written to report on what is going on within the Oracle GoldenGate apply process, giving administrators a better understanding of their environments.  At the same time, I think this information is valuable in the tuning process as Oracle GoldenGate environments grow.

Let me know your thoughts and comments on this, because it is always interesting to see how other organizations solve these issues as well.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate
Categories: DBA Blogs

Configure #Oracle #GoldenGate #12c for testing between two or more #db12c PDBs.

Fri, 2014-05-09 13:11

Over the last few weeks I’ve been working and traveling a good bit.  Amongst all the travel, I’ve been trying to find time to test a few things.  One test I wanted to get done was using Oracle GoldenGate 12c with Oracle Database 12c between pluggable databases (in the same instance).  As of today, I’ve finally gotten my Oracle VirtualBox environment configured and ready, so now I can travel and still test the items I want to test.

In setting up this environment, I’ve identified a few things that may be of interest to the community.  Before I go into what I had to configure to make this environment work, I have to say that the Oracle documentation for Oracle GoldenGate 12c is decent but still missing a few things.  The organization of the documentation could improve a good bit as well (personal opinion).

Note: As far as the documentation goes, it seems like downloading the PDF is the better option and a bit more detailed in steps compared to the web version.

In my VM environment, I’m using the following software:

  • Oracle Enterprise Linux 6.5
  • Oracle Database 12c Enterprise Edition (12.1.0.1)
  • Oracle GoldenGate 12c (12.1.2)
Linux:

I’ve just set up a simple Oracle Enterprise Linux 6.5 box.  It’s not very big, but for a VM it could be considered pretty large.  The VM consists of the following (screenshots provided for each part).

Memory:

3072MB
(could go smaller and may go smaller as I shrink the SGA/PGA requirements for testing)


CPU:

2 Processors with 100% execution cap


Storage:

Using 7 VMDK disks (file system and ASM disks).
The reason for so many disks is that I’m using Oracle ASM and Oracle Restart to keep the database files and database instance consistent upon restart of the VM.


Network:

NAT with port forwarding
For more information on port forwarding, a good blog post for this was written by my friend Maaz Anjum on his blog (VirtualBox, NAT, and Port Forwarding).


Database:

The database is the currently available version of Oracle Database 12c (12.1.0.1) and Grid Infrastructure, which you can download from Oracle Technology Network (OTN).  Once the software is downloaded, it can be installed within the VM.  There are many different steps that need to be addressed with Oracle Grid Infrastructure 12c to get ASM working right.  In my configuration I’m using ASMLib; however, the Oracle RACSIG has provided some exceptional documentation on how to set up Oracle RAC 12c using VirtualBox VMs with ASM without ASMLib.  It is a good starting point if needed.

Once the Oracle Database 12c (12.1.0.1) software is installed, a container database (CDB) with two pluggable databases (PDBs) needs to be created.  DBCA will allow you to create a CDB with one PDB, and additional PDBs with more runs of DBCA.  The way I created my second PDB was to create the first PDB (PDB1) and configure it with what I needed for Oracle GoldenGate.  Then I cloned PDB1 to create PDB2 with all the settings in place.

I don’t want to spend too much time on the cloning process of PDBs in this post; however, Oracle provides a lot of different ways to clone PDBs.  I have listed some common ones below:

  • SQL
  • EM Express
  • SQL Developer (easiest way)

Note: To make cloning of PDBs easy, recommend staying with Oracle Managed Files (OMF).

GoldenGate:

I will say that configuring Oracle GoldenGate 12c (12.1.2) with Oracle Database 12c (12.1.0.1) using PDBs is the same yet different than configuring it with traditional databases (11.2.0.4 and earlier).  I say this because things that we typically use for setting up Oracle GoldenGate change ever so slightly.

GoldenGate User:

One example is the Oracle GoldenGate user.  In previous versions, I could create the user, grant DBA rights, and run.  In Oracle Database 12c, it is not quite that simple.  In order to use Oracle GoldenGate with PDBs, a “common user” needs to be created.

Details: I discuss “common users” in detail in this post.

I created my common user with the name “C##GGATE” in the container database (CDB).  This gives the GoldenGate user access to all the PDBs contained within the CDB.
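
For reference, creating the common user looks something like the following (the password is just the one used throughout this example environment):

create user c##ggate identified by ggate container=all;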

GoldenGate User: Privileges:

With the common user created, I needed to grant permissions so the user can do whatever it needs to.  I granted DBA and CDB_DBA along with CONNECT to my common user.  It turns out there is an issue with the CDB_DBA role (I did not investigate fully), but granting DBA and CONNECT worked (to a degree).  So I resorted to the Oracle GoldenGate documentation for what permissions were needed.

Reminder: As I mentioned earlier, the documentation is a bit scattered and trying to find this information took some time.

Although I found the documentation and granted the documented privileges for capture and apply, I still had issues.  Below, I have provided the permissions that I had to explicitly grant to my common user.

Privileges Granted:

The SQL used to find this information is the same on the CDB and all PDBs.  The results returned are for the common user C##GGATE (roles and system privileges).


SELECT distinct DECODE(SA1.GRANTEE#, 1, 'PUBLIC', U1.NAME) as "USER", SUBSTR(U2.NAME,1,20) as "ROLE", 
   /*SUBSTR(SPM.NAME,1,27) as "PRIV",*/ UAM.STATUS 
FROM SYS.SYSAUTH$ SA1, SYS.SYSAUTH$ SA2, SYS.USER$ U1, 
     SYS.USER$ U2, SYS.SYSTEM_PRIVILEGE_MAP SPM,SYS.USER_ASTATUS_MAP UAM
WHERE SA1.GRANTEE# = U1.USER#
  AND SA1.PRIVILEGE# = U2.USER#
  AND U2.USER# = SA2.GRANTEE#
  AND SA2.PRIVILEGE# = SPM.PRIVILEGE
  and U1.ASTATUS = UAM.STATUS# and U1.TYPE# =1
  and U1.name = upper('&user_name')
  ORDER BY 1, 2, 3;

Container Database:


Pluggable Database 1:


Pluggable Database 2:


You will notice that the roles CONNECT, DBA and RESOURCE are granted at the CDB level and only CONNECT is needed at the PDB level.  This is needed to allow the common user to switch between PDB containers.

Although these roles are great for the common user, the documentation also states that the common user has to be granted access to the PDBs through a new procedure.


begin
  DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GGATE');
end;
/

or

begin
  DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('C##GGATE', container=>'all');
end;
/

What the DBMS_GOLDENGATE_AUTH procedure actually grants is a set of system privileges that Oracle states are needed for the common user to interact with the PDBs.  In the end, I think the system privileges that are set through the DBMS_GOLDENGATE_AUTH package are not complete.  Below is a list of privileges that I have granted to my common user (the code block shows the view I use to find these privileges; it is created under SYS and run on the CDB and all PDBs).


CREATE OR REPLACE VIEW DBA_USER_PRIVS (USERNAME, ROLENAME, PRIVILEGE) AS
SELECT DECODE(SA1.GRANTEE#, 1, 'PUBLIC', U1.NAME), SUBSTR(U2.NAME,1,20), 
   SUBSTR(SPM.NAME,1,27) 
FROM SYS.SYSAUTH$ SA1, SYS.SYSAUTH$ SA2, SYS.USER$ U1, 
     SYS.USER$ U2, SYS.SYSTEM_PRIVILEGE_MAP SPM
WHERE SA1.GRANTEE# = U1.USER#
  AND SA1.PRIVILEGE# = U2.USER#
  AND U2.USER# = SA2.GRANTEE#
  AND SA2.PRIVILEGE# = SPM.PRIVILEGE
UNION
SELECT U.NAME, NULL, SUBSTR(SPM.NAME,1,27)
FROM SYS.SYSTEM_PRIVILEGE_MAP SPM, SYS.SYSAUTH$ SA, SYS.USER$ U
WHERE SA.GRANTEE#=U.USER# 
  AND SA.PRIVILEGE#=SPM.PRIVILEGE
/  

select * from dba_user_privs where username = 'C##GGATE' and rolename is null;

Container Database:


Pluggable Database 1 (Capture):


Pluggable Database 2 (Apply):


Lastly, I figured out that I needed to set tablespace quotas both within the CDB and the PDB for the common user.  To make it easy, I just assigned UNLIMITED quotas to the USER tablespace in the CDB and PDB for the common user.

Note: The quota on the USER tablespace is needed because the default tablespace for C##GGATE is USER in the CDB; the user needs the same access at the PDB level.
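
A minimal sketch of the quota assignment might look like the following (assuming the tablespace in question is the default USERS tablespace; substitute your own names):

-- in the CDB root
alter user c##ggate quota unlimited on users;

-- then again in each PDB
alter session set container = PDB1;
alter user c##ggate quota unlimited on users;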

Now that the user has been configured, let’s get the GoldenGate processes configured and ready to run.

Environment Setup:

Just like any other GoldenGate environment, you will have an Extract, a Replicat, and possibly a Data Pump (optional, but I always use one), plus the associated trail files.  The important thing to remember when setting up each process is that there are a few changes in the TABLE and MAP statements.  I have provided my working, uni-directional parameter files below.

Tip: Extracts in Oracle Database 12c against PDBs MUST be set up as integrated extracts.  Classic capture extracts can be configured but will not start against a PDB (and no error message will be logged, either).
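
Related to the tip above: an integrated extract also has to be registered with the database before it will run.  From GGSCI, the registration against the PDB would look something along these lines (logging in as the common user at the root level):

GGSCI> dblogin userid c##ggate, password ggate
GGSCI> register extract ext database container (PDB1)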

Extract:


-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

--Specifies the name of the extract process
EXTRACT ext

--Oracle Login
USERID c##ggate, PASSWORD ggate

--Use ASM API
--Not needed for integrated capture as of 12.1.2
--TRANLOGOPTIONS DBLOGREADER

--Set Oracle Environment Variables
SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/db12c")
SETENV (ORACLE_SID="bc12ctst")

--Warns for a long running transaction
WARNLONGTRANS 5m, CHECKINTERVAL 3m

--Specifies the location of the remote trail file on target machine
EXTTRAIL ./dirdat/lt

--Resolves the TABLES to be replicated ON START-UP
WILDCARDRESOLVE IMMEDIATE

--Table Mappings
SOURCECATALOG PDB1;
TABLE BOBBY.*;

Data Pump (Extract):


-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
-- CHECKPARAMS

--Specifies the name of the extract pump process
EXTRACT pmp

--No logic applied just pass everything extract writes to the trail file to the target trail file
PASSTHRU

--Specifies the target system to connect to
RMTHOST localhost, MGRPORT 15000, COMPRESS

--Specifies the location of the remote trail file on target machine
RMTTRAIL ./dirdat/rt

--Table Mappings
SOURCECATALOG PDB1;
TABLE BOBBY.*;

Replicat:


--Specifies the name of the replicat load process.
REPLICAT rep

-- Verifies parameter file syntax. COMMENT OUT AFTER TESTING.
--CHECKPARAMS

SETENV (ORACLE_HOME="/u01/app/oracle/product/12.1.0/db12c")
SETENV (ORACLE_SID="bc12ctst")

--Oracle login.
USERID c##ggate@PDB2, PASSWORD ggate

--DBOPTIONS INTEGRATEDPARAMS(parallel 4)

--The source ddl and target ddl are identical
ASSUMETARGETDEFS

--Specifies name and location of a discard file.
DISCARDFILE ./dirrpt/rep.dsc, append, megabytes 5

--Resolves the TARGETs to be replicated during process start-up
WILDCARDRESOLVE IMMEDIATE

--Table Mappings
map PDB1.BOBBY.*, target BOBBY.*;

Notice in the Extract and Replicat parameter files that the TABLE and MAP statements are slightly different.  In the two files, I use two different ways of associating tables with the PDBs.  In the Extract parameter file, the PDB is assigned with the SOURCECATALOG option; this tells GoldenGate to switch containers and use PDB1 for the tables.  In the Replicat parameter file, the MAP statement is not using SOURCECATALOG; instead it references the source tables in an inline CATALOG.SCHEMA.TABLE format (PDB1.BOBBY.*).  The target side does not need the catalog because the replicat logs into PDB2 directly.

Well, I hope this gave a good overview of how to set up Oracle GoldenGate 12c (12.1.2) with Oracle Database 12c (12.1.0.1) using PDBs.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, Golden Gate, Replication
Categories: DBA Blogs

Switch to Integrated Replicat

Mon, 2014-04-21 12:15

Earlier I wrote a blog post about moving the Oracle GoldenGate extract from “Classic” capture to “Integrated” capture.  In this post, we will take a look at how to move a “Classic” replicat (apply) process to an “Integrated” replicat.

The initial steps to migrate the replicat from “Classic” to “Integrated” are the same as for the extract.  Look here for the initial details.  This post picks up where we are getting ready to stop the replicat and perform the migration.

Before we can do anything with the replicat, we need to be in GGSCI.

Code 1:


[oracle@oel oggcore_1]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.0.0 17185003 17451407_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Oct 29 2013 00:31:40
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.

 

GGSCI (oel.acme.com) 1>

Now, we need to stop the replicat (apply) process that we are using.

Code 2:


GGSCI (oel.acme.com) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
JAGENT      STOPPED
EXTRACT     RUNNING     EXT         00:00:09      00:00:00
EXTRACT     RUNNING     PMP         00:00:00      00:00:10
REPLICAT    RUNNING     REP         00:00:00      00:00:08

GGSCI (oel.acme.com) 2> stop replicat rep

Sending STOP request to REPLICAT REP ...
Request processed.

With the replicat stopped, we need to log into the database through GGSCI.

Code 3:


GGSCI (oel.acme.com) 3> dblogin userid ggate, password ggate
Successfully logged into database.

Next, we need to alter the replicat to make it integrated.

Code 4:


GGSCI (oel.acme.com) 4> alter replicat rep, integrated
REPLICAT (Integrated) altered.

Finally, we need to start the replicat.

Code 5:


GGSCI (oel.acme.com) 5> start replicat rep

Sending START request to MANAGER ...
REPLICAT REP starting

Once the replicat has been started, we can check the replicat and look at the details with the INFO option.

Code 6:


GGSCI (oel.acme.com) 7> info replicat rep, detail

REPLICAT   REP       Last Started 2014-04-21 14:06   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:00:03 ago)
Process ID           28410
Log Read Checkpoint  File ./dirdat/rt000000
                     2014-04-21 12:42:14.030503  RBA 6912167

INTEGRATED Replicat
DBLOGIN Provided, inbound server name is OGG$REP in ATTACHED state

Current Log BSN value: <NULL>

  Extract Source                          Begin             End

  ./dirdat/rt000000                       2014-04-21 12:42  2014-04-21 12:42
  ./dirdat/rt000000                       2014-04-21 12:42  2014-04-21 12:42
  ./dirdat/rt000000                       * Initialized *   2014-04-21 12:42
  ./dirdat/rt000000                       * Initialized *   2014-04-21 12:42
  ./dirdat/rt000000                       * Initialized *   First Record
  ./dirdat/rt000000                       * Initialized *   First Record

Current directory    /oracle/app/product/12.1.2/oggcore_1

Report file          /oracle/app/product/12.1.2/oggcore_1/dirrpt/REP.rpt
Parameter file       /oracle/app/product/12.1.2/oggcore_1/dirprm/REP.prm
Checkpoint file      /oracle/app/product/12.1.2/oggcore_1/dirchk/REP.cpr
Checkpoint table     ggate.checkpoint
Process file         /oracle/app/product/12.1.2/oggcore_1/dirpcs/REP.pcr
Error log            /oracle/app/product/12.1.2/oggcore_1/ggserr.log

From looking at the details of the replicat, you can see that it is now integrated and using the DBLOGIN that we provided earlier.
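
If you want to confirm the inbound server from the database side as well, one option is a query against DBA_APPLY, where GoldenGate inbound servers such as OGG$REP show up; something along these lines:

select apply_name, status
from dba_apply
where apply_name like 'OGG$%';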

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate
Categories: DBA Blogs

Move from classic capture to integrated capture

Mon, 2014-04-21 11:50

Most Oracle GoldenGate installations, I venture to say 80-90%, use the “Classic” capture architecture.  What I want to show in this post is how to switch from “Classic” capture to “Integrated” capture.

In the Oracle GoldenGate 12c documentation, the steps provided work, but there are a few things we need to be aware of when trying to switch to integrated capture mode.  The steps outlined here show you how to work around these gaps.

Let’s get started!

With any replication environment, we want to identify where we are within the replication cycle, i.e., what the current transaction is.  In Code 1, we see how we can do this:

Code 1:


GGSCI (oel.acme.com) 2> send extract ext, showtrans

Sending SHOWTRANS request to EXTRACT EXT ...
No transactions found

Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 150, RBA 194781712.

Notice that we have no pending transactions and that the Redo Log Sequence Number is 150 with an RBA of 194781712.  Now that we know where we are at with transactions, we need to ensure the correct permissions are associated with our GGATE user.

The documentation says to use DBLOGIN USERIDALIAS alias to log into the mining database.  In this environment, we are not using a separate downstream mining database.  What we need to do is just make sure that the GGATE user has the right permissions.  This can be accomplished by using SQL*Plus or SQL Developer to grant permissions.  Image 1 shows how to grant the needed privileges for GGATE for the capture process.

Note: omitting the “capture” option will ensure permissions for both capture and apply processes.

Image 1:
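
The grant shown in Image 1 would be along the lines of the following (using the GGATE user from this environment; per the note above, omitting the privilege type covers both capture and apply):

begin
  DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE('GGATE', privilege_type => 'capture');
end;
/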

With all the permissions set and knowing where we are with transactions, we can now begin to migrate the “Classic” extract to an “Integrated” extract.  Before we migrate the extract, always make a backup of the dir* directories in the $GG_HOME.

Code 2:


[oracle@oel oggcore_1]$ zip gg_directories.zip ./dir*/*

Once the backup of the dir* directories is made, we can proceed by logging into the database from GGSCI.

Code 3:


GGSCI (oel.acme.com) 3> dblogin userid ggate, password ggate
Successfully logged into database.

Next, we need to stop the extract that we want to migrate to “Integrated”

Code 4:


GGSCI (oel.acme.com) 5> stop extract ext

Sending STOP request to EXTRACT EXT ...
Request processed.

Now, register that extract with the database.

Code 5:


GGSCI (oel.acme.com) 7> register extract ext database
Extract EXT successfully registered with database at SCN 26468050.

Before the upgrade we need to verify that the extract is ready to be upgraded. This is accomplished with the INFO command.

Code 6:


GGSCI (oel.acme.com) 9> info extract ext upgrade
ERROR: Extract EXT is not ready to be upgraded because recovery SCN 26468017 has not reached SCN 26468050.

Notice that we got an ERROR message.  This is because the extract was registered with the database at SCN 26468050, but the extract’s recovery SCN is currently 26468017 and has not yet reached 26468050.  How can we move it forward if the extract is down?  It is really simple: start the extract.

Code 7:


GGSCI (oel.acme.com) 15> start extract ext

Sending START request to MANAGER ...
EXTRACT EXT starting

Once the extract has been successfully started, we need to stop the extract again.

Code 8:


GGSCI (oel.acme.com) 18> stop extract ext

Sending STOP request to EXTRACT EXT ...
Request processed.

Now, lets check to see if the extract is ready to upgrade.

Code 9:


GGSCI (oel.acme.com) 19> info extract ext upgrade
Extract EXT is ready to be upgraded to integrated capture.

Great!  Now the extract is ready to be upgraded.  Let’s upgrade it to an “Integrated” extract.

Code 10:


GGSCI (oel.acme.com) 20> alter extract ext upgrade integrated tranlog
Extract EXT successfully upgraded to integrated capture.

Now that the extract has been upgraded, we need to try and start the extract.

Code 11:


GGSCI (oel.acme.com) 21> start extract ext

Sending START request to MANAGER ...
EXTRACT EXT starting

Once the extract has started, we can see that it has started and view all the specific information related to the extract using the INFO command.

Code 12:


GGSCI (oel.acme.com) 25> info extract ext, detail

EXTRACT    EXT       Last Started 2014-04-21 12:42   Status RUNNING
Checkpoint Lag       00:00:05 (updated 00:00:05 ago)
Process ID           25943
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2014-04-21 12:44:04
                     SCN 0.26486661 (26486661)

  Target Extract Trails:

  Trail Name                                       Seqno        RBA     Max MB Trail Type

  ./dirdat/lt                                          2       1449         50 EXTTRAIL

Integrated Extract outbound server first scn: 0.26468050 (26468050)

  Extract Source                          Begin             End

  Not Available                           2014-04-21 12:37  2014-04-21 12:44
  /oracle/app/oradata/bc11g/redo06_1.log  * Initialized *   2014-04-21 12:41
  /oracle/app/oradata/bc11g/redo06_1.log  2014-04-21 12:37  2014-04-21 12:41
  /oracle/app/oradata/bc11g/redo06_1.log  2014-04-15 11:09  2014-04-21 12:38
  /oracle/app/oradata/bc11g/redo06_1.log  2014-04-15 11:09  2014-04-15 11:09
  Not Available                           * Initialized *   2014-04-15 11:09

Current directory    /oracle/app/product/12.1.2/oggcore_1

Report file          /oracle/app/product/12.1.2/oggcore_1/dirrpt/EXT.rpt
Parameter file       /oracle/app/product/12.1.2/oggcore_1/dirprm/EXT.prm
Checkpoint file      /oracle/app/product/12.1.2/oggcore_1/dirchk/EXT.cpe
Process file         /oracle/app/product/12.1.2/oggcore_1/dirpcs/EXT.pce
Error log            /oracle/app/product/12.1.2/oggcore_1/ggserr.log

With the extract running in “Integrated” mode, we want to see if there are any errors in the ggserr.log.  With this upgrade, there appear to be no real errors in the ggserr.log; however, we do see two warnings.

OGG-01423 : references a default archive location
OGG-02045 : referring to streams_pool_size initialization parameter

These warnings are interesting and bear more investigation; however, for now the extract has been upgraded to an “Integrated” extract.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate
Categories: DBA Blogs

LogDump and Understanding header information in Oracle GoldenGate Trail Files

Thu, 2014-04-17 08:30

Replication of data is always a fun thing to look at: what is replicating?!  Discussions around how to get data from server/database A to server/database B, or even to server/database C, are valid questions and are often asked by management.  Often the simple (knee-jerk) answer is: just set it up and start replicating.  Although Oracle GoldenGate may be simple to set up (for some architectures) and may meet the demands of management and the task at hand, problems will still arise with the data being replicated.

When problems arise, identifying and resolving the replication issue becomes a critical and time-consuming task.  Oracle GoldenGate provides a few utilities to help in diagnosing and resolving replication issues.  One such utility is the LogDump utility.  LogDump is used to read the local and remote trail files that support the continuous extraction and replication of transaction changes within the database.

Knowing what trail files are used for is part of the battle when troubleshooting replication issues with Oracle GoldenGate.  How do we use LogDump to read these trail files?  What are we looking for, or at, in a trail file to understand what is going on?  To answer these questions, we need to start the LogDump utility.

To start LogDump, we just need to be in the OGG_HOME and run the logdump command.  The code block below shows how to start LogDump.


[oracle@oel oggcore_1]$ pwd
/oracle/app/product/12.1.2/oggcore_1
[oracle@oel oggcore_1]$ ./logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 12.1.2.0.0 17185003 17451407

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.

 

Logdump 22 >

Note: Your LogDump session should start at 1 not 22 (Logdump 22).  LogDump remembers session info until you log out of the server.

Once LogDump has been started, we need to open a trail file and set up how we want the information to be displayed.  Commands for LogDump can be displayed by using the “help” command.  In the following code block, we open a local trail (lt) file and set a few environment options.

Note: Trail files (local and remote) are normally prefixed with two letters followed by a six-digit sequence number.  In a new environment, trail files will start with (prefix)000000 (lt000000 or rt000000).


Logdump 15 >open ./dirdat/lt000000
Current LogTrail is /oracle/app/product/12.1.2/oggcore_1/dirdat/lt000000
Logdump 16 >ghdr on
Logdump 17 >detail on
Logdump 18 >detail data
Logdump 19 >usertoken on
Logdump 20 >

The “help” command inside of LogDump provides more options.  The options that we are using in this example are:

  • ghdr on =  toggle header display on | off
  • detail on = toggle detailed data display (on | off | data)
  • detail data =  toggle detailed data display (on | off | data)  (repeated this just to make sure)
  • usertoken on = show user token information (on | off| detail)

With the LogDump environment set, we can now use the “next (n)” command to see the information in the trail file.


Logdump 20 > n

Once the header output is displayed, we need to understand how to read the information.  Image 1 provides a quick explanation of each major component within a trail file transaction.  We can see the following items for a transaction in the trail file (lt000000):

  • Header Area: Transaction information
  • Data/Time and type of transaction
  • Object associated with the transaction
  • Image of transaction (before/after)
  • Columns associated with the transaction
  • Transaction data formatted in Hex
  • Length of the record
  • ASCII  format of the data
  • Record position within the trail file (RBA)

Image 1: Header Information

At this point, we may be asking: why is this important?  Understanding the trail files and how to find information within them is an important part of troubleshooting an Oracle GoldenGate environment.

Example: If a replicat abends and we need to start the replicat from a given RBA, being able to identify the first, next, and last RBA in the trail file is helpful in understanding why the abend happened and in identifying a starting point for restarting successfully.
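
As a concrete sketch of that scenario, once a safe RBA has been identified with LogDump, the replicat can be repositioned and restarted from GGSCI along these lines (the group name, trail sequence number, and RBA are placeholders):

GGSCI> alter replicat rep, extseqno 0, extrba 6912167
GGSCI> start replicat rep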

In the end, the Oracle GoldenGate environment can be simple yet complex at the same time. Understanding the different components of the environment is very useful and worth the time involved to learn it.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate
Categories: DBA Blogs

Monitoring Oracle Golden Gate from SQL Developer

Wed, 2014-04-16 07:16

Last week I was at Collaborate 14 speaking in two sessions; one of the sessions I had done a couple of times before.  The other session was about the different ways of monitoring Oracle GoldenGate (if you are curious, the presentation can be found here).  While at the conference, I ran the idea of monitoring GoldenGate from SQL Developer by a few peers, and there seems to be interest.  As for Oracle, this approach to monitoring GoldenGate is not on Oracle’s road map for SQL Developer.

To achieve this goal, SQL Developer’s XML extensions are needed.  Using XML extensions, I’ve been able to turn my SQL-based GoldenGate monitoring into a working extension.  The extension is not perfect and still needs some work.  As you can see in Image 1, I can get the status of a GoldenGate process and its associated stats.

Image 1:

The SQL Developer extension for Oracle GoldenGate is available for whoever would like to use it and extend on it.  This extension is included with my other GoldenGate monitoring scripts located here and on my scripts page.

Note: at some point, I will hopefully get this extension uploaded to a Github repository for community digestion.

This extension is meant to help DBAs monitor their GoldenGate environments without having to go directly to the server.  For now, it just gives up/down status and operation stats.  Hopefully, as it matures (as I and others work on it), it will become a robust extension for all monitoring of Oracle GoldenGate.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate, Replication
Categories: DBA Blogs