DBASolved

Helping Oracle DBAs solve problems

Fabric … Simple #GoldenGate Administration over SSH

Wed, 2016-10-05 10:45


For a while now, I’ve been looking at different approaches for automating some of my GoldenGate monitoring tasks. Yes, there are tools out there that provide a wide variety of ways to interact with and monitor Oracle GoldenGate. The problem with Oracle GoldenGate, however, is that you have at least two parts of the equation when interacting with or monitoring an environment: source and target. If you are lucky enough to have a multi-master configuration, then you have to deal with multiple target sites. In the end, administering the environment and knowing what is going on becomes a bit of a time-consuming process.

I’ve looked at different tools to try and solve this problem, mostly in Perl, since I write a lot of that from time to time. Getting Perl to do network commands like SSH was time consuming and cumbersome to script. That is when I started to look at Python. Initially I wasn’t sold on Python, but after spending some time getting familiar with the language, I actually like it; it was still missing the SSH functionality I wanted, though, until recently.

As I was browsing the inter-web over the last week or so, I came across a blog post that compared deployment tools such as Chef, Puppet, Ansible, Fabric and a few others. The website provided pros and cons for each of the deployment options they were reviewing. Take a look and see what you may like to use for deployment options in your environments.

Out of the tools the website talked about, Fabric piqued my attention because it is a Python-based tool for streamlining SSH application deployments. I was thinking that if I could deploy applications with Fabric, then I should be able to do some basic monitoring with it as well; that sent me down the path of solving basic Oracle GoldenGate monitoring from a single command line. Yea, sounds like a good plan!

After spending some time figuring out how to install Fabric and what exactly a “fabfile” is, I wrote my first monitoring script for Fabric!

What exactly does this monitoring script look like? Below is the monitoring script, also known as a “fabfile”. The script is written in Python; it sets up the environment and defines the functions to be called.

from fabric.api import *

# Hosts that Fabric will connect to over SSH
env.hosts = [
    'fred.acme.com',
    'wilma.acme.com'
]

# Credentials for the SSH connections
env.user = "oracle"
env.password = "********"  # do not place the password in clear text; use SSH keys or a prompt instead

# Log in to GGSCI under $OGG_HOME and run "info all"
def info_all():
    with cd("$OGG_HOME"):
        run("echo info all | ./ggsci")

# Generic task that wraps info_all()
def ggstatus():
    info_all()

The environment being monitored consists of the test servers that I use for many of my tests (Fred and Wilma). Then I’m telling Fabric to use the “oracle” userid and password to log in to each server. In this environment I have all my oracle passwords set the same, so I only have to provide it once. After the environment is set, I define the functions that I want to perform. The first function, info_all(), logs into GGSCI and provides the output of the “info all” command. The second function is just a generic function that calls the first one.

With the “fabfile” created, I can now run a single command to access both servers (Fred & Wilma) and check on the status of the Oracle GoldenGate processes running on these servers. The command that I run is:

$ fab ggstatus -f ./fab_gg.py

This command executes the Fabric executable “fab”, followed by the function in the “fabfile” that I want to run. The “-f ./fab_gg.py” option names the “fabfile” to use during execution. Upon execution, output will scroll to STDOUT, providing the status of each Oracle GoldenGate environment for each server I’ve requested.

AMAC02P37LYG3QC:python bobby.l.curtis$ fab ggstatus -f ./fab_gg.py
[fred.acme.com] Executing task 'ggstatus'
[fred.acme.com] run: echo info all | ./ggsci
[fred.acme.com] out: The Oracle base has been set to /u01/app/oracle
[fred.acme.com] out: ====================================
[fred.acme.com] out: ORACLE_SID=src12c
[fred.acme.com] out: ORACLE_BASE=/u01/app/oracle
[fred.acme.com] out: ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
[fred.acme.com] out: OGG_HOME=/u01/app/oracle/product/12.2.0/oggcore_1
[fred.acme.com] out: JAVA_HOME=/home/oracle/Downloads/jdk1.8.0_71
[fred.acme.com] out: OGGSTUDIO_HOME=/u01/app/oracle/product/oggstudio/oggstudio
[fred.acme.com] out: OGGSTUDIO_HOME1=/u01/app/oracle/product/oggstudio/12.2.1.1.0/oggstudio
[fred.acme.com] out: ====================================
[fred.acme.com] out:
[fred.acme.com] out: Oracle GoldenGate Command Interpreter for Oracle
[fred.acme.com] out: Version 12.2.0.1.0 OGGCORE_12.2.0.1.0_PLATFORMS_151101.1925.2_FBO
[fred.acme.com] out: Linux, x64, 64bit (optimized), Oracle 12c on Nov 11 2015 03:53:23
[fred.acme.com] out: Operating system character set identified as UTF-8.
[fred.acme.com] out:
[fred.acme.com] out: Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.
[fred.acme.com] out:
[fred.acme.com] out:
[fred.acme.com] out:
[fred.acme.com] out: GGSCI (fred.acme.com) 1>
[fred.acme.com] out: Program     Status      Group       Lag at Chkpt  Time Since Chkpt
[fred.acme.com] out:
[fred.acme.com] out: MANAGER     RUNNING
[fred.acme.com] out: JAGENT      RUNNING
[fred.acme.com] out: EXTRACT     RUNNING     EGG12C      00:00:09      00:00:02
[fred.acme.com] out: Description 'Integrated Extract'
[fred.acme.com] out: EXTRACT     RUNNING     PGG12C      00:00:00      00:00:02
[fred.acme.com] out: Description 'Data Pump'
[fred.acme.com] out:
[fred.acme.com] out:
[fred.acme.com] out: GGSCI (fred.acme.com) 2>
[wilma.acme.com] Executing task 'ggstatus'
[wilma.acme.com] run: echo info all | ./ggsci
[wilma.acme.com] out: The Oracle base has been set to /opt/app/oracle
[wilma.acme.com] out: ====================================
[wilma.acme.com] out: ORACLE_SID=rmt12c
[wilma.acme.com] out: ORACLE_BASE=/opt/app/oracle
[wilma.acme.com] out: ORACLE_HOME=/opt/app/oracle/product/12.1.0.2/dbhome_1
[wilma.acme.com] out: OGG_HOME=/opt/app/oracle/product/12.2.0.0/oggcore_1
[wilma.acme.com] out: JAVA_HOME=/home/oracle/Downloads/jdk1.8.0_71
[wilma.acme.com] out: ODI_HOME=/opt/app/oracle/product/12.2.1.1.0/odi_home1
[wilma.acme.com] out: ====================================
[wilma.acme.com] out:
[wilma.acme.com] out: Oracle GoldenGate Command Interpreter for Oracle
[wilma.acme.com] out: Version 12.2.0.1.0 OGGCORE_12.2.0.1.0_PLATFORMS_151101.1925.2_FBO
[wilma.acme.com] out: Linux, x64, 64bit (optimized), Oracle 12c on Nov 11 2015 03:53:23
[wilma.acme.com] out: Operating system character set identified as UTF-8.
[wilma.acme.com] out:
[wilma.acme.com] out: Copyright (C) 1995, 2015, Oracle and/or its affiliates. All rights reserved.
[wilma.acme.com] out:
[wilma.acme.com] out:
[wilma.acme.com] out:
[wilma.acme.com] out: GGSCI (wilma.acme.com) 1>
[wilma.acme.com] out: Program     Status      Group       Lag at Chkpt  Time Since Chkpt
[wilma.acme.com] out:
[wilma.acme.com] out: MANAGER     RUNNING
[wilma.acme.com] out: JAGENT      RUNNING
[wilma.acme.com] out: REPLICAT    RUNNING     RGG12C      00:00:00      00:00:03
[wilma.acme.com] out:
[wilma.acme.com] out:
[wilma.acme.com] out: GGSCI (wilma.acme.com) 2>

Done.
Disconnecting from wilma.acme.com... done.
Disconnecting from fred.acme.com... done.

As you can tell, I get scrolling output from both servers showing that each server was logged into, the directory was changed to $OGG_HOME, and an “info all” command was executed against GGSCI with the returned output. This makes for a quick and easy way to get the current status of all Oracle GoldenGate processes in an environment.
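
One direction I could take this next is a task that parses the “info all” output and flags anything that is not reporting RUNNING. The ggcheck task below is a sketch of my own (it is not part of the fabfile above); the warn_only setting keeps Fabric from aborting the run if a host is unreachable:

def ggcheck():
    with cd("$OGG_HOME"), settings(warn_only=True):
        output = run("echo info all | ./ggsci")
        for line in output.splitlines():
            # process rows in the "info all" output start with the program name
            if line.startswith(("MANAGER", "JAGENT", "EXTRACT", "REPLICAT")):
                if "RUNNING" not in line:
                    print("*** needs attention: " + line)

It would be run the same way as before, e.g. fab ggcheck -f ./fab_gg.py.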

With this being an introduction to Fabric, I’m looking forward to seeing what else I can do with it. I’ll keep everyone posted on additional things I do with it. For now, I encourage you to take a look at it and see if you can simplify some of your administration tasks with it.

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: Dev Ops, Golden Gate
Categories: DBA Blogs

Oracle Open World 2016 – What GoldenGate sessions are there?

Wed, 2016-08-31 23:00

Well, it is that time of year again; the streets of San Francisco will be crowded with members of the Oracle community! As everyone in the Oracle community descends on the bay area, there will be excitement about the things that will be announced this year. Sure, a lot of it is going to be about “cloud” and what direction Oracle is taking with its “cloud” strategy. Besides “cloud”, there will be a lot of good things to take in as well. As I look through the online session catalog for 2016, I’m interested in the topics related to Oracle GoldenGate.

This year there appears to be a good number of Oracle GoldenGate sessions at Oracle Open World; to be specific, there are 21 scheduled during the event. I have listed a few of the ones I think will be interesting and will make an attempt to attend; always check the session catalog, because what I think is good, you may not.

  • CON6632 – Oracle GoldenGate for Big Data
  • CON7318 – Getting Started with Oracle GoldenGate
  • CON6551 – New Oracle GoldenGate 12.3 Services Architecture (beta stuff)
  • CON6633 – Accelerate Cloud Onboarding with Oracle GoldenGate Cloud Service
  • CON6634 – Faster Design, Development and Deployment with Oracle GoldenGate Studio (should be updated from last year session)
  • CON6558 – Best Practice for High Availability and Performance Tuning for Oracle GoldenGate
  • UGF616 – Oracle GoldenGate and Apache Kafka: A Deep Dive into Real-Time Data Streaming
  • THT7817 – Real-Time and Batch Data Ingestion into Data Lake with Oracle GoldenGate Cloud Service
  • UGF5120 – Oracle GoldenGate and Baseball: Five Fundamentals Before Jumping to the Cloud

As we are within two weeks of Oracle Open World, I hope everyone is ready to go, and I’m looking forward to seeing others from the community.

Enjoy!

@dbasolved
http://about.me/dbasolved


Filed under: General
Categories: DBA Blogs

Digging into ADD SCHEMATRANDATA … what is #GoldenGate doing?

Fri, 2016-08-19 10:30

In my post on the differences between ADD TRANDATA and ADD SCHEMATRANDATA, I highlighted the differences between the two ways of adding supplemental logging to tables. One of the things I pointed out was that ADD SCHEMATRANDATA doesn’t add any supplemental log groups (SLGs). Without SLGs, how does ADD SCHEMATRANDATA work? That was the question I was left with, so I started digging around to find the answer, and I think you may be interested in this as well.

Tracing

In order to figure out what is actually going on within the database when ADD SCHEMATRANDATA is run, I had to trace the GoldenGate session. To do this, I first had to log in to the database from GGSCI using “dblogin useridalias ggate”. After logging in, I had to identify the session and then set up tracing. This was done with the following scripts:

--SQL to identify the GGate session
select 'exec dbms_monitor.session_trace_enable(session_id=>'||sid||', serial_num=>'||serial#||', binds=>true, waits=>true)'
from v$session where username = 'GGATE';

--Start tracing of the session
exec dbms_monitor.session_trace_enable(session_id=>156, serial_num=>15799, binds=>true, waits=>true);

--Disable tracing after done
exec dbms_monitor.session_trace_disable(session_id=>156, serial_num=>15799);

Now with tracing enabled for the session, I’m able to trace the GoldenGate session from within GGSCI. At this time, I’m able to run DELETE SCHEMATRANDATA and ADD SCHEMATRANDATA against the schema I want to add the logging to.

As the last step of tracing, I need to disable the tracing (script above) for the GoldenGate user and identify the trace file. By default, the trace file should be in $ORACLE_BASE/diag/rdbms/<db>/<db>/trace. In this directory, you need to identify the trace files for the session that was traced.

oracle >ls -ltr src12c_ora_23267.*
-rw-r-----. 1 oracle oracle   2788 Aug 15 16:09 src12c_ora_23267.trm
-rw-r-----. 1 oracle oracle 300531 Aug 15 16:09 src12c_ora_23267.trc
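
Alternatively, on 11g and later you can ask the database directly for the session’s trace file, since V$PROCESS exposes a TRACEFILE column (a quick sketch, reusing the GGATE session from above):

select p.tracefile
from v$session s, v$process p
where s.paddr = p.addr
and s.username = 'GGATE';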

After you know the trace file name, you can run TKPROF against it.

oracle > tkprof src12c_ora_23267.trc src12c_ora_23267.tkprofs

This generates a tkprofs file that shows specifics for the session, such as the SQL that was run. That is what I’m most interested in: what SQL is run when ADD SCHEMATRANDATA is called.

Quick Review of TKProf file

I’m not going to go into all the details that are in a tkprof-generated file, but I took a look through the file trying to figure out what is going on when ADD SCHEMATRANDATA runs. As I was searching the file, I found a reference to LOGMNR$ALWAYS_SUPLOG_COLUMNS.
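
If you want to jump straight to the interesting parts, a simple search of the output file works well (a sketch using standard grep against the file names from my session above):

oracle > grep -in 'logmnr\|dbms_capture\|streams' src12c_ora_23267.tkprofs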

If you have kept up with the versions of Oracle GoldenGate, you will know that this is a reference to LogMiner and that Oracle is integrating the Oracle GoldenGate processes with it. This is also a hint about where to look, a.k.a. Oracle Streams. As I continued to look through the tkprof file, I found a few references to a Streams package, DBMS_CAPTURE_ADM, along with SQL statements making calls to views like STREAMS$_PREPARE_DDL.

BEGIN sys.dbms_capture_adm.PREPARE_SCHEMA_INSTANTIATION('soe','ALLKEYS_OFF');
  END;
BEGIN sys.dbms_capture_adm.ABORT_SCHEMA_INSTANTIATION('soe'); END;

At this point, it is safe to say that Oracle is merging Oracle GoldenGate into Oracle Streams.

Review of Packages

Note: These are Oracle packages that are encrypted in the database. You can use UnwrapIt to view them if needed. Output here is only for teaching purposes.

Now that I’m armed with the package that ADD SCHEMATRANDATA is calling, I can drill into what exactly is happening.

In taking a look at the DBMS_CAPTURE_ADM package, I wanted to look at the PREPARE_SCHEMA_INSTANTIATION procedure. I can quickly see that the procedure takes the schema_name and that defaults will be used for supplemental logging and container.

PROCEDURE PREPARE_SCHEMA_INSTANTIATION( 
 SCHEMA_NAME               IN VARCHAR2, 
 SUPPLEMENTAL_LOGGING      IN VARCHAR2 DEFAULT 'KEYS', 
 CONTAINER                 IN VARCHAR2 DEFAULT 'CURRENT') 
 IS 
 BEGIN 
 DBMS_CAPTURE_ADM_IVK.PREPARE_SCHEMA_INST_IVK( 
 SCHEMA_NAME => SCHEMA_NAME,  
 SUPPLEMENTAL_LOGGING => SUPPLEMENTAL_LOGGING, 
 CONTAINER => CONTAINER); 
 END;

After the schema is passed to the procedure, this procedure passes all three of the parameters to another package and procedure for execution. This package is DBMS_CAPTURE_ADM_IVK.PREPARE_SCHEMA_INST_IVK.

PROCEDURE PREPARE_SCHEMA_INST_IVK(
SCHEMA_NAME               IN VARCHAR2,
SUPPLEMENTAL_LOGGING      IN VARCHAR2 DEFAULT 'KEYS',
CONTAINER                 IN VARCHAR2 DEFAULT 'CURRENT')
IS
SUPP_LOG_LEVEL      BINARY_INTEGER;    
SYNCHRONIZATION     VARCHAR2(4) := 'LOCK';
CANON_CONTAINER     DBMS_ID;
BEGIN
  DBMS_LOGREP_UTIL.WRITE_TRACE('prepare_schema_inst_ivk()+ container: ' ||
  CONTAINER, DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
  COMMIT;

  SUPP_LOG_LEVEL := DBMS_CAPTURE_ADM_IVK.SUPP_LOG_LEVEL_CHECK(SUPPLEMENTAL_LOGGING);
  IF DBMS_XSTREAM_GG_ADM.SYNCHRONIZATION IS NOT NULL 
  THEN
         SYNCHRONIZATION := UPPER(DBMS_XSTREAM_GG_ADM.SYNCHRONIZATION);
  END IF;

  IF (SYNCHRONIZATION != 'NONE' AND SYNCHRONIZATION != 'LOCK' AND
           SYNCHRONIZATION != 'WAIT') 
  THEN
           DBMS_LOGREP_UTIL.RAISE_SYSTEM_ERROR(
           DBMS_STREAMS_ADM.INVALID_PARAMETER_NUM,
           DBMS_LOGREP_UTIL.GET_PROC_USE_CONTEXT, 'SYNCHRONIZATION'); 
  END IF;

  DBMS_UTILITY.CANONICALIZE(CONTAINER, CANON_CONTAINER, 30);   
  IF CANON_CONTAINER = 'CURRENT' 
  THEN
     DBMS_LOGREP_UTIL.WRITE_TRACE('prepare schema objects for current' ||
            ' container: ', DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
     DBMS_CAPTURE_ADM_INTERNAL.PREPARE_SCHEMA_INSTANTIATION(
        SCHEMA_NAME, SYS_CONTEXT('USERENV','_USER'), SUPP_LOG_LEVEL,
        SYNCHRONIZATION, CANON_CONTAINER);
  ELSE
     DBMS_CAPTURE_ADM_INTERNAL.PREPARE_SCHEMA_INST_CDB(
            SCHEMA_NAME, CANON_CONTAINER,
            SYS_CONTEXT('USERENV','_USER'), SUPP_LOG_LEVEL,
           SYNCHRONIZATION);
  END IF;

  COMMIT;
  END;

In this procedure, you will notice that the input is the three parameters passed from the PREPARE_SCHEMA_INSTANTIATION procedure. The procedure then proceeds to check the supplemental logging level and attempts to sync the tables using the DBMS_XSTREAM_GG_ADM package. After the tables have been synced, the procedure calls another package and procedure (DBMS_CAPTURE_ADM_INTERNAL.PREPARE_SCHEMA_INSTANTIATION) to prepare the tables with the required supplemental logging level.

When looking at this procedure, it takes in five different parameters, two of which have defaults. After the procedure sets up tracing, it checks to see if the calling procedure is specifying GoldenGate. Once everything is confirmed and synchronization is complete, the procedure grabs the CURRENT_SCN, waits for any in-flight transactions, and prepares the schema tables before exiting the package.

PROCEDURE PREPARE_SCHEMA_INSTANTIATION(
SCHEMA_NAME               IN VARCHAR2,
CANON_INVOKER             IN VARCHAR2,
SUPP_LOG_LEVEL            IN BINARY_INTEGER,
SYNCHRONIZATION           IN VARCHAR2 DEFAULT 'LOCK',
CONTAINER                 IN VARCHAR2 DEFAULT 'CURRENT')
IS
CANON_SCHEMA  DBMS_ID;
IDX           NUMBER :=0;
NAME_ARRAY    DBMS_UTILITY.NAME_ARRAY;
OWNER_ARRAY   DBMS_UTILITY.NAME_ARRAY;
CURRENT_SCN   NUMBER;
WFIT          BOOLEAN := FALSE;
RUN_TKLRWT1   BOOLEAN := FALSE;
SUPPLOG       BINARY_INTEGER := SUPP_LOG_LEVEL;
TEST_EVENT_LVL   BINARY_INTEGER := 0;
BEGIN
 DBMS_LOGREP_UTIL.WRITE_TRACE(
 'dbms_capture_adm_internal.prepare_schema_instantiation()+', DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
 DBMS_LOGREP_UTIL.WRITE_TRACE('schema_name='||SCHEMA_NAME,   DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
  DBMS_LOGREP_UTIL.WRITE_TRACE('supp_log_level='||SUPP_LOG_LEVEL, DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH); 
 DBMS_UTILITY.CANONICALIZE(SCHEMA_NAME, CANON_SCHEMA, 30);  

 IF ((SUPP_LOG_LEVEL = DBMS_STREAMS_DECL.SUPPLOG_ALLKEYS_ON) OR (SUPP_LOG_LEVEL = DBMS_STREAMS_DECL.SUPPLOG_ALLKEYS_OFF)) 
 THEN
    STORE_PREPARE_INFO(CANON_SCHEMA, SUPP_LOG_LEVEL);
    RETURN;
 END IF;
    DBMS_LOGREP_UTIL.WRITE_TRACE('dbms_capture_adm_internal.prepare_schema_instantiation()+ before read_ev', DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
    DBMS_SYSTEM.READ_EV(DBMS_LOGREP_UTIL.EXT_TRACE_EV, TEST_EVENT_LVL);

    IF (DBMS_XSTREAM_GG_ADM.IS_GOLDENGATE AND TEST_EVENT_LVL = 0) 
    THEN
     SUPPLOG := 0;
    END IF;
   DBMS_LOGREP_UTIL.WRITE_TRACE('dbms_capture_adm_internal.prepare_schema_instantiation()+ before prepare_ddl',DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
    PREPARE_DDL(CANON_SCHEMA, SUPPLOG);
    DBMS_LOGREP_UTIL.WRITE_TRACE('dbms_capture_adm_internal.prepare_schema_instantiation()+ before prepare_schema_tables',DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
	RUN_TKLRWT1 := RUN_WFIT_TEST;

    IF (UPPER(SYNCHRONIZATION) = 'NONE') 
    THEN
		PREPARE_SCHEMA_TABLES(CANON_SCHEMA, FALSE, CONTAINER);
    ELSIF (UPPER(SYNCHRONIZATION) = 'LOCK') 
    THEN
		PREPARE_SCHEMA_TABLES(CANON_SCHEMA, TRUE, CONTAINER);
    ELSIF (UPPER(SYNCHRONIZATION) = 'WAIT') 
    THEN
		WFIT := TRUE;
 		CURRENT_SCN  := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;
        DBMS_CAPTURE_ADM_INTERNAL.WAIT_FOR_INFLIGHT_TXN(CURRENT_SCN, 'Prepare_schema_instantiation');
        PREPARE_SCHEMA_TABLES(CANON_SCHEMA, FALSE, CONTAINER);
    END IF;
        DONE_WFIT_TEST(WFIT, RUN_TKLRWT1);
    END;

Up to this point, the last three packages have been preparing the tables. Seems like a lot of preparing just to get tables set up for supplemental logging. The last call in the previous package invoked the procedure that prepares the tables. When I look at this procedure, I see that there is a call to a PREPARE_TABLE_INST procedure (getting a bit lazy on copying the procedure; I don’t want to include too much, since this is Oracle’s code).

FOR I IN 1..IDX LOOP
BEGIN
   DBMS_LOGREP_UTIL.WRITE_TRACE('prepare_schema_tables():' || CANON_OWNER_ARRAY(I) || '.' || CANON_NAME_ARRAY(I), DBMS_LOGREP_UTIL.TRACE_FLAG_PUBLISH);
   PREPARE_TABLE_INST(CANON_OWNER_ARRAY(I), CANON_NAME_ARRAY(I),DBMS_STREAMS_DECL.SUPPLOG_NONE, LOCKING, CONTAINER);
EXCEPTION WHEN OTHERS THEN
   DBMS_LOGREP_UTIL.DUMP_TRACE('error:dbms_capture_adm_internal.prepare_table_inst('||'owner='||CANON_OWNER_ARRAY(I)||' name='||CANON_NAME_ARRAY(I));
END;

Now, when I go and look at the PREPARE_TABLE_INST procedure, I see that it is calling an external C function named “knlcpreptabinst” (via the DBMS_LOGREP_LIB library) to set the supplemental logging on the tables associated with the schema.

PROCEDURE PREPARE_TABLE_INST(
CANON_OWNER                 IN VARCHAR2,
CANON_TABLE_NAME            IN VARCHAR2,
SUPPLEMENTAL_LOGGING_LEVEL  IN BINARY_INTEGER,
LOCKING                     IN BOOLEAN,
CONTAINER                   IN VARCHAR2) 
IS
EXTERNAL
 NAME "knlcpreptabinst"
 LIBRARY DBMS_LOGREP_LIB
 WITH CONTEXT
 PARAMETERS(CONTEXT,    
      CANON_OWNER       OCISTRING, CANON_OWNER       INDICATOR SB2,
      CANON_TABLE_NAME  OCISTRING, CANON_TABLE_NAME  INDICATOR SB2,
      SUPPLEMENTAL_LOGGING_LEVEL UB4,
      LOCKING           UB2,       LOCKING           INDICATOR SB2,
      CONTAINER         OCISTRING, CONTAINER         INDICATOR SB2)
LANGUAGE C;

Without drilling down into the C function, I cannot see exactly how the supplemental logging is added or where exactly it is stored in the database. I can only assume (we know what that means … lol) that it is in the data dictionary somewhere.

Looking for tables or views that may shed some light on this, I found LOGMNR$SCHEMA_ALLKEY_SUPLOG, which shows the schema, whether all keys are in supplemental log mode, and whether non-validated primary keys are allowed. The following query is what I used to extract information about the SOE schema:

select * from LOGMNR$SCHEMA_ALLKEY_SUPLOG
where allkey_suplog = 'YES';


Output is as follows:

SCHEMA_NAME     ALL ALL
--------------- --- ---
SOE             YES NO

In drilling down further, a friend of mine pointed out a function to me (follow him on twitter -> @resetlogs). You can get down to the table level on supplemental logging when using ADD SCHEMATRANDATA. There is a LogMiner function that has to be called when using SQL to pull the correct information. This function is similarly named to the table I referenced above … LOGMNR$ALWAYS_SUPLOG_COLUMNS.

This function takes two parameters: the first is the schema that holds the objects, and the second is the table name. In the following example, I can see that the ORDERS table of the SOE schema has supplemental logging added.

select * from table(logmnr$always_suplog_columns('SOE','ORDERS'));

OWNER           TABLE_NAME                     COLUMN_NAME                        INTCOL     SEGCOL    USERCOL
--------------- ------------------------------ ------------------------------ ---------- ---------- ----------
SOE             ORDERS                         ORDER_ID                                1          1          1
SOE             ORDERS                         ORDER_TOTAL                             6          6          6
SOE             ORDERS                         COST_OF_DELIVERY                       11         11         11
SOE             ORDERS                         DELIVERY_ADDRESS_ID                    13         13         13
SOE             ORDERS                         ORDER_DATE                              2          2          2
SOE             ORDERS                         CUSTOMER_CLASS                         14         14         14
SOE             ORDERS                         CUSTOMER_ID                             4          4          4
SOE             ORDERS                         ORDER_STATUS                            5          5          5
SOE             ORDERS                         PROMOTION_ID                            8          8          8
SOE             ORDERS                         ORDER_MODE                              3          3          3
SOE             ORDERS                         SALES_REP_ID                            7          7          7
SOE             ORDERS                         WAREHOUSE_ID                            9          9          9
SOE             ORDERS                         DELIVERY_TYPE                          10         10         10
SOE             ORDERS                         WAIT_TILL_ALL_AVAILABLE                12         12         12
SOE             ORDERS                         CARD_ID                                15         15         15
SOE             ORDERS                         INVOICE_ADDRESS_ID                     16         16         16


I know this has been a long post, but hopefully I’ve been able to show how the ADD SCHEMATRANDATA command within GGSCI works and where you can see if supplemental logging is turned on for the selected schema. As I was digging to the root of this issue, I found it interesting that so many packages are involved in setting the supplemental logging on a schema and its tables, while identifying whether it is enabled is not as easy as it is with ADD TRANDATA, where the tables are easily identified and you can quickly see that the supplemental log groups have been added.

As Oracle GoldenGate for Oracle moves to a more integrated approach to replication, I think more items will be tied to the LogMiner and Streams architecture.

Enjoy!!

@dbasolved
http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

rlwrap with #GoldenGate GGSCI

Tue, 2016-08-16 10:45

Since I published posts on how to retrieve command history within GGSCI using the Oracle-provided commands “history” and “fc”, I have been pinged a couple of times by friends asking about “rlwrap” usage with GoldenGate. The answer is a simple yes; rlwrap can be used with Oracle GoldenGate.

What exactly is “rlwrap”?

According to the readme file at http://utopia.knoware.nl/~hlub/uck/rlwrap/#rlwrap, rlwrap is a ‘readline wrapper’. Basically, it allows for the editing of keyboard input for any command.

How to install?

There are two ways to install “rlwrap”. The first is manually, which requires you to run the configure and make commands; however, the easiest way I’ve found is by using a yum repository. Tim Hall (a.k.a. Oracle-Base) documents this process quite effortlessly over at his blog. Thanks Tim!

Usage?

After you have “rlwrap” installed, you simply have to create an alias in your profile to use it. You can use it from the command line as well; however, the alias approach ensures that it is executed every time you run ggsci. The example below is what I have in my .bash_profile for the oracle user.

alias ggsci='rlwrap $OGG_HOME/ggsci'

The next time I log in to use ggsci, rlwrap will automatically be run, and I will be able to scroll through the commands I typed while in ggsci. Another nice thing about “rlwrap” is that when I log out of ggsci and back in, my command history is still available.
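
If you want control over where that history is kept, rlwrap can be pointed at an explicit history file. As a sketch using rlwrap’s -H/--history-filename option (the file name here is just an example):

alias ggsci='rlwrap -H ~/.ggsci_history $OGG_HOME/ggsci'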

Although the “history” and “fc” commands are handy, it would be nice to see Oracle include “rlwrap” in the core Oracle GoldenGate product …

Enjoy!

@dbasolved
http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Edit #GoldenGate commands from GGSCI

Mon, 2016-08-15 10:45

Ever get tired of typing the same command over and over again in Oracle GoldenGate GGSCI? Well, Oracle has provided a history recall function in GGSCI; I talked about this in an earlier post, which can be found here.

In this post, let’s take the command recall and look at how these commands can be edited. In Oracle GoldenGate 12c, Oracle has provided a GGSCI command called “FC”. The “FC” command is used to display or edit a previously issued GGSCI command and then execute it again. This command leverages the memory buffer the same way as the history command does within GGSCI.

Now the syntax for using this command is as follows:

FC [ n | -n | string ]

n – Displays the command from the line number provided

-n – Displays the command that was issued n lines before the current line

string – Displays the last command that starts with the specified string

Let’s take a look at an example of using “FC” within GGSCI.

In the output below, I have issued a series of commands and then listed them using the history function.

GGSCI (fred.acme.com) 6> history

GGSCI Command History

    1: info all
    2: stats extract egg12c
    3: info mgr
    4: info pgg12c
    5: info all
    6: history

Using the “FC” command, I can edit and execute the edited command. If you take a look at command 4 (info pgg12c), I want to replace “pgg12c” with “egg12c”. This is done by using the “r” edit command as follows.

GGSCI 7> fc 4
GGSCI 7> info pgg12c
GGSCI 7..     regg12c
GGSCI 7> info egg12c
GGSCI 7..

EXTRACT    EGG12C    Last Started 2016-08-12 17:06   Status RUNNING
Description          'Integrated Extract'
Checkpoint Lag       00:00:08 (updated 00:00:05 ago)
Process ID           24082
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2016-08-15 10:14:45
                     SCN 0.51017268 (51017268)

By executing “fc 4” (fc n), I’m telling GGSCI to pull the fourth command from the memory buffer for editing. Then I use the “r” command to provide the text that I want to replace in the command. Notice the position of the “r”; it is lined up directly under the string I want to replace, and is followed by the replacement string. Before execution, GGSCI provides you with an output of the changed command. After verifying the command, it can be executed, providing the output for the correct process.

Note: For more editing options with FC, you can find these here.

If you want to know what command was executed n commands before the previous command, you can use the -n option of “FC”. This makes the “FC” command act like the history command, but it only displays the command at the -n position. This can be seen if you run a history command prior to an fc -n command.

GGSCI (fred.acme.com) 11> history

GGSCI Command History

    2: stats extract egg12c
    3: info mgr
    4: info pgg12c
    5: info all
    6: history
    7: info egg12c
    8: info mgr
    9: history
   10: info egg12c
   11: history

GGSCI (fred.acme.com) 12> fc -4
GGSCI (fred.acme.com) 12> info mgr
GGSCI (fred.acme.com) 12..

Manager is running (IP port fred.acme.com.15000, Process ID 12377).

You will notice that I have eleven commands in the command history. By using “fc -4”, I’m able to retrieve the info mgr command and then execute it simply by hitting return. Before hitting return, I could still edit the command, as was shown previously.

Lastly, you can retrieve a previous command by searching for a specific string. The string has to be at the beginning of the command. In the example below, I’m searching for the last stats command that was run.

GGSCI (fred.acme.com) 16> history

GGSCI Command History

    7: info egg12c
    8: info mgr
    9: history
   10: info egg12c
   11: history
   12: info mgr
   13: info all
   14: history
   15: stats extract egg12c
   16: history


GGSCI (fred.acme.com) 17> fc sta
GGSCI (fred.acme.com) 17> stats extract egg12c
GGSCI (fred.acme.com) 17..

Sending STATS request to EXTRACT EGG12C ...

No active extraction maps
DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
	Operations                		           2.00
	Mapped operations         		           2.00
	Unmapped operations         		           0.00
	Other operations         		           0.00
	Excluded operations         		           0.00

As you can see, there is only one stats command in the command history. The “FC” command pulls this command and provides an opportunity to edit it. After hitting return, the command is executed.

Now that you know how to recall and edit Oracle GoldenGate commands from the GGSCI command prompt, hopefully, this will make your life with Oracle GoldenGate a bit easier.

Enjoy!!

@dbasolved
http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

To TRANDATA or To SCHEMATRANDATA? … That is the #GoldenGate questions of the day!

Fri, 2016-08-12 19:30

If you are familiar with Oracle GoldenGate, you know that on the source side of the equation you have to enable supplemental logging, and sometimes force logging, on the database. I traditionally do both, just to make sure that I capture as much as I can into the redo stream from the transactions on the database. For Oracle GoldenGate purposes, this is not the only thing you need to turn on to ensure all needed information is captured to the trail files.

There are two Oracle GoldenGate GGSCI commands that can be run to enable supplemental logging at the schema or table level: ADD TRANDATA and ADD SCHEMATRANDATA. What is the difference between the two, you may ask?

ADD TRANDATA – is used to enable supplemental logging at the table level
ADD SCHEMATRANDATA – is used to enable supplemental logging at the schema level

That is a very high-level view of the concept. What is the difference between the two trandata approaches, really?

ADD TRANDATA:

The ADD TRANDATA command is used to enable Oracle GoldenGate to acquire the transaction information that it needs from the transaction records. This version of the command can be used with the following databases:

  • DB2 for i Database
  • DB2 LUW Database
  • DB2 z/OS Database
  • Oracle Database
  • MS SQL Server
  • Sybase Database

For an Oracle Database, ADD TRANDATA enables unconditional logging of the primary key and conditional supplemental logging of all unique and foreign keys of the specified table. Additionally, you can use ADD TRANDATA with the COLS option to log any non-key columns that can be used with FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters, as illustrated below.
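
As a quick illustration of the COLS option (the table and column here are hypothetical examples from the SOE schema), the command would look something like this:

GGSCI> add trandata soe.orders, cols (order_mode)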

An example of adding trandata to a schema would be:

GGSCI> dblogin useridalias gate
GGSCI> add trandata soe.*


Once trandata has been added to the schema/tables, you can verify its existence from GGSCI using the INFO TRANDATA command, as demonstrated in the command set below.

GGSCI> dblogin useridalias gate
GGSCI> info trandata soe.addresses
2016-08-12 15:07:23  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema SOE.
2016-08-12 15:07:23  INFO    OGG-01980  Schema level supplemental logging is enabled on schema SOE for all scheduling columns.
Logging of supplemental redo log data is enabled for table SOE.ADDRESSES.
Columns supplementally logged for table SOE.ADDRESSES: ADDRESS_ID, COUNTRY, COUNTY, CUSTOMER_ID, DATE_CREATED, HOUSE_NO_OR_NAME, POST_CODE, STREET_NAME, TOWN, ZIP_CODE.

Now that ADD TRANDATA has been run, what exactly does it do to the database it is run against? For an Oracle Database, ADD TRANDATA adds a Supplemental Log Group (SLG) to the table. This can be seen in the DBA_LOG_GROUPS view under SYS. The SLGs that are created are all labeled with a prefix of “GGS”. The following output shows what this looks like after running it for a whole schema.

select owner, log_group_name, table_name, log_group_type, always, generated 
from dba_log_groups
where owner = 'SOE'
and log_group_name like 'GGS%';

OWNER           LOG_GROUP_NAME       TABLE_NAME                     LOG_GROUP_TYPE                 ALWAYS                         GENERATED                    
--------------- -------------------- ------------------------------ ------------------------------ ------------------------------ ------------------------------
SOE             GGS_105669           CUSTOMERS                      USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105702           ADDRESSES                      USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105735           CARD_DETAILS                   USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105768           WAREHOUSES                     USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105769           ORDER_ITEMS                    USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105802           ORDERS                         USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105835           INVENTORIES                    USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105836           PRODUCT_INFORMATION            USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105837           LOGON                          USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105870           PRODUCT_DESCRIPTIONS           USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105871           ORDERENTRY_METADATA            USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_254721           TEST1                          USER LOG GROUP                 ALWAYS                         USER NAME

Now, there are some who will argue that the same effect can be achieved by just adding an SLG to a table manually. Although this is true, Oracle GoldenGate uses the GGS_ prefix to keep track of the tables that are in the replication process. It also makes cleanup easier when you issue DELETE TRANDATA, which will remove all the associated SLG items from the tables.

The ADD TRANDATA approach should be used with 11g or older versions of Oracle GoldenGate. As you move towards newer versions of Oracle GoldenGate, Oracle is pushing everyone to pick up and use the ADD SCHEMATRANDATA method. So let’s take a look at that now.

ADD SCHEMATRANDATA:

ADD SCHEMATRANDATA is used on all the current and future tables in a given schema to automatically log a superset of the available keys that Oracle GoldenGate needs for row identification. This version of trandata can be used with both the integrated and classic capture processes.

There are four key reasons why you should use ADD SCHEMATRANDATA:

  • Enables supplemental logging for new tables created with a CREATE TABLE DDL command.
  • Updates supplemental logging for tables affected by an ALTER TABLE DDL command that adds or drops columns
  • Updates supplemental logging for tables affected by RENAME TABLE command
  • Updates supplemental logging for tables affected by adding or dropping of unique or primary key constraints


Although ADD SCHEMATRANDATA can be used with both integrated and classic capture processes, it is mostly geared towards the integrated process. There are three primary reasons to use ADD SCHEMATRANDATA with integrated capture:

  • Ensures that the correct key is logged by logging all the keys.
  • Provides options that enable the logging of the primary, unique, and foreign keys to support the computation of dependencies among the tables being processed by the integrated replicats (think apply servers).
  • Ensures the appropriate key values are logged in the redo to allow DML to be mapped to objects that have DDL issued against them.

Earlier in this post, I mentioned that I often enable “force logging” on the database when I enable minimal supplemental logging. Force logging is encouraged by Oracle, especially when using ADD SCHEMATRANDATA.

Issuing ADD SCHEMATRANDATA against an Oracle database works much the same way as ADD TRANDATA, with the difference that you don’t have to provide any wildcards. The example below shows how this is done:

GGSCI> dblogin useridalias ggate
GGSCI> add schematrandata soe
2016-08-12 15:47:40  INFO    OGG-01788  SCHEMATRANDATA has been added on schema soe.
2016-08-12 15:47:40  INFO    OGG-01976  SCHEMATRANDATA for scheduling columns has been added on schema soe.

After running ADD SCHEMATRANDATA, you can perform an INFO SCHEMATRANDATA on the schema to see what has been modified.

GGSCI (fred.acme.com as ggate@src12c) 9> info schematrandata soe

2016-08-12 15:51:52  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema SOE.

2016-08-12 15:51:52  INFO    OGG-01980  Schema level supplemental logging is enabled on schema SOE for all scheduling columns.

2016-08-12 15:51:52  INFO    OGG-10462  Schema SOE have 12 prepared tables for instantiation.

I dug around in the database to see whether ADD SCHEMATRANDATA does the same as ADD TRANDATA with SLGs; well, it doesn’t. ADD SCHEMATRANDATA does not create any SLGs. The only place I have found any record of supplemental logging being turned on with ADD SCHEMATRANDATA is the V_$GOLDENGATE_CAPABILITIES view. Here you can see that supplemental logging has been enabled, the number of times it has been acted upon, and when it was last executed.

NAME                        COUNT TO_CHAR(LAST_USED     CON_ID
---------------------- ---------- ----------------- ----------
DBENCRYPTION                    0 12-JUN-2016 21:20          0
DBLOGREADER                     0 12-JUN-2016 21:20          0
TRIGGERSUPPRESSION              0 12-JUN-2016 21:20          0
TRANSIENTDUPLICATE              0 12-JUN-2016 21:20          0
DDLTRIGGEROPTIMIZATION          0 12-JUN-2016 21:20          0
GGSESSION                       0 12-JUN-2016 21:20          0
DELETECASCADEHINT               0 12-JUN-2016 21:20          0
SUPPLEMENTALLOG                 5 12-AUG-2016 16:02          0

Now, since the integrated pieces of Oracle GoldenGate are closely related to Oracle Streams, there may be a table or view related to Streams that holds this information. Once I find it, I’ll provide an update to this post.
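
One candidate worth a look (an assumption on my part, based on the Streams heritage) is the DBA_CAPTURE_PREPARED_SCHEMAS view, which records schemas prepared for instantiation along with their supplemental logging flags:

select schema_name, supplemental_log_data_pk, supplemental_log_data_ui,
       supplemental_log_data_fk, supplemental_log_data_all
from dba_capture_prepared_schemas
where schema_name = 'SOE';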

In the meantime, I hope this post has provided some insight into the differences between ADD TRANDATA and ADD SCHEMATRANDATA.

If you are moving to or using the integrated products of Oracle GoldenGate, then ADD SCHEMATRANDATA is the method that you should be using.

Enjoy!!!

@dbasolved
http://about.me/dbasolved



Filed under: Golden Gate
Categories: DBA Blogs

#GoldenGate #Cloud Service (#GGCS) … what to expect?

Mon, 2016-05-02 07:30

As I sit here working on some GoldenGate migrations to AWS for a client, I’ve been thinking about the glimpse of GoldenGate Cloud Service (GGCS) that was provided to me earlier this week. That glimpse has helped me understand what GGCS is and how it is going to work within the Oracle Cloud space. Ever since this service was announced back at Oracle Open World 2015, I’ve wanted to get my hands on this cloud product from Oracle just to better understand it. Hopefully, what I’m about to share with you will provide some insight into what to expect.

First, you will need a cloud account. If you do not have one, visit http://cloud.oracle.com and sign up. This will typically be the same account you use to log in to My Oracle Support (MOS).

Once you have an account and are in the cloud interface, subscribe to some services. You will need a Database Cloud Service or a Compute Cloud Service; these services will be the endpoints that GGCS points to. As part of setting up the compute node, you will need to set up SSH access with a public/private key pair. Once you create the GGCS instance, the same public/private key should be used to keep everything simple.

Once GGCS is made available for trial (currently it is only available through the sales team), many of us will have the opportunity to play with this. The following screen captures and comments were taken from the interface I had access to while discussing GGCS with Oracle Product Management.

Like any of the other cloud services from Oracle, once you have access to GGCS it will appear in your dashboard as an available cloud service. In the figure below, GGCS is listed at the top of the services that I had access to. You will notice over on the right there is a link called “service console”.

When you click on the service console link, you are taken to the console that is specific to GGCS. On the left-hand side of the console, you will see three primary areas. The “overview” area is the important one; it provides you with all the information needed about your GGCS environment. You will see the host and port number, what version of GGCS you are running, and the status of your environment.

With the environment up and running, you will want to create a new GGCS instance. This instance is created under your cloud service console. On this screen you are given information about how many instances you have running, along with the number of OCPUs, memory, and storage for the configuration and the public IP address. Notice the button to the right, just below Public IPs; this button allows you to create a new GGCS instance. In the figure below, the instance has already been created.

Drilling down into the instance, you are taken to a page that illustrates your application nodes for GGCS. Notice that the GGCS instance actually created a compute node VM to run GoldenGate from.

With everything configured from the Oracle Cloud interface, you can now access the cloud server using the details provided (I do not have current screen shots of this). Once you access the cloud server, you will find that Oracle GoldenGate has been configured for you, along with a TNS entry that points to a “target” location. These items are standard template items from which to build your GoldenGate environment. The interesting thing about this configuration is that Oracle is providing a single virtual machine (compute node) that handles all the apply processing to a database (compute node).

With the GGCS service running, you are then ready to build out your GoldenGate environment.

Like many other GoldenGate architectures, you build out the source side as you would anywhere else: you install the GoldenGate software and build an extract, trail files, and a data pump. The data pump process is then pointed to the GoldenGate Cloud Service (GGCS) instance instead of the target instance, and the local trail files are shipped to the GGCS machine. Once on the GGCS instance, the replicat needs to be configured. Part of the replicat configuration at this point is updating the tnsnames.ora file to point to the correct “target” compute node/database instance. The picture below illustrates this concept.
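
To make that concrete, the data pump parameter file on the source would look something like the sketch below (the group name, trail names, and address are placeholders of my own, not taken from the GGCS template):

EXTRACT pggcs
RMTHOST <ggcs_public_ip>, MGRPORT 7809
RMTTRAIL ./dirdat/rt
PASSTHRU
TABLE soe.*;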

You will notice that GGCS is set up to be an intermediary point in the cloud. This allows you to be flexible with your GoldenGate architecture in the cloud. From a single GGCS service you can run multiple replicats that point to multiple different cloud compute nodes, turning your GGCS into a hub that can send data to multiple cloud resources.

In talking with the Oracle Product team about GGCS, the only downside right now is that it cannot be used for bi-directional setups or for pulling data from the cloud. In essence, this is a uni-directional setup that can help you move from on-premise to cloud with minimal configuration needed.

Well, this is my take on GGCS as of right now. Once GGCS trials are available, I’ll try to update this post or add more posts on this topic. Until then, I hope you have gained a bit of information on this topic; I’m looking forward to using GGCS.

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: Cloud, Golden Gate
Categories: DBA Blogs

Veridata and boot.properties file

Mon, 2016-04-04 12:30

I’m spending some time building out an Oracle WebLogic/Veridata system at home, mostly for testing and general understanding purposes. One of the things I’ve noticed is that the WebLogic server requires a username and password before it will start. After providing the name and password, the terminal window has to stay open, because closing it will kill the running WebLogic server. This leads to two interesting questions with simple resolutions.

1. How can I automatically pass the username and password to WebLogic?
2. How do I keep WebLogic running after closing my terminal window?

The answers to the questions are quite simple. Let’s take a look at the first question.

How can I automatically pass the username and password to WebLogic?

This is done by using a “boot.properties” file. This file needs to be placed in the security directory under the server that you want to log in automatically. In my case, I had to create a security directory and the boot.properties file under the AdminServer directory. The steps below are what I used:

$ cd /opt/app/oracle/product/12.2.0.1/middleware/user_projects/domains/base_domain/
$ cd ./servers
$ cd ./AdminServer
$ mkdir security
$ touch boot.properties
$ vi boot.properties


Once the boot.properties file is open for editing, simply put in the username and password you want the WebLogic server to use. In this case, I’m using a standard password setup.
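
The file itself only needs two lines; for example (the values shown are placeholders for your own WebLogic credentials):

username=weblogic
password=<your_password>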

After creating the boot.properties file, you can go back and start the WebLogic server; you should not be prompted for a userid and password. You can see this during the startup of the WebLogic server, as seen in the image below. Additionally, if you go back and look at the boot.properties file, the username and password will have been encrypted as well.

This same approach can be used with the Veridata components of the configuration. You just have to remember to create a security directory under the Veridata server directory, as you did for the WebLogic server.

Now for the second question:

How do I keep WebLogic running after closing my terminal window?

With the boot.properties file in place, you can start the WebLogic/Veridata server(s) by using the “nohup” option. “Nohup” will allow you to keep the server processes running in the background.
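
As a sketch (the domain path matches my environment above; adjust it to yours), starting the admin server in the background looks like this:

$ cd /opt/app/oracle/product/12.2.0.1/middleware/user_projects/domains/base_domain
$ nohup ./startWebLogic.sh > weblogic.out 2>&1 &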

Using these two options, you can very easily automate the starting/restarting of the WebLogic/Veridata server(s).

Enjoy!

about.me:http://about.me/dbasolved


Filed under: Golden Gate, WebLogic
Categories: DBA Blogs


#EMd360 … OEM health checks made easy

Mon, 2016-03-14 12:01

Oracle Enterprise Manager 12c is a great tool! Now that 13c is out, it is getting even better. This post, however, is not really about Oracle Enterprise Manager, but rather about a quick and simple health check tool that I’ve put together. With the help of some really cool co-workers (Carlos Sierra and Mauro Pagano), I’ve put together a small diagnostic tool called EMd360.

EMd360 stands for Enterprise Manager d360. The concept behind this tool is just like the other tools that have been released with the 360 concept (edb360 and sqld360): to provide a quick and easy approach to checking an environment. As with edb360 and sqld360, EMd360 is a completely free tool for anyone to use.

So, why is there a need for EMd360? It is quite simple: there are so many things that go into OEM, and you get so much out of OEM, that it is overwhelming. As a consultant, I’ve been asked to review a lot of OEM architectures and the associated performance. A lot of this information is in the OMR, and often I’m using other tools like REPVFY and OMSVFY, plus a handful of scripts. I’ve decided to make my life (and hopefully yours) a bit easier by building EMd360.

The first (base) release of EMd360 is now live on GitHub (https://github.com/dbasolved/EMd360.git). Go and get it! Test it out!

Download

If you are interested in trying out EMd360, you can download it from GitHub.

Instructions

  • Download EMd360 from GitHub as a zip file
  • Unzip EMd360-master.zip on the OMR server and navigate to the directory where you unzipped it
  • Connect to the OMR using SQL*Plus and execute @emd360.sql

Options

The @emd360.sql script takes two variables. You will be prompted for them if they are not passed on the SQL command line.

Variable 1 – Server name of the Oracle Management Service (without domain names)
Variable 2 – Oracle Management Repository name (database SID)

Example:

$ sqlplus / as sysdba
SQL> @emd360 pebble oemrep

Let me know your thoughts and whether there is something you would like to see in it. Every environment is different, and there may be something you are looking for that is not provided. Let me know via email or a blog comment, and I’ll try to get it added in the next release.

Enjoy!!!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs


Renaming #EM12c / #EM13c Targets

Tue, 2016-03-08 09:20

Oracle Enterprise Manager is a complex piece of software that many organizations are running now. Some organizations set out with a formalized naming standard; some do not. Those who do not often end up identifying a naming standard later down the road and then making requests to change the names of the targets being monitored. There are two ways to do this:

1. Delete and rediscover the target and rename at time of discovery
2. Change the name from the backend using EMCLI

The first way is painful, to say the least, especially when you have thousands upon thousands of targets. So this post is going to focus on how to change the name from the backend using EMCLI, along with a few other little tips.

EMCLI is a nice tool to use. It provides two options for renaming targets: rename_target and modify_target. The rename_target option is used to rename the target on the repository side, while the modify_target option is used to rename it at the agent level. Both options are required when renaming a target because the target needs to stay in sync to retain its history.

To make this process a bit more automated, I’ve created a Perl script that does the renaming for me based on information in a space-delimited flat file. The script is as follows:

#!/usr/bin/perl -w
use strict;
use warnings;

##########################
#Notes
##########################
#
#To help with renaming the entity_name in the repository database,
#comment out block of code in SYSMAN.EM_TARGET from line 8028 thru 8035
#
##########################
#GLOBAL Variables
##########################
my $oem_home_bin = "";	# set this to the directory containing the emcli binary
my $time_now = localtime();
my ($variable, $sysman_pwd) = @ARGV;
my $count = 0;
my @columns;

##########################
#Main Program
##########################

open (FILE, "< $variable") or die "$!\n";
@columns = ("", 0, 0, 0);
print "\nStart time: ".$time_now."\n";
emcli_login();
while(<FILE>)
{
	my $line = $_;
	@columns = split(' ',$line, 4);
	rename_target(@columns);
	$count = $count+1;
} #end file read
close (FILE) or die "$!\n";
my $end_time=localtime();
print "\nNumber of changes: ".$count;
print "\nEnd time: ".$end_time."\n";
emcli_logout();

##########################
#Sub-Programs
##########################
sub emcli_login{
	print "\n";
	system($oem_home_bin.'/emcli login -username=sysman -password='.$sysman_pwd);
	system($oem_home_bin.'/emcli sync');
	print "\n";
}

sub emcli_logout{
	print "\n";
	system($oem_home_bin.'/emcli logout');
	print "\n";
}

sub rename_target{
	#Parameters
	my ($target_name, $target_type, $server_name) = @_;	# use the passed-in parameters
	my $mod_target;
	my $new_name;
	my $cmd;
	my $cmd1;

	if ($target_type =~ /rac_database/)
	{
		chomp($target_name);
		chomp($server_name);
		$mod_target = $target_name;
		$target_name = substr($target_name, 0, -4);
		$new_name = $target_name."_".$server_name;
		#print $new_name;
		print "\n";
		$cmd = 'emcli modify_target -name="'.$mod_target.'" -type="'.$target_type.'" -display_name="'.$new_name.'" -on_agent';
		print $cmd."\n";
		#print "\n!!!!Executing on agent side!!!!\n";
		#system($oem_home_bin.'/'.$cmd);
		$cmd1 = 'emcli rename_target -target_type="'.$target_type.'" -target_name="'.$mod_target.'" -new_target_name="'.$new_name.'"';
		print $cmd1."\n";
		#print "\n!!!!Executing on repository side!!!!\n";
		#system($oem_home_bin.'/'.$cmd1);
	}
}
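
For context, a run of the script might look like the following. The file and script names here are hypothetical; the flat file carries the target name, target type, and server name, space-delimited, one target per line. Note that the substr($target_name, 0, -4) call in the script assumes the existing target name carries a four-character suffix to strip:

$ cat rac_targets.txt
ORCL_001 rac_database dbnode01
HRDB_001 rac_database dbnode02

$ perl rename_targets.pl rac_targets.txt <sysman_password>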

Notice that I’m doing the renaming on the agent side along with the repository side. Although this looks pretty simple and straightforward, I’ve found that the EMCLI command to rename (rename_target) is actually driven by the package EM_TARGET in the SYSMAN schema. There is a small set of code in this package that will prevent renaming of certain target types if they are currently being monitored and managed by OEM.

To identify what targets are managed, the following SQL can be used:

SELECT ENTITY_TYPE, ENTITY_NAME, DISPLAY_NAME FROM EM_MANAGEABLE_ENTITIES
WHERE ENTITY_TYPE='oracle_database' and promote_status=3 and manage_status='2';

The SQL above will provide you with the target type (entity_type), name (entity_name), and display name (display_name). These three columns are important because they directly correlate to what you will see in OEM. About 90% of the screens in OEM use the display_name column; the other 10% use entity_name. When you start renaming, you will want these names to match, but keep in mind they may not over the long haul.
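
Since the two names can drift apart over time, a quick way to spot targets where they differ is a small sketch against the same table (adjust the entity_type filter as needed):

SELECT ENTITY_TYPE, ENTITY_NAME, DISPLAY_NAME FROM EM_MANAGEABLE_ENTITIES
WHERE ENTITY_TYPE='oracle_database' and ENTITY_NAME != DISPLAY_NAME;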

Now, back to the code in the EM_TARGET package. When renaming targets, some targets will report back that the target cannot be changed. This is because the target is already being managed by OEM. In order to bypass this, you need to update the EM_TARGET package body and comment out a small set of code (make sure you back up the package before doing anything). The lines of code that need to be commented out are between 8028 and 8035.

-- we will implement rename of agent side targets when it is fully
     -- supported by agent
    --IF ( l_trec.manage_status = MANAGE_STATUS_MANAGED AND
    --     l_trec.emd_url IS NOT NULL) 
    --THEN
    --  raise_application_error(MGMT_GLOBAL.INVALID_PARAMS_ERR,
    --      MGMT_GLOBAL.INVALID_PARAMS_ERR||' Not allowed') ;
    --END IF ;

After commenting out these lines of code, recompile the package body. You will then be able to rename repository targets using EMCLI even though they are already managed. This will affect the entity_name column and update the other 10% of pages that are not immediately changed.
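
A minimal recompile, assuming you edited the package body in place while connected as SYSMAN, would be:

SQL> alter package sysman.em_target compile body;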
 
Another way to change the names of targets, once the EM_TARGET package has been updated, is to use SQL to make the changes directly:

exec sysman.em_target.rename_target(target_type, current_name, new_name, new_name);
commit;
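
For example, with hypothetical names plugged in (the fourth argument mirrors the new name, matching the call signature above):

exec sysman.em_target.rename_target('rac_database', 'ORCL_001', 'ORCL_dbnode01', 'ORCL_dbnode01');
commit;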

Once the commit has happened, the OEM pages can be refreshed and the new entity_name will be displayed.

Well, I hope this has provided you some explanation of how to rename existing targets within the EM framework.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: EMCLI, OEM
Categories: DBA Blogs


Recover from ORA-01172 & ORA-01151

Tue, 2016-01-19 07:48

This morning I was working on an Oracle Management Repository (OMR) for a test Enterprise Manager that is used by a few consultants I work with. When I logged into the box, I found that the OMR was down. When I went to start the database, I was greeted with ORA-01172 and ORA-01151.

These errors basically say:

ORA-01172 – recovery of thread % stuck at block % of file %
ORA-01151 – use media recovery to recover block, restore backup if needed

So how do I recover from this? The solution is simple; I just needed to perform the following steps:

1. Shutdown the database

SQL> shutdown immediate;
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

2. Mount the database

SQL> startup mount;
ORACLE instance started.
Total System Global Area 1.0033E+10 bytes
Fixed Size 2934696 bytes
Variable Size 1677723736 bytes
Database Buffers 8321499136 bytes
Redo Buffers 30617600 bytes
Database mounted.

3. Recover the database

SQL> recover database;
Media recovery complete.

4. Open the database with “alter database”

SQL> alter database open;
Database altered.

At this point, you should be able to access the database (OMR) and then have the EM environment up and running.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Database
Categories: DBA Blogs

Defining Resources in #GoldenGate Studio 12c

Fri, 2016-01-15 16:12

As I’ve been working with the beta of GoldenGate Studio 12c, I have tried to do simple things to see what will break and what is needed to make the process work. One of the things that I like about the studio is that, prior to creating any solutions, mappings, or projects, you can define what databases and GoldenGate instances will be used during the design process. What I want to show you in this blog post is how to create the database resource and the GoldenGate instance resource.

Creating a Resource:

To create a database resource, after opening GoldenGate Studio, go to the Resources tab. On this tab, you will see that it is empty. This is because no resources have been created yet.

In the left-hand corner of the Resources tab, you should see a folder with a small arrow next to it. When you click on the arrow, you are presented with a context menu that provides three options for resources (Databases, Global Mappings, and GoldenGate Instances).


Database Resources:

Now that you know how to select which resource you want to create, let’s create a database resource. To do this, select the database resource from the context menu. This will open a one-page wizard/dialog for you to fill out with the connection information for the database you want to use as a resource.

You will notice there are a few fields that need to be populated. Provide the relevant information needed to connect to the database. Once all the information has been provided, you can test the connection to validate that it works before clicking OK.

Once you click OK, the database resource will be added to the Resources tab under the Databases header.

Notice that the database is connected automatically once it is created. This allows you to immediately start using the resource for mappings and global mappings.

GoldenGate Instance Resources:

The GoldenGate Instance resources are a little more complex to configure. This is due to the requirement that the GoldenGate environment has to have the GoldenGate Monitoring Agent (aka JAgent, 12.1.3.0) running. This is the same JAgent that is used with the OEM plug-in. If you need more information on how to install and configure the JAgent, you can find it here.

Now, to create a new GoldenGate Instance resource, you follow the same approach as you would to create a database resource; instead of selecting Database, select GoldenGate Instance. This will open the GoldenGate Instance wizard/dialog for you to fill out. Provide all the information requested.

In setting up the GoldenGate Instance, there are a few things that you need to provide. In my opinion, the names of the items requested in the GoldenGate Information section are misleading. To make this a bit easier, I’m providing an explanation of what each field means.

GoldenGate Version: This is the version of GoldenGate running with the JAgent
GoldenGate Database Type: The database which GoldenGate is running against; there are multiple options here
GoldenGate Port: This is the port number of the manager process
Agent Username: This is the username that is defined in $GGAGENT_HOME/cfg/Config.properties
Agent Password: This is the password that is created and stored in the datastore for the JAgent
Agent Port: This is the JMX port number that is defined in $GGAGENT_HOME/cfg/Config.properties
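
For reference, the two agent values map to entries in $GGAGENT_HOME/cfg/Config.properties roughly along these lines. I’m recalling the property names from a 12.1.3 agent, so treat both names and values as assumptions and verify them against your own file:

jagent.username=oggjagentusr    # value for the Agent Username field (example user)
jagent.jmx.port=5555            # value for the Agent Port field (JMX port)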

After providing all the required information, you can perform a test connection. If the connection is successful, you can click “ok” to create the GoldenGate Instance resource. If the connection fails, you need to confirm all your settings.

Once all the resources you need for designing your GoldenGate architecture are created, you will see them under the Resources tab.

Now that you know how to create resources in GoldenGate Studio, it should be easier to design your replication flows.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Error when starting #GoldenGate Extract against MS SQL Server

Wed, 2016-01-06 16:10

If you work with Oracle GoldenGate long enough, you will eventually have to set it up against a Microsoft SQL Server. Being that GoldenGate is a heterogeneous application, this isn’t a problem; however, there are small differences. One such difference is how the extract/replicat will connect to the MS SQL Server database.

In an Oracle-to-Oracle configuration, you would just run the following from the command line:

GGSCI> dblogin useridalias [ alias name]
or
GGSCI> dblogin userid [ user name ] password [ password ]

In a MS SQL Server environment, you can still log in at the GGSCI command prompt with the following:

GGSCI> dblogin sourcedb [ dsn ]

You will notice the difference, which is the use of an ODBC DSN (data source name) entry. Although setting up the ODBC DSN is not the point of this post, just keep in mind that it is required when connecting to MS SQL Server with Oracle GoldenGate.

After setting up the ODBC DSN, you will need to add the following to the extract/replicat parameter file to enable the process to connect to the database.

sourcedb [ dsn ]

Note: I normally put my connection information in a macro to modularize my parameter files. Plus, it makes things easier if the connection information needs to change.

MACRO #logon_settings
BEGIN
sourcedb [ dsn ]
END;
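
As a sketch of how that macro gets used (file and process names here are hypothetical), the macro library is pulled into the parameter file with INCLUDE and then invoked by name:

-- ./dirprm/emss.prm
EXTRACT emss
INCLUDE ./dirprm/connect.mac
#logon_settings ()
EXTTRAIL ./dirdat/lt
TABLE dbo.orders;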

Now, when you go to start the extract/replicat, you may get the following error:

ERROR OGG-00551 Database operation failed: Couldn't connect to [ dsn ]. ODBC error: SQLSTATE 37000 native database error 4060. [Microsoft][SQL Server Native Client 11.0][SQL Server]Cannot open database "db_name" requested by the login. The login failed.

The error message is a little bit misleading. It tells you that the process cannot connect to the database, yet you were able to connect from the GGSCI command prompt with no issue. Why is this? The issue lies in the fact that the manager (MGR) process is running as a Windows service and does not have the correct permissions to access the database from that service.

In searching MOS for this error, I found Note ID 1633138.1. In this note, notice that this issue is known as of Oracle GoldenGate version 12.1.2.x.x. The note also provides a fix for the issue. In simple terms, since the manager process is running as a service, additional permissions have to be granted to manager.

To grant the SYSADMIN privilege for the manager process, follow the below sequence of steps (on Windows, after all); a T-SQL equivalent for step 1 follows the list:

1. Manager is installed as a service:
Open SQL Server Management Studio -> Security -> Logins -> select NT AUTHORITY\SYSTEM -> Right-click -> Properties -> Server Roles -> enable the sysadmin role

2. ggsci>stop mgr

3. ggsci>start mgr

4. ggsci>start extract <extract-name>
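
If you would rather script step 1 than click through SSMS, a T-SQL sketch (syntax available in SQL Server 2012 and later, which lines up with the Native Client 11.0 in the error above) would be:

-- grant sysadmin to the account the GoldenGate manager service runs as
ALTER SERVER ROLE sysadmin ADD MEMBER [NT AUTHORITY\SYSTEM];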

After granting the sysadmin role, the extract will start.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Exporting solutions from #GoldenGate Studio

Tue, 2016-01-05 23:21

In doing some testing with Oracle GoldenGate Studio, I decided to create a test solution that can be moved from studio to studio. In order to move the test solution, it has to be exported first. This post will be about how to export a solution so it can be archived or shipped to a co-worker.

To export a solution, you will start in the Projects window. After opening the project, you will see a little red puzzle piece under the “Solutions”.

By right-clicking on the solution name, you are presented with a context menu that provides a few options for dealing with solutions within Oracle GoldenGate Studio. The option you are interested in is at the very bottom of the context menu: the export option.

After selecting the “export” option, studio will open a small wizard that allows you to provide information and options for the solution that is to be exported. Everything on the export screen can be edited; however, the one thing that should not be changed is the “Advanced Options”. Provide a directory where the export should reside and, optionally, an encryption key.

When everything is filled out as you want, click “ok” and the export will run. The export should be pretty quick, and at the end you will receive a message saying that it completed.

Once the export is completed, you will find the XML file in the directory you specified in the export wizard. This XML file can be opened up with any text editor and reviewed. A sample of the XML content is provided below.

The beauty of this XML file is that everything created in studio is contained within it. This makes it very simple and easy to email to co-workers or others if they want to see the architecture being worked on, making collaboration on GoldenGate architectures easier.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Table Definitions in Oracle #GoldenGate #12c Trail Files

Sat, 2015-11-28 09:07

Oracle GoldenGate 12c (12.2.0.1.0) has changed the information that is stored in the trail files. All the standard information is still there. What Oracle changed has to do with the meta data that is used to define a table.

Note: If you want to understand how to use logdump and general trail information, look here.

Prior to 12.2.0.1.0 release of Oracle GoldenGate, if the column order of tables between source and target were different, you needed to generate a “definition” file using the “defgen” utility located in $OGG_HOME. This file allowed you to specify either a source or target definitions file which could be used to map the order of columns correctly. This was a nice tool when needed.

In 12.2.0.1.0, Oracle took this concept a little bit further. Instead of using a definitions file to do the mapping between source and target tables, Oracle has started to provide this information in the trail files. Review the image below, and you will see the table definition for SOE.ORDERS, which I use in my test environment.

Notice at the top, the general header information is still available for view. Directly under that, you will see a line that has the word “metadata” in it. This is the start of the “metadata” section. Below this is the name of the table and a series of numbered categories (keep this in mind). Then below this is the definition of the table with columns and the length of the record.

A second ago, I mentioned the “numbered categories”. The categories correspond to the information defined to the right of the columns for the table. When comparing the table/columns between the database and the trail file, a few things stand out.

In column 2 (Data Types), the following numbers correspond to this information:

134 = NUMBER
192 = TIMESTAMP (6) WITH LOCAL TIME ZONE
64 = VARCHAR2

In column 3 (External Length) is the size of the data type:

13 = NUMBER(12,0) + 1
29 = Length of TIMESTAMP (6) WITH LOCAL TIME ZONE
8 = VARCHAR2 length of 8
15 = VARCHAR2 length of 15
30 = VARCHAR2 length of 30

There is more information that stands out, but I’ll leave a little bit for you to decode. Below is the table structure that is currently mapped to the example given so far.
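
Since the screenshot referenced above doesn’t carry over into this text, here is an approximate reconstruction of the decoded columns as DDL. SOE is the Swingbench order-entry schema, so treat the column names as my guesses; the types and lengths come straight from the codes decoded above:

CREATE TABLE soe.orders (
  order_id       NUMBER(12,0),                       -- data type 134, external length 13 (12 + 1)
  order_date     TIMESTAMP(6) WITH LOCAL TIME ZONE,  -- data type 192, external length 29
  order_mode     VARCHAR2(8),                        -- data type 64, external length 8
  delivery_type  VARCHAR2(15),                       -- data type 64, external length 15
  customer_class VARCHAR2(30)                        -- data type 64, external length 30
);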

Now, you may be wondering: how do you get this information to come up in the logdump interface? Oracle has provided a logdump command that is used to display/find metadata information. This command is:

SCANFORMETADATA (SFMD)

There are a few options that can be passed to this command to gather specific information. These options are:

DDR | TDR
NEXT | INDEX

If you issue:

SCANFORMETADATA DDR

You will get information related to the Data Definition Record (DDR) of the table. The information this provides includes the following output:

If you issue:

SCANFORMETADATA TDR

You will get information related to the Table Definition Record (TDR) of the table. The information provided includes the output already discussed earlier.
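
Putting it all together, a quick logdump session against a trail might look like this (the trail name is hypothetical; ghdr and detail simply make the record output readable):

$ ./logdump
Logdump 1> open ./dirdat/lt000000000
Logdump 2> ghdr on
Logdump 3> detail data
Logdump 4> scanformetadata tdr
Logdump 5> scanformetadata ddr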

As you can tell, Oracle has put a lot of the information that is traditionally in the definitions files for mapping tables directly into the trail files. This will make mapping data between systems a bit easier and architectures less complicated.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs