DBASolved

Helping Oracle DBAs solve problems

Repair Replicat after mount point name change

Sat, 2014-11-15 11:06

Working on Oracle GoldenGate can be an interesting adventure. Case in point: I have been doing some migration work for a client. Halfway through the migration, the target system ran out of the resources needed to create the tablespaces and store the export and trail files (i.e., disk space; a story for another time). The impact to the migration was that everything had to stop until resources were allocated.

Part of the allocation of resources was to change the mount point name. If you know anything about Oracle GoldenGate replicats, you know that using a static mount point is not the best approach (it slipped my mind at the time); however, I made this mistake. When the mount point name changed, all the replicats broke because they couldn't locate the trail files where specified.

Initial:

When I initially set up the replicat, I used a static mount point. Let's take a look at the create replicat statement I used:

--Add Replicat Process
ADD REPLICAT REPM01, EXTTRAIL /orabackup/ggate/trail/ars/ra, DESC "Replicat process for a schema"
START REPLICAT REPM01, ATCSN <csn>

As you can see, the replicat is looking for the "ra" trail files on the "/orabackup" mount point.

Problem:

During the allocation of space, the mount point "/orabackup" was changed to "/orabkup". How does this affect the replicat? Simple: the replicat will throw an OGG-01091 error stating that it couldn't find the trail file.

ERROR OGG-01091 Unable to open file "/orabackup/ggate/trail/ars/ra000000" (error 2, No such file or directory).

Solution:

The solution to this problem is to capture the last completed CSN from the checkpoint table.

SQL> select group_name, rba, seqno, log_cmplt_csn from checkpoint where group_name = 'REPM01';

GROUP_NAME        RBA      SEQNO  LOG_CMPLT_CSN
----------   --------  ---------  --------------
REPM01         544013          1  11108080706671

Once the last completed CSN has been identified, the replicat can be dropped and recreated with the new path to the trail file.

GGSCI> dblogin userid ggate password <password>
GGSCI> delete replicat REPM01
GGSCI> add replicat REPM01, EXTTRAIL /orabkup/ggate/trail/ars/ra, DESC "Replicat process for a schema"
GGSCI> start replicat REPM01, atcsn 11108080706671
GGSCI> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     REPM01      00:00:00      00:00:06

Lesson Learned:

When setting up locations for your trail files, make sure they are not static locations; relative locations should be used. In most Oracle GoldenGate architectures, the "dirdat" directory under $OGG_HOME is used for trail files; if you need more space for trail files, the "dirdat" directory can be linked to a directory on a larger mount point. This keeps the replicat configuration consistent for trail file purposes and makes it easier to cope if the name of the mount point changes.
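For example, a minimal sketch of relinking "dirdat" to a larger mount point (the paths are hypothetical, and all processes should be stopped first):

$ cd $OGG_HOME
$ mv dirdat /u02/ggtrails/dirdat
$ ln -s /u02/ggtrails/dirdat dirdat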

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

ECO 2014 and Slides

Mon, 2014-11-10 07:30

Last week I attended the East Coast Oracle User Group conference, also known as ECO, in Raleigh, NC.  This was my first time at ECO, and it was a good event for a two-day conference.  The low-key atmosphere provided nice, comfortable interaction between the speakers and those in attendance.  If you ever have the chance to catch this conference, it is a good one to attend.

What you can expect from ECO is great speakers, both local to Raleigh and from around the country, along with opportunities to see speakers that we all hear about and would like to see at some point.  As one of the speakers at this year's conference, I have to say it was nice to have great attendance for my session on Oracle GoldenGate 12c Conflict Detection and Resolution.  My session was scheduled for 45 minutes; due to discussions throughout the session, it lasted about 65 minutes.  Although the session ran over, it was exciting to see so many people wanting to know more about Oracle GoldenGate and the benefits it provides to an organization.

If you would like to see the slides from my ECO session, they can be found here.

Lastly, I would like to say that ECO is one of the smaller user group conferences, yet it seems to draw some great speakers.  Check it out next year!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: General
Categories: DBA Blogs

SQL Developer and Big Data Appliance (sort of)

Sun, 2014-11-09 20:55

Recently, Enkitec received an Oracle Big Data Appliance (BDA) for our server farm in Dallas (thanks, Accenture!).  With this new addition to the server farm, I'm excited to see what the BDA can do and how to use it.  Since I use Oracle SQL Developer for a lot of things, I figured I had better see if I can connect to it…. wait, I don't have access yet, darn!  Simple solution: I'll just use the Oracle VirtualBox VM (Big Data Lite) to make sure that my SQL Developer can connect when I eventually get access.

The first thing I needed to do was download the Big Data Lite VM; it can be downloaded from the Oracle Technology Network (here). The second thing was to download the connectors for Hive from Cloudera; use the version for the platform you need (here).

After downloading the Cloudera connectors for Hive, they needed to be unzipped in a location that SQL Developer can access. Since I'm on a MacBook Pro, I unzipped them in this location:


$ cd ~/Downloads
$ unzip ./Cloudera_HiveJDBC_2.5.4.1006.zip -d /Users/Bobby/Oracle/connectors
$ cd /Users/Bobby/Oracle/connectors
$ ls -ltr
total 21176
-rw-r--r--@ 1 Bobby  staff  5521341 Sep 10 15:16 Cloudera_HiveJDBC4_2.5.4.1006.zip
-rw-r--r--@ 1 Bobby  staff  5317239 Sep 10 15:16 Cloudera_HiveJDBC3_2.5.4.1006.zip
$ unzip ./Cloudera_HiveJDBC4_2.5.4.1006.zip -d ./Hive
$ cd ./Hive
$ ls -ltr
-r--r--r--@ 1 Bobby  staff  1083758 Sep  8 17:28 Cloudera - Simba JDBC Driver for Hive Install Guide.pdf
-rw-r--r--@ 1 Bobby  staff     9679 Sep  8 23:28 slf4j-log4j12-1.5.8.jar
-rw-r--r--@ 1 Bobby  staff    23445 Sep  8 23:28 slf4j-api-1.5.8.jar
-rw-r--r--@ 1 Bobby  staff   367444 Sep  8 23:28 log4j-1.2.14.jar
-rw-r--r--@ 1 Bobby  staff   347531 Sep  8 23:28 libthrift-0.9.0.jar
-rw-r--r--@ 1 Bobby  staff   275186 Sep  8 23:28 libfb303-0.9.0.jar
-rw-r--r--@ 1 Bobby  staff   294796 Sep  8 23:28 ql.jar
-rw-r--r--@ 1 Bobby  staff   596600 Sep  8 23:28 hive_service.jar
-rw-r--r--@ 1 Bobby  staff  7670596 Sep  8 23:28 hive_metastore.jar
-rw-r--r--@ 1 Bobby  staff  2972229 Sep  8 23:28 TCLIServiceClient.jar
-rw-r--r--@ 1 Bobby  staff  1656683 Sep  8 23:29 HiveJDBC4.jar

 
Once the connectors are extracted, SQL Developer needs to know which Hive connector to use.  In this case, the JDBC4 connector is required, so I unzipped the JDBC4 set of files into a directory called Hive, as shown above.

In order to tell SQL Developer which connector to use, it needs to be specified in the interface by doing the following:

  1. Start SQL Developer
  2. Oracle SQL Developer -> Preferences
  3. Database -> Third Party JDBC Drivers -> Add Entry (add the jar files extracted into the Hive directory)
  4. Restart SQL Developer

After restarting SQL Developer, we now see an option on the connection screen for Hive.

Now SQL Developer is ready to connect to a Big Data Appliance… oh, I mean to my Big Data Lite VM :). Let's set up a connection and see if we can connect.  Since I'm connecting to a VirtualBox VM, I need to set up some port forwarding between my MacBook and the VM.  In this case, I have set up a SQL port on 15211, which maps to the standard database port of 1521.  For the Hive connection, I've set up port 10001, which maps to port 10000.
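If you prefer the command line, the same port forwarding can be scripted with VBoxManage; a sketch (the VM name "BigDataLite" is an assumption, and the VM should be powered off when using modifyvm):

$ VBoxManage modifyvm "BigDataLite" --natpf1 "sql,tcp,,15211,,1521"
$ VBoxManage modifyvm "BigDataLite" --natpf1 "hive,tcp,,10001,,10000"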

With the ports in place, I can now set up SQL Developer to connect to Hive on the Big Data Lite VM.  You will notice that a username, password, server name, and port are needed.  The database parameter is optional when connecting to a Big Data Hive.
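Behind the scenes, this connection resolves to a HiveServer2-style JDBC URL; a sketch of what the equivalent connection string would look like with my forwarded port (host and default database are assumptions):

jdbc:hive2://localhost:10001/default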

Once the connection is configured, I can log in to Hive and review what tables are listed in the Big Data Lite VM.

The end result is that I can now visualize the data that is in a Big Data Appliance/Big Data Lite VM and begin to interact with the objects defined within.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: BigData
Categories: DBA Blogs

Securing Oracle GoldenGate Database Login in a Credential Store

Fri, 2014-11-07 17:44

Security is always a big deal.  When setting up Oracle GoldenGate, the capture (extract) and apply (replicat) parameter files need to be configured to log in to the database against which they will perform operations.  In order to do this, the Oracle GoldenGate user name and password need to be provided in the parameter files.  Example 1 shows how the database login is traditionally done in an extract or replicat parameter file.

Example 1:

--Oracle Login
USERID ggate, PASSWORD ggate

To make this login information more secure, we can create a userid alias that the extract or replicat process can use to log in to the database.  In order to create a login alias, a credential store needs to be created.  Below are the steps to create the credential store and associated aliases.

After logging into the GoldenGate Software Command Interface (GGSCI), a credential store needs to be created.  By default, the credential store will be kept in the "dircrd" directory under the $OGG_HOME.

Create the credential store:

GGSCI (db12cgg.acme.com) 1> add credentialstore

Credential store created in ./dircrd/.

With the credential store created, an alias can now be created for the GoldenGate user.

GGSCI (db12cgg.acme.com) 2> alter credentialstore add user ggate, password ggate alias aggate

Credential store in ./dircrd/ altered.

The extract or replicat parameter files then need to be updated to use the new alias.  Once the update is done, the associated process needs to be restarted.

--Oracle Login
USERIDALIAS aggate

After restarting the process, the Oracle GoldenGate login is secure.

Note: If the password for the Oracle GoldenGate user changes, the alias in the credential store will need to be updated.
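A sketch of refreshing the alias after a password change, then verifying it (the new password value here is illustrative):

GGSCI (db12cgg.acme.com) 3> alter credentialstore replace user ggate, password newpassword alias aggate
GGSCI (db12cgg.acme.com) 4> info credentialstore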

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

OGG-01742 when trying to start extract process

Tue, 2014-10-28 13:57

Every once in a while I do something in my test environments to test one thing, then go back to testing core functions of the product; in this case I was testing a feature of Oracle GoldenGate 12c.  Earlier in the day, I had set the $ORACLE_HOME environment variable to reference the Oracle GoldenGate home.  Instead of closing my session and restarting, I just started GGSCI and the associated manager.  To my surprise, the extracts that I had configured wouldn't start.  GGSCI was issuing an OGG-01742 error about the "child process is no longer alive" (Image 1).

Image 1:

As I was trying to figure out what was going on, I checked the usual files (ggserr.log and report files) for any associated errors.  I didn't find anything that was out of place or led me to believe there was a problem with Oracle GoldenGate.  The next thing to identify was whether the environment was configured correctly.  To do this, I first checked the session environment variables (Image 2) related to the Oracle database.

Image 2:
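For those reading without the screenshot, the check looks something like this (output values are illustrative):

[oracle@db12cgg ~]$ env | grep -E 'ORACLE_(HOME|SID)'
ORACLE_HOME=/u01/app/oracle/product/12.1.2/ogg
ORACLE_SID=ora12c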

As you can see, I had set ORACLE_HOME equal to my OGG_HOME.  After seeing this, I remembered that I had set it this way for a specific test.  Since ORACLE_HOME was set to OGG_HOME, the Oracle GoldenGate extract (capture) process didn't know where to get the needed libraries for the database.

The Fix:

To fix this issue, I just needed to reset my environment to reference ORACLE_HOME and ORACLE_SID correctly.  In a Linux environment, I like to use ". oraenv" to do this (Image 3).

Image 3:

Now that I have reset my Oracle environment to point to the database, I should be able to start my extracts without getting any error messages related to OGG-01742 (Image 4).

Note: It appears that if the manager process has AUTOSTART and/or AUTORESTART set, the manager process needs to be restarted before the OGG-01742 message will go away.

Image 4:

With my extracts started now, I’ve got all my Oracle GoldenGate processes back up and running (Image 5).

Image 5:

In a nutshell, if you get an OGG-01742 error while trying to start or restart an extract (capture) process, just reset your Oracle environment variables!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

Use OPatch to check Oracle GoldenGate version

Tue, 2014-10-28 11:08

Recently I was strolling the OTN message boards and came across a question about identifying the version of Oracle GoldenGate using OPatch.  This was the second time I had come across this question, so I decided to take a look and see if Oracle GoldenGate information could be retrieved using OPatch.

Initially I thought that identifying the Oracle GoldenGate version could only be done by logging into GGSCI and reviewing the header information.  To do this, just set up the Oracle environment using ". oraenv".

Note: ". oraenv" will use the /etc/oratab file to set the ORACLE_HOME and ORACLE_SID variables and ensure that Oracle GoldenGate has access to the library files it needs.

Once the environment is set, GGSCI can be used to start the interface.

[oracle@db12cgg ogg]$ . oraenv
ORACLE_SID = [oragg] ?
The Oracle base has been set to /u01/app/oracle
[oracle@db12cgg ogg]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (db12cgg.acme.com) 1>

Notice in the output above that the version of Oracle GoldenGate being run is 12.1.2.1.0 for Linux x64.

How can this be done through OPatch?  The same information can be gathered using the opatch utility.  Ideally, you will want to use opatch from the $OGG_HOME/OPatch directory.

Note: $ORACLE_HOME needs to be set to $OGG_HOME before the correct opatch inventory will be listed.  If $ORACLE_HOME is set for the database, opatch will return information about the database, not Oracle GoldenGate.
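A sketch of pointing $ORACLE_HOME at the GoldenGate home before running opatch (the path is the OGG home used in this environment):

[oracle@db12cgg ogg]$ export ORACLE_HOME=/u01/app/oracle/product/12.1.2/ogg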

After making sure that $ORACLE_HOME points to the correct $OGG_HOME, the inventory for Oracle GoldenGate can be retrieved using "./opatch lsinventory".

[oracle@db12cgg ogg]$ pwd
/u01/app/oracle/product/12.1.2/ogg
[oracle@db12cgg ogg]$ cd OPatch
[oracle@db12cgg OPatch]$ ./opatch lsinventory
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation.  All rights reserved.
Oracle Home      : /u01/app/oracle/product/12.1.2/ogg
Central Inventory : /u01/app/oraInventory
  from          : /etc/oraInst.loc
OPatch version    : 11.2.0.1.7
OUI version      : 11.2.0.3.0
Log file location : /u01/app/oracle/product/12.1.2/ogg/cfgtoollogs/opatch/opatch2014-10-28_11-18-49AM.log
Lsinventory Output file location : /u01/app/oracle/product/12.1.2/ogg/cfgtoollogs/opatch/lsinv/lsinventory2014-10-28_11-18-49AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle GoldenGate Core                                              12.1.2.1.0

There are 1 products installed in this Oracle Home.

There are no Interim patches installed in this Oracle Home.

--------------------------------------------------------------------------------

OPatch succeeded.

As you can tell, I was able to find the same information using OPatch without having to go to the GGSCI utility.

Note: I have not had a chance to check this against Oracle GoldenGate 11g and earlier; this may be something specific to Oracle GoldenGate 12c.  I will verify at a later time.

Enjoy!

about.me: http://about.me/dbasolved

 


Filed under: Golden Gate
Categories: DBA Blogs

Run #em12c on #db12c? – Discussion from Oracle Open World (MOS Note: 1920632.1)

Wed, 2014-10-01 13:38

Ok folks, I've been here at Oracle Open World for a few days now.  In that time, I've had numerous conversations about running Oracle Enterprise Manager 12c on Oracle Database 12c.  I will be honest and say that I've enjoyed these conversations; however, after about the fourth time, I decided I needed to write a quick post on the explanation discussed in these conversations.

Earlier this year (August) I wrote a post about what came out of the OEM CAB in May 2014 and how to get OEM 12c to work on DB12c.  The concept of running OEM 12c on DB12c, pluggable or not, has many people excited and looking forward to setting up OEM in that very configuration.  Heck, I've even installed it for a few customers in that configuration (non-PDB).  So I'm a bit sad in having to say this:  ORACLE DATABASE 12c SUPPORT FOR THE REPOSITORY DATABASE IS TEMPORARILY SUSPENDED!  I say this due to My Oracle Support (MOS) Note: 1920632.1.

Note 1920632.1 states the following:

Due to some recently discovered scenarios, we (Oracle) are temporarily suspending the certification of DB 12.1.0.1 and DB 12.1.0.2 as a Certified Repository version for EM 12c R4 until additional testing is complete.

Now, what does this mean for those customers and clients that have already built their OEM 12c repository on DB12c (12.1.0.1 or 12.1.0.2)?  The MOS note outlines what to do in its action section:

Until testing is complete on the 12c Database, Oracle recommends using DB 11.2.0.4 as the EM 12c R4 Repository. 

If you are currently running a 12c DB repository, please be aware that additional testing is underway and there are currently no bugs or patches required; but if testing proves a patch is required, we will update this announcement.  You do not need to deinstall EM 12c or move the repository to an 11.2.0.4 database.

I sure hope Oracle quickly finishes testing and can restore support for DB12c as the OEM repository.  In the meantime, everyone should know about this note number and be aware of it when making architecture changes related to their OEM 12c environment.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Oracle #GoldenGate Parameter File Templates

Wed, 2014-10-01 01:27

This week I've been enjoying spending some time at Oracle Open World in San Francisco, CA.  While here, I've been talking with everyone, friends old and new, and it came to my attention that it would be a good idea to have some useful templates for Oracle GoldenGate parameter files.  With this in mind, I decided to create some generic templates with comments for the Oracle GoldenGate processes.  These templates can be found on my Scripts page under "Oracle GoldenGate Parameter Templates".  They are in a small zip file that can be downloaded, unzipped, and used in creating a basic uni-directional configuration.

By using these templates, you should be able to:

  1. Review useful examples for each Oracle GoldenGate process (Manager, Extract, Pump, Replicat)
  2. With minor changes, quickly get uni-directional replication going
  3. Gain a base understanding of how simple Oracle GoldenGate parameter files work
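To give a flavor of what the templates look like, here is a minimal manager parameter sketch of my own; it is illustrative, not the exact file from the zip:

--Manager parameter template (illustrative)
PORT 7809
DYNAMICPORTLIST 7810-7820
AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3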

Enjoy!

about.me: http://about.me/dbasolved

 


Filed under: Golden Gate
Categories: DBA Blogs

Change unknown SYSMAN password on #EM12c

Fri, 2014-09-12 17:52

When I start work on a new EM 12c environment, I would normally request to have a userid created; however, I don't have a userid in this environment and I need to access EM 12c as SYSMAN.  Without knowing the password for SYSMAN, how can I access the EM 12c interface?  The short answer is that I can change the SYSMAN password from the OS where EM 12c is running.

Note:
Before changing the SYSMAN password for EM 12c, make sure to understand the following:

  1. SYSMAN is used by the OMS to log in to the OMR to store and query all activity
  2. The SYSMAN password has to be changed at both the OMS and the OMR for EM 12c to work correctly
  3. Do not modify the SYSMAN user, or any other repository user, directly at the OMR level (not recommended)

The steps to change an unknown SYSMAN password are as follows:

Tip: Make sure you know the SYS password for the OMR; it will be needed to reset SYSMAN.

1. Stop all OMS processes

cd <oms home>/bin
emctl stop oms 

Image 1:
sysman_pwd_stop_oms.png

2. Change the SYSMAN password

cd <oms home>/bin
emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd <sys password> -new_pwd <new sysman password>

In Image 2, notice that I didn't pass the password for SYS or SYSMAN on the command line.  EMCTL will ask you to provide the passwords if you don't put them on the command line.

Image 2:
sysman_pwd_change_pwd.png

3. Stop the Admin Server on the primary OMS and restart OMS

cd <oms home>/bin
emctl stop oms -all
emctl start oms

Image 3:
sysman_pwd_start_oms.png

4. Verify that all of OMS is up and running

cd <oms home>/bin
emctl status oms -details

Image 4:

sysman_pwd_oms_status.png

After verifying that the OMS is back up, I can now try to log in to the OEM interface.

Image 5:
sysman_pwd_oem_access.png

As we can see, I'm now able to access OEM as SYSMAN with the new SYSMAN password.
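The change can also be sanity-checked from the command line with EM CLI (a sketch; it assumes EM CLI is configured on the OMS host):

cd <oms home>/bin
./emcli login -username=sysman
Enter password :
Login successful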

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Adding additional agents to OEM12c

Mon, 2014-09-08 07:52

One question I get asked a lot is, "How can I add additional agent software to OEM 12c?"  The answer is pretty easy: just download it and apply it to the software library.  Now, what does that mean?  In this post, I'll explain how to download additional agents for later deployment to other platforms.

After logging into OEM 12c, go to Setup -> Extensibility -> Self Update (Image 1).

Image 1:

SelfUpdate_Menu.png

Once on the Self Update page (Image 2), there are a few things to notice.  The first is that under Status, the Connection Mode is Online; this is an indicator that OEM has been configured and connected to My Oracle Support (MOS).  Additional items under the Status area are the last refresh time, the last download time, and the last download type.  Right under the Status section there is a menu bar with actions that can be performed on this page.  Clicking the Check Updates button will check for any new updates in all the Types listed.  Since we want to focus on agents, click on the folder for Agent Software.

Image 2:

SelfUpdate_Page.png

Clicking on the Agent Software folder takes us to the Agent Software Updates page for Self Update (Image 3).  On this page, it can be seen clearly that a lot of agent software is available.  We can also see the Past Activities section, which shows what actions have been performed against a particular version of the agent.

Image 3:
AgentSoftwareUpdatePage.png

On the menu bar (Image 4), we can search the agent software either by description or by example.  These search options take text search terms; if we know there is a new release, it can be found by simply entering text like '12.1.0.4'.

Image 4:
SelfUpdate_AgentUpdate_bar.png

As we can see in Image 5, searching for agents at version '12.1.0.4' returns a list of available agents at that version.  Notice the Status column of the table: two statuses are shown here, Applied and Available; the full set of statuses is defined below.

Image 5:
AgentUpdateSearch.png

Let’s define the Agent Software Update Statuses a bit more.  They are as follows:

  1. Available = This version of the agent is available for the OS Platform and can be downloaded
  2. Download in progress = This version of the agent is being downloaded to the OMS
  3. Downloaded = This version of the agent has been downloaded to the OMS
  4. Applied = This version of the agent has been applied to the Software Library and ready to use for agent deployments

Now that we know what the Status column means, how can an agent be downloaded?

While on the Agent Software Updates page, select and highlight the OS platform for which an agent is needed; in this example, let's use "Microsoft Windows x64 (64-bit)" (Image 6).  Notice the Status column and the Past Activities section: this agent is available for download.  Download the agent by clicking the Download button in the menu bar.

Image 6:
AgentUpdate_Win64.png

After clicking the Download button, OEM will ask when to run the job (Image 7).  Normally, running it immediately is fine.

Image 7:
AgentDownloadJob.png

Once the status is set to Downloaded, the agent software needs to be applied to the Software Library before it can be used (Image 8).  Highlight the agent that was just downloaded and click the Apply button; this will apply the binaries to the Software Library.  Also notice the Past Activities section, where we can clearly see what has been done with these agent binaries.

Image 8:
AgentSoftwareDownloaded.png

Once the Apply button has been clicked, OEM presents a message letting you know that the apply operation will store the agent software in the Software Library (Image 9).  Click OK when ready.

Image 9:
AgentUpdateApplyMsg.png

The agent software is finally applied to the Software Library and ready to use (Image 10).

Image 10:
AgentAppliedtoSWLib.png

With the agent now applied to the Software Library, it can be used to deploy out, via push or pull, to Microsoft Windows hosts.
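For scripted environments, the same download-and-stage flow can be driven with EM CLI; a sketch (the destination path is illustrative):

./emcli login -username=sysman
./emcli sync
./emcli get_supported_platforms
./emcli get_agentimage -destination=/tmp/agentimage -platform="Microsoft Windows x64 (64-bit)" -version=12.1.0.4.0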

Note: In my experience, most deployments to Microsoft Windows hosts have to be done either with Cygwin installed or via the silent install method.  If you would like more information on the silent install approach, I wrote a post on it here.

Enjoy!!

 

about.me: http://about.me/dbasolved

 


Filed under: OEM
Categories: DBA Blogs

Using #DB12c PDB as a Repository for #EM12c…. Not so fast!

Fri, 2014-08-29 14:51

Last year, I wrote a post on how to set up Oracle Enterprise Manager 12c (12.1.0.3) with Oracle Database 12c (12.1.0.1) and what you could expect.  At the time the configuration was not supported by Oracle, and I only did it to see if it could be done.  If you would like to read that post, it can be found here.

This past May (2014), I attended the Oracle Customer Advisory Board (CAB) for Oracle Enterprise Manager 12c.  It was a great gathering of power users and customers of the Oracle Enterprise Manager product line, with a lot of useful information being shared.  One thing that came out of the CAB is that Oracle Enterprise Manager 12c (12.1.0.4) is now fully supported against Oracle Database 12c (12.1.0.1 or later).  During the discussion, I asked if pluggable databases (PDBs) would be supported for the repository database; I was surprised when the initial answer was "Yes".  I was excited, to say the least, and wanted to try it out.  The ability to house multiple monitoring tools in a single CDB via multiple PDBs would shrink the footprint of many monitoring application databases within organizations.

Unfortunately, Oracle and I don't share the same outlook on that use case.  After doing more research, more installations, and discussing the use case with a few friends inside and outside of Oracle, I can safely say that pluggable databases (PDBs) are not supported as an Oracle Management Repository (OMR).  Although a PDB is not supported as the OMR, Oracle Enterprise Manager 12c (12.1.0.4) can still be run with an Oracle Database 12c (12.1.0.1 or later) OMR as long as it is a traditional (non-CDB) database.

In summary, I’ve answered two (2) questions for you.

1. Are Pluggable Databases (PDBs) supported as Oracle Management Repositories for EM12c?  -> NO

2. Can an Oracle Database 12c (12.1.0.1 or later), in traditional (non-CDB) mode, be used as an Oracle Management Repository? -> YES

In Image 1, you can see that I have EM12c (12.1.0.4) running with a DB12c repository.

Image 1:

12104_db12_oem.png

Enjoy!

about.me: http://about.me/dbasolved

 


Filed under: Database, OEM
Categories: DBA Blogs

Presenting at OOW 2014

Fri, 2014-08-29 07:26

This year I'll be presenting at Oracle Open World with many of the best in the industry.  If you are going to be in the San Francisco area between September 28 and October 2, 2014, stop by and check out the conference.  Registration information can be found here.

The topics which I’ll be presenting or assisting with this year are:

  • OTN RAC Attack – Sunday, September 28, 2014 – 9 am – 3 pm PST @ OTN Lounge
  • How many ways can I monitor Oracle GoldenGate – Sunday, September 28, 2014 – 3:30 pm – 4:15 pm PST @ Moscone South 309
  • Oracle Exadata’s Exachk and Oracle Enterprise Manager 12c: Keeping Up with Oracle Exadata – Thursday, October 2, 2014 10:45 am – 11:30 am PST @ Moscone South 310

Hope to see you there!

Enjoy!

about.me: http://about.me/dbasolved


Filed under: General
Categories: DBA Blogs

Integrated Application Heartbeat for Oracle GoldenGate

Mon, 2014-08-25 06:51

Over the last two weeks I've been working on a heartbeat monitoring solution for a client.  This is not the normal heartbeat solution provided by Oracle, as described in note 1299679.1, yet it is very similar.  The approach that I configured is similar to a traditional heartbeat setup but uses the same extracts and replicats already being used by the database/application.  For simplicity's sake, I like to call this approach an Integrated Application Heartbeat.

In order to set up this style of heartbeat monitoring, the following items are needed:

1.  Identify the heartbeat table in the source database. In this configuration, let's call this table SETTINGS. (Keep in mind nothing will change with this table; there are no DDL changes.)
2.  Identify and generate DDL for the target heartbeat table. Let's call this table GG_STATUS_HB.
3.  Identify and generate DDL for the target heartbeat history table. Let's call this table GG_STATUS_HB_HIST.
4.  Create two triggers for updating information in the target heartbeat tables (one example below).
5.  Create a few macros, one for each of the processes in the configuration.
6.  Create either a crontab job or a DBMS_JOB process.

Now that the components have been identified, let's take a look at what needs to be done.

The source side heartbeat table has already been identified: in the database, the table name is SETTINGS, and it has a single column for its primary key.  This table is updated using a crontab job that runs the following SQL.  The <schema> variable is used because the SETTINGS table exists in more than one schema.

update <schema>.settings set id=id where rownum < 2;
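The crontab side can be as simple as a small wrapper script; a sketch, with the credentials, connect string, and interval all illustrative:

#!/bin/sh
# hb_update.sh - sketch of the heartbeat update wrapper
sqlplus -s ggate/ggate@srcdb <<EOF
update <schema>.settings set id=id where rownum < 2;
commit;
exit
EOF

# crontab entry: fire the heartbeat update every minute
* * * * * /home/oracle/scripts/hb_update.sh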

On the target side, the target heartbeat and heartbeat history tables need to be created.  These tables will not match the columns in the SETTINGS table; this means that all the data replicated to the SETTINGS table still has to be replicated, along with the table being used as a heartbeat table.  The DDL to create these two tables is as follows:

CREATE TABLE <schema>.GG_STATUS_HB
(
SITE_ID                    NUMBER(10),
DB_SCHEMA                  VARCHAR2(30),
CSN_TS                     TIMESTAMP(6),
CSN_NB                     NUMBER(18),
EXT_GROUP                  VARCHAR2(10),
EXT_TIME                   TIMESTAMP(6),
EXT_LAG                    NUMBER,
PMP_GROUP                  VARCHAR2(10),
PMP_TIME                   TIMESTAMP(6),
PMP_LAG                    NUMBER,
TGT_DB_NAME                VARCHAR2(15),
REP_GROUP                  VARCHAR2(10),
REP_TIME                   TIMESTAMP(6),
REP_LAG                    NUMBER,
TOTAL_LAG                  NUMBER,
TOTAL_LAG_MIN              NUMBER,
UPDATE_TS                  TIMESTAMP(6),
CONSTRAINT GG_STATUS_HB_PK PRIMARY KEY (SITE_ID) ENABLE
);
CREATE TABLE <schema>.GG_STATUS_HB_HIST
(
SITE_ID                    NUMBER(10),
DB_SCHEMA                  VARCHAR2(30),
CSN_TS                     TIMESTAMP(6),
CSN_NB                     NUMBER(18),
EXT_GROUP                  VARCHAR2(10),
EXT_TIME                   TIMESTAMP(6),
EXT_LAG                    NUMBER,
PMP_GROUP                  VARCHAR2(10),
PMP_TIME                   TIMESTAMP(6),
PMP_LAG                    NUMBER,
TGT_DB_NAME                VARCHAR2(15),
REP_GROUP                  VARCHAR2(10),
REP_TIME                   TIMESTAMP(6),
REP_LAG                    NUMBER,
TOTAL_LAG                  NUMBER,
TOTAL_LAG_MIN              NUMBER,
UPDATE_TS                  TIMESTAMP(6),
CONSTRAINT GG_STATUS_HB_HIST_PK PRIMARY KEY (SITE_ID, DB_SCHEMA, CSN_TS, CSN_NB, UPDATE_TS) ENABLE
);

In mapping these tables, I found it easier to create macros to handle the mappings.  Macros are especially handy since the SETTINGS table is in multiple schemas and needs to be mapped through existing GoldenGate processes.  For more on macros and passing parameters, I've written a post to highlight macros (here).

Examples of the macros that are used in this configuration are as follows:

Macro for the extract process:
This macro is used to capture and map the extract name, the time the transaction was extracted, the timestamp of the commit, and the System Change Number.

--Heartbeat Extract Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC
-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************
MACRO #hb_ext_details
BEGIN
EXT_GROUP = @GETENV("GGENVIRONMENT","GROUPNAME"), &
EXT_TIME = @DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
CSN_TS = @GETENV("GGHEADER","COMMITTIMESTAMP"), &
TRANS_CSN = @GETENV("TRANSACTION","CSN")
END;
-- *****************************************
-- Application Specific Marcos for Heartbeat
-- *****************************************
MACRO #opshb_info
BEGIN
       #hb_ext_details()
END;

Macro for the pump process:
This macro captures the pump name and the time the transaction passed through the pump.

--Heartbeat Pump Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC
-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************

MACRO #hb_pmp_details
BEGIN
PMP_GROUP=@GETENV("GGENVIRONMENT","GROUPNAME"),
PMP_TIME=@DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP"))
END;

-- *****************************************
-- Application Specific Marcos for Heartbeat
-- *****************************************

MACRO #opshb_info
BEGIN
        #hb_pmp_details()
END;

Macro for the replicat process:
This macro maps all of the tokens that have been passed from the other GoldenGate processes to the target tables.

--Heartbeat Replicat Macro Library - Version 1.01
--Initial write - 08-13-2014 - BLC
-- ***************************************
-- Common macros for this library
-- called internally
-- ***************************************
MACRO #opshb_rep_details
BEGIN
SITE_ID=@TOKEN("NHIN_STORE_ID"), &
DB_SCHEMA=@TOKEN("SRC_DB_SCHEMA"), &
CSN_TS=@TOKEN("SRC_CSN_TS"), &
CSN_NB=@TOKEN("SRC_TRANS_CSN"), &
EXT_GROUP=@TOKEN("EXT_GROUP"), &
EXT_TIME=@TOKEN("EXT_TIME"), &
PMP_GROUP=@TOKEN("PMP_GROUP"), &
PMP_TIME=@TOKEN("PMP_TIME"), &
TGT_DB_NAME=@GETENV("DBENVIRONMENT","DBNAME"), &
REP_GROUP=@GETENV("GGENVIRONMENT","GROUPNAME"), &
REP_TIME=@DATE("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV("JULIANTIMESTAMP")), &
UPDATE_TS=@DATENOW()
END;
-- *****************************************
-- Application Specific Marcos for Heartbeat
-- *****************************************
MACRO #hb_info
PARAMS (#src_schema)
BEGIN
MAP #src_schema.SETTINGS, TARGET <target schema>.GG_STATUS_HB, &
KEYCOLS (SITE_ID), &
INSERTMISSINGUPDATES, &
COLMAP ( #opshb_rep_details() );

MAP #src_schema.SETTINGS, TARGET <target schema>.GG_STATUS_HB_HIST, &
KEYCOLS (SITE_ID, DB_SCHEMA, CSN_TS, CSN_NB, UPDATE_TS), &
INSERTALLRECORDS, HANDLECOLLISIONS, &
COLMAP ( #opshb_rep_details() );
END;

For each of the target heartbeat tables, a trigger is needed to calculate the lag and timestamp information.  The triggers that I used are based on the Oracle-supplied examples in note 1299679.1.  To keep this blog somewhat short, I'm only including one sample of the triggers used.

Trigger Example:


CREATE OR REPLACE TRIGGER <target schema>.GG_STATUS_HB_TRIG
BEFORE INSERT OR UPDATE ON <target schema>.GG_STATUS_HB
FOR EACH ROW
BEGIN
SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS),1, INSTR(:NEW.EXT_TIME - :NEW.CSN_TS,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS), INSTR((:NEW.EXT_TIME - :NEW.CSN_TS),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS), INSTR((:NEW.EXT_TIME - :NEW.CSN_TS),' ')+4,2)) * 60
+ TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS), INSTR((:NEW.EXT_TIME - :NEW.CSN_TS),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.EXT_TIME - :NEW.CSN_TS), INSTR((:NEW.EXT_TIME - :NEW.CSN_TS),' ')+10,6)) / 1000000
INTO :NEW.EXT_LAG
FROM DUAL;

SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),1, INSTR(:NEW.PMP_TIME - :NEW.EXT_TIME,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),' ')+4,2)) * 60
+ TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.PMP_TIME - :NEW.EXT_TIME), INSTR((:NEW.PMP_TIME - :NEW.EXT_TIME),' ')+10,6)) / 1000000
INTO :NEW.PMP_LAG
FROM DUAL;

SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME),1, INSTR(:NEW.REP_TIME - :NEW.PMP_TIME,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME), INSTR((:NEW.REP_TIME - :NEW.PMP_TIME),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME), INSTR((:NEW.REP_TIME - :NEW.PMP_TIME),' ')+4,2)) * 60
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME), INSTR((:NEW.REP_TIME - :NEW.PMP_TIME),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.PMP_TIME), INSTR((:NEW.REP_TIME - :NEW.PMP_TIME),' ')+10,6)) / 1000000
INTO :NEW.REP_LAG
FROM DUAL;

SELECT TRUNC(TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS),1, INSTR(:NEW.REP_TIME - :NEW.CSN_TS,' ')))) * 86400
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS), INSTR((:NEW.REP_TIME - :NEW.CSN_TS),' ')+1,2)) * 3600
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS), INSTR((:NEW.REP_TIME - :NEW.CSN_TS),' ')+4,2)) * 60
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS), INSTR((:NEW.REP_TIME - :NEW.CSN_TS),' ')+7,2))
+ TO_NUMBER(SUBSTR((:NEW.REP_TIME - :NEW.CSN_TS), INSTR((:NEW.REP_TIME - :NEW.CSN_TS),' ')+10,6)) / 1000000
INTO :NEW.TOTAL_LAG
FROM DUAL;

SELECT ROUND((:NEW.TOTAL_LAG/60),1) INTO :NEW.TOTAL_LAG_MIN FROM DUAL;
SELECT SYSTIMESTAMP INTO :NEW.UPDATE_TS FROM DUAL;
END;
/

After all these items are in place, the next thing to do is update the parameter files.  With any macros, the parameter files have to reference the macro library via an INCLUDE statement.  Then the mappings for the SETTINGS table have to be added to each parameter file before restarting the associated process.  More information on macros can be found here.

The biggest change that has to be made to existing processes comes in the pump process.  The pump has to be configured to be a PASSTHRU for all tables except the SETTINGS table.  In order to assign tokens to the SETTINGS table, the pump needs to handle it in NOPASSTHRU mode.  To do this, the pump parameter file needs to be updated with something similar to this approach:

NODYNAMICRESOLUTION
WILDCARDRESOLVE IMMEDIATE
NOPASSTHRU
TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );

DYNAMICRESOLUTION
WILDCARDRESOLVE DYNAMIC
PASSTHRU
TABLEEXCLUDE <schema>.SETTINGS;

TABLE <schema>.*;

With everything in place and the processes restarted, the crontab job should be updating the heartbeat on its scheduled interval.  Reviewing the GG_STATUS_HB table provides a quick view of the current application heartbeat, while the GG_STATUS_HB_HIST table gives a sense of how much lag is happening within your GoldenGate environment over time, without the additional overhead of a separate heartbeat process.
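A quick way to eyeball the heartbeat is a query like this sketch against the current table (the columns come from the DDL above):

select site_id, ext_lag, pmp_lag, rep_lag, total_lag_min, update_ts
from <target schema>.gg_status_hb
order by site_id;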

Enjoy!

about.me: http://about.me/dbasolved



Filed under: Golden Gate
Categories: DBA Blogs

OGG-00212, what a frustrating error.

Thu, 2014-08-14 14:50

Normally, I don't mind errors when I'm working with Oracle GoldenGate (OGG); I actually like getting errors, as they keep me on my toes and give me something to solve.  Clients, on the other hand, do not like errors…LOL.  Solving errors in OGG is normally pretty straightforward with the help of the documentation, although today I almost have to disagree with the docs.

Today, as I was working on implementing a solution with OGG 11.1.x on the source side and OGG 11.2.x on the target side, these errors came up as I was trying to start the OGG 11.1.x extracts:

OGG-00212  Invalid option for MAP: PMP_GROUP=@GETENV("GGENVIRONMENT".

OGG-00212  Invalid option for MAP: TOKENS(.

I looked around in the OGG documentation and other resources, both online and offline.  Some errors are self-explanatory; not so in the case of OGG-00212.  Looking up the error in the OGG 11.1.x docs was pointless; it didn't exist there.  When I finally found the error in the docs for OGG 11.2.x, the docs say:

OGG-00212: Invalid option for {0}: {1}
Cause: The parameter could not be parsed because the specified option is invalid.
Action: Fix the syntax

Now that the documentation has stated the obvious, how is the error actually corrected?  There is no easy way to correct this error because it is syntax related.  In my case, the error was being thrown due to needing additional spaces in the TABLE mapping.  Silly, I know, but true.

Keep in mind, to fix an OGG-00212 error, especially with OGG 11.1.x or older, remember to add spaces where you may not think one is needed.

Example (causes the error):

TABLE <schema>.SETTINGS,TOKENS( #opshb_info() );

Example (fixed the error):

TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );

Notice the space between the comma (,) and TOKENS, and also between TOKENS and the open parenthesis.  Those simple changes fixed the OGG-00212 error I was getting.

Hope this helps!

Enjoy!

http://about.me/dbasolved



Filed under: Golden Gate
Categories: DBA Blogs

#GoldenGate Bound Recovery

Sun, 2014-08-03 19:40

Every once in a while when I restart an extract, I see entries in the report file that reference "Bounded Recovery".  What exactly is "Bounded Recovery"?

First, keep in mind that “Bounded Recovery” is only for Oracle databases!

Second, according to the documentation, "Bounded Recovery" is a component of the general extract checkpointing facility.  This component guarantees an efficient recovery after an extract stops for any reason, no matter how many uncommitted transactions are currently outstanding.  The Bounded Recovery parameter sets an upper boundary for the maximum amount of time that an extract needs to recover to the point where it stopped before continuing normal processing.

The default setting for "Bounded Recovery" is 4 hours, and the needed recovery information is cached in the OGG_HOME/BR/<extract name> directory.  This is verified when I look at the report file for my extract named EXT.


2014-07-21 17:26:30 INFO OGG-01815 Virtual Memory Facilities for: BR
 anon alloc: mmap(MAP_ANON) anon free: munmap
 file alloc: mmap(MAP_SHARED) file free: munmap
 target directories:
 /oracle/app/product/12.1.2/oggcore_1/BR/EXT.

Bounded Recovery Parameter:
BRINTERVAL = 4HOURS
BRDIR = /oracle/app/product/12.1.2/oggcore_1

According to the documentation, the default settings for "Bounded Recovery" should be sufficient for most environments.  It is also noted that the "Bounded Recovery" settings shouldn't be changed without the guidance of Oracle Support.

Now that the idea of "Bounded Recovery" has been established, let's try to understand a bit more about how a transaction is recovered in Oracle GoldenGate with the "Bounded Recovery" feature.

At the start of a transaction, Oracle GoldenGate must cache the transaction (even if it contains no data); this is needed to support future operations of that transaction.  If the extract hits a commit, the cached transaction is written to the trail file and cleared from memory.  If the extract hits a rollback, the cached transaction is discarded from memory.  As long as an extract is processing a transaction, before a commit or rollback, the transaction is considered open and its information will be collected.  If the extract is stopped before it encounters a commit or rollback, the extract needs all of the cached transaction information recovered before it can start.  This approach applies to all transactions that were open at the time the extract was stopped.

There are three ways that an extract performs recovery:

  1. If there were no open transactions when the extract stopped, recovery begins at the current extract read checkpoint (normal recovery)
  2. If the start points of any open transactions in the log were very close in time to when the extract was stopped, the extract begins its recovery by re-reading the logs from the beginning of the oldest open transaction (also considered a normal recovery)
  3. If one or more open transactions qualified as long-running open transactions, the extract begins a Bounded Recovery

What defines a long-running transaction for Oracle GoldenGate?

A transaction is long-running in Oracle GoldenGate if it has been open longer than one (1) "Bounded Recovery" interval.

A "Bounded Recovery interval" is the amount of time between "Bounded Recovery checkpoints", each of which persists the current state and data of the extract to disk.  "Bounded Recovery checkpoints" are used to identify a recovery position between two "Bounded Recovery intervals".  The extract will pick up from the last "Bounded Recovery checkpoint", instead of processing from the log position where the open long-running transaction first appeared.

What is the maximum Bounded Recovery time?

The maximum bounded recovery time is no more than twice the current "Bounded Recovery checkpoint" interval.  However, the actual recovery time will be dictated by the following:

  1. The time from the last valid Bounded Recovery interval to when the extract was stopped
  2. Utilization of the extract in that period
  3. The percentage of utilization for transactions that were previously written to the trail

Now that the basic details of "Bounded Recovery" have been discussed, how can the settings for "Bounded Recovery" be changed?

“Bounded Recovery” can be changed by updating the extract parameter file with the following parameter:


BR
[, BRDIR directory]
[, BRINTERVAL number {M | H}]
[, BRKEEPSTALEFILES]
[, BROFF]
[, BROFFONFAILURE]
[, BRRESET]

As noted, there are a few options that can be set with the BR parameter.  If I wanted to shorten my "Bounded Recovery" interval and change the directory where the cached information is stored, I could do something similar to this:


--Bounded Recovery
BR BRDIR ./dirbr, BRINTERVAL 20M

In the example above, I'm changing the directory to a new directory called DIRBR (created manually alongside the standard subdirectories).  I also changed the interval from 4 hours to 20 minutes.
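Since DIRBR is not one of the standard subdirectories created by CREATE SUBDIRS, it has to be created by hand, something like:

$ cd /oracle/app/product/12.1.2/oggcore_1    # the $OGG_HOME in this environment
$ mkdir dirbr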

Note: 20 minutes is the smallest accepted time for the BRINTERVAL parameter.

After adding the BR parameter with options to the extract, the extract needs to be restarted.  Once the extract is up and running, the report file for the extract can be checked to verify that the new parameters have taken effect.


2014-08-02 22:20:54  INFO    OGG-01815  Virtual Memory Facilities for: BR
    anon alloc: mmap(MAP_ANON)  anon free: munmap
    file alloc: mmap(MAP_SHARED)  file free: munmap
    target directories:
    /oracle/app/product/12.1.2/oggcore_1/dirbr/BR/EXT.

Bounded Recovery Parameter:
BRINTERVAL = 20M
BRDIR      = ./dirbr

Hopefully, this post provides a better understanding of one of the least understood options within Oracle GoldenGate.

Enjoy!!

About me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs