Feed aggregator

Video Tutorial: XPLAN_ASH Active Session History - Part 8

Randolf Geist - Sun, 2016-07-17 09:54
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality, continuing the actual walk-through of the script output.

More parts to follow.


Why the "Row Source Operation" ('reality exec') differs from the "Execution Plan" ('guess exec')

Tom Kyte - Sun, 2016-07-17 07:06
Hi, Team There's a procedure in our Oracle 11.2.0.1 DB which captures data from MS SQL Server through DB Link, since the context is quite long (cursors, pragma autonomous_transaction and loop included), let us take it brief like this: CREATE OR...
Categories: DBA Blogs

XML Update

Tom Kyte - Sun, 2016-07-17 07:06
Hi Tom, I need to update some attribute(XML) in oracle table whose datatype is XML. Is there any package provided by oracle which help me? Or if you can tell me anyway how can I update it.
Categories: DBA Blogs

clone(duplicate active Database)

Tom Kyte - Sun, 2016-07-17 07:06
I Want to clone a database from one computer to another computer, both computer connected with lan. The database which i have to clone is available on computer 2. on computer 1 i have created an instance using oradim utility On computer 2 <code>...
Categories: DBA Blogs

Add_months

Tom Kyte - Sun, 2016-07-17 07:06
Hi Guys, I am just a little bit confused regarding the following : select add_months(to_date('30/01/2016','DD/MM/YYYY'),1) from dual Result is : 29/02/2016, shouldn't it be 28/02/2016 ?? Thanks Mohannad
Categories: DBA Blogs

issue while adding new apps node in oracle EBS r12.2.5

Tom Kyte - Sun, 2016-07-17 07:06
Hi DBA Experts, I am facing following issue while adding new application node please help me to fix this issue. Executing command: perl /u02/applmgr/SR1225/fs2/EBSapps/appl/ad/12.0.0/patch/115/bin/adProvisionEBS.pl ebs-create-node -contextfi...
Categories: DBA Blogs

export to CSV

Tom Kyte - Sun, 2016-07-17 07:06
Tom, I need to export data from a table into a .csv file. I need to have my column headers in between "" and data separated by ','. also depending on the column values a row may be printed up to 5 times with data differing in only one field. ...
Categories: DBA Blogs

Links for 2016-07-16 [del.icio.us]

Categories: DBA Blogs

AMM and ASMM derived parameters

Yann Neuhaus - Sat, 2016-07-16 15:24

The latest DBA Essentials Workshop training I’ve given raised a question about PGA_AGGREGATE_LIMIT. The default depends on PGA_AGGREGATE_TARGET. So how is it calculated in AMM where PGA_AGGREGATE_TARGET is dynamic? Is it also dynamic or is it determined by the value at instance startup only?

The PGA_AGGREGATE_LIMIT default value is documented. I’ll use the following query to display the values of the concerned parameters:

select
dbms_stats_internal.get_parameter_val('pga_aggregate_limit')/1024/1024/1024 "pga_aggregate_limit",
2 "2GB",
3*dbms_stats_internal.get_parameter_val('processes')/1024 "3MB*processes",
2*dbms_stats_internal.get_parameter_val('__pga_aggregate_target')/1024/1024/1024 "2*__pga_aggregate_target",
dbms_stats_internal.get_parameter_val('__sga_target')/1024/1024/1024 "__sga_target",
dbms_stats_internal.get_parameter_val('__pga_aggregate_target')/1024/1024/1024 "__pga_aggregate_target"
from dual
/

and I start with the following:

pga_aggregate_limit 2GB 3MB*processes 2*__pga_aggregate_target __sga_target __pga_aggregate_target
------------------- ---------- ------------- ------------------------ ------------ ----------------------
2.40625 2 .87890625 2.40625 1.796875 1.203125

I’m in AMM with only MEMORY_TARGET set to 3G. The dynamic SGA is at 1.8G and the PGA at 1.2G. The PGA_AGGREGATE_LIMIT is at 200% of the PGA, which is 2.4G.
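The documented default is the greatest of 2GB, 3MB times the PROCESSES parameter, and 200% of PGA_AGGREGATE_TARGET, which is exactly what the query columns compare. A minimal Python sketch of that formula, using the values from the output above (PROCESSES=300 is inferred from the 3MB*processes column):

```python
GB = 1024 ** 3
MB = 1024 ** 2

def pga_aggregate_limit_default(processes, pga_aggregate_target):
    """Greatest of 2 GB, 3 MB * PROCESSES, and 200% of PGA_AGGREGATE_TARGET."""
    return max(2 * GB, 3 * MB * processes, 2 * pga_aggregate_target)

# Values from the query output: PROCESSES=300, __pga_aggregate_target=1.203125 GB
print(pga_aggregate_limit_default(300, int(1.203125 * GB)) / GB)  # 2.40625
```

With the post-resize PGA of 0.546875GB, the same formula returns the 2GB floor, matching the post-restart output shown further down.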

I increase the SGA in order to see a resize of the PGA:


SQL> alter system set sga_target=2500M;
System altered.

The PGA is now about 500M in order to release some space for SGA:

pga_aggregate_limit 2GB 3MB*processes 2*__pga_aggregate_target __sga_target __pga_aggregate_target
------------------- ---------- ------------- ------------------------ ------------ ----------------------
2.40625 2 .87890625 1.09375 1.796875 .546875

However, the PGA_AGGREGATE_LIMIT did not change: the formula is not dynamic. The value that was calculated at startup remains.

spfile

When dynamic components are resized, the values are written into the spfile as double-underscore parameters, so that a restart of the instance starts with the same values:
SQL> host strings /u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfileCDB.ora | grep target
CDB.__pga_aggregate_target=587202560
CDB.__sga_target=1929379840
*.memory_max_target=5G
*.memory_target=3G
*.sga_target=2634022912

So let’s restart and see what happens to PGA_AGGREGATE_LIMIT (which has no double-underscore entry in the spfile):

SQL> startup force
ORACLE instance started.
 
Total System Global Area 5368709120 bytes
Fixed Size 2935712 bytes
Variable Size 3976201312 bytes
Database Buffers 721420288 bytes
Redo Buffers 13840384 bytes
In-Memory Area 654311424 bytes
Database mounted.
Database opened.
 
SQL> select
2 dbms_stats_internal.get_parameter_val('pga_aggregate_limit')/1024/1024/1024 "pga_aggregate_limit",
3 2 "2GB",
4 3*dbms_stats_internal.get_parameter_val('processes')/1024 "3MB*processes",
5 2*dbms_stats_internal.get_parameter_val('__pga_aggregate_target')/1024/1024/1024 "2*__pga_aggregate_target",
6 dbms_stats_internal.get_parameter_val('__sga_target')/1024/1024/1024 "__sga_target",
7 dbms_stats_internal.get_parameter_val('__pga_aggregate_target')/1024/1024/1024 "__pga_aggregate_target"
8 from dual
9 /
 
pga_aggregate_limit 2GB 3MB*processes 2*__pga_aggregate_target __sga_target __pga_aggregate_target
------------------- ---------- ------------- ------------------------ ------------ ----------------------
2 2 .87890625 1.09375 2.453125 .546875

The good thing is that the value is calculated from the actual values: here, 200% of the PGA is smaller than 2GB, so 2GB is used.

The bad thing is that a restart of the instance may bring a different behavior than before the restart.

So what?

This instability is easy to solve: don’t use AMM. SGA and PGA are different things, and you should size them separately.
But the problem is wider. Other parameters can show the same behavior. For example, the default db_file_multiblock_read_count can be limited by processes x __db_block_buffers.
You may have to change some values, either manually or automatically, at the start of a new application because you don’t know the best settings yet. But once the application has stabilized, you should stabilize the dynamic sizing by setting minimum values.

 

The post AMM and ASMM derived parameters appeared first on Blog dbi services.

Finding transacted tables

Tom Kyte - Sat, 2016-07-16 12:46
Hello, Using Oracle data dictionary, how to find out the list of tables that have undergone Insert/Update/Delete by a particular user account in the last 7 days? Also, if possible I want to know the number of transactions happened and the size of...
Categories: DBA Blogs

What is best way to collect GLOBAL STATS of a table with 4 billion records

Tom Kyte - Sat, 2016-07-16 12:46
Hi Tom I need to take global stats collection for a table with 4 billion records. This is a partitioned table but I need to collect STATS globally. This table was last analyzed in Sept 2015. As of now following is Table statistics Actual Number...
Categories: DBA Blogs

DENSE_RANK function

Tom Kyte - Sat, 2016-07-16 12:46
Hi Tom, I have a problem with DENSE_RANK function. Let's see an example: <code>CREATE TABLE test_rank (val number); INSERT INTO test_rank VALUES(1); INSERT INTO test_rank VALUES(2); INSERT INTO test_rank VALUES(3); INSERT INTO test_rank VALUE...
Categories: DBA Blogs

DROP TABLE performance/delay

Tom Kyte - Sat, 2016-07-16 12:46
Hello Tom, We have a new application which has serious performance problems, and a reason for that might be the Oracle DB. Unfortunately, tests could not find the root cause yet. Now I saw a very strange behavior of the DROP TABLE performance: ...
Categories: DBA Blogs

count of distinct on multiple columns does not work

Tom Kyte - Sat, 2016-07-16 12:46
Hi, I am trying to count the number of distinct combinations in a table but the query gives error. For example, create table t(a varchar2(10), b varchar2(10), c varchar2(10)); insert into t values('a','b','c'); insert into t values('d','e'...
Categories: DBA Blogs

Creating and Executing a stored procedure that dynamically builds a table getting ORA-06550: PLS-00103

Tom Kyte - Sat, 2016-07-16 12:46
Receiving ORA-06550: PLS-00103 error when trying to execute a procedure that dynamically creates a table. I have created a pl/sql script to dynamically create a table: declare l_tablename varchar2(30) := 'TEST_3_'||to_char(sysdate, 'YYYYMMDD')...
Categories: DBA Blogs

GoldenGate and Networking

Michael Dinh - Fri, 2016-07-15 23:16

I have never delved into networking for GoldenGate since things just work. Due to security tightening, telnet is no longer available to verify open ports.

What follows is a demo of the ports GoldenGate listens on when the manager is started, and of the ports listened on when the PUMP extract is started, using DYNAMICPORTLIST.

Also, how to test an open port using nc (arbitrary TCP and UDP connections and listens).

Both source and target are running on the same host; however, the same principles still apply.

HAWK (source): PORT 7901 and DYNAMICPORTLIST 15100-15120

THOR (target):   PORT 7801 and DYNAMICPORTLIST 15200-15220

 

 

HAWK (source):

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01

$ cat dirprm/mgr.prm
PORT 7901
DYNAMICPORTLIST 15100-15120
ACCESSRULE, PROG server, IPADDR *, ALLOW
ACCESSRULE, PROG *, IPADDR *, ALLOW
USERIDALIAS ggs_admin
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
-- AUTOSTART ER *
-- AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60
CHECKMINUTES 5
LAGCRITICALMINUTES 15

++++++++++

GGSCI (arrow.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     E_HAWK      00:00:06      00:00:47
EXTRACT     STOPPED     P_HAWK      00:00:00      00:00:35

GGSCI (arrow.localdomain) 2>
THOR (target):
oracle@arrow:thor:/u01/app/12.2.0.1/ggs02
$ cat dirprm/mgr.prm
PORT 7801
DYNAMICPORTLIST 15200-15220
ACCESSRULE, PROG server, IPADDR *, ALLOW
ACCESSRULE, PROG *, IPADDR *, ALLOW
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
-- AUTOSTART ER *
-- AUTORESTART ER *, RETRIES 5, WAITMINUTES 2, RESETMINUTES 60
CHECKMINUTES 5
LAGCRITICALMINUTES 15

++++++++++

GGSCI (arrow.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     R_HAWK      00:00:00      70:18:54

GGSCI (arrow.localdomain) 2>
Ports 7801 and 7901 are in LISTEN mode, and none of the ports from the DYNAMICPORTLIST ranges are listening.
oracle@arrow:thor:/u01/app/12.2.0.1/ggs02
$ netstat -lnp|grep "LISTEN "|egrep "mgr|server"
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 :::7801                     :::*                        LISTEN      2808/./mgr
tcp        0      0 :::7901                     :::*                        LISTEN      3068/./mgr
oracle@arrow:thor:/u01/app/12.2.0.1/ggs02
HAWK (source):  Start PUMP
GGSCI (arrow.localdomain) 1> start p*

Sending START request to MANAGER ...
EXTRACT P_HAWK starting

GGSCI (arrow.localdomain) 2> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     E_HAWK      00:00:06      00:02:03
EXTRACT     RUNNING     P_HAWK      00:00:00      00:01:50

GGSCI (arrow.localdomain) 3>

Ports 7801 and 7901 are still in LISTEN mode, and now port 15200 from the target's DYNAMICPORTLIST is listening as well, opened by the server process spawned for the pump:

oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$ netstat -lnp|grep "LISTEN "|egrep "mgr|server"
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 :::7801                     :::*                        LISTEN      2808/./mgr
tcp        0      0 :::7901                     :::*                        LISTEN      3068/./mgr
tcp        0      0 :::15200                    :::*                        LISTEN      3297/./server
oracle@arrow:hawk:/u01/app/12.2.0.1/ggs01
$
How to use nc to test an open port? First find which RPM provides nc (yum whatprovides nc), then install it with yum install nc.
[root@arrow ~]# yum whatprovides nc
Loaded plugins: refresh-packagekit, security
epel/metalink                                                                                   |  12 kB     00:00
epel                                                                                            | 4.3 kB     00:00
epel/primary_db                                                                                 | 5.8 MB     00:03
public_ol6_UEKR3_latest                                                                         | 1.2 kB     00:00
public_ol6_latest                                                                               | 1.4 kB     00:00
public_ol6_latest/primary                                                                       |  59 MB     00:26
public_ol6_latest                                                                                          36199/36199
nc-1.84-22.el6.x86_64 : Reads and writes data across network connections using TCP or UDP
Repo        : public_ol6_latest
Matched from:

nc-1.84-24.el6.x86_64 : Reads and writes data across network connections using TCP or UDP
Repo        : public_ol6_latest
Matched from:

nc-1.84-24.el6.x86_64 : Reads and writes data across network connections using TCP or UDP
Repo        : installed
Matched from:
Other       : Provides-match: nc

[root@arrow ~]#
DEMO:
oracle@arrow:thor:/u01/app/12.2.0.1/ggs02
$ nc -zv arrow 7901
Connection to arrow 7901 port [tcp/tnos-sp] succeeded!

oracle@arrow:thor:/u01/app/12.2.0.1/ggs02
$ nc -zv arrow 15200
Connection to arrow 15200 port [tcp/*] succeeded!

oracle@arrow:thor:/u01/app/12.2.0.1/ggs02
$ nc -zv arrow 15201
nc: connect to arrow port 15201 (tcp) failed: Connection refused
nc: connect to arrow port 15201 (tcp) failed: Connection refused
oracle@arrow:thor:/u01/app/12.2.0.1/ggs02
$
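If nc is not available either, the same kind of -z probe can be done with a few lines of standard-library Python (the host name arrow and the 15200-15220 range are taken from this demo):

```python
import socket

def port_open(host, port, timeout=1.0):
    """Rough equivalent of `nc -z host port`: True if a TCP connect succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Scan the target's DYNAMICPORTLIST range for listening server processes
for port in range(15200, 15221):
    if port_open("arrow", port):
        print(port, "open")
```

Like nc -z, this only checks that the TCP handshake succeeds; it does not exchange any GoldenGate protocol data.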

Average of input two Dates

Tom Kyte - Fri, 2016-07-15 18:26
How Can i create a oracle sql Function which can take two input Dates and Display their Average.For Ex:- input :-28-Aug-2016,4-Sep-2016 output:- 1-Sep-2016
Categories: DBA Blogs

Constraints

Tom Kyte - Fri, 2016-07-15 18:26
<code>Tom, Are the following a full list of all possible user constraint_types P - primary key? C - check? R - foreign key? U - unique ? And what are their full meanings? Are the dab constraint_types the same? Thanks Brian </code>
Categories: DBA Blogs

PaaS & IaaS Cloud Platform - Support Resources

Chris Warticki - Fri, 2016-07-15 12:38

First and ALWAYS – the #1 investment is made in the PRODUCT, PRODUCT, PRODUCT.

Remain a student of the product.

1. Cloud Computing Product Information Page

2. PaaS Platform Information Page

3. IaaS Product Information Page

4. Oracle Cloud Learning Library

5. PaaS Learning Subscription

6. Cloud.Oracle.com – Oracle Cloud Portal (Subscription and Services Admin)

Personalize My Oracle Support Experience

· Setup Proactive Alerts and Notifications

· Customize your MOS Dashboard

Collaborate. Communicate. Connect

· Subscribe:

· Oracle Mobile App – News, Events, Mobile MOS, Videos etc

SOCIAL Circles of Influence

· Cloud Solutions Blog

· PaaS Blog

· DaaS Blog

· YouTube – Cloud

· Oracle Cloud Zone

· Cloud Café (Podcasts)

KNOW Support Best Practices

Engage with Oracle Support

1. Upload ALL reports if logging a Service Request

2. Leverage Oracle Collaborative Support (web conferencing)

3. Better Yet – Record your issue and upload it (why wait for a scheduled web conference?)

4. Request Management Attention as necessary

5. Know My Oracle Support? Get Accredited

Truncates and Sequences Replication in Oracle GoldenGate

Pythian Group - Fri, 2016-07-15 11:29

We use the terms DDL and DML in our DBA life with Oracle all the time. The first stands for Data Definition Language and covers changes to data definitions. The latter stands for Data Manipulation Language and is about manipulating the data itself. When we replicate data between Oracle databases, we generally either enable DDL support alongside DML, replicate only DML, or do both together. In general, I would recommend replicating both DML and DDL just in case, to prevent the replication from breaking on unexpected structural changes in the replicated schemas. But in some cases you do not want to replicate all DDL, or any DDL at all, for certain reasons. I will discuss a couple of operations which are handled slightly differently from pure DDL/DML changes in GoldenGate.

The first of them is the truncate operation. In Oracle it is definitely DDL, and you can see that:

orcl> select object_id,data_object_id,last_ddl_time from dba_objects where object_name='EMP_TEST';

       OBJECT_ID   DATA_OBJECT_ID LAST_DDL_TIME
---------------- ---------------- -----------------
	  148769	   148769 06/24/16 16:07:04

orcl> truncate table emp_test;

Table truncated.

orcl> select object_id,data_object_id,last_ddl_time from dba_objects where object_name='EMP_TEST';

       OBJECT_ID   DATA_OBJECT_ID LAST_DDL_TIME
---------------- ---------------- -----------------
	  148769	   148770 06/24/16 16:15:52

orcl>

It is clear that the object gets a new data_object_id, and last_ddl_time shows the new time.

There is a general assumption that you need to enable DDL support to replicate truncates. But is this true? In reality you can replicate truncates (with some limitations) without full DDL support, and I want to show you how it can be done. What you need to do is set the GETTRUNCATES parameter.
Let’s set it on the extract and see how it works.

Here is my extract parameter file:

[oracle@bigdatalite oggora]$ cat dirprm/trext.prm
extract trext
userid ogg, password AACAAAAAAAAAAAIARIXFKCQBMFIGFARA, BLOWFISH, ENCRYPTKEY DEFAULT
--RMTHOSTOPTIONS
RMTHOST bigdatalite, MGRPORT 7849
RMTTRAIL ./dirdat/tr, format release 11.2
GETTRUNCATES
TABLE trsource.*;

We don’t have DDL support, so if we add a column on the source and put a value in that column, our replicat on the other side will abend.

orcl> alter table trsource.emp add col1 varchar2(10);

Table altered.

orcl> update trsource.emp set col1='Test1' where empno=7499;

1 row updated.

orcl> commit;

And on the target side:

2016-06-27 13:51:47  INFO    OGG-01021  Oracle GoldenGate Delivery for Oracle, trrep.prm:  Command received from GGSCI: STATS.
2016-06-27 13:57:37  ERROR   OGG-01161  Oracle GoldenGate Delivery for Oracle, trrep.prm:  Bad column index (8) specified for table TRSOURCE.EMP, max columns = 8.
2016-06-27 13:57:37  ERROR   OGG-01668  Oracle GoldenGate Delivery for Oracle, trrep.prm:  PROCESS ABENDING.

You are going to get similar errors for other DDL operations but not for truncates.

orcl> truncate table trsource.emp;

Table truncated.

orcl>
GGSCI (bigdatalite.localdomain) 1> send trext, stats

Sending STATS request to EXTRACT TREXT ...

Start of Statistics at 2016-06-27 14:05:24.

Output to ./dirdat/tr:

Extracting from TRSOURCE.EMP to TRSOURCE.EMP:

*** Total statistics since 2016-06-27 14:05:07 ***
	Total inserts                   	           0.00
	Total updates                   	           0.00
	Total deletes                   	           0.00
	Total truncates                 	           1.00
	Total discards                  	           0.00
	Total operations                	           1.00

You can see that our extract has captured the truncate even though DDL support is disabled. We now need to set the same GETTRUNCATES parameter on the replicat side. Why do we need to set it explicitly? Because the GoldenGate default for all processes is IGNORETRUNCATES. With the parameter set, the truncates will be applied to the target system.

We set the parameter on the replicat side and check the result:

[oracle@bigdatalite ogg11ora]$ cat dirprm/trrep.prm
replicat trrep
--trace
DISCARDFILE ./dirdsc/discard.out, append
userid ogg@test, password AACAAAAAAAAAAAIARIXFKCQBMFIGFARA, BLOWFISH, ENCRYPTKEY DEFAULT
assumetargetdefs
--DDL include all
GETTRUNCATES
map trsource.emp, target trdest.emp;
[oracle@bigdatalite ogg11ora]$


GGSCI (bigdatalite.localdomain) 4> send trrep, stats

Sending STATS request to REPLICAT TRREP ...

Start of Statistics at 2016-06-27 14:08:40.

Replicating from TRSOURCE.EMP to TRDEST.EMP:

*** Total statistics since 2016-06-27 14:08:25 ***
	Total inserts                   	           0.00
	Total updates                   	           0.00
	Total deletes                   	           0.00
	Total truncates                 	           1.00
	Total discards                  	           0.00
	Total operations                	           1.00

test> select count(*) from trdest.emp;

	COUNT(*)
----------------
	       0

test>

So we don’t need full DDL support if we only want to replicate truncates. This can help when the workflow includes truncates but we don’t want to replicate all DDL commands for some reason. Just keep in mind the limitations: you cannot replicate "truncate partition" this way in Oracle; that requires full DDL support.

The second thing I want to discuss in this topic is support for replicating sequence values. Sometimes people assume that it requires DDL support, but this is not true. As a matter of fact, replicating sequence values does not require you to enable DDL support for your replication. Of course, you need full DDL replication support to replicate CREATE, ALTER, DROP, and RENAME for sequences, but the values are replicated as DML.

To enable the replication of sequences you need to create a special user on the source and target databases, register that user with the GGSCHEMA parameter in your GLOBALS file, and run one script to create all necessary procedures in the newly created schema.
Let’s have a closer look. I have a user OGG that I use for the connection, and I plan to use the same user for sequence support.

Here is my GLOBALS file:

[oracle@bigdatalite oggora]$ cat GLOBALS
GGSCHEMA OGG

I have the same on the target side:

[oracle@bigdatalite ogg11ora]$ cat GLOBALS
GGSCHEMA OGG

I ran the script sequence.sql on both sides.

orcl> @sequence.sql
Please enter the name of a schema for the GoldenGate database objects:
ogg
Setting schema name to OGG

UPDATE_SEQUENCE STATUS:

Line/pos				 Error
---------------------------------------- -----------------------------------------------------------------
No errors				 No errors

GETSEQFLUSH

Line/pos				 Error
---------------------------------------- -----------------------------------------------------------------
No errors				 No errors

SEQTRACE

Line/pos				 Error
---------------------------------------- -----------------------------------------------------------------
No errors				 No errors

REPLICATE_SEQUENCE STATUS:

Line/pos				 Error
---------------------------------------- -----------------------------------------------------------------
No errors				 No errors

STATUS OF SEQUENCE SUPPORT
--------------------------------------------------------------
SUCCESSFUL installation of Oracle Sequence Replication support
orcl>

On the source side, add primary key supplemental logging to the sys.seq$ table:

orcl> ALTER TABLE sys.seq$ ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

Table altered.

orcl>

You may have a look at the procedures created by the script:
SEQTRACE
GETSEQFLUSH
REPLICATESEQUENCE
UPDATESEQUENCE

These procedures provide the interface to flush, update, and replicate the sequences.

Now we are creating a sequence on the source and target with the same parameters.

orcl> create sequence trsource.empno_seq start with 8100;

Sequence created.

orcl>

We add the SEQUENCE parameter to our extract parameter file:

[oracle@bigdatalite oggora]$ cat dirprm/trext.prm
extract trext
userid ogg, password AACAAAAAAAAAAAIARIXFKCQBMFIGFARA, BLOWFISH, ENCRYPTKEY DEFAULT
--RMTHOSTOPTIONS
RMTHOST bigdatalite, MGRPORT 7849
RMTTRAIL ./dirdat/tr, format release 11.2
--DDL include objname trsource.*
GETTRUNCATES
SEQUENCE trsource.*;
TABLE trsource.*;

[oracle@bigdatalite oggora]$

On the target we are creating the same sequence:

test> create sequence trdest.empno_seq start with 8100;

Sequence created.

test>
[oracle@bigdatalite ogg11ora]$ cat dirprm/trrep.prm
replicat trrep
--trace
DISCARDFILE ./dirdsc/discard.out, append
userid ogg@test, password AACAAAAAAAAAAAIARIXFKCQBMFIGFARA, BLOWFISH, ENCRYPTKEY DEFAULT
assumetargetdefs
--DDL include all
GETTRUNCATES
map trsource.*, target trdest.*;

I made our sequences a bit different on purpose. Our source sequence has a slightly higher current value than the target:

orcl> select trsource.empno_seq.currval from dual;

	 CURRVAL
----------------
	    8102

orcl>


test> select trdest.empno_seq.currval from dual;

	 CURRVAL
----------------
	    8100

test>

What we need is to run the FLUSH SEQUENCE command on the extract side:

GGSCI (bigdatalite.localdomain) 9> dblogin userid ogg@orcl,password AACAAAAAAAAAAAIARIXFKCQBMFIGFARA, BLOWFISH, ENCRYPTKEY DEFAULT
Successfully logged into database.

GGSCI (bigdatalite.localdomain as ogg@orcl) 3> flush sequence trsource.empno_seq
Successfully flushed 1 sequence(s) trsource.empno_seq.

GGSCI (bigdatalite.localdomain as ogg@orcl) 4>

And on the target we can see:

test> select * from dba_sequences where SEQUENCE_NAME='EMPNO_SEQ';

SEQUENCE_OWNER		       SEQUENCE_NAME			     MIN_VALUE	      MAX_VALUE     INCREMENT_BY C O	   CACHE_SIZE	   LAST_NUMBER
------------------------------ ------------------------------ ---------------- ---------------- ---------------- - - ---------------- ----------------
TRDEST			       EMPNO_SEQ				     1 9999999999999999 	       1 N N		   20		  8143

test> select last_number from dba_sequences where SEQUENCE_NAME='EMPNO_SEQ';

     LAST_NUMBER
----------------
	    8143

test>

The last number for the sequence on the target has been increased to 8143, while on the source only the cache was flushed, leaving 8123 as the last number for the sequence:

orcl> select last_number from dba_sequences where SEQUENCE_NAME='EMPNO_SEQ';

     LAST_NUMBER
----------------
	    8123

orcl>
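One way to read these numbers (an assumption about the bookkeeping, not documented behavior): FLUSH SEQUENCE pushes the source high-water mark to 8123, the replicat applies that value, and the target sequence then allocates its own cache of 20 on top of it:

```python
cache_size = 20                # CACHE_SIZE shown in dba_sequences on both sides
source_last_number = 8123      # source high-water mark after FLUSH SEQUENCE
# The target applies the flushed value and adds its own cache allocation:
target_last_number = source_last_number + cache_size
print(target_last_number)  # 8143, matching the target's dba_sequences
```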

Let’s try to get new values for the sequence.

orcl> select trsource.empno_seq.nextval from dual;

	 NEXTVAL
----------------
	    8104

orcl> select trsource.empno_seq.nextval from dual;

	 NEXTVAL
----------------
	    8105

We continue to draw values on the source, and as soon as we cross the source's last number (8123), we get a new value on the target:

orcl> select trsource.empno_seq.nextval from dual;

	 NEXTVAL
----------------
	    8119

........
	 NEXTVAL
----------------
	    8124

orcl>


test> select last_number from dba_sequences where SEQUENCE_NAME='EMPNO_SEQ';

     LAST_NUMBER
----------------
	    8144

test>

And the statistics on the target show these sequence operations as updates:

GGSCI (bigdatalite.localdomain) 1> send trrep, stats

Sending STATS request to REPLICAT TRREP ...

Start of Statistics at 2016-06-29 13:20:36.

Replicating from TRSOURCE.EMPNO_SEQ to TRDEST.EMPNO_SEQ:

*** Total statistics since 2016-06-29 13:10:52 ***
	Total updates                   	           4.00
	Total discards                  	           0.00
	Total operations                	           4.00


We can see that these two operations are handled a bit differently from all other standard DDL and DML in Oracle GoldenGate. I hope this small piece of information helps you in your implementation, or in supporting your GoldenGate environment.

Stay tuned, and keep your eyes on the Pythian blog.

Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator