DBA Blogs

Can I declare the record variables dynamically

Tom Kyte - Fri, 2016-08-12 23:46
Hi Tom, I am writing a procedure for field-to-field comparison of tables. Create or Replace procedure data_check AS type all_tabs is table of all_tables.table_name%type; v_tab_stg all_tabs := all_tabs(); v_tab_pstg all_tabs := all_t...
Categories: DBA Blogs

sql, plsql

Tom Kyte - Fri, 2016-08-12 23:46
Does Oracle support ON UPDATE CASCADE and ON DELETE CASCADE on a child table? E.g., I want to delete a particular value from a parent table column which is the primary key column, and this column has a relationship with a child table column, so if I try to delete ...
Categories: DBA Blogs

Purging Partition and subpartitions

Tom Kyte - Fri, 2016-08-12 23:46
Hi, We have a table with 120 partitions and we create around 3,500 subpartitions in total every month. Not all subpartitions have data in them, e.g. weekends and holidays. But over time these subpartitions have increased to 70,000. And now thi...
Categories: DBA Blogs

Presenting table data into a different format

Tom Kyte - Fri, 2016-08-12 23:46
I have a table with the following sample data:

Branch     Department
========   ==========
Sydney     Sales
Sydney     Research
Sydney     Finance
London     Sales
New York   Sales
New York   IT

How do I present this data in the table as this format: B...
Categories: DBA Blogs

Database Design

Tom Kyte - Fri, 2016-08-12 23:46
Tom: 1. I am designing a database for order entry. I have a table PO (po_no, po_date, po_status) and another child table (po_no, stock_no, stock_desc, qty, unit). Because a user may not find a stock_no in a reference table (for all stock numbers)...
Categories: DBA Blogs

SQL*Net message from client for MView refresh

Tom Kyte - Fri, 2016-08-12 23:46
Hi Tom, I am continuously getting SQL*Net message from client wait events while I am doing an mview refresh. While refreshing the mview, it accesses the data through a db link, and the source db and destination db are on the same server. So can you ...
Categories: DBA Blogs

How to copy data in XMLTYPE type (or CLOB) over 32K in a local table into a LONG type column in a remote table

Tom Kyte - Fri, 2016-08-12 23:46
The purpose is to copy data contained in an XMLTYPE column residing in a local table into a LONG column in a remote table. Our current PL/SQL (package) code first selects the XMLTYPE column into an XMLTYPE variable (lv_xml_doc), converts into CLOB a...
Categories: DBA Blogs

Oracle.

Tom Kyte - Fri, 2016-08-12 23:46
Sir, this is my question ************************ I have a sourcetableA with 10 records....and target tables are like (table1...table2....table5) id 1 2 3 4 5 . . 10... I want to load 2 records into table1 and the next 2 records into ta...
Categories: DBA Blogs

Freeing up space

Tom Kyte - Fri, 2016-08-12 23:46
Hi, I have a quick question that doesn't require a test script or anything, if I may? I have a table that is partitioned by date. Each year has its own ASSM locally managed tablespace. The dates go back as far as 2006. There is a BLOB which ho...
Categories: DBA Blogs

To TRANDATA or To SCHEMATRANDATA? … That is the #GoldenGate question of the day!

DBASolved - Fri, 2016-08-12 19:30

If you are familiar with using Oracle GoldenGate, you know that on the source side of the equation you have to enable supplemental logging and sometimes force logging on the database. I traditionally do both just to make sure that I capture as much as I can into the redo stream from the transactions on the database. For Oracle GoldenGate purposes, this is not the only thing you need to turn on to ensure all needed information is captured to the trail files.

There are two Oracle GoldenGate GGSCI commands that can be run to enable supplemental logging at the schema or table level. These commands are ADD TRANDATA and ADD SCHEMATRANDATA. What is the difference between the two, you may ask?

ADD TRANDATA – is used to enable supplemental logging at the table level
ADD SCHEMATRANDATA – is used to enable supplemental logging at the schema level

That is a very high-level view of the concept. What, really, is the difference between the two trandata approaches?

ADD TRANDATA:

The ADD TRANDATA command enables Oracle GoldenGate to acquire the transaction information that it needs from the transaction records. This version of the command can be used on the following databases:

  • DB2 for i Database
  • DB2 LUW Database
  • DB2 z/OS Database
  • Oracle Database
  • MS SQL Server
  • Sybase Database

For an Oracle Database, ADD TRANDATA enables unconditional logging of the primary key and conditional supplemental logging of all unique keys and foreign keys of the specified table. Additionally, you can use ADD TRANDATA with the COLS option to log any non-key columns, which can then be used in FILTER clauses and KEYCOLS specifications of the TABLE and MAP parameters.

An example of adding trandata to a schema would be:

GGSCI> dblogin useridalias gate
GGSCI> add trandata soe.*
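
If you also need non-key columns captured for use with FILTER or KEYCOLS, the COLS option can be added. A minimal sketch; the table and column names here are hypothetical:

GGSCI> dblogin useridalias gate
GGSCI> add trandata soe.orders, cols (order_status)

The corresponding extract parameter file could then reference that column, for example:

TABLE soe.orders, KEYCOLS (order_id), FILTER (@STREQ (order_status, 'SHIPPED'));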


Once trandata has been added to the schema/tables, you can verify its existence from GGSCI using the INFO TRANDATA command, as demonstrated in the command set below.

GGSCI> dblogin useridalias gate
GGSCI> info trandata soe.addresses
2016-08-12 15:07:23  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema SOE.
2016-08-12 15:07:23  INFO    OGG-01980  Schema level supplemental logging is enabled on schema SOE for all scheduling columns.
Logging of supplemental redo log data is enabled for table SOE.ADDRESSES.
Columns supplementally logged for table SOE.ADDRESSES: ADDRESS_ID, COUNTRY, COUNTY, CUSTOMER_ID, DATE_CREATED, HOUSE_NO_OR_NAME, POST_CODE, STREET_NAME, TOWN, ZIP_CODE.

Now that ADD TRANDATA has been run, what exactly does it do to the database it is run against? For an Oracle Database, ADD TRANDATA adds a Supplemental Log Group (SLG) to the table. This can be seen in the DBA_LOG_GROUPS view. The SLGs that are created are all labeled with the prefix “GGS”. The following output shows what this looks like after running the command for a whole schema.

select owner, log_group_name, table_name, log_group_type, always, generated 
from dba_log_groups
where owner = 'SOE'
and log_group_name like 'GGS%';

OWNER           LOG_GROUP_NAME       TABLE_NAME                     LOG_GROUP_TYPE                 ALWAYS                         GENERATED                    
--------------- -------------------- ------------------------------ ------------------------------ ------------------------------ ------------------------------
SOE             GGS_105669           CUSTOMERS                      USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105702           ADDRESSES                      USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105735           CARD_DETAILS                   USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105768           WAREHOUSES                     USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105769           ORDER_ITEMS                    USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105802           ORDERS                         USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105835           INVENTORIES                    USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105836           PRODUCT_INFORMATION            USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105837           LOGON                          USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105870           PRODUCT_DESCRIPTIONS           USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_105871           ORDERENTRY_METADATA            USER LOG GROUP                 ALWAYS                         USER NAME                     
SOE             GGS_254721           TEST1                          USER LOG GROUP                 ALWAYS                         USER NAME
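
To see which individual columns each of these log groups covers, you can also query the DBA_LOG_GROUP_COLUMNS dictionary view (the predicate mirrors the query above):

select table_name, column_name, position
from dba_log_group_columns
where owner = 'SOE'
and log_group_name like 'GGS%'
order by table_name, position;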

Now, there are some who will argue that the same effect can be achieved by simply adding an SLG to a table manually. Although this is true, Oracle GoldenGate uses the GGS_ prefix to keep track of the tables that are part of the replication process. It also makes cleanup easier: issuing DELETE TRANDATA removes the associated SLGs from the tables.
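
Cleaning up follows the same pattern as the earlier examples:

GGSCI> dblogin useridalias gate
GGSCI> delete trandata soe.*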

The ADD TRANDATA approach should be used with 11g or older versions of Oracle GoldenGate. As you move towards newer versions of Oracle GoldenGate, Oracle is encouraging everyone to pick up and use the ADD SCHEMATRANDATA method. So let's take a look at that now.

ADD SCHEMATRANDATA:

The ADD SCHEMATRANDATA command acts on all current and future tables in a given schema to automatically log a superset of available keys that Oracle GoldenGate needs for row identification. This version of trandata can be used with both the integrated and classic capture processes.

There are four key reasons why you should use ADD SCHEMATRANDATA:

  • Enables supplemental logging for new tables created with a CREATE TABLE DDL command (see the sketch after this list)
  • Updates supplemental logging for tables affected by an ALTER TABLE DDL command that adds or drops columns
  • Updates supplemental logging for tables affected by a RENAME TABLE command
  • Updates supplemental logging for tables affected by the adding or dropping of unique or primary key constraints
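
As a quick sketch of that first point: once ADD SCHEMATRANDATA is in place (the command itself is shown later in this post), a brand-new table is covered without any further GGSCI work. The table used here is hypothetical:

SQL> create table soe.promotions (
       promo_id    number primary key,
       promo_name  varchar2(50)
     );

GGSCI> dblogin useridalias ggate
GGSCI> info trandata soe.promotions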


Although ADD SCHEMATRANDATA can be used with both the integrated and classic capture processes, it is mostly geared towards integrated capture. There are three primary reasons to use ADD SCHEMATRANDATA with integrated capture:

  • Ensures that the correct key is logged by logging all the keys
  • Provides options that enable the logging of the primary, unique, and foreign keys to support the computation of dependencies among the tables being processed by the integrated replicats (think apply servers)
  • Ensures the appropriate key values are logged in the redo to allow DML to be mapped to objects that have DDL issued against them

Earlier in this post, I mentioned that I often enable “force logging” on the database when I enable minimal supplemental logging. Force logging is encouraged by Oracle, especially when using ADD SCHEMATRANDATA.
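
For reference, a minimal sketch of those database-level settings (standard SQL, run as a privileged user):

SQL> alter database add supplemental log data;   -- minimal supplemental logging
SQL> alter database force logging;

SQL> select supplemental_log_data_min, force_logging from v$database;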

Issuing ADD SCHEMATRANDATA against an Oracle database works much the same way as ADD TRANDATA, with the difference that you don't have to provide any wildcards. The example below shows how this is done:

GGSCI> dblogin useridalias ggate
GGSCI> add schematrandata soe
2016-08-12 15:47:40  INFO    OGG-01788  SCHEMATRANDATA has been added on schema soe.
2016-08-12 15:47:40  INFO    OGG-01976  SCHEMATRANDATA for scheduling columns has been added on schema soe.

After running ADD SCHEMATRANDATA, you can perform an INFO SCHEMATRANDATA on the schema to see what has been modified.

GGSCI (fred.acme.com as ggate@src12c) 9> info schematrandata soe

2016-08-12 15:51:52  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema SOE.

2016-08-12 15:51:52  INFO    OGG-01980  Schema level supplemental logging is enabled on schema SOE for all scheduling columns.

2016-08-12 15:51:52  INFO    OGG-10462  Schema SOE have 12 prepared tables for instantiation.

Digging around in the database to see if ADD SCHEMATRANDATA does the same thing as ADD TRANDATA with SLGs: well, it doesn't. ADD SCHEMATRANDATA does not create any SLGs. The only place I have found any record of supplemental logging being enabled by ADD SCHEMATRANDATA is the V_$GOLDENGATE_CAPABILITIES view. Here you can see that supplemental logging has been enabled, the number of times it has been acted upon, and when it was last used.
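
The listing below comes from a query along these lines (the exact TO_CHAR format mask is assumed from the column heading):

select name, count, to_char(last_used, 'DD-MON-YYYY HH24:MI'), con_id
from v$goldengate_capabilities;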

NAME                        COUNT TO_CHAR(LAST_USED     CON_ID
---------------------- ---------- ----------------- ----------
DBENCRYPTION                    0 12-JUN-2016 21:20          0
DBLOGREADER                     0 12-JUN-2016 21:20          0
TRIGGERSUPPRESSION              0 12-JUN-2016 21:20          0
TRANSIENTDUPLICATE              0 12-JUN-2016 21:20          0
DDLTRIGGEROPTIMIZATION          0 12-JUN-2016 21:20          0
GGSESSION                       0 12-JUN-2016 21:20          0
DELETECASCADEHINT               0 12-JUN-2016 21:20          0
SUPPLEMENTALLOG                 5 12-AUG-2016 16:02          0

Now, given that the integrated components of Oracle GoldenGate are closely related to Oracle Streams, there may be a Streams-related table or view that holds this information. Once I find it, I'll provide an update to this post.

In the meantime, I hope this post has provided some insight into the differences between ADD TRANDATA and ADD SCHEMATRANDATA.

If you are moving to or using the integrated products of Oracle GoldenGate, then ADD SCHEMATRANDATA is the method that you should be using.

Enjoy!!!

@dbasolved
http://about.me/dbasolved

Filed under: Golden Gate
Categories: DBA Blogs

Informational Error ORA-01013

Tom Kyte - Fri, 2016-08-12 05:26
Hi, This is my question: I need to log, in a table in my database, every action made by a script, even sudden interruptions such as Ctrl+C. The problem is that I don't know how to catch ORA-01013 (which is the code for abrupt interruptions...
Categories: DBA Blogs

tns and listener error

Tom Kyte - Fri, 2016-08-12 05:26
Hi Tom, In our Oracle database setup we frequently get errors like - Fatal NI connect error 12537, connecting to: (LOCAL=NO) VERSION INFORMATION: TNS for Linux: Version 11.2.0.1.0 - Production Oracle Bequeath NT Protoco...
Categories: DBA Blogs

Recursive subquery factoring for a table function

Tom Kyte - Fri, 2016-08-12 05:26
Hello Sir, I have this table function call : select * FROM TABLE ( MOD_GENERATOR.PKG_GENERATOR_XREF.refcur_to_table_func ( MOD_GENERATOR.F_GENERATOR_XREF (trunc(sysdate)) )) Now I have to call this 24 times ever...
Categories: DBA Blogs

Restoration from incremental backup

Tom Kyte - Fri, 2016-08-12 05:26
Hi, I have a Sunday full backup, then Monday, Tuesday, and Wednesday incremental level 1 backups, and the last 7 days of archives. Now I want to restore all of this to my restoration server to check backup consistency. So 1. which backup pieces do I need to move on Re...
Categories: DBA Blogs

index-by table/associative array

Tom Kyte - Fri, 2016-08-12 05:26
Can you join a PL/SQL collection like an index-by table/associative array to a regular table? If not, what is a good technique for matching data in both sets? Thanx, Don
Categories: DBA Blogs

Oracle Partner Resources for Cloud Services

Oracle's Cloud Platform helps your customers expand and integrate their existing IT business with Oracle's cloud services, accelerate application development and deployment, and lead business...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Links for 2016-08-11 [del.icio.us]

Categories: DBA Blogs

The Top 4 Ways to Handle Difficult Backup Environments

Kubilay Çilkara - Thu, 2016-08-11 16:35
It may come as a surprise to you, but the administrators of backup, networks, and systems are essentially the backbone of the IT world.  Did you know that these heroes are responsible for some very difficult tasks?  These tasks include, but are not limited to, keeping critical business data secure and up-to-date, getting more out of existing hardware, and keeping auditors happy.  Overall, they are the ones who keep the whole sphere in line. 

In recent years, however, the jobs of these individuals have changed quite a bit.  Virtual tape libraries (VTLs), virtual machines (VMs), and other backup-related technologies have made the job of a Backup Administrator much more complicated.  There is also more to manage when corporate acquisitions occur.  Add the facts that every department wants reports tailored to the factors most relevant to it, and that finance departments want each business unit to pay for its own storage, and it is clear that administrators have a lot on their plates!  The truth is, the plates of most Backup Administrators are full, and we haven't even touched on compliance reports yet! 

On the upside, most Backup Administrators are well-equipped to handle the large load of work now required of them.  However, they are still human, and that makes them limited in terms of how much time they can spend in a particular area.  For instance, because of the list of tasks mentioned above, everything takes longer.  What this means is that less time can be spent on management of the entire backup sphere.  This should not surprise anyone, as even administrators can only do so much!  The good news is that there is light at the end of the tunnel.  If you are in a situation where you have too many proverbial pots on the stove with respect to your backup environment, don’t worry!  Here are 4 essential tips that will help you to wrangle in those testy backup spheres. 

1.     Create a client backup status every day.
You must remember the importance of making the backup status of your clients clear on a daily basis.  To do this, first figure out which job information to use as your base; your backup applications supply status indicators that make this easier.  Next, consider your backup window.  Typically it will be something like 7pm to 7am, meaning that the status of a daily backup does not follow the calendar day.  Bear in mind the reality of missed jobs: when no job ran at all, there is nothing to report, so a client can look "ok" by default, when in truth it ought to be marked as "missed."  You can catch this by checking the scheduler rules; in the case of an external scheduler, its data needs to be associated with the client data in the backup product.  Finally, decide how you want to handle clients with many jobs: if one of several daily jobs fails, do you consider the day a success, a failure, or a partial?  These factors need to be determined before you implement a daily backup status.  After going through these steps, you simply need to start programming, obtaining and aggregating data, and saving the results in order to produce accurate reports (a SQL sketch of the core roll-up follows below).
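
As a sketch of how the window shift and daily roll-up might look in SQL (the backup_jobs table and its columns are hypothetical):

-- Attribute each job to a 7pm-7am backup window by shifting timestamps back 7 hours,
-- then roll all of a client's jobs in that window up into one daily status.
select client_name,
       trunc(end_time - 7/24) as backup_day,
       case
         when min(case when job_status = 'SUCCESS' then 1 else 0 end) = 1 then 'SUCCESS'
         when max(case when job_status = 'SUCCESS' then 1 else 0 end) = 1 then 'PARTIAL'
         else 'FAILED'
       end as daily_status
from backup_jobs
group by client_name, trunc(end_time - 7/24);

Clients that the scheduler expected but that are absent from this result would then be marked as "missed" via an outer join against the expected-client list.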

2.     Report on individual business units.
Most people reading this article are looking after a large number of PCs, servers, and databases.  Many of these devices are simply names on a screen, and the challenge of valuing the data on each machine is very real.  To make these names more meaningful, it is good practice to pair the backup clients with the company configuration management database (CMDB).  This way, you can collect information such as business unit or application name in a much more memorable fashion, report at the application or business unit level, and share that information with end users.  Bear in mind that there are many CMDB tools in existence, and extracting specific data from them programmatically can be significantly difficult.  To get around this, some people obtain an extract of the CMDB as a CSV file, organized into columns for hostname, business unit name, and application name.  With that information available, administrators can map it to the storage or backup status of each individual client (see the sketch below) and, as mentioned above, share it with end users, which is a huge benefit to all. 
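
A sketch of that mapping, assuming the CMDB extract has been loaded into a table (all table and column names here are hypothetical):

-- cmdb_extract(hostname, business_unit, application_name) is loaded from the CMDB CSV;
-- backup_client_status holds the per-client daily status from tip 1.
select c.business_unit,
       c.application_name,
       s.client_name,
       s.daily_status
from backup_client_status s
left join cmdb_extract c
  on upper(s.client_name) = upper(c.hostname)   -- hostname case often differs between systems
order by c.business_unit, c.application_name, s.client_name;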

3.     Report on your storage.
Managers and users alike commonly want to know about their storage usage.  Likewise, teams want this information in order to accurately forecast future storage needs and additional storage purchases.  A good rule of thumb when reporting on storage is to keep a record of daily data points for all key elements: the raw data, the compressed data, and the deduplicated data, if applicable.  Keep the level of granularity low, beginning with total storage, and then moving on to storage pools, file systems, or shares, if applicable.  Do remember that this data is only relevant for a few months after reporting.  For VTLs and other deduplicating devices, you should also track the deduplication ratio over time (a sketch follows below), because degradation will likely result in extra storage costs per TB of raw data, not to mention additional processing cycles on the deduplication device.
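
A sketch of tracking the deduplication ratio from daily data points (the storage_daily table is hypothetical):

-- One row per device per day: storage_daily(snap_date, device_name, raw_tb, stored_tb)
select snap_date,
       device_name,
       raw_tb,
       stored_tb,
       round(raw_tb / nullif(stored_tb, 0), 2) as dedup_ratio   -- a falling ratio signals degradation
from storage_daily
where snap_date >= add_months(trunc(sysdate), -3)   -- the data is mainly useful for a few months
order by device_name, snap_date;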

4.     Don’t wait!  Be sure to automate!

You might be concerned, after reading this article, that there is a load of work you must do manually.  Do not fear!  There are various software solutions that will help you automate many of the processes mentioned above.  The best part is that your system will then be equipped to perform in a proactive manner, instead of one that is reactive.  By investing in appropriate software, you can be assured that your backup reporting strategy will be top-notch!

Amedee Potier joined Rocket Software in 2003 and is currently Senior Director of R&D, where he oversees several Rocket products in the Data Protection space. His focus is on solutions for data protection and management in heterogeneous multi-vendor and multi-platform environments.
Categories: DBA Blogs

Moving XML data from BASICFILE CLOB to BINARY XML

Tom Kyte - Thu, 2016-08-11 11:06
Currently I have a table which stores XML in an XMLType column that internally stores the XML as a BASICFILE CLOB. I have to migrate the XML data to an XMLType column that stores the XML as SECUREFILE binary XML. I have tried using the examples given in the link ht...
Categories: DBA Blogs
