DBA Blogs

insert into local table with select from multiple database links in a loop

Tom Kyte - Fri, 2018-05-11 08:06
Hi Tom, i would like to apply the Original SQL Statement from Oracle MOS DOC ID 1317265.1 and 1309070.1 for license and healthcheck for all of my database instances. My Goal is to create a centralized repository with information about my databases. Un...
Categories: DBA Blogs

Impdp not failing even if target table have missing column

Tom Kyte - Fri, 2018-05-11 08:06
My question why import is not failing even the source and target have different table structure <b>Source DB</b> has below table (with additional column COL3 and populated SQL> desc tab1 Name Null? Type ---------------------------...
Categories: DBA Blogs

Unique key across tables

Tom Kyte - Thu, 2018-05-10 13:46
Dear tom, How can i enforce unique key across multiple tables. Table1 and Table2 both have ID primary key column. Is it possible to restrict, while inserting and updating into these tables, the union of ID values from two tables are unique. r...
Categories: DBA Blogs

SEQUENCE

Tom Kyte - Thu, 2018-05-10 13:46
hi tom, during one interview i got one question in sequence i.e if there is one sequnce whose max value is 40,but after got nextval 20.without execute the select query 20,000 times and without alter the sequence i want to get 20,000 in the nextva...
Categories: DBA Blogs

How much data is there in your database?

Kubilay Çilkara - Thu, 2018-05-10 13:35
Have you ever thought how much of your database is actually data?

Sometimes you need to ask this most simple question about your database to figure out what the real size of your data is.

Databases store loads of auxiliary data such as indexes, materialized views and other structures in which the original data is repeated. Databases often repeat data in indexes and materialized views for the sake of better performance for the applications they serve, and this repetition is legitimate.

But should this repetition be measured and counted as database size?

To make things worse, over time many databases accumulate white space in their storage layer as a result of updates and deletes. This white space is fragmented free space which cannot be re-used by new data entries. Often it ends up being scanned unnecessarily in full table scans, eating up your resources. Worst of all, it will appear as data in your database size measurements when usually it is not: white space is just void.

There are mechanisms in databases which can remedy the white space and re-organise the storage of data. Here is a link which discusses this at length: https://oracle-base.com/articles/misc/reclaiming-unused-space
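As a minimal sketch of one such mechanism in Oracle (assuming a hypothetical table called my_table in a locally managed tablespace with automatic segment space management), an online segment shrink can compact the data and lower the high water mark:

-- Allow Oracle to physically move rows within the segment
ALTER TABLE my_table ENABLE ROW MOVEMENT;

-- Compact the segment and release space above the high water mark
ALTER TABLE my_table SHRINK SPACE;

-- Optionally shrink dependent index segments as well
ALTER TABLE my_table SHRINK SPACE CASCADE;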

One should be diligent when measuring database sizes: a lot of the stored data is repeated, and some of it is just blank void due to fragmentation and white space.

So, how do we measure?

Below is a database size measuring SQL script which can be used with Oracle to show data (excluding the indexes) in tables and partitions. It also tries to estimate the real storage (in the actual_gb column), excluding the white space, by multiplying the number of rows in a table by the average row length. Replace the '<YOURSCHEMA>' bit with the schema you wish to measure.

SELECT
    SUM(actual_gb)  AS actual_gb,
    SUM(segment_gb) AS segment_gb
FROM
    (
        SELECT
            s.owner,
            t.table_name,
            s.segment_name,
            s.segment_type,
            t.num_rows,
            t.avg_row_len,
            -- estimated real data size; relies on up-to-date optimizer statistics
            t.avg_row_len * t.num_rows / 1024 / 1024 / 1024 actual_gb,
            -- allocated segment size, including any white space
            SUM(s.bytes) / 1024 / 1024 / 1024 segment_gb
        FROM
            dba_segments s,
            dba_tables t
        WHERE
            s.owner = '<YOURSCHEMA>'
            AND   t.owner = s.owner
            AND   t.table_name = s.segment_name
            AND   s.segment_type IN (
                 'TABLE'
                ,'TABLE PARTITION'
                ,'TABLE SUBPARTITION'
            )
        GROUP BY
            s.owner,
            t.table_name,
            s.segment_name,
            s.segment_type,
            t.num_rows,
            t.avg_row_len,
            t.avg_row_len * t.num_rows / 1024 / 1024 / 1024
    );
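To see which tables have the biggest gap between allocated and estimated size, a per-table variant of the same query can help. This is a sketch using the same dictionary views and the same '<YOURSCHEMA>' placeholder; like the query above, the actual_gb figure depends on up-to-date optimizer statistics:

SELECT
    t.owner,
    t.table_name,
    ROUND(SUM(s.bytes) / 1024 / 1024 / 1024, 2)                 AS segment_gb,
    ROUND(t.avg_row_len * t.num_rows / 1024 / 1024 / 1024, 2)   AS actual_gb,
    ROUND(SUM(s.bytes) / 1024 / 1024 / 1024
          - t.avg_row_len * t.num_rows / 1024 / 1024 / 1024, 2) AS estimated_waste_gb
FROM
    dba_segments s,
    dba_tables t
WHERE
    s.owner = '<YOURSCHEMA>'
    AND t.owner = s.owner
    AND t.table_name = s.segment_name
    AND s.segment_type IN ('TABLE', 'TABLE PARTITION', 'TABLE SUBPARTITION')
GROUP BY
    t.owner,
    t.table_name,
    t.num_rows,
    t.avg_row_len
ORDER BY
    estimated_waste_gb DESC;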
Categories: DBA Blogs

Creating a Custom Component in Oracle JET - Gökhan

Introduction Oracle JET (JavaScript Extension Toolkit) is a collection of open source libraries for JavaScript developers to develop client-side applications. It comes with lots of responsive...

Categories: DBA Blogs

18c Scalable Sequences Part III (Too Much Rope)

Richard Foote - Tue, 2018-05-08 02:06
I previously looked in Part I and Part II at how Scalable Sequences, officially released in 18c, can reduce index contention issues by automatically assigning a 6 digit prefix to the sequence value based on the instance ID and session ID of the session. We need to be careful and consider this 6 digit prefix if […]
Categories: DBA Blogs

Build an Integrated Replicat using JSON

DBASolved - Mon, 2018-05-07 13:15

In a previous post, I showed you how to build an Integrated Extract (IE) and Distribution Path by using JSON and cURL. In this post, let’s look at how you can build an Integrated Replicat (IR) in the same manner.

To build a replicat using JSON, the JSON document is made up of the following eight sections:

Config – Details for the associated parameter file
Source – Where the replicat should read transactions from
Credentials – What credentials in the credential stores should be used
Checkpoint – What checkpoint table is used by the replicat
Mode – What type of replicat will be built
Registration – Register the replicat with the database
Begin – At what point in time the replicat should start
Status – If the replicat should be started or not

The resulting JSON document would look like the following:

{
  "config": [
    "Replicat REPTS",
    "UseridAlias TGGATE",
    "Map SOE.*, Target SOE.*;"
  ],
  "source": {
    "name": "bc"
  },
  "credentials": {
    "alias": "TGGATE"
  },
  "checkpoint": {
    "table": "ggate.checkpoints"
  },
  "mode": {
    "type": "integrated",
    "parallel": true
  },
  "registration": "standard",
  "begin": "now",
  "status": "stopped"
}

Now that you have a valid JSON document, the Integrated Replicat can be built with a cURL command like the following:

curl -X POST \
  http://localhost:17001/services/v2/replicats/REPTS \
  -H 'Cache-Control: no-cache' \
  -d '{
    "config": [
      "Replicat REPTS",
      "UseridAlias TGGATE",
      "Map SOE.*, Target SOE.*;"
    ],
    "source": {
      "name": "bc"
    },
    "credentials": {
      "alias": "TGGATE"
    },
    "checkpoint": {
      "table": "ggate.checkpoints"
    },
    "mode": {
      "type": "integrated",
      "parallel": true
    },
    "registration": "standard",
    "begin": "now",
    "status": "stopped"
  }'

Just like the Integrated Extract (IE) and Distribution Service, the Integrated Replicat (IR) is created in a stopped state. At this point, you can start the IR and validate what changes need to be made to ensure replication happens.
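As a small variation on the call above (a sketch, assuming the JSON document is saved in a hypothetical file named replicat.json), cURL can read the request body from the file, which avoids quoting problems in the shell:

curl -X POST \
  http://localhost:17001/services/v2/replicats/REPTS \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d @replicat.json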

Enjoy!!!

Categories: DBA Blogs

Migrate On-premise DB to database Cloud service using Cloud DB backup module

Are you considering migrating your Oracle Database to Oracle Database on the Cloud? Are you planning to backup your Oracle Database to the Cloud? The Oracle Database Cloud DB backup module allows...

Categories: DBA Blogs

Offline Relocation of a PDB using RMAN

Hemant K Chitale - Sun, 2018-05-06 11:17
I've published a new video on Offline Relocation of a PDB using RMAN in 12.2

Categories: DBA Blogs

Build a Distribution Path using JSON

DBASolved - Fri, 2018-05-04 13:15

In a previous post, I showed you how to build an Integrated Extract (IE) using JSON and a simple cURL command. In this post, let’s take a look at how to build a Distribution Service Path.

The first thing to understand is that the Distribution Service is the replacement for the Extract Data Pump in the traditional Oracle GoldenGate architecture. The Distribution Service does the same thing as the Extract Data Pump with the exception of transformations. If you need to do transformations with Oracle GoldenGate Microservices, they have to be pushed to either the extract or the replicat.

The purpose of the Distribution Service is to ensure that the local trail files are shipped across the network and reach the Receiver Service, which will create the remote trail files.

Note: The Receiver Service, on the target side, will be started automatically when the Distribution Service connects to the port number for it.

Within the Distribution Service, you will create Distribution Paths between the source and target hosts. The way you do this with JSON is quite simple. There are 4 main items the JSON should contain:

1. Name – This is what the Distribution Path will be named
2. Status – Should the Distribution Path be running or stopped
3. Source – This specifies the local trail file that should be read for transactions
4. Target – This specifies the Login and URL to write to the remote trail files.

Note: For the Target setting, there are 4 protocols that can be used:
Secure Websockets (wss) – default
Websockets (ws)
UDP-based Data Transfer Protocol (udt)
Oracle GoldenGate (ogg)

An example of a JSON document that would be used to build a Distribution Path is as follows:

{
  "name": "TSTPATH",
  "status": "stopped",
  "source": {
    "uri": "trail://localhost:16002/services/v2/sources?trail=bb"
  },
  "target": {
    "uri": "ws://OracleGoldenGate+WSTARGET@localhost:17003/services/v2/targets?trail=bc"
  }
}

To build this Distribution Path (TSTPATH), a cURL command like the following can be used:

curl -X POST \
  http://localhost:16002/services/v2/sources/TSTPATH \
  -H 'Cache-Control: no-cache' \
  -d '{
    "name": "TSTPATH",
    "status": "stopped",
    "source": {
      "uri": "trail://localhost:16002/services/v2/sources?trail=bb"
    },
    "target": {
      "uri": "ws://OracleGoldenGate+WSTARGET@localhost:17003/services/v2/targets?trail=bc"
    }
  }'

Once the Distribution Path is created, you can start it. Upon starting the path, you can check the Receiver Service on the target side. It should have been started as well.
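Because the JSON bodies are easy to break with stray smart quotes when copied from a blog post, a quick validity check before posting can save a round trip. A minimal sketch, assuming the path definition is saved in a hypothetical file named path.json and that jq is installed:

# jq exits non-zero if path.json is not valid JSON, so the POST only runs on success
jq empty path.json && \
curl -X POST \
  http://localhost:16002/services/v2/sources/TSTPATH \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d @path.json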

Enjoy!!!

Categories: DBA Blogs

enq: TX - row lock contention wait event for an UPDATE statement

Tom Kyte - Fri, 2018-05-04 11:06
Hi, We are facing this wait event "enq: TX - row lock contention" when an update statement is being executed.THis update has filter columns which are part of unique index.What I observed during analysis is table has PCTFREE=10 and PCTUSED=NULL in ...
Categories: DBA Blogs

Changes that affect all_object tables

Tom Kyte - Thu, 2018-05-03 16:46
Hi TOM, Which actions can change the dates on the table all_objects? For example if I debug a package today 5-May-2018 at 17:00:00, the last_ddl_time and timestamp will be updated with the time that the debug was done? Regards, John Juma
Categories: DBA Blogs

error ora-14766

Tom Kyte - Thu, 2018-05-03 16:46
Dear Tom during a load phase of a table that has a clob I use an "alter table exchange partition" instruction from a stage table and sometimes (and only sometimes, not everytime) during an execution of a view (that insists on another partition of ...
Categories: DBA Blogs

Trigger to clean up associated rows

Tom Kyte - Thu, 2018-05-03 16:46
Hi Team, I have a table named "t". Please find the structure of the table: CREATE TABLE t ( t_id number(5) , s_id number(5), name char(20), mgr_id number(5), CONSTRAINT pk_t PRIMARY KEY(t_id, s_id) ); insert into t values(1,12,'a',0...
Categories: DBA Blogs

Build an Integrated Extract using JSON

DBASolved - Thu, 2018-05-03 13:15

Now that Oracle GoldenGate 12.3 Microservices has been out for about 9 months, there seem to be more and more discussions around how microservices can be used. The microservices architecture provides a faster way for users to build extracts, replicats, distribution paths and many other items by using a JSON document and simply calling a REST end-point.

In this post, I’ll show you how to build an Integrated Extract using JSON and REST APIs. The first thing you need to understand is the steps it currently takes to build an extract in GGSCI/AdminClient.

Note: AdminClient can be used, with debug on, to see how these commands translate back into JSON and REST calls.

To build an Integrated Extract via GGSCI/AdminClient:

1. add extract exttst, integrated, begin now
2. register extract exttst, database container pdb1
3. add exttrail aa, extract exttst, megabytes 250
4. start extract exttst

As you can tell, it takes 4 steps to add and start the extract in an Oracle GoldenGate configuration.

If you’re a command-line geek or a developer who wants to do more with Oracle GoldenGate, the microservices architecture provides a way to build an extract via JSON files. A simple JSON file for building an Integrated Extract looks as follows:

{
  "description": "Integrated Extract",
  "config": [
    "Extract EXTTST",
    "ExtTrail bb",
    "UseridAlias SGGATE",
    "Table SOE.*;"
  ],
  "source": {
    "tranlogs": "integrated"
  },
  "credentials": {
    "alias": "SGGATE"
  },
  "registration": {
    "containers": [ "pdb1" ],
    "optimized": false
  },
  "begin": "now",
  "targets": [
    {
      "name": "bb",
      "sizeMB": 250
    }
  ],
  "status": "stopped"
}

This JSON example describes all the attributes needed to build an Integrated Extract. The main items in this JSON are:

Description – Provide a description for the parameter file
Config – Details for the associated parameter file
Source – Where the extract should read transactions from
Credentials – What credentials in the credential stores should be used
Registration – Register the extract with the database and against associated pdbs
Begin – At what point in time the extract should start
Targets – What trail files the extract should write to
Status – If the extract should be started or not

These 8 categories cover what we traditionally did in the first three steps of the classic architecture. With all these items in the JSON file, you can now quickly build the extract by calling a simple cURL command.

In order to build the extract, you need to know the appropriate REST API end-point. All extracts are built against the Administration Server (AdminService) within the microservices architecture. In my configuration, the AdminService is running on port 16000, so the REST API end-point would be:

{{Source_AdminServer}}/services/v2/extracts/{{extract_name}}

http://localhost:16000/services/v2/extracts/EXTTST

The REST API end-point requires you to specify the extract name in the URL. Now, with the URL and associated JSON, you can create an extract with a simple cURL command or embed the call into an application. An example of a cURL command that would be used is:

curl -X POST \
  http://localhost:16001/services/v2/extracts/EXTTST \
  -H 'Cache-Control: no-cache' \
  -d '{
    "description": "",
    "config": [
      "Extract EXTTST",
      "ExtTrail bb",
      "UseridAlias SGGATE",
      "Table SOE.*;"
    ],
    "source": {
      "tranlogs": "integrated"
    },
    "credentials": {
      "alias": "SGGATE"
    },
    "registration": {
      "containers": [ "pdb1" ],
      "optimized": false
    },
    "begin": "now",
    "targets": [
      {
        "name": "bb",
        "sizeMB": 250
      }
    ],
    "status": "stopped"
  }'

Once the extract is created, you will notice that it is stopped. This is due to the "status" value that was fed through the JSON document. You should be able to start the extract and begin seeing transactions being extracted.
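To tie the end-point pattern above to the call, a small shell sketch can parameterise the AdminService URL and extract name. The variable names are hypothetical, the values should match your own configuration, and the JSON body is assumed to be saved in a hypothetical file named exttst.json:

# Hypothetical values; adjust to your AdminService host/port and extract name
SOURCE_ADMINSERVER=http://localhost:16000
EXTRACT_NAME=EXTTST

curl -X POST \
  "${SOURCE_ADMINSERVER}/services/v2/extracts/${EXTRACT_NAME}" \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -d @exttst.json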

Enjoy!!!

Categories: DBA Blogs

User Password Masking

Tom Kyte - Wed, 2018-05-02 22:26
Hello, Application user dont want to put the clear text password in the script which in turn connecting to database using this authentication. Is there any way to avoid clear text password and to use encrypted password for login . Regards,...
Categories: DBA Blogs

Why are all table subpartitions going in the same tablespace?

Tom Kyte - Wed, 2018-05-02 22:26
Dears, I have a problem regarding automatic list partitioning with hash sub-partitioning. the problem is the automatic created hash sub-partitions are not well distributed on the correct table spaces although the first hash-partitions are well dis...
Categories: DBA Blogs

Simple update command keeps running

Tom Kyte - Wed, 2018-05-02 22:26
hi there i have simple update command <code>UPDATE USERMAST SET USER_LOCK='N' WHERE USER_NAME='rancf';</code> when i execute this query it keeps on running.. USER_ID is the pk for USERMAST table and i tried with USER_ID TOO,same thing is ha...
Categories: DBA Blogs

ODTUG Kscope18 Conference: Change of Presentation Times

Richard Foote - Wed, 2018-05-02 22:13
For those of you that were planning to attend my presentations at the upcoming ODTUG Kscope18 Conference in Orlando (yes, I’m specifically talking to the two of you), both presentations have been allotted new times and locations. They are now confirmed as: New Indexing Features Introduced in Oracle 12c Release 2 (and 18c): When:  June […]
Categories: DBA Blogs
