Feed aggregator

11g to 12c

Tom Kyte - Sat, 2017-04-08 03:26
We will be migrating our databases from 11g to 12c in the next few months. We have a 1 TB database which has about 10 tables which have xml data (clob) in them. What is the best way that's proven to migrate from 11g to 12c ? In the past we have tried...
Categories: DBA Blogs

Change VARCHAR to VARCHAR2 in oracle 12c without deleting data

Tom Kyte - Sat, 2017-04-08 03:26
Hi, How can I change the data type from VARCHAR to VARCHAR2 in a 12c DB without deleting records? Please advise. Regards, Prasun
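For what it's worth, in Oracle a column declared as VARCHAR is stored as VARCHAR2 anyway (VARCHAR is currently just a synonym), so an in-place conversion is normally possible without touching the data. A minimal sketch, assuming a hypothetical table T with a VARCHAR(100) column C:

-- T and C are placeholder names; keep (or increase) the original length
-- and existing rows are preserved
alter table t modify (c varchar2(100));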
Categories: DBA Blogs

Indexes on foreign keys

Tom Kyte - Sat, 2017-04-08 03:26
I have read several books which have repeatedly mentioned creating indexes on foreign keys. I know one advantage is that it eliminates table-level locks and, I have seen the benefit since I have encountered a similar problem. However, I would like to...
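For reference, the pattern those books recommend, as a minimal sketch with hypothetical table and column names:

-- an unindexed foreign key is what can cause child table-level locks
alter table child add constraint child_parent_fk
  foreign key (parent_id) references parent (id);
-- index the FK column so parent-key deletes/updates do not lock the child table
create index child_parent_fk_ix on child (parent_id);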
Categories: DBA Blogs

Service “696c6f76656d756c746974656e616e74” has 1 instance(s).

Yann Neuhaus - Sat, 2017-04-08 02:53

Weird title, isn’t it? That was my reaction when I did my first ‘lsnrctl status’ in 12.2: weird service name… If you have installed 12.2 multitenant, then you have probably seen this strange service name registered in your listener. One per PDB. It is not a bug. It is an internal service used to connect to the remote PDB for features like Proxy PDB. This name is the GUID of the PDB, which makes the service independent of the name or the physical location of the PDB. You can use it to connect to the PDB, but you should not: it is an internal service name. But in a lab, let’s play with it.

CDB

I have two Container Databases on my system:

18:01:33 SQL> connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
18:01:33 SQL> show pdbs
 
CON_ID CON_NAME  OPEN MODE  RESTRICTED
------ --------- ---------- ----------
     2 PDB$SEED  READ ONLY  NO

CDB2 has been created without any pluggable databases (except PDB$SEED of course).

18:01:33 SQL> connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
18:01:33 SQL> show pdbs
 
CON_ID CON_NAME  OPEN MODE  RESTRICTED
------ --------- ---------- ----------
     2 PDB$SEED  READ ONLY  NO
     4 PDB1      READ WRITE NO

CDB1 has one pluggable database PDB1.

PDB1 has its system datafiles in /u01/oradata/CDB1/PDB1/ and a user tablespace datafile elsewhere:

18:01:33 SQL> select con_id,file_name from cdb_data_files;
CON_ID FILE_NAME
------ -------------------------------------
     1 /u01/oradata/CDB1/users01.dbf
     1 /u01/oradata/CDB1/undotbs01.dbf
     1 /u01/oradata/CDB1/system01.dbf
     1 /u01/oradata/CDB1/sysaux01.dbf
     4 /u01/oradata/CDB1/PDB1/undotbs01.dbf
     4 /u01/oradata/CDB1/PDB1/sysaux01.dbf
     4 /u01/oradata/CDB1/PDB1/system01.dbf
     4 /u01/oradata/CDB1/PDB1/USERS.dbf
     4 /var/tmp/PDB1USERS2.dbf

Both are registered to the same local listener:

SQL> host lsnrctl status
 
LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 07-APR-2017 18:01:33
 
Copyright (c) 1991, 2016, Oracle. All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production
Start Date 07-APR-2017 07:53:06
Uptime 0 days 10 hr. 8 min. 27 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Log File /u01/app/oracle/diag/tnslsnr/VM104/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=1521)))
Services Summary...
Service "4aa269fa927779f0e053684ea8c0c27f" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

Each container database registers its db_unique_name as a service (CDB1 and CDB2), with an XDB service for each (CDB1XDB and CDB2XDB), and each pluggable database has its own service as well (PDB1 here). This is what we had in 12.1, but in 12.2 there is one more service with a strange hexadecimal name: 4aa269fa927779f0e053684ea8c0c27f

Connect to PDB without a service name?

Want to know more about it? Let’s try to connect to it:

SQL> connect sys/oracle@(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=4aa269fa927779f0e053684ea8c0c27f))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.78.104)(PORT=1521))) as sysdba
Connected.
SQL> select sys_context('userenv','cdb_name'), sys_context('userenv','con_name'), sys_context('userenv','service_name') from dual;
 
SYS_CONTEXT('USERENV','CDB_NAME') SYS_CONTEXT('USERENV','CON_NAME') SYS_CONTEXT('USERENV','SERVICE_NAME')
--------------------------------- --------------------------------- -------------------------------------
CDB1 PDB1 SYS$USERS

With this service name, I can connect to PDB1, but the service I used in the connect string is not a real service defined in the database:

SQL> select name from v$services;
 
NAME
----------------------------------------------------------------
pdb1
 
SQL> show parameter service
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
service_names string CDB1

The documentation says that SYS$USERS is the default database service for user sessions that are not associated with any service, so here I'm connected to the PDB without a service.
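The same SYS$USERS service name shows up for any session that does not come in through a registered service. A quick check, as a sketch, with a local bequeath connection:

SQL> connect / as sysdba
Connected.
SQL> select sys_context('userenv','service_name') from dual;
 
SYS_CONTEXT('USERENV','SERVICE_NAME')
-------------------------------------
SYS$USERS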

GUID

The internal service name is the GUID of the PDB, which identifies the container even after unplug/plug.

SQL> select pdb_id,pdb_name,con_uid,guid from dba_pdbs;
 
PDB_ID PDB_NAME CON_UID    GUID
------ -------- ---------- --------------------------------
     4 PDB1     2763763322 4AA269FA927779F0E053684EA8C0C27F

Proxy PDB

This internal service was introduced in 12cR2 for the Proxy PDB feature: access a PDB through another one, so that you don’t have to change the connection string when you move the PDB to another server.

I’ll create a Proxy PDB in CDB2 to connect to PDB1, which is in CDB1. This is simple: create a database link used for the creation of the Proxy PDB, which I call PDB1PX1:

18:01:33 SQL> connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
18:01:33 SQL> show pdbs
 
CON_ID CON_NAME  OPEN MODE  RESTRICTED
------ --------- ---------- ----------
     2 PDB$SEED  READ ONLY  NO
 
18:01:33 SQL> create database link CDB1 connect to system identified by oracle using '//localhost/CDB1';
Database link CDB1 created.
 
18:01:38 SQL> create pluggable database PDB1PX1 as proxy from PDB1@CDB1
file_name_convert=('/u01/oradata/CDB1/PDB1','/u01/oradata/CDB1/PDB1PX1');
 
Pluggable database PDB1PX1 created.
 
18:02:14 SQL> drop database link CDB1;
Database link CDB1 dropped.

The Proxy PDB clones the system tablespaces, and this is why I had to give a file_name_convert. Note that the user tablespace datafile is not cloned, so I don’t need to convert ‘/var/tmp/PDB1USERS2.dbf’. The dblink is used only for the clone of the system tablespaces, so it is not needed anymore once the Proxy PDB is created. The new PDB is currently in MOUNT state.

18:02:14 SQL> connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
18:02:14 SQL> show pdbs
 
CON_ID CON_NAME  OPEN MODE  RESTRICTED
------ --------- ---------- ----------
     2 PDB$SEED  READ ONLY  NO
     3 PDB1PX1   MOUNTED

The system tablespaces are there (I’m in 12.2 with local undo, which is required for the Proxy PDB feature):

18:02:14 SQL> select con_id,file_name from cdb_data_files;
 
CON_ID FILE_NAME
------ ------------------------------
     1 /u01/oradata/CDB2/system01.dbf
     1 /u01/oradata/CDB2/sysaux01.dbf
     1 /u01/oradata/CDB2/users01.dbf
     1 /u01/oradata/CDB2/undotbs01.dbf

I open the PDB:

18:02:19 SQL> alter pluggable database PDB1PX1 open;
Pluggable database PDB1PX1 altered.

Connect

I now have 3 ways to connect to PDB1: with the PDB1 service, with the internal service, and through the Proxy PDB service.
I’ve tested all 3:


18:02:45 SQL> connect demo/demo@//localhost/PDB1
18:02:56 SQL> connect demo/demo@//localhost/PDB1PX1
18:03:06 SQL> connect demo/demo@//localhost/4aa269fa927779f0e053684ea8c0c27f

and each time I’ve inserted the information about my connection into a DEMO table:
SQL> insert into DEMO select '&_connect_identifier' "connect identifier", current_timestamp "timestamp", sys_context('userenv','cdb_name') "CDB name", sys_context('userenv','con_name') "con name" from dual;

Here is the result:

connect identifier                           timestamp                       CDB name container name
-------------------------------------------- ------------------------------- -------- --------------
//localhost/PDB1                             07-APR-17 06.02.50.977839000 PM CDB1     PDB1
//localhost/PDB1PX1                          07-APR-17 06.03.01.492946000 PM CDB1     PDB1
//localhost/4aa269fa927779f0e053684ea8c0c27f 07-APR-17 06.03.11.814039000 PM CDB1     PDB1

All three connect to the same database. Since for this test I’m on the same server with the same listener, I can check what is logged in the listener log.

Here are the $ORACLE_BASE/diag/tnslsnr/$(hostname)/listener/alert/log.xml entries related to my connections.

//localhost/PDB1

When connecting directly to PDB1 the connection is simple:


<msg time='2017-04-07T18:02:45.644+02:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
<txt>07-APR-2017 18:02:45 * (CONNECT_DATA=(SERVICE_NAME=PDB1)(CID=(PROGRAM=java)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=27523)) * establish * PDB1 * 0
</txt>
</msg>

I am connecting with SQLcl, which is Java: (PROGRAM=java)

//localhost/PDB1PX1

When connecting through the Proxy PDB, I first see the connection to the proxy PDB1PX1:


<msg time='2017-04-07T18:02:56.058+02:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
<txt>07-APR-2017 18:02:56 * (CONNECT_DATA=(SERVICE_NAME=PDB1PX1)(CID=(PROGRAM=java)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=27524)) * establish * PDB1PX1 * 0
</txt>
</msg>

This is the java connection. But I can also see the connection from the Proxy PDB to the remote PDB1:


<msg time='2017-04-07T18:03:01.375+02:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
<txt>07-APR-2017 18:03:01 * (CONNECT_DATA=(SERVICE_NAME=4aa269fa927779f0e053684ea8c0c27f)(CID=(PROGRAM=oracle)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.78.104)(PORT=16787)) * establish * 4aa269fa927779f0e053684ea8c0c27f * 0
</txt>
</msg>

Here the program is (PROGRAM=oracle): a CDB2 instance process connecting to the remote CDB1 through the internal service.

//localhost/4aa269fa927779f0e053684ea8c0c27f

When I connect to the internal service, I see the same connection to PDB1’s GUID service, but directly from (PROGRAM=java):


<msg time='2017-04-07T18:03:06.671+02:00' org_id='oracle' comp_id='tnslsnr'
type='UNKNOWN' level='16' host_id='VM104'
host_addr='192.168.78.104' pid='1194'>
<txt>07-APR-2017 18:03:06 * (CONNECT_DATA=(SERVICE_NAME=4aa269fa927779f0e053684ea8c0c27f)(CID=(PROGRAM=java)(HOST=VM104)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=27526)) * establish * 4aa269fa927779f0e053684ea8c0c27f * 0
</txt>
</msg>

One more…

So each user PDB registers, in addition to the PDB name and any additional services you have defined, an internal service, whether the PDB is open or closed. And the fun part is that a Proxy PDB also registers this internal service. Here is my listener status:


Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=VM104)(PORT=1521)))
Services Summary...
Service "4aa269fa927779f0e053684ea8c0c27f" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "4c96bda23b8e41fae053684ea8c0918b" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "CDB2" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "CDB2XDB" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Service "pdb1px1" has 1 instance(s).
Instance "CDB2", status READY, has 1 handler(s) for this service...
The command completed successfully

This “4c96bda23b8e41fae053684ea8c0918b” is the GUID of the Proxy PDB.
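To double-check which GUID belongs to which PDB, you can query dba_pdbs in the root of CDB2, the same way we did earlier for CDB1 (output reconstructed as a sketch):

SQL> select pdb_name,guid from dba_pdbs;
 
PDB_NAME GUID
-------- --------------------------------
PDB1PX1  4C96BDA23B8E41FAE053684EA8C0918B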

After connecting through this internal service of the Proxy PDB, I check again where I am:
 
SQL> select sys_context('userenv','cdb_name'), sys_context('userenv','con_name'), sys_context('userenv','service_name') from dual;
 
SYS_CONTEXT('USERENV','CDB_NAME') SYS_CONTEXT('USERENV','CON_NAME') SYS_CONTEXT('USERENV','SERVICE_NAME')
--------------------------------- --------------------------------- -------------------------------------
CDB1                              PDB1                              SYS$USERS

So that’s a fourth way to connect to PDB1: through the internal service of the Proxy PDB.

Then you can immediately imagine what I tried…

ORA-65280

Because the internal service name is what is used to connect through a Proxy PDB, can I create a proxy for the proxy?

18:03:32 SQL> create pluggable database PDB1PX2 as proxy from PDB1PX1@CDB2
2 file_name_convert=('/u01/oradata/CDB1/PDB1/PX1','/u01/oradata/CDB1/PDB1PX2');
 
Error starting at line : 76 File @ /media/sf_share/122/blogs/proxypdb.sql
In command -
create pluggable database PDB1PX2 as proxy from PDB1PX1@CDB2
file_name_convert=('/u01/oradata/CDB1/PDB1/PX1','/u01/oradata/CDB1/PDB1PX2')
Error report -
ORA-65280: The referenced pluggable database is a proxy pluggable database.

The answer is no: you cannot nest Proxy PDBs.

So what?

Don’t panic when looking at the services registered in the listener. Those hexadecimal service names are expected in 12.2, one per user PDB. You see them, but have no reason to use them directly. You will use them indirectly when creating a Proxy PDB, which makes the location where users connect independent of the physical location of the PDB. This is very interesting for migrations because the client configuration does not change when the PDB moves (think hybrid cloud). You can use this feature even without the multitenant option. Want to see all the multitenant architecture options available without the option? Look at the ITOUG Tech Day agenda

 

The post Service “696c6f76656d756c746974656e616e74” has 1 instance(s). appeared first on Blog dbi services.

Hackathon Weekend at Fishbowl Solutions: Bots, Cloud Content Migrations, and Lightweight ECM Apps

Hackathon 2017 captains – from L to R: Andy Weaver, John Sim, and Jake Ferm.

It’s hackathon weekend at Fishbowl Solutions. This means our resident hackers (coders) will be working in teams to develop new solutions for Oracle WebCenter, enterprise search, and various cloud offerings. The overall theme this year is The Cloud, and each completed solution will integrate with a cloud offering from Oracle, Google, and perhaps a few others if time allows.

This year three teams have formed, and they all began coding today at 1:00 PM. Teams have until 9:00 AM on Monday, April 10th to complete their innovative solutions. Each team will then present and demo their solution to everyone at Fishbowl Solutions during our quarterly meeting at 4 PM. The winning team will be decided by votes from employees who did NOT participate in the hackathon.

Here are the descriptions of the three solutions that will be developed over the weekend:

Team Captain: Andy Weaver
Team Name – for now: Cloud ECM Middleware
Overview: Lightweight ECM for The Cloud. The solution will provide content management capabilities (workflow, versioning, periodic review notifications, etc.) on Google’s cloud platform. It will also include a simple dashboard to notify users of documents awaiting their attention, and users will be able to use the solution on any device.

Team Captain: John Sim
Team Name: SkyNet – Rise of the Bots
Overview: This team has high aspirations, as they will be working on a number of solutions. The first is a bot they are calling Atlas that will query Fishbowl’s Google Search Appliance and return documents, which are stored in Oracle WebCenter, based on what was asked. For example, “show me the standard work document on ordering food for the hackathon”. The bot will use Facebook Messenger as the input interface, and if time allows, a similar bot will be developed to support Siri, Slack, and Skype.

The next solution the team will try to code by Monday is a self-service bot that queries a human capital management/human resources system to return how many days of PTO an employee has.

The last solution will be a bot that integrates Alexa, which is the voice system that powers the Amazon Echo, with Oracle WebCenter. In this example, voice commands could be used to ask Alexa to tell the user the number of workflow items in their queue, or the last document checked in by their manager.

Team Captain: Jake Ferm
Team Name – for now: Cloud Content Migrator
Overview: Jake’s team will be working on an interface that lets users select content to be migrated across Google Drive, Microsoft OneDrive, Dropbox, and the Oracle Documents Cloud Service. The goal is to enable, with as few clicks as possible, migrations such as moving content from OneDrive to the Oracle Documents Cloud Service. They will also work on ensuring that larger files can be migrated in the background so that users can carry on with other tasks.

Please check back on Tuesday, April 11th for a recap of the event and details on the winning solution. Happy hacking!

Taco bar to fuel the hackers!

 

The post Hackathon Weekend at Fishbowl Solutions: Bots, Cloud Content Migrations, and Lightweight ECM Apps appeared first on Fishbowl Solutions.

Categories: Fusion Middleware, Other

_small_table_threshold=1000000 results in > 5x query speedup

Bobby Durrett's DBA Blog - Fri, 2017-04-07 16:41

Today I sped a query up by over 5 times by setting _small_table_threshold=1000000.

Here is the query elapsed time and a piece of the plan showing its behavior before setting the parameter:

Elapsed: 00:28:41.67

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                              | Name                   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  69 |              PARTITION RANGE ITERATOR                  |                        |   9125 |      1 |   9122 |00:13:02.42 |    3071K|   3050K|       |       |          |
|* 70 |               TABLE ACCESS FULL                        | SIS_INV_DTL            |   9125 |      1 |   9122 |00:13:02.25 |    3071K|   3050K|       |       |          |

I think this part of the plan means that the query scanned a range of partitions 9125 times, resulting in over three million physical reads. These reads took about 13 minutes. If you do the math, it works out to between 200 and 300 microseconds per read. I have seen similar times from repeated reads from a storage server that has cached the data in memory. I have seen this with a SAN and with Delphix.

Here is my math for fun:

>>> 1000000*((60*13)+2.25)/3050000
256.4754098360656

About 256 microseconds per read.

I ran this query again and watched the wait events in Toad’s session browser to verify that the query was doing a bunch of direct path reads. Even though the query was doing full scans over the partition range more than 9000 times, the database just kept doing direct path reads for 13 minutes.
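If you want to confirm the direct path reads without Toad, the session’s cumulative wait events can be checked from SQL*Plus as well. A sketch (the SID value 123 is a placeholder):

-- 123 is a placeholder SID; look up the real one in v$session
select event, total_waits, time_waited_micro
from v$session_event
where sid = 123
and event like 'direct path read%'
order by time_waited_micro desc;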

So, I got the idea of trying to increase _small_table_threshold. I was not sure if it would work with parallel queries. By the way, this is 11.2.0.4 on the HP-UX Itanium platform. I tried:

alter session set "_small_table_threshold"=1000000;

I ran the query again and it ran in under 5 minutes. I had to add a comment to the query to get the plan to come back cleanly. Then I reran the query and, I guess because of caching, it came back in under 2 minutes:

First run:

Elapsed: 00:04:28.83

Second run:

Elapsed: 00:01:39.69

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                              | Name                   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem |  O/1/M   |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|  69 |              PARTITION RANGE ITERATOR                  |                        |   9125 |      1 |   9122 |00:00:45.33 |    3103K|      0 |       |       |          |
|* 70 |               TABLE ACCESS FULL                        | SIS_INV_DTL            |   9125 |      1 |   9122 |00:00:45.27 |    3103K|      0 |       |       |          |

The second execution did zero physical reads on these partitions instead of the 3 million that we had without the parameter!

So, it seems that if you have a query that keeps doing full scans on partitions over and over, it can run a lot faster if you disable direct path reads by upping _small_table_threshold.
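Roughly speaking, serial direct path reads kick in when a segment is larger than the small table threshold (in blocks), so it helps to compare the value you plan to set against the segments being scanned. A sketch using the table name from the plan above:

-- compare segment sizes (in blocks) to the intended _small_table_threshold
select owner, segment_name, partition_name, blocks
from dba_segments
where segment_name = 'SIS_INV_DTL'
order by blocks desc;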

Bobby

Categories: DBA Blogs

Oracle Named a Leader in the 2017 Gartner Magic Quadrant for Enterprise Integration Platform as a Service

Oracle Press Releases - Fri, 2017-04-07 14:22
Press Release
Oracle Named a Leader in the 2017 Gartner Magic Quadrant for Enterprise Integration Platform as a Service Oracle positioned as a leader based on ability to execute and completeness of vision

Redwood Shores, Calif.—Apr 7, 2017

Oracle today announced that it has been named a leader in Gartner’s 2017 “Magic Quadrant for Enterprise Integration Platform as a Service” report [1]. The company believes this recognition is another milestone, driven by the tremendous momentum and growth of Oracle Cloud Platform this year.

“We believe this recognition is another acknowledgement of Oracle’s strong momentum in the integration and larger PaaS sector, driven by the successful adoption of Oracle’s cloud platform offerings by thousands of customers,” said Amit Zavery, senior vice president, Oracle Cloud Platform and Middleware. “By successfully delivering a comprehensive iPaaS offering that provides an easy way to integrate any type of application, data, device and system, Oracle has given customers a powerful option to meet their ever evolving integration needs.”

Gartner positions vendors within a particular quadrant based on their ability to execute and completeness of vision.  According to the report, “leaders in this market have paid client numbers in the thousands for their iPaaS offerings, and often many thousands of indirect users via embedded versions of the platform as well as "freemium" options. They have a solid reputation, with notable market presence and a proven track record in enabling multiple integration use cases — often supported by the large global networks of their partners. Their platforms are well-proven and functionally rich, with regular releases to rapidly address this fast-evolving market.”

Oracle Cloud Platform, which includes Oracle’s iPaaS offerings, has experienced explosive growth, adding thousands of customers in fiscal year 2017. Global enterprises, SMBs, and ISVs are turning to Oracle Cloud Platform to build and run modern Web, mobile, and cloud-native applications. Continuing its commitment to its customers, Oracle has delivered more than 50 cloud services in the last two years.

Gartner views integration platform as a service (iPaaS) as providing “capabilities to enable subscribers (aka “tenants”) to implement data, application, API and process integration projects spanning cloud-resident and on-premises endpoints.” The report adds, “This is achieved by developing, deploying, executing, managing and monitoring “integration flows” (aka “integration interfaces”) — that is, integration applications bridging between multiple endpoints so that they can work together.”

Oracle’s iPaaS offerings include Oracle Integration Cloud Service and Oracle SOA Cloud Service, both part of the Oracle Cloud Platform. Oracle Integration Cloud is a simple and powerful integration platform targeting ad hoc integrators while Oracle SOA Cloud delivers a high-control platform for specialist integrators. Additionally, Oracle has many other cross-PaaS offerings that can be combined with Oracle’s iPaaS services to deliver greater productivity.  Those services include Oracle Self Service Integration for citizen integrators, Oracle Process Cloud for improved orchestration, Oracle Real-Time Integration Business Insight for business activity monitoring, Oracle API Platform Cloud for API management, Oracle Managed File Transfer Cloud for managed file transfer and Oracle IoT Cloud for IoT integration.

Download Gartner’s 2017 “Magic Quadrant for Enterprise Integration Platform as a Service” here.

Oracle Cloud

Oracle Cloud is the industry’s broadest and most integrated public cloud, offering a complete range of services across SaaS, PaaS, and IaaS. It supports new cloud environments, existing ones, and hybrid, and all workloads, developers, and data.  The Oracle Cloud delivers nearly 1,000 SaaS applications and 50 enterprise-class PaaS and IaaS services to customers in more than 195 countries around the world and supports 55 billion transactions each day.

For more information, visit http://cloud.oracle.com.

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

1 Gartner, “Magic Quadrant for Enterprise Integration Platform as a Service,” by Keith Guttridge, Massimo Pezzini, Elizabeth Golluscio, Eric Thoo, Kimihiko Iijima, Mary Wilcox, March 30, 2017

Contact Info
Nicole Maloney
Oracle
+1.415.235.4033
nicole.maloney@oracle.com
Kristin Reeves
Blanc & Otus
+1.415.856.5145
kristin.reeves@blancandotus.com
About Oracle

Oracle offers a comprehensive and fully integrated stack of cloud applications and platform services. For more information about Oracle (NYSE:ORCL), visit www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle Corporation. 


Trace files segmented in multiple parts as a workaround for bug 23300142

Yann Neuhaus - Fri, 2017-04-07 12:27

Today I visited a customer who had deleted a Data Guard configuration (i.e. a temporary Data Guard setup through the broker was removed). LOG_ARCHIVE_DEST_STATE_2 on the primary database was temporarily set to DEFER. That caused trace files with names matching *tt*.trc to become huge (gigabytes after a couple of days). Analysis showed that this was caused by bug 23300142 in 12.1.0.2. See My Oracle Support Note

Bug 23300142 - TT background process trace file message: async ignored current log: kcclenal clear thread open (Doc ID 23300142.8)

for details.
Unfortunately the bug does not have a workaround.
Because the affected development databases (which were now normal single instances without Data Guard) could not be restarted, I searched for a temporary workaround to stop the trace files from growing further. Limiting the trace file size on the database with

alter system set max_dump_file_size='100M';

did not actually always limit the file size. Here is an example of a huge trace file (over 5GB):


$ find . -name "*tt*.trc" -ls | tr -s " " | cut -d " " -f7-11 | sort -n
...
5437814195 Apr 7 10:46 ./xxxxxx_site1/XXXXXX/trace/XXXXXX_tt00_28304.trc

However, what came in handy was the uts-trace-segmentation feature of 12c. See Jonathan Lewis’ blog here:

https://jonathanlewis.wordpress.com/2016/01/26/trace-file-size

So I left all DBs at max_dump_file_size=unlimited and set:


SQL> alter system set "_uts_first_segment_size" = 52428800 scope=memory;
SQL> alter system set "_uts_trace_segment_size" = 52428800 scope=memory;

Unfortunately, setting the limit for the TT background process alone does not work:


SQL> exec dbms_system.set_int_param_in_session(sid => 199, serial# => 44511, parnam => '_uts_trace_segment_size', intval => 52428800);
BEGIN dbms_system.set_int_param_in_session(sid => 199, serial# => 44511, parnam => '_uts_trace_segment_size', intval => 52428800); END;
 
*
ERROR at line 1:
ORA-44737: Parameter _uts_trace_segment_size did not exist.
ORA-06512: at "SYS.DBMS_SYSTEM", line 117
ORA-06512: at line 1

With the default setting of “_uts_trace_segments” (maximum number of trace segments) = 5, I could limit the maximum trace size per database to 250MB (5 * 50MB). Below you see only 4 files, because two earlier tests had already split the trace file:


$ ls -ltr *_tt00_28304*.trc
-rw-r----- 1 oracle dba 52428964 Apr 7 14:14 XXXXXX_tt00_28304_3.trc
-rw-r----- 1 oracle dba 52428925 Apr 7 16:07 XXXXXX_tt00_28304_4.trc
-rw-r----- 1 oracle dba 52428968 Apr 7 17:12 XXXXXX_tt00_28304_5.trc
-rw-r----- 1 oracle dba 43887950 Apr 7 18:50 XXXXXX_tt00_28304.trc

The segmented trace file feature may help a lot in situations like bug 23300142.
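To verify the current values of these underscore parameters, the usual x$ksppi/x$ksppcv query works. A sketch (run as SYS; note the ESCAPE clause, because the underscore is a LIKE wildcard):

select p.ksppinm name, v.ksppstvl value
from x$ksppi p, x$ksppcv v
where p.indx = v.indx
and p.ksppinm like '\_uts%' escape '\';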

REMARK: Do not use underscore parameters in production environments without agreement from Oracle Support.

 

The post Trace files segmented in multiple parts as a workaround for bug 23300142 appeared first on Blog dbi services.

Data Lake and Data Warehouse

Dylan's BI Notes - Fri, 2017-04-07 11:23
This is an old topic, but I have learned more and come up with more perspectives over time. Raw Data vs Clean Data. Metadata. What kind of services are required? Data as a Service. Analytics as a Service. Raw Data and Clean Data: I think that assuming you can use raw data directly is a dangerous thing. […]
Categories: BI & Warehousing

Can I have only two editions using Oracle EBR for my application and still achieve zero downtime?

Tom Kyte - Fri, 2017-04-07 09:06
Hi Tom, We are planning to implement Oracle EBR in our DB and plan to have only two editions created ED1 and ED2 apart from ORA$BASE. We would also ensure that all the 5000 objects(editionable) we have in ORA$BASE would be actualized into ED1 and ...
Categories: DBA Blogs

How to update tables in loop having 5L records in each table effectively in less time

Tom Kyte - Fri, 2017-04-07 09:06
Hi Tom, i have a scenario where i need to update so many tables(around 80) at once and each table having a minimum of 5 Lakhs records and i used below approach. it is taking more time(don't know exactly because it is still running from more than ...
Categories: DBA Blogs

ORA-00838: Specified value of MEMORY_TARGET is too small

Tom Kyte - Fri, 2017-04-07 09:06
I have a oracle 12c installation. Following commands were executed as SYS user. ALTER SYSTEM SET MEMORY_MAX_TARGET=20G SCOPE=SPFILE; ALTER SYSTEM SET MEMORY_TARGET = 20G SCOPE = SPFILE; ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 15G SCOPE = SPFIL...
Categories: DBA Blogs

First Date of Week (Monday Date)

Tom Kyte - Fri, 2017-04-07 09:06
Using the PO_DT field in Oracle, trying to get the first date of the week based on the value of PO_DT.
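One common approach, as a hedged sketch (assuming PO_DT is a DATE and “first date of week” means the Monday of the ISO week; MY_TABLE is a placeholder name):

-- TRUNC(date,'IW') returns the Monday of the ISO week, independent of NLS settings
select trunc(po_dt, 'IW') week_start_monday from my_table;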
Categories: DBA Blogs

continuous scrolling output for table data

Tom Kyte - Fri, 2017-04-07 09:06
hi tom, I am a big fan of your work. We are in need of a procedure, which gives output of table data in ... tail -f format ... meaning the output is continuous, never ending procedure that shows records as it happens in table. I am looking to ha...
Categories: DBA Blogs

Default column value vs commit write batch nowait. When actual value is assigned?

Tom Kyte - Fri, 2017-04-07 09:06
Hi Tom, In this section I'll explain how I came up with the question. Under normal operation, our application does the next actions: 1. call util.do_some_action, but not wait for response 2. do some network calls to remote system 3. receiv...
Categories: DBA Blogs

swingbench datagenerator connect string

Tom Kyte - Fri, 2017-04-07 09:06
I need to use swingbench to quantify performance of a given host. However, since I am pretty new to databases, I cannot get the datagenerator program to connect to an Oracle DB instance that has been "opened" on the host. After insta...
Categories: DBA Blogs

How to trace plsql executed by a package or a procedure

Tom Kyte - Fri, 2017-04-07 09:06
Hi, In an application that we use (which uses Oracle as the database to store data), when we update and save data, it internally calls a plsql package or a procedure to invoke an UPDATE statement. We have traced the UPDATE statement using db le...
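One standard way to capture everything such a session executes, including the recursive PL/SQL and the UPDATE it issues, is DBMS_MONITOR. A sketch (the SID and SERIAL# values are placeholders):

-- 123/4567 are placeholders; look them up in v$session for the application session
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 4567, waits => true, binds => true);
-- ... reproduce the update/save from the application ...
exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 4567);

The resulting trace file can then be formatted with tkprof to see the package or procedure calls and the UPDATE they invoke.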
Categories: DBA Blogs

Using package global variables

Tom Kyte - Fri, 2017-04-07 09:06
We are trying to do some conversion of large tables (100 mil rows) in 11g. This is a vendor based schema so from existing structure to new structure with some calculations involved. Our developers want to use package global variables. I heard it is r...
Categories: DBA Blogs

Compare two tables having different data types and different column name

Tom Kyte - Fri, 2017-04-07 09:06
HI Tom, I have two tables in a migration, a source table and a destination table. Column names in the source table are different w.r.t. the destination table and the data types are also different, but fields are mapped from the source table to the destination table. I ne...
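A common pattern for this is a two-way MINUS, casting and renaming the source columns to match the destination layout. A minimal sketch with hypothetical table names, columns, and mappings:

-- rows in source with no match in destination, plus the reverse
-- (to_char/trunc stand in for whatever type conversions the mapping needs)
(select to_char(src_id) id, trunc(src_load_dt) load_dt from source_table
 minus
 select dest_id, dest_load_dt from destination_table)
union all
(select dest_id, dest_load_dt from destination_table
 minus
 select to_char(src_id), trunc(src_load_dt) from source_table);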
Categories: DBA Blogs
