Feed aggregator

Data Archival

Tom Kyte - Tue, 2016-11-08 17:26
Hi Tom, A couple of questions here. There is a requirement for data archival; one option I can find is to create a separate database and load data using a dblink. Is there any more efficient way to do archival? Post archival there is a requirement where c...
Categories: DBA Blogs

Regarding high CPU usage for index query

Tom Kyte - Tue, 2016-11-08 17:26
Hi, As part of a query execution we observed high CPU usage. The query uses the indexed column in the WHERE condition: Select * from mytable where mycol1 <= 200000 and xxxx. This table has an index on column mycol1. This tab...
Categories: DBA Blogs

Loading The XML like data into columns

Tom Kyte - Tue, 2016-11-08 17:26
Hi Tom, I have a data in a csv file like this: something_here, "{ "entityValue" : { "vlanId" : { "type" : "string" , "value" : "121a" }, "vlanDescription" : { "type" : "string" , "value" : "asdf" ...
Categories: DBA Blogs

I am trying to access a function stored in a package and it gives me invalid sql statement error

Tom Kyte - Tue, 2016-11-08 17:26
HERE IS MY PACKAGE AND ITS BODY: CREATE OR REPLACE PACKAGE TBOOK AS FUNCTION TBOOK1(JID VARCHAR2) RETURN VARCHAR2; END; CREATE OR REPLACE PACKAGE BODY TBOOK AS JID2 VARCHAR2(6); JID1 VARCHAR2(6); FUNCTION TBOOK1(JID VARCHAR2)...
Categories: DBA Blogs

Data retrieval from two databases

Tom Kyte - Tue, 2016-11-08 17:26
Hi Team, I have a requirement - Consider: 1. Table1 holds the current 5 years of data in the primary database 2. Table1 holds data older than 5 years in another (archived) database. Is there any way we can manage record retr...
Categories: DBA Blogs

Oracle Data Integrator 12c: Getting Started - Installation and Configuration

Rittman Mead Consulting - Tue, 2016-11-08 15:34

I’ve decided that it’s time for a refresher on Oracle Data Integrator 12c. This week, I’m continuing the “Oracle Data Integrator 12c: Getting Started” series with the next topic: Installation and Configuration. I don’t plan to dig into all of the nitty-gritty details on each specific scenario you might run into. Rittman Mead have an excellent training program for ODI 12c where you can ask the specifics. There is also this 100+ page document from Oracle that can provide further details: Oracle® Fusion Middleware - Installing and Configuring Oracle Data Integrator 12c (12.2.1) E55859-03. In this post, we’ll hit on the basics and hopefully guide you in the right direction.

First, let’s take a look at the installation process. Installing ODI 12c is quite simple compared to past versions of the product. This is mainly due to the configuration of agents being moved outside of the initial installation and into a completely different process. The big choice we have now is which implementation to choose: standard or enterprise.

Standard:
Choose this installation if you plan to use a Standalone Agent to execute your mappings and data integration processes. This will install ODI Studio and the Standalone Agent, along with all of the necessary drivers and libraries to make everything work. Choose this option when installing ODI Studio on your client machine.

Enterprise:
This option will install the JEE Agent, Standalone Colocated Agent, as well as ODI Studio. The installation footprint is slightly larger than the Standard install, coming in at 3390 MB vs 3017 MB for Standard. That’s part of why I choose Standard when just installing Studio. Along with the JEE Agent and ODI Studio, you’ll also have the ODI Console and the ODI plugin for Fusion Middleware Control (Enterprise Manager). One thing to note: you’ll need to install the Oracle Fusion Middleware Infrastructure (download) package first, as it is a prerequisite for the ODI 12c Enterprise version.

To get started, download the bits and bytes from Oracle OTN and unzip the file. Next, using the appropriate Java version, run the ODI installer.

[oracle@bigdatalite bin]$ ./java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
[oracle@bigdatalite bin]$ cd ../..
[oracle@bigdatalite odi122120]$ ./jdk1.8.0_111/bin/java -jar fmw_12.2.1.2.0_odi.jar 
Launcher log file is /tmp/OraInstall2016-10-21_12-20-12PM/launcher2016-10-21_12-20-12PM.log.
Extracting the installer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . .

From there, simply go through the install wizard. I won’t go into too much detail because there’s not much to it. First, decide whether you will install the Standard or Enterprise version. Next, determine where you want to install your ODI home. For a JEE or Colocated Agent installation, ODI must be installed into the Oracle Home containing Fusion Middleware Infrastructure. For a Standalone Agent, a brand new Oracle Home must be used. Now that we’ve installed the software, it’s time to set up the repository database schema.

Master and Work Repository Setup

The Repository Configuration Utility (RCU) is delivered as a part of the ODI install. It is used to create Fusion Middleware repositories, such as the ODI Master/Work Repositories, and any additional database schemas necessary.

Start the RCU by browsing to ODI_HOME/oracle_common/bin and running ./rcu, as shown below. The wizard steps are pretty straightforward, so I’ll leave them as an exercise for everyone to try at home. Once we have our repositories, we can begin configuring Agents and Studio.
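A minimal sketch of the launch, assuming ODI_HOME is set to the Oracle Home you chose during installation:

[oracle@bigdatalite ~]$ cd $ODI_HOME/oracle_common/bin
[oracle@bigdatalite bin]$ ./rcu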

ODI Studio Configuration

Setting up ODI Studio is fairly straightforward, so I’ll keep it brief. Once installed, browse to ODI_HOME/odi/studio and fire up ./odi.sh (or odi.exe on Windows). Then, create a new connection to your ODI Master and Work repository. Enter the ODI username/password to connect with, the master repository schema username/password, the JDBC connection information, and the Work repository name. Test the connection to ensure it’s all working and you’re set.

Standalone Agent Configuration

Browse to your ODI_HOME/oracle_common/common/bin directory and run the WebLogic Server Configuration wizard.

./config.sh

[Screenshot: Create ODI Agent]

First, create a new domain for the agent. This is similar to a WebLogic domain if you’ve gone through the setup process for any application that relies on WLS. It’s recommended to create the domain outside of the ODI Oracle Home. That way you can avoid issues when upgrading/reinstalling and can follow the same domain directory structure if installing multiple agents, since each must be installed in its own domain.

Next, choose the Standalone Agent Template, installed as part of the ODI download. Then, select the JDK to be used by the agent.

[Screenshot: Configure ODI Agent component]

The repository is already set up, so now if we connect to the service table (STB) schema, we can look up the ODI repository connection information without having to manually enter anything.

[Screenshot: Get RCU configuration]

The system component will be the ODI Agent name. Then, set the host server listen address, port, and Supervisor username/password.

[Screenshot: ODI Agent configuration]

Next, set up the Node Manager for the ODI Agent. The Node Manager doesn’t have to be used when starting the agent, but it is a requirement to set it up during configuration. The username/password should be kept in a safe place, as it will be necessary for starting components within the Node Manager.

Finally, click Create and you’re off and running.

Starting the Standalone Agent

Before you can start the agent, be sure to configure it in the master repository via ODI Studio. Check out the video below for a short tutorial.

Now, you can start up the agent with the Node Manager or without it. If you want to manage the agent via Fusion Middleware Control, use the Node Manager. But if you plan to simply use the command line to start/stop the agent, you can do so using the classic startup method.

Start Agent with Node Manager

From within the DOMAIN_HOME/bin directory, run the following. You’ll want to use nohup to start the process in the background on Linux. For Windows, create a service to run nodemanager.cmd.

[oracle@bigdatalite bin]$ pwd
/u01/odi_agent/user_projects/domains/base_domain/bin
[oracle@bigdatalite bin]$ nohup ./startNodeManager.sh > nm.out&
[oracle@bigdatalite bin]$ nohup: ignoring input and redirecting stderr to stdout
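On Windows, a minimal sketch of the service approach (this assumes the standard WebLogic domain script installNodeMgrSvc.cmd is delivered in your domain’s bin directory, which is the usual way to register nodemanager.cmd as a service):

cd %DOMAIN_HOME%\bin
installNodeMgrSvc.cmd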

Next, start the agent component from within the same directory. You’ll be prompted for the Node Manager password setup during configuration.

[oracle@bigdatalite bin]$ ./startComponent.sh OracleDIAgent1
Starting system Component OracleDIAgent1 ...

Initializing WebLogic Scripting Tool (WLST) ...
...
Reading domain from /u01/odi_agent/user_projects/domains/base_domain

Please enter Node Manager password:
Connecting to Node Manager ...
Start Agent without Node Manager

For times when you don’t need the Node Manager, you can still start the ODI Agent using the classic method. Just remember to always use all-caps when referring to the agent parameters, such as NAME.

./agent.sh -NAME=OracleDIAgent1

Either approach for starting the agent will get you up and running.

Colocated Agent Configuration

The Colocated Agent is essentially a Standalone agent that is managed and maintained via WebLogic Server. Rather than repeat the steps, I thought I would point you to a nice Oracle By Example on Creating a Colocated Agent. As mentioned earlier, be sure to create your domain outside of the ODI home.

[Screenshot: Ping colocated agent]

For reference, when testing an agent, simply place the hostname:port/agent-web-application-context in a browser. The Ping response means the agent is up and running.
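For example, for the agent configured earlier (the port and web application context shown here are the wizard defaults, 20910 and oraclediagent, so adjust to whatever you entered):

http://bigdatalite:20910/oraclediagent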

JEE Agent Configuration

Finally, we’ll discuss the JEE Agent configuration. As mentioned in my previous post, the main reasons for using the JEE Agent are high availability and connection pooling. You may also want to use the ODI Console or need to manage and monitor ODI agents from FMW Control. Whatever the reason, here’s how we configure the agent.

The configuration process is very similar to that of the Colocated Agent, but with several extra steps involved. I’ll highlight those steps here. To get started, just as with the other agents, browse to the ODI_HOME/oracle_common/common/bin directory and run ./config.sh.

Work through the usual first few installer screens. Then, select the template to configure. Choose ODI Agent and any additional items you’d like to add to the domain, such as the ODI Console or the Enterprise Manager plugin.

[Screenshot: JEE Agent template]

Next, add the credential mappings for both the ODI user, allowing JEE agent access to the Master / Work Repository, and the WebLogic domain user.

[Screenshot: JEE Agent credential mapping]

Select the options you plan to configure for the domain. This will vary based on what currently exists in WebLogic Server, but in this example we’ll be setting up a new AdminServer, Node Manager, and the Managed Server to host the ODI JEE Agent.

[Screenshot: JEE Agent domain configuration]

Next, configure the AdminServer, Node Manager, and Managed Server. For each, remember to use the machine IP address and not the default “Local All Machines”. For the node manager, enter a username and password as we did in previous configurations.

[Screenshot: JEE Agent server configuration]

The final screens in the configuration wizard allow you to create a cluster and assign the cluster to the server. You can assign a dynamic cluster as needed for scalability, though this falls outside the context of this article. Assign the cluster to a server, create the machine (remember to assign the IP address), and finally assign the servers (AdminServer and Managed Server) to the machine.

[Screenshot: JEE Agent topology]

Click Create and you're done! Now start WebLogic and the Node Manager, then log in to the console to start the ODI Managed Server and agent.

There you are, ODI Agents installed and configured. As always, there are many external factors that may change how your configuration and setup works in your environment, so please think of this as a simple guide and not the exact approach for each unique situation. If you do have further questions and would like a more detailed answer, you can always join one of the Rittman Mead ODI bootcamps to learn more from one of our experts on the product. Up next in the Getting Started series, we’ll look at creating your first ODI mapping.

Categories: BI & Warehousing

Oracle 12cR2 multitenant containers in SQL_TRACE

Yann Neuhaus - Tue, 2016-11-08 14:21

In multitenant your session can switch between containers. For example, since 12.1, a common user can switch explicitly between CDB$ROOT and any PDB with ‘ALTER SESSION SET CONTAINER’. Any user connected to a PDB will also have their session switching implicitly when querying through metadata links and data links (the new name for object links). In 12.1 there was no way to trace this. This is fixed in 12.2.

I set sql_trace and get the tracefile name:

SQL> select value tracefile from v$diag_info where name='Default Trace File';
 
TRACEFILE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/cdb1/CDB1/trace/CDB1_ora_6307.trc
 
SQL> alter session set sql_trace=true;
Session altered.

The container ID is CON_ID=1 because I’m connected to the root:


SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:08:17.968360+02:00 (CDB$ROOT(1))
*** SESSION ID:(14.25101) 2016-09-04T16:08:17.968399+02:00
*** CLIENT ID:() 2016-09-04T16:08:17.968405+02:00
*** SERVICE NAME:(SYS$USERS) 2016-09-04T16:08:17.968410+02:00
*** MODULE NAME:(sqlplus@VM115 (TNS V1-V3)) 2016-09-04T16:08:17.968415+02:00
*** ACTION NAME:() 2016-09-04T16:08:17.968420+02:00
*** CLIENT DRIVER:(SQL*PLUS) 2016-09-04T16:08:17.968425+02:00
*** CONTAINER ID:(1) 2016-09-04T16:08:17.968430+02:00

In 12.1 you had no further information about the container in the trace file. This is improved in 12.2.

Explicit ALTER SESSION SET CONTAINER

I’ll run a simple query, then change to container PDB (which is CON_ID=3 here) and run a query again:

SQL> select * from dual;
 
D
-
X
 
SQL> alter session set container=PDB;
Session altered.
 
SQL> select * from dual;
 
D
-
X

The lines starting with ‘***’ followed by a timestamp are not new. But now we also have the container name (here CON_NAME=PDB) and the container ID (CON_ID=3):

SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:09:54.397448+02:00 (PDB(3))
*** CONTAINER ID:(3) 2016-09-04T16:09:54.397527+02:00

You get those lines for each ALTER SESSION SET CONTAINER, and you have the CON_NAME and CON_ID of the PDB: (PDB(3))

Implicit switch through data links

I’m still in PDB and I’ll query a data link view: DBA_PDBS. Data link views (previously called ‘object link’ views) query data from the CDB$ROOT even when you are in a PDB. DBA_PDBS shows information about pluggable databases, which is stored in CDB$ROOT (because it must be available before the PDB is opened).

SQL> select count(*) from dba_pdbs;
 
COUNT(*)
----------
1
 

The execution of the query had to switch to CDB$ROOT (CON_ID=1) to get the rows and switch back to PDB (CON_ID=3):


SQL> host grep "^\*\*\*" &tracefile
 
*** 2016-09-04T16:09:54.406379+02:00 (CDB$ROOT(1))
*** 2016-09-04T16:09:54.406676+02:00 (PDB(3))

If you look at the detail you will see that my query is parsed in my container:

=====================
PARSING IN CURSOR #139807307349184 len=29 dep=0 uid=0 oct=3 lid=0 tim=203051393258 hv=2380449338 ad='896cae38' sqlid='3cngtnf6y5jju'
select count(*) from dba_pdbs
END OF STMT
PARSE #139807307349184:c=0,e=53,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,plh=1333657383,tim=203051393256

I think the following is to check that the tables behind the data link view are valid in the PDB even if we don’t want to query them. This is only a parse call:

=====================
PARSING IN CURSOR #139807307295488 len=46 dep=1 uid=0 oct=3 lid=0 tim=203051393450 hv=1756598280 ad='7b5dfd58' sqlid='5ucyn75nb7408'
SELECT * FROM NO_OBJECT_LINK("SYS"."DBA_PDBS")
END OF STMT
PARSE #139807307295488:c=0,e=26,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=810534000,tim=203051393449
CLOSE #139807307295488:c=0,e=7,dep=1,type=1,tim=203051393490

Then when I execute my query:

EXEC #139807307349184:c=0,e=246,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=2,plh=1333657383,tim=203051393539

my session switches to root:

*** 2016-09-04T16:09:54.406379+02:00 (CDB$ROOT(1))

and the recursive query is parsed and executed in CDB$ROOT:
=====================
PARSING IN CURSOR #139807307379504 len=170 dep=1 uid=0 oct=3 lid=0 tim=203051393687 hv=1291428476 ad='895c6940' sqlid='g34kja56gm8mw'
SELECT /*+ NO_STATEMENT_QUEUING RESULT_CACHE (SYSOBJ=TRUE) */ CON_ID FROM NO_OBJECT_LINK("SYS"."DBA_PDBS") "DBA_PDBS" WHERE "DBA_PDBS"."CON_ID"=0 OR "DBA_PDBS"."CON_ID"=3
END OF STMT
PARSE #139807307379504:c=0,e=44,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2042216988,tim=203051393685
EXEC #139807307379504:c=0,e=48,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=2042216988,tim=203051393790
FETCH #139807307379504:c=0,e=20,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,plh=2042216988,tim=203051393826
STAT #139807307379504 id=1 cnt=1 pid=0 pos=1 obj=0 op='RESULT CACHE 8p3h095ufc042f32tf05b23qf3 (cr=0 pr=0 pw=0 str=1 time=18 us)'
STAT #139807307379504 id=2 cnt=0 pid=1 pos=1 obj=0 op='NESTED LOOPS (cr=0 pr=0 pw=0 str=0 time=0 us cost=2 size=16 card=1)'
STAT #139807307379504 id=3 cnt=0 pid=2 pos=1 obj=161 op='TABLE ACCESS BY INDEX ROWID CONTAINER$ (cr=0 pr=0 pw=0 str=0 time=0 us cost=1 size=11 card=1)'
STAT #139807307379504 id=4 cnt=0 pid=3 pos=1 obj=163 op='INDEX UNIQUE SCAN I_CONTAINER2 (cr=0 pr=0 pw=0 str=0 time=0 us cost=0 size=0 card=1)'
STAT #139807307379504 id=5 cnt=0 pid=2 pos=2 obj=36 op='INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 str=0 time=0 us cost=1 size=5 card=1)'
CLOSE #139807307379504:c=0,e=4,dep=1,type=1,tim=203051393959

Note that the result cache is used for optimization, and the query is run with NO_OBJECT_LINK() to prevent following further data links, if any.

Then, my session switches back to my PDB:

*** 2016-09-04T16:09:54.406676+02:00 (PDB(3))

and execution of my query finishes:

FETCH #139807307349184:c=0,e=375,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=2,plh=1333657383,tim=203051393981
STAT #139807307349184 id=1 cnt=1 pid=0 pos=1 obj=0 op='SORT AGGREGATE (cr=0 pr=0 pw=0 str=1 time=544 us)'
STAT #139807307349184 id=2 cnt=1 pid=1 pos=1 obj=0 op='DATA LINK FULL DBA_PDBS (cr=0 pr=0 pw=0 str=1 time=525 us cost=1 size=1300 card=100)'
FETCH #139807307349184:c=0,e=1,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=1333657383,tim=203051394259
CLOSE #139807307349184:c=0,e=10,dep=0,type=1,tim=203051397922

You see that the execution plan is explicit: ‘DATA LINK FULL’ in 12.2 (it was FIXED TABLE FULL X$OBLNK$ in 12.1).

_diag_cdb_logging

This new behaviour is controlled by an underscore parameter:

SQL> alter system set "_diag_cdb_logging"=thisIsMyWayToGetHelp;
alter system set "_diag_cdb_logging"=thisIsMyWayToGetHelp
*
ERROR at line 1:
ORA-00096: invalid value THISISMYWAYTOGETHELP for parameter _diag_cdb_logging,
must be from among long, short, off

By default on 12.2 the parameter is set to SHORT and writes the traces as above.
SQL> alter system set "_diag_cdb_logging"=SHORT;

If you set it to OFF, you have the same behavior as in 12.1: a ‘*** CONTAINER ID:’ line is displayed for an explicit SET CONTAINER but no further information.

When set to LONG you get the CON_UID which may be useful for traces that cover plug/unplug operations:
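Switching modes uses the same ALTER SYSTEM syntax as above, for example:

SQL> alter system set "_diag_cdb_logging"=LONG;
 
System altered.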

SQL> select con_id,name,dbid,con_uid,guid from v$containers;

CON_ID NAME DBID CON_UID GUID
---------- -------- ---------- ---------- --------------------------------
1 CDB$ROOT 893728006 1 3817ED090B9766FDE0534440E40ABD67
2 PDB$SEED 1943618461 1943618461 3A29D20830E760B7E053734EA8C047BB
3 PDB 4128224117 4128224117 3A2C965DE81E15A8E053734EA8C023AC
 
SQL> host grep "^\*\*\*" &tracefile
*** 2016-09-04T16:50:43.462870+02:00 (PDB(3/4128224117))
*** CONTAINER ID:(3) 2016-09-04T16:50:43.463067+02:00
*** 2016-09-04T16:50:43.493035+02:00 (CDB$ROOT(1/1))
*** 2016-09-04T16:50:43.495053+02:00 (PDB(3/4128224117))

If you want more information about CON_ID, CON_UID, GUID, and a lot more about multitenant, the book Oracle Database 12c Release 2 Multitenant (Oracle Press), 1st Edition, by Anton Els, Vit Spinka, and Franck Pachot goes into all the details.

 

The post Oracle 12cR2 multitenant containers in SQL_TRACE appeared first on Blog dbi services.

Oracle 12cR2 multitenant: Local UNDO

Yann Neuhaus - Tue, 2016-11-08 12:58

Pluggable Databases are supposed to be isolated, containing the whole of user data and metadata. This is the definition of dictionary separation coming with multitenant architecture: only system data and metadata are at CDB level. User data and metadata are in separate tablespaces belonging to the PDB. And this is what makes the unplug/plug available: because PDB tablespaces contain everything, you can transport their datafiles from one CDB to another.
However, if they are so isolated, can you explain why

  • You cannot flashback a PDB?
  • You need an auxiliary instance for PDB Point-In-Time recovery?
  • You need to put the PDB read-only before cloning it?


There is something that is not contained in your PDB but is at CDB level, and which contains user data. The UNDO tablespace is shared:

[Image: CaptureLocalUndo001]

You cannot flashback a PDB because doing so requires rolling back the transactions that were ongoing at the point you flash back to. That information was in the UNDO tablespace at the time, but is not there anymore.

It’s the same idea with Point-In-Time recovery of a PDB. You need to restore the UNDO tablespace to get the undo records from that point in time. But you cannot restore it in place because it’s shared with other PDBs that need the current information. This is why you need an auxiliary instance for PDB PITR in 12.1.

Cloning a PDB cannot be done with ongoing transactions because their undo is not in the PDB. This is why it can be done only when the PDB is read-only.
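As an aside, once local undo is enabled (see the next section), this restriction goes away and you can hot clone a PDB that is open read-write, something like this (the PDB names are hypothetical):

SQL> create pluggable database PDB2 from PDB1;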

12.2 Local UNDO

In 12.2 you can choose to have one UNDO tablespace per PDB, in local undo mode, which is the default in DBCA:

[Image: CaptureLocalUndo000]

With local undo PDBs are truly isolated even when opened with ongoing transactions:

[Image: CaptureLocalUndo002]

Look at the ‘RB segs’ column from RMAN report schema:

[oracle@OPC122 ~]$ rman target /
 
Recovery Manager: Release 12.2.0.1.0 - Production on Tue Nov 8 18:53:46 2016
 
Copyright (c) 1982, 2016, Oracle and/or its affiliates. All rights reserved.
 
connected to target database: CDB1 (DBID=901060295)
 
RMAN> report schema;
 
using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1
 
List of Permanent Datafiles
===========================
File Size(MB) Tablespace RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1 880 SYSTEM YES /u02/app/oracle/oradata/CDB1/system01.dbf
3 710 SYSAUX NO /u02/app/oracle/oradata/CDB1/sysaux01.dbf
4 215 UNDOTBS1 YES /u02/app/oracle/oradata/CDB1/undotbs01.dbf
5 270 PDB$SEED:SYSTEM NO /u02/app/oracle/oradata/CDB1/pdbseed/system01.dbf
6 560 PDB$SEED:SYSAUX NO /u02/app/oracle/oradata/CDB1/pdbseed/sysaux01.dbf
7 5 USERS NO /u02/app/oracle/oradata/CDB1/users01.dbf
8 180 PDB$SEED:UNDOTBS1 NO /u02/app/oracle/oradata/CDB1/pdbseed/undotbs01.dbf
9 270 PDB1:SYSTEM YES /u02/app/oracle/oradata/CDB1/PDB1/system01.dbf
10 590 PDB1:SYSAUX NO /u02/app/oracle/oradata/CDB1/PDB1/sysaux01.dbf
11 180 PDB1:UNDOTBS1 YES /u02/app/oracle/oradata/CDB1/PDB1/undotbs01.dbf
12 5 PDB1:USERS NO /u02/app/oracle/oradata/CDB1/PDB1/users01.dbf
 
List of Temporary Files
=======================
File Size(MB) Tablespace Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1 33 TEMP 32767 /u04/app/oracle/oradata/temp/temp01.dbf
2 64 PDB$SEED:TEMP 32767 /u04/app/oracle/oradata/temp/temp012016-10-04_11-34-07-330-AM.dbf
3 100 PDB1:TEMP 100 /u04/app/oracle/oradata/CDB1/PDB1/temp012016-10-04_11-34-07-330-AM.dbf

You have an UNDO tablespace in ROOT, in PDB$SEED and in each user PDB.

If you have a database in shared undo mode, you can move it to local undo mode while in ‘startup migrate’ mode. PDBs will have an UNDO tablespace created when they are opened. You can also create an UNDO tablespace in PDB$SEED.
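A minimal sketch of that migration, run as SYSDBA at CDB level (note that the documented keyword for migrate mode is UPGRADE):

SQL> shutdown immediate
SQL> startup upgrade
SQL> alter database local undo on;
SQL> shutdown immediate
SQL> startup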

Yes, in 12.2, you can open the PDB$SEED read/write for this purpose:


18:55:59 SQL> alter pluggable database PDB$SEED open read write force;
 
Pluggable database altered.
 
18:56:18 SQL> show pdbs;
 
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ WRITE NO
3 PDB1 READ WRITE NO
18:56:23 SQL> alter pluggable database PDB$SEED open read only force;
 
Pluggable database altered.

But remember this is only allowed for local undo migration.

The recommendation is to run in local undo mode, even in Single-Tenant.

More about it in the 12cR2 Multitenant book.

 

The post Oracle 12cR2 multitenant: Local UNDO appeared first on Blog dbi services.

Oracle 12cR2 PL/SQL new feature: TNSPING from the database

Yann Neuhaus - Tue, 2016-11-08 11:55

Database links are resolved with the server TNS_ADMIN configuration (sqlnet.ora and tnsnames.ora). You can use tnsping to check the resolution, but it supposes that you are on the server and have set the same environment as the one which started the database.
In 12.2 you have a new package to check that: DBMS_TNS. It’s the kind of little new feature that makes our life easier.

The easy way to verify a connection string is to use tnsping. Here is an example with an EZCONNECT resolution:

[oracle@SE122 ~]$ tnsping //10.196.234.38/CDB1.opcoct.oraclecloud.internal
TNS Ping Utility for Linux: Version 12.2.0.1.0 - Production on 08-NOV-2016 17:45:34
Copyright (c) 1997, 2016, Oracle. All rights reserved.
Used parameter files:
/u01/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora
Used EZCONNECT adapter to resolve the alias
Attempting to contact (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.234.38)(PORT=1521)))
OK (0 msec)

The full connection description is displayed here before contacting the listener.

This resolution is valid only with a specific TNS configuration (which is here /u01/app/oracle/product/12.2.0/dbhome_1/network/admin). However, you may have different configurations (using the TNS_ADMIN environment variable) and if it’s not set consistently, you may have different results.
Basically:

  • When you connect locally to the server (no SQL*Net, no listener), the Oracle session inherits the client environment
  • When you connect remotely to a service statically registered on the listener, the Oracle session inherits the environment which started the listener
  • When you connect remotely to a service dynamically registered on the listener, the Oracle session inherits the environment which started the database
DBMS_TNS

So here is this new package:

SQL> desc dbms_tns
FUNCTION RESOLVE_TNSNAME RETURNS VARCHAR2
Argument Name Type In/Out Default?
------------------------------ ----------------------- ------ --------
TNS_NAME VARCHAR2 IN

And you can run it when connected to the database to see how the name is resolved:

SQL> select dbms_tns.resolve_tnsname('&_connect_identifier') from dual;
old 1: select dbms_tns.resolve_tnsname('&_connect_identifier') from dual
new 1: select dbms_tns.resolve_tnsname('//10.196.234.38/CDB1.opcoct.oraclecloud.internal') from dual
 
DBMS_TNS.RESOLVE_TNSNAME('//10.196.234.38/CDB1.OPCOCT.ORACLECLOUD.INTERNAL')
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=CDB1.opcoct.oraclecloud.internal)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=10.196.234.38)(PORT=1521)))

The resolution is done without attempting to contact the listener. This IP address does not exist on my network:

select dbms_tns.resolve_tnsname('//10.1.1.1/XX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('//10.1.1.1/XX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=XX)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=10.1.1.1)(PORT=1521)))

As you can see, the client identification is sent here (PROGRAM and HOST).

Demo

I’ll use this new feature to prove my assumption above about which environment is used when connecting locally or through dynamic or static service.

I create 3 directories with different names for the SERVICE_NAME in order to see which one is used:


mkdir -p /tmp/tns_lsnr ; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_lsnr/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_lsnr/tnsnames.ora
mkdir -p /tmp/tns_sess ; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_sess/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_sess/tnsnames.ora
mkdir -p /tmp/tns_inst; echo "NAMES.DIRECTORY_PATH=TNSNAMES" > /tmp/tns_inst/sqlnet.ora ; echo "XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))" > /tmp/tns_inst/tnsnames.ora

In addition, I’ll need a listener configuration with a static service, let’s call it STATIC:


cat > /tmp/tns_lsnr/listener.ora <<END
LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$HOSTNAME)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(ORACLE_HOME=$ORACLE_HOME)(GLOBAL_DBNAME=STATIC)(SID_NAME=CDB1)))
END

Here’s a summary of the different configurations:


$ tail /tmp/tns*/*
 
==> /tmp/tns_inst/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
==> /tmp/tns_inst/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
 
==> /tmp/tns_lsnr/listener.ora <==
LISTENER=(DESCRIPTION_LIST=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=SE122.compute-opcoct.oraclecloud.internal)(PORT=1521))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(ORACLE_HOME=/u01/app/oracle/product/122EE)(GLOBAL_DBNAME=STATIC)(SID_NAME=CDB1)))
 
==> /tmp/tns_lsnr/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
 
==> /tmp/tns_lsnr/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))
 
==> /tmp/tns_sess/sqlnet.ora <==
NAMES.DIRECTORY_PATH=TNSNAMES
 
==> /tmp/tns_sess/tnsnames.ora <==
XXX=(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

I start the listener and the instance each with its own environment, and set the session’s environment to yet another one:


export TNS_ADMIN=/tmp/tns_lsnr ; lsnrctl start
export TNS_ADMIN=/tmp/tns_inst ; sqlplus / as sysdba <<< startup
export TNS_ADMIN=/tmp/tns_sess

Now it’s time to use this new DBMS_TNS when connecting locally, through the dynamic service (CDB1) and through the static service (STATIC):


SQL> connect system/oracle
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_sess)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected locally, the TNS_ADMIN from my shell environment running sqlplus is used.


SQL> connect system/oracle@//localhost/CDB1
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_inst)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected through the dynamic service, the TNS_ADMIN used to start the instance is used.


SQL> connect system/oracle@//localhost/STATIC
Connected.
 
SQL> select dbms_tns.resolve_tnsname('XXX') from dual;
 
DBMS_TNS.RESOLVE_TNSNAME('XXX')
-----------------------------------------------------------------------------------------------------------------------------------------------------------
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=from_lsnr)(CID=(PROGRAM=oracle)(HOST=SE122.compute-opcoct.oraclecloud.internal)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))

When connected through the static service, the TNS_ADMIN used to start the listener is used.

So what?

You should use consistent environment settings in order to be sure that all sessions will use the same name resolution. But if you have a doubt about it, DBMS_TNS can help you troubleshoot. It’s better than DBMS_SYSTEM.GET_ENV as it does the name resolution rather than just showing the environment variables.
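For comparison, here is a quick sketch of the DBMS_SYSTEM.GET_ENV approach (DBMS_SYSTEM is an undocumented package, so treat it as a troubleshooting aid only). It returns the raw variable of the server process, with no name resolution:

SQL> variable v varchar2(200)
SQL> exec dbms_system.get_env('TNS_ADMIN',:v)
SQL> print v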

Want to know quickly where all database links are going? Here it is:

SQL> select username,dbms_tns.resolve_tnsname(host) from cdb_db_links;

 

The post Oracle 12cR2 PL/SQL new feature: TNSPING from the database appeared first on Blog dbi services.

Oracle 12.2.0.1 Manuals available now!

Marco Gralike - Tue, 2016-11-08 10:00
Have a look at what’s new and cool in the full Oracle 12.2.0.1 manuals / documentation…

Oracle 12cR2 : Partitioning improvements – auto-list partitioning

Yann Neuhaus - Tue, 2016-11-08 09:16

The second release of Oracle Database 12c (12cR2) comes with new improvements regarding partitioning.
In this blog post I’ll show one of them: auto-list partitioning.
Auto-list partitioning is an extension of list partitioning. It enables the automatic creation of partitions for new values inserted into the partitioned table.
[Image: “Oracle Partitioning in Oracle Database 12c Release 2″ – Hermann Bär]

Let’s have a look at the syntax. As you can see below, you only have to specify the new “Automatic” keyword after the “Partition by list” clause:
SQL> CREATE TABLE city(
city_zip NUMBER(5),
city_name VARCHAR2(30)
)
PARTITION BY LIST (city_name) AUTOMATIC
(PARTITION p_delemont VALUES ('Delemont'));


Table created.
SQL>

The auto-list partitioned table is created with only one partition, explicitly created at table creation:
SQL> SELECT partition_name, high_value, partition_position FROM dba_tab_partitions WHERE table_name = 'CITY';

PARTITION_NAME HIGH_VALUE PARTITION_POSITION
-------------------- ------------------------------ ------------------
P_DELEMONT 'Delemont' 1

Let’s insert new cities into the table:
SQL> INSERT INTO city VALUES (2800, 'Delemont');
1 row created.

SQL> INSERT INTO city VALUES (4001, 'Basel');
1 row created.

SQL> INSERT INTO city VALUES (8001, 'Zurich');
1 row created.

SQL> INSERT INTO city VALUES (1000, null);
1 row created.

Great! Every insertion succeeds without the famous partitioning error message: “ORA-14400: inserted partition key does not map to any partition”.
And if we check the partitions…
SQL> SELECT partition_name, high_value, partition_position FROM dba_tab_partitions WHERE table_name = 'CITY';

PARTITION_NAME HIGH_VALUE PARTITION_POSITION
-------------------- -------------------- ------------------
P_DELEMONT 'Delemont' 1
SYS_P5004 'Basel' 2
SYS_P5005 'Zurich' 3
SYS_P5006 '' 4

…one partition has been automatically created for each value. Take note that a partition has also been created to store the “null” values.

Now, if you want to evolve a list-partitioned table into an auto-list partitioned table, you can do it easily and in one shot:
SQL> ALTER TABLE cars SET AUTOMATIC;
ALTER TABLE cars SET AUTOMATIC
*
ERROR at line 1:
ORA-14852: SET [SUB]PARTITIONING AUTOMATIC is not legal on this table.

Oops, this error occurs because the table contains a DEFAULT partition. As you can imagine, it’s now not necessary to have one, so you MUST drop it before moving to auto-list partitioning (take care of the data stored inside it! ;-) ):
SQL> ALTER TABLE cars DROP PARTITION (P_DEFAULT);

Table altered.

SQL> ALTER TABLE cars SET AUTOMATIC;

Table altered.

The table is now auto-list partitioned:
SQL> SELECT partitioning_type, autolist, partition_count FROM dba_part_tables WHERE table_name = 'CARS';

PARTITIONING_TYPE AUT PARTITION_COUNT
------------------------------ --- ---------------
LIST YES 4

Conclusion:
From my point of view, this partitioning improvement is a really good one. Indeed, if your list-partitioned table has a large number of distinct values, creating and managing partitions for each of them could become onerous work. But now with 12cR2 you can forget this task: partition creation for new values is automated!

 

The post Oracle 12cR2: Partitioning improvements – auto-list partitioning appeared first on Blog dbi services.

Creating A New Theme in Oracle Sites Cloud Service

WebCenter Team - Tue, 2016-11-08 08:32

Authored by: Igor Polyakov, Senior Principal Product Manager, Oracle

Creating a new site template in Oracle Sites Cloud Service (SCS) 16.3.5 is easy – you can simply select an existing site in the Manager UI and then save it as a new template. However, what if you want to create a template based on a site created with a brand new theme that you want to develop at the same time? This 9-step process will provide you with step-by-step instructions on how to achieve that.

Step 1: Get a Bootstrap Theme
You can develop a new theme using an HTML cut that you received from a web design agency or a Bootstrap theme that you found on the Web. To follow the instructions in this document, you will use the Bootstrap theme Modern Business as an example to create a new SCS theme.

Download the ModernBusiness.zip archive attached to this document to your desktop and extract it.

Step 2: Create a New Site using Starter Template

Log in to SCS, navigate to Sites > Templates and then create a new site using StarterTemplate. When prompted, enter ‘ModernBusiness’ as the name for the new site.

Note: If StarterTemplate is not available in the catalog, upload the StarterTemplate.zip template package to a folder in DOCS and then, in the Template catalog, select the Create > Import a Template Package option. When prompted, select the StarterTemplate.zip file that you uploaded and click OK to import the template. After site creation is complete, you will see the new site ModernBusiness in the list of sites. This site is created using the StarterTheme that was installed with the StarterTemplate. Under Themes in the Manager UI, select StarterTheme and copy it to a new theme, ModernBusinessTheme.

Open the ModernBusiness site in the Site Builder, select Edit mode and then select the Design option on the left bar to open the Design palette. On the palette, select Themes. When prompted, select ModernBusinessTheme as the theme for the ModernBusiness site.

Note: In the October release of Sites Cloud Service, every time you use StarterTemplate to create a new site, both <New_Site> and <New_Site>Theme will be created for you. This is different from using a standard template – all sites created from a non-starter template share the same theme.

To continue reading and see steps 3-9, please click here

EDB Postgres Advanced Server 9.6 BETA released

Yann Neuhaus - Tue, 2016-11-08 05:25

Yesterday EnterpriseDB released the beta of the next version of Postgres Advanced Server. You can sign up for the beta program here and download the software here.

As usual this release includes all the features of the community release of PostgreSQL (version 9.6) plus the EDB additions such as Oracle compatibility. The release notes are here.

 

The post EDB Postgres Advanced Server 9.6 BETA released appeared first on Blog dbi services.

Paco on Continuing Investment in PeopleSoft

Duncan Davies - Tue, 2016-11-08 05:00

There’s a great episode of Marc Weintraub’s PeopleSoft Talk interview series, featuring a 30 minute discussion with Paco Aubrejuan. There’ll be some great points for everyone to take away from it, however here are my highlights:

On the current Support End date of December 2027:

There’s no plan on ending support for PeopleSoft then, it’s actually not that important a date. It happens to be the date right now that our lifetime support goes to … that probably won’t be the case and as we get closer to that date the plan is to move those dates out.

On Continued Investment:

For me investment and support are one and the same. It’s hard to support applications without continuing to enhance them as well. We’re planning to support and enhance them through that date.

On Fluid Adoption:

We have 200-300 customers who are live on it, many of whom aren’t live with just a few applications but with Fluid across the board. We’ve got to that hockey stick period in terms of adoption where the majority of our customers who are doing projects or upgrades are deploying Fluid.

On replacing SES with Elasticsearch:

“it’s easier, cheaper, faster, better for customers to run Elasticsearch versus SES”

plus lots more on Cloud, Fluid Approvals and Selective Adoption. It’s well worth a watch:

 

 


Oracle Data Visualization Desktop: Star Schemas and Multiple Fact Sources

Rittman Mead Consulting - Tue, 2016-11-08 04:00

Once added to a project, the columns I specified with my custom SQL query now show up as one source.

Now that I have a custom data source with only the data I want to explore, I can easily create an analysis without having to sift through multiple data sources.

* A note about Data Visualization Desktop and caching: When using the above method of writing SQL to create a virtual table, it is loaded into the cache. You should only use this method for very compact models. If the virtual table contains too much data, you can still add it as a data source but it may be too big to cache, causing your columns not to load when creating analyses.

Although having one fact source is common in relational models, using multiple fact sources is sometimes unavoidable in dimensional modeling.

In my sample data, I have another schema called GCBC_SURVEYS, which contains two fact tables holding satisfaction scores for customers and staff, and one dimension table for the organizations that conducted the surveys.

For this example, I’m going to try to add each table as a data source manually first and attempt to join the two fact tables to my dimension table. When using this method, pay close attention and change any key and ID columns from Measure to Attribute so they aren’t aggregated; Data Visualization Desktop sees a numeric datatype and assumes it’s a measure.

Once I've added all of the GCBC_SURVEYS tables as data sources, I’m going to load them into a project and create my joins using the source diagram. When I join each fact table to the dimension table on SURVORG_ID, notice how DVD automatically creates a join between my two fact tables.

This is not desirable because, due to the presence of a circular join, we run the risk of double counting. When I try to break the join between the two fact tables, DVD asks which data source I want to break conformance from.

When I select one of the fact sources, it will not only break the join between the two fact sources but also the join between the fact and the dimension table.

As of this writing, I have not found a way to only break joins between fact tables if they are using the same key to connect to the dimension table.

The only workaround I’ve found is to write a SQL statement that pulls the columns and joins into one virtual table. This way I can specify the joins without DVD creating one between the fact sources.
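Here is the kind of statement I mean, as a sketch only (the GCBC_SURVEYS table and column names below are assumptions for illustration, not the actual definitions). Each fact is pre-aggregated to the grain of the shared dimension in its own subquery, so the two fact sources never join to each other and double counting is avoided:

SELECT d.survorg_id,
       cust.avg_customer_score,
       staff.avg_staff_score
FROM   gcbc_surveys.survey_orgs d
LEFT JOIN (SELECT survorg_id, AVG(score) AS avg_customer_score
           FROM gcbc_surveys.customer_satisfaction
           GROUP BY survorg_id) cust
  ON cust.survorg_id = d.survorg_id
LEFT JOIN (SELECT survorg_id, AVG(score) AS avg_staff_score
           FROM gcbc_surveys.staff_satisfaction
           GROUP BY survorg_id) staff
  ON staff.survorg_id = d.survorg_id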

Once I created my virtual table, I could use it to create a report and use both fact sources.

Although it can take some time to set up all the data sources you want to use for your model, Data Visualization Desktop packs some powerful features when it comes to dimensional modeling. The ability to use more than one fact source when needed adds another area of flexibility to DVD. Whether you are pulling in each table and then creating your model or writing SQL to create one virtual table, DVD has the flexibility to accommodate a variety of different scenarios.

Categories: BI & Warehousing

Troubleshooting Cloning Issues in EBS 12.1

Steven Chan - Tue, 2016-11-08 02:06

The Rapid Clone tool is used to create a working identical copy of an existing Oracle E-Business Suite 12.1 environment.  There are several ways of using Rapid Clone, including cloning a single-node environment to another single-node environment, adding nodes to an existing environment, and reducing nodes when cloning a multi-node environment to a single-node clone. 

Cloning EBS 12.1.3

The guide to using Rapid Clone in EBS 12.1 environments is:

When things go wrong

Given the variety of ways that this tool can be used, it is inevitable that some things might not go as expected.  When that happens, it's helpful to have a troubleshooting framework to narrow down the possible root causes and identify a solution. 

If you've encountered a problem with Rapid Clone, your first stop should be:

This excellent Note covers the most common questions and issues associated with Rapid Clone, such as:

  • Location of cloning log files
  • Missing prerequisite patches
  • Preparing the source system's database and application tiers
  • The expected layout of the cloning stage area for both the applications and database tiers
  • Execution of the cloning process
  • Inventory registration issues
  • Common issues when cloning the database tier
  • Known issues

Related Articles


Categories: APPS Blogs

Copy table data From One DB to Another DB

Tom Kyte - Mon, 2016-11-07 23:06
Hi Team, I need to copy table data from one DB to another DB. One approach I can recollect from one of the conversations on asktom.oracle.com is to create a database link and simply execute - insert into local_table select * from table@...
Categories: DBA Blogs

export import

Tom Kyte - Mon, 2016-11-07 23:06
Hi team, I wanted to know: when I export an entire schema from one database and import it into another database, what happens to objects like tables, functions, triggers, procedures, dblinks, synonyms, public synonyms and many more? Which objects export dur...
Categories: DBA Blogs

SQL not using index

Tom Kyte - Mon, 2016-11-07 23:06
Tom, There is an index on a table, but that index is not being used by the SQL (even with a hint). Can you please tell if something is wrong with the syntax? Below is the definition of the index on the table XLA.XLA_TRANSACTION_ENTITIES (PS:...
Categories: DBA Blogs

How to find the tables of a particular string

Tom Kyte - Mon, 2016-11-07 23:06
Hello there, I'm trying to find the names of the tables and columns that contain a particular string. The code below searches for the string in the whole database and prints it out. However, I wanted to use a wild card because there are instance...
Categories: DBA Blogs
