Feed aggregator

Backup and Recovery

Tom Kyte - Fri, 2016-09-30 19:46
Tom, In responding to a question you mentioned that when you take a cold backup in NoArchiveLog Mode, it is a bad idea to include redo log files, in case you later decided to put the database in archivelog mode. - I understand that if we have to ...
Categories: DBA Blogs

Query to find the Index which is not created by primary key

Tom Kyte - Fri, 2016-09-30 19:46
Hi, how to find the index list which is not created by primary key for a particular schema? <b>select * from all_indexes where owner='sys'</b> currently i am running this query but this is giving the index which is created by primary key also. ...
Categories: DBA Blogs

Subqueries reduction in Merge

Tom Kyte - Fri, 2016-09-30 19:46
I'm having a <code>MERGE </code>with a <code>subquery </code>to get and ID, and I'm wondering if the subquery in the <code>NOT MATCHED statement</code> is always executed or not. <code> MERGE INTO CAR_STOCK st USING CAR_PRODUCTO pr...
Categories: DBA Blogs

Oracle 12c and RMAN automatic datafile creation

Yann Neuhaus - Fri, 2016-09-30 09:13

A lot of new features popped up with RMAN 12c, like the expansion of multi-section support or simplified cross-platform migration, and many more. However, in this post I would like to highlight a small but very helpful new feature which I demonstrated during my session at the SOUG day.

The automatic datafile creation

In earlier versions of Oracle, you might run into situations where you create a new tablespace and some objects in it, and then, all of a sudden, the DB crashes or the datafile is lost without ever having been backed up.

This is where the following command comes into play:

alter database create datafile <NAME>;

However, in Oracle 12c this is done automatically for you when you use RMAN.

Let's create a tablespace and afterwards resize the datafile.

SQL> create tablespace dbi datafile '/home/oracle/rman/dbi01.dbf' size 16M;

Tablespace created.

SQL> alter database datafile '/home/oracle/rman/dbi01.dbf' resize 32M;

Database altered.

Now let's create a table in the new tablespace.

SQL> create table t_dbi tablespace dbi as select * from dba_objects;

Table created.

SQL> select count(*) from t_dbi;


Afterwards, we simulate an error and then check what Oracle says.

$ echo destroy > /home/oracle/rman/dbi01.dbf
SQL> alter system flush buffer_cache; 

System altered. 

SQL> select count(*) from t_dbi;
select count(*) from t_dbi
ERROR at line 1:
ORA-03135: connection lost contact
Process ID: 25538
Session ID: 387 Serial number: 55345

From the alert.log

Errors in file /u00/app/oracle/diag/rdbms/ocm121/OCM121/trace/OCM121_ora_25904.trc:
ORA-01157: cannot identify/lock data file 11 - see DBWR trace file
ORA-01110: data file 11: '/home/oracle/rman/dbi01.dbf'

OK, the datafile is gone, and we have no backup. Let's do a preview to see how Oracle would restore the datafile.

RMAN> restore ( datafile 11 ) preview;

Starting restore at 30-SEP-2016 15:46:27
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=355 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=126 device type=DISK

datafile 11 will be created automatically during restore operation
using channel ORA_DISK_1
using channel ORA_DISK_2

List of Archived Log Copies for database with db_unique_name OCM121

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
519     1    529     A 30-SEP-2016 12:23:00
        Name: +FRA/OCM121/ARCHIVELOG/2016_09_30/thread_1_seq_529.454.923931563

recovery will be done up to SCN 7346834
Media recovery start SCN is 7346834
Recovery must be done beyond SCN 7346834 to clear datafile fuzziness
Finished restore at 30-SEP-2016 15:46:28

As you can see in the preview output, Oracle will create the datafile automatically for us. OK, let's try it.

RMAN> restore ( datafile 11 );

Starting restore at 30-SEP-2016 15:48:29
using channel ORA_DISK_1
using channel ORA_DISK_2

creating datafile file number=11 name=/home/oracle/rman/dbi01.dbf
restore not done; all files read only, offline, or already restored
Finished restore at 30-SEP-2016 15:48:31

RMAN> recover datafile 11;

Starting recover at 30-SEP-2016 15:49:17
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery
media recovery complete, elapsed time: 00:00:02

Finished recover at 30-SEP-2016 15:49:20

RMAN> alter database datafile 11 online;

Statement processed

Done. We can now access our table again without having to run the “alter database create datafile” command during the restore. The resize to 32M was also done for us; note, though, that the resize happened during the recovery part.

$ ls -l dbi01.dbf
-rw-r----- 1 oracle asmadmin 33562624 Sep 30 15:52 dbi01.dbf

From my point of view, a small but quite helpful feature.

One last remark: I have done my demo with Oracle 12cR1 with the July 2016 PSU. In that version, the RMAN PREVIEW command has a bug which claims that you cannot recover into a consistent state, even if you can. After applying the following one-off patch on top of the July 2016 PSU, the RMAN PREVIEW command works as expected.

Cet article Oracle 12c and RMAN automatic datafile creation est apparu en premier sur Blog dbi services.

console access for Linux server

Pat Shuff - Fri, 2016-09-30 09:00
Sometimes you just have to have access to a graphical console to install software. Many software products provide a command-line or silent install, but for some reason some vendors insist on a graphical user interface for their installers. With Windows this is a simple task, because getting to the command line is the difficult part. We talked about configuring the Oracle Compute Cloud security rules on June 6th and getting graphical access to a Windows desktop in the cloud. We could follow the same procedure and setup to gain access to a graphical interface on a Linux instance. We would need to either pass the X-Windows interface through ssh or open ports for VNC (virtual network console) or tigerVNC to connect to a software package on the Linux server.

Oracle has a dedicated team that works on the difficult problems like mirroring E-Business Suite or ingesting SalesForce data into BI repositories for analysis. You can find their postings at www.ateam-oracle.com. One of their posts from April 28th was detailing how to setup a VNC server using an ssh tunnel to securely communicate from a Windows desktop using a putty tunnel to a tigerVNC server installed on a Linux instance in the Oracle Compute Cloud. The installation instructions talk about setting up putty on a Windows desktop as well as setting up the X-Windows user interface and tigerVNC on the Linux server.

There are basically two options for this configuration. The first option is to setup the VNC software to run on the Linux server and have it listen on all network connections. If you setup a Corente VPN to connect your office to the Oracle Cloud nothing else needs to be done other than open up port 5901 on your virtual private network. Once this is done you will have access to the console from any computer in your office that is connected to the VPN subnet. You could also open up port 5901 to the public internet but this is not recommended because you are exposing a potential security hole that someone can exploit. It is easy to configure this short term and turn it off when you are done with your testing of the configuration. Alternatively you can setup an ssh tunnel from a computer in your office and anyone who opens up port 5901 to that ip address is tunneled through the ssh connection to port 5901 on the Oracle Cloud. This is a simple and secure way of connecting one desktop to one computer. It also allows a single access point for people to use to connect to a server in the cloud.
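The SSH-tunnel variant can be sketched as a plain port forward. This is a generic sketch, not taken from the A-Team post; the key path, the "opc" user and the IP address are placeholders you would replace with your own instance details:

```shell
# Forward local port 5901 through ssh to the VNC server on the cloud instance.
# Key path, user name and IP address are placeholders -- adjust for your instance.
ssh -i ~/.ssh/cloud_key -L 5901:localhost:5901 opc@203.0.113.10
```

Afterwards, you point the VNC client on the desktop at localhost:5901; the console traffic travels inside the encrypted ssh connection instead of crossing the public internet on an open port.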

One of the key questions that you need to ask when configuring a console connection is how are you going to use it. Do you have a team of administrators that are all going to access a server in the cloud? Do they need simultaneous access? Do you have a VPN need to connect other servers and services or are you doing it just for console connection? There is a cost of $75/month to run a VPN server because you have to pay for a compute instance to run the connection. In my opinion paying $75/month to connect one person to one computer is not worth the cost. You can do the same thing with an SSH tunnel. If, on the other hand, you have 3-4 admins that are trying to access services and developers that are trying to access other servers having them setup a dozen or so SSH tunnels every time that they want to connect is not an efficient use of time.

In summary, many people only look at how to do something. They don't take a step back and look at why they are doing it. If the software that you are installing is only available with a graphical installer, then you have to set up a connection once, possibly twice. If you can run the installer in silent mode with a configuration file to feed it the mouse clicks and text entry, go with the silent-mode install. Scripting will allow you to automate the install and put it in the ora-init of your Orchestration, thus reducing time and the potential error conditions caused by selecting a different option during re-installation. If you don't have a need for a long-term VPN, go with a short-term SSH tunnel connection. You can always install the VNC software and not enable it on the Linux server in the Oracle Cloud. I did not go through screenshots of the installation and configuration because the A-Team did a great job in their tutorial.

Correlating APEX Sessions to Database Sessions

Joel Kallman - Fri, 2016-09-30 08:01
I received the following question via email today:

"Had a question from a client yesterday concerning the subject:  I want to know which database session (APEX_PUBLIC_USER)  is servicing which APEX session. Poking around in the V$ tables, I can see that in v$SQL, the module column will reveal the APEX Application and Page, but not the Session ID.  Even if the session ID was in there, I don’t see an obvious way to join back to V$SESSION." 

It's a bit of a puzzling problem for DBAs and developers - being able to correlate a database session with a specific APEX application and session.  As I wrote about earlier, all database sessions in the database session pool of ORDS are connected as APEX_PUBLIC_USER (or ANONYMOUS, if you're using the embedded PL/SQL Gateway).  If a user is experiencing slowness in their APEX application, how can a DBA look under the hood, identify the database session associated with the request from that user, and also look at the active SQL statement and possibly any wait events for that session?

This question comes up a lot, and should really be covered in the Oracle Application Express documentation.  But in the meantime, here's the definitive answer:

APEX populates the following information in GV$SESSION for ACTIVE sessions:

client_info: Workspace ID:Authenticated username
module: DB Schema/APEX:APP application id:page id
client_identifier: Authenticated username:APEX Session ID

For example, for a recent request I did on apex.oracle.com, I had the following values in the DB session that executed my request:

client_info: 3574091691765823934:JOEL.KALLMAN@FOOBAR.COM
module: JOEL_DB/APEX:APP 17251:4
client_identifier: JOEL.KALLMAN@FOOBAR.COM:12161645673208

There is no permanent, fixed correlation between an APEX session and a database session.  The assignment of a session in the session pool to service an APEX request is essentially unpredictable and not constant.  That's why this correlation can only be done for active database sessions, which are actively servicing APEX requests.
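Putting the pieces together, a DBA could look up the active sessions servicing a given APEX application with a query along these lines. This is a sketch, not from the original post; the application id 17251 is just the example id shown above:

```sql
-- Active DB sessions currently servicing APEX application 17251 (example id from above).
-- module is 'SCHEMA/APEX:APP app_id:page_id'; client_identifier is 'USERNAME:APEX_SESSION_ID'.
select inst_id, sid, serial#, sql_id, event, module,
       substr(client_identifier, 1, instr(client_identifier, ':') - 1) as apex_user,
       substr(client_identifier, instr(client_identifier, ':') + 1)    as apex_session_id
  from gv$session
 where status = 'ACTIVE'
   and module like '%/APEX:APP 17251:%';
```

Because the pool assignment is unpredictable, the query is only meaningful for sessions that are ACTIVE at the moment you run it.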

There's one caveat.  A developer could overwrite these values using the database-provided PL/SQL API, which I've seen customers do occasionally.  Otherwise, for active database sessions, you'll see these three elements populated in GV$SESSION, and module & client_identifier will also be present in the Active Session History.  Carlos Sierra has an excellent blog post about how to query the Active Session History and identify poor-performing APEX applications and their associated SQL.
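In the same spirit, a rough Active Session History query (a sketch; note that ASH exposes the client identifier in the CLIENT_ID column, and querying ASH requires the Diagnostics Pack license) might look like:

```sql
-- Top APEX application/page and SQL combinations by ASH sample count.
select module, client_id, sql_id, count(*) as samples
  from v$active_session_history
 where module like '%/APEX:APP%'
 group by module, client_id, sql_id
 order by samples desc;
```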

Running PostgreSQL on ZFS on Linux – Fun with snapshots and clones

Yann Neuhaus - Fri, 2016-09-30 02:24

In the last post we looked at how to get a ZFS file system up and running on a CentOS 7 host and how to enable the auto mount of the ZFS file systems. In this post we’ll look at two of the features ZFS provides: Snapshots and clones.

A ZFS snapshot is a read-only copy of a file system. How can we benefit from that when it comes to PostgreSQL? There are several scenarios where this can be useful. Imagine you are developing an application and you want to test the deployment of a new release on top of a previous release. What you probably want is a production-like PostgreSQL instance with lots of data, so you can test the upgrade path. In addition, it would be great if you could revert in seconds and start from scratch, just in case you run into trouble or missed one important point in the upgrade scripts. Using ZFS snapshots you can have all of this. Let's see.

Currently my PostgreSQL instance from the last post does not contain any user data, so let's generate some:

postgres=# create table my_app_table ( a int, b varchar(50) );
postgres=# with aa as 
postgres-# ( select * 
postgres(#     from generate_series (1,1000000) a
postgres(# )
postgres-# insert into my_app_table
postgres-# select aa.a, md5(aa.a::varchar)
postgres-#   from aa;
INSERT 0 1000000

This is the release we want to test our upgrade scripts from, so let's create a snapshot of the current state of our instance:

[root@centos7 ~] zfs snapshot pgpool/pgdata@baserelease
[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease  16.6M      -   202M  -

The “@baserelease” suffix is the name of the snapshot; to be precise, everything after the “@” is the name of the snapshot.

Are you worried about consistency? This should not be an issue: PostgreSQL fsyncs the WAL, so the instance should just start, apply all the WAL records missing from the data files, and you're fine. Anyway, this is a scenario for testing, so as long as you have a consistent starting point you are fine.

A simple upgrade script could be:

postgres=# alter table my_app_table add column c date;
postgres=# update my_app_table set c = now();
UPDATE 1000000

What happened to the snapshot?

[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease  78.3M      -   202M  -

As soon as you modify data the snapshot will grow, no surprise.

So you ran your tests and discovered some things you could improve, and once you have improved what you wanted, you want to start from the same point again. With a snapshot this is quite easy: just revert to the snapshot. Of course, you'll need to stop your PostgreSQL instance first:

postgres@centos7:/home/postgres/ [PG1] pg_ctl stop -D /pgpool/pgdata/ -m fast
waiting for server to shut down....LOG:  received fast shutdown request
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down
LOG:  shutting down
server stopped

As soon as the instance is down the snapshot can be reverted:

[root@centos7 ~] zfs rollback pgpool/pgdata@baserelease
[root@centos7 ~] zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
pgpool/pgdata@baserelease     1K      -   202M  -

When you check the data after starting the instance again, it is exactly as it was before:

postgres@centos7:/home/postgres/ [PG1] pg_ctl start -D /pgpool/pgdata/
postgres@centos7:/home/postgres/ [PG1] LOG:  database system was not properly shut down; automatic recovery in progress
postgres@centos7:/home/postgres/ [PG1] psql postgres
psql (9.5.4 dbi services build)
Type "help" for help.

postgres=# \d my_app_table
        Table "public.my_app_table"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(50) | 

Notice the message about the automatic recovery; that is when the WAL is replayed. Now you can just run your upgrade script again, revert in case of issues, start again, revert again, and so on.

Another use case: rapid cloning of PostgreSQL instances (clones are writable, snapshots are not). How does that work? This is where clones come into the game. To be able to clone, you need a snapshot, as clones depend on snapshots. Another thing to keep in mind is that you cannot delete a snapshot while a clone still sits on top of it. Let's see how it works:

As said, we need a snapshot:

[root@centos7 ~] zfs snapshot pgpool/pgdata@clonebase

On top of this snapshot we can now create a clone:

[root@centos7 ~] zfs create pgpool/clones
[root@centos7 ~] zfs clone pgpool/pgdata@clonebase pgpool/clones/1
[root@centos7 ~] zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
pgpool            170M  9.46G    21K  /pgpool
pgpool/clones    20.5K  9.46G  19.5K  /pgpool/clones
pgpool/clones/1     1K  9.46G   169M  /pgpool/clones/1
pgpool/pgdata     170M  9.46G   169M  /pgpool/pgdata

Using the new clone we can bring up another PostgreSQL instance in seconds, containing the exact data from the source of the clone:

postgres@centos7:/home/postgres/ [PG1] rm /pgpool/clones/1/*.pid
postgres@centos7:/home/postgres/ [PG1] sed -i 's/#port = 5432/port=5433/g' /pgpool/clones/1/postgresql.conf
postgres@centos7:/home/postgres/ [PG1] pg_ctl start -D /pgpool/clones/1/
postgres@centos7:/home/postgres/ [PG1] psql -p 5433 postgres
psql (9.5.4 dbi services build)
Type "help" for help.


Quite cool and easy.

Conclusion: I am not sure whether I'd use ZFS for production databases on Linux, because I have not tested it enough. But for development and testing purposes there are quite a few benefits, such as snapshots and cloning. This can simplify your processes a lot. You could even use snapshots and clones as a basis for your backups, although I'd prefer barman or bart.

PS: To clean up:

[root@centos7 ~] zfs destroy pgpool/clones/1
[root@centos7 ~] zfs destroy pgpool/clones

Cet article Running PostgreSQL on ZFS on Linux – Fun with snapshots and clones est apparu en premier sur Blog dbi services.

September 2016 Updates to AD and TXK for EBS 12.2

Steven Chan - Fri, 2016-09-30 02:05
We have been fine-tuning the administration tools for E-Business Suite 12.2 via a series of regular updates to the Applications DBA (AD) and EBS Technology Stack (TXK) components:

We have now made available a tenth set of critical updates to AD and TXK. These updates are included in EBS 12.2.6.  You have a choice: you can apply the EBS 12.2.6 Release Update Pack, or you can apply the AD and TXK updates separately.

We strongly recommend that you apply either EBS 12.2.6 or these new AD and TXK updates at your earliest convenience:

They can be downloaded from My Oracle Support, as shown by this example for AD:

AD Delta 8 download screenshot

Refer to the following My Oracle Support knowledge document for full installation instructions and associated tasks:

What's New in this Patchset?

This patchset includes a large number of critical fixes for stability issues that will affect all customers.  It also includes the following new features:

  • Delta synchronization for more efficient Online Patching cycles, with streamlined handling of seed data and the Applications file system
  • Improved log file structure to make diagnostics easier and more readable
  • Expanded Application Short Name to 16 characters
  • Support for Logical Host Names
  • Support for 64-bit JDK on Concurrent Manager tier

Related Articles

Categories: APPS Blogs

SQLSaturday #546 Porto – Troubleshooting scenarios with availability groups

Yann Neuhaus - Fri, 2016-09-30 01:48


J-1 before the next SQL Saturday 2016 in Portugal.

I usually promote this event in France, but this time it is for the SQL Server Portuguese community. I will have the chance to talk about SQL Server AlwaysOn and troubleshooting scenarios. It will also be a good opportunity to meet Portuguese community members like Niko Neugebauer, André Batista and many others.

Moreover, Porto is definitely a beautiful city and a good way to finish the summer vacations.

Hope to see you there!


Cet article SQLSaturday #546 Porto – Troubleshooting scenarios with availability groups est apparu en premier sur Blog dbi services.


extended statistics and dynamic sampling

Tom Kyte - Fri, 2016-09-30 01:26
Hello AskTom team, Please correct me if I am wrong, as I understand, "extended statistics" is useful for multi-predicate/non indexed columns for achieving realistic cardinality estimates. Is there any DB level setting where Oracle generates the...
Categories: DBA Blogs

Theory behind adding a 'Select' before a 'Delete' in a program

Tom Kyte - Fri, 2016-09-30 01:26
We are upgrading software that executes against an Oracle database. The vendor modified code to add a Select statement before issuing an existing Delete statement. The criteria of both commands were the same except the Select included a table that o...
Categories: DBA Blogs

How to add two numbers without + or SUM

Tom Kyte - Fri, 2016-09-30 01:26
Sir...i was laughed when i was seen one answer from you...That is..). How to add two numbers without using + and sum functions... Actually that is an interview question.When i was attended an interview at TCS then were asked me these type of quest...
Categories: DBA Blogs

Parsing JSON and storing In Different table.

Tom Kyte - Fri, 2016-09-30 01:26
Hi AskTom, First of all thanks for your contribution to oracle Community! :-) I have this problem where i want to parse a json file stored in clob column and after parsing i have to store it in separate table , i am able to parse this throu...
Categories: DBA Blogs

How to convert only HH24:MI:SS from SYSDATE?

Tom Kyte - Fri, 2016-09-30 01:26
Hello. I have a quick question about Oracle 11g SQL command. I need to get only HH24:MI:SS data from SYSDATE; for example, select TO_DATE(SYSDATE, 'HH24:MI:SS') from dual; I tried it then error is like below. ORA-01861: literal does no...
Categories: DBA Blogs

insert statement returns error very late

Tom Kyte - Fri, 2016-09-30 01:26
hi tom, i have an insert statement which inserts select in specific table, the ids in this table are inserted by trigger and sequence and it works fine. it inserts 8600 row in about second which is fine. the question is, if something is viola...
Categories: DBA Blogs

package specification and body

Tom Kyte - Fri, 2016-09-30 01:26
my question is...if name of procedure along with procedure name and parameters are given in package specification but body of "procedure" is not given in package body will the procedure is going to work ..? if not ...why..? if yes then how..?
Categories: DBA Blogs

SIze of data

Tom Kyte - Fri, 2016-09-30 01:26
1)? How do you find the size of the data for an application? 2). How to add two numbers without using + and sum functions.
Categories: DBA Blogs


Learn DB Concepts with me... - Thu, 2016-09-29 13:45

Note : Killing sessions can be very destructive if you kill the wrong session, so be very careful when identifying the session to be killed.
There can be a number of reasons to kill non-essential Oracle user processes. In Oracle, the alter system kill session command allows us to kill such sessions.

The alter system kill session command requires two unique arguments that uniquely identify the Oracle session, the session identifier and serial number.

First you have to identify the session to be killed with alter system kill session.
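For example, a query along these lines (a generic sketch, not the one from the original post) lists candidate user sessions together with the SID and SERIAL# values needed for the kill command:

```sql
-- List user sessions with the identifiers needed for ALTER SYSTEM KILL SESSION.
select sid, serial#, username, status, osuser, machine, program
  from v$session
 where username is not null
 order by username, sid;
```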


The SID and SERIAL# values of the Oracle session to be killed can then be substituted and the alter system kill session command issued.
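For instance, assuming the session to kill has SID 147 and SERIAL# 8289 (placeholder values for illustration):

```sql
ALTER SYSTEM KILL SESSION '147,8289';
```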


Sometimes Oracle is not able to kill the session immediately with the alter system kill session command alone. Upon issuing the command, the session will be 'marked for kill' and will then be killed as soon as possible.

If a session is 'marked for kill' after issuing the alter system kill session command but not killed immediately, the command can be forced by adding the immediate keyword:
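For illustration, again with placeholder identifiers SID 147 and SERIAL# 8289:

```sql
ALTER SYSTEM KILL SESSION '147,8289' IMMEDIATE;
```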


In a RAC environment, you can optionally specify the INST_ID, shown when querying the GV$SESSION view. This allows you to kill a session on a different RAC node.

SQL> ALTER SYSTEM KILL SESSION 'sid,serial#,@inst_id';

Categories: DBA Blogs


Learn DB Concepts with me... - Thu, 2016-09-29 13:24

If you have configured IDLE_TIME in your user profile.


Specify the permitted periods of continuous inactive time during a session, expressed in minutes. Long-running queries and other operations are not subject to this limit
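As an illustration (the profile and user names below are made up, not from the original post), a 10-minute idle limit could be set like this:

```sql
-- Resource limits such as IDLE_TIME are only enforced when RESOURCE_LIMIT is TRUE.
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;

-- Create a profile with a 10-minute idle limit and assign it to a user.
CREATE PROFILE limited_idle LIMIT IDLE_TIME 10;
ALTER USER scott PROFILE limited_idle;
```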

Let's say a session has been idle for 10 minutes. The session will continue to show as idle even after the IDLE_TIME specified in that user's profile has expired. When the user attempts to run a transaction against the database after the IDLE_TIME has expired, the database will disconnect the user by terminating the session. After this, the session will no longer show in v$session. So, even if the session appears to be idle for a duration slightly longer than your 10 minutes, it is already "dead"; it just doesn't show as dead yet. PMON will eventually snipe the session, marking it dead in v$session.

Refer to this Oracle document for more information.

Once the Oracle session has changed to SNIPED status, we can kill that session without any problem. This can be done manually, but watching for these SNIPED sessions all the time can be tedious, so we can automate the job of killing SNIPED sessions as below.

-- This is an optional table. Create it only if you want to audit the killed sessions.
-- (The table name and the last two columns are assumptions; the original listing was truncated.)

create table killed_sessions_log (
    "SID"        NUMBER,
    "SERIAL#"    NUMBER,
    "USERNAME"   VARCHAR2(128),
    "KILL_TIME"  DATE
);
Note: In order to create the procedure below you will need the DBA role and the "ALTER SYSTEM" grant. If you are creating this procedure in a non-DBA user, you will need grant select on V$SESSION and the "ALTER SYSTEM" grant.

-- Procedure to kill sessions that are in sniped state

create or replace Procedure KILL_SNIPED_SESSIONS as
cursor SEL_SID is select SID,SERIAL#,USERNAME from v$session where status='SNIPED';
V_SQL varchar2(100);
V_SQL1 varchar2(100);
begin
  for REC in SEL_SID loop
    -- kill the sniped session (the loop body is reconstructed; the original listing was truncated)
    V_SQL := 'alter system kill session '''||REC.SID||','||REC.SERIAL#||''' immediate';
    execute immediate V_SQL;
    -- optional: record the kill in the audit table (skip this if you did not create it)
    V_SQL1 := 'insert into killed_sessions_log values ('||REC.SID||','||REC.SERIAL#||','''||REC.USERNAME||''',sysdate)';
    execute immediate V_SQL1;
  end loop;
  commit;
end;
/




-- Scheduler job to run the procedure every hour. The surrounding DBMS_SCHEDULER calls are
-- reconstructed; the original listing only showed the parameter lists.
begin
  DBMS_SCHEDULER.CREATE_JOB (
            job_name => '"EXEC_KILL_SNIPED_SESSIONS"',
            job_type => 'STORED_PROCEDURE',
            job_action => 'KILL_SNIPED_SESSIONS',
            number_of_arguments => 0,
            start_date => TO_TIMESTAMP_TZ('2016-09-29 13:07:38.837143000 AMERICA/CHICAGO','YYYY-MM-DD HH24:MI:SS.FF TZR'),
            repeat_interval => 'FREQ=HOURLY',
            end_date => NULL,
            enabled => FALSE,
            auto_drop => FALSE,
            comments => 'Job to run the KILL_SNIPED_SESSIONS procedure that kills SNIPED sessions');
  DBMS_SCHEDULER.SET_ATTRIBUTE (
             name => '"EXEC_KILL_SNIPED_SESSIONS"',
             attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF);
  DBMS_SCHEDULER.ENABLE (
             name => '"EXEC_KILL_SNIPED_SESSIONS"');
end;
/

Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator