
DBA Blogs

Shift Command in Shell Script in AIX and Linux

Pakistan's First Oracle Blog - Tue, 2015-07-14 21:42
The Unix shell never ceases to surprise. I stumbled upon the 'shift 2' command in AIX a few hours ago and it's very useful.

The 'shift n' command shifts the positional parameters passed to a shell script 'n' places to the left.

For example:

If you have a shell script which takes 3 parameters, like:

./mytest.sh arg1 arg2 arg3

and you use shift 2 in your shell script, then the values of arg1 and arg2 will be lost and the value of arg3 will become the first parameter ($1).

For example:

If you have a shell script which takes 2 parameters, like:

./mytest.sh arg1 arg2

and you use shift 2, then the values of both arg1 and arg2 will be lost.

Following is a working example of shift command in AIX:

testsrv>touch shifttest.sh

testsrv>chmod a+x shifttest.sh

testsrv>vi shifttest.sh

testsrv>cat shifttest.sh
#!/bin/ksh
SID=$1
BACKUP_TYPE=$2
echo "Before Shift: $1 and $2 => SID=$SID and BACKUPTYPE=$BACKUP_TYPE"
shift 2
echo "After Shift: $1 and $2 => SID=$SID and BACKUPTYPE=$BACKUP_TYPE"


testsrv>./shifttest.sh orc daily

Before Shift: orc and daily => SID=orc and BACKUPTYPE=daily
After Shift:  and  => SID=orc and BACKUPTYPE=daily


Note that the arguments passed have been shifted to the left, but the values of the variables have remained intact.
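
As an illustration of the typical use, here is a minimal ksh sketch (hypothetical; the option names are made up) that parses options by consuming them with shift in a loop:

#!/bin/ksh
# Loop until all positional parameters are consumed.
# $# holds the count of remaining parameters.
while [ $# -gt 0 ]
do
  case "$1" in
    -s) SID="$2"; shift 2 ;;     # option with a value: consume two parameters
    -v) VERBOSE=Y; shift ;;      # simple flag: consume one parameter
    *)  echo "Unknown argument: $1"; shift ;;
  esac
done
echo "SID=$SID VERBOSE=$VERBOSE"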
Categories: DBA Blogs

Sorry for the radio silence but this post will make up for it ( or maybe not ha ha ) makes me grumpy

Grumpy old DBA - Mon, 2015-07-13 17:07
It's been a busy year; apologies for the lack of posts.  I have learned a couple of good Oracle-related things and probably forgotten more, but most of the stuff I work on is now somewhat confidential, which doesn't make it easy to blog about.

Last week at the Federal Reserve Bank of Cleveland we had a very important visitor: Janet Yellen, the chairwoman of, well, everything, the chair of the Federal Reserve.

She was delivering a speech at the well-known Cleveland City Club but stopped by our bank.  My area was involved in doing two brief demos of what we are working on, and she sat on the other side of the table across from me.  I did not have a speaking part ( just eye candy I guess ha ha ) but the people in my area who did speak were also in pictures.  Maybe next time?

I am kind of thinking about a presentation on SQL plan baselines as something to work on this winter.  There are a number of good ones out there already, but the critical part of using a baseline is figuring out where to get one from ( and quickly ha ha ).


Categories: DBA Blogs

RMAN -- 5 : Useful KEYWORDs and SubClauses

Hemant K Chitale - Sun, 2015-07-12 04:55
I begin a subseries on useful KEYWORDs and SubClauses in the RMAN Command Set.

This is on the less-commonly used KEYWORDs  (Thus, I don't show CONFIGURE CONTROLFILE AUTOBACKUP ON in this post or SET UNTIL in the next post).

(Note that specifications defined with CONFIGURE become "persistent" but can be overridden in the specific BACKUP or RESTORE runs)



COMPRESSED BACKUPSET
By default BackupSets are non-Compressed.  However, COMPRESSED BACKUPSET can be specified in either the BACKUP command OR in the CONFIGURE command.
RMAN> configure device type disk parallelism 2 backup type to compressed backupset;

old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET;
new RMAN configuration parameters are successfully stored
released channel: ORA_DISK_1
released channel: ORA_DISK_2
released channel: ORA_DISK_3
released channel: ORA_DISK_4

RMAN>

RMAN> backup tablespace hemant;

Starting backup at 12-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=49 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=29 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
input datafile file number=00008 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUL-15
channel ORA_DISK_2: starting compressed full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
input datafile file number=00009 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
channel ORA_DISK_2: starting piece 1 at 12-JUL-15
channel ORA_DISK_2: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T182836_bt4jbnxh_.bkp tag=TAG20150712T182836 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:08
channel ORA_DISK_1: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T182836_bt4jbnx1_.bkp tag=TAG20150712T182836 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:16
Finished backup at 12-JUL-15

Starting Control File and SPFILE Autobackup at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_12/o1_mf_s_884888932_bt4jc47o_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12-JUL-15

RMAN>
Note how I did not have to specify "AS COMPRESSED BACKUPSET" in the BACKUP command. The CONFIGURE command had configured the DISK device type backups to be COMPRESSED BACKUPSETs as "the default".  I can override the *configured* COMPRESSED BACKUPSET by including the "AS BACKUPSET" Keywords in the BACKUP command. (Out-of-the-box, RMAN is NOT configured for COMPRESSED BACKUPSET !)
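
For instance, a one-off non-compressed backup of the same tablespace would be (a sketch of the override):

RMAN> backup as backupset tablespace hemant;

This run would create regular (non-compressed) BackupSets without changing the persistent configuration.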



PARALLELISM in the deviceConf SubClause of the CONFIGURE command
This allows RMAN to automatically create multiple Channels when running a Backup.  Therefore, individual CONFIGURE CHANNEL commands are not required.  Note : each Channel creates a separate BackupSet !

With PARALLELISM 2, we see 2 Channels ORA_DISK_1 and ORA_DISK_2 created and used :
RMAN> configure device type disk parallelism 2 backup type to backupset;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored
released channel: ORA_DISK_1

RMAN>
RMAN> backup tablespace HEMANT;

Starting backup at 12-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=49 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=29 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
input datafile file number=00008 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUL-15
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
input datafile file number=00009 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
channel ORA_DISK_2: starting piece 1 at 12-JUL-15
channel ORA_DISK_2: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T182024_bt4hv96z_.bkp tag=TAG20150712T182024 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:36
channel ORA_DISK_1: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T182024_bt4hv973_.bkp tag=TAG20150712T182024 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:36
Finished backup at 12-JUL-15

Starting Control File and SPFILE Autobackup at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_12/o1_mf_s_884888462_bt4hwn9c_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12-JUL-15

RMAN>

With PARALLELISM 4, we see 4 Channels ORA_DISK_1 to ORA_DISK_4 being created :
RMAN> configure device type disk parallelism 4 backup type to backupset;

old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored
released channel: ORA_DISK_1
released channel: ORA_DISK_2

RMAN>
RMAN> backup tablespace HEMANT;

Starting backup at 12-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=49 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=29 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=32 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=28 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
input datafile file number=00009 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUL-15
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf
channel ORA_DISK_2: starting piece 1 at 12-JUL-15
channel ORA_DISK_3: starting full datafile backup set
channel ORA_DISK_3: specifying datafile(s) in backup set
input datafile file number=00008 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
channel ORA_DISK_3: starting piece 1 at 12-JUL-15
channel ORA_DISK_1: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T182301_bt4j05dl_.bkp tag=TAG20150712T182301 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T182301_bt4j05f7_.bkp tag=TAG20150712T182301 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_3: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T182301_bt4j05jz_.bkp tag=TAG20150712T182301 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:03
Finished backup at 12-JUL-15

Starting Control File and SPFILE Autobackup at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_12/o1_mf_s_884888584_bt4j08mc_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12-JUL-15

RMAN>

In the latter case, although 4 Channels were created, RMAN actually used only 3 Channels.  So, RMAN can still automatically adjust the allocation of datafiles across the Channels.



MAXSETSIZE in the BACKUP command.
A BACKUP run (whether at Database level or for specific Tablespace(s) or Datafiles or ArchiveLogs), by default, creates BackupSets.  However, a BACKUP run can create multiple BackupSets.
This can happen when :
a.  The number of datafiles (or ArchiveLogs) in the Backup is "large"  (e.g. by default, unless FILESPERSET is specified, a max of 64 datafiles make up one BackupSet).
b.  Multiple Channels are used for the BACKUP run
c.  PARALLELISM is configured  (as demonstrated above)
The MAXSETSIZE is actually a way to specify the maximum size of a BackupSet.  A datafile cannot span BackupSets, so the MAXSETSIZE must be, at least, as large as the largest datafile in the database.

SQL> select max(bytes)/1048576 from dba_data_files;

MAX(BYTES)/1048576
------------------
1259

SQL>

RMAN> backup as compressed backupset database maxsetsize 1200M;

Starting backup at 12-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=49 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 07/12/2015 18:14:37
RMAN-06183: datafile or datafile copy /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf (file number 2) larger than MAXSETSIZE

RMAN>

Thus, we can see that Oracle automatically checks the largest datafile against the MAXSETSIZE. Note, also, that it does not estimate a Compressed size for this check, even though I've specified "AS COMPRESSED BACKUPSET".
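
As a sketch based on the output above, since the largest datafile is 1259 MB, rerunning with a MAXSETSIZE above that should pass the check:

RMAN> backup as compressed backupset database maxsetsize 1300M;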



MAXPIECESIZE
Just as a Backup can consist of multiple BackupSets, a BackupSet can consist of multiple BackupPieces.  A BackupSet can consist of multiple datafiles, and a datafile may be split across multiple BackupPieces. However, unlike MAXSETSIZE, which can be specified with the BACKUP command, MAXPIECESIZE is specified with CONFIGURE CHANNEL. (Also, unlike MAXSETSIZE, which runs a check before beginning the backup, this limit is applied when creating BackupPieces !)

RMAN> configure device type disk parallelism 1 backup type to compressed backupset;

old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;
new RMAN configuration parameters are successfully stored
released channel: ORA_DISK_1
released channel: ORA_DISK_2

RMAN> configure channel device type disk maxpiecesize 25M;

new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 25 M;
new RMAN configuration parameters are successfully stored


RMAN> configure channel device type disk maxpiecesize 5M;

old RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 25 M;
new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 5 M;
new RMAN configuration parameters are successfully stored
released channel: ORA_DISK_1

RMAN> backup datafile 6;

Starting backup at 12-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=49 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUL-15
channel ORA_DISK_1: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T184704_bt4kf8jm_.bkp tag=TAG20150712T184704 comment=NONE
channel ORA_DISK_1: starting piece 2 at 12-JUL-15
channel ORA_DISK_1: finished piece 2 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T184704_bt4kf9jn_.bkp tag=TAG20150712T184704 comment=NONE
channel ORA_DISK_1: starting piece 3 at 12-JUL-15
channel ORA_DISK_1: finished piece 3 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T184704_bt4kfbjs_.bkp tag=TAG20150712T184704 comment=NONE
channel ORA_DISK_1: starting piece 4 at 12-JUL-15
channel ORA_DISK_1: finished piece 4 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T184704_bt4kfcjy_.bkp tag=TAG20150712T184704 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
Finished backup at 12-JUL-15

Starting Control File and SPFILE Autobackup at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_12/o1_mf_s_884890028_bt4kfdn9_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12-JUL-15

RMAN>

SQL> !du -sh /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T184704*
3.1M /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T184704_bt4kf8jm_.bkp
3.1M /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T184704_bt4kf9jn_.bkp
3.1M /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T184704_bt4kfbjs_.bkp
68K /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T184704_bt4kfcjy_.bkp

SQL>

Note that the configured MAXPIECESIZE limit applied to the compressed piece size !



FILESPERSET
I have mentioned FILESPERSET earlier in this post and in another blog post.  This Keyword allows us to limit a BackupSet to a certain number of files.

RMAN> backup as compressed backupset tablespace HEMANT filesperset=6;

Starting backup at 12-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=42 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
input datafile file number=00008 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
input datafile file number=00009 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUL-15
channel ORA_DISK_1: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T222911_bt4yfrd6_.bkp tag=TAG20150712T222911 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
Finished backup at 12-JUL-15

Starting Control File and SPFILE Autobackup at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_12/o1_mf_s_884903397_bt4yh670_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12-JUL-15

RMAN>

With FILESPERSET=6, all 5 datafiles are in one BackupSet (as long as the total size is less than the limit on the BackupSet size / MAXSETSIZE).

Now, let's rerun the backup with FILESPERSET=1

RMAN> backup as compressed backupset tablespace HEMANT filesperset=1;

Starting backup at 12-JUL-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUL-15
channel ORA_DISK_1: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T223210_bt4ymbh5_.bkp tag=TAG20150712T223210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUL-15
channel ORA_DISK_1: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T223210_bt4ymfnl_.bkp tag=TAG20150712T223210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00008 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUL-15
channel ORA_DISK_1: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T223210_bt4ymjq8_.bkp tag=TAG20150712T223210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUL-15
channel ORA_DISK_1: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T223210_bt4ymmt5_.bkp tag=TAG20150712T223210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf
channel ORA_DISK_1: starting piece 1 at 12-JUL-15
channel ORA_DISK_1: finished piece 1 at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_12/o1_mf_nnndf_TAG20150712T223210_bt4ympxr_.bkp tag=TAG20150712T223210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 12-JUL-15

Starting Control File and SPFILE Autobackup at 12-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_12/o1_mf_s_884903546_bt4ymtdr_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 12-JUL-15

RMAN>
Now, I can have each datafile go into a separate BackupSet. The advantage of using FILESPERSET in one context was shown in a previous post. Another advantage is obvious when we have to restore a single datafile. If we had 16 datafiles in a BackupSet, and we need to restore a single datafile, RMAN still has to read the entire BackupSet of 16 datafiles to be able to restore that single datafile.  If each BackupSet was limited in size by using MAXSETSIZE and/or FILESPERSET, restoring a single datafile can be faster as the Restore has to read a smaller BackupSet.

.
.
.
Categories: DBA Blogs

Log Buffer #431: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-10 08:46

This Log buffer edition covers Oracle, SQL Server and MySQL blog posts about new features, tips, tricks and best practices.

Oracle:

  • Traditionally, assigning specific processes to a certain set of CPUs has been done by using processor sets (and resource pools). This is quite useful, but it requires the hard partitioning of processors in the system. That means, we can’t restrict process A to run on CPUs 1,2,3 and process B to run on CPUs 3,4,5, because these partitions overlap.
  • Parallel_Degree_Limit, Parallel_Max_Degree, Maximum DOP? Confused?
  • JDeveloper 12c – ORA-01882: time zone region not found
  • Using a Parallel Gateway without a Merge in OBPM
  • Secure multi-threaded live migration for kernel zones

SQL Server:

  • How to Unlock Your Team’s Creative Power with SQL Prompt and a VCS
  • In-Memory OLTP – Common Workload Patterns and Migration Considerations
  • The Poster of the Plethora of PowerShell Pitfalls
  • Temporarily Change SQL Server Log Shipping Database to Read Only
  • Monitoring Availability Groups with Redgate SQL Monitor

MySQL:

  • Introducing MySQL Performance Analyzer
  • MySQL 5.7.8 – Now featuring super_read_only and disabled_storage_engines
  • Become a MySQL DBA – Webinar series: Which High Availability Solution?
  • How to obtain the MySQL version from an FRM file
  • MySQL Enterprise Audit: Parsing Audit Information From Log Files, Inserting Into MySQL Table

 

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #431: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Pillars of PowerShell: SQL Server – Part 1

Pythian Group - Thu, 2015-07-09 13:37
Introduction

This is the sixth and final post in the series on the Pillars of PowerShell, at least part one of the final post. The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling
  5. Windows OS

PowerShell + SQL Server is just cool! You will see folks talk about the ability to perform a task against multiple servers at a time, automate a configuration or database change, or just obtain a bit of consistency when doing certain processes. I tend to use it just because I can, and it is fun to see what I can do. There are some instances where I have used it for a specific purpose and it saved me time, but overall I just chose to use it. I would say that on average there are going to be things you can do in PowerShell that could be done in T-SQL, and in those cases you use the tool that fits your needs.

Interacting with SQL Server PowerShell

There are three main ways to interact with SQL Server using PowerShell that I have seen:

  1. SQL Server PowerShell (SQLPS)
  2. SQL Server Server Management Object (SMO)
  3. Native .NET coding

I am not going to touch on the third option in this series because it is not something I use enough to discuss. I will say it is not my first choice, but it does serve a purpose at times.

To provide enough information to introduce you to working with PowerShell and SQL Server, I broke this into two parts. In part one, we are going to look at SQL Server PowerShell (SQLPS) and using the SQL Server Provider (SQLSERVER:\). In part two, we will go over SMO and what can be accomplished with it.

SQLPS, to me, offers quick access for one-liner type tasks against SQL Server. Which option you go with is really just a preference, so if it works for you, use it. There are some situations where using the SQL Server Provider actually requires you to mix in SMO (e.g. creating a schema or database role). It also offers a few cmdlets that are added to (and improved upon) with each release of SQL Server.

Loading/Importing

The first thing to understand is how to get the product module into your PowerShell session. As with most products, some portion of the software has to exist on the machine you are working on, or the machine your script is going to be executed on. SQL Server PowerShell and SMO are installed by default if you install the SQL Server Management Tools (aka SSMS and such) for SQL Server 2008 and higher. I will only mention that they can also be found in the SQL Server Feature Pack if you need a more “standalone” type setup on a remote machine.

One thing you should get in the habit of doing in your scripts is verifying things that commonly cause errors; one of those is dealing with modules. If the module is not loaded when the script is run, your script is just going to spit out a ton of red text. If the prerequisites are not there to begin with, there is no point in continuing. You can verify that a version of the SQLPS module is installed on your machine by running the following command:

Get-Module -ListAvailable -Name SQL*

If you are running SQL Server 2012 or 2014 you will see something like this:

[Screenshot: Get-Module output showing the SQLPS module]

This works in a similar fashion when you want to verify if the SQL Server 2008 snap-in is loaded:

[Screenshot: verifying the SQL Server 2008 snap-in]

I generally do not want to have to remember or type out these commands all the time when I am doing things on the fly, so I will add this bit of code to my PowerShell Profile:

Push-Location
Import-Module SQLPS -DisableNameChecking -ErrorAction 'Stop'
Pop-Location

#Load SQL Server 2008 by uncommenting next line
#Add-PSSnapin *SQL* -ErrorAction 'Stop'

One cool thing that most cmdlets in PowerShell support is the -ErrorAction parameter. There are a few different values you can use for this parameter, and you can find those by checking the help topic about_CommonParameters. If your script is one that is going to be interactive or run manually, I would use -ErrorAction 'Inquire' instead; try it out on a machine that does not have the module installed to see what happens. Once you have the module or snap-in loaded you will be able to access the SQL Server PowerShell Provider.

One side note: there actually is a “sqlps.exe” utility that is easily accessible in most cases via the right-click menu in SSMS (e.g. right-click on the “Databases” node in Object Explorer). If you open this, you are thrust into the SQLPS provider at the “directory” of the node you opened from in SSMS. However convenient that may seem, it is something that was added to the deprecation list with SQL Server 2012, so there’s not much point in talking about it. It has its own little quirks, so most folks steer clear of using it anymore.

Being Specific

The code I use in my profile is going to load the most current version of the module found on my system, or at least it should; it may not do what you expect every time. In some circumstances, when you are developing scripts on your own system, you may need to import only a specific version, especially if you are in a mixed-version environment for SQL Server. You can load a specific version of the module by using Get-Module to find your version and just passing it to Import-Module.

Get-Module -ListAvailable -Name SQLPS | select name, path
#110 = SQL Server 2012, 120 = SQL Server 2014, 130 = SQL Server 2016
Push-Location
Get-Module -ListAvailable -Name SQLPS |
     where {$_.path -match "110"} | Import-Module
Pop-Location

# To show that it was indeed loaded
Get-Module -Name SQLPS | select name, path

#If you want to switch to another one, you need to remove it
Remove-Module SQLPS
Authentication

By default when you browse the SQLPS provider (or most providers actually), it is going to utilize the account that is running the PowerShell session, Windows Authentication. If you find yourself working with an instance that you require SQL Login authentication, don’t lose hope. You can connect to an instance via the SQL Server Provider with a SQL Login. There is an MSDN article that provides a complete function that you can use to create a connection for such a purpose. It does not show a version of the article for SQL Server 2008 but I tested this with SQL Server 2008 R2 and it worked fine.

[Screenshot: connecting through the SQL Server provider with SQL Login authentication]

One important note I will make that you can learn from the function in that article: the password is secure and not stored or processed in plain text.

SQLPS Cmdlets

SQLPS, as noted previously, offers a handful of cmdlets for performing a few administrative tasks against SQL Server instances. The majority of the ones you will find with SQL Server 2012, for example, revolve around Availability Groups (e.g. disabling, creating, removing, etc.). The other unmentionables include Backup-SqlDatabase and Restore-SqlDatabase; these do exactly what you think, but with a few limitations. The backup cmdlet can actually only perform a FULL, LOG, or FILE level backup (not sure why they did not offer support for a differential backup). Anyway, they could be useful for automating backups of production databases to “refresh” development or testing environments, as the backup cmdlet does support doing a copy-only backup. Another way: if you deal with Express Edition, you can utilize this cmdlet and a scheduled task to back up those databases.

Update 7/13/2015: One correction I should have checked previously: the Backup cmdlet for 2012 and above does include an “-Incremental” parameter for performing differential backups.
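
As an illustration (a sketch only; the instance, database, and path below are placeholders), a copy-only full backup that could feed a development refresh might look like this:

# Copy-only full backup; -CopyOnly leaves the regular backup chain intact
Backup-SqlDatabase -ServerInstance "MANATARMS\SQL12" -Database "MyAppDB" `
    -BackupFile "C:\Backups\MyAppDB_copy.bak" -BackupAction Database -CopyOnly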

The other main cmdlet you get with SQLPS is what most people consider the replacement for the sqlcmd utility: Invoke-Sqlcmd. The main thing you get from the cmdlet is smarter output, in the sense that PowerShell will more appropriately detect the data types coming back, compared to the utility, which treated everything as a string.
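
A quick sketch of what that looks like (the server name is a placeholder); because the columns come back typed, they can be sorted or filtered natively in PowerShell:

# create_date comes back as [datetime], not as a string
Invoke-Sqlcmd -ServerInstance "MANATARMS\SQL12" -Query "SELECT name, create_date FROM sys.databases" |
    Sort-Object create_date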

SQLPS One-liners

Working with the SQL Server Provider, you will traverse it as you would a drive on your computer. So you can use the cmdlet Get-ChildItem or do as most folks and use the alias dir. The main thing to understand is the first few “directories” to access a given SQL Server instance. There are actually multiple root directories under the provider that you can see just by doing “dir SQLSERVER:\“. You can see by the description what each one is for; the one we are interested in is the “Database Engine”

[Screenshot: root directories under SQLSERVER:\]

Once you get beyond the root directory it can require a bit of patience, as the provider is slow to respond or return information. If we want to dig into an instance of SQL Server, you just need to understand the structure of the provider; it will generally follow this syntax: <Provider>:\<root>\<hostname>\<instance name>\. The instance name will be “DEFAULT” if you are dealing with a SQL Server default instance. If you have a named instance you just add the name of the instance (minus the server name).
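
For example, using the same host and named instance that appear in the job example further down:

# Default instance on host MANATARMS:
dir SQLSERVER:\SQL\MANATARMS\DEFAULT\Databases
# Named instance SQL12 on the same host:
dir SQLSERVER:\SQL\MANATARMS\SQL12\Databases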

To provide a real-world example, Avail Monitoring is the custom tool Pythian developed to monitor the SQL Server environments of our customers (or Oracle or MySQL…you get the point). One of the features it includes, among many, is monitoring for failed jobs. We customize the monitoring around the customer’s requirements, so some job failures will page us immediately when they occur, while others may allow a few extra failures before we are notified to investigate. This is all done without any intervention required by the customer, and I know from that notification what job failed. Right off you are going to want to check the job history for that job to see what information shows up, and I can use the SQLPS Provider to do just that:

# To see the job history
dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.EnumHistory()} | select message, rundate -first 5 | format-list
[Screenshot: job history output]
# if I needed to start the job again
$jobs = dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs
$jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.Start()}

You might think that is a good bit of typing, but consider how long it would take me to do the same thing through SSMS…I can type much faster than I can click with a mouse.

Anyway, to close things out, I thought I would show one of the cool things SQLPS is used for the most: scripting out stuff. Just about every “directory” you go into with the provider is going to offer a method named “Script()”.

$jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.Script()}

I will get the T-SQL equivalent of the job, just like SSMS provides; this can be used to document your jobs or when refreshing a development server.

Summary

I hope you got an idea of what SQLPS can do from the information above; one-liners are always fun to discover. The SQL Server Provider is not the most used tool out there by DBAs, but it can be a life-saver at times. In the next post we will dig into using SMO and the awesome power it offers.

 

Discover more about our expertise in SQL Server

The post Pillars of PowerShell: SQL Server – Part 1 appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Reading System Logs on SQL Server

Pythian Group - Thu, 2015-07-09 12:54

Recently, while I was working on a backup failure issue, I found that the backup was failing for a particular database. When I ran the backup manually to a different folder it would complete successfully, but not in the folder it was directed to when the backup jobs were originally configured.  This made me suspicious about hard disk corruption. In the end, I fixed the backup issues in the interim so that in the future I would not get paged, as well as lowering the risk of having no backup in place.

Upon reviewing the Windows Event logs, it was revealed that I was right to suspect a faulty hard drive. The log reported some messages related to SCSI codes, especially SCSI Sense Key 3, which means the device had a Medium error. Eventually, the hard drive was replaced by the client and the database was moved to another drive.  In the past month, I have had about 3 cases where serious messages related to storage were reported as information. I have included one case here for your reference, which may help in case you see such things in your own logs.

CASE 1 – Here is what I found in the SQL Server error log:

  • Error: 18210, Severity: 16, State: 1
  • BackupIoRequest::WaitForIoCompletion: read failure on backup device ‘G:\MSSQL\Data\SomeDB.mdf’.
  • Msg 3271, Level 16, State 1, Line 1
  • A non-recoverable I/O error occurred on file “G:\MSSQL\Data\SomeDB.mdf:” 121 (The semaphore timeout period has expired.).
  • Msg 3013, Level 16, State 1, Line 1
  • BACKUP DATABASE is terminating abnormally.

When I ran the backup command manually I found that it ran fine until a specific point (55%) before failing again with the above error. Next, I ran DBCC CHECKDB, which reported that a particular table had a consistency error at a particular page. Here are the reported errors:

Msg 8966, Level 16, State 2, Line 1
Unable to read and latch page (1:157134) with latch type SH. 121(The semaphore timeout period has expired.) failed.
Msg 2533, Level 16, State 1, Line 1
Table error: page (1:157134) allocated to object ID 645577338, index ID 0, partition ID 72057594039304192, alloc unit ID 72057594043301888 (type In-row data) 
was not seen. The page may be invalid or may have an incorrect alloc unit ID in its header. The repair level on the DBCC statement caused this repair to be bypassed.

Of course, the repair options did not help, as I had anticipated, since the backup was also failing when it reached 55%. A select statement also failed to complete when I queried object 645577338.  The only option I was left with was to create a new table and drop the original table. After this had been done, the full backup succeeded. As soon as this was completed we moved the database to another drive.
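
As a side note, a quick sketch of how the object ID reported by DBCC CHECKDB can be translated into a table name (run in the affected database):

-- Translate the object ID from the DBCC CHECKDB output into a table name
SELECT OBJECT_NAME(645577338) AS affected_table;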

I was still curious about these errors, so I started looking at the Windows Event Logs – System folder, filtering them to show only Errors and Warnings.  However, this did not show me anything worth reading further. Thus, I removed the filter and carefully reviewed the logs.  To my surprise, the logs showed entries for a bad sector, but this was in the Information section of the Windows Event Viewer, System folder.

Event Type: Information
Event Source: Server Administrator
Event Category: Storage Service
Event ID: 2095
Date: 6/10/2015
Time: 1:04:18 AM
User: N/A
Computer: SQLServer
Description: SCSI sense data Sense key: 3 Sense code:11 Sense qualifier: 0:  Physical Disk 0:2 Controller 0, Connector 0.

There could be a different error, warning or informational message printed on your server depending on what the issue is. Upon further review, there is still much to be said to explain the codes and descriptions.

You may have noticed that I have referred to this as CASE 1, which means, I will blog one or two more case(s) in the future. Stay tuned!

Photo credit: Hard Disk KO via photopin (license)

Learn more about our expertise in SQL Server.

The post Reading System Logs on SQL Server appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

RMAN -- 4b : Recovering from an Incomplete Restore with OMF Files

Hemant K Chitale - Wed, 2015-07-08 07:51
Following up on my previous post (which had the datafiles as non-OMF), here is a case with OMF files.

SQL> select file_name from dba_data_files
2 where tablespace_name = 'HEMANT';

FILE_NAME
--------------------------------------------------------------------------------
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst84r1w_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst850ts_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85312_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85njw_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85qsq_.dbf

SQL>
SQL> !rm /home/oracle/app/oracle/oradata/HEMANTDB/datafile/*hemant*dbf

SQL> shutdown immediate;
ORA-01116: error in opening database file 6
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst84r1w_.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
SQL> shutdown abort;
ORACLE instance shut down.
SQL>

I have removed the datafiles for a tablespace. Note that the datafiles are all OMF.  I then attempt to restore the tablespace.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:15:21 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 390073016 bytes
Database Buffers 58720256 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> select file# from v$datafile
2 where ts# = (select ts# from v$tablespace where name = 'HEMANT')
3 order by 1;

FILE#
----------
6
7
8
9
11

SQL>
SQL> alter database datafile 6 offline;

Database altered.

SQL> alter database datafile 7 offline;

Database altered.

SQL> alter database datafile 8 offline;

Database altered.

SQL> alter database datafile 9 offline;

Database altered.

SQL> alter database datafile 11 offline;

Database altered.

SQL> alter database open;

Database altered.

SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Wed Jul 8 21:22:02 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN>
RMAN> restore tablespace HEMANT;

Starting restore at 08-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=36 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00007 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szss_.dbf
channel ORA_DISK_1: restoring datafile 00009 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szxb_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58p_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58p_.bkp tag=TAG20150708T211100
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf
channel ORA_DISK_1: restoring datafile 00011 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3615
Session ID: 29 Serial number: 21

[oracle@localhost ~]$

Once again, the database has crashed in the midst of the RESTORE. Let's check the datafile names.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:25:12 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> set pages60
SQL> select file#, name from v$datafile where file# in (6,7,8,9,11) order by 1;

FILE#
----------
NAME
--------------------------------------------------------------------------------
6
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf

7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf

9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf


SQL>
SQL> select file#, name from v$datafile_header where file# in (6,7,8,9,11) order by 1;

FILE#
----------
NAME
--------------------------------------------------------------------------------
6


7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8


9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11



SQL>

[To understand why I queried both V$DATAFILE and V$DATAFILE_HEADER, see my previous post "Datafiles not Restored  --  using V$DATAFILE and V$DATAFILE_HEADER".]

So, datafiles 7 and 9 have been restored. We can see that in the RESTORE log as well -- "backup piece 1" in the RESTORE had datafiles 7 and 9 and was the only one to complete. Note that datafiles 7 and 9 are named differently from what they were earlier.  Earlier, they were "%bst85%"; now they are "%bst90%".

So, if we want to re-run the restore, we can use SET NEWNAME for datafiles 7 and 9 to allow Oracle to check that they are already restored.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Wed Jul 8 21:32:12 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> run
2> {set newname for datafile 7 to '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf';
3> set newname for datafile 9 to '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf';
4> restore tablespace HEMANT;}

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 08-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

skipping datafile 7; already restored to file /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
skipping datafile 9; already restored to file /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf
channel ORA_DISK_1: restoring datafile 00011 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp tag=TAG20150708T211100
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 08-JUL-15

RMAN>

YES ! Datafiles 7 and 9 were identified as "already restored".
Let's re-check the datafiles and then RECOVER them.

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:37:29 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
SQL> select name from v$datafile
2 where ts#=(select ts# from v$tablespace where name = 'HEMANT')
3 minus
4 select name from v$datafile_header
5 where ts#=(select ts# from v$tablespace where name = 'HEMANT')
6 /

no rows selected

SQL>
SQL> select * from v$datafile_header where name is null;

no rows selected

SQL>
SQL> recover datafile 6;
Media recovery complete.
SQL> recover datafile 7;
Media recovery complete.
SQL> recover datafile 8;
Media recovery complete.
SQL> recover datafile 9;
Media recovery complete.
SQL> recover datafile 11;
Media recovery complete.
SQL> alter tablespace HEMANT online;
alter tablespace HEMANT online
*
ERROR at line 1:
ORA-01109: database not open


SQL> alter database open;

Database altered.

SQL> alter tablespace HEMANT online;

Tablespace altered.

SQL>
SQL> select owner, segment_name, bytes/1048576 from dba_segments where tablespace_name = 'HEMANT';

OWNER
------------------------------
SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
HEMANT
LARGE_TABLE
272


SQL> select count(*) from hemant.large_table;

COUNT(*)
----------
2404256

SQL>

Yes, I have been able to verify that all the datafiles have been restored.  I have been able to bring the tablespace online and query the data in it.

SQL> set pages60
SQL> select file_id, file_name from dba_data_files where tablespace_name = 'HEMANT';

FILE_ID
----------
FILE_NAME
--------------------------------------------------------------------------------
6
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf

7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf

9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf


SQL>

And, yes the datafile names (%bst90%) are different from what they were earlier (%bst84% and %bst85%).

(Reference :  See Oracle Support Note Doc ID 1621319.1)
.
.
.

Categories: DBA Blogs

Become an #Oracle Certified Expert for Data Guard!

The Oracle Instructor - Wed, 2015-07-08 04:33

It is with great pride that I can announce a new certification being available – Oracle Database 12c: Data Guard Administration.

We wanted this for years and finally got it now, after having put much effort and expertise into the development of the exam. It is presently in beta and offered with a discount. Come and get it!


Tagged: Data Guard, Oracle Certification
Categories: DBA Blogs

Pythian Introduces Cassandra Consulting & Operational Support

Pythian Group - Tue, 2015-07-07 06:00

Over the past ten years, the technology industry has begun to adopt a whole new way of thinking about the database and data storage. This is largely the result of the fast moving, high-volume and non-traditional types of data that are being generated to support both internal business processes and real-time web applications. Using a NoSQL database enables businesses to quickly and cost-effectively process large amounts of data, whether the data is hosted in the enterprise or on the cloud.

Pythian’s new Cassandra services address the needs of customers deploying Apache Cassandra. Pythian offers highly knowledgeable and experienced Cassandra experts who can guide you to success with your Cassandra deployment by filling critical skills and capacity gaps, getting your Cassandra instance up and running quickly, and ensuring that it performs optimally as you move forward.

If you’re thinking of implementing Cassandra, or already have, watch our webinar, Getting Started With Cassandra, which covers key topics for starting out, such as when you should use Cassandra, potential challenges, real world Cassandra applications and benefits, and more.

Learn more about Pythian’s Cassandra Services.

The post Pythian Introduces Cassandra Consulting & Operational Support appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

So, What Kind of Person Are You?

Pythian Group - Mon, 2015-07-06 08:48

It’s my not-so-humble opinion that it takes a special kind of ‘someone’ to work in a successful and innovative collective such as Pythian.  We’re a diverse team of thought-leaders, technology forecasters, technical prodigies and individual contributors.  When we look for people to join our global company we’re looking for people who want to see that their work really matters…that they matter.  We have truly discerning tastes when it comes to who gets to have “Pythian” in their email signature – you have to love data and value what it does for people.

Oh.  And you have to like people (we’re funny like that).

Our global team is brimming with top talent dedicated to building something larger than themselves.  We haven’t gotten this far by playing it safe.  We play it smart.  We’re strategic.  We have a vision to be the most trusted and admired technology services organization in the world….

….And we’re looking for fantastic people to join us.   In order to take your career to the next level at Pythian, you have to be able to:

Lend a hand – There are times when it’s easier to allocate blame but in the end, the problem still exists.  It may not be ‘your fault’ but it can become everyone’s problem.  In situations where there isn’t a lot of time for advanced planning it’s the people who take steps towards a solution that will make the greatest (and often most favorable) impact.

Play to your strengths – Maybe you’re a whiz with numbers or an I.T. genius.  Perhaps your level of organization is outstanding or you have incredible leadership skills. Play to what energizes you.  Cultivate it, infuse your work with it and success will follow.

Lean into the unknown – Opportunity is often found in the things we never knew existed.  Many of the talented people that I’ve come to know at Pythian can dive fearlessly into projects and own them.   If they don’t know how to do something, they learn it and they learn how to do it well.  That’s just the way it’s done.

Embrace diversity – We believe that every employee that works with us is deserving of dignity and respect.

Be approachable – Typically there’s a good mix of personalities in any successful company.  While introverts seem to be at the helm of hands-on I.T. work, extroverts also contribute significantly to getting things done.  Regardless of which way your personality leans, always be approachable.  A positive disposition is often contagious.

Put your best face forward – Remember that the skill and professionalism that you demonstrate every day will inevitably become your business card.  Maya Angelou once said, “People will forget what you said, people will forget what you did, but people will never forget how you made them feel.”

Do you think you can picture yourself here? Discover more about what it’s like to be part of the Pythian team.  You just might be our kind of people!

The post So, What Kind of Person Are You? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Possible Truncation Attack? logged in #em12c nodemanager.log file

DBASolved - Mon, 2015-07-06 08:38

Recently I’ve come across issues with restarting Oracle Enterprise Manager and seeing messages in the nodemanager.log file. The message that I’m referring to is as follows (followed by a Java stack trace):

<Jul 2, 2015 11:46:11 PM> <WARNING> Uncaught exception in server handlerjavax.net.ssl.SSLException: 
Inbound closed before receiving peer's close_notify: possible truncation attack? 
javax.net.ssl.SSLException: 
Inbound closed before receiving peer's close_notify: possible truncation attack?

What is interesting about this message is the panic that may come over a person when they see the word “attack”. The first time I saw this message, I was working on a client site and I was distressed because I was worried about an “attack” on EM. After some investigation, this message is a bit misleading. So, what was the cause of the message?

The “possible truncation attack” is due to the IP address of the host where the OMS runs having changed. Here in my test environment, I recently upgraded my wireless router, which affected my whole network. The router upgrade changed all the addresses on the network. When OEM was initially installed, the host had an address of 192.168.65.67; after the upgrade the address changed to 192.168.65.7. Not a big deal; but how to fix it?

In the case of my test lab, I needed to change the /etc/hosts file to ensure that the correct IP address was picked up. In the enterprise, your local DNS needs to be updated along with the /etc/hosts file. Upon startup, OEM will look at DNS and then /etc/hosts when trying to resolve the host name to an IP address. The lookup order can be changed in /etc/nsswitch.conf as well.
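As an illustration, a minimal sketch of the fix; the host name here is hypothetical and the lookup-order line assumes a standard Linux glibc resolver:

# /etc/hosts on the OMS host -- must carry the current address
192.168.65.7    oemhost.example.com    oemhost

# /etc/nsswitch.conf -- 'files' before 'dns' makes /etc/hosts win
hosts:    files dns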

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Why A Brand New Index Might Benefit From An Immediate Coalesce (One Slip)

Richard Foote - Mon, 2015-07-06 01:58
A recent question on the OTN Forums Reg: Index – Gathering Statistics vs. Rebuild got me thinking on a scenario not unlike the one raised in the question where a newly populated index might immediately benefit from a coalesce. I’ve previously discussed some of the pertinent concepts such as how index rebuilds can make indexes bigger, not smaller […]
Categories: DBA Blogs

Transport Tablespace using RMAN Backupsets in #Oracle 12c

The Oracle Instructor - Mon, 2015-07-06 01:29

Using backupsets for Transportable Tablespaces reduces the volume of data you need to ship to the destination database. See how that works:

[Figure: RMAN TTS on the source database]

The tablespace is made READ ONLY before the new BACKUP FOR TRANSPORT command is run. At this point, you can also convert the platform and the endian format if required.
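If you want the conversion to happen at the source, TO PLATFORM is used instead of FOR TRANSPORT. A minimal sketch, with an illustrative platform name (valid names are listed in V$TRANSPORTABLE_PLATFORM):

RMAN> backup to platform 'Linux x86 64-bit'
      format '/tmp/stage/tbs1_xplat.bkset'
      datapump format '/tmp/stage/tbs1_xplat.dmp'
      tablespace tbs1;

Then on the destination site: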

[Figure: RMAN TTS on the destination database]

The FOREIGN keyword indicates that this doesn’t use a backup taken at the destination. Practical example:

 

[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jul 6 08:36:30 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select * from v$tablespace;

       TS# NAME                           INC BIG FLA ENC     CON_ID
---------- ------------------------------ --- --- --- --- ----------
         0 SYSTEM                         YES NO  YES              0
         1 SYSAUX                         YES NO  YES              0
         2 UNDOTBS1                       YES NO  YES              0
         3 TEMP                           NO  NO  YES              0
         4 USERS                          YES NO  YES              0
         5 TBS1                           YES NO  YES              0

6 rows selected.

SQL> select table_name,owner from dba_tables where tablespace_name='TBS1';

TABLE_NAME
-------------------- 
OWNER
--------------------
T
ADAM


SQL> alter tablespace tbs1 read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@uhesse ~]$ rman target sys/oracle@prima

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 6 08:37:28 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2113606181)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name PRIMA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/prima/system01.dbf
2    244      SYSAUX               NO      /u01/app/oracle/oradata/prima/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/prima/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/prima/users01.dbf
5    100      TBS1                 NO      /u01/app/oracle/oradata/prima/tbs1.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/prima/temp01.dbt

RMAN> host 'mkdir /tmp/stage';

host command complete

RMAN> configure device type disk backup type to compressed backupset;

old RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 1;
new RMAN configuration parameters are successfully stored

RMAN> backup for transport format '/tmp/stage/tbs1.bkset'
      datapump format '/tmp/stage/tbs1.dmp'
      tablespace tbs1;

Starting backup at 06-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=40 device type=DISK
Running TRANSPORT_SET_CHECK on specified tablespaces
TRANSPORT_SET_CHECK completed successfully

Performing export of metadata for specified tablespaces...
   EXPDP> Starting "SYS"."TRANSPORT_EXP_PRIMA_yvym":
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   EXPDP> Master table "SYS"."TRANSPORT_EXP_PRIMA_yvym" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TRANSPORT_EXP_PRIMA_yvym is:
   EXPDP>   /u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_PRIMA_25997.dmp
   EXPDP> ******************************************************************************
   EXPDP> Datafiles required for transportable tablespace TBS1:
   EXPDP>   /u01/app/oracle/oradata/prima/tbs1.dbf
   EXPDP> Job "SYS"."TRANSPORT_EXP_PRIMA_yvym" successfully completed at Mon Jul 6 08:39:50 2015 elapsed 0 00:00:26
Export completed

channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00005 name=/u01/app/oracle/oradata/prima/tbs1.dbf
channel ORA_DISK_1: starting piece 1 at 06-JUL-15
channel ORA_DISK_1: finished piece 1 at 06-JUL-15
piece handle=/tmp/stage/tbs1.bkset tag=TAG20150706T083917 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting compressed full datafile backup set
input Data Pump dump file=/u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_PRIMA_25997.dmp
channel ORA_DISK_1: starting piece 1 at 06-JUL-15
channel ORA_DISK_1: finished piece 1 at 06-JUL-15
piece handle=/tmp/stage/tbs1.dmp tag=TAG20150706T083917 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 06-JUL-15

RMAN> alter tablespace tbs1 read write;

Statement processed

RMAN> exit


Recovery Manager complete.
[oracle@uhesse ~]$ ls -rtl /tmp/stage
total 5608
-rw-r-----. 1 oracle oinstall 5578752 Jul  6 08:39 tbs1.bkset
-rw-r-----. 1 oracle oinstall  163840 Jul  6 08:39 tbs1.dmp
[oracle@uhesse ~]$ rman target sys/oracle@sekunda

Recovery Manager: Release 12.1.0.2.0 - Production on Mon Jul 6 08:40:49 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: SEKUNDA (DBID=3356258651)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name SEKUNDA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/sekunda/system01.dbf
2    249      SYSAUX               NO      /u01/app/oracle/oradata/sekunda/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/sekunda/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/sekunda/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/sekunda/temp01.dbt

RMAN> restore foreign tablespace tbs1
      format '/u01/app/oracle/oradata/sekunda/tbs1.dbf'
      from backupset '/tmp/stage/tbs1.bkset'
      dump file from backupset '/tmp/stage/tbs1.dmp';

Starting restore at 06-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=37 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring all files in foreign tablespace TBS1
channel ORA_DISK_1: reading from backup piece /tmp/stage/tbs1.bkset
channel ORA_DISK_1: restoring foreign file 5 to /u01/app/oracle/oradata/sekunda/tbs1.dbf
channel ORA_DISK_1: foreign piece handle=/tmp/stage/tbs1.bkset
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:08
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring Data Pump dump file to /u01/app/oracle/product/12.1.0/dbhome_1/dbs/backup_tts_SEKUNDA_85631.dmp
channel ORA_DISK_1: reading from backup piece /tmp/stage/tbs1.dmp
channel ORA_DISK_1: foreign piece handle=/tmp/stage/tbs1.dmp
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02

Performing import of metadata...
   IMPDP> Master table "SYS"."TSPITR_IMP_SEKUNDA_ppol" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_SEKUNDA_ppol":
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
   IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
   IMPDP> Job "SYS"."TSPITR_IMP_SEKUNDA_ppol" successfully completed at Mon Jul 6 08:42:51 2015 elapsed 0 00:00:20
Import completed

Finished restore at 06-JUL-15

RMAN> report schema;

Report of database schema for database with db_unique_name SEKUNDA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/sekunda/system01.dbf
2    249      SYSAUX               NO      /u01/app/oracle/oradata/sekunda/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/sekunda/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/sekunda/users01.dbf
5    100      TBS1                 NO      /u01/app/oracle/oradata/sekunda/tbs1.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/sekunda/temp01.dbt

RMAN> alter tablespace tbs1 read write;

Statement processed

RMAN> select count(*) from adam.t;

  COUNT(*)
----------
   1000000

Hope you find it useful :-)


Tagged: 12c New Features, PracticalGuide, RMAN
Categories: DBA Blogs

RMAN -- 4 : Recovering from an Incomplete Restore

Hemant K Chitale - Fri, 2015-07-03 23:22
What do you do if a RESTORE fails mid-way ?  Do you need to rerun the whole restore ?  If it is a very large database, it could take [many ?] hours.

RMAN is "smart" enough to detect datafiles that have been restored and not re-attempt a restore.

UPDATE 08-Jul-15 :  For a database with Oracle Managed Files  (OMF), please see Oracle Support Note Doc ID 1621319.1  {Thus, see my next blog post on Recovering from an Incomplete Restore with OMF Files}

This example below is for non-OMF files.

Here, I begin a database restore.

 
RMAN> restore controlfile from autobackup;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK

recovery area destination: /NEW_FS/oracle/FRA
database name (or database unique name) used for search: HEMANTDB
channel ORA_DISK_1: AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp found in the recovery area
AUTOBACKUP search with format "%F" not attempted because DBID was not set
channel ORA_DISK_1: restoring control file from AUTOBACKUP /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp
channel ORA_DISK_1: control file restore from AUTOBACKUP complete
output file name=/home/oracle/app/oracle/oradata/orcl/control01.ctl
output file name=/home/oracle/app/oracle/flash_recovery_area/orcl/control02.ctl
Finished restore at 04-JUL-15

RMAN>
RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
Starting implicit crosscheck backup at 04-JUL-15
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=19 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
Crosschecked 8 objects
Finished implicit crosscheck backup at 04-JUL-15

Starting implicit crosscheck copy at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2
Finished implicit crosscheck copy at 04-JUL-15

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
File Name: /NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_04/o1_mf_s_884175832_bsgqysyq_.bkp

using channel ORA_DISK_1
using channel ORA_DISK_2

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00004 to /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqsccg_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:02:34
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00003 to /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqwt4s_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:00:35
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3777
Session ID: 1 Serial number: 9

[oracle@localhost ~]$

After having restored a few datafiles, the restore failed on being disconnected from the database. (The  server or database instance has crashed).  Since the controlfile has been restored, I can bring up the database in MOUNT mode and then re-attempt a RESTORE DATABASE.

[oracle@localhost ~]$ rman target sys/oracle@orcl

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Jul 4 12:56:41 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN>
RMAN> restore database;

Starting restore at 04-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK

skipping datafile 3; already restored to file /home/oracle/app/oracle/oradata/orcl/undotbs01.dbf
skipping datafile 4; already restored to file /home/oracle/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp
channel ORA_DISK_2: starting datafile backup set restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
channel ORA_DISK_2: restoring datafile 00002 to /home/oracle/app/oracle/oradata/orcl/sysaux01.dbf
channel ORA_DISK_2: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjj_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:02:36
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00010 to /home/oracle/app/oracle/oradata/orcl/APEX_2614203650434107.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxovh_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /home/oracle/app/oracle/oradata/orcl/example01.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqxjv6_.bkp tag=TAG20150704T121859
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_2: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_04/o1_mf_nnndf_TAG20150704T121859_bsgqonjl_.bkp tag=TAG20150704T121859
channel ORA_DISK_2: restored backup piece 1
channel ORA_DISK_2: restore complete, elapsed time: 00:04:02
Finished restore at 04-JUL-15

RMAN>

RMAN detects that datafiles 3 (undotbs01.dbf) and 4 (users01.dbf) had already been restored.
 If you look at the previous RESTORE run, you can see that these were restored by Channel ORA_DISK_2. The first channel ORA_DISK_1 had started restoring system01.dbf but hadn't completed restoring the datafile when the restore crashed. That restore of datafile 1 (system01.dbf) had to be redone.

 (Another thing to note : Oracle doesn't necessarily restore datafiles in the order of file_id (file#) ! There really is no ORDER BY for a RESTORE)

RMAN> recover database;

Starting recover at 04-JUL-15
using channel ORA_DISK_1
using channel ORA_DISK_2

starting media recovery

archived log for thread 1 with sequence 628 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc
archived log for thread 1 with sequence 629 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc
archived log for thread 1 with sequence 630 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc
archived log for thread 1 with sequence 631 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc
archived log for thread 1 with sequence 632 is already on disk as file /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc
archived log for thread 1 with sequence 633 is already on disk as file /home/oracle/app/oracle/oradata/orcl/redo03.log
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc thread=1 sequence=628
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc thread=1 sequence=629
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc thread=1 sequence=630
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc thread=1 sequence=631
archived log file name=/NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc thread=1 sequence=632
archived log file name=/home/oracle/app/oracle/oradata/orcl/redo03.log thread=1 sequence=633
media recovery complete, elapsed time: 00:00:02
Finished recover at 04-JUL-15

RMAN>
RMAN> alter database open resetlogs;

database opened

RMAN>


UPDATE 07-Jul-15 :  Also see my earlier (year 2012) post "Datafiles not Restored -- using V$DATAFILE and V$DATAFILE_HEADER"  which also shows retrying a RESTORE DATABASE after a failure of restoring a datafile.  There, a single file in a BackupSet failed to restore.  Oracle didn't continue and try the other datafiles in that BackupSet.  I could either fix the error and retry the entire BackupSet (RESTORE DATABASE would have identified the right BackupSet containing those files) OR I could, as I did in that scenario, individually restore DataFiles from the BackupSet.

It can be a good idea to have your database backup consist of multiple BackupSets, using either multiple CHANNELs or FILESPERSET during the BACKUP.
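For example, a minimal sketch; the channel count and the FILESPERSET value are illustrative, not recommendations:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
RMAN> BACKUP DATABASE FILESPERSET 2;

With FILESPERSET 2, no BackupSet holds more than two datafiles, so a BackupSet that fails to restore affects fewer datafiles.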


You could also note, as an aside, that Log Sequence 633 was an online redo log file. RMAN automatically verifies that the online redo log files designated by the controlfile are present and uses them.

.
.
.

Categories: DBA Blogs

Log Buffer #430: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-03 12:56

This Log Buffer Edition cuts through the crowd and picks some of the outstanding blog posts from Oracle, SQL Server and MySQL.


Oracle:

  • Continuous Delivery (CD) is a software engineering approach in which teams keep producing valuable software in short cycles and ensure that the software can be reliably released at any time.
  • Query existing HBase tables with SQL using Apache Phoenix.
  • Even though WebLogic with Active GridLink is Oracle’s suggested approach to deploy Java applications that use Oracle Real Application Clusters (RAC), there might be scenarios in which you can’t make that choice (e.g. certification issues, licensing, library dependency, etc.).
  • OSB & MTOM: When to use Include Binary Data by Reference or Value.
  • Ever used SoapUI to test services on multiple environments? Then you probably ran into the job of ever changing the endpoints to the hosts of the particular environment: development, test, acceptance, production (although I expect you wouldn’t use SoapUI against a prod env). This is not that hard if you have only one service endpoint in the project.

SQL Server:

  • Using DAX to create SSRS reports: The Basics.
  • Getting to know your customers better – cohort analysis and RFM segmentation in R.
  • Using the T-SQL PERCENTILE Analytic Functions in SQL Server 2000, 2005 and 2008.
  • Schema-Based Access Control for SQL Server Databases.
  • How to Fix a Corrupt MSDB SQL Server Database.

MySQL:

  • MySQL Enterprise Audit – parsing audit information from log files, inserting into a MySQL table.
  • Proposal to deprecate MySQL INTEGER display width and ZEROFILL.
  • Using Cgroups to Limit MySQL and MongoDB memory usage.
  • Slave election is a popular HA architecture; the first MySQL/MariaDB toolkit to manage switchover and failover in a correct way was introduced by Yoshinori Matsunobu in MHA.
  • Setting up environments, starting processes, and monitoring these processes on multiple machines can be time consuming and error prone.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

The post Log Buffer #430: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Query existing HBase tables with SQL using Apache Phoenix

Kubilay Çilkara - Thu, 2015-07-02 13:25
Spending a bit more time with Apache Phoenix and reading my previous post again, I realised that you can use it to query existing HBase tables: that is, tables created natively in HBase (the columnar NoSQL database in Hadoop), not just tables created using Apache Phoenix.

I think this is cool as it gives you the ability to use SQL on an HBase table.

To test this, let's say you log in to HBase and create an HBase table like this:

> create 'table2', {NAME=>'cf1', VERSIONS => 5}

table2 is a simple HBase table with one column family, cf1. Now let's put some data into this HBase table.

> put 'table2', 'row1', 'cf1:column1', 'Hello SQL!'

then maybe add another row

> put 'table2', 'row4', 'cf1:column1', 'London'

Now, in Phoenix, all you have to do is create a database View for this table and query it with SQL. The database View will be read-only. How cool is that: you don't even need to physically create the table in Phoenix, move the data, or convert it; a database view is sufficient, and via Phoenix you can query the HBase table with SQL.

In Phoenix you create the view for table2 using the same name. As you can see below, the DDL used to create the view is case sensitive, and if you created your HBase table name in lower case you will have to put the name in double quotes.

So log in to Phoenix and create the "table2" view like this:

> create view "table2" ( pk VARCHAR PRIMARY KEY, "cf1"."column1" VARCHAR );

And here is how you then query it in Phoenix:


[Figure: SQL query on Phoenix]
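Roughly, the query looks like this at the sqlline prompt (output omitted; note the double quotes around the lower-case table name):

0: jdbc:phoenix:localhost> select * from "table2";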
There is tremendous potential here: imagine all those existing HBase tables which you can now query with SQL. Better still, you can point your Business Intelligence tools, Reporting tools and other SQL-based tools at HBase and query it as if it were just another SQL database.

A solution worth investigating further? It definitely got me blogging in the evenings again.

To find out more about Apache Phoenix visit their project page https://phoenix.apache.org/



Categories: DBA Blogs

Table Recovery in #Oracle 12c

The Oracle Instructor - Thu, 2015-07-02 03:18

You can now restore single tables from backup! It is a simple command, although it causes a lot of work for RMAN behind the scenes. See it as an enhancement over a ‘normal’ Point In Time Recovery:

[Figure: Point In Time Recovery]

After a full restore from a sufficiently old backup, archived logs are applied forward in time until just before the logical error. Then a new incarnation comes up (with RESETLOGS) and the whole database is as it was at that time.
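In RMAN terms, such a conventional Database Point In Time Recovery is roughly this sketch (the timestamp is illustrative and assumes a matching NLS_DATE_FORMAT):

run {
  set until time '2015-07-02 09:33:00';
  restore database;
  recover database;
  alter database open resetlogs;
}

But what if it is only a dropped table that needs to be recovered? Enter the 12c New Feature: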

[Figure: Table Recovery]

Above is what RMAN does upon Table Recovery. The restore is done to the auxiliary destination, while the database keeps on running as it is now. The new incarnation is there only temporarily, just to export the dropped table from; afterwards, it is removed. RMAN will then import the table back into the still running database, unless you say otherwise with the NOTABLEIMPORT clause.
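As a sketch of those variants (the timestamp and the new table name are illustrative): NOTABLEIMPORT keeps just the generated Data Pump dump file for a manual import later, while REMAP TABLE imports the recovered table under a different name next to the live one:

RMAN> recover table adam.sales until time '2015-07-02 09:33:00'
      auxiliary destination '/tmp/auxi' notableimport;

RMAN> recover table adam.sales until time '2015-07-02 09:33:00'
      auxiliary destination '/tmp/auxi'
      remap table adam.sales:sales_recovered;

So it is a huge effort to go through for the system in spite of the simple RMAN command: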

 

SQL> select count(*) from sales;

  COUNT(*)
----------
  10000000

SQL> select sysdate from dual;

SYSDATE
-------------------
2015-07-02 09:33:37

SQL> drop table sales purge;

Table dropped.

Oops – that was a mistake! And I can’t simply say flashback table sales to before drop because of the purge. RMAN to the rescue!

[oracle@uhesse ~]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Jul 2 09:34:35 2015

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PRIMA (DBID=2113606181)

RMAN> list backup of database;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
1       Full    142.13M    DISK        00:01:45     2015-07-01 17:50:32
        BP Key: 1   Status: AVAILABLE  Compressed: YES  Tag: TAG20150701T174847
        Piece Name: /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
  List of Datafiles in backup set 1
  File LV Type Ckp SCN    Ckp Time            Name
  ---- -- ---- ---------- ------------------- ----
  1       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/system01.dbf
  2       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/sysaux01.dbf
  3       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/undotbs01.dbf
  4       Full 532842     2015-07-01 17:48:47 /u01/app/oracle/oradata/prima/users01.dbf

RMAN> host 'mkdir /tmp/auxi';

host command complete

RMAN> recover table adam.sales until time '2015-07-02 09:33:00' auxiliary destination '/tmp/auxi';

Starting recover at 2015-07-02 09:35:54
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='tDtf'

initialization parameters used for automatic instance:
db_name=PRIMA
db_unique_name=tDtf_pitr_PRIMA
compatible=12.1.0.2
db_block_size=8192
db_files=200
diagnostic_dest=/u01/app/oracle
_system_trig_enabled=FALSE
sga_target=1512M
processes=200
db_create_file_dest=/tmp/auxi
log_archive_dest_1='location=/tmp/auxi'
#No auxiliary parameter file used


starting up automatic instance PRIMA

Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                402656944 bytes
Database Buffers            1174405120 bytes
Redo Buffers                  13848576 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# restore the controlfile
restore clone controlfile;

# mount the controlfile
sql clone 'alter database mount clone database';

# archive current online log
sql 'alter system archive log current';
}
executing Memory Script

executing command: SET until clause

Starting restore at 2015-07-02 09:36:21
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=3 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_ncsnf_TAG20150701T174847_bs832pht_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_ncsnf_TAG20150701T174847_bs832pht_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl
Finished restore at 2015-07-02 09:36:23

sql statement: alter database mount clone database

sql statement: alter system archive log current

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile  1 to new;
set newname for clone datafile  3 to new;
set newname for clone datafile  2 to new;
set newname for clone tempfile  1 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 3, 2;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /tmp/auxi/PRIMA/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 2015-07-02 09:36:32
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /tmp/auxi/PRIMA/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /tmp/auxi/PRIMA/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:36
Finished restore at 2015-07-02 09:37:08

datafile 1 switched to datafile copy
input datafile copy RECID=4 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_system_bs9tj1fk_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=5 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_bs9tj1hw_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=6 STAMP=883993028 file name=/tmp/auxi/PRIMA/datafile/o1_mf_sysaux_bs9tj1jd_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  3 online";
sql clone "alter database datafile  2 online";
# recover and open database read only
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX";
sql clone 'alter database open read only';
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  2 online

Starting recover at 2015-07-02 09:37:09
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 13 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc
archived log for thread 1 with sequence 22 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc
archived log for thread 1 with sequence 23 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc
archived log for thread 1 with sequence 24 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc
archived log for thread 1 with sequence 32 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc
archived log for thread 1 with sequence 33 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc
archived log for thread 1 with sequence 34 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc
archived log for thread 1 with sequence 35 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc
archived log for thread 1 with sequence 36 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc
archived log for thread 1 with sequence 37 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc
archived log for thread 1 with sequence 38 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc
archived log for thread 1 with sequence 39 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc thread=1 sequence=13
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc thread=1 sequence=14
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc thread=1 sequence=15
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc thread=1 sequence=16
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc thread=1 sequence=17
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc thread=1 sequence=18
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc thread=1 sequence=19
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc thread=1 sequence=20
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc thread=1 sequence=21
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc thread=1 sequence=22
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc thread=1 sequence=23
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc thread=1 sequence=24
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc thread=1 sequence=25
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc thread=1 sequence=26
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc thread=1 sequence=27
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc thread=1 sequence=28
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc thread=1 sequence=29
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc thread=1 sequence=30
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc thread=1 sequence=31
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc thread=1 sequence=32
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc thread=1 sequence=33
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc thread=1 sequence=34
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc thread=1 sequence=35
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc thread=1 sequence=36
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc thread=1 sequence=37
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc thread=1 sequence=38
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc thread=1 sequence=39
media recovery complete, elapsed time: 00:01:00
Finished recover at 2015-07-02 09:38:11

sql statement: alter database open read only

contents of Memory Script:
{
   sql clone "create spfile from memory";
   shutdown clone immediate;
   startup clone nomount;
   sql clone "alter system set  control_files =
  ''/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl'' comment=
 ''RMAN set'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
# mount database
sql clone 'alter database mount clone database';
}
executing Memory Script

sql statement: create spfile from memory

database closed
database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                419434160 bytes
Database Buffers            1157627904 bytes
Redo Buffers                  13848576 bytes

sql statement: alter system set  control_files =   ''/tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl'' comment= ''RMAN set'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area    1593835520 bytes

Fixed Size                     2924880 bytes
Variable Size                419434160 bytes
Database Buffers            1157627904 bytes
Redo Buffers                  13848576 bytes

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  4 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  4;

switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

Starting restore at 2015-07-02 09:39:11
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=12 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to /tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp
channel ORA_AUX_DISK_1: piece handle=/u02/fra/PRIMA/backupset/2015_07_01/o1_mf_nnndf_TAG20150701T174847_bs82z0rl_.bkp tag=TAG20150701T174847
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 2015-07-02 09:39:47

datafile 4 switched to datafile copy
input datafile copy RECID=8 STAMP=883993187 file name=/tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_bs9to0k1_.dbf

contents of Memory Script:
{
# set requested point in time
set until  time "2015-07-02 09:33:00";
# online the datafiles restored or switched
sql clone "alter database datafile  4 online";
# recover and open resetlogs
recover clone database tablespace  "USERS", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  4 online

Starting recover at 2015-07-02 09:39:47
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 13 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc
archived log for thread 1 with sequence 14 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc
archived log for thread 1 with sequence 15 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc
archived log for thread 1 with sequence 16 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc
archived log for thread 1 with sequence 17 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc
archived log for thread 1 with sequence 18 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc
archived log for thread 1 with sequence 19 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc
archived log for thread 1 with sequence 20 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc
archived log for thread 1 with sequence 21 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc
archived log for thread 1 with sequence 22 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc
archived log for thread 1 with sequence 23 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc
archived log for thread 1 with sequence 24 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc
archived log for thread 1 with sequence 25 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc
archived log for thread 1 with sequence 26 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc
archived log for thread 1 with sequence 27 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc
archived log for thread 1 with sequence 28 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc
archived log for thread 1 with sequence 29 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc
archived log for thread 1 with sequence 30 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc
archived log for thread 1 with sequence 31 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc
archived log for thread 1 with sequence 32 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc
archived log for thread 1 with sequence 33 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc
archived log for thread 1 with sequence 34 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc
archived log for thread 1 with sequence 35 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc
archived log for thread 1 with sequence 36 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc
archived log for thread 1 with sequence 37 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc
archived log for thread 1 with sequence 38 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc
archived log for thread 1 with sequence 39 is already on disk as file /u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_13_bs836h1p_.arc thread=1 sequence=13
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_01/o1_mf_1_14_bs836lv2_.arc thread=1 sequence=14
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_15_bs9mog63_.arc thread=1 sequence=15
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_16_bs9mpsqo_.arc thread=1 sequence=16
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_17_bs9n281y_.arc thread=1 sequence=17
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_18_bs9n360t_.arc thread=1 sequence=18
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_19_bs9n3p5r_.arc thread=1 sequence=19
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_20_bs9n46od_.arc thread=1 sequence=20
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_21_bs9n4l4j_.arc thread=1 sequence=21
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_22_bs9n512c_.arc thread=1 sequence=22
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_23_bs9p5m15_.arc thread=1 sequence=23
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_24_bs9p6qn7_.arc thread=1 sequence=24
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_25_bs9plfkc_.arc thread=1 sequence=25
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_26_bs9pls8h_.arc thread=1 sequence=26
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_27_bs9pm0db_.arc thread=1 sequence=27
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_28_bs9pm70g_.arc thread=1 sequence=28
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_29_bs9pmk0c_.arc thread=1 sequence=29
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_30_bs9pmrrj_.arc thread=1 sequence=30
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_31_bs9sq00g_.arc thread=1 sequence=31
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_32_bs9sqzgd_.arc thread=1 sequence=32
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_33_bs9t4fq8_.arc thread=1 sequence=33
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_34_bs9t4vyr_.arc thread=1 sequence=34
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_35_bs9t593c_.arc thread=1 sequence=35
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_36_bs9t5htq_.arc thread=1 sequence=36
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_37_bs9t5q3h_.arc thread=1 sequence=37
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_38_bs9t5yqj_.arc thread=1 sequence=38
archived log file name=/u02/fra/PRIMA/archivelog/2015_07_02/o1_mf_1_39_bs9tgttq_.arc thread=1 sequence=39
media recovery complete, elapsed time: 00:01:15
Finished recover at 2015-07-02 09:41:03

database opened

contents of Memory Script:
{
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxi''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''
/tmp/auxi''";
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxi''

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''/tmp/auxi''

Performing export of tables...
   EXPDP> Starting "SYS"."TSPITR_EXP_tDtf_lwFD":
   EXPDP> Estimate in progress using BLOCKS method...
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   EXPDP> Total estimation using BLOCKS method: 600 MB
   EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   EXPDP> . . exported "ADAM"."SALES"                              510.9 MB 10000000 rows
   EXPDP> Master table "SYS"."TSPITR_EXP_tDtf_lwFD" successfully loaded/unloaded
   EXPDP> ******************************************************************************
   EXPDP> Dump file set for SYS.TSPITR_EXP_tDtf_lwFD is:
   EXPDP>   /tmp/auxi/tspitr_tDtf_59906.dmp
   EXPDP> Job "SYS"."TSPITR_EXP_tDtf_lwFD" successfully completed at Thu Jul 2 09:42:53 2015 elapsed 0 00:01:06
Export completed


contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables...
   IMPDP> Master table "SYS"."TSPITR_IMP_tDtf_uink" successfully loaded/unloaded
   IMPDP> Starting "SYS"."TSPITR_IMP_tDtf_uink":
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
   IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
   IMPDP> . . imported "ADAM"."SALES"                              510.9 MB 10000000 rows
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
   IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
   IMPDP> Job "SYS"."TSPITR_IMP_tDtf_uink" successfully completed at Thu Jul 2 09:54:13 2015 elapsed 0 00:11:12
Import completed


Removing automatic instance
Automatic instance removed
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_temp_bs9tm7pz_.tmp deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/onlinelog/o1_mf_2_bs9trods_.log deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/onlinelog/o1_mf_1_bs9trjw6_.log deleted
auxiliary instance file /tmp/auxi/TDTF_PITR_PRIMA/datafile/o1_mf_users_bs9to0k1_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_sysaux_bs9tj1jd_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_undotbs1_bs9tj1hw_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/datafile/o1_mf_system_bs9tj1fk_.dbf deleted
auxiliary instance file /tmp/auxi/PRIMA/controlfile/o1_mf_bs9thps1_.ctl deleted
auxiliary instance file tspitr_tDtf_59906.dmp deleted
Finished recover at 2015-07-02 09:54:16

See how much work was done by RMAN here? But now, life is good again:

SQL> select count(*) from adam.sales;

  COUNT(*)
----------
  10000000

You say that you could have done that yourself even before 12c? Yes, you’re right: It’s not magic, it’s just more comfortable now ;-)


Tagged: Backup & Recovery, PracticalGuide, RMAN
Categories: DBA Blogs

Apache Phoenix, SQL is getting closer to Big Data

Kubilay Çilkara - Tue, 2015-06-30 15:50


Here is a post about another project in the Big Data world which, like Apache Hive from my previous post, enables you to do SQL on Big Data. It is called Apache Phoenix.

Phoenix is a bit different, and a bit closer to my heart too. As I read the documentation on Apache Phoenix, the words 'algebra' and 'relational algebra' came up a few times, and that can mean only one thing: SQL! The use of the word algebra in the docs gave me a lot of confidence. SQL has closure and is based on a database model which has its roots in logic and maths, especially a subset of algebra, set theory.

Apache Phoenix was developed at Salesforce and is now one of the popular projects at Apache. Apache Phoenix is a SQL skin on top of HBase, the columnar (NoSQL) database of the Hadoop ecosystem, which is capable of storing very large tables and querying them via 'scans'. HBase is part of the Hadoop ecosystem, and the file system it uses is usually HDFS. On the client side, Apache Phoenix uses a JDBC driver.

In the race to bring the easiest-to-use tools for Big Data, I think Apache Phoenix is very close. It is the SQL we have known and used since the 1970s. The Apache Phoenix team seems committed and willing to introduce all of the missing parts of SQL, including transaction processing with different isolation levels, making Phoenix a fully operational relational database layer on HBase. Have a look at their roadmap. The amount of current and suggested future SQL compatibility is remarkable, and this makes me take them really seriously:
  • Transactions
  • Cost-based Query Optimization! (Wow)
  • Joins
  • OLAP
  • Subqueries
  • Striving for full SQL-92 compliance
In addition to all this, it is also possible to turn an existing HBase table into an Apache Phoenix table using CREATE TABLE or even CREATE VIEW, the DDL statements that we know. How handy is that? Suddenly you can SQL-enable your existing HBase database!
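
For example, a view over an existing HBase table could be declared like this sketch (the table, column family and qualifier names here are made up for illustration; the quoted identifiers preserve HBase's case-sensitive naming):

CREATE VIEW "web_stat" (
    pk VARCHAR PRIMARY KEY,    -- maps to the HBase row key
    "usage"."hits" VARCHAR     -- column family "usage", qualifier "hits"
);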
How to install and use Phoenix

The SQL skin can be installed into an existing Hadoop HBase installation very quickly. All you need to do is download and extract the tarball. You can set up a standalone Hadoop environment (see my previous blog post for that), and then install HBase and Apache Phoenix.
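
A minimal sketch of the steps (version numbers and paths are placeholders, not exact values; check the Phoenix download page for the real ones):

$ tar -xzf phoenix-<version>-bin.tar.gz
$ cp phoenix-<version>/phoenix-<version>-server.jar $HBASE_HOME/lib/
$ $HBASE_HOME/bin/stop-hbase.sh && $HBASE_HOME/bin/start-hbase.sh    # restart HBase so it picks up the server jar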
Once the Apache Phoenix software is installed, you can start it and query it with SQL like this.

From within the bin/ directory of the Phoenix install directory, run

$ ./sqlline.py localhost

That will bring you to the Phoenix prompt:


0: jdbc:phoenix:localhost> select * from mytable;
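
If mytable does not exist yet, you can create and populate it first. Note that Phoenix uses UPSERT rather than INSERT; a sketch (the table and column names are just examples):

0: jdbc:phoenix:localhost> CREATE TABLE mytable (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR);
0: jdbc:phoenix:localhost> UPSERT INTO mytable VALUES (1, 'hello');
0: jdbc:phoenix:localhost> SELECT * FROM mytable;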

Categories: DBA Blogs

Prepare for the Leap Second

Pythian Group - Mon, 2015-06-29 10:37

Catch up on how to handle the Leap Second and whether you’re ready for it with our previous updates on the impacts it will have on Cassandra and Linux.

 

Background Information

A leap second will be inserted at the end of June 30, 2015 at 23:59:60 UTC. 

There is a small time difference between the atomic clock and astronomical time (which is based on the rotation of the Earth). The Earth's rotation is slowing down.

To keep these times synchronized, one second will be added to the atomic clock – a leap second. This happens at midnight UTC between June 30th and July 1st (not in local time; UTC is the same as the GMT time zone). After 23 hours 59 minutes 59 seconds, the time will become 23 hours 59 minutes 60 seconds.

Since this system of correction was implemented in 1972, 25 such leap seconds have been inserted; the upcoming one will be the 26th. The most recent one happened on June 30, 2012 at 23:59:60 UTC.

Unlike daylight savings time, which shifts the timezone information and does not alter the underlying UTC time clock on which servers work, a leap-second change is an actual change in the UTC time value. Usually, UTC time is continuous and predictable, but the leap second breaks this normal continuity, requiring it to be addressed.
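
In other words, the clock sequence around midnight UTC will read:

2015-06-30 23:59:59 UTC
2015-06-30 23:59:60 UTC   <-- the inserted leap second
2015-07-01 00:00:00 UTC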

 

What You Need to Know – Summary

The June 2015 leap second event is the addition of one second to the atomic clock on June 30, 2015. Pythian has researched the implications that the upcoming leap second insertion may have and presents the relevant information to its clients and the wider community.

At the operating system level:

  • Windows and AIX servers are not affected by this issue.
  • Linux servers using NTP (network time protocol) may be affected, potentially causing error messages, server hangs or 100% CPU utilization. There are a series of patches and workarounds available, depending upon the needs of the components running on the Linux server.
  • HP-UX servers have NTP patches released in Q2 2015.

For databases and other software components:

  • Java programs are at risk of generating endless error loops, spiking CPU utilization. Patches are available.
  • Databases generally obtain timestamps from the server OS, so those running on Linux have potential issues. For most, no additional corrections are necessary.
  • Oracle databases have minimal additional risk. Oracle clustered environments and java-based administration tools should be reviewed and corrective actions taken.
  • Microsoft SQL Server databases have no risk but may expose minor application issues on data granularity and error handling.
  • Open source databases should be reviewed for Java risks. Updated kernels are available.
  • Cisco UCS environments should be reviewed. Patches are available.

Symptoms from the leap second event may persist for up to a day before and after the leap second event, as server NTP service updates are provided.
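
On Linux servers running ntpd, a quick way to check whether the leap flag has already been announced (a sketch, assuming the classic ntp tools are installed; the grep patterns are illustrative):

$ ntpq -crv | grep leap           # leap=01 means an insertion is pending
$ dmesg | grep -i 'leap second'   # after the event, the kernel logs the insertion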

For all environments, a complete assessment and planning for your systems should be performed. The Pythian team would be pleased to help you perform this assessment and complete the planning necessary to ensure your systems can handle the leap second event in 2015. Get started by reviewing the full Leap Second Report.

 

The post Prepare for the Leap Second appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Multisection Backup for Image Copies

The Oracle Instructor - Mon, 2015-06-29 10:30

A nice Oracle Database 12c New Feature enhances the multisection backup, introduced in 11g: you can now use it for image copies as well!

[Figure: Multisection Backup with Image Copy]

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name PRIMA

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/prima/system01.dbf
2    235      SYSAUX               NO      /u01/app/oracle/oradata/prima/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/prima/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/prima/users01.dbf

List of Temporary Files
=======================
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/prima/temp01.dbt

RMAN> configure device type disk parallelism 2;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

RMAN> backup section size 301m as copy datafile 4;

Starting backup at 29-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/prima/users01.dbf
backing up blocks 1 through 38528
channel ORA_DISK_2: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/prima/users01.dbf
backing up blocks 38529 through 77056
output file name=/u02/fra/PRIMA/datafile/o1_mf_users_bs2v934z_.dbf tag=TAG20150629T180658
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
output file name=/u02/fra/PRIMA/datafile/o1_mf_users_bs2v934z_.dbf tag=TAG20150629T180658
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:15
Finished backup at 29-JUN-15

RMAN> host 'ls -rtl /u02/fra/PRIMA/datafile/';

total 616468
-rw-r-----. 1 oracle oinstall 631250944 Jun 29 18:07 o1_mf_users_bs2v934z_.dbf
host command complete

We use backup sets by default now, also with the DUPLICATE DATABASE command, which of course ultimately results in image copies.
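
Multisection plays along there too: since 12c, a SECTION SIZE clause can be specified for DUPLICATE as well. A sketch (assuming a prepared auxiliary instance named dup and an active database duplication):

RMAN> duplicate target database to dup
      from active database
      section size 301m;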


Tagged: 12c New Features, Backup & Recovery, PracticalGuide, RMAN
Categories: DBA Blogs