
Feed aggregator

2 ways to move archivelogs - both need RMAN

Yann Neuhaus - Tue, 2015-06-23 14:03

The database must know where the files are. If you move them at the OS level, the database will not be aware of it. But there is a database tool that can copy or move files and update the database repository (the controlfile): RMAN. With RMAN you can either:

  • update the repository after you have moved the files at the OS level
  • or do both at once: move the files and update the repository
The syntax is a bit weird, so let's have an example.
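
In outline, the two forms look like this (the paths here are just placeholders for your own locations):

RMAN> catalog start with '/new/location';
RMAN> backup as copy archivelog all format '/new/location/%U' delete input;

The first form only registers files that you have already moved yourself; the second moves them and updates the repository in one step. The detailed walkthrough of both follows.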

RMAN> CATALOG

I have the following archived logs in the /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23 directory:

[oracle@VM111 2015_06_23]$ pwd
/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23
[oracle@VM111 2015_06_23]$ ls -alrt
total 188
drwxr-x---. 5 oracle oinstall   4096 Jun 23 21:40 ..
-rw-r-----. 1 oracle oinstall 120320 Jun 23 21:44 o1_mf_1_61_brmfrl7v_.arc
-rw-r-----. 1 oracle oinstall   3584 Jun 23 21:44 o1_mf_1_62_brmfrq6c_.arc
-rw-r-----. 1 oracle oinstall   2560 Jun 23 21:44 o1_mf_1_63_brmfrsts_.arc
-rw-r-----. 1 oracle oinstall   1536 Jun 23 21:44 o1_mf_1_64_brmfrtjs_.arc
-rw-r-----. 1 oracle oinstall  30208 Jun 23 21:45 o1_mf_1_65_brmft8n5_.arc
-rw-r-----. 1 oracle oinstall   6144 Jun 23 21:45 o1_mf_1_66_brmftlpg_.arc
-rw-r-----. 1 oracle oinstall   4096 Jun 23 21:45 o1_mf_1_67_brmftr8w_.arc
-rw-r-----. 1 oracle oinstall   2560 Jun 23 21:45 o1_mf_1_68_brmftvwn_.arc
drwxr-x---. 2 oracle oinstall   4096 Jun 23 21:45 .
and I move them to /u01/DEMO/temp/:
[oracle@VM111 2015_06_23]$ mv * /u01/DEMO/temp/
my current directory is empty:
[oracle@VM111 2015_06_23]$ ls -alrt
total 8
drwxr-x---. 5 oracle oinstall 4096 Jun 23 21:40 ..
drwxr-x---. 2 oracle oinstall 4096 Jun 23 21:50 .
and the /u01 one has my archived logs:
[oracle@VM111 2015_06_23]$ ls -alrt /u01/DEMO/temp
total 188
-rw-r-----.  1 oracle oinstall 120320 Jun 23 21:44 o1_mf_1_61_brmfrl7v_.arc
-rw-r-----.  1 oracle oinstall   3584 Jun 23 21:44 o1_mf_1_62_brmfrq6c_.arc
-rw-r-----.  1 oracle oinstall   2560 Jun 23 21:44 o1_mf_1_63_brmfrsts_.arc
-rw-r-----.  1 oracle oinstall   1536 Jun 23 21:44 o1_mf_1_64_brmfrtjs_.arc
-rw-r-----.  1 oracle oinstall  30208 Jun 23 21:45 o1_mf_1_65_brmft8n5_.arc
-rw-r-----.  1 oracle oinstall   6144 Jun 23 21:45 o1_mf_1_66_brmftlpg_.arc
-rw-r-----.  1 oracle oinstall   4096 Jun 23 21:45 o1_mf_1_67_brmftr8w_.arc
-rw-r-----.  1 oracle oinstall   2560 Jun 23 21:45 o1_mf_1_68_brmftvwn_.arc
drwxr-x---. 10 oracle oinstall   4096 Jun 23 21:49 ..
drwxr-xr-x.  2 oracle oinstall   4096 Jun 23 21:50 .
[oracle@VM111 2015_06_23]$

But let's list the archived logs from RMAN:

[oracle@VM111 2015_06_23]$ rman target /
Recovery Manager: Release 12.1.0.2.0 - Production on Tue Jun 23 21:50:48 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
connected to target database: DEMO11 (DBID=684456715)

RMAN> list archivelog all;

using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name DEMO11
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
82      1    61      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_61_brmfrl7v_.arc

84      1    62      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_62_brmfrq6c_.arc

86      1    63      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_63_brmfrsts_.arc

88      1    64      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_64_brmfrtjs_.arc

90      1    65      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_65_brmft8n5_.arc

92      1    66      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_66_brmftlpg_.arc

94      1    67      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_67_brmftr8w_.arc

96      1    68      A 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_68_brmftvwn_.arc
the repository (in the controlfile) still has the old location. If I need those archived logs to recover the database, the recovery will fail.

The CROSSCHECK command makes RMAN verify whether the files are still where the repository says they are:

RMAN> crosscheck archivelog all;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=75 device type=DISK
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_61_brmfrl7v_.arc RECID=82 STAMP=883172658
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_62_brmfrq6c_.arc RECID=84 STAMP=883172663
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_63_brmfrsts_.arc RECID=86 STAMP=883172665
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_64_brmfrtjs_.arc RECID=88 STAMP=883172666
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_65_brmft8n5_.arc RECID=90 STAMP=883172712
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_66_brmftlpg_.arc RECID=92 STAMP=883172722
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_67_brmftr8w_.arc RECID=94 STAMP=883172728
validation failed for archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_68_brmftvwn_.arc RECID=96 STAMP=883172731
Crosschecked 8 objects
validation failed for all of them, so they are now marked as EXPIRED:
RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name DEMO11
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
82      1    61      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_61_brmfrl7v_.arc

84      1    62      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_62_brmfrq6c_.arc

86      1    63      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_63_brmfrsts_.arc

88      1    64      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_64_brmfrtjs_.arc

90      1    65      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_65_brmft8n5_.arc

92      1    66      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_66_brmftlpg_.arc

94      1    67      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_67_brmftr8w_.arc

96      1    68      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_68_brmftvwn_.arc
which means that the database does not know of any archived logs anymore.

I did only one part of the job. Now I need to register the new location with the CATALOG command:

RMAN> catalog start with '/u01/DEMO/temp';

searching for all files that match the pattern /u01/DEMO/temp

List of Files Unknown to the Database
=====================================
File Name: /u01/DEMO/temp/o1_mf_1_64_brmfrtjs_.arc
File Name: /u01/DEMO/temp/o1_mf_1_68_brmftvwn_.arc
File Name: /u01/DEMO/temp/o1_mf_1_66_brmftlpg_.arc
File Name: /u01/DEMO/temp/o1_mf_1_67_brmftr8w_.arc
File Name: /u01/DEMO/temp/o1_mf_1_65_brmft8n5_.arc
File Name: /u01/DEMO/temp/o1_mf_1_62_brmfrq6c_.arc
File Name: /u01/DEMO/temp/o1_mf_1_63_brmfrsts_.arc
File Name: /u01/DEMO/temp/o1_mf_1_61_brmfrl7v_.arc

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/DEMO/temp/o1_mf_1_64_brmfrtjs_.arc
File Name: /u01/DEMO/temp/o1_mf_1_68_brmftvwn_.arc
File Name: /u01/DEMO/temp/o1_mf_1_66_brmftlpg_.arc
File Name: /u01/DEMO/temp/o1_mf_1_67_brmftr8w_.arc
File Name: /u01/DEMO/temp/o1_mf_1_65_brmft8n5_.arc
File Name: /u01/DEMO/temp/o1_mf_1_62_brmfrq6c_.arc
File Name: /u01/DEMO/temp/o1_mf_1_63_brmfrsts_.arc
File Name: /u01/DEMO/temp/o1_mf_1_61_brmfrl7v_.arc
the file type and header of each file are read to check that it belongs to this database, and then the files are registered. Before listing them, I remove the expired entries:
RMAN> delete expired archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=75 device type=DISK
List of Archived Log Copies for database with db_unique_name DEMO11
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
82      1    61      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_61_brmfrl7v_.arc

84      1    62      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_62_brmfrq6c_.arc

86      1    63      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_63_brmfrsts_.arc

88      1    64      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_64_brmfrtjs_.arc

90      1    65      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_65_brmft8n5_.arc

92      1    66      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_66_brmftlpg_.arc

94      1    67      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_67_brmftr8w_.arc

96      1    68      X 23-JUN-15
        Name: /u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_68_brmftvwn_.arc


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_61_brmfrl7v_.arc RECID=82 STAMP=883172658
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_62_brmfrq6c_.arc RECID=84 STAMP=883172663
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_63_brmfrsts_.arc RECID=86 STAMP=883172665
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_64_brmfrtjs_.arc RECID=88 STAMP=883172666
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_65_brmft8n5_.arc RECID=90 STAMP=883172712
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_66_brmftlpg_.arc RECID=92 STAMP=883172722
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_67_brmftr8w_.arc RECID=94 STAMP=883172728
deleted archived log
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_68_brmftvwn_.arc RECID=96 STAMP=883172731
Deleted 8 EXPIRED objects
and I can verify that a crosscheck validates all my files from the new location:
RMAN> crosscheck archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=75 device type=DISK
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_61_brmfrl7v_.arc RECID=104 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_62_brmfrq6c_.arc RECID=102 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_63_brmfrsts_.arc RECID=103 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_64_brmfrtjs_.arc RECID=97 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_65_brmft8n5_.arc RECID=101 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_66_brmftlpg_.arc RECID=99 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_67_brmftr8w_.arc RECID=100 STAMP=883173169
validation succeeded for archived log
archived log file name=/u01/DEMO/temp/o1_mf_1_68_brmftvwn_.arc RECID=98 STAMP=883173169
Crosschecked 8 objects

RMAN> BACKUP AS COPY

Let's do the same in one command. RMAN is there to do backups. Backups can go to backup sets, or they can be simple copies with BACKUP AS COPY. The destination is defined with the backup FORMAT string. And if we want to move instead of copy, we just add DELETE INPUT.

RMAN> backup as copy archivelog all format '/u03/DEMO/temp/%U' delete input;

Starting backup at 23-JUN-15
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=69 RECID=106 STAMP=883173353
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-69_T-1_A-880845261_0bqa8a0b RECID=109 STAMP=883173387
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_69_brmgg9on_.arc RECID=106 STAMP=883173353
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=61 RECID=104 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-61_T-1_A-880845261_0cqa8a0d RECID=110 STAMP=883173389
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_61_brmfrl7v_.arc RECID=104 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=65 RECID=101 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-65_T-1_A-880845261_0dqa8a0e RECID=111 STAMP=883173390
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_65_brmft8n5_.arc RECID=101 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=70 RECID=108 STAMP=883173387
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-70_T-1_A-880845261_0eqa8a0f RECID=112 STAMP=883173391
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u02/DEMO/fast_recovery_area/DEMO11/archivelog/2015_06_23/o1_mf_1_70_brmghct5_.arc RECID=108 STAMP=883173387
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=66 RECID=99 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-66_T-1_A-880845261_0fqa8a0g RECID=113 STAMP=883173392
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_66_brmftlpg_.arc RECID=99 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=67 RECID=100 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-67_T-1_A-880845261_0gqa8a0h RECID=114 STAMP=883173393
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_67_brmftr8w_.arc RECID=100 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=62 RECID=102 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-62_T-1_A-880845261_0hqa8a0i RECID=115 STAMP=883173394
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_62_brmfrq6c_.arc RECID=102 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=63 RECID=103 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-63_T-1_A-880845261_0iqa8a0j RECID=116 STAMP=883173395
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_63_brmfrsts_.arc RECID=103 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=68 RECID=98 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-68_T-1_A-880845261_0jqa8a0k RECID=117 STAMP=883173396
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_68_brmftvwn_.arc RECID=98 STAMP=883173169
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=64 RECID=97 STAMP=883173169
output file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-64_T-1_A-880845261_0kqa8a0l RECID=118 STAMP=883173397
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/DEMO/temp/o1_mf_1_64_brmfrtjs_.arc RECID=97 STAMP=883173169
Finished backup at 23-JUN-15
The syntax is very different from a move command, but the effect is the same. The file names may have changed (because of the %U format), but who cares? Only RMAN needs to know what is inside the files. You have a repository (controlfile or RMAN catalog) that knows all the attributes of the files (DBID, thread#, sequence#, SCN, etc.), so it is better to rely on that than on a file name and timestamp.
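
If you want to check those attributes yourself outside of RMAN, the controlfile exposes them through V$ARCHIVED_LOG (just a sanity check from SQL*Plus, not something RMAN needs):

select thread#, sequence#, first_change#, name
from v$archived_log
where status = 'A';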

Look at the files, they are now in my third destination:

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name DEMO11
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - ---------
110     1    61      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-61_T-1_A-880845261_0cqa8a0d

115     1    62      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-62_T-1_A-880845261_0hqa8a0i

116     1    63      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-63_T-1_A-880845261_0iqa8a0j

118     1    64      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-64_T-1_A-880845261_0kqa8a0l

111     1    65      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-65_T-1_A-880845261_0dqa8a0e

113     1    66      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-66_T-1_A-880845261_0fqa8a0g

114     1    67      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-67_T-1_A-880845261_0gqa8a0h

117     1    68      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-68_T-1_A-880845261_0jqa8a0k

109     1    69      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-69_T-1_A-880845261_0bqa8a0b

112     1    70      A 23-JUN-15
        Name: /u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-70_T-1_A-880845261_0eqa8a0f
and a crosscheck validates that they are accessible there:
RMAN> crosscheck archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=75 device type=DISK
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-61_T-1_A-880845261_0cqa8a0d RECID=110 STAMP=883173389
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-62_T-1_A-880845261_0hqa8a0i RECID=115 STAMP=883173394
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-63_T-1_A-880845261_0iqa8a0j RECID=116 STAMP=883173395
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-64_T-1_A-880845261_0kqa8a0l RECID=118 STAMP=883173397
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-65_T-1_A-880845261_0dqa8a0e RECID=111 STAMP=883173390
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-66_T-1_A-880845261_0fqa8a0g RECID=113 STAMP=883173392
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-67_T-1_A-880845261_0gqa8a0h RECID=114 STAMP=883173393
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-68_T-1_A-880845261_0jqa8a0k RECID=117 STAMP=883173396
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-69_T-1_A-880845261_0bqa8a0b RECID=109 STAMP=883173387
validation succeeded for archived log
archived log file name=/u03/DEMO/temp/arch_D-DEMO11_id-684456715_S-70_T-1_A-880845261_0eqa8a0f RECID=112 STAMP=883173391
Crosschecked 10 objects

Conclusion

Which one do you prefer? Managing database files from the OS is old-style. You are already doing your backups with RMAN, so why not use RMAN for any operation on those files? Then you are sure that the repository is up to date. When you need your archived logs to recover, you don't want to waste time finding out where they were moved one day by a colleague who forgot to re-catalog them in the heat of an emergency.

What’s in a Name ? USER_TAB_COLS and USER_TAB_COLUMNS are different.

The Anti-Kyte - Tue, 2015-06-23 13:22

My son and I are quite similar in some ways ( although he would vehemently dispute this).
Like me, he works in IT, in his case as a Support Engineer.
Like me, he’s called Mike (well, my Mum likes the name…and I can spell it).
Unlike me – as he would be quick to point out – he still has all his own hair.
These similarities have been known to cause confusion – I’m often contacted by recruitment agents with enticing offers to work on…some newfangled stuff I know nothing about, whilst he’s constantly being offered “exciting” Database related opportunities.

Similar confusion can arise when you’re delving into the Oracle Data Dictionary…

Note – the examples that follow apply to 11gR2. Additionally, apart from the COLS synonym, what is true for USER_TAB_COLUMNS and USER_TAB_COLS also applies to their ALL_ and DBA_ equivalents.

When it comes to getting column meta-data out of the Data Dictionary, you’ve got several choices. To illustrate this, connect as HR and ….

select column_name, data_type
from cols
where table_name = 'EMPLOYEES'
order by column_id
/

select column_name, data_type
from user_tab_columns
where table_name = 'EMPLOYEES'
order by column_id
/

select column_name, data_type
from user_tab_cols
where table_name = 'EMPLOYEES'
order by column_id
/

In each case the results are identical :

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
COMMISSION_PCT		       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER

11 rows selected.

So, it would appear that COLS, USER_TAB_COLUMNS and USER_TAB_COLS are all synonyms for the same thing…

select synonym_name, table_owner, table_name
from all_synonyms
where synonym_name in ('COLS', 'USER_TAB_COLS', 'USER_TAB_COLUMNS')
order by table_name
/ 

SYNONYM_NAME		       TABLE_OWNER	    TABLE_NAME
------------------------------ -------------------- --------------------
USER_TAB_COLS		       SYS		    USER_TAB_COLS
USER_TAB_COLUMNS	       SYS		    USER_TAB_COLUMNS
COLS			       SYS		    USER_TAB_COLUMNS

…OK, so COLS is indeed a synonym for USER_TAB_COLUMNS. USER_TAB_COLS and USER_TAB_COLUMNS also appear to be identical…

select table_name, comments
from all_tab_comments
where table_name in ('USER_TAB_COLUMNS', 'USER_TAB_COLS')
/

TABLE_NAME	     COMMENTS
-------------------- --------------------------------------------------
USER_TAB_COLS	     Columns of user's tables, views and clusters
USER_TAB_COLUMNS     Columns of user's tables, views and clusters

There you go then. Must be the case…

Unused Columns

Let's create another table in the HR schema as a copy of EMPLOYEES….

create table non_sales_emps as
    select * 
    from employees
    where commission_pct is null
/

Table created.

As the name suggests, we’re not going to have any Sales Staff in this table, so we don’t really need the COMMISSION_PCT column…

SQL> alter table non_sales_emps
  2      set unused column commission_pct
  3  /

Table altered.

SQL> 

So, the table no longer contains the COMMISSION_PCT column…

select column_name, data_type
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
order by column_id
/

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER
SYS_C00009_15061918:46:18$     NUMBER

11 rows selected.

Yes, the table now has 10 columns…and here are the details of all 11 of them.
Weren’t expecting that ? Well then you probably won’t be expecting this either….

select column_name, data_type
from user_tab_columns
where table_name = 'NON_SALES_EMPS'
order by column_id
/

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER

10 rows selected.

The extra column has magically disappeared again. Just what is going on ?

Delving a bit deeper into this particular rabbit-hole…

select column_name, data_type
from all_tab_columns
where owner = 'SYS'
and table_name = 'USER_TAB_COLS'
minus
select column_name, data_type
from all_tab_columns
where owner = 'SYS'
and table_name = 'USER_TAB_COLUMNS'
/

COLUMN_NAME		       DATA_TYPE
------------------------------ ------------------------------
HIDDEN_COLUMN		       VARCHAR2
INTERNAL_COLUMN_ID	       NUMBER
QUALIFIED_COL_NAME	       VARCHAR2
SEGMENT_COLUMN_ID	       NUMBER
VIRTUAL_COLUMN		       VARCHAR2

SQL> 

From this we can see that USER_TAB_COLS contains five additional columns over those available in USER_TAB_COLUMNS.

select column_name, comments
from all_col_comments
where owner = 'SYS'
and table_name = 'USER_TAB_COLS'
and column_name in ( 'HIDDEN_COLUMN', 'INTERNAL_COLUMN_ID', 
    'QUALIFIED_COL_NAME', 'SEGMENT_COLUMN_ID', 'VIRTUAL_COLUMN')
/

COLUMN_NAME		  COMMENTS
------------------------- --------------------------------------------------
HIDDEN_COLUMN		  Is this a hidden column?
VIRTUAL_COLUMN		  Is this a virtual column?
SEGMENT_COLUMN_ID	  Sequence number of the column in the segment
INTERNAL_COLUMN_ID	  Internal sequence number of the column
QUALIFIED_COL_NAME	  Qualified column name
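
We can use one of these extra columns to confirm what happened to COMMISSION_PCT. A quick check against our table (the renamed SYS_C… column should come back flagged as hidden):

select column_name, hidden_column, internal_column_id
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
and hidden_column = 'YES'
/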

Furthermore, if we examine the source code for the USER_TAB_COLUMNS view, the reason for its similarity with USER_TAB_COLS becomes apparent :

select TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_TYPE_MOD, DATA_TYPE_OWNER,
       DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, COLUMN_ID,
       DEFAULT_LENGTH, DATA_DEFAULT, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE,
       DENSITY, NUM_NULLS, NUM_BUCKETS, LAST_ANALYZED, SAMPLE_SIZE,
       CHARACTER_SET_NAME, CHAR_COL_DECL_LENGTH,
       GLOBAL_STATS, USER_STATS, AVG_COL_LEN, CHAR_LENGTH, CHAR_USED,
       V80_FMT_IMAGE, DATA_UPGRADED, HISTOGRAM
  from USER_TAB_COLS
 where HIDDEN_COLUMN = 'NO'

So, just when does it become useful to use USER_TAB_COLS rather than USER_TAB_COLUMNS ?

In 11g, you’d set a column to be unused on a large table in order for the “drop” to happen quickly.
Once you have set it to unused, the only thing you can do with it is drop it altogether to reclaim the space it’s using.
To find out which tables have unused columns, you can use…

select table_name, count
from user_unused_col_tabs
/

TABLE_NAME			    COUNT
------------------------------ ----------
NON_SALES_EMPS				1

…and if you want to drop an unused column, you don’t need to specify its name in the command…

alter table non_sales_emps
    drop unused columns
/

Table altered.

Where USER_TAB_COLS does come in handy is when you’re looking for …

Virtual Columns

For example, we could add a column to our NON_SALES_EMPS table to calculate the number of completed years service for each employee.
First, we need a deterministic function to return the number of full years between a given date and today :

create or replace function years_elapsed_fn( i_date in date)
    return number deterministic
as
begin
    return floor( months_between( trunc(sysdate), i_date) / 12);
end;
/

Now we add a virtual column to the table which calls this function :

alter table non_sales_emps
    add years_service generated always as
        (years_elapsed_fn(hire_date))
/

Whilst there’s no way to tell which columns are virtual in USER_TAB_COLUMNS, there is in USER_TAB_COLS :

select column_name, data_type, virtual_column
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
/

COLUMN_NAME		  DATA_TYPE			 VIR
------------------------- ------------------------------ ---
EMPLOYEE_ID		  NUMBER			 NO
FIRST_NAME		  VARCHAR2			 NO
LAST_NAME		  VARCHAR2			 NO
EMAIL			  VARCHAR2			 NO
PHONE_NUMBER		  VARCHAR2			 NO
HIRE_DATE		  DATE				 NO
JOB_ID			  VARCHAR2			 NO
SALARY			  NUMBER			 NO
MANAGER_ID		  NUMBER			 NO
DEPARTMENT_ID		  NUMBER			 NO
YEARS_SERVICE		  NUMBER			 YES

11 rows selected.

SQL> 

The reasons for having two such similar (and similarly named) dictionary views seem to have been lost in the mists of time.
Whatever the rationale, it’s worth knowing the difference next time you need to go poking around the column meta-data in your database.


Filed under: Oracle, SQL Tagged: all_tab_comments, alter table set unused column, cols, deterministic, hidden columns, user_tab_cols, user_tab_columns, virtual columns

SQL Server 2016 : availability groups and the new SSISDB support

Yann Neuhaus - Tue, 2015-06-23 12:25

This blog post focuses on the new support for the SSISDB catalog in an AlwaysOn architecture.

Other studies are available here:

 

A couple of weeks ago, I was involved in an SSIS infrastructure project with SQL Server 2014. As you know, the SSIS architecture has fundamentally changed since SQL Server 2012 and has led to a new way of administrating it from the DBA perspective. This is also particularly true when we have to take into account an AlwaysOn architecture with the new SSISDB catalog on SQL Server 2014.

Indeed, when you want to include the SSISDB catalog in a SQL Server 2014 availability group, you have to perform some extra steps, as described in the Microsoft SSIS blog post here. The task consists of manually creating some SQL Server jobs to handle failover events, which require the re-encryption of the database master key by the service master key on the new primary replica. Likewise, you have to deal with the SSIS Server Maintenance job, which is not AlwaysOn-aware by default. Thus, deploying the SSISDB catalog in an availability group environment is not an easy task with SQL Server 2014, but let's take a look at the new AlwaysOn support in SQL Server 2016.

Fortunately, Microsoft has built on the experience gained from the previous version. Unlike SQL Server 2014, the next version will provide an easier way to deploy and manage the SSISDB catalog in an AlwaysOn infrastructure. By referring to the BOL here, you will notice that all the configuration is done directly from the availability group wizard, without scripting any additional objects. So, my goal in this blog post is to understand the internal changes made by Microsoft in this area.

First of all, let's take a look at some changes by trying to add the SSISDB catalog to an availability group. At this point, you will be asked to provide the password of your database master key before continuing, as follows:

 

[Image: aag_ssidb_database_master_key]

 

Yes, the SSISDB catalog makes intensive use of encryption to protect sensitive data in projects, packages, parameters and so on. At this point, you may notice a warning icon. In fact, the wizard warns us that AlwaysOn support for SSISDB must be configured in a final step, because it is required in order to handle availability group failover events.

 

[Image: aag_ssidb_validation_step]

 

To enable AlwaysOn support, we need to go to the Integration Services Catalogs node and include the concerned replica(s), as shown below:

 

[Image: aag_ssidb_services_integration_services_node]

 

...

 

[Image: aag_ssidb_services_integration_services_alwayson_support]

 

My configuration is now finished. In a second step, we will have a look at the SQL Server agent jobs. Indeed, during my test I suspected that a lot of work was done by SQL Server behind the scenes, and I was right. It added two additional jobs, as shown below:

 

[Image: aag_ssidb_catalog_jobs]

 

First of all, the SSIS Failover Monitor Job is designed to run on a regular basis in order to detect failover events, by refreshing the state of the concerned replica(s) and, when appropriate, starting the SSISDB catalog with the SSISDB.catalog.startup stored procedure. I remember having implemented this kind of failover detection mechanism with the first AlwaysOn release in order to monitor availability group failover events (see my blog post here).

 

DECLARE @role int
DECLARE @status tinyint

SET @role = (SELECT [role]
             FROM [sys].[dm_hadr_availability_replica_states] hars
             INNER JOIN [sys].[availability_databases_cluster] adc
                    ON hars.[group_id] = adc.[group_id]
             WHERE hars.[is_local] = 1 AND adc.[database_name] = 'SSISDB')

IF @role = 1
BEGIN
    EXEC [SSISDB].[internal].[refresh_replica_status]
         @server_name = N'SQL161',
         @status = @status OUTPUT

    IF @status = 1
        EXEC [SSISDB].[catalog].[startup]
END

 

Moreover, we may also notice some changes to the second job, SSIS Server Maintenance Job. Indeed, this job is now designed to support the AlwaysOn feature regardless of the current SSISDB configuration (whether or not it is included in an availability group):

 

DECLARE @role int

SET @role = (SELECT [role]
             FROM [sys].[dm_hadr_availability_replica_states] hars
             INNER JOIN [sys].[availability_databases_cluster] adc
                    ON hars.[group_id] = adc.[group_id]
             WHERE hars.[is_local] = 1 AND adc.[database_name] = 'SSISDB')

IF DB_ID('SSISDB') IS NOT NULL AND (@role IS NULL OR @role = 1)
    EXEC [SSISDB].[internal].[cleanup_server_retention_window]


Finally, just a quick search in the SSISDB database gives us a good overview of the new objects related to the AlwaysOn feature:

 

select name as [object_name],
       type_desc
from sys.objects
where name like '%replica%'

 

[Image: aag_ssidb_objects]

 

  • The add_replica_info procedure seems to be used to add a new SQL Server instance as a replica in the SSISDB catalog.
  • The update_replica_info procedure seems to be used to update the replica state in the alwayson_support_state internal table.
  • The refresh_replica_status procedure seems to be used by the SSIS Failover Monitor Job on a regular basis to update the replica configuration state and detect failover events.
  • However, according to my tests, I found neither an explicit call to the delete_replica_info stored procedure nor a guideline that explains how to exclude an AlwaysOn replica from the SSISDB catalog. I will update this blog post when I get the answer.

 

My feeling is that the new SSISDB support for AlwaysOn will be a pretty cool feature of the next version. It is surely not the most important improvement in this area, but it will help every DBA who wants to implement the SSISDB catalog in an AlwaysOn infrastructure, by avoiding the additional burden required with SQL Server 2014. I'm looking forward to the final release!

An Example Why LMS Should Not Be Only Part of Learning Ecosystem

Michael Feldstein - Tue, 2015-06-23 11:51

By Phil Hill

In Michael’s initial post on the Post-LMS, he built on this central theme:

Reading Phil’s multiple reviews of Competency-Based Education (CBE) “LMSs”, one of the implications that jumps out at me is that we see a much more rapid and coherent progression of learning platform designs if you start with a particular pedagogical approach in mind.

The idea here is not that the traditional LMS has no value (it can be critical infrastructure, particularly for mainstream faculty adoption), but rather that in the future we both expect to see more learning platform designs tied to specific pedagogies. This idea is quite relevant given the ongoing LMS users’ conferences (InstructureCon last week, D2L Fusion this week, BbWorld next month, and Apereo / Sakai as well as iMoot in the past two months).

Later in the post Michael mentions ASU’s Habitable Worlds as an example of assessing the quality of students’ participation instead of direct grading.

A good example of this is ASU’s Habitable Worlds, which I have blogged about in the past and which will be featured in an episode of the aforementioned e-Literate TV series. Habitable Worlds is roughly in the pedagogical family of CBE and mastery learning. It’s also a PBL [problem-based learning] course. Students are given a randomly generated star field and are given a semester-long project to determine the likelihood that intelligent life exists in that star field. There are a number of self-paced adaptive lessons built on the Smart Sparrow platform. Students learn competencies through those lessons, but they are competencies that are necessary to complete the larger project, rather than simply a set of hoops that students need to jump through. In other words, the competency lessons are resources for the students.

In our recent case study on ASU, Lev Horodyskyj shared his experiences helping to design the course. He specifically called out the difficulties they faced when initially attempting this pedagogical approach with a traditional LMS.

Phil Hill: But the team initially found that the traditional technologies on campus were not suited to support this new personalized learning approach.

Lev Horodyskyj: Within a traditional system it was fairly difficult. Traditional learning management systems aren’t really set up to allow a lot of interactivity. They’re more designed to let you do things that you would normally do in a traditional classroom: multiple choice tests; quizzes; turning in papers; uploading, downloading things.

Especially when you’re teaching science, a range of possibilities are viable answers, and oftentimes when we teach science, we’re more interested in what you’re not allowed to do rather than what you’re allowed to do.

Traditional LMS’s don’t allow you to really program in huge parameter spaces that you can work with. They’re basically looking for, “What are the exact correct answers you are allowed to accept?”

I was brought into the picture once Ariel decided that this could be an interesting way to go, and I started playing around with the system. I instantly fell in love with it because it was basically like PowerPoint. I could drop whatever I wanted wherever I wanted, and then wire it up to behave the way I wanted it to behave.

Now, instead of painstakingly programming all the 60 possible answers that a student might write that are acceptable, I can all of a sudden set up a page to take any answer I want and evaluate it in real time. I no longer have to program those 60 answers; I could just say, “Here is the range of answers that are acceptable,” and it would work with that.

Phil Hill: And this was the Smart Sparrow system?

Lev Horodyskyj: This was the Smart Sparrow system, correct. It was really eye-opening because it allowed so many more possibilities. It was literally a blank canvas where I could put whatever I wanted.

This pedagogical approach, supported by appropriate learning platform design, seems to lead to conceptual understanding.

Eric Berkebile: My experiences were very similar. What amazed me the most about it was more how the course was centered upon building concept. It wasn’t about hammering in detail. They weren’t trying to test you on, “How much can you remember out of what we’re feeding you?”

You go through the slides, you go through the different sections, and you are building conceptual knowledge while you are doing it. Once you’ve demonstrated that you can actually apply the concept that they are teaching you, then you can move forward. Until that happens, you’re going to be stuck exactly where you are, and you’re going to have to ask help from other students in the class; you’re going to have to use the resources available.

They want you to learn how to solve problems, they want you to learn how to apply the concepts, and they want you to do it in a way that’s going to work best for you.

Phil Hill: So, it’s multidisciplinary for various disciplines but all held together by project problem-solving around Drake’s equation?

Todd Gilbert: Yeah. One concept really ties it all together, and if you want to answer those questions around that kind of problem, like, “Is there life out there? Are we alone?” you can’t do that with just astronomy, you can’t do that with just biology. It touches everything, from sociology down to physics. Those are very, very different disciplines, so you have to be adaptable.

But I mean if you rise to that kind of a challenge—I can honestly say, this is not hyperbole or anything. It is my favorite class I’ve taken at this college, and it’s a half-semester online course.

Eric Berkebile: By far the best course I’ve taken, and I’ve recommended it to everybody I’ve talked to since.

This approach is not mainstream in the sense that the vast majority of courses are not designed as problem-based learning, so I am not arguing that all LMSs should change accordingly or that Smart Sparrow is a superior product. I do, however, think that this episode gives a concrete example of how the traditional LMS should not be the only platform available in a learning ecosystem and how we will likely see more development of platforms tied to specific pedagogical approaches.

The post An Example Why LMS Should Not Be Only Part of Learning Ecosystem appeared first on e-Literate.

Quick and Dirty - Refreshing a Select List of Values

Denes Kubicek - Tue, 2015-06-23 05:14
This is a quick and dirty solution, but it works. It shows how to update a select list of values on demand without a lot of coding.

Categories: Development

AJAX Callback and jQuery creating Page Items Dynamically

Denes Kubicek - Tue, 2015-06-23 04:40
This example shows a couple of interesting techniques:
  • 1. How to create and use an AJAX Callback process instead of an Application Process on Demand,
  • 2. how to create a dynamic page item,
  • 3. how to modify the item display settings and reference the Font Awesome icon set and, finally,
  • 4. how to call an AJAX process from JavaScript.
This example is based on a recent forum posting where this question was asked.

Have a look and enjoy.

Categories: Development

RMAN - 3 : The DB_UNIQUE_NAME in Backups to the FRA

Hemant K Chitale - Tue, 2015-06-23 03:14
When you run RMAN backups to the FRA without using the FORMAT clause, Oracle automatically generates filenames for the BackupPieces.  The folder name is derived from the system date.  But what is the parent folder for backups ?  Is it simply the DB_RECOVERY_FILE_DEST ?  Actually, the DB_UNIQUE_NAME comes into play as well.

For example :

[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Tue Jun 23 16:57:19 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> list backup of datafile 1;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 831.23M DISK 00:03:32 07-JUN-15
BP Key: 1 Status: AVAILABLE Compressed: YES Tag: TAG20150607T165914
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_nnndf_TAG20150607T165914_bq81z2y6_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14068320 07-JUN-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
3 Full 366.89M DISK 00:01:56 07-JUN-15
BP Key: 3 Status: AVAILABLE Compressed: YES Tag: TAG20150607T170754
Piece Name: /NEW_FS/oracle/FRA/ORCL/backupset/2015_06_07/o1_mf_nnndf_TAG20150607T170754_bq82hc5f_.bkp
List of Datafiles in backup set 3
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 14068721 07-JUN-15 /home/oracle/app/oracle/oradata/orcl/system01.dbf

RMAN>

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Jun 23 16:58:34 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /NEW_FS/oracle/FRA
db_recovery_file_dest_size big integer 8G
SQL>

We can see that the DB_RECOVERY_FILE_DEST is defined as "/NEW_FS/oracle/FRA". However, the backups go into a "backupset" folder under "/NEW_FS/oracle/FRA/ORCL/". The "ORCL" is part of the path to the folder holding the backups. How is this "ORCL" derived ?

SQL> show parameter db_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name string orcl
SQL> show parameter db_unique_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string orcl
SQL>

By default, the DB_UNIQUE_NAME is the same as DB_NAME. Let's see what happens after I change the DB_UNIQUE_NAME.

SQL> 
SQL> !ls -l /NEW_FS/oracle/FRA/
total 4
drwxrwx--- 5 oracle oracle 4096 Jun 7 17:10 ORCL

SQL>
SQL> alter system set db_unique_name='HEMANTDB' scope=SPFILE;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 385878712 bytes
Database Buffers 62914560 bytes
Redo Buffers 6008832 bytes
Database mounted.
Database opened.
SQL> !ls -l /NEW_FS/oracle/FRA/
total 4
drwxrwx--- 5 oracle oracle 4096 Jun 7 17:10 ORCL

SQL>

After changing the DB_UNIQUE_NAME, Oracle doesn't create the folder for the new DB_UNIQUE_NAME until I run an RMAN backup.

RMAN> exit

RMAN-06900: WARNING: unable to generate V$RMAN_STATUS or V$RMAN_OUTPUT row
RMAN-06901: WARNING: disabling update of the V$RMAN_STATUS and V$RMAN_OUTPUT rows
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3344
Session ID: 67 Serial number: 13


Recovery Manager complete.
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Tue Jun 23 17:07:14 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> backup datafile 1;

Starting backup at 23-JUN-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=27 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=38 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/home/oracle/app/oracle/oradata/orcl/system01.dbf
channel ORA_DISK_1: starting piece 1 at 23-JUN-15
channel ORA_DISK_1: finished piece 1 at 23-JUN-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_06_23/o1_mf_nnndf_TAG20150623T170721_brl8g9od_.bkp tag=TAG20150623T170721 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:25
Finished backup at 23-JUN-15

Starting Control File and SPFILE Autobackup at 23-JUN-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_06_23/o1_mf_s_883156126_brl8k0w4_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 23-JUN-15

RMAN>

SQL> !ls -l /NEW_FS/oracle/FRA/
total 8
drwxrwx--- 3 oracle oracle 4096 Jun 23 17:07 HEMANTDB
drwxrwx--- 5 oracle oracle 4096 Jun 7 17:10 ORCL

SQL>
SQL> show parameter db_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_name string orcl
SQL> show parameter db_unique_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string HEMANTDB
SQL>

Notice how Oracle created the "HEMANTDB" folder under the designated DB_RECOVERY_FILE_DEST, and then created the "backupset" and "autobackup" folders as subfolders under it.  BackupSet BackupPieces and Controlfile Autobackups now go to the new path.  In short, the backups go to folders under {DB_RECOVERY_FILE_DEST}/{DB_UNIQUE_NAME}
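
If you ever need to double-check where the BackupPieces have landed, the controlfile records the full handle of every piece; a quick sanity check from SQL*Plus (just a verification, RMAN itself doesn't need it):

SQL> select handle from v$backup_piece order by completion_time;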

Categories: DBA Blogs

Creating Custom Listening Connectors using Integration Broker SDK

Javier Delgado - Tue, 2015-06-23 02:48
One of my customers recently needed to allow a third-party web application to attach files into PeopleSoft. After trying a number of different approaches (the integration had to be done at the web application client level, which significantly reduces the options to manipulate the request to PeopleSoft before sending it, particularly when dealing with old web browsers), I gave up and came to the conclusion that I needed a custom listening connector in Integration Broker to implement this integration.


The process of developing and installing a custom listening connector in Integration Broker is quite well described in PeopleBooks; however, I thought it would be helpful to document the process I have followed.
Samples
PeopleSoft comes with some samples of connectors developed using the Integration Broker SDK. The most interesting one for listening connectors is ExampleServletListeningConnector.java located in the following folder:
$PIA_HOME/webserv/IFHRDEV/applications/peoplesoft/PSIGW.war/WEB-INF/SDK/src/samplelisteningconnectors.java
I've used this sample as the basis for my custom connector. In some cases, I also found it quite handy to decompile the standard connectors such as HttpListeningConnector. I could not find any exhaustive source of documentation on the Integration Broker SDK, so decompiling the existing connectors proved to be a good way to understand how to best use it.
Note: For decompiling the Java class files I have used a very simple tool named cavaj. It is basic, but still helpful.
Development
Taking the previously mentioned sample as the basis, I have coded my own connector. If you are not interested in the details, you may want to skip to the next section.
What I needed to do in my connector was basically two things:
  1. Encode the incoming file using Base64, as binary files cannot otherwise be processed by the existing PeopleCode Message API.
  2. Pass any parameters received in the HTTP header or the URL as part of the IBInfo (the Integration Broker information included in every internal Integration Broker message).


If you want to check the actual code, you can download the source code from this link:
https://drive.google.com/file/d/0B9Ts3FrN6NfhNWdaVnNhalAxRG8/view?usp=sharing

Compiling
Once your source code is ready, you need to compile it. The first step for Java compilation is to set the environment variables so they point to the Java SDK. In my case, I was using the PeopleSoft HCM 9.2 Update Image 11, and these were the commands I needed to use:
export JAVA_HOME=/opt/oracle/psft/pt/jdk1.7.0_71
export PATH=$PATH:$JAVA_HOME/bin
Then I went to the directory where my Java file was placed and ran the following command to compile the file:
javac -cp /home/psadm2/psft/pt/8.54/webserv/peoplesoft/applications/peoplesoft/PSIGW.war/WEB-INF/classes:/home/psadm2/psft/pt/8.54/webserv/peoplesoft/applications/peoplesoft/PSIGW.war/WEB-INF/lib/mail.jar:/opt/oracle/psft/pt/bea/wlserver/server/lib/weblogic.jar FileUploadListeningConnector.java 
The paths may obviously differ in your case, but the important thing is to include the following directories/jar files in your class path:
  • $PIA_HOME/webserv/peoplesoft/applications/peoplesoft/PSIGW.war/WEB-INF/classes
  • mail.jar
  • weblogic.jar

Please note that weblogic.jar was needed because I was using WebLogic as my web server. In case you use WebSphere, you need to change this jar file.
Deployment
Once the file is compiled, you need to copy the resulting class file to the following directory:
$PIA_HOME/webserv/IFHRDEV/applications/peoplesoft/PSIGW.war/WEB-INF/classes/com/peoplesoft/pt/integrationgateway/listeningconnector
The next step is to let WebLogic know that there is a new servlet available. This can be done by editing the following file:
$PIA_HOME/webserv/IFHRDEV/applications/peoplesoft/PSIGW.war/WEB-INF/web.xml

All you need to do is to duplicate the sections referring to ExampleServletListeningConnector and replace those appearances with your connector name. In my case, I was using PeopleTools 8.54, and these were the lines I had to include:
(...)
<servlet>
    <servlet-name>ExampleServletListeningConnector</servlet-name>
    <servlet-class>com.peoplesoft.pt.integrationgateway.listeningconnector.ExampleServletListeningConnector</servlet-class>
</servlet>
<servlet>
    <servlet-name>FileUploadListeningConnector</servlet-name>
    <servlet-class>com.peoplesoft.pt.integrationgateway.listeningconnector.FileUploadListeningConnector</servlet-class>
</servlet>
(...)
<servlet-mapping>
    <servlet-name>ExampleServletListeningConnector</servlet-name>
    <url-pattern>/ExampleServletListeningConnector/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
    <servlet-name>FileUploadListeningConnector</servlet-name>
    <url-pattern>/FileUploadListeningConnector/*</url-pattern>
</servlet-mapping>
(...)
Testing
In order to test the new connector, you need to reboot the web server so that the changes made to register the new servlet are picked up. Once this is done, you can check whether the new listening connector is responding by using the declared URL:
http://webserver/PSIGW/FileUploadListeningConnector

Empty Leaf Blocks After Rollback Part I (Empty Spaces)

Richard Foote - Tue, 2015-06-23 00:09
There’s been an interesting recent discussion on the OTN Database forum regarding “Index blank blocks after large update that was rolled back“. Setting aside the odd scenario of updating a column that previously had 20 million distinct values to the same value on a 2 billion row table, the key questions raised are why the blank index leaf blocks […]
Categories: DBA Blogs

Moved to new/better blog Feeds via eMail

Online Apps DBA - Mon, 2015-06-22 18:21

[Image: Screen shot 2015-06-23 at 01.12.28]


If you subscribed to our blog onlineAppsDBA (using the RSS feed) prior to April 2015, then from today you will receive emails for new posts via a new/better email service provider.

Emails for new posts will come from the email ID contactus[@]k21technologies.com with a subject starting with [New Post] …, and will look like the image above.

Note: Ensure that you add the email address contactus[@]k21technologies.com to your safe sender list.

The post Moved to new/better blog Feeds via eMail appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Python cx_Oracle 5.2 driver for Oracle Database has been released

Christopher Jones - Mon, 2015-06-22 17:03

Anthony Tuininga just released an updated Python cx_Oracle 5.2 driver for Oracle Database. This release brings a number of enhancements, many of them for Oracle Database 12c features such as longer VARCHARS.

cx_Oracle 5.2 is the first release Oracle has contributed code for (thanks Avinash!) so we're pretty happy all around. And a big thank you to all the other contributors and users who have made this release possible.

The new code features are:

  • Support for Oracle Database 12c strings up to 32k characters.
  • Support for LOB values larger than 4 GB.
  • Support for Oracle Database 12c array DML row counts.
  • Support for fetching batch errors.
  • Support for connections as SYSASM.
  • Added types NCHAR, FIXED_NCHAR and LONG_NCHAR to replace the types UNICODE, FIXED_UNICODE and LONG_UNICODE (which are now deprecated). These types are available in Python 3 as well, so they can be used to specify the use of NCHAR type fields when binding or using setinputsizes() (see the sketch after this list).
  • Support for building without any configuration changes to the machine when using instant client RPMs on Linux.
  • Fixed session releasing to the pool when calling connection.close() (Issue #2).
  • Fixed binding of booleans in Python 3.x.
  • Added __version__ attribute to conform with PEP 396.
  • Fixed handling of datetime intervals (Issue #7).
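
As an illustration of the new NCHAR type constant, here is a minimal sketch; the credentials, connect string and table are assumptions made up for the example, not part of the release:

import cx_Oracle

# Hypothetical credentials and table; adjust for your environment.
connection = cx_Oracle.connect("scott", "tiger", "dbhost/orcl")
cursor = connection.cursor()

# Declare the first bind variable as NCHAR (new in 5.2, replacing the
# deprecated UNICODE type) so the value is bound as national character data.
cursor.setinputsizes(cx_Oracle.NCHAR)
cursor.execute("select ncol from nchar_tab where ncol = :1", ["König"])
print(cursor.fetchall())

cursor.close()
connection.close()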

The complete release notes are here.

My favorite feature is the installation improvement. (Disclaimer: I contributed the initial implementation!) With this change, Instant Client RPMs on Linux can now be used. The best bit is that cx_Oracle will automatically locate Instant Client and will then also automatically build using rpath. Installing cx_Oracle on Linux is now as simple as installing the Instant Client Basic & SDK RPMs and running 'pip install cx_Oracle'. There is no need to set ORACLE_HOME during installation and no need to set LD_LIBRARY_PATH at runtime. If you have a Linux ULN support subscription you can install Instant Client via yum, which makes it even simpler.
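
To illustrate, once the Basic and SDK RPMs are in place and 'pip install cx_Oracle' has run, a first test can be as small as this sketch (the user, password and EZConnect string are placeholders):

import cx_Oracle

# Placeholders: substitute your own credentials and EZConnect string.
connection = cx_Oracle.connect("hr", "hrpassword", "dbhost/orcl")
cursor = connection.cursor()
cursor.execute("select to_char(sysdate) from dual")
print(cursor.fetchone()[0])
cursor.close()
connection.close()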

Users of Database Resident Connection Pooling will like the connection.close() fix, since it allows DRCP to be used effectively without requiring a cx_Oracle session pool.
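
A hedged sketch of what that looks like; the ':pooled' EZConnect suffix assumes DRCP is started on the server, and the connection class name is made up for the example:

import cx_Oracle

# ":pooled" routes the connection to the DRCP pool; "MYAPP" is an
# arbitrary connection class name chosen for this example.
connection = cx_Oracle.connect("hr", "hrpassword", "dbhost/orcl:pooled",
                               cclass="MYAPP",
                               purity=cx_Oracle.ATTR_PURITY_SELF)
cursor = connection.cursor()
cursor.execute("select user from dual")
print(cursor.fetchone()[0])

# With the 5.2 fix, close() releases the session back to the DRCP pool.
connection.close()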

In summary the cx_Oracle 5.2 release adds great features to the already impressive Oracle Database support available to Python applications. Application development and deployment just got better.

Thanks again to Anthony, Avinash and all the contributors who have made cx_Oracle so good.

12c Parallel Execution New Features: 1 SLAVE distribution

Randolf Geist - Mon, 2015-06-22 14:39
When certain SQL features get used in pre-12c versions that force non-parallel evaluation, like ROWNUM or certain analytic functions like LAG/LEAD, then - depending on the overall plan shape - Oracle can start to decompose the parallel execution plan into several so-called DFO trees (if you want to learn more about DFO trees and DFOs I recommend watching my online tutorial on my Youtube channel).

Now having multiple DFO trees in a single parallel execution plan comes with several side effects that are confusing and complicate matters unnecessarily: each DFO tree allocates its own PX slave set(s), so each one can potentially end up with a different DOP, which means you can have more than one DOP in a single parallel execution plan.

Depending on the overall plan shape this might also mean that a DFO tree gets started multiple times, and each time it is started / completed PX slaves need to be allocated and de-allocated, potentially causing significant overhead for coordination activity that is not directly related to the actual execution.

Having multiple DFO trees can also lead to a situation where (a lot) more PX slaves are allocated than expected, in case multiple DFO trees are active at the same time - which in turn means that if you believe you can limit the number of PX slaves allocated by a single parallel execution using Resource Manager directives, you might be wrong.
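
One way to verify how many PX slaves are really allocated at a given moment is to count the slave sessions per query coordinator in V$PX_SESSION. A small sketch, using Python and cx_Oracle purely for convenience (the credentials are placeholders and the session needs access to the V$ views):

import cx_Oracle

# Placeholders: a user with privileges on V$PX_SESSION.
connection = cx_Oracle.connect("system", "manager", "dbhost/orcl")
cursor = connection.cursor()

# Slave sessions have SID <> QCSID; the coordinator's own row is excluded.
cursor.execute("""
    select qcsid, count(*)
    from v$px_session
    where sid <> qcsid
    group by qcsid""")
for qcsid, slaves in cursor:
    print("QC SID %d currently uses %d PX slaves" % (qcsid, slaves))

connection.close()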

Since all of these are undesirable side effects, starting with release 12c Oracle has put effort into new features that minimize the need for such a decomposition into multiple DFO trees. One of these new features is the so-called "1 SLAVE" distribution method, which can get used when such a non-parallel evaluation is required.

Quite similar to the recently described "PX SELECTOR" operator, the "1 SLAVE" distribution uses a single PX slave out of a slave set to execute the non-parallel operations instead of the Query Coordinator. The main difference this makes is that the parallel and serial operations now remain part of the same DFO tree, instead of having parts of the execution plan executed by the Query Coordinator, with different DFO trees before and after such serial operations.

Let's have a look at a simple example to demonstrate the new feature. I use here three identical tables, just for the sake of being able to differentiate the tables in the plan output - in principle re-using a single table three times would be sufficient.

create table t2
compress
as
select
        (rownum * 2) + 1 as id
      , mod(rownum, 2000) + 1 as id2
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(100000) */ * from dual
         connect by level <= 100000) a,
        (select /*+ cardinality(20) */ * from dual
         connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't2')

create table t4
compress
as
select
        (rownum * 2) + 1 as id
      , mod(rownum, 2000) + 1 as id2
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(100000) */ * from dual
         connect by level <= 100000) a,
        (select /*+ cardinality(20) */ * from dual
         connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't4')

create table t6
compress
as
select
        (rownum * 2) + 1 as id
      , mod(rownum, 2000) + 1 as id2
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(100000) */ * from dual
         connect by level <= 100000) a,
        (select /*+ cardinality(20) */ * from dual
         connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't6')

explain plan for
select /*+ no_merge(x) */
        *
from
        (
          select /*+ parallel(t6 4)
                     --optimizer_features_enable('11.2.0.4')
                 */
                  *
          from
                  (
                    select /*+ parallel(t2 4) */
                            --lag(id) over (order by id) as v1_rn
                            rownum as v1_rn
                          , t2.id as v1_id
                          , t2.filler as v1_filler
                    from
                            t2
                  ) v1
                , (
                    select /*+ parallel(t4 2) */
                            --lag(id) over (order by id) as v2_rn
                            rownum as v2_rn
                          , t4.id as v2_id
                          , t4.filler as v2_filler
                    from
                            t4
                  ) v2
                , t6
          where
                  v1_id = v2_id
          and     v1_id = t6.id
        ) x
where
        rownum > 1
;

-- 11.2.0.4 plan shape
----------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | COUNT | | | | |
|* 2 | FILTER | | | | |
| 3 | PX COORDINATOR | | | | |
| 4 | PX SEND QC (RANDOM) | :TQ30002 | Q3,02 | P->S | QC (RAND) |
| 5 | VIEW | | Q3,02 | PCWP | |
|* 6 | HASH JOIN | | Q3,02 | PCWP | |
| 7 | PX RECEIVE | | Q3,02 | PCWP | |
| 8 | PX SEND HASH | :TQ30001 | Q3,01 | P->P | HASH |
| 9 | PX BLOCK ITERATOR | | Q3,01 | PCWC | |
| 10 | TABLE ACCESS FULL | T6 | Q3,01 | PCWP | |
| 11 | BUFFER SORT | | Q3,02 | PCWC | |
| 12 | PX RECEIVE | | Q3,02 | PCWP | |
| 13 | PX SEND HASH | :TQ30000 | | S->P | HASH |
|* 14 | HASH JOIN | | | | |
| 15 | VIEW | | | | |
| 16 | COUNT | | | | |
| 17 | PX COORDINATOR | | | | |
| 18 | PX SEND QC (RANDOM)| :TQ10000 | Q1,00 | P->S | QC (RAND) |
| 19 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 20 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 21 | VIEW | | | | |
| 22 | COUNT | | | | |
| 23 | PX COORDINATOR | | | | |
| 24 | PX SEND QC (RANDOM)| :TQ20000 | Q2,00 | P->S | QC (RAND) |
| 25 | PX BLOCK ITERATOR | | Q2,00 | PCWC | |
| 26 | TABLE ACCESS FULL| T4 | Q2,00 | PCWP | |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM>1)
6 - access("V1_ID"="T6"."ID")
14 - access("V1_ID"="V2_ID")

-- 12.1.0.2 plan shape
---------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | COUNT | | | | |
|* 2 | FILTER | | | | |
| 3 | PX COORDINATOR | | | | |
| 4 | PX SEND QC (RANDOM) | :TQ10004 | Q1,04 | P->S | QC (RAND) |
| 5 | VIEW | | Q1,04 | PCWP | |
|* 6 | HASH JOIN BUFFERED | | Q1,04 | PCWP | |
| 7 | PX RECEIVE | | Q1,04 | PCWP | |
| 8 | PX SEND HASH | :TQ10002 | Q1,02 | P->P | HASH |
| 9 | PX BLOCK ITERATOR | | Q1,02 | PCWC | |
| 10 | TABLE ACCESS FULL | T6 | Q1,02 | PCWP | |
| 11 | PX RECEIVE | | Q1,04 | PCWP | |
| 12 | PX SEND HASH | :TQ10003 | Q1,03 | S->P | HASH |
|* 13 | HASH JOIN BUFFERED | | Q1,03 | SCWC | |
| 14 | VIEW | | Q1,03 | SCWC | |
| 15 | COUNT | | Q1,03 | SCWP | |
| 16 | PX RECEIVE | | Q1,03 | SCWP | |
| 17 | PX SEND 1 SLAVE | :TQ10000 | Q1,00 | P->S | 1 SLAVE |
| 18 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 19 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 20 | VIEW | | Q1,03 | SCWC | |
| 21 | COUNT | | Q1,03 | SCWP | |
| 22 | PX RECEIVE | | Q1,03 | SCWP | |
| 23 | PX SEND 1 SLAVE | :TQ10001 | Q1,01 | P->S | 1 SLAVE |
| 24 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 25 | TABLE ACCESS FULL| T4 | Q1,01 | PCWP | |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM>1)
6 - access("V1_ID"="T6"."ID")
13 - access("V1_ID"="V2_ID")

Let's start with the 11.2.0.4 plan shape: we can see from the multiple occurrences of the PX COORDINATOR operator, the TQ column and the corresponding TQ entries in the NAME column that this parallel execution plan consists of three DFO trees. When analyzing the actual run-time activity it would also become obvious that the different DFO trees run at a different DOP and use different PX slave sets. In this particular case it would even become obvious that two DFO trees are active at the same time - so this pretty much demonstrates all the undesirable side effects of having multiple DFO trees that I mentioned above.

The HASH JOIN between V1 and V2 runs serially, whereas the HASH JOIN between T6 and the join result of V1 and V2 runs parallel. Since we have a Serial->Parallel distribution between these two HASH JOINs, an additional BUFFER SORT operation gets added - as outlined in the PX SELECTOR note.

If we now look at the 12.1.0.2 plan shape, we notice that the execution plan consists of a single DFO tree, and hence all the side effects I've outlined are gone. The serial evaluation is done by a single PX slave of one set, again indicated by the SCWC/SCWP decorators, similar to the PX SELECTOR operator.

However, the plan shape also demonstrates one possible disadvantage of the new feature: since everything is now part of a single DFO tree, we end up with more re-distributions per DFO tree, and hence might have an increased demand for the dreaded BUFFERing that prevents multiple re-distributions from being active at the same time. Here this can be seen at operation ID 13: the HASH JOIN executed serially by one PX slave is turned into a HASH JOIN BUFFERED (the worst case for a (not really) "parallel" hash join - executed by only a single PX slave and needing to buffer the right-hand row source in addition), and the second HASH JOIN is also turned into its BUFFERED variant. Compared to the 11.2.0.4 plan shape, which contains no HASH JOIN BUFFERED but an additional BUFFER SORT operation, the 12.1.0.2 plan shape with this data pattern therefore requires more PGA / TEMP space, due to the double buffering now necessary.

Footnote
The new 1 SLAVE distribution doesn't always get used in 12c. If you run just the join between V1 and V2, for example, then the old plan shape will be used, and there are again multiple DFO trees. Furthermore, in this particular case, when you start changing the DOP used in the PARALLEL hints, you might also end up with a plan shape where one view uses the 1 SLAVE distribution whereas the other one uses the old plan shape with Query Coordinator activity - I haven't investigated further why this happens.

If Analytic Functions get used, you might also see a "1 SLAVE (ORDER)" variation of the distribution that enforces a certain order when re-distributing the data, similar to the "PX SEND QC (ORDER)" operator.

Log Buffer #428: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-06-22 11:45

The Log Buffer Edition once again is sparkling with some gems, hand-picked from Oracle, SQL Server and MySQL.

Oracle:

  • Oracle GoldenGate 12.1.2.1.1 is now certified with Teradata Unity 14.10. With this certification, customers can use Oracle GoldenGate to deliver data to Teradata Unity, which can then automate the distribution of data to multiple Teradata databases.
  • How do I change DNS servers on Exadata storage servers.
  • Flushing Shared Pool Does Not Slow Its Growth.
  • Code completion is the key feature you need when adding support for your own JavaScript framework to NetBeans IDE.
  • Replicating Hive Data Into Oracle BI Cloud Service for Visual Analyzer using BICS Data Sync.

SQL Server:

  • Trigger an Email of an SSRS Report from an SSIS Package.
  • Script All Server Level Objects to Recreate SQL Server.
  • A Syntax Mystery in a Previously Working Procedure.
  • Using R to Explore Data by Analysis – for SQL Professionals.
  • Converting Rows to Columns (PIVOT) and Columns to Rows (UNPIVOT) in SQL Server.

MySQL:

  • Some applications, particularly those written with a single-node database server in mind, attempt to immediately read a value they have just inserted into the database, without making those operations part of a single transaction. A read/write splitting proxy or a connection pool combined with a load-balancer can direct each operation to a different database node.
  • Q&A: High availability when using MySQL in the cloud.
  • MariaDB 10.0.20 now available.
  • Removal and Deprecation in MySQL 5.7.
  • Getting EXPLAIN information from already running queries in MySQL 5.7.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

Categories: DBA Blogs

Cost of PeopleSoft vs SaaS and Other Options

PeopleSoft Technology Blog - Mon, 2015-06-22 11:32

There is a lot of publicity around SaaS and various cloud options for enterprise applications these days. This leaves PeopleSoft customers wondering whether they should stay with PeopleSoft and what the real cost comparisons are. Is SaaS really less expensive for them? Is it really the best option for their business? What are the factors they need to consider? Can they stay with PeopleSoft and run it in the cloud, getting the best of both worlds?

Quest, the international user group organization, recently published an informative article titled How Much Does Your ERP Cost? In this short article, the author provides ways to consider cost but, importantly, also examines the value of these systems to the enterprise and why that should be a factor in your decision. The author interviews several customers in assessing these questions. Worth a read.

Server refused public-key signature despite accepting key!

Vikram Das - Mon, 2015-06-22 11:23
A new SFTP connection was not working, even though everything looked fine:

1. Permissions were correct on directories:
chmod go-w $HOME/
chmod 700 $HOME/.ssh
chmod 600 $HOME/.ssh/authorized_keys
chmod 600 $HOME/.ssh/id_rsa
chmod 644 $HOME/.ssh/id_rsa.pub
chmod 644 $HOME/.ssh/known_hosts

2. Keys were correctly placed
However, it still asked for a password whenever an SFTP connection was made:
Using username "sftpuser".Authenticating with public key "rsa-key-20150214"Server refused public-key signature despite accepting key!Using keyboard-interactive authentication.Password:
I tried various things, none worked and I eventually went back to my notes for SFTP troubleshooting:
1. Correct permissions:
chmod go-w $HOME/
chmod 700 $HOME/.ssh
chmod 600 $HOME/.ssh/authorized_keys
chmod 600 $HOME/.ssh/id_rsa
chmod 644 $HOME/.ssh/id_rsa.pub
chmod 644 $HOME/.ssh/known_hosts
2. Make sure the owner:group on the directories and files is correct:
ls -ld  $HOME/
ls -ld  $HOME/.ssh
ls -ltr $HOME/.ssh
3. Log in as root and correct the ownership:
chown user:group $HOME
chown user:group $HOME/.ssh
chown user:group $HOME/.ssh/authorized_keys
chown user:group $HOME/.ssh/id_rsa
chown user:group $HOME/.ssh/id_rsa.pub
chown user:group $HOME/.ssh/known_hosts
4. Check for user entries in /etc/passwd and /etc/shadow
5. grep user /etc/shadow
When I did the 5th step, I found that the /etc/shadow entry for the user didn't exist. So I did these steps:
chmod 600 /etc/shadow
vi /etc/shadow
Insert this new line at the end:
sftpuser:UP:::::::
Save the file.
chmod 400 /etc/shadow
It started working after that.
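
As a side note, a key-based login can also be verified from a script, which avoids second-guessing the interactive client messages. Here is a minimal sketch using the paramiko library; the host name and key path are placeholders:

import paramiko

# Placeholders: substitute your SFTP host and private key location.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("sftphost", username="sftpuser",
               key_filename="/home/sftpuser/.ssh/id_rsa")

# If public-key authentication had failed, connect() would have raised
# paramiko.AuthenticationException instead of getting this far.
sftp = client.open_sftp()
print(sftp.listdir("."))
sftp.close()
client.close()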
Categories: APPS Blogs

Buffer pool advisory in AWR

Yann Neuhaus - Mon, 2015-06-22 07:45

In Oracle memory advisors: how relevant? I said that advisors calculate their recommendations from statistics cumulated since the beginning of the instance, even in AWR, which is supposed to cover only a short period. Here is a quick test on the buffer pool advisory to validate that assumption.

I'm running the following query to compare the 'physical reads cache' statistic from DBA_HIST_SYSSTAT with the value recorded by the advisor in DBA_HIST_DB_CACHE_ADVICE:

SQL> column sysstat_value format 999G999G999G999
SQL> column advisor_value format 999G999G999G999
SQL> select snap_id,sysstat_value,advisor_value from
-- physical reads cache
(select snap_id,dbid,instance_number,stat_name,value sysstat_value from dba_hist_sysstat where stat_name like 'physical reads cache')
natural join
--  ACTUAL_PHYSICAL_READS
(select snap_id,dbid,instance_number,'ADVISOR',actual_physical_reads advisor_value from DBA_HIST_DB_CACHE_ADVICE where size_factor=1 and name='DEFAULT')
order by 1 desc,2,3;

Here is the result, where I can see that the two counters move together for all snapshots I have in history:

   SNAP_ID    SYSSTAT_VALUE    ADVISOR_VALUE
---------- ---------------- ----------------
      3025      708,373,759      731,359,811
      3024      708,364,027      731,350,072
      3023      708,284,582      731,270,631
      3022      708,281,965      731,268,020
      3021      708,280,406      731,266,424
      3020      708,252,249      731,238,240
...
      2133       45,538,775       46,930,580
      2132       45,533,062       46,924,865
      2131       30,030,094       31,423,247
      2130          138,897          138,406
      2129          125,126          124,637
      2128          114,556          114,052
      2127          113,455          112,959
      2126          112,378          111,890
      2125          111,179          110,682
      2124          106,701          106,197
      2123          104,782          104,287
      2122           59,071           58,578
      2121           57,972           57,476
...
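
The absolute values differ by a nearly constant offset, but the per-snapshot deltas are almost identical, which is consistent with both counters being cumulated since instance startup. A quick sketch, with a few values hard-coded from the listing above:

# (snap_id, 'physical reads cache' from DBA_HIST_SYSSTAT,
#  ACTUAL_PHYSICAL_READS from DBA_HIST_DB_CACHE_ADVICE)
rows = [
    (3025, 708373759, 731359811),
    (3024, 708364027, 731350072),
    (3023, 708284582, 731270631),
    (3022, 708281965, 731268020),
]

# Compare per-snapshot deltas of the two cumulative counters.
for (s1, v1, a1), (s0, v0, a0) in zip(rows, rows[1:]):
    print("snap %d -> %d: sysstat delta %7d, advisor delta %7d"
          % (s0, s1, v1 - v0, a1 - a0))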

I'm not sure about the three columns available in that view (PHYSICAL_READS, BASE_PHYSICAL_READS and ACTUAL_PHYSICAL_READS), so let's check that the one I used is the one that is displayed in an AWR report. Here is the latest report, for snapshots 3024 to 3025:

[Screenshot: Buffer Pool Advisory section of the AWR report]

Here is how I verified my assumptions, on an instance that has been running for a long time. When you look at the advisor recommendations, you should know whether the activity since instance startup is relevant or not. And I don't know how to reset the counters (except with an instance restart).

Oracle BPEL & BPM 12c Set Flow Instance Title

Jan Kettenis - Mon, 2015-06-22 07:44
In this article I describe how to set the instance title for a composite in Oracle BPEL or BPM 12c.

Sometimes little, annoying things that are fixed with a new release can give great joy. With 11g you could set the composite instance title using the (advanced) XPath function setCompositeInstanceTitle(). This helps to find or identify instances in Enterprise Manager. However, for high-volume composites you may also want to configure in-memory optimization, by setting the bpel.config.inMemoryOptimization and bpel.config.auditLevel properties on the BPEL process in the composite.xml.

The problem is that when you configure in-memory optimization, setCompositeInstanceTitle() fails because there is a relation with auditing.

In 12c the composite instance no longer has such a prominent role in the SOA/BPM Suite; instead, the flow instance now has that role. With that, setCompositeInstanceTitle() has been deprecated and setFlowInstanceTitle() should be used instead.

You can set the flow instance title by adding a script activity right after the Receive, with an assign to some dummy string variable that uses setFlowInstanceTitle(). To make it work I had to wrap the string variable in a string() function.
Unlike setCompositeInstanceTitle(), with setFlowInstanceTitle() you can configure in-memory optimization and still display the title in Enterprise Manager.

Setting the title for a BPM flow instance can be done using the same XPath function, using it in an assignment to some dummy variable in the start event.

Now Launching Oracle Process Cloud Service

WebCenter Team - Mon, 2015-06-22 06:45

Announcing Oracle Process Cloud Service - Empowering Business Users with Process Automation

Oracle Process Cloud Service equips your line of business to deliver on digital automation of process applications, independent of IT. With a zero-code, cloud-based infrastructure, Oracle Process Cloud Service empowers the business user to manage the entire lifecycle of business processes and gain actionable insight into process health and SLAs. This takes business empowerment to a new level of automation and efficiency.

Join us in Making History

We are excited that Larry Ellison will be breaking ground with Oracle Process Cloud Service on Monday, June 22. Join us in person or online - we have an exciting lineup of customers, hands-on demos, product experts and more.

[Event |Redwood Shores] Jun 22 Integrate, Accelerate, Lead with Oracle Cloud Platform (link)
[Event |Webcast] Jun 22 Larry Ellison Unveils New Oracle Cloud Platform Services (link)

How to Learn More on Oracle Process Cloud Service?

[BPM-CON Webcast] Rapid Process Automation on the Cloud (link)
[Solution Brief] A New Approach to Digital Transformation (link)
[Datasheet]  Productive Work Management in the Cloud (link)
[Video] Introducing Oracle Process Cloud Service (link)
[Infographic] Empowering the Business to Deliver at Market Speed (link)
[eBook] Rapidly Design, Automate and Manage Processes on the Cloud (link)
[Video] Technical dive into Process Cloud Service (link)

Testimonials: PCS Customers and Partners in Action 

[Video] Land O Lakes Extends SaaS with Oracle Process Cloud Service (link)
[Video] Flextronics Speeds up BPM with Oracle Process Cloud Service (link)
[Video] Avio Empowers Business with Oracle Process Cloud Service (link)
[Video] eProseed Digitizes with Oracle Process Cloud Service (link)
[Video] Sofbang Extends SOA with Oracle Process Cloud Service (link)
[Video] Hitachi Manages Contracts with Oracle Process Cloud Service (link)

Publications

[Analyst Report] Ovum: Oracle Brings Process to the Cloud (link)
[Byline] Forbes: What IT Pros Must do to Survive, Thrive Amid Rapid Change (link)

Events

[Event |Webcast] Jun 19  Oracle Public Cloud: Platform as a Service (link)
[Event |Webcast] Jun 22  Larry Ellison Unveils New Oracle Cloud Platform Services (link)
[Event |Redwood Shores] Jun 22 Integrate, Accelerate, Lead with Oracle Cloud Platform (link)
[More Events]  BPM Events: Click here

To learn more, join us at cloud.oracle.com/process. Follow us on Twitter @OracleBPM.

Index Tree Dumps in Oracle 12c Database (New Age)

Richard Foote - Sun, 2015-06-21 23:56
I’ve previously discussed Index Tree Dumps but I’ve recently found a nice little improvement that’s been introduced in Oracle Database 12c. Let’s begin by creating a little table and index: To generate an Index Tree Dump, we first need the OBJECT_ID of the index: And then use it to generate the Index Tree Dump: Previously, an […]
Categories: DBA Blogs

ODTUG KScope15: Sunday Symposium

Luc Bors - Sun, 2015-06-21 14:14