Hemant K Chitale

I am an Oracle Database Specialist in Singapore.
get an rss feed of this blog at http://hemantoracledba.blogspot.com/feeds/posts/default?alt=rss
follow me on twitter : @HemantKChitale

12cR1 RAC Posts -- 4 : Adding a Disk of a different size

Sat, 2017-01-21 10:44
How does 12.1.0.2 ASM handle adding a disk of a different size to an existing DiskGroup ?

I currently have 4 disks of 5GB each presented to ASM, three of them assigned across 2 DiskGroups

[oracle@collabn1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 12.1.0.2.0 Production on Sat Jan 21 23:48:00 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, disk_number, name, state, total_mb
2 from v$asm_disk
3 order by 1,2,3
4 /

GROUP_NUMBER DISK_NUMBER NAME                           STATE      TOTAL_MB
------------ ----------- ------------------------------ -------- ----------
           0           0                                NORMAL            0
           1           0 DATA_0000                      NORMAL         5114
           1           1 DATA_0001                      NORMAL         5114
           2           0 FRA_0000                       NORMAL         5114

SQL>
SQL> select group_number, name
2 from v$asm_diskgroup
3 order by 1
4 /

GROUP_NUMBER NAME
------------ ------------------------------
1 DATA
2 FRA

SQL>


The DATA DiskGroup has 2 disks of 5GB each and the FRA DiskGroup has 1 disk of 5GB.  One disk (the row with GROUP_NUMBER=0) is not yet assigned to any DiskGroup.
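
(As an aside, not part of the original listing above : the HEADER_STATUS column of V$ASM_DISK is a quick way to tell member disks apart from unassigned candidates, and OS_MB shows the size the OS reports even before a disk is added to a DiskGroup.  A minimal sketch :)

select path, header_status, os_mb, total_mb
  from v$asm_disk
 order by header_status, path
/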

What happens if I try to expand the DATA DiskGroup with a larger Disk of 12GB ?

[root@collabn1 dev]# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xff8b0ab7.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u').

Command (m for help): m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

Command (m for help): p

Disk /dev/sdf: 12.9 GB, 12884901888 bytes
255 heads, 63 sectors/track, 1566 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xff8b0ab7

Device Boot Start End Blocks Id System

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1566, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-1566, default 1566):
Using default value 1566

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@collabn1 dev]#
[root@collabn1 dev]# /sbin/scsi_id -g -u -d /dev/sdf
1ATA_VBOX_HARDDISK_VB535deca9-9a295efe
[root@collabn1 dev]#
[root@collabn1 dev]# /sbin/scsi_id -g -u -d /dev/sdf
1ATA_VBOX_HARDDISK_VB535deca9-9a295efe
[root@collabn1 dev]# cd /etc/udev/rules.d
[root@collabn1 rules.d]# vi 99-oracle-asmdevices.rules
[root@collabn1 rules.d]# tail -1 99-oracle-asmdevices.rules
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB535deca9-9a295efe", NAME="asm-disk5", OWNER="oracle", GROUP="dba", MODE="0660"
[root@collabn1 rules.d]#
[root@collabn1 rules.d]# /sbin/partprobe /dev/sdf1
[root@collabn1 rules.d]# /sbin/udevadm test /block/sdb/sdf1
run_command: calling: test
udevadm_test: version 147
This program is for debugging only, it does not run any program,
specified by a RUN key. It may show incorrect results, because
some values may be different, or not available at a simulation run.

parse_file: reading '/lib/udev/rules.d/10-console.rules' as rules file
parse_file: reading '/lib/udev/rules.d/10-dm.rules' as rules file
parse_file: reading '/lib/udev/rules.d/11-dm-lvm.rules' as rules file
parse_file: reading '/lib/udev/rules.d/13-dm-disk.rules' as rules file
parse_file: reading '/lib/udev/rules.d/40-isdn.rules' as rules file
parse_file: reading '/lib/udev/rules.d/40-redhat.rules' as rules file
parse_file: reading '/lib/udev/rules.d/42-qemu-usb.rules' as rules file
parse_file: reading '/lib/udev/rules.d/50-firmware.rules' as rules file
parse_file: reading '/lib/udev/rules.d/50-udev-default.rules' as rules file
parse_file: reading '/etc/udev/rules.d/55-usm.rules' as rules file
parse_file: reading '/lib/udev/rules.d/60-alias-kmsg.rules' as rules file
parse_file: reading '/lib/udev/rules.d/60-cdrom_id.rules' as rules file
parse_file: reading '/etc/udev/rules.d/60-fprint-autosuspend.rules' as rules file
parse_file: reading '/lib/udev/rules.d/60-net.rules' as rules file
parse_file: reading '/etc/udev/rules.d/60-pcmcia.rules' as rules file
parse_file: reading '/lib/udev/rules.d/60-persistent-alsa.rules' as rules file
parse_file: reading '/lib/udev/rules.d/60-persistent-input.rules' as rules file
parse_file: reading '/lib/udev/rules.d/60-persistent-serial.rules' as rules file
parse_file: reading '/lib/udev/rules.d/60-persistent-storage-tape.rules' as rules file
parse_file: reading '/lib/udev/rules.d/60-persistent-storage.rules' as rules file
parse_file: reading '/lib/udev/rules.d/60-persistent-v4l.rules' as rules file
parse_file: reading '/etc/udev/rules.d/60-raw.rules' as rules file
parse_file: reading '/etc/udev/rules.d/60-vboxadd.rules' as rules file
parse_file: reading '/lib/udev/rules.d/61-mobile-action.rules' as rules file
parse_file: reading '/lib/udev/rules.d/61-option-modem-modeswitch.rules' as rules file
parse_file: reading '/lib/udev/rules.d/61-persistent-storage-edd.rules' as rules file
parse_file: reading '/lib/udev/rules.d/64-device-mapper.rules' as rules file
parse_file: reading '/lib/udev/rules.d/64-md-raid.rules' as rules file
parse_file: reading '/lib/udev/rules.d/65-md-incremental.rules' as rules file
parse_file: reading '/lib/udev/rules.d/69-dm-lvm-metad.rules' as rules file
parse_file: reading '/lib/udev/rules.d/70-acl.rules' as rules file
parse_file: reading '/lib/udev/rules.d/70-cups-libusb.rules' as rules file
parse_file: reading '/lib/udev/rules.d/70-hid2hci.rules' as rules file
parse_file: reading '/etc/udev/rules.d/70-persistent-cd.rules' as rules file
parse_file: reading '/etc/udev/rules.d/70-persistent-net.rules' as rules file
parse_file: reading '/lib/udev/rules.d/71-biosdevname.rules' as rules file
parse_file: reading '/lib/udev/rules.d/75-cd-aliases-generator.rules' as rules file
parse_file: reading '/lib/udev/rules.d/75-net-description.rules' as rules file
parse_file: reading '/lib/udev/rules.d/75-persistent-net-generator.rules' as rules file
parse_file: reading '/lib/udev/rules.d/75-tty-description.rules' as rules file
parse_file: reading '/lib/udev/rules.d/78-sound-card.rules' as rules file
parse_file: reading '/lib/udev/rules.d/79-fstab_import.rules' as rules file
parse_file: reading '/lib/udev/rules.d/80-drivers.rules' as rules file
parse_file: reading '/lib/udev/rules.d/80-iosched.rules' as rules file
parse_file: reading '/lib/udev/rules.d/80-mpath-iosched.rules' as rules file
parse_file: reading '/lib/udev/rules.d/85-regulatory.rules' as rules file
parse_file: reading '/lib/udev/rules.d/88-clock.rules' as rules file
parse_file: reading '/lib/udev/rules.d/89-microcode.rules' as rules file
parse_file: reading '/etc/udev/rules.d/90-alsa.rules' as rules file
parse_file: reading '/lib/udev/rules.d/90-btrfs.rules' as rules file
parse_file: reading '/etc/udev/rules.d/90-hal.rules' as rules file
parse_file: reading '/lib/udev/rules.d/91-drm-modeset.rules' as rules file
parse_file: reading '/lib/udev/rules.d/95-dm-notify.rules' as rules file
parse_file: reading '/lib/udev/rules.d/95-keyboard-force-release.rules' as rules file
parse_file: reading '/lib/udev/rules.d/95-keymap.rules' as rules file
parse_file: reading '/lib/udev/rules.d/95-udev-late.rules' as rules file
parse_file: reading '/etc/udev/rules.d/98-kexec.rules' as rules file
parse_file: reading '/etc/udev/rules.d/99-oracle-asmdevices.rules' as rules file
parse_file: reading '/dev/.udev/rules.d/99-root.rules' as rules file
udev_rules_new: rules use 32448 bytes tokens (2704 * 12 bytes), 19085 bytes buffer
udev_rules_new: temporary index used 19500 bytes (975 * 20 bytes)
unable to open device '/sys/block/sdb/sdf1'
[root@collabn1 rules.d]#
[root@collabn1 rules.d]# /sbin/udevadm control --reload-rules
[root@collabn1 rules.d]# /sbin/start_udev
Starting udev: [ OK ]
[root@collabn1 rules.d]#
[root@collabn1 rules.d]# ls -l /dev/asm*
brw-rw----. 1 oracle dba 8, 17 Jan 22 00:07 /dev/asm-disk1
brw-rw----. 1 oracle dba 8, 33 Jan 22 00:07 /dev/asm-disk2
brw-rw----. 1 oracle dba 8, 49 Jan 22 00:07 /dev/asm-disk3
brw-rw----. 1 oracle dba 8, 65 Jan 22 00:05 /dev/asm-disk4
brw-rw----. 1 oracle dba 8, 81 Jan 22 00:05 /dev/asm-disk5


So I now have asm-disk5 as the new ASM Disk.  Let me try to add this disk.

SQL> set pages600
SQL> select group_number, disk_number, name, path, total_mb
2 from v$asm_disk
3 order by 1,2
4 /

GROUP_NUMBER DISK_NUMBER NAME
------------ ----------- ------------------------------
PATH
--------------------------------------------------------------------------------
TOTAL_MB
----------
0 0
/dev/asm-disk5
0

0 1
/dev/asm-disk4
0

1 0 DATA_0000
/dev/asm-disk1
5114

1 1 DATA_0001
/dev/asm-disk2
5114

2 0 FRA_0000
/dev/asm-disk3
5114


SQL>
SQL> alter diskgroup data add disk '/dev/asm-disk5';

Diskgroup altered.

SQL>
SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 order by 1,2
4 /

GROUP_NUMBER NAME TOTAL_MB
------------ ------------------------------ ----------
1 DATA 22512
2 FRA 5114

SQL>
SQL> select group_number, name, type
2 from v$asm_diskgroup
3 order by 1,2
4 /

GROUP_NUMBER NAME TYPE
------------ ------------------------------ ------
1 DATA EXTERN
2 FRA EXTERN

SQL>
SQL> select group_number, name, compatibility, database_compatibility
2 from v$asm_diskgroup
3 order by 1
4 /

GROUP_NUMBER NAME
------------ ------------------------------
COMPATIBILITY
------------------------------------------------------------
DATABASE_COMPATIBILITY
------------------------------------------------------------
1 DATA
12.1.0.0.0
10.1.0.0.0

2 FRA
12.1.0.0.0
10.1.0.0.0


SQL>
SQL> select group_number, disk_number, name, path, total_mb
2 from v$asm_disk
3 order by 1,2,3
4 /

GROUP_NUMBER DISK_NUMBER NAME
------------ ----------- ------------------------------
PATH
--------------------------------------------------------------------------------
TOTAL_MB
----------
0 0
/dev/asm-disk4
0

1 0 DATA_0000
/dev/asm-disk1
5114

1 1 DATA_0001
/dev/asm-disk2
5114

1 2 DATA_0002
/dev/asm-disk5
12284

2 0 FRA_0000
/dev/asm-disk3
5114


SQL>


According to Oracle Support Document 1938950.1, adding a disk of a different size to an existing DiskGroup fails with error ORA-15410 in 12.1.0.2.  However, that restriction seems to apply only to NORMAL or HIGH Redundancy DiskGroups with COMPATIBLE.ASM set to 12.1.0.2.  Here, I have EXTERNAL Redundancy and COMPATIBLE.ASM of 12.1.0.0.0, which is why the ADD DISK above succeeded.

Do I recommend Disks of different sizes ?  Absolutely *not* in Production.  This is a "play" environment in Virtual Machines on my desktop that I can destroy and recreate anytime.  I can monitor disk usage as well.
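
Since I mention monitoring, here is a sketch of the checks I could use (my own addition, not part of the demo above).  With External Redundancy and unequal disk sizes, the per-disk FREE_MB in V$ASM_DISK shows how evenly ASM has spread the data, and V$ASM_OPERATION shows any rebalance still running after the ADD DISK :

select d.name, d.total_mb, d.free_mb,
       round(100*(d.total_mb - d.free_mb)/d.total_mb,2) pct_used
  from v$asm_disk d, v$asm_diskgroup g
 where g.group_number = d.group_number
   and g.name = 'DATA'
 order by d.disk_number
/

select group_number, operation, state, power, sofar, est_work, est_minutes
  from v$asm_operation
/
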
.
.
.

Categories: DBA Blogs

12cR1 RAC Posts -- 3 : Convert PolicyManaged DB back to AdminManaged

Sun, 2017-01-15 20:35
In the previous post on 12cR1 RAC, I had converted my AdminManaged Database to PolicyManaged.

Here, I convert it back to AdminManaged.

First, I check the status of the database, stop it, and verify that it is shut down (note that I have only 1 node of the cluster currently up and running; I don't need both nodes and instances up).

[oracle@collabn1 ~]$ srvctl status database -d RAC
Instance RAC_1 is running on node collabn1
[oracle@collabn1 ~]$ srvctl stop database -d RAC
[oracle@collabn1 ~]$ ps -fuoracle |grep smon
oracle 3422 1 0 09:49 ? 00:00:00 asm_smon_+ASM1
oracle 4882 1 0 09:50 ? 00:00:00 mdb_smon_-MGMTDB
oracle 16889 9821 0 10:08 pts/0 00:00:00 grep smon
[oracle@collabn1 ~]$


Next, I remove the database from the Cluster Registry.

[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain: racattack
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MyPool
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances:
Configured nodes:
Database is policy managed
[oracle@collabn1 ~]$ srvctl remove database -d RAC
Remove the database RAC? (y/[n]) y
[oracle@collabn1 ~]$ srvctl config database -d RAC
PRCD-1120 : The resource for database RAC could not be found.
PRCR-1001 : Resource ora.rac.db does not exist
[oracle@collabn1 ~]$


I then remove the defined Server Pool that I used for this database.

[oracle@collabn1 ~]$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: MyPool
Importance: 100, Min: 1, Max: 2
Category:
Candidate server names: collabn1,collabn2
[oracle@collabn1 ~]$ srvctl remove srvpool -serverpool MyPool
[oracle@collabn1 ~]$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
[oracle@collabn1 ~]$


I then add the database back into the Cluster Registry.

[oracle@collabn1 ~]$ srvctl add database -d RAC \
> -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 \
> -pwfile +DATA/RAC/PASSWORD/pwdrac.277.931824933
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file: +DATA/RAC/PASSWORD/pwdrac.277.931824933
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances:
Configured nodes:
Database is administrator managed
[oracle@collabn1 ~]$


I start the second node of the cluster before I configure the instances (Note : I have the $ORACLE_HOME/dbs pfiles created in advance).

[oracle@collabn1 ~]$ srvctl add instance -d RAC -i RAC1 -n collabn1
[oracle@collabn1 ~]$ srvctl add instance -d RAC -i RAC2 -n collabn2
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file: +DATA/RAC/PASSWORD/pwdrac.277.931824933
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: RAC1,RAC2
Configured nodes: collabn1,collabn2
Database is administrator managed
[oracle@collabn1 ~]$


I am now ready to start the database (instances).

[oracle@collabn1 ~]$ srvctl start database -d RAC
[oracle@collabn1 ~]$ srvctl status database -d RAC
Instance RAC1 is running on node collabn1
Instance RAC2 is running on node collabn2
[oracle@collabn1 ~]$ ps -fuoracle |grep smon
oracle 3422 1 0 09:49 ? 00:00:00 asm_smon_+ASM1
oracle 4882 1 0 09:50 ? 00:00:00 mdb_smon_-MGMTDB
oracle 25431 1 0 10:30 ? 00:00:00 ora_smon_RAC1
oracle 27533 9821 0 10:33 pts/0 00:00:00 grep smon
[oracle@collabn1 ~]$
[root@collabn2 ~]# ps -fuoracle |grep smon
oracle 3460 1 0 10:19 ? 00:00:00 asm_smon_+ASM2
oracle 9561 1 0 10:30 ? 00:00:00 ora_smon_RAC2
[root@collabn2 ~]#
[oracle@collabn1 ~]$ env |grep SID
ORACLE_SID=RAC1
[oracle@collabn1 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 16 10:34:15 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> select * from v$containers;

CON_ID DBID CON_UID GUID
---------- ---------- ---------- --------------------------------
NAME OPEN_MODE RES
------------------------------ ---------- ---
OPEN_TIME
---------------------------------------------------------------------------
CREATE_SCN TOTAL_SIZE BLOCK_SIZE RECOVERY SNAPSHOT_PARENT_CON_ID
---------- ---------- ---------- -------- ----------------------
1 2519807290 1 FD9AC20F64D344D7E043B6A9E80A2F2F
CDB$ROOT READ WRITE NO
16-JAN-17 10.31.34.014 AM +08:00
0 0 8192 ENABLED 0


CON_ID DBID CON_UID GUID
---------- ---------- ---------- --------------------------------
NAME OPEN_MODE RES
------------------------------ ---------- ---
OPEN_TIME
---------------------------------------------------------------------------
CREATE_SCN TOTAL_SIZE BLOCK_SIZE RECOVERY SNAPSHOT_PARENT_CON_ID
---------- ---------- ---------- -------- ----------------------
2 2061548092 2061548092 44BB5E17F41A2618E053334EA8C006B9
PDB$SEED READ ONLY NO
16-JAN-17 10.31.34.859 AM +08:00
1594413 859832320 8192 ENABLED 0


CON_ID DBID CON_UID GUID
---------- ---------- ---------- --------------------------------
NAME OPEN_MODE RES
------------------------------ ---------- ---
OPEN_TIME
---------------------------------------------------------------------------
CREATE_SCN TOTAL_SIZE BLOCK_SIZE RECOVERY SNAPSHOT_PARENT_CON_ID
---------- ---------- ---------- -------- ----------------------
3 1857084550 1857084550 44BBC69CE8F552AEE053334EA8C07365
PDB MOUNTED

1755977 0 8192 ENABLED 0


SQL> alter pluggable database PDB open;

Pluggable database altered.

SQL>
SQL> select con_id, name, open_mode from v$pdbs;

CON_ID NAME OPEN_MODE
---------- ------------------------------ ----------
2 PDB$SEED READ ONLY
3 PDB READ WRITE

SQL>


Thus, I converted my PolicyManaged database to AdministratorManaged.
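
For quick reference, the whole conversion condenses to the commands already run above :

srvctl stop database -d RAC
srvctl remove database -d RAC
srvctl remove srvpool -serverpool MyPool
srvctl add database -d RAC \
   -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 \
   -pwfile +DATA/RAC/PASSWORD/pwdrac.277.931824933
srvctl add instance -d RAC -i RAC1 -n collabn1
srvctl add instance -d RAC -i RAC2 -n collabn2
srvctl start database -d RAC
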
.
.
.

Categories: DBA Blogs

Copying a Tablespace from NonCDB to a PDB (using TTS)

Sun, 2017-01-15 02:54
A Tablespace can be "transported"  from a NonCDB to a PDB as a way of copying the Tablespace.  Here I work with ASM as well.

First in the NonCDB :

[oracle@ora12102 Desktop]$ . oraenv
ORACLE_SID = [oracle] ? NONCDB
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12102 Desktop]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 15 16:03:43 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 402653184 bytes
Fixed Size 2924928 bytes
Variable Size 260050560 bytes
Database Buffers 134217728 bytes
Redo Buffers 5459968 bytes
Database mounted.
Database opened.
SQL> select file_name, bytes/1048576
2 from dba_data_files
3 where tablespace_name = 'EXAMPLE'
4 /

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA/NONCDB/DATAFILE/example.266.896482777
1243.75


SQL>
[oracle@ora12102 Desktop]$ expdp hemant/hemant \
> directory=data_pump_dir dumpfile=EXAMPLE_TTS.dmp \
> transport_tablespaces=EXAMPLE transport_full_check=Y

Export: Release 12.1.0.2.0 - Production on Sun Jan 15 16:08:27 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
Starting "HEMANT"."SYS_EXPORT_TRANSPORTABLE_01": hemant/******** directory=data_pump_dir dumpfile=EXAMPLE_TTS.dmp transport_tablespaces=EXAMPLE transport_full_check=Y
ORA-39123: Data Pump transportable tablespace job aborted
ORA-39185: The transportable tablespace failure list is

ORA-29335: tablespace 'EXAMPLE' is not read only
Job "HEMANT"."SYS_EXPORT_TRANSPORTABLE_01" stopped due to fatal error at Sun Jan 15 16:08:41 2017 elapsed 0 00:00:06

[oracle@ora12102 Desktop]$


The tablespace has to be set READ ONLY before we can use export to transport it (it should also remain READ ONLY while the data files are being copied).

[oracle@ora12102 Desktop]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 15 16:09:10 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> alter tablespace example read only;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ expdp hemant/hemant \
> directory=data_pump_dir dumpfile=EXAMPLE_TTS.dmp \
> transport_tablespaces=EXAMPLE transport_full_check=Y

Export: Release 12.1.0.2.0 - Production on Sun Jan 15 16:09:58 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
Starting "HEMANT"."SYS_EXPORT_TRANSPORTABLE_01": hemant/******** directory=data_pump_dir dumpfile=EXAMPLE_TTS.dmp transport_tablespaces=EXAMPLE transport_full_check=Y
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC
Processing object type TRANSPORTABLE_EXPORT/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_BODY
Processing object type TRANSPORTABLE_EXPORT/PROCACT_INSTANCE
Processing object type TRANSPORTABLE_EXPORT/XMLSCHEMA/XMLSCHEMA
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX/BITMAP_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCACT_INSTANCE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCDEPOBJ
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "HEMANT"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HEMANT.SYS_EXPORT_TRANSPORTABLE_01 is:
/u01/app/oracle/admin/NONCDB/dpdump/EXAMPLE_TTS.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
+DATA/NONCDB/DATAFILE/example.266.896482777
Job "HEMANT"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Sun Jan 15 16:12:48 2017 elapsed 0 00:02:46

[oracle@ora12102 Desktop]$


Now, I need to copy the datafile (while the tablespace is READ ONLY).

[oracle@ora12102 Desktop]$ su - grid
Password:
[grid@ora12102 ~]$ asmcmd
ASMCMD> cp +DATA/NONCDB/DATAFILE/example.266.896482777 /tmp/example.dbf
copying +DATA/NONCDB/DATAFILE/example.266.896482777 -> /tmp/example.dbf
ASMCMD>
ASMCMD> exit
[grid@ora12102 ~]$ exit
logout
[oracle@ora12102 Desktop]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 15 16:16:24 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> alter tablespace example read write ;

Tablespace altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$


I must now identify the target location for the datafile in the CDB database.

[oracle@ora12102 Desktop]$ . oraenv
ORACLE_SID = [NONCDB] ? CDB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12102 Desktop]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 15 16:17:44 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 973082144 bytes
Database Buffers 654311424 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>
SQL> alter session set container=PDB1;

Session altered.

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/CDB1/35208E5B92306007E0530F02000A969A/DATAFILE/system.284.914408541
+DATA/CDB1/35208E5B92306007E0530F02000A969A/DATAFILE/sysaux.285.914408541
+DATA/CDB1/35208E5B92306007E0530F02000A969A/DATAFILE/users.287.914408663
+DATA/CDB1/35208E5B92306007E0530F02000A969A/DATAFILE/hemant.288.914713623

SQL>


Now that I have identified the default location for all PDB1 files, I need to use ASMCMD to copy the datafile.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ su - grid
Password:
[grid@ora12102 ~]$
[grid@ora12102 ~]$ asmcmd
ASMCMD> cp /tmp/example.dbf +DATA/CDB1/35208E5B92306007E0530F02000A969A/DATAFILE/
copying /tmp/example.dbf -> +DATA/CDB1/35208E5B92306007E0530F02000A969A/DATAFILE/example.dbf
ASMCMD> cd +DATA/CDB1/35208E5B92306007E0530F02000A969A/DATAFILE
ASMCMD> ls
HEMANT.288.914713623
SYSAUX.285.914408541
SYSTEM.284.914408541
USERS.287.914408663
example.dbf
ASMCMD> exit
[grid@ora12102 ~]$


Now, I need to import the tablespace with the datafile. Before that, I need to set up the user that will perform the import, as well as all the schema owners of objects in the transported tablespace.

[grid@ora12102 ~]$ exit
logout
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus system/oracle@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 15 16:28:16 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-28002: the password will expire within 7 days


Last Successful login time: Sun Jan 15 2017 16:27:18 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> grant select_catalog_role, imp_full_database to hemant;

Grant succeeded.

SQL> select privilege from dba_sys_privs where grantee = 'HEMANT';

PRIVILEGE
----------------------------------------
CREATE TABLE
CREATE SESSION

SQL> select granted_role from dba_role_privs where grantee = 'HEMANT';

GRANTED_ROLE
--------------------------------------------------------------------------------
SELECT_CATALOG_ROLE
IMP_FULL_DATABASE

SQL>


As with the Export, I am using a non-DBA user for the import.  I also have to set up the users and their grants.

[oracle@ora12102 Desktop]$ sqlplus sys/oracle@PDB1 as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 15 16:33:17 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create directory imp_from_noncdb as '/u01/app/oracle/admin/NONCDB/dpdump';

Directory created.

SQL> grant read, write on directory imp_from_noncdb to hemant;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ sqlplus system/oracle@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jan 15 16:41:36 2017

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-28002: the password will expire within 7 days


Last Successful login time: Sun Jan 15 2017 16:28:16 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> create user HR identified by HR ;
create user IX identified by IX ;
create user OE identified by OE ;
create user PM identified by PM ;
create user SH identified by SH ;

User created.

SQL>
User created.

SQL>
User created.

SQL>
User created.

SQL>
User created.

SQL> SQL>
SQL> @grants_to_EXAMPLE
SQL> spool grants_to_EXAMPLE
SQL>
SQL> grant ALTER SESSION to HR ;

Grant succeeded.

SQL> grant ALTER SESSION to IX ;

Grant succeeded.

SQL> grant ALTER SESSION to SH ;

Grant succeeded.

SQL> grant CREATE CLUSTER to IX ;

Grant succeeded.

SQL> grant CREATE CLUSTER to SH ;

Grant succeeded.

SQL> grant CREATE DATABASE LINK to HR ;

Grant succeeded.

SQL> grant CREATE DATABASE LINK to IX ;

Grant succeeded.

SQL> grant CREATE DATABASE LINK to OE ;

Grant succeeded.

SQL> grant CREATE DATABASE LINK to SH ;

Grant succeeded.

SQL> grant CREATE DIMENSION to SH ;

Grant succeeded.

SQL> grant CREATE INDEXTYPE to IX ;

Grant succeeded.

SQL> grant CREATE MATERIALIZED VIEW to OE ;

Grant succeeded.

SQL> grant CREATE MATERIALIZED VIEW to SH ;

Grant succeeded.

SQL> grant CREATE OPERATOR to IX ;

Grant succeeded.

SQL> grant CREATE PROCEDURE to HR ;

Grant succeeded.

SQL> grant CREATE PROCEDURE to IX ;

Grant succeeded.

SQL> grant CREATE RULE to IX ;

Grant succeeded.

SQL> grant CREATE RULE SET to IX ;

Grant succeeded.

SQL> grant CREATE SEQUENCE to HR ;

Grant succeeded.

SQL> grant CREATE SEQUENCE to IX ;

Grant succeeded.

SQL> grant CREATE SEQUENCE to SH ;

Grant succeeded.

SQL> grant CREATE SESSION to HR ;

Grant succeeded.

SQL> grant CREATE SESSION to IX ;

Grant succeeded.

SQL> grant CREATE SESSION to OE ;

Grant succeeded.

SQL> grant CREATE SESSION to SH ;

Grant succeeded.

SQL> grant CREATE SYNONYM to HR ;

Grant succeeded.

SQL> grant CREATE SYNONYM to IX ;

Grant succeeded.

SQL> grant CREATE SYNONYM to OE ;

Grant succeeded.

SQL> grant CREATE SYNONYM to SH ;

Grant succeeded.

SQL> grant CREATE TABLE to IX ;

Grant succeeded.

SQL> grant CREATE TABLE to SH ;

Grant succeeded.

SQL> grant CREATE TRIGGER to IX ;

Grant succeeded.

SQL> grant CREATE TYPE to IX ;

Grant succeeded.

SQL> grant CREATE VIEW to HR ;

Grant succeeded.

SQL> grant CREATE VIEW to IX ;

Grant succeeded.

SQL> grant CREATE VIEW to OE ;

Grant succeeded.

SQL> grant CREATE VIEW to SH ;

Grant succeeded.

SQL> grant QUERY REWRITE to OE ;

Grant succeeded.

SQL> grant QUERY REWRITE to SH ;

Grant succeeded.

SQL> grant SELECT ANY DICTIONARY to IX ;

Grant succeeded.

SQL> grant UNLIMITED TABLESPACE to HR ;

Grant succeeded.

SQL> grant UNLIMITED TABLESPACE to IX ;

Grant succeeded.

SQL> grant UNLIMITED TABLESPACE to OE ;

Grant succeeded.

SQL> grant UNLIMITED TABLESPACE to PM ;

Grant succeeded.

SQL> grant UNLIMITED TABLESPACE to SH ;

Grant succeeded.

SQL>
SQL> spool off
SQL> @roles_to_EXAMPLE
SQL> set echo on
SQL> spool roles_to_EXAMPLE
SQL>
SQL> grant AQ_ADMINISTRATOR_ROLE to IX ;

Grant succeeded.

SQL> grant AQ_USER_ROLE to IX ;

Grant succeeded.

SQL> grant CONNECT to IX ;

Grant succeeded.

SQL> grant CONNECT to PM ;

Grant succeeded.

SQL> grant RESOURCE to HR ;

Grant succeeded.

SQL> grant RESOURCE to IX ;

Grant succeeded.

SQL> grant RESOURCE to OE ;

Grant succeeded.

SQL> grant RESOURCE to PM ;

Grant succeeded.

SQL> grant RESOURCE to SH ;

Grant succeeded.

SQL> grant SELECT_CATALOG_ROLE to IX ;

Grant succeeded.

SQL> grant SELECT_CATALOG_ROLE to SH ;

Grant succeeded.

SQL> grant XDBADMIN to OE ;

Grant succeeded.

SQL>
SQL> spool off
SQL>


I am now ready to import the tablespace and datafile.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ impdp hemant/hemant@PDB1 \
> dumpfile=EXAMPLE_TTS.dmp directory=imp_from_noncdb \
> transport_datafiles=+DATA/CDB1/35208E5B92306007E0530F02000A969A/DATAFILE/example.dbf

Import: Release 12.1.0.2.0 - Production on Sun Jan 15 16:50:16 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

UDI-28002: operation generated ORACLE error 28002
ORA-28002: the password will expire within 7 days

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
Master table "HEMANT"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Source time zone is +00:00 and target time zone is -07:00.
Starting "HEMANT"."SYS_IMPORT_TRANSPORTABLE_01": hemant/********@PDB1 dumpfile=EXAMPLE_TTS.dmp directory=imp_from_noncdb transport_datafiles=+DATA/CDB1/35208E5B92306007E0530F02000A969A/DATAFILE/example.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_SPEC
Processing object type TRANSPORTABLE_EXPORT/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TRANSPORTABLE_EXPORT/TYPE/TYPE_BODY
Processing object type TRANSPORTABLE_EXPORT/PROCACT_INSTANCE
Processing object type TRANSPORTABLE_EXPORT/XMLSCHEMA/XMLSCHEMA
Processing object type TRANSPORTABLE_EXPORT/TABLE
ORA-39360: Table "OE"."ORDERS" was skipped due to transportable import and TSLTZ issues resulting from time zone mismatch.
ORA-39151: Table "OE"."PURCHASEORDER" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type TRANSPORTABLE_EXPORT/GRANT/OWNER_GRANT/OBJECT_GRANT
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "OE"."PROMOTIONS" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "OE"."PRODUCT_DESCRIPTIONS" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "OE"."PRODUCT_INFORMATION" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "OE"."INVENTORIES" TO "BI"
ORA-39112: Dependent object type OBJECT_GRANT:"OE" skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type OBJECT_GRANT:"OE" skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "OE"."ORDER_ITEMS" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "OE"."WAREHOUSES" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "OE"."CUSTOMERS" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "SH"."COSTS" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "SH"."SALES" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "SH"."COUNTRIES" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "SH"."CUSTOMERS" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "SH"."PROMOTIONS" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "SH"."CHANNELS" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "SH"."PRODUCTS" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "SH"."TIMES" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "SH"."FWEEK_PSCAT_SALES_MV" TO "BI"
ORA-39083: Object type OBJECT_GRANT failed to create with error:
ORA-01917: user or role 'BI' does not exist
Failing sql is:
GRANT SELECT ON "SH"."CAL_MONTH_SALES_MV" TO "BI"
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
ORA-39112: Dependent object type INDEX:"OE"."ORD_SALES_REP_IX" skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"OE"."ORD_ORDER_DATE_IX" skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"OE"."ORD_CUSTOMER_IX" skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type INDEX:"OE"."ORDER_PK" skipped, base object type TABLE:"OE"."ORDERS" creation failed
Processing object type TRANSPORTABLE_EXPORT/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
ORA-39112: Dependent object type CONSTRAINT:"OE"."ORDER_MODE_LOV" skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type CONSTRAINT:"OE"."ORDER_TOTAL_MIN" skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type CONSTRAINT:"OE"."ORDER_PK" skipped, base object type TABLE:"OE"."ORDERS" creation failed
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/COMMENT
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type COMMENT skipped, base object type TABLE:"OE"."ORDERS" creation failed
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
ORA-39083: Object type REF_CONSTRAINT:"OE"."ORDER_ITEMS_ORDER_ID_FK" failed to create with error:
ORA-00942: table or view does not exist
Failing sql is:
ALTER TABLE "OE"."ORDER_ITEMS" ADD CONSTRAINT "ORDER_ITEMS_ORDER_ID_FK" FOREIGN KEY ("ORDER_ID") REFERENCES "OE"."ORDERS" ("ORDER_ID") ON DELETE CASCADE ENABLE NOVALIDATE
ORA-39112: Dependent object type REF_CONSTRAINT:"OE"."ORDERS_SALES_REP_FK" skipped, base object type TABLE:"OE"."ORDERS" creation failed
ORA-39112: Dependent object type REF_CONSTRAINT:"OE"."ORDERS_CUSTOMER_ID_FK" skipped, base object type TABLE:"OE"."ORDERS" creation failed
Processing object type TRANSPORTABLE_EXPORT/INDEX/BITMAP_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/TABLE
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/INDEX
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/SECONDARY_TABLE/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/DOMAIN_INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCACT_INSTANCE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PROCDEPOBJ
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
ORA-39082: Object type TRIGGER:"HR"."SECURE_EMPLOYEES" created with compilation warnings
ORA-39082: Object type TRIGGER:"HR"."UPDATE_JOB_HISTORY" created with compilation warnings
Job "HEMANT"."SYS_IMPORT_TRANSPORTABLE_01" completed with 41 error(s) at Sun Jan 15 16:51:40 2017 elapsed 0 00:01:23

[oracle@ora12102 Desktop]$


The key error is the failure of the ORDERS table creation because of a TimeZone (TSLTZ) mismatch between the source and the target !  So, there is a lesson to be learnt : check the database time zones before attempting such a transport !
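
A simple pre-check (my own addition here, not something I ran above) would have flagged this : compare the database time zone of the source NonCDB and the target PDB before the transport, because tables with TIMESTAMP WITH LOCAL TIME ZONE columns (such as OE.ORDERS) cannot be plugged in across a mismatch.

-- run in both the source (NONCDB) and the target (PDB1);
-- the impdp log above reported source +00:00 versus target -07:00
select dbtimezone from dual;

-- the database time zone can be changed only if no TSLTZ columns hold data,
-- and the change takes effect after a restart, e.g. :
-- alter database set time_zone = '+00:00';
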

.
.
.

Categories: DBA Blogs

V$RMAN_BACKUP_JOB_DETAILS, a caveat

Sun, 2017-01-08 23:46
Building on a previous blog post (you could read it before or after this post), here's a quick demo of a caveat or quirk with V$RMAN_BACKUP_JOB_DETAILS.

This is in 11.2.0.4

[oracle@ora11204 Desktop]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Mon Jan 9 13:39:44 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1362461976)

RMAN> backup as compressed backupset
2> incremental level 1 database
3> plus archivelog ;


Starting backup at 09-JAN-17
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=153 device type=DISK
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=108 RECID=1245 STAMP=928093294
input archived log thread=1 sequence=109 RECID=1246 STAMP=928093719
input archived log thread=1 sequence=110 RECID=1247 STAMP=928093722
input archived log thread=1 sequence=111 RECID=1248 STAMP=928093724
...
...
...
input archived log thread=1 sequence=163 RECID=1318 STAMP=929802055
input archived log thread=1 sequence=164 RECID=1319 STAMP=932823436
input archived log thread=1 sequence=165 RECID=1320 STAMP=932823439
input archived log thread=1 sequence=166 RECID=1321 STAMP=932823606
channel ORA_DISK_1: starting piece 1 at 09-JAN-17
channel ORA_DISK_1: finished piece 1 at 09-JAN-17
piece handle=/u02/FRA/ORCL/backupset/2017_01_09/o1_mf_annnn_TAG20170109T134007_d768kr8l_.bkp tag=TAG20170109T134007 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:15
Finished backup at 09-JAN-17

Starting backup at 09-JAN-17
using channel ORA_DISK_1
no parent backup or copy of datafile 2 found
no parent backup or copy of datafile 1 found
no parent backup or copy of datafile 3 found
no parent backup or copy of datafile 4 found
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00002 name=/u01/app/oracle/oradata/ORCL/sysaux.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/ORCL/system.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ORCL/undotbs1.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ORCL/users.dbf
channel ORA_DISK_1: starting piece 1 at 09-JAN-17
channel ORA_DISK_1: finished piece 1 at 09-JAN-17
piece handle=/u02/FRA/ORCL/backupset/2017_01_09/o1_mf_nnnd0_TAG20170109T134123_d768n3v5_.bkp tag=TAG20170109T134123 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting compressed incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00011 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_hemant_cdgb60g4_.dbf
input datafile file number=00012 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_users_c552qnsh_.dbf
input datafile file number=00013 name=/u01/app/oracle/oradata/ORCL/datafile/o1_mf_intermed_c552qpc7_.dbf
channel ORA_DISK_1: starting piece 1 at 09-JAN-17
channel ORA_DISK_1: finished piece 1 at 09-JAN-17
piece handle=/u02/FRA/ORCL/backupset/2017_01_09/o1_mf_nnnd1_TAG20170109T134123_d768ojmm_.bkp tag=TAG20170109T134123 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 09-JAN-17

Starting backup at 09-JAN-17
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=167 RECID=1322 STAMP=932823743
channel ORA_DISK_1: starting piece 1 at 09-JAN-17
channel ORA_DISK_1: finished piece 1 at 09-JAN-17
piece handle=/u02/FRA/ORCL/backupset/2017_01_09/o1_mf_annnn_TAG20170109T134223_d768ozv9_.bkp tag=TAG20170109T134223 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 09-JAN-17

Starting Control File and SPFILE Autobackup at 09-JAN-17
piece handle=/u02/FRA/ORCL/autobackup/2017_01_09/o1_mf_s_932823745_d768p1jo_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 09-JAN-17

RMAN>


What does V$RMAN_BACKUP_JOB_DETAILS tell us ?

SQL> l
1 select to_char(start_time,'DD-MON HH24:MI') Start_At,
2 to_char(end_time,'DD-MON HH24:MI') End_At,
3 input_bytes/1048576 Input_MB, output_bytes/1048576 Output_MB,
4 input_type, status
5 from v$rman_backup_job_details
6 where start_time > trunc(sysdate)
7* order by start_time
SQL> /

START_AT END_AT INPUT_MB OUTPUT_MB INPUT_TYPE
--------------------- --------------------- ---------- ---------- -------------
STATUS
-----------------------
09-JAN 13:40 09-JAN 13:42 2917.06055 491.563477 DB INCR
COMPLETED


SQL>


The view does NOT show how much of the input/output was for ArchiveLogs.  It clubs the ArchiveLogs and the controlfile autobackup under the single entry for "DB INCR".  Anyone reading this row from V$RMAN_BACKUP_JOB_DETAILS would NOT know whether ArchiveLogs had been backed up, and would NOT know whether a controlfile/spfile autobackup was created.
Furthermore, if there is a failure (e.g. only the last ArchiveLog backupset failed ?), would you be able to identify what had successfully been backed up ?  Also see my previous blog post.
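
One way around this (my own sketch, not from this demo) is to drill down from the session to the individual backup sets.  V$BACKUP_SET_DETAILS carries a SESSION_KEY that should join back to V$RMAN_BACKUP_JOB_DETAILS, and its BACKUP_TYPE ('D' = full datafile, 'I' = incremental, 'L' = archived log) and CONTROLFILE_INCLUDED columns expose what the job-level view hides :

select d.session_key,
       s.backup_type,
       s.controlfile_included,
       count(*) backup_sets,
       sum(s.output_bytes)/1048576 output_mb
  from v$rman_backup_job_details d, v$backup_set_details s
 where s.session_key = d.session_key
   and d.start_time > trunc(sysdate)
 group by d.session_key, s.backup_type, s.controlfile_included
 order by d.session_key, s.backup_type
/
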
.
.
.


Categories: DBA Blogs

12cR1 RAC Posts -- 2 : Convert AdminManaged DB to PolicyManaged

Fri, 2016-12-30 22:14
I have an AdminManaged Database in my (2-node) RAC Cluster.

How do I convert it to PolicyManaged ?

(Yes, let me admit :  It makes no sense to have PolicyManaged on a 2-node Cluster.  But since I can't create an 8 or 16 node Cluster (with multiple databases to boot ?!), let me demonstrate with a 2-node Cluster.  The principle remains the same.)

First, I show the configuration of the database in the Cluster :
[oracle@collabn1 ~]$ srvctl status database -d RAC
Instance RAC1 is running on node collabn1
Instance RAC2 is running on node collabn2
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name: RAC
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/spfileRAC1.ora
Password file: +DATA/RAC/PASSWORD/pwdrac.277.931824933
Domain: racattack
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: FRA,DATA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances: RAC1,RAC2
Configured nodes: collabn1,collabn2
Database is administrator managed
[oracle@collabn1 ~]$


This is the current definition of server pool(s) :
[oracle@collabn1 ~]$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names: collabn1,collabn2
[oracle@collabn1 ~]$


So, we see that I don't have any custom Server Pools defined; only the default FREE and GENERIC (the latter for AdminManaged database(s)) pools exist.
I now proceed to remove the database from the configuration.
[oracle@collabn1 ~]$ srvctl stop database -d RAC
[oracle@collabn1 ~]$ srvctl remove database -d RAC
Remove the database RAC? (y/[n]) y
[oracle@collabn1 ~]$


I now create a new (custom) Server Pool (called "MyPool").
[oracle@collabn1 ~]$ srvctl add srvpool -serverpool MyPool -importance 100 -min 1 -max 2 \
> -servers "collabn1,collabn2" -verbose
[oracle@collabn1 ~]$ srvctl config srvpool -serverpool MyPool
Server pool name: MyPool
Importance: 100, Min: 1, Max: 2
Category:
Candidate server names: collabn1,collabn2
[oracle@collabn1 ~]$


So, now with an "up to 2 nodes" Server Pool, I add my database to it.
[oracle@collabn1 ~]$ srvctl add database -d RAC -oraclehome /u01/app/oracle/product/12.1.0/dbhome_1 \
> -serverpool MyPool -verbose
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MyPool
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances:
Configured nodes:
Database is policy managed
[oracle@collabn1 ~]$


This shows that RAC is now a PolicyManaged database in the "MyPool" Server Pool !
Can I now start the database and check on the instance(s) ?
[oracle@collabn1 ~]$ srvctl start database -d RAC
PRCR-1079 : Failed to start resource ora.rac.db
CRS-5017: The resource action "ora.rac.db start" encountered the following error:
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initRAC_2.ora'
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/collabn2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.rac.db' on 'collabn2' failed
CRS-5017: The resource action "ora.rac.db start" encountered the following error:
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/12.1.0/dbhome_1/dbs/initRAC_1.ora'
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/collabn1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.rac.db' on 'collabn1' failed
CRS-2632: There are no more servers to try to place resource 'ora.rac.db' on that would satisfy its placement policy
[oracle@collabn1 ~]$


Yes, of course. I need to create initRAC_1.ora and initRAC_2.ora.
After creating the new parameter files (pointing to the SPFILE in the ASM Diskgroup), I try again.
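
(Each pfile is essentially a one-line pointer.  A sketch of what $ORACLE_HOME/dbs/initRAC_1.ora on collabn1 would contain, assuming the SPFILE registered earlier at +DATA/spfileRAC1.ora ; initRAC_2.ora on collabn2 would be the same :)

# minimal sketch of $ORACLE_HOME/dbs/initRAC_1.ora (not copied from my actual file)
SPFILE='+DATA/spfileRAC1.ora'
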
[oracle@collabn1 ~]$ srvctl start database -d RAC
[oracle@collabn1 ~]$ srvctl status database -d RAC
Instance RAC_1 is running on node collabn1
Instance RAC_2 is running on node collabn2
[oracle@collabn1 ~]$ ps -ef |grep smon
oracle 3447 1 0 11:37 ? 00:00:00 asm_smon_+ASM1
root 3605 1 0 11:37 ? 00:00:11 /u01/app/12.1.0/grid/bin/osysmond.bin
oracle 4203 1 0 11:38 ? 00:00:00 mdb_smon_-MGMTDB
oracle 22882 1 0 12:08 ? 00:00:00 ora_smon_RAC_1
oracle 23422 12657 0 12:10 pts/0 00:00:00 grep smon
[oracle@collabn1 ~]$
[oracle@collabn2 ~]$ ps -ef |grep smon
oracle 3495 1 0 11:41 ? 00:00:00 asm_smon_+ASM2
root 3593 1 0 11:41 ? 00:00:09 /u01/app/12.1.0/grid/bin/osysmond.bin
oracle 15973 1 0 12:08 ? 00:00:00 ora_smon_RAC_2
oracle 16647 4582 0 12:10 pts/0 00:00:00 grep smon
[oracle@collabn2 ~]$
[oracle@collabn1 ~]$ srvctl config database -d RAC
Database unique name: RAC
Database name:
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile:
Password file:
Domain: racattack
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: MyPool
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group:
Database instances:
Configured nodes:
Database is policy managed
[oracle@collabn1 ~]$


Yes, I now have instances RAC_1 and RAC_2 (instead of RAC1 and RAC2) running. If I had multiple (4 or more ?!) nodes (and a Server Pool configuration to match), there's no guarantee that RAC_1 starts on collabn1 (and RAC_2 on collabn2).  These are "floating" instances that can start on any node in the Cluster.


(UPDATE : It seems that when I shut down a node, a PolicyManaged Instance gets a SHUTDOWN ABORT, unlike an AdminManaged Instance which gets a SHUTDOWN NORMAL ?)

.
.
.

Categories: DBA Blogs

12cR1 RAC Posts -- 1 : Grid Infrastructure Install completed (first cycle)

Sat, 2016-12-24 09:17
Just as I had posted 11gR2 RAC Posts in 2014  (listed here), I plan to post some 12cR1 RAC (GI, ASM) posts over the next few weeks.

Here's my Grid Infrastructure up and running.  (Yes, I used racattack for this first 12cR1 setup.)

[root@collabn1 ~]# crsctl status resource -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.SHARED.advm
ONLINE ONLINE collabn1 Volume device /dev/a
sm/shared-141 is onl
ine,STABLE
ONLINE ONLINE collabn2 Volume device /dev/a
sm/shared-141 is onl
ine,STABLE
ora.DATA.dg
ONLINE ONLINE collabn1 STABLE
ONLINE ONLINE collabn2 STABLE
ora.FRA.dg
ONLINE ONLINE collabn1 STABLE
ONLINE ONLINE collabn2 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE collabn1 STABLE
ONLINE ONLINE collabn2 STABLE
ora.asm
ONLINE ONLINE collabn1 Started,STABLE
ONLINE ONLINE collabn2 Started,STABLE
ora.data.shared.acfs
ONLINE ONLINE collabn1 mounted on /shared,S
TABLE
ONLINE ONLINE collabn2 mounted on /shared,S
TABLE
ora.net1.network
ONLINE ONLINE collabn1 STABLE
ONLINE ONLINE collabn2 STABLE
ora.ons
ONLINE ONLINE collabn1 STABLE
ONLINE ONLINE collabn2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE collabn2 STABLE
ora.LISTENER_SCAN2.lsnr
1 ONLINE ONLINE collabn1 STABLE
ora.LISTENER_SCAN3.lsnr
1 ONLINE ONLINE collabn1 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE collabn1 169.254.3.70 172.16.
100.51,STABLE
ora.collabn1.vip
1 ONLINE ONLINE collabn1 STABLE
ora.collabn2.vip
1 ONLINE ONLINE collabn2 STABLE
ora.cvu
1 ONLINE ONLINE collabn1 STABLE
ora.mgmtdb
1 ONLINE ONLINE collabn1 Open,STABLE
ora.oc4j
1 ONLINE ONLINE collabn1 STABLE
ora.scan1.vip
1 ONLINE ONLINE collabn2 STABLE
ora.scan2.vip
1 ONLINE ONLINE collabn1 STABLE
ora.scan3.vip
1 ONLINE ONLINE collabn1 STABLE
--------------------------------------------------------------------------------
[root@collabn1 ~]#
[root@collabn1 ~]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 96fbcb40bfeb4ff7bf18881adcfef149 (/dev/asm-disk1) [DATA]
Located 1 voting disk(s).
[root@collabn1 ~]#
[root@collabn1 ~]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 1632
Available space (kbytes) : 407936
ID : 827167720
Device/File Name : +DATA
Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

[root@collabn1 ~]#
[root@collabn1 ~]# nslookup collabn-cluster-scan
Server: 192.168.78.51
Address: 192.168.78.51#53

Name: collabn-cluster-scan.racattack
Address: 192.168.78.252
Name: collabn-cluster-scan.racattack
Address: 192.168.78.253
Name: collabn-cluster-scan.racattack
Address: 192.168.78.251

[root@collabn1 ~]#


I hope to run a few cycles of setups (switching to different node names, IPs, DiskGroup names etc) over the next few weeks.
.
.
.

Categories: DBA Blogs

12.2 New Features -- 5 : Memory Parameters for Pluggable Database

Tue, 2016-12-06 08:07
12.2 allows Instance Memory parameters to be configured at the PDB level.

[oracle@HKCORCL ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 12.2.0.1.0 Production on Tue Dec 6 13:56:28 2016

Copyright (c) 1982, 2016, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
allow_group_access_to_sga boolean FALSE
lock_sga boolean FALSE
pre_page_sga boolean TRUE
sga_max_size big integer 2544M
sga_min_size big integer 0
sga_target big integer 2544M
unified_audit_sga_queue_size integer 1048576
SQL> show parameter db_cach

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice string ON
db_cache_size big integer 0
SQL>


Those are parameters set at the CDB level. Let's see the PDB.

SQL> alter session set container = PDB1;

Session altered.

SQL> show parameter sga

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
allow_group_access_to_sga boolean FALSE
lock_sga boolean FALSE
pre_page_sga boolean TRUE
sga_max_size big integer 2544M
sga_min_size big integer 0
sga_target big integer 0
unified_audit_sga_queue_size integer 1048576
SQL> show parameter db_cache

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice string ON
db_cache_size big integer 0
SQL> alter system set db_cache_size=400M;

System altered.

SQL>
SQL> alter system set sga_target=512M;
alter system set sga_target=512M
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-56750: invalid value 536870912 for parameter sga_target; must be larger
than 200% of parameter db_cache_size


SQL> alter system set sga_target=810M;

System altered.

SQL> alter system set shared_pool_size=256M;

System altered.

SQL> show parameter db_cache

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice string ON
db_cache_size big integer 400M
SQL> show parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 810M
SQL> show parameter shared_pool

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
shared_pool_reserved_size big integer 26004684
shared_pool_size big integer 256M
SQL>
SQL> alter system set pga_aggregate_target=128M;

System altered.

SQL>


Returning to the CDB ...

SQL> connect / as sysdba
Connected.
SQL> show parameter db_cache

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_cache_advice string ON
db_cache_size big integer 0
SQL> show parameter sga_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
sga_target big integer 2544M
SQL> show parameter shared_pool

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
shared_pool_reserved_size big integer 26004684
shared_pool_size big integer 0
SQL> show parameter pga_aggergate_target
SQL> show parameter pga_aggregate_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target big integer 1775294400
SQL>


Thus, multiple PDBs can each have their own private memory targets and limits (even an SGA_MIN_SIZE), all within the single instance in which they co-exist.
Note : The requirement is that MEMORY_TARGET must not be set.
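
For example, a per-PDB SGA floor could be requested with SGA_MIN_SIZE, along these lines (a minimal sketch, not part of the demo above; the 256M value is purely illustrative and must fit within the CDB's overall SGA_TARGET) :

SQL> alter session set container = PDB1;

Session altered.

SQL> alter system set sga_min_size = 256M;

System altered.

SQL>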
.
.
.

Categories: DBA Blogs

12.2 New Features -- 4 : AWR for Pluggable Database

Thu, 2016-12-01 03:29
12.2 now allows AWR Snapshots and Reports to be created at the PDB level.

Here I demonstrate a Manual Snapshot.  Although Automatic PDB AWR Snapshots are possible (with the AWR_PDB_AUTOFLUSH_ENABLED parameter), they are disabled by default and Oracle recommends Manual Snapshots.

SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDB1;

Session altered.

SQL> exec dbms_workload_repository.create_snapshot();

PL/SQL procedure successfully completed.

SQL>


I then proceed to create an AWR Report, still in the PDB1 container.

SQL> @?/rdbms/admin/awrrpt

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
AWR reports can be generated in the following formats. Please enter the
name of the format at the prompt. Default value is 'html'.

'html' HTML format (default)
'text' Text format
'active-html' Includes Performance Hub active report

Enter value for report_type: text

Type Specified: text

Specify the location of AWR Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
AWR_ROOT - Use AWR data from root (default)
AWR_PDB - Use AWR data from PDB
Enter value for awr_location: AWR_PDB




Current Instance
~~~~~~~~~~~~~~~~
DB Id DB Name Inst Num Instance Container Name
-------------- -------------- -------------- -------------- --------------
3774315809 HKCORCL 1 HKCORCL PDB1


Root DB Id Container DB Id AWR DB Id
--------------- --------------- ---------------
947935822 3774315809 3774315809









Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
------------ ---------- --------- ---------- ------
3774315809 1 HKCORCL HKCORCL HKCORCL.comp

Using 3774315809 for database Id
Using 1 for instance number


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing without
specifying a number lists all completed snapshots.


Enter value for num_days: 1

Listing the last day's Completed Snapshots
Instance DB Name Snap Id Snap Started Snap Level
------------ ------------ ---------- ------------------ ----------

HKCORCL HKCORCL 1 01 Dec 2016 08:48 1
2 01 Dec 2016 08:49 1
3 01 Dec 2016 08:52 1


Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
Begin Snapshot Id specified: 1

Enter value for end_snap: 3
End Snapshot Id specified: 3




Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_1_3.txt. To use this name,
press to continue, otherwise enter an alternative.

Enter value for report_name:

Using the report name awrrpt_1_1_3.txt


Here's a look at the header of the AWR report.

WORKLOAD REPOSITORY PDB report (PDB snapshots)

DB Name DB Id Unique Name DB Role Edition Release RAC CDB
------------ ----------- ----------- ---------------- ------- ---------- --- ---
HKCORCL 3774315809 HKCORCL PRIMARY EE 12.2.0.1.0 NO NO

Instance Inst Num Startup Time
------------ -------- ---------------
HKCORCL 1 16-Nov-16 06:13

PDB Name PDB Id PDB DB Id Open Time
------------ ------ ---------- ---------------
PDB1 3 3774315809 25-Nov-16 14:11

Host Name Platform CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
HKCORCL.compute- Linux x86 64-bit 2 2 1 7.05

Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 1 01-Dec-16 08:48:46 0 4.0
End Snap: 3 01-Dec-16 08:52:08 1 12.0
Elapsed: 3.36 (mins)
DB Time: 0.29 (mins)

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 0.1 4.3 0.00 0.06
DB CPU(s): 0.1 2.9 0.00 0.04
Background CPU(s): 0.0 0.0 0.00 0.00
Redo size (bytes): 138,443.8 6,976,599.0
Logical read (blocks): 1,798.4 90,625.0
Block changes: 282.3 14,224.3
Physical read (blocks): 21.0 1,055.8
Physical write (blocks): 0.7 34.5
Read IO requests: 20.9 1,051.8
Write IO requests: 0.3 12.5
Read IO (MB): 0.2 8.3
Write IO (MB): 0.0 0.3
IM scan rows: 0.0 0.0
Session Logical Read IM: 0.0 0.0
User calls: 1.5 77.5
Parses (SQL): 17.9 904.0
Hard parses (SQL): 3.2 161.5
SQL Work Area (MB): 2.5 123.5
Logons: 0.0 1.0
Executes (SQL): 45.7 2,302.3
Rollbacks: 0.0 0.0
Transactions: 0.0

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Avg % DB Wait
Event Waits Time (sec) Wait time Class
------------------------------ ----------- ---------- --------- ------ --------
DB CPU 11.5 66.9
db file sequential read 3,758 2.5 667.74us 14.6 User I/O
direct path write 23 .7 29.08ms 3.9 User I/O
flashback log file sync 36 .6 16.98ms 3.6 User I/O
local write wait 12 .4 37.17ms 2.6 User I/O
acknowledge over PGA limit 9 .1 9.50ms .5 Schedule
control file sequential read 189 .1 293.42us .3 System I
PGA memory operation 3,687 0 11.02us .2 Other
db file scattered read 4 0 8.32ms .2 User I/O
log file sync 3 0 9.35ms .2 Commit


The Header identifies the PDB being reported on.  Note that Snapshots 1 to 3 are local to the PDB and are not in the Root.  PDB Snapshots can be maintained (create or drop snapshot) in the same manner as CDB snapshots.  (Note : PDB AWR Snapshots are in the view AWR_PDB_SNAPSHOT, not DBA_HIST_SNAPSHOT).
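
So, while still connected to the PDB, the local snapshots could be listed and pruned along these lines (a sketch, not from the session above; I am assuming AWR_PDB_SNAPSHOT exposes the same SNAP_ID / END_INTERVAL_TIME columns as DBA_HIST_SNAPSHOT) :

SQL> select snap_id, end_interval_time
  2  from awr_pdb_snapshot
  3  order by snap_id
  4  /

SQL> exec dbms_workload_repository.drop_snapshot_range(low_snap_id => 1, high_snap_id => 2);

SQL>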




In contrast, below is the Header for a CDB-level report, where Automatic Snapshots mean that the Snap IDs are already at 379 and 380.  Thus, the CDB snapshots are distinct from the PDB snapshots.

WORKLOAD REPOSITORY report for

DB Name DB Id Unique Name DB Role Edition Release RAC CDB
------------ ----------- ----------- ---------------- ------- ---------- --- ---
HKCORCL 947935822 HKCORCL PRIMARY EE 12.2.0.1.0 NO YES

Instance Inst Num Startup Time
------------ -------- ---------------
HKCORCL 1 16-Nov-16 06:13

Host Name Platform CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
HKCORCL.compute- Linux x86 64-bit 2 2 1 7.05

Snap Id Snap Time Sessions Curs/Sess PDBs
--------- ------------------- -------- --------- -----
Begin Snap: 379 01-Dec-16 08:00:47 51 .6 2
End Snap: 380 01-Dec-16 09:00:09 62 .9 2
Elapsed: 59.36 (mins)
DB Time: 1.14 (mins)



Note how it doesn't identify a PDB.

You need to be explicitly connected to a PDB before awrrpt shows you the option to generate a PDB-level AWR report.
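
As noted at the top of this post, Automatic PDB Snapshots are possible but disabled by default.  If you do want them, they could be enabled inside the PDB along these lines (a sketch only, not run in this demo; the 60-minute interval and 8-day retention are illustrative values) :

SQL> alter session set container = PDB1;

Session altered.

SQL> alter system set awr_pdb_autoflush_enabled = TRUE;

System altered.

SQL> exec dbms_workload_repository.modify_snapshot_settings(interval => 60, retention => 8*24*60);

PL/SQL procedure successfully completed.

SQL>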
.
.
.
Categories: DBA Blogs

12.2 New Features -- 3 : Flashback Pluggable Database

Fri, 2016-11-25 08:35
12.1 allows Point In Time Recovery of a Pluggable Database but not Flashback of an individual PDB.

12.2 now allows Flashback of an individual PDB.  This is easier with a Local Undo Tablespace than with a Shared Undo Tablespace.

Here is a quick demo (all times in UTC timezone) :

[oracle@HKCORCL ~]$ sqlplus system/Oracle_4U@PDB1

SQL*Plus: Release 12.2.0.1.0 Production on Fri Nov 25 14:19:06 2016

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Last Successful login time: Thu Nov 24 2016 01:03:52 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select count(*) from hr.employees_part;

COUNT(*)
----------
107

SQL> drop table hr.employees_part purge;

Table dropped.

SQL> connect / as sysdba
Connected.
SQL> alter system switch logfile;

System altered.

SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL>
SQL> select sysdate, sysdate-oldest_flashback_time
2 from v$flashback_database_log;

SYSDATE SYSDATE-OLDEST_FLASHBACK_TIME
--------- -----------------------------
25-NOV-16 2.36273148

SQL>
SQL> flashback pluggable database pdb1
2 to timestamp sysdate-2/24;

Flashback complete.

SQL> alter pluggable database pdb1 open;
alter pluggable database pdb1 open
*
ERROR at line 1:
ORA-01113: file 17 needs media recovery
ORA-01110: data file 17:
'/u02/app/oracle/oradata/HKCORCL/4157E08302CC2021E053B2D4100AABA3/datafile/o1_mf
_pdb1undo_d3dgxhbh_.dbf'


SQL> alter pluggable database pdb1 open resetlogs;

Pluggable database altered.

SQL>
SQL> connect system/Oracle_4U@PDB1
Connected.
SQL> select count(*) from hr.employees_part;

COUNT(*)
----------
107

SQL>


(Note : The 12.2 documentation shows a TO TIME clause; it is actually TO TIMESTAMP.)
I have done a flashback of PDB1 to a time 2 hours ago, when the table HR.EMPLOYEES_PART still existed.

Let's look for messages in the alert log.

2016-11-25T14:19:52.992589+00:00
Archived Log entry 11 added for T-1.S-11 ID 0x38800462 LAD:1
2016-11-25T14:19:57.621705+00:00
alter pluggable database pdb1 close
2016-11-25T14:19:57.640353+00:00
PDB1(3):JIT: pid 7920 requesting stop
2016-11-25T14:19:58.885892+00:00
Pluggable database PDB1 closed
Completed: alter pluggable database pdb1 close
2016-11-25T14:26:10.205824+00:00
flashback pluggable database pdb1
to timestamp sysdate-2/24
2016-11-25T14:26:10.627900+00:00
Flashback Restore Start
2016-11-25T14:26:11.513882+00:00
Restore Flashback Pluggable Database PDB1 (3) until change 3536013
Flashback Restore Complete
2016-11-25T14:26:11.707236+00:00
Flashback Media Recovery Start
2016-11-25T14:26:11.718480+00:00
Serial Media Recovery started
2016-11-25T14:26:12.006472+00:00
Recovery of Online Redo Log: Thread 1 Group 2 Seq 11 Reading mem 0
Mem# 0: /u04/app/oracle/redo/redo02.log
2016-11-25T14:26:12.283587+00:00
Incomplete Recovery applied until change 3536477 time 11/25/2016 12:26:56
Flashback Media Recovery Complete
Flashback Pluggable Database PDB1 (3) recovered until change 3536477, at 11/25/2016 12:26:56
Completed: flashback pluggable database pdb1
to timestamp sysdate-2/24
2016-11-25T14:26:21.451523+00:00
alter pluggable database pdb1 open
PDB1(3):Autotune of undo retention is turned on.
2016-11-25T14:26:21.659109+00:00
Pdb PDB1 hit error 1113 during open read write (1) and will be closed.
2016-11-25T14:26:21.659410+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_ora_7920.trc:
ORA-01113: file 17 needs media recovery
ORA-01110: data file 17: '/u02/app/oracle/oradata/HKCORCL/4157E08302CC2021E053B2D4100AABA3/datafile/o1_mf_pdb1undo_d3dgxhbh_.dbf'
PDB1(3):JIT: pid 7920 requesting stop
2016-11-25T14:26:21.804780+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 9: '/u02/app/oracle/oradata/HKCORCL/PDB1/system01.dbf'
ORA-1113 signalled during: alter pluggable database pdb1 open...
2016-11-25T14:26:22.086212+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 10: '/u02/app/oracle/oradata/HKCORCL/PDB1/sysaux01.dbf'
2016-11-25T14:26:22.175778+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 12: '/u02/app/oracle/oradata/HKCORCL/PDB1/users01.dbf'
2016-11-25T14:26:22.270876+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 17: '/u02/app/oracle/oradata/HKCORCL/4157E08302CC2021E053B2D4100AABA3/datafile/o1_mf_pdb1undo_d3dgxhbh_.dbf'
Checker run found 4 new persistent data failures
2016-11-25T14:26:39.804216+00:00
alter pluggable database pdb1 open resetlogs
2016-11-25T14:26:40.377390+00:00
Online datafile 17
Online datafile 12
Online datafile 10
Online datafile 9
2016-11-25T14:26:40.881181+00:00
PDB1(3):Autotune of undo retention is turned on.
PDB1(3):Endian type of dictionary set to little
PDB1(3):[7920] Successfully onlined Undo Tablespace 7.
PDB1(3):Undo initialization finished serial:0 start:868281239 end:868281333 diff:94 ms (0.1 seconds)
PDB1(3):Database Characterset for PDB1 is AL32UTF8
PDB1(3):JIT: pid 7920 requesting stop
2016-11-25T14:26:42.441388+00:00
PDB1(3):Autotune of undo retention is turned on.
2016-11-25T14:26:42.827673+00:00
PDB1(3):Endian type of dictionary set to little
PDB1(3):[7920] Successfully onlined Undo Tablespace 7.
PDB1(3):Undo initialization finished serial:0 start:868283079 end:868283168 diff:89 ms (0.1 seconds)
PDB1(3):Pluggable database PDB1 dictionary check beginning
2016-11-25T14:26:43.706672+00:00
PDB1(3):Pluggable Database PDB1 Dictionary check complete
PDB1(3):Database Characterset for PDB1 is AL32UTF8
2016-11-25T14:26:44.083617+00:00
PDB1(3):Opatch validation is skipped for PDB PDB1 (con_id=0)
PDB1(3):Opening pdb with no Resource Manager plan active
2016-11-25T14:26:45.205147+00:00
Starting control autobackup

Deleted Oracle managed file /u03/app/oracle/fast_recovery_area/HKCORCL/415864F430FE5FFEE053B2D4100A149C/backupset/2016_11_16/o1_mf_nnndf_TAG20161116T024856_d2qldlnv_.bkp
2016-11-25T14:26:46.523130+00:00
Deleted Oracle managed file /u03/app/oracle/fast_recovery_area/HKCORCL/3E09703FB0AF1A7EE053DE4BC40A6C1D/backupset/2016_11_16/o1_mf_nnndf_TAG20161116T024856_d2qlfzqg_.bkp
Control autobackup written to DISK device

handle '/u03/app/oracle/fast_recovery_area/HKCORCL/autobackup/2016_11_25/o1_mf_s_928852005_d3jlk651_.bkp'

Pluggable database PDB1 closed
Completed: alter pluggable database pdb1 open resetlogs


The set of ORA-01113 and ORA-01110 errors appeared when I tried to open PDB1 without a RESETLOGS.
The OPEN RESETLOGS issued at 2016-11-25T14:26:39.804216+00:00 was successful.
(Note : The ALTER SYSTEM SWITCH LOGFILE wasn't required, but I like to archive out the CURRENT redo whenever I take a significant action against the database).
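
An alternative to flashing back to a timestamp is a PDB-level Restore Point, roughly like this (a sketch, not taken from the demo above, with responses omitted; the restore point name BEFORE_DROP is illustrative) :

SQL> alter session set container = PDB1;
SQL> create restore point before_drop;

-- ... after the unwanted change ...

SQL> connect / as sysdba
SQL> alter pluggable database pdb1 close;
SQL> flashback pluggable database pdb1 to restore point before_drop;
SQL> alter pluggable database pdb1 open resetlogs;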

.
.
.

Categories: DBA Blogs

12.2 New Features -- 2 : Partitioning an Existing Table

Wed, 2016-11-23 19:27
In 12.2, a non-partitioned table can be partitioned online, without having to use DBMS_REDEFINITION.

SQL> connect hr/Oracle_4U@PDB1
Connected.
SQL> select count(*) from employees;

COUNT(*)
----------
107

SQL> create table employees_part as select * from employees;

Table created.

SQL> select table_name from user_part_tables;

no rows selected

SQL> alter table employees_part
2 modify
3 partition by range (last_name)
4 (partition p_N values less than ('O'),
5 partition p_Q values less than ('R'),
6 partition p_LAST values less than (MAXVALUE))
7 online;

Table altered.

SQL>
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'EMPLOYEES_PART'
4 order by partition_position
5 /

PARTITION_NAME
--------------------------------------------------------------------------------
HIGH_VALUE
------------
P_N
'O'

P_Q
'R'

P_LAST
MAXVALUE


SQL>
SQL> select table_name, partitioning_type, partition_count
2 from user_part_tables
3 where table_name = 'EMPLOYEES_PART'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
PARTITION PARTITION_COUNT
--------- ---------------
EMPLOYEES_PART
RANGE 3


SQL>
SQL> select partition_name, num_rows
2 from user_tab_partitions
3 where table_name = 'EMPLOYEES_PART'
4 order by partition_position
5 /

PARTITION_NAME
--------------------------------------------------------------------------------
NUM_ROWS
----------
P_N
71

P_Q
10

P_LAST
26


SQL>


I was able to convert a Non-Partitioned Table to a Range-Partitioned Table online.
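
If the table already had indexes, the same ALTER can also convert them in one pass; something along these lines (a sketch; the index name EMP_PART_LN_IX is hypothetical) :

SQL> alter table employees_part
  2  modify
  3  partition by range (last_name)
  4  (partition p_N values less than ('O'),
  5  partition p_Q values less than ('R'),
  6  partition p_LAST values less than (MAXVALUE))
  7  online
  8  update indexes (emp_part_ln_ix local);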
.
.
.

Categories: DBA Blogs

12.2 New Features -- 1 : Separate Undo Tablespace for each PDB

Wed, 2016-11-23 19:06
Unlike 12.1 MultiTenant, 12.2 introduces a separate Undo Tablespace for each PDB.

SQL> l
1 select c.con_id, c.name con_name, t.tablespace_name, t.contents, t.status
2 from v$containers c, cdb_tablespaces t
3 where c.con_id=t.con_id
4 and t.tablespace_name like '%UNDO%'
5* order by 1,2
SQL> /

CON_ID CON_NAME TABLESPACE_NAME CONTENTS STATUS
---------- ---------------- ---------------- --------------------- ---------
1 CDB$ROOT UNDOTBS1 UNDO ONLINE
3 PDB1 UNDOTBS1 UNDO ONLINE
5 PDB2 UNDOTBS1 UNDO ONLINE

SQL>


I have two PDBs and each PDB has an Undo Tablespace.

Let me create a new Undo Tablespace.

SQL> connect system/Oracle_4U@PDB1
Connected.
SQL> create undo tablespace PDB1UNDO ;

Tablespace created.

SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> alter system set undo_tablespace='PDB1UNDO';

System altered.

SQL> drop tablespace undotbs1;

Tablespace dropped.

SQL> connect / as sysdba
Connected.
SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>
SQL> select c.con_id, c.name con_name, t.tablespace_name, t.contents, t.status
2 from v$containers c, cdb_tablespaces t
3 where c.con_id=t.con_id
4 and t.tablespace_name like '%UNDO%'
5 order by 1,2
6 /

CON_ID CON_NAME TABLESPACE_NAME CONTENTS STATUS
---------- ---------------- ---------------- --------------------- ---------
1 CDB$ROOT UNDOTBS1 UNDO ONLINE
3 PDB1 PDB1UNDO UNDO ONLINE
5 PDB2 UNDOTBS1 UNDO ONLINE

SQL>
SQL> connect system/Oracle_4U@PDB1
Connected.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string PDB1UNDO
SQL> select tablespace_name, contents, status
2 from dba_tablespaces
3 where tablespace_name like '%UNDO%'
4 /

TABLESPACE_NAME CONTENTS STATUS
---------------- --------------------- ---------
PDB1UNDO UNDO ONLINE

SQL>


I was able to switch PDB1 to a new Undo Tablespace (and drop the old Undo Tablespace).
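
Per-PDB Undo relies on the CDB running in Local Undo mode; that can be confirmed from DATABASE_PROPERTIES (a quick sketch -- in this database it should report TRUE) :

SQL> select property_name, property_value
  2  from database_properties
  3  where property_name = 'LOCAL_UNDO_ENABLED'
  4  /

PROPERTY_NAME        PROPERTY_VALUE
-------------------- --------------
LOCAL_UNDO_ENABLED   TRUE

SQL>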
.
.
.
Categories: DBA Blogs

Flashback Database -- 3 : Purging (older) Flashback Logs

Sun, 2016-11-20 08:34
As demonstrated earlier, Oracle may maintain Flashback Logs for a duration that is longer than the Flashback Retention Target.  This can happen when the db_recovery_file_dest_size is large enough to support them (along with ArchiveLogs, Backups etc).

For example, in my play database I have reset the retention target to 1 day, but the Flashback Logs span more than 4 days :

SQL> show parameter flashback_ret

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 1440
SQL> select sysdate-oldest_flashback_time from v$flashback_database_log;

SYSDATE-OLDEST_FLASHBACK_TIME
-----------------------------
4.21686343



The DBA should not manually delete Flashback Logs.

The only way I've found to purge older Flashback Logs is to reset db_recovery_file_dest_size to a lower value such that current FRA usage exceeds the dest_size.  This prompts Oracle to purge older Flashback Logs.
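
In practice, that means checking what the FRA currently holds and then setting the destination size somewhat below the current usage; something like this (a sketch; the 8G value is purely illustrative and must be derived from your own FRA usage) :

SQL> select file_type, percent_space_used, percent_space_reclaimable, number_of_files
  2  from v$flash_recovery_area_usage
  3  /

SQL> alter system set db_recovery_file_dest_size = 8G;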

However, if ArchiveLogs exist in the FRA and consume significant space (and are generated frequently), you do run the risk of

ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance orcl - Archival Error
ORA-16038: log 1 sequence# nnn cannot be archived
ORA-19809: limit exceeded for recovery files

and/or

ORACLE Instance orcl- Cannot allocate log, archival required
Thread 1 cannot allocate new log, sequence nnn
All online logs need archiving
Examine archive trace files for archiving errors


errors.
So, be careful to monitor your FRA usage and the Flashback Logs.  Use the V$FLASHBACK_DATABASE_LOG, V$FLASHBACK_DATABASE_LOGFILE, V$FLASHBACK_DATABASE_STAT and V$FLASH_RECOVERY_AREA_USAGE views.
(see my earlier post that also points to an Oracle Support Doc about the first two views).
.
.
.


Categories: DBA Blogs

Flashback Database -- 2 : Flashback Requires Redo (ArchiveLog)

Mon, 2016-11-14 09:03
Although Flashback Logs support the ability to execute a FLASHBACK DATABASE command, the actual Flashback also requires Redo to be applied.  This is because the Flashback resets the images of blocks but doesn't guarantee that all transactions are reset to the same point in time (any one block can contain one or more active, uncommitted transactions, and there can be multiple blocks with active transactions at any point in time).  Therefore, since Oracle must revert the database to a consistent image, it needs to be able to apply redo as well (just as it would do for a roll-forward recovery from a backup).

Here's a quick demo of what happens if the redo is not available.

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select sysdate, l.oldest_flashback_scn, l.oldest_flashback_time
2 from v$flashback_database_log l;

SYSDATE OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T
------------------ -------------------- ------------------
14-NOV-16 22:51:37 7246633 14-NOV-16 22:39:43

SQL>

sh-4.1$ pwd
/u02/FRA/ORCL/archivelog/2016_11_14
sh-4.1$ date
Mon Nov 14 22:52:29 SGT 2016
sh-4.1$ rm *
sh-4.1$

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.

SQL> flashback database to timestamp to_date('14-NOV-16 22:45:00','DD-MON-RR HH24:MI:SS');
flashback database to timestamp to_date('14-NOV-16 22:45:00','DD-MON-RR HH24:MI:SS')
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 7246634 to SCN 7269074
ORA-38761: redo log sequence 70 in thread 1, incarnation 5 could not be
accessed


SQL>
SQL> l
1 select sequence#, first_change#, first_time
2 from v$archived_log
3 where resetlogs_time=(select resetlogs_time from v$database)
4 and sequence# between 60 and 81
5* order by 1
SQL> /

SEQUENCE# FIRST_CHANGE# FIRST_TIME
---------- ------------- ------------------
60 7245238 14-NOV-16 22:27:35
61 7248965 14-NOV-16 22:40:46
62 7250433 14-NOV-16 22:40:52
63 7251817 14-NOV-16 22:41:04
64 7253189 14-NOV-16 22:41:20
65 7254583 14-NOV-16 22:41:31
66 7255942 14-NOV-16 22:41:44
67 7257317 14-NOV-16 22:41:59
68 7258689 14-NOV-16 22:42:10
69 7260094 14-NOV-16 22:42:15
70 7261397 14-NOV-16 22:42:22
71 7262843 14-NOV-16 22:42:28
72 7264269 14-NOV-16 22:42:32
73 7265697 14-NOV-16 22:42:37
74 7267121 14-NOV-16 22:42:43
75 7269075 14-NOV-16 22:48:05
76 7270476 14-NOV-16 22:48:11
77 7271926 14-NOV-16 22:48:17
78 7273370 14-NOV-16 22:48:23
79 7274759 14-NOV-16 22:48:32
80 7276159 14-NOV-16 22:48:39
81 7277470 14-NOV-16 22:48:43

22 rows selected.

SQL>



Note how the error message states that Redo (Archive) Log Sequence#70 is required, but provides a range of SCNs that spans Sequence#60 to Sequence#74 !

Bottom Line : Flashback Logs alone aren't adequate to Flashback database.  You also need the corresponding Redo.

Just to confirm that I can continue with the current (non-Flashbacked Database) state (in spite of the failed Flashback)  :

SQL> shutdown;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.
Database opened.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 89
Next log sequence to archive 90
Current log sequence 90
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
7289329

SQL>


Bottom Line : *Before* you attempt a FLASHBACK DATABASE to the OLDEST_FLASHBACK_TIME (or SCN) from V$FLASHBACK_DATABASE_LOG, ensure that you *do* have the "nearby" Archive/Redo Logs !
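
A quick cross-check before attempting the Flashback is to see which ArchiveLog sequence the OLDEST_FLASHBACK_SCN falls into (a sketch; it looks only at the current incarnation and assumes the ArchiveLogs are still physically present) :

SQL> select l.oldest_flashback_scn,
  2         min(a.sequence#) first_sequence_needed
  3  from v$flashback_database_log l, v$archived_log a
  4  where a.resetlogs_time = (select resetlogs_time from v$database)
  5  and a.next_change# > l.oldest_flashback_scn
  6  group by l.oldest_flashback_scn
  7  /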
.
.
.
Categories: DBA Blogs

Flashback Database -- 1 : Introduction to Operations

Mon, 2016-11-07 04:24
Continuing on my previous post,  ....

In 11gR2, ALTER DATABASE FLASHBACK ON and OFF can be executed when the database is OPEN.  Setting FLASHBACK OFF results in deletion of all Flashback Files.

Here is some information that I have pulled from my test database environment :

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL>
SQL> select oldest_flashback_scn, oldest_flashback_time,
2 retention_target, flashback_size
3 from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T RETENTION_TARGET FLASHBACK_SIZE
-------------------- ------------------ ---------------- --------------
7140652 07-NOV-16 10:53:30 180 314572800

SQL> select sysdate from dual;

SYSDATE
------------------
07-NOV-16 17:46:54

SQL>
SQL> select begin_time, end_time, flashback_data, estimated_flashback_size
2 from v$flashback_database_stat
3 order by begin_time;

BEGIN_TIME END_TIME FLASHBACK_DATA ESTIMATED_FLASHBACK_SIZE
------------------ ------------------ -------------- ------------------------
06-NOV-16 18:56:28 06-NOV-16 21:20:55 202129408 251873280
06-NOV-16 21:20:55 07-NOV-16 09:53:26 107102208 62054400
07-NOV-16 09:53:26 07-NOV-16 10:53:30 51609600 67866624
07-NOV-16 10:53:30 07-NOV-16 13:14:45 10682368 60887040
07-NOV-16 13:14:45 07-NOV-16 14:14:51 66002944 67986432
07-NOV-16 14:14:51 07-NOV-16 15:14:57 10018816 66112512
07-NOV-16 15:14:57 07-NOV-16 16:15:01 10190848 64441344
07-NOV-16 16:15:01 07-NOV-16 17:15:05 53559296 68751360
07-NOV-16 17:15:05 07-NOV-16 17:47:57 52862976 0

9 rows selected.

SQL>
SQL> select log#, sequence#, bytes/1048576 Size_MB, first_time
2 from v$flashback_database_logfile
3 order by sequence#;

LOG# SEQUENCE# SIZE_MB FIRST_TIME
---------- ---------- ---------- ------------------
6 6 50 07-NOV-16 09:00:46
1 7 50 07-NOV-16 10:36:01
2 8 50 07-NOV-16 13:13:22
3 9 50 07-NOV-16 13:43:28
4 10 50 07-NOV-16 16:43:49
5 11 50 07-NOV-16 17:44:42

6 rows selected.

SQL>


Firstly, we note (as in my previous blog post) that the available flashback window, from 10:53am to 5:46pm (almost 7 hours), exceeds the Flashback Retention Target of 3 hours (180 minutes).  Apparently, Flashback Logfile Sequences 1 to 5 have already been purged (but I find no entries for the deletions in the alert log).

Note how the "earliest time" does not match in all three views.  The OLDEST_FLASHBACK_TIME is 10:53am although V$FLASHBACK_DATABASE_STAT reports statistics from the previous day (I had enabled Flashback in the database at 18:56:27 of 06-Nov) while V$FLASHBACK_DATABASE_LOGILE shows an existing logfile from 09:00am to 10:36am.

Let me do a Flashback.  I must rely on the V$FLASHBACK_DATABASE_LOG view to know that I cannot flash back beyond 10:53am.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.
SQL>
SQL> flashback database to timestamp trunc(sysdate)+11/24;

Flashback complete.

SQL>
SQL> alter database open read only; --- to verify data if necessary

Database altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


SQL> alter database open resetlogs;

Database altered.

SQL>


A FLASHBACK DATABASE requires an OPEN RESETLOGS to open READ WRITE.

Let's look at the alert log for messages about the Flashback operation itself :

Mon Nov 07 17:56:36 2016
flashback database to timestamp trunc(sysdate)+11/24
Flashback Restore Start
Flashback Restore Complete
Flashback Media Recovery Start
started logmerger process
Parallel Media Recovery started with 2 slaves
Flashback Media Recovery Log /u02/FRA/ORCL/archivelog/2016_11_07/o1_mf_1_81_d2052ofj_.arc
Mon Nov 07 17:56:43 2016
Incomplete Recovery applied until change 7141255 time 11/07/2016 11:00:01
Flashback Media Recovery Complete
Completed: flashback database to timestamp trunc(sysdate)+11/24
Mon Nov 07 17:57:08 2016
alter database open read only


What happens if I disable and re-enable Flashback ?

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> alter database flashback off;

Database altered.

SQL>

From the alert log :
Mon Nov 07 18:03:02 2016
alter database flashback off
Stopping background process RVWR
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y32vjv_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y32xq0_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y3bhkx_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y3dd8r_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y6r6bf_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1ycky3v_.flb
Flashback Database Disabled
Completed: alter database flashback off

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> alter database flashback on;

Database altered.

SQL>

From the alert log :
Mon Nov 07 18:04:21 2016
alter database flashback on
Starting background process RVWR
Mon Nov 07 18:04:21 2016
RVWR started with pid=30, OS id=12621
Flashback Database Enabled at SCN 7142426
Completed: alter database flashback on

From the FRA :
[oracle@ora11204 flashback]$ pwd
/u02/FRA/ORCL/flashback
[oracle@ora11204 flashback]$ ls -ltr
total 102416
-rw-rw----. 1 oracle oracle 52436992 Nov 7 18:04 o1_mf_d20nf7wc_.flb
-rw-rw----. 1 oracle oracle 52436992 Nov 7 18:05 o1_mf_d20nf5nz_.flb
[oracle@ora11204 flashback]$

SQL> alter session set nls_date_Format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select log#, sequence#, bytes/1048576 Size_MB, first_time
2 from v$flashback_database_logfile
3 order by sequence#;

LOG# SEQUENCE# SIZE_MB FIRST_TIME
---------- ---------- ---------- ------------------
2 1 50
1 1 50 07-NOV-16 18:04:22

SQL>



So, I can set FLASHBACK OFF and ON when the database is OPEN.  (But I can't execute a FLASHBACK TO .... with the database OPEN).
.
.
.

Categories: DBA Blogs

Flashback Database Logs can exceed the Retention Target

Fri, 2016-10-28 18:58
The documentation on the Flashback Retention Target in 11.2 and 12.1 states that this parameter specifies an upper limit on how far the database may be flashed back.

However, if the FRA (db_recovery_file_dest_size) is actually large enough, Oracle may retain flashback logs for a much longer duration.

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select sysdate, l.* from v$flashback_database_log l;

SYSDATE OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T RETENTION_TARGET
------------------ -------------------- ------------------ ----------------
FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------- ------------------------
29-OCT-16 07:42:44 6968261 28-OCT-16 22:35:50 180
157286400 86467584


SQL>
SQL> show parameter flashback

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 180
SQL>
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.
SQL> flashback database to timestamp trunc(sysdate);

Flashback complete.

SQL>


Thus, it is useful to check the V$FLASHBACK_DATABASE_LOG, V$FLASHBACK_DATABASE_LOGFILE, V$FLASHBACK_DATABASE_STAT and V$FLASH_RECOVERY_AREA_USAGE views from time to time.
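
For a quick feel of how much space the Flashback Logs themselves are taking, something like this can be run alongside those views (a sketch) :

SQL> select count(*) number_of_logs, sum(bytes)/1048576 total_mb
  2  from v$flashback_database_logfile
  3  /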

See Oracle Support Doc# 1194013.1 for a discrepancy between the first two views.

Note : If you have a Standby database configured, also see John Hallas's blog post.
.
.
.

Categories: DBA Blogs

Multi-Version Read Consistency in Oracle

Sun, 2016-10-23 10:19
My Linked-In post on this topic.
.
.
.
Categories: DBA Blogs

OTN Appreciation Day : Undo and Redo

Mon, 2016-10-10 20:06
On OTN Appreciation Day, let me say I like the Undo and Redo features of Oracle.  I name them together as they work together.

Undo also supports MultiVersionReadConsistency -- a great advantage of Oracle.

Redo, with Archive Logging, also supports Online Backups -- an absolute necessity.

These features have been around for almost 30 years now.

Here are some Quick and Rough Notes on Undo and Redo.
.
.
.
Categories: DBA Blogs

Undo and Redo

Mon, 2016-10-10 20:04
Quick and Rough Notes :


Undo and Redo


Undo is where Oracle logs how to reverse a transaction (one or more DMLs in a transaction)

Redo is where Oracle logs how to replay a transaction

Undo and Redo are written to as the transaction proceeds, not merely at the end of the transaction
(imagine a transaction that consists of 1 million single-row inserts; each distinct insert is written to undo and redo)
Undo segments
Oracle dynamically creates and drops Undo segments depending on transaction volume
An undo segment consists of multiple extents. As a transaction grows beyond the current extent, a new extent may be allocated
One undo segment can support multiple transactions but a transaction cannot span multiple undo segments
After COMMIT, the undo information is retained for the undo_retention (or autotuned undo retention) period
At the end of the retention period, the undo is discarded and the extent is expired

Undo retention
Oracle may autotune the undo retention
If the datafile(s) for the active undo tablespace are set to autoextend OFF, Oracle automatically uses the datafile to the fullest and ignores undo_retention
If the datafile(s) are set to autoextend ON, Oracle autotunes undo_retention to match query lengths
Check V$undostat for this information

Undo and Read Consistency
Oracle's implementation of MultiVersionReadConsistency relies on a user session being able to read the undo generated by another session
A session may need to read the prior image of data because the data has been modified (and may even have been committed) by another session
It clones the current version of the block it is reading and applies the undo for that block to create its read consistent version
Flashback Query is supported by reading from Undo
Isolation levels (READ COMMITTED, SERIALIZABLE, READ ONLY) 
Read Consistency with READ COMMITTED is at *statement* level by default
A session running multiple queries may, by default, read a different version of the data for each query because Read Committed is enforced for each statement
(This also means that if you have a PLSQL block running the same SQL multiple times, each execution can see a different version of the data -- if the data is modified by another session between executions of the SQL !)
A session can choose to set its ISOLATION LEVEL to SERIALIZABLE, which means that every query sees the same version of data
This works only for short-running queries against data that has few changes (or is read-only)
SERIALIZABLE can update data provided that the same data hasn't been updated and committed by another session after the start (else you get ORA-08177)
READ ONLY does not allow the session to make changes

Transactions
When a transaction is in progress, it is identified by the Transaction Address, Undo segment, slot and sequence
The ITL slot in the block header contains the reference (address) to the Undo
The SCN is assigned at commit time (therefore a transaction doesn't begin with an SCN)

Temp Undo
12c also allows temporary undo
Normally, changes to GTT generate undo which needs to be written to undo segments
With 12c temp undo, those undo entries are also, like the actual changes, temporary and can be discarded when the commit is issued
Thus, the undo doesn't need to be written to disk (remember data in a GTT is not visible to another session, so there is no need to persist the undo)

Redo also captures Undo
One transaction (or multiple concurrent transactions) may have updated multiple database blocks
So, DBWR may have written some of the modified buffers to disk even before the transaction COMMIT has been issued
This means that some of the blocks on disk may have uncommitted changes
What happens if the instance were to fail (e.g. a bug takes down a background process or the server crashes due to an OS bug or a CPU failure) ?
On instance recovery, Oracle must identify the uncommitted transactions and roll them back
But if the undo for them was only in memory and was lost on instance/server failure, how can Oracle roll back the uncommitted transactions ?
Oracle knows that it must "undo" modified blocks
This is done by protecting the undo through the redo as well
Before a modified buffer is written to disk by DBWR, LGWR writes the redo for it
That redo also captures the undo
This ensures that, whenever Instance Recovery or Media Recovery is needed, the undo is also available
The Rollforward process writes the undo to the undo segments
This allows Oracle to roll back the uncommitted transaction because the undo is now on disk (and not lost from memory)

Redo Strands
Redo consists of multiple strands
Since 10g, Oracle has introduced private strands for single-instance databases
This allows a process to manage its private strand of redo until it decides to commit
At commit time, the private strand is written into the public redo area and this allows LGWR to flush the redo to disk

IMU
Similarly, Oracle also manages undo "in memory" (using IMU pools)
This means that, for a short period or for small transactions, Undo is managed in memory rather than through undo segments
Therefore, Oracle doesn't have to track undo segment changes in the redo
This also allows bundling the undo for multiple changes into a single redo record, instead of separate redo records

RAC
In RAC, every instance has (a) a separate Redo Thread (b) a separate Undo Tablespace
However, the redo thread must be readable by every other instance -- as instance recovery by another (surviving) instance needs to read the redo
Similarly, the undo tablespace is read by any other instance because queries in instance 2 may need to read undo of instance 1 for read-consistency
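
As a quick illustration of the autotuned undo retention mentioned above, V$UNDOSTAT can be sampled along these lines (a sketch; it lists the most recent 10-minute windows with the retention Oracle has tuned for each) :

SQL> select begin_time, end_time, tuned_undoretention, maxquerylen
  2  from (select * from v$undostat order by begin_time desc)
  3  where rownum <= 6
  4  /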
Categories: DBA Blogs

Pages