Hemant K Chitale

I am an Oracle Database Specialist in Singapore.
get an rss feed of this blog at http://hemantoracledba.blogspot.com/feeds/posts/default?alt=rss
follow me on twitter : @HemantKChitale

12.2 New Features -- 4 : AWR for Pluggable Database

Thu, 2016-12-01 03:29
12.2 now allows AWR Snapshots and Reports to be created at the PDB level.

Here I demonstrate a Manual Snapshot.  Although Automatic PDB AWR Snapshots are possible (with the AWR_PDB_AUTOFLUSH_ENABLED parameter), they are disabled by default and Oracle recommends Manual Snapshots.
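If you do want automatic snapshots in a PDB, here is a minimal sketch (run inside the PDB; the 60-minute interval is illustrative, not a recommendation):

```sql
-- Inside the PDB (e.g. after ALTER SESSION SET CONTAINER=PDB1)
alter system set awr_pdb_autoflush_enabled = TRUE;

-- The PDB-level snapshot interval defaults to no automatic snapshots;
-- set it to, say, 60 minutes (value illustrative)
exec dbms_workload_repository.modify_snapshot_settings(interval => 60);
```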

SQL> connect / as sysdba
Connected.
SQL> alter session set container=PDB1;

Session altered.

SQL> exec dbms_workload_repository.create_snapshot();

PL/SQL procedure successfully completed.

SQL>


I then proceed to create an AWR Report, still in the PDB1 container.

SQL> @?/rdbms/admin/awrrpt

Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
AWR reports can be generated in the following formats. Please enter the
name of the format at the prompt. Default value is 'html'.

'html' HTML format (default)
'text' Text format
'active-html' Includes Performance Hub active report

Enter value for report_type: text

Type Specified: text

Specify the location of AWR Data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
AWR_ROOT - Use AWR data from root (default)
AWR_PDB - Use AWR data from PDB
Enter value for awr_location: AWR_PDB




Current Instance
~~~~~~~~~~~~~~~~
DB Id DB Name Inst Num Instance Container Name
-------------- -------------- -------------- -------------- --------------
3774315809 HKCORCL 1 HKCORCL PDB1


Root DB Id Container DB Id AWR DB Id
--------------- --------------- ---------------
947935822 3774315809 3774315809









Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
------------ ---------- --------- ---------- ------
3774315809 1 HKCORCL HKCORCL HKCORCL.comp

Using 3774315809 for database Id
Using 1 for instance number


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.


Enter value for num_days: 1

Listing the last day's Completed Snapshots
Instance DB Name Snap Id Snap Started Snap Level
------------ ------------ ---------- ------------------ ----------

HKCORCL HKCORCL 1 01 Dec 2016 08:48 1
2 01 Dec 2016 08:49 1
3 01 Dec 2016 08:52 1


Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 1
Begin Snapshot Id specified: 1

Enter value for end_snap: 3
End Snapshot Id specified: 3




Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_1_3.txt. To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name:

Using the report name awrrpt_1_1_3.txt


Here's a look at the header of the AWR report.

WORKLOAD REPOSITORY PDB report (PDB snapshots)

DB Name DB Id Unique Name DB Role Edition Release RAC CDB
------------ ----------- ----------- ---------------- ------- ---------- --- ---
HKCORCL 3774315809 HKCORCL PRIMARY EE 12.2.0.1.0 NO NO

Instance Inst Num Startup Time
------------ -------- ---------------
HKCORCL 1 16-Nov-16 06:13

PDB Name PDB Id PDB DB Id Open Time
------------ ------ ---------- ---------------
PDB1 3 3774315809 25-Nov-16 14:11

Host Name Platform CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
HKCORCL.compute- Linux x86 64-bit 2 2 1 7.05

Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 1 01-Dec-16 08:48:46 0 4.0
End Snap: 3 01-Dec-16 08:52:08 1 12.0
Elapsed: 3.36 (mins)
DB Time: 0.29 (mins)

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 0.1 4.3 0.00 0.06
DB CPU(s): 0.1 2.9 0.00 0.04
Background CPU(s): 0.0 0.0 0.00 0.00
Redo size (bytes): 138,443.8 6,976,599.0
Logical read (blocks): 1,798.4 90,625.0
Block changes: 282.3 14,224.3
Physical read (blocks): 21.0 1,055.8
Physical write (blocks): 0.7 34.5
Read IO requests: 20.9 1,051.8
Write IO requests: 0.3 12.5
Read IO (MB): 0.2 8.3
Write IO (MB): 0.0 0.3
IM scan rows: 0.0 0.0
Session Logical Read IM: 0.0 0.0
User calls: 1.5 77.5
Parses (SQL): 17.9 904.0
Hard parses (SQL): 3.2 161.5
SQL Work Area (MB): 2.5 123.5
Logons: 0.0 1.0
Executes (SQL): 45.7 2,302.3
Rollbacks: 0.0 0.0
Transactions: 0.0

Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Avg % DB Wait
Event Waits Time (sec) Wait time Class
------------------------------ ----------- ---------- --------- ------ --------
DB CPU 11.5 66.9
db file sequential read 3,758 2.5 667.74us 14.6 User I/O
direct path write 23 .7 29.08ms 3.9 User I/O
flashback log file sync 36 .6 16.98ms 3.6 User I/O
local write wait 12 .4 37.17ms 2.6 User I/O
acknowledge over PGA limit 9 .1 9.50ms .5 Schedule
control file sequential read 189 .1 293.42us .3 System I
PGA memory operation 3,687 0 11.02us .2 Other
db file scattered read 4 0 8.32ms .2 User I/O
log file sync 3 0 9.35ms .2 Commit


The Header identifies the PDB being reported on.  Note that Snapshots 1 to 3 are local to the PDB and are not in the Root.  PDB Snapshots can be maintained (created or dropped) in the same manner as CDB Snapshots.  (Note : PDB AWR Snapshots are in the view AWR_PDB_SNAPSHOT, not DBA_HIST_SNAPSHOT.)
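For example, PDB-local snapshots can be listed and dropped with the same DBMS_WORKLOAD_REPOSITORY calls used in the Root. A sketch (the snapshot IDs are illustrative):

```sql
-- Still connected to the PDB
select snap_id, begin_interval_time, end_interval_time
from   awr_pdb_snapshot
order  by snap_id;

-- Drop a range of PDB-local snapshots (IDs 1 and 2 are illustrative)
exec dbms_workload_repository.drop_snapshot_range(low_snap_id => 1, high_snap_id => 2);
```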




In contrast, below is the Header for a CDB-level report, where Automatic Snapshots have already advanced the Snap IDs to 379 and 380.  Thus, the CDB snapshots are numbered independently of the PDB snapshots.

WORKLOAD REPOSITORY report for

DB Name DB Id Unique Name DB Role Edition Release RAC CDB
------------ ----------- ----------- ---------------- ------- ---------- --- ---
HKCORCL 947935822 HKCORCL PRIMARY EE 12.2.0.1.0 NO YES

Instance Inst Num Startup Time
------------ -------- ---------------
HKCORCL 1 16-Nov-16 06:13

Host Name Platform CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
HKCORCL.compute- Linux x86 64-bit 2 2 1 7.05

Snap Id Snap Time Sessions Curs/Sess PDBs
--------- ------------------- -------- --------- -----
Begin Snap: 379 01-Dec-16 08:00:47 51 .6 2
End Snap: 380 01-Dec-16 09:00:09 62 .9 2
Elapsed: 59.36 (mins)
DB Time: 1.14 (mins)



Note how it doesn't identify a PDB.

You need to be explicitly connected to a PDB before awrrpt offers the option to generate a PDB-level AWR report.
.
.
.
Categories: DBA Blogs

12.2 New Features -- 3 : Flashback Pluggable Database

Fri, 2016-11-25 08:35
12.1 allowed Point In Time Recovery of a Pluggable Database, but not Flashback of an individual PDB.

12.2 now allows Flashback of an individual PDB.  This is easier with Local Undo Tablespaces than with a Shared Undo Tablespace.
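You can confirm whether the CDB is running in Local Undo mode with a quick query against DATABASE_PROPERTIES, for example:

```sql
select property_name, property_value
from   database_properties
where  property_name = 'LOCAL_UNDO_ENABLED';
```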

Here is a quick demo (all times in UTC timezone) :

[oracle@HKCORCL ~]$ sqlplus system/Oracle_4U@PDB1

SQL*Plus: Release 12.2.0.1.0 Production on Fri Nov 25 14:19:06 2016

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Last Successful login time: Thu Nov 24 2016 01:03:52 +00:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> select count(*) from hr.employees_part;

COUNT(*)
----------
107

SQL> drop table hr.employees_part purge;

Table dropped.

SQL> connect / as sysdba
Connected.
SQL> alter system switch logfile;

System altered.

SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL>
SQL> select sysdate, sysdate-oldest_flashback_time
2 from v$flashback_database_log;

SYSDATE SYSDATE-OLDEST_FLASHBACK_TIME
--------- -----------------------------
25-NOV-16 2.36273148

SQL>
SQL> flashback pluggable database pdb1
2 to timestamp sysdate-2/24;

Flashback complete.

SQL> alter pluggable database pdb1 open;
alter pluggable database pdb1 open
*
ERROR at line 1:
ORA-01113: file 17 needs media recovery
ORA-01110: data file 17:
'/u02/app/oracle/oradata/HKCORCL/4157E08302CC2021E053B2D4100AABA3/datafile/o1_mf
_pdb1undo_d3dgxhbh_.dbf'


SQL> alter pluggable database pdb1 open resetlogs;

Pluggable database altered.

SQL>
SQL> connect system/Oracle_4U@PDB1
Connected.
SQL> select count(*) from hr.employees_part;

COUNT(*)
----------
107

SQL>


(Note : The 12.2 documentation shows a TO TIME clause; the actual syntax is TO TIMESTAMP.)
I have flashed back PDB1 to a point 2 hours ago, when the table HR.EMPLOYEES_PART still existed.

Let's look for messages in the alert log.

2016-11-25T14:19:52.992589+00:00
Archived Log entry 11 added for T-1.S-11 ID 0x38800462 LAD:1
2016-11-25T14:19:57.621705+00:00
alter pluggable database pdb1 close
2016-11-25T14:19:57.640353+00:00
PDB1(3):JIT: pid 7920 requesting stop
2016-11-25T14:19:58.885892+00:00
Pluggable database PDB1 closed
Completed: alter pluggable database pdb1 close
2016-11-25T14:26:10.205824+00:00
flashback pluggable database pdb1
to timestamp sysdate-2/24
2016-11-25T14:26:10.627900+00:00
Flashback Restore Start
2016-11-25T14:26:11.513882+00:00
Restore Flashback Pluggable Database PDB1 (3) until change 3536013
Flashback Restore Complete
2016-11-25T14:26:11.707236+00:00
Flashback Media Recovery Start
2016-11-25T14:26:11.718480+00:00
Serial Media Recovery started
2016-11-25T14:26:12.006472+00:00
Recovery of Online Redo Log: Thread 1 Group 2 Seq 11 Reading mem 0
Mem# 0: /u04/app/oracle/redo/redo02.log
2016-11-25T14:26:12.283587+00:00
Incomplete Recovery applied until change 3536477 time 11/25/2016 12:26:56
Flashback Media Recovery Complete
Flashback Pluggable Database PDB1 (3) recovered until change 3536477, at 11/25/2016 12:26:56
Completed: flashback pluggable database pdb1
to timestamp sysdate-2/24
2016-11-25T14:26:21.451523+00:00
alter pluggable database pdb1 open
PDB1(3):Autotune of undo retention is turned on.
2016-11-25T14:26:21.659109+00:00
Pdb PDB1 hit error 1113 during open read write (1) and will be closed.
2016-11-25T14:26:21.659410+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_ora_7920.trc:
ORA-01113: file 17 needs media recovery
ORA-01110: data file 17: '/u02/app/oracle/oradata/HKCORCL/4157E08302CC2021E053B2D4100AABA3/datafile/o1_mf_pdb1undo_d3dgxhbh_.dbf'
PDB1(3):JIT: pid 7920 requesting stop
2016-11-25T14:26:21.804780+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 9: '/u02/app/oracle/oradata/HKCORCL/PDB1/system01.dbf'
ORA-1113 signalled during: alter pluggable database pdb1 open...
2016-11-25T14:26:22.086212+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 10: '/u02/app/oracle/oradata/HKCORCL/PDB1/sysaux01.dbf'
2016-11-25T14:26:22.175778+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 12: '/u02/app/oracle/oradata/HKCORCL/PDB1/users01.dbf'
2016-11-25T14:26:22.270876+00:00
Errors in file /u01/app/oracle/diag/rdbms/hkcorcl/HKCORCL/trace/HKCORCL_m000_9995.trc:
ORA-01110: data file 17: '/u02/app/oracle/oradata/HKCORCL/4157E08302CC2021E053B2D4100AABA3/datafile/o1_mf_pdb1undo_d3dgxhbh_.dbf'
Checker run found 4 new persistent data failures
2016-11-25T14:26:39.804216+00:00
alter pluggable database pdb1 open resetlogs
2016-11-25T14:26:40.377390+00:00
Online datafile 17
Online datafile 12
Online datafile 10
Online datafile 9
2016-11-25T14:26:40.881181+00:00
PDB1(3):Autotune of undo retention is turned on.
PDB1(3):Endian type of dictionary set to little
PDB1(3):[7920] Successfully onlined Undo Tablespace 7.
PDB1(3):Undo initialization finished serial:0 start:868281239 end:868281333 diff:94 ms (0.1 seconds)
PDB1(3):Database Characterset for PDB1 is AL32UTF8
PDB1(3):JIT: pid 7920 requesting stop
2016-11-25T14:26:42.441388+00:00
PDB1(3):Autotune of undo retention is turned on.
2016-11-25T14:26:42.827673+00:00
PDB1(3):Endian type of dictionary set to little
PDB1(3):[7920] Successfully onlined Undo Tablespace 7.
PDB1(3):Undo initialization finished serial:0 start:868283079 end:868283168 diff:89 ms (0.1 seconds)
PDB1(3):Pluggable database PDB1 dictionary check beginning
2016-11-25T14:26:43.706672+00:00
PDB1(3):Pluggable Database PDB1 Dictionary check complete
PDB1(3):Database Characterset for PDB1 is AL32UTF8
2016-11-25T14:26:44.083617+00:00
PDB1(3):Opatch validation is skipped for PDB PDB1 (con_id=0)
PDB1(3):Opening pdb with no Resource Manager plan active
2016-11-25T14:26:45.205147+00:00
Starting control autobackup

Deleted Oracle managed file /u03/app/oracle/fast_recovery_area/HKCORCL/415864F430FE5FFEE053B2D4100A149C/backupset/2016_11_16/o1_mf_nnndf_TAG20161116T024856_d2qldlnv_.bkp
2016-11-25T14:26:46.523130+00:00
Deleted Oracle managed file /u03/app/oracle/fast_recovery_area/HKCORCL/3E09703FB0AF1A7EE053DE4BC40A6C1D/backupset/2016_11_16/o1_mf_nnndf_TAG20161116T024856_d2qlfzqg_.bkp
Control autobackup written to DISK device

handle '/u03/app/oracle/fast_recovery_area/HKCORCL/autobackup/2016_11_25/o1_mf_s_928852005_d3jlk651_.bkp'

Pluggable database PDB1 closed
Completed: alter pluggable database pdb1 open resetlogs


The set of ORA-01113 and ORA-01110 errors appeared when I tried to open PDB1 without a RESETLOGS.
The OPEN RESETLOGS issued at 2016-11-25T14:26:39.804216+00:00 was successful.
(Note : The ALTER SYSTEM SWITCH LOGFILE wasn't strictly required, but I like to archive out the CURRENT redo whenever I take a significant action against the database.)
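An alternative to a timestamp-based flashback is a named Restore Point, which 12.2 also supports at the PDB level. A sketch (the restore point name is illustrative):

```sql
-- While connected to the PDB, create a restore point (name is illustrative)
alter session set container = PDB1;
create restore point before_change;

-- Later, from the Root :
alter session set container = CDB$ROOT;
alter pluggable database pdb1 close;
flashback pluggable database pdb1 to restore point before_change;
alter pluggable database pdb1 open resetlogs;
```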

.
.
.

Categories: DBA Blogs

12.2 New Features -- 2 : Partitioning an Existing Table

Wed, 2016-11-23 19:27
A non-partitioned table can now be Partitioned online, without having to use DBMS_REDEFINITION.

SQL> connect hr/Oracle_4U@PDB1
Connected.
SQL> select count(*) from employees;

COUNT(*)
----------
107

SQL> create table employees_part as select * from employees;

Table created.

SQL> select table_name from user_part_tables;

no rows selected

SQL> alter table employees_part
2 modify
3 partition by range (last_name)
4 (partition p_N values less than ('O'),
5 partition p_Q values less than ('R'),
6 partition p_LAST values less than (MAXVALUE))
7 online;

Table altered.

SQL>
SQL> select partition_name, high_value
2 from user_tab_partitions
3 where table_name = 'EMPLOYEES_PART'
4 order by partition_position
5 /

PARTITION_NAME
--------------------------------------------------------------------------------
HIGH_VALUE
------------
P_N
'O'

P_Q
'R'

P_LAST
MAXVALUE


SQL>
SQL> select table_name, partitioning_type, partition_count
2 from user_part_tables
3 where table_name = 'EMPLOYEES_PART'
4 /

TABLE_NAME
--------------------------------------------------------------------------------
PARTITION PARTITION_COUNT
--------- ---------------
EMPLOYEES_PART
RANGE 3


SQL>
SQL> select partition_name, num_rows
2 from user_tab_partitions
3 where table_name = 'EMPLOYEES_PART'
4 order by partition_position
5 /

PARTITION_NAME
--------------------------------------------------------------------------------
NUM_ROWS
----------
P_N
71

P_Q
10

P_LAST
26


SQL>


I was able to convert a Non-Partitioned Table to a Range-Partitioned Table online.
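The same ALTER TABLE ... MODIFY syntax also accepts an UPDATE INDEXES clause, so dependent indexes can be converted (e.g. to LOCAL partitioned indexes) in the same online operation. A sketch reusing the table above (the index name is illustrative):

```sql
alter table employees_part
  modify
  partition by range (last_name)
  (partition p_N    values less than ('O'),
   partition p_Q    values less than ('R'),
   partition p_LAST values less than (MAXVALUE))
  online
  update indexes (emp_part_lname_ix local);
```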
.
.
.

Categories: DBA Blogs

12.2 New Features -- 1 : Separate Undo Tablespace for each PDB

Wed, 2016-11-23 19:06
Unlike 12.1 MultiTenant, 12.2 introduces a separate Undo Tablespace for each PDB.

SQL> l
1 select c.con_id, c.name con_name, t.tablespace_name, t.contents, t.status
2 from v$containers c, cdb_tablespaces t
3 where c.con_id=t.con_id
4 and t.tablespace_name like '%UNDO%'
5* order by 1,2
SQL> /

CON_ID CON_NAME TABLESPACE_NAME CONTENTS STATUS
---------- ---------------- ---------------- --------------------- ---------
1 CDB$ROOT UNDOTBS1 UNDO ONLINE
3 PDB1 UNDOTBS1 UNDO ONLINE
5 PDB2 UNDOTBS1 UNDO ONLINE

SQL>


I have two PDBs and each PDB has an Undo Tablespace.

Let me create a new Undo Tablespace.

SQL> connect system/Oracle_4U@PDB1
Connected.
SQL> create undo tablespace PDB1UNDO ;

Tablespace created.

SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
SQL> alter system set undo_tablespace='PDB1UNDO';

System altered.

SQL> drop tablespace undotbs1;

Tablespace dropped.

SQL> connect / as sysdba
Connected.
SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>
SQL> select c.con_id, c.name con_name, t.tablespace_name, t.contents, t.status
2 from v$containers c, cdb_tablespaces t
3 where c.con_id=t.con_id
4 and t.tablespace_name like '%UNDO%'
5 order by 1,2
6 /

CON_ID CON_NAME TABLESPACE_NAME CONTENTS STATUS
---------- ---------------- ---------------- --------------------- ---------
1 CDB$ROOT UNDOTBS1 UNDO ONLINE
3 PDB1 PDB1UNDO UNDO ONLINE
5 PDB2 UNDOTBS1 UNDO ONLINE

SQL>
SQL> connect system/Oracle_4U@PDB1
Connected.
SQL> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string PDB1UNDO
SQL> select tablespace_name, contents, status
2 from dba_tablespaces
3 where tablespace_name like '%UNDO%'
4 /

TABLESPACE_NAME CONTENTS STATUS
---------------- --------------------- ---------
PDB1UNDO UNDO ONLINE

SQL>


I was able to switch PDB1 to a new Undo Tablespace (and drop the old Undo Tablespace).
.
.
.
Categories: DBA Blogs

Flashback Database -- 3 : Purging (older) Flashback Logs

Sun, 2016-11-20 08:34
As demonstrated earlier, Oracle may maintain Flashback Logs for a duration longer than the Flashback Retention Target.  This can happen when db_recovery_file_dest_size is large enough to accommodate them (along with ArchiveLogs, Backups etc.).

For example, in my play database I have reset the retention target to 1 day, yet the Flashback Logs span more than 4 days :

SQL> show parameter flashback_ret

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 1440
SQL> select sysdate-oldest_flashback_time from v$flashback_database_log;

SYSDATE-OLDEST_FLASHBACK_TIME
-----------------------------
4.21686343



The DBA should not manually delete Flashback Logs.

The only way I've found to purge older Flashback Logs is to reset db_recovery_file_dest_size to a lower value, such that current FRA usage exceeds the new size.  This prompts Oracle to delete the oldest Flashback Logs.
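A sketch of that approach (the sizes are illustrative; note the original value with SHOW PARAMETER first, so you can restore it):

```sql
-- See what is consuming the FRA before shrinking it
select file_type, percent_space_used, percent_space_reclaimable
from   v$flash_recovery_area_usage;

-- Temporarily lower the FRA size so that current usage exceeds it
alter system set db_recovery_file_dest_size = 4G;

-- Oracle purges the oldest Flashback Logs; then restore the original size
alter system set db_recovery_file_dest_size = 10G;
```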

However, if ArchiveLogs are generated frequently and consume significant space in the FRA, you do run the risk of

ARCH: Archival stopped, error occurred. Will continue retrying
ORACLE Instance orcl - Archival Error
ORA-16038: log 1 sequence# nnn cannot be archived
ORA-19809: limit exceeded for recovery files

and/or

ORACLE Instance orcl- Cannot allocate log, archival required
Thread 1 cannot allocate new log, sequence nnn
All online logs need archiving
Examine archive trace files for archiving errors


errors.
So, be careful to monitor your FRA usage and the Flashback Logs, using the V$FLASHBACK_DATABASE_LOG, V$FLASHBACK_DATABASE_LOGFILE, V$FLASHBACK_DATABASE_STAT and V$FLASH_RECOVERY_AREA_USAGE views.
(See my earlier post, which also points to an Oracle Support Doc about the first two views.)
.
.
.


Categories: DBA Blogs

Flashback Database -- 2 : Flashback Requires Redo (ArchiveLog)

Mon, 2016-11-14 09:03
Although Flashback Logs support the ability to execute a FLASHBACK DATABASE command, the actual Flashback also requires Redo to be applied.  This is because the Flashback resets the images of blocks but doesn't guarantee that all transactions are reset to the same point in time (any one block can contain one or more active, uncommitted transactions, and there can be multiple blocks with active transactions at any point in time).  Therefore, since Oracle must revert the database to a consistent image, it needs to be able to apply redo as well (just as it would do for a roll-forward recovery from a backup).

Here's a quick demo of what happens if the redo is not available.

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select sysdate, l.oldest_flashback_scn, l.oldest_flashback_time
2 from v$flashback_database_log l;

SYSDATE OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T
------------------ -------------------- ------------------
14-NOV-16 22:51:37 7246633 14-NOV-16 22:39:43

SQL>

sh-4.1$ pwd
/u02/FRA/ORCL/archivelog/2016_11_14
sh-4.1$ date
Mon Nov 14 22:52:29 SGT 2016
sh-4.1$ rm *
sh-4.1$

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.

SQL> flashback database to timestamp to_date('14-NOV-16 22:45:00','DD-MON-RR HH24:MI:SS');
flashback database to timestamp to_date('14-NOV-16 22:45:00','DD-MON-RR HH24:MI:SS')
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 7246634 to SCN 7269074
ORA-38761: redo log sequence 70 in thread 1, incarnation 5 could not be
accessed


SQL>
SQL> l
1 select sequence#, first_change#, first_time
2 from v$archived_log
3 where resetlogs_time=(select resetlogs_time from v$database)
4 and sequence# between 60 and 81
5* order by 1
SQL> /

SEQUENCE# FIRST_CHANGE# FIRST_TIME
---------- ------------- ------------------
60 7245238 14-NOV-16 22:27:35
61 7248965 14-NOV-16 22:40:46
62 7250433 14-NOV-16 22:40:52
63 7251817 14-NOV-16 22:41:04
64 7253189 14-NOV-16 22:41:20
65 7254583 14-NOV-16 22:41:31
66 7255942 14-NOV-16 22:41:44
67 7257317 14-NOV-16 22:41:59
68 7258689 14-NOV-16 22:42:10
69 7260094 14-NOV-16 22:42:15
70 7261397 14-NOV-16 22:42:22
71 7262843 14-NOV-16 22:42:28
72 7264269 14-NOV-16 22:42:32
73 7265697 14-NOV-16 22:42:37
74 7267121 14-NOV-16 22:42:43
75 7269075 14-NOV-16 22:48:05
76 7270476 14-NOV-16 22:48:11
77 7271926 14-NOV-16 22:48:17
78 7273370 14-NOV-16 22:48:23
79 7274759 14-NOV-16 22:48:32
80 7276159 14-NOV-16 22:48:39
81 7277470 14-NOV-16 22:48:43

22 rows selected.

SQL>



Note how the error message states that Redo(Archive)Log Sequence#70 is required, yet reports a range of SCNs that actually spans Sequence#60 to Sequence#74!

Bottom Line : Flashback Logs alone aren't adequate to flash back the database.  You also need the corresponding Redo.

Just to confirm that I can continue with the database in its current (non-flashed-back) state, in spite of the failed Flashback :

SQL> shutdown;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.
Database opened.
SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 89
Next log sequence to archive 90
Current log sequence 90
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
7289329

SQL>


Bottom Line : *Before* you attempt a FLASHBACK DATABASE to the OLDEST_FLASHBACK_TIME (or SCN) from V$FLASHBACK_DATABASE_LOG, ensure that you *do* have the "nearby" Archive/Redo Logs!
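A pre-flight sketch of that check, comparing the oldest flashback SCN against the archived logs still known to be on disk (V$ARCHIVED_LOG's DELETED column only reflects RMAN's view, so also verify the files physically exist):

```sql
select l.oldest_flashback_scn,
       min(a.first_change#) as oldest_available_redo_scn
from   v$flashback_database_log l,
       v$archived_log           a
where  a.deleted = 'NO'
group  by l.oldest_flashback_scn;
```

If oldest_available_redo_scn is higher than oldest_flashback_scn, the earliest part of the flashback window is not actually usable.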
.
.
.
Categories: DBA Blogs

Flashback Database -- 1 : Introduction to Operations

Mon, 2016-11-07 04:24
Continuing on my previous post,  ....

In 11gR2,  ALTER DATABASE FLASHBACK ON   and OFF can be executed when the database is OPEN.  Setting FLASHBACK OFF results in deletion of all Flashback Files.

Here is some information that I have pulled from my test database environment :

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL>
SQL> select oldest_flashback_scn, oldest_flashback_time,
2 retention_target, flashback_size
3 from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T RETENTION_TARGET FLASHBACK_SIZE
-------------------- ------------------ ---------------- --------------
7140652 07-NOV-16 10:53:30 180 314572800

SQL> select sysdate from dual;

SYSDATE
------------------
07-NOV-16 17:46:54

SQL>
SQL> select begin_time, end_time, flashback_data, estimated_flashback_size
2 from v$flashback_database_stat
3 order by begin_time;

BEGIN_TIME END_TIME FLASHBACK_DATA ESTIMATED_FLASHBACK_SIZE
------------------ ------------------ -------------- ------------------------
06-NOV-16 18:56:28 06-NOV-16 21:20:55 202129408 251873280
06-NOV-16 21:20:55 07-NOV-16 09:53:26 107102208 62054400
07-NOV-16 09:53:26 07-NOV-16 10:53:30 51609600 67866624
07-NOV-16 10:53:30 07-NOV-16 13:14:45 10682368 60887040
07-NOV-16 13:14:45 07-NOV-16 14:14:51 66002944 67986432
07-NOV-16 14:14:51 07-NOV-16 15:14:57 10018816 66112512
07-NOV-16 15:14:57 07-NOV-16 16:15:01 10190848 64441344
07-NOV-16 16:15:01 07-NOV-16 17:15:05 53559296 68751360
07-NOV-16 17:15:05 07-NOV-16 17:47:57 52862976 0

9 rows selected.

SQL>
SQL> select log#, sequence#, bytes/1048576 Size_MB, first_time
2 from v$flashback_database_logfile
3 order by sequence#;

LOG# SEQUENCE# SIZE_MB FIRST_TIME
---------- ---------- ---------- ------------------
6 6 50 07-NOV-16 09:00:46
1 7 50 07-NOV-16 10:36:01
2 8 50 07-NOV-16 13:13:22
3 9 50 07-NOV-16 13:43:28
4 10 50 07-NOV-16 16:43:49
5 11 50 07-NOV-16 17:44:42

6 rows selected.

SQL>


Firstly, we note (as in my previous blog post) that the available flashback window, from 10:53am to 5:46pm (almost 7 hours), exceeds the Flashback Retention Target of 3 hours (180 minutes).  Apparently, Flashback Logfiles 1 to 5 have already been purged (but I find no entries for the deletions in the alert log).

Note how the "earliest time" does not match across the three views.  The OLDEST_FLASHBACK_TIME is 10:53am, although V$FLASHBACK_DATABASE_STAT reports statistics from the previous day (I had enabled Flashback in the database at 18:56:27 of 06-Nov), while V$FLASHBACK_DATABASE_LOGFILE shows an existing logfile spanning 09:00am to 10:36am.

Let me do a Flashback.  I must rely on the V$FLASHBACK_DATABASE_LOG view to know that I cannot flash back beyond 10:53am.

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.
SQL>
SQL> flashback database to timestamp trunc(sysdate)+11/24;

Flashback complete.

SQL>
SQL> alter database open read only; --- to verify data if necessary

Database altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


SQL> alter database open resetlogs;

Database altered.

SQL>


A FLASHBACK DATABASE requires an OPEN RESETLOGS to open READ WRITE.

Let's look at the alert log for messages about the Flashback operation itself :

Mon Nov 07 17:56:36 2016
flashback database to timestamp trunc(sysdate)+11/24
Flashback Restore Start
Flashback Restore Complete
Flashback Media Recovery Start
started logmerger process
Parallel Media Recovery started with 2 slaves
Flashback Media Recovery Log /u02/FRA/ORCL/archivelog/2016_11_07/o1_mf_1_81_d2052ofj_.arc
Mon Nov 07 17:56:43 2016
Incomplete Recovery applied until change 7141255 time 11/07/2016 11:00:01
Flashback Media Recovery Complete
Completed: flashback database to timestamp trunc(sysdate)+11/24
Mon Nov 07 17:57:08 2016
alter database open read only


What happens if I disable and re-enable Flashback ?

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> alter database flashback off;

Database altered.

SQL>

From the alert log :
Mon Nov 07 18:03:02 2016
alter database flashback off
Stopping background process RVWR
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y32vjv_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y32xq0_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y3bhkx_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y3dd8r_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1y6r6bf_.flb
Deleted Oracle managed file /u02/FRA/ORCL/flashback/o1_mf_d1ycky3v_.flb
Flashback Database Disabled
Completed: alter database flashback off

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> alter database flashback on;

Database altered.

SQL>

From the alert log :
Mon Nov 07 18:04:21 2016
alter database flashback on
Starting background process RVWR
Mon Nov 07 18:04:21 2016
RVWR started with pid=30, OS id=12621
Flashback Database Enabled at SCN 7142426
Completed: alter database flashback on

From the FRA :
[oracle@ora11204 flashback]$ pwd
/u02/FRA/ORCL/flashback
[oracle@ora11204 flashback]$ ls -ltr
total 102416
-rw-rw----. 1 oracle oracle 52436992 Nov 7 18:04 o1_mf_d20nf7wc_.flb
-rw-rw----. 1 oracle oracle 52436992 Nov 7 18:05 o1_mf_d20nf5nz_.flb
[oracle@ora11204 flashback]$

SQL> alter session set nls_date_Format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select log#, sequence#, bytes/1048576 Size_MB, first_time
2 from v$flashback_database_logfile
3 order by sequence#;

LOG# SEQUENCE# SIZE_MB FIRST_TIME
---------- ---------- ---------- ------------------
2 1 50
1 1 50 07-NOV-16 18:04:22

SQL>



So, I can set FLASHBACK OFF and ON while the database is OPEN.  (But I can't execute a FLASHBACK DATABASE TO ... with the database OPEN.)
.
.
.

Categories: DBA Blogs

Flashback Database Logs can exceed the Retention Target

Fri, 2016-10-28 18:58
The documentation on the Flashback Retention Target in 11.2 and 12.1 states that this parameter specifies an upper limit on how far the database may be flashed back.

However, if the FRA (db_recovery_file_dest_size) is actually large enough, Oracle may retain flashback logs for a much longer duration.

SQL> alter session set nls_date_format='DD-MON-RR HH24:MI:SS';

Session altered.

SQL> select sysdate, l.* from v$flashback_database_log l;

SYSDATE OLDEST_FLASHBACK_SCN OLDEST_FLASHBACK_T RETENTION_TARGET
------------------ -------------------- ------------------ ----------------
FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------- ------------------------
29-OCT-16 07:42:44 6968261 28-OCT-16 22:35:50 180
157286400 86467584


SQL>
SQL> show parameter flashback

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 180
SQL>
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2260088 bytes
Variable Size 750781320 bytes
Database Buffers 310378496 bytes
Redo Buffers 5517312 bytes
Database mounted.
SQL> flashback database to timestamp trunc(sysdate);

Flashback complete.

SQL>


Thus, it is useful to check the V$FLASHBACK_DATABASE_LOG, V$FLASHBACK_DATABASE_LOGFILE, V$FLASHBACK_DATABASE_STAT and V$FLASH_RECOVERY_AREA_USAGE views from time to time.

See Oracle Support Doc# 1194013.1 for a discrepancy between the first two views.

Note : If you have Standby database configured, also see John Hallas's blog post.
.
.
.

Categories: DBA Blogs

Multi-Version Read Consistency in Oracle

Sun, 2016-10-23 10:19
My Linked-In post on this topic.
.
.
.
Categories: DBA Blogs

OTN Appreciation Day : Undo and Redo

Mon, 2016-10-10 20:06
On OTN Appreciation Day, let me say I like the Undo and Redo features of Oracle.  I name them together as they work together.

Undo also supports MultiVersionReadConsistency -- a great advantage of Oracle.

Redo, with Archive Logging, also supports Online Backups -- an absolute necessity.

These features have been around for almost 30 years now.

Here are some Quick and Rough Notes on Undo and Redo.
.
.
.
Categories: DBA Blogs

Undo and Redo

Mon, 2016-10-10 20:04
Quick and Rough Notes :


Undo and Redo


Undo is where Oracle logs how to reverse a transaction (one or more DMLs in a transaction)

Redo is where Oracle logs how to replay a transaction

Undo and Redo are written to as the transaction proceeds, not merely at the end of the transaction
(imagine a transaction that consists of 1 million single-row inserts; each distinct insert is written to undo and redo)
Undo segments
Oracle dynamically creates and drops Undo segments depending on transaction volume
An undo segment consists of multiple extents. As a transaction grows beyond the current extent, a new extent may be allocated
One undo segment can support multiple transactions but a transaction cannot span multiple undo segments
After COMMIT the undo information is retained for undo_retention or autotuned_undo_retention.
At the end of the retention period, the undo is discarded, the extent is expired

Undo retention
Oracle may autotune the undo retention
If the datafile(s) for the active undo tablespace are set to autoextend OFF, Oracle automatically uses the datafile to the fullest and ignores undo_retention
If the datafile(s) are set to autoextend ON, Oracle autotunes undo_retention to match query lengths
Check V$undostat for this information
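A sketch of such a check -- V$UNDOSTAT reports one row per (usually) 10-minute interval :

```sql
-- Compare the auto-tuned retention with the longest query seen
SELECT begin_time,
       end_time,
       tuned_undoretention,   -- auto-tuned retention, in seconds
       maxquerylen,           -- longest-running query in the interval, in seconds
       undoblks               -- undo blocks consumed in the interval
FROM   v$undostat
ORDER  BY begin_time;
```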

Undo and Read Consistency
Oracle's implementation of MultiVersionReadConsistency relies on a user session being able to read the undo generated by another session
A session may need to read the prior image of data because the data has been modified (and may even have been committed) by another session
The reading session clones the current version of the block and applies the undo for that block to create its read-consistent version
Flashback Query is supported by reading from Undo
Isolation levels (READ COMMITTED, SERIALIZABLE, READ ONLY) 
Read Consistency with READ COMMITTED is at *statement* level by default
Each of multiple queries run by a session may, by default, read a different version of the data because Read Committed is enforced per statement
(This also means that if you have a PLSQL block running the same SQL multiple times, each execution can see a different version of the data -- if the data is modified by another session between executions of the SQL !)
A session can choose to set its ISOLATION LEVEL to SERIALIZABLE, which means that every query in the transaction sees the same version of data
This works well only for short transactions with few changes to the data, or for read-only data
SERIALIZABLE can update data provided that the same data hasn't been updated and committed by another session after the transaction start (else you get ORA-08177)
READ ONLY does not allow the session to make changes
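The isolation levels above can be chosen per transaction; a sketch (the EMPLOYEES table here is just a hypothetical example) :

```sql
-- Transaction-level read consistency :
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- every query in this transaction sees the same snapshot of the data;
-- updating a row already changed and committed by another session
-- after the snapshot raises ORA-08177
COMMIT;

-- A purely read-only transaction (no DML allowed) :
SET TRANSACTION READ ONLY;
COMMIT;

-- Flashback Query also reads the prior image from Undo :
SELECT * FROM employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);
```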

Transactions
When a transaction is in progress, it is identified by the Transaction Address, Undo segment, slot and sequence
The ITL slot in the block header contains the reference (address) to the Undo
The SCN is assigned at commit time (therefore a transaction doesn't begin with an SCN)

Temp Undo
12c also allows temporary undo
Normally, changes to a GTT (Global Temporary Table) generate undo which needs to be written to undo segments
With 12c temp undo, those undo entries are also, like the actual changes, temporary and can be discarded when the commit is issued
Thus, the undo doesn't need to be written to disk (remember data in a GTT is not visible to another session, so there is no need to persist the undo)
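A sketch of enabling Temporary Undo at the session level (MY_GTT and SOME_SOURCE_TABLE are hypothetical names) :

```sql
-- Off by default in 12c; can be set at session or system level
ALTER SESSION SET temp_undo_enabled = TRUE;

-- DML on a GTT in this session now writes its undo to temporary segments,
-- so that undo no longer needs to be protected by redo
-- (my_gtt and some_source_table are hypothetical)
INSERT INTO my_gtt SELECT * FROM some_source_table;

-- V$TEMPUNDOSTAT reports temporary undo activity
SELECT begin_time, txncount, undoblkcnt FROM v$tempundostat;
```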

Redo also captures Undo
One transaction (or multiple concurrent transactions) may have updated multiple database blocks
So, DBWR may have written some of the modified buffers to disk even before the transaction COMMIT has been issued
This means that some of the blocks on disk may have uncommitted changes
What happens if the instance were to fail (e.g. a bug takes down a background process, or the server crashes due to an OS bug or a CPU failure) ?
On instance recovery, Oracle must identify the uncommitted transactions and roll them back
But if the undo for those transactions was only in memory and was lost on instance/server failure, how can Oracle roll back the uncommitted transactions ?
Oracle knows that it must "undo" modified blocks
This is done by protecting the undo through the redo as well
Before a modified buffer is written to disk by DBWR, LGWR writes the redo for it
That redo also captures the undo
This ensures that, when Instance Recovery or Media Recovery is needed, the undo is also available
The Rollforward phase writes the undo to the undo segments
This allows Oracle to roll back the uncommitted transactions because the undo is now on disk (and not lost from memory)

Redo Strands
Redo consists of multiple strands
Since 10g, Oracle has introduced private strands for single-instance databases
This allows a process to manage its private strand of redo until it decides to commit
At commit time, the private strand is written into the public redo area and this allows LGWR to flush the redo to disk

IMU
Similarly, Oracle also manages undo "in memory" (using IMU pools)
This means that, for a short period or for small transactions, Undo is managed in memory rather than through undo segments
Therefore, Oracle doesn't have to track undo segment changes in the redo
This also allows bundling the undo for multiple changes into a single redo record, instead of separate redo records

RAC
In RAC, every instance has (a) a separate Redo Thread and (b) a separate Undo Tablespace
However, each redo thread must be readable by every other instance -- instance recovery by another (surviving) instance needs to read the redo
Similarly, each undo tablespace is read by other instances because queries in instance 2 may need to read the undo of instance 1 for read-consistency
Categories: DBA Blogs

Obj# and DataObj# in 12c AWR

Sun, 2016-10-02 23:03
The 12c AWR Report section on Segment Statistics now reports both Obj# (OBJECT_ID) and DataObj# (DATA_OBJECT_ID).  This is useful information when you have (table) objects undergoing TRUNCATEs during the workload.
A TRUNCATE (or a MOVE {or REBUILD for an Index}) causes a reallocation of the data segment and change of the DATA_OBJECT_ID.

Thus :

SQL> show user
USER is "HEMANT"
SQL> create table hkc_t_1 (id_col number);

Table created.

SQL> select object_id, data_object_id
2 from user_objects
3 where object_name = 'HKC_T_1'
4 and object_type = 'TABLE'
5 /

OBJECT_ID DATA_OBJECT_ID
---------- --------------
94422 94422

SQL> insert into hkc_t_1 values (1);

1 row created.

SQL> truncate table hkc_t_1;

Table truncated.

SQL> select object_id, data_object_id
2 from user_objects
3 where object_name = 'HKC_T_1'
4 and object_type = 'TABLE'
5 /

OBJECT_ID DATA_OBJECT_ID
---------- --------------
94422 94423

SQL>


The 12c AWR report shows both the Obj# and DataObj#.

In my lab, I created a workload where two tables were TRUNCATED 100 times in a loop that would insert fresh rows after each TRUNCATE.  (Those familiar with Peoplesoft Batch Jobs would know this behaviour)

Some extracts from the AWR show :

Segments by Physical Writes               DB/Inst: NONCDB/NONCDB  Snaps: 53-54
-> Total Physical Writes: 340,305
-> Captured Segments account for 0.7% of Total
-> When ** MISSING ** occurs, some of the object attributes may not be available

Tablespace
Owner Name
-------------------- ----------
Subobject Obj. Physical
Object Name Name Type Obj# Dataobj# Writes %Total
-------------------- ---------- ----- ---------- ---------- ------------ -------
HEMANT HEMANT
SOURCE_TB_1 TABLE 94220 94220 1,575 .46
HEMANT USERS
LIST_TB_2 TABLE 94219 94231 263 .08
HEMANT USERS
WORKLOAD_LOG TABLE 94221 94221 71 .02
SYS SYSTEM
COL$ TABLE 21 2 46 .01
SYS SYSTEM
SEG$ TABLE 14 8 45 .01
------------------------------------------------------

Segments by Physical Write Requests DB/Inst: NONCDB/NONCDB Snaps: 53-54
-> Total Physical Write Requests: 175,206
-> Captured Segments account for 22.3% of Total
-> When ** MISSING ** occurs, some of the object attributes may not be available

Tablespace
Owner Name
-------------------- ----------
Subobject Obj. Phys Write
Object Name Name Type Obj# Dataobj# Requests %Total
-------------------- ---------- ----- ---------- ---------- ------------ -------
HEMANT USERS
LIST_TB_1 TABLE 94218 94370 1,086 .62
HEMANT USERS
LIST_TB_1 TABLE 94218 94234 983 .56
HEMANT USERS
LIST_TB_1 TABLE 94218 94228 981 .56
HEMANT USERS
LIST_TB_1 TABLE 94218 94232 971 .55
HEMANT USERS
LIST_TB_1 TABLE 94218 94218 964 .55
------------------------------------------------------

Segments by Table Scans DB/Inst: NONCDB/NONCDB Snaps: 53-54
-> Total Table Scans: 243
-> Captured Segments account for 18.9% of Total
-> When ** MISSING ** occurs, some of the object attributes may not be available

Tablespace
Owner Name
-------------------- ----------
Subobject Obj. Table
Object Name Name Type Obj# Dataobj# Scans %Total
-------------------- ---------- ----- ---------- ---------- ------------ -------
HEMANT USERS
LIST_TB_1 TABLE 94218 94240 1 .41
HEMANT USERS
LIST_TB_1 TABLE 94218 94248 1 .41
HEMANT USERS
LIST_TB_1 TABLE 94218 94388 1 .41
HEMANT USERS
LIST_TB_1 TABLE 94218 94224 1 .41
HEMANT USERS
LIST_TB_1 TABLE 94218 94234 1 .41
------------------------------------------------------

Segments by DB Blocks Changes DB/Inst: NONCDB/NONCDB Snaps: 53-54
-> % of Capture shows % of DB Block Changes for each top segment compared
-> with total DB Block Changes for all segments captured by the Snapshot
-> When ** MISSING ** occurs, some of the object attributes may not be available

Tablespace
Owner Name
-------------------- ----------
Subobject Obj. DB Block % of
Object Name Name Type Obj# Dataobj# Changes Capture
-------------------- ---------- ----- ---------- ---------- ------------ -------
** MISSING ** TEMP
** MISSING: -4001635 MISSING ** UNDEF 4.2550E+09 4218752 10,032 2.89
HEMANT USERS
LIST_TB_2 TABLE 94219 94235 7,616 2.20
HEMANT USERS
LIST_TB_2 TABLE 94219 94231 7,488 2.16
HEMANT USERS
LIST_TB_2 TABLE 94219 94403 7,392 2.13
HEMANT USERS
LIST_TB_1 TABLE 94218 94314 7,360 2.12
------------------------------------------------------


These are the two target tables (with the current DATA_OBJECT_ID after 100 TRUNCATEs each) :

SQL> select object_name, object_id, data_object_id
2 from user_objects
3 where object_name like 'LIST_TB_%'
4 and object_type = 'TABLE'
5 order by 1
6 /

OBJECT_NAME OBJECT_ID DATA_OBJECT_ID
------------------------------ ---------- --------------
LIST_TB_1 94218 94418
LIST_TB_2 94219 94419

SQL>


The fact that the AWR report shows a different Dataobj# from the Obj# indicates that a TRUNCATE may have occurred sometime in the past (the TRUNCATE may have occurred before the beginning of the AWR report window !).  Multiple entries for the same table indicate that multiple TRUNCATEs occurred within the AWR report window.
Note : Although both tables underwent 100 TRUNCATEs in the workload, AWR reports only 5 occurrences of activity.
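One way (a sketch) to spot tables whose data segment has been reallocated at some point -- i.e. a TRUNCATE, MOVE or REBUILD has occurred since creation :

```sql
-- Objects where the data segment no longer matches the original OBJECT_ID
SELECT object_name, object_type, object_id, data_object_id
FROM   user_objects
WHERE  data_object_id IS NOT NULL
AND    data_object_id != object_id
ORDER  BY object_name;
```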
.
.
.

Categories: DBA Blogs

SQLLoader DIRECT option and Unique Index

Mon, 2016-09-26 09:59
The DIRECT parameter for SQLLoader Command-Line enables Direct Path Load which uses a Direct Path API instead of a regular INSERT statement to load data into the target table.

However, one needs to know how it handles a Unique Index on the target table.
It actually leaves the Unique Index UNUSABLE if duplicate values are loaded.  The Index rebuild fails, but the duplicate values remain in the table.

Here is a quick demo (this is in 12.1.0.2 MultiTenant).  I first set up the target table with a Unique Index.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@PDB1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Sep 26 22:36:51 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Sep 26 2016 22:26:16 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
3
SQL> create table test_sqlldr_direct
2 (id_column number,
3 data_column varchar2(15))
4 /

Table created.

SQL> create unique index test_sqlldr_direct_u1 on test_sqlldr_direct(id_column);

Index created.

SQL> insert into test_sqlldr_direct values (1, 'First Row');

1 row created.

SQL> commit;

Commit complete.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$

Next, I set up the datafile with a duplicate record and the controlfile.

[oracle@ora12102 Desktop]$ ls -l
total 8
-rw-r--r-- 1 oracle oinstall 40 Sep 26 22:40 load_data.dat
-rw-r--r-- 1 oracle oinstall 165 Sep 26 22:45 load_control.ctl
[oracle@ora12102 Desktop]$ cat load_data.dat
2,'Second Row'
3,'Third Row'
3,'Oops !'
[oracle@ora12102 Desktop]$ cat load_control.ctl
LOAD DATA
INFILE load_data.dat
APPEND INTO TABLE TEST_SQLLDR_DIRECT
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
id_column,
data_column)
[oracle@ora12102 Desktop]$

I am now ready to run a Direct Path Load.

[oracle@ora12102 Desktop]$ sqlldr hemant/hemant@pdb1 control=load_control.ctl direct=TRUE

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:47:09 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Direct

Load completed - logical record count 3.

Table TEST_SQLLDR_DIRECT:
3 Rows successfully loaded.

Check the log file:
load_control.log
for more information about the load.
[oracle@ora12102 Desktop]$

What is that ? 3 rows loaded successfully ?  So, the duplicate row also did get loaded ?  Let's check the log file.

[oracle@ora12102 Desktop]$ ls -ltr
total 12
-rw-r--r-- 1 oracle oinstall 40 Sep 26 22:40 load_data.dat
-rw-r--r-- 1 oracle oinstall 165 Sep 26 22:45 load_control.ctl
-rw-r--r-- 1 oracle oinstall 1833 Sep 26 22:47 load_control.log
[oracle@ora12102 Desktop]$ cat load_control.log

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:47:09 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Control File: load_control.ctl
Data File: load_data.dat
Bad File: load_data.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation: none specified
Path used: Direct

Table TEST_SQLLDR_DIRECT, loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect

Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
ID_COLUMN FIRST * , O(") CHARACTER
DATA_COLUMN NEXT * , O(") CHARACTER

The following index(es) on table TEST_SQLLDR_DIRECT were processed:
index HEMANT.TEST_SQLLDR_DIRECT_U1 was made unusable due to:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

Table TEST_SQLLDR_DIRECT:
3 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.

Bind array size not used in direct path.
Column array rows : 5000
Stream buffer bytes: 256000
Read buffer bytes: 1048576

Total logical records skipped: 0
Total logical records read: 3
Total logical records rejected: 0
Total logical records discarded: 0
Total stream buffers loaded by SQL*Loader main thread: 1
Total stream buffers loaded by SQL*Loader load thread: 0

Run began on Mon Sep 26 22:47:09 2016
Run ended on Mon Sep 26 22:47:11 2016

Elapsed time was: 00:00:01.88
CPU time was: 00:00:00.01
[oracle@ora12102 Desktop]$

Did you notice the section in the log file that says :
The following index(es) on table TEST_SQLLDR_DIRECT were processed:
index HEMANT.TEST_SQLLDR_DIRECT_U1 was made unusable due to:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

Apparently, the Index is left UNUSABLE.

[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@pdb1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Sep 26 22:50:51 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Mon Sep 26 2016 22:47:09 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> select status from user_indexes
2 where index_name = 'TEST_SQLLDR_DIRECT_U1'
3 /

STATUS
--------
UNUSABLE

SQL> select * from test_sqlldr_direct order by 1;

ID_COLUMN DATA_COLUMN
---------- ---------------
1 First Row
2 'Second Row'
3 'Third Row'
3 'Oops !'

SQL> alter index test_sqlldr_direct_u1 rebuild;
alter index test_sqlldr_direct_u1 rebuild
*
ERROR at line 1:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found


SQL>

We can see the duplicated row for ID_COLUMN=3 and find that the Index cannot be rebuilt.  Oracle has allowed duplicate rows to load and left the Index UNUSABLE.

So, if you are planning to use DIRECT=TRUE and have a Unique Index, make sure you check the status of the Index and/or check the Log file before you proceed with processing the data.
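A sketch of such a post-load check, using the demo table from above :

```sql
-- 1. Is the Unique Index still usable after the DIRECT=TRUE load ?
SELECT index_name, status
FROM   user_indexes
WHERE  table_name = 'TEST_SQLLDR_DIRECT';

-- 2. If UNUSABLE, identify the duplicated key values ...
SELECT id_column, COUNT(*)
FROM   test_sqlldr_direct
GROUP  BY id_column
HAVING COUNT(*) > 1;

-- 3. ... resolve the duplicates (delete or merge them), then rebuild
ALTER INDEX test_sqlldr_direct_u1 REBUILD;
```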


Conversely, here is how the data is handled without DIRECT=TRUE :

SQL> truncate table test_sqlldr_direct;

Table truncated.

SQL> insert into test_sqlldr_direct values (1,'First Row');

1 row created.

SQL> select status from user_indexes
2 where index_name = 'TEST_SQLLDR_DIRECT_U1'
3 /

STATUS
--------
VALID

SQL>

[oracle@ora12102 Desktop]$ rm load_control.log
[oracle@ora12102 Desktop]$ sqlldr hemant/hemant@PDB1 control=load_control.ctl

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:59:58 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Path used: Conventional
Commit point reached - logical record count 3

Table TEST_SQLLDR_DIRECT:
2 Rows successfully loaded.

Check the log file:
load_control.log
for more information about the load.
[oracle@ora12102 Desktop]$
[oracle@ora12102 Desktop]$ ls -ltr
total 16
-rw-r--r-- 1 oracle oinstall 40 Sep 26 22:40 load_data.dat
-rw-r--r-- 1 oracle oinstall 165 Sep 26 22:45 load_control.ctl
-rw-r--r-- 1 oracle oinstall 11 Sep 26 22:59 load_data.bad
-rw-r--r-- 1 oracle oinstall 1668 Sep 26 22:59 load_control.log
[oracle@ora12102 Desktop]$ cat load_data.bad
3,'Oops !'
[oracle@ora12102 Desktop]$ cat load_control.log

SQL*Loader: Release 12.1.0.2.0 - Production on Mon Sep 26 22:59:58 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Control File: load_control.ctl
Data File: load_data.dat
Bad File: load_data.bad
Discard File: none specified

(Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array: 64 rows, maximum of 256000 bytes
Continuation: none specified
Path used: Conventional

Table TEST_SQLLDR_DIRECT, loaded from every logical record.
Insert option in effect for this table: APPEND
TRAILING NULLCOLS option in effect

Column Name Position Len Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
ID_COLUMN FIRST * , O(") CHARACTER
DATA_COLUMN NEXT * , O(") CHARACTER

Record 3: Rejected - Error on table TEST_SQLLDR_DIRECT.
ORA-00001: unique constraint (HEMANT.TEST_SQLLDR_DIRECT_U1) violated


Table TEST_SQLLDR_DIRECT:
2 Rows successfully loaded.
1 Row not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.


Space allocated for bind array: 33024 bytes(64 rows)
Read buffer bytes: 1048576

Total logical records skipped: 0
Total logical records read: 3
Total logical records rejected: 1
Total logical records discarded: 0

Run began on Mon Sep 26 22:59:58 2016
Run ended on Mon Sep 26 22:59:58 2016

Elapsed time was: 00:00:00.07
CPU time was: 00:00:00.00
[oracle@ora12102 Desktop]$

SQL> select * from test_sqlldr_direct
2 order by id_column
3 /

ID_COLUMN DATA_COLUMN
---------- ---------------
1 First Row
2 'Second Row'
3 'Third Row'

SQL>
SQL> select status from user_indexes
2 where index_name = 'TEST_SQLLDR_DIRECT_U1'
3 /

STATUS
--------
VALID

SQL>

The duplicate row was rejected and written to the BAD file.
.
.
.
So, the next time you choose to use SQLLDR DIRECT=TRUE for its performance benefits, make sure you know how to validate the status of Unique Indexes and check the log file !  Else, you might allow duplicates if the incoming data has not been cleansed before loading into the database.
.
.
.
If you are running 11g, the behaviour is the same.
.
.
.
.

Categories: DBA Blogs

SQL*Net Message Waits

Sun, 2016-09-18 10:10
Here are some extracts from an 11.2.0.4 AWR Report.  I ran a simulated workload on this server for about 40 minutes and generated this report.  I understand that some DBAs may be misinterpreting SQL*Net message time.

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
ora11204 Linux x86 64-bit 2 2 1 3.04

Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 158 18-Sep-16 21:42:34 36 1.0
End Snap: 159 18-Sep-16 22:23:01 33 1.0
Elapsed: 40.45 (mins)
DB Time: 42.04 (mins)

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 10.2 0.12 0.07
DB CPU(s): 0.0 0.1 0.00 0.00
Redo size (bytes): 2,523.6 24,793.8
Logical read (blocks): 41.7 409.6


Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Total Wait Wait % DB
Event Waits Time (sec) Avg(ms) time Wait Class
------------------------------ ------------ ---- ------- ------ ----------
log file sync 1,095 2508 2291 99.5 Commit
db file sequential read 162 37.5 231 1.5 User I/O
DB CPU 19.1 .8
SQL*Net message to client 31,579 .1 0 .0 Network
Disk file operations I/O 103 0 0 .0 User I/O


Wait Classes by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Avg Avg
Total Wait Wait % DB Active
Wait Class Waits Time (sec) (ms) time Sessions
---------------- ---------------- ---------------- -------- ------ --------
Commit 1,095 2,509 2291 99.5 1.0
System I/O 12,899 2,210 171 87.6 0.9
User I/O 1,866 38 20 1.5 0.0
DB CPU 19 .8 0.0
Network 33,651 9 0 .4 0.0


Foreground Wait Class DB/Inst: ORCL/orcl Snaps: 158-159
-> s - second, ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
-> Captured Time accounts for 101.7% of Total DB time 2,522.36 (s)
-> Total FG Wait Time: 2,546.18 (s) DB CPU time: 19.14 (s)


Avg
%Time Total Wait wait
Wait Class Waits -outs Time (s) (ms) %DB time
-------------------- ---------------- ----- ---------------- -------- ---------
Commit 1,095 0 2,509 2291 99.5
User I/O 276 0 38 136 1.5
DB CPU 19 0.8
Network 31,579 0 0 0 0.0
Concurrency 21 0 0 0 0.0

Foreground Wait Events DB/Inst: ORCL/orcl Snaps: 158-159
-> s - second, ms - millisecond - 1000th of a second
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by wait time desc, waits desc (idle events last)
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0


Avg
%Time Total Wait wait Waits % DB
Event Waits -outs Time (s) (ms) /txn time
-------------------------- ------------ ----- ---------- ------- -------- ------
log file sync 1,095 0 2,509 2291 4.4 99.5
db file sequential read 162 0 37 231 0.7 1.5
SQL*Net message to client 31,579 0 0 0 127.9 .0
Disk file operations I/O 103 0 0 0 0.4 .0
latch: shared pool 2 0 0 4 0.0 .0
direct path sync 2 0 0 2 0.0 .0
db file scattered read 6 0 0 0 0.0 .0
jobq slave wait 5,522 100 2,770 502 22.4
SQL*Net message from clien 31,577 0 2,404 76 127.8


Wait Event Histogram DB/Inst: ORCL/orcl Snaps: 158-159
-> Units for Total Waits column: K is 1000, M is 1000000, G is 1000000000
-> % of Waits: value of .0 indicates value was <.05%; value of null is truly 0
-> % of Waits: column heading of <=1s is truly <1024ms, >1s is truly >=1024ms
-> Ordered by Event (idle events last)

% of Waits
-----------------------------------------------
Total
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
-------------------------- ----- ----- ----- ----- ----- ----- ----- ----- -----
ARCH wait on ATTACH 37 97.3 2.7
ARCH wait on DETACH 37 100.0
Disk file operations I/O 920 99.9 .1
LGWR wait for redo copy 54 100.0
Parameter File I/O 640 100.0
SQL*Net break/reset to cli 6 100.0
SQL*Net message to client 33.6K 100.0
...
SQL*Net message from clien 34.2K 82.7 9.5 2.7 1.6 .7 2.4 .3 .0


Other Instance Activity Stats DB/Inst: ORCL/orcl Snaps: 158-159
-> Ordered by statistic name

Statistic Total per Second per Trans
-------------------------------- ------------------ -------------- -------------
SQL*Net roundtrips to/from clien 31,579 13.0 127.9



Should I be worried about the SQL*Net message waits (to client, from client) ? How should I interpret them ?  I have more than 30K SQL*Net messages and roundtrips.  Apparently, messages from client are very slow -- at 76ms/message.  Apparently, round-trips are very slow -- at 13 round-trips per second.

SQL*Net message to client waits are not really measured in terms of the time it took for the message to reach the client.  Oracle doesn't know how long the transmission took.  It only knows the time taken to put the message onto the TCP stack on the database server itself.  Subsequent network transmission time is unknown. That is why SQL*Net message to client will always be an extremely low figure in terms of time -- because it isn't a true measure of elapsed time sending a message to a client.

SQL*Net message from client is not just the time spent on the network.  It is the time from the last message sent to the client up to the next message received from the client.  Thus, it also includes client overheads (like "think time", CPU time, the network stack on the client) besides transmission over the network.  In most cases, the major component of SQL*Net message from client is client "think time" -- how long the client spent before formatting and sending the next SQL call to the database server.  This also means that it is not always true that SQL*Net message from client is an idle event.

If I have a single client that is doing some or all of :
a. Loading data in sets of rows (e.g. an ETL client)
b. Extracting data in sets of rows
c. Retrieving results and formatting the results for presentation to a user
d. Running a batch job that consists mostly of SQL calls, not PLSQL
then SQL*Net message from client is mostly time spent by the client (unless you really have a high-latency network).  In most such cases, when tracing the individual session, this wait event is NOT an idle event.  However, when reading an AWR report, you cannot isolate such a session from the other sessions that are doing a mix of activity -- some with real interactive end-users, some sending periodic "heart-beat" messages, some completely idle waiting for a request from a user.  In this AWR report, there is a mix of clients with different "think times", some completely idle for 40 minutes.  We can't identify them in the AWR report.

Can you use the SQL*Net roundtrips to/from client figure from the AWR ?  Not if you have a mix of different clients doing different forms of activity.  Idle clients will have very few roundtrips in the 40 minutes while clients extracting data row-by-row (not in PLSQL, but SQL or some other client like Java) would have a high number of roundtrips.  So, you can't separate the two types of behaviour in an AWR.

If you are really concerned about identifying SQL*Net overheads and/or round-trips, you should *trace* the specific individual session of interest and extract figures from the trace file.
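A sketch of tracing one session with waits included (the SID and SERIAL# values below are placeholders, to be taken from V$SESSION) :

```sql
BEGIN
  DBMS_MONITOR.session_trace_enable(
    session_id => 123,    -- placeholder SID from V$SESSION
    serial_num => 45,     -- placeholder SERIAL#
    waits      => TRUE,   -- capture wait events, including the SQL*Net waits
    binds      => FALSE);
END;
/

-- ... let the client run its workload, then stop tracing ...

BEGIN
  DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 45);
END;
/
```

The resulting trace file shows the individual SQL*Net waits and round-trips for that one session, without the noise of unrelated sessions.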
.
.
.


Categories: DBA Blogs

CODE : View My Source Code -- a Function

Mon, 2016-09-05 09:53
If you need to view the source code of a stored program you need one of :
a.  Be the owner of the program
b.  Have EXECUTE privilege on the program
c.  Have EXECUTE ANY ... privilege or the DBA role

If you are not the owner of the program, the owner can grant you the ability to view, but not modify, the program.

Here's code for a PL/SQL Function that allows this.  (I wrote this as a Function -- and not as a PL/SQL Procedure -- to be similar to the DBMS_METADATA.GET_DDL Function).

Imagine that HR is the Application Schema and the owner of Tables and Programs.  Imagine that HEMANT has not been granted the DBA role or an EXECUTE privilege but needs access to view the source code.

Here's a quick method.  ("Quick" meaning that this code can be improved further -- e.g. by adding an Audit Trail and by better Error Handling).

SQL> connect hr/oracle
Connected.
SQL>
SQL> drop table authorized_view_source purge;

Table dropped.

SQL>
SQL> create table authorized_view_source
2 (username varchar2(30),
3 object_type varchar2(23),
4 object_name varchar2(30))
5 /

Table created.

SQL>
SQL> create or replace function view_my_source(object_type_in in varchar2, object_name_in in varchar2)
2 return clob
3 as
4 return_clob clob;
5 line_out varchar2(4000);
6 line_count pls_integer;
7 line_no pls_integer;
8 verify_count pls_integer;
9 return_source clob;
10
11 begin
12 select count(*) into verify_count from authorized_view_source
13 -- check if any of these three predicates fail
14 where username = user
15 and object_type = object_type_in
16 and object_name = object_name_in;
17
18 if verify_count = 0 then
19 -- don't tell if the object exists or not
20 raise_application_error(-20001,'You are not authorized to view the source code of this object');
21 return('FAILURE');
22
23 else
24
25 select count(*) into line_count from user_source
26 where 1=1
27 and type = object_type_in
28 and name = object_name_in;
29
30 return_clob := ' ';
31
32 for line_no in 1..line_count
33 loop
34 return_clob := return_clob || line_out;
35 select text into line_out from user_source
36 where 1=1
37 and type = object_type_in
38 and name = object_name_in
39 and line = line_no;
40 end loop;
41 return_clob := return_clob || line_out;
42
43 return return_clob;
44 end if;
45
46 end view_my_source;
47 /

Function created.
SQL>
SQL> show errors
No errors.
SQL> grant execute on view_my_source to hemant;

Grant succeeded.

SQL>
SQL> -- list all code objects
SQL> col object_name format a30
SQL> select object_type, object_name
2 from user_objects
3 where object_type not in ('TABLE','INDEX','VIEW')
4 order by object_type, object_name
5 /

OBJECT_TYPE OBJECT_NAME
----------------------- ------------------------------
FUNCTION VIEW_MY_SOURCE
PACKAGE ANOTHER_PKG
PACKAGE DEFINE_MY_VARIABLES
PACKAGE DUMMY_PKG
PACKAGE BODY ANOTHER_PKG
PACKAGE BODY DUMMY_PKG
PROCEDURE ADD_JOB_HISTORY
PROCEDURE SECURE_DML
SEQUENCE DEPARTMENTS_SEQ
SEQUENCE EMPLOYEES_SEQ
SEQUENCE LOCATIONS_SEQ
TRIGGER SECURE_EMPLOYEES
TRIGGER UPDATE_JOB_HISTORY

13 rows selected.

SQL>
SQL> -- store list of authorized access
SQL> -- e.g. HEMANT can't view the source for
SQL> -- "ANOTHER_PKG" and "VIEW_MY_SOURCE"
SQL> insert into authorized_view_source
2 select 'HEMANT', object_type, object_name
3 from user_objects
4 where object_type not in ('TABLE','INDEX','VIEW')
5 and object_name not in ('ANOTHER_PKG','VIEW_MY_SOURCE')
6 /

10 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> select count(*) from authorized_view_source
2 where username = 'HEMANT'
3 /

COUNT(*)
----------
10

SQL>
SQL>


So, HR has created a Table to list the Users that are authorized to view a certain list of stored programs.  This table, AUTHORIZED_VIEW_SOURCE, excludes "ANOTHER_PKG" and "VIEW_MY_SOURCE" from the authorized list for HEMANT.
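The DDL for AUTHORIZED_VIEW_SOURCE was run earlier in this demo.  For reference, a minimal sketch of what it could look like, with column names inferred from the INSERT ... SELECT and the function's queries (the actual definition may differ) :

```sql
-- hypothetical sketch only : column names inferred from the
-- INSERT ... SELECT 'HEMANT', object_type, object_name above
create table authorized_view_source
  (username     varchar2(30),
   object_type  varchar2(23),
   object_name  varchar2(30));
```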

Let's see what HEMANT can do :

SQL>
SQL> connect hemant/hemant
Connected.
SQL> -- the return type is a CLOB, so we SET LONG
SQL> set long 1000000
SQL>
SQL> select hr.view_my_source('PACKAGE','DEFINE_MY_VARIABLES') from dual ;

HR.VIEW_MY_SOURCE('PACKAGE','DEFINE_MY_VARIABLES')
--------------------------------------------------------------------------------
package
define_my_variables
authid definer
is
my_application varchar2(25) := 'Human Resources';
my_base_schema varchar2(25) := 'HR';
end;


SQL>
SQL> select hr.view_my_source('PACKAGE BODY','DUMMY_PKG') from dual ;

HR.VIEW_MY_SOURCE('PACKAGEBODY','DUMMY_PKG')
--------------------------------------------------------------------------------
package body dummy_pkg as
procedure dummy_proc is
begin
raise_application_error (-20001,'Dummy Procedure');
null;
end;
end;


SQL>
SQL> select hr.view_my_source('TRIGGER','SECURE_EMPLOYEES') from dual;

HR.VIEW_MY_SOURCE('TRIGGER','SECURE_EMPLOYEES')
--------------------------------------------------------------------------------
TRIGGER secure_employees
BEFORE INSERT OR UPDATE OR DELETE ON employees
BEGIN
secure_dml;
END secure_employees;


SQL>
SQL> -- these two should raise an error
SQL> select hr.view_my_source('PACKAGE','ANOTHER_PKG') from dual;
ERROR:
ORA-20001: You are not authorized to view the source code of this object
ORA-06512: at "HR.VIEW_MY_SOURCE", line 20



no rows selected

SQL> select hr.view_my_source('FUNCTION','VIEW_MY_SOURCE') from dual;
ERROR:
ORA-20001: You are not authorized to view the source code of this object
ORA-06512: at "HR.VIEW_MY_SOURCE", line 20



no rows selected

SQL>
SQL> select hr.view_my_source('PACKAGE','NONEXISTENT') from dual;
ERROR:
ORA-20001: You are not authorized to view the source code of this object
ORA-06512: at "HR.VIEW_MY_SOURCE", line 20



no rows selected

SQL>


This shows that HEMANT can view the source code of programs listed in the AUTHORIZED_VIEW_SOURCE table and no others.   The way I have defined the "You are not authorized ..." message, it doesn't even reveal whether the requested program exists.
.
.
.

Categories: DBA Blogs

Index of Posts

Sun, 2016-09-04 22:43
My friend, Ravi Muthupalani has created an Index of my Blog Posts.

Hemant's Oracle DBA Blog : As on 1-Sep-16

2016-07
CODE : Persistent Variables via PL/SQL Package and DBMS_APPLICATION_INFO
Loading SQL*Plus HELP into the Database
ACS, SQL Patch and SQL Plan Baseline

2016-06
Services -- 4 : Using the SERVICE_NAMES parameter (non-RAC, PDB)
Services -- 3 : Monitoring Usage of Custom Services
Services -- 2 : Starting and Connecting to Services (non-RAC)
Services -- 1 : Services in non-RAC 12c MultiTenant
Data Recovery Advisor (11g)
Blog Series on 11gR2 RAC, GI, ASM
Compression -- 8 : DROPping a Column of a Compressed Table

2016-05
Restore and Recovery from Incremental Backups : Video
Recent Blog Series on Partition Storage
TRUNCATEing a Table makes an UNUSABLE Index VALID again
Partition Storage -- 8 : Manually Sizing Partitions
Compression -- 7 : Updating after BASIC Compression
Compression -- 6b : Advanced Index Compression (revisited)
Compression -- 6 : Advanced Index Compression
FBDA -- 7 : Maintaining Partitioned Source Table
Partition Storage -- 7 : Revisiting HWM - 2 (again)

2016-04
Partition Storage -- 6 : Revisiting Partition HWM
Partition Storage -- 5 : Partitioned Table versus Non-Partitioned Table ? (in 12.1)
Partition Storage -- 4 : Resizing Partitions
Partition Storage -- 3 : Adding new Range Partitions with SPLIT
Partition Storage -- 2 : New Rows Inserted in 12.1 Partitioned Table
Partition Storage -- 1 : Default Partition Sizes in 12c
Online Relocation of Database File : ASM to FileSystem and FileSystem to ASM
FBDA -- 6 : Some Bug Notes
Recent Blog Series on Compression
FBDA -- 5 : Testing AutoPurging
FBDA -- 4 : Partitions and Indexes
FBDA -- 3 : Support for TRUNCATEs
FBDA -- 2 : FBDA Archive Table Structure
FBDA -- 1 : Testing Flashback Data Archive in 12c (NonCDB)

2016-03
Now an OCP 12c
Compression -- 5 : OLTP Compression
Compression -- 4 : RMAN (BASIC) Compression
Compression -- 3 : Index (Key) Compression
COMPRESSION -- 2 : Compressed Table Partitions
Recent Blog Series on (SQL) Tracing
Recent Blog Series on RMAN

2016-02
Compression -- 1b : (more on) BASIC Table Compression
Compression -- 1 : BASIC Table Compression
RMAN : Unused Block Compression and Null Block Compression
Trace Files -- 12 : Tracing a Particular Process
Trace Files -- 11b : Using DBMS_SQLDIAG to trace the Optimization of an SQL Statement

2016-01
Trace Files -- 11 : Tracing the Optimization of an SQL Statement
Trace Files -- 10c : Query and DML (INSERT)

2015-12
Oracle High Availability Demonstrations
Trace Files -- 10b : More DML Tracing
Trace Files -- 10 : Tracing DML
Trace Files -- 9 : Advantages
Trace Files -- 8d : Full Table Scans
Auditing DBMS_STATS usage

2015-11
Trace Files -- 8c : Still More Performance Evaluation from Trace File
Trace Files -- 8b : More Performance Evaluation from Trace File
Trace Files -- 8a : Using SQL Trace for Performance Evaluations
Trace Files -- 7 : SQL in PL/SQL
SSL Support
Trace Files -- 6 : Multiple Executions of the same SQL

2015-10
Trace Files -- 5.2 : Interpreting the SQL Trace Summary level
Trace Files -- 5.1 : Reading an SQL Trace
Trace Files -- 4 : Identifying a Trace File
Trace Files -- 3 : Tracing for specific SQLs

2015-09
Trace Files -- 2 : Generating SQL Traces (another session)
Trace Files -- 1 : Generating SQL Traces (own session)
My YouTube Videos as introductions to Oracle SQL and DBA
RMAN -- 10 : VALIDATE
RMAN -- 9 : Querying the RMAN Views / Catalog

2015-08
RMAN -- 8 : Using a Recovery Catalog Schema
RMAN -- 7 : Recovery Through RESETLOGS -- how are the ArchiveLogs identified ?
RMAN -- 6 : RETENTION POLICY and CONTROL_FILE_RECORD_KEEP_TIME

2015-07
RMAN -- 5c : (Some More) Useful KEYWORDs and SubClauses
RMAN -- 5b : (More) Useful KEYWORDs and SubClauses
Monitoring and Diagnostics without Oracle Enterprise Manager
RMAN -- 5 : Useful KEYWORDs and SubClauses
RMAN -- 4b : Recovering from an Incomplete Restore with OMF Files
RMAN -- 4 : Recovering from an Incomplete Restore

2015-06
RMAN - 3 : The DB_UNIQUE_NAME in Backups to the FRA
RMAN -- 2 : ArchiveLog Deletion Policy
RMAN -- 1 : Backup Job Details

2015-05
Parallel Execution -- 6 Parallel DML Restrictions
Parallel Execution -- 5b Parallel INSERT Execution Plan
Status Of My SlideShare Material
Parallel Execution -- 5 Parallel INSERT

2015-04
Parallel Execution -- 4 Parsing PX Queries
Parallel Execution -- 3b Limiting PX Servers with Resource Manager

2015-03
1 million page views in less than 5 years
Parallel Execution -- 3 Limiting PX Servers
Parallel Execution -- 2c PX Servers
Parallel Execution -- 2b PX Servers
Parallel Execution -- 2 PX Servers
Parallel Execution -- 1b The PARALLEL Hint and AutoDoP (contd)

2015-02
Parallel Execution -- 1 The PARALLEL Hint and AutoDoP
Database Flashback -- 5
Database Flashback -- 4
Database Flashback -- 3
Database Flashback -- 2
Database Flashback -- 1

2015-01
A blog on Oracle Standard Edition
Inserting into a table with potentially long rows

2014-12
Statistics on this blog
StatsPack and AWR Reports -- Bits and Pieces -- 4

2014-11
StatsPack and AWR Reports -- Bits and Pieces -- 3
StatsPack and AWR Reports -- Bits and Pieces -- 2

2014-10
StatsPack and AWR Reports -- Bits and Pieces -- 1
Bandwidth and Latency
11g Adaptive Cursor Sharing --- does it work only for SELECT statements ? Using the BIND_AWARE Hint for DML

2014-09
The ADMINISTER SQL MANAGEMENT OBJECT Privilege
EXECUTE Privilege on DBMS_SPM not sufficient
Index Growing Larger Than The Table
RAC Database Backups

2014-08
ASM Commands : 2 -- Migrating a DiskGroup to New Disk(s)
ASM Commands : 1 -- Adding and Using a new DiskGroup for RAC
GI Commands : 2 -- Managing the Local and Cluster Registry

2014-07
GI Commands : 1 -- Monitoring Status of Resources
RAC Commands : 2 -- Updating Configuration for Services
RAC Commands : 1 -- Viewing Configuration
Installing OEL 6 and Database 12c
Passed the 11g RAC and Grid Expert Exam

2014-06
Gather Statistics Enhancements in 12c
Getting your Transaction ID
Guenadi Jilevski's posts on building RAC Clusters on VM Virtual Box

2014-05
Oracle Diagnostics Presentations
Partitions and Segments and Data Objects
(Slightly Off Topic) Spurious Correlations

2014-04
PageView Count
Upgrading Certification to 12c

2014-03
Storing Trailing NULLs in a table
Plan HASH_VALUE remains the same for the same Execution Plan, even if ROWS and COST change
My slideshare site has had 1000 views
Dropping an Index Partition

2014-02
RMAN Image Copy File Names
SQL Analytics
An SQL Performance Quiz
login.sql does not require a login
Database Technology Index
The difference between SELECT ANY DICTIONARY and SELECT_CATALOG_ROLE

2014-01
My first Backup and Recovery Quiz
LAST_CALL_ET in V$SESSION
OTNYathra 2014

2013-12
INTERVAL Partitioning
DEFAULT ON NULL on INSERT
GATHER_TABLE_STATS : What SQLs does it call ?. 12c

2013-11
Gather Statistics Enhancements in 12c -- 5
AIOUG Sangam'13 Day Two 09-Nov-13
AIOUG Sangam'13 Day One 08-Nov-13

2013-10
Gather Statistics Enhancements in 12c -- 4
The DEFAULT value for a column

2013-09
AIOUG Sangam 13
RMAN (and DataPump) book

2013-08
Sins of Software Deployment
Gather Statistics Enhancements in 12c -- 3
Common Mistakes Java Developers make when writing SQL
Gather Statistics Enhancements in 12c -- 2
Gather Statistics Enhancements in 12c -- 1
12c New Features to "Watch Out For"
Re-CATALOGing BackupPieces ??

2013-07
What happens if a database (or tablespace) is left in BACKUP mode
Interesting Bugs in 12cR1
Concepts / Features overturned in 12c
12c RMAN Restrictions -- when connected to a PDB
Information on 12c
Dynamic SQL

2013-06
A Function executing DML in a View
Oracle Database 12c Learning Library
Networking in Oracle Virtual Box
Upcoming Blog Post : DML when querying a View
Getting the ROWIDs present in a Block
DROP A Tablespace After a Backup
Bug 10013177 running Aggregation on Expression indexed by an FBI

2013-05
BACKUP CURRENT CONTROLFILE creates a Snapshot Controlfile
Games for DBAs

2013-04
Preparing for Oracle Certification
SSD Performance for Oracle Databases
Single Row Fetch from a LOB
Oracle Forums due for Upgrade

2013-03
Useful Oracle Youtube videos
Segment Size of a Partition (11.2.0.2 and above)
Short-Circuiting the COST

2013-02
Moving a Partition to an Archival Schema and Tablespace
Backup and Recovery with intermediate NOARCHIVELOG

2013-01
Oracle 11g Anti-Hackers Cookbook
Podcast on the Oracle ACE program
Oracle's Advisory On Certification Integrity

2012-12
Book on OEM 12c

2012-11
Oracle 11g Anti-Hackers Cookbook

2012-10
Separate Child Cursor with varying bind allocation length

2012-09
Cardinality Decay
IT Contracting in Singapore
Open Invitation from Packt Publishing

2012-08
Storage Allocation
Issue a RECOVER for a Tablespace/Datafile that does not need recovery

2012-07
How to use Oracle Virtual Box templates
Database Specialists in Singapore
ON COMMIT Refresh without a Primary Key
Materialized View Refresh ON COMMIT
WizIQ Tutorials
An Oracle Installer that automatically switches to Console mode

2012-06
OOW 2012 Content Catalog
CONTROLFILE AUTOBACKUPs are OBSOLETE[d]
RMAN BACKUP AS COPY
OEM 12c : New Book
Java for PLSQL Developers

2012-05
SQL written by Lisbeth Salander
CHECKPOINT_CHANGE#
CURRENT_SCN and CHECKPOINT_CHANGE#
Index Block Splits
RMAN Tips -- 4
Debugging stories
A Poll on the usage of SQL Plan Management
USER_TAB_MODIFICATIONS -- 1

2012-04
Create Histogram without having to gather Table Stats
AIOUG Sangam '12 -- CFP
When is an ArchiveLog created ?
Parameters for COMMIT operations
Primary Key name appears to be different
TOO_MANY_ROWS and Variable Assignment

2012-03
The Hot Backup "myth" about Datafiles not being updated
Relocating a datafile using RMAN
More than 250K page views
OOW 2012 CFP now open.
Oracle Database Performance Diagnostics -- before you begin
Another example of COST in an Explain Plan
Packt Publishing's Oracle Packtpot

2012-02
Two Partitioned Indexes with different HIGH_VALUEs
CURSOR_SHARING FORCE and Child Cursors
Archived Logs after RESETLOGS
SLOB
RESETLOGS

2012-01
Understanding RESETLOGS
Oracle Wiki Relaunched
Departmental Analytics -- a "pro-local" approach ?
Refreshing an MV on a Prebuilt Table
SQL in Functions
Growing Materialized View (Snapshot) Logs
Datafiles not Restored -- using V$DATAFILE and V$DATAFILE_HEADER

2011-12
Does a STARTUP MOUNT verify datafiles ?
DROP TABLESPACE INCLUDING CONTENTS drops segments
(Off-Topic) How NOT to make a chart
AIOUG Sangam '11 photographs
AIOUG Sangam 11 content

2011-11
Constraints and Indexes
Oracle PreConfigured Templates
SSDs for Oracle
ROWIDs from an Index
RESTORE, RECOVER and RESETLOGS
Grid and RAC Notes
Oracle's Best-Of-Breed Strategy
Tablespace Recovery in a NOARCHIVELOG database
CTAS in a NOARCHIVELOG database is a NOLOGGING operation
Index Organized Table(s) -- IOT(s)
An ALTER USER to change password updates the timestamp of the password file
Handling Exceptions in PLSQL

2011-10
AIOUG : Sangam '11
The impact of ASSM on Clustering of data -- 2
DBMS_REDEFINITION to redefine a Partition -- and the impact of deferred_segment_creation
The impact of ASSM on Clustering of data
Controlfiles : Number and Size
Oracle OpenWorld and JavaOne Announcements
RMAN Tips -- 3

2011-09
Another example of GATHER_TABLE_STATS and a Histogram
Oracle Android App
An Index that is a "subset" of a pre-existing Index
RMAN Tips -- 2
Very successful Golden Gate workshop at SG RACSIG
RMAN Tips -- 1
Outer Join Queries
Splitting a Range Partitioned Table
Understanding Obsolescence of RMAN Backups

2011-08
CREATE INDEX ..... PARALLEL
Gather Column (Histogram) Stats can use an Index
Does GATHER_TABLE_STATS update Index Statistics ?
Reading an AWR Report -- 3
Singapore RACSIG meeting today
Reading an AWR -- 2
Oracle 11g RAC Essentials

2011-07
More on COUNT()s -- 2
More on COUNT()s
Data Quality Issues cannot always be addressed by programming
Running a COUNT(column) versus COUNT(*)
Oracle Database "Performance" - A Diagnostics Method
ENABLE ROW MOVEMENT with MSSM
Virtathon Sessions Schedule
ENABLE ROW MOVEMENT
Using WGET to download Patches
Reading an AWR - 1
Multiple Channels in RMAN are not always balanced

2011-06
DDL Triggers
Are you ready ? (to buy bigger hardware or review your code ?)
How Are Students Learning Programming ?
Oracle APAC Developer Program
OOW 2011 Content Catalog
(OT) : Dead Media Never Really Die
Precedence in Parallel Query specifications
Inequality and NULL
SQL Injection
New Presentation : On Nested Loop and Hash Join
Getting the right statistics
Nested Loop and Consistent Gets

2011-05
Interpreting an AWR report when the ArchiveLog Dest or FRA was full
Deleting SQL Plan Baselines
Database Audit setup -- 2 interesting findings
Capturing SQL PLAN Baselines
11g OCP
Getting all the instance parameters
RMAN's COPY command
Collection of my Oracle Blog posts on Backup and Recovery

2011-04
SELECT FOR UPDATE with SubQuery (and "Write Consistency")
ArchiveLogs in the controlfile
DETERMINISTIC Functions - 3
Standby Databases (aka "DataGuard") -1
DETERMINISTIC Functions -- 2
DETERMINISTIC Functions

2011-03
OuterJoin with Filter Predicate
I/O for OutOfLine LOBs
Oracle Enterprise Cloud Summit in Singapore
Cardinality Estimates in Dynamic Partition Pruning
Primary Key and Index
Most Popular Posts - Feb 11

2011-02
ITIL v3 Foundation
Index Block Splits --- with REVERSE KEY Index
Qualifying Column/Object names to set the right scope
Cardinality Feedback in 11.2
Oracle Diagnostics Presentations
Locks and Lock Trees
Most Popular Posts - Jan 11

2011-01
Gather Stats Concurrently
Synchronising Recovery of two databases
Transaction Failure --- when is the error returned ?
GLOBAL TEMPORARY TABLEs and GATHER_TABLE_STATS
Rollback of Transaction(s) after SHUTDOWN ABORT
Latches and Enqueues
Incomplete Recovery
ZDNet Asia IT Salary Benchmark 2010
Most Popular Posts - Dec 10

2010-12
Using V$SESSION_LONGOPS
Most Popular Posts - Nov 10

2010-11
Oracle VM Templates released
Some Common Errors - 7 - "We killed the job because it was hung"
SET TIME ON in RMAN
Most Popular Posts - Oct 10

2010-10
How the Optimizer can use Constraint Definitions
Featured in Oracle Magazine
Data Skew and Cardinality Changing --- 2
Most Popular Posts

2010-09
Data Skew changing over time --- and the Cardinality Estimate as well !
Index Skip Scan
Deadlocks : 2 -- Deadlock on INSERT
Deadlocks

2010-08
Adding a DataFile that had been excluded from a CREATE CONTROLFILE
Trying to understand LAST_CALL_ET -- 3
Trying to understand LAST_CALL_ET -- 2
Trying to understand LAST_CALL_ET -- 1
Oracle Mergers and Acquisitions
Creating a "Sparse" Index

2010-07
Oracle switching to non-sequential logs
(Off Topic): "What's the body count ?"
Preserving the Index when dropping the Constraint
V$DATABASE.CREATED -- is this the Database Creation timestamp ?
(Off Topic) : "Now Is That Architecture ?"

2010-06
Some Common Errors - 6 - Not collecting Metrics
Know your data and write a better query
RECOVER DATABASE starts with an update -- 2

2010-05
RECOVER DATABASE starts with an update
Database Links
Cardinality Estimation
Read Only Tablespaces and BACKUP OPTIMIZATION
Database and SQL Training

2010-04
Oracle Database History
AutoTune Undo
Data Warehousing Performance
SQLs in PLSQL -- 2
SQLs in PLSQL
Partitions and Statistics
11gR2 Recursive SubQuery Factoring

2010-03
Extracting Application / User SQLs from a TraceFile
A large index for an empty table ?
ALTER INDEX indexname REBUILD.
Adaptive Cursor Sharing explained
An "unknown" error ?
Misinterpreting RESTORE DATABASE VALIDATE

2010-02
Some Common Errors - 5 - Not reviewing the alert.log and trace files
Something Unique about Unique Indexes
Some Common Errors - 4 - Not using ARRAYSIZE
Using Aliases for Columns and Tables in SQLs
Table and Index Statistics with MOVE/REBUILD/TRUNCATE
Some Common Errors - 3 - NOLOGGING and Indexes
An Oracle DBA Interview
Some Common Errors - 2 - NOLOGGING as a Hint
Multiple Block Sizes

2010-01
Common Errors Series
DDL on Empty Partitions -- are Global Indexes made UNUSABLE ?
Some Common Errors - 1 - using COUNT(*)
Adding a PK Constraint sets the key column to NOT NULL

2009-11
MOS Survey Results
SIZE specification for Column Histograms
Sample Sizes : Table level and Column level

2009-10
Some MORE Testing on Intra-Block Row Chaining
Some Testing on Intra-Block Row Chaining
Indexes Growing Larger After Rebuilds

2009-09
SQLs in Functions : Performance Impact
SQLs in Functions : Each Execution is Independent
RMAN can identify and catalog / use ArchiveLogs automagically
Table and Partition Statistics
I am an Oracle ACE, officially

2009-08
Histograms on "larger" columns
Counting the Rows in a Table
Using an Index created by a different user

2009-07
A PACKT book on Oracle Database Utilities
Direct Path Read cannot do delayed block cleanouts
The difference between NOT IN and NOT EXISTS
Simple Tracing
Sizing OR Growing a Table in AUTOALLOCATE

2009-06
AUTOEXTEND ON Next Size
Why EXPLAIN PLAN should not be used with Bind Variables

2009-05
Backup Online Redo Logs ? (AGAIN ?!)
Index Block Splits and REBUILD
Index Block Splits : 50-50
Database Recovery with new datafile not present in the controfile
Index Block Splits : 90-10
Rename Database while Cloning it.
Incorrectly using AUTOTRACE and EXPLAIN PLAN
Ever wonder "what if this database contains my data ?"

2009-04
Bringing ONLINE a Datafile that is in RECOVER mode because it was OFFLINE
Controlfile Backup older than the ArchiveLogs
RMAN Backup and Recovery for Loss of ALL files
Incorrect Cardinality Estimate of 1 : Bug 5483301

2009-03
Database Independence Anyone ?
Columnar Databases
Materialized View on Prebuilt Table
Materialized Views and Tables
Checking the status of a database
Logical and Physical Storage in Oracle

2009-02
CLUSTERING_FACTOR
RDBMS Software, Database and Instance
Restore or Create Controlfile
Full Table Scan , Arraysize etc
Array Processing, SQL*Net RoundTrips and consistent gets
MIN/MAX Queries, Execution Plans and COST

2009-01
Faulty Performance Diagnostics based on initial set of rows returned
When NOT to use V$SESSION_LONGOPS

2008-11
Database Event Trigger and SYSOPER
Tracing a Process -- Tracing DBWR
expdp to the default directory without the DBA role
Data Pump using default directory
Histogram (skew) on Unique Values
OPEN RESETLOGS without really doing a Recovery
Numbers and NULLs

2008-10
Using STATISTICS_LEVEL='ALL' and 10046, level 8
Delayed Block Cleanout -- through Instance Restart
Delayed Block Cleanout

2008-09
Relational Theory and SQL

2008-08
ASSM or MSSM ? -- DELETE and INSERT
The once again new forums.oracle.com
Testing Bug 4260477 Fix for Bug 4224840
ASSM or MSSM ? -- The impact on INSERTS
VMWare Bug presents a nightmare scenario
Preventing a User from changing his password
More Tests of COL_USAGE
Testing Gather Stats behaviour based on COL_USAGE

2008-07
More Tests on DBMS_STATS GATHER AUTO
Testing the DBMS_STATS option GATHER AUTO
Cardinality Estimate : Dependent Columns --- Reposted
Table Elimination (aka "Join Elimination")
Bind Variable Peeking

2008-06
Cardinality Estimates : Dependent Columns
Delete PARENT checks every CHILD row for Parent Key !
Monitoring "free memory" on Linux
A long forums discussion on Multiple or Different Block Sizes
Tuning Very Large SELECTs in SQLPlus
MVs with Refresh ON COMMIT cannot be used for Synchronous Replication

2008-05
Ever heard of "_simple_view_merging" ?
Tracing a DBMS_STATS run
Creating a COMPRESSed Table
Passwords are One Way Hashes, Not Encrypted
APPEND, NOLOGGING and Indexes
RMAN Consistent ("COLD" ?) Backup and Restore
DBAs working long hours
One Thing Leads to Another ....
TEMPORARY Segments in Data/Index Tablespaces

2008-04
Row Sizes and Sort Operations
Using SYS
Indexed column (unique or not) -- What if it is NULLable
The Worst Ever SQL Rewrite
Complex View Merging -- 7
Complex View Merging - 4,5,6
Programming for MultiCore architectures
Complex View Merging -- 3
Complex View Merging -- 2
Complex View Merging -- 1

2008-03
Example "sliced" trace files and tkprof
tkprof on partial trace files
Backup the Online Redo Logs ?
Rebuilding Indexes - When and Why ?
Rebuilding Indexes
ALTER TABLE ... SHRINK SPACE

2008-02
OS Statistics from AWR Reports
Database Recovery : RollForward from a Backup Controlfile
Indexing NULLs -- Update

2008-01
Is RAID 5 bad ? Always bad ?
The Impact of the Clustering Factor
Examples Of Odd Extent Sizes In Tablespaces With AUTOALLOCATE
When Should Indexes Be Rebuilt

2007-12
Using an Index for a NOT EQUALS Query
Always Explicitly Convert DataTypes

2007-11
Are ANALYZE and DBMS_STATS also DDLs ?

2007-10
Flush Buffer_Cache -- when Tracing doesn't show anything happening
Inserts waiting on Locks ? Inserts holding Locks ?

2007-09
More on Bind Variable Peeking and Execution Plans
ATOMIC_REFRESH=>FALSE causes TRUNCATE and INSERT behaviour in 10g ??

2007-08
When "COST" doesn't indicate true load
NULLs are not Indexed, Right ? NOT !
NLS_DATE_FORMAT
Shared Nothing or Shared Disks (RAC) ?
LGWR and 'log file sync waits'

2007-07
Using ARRAYSIZE to reduce RoundTrips and number of FETCH calls
Parse "Count"

2007-06
Read Consistency across Statements
Some observations from the latest Oracle-HP Benchmark
A Bug in OWI
Oracle Books in the Library

2007-05
AUTOALLOCATE and Undo Segments
Where's the Problem ? Not in the Database !
RollForward from a Cold Backup
Another Recovery from Hell story
SQL Statement Execution Times
Recovery in Cold Backup
Stress Testing

2007-04
DBA Best Practices
Recovery without UNDO Tablespace DataFiles
Programs that expect strings to be of a certain length !
UNDO and REDO for INSERTs and DELETEs
Snapshot Too Old or Rollback Segment Too Small

2007-03
Backups and Recoveries, SANs and Clones, and Murphy
Optimizer Index Cost Parameters
Understanding "Timed Events" in a StatsPack Report
Database Error Exposed on the Internet
Throughput v Scalability
Using Normal Tables for Temporary Data

2007-02
Interpreting Explain Plans
Creating Database Links
Buffer Cache Hit Ratio GOOD or BAD ?

2007-01
Sequences : Should they be Gap Free ?
Using Partial Recoveries to test Backups
Deleting data doesn't reduce the size of the backup
Large (Growing) Snapshot Logs indicate that you have a problem
Sometimes you trip up on Triggers
Views with ORDER BY
Building Materialized Views and Indexes

2006-12
ArchiveLogs and Transaction Volumes
ORA-1555 and UNDO_RETENTION
Why an Oracle DBA Blog ?


Categories: DBA Blogs

CODE : Persistent Variables via PL/SQL Package and DBMS_APPLICATION_INFO

Sat, 2016-07-23 10:43
I am now introducing some code samples in my blog.  I won't restrict myself to SQL but will also include PL/SQL  (C ? Bourne/Korn Shell ? what else ?)

This is the first of such samples.


Here I demonstrate using a PL/SQL Package to define persistent variables and then using them with DBMS_APPLICATION_INFO.  This demo consists of only 2 variables being used by 1 session, but we could have a number of variables in this Package, invoked by multiple client sessions in the real world.

I first grant HR the privilege to create stored procedures :

SQL> grant create procedure to hr;

Grant succeeded.

SQL>


Then, in the HR schema, I set up a Package to define variables that can persist throughout a session.  Public variables defined in a Package Specification, once invoked, persist for the duration of the session that invoked them.

create or replace package
define_my_variables
authid definer
is
my_application varchar2(25) := 'Human Resources';
my_base_schema varchar2(25) := 'HR';
end;
/

grant execute on define_my_variables to hemant;
grant select on employees to hemant;


As HEMANT, I then execute :

SQL> connect hemant/hemant  
Connected.
SQL> execute dbms_application_info.set_module(-
> module_name=>HR.define_my_variables.my_application,-
> action_name=>NULL);

PL/SQL procedure successfully completed.

SQL>


As SYSTEM, the DBA can monitor HEMANT

SQL> show user
USER is "SYSTEM"
SQL> select sid, serial#, to_char(logon_time,'DD-MON HH24:MI:SS') Logon_At, module, action
2 from v$session
3 where username = 'HEMANT'
4 order by 1
5 /

SID SERIAL# LOGON_AT MODULE
---------- ---------- ------------------------ ----------------------------------------------------------------
ACTION
----------------------------------------------------------------
1 63450 23-JUL 23:24:03 Human Resources



SQL>


Then, HEMANT intends to run a query on the EMPLOYEES Table.

SQL> execute dbms_application_info.set_action(-
> action_name=>'Query EMP');

PL/SQL procedure successfully completed.

SQL> select count(*) from hr.employees where job_id like '%PROG%'
2 /

COUNT(*)
----------
5

SQL>


SYSTEM can see what he is doing with

SQL> l
1 select sid, serial#, to_char(logon_time,'DD-MON HH24:MI:SS') Logon_At, module, action
2 from v$session
3 where username = 'HEMANT'
4* order by 1
SQL> /

SID SERIAL# LOGON_AT MODULE
---------- ---------- ------------------------ ----------------------------------------------------------------
ACTION
----------------------------------------------------------------
1 63450 23-JUL 23:24:03 Human Resources
Query EMP


SQL>


Returning to the HEMANT login, I can see :

SQL> show user
USER is "HEMANT"
SQL> execute dbms_output.put_line(-
> 'I am running ' || hr.define_my_variables.my_application || ' against ' || hr.define_my_variables.my_base_schema);
I am running Human Resources against HR

PL/SQL procedure successfully completed.

SQL>


So, I have demonstrated :
1.  Using a PL/SQL Package Specification (without the need for a Package Body) to define variables that can be referenced from other users' sessions (each session gets its own instantiation, initialised to these values).

2.  The possibility of using this across schemas.  HR could be my "master schema" that sets up all the variables and HEMANT is one of many "client" schemas (or users) that use these variables.

3. The variables defined will persist throughout the client session once they are invoked.

4.  Using DBMS_APPLICATION_INFO to read these variables and set module / action information.
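Besides MODULE and ACTION, DBMS_APPLICATION_INFO also exposes a CLIENT_INFO slot that could be driven from the same Package variables.  A sketch along the same lines (not part of the demo above) :

```sql
-- as HEMANT : publish additional context built from HR's package variables
execute dbms_application_info.set_client_info(-
> 'Working against ' || hr.define_my_variables.my_base_schema);

-- as SYSTEM : monitor it alongside MODULE and ACTION
select sid, module, action, client_info
from v$session
where username = 'HEMANT';
```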


Note :  SYSTEM can also trace HEMANT's session using DBMS_MONITOR as demonstrated in Trace Files -- 2 : Generating SQL Traces (another session)
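For completeness, such a trace would use the SID and SERIAL# already visible in V$SESSION.  A sketch, using the illustrative values from the V$SESSION output above :

```sql
-- as SYSTEM : enable SQL trace (with wait events) for HEMANT's session
execute dbms_monitor.session_trace_enable(-
> session_id=>1, serial_num=>63450, waits=>TRUE, binds=>FALSE);

-- ... and disable it when done
execute dbms_monitor.session_trace_disable(session_id=>1, serial_num=>63450);
```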

.
.
.

Categories: DBA Blogs

Loading SQL*Plus HELP into the Database

Sun, 2016-07-10 22:39
Oracle provides scripts to load the HELP command for SQL*Plus.

See $ORACLE_HOME/sqlplus/admin/help

The schema to use is SYSTEM, not SYS.

I demonstrate
(a) How to load SQLPlus Help  into the database
(b) How to customise the Help (e.g. add new commands)

[oracle@ora11204 help]$ cd $ORACLE_HOME/sqlplus/admin/help
[oracle@ora11204 help]$ ls -l
total 84
-rwxrwxrwx. 1 oracle oracle 265 Feb 17 2003 helpbld.sql
-rwxrwxrwx. 1 oracle oracle 366 Jan 4 2011 helpdrop.sql
-rwxrwxrwx. 1 oracle oracle 71817 Aug 17 2012 helpus.sql
-rwxrwxrwx. 1 oracle oracle 2154 Jan 4 2011 hlpbld.sql
[oracle@ora11204 help]$ sqlplus -S system/oracle @helpbld.sql `pwd` helpus.sql
...
...
...
View created.


58 rows created.


Commit complete.


PL/SQL procedure successfully completed.

[oracle@ora11204 help]$


The `pwd`  (note the back-quote character, not the single-quote character) is a way of specifying the current directory in Unix and Linux shells.   This specifies where the help data file is located.  helpus.sql is the help data in English (US-English).

The scripts create a table called "HELP" in the SYSTEM schema.  SQL*Plus's "HELP" command then uses this table.
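Once loaded, the table can also be queried directly.  Assuming the standard TOPIC / SEQ / INFO columns that helpus.sql populates :

```sql
-- list the topics HELP knows about
select distinct topic from system.help order by topic;

-- view the raw help text for one topic, in display order
select info from system.help where topic = 'SET' order by seq;
```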

Examples :

SQL> connect hemant/hemant
Connected.
SQL> help

HELP
----

Accesses this command line help system. Enter HELP INDEX or ? INDEX
for a list of topics.

You can view SQL*Plus resources at
http://www.oracle.com/technology/documentation/

HELP|? [topic]


SQL>
SQL> help set

SET
---

Sets a system variable to alter the SQL*Plus environment settings
for your current session. For example, to:
- set the display width for data
- customize HTML formatting
- enable or disable printing of column headings
- set the number of lines per page

SET system_variable value

where system_variable and value represent one of the following clauses:

APPI[NFO]{OFF|ON|text}
ARRAY[SIZE] {15|n}
AUTO[COMMIT] {OFF|ON|IMM[EDIATE]|n}
AUTOP[RINT] {OFF|ON}
AUTORECOVERY {OFF|ON}
AUTOT[RACE] {OFF|ON|TRACE[ONLY]} [EXP[LAIN]] [STAT[ISTICS]]
BLO[CKTERMINATOR] {.|c|ON|OFF}
CMDS[EP] {;|c|OFF|ON}
COLSEP {_|text}
CON[CAT] {.|c|ON|OFF}
COPYC[OMMIT] {0|n}
COPYTYPECHECK {ON|OFF}
DEF[INE] {&|c|ON|OFF}
DESCRIBE [DEPTH {1|n|ALL}] [LINENUM {OFF|ON}] [INDENT {OFF|ON}]
ECHO {OFF|ON}
EDITF[ILE] file_name[.ext]
EMB[EDDED] {OFF|ON}
ERRORL[OGGING] {ON|OFF} [TABLE [schema.]tablename] [TRUNCATE] [IDENTIFIER identifier]
ESC[APE] {\|c|OFF|ON}
ESCCHAR {@|?|%|$|OFF}
EXITC[OMMIT] {ON|OFF}
FEED[BACK] {6|n|ON|OFF}
FLAGGER {OFF|ENTRY|INTERMED[IATE]|FULL}
FLU[SH] {ON|OFF}
HEA[DING] {ON|OFF}
HEADS[EP] {||c|ON|OFF}
INSTANCE [instance_path|LOCAL]
LIN[ESIZE] {80|n}
LOBOF[FSET] {1|n}
LOGSOURCE [pathname]
LONG {80|n}
LONGC[HUNKSIZE] {80|n}
MARK[UP] HTML [OFF|ON] [HEAD text] [BODY text] [TABLE text] [ENTMAP {ON|OFF}] [SPOOL {OFF|ON}] [PRE[FORMAT] {OFF|ON}]
NEWP[AGE] {1|n|NONE}
NULL text
NUMF[ORMAT] format
NUM[WIDTH] {10|n}
PAGES[IZE] {14|n}
PAU[SE] {OFF|ON|text}
RECSEP {WR[APPED]|EA[CH]|OFF}
RECSEPCHAR {_|c}
SERVEROUT[PUT] {ON|OFF} [SIZE {n | UNLIMITED}] [FOR[MAT] {WRA[PPED] | WOR[D_WRAPPED] | TRU[NCATED]}]
SHIFT[INOUT] {VIS[IBLE] | INV[ISIBLE]}
SHOW[MODE] {OFF|ON}
SQLBL[ANKLINES] {OFF|ON}
SQLC[ASE] {MIX[ED] | LO[WER] | UP[PER]}
SQLCO[NTINUE] {> | text}
SQLN[UMBER] {ON|OFF}
SQLPLUSCOMPAT[IBILITY] {x.y[.z]}
SQLPRE[FIX] {#|c}
SQLP[ROMPT] {SQL>|text}
SQLT[ERMINATOR] {;|c|ON|OFF}
SUF[FIX] {SQL|text}
TAB {ON|OFF}
TERM[OUT] {ON|OFF}
TI[ME] {OFF|ON}
TIMI[NG] {OFF|ON}
TRIM[OUT] {ON|OFF}
TRIMS[POOL] {OFF|ON}
UND[ERLINE] {-|c|ON|OFF}
VER[IFY] {ON|OFF}
WRA[P] {ON|OFF}
XQUERY {BASEURI text | ORDERING {UNORDERED|ORDERED|DEFAULT} | NODE {BYVALUE|BYREFERENCE|DEFAULT} | CONTEXT text}


SQL>
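As an aside (not part of the HELP output above), here are a few SET commands of the kind I might put in a login.sql. The values shown are purely illustrative preferences, not recommendations from the help text:

```sql
-- Illustrative SET commands for a login.sql (values are my own preferences)
SET LINESIZE 200                     -- width of each displayed line
SET PAGESIZE 100                     -- lines per page before headings repeat
SET SERVEROUTPUT ON SIZE UNLIMITED   -- display DBMS_OUTPUT from PL/SQL
SET TIMING ON                        -- report elapsed time per statement
SET SQLPROMPT "_USER'@'_CONNECT_IDENTIFIER> "  -- identify session in prompt
```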
SQL> help show

SHOW
----

Shows the value of a SQL*Plus system variable, or the current
SQL*Plus environment. SHOW SGA requires a DBA privileged login.

SHO[W] option

where option represents one of the following terms or clauses:
system_variable
ALL
BTI[TLE]
ERR[ORS] [{FUNCTION | PROCEDURE | PACKAGE | PACKAGE BODY | TRIGGER
| VIEW | TYPE | TYPE BODY | DIMENSION | JAVA CLASS} [schema.]name]
LNO
PARAMETERS [parameter_name]
PNO
RECYC[LEBIN] [original_name]
REL[EASE]
REPF[OOTER]
REPH[EADER]
SGA
SPOO[L]
SPPARAMETERS [parameter_name]
SQLCODE
TTI[TLE]
USER


SQL>
SQL> help connect

CONNECT
-------

Connects a given username to the Oracle Database. When you run a
CONNECT command, the site profile, glogin.sql, and the user profile,
login.sql, are processed in that order. CONNECT does not reprompt
for username or password if the initial connection does not succeed.

CONN[ECT] [{logon|/|proxy} [AS {SYSOPER|SYSDBA|SYSASM}] [edition=value]]

where logon has the following syntax:
username[/password][@connect_identifier]

where proxy has the syntax:
proxyuser[username][/password][@connect_identifier]
NOTE: Brackets around username in proxy are required syntax


SQL>


Remember!  These are SQL*Plus commands, not SQL language commands, so you won't see help for CREATE, ALTER, SELECT and other SQL statements.

Since SQL*Plus loads the help information from a plain-text script (helpus.sql in this case), it is possible to extend it.

For example, I copy helpus.sql to helpcustom.sql and add these lines to the script file:

INSERT INTO SYSTEM.HELP VALUES ('DBINFO', 1, NULL);
INSERT INTO SYSTEM.HELP VALUES ('DBINFO', 2, 'This Hemant''s Test Database');
INSERT INTO SYSTEM.HELP VALUES ('DBINFO', 3, 'A Playground database');
INSERT INTO SYSTEM.HELP VALUES ('DBINFO', 4, 'Running 11.2.0.4 on Linux');

INSERT INTO SYSTEM.HELP VALUES ('OWNERINFO', 1, NULL);
INSERT INTO SYSTEM.HELP VALUES ('OWNERINFO', 2, 'Test Database owned by Hemant');
INSERT INTO SYSTEM.HELP VALUES ('CONTENTS', 1, NULL);
INSERT INTO SYSTEM.HELP VALUES ('CONTENTS', 2, 'Various Experiments by Hemant');

INSERT INTO SYSTEM.HELP VALUES ('WHO IS HEMANT', 1, NULL);
INSERT INTO SYSTEM.HELP VALUES ('WHO IS HEMANT', 2, 'Hemant K Chitale');
INSERT INTO SYSTEM.HELP VALUES ('WHO IS HEMANT', 3, 'https://hemantoracledba.blogspot.com');

COMMIT;


and then I run the command :

sqlplus -S system/oracle @helpbld.sql `pwd` helpcustom.sql


And view the results :

SQL> connect hemant/hemant
Connected.
SQL> help dbinfo

This Hemant's Test Database
A Playground database
Running 11.2.0.4 on Linux

SQL> help ownerinfo

Test Database owned by Hemant

SQL> help who is hemant

Hemant K Chitale
https://hemantoracledba.blogspot.com

SQL>
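Under the covers, HELP is simply a lookup against the SYSTEM.HELP table, so the new topics can also be verified directly. A minimal sketch (the column names TOPIC, SEQ and INFO are as created by the help build scripts):

```sql
-- HELP retrieves rows from SYSTEM.HELP by topic; SEQ orders the lines
SELECT seq, info
FROM   system.help
WHERE  topic = 'DBINFO'
ORDER  BY seq;
```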
SQL> help startup

STARTUP
-------

Starts an Oracle instance with several options, including mounting,
and opening a database.

STARTUP options | upgrade_options

where options has the following syntax:
[FORCE] [RESTRICT] [PFILE=filename] [QUIET] [ MOUNT [dbname] |
[ OPEN [open_options] [dbname] ] |
NOMOUNT ]

where open_options has the following syntax:
READ {ONLY | WRITE [RECOVER]} | RECOVER

and where upgrade_options has the following syntax:
[PFILE=filename] {UPGRADE | DOWNGRADE} [QUIET]


SQL> help shutdown

SHUTDOWN
--------

Shuts down a currently running Oracle Database instance, optionally
closing and dismounting a database.

SHUTDOWN [ABORT|IMMEDIATE|NORMAL|TRANSACTIONAL [LOCAL]]


SQL>


And so, the SQL*Plus HELP command can be customised!

.
.
.

Categories: DBA Blogs

ACS, SQL Patch and SQL Plan Baseline

Mon, 2016-07-04 03:49
Marko Sutic's blog post on Adaptive Cursor Sharing and SQL Plan Baselines, with an example of SQL Patch as well.
.
.
.

Services -- 4 : Using the SERVICE_NAMES parameter (non-RAC, PDB)

Mon, 2016-06-27 09:55
In my previous blog post, I have demonstrated using DBMS_SERVICE to create and start services.

But there is another way: using the SERVICE_NAMES parameter.  Going by most Google references, SERVICE_NAMES seems to be more popular than DBMS_SERVICE.

Can it be used in a Pluggable Database?  Let's try.

[oracle@ora12102 Desktop]$ . oraenv
ORACLE_SID = [oracle] ? CDB1
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ora12102 Desktop]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 27 22:39:59 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size 2925024 bytes
Variable Size 973082144 bytes
Database Buffers 654311424 bytes
Redo Buffers 13848576 bytes
Database mounted.
Database opened.
SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:43:36

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=14129))
Service "pdb1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$


SQL> connect system/oracle@PDB1
Connected.
SQL> show con_id

CON_ID
------------------------------
3
SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> alter system set service_names='NEW_SVC_1,NEW_SVC_2';
alter system set service_names='NEW_SVC_1,NEW_SVC_2'
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database


SQL> !oerr ora 65040
65040, 00000, "operation not allowed from within a pluggable database"
// *Cause: An operation was attempted that can only be performed in the root
// container.
// *Action: Switch to the root container to perform the operation.
//

SQL>


So, apparently, ORA-65040 means that we can't set the SERVICE_NAMES parameter from within a PDB.
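This is consistent with the parameter's metadata: in 12c, V$SYSTEM_PARAMETER carries an ISPDB_MODIFIABLE column flagging whether a parameter may be set from inside a PDB. A quick check (not part of the original demo):

```sql
-- ISPDB_MODIFIABLE tells us whether a parameter can be set within a PDB;
-- for SERVICE_NAMES it should show FALSE, matching the ORA-65040 above
SELECT name, ispdb_modifiable
FROM   v$system_parameter
WHERE  name = 'service_names';
```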

SQL> connect sys/oracle as sysdba
Connected.
SQL> show con_id

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> alter system set service_names='NEW_SVC_1,NEW_SVC_2';

System altered.

SQL>

[grid@ora12102 ~]$ lsnrctl services

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:53:56

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
Services Summary...
Service "+ASM" has 1 instance(s).
Instance "+ASM", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "CDB1XDB" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"D000" established:0 refused:0 current:0 max:1022 state:ready
DISPATCHER
(ADDRESS=(PROTOCOL=tcp)(HOST=ora12102)(PORT=14129))
Service "NEW_SVC_1" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
Service "NEW_SVC_2" has 1 instance(s).
Instance "CDB1", status READY, has 1 handler(s) for this service...
Handler(s):
"DEDICATED" established:0 refused:0 state:ready
LOCAL SERVER
The command completed successfully
[grid@ora12102 ~]$

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options
[oracle@ora12102 Desktop]$ tnsping NEW_SVC_1

TNS Ping Utility for Linux: Version 12.1.0.2.0 - Production on 27-JUN-2016 22:54:57

Copyright (c) 1997, 2014, Oracle. All rights reserved.

Used parameter files:


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = ora12102)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = NEW_SVC_1)))
OK (30 msec)
[oracle@ora12102 Desktop]$ sqlplus hemant/hemant@new_svc_1

SQL*Plus: Release 12.1.0.2.0 Production on Mon Jun 27 22:55:04 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

ERROR:
ORA-01017: invalid username/password; logon denied


Enter user-name: system/oracle@new_svc_1
Last Successful login time: Mon Jun 27 2016 22:44:13 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics
and Real Application Testing options

SQL> show con_id

CON_ID
------------------------------
1
SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL>


The new Services are actually running in the ROOT, not in the PDB, unlike in my earlier blog post where I used the new services to connect to the PDB.

Therefore, for a PDB environment, I'd use DBMS_SERVICE as I've demonstrated here and here.
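For completeness, a minimal sketch of the DBMS_SERVICE approach I mean, run while connected to the PDB (the service name pdb1_svc is purely illustrative):

```sql
-- Create and start a service from inside the PDB so that it registers
-- with the listener and connects sessions into PDB1 (name is illustrative)
ALTER SESSION SET CONTAINER = PDB1;
EXEC DBMS_SERVICE.CREATE_SERVICE(service_name => 'pdb1_svc', network_name => 'pdb1_svc');
EXEC DBMS_SERVICE.START_SERVICE(service_name => 'pdb1_svc');
```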
.
.
.


