
DBA Blogs

Final version of RMOUG 2015 presentation and script files are here ...

Grumpy old DBA - Sat, 2015-02-21 16:46
Wow, I had a great time presenting at RMOUG 2015.  The room was absolutely packed and I felt the presentation went pretty well (so let's see what the evaluation numbers show).

Here is a link (Final version presentation and supporting files) to a zip file containing the final version of the presentation as well as the script and SQL file shown at the end of the presentation.  Please let me know if anyone has any problems using this, and sorry that I didn't get this updated and available on Thursday as I had intended.

Thanks so much to RMOUG!

John
Categories: DBA Blogs

Data Guard Logical Standby – what does it mean?

The Oracle Instructor - Fri, 2015-02-20 11:34

With Data Guard, you have the choice between Physical and Logical Standby databases. Let’s see the differences! My demo starts with a Physical Standby that is then converted into a Logical Standby (hence the name of the database):

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
    prima - Primary database
    logst - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

For now, logst is still a Physical Standby. It is called that because the datafiles of prima and logst are physically identical. I can even restore them from one side to the other:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 11:43:07 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > alter database datafile 4 offline;

Database altered.

Now I copy the datafile from the standby server uhesse2 to the primary server uhesse1 – there are different ways to do that, but scp is one:

SYS@logst > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/logst/users01.dbf

SYS@logst > exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
[oracle@uhesse2 ~]$ scp /u01/app/oracle/oradata/logst/users01.dbf uhesse1:/u01/app/oracle/oradata/prima/users01.dbf
The authenticity of host 'uhesse1 (192.168.56.10)' can't be established.
RSA key fingerprint is e9:e7:5b:8b:b2:33:42:26:89:03:54:0c:16:0d:98:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'uhesse1,192.168.56.10' (RSA) to the list of known hosts.
oracle@uhesse1's password: 
users01.dbf                                                                                               100% 5128KB   5.0MB/s   00:00    
[oracle@uhesse2 ~]$ 

When I try to bring the datafile online again on prima, it is as if I had restored it from a backup:

SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > recover datafile 4;
Media recovery complete.
SYS@prima > alter database datafile 4 online;

Database altered.

The datafiles and also the archived logfiles are physically identical on both sites here; only the controlfiles are different. v$database (like v$datafile, by the way) derives its content from the controlfile:

SYS@prima > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
PRIMA						   2012613220 PRIMARY

SYS@prima > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
PRIMA						   2012613220 PHYSICAL STANDBY

Now I will convert it into Logical Standby:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 17:29:16 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > exec dbms_logstdby.build

PL/SQL procedure successfully completed.

SYS@prima > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > alter database recover to logical standby logst;

Database altered.

SYS@logst > shutdown immediate
ORA-01507: database not mounted


ORACLE instance shut down.
SYS@logst > startup mount
ORACLE instance started.

Total System Global Area  521936896 bytes
Fixed Size		    2214936 bytes
Variable Size		  314573800 bytes
Database Buffers	  201326592 bytes
Redo Buffers		    3821568 bytes
Database mounted.
SYS@logst > alter database open resetlogs;

Database altered.

SYS@logst > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
LOGST						   3156487356 LOGICAL STANDBY

SYS@logst > exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> remove database logst;
Removed database "logst" from the configuration
DGMGRL> add database logst as connect identifier is logst;
Database "logst" added
DGMGRL> enable database logst;
Enabled.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
    prima - Primary database
    logst - Logical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

One significant change is that the DBID and the name are now different from the primary database, as you see above. And the datafiles are no longer physically identical:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 17:38:56 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > alter database datafile 4 offline;

Database altered.

SYS@prima > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > host cp /u01/app/oracle/oradata/prima/users01.dbf /u01/app/oracle/oradata/prima/users01.old

I copy the original file because I know that the restore from logst will not work. It is just to show my point:

[oracle@uhesse2 ~]$ scp /u01/app/oracle/oradata/logst/users01.dbf uhesse1:/u01/app/oracle/oradata/prima/users01.dbf
oracle@uhesse1's password: 
users01.dbf                                                                                   100% 5128KB   5.0MB/s   00:00    
SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01122: database file 4 failed verification check
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'
ORA-01206: file is not part of this database - wrong database id

Exactly. logst is now an autonomous database that just happens to do (nearly) the same DML as prima. It is no longer identical to prima at the Oracle block level. The rowids from prima have no meaning on logst any more:

DGMGRL> edit database logst set state=apply-on;
Succeeded.
SYS@prima > insert into scott.dept values (50,'TEST','TEST');
insert into scott.dept values (50,'TEST','TEST')
                  *
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > host cp /u01/app/oracle/oradata/prima/users01.old /u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > recover datafile 4;
Media recovery complete.
SYS@prima > alter database datafile 4 online;

Database altered.

SYS@prima > insert into scott.dept values (50,'TEST','TEST');

1 row created.

SYS@prima > commit;
Commit complete.

SYS@prima > select rowid,dept.* from scott.dept where deptno=50;

ROWID		       DEPTNO DNAME	     LOC
------------------ ---------- -------------- -------------
AAADS8AAEAAAACNAAE	   50 TEST	     TEST

This rowid is what we normally record in the redo entries and it would be sufficient to retrieve that row on the primary and also on a physical standby where we do “Redo Apply” (another term for “recover database”). But that rowid is different on logst:

SYS@logst > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > select rowid,dept.* from scott.dept where deptno=50;

ROWID                  DEPTNO DNAME          LOC
------------------ ---------- -------------- -------------
AAADS8AAEAAAACOAAA         50 TEST           TEST

That is why we need to put additional information – supplemental log data – into the redo entries on the primary. It will help the SQL Apply mechanism to retrieve the row there:

[Figure: Logical Standby Architecture]

The supplemental log data contains, at a minimum, the primary/unique key, as shown in the picture. In the absence of primary/unique keys, every column of a modified row is written into the redo logs. That may impact the performance of the primary database. Another serious drawback of Logical Standby is that not every datatype and not every operation on the primary is supported by the SQL Apply mechanism. The number of unsupported datatypes decreases version by version, though.
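
Both aspects can be checked up front with standard dictionary views; a quick sketch (run on the primary; the supplemental logging flags are set by dbms_logstdby.build):

-- supplemental logging status
select supplemental_log_data_min, supplemental_log_data_pk, supplemental_log_data_ui
from v$database;

-- tables that SQL Apply cannot maintain
select distinct owner, table_name
from dba_logstdby_unsupported
order by owner, table_name;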

The demo and the sketch above are from my presentation about Transient Logical Standby at the Oracle University Expert Summit 2015 in Dubai – really an amazing location! Hope you find it useful :-)


Tagged: Data Guard
Categories: DBA Blogs

Database Flashback -- 5

Hemant K Chitale - Fri, 2015-02-20 10:15
Continuing my series on Database Flashback.

Here I demonstrate that Flashback Logs alone are not sufficient.  The process of FLASHBACK DATABASE does need *some* redo entries from Archive/Online Redo Logs.



[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 20 23:18:37 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
20-FEB-15

SYS>show parameter db_flashback_retention_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14573526 17-FEB-15             1440       78561280                415383552

SYS>

My database seems to have adequate Flashback Capability in the Flashback Logs.  Let me run some transactions and generate Redo and Flashback.

SYS>connect hemant/hemant
Connected.
HEMANT>drop table obj_list;
drop table obj_list
*
ERROR at line 1:
ORA-00942: table or view does not exist


HEMANT>create table obj_list tablespace users
2 as select * from dba_objects where 1=2;

Table created.

HEMANT>select tablespace_name
2 from user_tables
3 where table_name = 'OBJ_LIST';

TABLESPACE_NAME
------------------------------
USERS

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects;

225138 rows created.

HEMANT>rollback;

Rollback complete.

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects;

225135 rows created.

HEMANT>delete obj_list;

225135 rows deleted.

HEMANT>
HEMANT>select count(*)
2 from v$archived_log
3 where first_time >
4 (select startup_time
5 from v$instance)
6 /

COUNT(*)
----------
2

HEMANT>
HEMANT>alter system switch logfile;

System altered.

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 union all
8 select * from dba_objects;

300180 rows created.

HEMANT>insert into obj_list
2 select * from obj_list;

300180 rows created.

HEMANT>select count(*) from obj_list;

COUNT(*)
----------
600360

HEMANT>commit;

Commit complete.

HEMANT>select count(*) from v$archived_log
2 where first_time >
3 (select startup_time from v$instance)
4 /

COUNT(*)
----------
4

HEMANT>

Now, let's suppose that a scheduled (periodic) archive log backup job kicks in, backs up the archivelogs, and deletes them.

HEMANT>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:30:36 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> backup as compressed backupset archivelog all delete input;

Starting backup at 20-FEB-15
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=58 device type=DISK
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=106 STAMP=872033779
channel ORA_DISK_1: starting piece 1 at 20-FEB-15
channel ORA_DISK_1: finished piece 1 at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnshxs_.bkp tag=TAG20150220T233055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_18/o1_mf_1_3_bg9dcl3v_.arc RECID=106 STAMP=872033779
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=112 STAMP=872205844
input archived log thread=1 sequence=4 RECID=113 STAMP=872205859
input archived log thread=1 sequence=5 RECID=114 STAMP=872205867
input archived log thread=1 sequence=6 RECID=115 STAMP=872206048
input archived log thread=1 sequence=7 RECID=116 STAMP=872206098
input archived log thread=1 sequence=8 RECID=117 STAMP=872206254
channel ORA_DISK_1: starting piece 1 at 20-FEB-15
channel ORA_DISK_1: finished piece 1 at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_3_bggndmz4_.arc RECID=112 STAMP=872205844
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_4_bggnf0qy_.arc RECID=113 STAMP=872205859
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_5_bggnf8ty_.arc RECID=114 STAMP=872205867
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_6_bggnlzvr_.arc RECID=115 STAMP=872206048
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_7_bggnnk6c_.arc RECID=116 STAMP=872206098
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_8_bggnsg5k_.arc RECID=117 STAMP=872206254
Finished backup at 20-FEB-15

Starting Control File and SPFILE Autobackup at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/autobackup/2015_02_20/o1_mf_s_872206274_bggnt34k_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 20-FEB-15

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$

Now, suppose that I need to Flashback the database.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 20 23:32:36 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14577571 19-FEB-15             1440      135348224               3784998912

SYS>select to_char(oldest_flashback_time,'DD-MON HH24:MI:SS')
2 from V$flashback_database_log;

TO_CHAR(OLDEST_FLASHBACK
------------------------
19-FEB 00:04:02

SYS>

Notice how the OLDEST_FLASHBACK_TIME has changed from 17-Feb to the midnight of 18/19-Feb ! Apparently, my FRA cannot hold very many Flashback Logs.
As I have mentioned in two posts earlier, here and here, the scope of the actual ability to Flashback may vary.
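
Before going through a shutdown, the target SCN can be sanity-checked against that window; a minimal sketch (being inside the window is necessary but, as we are about to see, not sufficient):

select case
         when 14577572 >= oldest_flashback_scn then 'TARGET WITHIN FLASHBACK WINDOW'
         else 'TARGET OUTSIDE FLASHBACK WINDOW'
       end as scn_check
from v$flashback_database_log;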

Can I flashback to SCN 14577572 ?  Let me give it a try.

SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>flashback database to SCN 14577572;
flashback database to SCN 14577572
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577571 to SCN 14577572
ORA-38761: redo log sequence 1 in thread 1, incarnation 5 could not be accessed


SYS>

Aaha ! Apparently, Oracle needs to read some redo from Archive Log(s) !  So, having Flashback Logs alone is *not* sufficient.  I know that I need the database to be running in ArchiveLog mode.  But I should also know that if I want to Flashback to a particular Time or SCN, I will need the corresponding ArchiveLog as well !  (Imagine trying to Flashback to 3 days ago and having purged all ArchiveLogs simply because I do daily Full Backups and have a Retention of 2 days only !)

So, I must take the necessary action now.

SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:43:43 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 1 until sequence 2;

Starting restore at 20-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=1
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001619_bg9gpq87_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001619_bg9gpq87_.bkp tag=TAG20150219T001619
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=2
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001930_bg9gwl9w_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001930_bg9gwl9w_.bkp tag=TAG20150219T001930
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 20-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14577572
SYS>/

Flashback complete.

SYS>alter database open read only;

Database altered.

SYS>

What if I want to Flashback to another, later time ?

SYS>select sequence#, first_change#, to_char(first_time,'DD-MON HH24:MI:SS')
2 from v$archived_log
3 where first_time > trunc(sysdate)
4 order by 1;

 SEQUENCE# FIRST_CHANGE# TO_CHAR(FIRST_TIME,'DD-M
---------- ------------- ------------------------
         4      14580736 20-FEB 23:24:03
         5      14581651 20-FEB 23:24:16
         6      14583536 20-FEB 23:24:23
         7      14584203 20-FEB 23:27:27
         8      14584351 20-FEB 23:28:17

SYS>
SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>flashback database to SCN 14584205;
flashback database to SCN 14584205
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577573 to SCN 14584205
ORA-38761: redo log sequence 5 in thread 1, incarnation 5 could not be accessed


SYS>
SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:53:48 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 5 until sequence 6;

Starting restore at 20-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=5
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=6
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 20-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ cd /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20
sh-3.2$ ls -l
total 63344
-rw-rw---- 1 oracle oracle 494592 Feb 20 23:44 o1_mf_1_1_bggol5t8_.arc
-rw-rw---- 1 oracle oracle 18432 Feb 20 23:44 o1_mf_1_2_bggol6wm_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 20 23:54 o1_mf_1_5_bggp4zbp_.arc
-rw-rw---- 1 oracle oracle 13635072 Feb 20 23:54 o1_mf_1_6_bggp4zh3_.arc
sh-3.2$

I have Sequences 1 and 2 that were restored for the first flashback and Sequences 5 and 6 that have been restored now. Do I need Sequences 3 and 4 ? Do I need Sequence 7 (that contains the Redo beyond SCN 14584203) ?

Let's see.
sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14584205
SYS>/
flashback database to SCN 14584205
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577573 to SCN 14584205
ORA-38761: redo log sequence 3 in thread 1, incarnation 5 could not be accessed


SYS>
Apparently, I also need Sequence 3 at least, and (maybe ?) 4 !  Why ?  Because my database is currently at an SCN lower than the one I want to Flashback to, and the corresponding redo is required.  (If I had *not* done the first Flashback to the lower SCN, I wouldn't need these Archivelogs !)

SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Feb 21 00:01:34 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 3 until sequence 4;

Starting restore at 21-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=17 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=3
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=4
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 21-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ cd /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20
sh-3.2$ ls -l
total 63344
-rw-rw---- 1 oracle oracle 494592 Feb 20 23:44 o1_mf_1_1_bggol5t8_.arc
-rw-rw---- 1 oracle oracle 18432 Feb 20 23:44 o1_mf_1_2_bggol6wm_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 20 23:54 o1_mf_1_5_bggp4zbp_.arc
-rw-rw---- 1 oracle oracle 13635072 Feb 20 23:54 o1_mf_1_6_bggp4zh3_.arc
sh-3.2$ cd ../*21
sh-3.2$ ls -l
total 80272
-rw-rw---- 1 oracle oracle 31472640 Feb 21 00:01 o1_mf_1_3_bggpmf06_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 21 00:01 o1_mf_1_4_bggpmdxk_.arc
sh-3.2$

Here is something important (nothing to do with Flashback Database).  The server clock went past midnight into 21-Feb so the restored files went into 2015_02_21 and not 2015_02_20 !

sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14584205
SYS>/

Flashback complete.

SYS>

Remember my question about whether I would need Sequence 7 ?  Let's see what the alert.log shows about the Flashback Database actions.

Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_2_bggol6wm_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_21/o1_mf_1_3_bggpmf06_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_21/o1_mf_1_4_bggpmdxk_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_5_bggp4zbp_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_6_bggp4zh3_.arc
Recovery of Online Redo Log: Thread 1 Group 1 Seq 7 Reading mem 0
Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo01.log
Incomplete Recovery applied until change 14584206 time 02/20/2015 23:28:03
Flashback Media Recovery Complete
Completed: flashback database to SCN 14584205

Another learning point !  Sequence 7 was applied from the Online Redo Log file.

What we have learned :
1.  If we Flashback the database to a particular SCN / Time / Restore Point, Oracle does need the Redo from the Archive / Online Redo Log file that was active at that time.  It still needs some Redo to make the database consistent (e.g. apply Undo)

2. If we Flashback the database to SCN 101 and then (without OPEN RESETLOGS), choose to Flashback to a subsequent SCN 201, we again need ArchiveLogs !

3. Flashback from the SQL command-line is intelligent enough to use the Online Redo Log, but not (unlike, say, RMAN's RECOVER DATABASE) to automatically restore ArchiveLogs as they are required !
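
A practical footnote to the above: instead of restoring sequence ranges pair by pair, the whole required span could have been restored by SCN in one go, using the SCNs reported in the ORA-38762 messages; a sketch:

RMAN> restore archivelog from scn 14577571 until scn 14584206;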

.
.
.



Categories: DBA Blogs

Oracle MAF - assigning value to a backing bean property from javascript

Communicating between layers in MAF is quite easy with the provided APIs. For calling a Java method from JavaScript we have the invokeMethod JS function, which is quite straightforward: ...

Categories: DBA Blogs

Test MySQL on AWS quickly

Kubilay Çilkara - Thu, 2015-02-19 02:20
Using sysbench to performance-test AWS RDS MySQL hardware is an easy three-step operation. Sysbench creates synthetic tests, and they are done on a 1 mil row 'sbtest' table that sysbench creates in the MySQL database you indicate. The test doesn't intrude on your database schema, and it doesn't use your data, so it is quite safe. The test is an OLTP test trying to simulate event operations in the database as it runs various SELECT, INSERT, UPDATE and DELETE requests on its own 'sbtest' table. The results of the tests are metrics like transactions per second, number of events, elapsed time etc. See the man pages for a description and Google it; it is a popular testing tool. Other things you can set it up to do are to control how many requests (events) you want it to execute in a given time, or to tell it to keep on executing infinite requests until you stop it, for destruction testing. It is a very flexible testing tool with many options, including throttling concurrency.

You can be up and running with 3 commands on a unix system as follows.

Download the sysbench tool (doing this on ubuntu):

sudo apt-get install sysbench

Create a table with 1 mil rows:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} prepare

Test with different parameters:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} --max-time=60 --num-threads=550 run

Warning: Synthetic tests will just give you the ability of the hardware at a given standard set of requests and DML operations. They are in no way an indication of what will happen to your database if the real workload increases because of the applications. Application load testing is something else; applications are complex! Database workload depends on the application-generated workload from real users using the system, and it is very hard to simulate that in a test. It is not impossible if you use a database such as Oracle, which has the capability of recording and replaying its production database workload (Database Replay, part of Real Application Testing). In MySQL I couldn't find a way to do this so far. But sysbench synthetic tests gave me the ability to quickly benchmark and baseline a MySQL database's capabilities on different AWS Amazon hardware; something is better than nothing I suppose.
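
When you are finished, sysbench can also drop its 'sbtest' table again via its cleanup command; a sketch with the same connection options as above:

sysbench --test=oltp --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} cleanup
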
Categories: DBA Blogs

Detecting Source of MySQL Queries with Comments

Pythian Group - Tue, 2015-02-17 19:22

As a MySQL DBA I already know the data changes that happen on my system. I have logs for that.

However, it’s a common problem that several years into the life of an application, the current developers won’t know where in the codebase queries come from. It’s often hard for them to find the location in the code if queries are formed dynamically; the pattern I show them to optimize doesn’t match anything in the code.

I stumbled on a trick a couple years ago that has been invaluable in tracking down these problematic queries: query comments.

Here’s an example:

When a query shows up in the slow query log, it generally looks something like this:

# Time: 150217 10:26:01
# User@Host: comments[comments] @ localhost []  Id:    13
# Query_time: 0.000231  Lock_time: 0.000108 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424186761;
select * from cars;

That logging shows me who executed the query (the comments user), the server it was executed from (localhost in this case), and what time it was executed (who, where, when).

What this doesn’t tell me is where in my codebase this query is. For example, if I want to change that select * to a more targeted select column1, column2, I may not know where to find it.

This is where comments help.

Comments can be any of three styles:

select * from cars; # comment to end of line here
select * from cars; -- comment to end of line here
select * from /* inline comment here */ cars;
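
One caveat worth checking on your client version: the mysql command-line client may strip comments from statements before they ever reach the server; starting it with the --comments (-c) option preserves them:

mysql --comments -u comments -p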

When you add a comment to your query, the comment is captured in the log along with the query. Here’s an example.

On the command line:

mysql> select * from cars; # valerie - command line - testing comments

In the slow log:

# Time: 150217 11:25:24
# User@Host: comments[comments] @ localhost []  Id:     3
# Query_time: 0.000246  Lock_time: 0.000119 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424190324;
select * from cars # valerie - command line - testing comments;

This is especially useful if you are forming queries dynamically and can add the name of the piece of code, function, user, etc. to the comment. Those comments would look something like this in the slow log:

# Time: 150217 11:32:04
# User@Host: comments[comments] @ localhost []  Id:     3
# Query_time: 0.000225  Lock_time: 0.000104 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424190724;
select * from cars # userid 7695424 - index.php?search=true - display function;

 

Categories: DBA Blogs

Pythian’s Annual MySQL Community Dinner at Pedro’s

Pythian Group - Tue, 2015-02-17 08:54

Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members since many of you will be in town for Percona Live that week. Here are the details:

What: The MySQL Community Dinner

When: Tuesday April 14, 2015 –  7:00 PM at Pedro’s (You are welcome to show up later, too!)

Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

Cost: Tickets are $35 USD, Includes Mexican buffet, non-alcoholic drinks, taxes, and gratuities (Pedro’s Dinner Menu 2015)

How: RSVP through Eventbrite

Attendees:

Laine Campbell

Derek Downey

Gillian Gunson

Miklos Szel

Marco Tusa

Mark Filipi

Alkin Tezuysal

Brian Cain

Brian Kelly

Joe Muraski

Patrick Pearson

Looking forward to seeing you all at the event!

 

Categories: DBA Blogs

Database Flashback -- 4

Hemant K Chitale - Tue, 2015-02-17 08:44
Continuing my series on Oracle Database Flashback.  This post demonstrates a FLASHBACK DATABASE when a Tablespace is set to  FLASHBACK OFF.

UPDATE 20-Feb-15 : When might I have FLASHBACK OFF for a Tablespace in a FLASHBACK ON Database ?  When the Tablespace contents are "throwaway-able".  I can avoid the overheads of Flashback for DML in the Tablespace, and I am confident that on the rare occasion that I have to Flashback the Database, I can discard and rebuild the tablespace and its contents because they are not important for persistency.


[oracle@localhost Hemant]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Feb 17 22:30:53 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
17-FEB-15

SYS>show parameter db_flashback_retention_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14573520 15-FEB-15             1440      190971904                        0

SYS>

After my last post on 15/16-Feb (midnight), I had generated some transactions. The Flashback Size has grown from 24.576MB to 190.972MB.

Question : Does Database Flashback always apply to the whole database ?  Can we exclude a selected tablespace ?

SYS>create tablespace TEST_NOFB;

Tablespace created.

SYS>alter tablespace TEST_NOFB flashback OFF;

Tablespace altered.

SYS>

So, it is possible to set FLASHBACK OFF for a tablespace !  What are the implications ?  Oracle does NOT save Flashback information for this tablespace.  If I need to Flashback the Database, this tablespace or all the datafiles in this tablespace must be taken OFFLINE or DROPped.
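
To see which tablespaces currently have Flashback disabled, V$TABLESPACE exposes a FLASHBACK_ON column; a quick check:

select name, flashback_on from v$tablespace;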

UPDATE : 18-FEB-15

Continuing the case.
Note : Subsequent to last night's post I had some more activity on the database and even a RESTORE and RESETLOGS.  The v$flashback_database_log contents are different now.

[oracle@localhost Hemant]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 18 23:27:19 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
18-FEB-15

SYS>show parameter db_flashback_retention_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14573526 17-FEB-15             1440       62783488                        0

SYS>

Now, let me create a RESTORE POINT and then generate some activity in the TEST_NOFB tablespace.

SYS>create restore point FEB18_15;

Restore point created.

SYS>connect hemant/hemant
Connected.
HEMANT>create table test_nofb_tbl tablespace test_nofb
2 as select * from dba_objects
3 where 1=2;

Table created.

HEMANT>select tablespace_name
2 from user_tables
3 where table_name = 'TEST_NOFB_TBL'
4 /

TABLESPACE_NAME
------------------------------
TEST_NOFB

HEMANT>insert into test_nofb_tbl
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 /

225138 rows created.

HEMANT>truncate table test_nofb_tbl;

Table truncated.

HEMANT>insert into test_nofb_tbl
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 /

225138 rows created.

HEMANT>commit;

Commit complete.

HEMANT>connect / as sysdba
Connected.
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14573526 17-FEB-15             1440       62783488                646397952

SYS>

So, I have generated DML activity in the TEST_NOFB Tablespace.

Let me define another Restore Point and generate more DML.

SYS>create restore point FEB18_15_2nd;

Restore point created.

SYS>connect hemant/hemant
Connected.
HEMANT>insert into test_nofb_tbl
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 /

225138 rows created.

HEMANT>commit;

Commit complete.

HEMANT>select count(*) from test_nofb_tbl;

COUNT(*)
----------
450276

HEMANT>

I now have two Restore Points and DML against the target table in a FLASHBACK OFF Tablespace between and after the Restore Points.

Let me try to Flashback to the 2nd Restore Point.  What should I see ?  225138 rows in the table ?  Or no rows in the table ?

HEMANT>connect / as sysdba
Connected.
SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>
SYS>flashback database to restore point FEB18_15_2nd;
flashback database to restore point FEB18_15_2nd
*
ERROR at line 1:
ORA-38753: Cannot flashback data file 6; no flashback log data.
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


SYS>

AAHA ! It cannot flashback the datafile because no flashback log exists for it.

Let me quote again what I said yesterday :  If I need to Flashback the Database, this tablespace or all the datafiles in this tablespace must be taken OFFLINE or DROPped.

So, I must take the necessary action :

SYS>alter database datafile 6 offline;

Database altered.

SYS>flashback database to restore point FEB18_15_2nd;

Flashback complete.

SYS>alter database open read only;

Database altered.

SYS>
SYS>alter database datafile 6 online;
alter database datafile 6 online
*
ERROR at line 1:
ORA-01113: file 6 needs media recovery
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


SYS>recover datafile 6;
ORA-00283: recovery session canceled due to errors
ORA-38798: Cannot perform partial database recovery
ORA-38797: Full database recovery required after a database has been flashed back


SYS>

I *cannot* selectively recover that datafile.  What options do I have ?
In the meantime, let's look at the alert.log.

Wed Feb 18 23:47:04 2015
flashback database to restore point FEB18_15_2nd
ORA-38753 signalled during: flashback database to restore point FEB18_15_2nd...
Wed Feb 18 23:49:42 2015
alter database datafile 6 offline
Completed: alter database datafile 6 offline
flashback database to restore point FEB18_15_2nd
Flashback Restore Start
Flashback Restore Complete
Flashback Media Recovery Start
Serial Media Recovery started
Warning: Datafile 6 (/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf) is offline during full database recovery and will not be recovered
Recovery of Online Redo Log: Thread 1 Group 3 Seq 3 Reading mem 0
Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo03.log
Recovery of Online Redo Log: Thread 1 Group 1 Seq 4 Reading mem 0
Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo01.log
Incomplete Recovery applied until change 14577570 time 02/18/2015 23:40:29
Flashback Media Recovery Complete
Completed: flashback database to restore point FEB18_15_2nd
Wed Feb 18 23:50:08 2015
alter database open read only
AUDIT_TRAIL initialization parameter is changed to OS, as DB is NOT compatible for database opened with read-only access
ARCH: STARTING ARCH PROCESSES
.....
.....
Completed: alter database open read only
Wed Feb 18 23:50:10 2015
.....
.....
Wed Feb 18 23:50:36 2015
alter database datafile 6 online
ORA-1113 signalled during: alter database datafile 6 online...
Wed Feb 18 23:50:36 2015
Signalling error 1152 for datafile 6!
Checker run found 2 new persistent data failures
ALTER DATABASE RECOVER datafile 6
Media Recovery Start
Serial Media Recovery started
Media Recovery failed with error 38798
ORA-283 signalled during: ALTER DATABASE RECOVER datafile 6 ...

So, the datafile is unusable.
I can only drop the tablespace.

SYS>shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>
SYS>recover datafile 6;
ORA-00283: recovery session canceled due to errors
ORA-38798: Cannot perform partial database recovery
ORA-38797: Full database recovery required after a database has been flashed back


SYS>alter database open;
alter database open
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


SYS>alter database open resetlogs;

Database altered.

SYS>
SYS>recover datafile 6;
ORA-00283: recovery session canceled due to errors
ORA-19909: datafile 6 belongs to an orphan incarnation
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


SYS>select tablespace_name from dba_data_files
2 where file_id=6;

TABLESPACE_NAME
------------------------------
TEST_NOFB

SYS>drop tablespace test_nofb;
drop tablespace test_nofb
*
ERROR at line 1:
ORA-01549: tablespace not empty, use INCLUDING CONTENTS option


SYS>drop tablespace test_nofb including contents and datafiles;

Tablespace dropped.

SYS>

There ! A Tablespace that was in FLASHBACK OFF mode cannot be accessed after a Flashback Database --- not even with respect to data that existed before the Restore Point that is used for the Flashback.
.
.
.

Categories: DBA Blogs

Partner Webcast – Agile Business Intelligence in the Cloud for Oracle Partner Solutions

As the #1 vendor in business analytics with the industry’s most complete and integrated range of enterprise-class business intelligence (BI) solutions, Oracle leads the way in helping organizations...

Categories: DBA Blogs

A new index on a small table makes a big difference

Bobby Durrett's DBA Blog - Mon, 2015-02-16 16:12

A few weeks back, on the weekend just before I went on call, we got a complaint about slowness on an important set of reports.  I worried that the slowness of these reports would continue during my support week, so I tried to figure out why they were slow.  I reviewed an AWR report for the 24 hours when the reports were running and found a simple query against a tiny table at the top of the “SQL ordered by Elapsed Time” report:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6      307,516.33 3.416388E+09

I edited the AWR report to show just elapsed seconds and number of executions.  3.4 billion executions totaling 307,000 seconds of elapsed time.  This was about 90 microseconds per execution.

The previous weekend the same query looked like this:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6      133,143.65 3.496291E+09

So, about the same number of executions but less than half of the elapsed time.  This was about 38 microseconds per execution.  I never fully explained the change from week to week, but I found a way to improve the query performance by adding a new index.

The plan was the same both weekends so the increase in average execution time was not due to a plan change.  Here is the plan:

SQL_ID 77hcmt4kkr4b6
--------------------
SELECT DIV_NBR FROM DIV_RPT_GEN_CTL WHERE RPT_NM = :B1 AND
GEN_STAT = 1

Plan hash value: 1430621991

------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  |
------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |       |
|   1 |  TABLE ACCESS BY INDEX ROWID| DIV_RPT_GEN_CTL    |     1 |
|   2 |   INDEX RANGE SCAN          | DIV_RPT_GEN_CTL_U1 |     1 |
------------------------------------------------------------------

I found that the table only had 369 rows and 65 blocks so it was tiny.

The table’s only index was on columns RPT_NM and RPT_ID, but only RPT_NM was in the query.  For the given value of RPT_NM the index would look up all rows in the table with that value until it found those with GEN_STAT=1.  I suspect that on the weekend of the slowdown the number of rows being scanned for a given RPT_NM value had increased, but I cannot prove it.

I did a count grouping by the column GEN_STAT and found that only 1 of the 300 or so rows had GEN_STAT=1.

SELECT GEN_STAT,count(*)
FROM DIV_RPT_GEN_CTL
group by GEN_STAT;

  GEN_STAT   COUNT(*)
---------- ----------
         1          1
         2        339
         0         29

So, even though this table is tiny, it made sense to add an index which included the selective column GEN_STAT.  Also, since the reports execute the query billions of times per day, it made sense to include the one column in the select clause as well, DIV_NBR.  By including DIV_NBR in the index, the query could get DIV_NBR from the index and not touch the table.  The new index was on the columns RPT_NM, GEN_STAT, and DIV_NBR, in that order.
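
The DDL was along these lines (a sketch; the index name is taken from the new plan below, and storage clauses are omitted):

create index DIV_RPT_GEN_CTL_U2 on DIV_RPT_GEN_CTL (RPT_NM, GEN_STAT, DIV_NBR);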

Here is the new plan:

SQL_ID 77hcmt4kkr4b6
--------------------
SELECT DIV_NBR FROM DIV_RPT_GEN_CTL WHERE RPT_NM = :B1 AND
GEN_STAT = 1

Plan hash value: 2395994055

-------------------------------------------------------
| Id  | Operation        | Name               | Rows  |
-------------------------------------------------------
|   0 | SELECT STATEMENT |                    |       |
|   1 |  INDEX RANGE SCAN| DIV_RPT_GEN_CTL_U2 |     1 |
-------------------------------------------------------

Note that it uses the new index and does not access the table.  Here is the part of the AWR report for the problem query for last weekend:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6       84,303.02 4.837909E+09

4.8 billion executions and only 84,000 seconds elapsed.  That is about 17.4 microseconds per execution.  That is less than half of what the average execution time was the weekend before the problem started.

The first Monday after we put the index in we found that one of the slow reports had its run time reduced from 70 minutes to 50 minutes.  It was great that we could improve the run time so much with such a simple fix.

It was a simple query to tune.  Add an index using the columns in the where clause and the one column in the select clause.  It was a tiny table that normally would not even need an index.  But, any query that an application executes billions of times in a day needs to execute in the most efficient way possible so it made sense to add the best possible index.

– Bobby

Categories: DBA Blogs

Log Buffer #409, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-02-16 10:29

This Log Buffer Edition sheds light at some of the nifty blog post of the week from Oracle, SQL Server and MySQL.

Oracle:

Patch Set Update: Hyperion Data Relationship Management 11.1.2.3.504

The Hitchhiker’s Guide to the EXPLAIN PLAN Part 33: The mother of all SQL antipatterns?

MongoDB as a Glassfish Security Realm

E-Business Suite customers must ensure that their database remains on a level that is covered by Error Correction Support (ECS)

EM12c: How to Retrieve Passwords from the Named Credentials

SQL Server:

How does a View work on a Table with a Clustered Columnstore Index ?

How do you develop and deploy your database?

Microsoft Azure Storage Queues Part 3: Security and Performance Tips

Stairway to SQL Server Security Level 6: Execution Context and Code Signing

Centralize Your Database Monitoring Process

MySQL:

New Galera Cluster version is now released! It includes patched MySQL server 5.6.21 and Galera replication provider 3.9

Shinguz: Nagios and Icinga plug-ins for MySQL 1.0.0 have been released

The next release of MongoDB includes the ability to select a storage engine, the goal being that different storage engines will have different capabilities/advantages, and users can select the one most beneficial to their particular use-case. Storage engines are cool.

The MySQL grant syntax allows you to specify dynamic database names using the wildcard characters.

Oracle‘s 10 commitments to MySQL – a 5 year review

Categories: DBA Blogs

Log Buffer #410, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-02-16 10:28

This Log Buffer Edition spread love of databases just before Valentine’s Day. Lovely blog posts from Oracle, SQL Server and MySQL are here for you to love.

Oracle:

Creating a Mobile-Optimized REST API Using Oracle Service Bus by Steven Davelaar.

GROUP BY – wrong results in 12.1.0.2

Using Edition-Based Redefinition to Bypass Those Pesky Triggers

It’s easy to make mistakes, or overlook defects, when constructing parallel queries – especially if you’re a developer who hasn’t been given the right tools to make it easy to test your code.

If you have a sorted collection of elements, how would you find index of specific value?

SQL Server:

How to use the IsNothing Inspection Function in SSRS

What better way to learn how to construct complex CHECK CONSTRAINTs, use the SQL 2012 window frame capability of the OVER clause and LEAD analytic function, as well as how to pivot rows into columns using a crosstab query?

SQL Server’s GROUP BY clause provides you a way to aggregate your SQL Server data and to group data on a single column, multiple columns, or even expressions. Greg Larsen discusses how to use the GROUP by clause to summarize your data.

Surely, we all know how T-SQL Control-of-flow language works? In fact it is surprisingly easy to get caught out.

Resilient T-SQL code is code that is designed to last, and to be safely reused by others.

MySQL:

The NoSQL databases are gaining increasing popularity. MongoDB, being one of the most established among them, uses a JSON data model and offers great scalability and ease of use due to dynamic data schemas.

Is upgrading RDS like a shit-storm that will not end?

Over the last few MySQL releases the size of the MySQL package has increased, and it looks like the trend is continuing.

This article details the proper method of load balancing either a Percona XTRADB Cluster (PXC) or MariaDB Cluster.

One common job for a DBA is working with a Development Team member on a batch loading process that is taking more time than expected.

Categories: DBA Blogs

Database Flashback -- 3

Hemant K Chitale - Sun, 2015-02-15 10:02
Continuing my series on Oracle Database Flashback


As I pointed out in my previous post, the ability to flashback is NOT strictly specified by db_flashback_retention_target.  The actual scope may be greater than or even less than the target.

[oracle@localhost Hemant]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Feb 15 23:39:24 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
15-FEB-15

SYS>show parameter db_flashback_retention_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14573520 15-FEB-15             1440       24576000                        0

SYS>
SYS>select to_char(oldest_flashback_time,'DD-MON HH24:MI')
2 from v$flashback_database_log
3 /

TO_CHAR(OLDEST_FLASHB
---------------------
15-FEB 23:31

SYS>

In my previous post, the OLDEST_FLASHBACK_TIME was a week ago. Now, it doesn't appear to be so !

SYS>select * from v$flash_recovery_area_usage;

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                        .95                       .94               5
BACKUP PIECE                      29.12                       .12               6
IMAGE COPY                            0                         0               0
FLASHBACK LOG                       .61                         0               3
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.

SYS>

After the blog of 08-Feb, my database had been SHUTDOWN. The instance was STARTED at 23:24 today and the database was OPENed at 23:25.

Sun Feb 08 23:08:21 2015
Shutting down instance (immediate)
....
....
Sun Feb 08 23:08:37 2015
ARCH shutting down
ARC0: Archival stopped
Thread 1 closed at log sequence 1
Successful close of redo thread 1
Completed: ALTER DATABASE CLOSE NORMAL
ALTER DATABASE DISMOUNT
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Sun Feb 08 23:08:38 2015
Stopping background process VKTM
Sun Feb 08 23:08:40 2015
Instance shutdown complete
....
....
Sun Feb 15 23:24:46 2015
Starting ORACLE instance (normal)
....
....
Sun Feb 15 23:25:33 2015
QMNC started with pid=34, OS id=2449
Completed: ALTER DATABASE OPEN

So, it seems that, this time, Oracle says I cannot Flashback to 08-Feb, although on 08-Feb it did say that I could Flashback to 01-Feb. I strongly recommend periodically querying V$FLASH_RECOVERY_AREA_USAGE and V$FLASHBACK_DATABASE_LOG.
I have seen DBAs rely only on the parameter db_flashback_retention_target without ever querying these views.
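A minimal sketch of such a periodic check, combining the two views (columns taken from the output shown above), could be run ad hoc or scheduled:

-- How far back can the database actually be flashed back right now?
select oldest_flashback_scn,
       to_char(oldest_flashback_time,'DD-MON-YYYY HH24:MI') oldest_flashback_time,
       retention_target,
       flashback_size
from   v$flashback_database_log;

-- How much Flash Recovery Area space do the flashback logs consume?
select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from   v$flash_recovery_area_usage
where  file_type = 'FLASHBACK LOG';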
.
.
.
Categories: DBA Blogs

How to add a Cordova NFC plugin in MAF 2.1 application

You may have already noticed that the latest release of Oracle MAF 2.1 brings a major refresh to the MAF-Cordova story. In short, we have two big improvements here: 1. The version of Cordova has...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Queryable Patch Interface

Pakistan's First Oracle Blog - Thu, 2015-02-12 18:39
Starting from Oracle 12c, from within the SQL patching information can be obtained. A new package DBMS_QOPATCH offers some really nifty procedures to get the patch information. Some of that information is shared below:

To get patch information from the inventory:


SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Home      : /u01/app/oracle/product/12.1.0/db_1
Inventory      : /u01/app/oraInventory

The following is the equivalent of the opatch lsinventory command at the OS level:

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Querayable Patch Interface 1.0
--------------------------------------------------------------------------------
Oracle Home      : /u01/app/oracle/product/12.1.0/db_1
Inventory      : /u01/app/oraInventory
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 12c                       12.1.0.1.0
Installed Products ( 131)

Oracle Database 12c                        12.1.0.1.0
Sun JDK                             1.6.0.37.0
oracle.swd.oui.core.min                     12.1.0.1.0
Installer SDK Component                     12.1.0.1.0
Oracle One-Off Patch Installer                    12.1.0.1.0
Oracle Universal Installer                    12.1.0.1.0
Oracle USM Deconfiguration                    12.1.0.1.0
Oracle Configuration Manager Deconfiguration            10.3.1.0.0
Oracle RAC Deconfiguration                    12.1.0.1.0
Oracle DBCA Deconfiguration                    12.1.0.1.0
Oracle Database Plugin for Oracle Virtual Assembly Builder  12.1.0.1.0
Oracle Configuration Manager Client                10.3.2.1.0
Oracle Net Services                        12.1.0.1.0
Oracle Database 12c                        12.1.0.1.0
Oracle OLAP                            12.1.0.1.0
Oracle Spatial and Graph                    12.1.0.1.0
Oracle Partitioning                        12.1.0.1.0
Enterprise Edition Options                    12.1.0.1.0


Interim patches:
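To check for a single interim patch rather than listing the whole inventory, the package also offers IS_PATCH_INSTALLED — a minimal sketch, with the patch number purely hypothetical:

SQL> select xmltransform(dbms_qopatch.is_patch_installed('19303936'), dbms_qopatch.get_opatch_xslt) from dual;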

Categories: DBA Blogs

Customer2Cloud Program removes barriers to Cloud Adoption

Modern cloud applications can help lower the cost of IT ownership, accelerate the pace of innovation, and vastly improve users’ experiences. Business leaders acknowledge this, but the journey to...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle MAF 2.1 old-way remote debugging

Oracle MAF 2.1 comes with a new, more integrated debugging concept. In essence, in order to debug an application, one just needs to hit the debug command in the menu, and JDeveloper takes care of the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

What Is That Light-Green Oracle Database CPU Wait Time?

What Really Is That Light-Green Oracle Database CPU Wait Time?

Have you ever wondered what that light-green "cpu wait time" really means in Oracle Enterprise Manager? It's what I call the "gap" time. The "gap" time is the "missing" or "leftover" time when DB Time does not equal DB CPU (foreground process CPU consumption) plus the non-idle wait time. And it happens more often than you might think.
If you have ever noticed that the database time seems too large, then you need to read this article. And, if you really want to know what the light-green "cpu wait time" in your OEM charts is, then you need to read this article. It's that good.

If you're serious about Oracle performance tuning and analysis, you'll want to know I just posted my complete 2015 public training schedule. It's on the main OraPub.com page HERE. Remember, alumni receive a 50% discount...crazy I know.
My Experiment Shows...
My experiment shows a strong relationship between the "gap" time and operating system CPU utilization. This means that a significant portion of the "gap" time is Oracle foreground processes sitting in the CPU run queue, ready to consume CPU. This CPU run queue time is not part of DB CPU, but it is part of DB Time. So, when the CPU run queue time increases, so does DB Time and so does the "gap" time. And I have the data to show it! And you can run the same experiment yourself.
Let me put this another way. Most of the DB Time "gap" is Oracle foreground processes waiting in the operating system CPU run queue so they can eventually and truly consume CPU.

This is really important: when an Oracle foreground process is not consuming CPU but is sitting in the CPU run queue, the Oracle Active Session History (ASH) facility records the session sample state as "CPU", and, if the process is a foreground process (not a background process), Oracle's time model records this time as DB Time but not DB CPU. So in both the ASH and time model cases, someone must do some math to calculate this "cpu wait time".
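As a rough illustration only (ASH queries require the Diagnostics Pack license, and the time window here is arbitrary), you can count how many recent ASH samples were tagged as CPU versus waiting:

select session_state, count(*)
from   v$active_session_history
where  sample_time > sysdate - 5/1440  -- last five minutes
group  by session_state;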

But that name... "cpu wait"!
CPU Wait Time Is A Lousy Name
"CPU wait time" is a lousy name. Why? Mainly because it has caused lots of confusion and speculation. The name would be more appropriately called something like, "cpu queue time." Three reasons come to mind.
First, wait time means something special to Oracle DBAs. To an Oracle DBA anything associate with a "wait" should have a wait event name, a wait occurance, the time should be instrumented (i.e., measured) and should be recorded in the many wait interface related views, such as v$system_event or v$session.
Second, from an Oracle perspective the process is truly "on cpu" because the process is not "waiting." Remember, an Oracle session is either in one of two states; CPU or WAIT. There is no third choice. So the words "CPU Wait" are really confusing.
Third, from an OS perspective or simply a non-Oracle perspective, the Oracle process is sitting in the CPU run queue.
I'm sure in some Oracle Corporation meeting the words "cpu wait" were considered a great idea, but it has caused lots of confusion. And I'm sure it's here to stay.
What Does This "CPU WAIT" Look Like In OEM?
In OEM, the "cpu wait" is a light green color. I grabbed a publically available screenshot off the internet and posted it below. Look familiar? 

OK, so it's really easy to spot in OEM. And if you've seen it before you know EXACTLY what I'm referring to.
What Is CPU Wait Time?
First, let's review what we do know.
1. DB CPU is true Oracle foreground process CPU consumption as reported by the OS through a system call, such as getrusage.
2. CPU Wait time is derived, that is, somebody at Oracle wrote code to calculate the "cpu wait" time.
3. CPU Wait time is a lousy name because it causes lots of confusion.
4. CPU Wait time is shown in OEM as a light green color. DB CPU is shown as a dark/normal green color.
Second, I need to define what I'll call the DB Time "gap." This is not an error, and I am not implying that something is wrong with database time, that it's not useful, or anything like that. All I am saying is that sometimes DB Time does not equal DB CPU plus the non-idle wait time. Let's put that in a formula:
DB Time = DB CPU + non Idle Wait Time + gap
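To get a rough feel for the gap on your own system, here is a minimal sketch. Note these are cumulative values since instance startup; a proper collection, like the experiment below, must difference two snapshots:

select (select value from v$sys_time_model where stat_name = 'DB time') / 1e6
     - (select value from v$sys_time_model where stat_name = 'DB CPU') / 1e6
     - (select sum(time_waited_micro)
        from   v$system_event
        where  wait_class != 'Idle') / 1e6  as gap_seconds
from   dual;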
Really, What Is CPU Wait Time?
Now I'm ready to answer the question, "What is CPU WAIT time?" Here is the answer stated multiple ways.
"CPU Wait" time is Oracle foreground process OS CPU run queue time.
I ran an experiment (detailed below) and as the OS CPU utilization increased, so did the DB Time "gap" implying that the gap is CPU run queue time or at least a significant part of it.
I ran an experiment and there was a strong correlation between OS CPU utilization and the DB Time "gap" implying that the gap is CPU run queue time.
I ran an experiment and using queuing theory I was able to predict the "gap" time implying that the gap is CPU run queue time. (Whoops... sorry. That's what I'll present in my next post!)
So I'm very comfortable stating that when DB Time is greater than Oracle process CPU consumption plus the non-idle wait time, it's probably the result of Oracle foreground process CPU run queue time.

Yes, there could be some math problems on Oracle's side, there could be uninstrumented time (it has certainly happened before), or the operating system could be reporting bogus values, among a host of other potential issues. But unless there is an obviously wrong value, I'm sticking with the experimental evidence.
Now I'm going to show the experimental evidence, that is, that the DB Time "gap" correlates with the OS CPU utilization.
Let The Data Drive Our Understanding
You can download all the data collection scripts, raw experimental data, Mathematica notepad files, graphic files, etc HERE in a single zip file.

You should be able to run the experiment on any Linux Oracle test system. All you need is a logical IO load and for that I used my free opload tool which, you can download HERE.
The experiment placed an increasing logical IO load on a Linux Oracle 12c system until the operating system CPU utilization exceeded 90%. The load was increased 18 times. During each of the 18 loads, I gathered 31 three-minute samples. Each sample contains busy time (v$osstat), idle time (v$osstat), logical IO (v$sysstat "session logical reads"), non-idle wait time (v$system_event where wait_class != 'Idle'), DB CPU (v$sys_time_model), background cpu time (v$sys_time_model), database time (v$sys_time_model DB time) and the sample time (dual table current_timestamp).
The CPU utilization was calculated using the "busy idle" method that I blog about HERE. This method is detailed in my Utilization On Steroids online video seminar.
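For reference, a minimal sketch of that "busy idle" calculation (again cumulative since startup, so real sampling should difference two snapshots):

select round(100 * busy / (busy + idle), 1) as cpu_util_pct
from  (select sum(case when stat_name = 'BUSY_TIME' then value end) busy,
              sum(case when stat_name = 'IDLE_TIME' then value end) idle
       from   v$osstat);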
The workload is defined as the logical IOs per second, lio/s.
Below is a table summarizing the experimental data. The times shown are the averages. If you look at the actual raw experimental data contained in the analysis pack, you'll notice the data is very consistent. This is not surprising, since the load I placed should produce a very consistent workload.
Do you see the gaps? Look closely at load 18. The DB Time is 8891.4 seconds. But the sum of DB CPU (996.8 seconds) and the non-idle wait time (2719.2 seconds) only equals 3716.0. Yet DB Time is 8891.4. So the "gap" is 5175.4, which is DB Time (8891.4) minus DB CPU (996.8) minus the non-idle wait time (2719.2).

Note: Loads 11 and 12 were excluded because of a problem with my data collection. Sorry.

While we can numerically see the DB Time "gap" increase as the CPU utilization increases, check out the graphic in the next section!

The Correlation Between CPU Utilization And DB Time Gap
We can numerically and visually see that as the CPU utilization increases, so does the DB Time "gap." But is there a strong mathematical correlation? To determine this, I used all the experimental samples (except loads 11 and 12). Because there were 17 different workloads and I gathered 31 samples for each, the correlation comprises something like 527 samples. A pretty good sample set, I'd say.

The correlation coefficient is a strong 0.891. The strongest is 1.0 and the weakest is 0.

Graphically, here is the scatterplot showing the relationship between the CPU utilization and the DB Time "gap."

Don't expect the DB Time "gap" and OS CPU utilization correlation to be perfect. Remember that DB Time does not include Oracle background process CPU consumption, yet it is obviously part of the OS CPU utilization.

Summary
My experiment indicated the light-green "CPU wait time" is primarily Oracle foreground process operating system CPU run queue time. This is DB Time "gap" time.

My experiment also showed the "gap" time is highly correlated with CPU utilization. Which means, as the CPU utilization increases, so does the "gap" time.

If there are Oracle Database instrumentation bugs or a host of other potential problems, that will also affect the "gap" time.

If you want a more complete and detailed DB Time formula, it would be this:

DB Time = DB CPU + Non Idle Wait Time + gap time

In my next post, I'll show you how to calculate the gap time based on queuing theory!

Thanks for reading!

Craig.

Categories: DBA Blogs

EMEA Exadata, Manageability & Hardware Partner Forum

Oracle EMEA Exadata, Manageability, Servers & Storage Partner Community Forum ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Long parse time with interval partitioning and subpartitions

Bobby Durrett's DBA Blog - Mon, 2015-02-09 13:52

In an earlier post I described how some queries with long parse times were causing long library cache lock waits.  Friday I applied a patch from Oracle that resolved the long parse times.  Here are the conditions which may relate to this bug:

  1. Interval partitioned table
  2. Partitioned by range
  3. Sub-partitioned by list
  4. Open ended range on partitioning column in where clause
  5. Tens of thousands of sub-partitions
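To make the conditions concrete, a hypothetical table hitting items 1 through 3 might be created like this (all names and list values invented for illustration):

create table sales_fact
( clndr_dt  date,
  region_cd varchar2(2),
  amt       number
)
partition by range (clndr_dt) interval (numtodsinterval(1,'DAY'))
subpartition by list (region_cd)
subpartition template
( subpartition sp_east  values ('E'),
  subpartition sp_west  values ('W'),
  subpartition sp_other values (default)
)
( partition p0 values less than (date '2014-01-01') );

With daily intervals and a multi-year date range, the sub-partition count quickly climbs into the tens of thousands described above.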

Prior to applying the patch I did an explain plan on two versions of the problematic query.  One version specified an open-ended range on the partitioning column and the other closed off the range.

Slow version:

...
T392658.CLNDR_DT >= TO_DATE('2014-11-17' , 'YYYY-MM-DD') and 
...
Explained.

Elapsed: 00:00:46.20

Fast version:

...
T392658.CLNDR_DT >= TO_DATE('2014-11-17' , 'YYYY-MM-DD') and
T392658.CLNDR_DT <= TO_DATE('2014-11-26' , 'YYYY-MM-DD') and
...
Explained.

Elapsed: 00:00:00.09

The queries are the same except for the extra date condition which closes off the date range.  Note that the explain plan took 46 seconds with the open-ended range and less than a tenth of a second with the closed off range.

With the patch the slow version is just as fast as the fast one.

This is bug 20061582 in Oracle’s system.

From my conversations with Oracle it sounds like in certain cases the optimizer is reviewing information for many or all of the sub-partitions and since we have 20,000 for this table in production it can take minutes to parse.  I also messed with the dates on the open-ended condition and found that if I made the date early enough the parse time issue went away.  So, it seems that there is some set of conditions, which I don’t know how to easily reproduce, which lead the optimizer to look at sub-partition information and slows parse time.
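If you want to see whether any of your own tables are in that range, a quick count of sub-partitions per table works (schema name hypothetical):

select table_name, count(*) subpartition_count
from   dba_tab_subpartitions
where  table_owner = 'APP_OWNER'
group  by table_name
order  by 2 desc;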

This is on 11.2.0.4 on an Exadata system.  I was able to reproduce the long parse times on non-Exadata Linux 11.2.0.4 and 12.1.0.2 systems so it does not occur only on Exadata.

This issue surprised me because I thought that the optimizer would not look at partition or sub-partition statistics in a query that has a range condition which spanned more than one partition.  In the past I have always seen the global or table level stats used in these situations.  But, now I know that there are some cases where the optimizer will dig into the sub-partition stats even though the query covers more than one partition.

Looking at the plan of my slow parsing query before and after the patch I get a clue that the optimizer is looking at sub-partition stats:

Partition range without the patch:

KEY(SQ)|KEY(SQ)

Partition range with the patch:

     1 |   136

Evidently KEY(SQ) relates to some sort of filtering of the sub-partitions which causes the long parse time. The manuals describe KEY(SQ) in a section titled "Dynamic Pruning with Subqueries," so maybe the problem queries have some sub-query that the optimizer was using to choose which sub-partitions the query needed.
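To inspect the Pstart/Pstop columns on your own queries, the usual pattern is something like the following (the placeholder statement stands in for the problem query):

explain plan for
select /* problem query goes here */ * from dual;

select * from table(dbms_xplan.display);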

If you have an interval partitioned table with tens of thousands of sub-partitions, and parse times run into minutes when open-ended ranges are specified on the partitioning column, your system could be hitting this same issue.

– Bobby

Categories: DBA Blogs