
DBA Blogs

emergency monitoring and forcing a flush of the shared pool ... use with caution ONLY if really needed

Grumpy old DBA - Sat, 2014-02-22 13:11
Flushing the shared pool has an impact on systems, forcing at least the re-parsing of SQL statements and new ( perhaps even changed ) execution plans.  Use with caution on any production environment, and test and test again before deploying anything like this.

The usual approach for a system suffering from shared pool fragmentation and/or ORA-04031 errors is to identify SQL not using bind variables and consider implementing CURSOR_SHARING = FORCE ( best done via an after-logon, session-based database trigger ) ... but at times even that may need to be supplemented with some emergency monitoring and flushing.
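As an illustration only ( not from the original post ), here is a minimal sketch of such a logon trigger; APPUSER is a hypothetical application schema:

CREATE OR REPLACE TRIGGER set_cursor_sharing_force
AFTER LOGON ON DATABASE
BEGIN
   -- APPUSER is a placeholder; adjust the predicate to target only the
   -- sessions that need forced cursor sharing.
   IF SYS_CONTEXT('USERENV', 'SESSION_USER') = 'APPUSER' THEN
      EXECUTE IMMEDIATE 'alter session set cursor_sharing = force';
   END IF;
END;
/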

The procedure below needs to be reviewed and tested in your environment ( test system first ) ... it may need setup and grants put in place to get it operational.  It is currently hard coded to flush when less than 256 MB of free memory is available ... it could be kicked off every minute by some kind of scheduler, cron job, or database job.

CREATE OR REPLACE PROCEDURE FLUSH_SHARE_POOL is
   -- Requires direct grants ( not via a role ) of SELECT on V_$SGASTAT
   -- and ALTER SYSTEM to the owning schema.
   v_free_space_meg NUMBER;
BEGIN
   -- Current free memory in the shared pool, in MB
   SELECT round(bytes/1024/1024,0) INTO v_free_space_meg FROM v$sgastat
    WHERE name = 'free memory' AND pool =  'shared pool';
   -- Flush only when free memory drops below the hard-coded 256 MB threshold
   IF ( v_free_space_meg < 256 ) THEN
      EXECUTE IMMEDIATE 'alter system flush shared_pool';
   END IF;
END FLUSH_SHARE_POOL;
/
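To kick it off every minute as suggested above, a DBMS_SCHEDULER job along these lines should work ( a sketch; the job name is arbitrary and the owning schema needs the grants noted in the procedure ):

BEGIN
   DBMS_SCHEDULER.CREATE_JOB (
      job_name        => 'FLUSH_SHARED_POOL_JOB',   -- arbitrary name
      job_type        => 'STORED_PROCEDURE',
      job_action      => 'FLUSH_SHARE_POOL',
      repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',
      enabled         => TRUE);
END;
/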
Categories: DBA Blogs

Oracle SOA Suite 11g - The Ingredient for Innovation: New Integration Adapters

Enterprises of all sizes are evaluating cloud computing. Many businesses are making new economic investments in the public cloud, while continuing to rely on their existing on-premise IT investments....

Categories: DBA Blogs

SQL Analytics

Hemant K Chitale - Fri, 2014-02-21 22:56
Oracle Learning Library video series on SQL Analytics
SQL Analytics Overview Video Series.
Categories: DBA Blogs

How many times can you practice and revise a presentation?

Grumpy old DBA - Fri, 2014-02-21 18:51
Well the obvious answers are "it depends" and "how far away is the actual presentation?"  For Hotsos 2014 I had the vast majority of the work done in early December 2013 ... not bad for a presentation not delivered until March 3, 2014.

That was after a couple of rounds of revisions, and after all, this was based on a presentation originally done at Oracle Open World 2011.

Delivering something at Hotsos, however, makes even an experienced and grumpy old Oracle professional more than a little nervous.  After another series of changes and improvements in January I put it away for a while.  Just got back at it again about 10 days ago.  I sent Hotsos "yet another" final version, but is it actually the final version ... heck no!

Last week on Wednesday I delivered a dry run of the presentation to several of my co-workers along with a couple of fellow NEOOUG board members.  Big thanks to Pete Dinin from Sherwin Williams for some great suggestions and comments.  He was typing away during the run ... I thought he was doing work-related email ... but no ... comments and suggestions on the presentation.  So Pete, as always, I owe you!

Two more weekends before I actually give the presentation.  I'm going to alternate days when I do not look at it at all with days when I run through the material and the slides.

Working on some dry DBA humor to get some laughs ... wish me luck!

Categories: DBA Blogs

Graph from v$active_session_history

DBA Scripts and Articles - Fri, 2014-02-21 10:57

Description This query generates the data needed to build a load graph from v$active_session_history. The output represents what sessions did in your database for the last hour. Be careful: v$active_session_history is part of the Diagnostic Pack, and if you don’t have a license for this pack you should not run this query. Query to [...]
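The query itself is truncated in this excerpt, but the general idea can be sketched roughly like this ( my approximation, not the original author’s query; the Diagnostic Pack license caveat applies here too ):

-- Active session samples per minute and wait class over the last hour.
-- ASH samples roughly once per second, so dividing the per-minute count
-- by 60 approximates average active sessions for that minute.
SELECT TRUNC(sample_time, 'MI')   AS sample_minute,
       NVL(wait_class, 'ON CPU')  AS activity,
       COUNT(*)                   AS samples
  FROM v$active_session_history
 WHERE sample_time > SYSDATE - 1/24
 GROUP BY TRUNC(sample_time, 'MI'), NVL(wait_class, 'ON CPU')
 ORDER BY 1, 2;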

The post Graph from v$active_session_history appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Oracle JDBC Connection Tester

DBASolved - Thu, 2014-02-20 12:08

From time to time, when working at various sites I have to check connections to a database.  If you have been an Oracle DBA for some time, you already know that the listener has to be up and services have to be registered before you can connect from a remote host.  If the listener is up and services are registered, then you should be able to get to the database; so why the need for a connection tester?

Recently, I’ve been asked to verify connections for a few databases to provide management a “warm-and-fuzzy” about their connections to environments; it also helps in troubleshooting if you know the service name.  I have been asked to verify connections more than once in my career and always wanted a simple way to test without logging into the database host or needing an Oracle client.  Hence this simple JDBC connection tool.

This tool is simple to use; it takes the following to build the URL for connecting to Oracle:

  • Hostname/IP Address
  • Listener Port
  • SID/Service Name
  • Username
  • Password

Note: The password will be passed in clear text!
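For reference, the thin-driver URL built from those inputs takes one of the standard forms below, depending on whether a SID or a service name is supplied ( hostname, port, and names are placeholders ):

jdbc:oracle:thin:@<hostname>:<port>:<SID>
jdbc:oracle:thin:@//<hostname>:<port>/<service_name>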

Once the requirements for the URL are provided, the tool attempts to log in to the database with the username and password provided.  If the listener and services are working correctly, the tool returns a success message.  If not, the tool provides an error stack with the reason why it failed to connect.

I am making the tool available here if you would like to test or use it.   

To run the tool, it needs to be run from the command prompt, and Java should be in your path.  The tool was written against JDK 1.7 and uses ojdbc6.jar, which is provided with Oracle 11g.  The ojdbc6.jar file is also provided in the lib directory after unzipping the tool.  The README.txt file that comes with the tool explains how to run it.

Here is a screenshot of a run done previously.

[image: screenshot of a sample run]

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, General
Categories: DBA Blogs

An SQL Performance Quiz

Hemant K Chitale - Thu, 2014-02-20 09:19
Markus Winand has written a very small SQL Performance Quiz.  (I scored 5 out of 5 on Oracle).
He has published a review of the results.  A very large number of visitors have failed.



Categories: DBA Blogs

What Happens When Active DB Duplication Goes Wrong?

Pythian Group - Tue, 2014-02-18 13:57

There are many blog posts out there about active database duplication. However, they were all tested under ideal conditions. What happens when a tablespace is created in the middle of an active duplication, and how do you resolve the resulting error? Read on if you would like to know.

For my test case, I created database db01 using OMF and will perform active duplication to db02, also using OMF, on the same host. While the duplication was running, I created a new tablespace. Here are the details of the steps performed:

Review files and pfile for the AUXILIARY (clone) database:

[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ ll
total 7692
-rw-rw----. 1 oracle oinstall    1544 Feb 14 13:06 hc_db01.dat
-rwxr-x---. 1 oracle oinstall     590 Feb 13 08:14 initdb01.ora
-rwxr-x---. 1 oracle oinstall     590 Feb 13 08:14 initdb02.ora
-rw-r-----. 1 oracle oinstall      24 Feb 13 08:18 lkDB01
-rw-r-----. 1 oracle oinstall       0 Feb 14 13:06 lkinstdb01
-rw-r-----. 1 oracle oinstall    2048 Feb 11 14:00 orapwdb01
-rw-r-----. 1 oracle oinstall    2048 Feb 11 15:48 orapwdb02
-rw-r-----. 1 oracle oinstall 7847936 Feb 11 18:06 snapcf_db01.f
-rw-r-----. 1 oracle oinstall    3584 Feb 14 13:05 spfiledb01.ora

[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ cat initdb02.ora
*.audit_file_dest='/u01/app/oracle/admin/adump'
*.audit_trail='none'
*.compatible='11.2.0.4.0'
*.db_block_size=8192
*.db_create_file_dest='/oradata'
*.db_domain=''
*.db_name='db02'
*.db_recovery_file_dest='/oradata/fra'
*.db_recovery_file_dest_size=4g
*.diagnostic_dest='/u01/app/oracle'
*.event='10795 trace name context forever, level 2'
*.fast_start_mttr_target=300
*.java_pool_size=0
*.local_listener='(ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1531))'
*.pga_aggregate_target=268435456
*.processes=100
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=805306368
*.undo_tablespace='UNDOTBS'

[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ diff ./initdb01.ora ./initdb02.ora
7c7
< *.db_name='db01'
---
> *.db_name='db02'
[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$

Notice there is only one difference between the pfiles for db01 and db02.

Create the same directory structures for the AUXILIARY (clone) database:

[oracle@arrow:]/oradata
$ ls DB*
DB01:
controlfile  datafile  onlinelog

DB02:
controlfile  datafile  onlinelog
[oracle@arrow:]/oradata
$

Startup NOMOUNT the AUXILIARY (clone) database:

[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 14 13:11:13 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@db02> startup nomount;
ORACLE instance started.

Total System Global Area  801701888 bytes
Fixed Size                  2257520 bytes
Variable Size             222301584 bytes
Database Buffers          570425344 bytes
Redo Buffers                6717440 bytes
SYS@db02>

Start active database duplication:

[oracle@arrow:db01]/media/sf_linux_x64/rman
$ rman @dupdbomf.rman

Recovery Manager: Release 11.2.0.4.0 - Production on Fri Feb 14 13:12:04 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target *
2> connect auxiliary *
3> run {
4> allocate channel c1 type disk maxopenfiles 1;
5> allocate auxiliary channel a1 type disk;
6> duplicate target database to db02
7>   from active database nofilenamecheck
8>   spfile
9>   PARAMETER_VALUE_CONVERT ('DB01','DB02')
10> ;
11> }
12> exit;
connected to target database: DB01 (DBID=1470673955)

connected to auxiliary database: DB02 (not mounted)

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=14 device type=DISK

allocated channel: a1
channel a1: SID=96 device type=DISK

Starting Duplicate Db at 14-FEB-2014 13:12:07

contents of Memory Script:
{
   backup as copy reuse
   targetfile  '/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb01.ora' auxiliary format
 '/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora'   ;
   sql clone "alter system set spfile= ''/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora''";
}
executing Memory Script

Starting backup at 14-FEB-2014 13:12:08
Finished backup at 14-FEB-2014 13:12:09

sql statement: alter system set spfile= ''/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora''

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''DB02'' comment=
 ''duplicate'' scope=spfile";
   sql clone "alter system set  control_files =
 ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''DB02'' comment= ''duplicate'' scope=spfile

sql statement: alter system set  control_files =  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= '''' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''DB01'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''DB02'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   backup as copy current controlfile auxiliary format  '/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl';
   restore clone controlfile to  '/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl' from
 '/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl';
   sql clone "alter system set  control_files =
  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  control_files =   ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''DB01'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''DB02'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

Starting backup at 14-FEB-2014 13:12:55
channel c1: starting datafile copy
copying current control file
output file name=/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/snapcf_db01.f tag=TAG20140214T131255 RECID=1 STAMP=839509978
channel c1: datafile copy complete, elapsed time: 00:00:07
Finished backup at 14-FEB-2014 13:13:03

Starting restore at 14-FEB-2014 13:13:03

channel a1: copied control file copy
Finished restore at 14-FEB-2014 13:13:04

sql statement: alter system set  control_files =   ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= ''Set by RMAN'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

database mounted

contents of Memory Script:
{
   set newname for clone datafile  1 to new;
   set newname for clone datafile  2 to new;
   set newname for clone datafile  3 to new;
   set newname for clone datafile  4 to new;
   backup as copy reuse
   datafile  1 auxiliary format new
   datafile  2 auxiliary format new
   datafile  3 auxiliary format new
   datafile  4 auxiliary format new
   ;
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 14-FEB-2014 13:13:21

----------------------------------------------------------------------
-- While duplication was running, create new tablespace at source
--
[oracle@arrow:db01]/home/oracle
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 14 13:13:22 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@db01):PRIMARY> create tablespace mdinh;

Tablespace created.

ARROW:(SYS@db01):PRIMARY>
----------------------------------------------------------------------

channel c1: starting datafile copy
input datafile file number=00001 name=/oradata/DB01/datafile/o1_mf_system_9hsw4shz_.dbf
output file name=/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf tag=TAG20140214T131321
channel c1: datafile copy complete, elapsed time: 00:01:26
channel c1: starting datafile copy
input datafile file number=00002 name=/oradata/DB01/datafile/o1_mf_sysaux_9hsw63d2_.dbf
output file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf tag=TAG20140214T131321
channel c1: datafile copy complete, elapsed time: 00:00:35
channel c1: starting datafile copy
input datafile file number=00003 name=/oradata/DB01/datafile/o1_mf_undotbs_9hsw75h4_.dbf
output file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf tag=TAG20140214T131321
channel c1: datafile copy complete, elapsed time: 00:00:35
channel c1: starting datafile copy
input datafile file number=00004 name=/oradata/DB01/datafile/o1_mf_users_9hsw880k_.dbf
output file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf tag=TAG20140214T131321
channel c1: datafile copy complete, elapsed time: 00:00:35
Finished backup at 14-FEB-2014 13:16:32

sql statement: alter system archive log current

contents of Memory Script:
{
   backup as copy reuse
   archivelog like  "/oradata/fra/DB01/archivelog/2014_02_14/o1_mf_1_10_9hx1xkn1_.arc" auxiliary format
 "/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_%u_.arc"   ;
   catalog clone recovery area;
   switch clone datafile all;
}
executing Memory Script

Starting backup at 14-FEB-2014 13:16:34
channel c1: starting archived log copy
input archived log thread=1 sequence=10 RECID=2 STAMP=839510193
output file name=/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc RECID=0 STAMP=0
channel c1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 14-FEB-2014 13:16:35

searching for all files in the recovery area

List of Files Unknown to the Database
=====================================
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_12/o1_mf_1_10_0cp0ekfe_.arc
File Name: /oradata/fra/DB02/controlfile/o1_mf_9hoq0kmv_.ctl
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_12/o1_mf_1_10_0cp0ekfe_.arc

List of Files Which Where Not Cataloged
=======================================
File Name: /oradata/fra/DB02/controlfile/o1_mf_9hoq0kmv_.ctl
  RMAN-07518: Reason: Foreign database file DBID: 1470537681  Database Name: DB01

datafile 1 switched to datafile copy
input datafile copy RECID=1 STAMP=839510196 file name=/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=2 STAMP=839510196 file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=3 STAMP=839510197 file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=4 STAMP=839510197 file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf

contents of Memory Script:
{
   set until scn  227291;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 14-FEB-2014 13:16:39
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''DB02'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
   shutdown clone immediate;
}
executing Memory Script

sql statement: alter system set  db_name =  ''DB02'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile

Oracle instance shut down
released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 02/14/2014 13:17:01
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script
RMAN-06094: datafile 5 must be restored

Recovery Manager complete.
[oracle@arrow:db01]/media/sf_linux_x64/rman
$

Remove the spfile and misc files for the AUXILIARY (clone) database:

Startup NOMOUNT the AUXILIARY (clone) database:

[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ rm spfiledb02.ora lkDB02 hc_db02.dat
[oracle@arrow:db02]/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs
$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Fri Feb 14 13:18:43 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@db02> startup nomount;
ORACLE instance started.

Total System Global Area  801701888 bytes
Fixed Size                  2257520 bytes
Variable Size             222301584 bytes
Database Buffers          570425344 bytes
Redo Buffers                6717440 bytes
SYS@db02> exit

RESTART active database duplication:

[oracle@arrow:db01]/media/sf_linux_x64/rman
$ rman @dupdbomf.rman

Recovery Manager: Release 11.2.0.4.0 - Production on Fri Feb 14 13:18:52 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

RMAN> connect target *
2> connect auxiliary *
3> run {
4> allocate channel c1 type disk maxopenfiles 1;
5> allocate auxiliary channel a1 type disk;
6> duplicate target database to db02
7>   from active database nofilenamecheck
8>   spfile
9>   PARAMETER_VALUE_CONVERT ('DB01','DB02')
10> ;
11> }
12> exit;
connected to target database: DB01 (DBID=1470673955)

connected to auxiliary database: DB02 (not mounted)

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=14 device type=DISK

allocated channel: a1
channel a1: SID=10 device type=DISK

Starting Duplicate Db at 14-FEB-2014 13:18:54

contents of Memory Script:
{
   backup as copy reuse
   targetfile  '/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb01.ora' auxiliary format
 '/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora'   ;
   sql clone "alter system set spfile= ''/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora''";
}
executing Memory Script

Starting backup at 14-FEB-2014 13:18:54
Finished backup at 14-FEB-2014 13:18:55

sql statement: alter system set spfile= ''/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/spfiledb02.ora''

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''DB02'' comment=
 ''duplicate'' scope=spfile";
   sql clone "alter system set  control_files =
 ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''DB02'' comment= ''duplicate'' scope=spfile

sql statement: alter system set  control_files =  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= '''' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

contents of Memory Script:
{
   sql clone "alter system set  control_files =
  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   sql clone "alter system set  db_name =
 ''DB01'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''DB02'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   backup as copy current controlfile auxiliary format  '/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl';
   restore clone controlfile to  '/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl' from
 '/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl';
   sql clone "alter system set  control_files =
  ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment=
 ''Set by RMAN'' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  control_files =   ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= ''Set by RMAN'' scope=spfile

sql statement: alter system set  db_name =  ''DB01'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''DB02'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

Starting backup at 14-FEB-2014 13:19:11
channel c1: starting datafile copy
copying current control file
output file name=/u01/app/oracle/product/11.2.0.4/dbhome_1/dbs/snapcf_db01.f tag=TAG20140214T131912 RECID=2 STAMP=839510353
channel c1: datafile copy complete, elapsed time: 00:00:03
Finished backup at 14-FEB-2014 13:19:15

Starting restore at 14-FEB-2014 13:19:15

channel a1: copied control file copy
Finished restore at 14-FEB-2014 13:19:16

sql statement: alter system set  control_files =   ''/oradata/DB02/controlfile/o1_mf_9hsw332r_.ctl'', ''/oradata/fra/DB02/controlfile/o1_mf_9hsw33dm_.ctl'' comment= ''Set by RMAN'' scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK

database mounted

Using previous duplicated file /oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf for datafile 1 with checkpoint SCN of 226956
Using previous duplicated file /oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf for datafile 2 with checkpoint SCN of 227250
Using previous duplicated file /oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf for datafile 3 with checkpoint SCN of 227262
Using previous duplicated file /oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf for datafile 4 with checkpoint SCN of 227275

contents of Memory Script:
{
   set newname for datafile  1 to
 "/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf";
   set newname for datafile  2 to
 "/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf";
   set newname for datafile  3 to
 "/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf";
   set newname for datafile  4 to
 "/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf";
   set newname for clone datafile  5 to new;

   backup as copy reuse
   datafile  5 auxiliary format new
   ;
   
   sql 'alter system archive log current';
}
executing Memory Script

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting backup at 14-FEB-2014 13:19:31
channel c1: starting datafile copy
input datafile file number=00005 name=/oradata/DB01/datafile/o1_mf_mdinh_9hx1qqko_.dbf
output file name=/oradata/DB02/datafile/o1_mf_mdinh_08p0jqb4_.dbf tag=TAG20140214T131931
channel c1: datafile copy complete, elapsed time: 00:00:15
Finished backup at 14-FEB-2014 13:19:47

sql statement: alter system archive log current

contents of Memory Script:
{
   backup as copy reuse
   archivelog like  "/oradata/fra/DB01/archivelog/2014_02_14/o1_mf_1_10_9hx1xkn1_.arc" auxiliary format
 "/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_%u_.arc"   archivelog like
 "/oradata/fra/DB01/archivelog/2014_02_14/o1_mf_1_11_9hx23s3n_.arc" auxiliary format
 "/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_%u_.arc"   ;
   catalog clone recovery area;
   catalog clone datafilecopy  "/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf",
 "/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf",
 "/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf",
 "/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf";
   switch clone datafile  1 to datafilecopy
 "/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf";
   switch clone datafile  2 to datafilecopy
 "/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf";
   switch clone datafile  3 to datafilecopy
 "/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf";
   switch clone datafile  4 to datafilecopy
 "/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf";
   switch clone datafile all;
}
executing Memory Script

Starting backup at 14-FEB-2014 13:19:53
channel c1: starting archived log copy
input archived log thread=1 sequence=10 RECID=2 STAMP=839510193
output file name=/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_09p0jqbq_.arc RECID=0 STAMP=0
channel c1: archived log copy complete, elapsed time: 00:00:01
channel c1: starting archived log copy
input archived log thread=1 sequence=11 RECID=3 STAMP=839510393
output file name=/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_0ap0jqbr_.arc RECID=0 STAMP=0
channel c1: archived log copy complete, elapsed time: 00:00:01
Finished backup at 14-FEB-2014 13:19:56

searching for all files in the recovery area

List of Files Unknown to the Database
=====================================
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_0ap0jqbr_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_09p0jqbq_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_12/o1_mf_1_10_0cp0ekfe_.arc
File Name: /oradata/fra/DB02/controlfile/o1_mf_9hoq0kmv_.ctl
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_0ap0jqbr_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_09p0jqbq_.arc
File Name: /oradata/fra/DB02/archivelog/2014_02_12/o1_mf_1_10_0cp0ekfe_.arc

List of Files Which Where Not Cataloged
=======================================
File Name: /oradata/fra/DB02/controlfile/o1_mf_9hoq0kmv_.ctl
  RMAN-07518: Reason: Foreign database file DBID: 1470537681  Database Name: DB01

cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf RECID=2 STAMP=839510398
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf RECID=3 STAMP=839510398
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf RECID=4 STAMP=839510399
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf RECID=5 STAMP=839510399

datafile 1 switched to datafile copy
input datafile copy RECID=2 STAMP=839510398 file name=/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf

datafile 2 switched to datafile copy
input datafile copy RECID=3 STAMP=839510398 file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf

datafile 3 switched to datafile copy
input datafile copy RECID=4 STAMP=839510399 file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf

datafile 4 switched to datafile copy
input datafile copy RECID=5 STAMP=839510399 file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf

datafile 5 switched to datafile copy
input datafile copy RECID=6 STAMP=839510401 file name=/oradata/DB02/datafile/o1_mf_mdinh_08p0jqb4_.dbf

contents of Memory Script:
{
   set until scn  227620;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 14-FEB-2014 13:20:02

starting media recovery

archived log for thread 1 with sequence 10 is already on disk as file /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc
archived log for thread 1 with sequence 11 is already on disk as file /oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_0ap0jqbr_.arc
archived log file name=/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_10_06p0jq5i_.arc thread=1 sequence=10
archived log file name=/oradata/fra/DB02/archivelog/2014_02_14/o1_mf_1_11_0ap0jqbr_.arc thread=1 sequence=11
media recovery complete, elapsed time: 00:00:09
Finished recover at 14-FEB-2014 13:20:14
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''DB02'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''DB02'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2257520 bytes
Variable Size                222301584 bytes
Database Buffers             570425344 bytes
Redo Buffers                   6717440 bytes
allocated channel: a1
channel a1: SID=95 device type=DISK
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "DB02" RESETLOGS ARCHIVELOG
  MAXLOGFILES     16
  MAXLOGMEMBERS      2
  MAXDATAFILES       30
  MAXINSTANCES     1
  MAXLOGHISTORY      292
 LOGFILE
  GROUP   1  SIZE 100 M ,
  GROUP   2  SIZE 100 M ,
  GROUP   3  SIZE 100 M
 DATAFILE
  '/oradata/DB02/datafile/o1_mf_system_02p0jpvh_.dbf'
 CHARACTER SET AL32UTF8

contents of Memory Script:
{
   set newname for clone tempfile  1 to new;
   switch clone tempfile all;
   catalog clone datafilecopy  "/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf",
 "/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf",
 "/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf",
 "/oradata/DB02/datafile/o1_mf_mdinh_08p0jqb4_.dbf";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /oradata/DB02/datafile/o1_mf_temp_%u_.tmp in control file

cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf RECID=1 STAMP=839510428
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf RECID=2 STAMP=839510428
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf RECID=3 STAMP=839510428
cataloged datafile copy
datafile copy file name=/oradata/DB02/datafile/o1_mf_mdinh_08p0jqb4_.dbf RECID=4 STAMP=839510429

datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=839510429 file name=/oradata/DB02/datafile/o1_mf_mdinh_08p0jqb4_.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=5 STAMP=839510430 file name=/oradata/DB02/datafile/o1_mf_sysaux_03p0jq27_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=6 STAMP=839510431 file name=/oradata/DB02/datafile/o1_mf_undotbs_04p0jq3a_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=7 STAMP=839510432 file name=/oradata/DB02/datafile/o1_mf_users_05p0jq4d_.dbf

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 14-FEB-2014 13:21:48
released channel: c1
released channel: a1

Recovery Manager complete.
[oracle@arrow:db01]/media/sf_linux_x64/rman
$

Did you notice that the duplication reused the previously duplicated files instead of duplicating them again?

Categories: DBA Blogs

linkedin endorsements make me ... grumpy ... sometimes

Grumpy old DBA - Tue, 2014-02-18 12:59
Well I don't really mind endorsements for skills that are in my current toolset.  It is a nice compliment to get endorsements from people that you know well or have worked with or are working with.

On the other hand, endorsements for things that you have never worked on in your life seem a little strange.  I now have several endorsements for Oracle HR ... never ever in my life touched that, not even PeopleSoft.  Even stranger is the recent endorsement in that area from a system admin at my previous job where, guess what ... we never ran that software at all.

Just received another strange endorsement for SQL Server from someone, and I do not even remember how they are connected to me on LinkedIn.  I guess SQL Server is much improved these days ( it obviously needed to change ), but wow, I have not touched anything on that platform since about 2000, and that was a 3-month project.

Yikes!
Categories: DBA Blogs

Speaking at Collaborate14 in Las Vegas

Pakistan's First Oracle Blog - Mon, 2014-02-17 18:47
12c Multi-Tenancy and Exadata IORM: An Ideal Cloud Based Resource Management



A cloud database enables administrators to scale resource usage up and down according to business requirements. Oracle 12c provides multi-tenancy to manage multiple pluggable databases within a multi-tenant container database. In Exadata, at the compute nodes, the resource manager controls resources among pluggable databases, and this resource management trickles down to the storage servers, where IORM controls the resources. This presentation lucidly explains how a business can leverage the benefits of 12c multi-tenancy, DBRM, and IORM in an Exadata realm to achieve ideal cloud-based resource management.

 
Following is the welcome page for Collaborate14.

[image: Collaborate14 welcome page]
Categories: DBA Blogs

SQL Developer’s PL/SQL Debugger and Oracle 12c

Galo Balda's Blog - Mon, 2014-02-17 17:54

The PL/SQL Debugger works pretty much out of the box when used with a previous Oracle version. These are the things we needed in place before we could start debugging PL/SQL:

  1. A grant of the DEBUG CONNECT SESSION privilege.
  2. EXECUTE privilege on DBMS_DEBUG_JDWP.
  3. EXECUTE privilege on the stored procedure you want to debug.
  4. Make sure the stored procedure is “Compiled for Debug”.

Jeff Smith talks about it in this post.

But what happens when you use Oracle 12c? You still need all the stuff that I mentioned but that’s not enough. See the error I got when I was trying to debug a procedure.

Executing PL/SQL: CALL DBMS_DEBUG_JDWP.CONNECT_TCP( '192.168.0.10', '49428' )
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at “SYS.DBMS_DEBUG_JDWP”, line 68
ORA-06512: at line 1
Process exited.
Disconnecting from the database SCOTT – ORA12CPDB1

Starting with Oracle 12c, if you want to debug PL/SQL stored procedures in the database through a Java Debug Wire Protocol (JDWP)-based debugger, such as SQL Developer or JDeveloper, then you must be granted the jdwp ACL privilege to connect your database session to the debugger at a particular host.

This is one way you can configure network access for JDWP operations:

BEGIN
 DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE
 (
 host => '192.168.0.10',
 lower_port => null,
 upper_port => null,
 ace => xs$ace_type(privilege_list => xs$name_list('jdwp'),
 principal_name => 'scott',
 principal_type => xs_acl.ptype_db)
 );
END;
/
Host can be a host name, domain name, IP address, or subnet.

Principal name in the access control entry (ACE) section is the schema that holds the stored procedures you want to debug.
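Afterwards, you can double-check that the ACE is in place by querying the ACL views ( my own quick check, assuming access to the DBA views ):

SELECT host, lower_port, upper_port, principal, privilege
  FROM dba_host_aces
 WHERE principal = 'SCOTT';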

Now the error goes away and you can start your debugging task.

Connecting to the database SCOTT – ORA12CPDB1.
Executing PL/SQL: ALTER SESSION SET PLSQL_DEBUG=TRUE
Executing PL/SQL: CALL DBMS_DEBUG_JDWP.CONNECT_TCP( '192.168.0.10', '49428' )
Debugger accepted connection from database on port 49428.
Executing PL/SQL: CALL DBMS_DEBUG_JDWP.DISCONNECT()
Inside the Procedure
Process exited.
Disconnecting from the database SCOTT – ORA12CPDB1.
Debugger disconnected from database.

I hope this post saves you some time when you migrate to 12c.


Filed under: 12C, SQL Developer Tagged: 12C, SQL Developer
Categories: DBA Blogs

Active Data Guard – what does it mean?

The Oracle Instructor - Mon, 2014-02-17 03:10

There are misconceptions and half-truths about that term that I see time and time again in forums, postings, and comments.

Some people think that Active Data Guard is a fancy marketing term for Standby Databases in Oracle. Wrong, that is just plain Data Guard :-)

Most people think that Active Data Guard means that a Physical Standby Database can be used for queries while it is still applying redo. Not the whole truth, because that is just one feature – Real-Time Query – which is included in the Active Data Guard option.

Active Data Guard is an option, coming with an extra charge. Active is supposed to indicate that you can use the standby database for production usage – it is not just waiting for the primary database to fail.

In 11g, Active Data Guard includes three features:

  • Real-Time Query
  • Automatic Block Media Recovery
  • Block Change Tracking on the physical standby

In 12c, Active Data Guard was enhanced even further and now includes these features:

  • Real-time Query
  • Automatic Block Media Recovery
  • Block Change Tracking on the physical standby
  • Far Sync
  • Real-Time Cascade
  • Global Data Services
  • Application Continuity
  • Rolling Upgrade using DBMS_ROLLING

The bad news is that many of the 12c Data Guard new features require Active Data Guard.
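As a quick aside, one easy way to tell whether Real-Time Query is actually in effect on a physical standby is the OPEN_MODE column:

SELECT open_mode FROM v$database;
-- 'READ ONLY WITH APPLY' on the standby means the database is open for
-- queries while redo apply is running, i.e. Real-Time Query is in use.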


Tagged: Active Data Guard, Data Guard
Categories: DBA Blogs

12c Enhanced Online Index DDL Operations (Lady Godiva’s Operation)

Richard Foote - Mon, 2014-02-17 00:41
In my last couple of posts, I discussed how table partitions can be moved online since 12c, keeping all indexes in sync as part of the process. 12c also introduced enhancements to a number of index related DDL statements, removing blocking locks and making their use online and far less intrusive. The following commands now have a new ONLINE […]
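The excerpt above is truncated; for a flavour of what is meant, here are two statements that gained the ONLINE keyword in 12c ( the index name is illustrative ):

DROP INDEX emp_name_ix ONLINE;
ALTER INDEX emp_name_ix UNUSABLE ONLINE;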
Categories: DBA Blogs

Partner Webcast - Oracle WebLogic Server & Oracle Database 12c Integration

Oracle WebLogic Server 12c, the latest generation of the leading Enterprise-class Java EE Application Server, and Oracle Database 12c, the latest release of the world’s #1 database, have both been...

Categories: DBA Blogs

ora 600's always make me grumpy ... ORA-1555 / ORA-600 [ktbdchk1: bad dscn] #8895202

Grumpy old DBA - Sat, 2014-02-15 09:40
These days, at least for me, seeing an ORA-600 is a relatively rare ( thank god ) occurrence.  They always raise your blood pressure, sometimes to unhealthy levels.  Looking at one that at first glance hints at possible block corruption ... not good.

This bug 8895202 was already fixed in the current environment but not "enabled" ( so thanks so much ... what use is a bug fix that is in place but not enabled? ).  Apparently it can happen in an Active Data Guard environment after a switchover/switchback?

Looks like a bad interaction of the commit SCN and the ITL SCN in ( index ) blocks?

The good news ( rarely do 600s give you good news ) is that you can enable this dynamically ... scope=both ...

Although this fix is included in 11.2.0.2 / 11.2.0.3, it has to be enabled by setting "_ktb_debug_flags"=8.

Rediscovery Notes: ORA-1555 / ORA-600 [ktbdchk1: bad dscn] / ktbGetDependentScn / dependent SCN violations as the ITL has a higher commit SCN than the block SCN. This happens in a Physical Standby database after a switchover. DBVERIFY (with the fix of Bug 7517208) reports:

   itl[<itl_id>] has higher commit scn(aaa.bbb) than block scn (xx.yy)
   Page <Block#> failed with check code 6056

There is NO DATA CORRUPTION in the block.

Workaround: This fix is the workaround. It doesn't prevent an ITL SCN higher than the commit SCN (csc). With this fix, if parameter _ktb_debug_flags = 8, the SCN is repaired when the block is cleaned out (eg: block update). While blocks are not touched, DBVERIFY still reports 6056 errors. Sometimes the fix may not repair the block and the index may need rebuilding.
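In SQL terms, enabling the fix dynamically looks like this ( test first; hidden underscore parameters should only ever be set under Oracle Support guidance ):

ALTER SYSTEM SET "_ktb_debug_flags" = 8 SCOPE = BOTH;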

Categories: DBA Blogs

SQL Server Integrity Check – A Necessary Routine

Pythian Group - Fri, 2014-02-14 15:04

Today I’ll be discussing Integrity Check on SQL Server – we have some very good info in the community, but I’d like to summarize my thoughts here. When a DBA talks about Integrity Check, the conversation is usually redirected to one subject: CheckDB. And yes, we are going to talk about it in today’s post. :)

The DBCC CheckDB is a medicine of sorts that can prevent headaches and remedy possible damage. But it should be used with caution: without a well-planned backup strategy, you can lose data. Even with all of the warnings about the importance of Integrity Check, it’s not uncommon to see instances without it.

“I’m a Manager just trying to understand this… Could you explain this at a high level?”

Let’s cut the blah, blah, blah and go ahead with this… I’ll introduce CheckDB for those who have never heard of it – probably less technical individuals: The CheckDB command is a Database Console Command, or simply DBCC, categorized as a Validation DBCC. This is a necessary step in overall instance maintenance, where we can detect, and in some cases repair, possible problems with database integrity. Running DBCC CheckDB regularly will help to anticipate problems, and even avoid an unexpected shutdown of the instance. We’ll go into more detail a bit later…

“What’s the objective of the CheckDB?”

CheckDB has two roles: The main objective is the integrity check itself, and the second is the correction of the findings. Please note that CheckDB verifies the database structure and not the data consistency.

Integrity Check: The command CheckDB follows a few steps on its execution, passing from the following phases:

  • Verification of the allocation structures, e.g. GAM, SGAM, PFS, IAM…
  • Verification of table consistency, including all of its indexes.
  • Verification of the system catalogs.

Correction: After the CheckDB “standard execution” described above, we will have information about possibly damaged pages. CheckDB itself can fix these damages.

CheckDB has an option that allows the correction of the database structure, but the data allocated in the repaired page will be lost. This option is “REPAIR_ALLOW_DATA_LOSS”, and it swaps the damaged page for a new one. This way, we save the integrity of the database. Note that we are talking about the physical structure – not about data integrity, which is why we lose data. After the execution of this command, a verification based on backups should be made in order to identify and recover the lost data.

Another option is “REPAIR_REBUILD”, used to correct nonclustered indexes. With this option we don’t lose data. Even though we have options to correct pages, there are a few limitations:

  • PFS pages, system tables and data purity detected errors aren’t fixable by CheckDB.
  • To repair using CheckDB, the database should be in SINGLE_USER mode.

“How is the CheckDB executed?”

Not too many people are curious enough to look deeper and discover more about this, but CheckDB doesn’t run directly against the database. A hidden database snapshot is created, and the process runs based on this snapshot. Many people only realize this when the CheckDB execution fails, and once they begin digging into it, they find out about “some snapshot” that failed creation.

While we’re on the topic, do you know what to do if the snapshot creation fails? I can see 3 options:

  • Fix the permission to create the snapshot.
  • Execute the CheckDB with the TABLOCK option. This can bring problems, as the DB will be a target of locks.
  • Create a process (job) doing the following steps (sketched below):
    1. Create a snapshot of the database.
    2. Run DBCC CheckDB on this snapshot. Yes, DBCC CheckDB can run over database snapshots. And it’s obvious that a snapshot of a snapshot won’t be created.
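A minimal T-SQL sketch of that job ( the database name, logical file name, and snapshot path are placeholders ):

-- 1. Create a snapshot of the database (NAME must match the logical
--    name of the source database's data file).
CREATE DATABASE MyDB_Check ON
    (NAME = MyDB_Data, FILENAME = 'D:\Snapshots\MyDB_Check.ss')
AS SNAPSHOT OF MyDB;
GO
-- 2. Run the integrity check against the snapshot.
DBCC CHECKDB ('MyDB_Check') WITH NO_INFOMSGS;
GO
-- Clean up the snapshot afterwards.
DROP DATABASE MyDB_Check;
GO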

“When and where should we execute an Integrity Check?”

It’s recommended to check the integrity of all the databases, without exception! The best approach is to execute CheckDB every day. But we understand that on busy systems (instances with hundreds of databases, VLDBs, etc.) this can be nearly impossible.

Some interesting facts:

  • SQL Server maintenance plans ignore TempDB, but if TempDB becomes corrupt the instance will shut down. Use caution with this.
  • Executing the CheckDB on the master database will cause the execution on the mssqlsystemresource database as well.

Execution Options: There are a few execution options for CheckDB (a combined usage sketch follows the list):

  • NO_INFOMSGS: Avoids the output of informational messages.
  • ALL_ERRORMSGS: Allows the output of error messages.
    • Default since SQL Server 2008 SP1.
  • NOINDEX: Skips nonclustered index verification.
  • DATA_PURITY: Validates the data based on the column characteristics.
    • Default since SQL Server 2005.
  • ESTIMATEONLY: Estimates the space needed on TempDB.
    • As per Paul Randal, this option is broken on newer versions of SQL Server.
  • TABLOCK: Uses locks instead of database snapshots.
    • Useful when the snapshot creation is failing.
  • EXTENDED_LOGICAL_CHECKS: Allows the validation on XML columns and Spatial indexes.
    • This can be costly.
  • PHYSICAL_ONLY: Skips most of the logical verifications.
    • Needs fewer resources to run, but a “full” execution should be done periodically.
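Putting a few of those options together, a typical nightly call could look like this ( the database name is a placeholder ):

DBCC CHECKDB ('MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;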

Other Approaches to Execution: As CheckDB is a resource-intensive process, in some cases the execution may take a long time to complete. Depending on the environment, the integrity check on all the databases (or even on one very large database) may not finish on time; namely, the integrity check will exceed the maintenance window limit. To work around this, we can reproduce a full CheckDB by executing its subset commands, as follows:

  • CHECKALLOC
  • CHECKTABLE
  • CHECKCATALOG
  • CHECKFILEGROUP

By executing those commands, one per day, we can cover a full CheckDB execution in phases. Another possible approach is to execute the backups with the “CHECKSUM” option. This way we will be able to restore using the “VERIFYONLY” option and get an extra level of verification. Either way, we will still need CheckDB.
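For the CHECKSUM approach, the pair of commands looks like this ( path and database name are placeholders ):

BACKUP DATABASE MyDB TO DISK = 'D:\Backup\MyDB.bak' WITH CHECKSUM;
RESTORE VERIFYONLY FROM DISK = 'D:\Backup\MyDB.bak' WITH CHECKSUM;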

So, that’s all about CheckDB. There is more information out there, but the best place to get info on CheckDB is the SQLskills blog, where the CheckDB guru writes a few articles about it. See you in another post ;)

Categories: DBA Blogs

RMAN 12c : Say goodbye to your backup when dropping your PDB

Pythian Group - Fri, 2014-02-14 14:46

I was working on my presentations for IOUG Collaborate, and I came upon this strange behaviour in RMAN 12c (12.1.0.1.0) which, to me, shouldn’t happen. It seems that DROP PLUGGABLE DATABASE is the equivalent of DROP DATABASE INCLUDING BACKUPS: if you need to restore your PDB later on, its backups will no longer be registered – just be careful when dropping PDBs.

Here we go: So I took a backup of my CDB and all of its PDBs, and kept an eye on this TAG 20140212T191237  (I removed a couple of lines for better reading)


oracle@oracleenespanol2.localdomain [cdb1] /u01/app/oracle/product/12.1.0/db_1/network/admin

oracle $ rman target sys/oracle@cdb1

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Feb 12 19:12:06 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB1 (DBID=808250731)

RMAN> backup database plus archivelog ;

Starting backup at 12/02/2014 19:12:31

current log archived

...

Starting backup at 12/02/2014 19:12:37

using channel ORA_DISK_1

channel ORA_DISK_1: starting full datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

input datafile file number=00003 name=/u01/app/oracle/oradata/CDB1/datafile/o1_mf_sysaux_9c520w6w_.dbf

input datafile file number=00004 name=/u01/app/oracle/oradata/CDB1/datafile/o1_mf_undotbs1_9c524cnr_.dbf

input datafile file number=00001 name=/u01/app/oracle/oradata/CDB1/datafile/o1_mf_system_9c522mbz_.dbf

input datafile file number=00006 name=/u01/app/oracle/oradata/CDB1/datafile/o1_mf_users_9c524bjm_.dbf

channel ORA_DISK_1: starting piece 1 at 12/02/2014 19:12:38

channel ORA_DISK_1: finished piece 1 at 12/02/2014 19:15:23

piece handle=/u01/app/oracle/fast_recovery_area/CDB1/backupset/2014_02_12/o1_mf_nnndf_TAG20140212T191237_9hrbjrqd_.bkp tag=TAG20140212T191237 comment=NONE

channel ORA_DISK_1: backup set complete, elapsed time: 00:02:45

channel ORA_DISK_1: starting full datafile backup set

channel ORA_DISK_1: specifying datafile(s) in backup set

input datafile file number=00009 name=/u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_sysaux_9c52fqvt_.dbf

input datafile file number=00008 name=/u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_system_9c52fqt1_.dbf

input datafile file number=00010 name=/u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_users_9c52hof0_.dbf

channel ORA_DISK_1: starting piece 1 at 12/02/2014 19:15:23

channel ORA_DISK_1: finished piece 1 at 12/02/2014 19:16:08

piece handle=/u01/app/oracle/fast_recovery_area/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/backupset/2014_02_12/o1_mf_nnndf_TAG20140212T191237_9hrbovmw_.bkp tag=TAG20140212T191237 comment=NONE

..

Finished backup at 12/02/2014 19:16:37

Starting Control File and SPFILE Autobackup at 12/02/2014 19:16:37

piece handle=/u01/app/oracle/fast_recovery_area/CDB1/autobackup/2014_02_12/o1_mf_s_839358997_9hrbr5vr_.bkp comment=NONE

Finished Control File and SPFILE Autobackup at 12/02/2014 19:16:38

So first I want to show you that I was able to recover the DB in case I lost the datafiles of my PDB, so I will delete them:

oracle@oracleenespanol2.localdomain[cdb1] /u01/app/oracle/product/12.1.0/db_1/network/admin

oracle $ sqlplus / as sysdba

Connected to:

Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> set pages 999

SQL> COLUMN PDB_ID FORMAT 999

COLUMN PDB_NAME FORMAT A8

COLUMN FILE_ID FORMAT 9999

COLUMN TABLESPACE_NAME FORMAT A10

COLUMN FILE_NAME FORMAT A45

SELECT p.PDB_ID, p.PDB_NAME, d.FILE_ID, d.TABLESPACE_NAME, d.FILE_NAME

FROM DBA_PDBS p, CDB_DATA_FILES d

WHERE p.PDB_ID = d.CON_ID

ORDER BY p.PDB_ID;

PDB_ID PDB_NAME FILE_ID TABLESPACE FILE_NAME

------ -------- ------- ---------- ---------------------------------------------

2 PDB$SEED       5 SYSTEM   /u01/app/oracle/oradata/CDB1/datafile/o1_mf_system_9c5257ms_.dbf

2 PDB$SEED       7 SYSAUX   /u01/app/oracle/oradata/CDB1/datafile/o1_mf_sysaux_9c5257mj_.dbf

3 PDB1      9 SYSAUX   /u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_sysaux_9c52fqvt_.dbf

3 PDB1      8 SYSTEM   /u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_system_9c52fqt1_.dbf

3 PDB1     10 USERS   /u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_users_9c52hof0_.dbf

SQL> alter pluggable database pdb1 close;

Pluggable database altered.

SQL> exit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production

With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

oracle@oracleenespanol2.localdomain[cdb1] /u01/app/oracle/product/12.1.0/db_1/network/admin

oracle $ rm /u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_sysaux_9c52fqvt_.dbf

oracle@oracleenespanol2.localdomain[cdb1] /u01/app/oracle/product/12.1.0/db_1/network/admin

oracle $ rm /u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_system_9c52fqt1_.dbf

oracle@oracleenespanol2.localdomain[cdb1] /u01/app/oracle/product/12.1.0/db_1/network/admin

oracle $ rm /u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_users_9c52hof0_.dbf

Now that I have deleted the datafiles, I will restore and recover the PDB; keep an eye on the tag TAG20140212T191237 used for the restore.

oracle@oracleenespanol2.localdomain [cdb1] /u01/app/oracle/product/12.1.0/db_1/network/admin
oracle $ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Feb 12 19:19:46 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates. All rights reserved.

connected to target database: CDB1 (DBID=808250731)

RMAN> RESTORE PLUGGABLE DATABASE PDB1;

Starting restore at 12/02/2014 19:20:12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=77 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to /u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_system_9c52fqt1_.dbf
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_sysaux_9c52fqvt_.dbf
channel ORA_DISK_1: restoring datafile 00010 to /u01/app/oracle/oradata/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/datafile/o1_mf_users_9c52hof0_.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/backupset/2014_02_12/o1_mf_nnndf_TAG20140212T191237_9hrbovmw_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB1/EDDDC5E35CF7216DE043344EB2C0AB6F/backupset/2014_02_12/o1_mf_nnndf_TAG20140212T191237_9hrbovmw_.bkp tag=TAG20140212T191237
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
Finished restore at 12/02/2014 19:20:48

RMAN> RECOVER PLUGGABLE DATABASE PDB1;

Starting recover at 12/02/2014 19:21:06
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 12/02/2014 19:21:07

RMAN> alter pluggable database pdb1 open;

Statement processed

As you can see, I was able to restore and recover my PDB without a problem. But what happens if I drop my PDB and later decide it is needed after all? When I go back to my backup, it is no longer usable: the restore fails, and the PDB1 backup no longer appears under the backup tag TAG20140212T191237.


RMAN> alter pluggable database pdb1 close;

Statement processed

RMAN> drop pluggable database PDB1;

Statement processed

RMAN> RESTORE PLUGGABLE DATABASE PDB1;

Starting restore at 13/02/2014 11:18:26
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=41 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 02/13/2014 11:18:27
RMAN-06813: could not translate pluggable database PDB1

RMAN> list backup tag TAG20140212T191237;
List of Backup Sets
===================
BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
67 Full 1.88G DISK 00:02:41 12/02/2014 19:15:19
 BP Key: 67 Status: AVAILABLE Compressed: NO Tag: TAG20140212T191237
 Piece Name: /u01/app/oracle/fast_recovery_area/CDB1/backupset/2014_02_12/o1_mf_nnndf_TAG20140212T191237_9hrbjrqd_.bkp
 List of Datafiles in backup set 67
 File LV Type Ckp SCN Ckp Time Name
 ---- -- ---- ---------- ------------------- ----
 1 Full 3220602 12/02/2014 19:12:38 /u01/app/oracle/oradata/CDB1/datafile/o1_mf_system_9c522mbz_.dbf
 3 Full 3220602 12/02/2014 19:12:38 /u01/app/oracle/oradata/CDB1/datafile/o1_mf_sysaux_9c520w6w_.dbf
 4 Full 3220602 12/02/2014 19:12:38 /u01/app/oracle/oradata/CDB1/datafile/o1_mf_undotbs1_9c524cnr_.dbf
 6 Full 3220602 12/02/2014 19:12:38 /u01/app/oracle/oradata/CDB1/datafile/o1_mf_users_9c524bjm_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ -------------------
69 Full 680.13M DISK 00:00:24 12/02/2014 19:16:32
 BP Key: 69 Status: AVAILABLE Compressed: NO Tag: TAG20140212T191237
 Piece Name: /u01/app/oracle/fast_recovery_area/CDB1/EDDDB886A1191F07E043344EB2C0BE27/backupset/2014_02_12/o1_mf_nnndf_TAG20140212T191237_9hrbq8lm_.bkp
 List of Datafiles in backup set 69
 Container ID: 2, PDB Name: PDB$SEED
 File LV Type Ckp SCN Ckp Time Name
 ---- -- ---- ---------- ------------------- ----
 5 Full 1732663 18/12/2013 22:52:25 /u01/app/oracle/oradata/CDB1/datafile/o1_mf_system_9c5257ms_.dbf
 7 Full 1732663 18/12/2013 22:52:25 /u01/app/oracle/oradata/CDB1/datafile/o1_mf_sysaux_9c5257mj_.dbf

As you can see, that backup is no longer registered. I still don't know whether this is normal behaviour for PDB backups or a bug, but for now be careful when dropping a PDB: its backups will no longer be usable for a restore. Scary stuff, isn't it?

Categories: DBA Blogs

SQL Server Statistics: Maintenance and Best Practices

Pythian Group - Fri, 2014-02-14 12:58
What are Statistics?

There are multiple paths a database can use to answer a query, some of them faster and more efficient than others. It is the job of the query optimizer to evaluate and choose the best path, or execution plan, for a given query. Using the available indexes may not always be the most efficient plan. For example, if 95% of the values for a column are the same, a full scan will probably be more efficient than using the index on that column. Statistics are SQL Server objects that contain metrics on the count and distribution of data within a column or columns; the optimizer uses them to estimate row counts and choose the best execution plan.

Index statistics: Created automatically when an index (clustered or non-clustered) is created. These have the same name as the index and exist as long as the index exists.

Column statistics: Created manually by the DBA using the ‘CREATE STATISTICS’ command, or automatically if the “Auto Create Statistics” option is set to “True”. Column statistics can be created, modified and dropped at will.
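A minimal sketch of creating and dropping a column statistic (the table, column and statistic names are hypothetical):

-- Create a column statistic using a full scan of the data
CREATE STATISTICS ST_Person_LastName ON dbo.Person (LastName) WITH FULLSCAN;

-- Drop it once it is no longer useful
DROP STATISTICS dbo.Person.ST_Person_LastName;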

Statistics contain two types of information about the data: density and distribution. Density is simply the inverse of the count of distinct values for the column or columns. The distribution is a representation of the data contained in the first column of the statistic. This information is stored in a histogram; the histogram contains up to 200 steps, each with a lower and upper limit, along with the count of values that fall between them. To view this histogram, go to the Details tab of the statistic's properties, or use the command DBCC SHOW_STATISTICS. The screenshot below shows the histogram of an index statistic: the RANGE_HI_KEY is the upper limit of the step, the RANGE_HI_KEY of the previous step + 1 is the lower limit, and the RANGE_ROWS is the count of rows falling between the limits (rows equal to the upper limit itself are counted in EQ_ROWS).
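A minimal sketch of viewing this information (the table and statistic names are hypothetical):

-- Full output: header, density vector and histogram
DBCC SHOW_STATISTICS ('dbo.Person', 'ST_Person_LastName');

-- Histogram only
DBCC SHOW_STATISTICS ('dbo.Person', 'ST_Person_LastName') WITH HISTOGRAM;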

[Screenshot: histogram of an index statistic as shown by DBCC SHOW_STATISTICS]

Statistics Maintenance and Best Practices

When the data in the database changes, the statistics become stale and outdated. When examining a query execution plan, a large discrepancy between the Actual Number of Rows and the Estimated Number of Rows is an indication of outdated statistics. Outdated statistics can lead the optimizer to choose an inefficient execution plan and can dramatically affect overall performance. Steps must therefore be taken to keep statistics up to date.

[Screenshot: execution plan showing estimated vs. actual number of rows]

Keep Auto Create Statistics enabled: This database property allows SQL Server to automatically create statistics for a single non-indexed column when it is used in a WHERE or JOIN condition and no statistics exist for it. This ensures the optimizer will have the necessary information to choose a plan. The statistics automatically created by SQL Server will start with _WA_ in their name.

Keep Auto Update Statistics enabled: This database property allows SQL Server to automatically update statistics when they are deemed outdated. The update occurs before executing a query if certain conditions are met, or after the query is executed if Auto Update Statistics Asynchronously is used instead. (A sketch of setting these properties with T-SQL follows the list below.) An update is triggered when any one of these three conditions is met:

-Table had 0 rows and increases to one or more rows.

-Table had less than 500 rows and there is an increase of 500 rows or more since the last update

-Table has over 500 rows and there is an increase of 500 + 20% of the table size since the last update
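A minimal sketch of enabling these properties, assuming a database named SalesDb (the name is hypothetical):

ALTER DATABASE SalesDb SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE SalesDb SET AUTO_UPDATE_STATISTICS ON;
-- Optional: run the triggering query with the old statistics and update them afterwards
ALTER DATABASE SalesDb SET AUTO_UPDATE_STATISTICS_ASYNC ON;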

[Screenshot: database properties showing the automatic statistics options]

Maintenance plan: You can also proactively update statistics yourself using T-SQL (sp_updatestats for all statistics in a database, or UPDATE STATISTICS for a single one) or a maintenance plan task. Scheduling statistics maintenance during off hours helps reduce the need to update statistics during peak times. The need for and frequency of this proactive maintenance depend on your environment; frequent data changes cause statistics to become outdated more quickly. You can also specify the sample size used to update the statistic.

Ex:

UPDATE STATISTICS TableName(StatsName) WITH FULLSCAN: Costs more time and resources but will ensure that statistics are accurate.

UPDATE STATISTICS TableName(StatsName) WITH SAMPLE 50 PERCENT: Will only use half the rows and extrapolate the rest, meaning the updating will be faster, but the statistics may not be accurate.

Rebuilding an index will also update the index statistics with full scan (column statistics will not be updated, nor will an index reorganize update any statistics). Note however that updating statistics forces queries to recompile; you must therefore decide when the cost of the recompile overhead is worth having the latest statistics.
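A minimal sketch of the difference (the index and table names are hypothetical):

-- REBUILD recreates the index and refreshes its statistics with a full scan
ALTER INDEX IX_Person_LastName ON dbo.Person REBUILD;

-- REORGANIZE defragments the index in place; statistics are not updated
ALTER INDEX IX_Person_LastName ON dbo.Person REORGANIZE;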

Unused Statistics: Statistics come with a cost, and just as with indexes, too many of them can lead to issues, increasing the cost of statistics maintenance and making the optimizer's job more difficult. Updating statistics for a large database can easily take hours, even days, to complete. When Auto Create Statistics is enabled, statistics can be created even for a one-time query, so a table can end up carrying a large number of statistics that serve no purpose. It is wise to review and clean up statistics as part of general maintenance. Identifying unused statistics can be difficult since, unlike indexes, SQL Server does not record statistics usage. However, you can identify statistics that satisfy one of the thresholds for the automatic update above but still have not been updated; this is a good indication of unused statistics, and a query sketch for this check follows.
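A rough sketch of such a check, assuming sys.dm_db_stats_properties is available (SQL Server 2008 R2 SP2 / 2012 SP1 and later); it flags statistics on larger tables whose modification counter has crossed the auto-update threshold from the list above without an update ever having been triggered:

SELECT OBJECT_NAME(s.object_id) AS TableName ,
       s.name AS StatName ,
       sp.last_updated ,
       sp.rows ,
       sp.modification_counter
FROM sys.stats AS s
     CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsMsShipped') = 0
      AND sp.rows > 500
      -- past the 500 + 20% threshold, yet never auto-updated by a query
      AND sp.modification_counter >= 500 + 0.20 * sp.rows
ORDER BY sp.modification_counter DESC;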

In addition to unused statistics, you may find overlapping statistics which are covered by others. The following script from Kendal Van Dyke identifies all auto-created single-column statistics that are covered by an existing index statistic (i.e., one sharing the same leading column) in a database, and generates the T-SQL commands to drop them.

WITH autostats ( object_id, stats_id, name, column_id )
AS ( SELECT sys.stats.object_id ,
            sys.stats.stats_id ,
            sys.stats.name ,
            sys.stats_columns.column_id
     FROM sys.stats
          INNER JOIN sys.stats_columns
              ON sys.stats.object_id = sys.stats_columns.object_id
             AND sys.stats.stats_id = sys.stats_columns.stats_id
     WHERE sys.stats.auto_created = 1
       AND sys.stats_columns.stats_column_id = 1 )
SELECT OBJECT_NAME(sys.stats.object_id) AS [Table] ,
       sys.columns.name AS [Column] ,
       sys.stats.name AS [Overlapped] ,
       autostats.name AS [Overlapping] ,
       'DROP STATISTICS [' + OBJECT_SCHEMA_NAME(sys.stats.object_id)
       + '].[' + OBJECT_NAME(sys.stats.object_id) + '].['
       + autostats.name + ']'
FROM sys.stats
     INNER JOIN sys.stats_columns
         ON sys.stats.object_id = sys.stats_columns.object_id
        AND sys.stats.stats_id = sys.stats_columns.stats_id
     INNER JOIN autostats
         ON sys.stats_columns.object_id = autostats.object_id
        AND sys.stats_columns.column_id = autostats.column_id
     INNER JOIN sys.columns
         ON sys.stats.object_id = sys.columns.object_id
        AND sys.stats_columns.column_id = sys.columns.column_id
WHERE sys.stats.auto_created = 0
  AND sys.stats_columns.stats_column_id = 1
  AND sys.stats_columns.stats_id != autostats.stats_id
  AND OBJECTPROPERTY(sys.stats.object_id, 'IsMsShipped') = 0

Source: http://www.kendalvandyke.com/2010/09/tuning-tip-identifying-overlapping.html

Common Mistakes

Statistics update after Index Rebuild: As mentioned previously, an index rebuild (not a reorganize) also updates the index statistics using full scan. Scheduling statistics maintenance right after the index maintenance therefore causes duplicate work. Worse, if the statistics maintenance uses a small sample size, it will overwrite the statistics that were just updated with full scan, making them less accurate. Scheduling it after an index reorganize, however, is fine.

Relying on Auto Update: As seen above, the threshold which triggers the auto update is around 20% of the total row count. This is fine for small tables, but larger tables require a lot of data changes before the update is triggered; on a 100-million-row table, for example, roughly 20 million rows must change before the auto update fires, and in the meantime the statistics can become badly outdated.

Not specifying the sample size: When updating, choosing the right sample size is important to keep statistics accurate. The cost of a full scan is higher, but in some situations it is required, especially for very large databases. Running EXEC sp_updatestats @resample = 'resample' updates all statistics using the sample last used for each; if you do not specify the resample, statistics are updated with the default sample, a fraction of the total row count that SQL Server determines per table. We recently ran into an issue where a DBA executed EXEC sp_updatestats on a 1 terabyte database, causing all statistics to be updated with the default sample. Given the size of the database, the default sample was simply not enough to represent the data distribution, and all queries started using bad execution plans, leading to major performance issues. Only a full scan update of the statistics provided accurate statistics for this database, but it takes a very long time to run. Luckily there was a QA server where the database had been restored from before the stats update, with almost identical data. We were able to script the statistics from the QA server and recreate them on production using their binary representation (see WITH STATS_STREAM). This solution is not recommended and was only used as a last resort, but the incident shows the importance of statistics and of implementing maintenance appropriate for the environment.
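A minimal sketch of the safer options discussed above (the table name dbo.FactSales is hypothetical):

-- Re-use each statistic's previous sample rate across the whole database
EXEC sp_updatestats @resample = 'resample';

-- Force an accurate, full-scan update of every statistic on one critical table
UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;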

Updating too often: Updating statistics not only has a cost of its own; it also causes queries to recompile. Update statistics only as required, on a schedule appropriate for your environment. The right frequency depends on the amount of data change in the database; more changes call for more frequent statistics updates.

Conclusion

Statistics are a crucial element in the overall performance of a database and require proper maintenance and attention. In addition, each environment is unique and has different needs regarding statistics maintenance. For more information regarding statistics, see http://technet.microsoft.com/en-us/library/ms190397.aspx.

Categories: DBA Blogs

Managing Oracle Central Inventory

DBA Scripts and Articles - Fri, 2014-02-14 12:36

Have you ever been in a situation in which the Oracle central inventory is missing a Home, or lists a Home that no longer exists, or the central inventory doesn't exist at all? Here is the way to consolidate your inventory, or create it if it doesn't exist. Create the central [...]

The post Managing Oracle Central Inventory appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Log Buffer #359, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-02-14 10:47

On this Valentine’s Day, what is the most romantic thing you could do as a database professional? Why, yes — you could read (and then share) this scintillating and lovely Log Buffer Edition!

Oracle:

Oracle is kicking off a 17-city North American event series on how running Oracle Applications on Oracle hardware can help customers deliver maximum value and achieve dramatically better business results.

Five guidelines to follow as you begin building and employing mobile applications – plus Oracle technologies and products that support your move to mobility in your enterprise.

When you run R functions in the database, especially functions involving multiple R engines in parallel, you can monitor their progress using the Oracle R Enterprise datastore as a central location for progress notifications, or any intermediate status or results.

In the era of big data, data warehousing is becoming even more relevant for enterprises which are eager to become more information-driven.

The License Definitions and Rules (LDRs) are a critical part of every software license that Oracle grants to a customer.

SQL Server:

An examination of how the various transaction isolation levels affect locking (and blocking).

What to do if you need to push the limits of your disk subsystem, in order to determine whether the hardware’s I/O capacity meets the needs of a database application.

An Overview of SSIS Parameters – Level 15 of the Stairway to Integration Services.

With the new SQL Server 2014 In-Memory OLTP tables, stored procedures can be natively compiled and can improve performance.

Excel spreadsheets are useful for distributing data generated by SQL Server, but SSIS lacks direct support for writing Excel files.

MySQL:

The Sign: row-based binary logging and integer signedness in MySQL and MariaDB.

Delayed Transaction Durability: A new feature in SQL Server 2014 CTP2.

Getting Started with the Spider Storage Engine.

Shard-Query is now much faster for some aggregate functions.

One of the most expensive database operations is performing Data Definition Language (DDL, e.g. CREATE, DROP, ALTER) statements, especially ALTER statements, because MySQL blocks the entire table for both reads and writes while modifying the table.

Categories: DBA Blogs