The Oracle Instructor

about Database Technology

Fast-Start Failover for Maximum Protection in #Oracle 12c

Fri, 2017-01-13 11:40

Fast-Start Failover is supported with Maximum Protection in 12cR2, and multiple Observers can now monitor the same Data Guard configuration simultaneously. I will show both in this article, starting with a (Multitenant) primary in Maximum Protection mode with two standby databases. It is still not recommended to run the highest protection mode with only one standby. So this is my starting point:

DGMGRL> show configuration;

Configuration - myconf

Protection Mode: MaxProtection
Members:
cdb1 - Primary database
cdb1sb - Physical standby database
cdb1sb2 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 57 seconds ago)

All three databases have flashback turned on. I want to have a setup like this in the end:

FSFO with Max Protection and 2 Observers
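
Flashback Database is what allows the broker to reinstate a failed primary as a new standby after a failover, so it should be confirmed on all three databases before Fast-Start Failover gets enabled. A minimal check, as a sketch – if it returns NO, alter database flashback on; fixes that:

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES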

This is how it’s been configured:

DGMGRL> edit database cdb1 set property faststartfailovertarget='cdb1sb,cdb1sb2';
Property "faststartfailovertarget" updated
DGMGRL> edit database cdb1sb set property faststartfailovertarget='cdb1,cdb1sb2';
Property "faststartfailovertarget" updated
DGMGRL> edit database cdb1sb2 set property faststartfailovertarget='cdb1,cdb1sb';
Property "faststartfailovertarget" updated
DGMGRL> enable fast_start failover;
Enabled.

On host uhesse4:

[oracle@uhesse4 ~]$ dgmgrl sys/oracle@cdb1
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Jan 13 17:20:52 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "cdb1"
Connected as SYSDBA.
DGMGRL> start observer number_one;
[W000 01/13 17:21:04.85] FSFO target standby is cdb1sb
[W000 01/13 17:21:07.05] Observer trace level is set to USER
[W000 01/13 17:21:07.05] Try to connect to the primary.
[W000 01/13 17:21:07.05] Try to connect to the primary cdb1.
[W000 01/13 17:21:07.05] The standby cdb1sb is ready to be a FSFO target
[W000 01/13 17:21:09.06] Connection to the primary restored!
[W000 01/13 17:21:13.07] Disconnecting from database cdb1.

On host uhesse3:

[oracle@uhesse3 ~]$ dgmgrl sys/oracle@cdb1
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Jan 13 17:22:16 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "cdb1"
Connected as SYSDBA.
DGMGRL> start observer number_two;
[W000 01/13 17:22:32.68] FSFO target standby is cdb1sb
[W000 01/13 17:22:34.85] Observer trace level is set to USER
[W000 01/13 17:22:34.85] Try to connect to the primary.
[W000 01/13 17:22:34.85] Try to connect to the primary cdb1.
[W000 01/13 17:22:34.85] The standby cdb1sb is ready to be a FSFO target
[W000 01/13 17:22:36.86] Connection to the primary restored!
[W000 01/13 17:22:40.86] Disconnecting from database cdb1.

This is now the state of the configuration:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxProtection
  Members:
  cdb1    - Primary database
    cdb1sb  - (*) Physical standby database 
    cdb1sb2 - Physical standby database 

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 33 seconds ago)

DGMGRL> show fast_start failover;

Fast-Start Failover: ENABLED

  Threshold:          15 seconds
  Target:             cdb1sb
  Candidate Targets:  cdb1sb,cdb1sb2
  Observers:      (*) number_two
                      number_one
  Lag Limit:          30 seconds (not in use)
  Shutdown Primary:   TRUE
  Auto-reinstate:     TRUE
  Observer Reconnect: (none)
  Observer Override:  FALSE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Write Errors          YES

  Oracle Error Conditions:
    (none)
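
Most of these properties can be adjusted with broker commands. The threshold and the lag limit, for example, are configuration-level properties; a sketch with the default values (the lag limit only applies with Maximum Performance, which is why it shows as not in use above):

DGMGRL> edit configuration set property FastStartFailoverThreshold=15;
Property "faststartfailoverthreshold" updated
DGMGRL> edit configuration set property FastStartFailoverLagLimit=30;
Property "faststartfailoverlaglimit" updated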

That protects against the failure of any two components in the configuration with automatic failover and zero data loss! For example, the first standby may fail and then the primary. We fail over to the second standby, which has meanwhile become the new fast-start failover target:

[oracle@uhesse2 ~]$ ps -ef | grep smon
oracle   15087     1  0 17:40 ?        00:00:00 ora_smon_cdb1sb
oracle   15338  9765  0 17:49 pts/2    00:00:00 grep --color=auto smon
[oracle@uhesse2 ~]$ kill -9 15087

That crashed the first standby. This is what the Observers report:

[W000 01/13 17:49:34.24] Failed to ping the standby.
[W000 01/13 17:49:37.25] Failed to ping the standby.
[W000 01/13 17:49:40.25] Failed to ping the standby.
[W000 01/13 17:49:43.25] Failed to ping the standby.
[W000 01/13 17:49:46.26] Failed to ping the standby.
[W000 01/13 17:49:46.26] Standby database has changed to cdb1sb2.
[W000 01/13 17:49:47.26] Try to connect to the primary.
[W000 01/13 17:49:47.26] Try to connect to the primary cdb1.
[W000 01/13 17:49:48.34] The standby cdb1sb2 is ready to be a FSFO target
[W000 01/13 17:49:53.35] Connection to the primary restored!
[W000 01/13 17:49:57.35] Disconnecting from database cdb1.

This is the state of the configuration now:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxProtection
  Members:
  cdb1    - Primary database
    Error: ORA-16778: redo transport error for one or more members

    cdb1sb2 - (*) Physical standby database 
    cdb1sb  - Physical standby database 
      Error: ORA-1034: ORACLE not available

Fast-Start Failover: ENABLED

Configuration Status:
ERROR   (status updated 14 seconds ago)

Notice that the Fast-Start Failover indicator (*) now points to cdb1sb2. Now the primary fails:

[oracle@uhesse1 ~]$ ps -ef | grep smon
oracle   21334     1  0 17:41 ?        00:00:00 ora_smon_cdb1
oracle   22077  5043  0 17:52 pts/0    00:00:00 grep --color=auto smon
[oracle@uhesse1 ~]$ kill -9 21334

This is what the Observers report:

[W000 01/13 17:52:54.04] Primary database cannot be reached.
[W000 01/13 17:52:54.04] Fast-Start Failover threshold has not exceeded. Retry for the next 15 seconds
[W000 01/13 17:52:55.05] Try to connect to the primary.
[W000 01/13 17:52:57.13] Primary database cannot be reached.
[W000 01/13 17:52:58.13] Try to connect to the primary.
[W000 01/13 17:53:06.38] Primary database cannot be reached.
[W000 01/13 17:53:06.38] Fast-Start Failover threshold has not exceeded. Retry for the next 3 seconds
[W000 01/13 17:53:07.39] Try to connect to the primary.
[W000 01/13 17:53:09.46] Primary database cannot be reached.
[W000 01/13 17:53:09.46] Fast-Start Failover threshold has expired.
[W000 01/13 17:53:09.46] Try to connect to the standby.
[W000 01/13 17:53:09.46] Making a last connection attempt to primary database before proceeding with Fast-Start Failover.
[W000 01/13 17:53:09.46] Check if the standby is ready for failover.
[S019 01/13 17:53:09.47] Fast-Start Failover started...

17:53:09.47  Friday, January 13, 2017
Initiating Fast-Start Failover to database "cdb1sb2"...
[S019 01/13 17:53:09.47] Initiating Fast-start Failover.
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1sb2"
17:53:23.68  Friday, January 13, 2017

After the two crashed databases have been restarted, they are automatically reinstated, and the configuration then looks like this:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxProtection
  Members:
  cdb1sb2 - Primary database
    cdb1    - (*) Physical standby database 
    cdb1sb  - Physical standby database 

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 7 seconds ago)

DGMGRL> show fast_start failover;

Fast-Start Failover: ENABLED

  Threshold:          15 seconds
  Target:             cdb1
  Candidate Targets:  cdb1,cdb1sb
  Observers:      (*) number_two
                      number_one
  Lag Limit:          30 seconds (not in use)
  Shutdown Primary:   TRUE
  Auto-reinstate:     TRUE
  Observer Reconnect: (none)
  Observer Override:  FALSE

Configurable Failover Conditions
  Health Conditions:
    Corrupted Controlfile          YES
    Corrupted Dictionary           YES
    Inaccessible Logfile            NO
    Stuck Archiver                  NO
    Datafile Write Errors          YES

  Oracle Error Conditions:
    (none)
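
Auto-reinstate being TRUE is what made the reinstatement happen without intervention. Should it ever fail – for instance because the flashback logs don't reach back far enough – the failed database stays disabled in the configuration and a manual attempt can be made; a sketch:

DGMGRL> reinstate database cdb1;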

Switching back to make cdb1 primary – this is of course optional:

DGMGRL> switchover to cdb1;
Performing switchover NOW, please wait...
Operation requires a connection to database "cdb1"
Connecting ...
Connected to "cdb1"
Connected as SYSDBA.
New primary database "cdb1" is opening...
Operation requires start up of instance "cdb1sb2" on database "cdb1sb2"
Starting instance "cdb1sb2"...
ORACLE instance started.
Database mounted.
Connected to "cdb1sb2"
Switchover succeeded, new primary is "cdb1"
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxProtection
  Members:
  cdb1    - Primary database
    cdb1sb  - (*) Physical standby database 
    cdb1sb2 - Physical standby database 

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 29 seconds ago)

I think this enhancement is really a big deal!


Tagged: 12c New Features, Data Guard
Categories: DBA Blogs

Test the Upgrade to #Oracle 12c Yourself: Mainz, February 2nd to 3rd

Wed, 2017-01-11 02:10


The German Oracle User Group (DOAG) offers this interactive workshop – a very good opportunity to perform the upgrade yourself with the support of renowned experts. I will contribute a talk on the topic of Common vs. Local. That is one of the trickier topics that newcomers to Multitenant are confronted with.


Tagged: DOAG
Categories: DBA Blogs

How to reduce Buffer Busy Waits with Hash Partitioned Tables in #Oracle

Tue, 2016-12-06 04:57


Large OLTP sites may suffer from Buffer Busy Waits. Hash Partitioning is one way to reduce them, on both indexes and tables. My last post demonstrated that for indexes; now let's see how it looks with tables. Initially there is a normal table that is not yet hash partitioned. If many sessions insert simultaneously, the problem shows up:

Contention with a heap table

The last extent becomes a hot spot; all inserts go there and only a limited number of blocks is available. Therefore we will see Buffer Busy Waits. The playground:

SQL> create table t (id number, sometext varchar2(50));

Table created.

create sequence id_seq;

Sequence created.

create or replace procedure manyinserts as
begin
 for i in 1..10000 loop
  insert into t values (id_seq.nextval, 'DOES THIS CAUSE BUFFER BUSY WAITS?');
 end loop;
 commit;
end;
/

Procedure created.

create or replace procedure manysessions as
v_jobno number:=0;
begin
FOR i in 1..100 LOOP
 dbms_job.submit(v_jobno,'manyinserts;', sysdate);
END LOOP;
commit;
end;
/

Procedure created.

The procedure manysessions is how I simulate OLTP end-user activity on my demo system. Calling it spawns 100 job sessions, each doing 10,000 inserts:

SQL> exec manysessions

PL/SQL procedure successfully completed.

SQL> select count(*) from t;

  COUNT(*)
----------
   1000000

SQL> select object_name,subobject_name,value from v$segment_statistics 
     where owner='ADAM' 
     and statistic_name='buffer busy waits'
     and object_name = 'T';

OBJECT_NAM SUBOBJECT_	   VALUE
---------- ---------- ----------
T			    2985

So we got thousands of Buffer Busy Waits that way. Now the remedy:

SQL> drop table t purge;

Table dropped.

SQL> create table t (id number, sometext varchar2(50))
     partition by hash (id) partitions 32;

Table created.

SQL> alter procedure manyinserts compile;

Procedure altered.

SQL> alter procedure manysessions compile;

Procedure altered.

SQL> exec manysessions 

PL/SQL procedure successfully completed.

SQL> select count(*) from t;

  COUNT(*)
----------
   1000000

SQL> select object_name,subobject_name,value from v$segment_statistics 
     where owner='ADAM' 
     and statistic_name='buffer busy waits'
     and object_name = 'T';  

OBJECT_NAM SUBOBJECT_	   VALUE
---------- ---------- ----------
T	   SYS_P249	       0
T	   SYS_P250	       1
T	   SYS_P251	       0
T	   SYS_P252	       0
T	   SYS_P253	       0
T	   SYS_P254	       0
T	   SYS_P255	       0
T	   SYS_P256	       1
T	   SYS_P257	       0
T	   SYS_P258	       0
T	   SYS_P259	       1
T	   SYS_P260	       0
T	   SYS_P261	       0
T	   SYS_P262	       0
T	   SYS_P263	       0
T	   SYS_P264	       1
T	   SYS_P265	       1
T	   SYS_P266	       0
T	   SYS_P267	       0
T	   SYS_P268	       0
T	   SYS_P269	       0
T	   SYS_P270	       0
T	   SYS_P271	       1
T	   SYS_P272	       0
T	   SYS_P273	       0
T	   SYS_P274	       0
T	   SYS_P275	       1
T	   SYS_P276	       0
T	   SYS_P277	       0
T	   SYS_P278	       0
T	   SYS_P279	       2
T	   SYS_P280	       0

32 rows selected.

SQL> select sum(value) from v$segment_statistics 
     where owner='ADAM' 
     and statistic_name='buffer busy waits'
     and object_name = 'T';

SUM(VALUE)
----------
	 9

SQL> select 2985-9 as waits_gone from dual;

WAITS_GONE
----------
      2976

The hot spot is gone.

This emphasizes again that Partitioning is not only for the Data Warehouse. Hash Partitioning in particular can be used to fight contention in OLTP environments.
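
By the way, an existing table does not necessarily have to be dropped and recreated for that: 12.2 introduced an online conversion of non-partitioned tables. A sketch, assuming release 12.2 and that existing indexes should be maintained:

SQL> alter table t modify
     partition by hash (id) partitions 32
     online update indexes;

Table altered.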


Tagged: partitioning, Performance Tuning
Categories: DBA Blogs

Index Competition in #Oracle 12c

Tue, 2016-08-09 02:39

Suppose you want to find out which type of index is best for performance with your workload. Why not set up a competition and let the optimizer decide? The playground:

ADAM@pdb1 > select max(amount_sold) from sales where channel_id=9;

MAX(AMOUNT_SOLD)
----------------
            5000

ADAM@pdb1 > @lastplan

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------
SQL_ID  3hrvrf1r6kn8s, child number 0
-------------------------------------
select max(amount_sold) from sales where channel_id=9

Plan hash value: 3593230073

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     4 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |     6 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| SALES |     1 |     6 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN                  | BSTAR |     1 |       |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CHANNEL_ID"=9)


20 rows selected.

There is a standard B*tree index on the column CHANNEL_ID that speeds up the SELECT above. I think a bitmap index would be better:

ADAM@pdb1 > create bitmap index bmap on sales(channel_id) invisible nologging;

Index created.

ADAM@pdb1 > alter index bstar invisible;

Index altered.

ADAM@pdb1 > alter index bmap visible;

Index altered.

ADAM@pdb1 > select max(amount_sold) from sales where channel_id=9;

MAX(AMOUNT_SOLD)
----------------
            5000

ADAM@pdb1 > @lastplan

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------
select max(amount_sold) from sales where channel_id=9

Plan hash value: 2178022915

----------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE                      |       |     1 |     6 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| SALES |     1 |     6 |     3   (0)| 00:00:01 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |       |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE        | BMAP  |       |       |            |          |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("CHANNEL_ID"=9)


21 rows selected.

With this 12c new feature (two indexes on the same column), I got a smooth transition to the new index type. But it left the optimizer no choice. What about this?

ADAM@pdb1 > alter index bmap invisible;

Index altered.

ADAM@pdb1 > alter session set optimizer_use_invisible_indexes=true;

Now both indexes are invisible and the optimizer may choose either of them. It turns out that it likes the bitmap index better here. Instead of watching the execution plans, V$SEGMENT_STATISTICS can also be used to find out:

ADAM@pdb1 > select object_name,statistic_name,value
            from v$segment_statistics
            where object_name in ('BSTAR','BMAP')
            and statistic_name in ('physical reads','logical reads');

OBJECT STATISTIC_NAME                      VALUE
------ ------------------------------ ----------
BSTAR  logical reads                       22800
BSTAR  physical reads                       6212
BMAP   logical reads                        1696
BMAP   physical reads                          0

The numbers of BSTAR remain static while the BMAP numbers increase. You may also monitor that with DBA_HIST_SEG_STAT across AWR snapshots. Now isn't that cool?:-)

A couple of things to be aware of here:

  • Watch out for more than just physical/logical reads – bitmap indexes may cause locking problems in an OLTP environment.
  • Don't keep the two indexes invisible forever – once you have seen which one performs better, drop the other one (see the sketch below). Invisible indexes still need to be maintained upon DML and therefore slow it down.
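
Declaring the winner could then look like this, as a sketch:

ADAM@pdb1 > alter index bmap visible;

Index altered.

ADAM@pdb1 > drop index bstar;

Index dropped.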


Tagged: 12c New Features, Performance Tuning
Categories: DBA Blogs

Data Redaction and Data Pump in #Oracle 12c

Fri, 2016-08-05 08:07

What happens upon Data Pump Export if tables are being exported that have a Data Redaction Policy? I got that question several times in class, which is why I put the answer here, so I can refer to it subsequently. It might also be of interest to the Oracle Community:-)

SYS@orcl > BEGIN
DBMS_REDACT.ADD_POLICY
(object_schema => 'SCOTT',
object_name => 'EMP',
policy_name => 'EMPSAL_POLICY',
column_name => 'SAL',
function_type => DBMS_REDACT.FULL,
expression => '1=1');
END;
/  

PL/SQL procedure successfully completed.

SYS@orcl > connect scott/tiger
Connected.
SCOTT@orcl > select ename,sal from emp;

ENAME             SAL
---------- ----------
SMITH               0
ALLEN               0
WARD                0
JONES               0
MARTIN              0
BLAKE               0
CLARK               0
SCOTT               0
KING                0
TURNER              0
ADAMS               0
JAMES               0
FORD                0
MILLER              0

14 rows selected.

Scott doesn’t see the values of the SAL column because of the Data Redaction Policy. SYS is not subject to that policy, because SYS has the privilege EXEMPT REDACTION POLICY:

SYS@orcl > select ename,sal from scott.emp;

ENAME             SAL
---------- ----------
SMITH             800
ALLEN            1600
WARD             1250
JONES            2975
MARTIN           1250
BLAKE            2850
CLARK            2450
SCOTT            9000
KING             5000
TURNER           1500
ADAMS            1100
JAMES             950
FORD             9000
MILLER           1300

14 rows selected.

If Data Pump Export is done as a user who has that privilege, the table is simply exported with all its content, regardless of the policy:

SYS@orcl >  create directory dpdir as '/home/oracle/';
[oracle@uhesse ~]$ expdp tables=scott.emp directory=DPDIR

Export: Release 12.1.0.2.0 - Production on Fri Aug 5 08:56:51 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
Starting "SYS"."SYS_EXPORT_TABLE_01":  /******** AS SYSDBA tables=scott.emp directory=DPDIR
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/RADM_POLICY
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported "SCOTT"."EMP"                               8.781 KB      14 rows
Master table "SYS"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
  /home/oracle/expdat.dmp
Job "SYS"."SYS_EXPORT_TABLE_01" successfully completed at Fri Aug 5 08:57:10 2016 elapsed 0 00:00:15

If Scott tries to export the table, an error is raised:

SYS@orcl > grant read,write on directory dpdir to scott;

Grant succeeded.

[oracle@uhesse ~]$ expdp scott/tiger tables=scott.emp directory=DPDIR

Export: Release 12.1.0.2.0 - Production on Fri Aug 5 08:55:10 2016

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
Starting "SCOTT"."SYS_EXPORT_TABLE_01":  scott/******** tables=scott.emp directory=DPDIR
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
ORA-31693: Table data object "SCOTT"."EMP" failed to load/unload and is being skipped due to error:
ORA-28081: Insufficient privileges - the command references a redacted object.
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
  /home/oracle/expdat.dmp
Job "SCOTT"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at Fri Aug 5 08:55:28 2016 elapsed 0 00:00:16

Taken from the 12c New Features class that I delivered this week in Hinckley. As always: Don’t believe it, test it:-)


Tagged: 12c New Features
Categories: DBA Blogs

Create a SQL Profile to let the Optimizer ignore hints in #Oracle

Thu, 2016-07-07 03:36

Something I presented recently during an Oracle Database 12c Performance Management and Tuning class. Hints are a double-edged sword; they may do more harm than good. What if hinted SQL comes from an application that you as the DBA in charge can't modify? You can tell the Optimizer to ignore that nasty hint.

One method is to use alter session set "_optimizer_ignore_hints"=true; but this makes the optimizer ignore all hints during that session, including the useful ones, so it may not be desirable. The method I show here works on the statement level. The playground:

SQL> select /*+ index (sales,sales_bix) */ max(amount_sold) from sales where channel_id=3;

MAX(AMOUNT_SOLD)
----------------
         1782.72

Elapsed: 00:00:04.92
SQL> select plan_table_output from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------------------------
SQL_ID  7m2k0y4hy1ngh, child number 0
--------------------------------------
select /*+ index (sales,sales_bix) */ max(amount_sold) from sales where channel_id=3

Plan hash value: 1767991108

--------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |           |       |       |   139K(100)|          |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------
|   1 |  SORT AGGREGATE                      |           |     1 |     8 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| SALES     |    17M|   131M|   139K  (1)| 00:00:06 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |           |       |       |            |          |
|*  4 |     BITMAP INDEX SINGLE VALUE        | SALES_BIX |       |       |            |          |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("CHANNEL_ID"=3)

The index hint directs the optimizer here to use a bad plan that wouldn’t be used otherwise:

SQL> select max(amount_sold) from sales where channel_id=3;

MAX(AMOUNT_SOLD)
----------------
         1782.72

Elapsed: 00:00:01.06
SQL> select plan_table_output from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------------------------
SQL_ID  ahw4npmjpnu1k, child number 0
--------------------------------------
select max(amount_sold) from sales where channel_id=3

Plan hash value: 1047182207

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |       |       | 28396 (100)|          |
|   1 |  SORT AGGREGATE    |       |     1 |     8 |            |          |

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------
|*  2 |   TABLE ACCESS FULL| SALES |    17M|   131M| 28396   (1)| 00:00:02 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("CHANNEL_ID"=3)

Now the remedy:

begin
 dbms_sqltune.import_sql_profile(
 name => 'MYPROFILE1',
 category => 'DEFAULT',
 sql_text => 'select /*+ index (sales,sales_bix) */ max(amount_sold) from sales where channel_id=3',
 profile => sqlprof_attr('IGNORE_OPTIM_EMBEDDED_HINTS')
                                 );
end;
/

PL/SQL procedure successfully completed.

SQL> select /*+ index (sales,sales_bix) */ max(amount_sold) from sales where channel_id=3;

MAX(AMOUNT_SOLD)
----------------
         1782.72

Elapsed: 00:00:01.05

SQL> select plan_table_output from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  7m2k0y4hy1ngh, child number 0
-------------------------------------
select /*+ index (sales,sales_bix) */ max(amount_sold) from sales where channel_id=3

Plan hash value: 1047182207

----------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |       |       | 28396 (100)|          |

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------
|   1 |  SORT AGGREGATE    |       |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| SALES |    17M|   131M| 28396   (1)| 00:00:02 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("CHANNEL_ID"=3)

Note
-----

PLAN_TABLE_OUTPUT
----------------------------------------------------
   - SQL profile MYPROFILE1 used for this statement

This works for that one SQL statement only, without having to modify the application. The SQL Profile can be removed like this:

SQL> exec dbms_sqltune.drop_sql_profile('MYPROFILE1')
PL/SQL procedure successfully completed.
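
One more thing worth knowing: if the application sends that statement with varying literals, each variant would need its own profile. DBMS_SQLTUNE.IMPORT_SQL_PROFILE accepts a force_match parameter that matches statements after literal normalization; a sketch based on the example above:

begin
 dbms_sqltune.import_sql_profile(
 name        => 'MYPROFILE1',
 category    => 'DEFAULT',
 sql_text    => 'select /*+ index (sales,sales_bix) */ max(amount_sold) from sales where channel_id=3',
 profile     => sqlprof_attr('IGNORE_OPTIM_EMBEDDED_HINTS'),
 force_match => true
 );
end;
/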

All the above is not new, but I still think it is worthwhile to mention here for your reference, should you ever encounter some nasty hints:-)


Tagged: optimizer, Performance Tuning
Categories: DBA Blogs

Another reason why you should use the Data Guard Broker for your #Oracle Standby

Wed, 2016-05-18 02:00

The Data Guard Broker is recommended for various reasons; this one is less obvious: it prevents a split-brain problem that may otherwise occur in certain situations. Let me show you:

[oracle@uhesse ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Members:
  prima - Primary database
    physt - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 18 seconds ago)

This is my setup with 12c, but the demonstrated behavior is the same already with 11g. I will now cause a crash of the primary database, without damaging any files – like a power outage on the primary site:

[oracle@uhesse ~]$ ps -ef | grep smon
oracle    6279     1  0 08:30 ?        00:00:00 ora_smon_prima
oracle    6786     1  0 08:32 ?        00:00:00 ora_smon_physt
oracle    7168  3489  0 08:43 pts/0    00:00:00 grep --color=auto smon
[oracle@uhesse ~]$ kill -9 6279

Don't do that at home:-) Now the primary is gone, but of course I can fail over to the standby:

[oracle@uhesse ~]$ dgmgrl sys/oracle@physt
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDBA.
DGMGRL> failover to physt;
Performing failover NOW, please wait...
Failover succeeded, new primary is "physt"

So far so good, my end users can continue to work now on the new primary. But what happens when the power outage is over and the ex-primary comes back up again?

[oracle@uhesse ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed May 18 08:47:30 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1258291200 bytes
Fixed Size		    2923920 bytes
Variable Size		  452985456 bytes
Database Buffers	  788529152 bytes
Redo Buffers		   13852672 bytes
Database mounted.
ORA-16649: possible failover to another database prevents this database from
being opened

The DMON background process of the new primary communicates with the DMON on the ex-primary, telling it that there cannot be two primary databases within the same Data Guard Broker configuration! Try the same scenario without the broker and you will see the ex-primary come all the way up to OPEN status. Just wanted to let you know:-)
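
Once the failover is done, the ex-primary can be brought back into the configuration as a standby with a single broker command, provided Flashback Database was enabled on it; a sketch:

DGMGRL> reinstate database prima;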


Tagged: Data Guard
Categories: DBA Blogs

FREE Webinar: Efficient techniques to create and maintain your #Oracle Standby Database

Fri, 2016-05-13 01:36

Join us for this FREE event on May 19 at 13:00 CET and register now, because seats are limited.


I will be talking about

  • how to create and maintain a 12c Standby Database in the most efficient way
  • how to do switchover and failover
  • how to keep up client connectivity after role changes

These topics will be demonstrated live – positively no slide-show reading.

After this major part of the event, we will briefly advertise our digital learning offerings that relate to Oracle Database Core Technology and how you may take advantage of them.

Hope to see YOU in the session:-)


Categories: DBA Blogs

Speakers: Put your Twitter Handle on the Windows taskbar!

Sat, 2016-04-30 04:56

If you often speak at conferences and share your screen to demo things, this could be helpful:

Twitter Handle on the Windows taskbar

Throughout your presentation, the audience will be able to see your Twitter handle, reminding them to include it with tweets about the event. I used to include it in the slides, but this is better, because it also works with live demonstrations where no slides are shown – which is incidentally my favorite way to do presentations:-)

Now how can you do it? Quite easily: open the Windows Control Panel and click on Region and Language, then click on Additional settings:

Region and Language 1

Then you insert your Twitter handle (or any other text you would like to see on the taskbar) as the AM and PM symbols. Make sure to select a time format with trailing tt:

Region and Language 2
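
If you prefer to script this instead of clicking through the dialogs, the same settings live in the registry under HKCU\Control Panel\International: s1159 and s2359 hold the AM and PM symbols and sTimeFormat holds the time format. A sketch from a command prompt, where @MyHandle is a placeholder and the format string is an assumption – adjust both, and note that a logoff may be needed before the taskbar picks it up:

reg add "HKCU\Control Panel\International" /v s1159 /t REG_SZ /d "@MyHandle" /f
reg add "HKCU\Control Panel\International" /v s2359 /t REG_SZ /d "@MyHandle" /f
reg add "HKCU\Control Panel\International" /v sTimeFormat /t REG_SZ /d "HH:mm tt" /f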

That's it. If you want the font size as large as on the first picture above, that can be done here:

[Screenshot: taskbar font size setting]

I did that with Windows 7 Professional 64 bit. Hope you find it useful:-)


Tagged: speaker tip
Categories: DBA Blogs

Turkish Oracle User Group Conference in Istanbul 2016 #TROUGDays

Fri, 2016-04-29 02:31

Straight after the Oracle University Expert Summit in Berlin – which was a big success, by the way – the circus moved on to another amazing place: Istanbul!

Istanbul view

The Turkish Oracle User Group (TROUG) held its annual conference in the rooms of the Istanbul Technical University, with local and international speakers and quite an attractive agenda.

Do you recognize anyone here?:-)

#TROUGDays speakers

I delivered my presentation "Best of RMAN" again, as at the DOAG annual conference last year:

Uwe Hesse speaking in Istanbul

Many thanks to the organizers for making this event possible and for inviting us speakers to dinner!

Istanbul speakers dinner

The conference was well received, and in my view it should be possible to attract even more attendees in the coming years by continuing to invite high-profile international speakers.

audience

My special thanks to Joze, Yves and Osama for your good company during the conference – even if that company was sometimes very tight during the car rides!

Categories: DBA Blogs

DEFAULT_CACHE_SIZE mentioned in alert.log of an #Oracle database

Wed, 2016-04-06 04:53

Today, I got this message in my alert.log file:

Full DB Caching disabled: DEFAULT_CACHE_SIZE should be at least 709 MBs bigger than current size.

A look at the datafile sizes compared with the buffer cache size confirms this:

SYS@cloudcdb > select name,bytes/1024/1024 as mb from v$sgainfo;

NAME                                                       MB
-------------------------------------------------- ----------
Fixed SGA Size                                     2,80265045
Redo Buffers                                       13,1953125
Buffer Cache Size                                        3296
In-Memory Area Size                                      2048
Shared Pool Size                                          736
Large Pool Size                                            32
Java Pool Size                                             16
Streams Pool Size                                           0
Shared IO Pool Size                                       208
Data Transfer Cache Size                                    0
Granule Size                                               16
Maximum SGA Size                                         6144
Startup overhead in Shared Pool                    181,258133
Free SGA Memory Available                                   0

14 rows selected.

SYS@cloudcdb > select sum(bytes)/1024/1024 as mb from v$datafile;

        MB
----------
      3675

It is true: the database doesn't fit completely into the buffer cache; it misses roughly the amount of space mentioned. There is no parameter named DEFAULT_CACHE_SIZE, though. What we have instead is DB_CACHE_SIZE. To fix the issue, I used this initialization parameter file to create a new spfile:

[oracle@uhesse-service2 dbs]$ cat initCLOUDCDB.ora
*.audit_file_dest='/u02/app/oracle/admin/CLOUDCDB/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
*.control_files='/u02/app/oracle/oradata/CLOUDCDB/control01.ctl','/u03/app/oracle/fra/CLOUDCDB/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='CLOUDCDB'
*.db_recovery_file_dest='/u03/app/oracle/fra'
*.db_recovery_file_dest_size=10737418240
*.diagnostic_dest='/u02/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=CLOUDCDBXDB)'
*.enable_pluggable_database=true
*.open_cursors=300
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
*.sga_target=6g
*.pga_aggregate_target=2g
*.inmemory_size=1g
*.db_cache_size=4g
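
From that pfile I then recreated the spfile and bounced the instance; a sketch, assuming the pfile sits in $ORACLE_HOME/dbs under its default name:

SYS@cloudcdb > create spfile from pfile;

File created.

SYS@cloudcdb > startup force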

That reduced the size of the In-Memory Column Store to make room for the buffer cache. Now the database fits nicely into the buffer cache again:

SYS@cloudcdb > select name,bytes/1024/1024 as mb from v$sgainfo;

NAME                                                       MB
-------------------------------------------------- ----------
Fixed SGA Size                                     2,80265045
Redo Buffers                                       13,1953125
Buffer Cache Size                                        4256
In-Memory Area Size                                      1024
Shared Pool Size                                          800
Large Pool Size                                            32
Java Pool Size                                             16
Streams Pool Size                                           0
Shared IO Pool Size                                         0
Data Transfer Cache Size                                    0
Granule Size                                               16
Maximum SGA Size                                         6144
Startup overhead in Shared Pool                    181,290176
Free SGA Memory Available                                   0

14 rows selected.

Accordingly, the message in the alert.log now reads:
Buffer Cache Full DB Caching mode changing from FULL CACHING DISABLED to FULL CACHING ENABLED

Don't get me wrong: I'm not arguing here against the In-Memory Option or in favor of Full Database Caching, nor about whether it makes sense to use either of them or both. This post is just about clarifying the strange message in the alert.log that may confuse people.
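
For completeness: the message is about the automatic full caching decision. There is also a force variant, which can be checked in V$DATABASE and switched while the database is in mount state; a sketch:

SYS@cloudcdb > select force_full_db_caching from v$database;

FOR
---
NO

SYS@cloudcdb > alter database force full database caching;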

And by the way, my demo database is running in the Oracle Cloud:-)


Tagged: 12c New Features
Categories: DBA Blogs

#Oracle University Expert Summit Berlin 2016

Sun, 2016-03-27 06:20

Join us in Berlin, 18th – 20th April. The event will take place at the Adlon Hotel with Jonathan Lewis, Pete Finnigan, Christian Antognini, Javier de la Torre Medina and myself as speakers. We have over 70 enrollments already, so take action to secure your seat in time!

Expert Summit Berlin 2016

My topic will be Rolling Upgrade from 11g to 12c, with a special focus on doing it with Transient Logical Standby. In a live demonstration I will start with a Primary and a Physical Standby Database both running 11.2.0.4 in Maximum Availability protection mode using the Data Guard Broker. This is likely one of the most common setups today. We will then see how to upgrade this configuration to 12.1.0.2 with minimum downtime.

When I delivered this seminar before, during another Expert Summit in Dubai, I was still using 11.2.0.1 as the initial release, then upgrading it to 11.2.0.3. It took me some time and effort to update my demo environment and the lessons to cover more recent and by now more relevant versions. Hope to see you there:-)


Categories: DBA Blogs