Yann Neuhaus

dbi services technical blog

Oracle Public Cloud: 2 OCPU for 1 proc. license

Wed, 2017-02-08 11:40

I’ve blogged recently about the Oracle Core Factor in the Clouds. And then, in order to optimize your Oracle licences, you need to choose the instance type that can run faster on fewer cores. In a previous blog post, I tried to show how complex this can be, comparing the same workload (cached SLOB) on different instances of the same Cloud provider (Amazon). I did that on instances with 2 virtual cores, covered by 2 Oracle Database processor licences. Here I’m doing the same on the Oracle Public Cloud where, with the same number of licences, you can run on 4 hyper-threaded cores.

Trial IaaS

I’m running with the 30-day trial subscription. I did several tests because they were not consistent at first. I had some runs where it seemed that I was not running at full CPU. What I know is that your CPU resources are guaranteed on the Oracle Public Cloud, but maybe it’s not the case on the trial, or maybe I was working during a maintenance window, or…

Well, I finally got consistent results and I’ve run the following test on the IaaS (Cloud Compute Service) to do something similar to what I did on AWS, with the Bring Your Own License idea.

In the Oracle Public Cloud, you can run 2 cores per Oracle processor licence. This means that if I have 2 processor licences, I can run on an instance shape with 4 OCPUs. This shape is called ‘OC5’. Here it is:

[oracle@a9f97f ~]$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
Stepping: 2
CPU MHz: 2294.938
BogoMIPS: 4589.87
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0-7
 
[oracle@a9f97f ~]$ cat /proc/cpuinfo | tail -26
processor : 7
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
stepping : 2
microcode : 0x36
cpu MHz : 2294.938
cache size : 46080 KB
physical id : 0
siblings : 8
core id : 7
cpu cores : 8
apicid : 14
initial apicid : 14
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm xsaveopt fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid
bogomips : 4589.87
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

And here are the results:


Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 30.2 0.00 5.48
DB CPU(s): 1.0 30.1 0.00 5.47
Logical read (blocks): 884,286.7 26,660,977.4
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 25.0 0.00 9.53
DB CPU(s): 2.0 25.0 0.00 9.53
Logical read (blocks): 1,598,987.2 20,034,377.0
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 40.9 0.00 9.29
DB CPU(s): 3.0 40.9 0.00 9.28
Logical read (blocks): 2,195,570.8 29,999,381.1
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 42.9 0.00 14.46
DB CPU(s): 4.0 42.8 0.00 14.45
Logical read (blocks): 2,873,420.5 30,846,373.9
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 5.0 51.7 0.00 15.16
DB CPU(s): 5.0 51.7 0.00 15.15
Logical read (blocks): 3,520,059.0 36,487,232.0
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 6.0 81.8 0.00 17.15
DB CPU(s): 6.0 81.8 0.00 17.14
Logical read (blocks): 4,155,985.6 56,787,765.6
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 7.0 65.6 0.00 17.65
DB CPU(s): 7.0 65.5 0.00 17.62
Logical read (blocks): 4,638,929.5 43,572,740.0
 
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 8.0 92.3 0.00 19.20
DB CPU(s): 8.0 92.1 0.00 19.16
Logical read (blocks): 5,153,440.6 59,631,848.6
 

This is really good. This is 2.8x more LIOPS than the maximum I had on AWS EC2. A 2x factor is expected because I have twice the vCPUs here. But the CPU is also faster. So, two conclusions here:

  • There is no technical reason behind the rejection of the core factor on Amazon EC2. It is only a marketing decision.
  • For sure, for the same Oracle Database cost, Oracle Cloud outperforms Amazon EC2 because it is cheaper (not to mention the discounts you will get if you go to Oracle Cloud).

So what?

This is not a benchmark. The LIOPS may depend a lot on your application behaviour, and CPU is not the only resource to take care of. But for sure, the Oracle Public Cloud IaaS is fast and costs less when used for Oracle products, because of the rules on the core factor. But those rules are for information only. Check your contract for the legal stuff.

 


Oracle 12c – RMAN list failure does not show any failure even if there is one

Wed, 2017-02-08 04:11

Relying too much on the RMAN Data Recovery Advisor is not always the best idea. In a lot of situations, it tells you the right things; however, sometimes it does not tell you the optimal thing, and sometimes RMAN list failure does not show any failure at all, even if there is one.

So … let’s quickly simulate the loss of a datafile during the normal runtime of the database. The result is a clear error message which says that datafile 5 is missing.

SQL> select count(*) from hr.employees;
select count(*) from hr.employees
                        *
ERROR at line 1:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u01/oradata/DBTEST1/hrDBTEST01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

Of course, the error message is immediately reflected in the alert.log as well, where it clearly says that Oracle is unable to open file number 5.

Errors in file /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/trace/DBTEST1_smon_17115.trc:
ORA-01116: error in opening database file 5
ORA-01110: data file 5: '/u01/oradata/DBTEST1/hrDBTEST01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory

Only the RMAN Data Recovery Advisor does not know what is going on.

RMAN> list failure;

using target database control file instead of recovery catalog
Database Role: PRIMARY

no failures found that match specification

Of course, I could shut down the DB and then start it up again, which would trigger a Health Check, but shutting down an instance is not always so easy on production systems. Especially when only one datafile is missing, all others are available, and only a part of the application is affected.

The solution to that issue is to run a manual health check. Quite a lot of health checks can be run manually, as shown in the following documentation.

https://docs.oracle.com/database/121/ADMIN/diag.htm#ADMIN11269
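
To see which checks can be run manually on your own system, you can also query the V$HM_CHECK view (a quick sketch; the filter hides the checks that only the framework itself can run):

select name, description
from v$hm_check
where internal_check = 'N';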

I start with the DB Structure Integrity Check. This check verifies the integrity of database files and reports failures if these files are inaccessible, corrupt or inconsistent.

SQL> begin
  2  dbms_hm.run_check ('DB Structure Integrity Check','Williams Check 00000001');
  3  end;
  4  /

PL/SQL procedure successfully completed.

After running the Health Check, Oracle finds the failure, and in the alert.log you will see an entry like the following:

Checker run found 1 new persistent data failures

If you want to take a look what exactly the Health check found, you can invoke the ADRCI and execute the “show hm_run” command.

oracle@vmoratest1:/oracle/workshop/bombs/ [DBTEST1] adrci

ADRCI: Release 12.1.0.2.0 - Production on Tue Feb 7 16:02:21 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u00/app/oracle"
adrci> show homes
ADR Homes:
diag/clients/user_oracle/host_1833655127_82
diag/tnslsnr/vmoratest1/listener
diag/rdbms/cdb1p/CDB1P
diag/rdbms/dbtest1/DBTEST1
diag/rdbms/rcat/RCAT

adrci> set home diag/rdbms/dbtest1/DBTEST1

adrci> show hm_run

ADR Home = /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1:
*************************************************************************

...
...

**********************************************************
HM RUN RECORD 9
**********************************************************
   RUN_ID                        206
   RUN_NAME                      Williams Check 00000001
   CHECK_NAME                    DB Structure Integrity Check
   NAME_ID                       2
   MODE                          0
   START_TIME                    2017-02-07 16:03:44.431601 +01:00
   RESUME_TIME                   <NULL>
   END_TIME                      2017-02-07 16:03:44.478127 +01:00
   MODIFIED_TIME                 2017-02-07 16:03:44.478127 +01:00
   TIMEOUT                       0
   FLAGS                         0
   STATUS                        5
   SRC_INCIDENT_ID               0
   NUM_INCIDENTS                 0
   ERR_NUMBER                    0
   REPORT_FILE                   <NULL>
9 rows fetched

adrci>

However, if you take a look at the HM RUN report, it gives you an error.

adrci> show report hm_run 'Williams Check 00000001'
DIA-48415: Syntax error found in string [show report hm_run 'Williams Check 00000001'] at column [44]

This is not a bug. The HM run name must contain only alphanumeric characters and underscores. So … better don’t use spaces in your run name. The following would have been better.

SQL> begin
  2  dbms_hm.run_check ('DB Structure Integrity Check','WilliamsCheck');
  3  end;
  4  /

PL/SQL procedure successfully completed.
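
With a valid run name, the report can also be fetched directly from SQL via DBMS_HM.GET_RUN_REPORT (a sketch reusing the run name from above; the function returns a CLOB, so set a large LONG limit in SQL*Plus):

SQL> set long 100000 pagesize 0
SQL> select dbms_hm.get_run_report('WilliamsCheck') from dual;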

In case the “adrci show report hm_run” command does not work for you, it is not the end of the story. We can still look up the v$hm_finding view.

SQL> select RUN_ID, TIME_DETECTED, STATUS, DESCRIPTION, DAMAGE_DESCRIPTION from v$hm_finding where run_id = '206';

RUN_ID TIME_DETECTED                STATUS       DESCRIPTION                                  DAMAGE_DESCRIPTION
------ ---------------------------- ------------ -------------------------------------------- --------------------------------------------
   206 07-FEB-17 04.03.44.475000 PM OPEN         Datafile 5: '/u01/oradata/DBTEST1/hrDBTEST01 Some objects in tablespace HR might be unava
                                                 .dbf' is missing                             ilable

Now let’s check the RMAN “list failure” again.

RMAN> list failure;

Database Role: PRIMARY

List of Database Failures
=========================

Failure ID Priority Status    Time Detected        Summary
---------- -------- --------- -------------------- -------
2          HIGH     OPEN      07-FEB-2017 15:39:38 One or more non-system datafiles are missing


RMAN> advise failure;
...
Automated Repair Options
========================
Option Repair Description
------ ------------------
1      Restore and recover datafile 5
  Strategy: The repair includes complete media recovery with no data loss
  Repair script: /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/hm/reco_668410907.hm

  
RMAN> repair failure preview;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u00/app/oracle/diag/rdbms/dbtest1/DBTEST1/hm/reco_668410907.hm

contents of repair script:
   # restore and recover datafile
   sql 'alter database datafile 5 offline';
   restore ( datafile 5 );
   recover datafile 5;
   sql 'alter database datafile 5 online';
Conclusion

The Oracle Data Recovery Advisor is quite good, but sometimes you need to push it in the right direction. Besides that, take care with the naming convention that you use for your health check runs. ;-)

 


Exadata Express Cloud Service: SQL and Optimizer trace

Mon, 2017-02-06 15:51

The Oracle PDBaaS is for database developers. And database developers may need to trace what happens with their queries: SQL Trace and Optimizer trace. Let’s see what we can do on the Exadata Express Cloud Service.

V$DIAG_TRACE_FILE_CONTENTS

On the managed PDBaaS you don’t have access to the filesystem. But in 12.2 you have a view that can display the content of the trace files: V$DIAG_TRACE_FILE lists the files you can access (that pertain to your container) and V$DIAG_TRACE_FILE_CONTENTS can display the content.

Here is an example of how to read it.
I get my tracefile name from v$process:

SQL> column tracefile new_value tracefile
SQL> select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
 
TRACEFILE
--------------------------------------------------------------------------------
/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389.trc

And read the ‘payload’ from the view:

SQL> set linesize 4000 pagesize 0 trimspool on
SQL> spool last.trc
SQL> select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='&tracefile';
old 1: select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='&tracefile'
new 1: select payload from V$DIAG_TRACE_FILE_CONTENTS where trace_filename='/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389.trc'
 
no rows selected

Of course, I didn’t enable any trace, so the file is empty.

10046

You can’t enable SQL Trace in the Exadata Express Cloud Service. This is disabled by the lockdown profile and by not granting execute on the packages that can do it.
Here is what I tried. Please tell me if you have other ideas:
SQL> alter session set sql_trace=true;
ERROR:
ORA-01031: insufficient privileges
 
SQL> exec sys.dbms_support.start_trace;
BEGIN sys.dbms_support.start_trace; END;
 
*
ERROR at line 1:
ORA-06550: line 1, column 7:
PLS-00201: identifier 'SYS.DBMS_SUPPORT' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored
 
SQL> exec dbms_session.set_sql_trace(true);
BEGIN dbms_session.set_sql_trace(true); END;
 
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_SESSION", line 178
ORA-06512: at line 1
 
SQL> exec for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_system.set_sql_trace_in_session(i.sid,i.serial#,true); end loop;
BEGIN for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_system.set_sql_trace_in_session(i.sid,i.serial#,true); end loop; END;
 
*
ERROR at line 1:
ORA-06550: line 1, column 96:
PLS-00201: identifier 'SYS.DBMS_SYSTEM' must be declared
ORA-06550: line 1, column 96:
PL/SQL: Statement ignored
 
SQL> exec for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_support.start_trace_in_session(i.sid,i.serial#,true); end loop;
BEGIN for i in (select sid,serial# from v$session where sid=sys_context('userenv','sid')) loop sys.dbms_support.start_trace_in_session(i.sid,i.serial#,true); end loop; END;
 
*
ERROR at line 1:
ORA-06550: line 1, column 96:
PLS-00201: identifier 'SYS.DBMS_SUPPORT' must be declared
ORA-06550: line 1, column 96:
PL/SQL: Statement ignored
 
SQL> alter session set events 'sql_trace';
ERROR:
ORA-01031: insufficient privileges
 
SQL> exec execute immediate q'{alter session set events 'sql_trace'}';
BEGIN execute immediate q'{alter session set events 'sql_trace'}'; END;
 
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at line 1
 
SQL> create or replace procedure sys.my_sql_trace as begin execute immediate q'{alter session set events 'sql_trace'}'; end;
2 /
create or replace procedure sys.my_sql_trace as begin execute immediate q'{alter session set events 'sql_trace'}'; end;
*
ERROR at line 1:
ORA-01031: insufficient privileges

No doubt, we can’t sql_trace. I’m not sure it is a big problem because we have all options in the Exadata Express Cloud Service, so we have ASH, SQL Monitor, etc. I’m not saying that sql_trace is not useful anymore (there are cases where you need to see the wait events one by one), but from a development point of view, a plan with execution statistics should be sufficient.
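
To illustrate what I mean by a plan with execution statistics, here is the kind of alternative I have in mind (a sketch; the query is just a placeholder):

alter session set statistics_level=all;
-- run the statement you want to analyze (placeholder here), then:
select * from dual;
select * from table(dbms_xplan.display_cursor(format=>'ALLSTATS LAST'));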

10053

Tracing the CBO is a different thing. There is no alternative when you want to understand the choices of the optimizer. It is not something I use every day, but there are some cases where it is the only way to troubleshoot.

SQL> alter session set events 'trace [SQL_Optimizer.*]';
ERROR:
ORA-01031: insufficient privileges

The ALTER SESSION SET EVENTS statement is blocked by the S20 lockdown profile. But there is another way.

I’ve run the following query:

SQL> select * from dual;
X
 
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID a5ks9fhw2v9s1, child number 0
-------------------------------------
select * from dual
 
Plan hash value: 272002086
 
----------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS STORAGE FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------
 
13 rows selected.

Now I want more information about the execution plan.


SQL> exec dbms_sqldiag.dump_trace('a5ks9fhw2v9s1', 0, 'Compiler','Compiler');
PL/SQL procedure successfully completed.

Cool. It seems I’m allowed to do that. This procedure sets the SQL Compiler event, but it seems that it is allowed from the procedure.

I can now get the trace:


SQL> select tracefile from v$process where addr=(select paddr from v$session where sid=sys_context('userenv','sid'));
/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc
 
SQL> spool last.trc
SQL> select payload from V$DIAG_TRACE_FILE_CONTENTS
2 where adr_home=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\1')
3 and trace_filename=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\2')
4 ;
old 2: where adr_home=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\1')
new 2: where adr_home=regexp_replace('/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc','^(.*)/trace/([^/]*)','\1')
old 3: and trace_filename=regexp_replace('&tracefile','^(.*)/trace/([^/]*)','\2')
new 3: and trace_filename=regexp_replace('/u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc','^(.*)/trace/([^/]*)','\2')
Trace file /u02/app/oracle/diag/rdbms/cfcdba1/cfcdba1_1/trace/cfcdba1_1_ora_24389_Compiler.trc

This outputs the full 10053 trace:


Oracle Database 12c Enterprise Edition Release 12.2.0.0.3 - 64bit Production
Build label: RDBMS_12.2.0.0.3_LINUX.X64_160720
ORACLE_HOME: /u01/app/oracle/product/12.2.0.0.3/dbhome_1
System name: Linux
Node name: cfcldx0171.usdc2.oraclecloud.com
Release: 2.6.39-400.264.1.el6uek.x86_64
Version: #1 SMP Wed Aug 26 16:42:25 PDT 2015
Machine: x86_64
Storage: Exadata
Instance name: cfcdba1_1
Redo thread mounted by this instance: 1
Oracle process number: 109
Unix process pid: 24389, image: oracle@cfcldx0171.usdc2.oraclecloud.com
 
*** 2017-02-05T20:42:38.580323+00:00 (EPTDOJVM1KG(47))
 
*** SESSION ID:(2343.25237) 2017-02-05T20:42:38.580349+00:00
*** CLIENT ID:() 2017-02-05T20:42:38.580354+00:00
*** SERVICE NAME:(eptdojvm1kg.usdc2.oraclecloud.com) 2017-02-05T20:42:38.580358+00:00
*** MODULE NAME:(SQL*Plus) 2017-02-05T20:42:38.580363+00:00
*** ACTION NAME:() 2017-02-05T20:42:38.580368+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2017-02-05T20:42:38.580372+00:00
*** CONTAINER ID:(47) 2017-02-05T20:42:38.580377+00:00
 
Enabling tracing for cur#=6 sqlid=bqf9h9bhb6c88 recursive
Parsing cur#=6 sqlid=bqf9h9bhb6c88 len=45
sql=/* SQL Analyze(2343,0) */ select * from dual
 
End parsing of cur#=6 sqlid=bqf9h9bhb6c88
Semantic Analysis cur#=6 sqlid=bqf9h9bhb6c88
OPTIMIZER INFORMATION
 
******************************************
 
----- Current SQL Statement for this session (sql_id=3u37gu3fhx3q1) -----
/* SQL Analyze(2343,0) */ select * from dual
----- PL/SQL Stack -----
----- PL/SQL Call Stack -----
object line object
handle number name
0x3c5cdebc38 161 package body SYS.DBMS_SQLTUNE_INTERNAL.I_PROCESS_SQL_CALLOUT
0x3c5cdebc38 14080 package body SYS.DBMS_SQLTUNE_INTERNAL.I_PROCESS_SQL
0x3e17d89590 1599 package body SYS.DBMS_SQLDIAG.DUMP_TRACE
0x25dda27db0 1 anonymous block
...

Unified Trace Service views

Those views are new in 12.2 and come in variants that expose only part of the files, so that you can grant read access to 10046 (SQL Trace) or 10053 (Optimizer Trace) content only, for all sessions or only for your own session.

GV$DIAG_TRACE_FILE (from x$dbgtflist) and GV$DIAG_TRACE_FILE_CONTENTS (from x$dbgtfview) show all files from the ADR traces
GV$DIAG_APP_TRACE_FILE (from x$dbgatflist) is a subset showing all files containing SQL Trace or Optimizer Trace
GV$DIAG_SQL_TRACE_RECORDS (from x$dbgtfsqlt) is a subset showing all files containing SQL Trace
GV$DIAG_OPT_TRACE_RECORDS (from x$dbgtfoptt) is a subset showing all files containing Optimizer Trace
V$DIAG_SESS_SQL_TRACE_RECORDS (from x$dbgtfssqlt) and V$DIAG_SESS_OPT_TRACE_RECORDS (from x$dbgtfsoptt) are similar, but for your session (and then no GV$ as your session is on one instance only).

A new role, APPLICATION_TRACE_VIEWER, grants SELECT on the views that show only SQL and Optimizer traces.
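
As an illustration, and not something I ran on the service, here is a sketch (the grantee name is just an example, and I assume the session views expose the same PAYLOAD column as V$DIAG_TRACE_FILE_CONTENTS):

grant application_trace_viewer to scott;
-- then, from the developer session (assumption: PAYLOAD column as in V$DIAG_TRACE_FILE_CONTENTS):
select payload from v$diag_sess_sql_trace_records;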

Alert.log

This is not new in 12.2: you can query V$DIAG_ALERT_EXT to see the alert.log content. Of course, from the PDB you will see only PDB related content:

Endian type of dictionary set to little
Autotune of undo retention is turned on.
This instance was first to open pluggable database EPTDOJVM1KG (container=47)
attach called for domid 47 (domuid: 0x7fa9e59a, options: 0x0, pid: 111297)
queued attach broadcast request 0x3e26185d68
[111297] Successfully onlined Undo Tablespace 2.
Undo initialization finished serial:0 start:2570370849 end:2570371107 diff:258 ms (0.3 seconds)
Database Characterset for EPTDOJVM1KG is AL32UTF8
JIT: pid 111297 requesting full stop
detach called for domid 47 (domuid: 0x7fa9e59a, options: 0x0, pid: 111297)
queued detach broadcast request 0x3e26185d18
Autotune of undo retention is turned on.
This instance was first to open pluggable database EPTDOJVM1KG (container=47)
attach called for domid 47 (domuid: 0x7fa9e59a, options: 0x0, pid: 111297)
queued attach broadcast request 0x3e26185cc8
Endian type of dictionary set to little
[111297] Successfully onlined Undo Tablespace 2.
Undo initialization finished serial:0 start:2570382111 end:2570382464 diff:353 ms (0.4 seconds)
Deleting old file#8 from file$
Deleting old file#9 from file$
Deleting old file#10 from file$
Deleting old file#11 from file$
Deleting old file#12 from file$
Deleting old file#14 from file$
Deleting old file#17 from file$
Adding new file#486 to file$(old file#8)
Adding new file#487 to file$(old file#9)
Adding new file#488 to file$(old file#10)
Adding new file#489 to file$(old file#11)
Adding new file#490 to file$(old file#12)
Adding new file#491 to file$(old file#14)
Adding new file#492 to file$(old file#17)
Successfully created internal service eptdojvm1kg.usdc2.oraclecloud.com at open
Database Characterset for EPTDOJVM1KG is AL32UTF8
Opatch validation is skipped for PDB EPTDOJVM1KG (con_id=0)
Opening pdb with no Resource Manager plan active
Creating new database key for new master key and wallet
Creating new database key with the new master key
Retiring: ena 2 flag 6 mkloc 0
encrypted key 32ce589b50b3105f9419d0a86ce8e7f400000000000000000000000000000000
mkid cbcd33d929f64fd0bf1367f6e69f2dd5
Creating: ena 2 flag e mkloc 1
encrypted key baf27c1e11623fe9837f678df2e48f8a00000000000000000000000000000000
mkid 2d99ba8694054fd5bf41f998513e1d4c
New database key and new master key created successfully
create temporary tablespace "EPTDOJVM1KG_TEMP" tempfile size 20M extent management local uniform size 16M
Force tablespace EPTDOJVM1KG_TEMP to be encrypted with AES128
Completed: create temporary tablespace "EPTDOJVM1KG_TEMP" tempfile size 20M extent management local uniform size 16M
create BIGFILE tablespace "EPTDOJVM1KG_DATA" datafile size 10485760 autoextend on next 1M maxsize UNLIMITED extent management local autoallocate segment space management auto
Force tablespace EPTDOJVM1KG_DATA to be encrypted with AES128
Completed: create BIGFILE tablespace "EPTDOJVM1KG_DATA" datafile size 10485760 autoextend on next 1M maxsize UNLIMITED extent management local autoallocate segment space management auto
Resize operation completed for file# 486, old size 307200K, new size 317440K
ALTER SYSTEM SET pdb_lockdown='S20' SCOPE=BOTH PDB='EPTDOJVM1KG';
...
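
For reference, the kind of query behind this output is a simple select on the view, for example (a sketch; the time filter is just there to limit the output):

select message_text
from v$diag_alert_ext
where originating_timestamp > sysdate - 1
order by originating_timestamp;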

So what?

We can access SQL Trace and Optimizer trace content, and this is very nice. Access to traces has always been a problem because the files are on the server, may contain sensitive data, etc. Having views to give access easily and in a controlled way is a very nice 12.2 feature.
I don’t know if enabling Optimizer trace on Exadata Express Cloud Service is expected, or just something that was forgotten by the lockdown profile. I hope the first hypothesis is the right one and I hope that we will be allowed to enable SQL Trace as well. This service is for developers, and I’m a big advocate of giving all tools to developers so that performance is addressed before production.

 


Oracle – RMAN Backups to CIFS

Mon, 2017-02-06 08:26

Not too often, but sometimes you need to offload your RMAN backups and your DataPump exports to CIFS. You might run out of space on your NFS server, or you are a Windows shop that has only CIFS shares, and then you might need to put your RMAN backups on CIFS.

One very important MOS Note regarding RMAN backups to CIFS is the following:

Is use of CIFS Protocol for RMAN backups supported? (Doc ID 444809.1)

Running an RDBMS on CIFS is clearly not supported because CIFS does not guarantee atomic writes of 512-byte blocks. On the other hand, RMAN does not rely on 512-byte atomic writes, so it is fine to use RMAN with CIFS.

However, CIFS is not certified by Oracle and the RDBMS development team cannot be involved in case there are any issues. The MOS note was last updated in November 2013, so it might not be a bad idea to double check it with Oracle again.

In earlier OS and DB versions, there used to be issues with filesystemio_options SETALL, DIRECTIO or ASYNCH, so I wanted to double check if this is still the case with more recent releases.

I have done the tests with Oracle 11.2.0.4 and 12.1.0.2 on OEL 6.8 with Samba 3.6.

[root@oel001 ~]# cat /etc/oracle-release
Oracle Linux Server release 6.8
[root@oel001 ~]# uname -a
Linux oel001 4.1.12-61.1.25.el6uek.x86_64 #2 SMP Wed Jan 11 19:13:03 PST 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@oel001 ~]# rpm -qa | grep samba-3.6.23
samba-3.6.23-36.0.1.el6_8.x86_64

I start with filesystemio_options SETALL and run RMAN and DataPump with 16 parallel sessions, to make sure that I hit the CIFS share quite hard. Before we start the test, we have to mount the CIFS share. All mount options are documented at the following link.

https://www.samba.org/samba/docs/man/manpages-3/mount.cifs.8.html

For my case, the most important CIFS mount options are:

user = M$ User to connect to the CIFS share
rw = mount read write
uid = sets the uid that will own all files or directories on the mounted filesystem
gid = sets the gid that will own all files or directories on the mounted filesystem
rsize = default network read size
wsize = default network write size

Regarding rsize/wsize, I have used the values I usually take for NFS mounts.

[root@oel001 ~]# mount -t cifs //10.0.2.2/rman /rman -o rsize=32768,wsize=32768,rw,user=wrs,uid=oracle,gid=dba --verbose
Password:
mount.cifs kernel mount options: ip=10.0.2.2,unc=\\10.0.2.2\rman,rsize=32768,wsize=32768,uid=54321,gid=54322,ver=1,user=wrs,pass=********

[root@oel001 ~]# mount | grep rman
//10.0.2.2/rman on /rman type cifs (rw)

Before running any RMAN backups to it, make sure that you can create files with the oracle user. If it is failing at this step already, then you can stop here. This has to work before continuing any further.

oracle@oel001:/home/oracle/ [OCM121] cd /rman/backup/
oracle@oel001:/rman/backup/ [OCM121] touch new_file
oracle@oel001:/rman/backup/ [OCM121] ls -l new_file
-rwxr-xr-x 1 oracle dba 0 Feb  6 09:22 new_file
oracle@oel001:/rman/backup/ [OCM121] rm new_file

Ok. Let’s start with 11.2.0.4 first. The database has filesystemio_options=SETALL, meaning that the database uses asynchronous I/O and direct I/O at the same time.

My RMAN backup tests will be a regular backup with 16 parallel sessions to that share, and a backup of the Fast Recovery Area to the same share as well.

CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO COMPRESSED BACKUPSET;
BACKUP DATABASE FORMAT '/rman/backup/%U';
BACKUP RECOVERY AREA TO DESTINATION '/rman/backup';

My DataPump test will be a full export with 16 parallel sessions.

expdp system/manager directory=rman dumpfile=rman%U.dmp parallel=16 full=y

ok. Let’s start doing the first backup.

oracle@oel001:/home/oracle/ [OCM112] rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Mon Feb 6 13:31:37 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: OCM112 (DBID=177109848)

RMAN> backup database format '/rman/backup/%U';

Starting backup at 06-FEB-2017 13:31:45
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=128 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=249 device type=DISK
allocated channel: ORA_DISK_3
channel ORA_DISK_3: SID=366 device type=DISK
allocated channel: ORA_DISK_4
channel ORA_DISK_4: SID=12 device type=DISK
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=129 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=250 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=360 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=14 device type=DISK
allocated channel: ORA_DISK_9
channel ORA_DISK_9: SID=130 device type=DISK
allocated channel: ORA_DISK_10
channel ORA_DISK_10: SID=251 device type=DISK
allocated channel: ORA_DISK_11
channel ORA_DISK_11: SID=367 device type=DISK
allocated channel: ORA_DISK_12
channel ORA_DISK_12: SID=15 device type=DISK
allocated channel: ORA_DISK_13
channel ORA_DISK_13: SID=131 device type=DISK
allocated channel: ORA_DISK_14
channel ORA_DISK_14: SID=252 device type=DISK
allocated channel: ORA_DISK_15
channel ORA_DISK_15: SID=368 device type=DISK
allocated channel: ORA_DISK_16
channel ORA_DISK_16: SID=16 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA11G/ocm112/datafile/system.261.927382121
channel ORA_DISK_1: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_2: starting compressed full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00003 name=+DATA11G/ocm112/datafile/undotbs1.263.927382145
channel ORA_DISK_2: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_3: starting compressed full datafile backup set
channel ORA_DISK_3: specifying datafile(s) in backup set
input datafile file number=00002 name=+DATA11G/ocm112/datafile/sysaux.262.927382135
channel ORA_DISK_3: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_4: starting compressed full datafile backup set
channel ORA_DISK_4: specifying datafile(s) in backup set
input datafile file number=00004 name=+DATA11G/ocm112/datafile/example.265.927382167
channel ORA_DISK_4: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_5: starting compressed full datafile backup set
channel ORA_DISK_5: specifying datafile(s) in backup set
input datafile file number=00005 name=+DATA11G/ocm112/datafile/users.266.927382177
channel ORA_DISK_5: starting piece 1 at 06-FEB-2017 13:31:48
channel ORA_DISK_5: finished piece 1 at 06-FEB-2017 13:32:03
piece handle=/rman/backup/0krrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_5: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_3: finished piece 1 at 06-FEB-2017 13:32:13
piece handle=/rman/backup/0irrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_4: finished piece 1 at 06-FEB-2017 13:32:13
piece handle=/rman/backup/0jrrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_4: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: finished piece 1 at 06-FEB-2017 13:32:23
piece handle=/rman/backup/0grrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_2: finished piece 1 at 06-FEB-2017 13:32:23
piece handle=/rman/backup/0hrrtai4_1_1 tag=TAG20170206T133147 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:35
Finished backup at 06-FEB-2017 13:32:23

Starting Control File and SPFILE Autobackup at 06-FEB-2017 13:32:23
piece handle=+FRA11G/ocm112/autobackup/2017_02_06/s_935242343.266.935242347 comment=NONE
Finished Control File and SPFILE Autobackup at 06-FEB-2017 13:32:30

RMAN>

Cool .. it seems to work perfectly. Ok. Let’s do the backup of the FRA to CIFS as well.

RMAN> BACKUP RECOVERY AREA TO DESTINATION '/rman/backup';

Starting backup at 06-FEB-2017 14:48:35
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
using channel ORA_DISK_5
using channel ORA_DISK_6
using channel ORA_DISK_7
using channel ORA_DISK_8
using channel ORA_DISK_9
using channel ORA_DISK_10
using channel ORA_DISK_11
using channel ORA_DISK_12
using channel ORA_DISK_13
using channel ORA_DISK_14
using channel ORA_DISK_15
using channel ORA_DISK_16
specification does not match any datafile copy in the repository
specification does not match any backup set in the repository
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=77 RECID=3 STAMP=935241746
channel ORA_DISK_1: starting piece 1 at 06-FEB-2017 14:48:36
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=78 RECID=4 STAMP=935241790
channel ORA_DISK_2: starting piece 1 at 06-FEB-2017 14:48:36
channel ORA_DISK_3: starting compressed archived log backup set
channel ORA_DISK_3: specifying archived log(s) in backup set
input archived log thread=1 sequence=79 RECID=5 STAMP=935242521
channel ORA_DISK_3: starting piece 1 at 06-FEB-2017 14:48:36
channel ORA_DISK_4: starting compressed archived log backup set
channel ORA_DISK_4: specifying archived log(s) in backup set
input archived log thread=1 sequence=80 RECID=6 STAMP=935242549
channel ORA_DISK_4: starting piece 1 at 06-FEB-2017 14:48:37
channel ORA_DISK_5: starting compressed archived log backup set
channel ORA_DISK_5: specifying archived log(s) in backup set
input archived log thread=1 sequence=81 RECID=7 STAMP=935242639
channel ORA_DISK_5: starting piece 1 at 06-FEB-2017 14:48:37
channel ORA_DISK_6: starting compressed archived log backup set
channel ORA_DISK_6: specifying archived log(s) in backup set
input archived log thread=1 sequence=82 RECID=8 STAMP=935242724
channel ORA_DISK_6: starting piece 1 at 06-FEB-2017 14:48:38
channel ORA_DISK_1: finished piece 1 at 06-FEB-2017 14:48:38
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzond8_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_2: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzondy_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_3: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzonqf_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_3: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_4: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzoop8_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_4: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_5: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzooxq_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_5: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_6: finished piece 1 at 06-FEB-2017 14:48:39
piece handle=/rman/backup/OCM112/backupset/2017_02_06/o1_mf_annnn_TAG20170206T144835_d9jzopqx_.bkp tag=TAG20170206T144835 comment=NONE
channel ORA_DISK_6: backup set complete, elapsed time: 00:00:01
Finished backup at 06-FEB-2017 14:48:39

Starting Control File and SPFILE Autobackup at 06-FEB-2017 14:48:39
piece handle=+FRA11G/ocm112/autobackup/2017_02_06/s_935246919.286.935246921 comment=NONE
Finished Control File and SPFILE Autobackup at 06-FEB-2017 14:48:46

RMAN>

That worked very well too. Now let’s do the DataPump test.

SQL> connect system/manager
Connected.

SQL> create directory rman as '/rman/backup';

Directory created.

oracle@oel001:/home/oracle/ [OCM112] expdp system/manager directory=rman dumpfile=rman%U.dmp parallel=16 full=y

Export: Release 11.2.0.4.0 - Production on Mon Feb 6 13:33:41 2017

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Automatic Storage Management, Oracle Label Security, OLAP,
Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01":  system/******** directory=rman dumpfile=rman%U.dmp parallel=16 full=y
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 351.8 MB
. . exported "SH"."COSTS":"COSTS_Q1_1999"                183.5 KB    5884 rows
. . exported "SH"."COSTS":"COSTS_Q2_2000"                119.0 KB    3715 rows
. . exported "SH"."COSTS":"COSTS_Q1_2001"                227.8 KB    7328 rows
. . exported "SH"."COSTS":"COSTS_Q1_2000"                120.6 KB    3772 rows
. . exported "SH"."CUSTOMERS"                            9.853 MB   55500 rows
. . exported "SH"."COSTS":"COSTS_Q3_1998"                131.1 KB    4129 rows
. . exported "SH"."COSTS":"COSTS_Q2_1998"                79.52 KB    2397 rows
. . exported "SH"."COSTS":"COSTS_Q2_1999"                132.5 KB    4179 rows
. . exported "SH"."COSTS":"COSTS_Q1_1998"                139.5 KB    4411 rows
. . exported "SH"."COSTS":"COSTS_Q3_2000"                151.4 KB    4798 rows
. . exported "SH"."COSTS":"COSTS_Q3_2001"                234.4 KB    7545 rows
...
...
...
Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/INC_TYPE
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
. . exported "ORDDATA"."ORDDCM_MAPPING_DOCS"             7.890 KB       1 rows
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/XMLSCHEMA/XMLSCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/PRE_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/BITMAP_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/DOMAIN_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_TABLE_ACTION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/EVENT/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/MATERIALIZED_VIEW
Processing object type DATABASE_EXPORT/SCHEMA/JOB
Processing object type DATABASE_EXPORT/SCHEMA/DIMENSION
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCACT_INSTANCE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/POST_INSTANCE/PROCDEPOBJ
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/AUDIT
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
  /rman/backup/rman01.dmp
  /rman/backup/rman02.dmp
  /rman/backup/rman03.dmp
  /rman/backup/rman04.dmp
  /rman/backup/rman05.dmp
  /rman/backup/rman06.dmp
  /rman/backup/rman07.dmp
  /rman/backup/rman08.dmp
  /rman/backup/rman09.dmp
  /rman/backup/rman10.dmp
  /rman/backup/rman11.dmp
  /rman/backup/rman12.dmp
  /rman/backup/rman13.dmp
  /rman/backup/rman14.dmp
  /rman/backup/rman15.dmp
  /rman/backup/rman16.dmp

Job "SYSTEM"."SYS_EXPORT_FULL_01" completed at Mon Feb 6 13:37:32 2017 elapsed 0 00:03:44

 

Cool. That also worked perfectly.

I have repeated the same test with 12.1.0.2, also with no issues. In earlier releases I had issues with filesystemio_options=SETALL, where I had to disable direct I/O, and sometimes both direct and asynchronous I/O. But this is not the case anymore with more recent OEL and Samba releases.

Conclusion

I would not use RMAN backups to CIFS as a permanent solution, but if you are running out of space on your NFS share, or maybe for migrations, it is a good alternative.

 

 


Oracle Public Cloud 12cR2: TDE is not an option

Sun, 2017-02-05 12:52

In the Oracle Public Cloud, Transparent Data Encryption is not an option. You can use it because it is included in all database services for all editions. You have to use it because the database won’t allow you to create, or import, non-encrypted tablespaces. This is controlled by a new parameter, encrypt_new_tablespaces, which defaults to CLOUD_ONLY.

encrypt_new_tablespaces

In previous versions, we had to add the ENCRYPT clause explicitly in the CREATE TABLESPACE statement.
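
For reference, the explicit syntax looks like this (a sketch; the tablespace name and key length are just examples):

create tablespace users_enc
datafile size 100M
encryption using 'AES256'    -- 'users_enc' and AES256 are just examples
default storage (encrypt);
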
Here we don’t need to add this clause in the DDL when we are on the Oracle Public Cloud, because the default is:
SQL> show parameter encrypt
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces string CLOUD_ONLY

This parameter can take the following values:

SQL> alter system set encrypt_new_tablespaces=tictactoe;
alter system set encrypt_new_tablespaces=tictactoe
*
ERROR at line 1:
ORA-00096: invalid value TICTACTOE for parameter encrypt_new_tablespaces, must be from among DDL, ALWAYS, CLOUD_ONLY

The CLOUD_ONLY value will automatically encrypt new tablespaces when you are on the Oracle Public Cloud only.
The ALWAYS value will always encrypt new tablespaces, which is good if you have the Advanced Security Option and want to ensure that all data is encrypted (see the sketch below).
The DDL value will never encrypt new tablespaces automatically, except when it is explicitly requested in the CREATE TABLESPACE statement. This is similar to the behavior before this parameter was introduced.
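
For example, on-premises with the Advanced Security Option, you could force encryption of every new tablespace like this (a sketch; the tablespace name is just an example):

alter system set encrypt_new_tablespaces=always scope=both;
create tablespace app_data datafile size 100M;   -- 'app_data' is just an example
select tablespace_name, encrypted from dba_tablespaces where tablespace_name='APP_DATA';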

encrypt_new_tablespaces=DDL

If you are not on the Oracle Public Cloud, CLOUD_ONLY, the default, behaves like DDL. If you are on the Oracle Public Cloud, CLOUD_ONLY behaves like ALWAYS.

You may think that you can bypass the obligation to encrypt just by setting encrypt_new_tablespaces=DDL, but it’s not a good idea. Let’s test it.

I’ve set encrypt_new_tablespaces=ddl in an init.ora and I’ve created a database manually (CREATE DATABASE, catalog, catproc) and it works.

In 12.2 you can encrypt SYSTEM, SYSAUX and UNDO, but this is not mandatory. However, I have created a user tablespace, USERS, without any problem thanks to encrypt_new_tablespaces=DDL:


SQL> show parameter encrypt
 
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
encrypt_new_tablespaces string ddl
 
SQL> select TABLESPACE_NAME,ENCRYPTED from dba_tablespaces;
 
TABLESPACE_NAME ENC
------------------------------ ---
SYSTEM NO
SYSAUX NO
SYS_UNDOTS NO
UNDOTBS1 NO
USERS NO

Here I’m able to work without any problem… except when I have to restart the instance.

Startup


SQL> startup
ORACLE instance started.
 
Total System Global Area 2214592512 bytes
Fixed Size 8622968 bytes
Variable Size 1090522248 bytes
Database Buffers 1107296256 bytes
Redo Buffers 8151040 bytes
Database mounted.
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-28427: cannot create, import or restore unencrypted tablespace: USERS in Oracle Cloud
Process ID: 11378
Session ID: 4 Serial number: 14394

It is impossible to open a database with unencrypted user tablespaces in the Oracle Public Cloud, whatever encrypt_new_tablespaces is set to. I can only startup mount, and there’s nothing to do at that point.

Here are the alert.log related entries:

Database Characterset is US7ASCII
No Resource Manager plan active
Verifying all user tablespaces in pdb 0 are encrypted in Oracle Cloud..
Found unencrypted tablespace USERS (pdb 0). Encrypted tablespace is mandatory in Oracle Cloud.
Errors in file /u01/app/oracle/diag/rdbms/test/TEST/trace/TEST_ora_11378.trc:
ORA-28427: cannot create, import or restore unencrypted tablespace: USERS in Oracle Cloud
Errors in file /u01/app/oracle/diag/rdbms/test/TEST/trace/TEST_ora_11378.trc:
ORA-28427: cannot create, import or restore unencrypted tablespace: USERS in Oracle Cloud
Error 28427 happened during db open, shutting down database
Errors in file /u01/app/oracle/diag/rdbms/test/TEST/trace/TEST_ora_11378.trc (incident=128065):
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-28427: cannot create, import or restore unencrypted tablespace: USERS in Oracle Cloud
Incident details in: /u01/app/oracle/diag/rdbms/test/TEST/incident/incdir_128065/TEST_ora_11378_i128065.trc
2017-02-04 22:25:26.612000 +00:00
opiodr aborting process unknown ospid (11378) as a result of ORA-603
ORA-603 : opitsk aborting process
License high water mark = 1
USER (ospid: 11378): terminating the instance due to error 28427
2017-02-04 22:25:27.615000 +00:00
PSP0 (ospid: 11293): terminating the instance due to error 28427
PMON (ospid: 11289): terminating the instance due to error 28427
2017-02-04 22:25:28.621000 +00:00
Instance terminated by PMON, pid = 11289

You don’t find those “Verifying all user tablespaces in pdb 0 are encrypted in Oracle Cloud..” messages when you are not in the Oracle Public Cloud.
But in the Oracle Public Cloud, the instance is forced to stop as soon as an unencrypted tablespace is found.

Fake it

What I’ll do now is only for academic purposes, to understand what happens and, maybe, to troubleshoot if you have created unencrypted tablespaces. But it is not documented, and not supported.

From a cloud instance, you can get metadata about your instance by getting attributes from http://192.0.0.192 (Oracle Cloud uses Nimbula).
When the Oracle Database instance starts, it reads the dns domain from http://192.0.0.192/latest/attributes/dns/domain:

[oracle@DBI122 ~]$ curl http://192.0.0.192/latest/attributes/dns/domain
compute-franck.oraclecloud.internal.[oracle@DBI122 ~]$

I suppose that the detection of Oracle Cloud is done from that. If you are not in the Oracle Cloud, you will get no answer from http://192.0.0.192, so let’s simulate that by blocking this IP address:

[root@DBI122 opc]# iptables -A OUTPUT -d 192.0.0.192 -j REJECT
 
[root@DBI122 opc]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
 
Chain FORWARD (policy ACCEPT)
target prot opt source destination
 
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere 192.0.0.192 reject-with icmp-port-unreachable

Now back to my database, I can start it without any problem:

SQL> startup
ORACLE instance started.
 
Total System Global Area 2214592512 bytes
Fixed Size 8622968 bytes
Variable Size 1090522248 bytes
Database Buffers 1107296256 bytes
Redo Buffers 8151040 bytes
Database mounted.
Database opened.
 
SQL> select TABLESPACE_NAME,ENCRYPTED from dba_tablespaces;
 
TABLESPACE_NAME ENC
------------------------------ ---
SYSTEM NO
SYSAUX NO
SYS_UNDOTS NO
UNDOTBS1 NO
IOPS NO

In the alert.log I have nothing about checking encrypted tablespaces:

No Resource Manager plan active
2017-02-04 22:31:34.510000 +00:00
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process AQPC

So what?

encrypt_new_tablespaces=DDL is not a way to avoid encryption in the Oracle Public Cloud, except temporarily for a test on a database that you create for that purpose and never restart once you have created user tablespaces. There are still some bugs and issues with TDE (when importing from non-TDE, when using local undo, …) so it is good to know that there can be a workaround. But remember, this is not supported. If you need this, please contact My Oracle Support.

 


CBO parse time – matching time with transformation

Sat, 2017-02-04 14:36

By a strange coincidence, I’ve encountered several cases of long parse time recently. That was the occasion to take a closer look at a CBO trace enhancement that came with the April 2016 PSU: Bug 16923858 Diagnostic enhancement to annotate 10053 trace with accumulated parse time
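
As a reminder, the SQL_Compiler trace can be enabled with the unified tracing event syntax, the same family as the SQL_Optimizer event shown in the Exadata Express post above (a sketch; I believe the 'off' form disables it again):

alter session set events 'trace [SQL_Compiler.*]';
-- hard parse the statement you want to analyze, then:
alter session set events 'trace [SQL_Compiler.*] off';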

I have a query that takes several seconds to parse and here is the new parse time information when you trace SQL_Compiler (the 10053 trace):
$ grep TIMER db003_ora_22231_orig.trc
TIMER: costing additional plans cpu: 1.807 sec elapsed: 2.019 sec
TIMER: costing general plans cpu: 26.131 sec elapsed: 27.747 sec
TIMER: costing additional plans cpu: 8.995 sec elapsed: 9.329 sec
TIMER: SU: costing SEL$A2089C65 cpu: 38.313 sec elapsed: 40.506 sec
TIMER: costing additional plans cpu: 1.523 sec elapsed: 1.850 sec
TIMER: SU: costing Interleaved CVM SEL$488B5694 cpu: 4.255 sec elapsed: 4.807 sec
TIMER: SU: Interleaved CVM SEL$A2089C65 cpu: 4.293 sec elapsed: 4.846 sec
TIMER: i costing additional plans cpu: 1.742 sec elapsed: 1.884 sec
TIMER: i costing general plans cpu: 21.343 sec elapsed: 21.843 sec
TIMER: i costing additional plans cpu: 8.072 sec elapsed: 8.375 sec
TIMER: i JPPD: costing SEL$A2089C65 cpu: 32.798 sec elapsed: 33.838 sec
TIMER: i JPPD: iteration (#1) SEL$A2089C65 cpu: 32.822 sec elapsed: 33.863 sec
TIMER: i costing general plans cpu: 22.088 sec elapsed: 22.751 sec
TIMER: i costing additional plans cpu: 5.484 sec elapsed: 5.745 sec
TIMER: i JPPD: costing SEL$A2089C65 cpu: 30.181 sec elapsed: 31.113 sec
TIMER: i JPPD: iteration (#2) SEL$A2089C65 cpu: 30.197 sec elapsed: 31.129 sec
TIMER: SU: Interleaved JPPD SEL$A2089C65 cpu: 63.026 sec elapsed: 64.999 sec
TIMER: SU: iteration (#1) SEL$A2089C65 cpu: 105.656 sec elapsed: 110.375 sec
TIMER: SU: costing SEL$EB7B6E47 cpu: 3.749 sec elapsed: 4.025 sec
TIMER: SU: iteration (#2) SEL$EB7B6E47 cpu: 3.762 sec elapsed: 4.038 sec
TIMER: CBQT SU and CVM SEL$1 cpu: 109.512 sec elapsed: 114.507 sec
TIMER: Cost-Based Transformations (Overall) SEL$62E0E936 cpu: 110.118 sec elapsed: 115.112 sec
TIMER: costing additional plans cpu: 1.247 sec elapsed: 1.485 sec
TIMER: Access Path Analysis (Final) SEL$62E0E936 cpu: 2.227 sec elapsed: 2.227 sec
TIMER: SQL Optimization (Overall) SEL$62E0E936 cpu: 114.674 sec elapsed: 119.906 sec

The default is to have a line when an operation takes more than 1 second. You can change that with the fix control:

This will trace all times > 10 microseconds:
alter session set "_fix_control"='16923858:1';

This will trace all times > 10 seconds:
alter session set "_fix_control"='16923858:7';

You get it: each level increases the threshold to the next order of magnitude, and the default is 6 (1 second).
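
You can check the level currently in effect for your session in V$SESSION_FIX_CONTROL (a quick sketch):

select bugno, value, description
from v$session_fix_control
where bugno = 16923858
and session_id = sys_context('userenv','sid');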

I have a small awk script to display the time in a better format:


cpu(ms) ela(ms)
42138.000 42159.000 costing general plans 12.1.0.2 640468
6416.000 6418.000 costing additional plans 12.1.0.2 719893
1509.000 1509.000 costing general plans 12.1.0.2 753749
2377.000 2377.000 or expansion check 12.1.0.2 763107
52698.000 52721.000 SU: costing SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10) 12.1.0.2 763120
2035.000 2035.000 costing general plans 12.1.0.2 842348
2642.000 2642.000 or expansion check 12.1.0.2 849927
4920.000 4921.000 SU: costing Interleaved CVM SEL$959A6A10 (VIEW MERGE SEL$5061678E; SEL$CAA3F13B) 12.1.0.2 849940
4941.000 4942.000 SU: Interleaved CVM SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10) 12.1.0.2 849942
42556.000 42568.000 i costing general plans 12.1.0.2 1482291
6348.000 6348.000 i costing additional plans 12.1.0.2 1561716
1653.000 1653.000 i costing general plans 12.1.0.2 1595574
2487.000 2488.000 i or expansion check 12.1.0.2 1604932
52924.000 52938.000 i JPPD: costing SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 1604945
52933.000 52947.000 i JPPD: iteration (#1) SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 1604956
1116.000 1116.000 i costing additional plans 12.1.0.2 1630694
33570.000 33576.000 i costing general plans 12.1.0.2 2098660
10614.000 10622.000 i costing additional plans 12.1.0.2 2211397
2092.000 2093.000 i costing general plans 12.1.0.2 2246743
1396.000 1397.000 i costing additional plans 12.1.0.2 2264252
3778.000 3780.000 i or expansion check 12.1.0.2 2265816
50741.000 50759.000 i JPPD: costing SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 2265829
50752.000 50769.000 i JPPD: iteration (#2) SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 2265839
103688.000 103719.000 SU: Interleaved JPPD SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 2265844
161339.000 161394.000 SU: iteration (#1) SEL$5061678E (SUBQUERY UNNEST SEL$93416A64; SEL$10)(COPY SEL$5061678E) 12.1.0.2 2265846
3250.000 3252.000 SU: costing SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2307338
3262.000 3264.000 SU: iteration (#2) SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2307349
164632.000 164689.000 CBQT SU and CVM SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2307498
164827.000 164884.000 Cost-Based Transformations (Overall) SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2307946
2577.000 2578.000 costing general plans 12.1.0.2 2387060
3204.000 3204.000 or expansion check 12.1.0.2 2393273
4597.000 4598.000 Access Path Analysis (Final) SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2393703
170872.000 170931.000 SQL Optimization (Overall) SEL$1 (PARSER)(COPY SEL$1) 12.1.0.2 2393730

Note that I’m more interested in the CPU time because the elapsed time includes the time spent writing to the trace.
I’m reading a 2-million-line 10053 trace file here. The CBO takes time, but it is doing a lot of work here.

My awk script also records the maximum time spent on each query block and then displays the ‘Query Block Registry’ with those times. This is a good way to understand which CBO transformation is responsible for most of the parse time:


cpu(ms) ela(ms) Query Block Registry:
0.000 0.000 SEL$10 0x897e9b8 (PARSER) [FINAL]
161339.000 161394.000 SEL$5061678E 0x0 (SUBQUERY UNNEST SEL$93416A64; SEL$10)
0.000 0.000 SEL$6F0423A9 0x0 (PUSHED PREDICATE SEL$CAA3F13B; SEL$5061678E; "VW_SQ_3"@"SEL$93416A64" 28)
0.000 0.000 SEL$959A6A10 0x0 (VIEW MERGE SEL$5061678E; SEL$CAA3F13B)
0.000 0.000 SEL$1DF60DFD 0x0 (QUERY BLOCK TABLES CHANGED SEL$10)
0.000 0.000 SEL$CAA3F13B 0x0 (SUBQ INTO VIEW FOR COMPLEX UNNEST SEL$1DF60DFD)
0.000 0.000 SEL$6F0423A9 0x0 (PUSHED PREDICATE SEL$CAA3F13B; SEL$5061678E; "VW_SQ_3"@"SEL$93416A64" 28)
0.000 0.000 SEL$959A6A10 0x0 (VIEW MERGE SEL$5061678E; SEL$CAA3F13B)
0.000 0.000 SEL$9 0x8987df0 (PARSER) [FINAL]
0.000 0.000 SEL$8 0x8988ee8 (PARSER) [FINAL]
0.000 0.000 SEL$7 0x898a4f8 (PARSER) [FINAL]
0.000 0.000 SEL$6 0x898c028 (PARSER) [FINAL]
0.000 0.000 SEL$5 0x898d268 (PARSER) [FINAL]
0.000 0.000 SEL$4 0x898eaf8 (PARSER) [FINAL]
0.000 0.000 SEL$3 0x89912b8 (PARSER) [FINAL]
0.000 0.000 SEL$2 0x89923c0 (PARSER) [FINAL]
170872.000 170931.000 SEL$1 0x89a3ef0 (PARSER) [FINAL]
0.000 0.000 SEL$93416A64 0x0 (VIEW ADDED SEL$1)
161339.000 161394.000 SEL$5061678E 0x0 (SUBQUERY UNNEST SEL$93416A64; SEL$10)
0.000 0.000 ...

The interesting point here is that [FINAL] marks the transformation that was chosen by the optimizer, so we know that the time was spent on a query block that was finally chosen for the best plan.

Before going further, here is my ugly awk script that gives the above output from a 10053 trace file:


BEGIN{
# print the header
print ; print " cpu(ms) ela(ms)"
}
/^ *optimizer_features_enable += +/{
# remember the optimizer_features_enable value in effect and reset the query block info
ofe=$3 ; sub(/^[8-9][.]/,"0&",ofe)
delete qb
}
/TIMER/{
# l = the operation label; after the substitutions $1 is the cpu time and $4 the elapsed time
l=$0 ; sub(/ cpu: .*$/,"",l) ; sub(/TIMER: /,"",l)
sub(/^.* cpu: /,"") # --> $1 is cpu $4 is ela
# q = the last word of the label, kept only when it matches a query block already recorded in qb[]
q=gensub(/^.* /,"","g",l);
if ( qb[q] == "" ) { q="" } else { sub(/ [^ ]*$/,"",l) }
# record the maximum cpu/elapsed time seen for each query block
if ( q!="" && $1*1e3 > cpu[q] ) cpu[q]=$1*1e3
if ( q!="" && $4*1e3 > ela[q] ) ela[q]=$4*1e3
printf "%10.3f %10.3f\t%-40s %20s %-60s\t%6s %s\n",$1*1e3,$4*1e3,l,q,qb[q],ofe,NR
if ( q != "") versions[l" "q]=versions[l" "q] ofe"="($1*1e3)"ms\n"
}
/Query Block Registry/,/^:/{
# annotate the Query Block Registry section: $1 is the query block name in the first column
if ( $0 ~/Query Block Registry/ ) { ; print "" ; print " cpu(ms) ela(ms) "$0 ; print "" }
else { printf "%10.3f %10.3f\t%s\n",cpu[$1],ela[$1],$0 }
}

So, I know that the unnesting of subquery SEL$10 is responsible for my long parse time. The ‘+alias’ format of the explain plan is an easy way to find which subquery it comes from:

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
64 - SEL$10
65 - SEL$10 / TABLE_WITH_DATES@SEL$10
66 - SEL$10 / TABLE_WITH_DATES@SEL$10

Here is the subselect (I redacted the table names).

AND begin_date =
(SELECT MAX (begin_date)
FROM table_with_dates
WHERE some_id = some_id
AND begin_date <= ...
)

So what can I do now? I can avoid the subquery unnesting by adding the NO_UNNEST hint in the subquery, or by adding it to the main query as /*+ NO_UNNEST( @SEL$10) */

This really reduces the parse time, but I have to check that the plan is still acceptable. Actually, we cannot blame the CBO for the time it takes here, because the goal of subquery unnesting (SU) is exactly that: open the way to a lot of new access paths, which can be costed and may give a better execution plan. And an application should not parse too often, so spending a few seconds in parsing is not bad if it helps to execute the query quickly all over the day.

So don’t forget to have the latest PSU and trace with:

alter session set events 'trace [SQL_Compiler.*]';
explain plan for ...
alter session set events 'trace [SQL_Compiler.*] off';

get timer information with:

column value new_value tracefile
select value from v$diag_info where name='Default Trace File';
column value new_value clear
host grep TIMER &tracefile

And find which transformation is responsible for most of the parse time.
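To tie it together, here is a minimal sketch of the whole procedure as a shell script (the connect string, the test statement and the awk script name cbotimer.awk are assumptions for illustration; it also assumes you run it on the database server, so that the trace file is readable locally):

sqlplus -s scott/tiger@mydb <<'SQL' > /tmp/default_trace_file.txt
set pagesize 0 feedback off heading off linesize 500
alter session set events 'trace [SQL_Compiler.*]';
explain plan for select * from dual;
alter session set events 'trace [SQL_Compiler.*] off';
select value from v$diag_info where name='Default Trace File';
SQL
# annotate the TIMER lines and the Query Block Registry with the awk script above
awk -f cbotimer.awk "$(tail -1 /tmp/default_trace_file.txt)"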

 

Cet article CBO parse time – matching time with transformation est apparu en premier sur Blog dbi services.

Exadata Express Cloud Service: MAX_PDB_STORAGE

Fri, 2017-02-03 16:06

The separation of roles between system DBA and application DBA is the way to go for agile development and the PDBaaS managed service is an excellent way to play with this concept: You are PDB administrator but not the CDB administrator.
Here is an example about tablespace creation/deletion, and the kind of problems that may arise with this architecture.

The PDB administrator manages all the PDB objects. He must be able to create users and tablespaces in order to deploy an application.

Create tablespace

So yes, in Exadata Express Cloud Service we can create a tablespace and this is what I did:

SQL> create tablespace IOPS datafile size 10G;
Tablespace created.

and you don’t need to specify the datafile location because db_create_file_dest is defined:

SQL> show parameter db_create_file_dest
 
NAME TYPE VALUE
------------------- ------ -----
db_create_file_dest string +DATA

Actually, this is something that I cannot change because it is set by the CDB administrator when creating the PDB:

SQL> alter session set db_create_file_dest='/var/tmp'
 
Error report -
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-01031: insufficient privileges

This is a 12.2 feature. It is impossible to create a file outside of this destination:

create tablespace IOPS datafile '/var/tmp/franck.dbf' size 10G
Error report -
ORA-65250: invalid path specified for file - /var/tmp/franck.dbf

I’ve already written about that 12cR2 feature: http://blog.dbi-services.com/12cr2-create_file_dest-for-pdb-isolation/ and this is a rather nice feature.

MAX_PDB_STORAGE

There’s another limit in PDBaaS: you have a limit on the storage you can use. And in this X20 service, the limit is 20G.

When you have reached the maximum, you get the following error:

SQL> create tablespace ANOTHERONE datafile size 30G;
 
create tablespace ANOTHERONE datafile size 30G
Error report -
ORA-65114: space usage in container is too high
65114. 00000 - "space usage in container is too high"
*Cause: Space usage in the current container exceeded the value of MAX_PDB_STORAGE for the container.
*Action: Specify a higher value for MAX_PDB_STORAGE using the ALTER PLUGGABLE DATABASE statement.

Ok. This limit is visible as a property of my PDB:
SQL> select property_name, property_value from database_properties where property_name ='MAX_PDB_STORAGE';
 
PROPERTY_NAME PROPERTY_VALUE
------------- --------------
MAX_PDB_STORAGE 24771223880

And the current size is visible from V$PDBS:
SQL> select con_id,total_size from v$pdbs;
 
CON_ID TOTAL_SIZE
------ -----------
47 24766742528

Nothing hidden here, this is the sum of my PDB files, datafiles and tempfiles:

SQL> select sum(bytes) from dba_data_files;
SUM(BYTES)
----------
21761523712
 
SQL> select sum(bytes) from dba_temp_files;
SUM(BYTES)
----------
3005218816
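As a convenience, the current usage and the limit can be compared in a single query (a sketch; the connect string is a placeholder for your PDB_ADMIN connection):

sqlplus -s pdb_admin@mypdb <<'SQL'
select (select property_value from database_properties
         where property_name='MAX_PDB_STORAGE') max_pdb_storage,
       (select sum(bytes) from dba_data_files)
     + (select sum(bytes) from dba_temp_files)   total_size
from dual;
SQL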

Of course, I’m not authorized to increase my limit:

SQL> alter pluggable database storage(maxsize 40G);
 
alter pluggable database storage(maxsize 40G)
Error report -
ORA-01031: insufficient privileges
01031. 00000 - "insufficient privileges"
*Cause: An attempt was made to perform a database operation without the necessary privileges.
*Action: Ask your database administrator or designated security administrator to grant you the necessary privileges

And this is once again a very nice feature coming in 12cR2 multitenant.

So… I’ve filled my X20 service. There is a need for some housekeeping. But…

Drop tablespace

I’m allowed to create and drop tablespaces. Let’s drop that IOPS tablespace:


SQL> drop tablespace iops;
 
drop tablespace iops
Error report -
ORA-01031: insufficient privileges
01031. 00000 - "insufficient privileges"
*Cause: An attempt was made to perform a database operation without the necessary privileges.
*Action: Ask your database administrator or designated security administrator to grant you the necessary privileges

With lockdown profiles, you always get the same message: no clue about what is not authorized. I know that there is a DROP_TABLESPACE_KEEP_DATAFILES feature that you can disable with lockdown profiles, and this makes sense when the CDB administrator does not want PDB administrators to leave dead datafiles in the system. Here we are on ASM, with OMF, so the datafiles are automatically dropped. But the lockdown works at the statement syntax level, so we have to mention the clause:

SQL> drop tablespace iops including contents and datafiles;
 
drop tablespace iops including contents and datafiles
Error report -
ORA-38881: Cannot drop tablespace IOPS on primary database due to guaranteed restore points.
38881. 00000 - "Cannot drop tablespace %s on primary database due to guaranteed restore points."
*Cause: An attempt was made to drop a tablespace on a primary database while there are guaranteed restore points. You cannot do this because Flashback database cannot undo dropping of a tablespace.
*Action: Drop all guaranteed restore points first and retry, or delay dropping the tablespace until all guaranteed restore points are removed.

So the syntax is accepted, but here I have another problem: there is a guaranteed restore point, and it prevents the tablespace from being dropped.

Let’s have a look at restore points, because I didn’t create one (this is something we are not allowed to do on Exadata Express Cloud Service, which would be a nice feature as this service is aimed at developers).

SQL> select * from v$restore_point;
 
SCN DATABASE_INCARNATION# GUARANTEE_FLASHBACK_DATABASE STORAGE_SIZE TIME RESTORE_POINT_TIME PRESERVED NAME PDB_RESTORE_POINT CLEAN_PDB_RESTORE_POINT PDB_INCARNATION# CON_ID
--- --------------------- ---------------------------- ------------ ---- ------------------ --------- ---- ----------------- ----------------------- ---------------- ------
84602869122 2 YES 18253611008 28-JAN-17 03.23.08.000000000 AM YES PRE_17_1_2 NO NO 0 0

The PDB was created when I subscribed to the service, on 22-JAN-17, and we are now at 03-FEB-17.

So it seems that the CDB administrator (it is a managed service, so the CDB DBA is Oracle) created a restore point last Saturday.
The name, PRE_17_1_2, looks like something we create before a maintenance operation, in case something goes wrong. I had no notification about any maintenance. And anyway, we usually remove such a restore point as soon as possible because it fills the FRA.

I can see some CDB structures, such as the FRA:


SQL> select * from v$recovery_area_usage;
 
FILE_TYPE PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES CON_ID
--------- ------------------ ------------------------- --------------- ------
CONTROL FILE 0 0 1 0
REDO LOG 0 0 0 0
ARCHIVED LOG 0.81 0.81 193 0
BACKUP PIECE 0.2 0 26 0
IMAGE COPY 0.33 0 10 0
FLASHBACK LOG 1.3 0.98 41 0
FOREIGN ARCHIVED LOG 0 0 0 0
AUXILIARY DATAFILE COPY 0 0 0 0

Ok, that’s not too much. There has not been a lot of activity during this week. And anyway, the FRA is big:

SQL> show parameter recovery
 
NAME TYPE VALUE
-------------------------- ----------- -----
db_recovery_file_dest string +RECO
db_recovery_file_dest_size big integer 6T

So far so good, but I’m stuck here. The restore point is at CDB level, so I cannot drop it:

SQL> drop restore point PRE_17_1_2;
 
drop restore point PRE_17_1_2
Error report -
ORA-38780: Restore point 'PRE_17_1_2' does not exist.
38780. 00000 - "Restore point '%s' does not exist."
*Cause: The restore point name of the DROP RESTORE POINT command does not exist.
*Action: No action required.

This means that I cannot drop my tablespace. And I cannot even resize the datafiles to their minimum:

SQL> alter database datafile '+DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351' resize 10M;
 
alter database datafile '+DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351' resize 10M
Error report -
ORA-38883: Cannot shrink data file +DATA/CFCDBA1/46AB9829A44EB2C1E0538846220A6F7D/DATAFILE/iops.1455.934053351 on primary database due to guaranteed restore points.
38883. 00000 - "Cannot shrink data file %s on primary database due to guaranteed restore points."
*Cause: An attempt was made to shrink a data file on a primary database while there are guaranteed restore points. You cannot do this because Flashback database cannot undo the shrinking of a data file.
*Action: Drop all guaranteed restore points first and retry, or delay the data file resize until all guaranteed restore points are removed.

I have an empty tablespace that takes all my allocated storage and I cannot remove it.

So what to do? Try to contact support? Or fill the FRA until it raises an alert?
I would try the first option, but the CSI I received with my order is not a valid one…

 

Cet article Exadata Express Cloud Service: MAX_PDB_STORAGE est apparu en premier sur Blog dbi services.

Database Diagram using SQL Developer

Thu, 2017-02-02 12:13

Recently a client asked me if I could generate a relational diagram for an Oracle schema. He was preparing a migration and wanted to see how the tables are organized in the schema.
In this article we will show how this is possible with SQL Developer. We are using SQL Developer 4.1.3.20.
sqldev1
Once SQL Developer is started, just proceed as follows:
Launch File ==> Data Modeler ==> Import ==> Data Dictionary
sqldev2
Choose your connection and click Next
sqldev3
Then we can choose the schema
sqldev4
In this example we check all tables and click Next
sqldev5
Review the summary
sqldev6
And then click Finish
logfile

After closing the log file, we have the diagram
sqldev7

We can also save the model as a PDF file or an image file
sqldev8

I hope this article will help.

 

Cet article Database Diagram using SQL Developer est apparu en premier sur Blog dbi services.

Running SLOB on Exadata Express Cloud Service

Wed, 2017-02-01 07:06

The Exadata Express Cloud Service is a managed PDBaaS: Oracle is the system admin and the CDB database administrator, and you are the PDB administrator. You connect with a local user, PDB_ADMIN, which has no SYSDBA privilege but has the PDB_DBA role, which grants nearly all DBA rights, with some features disabled by a lockdown profile.
I have no worry about the performance of this service: it is an Exadata X5 half rack, bare metal (my guess), with the CDB running as RAC One Node on 2 compute nodes and accessing the 7 storage cells. Smart Scan is disabled but, given the small size of the PDB and the low usage of the CDB, I/O is fast (most single-block latencies are between 128 and 256us, coming from cell flash cache).
But I like to run SLOB on new platforms, and I wanted to see if I could run it here, without the SYSDBA role, and connecting to a CDB.

TNS_ADMIN

I’ll connect remotely because SLOB will be running on my computer (I have no access to the host with this PDBaaS managed service). In order to connect remotely, through SQL*Net over SSL, you have to download the client credentials kit
CaptureEXCS017

It is a zip file that contains the encryption wallet and the sqlnet.ora and tnsnames.ora to use. I unzip it into a directory (/var/tmp/client_credentials in this example) and I’ll use it by setting the TNS_ADMIN environment variable to this directory.
Thus I change the wallet location in the sqlnet.ora:
$ cat sqlnet.ora
WALLET_LOCATION = (SOURCE = (METHOD = file)
(METHOD_DATA = (DIRECTORY="$TNS_ADMIN")))

and I use the provided tnsnames.ora which defines the ‘dbaccess’ service:
$ cat tnsnames.ora
dbaccess = (description=
(address=(protocol=tcps)(port=1522)(host=dbaccess.us2.oraclecloudapps.com))
(connect_data=(service_name=eptdojjm1ag.usdc2.oraclecloud.com))
(security=(ssl_server_cert_dn="CN=dbaccess.us2.oraclecloudapps.com,O=Oracle Corporation,L=Redwood Shores,ST=California,C=US"))
)
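Before touching the SLOB scripts, a quick connectivity check helps to validate the wallet and TNS_ADMIN setup (a sketch; the PDB_ADMIN password is of course yours):

export TNS_ADMIN=/var/tmp/client_credentials
sqlplus pdb_admin/MyPassword@dbaccess <<'SQL'
select sys_context('userenv','con_name') as pdb_name from dual;
SQL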

Changes in slob.conf

The default slob.conf connects with a bequeath connection. Here I need to connect through the network service described above. I uncommented:

#ADMIN_SQLNET_SERVICE=slob
#SQLNET_SERVICE_BASE=slob
#SYSDBA_PASSWD=change_on_install

and replaced it with:

ADMIN_SQLNET_SERVICE=dbaccess
SQLNET_SERVICE_BASE=dbaccess
SYSDBA_PASSWD=xxxxxxxxx

Where xxxxxxxxx is my PDB_ADMIN password that I setup in:
CaptureEXCS024

Of course we need an Oracle Client. Download instant client if you don’t have one.

Changes in setup.sh

SLOB creates the users and grants them RESOURCE and DBA.
In Exadata Express Cloud Service, I don’t have the DBA role, but CREATE SESSION and the PDB_DBA role are sufficient. In addition to that, I need to give a quota on the IOPS tablespace because PDB_ADMIN does not have UNLIMITED TABLESPACE. Finally, here is the create user part after my modifications:


msg NOTIFY "Creating user: $user "
 
sqlplus "$constring" <<EOF
WHENEVER SQLERROR EXIT 2;
SET TERMOUT ON
SET ECHO ON
PROMPT User grants for $user
WHENEVER SQLERROR EXIT 2;
GRANT CREATE SESSION TO $user IDENTIFIED BY $user;
GRANT PDB_DBA TO $user;
ALTER USER $user DEFAULT TABLESPACE $tablespace ;
ALTER USER $user QUOTA UNLIMITED ON $tablespace ;
EXIT;
EOF

The setup.sh script starts by dropping the SLOB schemas up to MAX_SLOB_SCHEMAS, just in case they are there. The default is 4096, but that’s too much for me because of the latency to the server in the US (Exadata Express Cloud Service is planned to be available in Europe in the future), so I reduced it to 8 (as I have only 1 CPU in this service, there’s no need for more users):

export MAX_SLOB_SCHEMAS=8

And the last modification in setup.sh was to disable the Multitenant check, because PDBaaS is by definition on a Multitenant database:


if ( ! check_mto "$ADMIN_CONNECT_STRING")
then
msg FATAL ""
msg FATAL "This version of SLOB does not support Oracle Multitenant Option"
msg FATAL
#exit 1
fi

I’ve not seen any problem. Oracle designed multitenant so that everything you used to do on a database is possible to do on a PDB.

Changes in runit.sh

First, in order to connect as PDB_ADMIN instead of SYS as SYSDBA, you have to change the admin_connect_string:

export admin_connect_string="pdb_admin/${SYSDBA_PASSWD}@${ADMIN_SQLNET_SERVICE}"

Second, SLOB calls UTL_FILE, which is not allowed in Exadata Express Cloud Service. Because I’ve seen in the code that this is bypassed when the ‘nt’ argument is 1, I’ve set it to this value in runit.sh:

arg9=1 #was: $nt

Finally, because we can’t create manual AWR snapshots in Exadata Express Cloud Service, I replaced the EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT call with the creation of a small view to get the basic statistics:

create or replace view FPAV as
select 'STAT' class,name,value from V$SYSSTAT
union all
select 'WAIT' class,event,total_waits from V$SYSTEM_EVENT
union all
select 'TIME' class,stat_name,value from V$SYS_TIME_MODEL
union all
select 'V$OS' class,stat_name,value from V$OSSTAT
union all
select 'MICRO' class,event||' - '||wait_time_format,wait_count from V$EVENT_HISTOGRAM_MICRO
;
create table FPAT as select sysdate time,FPAV.* from FPAV where 1=0;
insert into FPAT select sysdate time,FPAV.* from FPAV;
commit;

and I replaced the part that runs the AWR report with a simple query:


($admin_conn <<EOF) >> awr.txt
column per_sec format 999G999G999G999G999
spool awr.txt append
select to_char(btim,'hh24:mi:ss') btime,to_char(time,'hh24:mi:ss')etime,seconds,class,name,round(delta/seconds) per_second , decode(class||' '||name
,'STAT session logical reads','LIO/s'
,'STAT physical read total IO requests','PIO/s'
,'TIME DB CPU','CPU us'
,'TIME DB time','DB time us'
,'STAT physical read total bytes','Read B/s'
,'STAT physical write total bytes','Write B/s'
,'V$OS BUSY_TIME','OS busy cs'
,'WAIT resmgr:cpu quantum','resmgr'
) main from (
select
FPAT.*
,max(time)over(order by time rows between unbounded preceding and unbounded following) last
,value-lag(value)over(partition by class,name order by time) delta
,(time-lag(time)over(partition by class,name order by time))*24*60*60 seconds
,lag(time)over(partition by class,name order by time) btim
from FPAT
) where time=last and delta>0 order by main,per_second desc;
spool off
exit
EOF

Statspack is not an alternative here because it requires some views and grants from SYS, which you cannot create on Exadata Express Cloud Service. I really don’t understand why we cannot use AWR locally, because in 12.2 you can have AWR snapshots at the PDB level. But any use of dbms_workload_repository is disabled by the lockdown profile.

Result

So this is the kind of output I get when running 2 SLOB sessions during 10 minutes:


BTIME ETIME SECONDS CLAS NAME PER_SECOND MAIN
-------- -------- ---------- ---- ---------------------------------------------------------------- ---------- ----------
11:16:09 11:26:15 606 TIME DB CPU 1968151 CPU us
11:16:09 11:26:15 606 TIME DB time 1981479 DB time us
11:16:09 11:26:15 606 STAT session logical reads 1228557 LIO/s
11:16:09 11:26:15 606 STAT physical read total IO requests 0 PIO/s
11:16:09 11:26:15 606 STAT physical read total bytes 189 Read B/s
11:16:09 11:26:15 606 STAT physical write total bytes 189 Write B/s
11:16:09 11:26:15 606 WAIT resmgr:cpu quantum 0 resmgr
11:16:09 11:26:15 606 STAT logical read bytes from cache 1.0064E+10
11:16:09 11:26:15 606 TIME sql execute elapsed time 1980376
11:16:09 11:26:15 606 V$OS IDLE_TIME 6787
11:16:09 11:26:15 606 V$OS BUSY_TIME 391
11:16:09 11:26:15 606 V$OS USER_TIME 311

1.96 CPU seconds per second and 1.98 seconds of DB time: those are my 2 cached SLOB sessions.
About 1.2 million LIO per second. This is what I expect from these tests on Exadata X5 (PCT_UPDATE=0 and WORK_UNIT=64).
Resource Manager is ready to kick in, as CPU_COUNT is set to 2 in my PDB (set by the lockdown profile, you cannot change it).
The overall system hosting the CDB has been mostly idle: 3.91 / (67.87 + 3.91) is 5.44% busy. I’m responsible for 2 sessions over 72 threads (36 cores): 2/72 = 2.78% CPU usage.

The Exadata Express Cloud Service allocates 1 core (2 threads) but, because the system is nearly idle, my 2 sessions probably didn’t run on the 2 threads of the same core. Let’s compare with a run of 1 session only:


BTIME ETIME SECONDS CLAS NAME PER_SECOND MAIN
-------- -------- ---------- ---- ---------------------------------------------------------------- ---------- ----------
12:15:49 12:25:55 606 TIME DB CPU 989008 CPU us
12:15:49 12:25:55 606 TIME DB time 991198 DB time us
12:15:49 12:25:55 606 STAT session logical reads 668368 LIO/s
12:15:49 12:25:55 606 STAT physical read total IO requests 0 PIO/s
12:15:49 12:25:55 606 STAT physical read total bytes 203 Read B/s
12:15:49 12:25:55 606 STAT physical write total bytes 189 Write B/s
12:15:49 12:25:55 606 STAT logical read bytes from cache 5475272359
12:15:49 12:25:55 606 TIME sql execute elapsed time 990290
12:15:49 12:25:55 606 STAT consistent gets 668368

Comparing logical reads, 1228557/668368=1.8 so probably I was not running the two sessions on the same core.

So what?

SLOB (https://kevinclosson.net/slob/) doesn’t need to be installed on the database server because it is mostly PL/SQL, so there are no roundtrips. And this is a way to run it when you only have access to a PDB, without SYSDBA privileges.

 

Cet article Running SLOB on Exadata Express Cloud Service est apparu en premier sur Blog dbi services.

Can I do it with PostgreSQL? – 9 – Temporary tables

Wed, 2017-02-01 06:47

It has been quite a while since the last posts in this series, so here is what we looked at until now:

In this post we’ll look at temporary tables. Temporary tables hold data for the scope of a session or a transaction, and the data is lost afterwards, so what are they good for then? Usually you will use them to store intermediate results that you need for further processing, without the need to store these permanently. A typical use case is a business report that requires some intermediate aggregations. Can we do this in PostgreSQL? Yes, of course. Let’s go.

When we look at the create table syntax the keywords TEMP or TEMPORARY are there:

(postgres@[local]:5439) [postgres] > \h create table
Command:     CREATE TABLE
Description: define a new table
Syntax:
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table_name ( [
  { column_name data_type [ COLLATE collation ] [ column_constraint [ ... ] ]
    | table_constraint
    | LIKE source_table [ like_option ... ] }
    [, ... ]
] )
[ INHERITS ( parent_table [, ... ] ) ]
[ WITH ( storage_parameter [= value] [, ... ] ) | WITH OIDS | WITHOUT OIDS ]
[ ON COMMIT { PRESERVE ROWS | DELETE ROWS | DROP } ]
[ TABLESPACE tablespace_name ]

Then we should be able to create a temporary table, shouldn’t we?

(postgres@[local]:5439) [postgres] > create temporary table tmp1 ( a int, b varchar(10));
CREATE TABLE
(postgres@[local]:5439) [postgres] > \d tmp1
           Table "pg_temp_2.tmp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 

Easy. Have you looked at the schema it got created in? What is pg_temp_2? I do not have a schema named pg_temp_2:

(postgres@[local]:5439) [postgres] > \dn
  List of schemas
  Name  |  Owner   
--------+----------
 public | postgres
(1 row)

Temporary tables get created in a special schema. You can see it if you query pg_namespace directly:

(postgres@[local]:5439) [postgres] > select nspname from pg_namespace where nspname = 'pg_temp_2';
  nspname  
-----------
 pg_temp_2
(1 row)

Important to know: When you exit from your session the temporary table is gone:

(postgres@[local]:5439) [postgres] > \d tmp1
           Table "pg_temp_2.tmp1"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 

(postgres@[local]:5439) [postgres] > \q
postgres@pgbox:/home/postgres/ [PG961] psql postgres
psql (9.6.1 dbi services build)
Type "help" for help.

(postgres@[local]:5439) [postgres] > \d tmp1
Did not find any relation named "tmp1".

What about the visibility of the data in a temporary table? It depends on how you create the table. When you want the data
to be visible for the entire lifetime of the session, you do it like above:

(postgres@[local]:5439) [postgres] > create temporary table tmp1 ( a int, b varchar(10));
CREATE TABLE
(postgres@[local]:5439) [postgres] > insert into tmp1 values (1,'1');
INSERT 0 1
(postgres@[local]:5439) [postgres] > select * from tmp1;
 a | b 
---+---
 1 | 1
(1 row)

When you want the data to be visible only for the duration of the current transaction you do it like this:

(postgres@[local]:5439) [postgres] > create temporary table tmp2 ( a int, b varchar(10)) on commit delete rows;
CREATE TABLE
(postgres@[local]:5439) [postgres] > begin;
BEGIN
(postgres@[local]:5439) [postgres] > insert into tmp2 values (1,'1');
INSERT 0 1
(postgres@[local]:5439) [postgres] > end;
COMMIT
(postgres@[local]:5439) [postgres] > select count(*) from tmp2;
 count 
-------
     0
(1 row)

Another important point to know: When you create a temporary table with the same name as an existing table you will not see
the existing table anymore unless you prefix the normal table with the schema name:

(postgres@[local]:5439) [postgres] > create table tmp3 ( a int );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create temporary table tmp3 ( a int, b varchar(10));
CREATE TABLE
(postgres@[local]:5439) [postgres] > \d tmp3
           Table "pg_temp_2.tmp3"
 Column |         Type          | Modifiers 
--------+-----------------------+-----------
 a      | integer               | 
 b      | character varying(10) | 

(postgres@[local]:5439) [postgres] > \d public.tmp3
     Table "public.tmp3"
 Column |  Type   | Modifiers 
--------+---------+-----------
 a      | integer | 

Be careful with this. What about performance? Is a temporary table faster to insert into than a normal table?

(postgres@[local]:5439) [postgres] > create table tmp1 ( a int, b varchar(10) );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create temporary table tmp2 ( a int, b varchar(10) );
CREATE TABLE

This is the script:

(postgres@[local]:5439) [postgres] > \! cat a.sql
truncate tmp1;
truncate tmp2;
with aa as
( select generate_series ( 1, 1000000 ) a  )
insert into tmp1 (a,b)
       select a, a
         from aa;

with aa as
( select generate_series ( 1, 1000000 ) a  )
insert into tmp2 (a,b)
       select a, a
         from aa;

This is the result:

(postgres@[local]:5439) [postgres] > \i a.sql
TRUNCATE TABLE
Time: 19.626 ms
TRUNCATE TABLE
Time: 35.665 ms
INSERT 0 1000000
Time: 7777.034 ms
INSERT 0 1000000
Time: 3327.984 ms
(postgres@[local]:5439) [postgres] > \i a.sql
TRUNCATE TABLE
Time: 46.221 ms
TRUNCATE TABLE
Time: 33.286 ms
INSERT 0 1000000
Time: 7425.957 ms
INSERT 0 1000000
Time: 3241.140 ms
(postgres@[local]:5439) [postgres] > \i a.sql
TRUNCATE TABLE
Time: 44.365 ms
TRUNCATE TABLE
Time: 35.992 ms
INSERT 0 1000000
Time: 8732.566 ms
INSERT 0 1000000
Time: 3888.795 ms

The temporary table is almost twice as fast to write to as the normal table. Of course, you can create indexes on temporary tables as well:

(postgres@[local]:5439) [postgres] > create temporary table tmp4 ( a int, b varchar );
CREATE TABLE
(postgres@[local]:5439) [postgres] > create index tmpi1 on tmp4(a);
CREATE INDEX

.. and of course the index is gone as well once you end your session.
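The syntax above also shows a third ON COMMIT option, DROP, which removes the table itself at the end of the transaction. Here is a quick sketch against the same instance (the final select is expected to fail because the table is already gone):

psql -p 5439 postgres <<'SQL'
begin;
create temporary table tmp5 ( a int ) on commit drop;
insert into tmp5 values (1);
select count(*) from tmp5;   -- visible inside the transaction
commit;
select count(*) from tmp5;   -- ERROR: relation "tmp5" does not exist
SQL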

Another point to remember: temporary tables are not visible to the autovacuum daemon. You might think this is not a problem as they disappear anyway, but remember that autovacuum is also responsible for gathering the statistics (kicking off analyze).
Depending on what you want to do with the data you loaded into the temporary table, it might be wise to issue a manual analyze on it:

(postgres@[local]:5439) [postgres] > analyze verbose tmp4;

I hope I do not need to say that temporary tables are not crash-safe :) Have fun …

 

Cet article Can I do it with PostgreSQL? – 9 – Temporary tables est apparu en premier sur Blog dbi services.

Amazon AWS instances and Oracle database performance

Wed, 2017-02-01 03:19

When you run Oracle Database on Amazon AWS you Bring Your Own Licenses depending on the number of virtual cores (which are the number of cores allocated to your vCPUs). Behind the instance types, you have different processors and hyper-threading. Then, when choosing which instance type to run, you want to know which processor offers the best performance for your Oracle Workload. Here is an example comparing the logical reads on T2, M4, R4 and C4 instances.

My comparison is done by running cached SLOB (https://kevinclosson.net/slob/) to measure the maximum number of logical reads per second when running the same workload on the different instance types.
I’ve compared what you can get with 2 Oracle Database processor licences, which cover 2 cores (no core factor on AWS). This means 2 vCPUs for T2, which is not hyper-threaded, and 4 vCPUs for the others.

T2.large: 2vCPU, 8GB RAM, monthly cost about 100$

I was on Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz

With one session:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.1 0.00 5.37
DB CPU(s): 1.0 13.0 0.00 5.34
Logical read (blocks): 747,004.5 9,760,555.7

With 2 sessions:

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 11.12
DB CPU(s): 2.0 27.1 0.00 11.04
Logical read (blocks): 1,398,124.7 19,111,284.0

T2 is not hyper-threaded, which is why we double the LIOPS with two sessions. So with 2 Oracle licences on T2 we get about 1.4 million LIO/s.

M4.xlarge: 4vCPU, 16GB RAM, monthly cost about 180$

M4 is the latest General Purpose instance type in EC2. It is hyper-threaded, so with 2 Oracle processor licences we can use 4 vCPUs.
Here I was on Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz: 2 cores with 2 threads each.

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.1 0.00 5.46
DB CPU(s): 1.0 13.1 0.00 5.46
Logical read (blocks): 874,326.7 11,420,189.2

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.24
DB CPU(s): 2.0 27.2 0.00 9.22
Logical read (blocks): 1,540,116.9 21,047,307.6

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 40.9 0.00 12.33
DB CPU(s): 3.0 40.8 0.00 12.30
Logical read (blocks): 1,645,128.2 22,469,983.6

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 14.46
DB CPU(s): 4.0 54.3 0.00 14.39
Logical read (blocks): 1,779,361.3 24,326,538.0

Those CPUs are faster than the T2 ones. With a single session, we can do 17% more LIOPS. And running on all 4 threads, we can reach 1.8 million LIOPS, which is 27% more than T2 for the same Oracle licences.

R4.xlarge: 4vCPU, 30.5GB RAM, monthly cost about 200$

R4 is the memory-optimized instance type. I was on Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz, so I expect about the same performance as M4.

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.7 0.00 6.01
DB CPU(s): 1.0 13.7 0.00 6.01
Logical read (blocks): 864,113.9 11,798,650.6

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.38
DB CPU(s): 2.0 27.2 0.00 9.36
Logical read (blocks): 1,546,138.8 21,115,125.5

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 40.9 0.00 14.07
DB CPU(s): 3.0 40.9 0.00 14.05
Logical read (blocks): 1,686,595.4 23,033,987.3

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 15.00
DB CPU(s): 4.0 54.3 0.00 14.93
Logical read (blocks): 1,837,289.9 25,114,082.1

This one looks a little faster. It is the same CPU, but cached SLOB does not test only CPU frequency: it also tests memory access, and R4 instances have DDR4 memory.

C4.xlarge: 4vCPU, 7.5GB RAM, monthly cost about 170$

For my last test I chose the compute-optimized C4 with Intel(R) Xeon(R) CPU E5-2666 v3 @ 2.90GHz

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 1.0 13.7 0.00 6.83
DB CPU(s): 1.0 13.7 0.00 6.83
Logical read (blocks): 923,185.0 12,606,636.8

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 2.0 27.3 0.00 9.38
DB CPU(s): 2.0 27.2 0.00 9.36
Logical read (blocks): 1,632,424.3 22,296,021.5

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 3.0 39.2 0.00 13.64
DB CPU(s): 3.0 39.1 0.00 13.61
Logical read (blocks): 1,744,709.5 22,793,491.7

Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~~~~ --------------- --------------- --------- ---------
DB Time(s): 4.0 54.6 0.00 15.79
DB CPU(s): 4.0 54.3 0.00 15.71
Logical read (blocks): 1,857,692.6 25,396,599.8

According to https://aws.amazon.com/ec2/instance-types/ C4 instances have the lowest price/compute performance in EC2. The frequency is 20% higher than R4, but we get similar LIOPS. The CPU frequency is not the only parameter for a database workload.

So what?

You should not compare only the EC2 instance cost (I’ve indicated the approximate cost for RHEL in Europe, but you can check pricing at https://aws.amazon.com/ec2/pricing/reserved-instances/pricing/). You should also estimate the Oracle licences you need to run your workload. Creating an EC2 instance takes only a few minutes. Installing Oracle from an ORACLE_HOME clone is also very fast, and creating a database with the SLOB create_database_kit is easy. Fully automated, you can run the same SLOB tests on an instance and get results after 2 hours. It is highly recommended to do that before choosing the instance type for your database. The number of cores determines the Oracle licences to buy, which is an acquisition cost plus a yearly maintenance fee. The goal is to run on the processors that give the best performance for your workload.
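One way to put the numbers side by side is to keep one AWR report per instance type and extract the load profile lines (a sketch; the report file names are assumptions):

for f in awr_t2.large.txt awr_m4.xlarge.txt awr_r4.xlarge.txt awr_c4.xlarge.txt
do
  echo "== $f"
  grep -E "DB CPU\(s\):|Logical read \(blocks\):" "$f"
done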

 

Cet article Amazon AWS instances and Oracle database performance est apparu en premier sur Blog dbi services.

Introducing high-availability and multi-subnet scenarios with SQL Server on Linux

Tue, 2017-01-31 15:56

In my first blog about SQL Server on Linux, I introduced the new high-availability feature, which so far concerns only SQL Server failover cluster instances. During this discovery time, I had the support of Mihaela Blendea (@MihaelaBlendea) at Microsoft to clarify some aspects of this new kind of architecture. Firstly, I would like to thank her. It is always a big pleasure to get this kind of availability from the Microsoft team. But after achieving the installation of my SQL Server FCI environment on Linux, I was interested in performing the same in a more complex scenario, like the multi-subnet failover clusters I may notice at some customer shops. The installation process will surely change over time, and this is not intended as official documentation. It is only an exercise which is part of my Linux immersion experience.

So I decided to evolve my current architecture (two cluster nodes with Pacemaker on the same subnet) by introducing a third one on a different subnet. Here is a picture of the architecture I wanted to install.

blog 115 - 1 - sqlfci multisubnet architecture

So basically, referring to my previous architecture, the tasks to perform were as follows:

  • Make the initial heartbeat configuration redundant. Even if nowadays having redundant network paths is mostly handled by modern infrastructures and virtualization layers, I still believe it is a best practice to make the heartbeat redundant at the cluster level in order to avoid unexpected behaviors like split-brain (for instance with two nodes in this case). I will have the opportunity to talk about quorum in a future post.
  • Introduce a third node on a different subnet into the existing architecture and then add it to the cluster. You may follow the Microsoft documentation to perform this task. The main challenge here was to add the third node in the context of a multi-subnet scenario and to ensure that the communication paths work well between the cluster nodes for both networks (public and private).
  • Find a way to make the existing SQL Server FCI resource multi-subnet compliant. I mean getting the same kind of behavior we have with a WSFC on Windows when the resource fails over to nodes on different subnets. In this case, we have to configure an OR-based resource dependency which includes the second virtual IP address.
  • Check that applications are able to connect in the context of a multi-subnet failover event.

You may notice that I didn’t introduce redundancy at the storage layer. Indeed, the NFS server becomes a SPOF, but I didn’t want to make my architecture more complex for the moment. In a more realistic scenario at customer shops, this aspect would probably be covered by other storage vendor solutions.

So let’s begin with the heartbeat configuration. In my existing infrastructure, only one ring was configured, running on top of the eth0 interfaces on both nodes (respectively 192.168.5.17 for the linux01 node and 192.168.5.18 for the linux02 node).

 [mikedavem@linux01 ~]$ sudo pcs cluster corosync
…
nodelist {
    node {
        ring0_addr: linux01.dbi-services.test
        nodeid: 1
    }

    node {
        ring0_addr: linux02.dbi-services.test
        nodeid: 2
    }
}
…

 

So I added another network interface (eth1) on each cluster node, on a different subnet (192.168.20.0). Those interfaces will be dedicated to running the second Corosync link (ring 1).

  • Linux01
[mikedavem@linux01 ~]$ ip addr show eth1
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:00:2b:d4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.17/24 brd 192.168.20.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe00:2bd4/64 scope link
       valid_lft forever preferred_lft forever

 

  • Linux02
[mikedavem@linux01 ~]$ sudo ssh linux02 ip addr show eth1
…
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:15:5d:00:2b:d5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.18/24 brd 192.168.20.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::36d8:d6f9:1b7a:cebd/64 scope link
       valid_lft forever preferred_lft forever

 

At this point I bound each new IP address to a corresponding hostname. We may either store the new configuration in the /etc/hosts file or in the DNS server(s).
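For example, a minimal /etc/hosts sketch for the two heartbeat names used below (the linux03 heartbeat address is not shown in this post, so it is omitted here; DNS records would work the same way):

cat <<'EOF' | sudo tee -a /etc/hosts
192.168.20.17   linux01H2.dbi-services.test   linux01H2
192.168.20.18   linux02H2.dbi-services.test   linux02H2
EOF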

Then I updated the corosync.conf on both nodes by adding the new ring configuration as follows. The point here is that configuration changes are not synchronized automatically across nodes, unlike with Windows failover clusters. To allow the redundant ring protocol, I added the rrp_mode parameter set to active for both network interfaces (eth0 and eth1) and a new ring address for each node (ring1_addr).

totem {
    version: 2
    secauth: off
    cluster_name: linux_cluster
    transport: udpu
    rrp_mode: active
}
nodelist {
    node {
        ring0_addr: linux01.dbi-services.test
        ring1_addr: linux01H2.dbi-services.test
        nodeid: 1
    }
    node {
        ring0_addr: linux02.dbi-services.test
        ring1_addr: linux02H2.dbi-services.test
        nodeid: 2
    }
}

After restarting the Corosync service on both nodes, I checked the new ring status on both nodes

 [mikedavem@linux01 ~]# sudo corosync-cfgtool -s
Printing ring status.
Local node ID 1
RING ID 0
        id      = 192.168.5.17
        status  = ring 0 active with no faults
RING ID 1
        id      = 192.168.20.17
        status  = Marking seqid 23 ringid 1 interface 192.168.20.17 FAULTY
[root@linux01 ~]#
 [root@linux01 ~]# ssh linux02 corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
        id      = 192.168.5.18
        status  = ring 0 active with no faults
RING ID 1
        id      = 192.168.20.18
        status  = ring 1 active with no faults

 

At this point, my pacemaker cluster was able to use all the network interfaces to execute heartbeat.

Following the Microsoft documentation, I added a new node LINUX03 with the same heartbeat configuration, and the general Corosync configuration was updated as follows:

[mikedavem@linux01 ~]# sudo pcs cluster node add linux03.dbi-services.test,linux03H2.dbi-services.test
…
nodelist {
    node {
        ring0_addr: linux01.dbi-services.test
        ring1_addr: linux01H2.dbi-services.test
        nodeid: 1
    }
    node {
        ring0_addr: linux02.dbi-services.test
        ring1_addr: linux02H2.dbi-services.test
        nodeid: 2
    }
    node {
        ring0_addr: linux03.dbi-services.test
        ring1_addr: linux03H2.dbi-services.test
        nodeid: 3
    }
}

 

Obviously, the communication paths worked successfully after correctly configuring the routes between the nodes on the different subnets. The corresponding default gateways are already configured for the eth0 interfaces, but we have to add static routes for the eth1 interfaces, as shown below:

  • LINUX01 and LINUX02 (eth0 – subnet 192.168.5.0 – default gateway 192.168.5.10 / eth1 – subnet 192.168.20.0 – static route to 192.168.30.0 subnet by using 192.168.20.10).
[mikedavem@linux01 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.5.10    0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
192.168.5.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.20.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.30.0    192.168.20.10   255.255.255.0   UG    0      0        0 eth1

 

  • LINUX03 (eth0 – subnet 192.168.50.0 – default gateway 192.168.50.10 / eth1 – subnet 192.168.30.0 – static route to 192.168.20.0 subnet by using 192.168.30.10).
[mikedavem@linux03 ~]$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.50.10   0.0.0.0         UG    0      0        0 eth0
0.0.0.0         192.168.50.10   0.0.0.0         UG    100    0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
192.168.20.0    192.168.30.10   255.255.255.0   UG    100    0        0 eth1
192.168.30.0    0.0.0.0         255.255.255.0   U     100    0        0 eth1
192.168.50.0    0.0.0.0         255.255.255.0   U     100    0        0 eth0
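For reference, here is a sketch of how such routes can be added (gateways and interface names as in the routing tables above; the route-eth1 file is the usual RHEL/CentOS 7 way to make them persistent):

# on linux01 and linux02: reach the 192.168.30.0/24 heartbeat subnet through 192.168.20.10
sudo ip route add 192.168.30.0/24 via 192.168.20.10 dev eth1
echo "192.168.30.0/24 via 192.168.20.10 dev eth1" | sudo tee /etc/sysconfig/network-scripts/route-eth1
# on linux03: the mirror route back to the 192.168.20.0/24 subnet through 192.168.30.10
sudo ip route add 192.168.20.0/24 via 192.168.30.10 dev eth1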

 

Let’s have a look at the cluster status:

[root@linux01 ~]# pcs cluster status
Cluster Status:
 Stack: corosync
 Current DC: linux01.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
 Last updated: Mon Jan 30 12:47:00 2017         Last change: Mon Jan 30 12:45:01 2017 by hacluster via crmd on linux01.dbi-services.test
 3 nodes and 3 resources configured

PCSD Status:
  linux01.dbi-services.test: Online
  linux03.dbi-services.test: Online
  linux02.dbi-services.test: Online

 

To enable the NFS share to be mounted from the new cluster node LINUX03 on the 192.168.50.0 subnet, we have to add the new configuration to the /etc/exports file and export it afterwards.

[root@nfs ~]# exportfs -rav
exporting 192.168.5.0/24:/mnt/sql_log_nfs
exporting 192.168.5.0/24:/mnt/sql_data_nfs
exporting 192.168.50.0/24:/mnt/sql_data_nfs

[root@nfs ~]# showmount -e
Export list for nfs.dbi-services.com:
/mnt/sql_log_nfs  192.168.5.0/24
/mnt/sql_data_nfs 192.168.50.0/24,192.168.5.0/24
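The corresponding line added on the NFS server would look something like this (the export options are an assumption, as they are not visible in the output above):

echo "/mnt/sql_data_nfs 192.168.50.0/24(rw,sync,no_root_squash)" | sudo tee -a /etc/exports
sudo exportfs -rav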

 

Well, after checking that everything was ok from the cluster side, the next challenge was to find a way to configure the SQL Server FCI resource to be multi-subnet compliant. As stated by Microsoft, the SQL Server FCI is not as tightly coupled with the Pacemaker add-on as it is with a Windows Server Failover Cluster. Based on my Windows failover cluster experience, I wondered if I had to go the same way with the Pacemaker cluster on Linux, so I tried to find a way to add a second virtual IP address and then to include it as part of an OR-based dependency, but I found nothing in this area. However, Pacemaker offers concepts like location / colocation constraints and scores in order to control where resources run during failover events. My intention is not to go into the details of the Pacemaker documentation, but by playing with these 3 concepts I was able to address our need. Again, please feel free to comment if you have a better method to meet this requirement.

Let’s first add a second virtual IP address for the 192.168.50.0 subnet (virtualipdr) and then add a new colocation constraint between it and the SQL Server resource (sqllinuxfci):

[mikedavem@linux01 ~]$sudo pcs cluster cib cfg
[mikedavem@linux01 ~]$sudo pcs -f cfg resource create virtualipdr ocf:heartbeat:IPaddr2 ip=192.168.50.20
[mikedavem@linux01 ~]$sudo pcs -f cfg constraint colocation add virtualipdr sqllinuxfci
[mikedavem@linux01 ~]$sudo pcs cluster cib-push cfg
[mikedavem@linux01 ~]$sudo pcs constraint location

 

Now, to avoid starting the virtualip or virtualipdr resources on the wrong subnet, let’s configure an “opt-out” scenario: a symmetric cluster that allows resources to run everywhere, plus location constraints to prevent a resource from running on specific nodes.

[mikedavem@linux01 ~]$sudo pcs property set symmetric-cluster=true
[mikedavem@linux01 ~]$pcs constraint location virtualipdr avoids linux01.dbi-services.test=-1
[mikedavem@linux01 ~]$pcs constraint location virtualipdr avoids linux02.dbi-services.test=-1
[mikedavem@linux01 ~]$pcs constraint location virtualip avoids linux03.dbi-services.test=-1

 

The new constraint topology is as follows

[mikedavem@linux01 ~]$ sudo pcs constraint
Location Constraints:
  Resource: sqllinuxfci
    Enabled on: linux01.dbi-services.test (score:INFINITY) (role: Started)
  Resource: virtualip
    Disabled on: linux03.dbi-services.test (score:-1)
  Resource: virtualipdr
    Disabled on: linux01.dbi-services.test (score:-1)
    Disabled on: linux02.dbi-services.test (score:-1)
Ordering Constraints:
Colocation Constraints:
  FS with sqllinuxfci (score:INFINITY)
  virtualip with sqllinuxfci (score:INFINITY)
  virtualipdr with sqllinuxfci (score:INFINITY)
Ticket Constraints:

 

Let’s have a look at the Pacemaker status. At this point, all the SQL Server resources are running on LINUX01, on the 192.168.5.0 subnet. We may notice that virtualipdr is in stopped state in this case.

[mikedavem@linux01 ~]$ sudo pcs status
Cluster name: linux_cluster
Stack: corosync
Current DC: linux02.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Jan 31 22:28:57 2017          Last change: Mon Jan 30 16:57:10 2017 by root via crm_resource on linux01.dbi-services.test

3 nodes and 4 resources configured

Online: [ linux01.dbi-services.test linux02.dbi-services.test linux03.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::mssql:fci):       Started linux01.dbi-services.test
 FS     (ocf::heartbeat:Filesystem):    Started linux01.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Started linux01.dbi-services.test
 virtualipdr    (ocf::heartbeat:IPaddr2):       Stopped

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

Now let’s try to move the resources to the LINUX03 node, on the 192.168.50.0 subnet:

[mikedavem@linux01 ~]$ sudo pcs resource move sqllinuxfci linux03.dbi-services.test

 

The new Pacemaker status becomes:

[mikedavem@linux01 ~]$ sudo pcs status
Cluster name: linux_cluster
Stack: corosync
Current DC: linux02.dbi-services.test (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Jan 31 22:33:21 2017          Last change: Tue Jan 31 22:32:53 2017 by root via crm_resource on linux01.dbi-services.test

3 nodes and 4 resources configured

Online: [ linux01.dbi-services.test linux02.dbi-services.test linux03.dbi-services.test ]

Full list of resources:

 sqllinuxfci    (ocf::mssql:fci):       Stopped
 FS     (ocf::heartbeat:Filesystem):    Started linux03.dbi-services.test
 virtualip      (ocf::heartbeat:IPaddr2):       Stopped
 virtualipdr    (ocf::heartbeat:IPaddr2):       Started linux03.dbi-services.test

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

 

In turn, virtualipdr was brought online and virtualip was brought offline, because we are now located on the 192.168.50.0 subnet. Here we go!

Ok, at this point our SQL Server failover cluster instance seems to behave as expected, but how do we deal with client connections in this case? Referring to previous Windows failover cluster experiences, I can think about two scenarios using DNS servers.

  • We are able to use the SqlClient / JAVA / ODBC support for HA with the MultiSubnetFailover parameter in the connection string. In this case, good news: we may simply put both virtual IP addresses in the corresponding DNS record and the magic will operate by itself (similar to the RegisterAllProvidersIP property with availability groups). The client will automatically reach the first available address and everything should be fine (see the sketch after this list).
  • We cannot modify or use MultiSubnetFailover, and in this case we may lower the TTL value manually for the corresponding DNS record (similar to the HostRecordTTL parameter with availability groups). We will experience timeout issues on the first connection attempt, but the second one should work.
  • Other scenarios?? Please feel free to comment
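As an illustration of the first scenario, here is a sketch of the client side (the DNS name sqlfci.dbi-services.test holding both virtual IP addresses and the sa credentials are assumptions):

# sqlcmd exposes the MultiSubnetFailover behaviour through the -M switch
sqlcmd -S sqlfci.dbi-services.test -U sa -P '***' -M -Q "select @@servername"
# the equivalent SqlClient connection string:
# Server=sqlfci.dbi-services.test;Database=master;User Id=sa;Password=***;MultiSubnetFailover=True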

In my lab environment, SqlClient-based connections seem to work well in the aforementioned cases. I will perform further tests in the near future and update this blog with the results.

I’m looking forward to seeing other improvements / features in the next SQL Server CTPs.

Happy clustering on Linux!

Cet article Introducing high-availability and multi-subnet scenarios with SQL Server on Linux est apparu en premier sur Blog dbi services.

Oracle Core factor and Oracle or non-Oracle Cloud

Mon, 2017-01-30 05:10

Oracle Database licensing can be based on two metrics: Processor or Named User Plus (NUP). NUP also requires counting the processors, because there is a minimum number of NUPs per processor. So it is very important to know how to count processors. The days when a machine had 1 socket with 1 processor with 1 chip with 1 core running 1 thread are long gone. Today, we have physical machines with multi-core processors running multiple threads per core. And virtualization allocates part of it to virtual machines, often called vCPUs.

Oracle still uses the name ‘processor’, but the way of counting them varies by processor vendor (core factor), virtualization solution, and cloud provider.

References

The main documents referenced here are:

The Oracle Processor Core Factor Table: http://www.oracle.com/us/corporate/contracts/processor-core-factor-table-070634.pdf which describes how to adjust core count to processor count. It also describes how to count Oracle Cloud CPU (aka OCPU).

The Licensing Oracle Software in the Cloud Computing Environment: http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf which describes how to count Amazon AWS vCPU and Microsoft Azure Cores.

There is no mention of other cloud providers, but they all run virtualization, so the Oracle Partitioning Policy applies to know how to count physical vs. virtual CPUs: http://www.oracle.com/us/corporate/pricing/partitioning-070609.pdf. However, their hosting contracts may mention different things.

At the end of those documents, you see a mention that they are ‘for educational purposes only’, so basically there is no legal truth in them, and my blog post, being my interpretation of them, has no value at all… Your contracts hold the truth when it comes to legal interpretation.

So you might wonder what the point is of reading those documents if they have no value. Well, your goal, as an IT manager, is not to go to court and fight with attorneys. If you want to avoid any problems, it is better to read those rules and adhere to them, whether you find them fair or not. Of course, you can also choose to go to other vendors with other models, such as open source. But that’s not the goal of this post. Here I suppose that you like the Oracle Database technology, as much as I do, and that you are looking for a way to run it at an affordable cost without worrying about LMS audits…

Something else about those documents: they change, so you may want to look at their history. For example, about Amazon and Azure, the definitions were a bit different a few days ago, and the Wayback Machine can help us: http://web.archive.org/web/20161226082104/http://www.oracle.com/us/corporate/pricing/cloud-licensing-070579.pdf.

Vendors

Of course, there may be some technical reasons behind the core factor and the virtualization and cloud factors. But let’s face it: besides the database, Oracle is also a hardware vendor, a virtualization software vendor, and a cloud provider. They are at the top of the database market and want to do the same in the cloud market. Making the database cheaper on their own products may be a way to achieve that. Customers wanting to stay with Oracle Database will be tempted to go to the full Oracle stack.

The combination of core factor and virtualisation rules clearly disadvantages all competitors:

  • IBM LPAR virtualisation is accepted, but the IBM POWER core factor makes the database 2x more expensive than on the processors you find in Oracle hardware (Intel, SPARC)
  • Most data centers run on VMware ESX with the 0.5 Intel core factor, but counting vCPUs is not accepted and the whole datacenter may have to be licenced
  • We are allowed to count vCPUs on Amazon and Azure virtualization, and they run Intel Xeon, but the core factor applies only on Oracle Cloud. You need 2x processor licenses on AWS or Azure for the same processor cores
  • Disabling cores from the BIOS is how Capacity on Demand is done on a bare metal ODA, but we are not allowed to do the same on similar hardware

So basically, running Oracle Database is cheaper on Oracle hardware, Oracle virtualization and the Oracle Cloud. When you do it on other vendors, you can’t do capacity on demand, and you need more processor licenses to run on the same number of CPUs.

Enterprise Edition

On the Oracle Cloud, you allocate CPU resources in OCPUs. One OCPU is the equivalent of one Intel Xeon core with 2 threads. And one processor license covers 2 OCPUs. This looks like the core factor for Intel Xeon, even if it is not directly mentioned (but this equivalence is written in the core factor document). So one processor license covers 2 cores, i.e. 4 vCPUs with hyper-threading.

On Amazon EC2 and RDS one processor licence covers 1 core. This can be 1 vCPU (T2 and m3.medium have one thread per core) or 2 vCPU (hyper-threaded instances). The AWS document https://aws.amazon.com/ec2/instance-types/ mentions that each vCPU is a hyperthread of an Intel Xeon core except for T2 and m3.medium.

On Microsoft Azure one processor license covers 1 core. This is 1 vCPU as there is no hyper-threading enabled.
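To make the difference concrete, here is a rough worked example based on my reading of the rules above (check your own contract before relying on it). Take a workload that needs 4 hyper-threaded Intel Xeon cores, i.e. 8 CPU threads:

  • Oracle Public Cloud: 8 threads = 4 OCPUs = 2 processor licenses
  • Amazon EC2/RDS: 8 vCPUs on a hyper-threaded instance = 4 cores = 4 processor licenses
  • Microsoft Azure: no hyper-threading, so 4 cores = 4 vCPUs = 4 processor licenses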

23-Jan-2017

I mentioned looking at the archive.org version of the documents. Tim Hall has mentioned what has changed on his blog.

The previous document was not very clear, mentioning ‘virtual cores’. Actually, ‘virtual cores’ are the number of cores that are behind the vCPU: one for T2 instances (and m3.medium), two for hyper-threaded instances. And this is listed precisely on https://aws.amazon.com/ec2/virtualcores/ for Amazon. It is clear also for Azure as there is no hyper-threading.

But what has changed is that there were previously two mentions of applying the core factor:

  • An example on 8 virtual cores requiring 8 * 0.5 = 4 processor licenses
  • The mention that AWS implements Oracle VM and then follow the rules of hard partitioning and ‘standard processor metric’ which, IMO, includes the core factor

Those two have disappeared in the new document, with an explicit mention that ‘When counting Oracle Processor license requirements in Authorized Cloud Environments, the Oracle Processor Core Factor Table is not applicable’. This makes the Oracle Cloud competitors two times more expensive than before for running Oracle Databases. The fun part is that IaaS providers that are not ‘Authorized’, or private clouds, can apply the core factor and count vCPUs as long as they virtualize with Oracle VM on Intel Xeon. But hosting companies may have different rules in their contracts.

NUP

When licensing in NUP, the metric is not the number of processors. But you still have to count them, because you need at least 25 NUP per processor. This means that if you have 25 NUP only (for a development environment with few developers, for example), you can run it on 2 cores in the Oracle Public Cloud, but only on one core on the other ‘Authorized’ clouds.
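A rough worked example, again my interpretation only: a small development database sized at 4 hyper-threaded Intel Xeon cores counts as 2 processor licenses on the Oracle Public Cloud (4 OCPUs), so at least 2 x 25 = 50 NUP, while the same 4 cores on AWS or Azure count as 4 processor licenses, so at least 4 x 25 = 100 NUP, even if the actual number of named users is lower.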

Standard Edition

Note that the core factor has nothing to do with Standard Edition. The processor metric for Standard Edition is not a core but a socket. On current Intel processors, a socket can run 12, 18, or 24 cores. But the maths are different in the Cloud from an Oracle licensing perspective.

In the Oracle Cloud, one ‘socket’ is 4 OCPUs, which is 4 cores (8 threads). An instance with 1 to 8 vCPUs requires 1 SE2 processor license. 9 to 16 vCPUs require 2 SE2 processor licenses, and I think that is the maximum you can have on the Oracle Cloud. So the idea is that one socket is equivalent to 4 cores. This is low, but it was worse before October, when one ‘socket’ was only 2 cores.

It is worse in the competitors’ clouds, as this has not changed and it is still one ‘socket’ for 2 cores. This means that in Azure an SE2 processor license covers 2 cores, and in Amazon it covers 4 vCPUs (which are 4 threads on 2 cores).

Standard Edition 2

Now, if you remember that SE2 has a limit of 16 threads on-premises, the limit in the Cloud is 8 vCPUs on Amazon. You cannot run SE2 on an AWS instance with more than 8 vCPUs. This is a big limitation. On-premises, the limit is a soft one: resource manager will schedule at most 16 user sessions on CPU, but more resources can be used by background processes. On AWS, you are limited to 8 CPU threads in total, which means that you have only 4 cores for all processes – user and background included. And CPU starvation is not very good when you wait for LGWR.

This is even worse on Azure where the limit is 4 cores without hyper-threading.

This is clearly stated in the ‘Authorized Cloud’ document:
‘Oracle Standard Edition One and Standard Edition 2 may only be licensed on Authorized Cloud Environment instances up to eight Amazon vCPUs or four Azure CPU Cores.’ Here again, SE2 comes at the price of SE but with the limitations of SE1.

Once again, the rules are more friendly for the Oracle Cloud, where you can create a Standard Edition instance on 16 OCPUs (which is 32 threads). The SE2 resource manager will still limit to 16 user sessions, and they will each have one core. And background processes will be able to use the additional resources coming from hyper-threading.

ULA

There are two important points regarding ULA and ‘authorized clouds’: ULA licenses can be used during the ULA, but they don’t count towards the certification when exiting the ULA.

So what?

I don’t think I’ll risk a conclusion here. Having different prices for the same software running on similar hardware doesn’t look fair. I’m more annoyed by the difficulty of comparing, especially when the rules change from time to time. But I can understand that vendors want to push their own products.

Let’s get technical. My recommendation is to use the minimum of CPU resources, so that you don’t have to run your application on dozens of vCPUs. This is why we like Oracle: it is very efficient both for OLTP and BI. Most of the code used by your critical use-cases was optimized decades ago for mono-core processors. Look at an AWR report, identify the queries that are responsible for most of the CPU, and tune them. Stop hard parsing over and over. Stop reading the same blocks again and again. Stop doing row-by-row round-trips to your database. Don’t waste your CPU cycles doing all this unnecessary work and you will save money, whatever the ‘core factor’ is.
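As a starting point for that kind of tuning, here is a quick sketch of a query (my own, not from any Oracle document) listing the top CPU consumers since instance startup; with the Diagnostics Pack licensed you would rather look at DBA_HIST_SQLSTAT between two AWR snapshots:

-- top 10 statements by CPU time since instance startup
SELECT *
  FROM (SELECT sql_id,
               executions,
               ROUND(cpu_time / 1e6)     AS cpu_seconds,
               ROUND(elapsed_time / 1e6) AS elapsed_seconds
          FROM v$sqlstats
         ORDER BY cpu_time DESC)
 WHERE ROWNUM <= 10;

The sql_id values returned are the ones worth looking at first, before adding cores or licenses.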

 

The article Oracle Core factor and Oracle or non-Oracle Cloud appeared first on Blog dbi services.

Oracle 12c – How to Prevent License Violation with Active Data Guard

Sat, 2017-01-28 05:00

There are some articles floating around about how to prevent a license violation with Active Data Guard. Some of them relate to an underscore parameter “_query_on_physical” and some of them to a startup trigger. Both have advantages and disadvantages. E.g. regarding “_query_on_physical”, I can’t find any MOS note about it, and I don’t know the side effects.

Oracle makes it hard to disable features that we don’t want to be enabled by accident. It gets much better with 12.2, where you can use lockdown profiles. My colleague Franck explained very well at the DOAG 2016 how this new feature works.

http://www.doag.org/formes/pubfiles/8586609/docs/Konferenz/2016/vortraege/Datenbank/2016-DB-Franck_Pachot-Multitenant_New_Security_Features_Clarify_DevOps_DBA_roles-Praesentation.pdf

But for now, I am on 12cR1 and I need a solution for that version. With Active Data Guard it is especially easy to activate it by accident: just type in “startup” on the standby, and then you have it already. Nothing more is needed.

Nevertheless, I have 12cR1 here, and my favorite way to prevent a license violation with Active Data Guard is related to cluster resources, in combination with the Data Guard Broker and an Observer. If all of them are in place and you are on the right patch level, then it works. The patch level is especially important, as we will see later. It is also important that you work only with the Broker commands or with the srvctl utility.

In my case I have a primary single instance called DBIT121_SITE1 and a standby single instance called DBIT121_SITE2. After the Data Guard has been setup, it is time to configure the Cluster Resources.

In this particular case, the most important parameters when you add the database cluster resources are “role” and “startoption”

$ srvctl add database -h | egrep '(<role>|<start_options>)' | tail -2
    -role <role>                   Role of the database (PRIMARY, PHYSICAL_STANDBY, LOGICAL_STANDBY, SNAPSHOT_STANDBY, FAR_SYNC)
    -startoption <start_options>   Startup options for the database. Examples of startup options are OPEN, MOUNT, or "READ ONLY".

With the parameter “role” you specify the role that your database has at the moment (not the future role). The role adjustments are done later by the Broker whenever you do a switchover or failover.

The role option is not only available with the “srvctl add database” command, it is also available with the “srvctl add service” command. Now it becomes really interesting: you tell Oracle to start the service only if the role is PRIMARY.

$ srvctl add service -h | grep '<role>'
    -role <role>                   Role of the service (primary, physical_standby, logical_standby, snapshot_standby)

Ok. Let’s create the cluster resources now.

-- Primary
$ srvctl add database -db DBIT121_SITE1 -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
-dbtype SINGLE -instance DBIT121 -node dbidg01 \
-spfile /u01/app/oracle/admin/DBIT121/pfile/spfileDBIT121.ora \
-pwfile /u01/app/oracle/admin/DBIT121/pfile/orapwDBIT121 \
-role PRIMARY -startoption OPEN \
-dbname DBIT121

$ srvctl add service -db DBIT121_SITE1 -service DBIT121_SERVICE -role primary \
-failovertype SELECT -notification TRUE -tafpolicy BASIC

-- Standby
$ srvctl add database -db DBIT121_SITE2 -oraclehome /u01/app/oracle/product/12.1.0.2/dbhome_1 \
-dbtype SINGLE -instance DBIT121 -node dbidg02 \ 
-spfile /u01/app/oracle/admin/DBIT121/pfile/spfileDBIT121.ora \
-pwfile /u01/app/oracle/admin/DBIT121/pfile/orapwDBIT121 \
-role PHYSICAL_STANDBY -startoption MOUNT \
-dbname DBIT121

$ srvctl add service -db DBIT121_SITE2 -service DBIT121_SERVICE -role primary \
-failovertype SELECT -notification TRUE -tafpolicy BASIC

To test if everything works, simply do a “SWITCHOVER” with the Data Guard Broker and check the Cluster Resources afterwards. After a role change, you should see the following Cluster resource entries on the Primary
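A minimal sketch of such a switchover test with DGMGRL could look like this (the connect string is just a placeholder; the Broker configuration itself is shown further down in this post):

$ dgmgrl sys/***@DBIT121_SITE1
DGMGRL> show configuration;
DGMGRL> switchover to 'DBIT121_SITE2';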

$ crsctl stat res ora.dbit121_site1.db -p | egrep '(USR_ORA_OPEN_MODE|ROLE)'
ROLE=PRIMARY
USR_ORA_OPEN_MODE=open

and these ones on the Standby

$ crsctl stat res ora.dbit121_site2.db -p | egrep '(USR_ORA_OPEN_MODE|ROLE)'
ROLE=PHYSICAL_STANDBY
USR_ORA_OPEN_MODE=mount

Oracle preserves the Open modes and also some other stuff like Block Change Tracking. If Active Data Guard was not enabled beforehand, it will also not be enabled afterwards (this is at least how it should be), and besides that, Oracle also disables the “Block Change Tracking” feature on the new Standby, because this would need the Active Data Guard license as well.

alert.log
...
Completed: ALTER DATABASE SWITCHOVER TO 'DBIT121_SITE2'
Target standby DBIT121_SITE2 did not have Active Data Guard enabled at the time of switchover.
To maintain Active Data Guard license compliance Block Change Tracking will be disabled.
Fri Jan 27 08:49:23 2017
..

But the final and most important test is killing the PMON on the standby. In GI versions below 12.1.0.2 with the 2016 Oct PSU, you might end up with Active Data Guard enabled. Oops …
Everything was set up correctly, but it still did not work as expected. I just simulated a background process dying. This could happen in reality, for example due to a bug with “_use_single_log_writer=false” (which is the default with 12c), or simply because someone accidentally kills the wrong process.

$ ps -ef | grep ora_pmon_DBIT121 | grep -v grep 
oracle 639 1 0 13:31 ? 00:00:00 ora_pmon_DBIT121

$ kill -9 639 

alert.log 
... 
... 
Physical Standby Database mounted. 
Lost write protection mode set to "typical" 
Completed: ALTER DATABASE MOUNT /* db agent *//* {0:33:25} */ 
ALTER DATABASE OPEN /* db agent *//* {0:33:25} */ 
Data Guard Broker initializing... 
... 

Physical standby database opened for read only access. 
Completed: ALTER DATABASE OPEN /* db agent *//* {0:33:25} */ 

... 

SQL> select open_mode from v$database; 

OPEN_MODE 
-------------------- 
READ ONLY WITH APPLY

After killing the PMON, the instance dies and the cluster takes over, which is very good. However, the cluster ignored the startup options which I had configured beforehand. After upgrading GI and the database to 12.1.0.2 with the 2016 Oct PSU, I could not reproduce this issue anymore, and I have a good solution for preventing Active Data Guard from being activated.

But what happens if my primary host dies and a failover is initiated by the observer? Then I have two cluster resources with role PRIMARY and startup option OPEN. Let’s simulate this scenario by doing a shutdown abort with srvctl.

DGMGRL> show configuration;

Configuration - DBIT121

  Protection Mode: MaxAvailability
  Members:
  DBIT121_SITE1 - Primary database
    DBIT121_SITE2 - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 5 seconds ago)


$ srvctl stop database -db DBIT121_SITE1 -stopoption ABORT

 

After 30 seconds, the observer initiated a fast start failover, and the new primary is now on SITE2.

Initiating Fast-Start Failover to database "DBIT121_SITE2"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "DBIT121_SITE2"

On SITE1 I still have the old Primary with Startup option OPEN. Not an issue at the moment, because it is a Primary and on a Primary there is no Active Data Guard. After I start up SITE1, a few moments later the reinstate takes place. Therefore, the database has to be brought again into the MOUNT state to do a “FLASHBACK DATABASE”.

$ srvctl start database -db DBIT121_SITE1

observer.log
...
Initiating reinstatement for database "DBIT121_SITE1"...
Reinstating database "DBIT121_SITE1", please wait...
Reinstatement of database "DBIT121_SITE1" succeeded

broker.log on old Primary
...
Data Guard notifying Oracle Clusterware to prepare database for role change
Database Reinstate needs instance count to be reduced to 1
Flashback SCN is 22408550; DB checkpoint SCN is 22405622. Flashback to SCN 22408550.
01/28/2017 10:59:25
Physical Standby Reinstatement: Converting old primary to a physical standby
01/28/2017 10:59:34
Conversion to physical standby database succeeded
Instance restart not required
Purging diverged redos on resetlogs branch 933516912, starting SCN 22408551
Purged 0 archived logs
Target standby DBIT121_SITE2 did not have Active Data Guard enabled at the time of failover.
To maintain Active Data Guard license compliance Block Change Tracking will be disabled.
01/28/2017 10:59:42
Notifying Oracle Clusterware to buildup after database reinstatement

The broker knows that Active Data Guard was not enabled beforehand, and so it does not enable it now.

$ crsctl stat res ora.DBIT121_SITE1.db -p | egrep '(USR_ORA_OPEN_MODE|ROLE)'
ROLE=PHYSICAL_STANDBY
USR_ORA_OPEN_MODE=mount


SQL> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

That’s it. This is my way to prevent Active Data Guard from being activated. :-)

Conclusion

Using cluster resources to prevent Active Data Guard from being activated is a fully supported way. You only need to take care that you are on GI/DB and Observer version 12.1.0.2 with the 2016 Oct PSU or higher. Before that patch level, it never worked correctly for me with cluster resources. Besides that, use only the Broker and the srvctl cluster commands to manage your Data Guard environment.

 

The article Oracle 12c – How to Prevent License Violation with Active Data Guard appeared first on Blog dbi services.

Temporal tables with PostgreSQL

Fri, 2017-01-27 15:51

In this blog we are going to talk about a nice extension in PostgreSQL: temporal_tables. This extension provides support for temporal tables.
What is a temporal table? Just a table that tracks the period of validity of a row.
When implemented, this feature allows you to specify that old rows are archived into another table (called the history table). This can be useful for many purposes:
-Audit
-Comparison
-Checking the table state in the past
First we have to install the temporal_tables extension. We are going to use the pgxn client to install it.
Install the yum repository for PostgreSQL

[root@pgserver1 ~]# rpm -ivh https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-oraclelinux96-9.6-3.noarch.rpm
Retrieving https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-oraclelinux96-9.6-3.noarch.rpm
warning: /var/tmp/rpm-tmp.3q9X12: Header V4 DSA/SHA1 Signature, key ID 442df0f8: NOKEY
Preparing... ################################# [100%] Updating / installing...
1:pgdg-oraclelinux96-9.6-3 ################################# [100%] [root@pgserver1 ~]#

And after we install the pgxn client

root@pgserver1 ~]# yum search pgxn
Loaded plugins: langpacks, ulninfo
pgdg96 | 4.1 kB 00:00:00
(1/2): pgdg96/7Server/x86_64/group_gz | 249 B 00:00:00
(2/2): pgdg96/7Server/x86_64/primary_db | 127 kB 00:00:00
==================================================== N/S matched: pgxn =====================================================
pgxnclient.x86_64 : Command line tool designed to interact with the PostgreSQL Extension Network
Name and summary matches only, use "search all" for everything.


[root@pgserver1 ~]# yum install pgxnclient.x86_64
Loaded plugins: langpacks, ulninfo
Resolving Dependencies
--> Running transaction check
---> Package pgxnclient.x86_64 0:1.2.1-2.rhel7 will be installed
....
....
Installed:
pgxnclient.x86_64 0:1.2.1-2.rhel7
Complete!
[root@pgserver1 ~]#

And finally we can install the extension

[root@pgserver1 ~]# pgxn install temporal_tables --pg_config=/u01/app/PostgreSQL/9.6/bin/pg_config
INFO: best version: temporal_tables 1.1.1
INFO: saving /tmp/tmpJit39m/temporal_tables-1.1.1.zip
INFO: unpacking: /tmp/tmpJit39m/temporal_tables-1.1.1.zip
INFO: building extension
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -DMAP_HUGETLB=0x40000 -fpic -I. -I./ -I/u01/app/PostgreSQL/9.6/include/postgresql/server -I/u01/app/PostgreSQL/9.6/include/postgresql/internal -D_GNU_SOURCE -I/opt/local/Current/include/libxml2 -I/opt/local/Current/include -c -o temporal_tables.o temporal_tables.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -DMAP_HUGETLB=0x40000 -fpic -I. -I./ -I/u01/app/PostgreSQL/9.6/include/postgresql/server -I/u01/app/PostgreSQL/9.6/include/postgresql/internal -D_GNU_SOURCE -I/opt/local/Current/include/libxml2 -I/opt/local/Current/include -c -o versioning.o versioning.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -DMAP_HUGETLB=0x40000 -fpic -shared -o temporal_tables.so temporal_tables.o versioning.o -L/u01/app/PostgreSQL/9.6/lib -L/opt/local/Current/lib -Wl,--as-needed -Wl,-rpath,'/u01/app/PostgreSQL/9.6/lib',--enable-new-dtags
INFO: installing extension
/bin/mkdir -p '/u01/app/PostgreSQL/9.6/lib/postgresql'
/bin/mkdir -p '/u01/app/PostgreSQL/9.6/share/postgresql/extension'
/bin/mkdir -p '/u01/app/PostgreSQL/9.6/share/postgresql/extension'
/bin/mkdir -p '/u01/app/PostgreSQL/9.6/doc/postgresql/extension'
/usr/bin/install -c -m 755 temporal_tables.so '/u01/app/PostgreSQL/9.6/lib/postgresql/temporal_tables.so'
/usr/bin/install -c -m 644 .//temporal_tables.control '/u01/app/PostgreSQL/9.6/share/postgresql/extension/'
/usr/bin/install -c -m 644 .//temporal_tables--1.1.1.sql .//temporal_tables--1.0.0--1.0.1.sql .//temporal_tables--1.0.1--1.0.2.sql .//temporal_tables--1.0.2--1.1.0.sql .//temporal_tables--1.1.0--1.1.1.sql '/u01/app/PostgreSQL/9.6/share/postgresql/extension/'
/usr/bin/install -c -m 644 .//README.md '/u01/app/PostgreSQL/9.6/doc/postgresql/extension/'
[root@pgserver1 ~]#

Once the installation is done, we can load the extension into our database.

[postgres@pgserver1 extension]$ psql
Password:
psql.bin (9.6.1)
Type "help" for help.
postgres=# CREATE EXTENSION temporal_tables;
CREATE EXTENSION
postgres=#

We can then verify that the temporal extension is now present in our database.

postgres=# \dx
List of installed extensions
Name | Version | Schema | Description
-----------------+---------+------------+-----------------------------------------
adminpack | 1.0 | pg_catalog | administrative functions for PostgreSQL
plpgsql | 1.0 | pg_catalog | PL/pgSQL procedural language
temporal_tables | 1.1.1 | public | temporal tables
(3 rows)

postgres=# \dx+ temporal_tables
Objects in extension "temporal_tables"
Object Description
----------------------------------------------------
function set_system_time(timestamp with time zone)
function versioning()
(2 rows)

For the demonstration, we create the following table Customers

CREATE TABLE Customers (
CustNo SERIAL NOT NULL,
CustName VARCHAR(30) NOT NULL,
start_date timestamp NOT NULL DEFAULT now(),
PRIMARY KEY (CustNo)
);

In order to make this table a system-period temporal table, we should first add a system period column

postgres=# ALTER TABLE Customers ADD COLUMN sys_period tstzrange NOT NULL;
ALTER TABLE

Then we need a history table that contains the archived rows of our table. The easiest way to create it is by using a LIKE statement

postgres=# CREATE TABLE Customers_history (LIKE Customers);
CREATE TABLE

Finally we create a trigger on our table to link it with the history table

postgres=# CREATE TRIGGER customers_hist_trigger BEFORE INSERT OR UPDATE OR DELETE ON Customers FOR EACH ROW
EXECUTE PROCEDURE versioning('sys_period', 'Customers_history', true);
CREATE TRIGGER
postgres=#

Now Let’s insert data into customers

insert into customers (custname,start_date) values ('HP','2013-08-05 00:00:00');
insert into customers (custname,start_date) values ('IBM','2014-10-10 00:00:00');
insert into customers (custname,start_date) values ('DBI','2017-01-07 00:00:00');
insert into customers (custname) values ('DHL');

We can see the rows in customers below.
For example, the row concerning IBM was inserted on 2017-01-26 10:48:49. This information is stored in the sys_period column and represents the start of validity of the row. Note the borders [,). The lower bound is [ and thus inclusive. The upper bound is ) which means it is exclusive.
For IBM [“2017-01-26 10:48:49.768031+01″,) means
Start of validity: 2017-01-26 10:48:49.
End of validity: infinity (because there is nothing).

postgres=# table customers;
custno | custname | start_date | sys_period
--------+----------+----------------------------+------------------------------------
1 | IBM | 2014-10-10 00:00:00 | ["2017-01-26 10:48:49.768031+01",)
2 | DBI | 2017-01-07 00:00:00 | ["2017-01-26 10:48:49.778487+01",)
3 | DHL | 2017-01-26 10:48:49.841405 | ["2017-01-26 10:48:49.841405+01",)
4 | HP | 2013-08-05 00:00:00 | ["2017-01-26 10:50:21.275201+01",)
(4 rows)
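If you want to double-check the border semantics, the standard range functions can confirm them (a small query of my own on top of the table above, not something provided by the extension):

select custname,
       lower_inc(sys_period) as lower_inclusive,
       upper_inc(sys_period) as upper_inclusive,
       upper_inf(sys_period) as no_upper_bound
from customers
where custname = 'IBM';

For the IBM row this should return true, false and true: the lower bound is included, and there is no upper bound yet.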

The table Customers_history is empty. This is normal because no update or delete has been done yet; we have only inserted rows.

postgres=# table customers_history;
custno | custname | start_date | sys_period
--------+----------+------------+------------
(0 rows)
postgres=#

Let’s do an update on customers, but before that, let’s display the current time.
postgres=# select now();
now
-------------------------------
2017-01-26 11:02:32.381634+01
(1 row)


postgres=# update customers set custname='HPSuisse' where custno=4;
UPDATE 1
postgres=#

Checking the customers table again, we can see that the validity of the row concerning HPSuisse starts at 2017-01-26 11:02:46.

postgres=# table customers;
custno | custname | start_date | sys_period
--------+----------+----------------------------+------------------------------------
1 | IBM | 2014-10-10 00:00:00 | ["2017-01-26 10:48:49.768031+01",)
2 | DBI | 2017-01-07 00:00:00 | ["2017-01-26 10:48:49.778487+01",)
3 | DHL | 2017-01-26 10:48:49.841405 | ["2017-01-26 10:48:49.841405+01",)
4 | HPSuisse | 2013-08-05 00:00:00 | ["2017-01-26 11:02:46.347574+01",)
(4 rows)

If we now query the table customers_history, we can see the previous version of the row we updated in customers, together with its period of validity.

postgres=# table customers_history;
custno | custname | start_date | sys_period
--------+----------+---------------------+-------------------------------------------------------------------
4 | HP | 2013-08-05 00:00:00 | ["2017-01-26 10:50:21.275201+01","2017-01-26 11:02:46.347574+01")

Let’s do a delete on the table customers

postgres=# select now();
now
-------------------------------
2017-01-26 11:32:12.229105+01
(1 row)


postgres=# delete from customers where custno=3;
DELETE 1

Below are the remaining rows in the table customers

postgres=# table customers;
custno | custname | start_date | sys_period
--------+----------+---------------------+------------------------------------
1 | IBM | 2014-10-10 00:00:00 | ["2017-01-26 10:48:49.768031+01",)
2 | DBI | 2017-01-07 00:00:00 | ["2017-01-26 10:48:49.778487+01",)
4 | HPSuisse | 2013-08-05 00:00:00 | ["2017-01-26 11:02:46.347574+01",)
(3 rows)

And in the history table, we can see a new row with its validity period.

postgres=# table customers_history;
custno | custname | start_date | sys_period
--------+----------+----------------------------+-------------------------------------------------------------------
4 | HP | 2013-08-05 00:00:00 | ["2017-01-26 10:50:21.275201+01","2017-01-26 11:02:46.347574+01")
3 | DHL | 2017-01-26 10:48:49.841405 | ["2017-01-26 10:48:49.841405+01","2017-01-26 11:32:15.370438+01")
(2 rows)
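This also covers the third purpose listed at the beginning: checking the table state in the past. A minimal sketch of a point-in-time query (my own, not something the extension provides) combines the live table and the history table with the range containment operator @>:

select custno, custname, start_date
from customers
where sys_period @> timestamptz '2017-01-26 11:00:00+01'
union all
select custno, custname, start_date
from customers_history
where sys_period @> timestamptz '2017-01-26 11:00:00+01';

Run after the delete above, this should return all four customers as they looked at 11:00: IBM and DBI from the live table, and HP and DHL from the history table.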

Conclusion
In this blog we saw how temporal tables can be implemented with PostgreSQL using the extension temporal_tables. This feature can help with auditing, archiving, and so on.
And the history table can be moved to cheaper storage.

 

The article Temporal tables with PostgreSQL appeared first on Blog dbi services.

Part 2 – vagrant up – get your Oracle infrastructure up and running

Thu, 2017-01-26 08:31

Last week, in the first part of this blog, we saw a short introduction on how to set up an Oracle infrastructure with Vagrant and Ansible. Remember, all the files for this example are available here: https://github.com/nkadbi/oracle-db-12c-vagrant-ansible
Get the example code:

git clone https://github.com/nkadbi/oracle-db-12c-vagrant-ansible

If you have prepared your environment with Ansible, Vagrant and Oracle VirtualBox installed – and provided the Oracle software zip files –
then you can just start to build your test infrastructure with the simple call: vagrant up
Cleanup is also easy – stop the vagrant machines and delete all traces with:
vagrant destroy
How does this work?
vagrant up starts Vagrant, which will set up two virtual servers using a sample box with CentOS 7.2.
When this has finished, Vagrant calls Ansible for provisioning, which configures the Linux servers, installs the Oracle software and creates your databases on the target servers in parallel.

Vagrant configuration
All the configuration for Vagrant is in one file called Vagrantfile
I used a box with CentOS 7.2 which you can find among other vagrant boxes here https://atlas.hashicorp.com/search
config.vm.box = "boxcutter/centos72"
If you start vagrant up the first time, it will download the vagrant box:
$ vagrant up

Bringing machine 'dbserver1' up with 'virtualbox' provider...
Bringing machine 'dbserver2' up with 'virtualbox' provider...
==> dbserver1: Box 'boxcutter/centos72' could not be found. Attempting to find and install...
dbserver1: Box Provider: virtualbox
dbserver1: Box Version: >= 0
==> dbserver1: Loading metadata for box 'boxcutter/centos72'
dbserver1: URL: https://atlas.hashicorp.com/boxcutter/centos72
==> dbserver1: Adding box 'boxcutter/centos72' (v2.0.21) for provider: virtualbox
dbserver1: Downloading: https://atlas.hashicorp.com/boxcutter/boxes/centos72/versions/2.0.21/providers/virtualbox.box
==> dbserver1: Successfully added box 'boxcutter/centos72' (v2.0.21) for 'virtualbox'!
==> dbserver1: Importing base box 'boxcutter/centos72'...

I have chosen a private network for the virtual servers and use the vagrant hostmanager plugin to take care of the /etc/hosts files on all guest machines (and optionally your localhost).
You can add this plugin to vagrant with:
vagrant plugin install vagrant-hostmanager
The corresponding part in the Vagrantfile will look like this:
config.hostmanager.enabled = true
config.hostmanager.ignore_private_ip = false # include private IPs of your VM's
config.vm.hostname = "dbserver1"
config.vm.network "private_network", ip: "192.168.56.31"

ssh Configuration
The Vagrant box already comes with an ssh key configuration and – if security does not matter in your demo environment – the easiest way to configure the ssh connection to your guest nodes is to use the same ssh key for all created virtual hosts.
config.ssh.insert_key = false # Use the same insecure key provided from the box for each machine
After bringing up the virtual servers you can display the ssh settings:
vagrant ssh-config
The important lines from the output are:
Host dbserver1
HostName 127.0.0.1
User vagrant
Port 2222
IdentityFile /home/user/.vagrant.d/insecure_private_key
You should be able to reach your guest server without a password with the user vagrant:
vagrant ssh dbserver1
Then you can switch to the user oracle (password = welcome1) or root (default password for vagrant boxes: vagrant):
su - oracle
or directly connect with ssh:
ssh vagrant@127.0.0.1 -p 2222 -i /home/user/.vagrant.d/insecure_private_key
Virtual Disks
I added additional virtual disks because I wanted to separate the data file destination from the fast recovery area destination.
# attach disks only locally
if ! File.exist?("dbserver#{i}_disk_a.vdi") # create disks only once
  v.customize ['createhd', '--filename', "dbserver#{i}_disk_a.vdi", '--size', 8192 ]
  v.customize ['createhd', '--filename', "dbserver#{i}_disk_b.vdi", '--size', 8192 ]
  v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_a.vdi"]
  v.customize ['storageattach', :id, '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', "dbserver#{i}_disk_b.vdi"]
end # create disks only once

Provisioning with Ansible
At the end of the Vagrantfile provisioning with Ansible is called.
N = 2
(1..N).each do |i| # do for each server i
  ...
  if i == N
    config.vm.provision "ansible" do |ansible| # vm.provisioning
      #ansible.verbose = "v"
      ansible.playbook = "oracle-db.yml"
      ansible.groups = { "dbserver" => ["dbserver1","dbserver2"] }
      ansible.limit = 'all'
    end # end vm.provisioning
  end
end
To prevent the Ansible provisioning from starting before all servers have been set up by Vagrant, I included the condition if i == N, where N is the number of desired servers.

Ansible Inventory
The Ansible Inventory is a collection of guest hosts against which Ansible will work.
You can either put the information in an inventory file or let Vagrant create an Inventory file for you. Vagrant does this if you did not specify any inventory file.
To enable Ansible to connect to the target hosts without password Ansible has to know the ssh key provided by the vagrant box.
Example Ansible Inventory:
# Generated by Vagrant
dbserver2 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2200 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
dbserver1 ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='vagrant' ansible_ssh_private_key_file='/home/user/.vagrant.d/insecure_private_key'
[dbserver]
dbserver1
dbserver2
You can see that the inventory created by Vagrant provides the necessary information for Ansible to connect to the targets and also defines the group dbserver, which includes the servers dbserver1 and dbserver2.

Ansible configuration
Tell Ansible where to find the inventory in the ansible.cfg:
nocows=1
hostfile = .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory
host_key_checking = False

Ansible Variables
In this example I have put the general variables for all servers containing an Oracle Database into this file:
group_vars/dbserver
The more specific variables, including the variables used to create the database (like the database name or the character set),
can be adapted individually for each server:
host_vars/dbserver1, host_vars/dbserver2

Ansible Playbook
The Ansible playbook is a simple text file written in YAML syntax, which is easily readable.
Our playbook oracle-db.yml has only one play, called "Configure Oracle Linux 7 with Oracle Database 12c", which will be applied on all servers belonging to the group dbserver. In my example, Vagrant creates the vagrant inventory and initiates the play of the playbook, but you can also start it stand-alone or repeat it if you want.
ansible-playbook oracle-db.yml
This is the whole playbook, to configure the servers and install Oracle Databases:
$cat oracle-db.yml
---
- name: Configure Oracle Linux 7 with Oracle Database 12c
  hosts: dbserver
  become: True
  vars_files:
    # User Passwords hashed are stored here:
    - secrets.yml
  roles:
    - role: disk_layout
    - role: linux_oracle
    - role: oracle_sw_install
      become_user: '{{ oracle_user }}'
    - role: oracle_db_create
      become_user: '{{ oracle_user }}'

Ansible roles
To make the playbook oracle-db.yml lean and more flexible, I have split all the tasks into different roles. This makes it easy to reuse parts of the playbook or to skip parts. For example, if you only want to install the Oracle software on the server, but do not want to create databases, you can just delete the role oracle_db_create from the playbook.
You (and Ansible) will find the files containing the tasks for a role in roles/my_role_name/tasks/main.yml.
There can be further directories. The default directory structure looks like below. If you want to create a new role you can even create the directory structure by using ansible-galaxy. Ansible Galaxy is Ansible’s official community hub for sharing Ansible roles. https://galaxy.ansible.com/intro

# example to create the directory structure for the role "my_role_name"
ansible-galaxy init my_role_name


# default Ansible role directory structure
roles/
  my_role_name/
    defaults/
    files/
    handlers/
    meta/
    tasks/
    templates/
    vars/

Ansible Modules
Ansible will run the tasks described in the playbook on the target servers by invoking Ansible Modules.
This Ansible Web Page http://docs.ansible.com/ansible/list_of_all_modules.html shows information about Modules ordered by categories.
You can also get information about all the Ansible modules from command line:

# list all modules
ansible-doc --list
# example to show documentation about the Ansible module "copy"
ansible-doc copy

One Example:
To install the Oracle software with a response file, I use the Ansible module called "template". Ansible uses Jinja2, a templating engine for Python.
This makes it very easy to design reusable templates. For example, Ansible will replace {{ oracle_home }} with the variable which I have defined in group_vars/dbserver, and then copies the response file to the target servers:

Snippet from the Jinja2 template db_install.rsp.j2

#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Home.
#-------------------------------------------------------------------------------
ORACLE_HOME={{ oracle_home }}

Snippet from roles/oracle_sw_install/tasks/main.yml

- name: Generate the response file for software only installation
  template: src=db_install.rsp.j2 dest={{ installation_folder }}/db_install.rsp

Ansible Adhoc Commands – Some Use Cases
Immediately after installing Ansible, you can already use it to gather facts from your localhost, which will give you a lot of information:
ansible localhost -m setup
Use an Ansible ad-hoc command with the module ping to check if you can reach all target servers listed in your inventory file:

$ ansible all -m ping
dbserver2 | SUCCESS => {
"changed": false,
"ping": "pong"
}
dbserver1 | SUCCESS => {
"changed": false,
"ping": "pong"
}

File transfer – spread a file to all servers in the group dbserver
ansible dbserver -m copy -b -a "src=/etc/hosts dest=/etc/hosts"

Conclusion
With the open source tools Vagrant and Ansible you can easily automate the setup of your infrastructure.
Even if you do not want to automate everything, Ansible can still help you with your daily work if you want to check or apply something on several servers.
Just group your servers in an inventory and run an Ansible Adhoc Command or write a small playbook.
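For example, a trivial ad-hoc check against the dbserver group from the inventory above (uptime is just an illustration, any command would do):

ansible dbserver -a "uptime"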

Please keep in mind that this is a simplified example of an automated Oracle Database installation.
Do not use this example for production environments.

 

The article Part 2 – vagrant up – get your Oracle infrastructure up and running appeared first on Blog dbi services.

Oracle 12cR2 – RMAN cold backup with TAGs

Thu, 2017-01-26 07:35

I am planning to back up my 12cR2 container database, because a huge application change is coming up,
and I want to be sure that I have a good RMAN backup beforehand. For that particular DB, I want to do it with a cold backup in combination with RMAN tags. Unfortunately, I don’t have any backups at the moment, so I start with a full backup with the TAG ‘DBI_BACKUP’ to be 100% sure that I restore the correct one.

RMAN> list backup summary;

specification does not match any backup in the repository

RMAN> shutdown immediate

database closed
database dismounted
Oracle instance shut down

RMAN> startup mount

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area    1795162112 bytes

Fixed Size                     8793832 bytes
Variable Size                553648408 bytes
Database Buffers            1224736768 bytes
Redo Buffers                   7983104 bytes

RMAN> run
    {
         allocate channel c1 device type disk format '/u99/backup/CDB/database_%U';
     allocate channel c2 device type disk format '/u99/backup/CDB/database_%U';
         allocate channel c3 device type disk format '/u99/backup/CDB/database_%U';
     allocate channel c4 device type disk format '/u99/backup/CDB/database_%U';
     BACKUP INCREMENTAL LEVEL 0 FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog tag 'DBI_BACKUP';
         backup current controlfile tag 'DBI_BACKUP' format '/u99/backup/CDB/control_%U';
         backup spfile tag 'DBI_BACKUP' format '/u99/backup/CDB/spfile_%U';
         release channel c1;
         release channel c2;
         release channel c3;
         release channel c4;
    }2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13> 14>

allocated channel: c1
channel c1: SID=16 device type=DISK

allocated channel: c2
channel c2: SID=271 device type=DISK

allocated channel: c3
channel c3: SID=31 device type=DISK

allocated channel: c4
channel c4: SID=272 device type=DISK


Starting backup at 26-JAN-2017 13:18:53
current log archived
channel c1: starting compressed archived log backup set
channel c1: specifying archived log(s) in backup set
input archived log thread=1 sequence=4 RECID=3 STAMP=934074668
input archived log thread=1 sequence=5 RECID=4 STAMP=934154679
channel c1: starting piece 1 at 26-JAN-2017 13:18:53
channel c2: starting compressed archived log backup set
channel c2: specifying archived log(s) in backup set
input archived log thread=1 sequence=2 RECID=1 STAMP=934038010
input archived log thread=1 sequence=3 RECID=2 STAMP=934066843
channel c2: starting piece 1 at 26-JAN-2017 13:18:53
channel c3: starting compressed archived log backup set
channel c3: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=5 STAMP=934203623
input archived log thread=1 sequence=7 RECID=6 STAMP=934275778
input archived log thread=1 sequence=8 RECID=7 STAMP=934284094
channel c3: starting piece 1 at 26-JAN-2017 13:18:53
channel c4: starting compressed archived log backup set
channel c4: specifying archived log(s) in backup set
input archived log thread=1 sequence=9 RECID=8 STAMP=934284153
input archived log thread=1 sequence=10 RECID=9 STAMP=934284199
input archived log thread=1 sequence=11 RECID=10 STAMP=934291133
channel c4: starting piece 1 at 26-JAN-2017 13:18:53
channel c4: finished piece 1 at 26-JAN-2017 13:18:54
piece handle=/u99/backup/CDB/database_2arr09lt_1_1 tag=DBI_BACKUP comment=NONE
channel c4: backup set complete, elapsed time: 00:00:01
channel c1: finished piece 1 at 26-JAN-2017 13:19:08
piece handle=/u99/backup/CDB/database_27rr09lt_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:15
channel c2: finished piece 1 at 26-JAN-2017 13:19:08
piece handle=/u99/backup/CDB/database_28rr09lt_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:15
channel c3: finished piece 1 at 26-JAN-2017 13:19:08
piece handle=/u99/backup/CDB/database_29rr09lt_1_1 tag=DBI_BACKUP comment=NONE
channel c3: backup set complete, elapsed time: 00:00:15
Finished backup at 26-JAN-2017 13:19:08

Starting backup at 26-JAN-2017 13:19:08
channel c1: starting compressed incremental level 0 datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c2wsf_.dbf
channel c1: starting piece 1 at 26-JAN-2017 13:19:09
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c49wd_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:19:09
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00010 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_sysaux_d81cgjc2_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:19:09
channel c4: starting compressed incremental level 0 datafile backup set
channel c4: specifying datafile(s) in backup set
input datafile file number=00009 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_system_d81cgjbv_.dbf
input datafile file number=00011 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_undotbs1_d81cgjc2_.dbf
channel c4: starting piece 1 at 26-JAN-2017 13:19:09
channel c4: finished piece 1 at 26-JAN-2017 13:19:24
piece handle=/u99/backup/CDB/database_2err09md_1_1 tag=TAG20170126T131908 comment=NONE
channel c4: backup set complete, elapsed time: 00:00:15
channel c4: starting compressed incremental level 0 datafile backup set
channel c4: specifying datafile(s) in backup set
input datafile file number=00006 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c6fqn_.dbf
channel c4: starting piece 1 at 26-JAN-2017 13:19:24
channel c3: finished piece 1 at 26-JAN-2017 13:19:39
piece handle=/u99/backup/CDB/database_2drr09md_1_1 tag=TAG20170126T131908 comment=NONE
channel c3: backup set complete, elapsed time: 00:00:30
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00013 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_rman_d8ccofgs_.dbf
input datafile file number=00012 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_users_d81cgq9f_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:19:39
channel c3: finished piece 1 at 26-JAN-2017 13:19:40
piece handle=/u99/backup/CDB/database_2grr09nb_1_1 tag=TAG20170126T131908 comment=NONE
channel c3: backup set complete, elapsed time: 00:00:01
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00005 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c6fqo_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:19:41
channel c2: finished piece 1 at 26-JAN-2017 13:19:41
piece handle=/u99/backup/CDB/database_2crr09md_1_1 tag=TAG20170126T131908 comment=NONE
channel c2: backup set complete, elapsed time: 00:00:32
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00008 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c6fqp_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:19:41
channel c2: finished piece 1 at 26-JAN-2017 13:19:44
piece handle=/u99/backup/CDB/database_2irr09nd_1_1 tag=TAG20170126T131908 comment=NONE
channel c2: backup set complete, elapsed time: 00:00:03
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00004 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c530h_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:19:44
channel c2: finished piece 1 at 26-JAN-2017 13:19:45
piece handle=/u99/backup/CDB/database_2jrr09ng_1_1 tag=TAG20170126T131908 comment=NONE
channel c2: backup set complete, elapsed time: 00:00:01
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00007 name=/u02/oradata/CDB/datafile/o1_mf_users_d81c542r_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:19:45
channel c2: finished piece 1 at 26-JAN-2017 13:19:46
piece handle=/u99/backup/CDB/database_2krr09nh_1_1 tag=TAG20170126T131908 comment=NONE
channel c2: backup set complete, elapsed time: 00:00:01
channel c1: finished piece 1 at 26-JAN-2017 13:19:52
piece handle=/u99/backup/CDB/database_2brr09md_1_1 tag=TAG20170126T131908 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:43
channel c3: finished piece 1 at 26-JAN-2017 13:19:52
piece handle=/u99/backup/CDB/database_2hrr09nd_1_1 tag=TAG20170126T131908 comment=NONE
channel c3: backup set complete, elapsed time: 00:00:11
channel c4: finished piece 1 at 26-JAN-2017 13:19:52
piece handle=/u99/backup/CDB/database_2frr09ms_1_1 tag=TAG20170126T131908 comment=NONE
channel c4: backup set complete, elapsed time: 00:00:28
Finished backup at 26-JAN-2017 13:19:52

Starting backup at 26-JAN-2017 13:19:52
current log archived
channel c1: starting compressed archived log backup set
channel c1: specifying archived log(s) in backup set
input archived log thread=1 sequence=12 RECID=11 STAMP=934291192
channel c1: starting piece 1 at 26-JAN-2017 13:19:53
channel c1: finished piece 1 at 26-JAN-2017 13:19:54
piece handle=/u99/backup/CDB/database_2lrr09np_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JAN-2017 13:19:54

Starting backup at 26-JAN-2017 13:19:54
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
including current control file in backup set
channel c1: starting piece 1 at 26-JAN-2017 13:19:55
channel c1: finished piece 1 at 26-JAN-2017 13:19:56
piece handle=/u99/backup/CDB/control_2mrr09nq_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JAN-2017 13:19:56

Starting backup at 26-JAN-2017 13:19:56
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel c1: starting piece 1 at 26-JAN-2017 13:19:56
channel c1: finished piece 1 at 26-JAN-2017 13:19:57
piece handle=/u99/backup/CDB/spfile_2nrr09ns_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JAN-2017 13:19:57

Starting Control File and SPFILE Autobackup at 26-JAN-2017 13:19:57
piece handle=/u03/fast_recovery_area/CDB/autobackup/2017_01_26/o1_mf_s_934291197_d8mtcfjz_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-JAN-2017 13:19:58

released channel: c1

released channel: c2

released channel: c3

released channel: c4

RMAN>

After the backup was done, I do a quick “list backup summary” to see if everything is there, and also check the destination directory.

RMAN> list backup summary tag 'DBI_BACKUP';

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
67      B  A  A DISK        26-JAN-2017 13:18:54 1       1       YES        DBI_BACKUP
68      B  A  A DISK        26-JAN-2017 13:19:02 1       1       YES        DBI_BACKUP
69      B  A  A DISK        26-JAN-2017 13:19:07 1       1       YES        DBI_BACKUP
70      B  A  A DISK        26-JAN-2017 13:19:07 1       1       YES        DBI_BACKUP
81      B  A  A DISK        26-JAN-2017 13:19:53 1       1       YES        DBI_BACKUP
82      B  F  A DISK        26-JAN-2017 13:19:55 1       1       NO         DBI_BACKUP
83      B  F  A DISK        26-JAN-2017 13:19:56 1       1       NO         DBI_BACKUP

RMAN>

oracle@dbidg03:/u99/backup/CDB/ [CDB] ls -l
total 975304
-rw-r----- 1 oracle oinstall  18792448 Jan 26 13:19 control_2mrr09nq_1_1
-rw-r----- 1 oracle oinstall 112111616 Jan 26 13:19 database_27rr09lt_1_1
-rw-r----- 1 oracle oinstall 112711168 Jan 26 13:19 database_28rr09lt_1_1
-rw-r----- 1 oracle oinstall  58626048 Jan 26 13:19 database_29rr09lt_1_1
-rw-r----- 1 oracle oinstall   3691520 Jan 26 13:18 database_2arr09lt_1_1
-rw-r----- 1 oracle oinstall 215056384 Jan 26 13:19 database_2brr09md_1_1
-rw-r----- 1 oracle oinstall 132710400 Jan 26 13:19 database_2crr09md_1_1
-rw-r----- 1 oracle oinstall 112173056 Jan 26 13:19 database_2drr09md_1_1
-rw-r----- 1 oracle oinstall  56778752 Jan 26 13:19 database_2err09md_1_1
-rw-r----- 1 oracle oinstall 110149632 Jan 26 13:19 database_2frr09ms_1_1
-rw-r----- 1 oracle oinstall   1507328 Jan 26 13:19 database_2grr09nb_1_1
-rw-r----- 1 oracle oinstall  54157312 Jan 26 13:19 database_2hrr09nd_1_1
-rw-r----- 1 oracle oinstall   7716864 Jan 26 13:19 database_2irr09nd_1_1
-rw-r----- 1 oracle oinstall   1327104 Jan 26 13:19 database_2jrr09ng_1_1
-rw-r----- 1 oracle oinstall   1073152 Jan 26 13:19 database_2krr09nh_1_1
-rw-r----- 1 oracle oinstall      7680 Jan 26 13:19 database_2lrr09np_1_1
-rw-r----- 1 oracle oinstall    114688 Jan 26 13:19 spfile_2nrr09ns_1_1

But to be really 100% sure that I can restore the backup from the TAG, I do a restore preview. The restore preview has existed for quite a while now, but it is not so widely used, for whatever reason. I find it quite useful.

RMAN> restore database preview from tag 'DBI_BACKUP';

Starting restore at 26-JAN-2017 13:22:49
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=16 device type=DISK

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of restore command at 01/26/2017 13:22:49
RMAN-06026: some targets not found - aborting restore
RMAN-06023: no backup or copy of datafile 13 found to restore
RMAN-06023: no backup or copy of datafile 12 found to restore
RMAN-06023: no backup or copy of datafile 11 found to restore
RMAN-06023: no backup or copy of datafile 10 found to restore
RMAN-06023: no backup or copy of datafile 9 found to restore
RMAN-06023: no backup or copy of datafile 8 found to restore
RMAN-06023: no backup or copy of datafile 7 found to restore
RMAN-06023: no backup or copy of datafile 6 found to restore
RMAN-06023: no backup or copy of datafile 5 found to restore
RMAN-06023: no backup or copy of datafile 4 found to restore
RMAN-06023: no backup or copy of datafile 3 found to restore
RMAN-06023: no backup or copy of datafile 1 found to restore

RMAN>

Oh no … that doesn’t look good. RMAN complains that no backup or copy exists for any of the datafiles. What is going on here? Is my backup useless? Yes and no. If I rely only on the TAG, then yes. However, the RMAN backups have been created successfully, but with two different TAGs. For the datafiles it used tag=TAG20170126T131908, and for the archivelogs, the controlfile and the spfile it used tag=DBI_BACKUP.

So what is wrong here? The TAG was simply specified at the wrong location. If you put tag after archivelog, then only the archivelogs get that tag.

BACKUP INCREMENTAL LEVEL 0 FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog tag 'DBI_BACKUP';

If you want to have the datafiles and the archivelogs tagged correctly, you have to put it after level 0 in my case. That’s usually enough.

BACKUP INCREMENTAL LEVEL 0 tag 'DBI_BACKUP' FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog;

Or, if you want to be doubly sure and you are sort of paranoid, you can specify it twice: once after LEVEL 0 and once after archivelog.

BACKUP INCREMENTAL LEVEL 0 tag 'DBI_BACKUP' FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog tag 'DBI_BACKUP';

OK, so let’s try it again from scratch, but this time I put the TAG after LEVEL 0.

RMAN> list backup summary;

specification does not match any backup in the repository

RMAN> shutdown immediate

database closed
database dismounted
Oracle instance shut down

RMAN> startup mount

connected to target database (not started)
Oracle instance started
database mounted

Total System Global Area    1795162112 bytes

Fixed Size                     8793832 bytes
Variable Size                553648408 bytes
Database Buffers            1224736768 bytes
Redo Buffers                   7983104 bytes


RMAN> run
    {
         allocate channel c1 device type disk format '/u99/backup/CDB/database_%U';
     allocate channel c2 device type disk format '/u99/backup/CDB/database_%U';
         allocate channel c3 device type disk format '/u99/backup/CDB/database_%U';
     allocate channel c4 device type disk format '/u99/backup/CDB/database_%U';
     BACKUP INCREMENTAL LEVEL 0 tag 'DBI_BACKUP' FORCE AS COMPRESSED BACKUPSET DATABASE plus archivelog;
         backup current controlfile tag 'DBI_BACKUP' format '/u99/backup/CDB/control_%U';
         backup spfile tag 'DBI_BACKUP' format '/u99/backup/CDB/spfile_%U';
         release channel c1;
         release channel c2;
         release channel c3;
         release channel c4;
    }2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12> 13> 14>

allocated channel: c1
channel c1: SID=237 device type=DISK

allocated channel: c2
channel c2: SID=20 device type=DISK

allocated channel: c3
channel c3: SID=254 device type=DISK

allocated channel: c4
channel c4: SID=22 device type=DISK


Starting backup at 26-JAN-2017 13:43:45
channel c1: starting compressed archived log backup set
channel c1: specifying archived log(s) in backup set
input archived log thread=1 sequence=4 RECID=3 STAMP=934074668
input archived log thread=1 sequence=5 RECID=4 STAMP=934154679
channel c1: starting piece 1 at 26-JAN-2017 13:43:46
channel c2: starting compressed archived log backup set
channel c2: specifying archived log(s) in backup set
input archived log thread=1 sequence=2 RECID=1 STAMP=934038010
input archived log thread=1 sequence=3 RECID=2 STAMP=934066843
channel c2: starting piece 1 at 26-JAN-2017 13:43:46
channel c3: starting compressed archived log backup set
channel c3: specifying archived log(s) in backup set
input archived log thread=1 sequence=6 RECID=5 STAMP=934203623
input archived log thread=1 sequence=7 RECID=6 STAMP=934275778
input archived log thread=1 sequence=8 RECID=7 STAMP=934284094
input archived log thread=1 sequence=9 RECID=8 STAMP=934284153
channel c3: starting piece 1 at 26-JAN-2017 13:43:46
channel c4: starting compressed archived log backup set
channel c4: specifying archived log(s) in backup set
input archived log thread=1 sequence=10 RECID=9 STAMP=934284199
input archived log thread=1 sequence=11 RECID=10 STAMP=934291133
input archived log thread=1 sequence=12 RECID=11 STAMP=934291192
input archived log thread=1 sequence=13 RECID=12 STAMP=934291966
channel c4: starting piece 1 at 26-JAN-2017 13:43:46
channel c4: finished piece 1 at 26-JAN-2017 13:43:47
piece handle=/u99/backup/CDB/database_3frr0b4i_1_1 tag=DBI_BACKUP comment=NONE
channel c4: backup set complete, elapsed time: 00:00:01
channel c4: starting compressed archived log backup set
channel c4: specifying archived log(s) in backup set
input archived log thread=1 sequence=14 RECID=13 STAMP=934292026
input archived log thread=1 sequence=15 RECID=14 STAMP=934292464
channel c4: starting piece 1 at 26-JAN-2017 13:43:47
channel c4: finished piece 1 at 26-JAN-2017 13:43:48
piece handle=/u99/backup/CDB/database_3grr0b4j_1_1 tag=DBI_BACKUP comment=NONE
channel c4: backup set complete, elapsed time: 00:00:01
channel c1: finished piece 1 at 26-JAN-2017 13:44:02
piece handle=/u99/backup/CDB/database_3crr0b4i_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:16
channel c2: finished piece 1 at 26-JAN-2017 13:44:02
piece handle=/u99/backup/CDB/database_3drr0b4i_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:16
channel c3: finished piece 1 at 26-JAN-2017 13:44:02
piece handle=/u99/backup/CDB/database_3err0b4i_1_1 tag=DBI_BACKUP comment=NONE
channel c3: backup set complete, elapsed time: 00:00:16
Finished backup at 26-JAN-2017 13:44:02

Starting backup at 26-JAN-2017 13:44:02
channel c1: starting compressed incremental level 0 datafile backup set
channel c1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c2wsf_.dbf
channel c1: starting piece 1 at 26-JAN-2017 13:44:02
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c49wd_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:44:02
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00010 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_sysaux_d81cgjc2_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:44:02
channel c4: starting compressed incremental level 0 datafile backup set
channel c4: specifying datafile(s) in backup set
input datafile file number=00009 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_system_d81cgjbv_.dbf
input datafile file number=00011 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_undotbs1_d81cgjc2_.dbf
channel c4: starting piece 1 at 26-JAN-2017 13:44:03
channel c4: finished piece 1 at 26-JAN-2017 13:44:18
piece handle=/u99/backup/CDB/database_3krr0b52_1_1 tag=DBI_BACKUP comment=NONE
channel c4: backup set complete, elapsed time: 00:00:15
channel c4: starting compressed incremental level 0 datafile backup set
channel c4: specifying datafile(s) in backup set
input datafile file number=00006 name=/u02/oradata/CDB/datafile/o1_mf_sysaux_d81c6fqn_.dbf
channel c4: starting piece 1 at 26-JAN-2017 13:44:18
channel c3: finished piece 1 at 26-JAN-2017 13:44:33
piece handle=/u99/backup/CDB/database_3jrr0b52_1_1 tag=DBI_BACKUP comment=NONE
channel c3: backup set complete, elapsed time: 00:00:31
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00013 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_rman_d8ccofgs_.dbf
input datafile file number=00012 name=/u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_users_d81cgq9f_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:44:33
channel c3: finished piece 1 at 26-JAN-2017 13:44:34
piece handle=/u99/backup/CDB/database_3mrr0b61_1_1 tag=DBI_BACKUP comment=NONE
channel c3: backup set complete, elapsed time: 00:00:01
channel c3: starting compressed incremental level 0 datafile backup set
channel c3: specifying datafile(s) in backup set
input datafile file number=00005 name=/u02/oradata/CDB/datafile/o1_mf_system_d81c6fqo_.dbf
channel c3: starting piece 1 at 26-JAN-2017 13:44:35
channel c2: finished piece 1 at 26-JAN-2017 13:44:38
piece handle=/u99/backup/CDB/database_3irr0b52_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:36
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00008 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c6fqp_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:44:38
channel c2: finished piece 1 at 26-JAN-2017 13:44:41
piece handle=/u99/backup/CDB/database_3orr0b66_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:03
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00004 name=/u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c530h_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:44:41
channel c2: finished piece 1 at 26-JAN-2017 13:44:42
piece handle=/u99/backup/CDB/database_3prr0b69_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:01
channel c2: starting compressed incremental level 0 datafile backup set
channel c2: specifying datafile(s) in backup set
input datafile file number=00007 name=/u02/oradata/CDB/datafile/o1_mf_users_d81c542r_.dbf
channel c2: starting piece 1 at 26-JAN-2017 13:44:43
channel c1: finished piece 1 at 26-JAN-2017 13:44:44
piece handle=/u99/backup/CDB/database_3hrr0b52_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:42
channel c2: finished piece 1 at 26-JAN-2017 13:44:44
piece handle=/u99/backup/CDB/database_3qrr0b6b_1_1 tag=DBI_BACKUP comment=NONE
channel c2: backup set complete, elapsed time: 00:00:01
channel c3: finished piece 1 at 26-JAN-2017 13:44:46
piece handle=/u99/backup/CDB/database_3nrr0b62_1_1 tag=DBI_BACKUP comment=NONE
channel c3: backup set complete, elapsed time: 00:00:11
channel c4: finished piece 1 at 26-JAN-2017 13:44:46
piece handle=/u99/backup/CDB/database_3lrr0b5i_1_1 tag=DBI_BACKUP comment=NONE
channel c4: backup set complete, elapsed time: 00:00:28
Finished backup at 26-JAN-2017 13:44:46

Starting backup at 26-JAN-2017 13:44:46
specification does not match any archived log in the repository
backup cancelled because there are no files to backup
Finished backup at 26-JAN-2017 13:44:46

Starting backup at 26-JAN-2017 13:44:46
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
including current control file in backup set
channel c1: starting piece 1 at 26-JAN-2017 13:44:47
channel c1: finished piece 1 at 26-JAN-2017 13:44:48
piece handle=/u99/backup/CDB/control_3rrr0b6e_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JAN-2017 13:44:48

Starting backup at 26-JAN-2017 13:44:48
channel c1: starting full datafile backup set
channel c1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel c1: starting piece 1 at 26-JAN-2017 13:44:48
channel c1: finished piece 1 at 26-JAN-2017 13:44:49
piece handle=/u99/backup/CDB/spfile_3srr0b6g_1_1 tag=DBI_BACKUP comment=NONE
channel c1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JAN-2017 13:44:49

Starting Control File and SPFILE Autobackup at 26-JAN-2017 13:44:49
piece handle=/u03/fast_recovery_area/CDB/autobackup/2017_01_26/o1_mf_s_934292553_d8mvt1l0_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-JAN-2017 13:44:50

released channel: c1

released channel: c2

released channel: c3

released channel: c4

RMAN>

As you can see in the log, all backup pieces have been created with tag=DBI_BACKUP. But let’s double-check it again.

RMAN> list backup summary tag 'DBI_BACKUP';

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
104     B  A  A DISK        26-JAN-2017 13:43:46 1       1       YES        DBI_BACKUP
105     B  A  A DISK        26-JAN-2017 13:43:47 1       1       YES        DBI_BACKUP
106     B  A  A DISK        26-JAN-2017 13:43:54 1       1       YES        DBI_BACKUP
107     B  A  A DISK        26-JAN-2017 13:43:59 1       1       YES        DBI_BACKUP
108     B  A  A DISK        26-JAN-2017 13:43:59 1       1       YES        DBI_BACKUP
109     B  0  A DISK        26-JAN-2017 13:44:14 1       1       YES        DBI_BACKUP
110     B  0  A DISK        26-JAN-2017 13:44:30 1       1       YES        DBI_BACKUP
111     B  0  A DISK        26-JAN-2017 13:44:34 1       1       YES        DBI_BACKUP
112     B  0  A DISK        26-JAN-2017 13:44:36 1       1       YES        DBI_BACKUP
113     B  0  A DISK        26-JAN-2017 13:44:39 1       1       YES        DBI_BACKUP
114     B  0  A DISK        26-JAN-2017 13:44:41 1       1       YES        DBI_BACKUP
115     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
116     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
117     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
118     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
119     B  F  A DISK        26-JAN-2017 13:44:47 1       1       NO         DBI_BACKUP
120     B  F  A DISK        26-JAN-2017 13:44:48 1       1       NO         DBI_BACKUP

RMAN> restore database preview summary from tag 'DBI_BACKUP';

Starting restore at 26-JAN-2017 13:45:26
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=237 device type=DISK

List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
116     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
112     B  0  A DISK        26-JAN-2017 13:44:36 1       1       YES        DBI_BACKUP
114     B  0  A DISK        26-JAN-2017 13:44:41 1       1       YES        DBI_BACKUP
117     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
118     B  0  A DISK        26-JAN-2017 13:44:44 1       1       YES        DBI_BACKUP
115     B  0  A DISK        26-JAN-2017 13:44:43 1       1       YES        DBI_BACKUP
113     B  0  A DISK        26-JAN-2017 13:44:39 1       1       YES        DBI_BACKUP
109     B  0  A DISK        26-JAN-2017 13:44:14 1       1       YES        DBI_BACKUP
110     B  0  A DISK        26-JAN-2017 13:44:30 1       1       YES        DBI_BACKUP
111     B  0  A DISK        26-JAN-2017 13:44:34 1       1       YES        DBI_BACKUP
using channel ORA_DISK_1

archived logs generated after SCN 1904449 not found in repository
recovery will be done up to SCN 1904449
Media recovery start SCN is 1904449
Recovery must be done beyond SCN 1904725 to clear datafile fuzziness
Finished restore at 26-JAN-2017 13:45:26

RMAN>

Ok. Very good. That looks promising now. :-) Let’s do the application changes now …

RMAN> alter database open;

Statement processed

-- Do some application changes ...

SQL> create table x ...
SQL> create table y ...
SQL> create table z ...

And the final test is, of course, to do the real restore/recovery to the point where the cold backup was done.

RMAN> shutdown abort

Oracle instance shut down

RMAN> startup nomount

connected to target database (not started)
Oracle instance started

Total System Global Area    1795162112 bytes

Fixed Size                     8793832 bytes
Variable Size                553648408 bytes
Database Buffers            1224736768 bytes
Redo Buffers                   7983104 bytes

RMAN> restore controlfile from '/u99/backup/CDB/control_3rrr0b6e_1_1';

Starting restore at 26-JAN-2017 13:48:50
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=237 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u02/oradata/CDB/controlfile/o1_mf_d81c6189_.ctl
output file name=/u03/fast_recovery_area/CDB/controlfile/o1_mf_d81c61b4_.ctl
Finished restore at 26-JAN-2017 13:48:51

RMAN> alter database mount;

Statement processed
released channel: ORA_DISK_1

	
RMAN> run
    {
         allocate channel c1 device type disk;
     allocate channel c2 device type disk;
         allocate channel c3 device type disk;
     allocate channel c4 device type disk;
     restore database from tag 'DBI_BACKUP';
         release channel c1;
         release channel c2;
         release channel c3;
         release channel c4;
    }2> 3> 4> 5> 6> 7> 8> 9> 10> 11> 12>

allocated channel: c1
channel c1: SID=256 device type=DISK

allocated channel: c2
channel c2: SID=24 device type=DISK

allocated channel: c3
channel c3: SID=257 device type=DISK

allocated channel: c4
channel c4: SID=25 device type=DISK

Starting restore at 26-JAN-2017 13:49:39
Starting implicit crosscheck backup at 26-JAN-2017 13:49:39
Crosschecked 15 objects
Finished implicit crosscheck backup at 26-JAN-2017 13:49:40

Starting implicit crosscheck copy at 26-JAN-2017 13:49:40
Finished implicit crosscheck copy at 26-JAN-2017 13:49:40

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u03/fast_recovery_area/CDB/autobackup/2017_01_26/o1_mf_s_934292553_d8mvt1l0_.bkp


skipping datafile 5; already restored to file /u02/oradata/CDB/datafile/o1_mf_system_d81c6fqo_.dbf
skipping datafile 6; already restored to file /u02/oradata/CDB/datafile/o1_mf_sysaux_d81c6fqn_.dbf
skipping datafile 8; already restored to file /u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c6fqp_.dbf
channel c1: starting datafile backup set restore
channel c1: specifying datafile(s) to restore from backup set
channel c1: restoring datafile 00009 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_system_d81cgjbv_.dbf
channel c1: restoring datafile 00011 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_undotbs1_d81cgjc2_.dbf
channel c1: reading from backup piece /u99/backup/CDB/database_3krr0b52_1_1
channel c2: starting datafile backup set restore
channel c2: specifying datafile(s) to restore from backup set
channel c2: restoring datafile 00010 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_sysaux_d81cgjc2_.dbf
channel c2: reading from backup piece /u99/backup/CDB/database_3jrr0b52_1_1
channel c3: starting datafile backup set restore
channel c3: specifying datafile(s) to restore from backup set
channel c3: restoring datafile 00012 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_users_d81cgq9f_.dbf
channel c3: restoring datafile 00013 to /u02/oradata/CDB/46727C2ED8612B70E053CB38A8C078C9/datafile/o1_mf_rman_d8ccofgs_.dbf
channel c3: reading from backup piece /u99/backup/CDB/database_3mrr0b61_1_1
channel c4: starting datafile backup set restore
channel c4: specifying datafile(s) to restore from backup set
channel c4: restoring datafile 00003 to /u02/oradata/CDB/datafile/o1_mf_sysaux_d81c49wd_.dbf
channel c4: reading from backup piece /u99/backup/CDB/database_3irr0b52_1_1
channel c3: piece handle=/u99/backup/CDB/database_3mrr0b61_1_1 tag=DBI_BACKUP
channel c3: restored backup piece 1
channel c3: restore complete, elapsed time: 00:00:03
channel c3: starting datafile backup set restore
channel c3: specifying datafile(s) to restore from backup set
channel c3: restoring datafile 00004 to /u02/oradata/CDB/datafile/o1_mf_undotbs1_d81c530h_.dbf
channel c3: reading from backup piece /u99/backup/CDB/database_3prr0b69_1_1
channel c3: piece handle=/u99/backup/CDB/database_3prr0b69_1_1 tag=DBI_BACKUP
channel c3: restored backup piece 1
channel c3: restore complete, elapsed time: 00:00:01
channel c3: starting datafile backup set restore
channel c3: specifying datafile(s) to restore from backup set
channel c3: restoring datafile 00001 to /u02/oradata/CDB/datafile/o1_mf_system_d81c2wsf_.dbf
channel c3: reading from backup piece /u99/backup/CDB/database_3hrr0b52_1_1
channel c1: piece handle=/u99/backup/CDB/database_3krr0b52_1_1 tag=DBI_BACKUP
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:00:20
channel c1: starting datafile backup set restore
channel c1: specifying datafile(s) to restore from backup set
channel c1: restoring datafile 00007 to /u02/oradata/CDB/datafile/o1_mf_users_d81c542r_.dbf
channel c1: reading from backup piece /u99/backup/CDB/database_3qrr0b6b_1_1
channel c1: piece handle=/u99/backup/CDB/database_3qrr0b6b_1_1 tag=DBI_BACKUP
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:00:01
channel c2: piece handle=/u99/backup/CDB/database_3jrr0b52_1_1 tag=DBI_BACKUP
channel c2: restored backup piece 1
channel c2: restore complete, elapsed time: 00:00:27
channel c4: piece handle=/u99/backup/CDB/database_3irr0b52_1_1 tag=DBI_BACKUP
channel c4: restored backup piece 1
channel c4: restore complete, elapsed time: 00:00:35
channel c3: piece handle=/u99/backup/CDB/database_3hrr0b52_1_1 tag=DBI_BACKUP
channel c3: restored backup piece 1
channel c3: restore complete, elapsed time: 00:00:40
Finished restore at 26-JAN-2017 13:50:25

released channel: c1

released channel: c2

released channel: c3

released channel: c4

RMAN>

No recovery is needed here, because it was a cold RMAN backup. You can just open the database with OPEN RESETLOGS.

RMAN> alter database open RESETLOGS;

Statement processed
Conclusion

Take care to put your RMAN tags at the correct location.
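
If you ever need to verify after the fact which tag ended up on which part of the backup, here is a quick sketch grouped by backup type (standard RMAN LIST syntax, nothing specific to this setup):

RMAN> list backup of database summary;
RMAN> list backup of archivelog all summary;
RMAN> list backup of controlfile summary;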

 

The article Oracle 12cR2 – RMAN cold backup with TAG’s appeared first on Blog dbi services.

Documentum D2 4.5 and IE compatibility and F5

Thu, 2017-01-26 02:26

We had a problem at a customer where D2 was not loading properly in IE when going through F5 (a load balancer). When trying to access D2 through the F5, let’s say https://d2prod/D2, only a few menus and some parts of the workspace were loading, and it ended up saying “Unexpected error occurred”.

Investigation

It would have been too easy if this error had appeared in the logs, but it didn’t. So that means it was not a D2 internal error, but maybe something in the interface or in the way it loads in IE. Because, fun fact, it was loading properly in Chrome. Additional fun fact: when using a superuser account it was also loading properly in IE!

As it was an interface error, I used the IE debugging tool (F12). At first I didn’t see the error in the console, but when digging a bit through all the verbose logs I found this:

SEVERE: An unexpected error occurred. Please refresh your browser
com.google.gwt.core.client.JavaScriptException: (TypeError) 
 description: Object doesn't support property or method 'querySelectorAll'
number: -2146827850: Object doesn't support property or method 'querySelectorAll'

After some research I figured out that others had issues with “querySelectorAll” and IE. In fact it depends on the version of IE used, because this function was not available prior to IE 9.

Hence I came to the idea that my IE was not in the right compatibility mode; since I had IE 11, it couldn’t be a version mismatch.

Fortunately thanks to the F12 console you can change the compatibility mode:

[Screenshot: F12 console showing the compatibility mode locked to 8]

As I thought, the compatibility mode was set (and blocked) to 8, which does not support “querySelectorAll”. But I couldn’t change it to a higher value. Then, I figured this out:

[Screenshot: F12 console showing Enterprise Mode enabled]

I was in Enterprise Mode. This mode forces the compatibility version and some other things. Fortunately you can disable it in the browser by going into the “Tools” menu of IE. Then, like magic, I was able to switch to compatibility version 10:

[Screenshot: F12 console showing the compatibility mode switched to 10]

And, miracle: the D2 interface reloaded properly, with all menus and workspaces. Remember it was working with superuser accounts? In fact, when using a superuser account the Enterprise Mode was not activated and the compatibility version was set to 10.

The question is, why was it forced to 8?

Solution

In fact, it was customer related. They had a policy rule applying to the old D2 (3.1) which needed the Enterprise Mode and the compatibility mode set to 8. So when using the old DNS link to point to the new D2, these modes were still applied.

So we asked to disable the Enterprise Mode, and the compatibility mode returned to 10 by default. So be careful with IE in your company ;)

 

The article Documentum D2 4.5 and IE compatibility and F5 appeared first on Blog dbi services.

Exadata Express Cloud Service: PDB_ADMIN privileges

Tue, 2017-01-24 15:39

I usually explain that the main point of the Multitenant Architecture is the separation of roles between a system DBA, who manages the container and is responsible for its availability, and an application DBA, who owns all the application objects and is responsible for data integrity and performance. The Exadata Express Cloud Service is the implementation of that: a CDB managed by Oracle and a PDB that you administrate. But the fact that the system is shared, over the internet, brings the necessity for new limitations. Let’s see what you can and cannot do as a PDB administrator.

When you create a pluggable database you need to provide a local user and its password. When you create an Exadata Express Cloud Service, the PDB_ADMIN user is created but the account is locked. You need to provide your password in order to unlock it. Then, this PDB_ADMIN is the only user for which you know the password.

But what does ‘PDB administrator’ really mean? Which privileges do you have?

System privileges

PDB_ADMIN looks like a DBA. He is not granted ‘DBA’, but ‘PDB_DBA’ and many other roles.
Let’s see which system privileges are not granted, either directly or through a path of roles:

with my_user_roles(grantee,granted_role) as (
  select grantee, granted_role from dba_role_privs where grantee='PDB_ADMIN'
  union all
  select r.grantee, r.granted_role from dba_role_privs r join my_user_roles u on r.grantee = u.granted_role
)
select listagg(privilege,',') within group (order by privilege) from (
  select distinct privilege from dba_sys_privs
  minus
  select distinct privilege from dba_sys_privs where grantee in (select granted_role from my_user_roles)
);
 
LISTAGG(PRIVILEGE,',')WITHINGROUP(ORDERBYPRIVILEGE)
---------------------------------------------------
ADMINISTER KEY MANAGEMENT,BACKUP ANY TABLE,BECOME USER,CREATE ANY CREDENTIAL,CREATE ANY DIRECTORY,CREATE ANY JOB,CREATE ANY LIBRARY,CREATE CREDENTIAL,CREATE DATABASE LINK,CREATE EXTERNAL JOB,CREATE LIBRARY,CREATE PUBLIC DATABASE LINK,DROP ANY DIRECTORY,DROP PUBLIC DATABASE LINK,EM EXPRESS CONNECT,EXECUTE ANY CLASS,EXECUTE ANY PROGRAM,EXEMPT ACCESS POLICY,EXEMPT DDL REDACTION POLICY,EXEMPT DML REDACTION POLICY,EXEMPT REDACTION POLICY,EXPORT FULL DATABASE,FLASHBACK ANY TABLE,FLASHBACK ARCHIVE ADMINISTER,FORCE TRANSACTION,GRANT ANY PRIVILEGE,GRANT ANY ROLE,IMPORT FULL DATABASE,INHERIT ANY PRIVILEGES,LOGMINING,MANAGE ANY FILE GROUP,MANAGE FILE GROUP,MANAGE SCHEDULER,SELECT ANY TRANSACTION,UNLIMITED TABLESPACE,USE ANY JOB RESOURCE,USE ANY SQL TRANSLATION PROFILE

So your PDB_ADMIN has some system privileges. For example you can gather system statistics:

SQL> exec dbms_stats.gather_system_stats;
PL/SQL procedure successfully completed.
 
SNAME PNAME PVAL1 PVAL2
----- ----- ----- -----
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_INFO DSTART 01-24-2017 19:48
SYSSTATS_INFO DSTOP 01-24-2017 19:48
SYSSTATS_INFO FLAGS 1
SYSSTATS_MAIN CPUSPEEDNW 3097
SYSSTATS_MAIN IOSEEKTIM 7
SYSSTATS_MAIN IOTFRSPEED 4096

but you cannot create database links or directories, which may be considered application objects:

create directory mydir as '/tmp'
ORA-01031: insufficient privileges
 
SQL> create database link mydblink connect to pdb_admin identified by "Ach1z0#d" using '//localhost';
ORA-01031: insufficient privileges

What you should understand is that you have a PDB in a shared CDB; it is yours and you can do whatever you want as long as it has no side effects on your neighbours. You will see that communication with the external world (network, filesystem, host server) is very limited.
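
For the opposite view, here is a minimal sketch, using the same recursive role walk as the query above, that lists what PDB_ADMIN is granted, directly or through roles:

with my_user_roles(grantee,granted_role) as (
  select grantee, granted_role from dba_role_privs where grantee='PDB_ADMIN'
  union all
  select r.grantee, r.granted_role from dba_role_privs r join my_user_roles u on r.grantee = u.granted_role
)
select distinct privilege
from dba_sys_privs
where grantee='PDB_ADMIN'
   or grantee in (select granted_role from my_user_roles)
order by privilege;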

Lockdown profiles

Now we will see that the restrictions go beyond the privileges that you can see in DBA_SYS_PRIVS and DBA_ROLE_PRIVS.

From my query above, I have the ALTER SYSTEM privilege, so I can change some parameters:

SQL> alter system set temp_undo_enabled=true;
System SET altered.
SQL> alter system set cursor_sharing=exact;
System SET altered.

However, some parameters cannot be set:

SQL> alter session set max_idle_time=60;
ORA-01031: insufficient privileges
SQL> alter system set sga_target=3G;
ORA-01031: insufficient privileges
SQL> alter system set sql_trace=true;
ORA-01031: insufficient privileges

and most of the other ALTER SYSTEM statements are forbidden:

SQL> alter system flush shared_pool;
ORA-01031: insufficient privileges

This is forbidden by a multitenant lockdown profile, which gives a finer grain than privileges: it disables some statements, or statement clauses or options. The bad thing about it is that you don’t know what you are allowed to do or not. It is always the same ‘insufficient privileges’ message, and the detail is stored only in CDB$ROOT. From the PDB:

SQL> select * from dba_lockdown_profiles;
no rows selected

the only thing that can be known from the PDB is the name of the lockdown profile:

SQL> show parameter lockdown
NAME TYPE VALUE
------------ ------ -----
pdb_lockdown string S20

That’s not a coincidence. I’m on a ‘S20′ service (30 GB storage, 3GB SGA) and the lockdown profile is associated with the service level.
Of course, you cannot change it:

SQL> alter system set pdb_lockdown='S50';
ORA-32017: failure in updating SPFILE
ORA-01031: insufficient privileges
 
SQL> alter system set pdb_lockdown='S50' scope=memory;
ORA-01031: insufficient privileges

Always the same message… Before 12cR2 you just queried the system privileges to understand why you got this message. Now, you need to ask your CDB administrator. Of course, for the Exadata Express Cloud Service, this is documented in the limitations.

Lockdown profiles are not only for ALTER SYSTEM.
There are also a few things that you cannot do with ALTER SESSION: setting SQL Trace, any event, and any underscore parameter are all forbidden. ALTER DATABASE and ALTER PLUGGABLE DATABASE allow only OPEN/CLOSE, or changing the default edition, the default tablespace and the temporary tablespace. On datafiles, you can only resize and set autoextend on/off. You can also set the time zone.
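
For example, a few statements that should remain possible according to this list (a quick sketch with example values, not taken from the service documentation; the tablespace name comes from the DBA_USERS output shown further down):

SQL> alter database default tablespace SYSEXT;
SQL> alter database datafile 7 resize 200M;
SQL> alter database datafile 7 autoextend on;
SQL> alter database set time_zone='Europe/Zurich';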

Well, I’m not completely sure about CLOSE:

SQL> alter pluggable database close immediate;
ORA-01031: insufficient privileges
alter pluggable database open
ORA-65019: pluggable database EPTDOJVM1KG already open

Lockdown profiles go beyond enabling or disabling statements. They can also disable a few features that you see mentioned in the ‘limitations’: you can’t create a manual AWR snapshot, can’t access OS files in any way (UTL_FILE is disabled), and can’t use any network protocol (UTL_TCP, UTL_MAIL, …).
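
To give an idea of what sits behind this, here is a minimal sketch of how a CDB administrator could define such a profile from CDB$ROOT. The profile name MY_S20 and the PDB name MYPDB are hypothetical, and the real S20 definition is not visible from the PDB:

-- run as a common user in CDB$ROOT (not possible from the PDB)
CREATE LOCKDOWN PROFILE MY_S20;
-- disable ALTER SYSTEM as a whole, then re-enable only a few SET options
ALTER LOCKDOWN PROFILE MY_S20 DISABLE STATEMENT=('ALTER SYSTEM');
ALTER LOCKDOWN PROFILE MY_S20 ENABLE STATEMENT=('ALTER SYSTEM')
  CLAUSE=('SET') OPTION=('CURSOR_SHARING','TEMP_UNDO_ENABLED');
-- disable the OS and network feature bundles (UTL_FILE, UTL_TCP, UTL_MAIL, ...)
ALTER LOCKDOWN PROFILE MY_S20 DISABLE FEATURE=('OS_ACCESS','NETWORK_ACCESS');
-- assign the profile to the PDB
ALTER SESSION SET CONTAINER=MYPDB;
ALTER SYSTEM SET PDB_LOCKDOWN='MY_S20';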

So what?

I’m a DBA and connecting to a database where I’m not SYSDBA is a bit frustrating. PDB_ADMIN is granted SYSOPER but is locked in the password file. I’ve unlocked PDB_ADMIN but it is still EXPIRED & LOCKED in password file;

SQL> select * from dba_users where username='PDB_ADMIN';
 
USERNAME USER_ID PASSWORD ACCOUNT_STATUS LOCK_DATE EXPIRY_DATE DEFAULT_TABLESPACE TEMPORARY_TABLESPACE LOCAL_TEMP_TABLESPACE CREATED PROFILE INITIAL_RSRC_CONSUMER_GROUP EXTERNAL_NAME PASSWORD_VERSIONS EDITIONS_ENABLED AUTHENTICATION_TYPE PROXY_ONLY_CONNECT COMMON LAST_LOGIN ORACLE_MAINTAINED INHERITED DEFAULT_COLLATION IMPLICIT
-------- ------- -------- -------------- --------- ----------- ------------------ -------------------- --------------------- ------- ------- --------------------------- ------------- ----------------- ---------------- ------------------- ------------------ ------ ---------- ----------------- --------- ----------------- --------
PDB_ADMIN 94 OPEN 22-JUL-2017 19:17:41 SYSEXT TEMP TEMP 27-JUL-2016 17:05:02 DEFAULT DEFAULT_CONSUMER_GROUP 11G 12C N PASSWORD N NO 24-JAN-17 08.46.02.000000000 PM EUROPE/BERLIN N NO USING_NLS_COMP NO
 
SQL> select * from v$pwfile_users where username='PDB_ADMIN';
 
USERNAME SYSDBA SYSOPER SYSASM SYSBACKUP SYSDG SYSKM ACCOUNT_STATUS PASSWORD_PROFILE LAST_LOGIN LOCK_DATE EXPIRY_DATE EXTERNAL_NAME AUTHENTICATION_TYPE COMMON CON_ID
-------- ------ ------- ------ --------- ----- ----- -------------- ---------------- ---------- --------- ----------- ------------- ------------------- ------ ------
PDB_ADMIN FALSE TRUE FALSE FALSE FALSE FALSE EXPIRED & LOCKED DEFAULT 07-JAN-2017 04:47:03 07-JAN-2017 04:47:03 PASSWORD NO 47

Keep in mind that this service is not for system DBAs. You have the DBaaS for that. This is an Oracle-managed PDBaaS for developers, and developers get a lot of freedom here: the ability to test any feature, any optimizer setting, etc. This is a lot more than what Amazon RDS allows. You don’t have file or network access, but this is for small databases and it is totally integrated with SQL Developer to move data. No SQL Trace, no tkprof, but you have SQL Monitor. This is a very interesting platform for small development projects, 12cR2 feature testing or prototypes. It is very easy and fast to start. You can connect and code with APEX, .Net, Java, OCI…

 

The article Exadata Express Cloud Service: PDB_ADMIN privileges appeared first on Blog dbi services.

Oracle 12cR2 – DataGuard and Backups to NFS

Tue, 2017-01-24 02:11

From the RMAN point of view, you generally have two possibilities to store your backups: on disk or on tape. In the case of a single instance, it doesn’t really matter from the accessible or inaccessible point of view. However, in a DataGuard environment it makes a huge difference.

By default, backups to tape are always accessible and backups to disk are always inaccessible. But what do you do in case you back up to NFS, and you want the backups to be usable on both the Primary and the Standby?

Ok. Let’s to a quick demo.

Before we start, make sure that the oracle Unix user has the same user id and group id on both hosts. If not, you end up with a big mess.

oracle@dbidg01:/home/oracle/ [DBIT122] id -a
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(sysbkp),54324(sysdg),54325(syskm),54326(oper)

The next important thing with NFS is the mount options. Oracle has documented quite well which mount options are needed if you put binaries, datafiles or CRS voting disks and OCR on it.

Mount Options for Oracle files for RAC databases and Clusterware when used with NFS on NAS devices (Doc ID 359515.1)

For Linux x86-64bit, these would be the options for datafiles:

rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

However, if your NFS mount is needed only for RMAN backup pieces, actimeo=0 is not recommended. actimeo=0 disables all NFS attribute caching (acregmin, acregmax, acdirmin and acdirmax), and disabling attribute caching is suboptimal for RMAN backups. See also the following MOS note:

NFS options for 11.2.0.2 RMAN disk backups on Linux 64-bit (Doc ID 1117597.1)

I am using NFS 4.1 for this example, and these are the mount options I use:

[root@dbidg01 etc]# mount | grep nfs4
dbidg03:/u99 on /u99 type nfs4 (rw,relatime,vers=4.1,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.56.201,local_lock=none,addr=192.168.56.203)
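
If you want this mount to survive a reboot, a hypothetical /etc/fstab entry with the same export, mount point and options could look like this (hostname and paths are the ones from this demo):

# /etc/fstab (sketch) - NFS mount used only for RMAN backup pieces, no actimeo=0
dbidg03:/u99  /u99  nfs  rw,bg,hard,vers=4.1,proto=tcp,rsize=32768,wsize=32768,timeo=600  0 0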

Last but not least, the Primary database has to be registered with the RMAN catalog. This is a very important point when working with RMAN in a DataGuard environment.
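
If the registration has not been done yet, a minimal sketch would be the following (the DBIT122_SITE1 TNS alias and the sys password are assumptions based on the connections used later in this post):

oracle@dbidg01:/home/oracle/ [DBIT122] rman target sys/Manager1@DBIT122_SITE1 catalog rman/rman@PDB1
RMAN> register database;
RMAN> report schema;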

Ok. Let’s do the first backup on the Primary.

-- Primary

oracle@dbidg01:/home/oracle/ [DBIT122] rman target / catalog rman/rman@pdb1

Recovery Manager: Release 12.2.0.1.0 - Production on Tue Jan 24 08:09:16 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT122 (DBID=653953745)
connected to recovery catalog database


RMAN> list backup summary;

specification does not match any backup in the repository

RMAN> backup database plus archivelog tag 'DBI_BACKUP_ON_PRIMARY' format '/u99/backup/DBIT122/%U';


Starting backup at 24-JAN-2017 08:09:45
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=31 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=36 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=68 RECID=386 STAMP=934081238
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:09:46
channel ORA_DISK_2: starting archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=69 RECID=388 STAMP=934099691
input archived log thread=1 sequence=70 RECID=390 STAMP=934099729
input archived log thread=1 sequence=71 RECID=392 STAMP=934099785
channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:09:46
channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:09:47
piece handle=/u99/backup/DBIT122/2vrqqeqa_1_1 tag=DBI_BACKUP_ON_PRIMARY comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_2: starting archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=64 RECID=378 STAMP=934044533
input archived log thread=1 sequence=65 RECID=380 STAMP=934044555
input archived log thread=1 sequence=66 RECID=382 STAMP=934045153
input archived log thread=1 sequence=67 RECID=384 STAMP=934045183
channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:09:47
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:09:48
piece handle=/u99/backup/DBIT122/2urqqeqa_1_1 tag=DBI_BACKUP_ON_PRIMARY comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:09:48
piece handle=/u99/backup/DBIT122/30rqqeqb_1_1 tag=DBI_BACKUP_ON_PRIMARY comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
Finished backup at 24-JAN-2017 08:09:48

Starting backup at 24-JAN-2017 08:09:48
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_system_d4fnthn8_.dbf
input datafile file number=00007 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_users_d4fnx3bd_.dbf
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:09:49
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_sysaux_d4fnvx0w_.dbf
input datafile file number=00004 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_undotbs1_d4fnx03r_.dbf
input datafile file number=00002 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbv_d7klp09z_.dbf
channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:09:49
channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:09:52
piece handle=/u03/fast_recovery_area/DBIT122_SITE1/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T080948_d8fzfxqx_.bkp tag=TAG20170124T080948 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
including current control file in backup set
channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:09:53
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:09:54
piece handle=/u03/fast_recovery_area/DBIT122_SITE1/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T080948_d8fzfxps_.bkp tag=TAG20170124T080948 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:05
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:09:54
channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:09:54
piece handle=/u03/fast_recovery_area/DBIT122_SITE1/backupset/2017_01_24/o1_mf_ncnnf_TAG20170124T080948_d8fzg1yp_.bkp tag=TAG20170124T080948 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:09:55
piece handle=/u03/fast_recovery_area/DBIT122_SITE1/backupset/2017_01_24/o1_mf_nnsnf_TAG20170124T080948_d8fzg2yn_.bkp tag=TAG20170124T080948 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-JAN-2017 08:09:55

Starting backup at 24-JAN-2017 08:09:56
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=72 RECID=394 STAMP=934099796
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:09:56
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:09:57
piece handle=/u99/backup/DBIT122/35rqqeqk_1_1 tag=DBI_BACKUP_ON_PRIMARY comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-JAN-2017 08:09:57

RMAN>

RMAN> list backup summary completed after 'sysdate -1';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
1981    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
1982    B  A  A DISK        24-JAN-2017 08:09:48 1       1       NO         DBI_BACKUP_ON_PRIMARY
1983    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
2003    B  F  A DISK        24-JAN-2017 08:09:51 1       1       NO         TAG20170124T080948
2004    B  F  A DISK        24-JAN-2017 08:09:53 1       1       NO         TAG20170124T080948
2005    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
2006    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
2025    B  A  A DISK        24-JAN-2017 08:09:56 1       1       NO         DBI_BACKUP_ON_PRIMARY

 

OK. So what does my Standby know about this backup which I have done on the Primary?

oracle@dbidg02:/u99/backup/DBIT122/ [DBIT122] rman target / catalog rman/rman@pdb1

Recovery Manager: Release 12.2.0.1.0 - Production on Tue Jan 24 08:11:12 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT122 (DBID=653953745)
connected to recovery catalog database

RMAN> list backup summary completed after 'sysdate -1';

specification does not match any backup in the repository

RMAN>

 

Nothing …

Ok. Let’s do now a backup on the Standby to NFS and check if it is visible on the Primary.

RMAN> backup database plus archivelog tag 'DBI_BACKUP_ON_STANDBY' format '/u99/backup/DBIT122/%U';


Starting backup at 24-JAN-2017 08:11:49
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=271 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=41 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=64 RECID=291 STAMP=934044533
input archived log thread=1 sequence=65 RECID=292 STAMP=934044555
input archived log thread=1 sequence=66 RECID=293 STAMP=934045153
input archived log thread=1 sequence=67 RECID=294 STAMP=934045183
input archived log thread=1 sequence=68 RECID=295 STAMP=934081238
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:11:50
channel ORA_DISK_2: starting archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=69 RECID=296 STAMP=934099691
input archived log thread=1 sequence=70 RECID=297 STAMP=934099729
input archived log thread=1 sequence=71 RECID=298 STAMP=934099785
input archived log thread=1 sequence=72 RECID=299 STAMP=934099796
channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:11:50
channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:11:51
piece handle=/u99/backup/DBIT122/49rqqeu6_1_1 tag=DBI_BACKUP_ON_STANDBY comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:11:53
piece handle=/u99/backup/DBIT122/48rqqeu6_1_1 tag=DBI_BACKUP_ON_STANDBY comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 24-JAN-2017 08:11:53

Starting backup at 24-JAN-2017 08:11:53
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_system_d4zc9yd3_.dbf
input datafile file number=00007 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_users_d4zccmnt_.dbf
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:11:54
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_sysaux_d4zcbr8r_.dbf
input datafile file number=00004 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_undotbs1_d4zccjnc_.dbf
input datafile file number=00002 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbv_d7klp0gg_.dbf
channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:11:54
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:12:09
piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T081153_d8fzktfn_.bkp tag=TAG20170124T081153 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:12:09
piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T081153_d8fzktg7_.bkp tag=TAG20170124T081153 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:12:09
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:12:10
channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:12:10
piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnsnf_TAG20170124T081153_d8fzl9p7_.bkp tag=TAG20170124T081153 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:12:11
piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_ncnnf_TAG20170124T081153_d8fzlbqg_.bkp tag=TAG20170124T081153 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-JAN-2017 08:12:11

Starting backup at 24-JAN-2017 08:12:11
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
using channel ORA_DISK_1
using channel ORA_DISK_2
specification does not match any archived log in the repository
backup cancelled because there are no files to backup
Finished backup at 24-JAN-2017 08:12:12

RMAN>

RMAN> list backup summary completed after 'sysdate -1';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
2065    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
2066    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
2084    B  F  A DISK        24-JAN-2017 08:12:01 1       1       NO         TAG20170124T081153
2085    B  F  A DISK        24-JAN-2017 08:12:02 1       1       NO         TAG20170124T081153
2086    B  F  A DISK        24-JAN-2017 08:12:09 1       1       NO         TAG20170124T081153
2087    B  F  A DISK        24-JAN-2017 08:12:10 1       1       NO         TAG20170124T081153

 

Now let’s check the Primary.

-- Primary

RMAN> list backup summary completed after 'sysdate -1';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
1981    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
1982    B  A  A DISK        24-JAN-2017 08:09:48 1       1       NO         DBI_BACKUP_ON_PRIMARY
1983    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
2003    B  F  A DISK        24-JAN-2017 08:09:51 1       1       NO         TAG20170124T080948
2004    B  F  A DISK        24-JAN-2017 08:09:53 1       1       NO         TAG20170124T080948
2005    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
2006    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
2025    B  A  A DISK        24-JAN-2017 08:09:56 1       1       NO         DBI_BACKUP_ON_PRIMARY

 

No … nothing there from the Standby. Only the backups done on the Primary are visible.

Before we continue, let’s take a look at the following warning message that appeared on the standby database:

RMAN-06820: warning: failed to archive current log at primary database

In a DataGuard environment, it is not good practice to log on to the standby with “rman target /”. You should use username/password@connectstring instead, or Wallets, if configured. To get a good backup on the Standby, Oracle first attempts to archive the current online redo log, and before the backup ends, it does it again. That’s why you see the RMAN-06820 twice. In case you are doing backups on the standby, always use “username/password@connectstring” or Wallets.

oracle@dbidg02:/u99/backup/DBIT122/ [DBIT122] rman target sys/Manager1@DBIT122_SITE2 catalog rman/rman@PDB1

Recovery Manager: Release 12.2.0.1.0 - Production on Tue Jan 24 08:15:47 2017

Copyright (c) 1982, 2016, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT122 (DBID=653953745)
connected to recovery catalog database

RMAN> backup database plus archivelog tag 'DBI_BACKUP_ON_STANDBY' format '/u99/backup/DBIT122/%U';


Starting backup at 24-JAN-2017 08:16:09
current log archived at primary database
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=271 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=16 device type=DISK
skipping archived logs of thread 1 from sequence 64 to 72; already backed up
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=73 RECID=300 STAMP=934100169
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:16:11
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:16:12
piece handle=/u99/backup/DBIT122/4erqqf6b_1_1 tag=DBI_BACKUP_ON_STANDBY comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-JAN-2017 08:16:12

Starting backup at 24-JAN-2017 08:16:12
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_system_d4zc9yd3_.dbf
input datafile file number=00007 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_users_d4zccmnt_.dbf
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:16:13
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_sysaux_d4zcbr8r_.dbf
input datafile file number=00004 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_undotbs1_d4zccjnc_.dbf
input datafile file number=00002 name=/u02/oradata/DBIT122_SITE2/datafile/o1_mf_dbv_d7klp0gg_.dbf
channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:16:13
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:16:28
piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T081612_d8fzsx8o_.bkp tag=TAG20170124T081612 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:16:28
piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnndf_TAG20170124T081612_d8fzsx94_.bkp tag=TAG20170124T081612 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_2: starting full datafile backup set
channel ORA_DISK_2: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_2: starting piece 1 at 24-JAN-2017 08:16:28
including current control file in backup set
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:16:28
channel ORA_DISK_2: finished piece 1 at 24-JAN-2017 08:16:28
piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_nnsnf_TAG20170124T081612_d8fztdkp_.bkp tag=TAG20170124T081612 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:00
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:16:29
piece handle=/u03/fast_recovery_area/DBIT122_SITE2/backupset/2017_01_24/o1_mf_ncnnf_TAG20170124T081612_d8fztdl7_.bkp tag=TAG20170124T081612 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-JAN-2017 08:16:29

Starting backup at 24-JAN-2017 08:16:29
current log archived at primary database
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=74 RECID=301 STAMP=934100189
channel ORA_DISK_1: starting piece 1 at 24-JAN-2017 08:16:31
channel ORA_DISK_1: finished piece 1 at 24-JAN-2017 08:16:32
piece handle=/u99/backup/DBIT122/4jrqqf6v_1_1 tag=DBI_BACKUP_ON_STANDBY comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-JAN-2017 08:16:32

RMAN>


-- A message like the following pops up in the alert.log of the Primary when the backup was done correctly.

ALTER SYSTEM ARCHIVE LOG
2017-01-24T08:16:09.244238+01:00
LGWR: Standby redo logfile selected to archive thread 1 sequence 74
LGWR: Standby redo logfile selected for thread 1 sequence 74 for destination LOG_ARCHIVE_DEST_2
2017-01-24T08:16:09.264207+01:00
Thread 1 advanced to log sequence 74 (LGWR switch)
  Current log# 2 seq# 74 mem# 0: /u02/oradata/DBIT122_SITE1/onlinelog/o1_mf_2_d4fotcnx_.log
  Current log# 2 seq# 74 mem# 1: /u03/fast_recovery_area/DBIT122_SITE1/onlinelog/o1_mf_2_d4fotkvy_.log
2017-01-24T08:16:09.268607+01:00
Archived Log entry 396 added for T-1.S-73 ID 0x27387e4f LAD:1
2017-01-24T08:16:29.636421+01:00
ALTER SYSTEM ARCHIVE LOG
2017-01-24T08:16:29.656594+01:00
LGWR: Standby redo logfile selected to archive thread 1 sequence 75
LGWR: Standby redo logfile selected for thread 1 sequence 75 for destination LOG_ARCHIVE_DEST_2
2017-01-24T08:16:29.665882+01:00
Thread 1 advanced to log sequence 75 (LGWR switch)

 

Nevertheless, we still can’t see on the Primary the backups which were done on the Standby. Now the accessible flag comes into play. If executed on the Primary, you can now see the backups which were done on the Standby.

-- Primary

RMAN> set backup files for device type disk to accessible;

executing command: SET backup

RMAN> list backup summary completed after 'sysdate -1';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
1981    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
1982    B  A  A DISK        24-JAN-2017 08:09:48 1       1       NO         DBI_BACKUP_ON_PRIMARY
1983    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
2003    B  F  A DISK        24-JAN-2017 08:09:51 1       1       NO         TAG20170124T080948
2004    B  F  A DISK        24-JAN-2017 08:09:53 1       1       NO         TAG20170124T080948
2005    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
2006    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
2025    B  A  A DISK        24-JAN-2017 08:09:56 1       1       NO         DBI_BACKUP_ON_PRIMARY
2065    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
2066    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
2084    B  F  A DISK        24-JAN-2017 08:12:01 1       1       NO         TAG20170124T081153
2085    B  F  A DISK        24-JAN-2017 08:12:02 1       1       NO         TAG20170124T081153
2086    B  F  A DISK        24-JAN-2017 08:12:09 1       1       NO         TAG20170124T081153
2087    B  F  A DISK        24-JAN-2017 08:12:10 1       1       NO         TAG20170124T081153
2137    B  A  A DISK        24-JAN-2017 08:16:11 1       1       NO         DBI_BACKUP_ON_STANDBY
2148    B  F  A DISK        24-JAN-2017 08:16:17 1       1       NO         TAG20170124T081612
2149    B  F  A DISK        24-JAN-2017 08:16:21 1       1       NO         TAG20170124T081612
2150    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
2151    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
2169    B  A  A DISK        24-JAN-2017 08:16:31 1       1       NO         DBI_BACKUP_ON_STANDBY

RMAN>

 

If executed on the Standby, you can now see the backups which were done on the Primary.

 

-- Standby

RMAN> set backup files for device type disk to accessible;

executing command: SET backup

RMAN> list backup summary completed after 'sysdate -1';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
1981    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
1982    B  A  A DISK        24-JAN-2017 08:09:48 1       1       NO         DBI_BACKUP_ON_PRIMARY
1983    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
2003    B  F  A DISK        24-JAN-2017 08:09:51 1       1       NO         TAG20170124T080948
2004    B  F  A DISK        24-JAN-2017 08:09:53 1       1       NO         TAG20170124T080948
2005    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
2006    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
2025    B  A  A DISK        24-JAN-2017 08:09:56 1       1       NO         DBI_BACKUP_ON_PRIMARY
2065    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
2066    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
2084    B  F  A DISK        24-JAN-2017 08:12:01 1       1       NO         TAG20170124T081153
2085    B  F  A DISK        24-JAN-2017 08:12:02 1       1       NO         TAG20170124T081153
2086    B  F  A DISK        24-JAN-2017 08:12:09 1       1       NO         TAG20170124T081153
2087    B  F  A DISK        24-JAN-2017 08:12:10 1       1       NO         TAG20170124T081153
2137    B  A  A DISK        24-JAN-2017 08:16:11 1       1       NO         DBI_BACKUP_ON_STANDBY
2148    B  F  A DISK        24-JAN-2017 08:16:17 1       1       NO         TAG20170124T081612
2149    B  F  A DISK        24-JAN-2017 08:16:21 1       1       NO         TAG20170124T081612
2150    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
2151    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
2169    B  A  A DISK        24-JAN-2017 08:16:31 1       1       NO         DBI_BACKUP_ON_STANDBY

RMAN>

If you don’t want to use the accessible flag, but still want to see the backups on NFS from both sites, you have another option: you can use the Oracle DISKSBT library and point it to the NFS directory.

RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT122)';

new RMAN configuration parameters:
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS  'SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT122)';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;

old RMAN configuration parameters:
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
new RMAN configuration parameters:
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

RMAN> backup database plus archivelog tag 'DBI_SBT_TAPE_BACKUP_ON_PRIMARY';


Starting backup at 24-JAN-2017 08:22:16
current log archived
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: SID=31 device type=SBT_TAPE
channel ORA_SBT_TAPE_1: WARNING: Oracle Test Disk API
allocated channel: ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_2: SID=36 device type=SBT_TAPE
channel ORA_SBT_TAPE_2: WARNING: Oracle Test Disk API
channel ORA_SBT_TAPE_1: starting compressed archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=64 RECID=378 STAMP=934044533
input archived log thread=1 sequence=65 RECID=380 STAMP=934044555
input archived log thread=1 sequence=66 RECID=382 STAMP=934045153
input archived log thread=1 sequence=67 RECID=384 STAMP=934045183
input archived log thread=1 sequence=68 RECID=386 STAMP=934081238
channel ORA_SBT_TAPE_1: starting piece 1 at 24-JAN-2017 08:22:16
channel ORA_SBT_TAPE_2: starting compressed archived log backup set
channel ORA_SBT_TAPE_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=69 RECID=388 STAMP=934099691
input archived log thread=1 sequence=70 RECID=390 STAMP=934099729
input archived log thread=1 sequence=71 RECID=392 STAMP=934099785
input archived log thread=1 sequence=72 RECID=394 STAMP=934099796
input archived log thread=1 sequence=73 RECID=396 STAMP=934100169
input archived log thread=1 sequence=74 RECID=398 STAMP=934100189
channel ORA_SBT_TAPE_2: starting piece 1 at 24-JAN-2017 08:22:16
channel ORA_SBT_TAPE_2: finished piece 1 at 24-JAN-2017 08:22:17
piece handle=37rqqfho_1_1 tag=DBI_SBT_TAPE_BACKUP_ON_PRIMARY comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:01
channel ORA_SBT_TAPE_2: starting compressed archived log backup set
channel ORA_SBT_TAPE_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=75 RECID=400 STAMP=934100536
channel ORA_SBT_TAPE_2: starting piece 1 at 24-JAN-2017 08:22:18
channel ORA_SBT_TAPE_2: finished piece 1 at 24-JAN-2017 08:22:19
piece handle=38rqqfhq_1_1 tag=DBI_SBT_TAPE_BACKUP_ON_PRIMARY comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:01
channel ORA_SBT_TAPE_1: finished piece 1 at 24-JAN-2017 08:22:25
piece handle=36rqqfho_1_1 tag=DBI_SBT_TAPE_BACKUP_ON_PRIMARY comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:09
Finished backup at 24-JAN-2017 08:22:25

Starting backup at 24-JAN-2017 08:22:25
using channel ORA_SBT_TAPE_1
using channel ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_1: starting compressed full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_system_d4fnthn8_.dbf
input datafile file number=00007 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_users_d4fnx3bd_.dbf
channel ORA_SBT_TAPE_1: starting piece 1 at 24-JAN-2017 08:22:25
channel ORA_SBT_TAPE_2: starting compressed full datafile backup set
channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
input datafile file number=00003 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_sysaux_d4fnvx0w_.dbf
input datafile file number=00004 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_undotbs1_d4fnx03r_.dbf
input datafile file number=00002 name=/u02/oradata/DBIT122_SITE1/datafile/o1_mf_dbv_d7klp09z_.dbf
channel ORA_SBT_TAPE_2: starting piece 1 at 24-JAN-2017 08:22:25
channel ORA_SBT_TAPE_1: finished piece 1 at 24-JAN-2017 08:22:51
piece handle=39rqqfi1_1_1 tag=TAG20170124T082225 comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:26
channel ORA_SBT_TAPE_1: starting compressed full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
channel ORA_SBT_TAPE_2: finished piece 1 at 24-JAN-2017 08:22:51
piece handle=3arqqfi1_1_1 tag=TAG20170124T082225 comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:26
channel ORA_SBT_TAPE_2: starting compressed full datafile backup set
channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_SBT_TAPE_2: starting piece 1 at 24-JAN-2017 08:22:51
including current control file in backup set
channel ORA_SBT_TAPE_1: starting piece 1 at 24-JAN-2017 08:22:52
channel ORA_SBT_TAPE_2: finished piece 1 at 24-JAN-2017 08:22:52
piece handle=3crqqfir_1_1 tag=TAG20170124T082225 comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:01
channel ORA_SBT_TAPE_1: finished piece 1 at 24-JAN-2017 08:22:53
piece handle=3brqqfir_1_1 tag=TAG20170124T082225 comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-JAN-2017 08:22:53

Starting backup at 24-JAN-2017 08:22:53
current log archived
using channel ORA_SBT_TAPE_1
using channel ORA_SBT_TAPE_2
channel ORA_SBT_TAPE_1: starting compressed archived log backup set
channel ORA_SBT_TAPE_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=76 RECID=402 STAMP=934100573
channel ORA_SBT_TAPE_1: starting piece 1 at 24-JAN-2017 08:22:54
channel ORA_SBT_TAPE_1: finished piece 1 at 24-JAN-2017 08:22:55
piece handle=3drqqfiu_1_1 tag=DBI_SBT_TAPE_BACKUP_ON_PRIMARY comment=API Version 2.0,MMS Version 8.1.3.0
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-JAN-2017 08:22:55

Now the backups on NFS are visible without the accessible flag. It is not needed for backups to SBT_TAPE; that is the default behaviour. :-)

RMAN> list backup summary completed after 'sysdate -1';


List of Backups
===============
Key     TY LV S Device Type Completion Time      #Pieces #Copies Compressed Tag
------- -- -- - ----------- -------------------- ------- ------- ---------- ---
1981    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
1982    B  A  A DISK        24-JAN-2017 08:09:48 1       1       NO         DBI_BACKUP_ON_PRIMARY
1983    B  A  A DISK        24-JAN-2017 08:09:46 1       1       NO         DBI_BACKUP_ON_PRIMARY
2003    B  F  A DISK        24-JAN-2017 08:09:51 1       1       NO         TAG20170124T080948
2004    B  F  A DISK        24-JAN-2017 08:09:53 1       1       NO         TAG20170124T080948
2005    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
2006    B  F  A DISK        24-JAN-2017 08:09:54 1       1       NO         TAG20170124T080948
2025    B  A  A DISK        24-JAN-2017 08:09:56 1       1       NO         DBI_BACKUP_ON_PRIMARY
2065    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
2066    B  A  A DISK        24-JAN-2017 08:11:50 1       1       NO         DBI_BACKUP_ON_STANDBY
2084    B  F  A DISK        24-JAN-2017 08:12:01 1       1       NO         TAG20170124T081153
2085    B  F  A DISK        24-JAN-2017 08:12:02 1       1       NO         TAG20170124T081153
2086    B  F  A DISK        24-JAN-2017 08:12:09 1       1       NO         TAG20170124T081153
2087    B  F  A DISK        24-JAN-2017 08:12:10 1       1       NO         TAG20170124T081153
2137    B  A  A DISK        24-JAN-2017 08:16:11 1       1       NO         DBI_BACKUP_ON_STANDBY
2148    B  F  A DISK        24-JAN-2017 08:16:17 1       1       NO         TAG20170124T081612
2149    B  F  A DISK        24-JAN-2017 08:16:21 1       1       NO         TAG20170124T081612
2150    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
2151    B  F  A DISK        24-JAN-2017 08:16:28 1       1       NO         TAG20170124T081612
2169    B  A  A DISK        24-JAN-2017 08:16:31 1       1       NO         DBI_BACKUP_ON_STANDBY
2312    B  A  A SBT_TAPE    24-JAN-2017 08:22:17 1       1       YES        DBI_SBT_TAPE_BACKUP_ON_PRIMARY
2313    B  A  A SBT_TAPE    24-JAN-2017 08:22:18 1       1       YES        DBI_SBT_TAPE_BACKUP_ON_PRIMARY
2314    B  A  A SBT_TAPE    24-JAN-2017 08:22:20 1       1       YES        DBI_SBT_TAPE_BACKUP_ON_PRIMARY
2345    B  F  A SBT_TAPE    24-JAN-2017 08:22:44 1       1       YES        TAG20170124T082225
2346    B  F  A SBT_TAPE    24-JAN-2017 08:22:46 1       1       YES        TAG20170124T082225
2347    B  F  A SBT_TAPE    24-JAN-2017 08:22:51 1       1       YES        TAG20170124T082225
2348    B  F  A SBT_TAPE    24-JAN-2017 08:22:52 1       1       YES        TAG20170124T082225
2377    B  A  A SBT_TAPE    24-JAN-2017 08:22:54 1       1       YES        DBI_SBT_TAPE_BACKUP_ON_PRIMARY

RMAN>
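
If the SBT_TAPE configuration was only meant for this test (the channel clearly reports the Oracle Test Disk API), you will probably want to switch back to DISK afterwards. A minimal sketch, assuming the configuration shown above:

-- Revert to DISK as the default device and clear the test SBT_TAPE channel

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;

RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' CLEAR;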

 

Conclusion

Backups to NFS are perfectly possible. In a DataGuard environment, however, it is very useful to see the backups from both the primary and the standby: you never know which host might fail, so it is good to have all options.
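
As a quick recap, here is a minimal sketch of the two options discussed in this post, with the NFS backup location /u99/backup/DBIT122 taken from the examples above:

-- Option 1: make disk backups taken on the other site visible in LIST BACKUP

RMAN> set backup files for device type disk to accessible;

-- Option 2: back up through the Oracle disksbt library pointing to the NFS directory

RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT122)';

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;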

 

The article Oracle 12cR2 – DataGuard and Backups to NFS appeared first on Blog dbi services.
