Feed aggregator

18c PDB switchover

Yann Neuhaus - Sun, 2018-03-11 08:58

In multitenant, recovery and availability are managed at the CDB level. But customers asked for a switchover at PDB level, so Oracle has introduced one in 18c, based on refreshable PDBs.

For this test I have two multitenant databases on an Oracle Cloud service in 18c: CDB1 and CDB2. The only special thing I did was disable the mandatory TDE encryption, because I was not able to get the switchover working with it. With TDE encryption, I got “ORA-46697: Keystore password required”, but there is no ‘keystore identified by’ option in the ‘alter pluggable database’ statement. So if you came upon this post from a search on this error, I have no solution yet (SR 3-17001228251 opened on the Oracle Cloud Support).

Creating the source PDB

In CDB1 I create CDB1DEMO pluggable database:

21:06:06 //localhost/CDB1 SQL>create pluggable database CDB1DEMO admin user admin identified by oracle file_name_convert=('/pdbseed/','/CDB1DEMO/');
Pluggable database CDB1DEMO created.

I could use ‘show pdbs’ but I use SQLcl and the current version (17.4) does not recognize Oracle 18c as a container database, so I’ve created my own alias to run a query on V$PDBS and DBA_PDBS:

21:06:12 //localhost/CDB1 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right outer join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id;
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:43:46 NONE 944121613 1524290
3 CDB1DEMO MOUNTED NEW 944121613 2 21:06:06 NONE 944121613 1524290

CDB1DEMO is in MOUNT state in V$PDBS, and its status is NEW because it has just been created. You may wonder why I use a right outer join here, as PDBs known by the database should always be known by the instance. But as I mentioned, I opened an SR about switchover with TDE, and there I encountered an inconsistency between V$PDBS and DBA_PDBS.


21:06:12 //localhost/CDB1 SQL>alter pluggable database CDB1DEMO open;
Pluggable database CDB1DEMO altered.
21:06:14 //localhost/CDB1 SQL>select * from pdb_plug_in_violations;
no rows selected
 
21:06:15 //localhost/CDB1 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:43:46 NONE 944121613 1524741
3 CDB1DEMO READ WRITE NO NORMAL 944121613 2 21:06:06 NONE 944121613 1524741

The CDB1DEMO PDB is opened READ WRITE. As my goal is to show refreshes, I need a database with ongoing updates. To get them autonomously ;) I create a materialized view refreshing its timestamp every second.


21:06:15 //localhost/CDB1 SQL>create materialized view DEMO refresh complete start with (sysdate) next (sysdate+1/24/60/60) as select current_timestamp "refresh timestamp",current_scn "refresh scn" from v$database;
Materialized view DEMO created.

Here is how I’ll query this autonomous ;) materialized view, comparing the timestamp and SCN at the time of refresh with the current ones:

21:06:16 //localhost/CDB1 SQL>select demo.*,ora_rowscn,current_timestamp,(select current_scn from v$database) current_scn from demo;
 
refresh timestamp refresh scn ORA_ROWSCN CURRENT_TIMESTAMP CURRENT_SCN
------- --------- ------- --- ---------- ----------------- -----------
10-MAR-18 09.06.16.010529000 PM EUROPE/ZURICH 1524747 1524749 10-MAR-18 09.06.16.560146000 PM EUROPE/ZURICH 1524848

Refreshable PDB refreshing every minute

On CDB1 I create the user I’ll use for the remote clone: C##SYSOPER, which needs either the SYSOPER privilege or the CREATE PLUGGABLE DATABASE privilege.

21:06:17 //localhost/CDB1 SQL>connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
21:06:17 //localhost/CDB1 SQL>grant create session, sysoper to C##SYSOPER identified by oracle1 container=all;
Grant succeeded.
 
21:06:18 //localhost/CDB1 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:43:46 NONE 944121613 1524907
3 CDB1DEMO READ WRITE NO NORMAL 944121613 2 21:06:06 NONE 944121613 1524907

On CDB2 I create a database link to connect to this CDB1 user.

21:06:18 //localhost/CDB1 SQL>connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
21:06:18 //localhost/CDB2 SQL>create database link CDB1@SYSOPER connect to C##SYSOPER identified by oracle1 using '//localhost/CDB1'
Database link CDB1@SYSOPER created.
 
21:06:18 //localhost/CDB2 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:53:21 NONE 717451787 1522015

Here is the remote clone creating CDB2DEMO from CDB1DEMO, as a refreshable PDB, automatically refreshed every 1 minute (when it is in MOUNT):

21:06:18 //localhost/CDB2 SQL>create pluggable database CDB2DEMO from CDB1DEMO@CDB1@SYSOPER file_name_convert=('/CDB1/','/CDB2/','/CDB1DEMO/','/CDB2DEMO/') refresh mode every 1 minutes;
Pluggable database CDB2DEMO created.

This is not new: we got it in 12cR2, and if you go to Collaborate 18 in Las Vegas next month I’ll demo it, along with all the transportable tablespace and pluggable database data movement: https://app.attendcollaborate.com/event/member/448410

From the alert.log you can see the clone and one first refresh:

21:06:28 //localhost/CDB2 SQL>host ssh oracle@localhost 'ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1 /u01/app/oracle/product/18.0.0/dbhome_1/bin/adrci exec="set home diag/rdbms/cdb2/CDB2; show alert -tail 30" '| grep --color -E "(^Completed:|^alter pluggable.*|^create pluggable.*|Media Recovery|onlined Undo|KILL|^Pluggable.*|^)"
2018-03-10 21:06:18.317000 +01:00
create pluggable database CDB2DEMO from CDB1DEMO@CDB1@SYSOPER file_name_convert=('/CDB1/','/CDB2/','/CDB1DEMO/','/CDB2DEMO/') refresh mode every 1 minutes keystore identified by *
Opatch validation is skipped for PDB CDB2DEMO (con_id=4)
2018-03-10 21:06:25.942000 +01:00
Endian type of dictionary set to little
****************************************************************
Pluggable Database CDB2DEMO with pdb id - 4 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
local undo-1, localundoscn-0x00000000000000fb
****************************************************************
2018-03-10 21:06:27.413000 +01:00
Applying media recovery for pdb-4099 from SCN 1524926 to SCN 1525064
Remote log information: count-1
thr-1, seq-2, logfile-/u01/fast_recovery_area/CDB1/foreign_archivelog/CDB1DEMO/2018_03_10/o1_mf_1_2_212315018_.arc, los-1497297, nxs-18446744073709551615
Media Recovery Start
Serial Media Recovery started
max_pdb is 4
Media Recovery Log /u01/fast_recovery_area/CDB1/foreign_archivelog/CDB1DEMO/2018_03_10/o1_mf_1_2_212315018_.arc
Incomplete Recovery applied until change 1525064 time 03/10/2018 21:06:26
Media Recovery Complete (CDB2)
Completed: create pluggable database CDB2DEMO from CDB1DEMO@CDB1@SYSOPER file_name_convert=('/CDB1/','/CDB2/','/CDB1DEMO/','/CDB2DEMO/') refresh mode every 1 minutes keystore identified by *
alter pluggable database refresh

You can’t open this one in READ WRITE as it is refreshed with the redo from the source, as you see in the alert.log, but you can open it in READ ONLY to query it or to clone it further:

21:06:28 //localhost/CDB2 SQL>alter pluggable database CDB2DEMO open read only;
Pluggable database CDB2DEMO altered.
 
21:06:32 //localhost/CDB2 SQL>alter session set container=CDB2DEMO;
Session altered.
 
21:06:32 //localhost/CDB2 SQL>select demo.*,ora_rowscn,current_timestamp,(select current_scn from v$database) current_scn from demo;
 
refresh timestamp refresh scn ORA_ROWSCN CURRENT_TIMESTAMP CURRENT_SCN
------- --------- ------- --- ---------- ----------------- -----------
10-MAR-18 09.06.27.140229000 PM +01:00 1525071 1525072 10-MAR-18 09.06.32.565600000 PM EUROPE/ZURICH 1525100

If you look at the timestamps, you can see that it is in sync with the source as of the end of the creation. The alert.log shows that a first refresh happened just after the creation completed.

21:06:39 //localhost/CDB2 SQL>host ssh oracle@localhost 'ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1 /u01/app/oracle/product/18.0.0/dbhome_1/bin/adrci exec="set home diag/rdbms/cdb2/CDB2; show alert -tail 30" '| grep --color -E "(^Completed:|^alter pluggable.*|^create pluggable.*|Media Recovery|onlined Undo|KILL|^Pluggable.*|^)"
2018-03-10 21:06:28.674000 +01:00
alter pluggable database CDB2DEMO open read only
Applying media recovery for pdb-4099 from SCN 1525064 to SCN 1525083
Remote log information: count-1
thr-1, seq-2, logfile-/u01/fast_recovery_area/CDB1/foreign_archivelog/CDB1DEMO/2018_03_10/o1_mf_1_2_212315018_.arc, los-1497297, nxs-18446744073709551615
Media Recovery Start
Serial Media Recovery started
2018-03-10 21:06:29.721000 +01:00
max_pdb is 4
Media Recovery Log /u01/fast_recovery_area/CDB1/foreign_archivelog/CDB1DEMO/2018_03_10/o1_mf_1_2_212315018_.arc
Incomplete Recovery applied until change 1525083 time 03/10/2018 21:06:28
Media Recovery Complete (CDB2)
Completed: alter pluggable database refresh
Autotune of undo retention is turned on.
2018-03-10 21:06:30.880000 +01:00
Undo initialization finished serial:0 start:4386360 end:4386360 diff:0 ms (0.0 seconds)
Database Characterset for CDB2DEMO is AL32UTF8
2018-03-10 21:06:32.305000 +01:00
Opening pdb with no Resource Manager plan active
Pluggable database CDB2DEMO opened read only
Completed: alter pluggable database CDB2DEMO open read only

The refresh can happen only when the PDB is in MOUNT. If it has been opened (READ ONLY) for a long time, the next refresh will have to retrieve some archived logs. This is why you can see FOREIGN ARCHIVED LOG in your recovery area since 12cR2.
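If you want to check, a query like the following sketch on the destination CDB shows them (I'm assuming here that ‘FOREIGN ARCHIVED LOG’ is the exact FILE_TYPE value reported by your version):

```sql
-- On the destination CDB: foreign archived logs fetched from the source
-- for the PDB refresh show up as their own file type in the FRA:
select file_type, percent_space_used, number_of_files
from   v$recovery_area_usage
where  file_type = 'FOREIGN ARCHIVED LOG';
```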

So, basically, here you have a kind of standby database at PDB level, refreshed asynchronously, that you can open whenever you want:

21:06:32 //localhost/CDB2 SQL>alter session set container=CDB$ROOT;
Session altered.
21:06:32 //localhost/CDB2 SQL>alter pluggable database CDB2DEMO close;
Pluggable database CDB2DEMO altered.
21:06:39 //localhost/CDB2 SQL>alter pluggable database CDB2DEMO open read only;
Pluggable database CDB2DEMO altered.
21:06:40 //localhost/CDB2 SQL>alter session set container=CDB2DEMO;
Session altered.
21:06:40 //localhost/CDB2 SQL>select demo.*,ora_rowscn,current_timestamp,(select current_scn from v$database) current_scn from demo;
 
refresh timestamp refresh scn ORA_ROWSCN CURRENT_TIMESTAMP CURRENT_SCN
------- --------- ------- --- ---------- ----------------- -----------
10-MAR-18 09.06.27.140229000 PM +01:00 1525071 1525072 10-MAR-18 09.06.40.159432000 PM EUROPE/ZURICH 1525117
 
21:06:40 //localhost/CDB2 SQL>alter session set container=CDB$ROOT;
Session altered.

You would probably do a last refresh before opening it, but I skipped it on purpose to show that the switchover ensures they are in sync.

In 12cR2, if you want to ‘switch over’, you need to stop modifications on CDB1DEMO, alter CDB2DEMO to REFRESH MODE NONE, open it READ WRITE after a last refresh to be the new ‘primary’, and drop CDB1DEMO to re-create it as a refreshable clone from CDB2DEMO. And this is where the new 18c PDB switchover comes in: one command to do all of this.
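For reference, the 12cR2 manual procedure would look something like the following sketch (I’m reusing the PDB names, database link, and file_name_convert patterns of this demo; the exact sequence may vary in your environment):

```sql
-- On CDB2: last refresh (the PDB must not be open), stop refreshing,
-- and open the clone READ WRITE as the new 'primary':
alter pluggable database CDB2DEMO close;
alter pluggable database CDB2DEMO refresh;
alter pluggable database CDB2DEMO refresh mode none;
alter pluggable database CDB2DEMO open read write;

-- On CDB1: drop the old 'primary' and re-create it as a refreshable clone
-- from the new one, over a database link such as CDB2@SYSOPER:
drop pluggable database CDB1DEMO including datafiles;
create pluggable database CDB1DEMO from CDB2DEMO@CDB2@SYSOPER
  file_name_convert=('/CDB2/','/CDB1/','/CDB2DEMO/','/CDB1DEMO/')
  refresh mode every 1 minutes;
```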

Note that I leave CDB2DEMO open (READ ONLY), or the switchover would fail with “ORA-17628: Oracle error 65036 returned by remote Oracle server, ORA-65036: pluggable database not open in required mode”.


21:06:40 //localhost/CDB2 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:53:21 NONE 717451787 1525131
4 CDB2DEMO READ ONLY NO REFRESHING 944121613 3 21:06:18 AUTO 1 1525083 717451787 1525131

PDB Switchover

As the goal is to change roles, I need a SYSOPER user and a database link on the other side as well:

A user to connect to CDB2 which is the source we will refresh from after the switchover:

21:06:40 //localhost/CDB2 SQL>connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
21:06:40 //localhost/CDB2 SQL>grant create session, sysoper to C##SYSOPER identified by oracle2 container=all;
Grant succeeded.

A database link from the CDB1 which will become the refreshing side:

21:06:41 //localhost/CDB2 SQL>connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
21:06:41 //localhost/CDB1 SQL> create database link CDB2@SYSOPER connect to C##SYSOPER identified by oracle2 using '//localhost/CDB2'
Database link CDB2@SYSOPER created.

For the moment this side is in READ WRITE, as it is the current ‘primary’:

21:06:41 //localhost/CDB1 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:43:46 NONE 944121613 1525456
3 CDB1DEMO READ WRITE NO NORMAL 944121613 2 21:06:06 NONE 944121613 1525456

Here is the one-command switchover in 18c. We alter CDB1DEMO to be refreshable from CDB2DEMO, and we add the ‘switchover’ keyword to stop refreshing the remote PDB, as it will now be the ‘primary’.

21:06:41 //localhost/CDB1 SQL>alter pluggable database CDB1DEMO refresh mode every 1 minutes from CDB2DEMO@CDB2@SYSOPER switchover;
Pluggable database CDB1DEMO altered.

The alert.log here from CDB1 shows ‘Deleted file’ as in a DROP PLUGGABLE DATABASE, then ‘created as UNUSABLE’ as in CREATE PLUGGABLE DATABASE, then ‘Applying media recovery’ as in refreshable clone:

21:06:59 //localhost/CDB1 SQL>host ssh oracle@localhost 'ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1 /u01/app/oracle/product/18.0.0/dbhome_1/bin/adrci exec="set home diag/rdbms/cdb1/CDB1; show alert -tail 30" '| grep --color -E "(^Completed:|^alter pluggable.*|^create pluggable.*|Media Recovery|onlined Undo|KILL|^Pluggable.*|^)"
2018-03-10 21:06:41.354000 +01:00
alter pluggable database CDB1DEMO refresh mode every 1 minutes from CDB2DEMO@CDB2@SYSOPER switchover
JIT: pid 11896 requesting stop
Buffer Cache flush started: 3
Buffer Cache flush finished: 3
While transitioning the pdb 3 to clean state, clearing all its abort bits in the control file.
Pluggable database CDB1DEMO closed
2018-03-10 21:06:45.734000 +01:00
Deleted file /u01/oradata/CDB1/CDB1DEMO/temp012018-03-10_12-43-46-436-PM.dbf
Deleted file /u01/oradata/CDB1/CDB1DEMO/undotbs01.dbf
Deleted file /u01/oradata/CDB1/CDB1DEMO/sysaux01.dbf
Deleted file /u01/oradata/CDB1/CDB1DEMO/system01.dbf
2018-03-10 21:06:48.199000 +01:00
Opatch validation is skipped for PDB CDB1DEMO (con_id=3)
2018-03-10 21:06:55.321000 +01:00
Endian type of dictionary set to little
****************************************************************
Pluggable Database CDB1DEMO with pdb id - 3 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
local undo-1, localundoscn-0x00000000000000fb
****************************************************************
2018-03-10 21:06:59.142000 +01:00
Applying media recovery for pdb-4099 from SCN 1526441 to SCN 1526451
Remote log information: count-1
thr-1, seq-2, logfile-/u01/fast_recovery_area/CDB2/foreign_archivelog/CDB2DEMO/2018_03_10/o1_mf_1_2_2195948769_.arc, los-1497207, nxs-18446744073709551615
Media Recovery Start
Serial Media Recovery started
max_pdb is 4
Media Recovery Log /u01/fast_recovery_area/CDB2/foreign_archivelog/CDB2DEMO/2018_03_10/o1_mf_1_2_2195948769_.arc
Incomplete Recovery applied until change 1526451 time 03/10/2018 21:06:55
Media Recovery Complete (CDB1)
Completed: alter pluggable database CDB1DEMO refresh mode every 1 minutes from CDB2DEMO@CDB2@SYSOPER switchover
alter pluggable database refresh

The CDB1DEMO which was the ‘primary’ in READ WRITE is now the ‘standby’ in MOUNT, automatically refreshing every minute as mentioned in the switchover command:

21:07:00 //localhost/CDB1 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:43:46 NONE 944121613 1526489
3 CDB1DEMO MOUNTED REFRESHING 717451787 4 21:06:06 AUTO 1 1526451 944121613 1526489

As with any refreshable clone, I can open it READ ONLY and query it:

21:07:00 //localhost/CDB1 SQL>alter pluggable database CDB1DEMO open read only;
Pluggable database CDB1DEMO altered.
 
21:07:02 //localhost/CDB1 SQL>alter session set container=CDB1DEMO;
Session altered.
 
21:07:02 //localhost/CDB1 SQL>select demo.*,ora_rowscn,current_timestamp,(select current_scn from v$database) current_scn from demo;
 
refresh timestamp refresh scn ORA_ROWSCN CURRENT_TIMESTAMP CURRENT_SCN
------- --------- ------- --- ---------- ----------------- -----------
10-MAR-18 09.06.41.175918000 PM +01:00 1525436 1525594 10-MAR-18 09.07.02.875782000 PM EUROPE/ZURICH 1526520

Look at the timestamp: the data is fresh as of the switchover. No data is lost: the transactions that were committed on the source at the time of the switchover have been applied on the clone.

Another switchover

I’m now doing a switchover in the opposite direction. Same as before: the destination is READ ONLY and the source is READ WRITE:

21:07:20 //localhost/CDB1 SQL>connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
21:07:20 //localhost/CDB2 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:53:21 NONE 717451787 1526851
4 CDB2DEMO READ WRITE NO NORMAL 944121613 3 21:06:18 NONE 717451787 1526851

Here is the switchover:

21:07:20 //localhost/CDB2 SQL>alter pluggable database CDB2DEMO refresh mode every 1 minutes from CDB1DEMO@CDB1@SYSOPER switchover;
Pluggable database CDB2DEMO altered.

The alert.log shows an error at the end, on the refresh attempted right after the switchover:

21:07:33 //localhost/CDB2 SQL>host ssh oracle@localhost 'ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1 /u01/app/oracle/product/18.0.0/dbhome_1/bin/adrci exec="set home diag/rdbms/cdb2/CDB2; show alert -tail 30" '| grep --color -E "(^Completed:|^alter pluggable.*|^create pluggable.*|Media Recovery|onlined Undo|KILL|^Pluggable.*|^)"
2018-03-10 21:07:32.707000 +01:00
Incomplete Recovery applied until change 1527253 time 03/10/2018 21:07:31
Media Recovery Complete (CDB2)
Completed: alter pluggable database CDB2DEMO refresh mode every 1 minutes from CDB1DEMO@CDB1@SYSOPER switchover
alter pluggable database refresh
ORA-65376 signalled during: alter pluggable database refresh...
Errors in file /u01/app/oracle/diag/rdbms/cdb2/CDB2/trace/CDB2_j000_12081.trc:
ORA-12012: error on auto execute of job "SYS"."CDB2DEMO_510111146_REFRESH"
ORA-65376: unable to refresh the PDB
ORA-06512: at "SYS.DBMS_SQL", line 2995
ORA-06512: at line 1

The content of the trace file doesn’t tell much more:

ORA-12012: error on auto execute of job "SYS"."CDB2DEMO_510111146_REFRESH"
at 0x7ffd3c59af38 placed jslv.c@1659
ORA-65376: unable to refresh the PDB
ORA-06512: at "SYS.DBMS_SQL", line 2995
ORA-06512: at line 1

However, the switchover itself completed fine, so nothing was lost and I’ll be able to run new refreshes later.

The CDB2DEMO is now the ‘standby’ again:

21:07:33 //localhost/CDB2 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:53:21 NONE 717451787 1527275
4 CDB2DEMO MOUNTED REFRESHING 944121613 3 21:06:18 AUTO 1 1527253 717451787 1527275

CDB1DEMO is now the ‘primary’ in READ WRITE:

21:07:34 //localhost/CDB2 SQL>connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
21:07:34 //localhost/CDB1 SQL>pdbs
21:07:34 //localhost/CDB1 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:43:46 NONE 944121613 1527257
3 CDB1DEMO READ WRITE NO NORMAL 717451787 4 21:06:06 NONE 944121613 1527257

The standby is refreshing, containing the data as of the time of the switchover, until the next refresh:

21:07:34 //localhost/CDB1 SQL>connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
21:07:35 //localhost/CDB2 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:53:21 NONE 717451787 1527276
4 CDB2DEMO MOUNTED REFRESHING 944121613 3 21:06:18 AUTO 1 1527253 717451787 1527276
 
21:07:35 //localhost/CDB2 SQL>alter session set container=CDB2DEMO;
Session altered.
 
21:07:36 //localhost/CDB2 SQL>select demo.*,ora_rowscn,current_timestamp,(select current_scn from v$database) current_scn from demo;
 
refresh timestamp refresh scn ORA_ROWSCN CURRENT_TIMESTAMP CURRENT_SCN
------- --------- ------- --- ---------- ----------------- -----------
10-MAR-18 09.07.20.108426000 PM +01:00 1526838 1526839 10-MAR-18 09.07.36.114424000 PM EUROPE/ZURICH 1527282

I checked the state the next day. The ‘primary’ PDB still had its materialized view refreshing every second:

12:44:48 //localhost/CDB2 SQL>connect sys/oracle@//localhost/CDB1 as sysdba
Connected.
12:44:48 //localhost/CDB1 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:43:46 NONE 944121613 1767409
4 CDB1DEMO READ WRITE NO NORMAL 717451787 3 12:43:31 NONE 944121613 1767409
 
12:44:48 //localhost/CDB1 SQL>alter session set container=CDB1DEMO;
Session altered.
 
12:44:49 //localhost/CDB1 SQL>select demo.*,ora_rowscn,current_timestamp,(select current_scn from v$database) current_scn from demo;
 
refresh timestamp refresh scn ORA_ROWSCN CURRENT_TIMESTAMP CURRENT_SCN
------- --------- ------- --- ---------- ----------------- -----------
11-MAR-18 12.44.33.078430000 PM +01:00 1766975 1766976 11-MAR-18 12.44.49.085200000 PM EUROPE/ZURICH 1767410

And the ‘standby’ PDB is still refreshing every minute:

12:44:49 //localhost/CDB1 SQL>connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
 
12:44:49 //localhost/CDB2 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:53:21 NONE 717451787 1767422
3 CDB2DEMO MOUNTED REFRESHING 944121613 4 12:43:42 AUTO 1 1767399 717451787 1767422
 
12:44:49 //localhost/CDB2 SQL>alter session set container=CDB2DEMO;
Session altered.
 
12:44:49 //localhost/CDB2 SQL>alter pluggable database open read only;
Pluggable database OPEN altered.
 
12:44:50 //localhost/CDB2 SQL>select demo.*,ora_rowscn,current_timestamp,(select current_scn from v$database) current_scn from demo;
 
refresh timestamp refresh scn ORA_ROWSCN CURRENT_TIMESTAMP CURRENT_SCN
------- --------- ------- --- ---------- ----------------- -----------
11-MAR-18 12.44.33.078430000 PM +01:00 1766975 1766976 11-MAR-18 12.44.50.205050000 PM EUROPE/ZURICH 1767432

But no failover

When we talk about a ‘standby’ at PDB level, we want to be able to do a failover. Of course, we accept losing some transactions, as the refresh runs every minute, but a 1-minute RPO is still an interesting solution for a feature that is available in all editions.

However, as in 12.2.0.1, this doesn’t work, because you cannot alter the ‘standby’ PDB to REFRESH MODE NONE when the ‘primary’ is not available.

Here is my ‘standby’ PDB CDB2DEMO:

12:44:50 //localhost/CDB2 SQL>connect sys/oracle@//localhost/CDB2 as sysdba
Connected.
 
12:44:50 //localhost/CDB2 SQL>select con_id,name con_name,open_mode,restricted,status,foreign_cdb_dbid,foreign_pdb_id,to_char(creation_time, 'hh24:mi:ss') creation_time,refresh_mode,refresh_interval,last_refresh_scn,cdb_dbid,current_scn from v$pdbs right join dba_pdbs using(con_id,creation_time,dbid) cross join (select dbid cdb_dbid,current_scn from v$database) order by pdb_id
 
CON_ID CON_NAME OPEN_MODE RESTRICTED STATUS FOREIGN_CDB_DBID FOREIGN_PDB_ID CREATION_TIME REFRESH_MODE REFRESH_INTERVAL LAST_REFRESH_SCN CDB_DBID CURRENT_SCN
------ -------- --------- ---------- ------ ---------------- -------------- ------------- ------------ ---------------- ---------------- -------- -----------
2 PDB$SEED READ ONLY NO NORMAL 1214140 2 12:53:21 NONE 717451787 1767433
3 CDB2DEMO READ ONLY NO REFRESHING 944121613 4 12:43:42 AUTO 1 1767405 717451787 1767433

I stop the listener which listens for the ‘primary’ CDB1DEMO:

12:44:50 //localhost/CDB2 SQL>host lsnrctl stop
 
LSNRCTL for Linux: Version 18.0.0.0.0 - Production on 11-MAR-2018 12:44:50
 
Copyright (c) 1991, 2017, Oracle. All rights reserved.
 
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully

You can see in the alert.log that the refresh fails (it is running from a scheduler job):

Errors in file /u01/app/oracle/diag/rdbms/cdb2/CDB2/trace/CDB2_j000_25443.trc:
ORA-12012: error on auto execute of job "SYS"."CDB2DEMO_1159316120_REFRESH"
ORA-17627: ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
ORA-17629: Cannot connect to the remote database server
ORA-06512: at "SYS.DBMS_SQL", line 2995
ORA-06512: at line 1

Then, an attempt to stop the refresh mode of the ‘standby’ fails with ‘Cannot connect to the remote database server':

12:44:50 //localhost/CDB2 SQL>alter pluggable database CDB2DEMO close;
Pluggable database CDB2DEMO altered.
 
12:44:50 //localhost/CDB2 SQL>alter pluggable database CDB2DEMO refresh mode none;
 
Error starting at line : 180 File @ ~/PDB-switchover.sql
In command -
alter pluggable database CDB2DEMO refresh mode none
Error report -
ORA-17627: ORA-12541: TNS:no listener
ORA-17629: Cannot connect to the remote database server
17627. 00000 - "%s"
*Cause: An error returned by OCI while sending/receiving message from remote instance
*Action: Look at error message and take appropriate action or contact Oracle Support Services for further assistance

So nothing new here about failover. I already explained how to do something like a failover by cloning the standby, which can be a snapshot clone to be faster: https://blog.dbi-services.com/12cr2-pdb-refresh-as-a-poor-man-standby/.
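That workaround still applies in 18c: the refreshed files are consistent as of the last refresh SCN, so the ‘standby’ can be cloned locally and the clone opened read-write without any connection to the ‘primary’. A hedged sketch, where the clone name CDB2FAIL and the file_name_convert paths are made up for illustration:

```sql
-- Poor man's failover: clone the refreshable PDB locally and open the clone.
-- The source must be closed or read-only, which is already the case here.
create pluggable database CDB2FAIL from CDB2DEMO
  file_name_convert=('/CDB2DEMO/','/CDB2FAIL/')
  snapshot copy;
alter pluggable database CDB2FAIL open;
```

The snapshot copy clause makes the clone fast but requires underlying storage that supports it; a full clone works everywhere.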

Note that this new feature is restricted to specific platforms only – Oracle Cloud PaaS and Oracle Exadata – so most Oracle customers paying for software updates will not be able to use it. But don’t worry: you can do the same with a few commands, as in 12cR2.

 

Cet article 18c PDB switchover est apparu en premier sur Blog dbi services.

Best method to re-org Very Big Banking Tables

Tom Kyte - Sun, 2018-03-11 06:26
Hello All, We need expert advice on how we can achieve ONLINE table reorganization and data file shrinking. This is the core banking database of one of the biggest banks. I can provide all required information from my side, if you are ready to ...
Categories: DBA Blogs

Invisible columns

Tom Kyte - Sun, 2018-03-11 06:26
hi tom, i have a question about an oracle 12c feature (invisible columns). Why, if I create a new field with INVISIBLE and a NOT NULL constraint, does it fail when inserting values into the table, since it is not possible to set values for the new field...
Categories: DBA Blogs

Sybase to Oracle migration

Tom Kyte - Sun, 2018-03-11 06:26
Are there any tools available to compare data (data validation post Sybase migration) between Sybase and Oracle 12c with high volume, up to 1 TB? SQL Developer takes a long time even for a small set of data.
Categories: DBA Blogs

How to count records without using count function

Tom Kyte - Sun, 2018-03-11 06:26
Hi, How can we find the employee count per department from the emp table, without using the count function, using a PL/SQL block?
Categories: DBA Blogs

Memoptimized Rowstore tables

Tom Kyte - Sat, 2018-03-10 11:06
Are the Memoptimized Rowstore tables newly available in Oracle 18 a complete substitute for single-table hash clusters? Judging by the docs about the direct memory access available with these types of table, I guess they're actually superior in ...
Categories: DBA Blogs

Docker-CE on Ubuntu 17.10 (Artful Aardvark) (2)

Dietrich Schroff - Sat, 2018-03-10 03:42
Three months ago I installed Docker on my Ubuntu 17.10. In those days there was no straightforward howto on docker.com.

Now the installation is listed on docker.com:


The installation manual can be found here.

root@zerberus:~# apt-get install apt-transport-https ca-certificates curl   software-properties-common
root@zerberus:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  OK
root@zerberus:~# apt-key fingerprint 0EBFCD88
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid        [ unknown ] Docker Release (CE deb)
sub   rsa4096 2017-02-22 [S]
root@zerberus:~# add-apt-repository \
>    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
>    $(lsb_release -cs) \
>    stable"
root@zerberus:~# apt update

root@zerberus:~# apt install docker-ce

And then a check:

root@zerberus:~# docker version
Client:
 Version:    17.12.0-ce
 API version:    1.35
 Go version:    go1.9.2
 Git commit:    c97c6d6
 Built:    Wed Dec 27 20:11:14 2017
 OS/Arch:    linux/amd64

Server:
 Engine:
  Version:    17.12.0-ce
  API version:    1.35 (minimum version 1.12)
  Go version:    go1.9.2
  Git commit:    c97c6d6
  Built:    Wed Dec 27 20:09:47 2017
  OS/Arch:    linux/amd64
  Experimental:    false

desc command not working in SQL Developer

Tom Kyte - Fri, 2018-03-09 16:46
Hi Tom, There are a few tables on the remote server. I can't use the DESC command against these tables in SQL Developer. But when I am using SQL*Plus it works without any error. I can run this command against the local tables and those are o...
Categories: DBA Blogs

REST API in PLSQL Giving ORA-29273 & ORA-12541

Tom Kyte - Fri, 2018-03-09 16:46
Below is my environmental information Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit PL/SQL Release 12.1.0.2.0 - Below activity performed by DBA <code>grant execute on utl_http to schema_name grant execute on dbms_l...
Categories: DBA Blogs

Java version in Oracle DB

Tom Kyte - Fri, 2018-03-09 16:46
Hi, I have an Oracle DB stored procedure (java) which: - pulls data from Microsoft SQL Server 2000 and - inserts it into Oracle DB 11.2.0.2.0. The Java version on the Oracle DB is: 1.5.0_10 <code>SELECT dbms_java.get_ojvm_property(PROPSTRING=>'ja...
Categories: DBA Blogs

Procedure to read the input array and exclude in select

Tom Kyte - Fri, 2018-03-09 16:46
Hi, I have a question about accessing a JSON array parameter in a DB proc. Below is an example input JSON array: '{"List":["name1","name2"]}' or ["name1","name2"]. How to access this type of input data and apply a filter with it? Means ...
Categories: DBA Blogs

I cannot automate the value for utl_http.begin_request()'s new https_host parameter

Tom Kyte - Fri, 2018-03-09 16:46
Hi "Tom", We're just beginning to test our application in earnest with Oracle 12.2 and have run into a significant problem with a coding change to UTL_HTTP for handling SSL sites. There is a new parameter called "https_host" for which the appli...
Categories: DBA Blogs

SQLLDR Replace Function not working

Tom Kyte - Fri, 2018-03-09 16:46
Help, I am trying to use the replace function on the Gender column. The column has only 2 possible values (-2, -3) and I am trying to replace those with M and F respectively. But I keep getting an error. Record 5: Rejected - Error on table L2K_TNCPS_T_PERSO...
Categories: DBA Blogs

Enabled, Accepted, Fixed SQL Plan Baselines

Yann Neuhaus - Fri, 2018-03-09 15:44

When the documentation is not always clear, I prefer to build a test case to be sure about the behavior in different context and different versions. Here is a test on SQL Plan Management to show which plan is chosen among the different states of SQL Plan Baselines: Enabled, Accepted, Fixed. Thanks to Oracle ACE program, I have some Oracle Cloud credits to quickly provision a database, so I tested that on Oracle 18c.

For this test, I’ve created a table:

create table DEMO as select rownum n from xmltable('1 to 10000');

with 8 indexes:

exec for i in 1..8 loop execute immediate 'create index DEMO'||i||' on DEMO(n,'||i||')'; end loop;

and a procedure to query it several times, setting random costs for the indexes, with only one cheapest:

create or replace procedure runplans(n number) as
dummy number;
begin
-- run all this 30 times
for k in 1..30 loop
-- run from index DEMO1 to DEMOt, with one of them cheaper each time
for t in 1..n loop
-- set random cost for all indexes
for i in 1..8 loop dbms_stats.set_index_stats(user,'DEMO'||i,indlevel=>round(dbms_random.value(10,100)),no_invalidate=>true); end loop;
-- set cheap cost for index DEMOt
dbms_stats.set_index_stats(user,'DEMO'||t,indlevel=>1,no_invalidate=>true);
-- change some parameters to parse new child
execute immediate 'alter session set optimizer_index_caching='||(t*8+k);
-- query with an index hint but not specifying the index so the cheaper is chosen
select /*+ index(DEMO) */ n into dummy from DEMO where n=1;
end loop;
end loop;
end;
/

So you understand the idea: have 8 possible execution plans, with one cheaper than the others. And the goal is to see which one is chosen depending of the state of the SQL Plan Baseline.

I will play with the baselines and will display the cursor execution with the following SQLcl alias:

SQL> alias sqlpb=select sql_id,plan_hash_value,sum(executions),sql_plan_baseline,id,object_name from v$sql join v$sql_plan using(sql_id,plan_hash_value,child_number) where sql_text like 'SELECT % FROM DEMO%' and id=1 group by sql_id,plan_hash_value,sql_plan_baseline,id,object_name order by 3;

So, I call my procedure to run the query 30 times when index DEMO1 is the cheapest:

SQL> exec runplans(1)
PL/SQL procedure successfully completed.

Here is my alias to show the 30 executions using DEMO1 (object_name from v$sqlplan):

SQL> sqlpb
 
SQL_ID PLAN_HASH_VALUE SUM(EXECUTIONS) SQL_PLAN_BASELINE ID OBJECT_NAME
------------- --------------- --------------- -------------------- -- -----------
gqnkn2d17zjvv 3739632713 30 1 DEMO1

I load this to SPM

SQL> exec dbms_output.put_line(dbms_spm.load_plans_from_cursor_cache(sql_id=>'gqnkn2d17zjvv'));
PL/SQL procedure successfully completed.

Here is my SQL Plan Baseline, enabled and accepted:

SQL> select plan_name,enabled,accepted,fixed,executions from dba_sql_plan_baselines;
 
PLAN_NAME ENABLED ACCEPTED FIXED EXECUTIONS
------------------------------ ------- -------- ----- ----------
SQL_PLAN_dcc9d14j7k1vu97e16a35 YES YES NO 30

Now I run my procedure to run 30 times the cursor and for each, 8 times with one different index being the cheapest:

SQL> exec runplans(8)
PL/SQL procedure successfully completed.

So all executions have used the only one SQL Plan Baseline which is enabled and accepted:

SQL> sqlpb
 
SQL_ID PLAN_HASH_VALUE SUM(EXECUTIONS) SQL_PLAN_BASELINE ID OBJECT_NAME
------------- --------------- --------------- --------------------------------- -- -----------
gqnkn2d17zjvv 3739632713 232 SQL_PLAN_dcc9d14j7k1vu97e16a35 1 DEMO1

And the other plans (because only 8 of them had this DEMO1 plan being the cheapest) were loaded, enabled but not accepted:

SQL> select plan_name,enabled,accepted,fixed,executions from dba_sql_plan_baselines;
 
PLAN_NAME ENABLED ACCEPTED FIXED EXECUTIONS
------------------------------ ------- -------- ----- ----------
SQL_PLAN_dcc9d14j7k1vu287d1344 YES NO NO 0
SQL_PLAN_dcc9d14j7k1vu452ab005 YES NO NO 0
SQL_PLAN_dcc9d14j7k1vu4564f9cd YES NO NO 0
SQL_PLAN_dcc9d14j7k1vu4cdc9ee7 YES NO NO 0
SQL_PLAN_dcc9d14j7k1vu5353a77e YES NO NO 0
SQL_PLAN_dcc9d14j7k1vu97e16a35 YES YES NO 30
SQL_PLAN_dcc9d14j7k1vuc6a3f323 YES NO NO 0
SQL_PLAN_dcc9d14j7k1vufb8f9e5a YES NO NO 0

Now, I change the status of the baselines to get all combinations of enabled, fixed and accepted, and in addition I change each plan name to the line of the plan which differs:

SQL> begin
2 for i in (select rownum n,plan_name from dba_sql_plan_baselines) loop
3 if i.n in (2,4,6,8) then dbms_output.put_line(dbms_spm.evolve_sql_plan_baseline(plan_name=>i.plan_name,verify=>'no',commit=>'yes')); end if;
4 if i.n in (1,2,5,6) then dbms_output.put_line(dbms_spm.alter_sql_plan_baseline(plan_name=>i.plan_name,attribute_name=>'enabled',attribute_value=>'no')); end if;
5 if i.n in (5,6,7,8) then dbms_output.put_line(dbms_spm.alter_sql_plan_baseline(plan_name=>i.plan_name,attribute_name=>'fixed',attribute_value=>'yes')); end if;
6 for p in ( select plan_table_output from dbms_xplan.display_sql_plan_baseline(plan_name=>i.plan_name,format=>'basic') where plan_table_output like '%| DEMO%' ) loop
7 dbms_output.put_line(dbms_spm.alter_sql_plan_baseline(plan_name=>i.plan_name,attribute_name=>'plan_name',attribute_value=>p.plan_table_output));
8 end loop;
9 end loop;
10 end;
11 /

So here they are, with their new name:

SQL> select plan_name,enabled,accepted,fixed,executions from dba_sql_plan_baselines;
 
PLAN_NAME ENABLED ACCEPTED FIXED EXECUTIONS
---------------------------------- ------- -------- ----- ----------
| 1 | INDEX RANGE SCAN| DEMO4 | NO NO NO 0
| 1 | INDEX RANGE SCAN| DEMO3 | NO YES NO 0
| 1 | INDEX RANGE SCAN| DEMO2 | YES NO NO 0
| 1 | INDEX RANGE SCAN| DEMO6 | YES YES NO 0
| 1 | INDEX RANGE SCAN| DEMO5 | NO NO YES 0
| 1 | INDEX RANGE SCAN| DEMO1 | NO YES YES 30
| 1 | INDEX RANGE SCAN| DEMO8 | YES NO YES 0
| 1 | INDEX RANGE SCAN| DEMO7 | YES YES YES 0

Fixed plans have priority

I flush the shared pool and run my 240 statements again:

SQL> alter system flush shared_pool;
System FLUSH altered
SQL> exec runplans(8)
PL/SQL procedure successfully completed.

Here is the result in V$SQL, only one plan used for all those executions:

SQL> sqlpb
 
SQL_ID PLAN_HASH_VALUE SUM(EXECUTIONS) SQL_PLAN_BASELINE ID OBJECT_NAME
------------- --------------- --------------- ------------------------------------- -- -----------
gqnkn2d17zjvv 1698325646 240 | 1 | INDEX RANGE SCAN| DEMO7 | 1 DEMO7

The only possible plans are those that are enabled and accepted: the DEMO6 and DEMO7 ones. However, fixed plans have priority, so even when the CBO came up with the DEMO6 plan it was not used. When there are fixed, enabled, and accepted SQL Plan Baselines, those are the only ones considered.

Enabled and Accepted are used

Now setting all fixed attribute to no:

SQL> begin
2 for i in (select rownum n,plan_name from dba_sql_plan_baselines) loop
3 dbms_output.put_line(dbms_spm.alter_sql_plan_baseline(plan_name=>i.plan_name,attribute_name=>'fixed',attribute_value=>'no'));
4 end loop;
5 end;
6 /
PL/SQL procedure successfully completed.
 
SQL> select plan_name,enabled,accepted,fixed,executions from dba_sql_plan_baselines;
PLAN_NAME ENABLED ACCEPTED FIXED EXECUTIONS
---------------------------------- ------- -------- ----- ----------
| 1 | INDEX RANGE SCAN| DEMO4 | NO NO NO 0
| 1 | INDEX RANGE SCAN| DEMO3 | NO YES NO 0
| 1 | INDEX RANGE SCAN| DEMO2 | YES NO NO 0
| 1 | INDEX RANGE SCAN| DEMO6 | YES YES NO 0
| 1 | INDEX RANGE SCAN| DEMO5 | NO NO NO 0
| 1 | INDEX RANGE SCAN| DEMO1 | NO YES NO 30
| 1 | INDEX RANGE SCAN| DEMO8 | YES NO NO 0
| 1 | INDEX RANGE SCAN| DEMO7 | YES YES NO 0

Here is another run:

SQL> alter system flush shared_pool;
System FLUSH altered.
SQL> exec runplans(8)
PL/SQL procedure successfully completed.
 
SQL> sqlpb
 
SQL_ID PLAN_HASH_VALUE SUM(EXECUTIONS) SQL_PLAN_BASELINE ID OBJECT_NAME
------------- --------------- --------------- ------------------------------------- -- -----------
gqnkn2d17zjvv 1698325646 95 | 1 | INDEX RANGE SCAN| DEMO7 | 1 DEMO7
gqnkn2d17zjvv 3449379882 145 | 1 | INDEX RANGE SCAN| DEMO6 | 1 DEMO6

Now that there are no fixed plans taking the priority, all enabled and accepted plans are possible, but only them.

All possible plans in the baseline but none enabled and accepted

Then what happens when all possible plans are in the SQL Plan Baseline but none of them are both enabled and accepted?

SQL> begin
2 for i in (select rownum n,plan_name from dba_sql_plan_baselines where accepted='YES') loop
3 dbms_output.put_line(dbms_spm.alter_sql_plan_baseline(plan_name=>i.plan_name,attribute_name=>'enabled',attribute_value=>'no'));
4 end loop;
5 end;
6 /
 
SQL> select plan_name,enabled,accepted,fixed,executions from dba_sql_plan_baselines;
 
PLAN_NAME ENABLED ACCEPTED FIXED EXECUTIONS
---------------------------------- ------- -------- ----- ----------
| 1 | INDEX RANGE SCAN| DEMO4 | NO NO NO 0
| 1 | INDEX RANGE SCAN| DEMO3 | NO YES NO 0
| 1 | INDEX RANGE SCAN| DEMO2 | YES NO NO 0
| 1 | INDEX RANGE SCAN| DEMO6 | NO YES NO 0
| 1 | INDEX RANGE SCAN| DEMO5 | NO NO NO 0
| 1 | INDEX RANGE SCAN| DEMO1 | NO YES NO 30
| 1 | INDEX RANGE SCAN| DEMO8 | YES NO NO 0
| 1 | INDEX RANGE SCAN| DEMO7 | NO YES NO 0

So all combinations of indexes are there (and my query forces index access with a hint) but none are accepted and enabled.

SQL> alter system flush shared_pool;
System FLUSH altered.
SQL> exec runplans(8)
PL/SQL procedure successfully completed.
 
SQL> sqlpb
SQL_ID PLAN_HASH_VALUE SUM(EXECUTIONS) SQL_PLAN_BASELINE ID OBJECT_NAME
------------- --------------- --------------- -------------------- -- -----------
gqnkn2d17zjvv 3739632713 8 1 DEMO1
gqnkn2d17zjvv 4234411015 16 1 DEMO2
gqnkn2d17zjvv 2199479965 24 1 DEMO3
gqnkn2d17zjvv 1698325646 30 1 DEMO7
gqnkn2d17zjvv 3449379882 30 1 DEMO6
gqnkn2d17zjvv 2144220082 30 1 DEMO5
gqnkn2d17zjvv 918903766 30 1 DEMO4
gqnkn2d17zjvv 39208404 72 1 DEMO8

When there are no enabled and accepted plans, then anything is possible and each execution keeps the one the CBO came with.

None enabled and accepted, but new plan possible

Now, in order to have a new plan possible I’ll still run the same query but after dropping all indexes.

SQL> exec for i in 1..8 loop execute immediate 'drop index DEMO'||i; end loop;
PL/SQL procedure successfully completed.

I’ve run the same as before but without the dbms_stats calls.

Here all executions have run with the only possible plan: a full table scan:

SQL> select sql_id,plan_hash_value,sum(executions),sql_plan_baseline,id,object_name from v$sql join v$sql_plan using(sql_id,plan_hash_value,child_number) where sql_text like 'SELECT % FROM DEMO%' and id=1 group by sql_id,plan_hash_value,sql_plan_baseline,id,object_name order by 3
 
SQL_ID PLAN_HASH_VALUE SUM(EXECUTIONS) SQL_PLAN_BASELINE ID OBJECT_NAME
------------- --------------- --------------- -------------------- -- -----------
gqnkn2d17zjvv 4000794843 29 1 DEMO

This plan has been added to the SQL Plan Baseline, enabled but not accepted:

SQL> select plan_name,enabled,accepted,fixed,executions from dba_sql_plan_baselines;
 
PLAN_NAME ENABLED ACCEPTED FIXED EXECUTIONS
---------------------------------- ------- -------- ----- ----------
| 1 | INDEX RANGE SCAN| DEMO4 | NO NO NO 0
| 1 | INDEX RANGE SCAN| DEMO3 | NO YES NO 0
| 1 | INDEX RANGE SCAN| DEMO2 | YES NO NO 0
| 1 | INDEX RANGE SCAN| DEMO6 | NO YES NO 0
| 1 | INDEX RANGE SCAN| DEMO5 | NO NO NO 0
SQL_PLAN_dcc9d14j7k1vu838f84a8 YES NO NO 0
| 1 | INDEX RANGE SCAN| DEMO1 | NO YES NO 30
| 1 | INDEX RANGE SCAN| DEMO8 | YES NO NO 0
| 1 | INDEX RANGE SCAN| DEMO7 | NO YES NO 0

Not accepted means that it cannot be used. But as there is no other plan possible, it will be used anyway.

In summary:
Fixed plans are like telling the optimizer: you must use one of these.
Enabled accepted plans are like telling the optimizer: you should use one of these.
Disabled or non-accepted plans are like telling the optimizer: try to find another plan.
The optimizer will always come up with a plan, so if the rules cannot be applied, the optimizer's best-estimated plan is used. It may be a non-enabled or non-accepted plan. Or it can be a new plan, which will then be added as non-accepted.
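A quick way to see which of these cases applied to a given cursor is to join V$SQL to DBA_SQL_PLAN_BASELINES, in the spirit of the alias used above. A sketch:

```sql
-- For one statement, show the plan used and the state of the matching
-- baseline, if any (sql_plan_baseline is null when no baseline was used).
select s.sql_id, s.plan_hash_value, s.executions, s.sql_plan_baseline,
       b.enabled, b.accepted, b.fixed
  from v$sql s
  left outer join dba_sql_plan_baselines b
    on b.plan_name = s.sql_plan_baseline
 where s.sql_id = 'gqnkn2d17zjvv';
```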

 

Cet article Enabled, Accepted, Fixed SQL Plan Baselines est apparu en premier sur Blog dbi services.

Inconsistencies – Are they good or bad?

Michael Dinh - Fri, 2018-03-09 07:40

Different companies will have different implementations of the same thing, e.g. backup or naming conventions, to name a few.

However, should there be different implementations of the same thing within the company itself?

There are pros and cons to everything and there are no wrongs or rights, just objectives and requirements.

Example: if you want to have the same experience with coffee no matter where, you go to Starbucks.

Don’t let it frustrate you. Think of it as freedom and the opportunity to do things the way you want to.

Is the glass half empty or half full? For me, it’s half empty because I can fill it up with what I want.

Parallel pg_dump is slow by default?

Yann Neuhaus - Fri, 2018-03-09 00:25

Short answer: yes, it is. While at a customer during the last few days, we wanted to run a parallel pg_dump of a 2TB database. We were quite surprised that it was slow, and it was not immediately clear why. Well, the answer is in the documentation: when you go for parallel dumps you need to use the directory format, and this comes with: “This format is compressed by default and also supports parallel dumps.”. Compression takes time, so the question was whether we could disable it, which was not clear from that statement: does “compressed by default” mean that it is always compressed and you cannot change that, or does it just mean compression is the default, but you can change it?

As always, let's set up a short test case:

postgres=# create table dmp1 as 
           select a,a::varchar b,now() c 
             from generate_series ( 1, 1000000) a;
SELECT 1000000
postgres=# create table dmp2 as select * from dmp1;
SELECT 1000000
postgres=# create table dmp3 as select * from dmp1;
SELECT 1000000
postgres=# create table dmp4 as select * from dmp1;
SELECT 1000000
postgres=# \d dmp*
                        Table "public.dmp1"
 Column |           Type           | Collation | Nullable | Default 
--------+--------------------------+-----------+----------+---------
 a      | integer                  |           |          | 
 b      | character varying        |           |          | 
 c      | timestamp with time zone |           |          | 

                        Table "public.dmp2"
 Column |           Type           | Collation | Nullable | Default 
--------+--------------------------+-----------+----------+---------
 a      | integer                  |           |          | 
 b      | character varying        |           |          | 
 c      | timestamp with time zone |           |          | 

                        Table "public.dmp3"
 Column |           Type           | Collation | Nullable | Default 
--------+--------------------------+-----------+----------+---------
 a      | integer                  |           |          | 
 b      | character varying        |           |          | 
 c      | timestamp with time zone |           |          | 

                        Table "public.dmp4"
 Column |           Type           | Collation | Nullable | Default 
--------+--------------------------+-----------+----------+---------
 a      | integer                  |           |          | 
 b      | character varying        |           |          | 
 c      | timestamp with time zone |           |          | 

We have four tables, each containing 1’000’000 rows. When we use pg_dump in parallel with the defaults, it looks like this:

postgres@pgbox:/home/postgres/ [PG10] mkdir /var/tmp/dmp
postgres@pgbox:/home/postgres/ [PG10] time pg_dump --format=d --jobs=4 --file=/var/tmp/dmp/ postgres

real	0m2.788s
user	0m2.459s
sys	0m0.597s
postgres@pgbox:/home/postgres/ [PG10] ls -la /var/tmp/dmp/
total 19528
drwxr-xr-x. 2 postgres postgres    4096 Mar  9 07:16 .
drwxrwxrwt. 4 root     root          51 Mar  9 07:15 ..
-rw-r--r--. 1 postgres postgres      25 Mar  9 07:16 3113.dat.gz
-rw-r--r--. 1 postgres postgres      25 Mar  9 07:16 3114.dat.gz
-rw-r--r--. 1 postgres postgres      25 Mar  9 07:16 3115.dat.gz
-rw-r--r--. 1 postgres postgres 4991138 Mar  9 07:16 3116.dat.gz
-rw-r--r--. 1 postgres postgres 4991138 Mar  9 07:16 3117.dat.gz
-rw-r--r--. 1 postgres postgres 4991138 Mar  9 07:16 3118.dat.gz
-rw-r--r--. 1 postgres postgres 4991138 Mar  9 07:16 3119.dat.gz
-rw-r--r--. 1 postgres postgres    5819 Mar  9 07:16 toc.dat

As stated in the documentation, the result is compressed. When speed is more important than the size on disk, you can however disable the compression:

postgres@pgbox:/home/postgres/ [PG10] rm -rf /var/tmp/dmp/*
postgres@pgbox:/home/postgres/ [PG10] time pg_dump --format=d --jobs=4 --file=/var/tmp/dmp/ --compress=0 postgres

real	0m5.357s
user	0m0.065s
sys	0m0.460s
postgres@pgbox:/home/postgres/ [PG10] ls -la /var/tmp/dmp/
total 171040
drwxr-xr-x. 2 postgres postgres     4096 Mar  9 07:18 .
drwxrwxrwt. 4 root     root           51 Mar  9 07:15 ..
-rw-r--r--. 1 postgres postgres        5 Mar  9 07:18 3113.dat
-rw-r--r--. 1 postgres postgres        5 Mar  9 07:18 3114.dat
-rw-r--r--. 1 postgres postgres        5 Mar  9 07:18 3115.dat
-rw-r--r--. 1 postgres postgres 43777797 Mar  9 07:18 3116.dat
-rw-r--r--. 1 postgres postgres 43777797 Mar  9 07:18 3117.dat
-rw-r--r--. 1 postgres postgres 43777797 Mar  9 07:18 3118.dat
-rw-r--r--. 1 postgres postgres 43777797 Mar  9 07:18 3119.dat
-rw-r--r--. 1 postgres postgres     5819 Mar  9 07:18 toc.dat

In my case it got slower than the compressed dump, but this is because I do not have fast disks on my little VM. When you have a good storage solution, disabling compression should bring you more speed.

 

Cet article Parallel pg_dump is slow by default? est apparu en premier sur Blog dbi services.

Commit and Rollback

Tom Kyte - Thu, 2018-03-08 22:26
Hi Tom, In one of our database, some senior management has reported that there are more ROLLBACKS than COMMITS which is not a good sign. Can you help me with the step by step instructions for the below doubts?? 1) Where can I get ROLLBACK an...
Categories: DBA Blogs

How to concatenate 2 xmltype records into one record

Tom Kyte - Thu, 2018-03-08 22:26
Hi, I have a requirement of concatenating 6 different xml files stored in a table and passing it to another procedure as a single file as an input. The column datatype holding the xml is XMLTYPE. Thanks, Mona
Categories: DBA Blogs

Need some suggestion on Special characters.

Tom Kyte - Thu, 2018-03-08 22:26
Hi Tom, I'm using Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production. Need some suggestion regarding how to handle special characters. <b>My doubt with converting any special characters to normal character except belo...
Categories: DBA Blogs

It doesnt Work. VARCHAR2(4000 CHAR)

Tom Kyte - Thu, 2018-03-08 22:26
Hi, I have five columns as below in my DB col1 VARCHAR2(4000 CHAR) col2 VARCHAR2(4000 CHAR) col3 VARCHAR2(4000 CHAR) col4 VARCHAR2(4000 CHAR) col5 VARCHAR2(4000 CHAR) When i try to save the data with max length, i get the following error. <...
Categories: DBA Blogs
