Feed aggregator

Dynamic SQL to execute a function

Tom Kyte - Fri, 2017-02-24 08:46
Hi, I'm trying to write a function which will accept an arithmetic operation (example: mod(100,10)) as input, and the result will be that of the input arithmetic operation ----- declare l_retval varchar2(200); ...
Categories: DBA Blogs

'Order by' within or outside View, gives same execution plan and results

Tom Kyte - Fri, 2017-02-24 08:46
==================================================== -- v_product view definition create or replace view v_product as select <<column_names>> from t_product order by col_1, col_2; --Below is the existing code, running for many years in product...
Categories: DBA Blogs

Calling SQL file from batch file with return variables

Tom Kyte - Fri, 2017-02-24 08:46
Hi.. I have to write a batch file which will call a sql file by passing one parameter. The sql file has to select two columns from a table and return them to the batch file and the batch file should display it on the screen. Can I get some inpu...
Categories: DBA Blogs

Clob and NClob

Tom Kyte - Fri, 2017-02-24 08:46
Sir, I want to know when I would use a NClob and cannot use a Clob? The character set of the database is now US7ASCII. I tested using some Chinese and Japanese characters. Looks like I can get back what I inserted in the Clob column, but for the NClob col...
Categories: DBA Blogs

Access updated column names in DDL alter trigger

Tom Kyte - Fri, 2017-02-24 08:46
I have a trigger create or replace TRIGGER audit_trigger_update AFTER ALTER OR CREATE ON SCHEMA WHEN (ORA_DICT_OBJ_TYPE = 'TABLE') BEGIN create_audit_trigger(ORA_DICT_OBJ_NAME); END audit_trigger_update; with calling procedure to re...
Categories: DBA Blogs

INTERVAL PARTITIONING with custom TABLESPACE names

Tom Kyte - Fri, 2017-02-24 08:46
We have a range partitioned table on daily basis. Every year, we are splitting the MAX partition into 365* days partitions such as: ALTER TABLE txn SPLIT PARTITION p_txn_max AT (to_date('01/02/2017','MM/DD/YYYY')) INTO (PARTITION ...
Categories: DBA Blogs

Update query using case and joins

Tom Kyte - Fri, 2017-02-24 08:46
Table 1: select * from FV Table 2: select * From DB_FV_W UPDATE DB_FV_W Set FV_02 = (CASE WHEN db.FV_02 IS NULL THEN '0' Else fv.MD END) FROM DB_FV_W d LEFT OUTER JOIN FV f On...
Categories: DBA Blogs

Executing dynamic sql with

Tom Kyte - Fri, 2017-02-24 08:46
Hi Tom, I have a requirement wherein a SQL saved in a table column (clob type) is executed in PL/SQL. The results will then be used for further processing. SQLs to be executed will be selected based on some criteria. e.g.: Assume the table n...
Categories: DBA Blogs

Dump Oracle data into a delimited ascii file with PL/SQL

Amis Blog - Fri, 2017-02-24 08:30

This is how I dump data from an Oracle Database (tested on 8i, 9i, 10g, 11g and 12c) to a delimited ASCII file:

SQL*Plus: Release 12.1.0.2.0 Production on Fri Feb 24 13:55:47 2017
Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Standard Edition Release 12.1.0.2.0 - 64bit Production

SQL> set timing on
SQL> select Dump_Delimited('select * from all_objects', 'all_objects.csv') nr_rows from dual;

   NR_ROWS
----------
     97116

Elapsed: 00:00:11.87
SQL> ! cat /u01/etl/report/all_objects_readme.txt


  *********************************************************************  
  Record Layout of file /u01/etl/report/all_objects.csv
  *********************************************************************  


  Column                          Sequence  MaxLength  Datatype  
  ------------------------------  --------  ---------  ----------  

  OWNER                           1         128        VARCHAR2                 
  OBJECT_NAME                     2         128        VARCHAR2                 
  SUBOBJECT_NAME                  3         128        VARCHAR2                 
  OBJECT_ID                       4         24         NUMBER                   
  DATA_OBJECT_ID                  5         24         NUMBER                   
  OBJECT_TYPE                     6         23         VARCHAR2                 
  CREATED                         7         20         DATE                     
  LAST_DDL_TIME                   8         20         DATE                     
  TIMESTAMP                       9         19         VARCHAR2                 
  STATUS                          10        7          VARCHAR2                 
  TEMPORARY                       11        1          VARCHAR2                 
  GENERATED                       12        1          VARCHAR2                 
  SECONDARY                       13        1          VARCHAR2                 
  NAMESPACE                       14        24         NUMBER                   
  EDITION_NAME                    15        128        VARCHAR2                 
  SHARING                         16        13         VARCHAR2                 
  EDITIONABLE                     17        1          VARCHAR2                 
  ORACLE_MAINTAINED               18        1          VARCHAR2                 


  ----------------------------------  
  Generated:     24-02-2017 13:56:50
  Generated by:  ETL
  Columns Count: 18
  Records Count: 97116
  Delimiter: ][
  Row Delimiter: ]
  ----------------------------------  

SQL> 

In addition to the query and the generated filename, the Dump_Delimited function takes another six parameters, each with a default value. Check out the PL/SQL below, and BTW… the basics of this code come from Tom Kyte.

SET DEFINE OFF;
CREATE OR REPLACE DIRECTORY ETL_UNLOAD_DIR AS '/u01/etl/report';
GRANT READ, WRITE ON DIRECTORY ETL_UNLOAD_DIR TO ETL;

CREATE OR REPLACE FUNCTION Dump_Delimited
   ( P_query                IN VARCHAR2
   , P_filename             IN VARCHAR2
   , P_column_delimiter     IN VARCHAR2    := ']['
   , P_row_delimiter        IN VARCHAR2    := ']'
   , P_comment              IN VARCHAR2    := NULL
   , P_write_rec_layout     IN PLS_INTEGER := 1
   , P_dir                  IN VARCHAR2    := 'ETL_UNLOAD_DIR'
   , P_nr_is_pos_integer    IN PLS_INTEGER := 0 )
RETURN PLS_INTEGER
 IS
    filehandle             UTL_FILE.FILE_TYPE;
    filehandle_rc          UTL_FILE.FILE_TYPE;

    v_user_name            VARCHAR2(100);
    v_file_name_full       VARCHAR2(200);
    v_dir                  VARCHAR2(200);
    v_total_length         PLS_INTEGER := 0;
    v_startpos             PLS_INTEGER := 0;
    v_datatype             VARCHAR2(30);
    v_delimiter            VARCHAR2(10):= P_column_delimiter;
    v_rowdelimiter         VARCHAR2(10):= P_row_delimiter;

    v_cursorid             PLS_INTEGER := DBMS_SQL.OPEN_CURSOR;
    v_columnvalue          VARCHAR2(4000);
    v_ignore               PLS_INTEGER;
    v_colcount             PLS_INTEGER := 0;
    v_newline              VARCHAR2(32767);
    v_desc_cols_table      DBMS_SQL.DESC_TAB;
    v_dateformat           NLS_SESSION_PARAMETERS.VALUE%TYPE;
    v_stat                 VARCHAR2(1000);
    counter                PLS_INTEGER := 0;
BEGIN

    SELECT directory_path
      INTO v_dir 
    FROM DBA_DIRECTORIES
    WHERE directory_name = P_dir;
    v_file_name_full  := v_dir||'/'||P_filename;

    SELECT VALUE
      INTO v_dateformat
    FROM NLS_SESSION_PARAMETERS
    WHERE parameter = 'NLS_DATE_FORMAT';

    /* Use a date format that includes the time. */
    v_stat := 'alter session set nls_date_format=''dd-mm-yyyy hh24:mi:ss'' ';
    EXECUTE IMMEDIATE v_stat;

    filehandle := UTL_FILE.FOPEN( P_dir, P_filename, 'w', 32000 );

    /* Parse the input query so we can describe it. */
    DBMS_SQL.PARSE(  v_cursorid,  P_query, dbms_sql.native );

    /* Now, describe the outputs of the query. */
    DBMS_SQL.DESCRIBE_COLUMNS( v_cursorid, v_colcount, v_desc_cols_table );

    /* For each column, we need to define it, to tell the database
     * what we will fetch into. In this case, all data is going
     * to be fetched into a single varchar2(4000) variable.
     *
     * We will also adjust the max width of each column. 
     */
IF P_write_rec_layout = 1 THEN

   filehandle_rc := UTL_FILE.FOPEN(P_dir, SUBSTR(P_filename,1, INSTR(P_filename,'.',-1)-1)||'_readme.txt', 'w');

--Start Header
    v_newline := CHR(10)||CHR(10)||'  *********************************************************************  ';
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Record Layout of file '||v_file_name_full;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
    v_newline := '  *********************************************************************  '||CHR(10)||CHR(10);
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
    v_newline := '  Column                          Sequence  MaxLength  Datatype  ';
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
    v_newline := '  ------------------------------  --------  ---------  ----------  '||CHR(10);
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
--End Header

--Start Body
    FOR i IN 1 .. v_colcount
    LOOP
       DBMS_SQL.DEFINE_COLUMN( v_cursorid, i, v_columnvalue, 4000 );
       SELECT DECODE( v_desc_cols_table(i).col_type,  2, DECODE(v_desc_cols_table(i).col_precision,0,v_desc_cols_table(i).col_max_len,v_desc_cols_table(i).col_precision)+DECODE(P_nr_is_pos_integer,1,0,2)
                                                   , 12, 20, v_desc_cols_table(i).col_max_len )
         INTO v_desc_cols_table(i).col_max_len
       FROM dual;

       SELECT DECODE( TO_CHAR(v_desc_cols_table(i).col_type), '1'  , 'VARCHAR2'
                                                            , '2'  , 'NUMBER'
                                                            , '8'  , 'LONG'
                                                            , '11' , 'ROWID'
                                                            , '12' , 'DATE'
                                                            , '96' , 'CHAR'
                                                            , '108', 'USER_DEFINED_TYPE', TO_CHAR(v_desc_cols_table(i).col_type) )
         INTO v_datatype
       FROM DUAL;

       v_newline := RPAD('  '||v_desc_cols_table(i).col_name,34)||RPAD(i,10)||RPAD(v_desc_cols_table(i).col_max_len,11)||RPAD(v_datatype,25);
    UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
    END LOOP;
--End Body

ELSE

    FOR i IN 1 .. v_colcount LOOP
       DBMS_SQL.DEFINE_COLUMN( v_cursorid, i, v_columnvalue, 4000 );
       SELECT DECODE( v_desc_cols_table(i).col_type,  2, DECODE(v_desc_cols_table(i).col_precision,0,v_desc_cols_table(i).col_max_len,v_desc_cols_table(i).col_precision)+DECODE(P_nr_is_pos_integer,1,0,2)
                                                   , 12, 20, v_desc_cols_table(i).col_max_len )
         INTO v_desc_cols_table(i).col_max_len
       FROM dual;
     END LOOP;

END IF;

    v_ignore := DBMS_SQL.EXECUTE(v_cursorid);

     WHILE ( DBMS_SQL.FETCH_ROWS(v_cursorid) > 0 )
     LOOP
        /* Build up a big output line. This is more efficient than
         * calling UTL_FILE.PUT inside the loop.
         */
        v_newline := NULL;
        FOR i IN 1 .. v_colcount LOOP
            DBMS_SQL.COLUMN_VALUE( v_cursorid, i, v_columnvalue );
            if i = 1 then
              v_newline := v_newline||v_columnvalue;
            else
              v_newline := v_newline||v_delimiter||v_columnvalue;
            end if;              
        END LOOP;

        /* Now print out that line and increment a counter. */
        UTL_FILE.PUT_LINE( filehandle, v_newline||v_rowdelimiter );
        counter := counter+1;
    END LOOP;

IF P_write_rec_layout = 1 THEN

--Start Footer
    v_newline := CHR(10)||CHR(10)||'  ----------------------------------  ';
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Generated:     '||SYSDATE;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Generated by:  '||USER;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Columns Count: '||v_colcount;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Records Count: '||counter;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Delimiter: '||v_delimiter;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
       v_newline := '  Row Delimiter: '||v_rowdelimiter;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
    v_newline := '  ----------------------------------  '||CHR(10)||CHR(10);
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
--End Footer

--Start Comment
    v_newline := '  '||P_comment;
      UTL_FILE.PUT_LINE(filehandle_rc, v_newline);
--End Comment

UTL_FILE.FCLOSE(filehandle_rc);

END IF;

    /* Free up resources. */
    DBMS_SQL.CLOSE_CURSOR(v_cursorid);
    UTL_FILE.FCLOSE( filehandle );

    /* Reset the date format ... and return. */
    v_stat := 'alter session set nls_date_format=''' || v_dateformat || ''' ';
    EXECUTE IMMEDIATE v_stat;

    RETURN counter;
EXCEPTION
    WHEN OTHERS THEN
        /* Clean up: close the cursor and any open file handles,
           restore the date format, then return the rows written so far. */
        DBMS_SQL.CLOSE_CURSOR( v_cursorid );
        IF UTL_FILE.IS_OPEN( filehandle )    THEN UTL_FILE.FCLOSE( filehandle );    END IF;
        IF UTL_FILE.IS_OPEN( filehandle_rc ) THEN UTL_FILE.FCLOSE( filehandle_rc ); END IF;
        EXECUTE IMMEDIATE 'alter session set nls_date_format=''' || v_dateformat || ''' ';
        RETURN counter;

END Dump_Delimited;
/

SHOW ERRORS;
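As a quick illustration of those optional parameters, here is a hypothetical call (the filename is made up): it produces a semicolon-delimited file, keeps the default ']' row delimiter and skips the record-layout file.

SELECT Dump_Delimited( 'select owner, object_name from all_objects'
                     , 'objects.txt'
                     , P_column_delimiter => ';'
                     , P_write_rec_layout => 0 ) nr_rows
  FROM dual;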

The post Dump Oracle data into a delimited ascii file with PL/SQL appeared first on AMIS Oracle and Java Blog.

Oracle E-Business Suite Mobile and Web Services Security - What You Need To Know

Securing packaged software such as the Oracle E-Business Suite presents different challenges than securing bespoke custom software. Unlike custom software, both the structure and the security vulnerabilities of the Oracle E-Business Suite are well known and documented, not only to users but also to threat actors.  To begin an attack, only limited probing and/or reconnaissance is needed because threat actors know exactly what to target and what to expect.  This also makes the Oracle E-Business Suite, like other ERP platforms, vulnerable to automated attacks. Threat actors only need to compromise one publicly facing URL or web service, which, given the size and complexity of the Oracle E-Business Suite, makes securing it a somewhat daunting task.

Starting with version 12.1 and continuing with 12.2, the Oracle E-Business Suite delivers a considerable amount of new web services and Mobile functionality as standard core functionality.  Much, if not most, of this new Mobile and web services functionality replicates functionality previously only available through the traditional user interface forms and/or public interfaces, and these new web services can be easily deployed on the Internet through a DMZ node.  The security implication of 12.2's increased web services capabilities is that the Oracle E-Business Suite's attack surface has grown and is harder to defend.

This blog series summarizes the new Mobile and web services functionality and reviews their security features before recommending best practices for using them securely.

If you have any questions, please contact us at info@integrigy.com

-Michael Miller, CISSP-ISSMP, CCSP, CCSK

Categories: APPS Blogs, Security Blogs

ORA-06550 line N column N Solution

Complete IT Professional - Fri, 2017-02-24 05:00
Are you getting the ORA-06550 error when running an SQL statement? Learn the cause of this error and the solution in this article. Demonstration of the Error To demonstrate this error, I can run this code: CREATE OR REPLACE PROCEDURE testProcedure AS   textValue VARCHAR2(3); BEGIN   textValue := someOtherValue; END; If I compile this […]
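For context, here is a hedged sketch of that failing procedure and the obvious fix (declaring the referenced identifier); the exact line and column numbers depend on your code:

-- The snippet above fails to compile because someOtherValue is not declared;
-- the compiler reports PLS-00201: identifier 'SOMEOTHERVALUE' must be declared,
-- wrapped in ORA-06550: line N, column N. Declaring the identifier fixes it:
CREATE OR REPLACE PROCEDURE testProcedure AS
  someOtherValue VARCHAR2(3) := 'abc';
  textValue      VARCHAR2(3);
BEGIN
  textValue := someOtherValue;
END;
/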
Categories: Development

Oracle 12c – Recreating a Controlfile in a Data Guard environment with noresetlogs

Yann Neuhaus - Fri, 2017-02-24 04:44

Sometimes you might run into situations where the controlfile does not represent the backups and archivelogs correctly, because of a mismatch between the control_file_record_keep_time and the RMAN retention. The controlfile holds non-circular and circular records. Non-circular records are e.g. database information, redo threads, datafiles and so on. These non-circular records don't age out; however, they can be reused, e.g. when a tablespace is dropped. The circular records are e.g. the log history, archived logs, backupsets, datafile copies and so on. These records can age out. So, when you have a control_file_record_keep_time of 7 days and an RMAN recovery window of 14 days, then you obviously have a mismatch here. In 11gR2, Oracle stores 37 different record types in the control file, which can be checked with:

SELECT type FROM v$controlfile_record_section ORDER BY 1;

12cR1 stores 41 different record types; the AUXILIARY DATAFILE COPY, MULTI INSTANCE REDO APPLY, PDB RECORD and PDBINC RECORD types were added. In 12cR2 there are even more: the TABLESPACE KEY HISTORY record type was added, so you end up with 42 different record types in 12cR2.
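To see how heavily each of these record sections is used, a quick query against the same view helps (a minimal sketch using its standard columns):

SELECT type, record_size, records_total, records_used
  FROM v$controlfile_record_section
 ORDER BY records_used DESC;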

If RMAN needs to add a new backup set or archive log record to the control file, any records that have expired as per the control_file_record_keep_time parameter are overwritten. But coming back to my issue: my controlfile is out of sync with the recovery catalog, and in some situations you can't correct it anymore, even with delete force commands or alike, and you end up with errors like the following:

ORA-19633: control file record 8857 is out of sync with recovery catalog

There might be other solutions to fix it; however, I want a clean control file, so I am recreating it manually. But I don't want to open the DB with resetlogs.

The high-level steps to get this done are:

  • Disable everything that might interfere with your action, e.g. Fast-Start Failover, the Broker and so on
  • Adjust your control_file_record_keep_time to a higher value
  • Create the controlfile to trace
  • Unregister from RMAN catalog
  • Shutdown immediate and re-create the controlfile
  • Re-catalog your backups and archivelogs
  • Re-register into the RMAN catalog

Ok, let's get started and disable fast-start failover first. We don't want the observer to kick in and do any nasty stuff during this operation.

DGMGRL> show configuration;

Configuration - DBIT121

  Protection Mode: MaxAvailability
  Members:
  DBIT121_SITE1 - Primary database
    DBIT121_SITE2 - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 2 seconds ago)

DGMGRL> disable fast_start failover;
Disabled.

As a next step, I increase the control_file_record_keep_time to a much higher value. The usual formula is CONTROL_FILE_RECORD_KEEP_TIME = retention period + level 0 backup interval + 1. Meaning that with a retention period of 24 days and a weekly level 0 backup, it would be 24+7+1, so at least 32. But I don't care whether my controlfile is 20MB or 30MB in size, so I set it directly to 72 days.

-- Primary

SQL> alter system set control_file_record_keep_time=72;

System altered.

-- Standby

SQL> alter system set control_file_record_keep_time=72;

System altered.
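As a hedged sanity check (my own addition, not part of the original procedure), you can compare the new keep time with the configured RMAN retention:

SQL> show parameter control_file_record_keep_time

SQL> select name, value from v$rman_configuration where name = 'RETENTION POLICY';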

The next important step is to create a trace of the controlfile, which can be adjusted manually later on, depending on your needs. Beforehand, I specify a tracefile identifier, so that I can easily spot my trace file in the DIAG destination.

SQL> alter session set tracefile_identifier='control';

Session altered.

SQL> alter database backup controlfile to trace noresetlogs;

Database altered.

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit121_site1/DBIT121/trace/ [DBIT121] ls -rlt | grep control
-rw-r----- 1 oracle oinstall     101 Feb 24 09:10 DBIT121_ora_25050_control.trm
-rw-r----- 1 oracle oinstall    9398 Feb 24 09:10 DBIT121_ora_25050_control.trc

oracle@dbidg01:/u01/app/oracle/diag/rdbms/dbit121_site1/DBIT121/trace/ [DBIT121] mv DBIT121_ora_25050_control.trc /u01/app/oracle/admin/DBIT121/create/recreate_controlfile.sql

Let's take a look at the control file trace which was created. It contains nearly everything that we need. Some parts might have to be adjusted, and some parts do not work at all or have to be done in a different way, as we will see later. But in general it is a very good starting point to get the job done.

oracle@dbidg01:/u01/app/oracle/admin/DBIT121/create/ [DBIT121] cat recreate_controlfile.sql
-- The following are current System-scope REDO Log Archival related
-- parameters and can be included in the database initialization file.
--
-- LOG_ARCHIVE_DEST=''
-- LOG_ARCHIVE_DUPLEX_DEST=''
--
-- LOG_ARCHIVE_FORMAT=%t_%s_%r.dbf
--
-- DB_UNIQUE_NAME="DBIT121_SITE1"
--
-- LOG_ARCHIVE_CONFIG='SEND, RECEIVE'
-- LOG_ARCHIVE_CONFIG='DG_CONFIG=("DBIT121_SITE2")'
-- LOG_ARCHIVE_MAX_PROCESSES=4
-- STANDBY_FILE_MANAGEMENT=AUTO
-- STANDBY_ARCHIVE_DEST=?/dbs/arch
-- FAL_CLIENT=''
-- FAL_SERVER=DBIT121_SITE2
--
-- LOG_ARCHIVE_DEST_2='SERVICE=DBIT121_SITE2'
-- LOG_ARCHIVE_DEST_2='OPTIONAL REOPEN=300 NODELAY'
-- LOG_ARCHIVE_DEST_2='LGWR AFFIRM NOVERIFY ASYNC=0'
-- LOG_ARCHIVE_DEST_2='REGISTER NOALTERNATE NODEPENDENCY'
-- LOG_ARCHIVE_DEST_2='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED'
-- LOG_ARCHIVE_DEST_2='DB_UNIQUE_NAME=DBIT121_SITE2'
-- LOG_ARCHIVE_DEST_2='VALID_FOR=(STANDBY_LOGFILE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_2=ENABLE
--
-- LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
-- LOG_ARCHIVE_DEST_1='OPTIONAL REOPEN=300 NODELAY'
-- LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM NOVERIFY SYNC'
-- LOG_ARCHIVE_DEST_1='REGISTER NOALTERNATE NODEPENDENCY'
-- LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
-- LOG_ARCHIVE_DEST_1='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_1=ENABLE
--
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- Additional logs may be required for media recovery of offline
-- Use this only if the current versions of all online logs are
-- available.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
--  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "DBIT121" NORESETLOGS FORCE LOGGING ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnop9_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnq4o_.log'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 2 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo42k_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo43q_.log'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 3 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppn86_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppngb_.log'
  ) SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
--   GROUP 4 (
--     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log',
--     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t89m_.log'
--   ) SIZE 50M BLOCKSIZE 512,
--   GROUP 5 (
--     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj3b_.log',
--     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj8m_.log'
--   ) SIZE 50M BLOCKSIZE 512,
--   GROUP 6 (
--     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tp52_.log',
--     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tpb4_.log'
--   ) SIZE 50M BLOCKSIZE 512,
--   GROUP 7 (
--     '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twdq_.log',
--     '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twkt_.log'
--   ) SIZE 50M BLOCKSIZE 512
DATAFILE
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_system_d4fjt03j_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_sysaux_d4fjrlvs_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_undotbs1_d4fjvtd1_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_example_d4fjz1fz_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_users_d4fjvqb1_.dbf'
CHARACTER SET AL32UTF8
;
-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 14 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO SHIPPED TO ALL STANDBY BACKED UP 1 TIMES TO DISK');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RMAN OUTPUT','TO KEEP FOR 32 DAYS');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE1'' CONNECT IDENTIFIER  ''DBIT121_SITE1''');
-- Configure RMAN configuration record 7
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE2'' CONNECT IDENTIFIER  ''DBIT121_SITE2''');
-- Configure RMAN configuration record 8
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' PARMS  ''SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT121)''');
-- Configure RMAN configuration record 9
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
-- Configure RMAN configuration record 10
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','DISK');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_1_%u_.arc';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE
-- Block change tracking was enabled, so re-enable it now.
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE '/u02/oradata/DBIT121_SITE1/changetracking/o1_mf_dbx3wgqg_.chg' REUSE;
-- All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
-- Database can now be opened normally.
ALTER DATABASE OPEN;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_temp_d4fjxn8l_.tmp'
     SIZE 206569472  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M;
-- End of tempfile additions.
--
--
--
----------------------------------------------------------
-- The following script can be used on the standby database
-- to re-populate entries for a standby controlfile created
-- on the primary and copied to the standby site.
----------------------------------------------------------
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log'
 SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t89m_.log'
                                       TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log';
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj3b_.log'
 SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj8m_.log'
                                       TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_5_dbx3tj3b_.log';
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tp52_.log'
 SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tpb4_.log'
                                       TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_6_dbx3tp52_.log';
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twdq_.log'
 SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE MEMBER '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twkt_.log'
                                       TO '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_7_dbx3twdq_.log';

I am also stopping the broker to avoid any side effects, and afterwards I unregister the database from the RMAN catalog. I will re-register it later on with clean entries.

-- primary

SQL> alter system set dg_broker_start=false;

System altered.

oracle@dbidg01:/home/oracle/ [DBIT121] rman target sys/manager catalog rman/rman@rman

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Feb 24 09:16:17 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=172831209)
connected to recovery catalog database
recovery catalog schema release 12.02.00.01. is newer than RMAN release

RMAN> unregister database;

database name is "DBIT121" and DBID is 172831209

Do you really want to unregister the database (enter YES or NO)? YES
database unregistered from the recovery catalog

RMAN>

The next step is very important: we need to shut down the DB cleanly, either with normal or immediate. Afterwards, I create a copy of the current controlfiles. You never know; it is always good to have another fallback.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>

oracle@dbidg01:/home/oracle/ [DBIT121] cd /u02/oradata/DBIT121_SITE1/controlfile/
oracle@dbidg01:/u02/oradata/DBIT121_SITE1/controlfile/ [DBIT121] mv o1_mf_d4fjws55_.ctl o1_mf_d4fjws55_.ctl.old
oracle@dbidg01:/u02/oradata/DBIT121_SITE1/controlfile/ [DBIT121] cd /u03/fast_recovery_area/DBIT121_SITE1/controlfile/
oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/controlfile/ [DBIT121] mv o1_mf_d4fjwsgr_.ctl o1_mf_d4fjwsgr_.ctl.old

Now we can startup nomount and recreate our controlfile from scratch. It is very important that you specify REUSE and NORESETLOGS here.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 1325400064 bytes
Fixed Size                  2924112 bytes
Variable Size             436208048 bytes
Database Buffers          872415232 bytes
Redo Buffers               13852672 bytes

SQL> CREATE CONTROLFILE REUSE DATABASE "DBIT121" NORESETLOGS FORCE LOGGING ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnop9_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_1_d4fpnq4o_.log'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 2 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo42k_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_2_d4fpo43q_.log'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 3 (
    '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppn86_.log',
    '/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_3_d4fppngb_.log'
    ) SIZE 50M BLOCKSIZE 512
DATAFILE
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_system_d4fjt03j_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_sysaux_d4fjrlvs_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_undotbs1_d4fjvtd1_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_example_d4fjz1fz_.dbf',
  '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_users_d4fjvqb1_.dbf'
CHARACTER SET AL32UTF8
;

Control file created.

SQL>

Now we can configure the RMAN persistent settings like retention and so on.

-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 14 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ARCHIVELOG DELETION POLICY','TO SHIPPED TO ALL STANDBY BACKED UP 1 TIMES TO DISK');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','DISK PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RMAN OUTPUT','TO KEEP FOR 32 DAYS');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE1'' CONNECT IDENTIFIER  ''DBIT121_SITE1''');
-- Configure RMAN configuration record 7
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DB_UNIQUE_NAME','''DBIT121_SITE2'' CONNECT IDENTIFIER  ''DBIT121_SITE2''');
-- Configure RMAN configuration record 8
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE ''SBT_TAPE'' PARMS  ''SBT_LIBRARY=oracle.disksbt,ENV=(BACKUP_DIR=/u99/backup/DBIT121)''');
-- Configure RMAN configuration record 9
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEVICE TYPE','''SBT_TAPE'' PARALLELISM 2 BACKUP TYPE TO COMPRESSED BACKUPSET');
-- Configure RMAN configuration record 10
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('DEFAULT DEVICE TYPE TO','DISK');

The next step is to re-create the incarnation table. This might fail with a recursive SQL error if you use the SQL provided in the trace file. Just use REGISTER PHYSICAL LOGFILE instead of REGISTER LOGFILE and then it works.

-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_1_%u_.arc';

SQL> ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_142_dbzv31hq_.arc';
ALTER DATABASE REGISTER LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_142_dbzv31hq_.arc'
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level


SQL> ALTER DATABASE REGISTER PHYSICAL LOGFILE '/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_142_dbzv31hq_.arc';

Database altered.

Because I have shut down the database cleanly, there is no need to do any recovery, and I can continue to enable the block change tracking file, open the database, and add my tempfile back to the database.

SQL> recover database;
ORA-00283: recovery session canceled due to errors
ORA-00264: no recovery required

SQL> ALTER SYSTEM ARCHIVE LOG ALL;

System altered.

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u02/oradata/DBIT121_SITE1/changetracking/o1_mf_dbx3wgqg_.chg' REUSE;

Database altered.

SQL> ALTER DATABASE OPEN;

Database altered.

SQL>

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/u02/oradata/DBIT121_SITE1/datafile/o1_mf_temp_d4fjxn8l_.tmp'
  2  SIZE 206569472  REUSE AUTOEXTEND ON NEXT 655360  MAXSIZE 32767M;

Tablespace altered.

Regarding the standby redo logs, the easiest approach is to remove the old ones and simply recreate them afterwards, because you can't add them back as long as they have Oracle Managed Files names.

SQL> select * from v$standby_log;

no rows selected

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log' SIZE 50M BLOCKSIZE 512 REUSE;
ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 '/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log' SIZE 50M BLOCKSIZE 512 REUSE
*
ERROR at line 1:
ORA-01276: Cannot add file
/u02/oradata/DBIT121_SITE1/onlinelog/o1_mf_4_dbx3t840_.log.  File has an Oracle
Managed Files file name.

-- delete standby redo logs

oracle@dbidg01:/u01/app/oracle/admin/DBIT121/create/ [DBIT121] cd /u02/oradata/DBIT121_SITE1/onlinelog/
oracle@dbidg01:/u02/oradata/DBIT121_SITE1/onlinelog/ [DBIT121] ls -l
total 358428
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_1_d4fpnop9_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_2_d4fpo42k_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:47 o1_mf_3_d4fppn86_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_4_dbx3t840_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_5_dbx3tj3b_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_6_dbx3tp52_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_7_dbx3twdq_.log
oracle@dbidg01:/u02/oradata/DBIT121_SITE1/onlinelog/ [DBIT121] rm o1_mf_4_dbx3t840_.log o1_mf_5_dbx3tj3b_.log o1_mf_6_dbx3tp52_.log o1_mf_7_dbx3twdq_.log
oracle@dbidg01:/u02/oradata/DBIT121_SITE1/onlinelog/ [DBIT121] cd /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/
oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/ [DBIT121] ls -l
total 358428
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_1_d4fpnq4o_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:42 o1_mf_2_d4fpo43q_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 24 09:47 o1_mf_3_d4fppngb_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_4_dbx3t89m_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_5_dbx3tj8m_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_6_dbx3tpb4_.log
-rw-r----- 1 oracle oinstall 52429312 Feb 23 09:55 o1_mf_7_dbx3twkt_.log
oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/onlinelog/ [DBIT121] rm o1_mf_4_dbx3t89m_.log o1_mf_5_dbx3tj8m_.log o1_mf_6_dbx3tpb4_.log o1_mf_7_dbx3twkt_.log

-- recreate standby redo logs

SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 4 SIZE 50M BLOCKSIZE 512;

Database altered.

SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 5 SIZE 50M BLOCKSIZE 512;

Database altered.

SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 6 SIZE 50M BLOCKSIZE 512;

Database altered.

SQL> alter database add STANDBY LOGFILE THREAD 1 GROUP 7 SIZE 50M BLOCKSIZE 512;

Database altered.

Don't forget to enable Flashback as well, if your Data Guard is running in Max Availability mode.

SQL> alter database flashback on;

Database altered.

Now we need to recatalog all our backups and archivelogs again.

oracle@dbidg01:/u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/ [DBIT121] rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Feb 24 09:50:16 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=172831209)

RMAN> catalog recovery area;

using target database control file instead of recovery catalog
searching for all files in the recovery area

List of Files Unknown to the Database
=====================================
File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_140_dbzswh06_.arc
File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_141_dbzsxpv5_.arc
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx641px_.flb
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx642pf_.flb
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dby398lz_.flb
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbymcg20_.flb
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbyyg1r0_.flb
File Name: /u03/fast_recovery_area/DBIT121_SITE1/controlfile/o1_mf_d4fjwsgr_.ctl.old
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64pz6_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64q0b_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s0z_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s3n_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnsnf_TAG20170223T090856_dbx65kmx_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_ncnnf_TAG20170223T090856_dbx65lnt_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090923_dbx65mto_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypdc_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypfp_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpysqh_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy2f_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy56_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnsnf_TAG20170224T080812_dbzpzqnz_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_ncnnf_TAG20170224T080812_dbzpzqop_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080841_dbzpzskt_.bkp

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_140_dbzswh06_.arc
File Name: /u03/fast_recovery_area/DBIT121_SITE1/archivelog/2017_02_24/o1_mf_1_141_dbzsxpv5_.arc
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64pz6_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090854_dbx64q0b_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s0z_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnndf_TAG20170223T090856_dbx64s3n_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_nnsnf_TAG20170223T090856_dbx65kmx_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_ncnnf_TAG20170223T090856_dbx65lnt_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_23/o1_mf_annnn_TAG20170223T090923_dbx65mto_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypdc_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpypfp_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080806_dbzpysqh_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy2f_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnndf_TAG20170224T080812_dbzpyy56_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_nnsnf_TAG20170224T080812_dbzpzqnz_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_ncnnf_TAG20170224T080812_dbzpzqop_.bkp
File Name: /u03/fast_recovery_area/DBIT121_SITE1/backupset/2017_02_24/o1_mf_annnn_TAG20170224T080841_dbzpzskt_.bkp

List of Files Which Were Not Cataloged
=======================================
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx641px_.flb
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbx642pf_.flb
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dby398lz_.flb
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbymcg20_.flb
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: /u03/fast_recovery_area/DBIT121_SITE1/flashback/o1_mf_dbyyg1r0_.flb
  RMAN-07529: Reason: catalog is not supported for this file type
File Name: /u03/fast_recovery_area/DBIT121_SITE1/controlfile/o1_mf_d4fjwsgr_.ctl.old
  RMAN-07519: Reason: Error while cataloging. See alert.log.

List of files in Recovery Area not managed by the database
==========================================================
File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_4_dbzwt72f_.log
  RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_5_dbzwtgl3_.log
  RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_6_dbzwtn04_.log
  RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter
File Name: /u03/fast_recovery_area/DBIT121_SITE1/onlinelog/o1_mf_7_dbzwtvc7_.log
  RMAN-07527: Reason: File was not created using DB_RECOVERY_FILE_DEST initialization parameter

number of files not managed by recovery area is 4, totaling 200.00MB

RMAN>

We are nearly done. We just need to enable the Data Guard broker again and enable fast start failover.

SQL> alter system set dg_broker_start=true;

System altered.

SQL> alter system archive log current;

System altered.

DGMGRL> enable fast_start failover;
Enabled.
DGMGRL> show configuration;

Configuration - DBIT121

 Protection Mode: MaxAvailability
 Members:
 DBIT121_SITE1 - Primary database
 DBIT121_SITE2 - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS (status updated 21 seconds ago)

DGMGRL> validate database 'DBIT121_SITE2';

 Database Role: Physical standby database
 Primary Database: DBIT121_SITE1

 Ready for Switchover: Yes
 Ready for Failover: Yes (Primary Running)

Re-register the database into the RMAN catalog.

oracle@dbidg01:/home/oracle/ [DBIT121] rman target sys/manager catalog rman/rman@rman

Recovery Manager: Release 12.1.0.2.0 - Production on Fri Feb 24 09:57:34 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DBIT121 (DBID=172831209)
connected to recovery catalog database
recovery catalog schema release 12.02.00.01. is newer than RMAN release

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN>

Ready. That's it. In case your standby controlfile is not correct either (it was not in my case), you can now simply create a new standby controlfile on the primary and move it to the standby, as documented in the MOS note "Steps to recreate a Physical Standby Controlfile" (Doc ID 459411.1).
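For reference, the standby controlfile creation mentioned in that note boils down to something like this on the primary (the path is illustrative; follow the MOS note for the full procedure):

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/DBIT121_standby.ctl';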

Conclusion

As a last resort, it is sometimes necessary to recreate the controlfile manually, but as long as you have all the online redo logs and your datafiles in place, you can do it with noresetlogs. And take care that your control_file_record_keep_time is always higher than your RMAN retention.

 

Cet article Oracle 12c – Recreating a Controlfile in a Data Guard environment with noresetlogs est apparu en premier sur Blog dbi services.

Analysing Elections Data with Oracle Data Visualisation Desktop

Rittman Mead Consulting - Fri, 2017-02-24 04:38

Disclaimer #1 This post is not about politics. Its dataset is about politics, but that's a coincidence. It could be about immunisation or transport or anything else. If you are strictly against any politics, here is a link to watch cute kittens.

Introduction

Let's pretend that I'm an analyst who just got a supposedly interesting data set. Now I want to understand whether the data is actually interesting or total rubbish. Having been in IT for some time, I can use tools and technologies which a typical end user can't access. But this time I pretend I'm an ordinary analyst who has only the data and desktop tools. And my task is to do some research and tell whether there are anomalies in the data or everything looks the way it is supposed to look.
The main tool for this work is obviously Oracle Data Visualisation Desktop (DVD). And, as a supplementary tool, I use Excel. This post is not a guide to any particular DVD feature. It won't give step-by-step instructions or anything like that. The main idea is to show how we can use Oracle Data Visualisation for the analysis of a real dataset. Not simply to show that we can build bar charts, pie charts and other fancy charts, but to show how we can get insights from the data.

The Data

I should say a few words about the data. It is the official result of the Russian State Duma (parliament) elections in 2016. Half of the Duma was elected by party lists, and for this post I took that data. I should confess that I cheated a little and decided not to spend my time downloading and parsing the data piece by piece from the official site, and took a prepared set instead.

From a bird's-eye view I have the following data:

  1. Voting results by election commissions: number of votes for every political party and a lot of technical measures like number of registered voters, number of good and damaged voting ballots and so on.
  2. Turnout figures at given times throughout the day.

From a more technical point of view, the data was stored in two big files with multiple JSON documents in each. As the data preparation part is big enough, it was extracted to another post. This one concentrates purely on visualisation, and the second one is about data flows and a comparison to Excel.

Analysing the Data

I did some cleaning, refining and enriching of the data, and it's time to use it. I started with a standard stacked bar chart. It shows the percentages of parties by region, and in addition the width of the bars shows Total votes. The chart is sorted by ascending Total votes.

What can we say here?

Before I start talking I need a lawyer and a disclaimer #2:

Disclaimer #2 Some of the results may be interpreted in different ways. Some of them may not be so pleasant for some people. But I'm not a court and this post is only a data visualisation exercise. Therefore I'm not accusing anyone of committing any crimes. I will make some conclusions because of the rules of the genre, but they should be treated as hypotheses only.

That's not a proven charge (see disclaimer #2), but for me these regions look a bit suspicious. Their results are very uncommon. The ruling United Russia party (orange bars) got an extremely high result in these few regions. This may be a sign of some kind of interference with the election process there. But of course, other explanations (including a measure incorrectly selected for sorting) exist.

Just for reference, so we don't forget the names: Tatarstan, Kemerovo, Dagestan, Chechnya and Mordovia. There are a few more regions with similar results, but their number of voters is lower, so I don't show them here.

At this point I'm starting to suspect something. But I need more visuals to support my position, and my next hypothesis is that in these regions ballots were somehow added to voting boxes (or protocols were changed, which is basically the same). From a data visualisation point of view, that would mean that these regions have a higher turnout (because of the added ballots) along with a higher United Russia result.

To check this I need one more measure - Turnout, %. It shows how many of registered voters actually voted. I can create this field without leaving visualisation mode. Cool.

Note. This formula may not be absolutely correct, but it works well for demonstration purposes.
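For illustration, such a calculated field might look like this (a minimal sketch; "Ballots Cast" and "Registered Voters" are my own placeholder column names, not the dataset's actual fields):

Turnout, % = 100 * ("Ballots Cast" / "Registered Voters")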

In order to visualise this hypothesis, I built a scatter chart. Its horizontal axis is Turnout, % and its vertical one is United Russia, %. And I added a trend line to make things more obvious. Colour density shows Total votes.

I think my hypothesis just got strong support. As usual, it is not an absolutely impossible situation. But it's hard to explain why the more people come to the voting stations, the higher one party's result is. I'd expect either a high result not depending on the turnout (more or less like a uniform distribution) or at least a few exceptions with high turnout and a low result.

I consider this result strange because in real life I'd expect that a higher turnout should mean more opposition voters (a passive group indeed) coming to the voting stations. But that's only my opinion. And a highly arguable one, I should agree. What I really want to show here is that these charts highlight an oddity that should be investigated and may or may not have a rational explanation.

And who are our heroes? Let’s zoom in on the chart …and we see the same regions.

But maybe other parties show the same result? We can build the same scatter chart for every party, or we can visualise them all at once with a usual line chart. Here I've plotted the percentage of the vote won by each party (Y-axis) against the overall turnout % (X-axis).

United Russia is the only party that increases with turnout.

So far all my charts were about relative measures; it's time to check some absolute values. Here is a bar chart which shows the number of precinct commissions by result. I'd expect to see something close to a normal distribution - a bell-shaped chart with the maximum around 54% (the average turnout). Now, look at the real chart (bin size = 1.0%). Its maximum is at 36-37%.

Normal distribution

In probability theory, the normal (or Gaussian) distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.

Strictly speaking, all the numbers I'm showing here are discrete and I should say binomial distribution rather than normal, but right now, for my purposes, the difference is not that big.
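For reference (my own addition; standard textbook formulas): the normal density is f(x) = 1/(σ√(2π)) · exp(−(x−μ)²/(2σ²)), while the binomial probability of k successes in n trials is P(X = k) = C(n,k) · p^k · (1−p)^(n−k). For large n the binomial is well approximated by a normal distribution with μ = np and σ² = np(1−p), which is why the distinction hardly matters here.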

I'm definitely not Carl Gauss (and not even one of his best students) and you may ignore my opinion, but I expected something more like this:

And I don't have the slightest idea how it is possible that the most "popular" turnout is 100%.

What if we look at the same chart in more detail? The previous one was grouped into 1% bins, and this one has 0.1% bins. And I had to add a "turnout not equal to 100%" filter: even with the smaller bin size, the last bar stays almost the same, ~3K commissions. That bar is so much bigger than the others that the chart doesn't show anything otherwise.
What can we see here? Well, people aren't really good at generating random numbers. It's perfectly OK to have some outliers on the chart. But hey, it's not normal to have them mostly at round values. That looks like someone was trying to fit the result to some plan.

Here is my favourite part of the chart. I marked 1% intervals, and you can see that a round turnout value is always more probable than its closest non-round neighbours. No exceptions. A possible explanation is that the number of commissions with results this high is relatively low, and even the slightest manipulation is clearly visible.

But wait. What about that 64.3 percent? It's not round, but it is a big outlier. Let's take a closer look at this value and check whether there is anything interesting or it is a normal situation. Here are a few interesting visualisations for it.

The first one is a tree diagram. It shows all existing combinations of district and precinct commissions by region for the filtered data (turnout = 64.3%). And in order to demonstrate how it works for this case, I made an animation. Most of the regions have a few commissions with 64.3% turnout. But the Saratov region beats them all.

This visualisation has a serious flaw. The end user has to scroll it (I mean for this set of data) and can miss the point. Another visualisation can improve the situation. A network diagram doesn't need scrolling.

It looks good and shows exactly the same thing. But for this chart we must ensure that every data point is unique, which is not true in my case: different precinct commissions have the same numbers, and I had to create a unique field first (DEC #||PEC #). It's easy to forget this and get unpredictable or even misleading results.

Or if you prefer more traditional charts, here is a sunburst for you. Its sector size shows Total votes and the colour is the PEC count. It gives a good representation of how uncommon Saratov's result is.

And the last picture on the same topic: the boring, never-old, classic bar chart.

Considering all these charts, I'd say that the almost exclusive concentration of commissions with 64.3% turnout in Saratov doesn't look normal to me. It's pretty weird that sibling commissions show exactly the same figures.

A few more diagrams which could work well are Sankey and parallel coordinates; unfortunately, they are less informative because of the high number of precinct commissions. But if the number were lower, I'd consider them too.

All the previous charts were based on voting data. But I have one more dataset - official turnout. Let's check if we can find anything interesting there. Unfortunately, a significant part of the commissions doesn't have official data, and in some cases I may have used formulas that are not exactly the same as the official ones, so the numbers may differ slightly from what I got from the protocols data.

The first chart shows the number of commissions (vertical axis) by official turnout (horizontal axis). Colour shows the moment in time. Strictly speaking, I shouldn't have used continuous linear charts for discrete values, but coloured overlapped bars don't give as clear a picture.

Except for the 100% tail, everything is more or less natural. The graph shape looks more like a Gamma distribution than a Normal one, but I didn't test it.

What picture do I have for various regions?
Moscow city is known for a relatively high number of poll watchers, so we may expect cleaner data there. Ignoring the long tail, these figures look normal (or binomial if you want to be precise).

Saratov region. The one with the 64.3% turnout. Look at these peaks. Do they look natural to you?
Do you remember Tatarstan (the hero at the beginning of this story)? Here it is. I don't know how anyone could explain how this is possible (without manual adjusting of the results, I mean).

Summary

This post shows how we can use Oracle DVD for the visualisation of a real data set, and I hope I was able to convince you that this tool can be useful and give you really interesting insights. Of course, visualisation alone doesn't answer all questions; this time it was actually less about answers and more about questions. It helps you ask the right questions.

More reading on the topic: 1, 2 (in Russian). If you can read Russian, here you will find more visualisations, discussions and interesting information. And this article is about the elections in 2011; its indisputable advantage is that it is in English.

Categories: BI & Warehousing

EBS Sessions at Alliance 2017

Steven Chan - Fri, 2017-02-24 02:05

Senior EBS Development staff (including me) are presenting at the Higher Education User Group (HEUG) Alliance 2017 conference in Las Vegas from February 27 to March 2.  This is your opportunity to get the latest updates on our E-Business Suite roadmap.  Check out the Alliance session catalog for a complete list of all Oracle EBS Development sessions.

We are also hosting a Meet The Experts roundtable where you can get your questions answered directly by EBS Development senior staff (including me):

  • Wednesday, March 1 at 3:30 PM
  • Third Floor Pre-function area in Conference Center, Table 6



Categories: APPS Blogs

Reading Execution Plans for Parallel DML

Pythian Group - Thu, 2017-02-23 20:36

Parallel execution in Oracle seems to be a topic of myths, mysteries and misunderstandings. In this post I hope to clear up at least one of them by diving into a discussion about parallel DML. Actually, I want to show you two things that can go wrong with it.

My demo setup and case are pretty simple: I just create two copies of dba_objects:

SQL> create table a as select * from dba_objects;

Table created.

SQL> create table b as select * from dba_objects;

Table created.

Now let's delete from one of the tables with a subselect to the other one, and let's make it parallel. When, if and how it makes sense to actually use parallel DML for a speedup is not the point of this post. In fact, this is a poor example for speedup: the overhead of the extra communication and of spinning up the processes just made the actual delete run slower.

SQL> delete /*+ PARALLEL(a 42) */ from a where object_id in (select object_id from b);

91277 rows deleted.

SQL> rollback;

And at first glance the execution plan for it seems to confirm that the operation was performed in parallel:

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID	1q2m84vq594pj, child number 0
-------------------------------------
delete /*+ PARALLEL(a 42) */ from a where object_id in (select
object_id from b)

Plan hash value: 1475376193

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation	 	   | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT	   |	      |       |       |   855 (100)|	      |        |      | 	   |
|   1 |  DELETE 		   | A	      |       |       | 	   |	      |        |      | 	   |
|   2 |   PX COORDINATOR	   |	      |       |       | 	   |	      |        |      | 	   |
|   3 |    PX SEND QC (RANDOM)	   | :TQ10002 | 91277 |   891K|   855	(1)| 00:00:01 |  Q1,02 | P->S | QC (RAND)  |
|*  4 |     HASH JOIN SEMI BUFFERED|	      | 91277 |   891K|   855	(1)| 00:00:01 |  Q1,02 | PCWP | 	   |
|   5 |      PX RECEIVE 	   |	      | 91277 |   445K|   427	(1)| 00:00:01 |  Q1,02 | PCWP | 	   |
|   6 |       PX SEND HASH	   | :TQ10000 | 91277 |   445K|   427	(1)| 00:00:01 |  Q1,00 | P->P | HASH	  |
|   7 |        PX BLOCK ITERATOR   |	      | 91277 |   445K|   427	(1)| 00:00:01 |  Q1,00 | PCWC | 	   |
|*  8 | 	TABLE ACCESS FULL  | A	      | 91277 |   445K|   427	(1)| 00:00:01 |  Q1,00 | PCWP | 	   |
|   9 |      PX RECEIVE 	   |	      | 91278 |   445K|   427	(1)| 00:00:01 |  Q1,02 | PCWP | 	   |
|  10 |       PX SEND HASH	   | :TQ10001 | 91278 |   445K|   427	(1)| 00:00:01 |  Q1,01 | S->P | HASH	  |
|  11 |        PX SELECTOR	   |	      |       |       | 	   |	      |  Q1,01 | SCWC | 	   |
|  12 | 	TABLE ACCESS FULL  | B	      | 91278 |   445K|   427	(1)| 00:00:01 |  Q1,01 | SCWP | 	   |
--------------------------------------------------------------------------------------------------------------------

But a closer look reveals that the actual delete we wanted to run in parallel is performed only after the parallel part is finished: in the execution plan, the DELETE operation sits above and to the left of the PX COORDINATOR, so the coordinator has already collected the query results before any row gets deleted. Also, because we only specified parallelism for table A, the scan of table B is performed serially. The results are then distributed to the parallel workers and hash-joined against the parallel query of A. Those results are buffered and returned to the coordinator, which then serially performs the actual deletes. Sounds silly? It is. And the second issue is that even though the hint gets ignored for the DELETE part, it is still applied to the query and join, forcing a half-parallel operation.

Enable parallel DML mode

The issue to be aware of with parallel DML in Oracle is that it is disabled by default and you have to explicitly enable it in the session before running parallel DML operations.
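
A quick way to verify what the session is actually set to (my addition, not from the original post) is to look at the parallel status columns of v$session:

-- Sketch: check this session's parallel execution settings.
-- PDML_STATUS stays DISABLED until parallel DML is enabled.
select pdml_status, pddl_status, pq_status
from   v$session
where  sid = sys_context('userenv', 'sid');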

And at least DBMS_XPLAN in 12c will also warn you about not using parallel DML in its output:

Note
-----
   - Degree of Parallelism is 42 because of hint
   - PDML is disabled in current session

So let’s try the same thing again, but enable parallel DML this time:

SQL> alter session enable parallel dml;

Session altered.

SQL> delete /*+ PARALLEL(a 42) */ from a where object_id in (select object_id from b);

91277 rows deleted.

SQL> rollback;

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID	1q2m84vq594pj, child number 1
-------------------------------------
delete /*+ PARALLEL(a 42) */ from a where object_id in (select
object_id from b)

Plan hash value: 2691386263

--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation			 | Name     | Rows  | Bytes | Cost (%CPU)| Time     |	 TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT		 |	    |	    |	    |	438 (100)|	    |	    |	    |		 |
|   1 |  PX COORDINATOR 		 |	    |	    |	    |		 |	    |	    |	    |		 |
|   2 |   PX SEND QC (RANDOM)		 | :TQ10003 | 91277 |	891K|	438   (1)| 00:00:01 |  Q1,03 | P->S | QC (RAND)  |
|   3 |    DELETE			 | A	    |	    |	    |		 |	    |  Q1,03 | PCWP |		 |
|   4 |     PX RECEIVE			 |	    | 91277 |	891K|	438   (1)| 00:00:01 |  Q1,03 | PCWP |		 |
|   5 |      PX SEND HASH (BLOCK ADDRESS)| :TQ10002 | 91277 |	891K|	438   (1)| 00:00:01 |  Q1,02 | P->P | HASH (BLOCK|
|*  6 |       HASH JOIN SEMI BUFFERED	 |	    | 91277 |	891K|	438   (1)| 00:00:01 |  Q1,02 | PCWP |		 |
|   7 |        PX RECEIVE		 |	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,02 | PCWP |		 |
|   8 | 	PX SEND HASH		 | :TQ10000 | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | P->P | HASH	 |
|   9 | 	 PX BLOCK ITERATOR	 |	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | PCWC |		 |
|* 10 | 	  TABLE ACCESS FULL	 | A	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | PCWP |		 |
|  11 |        PX RECEIVE		 |	    | 91278 |	445K|	427   (1)| 00:00:01 |  Q1,02 | PCWP |		 |
|  12 | 	PX SEND HASH		 | :TQ10001 | 91278 |	445K|	427   (1)| 00:00:01 |  Q1,01 | S->P | HASH	 |
|  13 | 	 PX SELECTOR		 |	    |	    |	    |		 |	    |  Q1,01 | SCWC |		 |
|  14 | 	  TABLE ACCESS FULL	 | B	    | 91278 |	445K|	427   (1)| 00:00:01 |  Q1,01 | SCWP |		 |
--------------------------------------------------------------------------------------------------------------------------

Now we see the DELETE operation below the PX COORDINATOR, which means it gets executed in parallel by the parallel workers. B is still read serially because we only specified table A in the hint. Let me just add this 100% parallel plan for the sake of completeness…
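
The statement behind this plan is the same delete, just with a statement-level hint instead of one naming table A:

SQL> delete /*+ PARALLEL(42) */ from a where object_id in (select object_id from b);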

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID	02w21a0rgz02m, child number 1
-------------------------------------
delete /*+ PARALLEL(42) */ from a where object_id in (select object_id
from b)

Plan hash value: 149866034

--------------------------------------------------------------------------------------------------------------------------
| Id  | Operation			 | Name     | Rows  | Bytes | Cost (%CPU)| Time     |	 TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT		 |	    |	    |	    |	 23 (100)|	    |	     |	    |		 |
|   1 |  PX COORDINATOR 		 |	    |	    |	    |		 |	    |	     |	    |		 |
|   2 |   PX SEND QC (RANDOM)		 | :TQ10003 | 91277 |	891K|	 23   (0)| 00:00:01 |  Q1,03 | P->S | QC (RAND)  |
|   3 |    DELETE			 | A	    |	    |	    |		 |	    |  Q1,03 | PCWP |		 |
|   4 |     PX RECEIVE			 |	    | 91277 |	891K|	 23   (0)| 00:00:01 |  Q1,03 | PCWP |		 |
|   5 |      PX SEND HASH (BLOCK ADDRESS)| :TQ10002 | 91277 |	891K|	 23   (0)| 00:00:01 |  Q1,02 | P->P | HASH (BLOCK|
|*  6 |       HASH JOIN SEMI BUFFERED	 |	    | 91277 |	891K|	 23   (0)| 00:00:01 |  Q1,02 | PCWP |		 |
|   7 |        PX RECEIVE		 |	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,02 | PCWP |		 |
|   8 | 	PX SEND HASH		 | :TQ10000 | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | P->P | HASH	 |
|   9 | 	 PX BLOCK ITERATOR	 |	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | PCWC |		 |
|* 10 | 	  TABLE ACCESS FULL	 | A	    | 91277 |	445K|	 11   (0)| 00:00:01 |  Q1,00 | PCWP |		 |
|  11 |        PX RECEIVE		 |	    | 91278 |	445K|	 11   (0)| 00:00:01 |  Q1,02 | PCWP |		 |
|  12 | 	PX SEND HASH		 | :TQ10001 | 91278 |	445K|	 11   (0)| 00:00:01 |  Q1,01 | P->P | HASH	 |
|  13 | 	 PX BLOCK ITERATOR	 |	    | 91278 |	445K|	 11   (0)| 00:00:01 |  Q1,01 | PCWC |		 |
|* 14 | 	  TABLE ACCESS FULL	 | B	    | 91278 |	445K|	 11   (0)| 00:00:01 |  Q1,01 | PCWP |		 |
--------------------------------------------------------------------------------------------------------------------------

Conclusion

Keep in mind that parallel DML has to be enabled at the session level specifically. And when reviewing execution plans with parallel DML, make sure the update, insert or delete operation is below the PX coordinator or else you will serialize again before performing the actual operation.

Categories: DBA Blogs

IRs with Subscriptions that might not work

Jeff Kemp - Thu, 2017-02-23 20:31

If you have an Interactive Report with the Subscription feature enabled, users can “subscribe” to the report, getting a daily email with the results of the report. Unfortunately, however, this feature doesn’t work as expected if it relies on session state – e.g. if the query uses bind variables based on page items to filter the records. In this case, the subscription will run the query with a default session state – Apex doesn’t remember what the page item values were when the user subscribed to the report.
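
As an illustration (hypothetical item name, my example rather than one from the post), an IR whose source looks like this renders fine on the page but returns different rows in the subscription email, because :P1_DEPT_ID is null in the fresh session the subscription job uses:

select empno, ename, sal
from   emp
where  deptno = :P1_DEPT_ID  -- set on the page; null when the
                             -- subscription runs the query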

This is a query I used to quickly pick out all the Interactive Reports that have the Subscription feature enabled but which might rely on session state to work - i.e. they rely on items submitted from the page, refer to a bind variable, or refer to a system context:


select workspace, application_id, application_name,
       page_id, region_name, page_items_to_submit
from apex_application_page_ir
where show_notify = 'Yes'
and (page_items_to_submit is not null
     or regexp_like(sql_query, ':[A-Z]', 'i')
     or regexp_like(sql_query, 'SYS_CONTEXT', 'i')
    );

For these reports, I reviewed them and, where appropriate, turned off the Subscription feature. Note that this query is not perfect: it might give some false positives and negatives.
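
For instance (again my own example), this harmless query would be flagged as a false positive because the string literal happens to contain a colon followed by a letter:

select ename
from   emp
where  remarks = 'ratio is 1:N'  -- ':N' matches the :[A-Z] pattern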


Filed under: Oracle

LISTAGG not giving distinct values

Tom Kyte - Thu, 2017-02-23 14:26
Hi Guys, If I use LISTAGG for a single table it gives distinct values in that column. But if I use it by joining multiple tables it gives all the values even though they are duplicating. Scenario: CREATE TABLE TEMP1(ID1 NUMBER,TYPE1 CHAR...
Categories: DBA Blogs

REDO LOG growing for SMON Background process

Tom Kyte - Thu, 2017-02-23 14:26
Hello Tom, We experienced continuous growth of redo log for background processes, specifically: SMON: 309 Gb Now, 300 Gb Last week, 291 two weeks ago DBW0: 141 Gb Now, 135 Gb Last week, 128 two weeks ago DBW1: 118 Gb Now, 110 Gb Last we...
Categories: DBA Blogs

Oracle CAST function conversion for BOOLEAN TO VARCHAR

Tom Kyte - Thu, 2017-02-23 14:26
Hi Tom, As per Oracle docs on CAST function, http://docs.oracle.com/javadb/10.8.3.0/ref/rrefsqlj33562.html <b>A BOOLEAN value can be cast explicitly to any of the string types. The result is 'true', 'false', or null. Conversely, string type...
Categories: DBA Blogs
