
Feed aggregator

Video: Database as a Service (DBaaS) on Oracle Cloud

Tim Hall - Sat, 2016-01-30 09:29

The latest video on my YouTube Channel is a run through of using the Database as a Service (DBaaS) offering on Oracle Cloud.

There have been a few minor changes in the interface since I last ran through capturing images, so the related article has been brought up to date.

I used my dad for the cameo in this video. Hopefully this will help him get a little more recognition, as he’s pretty much a nobody on the Oracle scene at the moment. With your help this could change!

Cheers

Tim…

Update: Almost as soon as I released this blog post the footage was out of date as Oracle released some minor changes to the interface. I rerecorded the video and re-uploaded it, so it is up to date as of now. All links from my website and this blog post point to the new video. If you have read this post via an RSS reader, you may still be seeing the old version of the post, and as a result see the link to the video as broken. But in that case, you won’t be able to read this either. :)


Oracle Database 12c Features Now Available on apex.oracle.com

Joel Kallman - Sat, 2016-01-30 06:42
As a lot of people know, apex.oracle.com is the customer evaluation instance of Oracle Application Express (APEX).  It's a place where anyone on the planet can sign up for a workspace and "kick the tires" of APEX.  After a brief signup process, in a matter of minutes you have access to a slice of an Oracle Database, Oracle REST Data Services, and Oracle Application Express, all easily accessed through your Web browser.

apex.oracle.com has been running Oracle Database 12c for a while now.  But a lot of the 12c-specific developer features weren't available, simply because the database initialization parameter COMPATIBLE wasn't set to 12.0.0.0.0 or higher.  If you've ever tried to use one of these features in SQL on apex.oracle.com, you may have run into the dreaded ORA-00406.  But as of today (January 30, 2016), that's changed.  You can now make full use of the 12c specific features on apex.oracle.com.  Even if you don't care about APEX, you can still sign up on apex.oracle.com and kick the tires of Oracle Database 12c.
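If you want to check the setting on a database you manage, here is a quick sketch (assuming you can query v$parameter):

  -- COMPATIBLE must be 12.0.0.0.0 or higher for the 12c-only SQL features.
  SELECT name, value
  FROM   v$parameter
  WHERE  name = 'compatible';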

What are some things you can do now on apex.oracle.com? You can use IDENTITY columns.  You can generate a default value from a sequence.  You can have a default value applied when an explicit NULL is inserted (DEFAULT ON NULL).  And much more.
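As a minimal sketch of those three features (the table and sequence names are made up):

  -- Hypothetical example of the newly usable 12c features.
  CREATE SEQUENCE order_seq;

  CREATE TABLE orders (
    id       NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,  -- identity column
    order_no NUMBER DEFAULT order_seq.NEXTVAL,                     -- default from a sequence
    status   VARCHAR2(10) DEFAULT ON NULL 'NEW'                    -- default applied on an explicit NULL insert
  );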

You might wonder what's taken so long, and let's just say that sometimes it takes a while to move a change like this through the machinery that is Oracle.

P.S.  I've made the request to update MAX_STRING_SIZE to EXTENDED, so you can define column datatypes up to VARCHAR2(32767).  Until this is implemented, you're limited to VARCHAR2(4000).
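Once that change is in place, something like this hypothetical table should work; until then the 4000-byte limit applies:

  -- Requires MAX_STRING_SIZE = EXTENDED.
  CREATE TABLE notes (
    id   NUMBER,
    body VARCHAR2(32767)
  );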

node-oracledb 1.6.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - Sat, 2016-01-30 06:07
Node-oracledb 1.6.0, the Node.js add-on for Oracle Database, is on NPM.

In this release a comprehensive pull request by Dieter Oberkofler adds support for binding PL/SQL Collection Associative Array (Index-by) types. Strings and numbers can now be bound and passed to and from PL/SQL blocks. Dieter tells us that nowadays he only gets to code for a hobby - keep it up Dieter!

Using PL/SQL Associative Arrays can be a very efficient way of transferring data between an application and the database because it can reduce the number of 'round trips' between the two.

As an example, consider this table and PL/SQL package:

  CREATE TABLE mytab (numcol NUMBER);

  CREATE OR REPLACE PACKAGE mypkg IS
    TYPE numtype IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
    PROCEDURE myinproc(p IN numtype);
  END;
  /

  CREATE OR REPLACE PACKAGE BODY mypkg IS
    PROCEDURE myinproc(p IN numtype) IS
    BEGIN
      FORALL i IN INDICES OF p
	INSERT INTO mytab (numcol) VALUES (p(i));
    END;
  END;
  /

With this schema, the following JavaScript will result in mytab containing five rows:

  connection.execute(
    "BEGIN mypkg.myinproc(:bv); END;",
    {
      bv: { type : oracledb.NUMBER,
	    dir: oracledb.BIND_IN,
	    val: [1, 2, 23, 4, 10]
	  }
    },
    function (err) { . . . });

There is a fuller example in examples/plsqlarray.sql; also check out the documentation.
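For the other direction (values coming back from PL/SQL), a hypothetical companion package, not part of the shipped examples, could look like the following; its OUT parameter is bound from JavaScript in the same way as above, just with an OUT bind direction:

  -- Hypothetical package returning an associative array of strings in one round trip.
  CREATE OR REPLACE PACKAGE mypkg_out IS
    TYPE strtype IS TABLE OF VARCHAR2(100) INDEX BY BINARY_INTEGER;
    PROCEDURE myoutproc(p OUT strtype);
  END;
  /

  CREATE OR REPLACE PACKAGE BODY mypkg_out IS
    PROCEDURE myoutproc(p OUT strtype) IS
    BEGIN
      FOR i IN 1 .. 5 LOOP
        p(i) := 'Value ' || i;
      END LOOP;
    END;
  END;
  /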

Other changes in node-oracledb 1.6 are

  • @KevinSheedy sent a GitHub Pull Request for the README to help the first-time reader have the right prerequisites and avoid the resulting pitfalls.

  • Fixed a LOB problem causing an uncaught error to be generated.

  • Removed the 'close' event that was being generated for LOB Writable Streams. The Node.js Streams doc specifies it only for Readable Streams.
  • Updated the LOB examples to show connection release.

  • Extended the OS X install section with a way to install on El Capitan that doesn't need root access for Instant Client 11.2. Thanks to @raymondfeng for pointing this out.

  • Added RPATH to the link line when building on OS X in preparation for future client.

TypeScript users will be happy to hear that Richard Natal recently had a node-oracledb TypeScript type definition file added to the DefinitelyTyped project. This is not part of node-oracledb itself, but Richard later mentioned he found a way it could be incorporated. Hopefully he will submit a pull request so it can make it directly into the project and be kept in sync.

Thanks to everyone who has worked on this release and kept the momentum going.

What's coming up for the next release? There is discussion about adding a JavaScript layer. This was kicked off by a pull request from Sagie Gur-Ari, which has led to some work by Oracle's Dan McGhan. See the discussion and let us know what you think. Having this layer could make it quicker and easier for JavaScript coders to contribute to node-oracledb and do things like reduce API inconsistency, make it easier to add a promise API in the future, and of course provide a place to directly add Sagie's streaming query result suggestion that started the whole thing.

I know a few contributors have recently submitted the Oracle Contributor Agreement ready to do big and small things - every bit counts. I look forward to being able to incorporate your work.

I've heard a couple of reports that Node LTS 4.2.6 on Windows is having some issues building native add-ons. 0.10, 0.12, 5.x and 4.2.5 don't have issues. Drop me a line if you encounter a problem.

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line!

node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Not Another Standby Monitoring Script

Michael Dinh - Fri, 2016-01-29 21:55

I know what you are thinking. Not another standby monitoring script, right?

There’s plenty of them out there already and what makes this one any different?

You are probably correct and there’s probably nothing different about this.

Having to work with inconsistent environments, such as a manual standby, Data Guard without the broker, and Data Guard with the broker,
I have been obsessed with creating a standby SQL script that can accommodate any environment and can be run from either the primary or the standby.
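As a rough sketch of the kind of lag check the script runs (the real stby.sql does more; this is only an illustration against gv$archived_log):

  -- Sketch only: highest archived vs. highest applied sequence per destination.
  select dest_id, thread#, applied,
         max(sequence#)       as max_seq,
         max(completion_time) as max_time
  from   gv$archived_log
  where  resetlogs_change# = (select resetlogs_change# from v$database)
  group  by dest_id, thread#, applied
  order  by dest_id, thread#, applied desc;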

From PRIMARY: Recovery is IDLE
oracle@arrow:hawklas:/media/sf_working/dataguard
$ . oraenv <<< hawklas
ORACLE_SID = [hawklas] ? The Oracle base remains unchanged with value /u01/app/oracle
oracle@arrow:hawklas:/media/sf_working/dataguard
$ sysdba @stby

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:04:05 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Session altered.

*** v$database ***

DB_UNIQUE_NAME OPEN_MODE            DATABASE_ROLE    REMOTE_A SWITCHOVER_STATUS    DG_BROKER PRIMARY_DB_UNIQUE_NAME
-------------- -------------------- ---------------- -------- -------------------- --------- ----------------------
hawklas        READ WRITE           PRIMARY          ENABLED  TO STANDBY           ENABLED   hawksan

*** gv$archive_dest ***

 THREAD#  DEST_ID DESTINATION               STATUS       TARGET  SCHEDULE PROCESS         MID
-------- -------- ------------------------- ------------ ------- -------- ---------- --------
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        PRIMARY ACTIVE   ARCH              0
       1        2 hawksan                   VALID        STANDBY ACTIVE   LGWR              0

*** gv$archive_dest_status ***

 DEST_ID STATUS       DATABASE_MODE   RECOVERY_MODE           GAP_STATUS      ERROR
-------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1 VALID        OPEN            IDLE                                    NONE
       2 VALID        OPEN_READ-ONLY  IDLE                    NO GAP          NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                  887 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       2        1 NO             886 29-JAN-2016 17:50:04         4      3.7   --- STANDBY LAG
       2        1 YES            882 29-JAN-2016 17:46:22

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 886 thread 1;

*** v$dataguard_stats ***

no rows selected

*** gv$managed_standby ***

     PID  INST_ID  THREAD# PROCESS   CLIENT_P STATUS       SEQUENCE#   BLOCK# DELAY_MINS
-------- -------- -------- --------- -------- ------------ --------- -------- ----------
    2659        1        1 LNS       LNS      WRITING            887    10153          0

ARROW:(SYS@hawklas):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
From STANDBY: Recovery is IDLE
oracle@arrow:hawklas:/media/sf_working/dataguard
$ . oraenv <<< hawksan
ORACLE_SID = [hawklas] ? The Oracle base remains unchanged with value /u01/app/oracle
oracle@arrow:hawksan:/media/sf_working/dataguard
$ sysdba @stby

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:04:11 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Session altered.

*** v$database ***

DB_UNIQUE_NAME OPEN_MODE            DATABASE_ROLE    REMOTE_A SWITCHOVER_STATUS    DG_BROKER PRIMARY_DB_UNIQUE_NAME
-------------- -------------------- ---------------- -------- -------------------- --------- ----------------------
hawksan        READ ONLY            PHYSICAL STANDBY ENABLED  NOT ALLOWED          ENABLED   hawklas

*** gv$archive_dest ***

 THREAD#  DEST_ID DESTINATION               STATUS       TARGET  SCHEDULE PROCESS         MID
-------- -------- ------------------------- ------------ ------- -------- ---------- --------
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL   ACTIVE   ARCH              0
       1       32 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL   ACTIVE   RFS               0

*** gv$archive_dest_status ***

 DEST_ID STATUS       DATABASE_MODE   RECOVERY_MODE           GAP_STATUS      ERROR
-------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1 VALID        OPEN_READ-ONLY  IDLE                                    NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                  887 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       1        1 NO             886 29-JAN-2016 17:50:04         4      3.7   --- STANDBY LAG
       1        1 YES            882 29-JAN-2016 17:46:22

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 882 thread 1;

*** v$dataguard_stats ***

NAME                      VALUE              UNIT
------------------------- ------------------ ------------------------------
transport lag             +00 00:00:00       day(2) to second(0) interval
apply lag                 +00 01:14:48       day(2) to second(0) interval

*** gv$managed_standby ***

no rows selected

ARROW:(SYS@hawksan):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
From DG Broker: SET STATE=’APPLY-ON’

Note: Apply Lag is NOT really 1 hour.

Look at the time difference between SQL*Plus sessions.

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:04:11 2016
SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:07:28 2016
oracle@arrow:hawksan:/media/sf_working/dataguard
$ dgmgrl
DGMGRL for Linux: Version 11.2.0.4.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect /
Connected.
DGMGRL> show configuration

Configuration - dg_hawk

  Protection Mode: MaxPerformance
  Databases:
    hawklas - Primary database
    hawksan - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-OFF
  Transport Lag:   0 seconds (computed 1 second ago)
  Apply Lag:       1 hour(s) 17 minutes 16 seconds (computed 1 second ago)
  Apply Rate:      (unknown)
  Real Time Query: OFF
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> EDIT DATABASE hawksan SET STATE='APPLY-ON';
Succeeded.
DGMGRL> show database hawksan

Database - hawksan

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 2 seconds ago)
  Apply Lag:       1 hour(s) 17 minutes 47 seconds (computed 2 seconds ago)
  Apply Rate:      465.00 KByte/s
  Real Time Query: ON
  Instance(s):
    hawksan

Database Status:
SUCCESS

DGMGRL> exit
From PRIMARY: Recovery is STILL IDLE
oracle@arrow:hawksan:/media/sf_working/dataguard
$ . oraenv <<< hawklas
ORACLE_SID = [hawksan] ? The Oracle base remains unchanged with value /u01/app/oracle
oracle@arrow:hawklas:/media/sf_working/dataguard
$ sysdba @stby

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:07:20 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Session altered.

*** v$database ***

DB_UNIQUE_NAME OPEN_MODE            DATABASE_ROLE    REMOTE_A SWITCHOVER_STATUS    DG_BROKER PRIMARY_DB_UNIQUE_NAME
-------------- -------------------- ---------------- -------- -------------------- --------- ----------------------
hawklas        READ WRITE           PRIMARY          ENABLED  TO STANDBY           ENABLED   hawksan

*** gv$archive_dest ***

 THREAD#  DEST_ID DESTINATION               STATUS       TARGET  SCHEDULE PROCESS         MID
-------- -------- ------------------------- ------------ ------- -------- ---------- --------
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        PRIMARY ACTIVE   ARCH              0
       1        2 hawksan                   VALID        STANDBY ACTIVE   LGWR              0

*** gv$archive_dest_status ***

 DEST_ID STATUS       DATABASE_MODE   RECOVERY_MODE           GAP_STATUS      ERROR
-------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1 VALID        OPEN            IDLE                                    NONE
       2 VALID        OPEN_READ-ONLY  IDLE                    NO GAP          NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                  887 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       2        1 NO             886 29-JAN-2016 17:50:04         4      3.7   --- STANDBY LAG
       2        1 YES            882 29-JAN-2016 17:46:22

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 886 thread 1;

*** v$dataguard_stats ***

no rows selected

*** gv$managed_standby ***

     PID  INST_ID  THREAD# PROCESS   CLIENT_P STATUS       SEQUENCE#   BLOCK# DELAY_MINS
-------- -------- -------- --------- -------- ------------ --------- -------- ----------
    2659        1        1 LNS       LNS      WRITING            887    10402          0

ARROW:(SYS@hawklas):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
From STANDBY: Recovery is MANAGED REAL TIME APPLY
oracle@arrow:hawklas:/media/sf_working/dataguard
$ . oraenv <<< hawksan
ORACLE_SID = [hawklas] ? The Oracle base remains unchanged with value /u01/app/oracle
oracle@arrow:hawksan:/media/sf_working/dataguard
$ sysdba @stby

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:07:28 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Session altered.

*** v$database ***

DB_UNIQUE_NAME OPEN_MODE            DATABASE_ROLE    REMOTE_A SWITCHOVER_STATUS    DG_BROKER PRIMARY_DB_UNIQUE_NAME
-------------- -------------------- ---------------- -------- -------------------- --------- ----------------------
hawksan        READ ONLY WITH APPLY PHYSICAL STANDBY ENABLED  NOT ALLOWED          ENABLED   hawklas

*** gv$archive_dest ***

 THREAD#  DEST_ID DESTINATION               STATUS       TARGET  SCHEDULE PROCESS         MID
-------- -------- ------------------------- ------------ ------- -------- ---------- --------
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL   ACTIVE   ARCH              0
       1       32 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL   ACTIVE   RFS               0

*** gv$archive_dest_status ***

 DEST_ID STATUS       DATABASE_MODE   RECOVERY_MODE           GAP_STATUS      ERROR
-------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1 VALID        OPEN_READ-ONLY  MANAGED REAL TIME APPLY                 NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                  887 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       1        1 IN-MEMORY      886 29-JAN-2016 17:50:04         1 .0333333   --- STANDBY LAG
       1        1 YES            885 29-JAN-2016 17:50:02

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 886 thread 1;

*** v$dataguard_stats ***

NAME                      VALUE              UNIT
------------------------- ------------------ ------------------------------
transport lag             +00 00:00:00       day(2) to second(0) interval
apply lag                 +00 00:00:00       day(2) to second(0) interval

*** gv$managed_standby ***

     PID  INST_ID  THREAD# PROCESS   CLIENT_P STATUS       SEQUENCE#   BLOCK# DELAY_MINS
-------- -------- -------- --------- -------- ------------ --------- -------- ----------
    4463        1        1 MRP0      N/A      APPLYING_LOG       887    10409          0

ARROW:(SYS@hawksan):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
oracle@arrow:hawksan:/media/sf_working/dataguard
$
Let’s check again. From PRIMARY: Recovery is MANAGED REAL TIME APPLY
oracle@arrow:hawksan:/media/sf_working/dataguard
$ . oraenv <<< hawklas
ORACLE_SID = [hawksan] ? The Oracle base remains unchanged with value /u01/app/oracle
oracle@arrow:hawklas:/media/sf_working/dataguard
$ sysdba @stby

SQL*Plus: Release 11.2.0.4.0 Production on Fri Jan 29 19:09:14 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Session altered.

*** v$database ***

DB_UNIQUE_NAME OPEN_MODE            DATABASE_ROLE    REMOTE_A SWITCHOVER_STATUS    DG_BROKER PRIMARY_DB_UNIQUE_NAME
-------------- -------------------- ---------------- -------- -------------------- --------- ----------------------
hawklas        READ WRITE           PRIMARY          ENABLED  TO STANDBY           ENABLED   hawksan

*** gv$archive_dest ***

 THREAD#  DEST_ID DESTINATION               STATUS       TARGET  SCHEDULE PROCESS         MID
-------- -------- ------------------------- ------------ ------- -------- ---------- --------
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        PRIMARY ACTIVE   ARCH              0
       1        2 hawksan                   VALID        STANDBY ACTIVE   LGWR              0

*** gv$archive_dest_status ***

 DEST_ID STATUS       DATABASE_MODE   RECOVERY_MODE           GAP_STATUS      ERROR
-------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1 VALID        OPEN            IDLE                                    NONE
       2 VALID        OPEN_READ-ONLY  MANAGED REAL TIME APPLY NO GAP          NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                  887 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       2        1 NO             886 29-JAN-2016 17:50:04         1 .0333333   --- STANDBY LAG
       2        1 YES            885 29-JAN-2016 17:50:02

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 886 thread 1;

*** v$dataguard_stats ***

no rows selected

*** gv$managed_standby ***

     PID  INST_ID  THREAD# PROCESS   CLIENT_P STATUS       SEQUENCE#   BLOCK# DELAY_MINS
-------- -------- -------- --------- -------- ------------ --------- -------- ----------
    2659        1        1 LNS       LNS      WRITING            887    10512          0

ARROW:(SYS@hawklas):PRIMARY>

The SQL script can be downloaded from my public Google Drive.

Look all the way to the right under Menu, or search the page for PublicGoogleDrive.


Table Scans

Jonathan Lewis - Fri, 2016-01-29 06:49

It’s amazing how easy it is to interpret a number incorrectly until the point comes where you have to look at it closely – and then you realise that there was a lot more to the number than your initial casual assumption, and you would have realised it all along if you’d ever needed to think about it before.

Here’s a little case in point. I have a simple (i.e. non-partitioned) heap table t1 which is basically a clone of the view dba_segments, and I’ve just connected to Oracle through an SQL*Plus session then run a couple of SQL statements. The following is a continuous log of my activity:


SQL> select table_name, partitioned, blocks from user_tables;

TABLE_NAME           PAR     BLOCKS
-------------------- --- ----------
T1                   NO         958

1 row selected.

SQL> select name, value from v$mystat ms, v$statname sn where ms.statistic# = sn.statistic# and name like '%table scan%';

NAME                                  VALUE
-------------------------------- ----------
table scans (short tables)                9
table scans (long tables)                 0
table scans (rowid ranges)                0
table scans (cache partitions)            0
table scans (direct read)                 0
table scan rows gotten                 4188
table scan blocks gotten                 14

7 rows selected.

SQL> select count(extents) from t1;

COUNT(EXTENTS)
--------------
          5143

1 row selected.

SQL> select name, value from v$mystat ms, v$statname sn where ms.statistic# = sn.statistic# and name like '%table scan%';

  • My cache size is quite small, so t1 doesn’t count as a “short” table.
  • I’ve collected stats on the table (and there are no indexes) so the optimizer doesn’t need to do any dynamic sampling to generate an execution plan.
  • This is 11g, so there are no SQL Plan Directives in place to force dynamic sampling

So here’s the question: how many “table scans (long tables)” will Oracle record against my session when I re-run that query against v$mystat ?

Warning – this IS a trick question.

Update number 1

I said it was a trick question and, as you will have seen if you’ve had time to read the comments, the answer is going to depend on various configuration options. Here’s what I got in my test – and I’ve reported not just the session stats, but the segment statistics (v$segment_statistics):


NAME                                  VALUE
-------------------------------- ----------
table scans (short tables)                9
table scans (long tables)                26
table scans (rowid ranges)               26
table scans (cache partitions)            0
table scans (direct read)                 0
table scan rows gotten                 9331
table scan blocks gotten                954

7 rows selected.

SQL> select owner, object_name, object_type, value  from v$segment_statistics where owner = 'TEST_USER' and statistic_name = 'segment scans' and value != 0;

OWNER           OBJECT_NAME          OBJECT_TYPE             VALUE
--------------- -------------------- ------------------ ----------
TEST_USER       T1                   TABLE                      26

1 row selected.

I recorded 26 long table scans – and that 26 was echoed in the segment statistics. (Note: I don’t normally use v$segment_statistics, which is a join between a couple of in-memory structures and three real tables; I usually query v$segstat). It’s the segment statistics that made me pause in a recent problem review; up to that moment I had been using the “Segments by Table Scans” section of the AWR Report as a useful (but badly named – since it also counts (most) index fast full scans) indicator of a potential threat, then I suddenly realised that there was yet another important detail I had to check before I could determine what the numbers were telling me.
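For reference, a minimal sketch of that v$segstat query (the join to dba_objects to translate obj# into a name is just a convenience, not a requirement):

  -- Sketch only: 'segment scans' straight from the in-memory v$segstat.
  select o.owner, o.object_name, o.object_type, s.value
  from   v$segstat s, dba_objects o
  where  s.obj#           = o.object_id
  and    s.statistic_name = 'segment scans'
  and    s.value         != 0;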

So the next question is WHY has Oracle reported 26 tablescans ?

Update number 2

Martin Preiss (see comments) has been working hard to investigate this, and managed to produce a couple more variations in statistics for “the same” problem. In my case I had declared the table as “parallel 2”. For a table of degree N Oracle is very likely to break the table into 13*N chunks – there’s a historic reason for 13 – and that’s why I see 26 “rowid range” scans which, unfortunately, are also recorded as “long” table scans even though each is individually well short of the short table limit. This is really annoying when you start to look at the “Segments by Table Scan” report because you now don’t know how many times a table really was scanned unless you know something about the run-time degree of parallelism involved.

If you check the hidden parameters you will find several parameters relating to this chunking, in particular:


_px_min_granules_per_slave        minimum number of rowid range granules to generate per slave (default  13)
_px_max_granules_per_slave        maximum number of rowid range granules to generate per slave (default 100)

Technically, therefore, Oracle MIGHT get to 100 granules per parallel slave, and a tablescan at degree 2 could be recorded as 200 tablescans!

Martin asked why my 26 tablescans didn’t show up as “table scans (direct read)” – but that’s 11g for you, it allows a serial tablescan to use direct path reads, and it allows parallel tablescans to read into the cache, and the combination of cache size and table size meant that my example just happened to read into the cache.

And that introduces ANOTHER interpretation problem – what annoying things might I discover if I declare the table with the CACHE option (as Ivica suggested in his first comment below)?

Final Update (probably)

Ivica has been busy checking the CACHE option and has shown that there are a number of cases to consider (more, in fact, than I was planning to mention – and the variation in the combination of NOPARALLEL and CACHE should give you some pause for thought). The only point I wanted to make was the effect of enabling PARALLEL and CACHE; I don’t think that this is a combination that is very likely to appear in a production system, but boundary conditions (and accidents) do occur. With my little sample – even after I grew the table to be MUCH larger – the tablescan DIDN’T get reported: here are some session stats and segment stats (taken using my snapshot code) of a single tablescan running parallel 2 when the table was cached:


Name                                                                     Value
----                                                                     -----
table scans (rowid ranges)                                                  26
table scans (cache partitions)                                              26
table scan rows gotten                                                 164,672
table scan blocks gotten                                                29,611

  Statistic                                    Value
  ---------                             ------------

T1
  logical reads                               30,272
  physical reads                              29,614
  physical read requests                         261

I know I’ve done a tablescan in 26 pieces (parallel 2) – and scanned 29,000+ blocks doing it; but according to the segment stats AND session stats I haven’t done a tablescan. Fortunately, of course, I can see the parallel tablescan in the session stats, and in this isolated case I can see from the “cache partitions” statistics that that tablescan was on a cached table. But if I’ve been depending on the segment stats to tell me about which tablescans happen most frequently and do most work I’ve found another reason why I can’t trust the stats and have to do more work cross-checking different parts of the AWR for self-consistent results.

I don’t expect to add any more comments about this mixture of tablescans and parallelism, with the varying effects on the session and segment statistics – but there’s always the possibility that one day I’ll start to worry about how the KEEP and RECYCLE (db_keep_cache_size and db_recycle_cache_size) could confuse things further.

Footnote:

It’s probably worth pointing out that the segment statistics have never recorded SHORT tablescans, they’ve only ever captured details of LONG tablescans. There is a slight inconsistency here, though, since they capture all the “index fast full scans (full)” reported in the session stats whether they are short or long – not that the session stats record the difference; unfortunately, another flaw creeps in: parallel index fast full scans “disappear” from the stats, although they show up as “index fast full scans (rowid ranges)” and “index fast full scans (direct)” with the same multiplier of 13 that we see for parallel table scans.


What PeopleSoft content was popular in 2015?

Duncan Davies - Thu, 2016-01-28 17:48

The ‘Year in Blogging’ reports have come through so I can see what posts and newsletter items garnered the most views.

PeopleSoft Tipster Blog

So, according to the summary, this blog was visited 130,000 times during the year, an average of ~350/day with the busiest day being just over double that at 749 visitors. About 50% of the traffic is from the US, 15% from India, and 5% from the UK and Canada.

Amazingly, the most viewed post was one written prior to 2015, about PeopleSoft Entity Relationship Diagrams. The most popular post that was actually authored last year was The Future of PeopleSoft video with Marc Weintraub, followed by PeopleSoft and Taleo integration, the Faster Download of PeopleSoft Images and the profile of Graham Smith and how he works.

The PeopleSoft Weekly Newsletter

The PSW newsletter seems to go from strength to strength. During 2015 the subscriber base rose from 919 to 1,104 which is an approx 20% increase. The ‘open rate’ sits around 40% for any one issue (against an industry average of 17%) with the US accounting for 55% of readers, the UK 15% and India 10%.

The top articles in terms of clicks were:

  1. Gartner’s Report on Oracle’s Commitment to PeopleSoft (263 clicks)
  2. Randy ‘Remote PS Admin’ on Forcing Cache Clears (198)
  3. PeopleSoft Planned Features and Enhancements (180)
  4. 5 Life Lessons I Learned at PeopleSoft (167)
  5. Dan Sticka on stopping writing Record Field PeopleCode (166)
  6. Greg Kelly’s Security Checklist from Alliance (155)
  7. Virginia Ebbeck’s list of PeopleSoft Links (145)
  8. Greg Wendt of Grey Heller on the PS Token Vulnerability (142)
  9. Dennis Howlett on the Oracle vs Rimini St court battle (142)
  10. Wade Coombs on PeopleSoft File Attachments (140)
  11. I’m Graham Smith and this is How I Work (139)
  12. Graham’s PeopleSoft Ping Survey (135)
  13. How to write an efficient PeopleCode (134)
  14. Mohit Jain on Tracing in PeopleSoft (131)
  15. The 4 types of PeopleSoft Testing (130)
  16. PS Admin.io on Cobol (127)
  17. Matthew Haavisto on the Cost of PeopleSoft vs SaaS (124)
  18. The PeopleSoft Spotlight Series (119)
  19. Prashant Tyagi on PeopleSoft Single Signon (118)
  20. Adding Watermarks to PeopleSoft Fields (116)

 

 


Bad to crosscheck archivelog all

Michael Dinh - Thu, 2016-01-28 16:44

Typically, it’s not a good idea to have “crosscheck archivelog all;” in backup scripts and
even worse to have “delete expired archivelog all;” since any evidence will be eradicated.

When you don’t crosscheck archivelog and the archivelog is missing, backup archivelog will fail.
When you crosscheck archivelog and archivelog is missing backup archivelog does not fail since RMAN marks archivelog as expired and ignore.

oracle@arrow:hawklas:/tmp
$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Jan 28 14:17:35 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HAWK (DBID=3130551611)
RMAN> list archivelog all;
using target database control file instead of recovery catalog
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
5191    1    868     X 2016-JAN-28 11:46:46
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc

5201    1    873     A 2016-JAN-28 14:15:44
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc

RMAN> list backup of archivelog sequence 873;
specification does not match any backup in the repository

RMAN> exit
$ mv -v /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc /tmp/
oracle@arrow:hawklas:/tmp
`/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc' -> `/tmp/o1_mf_1_873_cbo4v850_.arc'
oracle@arrow:hawklas:/tmp
$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Thu Jan 28 14:18:30 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: HAWK (DBID=3130551611)
RMAN> backup archivelog all;
Starting backup at 2016-JAN-28 14:18:40
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of backup command at 01/28/2016 14:18:46
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
RMAN> crosscheck archivelog all;
released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_874_cbo4y4or_.arc RECID=5203 STAMP=902326725
Crosschecked 1 objects

validation failed for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc RECID=5191 STAMP=902317692
validation failed for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc RECID=5201 STAMP=902326632
Crosschecked 2 objects

RMAN> backup archivelog all;

sequence=873 is missing and no errors are raised.

Starting backup at 2016-JAN-28 14:19:09
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=874 RECID=5203 STAMP=902326725
channel ORA_DISK_1: starting piece 1 at 2016-JAN-28 14:19:11
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=875 RECID=5204 STAMP=902326750
channel ORA_DISK_2: starting piece 1 at 2016-JAN-28 14:19:11
channel ORA_DISK_1: finished piece 1 at 2016-JAN-28 14:19:12
piece handle=/oradata/backup/HAWK_3130551611_20160128_jdqsgqev_1_1_1645_1 tag=TAG20160128T141910 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_2: finished piece 1 at 2016-JAN-28 14:19:14
piece handle=/oradata/backup/HAWK_3130551611_20160128_jeqsgqev_1_1_1646_1 tag=TAG20160128T141910 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:03
Finished backup at 2016-JAN-28 14:19:14

Starting Control File and SPFILE Autobackup at 2016-JAN-28 14:19:14
piece handle=/oradata/backup/HAWK_c-3130551611-20160128-09 comment=NONE
Finished Control File and SPFILE Autobackup at 2016-JAN-28 14:19:21
RMAN> list backup of archivelog sequence 873;
specification does not match any backup in the repository

RMAN> exit


Recovery Manager complete.
oracle@arrow:hawklas:/tmp
$

RMAN> delete expired archivelog all;
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=21 device type=DISK
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
5191    1    868     X 2016-JAN-28 11:46:46
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc

5201    1    873     X 2016-JAN-28 14:15:44
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc


Do you really want to delete the above objects (enter YES or NO)? yes
deleted archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_868_cbnw3vtd_.arc RECID=5191 STAMP=902317692
deleted archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_873_cbo4v850_.arc RECID=5201 STAMP=902326632
Deleted 2 EXPIRED objects
RMAN> backup archivelog all;
Starting backup at 2016-JAN-28 14:36:20
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=874 RECID=5203 STAMP=902326725
input archived log thread=1 sequence=875 RECID=5204 STAMP=902326750
channel ORA_DISK_1: starting piece 1 at 2016-JAN-28 14:36:21
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=876 RECID=5207 STAMP=902327780
channel ORA_DISK_2: starting piece 1 at 2016-JAN-28 14:36:22
channel ORA_DISK_1: finished piece 1 at 2016-JAN-28 14:36:23
piece handle=/oradata/backup/HAWK_3130551611_20160128_jgqsgrf5_1_1_1648_1 tag=TAG20160128T143621 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02
channel ORA_DISK_2: finished piece 1 at 2016-JAN-28 14:36:23
piece handle=/oradata/backup/HAWK_3130551611_20160128_jhqsgrf5_1_1_1649_1 tag=TAG20160128T143621 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
Finished backup at 2016-JAN-28 14:36:23

Starting Control File and SPFILE Autobackup at 2016-JAN-28 14:36:23
piece handle=/oradata/backup/HAWK_c-3130551611-20160128-0a comment=NONE
Finished Control File and SPFILE Autobackup at 2016-JAN-28 14:36:30

RMAN>
Updated:

Does this mean we should never perform crosscheck archivelog all? No.

The purpose is to let the archivelog backup fail so that the failure gets investigated.

If the investigation shows the archivelog is indeed missing, then it might be better to perform a Level 1 or Level 0 backup (see the sketch after the output below).

RMAN> crosscheck archivelog all;
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=10 device type=DISK
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_874_cbo4y4or_.arc RECID=5203 STAMP=902326725
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_875_cbo4yy69_.arc RECID=5204 STAMP=902326750
validation succeeded for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_876_cbo5z4lz_.arc RECID=5207 STAMP=902327780
Crosschecked 3 objects

validation failed for archived log
archived log file name=/oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_877_cbog0qbg_.arc RECID=5209 STAMP=902336024
Crosschecked 1 objects
RMAN> list expired archivelog all;
List of Archived Log Copies for database with db_unique_name HAWKLAS
=====================================================================

Key     Thrd Seq     S Low Time
------- ---- ------- - --------------------
5209    1    877     X 2016-JAN-28 14:36:20
        Name: /oradata/fra/HAWKLAS/archivelog/2016_01_28/o1_mf_1_877_cbog0qbg_.arc

RMAN> list backup of archivelog sequence 877;
specification does not match any backup in the repository

RMAN>
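As a minimal sketch of that follow-up, once the loss has been investigated and accepted (hypothetical; adapt it to your own backup strategy and retention policy):

RMAN> crosscheck archivelog all;
RMAN> delete expired archivelog all;
RMAN> backup incremental level 0 database plus archivelog;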

Is the DBA Career Dead?

Pythian Group - Thu, 2016-01-28 16:00

 

With the increased popularity of cloud services, one of the questions that I often receive is: “Is the DBA career dying? What will you do for a living in the future?” In this article I will give my personal opinion about the future of our beloved profession, and try to calm down those that have already started to look for another career.

The first thing that I want to point out is that when we started to work in IT we knew that it was a career that is different than most of the other ones out there. Its nature is  a dynamic and exciting one that reinvents itself all the time, with technological news showing up every single year and changing the entire landscape. We have chosen a field that pushes us to keep studying, learning and evolving, and this is the kind of mindset I want you to have while reading this article.

The Database Administrator role is not going anywhere. We are not an endangered species and won’t become one in the foreseeable future. Cloud is not our enemy. The data market is just evolving, and the cloud is bringing a lot of new things that will give us more power and more options.

In today’s market we have two very common problems:

  1. Companies can’t find enough people to fill in all positions.

We all know this one. I’m sure we all know several companies that have an open position for months, have interviewed dozens of people, and just can’t find anyone that suits the position.

  2. Companies want to keep their costs as low as possible.

Companies want to make money, and we had a big worldwide crisis just a few years ago that we are still recovering from. This means companies are trying to find ways to improve their productivity, while keeping their costs as low as possible.

 

In a scenario like this, the cloud offerings come as an aid to both improve our productivity as a DBA, and to help the company save money. Let’s think for a while about how many tasks we perform daily that don’t bring real value for the business. No doubt that when we’re planning the new high availability solution, or doing performance tuning on that slow query we can see the value that it will bring to the company. In the first case, this will guarantee that all applications are up and running at full speed when the company needs it. The latter will make sure that the server is handling the workload, running more sessions at the same time, and making both internal and external customers happy.

But how about the time you spent trying to find more disk space for all your databases? How about trying to find disk space for all your backups because the database has grown too large and we didn’t plan ahead? Then there’s all the time that you spend installing SQL and Windows patches. I know, in some big companies, we have a dedicated SAN admin and the infrastructure administrators who will worry about those tasks, but that’s not everyone’s reality. The vast majority of small and medium companies have a small team that is responsible for multiple areas. Why? Scroll up and read problems 1 and 2 in my list above one more time.

I’ll wait for you.

Now, let’s imagine another reality. Let’s imagine a world where I receive a disk space alert for my backups. The company has acquired a new company, the database growth was much bigger than expected, and we ran out of disk space. I go to a web portal and a few mouse clicks later I have 1TB of disk available to me. All I have to do is open SQL Server Management Studio and change my backup jobs to use the new storage area. Problem solved in less than 15 minutes.

Let’s envision a world where I can get all those small databases I have that are not too important for the business (yeah, we all have a lot of those, don’t lie to yourself) and move those databases to the cloud so they don’t use our precious server resources. I don’t need to worry about patching and managing those databases. Wouldn’t that be great? And how about getting rid of the QA and testing servers and replacing them with virtual machines that can just turn off when they are not in use and save money? And those huge tables with hundreds of millions of rows that causes us problems every single day. Wouldn’t it be great if I could replace that complicated sliding window partition solution that we developed to manage historic data, and instead make SQL Server automatically move old and unused data to the cloud, while also keeping the data available for end users in a transparent way?

Cloud is indeed a career shift dynamic, but not one that will kill the database administrator role and destroy families. Instead, it’s one that will make us more efficient, provide us with tools and options to focus ourselves on tasks that bring value to the company. It’s a solution where we can use the existing hardware more efficiently and make our lives easier. Embrace the changes just like we embraced all new technologies that came before it, and use each one as a tool to be successful in your role.

 

Discover more about our expertise in the Cloud.

Categories: DBA Blogs

My Virtual Internship with Pythian

Pythian Group - Thu, 2016-01-28 14:38

My internship with Pythian started in October and has been an incredible opportunity to build upon my project management skills with a virtual internship. I never imagined working for a tech company. As business students, we are often characterized as lacking the hard skills needed for industries as competitive as those in STEM fields. After my internship with Pythian, I know that my perspective can bring value to a team especially within the tech industry. My work with Pythian has inspired me to apply to computer science programs in Taiwan after I graduate in June.

During my time at Pythian I worked on the Pythia Program which is Pythian’s commitment to improving gender diversity by encouraging, promoting and retaining women and young girls in STEM fields. I was able to work with managers across many different departments and learn how to be part of a virtual team, while building a plan for the Pythia Program.

 

Making an impact for women in STEM fields

The Pythia program is setting an incredible precedent for other tech companies in North America and I am very excited that I was a part of that process. Researching and compiling data on women in STEM fields, particularly the tech industries, was an eye-opening experience. Identifying the pain points for underrepresented groups in tech, particularly women, is key in developing solutions that encourage women to seek positions in this field.

I am looking forward to seeing Pythian’s impact in the local community with Technovation and the impact on the young women who will be walking the halls and learning from such great mentors. Pythian is not only making great strides for the young women in their local community, but for all women in tech by setting an example with their own diversity initiatives.

 

Working with the women of Pythian

While assigned to the Pythia Program, I was lucky to be working with women who were heads of their departments and brought a diverse range of skills to the table. Whether building communication plans with the marketing department, or measuring progress with the organizational development team, I was always challenged to look at the issue from different perspectives.

As a project manager it can be very easy to stay on the periphery and remain an outsider, and admittedly this was a concern of mine as an intern and a newcomer to Pythian. The level of trust that the OD department put in me, and their guiding hand in helping me navigate through the onboarding period, were instrumental to our success.

Working with women from many different technical backgrounds, I was able to learn much more than if I had stayed within one specific department. I cannot overstate how important it is as a young woman to work with other women on the Pythia Program. It was inspiring to be able to work with accomplished women with so much expertise that they were willing to share.

 

Working virtually is a whole different ballgame

It has been said that virtual work can be trying and difficult for those new to the team, however my time with Pythian was the complete opposite. I am very thankful to have been able to navigate a virtual internship with such incredible support from the OD team. The lines of communication have always been open, and this has been instrumental to our success on the Pythia Program.

Pythian’s managers made themselves accessible and available during my time on the Pythia program, and their guidance was excellent, as was learning from their experiences.

All in all, I could not have asked for a greater internship than my time at Pythian. I was able to work on a project that was important to me as a woman, while working with other women at Pythian. Together we made an impact within the organization and in the local tech community in Ottawa. In the months to come we will see the reach of the Pythia Program on others. For myself, the experience has been impressive as I apply to computer science programs abroad and see my own future in tech. I look forward to following the continued success of Pythian and the Pythia Program.

 

Find out more about the Pythia Program.

Categories: DBA Blogs

M2M, the Other IoT

Oracle AppsLab - Thu, 2016-01-28 11:50

Before IoT became ‘The’ buzzword, there was M2M (machine to machine). Some industries still refer to IoT as M2M, but overall the term Internet of Things has become the norm. I like the term M2M because it describes better what IoT is meant to do: Machines talking to other machines.

This year our team once again participated in the AT&T Developer Summit 2016 hackathon. With M2M in our minds, we created a platform that allows machines and humans to report extraordinary events in their neighborhood. Whenever a new event was reported (by machine or human), devices and people (notified by an app) connected to the platform could react accordingly. We came up with two possible use cases to showcase our idea.


Virtual Gated Community

Gated communities are a great commodity for those wanting privacy and security. The problem is that these communities usually come with a high price tag. So we came up with a turnkey solution for a virtual gate using M2M. We created a device using the Qualcomm DragonBoard 410c board with wifi and bluetooth capabilities. We used a common motion sensor and a camera to detect cars and people not belonging to the neighborhood. Then, we used Bluetooth beacons that could be placed on residents’ keychains. When a resident drove (or walked) by the virtual gate, it would not trigger the automated picture and report to the system, but if someone without the Bluetooth beacon drove by, the system would log and report it.

We also created an app, so residents could get notifications as well as report different events, which brings me to our second use case.

Devices reacting to events

We used AT&T Flow Designer and the M2X platform to create event workflows with notifications. A user or a device could subscribe to receive only the events that they care about, such as a lost dog or cat, water leaks, etc. The really innovative idea here is that devices can also react to certain events. For example, a user could configure their porch lights to automatically turn on when someone nearby reports suspicious activity. If everyone on the street does the same, reporting such an event could effectively turn on all the porch lights on the street at once, which would make a pretty good crime deterrent.

We called our project “Neighborhood”, and we are still amazed at how much we were able to accomplish in merely 20+ hours.


Stinkin' Badges

Scott Spendolini - Thu, 2016-01-28 07:55
Ever since APEX 5, the poor Navigation Bar has taken a back seat to the Navigation Menu. And for good reason, as the Navigation Menu offers a much more intuitive and flexible way to provide site-wide navigation that looks great, is responsive and just plain works. However, the Navigation Bar can and does still serve a purpose. Most applications still use it to display the Logout link and perhaps the name of the currently signed-on user. Some applications use it to also provide a link to a user's profile or something similar.

Another use for the Navigation Bar is to present simple metrics via badges. You've seen them before: the little red numbered icons that hover in the upper-right corner of an iPhone or Mac application, indicating that there's something that needs attention. Whether you consider them annoying or helpful, truth be told, they are a simple, minimalistic way to convey that something needs attention.

Fortunately, adding a badge to a Navigation Bar entry in the Universal Theme in APEX 5 is tremendously simple. In fact, it's almost too simple! Here's what you need to do:
First, navigate to the Shared Components of your application and select Navigation Bar List. From there, click Desktop Navigation Bar. There will likely only be one entry there: Log Out.

(screenshot)

Click Create List Entry to get started. Give the new entry a List Entry Label and make sure that the sequence number is lower than the Log Out link. This will ensure that your badged item displays to the left of the Log Out link. Optionally add a Target page. Ideally, this will be a modal page that will pop open from any page. This page can show the summary of whatever the badge is conveying. Next, scroll down to the User Defined Attributes section. Enter the value that you want the badge to display in the first (1.) field. Ideally, you should use an Application or Page Item here with this notation: &ITEM_NAME. But for simplicity's sake, it's OK to enter a value outright.
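
If you want the badge to be data-driven, one simple option is a small PL/SQL application process (or computation) that runs before the header and populates the item the badge references. Here's a minimal sketch; the application item G_OPEN_TASKS and the table USER_TASKS are made-up names for illustration, so substitute your own item and query:

begin
  -- Hypothetical example: count whatever the badge should report
  -- and store the result in an application item.
  select count(*)
    into :G_OPEN_TASKS
    from user_tasks
   where status   = 'OPEN'
     and username = :APP_USER;
end;

The User Defined Attribute for the list entry would then be &G_OPEN_TASKS. (including the trailing period used by the substitution syntax).
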
Run your application, and have a look:

(screenshot)

Not bad for almost no work. But we can make it a little better. You can control the color of the badge with a single line of CSS, which can easily be dropped into Theme Roller. Since most badges are red, let's make ours red as well. Run your application, open Theme Roller and scroll to the bottom of the options. Expand the Custom CSS region and enter the following text:

.t-Button--navBar .t-Button-badge { background-color: red;}

Save your customizations, and note that the badge should now be red:

(screenshot)

Repeat for each metric that you want to display in your Navigation Bar.

Bitmap Efficiency

Jonathan Lewis - Thu, 2016-01-28 07:02

An interesting observation came up on the Oracle-L list server a few days ago that demonstrated how clever the Oracle software is at minimising run-time work, and how easy it is to think you know what an execution plan means when you haven’t actually thought through the details – and the details might make a difference to performance.

The original question was about a very large table with several bitmap indexes, and an anomaly that appeared as a query changed its execution plan.  Here are the critical sections from the plans (extracted from memory with rowsource execution statistics enabled):

--------------------------------------------------------------------------------------------------------
|  Id |Operation                        | Name       | Starts | E-Rows | A-Rows |     A-Time | Buffers |
--------------------------------------------------------------------------------------------------------
|   6 |    TABLE ACCESS BY INDEX ROWID  |       FACT |      1 |      1 |     24 |00:00:00.01 |      31 |
|   7 |     BITMAP CONVERSION TO ROWIDS |            |      1 |        |     24 |00:00:00.01 |       7 |
|   8 |      BITMAP AND                 |            |      1 |        |      1 |00:00:00.01 |       7 |
|*  9 |       BITMAP INDEX SINGLE VALUE |     FACT_0 |      1 |        |      1 |00:00:00.01 |       3 |
|* 10 |       BITMAP INDEX SINGLE VALUE |  FACT_DIM1 |      1 |        |      4 |00:00:00.01 |       4 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
     9 - access("FACT"."C0"=243001)
    10 - access("FACT"."C1"="DIMENSION1"."ID")


-------------------------------------------------------------------------------------------------------
|  Id | Operation                      | Name       | Starts | E-Rows | A-Rows |     A-Time | Buffers |
-------------------------------------------------------------------------------------------------------
|   7 |    BITMAP CONVERSION TO ROWIDS |            |      5 |        |      8 |00:00:00.01 |     119 |
|   8 |     BITMAP AND                 |            |      5 |        |      1 |00:00:00.01 |     119 |
|*  9 |      BITMAP INDEX SINGLE VALUE |  FACT_DIM1 |      5 |        |     20 |00:00:00.01 |      28 |
|* 10 |      BITMAP INDEX SINGLE VALUE |  FACT_DIM2 |      5 |        |    140 |00:00:00.01 |      78 |
|* 11 |      BITMAP INDEX SINGLE VALUE |     FACT_0 |      5 |        |      5 |00:00:00.01 |      13 |
|  12 |   TABLE ACCESS BY INDEX ROWID  |       FACT |      8 |      1 |      8 |00:00:00.01 |       8 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
     9 - access("FACT"."C1"="DIMENSION1"."ID")
    10 - access("FACT"."C2"="DIMENSION2"."ID")
    11 - access("FACT"."C0"=243001)

The first plan shows the steps leading to a single access (Starts = 1) to the FACT table after combining two bitmap indexes; the second shows the second child of a nested loop join where Oracle has combined three bitmap indexes to access the FACT table – operation 7 (and its descendants) execute 5 times in this case. I’ve included the related parts of the predicate section so that you can see that the predicates at operations 9 and 10 of the first plan are the same as the predicates at operations 9 and 11 of the second plan.

So here’s the question – if one access to fact_dim1 requires 4 buffer visits, why does it take 28 buffer visits to do the same thing 5 times (and it is with the same value every time); conversely if one access to fact_0 requires 3 buffer visits, why do 5 visits to do the same thing take only 13 buffer visits. (Note: the arithmetic is made a little more obscure by the way in which index branch blocks may be pinned during nested loop joins.)

Then there’s a further question – not visible in the plan – the A-Rows in the “BITMAP INDEX SINGLE VALUE” operation is the number of bitmap sections in the rowsource, and we can see that the key values for index fact_dim2 have a significant number of bitmap chunks for a single key (5 executions returned 140 bitmap chunks). This scale, though, is true of all three indexes – in fact a follow-up email pointed out that a typical key value in EVERY ONE of the three indexes consisted of about 100 bitmap chunks, so why can’t we see those hundreds in the execution plan ?

So this is where we’re at: we have an execution plan where we haven’t visited all the bitmap chunks for a bitmap key, and the order in which the bitmap indexes are used in the plan seems to have some effect on the choice of leaf-blocks you visit when accessing the chunks. So (a) could a change in the order of indexes make a significant difference to the number of bitmap chunks you visit and the resulting performance, and (b) is there a way to control the order in which you visit the indexes. That’s where the note starts to get a bit technical – if you don’t want to read any more the answers are: (a) yes but probably not significantly and (b) yes.

Demo

To investigate what goes on inside a “BITMAP AND” I created a table with two bitmap indexes and used a very large setting for pctfree for the indexes so that they had to be stored with a large number of bitmap chunks per key. Here’s the code that I used, with some results from an instance of 12.1.0.2:


create table people
nologging
as
with generator as (
        select  --+ materialize 
                rownum id 
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id,
        mod(rownum-1, 1e2)      id_town_home,
        trunc((rownum-1)/1e4)   id_town_work,
        rpad('x',10,'x')        small_vc,
        rpad('x',100,'x')       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;
begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'PEOPLE',
                method_opt       => 'for all columns size 1'
        );
end;
/

create bitmap index pe_home on people(id_town_home) nologging pctfree 95;
create bitmap index pe_work on people(id_town_work) nologging pctfree 95;

select
        index_name, distinct_keys, num_rows, leaf_blocks, avg_leaf_blocks_per_key
from
        user_indexes
where
        table_name = 'PEOPLE'
order by
        index_name
;


INDEX_NAME           DISTINCT_KEYS   NUM_ROWS LEAF_BLOCKS AVG_LEAF_BLOCKS_PER_KEY
-------------------- ------------- ---------- ----------- -----------------------
PE_HOME                        100      30399       15200                     152
PE_WORK                        100       1800         907                       9

As you can see I’ve generated two columns (id_town_home, id_town_work) with 100 distinct values and 10,000 rows each, but with very different data distributions – the rows for any given value for id_town_home are uniformly spread across the entire table, every hundredth row; while the rows for any given value of id_town_work are very tightly clustered as a group of 10,000 consecutive rows. As a consequence the index entry (bitmap string) for a typical key value for id_town_home is enormous and has to be broken into 304 chunks spread across 152 leaf blocks (2 index entries per leaf block), while the index entry for a typical key value for id_town_work is much shorter, but still requires 18 chunks spread across 9 leaf blocks.

So what will I see if I run the following query, and force it to use a BITMAP AND of the two indexes, in the two different orders:

select
        /*+ index_combine(pe) */
        max(small_vc)
from
        people pe
where
        id_town_home = 50
and     id_town_work = 50
;

Based on a very simple interpretation of the typical execution plan and using the index stats shown above we might expect to see roughly A-Rows = 18 with 9 buffer gets (plus a few more for segment headers and branch blocks) on the id_town_work index and A-Rows = 304 with 152 buffer gets on the id_town_home index to allow Oracle to generate and compare the two bit strings – but here are the two plans with their execution stats, generated in 12.1.0.2, and each run after flushing the buffer cache:

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |         |      1 |        |      1 |00:00:00.01 |     118 |    117 |
|   1 |  SORT AGGREGATE                      |         |      1 |      1 |      1 |00:00:00.01 |     118 |    117 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| PEOPLE  |      1 |    100 |    100 |00:00:00.01 |     118 |    117 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |         |      1 |        |    100 |00:00:00.01 |      18 |     17 |
|   4 |     BITMAP AND                       |         |      1 |        |      1 |00:00:00.01 |      18 |     17 |
|*  5 |      BITMAP INDEX SINGLE VALUE       | PE_WORK |      1 |        |     18 |00:00:00.01 |      14 |     13 |
|*  6 |      BITMAP INDEX SINGLE VALUE       | PE_HOME |      1 |        |      4 |00:00:00.01 |       4 |      4 |
-------------------------------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
-------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |         |      1 |        |      1 |00:00:00.01 |     122 |    120 |
|   1 |  SORT AGGREGATE                      |         |      1 |      1 |      1 |00:00:00.01 |     122 |    120 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| PEOPLE  |      1 |    100 |    100 |00:00:00.01 |     122 |    120 |
|   3 |    BITMAP CONVERSION TO ROWIDS       |         |      1 |        |    100 |00:00:00.01 |      22 |     20 |
|   4 |     BITMAP AND                       |         |      1 |        |      1 |00:00:00.01 |      22 |     20 |
|*  5 |      BITMAP INDEX SINGLE VALUE       | PE_HOME |      1 |        |      5 |00:00:00.01 |       8 |      7 |
|*  6 |      BITMAP INDEX SINGLE VALUE       | PE_WORK |      1 |        |     18 |00:00:00.01 |      14 |     13 |
-------------------------------------------------------------------------------------------------------------------

We have NOT touched anything like the entire bit-string for the id_town_home index – a bit-string that spans 152 leaf blocks! Clearly Oracle is doing something clever to minimise the work, and it’s so clever that switching the order of these two extremely different indexes in the plan has made virtually no difference to the work done. Obviously I can’t tell you exactly what the code is doing, but I think I can produce a reasonable guess about what’s going on.

The pe_work index has the smaller number of leaf blocks per key, which makes it the better starting choice for the AND in this case, so the optimizer’s default starting action was to pick the first couple of chunks of that index key value; and Oracle immediately sees that the first rowid that it could possibly need in its result set is roughly in the middle of the table – remember that the “key” columns of a bitmap index are (real_key, first_rowid_of_chunk, last_rowid_of_chunk, compressed_bitstring).

Since it now knows the lowest possible rowid that it could need Oracle can now probe the pe_home index by (id_town_home=50, {target_rowid}) – which will let it go to a bitmap index chunk that’s roughly in the middle of the full range of 152. Then Oracle can expand the bitstrings from the chunks it has, reading new chunks as needed from each of the indexes until the 18 chunks / 9 leaf blocks from the pe_work index have been used up (and that range would have aligned with just two or three chunks from the pe_home index) at which point Oracle can see there’s no more rows in the table that could match both predicates and it doesn’t need to read the next 75 chunks of the pe_home index.

Conversely, when I forced Oracle to use the (inappropriate) pe_home index first, it read the first couple of chunks, then read the first couple of chunks of the pe_work index, at which point it discovered that it didn’t need any of the pe_home index prior to (roughly) chunk 75, so it jumped straight to the right chunk to align with pe_work and carried on from there. That’s why the forced, less efficient, plan that visited pe_home first visited just a couple more leaf blocks than the plan the optimizer selected for itself.

Bottom line on performance (tl;dr) – Oracle is sufficiently smart about checking the start and end ranges on bitmap indexes (rather than arbitrarily expanding the entire bitmap for each key) that even for very large bitmap index entries it will probably only access a couple of “redundant” leaf blocks per index even if it picks the worst possible order for using the indexes. You’re far more likely to notice Oracle picking the wrong indexes (because you know the data better) than you are to spot it using the right indexes in the wrong order – and given that bitmap indexes tend to be relatively small and well buffered (compared to the tables), and given the relatively large number of rows we pick by random I/O from fact tables, a little extra work in the bitmap indexes is unlikely to make a significant difference to the performance of most queries.

Closing fact: in the unlikely circumstances that you do spot the special case where it will make a difference (and it will probably be a difference in CPU usage) then you can dictate the order of the indexes with the undocumented bitmap_tree() hint.  I may get round to writing up the variations one day but, for this simple case, the index_combine() hint that I used to force the BITMAP AND turned into the following bitmap_tree() hint in the outline:

bitmap_tree(@sel$1 pe@sel$1 and((people.id_town_work) (people.id_town_home)))

bitmap_tree( @query_block     table_name@query_block     and( ({first index definition}) ({second index definition}) ) )

Obviously not suitable to throw into production code casually – check with Oracle support if you think it’s really necessary – but if you wanted to reverse the order of index usage in this case you could just swap the order of the index definitions. If you thought there was a third index that should be used you could include its definition (note that it’s table_name.column_name – the index definition – in the brackets).
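
To make that concrete, here’s a sketch of the demo query from above with the index order reversed in the hint (the query block name @sel$1 is specific to this simple statement, and, as I said, it’s not something to drop into production code casually):

select
        /*+
                bitmap_tree(@sel$1 pe@sel$1
                        and((people.id_town_home) (people.id_town_work)))
        */
        max(small_vc)
from
        people pe
where
        id_town_home = 50
and     id_town_work = 50
;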

My reference: bitmap_control_02.sql


(Not So) Internal Dialogue

Tim Hall - Thu, 2016-01-28 04:52

When I wrote about rehearsals in my public speaking tips series, I mentioned talking through small sections of the presentations as a means of rehearsal. I do this a lot! I live on my own, so this is not an internal dialogue. I say this stuff out loud.

This morning I was talking through some ideas as I left the house and cleared the ice off the car. I continued during the journey to work, including when I got out of the car to get a coffee from the Costa Express at the garage. Even as I was unlocking the office door.

If you happen to see me doing this in the street, I’m not talking to an imaginary friend. It’s an imaginary audience. If you’re gonna do crazy, you might as well do it at scale… :)

Cheers

Tim…

(Not So) Internal Dialogue was first posted on January 28, 2016 at 11:52 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

The best OBIEE 12c feature that you’re probably not using.

Rittman Mead Consulting - Thu, 2016-01-28 03:00

With the release of OBIEE 12c we got a number of interesting new features on the front-end.  We’re all talking about the cleaner look-and-feel, Visual Analyzer, and the ability to create data-mashups, etc.

While all this is incredibly useful, it’s one of the small changes you don’t hear about that’s truly got me excited.  I can’t tell you how thrilled I am that we can finally save a column back to the web catalog as an independent object (to be absolutely fair, this actually first shipped with 11.1.1.9).

For the most part, calculations should be pushed back to the RPD.  This reduces the complexity of the reports on the front-end, simplifies maintenance of these calculations, and ultimately ensures that the same logic is used across the board in all dashboards and reports… all the logic should be in the RPD.  I agree with that 110%… at least in theory.  In reality, this isn’t always practical.  When it comes down to it, there’s always some insane deadline or there’s that pushy team (ahem, accounting…) riding you to get their dashboard updated and migrated in time for year end, or whatever.  It’s sometimes quite simply easier to just code the calculation in the analysis.  So, rather than take the time to modify the RPD, you fat-finger the calculation in the column formula.  We’ve all done it.  But, if you spend enough time developing OBIEE reports and dashboards, sooner or later you’ll find that this is gonna come back to bite you.
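
For example, the kind of thing that tends to get hard-coded into an analysis might look like the margin calculation below (the "Sales" folder and column names are purely illustrative):

CASE WHEN "Sales"."Revenue" <> 0
     THEN ("Sales"."Revenue" - "Sales"."Cost") / "Sales"."Revenue"
     ELSE NULL
END

It works, the dashboard ships on time, and the logic quietly lives on in that one analysis.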

Six months, a year from now, you’ll have completely forgotten about that calculation.  But there will be an org change, or a hierarchy update… something to change the logic of that calculation, and you’ll need to make a change.  Only now you no longer remember the specifics of the logic you coded, and even worse you don’t remember whether you included that same calculation in any of the other analyses you were working on at the time.  Sound familiar?  Now, a change that should have been rather straightforward and could have been completed in an afternoon takes two to three times longer as you dig through all your old reports trying to make sense of things.  (If only you’d documented your development notes somewhere…)

Saving columns to the web catalog is that middle ground that gives us the best of both worlds… the convenience of quickly coding the logic on the front-end, but the peace of mind of knowing that the logic is all in one place to ensure consistency and ease maintenance.

After you update your column formula, click OK.

From the column dropdown, select the Save Column As option.

Save the column to the web catalog.  Also, be sure to use the description field.  The description is a super convenient place to store a few lines of text that your future self or others can use to understand the purpose of this column.

As an added bonus, this description field is also used when searching the web catalog.  So, if you don’t happen to remember what name you gave a column but included a little blurb about the calculation, all is not lost.

Saved columns can be added from the web catalog.

Add will continue to reference the original saved column, so that changes made to the saved column will be reflected in your report.  Add Copy will add the column to your report, but future changes to the saved column will not be reflected.

One thing to note, when you add a saved column to a report it can no longer be edited from within the report.  When you click on Edit Formula you will still be able to see the logic, but you will need to open and edit that saved column directly to make any changes to the formula.

Try out the saved columns, you’ll find it’s invaluable and will greatly reduce the time it takes to update reports.  And with all that free time, maybe you’ll finally get to play around with the Visual Analyzer!

The post The best OBIEE 12c feature that you’re probably not using. appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Checksum entire directory using md5sum

Michael Dinh - Wed, 2016-01-27 17:52

When you are backing up a 3.5+ TB database, resulting in a 600+ GB backupset to be transferred to a new DC, you would want to ensure there are no corruptions resulting from the transfer.

Here is an example of how to perform a checksum using md5sum for all the contents of a directory.

Taking the process further, split the .md5 file into multiple files for parallel processing.

oracle@arrow:hawklas:/oradata/keep
$ md5sum *MIG* > backup.md5


oracle@arrow:hawklas:/oradata/keep
$ cat backup.md5
080adb9ba716f3bf6f91efb06adc311e  HAWK_3130551611_20160127_hfqsdvfv_1_1_DB_MIG_1583_1
36bd695b6d87b25d153edc91ee79992f  HAWK_3130551611_20160127_hfqsdvfv_2_1_DB_MIG_1583_2
730c32ef5415bc9dd931d026fb42bcd7  HAWK_3130551611_20160127_hgqsdvg0_1_1_DB_MIG_1584_1
36fd7d0098bd628921ac789155f0a712  HAWK_3130551611_20160127_hgqsdvg0_2_1_DB_MIG_1584_2
072f757dfb808c96aef92555f80165eb  HAWK_3130551611_20160127_hgqsdvg0_3_1_DB_MIG_1584_3
767e06a7eea3cb72243d180a5591e2a2  HAWK_3130551611_20160127_hgqsdvg0_4_1_DB_MIG_1584_4
600ef7710995dcb1d11c0b27a6ee9971  HAWK_3130551611_20160127_hgqsdvg0_5_1_DB_MIG_1584_5
28c279a24cf47fb53b1f7840b7df7fc5  HAWK_3130551611_20160127_hgqsdvg0_6_1_DB_MIG_1584_6
e418ac58da6629f6aeec9c9f7bc47ca5  HAWK_3130551611_20160127_hgqsdvg0_7_1_DB_MIG_1584_7
3e6d8788ec1b5c2071b435c10b34b746  HAWK_3130551611_20160127_hgqsdvg0_8_1_DB_MIG_1584_8
5b1c964eabcc8bd602f6cef15482820a  HAWK_3130551611_20160127_hgqsdvg0_9_1_DB_MIG_1584_9
4655bac6d066ff47799ef8dcf423f532  HAWK_3130551611_20160127_hhqsdvg6_1_1_DB_MIG_1585_1
da4add20652a16726006f46a294cf90a  HAWK_3130551611_20160127_hhqsdvg6_2_1_DB_MIG_1585_2
a2ce4073fb16336c2f8a3cf78f1709ec  HAWK_3130551611_20160127_hhqsdvg6_3_1_DB_MIG_1585_3
301ef428887aba61aa33410d6c8b3c70  HAWK_3130551611_20160127_hhqsdvg6_4_1_DB_MIG_1585_4
47379e228a839bb4257aa141dbfd5107  HAWK_3130551611_20160127_hiqsdvgm_1_1_DB_MIG_1586_1
84e2cd2931f7272832b3c4cd3923e69d  HAWK_3130551611_20160127_hiqsdvgm_2_1_DB_MIG_1586_2
33da63364865b3b959491545905fdc9f  HAWK_3130551611_20160127_hiqsdvgm_3_1_DB_MIG_1586_3
06e04d3e05aff26a197621964fcb8617  HAWK_3130551611_20160127_hiqsdvgm_4_1_DB_MIG_1586_4
5436a855b3d287f9b6ed87ba07cefeb7  HAWK_3130551611_20160127_hiqsdvgm_5_1_DB_MIG_1586_5
0bc71d9930bf2653b6c8661dbf388989  HAWK_3130551611_20160127_hjqsdvh9_1_1_DB_MIG_1587_1
fa2447d3842b476ed365ef37c74db7d9  HAWK_3130551611_20160127_hkqsdvhd_1_1_DB_MIG_1588_1
beb4c1726c99335dff262224828925f5  HAWK_3130551611_20160127_hlqsdvhf_1_1_DB_MIG_1589_1
93aede1cc96dc286ff6c7d927d9505af  HAWK_3130551611_20160127_hmqsdvhh_1_1_DB_MIG_1590_1
eb0a7eb4cab45e5c9b053abb7c736f0d  HAWK_3130551611_20160127_hnqsdvhu_1_1_AL_MIG_1591_1
77808e8a0cb4ac766e7e4d868c7c81b4  HAWK_3130551611_20160127_hoqsdvhu_1_1_AL_MIG_1592_1
5994fe2cf07786495f99d2fd9f42b1f8  HAWK_3130551611_20160127_hpqsdvi5_1_1_AL_MIG_1593_1
784be893cd558cecf65aa51ba79b3e04  HAWK_3130551611_20160127_hpqsdvi5_2_1_AL_MIG_1593_2
5859c37d98d77174675f89e7590ed597  HAWK_3130551611_20160127_hqqsdvi6_1_1_AL_MIG_1594_1
6f2a8a68ab0e9847f8f2248de55cb51a  HAWK_3130551611_20160127_hrqsdvi9_1_1_AL_MIG_1595_1
446976ee788754e8cb48fb00a0acec92  HAWK_3130551611_20160127_hsqsdvid_1_1_AL_MIG_1596_1
73a488bc5aa0664fd80237a2b8da5ea8  HAWK_3130551611_20160127_htqsdvig_1_1_AL_MIG_1597_1
c0200bb23218859fb6a1edc3f7cba94d  HAWK_3130551611_20160127_huqsdvii_1_1_AL_MIG_1598_1
802c8f1524b7c6405d4d41a3b64f0a47  HAWK_3130551611_20160127_hvqsdvil_1_1_AL_MIG_1599_1
acf0c5b5ca6a3f9b58e7880eb093330a  HAWK_3130551611_20160127_i0qsdvil_1_1_AL_MIG_1600_1
dd1746fbaf90b1d867300286e297d2b3  HAWK_3130551611_20160127_i1qsdvin_1_1_AL_MIG_1601_1
7bb58056cac524304a88ba95a6837ac8  HAWK_3130551611_20160127_i2qsdvin_1_1_AL_MIG_1602_1
81ea52aadb6767ac3d3e2ae33acc9d64  HAWK_3130551611_20160127_i3qsdvio_1_1_AL_MIG_1603_1
8481443992c6858edbc03a481e13f579  HAWK_3130551611_20160127_i4qsdvip_1_1_AL_MIG_1604_1
539716837bd98835cf9b43b83cb60b79  HAWK_3130551611_20160127_i5qsdvj0_1_1_CF_MIG_1605_1
d2715ac45c5aa1e7dcd4c706c0a542ee  HAWK_3130551611_20160127_i6qsdvj5_1_1_CF_MIG_1606_1
9532408727adfd012728f989dd9a41ad  HAWK_3130551611_20160127_i7qsdvj8_1_1_CF_MIG_1607_1
840cd94839941643ecd1cbacc568ff9c  HAWK_3130551611_20160127_i8qsdvja_1_1_CF_MIG_1608_1


oracle@arrow:hawklas:/oradata/keep
$ md5sum -c -w backup.md5
HAWK_3130551611_20160127_hfqsdvfv_1_1_DB_MIG_1583_1: OK
HAWK_3130551611_20160127_hfqsdvfv_2_1_DB_MIG_1583_2: OK
HAWK_3130551611_20160127_hgqsdvg0_1_1_DB_MIG_1584_1: OK
HAWK_3130551611_20160127_hgqsdvg0_2_1_DB_MIG_1584_2: OK
HAWK_3130551611_20160127_hgqsdvg0_3_1_DB_MIG_1584_3: OK
HAWK_3130551611_20160127_hgqsdvg0_4_1_DB_MIG_1584_4: OK
HAWK_3130551611_20160127_hgqsdvg0_5_1_DB_MIG_1584_5: OK
HAWK_3130551611_20160127_hgqsdvg0_6_1_DB_MIG_1584_6: OK
HAWK_3130551611_20160127_hgqsdvg0_7_1_DB_MIG_1584_7: OK
HAWK_3130551611_20160127_hgqsdvg0_8_1_DB_MIG_1584_8: OK
HAWK_3130551611_20160127_hgqsdvg0_9_1_DB_MIG_1584_9: OK
HAWK_3130551611_20160127_hhqsdvg6_1_1_DB_MIG_1585_1: OK
HAWK_3130551611_20160127_hhqsdvg6_2_1_DB_MIG_1585_2: OK
HAWK_3130551611_20160127_hhqsdvg6_3_1_DB_MIG_1585_3: OK
HAWK_3130551611_20160127_hhqsdvg6_4_1_DB_MIG_1585_4: OK
HAWK_3130551611_20160127_hiqsdvgm_1_1_DB_MIG_1586_1: OK
HAWK_3130551611_20160127_hiqsdvgm_2_1_DB_MIG_1586_2: OK
HAWK_3130551611_20160127_hiqsdvgm_3_1_DB_MIG_1586_3: OK
HAWK_3130551611_20160127_hiqsdvgm_4_1_DB_MIG_1586_4: OK
HAWK_3130551611_20160127_hiqsdvgm_5_1_DB_MIG_1586_5: OK
HAWK_3130551611_20160127_hjqsdvh9_1_1_DB_MIG_1587_1: OK
HAWK_3130551611_20160127_hkqsdvhd_1_1_DB_MIG_1588_1: OK
HAWK_3130551611_20160127_hlqsdvhf_1_1_DB_MIG_1589_1: OK
HAWK_3130551611_20160127_hmqsdvhh_1_1_DB_MIG_1590_1: OK
HAWK_3130551611_20160127_hnqsdvhu_1_1_AL_MIG_1591_1: OK
HAWK_3130551611_20160127_hoqsdvhu_1_1_AL_MIG_1592_1: OK
HAWK_3130551611_20160127_hpqsdvi5_1_1_AL_MIG_1593_1: OK
HAWK_3130551611_20160127_hpqsdvi5_2_1_AL_MIG_1593_2: OK
HAWK_3130551611_20160127_hqqsdvi6_1_1_AL_MIG_1594_1: OK
HAWK_3130551611_20160127_hrqsdvi9_1_1_AL_MIG_1595_1: OK
HAWK_3130551611_20160127_hsqsdvid_1_1_AL_MIG_1596_1: OK
HAWK_3130551611_20160127_htqsdvig_1_1_AL_MIG_1597_1: OK
HAWK_3130551611_20160127_huqsdvii_1_1_AL_MIG_1598_1: OK
HAWK_3130551611_20160127_hvqsdvil_1_1_AL_MIG_1599_1: OK
HAWK_3130551611_20160127_i0qsdvil_1_1_AL_MIG_1600_1: OK
HAWK_3130551611_20160127_i1qsdvin_1_1_AL_MIG_1601_1: OK
HAWK_3130551611_20160127_i2qsdvin_1_1_AL_MIG_1602_1: OK
HAWK_3130551611_20160127_i3qsdvio_1_1_AL_MIG_1603_1: OK
HAWK_3130551611_20160127_i4qsdvip_1_1_AL_MIG_1604_1: OK
HAWK_3130551611_20160127_i5qsdvj0_1_1_CF_MIG_1605_1: OK
HAWK_3130551611_20160127_i6qsdvj5_1_1_CF_MIG_1606_1: OK
HAWK_3130551611_20160127_i7qsdvj8_1_1_CF_MIG_1607_1: OK
HAWK_3130551611_20160127_i8qsdvja_1_1_CF_MIG_1608_1: OK

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5

oracle@arrow:hawklas:/oradata/keep
$ vi backup.md5  -- create false error by modifying checksum.

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5
HAWK_3130551611_20160127_hfqsdvfv_1_1_DB_MIG_1583_1: FAILED
md5sum: WARNING: 1 of 43 computed checksums did NOT match
oracle@arrow:hawklas:/oradata/keep
$
oracle@arrow:hawklas:/oradata/keep
$ ls *MIG*|wc -l
43

oracle@arrow:hawklas:/oradata/keep
$ split -l 10 backup.md5 backup.md5 -- split file to contain 10 checksums per file.

oracle@arrow:hawklas:/oradata/keep
$ ll ba*
-rw-r--r--. 1 oracle oinstall 3698 Jan 27 12:52 backup.md5
-rw-r--r--. 1 oracle oinstall  860 Jan 27 12:56 backup.md5aa
-rw-r--r--. 1 oracle oinstall  860 Jan 27 12:56 backup.md5ab
-rw-r--r--. 1 oracle oinstall  860 Jan 27 12:56 backup.md5ac
-rw-r--r--. 1 oracle oinstall  860 Jan 27 12:56 backup.md5ad
-rw-r--r--. 1 oracle oinstall  258 Jan 27 12:56 backup.md5ae

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5aa
HAWK_3130551611_20160127_hfqsdvfv_1_1_DB_MIG_1583_1: FAILED
md5sum: WARNING: 1 of 10 computed checksums did NOT match

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5ab

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5ac

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5ad

oracle@arrow:hawklas:/oradata/keep
$ md5sum -c --quiet backup.md5ae

SQL Server Cumulative Update changes

Pythian Group - Wed, 2016-01-27 12:42

 

Microsoft recently released the first Cumulative Update for SQL Server 2012 SP3. This CU addresses 8 issues in the product. But more importantly, it also marks a big shift in the Cumulative Update message sent by Microsoft.

For a long time, there was a discussion among DBAs about when/if we should apply a cumulative update. Microsoft's official message has always been that we should apply a cumulative update only if we were facing an issue in our environment caused by a known and fixed bug. This was also evident from the fact that, to download a cumulative update, it was necessary to register on their website and provide an email address, to which they would send a link to download the package.

So, what has changed? Starting now, the latest cumulative update package will be maintained in the Download Center instead of the hotfix server. This eliminates the need to register to get the latest package, but it is not the only thing that has changed. The entire message that you can read in the knowledge base article has changed as well: instead of a warning saying that we should not install the package unless it was necessary, now we have:
“ Microsoft recommends ongoing, proactive installation of SQL Server CUs as they become available:
SQL Server CUs are certified to the same levels as Service Packs, and should be installed with the same level of confidence
Historical data show a significant number of support cases involve an issue that has already been addressed in a released CU
CUs may contain added value over and above hotfixes such as supportability, manageability, and reliability updates
As with SQL Server Service packs, Microsoft recommends that you test CUs before deploying to production environments”

This is a big change from what we had before. The concerns we had in the past were justified because the hotfixes were not tested to the same level as the service packs. There were no regression tests and not all possible integration tests were executed, so there was a real concern that something could go wrong in specific scenarios that were not tested. But this has changed, and now every cumulative update goes through the same certification levels that are applied to Service Packs.

This trend is not happening only with SQL Server; it is the result of an agile development effort happening throughout the entire Microsoft stack of products. Windows, in both personal and server editions, has had constant updates instead of Service Packs for some time now, and it seems SQL Server will soon follow the same road.

This big change in how Microsoft delivers updates brings us to an interesting discussion: how do you manage frequent and constant product updates in your environment? The last item in the Microsoft message clearly says that you need to test CUs before applying them, just like you test Service Packs. Are customers willing to go through testing and patching processes every couple of months when a new CU is released? How can we convince customers of the benefit of having the latest version of the product?

I believe people will eventually get used to this new model of constant updates and catch up, creating plans to update their environments more often: maybe not applying every single CU, but applying them every two releases, or every half year, etc.

What do you think? How do you see this new model fitting into your existing environment? I would love to hear other people's experiences on this subject.

Oh, and before I forget: you can download the latest CU for SQL Server 2012 SP3 that I mentioned at the beginning of the article. The link will not change for every release, so you will always be able to download the latest version using this link.

Categories: DBA Blogs

How do you define IT innovation?

Pythian Group - Wed, 2016-01-27 10:57

Find out how the experts answered at Pythian’s Velocity of Innovation event in San Francisco

Once again, I had the pleasure of moderating another Velocity of Innovation event this past November in San Francisco. Both panelists and guests brought with them a varied range of insights, experiences and perspectives. And as always, this resulted in a thoughtful and lively discussion.

The format of these events is to put three IT leaders with expertise in a particular area in a room, and start a conversation. With our Velocity events, we always start our discussions with a few questions, and the panelists provide some answers. But the idea is to get a conversation going between the audience and panelists.

That day, we discussed a range of subjects from IT innovation, to security, to our favourite wrap-up topic: taking a look into the IT crystal ball and commenting on what current technology trends will really take hold in the future. This last one is always a lot of fun. In this blog post I will provide just some of the highlights from the first discussion topic at the event: innovation and agility in IT.

On our panel were three IT experts:

Sean Rich is the director of IT at Mozilla, leading their application services group. He takes care of web operations, along with pretty much everything data related.

Cory Isaacson is the CTO at Risk Management Solutions. He’s currently bringing big data and scalable systems together to create a new cloud-based platform.

Aaron Lee is Pythian’s VP of transformation services. He runs a team that specializes in helping clients harness technology to deliver real outcomes. Usually they involve things like big data, DevOps, cloud, advanced analytics. He’s involved in some of the most leading edge initiatives for Pythian customers.

I started the discussion by asking the panel, and the audience members, to discuss the notions of innovation and agility, and to try to describe what they have explicitly done to improve innovation and to make their own organizations and those of their customers more agile.

Cory: My business evaluates risk. Our customers are the biggest insurance companies in the world. We run catastrophe models for them so that we can actually see what an earthquake might cost them or a hurricane, for example. I run technology for the company and have to build all the software. We’re innovating tremendously and so now it’s funny because our executives ask us to evaluate the risk of our own projects. We’re trying to do some very, very innovative things. I don’t know if any of you have an insurance background, but it’s not the most up-to-date industry when it comes to technology. As you know it’s been around a long, long time. But at my company some of the things that we’re trying to do are, honestly, more advanced than most other things I’ve ever seen in my career. And that’s why I took the position. But when you’re doing innovation, it is risky. There’s no way around it. There are a lot to evaluate: from different algorithms to the risk models and the catastrophe models. How do you evaluate them? Can you actually run them? We’re going from a 25-year-old desktop application, running on Microsoft SQL server to a cloud-based implementation. We’re taking thousands of servers and trying to move all the customers into a cloud implementation.

Sean: In my role I’m interpreting it a little bit differently. Innovation is doing something new. In an effort toward agility, one of the things that we’re doing within our organization is enabling the agility of our business partners by changing our own operating model. We’re moving from traditional IT, where we run all the services and infrastructure necessary to drive the business, to taking more of an enabler or a partnership approach where we’re doing things like encouraging shadow IT, encouraging the use of SaaS applications and helping them really do that better through different service offerings like vendor management or change management when it comes to user adoption of certain platforms, data integration so that when we have work flows that span multiple areas of the business, we can complete those without moving data around manually and some of the other problems that come with that. That’s one way that we’re doing new things, looking at ourselves differently: what new capabilities do we need to develop, processes, tools and skills to enable agility for our marketing group or our product lines, as an example. That’s a little bit of one way I think about it.

Then I asked: What gets in the way of agility?

Aaron: I think it’s interesting considering we used to improve innovation by looking right at the root of the motivation for it. Why are we going down this path, trying to innovate something and what is the value of that thing we’re trying to innovate? It’s my belief that the thing that we should value in our colleagues is their judgment. The pre-conditions for being able to offer judgment are you need to be fully informed, you need to be aligned with an objective, there needs to be a reason and even an incentive to offer that kind of input. If the shared goals around innovation opportunities aren’t defined in a way that actually lead to success over time, then the business is just like any other organism: it gets its fingers burned, it starts to get more risk adverse and then it becomes harder and harder to execute any kind of change agenda. Planning in a way that is likely to have a good long-term outcome, even at the outset of any sort of initiative, is one key success criteria that we put in place to help ourselves and our customers get to a good place.

Attendee 1: Established companies who have the ability to have long-term leaders are losing the race because they’re not agile. Those leaders need to transform in their mind first to say, “This is something that needs to be done,” commit to it and maintain an attitude where, having given that direction, they don’t penalize employees for failure as they run small experiments. A lot of companies have complex projects where they spin off a small team and say, “You go do whatever you want and we are going to give you some limited funding, but we are not going to ask you for results.” CEOs and COOs are looking for, “If I spend 10 million bucks, what am I going to get for it?” When you focus on bottom-line results, you hide the cost of innovation.

Sean: Yeah, it’s culturally endemic, sort of near-term focus on success instead of on the long term and the impact that has on innovation.

Cory: There are some companies, like Google who have been known to allow an engineer to take a day a week or a day every two weeks and just look at things. I think though, the challenge is you have to get your organization up to the point where this is an economically viable thing to do. That’d be something I’d love to do with our team, but they’re so stressed out on getting the next thing done. Once we get more ahead of the curve, I think we could do that kind of thing.

The discussion went on to cover operationalizing innovation, or making it a program within an organization, before we moved on to other similarly thought-provoking subjects.

Interested in being a part of a discussion like this one? VELOCITY OF INNOVATION is a series of thought-leadership events for senior IT management hosted by Pythian. Pythian invites leading IT innovators to participate in discussions about today’s disruptive technologies: big data, cloud, advanced analytics, DevOps, and more. These events are by invitation only.

If you are interested in attending an upcoming Velocity of Innovation event in a city near you, please contact events@pythian.com.

Categories: DBA Blogs

HCM Cloud Takes to the Airwaves

Linda Fishman Hoyle - Wed, 2016-01-27 10:44

A Guest Post by Frank Cowell,  Director, Oracle HCM Cloud Center of Excellence (pictured left)

Have you heard about HCM Talk Radio? It provides audio podcasts for our HCM Cloud customers and partners. Each show is about 20 minutes long and focuses on best practices and hot topics. Obviously in 20 minutes, we don’t go deep. We’re just trying to pique your interest in the topic and then point you to more in-depth resources on My Oracle Support, Customer Connect, and Help Center.

HCM Talk Radio is recorded at the UK Oracle Studios in Reading. Our guests are customers, partners, and employees, and they join the show either in person or remotely. The shows are published to the Oracle Podcast site. Many of our listeners have signed up for the RSS feeds; others download from iTunes or use Doc ID 2052534 on My Oracle Support Knowledge Base.

We are currently averaging more than 200 views per day. The total number of views since we started in March 2015 is 25,401 and climbing. Your feedback has been very positive and helps us keep our podcasts relevant to the needs of our HCM Cloud community.

We do hope you’ll tune in and see what we’re all about. Here are three of the most recent podcasts you may want to recommend to your prospects and customers: “Change Management best practices when adopting HCM Cloud Services,” “The new Role Copy feature in Release 10,” and “How to prepare for an HCM Cloud release upgrade.”

And one more thing: please contact me at frank cowell@oracle.com if you want to collaborate on a future broadcast.

HCM Talk Radio comes courtesy of the Oracle HCM Cloud Development Center of Excellence Team.


Up in the JCS Clouds !!

Tim Dexter - Wed, 2016-01-27 04:05
Hello Friends,

Oracle BI Publisher has been in the cloud for quite some time... as a part of Fusion Applications and a few other Oracle product offerings. We now announce certification of BI Publisher on Java Cloud Service!!

BI Publisher on JCS

Oracle Java Cloud Service (JCS) is a part of the platform service offerings in Oracle Cloud. Powered by Oracle WebLogic Server, it provides a platform on top of Oracle's enterprise-grade cloud infrastructure for developing and deploying new or existing Java EE applications. Check here for more details on JCS. On that page, under "Perform Advanced Tasks", you can find a link to "Leverage your on-premise licenses". That page lists all the products certified for Java Cloud Service, and we can now see BI Publisher 11.1.1.9 listed as one of the certified products using Fusion Middleware 11.1.1.7.


How to Install BI Publisher on JCS?

Here are the steps to install BI Publisher on JCS. The certification supports the Virtual Image option only.

Step 1: Create DBaaS Instance


Step 2: Create JCS Instance

To create an Oracle Java Cloud Service instance, use the REST API for Oracle Java Cloud Service. Do not use the wizard in the GUI: the wizard does not offer an option to specify the MWHOME partition size, whereas the REST API does, and the default size created by the wizard is generally insufficient for BI Publisher deployments.

The detailed instructions to create a JCS instance are available in the Oracle By Example tutorial under "Setting up your environment", "Creating an Oracle Java Cloud Service instance".


Step 3:  Install and Configure BI Publisher

  1. Set up RCU on DBaaS
    • Copy RCU
    • Run RCU
  2. Install BI Publisher in JCS instance
    • Copy BI Installer in JCS instance
    • Run Installer
    • Use Software Only Install
  3. Configure BI Publisher
    • Extend Weblogic Domain
    • Configure Policy Store
    • Configure JMS
    • Configure Security

You can follow the detailed installation instructions as documented in the "Oracle By Example" tutorial.


Minimum Cloud Compute and Storage Requirements:

  1. Oracle Java Cloud Service: 1 OCPU, 7.5 GB Memory, 62 GB Storage
    • To install Weblogic instance
    • To Install BI Publisher
    • To set Temp File Directory in BI Publisher
  2. Oracle Database Cloud Service: 1 OCPU, 7.5 GB Memory, 90 GB Storage
    • To install RCU
    • To use DBaaS as a data source
  3. Oracle IaaS (Compute & Storage): (Optional - Depends on sizing requirements)
    • To Enable Local & Cloud Storage option in DBaaS (Used with Full Tooling option)

So now you can use your on-premise license to host standalone BI Publisher on Java Cloud Service for all your highly formatted, pixel-perfect enterprise reports for your cloud-based applications. Have a great day!!

Categories: BI & Warehousing

Add primary key.

Jonathan Lewis - Wed, 2016-01-27 03:07

I thought I had written this note a few years ago, on OTN or Oracle-L if not on my blog, but I can’t find any sign of it so I’ve decided it’s time to write it (again) – starting as a question about the following code:


create table t1
as
with generator as (
        select  rownum  id
        from            dual
        connect by
                        rownum <= 1000
)
select
        rownum                                  id,
        trunc((rownum-1)/50)                    clustered,
        mod(rownum,20000)                       scattered,
        lpad(rownum,10)                         vc_small,
        rpad('x',100,'x')                       vc_padding
from
        generator       g1,
        generator       g2
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

alter system flush buffer_cache;

alter table t1 add constraint t1_pk primary key(id, scattered);

I’ve generated a table with 1,000,000 rows, including a column that’s guaranteed to be unique; then I’ve added a (two-column) primary key constraint to that table.

Because of the guaranteed unique column the call to add constraint will succeed. Because Oracle will automatically create a unique index to support that constraint it will have to do a tablescan of the table. So here’s the question: HOW MANY TIMES will it tablescan that table (and how many rows will it scan) ?

Space for thought …

The answer is three tablescans, 3 million rows.

Oracle will scan the table to check the validity of adding a NOT NULL definition and constraint for the id column, repeat the scan to do the same for the scattered column, then one final scan to accumulate the key data and rowids to sort and create the index.

Knowing this, you may be able to find ways to modify bulk data loading operations to minimise overheads.
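
For example, here’s a minimal sketch of one variation you might experiment with (an illustration of the principle rather than a measured benchmark, and note that the NOT NULL checks will still need their scans unless the columns were declared NOT NULL when the table was created or loaded): build the unique index explicitly, then add the constraint telling Oracle to use that index, so the constraint addition doesn’t have to build one of its own.

create unique index t1_pk on t1(id, scattered);

alter table t1
        add constraint t1_pk primary key(id, scattered)
        using index t1_pk
;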

The most recent version I’ve tested this on is 12.1.0.2.

See also: https://jonathanlewis.wordpress.com/2012/03/02/add-constraint/

My reference: pk_overhead.sql