
Feed aggregator

ANSI expansion

Jonathan Lewis - Fri, 2015-03-27 04:46

Here’s a quirky little bug that appeared on the OTN database forum in the last 24 hours, which (in 12c, at least) produces an issue that I can best demonstrate with the following cut-n-paste:


SQL> desc purple
 Name                                Null?    Type
 ----------------------------------- -------- ------------------------
 G_COLUMN_001                        NOT NULL NUMBER(9)
 P_COLUMN_002                                 VARCHAR2(2)

SQL> select p.*
  2  from GREEN g
  3    join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
  4    join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001;
  join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001
       *
ERROR at line 4:
ORA-01792: maximum number of columns in a table or view is 1000

SQL> select p.g_column_001, p.p_column_002
  2  from GREEN g
  3    join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
  4    join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001;

no rows selected

A query that requires “star-expansion” fails with ORA-01792, but if you explicitly expand the ‘p.*’ to list all the columns it represents, the optimizer is happy. (The posting also showed the same difference in behaviour when changing “select constant from {table join}” to “select (select constant from dual) from {table join}”.)

The person who highlighted the problem supplied code to generate the tables so you can repeat the tests very easily; one of the quick checks I did was to modify the code to produce tables with a much smaller number of columns and then expand the SQL to see what Oracle would have done with the ANSI syntax. So, with only 3 columns each in tables RED and GREEN, this is what I did:

set serveroutput on
set long 20000

variable m_sql_out clob

declare
    m_sql_in    clob :=
                        '
                        select p.*
                        from GREEN g
                        join RED r on g.G_COLUMN_001 = r.G_COLUMN_001
                        join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001
                        ';
begin

    dbms_utility.expand_sql_text(
        m_sql_in,
        :m_sql_out
    );

end;
/

column m_sql_out wrap word
print m_sql_out

The dbms_utility.expand_sql_text() function is new to 12c, and you’ll need the execute privilege on the dbms_utility package to use it; but if you want to take advantage of it in 11g you can also find it (undocumented) in a package called dbms_sql2.
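For example, an 11g version of the same call might look like the sketch below – dbms_sql2 is undocumented, so treat the argument order as an assumption and check the package spec on your own system before relying on it:


variable m_sql_out clob

begin
        -- same two (in, out) arguments, passed positionally as in the 12c call above
        dbms_sql2.expand_sql_text(
                'select p.* from GREEN g join RED r on g.G_COLUMN_001 = r.G_COLUMN_001 join PURPLE p on g.G_COLUMN_001 = p.G_COLUMN_001',
                :m_sql_out
        );
end;
/

print m_sql_out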

Here’s the result of the expansion (you can see why I reduced the column count to 3):


M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."G_COLUMN_001_6" "G_COLUMN_001","A1"."P_COLUMN_002_7" "P_COLUMN_002"
FROM  (SELECT "A3"."G_COLUMN_001_0" "G_COLUMN_001","A3"."G_COLUMN_002_1"
"G_COLUMN_002","A3"."G_COLUMN_003_2" "G_COLUMN_003","A3"."G_COLUMN_001_3"
"G_COLUMN_001","A3"."R_COLUMN__002_4" "R_COLUMN__002","A3"."R_COLUMN__003_5"
"R_COLUMN__003","A2"."G_COLUMN_001" "G_COLUMN_001_6","A2"."P_COLUMN_002"
"P_COLUMN_002_7" FROM  (SELECT "A5"."G_COLUMN_001"
"G_COLUMN_001_0","A5"."G_COLUMN_002" "G_COLUMN_002_1","A5"."G_COLUMN_003"
"G_COLUMN_003_2","A4"."G_COLUMN_001" "G_COLUMN_001_3","A4"."R_COLUMN__002"
"R_COLUMN__002_4","A4"."R_COLUMN__003" "R_COLUMN__003_5" FROM
"TEST_USER"."GREEN" "A5","TEST_USER"."RED" "A4" WHERE
"A5"."G_COLUMN_001"="A4"."G_COLUMN_001") "A3","TEST_USER"."PURPLE" "A2" WHERE
"A3"."G_COLUMN_001_0"="A2"."G_COLUMN_001") "A1"

Tidying this up:


SELECT
        A1.G_COLUMN_001_6 G_COLUMN_001,
        A1.P_COLUMN_002_7 P_COLUMN_002
FROM    (
        SELECT
                A3.G_COLUMN_001_0 G_COLUMN_001,
                A3.G_COLUMN_002_1 G_COLUMN_002,
                A3.G_COLUMN_003_2 G_COLUMN_003,
                A3.G_COLUMN_001_3 G_COLUMN_001,
                A3.R_COLUMN__002_4 R_COLUMN__002,
                A3.R_COLUMN__003_5 R_COLUMN__003,
                A2.G_COLUMN_001 G_COLUMN_001_6,
                A2.P_COLUMN_002 P_COLUMN_002_7
        FROM    (
                SELECT
                        A5.G_COLUMN_001 G_COLUMN_001_0,
                        A5.G_COLUMN_002 G_COLUMN_002_1,
                        A5.G_COLUMN_003 G_COLUMN_003_2,
                        A4.G_COLUMN_001 G_COLUMN_001_3,
                        A4.R_COLUMN__002 R_COLUMN__002_4,
                        A4.R_COLUMN__003 R_COLUMN__003_5
                FROM
                        TEST_USER.GREEN A5,
                        TEST_USER.RED A4
                WHERE
                        A5.G_COLUMN_001=A4.G_COLUMN_001
                ) A3,
                TEST_USER.PURPLE A2
        WHERE
                A3.G_COLUMN_001_0=A2.G_COLUMN_001
        ) A1

As you can now see, the A1 alias lists all the columns in GREEN, plus all the columns in RED, plus all the columns in PURPLE – totalling 3 + 3 + 2 = 8. (There is a little pattern of aliasing and re-aliasing that turns the join column RED.g_column_001 into G_COLUMN_001_3, making it look at first glance as if it has come from the GREEN table).

You can run a few more checks, increasing the number of columns in the RED and GREEN tables, but essentially when the total number of columns in those two tables goes over 998 then adding the two extra columns from PURPLE makes that intermediate inline view break the 1,000 column rule.
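If you want to see how close you are to that limit, a quick count against the data dictionary is enough (a sketch, assuming the three tables live in your own schema):


select table_name, count(*) column_count
from   user_tab_columns
where  table_name in ('GREEN', 'RED', 'PURPLE')
group by table_name
order by table_name
;

When the GREEN and RED counts add up to more than 998, the two extra PURPLE columns push the intermediate inline view past the 1,000 column limit.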

Here’s the equivalent expanded SQL if you identify the columns explicitly in the select list (even with several hundred columns in the RED and GREEN tables):


SELECT
        A1.G_COLUMN_001_2 G_COLUMN_001,
        A1.P_COLUMN_002_3 P_COLUMN_002
FROM    (
        SELECT
                A3.G_COLUMN_001_0 G_COLUMN_001,
                A3.G_COLUMN_001_1 G_COLUMN_001,
                A2.G_COLUMN_001 G_COLUMN_001_2,
                A2.P_COLUMN_002 P_COLUMN_002_3
        FROM    (
                SELECT
                        A5.G_COLUMN_001 G_COLUMN_001_0,
                        A4.G_COLUMN_001 G_COLUMN_001_1
                FROM
                        TEST_USER.GREEN A5,
                        TEST_USER.RED A4
                WHERE
                        A5.G_COLUMN_001=A4.G_COLUMN_001
                ) A3,
                TEST_USER.PURPLE A2
        WHERE
                A3.G_COLUMN_001_0=A2.G_COLUMN_001
        ) A1

As you can see, the critical inline view now holds only the original join columns and the columns required for the select list.

If you’re wondering whether this difference in expansion could affect execution plans, it doesn’t seem to; the 10053 trace file includes the following (cosmetically altered) output:


Final query after transformations:******* UNPARSED QUERY IS *******
SELECT
        P.G_COLUMN_001 G_COLUMN_001,
        P.P_COLUMN_002 P_COLUMN_002
FROM
        TEST_USER.GREEN   G,
        TEST_USER.RED     R,
        TEST_USER.PURPLE  P
WHERE
        G.G_COLUMN_001=P.G_COLUMN_001
AND     G.G_COLUMN_001=R.G_COLUMN_001

So it looks as if the routine to transform the syntax puts in a lot of redundant text, then the optimizer takes it all out again.

The problem doesn’t exist with traditional Oracle syntax, by the way, it’s an artefact of Oracle’s expansion of the ANSI syntax, and 11.2.0.4 is quite happy to handle the text generated by the ANSI transformation when there are well over 1,000 columns in the inline view.
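For comparison, this is the obvious traditional-syntax rewrite of the failing query (my sketch, not code from the original posting) – with no generated inline view there is nothing to hit the 1,000 column limit:


select p.*
from
        GREEN   g,
        RED     r,
        PURPLE  p
where
        g.G_COLUMN_001 = r.G_COLUMN_001
and     g.G_COLUMN_001 = p.G_COLUMN_001
;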


Index on SUBSTR(string,1,n) - do you still need old index?

Yann Neuhaus - Fri, 2015-03-27 03:57

In a previous post I've shown that from 12.1.0.2, when you have an index on trunc(date), you don't need an additional index. If you need the column with full precision, then you can add it to the index on trunc(). A comment from Rainer Stenzel asked if that optimization is available for other functions. And Mohamed Houri has linked to his post where he shows that it's the same with a trunc() on a number.

Besides that, there is the same kind of optimization with SUBSTR(string,1,n), so here is the demo, with a little warning at the end.

I start with the same testcase as the previous post.

SQL> create table DEMO as select prod_id,prod_name,prod_eff_from +rownum/0.3 prod_date from sh.products,(select * from dual connect by level<=1000);
Table created.

SQL> create index PROD_NAME on DEMO(prod_name);
Index created.

SQL> create index PROD_DATE on DEMO(prod_date);
Index created.

string>Z

I've an index on PROD_NAME and I can use it with equality or inequality predicates:

SQL> set autotrace on explain
SQL> select distinct prod_name from DEMO where prod_name > 'Z';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 72593368

--------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes |
--------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX RANGE SCAN | PROD_NAME |     1 |    27 |
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROD_NAME">'Z')

And I can also use it with a LIKE when there is no leading wildcard:

SQL> select distinct prod_name from DEMO where prod_name like 'Z%';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 72593368

--------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes |
--------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX RANGE SCAN | PROD_NAME |     1 |    27 |
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PROD_NAME" LIKE 'Z%')
       filter("PROD_NAME" LIKE 'Z%')

That optimization has been available for several releases (9.2 if I remember correctly, but I didn't check).

substr(string,1,n)

But sometimes, when we want to check if a column starts with a string, the application uses SUBSTR instead of LIKE:

SQL> select distinct prod_name from DEMO where substr(prod_name,1,1) = 'Z';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 1665545956

--------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes |
--------------------------------------------------------
|   0 | SELECT STATEMENT   |           |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT|           |     1 |    27 |
|*  2 |   INDEX FULL SCAN  | PROD_NAME |     1 |    27 |
--------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SUBSTR("PROD_NAME",1,1)='Z')

But - as you see - there is no access predicate here. The whole index has to be read.

Of course, I can use a function based index for that:

SQL> create index PROD_NAME_SUBSTR on DEMO( substr(prod_name,1,1) );
Index created.

SQL> select distinct prod_name from DEMO where substr(prod_name,1,1) = 'Z';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 4209586087

-------------------------------------------------------------------------
| Id  | Operation                    | Name             | Rows  | Bytes |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |     1 |    31 |
|   1 |  HASH UNIQUE                 |                  |     1 |    31 |
|   2 |   TABLE ACCESS BY INDEX ROWID| DEMO             |     1 |    31 |
|*  3 |    INDEX RANGE SCAN          | PROD_NAME_SUBSTR |     1 |       |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

One index only?

Then, as in the previous post about TRUNC, I'll check whether that new index is sufficient. Let's drop the first one.

SQL> drop index PROD_NAME;
Index dropped.

The previous index is dropped. Let's see if the index on SUBSTR can be used with an equality predicate:

SQL> select distinct prod_name from DEMO where prod_name = 'Zero';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 953445334

-------------------------------------------------------------------------
| Id  | Operation                    | Name             | Rows  | Bytes |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                  |     1 |    27 |
|   1 |  SORT UNIQUE NOSORT          |                  |     1 |    27 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| DEMO             |     1 |    27 |
|*  3 |    INDEX RANGE SCAN          | PROD_NAME_SUBSTR |     1 |       |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PROD_NAME"='Zero')
   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

Good. The index on substring is used for index range scan on the prefix, and then the filter occurs on the result. This is fine as long as the prefix is selective enough.
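If you want to check that selectivity before relying on the index, a simple aggregate gives you a quick picture (a sketch against the same DEMO table):

SQL> select substr(prod_name,1,1) prefix, count(*) cnt
  2  from DEMO
  3  group by substr(prod_name,1,1)
  4  order by cnt desc;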

It is also available with an inequality predicate:

SQL> select distinct prod_name from DEMO where prod_name > 'Z';
no rows selected

...

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PROD_NAME">'Z')
   3 - access(SUBSTR("PROD_NAME",1,1)>='Z')

And we can use it even with a SUBSTR on a different number of characters:

SQL> select distinct prod_name from DEMO where substr(prod_name,1,4) = 'Zero';
no rows selected

...

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SUBSTR("PROD_NAME",1,4)='Zero')
   3 - access(SUBSTR("PROD_NAME",1,1)='Z')

However, if we use the LIKE syntax:

SQL> select distinct prod_name from DEMO where prod_name like 'Z%';
no rows selected

Execution Plan
----------------------------------------------------------
Plan hash value: 51067428

---------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |
---------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    27 |
|   1 |  HASH UNIQUE       |      |     1 |    27 |
|*  2 |   TABLE ACCESS FULL| DEMO |     1 |    27 |
---------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("PROD_NAME" LIKE 'Z%')

The LIKE syntax cannot make use of the index on SUBSTR. So there are cases where we have to keep both kinds of indexes: an index on the full column for LIKE predicates, and an index on the substring for SUBSTR predicates.
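One possible workaround – a sketch I haven't tested against this demo – is to add a redundant SUBSTR predicate to the LIKE query, so that the index on SUBSTR can supply the range scan and the LIKE is applied as a filter on the rows it returns:

SQL> select distinct prod_name from DEMO
  2  where substr(prod_name,1,1) = 'Z'
  3  and prod_name like 'Z%';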

Note that indexes on SUBSTR are mandatory when you have columns larger than your block size, which is probably the case if you allow extended datatypes (VARCHAR2 up to 32k).

ASH

Jonathan Lewis - Fri, 2015-03-27 03:41

There was a little conversation on Oracle-L about ASH (active session history) recently which I thought worth highlighting – partly because it raised a detail that I had got wrong until Tim Gorman corrected me a few years ago.

Once every second the dynamic performance view v$active_session_history copies information about active sessions from v$session. (There are a couple of exceptions to this rule – for example if a session has called dbms_lock.sleep() it will appear in v$session as state = ‘ACTIVE’, but it will not be recorded in v$active_session_history.) Each of these snapshots is referred to as a “sample” and may hold zero, one, or many rows.

The rows collected in every tenth sample are flagged for copying into the AWR where, once they’ve been copied into the underlying table, they can be seen in the view dba_hist_active_sess_history.  This is where a common misunderstanding occurs: it is not every 10th row in v$active_session_history, it’s every 10th second; and if a sample happens to be empty that’s still the sample that is selected (which means there will be a gap in the output from dba_hist_active_sess_history). In effect dba_hist_active_sess_history holds copies of the information you’d get from v$session if you sampled it once every 10 seconds instead of once per second.

It’s possible to corroborate this through a fairly simple query, as the rows from v$active_session_history that are going to be dumped to the AWR are flagged as they are created:


select
        distinct case is_awr_sample when 'Y' then 'Y' end flag,
        sample_id,
        sample_time
from
        v$active_session_history
where
        sample_time > sysdate - 1/1440
order by
        2,1
;

F  SAMPLE_ID SAMPLE_TIME
- ---------- --------------------------------
     3435324 26-MAR-15 05.52.53.562 PM
     3435325 26-MAR-15 05.52.54.562 PM
     3435326 26-MAR-15 05.52.55.562 PM
     3435327 26-MAR-15 05.52.56.562 PM
     3435328 26-MAR-15 05.52.57.562 PM
     3435329 26-MAR-15 05.52.58.562 PM
     3435330 26-MAR-15 05.52.59.562 PM
     3435331 26-MAR-15 05.53.00.562 PM
Y    3435332 26-MAR-15 05.53.01.562 PM
     3435333 26-MAR-15 05.53.02.572 PM
     3435334 26-MAR-15 05.53.03.572 PM
     3435335 26-MAR-15 05.53.04.572 PM
     3435336 26-MAR-15 05.53.05.572 PM
     3435337 26-MAR-15 05.53.06.572 PM
     3435338 26-MAR-15 05.53.07.572 PM
     3435339 26-MAR-15 05.53.08.572 PM
     3435340 26-MAR-15 05.53.09.572 PM
     3435341 26-MAR-15 05.53.10.582 PM
Y    3435342 26-MAR-15 05.53.11.582 PM
     3435343 26-MAR-15 05.53.12.582 PM
     3435344 26-MAR-15 05.53.13.582 PM
     3435345 26-MAR-15 05.53.14.582 PM
     3435346 26-MAR-15 05.53.15.582 PM
     3435347 26-MAR-15 05.53.16.582 PM
     3435348 26-MAR-15 05.53.17.582 PM
     3435349 26-MAR-15 05.53.18.592 PM
     3435350 26-MAR-15 05.53.19.592 PM
     3435351 26-MAR-15 05.53.20.592 PM
Y    3435352 26-MAR-15 05.53.21.602 PM
     3435355 26-MAR-15 05.53.24.602 PM
     3435358 26-MAR-15 05.53.27.612 PM
     3435361 26-MAR-15 05.53.30.622 PM
     3435367 26-MAR-15 05.53.36.660 PM
     3435370 26-MAR-15 05.53.39.670 PM
     3435371 26-MAR-15 05.53.40.670 PM
     3435373 26-MAR-15 05.53.42.670 PM
     3435380 26-MAR-15 05.53.49.700 PM
     3435381 26-MAR-15 05.53.50.700 PM
Y    3435382 26-MAR-15 05.53.51.700 PM
     3435383 26-MAR-15 05.53.52.700 PM

40 rows selected.

As you can see, at the beginning of the output the samples have a sample_time that increases one second at a time (with a little slippage), and the flagged samples appear every 10 seconds at 5.53.01, 5.53.11 and 5.53.21; but then the instance becomes fairly idle and there are several samples taken over the next 20 seconds or so where we don’t capture any active sessions; in particular there are no rows in the samples for 5.53.31 and 5.53.41; but eventually the instance gets a little busy again and we see that we’ve had active sessions in consecutive samples for the last few seconds, and we can see that we’ve flagged the sample at 5.53.51 for dumping into the AWR.

You’ll notice that I seem to be losing about 1/100th second every few seconds; this is probably a side effect of virtualisation and having a little CPU-intensive work going on in the background. If you see periods where the one second gap in v$active_session_history or 10 second gap in dba_hist_active_sess_history has been stretched by several percent you can assume that the CPU was under pressure over that period. The worst case I’ve seen to date reported gaps of 12 to 13 seconds in dba_hist_active_sess_history.  The “one second” algorithm is “one second since the last snapshot was captured” so if the process that’s doing the capture doesn’t get to the top of the runqueue in a timely fashion the snapshots slip a little.
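If you want to measure that slippage on your own system, a simple analytic query over the in-memory view shows the gap between consecutive samples (a sketch – adjust the time window to taste, and remember that samples with no active sessions simply don’t appear, so a large gap may just mean the instance was idle):


select
        sample_id,
        sample_time,
        sample_time - lag(sample_time) over (order by sample_id)  gap
from
        v$active_session_history
where
        sample_time > sysdate - 5/1440
order by
        sample_id
;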

When the AWR snapshot is taken, the flagged rows from v$active_session_history are copied to the relevant AWR table. You can adjust the frequency of sampling for both v$active_session_history and dba_hist_active_sess_history, of course – there are hidden parameters to control both: _ash_sampling_interval (1,000 milliseconds) and _ash_disk_filter_ratio (10). There’s also a parameter controlling how much memory should be reserved in the shared pool to hold v$active_session_history: _ash_size (1048618 bytes per session in my case).  The basic target is to keep one hour’s worth of data in memory, but if there’s no pressure for memory you can find that v$active_session_history holds more than the hour; conversely, if there’s heavy demand for memory and lots of continuously active sessions you may find that Oracle does “emergency flushes” of v$active_session_history between the normal AWR snapshots. I have heard of people temporarily increasing the memory and reducing the interval and ratio – but I haven’t yet felt the need to do it myself.
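If you want to see the current values of those hidden parameters on your own system, the usual (unsupported) x$ query run as SYS will do it – a sketch:


select
        ksppinm         name,
        ksppstvl        value
from
        x$ksppi         i,
        x$ksppcv        v
where
        i.indx = v.indx
and     ksppinm like '\_ash\_%' escape '\'
order by
        ksppinm
;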

 


Oracle Priority Support Infogram for 26-MAR-2015

Oracle Infogram - Thu, 2015-03-26 15:12

Cloud
A Glance at Smartwatches in the Enterprise: A Moment in Time Experience, from the Usable Apps in the Cloud Blog.
Solaris
From Solaris 11 Maintenance Lifecycle: iSCSI improvements
OVM
Oracle VM Server for SPARC 3.2 - Live Migration, from Virtually All the Time.
NetBeans
From Geertjan’s Blog: Mobile Boilerplate and NetBeans IDE
Fusion
Finding "End of Support" Information for Fusion Middleware Applications, from Proactive Support - Java Development using Oracle Tools.
Live Virtual Training: Fast Track: Oracle Fusion Project Portfolio Management 2014 for PreSales, from Oracle PartnerNetwork News.
WebLogic
Oracle WebLogic Server Now Running on Docker Containers, from The WebLogic Server Blog.
BI
From the Oracle BI applications blog: How to Invoke Oracle Enterprise Data Quality(OEDQ) using ODI Client.
Hyperion
Hyperion Essbase Family 11.1.2.4 PSU post updated, from the Business Analytics - Proactive Support.
BPM
Video: Error Handling and Recovery in Oracle BPM12c, from ArchBeat.
Data Warehousing
From the Data Warehouse Insider:
Finding the Distribution Method in Adaptive Parallel Joins
Finding the Reason for DOP Downgrades
Security
86% of Data Breaches Miss Detection, How Do You Beat The Odds? from Security Inside Out.
RFID
IoB - Internet of Bees, from Hinkmond Wong's Weblog.
EBS
From the Oracle E-Business Suite Technology blog:
Oracle Database 12.1.0.2 Certified with EBS 12.1 on HP-UX Itanium, IBM AIX
From CN Support NEWS:

E-Business Suite 12.1 Premier Support Now Through Dec. 2016 and 11.5.10 Sustaining Support Exception Updates

Configuring Oracle #GoldenGate Monitor Agent

DBASolved - Thu, 2015-03-26 14:06

In a few weeks I’ll be talking about monitoring Oracle GoldenGate using Oracle Enterprise Manager 12c at IOUG Collaborate in Las Vegas.  This is one of the few presentations I will be giving that week (going to be a busy week).  Although this posting kinda mirrors a previous post on how to configure the Oracle GoldenGate JAgent, it is relevant because:

1. Oracle changed the name of the JAgent to Oracle Monitor Agent
2. Steps are a bit different with this configuration

Most people running Oracle GoldenGate who want to monitor the processes with EM12c will try to use the embedded JAgent.  This JAgent will work with the OGG Plug-in 12.1.0.1.  To get many of the new features and use the new plug-in (12.1.0.2), the new Oracle Monitor Agent (12.1.3.0) needs to be downloaded and installed.  Finding the binaries for this is not that easy though.  In order to get the binaries, download Oracle GoldenGate Monitor v12.1.3.0.0 from OTN.oracle.com.

Once downloaded, unzip the file to a temp location:

$ unzip ./fmw_12.1.3.0.0_ogg_Disk1_1of1.zip -d ./oggmonitor
Archive: ./fmw_12.1.3.0.0_ogg_Disk1_1of1.zip
 inflating: ./oggmonitor/fmw_12.1.3.0.0_ogg.jar

In order to install the agent, you need to have Java 1.8 installed somewhere it can be used.  The 12.1.3.0.0 software is built using JDK 1.8.

$ ./java -jar ../../ggmonitor/fmw_12.1.3.0.0_ogg.jar

After executing the command, the OUI installer will start.  As you walk through the OUI, when the select page comes up; select the option to only install the Oracle GoldenGate Monitor Agent.


Then proceed through the rest of the OUI and complete the installation.

After the installation is complete, then the JAgent needs to be configured.  In order to do this, navigate to the directory where the binaries were installed.

$ cd /u01/app/oracle/product/jagent/oggmon/ogg_agent

In this directory, look for a file called create_ogg_agent_instance.sh.  This file has to be run first to create the JAgent that will be associated with Oracle GoldenGate. In order to run this script, the $JAVA_HOME variable needs to point to the JDK 1.8 location as well.  Inputs that will need to be provided are the Oracle GoldenGate home and where to install the JAgent (this is different from where the OUI installed the software).

$ ./create_ogg_agent_instance.sh
Please enter absolute path of Oracle GoldenGate home directory : /u01/app/oracle/product/12.1.2.0/12c/oggcore_1
Please enter absolute path of OGG Agent instance : /u01/app/oracle/product/12.1.3.0/jagent
Sucessfully created OGG Agent instance.

Next, go to the directory for the OGG Agent Instance (JAgent), then to the configuration (cfg) directory.  In this directory, the Config.properties file needs to be edited.  Just like with the old embedded JAgent, the same variables have to be changed.

$ cd /u01/app/oracle/product/12.1.3.0/jagent
$ cd ./cfg
$ vi ./Config.properties

Change the following or keep the defaults, then save the file:

jagent.host=fred.acme.com (default is localhost)
jagent.jmx.port=5555 (default is 5555)
jagent.username=root (default oggmajmxuser)
jagent.rmi.port=5559 (default is 5559)
agent.type.enabled=OEM (default is OGGMON)

Then create the password that will be stored in the wallet directory under $OGG_HOME.  

cd /u01/app/oracle/product/12.1.3.0/jagent
$ cd ./bin
$ ./pw_agent_util.sh -jagentonly
Please create a password for Java Agent:
Please confirm password for Java Agent:
Mar 26, 2015 3:18:46 PM oracle.security.jps.JpsStartup start
INFO: Jps initializing.
Mar 26, 2015 3:18:47 PM oracle.security.jps.JpsStartup start
INFO: Jps started.
Wallet is created successfully.

Now, enable monitoring in the GLOBALS file in $OGG_HOME.

$ cd /u01/app/oracle/product/12.1.2.0/12c/oggcore_1
$ vi ./GLOBALS
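(The screenshot of the GLOBALS edit isn't reproduced here; the entry that turns monitoring on is the ENABLEMONITORING parameter, so after the edit the file should contain a line like the one below – a sketch, since your GLOBALS may already contain other entries.)

ENABLEMONITORING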


After enabling monitoring, the JAgent should appear when doing an info all inside of GGSCI.


Before starting the JAgent, create a datastore.  What I’ve found works is to delete the datastore, restart GGSCI and create a new one. 

$ ./ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (fred.acme.com)> info all
Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
JAGENT      STOPPED

GGSCI (fred.acme.com)> stop mgr!
Sending STOP request to MANAGER ...
Request processed.
Manager stopped.

GGSCI (fred.acme.com)> delete datastore
Are you sure you want to delete the datastore? yes
Datastore deleted.
GGSCI (fred.acme.com)> exit

$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.1.2.1.0 OGGCORE_12.1.2.1.0_PLATFORMS_140727.2135.1_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Aug  7 2014 10:21:34
Operating system character set identified as UTF-8.
Copyright (C) 1995, 2014, Oracle and/or its affiliates. All rights reserved.

GGSCI (fred.acme.com)> create datastore
Datastore created.

GGSCI (fred.acme.com)> start mgr
Manager started.

GGSCI (fred.acme.com)> start jagent
Sending START request to MANAGER ...
JAGENT starting

GGSCI (fred.acme.com)> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt
MANAGER     RUNNING
JAGENT      RUNNING

With the JAgent running, now configure Oracle Enterprise Manager 12c to use the JAgent.

Note: In order to monitor Oracle GoldenGate with Oracle Enterprise Manager 12c, you need to deploy the Oracle GoldenGate Plug-in (12.1.0.2).

To configure discovery of the Oracle GoldenGate process, go to Setup -> Add Target -> Configure Auto Discovery

Select the Host where the JAgent is running.

Ensure the Discovery Module for GoldenGate Discovery is enabled and then click Edit Parameters to provide the username and RMI port specified in the Config.properties file.  Also provide the password that was set up in the wallet. Then click OK.

At this point, force a discovery of any new targets that need to be monitored by using the Discover Now button.

If the discovery was successful, the Oracle GoldenGate Manager process should be visible and can be promoted for monitoring.

After promoting the Oracle GoldenGate processes, they can then be seen in the Oracle GoldenGate Interface within Oracle Enterprise Manager 12c (Target -> GoldenGate).

At this point, Oracle GoldenGate is being monitored by Oracle Enterprise Manager 12c.  The new plug-in for Oracle GoldenGate is way better than the previous one; however, there still are a few things that could be better.  More on that later.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

AOUG - Real World Performance Tour

Yann Neuhaus - Thu, 2015-03-26 13:26

This week, Tom Kyte, Graham Wood and Andrew Holdsworth were present in Europe for several dates. One of the events was organised by the Austrian Oracle User Group (AOUG) in collaboration with the German and Swiss user groups (DOAG and SOUG), and I had the chance to be there to attend one session of the Real World Performance Tour in Vienna.

Oracle Database 12c In-Memory Q&A Webinar

Pythian Group - Thu, 2015-03-26 09:21

Today I will be debating Oracle 12c’s In-Memory option with Maria Colgan of Oracle (aka optimizer lady, now In-Memory lady).

This will be in a debate form with lots of Q&A from the audience. Come ask the questions you always wanted to ask.

Link to register and attend:
https://attendee.gotowebinar.com/register/7874819190629618178

Starts at 12:00pm EDT.

Categories: DBA Blogs

12c MView refresh

Jonathan Lewis - Thu, 2015-03-26 07:19

Some time ago I wrote a blog note describing a hack for refreshing a large materialized view with minimum overhead by taking advantage of a single-partition partitioned table. This note describes how Oracle 12c now gives you an official way of doing something similar – the “out of place” refresh.

I’ll start by creating a materialized view and creating a couple of indexes on the resulting underlying table; then show you three different calls to refresh the view. The materialized view is based on all_objects so it can’t be made available for query rewrite (ORA-30354: Query rewrite not allowed on SYS relations), and I haven’t created any materialized view logs so there’s no question of fast refreshes – but all I intend to do here is show you the relative impact of a complete refresh.


create materialized view mv_objects nologging
build immediate
refresh on demand
as
select
        *
from
        all_objects
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'mv_objects',
		method_opt 	 => 'for all columns size 1'
	);
end;
/

create index mv_obj_i1 on mv_objects(object_name) nologging compress;
create index mv_obj_i2 on mv_objects(object_type, owner, data_object_id) nologging compress 2;

This was a default install of 12c, so there were about 85,000 rows in the view. You’ll notice that I’ve created all the objects as “nologging” – this will have an effect on the work done during some of the refreshes.

Here are the three variants I used – all declared explicitly as complete refreshes:


begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> true
	);
end;
/

begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> false
	);
end;
/

begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> false,
		out_of_place		=> true
	);
end;
/

The first one (atomic_refresh=>true) is the one you have to use if you want to refresh several materialized views simultaneously and keep them self consistent, or if you want to ensure that the data doesn’t temporarily disappear if all you’re worried about is a single view. The refresh works by deleting all the rows from the materialized view then executing the definition to generate and insert the replacement rows before committing. This generates a lot of undo and redo – especially if you have indexes on the materialized view as these have to be maintained “row by row” and may leave users accessing and applying a lot of undo for read-consistency purposes. An example at a recent client site refreshed a table of 6.5M rows with two indexes, taking about 10 minutes to refresh, generating 7GB of redo as it ran, and performing 350,000 “physical reads for flashback new”. This strategy does not take advantage of the nologging nature of the objects – and as a side effect of the delete/insert cycle you’re likely to see the indexes grow to roughly twice their optimal size and you may see the statistic “recursive aborts on index block reclamation” climbing as the indexes are maintained.

The second option (atomic_refresh => false) is quick and efficient – but may result in wrong results showing up in any code that references the materialized view (whether explicitly or by rewrite). The session truncates the underlying table, sets any indexes on it unusable, then reloads the table with an insert /*+ append */. The append means you get virtually no undo generated, and if the table is declared nologging you get virtually no redo. In my case, the session then dispatched two jobs to rebuild the two indexes – and since the indexes were declared nologging the rebuilds generated virtually no redo. (I could have declared them with pctfree 0, which would also have made them as small as possible).

The final option is the 12c variant – the setting atomic_refresh => false is mandatory if we want  out_of_place => true. With these settings the session will create a new table with a name of the form RV$xxxxxx where xxxxxx is the hexadecimal version of the new object id, insert the new data into that table (though not using the /*+ append */ hint), create the indexes on that table (again with names like RV$xxxxxx – where xxxxxx is the index’s object_id). Once the new data has been indexed Oracle will do some name-switching in the data dictionary (shades of exchange partition) to make the new version of the materialized view visible. A quirky detail of the process is that the initial create of the new table and the final drop of the old table don’t show up in the trace file  [Ed: wrong, see comment #1] although the commands to drop and create indexes do appear. (The original table, though it’s dropped after the name switching, is not purged from the recyclebin.) The impact on undo and redo generation is significant – because the table is empty and has no indexes when the insert takes place the insert creates a lot less undo and redo than it would if the table had been emptied by a bulk delete – even though the insert is a normal insert and not an append; then the index creation honours my nologging definition, so produces very little redo. At the client site above, the redo generated dropped from 7GB to 200MB, and the time dropped to 200 seconds which was 99% CPU time.
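The note about the recyclebin is easy to confirm for yourself – after an out-of-place refresh a quick look at user_recyclebin (an illustrative check, not part of the original test) should show the previous copy of the container table waiting to be purged:


select original_name, object_name, type, droptime
from   user_recyclebin
order by droptime;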

Limitations, traps, and opportunities

The manuals say that the out of place refresh can only be used for materialized views that are joins or aggregates and, surprisingly, you actually can’t use the method on a view that simply extracts a subset of rows and columns from a single table.  There’s a simple workaround, though – join the table to DUAL (or some other single row table if you want to enable query rewrite).
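Here’s a minimal sketch of that workaround applied to the example above (my illustration of the suggestion, not code from the original note) – the cartesian join to the single-row DUAL doesn’t change the result set but turns the view into a “join” as far as the restriction is concerned:


create materialized view mv_objects nologging
build immediate
refresh on demand
as
select
        o.*
from
        all_objects     o,
        dual            d
;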

Because the out of place refresh does an ordinary insert into a new table the resulting table will have no statistics – you’ll have to add a call to gather them. (If you’ve previously been using non-atomic refreshes this won’t be a new problem, of course.) The indexes will have up to date statistics, of course, because they will have been created after the table insert.
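Following the pattern already used at the start of this note, the refresh call can simply be followed by a call to gather stats (a sketch):


begin
	dbms_mview.refresh(
		list			=> 'MV_OBJECTS',
		method			=> 'C',
		atomic_refresh		=> false,
		out_of_place		=> true
	);

	dbms_stats.gather_table_stats(
		ownname		=> user,
		tabname		=> 'MV_OBJECTS',
		method_opt	=> 'for all columns size 1'
	);
end;
/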

The big opportunity, of course, is to change a very expensive atomic refresh into a much cheaper out of place refresh – in some special cases. My client had to use the atomic_refresh=>true option in 11g because they couldn’t afford to leave the table truncated (empty) for the few minutes it took to rebuild; but they might be okay using the out_of_place => true with atomic_refresh=>false in 12c because:

  • the period when something might break is brief
  • if something does go wrong the users won’t get wrong (silently missing) results, they’ll get an Oracle error (probably ORA-08103: object no longer exists)
  • the application uses this particular materialized view directly (i.e. not through query rewrite), and the query plans are all quick, light-weight indexed access paths
  • most queries will probably run correctly even if they run through the moment of exchange

I don’t think we could guarantee that last statement – and Oracle Corp. may not officially confirm it, and no matter how many times I show queries succeeding that isn’t a proof – but it’s true. Thanks to “cross-DDL read-consistency” as it was called in 8i when partition-exchange appeared, and because the old objects still exist in the data files, provided your query doesn’t hit a block that has been overwritten by a new object, or request a space management block that was zero-ed out on the “drop”, a running query can keep on using the old location for an object after it has been replaced by a newer version. If you want to make the mechanism as safe as possible you can help – put each relevant materialized view (along with its indexes) into its own tablespace so that the only thing that is going to overwrite an earlier version of the view is the stuff you create on the next refresh.

 


IBM Bluemix demo using IBM Watson Tradeoff Analytics Service

Pas Apicella - Thu, 2015-03-26 03:57
The IBM Watson Tradeoff Analytics service helps you make better choices under multiple conflicting goals. The service combines smart visualization and recommendations for tradeoff exploration.

The following demo application shows how to use the IBM Watson Tradeoff Analytics Service from IBM Bluemix. This is the demo application for this service.

1. Clone the GitHub project as shown below.

pas@pass-mbp:~/bluemix-apps/watson$ git clone https://github.com/watson-developer-cloud/tradeoff-analytics-nodejs.git
Cloning into 'tradeoff-analytics-nodejs'...
remote: Counting objects: 112, done.
remote: Total 112 (delta 0), reused 0 (delta 0), pack-reused 112
Receiving objects: 100% (112/112), 163.05 KiB | 11.00 KiB/s, done.
Resolving deltas: 100% (38/38), done.
Checking connectivity... done.

2. Create the Tradeoff Analytics service as shown below.

pas@pass-mbp:~/bluemix-apps/watson$ cf create-service tradeoff_analytics free tradeoff-analytics-service
Creating service tradeoff-analytics-service in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

3. Create a manifest.yml as shown below, ensuring you edit the application name to be unique.

declared-services:
  tradeoff-analytics-service:
    label: tradeoff_analytics
    plan: free
applications:
- services:
  - tradeoff-analytics-service
  name: pas-tradeoff-analytics-nodejs
  command: node app.js
  path: .
  memory: 128M

4. Push the application into Bluemix as follows

pas@pass-mbp:~/bluemix-apps/watson/tradeoff-analytics-nodejs$ cf push
Using manifest file /Users/pas/ibm/bluemix/apps/watson/tradeoff-analytics-nodejs/manifest.yml

Creating app pas-tradeoff-analytics-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Creating route pas-tradeoff-analytics-nodejs.mybluemix.net...
OK

Binding pas-tradeoff-analytics-nodejs.mybluemix.net to pas-tradeoff-analytics-nodejs...
OK

Uploading pas-tradeoff-analytics-nodejs...
Uploading app files from: /Users/pas/ibm/bluemix/apps/watson/tradeoff-analytics-nodejs
Uploading 204K, 45 files
Done uploading
OK
Binding service tradeoff-analytics-service to app pas-tradeoff-analytics-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Starting app pas-tradeoff-analytics-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
-----> Downloaded app package (156K)
-----> Node.js Buildpack Version: v1.14-20150309-1555
       TIP: Avoid using semver ranges starting with '>' in engines.node
-----> Requested node range:  >=0.10
-----> Resolved node version: 0.10.36
-----> Installing IBM SDK for Node.js from cache
-----> Checking and configuring service extensions
-----> Installing dependencies
       body-parser@1.10.2 node_modules/body-parser
       ├── media-typer@0.3.0
       ├── bytes@1.0.0
       ├── raw-body@1.3.2
       ├── depd@1.0.0
       ├── qs@2.3.3
       ├── on-finished@2.2.0 (ee-first@1.1.0)
       ├── iconv-lite@0.4.6
       └── type-is@1.5.7 (mime-types@2.0.10)
       express@4.12.3 node_modules/express
       ├── merge-descriptors@1.0.0
       ├── escape-html@1.0.1
       ├── utils-merge@1.0.0
       ├── cookie-signature@1.0.6
       ├── methods@1.1.1
       ├── fresh@0.2.4
       ├── cookie@0.1.2
       ├── range-parser@1.0.2
       ├── finalhandler@0.3.4
       ├── content-type@1.0.1
       ├── vary@1.0.0
       ├── parseurl@1.3.0
       ├── serve-static@1.9.2
       ├── content-disposition@0.5.0
       ├── path-to-regexp@0.1.3
       ├── depd@1.0.0
       ├── on-finished@2.2.0 (ee-first@1.1.0)
       ├── qs@2.4.1
       ├── debug@2.1.3 (ms@0.7.0)
       ├── etag@1.5.1 (crc@3.2.1)
       ├── send@0.12.2 (destroy@1.0.3, ms@0.7.0, mime@1.3.4)
       ├── proxy-addr@1.0.7 (forwarded@0.1.0, ipaddr.js@0.1.9)
       ├── accepts@1.2.5 (negotiator@0.5.1, mime-types@2.0.10)
       └── type-is@1.6.1 (media-typer@0.3.0, mime-types@2.0.10)
       errorhandler@1.3.5 node_modules/errorhandler
       ├── escape-html@1.0.1
       └── accepts@1.2.5 (negotiator@0.5.1, mime-types@2.0.10)
       request@2.53.0 node_modules/request
       ├── caseless@0.9.0
       ├── json-stringify-safe@5.0.0
       ├── forever-agent@0.5.2
       ├── aws-sign2@0.5.0
       ├── stringstream@0.0.4
       ├── oauth-sign@0.6.0
       ├── tunnel-agent@0.4.0
       ├── isstream@0.1.2
       ├── node-uuid@1.4.3
       ├── combined-stream@0.0.7 (delayed-stream@0.0.5)
       ├── qs@2.3.3
       ├── form-data@0.2.0 (async@0.9.0)
       ├── mime-types@2.0.10 (mime-db@1.8.0)
       ├── http-signature@0.10.1 (assert-plus@0.1.5, asn1@0.1.11, ctype@0.5.3)
       ├── bl@0.9.4 (readable-stream@1.0.33)
       ├── tough-cookie@0.12.1 (punycode@1.3.2)
       └── hawk@2.3.1 (cryptiles@2.0.4, sntp@1.0.9, boom@2.6.1, hoek@2.12.0)
       jade@1.9.2 node_modules/jade
       ├── character-parser@1.2.1
       ├── void-elements@2.0.1
       ├── commander@2.6.0
       ├── mkdirp@0.5.0 (minimist@0.0.8)
       ├── with@4.0.1 (acorn-globals@1.0.2, acorn@0.11.0)
       ├── transformers@2.1.0 (promise@2.0.0, css@1.0.8, uglify-js@2.2.5)
       └── constantinople@3.0.1 (acorn-globals@1.0.2)
       watson-developer-cloud@0.9.6 node_modules/watson-developer-cloud
       ├── object.pick@1.1.1
       ├── cookie@0.1.2
       ├── extend@2.0.0
       ├── isstream@0.1.2
       ├── async@0.9.0
       ├── string-template@0.2.0 (js-string-escape@1.0.0)
       └── object.omit@0.2.1 (isobject@0.2.0, for-own@0.1.3)
-----> Caching node_modules directory for future builds
-----> Cleaning up node-gyp and npm artifacts
-----> No Procfile found; Adding npm start to new Procfile
-----> Building runtime environment
-----> Checking and configuring service extensions
-----> Installing App Management
-----> Node.js Buildpack is done creating the droplet

-----> Uploading droplet (14M)

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-tradeoff-analytics-nodejs was started using this command `node app.js`

Showing health and status for app pas-tradeoff-analytics-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 128M x 1 instances
urls: pas-tradeoff-analytics-nodejs.mybluemix.net
last uploaded: Thu Mar 26 09:44:55 +0000 2015

     state     since                    cpu    memory          disk          details
#0   running   2015-03-26 08:45:51 PM   0.0%   43.5M of 128M   50.3M of 1G

5. Access the application


6. Click on the "Analyze Sample Data" button.


The demo can be found at the GitHub link below.

https://github.com/watson-developer-cloud/tradeoff-analytics-nodejs
Categories: Fusion Middleware

Is Your Shellshocked Poodle Freaked Over Heartbleed?

Mary Ann Davidson - Wed, 2015-03-25 16:43



Security weenies will understand that the above title is not as nonsensical as it appears. Would that it were mere nonsense. Instead, I suspect more than a few will read the title and their heads will throb, either because the readers hit themselves in the head, accompanied by the multicultural equivalents of “oy vey” (I’d go with “aloha ‘ino”), or because the above expression makes them reach for the most potent over- the-counter painkiller available.


For those who missed it, there was a sea change in security vulnerabilities reporting last year involving a number of mass panics around “named” vulnerabilities in commonly-used – and widely-used – embedded libraries. For example, the POODLE vulnerability (an acronym for Padding Oracle On Downgraded Legacy Encryption) affects SSL version 3.0, and many products and services using SSL version 3.0 use third party library implementations. The Shellshock vulnerabilities affect GNU bash, a program that multiple Unix-based systems use to execute command lines and command scripts. These vulnerabilities (and others) were widely publicized (the cutesie names helped) and resulted in a lot of scrambling to find, fix, and patch the vulnerabilities. The cumulative result of a number of named vulnerabilities last year in widely-used and deployed libraries I refer to as the Great Shellshocked Poodle With Heartbleed Security Awakening (GSPWHSA). It was a collective IT community eye opener as to:


  • The degree to which common third party components are embedded in many products and services
  • The degree to which vendors (and customers) did not know where-all these components actually were used, or what versions of them were used
  • And, to some degree (more on which below), the actual severity of these issues



A slight digression on how we got to a Shellshocked Poodle with Heartbleed. Way back in the olden days (when I started working at Oracle), the Internet hadn’t taken off yet, and there weren’t as many standard ways of doing things. The growth of the Internet led to the growth of standards (e.g., SSL, now superseded by TLS) so Stuff Would Work Together. The requirement for standards-based interoperability fostered the growth of common libraries (many of them open source), because everyone realized it was dumb to, say, build your own pipes when you could license someone else’s ready-made pipe libraries. Open source/third party libraries helped people build things faster that worked together, because everyone wasn’t building everything from scratch. None of these – standards, common libraries, open source – are bad things. They are (mostly) very good things that have fostered the innovation we now take for granted.



Historically, development organizations didn’t always keep careful track of where all the third party libraries were used, and didn’t necessarily upgrade them regularly. To some degree, the “not upgrade” was understandable – unless there is a compelling reason to move from Old Reliable to New and Improved (as in, they actually are improved and there is a benefit to using the new stuff), you might as well stick with Old and Reliable. Or so it seemed.


When security researchers began focusing on finding vulnerabilities in widely-used libraries, everyone got a rude awakening that their library of libraries (that is, listing of what components were used where) needed to be a whole lot better, because customers wanted to know very quickly the answer to “is the product or cloud service I am using vulnerable?” Moreover, many vendors and service providers realized that, like it or not, they needed to aggressively move to incorporate reasonably current (patched) versions of libraries because, if the third party component you embed is not supported for the life of the product or service you are embedding it in, you can’t get a security patch when you need one: in short, “you are screwed,” as we security experts say. I’ve remarked a lot recently, with some grumbling, that people don’t do themselves any favors by continuing to incorporate libraries older than the tablets of Moses (at least God is still supporting those).


Like all religious revivals, the GSPWHSA has thus resulted in a lot of people repenting of their sins: “Forgive me, release manager, for I have sinned, I have incorporated an out-of-support library in my code.” “Three Hail Marys and four version upgrades, my son…” Our code is collectively more holy now, we all hope, instead of continuing to be hole-y. (Yes, that was a vile pun.) This is a good thing.


The second aspect of the GSPWHSA is more disturbing, and that is, for lack of a better phrase, the “marketing of security vulnerabilities.” Anybody who knows anything about business knows how marketing can – and often intends to – amplify reality. Really, I am sure I can lose 20 pounds and find true love and happiness if I only use the right perfume: that’s why I bought the perfume! Just to get the disclaimer out of the way, no, this is not another instance of the Big Bad Vendor complaining about someone outing security vulnerabilities. What’s disturbing to me is the outright intent to create branding around security vulnerabilities and willful attempt to create a mass panic – dare we say “trending?” – around them regardless of the objective threat posed by the issue. A good example is the FREAK vulnerability (CVE-2015-0204). The fix for FREAK was distributed by OpenSSL on January 8th. It was largely ignored until early March when it was given the name FREAK.  Now, there are a lot of people FREAKing out about this relatively low risk vulnerability while largely ignoring unauthenticated, network, remote code execution vulnerabilities.

Here’s how it works. A researcher first finds vulnerability in a widely-used library: the more widely-used, the better, since nobody cares about a vulnerability in Digital Buggy Whip version 1.0 that is, like, so two decades ago and hardly anybody uses. OpenSSL has been a popular target, because it is very widely used so you get researcher bragging rights and lots of free PR for finding another problem in it. Next, the researcher comes up with a catchy name. You get extra points for it being an acronym for the nature of the vulnerability, such as SUCKS – Security Undermining of Critical Key Systems. Then, you put up a website (more points for cute animated creature dancing around and singing the SUCKS song). Add links so visitors can Order the T-shirt, Download the App, and Get a Free Bumper Sticker! Get a hash tag. Develop a Facebook page and ask your friends to Like your vulnerability. (I might be exaggerating, but not by much.) Now, sit back and wait for the uninformed public to regurgitate the headlines about “New Vulnerability SUCKS!” If you are a security researcher who dreamed up all the above, start planning your speaking engagements on how the world as we know it will end, because (wait for it), “Everything SUCKS.”

Now is where the astute reader is thinking, “but wait a minute, isn’t it really a good thing to publicize the need to fix a widely-embedded library that is vulnerable?” Generally speaking, yes. Unfortunately, most of the publicity around some of these security vulnerabilities is out of proportion to the actual criticality and exploitability of the issues, which leads to customer panic. Customer panic is a good thing – sorta – if the vulnerability is the equivalent of the RMS Titanic’s “vulnerability” as exploited by a malicious iceberg. It’s not a good thing if we are talking about a rowboat with a bad case of chipped paint. The panic leads to suboptimal resource allocation as code providers (vendors and open source communities) are – to a point – forced to respond to these issues based on the amount of press they are generating instead of how serious they really are. It also means there is other more valuable work that goes undone. (Wouldn’t most customers actually prefer that vendors fix security issues in severity order instead of based on “what’s trending?”). Lastly, it creates a shellshock effect with customers, who cannot effectively deal with a continuous string of exaggerated vulnerabilities that cause their management to apply patches as soon as possible or document that their environment is free of the bug.


The relevant metric around how fast you fix things should be objective threat. If something has a Common Vulnerability Scoring System (CVSS) Base Score of 10, then I am all for widely publicizing the issue (with, of course, the Common Vulnerabilities and Exposures (CVE) number, so people can read an actual description, rather than “run for your lives, Godzilla is stomping your code!”). If something is CVSS 2, I really don’t care that it has a cuter critter than Bambi as a mascot and generally customers shouldn’t, either. To summarize my concerns, the willful marketing of security vulnerabilities is worrisome for security professionals because:

  • It creates excessive focus on issues that are not necessarily truly critical
  • It creates grounds for confusion (as opposed to using CVEs)
  • It creates a significant support burden for organizations,* where resources would be better spent elsewhere

I would therefore, in the interests of constructive suggestions, recommend that customers assess the following criteria before calling all hands on deck over the next “branded” security vulnerability being marketed as the End of Life On Earth As We Know It:


1. Consider the source of the vulnerability information. There are some very good sites (arstechnica comes to mind) that have well-explained, readily understandable analyses of security issues. Obviously, the National Vulnerability Database (NVD) is also a great source of information.


2. Consider the actual severity of the bug (CVSS Base Score) and the exploitation scenario to determine “how bad is bad.”


3. Consider where the vulnerability exists, its implications, and whether mitigation controls exist in the environment: e.g., Heartbleed was CVSS 5.0, but the affected component (SSL), the nature of the information leakage (possible compromise of keys), and the lack of mitigation controls made it critical.


* e.g., businesses patching based on the level of hysteria rather than the level of threat


Organizations should look beyond cutesie vulnerability names so as to focus their attention where it matters most.  Inquiring about the most recent medium-severity bugs will do less in terms of helping an organization secure its environment than, say, applying existing patches for higher severity issues. Furthermore, it fosters a culture of “security by documentation” where organizations seek to collect information about a given bug from their cloud and software providers, while failing to apply existing patches in their environment. Nobody is perfect, but if you are going to worry, worry about vulnerabilities based on How Bad Is Bad, and not based on which ones have catchy acronyms, mascots or have generated a lot of press coverage.






DefSemiHidden="true" DefQFormat="false" DefPriority="99"
LatentStyleCount="267">
UnhideWhenUsed="false" QFormat="true" Name="Normal"/>
UnhideWhenUsed="false" QFormat="true" Name="heading 1"/>


















UnhideWhenUsed="false" QFormat="true" Name="Title"/>

UnhideWhenUsed="false" QFormat="true" Name="Subtitle"/>
UnhideWhenUsed="false" QFormat="true" Name="Strong"/>
UnhideWhenUsed="false" QFormat="true" Name="Emphasis"/>
UnhideWhenUsed="false" Name="Table Grid"/>

UnhideWhenUsed="false" QFormat="true" Name="No Spacing"/>
UnhideWhenUsed="false" Name="Light Shading"/>
UnhideWhenUsed="false" Name="Light List"/>
UnhideWhenUsed="false" Name="Light Grid"/>
UnhideWhenUsed="false" Name="Medium Shading 1"/>
UnhideWhenUsed="false" Name="Medium Shading 2"/>
UnhideWhenUsed="false" Name="Medium List 1"/>
UnhideWhenUsed="false" Name="Medium List 2"/>
UnhideWhenUsed="false" Name="Medium Grid 1"/>
UnhideWhenUsed="false" Name="Medium Grid 2"/>
UnhideWhenUsed="false" Name="Medium Grid 3"/>
UnhideWhenUsed="false" Name="Dark List"/>
UnhideWhenUsed="false" Name="Colorful Shading"/>
UnhideWhenUsed="false" Name="Colorful List"/>
UnhideWhenUsed="false" Name="Colorful Grid"/>
UnhideWhenUsed="false" Name="Light Shading Accent 1"/>
UnhideWhenUsed="false" Name="Light List Accent 1"/>
UnhideWhenUsed="false" Name="Light Grid Accent 1"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 1"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 1"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 1"/>

UnhideWhenUsed="false" QFormat="true" Name="List Paragraph"/>
UnhideWhenUsed="false" QFormat="true" Name="Quote"/>
UnhideWhenUsed="false" QFormat="true" Name="Intense Quote"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 1"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 1"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 1"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 1"/>
UnhideWhenUsed="false" Name="Dark List Accent 1"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 1"/>
UnhideWhenUsed="false" Name="Colorful List Accent 1"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 1"/>
UnhideWhenUsed="false" Name="Light Shading Accent 2"/>
UnhideWhenUsed="false" Name="Light List Accent 2"/>
UnhideWhenUsed="false" Name="Light Grid Accent 2"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 2"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 2"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 2"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 2"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 2"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 2"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 2"/>
UnhideWhenUsed="false" Name="Dark List Accent 2"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 2"/>
UnhideWhenUsed="false" Name="Colorful List Accent 2"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 2"/>
UnhideWhenUsed="false" Name="Light Shading Accent 3"/>
UnhideWhenUsed="false" Name="Light List Accent 3"/>
UnhideWhenUsed="false" Name="Light Grid Accent 3"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 3"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 3"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 3"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 3"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 3"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 3"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 3"/>
UnhideWhenUsed="false" Name="Dark List Accent 3"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 3"/>
UnhideWhenUsed="false" Name="Colorful List Accent 3"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 3"/>
UnhideWhenUsed="false" Name="Light Shading Accent 4"/>
UnhideWhenUsed="false" Name="Light List Accent 4"/>
UnhideWhenUsed="false" Name="Light Grid Accent 4"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 4"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 4"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 4"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 4"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 4"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 4"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 4"/>
UnhideWhenUsed="false" Name="Dark List Accent 4"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 4"/>
UnhideWhenUsed="false" Name="Colorful List Accent 4"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 4"/>
UnhideWhenUsed="false" Name="Light Shading Accent 5"/>
UnhideWhenUsed="false" Name="Light List Accent 5"/>
UnhideWhenUsed="false" Name="Light Grid Accent 5"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 5"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 5"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 5"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 5"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 5"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 5"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 5"/>
UnhideWhenUsed="false" Name="Dark List Accent 5"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 5"/>
UnhideWhenUsed="false" Name="Colorful List Accent 5"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 5"/>
UnhideWhenUsed="false" Name="Light Shading Accent 6"/>
UnhideWhenUsed="false" Name="Light List Accent 6"/>
UnhideWhenUsed="false" Name="Light Grid Accent 6"/>
UnhideWhenUsed="false" Name="Medium Shading 1 Accent 6"/>
UnhideWhenUsed="false" Name="Medium Shading 2 Accent 6"/>
UnhideWhenUsed="false" Name="Medium List 1 Accent 6"/>
UnhideWhenUsed="false" Name="Medium List 2 Accent 6"/>
UnhideWhenUsed="false" Name="Medium Grid 1 Accent 6"/>
UnhideWhenUsed="false" Name="Medium Grid 2 Accent 6"/>
UnhideWhenUsed="false" Name="Medium Grid 3 Accent 6"/>
UnhideWhenUsed="false" Name="Dark List Accent 6"/>
UnhideWhenUsed="false" Name="Colorful Shading Accent 6"/>
UnhideWhenUsed="false" Name="Colorful List Accent 6"/>
UnhideWhenUsed="false" Name="Colorful Grid Accent 6"/>
UnhideWhenUsed="false" QFormat="true" Name="Subtle Emphasis"/>
UnhideWhenUsed="false" QFormat="true" Name="Intense Emphasis"/>
UnhideWhenUsed="false" QFormat="true" Name="Subtle Reference"/>
UnhideWhenUsed="false" QFormat="true" Name="Intense Reference"/>
UnhideWhenUsed="false" QFormat="true" Name="Book Title"/>





/* Style Definitions */
table.MsoNormalTable
{mso-style-name:"Table Normal";
mso-tstyle-rowband-size:0;
mso-tstyle-colband-size:0;
mso-style-noshow:yes;
mso-style-priority:99;
mso-style-qformat:yes;
mso-style-parent:"";
mso-padding-alt:0in 5.4pt 0in 5.4pt;
mso-para-margin-top:0in;
mso-para-margin-right:0in;
mso-para-margin-bottom:10.0pt;
mso-para-margin-left:0in;
line-height:115%;
mso-pagination:widow-orphan;
font-size:11.0pt;
font-family:"Calibri","sans-serif";
mso-ascii-font-family:Calibri;
mso-ascii-theme-font:minor-latin;
mso-hansi-font-family:Calibri;
mso-hansi-theme-font:minor-latin;}

Push Notifications and the Internet of Things

Matthias Wessendorf - Wed, 2015-03-25 14:26

Today at Facebook’s F8 conference they announced Parse for IoT. This is a cool, but not unexpected, move, especially since there is demand to have connected objects be part of (enterprise) cloud systems. We will see more of that happening soon, and the lessons learned on traditional mobile will be applied to IoT devices, or “connected objects” in general.

AeroGear and IoT

In the AeroGear project we have done similar experiments, bringing functionality of our UnifiedPush Server to the IoT space. My colleague Sébastien Blanc did two short screencasts on his work in this area.

The above examples basically leverage our support for SimplePush, a WebSocket-based protocol used on Firefox OS for push notifications. Because Firefox OS uses an open protocol, we are able to extend this mechanism of push notification delivery to other platforms, not just Firefox OS devices.

Bringing Push Notifications to the IoT sector is a logical move, to integrate connected objects and mobile cloud services!


Tackling Strategic CIO Issues of 2015 by Oracle's CIO, Mark Sunday

Linda Fishman Hoyle - Wed, 2015-03-25 12:50

Technology evangelist Ben Kepes states in a recent Forbes article that “cloud will be the default way of delivering technology in the future.” And who better to lead a company’s cloud strategy than the CIO, who has direct access to the business and the C-suite?

Because of their pivotal positions, CIOs are in both the enterprise spotlight and the media spotlight. In February, Oracle’s Bob Evans published his "Top 10 Strategic CIO Issues for 2015." As a follow-on, Oracle’s Mark Sunday writes a how-to article for CIOs. As Oracle’s CIO, he writes from a place of experience, credibility, and success.

Sunday organizes his article around 11 roles that CIOs need to take on—or hats they need to wear to face the challenges and issues of 2015. One is “The Digital Disruptor.” He passionately believes CIOs need to be out in front of the fundamental shifts in business that are being caused by big data, cloud, mobile, and social. Second, he advises CIOs to be the BFF of the CMO, CFO, and CHRO, who are new strategic stakeholders. Another recommendation is that IT be an enabler of customer loyalty by providing employees with the technology and tools to interact with a broader community and protect their company's brand.

Sunday's hefty list shows us how many aspects of an enterprise are dependent on IT—which ultimately rest on the CIO’s shoulders. Their jobs hold incredible challenges and responsibility. You might want to use this post from Oracle's CIO to connect with some of your customer CIOs.


Standard Edition on Oracle Database Appliance

Yann Neuhaus - Wed, 2015-03-25 11:15

The Oracle Database Appliance is really interesting for small enterprises. It's very good hardware for a very good price, and it offers capacity-on-demand licensing for Enterprise Edition. But small companies usually go to Standard Edition for cost reasons.

So does it make sense to propose only Enterprise Edition to the small companies that are interested in the ODA?

Modern Marketing Experience 2015

WebCenter Team - Wed, 2015-03-25 08:29

The Modern Marketing Experience offers an unprecedented opportunity to gain insights from experts in marketing automation, social marketing, content marketing, and big data at a single premier event—presented by Oracle. With programs designed for Modern Marketing executives and practitioners along transformation and execution tracks, you will take home the strategy and tactics you need to make Modern Marketing succeed in your organization. Imagine an even bigger and better Eloqua Experience. That’s why we’ve moved the location and date in North America to accommodate the size and scope of what will become the new industry standard for marketing conferences. Join us at The Venetian Las Vegas March 31 - April 2. Where what happens there will not stay there. You’ll get the knowledge and experience to make a difference in your company and your career. Learn from thought leaders, peers, and all those friends you’ve made over the years. We can’t wait to see you!

How Much Do College Students Actually Pay For Textbooks?

Michael Feldstein - Wed, 2015-03-25 07:16

By Phil Hill

With all of the talk about the unreasonably high price of college textbooks, the unfulfilled potential of open educational resources (OER), and student difficulty in paying for course materials, it is surprising how little is understood about student textbook expenses. The following two quotes illustrate the most common problem.

Atlantic: “According to a recent College Board report, university students typically spend as much as $1,200 a year total on textbooks.”

US News: “In a survey of more than 2,000 college students in 33 states and 156 different campuses, the U.S. Public Interest Research Group found the average student spends as much as $1,200 each year on textbooks and supplies alone.”

While I am entirely sympathetic to the need and desire to lower textbook and course material prices for students, no one is served well by misleading information, and this information is misleading. Let’s look at the actual sources of data and what that data tells us, focusing on the aggregate measures of changes in average textbook pricing in the US and average student expenditures on textbooks. What the data tells us is that students spend on average $600 per year on textbooks, not $1,200.

First, however, let’s address the all-too-common College Board reference.

College Board Reference

The College Board positions itself as the source for the cost of college, and their reports look at tuition (published and net), room & board, books & supplies, and other expenses. This chart is the source of most confusion.

College Board Chart

The light blue “Books and Supplies” data, ranging from $1,225 to $1,328, leads to the often-quoted $1,200 number. But look at the note right below the chart:

Other expense categories are the average amounts allotted in determining total cost of attendance and do not necessarily reflect actual student expenditures.

That’s right – the College Board just adds budget estimates for the books & supplies category, and this is not at all part of their actual survey data. The College Board does, however, point people to one source that they use as a rough basis for their budgets.

According to the National Association of College Stores, the average price of a new textbook increased from $62 (in 2011 dollars) in 2006-07 to $68 in 2011-12. Students also rely on textbook rentals, used books, and digital resources. (http://www.nacs.org/research/industrystatistics/higheredfactsfigures.aspx)

The College Board is working to help people estimate the total cost of attendance; they are not providing actual source data on textbook costs, nor do they even claim to do so. Reporters and advocates just fail to read the footnotes. The US Public Interest Research Group is one of the primary reasons that journalists use the College Board data incorrectly, but I’ll leave that subject for another post.

The other issue is the combination of books and supplies. Let’s look at actual data and sources specifically for college textbooks.

Average Textbook Price Changes

What about the idea that textbook prices keep increasing?

BLS and Textbook Price Index

The primary source of public data for this question is the Consumer Price Index (CPI) from the Bureau of Labor Statistics (BLS). The CPI sets up a pricing index based on a complex regression model. The index is set to 100 for December 2001, when they started tracking this category. Using this data tool for series CUUR0000SSEA011 (college textbooks), we can see the pricing index from 2002 – 2014[1].

CPI Annual

This data equates to roughly 6% year-over-year increases in the price index of new textbooks, which roughly doubles prices every 12 years (1.06^12 ≈ 2.0). But note that this data is not inflation-adjusted, as the CPI is used to help determine the inflation rate. Since US inflation over 2002 – 2014 averaged roughly 2% per year, textbook prices have been rising at roughly three times the rate of inflation.

NACS and Average Price Per Textbook

NACS, as its name implies, surveys college bookstores to determine what students spend on various items. The College Board uses them as a source. This is the most concise summary, also showing rising textbook prices on a raw, non-inflation-adjusted basis, although at a lower rate of increase than the CPI.

The following graph for average textbook prices is based on data obtained in the annual financial survey of college stores. The most recent data for “average price” was based on the sale of 3.4 million new books and 1.9 million used books sold in 134 U.S. college stores, obtained in the Independent College Stores Financial Survey 2013-14.

NACS Avg Textbook Price

Other Studies

The Government Accountability Office (GAO) did a study in 2013 looking at textbook pricing, but their data source was the BLS. This chart, however, is popularly cited.

GAO Chart

There are several private studies done by publishers or service companies that give similar results, but by definition these are not public.

Student Expenditure on Books and Supplies

For most discussion on textbook pricing, the more relevant question is what students actually spend on textbooks, or at least on required course materials. Does the data above indicate that students are spending more and more every year? The answer is no, and the reason is that there are far more options today for getting textbooks than there used to be, and one choice – choosing not to acquire the course materials – is rapidly growing. According to Student Monitor, 30% of students choose not to acquire every college textbook.

Prior to the mid 2000s, the rough model for student expenditures was that roughly 65% purchased new textbooks and 35% bought used textbooks. Today, there are options for rentals, digital textbooks, and courseware, and the ratios are changing.

The two primary public sources for how much students spend on textbooks are the National Association of College Stores (NACS) and The Student Monitor.

NACS

The NACS also measures average student expenditure for required course materials, which is somewhat broader than textbooks but does not include non-required course supplies.

The latest available data on student spending is from Student Watch: Attitudes & Behaviors toward Course Materials, Fall 2014. Based on survey data, students spent an average of $313 on their required course materials, including purchases and rentals, for that fall term. Students spent an average of $358 over the same period on “necessary but not required” technology, such as laptops and USB drives.

NACS Course Material Expenditures

Note that by the nature of analyzing college bookstores, NACS is biased towards traditional face-to-face education and students aged 18-24.

Update: I should have described the NACS methodology in more depth (or probably need a follow-on post), but their survey is distributed to students through the bookstore. Purchases through Amazon and Chegg, rentals, and decisions not to purchase are all captured in that study. It’s not flawless, but it is not just for purchases through the bookstore. From the study itself:

Campus bookstores distributed the survey to their students via email. Each campus survey fielded for a two week period in October 2013. A total of 12,195 valid responses were collected. To further strengthen the accuracy and representativeness of the responses collected, the data was weighted based on gender using student enrollment figures published in The Chronicle of Higher Education: 2013/2014 Almanac. The margin of error for this study is +/- 0.89% at the 95% confidence interval.

Student Monitor

Student Monitor is a company that provides syndicated and custom market research, and they produce extensive research on college expenses in the spring and fall of each year. This group interviews students for their data, rather than analyzing college bookstore financials, which is a different methodology from that of NACS. Based on the Fall 2014 data specifically on textbooks, students spent an average of $320 per term (roughly $640 per year), which is quite close to the $638 per year calculated by NACS. Based on information from page 126:

Average Student Acquisition of Textbooks by Format/Source for Fall 2014

  • New print: 59% of acquirers, $150 total mean
  • Used print: 59% of acquirers, $108 total mean
  • Rented print: 29% of acquirers, $38 total mean
  • eTextbooks (unlimited use): 16% of acquirers, $15 total mean
  • eTextbooks (limited use): NA% of acquirers, $9 total mean
  • eTextbooks (file sharing): 8% of acquirers, $NA total mean
  • Total for Fall 2014: $320 mean
  • Total on Annual Basis: $640 mean

Note, however, that the Fall 2014 data ($640 annual) represents a steep increase from the previous trend as reported by NPR (but based on Student Monitor data). I have asked Student Monitor for commentary on the increase but have not heard back (yet).

NPR Student Monitor

Like NACS, Student Monitor is biased towards traditional face-to-face education and students aged 18-24.

Summary

I would summarize the data as follows:

The shortest answer is that US college students spend an average of $600 per year on textbooks despite rising retail prices.

I would not use College Board as a source on this subject, as they do not collect their own data on textbook pricing or expenditures, and they only use budget estimates.

I would like to thank Rob Reynolds from NextThought for his explanation and advice on the subject.

Update (3/25): See note on NACS above.

Update (3/27): See postcript post for additional information on data sources.

  1. Note that BLS has a category CUSR0000SEEA (Educational Books & Supplies) that has been tracked far longer than the sub-category College Textbooks. We’ll use the textbooks to simplify comparisons.

The post How Much Do College Students Actually Pay For Textbooks? appeared first on e-Literate.

New Oracle Big Data Quick-Start Packages from Rittman Mead

Rittman Mead Consulting - Wed, 2015-03-25 05:00

Many organisations using Oracle’s business intelligence and data warehousing tools are now looking to extend their capabilities using “big data” technologies. Customers running their data warehouses on Oracle Databases are now looking to use Hadoop to extend their storage capacity whilst offloading initial data loading and ETL to this complementary platform; other customers are using Hadoop and Oracle’s Big Data Appliance to add new capabilities around unstructured and sensor data analysis, all at considerably lower-cost than traditional database storage.


In addition, as data and analytics technologies and capabilities have evolved, there has never been a better opportunity to reach further into your data to exploit more value. Big Data platforms, Data Science methods and data discovery technologies make it possible to unlock the power of your data and put it in the hands of your executives and team members – but what is it worth to you? What’s the value to your organisation of exploring deeper into the data you have, and how do you show return?

Many organisations have begun to explore Big Data technologies to understand where they can exploit value and extend their existing analytics platforms, but what’s the business case? The good news is, using current platforms, and following architectures like the Oracle Information Management and Big Data Reference Architecture written in conjunction with Rittman Mead, the foundation is in place to unlock a range of growth opportunities. Finding new value in existing data, predictive analytics, data discovery, reducing the cost of data storage and ETL offloading are all starter business cases proven to return value quickly.


To help you start on the Oracle big data journey, Rittman Mead have put together two quick-start packages focused on the most popular Oracle customer use-cases.

If this sounds like something you or your organization might be interested in, take a look at our new Quick Start Oracle Big Data and Big Data Discovery packages on the Rittman Mead home page, or drop me an email at mark.rittman@rittmanmead.com and I’ll let you know how we can help.

Categories: BI & Warehousing

Source Dependent Extract and Source Independent Load

Dylan's BI Notes - Tue, 2015-03-24 17:40
A typical data warehousing ETL process involves Extract, Transform, and Load. The concepts of Source Dependent Extract (SDE) and Source Independent Load (SIL) are a unique part of the BI Apps ETL, since BI Apps has a universal data warehouse. Since the staging schema is designed according to the universal data warehouse design, the logic of loading data […]
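To make the split concrete, here is a minimal sketch in plain SQL (the table and column names are purely illustrative, not the actual BI Apps schema): the SDE step is source-specific SQL that shapes one source's rows into the universal staging layout, while the SIL step loads the warehouse table from staging in the same way regardless of which source populated it.

-- SDE: source-dependent extract (one of these per source system)
insert into w_customer_ds (integration_id, customer_name, datasource_num_id)
select c.customer_id, c.cust_name, 10        -- 10 identifies this particular source
  from source_app.customers c;

-- SIL: source-independent load (a single mapping, whatever the source was)
merge into w_customer_d d
using w_customer_ds s
   on (    d.integration_id    = s.integration_id
       and d.datasource_num_id = s.datasource_num_id)
 when matched then
      update set d.customer_name = s.customer_name
 when not matched then
      insert (integration_id, datasource_num_id, customer_name)
      values (s.integration_id, s.datasource_num_id, s.customer_name);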
Categories: BI & Warehousing

“Speeding up the innovation cycle with SaaS” by Steve Miranda

Linda Fishman Hoyle - Tue, 2015-03-24 14:56

It wasn’t so very long ago that cloud discussions, and department pleas to adopt cloud, raised the hackles of IT departments. IT professionals ranged from being wary to resistant. Now, in many companies, the foe has turned into a friend. In this post, EVP Steve Miranda takes a crack at answering the question: “How did IT become a friend of cloud?”

Miranda gives a brief, but concise, explanation of the traditional on-premises process, which made it very difficult for IT departments to deliver what the business was asking for. On the flip side, the power of the cloud lands vendors and IT departments in a place that is much more sustainable, more efficient, and more relevant. Miranda says, “In essence, the cloud has made it possible for vendors and software providers to deliver a better product to customers faster and at a lower price.” That’s good news for IT departments.

Read the article: Speeding up the innovation cycle with SaaS


Parallel Execution -- 3 Limiting PX Servers

Hemant K Chitale - Tue, 2015-03-24 09:05
In my previous posts, I have demonstrated how Oracle "auto"computes the DoP when using the PARALLEL Hint by itself, even when PARALLEL_DEGREE_POLICY is set to MANUAL.  This "auto"computed value is CPU_COUNT x PARALLEL_THREADS_PER_CPU.

How do we limit the DoP ?

1.  PARALLEL_MAX_SERVERS is an instance-wide limit, not usable at the session level.

2.  Resource Manager configuration can be used to limit the number of PX Servers used

3.  PARALLEL_DEGREE_LIMIT, unfortunately, is not usable when PARALLEL_DEGREE_POLICY is MANUAL

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Mar 24 22:57:18 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 4
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel_degree_policy

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_policy string MANUAL
SYS>show parameter parallel_max

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
parallel_max_servers integer 64
SYS>
SYS>select * from dba_rsrc_io_calibrate;

no rows selected

SYS>
SYS>connect hemant/hemant
Connected.
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
1 16 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

As expected, the query uses 16 PX Servers (and not the table-level definition of 4).  Can we use PARALLEL_DEGREE_LIMIT ?

HEMANT>alter session set parallel_degree_limit=4;

Session altered.

HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
2 32 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

No, it actually still used 16 PX Servers for the second execution (PX_SERVERS_EXECUTIONS grew from 16 to 32).

What about PARALLEL_MAX_SERVERS ?

HEMANT>connect / as sysdba
Connected.
SYS>alter system set parallel_max_servers=4;

System altered.

SYS>connect hemant/hemant
Connected.
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
3 36 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

Yes, PARALLEL_MAX_SERVERS restricted the next run of the query to 4 PX Servers (PX_SERVERS_EXECUTIONS grew by only 4, from 32 to 36).  However, this parameter limits the total concurrent usage of PX Servers at the instance level; it cannot be applied at the session level.
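As an aside, a quick way to confirm from within the same session how many PX Servers the last statement actually used is V$PQ_SESSTAT (a small sketch; check the exact statistic names against your release):

select statistic, last_query, session_total
  from v$pq_sesstat
 where statistic in ('Queries Parallelized','Server Threads','Allocation Height');

LAST_QUERY shows the figures for the most recent parallel statement, SESSION_TOTAL the running totals for the session.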


UPDATE : Another method pointed out to me by Mladen Gogala is to use the Resource Manager to limit the number of PX Servers a session can request.
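For completeness, here is a minimal Resource Manager sketch of that approach (run as a DBA; the plan, consumer group and user names are purely illustrative, and in real life you would fold the directive into your existing plan design). It caps the DoP for sessions mapped to the group at 4 via PARALLEL_DEGREE_LIMIT_P1:

begin
    dbms_resource_manager.create_pending_area;

    dbms_resource_manager.create_consumer_group(
        consumer_group => 'LIMITED_PX_GROUP',
        comment        => 'Sessions whose DoP is capped at 4');

    dbms_resource_manager.create_plan(
        plan    => 'LIMIT_PX_PLAN',
        comment => 'Plan that limits the parallel degree per session');

    dbms_resource_manager.create_plan_directive(
        plan                     => 'LIMIT_PX_PLAN',
        group_or_subplan         => 'LIMITED_PX_GROUP',
        comment                  => 'Cap the DoP at 4',
        parallel_degree_limit_p1 => 4);

    -- every plan must also have a directive for OTHER_GROUPS
    dbms_resource_manager.create_plan_directive(
        plan             => 'LIMIT_PX_PLAN',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'Everyone else, unrestricted');

    -- map the test user to the restricted group
    dbms_resource_manager.set_consumer_group_mapping(
        attribute      => dbms_resource_manager.oracle_user,
        value          => 'HEMANT',
        consumer_group => 'LIMITED_PX_GROUP');

    dbms_resource_manager.validate_pending_area;
    dbms_resource_manager.submit_pending_area;
end;
/

exec dbms_resource_manager_privs.grant_switch_consumer_group('HEMANT','LIMITED_PX_GROUP',FALSE)

alter system set resource_manager_plan = 'LIMIT_PX_PLAN';

Once the plan is active, a /*+ PARALLEL */ query from that user should be capped at a DoP of 4 even though PARALLEL_DEGREE_POLICY is still MANUAL.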

Categories: DBA Blogs

Oracle Exadata Performance: Latest Improvements and Less Known Features

Tanel Poder - Tue, 2015-03-24 08:57

Here are the slides of a presentation I did at the IOUG Virtual Exadata conference in February. I’m explaining the basics of some new Oracle 12c things related to Exadata, plus current latest cellsrv improvements like Columnar Flash Cache and IO skipping for Min/Max retrieval using Storage Indexes:
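If you want to check on your own system whether a MIN/MAX query benefited from storage index IO skipping, here is a quick sketch (the SALES table is hypothetical, and the statistic name is the one I recall, so verify it against V$STATNAME on your release):

select /*+ full(s) */ min(s.order_id), max(s.order_id) from sales s;

select n.name, m.value
  from v$statname n, v$mystat m
 where n.statistic# = m.statistic#
   and n.name = 'cell physical IO bytes saved by storage index';

If the value increases after the query runs, cellsrv skipped some IO using the storage index.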

Note that Christian Antognini and Roger MacNicol have written separate articles about some new features:

Enjoy!

 

Related Posts