
Feed aggregator

Parallel Execution -- 2 PX Servers

Hemant K Chitale - Tue, 2015-03-03 09:51
I've posted a couple of examples (here and here) of Parallel Execution servers for Parallel Query.

How do we identify usage of Parallel Execution?

I will update this post (and, possibly, subsequent post(s)) with a few methods.

The first one (as I've shown in my previous posts) is to look at the column PX_SERVERS_EXECUTIONS in either V$SQLSTATS or V$SQL.  This can identify the number of PX Servers used for an SQL (Query or DML).  However, there is a caveat when the SQL undergoes multiple executions -- the statistic in PX_SERVERS_EXECUTIONS may be cumulative (i.e. additive) across all the executions of the SQL.  UPDATE 13-Mar-15 : See the new post here.
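
For example, a minimal sketch (substitute the SQL_ID you are interested in):

-- cumulative across all executions, per the caveat above
select sql_id, executions, px_servers_executions
from v$sqlstats
where sql_id = '&sql_id';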

Another method is to look at the V$PX_PROCESS and V$PX_SESSION views.

Let me demonstrate this second method using the same SQL query from my previous blog post.

[oracle@localhost ~]$ sqlplus hemant/hemant

SQL*Plus: Release 11.2.0.2.0 Production on Tue Mar 3 23:34:37 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

HEMANT>select distinct sid from v$mystat;

SID
----------
197

HEMANT>select count(*) from v$px_process;

COUNT(*)
----------
0

HEMANT>select count(*) from v$px_session;

COUNT(*)
----------
0

HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 4
resource_manager_cpu_allocation integer 4
HEMANT>set serveroutput off
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select count(*) from v$px_process;

COUNT(*)
----------
16

SYS>select qcsid, req_degree, degree, count(*)
2 from v$px_session
3 group by qcsid, req_degree, degree
4 /

QCSID REQ_DEGREE DEGREE COUNT(*)
---------- ---------- ---------- ----------
197 1
197 16 16 16

SYS>

The query by the SYS user is from a different session, run while the "select /*+ PARALLEL */ count(*) from Large_Table;" is being executed by HEMANT.  This query is on V$PX_SESSION and returns rows only while the Parallel Query sessions are active -- i.e. while HEMANT's parallel count(*) query is running.  (If I query V$PX_SESSION after the parallel count(*) completes, I won't get the information.)

The above output demonstrates:
(a) that there were no PX servers before I began the parallel count(*) query and there were 16 at the end -- 16 PX servers had been started and had not yet shut down by the time I queried V$PX_PROCESS (they will shut down after a while -- see the ** note below).
(b) that my parallel count(*) query (executed by SID 197, which is the Query Co-ordinator -- represented by QCSID) DID request and use 16 PX server sessions (as evidenced by the output of the query on V$PX_SESSION).  Thus, what I claimed on the basis of PX_SERVERS_EXECUTIONS in my previous post is correct.

** Note : A few minutes later, I can see that the PX Servers have shut down.

HEMANT>select count(*) from v$px_process
2 /

COUNT(*)
----------
0

HEMANT>


Later, I will demonstrate how to join V$PX_PROCESS and V$PX_SESSION.
I will also demonstrate how to manage the number of PX Servers.
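
In the meantime, a minimal sketch of such a join might look like this (run it while the PX sessions are active):

-- map each PX server process to its session and its Query Co-ordinator
select p.server_name, p.status, s.sid, s.qcsid, s.req_degree, s.degree
from v$px_process p, v$px_session s
where p.sid = s.sid
and p.serial# = s.serial#
order by p.server_name;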

.
.
.

Categories: DBA Blogs

12c Parallel Execution New Features: Concurrent UNION ALL - Part 1

Randolf Geist - Tue, 2015-03-03 09:31
12c introduces the concurrent UNION ALL operator that allows multiple branches below the UNION ALL to become active concurrently - which is a significant change. Before the introduction of this feature Oracle never executed multiple branches of an execution plan concurrently (in terms of Parallel Execution) - the parallelism so far was about executing the same operations of a particular branch of the execution plan by multiple processes / sessions. When we later look at the actual runtime behaviour of the new feature we'll notice that it's actually not as different from previous behaviour as it might sound here.

By default the feature gets automatically used when there is at least one branch of the UNION ALL making use of Parallel Execution features, like a parallel full table scan for example.

If the UNION ALL operator consists of serial branches only the concurrent execution of those serial branches can be explicitly activated by using the PQ_CONCURRENT_UNION hint. To suppress the feature usage, a corresponding NO_PQ_CONCURRENT_UNION hint can be used. Both hints are officially documented.

The concurrent execution of serial branches could, in my opinion, be particularly useful to overcome the still existing limitation of parallelism for operations via database links: If you want to copy larger segments of data via database links you can use Parallel Execution on the source to speed up data retrieval and you can use Parallel Execution on the target to speed up the DML process, but the data always has to go through the query coordinator process on both sides, so a copy process is always limited by the CPU and network bandwidth / latency of a single process. If you need to copy multiple objects of similar size you could simply speed up the copy process by spawning multiple jobs, but if the objects are of significantly different size you would still have to wait for a single process to cope with the largest segment.

Before 12c the only way to overcome this limitation was to employ "do-it-yourself" parallelism per (large) segment, more recently supported via DBMS_PARALLEL_EXECUTE. This allows spreading such an operation across several processes / sessions concurrently and therefore makes it possible to overcome the described limitation. But this approach makes the overall handling more complex (chunking, error handling etc., although DBMS_PARALLEL_EXECUTE provides a standardized interface for such tasks) and is no longer a simple SQL based solution.

With the concurrent UNION ALL feature, in principle, a SQL solution should be possible where multiple branches of a concurrent UNION ALL query different chunks of the same remote segment concurrently. Of course this approach still requires some manual preparation, since ideally the different branches should only query disjoint parts of the segment; for non-partitioned segments, ROWID ranges should be prepared and used inside the different remote queries, similar to the DBMS_PARALLEL_EXECUTE approach. If you don't care about the duplicated work you could of course simply use something like an ORA_HASH or MOD based filter on a column expression that comes up with a reasonably equal (hash) value distribution (like a primary key).
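
Purely as an illustration (a sketch, not tested here -- "remote_tab" is a placeholder table and "loop" is the database link created in the setup below), such a manually chunked UNION ALL could use ORA_HASH to give each serial branch a roughly disjoint slice of the remote segment:

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select * from remote_tab@loop where ora_hash(id, 3) = 0
union all
select * from remote_tab@loop where ora_hash(id, 3) = 1
union all
select * from remote_tab@loop where ora_hash(id, 3) = 2
union all
select * from remote_tab@loop where ora_hash(id, 3) = 3
);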

However, due to the way the concurrent UNION ALL is implemented internally at present, this approach might not work very well, at least in 12.1.0.1 and 12.1.0.2, as I'll show in a later part.

Some other questions I would like to address regarding this feature are the following:

- How is the work distributed at runtime if you end up with a mixture of parallel and serial branches?

- If you request concurrent execution with serial branches only and no other parallel execution in the plan, how is the parallel degree to be used determined?

In this part of the blog post I want to start with the latter question first: What parallel degree gets used if a UNION ALL consists only of serial branches?

And one follow-up question could be: What happens if a more complex execution plan mixes parts with Parallel Execution outside a UNION ALL with a concurrent UNION ALL that consists only of serial branches?

Here is the table setup I'm going to use for all my tests:

-- This is the Parallel table
create table t_2
compress
as
select
        rownum as id
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(1e5) */ * from dual
         connect by level <= 1e5) a,
        (select /*+ cardinality(20) */ * from dual
         connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_2')

alter table t_2 parallel 8;

-- This is the serial table
create table t2
compress
as
select
        (rownum * 2) + 1 as id
      , mod(rownum, 2000) + 1 as id2
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(10000) */ * from dual
         connect by level <= 10000) a,
        (select /*+ cardinality(20) */ * from dual
         connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't2')

create database link loop connect to cbo_test identified by cbo_test using '//localhost:1521/orcl12';

The database link will become relevant in the later parts of this series where I look at the execution of remote branches.

So let's start with the question about the parallel degree used in case of serial branches only. I'll use the following simple query to determine that (repeat the query block inside the UNION ALL as many times as desired, I've used 20 serial branches):

-- Can I force a completely serial UNION ALL into concurrent execution using the hint? Yes, but it isn't selected by the optimizer automatically
-- In 12.1.0.2 the PQ_CONCURRENT_UNION hint works as documented as statement level hint, whereas in 12.1.0.1 there is the bug that you need to specify a query block name (Bug 15851422 : PQ_CONCURRENT_HINT DOES NOT WORK WITHOUT QUERY BLOCK SPECIFICATION)
-- What about the degree chosen in such a case, is there an upper limit?
select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
.
.
.
select regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
);

I've performed all tests on 12.1.0.2.

If you do an EXPLAIN PLAN on the above query without the PQ_CONCURRENT_UNION hint, then you'll see, as documented, the default behaviour, which is serial execution of one branch after the other (I've used 20 serial branches in my tests):

-------------------------------------
| Id | Operation | Name |
-------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | SORT AGGREGATE | |
| 2 | VIEW | |
| 3 | UNION-ALL | |
|* 4 | TABLE ACCESS FULL| T2 |
|* 5 | TABLE ACCESS FULL| T2 |
|* 6 | TABLE ACCESS FULL| T2 |
|* 7 | TABLE ACCESS FULL| T2 |
|* 8 | TABLE ACCESS FULL| T2 |
|* 9 | TABLE ACCESS FULL| T2 |
|* 10 | TABLE ACCESS FULL| T2 |
|* 11 | TABLE ACCESS FULL| T2 |
|* 12 | TABLE ACCESS FULL| T2 |
|* 13 | TABLE ACCESS FULL| T2 |
|* 14 | TABLE ACCESS FULL| T2 |
|* 15 | TABLE ACCESS FULL| T2 |
|* 16 | TABLE ACCESS FULL| T2 |
|* 17 | TABLE ACCESS FULL| T2 |
|* 18 | TABLE ACCESS FULL| T2 |
|* 19 | TABLE ACCESS FULL| T2 |
|* 20 | TABLE ACCESS FULL| T2 |
|* 21 | TABLE ACCESS FULL| T2 |
|* 22 | TABLE ACCESS FULL| T2 |
|* 23 | TABLE ACCESS FULL| T2 |
-------------------------------------

With the hint in place, the plan changes to this:

-------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,00 | PCWP | |
| 5 | VIEW | | Q1,00 | PCWP | |
| 6 | UNION-ALL | | Q1,00 | PCWP | |
| 7 | PX SELECTOR | | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 9 | PX SELECTOR | | Q1,00 | PCWP | |
|* 10 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 11 | PX SELECTOR | | Q1,00 | PCWP | |
|* 12 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 13 | PX SELECTOR | | Q1,00 | PCWP | |
|* 14 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 15 | PX SELECTOR | | Q1,00 | PCWP | |
|* 16 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 17 | PX SELECTOR | | Q1,00 | PCWP | |
|* 18 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 19 | PX SELECTOR | | Q1,00 | PCWP | |
|* 20 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 21 | PX SELECTOR | | Q1,00 | PCWP | |
|* 22 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 23 | PX SELECTOR | | Q1,00 | PCWP | |
|* 24 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 25 | PX SELECTOR | | Q1,00 | PCWP | |
|* 26 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 27 | PX SELECTOR | | Q1,00 | PCWP | |
|* 28 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 29 | PX SELECTOR | | Q1,00 | PCWP | |
|* 30 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 31 | PX SELECTOR | | Q1,00 | PCWP | |
|* 32 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 33 | PX SELECTOR | | Q1,00 | PCWP | |
|* 34 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 35 | PX SELECTOR | | Q1,00 | PCWP | |
|* 36 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 37 | PX SELECTOR | | Q1,00 | PCWP | |
|* 38 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 39 | PX SELECTOR | | Q1,00 | PCWP | |
|* 40 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 41 | PX SELECTOR | | Q1,00 | PCWP | |
|* 42 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 43 | PX SELECTOR | | Q1,00 | PCWP | |
|* 44 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 45 | PX SELECTOR | | Q1,00 | PCWP | |
|* 46 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
-------------------------------------------------------------------------

Starting with 12c you'll usually get a plan note about the parallel degree the optimizer has determined, both for the traditional DOP and for the Auto DOP way introduced in 11.2 (for Auto DOP this note already showed up in 11.2). However, in this case no note section shows up.

So what degree do we then get at runtime? (XPLAN_ASH output snippet)

Real-Time SQL Monitoring Execution Summary
-----------------------------------------------

| |PX IS|PX |PX | |PX |PX |
| |CROSS|MIN |MAX |PX |SERVERS |SERVERS |
STATUS |USERNAME |INST |DOP |DOP |INSTANCES |REQUESTED |ALLOCATED |
-------------------|------------------------------|-----|----------|----------|----------|----------|----------|
EXECUTING |CBO_TEST |N | 20| 20| 1| 20| 20|
| | | | | | | |

So that's interesting. In my case I tested this on a laptop with 2 CPUs and Resource Manager active, which means that insane degrees should be downgraded automatically (or even already limited by the optimizer), but I really got a degree of 20, which doesn't make a lot of sense in this environment. However, in 12c the limits enforced by the Resource Manager have obviously been raised. In 11.2 the highest degree the Resource Manager allowed in this particular environment was 16; in 12c the limit was 27 instead (maybe 28, because in 11.2 I sometimes got 15, sometimes 16).

So, quite straightforwardly, the number of serial branches seems to determine the degree attempted at runtime. This also means you need to be careful about how many branches you code into the UNION ALL if you want to make use of the concurrent feature.

What happens if I now mix Parallel Execution with a UNION ALL that consists only of serial branches, for example like this:

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t_2
where regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) a,
(
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) b
where a.id = b.id;

Again, if I don't use the PQ_CONCURRENT_UNION hint, I'll get the traditional plan shape for the UNION ALL - which gets executed serially and then gets distributed as part of the remaining parallel operations:

-------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10002 | Q1,02 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,02 | PCWP | |
|* 5 | HASH JOIN | | Q1,02 | PCWP | |
| 6 | BUFFER SORT | | Q1,02 | PCWC | |
| 7 | PX RECEIVE | | Q1,02 | PCWP | |
| 8 | PX SEND HYBRID HASH | :TQ10000 | | S->P | HYBRID HASH|
| 9 | STATISTICS COLLECTOR | | | | |
| 10 | VIEW | | | | |
| 11 | UNION-ALL | | | | |
|* 12 | TABLE ACCESS FULL | T2 | | | |
|* 13 | TABLE ACCESS FULL | T2 | | | |
|* 14 | TABLE ACCESS FULL | T2 | | | |
|* 15 | TABLE ACCESS FULL | T2 | | | |
|* 16 | TABLE ACCESS FULL | T2 | | | |
|* 17 | TABLE ACCESS FULL | T2 | | | |
|* 18 | TABLE ACCESS FULL | T2 | | | |
|* 19 | TABLE ACCESS FULL | T2 | | | |
|* 20 | TABLE ACCESS FULL | T2 | | | |
|* 21 | TABLE ACCESS FULL | T2 | | | |
| 22 | PX RECEIVE | | Q1,02 | PCWP | |
| 23 | PX SEND HYBRID HASH | :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 24 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
|* 25 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("ID"="B"."ID")
12 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
13 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
14 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
15 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
16 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
17 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
18 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
19 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
20 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
21 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
25 - filter( REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))

Note
-----
- Degree of Parallelism is 4 because of table property

Since I marked the table T_2 with PARALLEL 8, I would expect the overall plan to use a degree of 8, but look at the "note" section, which tells me the degree would be 4 due to a table property...

What happens at runtime:

Real-Time SQL Monitoring Execution Summary
-----------------------------------------------

| |PX IS|PX |PX | |PX |PX |
| |CROSS|MIN |MAX |PX |SERVERS |SERVERS |
STATUS |USERNAME |INST |DOP |DOP |INSTANCES |REQUESTED |ALLOCATED |
-------------------|------------------------------|-----|----------|----------|----------|----------|----------|
EXECUTING |CBO_TEST |N | 8| 8| 1| 16| 16|
| | | | | | | |

But at runtime I get a degree of 8, so you have to be careful with these plan notes, as they quite often seem to report incorrect degrees for the traditional DOP way.

If I now use the PQ_CONCURRENT_UNION hint, I get the following plan:

------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10002 | Q1,02 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,02 | PCWP | |
|* 5 | HASH JOIN | | Q1,02 | PCWP | |
| 6 | PX RECEIVE | | Q1,02 | PCWP | |
| 7 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
| 8 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
| 9 | VIEW | | Q1,00 | PCWP | |
| 10 | UNION-ALL | | Q1,00 | PCWP | |
| 11 | PX SELECTOR | | Q1,00 | PCWP | |
|* 12 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 13 | PX SELECTOR | | Q1,00 | PCWP | |
|* 14 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 15 | PX SELECTOR | | Q1,00 | PCWP | |
|* 16 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 17 | PX SELECTOR | | Q1,00 | PCWP | |
|* 18 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 19 | PX SELECTOR | | Q1,00 | PCWP | |
|* 20 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 21 | PX SELECTOR | | Q1,00 | PCWP | |
|* 22 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 23 | PX SELECTOR | | Q1,00 | PCWP | |
|* 24 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 25 | PX SELECTOR | | Q1,00 | PCWP | |
|* 26 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 27 | PX SELECTOR | | Q1,00 | PCWP | |
|* 28 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 29 | PX SELECTOR | | Q1,00 | PCWP | |
|* 30 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 31 | PX RECEIVE | | Q1,02 | PCWP | |
| 32 | PX SEND HYBRID HASH | :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 33 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
|* 34 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("ID"="B"."ID")
12 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
14 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
16 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
18 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
20 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
22 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
24 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
26 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
28 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
30 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
34 - filter( REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))

Note
-----
- Degree of Parallelism is 4 because of table property

OK, this is the concurrent UNION ALL shape now, but the "note" section still tells me the parallel degree is 4.

What happens at runtime:

Real-Time SQL Monitoring Execution Summary
-----------------------------------------------

| |PX IS|PX |PX | |PX |PX |
| |CROSS|MIN |MAX |PX |SERVERS |SERVERS |
STATUS |USERNAME |INST |DOP |DOP |INSTANCES |REQUESTED |ALLOCATED |
-------------------|------------------------------|-----|----------|----------|----------|----------|----------|
EXECUTING |CBO_TEST |N | 10| 10| 1| 20| 20|
| | | | | | | |

Oops: at runtime I now get the degree determined by the PQ_CONCURRENT_UNION part, so the overall degree in my case is 10, not 4 (or 8 for that matter), since my UNION ALL had 10 serial branches in this example.

So it becomes obvious that when using the new feature with serial branches, you have to pay attention to the parallel degree used at runtime.

In the next instalment we'll look at the actual runtime behaviour of the feature when it gets automatically triggered by a mixture of parallel and serial branches.

I’m William Reynolds and this is How I Work

Duncan Davies - Tue, 2015-03-03 09:00

William joined PeopleSoft in November of 1995 as a developer in the Inventory product. Since then he has worked in various roles within the FSCM pillar eventually landing in the Enterprise Architecture group where he straddles the world between PeopleTools and Applications. He currently works with other Architects in each of the pillars on various projects like security, accessibility and PUM.

William

Name: William Reynolds

Occupation: Senior Application Architect, Enterprise Architecture
Location: Home office, Chicago, IL
Current computer: ThinkPad W520 & T430
Current mobile devices: iPhone 6

What apps/software/tools can’t you live without?
Probably the two pieces of software I’d be lost without are LastPass and Xmarks. Keeping track of hundreds of bookmarks along with hundreds of passwords between work and home would be nearly impossible without them. Lately I’ve also been leaning heavily on Notepad++ for various projects.

Besides your phone and computer, what gadget can’t you live without?
When I’m not on my laptop I usually have my iPad handy for email, calendar and web access.

What’s your workspace like?
I’d like to say it’s a clean, modern, calm fortress of solitude but I work from home and have three kids under the age of 12 so its state of clutter really depends on the week. I built a treadmill desk a couple years ago but these days it’s more of a standing desk. Every week you read a new article about sitting versus standing so I try to stand as much as I can during the dozen or so conference calls throughout the week. Walking while working takes some getting used to. Email is pretty easy to get through while in motion but for something like SQL or coding I find I have to be at least standing still if not sitting to really concentrate.

My workspace

What do you listen to while you work?
My tastes run the gamut and usually it depends on what task I'm doing. For things that require some heads-down concentration it's either Brubeck or Sinatra. If I'm cranking through email or pulling together a slide deck then it's something more peppy like top 40.

What PeopleSoft-related productivity apps do you use?
It really just depends on what project I’m working on at the moment.  I prefer Firefox as my browser at work, Notepad++ for text work, and SQL Developer.

What SQL/Code do you find yourself writing most often?
Lately SELECT * FROM PSPROJECTITEM;

What would be the one item you’d add to PeopleSoft if you could?
At the moment, my ideal change would be to get rid of the Windows dependency in all our client tools.

What everyday thing are you better at than anyone else?
I would say I have a knack for looking at the big picture and being able to put myself in a customer’s shoes.

What’s the best advice you’ve ever received?
Not sure where it came from but “don’t build a Cadillac when what you need is a Honda.”


My Oracle Support Essentials Webcasts - March

Chris Warticki - Tue, 2015-03-03 07:26
Oracle Product Support Advisor Webcasts March 2015

This Month:


Dear Valued Support Customer,

We are pleased to invite you to our Advisor Webcast series for March 2015. Subject matter experts prepare these presentations and deliver them through WebEx. Topics include information about Oracle support services and products. To learn more about the program or to access archived recordings, please follow the links.

There are currently two types of Advisor Webcasts;
If you prefer to read about current events, Oracle Premier Support News provides you information, technical content, and technical updates from the various Oracle Support teams. For a full list of Premier Support News, go to My Oracle Support and enter Document ID 222.1 in Knowledge Base search.

Sincerely,
Oracle Support

March Featured Webcasts by Product Area:

  • Database: Materialized view basics, architecture and internal working (March 24)
  • E-Business Suite: Overview of Accounting for Across Organization Transaction Cycles, Part 3 of 3 (March 10)
  • E-Business Suite: Demantra - Are you importing Legacy data in Ascii flat files or Oracle Tables? (March 11)
  • E-Business Suite: Advanced Supply Chain Planning, Collaborative Planning, and iSupplier Flow and Demonstration (March 12)
  • E-Business Suite: Inventory Sales Order Flow and Troubleshooting – R12 (March 18)
  • E-Business Suite: Understanding & Troubleshooting Receipts APIs In Oracle Accounts Receivable (March 19)
  • E-Business Suite: Use of Forecasting in Enterprise Asset Management (eAM) (March 19)
  • E-Business Suite: Understanding Lead Times in Oracle Work In Process (March 25)
  • JD Edwards: JD Edwards World - Understanding Multi-Currency Intercompany Entries in A/P and A/R (March 10)
  • JD Edwards: JD Edwards World A9.x Language Upgrade (March 17)
  • JD Edwards: JD Edwards EnterpriseOne - Overview of Voucher Match Automation (March 18)
  • JD Edwards: JD Edwards World - Introduction to Sleeper (March 19)
  • JD Edwards: JD Edwards World A9.3 Security - Play it Safe. Don't let your Environment be a Target. (March 24)
  • JD Edwards: JD Edwards EnterpriseOne - Getting to know the NEW Distribution Mobile Applications (Introduction) (March 31)
  • JD Edwards EnterpriseOne: EnterpriseOne Table Conversion information and Troubleshooting for Upgrades (March 3)
  • JD Edwards EnterpriseOne: Working with Change Requests (March 4)
  • JD Edwards EnterpriseOne: Benefit Enrollment Fundamentals - Setting Up Enrollment Parameters, Dates Codes and Defining Eligibility (March 11)
  • JD Edwards EnterpriseOne: The 5 Essential Keys To Understanding Fiscal/Anniversary Rollover (Leave Balance Rollover) (March 25)
  • JD Edwards EnterpriseOne: Manufacturing Commitment Processing (March 26)
  • JD Edwards World: Overview of Payroll Journal Entries (March 5)
  • Middleware: WebLogic Server - Information Needed to Investigate Request Response Delays (Japanese only) (March 11)
  • Middleware: WebCenter Content 11g Clusters (March 11)
  • PeopleSoft Enterprise: PeopleSoft HCM Updates for the Affordable Care Act (March 25)
  • PeopleSoft Enterprise: Using PeopleSoft Update Manager (PUM) - Top Topics (March 31)

In Enkitec We Trust

Jason Arneil - Tue, 2015-03-03 06:02

The 2nd of March 2015 was my first day as part of the Accenture Enkitec Group.

When I first started using Exadata back in 2011, the one thing I relied on more than anything else to get me up to speed was the original Expert Oracle Exadata book by Kerry Osborne, Randy Johnson, and Tanel Poder. I am equally sure the 2nd Edition will prove just as valuable.

e-dba years

I have thoroughly enjoyed my past 3 1/2 years with e-dba. Both e-dba and I have grown enormously in this time, and it has been a really positive experience being part of a growing company.

At e-dba I had all the exposure to Exadata I could have wanted: they have many Exadata customers and a large number of exa racks under their care.

It was a wrench to leave.

Feeling gravity’s pull

Over the past couple of years I have come to know and appreciate the talents of several of the members of the Accenture Enkitec Group. Kerry expressed this well in his Oak Table World talk at OpenWorld 2014, describing the effect as a "gravitational pull" when recruiting people.

It is certainly something I felt when weighing up my options. The prospect of working with such an outstanding collection of Oracle talent was too hard to ignore.

I would always have regretted not having taken the chance to work in this team.

I can’t wait to get started.


Using Documents Cloud Service PaaS to Accelerate Product Development

WebCenter Team - Tue, 2015-03-03 05:52

by Victor Owuor, Senior Director, Oracle Documents Cloud Service Product Management


Many applications have to manage documents or images in addition to solving the business problems that they target. For example, a marketing campaign manager may want to drop the very latest marketing brochure into a campaign she is running; a document that is constantly being updated by the marketing team. In general, users have certain expectations about what they should be able to do with the documents. They may want to:

  • Upload the documents from their desktop by simply dragging them to a specific folder. They would expect a sync client to push the document to the relevant location associated with the application.
  • Access the documents from their mobile application or from a different computer workstation or via the Web.
  • Easily but securely share the documents with other users, either within or external to their organization for work collaboration.
  • Have the documents available even when offline

Whether you are building a Cloud, on-premise, or custom application, you do not have to develop these capabilities from scratch. Documents Cloud Service is a Platform-as-a-Service offering from Oracle that can be easily leveraged with Oracle SaaS, on-premise and any third party applications. Oracle Documents Cloud Service provides a platform that developers can use to expose those features quickly with minimal development effort. Developers have a choice between:

  • Iframe Embedding - Embedding the Document Cloud Service HTML interface within their application to surface the document capability. This approach does not require you to implement a user interface to the Document Cloud Service and automatically surfaces all features of the Service in your application.
  • API - Surfacing Document Cloud Service within the user interface of their application by making web service calls to the Document Service API. This approach allows you to create a custom user interface that is tailored to your specific application.

Additionally, developers may use Applinks to leverage the same user credentials for Oracle Documents.

This article will focus on Iframe Embedding. We will address the API in a future article.

Iframe Embedding

You can embed the Oracle Documents Cloud Service interface in an inline frame (IFRAME tag). When you do that, you may want to remove the default branding and other aspects of the user interface. You can do that by adding “/embed” to the URL of the page that you would like to embed.

For example, the top level folder URL in the standard interface is as follows:

https://www.example.com/documents/home/nameasc


If you change the URL as follows, you will get a user interface that is better suited to embedding.


https://www.example.com/documents/embed/home/nameasc


You can embed that iframe within your application as follows:

<!doctype html>
<html>
<head>
<title>IFRAME DEMO</title>
</head>
<body>
<iframe src="https://www.example.com/documents/embed/home/nameasc" width="100%" height="300">
<p>Your browser does not support iframes.</p>
</iframe>
</body>
</html>

Here is another example to show how you can embed the preview of a file.

The file preview URL

https://www.example.com/documents/fileview/D92DA147AC6B1F0698DFB7A2F6C3FF17C1177A968060/
_Getting_Started_with_PBCS.docx



To get to the embed view for Preview, just add "/embed" into the URL, e.g.

https://www.example.com/documents/embed/fileview/
D92DA147AC6B1F0698DFB7A2F6C3FF17C1177A968060/_Getting_Started_with_PBCS.docx


You can find more help on how to embed the Web UI here.

And for more information and materials on Documents Cloud Service, visit us here. Until the next time!


Benefit from Organizing Assets and Users with Support Identifier Groups

Joshua Solomin - Mon, 2015-03-02 19:43

Customer User Administrators (CUAs) are often tasked with managing employee access to Oracle support resources. Oracle uses Support Identifiers (SIs) to help CUAs identify which employees need access to support, and which resources are available to them when they do.

If you have support users scattered across a large geographic area, or have to manage dozens (or possibly hundreds) of software and hardware assets, use Support Identifier Groups (SIGs) to simplify the process of giving your users access to support resources. SIGs can help you align privileges and service request flows to specific projects, locations, or assets.

To make the most of Support Identifier Groups, you will need to pre-plan how users and assets are best organized. Once defined, you can set up your Groups, adding users and assets logically, the way you need them. When your organization purchases new hardware or software assets (with an associated new SI), these can automatically be assigned to a default SIG that you establish.

NOTE: Be aware that the Default SI will apply to all new assets in all new orders across your Account/Organization. 

Simple. Easy. Maintainable. Visit the Support Identifier Groups Information Center to learn more, view videos and instructions on how to get started, and review best practices.

Different plan_hash_value same plan

Bobby Durrett's DBA Blog - Mon, 2015-03-02 15:38

I mentioned this same effect in an earlier post about SQL profiles: link

I get different plan_hash_value values for a query each time I run an explain plan or run the query.  I see this in queries whose plan includes a system generated temporary segment, like this:

|   1 |  TEMP TABLE TRANSFORMATION   |                             |
...
|  72 |    TABLE ACCESS STORAGE FULL | SYS_TEMP_0FD9D668C_764DD84C |

For some reason the system generated temporary table name gets included in the plan_hash_value calculation.  This makes plan_hash_value a less than perfect way to compare two plans to see if they are the same.
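
As a rough workaround -- a sketch of one possible normalization, not the method used here -- you can build a plan signature that masks the SYS_TEMP_% names before comparing plans:

-- LISTAGG is limited to 4000 characters, so this only works for
-- reasonably sized plans
select sql_id, plan_hash_value,
       listagg(operation || ' ' ||
               case when object_name like 'SYS_TEMP%' then 'SYS_TEMP_X'
                    else object_name end, ' | ')
         within group (order by id) as normalized_plan
from v$sql_plan
where sql_id = '&sql_id'
group by sql_id, plan_hash_value, child_number;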

Last week I was using my testselect package to test the effect of applying a patch to fix bug 20061582.  I used testselect to grab 1160 select statements from production and got their plans with and without the patch applied on a development database.  I didn't expect many, if any, plans to change based on what the patch does.  Surprisingly, 115 out of the 1160 select statements had a changed plan, but all the ones I looked at had the system generated temporary table names in their plan.

Now, I am going to take the queries that have different plans with and without the patch and execute them both ways.  I have a feeling that the plan differences are mainly due to system generated temp table names and their execution times will be the same with and without the patch.

I’ve run across other limitations of plan hash value as I mentioned in an earlier post: link

I’m still using plan_hash_value to compare plans but I have a list of things in my head that reminds me of cases where plan_hash_value fails to accurately compare two plans.

– Bobby

P.S. After posting this I realized that I didn’t know how many of the 115 select statements with plans that differed with and without the patch had system generated temp tables.  Now I know.  114 of the 115 have the string “TEMP TABLE TRANSFORMATION” in their plans.  So, really, there is only one select statement for which the patch may have actually changed its plan.

P.P.S. I reapplied the patch and verified that the one sql_id didn’t really change plans with the patch.  So, that means all the plan changes were due to the system generated name.  Also, all the executions times were the same except for one query that took 50 seconds to parse without the patch and 0 with the patch.  So, one of the queries with the system generated temp table name happened to benefit from the patch.  Very cool!

P.P.P.S This was all done on an 11.2.0.4 Exadata system.

Categories: DBA Blogs

Parallel Execution -- 1b The PARALLEL Hint and AutoDoP (contd)

Hemant K Chitale - Mon, 2015-03-02 09:38
Continuing the previous thread, having restarted the database again, with the same CPU_COUNT and missing I/O Calibration statistics  ....

The question this time is : What if the table level DoP is specifically 1 ?

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Mon Mar 2 23:22:28 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 135
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 64
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
SYS>
SYS>select * from dba_rsrc_io_calibrate;

no rows selected

SYS>
SYS>connect hemant/hemant
Connected.
HEMANT>set serveroutput off
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>alter table large_table parallel 1;

Table altered.

HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2622 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
8

HEMANT>

Aaha! Again! The same SQL statement, the same SQL_ID, the same Execution Plan (Plan Hash Value) -- and Oracle chose to use 8 PX Servers for the query! Again, it ignored the table level DoP (of 1).

So, once again, we see that Oracle actually computes a DoP that looks like CPU_COUNT x PARALLEL_THREADS_PER_CPU. Let's verify this.

HEMANT>connect / as sysdba
Connected.
SYS>alter system set parallel_threads_per_cpu=4;

System altered.

SYS>alter system flush shared_pool;

System altered.

SYS>connect hemant/hemant
Connected.
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
1

HEMANT>set serveroutput off
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8b0ybuspqu0mm, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from Large_Table

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 1311 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 1311 (1)| 00:00:16 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 1311 (1)| 00:00:16 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='8b0ybuspqu0mm';

PX_SERVERS_EXECUTIONS
---------------------
16

HEMANT>

YES SIR! Oracle chose to use 16 PX Servers this time. So that does look like CPU_COUNT x PARALLEL_THREADS_PER_CPU.  Have you also noticed the COST?  The COST has also halved.  So, the COST is also computed based on the number of PX Servers that Oracle expects to be able to grab and use.
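
That expected value can be computed directly -- a quick sketch of the arithmetic:

-- 4 CPUs x 4 threads per CPU = 16 in this run
select (select to_number(value) from v$parameter where name = 'cpu_count')
     * (select to_number(value) from v$parameter where name = 'parallel_threads_per_cpu')
       as expected_default_dop
from dual;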

.
.
.


Categories: DBA Blogs

Oracle Data Provider for .NET now on NuGet

Christian Shay - Mon, 2015-03-02 08:30

ODP.NET, Managed Driver is now on NuGet, meaning that you can add ODP.NET to your Visual Studio project with just a few clicks in the NuGet Package Manager. We've also published an Oracle By Example walkthrough to take you step by step through the process of using NuGet and ODP.NET.

Here we are in the NuGet Package Manager:




When searching for us in the package manager, make sure to get the official package - look for the word "Official" in the title.



There are actually two NuGet packages available:

ODP.NET, Managed Driver - Official
NuGet id: Oracle.ManagedDataAccess

This adds Oracle.ManagedDataAccess.dll to your project and also makes needed configuration entries in your app.config or web.config.


ODP.NET, Managed Entity Framework Driver - Official
NuGet id: Oracle.ManagedDataAccess.EntityFramework

This adds Oracle.ManagedDataAccess.EntityFramework.dll as well as config file configuration. It also has a dependency on the ODP.NET package above and will pull it into your project, as well as EF 6 if needed.

If you want to host this package on your local intranet, it is also available for download on the OTN .NET download page.

Please note that if you want to use Visual Studio integration features, such as browsing your Oracle Schema in Server Explorer, or using Entity Designer or Table Adapter Configuration wizard, you should still install Oracle Developer Tools for Visual Studio, as a NuGet package does not provide any of the Visual Studio integration components needed to do design-time work.

Query the Enterprise Manager collected metrics

Yann Neuhaus - Mon, 2015-03-02 08:10

Enterprise Manager (Cloud Control for example) gathers a lot of metrics. You can display them from the GUI, but you can also query the SYSMAN views directly. Today, I wanted to get the history of free space in an ASM disk group for the previous week. Here is how I got it.
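
A sketch along these lines queries the repository directly (the metric name, metric column and disk group key are assumptions here -- check SYSMAN.MGMT$METRIC_DETAILS for the exact names in your version):

select target_name, rollup_timestamp, average
from sysman.mgmt$metric_hourly
where metric_name = 'DiskGroup_Usage'    -- assumed metric name
and metric_column = 'free_mb'            -- assumed metric column
and key_value = 'DATA'                   -- assumed disk group
and rollup_timestamp > sysdate - 7
order by rollup_timestamp;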

IR Scrolling - With a Little Help From My Friends

Denes Kubicek - Mon, 2015-03-02 04:02
If you are working with interactive reports, you will sooner or later be faced with the problem of wide reports. If you are taking care of the page layout and perhaps have more than just an interactive report on the page, you will want to limit its size to something sensible. The first problem appears if you limit the width by setting the region attribute to something like this

style="width:830px"

and you will not see some of the columns:



If you add scrolling by wrapping the region in a div, adding the following to the region header:

<div style="width:810px;overflow-x:scroll">

and closing it in the footer by adding:

</div>



you will be able to scroll, but with two ugly side effects:

  • The action bar will be included in the scrolling as well and disappear as you scroll to the right.
  • The sort widgets for the columns will appear in the wrong position the more you scroll.




  You can solve this problem in the following way:

  • Remove the scrolling DIV from the region header / footer.
  • Use this JavaScript in the page Function and Global Variable Declaration:

    function onload_ir(p_width, p_report_id){

        // Wrap the IR data panel in a div of the given width and let it scroll
        $('<div id="scroll_me" style="width:' + p_width + 'px;overflow-x:auto;display:inline-block"></div>').insertBefore('#apexir_DATA_PANEL');
        $("#apexir_DATA_PANEL").appendTo("#scroll_me");
        $("#apexir_DATA_PANEL").show();

        // Overload the IR onload function so the sort widget is repositioned
        // to allow for the horizontal scroll offset
        var or_Finished_Loading = gReport._Finished_Loading;
        gReport._Finished_Loading = function(){
            or_Finished_Loading();
            if (gReport.current_control == 'SORT_WIDGET') {
                var offset_pos = $("#" + p_report_id).position().left;
                var pos_left = $('#apexir_rollover').css('left');
                pos_left = pos_left.replace('px', '');
                if (pos_left > p_width - 100) {
                    var new_pos = parseFloat(pos_left) + parseFloat(offset_pos) - 25;
                    $('#apexir_rollover').css('left', new_pos + 'px');
                }
            }
        };
    }


  • Create a Dynamic Action which runs After Refresh (fire on page load option should be turned on) of the IR region and execute this script there:

    onload_ir(810, 7990109002761687)


  • 810 is the width of the scrolling region, which is a bit less than the total width of the region.

  • 7990109002761687 is the id of the data grid of the interactive report. You can find this id if you use Firebug and inspect the point where the data grid is placed.




  What this script does is:

  • It will wrap the data grid in an additional div and add a scroll bar to it.
  • It will overload the IR onload function and add a sort-widget positioning function to it, in order to reposition the widget according to the scrolling.

  The important part of the overloading function was done by Tom Petrus, who is a big help when it comes to tricky stuff like this.

    Now, once you have done that, your report will show up properly once you scroll it.



    Enjoy.
    Categories: Development

    Webcast - Oracle Database 12c High Availability New Features

    Organizations today are dependent on IT to run efficient operations, quickly analyze information and compete more effectively. Consequently, it is essential that their IT infrastructure and databases...

    We share our skills to maximize your revenue!
    Categories: DBA Blogs

    The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes

    Vikram Das - Sun, 2015-03-01 14:53
    I got this error while upgrading an R12.1.3 instance to R12.2.4, when I completed AD.C.Delta 5 patches with November 2014 bundle patches for AD.C and was in the process of applying TXK.C.Delta5 with November 2014 bundle patches for TXK.C :

    Validation successful. All expected nodes are listed in ADOP_VALID_NODES table.
    [START 2015/03/01 04:53:16] Check if services are down
            [INFO] Run admin server is not down
         [WARNING]  Hotpatch mode should only be used when directed by the patch readme.
      [EVENT]     [START 2015/03/01 04:53:17] Performing database sanity checks
        [ERROR]     The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes: .
        Log file: /erppgzb1/erpapp/fs_ne/EBSapps/log/adop/adop_20150301_045249.log


    [STATEMENT] Please run adopscanlog utility, using the command

    "adopscanlog -latest=yes"

    to get the list of the log files along with snippet of the error message corresponding to each log file.


    adop exiting with status = 1 (Fail)

    I was really surprised, as I had already run the EBS Technology Codelevel Checker (patch 17537119) script checkDBpatch.sh on racnode1.
    To investigate, I checked inside checkDBpatch.sh and found that it creates a table called TXK_TCC_RESULTS.
    SQL> desc txk_tcc_results
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     TCC_VERSION                               NOT NULL VARCHAR2(20)
     BUGFIX_XML_VERSION                        NOT NULL VARCHAR2(20)
     NODE_NAME                                 NOT NULL VARCHAR2(100)
     DATABASE_NAME                             NOT NULL VARCHAR2(64)
     COMPONENT_NAME                            NOT NULL VARCHAR2(10)
     COMPONENT_VERSION                         NOT NULL VARCHAR2(20)
     COMPONENT_HOME                                     VARCHAR2(600)
     CHECK_DATE                                         DATE
     CHECK_RESULT                              NOT NULL VARCHAR2(10)
     CHECK_MESSAGE                                      VARCHAR2(4000)
    SQL> select node_name from txk_tcc_results;

    NODE_NAME
    --------------------------------------------------------------------------------
    RACNODE1
    I ran checkDBpatch.sh again, but the patch still failed with the previous error:
       [ERROR]     The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes: .
    It was already 5 AM on Saturday and I had been working through the night, so I thought it better to sleep and tackle this on Sunday. On Sunday morning, after a late breakfast, I looked at the problem again. This time, I realized that the error was complaining about racnode1 (in lower case) while the txk_tcc_results table had RACNODE1 (in upper case). To test my hunch, I immediately updated the value:
    update txk_tcc_results
    set    node_name = 'racnode1'
    where  node_name = 'RACNODE1';

    commit;
    I restarted the patch, and it went through. The patch was indeed failing because it was looking for a lower case value. I will probably log an SR with Oracle so that they change their code to make the node_name check case insensitive.
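
    In the meantime, a quick way to spot this kind of mismatch yourself is to compare the stored value against its lower-case form. A hedged sketch, using only the columns from the DESCRIBE above:

    select node_name
    from   txk_tcc_results
    where  node_name <> lower(node_name);

    Any row returned holds a node name that is not purely lower case, and is therefore a candidate for the case-sensitivity failure described here.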

    Further, I was curious why node_name was stored in all caps in fnd_nodes and txk_tcc_results. The file /etc/hosts had it in lowercase. I tried the hostname command at the Linux prompt:

    $ hostname
    RACNODE1

    That was unusual, as in our environment hostname always returns the value in lowercase, so I investigated further.
    [root@RACNODE1 ~]# sysctl kernel.hostname
    kernel.hostname = RACNODE1

    So I changed it

    [root@RACNODE1 ~]# sysctl kernel.hostname=racnode1
    kernel.hostname = racnode1
    [root@RACNODE1 ~]# sysctl kernel.hostname
    kernel.hostname = racnode1
    [root@RACNODE1 ~]#
    [root@RACNODE1 ~]# hostname
    racnode1
    Logged in again to see if root prompt changed:
    [root@racnode1 ~]#

    I also checked
    [root@racnode1 ~]# cat /etc/sysconfig/network
    NETWORKING=yes
    NETWORKING_IPV6=no
    NOZEROCONF=yes
    HOSTNAME=RACNODE1

    Changed it here also:
    [root@racnode1 ~]# cat /etc/sysconfig/network
    NETWORKING=yes
    NETWORKING_IPV6=no
    NOZEROCONF=yes
    HOSTNAME=racnode1

    I also changed it on racnode2.
    Categories: APPS Blogs

    Alternate Ledes for CUNY Study on Raising Graduation Rates

    Michael Feldstein - Sun, 2015-03-01 14:23

    By Phil Hill

    Last week MDRC released a study on the City University of New York’s (CUNY) Accelerated Study in Associate Programs (ASAP), describing it in near-breathless terms.

    Title page

    • ASAP was well implemented. The program provided students with a wide array of services over a three-year period, and effectively communicated requirements and other messages.
    • ASAP substantially improved students’ academic outcomes over three years, almost doubling graduation rates. ASAP increased enrollment in college and had especially large effects during the winter and summer intersessions. On average, program group students earned 48 credits in three years, 9 credits more than did control group students. By the end of the study period, 40 percent of the program group had received a degree, compared with 22 percent of the control group. At that point, 25 percent of the program group was enrolled in a four-year school, compared with 17 percent of the control group.
    • At the three-year point, the cost per degree was lower in ASAP than in the control condition. Because the program generated so many more graduates than the usual college services, the cost per degree was lower despite the substantial investment required to operate the program.

    Accordingly, the media followed suit with breathless coverage[1]. Consider this from Inside Higher Ed and their article titled “Living Up to the Hype”:

    Now that firm results are in, across several different institutions, CUNY is confident it has cracked the formula for getting students to the finish line.

    “It doesn’t matter that you have a particularly talented director or a president who pays attention. The model works,” said John Mogulescu, the senior university dean for academic affairs and the dean of the CUNY School of Professional Studies. “For us it’s a breakthrough program.”

    MDRC and CUNY also claim that “cracking the code” means that other schools can benefit, as described earlier in the article:

    “We’re hoping to extend that work with CUNY to other colleges around the country,” said Michael J. Weiss, a senior associate with MDRC who coauthored the study.

    Unfortunately . . .

    If you read the report itself, the data doesn’t back up the bold claims in the executive summary and in the media. A more accurate summary might be:

    For the declining number of young, living-with-parents community college students planning to attend full-time, CUNY has explored how to increase student success while avoiding any changes in the classroom. The study found that a package of interventions requiring full-time enrollment, increasing per-student expenditures by 63%, and providing aggressive advising as well as priority access to courses can increase enrollment by 22%, inclusive of term-to-term retention. At the 3-year mark these combined changes translate into an 82% increase in graduation rates, but it is unknown if any changes to the interventions would affect the results, and it is unknown what results would occur at the 4-year mark. Furthermore, it is unclear whether this program can scale due to priority course access and effects on the growing non-traditional student population. If a state sets performance-funding based on 3-year graduation rates and nothing else, this program could even reduce costs.

    Luckily, the report is very well documented, so nothing is hidden. What are the problems that would lead to this alternate description?

    • This study is only for one segment of the population: those willing to go full-time, first-time students, low income, and with one or two developmental course requirements (not zero, not three or more). This targeted less than one-fourth of the CUNY 2-year student population, where 73% live at home with parents and 77% are younger than 22. For the rest, including the growing working-adult population:

    (p. 92): It is unclear, however, what the effects might be with a different target group, such as low-income parents. It is also unclear what outcomes an ASAP-type program that did not require full-time enrollment would yield.

    • The study required full-time enrollment (12 credits attempted per term) and only evaluated 3-year graduation rates, which almost explains the results by itself. Do the math (24 credits per year over 3 years, minus 3 – 6 because developmental courses don’t count for degree credit; spelled out after the quote below) and you see that going “full-time” and earning 66 credits is likely the only way to graduate with a 60-credit associate’s degree in 3 years. As the report itself states:

    (p. 85): It is likely that ASAP’s full-time enrollment requirement, coupled with multiple supports to facilitate that enrollment, were central to the program’s success.
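
    Spelling that arithmetic out: 12 credits per term across 6 regular terms is 72 credits attempted over 3 years; subtracting the 3 to 6 developmental credits that earn no degree credit leaves 66 to 69, which only just clears the 60 credits an associate’s degree requires, and only if the student never drops below full-time.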

    • The study created a special class of students with priority enrollment. One of the biggest challenges of public colleges is for students to even have access to the courses they need. The ASAP students were given priority enrollment as the report itself states:

    (p. 34): In addition, students were able to register for classes early in every semester they participated in the program. This feature allowed ASAP students to create convenient schedules and have a better chance of enrolling in all the classes they need. Early registration may be especially beneficial for students who need to enroll in classes that are often oversubscribed, such as popular general education requirements or developmental courses, and for students in their final semesters as they complete the last courses they need to graduate.

    • The study made no attempt to understand the many variables at play. There were a plethora of interventions – full-time enrollment requirement, priority enrollment, special seminars, reduced load on advisers, etc. Yet we have no idea which components lead to which effects. From the report

    (p. 85): What drove the large effects found in the study and which of ASAP’s components were most important in improving students’ academic outcomes? MDRC’s evaluation was not designed to definitively answer that question. Ultimately, each component in ASAP had the potential to affect students’ experiences in college, and MDRC’s evaluation estimates the effect of ASAP’s full package of services on students’ academic outcomes.

    • The study made no changes at all to actual teaching and learning practices. It almost seems this was the point: to find out how we can change everything except teaching and learning to get students to enroll full-time. From the report

    (p. 34): ASAP did not make changes to pedagogy, curricula, or anything else that happened inside of the classroom.

    What Do We Have Left?

    In the end, this was a study on pulling out all of the non-teaching stops to see if we can get students to enroll full-time. Target only students willing to go full-time, then constantly advise them to enroll full-time and stick with it, and remove as many financial barriers (funding the gap between cost and financial aid, free textbooks, gas cards, and so on) as is feasible. With all of this effort, the real result of the study is that they increased the number of credits attempted and credits earned by 22%.

    We already know that full-time enrollment is the biggest variable for graduation rates in community colleges, especially if measured over 4 years or less. Look at the recent National Student Clearinghouse report at a national level (tables 11-13):

    • Community college 4-year completion rate for exclusively part-time students: 2.32%
    • Community college 4-year completion rate for mixed enrollment students (some terms FT, some PT): 14.25%
    • Community college 4-year completion rate for exclusively full-time students: 27.55%

    And that data is for 4 years – 3 years would have been more dramatic simply due to the fact that it’s almost impossible to get 60 credits if you don’t take at least 12 credits per term over 3 years.

    What About Cost Analysis?

    The study showed that CUNY spent approximately 63% more per student for the program compared to the control group. The bigger claim, however, is that cost per graduate is actually lower (163% of the cost with 182% of the graduates). But what about the students who don’t graduate or transfer? What about the students who graduate in 4 years instead of 3? Colleges spend money on all their students, and most community college students (60%) can only go part-time and will never be able to graduate in 3 years.

    Even if you factor in performance-based funding, using a 3-year graduation basis is misleading. No state is considering funding only for 3-year successful graduation. If that were so, I have a much easier solution – refuse to admit any students seeking less than 12 credits per term. That will produce dramatic cost savings and dramatic increases in graduation rates . . . as long as you’re willing to completely ignore the traditional community college mission that includes:

    serv[ing] all segments of society through an open-access admissions policy that offers equal and fair treatment to all students

    Can It Scale?

    Despite the claims that “the model works” and that CUNY has cracked the formula, does the report actually support this claim? Specifically, can this program scale?

    First of all, the report only makes its claims for a small percentage of students that are predominantly young and live at home with their parents – we don’t know if it applies beyond the target group as the report itself calls out.

    But within this target group, I think there are big problems with scaling. One of which is the priority enrollment in all courses, including oversubscribed courses and those available at convenient times. The control group was at a disadvantage as were all non-target students (including the growing working adult population and students going back to school). This priority enrollment approach is based on scarcity, and the very nature of scaling the program will reduce the benefits of the intervention.

    I have Premier Silver status at United Airlines thanks to a few international trips. If this status gave me realistic priority access to first-class upgrades, then I would be more likely to fly United on a routine basis. As it is, however, I often show up at the gate and see myself #30 or higher in line for first-class upgrades when the cabin only has 5-10 first-class seats available. The priority status has lost most of its benefits as United has scaled such that more than a quarter of all passengers on many routes also have priority status.

    CUNY plans to scale from 456 students in the ASAP study all the way up to 13,000 students in the next two years. Assuming even distribution over two years, this changes the group size from 1% of the entering freshman population to 19%. Won’t that make a dramatic difference in how easy it will be for ASAP students to get into the classes and convenient class times they seek? And doesn’t this program conflict with the goals of offering “equal and fair treatment to all students”?

    Alternate Ledes for Media Coverage of Study

    I realize my description above is too lengthy for media ledes, so here are some others that might be useful:

    • CUNY and MDRC prove that enrollment correlates with graduation time.
    • Requiring full-time enrollment and giving special access to courses leads to more full-time enrollment.
    • What would it cost to double an artificial metric without asking faculty to change any classroom activities? 63% more per student.
    Don’t Get Me Wrong

    I’m all for spending money and trying new approaches to help students succeed, including raising graduation rates. I’m also for increasing the focus on out-of-classroom support services to help students. I’m also glad that CUNY is investing in a program to benefit its own students.

    However, the executive summary of this report and the resultant media coverage are misleading. We have not cracked the formula, CUNY is not ready to scale this program or export to other colleges, and taking the executive summary claims at face value is risky at best. The community would be better served if CUNY:

    • Made some effort to separate variables and effect on enrollment and graduation rates;
    • Extended the study to also look at more realistic 4-year graduate rates in addition to 3-year rates;
    • Included an analysis of diminishing benefits from priority course access; and
    • Performed a cost analysis based on the actual or planned funding models for community colleges.
    1. And this article comes from a reporter for whom I have tremendous respect.

    The post Alternate Ledes for CUNY Study on Raising Graduation Rates appeared first on e-Literate.

    Installing Oracle XE on CentOS

    The Anti-Kyte - Sun, 2015-03-01 11:33

    Another Cricket World Cup is underway. England are fulfilling their traditional role of making all of the other teams look like world beaters.
    To take my mind off this excruciating spectacle, I’ll concentrate this week on installing Oracle XE 11g on CentOS 7.

    Before I get into the nuts and bolts of the installation…

    Flavours of Linux

    Whilst there are many Linux Distros out there, they all share the same Linux kernel. Within this, there are a few Distros upon which most others are based.
    Debian provides the basis for Ubuntu and Mint among others.
    It uses the .deb package format.

    Red Hat Linux, in contrast, uses the RPM file format for its packages. Red Hat is the basis for Distros such as Fedora, CentOS…and Oracle Linux.

    For this reason, the Oracle Express Edition Linux version is packaged using rpm.
    Whilst it is possible to deploy it to a Debian based Distro – instructions for which are available here, deploying on CentOS is rather more straightforward.
    More straightforward, but not entirely so, as we will discover shortly…

    Getting Oracle Express Edition 11G

    Open your web browser and head over to the Oracle Express Edition download page.

    You’ll need to register for an account if you don’t already have one, but it is free.

    The file you need to download is listed under :

    Oracle Express Edition 11g Release 2 for Linux x64.

    NOTE XE 11G only comes in the 64-bit variety for Linux. If you’re running a 32-bit version of your Distro, then you’re out of luck as far as 11G is concerned.

    If you’re not sure whether you’re on 32-bit or 64-bit, the following command will help you :

    uname -i
    

    If this returns x86_64 then your OS is 64-bit.

    Installing XE

    You should now have downloaded the zipped rpm file which will look something like this :

    cd $HOME/Downloads
    ls -l
    -rwxrwx---. 1 mike mike 315891481 Dec 16 20:21 oracle-xe-11.2.0-1.0.x86_64.rpm.zip
    

    The next step is to uncompress…

     unzip oracle-xe-11.2.0-1.0.x86_64.rpm.zip
    

    When you run this, the output will look like this :

       creating: Disk1/
       creating: Disk1/upgrade/
      inflating: Disk1/upgrade/gen_inst.sql  
       creating: Disk1/response/
      inflating: Disk1/response/xe.rsp   
      inflating: Disk1/oracle-xe-11.2.0-1.0.x86_64.rpm 
    

    You now need to switch to the newly created Disk1 directory and become root

    cd Disk1
    su
    

    …and then install the package…

    rpm -ivh oracle-xe-11.2.0-1.0.x86_64.rpm
    

    If all goes well you should see…

    Preparing...                          ################################# [100%]
    Updating / installing...
       1:oracle-xe-11.2.0-1.0             ################################# [100%]
    Executing post-install steps...
    You must run '/etc/init.d/oracle-xe configure' as the root user to configure the database.
    
    Configuring XE

    The configuration will prompt you for

    1. the APEX http port (8080 by default)
    2. the database (TNS) listener port (1521 by default)
    3. A single password to be assigned to the database SYS and SYSTEM users
    4. whether you want the database to start automatically when the system starts (Yes by default)

    Unless you have other software, or Oracle Instances, running elsewhere, the defaults should be fine.

    Here we go then, still as root, run :

    /etc/init.d/oracle-xe configure
    

    The output, complete with the prompts will be something like :

    Oracle Database 11g Express Edition Configuration
    -------------------------------------------------
    This will configure on-boot properties of Oracle Database 11g Express 
    Edition.  The following questions will determine whether the database should 
    be starting upon system boot, the ports it will use, and the passwords that 
    will be used for database accounts.  Press <Enter> to accept the defaults. 
    Ctrl-C will abort.
    
    Specify the HTTP port that will be used for Oracle Application Express [8080]:8081
    
    Specify a port that will be used for the database listener [1521]:1525
    
    Specify a password to be used for database accounts.  Note that the same
    password will be used for SYS and SYSTEM.  Oracle recommends the use of 
    different passwords for each database account.  This can be done after 
    initial configuration:
    Confirm the password:
    
    Do you want Oracle Database 11g Express Edition to be started on boot (y/n) [y]:y
    
    Starting Oracle Net Listener...Done
    Configuring database...Done
    Starting Oracle Database 11g Express Edition instance...Done
    Installation completed successfully.
    

    Congratulations, you now have a running database. The first thing to do with it, however, is to shut it down.
    In fact, we need to do a re-start so that the menu items that have been added as part of the installation are visible.
    So, re-boot.

    NOTE – from this point on you can stop being root (although you may need to sudo occasionally).

    Once the system comes back, you will see the new Menu icons in the Applications menu under others :

    oracle_menu

    Just to confirm that your database is up and running, you can select the Run SQL Command Line option from this menu
    and run the following :

    
    conn system/pwd
    select sysdate from dual
    /
    

    This should return the current date.
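
    For a slightly stronger check (a hedged extra step, not part of the official configuration), you can also ask the instance itself from the same SQL Command Line session; V$INSTANCE is a standard dynamic performance view:

    select instance_name, status from v$instance;

    A healthy installation should report the XE instance with a STATUS of OPEN.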

    Sorting out the Environment Variables

    In the normal run of things, this is the one fiddly bit. There is a bug in one of the scripts Oracle uses to set the environment variables which may cause issues.

    To start with, let’s have a look at the main environment script…

    cat /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh
    

    This script is as follows :

    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
    export ORACLE_SID=XE
    export NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh`
    export ORACLE_BASE=/u01/app/oracle
    export PATH=$ORACLE_HOME/bin:$PATH
    

    There is a bug in the nls_lang.sh that is called from here. If your NLS_LANG value contains a space, then it will not be configured correctly. A full list of the affected NLS_LANG values is available in the Oracle XE Installation Guide for Debian based systems I mentioned earlier.

    The easiest way to fix this is to just edit the script :

    sudo gedit /u01/app/oracle/product/11.2.0/xe/bin/nls_lang.sh
    

    Right at the bottom of the script where it says :

    # construct the NLS_LANG
    #
    NLS_LANG=${nlslang}.${charset}
    
    echo $NLS_LANG
    

    …amend it so that the $NLS_LANG value is quoted :

    # construct the NLS_LANG
    #
    NLS_LANG="${nlslang}.${charset}"
    
    echo $NLS_LANG
    

    To test the change and make sure everything is now working properly…

    cd /u01/app/oracle/product/11.2.0/xe/bin
    
    . ./oracle_env.sh
    echo $ORACLE_HOME
    echo $ORACLE_SID
    echo $NLS_LANG
    echo $PATH
    

    You should now see the following environment variable settings :

    echo $ORACLE_HOME
    /u01/app/oracle/product/11.2.0/xe
    echo $ORACLE_SID
    XE
    echo $NLS_LANG
    ENGLISH_UNITED KINGDOM.AL32UTF8
    echo $PATH
    /u01/app/oracle/product/11.2.0/xe/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin:/home/mike/.local/bin:/home/mike/bin
    

    NOTE – the $NLS_LANG should have a setting appropriate for your system (in my case ENGLISH_UNITED KINGDOM.AL32UTF8).

    The Oracle bin directory is now at the start of $PATH.

    Next, we need to ensure that these environment variables are set for all sessions. This can be done by running …

    sudo cp /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh /etc/profile.d/.
    

    To check this, you can start a new terminal session and echo the environment variables to make sure they have been set.

    Getting the Menu Items to Work

    To do this, you simply need to make sure that the oracle user, as well as your own user, is a member of the dba group :

    sudo usermod -a -G dba oracle
    sudo usermod -a -G dba mike
    

    To check :

    sudo grep dba /etc/group
    dba:x:1001:oracle,mike
    $
    

    The menu items for starting up and shutting down the database etc. should now work.

    Enabling the Getting Started Desktop Icon

    The final touch. The installation creates a Getting Started icon on the desktop which is designed to open the Database Home Page of the APEX application that comes with XE.

    In order to make it work as desired, you simply need to right-click the icon and select Properties.
    In the Permissions Tab check the box to “Allow executing file as program”.
    Close the window.

    You will notice that the icon has transformed into the familiar Oracle beehive and is now called
    Get Started With Oracle Database 11g Express Edition.

    Clicking on it now will reward you with …

    db_home

    All-in-all then, this installation is reasonably painless when compared with doing the same thing on a Debian system.
    I wish the same could be said of following the England Cricket Team.


    Filed under: Linux, Oracle Tagged: CentOS, nls_lang.sh, Oracle 11g Express Edition, oracle_env.sh

    Cedar’s Oracle Cloud and PeopleSoft Day

    Duncan Davies - Sat, 2015-02-28 18:36

    Cedar held its annual Oracle Cloud and PeopleSoft Day in London on Friday, with almost a hundred people in attendance (about 80 customers, plus staff from Oracle and Cedar).

    It was a great success, with a really positive vibe – customers are looking to do great things with both PeopleSoft and Oracle’s Cloud suite – and a privilege to be part of.

    Here are some photos from the day:

    Graham welcomes everyone
    2015-02-27 10.05.59

     

    Marc Weintraub gave a great keynote (from his office at 2:30am!)
    Marc Weintraub - Keynote

     

    Liz and I discuss the practical applications of the PeopleSoft Roadmap
    Liz and Duncan - PeopleSoft Roadmap and Cloud

     

    Mike takes us through the upcoming Oracle Cloud Release 10 features
    2015-02-27 12.26.53

     

    Jo talks about ‘Taleo for PeopleSoft People’
    Jo and Duncan - Taleo for PeopleSoft People

     

    Simon handles the prize draw
    2015-02-27 15.48.56

    So, a fun event with lots of knowledge sharing. My absolute favourite part is being able to connect customers who can help each other though. I lost count of the number of times we were able to say “oh, you’re doing <some project> are you? In that case, let me introduce you to <another client> as they’ve just finished doing that very thing” and then being able to leave them to share their experiences.


    Unsubscribe

    Michael Feldstein - Sat, 2015-02-28 16:00

    By Michael Feldstein

    A little while back, e-Literate suddenly got hit by a spammer who was registering for email subscriptions to the site at a rate of dozens of new email addresses every hour. After trying a number of less extreme measures, I ended up removing the subscription widget from the site. Unfortunately, as a few of you have since pointed out to me, by removing the option to subscribe by email, I also inadvertently removed the option to unsubscribe. Once I realized there was a problem (and cleared some time to figure out what to do about it), I investigated a number of other email subscription plugins, hoping that I could find one that is more secure. After some significant research, I came to the conclusion that there is no alternate solution that I can trust more than the one we already have.

    The good news is that I discovered the plugin we have been using has an option to disable the subscribe feature while leaving on the unsubscribe feature. I have done so. You can now find the unsubscribe capability back near the top of the right-hand sidebar. Please go ahead and unsubscribe yourself if that’s what you’re looking to do. If any of you need help unsubscribing, please don’t hesitate to reach out to me.

    Sorry for the trouble. On a related note, I hope to reactivate the email subscription feature for new subscribers once I can find the right combination of spam plugins to block the spam registrations without getting in the way of actual humans trying to use the site.

    The post Unsubscribe appeared first on e-Literate.

    Even More Oracle Database Health Checks with ORAchk 12.1.0.2.1 and 12.1.0.2.3 (Beta)

    As we have discussed before, it can be a challenge to quantify how well your database is meeting operational expectations and identify areas to improve performance. Database health checks are...

    We share our skills to maximize your revenue!
    Categories: DBA Blogs

    Databricks and Spark update

    DBMS2 - Sat, 2015-02-28 05:06

    I chatted last night with Ion Stoica, CEO of my client Databricks, for an update both on his company and Spark. Databricks’ actual business is Databricks Cloud, about which I can say:

    • Databricks Cloud is:
      • Spark-as-a-Service.
      • Currently running on Amazon only.
      • Not dependent on Hadoop.
    • Databricks Cloud, despite having a 1.0 version number, is not actually in general availability.
    • Even so, there are a non-trivial number of paying customers for Databricks Cloud. (Ion gave me an approximate number, but is keeping it NDA until Spark Summit East.)
    • Databricks Cloud gets at data from S3 (most commonly), Redshift, Elastic MapReduce, and perhaps other sources I’m forgetting.
    • Databricks Cloud was initially focused on ad-hoc use. A few days ago the capability was added to schedule jobs and so on.
    • Unsurprisingly, therefore, Databricks Cloud has been used to date mainly for data exploration/visualization and ETL (Extract/Transform/Load). Visualizations tend to be scripted/programmatic, but there’s also an ODBC driver used for Tableau access and so on.
    • Databricks Cloud customers are concentrated (but not unanimously so) in the usual-suspect internet-centric business sectors.
    • The low end of the amount of data Databricks Cloud customers are working with is 100s of gigabytes. This isn’t surprising.
    • The high end of the amount of data Databricks Cloud customers are working with is petabytes. That did surprise me, and in retrospect I should have pressed for details.

    I do not expect all of the above to remain true as Databricks Cloud matures.

    Ion also said that Databricks is over 50 people, and has moved its office from Berkeley to San Francisco. He also offered some Spark numbers, such as:

    • 15 certified distributions.
    • ~40 certified applications.
    • 2000 people trained last year by Databricks alone.

    Please note that certification of a Spark distribution is a free service from Databricks, and amounts to checking that the API works against a test harness. Speaking of certification, Ion basically agrees with my views on ODP, although like many — most? — people he expresses himself more politely than I do.

    We talked briefly about several aspects of Spark or related projects. One was DataFrames. Per Databricks:

    In Spark, a DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood. DataFrames can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs.

    I gather this is modeled on Python pandas, and extends an earlier Spark capability for RDDs (Resilient Distributed Datasets) to carry around metadata that was tantamount to a schema.

    SparkR is also on the rise, although it has the usual parallel R story to the effect:

    • You can partition data, run arbitrary R on every partition, and aggregate the results.
    • A handful of algorithms are truly parallel.

    So of course is Spark Streaming. And then there are Spark Packages, which are — and I’m speaking loosely here — a kind of user-defined function.

    • Thankfully, Ion did not give me the usual hype about how a public repository of user-created algorithms is a Great Big Deal.
    • Ion did point out that providing an easy way for people to publish their own algorithms is a lot easier than evaluating every candidate contribution to the Spark project itself. :)

    I’ll stop here. However, I have a couple of other Spark-related posts in the research pipeline.

    Categories: Other