
Not NULL Constraint Influences Access Path

Oracle in Action - Thu, 2015-03-12 23:12


The optimizer can make use of explicitly defined NOT NULL constraints to take advantage of an index and avoid a full table scan, since a B-tree index stores only non-NULL values.
When count(constant) or count(*) is queried, we want to count the number of rows in the table. Hence, if there is a column which is defined as NOT NULL and has an index on it, the number of entries in the index is bound to be the same as the number of rows, and the query optimizer can use the index to count the rows in the table.

Similarly, when count(not-nullable-column) is queried, we want to count the number of rows having non-NULL values in that column. Since the column has a NOT NULL constraint on it, every row in the table has a value in the column, so count(not-nullable-column) is the same as count(*). As a result, the query optimizer can use the index on the column to process the query.
In fact, in both of the cases above, any B-tree index containing at least one not-nullable column can serve the purpose.
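As a small illustrative sketch of that last point (table and index names here are hypothetical, not part of the demo that follows): any B-tree index that includes at least one NOT NULL column has exactly one entry per table row, so it can answer count(*) as well.

```sql
-- Hypothetical example: t.a is declared NOT NULL, t.b is nullable.
create table t (a number not null, b number);
create index t_idx on t(a, b);

-- Every row has a non-NULL value in a, so every row appears in T_IDX;
-- the optimizer may count index entries instead of scanning the table.
select count(*) from t;
```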

When count(nullable-column) is queried, we want to count the number of rows having non-NULL values in the column. If we have an index on the column, the index stores only non-NULL values and hence can be used by the query optimizer to produce the result.
In fact, the optimizer can use any index containing the nullable column for this purpose.

To demonstrate the above behaviour, I have created a table HR.TEST with two columns:
- NOTNULL, which has a NOT NULL constraint
- NULLABLE, which holds the same data as column NOTNULL but has not been declared NOT NULL, and which has a B-tree index on it

SQL>drop table hr.test purge;
    create table hr.test (notnull number not null, nullable number);
    insert into hr.test select rownum, rownum from all_tables;
    create index hr.test_idx on hr.test(nullable);
    exec dbms_stats.gather_table_stats ('HR','TEST', cascade => true);

Now I will query count with various arguments and check whether the optimizer can use the index on the NULLABLE column.

Note that to process count(*), count(1) and count(notnull), the query optimizer uses a full table scan. Although the column NULLABLE has non-NULL values in all rows, it has not been explicitly declared NOT NULL, so the optimizer does not know that the number of entries in the index correctly reflects the row count and therefore does not use the index.

SQL>select count(*) from hr.test;
            select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 1mat065c25crk, child number 0
-------------------------------------
select count(*) from hr.test

Plan hash value: 1950795681
-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |     3 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |  TABLE ACCESS FULL | TEST |   108 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------

SQL>select count(1) from hr.test;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID gzpsn7ff3ncmc, child number 0
-------------------------------------
select count(1) from hr.test

Plan hash value: 1950795681
-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |     3 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |  TABLE ACCESS FULL | TEST |   108 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------

SQL>select count(notnull) from hr.test;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID 6kxdzxbac62b4, child number 0
-------------------------------------
select count(notnull) from hr.test

Plan hash value: 1950795681
-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |     3 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |  TABLE ACCESS FULL | TEST |   108 |     3   (0)| 00:00:01 |
-------------------------------------------------------------------

To process count(nullable), the optimizer uses the index on column NULLABLE, because we want to count the non-NULL values in that column and a B-tree index stores only non-NULL values.

SQL> select count(nullable) from hr.test;
select * from table(dbms_xplan.display_cursor);
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------
SQL_ID bz8rxw5rmmv8g, child number 0
-------------------------------------
select count(nullable) from hr.test

Plan hash value: 2284640995
-------------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |       |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |          |     1 |     4 |            |          |
|   2 |  INDEX FULL SCAN | TEST_IDX |   108 |   432 |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------

Now I will declare a NOT NULL constraint on column NULLABLE.

SQL> alter table hr.test modify (nullable not null);

Now if I query count(*), count(1), count(notnull) and count(nullable), the optimizer is able to avoid a full table scan by using the index on the NULLABLE column in all cases: since the indexed column NULLABLE has been declared NOT NULL, the optimizer knows that the entries in the index represent all the rows of the table.

SQL>select count(*) from hr.test;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------- 
SQL_ID 1mat065c25crk, child number 0
-------------------------------------
select count(*) from hr.test

Plan hash value: 2284640995
---------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |          |     1 |            |          |
|   2 |  INDEX FULL SCAN | TEST_IDX |   108 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------

SQL>select count(1) from hr.test;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------- 
SQL_ID gzpsn7ff3ncmc, child number 0
-------------------------------------
select count(1) from hr.test

Plan hash value: 2284640995
---------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |          |     1 |            |          |
|   2 |  INDEX FULL SCAN | TEST_IDX |   108 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------

SQL>select count(notnull) from hr.test;
    select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------- 
SQL_ID 6kxdzxbac62b4, child number 0
-------------------------------------
select count(notnull) from hr.test

Plan hash value: 2284640995
---------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |          |     1 |            |          |
|   2 |  INDEX FULL SCAN | TEST_IDX |   108 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------

SQL> select count(nullable) from hr.test;
     select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------- 
SQL_ID bz8rxw5rmmv8g, child number 0
-------------------------------------
select count(nullable) from hr.test

Plan hash value: 2284640995
---------------------------------------------------------------------
| Id  | Operation        | Name     | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------
|   0 | SELECT STATEMENT |          |       |     1 (100)|          |
|   1 |  SORT AGGREGATE  |          |     1 |            |          |
|   2 |  INDEX FULL SCAN | TEST_IDX |   108 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------

Hence, it is advisable to declare NOT NULL constraints on the relevant columns so that the optimizer can choose index access where applicable.
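As a hedged follow-up sketch (using the standard data dictionary views USER_TAB_COLUMNS and USER_IND_COLUMNS; whether a column is truly always populated is an application-level decision), you can look for indexed columns that are not declared NOT NULL and are therefore candidates for adding the constraint:

```sql
-- Indexed columns not declared NOT NULL: candidates for a NOT NULL
-- constraint if the application guarantees they are always populated.
select c.table_name, c.column_name
from   user_tab_columns c
join   user_ind_columns i
       on  i.table_name  = c.table_name
       and i.column_name = c.column_name
where  c.nullable = 'Y';
```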

References:
Troubleshooting Oracle Performance (Second Edition) by Christian Antognini


Categories: DBA Blogs

Oracle Linux 7.1 and MySQL 5.6

Wim Coekaerts - Thu, 2015-03-12 21:47
Yesterday we released Oracle Linux 7 update 1. The individual RPM updates are available from both public-yum (our free, open, public yum repo site) and Oracle Linux Network. The install ISOs can be downloaded from My Oracle Support right away and the public downloadable ISOs will be made available in the next few days from the usual e-delivery site. The ISOs will also, as usual, be mirrored to other mirror sites that also make Oracle Linux freely available.

One update in Oracle Linux 7 update 1 that I wanted to point out is the convenience of choosing MySQL 5.6 at install time. Oracle Linux 7 GA includes MariaDB 5.5 (due to our compatibility commitment in terms of shipping the exact same packages), and we added MySQL 5.6 RPMs to the ISO image (and to the yum repo channels online). So while it was easy to download and upgrade from MariaDB 5.5 to MySQL 5.6, there was no install-time option. With 7.1 we have included an installation option for MySQL, so you can decide which database to install in the installer or through kickstart with @mariadb or @mysql as a group. Again, MariaDB 5.5 is also part of Oracle Linux 7.1, and any users looking for strict package compatibility will find exactly that. All we have done is make a better alternative option (1) conveniently available and integrated and (2) free of any compatibility risks whatsoever, so you can easily run the real standard that is MySQL. A bug fix, if you will.
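As a minimal illustrative sketch of the kickstart route (only the @mysql/@mariadb group names come from the post; the surrounding directives are generic kickstart boilerplate, so treat this as an assumption, not a verified profile):

```
# Illustrative %packages section of an Oracle Linux 7.1 kickstart file
%packages
@core
@mysql        # select MySQL 5.6 instead of the default @mariadb group
%end
```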

I have a little screenshot available here.

Enjoy.

Hackathon Provides Opportunity for Collaboration and Innovation

Fishbowl team members competed in our third annual Hackathon over the weekend. Our consultants split up into four groups to create an innovative solution, and were given just over 24 hours to complete the task. The goal was to create usable software that could be used in the future when working with clients; some of our most creative products and services originally derived from Hackathon events.


Here’s a summary of what the teams came up with this year:

Team 1: Integrated Fishbowl’s Control Center with Oracle WebCenter Portal

Team 2: Integrated the Google Search Appliance with Oracle’s Document Cloud Service

Team 3: Worked to connect WebCenter Portal with Oracle’s new Document Cloud Service

Team 4: Got Fishbowl’s ControlCenter to run in the cloud with Google Compute Engine

At the end of the Hackathon, all participants voted on the best solution. The winning team was #2 (GSA-Doc Cloud Service integration), made up of Kim Negaard, Andy Weaver, and Mike Hill. Although there could only be one winner, overall consensus was that each team came up with something extremely useful that could help solve common problems we’ve run into while working with WebCenter and the GSA. If you’re interested in learning more about any of the team’s programs, feel free to contact us at info@fishbowlsolutions.com.

The post Hackathon Provides Opportunity for Collaboration and Innovation appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

12c Parallel Execution New Features: Concurrent UNION ALL - Part 2

Randolf Geist - Thu, 2015-03-12 14:41
In the first part of this series I've focused on the parallel degree chosen by the optimizer when dealing with the new concurrent UNION ALL feature.

I've shown that for the variant with only serial branches in the UNION ALL, the number of branches in principle dictates the parallel degree determined, even in cases of more complex plans that mix such a serial-branch-only UNION ALL operator with other parallel operations, for example via a join.

In this part I'll focus on the runtime behaviour of the feature, but before doing so let me show you what happens if you start mixing serial and parallel branches in the UNION ALL, like that (using the identical table setup as in the previous part):

select count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t_2
where regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t_2
where regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
);

The EXPLAIN PLAN output then looks like this:

--------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,00 | PCWP | |
| 5 | VIEW | | Q1,00 | PCWP | |
| 6 | UNION-ALL | | Q1,00 | PCWP | |
| 7 | PX SELECTOR | | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 9 | PX SELECTOR | | Q1,00 | PCWP | |
|* 10 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 11 | PX SELECTOR | | Q1,00 | PCWP | |
|* 12 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 13 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
|* 14 | TABLE ACCESS FULL| T_2 | Q1,00 | PCWP | |
| 15 | PX SELECTOR | | Q1,00 | PCWP | |
|* 16 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 17 | PX SELECTOR | | Q1,00 | PCWP | |
|* 18 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 19 | PX SELECTOR | | Q1,00 | PCWP | |
|* 20 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 21 | PX SELECTOR | | Q1,00 | PCWP | |
|* 22 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 23 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
|* 24 | TABLE ACCESS FULL| T_2 | Q1,00 | PCWP | |
| 25 | PX SELECTOR | | Q1,00 | PCWP | |
|* 26 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 27 | PX SELECTOR | | Q1,00 | PCWP | |
|* 28 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 29 | PX SELECTOR | | Q1,00 | PCWP | |
|* 30 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

8 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
10 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
12 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
14 - filter( REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
16 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
18 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
20 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
22 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
24 - filter( REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
26 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
28 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
30 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))

Note
-----
- Degree of Parallelism is 8 because of table property

So now the concurrent UNION ALL feature was activated automatically (no PQ_CONCURRENT_UNION hint required), as described in the documentation / white paper; you can tell from the PX SELECTOR operators shown for the serial branches (and from the fact that these operations are now shown as PCWP). Having at least one parallel branch activates the feature by default; it will even be used if you happen to have only parallel branches, and you would have to use the NO_PQ_CONCURRENT_UNION hint to prevent its usage.
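To make the two hints concrete, here is a hedged sketch (hint names as mentioned above; the query bodies are simplified stand-ins for the regexp-laden branches of the real test query, using the tables T2 and T_2 from the setup):

```sql
-- Force concurrent UNION ALL even when all branches are serial
select /*+ PQ_CONCURRENT_UNION */ count(*) from (
  select id from t2
  union all
  select id from t2
);

-- Prevent the feature when mixing serial and parallel branches
select /*+ NO_PQ_CONCURRENT_UNION */ count(*) from (
  select id from t2
  union all
  select id from t_2
);
```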

The Note section now shows a parallel degree of 8, and when I execute this SQL the actual degree used at runtime agrees with that, so in this case the degree shown is correct.

So how does the feature now behave at runtime? For that purpose I've changed the set-up slightly, by increasing the size of the serial table T2 to 2M rows (the initial setup used 200K rows), so that the parallel and serial table have the same number of rows. I've also changed the parallel degree of T_2 to 4 instead of 8 to make some points more obvious in the output:

-- This is the Parallel table
drop table t_2 purge;

drop table t2 purge;

create table t_2
compress
as
select
rownum as id
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_2')

alter table t_2 parallel 4;

-- This is the serial table
create table t2
compress
as
select
(rownum * 2) + 1 as id
, mod(rownum, 2000) + 1 as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(100000) */ * from dual
connect by
level <= 100000) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't2')

So let's execute the above query with this slightly modified set-up and look at the output of my XPLAN_ASH script to monitor the execution. These are some snippets from the script output after a couple of seconds:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
1| | | 0| 0| 0| (0) |
2| 3844K| | 4| 0| 4|@@@@ (4) |
3| 3844K| | 4| 0| 4|@@@@ (4) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
5| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
7| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
9| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
11| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
13| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
15| 3844K| | 4| 0| 4|@@@@ (4) |

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Act | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 1 | | SORT AGGREGATE | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 2 | | PX COORDINATOR | | 5 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 3 | | PX SEND QC (RANDOM) | :TQ10000 | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 4 | | SORT AGGREGATE | | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 5 | | VIEW | | 4 | 2984K | | | | | 0:P002(0)[753K],P001(0)[745K],P000(0)[744K],P003(0)[742K],sqlplus.exe(0)[0] |
| 6 | | UNION-ALL | | 4 | 2984K | | | | | 0:P002(0)[753K],P001(0)[745K],P000(0)[744K],P003(0)[742K],sqlplus.exe(0)[0] |
| 7 | | PX SELECTOR | | 4 | 753K | | | | | 0:P002(0)[753K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | ==> | TABLE ACCESS FULL| T2 | 4 | 753K | 2 | 14 | 14 | ################### | 1:P002(14)[753K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | | PX SELECTOR | | 3 | 745K | | | | | 0:P001(0)[745K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | ==> | TABLE ACCESS FULL| T2 | 3 | 745K | 2 | 14 | 14 | ################### | 1:P001(14)[745K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | | PX SELECTOR | | 2 | 744K | | | | | 0:P000(0)[744K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 12 | ==> | TABLE ACCESS FULL| T2 | 2 | 744K | 2 | 14 | 14 | ################### | 1:P000(14)[744K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 13 | | PX BLOCK ITERATOR | | 1 | 742K | | | | | 0:P003(0)[742K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 14 | ==> | TABLE ACCESS FULL| T_2 | 20 | 742K | 2 | 14 | 14 | ################### | 1:P003(14)[742K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 15 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 16 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 17 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 22 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 23 | | PX BLOCK ITERATOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 24 | | TABLE ACCESS FULL| T_2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 25 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 26 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
For brevity I've omitted some of the columns from the output, and want to focus specifically on the "Execs" and row-distribution figures per Parallel Execution Server. We can see from the "Activity Timeline" that the statement runs on average with four PX servers active as desired (remember I've lowered the degree to 4 for this run), so the work distribution is optimal at present. What we can tell from the "Execs" and row-distribution output is that the PX servers are in principle assigned in the following way:

- the first branch gets executed by all four PX servers but only one is assigned by the PX SELECTOR to actually do something
- the second branch gets executed by the remaining three PX servers but only one is assigned by the PX SELECTOR to actually do something
- the third branch gets executed by the remaining two PX servers but only one is assigned by the PX SELECTOR to actually do something
- the fourth branch gets executed by the remaining PX server

The fourth branch is particularly interesting because it's actually a parallel full table scan that is usually split into granules via the PX BLOCK ITERATOR operator and each granule is assigned to one of the (usually > 1) PX servers working on the operation. However, in this particular case, since there is only one PX server left (at present) actually only this PX server works on this "parallel" full table scan (and gets all the granules assigned), which isn't a problem in terms of parallelism since all four PX servers have something to do but results in a rather unusual distribution profile of this "parallel" full table scan. You can see this confirmed from the "row distribution" shown in the last column where you see in square brackets behind each process the number of rows produced (the number in parenthesis represents the ASH sample count per process and plan operation), and only one of the PX servers produced rows so far for this "parallel" full table scan operation.

Here's the script output some seconds later (45 seconds runtime so far):

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
1| | | 0| 0| 0| (0) |
2| 3844K| | 4| 0| 4|@@@@ (4) |
3| 3844K| | 4| 0| 4|@@@@ (4) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
5| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
7| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
9| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
11| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
13| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
15| 3844K| | 4| 0| 4|@@@@ (4) |
16| 3844K| | 4| 0| 4|@@@@ (4) |
17| 3844K| | 4| 0| 4|@@@@ (4) |
18| 3844K| | 4| 0| 4|@@@@ (4) |
19| 3844K| | 4| 0| 4|@@@@ (4) |
20| 3844K| | 4| 0| 4|@@@@ (4) |
21| 3844K| | 4| 0| 4|@@@@ (4) |
22| 3844K| | 4| 0| 4|@@@@ (4) |
23| 3844K| | 4| 0| 4|@@@@ (4) |
24| 3844K| | 4| 0| 4|@@@@ (4) |
25| 3844K| | 4| 0| 4|@@@@ (4) |
26| 3844K| | 4| 0| 4|@@@@ (4) |
27| 3844K| | 4| 0| 4|@@@@ (4) |
28| 3844K| | 4| 0| 4|@@@@ (4) |
29| 3844K| | 4| 0| 4|@@@@ (4) |
30| 3844K| | 4| 0| 4|@@@@ (4) |
31| 3844K| | 4| 0| 4|@@@@ (4) |
32| 3844K| | 4| 0| 4|@@@@ (4) |
33| 3844K| | 4| 0| 4|@@@@ (4) |
34| 3844K| | 4| 0| 4|@@@@ (4) |
35| 3844K| | 4| 0| 4|@@@@ (4) |
36| 3844K| | 4| 0| 4|@@@@ (4) |
37| 3844K| | 4| 0| 4|@@@@ (4) |
38| 3844K| | 4| 0| 4|@@@@ (4) |
39| 3844K| | 4| 0| 4|@@@@ (4) |
40| 3844K| | 4| 0| 4|@@@@ (4) |
41| 3844K| | 4| 0| 4|@@@@ (4) |
42| 3844K| | 4| 0| 4|@@@@ (4) |
43| 3844K| | 4| 0| 4|@@@@ (4) |
44| 3844K| | 4| 0| 4|@@@@ (4) |
45| 3844K| | 4| 0| 4|@@@@ (4) |

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Act | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 1 | | SORT AGGREGATE | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 2 | | PX COORDINATOR | | 5 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 3 | | PX SEND QC (RANDOM) | :TQ10000 | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 4 | | SORT AGGREGATE | | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 5 | | VIEW | | 4 | 8747K | | | | | 0:P002(0)[2200K],P000(0)[2187K],P003(0)[2180K],P001(0)[2180K],sqlplus.exe(0)[0] |
| 6 | | UNION-ALL | | 4 | 8747K | | | | | 0:P002(0)[2200K],P000(0)[2187K],P003(0)[2180K],P001(0)[2180K],sqlplus.exe(0)[0] |
| 7 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | ==> | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ################### | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | ==> | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 42 | 42 | ################### | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 12 | ==> | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ################### | 1:P000(41)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 13 | | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[1889K],P000(0)[37K],P001(0)[37K],P002(0)[37K],sqlplus.exe(0)[0] |
|* 14 | ==> | TABLE ACCESS FULL| T_2 | 52 | 2000K | 2 | 42 | 40 | ################### | 3:P003(39)[1889K],P000(1)[37K],P002(1)[37K],P001(0)[37K],sqlplus.exe(0)[0] |
| 15 | | PX SELECTOR | | 4 | 291K | | | | | 0:P003(0)[291K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 16 | ==> | TABLE ACCESS FULL| T2 | 4 | 291K | 41 | 5 | 5 | ### | 1:P003(5)[291K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 17 | | PX SELECTOR | | 3 | 163K | | | | | 0:P002(0)[163K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | ==> | TABLE ACCESS FULL| T2 | 3 | 163K | 44 | 2 | 2 | # | 1:P002(2)[163K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | | PX SELECTOR | | 2 | 150K | | | | | 0:P000(0)[150K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | ==> | TABLE ACCESS FULL| T2 | 2 | 150K | 44 | 2 | 2 | # | 1:P000(2)[150K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | | PX SELECTOR | | 1 | 143K | | | | | 0:P001(0)[143K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 22 | ==> | TABLE ACCESS FULL| T2 | 1 | 143K | 44 | 2 | 2 | # | 1:P001(2)[143K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 23 | | PX BLOCK ITERATOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 24 | | TABLE ACCESS FULL| T_2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 25 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 26 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The work distribution is still perfect, with all four servers active all the time. Interestingly, the operations that previously were executed by fewer than four PX servers now all show four executions, although this doesn't really change the work performed. So it looks like once a PX server is done with its work it executes the next branch only to find out that nothing is left to do, then immediately moves on to the next branch, until it finds something left to do. This implementation behaviour (together with something that is probably a bug) will become relevant when dealing with remote branches, as I'll show in the final part.
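The probing behaviour described above can be sketched with a toy model (this is purely illustrative and not Oracle internals; the branch names and durations are made up): each PX server walks the branch list in order, a serial branch is claimed whole by the first server that reaches it (the PX SELECTOR), and a finished server probes the remaining branches, so every branch ends up showing an "execution" by every server even when nothing is left to do.

```python
import heapq

# (branch name, work units) -- serial branches only, values are invented
BRANCHES = [("S1", 40), ("S2", 40), ("S3", 40), ("S4", 40)]

def simulate(servers=4):
    claimed = set()                                  # branches already taken
    probes = {name: set() for name, _ in BRANCHES}   # which servers touched each
    heap = [(0, "P%03d" % i) for i in range(servers)]
    heapq.heapify(heap)
    while heap:
        now, srv = heapq.heappop(heap)               # next server to become free
        for name, work in BRANCHES:                  # walk branches in order
            probes[name].add(srv)                    # counts as an "execution"
            if name not in claimed:                  # PX SELECTOR: first wins
                claimed.add(name)
                heapq.heappush(heap, (now + work, srv))
                break
        # if no branch had work left, the server simply goes idle
    return {name: len(p) for name, p in probes.items()}

print(simulate())
```

Even though each branch's work is done by exactly one server, every branch is probed by all four, mirroring the four executions per operation in the output above.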

You can tell from the row distribution for operation ID 14 that some granules had not yet been processed by the single PX server that had been working on the "parallel" full table scan so far; the other PX servers, having finished their serial branches a bit earlier, finally joined the "parallel" full table scan and processed some 37K rows each.

The PX servers are now working on the next four branches, which are simply four serial branches of similar workload, so they should all take around the same time. Knowing how this works, we can expect all of them to join the following parallel full table scan at roughly the same time (well, one PX server is a bit ahead of the others, so not exactly the same time).

This is what things look like after approx. 80 seconds:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
1| | | 0| 0| 0| (0) |
2| 3844K| | 4| 0| 4|@@@@ (4) |
3| 3844K| | 4| 0| 4|@@@@ (4) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
5| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
7| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
9| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
11| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
13| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
15| 3844K| | 4| 0| 4|@@@@ (4) |
16| 3844K| | 4| 0| 4|@@@@ (4) |
17| 3844K| | 4| 0| 4|@@@@ (4) |
18| 3844K| | 4| 0| 4|@@@@ (4) |
19| 3844K| | 4| 0| 4|@@@@ (4) |
20| 3844K| | 4| 0| 4|@@@@ (4) |
21| 3844K| | 4| 0| 4|@@@@ (4) |
22| 3844K| | 4| 0| 4|@@@@ (4) |
23| 3844K| | 4| 0| 4|@@@@ (4) |
24| 3844K| | 4| 0| 4|@@@@ (4) |
25| 3844K| | 4| 0| 4|@@@@ (4) |
26| 3844K| | 4| 0| 4|@@@@ (4) |
27| 3844K| | 4| 0| 4|@@@@ (4) |
28| 3844K| | 4| 0| 4|@@@@ (4) |
29| 3844K| | 4| 0| 4|@@@@ (4) |
30| 3844K| | 4| 0| 4|@@@@ (4) |
31| 3844K| | 4| 0| 4|@@@@ (4) |
32| 3844K| | 4| 0| 4|@@@@ (4) |
33| 3844K| | 4| 0| 4|@@@@ (4) |
34| 3844K| | 4| 0| 4|@@@@ (4) |
35| 3844K| | 4| 0| 4|@@@@ (4) |
36| 3844K| | 4| 0| 4|@@@@ (4) |
37| 3844K| | 4| 0| 4|@@@@ (4) |
38| 3844K| | 4| 0| 4|@@@@ (4) |
39| 3844K| | 4| 0| 4|@@@@ (4) |
40| 3844K| | 4| 0| 4|@@@@ (4) |
41| 3844K| | 4| 0| 4|@@@@ (4) |
42| 3844K| | 4| 0| 4|@@@@ (4) |
43| 3844K| | 4| 0| 4|@@@@ (4) |
44| 3844K| | 4| 0| 4|@@@@ (4) |
45| 3844K| | 4| 0| 4|@@@@ (4) |
46| 3844K| | 4| 0| 4|@@@@ (4) |
47| 3844K| | 4| 0| 4|@@@@ (4) |
48| 3844K| | 4| 0| 4|@@@@ (4) |
49| 3844K| | 4| 0| 4|@@@@ (4) |
50| 3844K| | 4| 0| 4|@@@@ (4) |
51| 3844K| | 4| 0| 4|@@@@ (4) |
52| 3844K| | 4| 0| 4|@@@@ (4) |
53| 3844K| | 4| 0| 4|@@@@ (4) |
54| 3844K| | 4| 0| 4|@@@@ (4) |
55| 3844K| | 4| 0| 4|@@@@ (4) |
56| 3844K| | 4| 0| 4|@@@@ (4) |
57| 3844K| | 4| 0| 4|@@@@ (4) |
58| 3844K| | 4| 0| 4|@@@@ (4) |
59| 3844K| | 4| 0| 4|@@@@ (4) |
60| 3844K| | 4| 0| 4|@@@@ (4) |
61| 3844K| | 4| 0| 4|@@@@ (4) |
62| 3844K| | 4| 0| 4|@@@@ (4) |
63| 3844K| | 4| 0| 4|@@@@ (4) |
64| 3844K| | 4| 0| 4|@@@@ (4) |
65| 3844K| | 4| 0| 4|@@@@ (4) |
66| 3844K| | 4| 0| 4|@@@@ (4) |
67| 3844K| | 4| 0| 4|@@@@ (4) |
68| 3844K| | 4| 0| 4|@@@@ (4) |
69| 3844K| | 4| 0| 4|@@@@ (4) |
70| 3844K| | 4| 0| 4|@@@@ (4) |
71| 3844K| | 4| 0| 4|@@@@ (4) |
72| 3844K| | 4| 0| 4|@@@@ (4) |
73| 3844K| | 4| 0| 4|@@@@ (4) |
74| 3844K| | 4| 0| 4|@@@@ (4) |
75| 3844K| | 4| 0| 4|@@@@ (4) |
76| | | 0| 0| 0| (0) |
77| 3844K| | 4| 0| 4|@@@@ (4) |
78| 3844K| | 4| 0| 4|@@@@ (4) |
79| 7688K| | 8| 0| 8|@@@@@@@@ (8) |
80| | | 0| 0| 0| (0) |
81| 3844K| | 4| 0| 4|@@@@ (4) |

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Act | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 1 | | SORT AGGREGATE | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 2 | | PX COORDINATOR | | 5 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 3 | | PX SEND QC (RANDOM) | :TQ10000 | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 4 | | SORT AGGREGATE | | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 5 | | VIEW | | 4 | 16M | 65 | 1 | 1 | # | 1:P002(1)[3927K],P001(0)[3901K],P000(0)[3900K],P003(0)[3897K],sqlplus.exe(0)[0] |
| 6 | | UNION-ALL | | 4 | 16M | | | | | 0:P002(0)[3927K],P001(0)[3901K],P000(0)[3900K],P003(0)[3897K],sqlplus.exe(0)[0] |
| 7 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ########### | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 42 | 42 | ########### | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 12 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ########### | 1:P000(41)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 13 | | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[1889K],P000(0)[37K],P001(0)[37K],P002(0)[37K],sqlplus.exe(0)[0] |
|* 14 | | TABLE ACCESS FULL| T_2 | 52 | 2000K | 2 | 42 | 40 | ########### | 3:P003(39)[1889K],P000(1)[37K],P002(1)[37K],P001(0)[37K],sqlplus.exe(0)[0] |
| 15 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P003(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 16 | ==> | TABLE ACCESS FULL| T2 | 4 | 2000K | 41 | 41 | 39 | ########### | 1:P003(40)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 17 | | PX SELECTOR | | 4 | 1890K | | | | | 0:P002(0)[1890K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | ==> | TABLE ACCESS FULL| T2 | 4 | 1890K | 44 | 38 | 35 | ########## | 1:P002(36)[1890K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | | PX SELECTOR | | 3 | 1863K | | | | | 0:P000(0)[1863K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | ==> | TABLE ACCESS FULL| T2 | 3 | 1863K | 44 | 38 | 36 | ########## | 1:P000(37)[1863K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | | PX SELECTOR | | 2 | 1864K | | | | | 0:P001(0)[1864K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 22 | ==> | TABLE ACCESS FULL| T2 | 2 | 1864K | 44 | 38 | 36 | ########## | 1:P001(37)[1864K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 23 | | PX BLOCK ITERATOR | | 1 | 7932 | | | | | 0:P003(0)[7932],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 24 | | TABLE ACCESS FULL| T_2 | 1 | 7932 | | | | | 0:P003(0)[7932],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 25 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 26 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
We still have a perfect work distribution (ignore the small glitch in the ASH instrumentation in the last couple of seconds): the first of the four serial branches is complete and its PX server has just started working on the following "parallel" full table scan, while the other three are just finishing their serial full table scans.

Another forty seconds later:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
2| 3844K| | 2| 0| 2|@@ (2) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
16| 3844K| | 4| 0| 4|@@@@ (4) |
18| 3844K| | 4| 0| 4|@@@@ (4) |
20| 3844K| | 4| 0| 4|@@@@ (4) |
22| 3844K| | 4| 0| 4|@@@@ (4) |
24| 3844K| | 4| 0| 4|@@@@ (4) |
26| 3844K| | 4| 0| 4|@@@@ (4) |
28| 3844K| | 4| 0| 4|@@@@ (4) |
30| 3844K| | 4| 0| 4|@@@@ (4) |
32| 3844K| | 4| 0| 4|@@@@ (4) |
34| 3844K| | 4| 0| 4|@@@@ (4) |
36| 3844K| | 4| 0| 4|@@@@ (4) |
38| 3844K| | 4| 0| 4|@@@@ (4) |
40| 3844K| | 4| 0| 4|@@@@ (4) |
42| 3844K| | 4| 0| 4|@@@@ (4) |
43| 3844K| | 4| 0| 4|@@@@ (4) |
44| 3844K| | 4| 0| 4|@@@@ (4) |
45| 3844K| | 4| 0| 4|@@@@ (4) |
46| 3844K| | 4| 0| 4|@@@@ (4) |
47| 3844K| | 4| 0| 4|@@@@ (4) |
48| 3844K| | 4| 0| 4|@@@@ (4) |
49| 3844K| | 4| 0| 4|@@@@ (4) |
50| 3844K| | 4| 0| 4|@@@@ (4) |
51| 3844K| | 4| 0| 4|@@@@ (4) |
52| 3844K| | 4| 0| 4|@@@@ (4) |
53| 3844K| | 4| 0| 4|@@@@ (4) |
54| 3844K| | 4| 0| 4|@@@@ (4) |
55| 3844K| | 4| 0| 4|@@@@ (4) |
56| 3844K| | 4| 0| 4|@@@@ (4) |
57| 3844K| | 4| 0| 4|@@@@ (4) |
58| 3844K| | 4| 0| 4|@@@@ (4) |
59| 3844K| | 4| 0| 4|@@@@ (4) |
60| 3844K| | 4| 0| 4|@@@@ (4) |
61| 3844K| | 4| 0| 4|@@@@ (4) |
62| 3844K| | 4| 0| 4|@@@@ (4) |
63| 3844K| | 4| 0| 4|@@@@ (4) |
64| 3844K| | 4| 0| 4|@@@@ (4) |
65| 3844K| | 4| 0| 4|@@@@ (4) |
66| 3844K| | 4| 0| 4|@@@@ (4) |
67| 3844K| | 4| 0| 4|@@@@ (4) |
68| 3844K| | 4| 0| 4|@@@@ (4) |
69| 3844K| | 4| 0| 4|@@@@ (4) |
70| 3844K| | 4| 0| 4|@@@@ (4) |
71| 3844K| | 4| 0| 4|@@@@ (4) |
72| 3844K| | 4| 0| 4|@@@@ (4) |
73| 3844K| | 4| 0| 4|@@@@ (4) |
74| 3844K| | 4| 0| 4|@@@@ (4) |
75| 3844K| | 4| 0| 4|@@@@ (4) |
76| | | 0| 0| 0| (0) |
77| 3844K| | 4| 0| 4|@@@@ (4) |
78| 3844K| | 4| 0| 4|@@@@ (4) |
79| 7688K| | 8| 0| 8|@@@@@@@@ (8) |
80| | | 0| 0| 0| (0) |
81| 3844K| | 4| 0| 4|@@@@ (4) |
82| 3844K| | 4| 0| 4|@@@@ (4) |
83| 3844K| | 4| 0| 4|@@@@ (4) |
84| 3844K| | 4| 0| 4|@@@@ (4) |
85| 3844K| | 4| 0| 4|@@@@ (4) |
86| 3844K| | 4| 0| 4|@@@@ (4) |
87| 3844K| | 4| 0| 4|@@@@ (4) |
88| 3844K| | 4| 0| 4|@@@@ (4) |
89| 3844K| | 4| 0| 4|@@@@ (4) |
90| 3844K| | 4| 0| 4|@@@@ (4) |
91| 3844K| | 4| 0| 4|@@@@ (4) |
92| 3844K| | 4| 0| 4|@@@@ (4) |
93| 3844K| | 4| 0| 4|@@@@ (4) |
94| 3844K| | 4| 0| 4|@@@@ (4) |
95| 3844K| | 4| 0| 4|@@@@ (4) |
96| 3844K| | 4| 0| 4|@@@@ (4) |
97| 961K| | 1| 0| 1|@ (1) |
98| 961K| | 1| 0| 1|@ (1) |
99| 961K| | 1| 0| 1|@ (1) |
100| 961K| | 1| 0| 1|@ (1) |
101| 961K| | 1| 0| 1|@ (1) |
102| 961K| | 1| 0| 1|@ (1) |
103| 961K| | 1| 0| 1|@ (1) |
104| 961K| | 1| 0| 1|@ (1) |
105| 961K| | 1| 0| 1|@ (1) |
106| 961K| | 1| 0| 1|@ (1) |
107| 961K| | 1| 0| 1|@ (1) |
108| 961K| | 1| 0| 1|@ (1) |
109| 961K| | 1| 0| 1|@ (1) |
110| 961K| | 1| 0| 1|@ (1) |
111| 961K| | 1| 0| 1|@ (1) |
112| 961K| | 1| 0| 1|@ (1) |
113| 961K| | 1| 0| 1|@ (1) |
114| 961K| | 1| 0| 1|@ (1) |
115| 961K| | 1| 0| 1|@ (1) |
116| 961K| | 1| 0| 1|@ (1) |
117| 961K| | 1| 0| 1|@ (1) |
118| 961K| | 1| 0| 1|@ (1) |
119| 961K| | 1| 0| 1|@ (1) |
120| 961K| | 1| 0| 1|@ (1) |
121| 961K| | 1| 0| 1|@ (1) |

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Act | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 1 | | SORT AGGREGATE | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 2 | | PX COORDINATOR | | 5 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 3 | | PX SEND QC (RANDOM) | :TQ10000 | 4 | 3 | | | | | 0:P000(0)[1],P001(0)[1],P003(0)[1],P002(0)[0],sqlplus.exe(0)[0] |
| 4 | | SORT AGGREGATE | | 4 | 3 | | | | | 0:P000(0)[1],P001(0)[1],P003(0)[1],P002(0)[0],sqlplus.exe(0)[0] |
| 5 | | VIEW | | 4 | 19M | 65 | 1 | 1 | # | 1:P002(1)[5904K],P001(0)[4505K],P003(0)[4499K],P000(0)[4498K],sqlplus.exe(0)[0] |
| 6 | | UNION-ALL | | 4 | 19M | | | | | 0:P002(0)[5904K],P001(0)[4505K],P003(0)[4499K],P000(0)[4498K],sqlplus.exe(0)[0] |
| 7 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ####### | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 42 | 42 | ####### | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 12 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ####### | 1:P000(41)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 13 | | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[1889K],P000(0)[37K],P001(0)[37K],P002(0)[37K],sqlplus.exe(0)[0] |
|* 14 | | TABLE ACCESS FULL| T_2 | 52 | 2000K | 2 | 42 | 40 | ####### | 3:P003(39)[1889K],P000(1)[37K],P002(1)[37K],P001(0)[37K],sqlplus.exe(0)[0] |
| 15 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P003(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 16 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 41 | 43 | 41 | ######## | 1:P003(42)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 17 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 42 | 39 | ####### | 1:P002(40)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 43 | 41 | ######## | 1:P000(42)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 22 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 43 | 41 | ######## | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 23 | | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[610K],P001(0)[468K],P000(0)[461K],P002(0)[461K],sqlplus.exe(0)[0] |
|* 24 | | TABLE ACCESS FULL| T_2 | 52 | 2000K | 84 | 13 | 13 | ### | 4:P003(13)[610K],P001(10)[468K],P000(10)[461K],P002(10)[461K],sqlplus.exe(0)[0] |
| 25 | | PX SELECTOR | | 4 | 1406K | | | | | 0:P002(0)[1406K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 26 | ==> | TABLE ACCESS FULL| T2 | 4 | 1406K | 96 | 26 | 26 | ##### | 1:P002(26)[1406K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Something significant has now happened to the work distribution: starting from around second 97 of the execution, only a single PX server is left active; all others are idle. We can see that the second parallel full table scan (operations 23 and 24) is now done and, as expected, was executed by all four servers to a rather similar degree, although one of the four did more work than the others. It's also obvious that this operation took only approx. 13 seconds, whereas the previous scans each took approx. 42 seconds, so it was significantly quicker because multiple processes really did work on it concurrently. Since only a single serial operation is left to process, only a single process can be active, which explains the work distribution "skew" observed.
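A quick back-of-the-envelope check shows that the observed 13 seconds is in the right ballpark: a serial scan of the table keeps one PX server busy for roughly 42 seconds, so a genuinely parallel scan with four servers should take about a quarter of that, with the remainder explained by the somewhat skewed granule distribution.

```python
# Sanity check of the observed timings (numbers taken from the output above)
serial_scan_secs = 42              # Dur(T) of one serial T2 scan
dop = 4                            # degree of parallelism
ideal_parallel_secs = serial_scan_secs / dop
print(ideal_parallel_secs)         # 10.5 -- close to the ~13s observed for T_2
```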

This is the final script output after completion:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
2| 3844K| | 2| 0| 2|@@ (2) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
16| 3844K| | 4| 0| 4|@@@@ (4) |
18| 3844K| | 4| 0| 4|@@@@ (4) |
20| 3844K| | 4| 0| 4|@@@@ (4) |
22| 3844K| | 4| 0| 4|@@@@ (4) |
24| 3844K| | 4| 0| 4|@@@@ (4) |
26| 3844K| | 4| 0| 4|@@@@ (4) |
28| 3844K| | 4| 0| 4|@@@@ (4) |
30| 3844K| | 4| 0| 4|@@@@ (4) |
32| 3844K| | 4| 0| 4|@@@@ (4) |
34| 3844K| | 4| 0| 4|@@@@ (4) |
36| 3844K| | 4| 0| 4|@@@@ (4) |
38| 3844K| | 4| 0| 4|@@@@ (4) |
40| 3844K| | 4| 0| 4|@@@@ (4) |
42| 3844K| | 4| 0| 4|@@@@ (4) |
44| 3844K| | 4| 0| 4|@@@@ (4) |
46| 3844K| | 4| 0| 4|@@@@ (4) |
48| 3844K| | 4| 0| 4|@@@@ (4) |
50| 3844K| | 4| 0| 4|@@@@ (4) |
52| 3844K| | 4| 0| 4|@@@@ (4) |
54| 3844K| | 4| 0| 4|@@@@ (4) |
56| 3844K| | 4| 0| 4|@@@@ (4) |
58| 3844K| | 4| 0| 4|@@@@ (4) |
60| 3844K| | 4| 0| 4|@@@@ (4) |
62| 3844K| | 4| 0| 4|@@@@ (4) |
64| 3844K| | 4| 0| 4|@@@@ (4) |
66| 3844K| | 4| 0| 4|@@@@ (4) |
68| 3844K| | 4| 0| 4|@@@@ (4) |
70| 3844K| | 4| 0| 4|@@@@ (4) |
72| 3844K| | 4| 0| 4|@@@@ (4) |
74| 3844K| | 4| 0| 4|@@@@ (4) |
76| 3844K| | 2| 0| 2|@@ (2) |
78| 3844K| | 4| 0| 4|@@@@ (4) |
79| 7688K| | 8| 0| 8|@@@@@@@@ (8) |
80| | | 0| 0| 0| (0) |
81| 3844K| | 4| 0| 4|@@@@ (4) |
82| 3844K| | 4| 0| 4|@@@@ (4) |
83| 3844K| | 4| 0| 4|@@@@ (4) |
84| 3844K| | 4| 0| 4|@@@@ (4) |
85| 3844K| | 4| 0| 4|@@@@ (4) |
86| 3844K| | 4| 0| 4|@@@@ (4) |
87| 3844K| | 4| 0| 4|@@@@ (4) |
88| 3844K| | 4| 0| 4|@@@@ (4) |
89| 3844K| | 4| 0| 4|@@@@ (4) |
90| 3844K| | 4| 0| 4|@@@@ (4) |
91| 3844K| | 4| 0| 4|@@@@ (4) |
92| 3844K| | 4| 0| 4|@@@@ (4) |
93| 3844K| | 4| 0| 4|@@@@ (4) |
94| 3844K| | 4| 0| 4|@@@@ (4) |
95| 3844K| | 4| 0| 4|@@@@ (4) |
96| 3844K| | 4| 0| 4|@@@@ (4) |
97| 961K| | 1| 0| 1|@ (1) |
98| 961K| | 1| 0| 1|@ (1) |
99| 961K| | 1| 0| 1|@ (1) |
100| 961K| | 1| 0| 1|@ (1) |
101| 961K| | 1| 0| 1|@ (1) |
102| 961K| | 1| 0| 1|@ (1) |
103| 961K| | 1| 0| 1|@ (1) |
104| 961K| | 1| 0| 1|@ (1) |
105| 961K| | 1| 0| 1|@ (1) |
106| 961K| | 1| 0| 1|@ (1) |
107| 961K| | 1| 0| 1|@ (1) |
108| 961K| | 1| 0| 1|@ (1) |
109| 961K| | 1| 0| 1|@ (1) |
110| 961K| | 1| 0| 1|@ (1) |
111| 961K| | 1| 0| 1|@ (1) |
112| 961K| | 1| 0| 1|@ (1) |
113| 961K| | 1| 0| 1|@ (1) |
114| 961K| | 1| 0| 1|@ (1) |
115| 961K| | 1| 0| 1|@ (1) |
116| 961K| | 1| 0| 1|@ (1) |
117| 961K| | 1| 0| 1|@ (1) |
118| 961K| | 1| 0| 1|@ (1) |
119| 961K| | 1| 0| 1|@ (1) |
120| 961K| | 1| 0| 1|@ (1) |
121| 961K| | 1| 0| 1|@ (1) |
122| 961K| | 1| 0| 1|@ (1) |
123| 961K| | 1| 0| 1|@ (1) |
124| 961K| | 1| 0| 1|@ (1) |
125| 961K| | 1| 0| 1|@ (1) |
126| 961K| | 1| 0| 1|@ (1) |
127| 961K| | 1| 0| 1|@ (1) |
128| 961K| | 1| 0| 1|@ (1) |
129| 961K| | 1| 0| 1|@ (1) |
130| 961K| | 1| 0| 1|@ (1) |
131| 961K| | 1| 0| 1|@ (1) |
132| 961K| | 1| 0| 1|@ (1) |
133| 961K| | 1| 0| 1|@ (1) |
134| 961K| | 1| 0| 1|@ (1) |
135| 961K| | 1| 0| 1|@ (1) |
136| 961K| | 1| 0| 1|@ (1) |
137| 961K| | 1| 0| 1|@ (1) |
138| 961K| | 1| 0| 1|@ (1) |
139| 961K| | 1| 0| 1|@ (1) |

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 1 | SORT AGGREGATE | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 2 | PX COORDINATOR | | 5 | 4 | | | | | 0:sqlplus.exe(0)[4],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 4 | | | | | 0:P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],sqlplus.exe(0)[0] |
| 4 | SORT AGGREGATE | | 4 | 4 | | | | | 0:P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],sqlplus.exe(0)[0] |
| 5 | VIEW | | 4 | 20M | 65 | 1 | 1 | # | 1:P002(1)[6498K],P001(0)[4505K],P003(0)[4499K],P000(0)[4498K],sqlplus.exe(0)[0] |
| 6 | UNION-ALL | | 4 | 20M | | | | | 0:P002(0)[6498K],P001(0)[4505K],P003(0)[4499K],P000(0)[4498K],sqlplus.exe(0)[0] |
| 7 | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ###### | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 42 | 42 | ####### | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 12 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ###### | 1:P000(41)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 13 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[1889K],P000(0)[37K],P001(0)[37K],P002(0)[37K],sqlplus.exe(0)[0] |
|* 14 | TABLE ACCESS FULL| T_2 | 52 | 2000K | 2 | 42 | 40 | ####### | 3:P003(39)[1889K],P000(1)[37K],P002(1)[37K],P001(0)[37K],sqlplus.exe(0)[0] |
| 15 | PX SELECTOR | | 4 | 2000K | | | | | 0:P003(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 16 | TABLE ACCESS FULL| T2 | 4 | 2000K | 41 | 43 | 41 | ####### | 1:P003(42)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 17 | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 42 | 39 | ####### | 1:P002(40)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 43 | 41 | ####### | 1:P000(42)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 22 | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 43 | 41 | ####### | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 23 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[610K],P001(0)[468K],P000(0)[461K],P002(0)[461K],sqlplus.exe(0)[0] |
|* 24 | TABLE ACCESS FULL| T_2 | 52 | 2000K | 84 | 13 | 13 | ### | 4:P003(13)[610K],P001(10)[468K],P000(10)[461K],P002(10)[461K],sqlplus.exe(0)[0] |
| 25 | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 26 | TABLE ACCESS FULL| T2 | 4 | 2000K | 96 | 44 | 44 | ####### | 1:P002(44)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
So for the last forty-odd seconds only one process was active, processing the final serial operation, and every operation has now been executed by all PX servers.

We can conclude a number of things from these monitoring results:

1. In the end, all operations of a concurrent UNION ALL are executed by all PX servers
2. The PX SELECTOR operator assigns only one of them to each non-parallel branch
3. Depending on the sequence and kind of branches (non-parallel, parallel) you might end up with some unusual execution profiles for parallel branches
4. If you have non-parallel branches towards the end, not all of the PX servers might have something left to do when that part gets processed
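The branch-ordering effect in point 4 can be illustrated with a toy scheduler (again purely a model, not Oracle internals; the work figures mimic the test above): serial branches are claimed whole by one server, parallel ones are split into unit granules any free server can grab, and a serial branch at the end leaves three of four servers idle for its whole duration.

```python
import heapq

def elapsed(branches, servers=4):
    """Simulate PX servers walking a UNION ALL branch list in order.

    branches: list of ("s", work) serial or ("p", work) parallel entries;
    a parallel branch is split into `work` unit granules."""
    claimed = set()
    granules = {i: w for i, (k, w) in enumerate(branches) if k == "p"}
    heap = [(0, n) for n in range(servers)]       # (time free, server id)
    heapq.heapify(heap)
    finish = 0
    while heap:
        now, srv = heapq.heappop(heap)
        finish = max(finish, now)
        for i, (kind, work) in enumerate(branches):
            if kind == "s" and i not in claimed:  # serial: first server claims
                claimed.add(i)
                heapq.heappush(heap, (now + work, srv))
                break
            if kind == "p" and granules.get(i, 0) > 0:
                granules[i] -= 1                  # parallel: grab one granule
                heapq.heappush(heap, (now + 1, srv))
                break
        # no branch with work left: server goes idle
    return finish

# Branch order of the original statement vs. parallel branches moved last
original_order = [("s", 40)] * 3 + [("p", 40)] + [("s", 40)] * 4 \
               + [("p", 40)] + [("s", 40)]
parallel_last  = [("s", 40)] * 8 + [("p", 40)] * 2
print(elapsed(original_order), elapsed(parallel_last))
```

In this model the original ordering finishes noticeably later, purely because its last branch can only keep one server busy; it also reproduces the skew on the first parallel branch, which a single early-arriving server ends up processing almost alone.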

The last point suggests that it's advisable to move parallel branches towards the end of the UNION ALL to make the most of the parallel execution. If I re-arrange the above statement so that the two parallel branches are executed last, the final monitoring output looks like this:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
2| 3322K| | 2,5| 0| 2,5|@@@ (2,5) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
11| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
13| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
15| 3844K| | 4| 0| 4|@@@@ (4) |
16| 3844K| | 4| 0| 4|@@@@ (4) |
17| 3844K| | 4| 0| 4|@@@@ (4) |
18| 3844K| | 4| 0| 4|@@@@ (4) |
19| 3844K| | 4| 0| 4|@@@@ (4) |
20| 3844K| | 4| 0| 4|@@@@ (4) |
21| 3844K| | 4| 0| 4|@@@@ (4) |
22| 3844K| | 4| 0| 4|@@@@ (4) |
23| 3844K| | 4| 0| 4|@@@@ (4) |
24| 3844K| | 4| 0| 4|@@@@ (4) |
25| 3844K| | 4| 0| 4|@@@@ (4) |
26| 3844K| | 4| 0| 4|@@@@ (4) |
27| 3844K| | 4| 0| 4|@@@@ (4) |
28| 3844K| | 4| 0| 4|@@@@ (4) |
29| 3844K| | 4| 0| 4|@@@@ (4) |
30| 3844K| | 4| 0| 4|@@@@ (4) |
31| 3844K| | 4| 0| 4|@@@@ (4) |
32| 3844K| | 4| 0| 4|@@@@ (4) |
33| 3844K| | 4| 0| 4|@@@@ (4) |
34| 3844K| | 4| 0| 4|@@@@ (4) |
35| 3844K| | 4| 0| 4|@@@@ (4) |
36| 3844K| | 4| 0| 4|@@@@ (4) |
37| 3844K| | 4| 0| 4|@@@@ (4) |
38| 3844K| | 4| 0| 4|@@@@ (4) |
39| 3844K| | 4| 0| 4|@@@@ (4) |
40| 3844K| | 4| 0| 4|@@@@ (4) |
41| 3844K| | 4| 0| 4|@@@@ (4) |
42| 3844K| | 4| 0| 4|@@@@ (4) |
43| 3844K| | 4| 0| 4|@@@@ (4) |
44| 3844K| | 4| 0| 4|@@@@ (4) |
45| 3844K| | 4| 0| 4|@@@@ (4) |
46| 3844K| | 4| 0| 4|@@@@ (4) |
47| 3844K| | 4| 0| 4|@@@@ (4) |
48| 3844K| | 4| 0| 4|@@@@ (4) |
49| 3844K| | 4| 0| 4|@@@@ (4) |
50| 3844K| | 4| 0| 4|@@@@ (4) |
51| 3844K| | 4| 0| 4|@@@@ (4) |
52| 3844K| | 4| 0| 4|@@@@ (4) |
53| 3844K| | 4| 0| 4|@@@@ (4) |
54| 3844K| | 4| 0| 4|@@@@ (4) |
55| 3844K| | 4| 0| 4|@@@@ (4) |
56| 3844K| | 4| 0| 4|@@@@ (4) |
57| 3844K| | 4| 0| 4|@@@@ (4) |
58| 3844K| | 4| 0| 4|@@@@ (4) |
59| 3844K| | 4| 0| 4|@@@@ (4) |
60| 3844K| | 4| 0| 4|@@@@ (4) |
61| 3844K| | 4| 0| 4|@@@@ (4) |
62| 3844K| | 4| 0| 4|@@@@ (4) |
63| 3844K| | 4| 0| 4|@@@@ (4) |
64| 3844K| | 4| 0| 4|@@@@ (4) |
65| 3844K| | 4| 0| 4|@@@@ (4) |
66| 3844K| | 4| 0| 4|@@@@ (4) |
67| 3844K| | 4| 0| 4|@@@@ (4) |
68| 3844K| | 4| 0| 4|@@@@ (4) |
69| 3844K| | 4| 0| 4|@@@@ (4) |
70| 3844K| | 4| 0| 4|@@@@ (4) |
71| 3844K| | 4| 0| 4|@@@@ (4) |
72| 3844K| | 4| 0| 4|@@@@ (4) |
73| 3844K| | 4| 0| 4|@@@@ (4) |
74| 3844K| | 4| 0| 4|@@@@ (4) |
75| 3844K| | 4| 0| 4|@@@@ (4) |
76| 3844K| | 4| 0| 4|@@@@ (4) |
77| 3844K| | 4| 0| 4|@@@@ (4) |
78| 3844K| | 4| 0| 4|@@@@ (4) |
79| 3844K| | 4| 0| 4|@@@@ (4) |
80| 3844K| | 4| 0| 4|@@@@ (4) |
81| 3844K| | 4| 0| 4|@@@@ (4) |
82| 3844K| | 4| 0| 4|@@@@ (4) |
83| 3844K| | 4| 0| 4|@@@@ (4) |
84| 3844K| | 4| 0| 4|@@@@ (4) |
85| 3844K| | 4| 0| 4|@@@@ (4) |
86| 3844K| | 4| 0| 4|@@@@ (4) |
87| 3844K| | 4| 0| 4|@@@@ (4) |
88| 3844K| | 4| 0| 4|@@@@ (4) |
89| 3844K| | 4| 0| 4|@@@@ (4) |
90| 3844K| | 4| 0| 4|@@@@ (4) |
91| 3844K| | 4| 0| 4|@@@@ (4) |
92| 3844K| | 4| 0| 4|@@@@ (4) |
93| 3844K| | 4| 0| 4|@@@@ (4) |
94| 3844K| | 4| 0| 4|@@@@ (4) |
95| 3844K| | 4| 0| 4|@@@@ (4) |
96| 3844K| | 4| 0| 4|@@@@ (4) |
97| | | 0| 0| 0| (0) |
98| 3844K| | 4| 0| 4|@@@@ (4) |
99| 3844K| | 4| 0| 4|@@@@ (4) |
100| 3844K| | 4| 0| 4|@@@@ (4) |
101| 3844K| | 4| 0| 4|@@@@ (4) |
102| 3844K| | 4| 0| 4|@@@@ (4) |
103| 3844K| | 4| 0| 4|@@@@ (4) |
104| 3844K| | 4| 0| 4|@@@@ (4) |
105| 1922K| | 2| 0| 2|@@ (2) |

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 1 | 1 | 1 | 1 | # | 1:sqlplus.exe(1)[1],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 1 | SORT AGGREGATE | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 2 | PX COORDINATOR | | 5 | 4 | | | | | 0:sqlplus.exe(0)[4],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 4 | | | | | 0:P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],sqlplus.exe(0)[0] |
| 4 | SORT AGGREGATE | | 4 | 4 | | | | | 0:P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],sqlplus.exe(0)[0] |
| 5 | VIEW | | 4 | 20M | | | | | 0:P000(0)[5035K],P002(0)[5002K],P003(0)[4994K],P001(0)[4969K],sqlplus.exe(0)[0] |
| 6 | UNION-ALL | | 4 | 20M | | | | | 0:P000(0)[5035K],P002(0)[5002K],P003(0)[4994K],P001(0)[4969K],sqlplus.exe(0)[0] |
| 7 | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ######## | 1:P001(41)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ######## | 1:P000(41)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | PX SELECTOR | | 4 | 2000K | | | | | 0:P003(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 12 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 42 | 42 | ######## | 1:P003(42)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 13 | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 14 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ######## | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 15 | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 16 | TABLE ACCESS FULL| T2 | 4 | 2000K | 43 | 40 | 40 | ######### | 1:P000(40)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 17 | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | TABLE ACCESS FULL| T2 | 4 | 2000K | 43 | 41 | 41 | ######### | 1:P001(41)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | TABLE ACCESS FULL| T2 | 4 | 2000K | 43 | 41 | 41 | ######### | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | PX SELECTOR | | 4 | 2000K | | | | | 0:P003(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 22 | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 40 | 40 | ######### | 1:P003(40)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 23 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P000(0)[534K],P003(0)[501K],P002(0)[497K],P001(0)[468K],sqlplus.exe(0)[0] |
|* 24 | TABLE ACCESS FULL| T_2 | 52 | 2000K | 83 | 11 | 11 | ### | 4:P000(11)[534K],P003(10)[501K],P002(10)[497K],P001(10)[468K],sqlplus.exe(0)[0] |
| 25 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P002(0)[505K],P000(0)[501K],P001(0)[501K],P003(0)[493K],sqlplus.exe(0)[0] |
|* 26 | TABLE ACCESS FULL| T_2 | 52 | 2000K | 94 | 12 | 11 | ### | 4:P000(11)[501K],P003(11)[493K],P002(10)[505K],P001(10)[501K],sqlplus.exe(0)[0] |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Notice how, in this particular case, I managed to reduce the execution duration by more than 30 seconds simply by re-arranging the branches, which ensured that the work distribution was optimal all the time.
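To make the re-arrangement idea concrete, here is a schematic sketch (not the actual statement used above): serial branches, each executed in full by a single PX slave via PX SELECTOR, are interleaved with parallel branches, which are distributed across all slaves via PX BLOCK ITERATOR, so that no slave sits idle towards the end of execution.

```sql
-- Schematic only: branch order re-arranged so serial and parallel
-- branches keep all PX slaves busy throughout the run.
SELECT /*+ PARALLEL(4) */ COUNT(*)
FROM (
  SELECT * FROM t2     -- serial branch: one slave (PX SELECTOR)
  UNION ALL
  SELECT * FROM t2     -- serial branch: another slave
  UNION ALL
  SELECT * FROM t_2    -- parallel branch: all slaves (PX BLOCK ITERATOR)
  UNION ALL
  SELECT * FROM t_2    -- parallel branch: all slaves
);
```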

In the final part of the series I'll focus on the runtime behaviour of the concurrent UNION ALL feature when dealing with remote branches.

Oracle Priority Support Infogram for 12-MAR-2015

Oracle Infogram - Thu, 2015-03-12 13:21

RDBMS
From OTN Garage: 5 Steps for Installing Oracle Database 12c on Oracle Solaris 11
Applying a PSU or BP to a Single-/Multitenant Environment, from Upgrade your Database - NOW!
SQL Developer
Software Updates! SQL Developer, Data Modeler, ORDS, & SQLcl EA2’s Now Available, from that JEFF SMITH.
OVM
From Oracle’s Virtualization Blog: Oracle VM VirtualBox 4.3.24 Released
MySQL
Secure Web Pages By Using Session Handling with PHP, from Oracle’s MySQL Blog.
From MySQL on Windows: MySQL for Excel 1.3.4 has been released
SOA
EBS Adapter for Oracle SOA Suite 12c Now Available, from SOA & BPM Partner Community Blog.
And from the same blog:
Simplify SaaS and On-Premises Integration by Using Oracle Fusion Middleware
RightNow Cloud Adapter (12.1.3) Released
Governance made easy – Oracle API Catalog 12c
WebLogic
From Cristóbal Soto's Blog: PSU, Patching and Classpath Problems on WebLogic Server
Create a Custom ADF Component, from WebLogic Partner Community EMEA.
And from the same source:
Monitoring of Mobile Applications
Java
Top 10 Documents Linked as Solutions for Weblogic Server J2EE/Webservices November 2014 - February 2015, from Proactive Support - Java Development using Oracle Tools.
Business Analytics
From Business Analytics - Proactive Support:
Planning and Budgeting Cloud Service Videos
Patch Set Update: Hyperion Financial Close Management 11.1.2.2.501
Patch Set Update: Financial Data Quality Management Enterprise Edition 11.1.2.3.530
Patch Set Update: Hyperion Shared Services 11.1.2.3.503

Eclipse
From the Oracle Enterprise Pack for Eclipse Blog: Modeling a POST Request and Redirection in the REST Service Editor
EBS
From the Oracle E-Business Suite Support Blog:
Webcast: Inventory Sales Order Flow and Troubleshooting – R12
ORAchk 12.1.0.2.3 is Now Available for Download!
Mandatory Post Patch Required for Fixed Assets Feb 2015 RPC
Webcast: Understanding & Troubleshooting Receipts APIs In Oracle Accounts Receivable
Free Training Delivered by Product Experts!!!
R12: Important Enhancement: ON-THE-FLY SLA Upgrade of Historical SLA Distributions--BATCH MODE!
From the Oracle E-Business Suite Technology blog:

JRE 1.8.0_40 Certified with Oracle E-Business Suite

Nashville Cat

Floyd Teter - Thu, 2015-03-12 12:38
Nashville cats, play clean as country water
Nashville cats, play wild as mountain dew
Nashville cats, been playin' since they's babies
Nashville cats, get work before they're two
            - From the Lovin' Spoonful's Nashville Cats

I'm heading out to HEUG's Alliance this weekend.  Gonna be a Nashville Cat for a few days.  My big job is to host three workshop/demo sessions on Oracle Cloud Applications, all in Room Delta Island C at the Gaylord Hotel:
1.    Taleo Cloud Demo: Tuesday, March 17 12 noon-1 p.m.

2.     Financials Cloud Demo: Tuesday March 17, 2:00-3:00 p.m.

3.     HCM Cloud Demo: Wednesday, March 18, 7:45-8:45 a.m.
Almost everything will be "live drive", so you'll get some relief from PowerPoint slides ;)  These are sponsored sessions (Thank you, Sierra Cedar), so you won't find them on the Alliance schedule.  They're among those special conference sessions you only hear about by word of mouth or from some big-mouth blogger.

If you want the straight scoop on Oracle's Cloud Applications, do come by so we can chat for a bit.
I'll also be in some private customer sessions, attending a few sessions, shaking hands and kissing babies.  So if you really have to miss all the workshops, track me down so we can talk.

comment in external table

Laurent Schneider - Thu, 2015-03-12 11:37

Depending on the file type, you may use different signs for comments, typically:


# hash
// slash slash 
/* slash-star star-slash */
: colon
-- dash dash

The last one is used in SQL and PL/SQL, but:


CREATE TABLE t (x NUMBER)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_pump_dir
  ACCESS PARAMETERS (
    FIELDS TERMINATED BY ';'  -- This is a comment
    (x))
  LOCATION ('x'));

SELECT * FROM t;
SELECT * FROM t
              *
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "minussign": expecting one of: 
  "column, enclosed, (, ltrim, lrtrim, ldrtrim, missing, 
  notrim, optionally, rtrim, reject"
KUP-01007: at line 2 column 38

not in external table access parameters.

No comment is allowed there!
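To be safe, keep comments outside the ACCESS PARAMETERS clause entirely. The same table compiles and queries fine once the in-clause comment is removed; ordinary SQL comments around the statement remain perfectly legal:

```sql
-- Comments are fine here, in regular SQL...
CREATE TABLE t (x NUMBER)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_pump_dir
  ACCESS PARAMETERS (
    FIELDS TERMINATED BY ';'
    (x))
  LOCATION ('x'));

SELECT * FROM t;  -- ...and here, but never inside ACCESS PARAMETERS
```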

Arizona Oracle User Group Meeting March 18

Bobby Durrett's DBA Blog - Thu, 2015-03-12 09:39
I just found out that this meeting was cancelled.  We will have to catch the next one. :) – 3/16/2015

Sign up for the Arizona Oracle User Group (AZORA) meeting next week: signup url

The email that I received from the meeting organizer described the topic of the meeting in this way:

“…the AZORA meetup on March 18, 2015 is going to talk about how a local business decided to upgrade their Oracle Application from 11i to R12 and give you a first hand account of what went well and what didn’t go so well. ”

Description of the speakers from the email:

Becky Tipton

Becky is the Director of Project Management at Blood Systems located in Scottsdale, AZ. Prior to coming to Blood Systems, Becky was an independent consultant for Tipton Consulting for four years.

Mike Dill

Mike is the Vice President of Application Solutions at 3RP, a Phoenix consulting company. Mike has over 10 years of experience implementing Oracle E-Business Suite and managing large-scale projects.

I plan to attend.  I hope to see you there too. :)

– Bobby

Categories: DBA Blogs

The Best Thing

Floyd Teter - Thu, 2015-03-12 09:33
I'm spending the latter part of my week at the Utah Oracle User Group Training Days conference.  It's a nice regional conference for me to engage with...it's local to me here in Salt Lake City.  So I get to participate in the conference and still go home every night.  Pretty sweet.

But being local is not the best thing about this conference.  The best thing about this...or most user group conferences, for that matter...is the opportunity to exchange ideas with some very smart people.      When I get to listen in for a bit on conversations with the real brains in this business, I always come away with more knowledge...and often with a different point of view.  That's the really cool part.  And, yes, it's worth investing a couple of days of my time.

As far as the Utah Oracle User Group goes, we'll probably do another one of these this fall.  You really ought to come out.  Who knows, you might learn something?  I know I always do.

ODA 12.1.X.X.X - add a multiplexed control file under ACFS

Yann Neuhaus - Thu, 2015-03-12 07:43

Since version 12, ODA stores databases on ACFS volumes instead of directly on ASM. This slightly changed the way the files are managed and administered. This article presents how to multiplex your control files on ACFS.

More on the Rittman Mead BI Forum 2015 Masterclass : “Delivering the Oracle Big Data and Information Management Reference Architecture”

Rittman Mead Consulting - Thu, 2015-03-12 05:29

Each year at the Rittman Mead BI Forum we host an optional one-day masterclass before the event opens properly on Wednesday evening, with guest speakers over the year including Kurt Wolff, Kevin McGinley and last year, Cloudera’s Lars George. This year I’m particularly excited that together with Jordan Meyer, our Head of R&D, I’ll be presenting the masterclass on the topic of “Delivering the Oracle Big Data and Information Management Reference Architecture”.

Last year, at the Brighton BI Forum event, we launched a new reference architecture that Rittman Mead had collaborated with Oracle on, which incorporated big data and schema-on-read databases into the Oracle data warehouse and BI reference architecture. In two subsequent blog posts, and in a white paper published on the Oracle website a few weeks after, concepts such as the "Discovery Lab", "Data Reservoirs" and the "Data Factory" were introduced as a way of incorporating the latest thinking, and product capabilities, into the reference architecture for Oracle-based BI, data warehousing and big data systems.

One of the problems I always feel with reference architectures though is that they tell you what you should create, but they don’t tell you how. Just how do you go from a set of example files and a vague requirement from the client to do something interesting with Hadoop and data science, and how do you turn the insights produced by that process into a production-ready, enterprise Big Data system? How do you implement the data factory, and how do you use new tools such as Oracle Big Data Discovery and Oracle Big Data SQL as part of this architecture? In this masterclass we’re looking to explain the “how” and “why” to go with this new reference architecture, based on experiences working with clients over the past couple of years.

The masterclass will be divided into two sections; the first, led by Jordan Meyer, will focus on the data discovery and “data science” parts of the Information Management architecture, going through initial analysis and discovery of datasets using R and Oracle R Enterprise. Jordan will share techniques he uses from both his work at Rittman Mead and his work with Slacker Radio, a Silicon Valley startup, and will introduce the R and Oracle R Enterprise toolset for uncovering insights, correlations and patterns in sample datasets and productionizing them as database routines. Over his three hours he’ll cover topics including: 

Session #1 – Data exploration and discovery with R (2 hours) 

1.1 Introduction to R 

1.2 Tidy Data  

1.3 Data transformations 

1.4 Data Visualization 

Session #2 – Predictive Modeling in the enterprise (1 hr)

2.1 Classification

2.2 Regression

2.3 Deploying models to the data warehouse with ORE

After lunch, I’ll take the insights and analysis patterns identified in the Discovery Lab and turn them into production big data pipelines and datasets using Oracle Data Integrator 12c, Oracle Big Data Discovery and Oracle Big Data SQL For a flavour of the topics I’ll be covering take a look at this Slideshare presentation from a recent Oracle event, and in the masterclass itself I’ll concentrate on techniques and approaches for ingesting and transforming streaming and semi-structured data, storing it in Hadoop-based data stores, and presenting it out to users using BI tools like OBIEE, and Oracle’s new Big Data Discovery.

Session # 3 – Building the Data Reservoir and Data Factory (2 hr)

3.1 Designing and Building the Data Reservoir using Cloudera CDH5 / Hortonworks HDP, Oracle BDA and Oracle Database 12c

3.2 Building the Data Factory using ODI12c & new component Hadoop KM modules, real-time loading using Apache Kafka, Spark and Spark Streaming

Session #4 – Accessing and visualising the data (1 hr)

4.1 Discovering and Analyzing the Data Reservoir using Oracle Big Data Discovery

4.2 Reporting and Dashboards across the Data Reservoir using Oracle Big Data SQL + OBIEE 11.1.1.9

You can register for a place at the two masterclasses when booking your BI Forum 2015 place, but you’ll need to hurry as we limit the number of attendees at each event in order to maximise interaction and networking within each group. Registration is open now and the two events take place in May – hopefully we’ll see you there!

Categories: BI & Warehousing

Venturing into bulk insert a SQL Server error log and data order

Yann Neuhaus - Thu, 2015-03-12 02:21

Have you ever attempted to bulk import a SQL Server error log in order to use the information inside a report, for example? If yes, you have probably wondered how to keep the data in the correct order in a query, because you cannot refer to any column from the table. In such a case you may notice that you have many records with the same date. Of course, some workarounds exist, but that is not the subject of this blog. Instead, I would like to share an interesting discussion I had with a forum member about whether a simple SELECT statement without an ORDER BY clause is guaranteed to return the SQL Server error log data in order.

Let’s begin with the following script which bulk import data from a SQL Server error log file inside a table:

CREATE TABLE ErrLog ( LogCol NVARCHAR(max) NULL )

 

BULK INSERT ErrLog FROM 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\ERRORLOG.1' WITH ( FIRSTROW = 6, DATAFILETYPE = 'widechar' )


You may notice that we use the FIRSTROW option to begin from the 6th line and skip header information such as:

2015-03-07 13:57:41.43 Server     Microsoft SQL Server 2014 - 12.0.2370.0 (X64) Jun 21 2014 15:21:00 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.3 (Build 9600: ) (Hypervisor)

 

In addition, using DATAFILETYPE = 'widechar' is mandatory in order to bulk import Unicode data.

Let’s continue and after bulking import data let’s take a look at the data itself inside the table. You will probably get the same kind of sample information as follows:

 

SELECT LogCol FROM ErrLog;

 

[Figure: bulk insert result]

 

Comparing the record order between the SQL Server error log file and the table tends to show that the order is the same. At this point, we may wonder how to number the records inside the table without affecting the table order. Indeed, numbering the records will allow us to control the order of data by using the ORDER BY clause. So let's go ahead and use the ROW_NUMBER() function to meet our requirement. You may notice that I use an "artificial" ORDER BY clause inside the window function to avoid interfering with the original order of the data in my table.

 

SELECT ROW_NUMBER() OVER (ORDER BY (SELECT 0)) AS numline, LogCol FROM ErrLog;

 

[Figure: bulk insert result numbered with ROW_NUMBER]

 

At this point, the forum member tells me that we cannot guarantee the order of data without using an ORDER BY clause, but once again it seems that we get the same order as the previous query. Can we trust it? I completely agree with this forum member and I tend to advise the same thing. However, in this specific case the order seems to be correct. But why?

If you take a look at the first script, the first operation consisted of creating a heap table. This detail is very important. Then, the bulk insert operation reads the file sequentially and inserts the data in the allocation order of pages. It means that the data insertion order is tied to the allocation order of the pages specified in the IAM page. So when we perform a table scan operation to number each row in the table (see the execution plan below), this operation is in fact performed by scanning the IAM page to find the extents that hold the pages.

 

[Figure: execution plan]

 

As you know, the IAM represents extents in the same order as they exist in the data files, so in our case the table scan is performed using the same page allocation order. We can check this by catching the call stack of our query: we can see that a particular function called AllocationOrderPageScanner is used for our heap table.

 

[Figure: allocation order scan call stack]

 

So that we are clear: I don't claim that we may blindly trust the data order without specifying the ORDER BY clause in all cases. In fact, most of the time you will have to proceed differently depending on your context, because various factors will not guarantee the order of data without an ORDER BY clause (bulk importing data from multiple files in parallel, reading data from a table with a b-tree structure under particular conditions, etc.).
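If the order truly matters, one defensive approach (a sketch of mine, not from the original discussion) is to persist the file order at load time and always sort by it. Loading through a view lets BULK INSERT target only the LogCol column while an IDENTITY column records the arrival order:

```sql
-- Sketch: persist the insert order explicitly so ORDER BY can guarantee it.
CREATE TABLE ErrLogOrdered
(
    numline INT IDENTITY(1, 1) NOT NULL,
    LogCol  NVARCHAR(MAX) NULL
);
GO

-- The view exposes only LogCol, so the bulk load matches the file layout
-- and numline is assigned automatically as rows arrive.
CREATE VIEW ErrLogOrdered_Load AS SELECT LogCol FROM ErrLogOrdered;
GO

BULK INSERT ErrLogOrdered_Load
FROM 'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Log\ERRORLOG.1'
WITH (FIRSTROW = 6, DATAFILETYPE = 'widechar');

SELECT numline, LogCol
FROM ErrLogOrdered
ORDER BY numline;  -- order now guaranteed by the clause, not by luck
```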

Enjoy!

An Introduction to Analysing ODI Runtime Data Through Elasticsearch and Kibana 4

Rittman Mead Consulting - Thu, 2015-03-12 01:39

An important part of working with ODI is analysing the performance when it runs, and identifying steps that might be inefficient as well as variations in runtime against a baseline trend. The Operator tool in ODI itself is great for digging down into individual sessions and load plan executions, but for broader analysis we need a different approach. We also need to make sure we keep the data available for trend analysis, as the tables behind Operator are frequently purged for performance reasons.


In this article I’m going to show how we can make use of a generic method of pulling information out of an RDBMS such as Oracle and storing it in Elasticsearch, from where it can be explored and analysed through Kibana. It’s standalone, it’s easy to do, it’s free open source – and it looks and works great! Here I’m going to use it for supporting the analysis of ODI runtime information, but it is equally applicable to any time-based data you’ve got in an RDBMS (e.g. OBIEE Usage Tracking data).

Kibana is an open-source data visualisation and analysis tool, working with data stored in Elasticsearch. These tools work really well for very rapid analysis of any kind of data that you want to chuck at them quickly and work with. By skipping the process of schema definition and data modelling the time taken to the first results is drastically reduced. It enables you to quickly start "chucking about" data and getting meaning out of it before you commit full-scale to how you want to analyse it, which is what the traditional modelling route can sometimes force you to do prematurely.

ODI writes runtime information to the database, about sessions run, steps executed, time taken and rows processed. This data is important for analysing things like performance issues, and batch run times. Whilst with the equivalent runtime data (Usage Tracking) from OBIEE there is the superb RPD/Dashboard content that Oracle ship in SampleApp v406, for ODI the options aren’t as vast, ultimately being based on home-brew SQL against the repository tables using the repository schema documentation from Oracle. Building an OBIEE metadata model against the ODI schema is one option, but then requires an OBIEE server on which to run it – or merging into an existing OBIEE deployment – which means that it can become more hassle than it’s worth. It also means a bunch of up-front modelling before you get any kind of visualisations and data out. By copying the data across into Elasticsearch it’s easy to quickly build analyses against it, and has the additional benefit of retaining the data as long as you’d like meaning that it’s still available for long-term trend analysis once the data’s been purged from the ODI repository itself.

Let’s take a bit of a walk through the ODI dashboard that I’ve put together. First up is a view on the number of sessions that have run over time, along with their duration. For duration I’ve shown 50th (median), 75th and 95th percentiles to get an idea of the spread of session runtimes. At the moment we’re looking at all sessions, so it’s not surprising that there is a wide range since there’ll always be small sessions and longer ones:

Next up on the dashboard comes a summary of top sessions by runtime, both cumulative and per-session. The longest running sessions are an obvious point of interest, but cumulative runtime is also important; something may only take a short while to run when compared to some singular long-running sessions, but if it runs hundreds of times then it all adds up and can give a big performance boost if time is shaved off it.

Plotting out session execution times is useful to be able to see when the longest running sessions ran:

The final element on this first dashboard is one giving the detail for each of the top x long-running session executions, including the session number so that it can be examined in further detail through the Operator tool.

Kibana dashboards are interactive, so you can click on a point in a graph to zoom in on that time period, as well as click & drag to select an arbitrary range. The latter technique is sometimes known as “Brushing”, and if I’m not describing it very well have a look at this example here and you’ll see in an instant what I mean.

As you focus on a time period in one graph the whole dashboard’s time filter changes, so where you have a table of detail data it then just shows it for the range you’ve selected. Notice also that the granularity of the aggregation changes as well, from a summary of every three hours in the first of the screenshots through to 30 seconds in the last. This is a nice way of presenting a summary of data, but isn’t always desirable (it can mask extremes and abnormalities) so can be configured to be fixed as well.

Time isn’t the only interaction on the dashboard – anything that’s not a metric can be clicked on to apply a filter. So in the above example where the top session by cumulative time are listed out we might want to find out more about the one with several thousand executions

Simply clicking on it then filters the dashboard and now the session details table and graph show information just for that session, including duration, and rows processed:

Session performance analysis

As an example of the benefit of using a spread of percentiles, here is a particular session that had an erratic runtime with great variation, which then stabilised. The purple line is the 95th percentile response time; the green and blue are 50th and 75th respectively. It's clear that whilst up to 75% of the sessions completed in about the same kind of time each time they ran, the remaining quarter took anything up to five times as long.

One of the most important things in performance is ensuring consistent performance, and that is what happens here from about half way along the horizontal axis at c.February:

But what was causing the variation? By digging a notch deeper and looking at the runtime of the individual steps within the given session it can be seen that the inconsistent runtime was caused by a single step (the green line in this graph) within the execution. When this step’s runtime stabilises, so does the overall performance of the session:

This is a post-mortem on a resolved performance problem, performed to illustrate how useful the data is – obviously if there were still a performance problem we'd have a clear path of investigation to pursue thanks to this data.
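To make the percentile bands concrete, here is a generic nearest-rank percentile sketch in Python – purely illustrative, not something from the Kibana dashboard itself, and with made-up session durations:

```python
import math

def nearest_rank_percentile(values, p):
    """Return the p-th percentile of `values` using the nearest-rank method."""
    ordered = sorted(values)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[max(rank, 1) - 1]

# Hypothetical session durations in seconds: mostly stable, with outliers.
durations = [14, 12, 70, 13, 14, 15, 13, 12, 65, 14]
print([nearest_rank_percentile(durations, p) for p in (50, 75, 95)])
# -> [14, 15, 70]: three quarters of the runs sit near 14s, while the
# 95th percentile exposes the erratic tail that the median hides.
```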

How?

Data’s pulled from the ODI repository tables using Elasticsearch JDBC river, from where it’s stored and indexed in Elasticsearch, and presented through Kibana 4 dashboards.


The data load from the repository tables into Elasticsearch is incremental, meaning that the solution works for both historical analysis and more immediate monitoring too. Because the data’s actually stored in Elasticsearch for analysis it means the ODI repository tables can be purged if required and you can still work with a full history of runtime data in Kibana.
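The JDBC river handles the shipping of rows automatically, but it can help to picture the document shape involved. Purely as an illustration (the field names here are made up, not the actual ODI repository columns), rows pulled from the repository could be rendered into Elasticsearch bulk-API lines like this:

```python
import json

def to_bulk_lines(rows, index_name="odi-sessions"):
    """Render rows as an Elasticsearch bulk-API payload:
    one action line plus one document line per row."""
    lines = []
    for row in rows:
        # Use the session number as the document id, so incremental
        # re-loads overwrite existing documents rather than duplicate them.
        lines.append(json.dumps({"index": {"_index": index_name,
                                           "_id": row["sess_no"]}}))
        lines.append(json.dumps(row))
    return "\n".join(lines) + "\n"

rows = [{"sess_no": 1001, "sess_name": "LOAD_DW",
         "duration_s": 84, "rows_processed": 20000}]
print(to_bulk_lines(rows))
```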

If you’re interested in finding out more about this solution and how Rittman Mead can help you with your ODI and OBIEE implementation and monitoring needs, please do get in touch.

Categories: BI & Warehousing

APEX 5.0: pimping the Login page

Dimitri Gielis - Wed, 2015-03-11 17:29
When you create a new application in APEX 5.0, the login page probably looks like this:


I love the built-in login page of APEX itself - luckily it's easy enough to build that in our own apps too. Thank you APEX Dev team!

The first step is to change the region type to be of Login Region Template:


We want to add a nice icon on top of the Login text. You can use the Icon CSS Class in the Region options - in this case I opted for fa-medkit:

Next up is making the Login button bigger and making it the full width, like the items. In APEX 5.0 you can use the Template Options to do that:

Once we stretch the Login button it fits the entire width.
Next up is getting some icons in the username and password fields. For the username we use the "icon-login-username" CSS class. Instead of showing the label we make it hidden and specify a placeholder, so before you start typing you see the word username, and when you start typing the text disappears.

For the password field we do the same thing, but for the CSS class we specify "icon-login-password".


Finally your login screen looks like this:


Great? Absolutely - and so easy with APEX 5.0!

What's next? Is there anything better? Euh... yes, what about live validation?
Sure we can do that in APEX 5.0 without too much hassle :)) Thanks once again APEX Dev Team!

In the item, make sure it is set to Value Required, and add the following span in the Post Text:


That will give you a nice visual indication if you entered text:


Cool? Creating login pages in APEX 5.0 is ... (you fill in the word)

Interested in more? We're doing an APEX 5.0 UI Training in May.
Categories: Development

Delphix User Group Presentation

Bobby Durrett's DBA Blog - Wed, 2015-03-11 16:30

My Delphix user group presentation went well today. 65 people attended.  It was great to have so much participation.

Here are links to my PowerPoint slides and a recording of the WebEx:

Slides: PowerPoint

Recording: WebEx

Also, I want to thank two Delphix employees, Ann Togasaki and Matthew Yeh.  Ann did a great job of converting my text bullet points into a visually appealing PowerPoint.  She also translated my hand-drawn images into useful drawings.  Matthew did an amazing job of taking my bullet points and my notes and adding meaningful graphics to my text-only slides.

I could not have put the PowerPoint together in time without Ann and Matthew’s help and they did a great job.

Also, for the first time I wrote out my script word for word and added it to the notes on the slides.  So, you can see what I intended to say with each slide.

Thank you to Adam Leventhal of Delphix for inviting me to do this first Delphix user group WebEx presentation.  It was a great experience for me and I hope that it was useful to the user community as well.

– Bobby

Categories: DBA Blogs

Webcast Replay: Public Sector FMW: Mobility Solutions – Re-Think Mobile

WebCenter Team - Wed, 2015-03-11 14:48

Mobile is the digital disruptor that has transformed industries and organizations big and small. Mobile transformations are everywhere, across all industries in organizations of all sizes. The enterprise mobile market is expected to bring in $140 billion by 2020, and yet today 7 in 10 enterprises are still struggling to keep pace with new mobile devices and systems. We know that access to relevant information, anywhere and anytime is expected, yet connecting to back-end systems and securing the corporate data is both complex and a necessity.

Watch this webinar to learn how customers are re-thinking their enterprise mobile strategy and unifying the client, content, context, security and cloud in their enterprise mobile strategy. Through case studies and live demonstrations, Oracle Gold Partner 3Di will present how customers like you have successfully addressed these questions using Oracle Technologies and 3Di's solutions, innovations and services.

Find out more here

Product links:

o http://www.oracle.com/technetwork/middleware/webcenter/suite/overview/index.html

o http://www.oracle.com/us/technologies/bpm/overview/index.html

o http://www.oracle.com/technetwork/developer-tools/maf/overview/index.html

Case Studies:

o https://blogs.oracle.com/webcenter/entry/los_angeles_department_of_building

o https://blogs.oracle.com/fusionmiddleware/entry/ladwp_transformed_customer_experience_with


Three Weeks with the Nike+ Fuelband SE

Oracle AppsLab - Wed, 2015-03-11 11:42

I don’t like wearing stuff on my wrist, but in my ongoing quest to learn more about the wearables our users wear, I have embarked on a journey.

For science! And for better living through math, a.k.a. the quantified self.

And because I’ll be at HCM World later this month talking about wearables, and because wearables are a thing, and we have a Storify to prove it, and we need to understand them better, and the Apple Watch is coming (squee!) to save us all from our phones and restore good old face time (not that Facetime) and and and. Just keep reading.

Moving on, I just finished wearing the Nike+ Fuelband SE for three weeks, and today, I’m starting on a new wearable. It’s a surprise, just wait three weeks.

Now that I’ve compiled a fair amount of anecdotal data, I figured a loosely organized manifest of observations (not quite a review) was in order.

The band

The Fuelband isn’t my first fitness tracker; you might recall I wore the Misfit Shine for a few months. Unlike the minimalist Shine, the Fuelband has quite a few more bells and whistles, starting with its snazzy display.

Check out a teardown of the nifty little bracelet, some pretty impressive stuff inside there, not bad for a shoe and apparel company.

I’ve always admired the design aspects of Nike’s wearables, dating back to 2012 when Noel (@noelportugal) first started wearing one. So, it was a bit sad to hear about a year ago that Nike was closing that division.

Turns out the Fuelband wasn’t dead, and when Nike finally dropped an Android version of the Nike+ Fuelband app, I sprang into action, quite literally.

Anyway, the band didn’t disappoint. It’s lightweight and can be resized using a nifty set of links that can be added or removed.


The fit wasn’t terribly tight, and the band is surprisingly rigid, which eventually caused a couple areas on my wrist to rub a little raw, no biggie.

The biggest surprise was the first pinch I got closing the clasp. After a while, it got easier to close and less pinchy, but man that first one was a zinger.

The battery life was good, something that I initially worried about, lasting about a week per full charge. Nike provides an adapter cord, but the band’s clasp can be plugged directly into a USB port, which is a cool feature, albeit a bit awkward looking.

It’s water-resistant too, which is a nice plus.

Frankly, the band is very much the same one that Noel showed me in 2012, and the lack of advancement is one of the complaints users have had over the years.

The app and data

Entering into this, I fully expected to be sucked back into the statistical vortex that consumed me with the Misfit Shine, and yeah, that happened again. At least, I knew what to expect this time.

Initial setup of the band requires a computer and a software download, which isn’t ideal. Once that was out of the way, I could do everything using the mobile app.

The app worked flawlessly, and it looks good, more good design from Nike. I can’t remember any sync issues or crashes during the three-week period. Surprising, considering Nike resisted Android for so long. I guess I expected their foray into Android to be janky.

I did find one little annoyance. The app doesn’t support the Android Gallery for adding a profile picture, but that’s the only quibble I have.

Everything on the app is easily figured out; there’s a point system, NikeFuel. The band calculates steps and calories too, but NikeFuel is Nike’s attempt to normalize effort for specific activities, which also allows for measurement and competition among participants.

The default NikeFuel goal for each day is 2,000, a number that can be configured. I left it at 2,000 because I found that to be easy to reach.

The app includes Sessions too, which allow the wearer to specify the type of activity s/he is doing. I suppose this makes the NikeFuel calculation more accurate. I used Sessions as a way to categorize and compare workouts.

I tested a few Session types and was stunned to discover that the elliptical earned me less than half the NikeFuel than running on a treadmill for the same duration.


Update: Forgot to mention that the app communicates in real time with the band (vs. periodic syncing), so you can watch your NikeFuel increase during a workout, pretty cool.

Overall, the Android app and the web app at nikeplus.com are both well-done and intuitive. There’s a community aspect too, but that’s not for me. Although I did enjoy watching my progress vs. other men my age in the web app.

One missing feature of the Fuelband, at least compared to its competition, is the lack of sleep tracking. I didn’t really miss this at first, but now that I have it again, with the surprise wearable I’m testing now, I’m realizing I want it.

Honestly, I was a bit sad to take off the Fuelband after investing three weeks into it. Turns out, I really liked wearing it. I even pondered continuing its testing and wearing multiple devices to do an apples-to-apples comparison, but Ultan (@ultan) makes that look good. I can’t.

So, stay tuned for more wearable reviews, and find me at HCM World if you’re attending.

Anything to add? Find the comments.

#db12c now certified for #em12c repository (MOS Note: 1987905.1) with some restrictions

DBASolved - Wed, 2015-03-11 11:06

Last October (2014), at Oracle Open World 2014, I posted about a discussion where there was confusion about whether Oracle Database 12c was supported as the Oracle Management Repository (OMR).  At the time, Oracle had put a temporary suspension on support for the OMR running on Oracle Database 12c.

Over the last week or so, in discussions with some friends I heard that there may be an announcement on this topic soon.  As of yesterday, I was provided a MOS note number to reference (1987905.1) for OMR support on database 12c.  In checking out the note, it appears that the OMR can now be run on a database 12c instance (12.1.0.2) with some restrictions.

These restrictions are:

  • Must apply database patch 20243268
  • Must apply patchset 12.1.0.2.1 (OCT PSU) or later

This note (1987905.1) is welcomed by many in the community who want to build their OMS on the latest database version.  What is missing from the note is whether installing the OMR into a pluggable database (PDB) is supported.  Guess the only way to find out is to try building a new Oracle Enterprise Manager 12c on top of a pluggable database and see what happens.  At least for now, Oracle Database 12c is supported as the OMR.

Enjoy!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Flashback Logging

Jonathan Lewis - Wed, 2015-03-11 09:21

One of the waits that is specific to ASSM (automatic segment space management) is the “enq: FB – contention” wait. You find that the “FB” enqueue has the following description and wait information when you query v$lock_type and v$event_name:


SQL> execute print_table('select * from v$lock_type where type = ''FB''')
TYPE                          : FB
NAME                          : Format Block
ID1_TAG                       : tablespace #
ID2_TAG                       : dba
IS_USER                       : NO
DESCRIPTION                   : Ensures that only one process can format data blocks in auto segment space managed tablespaces

SQL> execute print_table('select * from v$event_name where name like ''enq: FB%''')
EVENT#                        : 806
EVENT_ID                      : 1238611814
NAME                          : enq: FB - contention
PARAMETER1                    : name|mode
PARAMETER2                    : tablespace #
PARAMETER3                    : dba
WAIT_CLASS_ID                 : 1893977003
WAIT_CLASS#                   : 0
WAIT_CLASS                    : Other

This tells us that a process will acquire the lock when it wants to format a batch of blocks in a segment in a tablespace using ASSM – and prior experience tells us that this is a batch of 16 consecutive blocks in the current extent of the segment; and when we see a wait for an FB enqueue we can assume that two sessions have simultaneously tried to format the same new batch of blocks and one of them is waiting for the other to complete the format. In some ways, this wait can be viewed (like the “read by other session” wait) in a positive light – if the second session weren’t waiting for the first session to complete the block format it would have to do the formatting itself, which means the end-user has a reduced response time. On the other hand the set of 16 blocks picked by a session is dependent on its process id, so the second session might have picked a different set of 16 blocks to format, which means in the elapsed time of one format call the segment could have had 32 blocks formatted – this wouldn’t have improved the end-user’s response time, but it would mean that more time would pass before another session had to spend time formatting blocks. Basically, in a highly concurrent system, there’s not a lot you can do about FB waits (unless, of course, you do some clever partitioning of the hot objects).

There is actually one set of circumstances where you can have some control of how much time is spent on the wait, but before I mention it I’d like to point out a couple more details about the event itself. First, the parameter3/id2_tag is a little misleading: you can work out which blocks are being formatted (if you really need to), but the “dba” is NOT a data block address (which you might think if you look at the name and a few values). There is a special case when the FB enqueue is being held while you format the blocks in the 64KB extents that you get from system allocated extents, and there’s probably a special case (which I haven’t bothered to examine) if you create a tablespace with uniform extents that aren’t a multiple of 16 blocks, but in the general case the “dba” consists of two parts – a base “data block address” and a single (hex) digit offset identifying which batch of 16 blocks will be formatted.

For example: a value of 0x01800242 means start at data block address 0x01800240, count forward 2 * 16 blocks then format 16 blocks from that point onwards. Since the last digit can only range from 0x0 to 0xf this means the first 7 (hex) digits of a “dba” can only reference 16 batches of 16 blocks, i.e. 256 blocks. It’s not coincidence (I assume) that a single bitmap space management block can only cover 256 blocks in a segment – the FB enqueue is tied very closely to the bitmap block.
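The arithmetic above can be sketched in a few lines of Python. This is a hypothetical helper of my own, not an Oracle utility – it simply encodes the interpretation just described, where the low hex digit selects a batch of 16 blocks and the remaining digits give the base data block address:

```python
# Sketch (hypothetical helper): decode the "dba" reported with an FB
# enqueue.  The low hex digit is the batch number (0x0 - 0xf); the
# remaining digits are the base data block address.

def decode_fb_dba(dba):
    """Return (base_dba, first_block_of_batch) for an FB 'dba' value."""
    batch = dba & 0xF        # which batch of 16 blocks (0 - 15)
    base = dba & ~0xF        # base data block address
    return base, base + batch * 16

base, start = decode_fb_dba(0x01800242)
print(hex(base), hex(start))   # 0x1800240 0x1800260
```

For the example value 0x01800242 this gives a base of 0x01800240 and a first formatted block of 0x01800260 (two batches of 16 past the base), matching the description above.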

So now it’s time to ask why this discussion of the FB enqueue appears in an article titled “Flashback Logging”. Enable the 10704 trace at level 10, along with the 10046 trace at level 8 and you’ll see. Remember that Oracle may have to log the old version of a block before modifying it and if it’s a block that’s being reused it may contribute to “physical reads for flashback new” – here’s a trace of a “format block” event:


*** 2015-03-10 12:50:35.496
ksucti: init session DID from txn DID:
ksqgtl:
        ksqlkdid: 0001-0023-00000014

*** 2015-03-10 12:50:35.496
*** ksudidTrace: ksqgtl
        ktcmydid(): 0001-0023-00000014
        ksusesdi:   0000-0000-00000000
        ksusetxn:   0001-0023-00000014
ksqgtl: RETURNS 0
WAIT #140627501114184: nam='db file sequential read' ela= 4217 file#=6 block#=736 blocks=1 obj#=192544 tim=1425991835501051
WAIT #140627501114184: nam='db file sequential read' ela= 674 file#=6 block#=737 blocks=1 obj#=192544 tim=1425991835501761
WAIT #140627501114184: nam='db file sequential read' ela= 486 file#=6 block#=738 blocks=1 obj#=192544 tim=1425991835502278
WAIT #140627501114184: nam='db file sequential read' ela= 522 file#=6 block#=739 blocks=1 obj#=192544 tim=1425991835502831
WAIT #140627501114184: nam='db file sequential read' ela= 460 file#=6 block#=740 blocks=1 obj#=192544 tim=1425991835503326
WAIT #140627501114184: nam='db file sequential read' ela= 1148 file#=6 block#=741 blocks=1 obj#=192544 tim=1425991835504506
WAIT #140627501114184: nam='db file sequential read' ela= 443 file#=6 block#=742 blocks=1 obj#=192544 tim=1425991835504990
WAIT #140627501114184: nam='db file sequential read' ela= 455 file#=6 block#=743 blocks=1 obj#=192544 tim=1425991835505477
WAIT #140627501114184: nam='db file sequential read' ela= 449 file#=6 block#=744 blocks=1 obj#=192544 tim=1425991835505985
WAIT #140627501114184: nam='db file sequential read' ela= 591 file#=6 block#=745 blocks=1 obj#=192544 tim=1425991835506615
WAIT #140627501114184: nam='db file sequential read' ela= 449 file#=6 block#=746 blocks=1 obj#=192544 tim=1425991835507157
WAIT #140627501114184: nam='db file sequential read' ela= 489 file#=6 block#=747 blocks=1 obj#=192544 tim=1425991835507684
WAIT #140627501114184: nam='db file sequential read' ela= 375 file#=6 block#=748 blocks=1 obj#=192544 tim=1425991835508101
WAIT #140627501114184: nam='db file sequential read' ela= 463 file#=6 block#=749 blocks=1 obj#=192544 tim=1425991835508619
WAIT #140627501114184: nam='db file sequential read' ela= 685 file#=6 block#=750 blocks=1 obj#=192544 tim=1425991835509400
WAIT #140627501114184: nam='db file sequential read' ela= 407 file#=6 block#=751 blocks=1 obj#=192544 tim=1425991835509841

*** 2015-03-10 12:50:35.509
ksqrcl: FB,16,18002c2
ksqrcl: returns 0

Note: we acquire the lock (ksqgtl), read 16 blocks by “db file sequential read”, write them to the flashback log (buffer), format them in memory, release the lock (ksqrcl). That lock can be held for quite a long time – in this case 13 milliseconds. Fortunately all the single-block reads after the first have been accelerated by O/S prefetching; your timings may vary.
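If you want to put a number on the read component of the lock hold time yourself, the ela= values in the 10046 trace can simply be summed. Here is a rough sketch of my own (not Oracle tooling) run against a trimmed excerpt of the trace above; ela is reported in microseconds in recent trace-file formats:

```python
import re

# Rough sketch: sum the 'ela=' values of the single-block reads in a
# 10046 trace excerpt, to estimate how much of the FB enqueue hold
# time was spent reading the 16 blocks before formatting them.

trace = """\
WAIT #1: nam='db file sequential read' ela= 4217 file#=6 block#=736 blocks=1
WAIT #1: nam='db file sequential read' ela= 674 file#=6 block#=737 blocks=1
WAIT #1: nam='db file sequential read' ela= 486 file#=6 block#=738 blocks=1
"""

def total_read_micros(trace_text):
    """Total microseconds of 'db file sequential read' waits in the text."""
    return sum(int(ela) for ela in re.findall(
        r"db file sequential read' ela= (\d+)", trace_text))

print(total_read_micros(trace))   # 5377
```

Applied to the full 16-line trace above, the same sum accounts for most of the 13 ms between ksqgtl and ksqrcl.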

The higher the level of concurrent activity the more likely it is that processes will collide trying to format the same 16 blocks (the lock is exclusive, so the second will request and wait, then find that the blocks are already formatted when it finally gets the lock). This brings me to the special case where waits for the FB enqueue might have a noticeable impact … if you’re running parallel DML and Oracle decides to use “High Water Mark Brokering”, which means the parallel slaves are inserting data into a single segment instead of each using its own private segment and leaving the query co-ordinator to clean up round the edges afterwards. I think this is most likely to happen if you have a tablespace using fairly large extents and Oracle thinks you’re going to process a relatively small amount of data (e.g. small indexes on large tables) – the trade-off is between collisions between processes and wasted space from the private segments.


Brian Whitmer No Longer in Operational Role at Instructure

Michael Feldstein - Wed, 2015-03-11 09:17

By Phil Hill

Just over a year and a half ago, Devlin Daley left Instructure, the company he co-founded. It turns out that both founders have made changes as Brian Whitmer, the other company co-founder, left his operational role in 2014 but is still on the board of directors. For some context from the 2013 post:

Instructure was founded in 2008 by Brian Whitmer and Devlin Daley. At the time Brian and Devlin were graduate students at BYU who had just taken a class taught by Josh Coates, where their assignment was to come up with a product and business model to address a specific challenge. Brian and Devlin chose the LMS market based on the poor designs and older architectures dominating the market. This design led to the founding of Instructure, with Josh eventually providing seed funding and becoming CEO by 2010.

Brian had a lead role until last year for Instructure’s usability design and for its open architecture and support for LTI standards.

The reason for Brian’s departure (based on both Brian’s comments and comments from Instructure statements) is based on his family. Brian’s daughter has Rett Syndrome:

Rett syndrome is a rare non-inherited genetic postnatal neurological disorder that occurs almost exclusively in girls and leads to severe impairments, affecting nearly every aspect of the child’s life: their ability to speak, walk, eat, and even breathe easily.

As Instructure grew, Devlin became the road show guy while Brian stayed mostly at home, largely due to family. Brian’s personal experiences have led him to create a new company: CoughDrop.

Some people are hard to hear — through no fault of their own. Disabilities like autism, cerebral palsy, Down syndrome, Angelman syndrome and Rett syndrome make it harder for many individuals to communicate on their own. Many people use Augmentative and Alternative Communication (AAC) tools in order to help make their voices heard.

We work to help bring out the voices of those with complex communication needs through good tech that actually makes things easier and supports everyone in helping the individual succeed.

This work sounds a lot like early Instructure, as Brian related to me this week.

Augmentative Communication is a lot like LMS space was, in need of a reminder of how things can be better.

By the middle of 2014, Brian left all operational duties although he remains on the board (and he plans to remain on the board and acting as an adviser).

How will this affect Instructure? I would look at Brian’s key roles in usability and open platform to see if Instructure keeps up his vision. From my view the usability is just baked into the company’s DNA[1] and will likely not suffer. The question is more on the open side. Brian led the initiative for the App Center as I described in 2013:

The key idea is that the platform is built to easily add and support multiple applications. The apps themselves will come from EduAppCenter, a website that launched this past week. There are already more than 100 apps available, with the apps built on top of the Learning Tools Interoperability (LTI) specification from IMS global learning consortium. There are educational apps available (e.g. Khan Academy, CourseSmart, Piazza, the big publishers, Merlot) as well as general-purpose tools (e.g. YouTube, Dropbox, WordPress, Wikipedia).

The apps themselves are wrappers that pre-integrate and give structure access to each of these tools. Since LTI is the most far-reaching ed tech specification, most of the apps should work on other LMS systems. The concept is that other LMS vendors will also sign on the edu-apps site, truly making them interoperable. Whether that happens in reality remains to be seen.

What the App Center will bring once it is released is the simple ability for Canvas end-users to add the apps themselves. If a faculty adds an app, it will be available for their courses, independent of whether any other faculty use that set up. The same applies for students who might, for example, prefer to use Dropbox to organize and share files rather than native LMS capabilities.

The actual adoption by faculty and institutions of this capability takes far longer than people writing about it (myself included) would desire. It takes time and persistence to keep up the faith. The biggest risk that Instructure faces by losing Brian’s operational role is whether they will keep this vision and maintain their support for open standards and third-party apps – opening up the walled garden, in other words.

Melissa Loble, Senior Director of Partners & Programs at Instructure[2], will play a key role in keeping this open vision alive. I have not heard anything indicating that Instructure is changing, but this is a risk from losing a founder who internally ‘owned’ this vision.

I plan to share some other HR news from the ed tech market in future posts, but for now I wish Brian the best with his new venture – he is one of the truly good guys in ed tech.

Update: I should have given credit to Audrey Watters, who prompted me to get a clear answer on this subject.

  1. Much to Brian’s credit
  2. Formerly Associate Dean of Distance Ed at UC Irvine and key player in Walking Dead MOOC

The post Brian Whitmer No Longer in Operational Role at Instructure appeared first on e-Literate.