Randolf Geist

Function-Based Indexes And CURSOR_SHARING = FORCE

Mon, 2015-04-20 02:00
It is generally known that Function-Based Indexes (FBIs) can no longer be used by the optimizer if the expression contains literals and CURSOR_SHARING = FORCE / SIMILAR (deprecated) turns those literals into bind variables. Jonathan Lewis described the issue in detail quite a while ago here.

In a recent OTN thread this issue was raised again, but to my surprise, when I played around with a test case that mimicked the OP's problem query, I found that (certain) Oracle versions have some built-in logic that enables FBI usage in certain cases where you would expect the index to be unusable.

If you test the following code on versions from 10.2.0.4 (possibly earlier) up to and including version 11.2.0.3 then you'll notice some interesting details:


create table t
as
select * from all_objects;

create index t_idx on t (owner || ' ' || object_name);

exec dbms_stats.gather_table_stats(null, 't')

set echo on linesize 200 pagesize 0

alter session set cursor_sharing = force;

select /*+ full(t) */ * from t where owner || ' ' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);

select /*+ index(t) */ * from t where owner || ' ' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);

select /*+ index(t) */ * from t where owner || 'A' || object_name = 'BLA';

select * from table(dbms_xplan.display_cursor);
Here is the relevant output I got from 11.2.0.1 for example:

SQL> alter session set cursor_sharing = force;

Session altered.

SQL>
SQL> select /*+ full(t) */ * from t where owner || ' ' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID ar3tw7r1rvawk, child number 0
-------------------------------------
select /*+ full(t) */ * from t where owner || :"SYS_B_0" || object_name
= :"SYS_B_1"

Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       |   284 (100)|          |
|*  1 |  TABLE ACCESS FULL| T    |     1 |   117 |   284   (2)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"||' '||"OBJECT_NAME"=:SYS_B_1)


19 rows selected.

SQL>
SQL> select /*+ index(t) */ * from t where owner || ' ' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID 6kzz3vw5x8x3b, child number 0
-------------------------------------
select /*+ index(t) */ * from t where owner || :"SYS_B_0" ||
object_name = :"SYS_B_1"

Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |   117 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T"."SYS_NC00016$"=:SYS_B_1)


20 rows selected.

SQL>
SQL> select /*+ index(t) */ * from t where owner || 'A' || object_name = 'BLA';

no rows selected

SQL>
SQL> select * from table(dbms_xplan.display_cursor);
SQL_ID 6kzz3vw5x8x3b, child number 1
-------------------------------------
select /*+ index(t) */ * from t where owner || :"SYS_B_0" ||
object_name = :"SYS_B_1"

Plan hash value: 3778778741

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |       |       | 52472 (100)|          |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T     |   724 | 84708 | 52472   (1)| 00:10:30 |
|   2 |   INDEX FULL SCAN           | T_IDX | 72351 |       |   420   (1)| 00:00:06 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"||:SYS_B_0||"OBJECT_NAME"=:SYS_B_1)


20 rows selected.
Looking at the statement text that results from CURSOR_SHARING = FORCE we can spot the expected bind variables instead of the literals, and this should result in a corresponding predicate that doesn't match the FBI expression. However, when looking at the filter expression in the predicate section (when forcing a full table scan) we can spot something interesting: it still shows the literal, which doesn't correspond to the predicate of the rewritten query text.

The next execution shows that the FBI really can be used despite the bind variable replacement taking place, and the final execution shows that cursor sharing works correctly in the sense that a new child cursor with a different plan and a different predicate section got created for the same SQL text when a different literal was used in the original SQL text. V$SQL_SHARED_CURSOR shows HASH_MATCH_FAILED, which is described as "No existing child cursors have the unsafe literal bind hash values required by the current cursor" - this makes sense and probably means that the corresponding bind variable is marked as "unsafe" internally.
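If you want to check this yourself, a minimal sketch of such a check could look like this, using the SQL_ID from the output above:

select
        child_number
      , hash_match_failed
from
        v$sql_shared_cursor
where
        sql_id = '6kzz3vw5x8x3b';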

This optimisation only shows up if there is a suitable FBI - if there's no corresponding expression, the SQL text and the predicate section match. Furthermore it only supports certain expressions - Jonathan's example shows that in general these rewrites do prevent FBI usage. And obviously it ceases to work in 11.2.0.4 and 12c. Whether this is a bug or a feature I don't know, but since it only seems to apply to certain expressions it's probably not that relevant anyway.

As Jonathan points out in his note you can always work around the general problem by hiding the expression in a view, and since 11g a proper virtual column definition is of course the better approach, which doesn't expose this problem either.
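A minimal sketch of the virtual column variant for the example above could look like this (assuming the FBI gets dropped first, since an index on a virtual column with an identical expression would collide with it; the column name is made up):

drop index t_idx;

alter table t add (owner_object_name generated always as (owner || ' ' || object_name) virtual);

create index t_idx on t (owner_object_name);

Queries can then reference the virtual column by name, so no literal is left in the expression that could be turned into a bind variable.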

Even better would be the proper use of bind variables without relying on forced cursor sharing, but there are still many installations out there that depend on that feature.
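For illustration, the bind variable variant of the test query could look like this in SQL*Plus (with CURSOR_SHARING left at the default EXACT, so the literal ' ' inside the expression is preserved and matches the FBI definition):

alter session set cursor_sharing = exact;

variable search_value varchar2(100)

exec :search_value := 'BLA'

select /*+ index(t) */ * from t where owner || ' ' || object_name = :search_value;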

Combined ACCESS And FILTER Predicates - Excessive Throw-Away

Mon, 2015-04-13 00:00
Catchy title... Let's assume the following data setup:

create table t1
as
select
        rownum as id
      , 1 as id2
      , rpad('x', 100) as filler
from
        dual
connect by
        level <= 1e4
;

create table t2
as
select
        rownum as id
      , 1 as id2
      , rpad('x', 100) as filler
from
        dual
connect by
        level <= 1e4
;

create table t3
as
select
        rownum as id
      , 1 as id2
      , rpad('x', 100) as filler
from
        dual
connect by
        level <= 1e4
;

exec dbms_stats.gather_table_stats(null, 't1')

exec dbms_stats.gather_table_stats(null, 't2')

exec dbms_stats.gather_table_stats(null, 't3')

-- Deliberately wrong order (FBI after gather stats) - the virtual columns created for this FBI don't have statistics, see below
create index t2_idx on t2 (case when id2 = 1 then id2 else 1 end, case when id2 = 2 then id2 else 1 end, filler, id);

create index t3_idx on t3 (id, filler, id2);
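As a side note regarding the comment above: you can check the hidden virtual columns that Oracle creates behind the scenes for the FBI on T2, and whether they have statistics, with something like this sketch:

select
        column_name
      , hidden_column
      , virtual_column
      , num_distinct
      , last_analyzed
from
        user_tab_cols
where
        table_name = 'T2';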
And the following execution plan (all results are from 12.1.0.2 but should be applicable to other versions, too):

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |  1416K|   132   (0)| 00:00:01 |
|*  1 |  HASH JOIN          |      | 10000 |  1416K|   132   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL | T2   | 10000 |   292K|    44   (0)| 00:00:01 |
|*  3 |   HASH JOIN         |      | 10000 |  1123K|    88   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| T3   | 10000 | 70000 |    44   (0)| 00:00:01 |
|   5 |    TABLE ACCESS FULL| T1   | 10000 |  1054K|    44   (0)| 00:00:01 |
----------------------------------------------------------------------------
How long would you expect it to run to return all rows (no tricks like expensive regular expressions or user-defined PL/SQL functions)?

It should probably take just a blink, given the tiny tables with just 10,000 rows each.

However, these are the runtime statistics for a corresponding execution:

| | | |
| |DATABASE |CPU |
|DURATION |TIME |TIME |
|------------|------------|------------|
|+0 00:00:23 |+0 00:00:23 |+0 00:00:23 |
| | | |

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Ord | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows| PGA | Start | Dur(T)| Dur(A)| Time Active Graph | Activity Graph ASH | Top 5 Activity ASH |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | 6 | SELECT STATEMENT | | | | 132 (100)| | 1 | 1 | 0 | | | | | | |
|* 1 | 0 | 5 | HASH JOIN | | 10000 | 1191K| 132 (0)| 00:00:01 | 1 | 1 | 1401K | 2 | 23 | 22 | ##### ############## | @@@@@@@@@@@@@@@@@@@ (100%) | ON CPU(22) |
| 2 | 1 | 1 | TABLE ACCESS FULL | T2 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
|* 3 | 1 | 4 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 1930K | | | | | | |
| 4 | 3 | 2 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
| 5 | 3 | 3 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
How is it possible to burn more than 20 seconds of CPU time with that execution plan?

The actual rows produced correspond pretty much to the estimated cardinalities (except for the final hash join), so nothing looks suspect at first glance.
What becomes obvious from the SQL Monitoring output is that all the time is spent on the hash join operation with ID = 1.
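As a side note: the runtime statistics shown here come from an ASH-based script, but if you simply want to collect A-Rows for your own runs, a common sketch is the GATHER_PLAN_STATISTICS hint combined with DBMS_XPLAN.DISPLAY_CURSOR:

-- enable rowsource statistics for a single execution
-- (alternatively set STATISTICS_LEVEL = ALL for the session)
select /*+ gather_plan_statistics */ count(*) from t1;

-- display the actual row counts (A-Rows) of the last execution
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));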

Of course at that point (at the latest) you should tell me off for not having shown you the predicate section of the plan and the corresponding query in the first place.

So here is the predicate section and the corresponding query:

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END =CASE
"T2"."ID2" WHEN 1 THEN "T2"."ID2" ELSE 1 END AND CASE "T1"."ID2" WHEN
2 THEN "T1"."ID2" ELSE 1 END =CASE "T2"."ID2" WHEN 2 THEN "T2"."ID2"
ELSE 1 END )
filter("T3"."ID2"=CASE WHEN ("T1"."ID">"T2"."ID") THEN
"T1"."ID" ELSE "T2"."ID" END )

3 - access("T3"."ID"="T1"."ID")


select  /*+
          leading(t1 t3 t2)
          full(t1)
          full(t3)
          use_hash(t3)
          swap_join_inputs(t3)
          full(t2)
          use_hash(t2)
          swap_join_inputs(t2)
        */
        t1.*
      --, t3.id2
      --, case when t1.id > t2.id then t1.id else t2.id end
from
        t1
      , t2
      , t3
where
        1 = 1
        --
and     case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and     case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
        --
and     t3.id = t1.id
and     t3.id2 = case when t1.id > t2.id then t1.id else t2.id end
;
There are two important aspects to this query and the plan: First, the join expression between T1 and T2 (without corresponding expression statistics) is sufficiently deceptive to hide from the optimizer that it in fact produces a cartesian product (mimicking real-life multi-table join expressions that lead to bad estimates). Second, the table T3 is joined to both T1 and an expression based on T1 and T2, which means that this expression can only be evaluated after the join to T1 and T2.
With the execution plan shape enforced via my hints (which could just as well be a real-life execution plan shape preferred by the optimizer), T3 and T1 are joined first, producing an innocent 10K-row row source, which is then joined to T2. And here the accident happens inside the hash join operation:

If you look closely at the predicate section you'll notice that the hash join operation has both an ACCESS and a FILTER predicate. The ACCESS part performs the lookup into the hash table based on the join between T1 and T2, which happens to be a cartesian product, so it produces 10K times 10K = 100M rows; only afterwards is the FILTER (representing the T3 to T1/T2 join expression) applied to these 100M rows, matching only a single row in my example here, which is what A-Rows shows for this operation.

So the point is that this excessive work and FILTER throwaway isn't very well represented in the row source statistics. Ideally you would need one of the following two modifications to get a better picture of what is going on:

Either the FILTER operator should be a separate step in the plan, which in theory would then look like this:

----------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     | Execs | A-Rows|
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |       |       |   132 (100)|          |     1 |     1 |
|*  1a|  FILTER              |      | 10000 |  1191K|   132   (0)| 00:00:01 |     1 |     1 |
|*  1b|   HASH JOIN          |      | 10000 |  1191K|   132   (0)| 00:00:01 |     1 |  100M |
|   2 |    TABLE ACCESS FULL | T2   | 10000 | 70000 |    44   (0)| 00:00:01 |     1 |   10K |
|*  3 |    HASH JOIN         |      | 10000 |  1123K|    88   (0)| 00:00:01 |     1 |   10K |
|   4 |     TABLE ACCESS FULL| T3   | 10000 | 70000 |    44   (0)| 00:00:01 |     1 |   10K |
|   5 |     TABLE ACCESS FULL| T1   | 10000 |  1054K|    44   (0)| 00:00:01 |     1 |   10K |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1a- filter("T3"."ID2"=CASE WHEN ("T1"."ID">"T2"."ID") THEN
"T1"."ID" ELSE "T2"."ID" END )
1b- access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END =CASE
"T2"."ID2" WHEN 1 THEN "T2"."ID2" ELSE 1 END AND CASE "T1"."ID2" WHEN
2 THEN "T1"."ID2" ELSE 1 END =CASE "T2"."ID2" WHEN 2 THEN "T2"."ID2"
ELSE 1 END )
3 - access("T3"."ID"="T1"."ID")
This would make the excess rows produced by the ACCESS part of the hash join very obvious, but it is probably not a good solution for performance reasons, because the data would then have to flow from one operation to another rather than being processed within the HASH JOIN operator, which means increased overhead.

Or an additional rowsource statistic could be made available:

----------------------------------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     | Execs | A-Rows|AE-Rows|
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |       |       |   132 (100)|          |     1 |     1 |     1 |
|*  1 |  HASH JOIN          |      | 10000 |  1191K|   132   (0)| 00:00:01 |     1 |     1 |  100M |
|   2 |   TABLE ACCESS FULL | T2   | 10000 | 70000 |    44   (0)| 00:00:01 |     1 |   10K |   10K |
|*  3 |   HASH JOIN         |      | 10000 |  1123K|    88   (0)| 00:00:01 |     1 |   10K |   10K |
|   4 |    TABLE ACCESS FULL| T3   | 10000 | 70000 |    44   (0)| 00:00:01 |     1 |   10K |   10K |
|   5 |    TABLE ACCESS FULL| T1   | 10000 |  1054K|    44   (0)| 00:00:01 |     1 |   10K |   10K |
----------------------------------------------------------------------------------------------------
I've called this statistic "Actually evaluated rows" here. In addition to this case of combined ACCESS and FILTER operations it could also be helpful for other FILTER cases, for example even for a simple full table scan, to see how many rows were evaluated rather than only how many rows matched a possible filter (which is what A-Rows currently shows).

In a recent OTN thread this topic came up again, and since I have also come across this phenomenon a couple of times recently I thought I'd put this note together. Note that Martin Preiss has submitted a corresponding database idea on the OTN forum.

Expanding on this idea a bit further, it could be useful to have an additional "Estimated evaluated rows (EE-Rows)" statistic calculated by the optimizer and shown in the plan. This could also be used to improve the optimizer's cost model for such cases, because at present the optimizer doesn't seem to consider additional FILTER predicates on top of ACCESS predicates when calculating the CPU cost of operations like HASH JOINs.

Note that this problem isn't specific to HASH JOIN operations; you can get similar effects with other join methods, like NESTED LOOP joins, or even simple INDEX lookup operations, where again the ACCESS part isn't very selective and only the FILTER applied afterwards filters the matching rows.

Here are some examples with the given setup:

select  /*+
          leading(t1 t3 t2)
          full(t1)
          full(t3)
          use_hash(t3)
          swap_join_inputs(t3)
          index(t2)
          use_nl(t2)
        */
        t1.*
      --, t3.id2
      --, case when t1.id > t2.id then t1.id else t2.id end
from
        t1
      , t2
      , t3
where
        1 = 1
        --
and     case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and     case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
        --
and     t3.id = t1.id
and     t3.id2 = case when t1.id > t2.id then t1.id else t2.id end
;

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Ord | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows| PGA | Start | Dur(T)| Dur(A)| Time Active Graph | Activity Graph ASH | Top 5 Activity ASH |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | 6 | SELECT STATEMENT | | | | 10090 (100)| | 1 | 1 | | | | | | | |
| 1 | 0 | 5 | NESTED LOOPS | | 10000 | 1416K| 10090 (1)| 00:00:01 | 1 | 1 | | | | | | | |
|* 2 | 1 | 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 1890K | | | | | | |
| 3 | 2 | 1 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
| 4 | 2 | 2 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
|* 5 | 1 | 4 | INDEX RANGE SCAN | T2_IDX | 1 | 30 | 1 (0)| 00:00:01 | 10K | 1 | | 3 | 33 | 32 | ################### | @@@@@@@@@@@@@@@@@@@ (100%) | ON CPU(32) |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T3"."ID"="T1"."ID")
5 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END
="T2"."SYS_NC00004$" AND CASE "T1"."ID2" WHEN 2 THEN "T1"."ID2" ELSE 1
END ="T2"."SYS_NC00005$")
filter("T3"."ID2"=CASE WHEN ("T1"."ID">"T2"."ID") THEN
"T1"."ID" ELSE "T2"."ID" END )



select  /*+
          leading(t1 t3 t2)
          full(t1)
          full(t3)
          use_hash(t3)
          swap_join_inputs(t3)
          index(t2)
          use_nl(t2)
        */
        max(t1.filler)
      --, t3.id2
      --, case when t1.id > t2.id then t1.id else t2.id end
from
        t1
      , t2
      , t3
where
        1 = 1
        --
and     case when t1.id2 = 1 then t1.id2 else 1 end = case when t2.id2 = 1 then t2.id2 else 1 end
and     case when t1.id2 = 2 then t1.id2 else 1 end = case when t2.id2 = 2 then t2.id2 else 1 end
        --
and     t3.id = t1.id
and     t2.filler >= t1.filler
and     t2.id = case when t1.id2 > t3.id2 then t1.id2 else t3.id2 end
;

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Pid | Ord | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Execs | A-Rows| PGA | Start | Dur(T)| Dur(A)| Time Active Graph | Activity Graph ASH | Top 5 Activity ASH |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | 7 | SELECT STATEMENT | | | | 20092 (100)| | 1 | 1 | | | | | | | |
| 1 | 0 | 6 | SORT AGGREGATE | | 1 | 223 | | | 1 | 1 | | | | | | | |
| 2 | 1 | 5 | NESTED LOOPS | | 1 | 223 | 20092 (1)| 00:00:01 | 1 | 10K | | | | | | | |
|* 3 | 2 | 3 | HASH JOIN | | 10000 | 1123K| 88 (0)| 00:00:01 | 1 | 10K | 1900K | | | | | | |
| 4 | 3 | 1 | TABLE ACCESS FULL| T3 | 10000 | 70000 | 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
| 5 | 3 | 2 | TABLE ACCESS FULL| T1 | 10000 | 1054K| 44 (0)| 00:00:01 | 1 | 10K | | | | | | | |
|* 6 | 2 | 4 | INDEX RANGE SCAN | T2_IDX | 1 | 108 | 2 (0)| 00:00:01 | 10K | 10K | | 2 | 34 | 34 | #################### | @@@@@@@@@@@@@@@@@@@ (100%) | ON CPU(34) |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("T3"."ID"="T1"."ID")
6 - access(CASE "T1"."ID2" WHEN 1 THEN "T1"."ID2" ELSE 1 END
="T2"."SYS_NC00004$" AND CASE "T1"."ID2" WHEN 2 THEN "T1"."ID2" ELSE 1
END ="T2"."SYS_NC00005$" AND "T2"."FILLER">="T1"."FILLER" AND
"T2"."ID"=CASE WHEN ("T1"."ID2">"T3"."ID2") THEN "T1"."ID2" ELSE
"T3"."ID2" END AND "T2"."FILLER" IS NOT NULL)
filter("T2"."ID"=CASE WHEN ("T1"."ID2">"T3"."ID2") THEN
"T1"."ID2" ELSE "T3"."ID2" END )

The former exhibits exactly the same problem as the HASH JOIN example, except that the FILTER is evaluated in the inner row source of a NESTED LOOP join, after the index access operation.

The latter shows as a variation the classic partial "index access" due to a range comparison in between: although the entire expression can be evaluated at index level, the access part matches every index entry, so the range scan actually needs to walk the entire index at each loop iteration, and the FILTER is then applied to all the index values evaluated.

Video Tutorial: XPLAN_ASH Active Session History - Part 5

Fri, 2015-04-10 01:02
Here is the next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality, continuing the actual walk-through of the script output.

More parts to follow.

DOAG Expertenseminar "Parallel Execution Masterclass" (German)

Mon, 2015-03-30 15:34
In two weeks the expert seminar "Parallel Execution Masterclass", which I'm running together with the DOAG, takes place in Berlin.

There are still a few seats available - so if you have the time and inclination to come to Berlin and pick up exclusive knowledge (not only) about the Parallel Execution feature of the Oracle database, I would be very pleased to welcome you there together with the other attendees and to spend a good and productive time with you!

If you are interested, please contact the DOAG representatives given in the link - there you will also find a more detailed description of the seminar.

Video Tutorial: XPLAN_ASH Active Session History - Part 4

Sun, 2015-03-29 11:55
Here is the next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality, continuing the actual walk-through of the script output.

More parts to follow.


12c Parallel Execution New Features: Concurrent UNION ALL - Part 3

Sun, 2015-03-22 15:17
In the final part of this instalment I want to focus on the possible optimisation of remote access that I outlined in the initial part, which is based on the idea of running multiple concurrent remote branches of a UNION ALL to overcome the query coordinator bottleneck of straightforward remote queries that need to transfer larger amounts of data.

For that purpose I now simply change my sample query to access the serial table T2 via the DB link defined in the setup of the initial part, like this:

set echo on timing on time on

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
);
which gives me this execution plan:

--------------------------------------------------------------
| Id  | Operation        | Name | Cost (%CPU)| Inst   |IN-OUT|
--------------------------------------------------------------
|   0 | SELECT STATEMENT |      |     0   (0)|        |      |
|   1 |  REMOTE          |      |            | LOOP   | R->S |
--------------------------------------------------------------
Ouch, not exactly what I wanted. Of course it's nice that the optimizer recognizes that this is a statement that can be executed fully remotely, but for my particular purpose I don't want that to happen. So let's add a dummy local branch:

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where 1 = 0

);
which now gives this plan:

----------------------------------------------------------------------------
| Id  | Operation                  | Name     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |          |        |      |            |
|   1 |  SORT AGGREGATE            |          |        |      |            |
|   2 |   PX COORDINATOR           |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)     | :TQ10000 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE         |          |  Q1,00 | PCWP |            |
|   5 |      VIEW                  |          |  Q1,00 | PCWP |            |
|   6 |       UNION-ALL            |          |  Q1,00 | PCWP |            |
|   7 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|   8 |         REMOTE             | T2       |  Q1,00 | PCWP |            |
|   9 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|  10 |         REMOTE             | T2       |  Q1,00 | PCWP |            |
|  11 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|  12 |         REMOTE             | T2       |  Q1,00 | PCWP |            |
|  13 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|  14 |         REMOTE             | T2       |  Q1,00 | PCWP |            |
|  15 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|  16 |         REMOTE             | T2       |  Q1,00 | PCWP |            |
|  17 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|  18 |         REMOTE             | T2       |  Q1,00 | PCWP |            |
|  19 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|  20 |         REMOTE             | T2       |  Q1,00 | PCWP |            |
|  21 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|  22 |         REMOTE             | T2       |  Q1,00 | PCWP |            |
|  23 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|  24 |         REMOTE             | T2       |  Q1,00 | PCWP |            |
|  25 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|  26 |         REMOTE             | T2       |  Q1,00 | PCWP |            |
|  27 |        PX SELECTOR         |          |  Q1,00 | PCWP |            |
|* 28 |         FILTER             |          |  Q1,00 | PCWP |            |
|  29 |          TABLE ACCESS FULL | T2       |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------
That is more like what I wanted. One funny side effect of the dummy branch is that the automatically derived degree is now 11, since there are 11 branches. Although the optimizer knows that the 11th branch won't be executed (the filter of operation ID = 28 is "NULL IS NOT NULL"), it still gets its own PX SELECTOR assigned, which is probably why the calculation arrives at 11.

So let's see what happens at runtime using the setup from the previous part where T2 has 2M rows:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
1| 4485K| | 1| 4| 5|@**** (5) |
2| 45M| | 0| 10| 10|********** (10) |
3| 46M| | 1| 9| 10|@********* (10) |
4| 46M| | 0| 10| 10|********** (10) |
5| 46M| | 0| 10| 10|********** (10) |
6| 46M| | 0| 10| 10|********** (10) |
7| 46M| | 0| 10| 10|********** (10) |
8| 46M| | 0| 10| 10|********** (10) |
9| 46M| | 0| 10| 10|********** (10) |
10| 46M| | 0| 10| 10|********** (10) |
11| 46M| | 0| 10| 10|********** (10) |
12| 46M| | 0| 10| 10|********** (10) |
13| 46M| | 0| 10| 10|********** (10) |
14| 46M| | 0| 10| 10|********** (10) |
15| 46M| | 0| 10| 10|********** (10) |
16| 46M| | 0| 10| 10|********** (10) |
17| 46M| | 0| 10| 10|********** (10) |
18| 46M| | 0| 10| 10|********** (10) |
19| 46M| | 0| 10| 10|********** (10) |
20| 46M| | 0| 10| 10|********** (10) |
21| 46M| | 0| 10| 10|********** (10) |
22| 46M| | 0| 10| 10|********** (10) |
23| 46M| | 0| 10| 10|********** (10) |
24| 46M| | 0| 10| 10|********** (10) |
25| 46M| | 1| 9| 10|@********* (10) |
26| 46M| | 1| 9| 10|@********* (10) |
27| 46M| | 0| 10| 10|********** (10) |
28| 46M| | 0| 10| 10|********** (10) |
29| 46M| | 0| 10| 10|********** (10) |
30| 46M| | 0| 10| 10|********** (10) |
31| 46M| | 0| 10| 10|********** (10) |
32| 46M| | 0| 10| 10|********** (10) |
33| 46M| | 1| 9| 10|@********* (10) |
34| 46M| | 0| 10| 10|********** (10) |
35| 46M| | 1| 9| 10|@********* (10) |
36| 46M| | 0| 10| 10|********** (10) |
37| 46M| | 0| 10| 10|********** (10) |
38| | | 0| 0| 0| (0) |
39| 46M| | 2| 8| 10|@@******** (10) |
40| 46M| | 0| 10| 10|********** (10) |
41| 46M| | 0| 10| 10|********** (10) |
42| 46M| | 0| 10| 10|********** (10) |
43| 46M| | 1| 9| 10|@********* (10) |
44| 46M| | 0| 10| 10|********** (10) |
45| 46M| | 0| 10| 10|********** (10) |
46| 46M| | 0| 10| 10|********** (10) |
47| 46M| | 0| 10| 10|********** (10) |
48| 46M| | 0| 10| 10|********** (10) |
49| 46M| | 0| 10| 10|********** (10) |
50| 46M| | 0| 10| 10|********** (10) |
51| 46M| | 0| 10| 10|********** (10) |
52| 46M| | 0| 10| 10|********** (10) |
53| 46M| | 1| 9| 10|@********* (10) |
54| 46M| | 0| 10| 10|********** (10) |
55| 46M| | 0| 10| 10|********** (10) |
56| 46M| | 0| 10| 10|********** (10) |
57| 46M| | 0| 10| 10|********** (10) |
58| 46M| | 0| 10| 10|********** (10) |
59| 36M| | 0| 8| 8|******** (8) |
60| 4609K| | 0| 1| 1|* (1) |

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 1 | SORT AGGREGATE | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 2 | PX COORDINATOR | | 12 | 11 | | | | | 0:sqlplus.exe(0)[11],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 4 | SORT AGGREGATE | | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 5 | VIEW | | 11 | 20M | 43 | 1 | 1 | # | 1:P002(1)[2000K],P00A(0)[2000K],P001(0)[2000K],P003(0)[2000K],P004(0)[2000K],... |
| 6 | UNION-ALL | | 11 | 20M | | | | | 0:P00A(0)[2000K],P001(0)[2000K],P002(0)[2000K],P003(0)[2000K],P004(0)[2000K],... |
| 7 | PX SELECTOR | | 11 | 2000K | | | | | 0:P006(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 8 | REMOTE | T2 | 11 | 2000K | 1 | 59 | 58 | #################### | 3:P006(58)[2000K],P004(1)[0],P007(1)[0],P00A(0)[0],P000(0)[0],... |
| 9 | PX SELECTOR | | 11 | 2000K | | | | | 0:P008(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 10 | REMOTE | T2 | 11 | 2000K | 1 | 58 | 57 | #################### | 1:P008(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 11 | PX SELECTOR | | 11 | 2000K | | | | | 0:P00A(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 12 | REMOTE | T2 | 11 | 2000K | 1 | 59 | 58 | #################### | 1:P00A(58)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 13 | PX SELECTOR | | 11 | 2000K | | | | | 0:P004(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 14 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P004(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 15 | PX SELECTOR | | 11 | 2000K | | | | | 0:P007(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 16 | REMOTE | T2 | 11 | 2000K | 2 | 59 | 58 | #################### | 1:P007(58)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 17 | PX SELECTOR | | 11 | 2000K | | | | | 0:P005(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 18 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P005(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 19 | PX SELECTOR | | 11 | 2000K | | | | | 0:P002(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P003(0)[0],... |
| 20 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 56 | #################### | 1:P002(56)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P003(0)[0],... |
| 21 | PX SELECTOR | | 11 | 2000K | | | | | 0:P009(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 22 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P009(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 23 | PX SELECTOR | | 11 | 2000K | | | | | 0:P003(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 24 | REMOTE | T2 | 11 | 2000K | 2 | 57 | 56 | #################### | 1:P003(56)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 25 | PX SELECTOR | | 11 | 2000K | | | | | 0:P001(0)[2000K],P00A(0)[0],P000(0)[0],P002(0)[0],P003(0)[0],... |
| 26 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P001(57)[2000K],P00A(0)[0],P000(0)[0],P002(0)[0],P003(0)[0],... |
| 27 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
|* 28 | FILTER | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 29 | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
So in principle this works as desired: with the automatically derived degree, all remote branches are active at the same time. If I now queried different chunks of the same remote object to speed up the transfer and maximize throughput, this should give me what I want.
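Just to illustrate the idea, a sketch of such chunked remote branches could look like this (the ID ranges are made up and assume a reasonably even distribution; the dummy local branch is again there to avoid the fully remote execution):

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, filler from t2@loop where id between 1 and 500000
union all
select id, filler from t2@loop where id between 500001 and 1000000
union all
select id, filler from t2@loop where id between 1000001 and 1500000
union all
select id, filler from t2@loop where id between 1500001 and 2000000
union all
select id, filler from t2 where 1 = 0
);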

If you wonder why the execution now took almost one minute while processing the same query in the previous part took just 40-45 seconds: this seems to be a side effect of running that many concurrent CPU-intensive processes on my test system, which nominally has 12 cores / 24 CPUs but obviously doesn't scale linearly - either due to some architectural issues, or simply because the CPU speed is lowered with that many cores active.

However, look what happens when I change the remote query slightly so that no rows will be returned:

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where 1 = 0
);
The runtime profile now looks like this:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
6| 36M| | 0| 9,2| 9,2|********* (9,2) |
12| 36M| | 0| 11| 11|*********** (11) |
18| 36M| | 0| 11| 11|*********** (11) |
24| 36M| | 0| 11| 11|*********** (11) |
30| 36M| | 0| 11| 11|*********** (11) |
36| 36M| | 0| 11| 11|*********** (11) |
42| 36M| | 0| 9,2| 9,2|********* (9,2) |
48| 36M| | 0| 11| 11|*********** (11) |
54| 36M| | 0| 11| 11|*********** (11) |
60| 32M| | 0| 11| 11|*********** (11) |
66| 9641K| | 0| 11| 11|*********** (11) |
72| 9641K| | 0| 11| 11|*********** (11) |
78| 9641K| | 0| 11| 11|*********** (11) |
84| 9641K| | 0| 11| 11|*********** (11) |
90| 9641K| | 0| 11| 11|*********** (11) |
96| 9641K| | 0| 11| 11|*********** (11) |
102| 9641K| | 0| 9,2| 9,2|********* (9,2) |
108| 9641K| | 0| 11| 11|*********** (11) |
114| 9801K| | 0| 11| 11|*********** (11) |
120|10281K| | 0| 11| 11|*********** (11) |
126|10281K| | 0| 11| 11|*********** (11) |
132|10281K| | 0| 11| 11|*********** (11) |
138|10281K| | 0| 11| 11|*********** (11) |
144|10281K| | 0| 11| 11|*********** (11) |
150|10281K| | 0| 11| 11|*********** (11) |
156|10281K| | 0| 11| 11|*********** (11) |
162|10281K| | 0| 9,2| 9,2|********* (9,2) |
168|10281K| | 0| 11| 11|*********** (11) |
174|10281K| | 0| 11| 11|*********** (11) |
180|10281K| | 0| 11| 11|*********** (11) |
186|10281K| | 0| 11| 11|*********** (11) |
192|10281K| | 0| 11| 11|*********** (11) |
198|10281K| | 0| 11| 11|*********** (11) |
204|10281K| | 0| 11| 11|*********** (11) |
210|10281K| | 0| 11| 11|*********** (11) |
216|10281K| | 0| 11| 11|*********** (11) |
222|10281K| | 0| 9,2| 9,2|********* (9,2) |
228|10281K| | 0| 11| 11|*********** (11) |
234|10281K| | 0| 11| 11|*********** (11) |
240|10281K| | 0| 11| 11|*********** (11) |
246|10281K| | 0| 11| 11|*********** (11) |
252|10281K| | 0| 11| 11|*********** (11) |
258|10281K| | 0| 11| 11|*********** (11) |
264|10281K| | 0| 11| 11|*********** (11) |
270|10281K| | 0| 11| 11|*********** (11) |
276|10281K| | 0| 11| 11|*********** (11) |
282|10281K| | 0| 9,2| 9,2|********* (9,2) |
287|10281K| | 0| 11| 11|*********** (11) |
292|10281K| | 0| 11| 11|*********** (11) |
297|10281K| | 0| 11| 11|*********** (11) |
302|10281K| | 0| 11| 11|*********** (11) |
307|10281K| | 0| 11| 11|*********** (11) |
312|10281K| | 0| 11| 11|*********** (11) |
317|10281K| | 0| 11| 11|*********** (11) |
322|10281K| | 0| 11| 11|*********** (11) |
327|10281K| | 0| 11| 11|*********** (11) |
332|10281K| | 0| 11| 11|*********** (11) |
337|10281K| | 0| 11| 11|*********** (11) |
342|10281K| | 0| 8,8| 8,8|********* (8,8) |
347|10281K| | 0| 11| 11|*********** (11) |
352|10281K| | 0| 11| 11|*********** (11) |
357|10281K| | 0| 11| 11|*********** (11) |
362|10281K| | 0| 11| 11|*********** (11) |
367|10281K| | 0| 11| 11|*********** (11) |
372|10281K| | 0| 11| 11|*********** (11) |
377|10281K| | 0| 11| 11|*********** (11) |
382|10281K| | 0| 11| 11|*********** (11) |
387|10281K| | 0| 11| 11|*********** (11) |
392|10281K| | 0| 11| 11|*********** (11) |
397|10281K| | 0| 11| 11|*********** (11) |
402|10281K| | 0| 8,8| 8,8|********* (8,8) |
407|10281K| | 0| 11| 11|*********** (11) |
412|10281K| | 0| 11| 11|*********** (11) |
417|10281K| | 0| 11| 11|*********** (11) |
422|10281K| | 0| 11| 11|*********** (11) |
427|10281K| | 0| 11| 11|*********** (11) |
432|10281K| | 0| 11| 11|*********** (11) |
437|10281K| | 0| 11| 11|*********** (11) |
442|10281K| | 0| 11| 11|*********** (11) |
447|10281K| | 0| 11| 11|*********** (11) |
452|10281K| | 0| 11| 11|*********** (11) |
457|10281K| | 0| 11| 11|*********** (11) |
462|10281K| | 0| 8,8| 8,8|********* (8,8) |
467|10281K| | 0| 11| 11|*********** (11) |
472|10281K| | 0| 11| 11|*********** (11) |
477|10281K| | 0| 11| 11|*********** (11) |
482|10281K| | 0| 11| 11|*********** (11) |
487|10281K| | 0| 11| 11|*********** (11) |
492|10281K| | 0| 11| 11|*********** (11) |
497|10281K| | 0| 11| 11|*********** (11) |
502|10281K| | 0| 11| 11|*********** (11) |
507|10281K| | 0| 11| 11|*********** (11) |
512|10281K| | 0| 11| 11|*********** (11) |
517|10281K| | 0| 11| 11|*********** (11) |
522|10281K| | 0| 8,8| 8,8|********* (8,8) |
527|10281K| | 0| 11| 11|*********** (11) |
532|10281K| | 0| 11| 11|*********** (11) |
537| 9535K| | 0| 10| 10|********** (10) |
542| 7902K| | 0| 8,4| 8,4|******** (8,4) |
547| 4894K| | 0| 5,2| 5,2|***** (5,2) |

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 1 | SORT AGGREGATE | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 2 | PX COORDINATOR | | 12 | 11 | | | | | 0:sqlplus.exe(0)[11],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 4 | SORT AGGREGATE | | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 5 | VIEW | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 6 | UNION-ALL | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 7 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 8 | REMOTE | T2 | 11 | 0 | 2 | 59 | 58 | ### | 11:P001(58)[0],P002(58)[0],P004(58)[0],P005(58)[0],P00A(57)[0],... |
| 9 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 10 | REMOTE | T2 | 11 | 0 | 58 | 57 | 56 | ### | 11:P00A(53)[0],P001(53)[0],P002(53)[0],P006(53)[0],P007(53)[0],... |
| 11 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 12 | REMOTE | T2 | 11 | 0 | 111 | 58 | 57 | ### | 11:P008(54)[0],P000(53)[0],P001(53)[0],P002(53)[0],P004(53)[0],... |
| 13 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 14 | REMOTE | T2 | 11 | 0 | 163 | 61 | 60 | ### | 11:P00A(54)[0],P001(54)[0],P004(54)[0],P000(53)[0],P002(53)[0],... |
| 15 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 16 | REMOTE | T2 | 11 | 0 | 216 | 61 | 60 | ### | 11:P00A(55)[0],P000(54)[0],P005(54)[0],P006(54)[0],P001(53)[0],... |
| 17 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 18 | REMOTE | T2 | 11 | 0 | 269 | 65 | 64 | #### | 11:P005(58)[0],P007(57)[0],P00A(56)[0],P000(56)[0],P004(56)[0],... |
| 19 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 20 | REMOTE | T2 | 11 | 0 | 324 | 64 | 63 | #### | 11:P006(55)[0],P00A(53)[0],P000(53)[0],P004(53)[0],P008(53)[0],... |
| 21 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 22 | REMOTE | T2 | 11 | 0 | 376 | 66 | 65 | #### | 11:P007(54)[0],P00A(53)[0],P005(53)[0],P001(52)[0],P003(52)[0],... |
| 23 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 24 | REMOTE | T2 | 11 | 0 | 429 | 67 | 66 | #### | 11:P004(54)[0],P008(54)[0],P00A(53)[0],P000(53)[0],P001(53)[0],... |
| 25 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 26 | REMOTE | T2 | 11 | 0 | 481 | 67 | 66 | ### | 11:P008(54)[0],P000(53)[0],P001(53)[0],P003(53)[0],P009(53)[0],... |
| 27 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 28 | FILTER | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 29 | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Ouch, so this took more than nine times longer now, and although we see the concurrent UNION ALL plan shape, there is not much concurrency visible in the runtime activity: each branch seems to be executed by all 11 PX servers concurrently, and only then is the next branch executed, again by all 11 PX servers... It more or less looks as if the PX SELECTOR didn't work correctly and, instead of assigning the branch to only one PX server, the branch got executed by all of them - which, as we've already seen in the previous parts of this series, is what actually happens; but for non-remote branches the PX SELECTOR functionality made sure that only one of them actually did something, whereas the other ones simply skipped the operation.

We can see this problem confirmed by looking at the SQL execution statistics of the remote queries executed via V$SQLSTATS or DBA_HIST_SQLSTAT:

SQL_ID           FETCHES END_OF_FETCH_COUNT EXECUTIONS PARSE_CALLS ROWS_PROCESSED
------------- ---------- ------------------ ---------- ----------- --------------
dtjb3bxg1ysdk        730                 10        110         110       20000100
d36r1d00yaunc        110                110        108         108              0
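For reference, a sketch of the query that could produce output like the above on the remote database (the SQL_IDs are the ones from this test):

select
        sql_id
      , fetches
      , end_of_fetch_count
      , executions
      , parse_calls
      , rows_processed
from
        v$sqlstats
where
        sql_id in ('dtjb3bxg1ysdk', 'd36r1d00yaunc');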
The first SQL_ID corresponds to the remote query used by my first, successful example - it already confirms that the remote query was actually executed 110 times (11 PX servers times 10 branches) instead of the expected 10 times. The ROWS_PROCESSED and END_OF_FETCH_COUNT figures suggest what seems to have happened: ten of the executions actually fetched the rows to the end (10 times 2M rows), but 100 of them fetched only a single row and then stopped processing.

This also explains why my slightly changed query influenced the runtime profile so dramatically: since no rows were returned by the remote query, all 110 executions had to run to the end - they would only have stopped early after fetching the first row, but there were no rows to fetch.

This problem is tracked via bug 19565803: INEFFICIENT EXECUTION OF PARALLEL UNION-ALL INVOLVING REMOTE TABLES, which is reported to be fixed in 12.2, but there doesn't seem to be a patch/fix available for 12.1.

So in principle my original idea should work: assuming the remote queries just access different chunks/partitions of the same segment, the issue just described shouldn't matter. However, depending on what exactly the remote queries do, other scenarios might be affected - the possible overhead depends on how much work the remote queries have to perform before they return the first row.

12c Parallel Execution New Features: Concurrent UNION ALL - Part 2

Thu, 2015-03-12 14:41
In the first part of this series I've focused on the parallel degree chosen by the optimizer when dealing with the new concurrent UNION ALL feature.

I've shown that for the variant with only serial branches in the UNION ALL, the number of branches in principle dictates the parallel degree determined, even in the case of more complex plans that mix such a serial-branches-only UNION ALL operator with other parallel operations, for example via a join.

In this part I'll focus on the runtime behaviour of the feature, but before doing so, let me show you what happens if you start mixing serial and parallel branches in the UNION ALL, like this (using the identical table setup as in the previous part):

select count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t_2
where regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t_2
where regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
);
The EXPLAIN PLAN output then looks like this:

--------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,00 | PCWP | |
| 5 | VIEW | | Q1,00 | PCWP | |
| 6 | UNION-ALL | | Q1,00 | PCWP | |
| 7 | PX SELECTOR | | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 9 | PX SELECTOR | | Q1,00 | PCWP | |
|* 10 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 11 | PX SELECTOR | | Q1,00 | PCWP | |
|* 12 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 13 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
|* 14 | TABLE ACCESS FULL| T_2 | Q1,00 | PCWP | |
| 15 | PX SELECTOR | | Q1,00 | PCWP | |
|* 16 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 17 | PX SELECTOR | | Q1,00 | PCWP | |
|* 18 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 19 | PX SELECTOR | | Q1,00 | PCWP | |
|* 20 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 21 | PX SELECTOR | | Q1,00 | PCWP | |
|* 22 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 23 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
|* 24 | TABLE ACCESS FULL| T_2 | Q1,00 | PCWP | |
| 25 | PX SELECTOR | | Q1,00 | PCWP | |
|* 26 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 27 | PX SELECTOR | | Q1,00 | PCWP | |
|* 28 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 29 | PX SELECTOR | | Q1,00 | PCWP | |
|* 30 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

8 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
10 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
12 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
14 - filter( REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
16 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
18 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
20 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
22 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
24 - filter( REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
26 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
28 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
30 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>=
REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))

Note
-----
- Degree of Parallelism is 8 because of table property
So now the concurrent UNION ALL feature got activated automatically (no PQ_CONCURRENT_UNION hint required), as described in the documentation / white paper - you can tell from the PX SELECTOR operators shown for the serial branches (and the fact that these operations are now shown as PCWP). So having at least one parallel branch activates the feature by default; it is even used if you happen to have parallel branches only, and you would have to use the NO_PQ_CONCURRENT_UNION hint to prevent the feature usage.
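
As a minimal sketch of suppressing it for a mixed UNION ALL like the one above (using the same statement-level query block syntax as the PQ_CONCURRENT_UNION hint I show in part 1):

select /*+ NO_PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id from t_2
union all
select id from t2
);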

The notes section now shows a parallel degree of 8, and when I execute this SQL the actual degree used at runtime agrees with that, so in this case the degree shown seems to be correct.
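
One way to cross-check the degree actually used at runtime is to query V$PX_SESSION from a second session while the statement is running - a sketch:

-- one row per PX server; DEGREE vs. REQ_DEGREE also reveals downgrades
select qcsid, server_group, server_set, degree, req_degree,
       count(*) as px_servers
from   v$px_session
where  sid <> qcsid
group by qcsid, server_group, server_set, degree, req_degree;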

So how does the feature behave at runtime? For that purpose I've changed the set-up slightly by increasing the size of the serial table T2 to 2M rows (the initial setup used 200K rows), so that the parallel and the serial table have the same number of rows. I've also changed the parallel degree of T_2 to 4 instead of 8 to make some points more obvious in the output:

-- This is the Parallel table
drop table t_2 purge;

drop table t2 purge;

create table t_2
compress
as
select
rownum as id
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_2')

alter table t_2 parallel 4;

-- This is the serial table
create table t2
compress
as
select
(rownum * 2) + 1 as id
, mod(rownum, 2000) + 1 as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(100000) */ * from dual
connect by
level <= 100000) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't2')
So let's execute the above query with this slightly modified set-up and look at the output of my XPLAN_ASH script to monitor the execution. These are some snippets from the script output after a couple of seconds:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
1| | | 0| 0| 0| (0) |
2| 3844K| | 4| 0| 4|@@@@ (4) |
3| 3844K| | 4| 0| 4|@@@@ (4) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
5| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
7| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
9| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
11| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
13| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
15| 3844K| | 4| 0| 4|@@@@ (4) |

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Act | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 1 | | SORT AGGREGATE | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 2 | | PX COORDINATOR | | 5 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 3 | | PX SEND QC (RANDOM) | :TQ10000 | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 4 | | SORT AGGREGATE | | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 5 | | VIEW | | 4 | 2984K | | | | | 0:P002(0)[753K],P001(0)[745K],P000(0)[744K],P003(0)[742K],sqlplus.exe(0)[0] |
| 6 | | UNION-ALL | | 4 | 2984K | | | | | 0:P002(0)[753K],P001(0)[745K],P000(0)[744K],P003(0)[742K],sqlplus.exe(0)[0] |
| 7 | | PX SELECTOR | | 4 | 753K | | | | | 0:P002(0)[753K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | ==> | TABLE ACCESS FULL| T2 | 4 | 753K | 2 | 14 | 14 | ################### | 1:P002(14)[753K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | | PX SELECTOR | | 3 | 745K | | | | | 0:P001(0)[745K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | ==> | TABLE ACCESS FULL| T2 | 3 | 745K | 2 | 14 | 14 | ################### | 1:P001(14)[745K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | | PX SELECTOR | | 2 | 744K | | | | | 0:P000(0)[744K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 12 | ==> | TABLE ACCESS FULL| T2 | 2 | 744K | 2 | 14 | 14 | ################### | 1:P000(14)[744K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 13 | | PX BLOCK ITERATOR | | 1 | 742K | | | | | 0:P003(0)[742K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 14 | ==> | TABLE ACCESS FULL| T_2 | 20 | 742K | 2 | 14 | 14 | ################### | 1:P003(14)[742K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 15 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 16 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 17 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 22 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 23 | | PX BLOCK ITERATOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 24 | | TABLE ACCESS FULL| T_2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 25 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 26 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
For brevity I've omitted some of the columns from the output, and want to focus specifically on the "Execs" and row distribution per Parallel Execution Server. We can see from the "Activity Timeline" that the statement runs on average with four PX servers active as desired (remember I've lowered the degree to 4 for this run), so the work distribution is optimal at present. What we can tell from the "Execs" and "row distribution" output is that the PX servers in principle are assigned in the following way:

- the first branch gets executed by all four PX servers, but only one is assigned by the PX SELECTOR to actually do something
- the second branch gets executed by the remaining three PX servers, but only one is assigned by the PX SELECTOR to actually do something
- the third branch gets executed by the remaining two PX servers, but only one is assigned by the PX SELECTOR to actually do something
- the fourth branch gets executed by the remaining PX server

The fourth branch is particularly interesting because it's actually a parallel full table scan that is usually split into granules via the PX BLOCK ITERATOR operator, with each granule assigned to one of the (usually more than one) PX servers working on the operation. However, in this particular case only one PX server is left at present, so only this PX server works on the "parallel" full table scan and gets all the granules assigned. That isn't a problem in terms of parallelism, since all four PX servers have something to do, but it results in a rather unusual distribution profile for this "parallel" full table scan. You can see this confirmed by the "row distribution" shown in the last column: the number in square brackets behind each process is the number of rows produced (the number in parentheses represents the ASH sample count per process and plan operation), and only one of the PX servers has produced rows so far for this "parallel" full table scan operation.
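
The numbers in parentheses are essentially what a simple aggregation of ASH per plan line and process returns - a sketch (:sql_id / :sql_exec_id identify the monitored execution; ASH requires the Diagnostics Pack license):

select sql_plan_line_id, program, count(*) as sample_count
from   v$active_session_history
where  sql_id = :sql_id
and    sql_exec_id = :sql_exec_id
group by sql_plan_line_id, program
order by sql_plan_line_id, sample_count desc;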

Here's the script output some seconds later (45 seconds runtime so far):

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
1| | | 0| 0| 0| (0) |
2| 3844K| | 4| 0| 4|@@@@ (4) |
3| 3844K| | 4| 0| 4|@@@@ (4) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
5| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
7| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
9| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
11| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
13| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
15| 3844K| | 4| 0| 4|@@@@ (4) |
16| 3844K| | 4| 0| 4|@@@@ (4) |
17| 3844K| | 4| 0| 4|@@@@ (4) |
18| 3844K| | 4| 0| 4|@@@@ (4) |
19| 3844K| | 4| 0| 4|@@@@ (4) |
20| 3844K| | 4| 0| 4|@@@@ (4) |
21| 3844K| | 4| 0| 4|@@@@ (4) |
22| 3844K| | 4| 0| 4|@@@@ (4) |
23| 3844K| | 4| 0| 4|@@@@ (4) |
24| 3844K| | 4| 0| 4|@@@@ (4) |
25| 3844K| | 4| 0| 4|@@@@ (4) |
26| 3844K| | 4| 0| 4|@@@@ (4) |
27| 3844K| | 4| 0| 4|@@@@ (4) |
28| 3844K| | 4| 0| 4|@@@@ (4) |
29| 3844K| | 4| 0| 4|@@@@ (4) |
30| 3844K| | 4| 0| 4|@@@@ (4) |
31| 3844K| | 4| 0| 4|@@@@ (4) |
32| 3844K| | 4| 0| 4|@@@@ (4) |
33| 3844K| | 4| 0| 4|@@@@ (4) |
34| 3844K| | 4| 0| 4|@@@@ (4) |
35| 3844K| | 4| 0| 4|@@@@ (4) |
36| 3844K| | 4| 0| 4|@@@@ (4) |
37| 3844K| | 4| 0| 4|@@@@ (4) |
38| 3844K| | 4| 0| 4|@@@@ (4) |
39| 3844K| | 4| 0| 4|@@@@ (4) |
40| 3844K| | 4| 0| 4|@@@@ (4) |
41| 3844K| | 4| 0| 4|@@@@ (4) |
42| 3844K| | 4| 0| 4|@@@@ (4) |
43| 3844K| | 4| 0| 4|@@@@ (4) |
44| 3844K| | 4| 0| 4|@@@@ (4) |
45| 3844K| | 4| 0| 4|@@@@ (4) |

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Act | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 1 | | SORT AGGREGATE | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 2 | | PX COORDINATOR | | 5 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 3 | | PX SEND QC (RANDOM) | :TQ10000 | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 4 | | SORT AGGREGATE | | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 5 | | VIEW | | 4 | 8747K | | | | | 0:P002(0)[2200K],P000(0)[2187K],P003(0)[2180K],P001(0)[2180K],sqlplus.exe(0)[0] |
| 6 | | UNION-ALL | | 4 | 8747K | | | | | 0:P002(0)[2200K],P000(0)[2187K],P003(0)[2180K],P001(0)[2180K],sqlplus.exe(0)[0] |
| 7 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | ==> | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ################### | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | ==> | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 42 | 42 | ################### | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 12 | ==> | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ################### | 1:P000(41)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 13 | | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[1889K],P000(0)[37K],P001(0)[37K],P002(0)[37K],sqlplus.exe(0)[0] |
|* 14 | ==> | TABLE ACCESS FULL| T_2 | 52 | 2000K | 2 | 42 | 40 | ################### | 3:P003(39)[1889K],P000(1)[37K],P002(1)[37K],P001(0)[37K],sqlplus.exe(0)[0] |
| 15 | | PX SELECTOR | | 4 | 291K | | | | | 0:P003(0)[291K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 16 | ==> | TABLE ACCESS FULL| T2 | 4 | 291K | 41 | 5 | 5 | ### | 1:P003(5)[291K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 17 | | PX SELECTOR | | 3 | 163K | | | | | 0:P002(0)[163K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | ==> | TABLE ACCESS FULL| T2 | 3 | 163K | 44 | 2 | 2 | # | 1:P002(2)[163K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | | PX SELECTOR | | 2 | 150K | | | | | 0:P000(0)[150K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | ==> | TABLE ACCESS FULL| T2 | 2 | 150K | 44 | 2 | 2 | # | 1:P000(2)[150K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | | PX SELECTOR | | 1 | 143K | | | | | 0:P001(0)[143K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 22 | ==> | TABLE ACCESS FULL| T2 | 1 | 143K | 44 | 2 | 2 | # | 1:P001(2)[143K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 23 | | PX BLOCK ITERATOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 24 | | TABLE ACCESS FULL| T_2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 25 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 26 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The work distribution is still perfect, four servers active all the time. Interestingly, the operations that previously were executed by fewer than four PX servers now all show four executions, although that doesn't really change the work performed. So it looks like once a PX server is done with its work it executes the next branch only to find out that nothing is left to do, and immediately moves on to the next branch, until there's something left to do. This implementation behaviour (together with something that probably is a bug) will become relevant when dealing with remote branches, as I'll show in the final part.
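
By the way, the "Execs" and per-process row counts shown here correspond to the STARTS and OUTPUT_ROWS columns of Real-Time SQL Monitoring, so a sketch like the following (requires the Tuning Pack license) returns the same kind of breakdown:

select plan_line_id, plan_operation, process_name, starts, output_rows
from   v$sql_plan_monitor
where  sql_id = :sql_id
and    sql_exec_id = :sql_exec_id
order by plan_line_id, process_name;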

You can tell from the "row distribution" column for operation ID 14 that some granules had obviously not been processed yet by that single PX server working on the "parallel" full table scan: the other PX servers finished their work a bit earlier, finally joined the "parallel" full table scan and processed some 37K rows each.

We now have the PX servers working on the next four branches, which are just four serial branches of similar workload, so they should all take around the same time. Since we already have an idea of how this works, we can expect all of them to join the following parallel full table scan at around the same time (well, one PX server is a bit ahead of the others, so not exactly at the same time).

This is what things look like after approx. 80 seconds:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
1| | | 0| 0| 0| (0) |
2| 3844K| | 4| 0| 4|@@@@ (4) |
3| 3844K| | 4| 0| 4|@@@@ (4) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
5| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
7| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
9| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
11| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
13| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
15| 3844K| | 4| 0| 4|@@@@ (4) |
16| 3844K| | 4| 0| 4|@@@@ (4) |
17| 3844K| | 4| 0| 4|@@@@ (4) |
18| 3844K| | 4| 0| 4|@@@@ (4) |
19| 3844K| | 4| 0| 4|@@@@ (4) |
20| 3844K| | 4| 0| 4|@@@@ (4) |
21| 3844K| | 4| 0| 4|@@@@ (4) |
22| 3844K| | 4| 0| 4|@@@@ (4) |
23| 3844K| | 4| 0| 4|@@@@ (4) |
24| 3844K| | 4| 0| 4|@@@@ (4) |
25| 3844K| | 4| 0| 4|@@@@ (4) |
26| 3844K| | 4| 0| 4|@@@@ (4) |
27| 3844K| | 4| 0| 4|@@@@ (4) |
28| 3844K| | 4| 0| 4|@@@@ (4) |
29| 3844K| | 4| 0| 4|@@@@ (4) |
30| 3844K| | 4| 0| 4|@@@@ (4) |
31| 3844K| | 4| 0| 4|@@@@ (4) |
32| 3844K| | 4| 0| 4|@@@@ (4) |
33| 3844K| | 4| 0| 4|@@@@ (4) |
34| 3844K| | 4| 0| 4|@@@@ (4) |
35| 3844K| | 4| 0| 4|@@@@ (4) |
36| 3844K| | 4| 0| 4|@@@@ (4) |
37| 3844K| | 4| 0| 4|@@@@ (4) |
38| 3844K| | 4| 0| 4|@@@@ (4) |
39| 3844K| | 4| 0| 4|@@@@ (4) |
40| 3844K| | 4| 0| 4|@@@@ (4) |
41| 3844K| | 4| 0| 4|@@@@ (4) |
42| 3844K| | 4| 0| 4|@@@@ (4) |
43| 3844K| | 4| 0| 4|@@@@ (4) |
44| 3844K| | 4| 0| 4|@@@@ (4) |
45| 3844K| | 4| 0| 4|@@@@ (4) |
46| 3844K| | 4| 0| 4|@@@@ (4) |
47| 3844K| | 4| 0| 4|@@@@ (4) |
48| 3844K| | 4| 0| 4|@@@@ (4) |
49| 3844K| | 4| 0| 4|@@@@ (4) |
50| 3844K| | 4| 0| 4|@@@@ (4) |
51| 3844K| | 4| 0| 4|@@@@ (4) |
52| 3844K| | 4| 0| 4|@@@@ (4) |
53| 3844K| | 4| 0| 4|@@@@ (4) |
54| 3844K| | 4| 0| 4|@@@@ (4) |
55| 3844K| | 4| 0| 4|@@@@ (4) |
56| 3844K| | 4| 0| 4|@@@@ (4) |
57| 3844K| | 4| 0| 4|@@@@ (4) |
58| 3844K| | 4| 0| 4|@@@@ (4) |
59| 3844K| | 4| 0| 4|@@@@ (4) |
60| 3844K| | 4| 0| 4|@@@@ (4) |
61| 3844K| | 4| 0| 4|@@@@ (4) |
62| 3844K| | 4| 0| 4|@@@@ (4) |
63| 3844K| | 4| 0| 4|@@@@ (4) |
64| 3844K| | 4| 0| 4|@@@@ (4) |
65| 3844K| | 4| 0| 4|@@@@ (4) |
66| 3844K| | 4| 0| 4|@@@@ (4) |
67| 3844K| | 4| 0| 4|@@@@ (4) |
68| 3844K| | 4| 0| 4|@@@@ (4) |
69| 3844K| | 4| 0| 4|@@@@ (4) |
70| 3844K| | 4| 0| 4|@@@@ (4) |
71| 3844K| | 4| 0| 4|@@@@ (4) |
72| 3844K| | 4| 0| 4|@@@@ (4) |
73| 3844K| | 4| 0| 4|@@@@ (4) |
74| 3844K| | 4| 0| 4|@@@@ (4) |
75| 3844K| | 4| 0| 4|@@@@ (4) |
76| | | 0| 0| 0| (0) |
77| 3844K| | 4| 0| 4|@@@@ (4) |
78| 3844K| | 4| 0| 4|@@@@ (4) |
79| 7688K| | 8| 0| 8|@@@@@@@@ (8) |
80| | | 0| 0| 0| (0) |
81| 3844K| | 4| 0| 4|@@@@ (4) |

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Act | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 1 | | SORT AGGREGATE | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 2 | | PX COORDINATOR | | 5 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 3 | | PX SEND QC (RANDOM) | :TQ10000 | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 4 | | SORT AGGREGATE | | 4 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 5 | | VIEW | | 4 | 16M | 65 | 1 | 1 | # | 1:P002(1)[3927K],P001(0)[3901K],P000(0)[3900K],P003(0)[3897K],sqlplus.exe(0)[0] |
| 6 | | UNION-ALL | | 4 | 16M | | | | | 0:P002(0)[3927K],P001(0)[3901K],P000(0)[3900K],P003(0)[3897K],sqlplus.exe(0)[0] |
| 7 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ########### | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 42 | 42 | ########### | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 12 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ########### | 1:P000(41)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 13 | | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[1889K],P000(0)[37K],P001(0)[37K],P002(0)[37K],sqlplus.exe(0)[0] |
|* 14 | | TABLE ACCESS FULL| T_2 | 52 | 2000K | 2 | 42 | 40 | ########### | 3:P003(39)[1889K],P000(1)[37K],P002(1)[37K],P001(0)[37K],sqlplus.exe(0)[0] |
| 15 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P003(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 16 | ==> | TABLE ACCESS FULL| T2 | 4 | 2000K | 41 | 41 | 39 | ########### | 1:P003(40)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 17 | | PX SELECTOR | | 4 | 1890K | | | | | 0:P002(0)[1890K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | ==> | TABLE ACCESS FULL| T2 | 4 | 1890K | 44 | 38 | 35 | ########## | 1:P002(36)[1890K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | | PX SELECTOR | | 3 | 1863K | | | | | 0:P000(0)[1863K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | ==> | TABLE ACCESS FULL| T2 | 3 | 1863K | 44 | 38 | 36 | ########## | 1:P000(37)[1863K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | | PX SELECTOR | | 2 | 1864K | | | | | 0:P001(0)[1864K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 22 | ==> | TABLE ACCESS FULL| T2 | 2 | 1864K | 44 | 38 | 36 | ########## | 1:P001(37)[1864K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 23 | | PX BLOCK ITERATOR | | 1 | 7932 | | | | | 0:P003(0)[7932],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 24 | | TABLE ACCESS FULL| T_2 | 1 | 7932 | | | | | 0:P003(0)[7932],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 25 | | PX SELECTOR | | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 26 | | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
We still have a perfect work distribution (ignore the small glitch in the ASH instrumentation in the last couple of seconds). The first of the next four serial branches is completed and this PX server has just started to work on the following "parallel" full table scan, while the other three are just finishing their serial full table scans.

Again forty seconds later:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
2| 3844K| | 2| 0| 2|@@ (2) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
16| 3844K| | 4| 0| 4|@@@@ (4) |
18| 3844K| | 4| 0| 4|@@@@ (4) |
20| 3844K| | 4| 0| 4|@@@@ (4) |
22| 3844K| | 4| 0| 4|@@@@ (4) |
24| 3844K| | 4| 0| 4|@@@@ (4) |
26| 3844K| | 4| 0| 4|@@@@ (4) |
28| 3844K| | 4| 0| 4|@@@@ (4) |
30| 3844K| | 4| 0| 4|@@@@ (4) |
32| 3844K| | 4| 0| 4|@@@@ (4) |
34| 3844K| | 4| 0| 4|@@@@ (4) |
36| 3844K| | 4| 0| 4|@@@@ (4) |
38| 3844K| | 4| 0| 4|@@@@ (4) |
40| 3844K| | 4| 0| 4|@@@@ (4) |
42| 3844K| | 4| 0| 4|@@@@ (4) |
43| 3844K| | 4| 0| 4|@@@@ (4) |
44| 3844K| | 4| 0| 4|@@@@ (4) |
45| 3844K| | 4| 0| 4|@@@@ (4) |
46| 3844K| | 4| 0| 4|@@@@ (4) |
47| 3844K| | 4| 0| 4|@@@@ (4) |
48| 3844K| | 4| 0| 4|@@@@ (4) |
49| 3844K| | 4| 0| 4|@@@@ (4) |
50| 3844K| | 4| 0| 4|@@@@ (4) |
51| 3844K| | 4| 0| 4|@@@@ (4) |
52| 3844K| | 4| 0| 4|@@@@ (4) |
53| 3844K| | 4| 0| 4|@@@@ (4) |
54| 3844K| | 4| 0| 4|@@@@ (4) |
55| 3844K| | 4| 0| 4|@@@@ (4) |
56| 3844K| | 4| 0| 4|@@@@ (4) |
57| 3844K| | 4| 0| 4|@@@@ (4) |
58| 3844K| | 4| 0| 4|@@@@ (4) |
59| 3844K| | 4| 0| 4|@@@@ (4) |
60| 3844K| | 4| 0| 4|@@@@ (4) |
61| 3844K| | 4| 0| 4|@@@@ (4) |
62| 3844K| | 4| 0| 4|@@@@ (4) |
63| 3844K| | 4| 0| 4|@@@@ (4) |
64| 3844K| | 4| 0| 4|@@@@ (4) |
65| 3844K| | 4| 0| 4|@@@@ (4) |
66| 3844K| | 4| 0| 4|@@@@ (4) |
67| 3844K| | 4| 0| 4|@@@@ (4) |
68| 3844K| | 4| 0| 4|@@@@ (4) |
69| 3844K| | 4| 0| 4|@@@@ (4) |
70| 3844K| | 4| 0| 4|@@@@ (4) |
71| 3844K| | 4| 0| 4|@@@@ (4) |
72| 3844K| | 4| 0| 4|@@@@ (4) |
73| 3844K| | 4| 0| 4|@@@@ (4) |
74| 3844K| | 4| 0| 4|@@@@ (4) |
75| 3844K| | 4| 0| 4|@@@@ (4) |
76| | | 0| 0| 0| (0) |
77| 3844K| | 4| 0| 4|@@@@ (4) |
78| 3844K| | 4| 0| 4|@@@@ (4) |
79| 7688K| | 8| 0| 8|@@@@@@@@ (8) |
80| | | 0| 0| 0| (0) |
81| 3844K| | 4| 0| 4|@@@@ (4) |
82| 3844K| | 4| 0| 4|@@@@ (4) |
83| 3844K| | 4| 0| 4|@@@@ (4) |
84| 3844K| | 4| 0| 4|@@@@ (4) |
85| 3844K| | 4| 0| 4|@@@@ (4) |
86| 3844K| | 4| 0| 4|@@@@ (4) |
87| 3844K| | 4| 0| 4|@@@@ (4) |
88| 3844K| | 4| 0| 4|@@@@ (4) |
89| 3844K| | 4| 0| 4|@@@@ (4) |
90| 3844K| | 4| 0| 4|@@@@ (4) |
91| 3844K| | 4| 0| 4|@@@@ (4) |
92| 3844K| | 4| 0| 4|@@@@ (4) |
93| 3844K| | 4| 0| 4|@@@@ (4) |
94| 3844K| | 4| 0| 4|@@@@ (4) |
95| 3844K| | 4| 0| 4|@@@@ (4) |
96| 3844K| | 4| 0| 4|@@@@ (4) |
97| 961K| | 1| 0| 1|@ (1) |
98| 961K| | 1| 0| 1|@ (1) |
99| 961K| | 1| 0| 1|@ (1) |
100| 961K| | 1| 0| 1|@ (1) |
101| 961K| | 1| 0| 1|@ (1) |
102| 961K| | 1| 0| 1|@ (1) |
103| 961K| | 1| 0| 1|@ (1) |
104| 961K| | 1| 0| 1|@ (1) |
105| 961K| | 1| 0| 1|@ (1) |
106| 961K| | 1| 0| 1|@ (1) |
107| 961K| | 1| 0| 1|@ (1) |
108| 961K| | 1| 0| 1|@ (1) |
109| 961K| | 1| 0| 1|@ (1) |
110| 961K| | 1| 0| 1|@ (1) |
111| 961K| | 1| 0| 1|@ (1) |
112| 961K| | 1| 0| 1|@ (1) |
113| 961K| | 1| 0| 1|@ (1) |
114| 961K| | 1| 0| 1|@ (1) |
115| 961K| | 1| 0| 1|@ (1) |
116| 961K| | 1| 0| 1|@ (1) |
117| 961K| | 1| 0| 1|@ (1) |
118| 961K| | 1| 0| 1|@ (1) |
119| 961K| | 1| 0| 1|@ (1) |
120| 961K| | 1| 0| 1|@ (1) |
121| 961K| | 1| 0| 1|@ (1) |

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Act | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | | SELECT STATEMENT | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 1 | | SORT AGGREGATE | | 1 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 2 | | PX COORDINATOR | | 5 | 0 | | | | | 0:P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 3 | | PX SEND QC (RANDOM) | :TQ10000 | 4 | 3 | | | | | 0:P000(0)[1],P001(0)[1],P003(0)[1],P002(0)[0],sqlplus.exe(0)[0] |
| 4 | | SORT AGGREGATE | | 4 | 3 | | | | | 0:P000(0)[1],P001(0)[1],P003(0)[1],P002(0)[0],sqlplus.exe(0)[0] |
| 5 | | VIEW | | 4 | 19M | 65 | 1 | 1 | # | 1:P002(1)[5904K],P001(0)[4505K],P003(0)[4499K],P000(0)[4498K],sqlplus.exe(0)[0] |
| 6 | | UNION-ALL | | 4 | 19M | | | | | 0:P002(0)[5904K],P001(0)[4505K],P003(0)[4499K],P000(0)[4498K],sqlplus.exe(0)[0] |
| 7 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ####### | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 42 | 42 | ####### | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 12 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ####### | 1:P000(41)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 13 | | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[1889K],P000(0)[37K],P001(0)[37K],P002(0)[37K],sqlplus.exe(0)[0] |
|* 14 | | TABLE ACCESS FULL| T_2 | 52 | 2000K | 2 | 42 | 40 | ####### | 3:P003(39)[1889K],P000(1)[37K],P002(1)[37K],P001(0)[37K],sqlplus.exe(0)[0] |
| 15 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P003(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 16 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 41 | 43 | 41 | ######## | 1:P003(42)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 17 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 42 | 39 | ####### | 1:P002(40)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 43 | 41 | ######## | 1:P000(42)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 22 | | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 43 | 41 | ######## | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 23 | | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[610K],P001(0)[468K],P000(0)[461K],P002(0)[461K],sqlplus.exe(0)[0] |
|* 24 | | TABLE ACCESS FULL| T_2 | 52 | 2000K | 84 | 13 | 13 | ### | 4:P003(13)[610K],P001(10)[468K],P000(10)[461K],P002(10)[461K],sqlplus.exe(0)[0] |
| 25 | | PX SELECTOR | | 4 | 1406K | | | | | 0:P002(0)[1406K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 26 | ==> | TABLE ACCESS FULL| T2 | 4 | 1406K | 96 | 26 | 26 | ##### | 1:P002(26)[1406K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Something significant has happened now to the work distribution: starting from around second 97 of the execution only a single PX server is left active, all others are idle. We can see that the second parallel full table scan (operations 23 + 24) is now done, and as expected it was executed by all four servers to a rather similar degree, although one of the four did more work than the others. It's also obvious that this operation took only approx. 13 seconds, whereas the previous scans all took approx. 42 seconds - it was significantly quicker because multiple processes really did work concurrently on the operation. Since only a single (serial) branch is left to process, only a single process can be active, which explains the work distribution "skew" observed.

This is the final script output after completion:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
2| 3844K| | 2| 0| 2|@@ (2) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
16| 3844K| | 4| 0| 4|@@@@ (4) |
18| 3844K| | 4| 0| 4|@@@@ (4) |
20| 3844K| | 4| 0| 4|@@@@ (4) |
22| 3844K| | 4| 0| 4|@@@@ (4) |
24| 3844K| | 4| 0| 4|@@@@ (4) |
26| 3844K| | 4| 0| 4|@@@@ (4) |
28| 3844K| | 4| 0| 4|@@@@ (4) |
30| 3844K| | 4| 0| 4|@@@@ (4) |
32| 3844K| | 4| 0| 4|@@@@ (4) |
34| 3844K| | 4| 0| 4|@@@@ (4) |
36| 3844K| | 4| 0| 4|@@@@ (4) |
38| 3844K| | 4| 0| 4|@@@@ (4) |
40| 3844K| | 4| 0| 4|@@@@ (4) |
42| 3844K| | 4| 0| 4|@@@@ (4) |
44| 3844K| | 4| 0| 4|@@@@ (4) |
46| 3844K| | 4| 0| 4|@@@@ (4) |
48| 3844K| | 4| 0| 4|@@@@ (4) |
50| 3844K| | 4| 0| 4|@@@@ (4) |
52| 3844K| | 4| 0| 4|@@@@ (4) |
54| 3844K| | 4| 0| 4|@@@@ (4) |
56| 3844K| | 4| 0| 4|@@@@ (4) |
58| 3844K| | 4| 0| 4|@@@@ (4) |
60| 3844K| | 4| 0| 4|@@@@ (4) |
62| 3844K| | 4| 0| 4|@@@@ (4) |
64| 3844K| | 4| 0| 4|@@@@ (4) |
66| 3844K| | 4| 0| 4|@@@@ (4) |
68| 3844K| | 4| 0| 4|@@@@ (4) |
70| 3844K| | 4| 0| 4|@@@@ (4) |
72| 3844K| | 4| 0| 4|@@@@ (4) |
74| 3844K| | 4| 0| 4|@@@@ (4) |
76| 3844K| | 2| 0| 2|@@ (2) |
78| 3844K| | 4| 0| 4|@@@@ (4) |
79| 7688K| | 8| 0| 8|@@@@@@@@ (8) |
80| | | 0| 0| 0| (0) |
81| 3844K| | 4| 0| 4|@@@@ (4) |
82| 3844K| | 4| 0| 4|@@@@ (4) |
83| 3844K| | 4| 0| 4|@@@@ (4) |
84| 3844K| | 4| 0| 4|@@@@ (4) |
85| 3844K| | 4| 0| 4|@@@@ (4) |
86| 3844K| | 4| 0| 4|@@@@ (4) |
87| 3844K| | 4| 0| 4|@@@@ (4) |
88| 3844K| | 4| 0| 4|@@@@ (4) |
89| 3844K| | 4| 0| 4|@@@@ (4) |
90| 3844K| | 4| 0| 4|@@@@ (4) |
91| 3844K| | 4| 0| 4|@@@@ (4) |
92| 3844K| | 4| 0| 4|@@@@ (4) |
93| 3844K| | 4| 0| 4|@@@@ (4) |
94| 3844K| | 4| 0| 4|@@@@ (4) |
95| 3844K| | 4| 0| 4|@@@@ (4) |
96| 3844K| | 4| 0| 4|@@@@ (4) |
97| 961K| | 1| 0| 1|@ (1) |
98| 961K| | 1| 0| 1|@ (1) |
99| 961K| | 1| 0| 1|@ (1) |
100| 961K| | 1| 0| 1|@ (1) |
101| 961K| | 1| 0| 1|@ (1) |
102| 961K| | 1| 0| 1|@ (1) |
103| 961K| | 1| 0| 1|@ (1) |
104| 961K| | 1| 0| 1|@ (1) |
105| 961K| | 1| 0| 1|@ (1) |
106| 961K| | 1| 0| 1|@ (1) |
107| 961K| | 1| 0| 1|@ (1) |
108| 961K| | 1| 0| 1|@ (1) |
109| 961K| | 1| 0| 1|@ (1) |
110| 961K| | 1| 0| 1|@ (1) |
111| 961K| | 1| 0| 1|@ (1) |
112| 961K| | 1| 0| 1|@ (1) |
113| 961K| | 1| 0| 1|@ (1) |
114| 961K| | 1| 0| 1|@ (1) |
115| 961K| | 1| 0| 1|@ (1) |
116| 961K| | 1| 0| 1|@ (1) |
117| 961K| | 1| 0| 1|@ (1) |
118| 961K| | 1| 0| 1|@ (1) |
119| 961K| | 1| 0| 1|@ (1) |
120| 961K| | 1| 0| 1|@ (1) |
121| 961K| | 1| 0| 1|@ (1) |
122| 961K| | 1| 0| 1|@ (1) |
123| 961K| | 1| 0| 1|@ (1) |
124| 961K| | 1| 0| 1|@ (1) |
125| 961K| | 1| 0| 1|@ (1) |
126| 961K| | 1| 0| 1|@ (1) |
127| 961K| | 1| 0| 1|@ (1) |
128| 961K| | 1| 0| 1|@ (1) |
129| 961K| | 1| 0| 1|@ (1) |
130| 961K| | 1| 0| 1|@ (1) |
131| 961K| | 1| 0| 1|@ (1) |
132| 961K| | 1| 0| 1|@ (1) |
133| 961K| | 1| 0| 1|@ (1) |
134| 961K| | 1| 0| 1|@ (1) |
135| 961K| | 1| 0| 1|@ (1) |
136| 961K| | 1| 0| 1|@ (1) |
137| 961K| | 1| 0| 1|@ (1) |
138| 961K| | 1| 0| 1|@ (1) |
139| 961K| | 1| 0| 1|@ (1) |

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 1 | SORT AGGREGATE | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 2 | PX COORDINATOR | | 5 | 4 | | | | | 0:sqlplus.exe(0)[4],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 4 | | | | | 0:P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],sqlplus.exe(0)[0] |
| 4 | SORT AGGREGATE | | 4 | 4 | | | | | 0:P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],sqlplus.exe(0)[0] |
| 5 | VIEW | | 4 | 20M | 65 | 1 | 1 | # | 1:P002(1)[6498K],P001(0)[4505K],P003(0)[4499K],P000(0)[4498K],sqlplus.exe(0)[0] |
| 6 | UNION-ALL | | 4 | 20M | | | | | 0:P002(0)[6498K],P001(0)[4505K],P003(0)[4499K],P000(0)[4498K],sqlplus.exe(0)[0] |
| 7 | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ###### | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 42 | 42 | ####### | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 12 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ###### | 1:P000(41)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 13 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[1889K],P000(0)[37K],P001(0)[37K],P002(0)[37K],sqlplus.exe(0)[0] |
|* 14 | TABLE ACCESS FULL| T_2 | 52 | 2000K | 2 | 42 | 40 | ####### | 3:P003(39)[1889K],P000(1)[37K],P002(1)[37K],P001(0)[37K],sqlplus.exe(0)[0] |
| 15 | PX SELECTOR | | 4 | 2000K | | | | | 0:P003(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 16 | TABLE ACCESS FULL| T2 | 4 | 2000K | 41 | 43 | 41 | ####### | 1:P003(42)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 17 | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 42 | 39 | ####### | 1:P002(40)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 43 | 41 | ####### | 1:P000(42)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 22 | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 43 | 41 | ####### | 1:P001(42)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 23 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P003(0)[610K],P001(0)[468K],P000(0)[461K],P002(0)[461K],sqlplus.exe(0)[0] |
|* 24 | TABLE ACCESS FULL| T_2 | 52 | 2000K | 84 | 13 | 13 | ### | 4:P003(13)[610K],P001(10)[468K],P000(10)[461K],P002(10)[461K],sqlplus.exe(0)[0] |
| 25 | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 26 | TABLE ACCESS FULL| T2 | 4 | 2000K | 96 | 44 | 44 | ####### | 1:P002(44)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
So for the last forty seconds or so only one process was active, working on the final serial branch, and by now all operations have been executed by all PX servers.

We can conclude a number of things from these monitoring results:

1. At the end, all operations are executed by all PX servers in a concurrent UNION ALL operation
2. The PX SELECTOR operator assigns only one of them to non-parallel branches
3. Depending on the sequence and kind of branches (non-parallel, parallel) you might end up with some unusual execution profiles for parallel branches
4. If you have non-parallel branches towards the end, not all of the PX servers might end up doing something when that part gets processed

The last point suggests that it's advisable to move parallel branches towards the end of the UNION ALL to make the most out of the parallel execution, so I've re-arranged the above statement in such a way that the two parallel branches are executed last.
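
Structurally the re-arranged statement looks like this - just a sketch with the REGEXP_REPLACE predicates omitted and only three of the eight serial branches shown; what matters is simply that the two parallel T_2 branches come last:

select count(*) from (
select id from t2     -- serial branch 1
union all
select id from t2     -- serial branch 2 (branches 3 to 8 analogous)
union all
select id from t2     -- serial branch 3
union all
select id from t_2    -- parallel branch, second to last
union all
select id from t_2    -- parallel branch, last
);

With that branch order the final monitoring output looks like this: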

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
2| 3322K| | 2,5| 0| 2,5|@@@ (2,5) |
4| 3844K| | 4| 0| 4|@@@@ (4) |
6| 3844K| | 4| 0| 4|@@@@ (4) |
8| 3844K| | 4| 0| 4|@@@@ (4) |
10| 3844K| | 4| 0| 4|@@@@ (4) |
11| 3844K| | 4| 0| 4|@@@@ (4) |
12| 3844K| | 4| 0| 4|@@@@ (4) |
13| 3844K| | 4| 0| 4|@@@@ (4) |
14| 3844K| | 4| 0| 4|@@@@ (4) |
15| 3844K| | 4| 0| 4|@@@@ (4) |
16| 3844K| | 4| 0| 4|@@@@ (4) |
17| 3844K| | 4| 0| 4|@@@@ (4) |
18| 3844K| | 4| 0| 4|@@@@ (4) |
19| 3844K| | 4| 0| 4|@@@@ (4) |
20| 3844K| | 4| 0| 4|@@@@ (4) |
21| 3844K| | 4| 0| 4|@@@@ (4) |
22| 3844K| | 4| 0| 4|@@@@ (4) |
23| 3844K| | 4| 0| 4|@@@@ (4) |
24| 3844K| | 4| 0| 4|@@@@ (4) |
25| 3844K| | 4| 0| 4|@@@@ (4) |
26| 3844K| | 4| 0| 4|@@@@ (4) |
27| 3844K| | 4| 0| 4|@@@@ (4) |
28| 3844K| | 4| 0| 4|@@@@ (4) |
29| 3844K| | 4| 0| 4|@@@@ (4) |
30| 3844K| | 4| 0| 4|@@@@ (4) |
31| 3844K| | 4| 0| 4|@@@@ (4) |
32| 3844K| | 4| 0| 4|@@@@ (4) |
33| 3844K| | 4| 0| 4|@@@@ (4) |
34| 3844K| | 4| 0| 4|@@@@ (4) |
35| 3844K| | 4| 0| 4|@@@@ (4) |
36| 3844K| | 4| 0| 4|@@@@ (4) |
37| 3844K| | 4| 0| 4|@@@@ (4) |
38| 3844K| | 4| 0| 4|@@@@ (4) |
39| 3844K| | 4| 0| 4|@@@@ (4) |
40| 3844K| | 4| 0| 4|@@@@ (4) |
41| 3844K| | 4| 0| 4|@@@@ (4) |
42| 3844K| | 4| 0| 4|@@@@ (4) |
43| 3844K| | 4| 0| 4|@@@@ (4) |
44| 3844K| | 4| 0| 4|@@@@ (4) |
45| 3844K| | 4| 0| 4|@@@@ (4) |
46| 3844K| | 4| 0| 4|@@@@ (4) |
47| 3844K| | 4| 0| 4|@@@@ (4) |
48| 3844K| | 4| 0| 4|@@@@ (4) |
49| 3844K| | 4| 0| 4|@@@@ (4) |
50| 3844K| | 4| 0| 4|@@@@ (4) |
51| 3844K| | 4| 0| 4|@@@@ (4) |
52| 3844K| | 4| 0| 4|@@@@ (4) |
53| 3844K| | 4| 0| 4|@@@@ (4) |
54| 3844K| | 4| 0| 4|@@@@ (4) |
55| 3844K| | 4| 0| 4|@@@@ (4) |
56| 3844K| | 4| 0| 4|@@@@ (4) |
57| 3844K| | 4| 0| 4|@@@@ (4) |
58| 3844K| | 4| 0| 4|@@@@ (4) |
59| 3844K| | 4| 0| 4|@@@@ (4) |
60| 3844K| | 4| 0| 4|@@@@ (4) |
61| 3844K| | 4| 0| 4|@@@@ (4) |
62| 3844K| | 4| 0| 4|@@@@ (4) |
63| 3844K| | 4| 0| 4|@@@@ (4) |
64| 3844K| | 4| 0| 4|@@@@ (4) |
65| 3844K| | 4| 0| 4|@@@@ (4) |
66| 3844K| | 4| 0| 4|@@@@ (4) |
67| 3844K| | 4| 0| 4|@@@@ (4) |
68| 3844K| | 4| 0| 4|@@@@ (4) |
69| 3844K| | 4| 0| 4|@@@@ (4) |
70| 3844K| | 4| 0| 4|@@@@ (4) |
71| 3844K| | 4| 0| 4|@@@@ (4) |
72| 3844K| | 4| 0| 4|@@@@ (4) |
73| 3844K| | 4| 0| 4|@@@@ (4) |
74| 3844K| | 4| 0| 4|@@@@ (4) |
75| 3844K| | 4| 0| 4|@@@@ (4) |
76| 3844K| | 4| 0| 4|@@@@ (4) |
77| 3844K| | 4| 0| 4|@@@@ (4) |
78| 3844K| | 4| 0| 4|@@@@ (4) |
79| 3844K| | 4| 0| 4|@@@@ (4) |
80| 3844K| | 4| 0| 4|@@@@ (4) |
81| 3844K| | 4| 0| 4|@@@@ (4) |
82| 3844K| | 4| 0| 4|@@@@ (4) |
83| 3844K| | 4| 0| 4|@@@@ (4) |
84| 3844K| | 4| 0| 4|@@@@ (4) |
85| 3844K| | 4| 0| 4|@@@@ (4) |
86| 3844K| | 4| 0| 4|@@@@ (4) |
87| 3844K| | 4| 0| 4|@@@@ (4) |
88| 3844K| | 4| 0| 4|@@@@ (4) |
89| 3844K| | 4| 0| 4|@@@@ (4) |
90| 3844K| | 4| 0| 4|@@@@ (4) |
91| 3844K| | 4| 0| 4|@@@@ (4) |
92| 3844K| | 4| 0| 4|@@@@ (4) |
93| 3844K| | 4| 0| 4|@@@@ (4) |
94| 3844K| | 4| 0| 4|@@@@ (4) |
95| 3844K| | 4| 0| 4|@@@@ (4) |
96| 3844K| | 4| 0| 4|@@@@ (4) |
97| | | 0| 0| 0| (0) |
98| 3844K| | 4| 0| 4|@@@@ (4) |
99| 3844K| | 4| 0| 4|@@@@ (4) |
100| 3844K| | 4| 0| 4|@@@@ (4) |
101| 3844K| | 4| 0| 4|@@@@ (4) |
102| 3844K| | 4| 0| 4|@@@@ (4) |
103| 3844K| | 4| 0| 4|@@@@ (4) |
104| 3844K| | 4| 0| 4|@@@@ (4) |
105| 1922K| | 2| 0| 2|@@ (2) |

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 1 | 1 | 1 | 1 | # | 1:sqlplus.exe(1)[1],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 1 | SORT AGGREGATE | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 2 | PX COORDINATOR | | 5 | 4 | | | | | 0:sqlplus.exe(0)[4],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0] |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 4 | 4 | | | | | 0:P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],sqlplus.exe(0)[0] |
| 4 | SORT AGGREGATE | | 4 | 4 | | | | | 0:P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],sqlplus.exe(0)[0] |
| 5 | VIEW | | 4 | 20M | | | | | 0:P000(0)[5035K],P002(0)[5002K],P003(0)[4994K],P001(0)[4969K],sqlplus.exe(0)[0] |
| 6 | UNION-ALL | | 4 | 20M | | | | | 0:P000(0)[5035K],P002(0)[5002K],P003(0)[4994K],P001(0)[4969K],sqlplus.exe(0)[0] |
| 7 | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 8 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ######## | 1:P001(41)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 9 | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 10 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ######## | 1:P000(41)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 11 | PX SELECTOR | | 4 | 2000K | | | | | 0:P003(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 12 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 42 | 42 | ######## | 1:P003(42)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 13 | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 14 | TABLE ACCESS FULL| T2 | 4 | 2000K | 2 | 41 | 41 | ######## | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 15 | PX SELECTOR | | 4 | 2000K | | | | | 0:P000(0)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 16 | TABLE ACCESS FULL| T2 | 4 | 2000K | 43 | 40 | 40 | ######### | 1:P000(40)[2000K],P001(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 17 | PX SELECTOR | | 4 | 2000K | | | | | 0:P001(0)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 18 | TABLE ACCESS FULL| T2 | 4 | 2000K | 43 | 41 | 41 | ######### | 1:P001(41)[2000K],P000(0)[0],P002(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 19 | PX SELECTOR | | 4 | 2000K | | | | | 0:P002(0)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
|* 20 | TABLE ACCESS FULL| T2 | 4 | 2000K | 43 | 41 | 41 | ######### | 1:P002(41)[2000K],P000(0)[0],P001(0)[0],P003(0)[0],sqlplus.exe(0)[0] |
| 21 | PX SELECTOR | | 4 | 2000K | | | | | 0:P003(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
|* 22 | TABLE ACCESS FULL| T2 | 4 | 2000K | 44 | 40 | 40 | ######### | 1:P003(40)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],sqlplus.exe(0)[0] |
| 23 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P000(0)[534K],P003(0)[501K],P002(0)[497K],P001(0)[468K],sqlplus.exe(0)[0] |
|* 24 | TABLE ACCESS FULL| T_2 | 52 | 2000K | 83 | 11 | 11 | ### | 4:P000(11)[534K],P003(10)[501K],P002(10)[497K],P001(10)[468K],sqlplus.exe(0)[0] |
| 25 | PX BLOCK ITERATOR | | 4 | 2000K | | | | | 0:P002(0)[505K],P000(0)[501K],P001(0)[501K],P003(0)[493K],sqlplus.exe(0)[0] |
|* 26 | TABLE ACCESS FULL| T_2 | 52 | 2000K | 94 | 12 | 11 | ### | 4:P000(11)[501K],P003(11)[493K],P002(10)[505K],P001(10)[501K],sqlplus.exe(0)[0] |
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Notice how in this particular case I managed to reduce the execution duration by more than 30 seconds simply by re-arranging the branches, which ensured that the work distribution was optimal all the time.

In the final part of this series I'll focus on the runtime behaviour of the concurrent UNION ALL feature when dealing with remote branches.

12c Parallel Execution New Features: Concurrent UNION ALL - Part 1

Tue, 2015-03-03 09:31
12c introduces the concurrent UNION ALL operator that allows multiple branches below the UNION ALL to become active concurrently - which is a significant change. Before the introduction of this feature Oracle never executed multiple branches of an execution plan concurrently (in terms of Parallel Execution) - parallelism so far was about executing the same operations of a particular branch of the execution plan by multiple processes / sessions. When we later look at the actual runtime behaviour of the new feature we'll notice that it's not as different from the previous behaviour as it sounds here.

By default the feature gets automatically used when there is at least one branch of the UNION ALL making use of Parallel Execution features, like a parallel full table scan for example.

If the UNION ALL operator consists of serial branches only the concurrent execution of those serial branches can be explicitly activated by using the PQ_CONCURRENT_UNION hint. To suppress the feature usage, a corresponding NO_PQ_CONCURRENT_UNION hint can be used. Both hints are officially documented.

The concurrent execution of serial branches in my opinion could be particularly useful to overcome the still existing limitation that Parallel Execution doesn't really help with operations via database links: If you want to copy larger segments of data via database links you can use Parallel Execution on the source to speed up data retrieval and you can use Parallel Execution on the target to speed up the DML process, but the data will always have to go through the query coordinator process on both sides, so a copy process is always limited by the CPU and network bandwidth / latency of a single process. If you need to copy multiple objects of similar size you could simply speed up the copy process by spawning multiple jobs, but if the objects are of significantly different size you would still have to wait for the single process to cope with the largest segment.
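
To make the bottleneck more obvious, here is a sketch of such a copy - T_LOCAL and T_REMOTE are hypothetical table names, and LOOP is a database link like the one created in the setup below; both sides may use Parallel Execution, yet every row still passes through the two query coordinators and the single database link connection:

alter session enable parallel dml;

-- t_local / t_remote are hypothetical; all rows funnel through the QCs
insert /*+ append parallel(t_local 8) */ into t_local
select /*+ parallel(t_remote 8) */ *
from   t_remote@loop;

commit;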

Before 12c the only way to overcome this limitation was to employ "do-it-yourself" parallelism per (large) segment, more recently supported via DBMS_PARALLEL_EXECUTE. This allows spreading such an operation across several processes / sessions concurrently and therefore overcoming the described limitation. But this approach makes the overall handling more complex (chunking, error handling etc., although DBMS_PARALLEL_EXECUTE provides a standardized interface for such tasks) and isn't a simple SQL based solution any longer.
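
A minimal sketch of that approach for the copy scenario - T_COPY and T_REMOTE are again hypothetical names; the chunking here simply creates eight numeric buckets that the run task maps to MOD ranges of the key column:

begin
  dbms_parallel_execute.create_task('COPY_REMOTE');
  -- eight numeric chunks (0 .. 7), one per MOD bucket of the key column
  dbms_parallel_execute.create_chunks_by_sql(
    task_name => 'COPY_REMOTE',
    sql_stmt  => 'select level - 1, level - 1 from dual connect by level <= 8',
    by_rowid  => false
  );
  -- each chunk is processed by one of up to eight scheduler jobs
  dbms_parallel_execute.run_task(
    task_name      => 'COPY_REMOTE',
    sql_stmt       => 'insert into t_copy select * from t_remote@loop ' ||
                      'where mod(id, 8) between :start_id and :end_id',
    language_flag  => dbms_sql.native,
    parallel_level => 8
  );
  dbms_parallel_execute.drop_task('COPY_REMOTE');
end;
/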

With the concurrent UNION ALL feature a SQL-only solution should in principle be possible, where multiple branches of a concurrent UNION ALL query different chunks of the same remote segment concurrently. Of course this approach still requires some manual preparation, since ideally the different branches should only query disjoint parts of the segment - so for non-partitioned segments ROWID ranges should be prepared and used inside the different remote queries, similar to the DBMS_PARALLEL_EXECUTE approach. If you don't care about the duplicated work you could of course simply use something like an ORA_HASH or MOD based filter on a column expression that comes up with a reasonably equal (hash) value distribution (like a primary key).
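
A sketch of the ORA_HASH variant - again T_REMOTE is a hypothetical remote table, ID is assumed to be a reasonably uniform (primary) key, and each of the four branches returns a disjoint hash bucket, at the price of every branch scanning the whole remote segment:

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select * from t_remote@loop where ora_hash(id, 3) = 0
union all
select * from t_remote@loop where ora_hash(id, 3) = 1
union all
select * from t_remote@loop where ora_hash(id, 3) = 2
union all
select * from t_remote@loop where ora_hash(id, 3) = 3
);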

However, due to the way the concurrent UNION ALL is implemented internally at present, this approach might not work very well, at least in 12.1.0.1 and 12.1.0.2, as I'll show in a later part.

Some other questions I would like to address regarding this feature are the following:

- How is the work distributed at runtime if you end up with a mixture of parallel and serial branches?

- If you request concurrent execution with serial branches only and no other parallel execution in the plan, how is the parallel degree to be used determined?

In this part of the blog post I want to start with the latter question: What parallel degree gets used if a UNION ALL consists only of serial branches?

And one follow-up question could be: What happens if a more complex execution plan mixes parts with Parallel Execution outside a UNION ALL with a concurrent UNION ALL that consists only of serial branches?

Here is the table setup I'm going to use for all my tests:

-- This is the Parallel table
create table t_2
compress
as
select
        rownum as id
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(1e5) */ * from dual
         connect by level <= 1e5) a
      , (select /*+ cardinality(20) */ * from dual
         connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_2')

alter table t_2 parallel 8;

-- This is the serial table
create table t2
compress
as
select
        (rownum * 2) + 1 as id
      , mod(rownum, 2000) + 1 as id2
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(10000) */ * from dual
         connect by level <= 10000) a
      , (select /*+ cardinality(20) */ * from dual
         connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't2')

create database link loop connect to cbo_test identified by cbo_test using '//localhost:1521/orcl12';
The database link will become relevant in the later parts of this series where I look at the execution of remote branches.

So let's start with the question about the parallel degree used in case of serial branches only. I'll use the following simple query to determine that (repeat the query block inside the UNION ALL as many times as desired, I've used 20 serial branches):

-- Can I force a completely serial UNION ALL into concurrent execution using the hint? Yes, but isn't selected by the optimizer automatically
-- In 12.1.0.2 the PQ_CONCURRENT_UNION hint works as documented as statement level hint, whereas in 12.1.0.1 there is the bug that you need to specify a query block name (Bug 15851422 : PQ_CONCURRENT_HINT DOES NOT WORK WITHOUT QUERY BLOCK SPECIFICATION)
-- What about the degree chosen in such a case, is there an upper limit?
select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
.
.
.
select regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
);
I've performed all tests on 12.1.0.2.

If you do an EXPLAIN PLAN on above query without the PQ_CONCURRENT_UNION hint, then you'll see, as documented, the default behaviour, which is serial execution of one branch after the other (I've used 20 serial branches in my tests):

-------------------------------------
| Id | Operation | Name |
-------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | SORT AGGREGATE | |
| 2 | VIEW | |
| 3 | UNION-ALL | |
|* 4 | TABLE ACCESS FULL| T2 |
|* 5 | TABLE ACCESS FULL| T2 |
|* 6 | TABLE ACCESS FULL| T2 |
|* 7 | TABLE ACCESS FULL| T2 |
|* 8 | TABLE ACCESS FULL| T2 |
|* 9 | TABLE ACCESS FULL| T2 |
|* 10 | TABLE ACCESS FULL| T2 |
|* 11 | TABLE ACCESS FULL| T2 |
|* 12 | TABLE ACCESS FULL| T2 |
|* 13 | TABLE ACCESS FULL| T2 |
|* 14 | TABLE ACCESS FULL| T2 |
|* 15 | TABLE ACCESS FULL| T2 |
|* 16 | TABLE ACCESS FULL| T2 |
|* 17 | TABLE ACCESS FULL| T2 |
|* 18 | TABLE ACCESS FULL| T2 |
|* 19 | TABLE ACCESS FULL| T2 |
|* 20 | TABLE ACCESS FULL| T2 |
|* 21 | TABLE ACCESS FULL| T2 |
|* 22 | TABLE ACCESS FULL| T2 |
|* 23 | TABLE ACCESS FULL| T2 |
-------------------------------------
With the hint in place the plan changes to this:

-------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,00 | PCWP | |
| 5 | VIEW | | Q1,00 | PCWP | |
| 6 | UNION-ALL | | Q1,00 | PCWP | |
| 7 | PX SELECTOR | | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 9 | PX SELECTOR | | Q1,00 | PCWP | |
|* 10 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 11 | PX SELECTOR | | Q1,00 | PCWP | |
|* 12 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 13 | PX SELECTOR | | Q1,00 | PCWP | |
|* 14 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 15 | PX SELECTOR | | Q1,00 | PCWP | |
|* 16 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 17 | PX SELECTOR | | Q1,00 | PCWP | |
|* 18 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 19 | PX SELECTOR | | Q1,00 | PCWP | |
|* 20 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 21 | PX SELECTOR | | Q1,00 | PCWP | |
|* 22 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 23 | PX SELECTOR | | Q1,00 | PCWP | |
|* 24 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 25 | PX SELECTOR | | Q1,00 | PCWP | |
|* 26 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 27 | PX SELECTOR | | Q1,00 | PCWP | |
|* 28 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 29 | PX SELECTOR | | Q1,00 | PCWP | |
|* 30 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 31 | PX SELECTOR | | Q1,00 | PCWP | |
|* 32 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 33 | PX SELECTOR | | Q1,00 | PCWP | |
|* 34 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 35 | PX SELECTOR | | Q1,00 | PCWP | |
|* 36 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 37 | PX SELECTOR | | Q1,00 | PCWP | |
|* 38 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 39 | PX SELECTOR | | Q1,00 | PCWP | |
|* 40 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 41 | PX SELECTOR | | Q1,00 | PCWP | |
|* 42 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 43 | PX SELECTOR | | Q1,00 | PCWP | |
|* 44 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 45 | PX SELECTOR | | Q1,00 | PCWP | |
|* 46 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
-------------------------------------------------------------------------
Starting with 12c you'll usually get a plan note about the parallel degree the optimizer has determined, both for the traditional DOP way and for the Auto DOP way introduced in 11.2 (for Auto DOP this note already showed up in 11.2). In this case, however, no note section shows up.

So what degree do we then get at runtime? (XPLAN_ASH output snippet)

Real-Time SQL Monitoring Execution Summary
-----------------------------------------------

| |PX IS|PX |PX | |PX |PX |
| |CROSS|MIN |MAX |PX |SERVERS |SERVERS |
STATUS |USERNAME |INST |DOP |DOP |INSTANCES |REQUESTED |ALLOCATED |
-------------------|------------------------------|-----|----------|----------|----------|----------|----------|
EXECUTING |CBO_TEST |N | 20| 20| 1| 20| 20|
| | | | | | | |
So that's interesting: I tested this on a laptop with 2 CPUs and the Resource Manager active, which means that insane degrees should be downgraded automatically (or even limited by the optimizer in the first place), but I really got a degree of 20, which doesn't make a lot of sense in this environment. Obviously the limits enforced by the Resource Manager have been raised in 12c: In 11.2 the highest degree the Resource Manager allowed in this particular environment was 16, whereas in 12c the limit was 27 (maybe 28, because in 11.2 I sometimes got 15, sometimes 16).

So, pretty straightforwardly, the number of serial branches seems to determine the degree attempted at runtime. This also means you need to be careful about how many branches you code into the UNION ALL if you want to make use of the concurrent feature.
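
As a side note, if you want to cross-check what degree limit the Resource Manager is supposed to enforce in your environment, a query along these lines on the directives of the currently active plan can help (just a sketch, the output obviously depends on your Resource Manager setup):

select plan, group_or_subplan, parallel_degree_limit_p1
from   dba_rsrc_plan_directives
where  plan = (select name from v$rsrc_plan where is_top_plan = 'TRUE');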

What happens if I now mix Parallel Execution with a UNION ALL that consists only of serial branches, for example like this:

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t_2
where regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) a,
(
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) b
where a.id = b.id;
Again, if I don't use the PQ_CONCURRENT_UNION hint, I'll get the traditional plan shape for the UNION ALL - which gets executed serially and then gets distributed as part of the remaining parallel operations:

-------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10002 | Q1,02 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,02 | PCWP | |
|* 5 | HASH JOIN | | Q1,02 | PCWP | |
| 6 | BUFFER SORT | | Q1,02 | PCWC | |
| 7 | PX RECEIVE | | Q1,02 | PCWP | |
| 8 | PX SEND HYBRID HASH | :TQ10000 | | S->P | HYBRID HASH|
| 9 | STATISTICS COLLECTOR | | | | |
| 10 | VIEW | | | | |
| 11 | UNION-ALL | | | | |
|* 12 | TABLE ACCESS FULL | T2 | | | |
|* 13 | TABLE ACCESS FULL | T2 | | | |
|* 14 | TABLE ACCESS FULL | T2 | | | |
|* 15 | TABLE ACCESS FULL | T2 | | | |
|* 16 | TABLE ACCESS FULL | T2 | | | |
|* 17 | TABLE ACCESS FULL | T2 | | | |
|* 18 | TABLE ACCESS FULL | T2 | | | |
|* 19 | TABLE ACCESS FULL | T2 | | | |
|* 20 | TABLE ACCESS FULL | T2 | | | |
|* 21 | TABLE ACCESS FULL | T2 | | | |
| 22 | PX RECEIVE | | Q1,02 | PCWP | |
| 23 | PX SEND HYBRID HASH | :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 24 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
|* 25 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("ID"="B"."ID")
12 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
13 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
14 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
15 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
16 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
17 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
18 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
19 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
20 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
21 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
25 - filter( REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))

Note
-----
- Degree of Parallelism is 4 because of table property
Since I marked the table T_2 with PARALLEL 8 I would expect the overall plan to use a degree of 8, but look at the "note" section, which tells me the degree would be 4 due to a table property...

What happens at runtime:

Real-Time SQL Monitoring Execution Summary
-----------------------------------------------

| |PX IS|PX |PX | |PX |PX |
| |CROSS|MIN |MAX |PX |SERVERS |SERVERS |
STATUS |USERNAME |INST |DOP |DOP |INSTANCES |REQUESTED |ALLOCATED |
-------------------|------------------------------|-----|----------|----------|----------|----------|----------|
EXECUTING |CBO_TEST |N | 8| 8| 1| 16| 16|
| | | | | | | |
But at runtime I get a degree of 8, so you have to be careful with these plan notes as they quite often seem to report incorrect degrees for the traditional DOP way.
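
A simple way to cross-check the degree actually used while the statement is still executing is to look at V$PX_SESSION for the corresponding query coordinator session, for example (replace &qc_sid with the SID of the coordinator session):

select qcsid, server_group, server_set, degree, req_degree,
       count(*) as num_slaves
from   v$px_session
where  qcsid = &qc_sid
group by qcsid, server_group, server_set, degree, req_degree;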

If I now use the PQ_CONCURRENT_UNION hint, I get the following plan:

------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10002 | Q1,02 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,02 | PCWP | |
|* 5 | HASH JOIN | | Q1,02 | PCWP | |
| 6 | PX RECEIVE | | Q1,02 | PCWP | |
| 7 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
| 8 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
| 9 | VIEW | | Q1,00 | PCWP | |
| 10 | UNION-ALL | | Q1,00 | PCWP | |
| 11 | PX SELECTOR | | Q1,00 | PCWP | |
|* 12 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 13 | PX SELECTOR | | Q1,00 | PCWP | |
|* 14 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 15 | PX SELECTOR | | Q1,00 | PCWP | |
|* 16 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 17 | PX SELECTOR | | Q1,00 | PCWP | |
|* 18 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 19 | PX SELECTOR | | Q1,00 | PCWP | |
|* 20 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 21 | PX SELECTOR | | Q1,00 | PCWP | |
|* 22 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 23 | PX SELECTOR | | Q1,00 | PCWP | |
|* 24 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 25 | PX SELECTOR | | Q1,00 | PCWP | |
|* 26 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 27 | PX SELECTOR | | Q1,00 | PCWP | |
|* 28 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 29 | PX SELECTOR | | Q1,00 | PCWP | |
|* 30 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
| 31 | PX RECEIVE | | Q1,02 | PCWP | |
| 32 | PX SEND HYBRID HASH | :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 33 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
|* 34 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - access("ID"="B"."ID")
12 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
14 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
16 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
18 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
20 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
22 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
24 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
26 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
28 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
30 - filter( REGEXP_REPLACE ("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))
34 - filter( REGEXP_REPLACE ("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'c')>= REGEXP_REPLACE
("T_2"."FILLER",'^\s+([[:alnum:]]+)\s+$',' \1',1,1,'i'))

Note
-----
- Degree of Parallelism is 4 because of table property
OK, this is the concurrent UNION ALL shape now, but the "note" section still tells me the parallel degree is 4.

What happens at runtime:

Real-Time SQL Monitoring Execution Summary
-----------------------------------------------

| |PX IS|PX |PX | |PX |PX |
| |CROSS|MIN |MAX |PX |SERVERS |SERVERS |
STATUS |USERNAME |INST |DOP |DOP |INSTANCES |REQUESTED |ALLOCATED |
-------------------|------------------------------|-----|----------|----------|----------|----------|----------|
EXECUTING |CBO_TEST |N | 10| 10| 1| 20| 20|
| | | | | | | |
Oops - at runtime I now get the degree determined by the PQ_CONCURRENT_UNION part, so in my case the overall degree is 10 and not 4 (or 8, for that matter), since my UNION ALL had 10 serial branches in this example.

So it becomes obvious that when using the new feature with serial branches you have to think about the parallel degree that will effectively be used at runtime.

In the next instalment we'll look at the actual runtime behaviour of the feature when it gets automatically triggered by a mixture of parallel and serial branches.

12c Parallel Execution New Features: Hybrid Hash Distribution - Part 2

Thu, 2015-02-19 15:08
In the second part of this post (go to part 1) I want to focus on the hybrid distribution for skewed join expressions.

2. Hybrid Distribution For Skewed Join Expressions
The HYBRID HASH distribution allows addressing, to some degree, data distribution skew in case of HASH distributions, which I've already described in detail in the past. A summary post that links to all other relevant articles regarding Parallel Execution Skew can be found here, an overview of the relevant feature can be found here and a detailed description can be found here.

One other side effect of the truly hybrid distribution in case of skew (a mixture of BROADCAST / HASH for one row source and ROUND-ROBIN / HASH for the other) is that HASH distributions following such a hybrid distribution need to redistribute again, even if the following joins use the same join / distribution keys. If these were regular HASH distributions the data would already be suitably distributed and no further redistribution would be required.

Here's an example of this, using the test case setup mentioned here:

-- Here the HYBRID SKEW distribution works for B->C
-- But the (B->C)->A join is affected by the same skew
-- So the HASH re-distribution of the resulting B.ID is skewed, too
-- And hence the HASH JOIN/SORT AGGREGATE (operation 4+5) are affected by the skew
-- The big question is: Why is there a re-distribution (operation 12+11)?
-- The data is already distributed on B.ID??
-- If there wasn't a re-distribution no skew would happen
-- In 11.2 no redistribution happens, regardless of whether C is the probe or the hash row source
-- So it looks like a side-effect of the hybrid distribution
-- Which makes sense as it is not really HASH distributed, but hybrid
select count(t_2_filler) from (
select /*+ monitor
leading(b c a)
use_hash(c a)
swap_join_inputs(a)
no_swap_join_inputs(c)
pq_distribute(a hash hash)
pq_distribute(c hash hash)
--optimizer_features_enable('11.2.0.4')
pq_skew(c)
*/
a.id as t_1_id
, a.filler as t_1_filler
, c.id as t_2_id
, c.filler as t_2_filler
from t_1 a
, t_1 b
, t_2 c
where
c.fk_id_skew = b.id
and a.id = b.id
);

-- 11.2 plan
----------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10003 | Q1,03 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,03 | PCWP | |
|* 5 | HASH JOIN | | Q1,03 | PCWP | |
| 6 | PX RECEIVE | | Q1,03 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
|* 10 | HASH JOIN | | Q1,03 | PCWP | |
| 11 | PX RECEIVE | | Q1,03 | PCWP | |
| 12 | PX SEND HASH | :TQ10001 | Q1,01 | P->P | HASH |
| 13 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 14 | TABLE ACCESS FULL| T_1 | Q1,01 | PCWP | |
| 15 | PX RECEIVE | | Q1,03 | PCWP | |
| 16 | PX SEND HASH | :TQ10002 | Q1,02 | P->P | HASH |
| 17 | PX BLOCK ITERATOR | | Q1,02 | PCWC | |
| 18 | TABLE ACCESS FULL| T_2 | Q1,02 | PCWP | |
----------------------------------------------------------------------------

-- 12.1 plan
-------------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10004 | Q1,04 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,04 | PCWP | |
|* 5 | HASH JOIN | | Q1,04 | PCWP | |
| 6 | PX RECEIVE | | Q1,04 | PCWP | |
| 7 | PX SEND HYBRID HASH | :TQ10002 | Q1,02 | P->P | HYBRID HASH|
| 8 | STATISTICS COLLECTOR | | Q1,02 | PCWC | |
| 9 | PX BLOCK ITERATOR | | Q1,02 | PCWC | |
| 10 | TABLE ACCESS FULL | T_1 | Q1,02 | PCWP | |
| 11 | PX RECEIVE | | Q1,04 | PCWP | |
| 12 | PX SEND HYBRID HASH | :TQ10003 | Q1,03 | P->P | HYBRID HASH|
|* 13 | HASH JOIN BUFFERED | | Q1,03 | PCWP | |
| 14 | PX RECEIVE | | Q1,03 | PCWP | |
| 15 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
| 16 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
| 17 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 18 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
| 19 | PX RECEIVE | | Q1,03 | PCWP | |
| 20 | PX SEND HYBRID HASH (SKEW)| :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 21 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 22 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
-------------------------------------------------------------------------------------
Note that both joins - to A and to C - are based on B.ID. As you can see from the 11.2 plan, the final hash join (operation ID 5) therefore doesn't need the output of the previous hash join (operation ID 10) redistributed, since the data is already distributed in a suitable way (as a consequence both joins will be affected by skewed values in T2.FK_ID_SKEW, but no BUFFERED join variant is required).

Now look at the 12c plan when SKEW is detected: Since the SKEW handling in fact leads to a potential mixture of HASH / BROADCAST and HASH / ROUND-ROBIN distribution, the data gets redistributed again for the final join (operation IDs 11 + 12), which has several bad side effects: First, it adds the overhead of an additional redistribution. Second, as a side effect this turns one of the hash joins into its BUFFERED variant. And third, since the SKEW distribution (at present) is only supported if the right side of the join is a table (and not the result of another join), this following join will actually be affected by exactly the skew that was just addressed by the special SKEW handling in the join before (assuming the HYBRID HASH distributions in operation IDs 6+7 / 11+12 operate in HASH / HASH rather than BROADCAST / ROUND-ROBIN mode)...

12c Parallel Execution New Features: Hybrid Hash Distribution - Part 1

Mon, 2015-02-16 14:21
In this blog post I want to cover some aspects of the new HYBRID HASH adaptive distribution method that I haven't covered yet in my other posts.

As far as I know it serves two purposes for parallel HASH and MERGE JOINs: adaptive broadcast distribution and hybrid distribution for skewed join expressions. In the first part of this post I want to focus on the former (go to part 2).

1. Adaptive Broadcast Distribution For Small Left Row Sources
It allows the PX SEND / RECEIVE operation for the left row source (the one with the smaller estimated cardinality) of the hash join to decide dynamically at runtime - actually at each execution - whether to use a BROADCAST or a HASH distribution, and correspondingly whether the other row source should then use a ROUND-ROBIN or a HASH distribution. This is described for example in the corresponding white paper by Maria Colgan here.

It's important to emphasize that this decision is really done at each execution of the same cursor, so the same cursor can do a BROADCAST distribution for the left row source at one execution and HASH distribution at another execution depending on whether the number of rows detected by the STATISTICS COLLECTOR operator exceeds the threshold or not. This is different from the behaviour of "adaptive joins" where the final plan will be resolved at first execution and from then on will be re-used, and therefore a STATISTICS COLLECTOR operator as part of an adaptive plan no longer will be evaluated after the first execution.

Here is a simple script demonstrating that the distribution method is evaluated at each execution:

define dop = 4

create table t_1
compress
as
select
        rownum as id
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(&dop*2) */ * from dual
         connect by level <= &dop*2) a
;

exec dbms_stats.gather_table_stats(null, 't_1', method_opt=>'for all columns size 1')

create table t_2
compress
as
select
        rownum as id
      , mod(rownum, &dop) + 1 as fk_id
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(1e5) */ * from dual
         connect by level <= 1e5) a
;

exec dbms_stats.gather_table_stats(null, 't_2', method_opt=>'for all columns size 1')

alter table t_1 parallel &dop cache;

alter table t_2 parallel &dop cache;

select /*+ leading(t_1) no_swap_join_inputs(t_2) pq_distribute(t_2 hash hash) */ max(t_2.id) from t_1, t_2 where t_1.id = t_2.fk_id;

@pqstat

delete from t_1 where rownum <= 1;

select count(*) from t_1;

select /*+ leading(t_1) no_swap_join_inputs(t_2) pq_distribute(t_2 hash hash) */ max(t_2.id) from t_1, t_2 where t_1.id = t_2.fk_id;

@pqstat

rollback;
For table queue 0 (the distribution of T_1) the distribution for the first execution in the above script looks like this:

TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS % GRAPH
---------- ---------- ---------- -------- ---------- ---------- ----------
0 Producer 1 P004 8 100 ##########
P005 0 0
P006 0 0
P007 0 0
********** ********** ----------
Total 8

Consumer 1 P000 3 38 ##########
P001 1 13 ###
P002 2 25 #######
P003 2 25 #######
********** ********** ----------
Total 8
So the eight rows are distributed, presumably by hash. But for the second execution, with only seven rows in T_1, I get this output:

TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS % GRAPH
---------- ---------- ---------- -------- ---------- ---------- ----------
0 Producer 1 P004 28 100 ##########
P005 0 0
P006 0 0
P007 0 0
********** ********** ----------
Total 28

Consumer 1 P000 7 25 ##########
P001 7 25 ##########
P002 7 25 ##########
P003 7 25 ##########
********** ********** ----------
Total 28
So this time the seven rows were broadcast - each row was sent to all four consumers, which explains the 28 rows on the producer side.

The "pqstat" script is simply a query on V$PQ_TQSTAT, which I've mentioned for example here.

So I run the same query twice, the first time the threshold is exceeded and a HASH distribution takes place. After deleting one row the second execution of the same cursor turns into a BROADCAST / ROUND-ROBIN distribution. You can verify that this is the same parent / child cursor via DBMS_XPLAN.DISPLAY_CURSOR / V$SQL. Real-Time SQL Monitoring also can provide more details about the distribution methods used (click on the "binoculars" icon in the "Other" column of the active report for the PX SEND HYBRID HASH operations).
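
For example, a quick check like this should show a single child cursor with two executions after running the above script (replace &sql_id with the SQL_ID of the test query):

select sql_id, child_number, executions, loads, plan_hash_value
from   v$sql
where  sql_id = '&sql_id';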

Note that the dynamic switch between HASH and BROADCAST unfortunately isn't the same as an optimizer decision at parse time to use a BROADCAST distribution, because in the latter case the other row source won't be distributed at all, which comes with some important side effects:

Not only can the redistribution of larger row sources take significant time and resources (CPU, and in case of RAC, network), but due to the limitation of Parallel Execution (still existing in 12c) that only a single redistribution is allowed to be active concurrently, reducing the number of redistributions in the plan can as a side effect also reduce the number of BUFFERED operations (mostly HASH JOIN BUFFERED, but possibly additional BUFFER SORTs, too), which are a threat to Parallel Execution performance in general.

Here is a very simple example showing the difference:


-- HYBRID HASH with possible BROADCAST distribution of T_1
----------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | Q1,02 | P->S | QC (RAND) |
|* 3 | HASH JOIN BUFFERED | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | Q1,02 | PCWP | |
| 5 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
| 6 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
| 7 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 8 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
| 9 | PX RECEIVE | | Q1,02 | PCWP | |
| 10 | PX SEND HYBRID HASH | :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 11 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 12 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
----------------------------------------------------------------------------

-- TRUE BROADCAST of T_1
-------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | Q1,01 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | Q1,01 | PCWP | |
| 4 | PX RECEIVE | | Q1,01 | PCWP | |
| 5 | PX SEND BROADCAST | :TQ10000 | Q1,00 | P->P | BROADCAST |
| 6 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | Q1,00 | PCWP | |
| 8 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 9 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
-------------------------------------------------------------------------
So even if in the first plan the T_1 row source really has less than 2*DOP rows and the HYBRID HASH distribution turns into a BROADCAST distribution, this doesn't change the overall plan shape generated by the optimizer. The second HYBRID HASH distribution won't be skipped and will turn into a ROUND-ROBIN distribution instead, which can be confirmed by looking at the output from V$PQ_TQSTAT for example. So the data of the second row source still needs to be distributed, and hence the HASH JOIN will be operating as BUFFERED join due to the plan shape and the limitation that only a single PX SEND / RECEIVE pair can be active at the same time.

In the second plan the BROADCAST distribution of T_1 means that T_2 will not be re-distributed, hence there is no need to operate the HASH JOIN in buffered mode.

So the only purpose of this particular adaptive HYBRID HASH distribution is obviously to avoid skew if there are only a couple of rows (and hence possible join key values) in the left row source, because a HASH distribution based on such a low number of distinct values doesn't work well. Oracle's algorithm needs a certain number of distinct values otherwise it can end up with a bad distribution. This probably also explains why the threshold of 2*DOP was chosen so low.

Exadata & In-Memory Real World Performance Article (German)

Wed, 2015-02-11 08:49
Today an article of mine was published on "informatik-aktuell.de". It covers the analysis of a case at one of my clients that didn't achieve the expected performance on Exadata.

The article analyses different query profiles and explains how these different profiles influence the special features of Exadata and In-Memory.

Part 1 of the article
Part 2 of the article

Video Tutorial: XPLAN_ASH Active Session History - Part 3

Mon, 2015-02-09 15:04
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality has been published, continuing the actual walk-through of the script output.

More parts to follow.

Parallel Execution 12c New Features Overview

Wed, 2015-02-04 07:12
Oracle 12c is the first release in a couple of years that adds significant new functionality in the area of Parallel Execution operators, plan shapes and runtime features. Although 11gR2 added the new Auto DOP feature along with In-Memory Parallel Execution and Statement Queueing, the 12c features are more significant because they introduce new operators that can change both execution plan shape and runtime behaviour.

Here is a list of new features that are worth noting (and not necessarily mentioned in the official documentation and white papers by Oracle):

- The new HYBRID HASH adaptive distribution method, that serves two purposes for parallel HASH and MERGE JOINs:

First, it allows the PX SEND / RECEIVE operation for the left row source (the one with the smaller estimated cardinality) of the hash join to decide dynamically at runtime - actually at each execution - whether to use a BROADCAST or a HASH distribution, and correspondingly whether the other row source should then use a ROUND-ROBIN or a HASH distribution. This is described for example in the corresponding white paper by Maria Colgan here.

Second, to some degree it allows addressing data distribution skew in case of HASH distributions (and only for parallel hash joins, not merge joins), which I've already described in detail in the past. A summary post that links to all other relevant articles regarding Parallel Execution Skew can be found here, an overview of the relevant feature can be found here and a detailed description can be found here.

I'll cover some aspects of this adaptive distribution that I haven't mentioned in the existing articles in a separate post.

- The new concurrent UNION ALL operator. This is officially documented here. It comes with a new operator, PX SELECTOR, that provides the generic functionality of picking one of the available PX slaves to perform the child operations of this operator. Since the official documentation leaves a lot of details unclear about how this concurrent operation will actually behave at runtime, I'll cover some examples with runtime profiles in a separate post.

- The new PQ_REPLICATE feature, which for simple parallel FULL TABLE SCAN row sources (I haven't tested yet whether a parallel INDEX FAST FULL SCAN is eligible, too, but I assume so) can decide to run the scan entirely in each PX slave instead of running a distributed scan across the PX slaves in granules and distributing the result by BROADCAST afterwards. It's not entirely clear to me why this was implemented. Although it reduces the number of redistributions, and in some cases where no other parallel redistributions are required can reduce the number of parallel slave sets to one instead of two, BROADCAST distributions are typically used for smaller row sources, so eliminating this distribution doesn't sound like a huge improvement justifying the development effort. Jonathan Lewis describes the feature here along with some of his ideas about why this feature might be useful.

- The new parallel FILTER operator, an important and potentially huge improvement over the previously only available serial FILTER operator. In the past, when a FILTER subquery was part of a parallel plan, the data of the "driving" row source of the FILTER (the first child operation) had to be passed to the Query Coordinator, and only then could the second to nth children be executed as many times as indicated by the first row source (depending on the efficiency of filter / subquery caching). Now the FILTER operator can run in the PX slaves and there are a number of distribution variants possible with this new parallel operator (see the sketch after this list). I'll cover that in a separate post.

- The new PX SELECTOR operator that I already mentioned above as part of the new concurrent UNION ALL operator. As described above, the generic functionality of this operator is to pick one of the available PX slaves to perform the child operations of this operator. In 12c it will be used for serial access operators that are part of a parallel plan (like a serial table or index scan). In the past these parts were performed by the Query Coordinator itself, but now one slave out of a slave set will be selected to perform such operations. This has a number of implications and I'll cover that in a separate post.

- The new 1 SLAVE distribution method, which is a bit similar to the PX SELECTOR operator in that it uses just one slave of the slave set, but gets used for serial parts of the execution plan when the data is redistributed from a parallel part of the plan to a part that needs to run serially because Oracle cannot parallelize the functionality - for example when evaluating ROWNUM or certain analytic function variants (such as LAG or LEAD with no partition clause). This new 1 SLAVE distribution seems to have two purposes: First, avoid activity of the query coordinator (like the PX SELECTOR above), and second, avoid the decomposition of the parallel plan into multiple DFO trees. I'll cover that in a separate post.

- 12c also changes the way some operations in the plan are marked as PARALLEL or not, which in my opinion can be pretty confusing (and partly inconsistent with actual runtime behaviour) when just looking at the execution plan, since the runtime activity might then look different from what the execution plan suggests. I'll cover that in a separate post, and it will also be picked up in the context of the other new functionality mentioned above as appropriate.
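
As a teaser for the parallel FILTER operator mentioned in the list above, here is the kind of construct I mean - just a sketch using the T_1 / T_2 tables from my usual test setups, with a NO_UNNEST hint to keep the subquery as a FILTER operation:

select count(*)
from   t_2 t
where  not exists (
         select /*+ no_unnest */ null
         from   t_1 x
         where  x.id = t.fk_id
       );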

There is probably more that I haven't come across yet, but as you can see from the number of times I've mentioned "separate post" in this overview this is already enough material for a whole series of posts to follow.

Webinar Followup

Tue, 2015-02-03 06:47
Thanks to everyone who attended my recent webinar at AllThingsOracle.com.

The link to the webinar recording can be found here.

The presentation PDF can be downloaded here. Note that this site uses a non-default HTTP port, so if you're behind a firewall this might be blocked.

Thanks again to AllThingsOracle.com and Amy Burrows for hosting the event.

Video Tutorial: XPLAN_ASH Active Session History - Part 2

Thu, 2015-01-22 13:45
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality has been published. In this part I begin the actual walk-through of the script output.

More parts to follow.


New Version Of XPLAN_ASH Utility - In-Memory Support

Thu, 2015-01-22 13:42
A new version 4.21 of the XPLAN_ASH utility is available for download. I'm publishing this version because it is used in the recent video tutorials explaining the Active Session History functionality of the script.

As usual the latest version can be downloaded here.

This is mainly a maintenance release that fixes some incompatibilities of the 4.2 version with less recent versions (10.2 and 11.2.0.1).

As an extra however, this version now differentiates between general CPU usage and in-memory CPU usage (similar to 12.1.0.2 Real-Time SQL Monitoring). This is not done in all possible sections of the output yet, but the most important ones are already covered.

So if you already use the 12.1.0.2 in-memory option this might be helpful to understand how much of your CPU time is spent on in-memory operations vs. non in-memory. Depending on your query profile you might be surprised by the results.

Here are the notes from the change log:

 - Forgot to address a minor issue where the SET_COUNT determined per DFO_TREE (either one or two slave sets) is incorrect in the special case of DFO trees having only S->P distributions (pre-12c style). Previous versions used a SET_COUNT of 2 in such a case which is incorrect, since there is only one slave set. 12c changes this behaviour with the new PX SELECTOR operator and requires again two sets.

- For RAC Cross Instance Parallel Execution specific output some formatting and readability was improved (more linebreaks etc.)

- Minor SQL issue fixed in "SQL statement execution ASH Summary" that prevented execution in 10.2 (ORA-32035)

- The NO_STATEMENT_QUEUING hint prevented the "OPTIMIZER_FEATURES_ENABLE" hint from being recognized, therefore some queries failed in 11.2.0.1 again with ORA-03113. Fixed

- "ON CPU" now distinguishes between "ON CPU INMEMORY" and "ON CPU" for in-memory scans

Free Webinar "Oracle Exadata & In-Memory Real-World Performance"

Fri, 2015-01-16 16:38
It's webinar time again.

Join me on Wednesday, January 28th at AllThingsOracle.com for a session based on a real world customer experience.

The session starts at 3pm UK (16:00 Central European) time. The webinar is totally free and the recording will be made available afterwards.

Here's the link to the official landing page where you can register and below is the official abstract:
Abstract
After a short introduction into what the Oracle Exadata Database Machine is, in this one-hour webinar I will look at an analysis of different database query profiles that are based on a real-world customer case, how these different profiles influence the efficiency of Exadata’s “secret sauce” features, as well as the new Oracle In-Memory column store option. Based on the analysis, different optimization strategies are presented along with lessons learned.

Video Tutorial: XPLAN_ASH Active Session History - Introduction

Sun, 2015-01-11 16:38
I finally got around to preparing another part of the XPLAN_ASH video tutorial.

This part is about the main functionality of XPLAN_ASH: SQL statement execution analysis using Active Session History and Real-Time SQL Monitoring.

In this video tutorial I'll explain what the output of XPLAN_ASH is supposed to mean when using the Active Session History functionality of the script. Before diving into the details of the script output using sample reports I provide some overview and introduction in this part that hopefully makes it simpler to understand how the output is organized and what it is supposed to mean.

This is the initial, general introduction part. More parts to follow.

"SELECT * FROM TABLE" Runs Out Of TEMP Space

Thu, 2015-01-08 12:49
Now that I've shown in the previous post that Parallel Execution plans can sometimes end up with unnecessary BUFFER SORT operations, let's have a look at what particular side effects this can cause.

What would you say if someone told you that (s)he just ran a simple, straightforward "SELECT * FROM TABLE" that took several minutes to execute without returning any rows, only to then error out with "ORA-01652 unable to extend temp segment" - and the TABLE in question is actually nothing but a simple, partitioned heap table? No special tricks, no views, synonyms, VPD etc. involved - it's really just a plain, simple table.

Some time ago I was confronted with such a case at a client. Of course the first question is why someone would run a plain SELECT * FROM TABLE in the first place, but nowadays, with power users and developers using GUI-based tools like TOAD or SQL Developer, this is probably the GUI equivalent of a table describe command. Since these tools by default show the results in a grid that only fetches the first n rows, this typically isn't really a threat even in the case of large tables - apart from the common problem of PX servers staying allocated when the table is queried using Parallel Execution and users simply keep the grid / cursor open, preventing the PX servers from being re-used by different executions.

But have a look at the following output, in this case taken from 12.1.0.2, assuming the partitioned table T_PART in question is marked parallel, resides on Exadata and has many partitions compressed via HCC that uncompressed represent several TB of data (11.2.0.4 on Exadata produces a similar plan):


SQL> explain plan for
2 select * from t_part p;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display(format => 'BASIC PARTITION PARALLEL'));
Plan hash value: 2545275170

------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | | |
| 1 | PX COORDINATOR | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | | | Q1,02 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | | | Q1,02 | PCWP | |
| 4 | VIEW | VW_TE_2 | | | Q1,02 | PCWP | |
| 5 | UNION-ALL | | | | Q1,02 | PCWP | |
| 6 | CONCATENATION | | | | Q1,02 | PCWP | |
| 7 | BUFFER SORT | | | | Q1,02 | PCWC | |
| 8 | PX RECEIVE | | | | Q1,02 | PCWP | |
| 9 | PX SEND ROUND-ROBIN | :TQ10000 | | | | S->P | RND-ROBIN |
| 10 | BUFFER SORT | | | | | | |
| 11 | PARTITION RANGE SINGLE | | 2 | 2 | | | |
| 12 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 2 | 2 | | | |
|* 13 | INDEX RANGE SCAN | T_PART_IDX | 2 | 2 | | | |
| 14 | BUFFER SORT | | | | Q1,02 | PCWC | |
| 15 | PX RECEIVE | | | | Q1,02 | PCWP | |
| 16 | PX SEND ROUND-ROBIN | :TQ10001 | | | | S->P | RND-ROBIN |
| 17 | BUFFER SORT | | | | | | |
| 18 | PARTITION RANGE SINGLE | | 4 | 4 | | | |
| 19 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 4 | 4 | | | |
|* 20 | INDEX RANGE SCAN | T_PART_IDX | 4 | 4 | | | |
| 21 | PX BLOCK ITERATOR | | 6 | 20 | Q1,02 | PCWC | |
|* 22 | TABLE ACCESS FULL | T_PART | 6 | 20 | Q1,02 | PCWP | |
| 23 | PX BLOCK ITERATOR | |KEY(OR)|KEY(OR)| Q1,02 | PCWC | |
|* 24 | TABLE ACCESS FULL | T_PART |KEY(OR)|KEY(OR)| Q1,02 | PCWP | |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

13 - access("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
20 - access("P"."DT">=TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
filter(LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss')))
22 - filter("P"."DT">=TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2020-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND (LNNVL("P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2003-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
24 - filter("P"."DT"<TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Can you spot the problem? It's again the "unnecessary BUFFER SORTs" problem introduced in the previous post. In particular the BUFFER SORT at operation ID = 3 is "deadly" if the table T_PART is large, because it needs to buffer the whole table data before any row can be returned to the client. This explains why this simple SELECT * FROM T_PART will potentially run out of TEMP space, assuming the uncompressed table data is larger than the available TEMP space. Even if it doesn't run out of TEMP space it is a totally inefficient operation, copying all table data to PGA (unlikely to be sufficient) or TEMP before returning any rows to the client.

But why does a simple SELECT * FROM TABLE come up with such an execution plan? A hint is the VW_TE_2 alias shown in the NAME column of the plan: It's the result of the "table expansion" transformation introduced in 11.2, which allows setting some partitions' local indexes to unusable while still making use of the usable index partitions of other partitions. It takes a bit of effort to bring the table into a state where such a plan will be produced for a plain SELECT * FROM TABLE, but as you can see, it is possible. And as you can see from the CONCATENATION operation in the plan, the query transformed by the "table expansion" then triggered another transformation - the "concatenation" transformation mentioned in the previous post - which, combined with Parallel Execution, results in the addition of unnecessary BUFFER SORT operations.
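
The key ingredient for triggering the "table expansion" transformation is unusable local index partitions, for example along these lines (the index name T_PART_IDX is the one shown in the plans above; the exact set of partitions to mark unusable to reproduce the plan differs):

-- Mark some local index partitions unusable to trigger table expansion
alter index t_part_idx modify partition p_1 unusable;
alter index t_part_idx modify partition p_3 unusable;
alter index t_part_idx modify partition p_5 unusable;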

Here is a manual rewrite that corresponds to the query resulting from both the "table expansion" and the "concatenation" transformation:

select * from (
select /*+ opt_param('_optimizer_table_expansion', 'false') */ * from t_part p where
("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
union all
select * from t_part p where
("P"."DT">=TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
and
(LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss')))
union all
select * from t_part p where
("P"."DT">=TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2020-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND (LNNVL("P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2003-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE('
2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
)
union all
select * from t_part p where
("P"."DT"<TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
;

But if you run an EXPLAIN PLAN on the above manual rewrite, 12.1.0.2 produces the following simple and elegant plan:

--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | | |
| 1 | PX COORDINATOR | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | | | Q1,00 | P->S | QC (RAND) |
| 3 | UNION-ALL | | | | Q1,00 | PCWP | |
| 4 | VIEW | | | | Q1,00 | PCWP | |
| 5 | UNION-ALL | | | | Q1,00 | PCWP | |
| 6 | PX SELECTOR | | | | Q1,00 | PCWP | |
| 7 | PARTITION RANGE SINGLE | | 2 | 2 | Q1,00 | PCWP | |
| 8 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 2 | 2 | Q1,00 | PCWP | |
|* 9 | INDEX RANGE SCAN | T_PART_IDX | 2 | 2 | Q1,00 | PCWP | |
| 10 | PX SELECTOR | | | | Q1,00 | PCWP | |
| 11 | PARTITION RANGE SINGLE | | 4 | 4 | Q1,00 | PCWP | |
| 12 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 4 | 4 | Q1,00 | PCWP | |
|* 13 | INDEX RANGE SCAN | T_PART_IDX | 4 | 4 | Q1,00 | PCWP | |
| 14 | PX BLOCK ITERATOR | | 6 | 20 | Q1,00 | PCWC | |
|* 15 | TABLE ACCESS FULL | T_PART | 6 | 20 | Q1,00 | PCWP | |
| 16 | PX BLOCK ITERATOR | |KEY(OR)|KEY(OR)| Q1,00 | PCWC | |
|* 17 | TABLE ACCESS FULL | T_PART |KEY(OR)|KEY(OR)| Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

9 - access("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
13 - access("P"."DT">=TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
filter(LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss')))
15 - filter((LNNVL("P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2003-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
17 - filter("P"."DT"<TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

I've disabled the "table expansion" transformation in this case, because it kicks in again when optimizing this query and just adds some harmless (and useless) branches to the plan that confuse the issue. Without those additional, useless branches it is very similar to the above plan, but without any BUFFER SORT operations, hence it doesn't cause any overhead and should return the first rows rather quickly, no matter how large the table is.

The 11.2.0.4 optimizer unfortunately again adds unnecessary BUFFER SORT operations even to the manual rewrite above, so, as mentioned in the previous post, the problem of those spurious BUFFER SORTs isn't limited to the CONCATENATION transformation.

Of course, since all this is related to Parallel Execution, a simple workaround is to run the SELECT * FROM TABLE using a NO_PARALLEL hint, and all those strange side effects of BUFFER SORTs will be gone. Not having unusable local indexes will also prevent the problem, because then the "table expansion" transformation won't kick in.
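
As a sketch, using the T_PART table from the test case below:

-- forcing serial execution avoids the parallel-specific BUFFER SORT issue
select /*+ no_parallel(p) */ * from t_part p;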

Interestingly, if the optimizer is told about the true intention of initially fetching only the first n rows from the SELECT * FROM TABLE - for example simply by adding a corresponding FIRST_ROWS(n) hint - then at least in my tests using 12.1.0.2 all the complex transformations were rejected and a plain (parallel) FULL TABLE SCAN was preferred instead, simply because the plan is now costed differently. This would allow working around the problem, too.
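
A minimal sketch of that approach - the value 100 is just an arbitrary example for n:

-- telling the optimizer about the intention to fetch only the first
-- n rows; in my 12.1.0.2 tests this resulted in a plain (parallel)
-- full table scan instead of the complex transformed plan
select /*+ first_rows(100) */ * from t_part p;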

If you want to reproduce the issue, here's a sample table definition, along with some comments what I had to do to bring it into the state required to reproduce:

-- The following things have to come together to turn a simple SELECT * from partitioned table into a complex execution plan
-- including Table Expansion and Concatenation:
--
-- - Unusable index partitions to trigger Table Expansion
-- - Partitions with usable indexes that are surrounded by partitions with unusable indexes
-- - And such a partition needs to have an index access path that is cheaper than a corresponding FTS, typically by deleting the vast majority of rows without resetting the HWM
-- - All this also needs to be reflected properly in the statistics
--
-- If this scenario is combined with Parallel Execution, the "Parallel Concatenation" bug that plasters the plan with superfluous BUFFER SORTs
-- means that the whole table has to be kept in memory / TEMP space when running SELECT * from the table, because the bug adds, among many other
-- BUFFER SORTs, one deadly BUFFER SORT on top level before returning data to the coordinator, typically operation ID = 3
--
create table t_part (dt not null, id not null, filler)
partition by range (dt)
(
partition p_1 values less than (date '2001-01-01'),
partition p_2 values less than (date '2002-01-01'),
partition p_3 values less than (date '2003-01-01'),
partition p_4 values less than (date '2004-01-01'),
partition p_5 values less than (date '2005-01-01'),
partition p_6 values less than (date '2006-01-01'),
partition p_7 values less than (date '2007-01-01'),
partition p_8 values less than (date '2008-01-01'),
partition p_9 values less than (date '2009-01-01'),
partition p_10 values less than (date '2010-01-01'),
partition p_11 values less than (date '2011-01-01'),
partition p_12 values less than (date '2012-01-01'),
partition p_13 values less than (date '2013-01-01'),
partition p_14 values less than (date '2014-01-01'),
partition p_15 values less than (date '2015-01-01'),
partition p_16 values less than (date '2016-01-01'),
partition p_17 values less than (date '2017-01-01'),
partition p_18 values less than (date '2018-01-01'),
partition p_19 values less than (date '2019-01-01'),
partition p_20 values less than (date '2020-01-01')
)
as
with generator as
(
select /*+ cardinality(1000) */ rownum as id, rpad('x', 100) as filler from dual connect by level <= 1e3
)
select
add_months(date '2000-01-01', trunc(
case
when id >= 300000 and id < 700000 then id + 100000
when id >= 700000 then id + 200000
else id
end / 100000) * 12) as dt
, id
, filler
from (
select
(a.id + (b.id - 1) * 1e3) - 1 + 100000 as id
, rpad('x', 100) as filler
from
generator a,
generator b
)
;

delete from t_part partition (p_2);

commit;

exec dbms_stats.gather_table_stats(null, 't_part')

create unique index t_part_idx on t_part (dt, id) local;

alter index t_part_idx modify partition p_1 unusable;

alter index t_part_idx modify partition p_3 unusable;

alter index t_part_idx modify partition p_5 unusable;

alter table t_part parallel;

alter index t_part_idx parallel;

set echo on pagesize 0 linesize 200

explain plan for
select * from t_part p;

select * from table(dbms_xplan.display(format => 'BASIC PARTITION PARALLEL'));

Unnecessary BUFFER SORT Operations - Parallel Concatenation Transformation

Mon, 2015-01-05 15:47
When using Parallel Execution, depending on the plan shape and the operations used, Oracle sometimes needs to turn non-blocking operations into blocking operations, which in this case means that the row source no longer passes its output data directly to the parent operation but buffers some data temporarily in PGA memory / TEMP. This is either accomplished via the special HASH JOIN BUFFERED operation, or simply by adding BUFFER SORT operations to the plan.

The reason for such behaviour in parallel plans is the limitation of Oracle Parallel Execution that allows only a single data redistribution to be active concurrently. You can read more about that here.

However, sometimes the optimizer adds unnecessary BUFFER SORT operations to parallel execution plans, and one of the most obvious examples is when the so called "concatenation" query transformation is applied by the optimizer and Parallel Execution is involved.

UPDATE Please note: As mentioned below by Martin (thanks), what I call "concatenation transformation" here is typically called "OR expansion transformation" in CBO speak, and that term probably describes much better what the transformation is about. So whenever I write "concatenation transformation" here it can be substituted with "OR expansion transformation".

To understand the issue, first of all, what is the concatenation transformation about?

Whenever there are predicates combined with OR, there is the possibility to rewrite the different conditions as separate queries unioned together.

In order to ensure that the result of the rewritten query doesn't contain any unwanted duplicates, the different branches of the UNIONed statement need to filter out any data fulfilling the conditions of previous branches - this is probably where the (at first sight) odd (and meanwhile documented) LNNVL function originally came into existence.
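
LNNVL(condition) evaluates to TRUE if the condition is FALSE or UNKNOWN, which is exactly what is needed to exclude rows already covered by a previous branch, even when NULLs are involved. A quick sketch against DUAL:

select 'row returned' from dual where lnnvl(1 = 2);               -- FALSE   -> row returned
select 'row returned' from dual where lnnvl(to_number(null) = 2); -- UNKNOWN -> row returned
select 'row returned' from dual where lnnvl(1 = 1);               -- TRUE    -> no row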

The predicates can be either single-table filters, where the concatenation might open up different access paths to the same table (like different indexes), or it might be predicates combining multiple tables, like joins or subqueries.
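
As a sketch of the single-table case - using a hypothetical table T3 with separate indexes on COL1 and COL2, just to illustrate the shape of the transformation:

-- original query: no efficient single access path for the OR predicate
select * from t3 where col1 = :b1 or col2 = :b2;

-- could be expanded into something equivalent to this, opening up
-- a separate index access path per branch
select * from t3 where col1 = :b1
union all
select * from t3 where col2 = :b2 and lnnvl(col1 = :b1);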

Here is a short example of the latter (the parallel hints are commented out but are used in the further examples to demonstrate the issue with Parallel Execution) - using version 12.1.0.2:


select
max(id)
from
(
select /* parallel(t1 8) parallel(t2 8) */
t2.*
from
t1
, t2
where
(t1.id = t2.id or t1.id = t2.id2)
);
In this example the join condition using an OR prevents any efficient join method between T1 and T2 without rewriting the statement - Oracle can only resort to a NESTED LOOP join with a repeated full table scan of one of the tables, which is reflected in a rather high estimated cost:


----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | 2177M (2)| 23:37:34 |
| 1 | SORT AGGREGATE | | 1 | 16 | | |
| 2 | NESTED LOOPS | | 3999K| 61M| 2177M (2)| 23:37:34 |
| 3 | TABLE ACCESS FULL| T2 | 2000K| 19M| 1087 (1)| 00:00:01 |
|* 4 | TABLE ACCESS FULL| T1 | 2 | 12 | 1089 (2)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("T1"."ID"="T2"."ID" OR "T1"."ID"="T2"."ID2")
The same statement could be expressed by the following manual rewrite:


select max(id) from (
select /* parallel(t1 8) parallel(t2 8) */
t2.*
from
t1
, t2
where
t1.id = t2.id2
---------
union all
---------
select /* parallel(t1 8) parallel(t2 8) */
t2.*
from
t1
, t2
where
t1.id = t2.id
and lnnvl(t1.id = t2.id2)
);
Notice the LNNVL function in the second branch of the UNION ALL that filters out any rows fulfilling the condition used in the first branch.

Also note that using UNION instead of UNION ALL plus LNNVL(s) to filter out duplicate rows is potentially incorrect: each query branch might produce genuine duplicate rows that are part of the original query result and therefore need to be retained, but UNION would remove them.
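
A contrived sketch demonstrating this, using inline views instead of the real tables:

with t1 as (select 1 as id from dual),
     t2 as (select 1 as id, 2 as id2 from dual
            union all
            select 1 as id, 2 as id2 from dual)
select t2.id, t2.id2
from   t1, t2
where  t1.id = t2.id or t1.id = t2.id2;

-- returns two (identical) rows, both part of the correct result;
-- a rewrite using UNION instead of UNION ALL plus LNNVL would collapse
-- them into a single row and hence produce a wrong result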

At the expense of visiting the tables multiple times we now get at least efficient join methods in each branch (and hence a significantly lower cost estimate):


--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | | 11945 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | | |
| 2 | VIEW | | 2100K| 26M| | 11945 (1)| 00:00:01 |
| 3 | UNION-ALL | | | | | | |
|* 4 | HASH JOIN | | 2000K| 30M| 34M| 5972 (1)| 00:00:01 |
| 5 | TABLE ACCESS FULL| T1 | 2000K| 11M| | 1086 (1)| 00:00:01 |
| 6 | TABLE ACCESS FULL| T2 | 2000K| 19M| | 1087 (1)| 00:00:01 |
|* 7 | HASH JOIN | | 100K| 1562K| 34M| 5972 (1)| 00:00:01 |
| 8 | TABLE ACCESS FULL| T1 | 2000K| 11M| | 1086 (1)| 00:00:01 |
| 9 | TABLE ACCESS FULL| T2 | 2000K| 19M| | 1087 (1)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("T1"."ID"="T2"."ID2")
7 - access("T1"."ID"="T2"."ID")
filter(LNNVL("T1"."ID"="T2"."ID2"))
And in fact, when the concatenation transformation isn't prevented (via a NO_EXPAND hint), the optimizer comes up with the following execution plan for the original statement:


-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | | 11945 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 16 | | | |
| 2 | CONCATENATION | | | | | | |
|* 3 | HASH JOIN | | 2000K| 30M| 34M| 5972 (1)| 00:00:01 |
| 4 | TABLE ACCESS FULL| T1 | 2000K| 11M| | 1086 (1)| 00:00:01 |
| 5 | TABLE ACCESS FULL| T2 | 2000K| 19M| | 1087 (1)| 00:00:01 |
|* 6 | HASH JOIN | | 100K| 1562K| 34M| 5972 (1)| 00:00:01 |
| 7 | TABLE ACCESS FULL| T1 | 2000K| 11M| | 1086 (1)| 00:00:01 |
| 8 | TABLE ACCESS FULL| T2 | 2000K| 19M| | 1087 (1)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("T1"."ID"="T2"."ID2")
6 - access("T1"."ID"="T2"."ID")
filter(LNNVL("T1"."ID"="T2"."ID2"))
The only differences between those two plans for the manual and automatic rewrite are the CONCATENATION operator instead of UNION ALL, and the fact that the subquery isn't merged in case of the UNION ALL (hence the additional VIEW operator).

So far everything works as expected and you have seen the effect and rationale of the concatenation transformation.

If we now run the original statement using Parallel Execution (by turning the comments into hints), depending on the exact version used the resulting execution plans show various inefficiencies.
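
So the test statement becomes:

select
        max(id)
from
  (
    select /*+ parallel(t1 8) parallel(t2 8) */
           t2.*
    from
           t1
         , t2
    where
          (t1.id = t2.id or t1.id = t2.id2)
  );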

For reference, this is the parallel execution plan I get from 12.1.0.2 when using above manual rewrite:


------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 606 (2)| 00:00:01 | | | |
| 1 | SORT AGGREGATE | | 1 | 13 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10004 | 1 | 13 | | | Q1,04 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 13 | | | Q1,04 | PCWP | |
| 5 | VIEW | | 2100K| 26M| 606 (2)| 00:00:01 | Q1,04 | PCWP | |
| 6 | UNION-ALL | | | | | | Q1,04 | PCWP | |
|* 7 | HASH JOIN | | 2000K| 30M| 303 (2)| 00:00:01 | Q1,04 | PCWP | |
| 8 | PX RECEIVE | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,04 | PCWP | |
| 9 | PX SEND HYBRID HASH | :TQ10000 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | P->P | HYBRID HASH|
| 10 | STATISTICS COLLECTOR | | | | | | Q1,00 | PCWC | |
| 11 | PX BLOCK ITERATOR | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | PCWC | |
| 12 | TABLE ACCESS FULL | T1 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | PCWP | |
| 13 | PX RECEIVE | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,04 | PCWP | |
| 14 | PX SEND HYBRID HASH | :TQ10001 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | P->P | HYBRID HASH|
| 15 | PX BLOCK ITERATOR | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | PCWC | |
| 16 | TABLE ACCESS FULL | T2 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | PCWP | |
|* 17 | HASH JOIN | | 100K| 1562K| 303 (2)| 00:00:01 | Q1,04 | PCWP | |
| 18 | PX RECEIVE | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,04 | PCWP | |
| 19 | PX SEND HYBRID HASH | :TQ10002 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,02 | P->P | HYBRID HASH|
| 20 | STATISTICS COLLECTOR | | | | | | Q1,02 | PCWC | |
| 21 | PX BLOCK ITERATOR | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,02 | PCWC | |
| 22 | TABLE ACCESS FULL | T1 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,02 | PCWP | |
| 23 | PX RECEIVE | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,04 | PCWP | |
| 24 | PX SEND HYBRID HASH | :TQ10003 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,03 | P->P | HYBRID HASH|
| 25 | PX BLOCK ITERATOR | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,03 | PCWC | |
| 26 | TABLE ACCESS FULL | T2 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,03 | PCWP | |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

7 - access("T1"."ID"="T2"."ID2")
17 - access("T1"."ID"="T2"."ID")
filter(LNNVL("T1"."ID"="T2"."ID2"))
This is a pretty straightforward parallel plan, with the only possibly notable exception being the use of the new 12c "HYBRID HASH" distribution feature.

Now let's have a look at the resulting execution plan when the concatenation transformation gets used:


------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | 606 (2)| 00:00:01 | | | |
| 1 | SORT AGGREGATE | | 1 | 16 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ20003 | 1 | 16 | | | Q2,03 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 16 | | | Q2,03 | PCWP | |
| 5 | CONCATENATION | | | | | | Q2,03 | PCWP | |
|* 6 | HASH JOIN | | 2000K| 30M| 303 (2)| 00:00:01 | Q2,03 | PCWP | |
| 7 | PX RECEIVE | | 2000K| 11M| 151 (1)| 00:00:01 | Q2,03 | PCWP | |
| 8 | PX SEND HYBRID HASH | :TQ20001 | 2000K| 11M| 151 (1)| 00:00:01 | Q2,01 | P->P | HYBRID HASH|
| 9 | STATISTICS COLLECTOR | | | | | | Q2,01 | PCWC | |
| 10 | BUFFER SORT | | 1 | 16 | | | Q2,01 | PCWP | |
| 11 | PX BLOCK ITERATOR | | 2000K| 11M| 151 (1)| 00:00:01 | Q2,01 | PCWC | |
| 12 | TABLE ACCESS FULL | T1 | 2000K| 11M| 151 (1)| 00:00:01 | Q2,01 | PCWP | |
| 13 | PX RECEIVE | | 2000K| 19M| 151 (1)| 00:00:01 | Q2,03 | PCWP | |
| 14 | PX SEND HYBRID HASH | :TQ20002 | 2000K| 19M| 151 (1)| 00:00:01 | Q2,02 | P->P | HYBRID HASH|
| 15 | BUFFER SORT | | 1 | 16 | | | Q2,02 | PCWP | |
| 16 | PX BLOCK ITERATOR | | 2000K| 19M| 151 (1)| 00:00:01 | Q2,02 | PCWC | |
| 17 | TABLE ACCESS FULL | T2 | 2000K| 19M| 151 (1)| 00:00:01 | Q2,02 | PCWP | |
| 18 | BUFFER SORT | | | | | | Q2,03 | PCWC | |
| 19 | PX RECEIVE | | 100K| 1562K| 303 (2)| 00:00:01 | Q2,03 | PCWP | |
| 20 | PX SEND ROUND-ROBIN | :TQ20000 | 100K| 1562K| 303 (2)| 00:00:01 | | S->P | RND-ROBIN |
| 21 | BUFFER SORT | | 1 | 16 | | | | | |
| 22 | PX COORDINATOR | | | | | | | | |
| 23 | PX SEND QC (RANDOM) | :TQ10002 | 100K| 1562K| 303 (2)| 00:00:01 | Q1,02 | P->S | QC (RAND) |
| 24 | BUFFER SORT | | 1 | 16 | | | Q1,02 | PCWP | |
|* 25 | HASH JOIN BUFFERED | | 100K| 1562K| 303 (2)| 00:00:01 | Q1,02 | PCWP | |
| 26 | PX RECEIVE | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,02 | PCWP | |
| 27 | PX SEND HYBRID HASH | :TQ10000 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | P->P | HYBRID HASH|
| 28 | STATISTICS COLLECTOR | | | | | | Q1,00 | PCWC | |
| 29 | BUFFER SORT | | 1 | 16 | | | Q1,00 | PCWP | |
| 30 | PX BLOCK ITERATOR | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | PCWC | |
| 31 | TABLE ACCESS FULL | T1 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | PCWP | |
| 32 | PX RECEIVE | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,02 | PCWP | |
| 33 | PX SEND HYBRID HASH | :TQ10001 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | P->P | HYBRID HASH|
| 34 | BUFFER SORT | | 1 | 16 | | | Q1,01 | PCWP | |
| 35 | PX BLOCK ITERATOR | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | PCWC | |
| 36 | TABLE ACCESS FULL | T2 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | PCWP | |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access("T1"."ID"="T2"."ID2")
25 - access("T1"."ID"="T2"."ID")
filter(LNNVL("T1"."ID"="T2"."ID2"))
This looks a bit weird, and when comparing it to the plan obtained from the manual rewrite, it shows the following unnecessary differences:

- There are various BUFFER SORT operations that don't make a lot of sense; for example, each parallel table scan is followed by a BUFFER SORT operation, and even the HASH JOIN BUFFERED in the lower part of the plan is followed by a BUFFER SORT (double buffering?)

- The plan is decomposed into two so-called DFO trees, which you can see for example from the two PX COORDINATOR operators (operation ids 2 and 22). This adds another unnecessary serial execution part to the plan and can have additional side effects I explain in one of my video tutorials.

This means that such execution plan shapes will possibly have a much higher demand for PGA memory than necessary (each BUFFER SORT operation will attempt to keep the data produced by its child row source in PGA), and might also cause additional I/O to and from TEMP. Since the PGA memory consumed by one session also influences the Auto PGA allocation of other sessions, such executions affect not only the particular SQL execution in question but also any other concurrent executions allocating PGA memory.
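
If you want to see that buffering happening while the statement executes, a simple sketch is to query V$SQL_WORKAREA_ACTIVE (replace the &sql_id substitution variable with the SQL_ID of the running statement):

-- workareas currently active for a given execution - the BUFFER SORT
-- operations allocate workareas just like sorts and hash joins, so
-- their memory / TEMP usage shows up here
select
       operation_id
     , operation_type
     , actual_mem_used
     , tempseg_size
from
       v$sql_workarea_active
where
       sql_id = '&sql_id';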

Depending on the amount of data to be buffered, BUFFER SORT operations closer to the root of the execution plan are more likely to have a significant impact performance-wise, as they might have to buffer large amounts of data.

One very obvious sign of inefficiency is double buffering, like a HASH JOIN BUFFERED followed by a BUFFER SORT as parent operation, which you can spot in the sample plan shown above.

Another interesting point is that the parallel plans differ from point release to point release and show different levels of inefficiencies: for example, 10.2.0.5, 11.1.0.7 and 11.2.0.1 produce different plans than 11.2.0.2, which is again different from what 11.2.0.3 and 11.2.0.4 produce - and using OPTIMIZER_FEATURES_ENABLE in newer versions to emulate older versions doesn't always reproduce the exact plans produced by the actual, older versions. So all in all this looks like a pretty messy part of the optimizer.
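
For reference, a sketch of such an emulation attempt, using one of the versions mentioned above:

alter session set optimizer_features_enable = '11.2.0.3';

explain plan for
select max(id) from (
  select /*+ parallel(t1 8) parallel(t2 8) */ t2.* from t1, t2
  where (t1.id = t2.id or t1.id = t2.id2)
);

select * from table(dbms_xplan.display);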

Furthermore, the problem doesn't always show up - it seems to depend largely on the exact version and the plan shape used. For example, replacing the SELECT MAX(ID) FROM () outermost query in the above example with a simple SELECT ID FROM () results in a plan where the concatenation transformation doesn't produce all those strange BUFFER SORTs - although it still produces a plan decomposed into two DFO trees in some versions.
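
For reference, this is the variant without the aggregate:

select id from (
  select /*+ parallel(t1 8) parallel(t2 8) */ t2.* from t1, t2
  where (t1.id = t2.id or t1.id = t2.id2)
);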

It is also interesting to note that, depending on version and plan shape, sometimes the manual rewrite using UNION ALL is also affected by either unluckily placed or unnecessary BUFFER SORT operations, but not to the same extent as the plans resulting from the CONCATENATION transformation.

In the next post I'll show how this inefficiency can have some interesting side effects when being triggered by / combined with other transformations.

Footnote
Table structures used in the test cases:


create table t1
compress
as
select
(rownum * 2) + 1 as id
, mod(rownum, 2000) + 1 as id2
, rpad('x', 100) as filler
from
        (select /*+ cardinality(100000) */ * from dual connect by level <= 100000) a
      , (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't1')

create table t2
compress
as
select * from t1;

exec dbms_stats.gather_table_stats(null, 't2')