
Randolf Geist


12c Parallel Execution New Features: Hybrid Hash Distribution - Part 2

Thu, 2015-02-19 15:08
In the second part of this post (go to part 1) I want to focus on the hybrid distribution for skewed join expressions.

2. Hybrid Distribution For Skewed Join Expressions
The HYBRID HASH distribution allows addressing, to some degree, data distribution skew in case of HASH distributions, which I've already described in detail in the past. A summary post that links to all other relevant articles regarding Parallel Execution Skew can be found here, an overview of the relevant feature can be found here and a detailed description can be found here.

One other side effect of the truly hybrid distribution in case of skew (a mixture of BROADCAST / HASH for one row source and ROUND-ROBIN / HASH for the other row source) is that HASH distributions following such a hybrid distribution need to redistribute the data again, even if subsequent joins use the same join / distribution keys. If these were regular HASH distributions the data would already be suitably distributed and no further redistribution would be required.

Here's an example of this, using the test case setup mentioned here:

-- Here the HYBRID SKEW distribution works for B->C
-- But the (B->C)->A join is affected by the same skew
-- So the HASH re-distribution of the resulting B.ID is skewed, too
-- And hence the HASH JOIN/SORT AGGREGATE (operation 4+5) are affected by the skew
-- The big question is: Why is there a re-distribution (operation 12+11)?
-- The data is already distributed on B.ID??
-- If there wasn't a re-distribution no skew would happen
-- In 11.2 no-redistribution happens no matter if C is probe or hash row source
-- So it looks like a side-effect of the hybrid distribution
-- Which makes sense as it is not really HASH distributed, but hybrid
select count(t_2_filler) from (
select /*+ monitor
leading(b c a)
use_hash(c a)
swap_join_inputs(a)
no_swap_join_inputs(c)
pq_distribute(a hash hash)
pq_distribute(c hash hash)
--optimizer_features_enable('11.2.0.4')
pq_skew(c)
*/
a.id as t_1_id
, a.filler as t_1_filler
, c.id as t_2_id
, c.filler as t_2_filler
from t_1 a
, t_1 b
, t_2 c
where
c.fk_id_skew = b.id
and a.id = b.id
);

-- 11.2 plan
----------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10003 | Q1,03 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,03 | PCWP | |
|* 5 | HASH JOIN | | Q1,03 | PCWP | |
| 6 | PX RECEIVE | | Q1,03 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
|* 10 | HASH JOIN | | Q1,03 | PCWP | |
| 11 | PX RECEIVE | | Q1,03 | PCWP | |
| 12 | PX SEND HASH | :TQ10001 | Q1,01 | P->P | HASH |
| 13 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 14 | TABLE ACCESS FULL| T_1 | Q1,01 | PCWP | |
| 15 | PX RECEIVE | | Q1,03 | PCWP | |
| 16 | PX SEND HASH | :TQ10002 | Q1,02 | P->P | HASH |
| 17 | PX BLOCK ITERATOR | | Q1,02 | PCWC | |
| 18 | TABLE ACCESS FULL| T_2 | Q1,02 | PCWP | |
----------------------------------------------------------------------------

-- 12.1 plan
-------------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10004 | Q1,04 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,04 | PCWP | |
|* 5 | HASH JOIN | | Q1,04 | PCWP | |
| 6 | PX RECEIVE | | Q1,04 | PCWP | |
| 7 | PX SEND HYBRID HASH | :TQ10002 | Q1,02 | P->P | HYBRID HASH|
| 8 | STATISTICS COLLECTOR | | Q1,02 | PCWC | |
| 9 | PX BLOCK ITERATOR | | Q1,02 | PCWC | |
| 10 | TABLE ACCESS FULL | T_1 | Q1,02 | PCWP | |
| 11 | PX RECEIVE | | Q1,04 | PCWP | |
| 12 | PX SEND HYBRID HASH | :TQ10003 | Q1,03 | P->P | HYBRID HASH|
|* 13 | HASH JOIN BUFFERED | | Q1,03 | PCWP | |
| 14 | PX RECEIVE | | Q1,03 | PCWP | |
| 15 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
| 16 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
| 17 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 18 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
| 19 | PX RECEIVE | | Q1,03 | PCWP | |
| 20 | PX SEND HYBRID HASH (SKEW)| :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 21 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 22 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
-------------------------------------------------------------------------------------
Note that both joins, to A and to C, are based on B.ID. As you can see from the 11.2 plan, the final hash join (operation ID 5) therefore doesn't need the output of the previous hash join (operation ID 10) redistributed, since the data is already distributed in a suitable way (as a consequence both joins will be affected by skewed values in T_2.FK_ID_SKEW, but no BUFFERED join variant is required).

Now look at the 12c plan when SKEW is detected: since the SKEW handling in fact leads to a potential mixture of HASH / BROADCAST and HASH / ROUND-ROBIN distribution, the data gets redistributed again for the final join (operation IDs 11 + 12), which has several bad side effects. First, it adds the overhead of an additional redistribution, which as a side effect turns one of the hash joins into its BUFFERED variant. Second, since the SKEW distribution is (at present) only supported if the right side of the join is a table (and not the result of another join), this following join will actually be affected by the very skew that was just addressed by the special SKEW handling in the previous join (assuming the HYBRID HASH distributions in operation IDs 6+7 / 11+12 operate in HASH / HASH mode, not BROADCAST / ROUND-ROBIN)...
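
For comparison, the OPTIMIZER_FEATURES_ENABLE hint that is commented out in the query above suggests one way of getting back the 11.2 plan shape in 12c. Just a sketch of that variant (same test case setup, and of course losing the special SKEW handling):

select count(t_2_filler) from (
select /*+ monitor
leading(b c a)
use_hash(c a)
swap_join_inputs(a)
no_swap_join_inputs(c)
pq_distribute(a hash hash)
pq_distribute(c hash hash)
optimizer_features_enable('11.2.0.4')
pq_skew(c)
*/
a.id as t_1_id
, a.filler as t_1_filler
, c.id as t_2_id
, c.filler as t_2_filler
from t_1 a
, t_1 b
, t_2 c
where
c.fk_id_skew = b.id
and a.id = b.id
);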

12c Parallel Execution New Features: Hybrid Hash Distribution - Part 1

Mon, 2015-02-16 14:21
In this blog post I want to cover some aspects of the new HYBRID HASH adaptive distribution method that I haven't covered yet in my other posts.

As far as I know it serves two purposes for parallel HASH and MERGE JOINs: adaptive broadcast distribution and hybrid distribution for skewed join expressions. In the first part of this post I want to focus on the former (go to part 2).

1. Adaptive Broadcast Distribution For Small Left Row Sources
It allows the PX SEND / RECEIVE operation for the left (smaller estimated) row source of the hash join to decide dynamically at runtime, actually at each execution, whether it should use a BROADCAST or a HASH distribution, and correspondingly whether the other row source should then use a ROUND-ROBIN or a HASH distribution. This is described for example in the corresponding white paper by Maria Colgan here.

It's important to emphasize that this decision is really made at each execution of the same cursor, so the same cursor can do a BROADCAST distribution for the left row source at one execution and a HASH distribution at another execution, depending on whether the number of rows detected by the STATISTICS COLLECTOR operator exceeds the threshold or not. This is different from the behaviour of "adaptive joins", where the final plan is resolved at first execution and from then on re-used, and therefore a STATISTICS COLLECTOR operator that is part of an adaptive plan will no longer be evaluated after the first execution.

Here is a simple script demonstrating that the distribution method is evaluated at each execution:

define dop = 4

create table t_1
compress
as
select
rownum as id
, rpad('x', 100) as filler
from
(select /*+ cardinality(&dop*2) */ * from dual
connect by
level <= &dop*2) a
;

exec dbms_stats.gather_table_stats(null, 't_1', method_opt=>'for all columns size 1')

create table t_2
compress
as
select
rownum as id
, mod(rownum, &dop) + 1 as fk_id
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a
;

exec dbms_stats.gather_table_stats(null, 't_2', method_opt=>'for all columns size 1')

alter table t_1 parallel &dop cache;

alter table t_2 parallel &dop cache;

select /*+ leading(t_1) no_swap_join_inputs(t_2) pq_distribute(t_2 hash hash) */ max(t_2.id) from t_1, t_2 where t_1.id = t_2.fk_id;

@pqstat

delete from t_1 where rownum <= 1;

select count(*) from t_1;

select /*+ leading(t_1) no_swap_join_inputs(t_2) pq_distribute(t_2 hash hash) */ max(t_2.id) from t_1, t_2 where t_1.id = t_2.fk_id;

@pqstat

rollback;
For table queue 0 (the distribution of T_1), the output for the first execution of the above script looks like this:

TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS % GRAPH
---------- ---------- ---------- -------- ---------- ---------- ----------
0 Producer 1 P004 8 100 ##########
P005 0 0
P006 0 0
P007 0 0
********** ********** ----------
Total 8

Consumer 1 P000 3 38 ##########
P001 1 13 ###
P002 2 25 #######
P003 2 25 #######
********** ********** ----------
Total 8
So the eight rows are distributed, presumably by hash. But for the second execution, with only seven rows in T_1, I get this output:

TQ_ID SERVER_TYP INSTANCE PROCESS NUM_ROWS % GRAPH
---------- ---------- ---------- -------- ---------- ---------- ----------
0 Producer 1 P004 28 100 ##########
P005 0 0
P006 0 0
P007 0 0
********** ********** ----------
Total 28

Consumer 1 P000 7 25 ##########
P001 7 25 ##########
P002 7 25 ##########
P003 7 25 ##########
********** ********** ----------
Total 28
So this time the seven rows were broadcast - each of the four consumers received all seven rows, hence 28 rows in total.

The "pqstat" script is simply a query on V$PQ_TQSTAT, which I've mentioned for example here.

So I ran the same query twice: the first time the threshold is exceeded and a HASH distribution takes place; after deleting one row, the second execution of the same cursor turns into a BROADCAST / ROUND-ROBIN distribution. You can verify that this is the same parent / child cursor via DBMS_XPLAN.DISPLAY_CURSOR / V$SQL. Real-Time SQL Monitoring can also provide more details about the distribution methods used (click on the "binoculars" icon in the "Other" column of the active report for the PX SEND HYBRID HASH operations).
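
A sketch of such a verification (the LIKE pattern and the substitution variable are just placeholders, adapt as needed):

select sql_id, child_number, executions
from v$sql
where sql_text like 'select /*+ leading%max(t_2.id)%';

select * from table(dbms_xplan.display_cursor('&sql_id', 0, 'ALLSTATS LAST'));

Both executions should show up under the same SQL_ID and CHILD_NUMBER with EXECUTIONS = 2.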

Note that the dynamic switch from HASH to BROADCAST unfortunately isn't the same as the optimizer deciding at parse time to use a BROADCAST distribution, because in that case the other row source won't be distributed at all, which comes with some important side effects:

Not only can the redistribution of larger row sources take significant time and resources (CPU, and in case of RAC, network), but due to the limitation of Parallel Execution (still present in 12c) that only a single redistribution is allowed to be active concurrently, reducing the number of redistributions in the plan can as a side effect also reduce the number of BUFFERED operations (mostly HASH JOIN BUFFERED, but possibly additional BUFFER SORTs, too), which are a threat to Parallel Execution performance in general.

Here is a very simple example showing the difference:


-- HYBRID HASH with possible BROADCAST distribution of T_1
----------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | Q1,02 | P->S | QC (RAND) |
|* 3 | HASH JOIN BUFFERED | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | Q1,02 | PCWP | |
| 5 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
| 6 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
| 7 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 8 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
| 9 | PX RECEIVE | | Q1,02 | PCWP | |
| 10 | PX SEND HYBRID HASH | :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 11 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 12 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
----------------------------------------------------------------------------

-- TRUE BROADCAST of T_1
-------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | Q1,01 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | Q1,01 | PCWP | |
| 4 | PX RECEIVE | | Q1,01 | PCWP | |
| 5 | PX SEND BROADCAST | :TQ10000 | Q1,00 | P->P | BROADCAST |
| 6 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | Q1,00 | PCWP | |
| 8 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 9 | TABLE ACCESS FULL | T_2 | Q1,01 | PCWP | |
-------------------------------------------------------------------------
So even if in the first plan the T_1 row source really has fewer than 2*DOP rows and the HYBRID HASH distribution turns into a BROADCAST distribution, this doesn't change the overall plan shape generated by the optimizer. The second HYBRID HASH distribution won't be skipped and will turn into a ROUND-ROBIN distribution instead, which can be confirmed by looking at the output from V$PQ_TQSTAT, for example. So the data of the second row source still needs to be distributed, and hence the HASH JOIN will operate as a BUFFERED join due to the plan shape and the limitation that only a single PX SEND / RECEIVE pair can be active at the same time.

In the second plan the BROADCAST distribution of T_1 means that T_2 will not be re-distributed, hence there is no need to operate the HASH JOIN in buffered mode.

So the only purpose of this particular adaptive HYBRID HASH distribution is obviously to avoid skew if there are only a couple of rows (and hence possible join key values) in the left row source, because a HASH distribution based on such a low number of distinct values doesn't work well. Oracle's algorithm needs a certain number of distinct values, otherwise it can end up with a bad distribution. This probably also explains why the threshold of 2*DOP was chosen so low.

Exadata & In-Memory Real World Performance Article (German)

Wed, 2015-02-11 08:49
Today a new article of mine was published on "informatik-aktuell.de". It is about the analysis of a case at one of my clients that did not achieve the expected performance on Exadata.

The article analyses different query profiles and explains how these different profiles influence the special features of Exadata and In-Memory.

Part 1 of the article
Part 2 of the article

Video Tutorial: XPLAN_ASH Active Session History - Part 3

Mon, 2015-02-09 15:04
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality has been published, continuing the actual walk-through of the script output.

More parts to follow.

Parallel Execution 12c New Features Overview

Wed, 2015-02-04 07:12
Oracle 12c is the first release in several years that adds significant new functionality in the area of Parallel Execution operators, plan shapes and runtime features. Although 11gR2 added the new Auto DOP feature along with In-Memory Parallel Execution and Statement Queueing, the 12c features are more significant because they introduce new operators that can change both execution plan shape and runtime behaviour.

Here is a list of new features that are worth noting (and not necessarily mentioned in the official documentation and white papers by Oracle):

- The new HYBRID HASH adaptive distribution method, that serves two purposes for parallel HASH and MERGE JOINs:

First, it allows the PX SEND / RECEIVE operation for the left (smaller estimated) row source of the hash join to decide dynamically at runtime, actually for each execution, whether it should use a BROADCAST or a HASH distribution, and correspondingly whether the other row source should then use a ROUND-ROBIN or a HASH distribution. This is described for example in the corresponding white paper by Maria Colgan here.

Second, to some degree it allows addressing data distribution skew in case of HASH distributions (and only for parallel hash joins, not merge joins), which I've already described in detail in the past. A summary post that links to all other relevant articles regarding Parallel Execution Skew can be found here, an overview of the relevant feature can be found here and a detailed description can be found here.

I'll cover some aspects of this adaptive distribution that I haven't mentioned in the existing articles in a separate post.

- The new concurrent UNION ALL operator. This is officially documented here. It comes with a new PX SELECTOR operator, a generic mechanism to pick one of the available PX slaves to perform the child operations of this operator. Since the official documentation leaves a lot of details unclear about how this concurrent operation will actually behave at run time, I'll cover some examples with runtime profiles in a separate post.

- The new PQ_REPLICATE feature, which for simple parallel FULL TABLE SCAN row sources (I haven't tested yet whether a parallel INDEX FAST FULL SCAN is eligible, too, but I assume so) can decide to run the scan entirely in each PX slave instead of running a distributed scan across the PX slaves in granules and distributing by BROADCAST afterwards. It's not entirely clear to me why this was implemented. Although it reduces the number of redistributions, and in some cases where no other parallel redistributions are required can reduce the number of parallel slave sets to one instead of two, BROADCAST distributions are typically used for smaller row sources, so eliminating this distribution doesn't sound like a huge improvement to justify the development effort. Jonathan Lewis describes the feature here along with some of his ideas why this feature might be useful.

- The new parallel FILTER operator, an important and potentially huge improvement over the previously only available serial FILTER operator. In the past, when a FILTER subquery was part of a parallel plan, the data of the "driving" row source of the FILTER (the first child operation) had to be passed to the Query Coordinator, and only then could the second to nth children be executed as many times as indicated by the first row source (and depending on the efficiency of filter/subquery caching). Now the FILTER operator can run in the PX slaves and there are a number of distribution variants possible with this new parallel operator. I'll cover that in a separate post.

- The new PX SELECTOR operator that I already mentioned above as part of the new concurrent UNION ALL operator. As described above, the generic functionality of this operator is to pick one of the available PX slaves to perform the child operations of this operator. It will be used in 12c for serial access operators that are part of a parallel plan (like a serial table or index scan). In the past these parts were performed by the Query Coordinator itself, but now one slave out of a slave set will be selected to perform such operations. This has a number of implications and I'll cover that in a separate post.

- The new 1 SLAVE distribution method, which is a bit similar to the PX SELECTOR operator in that it uses just one slave of the slave set, but gets used for serial parts of the execution plan when the data is redistributed from a parallel part of the plan to a part that needs to run serially because Oracle cannot parallelize the functionality, for example when evaluating ROWNUM or certain analytic function variants (such as LAG or LEAD with no partition clause). This new 1 SLAVE distribution seems to have two purposes: first, avoid activity of the query coordinator (like the PX SELECTOR above), and second, avoid the decomposition of the parallel plan into multiple DFO trees. I'll cover that in a separate post.

- 12c also changes the way some operations in the plan are marked as PARALLEL or not, which in my opinion can be pretty confusing (and partly inconsistent with the runtime behaviour) when just looking at the execution plan, since the runtime activity might look different from what the execution plan suggests. I'll cover that in a separate post, and it will also be picked up in the context of the other new functionality mentioned above as appropriate.

There is probably more that I haven't come across yet, but as you can see from the number of times I've mentioned "separate post" in this overview this is already enough material for a whole series of posts to follow.

Webinar Followup

Tue, 2015-02-03 06:47
Thanks to everyone who attended my recent webinar at AllThingsOracle.com.

The link to the webinar recording can be found here.

The presentation PDF can be downloaded here. Note that this site uses a non-default HTTP port, so if you're behind a firewall this might be blocked.

Thanks again to AllThingsOracle.com and Amy Burrows for hosting the event.

Video Tutorial: XPLAN_ASH Active Session History - Part 2

Thu, 2015-01-22 13:45
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality has been published. In this part I begin the actual walk-through of the script output.

More parts to follow.


New Version Of XPLAN_ASH Utility - In-Memory Support

Thu, 2015-01-22 13:42
A new version 4.21 of the XPLAN_ASH utility is available for download. I'm publishing this version because it will be used in the recent video tutorials explaining the Active Session History functionality of the script.

As usual the latest version can be downloaded here.

This is mainly a maintenance release that fixes some incompatibilities of the 4.2 version with less recent versions (10.2 and 11.2.0.1).

As an extra however, this version now differentiates between general CPU usage and in-memory CPU usage (similar to 12.1.0.2 Real-Time SQL Monitoring). This is not done in all possible sections of the output yet, but the most important ones are already covered.

So if you already use the 12.1.0.2 in-memory option this might be helpful to understand how much of your CPU time is spent on in-memory operations vs. non in-memory. Depending on your query profile you might be surprised by the results.

Here are the notes from the change log:

 - Forgot to address a minor issue where the SET_COUNT determined per DFO_TREE (either one or two slave sets) is incorrect in the special case of DFO trees having only S->P distributions (pre-12c style). Previous versions used a SET_COUNT of 2 in such a case which is incorrect, since there is only one slave set. 12c changes this behaviour with the new PX SELECTOR operator and requires again two sets.

- For RAC Cross Instance Parallel Execution specific output some formatting and readability was improved (more linebreaks etc.)

- Minor SQL issue fixed in "SQL statement execution ASH Summary" that prevented execution in 10.2 (ORA-32035)

- The NO_STATEMENT_QUEUING hint prevented the "OPTIMIZER_FEATURES_ENABLE" hint from being recognized, therefore some queries failed in 11.2.0.1 again with ORA-03113. Fixed

- "ON CPU" now distinguishes between "ON CPU INMEMORY" and "ON CPU" for in-memory scans

Free Webinar "Oracle Exadata & In-Memory Real-World Performance"

Fri, 2015-01-16 16:38
It's webinar time again.

Join me on Wednesday, January 28th at AllThingsOracle.com for a session based on a real world customer experience.

The session starts at 3pm UK (16:00 Central European) time. The webinar is totally free and the recording will be made available afterwards.

Here's the link to the official landing page where you can register and below is the official abstract:
Abstract: After a short introduction to what the Oracle Exadata Database Machine is, in this one-hour webinar I will look at an analysis of different database query profiles based on a real-world customer case, how these different profiles influence the efficiency of Exadata’s “secret sauce” features, as well as the new Oracle In-Memory column store option. Based on the analysis, different optimization strategies are presented along with lessons learned.

Video Tutorial: XPLAN_ASH Active Session History - Introduction

Sun, 2015-01-11 16:38
I finally got around to preparing another part of the XPLAN_ASH video tutorial.

This part is about the main functionality of XPLAN_ASH: SQL statement execution analysis using Active Session History and Real-Time SQL Monitoring.

In this video tutorial I'll explain what the output of XPLAN_ASH is supposed to mean when using the Active Session History functionality of the script. Before diving into the details of the script output using sample reports, this part provides an overview and introduction that hopefully makes it simpler to understand how the output is organized and what it is supposed to mean.

This is the initial, general introduction part. More parts to follow.

"SELECT * FROM TABLE" Runs Out Of TEMP Space

Thu, 2015-01-08 12:49
Now that I've shown in the previous post that in general Parallel Execution plans sometimes might end up with unnecessary BUFFER SORT operations, let's have a look at what particular side effects this can have.

What would you say if someone told you that they just ran a simple, straightforward "SELECT * FROM TABLE" that took several minutes to execute without returning any rows, only to then error out with "ORA-01652 unable to extend temp segment", and that the TABLE in question is actually nothing but a simple, partitioned heap table - no special tricks, no views, synonyms, VPD etc. involved, really just a plain simple table?

Some time ago I was confronted with exactly such a case at a client. Of course, the first question is why anyone would run a plain SELECT * FROM TABLE, but nowadays, with power users and developers using GUI-based tools like TOAD or SQL Developer, this is probably the GUI equivalent of a table describe command. Since these tools by default show the results in a grid that only fetches the first n rows, this typically isn't really a threat even for large tables, apart from the common problem of allocated PX servers when the table is queried using Parallel Execution and the users simply keep the grid/cursor open, preventing the PX servers from being re-used for different executions.

But have a look at the following output, in this case taken from 12.1.0.2, assuming the partitioned table T_PART in question is marked parallel, resides on Exadata and has many partitions compressed via HCC that uncompressed represent several TB of data (11.2.0.4 on Exadata produces a similar plan):


SQL> explain plan for
2 select * from t_part p;

Explained.

SQL>
SQL> select * from table(dbms_xplan.display(format => 'BASIC PARTITION PARALLEL'));
Plan hash value: 2545275170

------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | | |
| 1 | PX COORDINATOR | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | | | Q1,02 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | | | Q1,02 | PCWP | |
| 4 | VIEW | VW_TE_2 | | | Q1,02 | PCWP | |
| 5 | UNION-ALL | | | | Q1,02 | PCWP | |
| 6 | CONCATENATION | | | | Q1,02 | PCWP | |
| 7 | BUFFER SORT | | | | Q1,02 | PCWC | |
| 8 | PX RECEIVE | | | | Q1,02 | PCWP | |
| 9 | PX SEND ROUND-ROBIN | :TQ10000 | | | | S->P | RND-ROBIN |
| 10 | BUFFER SORT | | | | | | |
| 11 | PARTITION RANGE SINGLE | | 2 | 2 | | | |
| 12 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 2 | 2 | | | |
|* 13 | INDEX RANGE SCAN | T_PART_IDX | 2 | 2 | | | |
| 14 | BUFFER SORT | | | | Q1,02 | PCWC | |
| 15 | PX RECEIVE | | | | Q1,02 | PCWP | |
| 16 | PX SEND ROUND-ROBIN | :TQ10001 | | | | S->P | RND-ROBIN |
| 17 | BUFFER SORT | | | | | | |
| 18 | PARTITION RANGE SINGLE | | 4 | 4 | | | |
| 19 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 4 | 4 | | | |
|* 20 | INDEX RANGE SCAN | T_PART_IDX | 4 | 4 | | | |
| 21 | PX BLOCK ITERATOR | | 6 | 20 | Q1,02 | PCWC | |
|* 22 | TABLE ACCESS FULL | T_PART | 6 | 20 | Q1,02 | PCWP | |
| 23 | PX BLOCK ITERATOR | |KEY(OR)|KEY(OR)| Q1,02 | PCWC | |
|* 24 | TABLE ACCESS FULL | T_PART |KEY(OR)|KEY(OR)| Q1,02 | PCWP | |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

13 - access("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
20 - access("P"."DT">=TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
filter(LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss')))
22 - filter("P"."DT">=TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2020-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND (LNNVL("P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2003-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
24 - filter("P"."DT"<TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Can you spot the problem? It's again the "unnecessary BUFFER SORTS" problem introduced in the previous post. In particular the BUFFER SORT at operation ID = 3 is "deadly" if the table T_PART is large, because it needs to buffer the whole table data before any row will be returned to the client. This explains why this simple SELECT * FROM T_PART will potentially run out of TEMP space, assuming the uncompressed table data is larger than the available TEMP space. Even if it doesn't run out of TEMP space it will be a totally inefficient operation, copying all table data to PGA (unlikely to be sufficient) or TEMP before returning any rows to the client.

But why does a simple SELECT * FROM TABLE come up with such an execution plan? A hint is the VW_TE_2 alias shown in the NAME column of the plan: it's the result of the "table expansion" transformation introduced in 11.2, which allows marking some partitions' local index partitions as unusable while still making use of the usable index partitions of other partitions. It takes a bit of effort to bring the table into a state where such a plan will be produced for a plain SELECT * FROM TABLE, but as you can see, it is possible. And as you can see from the CONCATENATION operation in the plan, the transformed query produced by the "table expansion" then triggered another transformation, the "concatenation" transformation mentioned in the previous post, which results in the addition of unnecessary BUFFER SORT operations when combined with Parallel Execution.

Here is a manual rewrite that corresponds to the query resulting from both the "table expansion" and the "concatenation" transformations:

select * from (
select /*+ opt_param('_optimizer_table_expansion', 'false') */ * from t_part p where
("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
union all
select * from t_part p where
("P"."DT">=TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
and
(LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss')))
union all
select * from t_part p where
("P"."DT">=TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2020-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND (LNNVL("P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2003-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE('
2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
)
union all
select * from t_part p where
("P"."DT"<TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
;

But if you run an EXPLAIN PLAN on the above manual rewrite, 12.1.0.2 produces the following simple and elegant plan:

--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Pstart| Pstop | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | | |
| 1 | PX COORDINATOR | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10000 | | | Q1,00 | P->S | QC (RAND) |
| 3 | UNION-ALL | | | | Q1,00 | PCWP | |
| 4 | VIEW | | | | Q1,00 | PCWP | |
| 5 | UNION-ALL | | | | Q1,00 | PCWP | |
| 6 | PX SELECTOR | | | | Q1,00 | PCWP | |
| 7 | PARTITION RANGE SINGLE | | 2 | 2 | Q1,00 | PCWP | |
| 8 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 2 | 2 | Q1,00 | PCWP | |
|* 9 | INDEX RANGE SCAN | T_PART_IDX | 2 | 2 | Q1,00 | PCWP | |
| 10 | PX SELECTOR | | | | Q1,00 | PCWP | |
| 11 | PARTITION RANGE SINGLE | | 4 | 4 | Q1,00 | PCWP | |
| 12 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T_PART | 4 | 4 | Q1,00 | PCWP | |
|* 13 | INDEX RANGE SCAN | T_PART_IDX | 4 | 4 | Q1,00 | PCWP | |
| 14 | PX BLOCK ITERATOR | | 6 | 20 | Q1,00 | PCWC | |
|* 15 | TABLE ACCESS FULL | T_PART | 6 | 20 | Q1,00 | PCWP | |
| 16 | PX BLOCK ITERATOR | |KEY(OR)|KEY(OR)| Q1,00 | PCWC | |
|* 17 | TABLE ACCESS FULL | T_PART |KEY(OR)|KEY(OR)| Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

9 - access("P"."DT">=TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
13 - access("P"."DT">=TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
filter(LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss')))
15 - filter((LNNVL("P"."DT"<TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2003-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))) AND (LNNVL("P"."DT"<TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR LNNVL("P"."DT">=TO_DATE(' 2001-01-01
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))))
17 - filter("P"."DT"<TO_DATE(' 2005-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2004-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2001-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') OR "P"."DT"<TO_DATE(' 2003-01-01 00:00:00', 'syyyy-mm-dd
hh24:mi:ss') AND "P"."DT">=TO_DATE(' 2002-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

I've disabled the "table expansion" transformation in this case, because it kicks in again when optimizing this query and just adds some harmless (and useless) branches to the plan that confuse the issue. Without those additional, useless branches it is very similar to the above plan, but without any BUFFER SORT operations, hence it doesn't cause any overhead and should return the first rows rather quickly, no matter how large the table is.

The 11.2.0.4 optimizer unfortunately again adds unnecessary BUFFER SORT operations even to the manual rewrite above, so as mentioned in the previous post the problem of those spurious BUFFER SORTs isn't limited to the CONCATENATION transformation.

Of course, since all this is related to Parallel Execution, a simple workaround to the problem is to run the SELECT * FROM TABLE using a NO_PARALLEL hint, and all those strange side effects of BUFFER SORTS will be gone. And not having unusable local indexes will also prevent the problem, because then the "table expansion" transformation won't kick in.
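
A minimal sketch of that first workaround, using the alias P as in the statements above:

select /*+ no_parallel(p) */ * from t_part p;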

Interestingly, if the optimizer is told about the true intention of initially fetching only the first n rows from the SELECT * FROM TABLE - for example simply by adding a corresponding FIRST_ROWS(n) hint - at least in my tests using 12.1.0.2 all the complex transformations were rejected and a plain (parallel) FULL TABLE SCAN was preferred instead, simply because it is now differently costed, which would allow working around the problem, too.
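
Again just a sketch of that variant - the value passed to FIRST_ROWS is an arbitrary assumption:

select /*+ first_rows(100) */ * from t_part p;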

If you want to reproduce the issue, here's a sample table definition, along with some comments on what I had to do to bring it into the state required to reproduce:

-- The following things have to come together to turn a simple SELECT * from partitioned table into a complex execution plan
-- including Table Expansion and Concatenation:
--
-- - Unusable index partitions to trigger Table Expansion
-- - Partitions with usable indexes that are surrounded by partitions with unusable indexes
-- - And such a partition needs to have an index access path that is cheaper than a corresponding FTS, typically by deleting the vast majority of rows without resetting the HWM
-- - All this also needs to be reflected properly in the statistics
--
-- If this scenario is combined with Parallel Execution, the "Parallel Concatenation" bug that plasters the plan with superfluous BUFFER SORTs means
-- that the whole table has to be kept in memory / TEMP space when running SELECT * from the table, because the bug adds, among many other BUFFER SORTs,
-- one deadly BUFFER SORT at top level before returning data to the coordinator, typically operation ID = 3
--
create table t_part (dt not null, id not null, filler)
partition by range (dt)
(
partition p_1 values less than (date '2001-01-01'),
partition p_2 values less than (date '2002-01-01'),
partition p_3 values less than (date '2003-01-01'),
partition p_4 values less than (date '2004-01-01'),
partition p_5 values less than (date '2005-01-01'),
partition p_6 values less than (date '2006-01-01'),
partition p_7 values less than (date '2007-01-01'),
partition p_8 values less than (date '2008-01-01'),
partition p_9 values less than (date '2009-01-01'),
partition p_10 values less than (date '2010-01-01'),
partition p_11 values less than (date '2011-01-01'),
partition p_12 values less than (date '2012-01-01'),
partition p_13 values less than (date '2013-01-01'),
partition p_14 values less than (date '2014-01-01'),
partition p_15 values less than (date '2015-01-01'),
partition p_16 values less than (date '2016-01-01'),
partition p_17 values less than (date '2017-01-01'),
partition p_18 values less than (date '2018-01-01'),
partition p_19 values less than (date '2019-01-01'),
partition p_20 values less than (date '2020-01-01')
)
as
with generator as
(
select /*+ cardinality(1000) */ rownum as id, rpad('x', 100) as filler from dual connect by level <= 1e3
)
select
add_months(date '2000-01-01', trunc(
case
when id >= 300000 and id < 700000 then id + 100000
when id >= 700000 then id + 200000
else id
end / 100000) * 12) as dt
, id
, filler
from (
select
(a.id + (b.id - 1) * 1e3) - 1 + 100000 as id
, rpad('x', 100) as filler
from
generator a,
generator b
)
;

delete from t_part partition (p_2);

commit;

exec dbms_stats.gather_table_stats(null, 't_part')

create unique index t_part_idx on t_part (dt, id) local;

alter index t_part_idx modify partition p_1 unusable;

alter index t_part_idx modify partition p_3 unusable;

alter index t_part_idx modify partition p_5 unusable;

alter table t_part parallel;

alter index t_part_idx parallel;

set echo on pagesize 0 linesize 200

explain plan for
select * from t_part p;

select * from table(dbms_xplan.display(format => 'BASIC PARTITION PARALLEL'));

Unnecessary BUFFER SORT Operations - Parallel Concatenation Transformation

Mon, 2015-01-05 15:47
When using Parallel Execution, depending on the plan shape and the operations used, Oracle sometimes needs to turn non-blocking operations into blocking operations, which means in this case that the row source no longer passes its output data directly to the parent operation but buffers some data temporarily in PGA memory / TEMP. This is either accomplished via the special HASH JOIN BUFFERED operation, or simply by adding BUFFER SORT operations to the plan.

The reason for such a behaviour in parallel plans is the limitation of Oracle Parallel Execution that allows only a single data redistribution to be active concurrently. You can read more about that here.

However, sometimes the optimizer adds unnecessary BUFFER SORT operations to parallel execution plans, and one of the most obvious examples is when the so called "concatenation" query transformation is applied by the optimizer and Parallel Execution is involved.

UPDATE Please note: As mentioned below by Martin (thanks) what I call here "concatenation transformation" typically is called "OR expansion transformation" in CBO speak, and this term probably much better describes what the transformation is about. So whenever I wrote here "concatenation transformation" this can be substituted with "OR expansion transformation".

To understand the issue, first of all, what is the concatenation transformation about?

Whenever there are predicates combined with OR there is the possibility of rewriting the different conditions as separate queries unioned together.

In order to ensure that the result of the rewritten query doesn't contain any unwanted duplicates, the different branches of the UNIONed statement need to filter out any data fulfilling the conditions of previous branches - this is probably where the (at first sight) odd (and meanwhile documented) LNNVL function originally came into existence.
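
As a quick illustration of the LNNVL semantics (a simple sketch, not part of the original test case): LNNVL(condition) evaluates to TRUE if the condition is FALSE or UNKNOWN and to FALSE only if the condition is TRUE, which is exactly what is needed to exclude rows already covered by a previous branch without losing rows where that condition evaluates to UNKNOWN due to NULLs:

select 'kept' from dual where lnnvl(1 = 2);                -- condition FALSE: row returned
select 'kept' from dual where lnnvl(to_number(null) = 2);  -- condition UNKNOWN: row returned
select 'dropped' from dual where lnnvl(1 = 1);             -- condition TRUE: no row returned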

The predicates can either be single-table filters, where the concatenation might open up different access paths to the same table (like different indexes), or predicates combining multiple tables, like joins or subqueries.

Here is a short example of the latter (the parallel hints are commented out but are used in the further examples to demonstrate the issue with Parallel Execution) - using version 12.1.0.2:


select
max(id)
from
(
select /* parallel(t1 8) parallel(t2 8) */
t2.*
from
t1
, t2
where
(t1.id = t2.id or t1.id = t2.id2)
);
In this example the join condition using an OR prevents any efficient join method between T1 and T2 when not re-writing the statement - Oracle can only resort to a NESTED LOOP join with a repeated full table scan of one of the tables, which is reflected in a rather high estimated cost:


----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | 2177M (2)| 23:37:34 |
| 1 | SORT AGGREGATE | | 1 | 16 | | |
| 2 | NESTED LOOPS | | 3999K| 61M| 2177M (2)| 23:37:34 |
| 3 | TABLE ACCESS FULL| T2 | 2000K| 19M| 1087 (1)| 00:00:01 |
|* 4 | TABLE ACCESS FULL| T1 | 2 | 12 | 1089 (2)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("T1"."ID"="T2"."ID" OR "T1"."ID"="T2"."ID2")
The same statement could be expressed by the following manual rewrite:


select max(id) from (
select /* parallel(t1 8) parallel(t2 8) */
t2.*
from
t1
, t2
where
t1.id = t2.id2
---------
union all
---------
select /* parallel(t1 8) parallel(t2 8) */
t2.*
from
t1
, t2
where
t1.id = t2.id
and lnnvl(t1.id = t2.id2)
);
Notice the LNNVL function in the second branch of the UNION ALL that filters out any rows fulfilling the condition used in the first branch.

Also note that using UNION instead of UNION ALL plus LNNVL(s) to filter out the unwanted duplicates would potentially be incorrect, too, as each query branch might produce duplicate rows that need to be retained because they are also part of the original query result.
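
A tiny sketch (hypothetical data, not the T1 / T2 tables from the footnote) showing why a UNION-based rewrite would be wrong: if T2 contains two identical rows that both join to T1, the original OR join returns both of them, whereas UNION would collapse them into a single row:

with t1 as (select 1 as id from dual),
t2 as (select 1 as id, 2 as id2 from dual
union all
select 1 as id, 2 as id2 from dual)
select t2.* from t1, t2
where t1.id = t2.id or t1.id = t2.id2;
-- returns two identical rows, as the original query should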

At the expense of visiting the tables multiple times we now get at least efficient join methods in each branch (and hence a significantly lower cost estimate):


--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | | 11945 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 13 | | | |
| 2 | VIEW | | 2100K| 26M| | 11945 (1)| 00:00:01 |
| 3 | UNION-ALL | | | | | | |
|* 4 | HASH JOIN | | 2000K| 30M| 34M| 5972 (1)| 00:00:01 |
| 5 | TABLE ACCESS FULL| T1 | 2000K| 11M| | 1086 (1)| 00:00:01 |
| 6 | TABLE ACCESS FULL| T2 | 2000K| 19M| | 1087 (1)| 00:00:01 |
|* 7 | HASH JOIN | | 100K| 1562K| 34M| 5972 (1)| 00:00:01 |
| 8 | TABLE ACCESS FULL| T1 | 2000K| 11M| | 1086 (1)| 00:00:01 |
| 9 | TABLE ACCESS FULL| T2 | 2000K| 19M| | 1087 (1)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("T1"."ID"="T2"."ID2")
7 - access("T1"."ID"="T2"."ID")
filter(LNNVL("T1"."ID"="T2"."ID2"))
And in fact, when not preventing the concatenation transformation (NO_EXPAND hint), the optimizer comes up with the following execution plan for the original statement:


-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | | 11945 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 16 | | | |
| 2 | CONCATENATION | | | | | | |
|* 3 | HASH JOIN | | 2000K| 30M| 34M| 5972 (1)| 00:00:01 |
| 4 | TABLE ACCESS FULL| T1 | 2000K| 11M| | 1086 (1)| 00:00:01 |
| 5 | TABLE ACCESS FULL| T2 | 2000K| 19M| | 1087 (1)| 00:00:01 |
|* 6 | HASH JOIN | | 100K| 1562K| 34M| 5972 (1)| 00:00:01 |
| 7 | TABLE ACCESS FULL| T1 | 2000K| 11M| | 1086 (1)| 00:00:01 |
| 8 | TABLE ACCESS FULL| T2 | 2000K| 19M| | 1087 (1)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("T1"."ID"="T2"."ID2")
6 - access("T1"."ID"="T2"."ID")
filter(LNNVL("T1"."ID"="T2"."ID2"))
The only difference between those two plans for the manual and automatic rewrite is the CONCATENATION operator instead of UNION ALL, and that the subquery isn't merged in case of the UNION ALL (additional VIEW operator).

So far everything works as expected and you have seen the effect and rationale of the concatenation transformation.

If we now run the original statement using Parallel Execution (turn the comments into hints), the resulting execution plans show various inefficiencies, depending on the exact version used.

For reference, this is the parallel execution plan I get from 12.1.0.2 when using above manual rewrite:


------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 13 | 606 (2)| 00:00:01 | | | |
| 1 | SORT AGGREGATE | | 1 | 13 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10004 | 1 | 13 | | | Q1,04 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 13 | | | Q1,04 | PCWP | |
| 5 | VIEW | | 2100K| 26M| 606 (2)| 00:00:01 | Q1,04 | PCWP | |
| 6 | UNION-ALL | | | | | | Q1,04 | PCWP | |
|* 7 | HASH JOIN | | 2000K| 30M| 303 (2)| 00:00:01 | Q1,04 | PCWP | |
| 8 | PX RECEIVE | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,04 | PCWP | |
| 9 | PX SEND HYBRID HASH | :TQ10000 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | P->P | HYBRID HASH|
| 10 | STATISTICS COLLECTOR | | | | | | Q1,00 | PCWC | |
| 11 | PX BLOCK ITERATOR | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | PCWC | |
| 12 | TABLE ACCESS FULL | T1 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | PCWP | |
| 13 | PX RECEIVE | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,04 | PCWP | |
| 14 | PX SEND HYBRID HASH | :TQ10001 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | P->P | HYBRID HASH|
| 15 | PX BLOCK ITERATOR | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | PCWC | |
| 16 | TABLE ACCESS FULL | T2 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | PCWP | |
|* 17 | HASH JOIN | | 100K| 1562K| 303 (2)| 00:00:01 | Q1,04 | PCWP | |
| 18 | PX RECEIVE | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,04 | PCWP | |
| 19 | PX SEND HYBRID HASH | :TQ10002 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,02 | P->P | HYBRID HASH|
| 20 | STATISTICS COLLECTOR | | | | | | Q1,02 | PCWC | |
| 21 | PX BLOCK ITERATOR | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,02 | PCWC | |
| 22 | TABLE ACCESS FULL | T1 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,02 | PCWP | |
| 23 | PX RECEIVE | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,04 | PCWP | |
| 24 | PX SEND HYBRID HASH | :TQ10003 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,03 | P->P | HYBRID HASH|
| 25 | PX BLOCK ITERATOR | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,03 | PCWC | |
| 26 | TABLE ACCESS FULL | T2 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,03 | PCWP | |
------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

7 - access("T1"."ID"="T2"."ID2")
17 - access("T1"."ID"="T2"."ID")
filter(LNNVL("T1"."ID"="T2"."ID2"))
This is a pretty straightforward parallel plan, with the only possibly notable exception of the new 12c "HYBRID HASH" distribution feature being used.

Now let's have a look at the resulting execution plan when the concatenation transformation gets used:


------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 16 | 606 (2)| 00:00:01 | | | |
| 1 | SORT AGGREGATE | | 1 | 16 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ20003 | 1 | 16 | | | Q2,03 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 16 | | | Q2,03 | PCWP | |
| 5 | CONCATENATION | | | | | | Q2,03 | PCWP | |
|* 6 | HASH JOIN | | 2000K| 30M| 303 (2)| 00:00:01 | Q2,03 | PCWP | |
| 7 | PX RECEIVE | | 2000K| 11M| 151 (1)| 00:00:01 | Q2,03 | PCWP | |
| 8 | PX SEND HYBRID HASH | :TQ20001 | 2000K| 11M| 151 (1)| 00:00:01 | Q2,01 | P->P | HYBRID HASH|
| 9 | STATISTICS COLLECTOR | | | | | | Q2,01 | PCWC | |
| 10 | BUFFER SORT | | 1 | 16 | | | Q2,01 | PCWP | |
| 11 | PX BLOCK ITERATOR | | 2000K| 11M| 151 (1)| 00:00:01 | Q2,01 | PCWC | |
| 12 | TABLE ACCESS FULL | T1 | 2000K| 11M| 151 (1)| 00:00:01 | Q2,01 | PCWP | |
| 13 | PX RECEIVE | | 2000K| 19M| 151 (1)| 00:00:01 | Q2,03 | PCWP | |
| 14 | PX SEND HYBRID HASH | :TQ20002 | 2000K| 19M| 151 (1)| 00:00:01 | Q2,02 | P->P | HYBRID HASH|
| 15 | BUFFER SORT | | 1 | 16 | | | Q2,02 | PCWP | |
| 16 | PX BLOCK ITERATOR | | 2000K| 19M| 151 (1)| 00:00:01 | Q2,02 | PCWC | |
| 17 | TABLE ACCESS FULL | T2 | 2000K| 19M| 151 (1)| 00:00:01 | Q2,02 | PCWP | |
| 18 | BUFFER SORT | | | | | | Q2,03 | PCWC | |
| 19 | PX RECEIVE | | 100K| 1562K| 303 (2)| 00:00:01 | Q2,03 | PCWP | |
| 20 | PX SEND ROUND-ROBIN | :TQ20000 | 100K| 1562K| 303 (2)| 00:00:01 | | S->P | RND-ROBIN |
| 21 | BUFFER SORT | | 1 | 16 | | | | | |
| 22 | PX COORDINATOR | | | | | | | | |
| 23 | PX SEND QC (RANDOM) | :TQ10002 | 100K| 1562K| 303 (2)| 00:00:01 | Q1,02 | P->S | QC (RAND) |
| 24 | BUFFER SORT | | 1 | 16 | | | Q1,02 | PCWP | |
|* 25 | HASH JOIN BUFFERED | | 100K| 1562K| 303 (2)| 00:00:01 | Q1,02 | PCWP | |
| 26 | PX RECEIVE | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,02 | PCWP | |
| 27 | PX SEND HYBRID HASH | :TQ10000 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | P->P | HYBRID HASH|
| 28 | STATISTICS COLLECTOR | | | | | | Q1,00 | PCWC | |
| 29 | BUFFER SORT | | 1 | 16 | | | Q1,00 | PCWP | |
| 30 | PX BLOCK ITERATOR | | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | PCWC | |
| 31 | TABLE ACCESS FULL | T1 | 2000K| 11M| 151 (1)| 00:00:01 | Q1,00 | PCWP | |
| 32 | PX RECEIVE | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,02 | PCWP | |
| 33 | PX SEND HYBRID HASH | :TQ10001 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | P->P | HYBRID HASH|
| 34 | BUFFER SORT | | 1 | 16 | | | Q1,01 | PCWP | |
| 35 | PX BLOCK ITERATOR | | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | PCWC | |
| 36 | TABLE ACCESS FULL | T2 | 2000K| 19M| 151 (1)| 00:00:01 | Q1,01 | PCWP | |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access("T1"."ID"="T2"."ID2")
25 - access("T1"."ID"="T2"."ID")
filter(LNNVL("T1"."ID"="T2"."ID2"))
This looks a bit weird, and when comparing it to the plan obtained from the manual rewrite, it shows the following unnecessary differences:

- There are various BUFFER SORT operations that don't make a lot of sense, for example each parallel table scan is followed by a BUFFER SORT operation, and even the HASH JOIN BUFFERED in the lower part of the plan is followed by a BUFFER SORT (double buffering?)

- The plan is decomposed into two so-called DFO trees, which you can see for example from the two PX COORDINATOR operators (operation IDs 2 and 22), which adds another unnecessary serial execution part to the plan and can have additional side effects I explain in one of my video tutorials.

This means that such execution plan shapes will possibly have a much higher demand for PGA memory than necessary (each BUFFER SORT operation will attempt to keep the data produced by its child row source in PGA), and might also cause additional I/O to and from TEMP. Since the PGA memory consumed by one session also influences the Auto PGA allocation of other sessions, such executions not only affect the particular SQL execution in question but also any other concurrent executions allocating PGA memory.
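
A hedged sketch (the exact column set may vary by version) of how the memory and TEMP consumption of such BUFFER SORT / HASH JOIN BUFFERED operations could be monitored while the statement is running:

select
qcsid
, sid
, operation_type
, operation_id
, round(actual_mem_used / 1048576) as mem_mb
, round(tempseg_size / 1048576) as temp_mb
from
v$sql_workarea_active
order by
qcsid
, operation_id
;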

Depending on the amount of data to be buffered BUFFER SORT operations closer to the root of the execution plan are more likely to have significant impact performance-wise, as they might have to buffer large amounts of data.

One very obvious sign of inefficiency are double BUFFERing operations, like a HASH JOIN BUFFERED followed by a BUFFER SORT as parent operation, which you can spot in the sample plan shown above.

Another interesting point is that the parallel plans differ from point release to point release and show different levels of inefficiencies, for example, 10.2.0.5, 11.1.0.7 and 11.2.0.1 produce different plans than 11.2.0.2, which is again different from what 11.2.0.3 & 11.2.0.4 produce - and using OPTIMIZER_FEATURES_ENABLE in newer versions to emulate older versions doesn't always reproduce the exact plans produced by the actual, older versions. So all in all this looks like a pretty messy part of the optimizer.

Furthermore the problem doesn't always show up - it seems to depend largely on the exact version and the plan shape used. For example, replacing the SELECT MAX(ID) FROM () outermost query in the above example with a simple SELECT ID FROM () results in a plan where the concatenation transformation doesn't produce all those strange BUFFER SORTs - although in some versions it still produces a plan decomposed into two DFO trees.
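
For reference, a sketch of that variant (same tables and hints as above):

select id from (
select /*+ parallel(t1 8) parallel(t2 8) */
t2.*
from
t1
, t2
where
(t1.id = t2.id or t1.id = t2.id2)
);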

It is also interesting to note that, depending on version and plan shape, sometimes the manual rewrite using UNION ALL is also affected by either unluckily placed or unnecessary BUFFER SORT operations, but not to the same extent as the plans resulting from the CONCATENATION transformation.

In the next post I'll show how this inefficiency can have some interesting side effects when being triggered by / combined with other transformations.

Footnote
Table structures used in the test cases:


create table t1
compress
as
select
(rownum * 2) + 1 as id
, mod(rownum, 2000) + 1 as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(100000) */ * from dual
connect by
level <= 100000) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't1')

create table t2
compress
as
select * from t1;

exec dbms_stats.gather_table_stats(null, 't2')

New Version Of XPLAN_ASH Utility

Sun, 2014-12-21 16:40
A new version 4.2 of the XPLAN_ASH utility is available for download.

As usual the latest version can be downloaded here.

There were no major changes in this release; mainly some new sections related to I/O figures were added.

One thing to note is that some of the sections in recent releases may require a linesize larger than 700, so the script's settings have been changed to 800. If you use corresponding settings for CMD.EXE under Windows, for example, you might have to adjust them accordingly to prevent ugly line wrapping.
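
For example, a sketch of the corresponding client-side setting (the script itself already sets its own linesize; under Windows the CMD.EXE screen buffer width would need to be at least as wide):

set linesize 800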

Here are the notes from the change log:

- New sections "Concurrent activity I/O Summary based on ASH" and "Concurrent activity I/O Summary per Instance based on ASH" to see the I/O activity summary for concurrent activity

- Many averages and medians now also have accompanying minimum and maximum values shown. This isn't as good as having histograms but gives a better idea of the range of values, and how potential outliers influence the average and deserve further investigation

- Bug fixed: When using MONITOR as the source for searching for the most recent SQL_ID executed by a given SID, due to some date filtering no SQL_ID was found. This is now fixed

- Bug fixed: In RAC GV$ASH_INFO should be used to determine available samples

- The "Parallel Execution Skew ASH" indicator is now weighted - so far any activity level per plan line and sample below the actual DOP counted as one, and the same if the activity level was above
The sum of the "ones" was then set relative to the total number of samples the plan line was active to determine the "skewness" indicator

Now the actual difference between the activity level and the actual DOP is calculated and compared to the number of total samples active times the actual DOP
This should give a better picture of the actual impact the skew has on the overall execution

- Most queries now use a NO_STATEMENT_QUEUING hint for environments where AUTO DOP is enabled and the XPLAN_ASH queries could get queued otherwise

- The physical I/O bytes on execution plan line level taken from "Real-Time SQL Monitoring" now have the more appropriate headings "ReadB" and "WriteB"; I never liked the former misleading "Reads"/"Writes" headings