Re: peeking into global temporary table from other session and some tuning - plans attached

From: Andy Sayer <andysayer_at_gmail.com>
Date: Sun, 26 Nov 2017 11:43:55 +0000
Message-ID: <CACj1VR7JnRqM9eZupQQKEJJJ2NDHq0PO1TMWewGQyJ-rEjtO2A_at_mail.gmail.com>



Insert append does make a difference for GTTs. Here's a quick demo; don't read too much into the elapsed times (this is running on a several-years-old Surface Pro reading/writing external storage), but pay attention to the redo size (and do notice the time taken to roll back each version). Note that the GTT would need to be created on commit preserve rows to actually make use of this (I forgot that key point for this demo).

ANDY_at_pdb1>create global temporary table check_my_redo as select * from big_table where 1=0;

Table created.
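
As an aside: for this trick to be of real use, the DDL would need the on commit preserve rows clause I mentioned forgetting, i.e. something like:

create global temporary table check_my_redo
  on commit preserve rows
as select * from big_table where 1=0;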

ANDY_at_pdb1>set autotrace on
ANDY_at_pdb1>set timing on
ANDY_at_pdb1>insert into check_my_redo select * from big_table;

771100 rows created.

Elapsed: 00:00:36.16

Execution Plan
----------------------------------------------------------
Plan hash value: 3993303771

------------------------------------------------------------------------------------------
| Id  | Operation                | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |               |   771K|    97M|  4131   (1)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | CHECK_MY_REDO |       |       |            |          |
|   2 |   TABLE ACCESS FULL      | BIG_TABLE     |   771K|    97M|  4131   (1)| 00:00:01 |
------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
        151  recursive calls
      82832  db block gets
      29858  consistent gets
      15001  physical reads
    6088944  redo size
        862  bytes sent via SQL*Net to client
        969  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
     771100  rows processed

ANDY_at_pdb1>rollback;

Rollback complete.

Elapsed: 00:00:42.09
ANDY_at_pdb1>insert /*+append*/into check_my_redo select * from big_table;

771100 rows created.

Elapsed: 00:00:17.65

Execution Plan
----------------------------------------------------------
ERROR:
ORA-12838: cannot read/modify an object after modifying it in parallel

SP2-0612: Error generating AUTOTRACE EXPLAIN report

Statistics
----------------------------------------------------------
       6894  recursive calls
      15616  db block gets
      28132  consistent gets
      16019  physical reads
      79684  redo size
        850  bytes sent via SQL*Net to client
        980  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
        510  sorts (memory)
          0  sorts (disk)
     771100  rows processed

ANDY_at_pdb1>rollback;

Rollback complete.

Elapsed: 00:00:00.06
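
If you want to verify the redo figures without relying on autotrace, here's a minimal sketch using v$mystat; run it before and after each insert and take the difference between the two values:

select n.name, s.value
from   v$mystat s
join   v$statname n on n.statistic# = s.statistic#
where  n.name = 'redo size';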

GG, to confirm what bind variables were used when the statement was first parsed, you could use:
select * from
table(dbms_xplan.display_cursor(sql_id=>'btay965futjwg',format=>'advanced -projection'));
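
Another option, assuming bind capture has kicked in for that cursor, is v$sql_bind_capture (it only samples binds periodically, so treat the values as indicative):

select child_number, name, position, datatype_string, value_string
from   v$sql_bind_capture
where  sql_id = 'btay965futjwg'
order  by child_number, position;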

However, the bind variables in this statement seem to have nothing to do with what went into populating the GTT, so it might not really help, other than to prove that the statement was parsed with different bind variables being peeked. You say that you should be getting a new SQL_ID each time the application executes the statement, due to the comment; if that were the case then I would expect dynamic sampling to figure out the number of rows in the table much better. In my experiments it does seem easy for the CBO to get it slightly wrong, but I don't think it would ever be off by the sort of scale you're looking at. It is probably easier to get it wrong when you apply further predicates against the GTT (which seems weird to me anyway: why not just avoid populating those rows to begin with?).
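
One quick way to check whether the comment really is giving you a new cursor per execution is to look for multiple sql_ids and load counts in v$sql; the LIKE pattern below is only a placeholder for your actual statement text:

select sql_id, child_number, loads, executions, last_load_time
from   v$sql
where  sql_text like 'select%your_statement_text%'
order  by last_load_time;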

Here's a simple demo (which turned into a look at dynamic sampling and private statistics in 12c) of one session using the results of dynamic sampling performed by another session:
ANDY_at_pdb1>create global temporary table check_my_ds_gtt on commit preserve rows as select * from big_Table where 1=0;

Table created.

ANDY_at_pdb1>alter session set optimizer_features_enable='11.2.0.3';

Session altered.

-- As I'm running 12.2.0.1, this emulates the pre-12c optimizer behaviour.

ANDY_at_pdb1>insert into check_my_ds_gtt select * from big_table;

771100 rows created.

ANDY_at_pdb1>get last_simple
  1* select * from table(dbms_xplan.display_cursor(format=>'typical'))

ANDY_at_pdb1>select count(*) from check_my_ds_gtt;

  COUNT(*)
----------
    771100

ANDY_at_pdb1>_at_last_simple

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID 97tsy5rfjja6z, child number 0

select count(*) from check_my_ds_gtt

Plan hash value: 3670425436



---------------------------------------------------------------------------------
| Id  | Operation          | Name            | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                 |       |  4050 (100)|          |
|   1 |  SORT AGGREGATE    |                 |     1 |            |          |
|   2 |  TABLE ACCESS FULL | CHECK_MY_DS_GTT |   954K|  4050   (1)| 00:00:01 |
---------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

18 rows selected.

-- It's sort of there; it doesn't seem massively off.

<ANOTHER SESSION>
ANDY_at_pdb1>alter session set optimizer_features_enable='11.2.0.3';

Session altered.

ANDY_at_pdb1>insert into check_my_ds_gtt select * from big_table where rownum <=1000;

1000 rows created.

ANDY_at_pdb1>
ANDY_at_pdb1>select count(*) from check_my_ds_gtt;

  COUNT(*)
----------
      1000

ANDY_at_pdb1>_at_last_simple

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID 97tsy5rfjja6z, child number 0

select count(*) from check_my_ds_gtt

Plan hash value: 3670425436



---------------------------------------------------------------------------------
| Id  | Operation          | Name            | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                 |       |  4050 (100)|          |
|   1 |  SORT AGGREGATE    |                 |     1 |            |          |
|   2 |  TABLE ACCESS FULL | CHECK_MY_DS_GTT |   954K|  4050   (1)| 00:00:01 |
---------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

-- Same sql_id and no additional hard parse, so we picked up the dynamic sampling results from the other session!
<BACK TO ORIGINAL SESSION>

ANDY_at_pdb1>select count(*) from check_my_ds_gtt;

  COUNT(*)
----------
    771100

ANDY_at_pdb1>delete check_my_ds_gtt where rownum<=761100;

761100 rows deleted.

ANDY_at_pdb1>select count(*) from check_my_ds_gtt;

  COUNT(*)
----------
     10000

ANDY_at_pdb1>_at_last_simple

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID 97tsy5rfjja6z, child number 0

select count(*) from check_my_ds_gtt

Plan hash value: 3670425436



---------------------------------------------------------------------------------
| Id  | Operation          | Name            | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                 |       |  4050 (100)|          |
|   1 |  SORT AGGREGATE    |                 |     1 |            |          |
|   2 |  TABLE ACCESS FULL | CHECK_MY_DS_GTT |   954K|  4050   (1)| 00:00:01 |
---------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

18 rows selected.

--Still no further hard parses

ANDY_at_pdb1>select count(*) from Check_my_ds_gtt;

  COUNT(*)
----------
     10000

ANDY_at_pdb1>_at_last_simple

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID d9749ukdwaf0c, child number 0

select count(*) from Check_my_ds_gtt

Plan hash value: 3670425436



---------------------------------------------------------------------------------
| Id  | Operation          | Name            | Rows  | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                 |       |  4044 (100)|          |
|   1 |  SORT AGGREGATE    |                 |     1 |            |          |
|   2 |  TABLE ACCESS FULL | CHECK_MY_DS_GTT |     1 |  4044   (1)| 00:00:01 |
---------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

18 rows selected.

--Forced a hard parse by using a different sql_id (by changing the casing of the text).
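
If you'd rather not change the statement text, another way to force the reparse (assuming the dbms_shared_pool package is installed) is to purge the cursor directly; substitute the address and hash_value returned by the first query:

select address, hash_value from v$sqlarea where sql_id = '97tsy5rfjja6z';
exec dbms_shared_pool.purge('<address>,<hash_value>','C')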

So how well does it work in 12c?

ANDY_at_pdb1>alter session set optimizer_features_enable='12.2.0.1';

Session altered.

ANDY_at_pdb1>create global temporary table check_my_ds_gtt_12c on commit preserve rows as select * from big_Table where 1=0;

Table created.

ANDY_at_pdb1>insert into check_my_ds_gtt_12c select * from big_table;

771100 rows created.

ANDY_at_pdb1>select count(*) from check_my_ds_gtt_12c;

  COUNT(*)
----------
    771100

ANDY_at_pdb1>_at_last_simple

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID c46c7xqa5j24k, child number 0

select count(*) from check_my_ds_gtt_12c

Plan hash value: 857759501



-------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |       |  4049 (100)|          |
|   1 |  SORT AGGREGATE    |                     |     1 |            |          |
|   2 |  TABLE ACCESS FULL | CHECK_MY_DS_GTT_12C |   758K|  4049   (1)| 00:00:01 |
-------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

18 rows selected.

<OTHER SESSION>
ANDY_at_pdb1>alter session set optimizer_features_enable='12.2.0.1';

Session altered.

ANDY_at_pdb1>insert into check_my_ds_gtt_12c select * from big_table where rownum <=1000;

1000 rows created.

ANDY_at_pdb1>select count(*) from check_my_ds_gtt_12c;

  COUNT(*)
----------
      1000

ANDY_at_pdb1>_at_last_simple

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID c46c7xqa5j24k, child number 0

select count(*) from check_my_ds_gtt_12c

Plan hash value: 857759501



-------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |       |  4049 (100)|          |
|   1 |  SORT AGGREGATE    |                     |     1 |            |          |
|   2 |  TABLE ACCESS FULL | CHECK_MY_DS_GTT_12C |   758K|  4049   (1)| 00:00:01 |
-------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

18 rows selected.

--Not good!

ANDY_at_pdb1>select count(*) from check_my_ds_gtt_12c reparse_please;

  COUNT(*)
----------
      1000

ANDY_at_pdb1>_at_last_simple

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID 9jpq0xskjprsc, child number 0

select count(*) from check_my_ds_gtt_12c reparse_please

Plan hash value: 857759501



-------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |       |     7 (100)|          |
|   1 |  SORT AGGREGATE    |                     |     1 |            |          |
|   2 |  TABLE ACCESS FULL | CHECK_MY_DS_GTT_12C |  1000 |     7   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

18 rows selected.

--Not great so far with DS. What about private stats?

<ORIGINAL SESSION>
ANDY_at_pdb1>truncate table check_my_ds_gtt_12c;

Table truncated.

ANDY_at_pdb1>insert /*+append*/into check_my_ds_gtt_12c select * from big_table;

771100 rows created.

-- I'm taking advantage of online statistics gathering here (the append triggers it when it's the first use of the segment), rather than calling dbms_stats manually.
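
If a direct-path insert isn't an option, a hedged alternative is making the call yourself; in 12c the GLOBAL_TEMP_TABLE_STATS preference defaults to SESSION for GTTs, so this gathers session-private statistics too:

exec dbms_stats.gather_table_stats(user, 'CHECK_MY_DS_GTT_12C')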

ANDY_at_pdb1>select count(*) from check_my_ds_gtt_12c;
select count(*) from check_my_ds_gtt_12c
                     *
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel

ANDY_at_pdb1>commit;

Commit complete.

ANDY_at_pdb1>select count(*) from check_my_ds_gtt_12c;

  COUNT(*)
----------
    771100

ANDY_at_pdb1>_at_last_simple

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID c46c7xqa5j24k, child number 0

select count(*) from check_my_ds_gtt_12c

Plan hash value: 857759501



-------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |       |  4065 (100)|          |
|   1 |  SORT AGGREGATE    |                     |     1 |            |          |
|   2 |  TABLE ACCESS FULL | CHECK_MY_DS_GTT_12C |   771K|  4065   (1)| 00:00:01 |
-------------------------------------------------------------------------------------

Note
-----
   - Global temporary table session private statistics used

18 rows selected.
<OTHER SESSION>
ANDY_at_pdb1>truncate table check_my_ds_gtt_12c;

Table truncated.

ANDY_at_pdb1>insert /*+append*/into check_my_ds_gtt_12c select * from big_table where rownum <=1000;

1000 rows created.

ANDY_at_pdb1>commit;

Commit complete.

ANDY_at_pdb1>select count(*) from check_my_ds_gtt_12c;

  COUNT(*)
----------
      1000

ANDY_at_pdb1>_at_last_simple

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID c46c7xqa5j24k, child number 0

select count(*) from check_my_ds_gtt_12c

Plan hash value: 857759501



-------------------------------------------------------------------------------------
| Id  | Operation          | Name                | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                     |       |     6 (100)|          |
|   1 |  SORT AGGREGATE    |                     |     1 |            |          |
|   2 |  TABLE ACCESS FULL | CHECK_MY_DS_GTT_12C |  1000 |     6   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Note
-----
   - Global temporary table session private statistics used

-- Much better

-- But due to the online statistics gathering and the session-private statistics, the cursor is not very shareable; it will be quickly purged from the cursor cache if required.
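
A quick way to watch that happen is to check whether the child is still marked shareable:

select sql_id, child_number, is_shareable, invalidations
from   v$sql
where  sql_id = 'c46c7xqa5j24k';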

ANDY_at_pdb1>select * from
table(dbms_xplan.display_cursor(sql_id=>'c46c7xqa5j24k'));

PLAN_TABLE_OUTPUT



SQL_ID: c46c7xqa5j24k, child number: 0 cannot be found

2 rows selected.
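
As an aside, you can confirm which flavour of statistics your session is seeing via the SCOPE column that 12c added to the statistics views:

select table_name, num_rows, scope
from   user_tab_statistics
where  table_name = 'CHECK_MY_DS_GTT_12C';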

Hope this helps. I'm not sure whether you have any plans to upgrade, but I would certainly start considering it as an option.

On 26 November 2017 at 08:13, GG <grzegorzof_at_interia.pl> wrote:

> On 2017-11-26 at 02:16, Glauber, Leo wrote:
>
> We have a stored procedure used with OBIEE that uses a number of GTTs with
> wide-ranging volumes, thousands to millions of rows inserted. Since it was
> using up to 15 bind variables that may or may not be provided, we were
> getting inconsistent performance. This hint helped our implementation.
>
>
>
> /*+ BIND_AWARE */
>
> Thanks, but as far as I can tell you need a histogram on the column in the
> predicate in order to have bind-aware cursors.
> Will check this anyway.
> Regards.
> G
>
>

--
http://www.freelists.org/webpage/oracle-l
Received on Sun Nov 26 2017 - 12:43:55 CET
