Jonathan Lewis

Just another Oracle weblog

Join Elimination Bug

Mon, 2017-08-14 05:59

A few years ago a bug relating to join elimination showed up in a comment to a post I’d done about the need to keep on testing and learning. The bug was visible in version 11.2.0.2 and, with a script to replay it, I’d found that it had disappeared by 11.2.0.4.

Today I had a reason to rediscover the script, and decided to test it against 12.2.0.1 – and found that the bug was still present.

Here’s the model:


rem     Script:         join_eliminate_bug_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2012

drop table child purge;
drop table parent purge;

create table parent (
        id      number(4),
        name    varchar2(10),
        constraint par_pk primary key (id)
        deferrable initially immediate
)
;

create table child(
        id_p    number(4)       
                constraint chi_fk_par
                references parent,
        id      number(4),
        name    varchar2(10),
        constraint chi_pk primary key (id_p, id)
)
;

insert into parent values (1,'Smith');
insert into parent values (2,'Jones');

insert into child values(1,1,'Simon');
insert into child values(1,2,'Sally');

insert into child values(2,1,'Jack');
insert into child values(2,2,'Jill');

commit;

begin
        dbms_stats.gather_table_stats(user,'child');
        dbms_stats.gather_table_stats(user,'parent');
end;
/

set serveroutput off

select
        chi.*
from
        child   chi,
        parent  par
where
        par.id = chi.id_p
;

select * from table(dbms_xplan.display_cursor);

The setup is just to show you the correct results with join elimination taking place. Here’s the output from the query and the actual execution plan:

      ID_P         ID NAME
---------- ---------- ------------
         1          1 Simon
         1          2 Sally
         2          1 Jack
         2          2 Jill

4 rows selected.


PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  1whubydgj8w0s, child number 0
-------------------------------------
select  chi.* from  child chi,  parent par where  par.id = chi.id_p

Plan hash value: 2406669797

-----------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |    11 |
|   1 |  TABLE ACCESS FULL| CHILD |     4 |    48 |    11 |
-----------------------------------------------------------

On a single column join, with referential integrity in place, and no columns other than the primary key involved, the optimizer eliminates table parent from the query. But if I now defer the primary key constraint on parent and duplicate every row (which ought to duplicate the query result), watch what happens with the query:


set constraint par_pk deferred;

insert into parent (id,name) values (1,'Smith');
insert into parent (id,name) values (2,'Jones');

alter system flush shared_pool;

select
        chi.*
from
        child   chi,
        parent  par
where
        par.id = chi.id_p
;

select * from table(dbms_xplan.display_cursor);


      ID_P         ID NAME
---------- ---------- ------------
         1          1 Simon
         1          2 Sally
         2          1 Jack
         2          2 Jill

4 rows selected.


PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  1whubydgj8w0s, child number 0
-------------------------------------
select  chi.* from  child chi,  parent par where  par.id = chi.id_p

Plan hash value: 2406669797

-----------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost  |
-----------------------------------------------------------
|   0 | SELECT STATEMENT  |       |       |       |    11 |
|   1 |  TABLE ACCESS FULL| CHILD |     4 |    48 |    11 |
-----------------------------------------------------------

I get the same plan, so I get the same results – which are now wrong, since each child row ought to appear twice – and notice that I flushed the shared pool before repeating the query, so I haven’t fooled Oracle into reusing the wrong plan by accident: it’s a whole new, freshly optimized plan.

Just to show what ought to happen, here’s the last bit of the test case:


select  /*+ no_eliminate_join(@sel$1 par@sel$1) */
        chi.*
from
        child   chi,
        parent  par
where
        par.id = chi.id_p
;

select * from table(dbms_xplan.display_cursor);


      ID_P         ID NAME
---------- ---------- ------------
         1          1 Simon
         1          2 Sally
         1          1 Simon
         1          2 Sally
         2          1 Jack
         2          2 Jill
         2          1 Jack
         2          2 Jill

8 rows selected.


PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  5p8sp7k8b0fgq, child number 0
-------------------------------------
select /*+ no_eliminate_join(@sel$1 par@sel$1) */  chi.* from  child
chi,  parent par where  par.id = chi.id_p

Plan hash value: 65982890

-----------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |       |       |     5 |
|   1 |  NESTED LOOPS                |        |     4 |    60 |     5 |
|   2 |   NESTED LOOPS               |        |     4 |    60 |     5 |
|   3 |    INDEX FULL SCAN           | PAR_PK |     2 |     6 |     1 |
|*  4 |    INDEX RANGE SCAN          | CHI_PK |     2 |       |     1 |
|   5 |   TABLE ACCESS BY INDEX ROWID| CHILD  |     2 |    24 |     2 |
-----------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("PAR"."ID"="CHI"."ID_P")


I ran this test on 11.2.0.4 – and then repeated it on 12.2.0.1: the bug is still present (although I thought I’d seen a MoS note saying it had been fixed in 12.1).

It’s always a little dangerous playing around with deferrable constraints – my view is that you should keep the interval of deferment as short as possible and don’t try to use it for doing anything other than correcting known data errors. At present if you have code that defers constraints and runs non-trivial queries afterwards you might want that code to start with an “alter session” to set the hidden parameter _optimizer_join_elimination_enabled to false (after checking with Oracle support, of course).
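
As a sketch of what that defensive wrapper might look like (hidden parameters have to be double-quoted, and par_pk is the constraint from the test case above):

alter session set "_optimizer_join_elimination_enabled" = false;

set constraint par_pk deferred;

-- data corrections (and any queries that have to run while the constraint is deferred) go here

set constraint par_pk immediate;

alter session set "_optimizer_join_elimination_enabled" = true;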


Rebuilding Indexes

Thu, 2017-08-03 07:00

One of the special events that can make it necessary to rebuild an index is the case of the “massive DML”, typically a bulk delete that purges old data from a table. You may even find cases where it’s a good idea to mark a couple of your indexes as unusable before doing a massive delete and then rebuild them after the delete.
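
As a rough sketch of that approach (the table, index and 36-month cut-off are invented for illustration; with skip_unusable_indexes left at its default of true the delete can go ahead while the index is unusable):

alter index t1_hist_ix unusable;          -- stop maintaining the index during the purge

delete from t1
where  date_created < add_months(sysdate, -36);

commit;

alter index t1_hist_ix rebuild online;    -- rebuild once the bulk delete is complete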

Despite the fact that a massive delete is an obvious special case it’s still not necessary in many cases to worry about a rebuild afterwards because the space made free by the delete will be smoothly reused over time with very little variation in performance. There is, however, one particular feature that increases the probability of a rebuild becoming necessary – global (or globally partitioned) indexes on partitioned tables. The problem (and the absence of problem in non-partitioned tables) is in the nature of the rowid.

For non-partitioned tables, and partitioned tables with local indexes, the rowid stored in an index is (assuming we’re thinking only of heap tables) stored as a sequence of 6 bytes consisting, in order, of: (tablespace relative file number, block number within file, row number within block). If the table is non-partitioned, or if this is an index segment from a locally partitioned index, all the index entries will be pointing to the same table segment and Oracle knows which segment that is from the data dictionary information – so Oracle can derive the data_object_id of the table segment and convert the tablespace relative file number into the absolute file number to navigate to the right row in the table.

When the index is global or globally partitioned any index entry may point to any of the table’s segments, so the rowid that is stored in the index is expanded by a further 4 bytes to hold the data_object_id of the table segment it is pointing to – and the data_object_id is the leading component: (data_object_id, tablespace relative file number, block number within file, row number within block). Think about what this means when you start to drop “old” partitions and add new partitions. Compare this with what happens when you simply delete a large volume of old data from a table and start inserting new data. There’s an important difference to the way in which indexes will evolve.
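
You can check the breakdown for yourself with the dbms_rowid package – a quick sketch (t1 is just a placeholder name for any convenient table):

select
        dbms_rowid.rowid_object(rowid)          data_object_id,
        dbms_rowid.rowid_relative_fno(rowid)    rel_file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no,
        dbms_rowid.rowid_row_number(rowid)      row_no
from
        t1
where
        rownum <= 4
;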

Purging data

When you delete a large volume of data from a (simple, heap) table you will create a lot of empty space in a lot of existing table blocks. If that delete is supposed to get rid of “old” data (and to keep the description simple we’ll assume it’s the first time you’ve done this) it’s very likely that the delete will result in lots of empty blocks near the start of the table – rows that were inserted at the same time will tend to be physically close to each other in the table. This means that future inserts will soon start to reuse those table blocks. Think about what this means for index entries – especially for non-unique keys.

Assume you have 100 rows with value ‘XXX’ for an indexed column. Breaking the rowid into its component parts the index entries will be (‘XXX’,{file_id, block_id, row_number}).  Now assume you delete the oldest 10 rows then, over time, insert 10 more rows with the same key value. You’ve deleted the 10 index entries with the lowest values for (file_id, block_id) but the space that’s become available in the table will be in and around exactly that range of blocks – so the new index entries will probably end up looking very similar to the deleted index entries and inserted in and around the existing index entries for value ‘XXX’, so over time the index is unlikely to allocate much new space.

Now think about what happens when your table is partitioned but the index is global; your index entries are (‘XXX’,{data_object_id, file_id, block_id, row_number}). When you drop the oldest partition you will probably[1] delete all the index entries with the lowest data_object_id. When you start inserting new rows for ‘XXX’ the new table partition will have a data_object_id that is going to be higher than any previous data_object_id – which means you’re going to be inserting rows into the right-hand (high-value) edge of this section of the index. In some cases – typically those where you have a couple of leaf blocks per key value – the index may end up growing significantly because the insertion point for rows in the new partition isn’t in the leaf block with the available space, and it won’t be until you’ve done a few more bulk deletes and the leaf blocks relating to the oldest table partitions become completely empty that the space can be reused.

An example of this type of behaviour probably appeared on the OTN database forum quite recently.  Of course, there are various reasons why indexes can become inefficient, and the degree of inefficiency may only become noticeable over a fairly long period of time; moreover there are various reasons why global indexes are a little problematic, and various reasons why a bulk delete (which is what executing “alter table drop partition” does to a global index) has unpleasant side effects dependent somewhat on the number (and size) of the partitions and on how many you try to drop in one go.

There’s not a lot you can do about this quirk of global indexes, but it’s always worth taking extra care with partitioned tables and focusing even more carefully on a strategic review of indexes:

  • Does this index really need to exist at all?
  • Could this index be replaced by a selective function-based index?
  • Does this index really need to be global / globally partitioned?
  • How big is this index compared to the size it ought to be?
  • Should this index be (basic) compressed?
  • Is this index likely to be disrupted by a historic purge – or is there another reason for its undesirable behaviour?

 

[1] “probably delete all the index entries with the lowest data_object_id” – I have to say “probably” because if you’ve executed a “move partition” at any time a new data_object_id will have been generated for the partition, so the oldest partition could, in principle, have the highest data_object_id. The issue of changing data_object_ids brings a whole new level of complexity to global indexes – but only in a few special cases, fortunately.

 

 


Redo OP Codes:

Tue, 2017-07-25 12:17

This posting was prompted by a tweet from Kamil Stawiarski in response to a question about how he’d discovered the meaning of Redo Op Codes 5.1 and 11.6 – and credited me and Julian Dyke with “the hardest part”.

Over the years I’ve accumulated (from Julian Dyke, or odd MoS notes, etc.) and let dribble out the occasional interpretation of a few op codes – typically in response to a question on the OTN database forum or the Oracle-L listserver, and sometimes as a throwaway comment in a blog post, but I’ve never published the full set of codes that I’ve acquired (or guessed) to date.

It’s been some time since I’ve looked closely at a redo stream, and there are many features of Oracle that I’ve never had to examine at the level of the redo, so there are plenty of gaps in the list – and maybe a few people will use the comments to help fill the gaps.
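
If you want to see these op codes in their natural habitat, the easiest option is to dump a redo log file and read the change vectors in the resulting trace file – a sketch only (the file name is invented; pick a convenient log, and on a production system dump an archived copy rather than the current online log):

alter system dump logfile '/u01/oradata/orcl/redo01.log';

--  The change vectors in the trace file then appear as lines of the form:
--  CHANGE #1 ... OP:11.2 ...     (an "insert row piece")
--  CHANGE #2 ... OP:5.1  ...     (an "update undo block")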

It’s possible that I may be able to add more op codes over the next few days – I know that somewhere I have some op codes relating to space management, and a batch relating to LOB handling, but it looks like I forgot to add them to the master list – so here’s what I can offer so far:


1	Transaction Control

2	Transaction read

3	Transaction update

4	Block cleanout
		4.1	Block cleanout record
		4.2	Physical cleanout
		4.3	Single array change
		4.4	Multiple array changes
		4.5	Format block
		4.6	ktbcc redo -  Commit Time Block Cleanout Change (?RAC, ?recursive, ?SYS objects)

5	Transaction undo management
		5.1	Update undo block
		5.2	Get undo header
		5.3	Rollout a transaction begin
		5.4	On a rollback or commit
		5.5	Create rollback segment
		5.6	On a rollback of an insert
		5.7	In the ktubl for 'dbms_transaction.local_transaction_id'
			(begin transaction) - also arrives for incoming distributed
			tx, no data change but TT slot acquired. Also for recursive
			transaction (e.g. truncate). txn start scn:  0xffff.ffffffff
		5.8	Mark transaction as dead
		5.9	Rollback extension of rollback seg
		5.10	Rollback segment header change for extension of rollback seg
		5.11	Mark undo as applied during rollback
		5.19	Transaction audit record - first
		5.20	Transaction audit record - subsequent
		5.23	ktudbr redo: disable block level recovery (reports XID)
		5.24	ktfbhundo - File Space Header Undo

6	Control file

10	Index
		10.1	SQL load index block
		10.2	Insert Leaf Row
		10.3	Purge Leaf Row
		10.4	Delete Leaf Row
		10.5	Restore Leaf during rollback
		10.6	(kdxlok) Lock block (pre-split?)
		10.7	(kdxulo) unlock block during undo
		10.8	(kdxlne) initialize leaf block being split
		10.9	(kdxair) apply XAT do to ITL 1	-- related to leaf block split 
		10.10	Set leaf block next pointer
		10.11	(kdxlpr) (UNDO) set kdxleprv (previous pointer)
		10.12 	Initialize root block after split
		10.13	index redo (kdxlem): (REDO) make leaf block empty,
		10.14	Restore block before image
		10.15	(kdxbin) Insert branch block row	
		10.16	Purge branch row
		10.17	Initialize new branch block
		10.18	Update key data in row -- index redo (kdxlup): update keydata
		10.19	Clear split flag
		10.20	Set split flag
		10.21	Undo branch operation
		10.22	Undo leaf operation
		10.23	restore block to tree
		10.24	Shrink ITL
		10.25	format root block
		10.26	format root block (undo)
		10.27	format root block (redo)
		10.28	Migrating block (undo)
		10.29	Migrating block (redo)
		10.30	Update nonkey value
		10.31	index root block redo (kdxdlr):  create/load index
		10.34 	make branch block empty
		10.35	index redo (kdxlcnu): update nonkey
		10.37	undo index change (kdxIndexlogicalNonkeyUpdate) -- bitmap index
		10.38	index change (kdxIndexlogicalNonkeyUpdate) -- bitmap index
		10.39	index redo (kdxbur) :  branch block update range
		10.40	index redo (kdxbdu) :  branch block DBA update,

11	Table
		11.1  undo row operation 
		11.2  insert row  piece
		11.3  delete row piece 
		11.4  lock row piece
		11.5  update row piece
		11.6  overwrite row piece
		11.7  manipulate first column
		11.8  change forwarding address - migration
		11.9  change cluster key index
		11.10 Set Cluster key pointers
		11.11 Insert multiple rows
		11.12 Delete multiple rows
		11.13 toggle block header flags
		11.17 Update multiple rows
		11.19 Array update ?
		11.20 SHK (mark as shrunk?)
		11.24 HCC update rowid map ?

12	Cluster

13	Segment management
		13.1	ktsfm redo: -- allocate space ??
		13.5	KTSFRBFMT (block format) redo
		13.6	(block link modify) (? index )  (options: lock clear, lock set)
		13.7	KTSFRGRP (fgb/shdr modify freelist) redo: (options unlink block, move HWM)
		13.13	ktsbbu undo - undo operation on bitmap block
		13.14	ktsbbu undo - undo operation on bitmap block
		13.17	ktsphfredo - Format Pagetable Segment Header
		13.18	ktspffredo - Format Level1 Bitmap Block
		13.19	ktspsfredo - Format Level2 Bitmap Block
		13.21	ktspbfredo - Format Pagetable Datablock
		13.22	State change on level 1 bitmap block
		13.23	Undo on level 1 bitmap block
		13.24	Bitmap block (BMB) state change (level 2 ?)
		13.25	Undo on level 2 bitmap block 
		13.26	?? Level 3 bitmap block state change ??
		13.27	?? Level 3 bitmap block undo ??
		13.28	Update LHWM and HHWM on segment header
		13.29	Undo on segment header
		13.31	Segment shrink redo for L1 bitmap block
		13.32	Segment shrink redo for segment header block

14	Extent management
		14.1	ktecush redo: clear extent control lock
		14.2	ktelk redo - lock extent (map)
		14.3	Extent de-allocate
		14.4	kteop redo - redo operation on extent map
		14.5	kteopu undo - undo operation on extent map
		14.8	kteoputrn - undo operation for flush for truncate

15	Tablespace

16	Row cache

17	Recovery management
		17.1	End backup mode marker
		17.3	Crash Recovery at scn:  0x0000.02429111
		17.28	STANDBY METADATA CACHE INVALIDATION
	
18	Block image (hot backups)
		18.1	Block image
		18.3	Reuse redo entry 
				   (Range reuse: tsn=1 base=8388753 nblks=8)
				or (Object reuse: tsn=2 objd=76515)

19	Direct loader
		19.1	Direct load block record
		19.2	Nologging invalidate block range
			Direct Loader invalidate block range redo entry

20	Compatibility segment

21	LOB segment 
		21.1	kdlop (Long Feild) redo:  [sic]
				(insert basicfile clob)

22	Locally managed tablespace
		22.2	ktfbhredo - File Space Header Redo:
		22.3	ktfbhundo - File Space Header Undo:
		22.5	ktfbbredo - File BitMap Block Redo:
		22.16	File Property Map Block (FPM)

23	Block writes
		23.1	Block written record
		23.2	Block read record (BRR) -- reference in Doc ID: 12423475.8

24	DDL statements
		24.1	DDL
		24.2	Direct load block end mark
		24.4	?? Media recovery marker
		24.10	??
		24.11	??

(E & O.E) – you’ll notice that some of the descriptions include question marks – those are guesses – and some are little more than the raw text extracted from a redo change vector with no interpretation of what they might mean.

 


Fast Now, Fast Later

Mon, 2017-07-24 07:39

The following is the text of an article I published in the UKOUG magazine several years ago (2010), but I came across it recently while writing up some notes for a presentation and thought it would be worth repeating here.

Fast Now, Fast Later

The title of this piece came from a presentation by Cary Millsap and captures an important point about trouble-shooting as a very memorable aphorism. Your solution to a problem may look good for you right now, but is it a solution that will still be appropriate when the database has grown in volume and has more users?

I was actually prompted to write this article by a question on the OTN database forum that demonstrated the need for the basic combination of problem solving and forward planning. Someone had a problem with a fairly sudden change in performance of his system from November to December, and he had some samples from trace files and Statspack of a particular query that demonstrated the problem.

The query was very simple:

select  *
from    tph
where   pol_num = :b0
order by
        pm_dt, snum

When the query was operating fast enough the trace file from a sample run showed the following (edited) tkprof output, with the optimizer taking advantage of the primary key of (pol_num, pm_dt, snum) on table TPH to avoid a sort for the order by clause. (Note that the heading on the plan is “Row Source Operation” – which means it’s the execution plan that really was used)

call    count    cpu  elapsed  disk  query  current  rows
---------------------------------------------------------
Parse       1   0.01     0.13     0    106        0     0
Execute     1   0.03     0.03     0      0        0     0
Fetch       4   0.01     0.22    46     49        0    43
---------------------------------------------------------
total       6   0.06     0.39    46    155        0    43

Rows Row Source Operation
---- --------------------
  43 TABLE ACCESS BY INDEX ROWID TPH (cr=49 pr=46 pw=0 time=226115 us)
  43   INDEX RANGE SCAN TPH_PK (cr=6 pr=3 pw=0 time=20079 us)(object id 152978)

Elapsed times include waiting on following events:
Event waited on               Times  Max. Wait Total Waited
                             Waited 
-----------------------------------------------------------
db file sequential read          46       0.01         0.21

When the query was running less efficiently the change in the trace didn’t immediately suggest any fundamental problems:


call    count    cpu  elapsed  disk  query  current  rows
---------------------------------------------------------
Parse       1   0.00     0.00     0     51        0     0
Execute     1   0.01     0.01     0      0        0     0
Fetch       4   0.00     0.59    47     51        0    45
---------------------------------------------------------
total       6   0.01     0.61    47    102        0    45

Rows Row Source Operation
---- --------------------
  45 TABLE ACCESS BY INDEX ROWID TPH (cr=51 pr=47 pw=0 time=593441 us)
  45   INDEX RANGE SCAN TPH_PK (cr=6 pr=2 pw=0 time=33470 us)(object id 152978)

Elapsed times include waiting on following events:
Event waited on               Times  Max. Wait Total Waited
                             Waited 
-----------------------------------------------------------
db file sequential read          47       0.03         0.58

The plan is the same, the number of rows returned is roughly the same, and the number of disc reads and buffer gets has hardly changed. Clearly the overall change in performance comes from the slower average disk read times (a total of 0.21 seconds with a maximum of one hundredth of a second, compared to a total of 0.58 seconds with a maximum of 3 hundredths), but why has the disk I/O time changed?

The figures give us a couple of preliminary ideas. An average read time of 4.5 milliseconds ( 0.21 seconds / 46 reads) is pretty good for a “small” random read of a reasonably loaded disc subject to a degree of concurrent access [ed: bearing in mind this was 2010], so the waits for “db file sequential read” in the first tkprof output are probably getting some help from a cache somewhere – possibly a SAN cache at the end of a fibre link or maybe from a local file system buffer (we might get a better idea if we could see the complete list of individual read times).

In the second case an average of 12.3 milliseconds ( 0.58 seconds / 47 reads) looks much more like a reasonable amount of genuine disc I/O is taking place – and the maximum of 30 milliseconds suggests that the disc(s) in question are subject to an undesirable level of concurrent access: our session is spending some of its time in a disk queue. Again, it would be nice to see the wait times for all the reads, but at this point it’s not really necessary.

There are a couple more clues about what’s going on – one is the text of the query itself (and I’ll be coming back to that later) and the other is in the detail of the disk I/Os. If you check the “Row Source Operation” details you’ll see that in the first case the sample query selected 43 rows from the table and requested 43 (46 – 3) physical reads (pr) of the table to do so. In the second case it was 45 rows and 45 (47 – 2) physical reads. Is this simply a case of the same query needing a little more data and having to do a little more work as time passes?

So now we come to the Statspack data. Based on my observations (or guesses) about the nature of the query and the work going on, I asked if we could see some summary information for a couple of comparative intervals, and also to see if this particular query appeared in the “SQL ordered by reads” section of the Statspack reports. Here are the results, first for a snapshot taken in October:


Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
db file sequential read                      3,816,939      58,549     15   79.4
CPU time                                                     7,789          10.6
db file parallel write                         371,865       2,005      5    2.7
log file parallel write                         75,554       1,552     21    2.1
log file sync                                   17,198       1,228     71    1.7

                                                     CPU      Elapsd     Old
 Physical Reads  Executions  Reads per Exec %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
        775,166       43,228           17.9   24.3   212.58  12449.98  1505833981
Module: javaw.exe
SELECT * FROM TPH WHERE POL_NUM = :B1 ORDER BY PM_DT ,SNUM FOR UPDATE NOWAIT

You might notice that the critical query is actually a ‘select for update’ rather than the simple select that we had originally been told about; this doesn’t affect the execution plan, but is going to have some significance as far as undo and redo are concerned.

Now look at the corresponding figures for an interval in December:

Top 5 Timed Events                                                    Avg %Total
~~~~~~~~~~~~~~~~~~                                                   wait   Call
Event                                            Waits    Time (s)   (ms)   Time
----------------------------------------- ------------ ----------- ------ ------
db file sequential read                      7,000,428      92,970     13   89.8
CPU time                                                     6,780           6.5
db file parallel write                         549,286       1,450      3    1.4
db file scattered read                          84,127         720      9     .7
log file parallel write                         41,197         439     11     .4


                                                     CPU      Elapsd     Old
 Physical Reads  Executions  Reads per Exec %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
      2,444,437       43,363           56.4   25.2   221.31  23376.07 1505833981
Module: javaw.exe
SELECT * FROM TPH WHERE POL_NUM = :B1 ORDER BY PM_DT ,SNUM FOR UPDATE NOWAIT

You’ll see in both cases that a huge fraction of the total database time is spent in single block reads (the “db file sequential read” time), but in December the number of reads has gone up by about 3.2 million. You can also see that about 1.7 million of the “extra” reads could be attributed to the critical query even though the number of executions of that query has hardly changed. The average number of reads per execution has gone up from 18 to 56. (I did ask if I could see the section of Statspack titled “SQL ordered by Executions” as this includes the average number of rows per execution and it would have been nice to know whether this average had gone up just a little bit, or whether it had gone up in line with the average reads per execution. Unfortunately the request was ignored, so I am going to proceed as if the change in the average result set was small.)

This, perhaps, tells us exactly what the problem is (and even if it doesn’t, the figures are symptomatic of one of the common examples of non-scalable queries).

Look at the query again – are we reporting all the rows for a “policy number”, ordered by “payment date”? If so, the number of payments recorded is bound to increase with time, and inevitably there will be lots of payments for policies belonging to other people between each pair of payments I make on my policy – and that would put each of my payments in a different table block (if I use a normal heap table).

Initially the payments table may be sufficiently small that a significant fraction of it stays in Oracle’s data cache or even in the file-system or SAN cache; but as time passes and the table grows the probability of me finding most of my blocks cached decreases – moreover, as time passes, I want increasing numbers of blocks which means that as I read my blocks I’m more likely to knock your blocks out of the cache. Given the constantly increasing numbers of competing reads, it is also no surprise that eventually the average single block read time also increases.

In scenarios like this it is inevitable that performance will degrade over time; in fact it is reasonably likely that the performance profile will degrade slowly to start with and then show an increasingly dramatic plunge. The only question really is how much damage limitation you can do.

One strategy, of course, is to increase the memory available for the critical object(s). This may mean assigning the table to a generously sized KEEP cache. (The cache need not be the same size as the table to improve things, but the bigger the better – for this query, at least). But such a strategy is only postponing the inevitable – you really need to find an approach which is less susceptible to the passage of time.
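
A sketch of that option (the 2GB figure is invented – the point is simply that a keep cache has to exist before the table assignment does anything useful):

alter system set db_keep_cache_size = 2G scope = both;

alter table tph storage (buffer_pool keep);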

In this case, there are a few options to consider. First – note that we are selecting all the rows for a policy: do we really need to, or could we select the rows within a given date range, thus setting an upper limit on the average volume of data we need to acquire for any one policy? If we do that, we may want to think about strategies for summarizing and deleting older data, or using partitioning to isolate older data in separate segments.
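
As a sketch, the date-limited version of the query might look like this (the two-year window is purely illustrative):

select  *
from    tph
where   pol_num = :b0
and     pm_dt  >= add_months(sysdate, -24)
order by
        pm_dt, snum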

If we can’t deal with the problem by changing the code (and, in this case, the apparent business requirement) can we avoid the need to visit so many data blocks for a single policy? There are two obvious options to consider here – we could create the table as an “index cluster” clustered on the policy number; in this way we pay a penalty each time we insert a new row for a policy because we have to find the correct block in the cluster but when we run a query against that policy we will only need to read one or two blocks (probably) to get all the data. Alternatively we could consider setting the table up as an index-organized table (IOT) – again we do more work inserting data into the correct leaf block in the index but again we reap the benefit as we query the data because all the rows we want are in the same two or three leaf blocks (and stored in the order we want them).
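
Here’s a sketch of the two structures – the datatypes, the cluster size, and the abbreviated column lists are all invented, and you would pick one option, not both:

rem     Option 1: an index cluster keyed on the policy number

create cluster pol_clu (pol_num varchar2(12)) size 1500;
create index pol_clu_ix on cluster pol_clu;

create table tph (
        pol_num varchar2(12),
        pm_dt   date,
        snum    number
--      ... remaining columns ...
)
cluster pol_clu (pol_num)
;

rem     Option 2: an index-organized table based on the existing primary key

create table tph (
        pol_num varchar2(12),
        pm_dt   date,
        snum    number,
--      ... remaining columns ...
        constraint tph_pk primary key (pol_num, pm_dt, snum)
)
organization index
;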

Of course we are still subject to the same basic problem of the result set increasing in size as time passes, but at least we have managed to reduce (dramatically) the number of blocks we have to visit and the rate of growth of the number of blocks per query, thereby improving the scalability of the queries significantly.

Introducing new structures to an existing system is difficult, of course, and we may have to work out variations on this theme (like creating an index that includes all the table columns if we can’t switch to an IOT!). The key point is this, though: sometimes we can look at our data and the critical queries and recognize that the volume of data we have to process (even if we don’t return it, as we did in this example) is always going to increase over time; when that’s the case we need to consider ways of minimizing the volume of data, or improving the packing of data so that the work we do doesn’t change (much) over time. Don’t just think ‘fast now’, think ‘will it still be fast later’.

 


In Memoriam – 2

Sat, 2017-07-22 11:10

This is the second of two items that my mother typed out more than 25 years ago. I had very mixed emotions when reading it but ultimately I felt that it was a reminder that, despite all the nasty incidents and stupid behaviour hyped up by the press and news outlets, people and organisations are generally kinder, gentler and more understanding than they were 60 years ago.

This story is about the birth of my brother who was born with a genetic flaw now known as Trisomy 21 though formerly known as Down’s syndrome or (colloquially, and no longer acceptably) mongolism. It is the latter term that my mother uses as it was the common term at the time of birth and at the time she typed her story.

A child is born. The history written by Dorothy Kathleen Lewis (1925 – 2017) about the birth of her first son

My pregnancy was normal. The first indication I had that something was wrong was in the delivery room when the baby was born; there was “oh” and silence then whispers. I asked what was wrong but was told nothing. The baby was put in a cot and the doctor came into the room and then he was taken out. I had not seen the baby – just knew that it was a boy. Again when I asked I was told that this was routine. Eventually the baby was brought back and given to me. When I saw him I thought he looked very odd and was so floppy. When I held him upright I could see he was a mongol, but prayed that I was wrong and this would go away. I asked to be told what the matter was with the baby and was told to tell my husband to ask if I was worried – which made me more suspicious.

Visiting was restricted and I did not see my husband until the evening. Fathers were just shown the babies at the nursery door and were not allowed to hold them. My husband was delighted that we had a little boy and I didn’t have the heart to tell him what I feared.

David had difficulty feeding and was put on a bottle at three days, the teat of the bottle made with a big enough hole for the milk to drip into his mouth because he was not sucking. When we went home, still not having been told of his condition, he was being fed 8 times a day taking just 1.5 ounces per feed. Each feed took an hour to get into him, then at night it was back to bed for 2 hours and a repeat performance.

I took David to the child welfare clinic and again the actions of the people there spoke volumes, the health visitor hurried into the doctor and I was shown in – jumping the queue. (The clinic was held in our church hall which was next to the vicarage and I was very embarrassed that I should be singled out, although it was obvious why.) I asked the doctor what she thought and she said he did look mongoloid, but perhaps I should see the paediatrician where he was born.

At 6 weeks [ed: see photo] I went for my post natal and there was great concern in the waiting room as to how the baby was getting on. None of the other mothers who were there were being asked. I said he still looks like a mongol. My husband was still not aware of David’s condition or my suspicions, I wanted to protect him from the hurt I was feeling, but now I know it was not the kindest thing to do.

I then took David back to the Middlesex hospital and saw the paediatrician, who took him away from me, and whilst I sat at one end of a very large room he had David on a table at the other end with a group of students. I could not hear what they were saying, but when David was brought back to me I was told: “You have a complete vegetable; he will never walk or talk – just lie in his pram and stare up at the sky. The best thing you can do is to put him in a home and forget you ever had a baby.” I was devastated; I couldn’t run away from it any more. He had an enlarged liver and spleen and his spine was curving outwards. When I held him in my arms it was a little like a floppy parcel and there was no buoyancy at all.

When I got home I couldn’t hold back the tears that had been stifled all those weeks and I had to tell my husband. It was dreadful, I think it would have been better had we been able to grieve together in the beginning.

From then on everything David did was a milestone and he brought us a lot of joy. Just before he was five I had to take him to County Hall in London for assessment. That was a nightmare because by this time I had two other children – little boys – it was necessary to take the older of the two with me, a very busy child. We went into a large room and an elderly fussy lady had a lot of questions for David to answer. He was shown pictures and asked what they were. He hid his face from her and was saying the words to me, many of which he already knew, but because he was not answering her they were crossed out. So his assessment was a very low one.

I don’t think it would have made a lot of difference whether he had answered her he so surely wasn’t school material. He had been going to a junior training centre from the age of 3½ because I was expecting Jonathan. A social worker who came to see me at that time asked what sex I would like my third child to be and I said I didn’t mind so long as I had a normal healthy child, and she said that was a funny answer to give – I didn’t think it funny.

People’s reactions were very different 37 years ago [ed: now 64 years]. Once it was made known that David was as he was people who had known me from childhood would cross the street [to avoid speaking to me], they didn’t know what to say. But we didn’t hide him away and when we went on holiday we just said three children and we sometimes got a reaction when we arrived, but David was always well-behaved and everybody loved him. He learned a great deal from his brothers and I thank God he was our first child.

[Dorothy Kathleen Lewis: Banbury 1990]

Footnote

While copying up this story I was prompted to look at a few statistics from the UK’s Office of National Statistics for 1953 (and 50 years later); in particular the stats about child mortality and measles caught my eye.

Infant mortality for England and Wales

Year    Births      Still-births    Died within 1 week    Died within 4 weeks    Died within 1 year
1953    684,372     15,681          10,127                12,088                 18,324
2003    621,469     3,612           1,749                 2,264                  3,306

Don’t forget when you read the mortality figures that the 2003 numbers will include births that could be anything up to 8 weeks premature. I think anything more than about 2 weeks premature would probably have ended up in the still-births column in 1953.

Measles (England and Wales).

Year    Cases Reported    Deaths
1953    545,050           242
2003    2,048             0

 


OFE

Fri, 2017-07-07 07:14

The title is a well-known shorthand for the parameter optimizer_features_enable and it has been the topic of a recent blog post by Mike Dietrich in which he decries the practice of switching the parameter back to an older version on an upgrade (even though, as he points out, Oracle support has been known to recommend it and the manuals describe – though not with 100% accuracy – why you might do so).

I am one of the people who will suggest that on the upgrade a client should consider setting the optimizer_features_enable to the version just left behind as a strategy for getting to a newer version of the base code while minimising the threat of plan instability, so I’m going to play devil’s advocate in this case even though, as we shall see, I am nearly 100% in favour of Mike’s complaint.

The first point, of course, is full disclosure to the client.  Eventually they will have to set the parameter to the current database version; all they’re doing is trying to spread out the workload of addressing a perceived threat. Moreover, they are only minimising the threat, not eliminating it. Setting the parameter has the effect of changing the state of a long list of parameters and “fix controls” – but there’s no guarantee that it will reverse out all the code changes between the two versions. One hopes it won’t reverse out a bug-fix (though Mike quotes a MoS note where exactly that problem appears); more significantly it might not reverse out a clever code optimisation that (in a few unlucky cases) happens to make the SQL run more slowly even when a new transformation is not involved. What you’re hoping for when you set this parameter is that the number of places in your application where you get an unlucky change in performance is much smaller than it would be if you didn’t set the parameter.

The second point is that you really want to have the minimum impact possible while expending as little human effort as possible. To this end it’s better to think in terms of setting the parameter for specific users (via a logon trigger), or specific sessions (e.g. batch runs), or specific statements (through a hint or SQL Patch). It may take a couple of test runs to spot the critical classes of statements that point you at the right granularity of implementation, but the more of your SQL that runs at the newer optimizer level the better.
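
For example, the user-level option might be implemented with a logon trigger along these lines (BATCH_USER is an invented account name, and this is a sketch of the idea rather than production code):

create or replace trigger ofe_logon_trigger
after logon on database
begin
        if sys_context('userenv','session_user') = 'BATCH_USER' then
                execute immediate
                        q'[alter session set optimizer_features_enable = '11.2.0.4']';
        end if;
end;
/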

If you’re going to aim for minimum impact, though, and if you’ve got the time to do some broad-brush testing, it’s worth going back to my comment that this parameter is a big switch for a number of parameters and fix controls. Perhaps you will be able to spot which new feature, or which fix control, is the one thing that needs to be changed – in the short-term – for your system.  Again, statement level is preferable to session level, which is preferable to user level, which is preferable to system level.
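
At the statement level the hint-based approach might look like one of the following – the first winds the whole optimizer back, the second (using a parameter picked arbitrarily from the diff listing in the footnote below) switches off just one feature; both are sketches against dual rather than real application SQL:

select  /*+ optimizer_features_enable('11.2.0.4') */
        *
from    dual
;

select  /*+ opt_param('_optimizer_unnest_scalar_sq','false') */
        *
from    dual
;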

The thought of adding a controlling parameter as a hint to a statement will probably have some people thinking about creating SQL baselines rather than adding hints to code – and if it’s 3rd party code then an SQL Baseline may be the necessary strategy. Bear in mind that a common advisory for upgrades is “create SQL Baselines for all your SQL first” – it wouldn’t have been me that said it, though!  So here’s something to consider in the light of the whole yes/no argument about optimizer_features_enable: what does a baseline look like? Here, taken from an 11g database, is the critical content of a baseline for “select user from dual”:


IGNORE_OPTIM_EMBEDDED_HINTS
ALL_ROWS
DB_VERSION('11.2.0.4')
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
OUTLINE_LEAF(@"SEL$1")

It’s a set of hints – including the optimizer_features_enable() hint. Using SQL Baselines to stabilise your code on the upgrade leaves you exposed (in principle) to exactly the problem in the MoS notes that Mike cited as his example of the parameter undoing a bug-fix and producing wrong results. That, by the way, is why I’m not worried by Mike’s example: if the parameter re-introduces a bug then you would have been running with the bug – or probably a workaround to the bug – anyway (unless, say, you upgraded from 11.2.0.1 to 11.2.0.4 and decided to set the parameter to 11.2.0.3.1 – but that’s not a strategy compatible with the idea of using the parameter for stability with minimum short-term change).

The second MoS note that Mike cites is really the one that states – emphasis is mine – a realistic view of the parameter (though I’d view the restriction to Oracle Global Support as a legal cop-out rather than a pure technology requirement):

Modifying the OPTIMIZER_FEATURES_ENABLE parameter generally is strongly discouraged and should only be used as a short term measure at the suggestion of Oracle Global Support.

The follow-up comment is, to my mind, a bit hand-wavy:

By reducing the OPTIMIZER_FEATURES_ENABLE level, new optimizer features are disabled. This has serious potential for negatively affecting performance generally by eliminating the possibility of choosing better plans that are only available with features enabled within the higher revision levels.

Arguing the case against setting the parameter because of the potential for affecting performance negatively – when you’re doing it so that nothing changes – is about as valid as the argument for setting it because of the potential for affecting performance negatively in a tiny percentage of plans that use new features when it’s a very bad idea.

Bottom line: whether or not you set the parameter you’re likely to hit a few edge cases where performance suffers; the less time you have for proper testing in advance the more likely you are to feel the need to set the parameter – but if you start heading in that direction think about using the time you do have available to minimise the scope, or even getting down to the detail of which ACTUAL feature is the problem feature that needs to be disabled for your system.

Footnote

If you want to check which parameters and fix controls change as you set optimizer_features_enable you could mess around with the dynamic performance views. Alternatively you could take advantage of the optimizer trace – it’s one of the easy things that the 10053 trace offers.  Enable the trace, optimize a simple statement, then check the trace file for the bit about optimizer parameters – the section you need from a 12c trace will be as follows:


***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************

... Some 1,700 lines

***************************************
Column Usage Monitoring is ON: tracking level = 1
***************************************
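
For reference, the “enable the trace, optimize a simple statement” step might look like this – a sketch using the traditional 10053 event syntax, with tracefile_identifier thrown in purely to make the two trace files easy to find:

rem     Session 1: native 12.1.0.2 optimizer

alter session set tracefile_identifier = 'ofe_1212';
alter session set events '10053 trace name context forever, level 1';

select 1 from dual;

rem     Session 2: a new session with the optimizer wound back

alter session set tracefile_identifier = 'ofe_11204';
alter session set optimizer_features_enable = '11.2.0.4';
alter session set events '10053 trace name context forever, level 1';

select 2 from dual;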

In my case I connected to SQL*Plus, enabled the trace and executed “select 1 from dual”; then I reconnected, set the trace again, set the optimizer_features_enable back to 11.2.0.4 (I was on 12.1.0.2 at the time) and executed “select 2 from dual”. Then I deleted everything but the relevant section from the two trace files. One of the joys of Unix is that you can then run commands like the following:

sdiff  -s  or32_ora_2098.trc  or32_ora_2110.trc  |  expand  >ofe_diff.txt

That’s “side by side comparison, showing only the differences, expand tab marks out to spaces”. Here’s the result (with one blank line inserted between the parameters and the fix controls):

                                                              > optimizer_features_enable           = 11.2.0.4
                                                              > _fix_control_key                    = -1750344682
optimizer_features_enable           = 12.1.0.2                <
_optimizer_undo_cost_change         = 12.1.0.2                | _optimizer_undo_cost_change         = 11.2.0.4
_fix_control_key                    = 0                       <
_optimizer_cube_join_enabled        = true                    | _optimizer_cube_join_enabled        = false
_optimizer_hybrid_fpwj_enabled      = true                    | _optimizer_hybrid_fpwj_enabled      = false
_px_replication_enabled             = true                    | _px_replication_enabled             = false
_optimizer_partial_join_eval        = true                    | _optimizer_partial_join_eval        = false
_px_concurrent                      = true                    | _px_concurrent                      = false
_px_object_sampling_enabled         = true                    | _px_object_sampling_enabled         = false
_optimizer_unnest_scalar_sq         = true                    | _optimizer_unnest_scalar_sq         = false
_px_filter_parallelized             = true                    | _px_filter_parallelized             = false
_px_filter_skew_handling            = true                    | _px_filter_skew_handling            = false
_optimizer_multi_table_outerjoin    = true                    | _optimizer_multi_table_outerjoin    = false
_px_groupby_pushdown                = force                   | _px_groupby_pushdown                = choose
_optimizer_ansi_join_lateral_enhance = true                   | _optimizer_ansi_join_lateral_enhance = false
_px_parallelize_expression          = true                    | _px_parallelize_expression          = false
_optimizer_ansi_rearchitecture      = true                    | _optimizer_ansi_rearchitecture      = false
_optimizer_gather_stats_on_load     = true                    | _optimizer_gather_stats_on_load     = false
_px_adaptive_dist_method            = choose                  | _px_adaptive_dist_method            = off
_optimizer_batch_table_access_by_rowid = true                 | _optimizer_batch_table_access_by_rowid = false
_px_wif_dfo_declumping              = choose                  | _px_wif_dfo_declumping              = off
_px_wif_extend_distribution_keys    = true                    | _px_wif_extend_distribution_keys    = false
_px_join_skew_handling              = true                    | _px_join_skew_handling              = false
_px_partial_rollup_pushdown         = adaptive                | _px_partial_rollup_pushdown         = off
_px_single_server_enabled           = true                    | _px_single_server_enabled           = false
_optimizer_dsdir_usage_control      = 126                     | _optimizer_dsdir_usage_control      = 0
_px_cpu_autodop_enabled             = true                    | _px_cpu_autodop_enabled             = false
_optimizer_use_gtt_session_stats    = true                    | _optimizer_use_gtt_session_stats    = false
_optimizer_adaptive_plans           = true                    | _optimizer_adaptive_plans           = false
_optimizer_proc_rate_level          = basic                   | _optimizer_proc_rate_level          = off
_adaptive_window_consolidator_enabled = true                  | _adaptive_window_consolidator_enabled = false
_optimizer_strans_adaptive_pruning  = true                    | _optimizer_strans_adaptive_pruning  = false
_optimizer_null_accepting_semijoin  = true                    | _optimizer_null_accepting_semijoin  = false
_optimizer_cluster_by_rowid         = true                    | _optimizer_cluster_by_rowid         = false
_optimizer_cluster_by_rowid_control = 129                     | _optimizer_cluster_by_rowid_control = 3
_distinct_agg_optimization_gsets    = choose                  | _distinct_agg_optimization_gsets    = off
_gby_vector_aggregation_enabled     = true                    | _gby_vector_aggregation_enabled     = false
_optimizer_vector_transformation    = true                    | _optimizer_vector_transformation    = false
_optimizer_aggr_groupby_elim        = true                    | _optimizer_aggr_groupby_elim        = false
_optimizer_reduce_groupby_key       = true                    | _optimizer_reduce_groupby_key       = false
_optimizer_cluster_by_rowid_batched = true                    | _optimizer_cluster_by_rowid_batched = false
_optimizer_inmemory_table_expansion = true                    | _optimizer_inmemory_table_expansion = false
_optimizer_inmemory_gen_pushable_preds = true                 | _optimizer_inmemory_gen_pushable_preds = false
_optimizer_inmemory_autodop         = true                    | _optimizer_inmemory_autodop         = false
_optimizer_inmemory_access_path     = true                    | _optimizer_inmemory_access_path     = false
_optimizer_inmemory_bloom_filter    = true                    | _optimizer_inmemory_bloom_filter    = false
_optimizer_nlj_hj_adaptive_join     = true                    | _optimizer_nlj_hj_adaptive_join     = false
_px_external_table_default_stats    = true                    | _px_external_table_default_stats    = false
_optimizer_inmemory_minmax_pruning  = true                    | _optimizer_inmemory_minmax_pruning  = false
_optimizer_inmemory_cluster_aware_dop = true                  | _optimizer_inmemory_cluster_aware_dop = false

    fix  9898249 = enabled                                    |     fix  9898249 = disabled
    fix 10004943 = enabled                                    |     fix 10004943 = disabled
    fix  9554026 = enabled                                    |     fix  9554026 = disabled
    fix  9593547 = enabled                                    |     fix  9593547 = disabled
    fix  9833381 = enabled                                    |     fix  9833381 = disabled
    fix 10106423 = enabled                                    |     fix 10106423 = disabled
    fix 10175079 = enabled                                    |     fix 10175079 = disabled
    fix 10236566 = enabled                                    |     fix 10236566 = disabled
    fix  9721228 = enabled                                    |     fix  9721228 = disabled
    fix  9929609 = enabled                                    |     fix  9929609 = disabled
    fix 10182672 = enabled                                    |     fix 10182672 = disabled
    fix  9832338 = enabled                                    |     fix  9832338 = disabled
    fix 11668189 = enabled                                    |     fix 11668189 = disabled
    fix 11940126 = enabled                                    |     fix 11940126 = disabled
    fix 12390139 = enabled                                    |     fix 12390139 = disabled
    fix 11744016 = enabled                                    |     fix 11744016 = disabled
    fix 10216738 = enabled                                    |     fix 10216738 = disabled
    fix 12563419 = enabled                                    |     fix 12563419 = disabled
    fix 12535474 = enabled                                    |     fix 12535474 = disabled
    fix 12561635 = enabled                                    |     fix 12561635 = disabled
    fix 12569245 = enabled                                    |     fix 12569245 = disabled
    fix 12569300 = enabled                                    |     fix 12569300 = disabled
    fix 12569316 = enabled                                    |     fix 12569316 = disabled
    fix 12569321 = enabled                                    |     fix 12569321 = disabled
    fix  9002958 = enabled                                    |     fix  9002958 = disabled
    fix 12810427 = enabled                                    |     fix 12810427 = disabled
    fix 12914055 = enabled                                    |     fix 12914055 = disabled
    fix 12978495 = enabled                                    |     fix 12978495 = disabled
    fix 13110511 = enabled                                    |     fix 13110511 = disabled
    fix 13345888 = enabled                                    |     fix 13345888 = disabled
    fix 13396096 = enabled                                    |     fix 13396096 = disabled
    fix 12999577 = enabled                                    |     fix 12999577 = disabled
    fix 12954320 = enabled                                    |     fix 12954320 = disabled
    fix 13036910 = enabled                                    |     fix 13036910 = disabled
    fix 12648629 = enabled                                    |     fix 12648629 = disabled
    fix 13704977 = enabled                                    |     fix 13704977 = disabled
    fix 11843466 = enabled                                    |     fix 11843466 = disabled
    fix 13909909 = enabled                                    |     fix 13909909 = disabled
    fix 12856200 = enabled                                    |     fix 12856200 = disabled
    fix  9852856 = enabled                                    |     fix  9852856 = disabled
    fix 14033181 = enabled                                    |     fix 14033181 = disabled
    fix 13836796 = enabled                                    |     fix 13836796 = disabled
    fix 13699643 = enabled                                    |     fix 13699643 = disabled
    fix 13735304 = enabled                                    |     fix 13735304 = disabled
    fix 14464068 = enabled                                    |     fix 14464068 = disabled
    fix 13448445 = enabled                                    |     fix 13448445 = disabled
    fix  9114915 = enabled                                    |     fix  9114915 = disabled
    fix 13109345 = enabled                                    |     fix 13109345 = disabled
    fix 14605040 = enabled                                    |     fix 14605040 = disabled
    fix 14633570 = enabled                                    |     fix 14633570 = disabled
    fix 13573073 = enabled                                    |     fix 13573073 = disabled
    fix 16237969 = enabled                                    |     fix 16237969 = disabled
    fix 13994546 = enabled                                    |     fix 13994546 = disabled
    fix 14750443 = enabled                                    |     fix 14750443 = disabled
    fix 14552075 = enabled                                    |     fix 14552075 = disabled
    fix 16324844 = enabled                                    |     fix 16324844 = disabled
    fix 13583529 = enabled                                    |     fix 13583529 = disabled
    fix 14565911 = enabled                                    |     fix 14565911 = disabled
    fix 16368002 = enabled                                    |     fix 16368002 = disabled
    fix 16077770 = enabled                                    |     fix 16077770 = disabled
    fix 11814337 = enabled                                    |     fix 11814337 = disabled
    fix 14764840 = enabled                                    |     fix 14764840 = disabled
    fix 16555865 = enabled                                    |     fix 16555865 = disabled
    fix 16625151 = enabled                                    |     fix 16625151 = disabled
    fix 16609749 = enabled                                    |     fix 16609749 = disabled
    fix 16751246 = enabled                                    |     fix 16751246 = disabled
    fix 16749025 = enabled                                    |     fix 16749025 = disabled
    fix 16750067 = enabled                                    |     fix 16750067 = disabled
    fix 15899648 = enabled                                    |     fix 15899648 = disabled
    fix 16690013 = enabled                                    |     fix 16690013 = disabled
    fix 16544878 = enabled                                    |     fix 16544878 = disabled
    fix 16725982 = enabled                                    |     fix 16725982 = disabled
    fix 14648222 = enabled                                    |     fix 14648222 = disabled
    fix 16507317 = enabled                                    |     fix 16507317 = disabled
    fix 16837274 = enabled                                    |     fix 16837274 = disabled
    fix 14085520 = enabled                                    |     fix 14085520 = disabled
    fix 16713081 = enabled                                    |     fix 16713081 = disabled
    fix 14703295 = enabled                                    |     fix 14703295 = disabled
    fix 16908409 = enabled                                    |     fix 16908409 = disabled
    fix 16212250 = enabled                                    |     fix 16212250 = disabled
    fix 17087729 = enabled                                    |     fix 17087729 = disabled
    fix 17088819 = enabled                                    |     fix 17088819 = disabled
    fix 13848786 = enabled                                    |     fix 13848786 = disabled
    fix 13522189 = enabled                                    |     fix 13522189 = disabled
    fix 16796185 = enabled                                    |     fix 16796185 = disabled
    fix 15950252 = enabled                                    |     fix 15950252 = disabled
    fix 16976121 = enabled                                    |     fix 16976121 = disabled
    fix 16582322 = enabled                                    |     fix 16582322 = disabled
    fix 16712213 = enabled                                    |     fix 16712213 = disabled
    fix 17382690 = enabled                                    |     fix 17382690 = disabled
    fix 14846352 = enabled                                    |     fix 14846352 = disabled
    fix 16516751 = enabled                                    |     fix 16516751 = disabled
    fix  8611462 = enabled                                    |     fix  8611462 = disabled
    fix 14062749 = enabled                                    |     fix 14062749 = disabled
    fix 16346018 = enabled                                    |     fix 16346018 = disabled
    fix 12977599 = enabled                                    |     fix 12977599 = disabled
    fix 14191778 = enabled                                    |     fix 14191778 = disabled
    fix 15939321 = enabled                                    |     fix 15939321 = disabled
    fix 17543180 = enabled                                    |     fix 17543180 = disabled
    fix 17301564 = enabled                                    |     fix 17301564 = disabled
    fix 12373708 = enabled                                    |     fix 12373708 = disabled
    fix 17397506 = enabled                                    |     fix 17397506 = disabled
    fix 14558315 = enabled                                    |     fix 14558315 = disabled
    fix 16615686 = enabled                                    |     fix 16615686 = disabled
    fix 16622801 = enabled                                    |     fix 16622801 = disabled
    fix 16954950 = enabled                                    |     fix 16954950 = disabled
    fix 17728161 = enabled                                    |     fix 17728161 = disabled
    fix 17760375 = enabled                                    |     fix 17760375 = disabled
    fix 17640863 = enabled                                    |     fix 17640863 = disabled
    fix 17716301 = enabled                                    |     fix 17716301 = disabled
    fix 17597748 = enabled                                    |     fix 17597748 = disabled
    fix 17303359 = enabled                                    |     fix 17303359 = disabled
    fix 16673868 = enabled                                    |     fix 16673868 = disabled
    fix 17800514 = enabled                                    |     fix 17800514 = disabled
    fix 14826303 = enabled                                    |     fix 14826303 = disabled
    fix 17663076 = enabled                                    |     fix 17663076 = disabled
    fix 17760755 = enabled                                    |     fix 17760755 = disabled
    fix 17997159 = enabled                                    |     fix 17997159 = disabled
    fix 14733442 = enabled                                    |     fix 14733442 = disabled
    fix 17781659 = enabled                                    |     fix 17781659 = disabled
    fix 17526569 = enabled                                    |     fix 17526569 = disabled
    fix 17760686 = enabled                                    |     fix 17760686 = disabled
    fix 17696414 = enabled                                    |     fix 17696414 = disabled
    fix 18116777 = enabled                                    |     fix 18116777 = disabled
    fix 16052625 = enabled                                    |     fix 16052625 = disabled
    fix 18091750 = enabled                                    |     fix 18091750 = disabled
    fix 17572606 = enabled                                    |     fix 17572606 = disabled
    fix 18196576 = enabled                                    |     fix 18196576 = disabled
    fix 17736165 = enabled                                    |     fix 17736165 = disabled
    fix 16434021 = enabled                                    |     fix 16434021 = disabled
    fix 18035463 = enabled                                    |     fix 18035463 = disabled
    fix 18011820 = enabled                                    |     fix 18011820 = disabled
    fix 16405740 = enabled                                    |     fix 16405740 = disabled
    fix 18365267 = enabled                                    |     fix 18365267 = disabled
    fix 17863980 = enabled                                    |     fix 17863980 = disabled
    fix 18398980 = enabled                                    |     fix 18398980 = disabled
    fix 18304693 = enabled                                    |     fix 18304693 = disabled
    fix 18508675 = enabled                                    |     fix 18508675 = disabled
    fix 18456944 = enabled                                    |     fix 18456944 = disabled
    fix 17908541 = enabled                                    |     fix 17908541 = disabled
    fix 18467455 = enabled                                    |     fix 18467455 = disabled
    fix 16033838 = enabled                                    |     fix 16033838 = disabled
    fix 16809786 = enabled                                    |     fix 16809786 = disabled
    fix 18425876 = enabled                                    |     fix 18425876 = disabled
    fix 18461984 = enabled                                    |     fix 18461984 = disabled
    fix 17023040 = enabled                                    |     fix 17023040 = disabled
    fix 14776289 = enabled                                    |     fix 14776289 = disabled

That’s 50 parameter differences, and 147 fix controls. Quite a lot of fixes between the two versions.

If you’re coming to the upgrade a couple of years late then you might want to consider using the new version number and the list of parameters you generate as the criteria for a search for bugs in MoS. You might even find that simply running your eye down the list of parameters gives you a clue about a type of execution plan that you’ve never seen in the older version.
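If you want to produce a comparison like the one above for yourself, the usual approach is to spool the contents of v$sys_optimizer_env (for the parameters) and v$system_fix_control (for the fix controls) from each version and then compare the two spool files. Here’s a minimal sketch of the fix-control half of the job – the column names are as they appear in recent versions, and you would obviously change the spool file name on each system:


rem     Sketch only: spool this once from each version, then diff the two files.

set pagesize 0
set linesize 80
set trimspool on

spool fix_control_list

select
        'fix ' || lpad(bugno,8) || ' = ' ||
        decode(value, 0, 'disabled', 'enabled')         -- non-zero values count as enabled
from
        v$system_fix_control
order by
        bugno
;

spool off
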

 


In Memoriam

Wed, 2017-07-05 11:19

My mother died a few weeks ago after a couple of months in terminal care. One of my tasks while she was in care was to go through all her paperwork and while doing so I discovered a couple of stories from her past that she had typed out (remember type-writers?) about 30 years ago. I typed them up on my laptop and printed several copies to hand out for people to read at the tea-party we held for her old friends – of all ages, ranging from 15 to 99 – after the funeral; this seemed to give them a lot of pleasure and they found them so interesting that I decided to share them with a larger audience. So here’s the story, written by my mother in 1983, of her evacuation experience at the start of the 2nd world war when she was a little over 14 years old.

The Summer of 1939. Reminiscences of Dorothy Kathleen Lewis (1925 – 2017)

There had been a lot of talk about a war coming. Adolf Hitler and his armies had marched into Austria and were threatening Poland. We had all been issued with gas masks – just in case we would need them – and emergency plans had been made to evacuate all children from the big cities.

During the school holidays I was taken by my parents, with my sister, to my mother’s home village of Llangeitho in Cardiganshire. My mother had a cousin who was a retired school teacher and it was arranged with Auntie Jane that if war broke out Peggy and I would be sent to stay with her. I don’t think we were very pleased with the arrangement because to us she was very old-fashioned, not a bit like our mother. We ended our holiday and went back to London to wait for the school term to begin.

On the 1st September we heard that all children from our school whose parents wanted them to be evacuated should assemble at the school gates with a small suitcase and their gas masks. As we had already been told we were going to Llangeitho if the war broke out we stood and watched all our friends walking in crocodile fashion down the street and mothers and fathers crying watching them go. It was a very sad day, but I wished I was going with them. I didn’t like the idea of staying with Auntie Jane. None of these children knew where they were going, just somewhere in the countryside for safety, and they didn’t know who would be looking after them.

Well, on the morning of 3rd September Neville Chamberlain, our prime minister, spoke on the wireless (we now call it a radio) to say that we were at war with Germany. Immediately the sirens went and everyone ran to the shelters. My parents, Peggy, and I went to Baker Street Station, which has very deep platforms. There were hundreds of people with the same thing on their minds. We all took our gas masks with us. After a short time the all-clear went. My father sent a telegram to Auntie Jane to say Peggy and I would be leaving London on the train at 9:25pm that night. Trains did not travel as fast as they do today and we were due to arrive at Pont Llanio Station at 7:30am on Monday morning. Peggy’s friend and her mother (an Italian lady who did not speak very good English) was coming too, also one of the young people from the village who was working in London.

Paddington Station had very dim lights and when we got on the train there were no lights at all. After a little while we children began to feel a bit less afraid and started to tell ghost stories and play memory games. It was fun going to the toilet on the train because there were people sitting in the corridor and so was their luggage. We could not see them and I don’t think we really tried – it was all a game. We were supposed to be sleeping, but we were too excited for that. When it came time to eat our sandwiches we had to taste them before we knew what we were eating. Can you imagine being in a train without any lights, and there were no lights in the streets or houses or on the station platforms that we passed. Names of stations had already been removed in case the country was invaded by the enemy. The belief was that the enemy would not know where he was if there were no road signs etc. No-one thought about them using maps and compasses as they would now. [ed: 1983]

We eventually arrived in a town called Carmarthen where we had to change trains and take a slow train to Pont Llanio where a car would meet us. Our train from Paddington was very late arriving and the slow train had gone. Someone telephoned Pont Llanio station to say we would be late and to send the car back. The train from Carmarthen was a very slow one and my father used to say “you could get out of the train and pick flowers on the bank and get back into the train again”. It really was very slow and chugged its way along the line. We arrived at last in Pont Llanio and then in Llangeitho after a journey of 16 hours. [ed: 4:30 to 5:00 hours driving time, now; 6 hours by public transport] I am sure we must have looked very dirty and untidy. The trains in those days were steam and there would be plenty of coal smuts flying around.

I did not think Auntie Jane would be very pleased to see us and I was soon to find out that I had thought rightly. The first thing she did was to take up the stair carpet in case we wore it out. I don’t know how she thought we would do that because once we came down in the morning we were not allowed to go back upstairs again until we went to bed. [ed: if you’ve read “Carrie’s War” you may recognise the behaviour] She also did not know that children eat quite a lot too. For breakfast Auntie Jane would boil an egg and cut it in half, so Peggy and I had half each. And the same for our dinner, we would have two small potatoes – and this was before rationing and shortage of food. We had a lot of friends in the village and if it was not for them asking us out to tea and/or supper we would have been very hungry. Peggy went to school in the village, but I was too old [ed: at 14 yrs 4 months] and had nothing to do all day, but soon found a baby I could take out in the pram and that meant I would be asked if I would like a piece of cake and a drink. After a few weeks and a number of letters home things got a little better because my mother was sending parcels of food to Auntie Jane. I don’t know what arrangements were made money-wise; because we were not Government evacuees Auntie Jane would not have been paid by the authorities to keep us.

One of the things we used to do together with two of our friends was to help the local butcher clean out his slaughter-house after he had killed a beast. This meant he then asked us to supper in his old farm-house with a huge Inglenook fireplace. Another of my mother’s friends used to have us in early for a meal and say “don’t tell Auntie Jane or she will not give you anything else to eat”. I often think back on those days and wonder why she was so mean. She had never married and had children, but being a teacher I would have expected her to be more tolerant.

In December of 1939 Peggy wrote a letter home which was full of complaint and left it somewhere where Auntie Jane found it and this letter was sent to my parents with a letter from Auntie Jane asking that we be sent back to London. A lot of the people in the village were very surprised to think that she should think to send us back to London when there were air-raids (these had not started at that time). People were saying we would be going home to be killed, but as for me I would rather take that chance than be left in Llangeitho.

Going back to London wasn’t much fun – the school was closed so once again we were at a loose end. We stayed in London over Christmas and again the government started evacuating school children and in February we joined a group who were leaving London – this time as London School Evacuees. We were sent to Buckingham to a family with a little girl of 2 years. This seemed to be alright and we went to school in the afternoons whilst the local children went to school in the mornings. It got rather uncomfortable there after a while because the man of the house, aged 24, lost his job (I don’t know why) and there were a lot of arguments in the house. His wife did not make herself look smart and he started to pay too much attention to me. Again a letter home that it was time we left there and one morning my father arrived and said: “pack your bags, you’re coming home”. What joy!

I don’t have much memory about this part of being an evacuee except to say I was relieved to be out of that house and back in the safety of my family. Whilst we were in Buckingham there had been bombing in London and Peggy and I were taken to see some of the damage that had been done. I think this was to frighten us so that we would be willing to go away again. I certainly did not like the sirens going and having to stop what we were doing and go to the shelter[1]. Once again we were on the move and this time I have a very detailed memory of the events.

We were assembled at the school playground all with our cases and gas masks – worried children and even more worried parents and teachers. No one knew where we were going except that we all piled into a double-decker bus. Lots of tears this time because we knew that people were being killed and injured. Would we see our parents again? What was going to happen to us if they were killed? Where would we go, who would look after us? Questions, questions!

We were taken to Marylebone station and put on an underground train. Where was it going? What were mum and dad doing now; were they still blowing their noses? We were not so bothered because we still knew where we were. Next stop Paddington Station and hundreds of children milling about. I remember I was in the Junior Red Cross at that time and a Red Cross nurse saw my badge and came to speak to me. Such a little thing but it meant such a lot and I have never forgotten her kind words, saying I was going somewhere safe and would be alright. Maybe I was crying at the time, I don’t know.

As the train pulled out of Paddington Station we were all trying to get to a window to wave, although we didn’t know anybody and we didn’t know where we were going. Well of all places – we arrived in Banbury. Now my Auntie Kit, my father’s sister, only lived about 2 miles out of Banbury in a village called Bodicote. I knew Banbury well because we had often visited Bodicote. If only she knew I was here. I know we could not stay with her because she had a very small house and she had 4 children already.

Again on another bus and somehow Peggy and I and two other girls from my class got separated from the rest of our school and we were taken to a village called Great Rollright. Peggy and I went to stay with a lady called Mrs. Robinson who kept the village shop, and my two class friends went to a farm.

Mrs. Robinson was a kind lady – she had asked for two boys but somewhere along the line she had two girls instead. It was very strange arriving in Great Rollright. We were all taken into the Church Hall and there were village people there who, to us, had a funny way of speaking. And one after the other they were saying how many children they would take. Mrs. Robinson wasn’t there, so maybe that is why she didn’t get two boys. I thought it was very embarrassing to be standing there with these quaint country people whilst they were deciding whether they wanted one, two, or three children.

Our time with Mrs. Robinson was very happy. Peggy went to the village school and I went to the county school in Chipping Norton, again on a part-time basis. Mrs. Robinson had a pet cockerel which was allowed to roam round the house – I hated that bird and I think it knew it. Its name was Cocky. Every time I wanted to go down the garden to the toilet this bird would follow me and stay outside until I came out again and peck my legs as I raced back up the garden.

There was certainly plenty to eat in this house and we really had an enjoyable time there. We were always sent to bed with a small glass of cider. I never knew then that cider was an alcoholic drink and I thought it was lovely. We didn’t seem any the worse for it anyway.

We got involved with the village church and would have been happy to have stayed there. But doom. One day I came home from school to find my head mistress from London sitting at the table. Unbeknown to us she had been frantically looking for these four girls she had lost at Banbury Station. I don’t know how she had found us, whether she had contacted our parents or hunted through the schools in the area. With a surname like Walklett I don’t think we would have been difficult to find. I don’t think she had been in touch with our parents – what an awful thing to say to them: “I’m very sorry but I may have lost your children”. No, she must have hunted through the school registers.

The upshot of this visit was that she had found somewhere more suitable for us and would we pack our things because she had found somewhere else for us to stay. More tears because we liked Mrs. Robinson, and the village people were all lovely to us and we were invited to their homes. Off we went with Miss Attride in the car to another village called Duns Tew. The strange thing was that none of our school were there, so why were we moved yet again?

This time we stayed with Mr. and Mrs. Beck, his name was Harry and hers was Daisy, but they were Mr. and Mrs. Beck to us. Mr. Beck was a farm hand and he worked with horses. He used to plough the fields of the farm across the road. He must have walked miles in the days he was ploughing. Although I had had many holidays in Wales and Shropshire at haymaking time I knew nothing about ploughing.

Mr. and Mrs. Beck had a young man living with them. He was like their son; although his family lived in the village he had lived with the Becks since he was a baby and they called him their son. His name was Walter. The village was a beautiful place and we lived in No. 73. There were no street names, every house had a name and a number so we were at No. 73 Duns Tew, the last house in the village, a lovely old thatched cottage. There was always a big wood fire in the grate and plenty on the table. Mr. and Mrs. Beck were the nicest people in the village.

Peggy now had to go to Steeple Aston School (since moving to Banbury in 1975 I have met the widow of her headmaster there), and I went to a Continuation College which had been evacuated from the East End of London. This was very odd to me – we were taught shorthand, typing, arithmetic, English grammar, French. This was obviously training us for the commercial world. I was much younger than the other girls there but my education was more advanced than theirs so I soon became top of the class. My English was always being complimented. What they didn’t know was that I had a Welsh mother and the Welsh language used every letter in the word. My French was well in advance and my Maths took a flying leap.

I made friends in the class. The class was held in The Hall, North Aston – a country seat. The Hall was so large that there were 9 girls living there and they had servants. The school room was in the Grand Hall and it was so beautiful it seemed a pity to me that there were desks etc. on the polished floor.

In Duns Tew we had one of the masters of the school staying in The Nurseries (which is still there) and every Friday evening the family he stayed with invited those of us in the village to spend the evening in their house and they had a piano so all the war songs were being sung: “Roll out the Barrel”, “We’re going to hang out the washing on the Siegfried line” and many more.

Because the school at North Aston was a long walk I bought a bike, something I had always wanted, and I joined the cycling group. This meant on one day a week we would go for an outing to places like Blenheim Palace [ed: 10 miles away] etc. I became a good cyclist and had plenty of energy when others flagged behind. I certainly made use of my bike.

One particularly happy time was when it snowed in the winter. Yes, we did get snow in London, but not like this. It was white[2] and where the wind blew it was as high as the hedgerows; I couldn’t believe what I saw. Walter, the Becks’ son, had a sledge and he showed us where it was good to use it. It was a fantastic time.

 

[Banbury, 1983]

 

[1] One of the stories about my mother that I first heard at her funeral was about the time she persuaded her parents to let her stay at home overnight. At the time the family used to head for the air-raid shelter (i.e. the local underground station) at the end of the day and stay there all night long. My mother hated this and persuaded her parents to let her stay at home in her own bed provided she promised to join them at the air-raid shelter as soon as the air-raid sirens sounded. She was only allowed to do this once – because she managed to sleep through two bombing runs without being woken by the sirens or the explosions.

[2]If you’re wondering why white snow is worth mentioning you probably don’t know about the density of London smog at that time.


Unpivot

Wed, 2017-06-14 09:46

An interesting observation appeared recently as a side-channel on a question on the OTN database forum – how does Oracle execute an unpivot() operation. Here’s an example of such a query:

rem
rem     Script:         unpivot_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2017
rem

create table t1( id, col1, col2, col3, col4, col5, padding )
cache
pctfree 95 pctused 5
-- compress for query low
as
select
        1, 100 , 200 , 300 , 400 , 500,rpad('x',100)
from
        all_objects
where
        rownum <= 50000
;

execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for all columns size 1')

select 
        /*+ gather_plan_statistics find this */
        id, max(val) as high_val
from
        t1
unpivot include nulls (
        val for source in (col1, col2, col3, col4, col5)
)
group by id
order by id
; 

I’ve created a table with 50,000 rows (all_objects is quite large in 12.1.0.2 and 12.2.0.1), but with lots of free space per block so that I get three rows per block for a total of roughly 16,667 blocks which is going to make it fairly easy to spot any interesting session statistics. Then I’ve used an unpivot() call that has the effect of turning one row with five columns into five rows with one column.

Here’s the basic execution plan for the query (as pulled from memory):


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        | 10695 (100)|      1 |00:00:00.18 |   16671 |       |       |          |
|   1 |  SORT GROUP BY       |      |      1 |      1 | 10695   (2)|      1 |00:00:00.18 |   16671 |  2048 |  2048 | 2048  (0)|
|   2 |   VIEW               |      |      1 |    250K| 10653   (2)|    250K|00:00:01.41 |   16671 |       |       |          |
|   3 |    UNPIVOT           |      |      1 |        |            |    250K|00:00:00.52 |   16671 |       |       |          |
|   4 |     TABLE ACCESS FULL| T1   |      1 |  50000 |  2131   (2)|  50000 |00:00:00.12 |   16671 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

According to the plan Oracle has done one full tablescan on the data (Starts = 1), but the unpivot operation has used this to produce 250,000 rows of output from 50,000 rows of input (the A-Rows column). Note, however, that the cost of the view operation is 5 times the cost of the tablescan but, on the other hand, the number of buffers visited is 16,671 (which matches the size of the table). So have we done 5 tablescans with a union all, or have we done one tablescan?

The next step is to look at the 10053 (optimizer) trace file, specifically for the “unparsed query” which (despite the final plan table showing the plan we’ve just seen above) looked like this – after applying a few purely cosmetic changes:

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT "from$_subquery$_002"."ID" "ID",MAX("from$_subquery$_002"."VAL") "HIGH_VAL" 
FROM    ( 
                   (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL1' "SOURCE","T1"."COL1" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL2' "SOURCE","T1"."COL2" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL3' "SOURCE","T1"."COL3" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL4' "SOURCE","T1"."COL4" "VAL" FROM "TEST_USER"."T1" "T1") 
        UNION ALL  (SELECT "T1"."ID" "ID","T1"."PADDING" "PADDING",'COL5' "SOURCE","T1"."COL5" "VAL" FROM "TEST_USER"."T1" "T1")
        ) "from$_subquery$_002" 
GROUP BY "from$_subquery$_002"."ID" 
ORDER BY "from$_subquery$_002"."ID"
;

And then there’s the outline (which I can see in the 10053 trace, or in the plan pulled from memory by a call to dbms_xplan.display_cursor()).


Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      DB_VERSION('12.1.0.2')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SEL$3")
      OUTLINE_LEAF(@"SEL$4")
      OUTLINE_LEAF(@"SEL$5")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE_LEAF(@"SEL$6")
      NO_ACCESS(@"SEL$6" "from$_subquery$_002"@"SEL$6")
      FULL(@"SEL$5" "T1"@"SEL$5")
      FULL(@"SEL$4" "T1"@"SEL$4")
      FULL(@"SEL$3" "T1"@"SEL$3")
      FULL(@"SEL$2" "T1"@"SEL$2")
      FULL(@"SEL$1" "T1"@"SEL$1")
      END_OUTLINE_DATA
  */

And if I also pull the alias (query block and fully qualified table name) information from memory:


Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$6
   2 - SET$1 / from$_subquery$_002@SEL$6
   3 - SET$1 
   4 - SEL$5 / T1@SEL$5

Then there’s the alternative (though not necessarily correct) way to find an “unparsed” version of the query – dbms_sql2.expand_sql_text() in 11g, dbms_utility.expand_sql_text() in 12c:


variable m_sql_out clob

declare
        m_sql_in    clob :=
                'select /*+ gather_plan_statistics find this */ id, max(val) as high_val
                from    t1
                unpivot include nulls ( val for source in (col1, col2, col3, col4, col5) )
                group by id
                order by id
                '
        ;
begin

--      dbms_sql2.expand_sql_text(        -- 11g
        dbms_utility.expand_sql_text(     -- 12c
                m_sql_in,
                :m_sql_out
        );

end;
/

set long 20000
print m_sql_out

M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."ID" "ID",MAX("A1"."VAL") "HIGH_VAL" FROM  ( (SELECT "A3"."ID" "ID",
"A3"."PADDING" "PADDING",'COL1' "SOURCE","A3"."COL1" "VAL" FROM "TEST_USER"."T1"
 "A3") UNION ALL  (SELECT "A4"."ID" "ID","A4"."PADDING" "PADDING",'COL2' "SOURCE
","A4"."COL2" "VAL" FROM "TEST_USER"."T1" "A4") UNION ALL  (SELECT "A5"."ID" "ID
","A5"."PADDING" "PADDING",'COL3' "SOURCE","A5"."COL3" "VAL" FROM "TEST_USER"."T
1" "A5") UNION ALL  (SELECT "A6"."ID" "ID","A6"."PADDING" "PADDING",'COL4' "SOUR
CE","A6"."COL4" "VAL" FROM "TEST_USER"."T1" "A6") UNION ALL  (SELECT "A7"."ID" "
ID","A7"."PADDING" "PADDING",'COL5' "SOURCE","A7"."COL5" "VAL" FROM "TEST_USER".
"T1" "A7")) "A1" GROUP BY "A1"."ID" ORDER BY "A1"."ID"

So at some level the optimizer does think it’s doing 5 tablescans – and the t1 reported in the unpivot plan is from the fifth (i.e. last) copy of the table in the union all. But the execution statistics and the session statistics say I’ve done just one tablescan – and this persists even when I make the table so large that it (a) won’t fit the buffer cache, and/or (b) uses direct path reads, and/or (c) runs under Exadata with Hybrid columnar compression.

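If you want to confirm the session statistics claim for yourself, a simple before/after check of the tablescan statistics in the session running the query is enough. Here’s a minimal sketch (assuming you have select privileges on the relevant v$ views) – run it, run the unpivot query, then run it again and compare the deltas:


select
        sn.name, ms.value
from
        v$statname      sn,
        v$mystat        ms
where
        ms.statistic# = sn.statistic#
and     sn.name in (
                'table scans (short tables)',
                'table scans (long tables)',
                'table scan blocks gotten',
                'table scan rows gotten'
        )
order by
        sn.name
;
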
So what’s (probably) happening ?

I think Oracle has a code path that says it’s doing a union all of tablescans (at least for this example) but tells it that the union all is there as an expansion of an unpivot so (sticking with an example that does a tablescan into the buffer cache) Oracle reads the number of blocks dictated by the current multiblock read count into the cache, pins that one batch of blocks, scans the batch 5 times (or as required), unpins the batch and then reads the next batch. So the session does five tablescans but does them in a way that lets you see only one tablescan in the statistics.

Footnote

There was a change in the results when I tested this on 12.2.0.1; the unparsed query reported only a two-part union all subquery, and the table alias information in the plan identified the referenced table as the copy of the table from the second subquery in the union all. More significantly the cost of the VIEW operation was a close match to the cost of a single tablescan, rather than being a multiple thereof:


select * from table(dbms_xplan.display_cursor('1k077bzp0t6mn',null,'outline alias cost allstats last'));

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |      1 |        |  1117 (100)|      1 |00:00:00.13 |    8350 |   8334 |       |       |          |
|   1 |  SORT GROUP BY       |      |      1 |    250K|  1117   (6)|      1 |00:00:00.13 |    8350 |   8334 |  2048 |  2048 | 2048  (0)|
|   2 |   VIEW               |      |      1 |    250K|  1076   (2)|    250K|00:00:00.13 |    8350 |   8334 |       |       |          |
|   3 |    UNPIVOT           |      |      1 |        |            |    250K|00:00:00.09 |    8350 |   8334 |       |       |          |
|   4 |     TABLE ACCESS FULL| T1   |      1 |  50000 |  1074   (2)|  50000 |00:00:00.07 |    8350 |   8334 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$3
   2 - SET$1 / from$_subquery$_002@SEL$3
   3 - SET$1
   4 - SEL$2 / T1@SEL$2

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('12.2.0.1')
      DB_VERSION('12.2.0.1')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      OUTLINE_LEAF(@"SEL$2")
      OUTLINE_LEAF(@"SET$1")
      OUTLINE_LEAF(@"SEL$3")
      NO_ACCESS(@"SEL$3" "from$_subquery$_002"@"SEL$3")
      FULL(@"SEL$2" "T1"@"SEL$2")
      END_OUTLINE_DATA
  */


M_SQL_OUT
--------------------------------------------------------------------------------
SELECT "A1"."ID" "ID",MAX("A1"."VAL") "HIGH_VAL" FROM  (SELECT "A3"."ID" "ID","A
3"."PADDING" "PADDING",'COL1' "SOURCE","A3"."COL1" "VAL" FROM "TEST_USER"."T1" "
A3" UNION ALL SELECT "A3"."ID" "ID","A3"."PADDING" "PADDING",'COL2' "SOURCE","A3
"."COL2" "VAL" FROM "TEST_USER"."T1" "A3" UNION ALL SELECT "A3"."ID" "ID","A3"."
PADDING" "PADDING",'COL3' "SOURCE","A3"."COL3" "VAL" FROM "TEST_USER"."T1" "A3"
UNION ALL SELECT "A3"."ID" "ID","A3"."PADDING" "PADDING",'COL4' "SOURCE","A3"."C
OL4" "VAL" FROM "TEST_USER"."T1" "A3" UNION ALL SELECT "A3"."ID" "ID","A3"."PADD
ING" "PADDING",'COL5' "SOURCE","A3"."COL5" "VAL" FROM "TEST_USER"."T1" "A3") "A1
" GROUP BY "A1"."ID" ORDER BY "A1"."ID"

It used to be hard enough working out what the run-time engine was doing when the optimizer and the run-time engine claimed that they were working on the same (transformed version of the) SQL; but now we can get the impression that the optimizer is directing the run-time engine to execute a plan that couldn’t possibly match the (unparsed) query that the optimizer had been considering.


dbms_sqldiag

Mon, 2017-06-12 06:48

If you’re familiar with SQL Profiles and SQL Baselines you may also know about SQL Patches – a feature that allows you to construct hints that you can attach to SQL statements at run-time without changing the code. Oracle 12c Release 2 introduces a couple of important changes to this feature:

  • It’s now official – the feature has been moved from package dbms_sqldiag_internal to package dbms_sqldiag.
  • The limitation of 500 characters has been removed from the hint text – it’s now a CLOB column.

H/T to Nigel Bayliss for including this detail in his presentation to the UKOUG last week, and pointing out that it’s also available for Standard Edition.

There are a couple of other little changes, as you can see from the two extracts of the 12.2 declarations of dbms_sqldiag and dbms_sqldiag_internal below:


dbms_sqldiag
------------
FUNCTION CREATE_SQL_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_TEXT                       CLOB                    IN
 HINT_TEXT                      CLOB                    IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

FUNCTION CREATE_SQL_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 HINT_TEXT                      CLOB                    IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

dbms_sqldiag_internal
---------------------
FUNCTION I_CREATE_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_ID                         VARCHAR2                IN
 HINT_TEXT                      CLOB                    IN
 CREATOR                        VARCHAR2                IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

FUNCTION I_CREATE_PATCH RETURNS VARCHAR2
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_TEXT                       CLOB                    IN
 HINT_TEXT                      CLOB                    IN
 CREATOR                        VARCHAR2                IN
 NAME                           VARCHAR2                IN     DEFAULT
 DESCRIPTION                    VARCHAR2                IN     DEFAULT
 CATEGORY                       VARCHAR2                IN     DEFAULT
 VALIDATE                       BOOLEAN                 IN     DEFAULT

  • The function name changes from i_create_patch to create_patch when it is exposed in dbms_sqldiag.
  • There are two versions of the function – one that requires you to supply the exact SQL text, and a new version that allows you to supply an SQL ID.
  • The internal function also adds a creator to the existing parameter list – and it doesn’t have a default, so if you’ve got some code to use the internal version already it’s not going to work on 12.2 until you change it.

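Putting that together, here’s a minimal sketch of a call to the newly exposed function using the sql_id variant shown above – the sql_id, hint text and patch name are made-up values for illustration only:


set serveroutput on

declare
        v_patch_name    varchar2(128);
begin
        v_patch_name := dbms_sqldiag.create_sql_patch(
                sql_id          => 'abcd1234xyz99',     -- hypothetical sql_id of the target statement
                hint_text       => 'full(t1@sel$1)',    -- hypothetical hint text
                name            => 'demo_sql_patch',
                description     => 'Demo patch attached by sql_id',
                validate        => true
        );
        dbms_output.put_line('Patch created: ' || v_patch_name);
end;
/
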
I was prompted to write this note by a tweet asking me if there’s any SQL available to see the contents of an SQL Profile in 11g and 12c. (I published some simple code several years ago for 10g, but Oracle changed the base tables in 11g). The answer is yes, probably on the Internet somewhere, but here’s some code I wrote a couple of years ago that works against the more recent versions of Oracle:

rem
rem     sql_profile_baseline_11g.sql
rem     J.P.Lewis
rem     July 2010
rem

set pagesize 60
set linesize 132
set trimspool on

column hint format a70 wrap word
column signature format 999,999,999,999,999,999,999

break on signature skip 1 on opt_type skip 1 on plan_id skip 1

spool sql_profile_baseline_11g

select
        prf.signature,
        decode(
                obj_type,
                1,'Profile',
                2,'Baseline',
                3,'Patch',
                'Other'
        )       opt_type,
        prf.plan_id,
        extractvalue(value(tab),'.')    hint
from
        (
        select
                /*+ no_eliminate_oby */
                *
        from
                sqlobj$data
        where
                comp_data is not null
        order by
                signature, obj_type, plan_id
        )       prf,
        table(
                xmlsequence(
                        extract(xmltype(prf.comp_data),'/outline_data/hint')
                )
        )       tab
;


This will report the hints associated with SQL Baselines, SQL Profiles, and SQL Patches – all three store the data in the same base table. As a minor variation I also have a query that will report a named profile/baseline/patch, but this requires a join to the sqlobj$ table. As you can see from the substitution variable near the end of the text, the script will prompt you for an object name.


set pagesize 60
set linesize 180
set trimspool on

column  plan_name format a32
column  signature format 999,999,999,999,999,999,999
column  category  format a10
column  hint format a70 wrap word

break on plan_name skip 1 on signature skip 1 on opt_type skip 1 on category skip 1 on plan_id skip 1

spool sql_profile_baseline_11g

select
        prf.plan_name,
        prf.signature,
        decode(
                obj_type,
                1,'Profile',
                2,'Baseline',
                3,'Patch',
                  'Other'
        )       opt_type,
        prf.category,
        prf.plan_id,
        extractvalue(value(hnt),'.') hint
from
        (
        select
                /*+ no_eliminate_oby */
                so.name         plan_name,
                so.signature,
                so.category,
                so.obj_type,
                so.plan_id,
                sod.comp_data
                from
                        sqlobj$         so,
                        sqlobj$data     sod
                where
                        so.name = '&m_plan_name'
                and     sod.signature = so.signature
                and     sod.category = so.category
                and     sod.obj_type = so.obj_type
                and     sod.plan_id = so.plan_id
                order by
                        signature, obj_type, plan_id
        )       prf,
        table (
                select
                        xmlsequence(
                                extract(xmltype(prf.comp_data),'/outline_data/hint')
                        )
                from
                        dual
        )       hnt
;


Lagniappe:

One of the enhancements that appeared in 12c for SQL Baselines was that the plan the baseline was supposed to produce was stored in the database so that Oracle could check that the baseline would still reproduce the expected plan before applying it. These plans (also generated for Profiles and Patches) are stored in the table sqlobj$plan, and the dbms_xplan package has been enhanced with three new functions to report them:


FUNCTION DISPLAY_SQL_PATCH_PLAN RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NAME                           VARCHAR2                IN
 FORMAT                         VARCHAR2                IN     DEFAULT

FUNCTION DISPLAY_SQL_PLAN_BASELINE RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SQL_HANDLE                     VARCHAR2                IN     DEFAULT
 PLAN_NAME                      VARCHAR2                IN     DEFAULT
 FORMAT                         VARCHAR2                IN     DEFAULT

FUNCTION DISPLAY_SQL_PROFILE_PLAN RETURNS DBMS_XPLAN_TYPE_TABLE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NAME                           VARCHAR2                IN
 FORMAT                         VARCHAR2                IN     DEFAULT

e.g.
SQL> select * from table(dbms_xplan.display_sql_profile_plan('SYS_SQLPROF_015c9bd3bceb0000'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL text: select        t1.id, t2.id from       t1, t2 where    t1.id between 10000 and
          20000 and     t2.n1 = t1.n1 and       t2.n1 = t2.v2
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------
SQL Profile Name: SYS_SQLPROF_015c9bd3bceb0000
Status:           ENABLED
Plan rows:        From dictionary
--------------------------------------------------------------------------------

Plan hash value: 3683239666

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |          | 10501 |   287K|   248   (4)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR         |          |       |       |     0   (0)|          |        |      |            |
|   2 |   PX SEND QC (RANDOM)   | :TQ10002 | 10501 |   287K|   248   (4)| 00:00:01 |  Q1,02 | P->S | QC (RAND)  |
|*  3 |    HASH JOIN BUFFERED   |          | 10501 |   287K|   248   (4)| 00:00:01 |  Q1,02 | PCWP |            |
|   4 |     PX RECEIVE          |          | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,02 | PCWP |            |
|   5 |      PX SEND HASH       | :TQ10000 | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   6 |       PX BLOCK ITERATOR |          | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | PCWC |            |
|*  7 |        TABLE ACCESS FULL| T1       | 10002 |    97K|   123   (3)| 00:00:01 |  Q1,00 | PCWP |            |
|   8 |     PX RECEIVE          |          |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,02 | PCWP |            |
|   9 |      PX SEND HASH       | :TQ10001 |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | P->P | HASH       |
|  10 |       PX BLOCK ITERATOR |          |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | PCWC |            |
|* 11 |        TABLE ACCESS FULL| T2       |   104K|  1845K|   124   (4)| 00:00:01 |  Q1,01 | PCWP |            |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T2"."N1"="T1"."N1")
   7 - filter("T1"."ID"<=20000 AND "T1"."ID">=10000)
  11 - filter("T2"."N1"=TO_NUMBER("T2"."V2"))

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2

Disclaimer – I’ve checked only the SQL_PROFILE function call on 12.2, after creating a profile to check that my old 11g report still worked in 12c.

 


12.2 Partitions

Fri, 2017-06-09 04:13

At the end of my presentation to the UKOUG Database SIG yesterday I summed up (most of) the points I’d made with a slide making the claim:

In 12.2 you can: Convert a simple table to partitioned with multi-column automatic list partitions, partially indexed, with read only segments, filtering out unwanted data, online in one operation.

 

Last night I decided I ought to demonstrate the claim – so here’s a little code, first creating a simple heap table:


rem
rem     Script:         122_features.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2017
rem
rem     Last tested
rem             12.2.0.1
rem

create table t1(
        date_start      not null,
        date_end        not null,
        id              not null,
        client_id,
        resort_code,
        uk_flag,
        v1,
        padding,
        constraint t1_range_ck check ((date_end - date_start) in (7, 14, 21))
)
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        trunc(sysdate,'yyyy') + 7 *  mod(rownum, 8)                                     date_start,
        trunc(sysdate,'yyyy') + 7 * (mod(rownum, 8) + trunc(dbms_random.value(1,4)))    date_end,
        rownum                                          id,
        trunc(dbms_random.value(1e5,2e5))               client_id,
        trunc(dbms_random.value(1e4,2e4))               resort_code,
        case when mod(rownum,275) = 0 then 1 end        uk_flag,
        lpad(rownum,10,'0')                             v1,
        lpad('x',100,'x')                               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5 -- > "GT" inserted to avoid WordPress formatting issue
;

create index t1_client_idx on t1(client_id);
create index t1_resort_idx on t1(resort_code);
create index t1_ukflag_idx on t1(uk_flag);

alter table t1 add constraint t1_pk primary key(id);

I’ve got a table which models a travel company that arranges holidays that last one, two, or three weeks and (for convenience) they all start on the same day of the week. So I generate a start and end date for each row, making sure the start date is a multiple of seven days from a base date while the end date is 7, 14, or 21 days later. I’ve got a few indexes on the data, and a primary key constraint. There’s a special flag column on the table for holidays in the UK, which is a small percentage of the holidays booked.

Eventually, when the data gets too big, I decide that I want to partition this data, and the obvious partitioning idea that springs to mind is to partition it so that holidays with the same start date and duration all end up in the same partition, with one partition per start/duration combination.

I’ve also decided that I’m going to make old data read-only, and I’m not interested in the UK holidays once they’ve gone into history so I’m going to get rid of some of them.

The index protecting the primary key will have to be global since it won’t contain the partition key; since the index on uk_flag covers a small amount of data I’m going to keep that global as well, but I want the other two indexes to be local – except that, for the older data, I’m not really interested in keeping the index on client_id.

And I don’t want to stop the application while I’m restructuring the data.

So here’s my one SQL statement:


alter table t1 modify 
partition by list (date_start, date_end) automatic (
        partition p11 values (to_date('01-Jan-2017'),to_date('08-Jan-2017')) indexing off read only,
        partition p12 values (to_date('01-Jan-2017'),to_date('15-Jan-2017')) indexing off read only,
        partition p13 values (to_date('01-Jan-2017'),to_date('22-Jan-2017')) indexing off read only,
        partition p21 values (to_date('08-Jan-2017'),to_date('15-Jan-2017')) indexing off read only,
        partition p22 values (to_date('08-Jan-2017'),to_date('22-Jan-2017')) indexing off read only,
        partition p23 values (to_date('08-Jan-2017'),to_date('29-Jan-2017')) indexing off read only,
        partition p31 values (to_date('15-Jan-2017'),to_date('22-Jan-2017')) indexing off read only,
        partition p32 values (to_date('15-Jan-2017'),to_date('29-Jan-2017')) indexing off read only,
        partition p33 values (to_date('15-Jan-2017'),to_date('05-Feb-2017')) indexing off read only
)
including rows where uk_flag is null or (date_start > to_date('01-feb-2017','dd-mon-yyyy'))
online
update indexes (
        t1_client_idx local indexing partial,
        t1_resort_idx local,
        t1_ukflag_idx indexing partial
)
;

Key Points
  • partition by list (date_start, date_end) — partitioned by a multi-column list
  • automatic — if data arrives for which there is no existing partition a new one will be created
  • indexing off — some of my partitions (the pre-defined (oldest) ones) will be subject to partial indexing
  • read only — some of my partitions (the pre-defined (oldest) ones) will be made read only
  • including rows where — some of my rows will disappear during copying [1]
  • online — Oracle will be journalling the data while I copy and apply the journal at the end
  • update indexes – specify some details about indexes [2]
  • local — some of the rebuilt indexes will be local
  • indexing partial — some of the rebuilt indexes will not hold data (viz: for the partitions declared “indexing off”)

I’ve footnoted a couple of the entries:

[1] – the copy is done read-consistently, so data inserted while the copy takes place will still appear in the final table, even if it looks as if it should have failed the including rows clause.

[2] – indexes which include the partition key will automatically be created as local indexes (and you can declare them here as global, or globally partitioned, if you want to). The manual has an error on this point; it suggests that prefixed indexes will be created as local indexes but then defines “prefixed” to mean “contains the partition key” rather than the usual “starts with the partition key”.

Job done – except for the exhaustive tests that it’s been done correctly, the load test to see how it behaves when lots of new holidays are being booked and current ones being modified, and a little bit of clearing up of “surprise” partitions that shouldn’t be there and changing some of the automatically generated table partitions to be “indexing off” (if and when necessary).

Here are a few queries – with results – showing the effects this one statement had:


select count(*) from t1;

/*
  COUNT(*)
----------
     99773

-- some rows (old UK) have disappeared from the original 100,000
*/


select
        index_name, partitioned, status, leaf_blocks, num_rows , indexing, orphaned_entries
from
        user_indexes
where   table_name = 'T1'
order by
        partitioned, index_name
;

/*
INDEX_NAME           PAR STATUS   LEAF_BLOCKS   NUM_ROWS INDEXIN ORP
-------------------- --- -------- ----------- ---------- ------- ---
T1_PK                NO  VALID            263      99773 FULL    NO
T1_UKFLAG_IDX        NO  VALID              1        136 PARTIAL NO
T1_CLIENT_IDX        YES N/A              149      62409 PARTIAL NO
T1_RESORT_IDX        YES N/A              239      99773 FULL    NO

-- Indexes: Local or global, full or partial.
*/

select
        segment_type, segment_name, count(*)
from
        user_segments
group by
        segment_type, segment_name
order by
        segment_type desc, segment_name
;

/*
SEGMENT_TYPE       SEGMENT_NAME                COUNT(*)
------------------ ------------------------- ----------
TABLE PARTITION    T1                                24
INDEX PARTITION    T1_CLIENT_IDX                     15
INDEX PARTITION    T1_RESORT_IDX                     24
INDEX              T1_PK                              1
INDEX              T1_UKFLAG_IDX                      1

-- One local index has fewer segments than the other
*/

set linesize 180
set trimspool on

column high_value format a85
break on index_name skip 1
set pagesize 200

select
        index_name, status, leaf_blocks, num_rows, partition_name, high_value
from
        user_ind_partitions
where
        index_name = 'T1_CLIENT_IDX'
--      index_name like 'T1%'
order by
        index_name, partition_position
;

/*
INDEX_NAME           STATUS   LEAF_BLOCKS   NUM_ROWS PARTITION_NAME         HIGH_VALUE
-------------------- -------- ----------- ---------- ---------------------- -------------------------------------------------------------------------------------
T1_CLIENT_IDX        UNUSABLE           0          0 P11                    ( TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P12                    ( TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P13                    ( TO_DATE(' 2017-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P21                    ( TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P22                    ( TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P23                    ( TO_DATE(' 2017-01-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P31                    ( TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P32                    ( TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-01-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     UNUSABLE           0          0 P33                    ( TO_DATE(' 2017-01-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                            , TO_DATE(' 2017-02-05 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
                                                                             )

                     USABLE            10       4126 SYS_P1528              ( TO_DATE(' 2017-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4198 SYS_P1529              ( TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4211 SYS_P1530              ( TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4214 SYS_P1531              ( TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4195 SYS_P1532              ( TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-03-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4113 SYS_P1533              ( TO_DATE(' 2017-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE             9       4027 SYS_P1534              ( TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4217 SYS_P1535              ( TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4167 SYS_P1536              ( TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-03-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4230 SYS_P1537              ( TO_DATE(' 2017-01-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4113 SYS_P1538              ( TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4069 SYS_P1539              ( TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-03-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4215 SYS_P1540              ( TO_DATE(' 2017-01-22 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4138 SYS_P1541              ( TO_DATE(' 2017-02-19 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-26 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )

                     USABLE            10       4176 SYS_P1542              ( TO_DATE(' 2017-02-05 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                            , TO_DATE(' 2017-02-12 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')
                                                                             )


*/

I’ve limited the index partition output to the index with partial indexing enabled to show that it’s the pre-defined partitions that are marked as unusable and, as you can infer from the segment summary, those unusable index partitions don’t have any segment space allocated to them.

Stress tests are left to the interested reader.


Histogram Upgrade – 2

Thu, 2017-06-01 12:00

While reading a blog post by Maria Colgan a couple of weeks ago I came across an observation about histograms that I’d not noticed before; worse still, it was a feature that seemed to make some “damage-limitation” advice I’d been giving for years a really bad idea! The threat appeared in these paragraphs:

Setting SIZE REPEAT ensures a histogram will only be created for any column that already has one. If the table is a partitioned table, repeat ensures a histogram will be created for a column that already has one on the global level.

What’s the down side to doing this?

The current number of buckets used in each histogram becomes the limit on the maximum number of buckets used for any histogram created in the future.

Unfortunately I’ve been saying for a very long time that you have to be very careful with histograms, and should probably create them through PL/SQL code; but if you have some frequency histograms that you’re sure are going to be well-behaved then using “for all columns size repeat” to gather the histogram is probably okay. Immediately after making the claim above, though, Maria’s blog posting demonstrated the truth of that claim with an example showing the highly undesirable consequences.

So imagine this: you create a frequency histogram which happens to produce 26 buckets on a particular column; from then on every time you run the gather with size repeat Oracle tries to generate 26 buckets. One day the data looks a little different, temporarily there are only 25 distinct values so on the next gather you get just 25 buckets – which means that when the “missing” value re-appears 12c will give you a Top-N histogram or even a hybrid histogram (11g would have to give you a height-balanced histogram if it noticed all 26 values). It is not safe to use size repeat if the number of distinct values that actually exist can vary from day to day.

I have to say that I was fairly shocked that I’d not come across this threat before – so obviously I created a simple model to check how nasty things could get. I had a copy of 11.2.0.4 handy and created a couple of tables cloning the data from all_objects because that’s got a couple of columns that are good for producing frequency histograms.


rem     Script:         histogram_repeat.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2017

drop table t2;
drop table t1;

create table t1 as select * from all_objects;
create table t2 as select * from t1;

delete from t1 where object_type = 'EDITION';
delete from t1 where object_type = 'EVALUATION CONTEXT';
commit;

pause ================  Baseline =======================================

select  count(distinct object_type), count(distinct owner) from t1;

execute dbms_stats.gather_table_stats(user,'t1',method_opt =>'for columns object_type owner')

select  column_name, count(*) 
from    user_tab_histograms 
where   table_name = 'T1' 
and     column_name in ( 'OBJECT_TYPE','OWNER') 
group by column_name 
order by column_name
;

select  column_name, num_buckets, histogram 
from    user_tab_columns 
where   table_name = 'T1' 
and     column_name in ( 'OBJECT_TYPE','OWNER') 
order by column_name
;

insert into t1 select * from t2 where object_type = 'EDITION';
insert into t1 select * from t2 where object_type = 'EVALUATION CONTEXT';
commit;

After creating the data I’ve deleted a few rows from t1, reported the number of distinct values in t1 for owner and object_type, then gathered stats on just those two columns using the default size. I’ve then reported the number of histogram buckets in two ways, by counting them in user_tab_histograms and by reporting them directly (with histogram type) from user_tab_columns. Then I’ve finished off by re-inserting (copying from t2) the rows I previously deleted, giving me a couple more object_type values in the table. Here are the results of the queries:


================  Baseline =======================================

COUNT(DISTINCTOBJECT_TYPE) COUNT(DISTINCTOWNER)
-------------------------- --------------------
                        23                   11

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  17
OWNER                         7

COLUMN_NAME             Buckets HISTOGRAM
-------------------- ---------- ---------------
OBJECT_TYPE                  17 FREQUENCY
OWNER                         7 FREQUENCY

I’m running on 11.2.0.4 – and I have two frequency histograms that have missed a few of the distinct values. But that’s because on the default settings 11g uses sampling (typically about 5,500 rows for smaller data sets) when creating histograms. So re-running the gather with size repeat shouldn’t allow the number of buckets to grow. Here’s what I got when I re-ran the gather (with size repeat) and the two queries a further three times:


method_opt =>'for columns object_type size repeat owner size repeat'

================  Repeat 1 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  16
OWNER                         9

COLUMN_NAME             Buckets HISTOGRAM
-------------------- ---------- ---------------
OBJECT_TYPE                  16 FREQUENCY
OWNER                         9 FREQUENCY
================  Repeat 2 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  18
OWNER                         8

COLUMN_NAME             Buckets HISTOGRAM
-------------------- ---------- ---------------
OBJECT_TYPE                  18 FREQUENCY
OWNER                         8 FREQUENCY
================  Repeat 3 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  13
OWNER                         9

COLUMN_NAME             Buckets HISTOGRAM
-------------------- ---------- ---------------
OBJECT_TYPE                  13 FREQUENCY
OWNER                         9 FREQUENCY

On the first repeat I got even fewer buckets; but on the second repeat the number of buckets bounced back up and even exceeded the original count; then on the third repeat the number of buckets dropped significantly. If you run the test your results will probably vary, but that’s the effect of the random selection of rows used to generate the histogram. Key point, though, the number of buckets generated by the gather is not limited by the current number of buckets.

But…

What happens with 12.1.0.2? Here are the results. Remember that I deleted the rows for two object_type values before I gathered the first set of stats, then put them back in before doing the repeat gathers. (The number of distinct object_types in 12c is higher than I had in 11g.)


================  Baseline =======================================

COUNT(DISTINCTOBJECT_TYPE) COUNT(DISTINCTOWNER)
-------------------------- --------------------
                        27                   25

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  27
OWNER                        25

COLUMN_NAME          NUM_BUCKETS HISTOGRAM
-------------------- ----------- ---------------
OBJECT_TYPE                   27 FREQUENCY
OWNER                         25 FREQUENCY

================  Repeat 1 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  27
OWNER                        25

COLUMN_NAME          NUM_BUCKETS HISTOGRAM
-------------------- ----------- ---------------
OBJECT_TYPE                   27 TOP-FREQUENCY
OWNER                         25 FREQUENCY

================  Repeat 2 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  27
OWNER                        25

COLUMN_NAME          NUM_BUCKETS HISTOGRAM
-------------------- ----------- ---------------
OBJECT_TYPE                   27 TOP-FREQUENCY
OWNER                         25 FREQUENCY

================  Repeat 3 =======================================

COLUMN_NAME            COUNT(*)
-------------------- ----------
OBJECT_TYPE                  27
OWNER                        25

COLUMN_NAME          NUM_BUCKETS HISTOGRAM
-------------------- ----------- ---------------
OBJECT_TYPE                   27 TOP-FREQUENCY
OWNER                         25 FREQUENCY

The number of distinct values for object_type was initially 27; after gathering stats the first time I added back the two missing object_type values, but the subsequent gathers stuck with 27 buckets rather than extending to 29 – so the histogram changed from frequency to Top-N. If you check Maria’s blog again you’ll see that this can make a big difference, particularly if the two new values happen to be the lowest and highest values for the column.

The number of buckets on a REPEAT is fixed by the number of existing buckets in 12c. That to me is a major change in behaviour and one you’ll have to watch out for on the upgrade. In 11g if the number of actual values stored dropped briefly the situation was self-correcting; if some new values were introduced the situation was self-correcting – although in both cases the histogram isn’t necessarily telling the truth the way you’d like it. In 12c the situation doesn’t self-correct, and may introduce a massive change in the arithmetic (as shown in Maria’s example).

The big difference, of course, is that 12c is gathering on a 100% sample using the variation of the approximate_ndv mechanism – so it will always find the right number of values if a frequency histogram is appropriate: presumably this is what was supposed to make it okay to reproduce the number of buckets previously used. In 11g with its small sample size the number of buckets created couldn’t be guaranteed to match the number of distinct values, so I guess the code in 11g wasn’t written to be so rigorous in its assumption about the number of buckets to use next time.

tl;dr

When you upgrade from 11g to 12c think very carefully about whether or not you can still use a “table-level” size repeat to gather histograms – the upgrade may force you to identify specifically the columns that need histograms so that you can name them with an explicit (large enough) size in the gather command.
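As a sketch of what that might look like (the column names and the choice of 254 buckets here are purely illustrative, not a recommendation):

begin
        dbms_stats.gather_table_stats(
                ownname    => user,
                tabname    => 'T1',
                method_opt => 'for all columns size 1 for columns object_type size 254 owner size 254'
        );
end;
/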

 


Parallelism

Thu, 2017-05-25 09:48

Headline – if you don’t want to read the note – the /*+ parallel(N) */ hint doesn’t mean a query will use parallel execution, even if there are enough parallel execution server processes to make it possible. The parallel(N) hint tells the optimizer to consider the cost of using parallel execution for each path that it examines, but ultimately the optimizer will still take the lowest cost path (bar the odd few special cases) and that path could turn out to be a serial path.

The likelihood of parallelism appearing for a given query changes across versions of Oracle so you can be fooled into thinking you’re seeing bugs as you test new versions but it’s (almost certainly) the same old rule being applied in different circumstances. Here’s an example – which I’ll start off on 11.2.0.4:


create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;

create index t1_i1 on t1(id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

set autotrace traceonly explain

select
        count(v1)
from    t1
where   id = 10
;

select
        /*+ parallel(4) */
        count(v1)
from    t1
where   id = 10
;

select
        /*+ parallel(4) full(t1) */
        count(v1)
from    t1
where   id = 10
;

set autotrace off

I haven’t declared the index to be unique, but it clearly could be; and it’s obvious that with 1M rows and about 120M of table a parallel full scan is probably a bad idea when all we want is one row (even if you’re running Exadata!). So what do we get for the three plans – I’ll skip the predicate section – when we want to collect one row?
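The 120M figure is just back-of-envelope arithmetic, by the way, not something I measured for this note:

1,000,000 rows x (100 bytes padding + 10 bytes v1 + a few bytes for id and row overhead)
        ~= 1,000,000 rows x 120 bytes
        ~= 115MB of data, before allowing for block overheads and free space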


Base plan - unhinted
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    16 |     4   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |       |     1 |    16 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |     1 |    16 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |     1 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Hinted parallel(4)
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    16 |     4   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |       |     1 |    16 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |     1 |    16 |     4   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |     1 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Hinted parallel(4) and full(t1)
----------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |     1 |    16 |   606   (2)| 00:00:02 |        |      |            |
|   1 |  SORT AGGREGATE        |          |     1 |    16 |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |     1 |    16 |            |          |  Q1,00 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE     |          |     1 |    16 |            |          |  Q1,00 | PCWP |            |
|   5 |      PX BLOCK ITERATOR |          |     1 |    16 |   606   (2)| 00:00:02 |  Q1,00 | PCWC |            |
|*  6 |       TABLE ACCESS FULL| T1       |     1 |    16 |   606   (2)| 00:00:02 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------

In 11.2.0.4 the optimizer did consider the parallel hint when it appeared on its own – but it has compared the parallel(4) cost of 606 with the serial index cost of 16 and chosen the indexed access path. This is not a case of ignoring the hint, it’s an example of being fooled if you don’t know how the hint is really supposed to work.

But here’s an interesting change that appeared in 12.2 – this time just the plan with the parallel(4) hint on its own:


---------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |          |     1 |    16 |     4   (0)| 00:00:01 |        |      |            |
|   1 |  SORT AGGREGATE                         |          |     1 |    16 |            |          |        |      |            |
|   2 |   PX COORDINATOR                        |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)                  | :TQ10001 |     1 |    16 |            |          |  Q1,01 | P->S | QC (RAND)  |
|   4 |     SORT AGGREGATE                      |          |     1 |    16 |            |          |  Q1,01 | PCWP |            |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| T1       |     1 |    16 |     4   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|   6 |       PX RECEIVE                        |          |     1 |       |     3   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|   7 |        PX SEND HASH (BLOCK ADDRESS)     | :TQ10000 |     1 |       |     3   (0)| 00:00:01 |  Q1,00 | S->P | HASH (BLOCK|
|   8 |         PX SELECTOR                     |          |       |       |            |          |  Q1,00 | SCWC |            |
|*  9 |          INDEX RANGE SCAN               | T1_I1    |     1 |       |     3   (0)| 00:00:01 |  Q1,00 | SCWP |            |
---------------------------------------------------------------------------------------------------------------------------------

You get a parallel execution plan – although it starts with a serial index range scan operated under the new (12c) PX SELECTOR operator, which allocates a serial operation to one of the parallel execution slaves rather than running it through the query coordinator (QC). The serial range scan does a hash distribution (hashed by block address of the rowids it finds, to avoid collisions between parallel execution slaves as they do their table accesses) with the net effect that the cost of the index range scan drops from 16 to 4.

This is just one cute little trick that makes it worth looking at the upgrade to 12c – this new path is likely to be of benefit to people who had to create global (as opposed to globally partitioned) indexes on partitioned tables.

This note was prompted by a recent twitter comment by Timur Akhmadeev followed in short order by an OTN posting that added further confusion to the problem by running Siebel – which is just one of several 3rd party products that love to configure optimizer parameters with non-standard values like: optimizer_index_cost_adj = 1, or optimizer_mode = first_rows_10. (At the last update I’ve seen on the thread, there seemed to be some other reason why parallelism was being blocked.)


255 Again!

Tue, 2017-05-23 07:10

There’s so many things that can go wrong when you start using tables with more than 255 columns – here’s one I discovered partly because I was thinking about a client requirement, partly because I had a vague memory of a change in behaviour in 12c and Stefan Koehler pointed me to a blog note by Sayan Malakshinov when I asked the Oak Table if anyone remembered seeing the relevant note. Enough of the roundabout route, I’m going to start with a bit of code to create a table, stick a row in it, then update that row:

rem
rem     Script: wide_table_4.sql
rem     Author: Jonathan Lewis
rem     Dated:  May 2017
rem
rem     Last tested
rem             12.2.0.1
rem             12.1.0.2
rem             11.2.0.4
rem

set pagesize 0
set feedback off

spool temp.sql

prompt create table t1(

select
        'col' || to_char(rownum,'fm0000') || '  varchar2(10),'
from
        all_objects
where   rownum <= 320
;

prompt col0321 varchar2(10)
prompt )
prompt /

spool off

@temp

set pagesize 40
set feedback on

insert into t1 (col0010, col0280) values ('0010','0280');
commit;

update t1 set col0320 ='0320';
commit;

column file_no  new_value m_file_no
column block_no new_value m_block_no

select
        dbms_rowid.rowid_relative_fno(rowid)    file_no,
        dbms_rowid.rowid_block_number(rowid)    block_no,
        dbms_rowid.rowid_row_number(rowid)      row_no
from
        t1
;

alter system flush buffer_cache;
alter system dump datafile &m_file_no block &m_block_no;


So I’ve written one of those horrible scripts that write a script and then run it. The script creates a table with 320 columns and inserts a row that populates columns 10 and 280. That gets me two row pieces, one consisting of the 255 columns from columns 26 to 280 that goes in as row piece 0, the other consisting of the first 25 columns that goes in as row piece 1; the remaining 40 columns are not populated so Oracle “forgets” about them (“trailing nulls take no space”). The script then updates the row by setting column 320 to a non-null value.

For convenience I’ve then generated the file and block number (and row number, just to show its head piece went in as row 1 rather than row 0) of the row and done a symbolic block dump. The question is: what am I going to see in that block dump ?

Answer, and lots more, coming some time in the next 48 hours.


255 columns

Fri, 2017-05-19 11:49

This is one of my “black hole” articles – I drafted it six months ago, but forgot to publish it.

A recent post on OTN highlighted some of the interesting oddities that appear when you create tables with more than 255 columns. In fact this was a more subtle case than usual because it reminded us that it’s possible to have a partitioned table which appears to have fewer than the critical 255 columns while actually having more than 255 columns, thanks to the anomaly of how Oracle handles dropping columns in a partitioned table.  (For a useful insight see this note from Dominic Brooks – and for a nice thought about preparing simple tables for an exchange with such partitioned tables look at the 12.2 feature of “create table for exchange” in Maria Colgan’s article.)

The thread took me down the path of trying to recreate some notes I wrote a long time ago and can no longer find, and the OP’s problem wasn’t the basic one I had assumed anyway, but I thought I’d publish a bit of the work I had done so that you can see another of the funny effects that appear when your table definition has too many columns (and you use them).

The OP told us about a table with more than 350 columns, so here’s a little script I wrote to generate a table with 365 columns and some data. (It turned out that the OP had more than 390 columns in the table, but 30+ had been “dropped”.)


rem
rem	Script:		wide_table_2.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Nov 2016
rem	Purpose:	
rem
rem	Last tested 
rem		11.2.0.4
rem

create sequence s1;

declare
	m_statement_1	varchar2(32767) := 
		'create table t1(col0001 varchar2(10),';
	m_statement_2	varchar2(32767) := 
		'insert into t1 values(lpad(s1.nextval,10),';
begin
	for i in 2 .. 365 loop
		m_statement_1 := m_statement_1 ||
			'col' || to_char(i,'FM0000') || ' varchar2(100),'
		;
		m_statement_2 := m_statement_2 ||
			case when i in (2,3,4) 
			-- case when i in (122,123,124) 
			-- case when i in (262,263,264) 
				then 'dbms_random.string(''U'',ceil(dbms_random.value(0,100))),'
			     when i = 365
				then 'lpad(s1.currval,7))'
				else '''COL' || to_char(i,'FM0000') || ''','
			end
		;
	end loop;
	
	m_statement_1 := substr(m_statement_1, 1, length(m_statement_1) - 1);
	m_statement_1 := m_statement_1 || ') pctfree 25';

	execute immediate m_statement_1;

	for i in 1..10000 loop
		execute immediate m_statement_2;
	end loop;

end;
/


I’ve taken a fairly simple approach to building a string that creates a table – and it’s easy to adjust the number of columns – and a string to insert some values into that table. The insert statement will insert a row number into the first and last columns of the table and generate a random length string for a few of the columns. I’ve picked three possible sets of three columns for the random length string; one set is definitely going to be in the first row piece, one set is definitely going to be in the last row piece, and (since the row will split 110/255) one will be somewhere inside whichever is the larger row piece.

If I wanted to do something more sophisticated I’d probably have to switch to a PL/SQL array for the two statements strings – 32,767 characters doesn’t give me much freedom to play if I wanted to test a table with 1,000 columns.
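A less fiddly alternative – a sketch I haven’t tested at scale – would be to build the statement in a CLOB, since native dynamic SQL accepts CLOBs from 11g onwards; the 1,000-column loop here is purely illustrative:

declare
        m_stmt  clob := 'create table t1(col0001 varchar2(10)';
begin
--      append the remaining column definitions without worrying about the 32,767 byte limit
        for i in 2 .. 1000 loop
                m_stmt := m_stmt || ', col' || to_char(i,'FM0000') || ' varchar2(100)';
        end loop;

        m_stmt := m_stmt || ') pctfree 25';

        execute immediate m_stmt;
end;
/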

Having created and populated my table, this is what I did the following three tests with it:


analyze table t1 compute statistics;

prompt	====
prompt	CTAS
prompt	====

create table t1a pctfree 25 as select * from t1;
analyze table t1a compute statistics;

select	table_name, num_rows, avg_row_len, blocks, chain_cnt
from	user_tables
where	table_name like 'T1%'
;

prompt	======
prompt	Insert
prompt	======

truncate table t1a;
insert into t1a select * from t1;
analyze table t1a compute statistics;

select	table_name, num_rows, avg_row_len, blocks, chain_cnt
from	user_tables
where	table_name like 'T1%'
;

prompt	=============
prompt	Insert append
prompt	=============

truncate table t1a;
insert /*+ append */ into t1a select * from t1;
analyze table t1a compute statistics;

select	table_name, num_rows, avg_row_len, blocks, chain_cnt
from	user_tables
where	table_name like 'T1%'
;

The first test creates a copy (t1a, at pctfree 25 to match the original) of the original table with a simple “create as select”.

The second test truncates the table and does a basic “insert as select” to repopulate the copy table.

The third test truncates the table again and does an “insert as select” with the /*+ append */ hint to repopulate the copy table.

In all three cases (and with three variations of where the longer random strings went) I used the analyze command to gather stats on the tables so that I could get a count of the number of chained rows; and I dumped a couple of blocks from the tables to see what the inserted rows looked like.

Here’s a summary of the results when the random-length columns are near the start of the row (the position didn’t really affect the outcome and the results for 12.1.0.2 were very similar):

====
CTAS
====

TABLE_NAME             NUM_ROWS AVG_ROW_LEN     BLOCKS  CHAIN_CNT
-------------------- ---------- ----------- ---------- ----------
T1                        10000        3062       6676       3313
T1A                       10000        3062       9504        237

======
Insert
======

TABLE_NAME             NUM_ROWS AVG_ROW_LEN     BLOCKS  CHAIN_CNT
-------------------- ---------- ----------- ---------- ----------
T1                        10000        3062       6676       3313
T1A                       10000        3062       6676       3287

=============
Insert append
=============

TABLE_NAME             NUM_ROWS AVG_ROW_LEN     BLOCKS  CHAIN_CNT
-------------------- ---------- ----------- ---------- ----------
T1                        10000        3062       6676       3313
T1A                       10000        3062       9504        237

As you can see we get two significantly different results: the CTAS and the “insert append” produce tables with 9,504 blocks and 237 chained rows reported, while the original table (single row inserts) and the regular “insert as select” produce tables with 6,676 blocks and roughly 3,300 chained rows (3,313 and 3,287 respectively). It seems that the CTAS and direct path insert have minimised the number of chained rows at the cost of a dramatically increased number of blocks.

We know, of course, that every row in this table will consist of two row pieces, one of 110 columns and one of 255 columns; so every row is in some respects chained due to the potential for intra-block chaining of those two pieces, but the analyze command reports only those rows which start in one block and end in another block as chained rows – intra-block chaining doesn’t count (in this version of Oracle).

There are two questions to address in these results: the first is “What’s happening?”, the second, which we ask when we get the answer to the first, is “How come the direct path method still gives us some chained rows?”

I believe the answer to the first question is that the direct path method attempts to avoid splitting the two (unavoidable) row-pieces of a row across different blocks. Even if it means leaving a huge amount of empty space in a block, Oracle starts a new row in a new block if there isn’t enough space for both of the anticipated row-pieces to fit in the current block. I think this may be a feature to help Exadata and direct path reads, where even a relatively small number of chained rows (whose continuation pieces might be outside the current Exadata storage unit) could be catastrophic, because the single block reads needed to pick up the extra pieces would devastate the performance of an Exadata smart scan.

So why do some rows still see chaining under this strategy – I think it’s because there’s a small error in the arithmetic somewhere (possibly visible only in ASSM tablespaces, perhaps related to row-piece headers) where Oracle thinks there’s enough space for both row pieces but there isn’t quite so it tries to insert both pieces and has to chain the second one. (I’ll back this up with some analysis of block dumps in a moment).

So you have a choice – lots of wasted space and a little row-chaining, or maximum packing of data and (potentially) lots of row-chaining. But there’s more: I’ve said we get one row piece of 110 columns and one of 255 columns for each row, but the point at which the split occurs and the order in which the pieces are inserted depends on the method used.

  • Single row inserts (initial table creation): The split occurred at column 111 – so the leading 110 columns are in one row piece and the trailing 255 columns are in the other; and the row piece with the tail-end columns is inserted first.
  • Array inserts (normal): Exactly the same as the single row inserts.
  • Direct path inserts / CTAS: The split occurred at column 256, with the leading row-piece inserted first and the trailing 110 column row-piece inserted second.

I’m not sure that this really matters very much when you consider the dramatic difference in size that can appear in the comparison between direct path and normal inserts, but maybe there’s someone who will notice a performance (or even space) side effect because of this inconsistency. (And I haven’t even thought about the effects of basic or oltp compression yet!)

I said I’d come back to the row-chaining anomaly. One of the little details that I didn’t include in my code listing was the call to “analyze table … list chained rows” that I did (after executing $ORACLE_HOME/rdbms/admin/utlchain.sql) to record the head rowids of the chained rows in the chained_rows table. After doing this I ran a simple pl/sql loop to dump all the relevant blocks to the trace file:

begin
	for r in (
		select 
			dbms_rowid.rowid_relative_fno(head_rowid) file#, 
			dbms_rowid.rowid_block_number(head_rowid) block# 
			from	chained_rows
		) loop
			execute immediate 'alter system dump datafile ' || r.file# || ' block ' || r.block#;
	end loop;
end;
/

Here’s a little extract from the resulting trace file showing you what the start of a row piece looks like when dumped:

tab 0, row 0, @0x1765
tl: 2075 fb: --H-F--- lb: 0x0  cc: 255
nrid:  0x01401bc4.1
col  0: [10]  20 20 20 20 20 20 20 33 36 32
col  1: [19]  41 52 43 4a 4a 42 4e 55 46 4b 48 4c 45 47 4c 58 4c 4e 56
col  2: [ 8]  59 4b 51 46 4a 50 53 55
col  3: [17]  53 58 59 4e 4a 49 54 4a 41 5a 5a 51 44 44 4b 58 4d
col  4: [ 7]  43 4f 4c 30 30 30 35
col  5: [ 7]  43 4f 4c 30 30 30 36

A convenient thing to check is the cc: entry (end of 2nd line). You can see that this row piece has 255 columns, and if you look at the first six columns dumped you can see that it’s the row numbered 362, then there are three columns of different length strings, then two columns with the values ‘COL0005’ and ‘COL0006’ respectively. It’s the “cc:” entry that’s useful though. I’m going to do a bit of simple unix hackery:

grep " cc: " test_ora_24398.trc | sed "s/^.*cc: //"  | sort | uniq -c | sort -n
      1 1
      5 2
    112 108
    125 109
    237 110
    474 255

In my 237 blocks with chained rows I had 474 row pieces of 255 columns and 237 row pieces of 110 columns; then I had 125 row pieces that had lost (and therefore chained) one column and 112 row pieces that had lost and therefore chained 2 columns. I also had a couple of small “tail-end” pieces from earlier blocks scattered in these blocks. These figures suggest that there’s a small error (actually no more than about 20 bytes) in the calculation Oracle does to decide if it can fit a whole row into the current block or whether it has to go on to the next empty block.
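The “about 20 bytes” is my reading of those numbers rather than anything published: the columns that failed to fit are the fixed seven-character ‘COLnnnn’ values, so the worst-case shortfall in Oracle’s space check is roughly:

1 stored column  = 7 bytes of data + 1 length byte =  8 bytes
2 stored columns = 2 * 8 bytes                     = 16 bytes (plus a few bytes of row-piece overhead)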

Conclusions

When copying a table defined with more than 255 columns there’s the potential for a huge variation in the space usage and chain count depending on whether you do a CTAS (or insert /*+ append */) or a simple insert. You have to decide which option is the biggest threat to your available resources.

There is a little anomaly with the way in which rows are split that is also dependent on the method used for copying – this may also have some effect, though perhaps small enough to be ignored when compared with the space/chaining difference.

Even though CTAS/direct path insert can eliminate a lot of row chaining it is still possible to find some row chaining in the resulting data. This may be the result of a calculation error (or possibly a deliberate space saving compromise).

Lagniappe

Oracle 12.2 may do things very differently.

 


Quantum Space

Wed, 2017-05-10 07:31

Here’s a not very serious note that makes a serious point.  I’ve got a small tablespace made up of 4 files, and here’s a little report I can run against the data dictionary for that tablespace:


select 'File space' What, nvl(sum(user_bytes)/1048576,0) MB from dba_data_files where tablespace_name = 'LOB_TEST'
union all
select 'Free space',      nvl(sum(bytes/1048576),0)         from dba_free_space where tablespace_name = 'LOB_TEST'
union all
select 'Extents',         nvl(sum(bytes/1048576),0)         from dba_extents    where tablespace_name = 'LOB_TEST'
union all
select 'Segments',        nvl(sum(bytes/1048576),0)         from dba_segments   where tablespace_name = 'LOB_TEST'
;

The name of the tablespace isn’t significant – it happens to be a tablespace I created to do some tests relating to space allocation with securefile LOBs, and it’s been hanging around ever since.

The query first reports the “user” space defined by the files – remember that there will be some space for header information and other metadata, and there may be some space near the end of the file which is smaller than the minimum extent allowed for that tablespace.

After the statement of total possible usable space we get: the free space, the space taken up by used extents, and the space taken up by segments.

Here’s the result I get currently:


WHAT               MB
---------- ----------
File space        196
Free space        196
Extents             0
Segments          196

The total usable space is 196MB, of which 196MB are free with no space allocated to extents — except 196MB HAS been allocated to segments. Depending how you look at it the tablespace is either full, or empty.

If you’re in the know the puzzle resolves itself as soon as you query for the segment names, which look like: “BIN$TrDKUCvcVQbgUwEAAH9K2Q==$0” – I have one segment that is in the recycle bin, so I can recover it from the bin if I need it, but until I do Oracle can reuse the space if it wants it. Like Schrödinger’s cat, the object is both alive and dead until someone decides to peek.
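If you want to check whether the recyclebin is the culprit a quick query will tell you – and if you need the space (or the object) back for a specific table you can purge or flashback it by name. The table name in the commented statements is made up for illustration:

select  original_name, object_name, type, can_undrop
from    user_recyclebin
where   type = 'TABLE'
;

-- flashback table lob_demo to before drop;
-- purge table lob_demo;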

After a call to “purge recyclebin” the result changes to:


WHAT               MB
---------- ----------
File space        196
Free space        196
Extents             0
Segments            0

On a production system you might need to issue “purge dba_recyclebin” (if you have the appropriate privilege) to resolve the apparent contradiction as the call to “purge recyclebin” applies only to objects in your own schema.

This note was prompted by a question on the OTN database forum about a contradiction between a traditional SQL statement to report free and used space and a screen dump from Enterprise Manager.  I don’t think the thread reached a firm conclusion – but apart from the potential for the recyclebin to confuse the issue, there are extra possibilities thanks to auto-extensible data files, and Enterprise Manager’s scope for querying a completely different set of views such as dba_tablespace_usage_metrics and v$filespace_usage. In fact the EM code clearly had at least one error in it (which makes any of its results suspect) because it managed to report the critical tablespace as 390% used!

 

 


opt_estimate

Mon, 2017-05-08 02:04

The opt_estimate hint is one of many that shouldn’t be used in end-user code and isn’t officially documented. Nevertheless – like so many other hints – it’s a hint that is hard to ignore when you see it floating around the code generated by the Oracle software. This note is prompted by a twitter question from fellow Oak Table member Stefan Koehler asking whether the hint’s index_filter parameter works. Checking my library I knew the answer was yes – so after a quick exchange on twitter I said I’d write up a short note about my example, and this is it.

Although the hint is not one that you should use it’s worth writing this note as a reminder of the significance to index range scans of the access predicates and filter predicates that Oracle reports in the predicate section of an execution plan.

When a query does an index range scan it’s going to walk through a (logically) consecutive set of index leaf blocks, looking at each individual index entry in turn (and those index entries will be correctly “sorted” within the leaf block) to see if it should use the rowid it finds there to visit the table. For “perfect” use of an index Oracle may be able to identify the starting and ending positions it needs in the index and know that it should use every rowid in between to visit the table – there will be no “wasted” examinations of index entries on the way; however in a query involving a multi-column index and multiple predicates Oracle might have to use predicates on the first column(s) of the index to identify the starting and ending positions, but use further predicates on later columns in the index to decide whether or not to use each index entry to visit the table.

The predicates that Oracle can use to identify the range of leaf blocks it should visit are called access predicates, and the predicates that Oracle can use to further eliminate rowids as it walks along the leaf blocks are called filter predicates.

The simplest way to demonstrate this is with a query of the form: “Index_Column1 = … and Index_Column3 = …”, and that’s what I’ll be using in my model:


rem
rem     Script:         opt_est_ind_filter.sql
rem     Author:         Jonathan Lewis
rem
rem     Last tested
rem             11.2.0.4
rem             10.2.0.5
rem

create table t1
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                          id,
        mod(rownum - 1,100)             n1,
        rownum                          n2,
        mod(rownum - 1, 100)            n3,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to bypass WordPress formatting issue
;

create index t1_i1 on t1(n1,n2,n3) nologging;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          => 'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

select leaf_blocks from user_indexes where index_name = 'T1_I1';

The number of leaf blocks in the index was 3,062.

I’ve defined n1 and n3 to match, and for any value between 0 and 99 there are 10,000 rows in the table where n1 and n3 hold that value. However, in the absence of a column group defined on (n1, n3), the optimizer is going to use its standard “no correlation” arithmetic to decide that there are 10,000 possible combinations of n1 and n3, and 100 rows per combination. Let’s see what this does for a simple query:


set autotrace traceonly explain

select  count(v1)
from    t1
where   n1 = 0 and n3 = 0
;

set autotrace off


--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    17 |   134   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE              |       |     1 |    17 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |   100 |  1700 |   134   (1)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |   100 |       |    34   (3)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1"=0 AND "N3"=0)
       filter("N3"=0)

The plan shows an index range scan where n3=0 is used as a filter predicate while n1=0 (with a tiny bit of extra precision from the n3=0) is used as the access predicate, and the optimizer has calculated that 100 rowids will be retrieved from the index and used to find 100 rows in the table.

The cost of the range scan is 34: the optimizer’s estimate is that the initial access to the index is dictated by the predicate n1 = 0, which is responsible for 1% of the index – giving us 3,062/100 leaf blocks (rounded up). Added to that there will be a little extra cost for the trip down the blevel of the index and a little extra for the CPU usage.
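Putting rough numbers on that arithmetic (the blevel of 2 is my assumption – I didn’t report it in the test output):

estimated combinations of (n1, n3)  = 100 * 100          = 10,000
estimated rows for (n1 = 0, n3 = 0) = 1,000,000 / 10,000 = 100
leaf blocks for n1 = 0              = ceil(3,062 / 100)  = 31
index range scan cost              ~= blevel + leaf blocks + CPU = 2 + 31 + 1 = 34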

Now let’s tell the optimizer that its cardinality estimate is out by a factor of 25 (rather than the factor of 100 that we know it really is) in one of two different ways:

prompt  ============================
prompt  index_scan - scale_rows = 25
prompt  ============================

select
        /*+
                qb_name(main)
                index(@main t1(n1, n2, n3))
                opt_estimate(@main index_scan   t1, t1_i1, scale_rows=25)
        */
        count(v1)
from    t1 
where   n1 = 0 and n3 = 0
;

prompt  ==============================
prompt  index_filter - scale_rows = 25
prompt  ==============================

select
        /*+
                qb_name(main)
                index(@main t1(n1, n2, n3))
                opt_estimate(@main index_filter t1, t1_i1, scale_rows=25)
        */
        count(v1)
from    t1 
where   n1 = 0 and n3 = 0
;

In both examples I’ve hinted the index to stop the optimizer from switching to a tablescan; but in the first case I’ve told Oracle that the entire index range scan has to be scaled up by a factor of 25 while in the second case I’ve told Oracle that its estimate due to the final filter has to be scaled up by a factor of 25. How does this affect the costs and cardinalities of the plans:


============================
index_scan - scale_rows = 25
============================
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    17 |  3285   (1)| 00:00:17 |
|   1 |  SORT AGGREGATE              |       |     1 |    17 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |   100 |  1700 |  3285   (1)| 00:00:17 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |  2500 |       |   782   (2)| 00:00:04 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1"=0 AND "N3"=0)
       filter("N3"=0)



==============================
index_filter - scale_rows = 25
==============================
--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    17 |  2537   (1)| 00:00:13 |
|   1 |  SORT AGGREGATE              |       |     1 |    17 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |   100 |  1700 |  2537   (1)| 00:00:13 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |  2500 |       |    34   (3)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1"=0 AND "N3"=0)
       filter("N3"=0)

In both cases the cardinality estimate has gone up by a factor of 25 for the index range scan. Notice, though, that the optimizer is now suffering from cognitive dissonance – it “knows” that it’s got 2,500 rowids to use to visit the table, it “knows” there are no extra predicates to eliminate rows from the table when it gets there, but it also “knows” that it’s going to find only 100 rows. Messing around with opt_estimate() and cardinality() hints is difficult to get right.

More significant, for the purposes of this note, are the costs. When we use the index_filter parameter the optimizer still thinks it’s going to access the same number of leaf blocks and the only correction it has to make is the number of rowids it finds in those blocks – so the index range scan cost hasn’t changed (though I suppose in some cases it might change slightly due to increased CPU costs). When we use the index_scan parameter the optimizer scales up its estimate of the number of leaf blocks (hence cost), which we can see in the figures 782 / 25 = 31.28. (Without going into the trace file and checking exact details that’s close enough to the previously reported 34 for me to think it’s allowing for 25 times the number of leaf blocks plus a chunk more CPU.)

Conclusion

As I said at the outset, opt_estimate() really isn’t a hint you should be playing with, but I hope that this note has helped shed some light on the significance of access predicates and filter predicates in relation to the costs of index range scans.

Footnote

There were two significant details in the notes I had in my script. First was the frequency of the expression “it looks as if” – which is my shorthand for “I really ought to do some more tests before I publish any conclusions”; second was that my most recent testing had been on 10.2.0.5 (where the results were slightly different thanks to sampling in the statistics). Given that Stefan Koehler had mentioned 11.2.0.3 as his version I ran up an instance of 11.1.0.7 – and found that the index_filter example didn’t scale up the cardinality – so maybe his problem is a version problem.

 


Aliases

Tue, 2017-05-02 03:23

Here’s a performance problem that came up on OTN recently. The following query (reformatted) takes “ages” to run – how do you address the problem:

SELECT
	COUNT(*) 
FROM
	smp_dbuser2.workflow_step_report
WHERE
	report_ID IN (
		SELECT	report_id
		FROM	smp_dbuser2.workflow_report
		WHERE	trunc(start_time) = '28-Apr-2017'
		AND	user_id = 'nbi_ssc'
	)
;


Various pieces of relevant information were supplied (the workflow_report table holds 1.4M rows, the workflow_step_report table holds 740M rows, and some indexes were described), but most significantly we were given the execution plan:

--------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name                 | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                      |     1 |     6 |    10T  (1)|999:59:59 |       |       |
|   1 |  SORT AGGREGATE       |                      |     1 |     6 |            |          |       |       |
|*  2 |   FILTER              |                      |       |       |            |          |       |       |
|   3 |    PARTITION HASH ALL |                      |   731M|  4187M|  5363K  (1)| 17:52:47 |     1 |   128 |
|   4 |     TABLE ACCESS FULL | WORKFLOW_STEP_REPORT |   731M|  4187M|  5363K  (1)| 17:52:47 |     1 |   128 |
|*  5 |    FILTER             |                      |       |       |            |          |       |       |
|   6 |     PARTITION HASH ALL|                      |     2 |    38 | 14161   (1)| 00:02:50 |     1 |    32 |
|*  7 |      TABLE ACCESS FULL| WORKFLOW_REPORT      |     2 |    38 | 14161   (1)| 00:02:50 |     1 |    32 |
--------------------------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( EXISTS (SELECT 0 FROM "SMP_DBUSER2"."WORKFLOW_REPORT" "WORKFLOW_REPORT" WHERE :B1=:B2
              AND "USER_ID"='nbi_ssc' AND TRUNC(INTERNAL_FUNCTION("START_TIME"))=TO_DATE(' 2017-04-28 00:00:00',
              'syyyy-mm-dd hh24:mi:ss')))
   5 - filter(:B1=:B2)
   7 - filter("USER_ID"='nbi_ssc' AND TRUNC(INTERNAL_FUNCTION("START_TIME"))=TO_DATE(' 2017-04-28
              00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

You’ll notice that the optimizer has transformed the IN subquery into an EXISTS subquery – operation 2 is a FILTER operation, and you can see that the filter predicate at operation 2 shows the existence subquery that would be executed.

If you look carefully at the execution plan (all of it), what can you deduce from it ? What, then, should be your next step in dealing with this performance problem ?

Answers at the end of the (UK) day. Resist the temptation to examine my comments in the OTN thread.


RTFM

Mon, 2017-05-01 06:55

Imagine you’re fairly new to Oracle and don’t have a lot of background information at your fingertips; then one day someone tells you to read the manual pages for the view dba_free_space. Look carefully at this sentence:

Note that if a data file (or entire tablespace) is offline in a locally managed tablespace, you will not see any extent information.

Can you spot the error ? Did you spot the error when you first read the sentence – or did you fill in the gap without noticing what you were doing ?

Let’s demonstrate the accuracy of the statement (simple cut-n-paste from an SQL*Plus session on 12.1.0.2 running in archivelog mode, and with a locally managed tablespace consisting of 4 (oracle managed) files on a filesystem):


SQL> select * from dba_free_space where tablespace_name = 'LOB_TEST';

TABLESPACE_NAME                   FILE_ID   BLOCK_ID      BYTES     BLOCKS RELATIVE_FNO
------------------------------ ---------- ---------- ---------- ---------- ------------
LOB_TEST                                4        128   51380224       6272            4
LOB_TEST                                7        128   51380224       6272            7
LOB_TEST                                8        640   47185920       5760            8
LOB_TEST                                9        128   51380224       6272            9

4 rows selected.

SQL> select file#, ts#, name from v$datafile;

     FILE#        TS# NAME
---------- ---------- ----------------------------------------------------------------------
         1          0 /u02/app/oracle/oradata/OR32/datafile/o1_mf_system_cbcysq2o_.dbf
         2          9 /u02/app/oracle/oradata/OR32/datafile/o1_mf_undotbs_d84db0s2_.dbf
         3          1 /u02/app/oracle/oradata/OR32/datafile/o1_mf_sysaux_cbcyrmyd_.dbf
         4         15 /u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchn57_.dbf
         5          6 /u02/app/oracle/oradata/OR32/datafile/o1_mf_test_8k__cbd120yc_.dbf
         6          4 /u02/app/oracle/oradata/OR32/datafile/o1_mf_users_cbcyv47y_.dbf
         7         15 /u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchnnq_.dbf
         8         15 /u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpcho47_.dbf
         9         15 /u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchok1_.dbf

9 rows selected.

SQL> alter database datafile '/u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchnnq_.dbf' offline;

Database altered.

SQL> select * from dba_free_space where tablespace_name = 'LOB_TEST';

TABLESPACE_NAME                   FILE_ID   BLOCK_ID      BYTES     BLOCKS RELATIVE_FNO
------------------------------ ---------- ---------- ---------- ---------- ------------
LOB_TEST                                4        128   51380224       6272            4
LOB_TEST                                8        640   47185920       5760            8
LOB_TEST                                9        128   51380224       6272            9

3 rows selected.

SQL> recover datafile '/u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchnnq_.dbf';
Media recovery complete.
SQL> alter database datafile '/u02/app/oracle/oradata/OR32/datafile/o1_mf_lob_test_dhpchnnq_.dbf' online;

Database altered.

SQL> select * from dba_free_space where tablespace_name = 'LOB_TEST';

TABLESPACE_NAME                   FILE_ID   BLOCK_ID      BYTES     BLOCKS RELATIVE_FNO
------------------------------ ---------- ---------- ---------- ---------- ------------
LOB_TEST                                4        128   51380224       6272            4
LOB_TEST                                7        128   51380224       6272            7
LOB_TEST                                8        640   47185920       5760            8
LOB_TEST                                9        128   51380224       6272            9

4 rows selected.

SQL> spool off

See the bit in the middle where I have “3 rows selected” for the lob_test tablespace: the manual says I “will not see any extent information” – but the only change in the output is the absence of information about the one data file that I’ve put offline.

You may want to argue that “obviously” the statement was only about the data file that was offline – but is that a couple of years’ experience allowing you to interpret the text ? Some might assume (with a little prior experience, if they hadn’t done the experiment, and given the parenthetical reference to “entire tablespace”) that the statement was about the effect on a single tablespace – and maybe others would criticise them for making unwarranted assumptions.

But maybe you’re a novice and believed what the manual actually said.

It’s a fairly silly example, of course, but the point of this note is that when you tell someone to RTFM, remember that they might actually do exactly that and not have the benefit of being able to know (unthinkingly) that the manual is wrong. If you go one step further and tell them to “stop making assumptions and RTFM” then just remember that you probably make a lot of assumptions without realising it when you read the manuals, and maybe it’s your assumptions that lead you to the correct interpretation of the manual.

Footnote:

If you’re feeling in the mood to split hairs, don’t forget that dba_free_space doesn’t usually give you any information about extents when it’s reporting locally managed tablespaces, it tells you about the space in which extents can be created; the one exception (that I know of) is when you have an object in the recyclebin and each extent of that object is listed as free space (see this article and the footnote here).  It’s only for dictionary managed tablespaces that dba_free_space reports extent information – the rows stored in the fet$ table.
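
If you want to see that exception for yourself, here’s a minimal sketch (not part of the original demonstration – the table name, row count and tablespace are arbitrary, and it assumes the recyclebin is enabled):

create table t_demo (n1 number, padding varchar2(100)) tablespace lob_test;

insert into t_demo
select  rownum, rpad('x',100)
from    dual
connect by level <= 10000;

commit;

drop table t_demo;              -- no purge, so the segment drops into the recyclebin

select  tablespace_name, file_id, block_id, blocks
from    dba_free_space
where   tablespace_name = 'LOB_TEST'
order by
        file_id, block_id
;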

 


Quiz Night

Thu, 2017-04-27 11:29

If this is the closing section of a trace file from a single end-user session that has a performance problem, what’s the most obvious deduction you can make about the cause of the problem, and what sort of action would you take next ?


OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      296      0.06       0.10          0        651          0           0
Execute    440      1.39       2.24          7       4664          0         146
Fetch      345     29.38      48.27          0    1709081          0         346
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1081     30.84      50.63          7    1714396          0         492

Misses in library cache during parse: 5
Misses in library cache during execute: 7

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                     497        0.00          0.00
  SQL*Net message from client                   496       27.03         50.35
  direct path read                                1        0.00          0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse   128199      6.94      11.46          0       2740          0           0
Execute 2274845    371.25     605.60         30   10031162          0       68200
Fetch   2225314     10.94      18.17          5     879297          0      577755
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   4628358    389.14     635.23         35   10913199          0      645955

Misses in library cache during parse: 9701
Misses in library cache during execute: 134

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  direct path read                                1        0.00          0.00
  latch: shared pool                              3        0.00          0.00

12666  user  SQL statements in session.
  495  internal SQL statements in session.
13161  SQL statements in session.
********************************************************************************
Trace file: {concealed file name}.trc
Trace file compatibility: 11.1.0.7
Sort options: default

       1  session in tracefile.
   12666  user  SQL statements in trace file.
     495  internal SQL statements in trace file.
   13161  SQL statements in trace file.
    5266  unique SQL statements in trace file.
 39046623  lines in trace file.
     742  elapsed seconds in trace file.


There’s no absolutely right answer to the last question, by the way – there are a couple of variations in approach that are likely to get to the same result in the same time, and some of those variations might have made looking at the tail-end data the 2nd or 3rd step in the sequence.


Undo Understood

Fri, 2017-04-21 04:45

It’s hard to understand all the ramifications of Oracle’s undo handling, and it’s not hard to find cases where the resulting effects are very confusing. A recent post on the OTN database forum resulted in one response insisting that the OP was obviously updating a table with frequent commits from one session while querying it from another, thereby generating a large number of undo reads in the querying session.

It’s a possible cause of the symptoms that had been described – although not the only possible cause, especially since the symptoms hadn’t been described completely. It’s actually possible to see this type of activity when there are no updates and no outstanding commits taking place at all on the target table. Unfortunately it’s quite hard to demonstrate this with a quick, simple, script in recent versions of Oracle unless you do some insanely stupid things to make the problem appear – but I know how to do “insanely stupid” in Oracle, so here we go; first some data creation:

rem
rem     Script:         undo_rec_apply_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          March 2017
rem

create table t2(v1 varchar2(100));
insert into t2 values(rpad('x',100));
commit;

create table t1
nologging
pctfree 99 pctused 1
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        cast(lpad(rownum,10,'0') as varchar2(10))       v1,
        cast(lpad('x',100,'x') as varchar2(100))        padding
from
        generator       v1,
        generator       v2
where
        rownum <= 8e4 -- > comment to bypass WordPress formatting issue
;

alter table t1 add constraint t1_pk primary key(id)
;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

The t2 table is there as a target for a large number of updates from a session other than the one demonstrating the problem. The t1 table has been defined and populated in a way that puts one row into each of 80,000 blocks (though, with ASSM and my specific tablespace definition of uniform 1MB extents, the total space is about 80,400 blocks). I’ve got a primary key declaration that allows me to pick single rows/blocks from the table if I want to.

At this point I’m going to do a lot of updates to the main table using a very inefficient strategy to emulate the type of thing that can happen on a very large table with lots of random updates and many indexes to maintain:


begin
        for i in 1..800 loop
                update t1 set v1 = upper(v1) where id = 100 * i;
                execute immediate 'alter system switch logfile';
                execute immediate 'alter system flush buffer_cache';
                commit;
                dbms_lock.sleep(0.01);
        end loop;
end;
/

set transaction read only;

I’m updating every 100th row/block in the table with single row commits, but before each commit I’m switching log files and flushing the buffer cache.

This is NOT an experiment to try on a production system, or even a development system if there are lots of busy developers or testers around – and if you’re running your dev/test in archivelog mode (which, for some of your systems, you should be) you’re going to end up with a lot of archived redo logs. I have to do this switch to ensure that the updated blocks are unpinned so that they will be written to disc and flushed from the cache by the buffer cache flush. (This extreme approach would not have been necessary in earlier versions of Oracle, but the clever developers at Oracle Corp. keep adding “damage limitation” touches to the code that I have to work around to create small tests.) Because the block has been flushed from memory before the commit, the session will record a “commit cleanout failures: block lost” on each commit. By the time this loop has run to completion there will be 800 blocks from the table on disc needing a “delayed block cleanout”.

Despite the extreme brute force I use in this loop, there is a further very important detail that has to be set before this test will work (at least in 11.2.0.4, which is what I’ve used in my test runs). I had to start the database with the hidden parameter _db_cache_pre_warm set to false. If the database isn’t started with this feature disabled Oracle will notice that the buffer cache has a lot of empty space and will “pre-warm” the cache by loading a few thousand blocks from t1 as I update one row – with the side effect that the update from the previous cycle of the loop would be cleaned out on the current cycle of the loop. If you do run this experiment, remember to reset the parameter and restart the instance when you’ve finished.
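
For reference, this is roughly what I mean (a hedged sketch – setting hidden parameters is something to do only on a private test instance, and only like this if you’re running with an spfile):

alter system set "_db_cache_pre_warm" = false scope = spfile;
shutdown immediate
startup

--      ... run the experiment, then put things back (true is the default value):

alter system set "_db_cache_pre_warm" = true scope = spfile;
shutdown immediate
startup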

I’ve finished this chunk of code with a call to “set transaction read only” – this emulates the start of a long-running query: it captures a point in time (through the current SCN) and any queries that run in the session from now on have to be read-consistent with that point in time. After doing this I need to use a second session to do a bit of hard work – in my case the following:

execute snap_rollstats.start_snap

begin
        for i in 1..10000 loop
                update t2 set v1 = upper(v1);
                update t2 set v1 = lower(v1);
                commit;
        end loop;
end;
/

execute snap_rollstats.end_snap

The calls to the snap_rollstats package simply read v$rollstat and give me a report of the changes in the undo segment statistics over the period of the loop. I’ve executed 10,000 transactions in the interval, which was sufficient on my system to use each undo segment header at least 1,000 times and (since there are 34 transaction table slots in each undo segment header) overwrite each transaction table slot about 30 times. You can infer from these comments that I have only 10 undo segments active at the time, your system may have many more (check the number of rows in v$rollstat) so you may want to scale up that 10,000 loop count accordingly.
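
The snap_rollstats package isn’t reproduced here; as a hedged approximation you could simply capture v$rollstat before and after the loop and compare the figures, e.g.:

select
        usn, xacts, gets, writes, wraps, extends
from
        v$rollstat
order by
        usn
;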

At this point, then, the only thing I’ve done since the start of my “long running query” is to update another table from another session. What happens when I do a simple count() from t1 that requires a full tablescan ?

alter system flush buffer_cache;

execute snap_filestat.start_snap
execute snap_my_stats.start_snap

select count(v1) from t1;

execute snap_my_stats.end_snap
execute snap_filestat.end_snap

I’ve flushed the buffer cache to get rid of any buffered undo blocks – again an unreasonable thing to do in production but a valid way of emulating the aging out of undo blocks that would take place in a production system – and surrounded my count() with a couple of packaged calls to report the session stats and file I/O stats due to my query. (If you’re sharing your database then the file I/O stats will be affected by the activity of other users, of course, but in my case I had a private database.)

Here are the file stats:


--------------
Datafile Stats
--------------
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs    Max      Writes      Blocks   Avg Csecs    Max
File name
-----       -----      ------    --------   ---------     -------   ---------     -------   ---------    ---      ------      ------   ---------    ---
    1          17          17       1.000        .065          17        .065           0        .000      6           0           0        .000     15
/u01/app/oracle/oradata/TEST/datafile/o1_mf_system_938s4mr3_.dbf
    3         665         665       1.000        .020         665        .020           0        .000      6           0           0        .000     15
/u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
    5         631      80,002     126.786        .000           2        .045         629        .000      6           0           0        .000     17
/u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__cz1w7tz1_.dbf

As expected I’ve done a number of multiblock reads of my data tablespace for a total of roughly 80,000 blocks read. What you may not have expected is that I’ve done 665 single block reads of the undo tablespace.

What have I been doing with all those undo blocks ? Check the session stats:


Session stats
-------------
Name                                                                     Value
----                                                                     -----
transaction tables consistent reads - undo records applied              10,014
transaction tables consistent read rollbacks                                10

We’ve been reading undo blocks so that we can create read-consistent copies of the 10 undo segment headers that were active in my instance. We haven’t (and you’ll have to trust me on this, I can’t show you the stats that aren’t there!) reported any “data blocks consistent reads – undo records applied”.
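
(The snap_my_stats package isn’t reproduced here either; a hedged way of checking the relevant figures for your own session is something like the following, run before and after the query so that you can difference the values.)

select
        sn.name, ms.value
from
        v$statname      sn,
        v$mystat        ms
where
        ms.statistic# = sn.statistic#
and     sn.name in (
                'transaction tables consistent reads - undo records applied',
                'transaction tables consistent read rollbacks',
                'data blocks consistent reads - undo records applied'
        )
;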

If you want to see a detailed explanation of what has happened you’ll need to read Oracle Core (UK source), chapter 3 (and possibly chapter 2 to warm yourself up for the topic). In outline the following type of thing happens:

  • Oracle gets to the first block updated in t1 and sees that there’s an ITL (interested transaction list) entry that hasn’t been marked as committed (we flushed the block from memory before the commit cleanout could take place so the relevant transaction is, apparently, still running and the row is still marked as locked).
  • Let’s say the ITL entry says the transaction was for undo segment 34, transaction table slot 11, sequence 999. Oracle reads the undo segment header block for undo segment 34 and checks transaction table slot 11, which is now at sequence 1032. Oracle can infer from this that the transaction that updated the table has committed – but can’t yet know whether it committed before or after the start of our “long running query”.
  • Somehow Oracle has to get slot 11 back to sequence 999 so that it can check the commit SCN recorded in the slot at that sequence number. This is where we see “undo records applied” to make the “transaction table read consistent”. It can do this because the undo segment header has a “transaction control” section in it that records some details of the most recent transaction started in that segment. When a transaction starts it updates this information, but saves the old version of the transaction control and the previous version of its transaction table slot in its first undo record, consequently Oracle can clone the undo segment header block, identify the most recent transaction, find its first undo record and apply it to unwind the transaction table information. As it does so it has also wound the transaction control section backwards one step, so it can use that (older) version to go back another step … and so on, until it takes the cloned undo segment header so far back that it takes our transaction table slot back to sequence 999 – and the job is done, we can now check the actual commit SCN.  (Or, if we’re unlucky, we might receive an ORA-01555 before we get there)

So – no changes to the t1 table during the query, but lots of undo records read because OTHER tables have been changing.
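
If you want to watch the mechanism in action, one option (a hedged suggestion, not part of the original test – the segment_id of 34 is just the number from the worked example above, and the dump command is undocumented) is to dump the relevant undo segment header before and after the query and compare the transaction table and transaction control sections in the resulting trace file:

column segment_name new_value m_segment_name

select  segment_name
from    dba_rollback_segs
where   segment_id = 34
;

alter system dump undo header '&m_segment_name';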

Footnote:

In my example the tablescan used direct path reads – so the blocks that went through delayed block cleanout were in private memory, which means they weren’t in the buffer cache and didn’t get written out to disc. When I flushed the buffer cache (again to emulate aging out of undo blocks etc.) and repeated the tablescan Oracle had to go through all that work of creating read consistent transaction tables all over again.

Footnote 2:

Despite the grand claim implied by the title I hope everyone realises that this is just one little detail of how undo handling can interfere with their expectations of performance.

 

 

