Feed aggregator

SQLcl, yet again

Kris Rice - Fri, 2016-06-10 15:03
By the Numbers: There's a new SQLcl out for download. In case there are too many to keep track of, the build numbers make it easy to tell whether you have the latest. The build posted today is sqlcl-4.2.0.15.167.0827-no-jre.zip. Here's what we are doing:

4.2.0 <-- doesn't matter at all
15    <-- year
167   <-- day of the year (Julian day)
0827  <-- time the build was done

So yes, this build was done today at 8:27am.

Repeating another SQLCL EA release

Kris Rice - Fri, 2016-06-10 15:03
    It's been a month, and a lot of bugs have been fixed in that time and new features added. That adds up to time for another EA for our new sqlcl. The third sqlcl EA is out now and can be found here on the SQL Developer 4.1 download page. Not sure what this sqlcl thing is? Check here. LOAD: making CSV loading into the db very easy. Here's a very quick example of the possibilities.
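For a flavour of the LOAD command from that EA, here's a minimal sketch (the table name and file path are made up for illustration; the CSV's first row is expected to supply the column names):

sql> load emp /tmp/emp.csv

SQLcl matches the header row to the table's columns and inserts the remaining rows.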

3 new things about sdsql

Kris Rice - Fri, 2016-06-10 15:03
New Name! The first is a new name: this EA is named sqlcl, for SQL command line. However, the binary to start it up is simply sql. Nothing is easier when you need to run some sql than typing 'sql' and hitting enter. #./sql klrice/klrice@//localhost/orcl SQLcl: Release 4.1.0 Beta on Fri Jan 30 12:53:05 2015 Copyright (c) 1982, 2015, Oracle. All rights reserved. Connected to: Oracle

SDSQL - Output Formats

Kris Rice - Fri, 2016-06-10 15:03
   In SQL Developer, Jeff has shown in many tips & tricks talks, and here on his blog, how to get data from any select into a few formats. The most common, I would say, is csv. Nothing here has changed. Well, almost. What will be new is that the html format is getting a facelift and a json format is being added; neither is in the version of sdsql or SQL Developer that is on OTN yet. As expected
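In the builds where this landed, the formats are driven by the sqlformat setting; a quick sketch, assuming an emp table to select from:

sql> set sqlformat json
sql> select * from emp;

Swapping json for csv, html, or xml switches the output format for any subsequent query.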

SDSQL - Pre and Post Commands

Kris Rice - Fri, 2016-06-10 15:03
  This is in the process of evolving but should be enough for anyone to give it a whirl. The idea is that for any given statement you may want to run something before and possibly after. Here's a trivial example. Before each command I'm issuing 2 statements: 1) a drop table and 2) creating a table from select * from v$mystat. Then after, I'm running sql that shows me the changes from that point
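The underlying idea can be sketched in plain SQL (my own illustration of the pattern, not the sdsql pre/post syntax; it assumes select privilege on v$mystat):

-- before the statement of interest: snapshot the session statistics
create table stat_snap as select statistic#, value from v$mystat;

-- ... run the statement you want to measure ...

-- after: show only the statistics that changed
select s.statistic#, s.value - p.value as delta
from   v$mystat s
join   stat_snap p on p.statistic# = s.statistic#
where  s.value != p.value;

drop table stat_snap;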

Aliases with sdsql and simpler CTAS

Kris Rice - Fri, 2016-06-10 15:03
First, we just put up a new build of sdsql. Go get it or the last thing here will not work. SQL is a great and verbose language, so there are many ways to shorten what we have to type. As simple as a view, or saving a script to call later with @/path/to/sessions.sql. SDSQL is taking it a step further and we added aliases to the tool. Almost as if right on cue, John asked if we could add them.
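As a taste, here's a hedged sketch of defining and using an alias (the query is my own example, not one shipped with the tool):

sql> alias sessions=select sid, serial#, username from v$session where username is not null;
sql> sessions

Typing the alias name runs the stored statement, and a bare alias command lists everything currently defined.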

DIY SQLcl Commands

Kris Rice - Fri, 2016-06-10 15:03
As mentioned once or twice or 100 times, sqlcl exposes javascript scripting with Nashorn to make things very scriptable. To learn more on Nashorn itself there are a lot of great write-ups, such as http://www.oracle.com/technetwork/articles/java/jf14-nashorn-2126515.html. So far, the scripting examples have been along the lines of conditional execution or looping of existing sqlcl commands. Here's an

SQLCL Monitoring itself with Longops

Kris Rice - Fri, 2016-06-10 15:03
Longops is a great way to monitor things that take some time to do their work. There's an easy example of using longops on oracle-base here. I borrowed the script from there and put it into a file named my_slow_thing.sql. Now here's a nice example of what's possible with sqlcl. The easy way to use this is to add it to your login.sql, which is what I did. Of course the script could run at any time
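For reference, the view doing the work here is v$session_longops; a generic query to watch anything long-running (a sketch, not the exact script from the post):

select sid, opname, target, sofar, totalwork,
       round(sofar/totalwork*100,1) as pct_done,
       time_remaining
from   v$session_longops
where  totalwork != 0
and    sofar <> totalwork;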

Assertions in a future Oracle release

Jeff Moss - Fri, 2016-06-10 14:34

I just found this link on OTN to vote for including assertions in a future release of the Oracle database.

A great idea – please vote for it.

One of the most important votes this month…well, I do live in England! :)

Create Temporary Tables in Oracle

Learn DB Concepts with me... - Fri, 2016-06-10 13:33

Global Temporary Tables in Oracle

Temporary tables are useful in applications where a result set is to be buffered, perhaps because it is constructed by running multiple DML operations. For example, consider the following:

A Web-based airline reservations application allows a customer to create several optional itineraries. Each itinerary is represented by a row in a temporary table. The application updates the rows to reflect changes in the itineraries. When the customer decides which itinerary she wants to use, the application moves the row for that itinerary to a persistent table.

During the session, the itinerary data is private. At the end of the session, the optional itineraries are dropped.

The examples below create temporary tables that are transaction specific (ON COMMIT DELETE ROWS) and session specific (ON COMMIT PRESERVE ROWS):

NOTE: Indexes can be created on temporary tables. They are also temporary, and the data in the index has the same session or transaction scope as the data in the underlying table.
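For instance, an index on the table created below might look like this (the index name is my own):

sql>  CREATE INDEX admin_work_area_ix ON admin_work_area (startdate);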

*********************************************************************************
HERE is an example to create a global temporary table with on commit DELETE ROWS :
*********************************************************************************

sql>  CREATE GLOBAL TEMPORARY TABLE admin_work_area
        (startdate DATE,
         enddate DATE,
         class CHAR(20))
      ON COMMIT DELETE ROWS;
      
      
    
sql>  insert into ADMIN_WORK_AREA values (sysdate,sysdate+ 1,'A');

1 row inserted.


sql>  select * from ADMIN_WORK_AREA;

1 row selected.

sql>  commit;

Commit complete.

sql>  select * from ADMIN_WORK_AREA;

no rows selected

NOTE: records in this temp table are deleted upon commit. This is equivalent to truncating the table on every commit.

*********************************************************************************
HERE is an example to create a global temporary table with on commit PRESERVE ROWS :
*********************************************************************************


sql>  CREATE GLOBAL TEMPORARY TABLE admin_work_area
        (startdate DATE,
         enddate DATE,
         class CHAR(20))
      ON COMMIT PRESERVE ROWS;
      
            
sql>  insert into ADMIN_WORK_AREA values (sysdate,sysdate+ 1,'A');

1 row inserted.

sql>  select * from ADMIN_WORK_AREA;

1 row selected.

sql>  commit;

Commit complete.

sql>  select * from ADMIN_WORK_AREA;

1 row selected.

NOW exit the session, log back in, and select from the table.

sql>  select * from ADMIN_WORK_AREA;

no rows selected

NOTE: records (rows) in this temp table are deleted only upon session exit; as long as you are using the same session you can still see them. This is equivalent to truncating the table on session exit.

Categories: DBA Blogs


Inserting Data into a table with DML Error Logging (catching errors while inserting data into a table)

Learn DB Concepts with me... - Fri, 2016-06-10 10:29


Inserting Data with DML Error Logging:
When you load a table using an INSERT statement with a subquery, if an error occurs the statement is terminated and rolled back in its entirety. This can be wasteful of time and system resources. For such INSERT statements, you can avoid this situation by using the DML error logging feature.

To use DML error logging, you add a statement clause that specifies the name of an error logging table into which the database records errors encountered during DML operations. When you add this error logging clause to the INSERT statement, certain types of errors no longer terminate and roll back the statement. Instead, each error is logged and the statement continues. You then take corrective action on the erroneous rows at a later time.

DML error logging works with INSERT, UPDATE, MERGE, and DELETE statements. This section focuses on INSERT statements.


--------------------------------------------------------
--  DDL for Table ATEST1
--------------------------------------------------------


  CREATE TABLE "ATOORPU"."ATEST1"
   (    "ID" NUMBER constraint ATEST1_PK PRIMARY KEY,
    "TDATE" DATE,
    "AMOUNT" VARCHAR2(20 BYTE),
    "ORD_NO" NUMBER
   ) ;

--------------------------------------------------------
INSERT SOME VALUES
--------------------------------------------------------


Insert into ATEST1 (ID,TDATE,AMOUNT,ORD_NO) values (1,to_date('04-APR-16','DD-MON-RR'),null,300);
Insert into ATEST1 (ID,TDATE,AMOUNT,ORD_NO) values (2,to_date('04-APR-16','DD-MON-RR'),null,300);
Insert into ATEST1 (ID,TDATE,AMOUNT,ORD_NO) values (3,to_date('01-MAR-16','DD-MON-RR'),null,100);
Insert into ATEST1 (ID,TDATE,AMOUNT,ORD_NO) values (4,to_date('01-MAR-16','DD-MON-RR'),'100',200);

--------------------------------------------------------
CREATE ERROR LOG TABLE USING THE DBMS PACKAGE :
--------------------------------------------------------


EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('ATEST1', 'ERR_ATEST1');   -- ATEST1 source table and ERR_ATEST1 error log table


Error Logging Restrictions and Caveats

Oracle Database logs the following errors during DML operations:
  • Column values that are too large
  • Constraint violations (NOT NULL, unique, referential, and check constraints)
  • Errors raised during trigger execution
  • Errors resulting from type conversion between a column in a subquery and the corresponding column of the table
  • Partition mapping errors
  • Certain MERGE operation errors (ORA-30926: Unable to get a stable set of rows for MERGE operation.)

--------------------------------------------------------
-- This will generate some insert errors (run it more than once, so the generated IDs collide with the ATEST1_PK primary key)
--------------------------------------------------------


INSERT INTO ATEST1
  SELECT ID+3,TDATE,AMOUNT,ORD_NO
  FROM ATEST1
  WHERE id > 1
  LOG ERRORS INTO ERR_ATEST1 ('daily_load') REJECT LIMIT 9;

--- daily_load is the tag; REJECT LIMIT sets the max errors allowed before the insert statement is terminated.


Note:

If the statement exceeds the reject limit and rolls back, the error logging table retains the log entries recorded so far.

--------------------------------------------------------
-- This will generate some update errors
--------------------------------------------------------


update ATEST1 set ID=3 where ID>5 LOG ERRORS INTO ERR_ATEST1 ('daily_load') REJECT LIMIT 9;

--- daily_load is the tag; REJECT LIMIT sets the max errors allowed before the update statement is terminated.


--------------------------------------------------------
LET'S CHECK THE ERROR MESSAGES RECORDED:
--------------------------------------------------------


select * from ERR_ATEST1;
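The error log table also carries a set of control columns that are worth selecting explicitly; a slightly more targeted query using the standard DML error logging columns:

select ora_err_number$, ora_err_optyp$, ora_err_tag$, ora_err_mesg$, id
from ERR_ATEST1;

ORA_ERR_NUMBER$ holds the Oracle error number, ORA_ERR_OPTYP$ the operation type (I for insert, U for update), ORA_ERR_TAG$ the tag from the LOG ERRORS clause, and ORA_ERR_MESG$ the full error message.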
Categories: DBA Blogs


“Unstructured Data” – No such thing!

Jeff Moss - Fri, 2016-06-10 06:58

I keep hearing this term lately and I dislike it.

There is no such thing as Unstructured Data. All data has structure. If it didn’t have structure we wouldn’t be able to use it.

What about free text? Well, that’s just a single column value (stored in a CLOB in Oracle, for example) and the free text is, more often than not, on a row with other columns, such as identifiers and timestamps, i.e. yet more structure.

I think what people mean when they use this “marketing foam”™ term is “data that we have not yet defined the structure for”, but in order to use it at some later stage the structure will need to be defined – that definition process doesn’t actually give the data structure in and of itself, it simply defines what that structure is, in order to be able to use it.

Interestingly, the Wikipedia article for Unstructured Data calls out the imprecise nature of the term:

The term is imprecise for several reasons:

  1. Structure, while not formally defined, can still be implied.
  2. Data with some form of structure may still be characterized as unstructured if its structure is not helpful for the processing task at hand.
  3. Unstructured information might have some structure (semi-structured) or even be highly structured but in ways that are unanticipated or unannounced.

In other words, it does have structure, but maybe we’ve not written it down, or the structure isn’t helpful to processing or is structured in ways we were not expecting – so what?…it’s still structured!

All of the above seem to me to support the view that all data does indeed have structure.

Enter your bets on Euro2016Challenge.eu now

Dimitri Gielis - Fri, 2016-06-10 05:55
Looks like I forgot to mention on my blog that this year, too, we created a betting site for the European Cup Soccer. Thanks to the people who reminded me to put this post on my blog :)

It all started in 2006 when I first created a site to promote Oracle Application Express (APEX). The site allowed people to bet on the games of the World Cup. At that time everybody was using Excel files internally to put the scores together and enter people's bets... so I thought, why not build it in APEX :) Oh, the betting is for fun and honour ... so no money involved!

Since then, every two years we have updated the site and enabled it again. Today almost 3000 people are playing with us. We changed URLs a few times; first it was called DG Tournament, then the World Cup Challenge, and this year it's the Euro 2016 Challenge.

So if you didn't put your bets in yet, there are a few hours left ... happy betting, and may the best win!


This year we (Belgium) have a chance to go far in the tournament, go go go Belgium! :)

Categories: Development

My Glamorous Life : What’s the point?

Tim Hall - Fri, 2016-06-10 04:19

Last night I put out a video of my trip home.

I’ve written about the My Glamorous Life series of videos before (here). A couple of people commented recently that “nothing happens in them” and they are “quite boring”. That’s the point really.

database option - tuning

Pat Shuff - Fri, 2016-06-10 02:07
Today we are going to look at using the diagnostics and tuning packs that come with the High Performance and Extreme Performance Editions of the database. We do not get these options with the Standard Edition or Enterprise Edition, and if we want the tuning option on Amazon RDS, EC2, Oracle IaaS, or Microsoft Azure Compute we must bring along a license for it. Diagnostics are licensed at $150 per named user or $7,500 per processor, which correlates to $294 per processor per month. Options like the SQL Tuning Advisor and Automatic SQL Tuning are part of the Tuning pack option. Tuning pack is $100 per named user or $5,000 per processor; this comes in at $196 per processor per month if we use the four-year amortization that we talked about last week.
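For what it's worth, those monthly figures appear to fold in Oracle's customary 22% annual support fee on top of the license (my reconstruction, not something stated above):

Diagnostics: ($7,500 x (1 + 4 x 0.22)) / 48 months = $293.75, roughly $294 per processor per month
Tuning:      ($5,000 x (1 + 4 x 0.22)) / 48 months = $195.83, roughly $196 per processor per month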

There are three ways to run the SQL Tuning Advisor. We can use Enterprise Manager on a central site and analyze historic data from days, weeks, and months back; unfortunately, we can not use this in conjunction with Amazon RDS. We can use Enterprise Manager Express, which is part of the database and gives you three hours of history of database performance; again, we can not use this in conjunction with Amazon RDS, because these features are disabled and turned off as part of the Amazon installation. Or we can use SQL Developer to connect to the database on all platforms, which allows us to pull down real-time diagnostics and look at live database performance. We will go through an Oracle by Example SQL Tuning Advisor Tutorial that details how to enable and use the tuning advisor packs. We will be using the 11g version of the database, but these same steps should work with 12c because the features have not changed and SQL Developer knows what to do with both versions of the database, presenting a common user interface for SQL tuning.

The first step is to find the IP address of our 11g database. We do this by going to the database console and looking at our instance detail.

We then create a connection to the database with SQL Developer. This is done first as the sys user (as sysdba), connecting to the ORCL instance at the IP address of the database. We can verify that we are connected to a High Performance Edition by issuing a select statement against the v$version view.

select * from v$version;

Before we can execute step 8 in the Tuning Advisor Tutorial we must enable the user scott and set a password for the account. To do this we expand the Other Users selection at the bottom left of the screen, find the user scott, and enable the account while setting the password.
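If you would rather do that in SQL than through the tree view, the equivalent statement (the password here is just a placeholder) is something like:

alter user scott identified by tiger account unlock;

Run it while connected as the sys user, then reconnect as scott.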

We can now connect to the 11g instance and give user scott permission to attach to the sql resources with the commands

grant advisor to scott;
grant administer sql tuning set to scott;

We then clear the existing statistics to make sure we are not looking at old artifacts but at what we are about to execute. This is done by executing

exec DBMS_STATS.DELETE_SCHEMA_STATS ('scott');

At this point we switch over to the user scott and execute a select statement

select sum(e.sal), avg(e.sal), count(1), e.deptno from dept d, emp e group by e.deptno order by e.deptno;

We can launch the SQL Tuning Advisor from the icon at the top of the screen. This opens a new tab next to the resulting output from the select statement.

The output from the tuning advisor has four parts. We can look at the statistics that were gathered, look at suggested indexes, sql profile, and restructuring statement recommendations. The index output did not say anything but the other three had recommendations.

The restructuring statement suggests that we remove the dept d definition since we really are not using it in the select statement. We then execute the following modified command

select sum(e.sal), avg(e.sal), count(1), e.deptno from emp e group by e.deptno order by e.deptno;

When we rerun the command without the dept d in the select statement we get a clean output from the SQL Advisor.

In summary, we can use Enterprise Manager, Enterprise Manager Express, or SQL Developer to run the tuning advisor. We walked through a simple example of how to do this with SQL Developer on a single select statement. We chose SQL Developer because it works on all cloud platforms, while the Enterprise Manager solutions do not work well with Amazon RDS. With these tools we can dive into SQL performance issues, tune the database, and optimize the cloud system to utilize fewer resources and cost us less money. If we can reduce the processor count by a couple of processors, that more than pays for the incremental cost of the High Performance Edition over the Enterprise Edition.

Uniquely parallel

Jonathan Lewis - Fri, 2016-06-10 01:36

Here’s a surprising (to me) execution plan from 12.1.0.2 – parallel execution to find one row in a table using a unique scan of a unique index – produced by running the following script (data creation SQL to follow):


set serveroutput off
set linesize 180
set trimspool on
set pagesize 60

alter session set statistics_level = all;

variable b1 number
exec :b1 := 50000

select /*+ parallel (3) */ id, v1 from t2 where id=:b1;

select * from table(dbms_xplan.display_cursor(null,null,'allstats parallel'));

break on dfo_number skip 1 on tq_id skip 1 on server_type

select
        dfo_number, tq_id, server_type, instance, process, num_rows
from
        v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, instance, process
;

All I’ve done is enable rowsource execution statistics, set a bind variable to a value, query a table with a /*+ parallel(3) */ hint to find the one row that will be identified by primary key, and then reported the actual execution plan. When I first ran the test Oracle didn’t report the execution statistics correctly so I’ve also queried v$pq_tqstat to show the PX servers used and the flow of data through the plan. Here’s the plan, followed by the  results from v$pq_tqstat:


SQL_ID  0dzynh9d29pt9, child number 0
-------------------------------------
select /*+ parallel (3) */ id,v1 from t2 where id=:b1

Plan hash value: 247082613

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Starts | E-Rows |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |      1 |        |        |      |            |      1 |00:00:00.02 |
|   1 |  PX COORDINATOR                   |          |      1 |        |        |      |            |      1 |00:00:00.02 |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |      0 |      1 |  Q1,01 | P->S | QC (RAND)  |      0 |00:00:00.01 |
|   3 |    TABLE ACCESS BY INDEX ROWID    | T2       |      0 |      1 |  Q1,01 | PCWP |            |      0 |00:00:00.01 |
|   4 |     BUFFER SORT                   |          |      0 |        |  Q1,01 | PCWC |            |      0 |00:00:00.01 |
|   5 |      PX RECEIVE                   |          |      0 |      1 |  Q1,01 | PCWP |            |      0 |00:00:00.01 |
|   6 |       PX SEND HASH (BLOCK ADDRESS)| :TQ10000 |      0 |      1 |  Q1,00 | S->P | HASH (BLOCK|      0 |00:00:00.01 |
|   7 |        PX SELECTOR                |          |      0 |        |  Q1,00 | SCWC |            |      0 |00:00:00.01 |
|*  8 |         INDEX UNIQUE SCAN         | T2_PK    |      0 |      1 |  Q1,00 | SCWP |            |      0 |00:00:00.01 |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   8 - access("ID"=:B1)

Note
-----
   - Degree of Parallelism is 3 because of hint

DFO_NUMBER      TQ_ID SERVER_TYP   INSTANCE PROCES   NUM_ROWS
---------- ---------- ---------- ---------- ------ ----------
         1          0 Producer            1 P003            0
                                          1 P004            1
                                          1 P005            0
                      Consumer            1 P000            0
                                          1 P001            1
                                          1 P002            0

                    1 Producer            1 P000            0
                                          1 P001            1
                                          1 P002            0
                      Consumer            1 QC              1

As you can see the table access follows a unique scan of an index and, although the rowsource execution stats report zero starts for the unique scan, we can see from v$pq_tqstat that slave P004 acquired a “row” (actually a rowid) and passed it to slave P001 which then acquired a row from the table and passed that row to the query coordinator. Oracle really did execute a parallel query, starting and stopping a total of 6 sessions to perform a single unique index access.

You’ll notice operation 7 is one you’ve only seen in the latest version of Oracle. The PX SELECTOR was introduced in 12c to reduce the number of times a complex parallel query would funnel into the query coordinator (parallel to serial) and then fan out again (serial to parallel) generating a new data flow operation tree (DFO tree) spawning one or two new parallel server groups as it did so. To stop this happening a step that needs to serialise in a 12c parallel plan can nominate one of the existing PX server processes (from each set, if necessary) to do the job so that the same set of PX servers can carry on running the query without the need for a new DFO tree to appear.

This enhancement to parallel execution plans is a good idea – except when it appears in my silly little query and turns something that ought to be quick and cheap into a job that is far more resource-intensive than it should be.

At this point, of course, you’re probably wondering what kind of idiot would put a parallel() hint into a query that was doing nothing but selecting one row by primary key – the answer is: “the Oracle optimizer in 12c”. I discovered this anomaly while creating a demonstration of the way that a distributed parallel query has to serialise through a single database link even if the operations at the two ends of the link run parallel. Here’s the SQL I wrote for the full demonstration:


rem     Script:         distributed_pq.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2016

define m_remote='test@loopback'
define m_remote='orcl@loopback'

create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        cast(lpad(rownum,8,'0') as varchar2(8))         v1,
        cast(rpad('x',100) as varchar2(100))            padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

create table t2
as
select  *
from    t1
where   mod(id,100) = 0
;

alter table t2 add constraint t2_pk primary key(id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

set serveroutput off

select
        /*+ parallel(3) */
        t1.v1, t2.v1
from
        t1,
        t2@&m_remote
where
        mod(t1.id,10) = 0
and     t2.id = t1.id
and     mod(to_number(t2.v1),10) = 1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats parallel'));

If you want to run this demo you’ll need to do something about formatting the output; more importantly you’ll have to create a database link (with a loopback link) and set up a define identifying it at the line where I’ve got orcl@loopback and test@loopback (which are my 12c and 11g loopback links respectively).

Here’s the plan (with rowsource stats) I got from the 12c test:


----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name     | Starts | E-Rows |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |          |      1 |        |        |      |            |      0 |00:01:14.67 |       7 |      0 |
|   1 |  NESTED LOOPS         |          |      1 |     10 |        |      |            |      0 |00:01:14.67 |       7 |      0 |
|   2 |   PX COORDINATOR      |          |      1 |        |        |      |            |  10000 |00:00:00.11 |       7 |      0 |
|   3 |    PX SEND QC (RANDOM)| :TQ10000 |      0 |   1000 |  Q1,00 | P->S | QC (RAND)  |      0 |00:00:00.01 |       0 |      0 |
|   4 |     PX BLOCK ITERATOR |          |      3 |   1000 |  Q1,00 | PCWC |            |  10000 |00:03:17.72 |    1745 |   1667 |
|*  5 |      TABLE ACCESS FULL| T1       |     39 |   1000 |  Q1,00 | PCWP |            |  10000 |00:00:00.06 |    1745 |   1667 |
|   6 |   REMOTE              | T2       |  10000 |      1 |        |      |            |      0 |00:01:14.44 |       0 |      0 |
----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access(:Z>=:Z AND :Z<=:Z)
       filter(MOD("T1"."ID",10)=0)

Remote SQL Information (identified by operation id):
----------------------------------------------------
   6 - SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND MOD(TO_NUMBER("V1"),10)=1
       (accessing 'ORCL@LOOPBACK' )

Note
-----
   - Degree of Parallelism is 3 because of hint

I have hacked this output a little – the “Remote SQL” section didn’t get reported by display_cursor(), so I’ve inserted the remote sql I got from a call to dbms_xplan.display() after using explain plan to generate a plan. Note the /*+ shared(3) */ hint that appears in the remote SQL – that’s the internal version of a parallel(3) hint.

In 11g the query completed in 2.4 seconds; in 12c the query took nearly 75 seconds to run, thanks to the 12c enhancement that allowed it to obey the hint! Looking at the time column (and ignoring the anomalous 3:17 at operation 4 – which might roughly be echoing 3 * 1:14) we can see that the time goes on the calls to the remote database (and a check of v$session_event shows this time spent in “SQL*Net message from db link”), so the obvious thing to do is check what actually happened at the remote database. We can do that by searching the library cache for a recognizable piece of the remote SQL – here’s the SQL to do that, with the results from 11g followed by the results from 12c:


SQL> select sql_id, child_number, executions, px_servers_executions, sql_text from v$sql
  2  where sql_text like '%SHARED%' and sql_text not like 'select sql_id%';

11g results
SQL_ID        CHILD_NUMBER EXECUTIONS PX_SERVERS_EXECUTIONS
------------- ------------ ---------- ---------------------
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
c0f292z5czhwk            0      10000                     0
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE MOD(TO_NUMBER("V1"),10)=1 AND "ID"=:1


12c results
SQL_ID        CHILD_NUMBER EXECUTIONS PX_SERVERS_EXECUTIONS
------------- ------------ ---------- ---------------------
SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
7bk51w7vtagwd            0      10000                     0
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND MOD(TO_NUMBER("V1"),10)=1

7bk51w7vtagwd            1          0                 59995
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND MOD(TO_NUMBER("V1"),10)=1

It’s not surprising to see that the query has executed 10,000 times – that’s what we were told by the Starts statistic from dbms_output.display_cursor(), but 12c has 60,000 (with a little error) PX Servers executions of the statement. That’s 10,000 executions * degree 3 * the 2 slave sets we saw in my original execution plan. (It’s an odd little quirk of the two versions of Oracle that the order of predicates in the remote SQL was reversed between 11g and 12c – leading to two different SQL_IDs).

By enabling rowsource execution stats at the system level I was able to capture the remote execution plan with its stats:


SQL_ID  7bk51w7vtagwd, child number 0
-------------------------------------
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND
MOD(TO_NUMBER("V1"),10)=1

--------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |          |      0 |        |      0 |00:00:00.01 |       0 |
|   1 |  PX COORDINATOR                   |          |      0 |        |      0 |00:00:00.01 |       0 |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |      0 |      1 |      0 |00:00:00.01 |       0 |
|*  3 |    TABLE ACCESS BY INDEX ROWID    | T2       |  29983 |      1 |      0 |00:00:22.21 |    1000 |
|   4 |     BUFFER SORT                   |          |  29995 |        |    999 |00:00:21.78 |       0 |
|   5 |      PX RECEIVE                   |          |  29924 |      1 |    994 |00:00:21.21 |       0 |
|   6 |       PX SEND HASH (BLOCK ADDRESS)| :TQ10000 |      0 |      1 |      0 |00:00:00.01 |       0 |
|   7 |        PX SELECTOR                |          |  29993 |        |    999 |00:00:06.08 |   19992 |
|*  8 |         INDEX UNIQUE SCAN         | T2_PK    |  29999 |      1 |   1000 |00:00:00.24 |   20000 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(MOD(TO_NUMBER("V1"),10)=1)
   8 - access("ID"=:1)

Unlike the test case I started with, this output did show the number of starts (with a few missing) and the work done across the slaves. Our index probe had to do two buffer gets on every execution, and we have 10,000 executions of the query so 20,000 buffer gets on the index unique scan. Even though only one slave actually does any work with the PX Selector, all three slaves in that set seem to “start” the relevant operations. The definition of the data meant that only one index probe in 10 was successful so we only visited 1,000 buffers from the table. If you’re familiar with reading plans with rowsource execution stats you’ll appreciate that something has gone wrong in the reporting here – that 1,000 at operation 3 should read 21,000 because it ought to include the 20,000 from the index scan (at least, that’s what a serial plan would do).

If you’re still wondering why running this query as a parallel query should take so long – after all it’s only 10,000 executions in 70 seconds – bear in mind that Oracle has to allocate and deallocate 6 PX servers to new sessions each time it starts; the instance activity stats showed “logons cumulative” going up by 60,000 each time I ran the driving query: that’s about 850 logons (and log offs) per second. I don’t think my test machine would give a realistic impression of the impact of a couple of copies of this query running simultaneously, but when I tried the contention introduce increased the run time to 93 seconds.

tl;dr

Watch out for poor performance becoming disastrous for distributed parallel queries when you upgrade from 11g to 12c.
