Feed aggregator

PDF417 for E-Business Suite

Tim Dexter - Mon, 2015-10-19 16:49

A while back I wrote up a how-to on 2D barcode formats. I kept things generic and covered the basics of getting the barcodes working.  Tony over in Bahrain (for we are truly international :) has had a tough time getting it working under EBS, mostly down to the usual bugbear of the JAVA_TOP, XX_TOP and getting class paths set up. It's finally working, and Tony wanted to share a document on how to get PDF417s working under EBS.

Document available here.

Thanks for sharing Tony!

Categories: BI & Warehousing

Oracle Open World Less Than A Week Away!

PeopleSoft Technology Blog - Mon, 2015-10-19 13:40

Open World is Oracle's biggest and most important conference, and PeopleSoft/PeopleTools is well-represented again this year. The PeopleTools tech team is presenting many sessions on topics that we think you will find interesting and informative.  (This optimism is borne out by the pre-registration numbers for the Tech sessions.)  Come visit us in the PeopleTools demo pods as well.

We hope you will be attending Open World '15, and if you do, please consider coming to see us at these sessions:

SESSION | TITLE | DATE / TIME | LOCATION
CON8596 | PeopleSoft Developer: Tips and Techniques | Monday, Oct 26, 12:15 | MW 3007
CON8203 | Using PeopleSoft Fluid User Interface for Exceptional HR Self-Service | Tuesday, Oct 27, 11:00 | Palace - Twin Peaks S
CON8585 | PeopleSoft Technology Update | Tuesday, Oct 27, 4:00 | MW 2008
CON8587 | PeopleSoft’s User Experience Roadmap: It Is PeopleSoft Fluid User Interface | Tuesday, Oct 27, 5:15 | MW 3016
CON8598 | Develop an Effective PeopleSoft Selective Adoption Strategy | Wednesday, Oct 28, 11:00 | MW 3016
CON8591 | PeopleSoft in the Cloud: PeopleSoft’s Cloud Deployment Architecture | Wednesday, Oct 28, 12:15 | MW 3014
CON8600 | Customer View: Adapting to Take Advantage of PeopleSoft | Wednesday, Oct 28, 3:00 | MW 3016
CON8595 | Putting It to Work: PeopleSoft Automated Configuration Manager | Wednesday, Oct 28, 3:00 | MW 3014
CON8593 | PeopleTools Security Features | Wednesday, Oct 28, 4:15 | MW 3016
CON8592 | Simplified Analytics: The Latest Generation of PeopleSoft Reporting | Wednesday, Oct 28, 4:15 | MW 3014
CON8204 | How to Make the Most of the PeopleSoft Selective Adoption Model | Wednesday, Oct 28, 4:15 | Palace - Twin Peaks N
CON8602 | PeopleTools Platform and Infrastructure Roadmap | Thursday, Oct 29, 9:30 | MW 3016
CON8599 | Putting It to Work: Rolling Out PeopleSoft’s Suite of Lifecycle Management Tools | Thursday, Oct 29, 10:45 | MW 3016
CON8589 | Putting It to Work: PeopleSoft Fluid User Interface Component | Thursday, Oct 29, 12:00 | MW 3014
CON8588 | Simplifying Complex Processes with New PeopleSoft Fluid User Interface Activity Guides | Thursday, Oct 29, 10:45 | MW 3014
CON8594 | Putting It to Work: Securing Your PeopleSoft Environment | Thursday, Oct 29, 12:00 | MW 3016
CON8586 | PeopleSoft Technology: Executive Panel | Thursday, Oct 29, 1:15 | MW 3014
CON8590 | Putting It to Work: Configuring a PeopleSoft Applications Cluster | Thursday, Oct 29, 2:30 | MW 3014
CON8601 | Putting It to Work: Getting the Most Out of PeopleSoft RESTful Services | Thursday, Oct 29, 2:30 | MW 3016

Show Us Your Vintage OTN/Java Gear

OTN TechBlog - Mon, 2015-10-19 12:27

Happy Monday!  Over the last few years the OTN team has put together some really fun shirts and promotional items.  Do us a favor and show us yours!  Either tweet a selfie to @oracleotn or post on the OTN Facebook page - http://www.facebook.com/OracleTechnologyNetwork. 

Don't forget to join the OTN at Oracle OpenWorld Group and follow the blog to stay in the know about all the GREAT activities and events that the OTN team is planning/organizing for Oracle OpenWorld in San Francisco this October. Here are links to the most recent posts -

  • OTN's Community Kiosk at Oracle OpenWorld
  • OTN Kick Off Event - Sunday, October 25th, 1-5 pm
  • Oracle ACE Sessions at OpenWorld
  • Cloud Hour at Oracle OpenWorld


Oracle Management Cloud Launch Event

Debu Panda - Mon, 2015-10-19 10:33
Wow… I have not blogged for a while. I rejoined Oracle last year to work on an exciting new product named Oracle Management Cloud.  Oracle Management Cloud will be unveiled during Oracle OpenWorld 2015.  If you are attending Oracle OpenWorld 2015, don’t miss the opportunity to attend the exciting launch event on October 27, 2015 (Tuesday) at 11am, presented by our SVP, Prakash Ramamurthy.

GEN9778 - Oracle Management Cloud: Real-Time Monitoring, Log, & IT Operations Analytics

Tuesday, October 27 at 11AM in Moscone South Room 102


Here you can find more about the launch event and Oracle Management Cloud.

Register for this event here

How many startups did Google purchase and what did it do with them?

Nilesh Jethwa - Mon, 2015-10-19 09:54

Ever wondered how many startups or companies Google has acquired so far, and how much it paid for them?

But the main question is: what happened to the acquired companies? Was it all a waste of money?

With the recent restructuring, Google became a subsidiary of Alphabet Inc., which now owns most of the pieces.

Using the InfoCaptor KPI dashboard software, we analyze the list of companies, products and services that Google has acquired since 2001.

Google has acquired around 184 companies as of October 2015 and has spent at least 28 billion USD on acquisitions.

Read the complete list of top Google acquisitions in an interactive dashboard.

exec_as_oracle_script

Pakistan's First Oracle Blog - Sat, 2015-10-17 01:04
There might be a situation where executing some DDL in a pluggable database causes the following error:

ORA-65040: operation not allowed from within a pluggable database

This error can occur when a tablespace is being dropped from within a PDB and that tablespace is a former default tablespace still holding some system objects. Even system objects cannot be moved with simple ALTER statements from within PDBs.

So in order to move these objects from within a PDB, you should use the procedure dbms_pdb.exec_as_oracle_script, which is undocumented so far.

For example:

exec dbms_pdb.exec_as_oracle_script('alter table <owner>.<table_name> move tablespace <new_tablespace>');
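A fuller sketch of the workflow (object and tablespace names here are invented for illustration; adapt them to the actual system objects reported when the drop fails):

-- Connect to the PDB as a suitably privileged user
alter session set container = MYPDB;

-- A plain ALTER of such a system object from within the PDB fails,
-- so run the same DDL through the undocumented wrapper instead
exec dbms_pdb.exec_as_oracle_script('alter table system.aud_hist move tablespace sysaux');

-- Once no system objects remain in it, the former default tablespace can be dropped
drop tablespace users_old including contents and datafiles;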

From My Oracle Support, Doc ID 1943303.1 lists:

--   This procedure enables execution of certain restricted statements (most DDLs) on metadata-linked objects, from within a PDB.
Categories: DBA Blogs

Fluid Navigation Design Guidelines Available

PeopleSoft Technology Blog - Fri, 2015-10-16 17:49

Many customers have asked us for guidance on developing with PeopleSoft's Fluid User Interface.  To address that, PeopleSoft's Information Development team has recently published a new guide for designing navigation in Fluid.  The way users navigate in the Fluid UI is quite different from (and more efficient than) navigation in classic PeopleSoft pages.  The new guidelines are available here.

The guidelines cover the philosophy of Fluid navigation focusing on the following:

  • Multiple paths for users to access functionality.
  • Users’ ability to choose the path that is most effective for them.
  • Use of Related Actions to make navigation work across all paths.

Classic PIA navigation uses a locator link (“breadcrumb”) menu with a deep menu hierarchy. Fluid navigation takes an entirely different approach, using homepages and tiles as the starting points for navigating to both Fluid and Classic pages. Further navigation to Fluid and Classic transactions is achieved through a variety of vectors like the Nav Bar, Related Actions, Search, and more.  The following 'how-to' topics are covered:

  • Create new homepages.
  • Add tiles to homepages via the Tile Repository.
  • Add a component tile to any homepage from within the component.
  • Add tiles to the NavBar.
  • Reorder tiles in the NavBar.
  • Add favorite components to My Favorites in the NavBar.

In addition, we've published a Fluid UI documentation update that covers troubleshooting, clarifications to official product docs, updates, and, significantly, guides on the CSS that PeopleSoft uses to develop our applications.

New PeopleTalk on Security

PeopleSoft Technology Blog - Fri, 2015-10-16 17:15

So it looks like I've gone mainstream!

I had the opportunity to have a talk with Marc Weintraub which has been posted on the PeopleSoft YouTube channel, here:

In this episode of PeopleSoft Talk, Marc talks with Greg Kelly about PeopleSoft security.
https://youtu.be/w9n00P2xJCE

Don't forget to review the links in the associated My Oracle Support document:
My Oracle Support - Useful PeopleSoft Security Links: DocID 2060772.1  https://support.oracle.com/epmos/faces/DocumentDisplay?id=2060772.1

 

See other videos on the PeopleSoft channel https://www.youtube.com/channel/UCUhQ7fBYvSqlGVSNMxRNPfg


Thanks to all #odevchoice voters!

XTended Oracle SQL - Fri, 2015-10-16 17:11

And special thanks to all the great people who voted for me! :)
I want to list all voters for all nominees on one page: http://orasql.org/odevchoice/all-the-voters.html
The query (using xt_http of course :):


with 
  finalists(category, userid, name) as (
      --                               SQL~ Voting:
      ------ ------------------ -------------------------------------------------------------------------------------
      select 'SQL'     , 6899,  'Stew Ashton      ' from dual union all
      select 'SQL'     , 6900,  'Sean Stuber      ' from dual union all
      select 'SQL'     , 6901,  'Sayan Malakshinov' from dual union all
      select 'SQL'     , 6902,  'Matthias Rogel   ' from dual union all
      select 'SQL'     , 6903,  'Kim Berg Hansen  ' from dual union all
      select 'SQL'     , 6904,  'Justin Cave      ' from dual union all
      select 'SQL'     , 6905,  'Erik Van Roon    ' from dual union all
      select 'SQL'     , 6906,  'Emrah Mete       ' from dual union all
      --                               PL/SQL~ Voting
      ------ ------------------ -------------------------------------------------------------------------------------
      select 'PL/SQL'  , 6907,  'Sean Stuber      ' from dual union all
      select 'PL/SQL'  , 6908,  'Roger Troller    ' from dual union all
      select 'PL/SQL'  , 6909,  'Patrick Barel    ' from dual union all
      select 'PL/SQL'  , 6910,  'Morten Braten    ' from dual union all
      select 'PL/SQL'  , 6911,  'Kim Berg Hansen  ' from dual union all
      select 'PL/SQL'  , 6912,  'Bill Coulam      ' from dual union all
      select 'PL/SQL'  , 6913,  'Adrian Billington' from dual union all
      --                               ORDS ~ Voting
      ------ ------------------ -------------------------------------------------------------------------------------
      select 'ORDS'    , 6881,  'Tim St. Hilaire  ' from dual union all
      select 'ORDS'    , 6882,  'Morten Braten    ' from dual union all
      select 'ORDS'    , 6883,  'Kiran Pawar      ' from dual union all
      select 'ORDS'    , 6884,  'Dimitri Gielis   ' from dual union all
      select 'ORDS'    , 6885,  'Dietmar Aust     ' from dual union all
      select 'ORDS'    , 6886,  'Anton Nielsen    ' from dual union all
      --                               APEX ~ Voting
      ------ ------------------ -------------------------------------------------------------------------------------
      select 'APEX'    , 6887,  'Trent Schafer    ' from dual union all
      select 'APEX'    , 6888,  'Paul MacMillan   ' from dual union all
      select 'APEX'    , 6889,  'Morten Braten    ' from dual union all
      select 'APEX'    , 6890,  'Kiran Pawar      ' from dual union all
      select 'APEX'    , 6891,  'Karen Cannell    ' from dual union all
      select 'APEX'    , 6893,  'Juergen Schuster ' from dual union all
      select 'APEX'    , 6894,  'Jari Laine       ' from dual union all
      --                             DatabaseDesign ~ Voting
      ------ ----------------- -------------------------------------------------------------------------------------
      select 'DbDesign', 6896,  'Michelle Kolbe   ' from dual union all
      select 'DbDesign', 6897,  'Mark Hoxey       ' from dual union all
      select 'DbDesign', 6898,  'Heli Helskyaho   ' from dual union all
      select 'DbDesign', 6919,  'Rob Lockard      ' from dual
)
select 
       f.category
      ,f.name         as Nominee
      ,row_number()over(partition by f.category,f.name 
                        order by 
                        case 
                           when regexp_like(t.column_value,'^user\d+$')                      then 2
                           when regexp_like(t.column_value,'^\d+$')                          then 3
                           when regexp_like(t.column_value,'\w{6}-\w{4}-\w{4}-\w{4}-\w{12}') then 4
                           else 1
                        end
                        ) n
      ,t.column_value as VoterName 
from finalists f,
     table(
         xt_http.get_matches(
            pUrl     => 'https://community.oracle.com/voting-history.jspa?ideaID='||to_char(f.userid,'fm0000')||'&start=0&numResults=1000'
           ,pPattern => 'alt="([^"]+)"'
           ,pGroup   => 1
         )
    ) t
order by 1,2,3,4


Categories: Development

12c Parallel Execution New Features: Parallel FILTER Subquery Evaluation - Part 2: Distribution Methods

Randolf Geist - Fri, 2015-10-16 15:31
Picking up from the first part of this instalment, I'll focus in this post on the available distribution methods for the new parallel FILTER subquery feature.

In this post I won't go into the details of how the optimizer selects the distribution method automatically - this will be covered in the last part.

Here I merely describe the different available methods and how to control them using the new PQ_FILTER hint, which is also mentioned in the official documentation, although I find the description there a bit hard to follow.

There are four different options available to the PQ_FILTER hint, but only two of them actually describe a distribution method. One of them tells the optimizer not to redistribute the data at all, and the last one reverts to the former pre-12c plan shape:

SERIAL: This tells the optimizer to use the pre-12c plan shape, where the FILTER operator is executed by the Query Coordinator. Depending on the SQL features used and combined, 12c sometimes still reverts to this plan shape, and it looks like in those cases you can't force the parallel filter evaluation via the PQ_FILTER hint - the outline then contains a PQ_FILTER hint that is supposed to result in a parallel filter evaluation, but the serial plan shape gets used anyway despite the hint (one way to check the outline is shown in the sketch after this list of options).

NONE: This tells the optimizer not to redistribute the data before executing the FILTER operator driving the subqueries, which means that in whatever way the data happens to be distributed, the Parallel Execution Servers will execute the filter and the corresponding subqueries for the data they currently process.

HASH: Redistribute the data by HASH before running the FILTER operator and the corresponding subqueries.

RANDOM: Redistribute the data by ROUND-ROBIN before running the FILTER operator and the corresponding subqueries. Despite being called RANDOM, the distribution method shown in the plan is ROUND-ROBIN rather than RANDOM.
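As a reference for checking what the optimizer actually recorded, the outline (including the PQ_FILTER hint) can be displayed via the OUTLINE format modifier of DBMS_XPLAN - a minimal sketch, assuming you have just run EXPLAIN PLAN for the statement in question:

select * from table(dbms_xplan.display(format => 'BASIC +OUTLINE'));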

In this post I want to show an example for each of those variants and address the following questions in addition:

1. Where and how does the PQ_FILTER hint have to be specified? How should the hint be used in case of multiple FILTERs with subqueries in the execution plan?

2. In case of a HASH distribution and multiple subqueries, what keys get used as input for the HASH function used for distribution?

Let's start with an example for each of the hints used to demonstrate the different plan shapes. For that purpose I'll re-use the same setup and query from the initial part of this instalment, with the index set to invisible:

create table t_1
compress
as
select  /*+ use_nl(a b) */
        rownum as id
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(1e5) */ * from dual
         connect by
         level <= 1e5) a
      , (select /*+ cardinality(20) */ * from dual
         connect by
         level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_1', method_opt=>'for all columns size 1')

alter table t_1 parallel 4;

create index t_1_idx on t_1 (id) invisible;

explain plan for
select /*+
pq_filter(<DIST_METHOD>)
*/ count(*) from
t_1 t
where exists (select /*+ no_unnest */ null from t_1 where t.id = t_1.id);

-- 12.1.0.2 plan shape with index invisible and PQ_FILTER(SERIAL)
------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
|* 2 | FILTER | | | | |
| 3 | PX COORDINATOR | | | | |
| 4 | PX SEND QC (RANDOM)| :TQ20000 | Q2,00 | P->S | QC (RAND) |
| 5 | PX BLOCK ITERATOR | | Q2,00 | PCWC | |
| 6 | TABLE ACCESS FULL| T_1 | Q2,00 | PCWP | |
| 7 | PX COORDINATOR | | | | |
| 8 | PX SEND QC (RANDOM)| :TQ10000 | Q1,00 | P->S | QC (RAND) |
| 9 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
|* 10 | TABLE ACCESS FULL| T_1 | Q1,00 | PCWP | |
------------------------------------------------------------------------

-- 12.1.0.2 plan shape with index invisible and PQ_FILTER(NONE)
-------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,00 | PCWP | |
|* 5 | FILTER | | Q1,00 | PCWC | |
| 6 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL | T_1 | | | |
-------------------------------------------------------------------------

-- 12.1.0.2 plan shape with index invisible and PQ_FILTER(RANDOM)
---------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | Q1,01 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,01 | PCWP | |
|* 5 | FILTER | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | Q1,01 | PCWP | |
| 7 | PX SEND ROUND-ROBIN| :TQ10000 | Q1,00 | P->P | RND-ROBIN |
| 8 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL| T_1 | Q1,00 | PCWP | |
|* 10 | TABLE ACCESS FULL | T_1 | | | |
---------------------------------------------------------------------------

-- 12.1.0.2 plan shape with index invisible and PQ_FILTER(HASH)
---------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10001 | Q1,01 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,01 | PCWP | |
|* 5 | FILTER | | Q1,01 | PCWP | |
| 6 | PX RECEIVE | | Q1,01 | PCWP | |
| 7 | PX SEND HASH | :TQ10000 | Q1,00 | P->P | HASH |
| 8 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 9 | TABLE ACCESS FULL| T_1 | Q1,00 | PCWP | |
|* 10 | TABLE ACCESS FULL | T_1 | | | |
---------------------------------------------------------------------------
So we can see that with PQ_FILTER(SERIAL) we get the former, pre-12c plan shape, which in this case results in a parallel full table scan in the filter subquery and a decomposition into multiple DFO trees - both things that should be avoided in general.

I've omitted the cost estimates, but there seems to be an odd inconsistency: The new parallel FILTER evaluation results in a higher cost estimate than the old serial one, simply due to the fact that the full table scan cost is lowered by the chunked parallel table scan (PX BLOCK ITERATOR) in the old plan shape, whereas the complete full table scans performed in the Parallel Execution Servers are treated as serial full table scan cost-wise. Nevertheless, by default the plan with the higher cost is selected by the optimizer in 12c.

Let's address the first question of how to specify the hint: As we can see from the simple example here, when using the format of the hint mentioned in the official documentation, it should be put into the main (driving) query, not into the filter subqueries, which makes sense, since there could be multiple subqueries but only one FILTER operation in the main query driving them.

Let's have a look at a slightly more complicated query:

explain plan for
select /*+
qb_name(main)
-- When merging the two query blocks there will be a single FILTER in the main query block
--pq_filter(@main hash)
-- Global form of the PQ_FILTER hint
no_merge(@qb1)
no_merge(@qb2)
pq_filter(@qb1 random)
pq_filter(@qb2 hash)
*/
*
from
(
select /*+
qb_name(qb1)
-- Alternatively use local form of the hints
--pq_filter(random)
--no_merge
*/ *
from
t_1 t
where exists (select /*+ no_unnest */ null from t_1 where t.id = t_1.id)
) a,
(
select /*+
qb_name(qb2)
-- Alternatively use local form of the hints
--pq_filter(hash)
--no_merge
*/ *
from t_1 t
where exists (select /*+ no_unnest */ null from t_1 where t.id = t_1.id)
) b
where b.id = a.id
;

-- Plan shape with the hints as specified
-------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10004 | Q1,04 | P->S | QC (RAND) |
|* 3 | HASH JOIN BUFFERED | | Q1,04 | PCWP | |
| 4 | PX RECEIVE | | Q1,04 | PCWP | |
| 5 | PX SEND HYBRID HASH | :TQ10002 | Q1,02 | P->P | HYBRID HASH|
| 6 | STATISTICS COLLECTOR | | Q1,02 | PCWC | |
| 7 | BUFFER SORT | | Q1,02 | PCWP | |
| 8 | VIEW | | Q1,02 | PCWP | |
|* 9 | FILTER | | Q1,02 | PCWP | |
| 10 | PX RECEIVE | | Q1,02 | PCWP | |
| 11 | PX SEND ROUND-ROBIN| :TQ10000 | Q1,00 | P->P | RND-ROBIN |
| 12 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 13 | TABLE ACCESS FULL| T_1 | Q1,00 | PCWP | |
|* 14 | TABLE ACCESS FULL | T_1 | | | |
| 15 | PX RECEIVE | | Q1,04 | PCWP | |
| 16 | PX SEND HYBRID HASH | :TQ10003 | Q1,03 | P->P | HYBRID HASH|
| 17 | BUFFER SORT | | Q1,03 | PCWP | |
| 18 | VIEW | | Q1,03 | PCWP | |
|* 19 | FILTER | | Q1,03 | PCWP | |
| 20 | PX RECEIVE | | Q1,03 | PCWP | |
| 21 | PX SEND HASH | :TQ10001 | Q1,01 | P->P | HASH |
| 22 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 23 | TABLE ACCESS FULL | T_1 | Q1,01 | PCWP | |
|* 24 | TABLE ACCESS FULL | T_1 | | | |
-------------------------------------------------------------------------------
So in this example we can see that multiple FILTER operations with subqueries are possible - in principle one per query block. Here I've used the global form of the hint, which includes a query block name to assign the hint to the corresponding FILTER operation. The same plan shape could be achieved by using the local form of the hints within the query blocks. If no NO_MERGE hint gets used, both query blocks will be merged and there will be only a single FILTER operation - which can then be influenced by a corresponding PQ_FILTER hint in the main/outer query block.

Regarding the second question - what keys get used as input for the HASH distribution - we can check the corresponding "Column Projection Information" section of the DBMS_XPLAN output, and we can see there that it is pretty straightforward and as expected: all columns / expressions used in the filter subqueries for correlation are used as input. Using the variation of the above query with merged query blocks and a HASH distribution of the single FILTER with two subqueries results in the following:

explain plan for
select /*+
qb_name(main)
pq_filter(@main hash)
--no_merge(@qb1)
--no_merge(@qb2)
--pq_filter(@qb1 random)
--pq_filter(@qb2 hash)
*/
*
from
(
select /*+
qb_name(qb1)
--pq_filter(random)
--no_merge
*/ *
from
t_1 t
where exists (select /*+ no_unnest */ null from t_1 where t.id = t_1.id)
) a,
(
select /*+
qb_name(qb2)
--pq_filter(hash)
--no_merge
*/ *
from t_1 t
where exists (select /*+ no_unnest */ null from t_1 where t.id = t_1.id)
) b
where b.id = a.id
;

--------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | Q1,03 | P->S | QC (RAND) |
| 3 | BUFFER SORT | | Q1,03 | PCWP | |
|* 4 | FILTER | | Q1,03 | PCWP | |
| 5 | PX RECEIVE | | Q1,03 | PCWP | |
| 6 | PX SEND HASH | :TQ10002 | Q1,02 | P->P | HASH |
|* 7 | HASH JOIN BUFFERED | | Q1,02 | PCWP | |
| 8 | PX RECEIVE | | Q1,02 | PCWP | |
| 9 | PX SEND HYBRID HASH | :TQ10000 | Q1,00 | P->P | HYBRID HASH|
| 10 | STATISTICS COLLECTOR | | Q1,00 | PCWC | |
| 11 | PX BLOCK ITERATOR | | Q1,00 | PCWC | |
| 12 | TABLE ACCESS FULL | T_1 | Q1,00 | PCWP | |
| 13 | PX RECEIVE | | Q1,02 | PCWP | |
| 14 | PX SEND HYBRID HASH | :TQ10001 | Q1,01 | P->P | HYBRID HASH|
| 15 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
| 16 | TABLE ACCESS FULL | T_1 | Q1,01 | PCWP | |
|* 17 | TABLE ACCESS FULL | T_1 | | | |
|* 18 | TABLE ACCESS FULL | T_1 | | | |
--------------------------------------------------------------------------------

Column Projection Information (identified by operation id):
-----------------------------------------------------------
...
6 - (#keys=2) "T"."ID"[NUMBER,22], "T"."ID"[NUMBER,22], "T"."FILLER"[VARCHAR2,100], "T"."FILLER"[VARCHAR2,100]
...
So both T.IDs from each subquery are used as keys "(#keys=2)" for the hash function.
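As a side note, the "Column Projection Information" section is not included in the default DBMS_XPLAN output - a minimal sketch to request it is the PROJECTION format modifier:

select * from table(dbms_xplan.display(format => 'BASIC +PROJECTION'));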

In the last part of this instalment I'll focus on how the optimizer chooses the distribution method for the parallel filter operation.

What AWR isn’t telling you

Dominic Brooks - Fri, 2015-10-16 14:08

It is well-known that AWR, and Statspack before, take snapshots of V$ views (or rather the underlying objects) to produce the data in AWR.

It is also well-known that, when considering SQL and its statistics in the shared pool, if something big-hitting happens but the big hitter is no longer in the shared pool by the time of the snapshot, then it can’t be recorded in your AWR picture of activity.

But like many things supposedly well-known, it can still come back and remind you that you’ve forgotten or overlooked this behaviour.

Here is a little illustration which happened earlier this week.

This week I’ve been looking closely at activity which happens on a particular DB between 16:00 and 17:00 because the IO subsystem is showing signs of stress and the timings for ‘Average Synchronous Single-Block Read Latency’ (V$SYSMETRIC_SUMMARY / DBA_HIST_SYSMETRIC_SUMMARY) have pushed above our amber warning levels.

Don’t get me started but our amber level for a single block read is 20ms!
Way too high for my liking for such sustained average but that is apparently the SLA from the SAN.
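For a quick look at where that latency has been sitting across recent snapshots, something like the following against DBA_HIST_SYSMETRIC_SUMMARY (referenced above) is a rough sketch - 20 being our amber threshold in ms:

SELECT snap_id
,      instance_number
,      ROUND(average,2) avg_ms
FROM   dba_hist_sysmetric_summary
WHERE  metric_name = 'Average Synchronous Single-Block Read Latency'
AND    average     > 20
ORDER BY snap_id DESC;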

Why do we have such an alert?
The purpose is two-fold.

  1. It can sometimes be a warning that some of our IO-sensitive batch work might show some performance degradations, threatening SLAs
  2. In the past, it has been an effective warning that a sql statement executed concurrently in multiple threads from the applications has switched to a "bad" execution plan whose impact is system-wide (normally heavy concurrent direct path reads).

So… I was hovering around this system during the relevant timeframe looking at what was going on.

And I observed some heavy hitting queries running in parallel originating from another system (i.e. not my application).

There were two executions of one sql id (0cu2sm062jutc) executing concurrently with a single execution of another sql statement.

The top-level RTSM execution metrics for these three executions were all very similar to this:

Executing for about 10 minutes, in parallel, doing a lot of physical direct path reads (38GB) and accumulating over 1 hour of parallel processing time.

 Status              :  DONE (ALL ROWS)          
 Instance ID         :  2                        
 SQL ID              :  0cu2sm062jutc            
 SQL Execution ID    :  33554432                 
 Execution Started   :  10/13/2015 16:03:35      
 First Refresh Time  :  10/13/2015 16:03:37      
 Last Refresh Time   :  10/13/2015 16:13:19      
 Duration            :  584s                     
 Module/Action       :  JDBC Thin Client/-       
 Program             :  JDBC Thin Client         
 Fetch Calls         :  257                      

Global Stats
===================================================================================================================================
| Elapsed |   Cpu   |    IO    | Application | Concurrency | Cluster  |  Other   | Fetch | Buffer | Read  | Read  | Write | Write |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Waits(s)   | Waits(s) | Waits(s) | Calls |  Gets  | Reqs  | Bytes | Reqs  | Bytes |
===================================================================================================================================
|    4041 |     252 |     3782 |        0.06 |        1.75 |     0.97 |     3.95 |   257 |     6M | 75051 |  38GB |  1070 | 125MB |
===================================================================================================================================

And then, whilst I was fiddling with various queries against V$SQL, the queries in question disappeared from V$SQL.

No big surprise of course, stuff is liable to be released from the library cache whenever it sees fit.

But then the remembrance penny hit home and I was left thinking that it was no wonder that I couldn’t see this regularly when looking retrospectively at this timeframe via AWR.

But this cumulative execution time above is larger than that of the number 3 SQL statement in AWR by elapsed time … so I’m not making this up. It’s significant.

This got me thinking – how can we try to identify heavy-hitting SQL which is being missed by AWR – ASH surely? – and what can we do to help – COLORED SQL?

Here is my first attempt at such an ASH query, trying to find queries which have taken longer than 10 minutes in a single execution within a single snapshot period but which aren’t in AWR.

SELECT * 
FROM   (SELECT h.dbid
        ,      h.instance_number
        ,      h.snap_id
        ,      h.sql_id
        ,      h.sql_exec_id
        ,      h.sql_exec_start
        ,      h.qc_instance_id||'::'||h.qc_session_id||'::'||h.qc_session_serial# px_details
        ,      COUNT(*)
        ,      COUNT(DISTINCT h.session_id)
        --,      MIN(h.snap_id) min_snap
        --,      MAX(h.snap_id) max_snap
        ,      MIN(h.sample_time)
        ,      MAX(h.sample_time)
        ,    ((EXTRACT(DAY    FROM (MAX(h.sample_time) - MIN(h.sample_time)))*86400)
            + (EXTRACT(HOUR   FROM (MAX(h.sample_time) - MIN(h.sample_time)))*3600)
            + (EXTRACT(MINUTE FROM (MAX(h.sample_time) - MIN(h.sample_time)))*60)
            + (EXTRACT(SECOND FROM (MAX(h.sample_time) - MIN(h.sample_time))))) duration
        FROM   dba_hist_active_sess_history h
        ,      dba_hist_snapshot            s
        WHERE  h.dbid            = s.dbid
        AND    h.snap_id         = s.snap_id
        AND    h.instance_number = s.instance_number
        AND    sql_id           IS NOT NULL
        AND    sql_exec_id      IS NOT NULL
        AND    NOT EXISTS (SELECT 1
                           FROM   dba_hist_sqlstat st
                           WHERE  st.dbid            = h.dbid
                           AND    st.snap_id         = h.snap_id --BETWEEN xxx.min_snap AND xxx.max_snap
                           AND    st.instance_number = h.instance_number
                           AND    st.sql_id          = h.sql_id)
        GROUP BY 
               h.dbid
        --
        ,      h.snap_id
        --
        ,      h.instance_number
        ,      h.sql_id
        ,      h.sql_exec_id
        ,      h.sql_exec_start
        ,      h.qc_instance_id
        ,      h.qc_session_id
        ,      h.qc_session_serial#) xxx
WHERE  duration > 600
ORDER BY snap_id DESC, duration DESC;

The main problem with this query is that DURATION does not tell us how long the query has been active in the database.

We could have a query which has been active all that time but was doing relatively little in the DB.
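One rough mitigation - just a sketch, using the common approximation that each row in DBA_HIST_ACTIVE_SESS_HISTORY represents roughly 10 seconds of DB time - is to rank by sample count rather than by wall-clock duration:

SELECT sql_id
,      sql_exec_id
,      COUNT(*) * 10 approx_db_seconds
FROM   dba_hist_active_sess_history
WHERE  sql_id      IS NOT NULL
AND    sql_exec_id IS NOT NULL
GROUP BY
       sql_id
,      sql_exec_id
HAVING COUNT(*) * 10 > 600
ORDER BY approx_db_seconds DESC;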

I had a quick look (again) at TM_DELTA_TIME and TM_DELTA_DB_TIME but they are still unreliable – in some of the rows they look ok, in others not.

Let’s forget that and pretend that the SQL above works ok.

Perhaps we can add a limit WHERE the COUNT(*) is greater than some threshold, because otherwise this is currently giving me hundreds of statements in my production database.

That gives me a token 166 statements over the past 30 days.

And that’s a problem because my intention was to use this to identify SQL statements to add to AWR via ADD_COLORED_SQL.
And that has a limit of 100 statements.
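For reference, coloring a statement is a one-liner, sketched here with the sql_id from above (and its counterpart to undo it again):

exec dbms_workload_repository.add_colored_sql(sql_id => '0cu2sm062jutc')
exec dbms_workload_repository.remove_colored_sql(sql_id => '0cu2sm062jutc')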

Ah… but no, the other penny has just dropped whilst writing this…

If I add color to this SQL, that will make no difference. That just means AWR will record that SQL if it’s there.

But if it’s not in the shared pool at snapshot time, it’s not in the shared pool so it’s not in AWR.

Duh!

It’s almost as if we need a mechanism whereby, if a sql statement has breached a certain threshold, it won’t be flushed until MMON has written it to AWR.

No, that’s not right. That would present other problems.

I could shorten the AWR snapshot interval but I’m not convinced.

I think this is an oft-overlooked yet significant shortcoming.

So, for the moment at least, I’m left with what I think is a problem and I can’t think of a good solution… can you?

Or are you thinking mountain vs molehill?


Accessibility Guide Updated

PeopleSoft Technology Blog - Fri, 2015-10-16 12:38

We've updated the PeopleSoft Accessibility Guidelines for 8.54.  This guide is for people developing with PeopleTools who want to ensure that their designs conform to current accessibility standards.  The PeopleSoft Applications development teams employ these same guidelines.  You can download them from My Oracle Support:  https://mosemp.us.oracle.com/epmos/faces/DocumentDisplay?id=1941471.1   This document applies to the new Fluid User Interface.

PeopleSoft publishes our VPATs (accessibility conformance declarations) here.

General accessibility guidelines for PeopleSoft are documented here.


About Bugs

Floyd Teter - Thu, 2015-10-15 16:53
Been a few weeks since I last checked in.  Onboarding with Oracle has been like drinking from a data firehose, so I've been a bit pressed for time...

As I write this, I'm sitting out on my backyard patio right around sundown.  It's been unseasonably warm here in Utah of late, so the flying bugs are out in abundance.  While they're a bit irritating, they're not the type of bugs I have on my mind this evening.  I'm thinking more about software bugs.

I define a software bug as a flaw that causes said software to perform differently than designed or intended.  Simple definition for my simple mind, I suppose.

One of the eye-opening insights in having an insider's perspective at Oracle has been looking at software bugs.  Not the volume of bugs so much as the type of bugs.  If I apply my simple definition of a software bug, over half the logged bugs are not really bugs at all.  The software is working as designed or intended, but we're not seeing the expected result.  Could be any one of a number of reasons:  applying the software incorrectly due to lack of knowledge, desire for features not provided, violation of business rules, data quality flaws in converted or interfaced data, errors in writing data...the list goes on and on.

Wading through this data, I see a couple of trends.  First, it's pretty obvious that enterprise software vendors could do a better job of educating and enabling customers and partners on how their software works...especially from a systems engineering perspective.  Second, the industry needs better tools for evaluating data quality prior to converting or interfacing data between data sources...a symptom of the old "garbage in, garbage out" rule.

If we could improve in this area, think of the reduced workload for application developers.  Keep in mind that most enterprise software application development teams are dealing with at least four application versions simultaneously:  the previously released version(s) in the field, the latest release in the field, the next release being built, and the design work for the release after the version currently being built.  Anything we can do to alleviate the bug resolution workload allows vendors to apply that extra bandwidth in ways that will shorten release cycle times...something everybody wants.

I'm sure there are more trends to add to the list.  You have a contribution to make?  The comments await.

Oracle Priority Support Infogram for 15-OCT-2015

Oracle Infogram - Thu, 2015-10-15 16:10

Oracle OpenWorld









RDBMS

Loading blobs without a spoon at Kris’ Blog: SQLCL - Blob loading ALL the files

Fusion

Release 10: Understanding RESTful Services in HCM Cloud, from Fusion Applications Developer Relations.

Java

Available: NetBeans IDE 8.1 RC 2, from Geertjan’s Blog.


Mobile Computing


From the same source: Oracle MAF 2.2 New Features

Big Data

Big Data SQL 2.0 - Now Available, from The Data Warehouse Insider.

Oracle Utilities

OUAF 4.3.0.1.0 Feature: Required Field Indicator, from The Shorten Spot, along with a couple of other good postings:



Solaris

Solaris 11.2 SysAdmin Handbook is now available, from Dr Cloud's Flying Software Circus.

And a blog on Solaris Studio and related topics I haven’t seen before, All in a day’s work, had several hands-on postings:





Ops Center


EBS

From the Oracle E-Business Suite Support blog:



From the Oracle E-Business Suite Technology blog:




Couchbase 4.0 and related subjects

Curt Monash - Thu, 2015-10-15 10:17

I last wrote about Couchbase in November, 2012, around the time of Couchbase 2.0. One of the many new features I mentioned then was secondary indexing. Ravi Mayuram just checked in to tell me about Couchbase 4.0. One of the important new features he mentioned was what I think he said was Couchbase’s “first version” of secondary indexing. Obviously, I’m confused.

Now that you’re duly warned, let me remind you of aspects of Couchbase timeline.

  • 2 corporate name changes ago, Couchbase was organized to commercialize memcached. memcached, of course, was internet companies’ default way to scale out short-request processing before the rise of NoSQL, typically backed by manually sharded MySQL.
  • Couchbase’s original value proposition, under the name Membase, was to provide persistence and of course support for memcached. This later grew into a caching-oriented pitch even to customers who weren’t already memcached users.
  • A merger with the makers of CouchDB ensued, with the intention of replacing Membase’s SQLite back end with CouchDB at the same time as JSON support was introduced. This went badly.
  • By now, however, Couchbase sells for more than distributed cache use cases. Ravi rattled off a variety of big-name customer examples for system-of-record kinds of use cases, especially in session logging (duh) and also in travel reservations.
  • Couchbase 4.0 has been in beta for a few months.

Technical notes on Couchbase 4.0 — and related riffs :) — start:

  • There’s a new SQL-like language called N1QL (pronounced like “nickel”). I’m hearing a lot about SQL-on-NoSQL these days. More on that below.
  • “Index”, “data” and “query” are three different services/tiers.
    • You can run them all on the same nodes or separately. Couchbase doesn’t have enough experience yet with the technology to know which choice will wind up as a best practice.
    • I’m hearing a lot about heterogeneous-node/multi-tier DBMS architectures these days, and would no longer stand by my 2009 statement that they are unusual. Other examples include Oracle Exadata, MySQL, MongoDB (now that it has pluggable storage engines), MarkLogic, and of course the whole worlds of Hadoop and Spark.
  • To be clear — the secondary indexes are global, and not tied to the same nodes as the data they index.
  • There’s a new back end called ForestDB, but if I understood correctly, it’s used just for the indexes, not for the underlying data.
  • ForestDB represents Couchbase indexes in something that resembles b-trees, but also relies on tries. Indeed, if I’m reading the relevant poster correctly, it’s based on a trie of b-trees.
  • In another increasingly common trend, Couchbase uses Bloom filters to help decide which partitions to retrieve for any particular query.

Up to a point, SQL-on-NoSQL stories can be fairly straightforward.

  • You define some kind of a table,* perhaps in a SQL-like DDL (Data Description Language).
  • SELECT, FROM and WHERE clauses work in the usual way.
  • Hopefully, if a column is going to have a lot of WHERE clauses on it, it also has an index.

For example, I think that’s the idea behind most ODBC/JDBC drivers for NoSQL systems. I think it’s also the idea behind most “SQL-like” languages that NoSQL vendors ship.

*Nobody I talk to about this ever wants to call it a “view”, but it sure sounds like a view to me — not a materialized view, of course, but a view nonetheless.

JOIN syntax can actually be straightforward as well under these assumptions. As for JOIN execution, Couchbase pulls all the data into the relevant tier and does nested-loop execution there. My new clients at SequoiaDB have a similar strategy, by the way, although in their case there’s a hash join option as well.

But if things stopped there, they would miss an important complication: NoSQL has nested data. I.e., a value can actually be an array, whose entries are arrays themselves, and so on. That said, the “turtles all the way down” joke doesn’t quite apply, because at some point there are actual scalar or string values, and those are the ones SQL wants to actually operate on.

Most approaches I know of to that problem boil down to identifying particular fields as table columns, with or without aliases/renaming; I think that’s the old Hadapt/Vertica strategy, for example. Couchbase claims to be doing something a little different however, with a SQL-extending operator called UNNEST. Truth be told, I’m finding the N1QL language reference a bit terse, and haven’t figured out what the practical differences vs. the usual approach are, if any. But it sounds like there may be some interesting ideas in there somewhere.
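For readers who haven't seen it, here is a rough sketch of what UNNEST looks like, against a hypothetical orders bucket whose documents carry an items array (all names invented for illustration):

SELECT o.orderId, i.sku, i.qty
FROM   orders o
UNNEST o.items i
WHERE  i.qty > 10;

Each element of the items array becomes a joinable row alongside its parent document, which is roughly the "identify fields as table columns" idea described above.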

Quick Cross-Period AWR Comparison

Dominic Brooks - Thu, 2015-10-15 04:43

Here’s a query which I find useful in order to get a very quick comparison across AWR snapshots of the high-level time model statistics.
The numbers should match those in the associated section in the AWR report.

If you are obsessive about tuning then you may see some blips here and there which then encourage you to dive into the AWR detail for that snapshot.

Or quite often I get in in the morning and there might be an email about slow overnight processes and generic “database has been slow” enquiries, and before I start hunting around to prove or disprove the allegations, this can be a useful frame of reference.

WITH subq_snaps AS
(SELECT dbid                dbid
 ,      instance_number     inst
 ,      snap_id             e_snap
 ,      lag(snap_id) over (partition by instance_number, startup_time order by snap_id) b_snap
 ,      TO_CHAR(begin_interval_time,'D') b_day
 ,      TO_CHAR(begin_interval_time,'DD-MON-YYYY HH24:MI') b_time
 ,      TO_CHAR(end_interval_time,'HH24:MI')   e_time
 ,    ((extract(day    from (end_interval_time - begin_interval_time))*86400)
     + (extract(hour   from (end_interval_time - begin_interval_time))*3600)
     + (extract(minute from (end_interval_time - begin_interval_time))*60)
     + (extract(second from (end_interval_time - begin_interval_time)))) duration
 FROM   dba_hist_snapshot)
SELECT ss.inst
,      ss.b_snap
,      ss.e_snap
,      ss.b_time
,      ss.e_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END)/1000000/60,2),'999999990.99')                                  db_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END)/(ss.duration*1000000),1),'999999990.99')        aas
,      (SELECT round(average,2)
        FROM   dba_hist_sysmetric_summary sm
        WHERE  sm.dbid            = ss.dbid
        AND    sm.snap_id         = ss.e_snap
        AND    sm.instance_number = ss.inst
        AND    sm.metric_name     = 'Average Synchronous Single-Block Read Latency'
        AND    sm.group_id        = 2)                                                                                                                   assbl
,      (SELECT round(average,2)
        FROM   dba_hist_sysmetric_summary sm
        WHERE  sm.dbid            = ss.dbid
        AND    sm.snap_id         = ss.e_snap
        AND    sm.instance_number = ss.inst
        AND    sm.metric_name     = 'Host CPU Utilization (%)'
        AND    sm.group_id        = 2)                                                                                                                   cpu_util
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'DB CPU' THEN em.value - bm.value END)/1000000,2),'999999990.99')                                      db_cpu
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'sql execute elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')                    sql_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'PL/SQL execution elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')               plsql_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'parse time elapsed' THEN em.value - bm.value END)/1000000,2),'999999990.00')                          parse_time
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'failed parse elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')                   failed_parse
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'hard parse (sharing criteria) elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')  hard_parse_sharing
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'RMAN cpu time (backup/restore)' THEN em.value - bm.value END)/1000000,2),'999999990.99')              rman_cpu
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'connection management call elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')     connection_mgmt
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'sequence load elapsed time' THEN em.value - bm.value END)/1000000,2),'999999990.99')                  sequence_load
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'DB CPU' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           db_cpu_perc
,      TO_CHAR(ROUND(100*MAX(CASE WHEN bm.stat_name = 'sql execute elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           sql_time_perc
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'PL/SQL execution elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           plsql_time_perc
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'parse time elapsed' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           parse_time_perc
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'failed parse elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           failed_parse_perc
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'hard parse (sharing criteria) elapsed time' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           hard_parse_sharing_perc
,      TO_CHAR(ROUND(MAX(CASE WHEN bm.stat_name = 'RMAN cpu time (backup/restore)' THEN em.value - bm.value END)
           / NULLIF(MAX(CASE WHEN bm.stat_name = 'DB time' THEN em.value - bm.value END),0),2),'999999990.99')                                           rman_cpu_perc
FROM  subq_snaps              ss
,     dba_hist_sys_time_model em
,     dba_hist_sys_time_model bm
WHERE bm.dbid                   = ss.dbid
AND   bm.snap_id                = ss.b_snap
AND   bm.instance_number        = ss.inst
AND   em.dbid                   = ss.dbid
AND   em.snap_id                = ss.e_snap
AND   em.instance_number        = ss.inst
AND   bm.stat_id                = em.stat_id
GROUP BY
       ss.dbid
,      ss.inst
,      ss.b_day
,      ss.b_snap
,      ss.e_snap
,      ss.b_time
,      ss.e_time
,      ss.duration
--HAVING b_day NOT IN (6,7)
--AND    inst = 2
--AND b_snap = 18673
--AND    e_time = '17:00'
ORDER BY b_snap DESC;

Also, similar query for comparing the different IO metrics.
Again should match numbers in AWR.

WITH subq_snaps AS
(SELECT dbid                dbid
 ,      instance_number     inst
 ,      snap_id             e_snap
 ,      lag(snap_id) over (partition by instance_number, startup_time order by snap_id) b_snap
 ,      TO_CHAR(begin_interval_time,'D') b_day
 ,      TO_CHAR(begin_interval_time,'DD-MON-YYYY HH24:MI') b_time
 ,      TO_CHAR(end_interval_time,'HH24:MI')   e_time
 ,    ((extract(day    from (end_interval_time - begin_interval_time))*86400)
     + (extract(hour   from (end_interval_time - begin_interval_time))*3600)
     + (extract(minute from (end_interval_time - begin_interval_time))*60)
     + (extract(second from (end_interval_time - begin_interval_time)))) duration
 FROM   dba_hist_snapshot)
,    io_stats AS
(SELECT ss.*
 ,      bv.event_name
 ,      ev.time_waited_micro_fg - bv.time_waited_micro_fg time_waited_micro
 ,      ev.total_waits_fg       - bv.total_waits_fg       waits
 FROM   subq_snaps            ss
 ,      dba_hist_system_event bv
 ,      dba_hist_system_event ev
 WHERE  bv.dbid                   = ss.dbid
 AND    bv.snap_id                = ss.b_snap
 AND    bv.instance_number        = ss.inst
 AND    bv.event_name            IN ('db file sequential read','direct path read','direct path read temp','db file scattered read','db file parallel read')
 AND    ev.dbid                   = ss.dbid
 AND    ev.snap_id                = ss.e_snap
 AND    ev.instance_number        = ss.inst
 AND    ev.event_id               = bv.event_id)
SELECT io.dbid
,      io.inst
,      io.b_snap
,      io.e_snap
,      io.b_time
,      io.e_time
,      (SELECT ROUND(average,2)
        FROM   dba_hist_sysmetric_summary sm
        WHERE  sm.dbid            = io.dbid
        AND    sm.snap_id         = io.e_snap
        AND    sm.instance_number = io.inst
        AND    sm.metric_name     = 'Average Synchronous Single-Block Read Latency'
        AND    sm.group_id        = 2) assbl
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) single_waits
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) multi_waits
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) prefch_wait
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) direct_waits
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END) END) temp_waits
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_single
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) single_secs_total
,      MAX(CASE WHEN event_name = 'db file sequential read' THEN ROUND((time_waited_micro/1000)/NULLIF(waits,0)) END) single_avg
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_multi
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) multi_secs_total
,      MAX(CASE WHEN event_name = 'db file scattered read'  THEN ROUND((time_waited_micro/1000)/NULLIF(waits,0)) END) multi_avg
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_prefch
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) prefch_secs_total
,      MAX(CASE WHEN event_name = 'db file parallel read'   THEN ROUND((time_waited_micro/1000)/NULLIF(waits,0)) END) prefch_avg
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_direct
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) direct_secs_total
,      MAX(CASE WHEN event_name = 'direct path read'        THEN ROUND((time_waited_micro/1000)/NULLIF(waits,0)) END) direct_avg
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND(CASE WHEN waits < 0 THEN NULL ELSE waits END/duration) END) iops_temp
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND(CASE WHEN time_waited_micro/1000/1000 < 0 THEN NULL ELSE time_waited_micro/1000/1000 END) END) temp_secs_total
,      MAX(CASE WHEN event_name = 'direct path read temp'   THEN ROUND((time_waited_micro/1000)/NULLIF(waits,0)) END) temp_avg
FROM   io_stats io
GROUP BY
       io.dbid
,      io.inst
,      io.b_day
,      io.b_snap
,      io.e_snap
,      io.b_time
,      io.e_time
,      io.duration
HAVING b_day NOT IN (6,7)
AND    inst = 2
--AND b_snap = 18673
--AND    e_time = '17:00'
ORDER BY b_snap DESC;

Finally, one of the queries I used the most for quick checks – comparing top SQL in AWR across periods:

SELECT x.*, (SELECT sql_text from dba_hist_sqltext t where t.sql_id = x.sql_id and rownum = 1) txt
FROM (
SELECT sn.snap_id
,      TO_CHAR(sn.end_interval_time,'DD-MON-YYYY HH24:MI') dt
,      st.sql_id
,      st.instance_number
,      st.parsing_schema_name
,      st.plan_hash_value
,      SUM(st.fetches_delta) fch
,      SUM(rows_processed_delta) rws
,      SUM(executions_delta)     execs
,      ROUND(SUM(elapsed_time_delta)/1000/1000)   elp
,      ROUND(SUM(elapsed_time_delta)/1000/1000/NVL(NULLIF(SUM(executions_delta),0),1),2)   elpe
,      ROUND(SUM(cpu_time_delta)/1000/1000)       cpu
,      SUM(buffer_gets_delta)    gets
,      ROUND(SUM(iowait_delta)/1000/1000)         io
,      ROUND(SUM(clwait_delta)/1000/1000)         cl
,      ROUND(SUM(ccwait_delta)/1000/1000)         cc
,      ROUND(SUM(apwait_delta)/1000/1000)         ap
,      ROUND(SUM(plsexec_time_delta)/1000/1000)   pl
,      ROUND(SUM(disk_reads_delta))         disk_reads
,      ROUND(SUM(direct_writes_delta))        direct_writes
,      ROW_NUMBER() over (PARTITION BY sn.dbid, sn.snap_id, st.instance_number
                          ORDER BY SUM(elapsed_time_delta) desc) rn
FROM   dba_hist_snapshot sn
,      dba_hist_sqlstat  st
WHERE  st.dbid            = sn.dbid
AND    st.snap_id         = sn.snap_id
AND    sn.instance_number = st.instance_number
GROUP BY
       sn.dbid
,      sn.snap_id
,      sn.end_interval_time
,      st.sql_id
,      st.instance_number
,      st.parsing_schema_name
,      st.plan_hash_value
) x
WHERE rn <= 5
ORDER by snap_id DESC, instance_number, rn;

Let me know if you spot any errors or anomalies or obvious improvements / additions.


Wheres my TNS connection? SHOW TNS

Barry McGillin - Thu, 2015-10-15 04:19
Lots of users have been head-scratching as to which tnsnames.ora is being found and used when connecting to the database with SQL Developer and with SQLcl.

In the latest release we've added another new command.

SHOW TNS

What this will do is walk the locations where we look for tnsnames.ora and list these out in order.  Then it will tell you which one the tool will actually use and list the entries for you.

So, with nothing set, no ORACLE_HOME, no TNS_ADMIN, here's what you get.  TNS is going to look in your home directory for a file called tnsnames.ora or .tnsnames.


Now, if we have an ORACLE_HOME set, we'll look for $ORACLE_HOME/network/admin/tnsnames.ora


Further, if we set TNS_ADMIN, it will override ORACLE_HOME and go to that location as shown here.


Lastly, we'll come back to the User directory. If you have a tnsnames.ora there or a .tnsnames, it will override everything and this is what will be used.



Now, go ahead and make a connection.  You can then use another new command called

SHOW CONNECTION

which will show you how you are connected and what you are connected to.


Finally, you can get a look at which driver you are using for a connection using

SHOW JDBC

which will show something like this, detailing types, versions and the URL of the connection you have.


Scaling FLUID pages for iPhone 6

Javier Delgado - Thu, 2015-10-15 03:06
We are currently developing FLUID pages for a customer on PeopleSoft HCM 9.1. As they cannot benefit from the standard functionality delivered through Update Manager until they upgrade to PeopleSoft HCM 9.2, they have decided to provisionally implement FLUID through customisations.

When doing this, we identified an issue on iPhone 6 whereby the FLUID pages were not scaling correctly:



As you can see, the text is barely readable. After some research, we identified that standard pages deal with this scaling issue by using the following PeopleCode (normally in the component PostBuild event):

Declare Function GetDefaultViewportSetting PeopleCode PTLAYOUT.FUNCLIB FieldFormula;
Declare Function SetViewport PeopleCode PTLAYOUT.FUNCLIB FieldFormula;

Local string &Viewport;
Local number &Pos;

&Viewport = GetDefaultViewportSetting();

If %Request.BrowserDeviceFormFactor = 0 And
      %Request.BrowserPlatformClass = "ios" Then
   &Pos = Find("minimal-ui", &Viewport);
   If &Pos = 0 Then
      &Viewport = &Viewport | ", minimal-ui";
   End-If;
End-If;

SetViewport(&Viewport);
AddMetaTag("format-detection", "telephone=no");

The page now displays in a much better format:




Opening of APEX R&D New Zealand

Dimitri Gielis - Thu, 2015-10-15 02:20
We're really excited to announce that, next to our Belgium office, we have now opened an office in New Zealand too. This will allow us to cover multiple timezones and better serve the Australian and New Zealand market. You can read more about it on Lino's blog.

 APEX R&D

Contact our team of experienced professionals for innovative, reliable and cost effective Oracle Application Express solutions. 

For any custom application development, training/coaching or consulting related to Oracle APEX technology, we would love to hear from you.

For more information please contact our Belgium or New Zealand office.

We look forward to working with you.

Categories: Development

node-oracledb 1.3.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - Thu, 2015-10-15 01:24

Version 1.3 of node-oracledb, the add-on for Node.js that powers high performance Oracle Database applications, is available on NPM.

We kept the changes minimal in this release for several reasons. It has two small enhancements and a few bug fixes.

  • A new oracledb.oracleClientVersion attribute gives the version of the Oracle client libraries that node-oracledb is linked with.

    A connection.oracleServerVersion attribute gives the Oracle Database version used by the connection.

    These attributes are handy for code that needs to run in multiple environments. See examples/version.js for usage.

  • The major bug fix resolves some corruption with result.outBinds when calling PL/SQL blocks. This was sometimes causing a crash.

See CHANGELOG for the other changes.

Issues and questions about node-oracledb can be posted on GitHub.

node-oracledb installation instructions are here.

node-oracledb documentation is here.
