
Feed aggregator

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2015-03-24 23:05

It has been a few weeks since my last blog post but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Source Dependent Extract and Source Independent Load

Dylan's BI Notes - Tue, 2015-03-24 17:40
A typical data warehousing ETL process involves Extract, Transform, and Load. The concepts of Source Dependent Extract (SDE) and Source Independent Load (SIL) are a unique part of the BI Apps ETL, since BI Apps has a universal data warehouse. Since the staging schema is designed according to the universal data warehouse design, the logic of loading data […]
Categories: BI & Warehousing

“Speeding up the innovation cycle with SaaS” by Steve Miranda

Linda Fishman Hoyle - Tue, 2015-03-24 14:56

It wasn’t so very long ago that cloud discussions, and department pleas to adopt cloud, raised the hackles of IT departments. IT professionals ranged from being wary to resistant. Now, in many companies, the foe has turned into a friend. In this post, EVP Steve Miranda (pictured left) takes a crack at answering the question: “How did IT become a friend of the cloud?”

Miranda gives a brief, but concise, explanation of the traditional on-premises process, which made it very difficult for IT departments to deliver what the business was asking for. On the flip side, the power of the cloud lands vendors and IT departments in a place that is much more sustainable, more efficient, and more relevant. Miranda says, “In essence, the cloud has made it possible for vendors and software providers to deliver a better product to customers faster and at a lower price.” That’s good news for IT departments.

Read the article: Speeding up the innovation cycle with SaaS


Parallel Execution -- 3 Limiting PX Servers

Hemant K Chitale - Tue, 2015-03-24 09:05
In my previous posts, I have demonstrated how Oracle "auto"-computes the DoP when the PARALLEL Hint is used by itself, even when PARALLEL_DEGREE_POLICY is set to MANUAL.  This "auto"-computed value is CPU_COUNT x PARALLEL_THREADS_PER_CPU.
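As a quick sanity check, this default value can be computed straight from V$PARAMETER. The query below is only a sketch (the TO_NUMBER calls are there because V$PARAMETER stores its values as strings); with CPU_COUNT=4 and PARALLEL_THREADS_PER_CPU=4, as in this instance, it returns 16 :

-- default DoP implied by the bare PARALLEL hint
select to_number(c.value) * to_number(p.value) as default_dop
from   v$parameter c, v$parameter p
where  c.name = 'cpu_count'
and    p.name = 'parallel_threads_per_cpu';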

How do we limit the DoP?

1.  PARALLEL_MAX_SERVERS is an instance-wide limit, not usable at the session level.

2.  Resource Manager configuration can be used to limit the number of PX Servers used (a rough sketch of this option follows the list below).

3.  PARALLEL_DEGREE_LIMIT, unfortunately, is not usable when PARALLEL_DEGREE_POLICY is MANUAL.
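As a rough illustration of option 2 (not something I am testing in this post; the plan and group names below are made up), a Resource Manager plan directive can cap the DoP for a consumer group via its PARALLEL_DEGREE_LIMIT_P1 parameter:

begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group('LIMIT_PX_GROUP', 'sessions whose DoP we want to cap');
  dbms_resource_manager.create_plan('LIMIT_PX_PLAN', 'plan that caps DoP per consumer group');
  dbms_resource_manager.create_plan_directive(
    plan                     => 'LIMIT_PX_PLAN',
    group_or_subplan         => 'LIMIT_PX_GROUP',
    comment                  => 'cap DoP at 4',
    parallel_degree_limit_p1 => 4);
  dbms_resource_manager.create_plan_directive(
    plan             => 'LIMIT_PX_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'mandatory catch-all directive');
  -- map the test user to the capped group
  dbms_resource_manager.set_consumer_group_mapping(
    dbms_resource_manager.oracle_user, 'HEMANT', 'LIMIT_PX_GROUP');
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/
-- allow the user to switch into the group, then activate the plan
exec dbms_resource_manager_privs.grant_switch_consumer_group('HEMANT', 'LIMIT_PX_GROUP', FALSE)
alter system set resource_manager_plan = 'LIMIT_PX_PLAN';

With the plan active, sessions mapped to LIMIT_PX_GROUP should be capped at a DoP of 4 even when they use the bare PARALLEL Hint.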

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Mar 24 22:57:18 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cpu_count                            integer     4
parallel_threads_per_cpu             integer     4
resource_manager_cpu_allocation      integer     4
SYS>show parameter parallel_degree_policy

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_policy               string      MANUAL
SYS>show parameter parallel_max

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_max_servers                 integer     64
SYS>
SYS>select * from dba_rsrc_io_calibrate;

no rows selected

SYS>
SYS>connect hemant/hemant
Connected.
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
         1                    16 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

As expected, the query uses 16 PX Servers (and not the table-level definition of 4).  Can we use PARALLEL_DEGREE_LIMIT?

HEMANT>alter session set parallel_degree_limit=4;

Session altered.

HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
         2                    32 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

No, it actually still used 16 PX Servers for the second execution.

What about PARALLEL_MAX_SERVERS?

HEMANT>connect / as sysdba
Connected.
SYS>alter system set parallel_max_servers=4;

System altered.

SYS>connect hemant/hemant
Connected.
HEMANT>select /*+ PARALLEL */ count(*) from Large_Table;

COUNT(*)
----------
4802944

HEMANT>select executions, px_servers_executions, sql_fulltext
2 from v$sqlstats
3 where sql_id = '8b0ybuspqu0mm';

EXECUTIONS PX_SERVERS_EXECUTIONS SQL_FULLTEXT
---------- --------------------- --------------------------------------------------------------------------------
         3                    36 select /*+ PARALLEL */ count(*) from Large_Table

HEMANT>

Yes, PARALLEL_MAX_SERVERS restricted the next run of the query to 4 PX Servers.  However, this parameter limits the total concurrent usage of PX Servers at the instance level.  It cannot be applied at the session level.
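If you want to watch how close the instance is getting to PARALLEL_MAX_SERVERS, a quick look at V$PX_PROCESS shows how many PX Servers are currently allocated and whether they are in use (again, just a sketch):

-- PX servers currently allocated, by status (IN USE vs AVAILABLE)
select status, count(*)
from   v$px_process
group by status;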

.
.

.
Categories: DBA Blogs

Oracle Exadata Performance: Latest Improvements and Less Known Features

Tanel Poder - Tue, 2015-03-24 08:57

Here are the slides of a presentation I did at the IOUG Virtual Exadata conference in February. I’m explaining the basics of some new Oracle 12c things related to Exadata, plus current latest cellsrv improvements like Columnar Flash Cache and IO skipping for Min/Max retrieval using Storage Indexes:

Note that Christian Antognini and Roger MacNicol have written separate articles about some new features:

Enjoy!

 


A Glance at Smartwatches in the Enterprise: A Moment in Time Experience

Usable Apps - Tue, 2015-03-24 05:30

Ultan O’Broin (@usableapps) talks to Oracle Applications User Experience (OAUX) Vice President Jeremy Ashley (@jrwashley) about designing apps for that smartwatch, and every other smartwatch, too.

Nobody wants their device to disrupt them from what they are doing or to have to move to another one to continue working. Keeping users in the moment of their tasks—independent of the devices they’re using—is central to any great user experience.

The ability to apply our Oracle Applications Cloud design philosophy to the smartwatch demonstrates an ideal realization of the “glance” method, keeping users in that moment: Making the complex simple, flexible, intuitive, and most of all, convenient. OAUX recognizes the need for smartwatch wearers to experience that “right here, right now” feeling, the one in which you have just what you need, just when you need it.

The wearable technology space is currently focused on smartwatches. We’re excited by Apple’s announcement about their smartwatch, and we’re even more thrilled to now show you our proof of concept glance designs for the Oracle Applications Cloud on the Apple Watch. We want to hear your reaction! 

Glance for Oracle Applications Cloud for Apple Watch proof of concept designs

For the smartwatch specifically, VP Jeremy Ashley explained how our glance approach applies to smartwatch wearers, regardless of their choice of device:

“The most common wearable user interaction is to glance at something. The watch works as the wearer’s mini dialog box to the cloud, making microtransactions convenient on the wrist, and presenting the right information to the wearer at the right time. How quickly and easily someone can do something actually useful is the key activity."

Glance brings cloud interaction to wearers in a personal way, requesting and not demanding attention, while eliminating a need to switch to other devices to “dig in,” or to even have to pull a smartphone out of the pocket to respond.

“To continue the journey to completing a task using glance is as simple and natural as telling the time on your wrist”, says Jeremy.

Being able to glance down at your wrist at a stylish smartwatch experience—one that provides super-handy ways to engage with gems of information— enhances working in the cloud in powerful and productive ways, whether you’re a sales rep walking from your car to an opportunity engagement confidently glancing at the latest competitive news, or a field technician swiping across a watchface to securely record time on a remote job.

Glancing at a UI is the optimal wearable experience for the OAUX mobility strategy, where the cloud, not the device, is our platform. This means you can see our device-agnostic glance design at work not only on an Apple Watch, but on Android Wear, Pebble, and other devices, too.

Glance for Oracle Applications Cloud proof of concept apps on Android Wear Samsung Gear Live and Pebble

Designing a Glanceable Platform

The path to our glance designs began with OAUX research into every kind of smartwatch we could get on our wrists so that we could study their possibilities, experience how they felt, how they looked, and how they complemented everyday work and life activities. Then we combined ideas and experiences with Oracle Cloud technology to deliver a simplified design strategy that we can apply across devices. As a result, our UI designs are consistent and familiar to users as they work flexibly in the cloud, regardless of their device, type of operating system, or form factor.

This is not about designing for any one specific smartwatch. It’s a platform-agnostic approach to wearable technology that enables Oracle customers to get that awesome glanceable, cloud-enabled experience on their wearable of choice.

Why Smartwatches?

Smartwatches such as the Apple Watch, Pebble, and Android Wear devices have resonated strongly with innovators and consumers of wearable technology. The smartwatch succeeds because we’re already familiar and comfortable with using wristwatches, and they’re practical and easy to use.

From first relying on the sun to tell the time, to looking up at town hall clocks, to taking out pocket watches, and then being able to glance at our wrists to tell the time, we’ve seen an evolution in glanceable technology analogous to the miniaturization of computing from large mainframes to personal, mobile devices for consumers.

Just like enterprise apps, watches have already been designed for many specializations and roles, be they military, sport, medical, fashion, and so on. So the evolution of the smartwatch into an accepted workplace application is built on a firm foundation.

More Information

Again, OAUX is there, on trend, ready and offering a solution grounded in innovation and design expertise, one that responds to how we work today in the cloud.

In future articles, we’ll explore more examples that showcase how we’re applying the glance approach to wearable technology, and we’ll look at design considerations in more detail. You can read more about our Oracle Applications Cloud design philosophy and other trends and innovations that influence our thinking in our free eBook.

Check the Usable Apps website for events where you can experience our smartwatch and other innovations for real, read our Storify feature on wearable technology, and see our YouTube videos about our UX design philosophy and strategy.

RM BI Forum 2015 : Justification Letters for Employers

Rittman Mead Consulting - Tue, 2015-03-24 03:48

(Thanks to Christian Berg @Nephentur for the suggestion, and acknowledgements to ODTUG KScope for the original idea – our favourite conference after the BI Forum)

The Rittman Mead BI Forum 2015 promises to be our best BI Forum yet, with fantastic speakers at each event, keynotes and guest speakers from Oracle and John Foreman, author of the bestselling book “Data Smart”, a data visualisation challenge and an optional one-day masterclass on delivering Oracle’s new Information Management and Big Data reference architecture by Rittman Mead’s Mark Rittman and Jordan Meyer. Uniquely amongst Oracle BI events we keep the numbers attending very limited and run just a single stream at each event, so everyone takes part in the same sessions and gets to meet all the attendees and speakers over the three days.

Sometimes though, management within organizations require special justification for team members to attend events like these, and to help you put your case together and get across the unique education and networking benefits of the Rittman Mead BI Forum, we’ve prepared justification letters for you to complete with your details, one each for the Brighton and Atlanta events. Click on the links below to download sample justification letters for the Brighton BI Forum running on May 6th-8th 2015, and the Atlanta one running the week after on May 13th-15th 2015:

Full details on the BI Forum 2015 agenda and how to register can be found on the Rittman Mead BI Forum 2015 home page, with registration open until the weekend before each event – hurry though as attendee numbers are strictly limited.

Categories: BI & Warehousing

OBIEE nqcmd Tidbits

Rittman Mead Consulting - Mon, 2015-03-23 21:42

nqcmd is the ODBC command line tool that has always shipped with OBIEE, and hopefully always will. It enables you to manually fire queries directly at the BI Server, rather than through the usual way of Presentation Services generating Logical SQL and sending it to BI Server. This can be useful in several cases:

  1. Automated cache purging, by sending one of the SAPurge[…] ODBC commands to the BI Server, usually done as part of a script (see the example just after this list)
  2. Automated execution of Logical SQL, often done to support testing scenarios
  3. Load Testing the BI Server (via a magic undocumented switch, SA_NQCMD_ADVANCED)
  4. Manual interrogation of the BI Server – if you want to poke and prod nqsserver without launching a web browser, nqcmd is your friend :)
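For the first of these, the statement you send is just an ODBC procedure call. As a sketch (SAPurgeAllCache() is the catch-all purge call; check which purge variants your version supports), you could save the following in a file and pass it to nqcmd with the -s flag:

call SAPurgeAllCache();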

When using nqcmd there are a couple of things I want to demonstrate here that I find useful but haven’t seen discussed [in detail] elsewhere.

Query Log via nqcmd

All BI Server queries run with a LOGLEVEL>=1 will write some log details to nqquery.log. The usual route to view this is either directly on the server itself, transferring it off with a tool such as WinSCP, or through the Administration page of OBIEE. Another option that is available is from nqcmd itself. You need to do two things:

  1. Set the environment variable SA_NQCMD_ADVANCED to Yes
  2. Include the command line arguments -ShowQueryLog -H when you invoke nqcmd. I don’t know what -H does – it’s just specified as being required for this to work.

Here’s a simple example in action:

[oracle@demo ~]$ export SA_NQCMD_ADVANCED=Yes
[oracle@demo ~]$ nqcmd -d AnalyticsWeb -u prodney -p Admin123 -ShowQueryLog -H

-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2013 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------



Connection open with info:
[0][State: 01000] [DataDirect][ODBC lib] Application's WCHAR type must be UTF16, because odbc driver's unicode type is UTF16

        [T]able info
        [C]olumn info
        [D]ata type info
        [F]oreign keys info
        [P]rimary key info
        [K]ey statistics info
        [S]pecial columns info
        [Q]uery statement
Select Option: Q

Give SQL Statement: SET VARIABLE LOGLEVEL=1:SELECT "A - Sample Sales"."Base Facts"."1- Revenue" s_1 FROM "A - Sample Sales"
SET VARIABLE LOGLEVEL=1:SELECT "A - Sample Sales"."Base Facts"."1- Revenue" s_1 FROM "A - Sample Sales"
-----------------------
s_1
-----------------------
70000000.00
-----------------------
Row count: 1
-----------------------
[2015-03-21T16:36:31.000+00:00] [OracleBIServerComponent] [TRACE:1] [USER-0] [] [ecid: 0054Sw944KmFw000jzwkno0003ac0000rl,0] [tid: 56660700] [requestid: 201f0002] [sessionid: 201f0000] [username: prodney] ###
########################################### [[
-------------------- SQL Request, logical request hash:
d2294415
SET VARIABLE LOGLEVEL=1:SELECT "A - Sample Sales"."Base Facts"."1- Revenue" s_1 FROM "A - Sample Sales"

]]
[2015-03-21T16:36:31.000+00:00] [OracleBIServerComponent] [TRACE:1] [USER-34] [] [ecid: 0054Sw94mRzFw000jzwkno0003ac0000ro,0] [tid: 56660700] [requestid: 201f0002] [sessionid: 201f0000] [username: prodney] -------------------- Query Status: Successful Completion [[

]]
[2015-03-21T16:36:31.000+00:00] [OracleBIServerComponent] [TRACE:1] [USER-28] [] [ecid: 0054Sw94mRzFw000jzwkno0003ac0000ro,0] [tid: 56660700] [requestid: 201f0002] [sessionid: 201f0000] [username: prodney] -------------------- Physical query response time 0 (seconds), id <<333971>> [[

]]

]]
[2015-03-21T16:36:31.000+00:00] [OracleBIServerComponent] [TRACE:1] [USER-29] [] [ecid: 0054Sw94mRzFw000jzwkno0003ac0000ro,0] [tid: 56660700] [requestid: 201f0002] [sessionid: 201f0000] [username: prodney] -------------------- Physical Query Summary Stats: Number of physical queries 1, Cumulative time 0, DB-connect time 0 (seconds) [[

]]
[2015-03-21T16:36:31.000+00:00] [OracleBIServerComponent] [TRACE:1] [USER-33] [] [ecid: 0054Sw94mRzFw000jzwkno0003ac0000ro,0] [tid: 56660700] [requestid: 201f0002] [sessionid: 201f0000] [username: prodney] -------------------- Logical Query Summary Stats: Elapsed time 0, Response time 0, Compilation time 0 (seconds) [[

]]

Neat! But so what? Well, I see two uses straight away:

  1. In some situations you may not have access to the filesystem of the server on which the BI Server is running. For example, as a consultant I’ve been to clients where I’m given the Administration Tool client installation only. If I want to debug an RPD that I’m developing I’ll usually want to poke around in nqquery.log to see quite what physical SQL is being generated – and now I can.
  2. There was a discussion on the EMG mailing list recently about generating Physical SQL without executing it on the database. I’m going to discuss this in the next section of this article, and to do the analysis for this rapidly I’m using the inline query log.
Generating Physical SQL for OBIEE without Executing it – SKIP_PHYSICAL_QUERY_EXEC

OBIEE generates the Physical SQL that it runs against the database dynamically, at runtime. It takes the Logical request (“Logical SQL”), runs it through the RPD and generates one or more “Physical SQL” statements to be executed on the database as required to pull back the necessary data. A question arose recently on the EMG mailing list as to whether it is possible to get the Physical SQL – without executing it. You can imagine the benefits of this (namely, regression testing), since executing the database query each time is typically going to be expensive in machine resources and time.

In SampleApp v406 there is a /home/oracle/scripts/PhysicalSQLGenerator, which does two things. First off it generates the Logical SQL for a given analysis, presumably using the generateReportSQL web service. It then takes that and runs it through nqcmd, scraping the nqquery.log for the resulting Physical SQL. In all of this no database queries get run. Very cool. But what’s the “secret sauce” at play here – can we distill it down in order to use it ourselves?

First, let’s look at how the SampleApp script does it. It sets some additional request variables in the Logical SQL:

[oracle@demo PhysicalSQLGenerator]$ cat lsql-out-dir/q1.lsql
SET VARIABLE SKIP_PHYSICAL_QUERY_EXEC=1, LOGLEVEL=2, DISABLE_CACHE_HIT=1, DISABLE_CACHE_SEED=1, QUERY_SRC_CD='SampleApp-PSQLGEN', SAW_SRC_PATH='/users/prodney/folder/request variable example':SELECT
   0 s_0,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_1
FROM "A - Sample Sales"
ORDER BY 1
FETCH FIRST 5000001 ROWS ONLY
;

And if we extract the relevant part out of the bash script we can see that it also uses a couple of extra command line arguments (-q -NoFetch) when invoking nqcmd:

nqcmd -q -NoFetch -d AnalyticsWeb -u weblogic -p Admin123 -s lsql-out-dir/q1.lsql

When it’s run we check nqquery.log and lo-and-behold we get this: (edited for brevity)

------------------- Sending query to database named 01 - Sample App Data (ORCL) (id: <<69923>>), connection pool named Sample Relational Connection, logical request hash dd4fb54f, physical request hash 8d6f363d: [[
WITH
SAWITH0 AS (select sum(T42442.Revenue) as c1
from
     BISAMPLE.SAMP_REVENUE_FA2 T42442 /* F21 Rev. (Aggregate 2) */ )
select D1.c1 as c1, D1.c2 as c2 from ( select distinct 0 as c1,
     D1.c1 as c2
from
     SAWITH0 D1 ) D1 where rownum <= 5000001

]]

Query Status: Successful Completion [[

Rows 0, bytes 24 retrieved from database query id: <<69923>> Simulation Gateway 

Physical query response time 0 (seconds), id <<69923>> Simulation Gateway

Whilst the log says it is “Sending query to database” it does no such thing, and the “Simulation Gateway” is the giveaway clue. Proof that it doesn’t connect to the database? I shut the database down, and it still worked just fine. Crude, yes, but effective.

I’ll intersperse here the little trick that I mentioned in the first part of this article : -ShowQueryLog. It’s tedious switching back and forth between nqcmd and the nqquery.log when doing this kind of testing, so let’s do it all as one:

export SA_NQCMD_ADVANCED=Yes
nqcmd -H -ShowQueryLog -q -NoFetch -d AnalyticsWeb -u weblogic -p Admin123 -s lsql-out-dir/q1.lsql

Unfortunately it looks like -ShowQueryLog is mutually exclusive to -q and -NoFetch since it doesn’t return anything, even though the nqquery.log did get additional entries. But that’s fine, since by removing these two flags in order to get -ShowQueryLog to work we’re whittling down what is actually needed to generate the physical SQL on its own without database execution. Here’s the nqcmd, showing the query log inline and showing still the “Simulation Gateway” indicative of no physical query execution:

[oracle@demo PhysicalSQLGenerator]$ export SA_NQCMD_ADVANCED=Yes
[oracle@demo PhysicalSQLGenerator]$ nqcmd -H -ShowQueryLog -d AnalyticsWeb -u weblogic -p Admin123 -s lsql-out-dir/q1.lsql

-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2013 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------

[...]

------------------------------------
s_0          s_1
------------------------------------
------------------------------------
Row count: 0
------------------------------------
[2015-03-23T05:52:57.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-0] [] [ecid: 0054Ut7AJ33Fw000jzwkno0005UZ00005Q,0] [tid: 8f194700] [requestid: 8a1e0002] [sessionid: 8a1e0000] [username: weblogic] ############################################## [[
-------------------- SQL Request, logical request hash:
dd4fb54f
SET VARIABLE SKIP_PHYSICAL_QUERY_EXEC=1, LOGLEVEL=2, DISABLE_CACHE_HIT=1, DISABLE_CACHE_SEED=1, QUERY_SRC_CD='SampleApp-PSQLGEN', SAW_SRC_PATH='/users/prodney/folder/request variable example':SELECT
   0 s_0,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_1
FROM "A - Sample Sales"
ORDER BY 1
FETCH FIRST 5000001 ROWS ONLY



[...]

[2015-03-23T05:52:57.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-18] [] [ecid: 0054Ut7AK5DFw000jzwkno0005UZ00005S,0] [tid: 8f194700] [requestid: 8a1e0002] [sessionid: 8a1e0000] [username: weblogic] -------------------- Sending query to database named 01 - Sample App Data (ORCL) (id: <<70983>>), connection pool named Sample Relational Connection, logical request hash dd4fb54f, physical request hash 8d6f363d: [[
WITH
SAWITH0 AS (select sum(T42442.Revenue) as c1
from
     BISAMPLE.SAMP_REVENUE_FA2 T42442 /* F21 Rev. (Aggregate 2) */ )
select D1.c1 as c1, D1.c2 as c2 from ( select distinct 0 as c1,
     D1.c1 as c2
from
     SAWITH0 D1 ) D1 where rownum <= 5000001

]]
[2015-03-23T05:52:57.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-34] [] [ecid: 0054Ut7AYi0Fw000jzwkno0005UZ00005T,0] [tid: 8f194700] [requestid: 8a1e0002] [sessionid: 8a1e0000] [username: weblogic] -------------------- Query Status: Successful Completion [[

]]
[2015-03-23T05:52:57.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-26] [] [ecid: 0054Ut7AYi0Fw000jzwkno0005UZ00005T,0] [tid: 8f194700] [requestid: 8a1e0002] [sessionid: 8a1e0000] [username: weblogic] -------------------- Rows 0, bytes 24 retrieved from database query id: <<70983>> Simulation Gateway [[

]]
[2015-03-23T05:52:57.000+00:00] [OracleBIServerComponent] [TRACE:2] [USER-28] [] [ecid: 0054Ut7AYi0Fw000jzwkno0005UZ00005T,0] [tid: 8f194700] [requestid: 8a1e0002] [sessionid: 8a1e0000] [username: weblogic] -------------------- Physical query response time 0 (seconds), id <<70983>> Simulation Gateway [[

[...]

It’s clear that the “-q -NoFetch” parameters used in nqcmd don’t have an effect on whether the physical query is executed (they’re to do with whether nqcmd as an ODBC client pulls back and displays the data you ask for). It’s actually just a single request variable that does the job, and it goes under the rather obvious name of SKIP_PHYSICAL_QUERY_EXEC. When set to 1 it generates all the necessary physical SQL but doesn’t execute it, and the presence of “Simulation Gateway” in the log signals this.
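So, stripped back to the essentials, a Logical SQL statement along these lines (reusing the SampleApp subject area from the examples above; LOGLEVEL=2 is only there so the generated Physical SQL shows up in nqquery.log) should be all you need to feed to nqcmd:

SET VARIABLE SKIP_PHYSICAL_QUERY_EXEC=1, LOGLEVEL=2:SELECT
   "A - Sample Sales"."Base Facts"."1- Revenue" s_1
FROM "A - Sample Sales";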

Categories: BI & Warehousing

R12.2 Documentation link in html format

Vikram Das - Mon, 2015-03-23 19:35
This link has the R12.2 documentation in HTML format:

https://docs.oracle.com/cd/E26401_01/index.htm 
Categories: APPS Blogs

The Four Ps of Standards/Procurement Requirements/”Whatevahs”

Mary Ann Davidson - Mon, 2015-03-23 18:43



I am a veteran – not merely a military veteran, but an information security veteran. I don’t get medals for the latter, but I do have battle scars. Many of the scars are relatively recent: a result of tearing my hair out from many, many, many mind-numbing reviews of publications, draft standards and other kinds of documents which are ostensibly meant to make security better, cybersecurity being “hot” and all. Alas, many of these documents have linguistic and operational difficulties that often make it highly unlikely that they will achieve their stated “better security” objectives.


After reviewing so many documents and running into common patterns, I decided to take a cue from my MBA days and categorize my concerns in a catchy way. Though not a marketing major, I vaguely recall the “four Ps” of marketing (product, price, place and promotion) and decided to adapt them to the world of standards/procurement requirements/whatevahs (which I will now refer to as SPW). They are:


Problem Statement
Precise Language and Scope
Pragmatic Solutions
Prescriptive Minimization


I offer the "four Ps of SPW" for those who are attempting to improve cybersecurity by fiat, or in other ways intended to compel the market, in hopes that we may collectively get to better security without sinking into the swamp of despair, dallying in the desert of dashed hopes, trekking through the tundra of too-obscure requirements (nice use of alliteration, no?) … you get the point. While I think my advice is generally applicable in the SPW (say “spew”) realm, the context for my discussion is assurance slash supply chain risk mitigation since that’s what I seem to review most often.


Problem Statement


I cannot tell you how many SPW documents I have read in which Someone Was Attempting to Make Someone Else Do Something More Securely, only it wasn’t clear what, exactly, or more importantly, why (or even that the requirements would result in “better security”). Anything that seeks to impose Something Security-Oriented On Someone needs a clear problem statement. Without this, a proposed SPW becomes an expensive wish list with no associated benefits to it. Ultimately, the seller has no idea what the buyer really wants or needs. If a government agency cannot explain what they are really worried about, in language the “comply-ee” can understand, they shouldn’t be surprised if they get a chocolate-covered cockroach (eew) when they ask for something sweet, crunchy and locally sourced. (I’d add “sustainable,” as there seems to be no shortages of cockroaches.)


With regard to security, “supply chain” has become the mantra for attempting to regulate almost 100% of what businesses do. Poor quality, “backdoor boogiemen,” assurance, “supply chain shutdown” are all very (very!) different problems. Worse, the ambiguity around proposing a standard for “supply chain security” may encompass 100% of business operations. Example: my employer does not make their own paper clips or wood stirrers for coffee cups. Do we really need to worry about a shortage of either? No? Then don’t describe “supply chain requirements” that ask technology suppliers to track the wood sourced for our coffee stirrers. Buying a poor quality product, for example, is a business risk. It’s not, per se, a supply chain risk. Furthermore, while poor quality may lead to poor security, not all security problems are a result of quality issues. Some are a result of buyers not understanding that commercial off-the-shelf (COTS) software, while general purpose and often very good, is not “all purpose” and not designed for all threat environments.


The second aspect of a problem statement is the provision of use cases. A use case is a fancy way of saying, “for example.” Use cases are very important to help turn a problem statement into an “aha” moment for the reader. Moreover, use cases are important to limit scope and ensure that the SPW requirements are appropriate to serve the SPW’s stated objectives. Absent a use case, you never really know what’s being asked for (and where it applies and where it does not apply). Use cases absolutely need to be contained within a requirements document.


For example, consider the US National Institute of Standards and Technology (NIST) Special Publication 800-152 A Profile for U.S. Federal Cryptographic Key Management Systems Draft 3 (December 2014). This special pub describes a combination of technical standards and policies around cryptographic key management systems. The problem is, nowhere in reading the document is it evident what, exactly, this applies to. Is this just “special, super secret key management systems for classified US government systems?” Or, does it apply to key management for things like Transport Layer Security (TLS) (or other cryptographic protocols that are well-established standards)? Why it matters: because if there are not use cases that define applicability, someone will assume it applies to everything. And, applying these requirements may conflict with (if not break) other standards.


90% of life isn’t showing up, it’s solving the right problem. You can’t solve the right problem if you don’t know (or cannot articulate) what it is, with some “for instances.”


Precise Language and Scope


It is astonishing to me how many SPW documents do not define core terminology used therein. Without a precise set of definitions, nobody really knows what is meant, and if something is vague, it’s going to be misinterpreted. (Worse, an undefined term may end up meaning whatever a “certifier” or other compliance overlord thinks it means: nobody ever really knows if they are compliant if compliant depends on what the certifier thinks it means.) Core terminology must be precisely and narrowly defined within the document. As the famous line goes from Let’s Call The Whole Thing Off,


“You like potato and I like potahto
You like tomato and I like tomahto
Potato, potahto, tomato, tomahto
Let’s call the whole thing off.” (Lyrics by Ira Gershwin, melody by George
Gershwin)


The problem is, if a SPW is enshrined and applied, you can’t call it off. At least until the next revision. Figure out what to call a spud and make it clear, please!


For example, in the context of software, what is a vulnerability? A configuration error (leading to a security weakness)? A defect in software (that leads to a security weakness)? Any defect in software (regardless of the impact)? What if the design was intentional? Is a policy violation a vulnerability? A vulnerability cannot, surely, be all the above! And in fact, it isn’t, but just saying “vulnerability” and conflating all the above means that nobody will be able to come up with a remedy that works for all cases. (Note: for configurable software, if you configure it so my grandmother can hack into it, it’s not a “vulnerability,” it’s “user error.” There is only so much you can do to prevent a user from shooting themselves in the foot when we are talking about firearms that allow you to point them at your feet.) Another example: what is a “module”? The answer may be very different depending on whether you are a hardware person or a software person.


If ‘it’ is not clear, ‘it’ is going to be misinterpreted.


Pragmatic Solutions


One of my biggest concerns with a lot of SPW documents is that they almost never take into account the value of pragmatism over perfection. Perfection is not achievable (much less at an acceptable cost) while “better” usually is achievable. (Surely “better” that everyone can do is better than “perfect” that is unachievable?) To those who insist, “evil slug vendors are profit driven and always want to do the minimum,” my response is that economics rules the world and doesn’t necessarily argue for the minimum. Generally speaking, it’s more profitable to find security vulnerabilities and fix them earlier in a product release cycle than waiting until you ship six affected versions of product and now have to produce 120 patches for a single issue (or patch 120 cloud instances). Most vendors know this (or find out the hard way). Customers certainly know this and complain if they have to apply too many patches (or if their cloud service uptime is negatively impacted by a lot of patch-related downtime).


More to the point, unless you can print money, invent a time machine or perfect cloning, time, money and people are always constrained resources so using them well is a must. Doing more X means – often – doing less of Y, because you can’t add more resource you don’t have or can’t find. Worse, doing more of X required for compliance may mean doing less of the Y that actually improves security, since they are mutually exclusive as long as resources are constrained and regulations are written by (or interpreted by) the Knights Who Say Ni.


In particular, I see little evidence that people proposing SPW have done much or any economic analysis of the cost of compliance. I know the government knows how to do this kind of analysis because – for example – the US Department of Defense does resource planning that among other things looks at “how many conflicts are we prepared to fight simultaneously?” rather than, “in a perfect world with unlimited resources and cyborg soldiers, we could take on Frabistatians, the Foobarians, and open a third front combating the Little Green Men from Marsians.” How I wish that other entities – any other entity – would analyze (e.g., do a reality check) on what the impact of X is before it becomes part of a SPW.


Any SPW should include an economic analysis of impact – and look at options. Included in that analysis should be the bane of (quasi-)regulatory ambition, “unintended consequences.” There are almost always unintended consequences of SPW, even those created with good motives. One of the big ones is, if you make it too expensive for suppliers to deal with you, there will be fewer suppliers. And that means choice will decrease and cost will increase. Any SPW should explicitly ask the question, “What would matter the most, be broadly implementable and cost the least (or be the most cost effective for all parties)?”


To provide an example, the NIST Interagency Report 7622 Notional Supply Chain Risk Management Practices for Federal Information Systems (the draft requirement has, I believe, since been excised) at one time wanted the “supplier” (e.g., a vendor) to notify the acquirer (e.g., a government agency) of “all personnel changes involving maintenance.” I suspect that the intent was something to the effect that, if the acquirer (let’s say, DoD) outsources a service, and that service involves a fundamental change of venue – e.g., the maintenance for the US Department of Defense manpower system is outsourced to Hostile Foreign Country, DoD wants to be notified. However, that is not what the requirement stated. One interpretation would be that any time someone touched code who didn’t write the original code (“a personnel change involving maintenance”) that a vendor would have to notify the government. Ok, Oracle has almost 5000 products (and lots and lots of clouds), billions of lines of code, and every day there are a lot of code checkouts where someone is changing something he or she did not write. Are we supposed to tweet all that stuff? What is that going to do for the acquirer? “Kaitlyn checked out and changed code that, like, Ashley wrote, LOL, OMG!”


Figure out what you really want, and what it is worth to you to get it.


Prescriptive Minimization


With rare exceptions, non-technical* process or management standards should not tell industry how exactly to do something, if for no other reason than there is no such thing as “best practice.” There are certainly better or worse practices, but arguably no single practice that everyone does, exactly the same way, that will work equally well for everyone subject to the requirements, for any length of time. Worse, SPW diktats often stifle innovation, drive up costs (without commensurate benefit) and fall prey to the buggy whip effect (where you are specifying how to use buggy whips long after people have moved from horse-and-buggy to Model Ts - or better). Add to all these reasons the economic impact referenced above.


To provide one example, consider (draft) NIST Special Publication 800-160 Systems Security Engineering, containing a requirement that, in the event of a discovered security bug, the engineering team should conduct root cause analysis. This sounds like a Mom and Apple Pie requirement on the face of it, so what could possibly be wrong with that? A clear Best Practice, right? Well, no, not really, on grounds of pragmatism and context.


Consider a security bug that is not only high impact but for which there is an exploit circulating in the wild. For commercial software vendors, job 1 will be getting a patch into customers’ hands (or at least the hands of their customers’ system administrators) and/or patching their cloud instances, as the case may be. Protection of customers under these circumstances is initially way more important than determining causation.


Second, it doesn’t necessarily make sense to do a root cause analysis on every single security bug of every severity. What does make sense is to deep dive on the more severe bugs (e.g., high Common Vulnerability Scoring System (CVSS) Base Score bugs), because those are the ones you really want to ensure you fixed completely (and avoid in the future). You might want to ask the following as part of your analysis:


“How/when did this get into the code base?”
“What is the resulting vulnerability (how can it be exploited)?”
“Have we looked elsewhere for similar problems?”
“Have we added test cases to regression tests and other test suites (like static analysis tools) to ensure that we can automate finding other instances?”
“Have we fixed it everywhere (or everywhere that is relevant)?” and
“Have we attempted to enshrine/transfer knowledge of the severity and impact of this bug across the development organization (so everyone knows why it’s a big deal and how to avoid it in future)?”


Given scarce resources, I’d argue that root cause analysis on a CVSS 0 bug is not as important as thoroughly addressing – and in future avoiding – a CVSS 9.0 or 10.0 bug, along the lines of the above analysis. If a standard enshrines the former, it leads to suboptimal resource allocation (like spreading peanut butter over too many slices of bread). Worse, any company doing the “better” thing will get dinged as being non-standards compliant if there is a Best Practice enshrined in SPW that calls for root cause analysis of everything, regardless of severity. Perfection works against actual security improvement.


Another “best practice” I see shilled relentlessly is third party static analysis. I’ve opined on why that is not a best practice in previous blogs, but I have new reasons to avoid it like the plague it is: a recent real-world example of its high cost and low utility. Recently, we were made aware that a customer of Oracle (without asking our permission, which we would not have given if asked) submitted our software to a third party that does static analysis on binaries. Where to start with how extremely bad this is? Numero uno: the customer violated their license agreement with Oracle, which alone made their actions completely unacceptable. Add to that, the report we were furnished included alleged vulnerabilities not merely in Oracle but in another product Not Made By Oracle. (Needless to say, we could neither analyze those issues nor fix them in the event they turned out to be actual vulnerabilities and really, we did not want to see alleged vulnerabilities in Someone Else’s Code. That information is extremely sensitive and should not have been given to us.) Last but far from least was the fact that – drum roll – not one of the alleged security issues the third party reported was, in fact, an actual security vulnerability. 0% accuracy: zilch, zip, nada, bubkes, a’ohe mea. Further, one of our best security leads (I’d bill him out at least $2,000 an hour) wasted his very valuable time determining that there was “no there, there.”


Running a tool (if and only if you have permission to do it) is nothing; the ability to analyze the results is everything. Third parties cannot do that since they have no actual code knowledge of what they are running the tool on, especially not on a code base as big as Oracle’s is. Third party static analysis is thus only a best practice if you want to waste time and money. But it’s the vendor’s time that is being wasted (maybe that third party should reimburse us the $2K an hour our kahuna spent analyzing their errata?), and the customer’s money. And last, but really first, violating licensing terms is unacceptable business conduct.


Summary


Nobody is perfect, but with all the attention being focused on cybersecurity, it would be really helpful if attempted problem solvers writing SPW could sharpen their – I was going to say, knives, but I am not sure I mean that! – focus. Yes, a sharpened focus is what is needed. Cybersecurity is an important area. Better security is achievable, but only if we know what we are worried about, we speak the same language, we can look at relative costs and benefits, and we allow for latitude in how we get to better. We can’t do everything, but everybody can do something. Let’s do some of the things that matter – and that won’t make us spend resources checking boxes instead of making sure nobody can break into the boxes.


* I note that one reason for technical standards is, of course, interoperability. In which case, people do need to implement, say, the Secure Whateverworks Protocol (SWP) a particular way, or it won’t work with another vendor’s implementation of SWP.


For More Information


Ruthlessly self-serving announcement follows: my sister and I, writing as Maddi Davidson, are pleased to announce that we have completed our third book in the Miss-Information Technology Mystery Series, With Murder You Get Sushi. (Also, our short story “Heartfelt” will appear in Mystery Times Ten this month, published by Buddhapuss Ink.)


Apropos of nothing having to do with security, I have discovered and become totally addicted to The Palliser Novels by Anthony Trollope. Like high class soap opera, only you get classics points for reading them. (Best of all, nobody in the book is named “Kardashian.”)




BI Apps has a Universal Data Warehouse

Dylan's BI Notes - Mon, 2015-03-23 13:48
The BI Apps data warehouse design is based on an assumption that the data warehouse schema design is independent of the OLTP system. The staging schema is a universal staging schema and the data warehouse is a universal data warehouse. The assumption is that no matter what source system you are using, the business questions the BI […]
Categories: BI & Warehousing

Manually applying Global Payroll Rules Packages downloaded from an Update Image

Javier Delgado - Mon, 2015-03-23 08:47
Last week we faced an issue while applying a Tax Update at one of our PeopleSoft HCM 9.2 customers. The Tax Update was delivered as a PeopleSoft Release Patchset, which first needs to be applied to the Update Image before creating the Change Package using PeopleSoft Update Manager.

Unfortunately, during the process, one rules package delivered in the Tax Update was not included within the Change Assistant steps, and therefore it was missed. Some days later, we reported the error to Oracle and they pointed us to the original zip file containing the PeopleSoft Release Patchset, which indeed contained the missing package.

We did not want to repeat the entire Change Package definition steps, as that would have required restoring a couple of backups. Instead, we decided to manually apply the rules package.

Not so fast... Unfortunately, within Update Manager the Rules Packages are not delivered in the usual format used to import, compare and copy them. Instead, specific steps are followed when Update Manager is used.

In the end, we managed to find a way to manually import the package, which is documented below.

Importing the Rules Package

The rules packages in Update Manager are delivered using the Data Migration Workbench. The process to import them starts by defining the directories from which the Data Migration projects should be picked:

PeopleTools > Lifecycle Tools > Migrate Data > Manage File Locations

The path should point to the PTADSAEPRCS directory within the patch (whose zip file needs to be extracted beforehand). Once the path is defined, the Data Migration project can be copied using the Data Migration Workbench:
PeopleTools > Lifecycle Tools > Migrate Data > Data Migration Workbench
The project should now be uploaded using the Load Project From File link. A list of the projects found in the previously defined path will be shown:


Once the project is selected and the Load button pressed, the Project Definition page within the Workbench will be shown:

Applying the Data Migration project is quite simple. First, the project needs to be compared using the Compare button; once the comparison has finished, the project has to be submitted for copy (using the Submit for Copy button).
Note: Data Migration project submissions may need approval. In such case, make sure the request is approved, so the project is actually copied.
The best way to validate whether the project has been correctly copied or not is to check the contents of the PS_GP_PKG_ADS_DFN table, which should now contain the imported rules package.
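For example, a quick query along these lines should now return the imported definition (just a sketch; in practice you would filter on the package name column for the package delivered with the Tax Update):

-- check that the rules package definition arrived in the ADS staging table
SELECT *
FROM   PS_GP_PKG_ADS_DFN;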
Rules Package Merge

Once the rules package has been imported, it needs to be merged. The merge process takes all the imported rules packages and builds a single rules package to simplify their application. Unfortunately, the process is not available from the user interface, but it can still be run using the command line:
<PS_HOME>\psae.exe -CT <database type> -CD <database name> -CO <PeopleSoft user> -CP <PS user password> -R ESP -AI GP_PKG_ADSMR -I 0 -OT 2 -OF 13 -OP <output directory> -CI <connect id> -CW <connect password>
Once the rules package is merged, a regular Rules Package will be accessible within the Global Payroll Packages functionality. From there on, the package can be applied using the steps we are used to.

NYOUG Spring General Meeting

The Oracle Instructor - Mon, 2015-03-23 08:42

The New York Oracle User Group held their Spring General Meeting recently and I presented there about the Data Guard Broker and also about the Recovery Area.

Many thanks to the board for organizing this event, I really enjoyed being there! Actually, the Broker demonstration did not go so smoothly – always dangerous to do things live – but I managed to get out of the mess in time and without losing too much of the message I wanted to get through. At least that’s what I hope ;-)

I took the opportunity to do some sightseeing in New York as well:

Me on Liberty Island


Tagged: NYOUG
Categories: DBA Blogs

apex.oracle.com - Upgrade to APEX 5.0

Denes Kubicek - Mon, 2015-03-23 03:16
It is time to get used to the new APEX builder interface. apex.oracle.com has been upgraded to the latest pre-production version of APEX 5.0. Some functionality, like "Administration", is not visible any more and you have to search a bit. The overall impression is good and my old demo application seems to still work in most cases.



Enjoy.
Categories: Development

Announcing the BI Forum 2015 Data Visualisation Challenge

Rittman Mead Consulting - Mon, 2015-03-23 03:00

The Rittman Mead BI Forum 2015 is running in Brighton from May 6th-8th 2015, and Atlanta from May 13th – 15th 2015. At this year’s events we’re introducing our first “data visualization challenge”, open to all attendees and with the dataset and scenario open from now until the start of each event. Using Oracle Business Intelligence 11g and any plugins or graphics libraries that embed and interact with OBIEE (full details and rules below), we challenge you to create the most effective dashboard or visualisation and bring it along to demo on the Friday of each event.

Help DonorsChoose.org Donors Use their Funds Most Effectively

This year’s inaugural data visualisation challenge is based around the DonorsChoose.org project and dataset, an online charity that makes it easy for anyone to help public school classroom projects that need funding (Rittman Mead will be making donations on behalf of the Brighton and Atlanta BI Forums to show our support for this great initiative). The Donorschoose.org project and dataset have been used in several hackathons and data crunching contests around the world, with analysis and visualisations helping to answer questions such as:

  • Why do some projects get funded, while others don’t?
  • Who donates to projects from different subjects?
  • Does proximity to schools change donation behavior?
  • What types of materials are teachers lacking the most? (eg chalk, paper, markers, etc)
  • Do poorer schools ask for more or less money from their donors?
  • If I need product x, what is the difference between projects asking for x that were successful vs. those that weren’t?

More details on uses of the Donorschoose.org dataset can be found on the Donorschoose data blog, and example visualisations you could use to get some ideas and inspiration are on the Donorschoose.org Data Gallery showcase page.


Your challenge is to import this dataset into your analytical database of choice, and then create the best visualisation or dashboard in OBIEE to answer the following question: “Which project can I donate to, where my donation will have most impact?”

How Do I Take Part?

For more on the BI Forum 2015 Data Visualization Challenge including how to download the dataset and the rules of the challenge, take a look at the Rittman Mead BI Forum 2015 Data Visualisation Challenge web page where we’ve provided full details. You can either enter as an individual or as part of a team, but you must be registered for either the Brighton or Atlanta BI Forum events and come along in-person to demonstrate your solution – numbers at each event are strictly limited though, so make sure you register soon at the Rittman Mead BI Forum 2015 home page.

Categories: BI & Warehousing

A new logical data layer?

DBMS2 - Sun, 2015-03-22 23:36

I’m skeptical of data federation. I’m skeptical of all-things-to-all-people claims about logical data layers, and in particular of Gartner’s years-premature “Logical Data Warehouse” buzzphrase. Still, a reasonable number of my clients are stealthily trying to do some kind of data layer middleware, as are other vendors more openly, and I don’t think they’re all crazy.

Here are some thoughts as to why, and also as to challenges that need to be overcome.

There are many things a logical data layer might be trying to facilitate — writing, querying, batch data integration, real-time data integration and more. That said:

  • When you’re writing data, you want it to be banged into a sufficiently-durable-to-acknowledge condition fast. If acknowledgements are slow, performance nightmares can ensue. So writing is the last place you want an extra layer, perhaps unless you’re content with the durability provided by an in-memory data grid.
  • Queries are important. Also, they formally are present in other tasks, such as data transformation and movement. That’s why data manipulation packages (originally Pig, now Hive and fuller SQL) are so central to Hadoop.

Trivial query routing or federation is … trivial.

  • Databases have or can be given some kind of data catalog interface. Of course, this is easier for databases that are tabular, whether relational or MOLAP (Multidimensional OnLine Analytic Processing), but to some extent it can be done for anything.
  • Combining the catalogs can be straightforward. So can routing queries through the system to the underlying data stores.

In fact, what I just described is Business Objects’ original innovation — the semantic layer — two decades ago.

Careless query routing or federation can be a performance nightmare. Do a full scan. Move all the data to some intermediate server that lacks capacity or optimization to process it quickly. Wait. Wait. Wait. Wait … hmmm, maybe this wasn’t the best data-architecture strategy.

Streaming goes well with federation. Some data just arrived, and you want to analyze it before it ever gets persisted. You want to analyze it in conjunction with data that’s been around longer. That’s a form of federation right there.

There are ways to navigate schema messes. Sometimes they work.

  • Polishing one neat relational schema for all your data is exactly what people didn’t want to do when they decided to store a lot of the data non-relationally instead. Still, memorializing some schema after that fact may not be terribly painful.
  • Even so, text search can help you navigate the data wilds. So can collaboration tools. Neither helps all the time, however.

Neither extreme view here — “It’s easy!” or “It will never work!” — seems right. Rather, I think there’s room for a lot of effort and differentiation in exposing cross-database schema information.

I’m leaving out one part of the story on purpose — how these data layers are going to be packaged, and specifically what other functionality they will be bundled with. Confidentiality would screw up that part of the discussion; so also would my doubts as to whether some of those plans are fully baked yet. That said, there’s an aspect of logical data layer to CDAP, and to Kiji as well. And of course it’s central to BI (Business Intelligence) and ETL (Extract/Transform/Load) alike.

One way or another, I don’t think the subject of logical data layers is going away any time soon.


Categories: Other

Instrumenting OBIEE Database Connections For Improved Performance Diagnostics

Rittman Mead Consulting - Sun, 2015-03-22 19:30

Nearly four years ago I wrote a blog post entitled “Instrumenting OBIEE – The Final Chapter”. With hindsight, that title suffix (“The Final Chapter”) may have been a tad presumptuous and naïve of me (or perhaps I can just pretend to be ironic now and go for a five-part-trilogy style approach…). Back then OBIEE 11g had only just been released (who remembers 11.1.1.3 in all its buggy-glory?), and in the subsequent years we’ve had significant patchset releases of OBIEE 11g bringing us up to 11.1.1.7.150120 now and with talk of OBIEE 12c around the corner.

As a fanboi of Cary Millsap and his approach to measuring and improving performance, instrumenting code in general – and OBIEE specifically – is something that’s interested me for a long time. The article was the final one that I wrote on my personal blog before joining Rittman Mead and it’s one that I’ve been meaning to re-publish here for a while. A recent client engagement gave me cause to revisit the instrumentation approach and refine it slightly as well as update it for a significant change made in OBIEE 11.1.1.7.1.

What do I mean by instrumentation? Instrumentation is making your program expose information about what is being done, as well as actually doing it. Crudely put, it’s something like this:

10 PRINT "THE TIME IS " NOW()
20 CALL DO_MY_THING()
30 PRINT "I'VE DONE THAT THING, IT TOOK " X " SECONDS"
40 GOTO 10

Rather than just firing some SQL at the database, we associate with that SQL information about which program sent it, what that program was doing, who was using it, and so on. Instrumentation enables you to start analysing performance metrics against tangible actions rather than just amorphous clumps of SQL. It enables you to understand the workload profile on your system and how that’s affecting end users.

Pop quiz: which of these is going to be easier to work with for building up an understanding of a system’s behaviour and workload?

CLIENT_INFO          MODULE                    ACTION       CPU_TIME DISK_READS 
-------------------- ------------------------  ---------- ---------- ---------- 
                                               a17ff8e1         2999          1 
                                               fe6abd92         1000          6 
                                               a264593a         5999          2 
                                               571fe814         5000         12 
                                               63ea4181         7998          4 
                                               7b2fcb68        11999          5

or

CLIENT_INFO          MODULE                    ACTION       CPU_TIME DISK_READS
-------------------- ------------------------  ---------- ---------- ----------
06 Column Selector   GCBC Dashboard/Performan  a17ff8e1         2999          1
05 Table with condit GCBC Dashboard/Performan  a264593a         5999          2
06 View Selector     GCBC Dashboard/Performan  571fe814         5000         12
05 Table with condit GCBC Dashboard/Performan  63ea4181         7998          4
<unsaved analysis>   nqsserver@obi11-01        fe6abd92         1000          6
<unsaved analysis>   nqsserver@obi11-01        7b2fcb68        11999          5

The second one gives us the same information as before, plus the analysis being run by OBIEE, and the dashboard and page.

The benefits of instrumentation work both ways. It makes DBAs happy because they can look at resource usage on the database and trace it back easily to the originating OBIEE dashboard and user. Instrumentation also makes life much easier for troubleshooting OBIEE performance because it’s easy to trace a user’s entire session through from browser, through the BI Stack, and down into the database.

Instrumentation for OBIEE – Step By Step

If you want the ‘tl;dr’ version, the “how” rather than the “why”, here we go. For full details of why it works, see later in the article.

  1. In your RPD create three session variables. These are going to be the default values for variables that we’re going to send to the database. Make sure you set “Enable any user to set the value”.
    • SAW_SRC_PATH
    • SAW_DASHBOARD
    • SAW_DASHBOARD_PG


  2. Set up a session variable initialization block to populate these variables. It is just a “dummy” init block as all you’re doing is setting them to empty/default values, so a ‘SELECT … FROM DUAL’ is just fine:
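    A minimal sketch of such a "dummy" init block query follows; the default strings are entirely up to you – they are simply what you will see when the request variables are not supplied – and the values must match whatever order you define the variable targets in:

    -- One value per variable target, e.g. in the order SAW_SRC_PATH, SAW_DASHBOARD, SAW_DASHBOARD_PG
    SELECT '<unsaved analysis>', '<unsaved dashboard>', '<unsaved page>' FROM DUAL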

  3. For each Connection Pool you want to instrument, go to the Connection Scripts tab and add these three scripts to the Execute before query section:

    -- Pass the OBIEE user's name to CLIENT_IDENTIFIER
    call dbms_session.set_identifier('VALUEOF(NQ_SESSION.USER)')

    -- Pass the Analysis name to CLIENT_INFO
    call dbms_application_info.set_client_info(client_info=>SUBSTR('VALUEOF(NQ_SESSION.SAW_SRC_PATH)',(LENGTH('VALUEOF(NQ_SESSION.SAW_SRC_PATH)')-instr('VALUEOF(NQ_SESSION.SAW_SRC_PATH)','/',-1,1))*-1))

    -- Pass the dashboard name & page to MODULE
    -- NB OBIEE >=11.1.1.7.131017 will set ACTION itself so there is no point setting it here (it will get overridden)
    call dbms_application_info.set_module(module_name=> SUBSTR('VALUEOF(NQ_SESSION.SAW_DASHBOARD)', ( LENGTH('VALUEOF(NQ_SESSION.SAW_DASHBOARD)') - INSTR('VALUEOF(NQ_SESSION.SAW_DASHBOARD)', '/', -1, 1) ) *- 1) || '/' || 'VALUEOF(NQ_SESSION.SAW_DASHBOARD_PG)' ,action_name=> '' );

    You can leave the comments in there, and in fact I’d recommend doing so to make it clear for future RPD developers what these scripts are for.

    Your connection pool should look like this:


    An important point to note is that you generally should not be adding these scripts to connection pools that are used for executing initialisation blocks. Initialisation block queries won’t have these request variables so if you did want to instrument them you’d need to find something else to include in the instrumentation.

Once you’ve made the above changes you should see MODULE, CLIENT_IDENTIFIER and CLIENT_INFO being populated in the Oracle system views:

SELECT SID, 
       PROGRAM, 
       CLIENT_IDENTIFIER, 
       CLIENT_INFO, 
       MODULE, 
       ACTION 
FROM   V$SESSION 
WHERE  LOWER(PROGRAM) LIKE 'nqsserver%';

SID PROGRAM CLIENT_ CLIENT_INFO              MODULE                       ACTION
--- ------- ------- ------------------------ ---------------------------- --------
 17 nqsserv prodney Geographical Analysis 2  11.10 Flights Delay/Overview 32846912
 65 nqsserv prodney Delayed Fligth % history 11.10 Flights Delay/Overview 4bc2a368
 74 nqsserv prodney Delayed Fligth % history 11.10 Flights Delay/Overview 35c9af67
193 nqsserv prodney Geographical Analysis 2  11.10 Flights Delay/Overview 10bdad6c
302 nqsserv prodney Geographical Analysis 1  11.10 Flights Delay/Overview 3a39d178
308 nqsserv prodney Delayed Fligth % history 11.10 Flights Delay/Overview 1fad81e0
421 nqsserv prodney Geographical Analysis 2  11.10 Flights Delay/Overview 4e5d36c1

You’ll note that we don’t set ACTION – that’s because OBIEE now sends a hash of the physical query text across in this column, meaning we can’t use it ourselves. Unfortunately the current version of OBIEE doesn’t store the physical query hash anywhere other than in nqquery.log, meaning that you can’t take advantage of it (i.e. link it back to data from Usage Tracking) within the database alone.

That’s all there is to it – easy! If you want to understand exactly how and why it works, read on…

Instrumentation for OBIEE – How Does it Work?

Connection Pools

When OBIEE runs a dashboard, it does so by taking each analysis on that dashboard and sending a Logical Request for that analysis to the BI Server (nqsserver). The BI Server parses and compiles that Logical request into one or more Physical requests which it then sends to the source database(s).

OBIEE connects to the database via a Connection Pool which specifies the database-specific connection information including credentials, data source name (such as TNS for Oracle). The Connection Pool, as the name suggests, pools connections so that OBIEE is not going through the overhead of connecting and disconnecting for every single query that it needs to run. Instead it will open one or more connections as needed, and share that connection between queries as needed.


As well as the obvious configuration options in a connection pool such as database credentials, OBIEE also supports the option to send additional SQL to the database when it opens a connection and/or sends a new query. It’s this nice functionality that we piggy-back to enable our instrumentation.


Variables

The information that OBIEE can send back through its database connection is limited by what we can expose in variables. From the BI Server’s point of view there are three types of variables:

  1. Repository
  2. Session
  3. Request

The first two are fairly simple concepts; they’re defined within the RPD and populated with Initialisation Blocks (often known as “init blocks”) that are run by the BI Server either on a schedule (repository variables) or per user (session variables). There’s a special type of session variables known as System Session Variables, of which USER is a nice obvious example. These variables are pre-defined in OBIEE and are generally populated automatically when the user session begins (although some, like LOGLEVEL, still need an init block to set them explicitly).

The third type of variable, request variable, is slightly less obvious in function. In a nutshell, they are variables that are specified in the logical request sent to the BI Server, and are passed through to the internals of the BI Server. They’re often used for activating or disabling certain functionality. For example, you can tell OBIEE to specifically not use its cache for a request (even if it finds a match) by setting the request variable DISABLE_CACHE_HIT.
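Issued as a prefix to the Logical SQL – the same SET VARIABLE syntax appears in the query log excerpts further down – that would look something like this (the SELECT itself is just the SampleApp query reused elsewhere in this article, and the value of 1 to disable the cache hit is an assumption on my part):

SET VARIABLE DISABLE_CACHE_HIT=1;
SELECT
   0 s_0,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_1
FROM "A - Sample Sales"
ORDER BY 1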

Request variables can be set manually inline in an analysis from the Advanced tab:


And they can also be set from Variable Prompts either within a report prompt or as a standalone dashboard prompt object. The point about request variables is that they are freeform; if they specify the name of an existing session variable then they will override it (if permitted), but they do not require the session variable to exist. We can see this easily enough – and see a variable request prompt in action at the same time. From the Prompts tab of an analysis I’ve added a Variable Prompt (rather than the usual Column Prompt) and given it a made up name, FOO:


Now when I run the analysis I specify a value for it:


and in the query log there’s the request variable:

-------------------- SQL Request, logical request hash:
bfb12eb6
SET VARIABLE FOO='BAR';
SELECT
   0 s_0,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_1
FROM "A - Sample Sales"
ORDER BY 1
FETCH FIRST 5000001 ROWS ONLY

I’ve cut the quoted Logical SQL down to illustrate the point about the variable, because what was actually there is this:

-------------------- SQL Request, logical request hash:
bfb12eb6
SET VARIABLE QUERY_SRC_CD='Report',SAW_SRC_PATH='/users/prodney/request variable example',FOO='BAR', PREFERRED_CURRENCY='USD';
SELECT
   0 s_0,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_1
FROM "A - Sample Sales"
ORDER BY 1
FETCH FIRST 5000001 ROWS ONLY

which brings me on very nicely to the key point here. When Presentation Services sends a query to the BI Server it does so with a bunch of request variables set, including QUERY_SRC_CD and SAW_SRC_PATH. If you’ve worked with OBIEE for a while then you’ll recognise these names – they’re present in the Usage Tracking table S_NQ_ACCT. Ever wondered how OBIEE knows what values to store in Usage Tracking? Now you know. It’s whatever Presentation Services tells it to. You can easily test this yourself by playing around in nqcmd:

[oracle@demo ~]$ rlwrap nqcmd -d AnalyticsWeb -u prodney -p Admin123 -NoFetch

-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2013 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------

[...]

Give SQL Statement: SET VARIABLE QUERY_SRC_CD='FOO',SAW_SRC_PATH='BAR';SELECT 0 s_0 FROM "A - Sample Sales"
SET VARIABLE QUERY_SRC_CD='FOO',SAW_SRC_PATH='BAR';SELECT 0 s_0 FROM "A - Sample Sales"

Statement execute succeeded

and looking at the results in S_NQ_ACCT:

BIEE_BIPLATFORM@pdborcl > select to_char(start_ts,'YYYY-MM-DD HH24:MI:SS') as start_ts,saw_src_path,query_src_cd from biee_biplatform.s_nq_acct where start_ts > sysdate -1 order by start_ts;

START_TS            SAW_SRC_PATH                             QUERY_SRC_CD
------------------- ---------------------------------------- --------------------
2015-03-21 11:55:10 /users/prodney/request variable example  Report
2015-03-21 12:44:41 BAR                                      FOO
2015-03-21 12:45:26 BAR                                      FOO
2015-03-21 12:45:28 BAR                                      FOO
2015-03-21 12:46:23 BAR                                      FOO

Key takeaway here: Presentation Services defines a bunch of useful request variables when it sends Logical SQL to the BI Server:

  • QUERY_SRC_CD
  • SAW_SRC_PATH
  • SAW_DASHBOARD
  • SAW_DASHBOARD_PG

Embedding Variables in Connection Script Calls

There are four options that we can configure when connecting to the database from OBIEE. These are:

  • CLIENT_IDENTIFIER
  • CLIENT_INFO
  • MODULE
  • ACTION

As of OBIEE version 11.1.1.7.1 (i.e. OBIEE >= 11.1.1.7.131017) OBIEE automatically sets the ACTION field to a hash of the physical query – for more information see Doc ID 1941378.1. That leaves us with three remaining fields (since OBIEE sets ACTION after anything we do with the Connection Pool):

  • CLIENT_IDENTIFIER
  • CLIENT_INFO
  • MODULE

The syntax of the command in a Connection Script is physical SQL and the VALUEOF function to extract the OBIEE variable:

  • VALUEOF(REPOSITORY_VARIABLE)
  • VALUEOF(NQ_SESSION.SESSION_VAR)
  • VALUEOF(NQ_SESSION.REQUEST_VAR)

As a simple example here is passing the userid of the OBIEE user, using the Execute before query connection script:

-- Pass the OBIEE user's name to CLIENT_IDENTIFIER
call dbms_session.set_identifier('VALUEOF(NQ_SESSION.USER)')


This would be set for every Connection Pool – but only those used for query execution – not init blocks. Run a query that is routed through the Connection Pool you defined the script against and check out V$SESSION:

SQL> select sid,program,client_identifier from v$session where program like 'nqsserver%';

       SID PROGRAM                                          CLIENT_IDENTIFIER
---------- ------------------------------------------------ ----------------------------------------------------------------
        22 nqsserver@demo.us.oracle.com (TNS V1-V3)         prodney

The USER session variable is always present, so this is a safe thing to do. But, what about SAW_SRC_PATH? This is the path in the Presentation Catalog of the analysis being executed. Let’s add this into the Connection Pool script, passing it through as the CLIENT_INFO:

-- Pass the Analysis name to CLIENT_INFO
call dbms_application_info.set_client_info(client_info=>'VALUEOF(NQ_SESSION.SAW_SRC_PATH)')

This works just fine for analyses within a dashboard, or standalone analyses that have been saved. But what about a new analysis that hasn’t been saved yet? Unfortunately the result is not pretty:


[10058][State: S1000] [NQODBC] [SQL_STATE: S1000] [nQSError: 10058] A general error has occurred.
[nQSError: 43113] Message returned from OBIS.
[nQSError: 43119] Query Failed:
[nQSError: 23006] The session variable, NQ_SESSION.SAW_SRC_PATH, has no value definition.
Statement execute failed

That’s because SAW_SRC_PATH is a request variable and since the analysis has not been saved Presentation Services does not pass it to BI Server as a request variable. The same holds true for SAW_DASHBOARD and SAW_DASHBOARD_PG if you run an analysis outside of a dashboard – the respective request variables are not set and hence the connection pool script causes the query itself to fail.

The way around this is to cheat, slightly. If you create session variables with the names of the request variables that we want to use in the connection pool scripts then we avoid the above nasty failures. If the request variables are set then all is well, and if they are not then we fall back on whatever value the session variable was initialised with.

The final icing on the cake of the solution given above is a bit of string munging with INSTR and SUBSTR to convert and concatenate the dashboard path and page into a single string, so that instead of:

/shared/01. QuickStart/_portal/1.30 Quickstart/Overview

we get:

1.30 Quickstart/Overview

Which is much easier on the eye when looking at dashboard names. Similarly with the analysis path we strip all but the last section of it.
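If you want to sanity-check the string munging outside OBIEE, you can run the same INSTR/SUBSTR expression directly against the database with literal values (assuming, as in the example above, that SAW_DASHBOARD holds the path up to the dashboard and SAW_DASHBOARD_PG the page name):

-- Everything after the last '/' of the dashboard path, plus '/' and the page name
SELECT SUBSTR('/shared/01. QuickStart/_portal/1.30 Quickstart',
              (LENGTH('/shared/01. QuickStart/_portal/1.30 Quickstart')
               - INSTR('/shared/01. QuickStart/_portal/1.30 Quickstart', '/', -1, 1)) * -1)
       || '/' || 'Overview' AS module_name
FROM dual;

-- MODULE_NAME
-- ------------------------
-- 1.30 Quickstart/Overview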

Granular monitoring of OBIEE on the database

Once OBIEE has been configured to be more articulate in its connection to the database, it enables the use of DBMS_MONITOR to understand more about the performance of given dashboards, analyses, or queries for a given user. Through DBMS_MONITOR the collection of statistics such as DB time and DB CPU can be triggered, as well as trace-file generation for queries matching the specified criteria.

As an example, here is how to switch on statistics collection for just one dashboard in OBIEE, using SERV_MOD_ACT_STAT_ENABLE:

call dbms_monitor.SERV_MOD_ACT_STAT_ENABLE(
    module_name=>'GCBC Dashboard/Overview'
    ,service_name=>'orcl'
);

Now Oracle starts to collect information whenever that particular dashboard is run, which we can use to understand more about how it is performing from a database point of view:

SYS@orcl AS SYSDBA> select module,stat_name,value from V$SERV_MOD_ACT_STATS;

MODULE                   STAT_NAME                           VALUE
------------------------ ------------------------------ ----------
GCBC Dashboard/Overview  user calls                             60
GCBC Dashboard/Overview  DB time                              6789
GCBC Dashboard/Overview  DB CPU                               9996
GCBC Dashboard/Overview  parse count (total)                    15
GCBC Dashboard/Overview  parse time elapsed                    476
GCBC Dashboard/Overview  execute count                          15
GCBC Dashboard/Overview  sql execute elapsed time             3887
[...]

Similarly the CLIENT_IDENTIFIER field can be used to collect statistics with CLIENT_ID_STAT_ENABLE or trigger trace file generation with CLIENT_ID_TRACE_ENABLE. What you populate CLIENT_IDENTIFIER with is up to you – by default the script I’ve detailed at the top of this article inserts the OBIEE username in it, but you may want to put the analysis here if that’s of more use from a diagnostics point of view on the database side. The CLIENT_INFO field is still available for the other item, but cannot be used with DBMS_MONITOR for identifying queries.
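For completeness, here is roughly what the CLIENT_IDENTIFIER-based calls look like – a sketch only, where 'prodney' is simply the example OBIEE user from the outputs above:

-- Aggregate statistics per OBIEE user, visible afterwards in V$CLIENT_STATS
exec dbms_monitor.client_id_stat_enable(client_id => 'prodney');

-- 10046-style trace files for everything that user runs, including wait events
exec dbms_monitor.client_id_trace_enable(client_id => 'prodney', waits => TRUE, binds => FALSE);

-- ...and switch both off again when finished
exec dbms_monitor.client_id_stat_disable(client_id => 'prodney');
exec dbms_monitor.client_id_trace_disable(client_id => 'prodney');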

Categories: BI & Warehousing

12c Parallel Execution New Features: Concurrent UNION ALL - Part 3

Randolf Geist - Sun, 2015-03-22 15:17
In the final part of this instalment I want to focus on the possible optimisation of remote access that I outlined in the initial part, which is based on the idea of running multiple concurrent remote branches of a UNION ALL to overcome the query coordinator bottleneck of straightforward remote queries that need to transfer larger amounts of data.

For that purpose I now simply change my sample query to access the serial table T2 via the DB link defined in the setup of the initial part, like this:

set echo on timing on time on

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
);
which gives me this execution plan:

--------------------------------------------------------------
| Id | Operation | Name | Cost (%CPU)| Inst |IN-OUT|
--------------------------------------------------------------
| 0 | SELECT STATEMENT | | 0 (0)| | |
| 1 | REMOTE | | | LOOP | R->S |
--------------------------------------------------------------
Ouch, not exactly what I wanted. Of course it's nice that the optimizer recognizes that this is a statement that can be executed fully remotely, but for my particular purpose I don't want that to happen. So let's add a dummy local branch:

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where 1 = 0

);
which now gives this plan:

---------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,00 | PCWP | |
| 5 | VIEW | | Q1,00 | PCWP | |
| 6 | UNION-ALL | | Q1,00 | PCWP | |
| 7 | PX SELECTOR | | Q1,00 | PCWP | |
| 8 | REMOTE | T2 | Q1,00 | PCWP | |
| 9 | PX SELECTOR | | Q1,00 | PCWP | |
| 10 | REMOTE | T2 | Q1,00 | PCWP | |
| 11 | PX SELECTOR | | Q1,00 | PCWP | |
| 12 | REMOTE | T2 | Q1,00 | PCWP | |
| 13 | PX SELECTOR | | Q1,00 | PCWP | |
| 14 | REMOTE | T2 | Q1,00 | PCWP | |
| 15 | PX SELECTOR | | Q1,00 | PCWP | |
| 16 | REMOTE | T2 | Q1,00 | PCWP | |
| 17 | PX SELECTOR | | Q1,00 | PCWP | |
| 18 | REMOTE | T2 | Q1,00 | PCWP | |
| 19 | PX SELECTOR | | Q1,00 | PCWP | |
| 20 | REMOTE | T2 | Q1,00 | PCWP | |
| 21 | PX SELECTOR | | Q1,00 | PCWP | |
| 22 | REMOTE | T2 | Q1,00 | PCWP | |
| 23 | PX SELECTOR | | Q1,00 | PCWP | |
| 24 | REMOTE | T2 | Q1,00 | PCWP | |
| 25 | PX SELECTOR | | Q1,00 | PCWP | |
| 26 | REMOTE | T2 | Q1,00 | PCWP | |
| 27 | PX SELECTOR | | Q1,00 | PCWP | |
|* 28 | FILTER | | Q1,00 | PCWP | |
| 29 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
---------------------------------------------------------------------------
That is more like what I wanted. One funny side effect of the dummy branch is that the automatically derived degree is now 11, since there are 11 branches. Although the optimizer knows that the 11th branch won't be executed (the filter operator ID = 28 is "NULL IS NOT NULL") it gets its own PX SELECTOR assigned, so that's probably the explanation why the calculation arrives at 11.

So let's see what happens at runtime using the setup from the previous part where T2 has 2M rows:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
1| 4485K| | 1| 4| 5|@**** (5) |
2| 45M| | 0| 10| 10|********** (10) |
3| 46M| | 1| 9| 10|@********* (10) |
4| 46M| | 0| 10| 10|********** (10) |
5| 46M| | 0| 10| 10|********** (10) |
6| 46M| | 0| 10| 10|********** (10) |
7| 46M| | 0| 10| 10|********** (10) |
8| 46M| | 0| 10| 10|********** (10) |
9| 46M| | 0| 10| 10|********** (10) |
10| 46M| | 0| 10| 10|********** (10) |
11| 46M| | 0| 10| 10|********** (10) |
12| 46M| | 0| 10| 10|********** (10) |
13| 46M| | 0| 10| 10|********** (10) |
14| 46M| | 0| 10| 10|********** (10) |
15| 46M| | 0| 10| 10|********** (10) |
16| 46M| | 0| 10| 10|********** (10) |
17| 46M| | 0| 10| 10|********** (10) |
18| 46M| | 0| 10| 10|********** (10) |
19| 46M| | 0| 10| 10|********** (10) |
20| 46M| | 0| 10| 10|********** (10) |
21| 46M| | 0| 10| 10|********** (10) |
22| 46M| | 0| 10| 10|********** (10) |
23| 46M| | 0| 10| 10|********** (10) |
24| 46M| | 0| 10| 10|********** (10) |
25| 46M| | 1| 9| 10|@********* (10) |
26| 46M| | 1| 9| 10|@********* (10) |
27| 46M| | 0| 10| 10|********** (10) |
28| 46M| | 0| 10| 10|********** (10) |
29| 46M| | 0| 10| 10|********** (10) |
30| 46M| | 0| 10| 10|********** (10) |
31| 46M| | 0| 10| 10|********** (10) |
32| 46M| | 0| 10| 10|********** (10) |
33| 46M| | 1| 9| 10|@********* (10) |
34| 46M| | 0| 10| 10|********** (10) |
35| 46M| | 1| 9| 10|@********* (10) |
36| 46M| | 0| 10| 10|********** (10) |
37| 46M| | 0| 10| 10|********** (10) |
38| | | 0| 0| 0| (0) |
39| 46M| | 2| 8| 10|@@******** (10) |
40| 46M| | 0| 10| 10|********** (10) |
41| 46M| | 0| 10| 10|********** (10) |
42| 46M| | 0| 10| 10|********** (10) |
43| 46M| | 1| 9| 10|@********* (10) |
44| 46M| | 0| 10| 10|********** (10) |
45| 46M| | 0| 10| 10|********** (10) |
46| 46M| | 0| 10| 10|********** (10) |
47| 46M| | 0| 10| 10|********** (10) |
48| 46M| | 0| 10| 10|********** (10) |
49| 46M| | 0| 10| 10|********** (10) |
50| 46M| | 0| 10| 10|********** (10) |
51| 46M| | 0| 10| 10|********** (10) |
52| 46M| | 0| 10| 10|********** (10) |
53| 46M| | 1| 9| 10|@********* (10) |
54| 46M| | 0| 10| 10|********** (10) |
55| 46M| | 0| 10| 10|********** (10) |
56| 46M| | 0| 10| 10|********** (10) |
57| 46M| | 0| 10| 10|********** (10) |
58| 46M| | 0| 10| 10|********** (10) |
59| 36M| | 0| 8| 8|******** (8) |
60| 4609K| | 0| 1| 1|* (1) |

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 1 | SORT AGGREGATE | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 2 | PX COORDINATOR | | 12 | 11 | | | | | 0:sqlplus.exe(0)[11],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 4 | SORT AGGREGATE | | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 5 | VIEW | | 11 | 20M | 43 | 1 | 1 | # | 1:P002(1)[2000K],P00A(0)[2000K],P001(0)[2000K],P003(0)[2000K],P004(0)[2000K],... |
| 6 | UNION-ALL | | 11 | 20M | | | | | 0:P00A(0)[2000K],P001(0)[2000K],P002(0)[2000K],P003(0)[2000K],P004(0)[2000K],... |
| 7 | PX SELECTOR | | 11 | 2000K | | | | | 0:P006(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 8 | REMOTE | T2 | 11 | 2000K | 1 | 59 | 58 | #################### | 3:P006(58)[2000K],P004(1)[0],P007(1)[0],P00A(0)[0],P000(0)[0],... |
| 9 | PX SELECTOR | | 11 | 2000K | | | | | 0:P008(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 10 | REMOTE | T2 | 11 | 2000K | 1 | 58 | 57 | #################### | 1:P008(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 11 | PX SELECTOR | | 11 | 2000K | | | | | 0:P00A(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 12 | REMOTE | T2 | 11 | 2000K | 1 | 59 | 58 | #################### | 1:P00A(58)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 13 | PX SELECTOR | | 11 | 2000K | | | | | 0:P004(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 14 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P004(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 15 | PX SELECTOR | | 11 | 2000K | | | | | 0:P007(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 16 | REMOTE | T2 | 11 | 2000K | 2 | 59 | 58 | #################### | 1:P007(58)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 17 | PX SELECTOR | | 11 | 2000K | | | | | 0:P005(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 18 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P005(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 19 | PX SELECTOR | | 11 | 2000K | | | | | 0:P002(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P003(0)[0],... |
| 20 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 56 | #################### | 1:P002(56)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P003(0)[0],... |
| 21 | PX SELECTOR | | 11 | 2000K | | | | | 0:P009(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 22 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P009(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 23 | PX SELECTOR | | 11 | 2000K | | | | | 0:P003(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 24 | REMOTE | T2 | 11 | 2000K | 2 | 57 | 56 | #################### | 1:P003(56)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 25 | PX SELECTOR | | 11 | 2000K | | | | | 0:P001(0)[2000K],P00A(0)[0],P000(0)[0],P002(0)[0],P003(0)[0],... |
| 26 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P001(57)[2000K],P00A(0)[0],P000(0)[0],P002(0)[0],P003(0)[0],... |
| 27 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
|* 28 | FILTER | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 29 | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
So in principle this works as desired: with the automatically derived degree all remote branches are active at the same time. If I now queried different chunks of the same remote object to speed up the transfer and maximize throughput, this should give me what I want.

If you wonder why the execution now took almost one minute when the processing of the same query in the previous part took just 40-45 seconds: this seems to be a side-effect of running that many concurrent CPU-intensive processes on my test system, which nominally has 12 cores / 24 CPUs but obviously doesn't scale linearly, either due to some architectural issues or simply because the CPU speed was lowered with that many cores active.
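Coming back to the idea of querying different chunks of the same remote object, here is a minimal sketch of what that could look like – splitting the 2M rows of T2 into ID ranges across the remote branches (assuming the IDs are evenly spread from 1 to 2000000, and keeping the dummy local branch to avoid the fully remote plan shown earlier):

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, filler from t2@loop where id between       1 and  500000
union all
select id, filler from t2@loop where id between  500001 and 1000000
union all
select id, filler from t2@loop where id between 1000001 and 1500000
union all
select id, filler from t2@loop where id between 1500001 and 2000000
union all
select id, filler from t2 where 1 = 0
);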

However, look what happens when I change the remote query slightly so that no rows will be returned:

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where 1 = 0
);
The runtime profile now looks like this:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
6| 36M| | 0| 9,2| 9,2|********* (9,2) |
12| 36M| | 0| 11| 11|*********** (11) |
18| 36M| | 0| 11| 11|*********** (11) |
24| 36M| | 0| 11| 11|*********** (11) |
30| 36M| | 0| 11| 11|*********** (11) |
36| 36M| | 0| 11| 11|*********** (11) |
42| 36M| | 0| 9,2| 9,2|********* (9,2) |
48| 36M| | 0| 11| 11|*********** (11) |
54| 36M| | 0| 11| 11|*********** (11) |
60| 32M| | 0| 11| 11|*********** (11) |
66| 9641K| | 0| 11| 11|*********** (11) |
72| 9641K| | 0| 11| 11|*********** (11) |
78| 9641K| | 0| 11| 11|*********** (11) |
84| 9641K| | 0| 11| 11|*********** (11) |
90| 9641K| | 0| 11| 11|*********** (11) |
96| 9641K| | 0| 11| 11|*********** (11) |
102| 9641K| | 0| 9,2| 9,2|********* (9,2) |
108| 9641K| | 0| 11| 11|*********** (11) |
114| 9801K| | 0| 11| 11|*********** (11) |
120|10281K| | 0| 11| 11|*********** (11) |
126|10281K| | 0| 11| 11|*********** (11) |
132|10281K| | 0| 11| 11|*********** (11) |
138|10281K| | 0| 11| 11|*********** (11) |
144|10281K| | 0| 11| 11|*********** (11) |
150|10281K| | 0| 11| 11|*********** (11) |
156|10281K| | 0| 11| 11|*********** (11) |
162|10281K| | 0| 9,2| 9,2|********* (9,2) |
168|10281K| | 0| 11| 11|*********** (11) |
174|10281K| | 0| 11| 11|*********** (11) |
180|10281K| | 0| 11| 11|*********** (11) |
186|10281K| | 0| 11| 11|*********** (11) |
192|10281K| | 0| 11| 11|*********** (11) |
198|10281K| | 0| 11| 11|*********** (11) |
204|10281K| | 0| 11| 11|*********** (11) |
210|10281K| | 0| 11| 11|*********** (11) |
216|10281K| | 0| 11| 11|*********** (11) |
222|10281K| | 0| 9,2| 9,2|********* (9,2) |
228|10281K| | 0| 11| 11|*********** (11) |
234|10281K| | 0| 11| 11|*********** (11) |
240|10281K| | 0| 11| 11|*********** (11) |
246|10281K| | 0| 11| 11|*********** (11) |
252|10281K| | 0| 11| 11|*********** (11) |
258|10281K| | 0| 11| 11|*********** (11) |
264|10281K| | 0| 11| 11|*********** (11) |
270|10281K| | 0| 11| 11|*********** (11) |
276|10281K| | 0| 11| 11|*********** (11) |
282|10281K| | 0| 9,2| 9,2|********* (9,2) |
287|10281K| | 0| 11| 11|*********** (11) |
292|10281K| | 0| 11| 11|*********** (11) |
297|10281K| | 0| 11| 11|*********** (11) |
302|10281K| | 0| 11| 11|*********** (11) |
307|10281K| | 0| 11| 11|*********** (11) |
312|10281K| | 0| 11| 11|*********** (11) |
317|10281K| | 0| 11| 11|*********** (11) |
322|10281K| | 0| 11| 11|*********** (11) |
327|10281K| | 0| 11| 11|*********** (11) |
332|10281K| | 0| 11| 11|*********** (11) |
337|10281K| | 0| 11| 11|*********** (11) |
342|10281K| | 0| 8,8| 8,8|********* (8,8) |
347|10281K| | 0| 11| 11|*********** (11) |
352|10281K| | 0| 11| 11|*********** (11) |
357|10281K| | 0| 11| 11|*********** (11) |
362|10281K| | 0| 11| 11|*********** (11) |
367|10281K| | 0| 11| 11|*********** (11) |
372|10281K| | 0| 11| 11|*********** (11) |
377|10281K| | 0| 11| 11|*********** (11) |
382|10281K| | 0| 11| 11|*********** (11) |
387|10281K| | 0| 11| 11|*********** (11) |
392|10281K| | 0| 11| 11|*********** (11) |
397|10281K| | 0| 11| 11|*********** (11) |
402|10281K| | 0| 8,8| 8,8|********* (8,8) |
407|10281K| | 0| 11| 11|*********** (11) |
412|10281K| | 0| 11| 11|*********** (11) |
417|10281K| | 0| 11| 11|*********** (11) |
422|10281K| | 0| 11| 11|*********** (11) |
427|10281K| | 0| 11| 11|*********** (11) |
432|10281K| | 0| 11| 11|*********** (11) |
437|10281K| | 0| 11| 11|*********** (11) |
442|10281K| | 0| 11| 11|*********** (11) |
447|10281K| | 0| 11| 11|*********** (11) |
452|10281K| | 0| 11| 11|*********** (11) |
457|10281K| | 0| 11| 11|*********** (11) |
462|10281K| | 0| 8,8| 8,8|********* (8,8) |
467|10281K| | 0| 11| 11|*********** (11) |
472|10281K| | 0| 11| 11|*********** (11) |
477|10281K| | 0| 11| 11|*********** (11) |
482|10281K| | 0| 11| 11|*********** (11) |
487|10281K| | 0| 11| 11|*********** (11) |
492|10281K| | 0| 11| 11|*********** (11) |
497|10281K| | 0| 11| 11|*********** (11) |
502|10281K| | 0| 11| 11|*********** (11) |
507|10281K| | 0| 11| 11|*********** (11) |
512|10281K| | 0| 11| 11|*********** (11) |
517|10281K| | 0| 11| 11|*********** (11) |
522|10281K| | 0| 8,8| 8,8|********* (8,8) |
527|10281K| | 0| 11| 11|*********** (11) |
532|10281K| | 0| 11| 11|*********** (11) |
537| 9535K| | 0| 10| 10|********** (10) |
542| 7902K| | 0| 8,4| 8,4|******** (8,4) |
547| 4894K| | 0| 5,2| 5,2|***** (5,2) |

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 1 | SORT AGGREGATE | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 2 | PX COORDINATOR | | 12 | 11 | | | | | 0:sqlplus.exe(0)[11],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 4 | SORT AGGREGATE | | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 5 | VIEW | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 6 | UNION-ALL | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 7 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 8 | REMOTE | T2 | 11 | 0 | 2 | 59 | 58 | ### | 11:P001(58)[0],P002(58)[0],P004(58)[0],P005(58)[0],P00A(57)[0],... |
| 9 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 10 | REMOTE | T2 | 11 | 0 | 58 | 57 | 56 | ### | 11:P00A(53)[0],P001(53)[0],P002(53)[0],P006(53)[0],P007(53)[0],... |
| 11 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 12 | REMOTE | T2 | 11 | 0 | 111 | 58 | 57 | ### | 11:P008(54)[0],P000(53)[0],P001(53)[0],P002(53)[0],P004(53)[0],... |
| 13 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 14 | REMOTE | T2 | 11 | 0 | 163 | 61 | 60 | ### | 11:P00A(54)[0],P001(54)[0],P004(54)[0],P000(53)[0],P002(53)[0],... |
| 15 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 16 | REMOTE | T2 | 11 | 0 | 216 | 61 | 60 | ### | 11:P00A(55)[0],P000(54)[0],P005(54)[0],P006(54)[0],P001(53)[0],... |
| 17 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 18 | REMOTE | T2 | 11 | 0 | 269 | 65 | 64 | #### | 11:P005(58)[0],P007(57)[0],P00A(56)[0],P000(56)[0],P004(56)[0],... |
| 19 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 20 | REMOTE | T2 | 11 | 0 | 324 | 64 | 63 | #### | 11:P006(55)[0],P00A(53)[0],P000(53)[0],P004(53)[0],P008(53)[0],... |
| 21 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 22 | REMOTE | T2 | 11 | 0 | 376 | 66 | 65 | #### | 11:P007(54)[0],P00A(53)[0],P005(53)[0],P001(52)[0],P003(52)[0],... |
| 23 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 24 | REMOTE | T2 | 11 | 0 | 429 | 67 | 66 | #### | 11:P004(54)[0],P008(54)[0],P00A(53)[0],P000(53)[0],P001(53)[0],... |
| 25 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 26 | REMOTE | T2 | 11 | 0 | 481 | 67 | 66 | ### | 11:P008(54)[0],P000(53)[0],P001(53)[0],P003(53)[0],P009(53)[0],... |
| 27 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 28 | FILTER | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 29 | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Ouch - this now took more than nine times longer, and although we see the concurrent UNION ALL plan shape, there is not much concurrency visible in the runtime activity: each branch seems to be executed by all 11 PX servers concurrently, and only then does the next branch get executed, again by all 11 PX servers... It more or less looks as if the PX SELECTOR doesn't work correctly: instead of assigning a branch to only one PX server, the branch seems to get executed by all of them - which, as we've already seen in the previous parts of this series, is what actually happens, but for non-remote branches the PX SELECTOR functionality made sure that only one of them actually did something while the others simply skipped the operation.

We can see this problem confirmed by looking at the SQL execution statistics of the remote queries executed via V$SQLSTATS or DBA_HIST_SQLSTAT:

SQL_ID FETCHES END_OF_FETCH_COUNT EXECUTIONS PARSE_CALLS ROWS_PROCESSED
------------- ---------- ------------------ ---------- ----------- --------------
dtjb3bxg1ysdk 730 10 110 110 20000100
d36r1d00yaunc 110 110 108 108 0
The first SQL_ID corresponds to the remote query used by my first successful example - it already confirms that the remote query was actually executed 110 times (=> 11 PX servers times 10 branches) instead of the expected 10 times. The "ROWS_PROCESSED" and "END_OF_FETCH_COUNT" suggest what seems to have happened: Ten of the executions actually fetched the rows to the end (10 times 2M rows), but 100 of them fetched only a single row and then stopped the processing.
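For reference, the figures above come from a query along these lines against the database on the receiving end of the database link (a loopback link in this setup), using the SQL_IDs of the two remote statements:

select sql_id, fetches, end_of_fetch_count, executions,
       parse_calls, rows_processed
from   v$sqlstats
where  sql_id in ('dtjb3bxg1ysdk', 'd36r1d00yaunc');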

This also explains why my slightly changed query influenced the runtime profile so dramatically: since no rows were returned by the remote query, all 110 executions had to run to the end - they would only stop early after fetching the first row, but there were no rows to fetch.

This problem is tracked via bug 19565803: INEFFICIENT EXECUTION OF PARALLEL UNION-ALL INVOLVING REMOTE TABLES and is mentioned to be fixed in 12.2, but there doesn't seem to be a patch/fix available for 12.1.

So in principle my original idea should work: assuming that the remote queries just access different chunks/partitions of the same segment, the issue just described shouldn't matter. However, depending on what exactly the remote queries do, other scenarios might be affected. The possible overhead depends on how much work the remote queries have to perform before they return the first row.

Austin Community College’s ACCelerator: Big bet on emporium approach with no pilots

Michael Feldstein - Sun, 2015-03-22 14:55

By Phil Hill

While at SXSWedu, I was able to visit Austin Community College’s ACCelerator lab, which got a fair bit of publicity over the past month. While the centerpiece of ACCelerator usage is developmental math, the 600+ workstation facility spread over 32,000 square feet also supports Tutoring in a variety of subjects, First year experience, Group advising, Academic Coaching, Adult Education, Continuing Education, College readiness assessment preparation, and Student skills workshops.

ACCelerator

But it is the developmental math course that has received the most coverage.

Austin Community College welcomed second lady Dr. Jill Biden and Under Secretary of Education Dr. Ted Mitchell on Monday, March 9, to tour the Highland Campus’ ACCelerator and meet with students and faculty of the college’s new developmental math course, MATD 0421. [snip]

“I teach a lot of developmental students,” says Dr. Biden. “The one stumbling block does seem to be math and math anxiety and ‘Can I do it?’. This (course) seems to be so empowering and so positive. Students can see immediate success.”

MATD 0421 is a self-paced, emporium-style course that encompasses all three levels of developmental math. Paul Fain at Inside Higher Ed had an excellent article that included a description of the motivation.

Dismal remedial success rates have been a problem at Austin, which enrolls 60,000 students. So faculty members from the college looked around for alternative approaches to teaching math.

“Really, there’s nothing to lose,” said [Austin CC president] Rhodes.

The Highland Campus, where the ACCelerator lab is located, is built in a former shopping mall. Students at Austin CC can choose courses at any of the 8 campuses or 5 centers. All developmental math at the Highland Campus is run through MATD 0421, so students across the system can choose traditional approaches at other campuses or the emporium approach at Highland.

Austin CC picked this approach after researching several other initiatives (Fain describes Virginia Tech and Montgomery College examples). The IHE article then describes the design:

Austin officials decided to try the emporium method. They paired it with adaptive courseware, which adjusts to individual learners based on their progress and ability to master concepts. The college went with ALEKS, an adaptive software platform from McGraw-Hill Education.

Fain describes the personalization aspect:

The new remedial math course is offered at the ACCelerator. The computer stations are arranged in loose clusters of 25 or so. Faculty members are easy to spot in blue vests. Student coaches and staff wear red ones.

This creates a more personalized form of learning, said Stacey Güney, the ACCelerator’s director. That might seem paradoxical in a computer lab that has a bit of a Matrix feel. But Güney said that instead of a class size of 25 students per instructor, the course features 25 classes of one student.

“In here there is no back of the classroom,” she said.

While the program is fairly new (second term), there are some initial results described by the official site:

In MATD 0421’s inaugural semester:

  • The withdrawal rate was less than half the rate for traditional developmental math courses.
  • 75 percent of the students completed the equivalent of one traditional course.
  • Nearly 45 percent completed the equivalent to a course and one-half.
  • Over 14 percent completed the equivalent to two courses.
  • 13 students completed the equivalent of all three courses.

Go read the full IHE article for a thorough description. I would offer the following observations.

  • Rather than a pilot program, which I have argued plagues higher ed and prevents diffusion of innovations, Austin CC has committed to A) a big program up front (~700 students in the Fall 2014 inaugural semester and ~1,000 students in Spring 2015), yet B) they offer students the choice of traditional or emporium. To me, this offers the best of both worlds: a big bet that doesn’t get caught in the “purgatory of pilots” while still offering student choice.
  • While the computer lab and software are easy headlines, I hope people don’t miss the heavy staffing that is a central feature of this lab – there are more than 90 faculty and staff working there, teaching the modular courses, roving the aisles to provide help, and working at help desks. The ACCelerator is NOT an exercise in replacing faculty with computers.
  • During my tour, instructor Christie Allen-Johnson and associate professor Ann P. Vance described their plans to perform a more structured analysis of the results. Expect to see more validated outcomes starting at the end of CY2015.
  • When and if Austin CC proves the value and results of the model, that would be the time to migrate most of the remaining developmental math courses into this emporium model.
  • The one area that concerns me is the lack of structured time for students away from the workstations. Developmental students in community colleges often have not experienced academic success – knowing how to succeed, learning how to learn, believing in their ability to succeed – and often this non-cognitive aspect of math is as important as the actual coursework. Allen-Johnson described the availability of coaching that goes beyond coursework, but that is different than providing structure for coaching and self-regulated learning.

The post Austin Community College’s ACCelerator: Big bet on emporium approach with no pilots appeared first on e-Literate.

SQLDeveloper XML Extensions and auto-navigation

The Anti-Kyte - Sun, 2015-03-22 09:42

It’s official, England are now the second best cricket team in the British Isles !
After all, Scotland were dispatched with ease and as for Wales…they didn’t even make it to the Cricket World Cup.
OK, technically they did because they’re part of England for the purposes of cricket…although you’d be hard pressed to get them to admit it.
Ireland are, of course, some way in front, having actually managed to win the odd game against Test Playing Nations.
Whilst it takes quite some effort to find a silver lining in the cloud of English Cricket’s latest debacle, the same cannot be said if SQLDeveloper is your Oracle Database IDE of choice …

Why I use SQLDeveloper

Now, I’m well aware that each developer has their own favourite IDE. TOAD and PL/SQL Developer are fine tools in their own right, each with their own strengths. Which of these tools you swear by, or swear at, is always likely to be based on some fairly subjective criteria.

One of the main reasons I have for using SQLDeveloper is that it’s so easy to extend by means of a judicious application of XML, combined with a bit of SQL and/or PL/SQL.

Fun with Foreign Keys

As with all of the mainstream IDEs, SQLDeveloper displays a table’s constraints as standard. However, what it doesn’t show is which tables have Foreign Keys to the table you’re looking at.

If, for example, you want to find out which tables have a Foreign Key to HR.COUNTRIES, you need to run a query like this :

select owner, table_name, constraint_name
from all_constraints
where constraint_type = 'R'
and (r_owner, r_constraint_name) in 
(
    select owner, constraint_name
    from all_constraints
    where constraint_type in ('U', 'P')
    and owner = 'HR'
    and table_name = 'COUNTRIES'
)
order by owner, table_name, constraint_name
/

OWNER                          TABLE_NAME                     CONSTRAINT_NAME
------------------------------ ------------------------------ ------------------------------
HR                             LOCATIONS                      LOC_C_ID_FK

SQL> 

Wouldn’t it be handy if you could somehow add a tab to the table view in SQLDeveloper and have this information pop up for the table you’re looking at?
Well, funny you should say that….

<items>
	<item type="editor" node="TableNode" vertical="true">
		<title><![CDATA[Child Tables]]></title>
		<query>
			<sql>
				<![CDATA[
					select owner, table_name, constraint_name
					from all_constraints 
					where constraint_type = 'R' 
					and (r_owner, r_constraint_name) in
					(
						select owner, constraint_name 
						from all_constraints 
						where constraint_type in ('U', 'P') 
						and owner = :OBJECT_OWNER 
						and table_name = :OBJECT_NAME 
					) 
					order by owner, table_name, constraint_name 
				]]>
			</sql>
		</query>
	</item>
</items>

It may not be immediately obvious how this xml code is going to help us.
The answer is that it’s the SQLDeveloper extension that we’ve just written. Yep, just that one xml file.

It’s probably worth taking a closer look at the structure of this file in terms of the XML…

<items>
	<item type="editor" node="TableNode" vertical="true">
		<title><![CDATA[Enter the Tab Name Here]]></title>
		<query>
			<sql>
				<![CDATA[Enter your query here]]>
			</sql>
		</query>
	</item>
</items>

If you want to add a node to a different object type, it seems that you just need to change the node attribute of the item tag as appropriate.
For example, specifying “Viewnode” should enable you to add a Tab to your Views.

Anyway, to add this to SQLDeveloper, we just need to do a bit of pointing and clicking…

To start with, save the above code into an xml file. I’ve called mine deps_fk.xml.

In SQLDeveloper, go to the Tools menu and select Preferences

In the tree on the left hand side expand the Database node. Then click on User Defined Extensions.

sqld_add_ext1

Now click the Add Row button at the bottom of the Right Hand Pane and you should see a new Row appear under the Type and Location headings like this :

sqld_add_ext2

If you now click in the row you’ve just created under the Type heading, you should get a drop-down list of values.
Select Editor from the list :

sqld_add_ext3

In the Location field, you just need to tell SQLDeveloper where your xml file is :

sqld_add_ext4

Finally, hit the OK button at the bottom of the window to save your changes and then re-start SQLDeveloper.

Now, if we open the HR.COUNTRIES table, we can see that there’s now an additional tab called Child Tables.
When we go to this tab, we are rewarded with :

sqld_tab1

This stuff has been done before. As well as this post from Tony Andrews, I also posted something similar several years ago.
I must be getting old, I’m starting to repeat myself !

Whilst all this is quite handy, wouldn’t it be good to be able to link directly to the Child table?

Linking to other objects

Jeff Smith’s post here might be just what we’re looking for.

Using Jeff’s linking technique, we can improve our extension a bit…

<items>
	<item type="editor" node="TableNode" vertical="true">
		<title><![CDATA[Child Tables]]></title>
		<query>
			<sql>
				<![CDATA[
					select owner,  
						'SQLDEV:LINK:'
						    ||owner||':TABLE:'||table_name
						    ||':oracle.dbtools.raptor.controls.grid.DefaultDrillLink' table_name,
						constraint_name
					from all_constraints 
					where constraint_type = 'R' 
					and (r_owner, r_constraint_name) in
					(
						select owner, constraint_name 
						from all_constraints 
						where constraint_type in ('U', 'P') 
						and owner = :OBJECT_OWNER 
						and table_name = :OBJECT_NAME 
					) 
					order by owner, table_name, constraint_name 
				]]>
			</sql>
		</query>
	</item>
</items>

Now when I run this, the table_name appears in blue.

sqld_ext_final1

By clicking it, I can navigate directly to the child table…

sqld_ext_final2

When the code is run, in this example, the string generated for the table_name column is :

SQLDEV:LINK:HR:TABLE:LOCATIONS:oracle.dbtools.raptor.controls.grid.DefaultDrillLink

You can link to all sorts of things using this technique. In fact, it would seem that you can link to any object that has a node in the SQLDeveloper Navigator Tree. If you want to play around, just select the string from dual in SQLDeveloper (using F9 to execute).

For example :

select 'SQLDEV:LINK:HR:TRIGGER:UPDATE_JOB_HISTORY:oracle.dbtools.raptor.controls.grid.DefaultDrillLink'
from dual;

…returns a link to the UPDATE_JOB_HISTORY trigger in the results grid.
Note that the link syntax is case sensitive in that the owner, object type and object name all need to be in uppercase for this to work.

Another point worth noting: specifying a type of PACKAGE takes you to the package header.
If you want to link to the body, then specify PACKAGE BODY.
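
So, for example, generating a link to a package body could look something like this - note that SOME_PKG is just a made-up package name used for illustration:

-- SOME_PKG is a hypothetical package; substitute a real package body name
select 'SQLDEV:LINK:HR:PACKAGE BODY:SOME_PKG:oracle.dbtools.raptor.controls.grid.DefaultDrillLink'
from dual;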

Other SQLDeveloper Extension Types

There’s a whole bunch of stuff you can do to customise SQLDeveloper using nothing more than a bit of boilerplate xml.
In addition to Jeff’s blog, there is an Oracle Wiki which contains, among other things,

It’s definitely worth a look, especially if, like me, you need something to take your mind off the cricket.


Filed under: SQLDeveloper Tagged: adding tabs to sqldeveloper, all_constraints, finding child constraints, linking to tabs in sqldeveloper, oracle.dbtools.raptor.controls.grid.DefaultDrillLink, sqldeveloper xml extensions