
Feed aggregator

Manually applying Global Payroll Rules Packages downloaded from an Update Image

Javier Delgado - Mon, 2015-03-23 08:47
Last week we faced an issue while applying a Tax Update at one of our PeopleSoft HCM 9.2 customers. The Tax Update was delivered as a PeopleSoft Release Patchset, which first needs to be applied to the Update Image before creating the Change Package using PeopleSoft Update Manager.

Unfortunately, during the process, one rules package delivered in the Tax Update was not included within the Change Assistant steps, and therefore it was missed. A few days later we reported the error to Oracle and they pointed us to the original zip file containing the PeopleSoft Release Patchset, which indeed contained the missing package.

We did not want to repeat the entire Change Package definition steps, as it would have required restoring a couple of backups. Instead, we decided to manually apply the rules package.

Not so fast... Unfortunately, within Update Manager the rules packages are not delivered in the usual format used to import, compare and copy them; instead, Update Manager follows its own specific steps.

In the end, we managed to find a way to manually import the package, which is documented below.

Importing the Rules Package

The rules packages in Update Manager are delivered using the Data Migration Workbench. The process to import them starts by defining the directories from which the Data Migration projects should be picked:

PeopleTools > Lifecycle Tools > Migrate Data > Manage File Locations

The path should point to the PTADSAEPRCS directory within the patch (whose zip file needs to be extracted first). Once the path is defined, the Data Migration project can be copied using the Data Migration Workbench:
PeopleTools > Lifecycle Tools > Migrate Data > Data Migration Workbench
The project should now be uploaded using the Load Project From File link. A list of the projects found in the previously defined path will be shown:


Once the project is selected and the Load button pressed, the Project Definition page within the Workbench will be shown:

Applying the Data Migration project is quite simple. First, the project needs to be compared using the Compare button, and once the comparison has finished, the project has to be submitted for copy (using the Submit for Copy button).
Note: Data Migration project submissions may need approval. In that case, make sure the request is approved so that the project is actually copied.
The best way to validate whether the project has been correctly copied or not is to check the contents of the PS_GP_PKG_ADS_DFN table, which should now contain the imported rules package.
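A simple query like the following (just a sketch; a plain count against the table is enough to confirm that the package has arrived) can be used for this check:

SELECT COUNT(*) FROM PS_GP_PKG_ADS_DFN;
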
Rules Package Merge

Once the rules package has been imported, it needs to be merged. The merge process actually takes all the imported rules packages and builds a single rules package to simplify its application. Unfortunately, the process is not available from the user interface, but it can still be run using the command line:
<PS_HOME>\psae.exe -CT <database type> -CD <database name> -CO <PeopleSoft user> -CP <PS user password> -R ESP -AI GP_PKG_ADSMR -I 0 -OT 2 -OF 13 -OP <output directory> -CI <connect id> -CW <connect password>
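As an illustration only (the database type, names, credentials and output directory below are made up), a call could look like this:

psae.exe -CT ORACLE -CD HR92DMO -CO PS -CP PSPWD -R ESP -AI GP_PKG_ADSMR -I 0 -OT 2 -OF 13 -OP C:\temp\gp_merge -CI people -CW peop1e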
Once the rules package is merged, a regular Rules Package will be accessible within the Global Payroll Packages functionality. From there on, the package can be applied following the usual steps.

NYOUG Spring General Meeting

The Oracle Instructor - Mon, 2015-03-23 08:42

The New York Oracle User Group held their Spring General Meeting recently and I was presenting there about the Data Guard Broker and also about the Recovery Area.

Many thanks to the board for organizing this event; I really enjoyed being there! Admittedly, the Broker demonstration didn’t go so smoothly – always dangerous to do things live – but I managed to get out of the mess in time and without losing too much of the message I wanted to get across. At least that’s what I hope ;-)

I took the opportunity to do some sightseeing in New York as well:

Me on Liberty Island


Tagged: NYOUG
Categories: DBA Blogs

apex.oracle.com - Upgrade to APEX 5.0

Denes Kubicek - Mon, 2015-03-23 03:16
It is time to get used to the new APEX builder interface. apex.oracle.com has been upgraded to the latest pre-production version of APEX 5.0. Some functionality, like "Administration", is not visible any more and you have to search for it a bit. The overall impression is good and my old demo application seems to still work in most cases.



Enjoy.
Categories: Development

Announcing the BI Forum 2015 Data Visualisation Challenge

Rittman Mead Consulting - Mon, 2015-03-23 03:00

The Rittman Mead BI Forum 2015 is running in Brighton from May 6th-8th 2015, and Atlanta from May 13th – 15th 2015. At this year’s events we’re introducing our first “data visualization challenge”, open to all attendees and with the dataset and scenario open from now until the start of each event. Using Oracle Business Intelligence 11g and any plugins or graphics libraries that embed and interact with OBIEE (full details and rules below), we challenge you to create the most effective dashboard or visualisation and bring it along to demo on the Friday of each event.

Help DonorsChoose.org Donors Use their Funds Most Effectively

This year’s inaugural data visualisation challenge is based around the project dataset from DonorsChoose.org, an online charity that makes it easy for anyone to help public school classroom projects that need funding (Rittman Mead will be making donations on behalf of the Brighton and Atlanta BI Forums to show our support for this great initiative). The DonorsChoose.org dataset has been used in several hackathons and data-crunching contests around the world, with analysis and visualisations helping to answer questions such as:

  • Why do some projects get funded, while others don’t?
  • Who donates to projects from different subjects?
  • Does proximity to schools change donation behavior?
  • What types of materials are teachers lacking the most? (e.g. chalk, paper, markers, etc.)
  • Do poorer schools ask for more or less money from their donors?
  • If I need product x, what is the difference between projects asking for x that were successful vs. those that weren’t?

More details on uses of the Donorschoose.org dataset can be found on the Donorschoose data blog, and example visualisations you could use to get some ideas and inspiration are on the Donorschoose.org Data Gallery showcase page.


Your challenge is to import this dataset into your analytical database of choice, and then create the best visualisation or dashboard in OBIEE to answer the following question: “Which project can I donate to, where my donation will have most impact?”

How Do I Take Part?

For more on the BI Forum 2015 Data Visualization Challenge including how to download the dataset and the rules of the challenge, take a look at the Rittman Mead BI Forum 2015 Data Visualisation Challenge web page where we’ve provided full details. You can either enter as an individual or as part of a team, but you must be registered for either the Brighton or Atlanta BI Forum events and come along in-person to demonstrate your solution – numbers at each event are strictly limited though, so make sure you register soon at the Rittman Mead BI Forum 2015 home page.

Categories: BI & Warehousing

A new logical data layer?

DBMS2 - Sun, 2015-03-22 23:36

I’m skeptical of data federation. I’m skeptical of all-things-to-all-people claims about logical data layers, and in particular of Gartner’s years-premature “Logical Data Warehouse” buzzphrase. Still, a reasonable number of my clients are stealthily trying to do some kind of data layer middleware, as are other vendors more openly, and I don’t think they’re all crazy.

Here are some thoughts as to why, and also as to challenges that need to be overcome.

There are many things a logical data layer might be trying to facilitate — writing, querying, batch data integration, real-time data integration and more. That said:

  • When you’re writing data, you want it to be banged into a sufficiently-durable-to-acknowledge condition fast. If acknowledgements are slow, performance nightmares can ensue. So writing is the last place you want an extra layer, perhaps unless you’re content with the durability provided by an in-memory data grid.
  • Queries are important. Also, they formally are present in other tasks, such as data transformation and movement. That’s why data manipulation packages (originally Pig, now Hive and fuller SQL) are so central to Hadoop.

Trivial query routing or federation is … trivial.

  • Databases have or can be given some kind of data catalog interface. Of course, this is easier for databases that are tabular, whether relational or MOLAP (Multidimensional OnLine Analytic Processing), but to some extent it can be done for anything.
  • Combining the catalogs can be straightforward. So can routing queries through the system to the underlying data stores.

In fact, what I just described is Business Objects’ original innovation — the semantic layer — two decades ago.

Careless query routing or federation can be a performance nightmare. Do a full scan. Move all the data to some intermediate server that lacks capacity or optimization to process it quickly. Wait. Wait. Wait. Wait … hmmm, maybe this wasn’t the best data-architecture strategy.

Streaming goes well with federation. Some data just arrived, and you want to analyze it before it ever gets persisted. You want to analyze it in conjunction with data that’s been around longer. That’s a form of federation right there.

There are ways to navigate schema messes. Sometimes they work.

  • Polishing one neat relational schema for all your data is exactly what people didn’t want to do when they decided to store a lot of the data non-relationally instead. Still, memorializing some schema after the fact may not be terribly painful.
  • Even so, text search can help you navigate the data wilds. So can collaboration tools. Neither helps all the time, however.

Neither extreme view here — “It’s easy!” or “It will never work!” — seems right. Rather, I think there’s room for a lot of effort and differentiation in exposing cross-database schema information.

I’m leaving out one part of the story on purpose — how these data layers are going to be packaged, and specifically what other functionality they will be bundled with. Confidentiality would screw up that part of the discussion; so also would my doubts as to whether some of those plans are fully baked yet. That said, there’s an aspect of logical data layer to CDAP, and to Kiji as well. And of course it’s central to BI (Business Intelligence) and ETL (Extract/Transform/Load) alike.

One way or another, I don’t think the subject of logical data layers is going away any time soon.


Categories: Other

Instrumenting OBIEE Database Connections For Improved Performance Diagnostics

Rittman Mead Consulting - Sun, 2015-03-22 19:30

Nearly four years ago I wrote a blog post entitled “Instrumenting OBIEE – The Final Chapter”. With hindsight, that title suffix (“The Final Chapter”) may have been a tad presumptuous and naïve of me (or perhaps I can just pretend to be ironic now and go for a five-part-trilogy style approach…). Back then OBIEE 11g had only just been released (who remembers 11.1.1.3 in all its buggy-glory?), and in the subsequent years we’ve had significant patchset releases of OBIEE 11g bringing us up to 11.1.1.7.150120 now and with talk of OBIEE 12c around the corner.

As a fanboi of Cary Millsap and his approach to measuring and improving performance, instrumenting code in general – and OBIEE specifically – is something that’s interested me for a long time. The article was the final one that I wrote on my personal blog before joining Rittman Mead and it’s one that I’ve been meaning to re-publish here for a while. A recent client engagement gave me cause to revisit the instrumentation approach and refine it slightly as well as update it for a significant change made in OBIEE 11.1.1.7.1.

What do I mean by instrumentation? Instrumentation is making your program expose information about what is being done, as well as actually doing it. Crudely put, it’s something like this:

10 PRINT "THE TIME IS " NOW()
20 CALL DO_MY_THING()
30 PRINT "I'VE DONE THAT THING, IT TOOK " X " SECONDS"
40 GOTO 10

Rather than just firing some SQL at the database, we associate with that SQL information about what program sent it, what that program was doing, who was using it, and so on. Instrumentation enables you to start analysing performance metrics against tangible actions rather than just amorphous clumps of SQL. It enables you to understand the workload profile on your system and how that’s affecting end users.

Pop quiz: which of these is going to be easier to work with for building up an understanding of a system’s behaviour and workload?

CLIENT_INFO          MODULE                    ACTION       CPU_TIME DISK_READS 
-------------------- ------------------------  ---------- ---------- ---------- 
                                               a17ff8e1         2999          1 
                                               fe6abd92         1000          6 
                                               a264593a         5999          2 
                                               571fe814         5000         12 
                                               63ea4181         7998          4 
                                               7b2fcb68        11999          5

or

CLIENT_INFO          MODULE                    ACTION       CPU_TIME DISK_READS
-------------------- ------------------------  ---------- ---------- ----------
06 Column Selector   GCBC Dashboard/Performan  a17ff8e1         2999          1
05 Table with condit GCBC Dashboard/Performan  a264593a         5999          2
06 View Selector     GCBC Dashboard/Performan  571fe814         5000         12
05 Table with condit GCBC Dashboard/Performan  63ea4181         7998          4
<unsaved analysis>   nqsserver@obi11-01        fe6abd92         1000          6
<unsaved analysis>   nqsserver@obi11-01        7b2fcb68        11999          5

The second one gives us the same information as before, plus the analysis being run by OBIEE, and the dashboard and page.

The benefits of instrumentation work both ways. It makes DBAs happy because they can look at resource usage on the database and trace it back easily to the originating OBIEE dashboard and user. Instrumentation also makes life much easier for troubleshooting OBIEE performance because it’s easy to trace a user’s entire session through from browser, through the BI Stack, and down into the database.

Instrumentation for OBIEE – Step By Step

If you want the ‘tl;dr’ version, the “how” rather than the “why”, here we go. For full details of why it works, see later in the article.

  1. In your RPD create three session variables. These are going to be the default values for variables that we’re going to send to the database. Make sure you set “Enable any user to set the value”.
    • SAW_SRC_PATH
    • SAW_DASHBOARD
    • SAW_DASHBOARD_PG


  2. Set up a session variable initialization block to populate these variables. It is just a “dummy” init block as all you’re doing is setting them to empty/default values, so a ‘SELECT … FROM DUAL’ is just fine:
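    One way to do this (a sketch; the init block's data source query simply returns empty strings that map, in order, to SAW_SRC_PATH, SAW_DASHBOARD and SAW_DASHBOARD_PG) is:

    SELECT '' AS SAW_SRC_PATH,
           '' AS SAW_DASHBOARD,
           '' AS SAW_DASHBOARD_PG
    FROM   DUAL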

  3. For each Connection Pool you want to instrument, go to the Connection Scripts tab and add these three scripts to the Execute before query section:

    -- Pass the OBIEE user's name to CLIENT_IDENTIFIER
    call dbms_session.set_identifier('VALUEOF(NQ_SESSION.USER)')

    -- Pass the Analysis name to CLIENT_INFO
    call dbms_application_info.set_client_info(client_info=>SUBSTR('VALUEOF(NQ_SESSION.SAW_SRC_PATH)',(LENGTH('VALUEOF(NQ_SESSION.SAW_SRC_PATH)')-instr('VALUEOF(NQ_SESSION.SAW_SRC_PATH)','/',-1,1))*-1))

    -- Pass the dashboard name & page to MODULE
    -- NB OBIEE >=11.1.1.7.131017 will set ACTION itself so there is no point setting it here (it will get overridden)
    call dbms_application_info.set_module(module_name=> SUBSTR('VALUEOF(NQ_SESSION.SAW_DASHBOARD)', ( LENGTH('VALUEOF(NQ_SESSION.SAW_DASHBOARD)') - INSTR('VALUEOF(NQ_SESSION.SAW_DASHBOARD)', '/', -1, 1) ) *- 1) || '/' || 'VALUEOF(NQ_SESSION.SAW_DASHBOARD_PG)' ,action_name=> '' );

    You can leave the comments in there, and in fact I’d recommend doing so to make it clear for future RPD developers what these scripts are for.

    Your connection pool should look like this:


    An important point to note is that you generally should not be adding these scripts to connection pools that are used for executing initialisation blocks. Initialisation block queries won’t have these request variables so if you did want to instrument them you’d need to find something else to include in the instrumentation.

Once you’ve made the above changes you should see MODULE, CLIENT_IDENTIFIER and CLIENT_INFO being populated in the Oracle system views:

SELECT SID, 
       PROGRAM, 
       CLIENT_IDENTIFIER, 
       CLIENT_INFO, 
       MODULE, 
       ACTION 
FROM   V$SESSION 
WHERE  LOWER(PROGRAM) LIKE 'nqsserver%';

SID PROGRAM CLIENT_ CLIENT_INFO              MODULE                       ACTION
--- ------- ------- ------------------------ ---------------------------- --------
 17 nqsserv prodney Geographical Analysis 2  11.10 Flights Delay/Overview 32846912
 65 nqsserv prodney Delayed Fligth % history 11.10 Flights Delay/Overview 4bc2a368
 74 nqsserv prodney Delayed Fligth % history 11.10 Flights Delay/Overview 35c9af67
193 nqsserv prodney Geographical Analysis 2  11.10 Flights Delay/Overview 10bdad6c
302 nqsserv prodney Geographical Analysis 1  11.10 Flights Delay/Overview 3a39d178
308 nqsserv prodney Delayed Fligth % history 11.10 Flights Delay/Overview 1fad81e0
421 nqsserv prodney Geographical Analysis 2  11.10 Flights Delay/Overview 4e5d36c1

You’ll note that we don’t set ACTION – that’s because OBIEE now sends a hash of the physical query text across in this column, meaning we can’t use it ourselves. Unfortunately the current version of OBIEE doesn’t store the physical query hash anywhere other than in nqquery.log, meaning that you can’t take advantage of it (i.e. link it back to data from Usage Tracking) within the database alone.

That’s all there is to it – easy! If you want to understand exactly how and why it works, read on…

Instrumentation for OBIEE – How Does it Work?

Connection Pools

When OBIEE runs a dashboard, it does so by taking each analysis on that dashboard and sending a Logical Request for that analysis to the BI Server (nqsserver). The BI Server parses and compiles that Logical request into one or more Physical requests which it then sends to the source database(s).

OBIEE connects to the database via a Connection Pool which specifies the database-specific connection information including credentials, data source name (such as TNS for Oracle). The Connection Pool, as the name suggests, pools connections so that OBIEE is not going through the overhead of connecting and disconnecting for every single query that it needs to run. Instead it will open one or more connections as needed, and share that connection between queries as needed.


As well as the obvious configuration options in a connection pool such as database credentials, OBIEE also supports the option to send additional SQL to the database when it opens a connection and/or sends a new query. It’s this nice functionality that we piggy-back to enable our instrumentation.


Variables

The information that OBIEE can send back through its database connection is limited by what we can expose in variables. From the BI Server’s point of view there are three types of variables:

  1. Repository
  2. Session
  3. Request

The first two are fairly simple concepts; they’re defined within the RPD and populated with Initialisation Blocks (often known as “init blocks”) that are run by the BI Server either on a schedule (repository variables) or per user (session variables). There’s a special type of session variables known as System Session Variables, of which USER is a nice obvious example. These variables are pre-defined in OBIEE and are generally populated automatically when the user session begins (although some, like LOGLEVEL, still need an init block to set them explicitly).

The third type of variable, request variable, is slightly less obvious in function. In a nutshell, they are variables that are specified in the logical request sent to the BI Server, and are passed through to the internals of the BI Server. They’re often used for activating or disabling certain functionality. For example, you can tell OBIEE to specifically not use its cache for a request (even if it finds a match) by setting the request variable DISABLE_CACHE_HIT.
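As a sketch of what that looks like in Logical SQL (in the same style as the SET VARIABLE examples further down in this article):

SET VARIABLE DISABLE_CACHE_HIT=1;
SELECT
   0 s_0,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_1
FROM "A - Sample Sales"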

Request variables can be set manually inline in an analysis from the Advanced tab:


And they can also be set from Variable Prompts either within a report prompt or as a standalone dashboard prompt object. The point about request variables is that they are freeform; if they specify the name of an existing session variable then they will override it (if permitted), but they do not require the session variable to exist. We can see this easily enough – and see a variable request prompt in action at the same time. From the Prompts tab of an analysis I’ve added a Variable Prompt (rather than the usual Column Prompt) and given it a made up name, FOO:


Now when I run the analysis I specify a value for it:


and in the query log there’s the request variable:

-------------------- SQL Request, logical request hash:
bfb12eb6
SET VARIABLE FOO='BAR';
SELECT
   0 s_0,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_1
FROM "A - Sample Sales"
ORDER BY 1
FETCH FIRST 5000001 ROWS ONLY

I’ve cut the quoted Logical SQL down to illustrate the point about the variable, because what was actually there is this:

-------------------- SQL Request, logical request hash:
bfb12eb6
SET VARIABLE QUERY_SRC_CD='Report',SAW_SRC_PATH='/users/prodney/request variable example',FOO='BAR', PREFERRED_CURRENCY='USD';
SELECT
   0 s_0,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_1
FROM "A - Sample Sales"
ORDER BY 1
FETCH FIRST 5000001 ROWS ONLY

which brings me on very nicely to the key point here. When Presentation Services sends a query to the BI Server it does so with a bunch of request variables set, including QUERY_SRC_CD and SAW_SRC_PATH. If you’ve worked with OBIEE for a while then you’ll recognise these names – they’re present in the Usage Tracking table S_NQ_ACCT. Ever wondered how OBIEE knows what values to store in Usage Tracking? Now you know. It’s whatever Presentation Services tells it to. You can easily test this yourself by playing around in nqcmd:

[oracle@demo ~]$ rlwrap nqcmd -d AnalyticsWeb -u prodney -p Admin123 -NoFetch

-------------------------------------------------------------------------------
          Oracle BI ODBC Client
          Copyright (c) 1997-2013 Oracle Corporation, All rights reserved
-------------------------------------------------------------------------------

[...]

Give SQL Statement: SET VARIABLE QUERY_SRC_CD='FOO',SAW_SRC_PATH='BAR';SELECT 0 s_0 FROM "A - Sample Sales"
SET VARIABLE QUERY_SRC_CD='FOO',SAW_SRC_PATH='BAR';SELECT 0 s_0 FROM "A - Sample Sales"

Statement execute succeeded

and looking at the results in S_NQ_ACCT:

BIEE_BIPLATFORM@pdborcl > select to_char(start_ts,'YYYY-MM-DD HH24:MI:SS') as start_ts,saw_src_path,query_src_cd from biee_biplatform.s_nq_acct where start_ts > sysdate -1 order by start_ts;

START_TS            SAW_SRC_PATH                             QUERY_SRC_CD
------------------- ---------------------------------------- --------------------
2015-03-21 11:55:10 /users/prodney/request variable example  Report
2015-03-21 12:44:41 BAR                                      FOO
2015-03-21 12:45:26 BAR                                      FOO
2015-03-21 12:45:28 BAR                                      FOO
2015-03-21 12:46:23 BAR                                      FOO

Key takeaway here: Presentation Services defines a bunch of useful request variables when it sends Logical SQL to the BI Server:

  • QUERY_SRC_CD
  • SAW_SRC_PATH
  • SAW_DASHBOARD
  • SAW_DASHBOARD_PG

Embedding Variables in Connection Script Calls

There are four options that we can configure when connecting to the database from OBIEE. These are:

  • CLIENT_IDENTIFIER
  • CLIENT_INFO
  • MODULE
  • ACTION

As of OBIEE version 11.1.1.7.1 (i.e. OBIEE >= 11.1.1.7.131017) OBIEE automatically sets the ACTION field to a hash of the physical query – for more information see Doc ID 1941378.1. That leaves us with three remaining fields (since OBIEE sets ACTION after anything we do with the Connection Pool):

  • CLIENT_IDENTIFIER
  • CLIENT_INFO
  • MODULE

The syntax of the command in a Connection Script is physical SQL, using the VALUEOF function to extract the OBIEE variable:

  • VALUEOF(REPOSITORY_VARIABLE)
  • VALUEOF(NQ_SESSION.SESSION_VAR)
  • VALUEOF(NQ_SESSION.REQUEST_VAR)

As a simple example here is passing the userid of the OBIEE user, using the Execute before query connection script:

-- Pass the OBIEE user's name to CLIENT_IDENTIFIER
call dbms_session.set_identifier('VALUEOF(NQ_SESSION.USER)')


This would be set for every Connection Pool – but only those used for query execution – not init blocks. Run a query that is routed through the Connection Pool you defined the script against and check out V$SESSION:

SQL> select sid,program,client_identifier from v$session where program like 'nqsserver%';

       SID PROGRAM                                          CLIENT_IDENTIFIER
---------- ------------------------------------------------ ----------------------------------------------------------------
        22 nqsserver@demo.us.oracle.com (TNS V1-V3)         prodney

The USER session variable is always present, so this is a safe thing to do. But, what about SAW_SRC_PATH? This is the path in the Presentation Catalog of the analysis being executed. Let’s add this into the Connection Pool script, passing it through as the CLIENT_INFO:

-- Pass the Analysis name to CLIENT_INFO
call dbms_application_info.set_client_info(client_info=>'VALUEOF(NQ_SESSION.SAW_SRC_PATH)')

This works just fine for analyses within a dashboard, or standalone analyses that have been saved. But what about a new analysis that hasn’t been saved yet? Unfortunately the result is not pretty:


[10058][State: S1000] [NQODBC] [SQL_STATE: S1000] [nQSError: 10058] A general error has occurred.
[nQSError: 43113] Message returned from OBIS.
[nQSError: 43119] Query Failed:
[nQSError: 23006] The session variable, NQ_SESSION.SAW_SRC_PATH, has no value definition.
Statement execute failed

That’s because SAW_SRC_PATH is a request variable and since the analysis has not been saved Presentation Services does not pass it to BI Server as a request variable. The same holds true for SAW_DASHBOARD and SAW_DASHBOARD_PG if you run an analysis outside of a dashboard – the respective request variables are not set and hence the connection pool script causes the query itself to fail.

The way around this is that we cheat, slightly. If you create session variables with the names of the request variables that we want to use in the connection pool scripts, then we avoid the above nasty failures. If the request variables are set then all is well, and if they are not then we fall back on whatever value we initialised the session variables with.

The final icing on the cake of the solution given above is a bit of string munging with INSTR and SUBSTR to convert and concatenate the dashboard path and page into a single string, so instead of:

/shared/01. QuickStart/_portal/1.30 Quickstart/Overview

we get:

1.30 Quickstart/Overview

This is much easier on the eye when looking at dashboard names. Similarly, for the analysis path we strip all but the last section of it.
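To see what that expression actually does, here is the same SUBSTR/INSTR logic applied to a literal dashboard path (a standalone sketch you can run in SQL*Plus; the path and page are just example values standing in for SAW_DASHBOARD and SAW_DASHBOARD_PG):

SELECT SUBSTR('/shared/01. QuickStart/_portal/1.30 Quickstart',
              (LENGTH('/shared/01. QuickStart/_portal/1.30 Quickstart')
               - INSTR('/shared/01. QuickStart/_portal/1.30 Quickstart', '/', -1, 1)) * -1)
       || '/' || 'Overview' AS module_name
FROM   dual;

which returns "1.30 Quickstart/Overview", the format shown above.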

Granular monitoring of OBIEE on the database

Once OBIEE has been configured to be more articulate in its connection to the database, it enables the use of DBMS_MONITOR to understand more about the performance of given dashboards, analyses, or queries for a given user. Through DBMS_MONITOR the collection of statistics such as DB time, DB CPU, and so on can be triggered, as well as trace-file generation for queries matching the criteria specified.

As an example, here is how to switch on system statistics collection for just one dashboard in OBIEE, using SERV_MOD_ACT_STAT_ENABLE:

call dbms_monitor.SERV_MOD_ACT_STAT_ENABLE(
    module_name=>'GCBC Dashboard/Overview'
    ,service_name=>'orcl'
);

Now Oracle starts to collect information whenever that particular dashboard is run, which we can use to understand more about how it is performing from a database point of view:

SYS@orcl AS SYSDBA> select module,stat_name,value from V$SERV_MOD_ACT_STATS;

MODULE                   STAT_NAME                           VALUE
------------------------ ------------------------------ ----------
GCBC Dashboard/Overview  user calls                             60
GCBC Dashboard/Overview  DB time                              6789
GCBC Dashboard/Overview  DB CPU                               9996
GCBC Dashboard/Overview  parse count (total)                    15
GCBC Dashboard/Overview  parse time elapsed                    476
GCBC Dashboard/Overview  execute count                          15
GCBC Dashboard/Overview  sql execute elapsed time             3887
[...]

Similarly the CLIENT_IDENTIFIER field can be used to collect statistics with CLIENT_ID_STAT_ENABLE or trigger trace file generation with CLIENT_ID_TRACE_ENABLE. What you populate CLIENT_IDENTIFIER with is up to you – by default the script I’ve detailed at the top of this article inserts the OBIEE username in it, but you may want to put the analysis here if that’s of more use from a diagnostics point of view on the database side. The CLIENT_INFO field is still available for the other item, but cannot be used with DBMS_MONITOR for identifying queries.
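As a sketch of how those could be enabled for the setup described above (where CLIENT_IDENTIFIER carries the OBIEE username, 'prodney' in the earlier examples):

begin
  -- aggregate statistics for all database work tagged with this client identifier
  dbms_monitor.client_id_stat_enable(client_id => 'prodney');
  -- write trace files (including waits and binds) for the same sessions
  dbms_monitor.client_id_trace_enable(client_id => 'prodney', waits => true, binds => true);
end;
/

The statistics collected this way can then be queried from V$CLIENT_STATS.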

Categories: BI & Warehousing

12c Parallel Execution New Features: Concurrent UNION ALL - Part 3

Randolf Geist - Sun, 2015-03-22 15:17
In the final part of this instalment I want to focus on the possible optimisation of remote access that I outlined in the initial part, which is based on the idea of running multiple concurrent remote branches of a UNION ALL to overcome the query coordinator bottleneck of straightforward remote queries that need to transfer larger amounts of data.

For that purpose I now simply change my sample query to access the serial table T2 via the DB link defined in the setup of the initial part, like this:

set echo on timing on time on

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
);
which gives me this execution plan:

--------------------------------------------------------------
| Id | Operation | Name | Cost (%CPU)| Inst |IN-OUT|
--------------------------------------------------------------
| 0 | SELECT STATEMENT | | 0 (0)| | |
| 1 | REMOTE | | | LOOP | R->S |
--------------------------------------------------------------
Ouch, not exactly what I wanted. Of course it's nice that the optimizer recognizes that this is a statement that can be executed fully remotely, but for my particular purpose I don't want that to happen. So let's add a dummy local branch:

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where 1 = 0

);
which now gives this plan:

---------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | PX COORDINATOR | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | Q1,00 | PCWP | |
| 5 | VIEW | | Q1,00 | PCWP | |
| 6 | UNION-ALL | | Q1,00 | PCWP | |
| 7 | PX SELECTOR | | Q1,00 | PCWP | |
| 8 | REMOTE | T2 | Q1,00 | PCWP | |
| 9 | PX SELECTOR | | Q1,00 | PCWP | |
| 10 | REMOTE | T2 | Q1,00 | PCWP | |
| 11 | PX SELECTOR | | Q1,00 | PCWP | |
| 12 | REMOTE | T2 | Q1,00 | PCWP | |
| 13 | PX SELECTOR | | Q1,00 | PCWP | |
| 14 | REMOTE | T2 | Q1,00 | PCWP | |
| 15 | PX SELECTOR | | Q1,00 | PCWP | |
| 16 | REMOTE | T2 | Q1,00 | PCWP | |
| 17 | PX SELECTOR | | Q1,00 | PCWP | |
| 18 | REMOTE | T2 | Q1,00 | PCWP | |
| 19 | PX SELECTOR | | Q1,00 | PCWP | |
| 20 | REMOTE | T2 | Q1,00 | PCWP | |
| 21 | PX SELECTOR | | Q1,00 | PCWP | |
| 22 | REMOTE | T2 | Q1,00 | PCWP | |
| 23 | PX SELECTOR | | Q1,00 | PCWP | |
| 24 | REMOTE | T2 | Q1,00 | PCWP | |
| 25 | PX SELECTOR | | Q1,00 | PCWP | |
| 26 | REMOTE | T2 | Q1,00 | PCWP | |
| 27 | PX SELECTOR | | Q1,00 | PCWP | |
|* 28 | FILTER | | Q1,00 | PCWP | |
| 29 | TABLE ACCESS FULL| T2 | Q1,00 | PCWP | |
---------------------------------------------------------------------------
That is more like what I wanted. One funny side effect of the dummy branch is that the automatically derived degree is now 11, since there are 11 branches. Although the optimizer knows that the 11th branch won't be executed (the filter operator at ID = 28 is "NULL IS NOT NULL"), it still gets its own PX SELECTOR assigned, which probably explains why the calculation arrives at 11.

So let's see what happens at runtime using the setup from the previous part where T2 has 2M rows:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
1| 4485K| | 1| 4| 5|@**** (5) |
2| 45M| | 0| 10| 10|********** (10) |
3| 46M| | 1| 9| 10|@********* (10) |
4| 46M| | 0| 10| 10|********** (10) |
5| 46M| | 0| 10| 10|********** (10) |
6| 46M| | 0| 10| 10|********** (10) |
7| 46M| | 0| 10| 10|********** (10) |
8| 46M| | 0| 10| 10|********** (10) |
9| 46M| | 0| 10| 10|********** (10) |
10| 46M| | 0| 10| 10|********** (10) |
11| 46M| | 0| 10| 10|********** (10) |
12| 46M| | 0| 10| 10|********** (10) |
13| 46M| | 0| 10| 10|********** (10) |
14| 46M| | 0| 10| 10|********** (10) |
15| 46M| | 0| 10| 10|********** (10) |
16| 46M| | 0| 10| 10|********** (10) |
17| 46M| | 0| 10| 10|********** (10) |
18| 46M| | 0| 10| 10|********** (10) |
19| 46M| | 0| 10| 10|********** (10) |
20| 46M| | 0| 10| 10|********** (10) |
21| 46M| | 0| 10| 10|********** (10) |
22| 46M| | 0| 10| 10|********** (10) |
23| 46M| | 0| 10| 10|********** (10) |
24| 46M| | 0| 10| 10|********** (10) |
25| 46M| | 1| 9| 10|@********* (10) |
26| 46M| | 1| 9| 10|@********* (10) |
27| 46M| | 0| 10| 10|********** (10) |
28| 46M| | 0| 10| 10|********** (10) |
29| 46M| | 0| 10| 10|********** (10) |
30| 46M| | 0| 10| 10|********** (10) |
31| 46M| | 0| 10| 10|********** (10) |
32| 46M| | 0| 10| 10|********** (10) |
33| 46M| | 1| 9| 10|@********* (10) |
34| 46M| | 0| 10| 10|********** (10) |
35| 46M| | 1| 9| 10|@********* (10) |
36| 46M| | 0| 10| 10|********** (10) |
37| 46M| | 0| 10| 10|********** (10) |
38| | | 0| 0| 0| (0) |
39| 46M| | 2| 8| 10|@@******** (10) |
40| 46M| | 0| 10| 10|********** (10) |
41| 46M| | 0| 10| 10|********** (10) |
42| 46M| | 0| 10| 10|********** (10) |
43| 46M| | 1| 9| 10|@********* (10) |
44| 46M| | 0| 10| 10|********** (10) |
45| 46M| | 0| 10| 10|********** (10) |
46| 46M| | 0| 10| 10|********** (10) |
47| 46M| | 0| 10| 10|********** (10) |
48| 46M| | 0| 10| 10|********** (10) |
49| 46M| | 0| 10| 10|********** (10) |
50| 46M| | 0| 10| 10|********** (10) |
51| 46M| | 0| 10| 10|********** (10) |
52| 46M| | 0| 10| 10|********** (10) |
53| 46M| | 1| 9| 10|@********* (10) |
54| 46M| | 0| 10| 10|********** (10) |
55| 46M| | 0| 10| 10|********** (10) |
56| 46M| | 0| 10| 10|********** (10) |
57| 46M| | 0| 10| 10|********** (10) |
58| 46M| | 0| 10| 10|********** (10) |
59| 36M| | 0| 8| 8|******** (8) |
60| 4609K| | 0| 1| 1|* (1) |

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 1 | SORT AGGREGATE | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 2 | PX COORDINATOR | | 12 | 11 | | | | | 0:sqlplus.exe(0)[11],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 4 | SORT AGGREGATE | | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 5 | VIEW | | 11 | 20M | 43 | 1 | 1 | # | 1:P002(1)[2000K],P00A(0)[2000K],P001(0)[2000K],P003(0)[2000K],P004(0)[2000K],... |
| 6 | UNION-ALL | | 11 | 20M | | | | | 0:P00A(0)[2000K],P001(0)[2000K],P002(0)[2000K],P003(0)[2000K],P004(0)[2000K],... |
| 7 | PX SELECTOR | | 11 | 2000K | | | | | 0:P006(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 8 | REMOTE | T2 | 11 | 2000K | 1 | 59 | 58 | #################### | 3:P006(58)[2000K],P004(1)[0],P007(1)[0],P00A(0)[0],P000(0)[0],... |
| 9 | PX SELECTOR | | 11 | 2000K | | | | | 0:P008(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 10 | REMOTE | T2 | 11 | 2000K | 1 | 58 | 57 | #################### | 1:P008(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 11 | PX SELECTOR | | 11 | 2000K | | | | | 0:P00A(0)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 12 | REMOTE | T2 | 11 | 2000K | 1 | 59 | 58 | #################### | 1:P00A(58)[2000K],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 13 | PX SELECTOR | | 11 | 2000K | | | | | 0:P004(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 14 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P004(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 15 | PX SELECTOR | | 11 | 2000K | | | | | 0:P007(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 16 | REMOTE | T2 | 11 | 2000K | 2 | 59 | 58 | #################### | 1:P007(58)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 17 | PX SELECTOR | | 11 | 2000K | | | | | 0:P005(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 18 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P005(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 19 | PX SELECTOR | | 11 | 2000K | | | | | 0:P002(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P003(0)[0],... |
| 20 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 56 | #################### | 1:P002(56)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P003(0)[0],... |
| 21 | PX SELECTOR | | 11 | 2000K | | | | | 0:P009(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 22 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P009(57)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 23 | PX SELECTOR | | 11 | 2000K | | | | | 0:P003(0)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 24 | REMOTE | T2 | 11 | 2000K | 2 | 57 | 56 | #################### | 1:P003(56)[2000K],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 25 | PX SELECTOR | | 11 | 2000K | | | | | 0:P001(0)[2000K],P00A(0)[0],P000(0)[0],P002(0)[0],P003(0)[0],... |
| 26 | REMOTE | T2 | 11 | 2000K | 2 | 58 | 57 | #################### | 1:P001(57)[2000K],P00A(0)[0],P000(0)[0],P002(0)[0],P003(0)[0],... |
| 27 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
|* 28 | FILTER | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 29 | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
So in principle this works as desired: with the automatically derived degree, all remote branches are active at the same time. If I now queried different chunks of the same remote object to speed up the transfer and maximize throughput, this should give me what I want.

If you wonder why the execution now took almost one minute when the same query in the previous part took just 40-45 seconds: this seems to be a side effect of running that many concurrent CPU-intensive processes on my test system, which nominally has 12 cores / 24 CPUs but obviously doesn't scale linearly, either due to some architectural issues or simply because the CPU speed was lowered with that many cores active.

However, look what happens when I change the remote query slightly so that no rows will be returned:

select /*+ PQ_CONCURRENT_UNION(@"SET$1") */ count(*) from (
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2@loop
where regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') > regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
union all
select id, regexp_replace(t2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as result from t2
where 1 = 0
);
The runtime profile now looks like this:

Activity Timeline based on ASH
-----------------------------------------------

| | | | | | |
| | | | | AVERAGE|AVERAGE |
| | | | | ACTIVE|ACTIVE SESSIONS |
DURATION_SECS|PGA |TEMP | CPU| OTHER| SESSIONS|GRAPH |
-------------|------|------|----------|----------|----------|----------------------------|
6| 36M| | 0| 9,2| 9,2|********* (9,2) |
12| 36M| | 0| 11| 11|*********** (11) |
18| 36M| | 0| 11| 11|*********** (11) |
24| 36M| | 0| 11| 11|*********** (11) |
30| 36M| | 0| 11| 11|*********** (11) |
36| 36M| | 0| 11| 11|*********** (11) |
42| 36M| | 0| 9,2| 9,2|********* (9,2) |
48| 36M| | 0| 11| 11|*********** (11) |
54| 36M| | 0| 11| 11|*********** (11) |
60| 32M| | 0| 11| 11|*********** (11) |
66| 9641K| | 0| 11| 11|*********** (11) |
72| 9641K| | 0| 11| 11|*********** (11) |
78| 9641K| | 0| 11| 11|*********** (11) |
84| 9641K| | 0| 11| 11|*********** (11) |
90| 9641K| | 0| 11| 11|*********** (11) |
96| 9641K| | 0| 11| 11|*********** (11) |
102| 9641K| | 0| 9,2| 9,2|********* (9,2) |
108| 9641K| | 0| 11| 11|*********** (11) |
114| 9801K| | 0| 11| 11|*********** (11) |
120|10281K| | 0| 11| 11|*********** (11) |
126|10281K| | 0| 11| 11|*********** (11) |
132|10281K| | 0| 11| 11|*********** (11) |
138|10281K| | 0| 11| 11|*********** (11) |
144|10281K| | 0| 11| 11|*********** (11) |
150|10281K| | 0| 11| 11|*********** (11) |
156|10281K| | 0| 11| 11|*********** (11) |
162|10281K| | 0| 9,2| 9,2|********* (9,2) |
168|10281K| | 0| 11| 11|*********** (11) |
174|10281K| | 0| 11| 11|*********** (11) |
180|10281K| | 0| 11| 11|*********** (11) |
186|10281K| | 0| 11| 11|*********** (11) |
192|10281K| | 0| 11| 11|*********** (11) |
198|10281K| | 0| 11| 11|*********** (11) |
204|10281K| | 0| 11| 11|*********** (11) |
210|10281K| | 0| 11| 11|*********** (11) |
216|10281K| | 0| 11| 11|*********** (11) |
222|10281K| | 0| 9,2| 9,2|********* (9,2) |
228|10281K| | 0| 11| 11|*********** (11) |
234|10281K| | 0| 11| 11|*********** (11) |
240|10281K| | 0| 11| 11|*********** (11) |
246|10281K| | 0| 11| 11|*********** (11) |
252|10281K| | 0| 11| 11|*********** (11) |
258|10281K| | 0| 11| 11|*********** (11) |
264|10281K| | 0| 11| 11|*********** (11) |
270|10281K| | 0| 11| 11|*********** (11) |
276|10281K| | 0| 11| 11|*********** (11) |
282|10281K| | 0| 9,2| 9,2|********* (9,2) |
287|10281K| | 0| 11| 11|*********** (11) |
292|10281K| | 0| 11| 11|*********** (11) |
297|10281K| | 0| 11| 11|*********** (11) |
302|10281K| | 0| 11| 11|*********** (11) |
307|10281K| | 0| 11| 11|*********** (11) |
312|10281K| | 0| 11| 11|*********** (11) |
317|10281K| | 0| 11| 11|*********** (11) |
322|10281K| | 0| 11| 11|*********** (11) |
327|10281K| | 0| 11| 11|*********** (11) |
332|10281K| | 0| 11| 11|*********** (11) |
337|10281K| | 0| 11| 11|*********** (11) |
342|10281K| | 0| 8,8| 8,8|********* (8,8) |
347|10281K| | 0| 11| 11|*********** (11) |
352|10281K| | 0| 11| 11|*********** (11) |
357|10281K| | 0| 11| 11|*********** (11) |
362|10281K| | 0| 11| 11|*********** (11) |
367|10281K| | 0| 11| 11|*********** (11) |
372|10281K| | 0| 11| 11|*********** (11) |
377|10281K| | 0| 11| 11|*********** (11) |
382|10281K| | 0| 11| 11|*********** (11) |
387|10281K| | 0| 11| 11|*********** (11) |
392|10281K| | 0| 11| 11|*********** (11) |
397|10281K| | 0| 11| 11|*********** (11) |
402|10281K| | 0| 8,8| 8,8|********* (8,8) |
407|10281K| | 0| 11| 11|*********** (11) |
412|10281K| | 0| 11| 11|*********** (11) |
417|10281K| | 0| 11| 11|*********** (11) |
422|10281K| | 0| 11| 11|*********** (11) |
427|10281K| | 0| 11| 11|*********** (11) |
432|10281K| | 0| 11| 11|*********** (11) |
437|10281K| | 0| 11| 11|*********** (11) |
442|10281K| | 0| 11| 11|*********** (11) |
447|10281K| | 0| 11| 11|*********** (11) |
452|10281K| | 0| 11| 11|*********** (11) |
457|10281K| | 0| 11| 11|*********** (11) |
462|10281K| | 0| 8,8| 8,8|********* (8,8) |
467|10281K| | 0| 11| 11|*********** (11) |
472|10281K| | 0| 11| 11|*********** (11) |
477|10281K| | 0| 11| 11|*********** (11) |
482|10281K| | 0| 11| 11|*********** (11) |
487|10281K| | 0| 11| 11|*********** (11) |
492|10281K| | 0| 11| 11|*********** (11) |
497|10281K| | 0| 11| 11|*********** (11) |
502|10281K| | 0| 11| 11|*********** (11) |
507|10281K| | 0| 11| 11|*********** (11) |
512|10281K| | 0| 11| 11|*********** (11) |
517|10281K| | 0| 11| 11|*********** (11) |
522|10281K| | 0| 8,8| 8,8|********* (8,8) |
527|10281K| | 0| 11| 11|*********** (11) |
532|10281K| | 0| 11| 11|*********** (11) |
537| 9535K| | 0| 10| 10|********** (10) |
542| 7902K| | 0| 8,4| 8,4|******** (8,4) |
547| 4894K| | 0| 5,2| 5,2|***** (5,2) |

--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Execs | A-Rows| Start | Dur(T)| Dur(A)| Time Active Graph | Parallel Distribution ASH |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 1 | SORT AGGREGATE | | 1 | 1 | | | | | 0:sqlplus.exe(0)[1],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 2 | PX COORDINATOR | | 12 | 11 | | | | | 0:sqlplus.exe(0)[11],P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],... |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 4 | SORT AGGREGATE | | 11 | 11 | | | | | 0:P00A(0)[1],P000(0)[1],P001(0)[1],P002(0)[1],P003(0)[1],... |
| 5 | VIEW | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 6 | UNION-ALL | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 7 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 8 | REMOTE | T2 | 11 | 0 | 2 | 59 | 58 | ### | 11:P001(58)[0],P002(58)[0],P004(58)[0],P005(58)[0],P00A(57)[0],... |
| 9 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 10 | REMOTE | T2 | 11 | 0 | 58 | 57 | 56 | ### | 11:P00A(53)[0],P001(53)[0],P002(53)[0],P006(53)[0],P007(53)[0],... |
| 11 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 12 | REMOTE | T2 | 11 | 0 | 111 | 58 | 57 | ### | 11:P008(54)[0],P000(53)[0],P001(53)[0],P002(53)[0],P004(53)[0],... |
| 13 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 14 | REMOTE | T2 | 11 | 0 | 163 | 61 | 60 | ### | 11:P00A(54)[0],P001(54)[0],P004(54)[0],P000(53)[0],P002(53)[0],... |
| 15 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 16 | REMOTE | T2 | 11 | 0 | 216 | 61 | 60 | ### | 11:P00A(55)[0],P000(54)[0],P005(54)[0],P006(54)[0],P001(53)[0],... |
| 17 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 18 | REMOTE | T2 | 11 | 0 | 269 | 65 | 64 | #### | 11:P005(58)[0],P007(57)[0],P00A(56)[0],P000(56)[0],P004(56)[0],... |
| 19 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 20 | REMOTE | T2 | 11 | 0 | 324 | 64 | 63 | #### | 11:P006(55)[0],P00A(53)[0],P000(53)[0],P004(53)[0],P008(53)[0],... |
| 21 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 22 | REMOTE | T2 | 11 | 0 | 376 | 66 | 65 | #### | 11:P007(54)[0],P00A(53)[0],P005(53)[0],P001(52)[0],P003(52)[0],... |
| 23 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 24 | REMOTE | T2 | 11 | 0 | 429 | 67 | 66 | #### | 11:P004(54)[0],P008(54)[0],P00A(53)[0],P000(53)[0],P001(53)[0],... |
| 25 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 26 | REMOTE | T2 | 11 | 0 | 481 | 67 | 66 | ### | 11:P008(54)[0],P000(53)[0],P001(53)[0],P003(53)[0],P009(53)[0],... |
| 27 | PX SELECTOR | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 28 | FILTER | | 11 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
| 29 | TABLE ACCESS FULL| T2 | 0 | 0 | | | | | 0:P00A(0)[0],P000(0)[0],P001(0)[0],P002(0)[0],P003(0)[0],... |
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Ouch, so this now took more than nine times longer, and although we see the concurrent UNION ALL plan shape, there is not much concurrency visible in the runtime activity - each branch seems to be executed by all 11 PX servers concurrently, and only then does the next branch get executed, again by all 11 PX servers... It more or less looks as if the PX SELECTOR doesn't work correctly and, instead of assigning the branch to only one PX server, it gets executed by all of them - which, as we've already seen in the previous parts of this series, is what actually happens anyway, but for non-remote branches the PX SELECTOR functionality made sure that only one of them actually did something, whereas the other ones simply skipped the operation.

We can see this problem confirmed by looking at the SQL execution statistics of the remote queries executed via V$SQLSTATS or DBA_HIST_SQLSTAT:

SQL_ID FETCHES END_OF_FETCH_COUNT EXECUTIONS PARSE_CALLS ROWS_PROCESSED
------------- ---------- ------------------ ---------- ----------- --------------
dtjb3bxg1ysdk 730 10 110 110 20000100
d36r1d00yaunc 110 110 108 108 0
The first SQL_ID corresponds to the remote query used by my first successful example - it already confirms that the remote query was actually executed 110 times (=> 11 PX servers times 10 branches) instead of the expected 10 times. The "ROWS_PROCESSED" and "END_OF_FETCH_COUNT" suggest what seems to have happened: Ten of the executions actually fetched the rows to the end (10 times 2M rows), but 100 of them fetched only a single row and then stopped the processing.
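For reference, the figures above can be pulled with a query along these lines against V$SQLSTATS on the database executing the remote queries (a sketch only - it assumes the cursors are still in the shared pool; DBA_HIST_SQLSTAT offers similar columns once the statistics have been captured by AWR):

-- Sketch: how often were the remote queries executed, and how far did they fetch?
select sql_id,
       executions,
       parse_calls,
       fetches,
       end_of_fetch_count,
       rows_processed
from   v$sqlstats
where  sql_id in ('dtjb3bxg1ysdk', 'd36r1d00yaunc');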

This also explains why my slightly changed query influenced the runtime profile so dramatically: since no rows were returned by the remote query, all 110 executions had to run to completion - they would only stop early after fetching the first row, and there were no rows to fetch.

This problem is tracked via bug 19565803: INEFFICIENT EXECUTION OF PARALLEL UNION-ALL INVOLVING REMOTE TABLES and is mentioned to be fixed in 12.2, but there doesn't seem to be a patch/fix available for 12.1.

So in principle my original idea should work: assuming that the remote queries just access different chunks/partitions of the same segment, the issue just described shouldn't matter. However, depending on what exactly the remote queries do, other scenarios might be affected - the possible overhead depends on how much work the remote queries have to perform before they return the first row.
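To illustrate the general pattern being discussed, here is a minimal sketch of such a manually chunked remote access driven by concurrent UNION ALL - note that the database link name, the chunking column and the ranges are made up for illustration and are not the author's actual test case:

-- Sketch: each UNION ALL branch covers a different slice of the same remote segment,
-- so the concurrent UNION ALL feature can work on several branches at the same time
select /*+ parallel(10) */ count(*)
from (
      select * from t2@remote_db where id between       1 and 2000000
      union all
      select * from t2@remote_db where id between 2000001 and 4000000
      union all
      select * from t2@remote_db where id between 4000001 and 6000000
      -- ... further branches covering the remaining chunks of the remote table
     );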

Austin Community College’s ACCelerator: Big bet on emporium approach with no pilots

Michael Feldstein - Sun, 2015-03-22 14:55

By Phil Hill

While at SXSWedu, I was able to visit Austin Community College’s ACCelerator lab, which got a fair bit of publicity over the past month. While the centerpiece of ACCelerator usage is developmental math, the 600+ workstation facility spread over 32,000 square feet also supports tutoring in a variety of subjects, first-year experience, group advising, academic coaching, adult education, continuing education, college readiness assessment preparation, and student skills workshops.

ACCelerator

But it is the developmental math course that has received the most coverage.

Austin Community College welcomed second lady Dr. Jill Biden and Under Secretary of Education Dr. Ted Mitchell on Monday, March 9, to tour the Highland Campus’ ACCelerator and meet with students and faculty of the college’s new developmental math course, MATD 0421. [snip]

“I teach a lot of developmental students,” says Dr. Biden. “The one stumbling block does seem to be math and math anxiety and ‘Can I do it?’. This (course) seems to be so empowering and so positive. Students can see immediate success.”

MATD 0421 is a self-paced, emporium-style course that encompasses all three levels of developmental math. Paul Fain at Inside Higher Ed had an excellent article that included a description of the motivation.

Dismal remedial success rates have been a problem at Austin, which enrolls 60,000 students. So faculty members from the college looked around for alternative approaches to teaching math.

“Really, there’s nothing to lose,” said [Austin CC president] Rhodes.

The Highland Campus, where the ACCelerator lab is located, is built in a former shopping mall. Students at Austin CC can choose courses at any of the 8 campuses or 5 centers. All developmental math at the Highland Campus is run through MATD 0421, so students across the system can choose traditional approaches at other campuses or the emporium approach at Highland.

Austin CC picked this approach after researching several other initiatives (Fain describes Virginia Tech and Montgomery College examples). The IHE article then describes the design:

Austin officials decided to try the emporium method. They paired it with adaptive courseware, which adjusts to individual learners based on their progress and ability to master concepts. The college went with ALEKS, an adaptive software platform from McGraw-Hill Education.

Fain describes the personalization aspect:

The new remedial math course is offered at the ACCelerator. The computer stations are arranged in loose clusters of 25 or so. Faculty members are easy to spot in blue vests. Student coaches and staff wear red ones.

This creates a more personalized form of learning, said Stacey Güney, the ACCelerator’s director. That might seem paradoxical in a computer lab that has a bit of a Matrix feel. But Güney said that instead of a class size of 25 students per instructor, the course features 25 classes of one student.

“In here there is no back of the classroom,” she said.

While the program is fairly new (second term), there are some initial results described by the official site:

In MATD 0421’s inaugural semester:

  • The withdrawal rate was less than half the rate for traditional developmental math courses.
  • 75 percent of the students completed the equivalent of one traditional course.
  • Nearly 45 percent completed the equivalent to a course and one-half.
  • Over 14 percent completed the equivalent to two courses.
  • 13 students completed all the equivalent of three courses.

Go read the full IHE article for a thorough description. I would offer the following observations.

  • Rather than a pilot program, which I have argued plagues higher ed and prevents diffusion of innovations, Austin CC has committed to A) a big program up front (~700 students in the Fall 2014 inaugural semester and ~1,000 students in Spring 2015), while B) offering students the choice of traditional or emporium. To me, this offers the best of both worlds in allowing a big bet that doesn’t get caught in the “purgatory of pilots” while offering student choice.
  • While the computer lab and software are easy headlines, I hope people don’t miss the heavy staffing that is a central feature of this lab – there are more than 90 faculty and staff working there, teaching the modular courses, roving the aisles to provide help, and working at help desks. The ACCelerator is NOT an exercise in replacing faculty with computers.
  • During my tour, instructor Christie Allen-Johnson and associate professor Ann P. Vance described their plans to perform a more structured analysis of the results. Expect to see more validated outcomes starting at the end of CY2015.
  • When and if Austin CC proves the value and results of the model, that would be the time to migrate most of the remaining developmental math courses into this emporium model.
  • The one area that concerns me is the lack of structured time for students away from the workstations. Developmental students in community colleges often have not experienced academic success – knowing how to succeed, learning how to learn, believing in their ability to succeed – and often this non-cognitive aspect of math is as important as the actual coursework. Allen-Johnson described the availability of coaching that goes beyond coursework, but that is different than providing structure for coaching and self-regulated learning.

The post Austin Community College’s ACCelerator: Big bet on emporium approach with no pilots appeared first on e-Literate.

SQLDeveloper XML Extensions and auto-navigation

The Anti-Kyte - Sun, 2015-03-22 09:42

It’s official, England are now the second best cricket team in the British Isles !
After all, Scotland were dispatched with ease and as for Wales…they didn’t even make it to the Cricket World Cup.
OK, technically they did because they’re part of England for the purposes of cricket…although you’d be hard pressed to get them to admit it.
Ireland are, of course, some way in front, having actually managed to win the odd game against Test Playing Nations.
Whilst it takes quite some effort to find a silver lining in the cloud of English Cricket’s latest debacle, the same cannot be said if SQLDeveloper is your Oracle Database IDE of choice …

Why I use SQLDeveloper

Now, I’m well aware that each developer has their own favourite IDE. TOAD and PL/SQL Developer are fine tools in their own right, each with their own strengths. Which of these tools you swear by, or swear at, is always likely to be based on some fairly subjective criteria.

One of the main reasons I have for using SQLDeveloper is that it’s so easy to extend by means of a judicious application of XML, combined with a bit of SQL and/or PL/SQL.

Fun with Foreign Keys

As with all of the mainstream IDEs, SQLDeveloper displays a table’s constraints as standard. However, what it doesn’t show is which tables have Foreign Keys to the table you’re looking at.

If, for example, you want to find out which tables have a Foreign Key to HR.COUNTRIES, you need to run a query like this :

select owner, table_name, constraint_name
from all_constraints
where constraint_type = 'R'
and (r_owner, r_constraint_name) in 
(
    select owner, constraint_name
    from all_constraints
    where constraint_type in ('U', 'P')
    and owner = 'HR'
    and table_name = 'COUNTRIES'
)
order by owner, table_name, constraint_name
/

OWNER                          TABLE_NAME                     CONSTRAINT_NAME
------------------------------ ------------------------------ ------------------------------
HR                             LOCATIONS                      LOC_C_ID_FK

SQL> 

Wouldn’t it be handy if you could somehow add a tab onto the table view in SQLDeveloper and have this information pop up for the table you’re looking at?
Well, funny you should say that….

<items>
	<item type="editor" node="TableNode" vertical="true">
		<title><![CDATA[Child Tables]]></title>
		<query>
			<sql>
				<![CDATA[
					select owner, table_name, constraint_name
					from all_constraints 
					where constraint_type = 'R' 
					and (r_owner, r_constraint_name) in
					(
						select owner, constraint_name 
						from all_constraints 
						where constraint_type in ('U', 'P') 
						and owner = :OBJECT_OWNER 
						and table_name = :OBJECT_NAME 
					) 
					order by owner, table_name, constraint_name 
				]]>
			</sql>
		</query>
	</item>
</items>

It may not be immediately obvious how this XML code is going to help us.
The answer is that it’s the SQLDeveloper extension that we’ve just written. Yep, just that one XML file. Note that the :OBJECT_OWNER and :OBJECT_NAME bind variables are supplied by SQLDeveloper itself, based on the object you’re currently viewing.

It’s probably worth taking a closer look at the structure of this file in terms of the XML…

<items>
	<item type="editor" node="TableNode" vertical="true">
		<title><![CDATA[Enter the Tab Name Here]]></title>
		<query>
			<sql>
				<![CDATA[Enter your query here]]>
			</sql>
		</query>
	</item>
</items>

If you want to add a node to a different object type, it seems that you just need to change the node attribute of the item tag as appropriate.
For example, specifying “Viewnode” should enable you to add a Tab to your Views.

Anyway, to add this to SQLDeveloper, we just need to do a bit of pointing and clicking…

To start with, save the above code into an xml file. I’ve called mine deps_fk.xml.

In SQLDeveloper, go to the Tools menu and select Preferences

In the tree on the left hand side expand the Database node. Then click on User Defined Extensions.

sqld_add_ext1

Now click the Add Row button at the bottom of the Right Hand Pane and you should see a new Row appear under the Type and Location headings like this :

sqld_add_ext2

If you now click in the row you’ve just created under the Type heading, you should get a drop-down list of values.
Select Editor from the list :

sqld_add_ext3

In the Location field, you just need to tell SQLDeveloper where your xml file is :

sqld_add_ext4

Finally, hit the OK button at the bottom of the window to save your changes and then re-start SQLDeveloper.

Now, if we open the HR.COUNTRIES table, we can see that there’s now an additional tab called Child Tables.
When we go to this tab, we are rewarded with :

sqld_tab1

This stuff has been done before. As well as this post from Tony Andrews, I also posted something similar several years ago.
I must be getting old, I’m starting to repeat myself !

Whilst all this is quite handy, wouldn’t it be good to be able to link directly to the Child table?

Linking to other objects

Jeff Smith’s post here might be just what we’re looking for.

Using Jeff’s linking technique, we can improve our extension a bit…

<items>
	<item type="editor" node="TableNode" vertical="true">
		<title><![CDATA[Child Tables]]></title>
		<query>
			<sql>
				<![CDATA[
					select owner,  
						'SQLDEV:LINK:'
						    ||owner||':TABLE:'||table_name
						    ||':oracle.dbtools.raptor.controls.grid.DefaultDrillLink' table_name,
						constraint_name
					from all_constraints 
					where constraint_type = 'R' 
					and (r_owner, r_constraint_name) in
					(
						select owner, constraint_name 
						from all_constraints 
						where constraint_type in ('U', 'P') 
						and owner = :OBJECT_OWNER 
						and table_name = :OBJECT_NAME 
					) 
					order by owner, table_name, constraint_name 
				]]>
			</sql>
		</query>
	</item>
</items>

Now when I run this, the table_name appears in blue.

sqld_ext_final1

By clicking it, I can navigate directly to the child table…

sqld_ext_final2

When the code is run, in this example, the string generated for the table_name column is :

SQLDEV:LINK:HR:TABLE:LOCATIONS:oracle.dbtools.raptor.controls.grid.DefaultDrillLink

You can link to all sorts of things using this technique. In fact, it would seem that you can link to any object that has a node in the SQLDeveloper Navigator Tree. If you want to play around, just select the string from dual in SQLDeveloper (using F9 to execute).

For example :

select 'SQLDEV:LINK:HR:TRIGGER:UPDATE_JOB_HISTORY:oracle.dbtools.raptor.controls.grid.DefaultDrillLink'
from dual;

…returns a link to the UPDATE_JOB_HISTORY trigger in the results grid.
Note that the link syntax is case sensitive in that the owner, object type and object name all need to be in uppercase for this to work.

Another point worth noting: specifying a type of PACKAGE takes you to the package header.
If you want to link to the body, then specify PACKAGE BODY.
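Following the same pattern, a link to a package body would be built like this (a quick sketch - MY_PACKAGE is just a hypothetical package name here; note that the owner, type and name are all in uppercase):

select 'SQLDEV:LINK:HR:PACKAGE BODY:MY_PACKAGE:oracle.dbtools.raptor.controls.grid.DefaultDrillLink'
from dual;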

Other SQLDeveloper Extension Types

There’s a whole bunch of stuff you can do to customise SQLDeveloper using nothing more than a bit of boilerplate xml.
In addition to Jeff’s blog, there is an Oracle Wiki which contains, among other things, further examples of these extension types.

It’s definitely worth a look, especially if, like me, you need something to take your mind off the cricket.



Background REST Service Access with A-Team Mobile Persistence Accelerator

Andrejus Baranovski - Sun, 2015-03-22 06:35
A REST service transfers light data, but service execution time could bring significant delay to an enterprise mobile application. I have already introduced you to the A-Team Mobile Persistence Accelerator (AMPA) in a previous post - REST Service Access with A-Team Mobile Persistence Accelerator. Based on suggestions from the AMPA author, Steven Davelaar, I will post today an updated application where the REST service call is handled in the background. This allows the mobile user to continue working with the MAF application while the REST call is being processed in a background thread.

Here you can download the updated sample application - MobileServiceBusApp_v7.zip. Remote read in background is configurable through the AMPA persistence mapping; I have changed it to false first - let's see how it works with slow REST service execution:


The Data Control method executes the remote findAll operation through AMPA (sequentially in this example) to fetch employee data. The data collection is loaded into the UI after service execution is completed:


The server-side ADF BC VO is set to wait 30 seconds (waiting for 30 seconds after the VO was executed) before completing the SOAP response. This allows us to simulate slow REST service execution and check how the MAF mobile application behaves with a sequential service call:


Executing a search operation over the slow REST operation blocks the entire MAF mobile application. The user can't navigate to other screens and is locked into the current screen until the response comes. This is how it looks when I change the search criteria - the mobile application waits for the response:


Obviously this is inappropriate, because it blocks the application and the user can't continue his work. Let's test with AMPA configured to execute REST calls in the background - remoteReadInBackground = true:


The Data Control method responsible for executing the REST action is refactored. I'm only starting the remote findAll operation - not waiting for it to complete. The AMPA-generated service class EmployeeService is changed to include an additional constructor, where I'm passing an instance of the Data Control class and a flag to prevent the automatic query. Here is the applyFilter method from the Data Control class; it calls the REST service in the background, through AMPA:


The AMPA-generated class is changed with an overridden method refreshEntityList. This method is called automatically by AMPA when the background REST call is completed. Here I'm calling the Data Control class method responsible for refreshing the UI and displaying the data fetched from the background service:


The Data Control class method responsible for the UI refresh - it updates the Data Control collection and invokes a synchronization with the UI:


I will describe a test I have completed with REST service execution in the background. Perform a search action with a parameter:


ADF BC on the server side executes the VO with a SQL statement; there is a wait time of 30 seconds:


The MAF mobile application is not blocked anymore, as it was with sequential REST service execution. The user can navigate to other screens and perform different actions:


Once the ADF BC VO completes execution and the SOAP service returns a response, Service Bus transforms the SOAP response to REST. The mobile application receives the data and a UI refresh happens, to present the latest changes. The user can view the changes once he is back on the screen:


In the same way, the user could run another search:


While the search is running, the user can view details for the data already present:


After returning to the search list, the results for the new search query are displayed - data from the REST service call executed in the background:


The user can load the details screen and view the data:

HP Mini T210 3F0 boot failure and System Rollback Data disk space issues with Roxio BackOnTrack / aswrvt.sys

Gareth Roberts - Sun, 2015-03-22 04:10

If you have an HP Mini 210 or 100 and have had issues with the disk running out of space or failures to boot, I feel for you. After a heap of hassle, I sorted out my problems without having to reinstall Windows, so I'm sharing my experiences, as I found many, many people with the same issues but no central resolution. The issues I've had are as follows:

  • Failure to boot: 3F0 Harddisk does not exist
  • Running out of space (see Roxio BackOnTrack)
  • Black screen after attempting to uninstall Roxio BackOnTrack
3F0 Harddisk does not exist

If you get a failure on startup with a "3F0 Harddisk does not exist" screen:

  1. To access the BIOS, turn on the computer and immediately press the esc key to display the Startup Menu, and then press the F10 key.
  2. Press F9 to reset the BIOS defaults, and press Enter to confirm the action.
  3. Press F10 to save the change and exit the BIOS, and then press Enter to confirm.

Hopefully now the computer will restart.

Low disk space - "System Rollback Data" hidden secret directory

This netbook came preinstalled with Roxio BackOnTrack, which is the primary source of my woes. Recently I found I was running out of disk space, and after deleting a heap of data and then finding I was running out again, I installed WinDirStat to check what was going on. On inspection I found a super secret hidden folder, System Rollback Data, which is not even visible with "show system files" switched on. Note this is not to be confused with Microsoft Windows System Restore. This folder contained 160GB out of my 250GB SSD !!

So after reading around I found many pointers to Roxio BackOnTrack being the culprit behind the System Rollback Data directory size. Note that it seems Roxio BackOnTrack and this super secret directory may have a hook into the boot loader. So beware!

Anyway, without further ado here's what I hit and how to fix it.

  • Uninstall Roxio BackOnTrack from the Programs & Features or similar. Note this may take a long time if the backup store is large. I didn't wait, and end task'ed the uninstall process. That may have been a bad idea.
  • After killing the uninstall and rebooting I got the dreaded 3F0 hard disk does not exist, fix as per above.
  • After fixing the 3F0 issue, Windows would not boot. I tried safe mode, command prompt and last known good configuration; all that happened was a black screen after the initial Windows splash. From the boot logging I found the Windows load stopped at aswrvt.sys, with no further progress. After a lot of searching I found a post that mentioned deleting the syscow32.sys file in the WINDOWS/system32/drivers folder. The next issue was getting a utility running so I could get access to the C:\ drive.
  • As the HP Mini 210 doesn't have a CD/DVD drive, I decided to run a USB boot program. After trying a Windows restore disk, which didn't boot, I decided to use the EaseUS Todo Backup Emergency Disk, so I did the following:
    • Installed EaseUS Todo Backup Free to another computer and ran it.
    • Tools > Create Emergency disk, then created USB bootable drive
    • Put that in the HP Mini and changed boot options to move USB to top of the list
    • When EaseUS started up, I went to Tools > Windows Command Prompt and ran commands along the lines of the following; I found I only had syscow32x.sys, so I backed up and deleted that:
      c:
      cd windows\system32\drivers
      copy syscow32x.sys c:\temp
      del syscow32x.sys
      exit
    • Closed EaseUS Todo Backup
  • Rebooted - filesystem check kicked in, and Windows booted successfully !

  • Get rid of Roxio BackOnTrack - run uninstallapp.exe from C:\Program Files\Roxio ... this takes a long time (tens of minutes depending on data size), watch the disk space free up as it runs :-)

  • Remove Roxio BackOnTrack via Control Panel - Programs and Features

After all that, my machine is running sweetly again. Hopefully this post saves someone from throwing a perfectly fine HP Mini 100 / 210 out the window !!

Catch ya!
Gareth

This is a post from Gareth's blog at http://garethroberts.blogspot.com
References

Installing Languages on PeopleSoft Update Images

Javier Delgado - Sat, 2015-03-21 02:52
One of the great things about PeopleSoft Update Manager images is that they can be used as a Demo environment to try the latest and greatest features of the PeopleSoft application. All you need to do is download the image and install it, and you can already play with the application.

However, the initial install of the Update Image will only have the English language enabled. If you are using PeopleSoft Update Manager, once you upload the target environment and define the change package, the application will automatically install the languages you have in place in your own environment. However, if you just want to install the Update Image and you do not have a target environment to upload, this approach is not feasible.

Below I describe the steps to follow in order to install additional languages into an Update Image without having a target environment.


1.- Launch the Update Image.

2.- Install the client database connectivity tools by running the installer shipped in the oracle-12c-client-64bit shared folder.
3.- Connect to the database using SQL Developer and run the following command, replacing <language> with the PeopleSoft language code you want to install (for example, ESP for Spanish):
insert into PS_PTIASPTARGETLNG values ('ESPDEMO', '<language>');

4.- Install the PeopleTools client by running the installer located in the client-854 shared folder.
5.- Connect to the Update Image environment using Application Designer and open the PTIASPLANG_VW.PTIASPLANGCD.FieldFormula PeopleCode. Once in there, add the lines in bold (the blocks delimited by the "BNB - J.Delgado - 08 Mar 2015" comments in the listing below).
(...)
Function AssembleDMoverCommand() Returns array of string
   Local array of string &arrRet = CreateArrayRept("", 0);
   
   Local string &srvName;
   Local string &userID;
   Local string &userPwd;
   Local string &dmsLogPath;
   Local string &dmsPath;
   Local string &psdmtxPath;
   Local string &mlDatPath;
   
   Local string &paramName;
   Local string &paramValue;
   REM prepare data mover parameters from database;
   Local SQL &sqlParam = GetSQL(SQL.PTIASPDMPARAM);
   While &sqlParam.Fetch(&paramName, &paramValue)
      Evaluate &paramName
      When "USERID"
         &userID = &paramValue;
         Break;
      When "USERPWD"
         &userPwd = &paramValue;
         Break;
      When "DMSPATH"
         &dmsPath = &paramValue;
         Break;
      When "PSDMTXPATH"
         &psdmtxPath = &paramValue;
         Break;
      When "SVRNAME"
         &srvName = &paramValue;
         Break;
      When "DMLOGPATH"
         &dmsLogPath = &paramValue;
         Break;
      When "MLDATPATH"
         &mlDatPath = &paramValue;
         Break;
      When-Other
         Break;
      End-Evaluate;
   End-While;
   &sqlParam.Close();
   
   Local string &cmdString;
   &cmdString = &psdmtxPath | " -CT " | %DbType;
   If All(&srvName) Then
      &cmdString = &cmdString | " -CS " | &srvName;
   End-If;
   /* BNB - J.Delgado - 08 Mar 2015 - BEGIN */
   Local string &tmp;
   &tmp = &userPwd;
   /* BNB - J.Delgado - 08 Mar 2015 - END */
   If All(&userPwd) Then
      &userPwd = Decrypt("mldmpswd", &userPwd);
   End-If;
   /* BNB - J.Delgado - 08 Mar 2015 - BEGIN */
   If None(&userPwd) Then
      &userPwd = &tmp;
   End-If;
   /* BNB - J.Delgado - 08 Mar 2015 - END */
   &cmdString = &cmdString | " -CD " | %DbName | " -CO " | &userID | " -CP " | &userPwd | " -FP " | &dmsPath;
   
   REM DMover executable file location, whole DMover command, DMS Log File, Dat file Path, DMS file Path are pushed into the array for following process;
   &arrRet.Push(&psdmtxPath, &cmdString, &dmsLogPath, &mlDatPath, &dmsPath);
   Return &arrRet;
End-Function;
(...)
Note: This change removes the requirement of a previous target environment upload. 
6.- Connect to the PeopleSoft application using PIA. Associate the PTIASPMLLOAD Application Engine process to the AE_REQUEST component.


7.- Run the PTIASPMLLOAD process using the Request AE page. The language to be installed should be associated with the PTIASPMLLDAET.PTIASPPROPVAL field.

Once the process is run, make sure you reboot the web server and application server in order to use the Update Image in the newly installed language.

Free Apache Cassandra Training Event in Cambridge, MA March 23

Pythian Group - Fri, 2015-03-20 14:24

I’ll be speaking, along with DataStax and Microsoft representatives, at Cassandra Essentials Day this coming Monday (March 23) in Cambridge, MA. This free training event will cover the basics of Apache Cassandra and show you how to try it out quickly, easily, and free of charge on the Azure cloud. Expect to learn about the unique aspects of Cassandra and DataStax Enterprise and to dive into real-world use cases.

Space is limited, so register online to reserve a spot.

Categories: DBA Blogs

My Co-op Experience at Pythian

Pythian Group - Fri, 2015-03-20 06:30

That’s me in front of our office. I promise there is a bigger Pythian logo!

Unlike most other engineering physics students at Carleton who prefer to remain within the limits of engineering, I had chosen to apply for a software developer co-op position at Pythian in 2014. For those of you who do not know much about the engineering physics program (I get that a lot and so I will save you the trip to Google and tell you), this is how Stanford University describes their engineering physics program: “Engineering Physics prepares students to apply physics to tackle 21st century engineering challenges and to apply engineering to address 21st century questions in physics.” As you can imagine, very little to do with software development. You might ask, then why apply to Pythian?

Programming is changing the way our world functions. Look at the finance sector: companies rely on complicated algorithms to determine where they should be investing their resources, which in turn determines the course of growth for the company. In science and technology, algorithms help us make sense of huge amounts of unstructured data which would otherwise take us years to process, and help us understand and solve many of our 21st century problems. Clearly, learning how to write these algorithms, or code, cannot be a bad idea; rather, it is one that will be invaluable. A wise or a not-so-wise man once said (you will know what I mean if you have seen the movie iRobot): “If you cannot solve a problem, make a program that can.” In a way, maybe I intend to apply physics to tackle all of the 21st century’s problems by writing programs. (That totally made sense in my head).

Whatever it might be, my interest in programming or my mission to somehow tie physics, engineering, and programming together, I found myself looking forward to an interview with Pythian. I remember having to call in for a Skype interview. While waiting for my interviewers to join the call, I remember thinking about all the horror co-op stories I had heard: How you will be given piles of books to read over your work term (you might have guessed from this blog so far, not much of a reader, this one. If I hit 500 words, first round’s on me!). Furthermore, horror stories of how students are usually labeled as a co-op and given no meaningful work at all.

Just as I was drifting away in my thoughts, my interviewers joined the call. And much to my surprise they were not the traditional hiring managers in their formal dresses making you feel like just another interviewee in a long list of interviewees. Instead they were warm and friendly people who were genuinely interested in what I could offer to the company as a co-op student. The programming languages I knew, which one was my favourite, the kind of programs I had written, and more. They clearly stated the kind of work I could expect as a co-op student, which was exactly the same kind of work that the team was going to be doing. And most importantly, my interviewers seemed to be enjoying the kind of work they do and the place they work at.

So, when I was offered the co-op position at Pythian. I knew I had to say yes!

My pleasant experience with Pythian has continued ever since. The most enjoyable aspect of my work has been the fact that I am involved in a lot of the team projects which means I am always learning something new and gaining more knowledge each day, after each project. I feel that in an industry like this, the best way to learn is by experience and exposure. At Pythian that is exactly what I am getting.

And if those are not good enough reasons to enjoy working for this company, I also have the privilege of working with some extremely experienced and knowledgeable people in the web development industry. Bill Gates once suggested that he wanted to hire the smartest people at Microsoft and surround himself with them. This would create an environment where everyone would learn from each other and excel in their work. And I agree with that. Well now, if you are the next Bill Gates, go ahead, create your multibillion dollar company, hire the best of the best and immerse yourself in the presence of all that knowledge and intelligence. But I feel I have found myself a great alternative, a poor man’s approach, a student-budget approach or whatever you want to call it: take full advantage of working with some really talented people and learn as much as you can.

Today, five months into my yearlong placement with Pythian, I could not be more sure and proud of becoming a part of this exciting company, becoming a Pythianite. And I feel my time spent in this company has put me well on course to complete my goal of tying physics, engineering and programming together.

Categories: DBA Blogs

Log Buffer #415, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-03-20 06:25

This Log Buffer Edition covers Oracle, SQL Server and MySQL with a keen eye on novel ideas.

Oracle:

The case was to roll forward a physical standby with an RMAN SCN incremental backup taken from primary.

Oracle Database 12c: Smart upgrade

This blog covers how to specify query parameters using the REST Service Editor.

Production workloads blend Cloud and On-Premise Capabilities

ALTER DATABASE BEGIN BACKUP and ALTER DATABASE END BACKUP

SQL Server:

Mail Fails with SQLCMD Error

How to get Database Design Horribly Wrong

Using the ROLLUP, CUBE, and GROUPING SETS Operators

The Right and Wrong of T-SQL DML TRIGGERs (SQL Spackle)

How converting extensive, repetitive code to a data-driven approach resolved a maintenance headache and helped identify bugs

MySQL:

Distributing innodb tables made simpler!

Choosing a good sharding key in MongoDB (and MySQL)

Update a grails project from version 2.3.8 to version 2.4.4

MySQL Enterprise Backup 3.12.0 has been released

If a table is partitioned, it is easier to maintain. If a table has grown huge and the backups keep running long, then you probably need to think about archival or purging.

Categories: DBA Blogs

Using strace to debug application errors in linux

Pythian Group - Fri, 2015-03-20 06:24

strace is a very useful tool which traces system calls and signals for a running process. This helps a lot while debugging application-level performance issues and bugs. The aim of this post is to demonstrate the power of strace in pinning down an application bug.

I came across an issue in which nagios was sending the following alerts for a RHEL6 system.

***** Nagios ***** Notification Type: PROBLEM Service: NTP Host: xxxxx Address: xx.xx.xx.xx State: UNKNOWN Date/Time: Tue Feb 17 10:08:36 EST 2015 Additional Info: cant create socket connection

On manually executing the nagios plugin on the affected system, we can see that the command is not running correctly.

# /usr/lib64/nagios/plugins/check_ntp_time -H localhost -w 1 -c 2
can’t create socket connection

I ran strace on the command. This would create a file /tmp/strace.out with strace output.

# strace -xvtto /tmp/strace.out /usr/lib64/nagios/plugins/check_ntp_time -H localhost -w 1 -c 2

Following are the options which I passed.

-x           Print all non-ASCII strings in hexadecimal string format.
-v           Print unabbreviated versions of environment, stat, termios, etc. calls. These structures
             are very common in calls and so the default behavior displays a reasonable subset of
             structure members. Use this option to get all of the gory details.
-tt          If given twice, the time printed will include the microseconds.
-o filename  Write the trace output to the file filename rather than to stderr. Use filename.pid if -ff
             is used. If the argument begins with `|' or with `!' then the rest of the argument is
             treated as a command and all output is piped to it. This is convenient for piping the
             debugging output to a program without affecting the redirections of executed programs.

Timestamps displayed with the -tt option are not very useful in this example, but they are very useful while debugging application performance issues. -T, which shows the time spent in each system call, is also useful for those issues.

From strace output,

10:26:11.901173 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
10:26:11.901279 connect(3, {sa_family=AF_INET, sin_port=htons(123), sin_addr=inet_addr(“127.0.0.1″)}, 16) = 0
10:26:11.901413 getsockname(3, {sa_family=AF_INET, sin_port=htons(38673), sin_addr=inet_addr(“127.0.0.1″)}, [16]) = 0
10:26:11.901513 close(3) = 0
10:26:11.901621 socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
10:26:11.901722 connect(3, {sa_family=AF_INET6, sin6_port=htons(123), inet_pton(AF_INET6, “::1″, &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable) <—————-
10:26:11.901830 close(3) = 0
10:26:11.901933 socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 3
10:26:11.902033 connect(3, {sa_family=AF_INET, sin_port=htons(123), sin_addr=inet_addr(“127.0.0.1″)}, 16) = 0
10:26:11.902130 socket(PF_INET6, SOCK_DGRAM, IPPROTO_UDP) = 4
10:26:11.902238 connect(4, {sa_family=AF_INET6, sin6_port=htons(123), inet_pton(AF_INET6, “::1″, &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable) <—————-
10:26:11.902355 fstat(1, {st_dev=makedev(0, 11), st_ino=3, st_mode=S_IFCHR|0620, st_nlink=1, st_uid=528, st_gid=5, st_blksize=1024, st_blocks=0, st_rdev=makedev(136, 0), st_atime=2015/02/17-10:26:11, st_mtime=2015/02/17-10:26:11, st_ctime=2015/02/17-10:16:32}) = 0
10:26:11.902490 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fc5a8752000
10:26:11.902608 write(1, “can’t create socket connection”, 30) = 30

Let us have a deeper look,

You can see that socket() opens a datagram (SOCK_DGRAM) socket in the PF_INET (IPv4) domain with the default protocol (IPPROTO_IP). This returns file descriptor 3. Then connect() uses the same file descriptor to connect to the NTP port (123) on localhost. Then it calls getsockname() and closes the file descriptor for the socket.

10:26:11.901173 socket(PF_INET, SOCK_DGRAM, IPPROTO_IP) = 3
10:26:11.901279 connect(3, {sa_family=AF_INET, sin_port=htons(123), sin_addr=inet_addr(“127.0.0.1″)}, 16) = 0
10:26:11.901413 getsockname(3, {sa_family=AF_INET, sin_port=htons(38673), sin_addr=inet_addr(“127.0.0.1″)}, [16]) = 0
10:26:11.901513 close(3) = 0

Next it does the same but with the PF_INET6 (IPv6) domain. This time you can see that connect() fails with ENETUNREACH.

10:26:11.901621 socket(PF_INET6, SOCK_DGRAM, IPPROTO_IP) = 3
10:26:11.901722 connect(3, {sa_family=AF_INET6, sin6_port=htons(123), inet_pton(AF_INET6, “::1″, &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable) <—————-
10:26:11.901830 close(3)

From connect man page,

ENETUNREACH
Network is unreachable.

This process is repeated with IPPROTO_UDP (udp) protocol as well.

On checking the system, I see that only IPv4 is enabled. The ‘inet6 addr’ line is missing.

[root@pbsftp ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:50:56:90:2E:31
inet addr:xx.xx.xx.xx Bcast:xx.xx.xx.xx Mask:xx.xx.xx.xx <——————–
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5494691 errors:0 dropped:0 overruns:0 frame:0
TX packets:4014672 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:5877759230 (5.4 GiB) TX bytes:5608605924 (5.2 GiB)

IPv6 is disabled in the system using the following /etc/sysctl.conf entries.

net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.all.disable_ipv6 = 1

This behavior of the Nagios plugin is wrong, as it should not die when one of the connect calls fails.

The issue is fixed in an upstream patch.

Enabling IPv6 by removing the following entries from /etc/sysctl.conf and running ‘sysctl -p’ would act as a workaround.

net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.all.disable_ipv6 = 1

To fix the issue, the upstream patch needs to be backported manually to create an RPM, or a support ticket needs to be opened with the operating system vendor to backport the patch in their product release.

Categories: DBA Blogs

Using OEM System Targets with Oracle Utilities Products

Anthony Shorten - Thu, 2015-03-19 17:43

One of the common requirements of an IT group is to track the availability of a system. A system is the total end-to-end architecture for a particular product. All of the components need to be tracked to truly determine whether the system is available or not.

Within Oracle Enterprise Manager it is possible to track each component individually, and it is also possible to create a definition which groups the targets in an architecture together with some availability rules. This is known as a Generic System target within Oracle Enterprise Manager. Using the Oracle Application Management Pack for Oracle Utilities in combination with other targets, it is possible to define such a target to track the overall availability of the Oracle Utilities product.

Here is how you can set it up. Note: the example below uses Oracle Utilities Customer Care And Billing, but this technique applies to any Oracle Utilities product supported by the Oracle Application Management Pack for Oracle Utilities.

  • Logon to Oracle Enterprise Manager using an appropriate administration user.
  • Navigate to the Setup --> Add Target --> Generic System menu option to initiate the maintenance of the Generic System target. For example:

Setup --> Add Target --> Generic System

  • Fill in the appropriate settings for your Generic System.
    • Name the system appropriately for your site.
    • Add comments which are useful for other administrators to understand the target. This might sound trivial, but some comments can help other administrators understand the system represented by this target.
    • The Privilege Propagating System option allows administrators to grant privileges to other administrators in a manner in which new administrators get the same privileges as the member targets. In other words, granting a privilege on the system target may grant the same privileges on the member targets, if Privilege Propagating System is enabled.
    • Time Zone is set for the target, just like any other target, for scheduling etc.
    • System Properties allows you to add additional tagging information.
Generic System Information
  • Select the targets to be added to the System definition. At a minimum consider the following:
    • Add the Oracle Utilities Batch Server, Oracle Utilities Web Application and Oracle Utilities Web Services Application to the Generic System. Avoid adding the Oracle Utilities Home and Oracle Utilities System Environment targets as they do not have any availability status.
    • Add the base Fusion Middleware targets including the Oracle WebLogic Domain, Oracle WebLogic Server and Application Deployment targets associated with Oracle Utilities products. Customers using Inbound Web Services should also add that Web Services Application Deployment to track Web Services availability. Oracle WebLogic Cluster targets can be also added if desired.
    • Add the database targets associated with the environment (if you want to track them as well) whether that the Database Instance target (for non-PDB or Oracle 11 and below targets) or Pluggable Database targets (for PDB implementations in Oracle 12c).
    • Avoid duplication of targets. For example, avoid adding Host targets as the child targets are already covered.
    • If you have other targets in your architecture, then consider adding them as well, such as SOA etc. Remember that you may only use targets that are in the base Oracle Enterprise Manager or that you have licensed.
Example Target selection
  • Once all the targets are selected, you can proceed to the next stages. For example:

Selecting Targets

  • Confirm the relationships between the targets. Oracle Enterprise Manager will typically default all of it for you but you can add or modify your own relationships as necessary. For example:

Confirming Relationships

  • The next step is to identify the Key Members used to track availability. Targets that are not Key Members are also tracked but do not contribute to the overall system definition of availability. You might ask then why we should add them to the definition at all. Well, Generic System targets are used for other purposes (that will be covered in another article). Here are a few guidelines:
    • Select targets that are key to the architecture that will represent the critical points of your architecture.
    • Use parent objects to cover as many child objects as possible. For example, use Oracle WebLogic Server or Oracle WebLogic Cluster targets as Key Members to cover the Applications deployed within those servers/clusters. This keeps the definition simple by minimizing the number of Key Members needed.
    • Ensure you cover each component of the architecture at least once in your Key Member definition to cover the end to end architecture.
    • Decide whether all Key Member targets, or at least one of them, must be available for the system to be considered available.
    • At a minimum, use the Oracle WebLogic Server target associated with the Oracle Utilities product deployments to cover online and Web Services, at least one Oracle Utilities Batch Server to cover batch, and the database, as Key Members for Oracle Utilities products. For example:

Key Targets example

  • The next step allows you to define the metrics and columns to be displayed on the home screen and dashboard for the Generic System target. In this example the defaults are taken. For example:

Portal Definition

  • Review the definition for completeness. You can alter the definition at any time after saving it, or now, by using the breadcrumbs at the top of the page to navigate to the part of the definition you want to change. For example:

Review the setup

  • The Generic System target is now available for your tracking within Oracle Enterprise Manager. For example:


  • The Generic System target is now also available from the targets page like all other targets. For example:

Example Target on target screen

These targets are very useful and can be defined at a low or high level (and any level in between those extremes). They can be modelled on your IT group or just generically. This technique can combine targets from many different components of the architecture.

    Oracle Priority Support Infogram for 19-MAR-2015

    Oracle Infogram - Thu, 2015-03-19 15:38

    RDBMS
    A new blog I’ve never seen before, Emre Baransel, Support Engineer's Blog, looks good and hands-on. Here are some links:
    TIMESTAMP to DATE Conversion with Online Redefinition
    Standby Database SCN - x$kcvfh
    Auditing on Oracle Database in a Nutshell (11gR2)
    MySQL
    Two from MySQL Enterprise Backup:
    Distributing innodb tables made simpler!
    MySQL Enterprise Backup 3.12.0 has been released
    Java
    WebSocket Client API – Java 8, from Pavel Bucek’s Blog.
    From The Java Source: Java 9 and Beyond
    ADF
    Uploading and showing image file from absolute server path ADF, from WebLogic Partner Community EMEA.
    EPM and BI
    From Business Analytics - Proactive Support:
    Documentation for OBIA 11.1.1.9.1 - Important Information Please Read.
    Patch Set Update: Oracle Hyperion Essbase Family 11.1.2.4
    Patch Set Update: Oracle Hyperion Disclosure Management 11.1.2.3.587
    Oracle BI Applications 11.1.1.9.1 now available for download!
    Tons of good videos on these channels:
    Oracle EPMWebcasts Channel, Oracle EPM & BI Tutorials YouTube Channel, Oracle Learning Library, Cloud Help Center
    From BI & Analytics Pulse: What’s New in Oracle BI Applications 11.1.1.9.1?
    SOA
    Service Bus 12c – Exposing a Pipeline as a REST Service, from the SOA & BPM Partner Community Blog.
    And from New Generation Database Access: OAuth 2.0 Security Concepts: Part I "Two Key Use Cases De-Mystified"
    Eclipse
    REST Service Editor Query Parameters, from the Oracle Enterprise Pack for Eclipse Blog
    Studio C++
    Boost with Oracle Solaris Studio C++ 12.4, from Studio C++ stories.
    EBS
    From the Oracle E-Business Suite Technology blog:

    March 2015 Updates to AD and TXK for EBS 12.2

    QlikView Tips & Tricks - part II

    Yann Neuhaus - Thu, 2015-03-19 15:00

    QlikView.png
    In a previous blog entry, I started to share some Tips & Tricks about the installation and configuration of some QlikView components/software. I will try to complete this list with some other interesting things to know. In this blog entry, I will explain how to enable the Task Performance Summary, how to debug or at least enable the communication with Salesforce, and how to install the QlikView Management Console Connector (QlikView Management API).


    For the whole blog entry, let's define the following values:

    • %SYS_PROFILE% = C:/Windows/System32/config/systemprofile
    • %QV_SRV_HOME% = D:/Apps/QlikView Server (defined during the QlikView Server installation)
    • %DS_DATA_HOME% = D:/QlikView/DistributionService (defined in the QMC: System ˃ Setup ˃ Distribution Services ˃ QDS@hostname ˃ General ˃ Settings for QDS ˃ Application Data Folder)
    • %PROXY_HOST% = your-proxy-url.domain.com
    • %PROXY_PORT% = 11210
    • %PROXY_PORT_S% = 11211
    • %HOST_FQDN% = your-qlikview-hostname.domain.com
    • %HOST_IP% = 160.160.20.20
    • %HOST_ALIAS% = qlikview-alias.domain.com
    • %QV_GRPS% = QV_GRP_SUP + QV_GRP_DEV + QV_GRP_ADM (some groups for QlikView, all under the domain "DOMAIN")
    • %QV_SYS_USER% = QV_SYS_USER (the "DOMAIN" user under which QlikView is running)


    Each time you see one of these parameters or values in the text below, don't forget to replace it with YOUR OWN value. I only list them as an example and to help you find a match in your environment.

    I. Task Performance Summary


    Introduced in QlikView 11.2 SR7 and disabled by default, the Task Performance Summary is a new feature of QlikView to analyse the performance of the Tasks (I'm sure you already understood that from the title!). This new feature simply launches a little benchmark during the execution of all tasks to record some useful data like:

    • Name of the process that ran the task with its PID
    • CPU used by the process (Average, Peak)
    • CPU used by the Windows Server (Peak)
    • Virtual RAM used by the process (Average, Peak)
    • Virtual RAM used by the Windows Server (Peak)
    • Physical RAM used by the process (Average, Peak)
    • Physical RAM used by the Windows Server (Peak)
    • Duration of the reload task


    For debugging, performance analysis and performance improvements, it's pretty cool to have this kind of information. Enabling the Task Performance Summary is quite simple:

    1. Login to the Windows Server with any Administrator account
    2. Open the file: %SYS_PROFILE%/AppData/Roaming/QlikTech/QlikViewBatch/Settings.ini
      1. Add at the end: EnableQVBProcessSummary=1
      2. Add an empty line at the end of the file (VERY IMPORTANT: the last line MUST be an empty line)
    3. Open a command prompt as Administrator and execute the command: "%QV_SRV_HOME%/Distribution Service/qvb.exe"
    4. In the command prompt, execute another command: services.msc
    5. Restart all QlikView Services
    6. Open the folder: %DS_DATA_HOME%/TaskResults/
    7. Refresh the folder's content until there is a new file created (if nothing new appears, force the execution of a QlikView Task)
    8. Open the last created xml file and check that inside there is a line which include CPU and RAM consumption

    Task1.png - Modification of the Settings.ini file to enable the Task Performance Summary

    Task2.png - Line added in the TaskResults' xml files by the Task Performance Summary


    That's it, the Task Performance Summary is now enabled and will record the performance of future task executions.

    II. Communication with Salesforce


    If it's your first time with the QlikView Salesforce Connector, then there is one thing that is essential to understand: the QlikView Salesforce Connector requires internet access to work. To be more precise, the connector has to be able to access the Salesforce website to retrieve some data models that will be used by QlikView for any communication with a Salesforce Data Source. Therefore, if your enterprise network uses a proxy or anything else that prevents or restricts access to the internet, then this will need to be addressed.


    SalesForceIssueNW_P.png - Log file generated during a task execution when the proxy configuration isn't done

    SalesForceIssueW_P.png - Log file generated during a task execution when the proxy configuration is properly done

    In this section, I will describe how to configure your Windows Server to allow communication with the Salesforce website by configuring the proxy settings:

    1. The QlikView Salesforce Connector must be properly installed
    2. Login to the Windows Server with the account under which QlikView is running
    3. Open: Internet Explorer ˃ Internet Options ˃ Connections ˃ LAN settings
      1. Click on: Use a proxy server for your LAN
      2. Click on: Bypass proxy server for local addresses
      3. Click on: Advanced
      4. HTTP: Address = %PROXY_HOST%
      5. HTTP: Port = %PROXY_PORT%
      6. Secure: Address = %PROXY_HOST%
      7. Secure: Port = %PROXY_PORT_S%
      8. Exceptions = 127.0.0.1;localhost;%HOST_FQDN%;%HOST_IP%;%HOST_ALIAS%
    4. Click on: OK (3 times)

    SalesForce1.png - Configuration of the proxy in Internet Explorer's options


    After this modification, the communication and therefore the reload of QlikView Documents using Salesforce as a Data Source should be successful.

    III. QMC Connector


    By default, license management through the QlikView Management Console is quite difficult... That's why the QlikView Community is really active on this topic and some solutions have been developed to manage the QlikView licenses more easily. The most common solution is to use the QMC Connector. This connector uses the QlikView Management API to manage a lot of things directly from a QlikView Document. The installation of this connector is quite easy:

    1. Download the QlikView Connector: QVSManager.zip
    2. Login to the Windows Server with any Administrator account
    3. Extract the QVSManager zip files into: C:/Program Files/Common Files/QlikTech/Custom Data/. This will create the folder "QVSManager", and in this folder there should be 4 files.
    4. Create a Windows local group:
      1. Open a command prompt as Administrator and execute: compmgmt.msc
      2. Open: System Tools ˃ Local Users and Groups ˃ Groups
      3. Create a new group with the following entries:
        1. Name = QlikView Management API
        2. Description = QlikView Management API
        3. Members = %QV_GRPS% + %QV_SYS_USER%
      4. Click on: Create
    5. Restart all QlikView Services
    6. Deploy the QlikView Document: QVSCALManager.qvw (I put it in a zip file for convenience)

    QMCConnector1.png - Deployment of the QVSManager.zip file in the "Custom Data" directory of QlikView

    QMCConnector2.png - Creation and configuration of the Windows Server's local group for the QVSManager

    Once this new QlikView Document is ready (Document deployed, new task created and scheduled, and so on), you should be able to see it in the QlikView AccessPoint and manage the QlikView licenses directly from this new application. Believe me, it will greatly facilitate your work!


    Okay, that was my second blog entry about QlikView, I guess that's enough for now :). I hope some of you found these tips useful and, if needed, don't hesitate to leave a little comment below and I'll do my best to help you. See you soon!

    Might need to use Tunneling with Discoverer 11g

    Michael Armstrong-Smith - Thu, 2015-03-19 13:40
    I have noticed a few instances recently of Discoverer 11g Plus failing to open or taking an awfully long time to open. In both of the cases where this has been reported to me by my clients, changing the Plus communication protocol from Default to Tunneling did the trick.

    To enable tunneling for use with Discoverer Plus, use this workflow:

    1. Launch Enterprise Manager using something like: http://server.domain.com:7002/em
    2. Enter your Username and Password. Username is typically Weblogic
    3. Under Farm, on left-hand side, expand Discoverer and Discoverer(11.1.1.x.0)
    4. In the Components window, highlight Discoverer Plus then click the Configure button
    5. In the Communication Protocols window, click the Tunneling radio button (see below)
    6. Click the Apply button
    7. Shut Down the Discoverer service from the top link by clicking on Discoverer | Control | Shut Down - confirm the action
    8. Restart the Discoverer service from the top link by clicking on Discoverer | Control | Start Up - confirm the action (sometimes you have to do this twice)