Feed aggregator

Undocumented Application Engine Parameter: EnableAEMonitoring

David Kurtz - Wed, 2015-03-04 13:57
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

Oracle Support Document 1640355.1: E-AE: Performance Degradation When Using Do Loop Action to Process Large Amount of Data on PT 8.52 & PT 8.53 describes a performance problem in Application Engine when it executes a small but frequently iterated Do Loop.  Application Engine calls DBMS_APPLICATION_INFO to set MODULE and ACTION on v$session each time it goes round the loop.

However, Oracle Bug 10130415 (Latch contention on "resmgr group change latch") acknowledges a problem in some versions of Oracle, but it is resolved in 11.2.0.3 and 12c.
Updated 8 May 2015: Oracle support do not know of any explicit link to this database bug.

A new parameter was introduced in PT8.54, and backported to PT8.52.23 and PT8.53.13, in the Database Options section of the process scheduler configuration file (psprcs.cfg).  EnableAEMonitoring controls whether Application Engine calls DBMS_APPLICATION_INFO.  The default value for this parameter is 0.

The PeopleSoft support document does not reference the database bug report, but it seems reasonable to infer that the new parameter was introduced to work around the database bug.

This new parameter is not described in PeopleBooks.  It does appear in the delivered configuration files on at least 8.53.13.  However, it is not present in the delivered 8.54.05 configuration file (bug 21055140).  Therefore, by default, Application Engine will not set MODULE and ACTION unless you add the parameter to the configuration file.

[Database Options]
;=========================================================================
; Database-specific configuration options
;=========================================================================

;DMK - added to enable DBMS_APPLICATION_INFO instrumentation
EnableAEMonitoring=1
The behaviour is then as it has been since 8.52, as described in PeopleTools 8.52 Application Engine sets MODULE and ACTION.
My Recommendation
I certainly think that you should add this parameter to all process scheduler configuration files at the relevant PeopleTools versions.  Unless you specifically have the problem described in the support note, I recommend that you also set the parameter to 1 as shown above.  I have never seen the problem in the affected database versions, and it is fixed in the terminal release of 11g.

Without setting the parameter, you will lose the ability to relate Enterprise Manager and ASH data to specific Application Engine steps.  If you need to make a code change to achieve a performance improvement, you will have to go through the manual process of finding the SQL in an Application Engine trace.
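
For example, once MODULE and ACTION are populated you can summarise ASH by Application Engine step.  A minimal sketch (it requires the Diagnostics Pack licence, and the one-hour window is an arbitrary assumption):

-- Sketch: summarise ASH samples by the MODULE/ACTION values that
-- Application Engine sets when EnableAEMonitoring=1.
SELECT module, action, COUNT(*) AS ash_samples
FROM   v$active_session_history
WHERE  sample_time > SYSDATE - 1/24
GROUP  BY module, action
ORDER  BY ash_samples DESC;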

Oracle APEX_WEB_SERVICE REST API call

Kubilay Çilkara - Wed, 2015-03-04 12:15
In this post I will try to show you how I used Oracle Apex and the APEX_WEB_SERVICE PL/SQL package to quickly send a request to a public Internet API and how I handled the response. The code below was written during a 'Hackday' and hasn't been extensively tested.

My use case is integrating Oracle Apex with the public Mendeley REST API for Mendeley Catalog Search.

The idea was to build an application in Oracle Apex to query the Mendeley REST API Catalog with a keyword. Mendeley REST API gives JSON response so I used PL/JSON to parse it.  I hear in Oracle 12c JSON is going to be a native data-type. My Oracle Apex host is running Oracle 11g and I had to use PL/JSON for ease.

To cut it short, here is how the Mendeley Catalog Search application on Oracle Apex looks. (Click the image to go to the app, or visit http://apex2.enciva.co.uk/apex/f?p=864:2)






To integrate with Mendeley REST API from Oracle Apex, I used one PL/SQL function and one procedure.

I used the function to obtain the Mendeley REST API Client Credentials Authorisation flow token, and the procedure to make the API request to the Mendeley Catalog Search and to handle the response.

Here is the MENDELEY_CALL PL/SQL function I created:

This function returns the Client Credentials Authorisation Flow token from the Mendeley REST API.

create or replace function mendeley_call (p_id in varchar2)
return varchar2
is
v_token varchar2(1000);
token varchar2(1000);
jtoken json;
v_grant_type varchar2(400) := 'client_credentials';
v_client_id varchar2(500) := p_id;
v_client_secret varchar2(500) := '<put_your_mendeley_client_secret_here>';
v_scope varchar2(300) := 'all';
begin

/*----------Setting Headers----------------------------------------*/
apex_web_service.g_request_headers(1).name := 'Content-Type';
apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded; charset=utf-8';
/*-----------------------------------------------------------------*/

token := apex_web_service.make_rest_request
    (
      p_url         => 'https://api.mendeley.com/oauth/token',
      p_http_method => 'POST',
      p_parm_name   => apex_util.string_to_table('grant_type:client_id:client_secret:scope'),
      p_parm_value  => apex_util.string_to_table(v_grant_type||':'||v_client_id||':'||v_client_secret||':'||v_scope),
      p_wallet_path => 'file:/home/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle',
      p_wallet_pwd  => '<put_your_oracle_wallet_password_here>'
    );
-- debug
-- dbms_output.put_line(token);
jtoken := json(token);
v_token := json_ext.get_string(jtoken,'access_token');
-- debug
-- dbms_output.put_line(v_token);
return v_token;
EXCEPTION
WHEN OTHERS THEN
   raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
end;
/
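
A quick way to smoke-test the function in isolation might be the following (the client id literal is a placeholder, and the network ACL and wallet described above must already be in place):

-- Hypothetical smoke test of the token function.
SELECT mendeley_call('<put_your_mendeley_client_id_here>') AS access_token
FROM   dual;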



Here is the anonymous PL/SQL block which I put into a PL/SQL region on the Oracle Apex page:

This block incorporates the function above; it makes the request to the Mendeley Catalog Search and handles the response.

Note how it calls the function MENDELEY_CALL (above) to load the variable v_token.

DECLARE
  v_token  VARCHAR2(599) := mendeley_call('<put_your_mendeley_client_id_here>');
  v_search VARCHAR2(500);
  mendeley_document NCLOB;
  v_status VARCHAR2(100);
  obj json_list;
  v_id VARCHAR2(100);
  v_title NVARCHAR2(1000);
  v_abstract NCLOB; --varchar2(32000);
  v_link     VARCHAR2(1000);
  v_source   VARCHAR2(500);
  v_type     VARCHAR2(100);
  v_pct_hit  VARCHAR2(10);
  v_rows     NUMBER(10);
  v_batch_id NUMBER(10);
BEGIN
  -- Oracle Wallet
  utl_http.set_wallet('file:/home/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle',
                      'my_secret_password');
  -- Set Authorisation headers and utf8
  -- the following line is necessary if you need to use languages other than latin and
  -- you will use APEX_WEB_SERVICE package
  utl_http.set_body_charset('UTF-8');
  -- build the Authorisation header
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/jsonrequest';
  apex_web_service.g_request_headers(2).name  := 'Authorization';
  apex_web_service.g_request_headers(2).value := 'Bearer '||v_token;

  -- Make the request and load the response into a CLOB
  mendeley_document := apex_web_service.make_rest_request
      (
        p_url         => 'https://api.mendeley.com:443/search/catalog',
        p_http_method => 'GET',
        p_parm_name   => apex_util.string_to_table('title:limit'),
        p_parm_value  => apex_util.string_to_table('Mendeley:10')
      );
  -- Load the response into a JSON_LIST PL/JSON object
  obj := json_list(mendeley_document);
  -- Start extracting values from the JSON and write some HTML
  -- Traverse the JSON_LIST and extract the elements you like
  FOR i IN 1..obj.count
  LOOP
    v_id       := json_ext.get_string(json(obj.get(i)),'id');
    v_title    := json_ext.get_string(json(obj.get(i)),'title');
    v_abstract := json_ext.get_string(json(obj.get(i)),'abstract');
    v_link     := json_ext.get_string(json(obj.get(i)),'link');
    v_source   := json_ext.get_string(json(obj.get(i)),'source');
    v_type     := json_ext.get_string(json(obj.get(i)),'type');
    -- write extracted data
    dbms_output.put_line(v_title||' ==> '||v_abstract);
  END LOOP;
END;
/

This shows how easy it is, in this case using one function and one anonymous block, to make a REST API request to an external Web Service from Oracle Apex.
Categories: DBA Blogs

OAM 11GR2PS2 in a day

Frank van Bortel - Wed, 2015-03-04 10:40
Get Access Manager 11gRel2 PS2 installed in a day Goal is to get OAM installed and configured in a day - with full control; that is without using the Installation Wizard. Virtual Box Start with Virtual Box. Allow plenty of memory (10GB), and disk (120GB). Attach V33411-01.iso (Oracle Server V6.3) to the CD, and boot. Minimal (not Basic server!) install, configure network with static IP Frankhttp://www.blogger.com/profile/07830428804236732019noreply@blogger.com0

Little League, Big Data

Scott Spendolini - Tue, 2015-03-03 13:36
Last week, I participated in my first Little League draft for my son's baseball team.  This was new territory, as up until now, play has been non-competitive.  This year we will actually have to keep score, and there will be winners and losers.

In preparation for the draft, we had tryouts a few weeks ago where we evaluated the kids on a number of different criteria.  Never have I seen so many scared 7 and 8 year olds march through the cages as dozens of coaches with clipboards watched and recorded their every move.  I camped out and watched them pitch, as from what many veteran coaches told me, the key to keeping the game moving along is the pitcher.

In preparation for the draft, we were sent a couple of key spreadsheets.  The first one had an average rating of all of the kids' tryout assessments, done by the board members.  The second one contained coaches' evaluations for some of the players from past seasons.  Lots and lots of nothing more than raw data.

Time to fire up APEX.  I created a workspace on my laptop, as I was not sure if we would have WiFi at the draft.  From there, I imported both spreadsheets into tables, and got to work on creating a common key.  Luckily, the combination of first and last name produced no duplicates, so it was pretty easy to link the two tables.  Next, I created a simple IR based on the EVALS table - which was the master.  This report showed all of the tryout scores, and also ranked each player based on the total score.
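
For anyone curious, the linking step amounts to a join on the name columns.  A hypothetical sketch (EVALS is mentioned above; COACH_EVALS and the column names are assumptions):

-- Hypothetical sketch: link the tryout scores to the prior-season coach
-- evaluations.  Table and column names beyond EVALS are assumptions.
SELECT e.first_name,
       e.last_name,
       e.total_score   AS tryout_score,
       c.overall_score AS coach_eval
FROM   evals e
LEFT   JOIN coach_evals c
       ON  c.first_name = e.first_name
       AND c.last_name  = e.last_name
ORDER  BY e.total_score DESC;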

Upon editing a row in EVALS, I had a second report that showed a summary of the coach's evaluation from prior seasons.  I could also make edits to the EVALS table, such as identify players that I was interested in, players that were already drafted, and any other comments that I wanted to track.

After about 20 minutes of reviewing the data, I noticed something.  I was using data collected while the player was under a lot of stress.  The data set was also small, as each player only got 5 pitches, 5 catches, 5 throws, etc.  The better indicator as to a player's talents was in the coach's evaluations, as that represents an entire season of interaction with the player, not just a 3-4 minute period.

Based on this, I was quickly able to change my IR on the first page to also include a summary of the coach's evaluations alongside the tryout evaluations.  I sorted my report based on that, and got a very different order.  This was the order that I was going to go with for my picks.

Once the draft started, it was very easy to mark each player as drafted, so that any drafted player would no longer show up in the report.  It was also trivial to toggle the "must draft" column on and off, ensuring that if there were any younger players that I wanted, I could get them in the early rounds before we had to only draft older players.

Each time it was my pick, I already knew which player I was going to draft.  Meanwhile, the other coaches shuffled stacks of marked-up papers and attempted to navigate multiple spreadsheets when it was their turn.  Even the coordinator commented on how I was always ready and kept things moving along.

Unless you're some sort of youth athletics coach that does a draft, this application will likely do you little good.  But the concept can go a long way.  In almost any role in any organization, you likely have data for something scattered across a few different sources or spreadsheets.  This data, when isolated, only paints a blurry part of the whole picture.  But when combined and analyzed, the data can start to tell a better story, as was the case in my draft.

The technical skills required to build this application were also quite minimal.  The bulk of what I used was built-in functionality of the Interactive Report in APEX.  Merging the data and linking the two tables was really the only true technical portion of this, and that's even something that can be done by a novice.

So the next time you have a stack of data that may be somehow related, resist the temptation to use old methods when trying to analyze it.  Get it into the database, merge it as best you can, and let APEX do the rest.

Oracle Data Provider for .NET now on NuGet

Christian Shay - Mon, 2015-03-02 08:30

ODP.NET, Managed Driver is now on NuGet, meaning that you can add ODP.NET to your Visual Studio project with just a few clicks in the NuGet Package Manager. We've also published an Oracle By Example walkthrough to take you step by step through the process of using NuGet and ODP.NET.

Here we are in the NuGet Package Manager:




When searching for us in the package manager, make sure to get the official package - look for the word "Official" in the title.



There are actually two NuGet packages available:

ODP.NET, Managed Driver - Official
NuGet id: Oracle.ManagedDataAccess

This adds Oracle.ManagedDataAccess.dll to your project and also makes needed configuration entries in your app.config or web.config.


ODP.NET, Managed Entity Framework Driver - Official
NuGet id: Oracle.ManagedDataAccess.EntityFramework

This adds Oracle.ManagedDataAccess.EntityFramework.dll as well as the config file configuration. It also has a dependency on the ODP.NET package above and will pull it into your project, as well as EF 6, if needed.

If you want to host this package on your local intranet, it is also available for download on the OTN .NET download page.

Please note that if you want to use Visual Studio integration features, such as browsing your Oracle Schema in Server Explorer, or using Entity Designer or Table Adapter Configuration wizard, you should still install Oracle Developer Tools for Visual Studio, as a NuGet package  does not provide any of the Visual Studio integration components needed to do design time work.

The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes

Vikram Das - Sun, 2015-03-01 14:53
I got this error while upgrading an R12.1.3 instance to R12.2.4, when I completed AD.C.Delta 5 patches with November 2014 bundle patches for AD.C and was in the process of applying TXK.C.Delta5 with November 2014 bundle patches for TXK.C :

Validation successful. All expected nodes are listed in ADOP_VALID_NODES table.
[START 2015/03/01 04:53:16] Check if services are down
        [INFO] Run admin server is not down
     [WARNING]  Hotpatch mode should only be used when directed by the patch readme.
  [EVENT]     [START 2015/03/01 04:53:17] Performing database sanity checks
    [ERROR]     The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes: .
    Log file: /erppgzb1/erpapp/fs_ne/EBSapps/log/adop/adop_20150301_045249.log


[STATEMENT] Please run adopscanlog utility, using the command

"adopscanlog -latest=yes"

to get the list of the log files along with snippet of the error message corresponding to each log file.


adop exiting with status = 1 (Fail)

I was really surprised as I had already run EBS technology codelevel checker (patch 17537119) script checkDBpatch.sh on racnode1.

To investigate, I checked inside checkDBpatch.sh and found that it creates a table called TXK_TCC_RESULTS.

SQL> desc txk_tcc_results
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 TCC_VERSION                               NOT NULL VARCHAR2(20)
 BUGFIX_XML_VERSION                        NOT NULL VARCHAR2(20)
 NODE_NAME                                 NOT NULL VARCHAR2(100)
 DATABASE_NAME                             NOT NULL VARCHAR2(64)
 COMPONENT_NAME                            NOT NULL VARCHAR2(10)
 COMPONENT_VERSION                         NOT NULL VARCHAR2(20)
 COMPONENT_HOME                                     VARCHAR2(600)
 CHECK_DATE                                         DATE
 CHECK_RESULT                              NOT NULL VARCHAR2(10)
 CHECK_MESSAGE                                      VARCHAR2(4000)

SQL> select node_name from txk_tcc_results;

NODE_NAME
--------------------------------------------------------------------------------
RACNODE1

I ran checkDBpatch.sh again, but the patch failed again with previous error:

   [ERROR]     The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes: .

It was already 5 AM on Saturday and I had been working through the night, so I thought it better to sleep and tackle this on Sunday.  On Sunday morning, after a late breakfast, I looked at the problem again.  This time I realized that the error was complaining about racnode1 (in lower case) while the txk_tcc_results table had RACNODE1 (in upper case).  To test my hunch, I immediately updated the value:

update txk_tcc_results
set node_name='racnode1' where node_name='RACNODE1';

commit;

I restarted the patch, and it went through.  The patch was indeed failing because it was looking for a lower-case value.  I will probably log an SR with Oracle, so that they change their code to make the node_name check case-insensitive.
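
A quick way to see whether the two sources disagree only in case is a sketch like the following (assuming both tables are visible to the connected user):

-- Sketch: report node names that match case-insensitively but not exactly,
-- or that exist in only one of the two tables.
SELECT f.node_name AS fnd_node, t.node_name AS tcc_node
FROM   fnd_nodes f
FULL   OUTER JOIN txk_tcc_results t
       ON LOWER(f.node_name) = LOWER(t.node_name)
WHERE  f.node_name IS NULL
   OR  t.node_name IS NULL
   OR  f.node_name <> t.node_name;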

Further, I was curious why node_name was stored in all caps in fnd_nodes and txk_tcc_results.  The file /etc/hosts had it in lower case.  I tried the hostname command at the Linux prompt:

$ hostname
RACNODE1

That was something unusual, as in our environment, hostname always returns the value in lowercase.  So I further investigated.
[root@RACNODE1 ~]# sysctl kernel.hostname
kernel.hostname = RACNODE1

So I changed it

[root@RACNODE1 ~]# sysctl kernel.hostname=racnode1
kernel.hostname = racnode1
[root@RACNODE1 ~]# sysctl kernel.hostname
kernel.hostname = racnode1
[root@RACNODE1 ~]#
[root@RACNODE1 ~]# hostname
racnode1

Logged in again to see if root prompt changed:

[root@racnode1 ~]#

I also checked
[root@tsgld5811 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
NOZEROCONF=yes
HOSTNAME=RACNODE1

Changed it here also:
[root@tsgld5811 ~]# cat /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
NOZEROCONF=yes
HOSTNAME=racnode1

I also changed it on racnode2.
Categories: APPS Blogs

What happened to “when the application is fast enough to meet users’ requirements?”

Cary Millsap - Fri, 2015-02-27 15:00
On January 5, I received an email called “Video” from my friend and former employee Guðmundur Jósepsson from Iceland. His friends call him Gummi (rhymes with “who-me”). Gummi is the guy whose name is set in the ridiculous monospace font on page xxiv of Optimizing Oracle Performance, apparently because O’Reilly’s Linotype Birka font didn’t have the letter eth (ð) in it. Gummi once modestly teased me that this is what he is best known for. But I digress...

His email looked like this:


It’s a screen shot of frame 3:12 from my November 2014 video called “Why you need a profiler for Oracle.” At frame 3:12, I am answering the question of how you can know when you’re finished optimizing a given application function. Gummi’s question is, «Oi! What happened to “when the application is fast enough to meet users’ requirements?”»

Gummi noticed (the good ones will do that) that the video says something different than the thing he had heard me say for years. It’s a fair question. Why, in the video, have I said this new thing? It was not an accident.
When are you finished optimizing?
The question in focus is, “When are you finished optimizing?” Since 2003, I have actually used three different answers:
When are you finished optimizing?
  A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
    Source: Optimizing Oracle Performance (2003) pages 302–304.
  B. When the application is fast enough to meet your users’ requirements.
    Source: I have taught this in various courses, conferences, and consulting calls since 1999 or so.
  C. When there are no unnecessary calls, and the calls that remain run at hardware speed.
    Source: “Why you need a profiler for Oracle” (2014) frames 2:51–3:20.
My motive behind answers A and B was the idea that optimizing beyond what your business needs can be wasteful. I created these answers to deter people from misdirecting time and money toward perfecting something when those resources might be better invested improving something else. This idea was important, and it still is.

So, then, where did C come from? I’ll begin with a picture. The following figure allows you to plot the response time for a single application function, whatever “given function” you’re looking at. You could draw a similar figure for every application function on your system (although I wouldn’t suggest it).


Somewhere on this response time axis for your given function is the function’s actual response time. I haven’t marked that response time’s location specifically, but I know it’s in the blue zone, because at the bottom of the blue zone is the special response time RT. This value RT is the function’s top speed on the hardware you own today. Your function can’t go faster than this without upgrading something.

It so happens that this top speed is the speed at which your function will run if and only if (i) it contains no unnecessary calls and (ii) the calls that remain run at hardware speed. ...Which, of course, is the idea behind this new answer C.
Where, exactly, is your “requirement”?
Answer B (“When the application is fast enough to meet your users’ requirements”) requires that you know the users’ response time requirement for your function, so, next, let’s locate that value on our response time axis.

This is where the trouble begins. Most DBAs don’t know what their users’ response time requirements really are. Don’t despair, though; most users don’t either.

At banks, airlines, hospitals, telcos, and nuclear plants, you need strict service level agreements, so those businesses invest in quantifying them. But realize: quantifying all your functions’ response time requirements isn’t about a bunch of users sitting in a room arguing over which subjective speed limits sound the best. It’s about knowing your technological speed limits and understanding how close to those values your business needs to pay to be. It’s an expensive process. At some companies, it’s worth the effort; at most companies, it’s just not.

How about using, “well, nobody complains about it,” as all the evidence you need that a given function is meeting your users’ requirement? It’s how a lot of people do it. You might get away with doing it this way if your systems weren’t growing. But systems do grow. More data, more users, more application functions: these are all forms of growth, and you can probably measure every one of them happening where you’re sitting right now. All these forms of growth put you on a collision course with failing to meet your users’ response time requirements, whether you and your users know exactly what they are, or not.

In any event, if you don’t know exactly what your users’ response time requirements are, then you won’t be able to use “meets your users’ requirement” as your finish line that tells you when to stop optimizing. This very practical problem is the demise of answer B for most people.
Knowing your top speed
Even if you do know exactly what your users’ requirements are, it’s not enough. You need to know something more.

Imagine for a minute that you do know your users’ response time requirement for a given function, and let’s say that it’s this: “95% of executions of this function must complete within 5 seconds.” Now imagine that this morning when you started looking at the function, it would typically run for 10 seconds in your Oracle SQL Developer worksheet, but now after spending an hour or so with it, you have it down to where it runs pretty much every time in just 4 seconds. So, you’ve eliminated 60% of the function’s response time. That’s a pretty good day’s work, right? The question is, are you done? Or do you keep going?

Here is the reason that answer C is so important. You cannot responsibly answer whether you’re done without knowing that function’s top speed. Even if you know how fast people want it to run, you can’t know whether you’re finished without knowing how fast it can run.

Why? Imagine that 85% of those 4 seconds are consumed by Oracle enqueue, or latch, or log file sync calls, or by hundreds of parse calls, or 3,214 network round-trips to return 3,214 rows. If any of these things is the case, then no, you’re absolutely not done yet. If you were to allow some ridiculous code path like that to survive on a production system, you’d be diminishing the whole system’s effectiveness for everybody (even people who are running functions other than the one you’re fixing).

Now, sure, if there’s something else on the system that has a higher priority than finishing the fix on this function, then you should jump to it. But you should at least leave this function on your to-do list. Your analysis of the higher priority function might even reveal that this function’s inefficiencies are causing the higher-priority functions problems. Such can be the nature of inefficient code under conditions of high load.

On the other hand, if your function is running in 4 seconds and (i) its profile shows no unnecessary calls, and (ii) the calls that remain are running at hardware speeds, then you’ve reached a milestone:
  1. if your code meets your users’ requirement, then you’re done;
  2. otherwise, either you’ll have to reimagine how to implement the function, or you’ll have to upgrade your hardware (or both).
There’s that “users’ requirement” thing again. You see why it has to be there, right?

Well, here’s what most people do. They get their functions’ response times reasonably close to their top speeds (which, with good people, isn’t usually as expensive as it sounds), and then they worry about requirements only if those requirements are so important that it’s worth a project to quantify them. A requirement is usually considered really important if it’s close to your top speed or if it’s really expensive when you violate a service level requirement.

This strategy works reasonably well.

It is interesting to note here that knowing a function’s top speed is actually more important than knowing your users’ requirements for that function. A lot of companies can work just fine not knowing their users’ requirements, but without knowing your top speeds, you really are in the dark. A second observation that I find particularly amusing is this: not only is your top speed more important to know, your top speed is actually easier to compute than your users’ requirement (…if you have a profiler, which was my point in the video).

Better and easier is a good combination.
Tomorrow is important, too
When are you finished optimizing?
  A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
  B. When the application is fast enough to meet your users’ requirements.
  C. When there are no unnecessary calls, and the calls that remain run at hardware speed.
Answer A is still a pretty strong answer. Notice that it actually maps closely to answer C. Answer C’s prescription for “no unnecessary calls” yields answer A’s goal of call reduction, and answer C’s prescription for “calls that remain run at hardware speed” yields answer A’s goal of latency reduction. So, in a way, C is a more action-oriented version of A, but A goes further to combat the perfectionism trap with its emphasis on the cost of action versus the cost of inaction.

One thing I’ve grown to dislike about answer A, though, is its emphasis on today in “…exceeds the cost of the performance you’re getting today.” After years of experience with the question of when optimization is complete, I think that answer A under-emphasizes the importance of tomorrow. Unplanned tomorrows can quickly become ugly todays, and as important as tomorrow is to businesses and the people who run them, it’s even more important to another community: database application developers.
Subjective goals are treacherous for developers
Many developers have no way to test, today, the true production response time behavior of their code, which they won’t learn until tomorrow. ...And perhaps only until some remote, distant tomorrow.

Imagine you’re a developer using 100-row tables on your desktop to test code that will access 100,000,000,000-row tables on your production server. Or maybe you’re testing your code’s performance only in isolation from other workload. Both of these are problems; they’re procedural mistakes, but they are everyday real-life for many developers. When this is how you develop, telling you that “your users’ response time requirement is n seconds” accidentally implies that you are finished optimizing when your query finishes in less than n seconds on your no-load system of 100-row test tables.

If you are a developer writing high-risk code—and any code that will touch huge database segments in production is high-risk code—then of course you must aim for the “no unnecessary calls” part of the top speed target. And you must aim for the “and the calls that remain run at hardware speed” part, too, but you won’t be able to measure your progress against that goal until you have access to full data volumes and full user workloads.

Notice that to do both of these things, you must have access to full data volumes and full user workloads in your development environment. To build high-performance applications, you must do full data volume testing and full user workload testing in each of your functional development iterations.

This is where agile development methods yield a huge advantage: agile methods provide a project structure that encourages full performance testing for each new product function as it is developed. Contrast this with the terrible project planning approach of putting all your performance testing at the end of your project, when it’s too late to actually fix anything (if there’s even enough budget left over by then to do any testing at all). If you want a high-performance application with great performance diagnostics, then performance instrumentation should be an important part of your feedback for each development iteration of each new function you create.
My answer
So, when are you finished optimizing?
  A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
  B. When the application is fast enough to meet your users’ requirements.
  C. When there are no unnecessary calls and the calls that remain run at hardware speed.
There is some merit in all three answers, but as Dave Ensor taught me inside Oracle many years ago, the correct answer is C. Answer A specifically restricts your scope of concern to today, which is especially dangerous for developers. Answer B permits you to promote horrifically bad code, unhindered, into production, where it can hurt the performance of every function on the system. Answers A and B both presume that you know information that you probably don’t know and that you may not need to know. Answer C is my favorite answer because it tells you exactly when you’re done, using units you can measure and that you should be measuring.

Answer C is usually a tougher standard than answer A or B, and when it’s not, it is the best possible standard you can meet without upgrading or redesigning something. In light of this “tougher standard” kind of talk, it is still important to understand that what is optimal from a software engineering perspective is not always optimal from a business perspective. The term optimized must ultimately be judged within the constraints of what the business chooses to pay for. In the spirit of answer A, you can still make the decision not to optimize all your code to the last picosecond of its potential. How perfect you make your code should be a business decision. That decision should be informed by facts, and these facts should include knowledge of your code’s top speed.

Thank you, Guðmundur Jósepsson, of Iceland, for your question. Thank you for waiting patiently for several weeks while I struggled putting these thoughts into words.

The German ADF-Community-Book was released today

Gerd Volberg - Thu, 2015-02-26 06:00
The German ADF- (and Forms-) Community released their first book.

The "ADF book" is a compilation of German lectures, articles and workshop tutorials in Oracle ADF (some of the contributions are in English!). The authors are members of the German ADF community. The time frame covers the years 2010 to 2014. The project "ADF book" was implemented by a team of staff from partner companies and Oracle.

40 authors wrote 70 contributions on 1400 pages in this book.

It's a 110 MB PDF and can be downloaded for free from the ADF-Community-Page.


My part in the book has the title "Modernizing Oracle Forms", in which I demonstrate how easy it is to enhance Oracle Forms with modern frameworks like LAF (the Look and Feel framework from Francois Degrelle) and how to integrate ADF in Oracle Forms via OraFormsFaces (a framework from Wilfred van der Deijl).

Have fun with the book
Gerd

Supplement: A-Team Chronicles

Jean-Philippe Pinte - Thu, 2015-02-26 02:31
The A-Team Chronicles site aggregates the content produced by the members of the A-Team: best practices, tips, advice, etc.

PeopleTools 8.54: Oracle Resource Manager

David Kurtz - Wed, 2015-02-25 04:11
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

Oracle Resource Manager is about prioritising one database session over another, or about restricting the overhead of one session for the good of the other database users.  A resource plan is a set of rules that are applied to some or all database sessions for some or all of the time.  Those rules may be simple or complex, but they need to reflect the business's view of what is most important.  Either way, Oracle Resource Manager requires careful design.
I am not going to attempt to further explain here how the Oracle feature works; I want to concentrate on how PeopleSoft interfaces with it.
PeopleTools Feature
This feature effectively maps Oracle resource plans to PeopleSoft executables.  The resource plan will then manage the database resource consumption of that PeopleSoft process.  There is a new component that maps PeopleSoft resource names to Oracle consumer groups.  For this example I have chosen some of the consumer groups in the MIXED_WORKLOAD_PLAN that is delivered with Oracle 11g.

  • The Oracle Consumer Group field is validated against the names of the Oracle consumer groups defined in the database, as the following query illustrates.
SELECT DISTINCT group_or_subplan, type
FROM dba_rsrc_plan_directives
WHERE plan = 'MIXED_WORKLOAD_PLAN'
ORDER BY 2 DESC,1
/

GROUP_OR_SUBPLAN TYPE
------------------------------ --------------
ORA$AUTOTASK_SUB_PLAN PLAN
BATCH_GROUP CONSUMER_GROUP
INTERACTIVE_GROUP CONSUMER_GROUP
ORA$DIAGNOSTICS CONSUMER_GROUP
OTHER_GROUPS CONSUMER_GROUP
SYS_GROUP CONSUMER_GROUP
If you use Oracle SQL Trace on a PeopleSoft process (in this case PSAPPSRV) you find the following query, which returns the name of the Oracle consumer group that the session should use.
  • The entries in the component shown above are stored in PS_PT_ORA_RESOURCE.
  • PS_PTEXEC2RESOURCE is another new table that maps PeopleSoft executable name to resource name.
SELECT PT_ORA_CONSUMR_GRP 
FROM PS_PT_ORA_RESOURCE
, PS_PTEXEC2RESOURCE
WHERE PT_EXECUTABLE_NAME = 'PSAPPSRV'
AND PT_ORA_CONSUMR_GRP <> ' '
AND PS_PT_ORA_RESOURCE.PT_RESOURCE_NAME = PS_PTEXEC2RESOURCE.PT_RESOURCE_NAME

PT_ORA_CONSUMR_GRP
------------------------
INTERACTIVE_GROUP

And then the PeopleSoft process explicitly switches its group, thus:
DECLARE 
old_group varchar2(30);
BEGIN
DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
END;
Unfortunately, the consequence of this explicit switch is that it overrides any consumer group mapping rules, as I demonstrate below.
Setup
The PeopleSoft owner ID needs some additional privileges if it is to be able to switch to the consumer groups.
BEGIN
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE
('SYSADM', 'ADMINISTER_RESOURCE_MANAGER',FALSE);
END;

BEGIN
FOR i IN(
SELECT DISTINCT r.pt_ora_consumr_grp
FROM sysadm.ps_pt_ora_resource r
WHERE r.pt_ora_consumr_grp != ' '
AND r.pt_ora_consumr_grp != 'OTHER_GROUPS'
) LOOP
dbms_output.put_line('Grant '||i.pt_ora_consumr_grp);
DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP
(GRANTEE_NAME => 'SYSADM'
,CONSUMER_GROUP => i.pt_ora_consumr_grp
,GRANT_OPTION => FALSE);
END LOOP;
END;
/

The RESOURCE_MANAGER_PLAN initialisation parameter should be set to the name of the plan which contains the directives.
NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------
resource_manager_plan                string      MIXED_WORKLOAD_PLAN
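
If the plan is not already active, it can be set dynamically.  A minimal sketch (the SCOPE clause is an assumption; choose whatever suits your change control):

-- Sketch: activate the resource plan containing the directives.
ALTER SYSTEM SET resource_manager_plan = 'MIXED_WORKLOAD_PLAN' SCOPE=BOTH;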

I question one or two of the mappings on PS_PTEXEC2RESOURCE.
SELECT * FROM PS_PTEXEC2RESOURCE …

PT_EXECUTABLE_NAME               PT_RESOURCE_NAME
-------------------------------- -----------------
PSAPPSRV                         APPLICATION SERVE
PSQED                            MISCELLANEOUS
PSQRYSRV                         QUERY SERVER

  • PSNVS is the nVision Windows executable.  It is in PeopleTools resource MISCELLANEOUS.  This is nVision running in 2-tier mode.  I think I would put nVision into the same consumer group as query.  I can't see why it wouldn't be possible to create new PeopleSoft consumer groups and map them to certain executables.  nVision would be a candidate for a separate group. 
    • For example, one might want to take a different approach to parallelism in GL reporting having partitioned the LEDGER tables by FISCAL_YEAR and ACCOUNTING_PERIOD
  • PSQED is also in MISCELLANEOUS.  Some customers use it to run PS/Query in 2-tier mode, and allow some developers to use it to run queries.  Perhaps it should also be in the QUERY SERVER group.
Cannot Mix PeopleSoft Consumer Groups Settings with Oracle Consumer Group Mappings
I would like to be able to blend the PeopleSoft configuration with the ability to automatically associate Oracle consumer groups with specific values of MODULE and ACTION.  Purely as an example, I am trying to move the Process Monitor component into the SYS_GROUP consumer group.
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
(attribute => 'MODULE_NAME'
,value => 'PROCESSMONITOR'
,consumer_group => 'SYS_GROUP');
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

However, it doesn't work because the explicit setting overrides any rules, and you cannot prioritise other rules above explicit settings.
exec dbms_application_info.set_module('PROCESSMONITOR','PMN_PRCSLIST');
SELECT REGEXP_SUBSTR(program,'[^.@]+',1,1) program
, module, action, resource_consumer_group
FROM v$session
WHERE module IN('PROCESSMONITOR','WIBBLE')
ORDER BY program, module, action
/

So I have created a new SQL*Plus session and set the module/action, and it has automatically moved into the SYS_GROUP.  Meanwhile, I have been into the Process Monitor in the PIA and the module and action of the PSAPPSRV sessions have been set, but they remain in the INTERACTIVE_GROUP.
PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
sqlplus PROCESSMONITOR PMN_PRCSLIST SYS_GROUP

If I set the module to something that doesn't match a rule, the consumer group goes back to OTHER_GROUPS which is the default. 
exec dbms_application_info.set_module('WIBBLE','PMN_PRCSLIST');

PROGRAM MODULE ACTION RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
sqlplus WIBBLE PMN_PRCSLIST OTHER_GROUPS

Now, if I explicitly set the consumer group exactly as PeopleSoft does, my session automatically moves into the INTERACTIVE_GROUP.
DECLARE 
old_group varchar2(30);
BEGIN
DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
END;
/

PROGRAM MODULE ACTION RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
sqlplus WIBBLE PMN_PRCSLIST INTERACTIVE_GROUP

Next, I will set the module back to match the rule, but the consumer group doesn't change because the explicit setting takes priority over the rules.
PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
---------------- ---------------- ---------------- ------------------------
PSAPPSRV PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
PSAPPSRV PROCESSMONITOR PMN_SRVRLIST INTERACTIVE_GROUP
sqlplus PROCESSMONITOR PMN_PRCSLIST INTERACTIVE_GROUP
You can rearrange the priority of the other rule settings, but explicit must have the highest priority (if you try, you will get ORA-56704). So, continuing with this example, I cannot assign a specific component to a different resource group unless I don't use the PeopleSoft configuration for PSAPPSRV.
Instead, I could create a rule to assign a resource group to PSAPPSRV via the program name, and have a higher priority rule to override that when the module and/or action is set to a specific value.  However, first I have to disengage the explicit consumer group change for PSAPPSRV by removing the row from PTEXEC2RESOURCE.
UPDATE ps_ptexec2resource 
SET pt_resource_name = 'DO_NOT_USE'
WHERE pt_executable_name = 'PSAPPSRV'
AND pt_resource_name = 'APPLICATION SERVER'
/
COMMIT
/
BEGIN
DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
END;
/
BEGIN
DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
(attribute => 'CLIENT_PROGRAM'
,value => 'PSAPPSRV'
,consumer_group => 'INTERACTIVE_GROUP');

DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
(attribute => 'MODULE_NAME'
,value => 'PROCESSMONITOR'
,consumer_group => 'SYS_GROUP');

DBMS_RESOURCE_MANAGER.set_consumer_group_mapping_pri(
explicit => 1,
oracle_user => 2,
service_name => 3,
module_name_action => 4, --note higher than just module
module_name => 5, --note higher than program
service_module => 6,
service_module_action => 7,
client_os_user => 8,
client_program => 9, --note lower than module
client_machine => 10
);
DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
So, you would have to choose between using either the PeopleSoft configuration or the Oracle Resource Manager configuration.  It depends on your requirements.  This is going to be a decision you will have to take when you design your resource management.  Of course, you can always use just the mapping approach in versions of PeopleTools prior to 8.54.

Conclusion
I have never seen Oracle Resource Manager used with PeopleSoft, probably because setting it up is not trivial, and then it is difficult to test the resource plan.  I think this enhancement is a great start that makes it very much easier to implement Oracle Resource Manager on PeopleSoft.  However, I think we need more granularity.
  • I would like to be able to put specific processes run on the process scheduler by name into specific consumer groups.  For now, you could do this with a trigger on PSPRCSRQST that fires on process start-up and makes an explicit consumer group change (and puts it back again for Application Engine on completion); a sketch of such a trigger follows this list.
  • I would like the ability to set different resource groups for the same process name in different application server domains.  For example,
    • I might want to distinguish between PSQRYSRV processes used for ad-hoc PS/Queries on certain domains from PSQRYSRVs used to support nVision running in 3-tier mode on other domains.
    • I might have different PIAs for back-office and self-service users going to different application servers.  I might want to prioritise back-office users over self-service users.
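
Here is a minimal sketch of the trigger idea from the first bullet.  The process name, the target consumer group, and the RUNSTATUS value that denotes 'Processing' are assumptions for illustration and would need checking against your own system:

-- Sketch only: switch the consumer group of one named process at start-up.
-- 'XYZ_PROCESS', BATCH_GROUP and runstatus '7' (Processing) are assumptions.
CREATE OR REPLACE TRIGGER sysadm.set_prcs_consumer_group
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = '7' AND new.prcsname = 'XYZ_PROCESS')
DECLARE
  l_old_group VARCHAR2(30);
BEGIN
  DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('BATCH_GROUP', l_old_group, FALSE);
EXCEPTION
  WHEN OTHERS THEN NULL;  -- never let the instrumentation break the process
END;
/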
Nonetheless, I warmly welcome the new support for Oracle Resource Manager in PeopleTools.  It is going to be very useful for RAC implementations, and I think it will be essential for multi-tenant implementations where different PeopleSoft product databases are plugged into the same container database.

Announcement: Oracle Database In-Memory Advisor

Jean-Philippe Pinte - Wed, 2015-02-25 01:27
Oracle Database In-Memory Advisor is now available.
To use this advisor, the "Database Tuning Pack" is required.

More information:
  • OTN page
  • MOS Note 1965342.1

Oracle Database In-Memory Advisor Released

Asif Momen - Tue, 2015-02-24 16:22
The Oracle Database In-Memory option was released with Oracle Database 12c (12.1.0.2), and the In-Memory Advisor (IMA) has been much awaited since then. Oracle Database In-Memory is designed to achieve the following goals:

  1.  Speed up analytical queries
  2.  Speed up OLTP transactions
  3.  NO application changes


Without the In-Memory Advisor, a DBA has to manually identify the tables to be placed in the In-Memory Column Store (IMCS). This manual task is no longer required, as the IMA analyzes the analytical workload of the database and produces a recommendation report (which includes the SQL commands to place the tables in the IMCS).
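
The recommended commands are ordinary In-Memory DDL.  The following is a hypothetical example of the kind of statement such a report generates (the table name, compression level and priority are illustrative):

-- Illustrative only: place a table in the In-Memory Column Store.
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;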


For more information on IMA please refer to MOS: 1965343.1 and you may also download the best practices white paper from here.




cannot set user id: Resource temporarily unavailable or Fork: Retry: Resource Temporarily Unavailable

Vikram Das - Tue, 2015-02-24 10:01
Amjad reported this error while trying to login to the server:

cannot set user id: Resource temporarily unavailable

In the past he had reported this error:

Fork: Retry: Resource Temporarily Unavailable

This is due to the fact that the user has run out of free process slots (the nproc limit).  In OEL 6.x, this limit is not set in /etc/security/limits.conf but in the file:

/etc/security/limits.d/90-nproc.conf

The default content in the file is:

cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     1024
root       soft    nproc     unlimited

I changed this to:

After
$ cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     16384
root       soft    nproc     unlimited
$

As soon as this change was made, Amjad was able to log in.

Categories: APPS Blogs

Who enjoys the feather display of a male peacock?

FeuerThoughts - Tue, 2015-02-24 08:25


Who appreciates the display of feathers by a male peacock?

Female peacocks seem to get a kick out of them. They seem to play a role in mating rituals.

Who else? Why, humans, of course!

We know that humans greatly appreciate those displays, because of the aaahing and ooohing that goes on when we see them. We like those colors. We like the iridescence. We like the shapes and patterns.

If one were to speculate on why a female peacock gets all worked up about a particular male's feather display, we would inevitably hear about instinctual responses, hard-wiring, genetic determinism, and so on.

And if one were to speculate on why a human goes into raptures, we would then experience a major shift in explanation. 

Time to talk about anything but a physiological, hard-wired sort of response.

No, for humans, the attraction has to do with our big brains, our ability to create and appreciate "art". And that is most definitely not something other animals do, right?

Oh, sure, right. Like these instinctive, hard-wired bowerbird mating nests:


That clearly has nothing to do with an aesthetic sense or "art". Just instinct.

Why? Because we humans say so. We just assert this "fact."

Most convenient, eh?

Categories: Development

Fronting Oracle Maven Repository with Sonatype Nexus

Steve Button - Mon, 2015-02-23 16:44
The Sonatype team have announced the release of Nexus 2.11.2, which is a minor update that now works with the Oracle Maven Repository.

I was going to write up a bit about it, but Manfred Moser from Sonatype has already put together a blog post and video on it:
With the new Nexus 2.11.2 release we are supporting the authentication mechanism used for the Oracle Maven repository in both Nexus OSS and Nexus Pro. This allows you to proxy the repository in Nexus and makes the components discoverable via browsing the index as well as searching for components. You will only need to set this up once in Nexus and all your projects. Developers and CI servers get access to the components and the need for any manual work disappears.  On the Nexus side, the configuration changes can be done easily as part of your upgrade to the new release.
Check it out @ Using the Oracle Maven Repository with Nexus








Exadata 12c New Features RMOUG Slides

Fuad Arshad - Mon, 2015-02-23 08:33
I've finally gotten around to posting my RMOUG slide deck on SlideShare. Hopefully this is helpful to folks looking at the new features in Exadata.

PeopleTools 8.54: Multiple Query Security Records

David Kurtz - Mon, 2015-02-23 04:26
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

This post is not about a database feature newly supported in PeopleTools, but PeopleTools is capable of doing something new that could negatively impact the database.  When I saw the following warning in the PeopleTools 8.54 release notes, I thought I should look into it.
"PeopleTools has added an Advanced Query Security option to Application Designer. This feature allows up to five Query Security Records to be associated with a single record, including the ability to associate security with non-key fields. While powerful, this feature should be used sparingly because multiple additional joins will affect query performance."

The PeopleTools documentation shows how to add multiple query security records in Application Designer, but doesn't really explain what effect it will have on queries on that record.
PeopleTools has always allowed a query security record to be defined on a record.  This is the record properties for JOB.


I am going to create a simple example query that joins PS_JOB and PS_NAMES.  These records have different query security records, so both query security records appear in the PS/Query.
SELECT B.EMPLID, B.DEPTID
FROM PS_JOB B, PS_EMPLMT_SRCH_QRY B1, PS_NAMES A, PS_PERALL_SEC_QRY A1
WHERE ( B.EMPLID = B1.EMPLID
AND B.EMPL_RCD = B1.EMPL_RCD
AND B1.OPRID = 'PS'
AND A.EMPLID = A1.EMPLID
AND A1.OPRID = 'PS'
AND ( B.EFFDT =
(SELECT MAX(B_ED.EFFDT) FROM PS_JOB B_ED
WHERE B.EMPLID = B_ED.EMPLID
AND B.EMPL_RCD = B_ED.EMPL_RCD
AND B_ED.EFFDT <= SYSDATE)
AND B.EFFSEQ =
(SELECT MAX(B_ES.EFFSEQ) FROM PS_JOB B_ES
WHERE B.EMPLID = B_ES.EMPLID
AND B.EMPL_RCD = B_ES.EMPL_RCD
AND B.EFFDT = B_ES.EFFDT)
AND B.EMPLID = A.EMPLID
AND A.EFFDT =
(SELECT MAX(A_ED.EFFDT) FROM PS_NAMES A_ED
WHERE A.EMPLID = A_ED.EMPLID
AND A.NAME_TYPE = A_ED.NAME_TYPE
AND A_ED.EFFDT <= SYSDATE) ))


The new version of the same query, but this time with multiple query security records in force, is below.  Note that:
  • A1, B1 are the query security records defined on the record properties that have always been present in PeopleTools.
  • B4, B5, B6 are the advanced query security records.  Note that EMPLMT_SRCH_QRY has join criteria on the columns specified in the Advanced Query Security Mapping dialogue.
  • EMPLMT_SRCH_QRY gets joined twice to JOB because it is the query security record B1 and an advanced query security record B4, so the advanced settings are in addition to the standard setting.  Be careful not to duplicate records.  The PS/Query Security Join Optimization setting (introduced in PeopleTools 8.52) does not help with this.
  • The standard query security record is stored in PSRECDEFN.QRYSRCRECNAME, while the advanced query security record definitions are stored in a new PeopleTools table PSRECSECFLDMAPS.
SELECT B.EMPLID, B.DEPTID
FROM PS_JOB B, PS_EMPLMT_SRCH_QRY B1, PS_EMPLMT_SRCH_QRY B4,
PS_PERALL_SEC_QRY B5, PS_PERS_SRCH_QRY B6,
PS_NAMES A, PS_PERALL_SEC_QRY A1
WHERE ( B.EMPLID = B1.EMPLID
AND B.EMPL_RCD = B1.EMPL_RCD
AND B1.OPRID = 'PS'
AND B.EMPLID = B4.EMPLID
AND B.EMPL_RCD = B4.EMPL_RCD
AND B4.OPRID = 'PS'
AND B.EMPLID = B5.EMPLID
AND B5.OPRID = 'PS'
AND B.EMPLID = B6.EMPLID
AND B6.OPRID = 'PS'

AND A.EMPLID = A1.EMPLID
AND A1.OPRID = 'PS'
AND ( B.EFFDT =
(SELECT MAX(B_ED.EFFDT) FROM PS_JOB B_ED
WHERE B.EMPLID = B_ED.EMPLID
AND B.EMPL_RCD = B_ED.EMPL_RCD
AND B_ED.EFFDT <= SYSDATE)
AND B.EFFSEQ =
(SELECT MAX(B_ES.EFFSEQ) FROM PS_JOB B_ES
WHERE B.EMPLID = B_ES.EMPLID
AND B.EMPL_RCD = B_ES.EMPL_RCD
AND B.EFFDT = B_ES.EFFDT)
AND B.EMPLID = A.EMPLID
AND A.EFFDT =
(SELECT MAX(A_ED.EFFDT) FROM PS_NAMES A_ED
WHERE A.EMPLID = A_ED.EMPLID
AND A.NAME_TYPE = A_ED.NAME_TYPE
AND A_ED.EFFDT <= SYSDATE) ))
Conclusion
I know from previous experience that having just two different query security records on different base records, as in the first example above, can lead to significant performance problems.  This new feature has the potential to add up to five more per record.
I can see that this feature could have occasional application where the additional security is not joined by a key field.  However, I would generally echo the sentiment in the release notes, and use it sparingly.  Instead of two query security records, could you merge them into one security record?

PeopleTools 8.54: %SelectDummyTable Meta-SQL

David Kurtz - Sun, 2015-02-22 11:57
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
  
PeopleTools simply evaluates the new %SelectDummyTable meta-SQL as 'DUAL' on Oracle.

In Oracle, DUAL is just a convenience table.  You don't need to use it; you can use anything you want.  PeopleSoft applications often use PS_INSTALLATION when they need a single-row source in a query.  This was another example of platform agnosticism: PS_INSTALLATION is available on every platform and every PeopleSoft installation, DUAL is not.

Instead of coding this (the example is taken from ESPP_REF_REVMAIN.PSHUP.Do When)
%Select(IF_FLAG) 
SELECT 'X'
FROM PS_INSTALLATION
WHERE %Bind(ST_SEND_SRC) = 'N'
You can now code this instead:
%Select(IF_FLAG) 
SELECT 'X'
FROM %SelectDummyTable
WHERE %Bind(ST_SEND_SRC) = 'N'
Which resolves to:
%Select(IF_FLAG)  
SELECT 'X'
FROM DUAL
WHERE %Bind(ST_SEND_SRC) = 'N'
There are two advantages to using DUAL.
  • In the database, the Oracle optimizer knows that DUAL is a special one row, one column table.  When you use it in queries, it uses this knowledge when generating the execution plan.
  • If you used a real table, there was a risk that it could have no rows, or more than one row.  Either could cause application problems.  In previous versions of Oracle, when it was a real physical table, that problem could also occur with DUAL.  However, since Oracle 10g it is just a memory structure in the database, and accessing it involves neither a logical nor a physical read. 
Conclusion
PeopleSoft have created a meta-SQL that evaluates as DUAL so that this Oracle optimization is implemented in a platform-generic manner.
I would not bother to go back and change existing code to use this, unless perhaps I was visiting the code anyway, but I would certainly recommend its use going forward.
