Feed aggregator

Implementing the Tree Navigation Oracle Alta UI Design Pattern

Shay Shmeltzer - Fri, 2015-03-13 16:10

The Oracle Alta UI design patterns offer many new approaches for navigation in your application as you can see in the navigation patterns section. One of those is the Tree Navigation pattern - which is an updated approach to the way that many applications display menus today.

While the "old" way of building these menus was using the tree component, the new design uses an interface that works better on mobile devices and is easier on the eyes. It uses animation to do in-place replacement of one level in the menu with the next one. 

[Image: the old tree-based menu compared with the new in-place navigation]

You could also use this approach to represent other types of hierarchical/master-detail relationships. 

In the demo below I show how to quickly implement such a navigation pattern with ADF Faces, using a combination of af:listView components along with the af:deck component.

There are several further things you might want to do in an actual application beyond what the demo shows.

One is to show, on the right side of the page, information about the object you select on the left side. Using a deck component there, you can also switch that section to show either Dept or Emp data in the same area. You already have the actionListener in place to switch the display, and ADF already has the right record selected - so just dropping the same data control on the right as a form will be enough.

Another nice enhancement would be to show the right caret only when a row actually has details. This should be easy to achieve with a calculated attribute using Groovy - as shown here.

In the demo I also show how to invoke the makeCurrent row-selection functionality from a managed bean; this allows me to do two operations when a menu option is selected. The code I use, which is based on code I found on Ashish's blog, is:

public void deptSelect(SelectionEvent selectionEvent) {
    // Invoke the declarative makeCurrent selection handling through an EL method expression
    ELContext elcontext = FacesContext.getCurrentInstance().getELContext();
    MethodExpression methodExpression =
        FacesContext.getCurrentInstance().getApplication().getExpressionFactory()
            .createMethodExpression(elcontext,
                                    "#{bindings.DepartmentsView1.treeModel.makeCurrent}",
                                    Object.class, new Class[] { SelectionEvent.class });
    methodExpression.invoke(elcontext, new Object[] { selectionEvent });
    // Then switch the deck to the employees panel and repaint it
    deck.setDisplayedChild("pgl2");
    AdfFacesContext.getCurrentInstance().addPartialTarget(deck);
}
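For completeness, the "back" action that the page wires to the back button (mb3.backToDept) is not shown in the post; a minimal sketch, assuming the same deck binding, could look like this:

public void backToDept(ActionEvent actionEvent) {
    // switch the deck back to the departments list and refresh it
    deck.setDisplayedChild("pgl1");
    AdfFacesContext.getCurrentInstance().addPartialTarget(deck);
}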

I also use styleClass="AFAppNavbarButton" for the "back" button to make it look a bit better. 

The full source of the JSF page is:

<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE html>
<f:view xmlns:f="http://java.sun.com/jsf/core" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
    <af:document title="untitled3.jsf" id="d1">
        <af:messages id="m1"/>
        <af:form id="f1">
            <af:pageTemplate viewId="/oracle/templates/tabletFirstTemplate.jspx" id="pt1">
                <f:facet name="header"/>
                <f:facet name="status"/>
                <f:facet name="appNav"/>
                <f:facet name="globalLinks"/>
                <f:facet name="footer"/>
                <f:facet name="center"/>
                <f:facet name="start">
                    <af:deck id="d2" binding="#{mb3.deck}" displayedChild="pgl1">
                        <af:panelGroupLayout id="pgl1">
                            <af:listView value="#{bindings.DepartmentsView1.collectionModel}" var="item"
                                         emptyText="#{bindings.DepartmentsView1.viewable ? 'No data to display.' : 'Access Denied.'}"
                                         fetchSize="#{bindings.DepartmentsView1.rangeSize}" id="lv1" selection="single"
                                         selectionListener="#{mb3.deptSelect}">
                                <af:listItem id="li1">
                                    <af:panelGridLayout id="pgl3">
                                        <af:gridRow marginTop="5px" height="auto" marginBottom="5px" id="gr1">
                                            <af:gridCell marginStart="5px" width="80%" id="gc1">
                                                <af:outputFormatted value="#{item.bindings.DepartmentName.inputValue}"
                                                                    id="of1"/>
                                            </af:gridCell>
                                            <af:gridCell marginStart="5px" width="20%" marginEnd="5px" id="gc2">
                                                <af:image source="func_caretright_16_ena.png" id="i1"/>
                                            </af:gridCell>
                                        </af:gridRow>
                                    </af:panelGridLayout>
                                </af:listItem>
                            </af:listView>
                        </af:panelGroupLayout>
                        <af:panelGroupLayout id="pgl2">
                            <af:button text="#{bindings.DepartmentName.inputValue}" id="b1"
                                       actionListener="#{mb3.backToDept}" styleClass="AFAppNavbarButton"
                                       icon="/func_caretleft_16_ena.png"/>
                            <af:listView value="#{bindings.EmployeesView4.collectionModel}" var="item"
                                         emptyText="#{bindings.EmployeesView4.viewable ? 'No data to display.' : 'Access Denied.'}"
                                         fetchSize="#{bindings.EmployeesView4.rangeSize}" id="lv2">
                                <af:listItem id="li2">
                                    <af:panelGridLayout id="pgl4">
                                        <af:gridRow marginTop="5px" height="auto" marginBottom="5px" id="gr2">
                                            <af:gridCell marginStart="5px" width="80%" id="gc3">
                                                <af:outputFormatted value="#{item.bindings.LastName.inputValue}"
                                                                    id="of2"/>
                                            </af:gridCell>
                                            <af:gridCell marginStart="5px" width="20%" marginEnd="5px" id="gc4">
                                                <af:image source="func_caretright_16_ena.png" id="i2"/>
                                            </af:gridCell>
                                        </af:gridRow>
                                    </af:panelGridLayout>
                                </af:listItem>
                            </af:listView>
                        </af:panelGroupLayout>
                        <af:transition triggerType="forwardNavigate" transition="slideLeft"/>
                        <af:transition triggerType="backNavigate" transition="slideRight"/>
                    </af:deck>
                </f:facet>
                <f:facet name="end"/>
                <f:attribute name="endWidth" value="0px"/>
                <f:attribute name="startWidth" value="200px"/>
            </af:pageTemplate>
        </af:form>
    </af:document>
</f:view> 

Categories: Development

APEX 5.0: pimping the Login page

Dimitri Gielis - Wed, 2015-03-11 18:29
When you create a new application in APEX 5.0, the login page probably looks like this:


I love the built-in login page of APEX itself - luckily it's easy enough to build that into our own apps too. Thank you APEX Dev team!

The first step is to change the region template to Login Region Template:


We want to add a nice icon on top of the Login text. You can use the Icon CSS Class in the Region options - in this case I opted for fa-medkit:


Next up is making the Login button bigger and stretching it to the full width, like the items.
In APEX 5.0 you can use the Template Options to do that:


Once stretched, the Login button fits the entire width.

Next up is getting some icons in the username and password fields.
For the username we use the "icon-login-username" CSS class.
We hide the label and specify a placeholder instead, so you see the word "username" until you start typing, at which point the text disappears.


For the password field we do the same thing, but we specify the "icon-login-password" CSS class.


Finally your login screen looks like this:


Great? Absolutely - and so easy with APEX 5.0!

What's next? Is there anything better? Euh... yes, what about live validation?
Sure we can do that in APEX 5.0 without too much hassle :)) Thanks once again APEX Dev Team!

In the item, make sure Value Required is set and add the following span in the Post Text:


That will give you a nice visual indication if you entered text:


Cool? Creating login pages in APEX 5.0 is ... (you fill in the word)

Interested in more? We're doing an APEX 5.0 UI Training in May.
Categories: Development

Announcement: DB12c certified with EM12c

Jean-Philippe Pinte - Wed, 2015-03-11 01:55


It is now possible to use a 12.1.0.2.1 database for the Oracle Enterprise Manager 12.1.0.4 repository (OMR): http://ora.cl/tY3
 


Loading CSV files with special characters in Oracle DB

Dimitri Gielis - Tue, 2015-03-10 11:08
I often need to load the data of Excel or CSV files into the Oracle Database.

Ever got those annoying question marks when you try to load the data? Or, instead of question marks, do you just get blanks when the file uses special characters? Here's an example:


My database character set is UTF-8, so ideally you want to load your data UTF-8 encoded.

With Excel I've not found an easy way to specify the encoding to UTF-8 when saving to a CSV file.
Although in Excel (OSX) - Preferences - General - Web Options - Encoding, I specified UTF-8, it still saves the file as Western (Mac OS Roman).

I use two workarounds to get around the issue. The first is to open the file in a text editor, e.g. BBEdit, click the encoding option and select UTF-8.


Another way is to open Terminal and use the iconv command-line tool to convert the file:

iconv -t UTF8 -f MACROMAN < file.csv > file-utf8.csv
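To double-check the result you can ask the file utility which character set it detects; a quick sketch (on OS X the flag is -I, on most Linux distributions it is -i), with output along these lines:

file -I file-utf8.csv
file-utf8.csv: text/plain; charset=utf-8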

If you get a CSV file and you want to import it in Excel first, the best way I found is to create a new Workbook and import the CSV file (instead of opening directly). You can import either by using File - Import or Data - Get External Data - Import Text File. During the import you can specify the File origin and you can see which data format works for you.


After the manipulations in Excel you can save again as CSV as outlined above to make sure your resulting CSV file is UTF-8 encoded.

Finally to import the data you can use APEX, SQL Developer or SQLcl to load your CSV file into your table.
Categories: Development

PeopleTools 8.54: Performance Monitor Enhancements

David Kurtz - Tue, 2015-03-10 05:09
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
Transaction History Search Component
There are a number of changes:
  • You can specify multiple system identifiers.  For example, you might be monitoring Portal, HR and CRM.  Now you can search across all of them in a single search.
    • It has always been the case that when you drill into the Performance Monitoring Unit (PMU), by clicking on the tree icon, you would see the whole of a PMU that invoked services from different systems.
  • You can also specify multiple transaction types, rather than have to search each transaction type individually.
This is a useful enhancement when searching for a specific transaction or a small number of transactions.  However, I do not think it will save you from having to query the underlying transactions table.
PPM Archive Process
The PPM archive process (PSPM_ARCHIVE) has been significantly rewritten in PeopleTools 8.54.  In many places, it still uses this expression to identify rows to be archived or purged:
%DateTimeDiff(X.PM_MON_STRT_DTTM, %CurrentDateTimeIn) >= (PM_MAX_HIST_AGE * 24 * 60)
This expands to
ROUND((CAST(( CAST(SYSTIMESTAMP AS TIMESTAMP)) AS DATE) - CAST((X.PM_MON_STRT_DTTM) AS DATE)) * 1440, 0)
   >= (PM_MAX_HIST_AGE * 24 *  60)
which has no chance of using an index.  This used to cause performance problems when the archive process had not been run for a while and the high water marks on the history tables had built up.

The archive process now works hour by hour, and this will use the index on the timestamp column.
"... AND X.PM_MON_STRT_DTTM <= SYSDATE - PM_MAX_HIST_AGE 
and (PM_MON_STRT_DTTM) >= %Datetimein('" | DateTimeValue(&StTime) | "')
and (PM_MON_STRT_DTTM) <= %DateTimeIn('" | DateTimeValue(&EndTime) | "')"
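For illustration, a plain index on the monitor start timestamp is all these hour-by-hour predicates need to do a range scan instead of scanning up to the high water mark. The table and index names below are assumptions for the main transaction history table, not something taken from the delivered archive process:

CREATE INDEX pspmtranshist_strt_dttm ON pspmtranshist (pm_mon_strt_dttm);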
Tuxedo Queuing
Since Performance Monitor was first introduced, event 301 has never reported the length of the inbound message queues in Tuxedo.  The reported queue length was always zero.  This may have been fixed in PeopleTools 8.53, but I have only just noticed it.
Java Management Extensions (JMX) Support
There have been some additions to Performance Monitor that suggest that it will be possible to extract performance metrics using JMX.  The implication is that the Oracle Enterprise Manager Application Management Pack for PeopleSoft will be able to do this.  However, so far I haven't found any documentation.  The new component is not mentioned in the PeopleTools 8.54: Performance Monitor documentation.
  • New Table
    • PS_PTPMJMXUSER - keyed on PM_AGENTID
  • New Columns
    • PSPMSYSDEFAULTS - PTPHONYKEY.  So far I have only seen it set to 0.
    • PSPMAGENT - PM_JMX_RMI_PORT.  So far I have only seen it set to 1.
  • New Component

    Oracle Database Tools updated - check out SQLcl

    Dimitri Gielis - Mon, 2015-03-09 17:31
    Today Oracle released new versions of:

    Also Oracle REST Data Services 3 got a new EA2 version.
    You may want to check Kris Rice's blog for new features.

    I have already blogged about all of these tools before, but not yet about SQLcl.
    This is a command-line tool; I call it "SQL*Plus on steroids" (or, as Jeff calls it, SQL Developer meets SQL*Plus). It's particularly useful when you're on your server and quickly need to run some queries. Or if you're a command-line guy/girl all the time, this tool is for you.

    Here's a screenshot of how to connect to your database with SQLcl from Linux.


    Typing help will show you a list of quick shortcuts.

    For example, if you type APEX you get a list of your APEX applications:


    What I really like about SQLcl is that it formats the output so nicely. With SQL*Plus you had to set column widths, page sizes etc. Not with SQLcl, it's smart and formats it nicely.

    Next to that you can quickly output your query in JSON format by typing "set sqlformat json":
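    For example (output omitted here; the table is just an example, any query works the same way):

    SQL> set sqlformat json
    SQL> select empno, ename from emp where rownum <= 2;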


    There are many more features - a good starting point is this presentation and video by Jeff Smith.
    Categories: Development

    Collaborate 2015

    Jim Marion - Fri, 2015-03-06 18:16

    Collaborate 2015 is just a few weeks away. I will be presenting PeopleSoft PeopleTools Developer: Tips and Techniques on Tuesday, 4/14/2015 at 3:15 PM in Surf B. If you are presenting a PeopleTools technical session, then please post a comment with your session title, date, time, and venue. I look forward to seeing you next month!

    The Ideal APEX Application (When & Where You Write Code)

    Joel Kallman - Fri, 2015-03-06 01:23
    The real title of this post should be "What I Really Meant to Say Was....".

    Bob Rhubart of the Oracle Technology Network OTNArchBeat fame was kind enough to give me an opportunity to shoot a 2-minute Tech Tip.  I love Bob's goals for a 2-minute Tech Tip - has to be technical, can't be marketing fluff, and you have to deliver it in 120 seconds - no more, no less.  So I took some notes, practiced it out loud a couple times, and then I was ready.  But because I didn't want to sound like I was merely reading my notes, I ad-libbed a little and...crumbled under the clock.  I don't think I could have been more confusing and off the mark.  Oh...did I forget to mention that Bob doesn't like to do more than one take?



    So if I could distill what I wished to convey into a few easily consumable points:
    1. Use the declarative features of APEX as much as possible; don't write code.  If you have to choose between writing something in a report region with a new template, or hammering out the same result with a lovingly hand-crafted PL/SQL region, opt for the former.  If you have a choice between a declarative condition (e.g., Item Not Null) or the equivalent PL/SQL expression, choose the declarative condition.  It will be faster at execution time, it will be easier to manage and report upon, it will be easier to maintain, and it will put less pressure on your database with less parsing of PL/SQL.
    2. When you need to venture outside the declarative features of APEX and you need to write code in PL/SQL, be smart about it.  Define as much PL/SQL in statically compiled units (procedures, functions, packages) in the database and simply invoke them from your APEX application.  It will be easier to maintain (because it will simply be files that correspond to your PL/SQL procedures/functions/packages), it will be easier to version control, it will be easier to diff and promote, you can choose which PL/SQL optimization level you wish, you can natively compile, and it will be much more efficient on your database.
    3. Avoid huge sections of JavaScript and use Dynamic Actions wherever possible.  If you have the need for a lot of custom JavaScript, put it into a library and into a file, served by your Web Server (or, at a minimum, as a shared static file of your application).
    4. APEX is just a thin veneer over your database - architect your APEX applications as such.  Let the Oracle Database do the heavy lifting.  Your APEX application definition should have very little code. It should be primarily comprised of SQL queries and simple invocations of your underlying PL/SQL programs.

    My rule of thumb - when you're editing code in a text area/code editor in the Application Builder of APEX and you see the scroll bar, it's time to consider putting it into a PL/SQL package.  And of course, if you catch yourself writing the same PL/SQL logic a second time, you should also consider putting it into a PL/SQL package.
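    As a small illustration of point 2, the names below are made up, but the shape is the point: the logic lives in a package in the database, and the APEX page process body shrinks to a one-line call.

    -- hypothetical package owned by the application schema
    create or replace package order_api as
      procedure place_order(p_customer_id in number, p_product_id in number);
    end order_api;
    /

    -- the PL/SQL page process in APEX then contains only:
    -- order_api.place_order(:P1_CUSTOMER_ID, :P1_PRODUCT_ID);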

    There's more to come from the Oracle APEX team on @OTNArchBeat.

    Adding An EMC XtremIO Volume As An ASM Disk With Oracle Database 12c On Linux – It Does Not Get Any Easier Than This.

    Kevin Closson - Wed, 2015-03-04 19:07
    When Something Is Simple It Must Be Simple To Prove

    Provisioning high-performance storage has always been a chore. Care and concern over spindle count, RAID type, RAID attributes, number of controller arms involved and a long list of other complexities have burdened storage administrators. Some of these troubles were mitigated by the advent of Automatic Storage Management–but not entirely.

    Wouldn’t it be nice if the complexity of storage provisioning could be boiled down to but a single factor? Wouldn’t it be nice if that single factor was, simply, capacity? With EMC XtremIO the only factor storage administrators need to bear in mind when provisioning storage is, indeed, capacity.

    With EMC XtremIO a storage administrator hears there is a need for, say, one terabyte of storage and that is the entirety of information needed. No more questions about the I/O pattern (e.g., large sequential writes ala redo logging, etc). The Database Administrator simply asks for capacity with a very short sentence and the Storage Administrator clicks 3 buttons in the XtremIO GUI and that’s all there is to it.

    Pictures Speak Thousands of Words

    I too enjoy the simplicity of XtremIO in my engineering work. Just the other day I ran short on space in a tablespace while testing Oracle Database 12c intra-node parallel query. I was studying a two-node Real Application Clusters setup attached to an EMC XtremIO array via 8 paths of 8GFC Fibre Channel. The task at hand was a single parallel CTAS (Create Table As Select) but the command failed because my ASM disk group ran out of space when Oracle Database tried to extend the BIGFILE tablespace.

    Since I had to add some space I thought I’d take a few screen shots to show readers of this blog how simple it is to perform the full cycle of tasks required to add space to an active cluster with ASM in an XtremIO environment.

    The following screen shot shows the error I was reacting to:

    2015-02-23-pic1

    Since the following example shows host configuration steps please note the Linux distribution (Oracle Linux) and kernel version (UEK) I was using:

    linux-release

    The following screenshot shows the XtremIO GUI configuration tab. I selected “Add” and then typed a name and size (1TB) of the volume I wanted to create:

    NOTE: Right click the embedded images for greater clarity

    2015-02-23-pic2

    The following screenshot shows how I then selected, from the right-hand column, the initiators (think hosts) that I wanted to see the new volume:

    2015-02-23-pic4

    After I clicked “apply” I could see my new volume in my “12C” folder. With the folder construct I can do things like create zero-overhead, immediate, writable snapshots with a single mouse click. As the following screenshot shows, I highlighted “data5” so I could get details about the volume in advance of performing tasks on the host. The properties tab shows me the only information I need to proceed–the NAA Identifier. Once I had the NAA Identifier I moved on to the task of discovering the new volume on the hosts.

     

    2015-02-23-pic5

    Host Discovery

    Host discovery consists of three simple steps:

    1. Multipath discovery
    2. Updating the udev rules file with a text editor
    3. Updating udev state with udevadm commands
    Multipath Discovery

    On both nodes of the cluster I executed the following series of commands. This series of commands generates a lot of terminal output so I won’t show that in this blog post.

    # multipath -F ;service multipathd restart ; rescan-scsi-bus.sh -r

    After executing the multipath related commands I was able to see the new volume (0002a) on both nodes of the cluster. Notice how the volume has different multipath names (mpathab, mpathai) on the hosts. This is not an issue since the volumes will be controlled by udev:

    2015-02-23-pic7

    Updating Udev Rules File and Udev State

    After verifying the volumes were visible under DM-MPIO I moved on to the udev actions. The following screenshot shows how I added an ACTION line in the udev rules file and copied it to the other RAC host and then executed the udev update commands on both RAC hosts:

    2015-02-23-pic9
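    The exact rule is only visible in the screenshot, but a typical rule of this kind keys on the volume's NAA identifier noted earlier and creates a consistently named, correctly owned device on every node; a sketch (the identifier and names are made up, not the author's):

    # /etc/udev/rules.d/99-oracle-asmdevices.rules (hypothetical example)
    KERNEL=="dm-*", ENV{DM_UUID}=="mpath-3514f0c5xxxxxxxxx", SYMLINK+="asmdisk6", OWNER="oracle", GROUP="dba", MODE="0660"

    The udev update commands referred to above are typically:

    # udevadm control --reload-rules
    # udevadm trigger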

    I then could see “/dev/asmdisk6” on both RAC hosts:

    2015-02-23-pic10

    Adding The New XtremIO Volume As An ASM Disk

    The next task was to use ASMCA (ASM Configuration Assistant) to add the XtremIO volume to the ASM disk group called “DATA”:

    2015-02-23-pic8

    As the following screenshot shows the volume is visible as /dev/asmdisk6:

    2015-02-23-pic11

    I selected asmdisk6 and the task was complete:

    2015-02-23-pic12
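    For reference, the equivalent step without ASMCA is a single statement from a SQL*Plus session connected to the ASM instance, using the udev alias created earlier:

    ALTER DISKGROUP DATA ADD DISK '/dev/asmdisk6';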

    I then saw evidence of ASM rebalancing in the XtremIO GUI Performance tab:

    2015-02-23-pic15

     

    Summary

    With EMC XtremIO you provision capacity and that allows you to speak in very short sentences with the application owners that share space in the array.

    It doesn’t get any easier than this.


    Filed under: oracle

    Undocumented Application Engine Parameter: EnableAEMonitoring

    David Kurtz - Wed, 2015-03-04 13:57
    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

    Oracle Support Document 1640355.1: E-AE: Performance Degradation When Using Do Loop Action to Process Large Amount of Data on PT 8.52 & PT 8.53 describes a performance problem in Application Engine where a Do Loop is small but frequently executed.  Application Engine calls DBMS_APPLICATION_INFO to set MODULE and ACTION on v$session each time it goes round the loop.

    However, Oracle Bug 10130415 (Latch contention on "resmgr group change latch") acknowledges a problem in some versions of Oracle, but it is resolved in 11.2.0.3 and 12c.
    Updated 8 May 2015: Oracle support do not know of any explicit link to this database bug.

    A new parameter was introduced in PT8.54, and backported to PT8.52.23 and PT8.53.13, in the Database Options section of the process scheduler configuration file (psprcs.cfg).  EnableAEMonitoring controls whether Application Engine calls DBMS_APPLICATION_INFO.  The default value for this parameter is 0.

    The PeopleSoft support document does not reference the database bug report, but it seems reasonable to infer that the new parameter was introduced to work around the database bug.

    This new parameter is not described in PeopleBooks.  It does appear in the delivered configuration files on at least 8.53.13.  However, it is not present in the delivered 8.54.05 configuration file (bug 21055140).  Therefore, by default, Application Engine will not set MODULE and ACTION unless you add the parameter to the configuration file.

    [Database Options]
    ;=========================================================================
    ; Database-specific configuration options
    ;=========================================================================

    ;DMK - added to enable DBMS_APPLICATION_INFO instrumentation
    EnableAEMonitoring=1
    The behaviour is then as it has been since 8.52, as described in PeopleTools 8.52 Application Engine sets MODULE and ACTION.
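    With the parameter enabled, the effect is easy to see from the database; for example (the program filter below is an assumption about how your Application Engine sessions identify themselves):

    SELECT module, action, program FROM v$session WHERE program LIKE 'PSAE%';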
    My Recommendation
    I certainly think that you should add this parameter to all process scheduler configuration files at the relevant PeopleTools versions.  Unless you specifically have the problem described in the support note, I recommend that you also set the parameter to 1 as shown above.  I have never seen the problem in affected database versions, and it is fixed in the terminal release of 11g.

    Without setting the parameter, you will lose the ability to relate Enterprise Manager and ASH data to specific Application Engine steps.  If you need to make a code change to achieve a performance improvement, you will have to go through the manual process of finding the SQL in an Application Engine trace.

    Oracle APEX_WEB_SERVICE REST API call

    Kubilay Çilkara - Wed, 2015-03-04 12:15
    In this post I will try to show you how I used Oracle APEX and the APEX_WEB_SERVICE PL/SQL package to quickly send a request to a public Internet API and how I handled the response. The code below was written during a 'Hackday' and hasn't been extensively tested.

    My use case is integrating Oracle Apex with the public Mendeley REST API for Mendeley Catalog Search.

    The idea was to build an application in Oracle APEX to query the Mendeley REST API Catalog with a keyword. The Mendeley REST API gives a JSON response, so I used PL/JSON to parse it.  I hear JSON is going to be a native data type in Oracle 12c. My Oracle APEX host is running Oracle 11g, so I had to use PL/JSON for ease.

    To cut it short, here is what the Mendeley Catalog Search application on Oracle APEX looks like. (Click the image to go to the app or visit http://apex2.enciva.co.uk/apex/f?p=864:2)






    To integrate with the Mendeley REST API from Oracle APEX, I used one PL/SQL function and one procedure.

    I used the function to obtain the Mendeley REST API Client Credentials Authorisation flow token, and the procedure to make the API request to Mendeley Catalog Search and to handle the response.

    Here is the MENDELEY_CALL PL/SQL function I created:

    This function returns the Client Credentials Authorisation Flow token from the Mendeley REST API:

    create or replace function mendeley_call (p_id in varchar2)
    return varchar2
    is
      v_token         varchar2(1000);
      token           varchar2(1000);
      jtoken          json;
      v_grant_type    varchar2(400) := 'client_credentials';
      v_client_id     varchar2(500) := p_id;
      v_client_secret varchar2(500) := '<put_your_mendeley_client_secret_here>';
      v_scope         varchar2(300) := 'all';
    begin

      /*---------- Set request headers -----------------------------------*/
      apex_web_service.g_request_headers(1).name  := 'Content-Type';
      apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded; charset=utf-8';
      /*-------------------------------------------------------------------*/

      -- Request the OAuth token
      token := apex_web_service.make_rest_request
        (
          p_url         => 'https://api.mendeley.com/oauth/token',
          p_http_method => 'POST',
          p_parm_name   => apex_util.string_to_table('grant_type:client_id:client_secret:scope'),
          p_parm_value  => apex_util.string_to_table(v_grant_type||':'||v_client_id||':'||v_client_secret||':'||v_scope),
          p_wallet_path => 'file:/home/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle',
          p_wallet_pwd  => '<put_your_oracle_wallet_password_here>'
        );
      -- debug
      -- dbms_output.put_line(token);

      -- Parse the JSON response and extract the access token
      jtoken  := json(token);
      v_token := json_ext.get_string(jtoken, 'access_token');
      -- debug
      -- dbms_output.put_line(v_token);
      return v_token;
    exception
      when others then
        raise_application_error(-20001, 'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
    end;



    Here is the anonymous procedure which I put into a PL/SQL region on the Oracle Apex page:

    This procedure incorporates the function above, makes the request and handles the response from the Mendeley REST API.

    Note how the procedure calls the function MENDELEY_CALL (above) to load the variable v_token. 

    DECLARE
      v_token           VARCHAR2(599) := mendeley_call('<put_your_mendeley_client_id_here>');
      v_search          VARCHAR2(500);
      mendeley_document NCLOB;
      v_status          VARCHAR2(100);
      obj               json_list;
      v_id              VARCHAR2(100);
      v_title           NVARCHAR2(1000);
      v_abstract        NCLOB; -- varchar2(32000);
      v_link            VARCHAR2(1000);
      v_source          VARCHAR2(500);
      v_type            VARCHAR2(100);
      v_pct_hit         VARCHAR2(10);
      v_rows            NUMBER(10);
      v_batch_id        NUMBER(10);
    BEGIN
      -- Oracle Wallet
      utl_http.set_wallet('file:/home/app/oracle/product/11.2.0/dbhome_1/owm/wallets/oracle',
                          'my_secret_password');
      -- Set the body character set to UTF-8; the following line is necessary if you
      -- need to use languages other than Latin and you use the APEX_WEB_SERVICE package
      utl_http.set_body_charset('UTF-8');
      -- Build the request headers, including the Authorisation header with the token
      apex_web_service.g_request_headers(1).name  := 'Content-Type';
      apex_web_service.g_request_headers(1).value := 'application/jsonrequest';
      apex_web_service.g_request_headers(2).name  := 'Authorization';
      apex_web_service.g_request_headers(2).value := 'Bearer '||v_token;

      -- Make the request and load the response into a CLOB
      mendeley_document := apex_web_service.make_rest_request
        (
          p_url         => 'https://api.mendeley.com:443/search/catalog',
          p_http_method => 'GET',
          p_parm_name   => apex_util.string_to_table('title:limit'),
          p_parm_value  => apex_util.string_to_table('Mendeley:10')
        );
      -- Load the response into a JSON_LIST PL/JSON object
      obj := json_list(mendeley_document);
      -- Traverse the JSON_LIST and extract the elements you need
      FOR i IN 1..obj.count
      LOOP
        v_id       := json_ext.get_string(json(obj.get(i)),'id');
        v_title    := json_ext.get_string(json(obj.get(i)),'title');
        v_abstract := json_ext.get_string(json(obj.get(i)),'abstract');
        v_link     := json_ext.get_string(json(obj.get(i)),'link');
        v_source   := json_ext.get_string(json(obj.get(i)),'source');
        v_type     := json_ext.get_string(json(obj.get(i)),'type');
        -- write extracted data
        dbms_output.put_line(v_title||' ==> '||v_abstract);
      END LOOP;
    END;

    This shows how easy it is, in this case using one function and one procedure, to make a REST API request to an external web service from Oracle APEX.
    Categories: DBA Blogs

    OAM 11GR2PS2 in a day

    Frank van Bortel - Wed, 2015-03-04 10:40
    Get Access Manager 11gRel2 PS2 installed in a day
    Goal is to get OAM installed and configured in a day - with full control; that is, without using the Installation Wizard.
    Virtual Box
    Start with Virtual Box. Allow plenty of memory (10GB) and disk (120GB). Attach V33411-01.iso (Oracle Server V6.3) to the CD, and boot. Do a Minimal (not Basic server!) install and configure the network with a static IP.

    Little League, Big Data

    Scott Spendolini - Tue, 2015-03-03 13:36
    Last week, I participated in my first Little League draft for my son's baseball team.  This was new territory, as up until now, play has been non-competitive.  This year we will actually have to keep score, and there will be winners and losers.

    In preparation for the draft, we had tryouts a few weeks ago where we evaluated the kids on a number of different criteria.  Never have I seen so many scared 7 and 8 year olds march through the cages as dozens of coaches with clipboards watched and recorded their every move.  I camped out and watched them pitch, as from what many veteran coaches told me, the key to keeping the game moving along is the pitcher.

    In preparation for the draft, we were sent a couple of key spreadsheets.  The first one had an average rating of all of the kids' tryout assessments, done by the board members.  The second one contained coaches' evaluations for some of the players from past seasons.  Lots and lots of nothing more than raw data.

    Time to fire up APEX.  I created a workspace on my laptop, as I was not sure if we would have WiFi at the draft.  From there, I imported both spreadsheets into tables, and got to work on creating a common key.  Luckily, the combination of first and last name produced no duplicates, so it was pretty easy to link the two tables.  Next, I created a simple IR based on the EVALS table - which was the master.  This report showed all of the tryout scores, and also ranked each player based on the total score.

    Upon editing a row in EVALS, I had a second report that showed a summary of the coach's evaluation from prior seasons.  I could also make edits to the EVALS table, such as identify players that I was interested in, players that were already drafted, and any other comments that I wanted to track.

    After about 20 minutes of reviewing the data, I noticed something.  I was using data collected while the player was under a lot of stress.  The data set was also small, as each player only got 5 pitches, 5 catches, 5 throws, etc.  The better indicator as to a player's talents was in the coach's evaluations, as that represents an entire season of interaction with the player, not just a 3-4 minute period.

    Based on this, I was quickly able to change my IR on the first page to also include a summary of the coach's evaluations alongside the tryout evaluations.  I sorted my report based on that, and got a very different order.  This was the order that I was going to go with for my picks.

    Once the draft started, it was very easy to mark each player as drafted, so that any drafted player would no longer show up in the report.  It was also trivial to toggle the "must draft" column on and off, ensuring that if there were any younger players that I wanted, I could get them in the early rounds before we had to only draft older players.

    Each time it was my pick, I already knew which player I was going to draft.  Meanwhile, the other coaches shuffled stacks of marked-up papers and attempted to navigate multiple spreadsheets when it was their turn.  Even the coordinator commented on how I was always ready and kept things moving along.

    Unless you're some sort of youth athletics coach that does a draft, this application will likely do you little good.  But the concept can go a long way.  In almost any role in any organization, you likely have data for something scattered across a few different sources or spreadsheets.  This data, when isolated, only paints a blurry part of the whole picture.  But when combined and analyzed, the data can start to tell a better story, as was the case in my draft.

    The technical skills required to build this application were also quite minimal.  The bulk of what I used was built-in functionality of the Interactive Report in APEX.  Merging the data and linking the two tables was really the only true technical portion of this, and that's even something that can be done by a novice.

    So the next time you have a stack of data that may be somehow related, resist the temptation to use old methods when trying to analyze it.  Get it into the database, merge it as best you can, and let APEX do the rest.

    Oracle Data Provider for .NET now on NuGet

    Christian Shay - Mon, 2015-03-02 08:30

    ODP.NET, Managed Driver is now on NuGet, meaning that you can add ODP.NET to your Visual Studio project with just a few clicks in the NuGet Package Manager. We've also published an Oracle By Example walkthrough to take you step by step through the process of using NuGet and ODP.NET.

    Here we are in the NuGet Package Manager:




    When searching for us in the package manager, make sure to get the official package - look for the word "Official" in the title.



    There are actually two NuGet packages available:

    ODP.NET, Managed Driver - Official
    NuGet id: Oracle.ManagedDataAccess

    This adds Oracle.ManagedDataAccess.dll to your project and also makes needed configuration entries in your app.config or web.config.


    ODP.NET, Managed Entity Framework Driver - Official
    NuGet id: Oracle.ManagedDataAccess.EntityFramework

    This adds Oracle.ManagedDataAccess.EntityFramework.dll as well as config file configuration. It also has a dependency on the ODP.NET package above and will pull it into your project, as well as EF 6 if needed.
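    If you prefer the Package Manager Console to the GUI, the same packages can be added with the standard NuGet command, using the package ids listed above:

    PM> Install-Package Oracle.ManagedDataAccess
    PM> Install-Package Oracle.ManagedDataAccess.EntityFramework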

    If you want to host this package on your local intranet, it is also available for download on the OTN .NET download page.

    Please note that if you want to use Visual Studio integration features, such as browsing your Oracle Schema in Server Explorer, or using Entity Designer or Table Adapter Configuration wizard, you should still install Oracle Developer Tools for Visual Studio, as a NuGet package  does not provide any of the Visual Studio integration components needed to do design time work.

    The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes

    Vikram Das - Sun, 2015-03-01 14:53
    I got this error while upgrading an R12.1.3 instance to R12.2.4, after I had completed the AD.C.Delta 5 patches with the November 2014 bundle patches for AD.C and was in the process of applying TXK.C.Delta 5 with the November 2014 bundle patches for TXK.C:

    Validation successful. All expected nodes are listed in ADOP_VALID_NODES table.
    [START 2015/03/01 04:53:16] Check if services are down
            [INFO] Run admin server is not down
         [WARNING]  Hotpatch mode should only be used when directed by the patch readme.
      [EVENT]     [START 2015/03/01 04:53:17] Performing database sanity checks
        [ERROR]     The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes: .
        Log file: /erppgzb1/erpapp/fs_ne/EBSapps/log/adop/adop_20150301_045249.log


    [STATEMENT] Please run adopscanlog utility, using the command

    "adopscanlog -latest=yes"

    to get the list of the log files along with snippet of the error message corresponding to each log file.


    adop exiting with status = 1 (Fail)

    I was really surprised as I had already run EBS technology codelevel checker (patch 17537119) script checkDBpatch.sh on racnode1.

    To investigate, I checked inside checkDBpatch.sh and found that it creates a table called TXK_TCC_RESULTS.

    SQL> desc txk_tcc_results
     Name                                      Null?    Type
     ----------------------------------------- -------- ----------------------------
     TCC_VERSION                               NOT NULL VARCHAR2(20)
     BUGFIX_XML_VERSION                        NOT NULL VARCHAR2(20)
     NODE_NAME                                 NOT NULL VARCHAR2(100)
     DATABASE_NAME                             NOT NULL VARCHAR2(64)
     COMPONENT_NAME                            NOT NULL VARCHAR2(10)
     COMPONENT_VERSION                         NOT NULL VARCHAR2(20)
     COMPONENT_HOME                                     VARCHAR2(600)
     CHECK_DATE                                         DATE
     CHECK_RESULT                              NOT NULL VARCHAR2(10)
     CHECK_MESSAGE                                      VARCHAR2(4000)

    SQL> select node_name from txk_tcc_results;

    NODE_NAME
    --------------------------------------------------------------------------------
    RACNODE1

    I ran checkDBpatch.sh again, but the patch failed again with the previous error:

       [ERROR]     The EBS Technology Codelevel Checker (available as Patch 17537119) needs to be run on the following nodes: .

    It was already 5 AM on Saturday, after working through the night, so I thought it was better to sleep and tackle this on Sunday.  On Sunday morning, after a late breakfast, I looked at the problem again.  This time I realized that the error was complaining about racnode1 (in lower case), while the txk_tcc_results table had RACNODE1 (in upper case).  To test my hunch, I immediately updated the value:

    update txk_tcc_results
    set node_name='racnode1' where node_name='RACNODE1';

    commit;

    I restarted the patch, and it went through.  The patch was indeed failing because it was looking for a lower-case value.  I will probably log an SR with Oracle, so that they change their code to make the node_name check case-insensitive.

    Further, I was curious why node_name was stored in all caps in fnd_nodes and txk_tcc_results.  The file /etc/hosts had it in lower case.  I tried the hostname command at the Linux prompt:

    $ hostname
    RACNODE1

    That was something unusual, as in our environment, hostname always returns the value in lowercase.  So I further investigated.
    [root@RACNODE1 ~]# sysctl kernel.hostname
    kernel.hostname = RACNODE1

    So I changed it

    [root@RACNODE1 ~]# sysctl kernel.hostname=racnode1
    kernel.hostname = racnode1
    [root@RACNODE1 ~]# sysctl kernel.hostname
    kernel.hostname = racnode1
    [root@RACNODE1 ~]#
    [root@RACNODE1 ~]# hostname
    racnode1

    Logged in again to see if root prompt changed:

    [root@racnode1 ~]#

    I also checked
    [root@tsgld5811 ~]# cat /etc/sysconfig/network
    NETWORKING=yes
    NETWORKING_IPV6=no
    NOZEROCONF=yes
    HOSTNAME=RACNODE1

    Changed it here also:
    [root@tsgld5811 ~]# cat /etc/sysconfig/network
    NETWORKING=yes
    NETWORKING_IPV6=no
    NOZEROCONF=yes
    HOSTNAME=racnode1

    I also changed it on racnode2.
    Categories: APPS Blogs

    What happened to “when the application is fast enough to meet users’ requirements?”

    Cary Millsap - Fri, 2015-02-27 15:00
    On January 5, I received an email called “Video” from my friend and former employee Guðmundur Jósepsson from Iceland. His friends call him Gummi (rhymes with “who-me”). Gummi is the guy whose name is set in the ridiculous monospace font on page xxiv of Optimizing Oracle Performance, apparently because O’Reilly’s Linotype Birka font didn’t have the letter eth (ð) in it. Gummi once modestly teased me that this is what he is best known for. But I digress...

    His email looked like this:


    It’s a screen shot of frame 3:12 from my November 2014 video called “Why you need a profiler for Oracle.” At frame 3:12, I am answering the question of how you can know when you’re finished optimizing a given application function. Gummi’s question is, «Oi! What happened to “when the application is fast enough to meet users’ requirements?”»

    Gummi noticed (the good ones will do that) that the video says something different than the thing he had heard me say for years. It’s a fair question. Why, in the video, have I said this new thing? It was not an accident.
    When are you finished optimizing?
    The question in focus is, “When are you finished optimizing?” Since 2003, I have actually used three different answers:
    When are you finished optimizing?
    A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
      Source: Optimizing Oracle Performance (2003) pages 302–304.
    B. When the application is fast enough to meet your users’ requirements.
      Source: I have taught this in various courses, conferences, and consulting calls since 1999 or so.
    C. When there are no unnecessary calls, and the calls that remain run at hardware speed.
      Source: “Why you need a profiler for Oracle” (2014) frames 2:51–3:20.
    My motive behind answers A and B was the idea that optimizing beyond what your business needs can be wasteful. I created these answers to deter people from misdirecting time and money toward perfecting something when those resources might be better invested improving something else. This idea was important, and it still is.

    So, then, where did C come from? I’ll begin with a picture. The following figure allows you to plot the response time for a single application function, whatever “given function” you’re looking at. You could draw a similar figure for every application function on your system (although I wouldn’t suggest it).


    Somewhere on this response time axis for your given function is the function’s actual response time. I haven’t marked that response time’s location specifically, but I know it’s in the blue zone, because at the bottom of the blue zone is the special response time RT. This value RT is the function’s top speed on the hardware you own today. Your function can’t go faster than this without upgrading something.

    It so happens that this top speed is the speed at which your function will run if and only if (i) it contains no unnecessary calls and (ii) the calls that remain run at hardware speed. ...Which, of course, is the idea behind this new answer C.
    Where, exactly, is your “requirement”?
    Answer B (“When the application is fast enough to meet your users’ requirements”) requires that you know the users’ response time requirement for your function, so, next, let’s locate that value on our response time axis.

    This is where the trouble begins. Most DBAs don’t know what their users’ response time requirements really are. Don’t despair, though; most users don’t either.

    At banks, airlines, hospitals, telcos, and nuclear plants, you need strict service level agreements, so those businesses invest in quantifying them. But realize: quantifying all your functions’ response time requirements isn’t about a bunch of users sitting in a room arguing over which subjective speed limits sound the best. It’s about knowing your technological speed limits and understanding how close to those values your business needs to pay to be. It’s an expensive process. At some companies, it’s worth the effort; at most companies, it’s just not.

    How about using, “well, nobody complains about it,” as all the evidence you need that a given function is meeting your users’ requirement? It’s how a lot of people do it. You might get away with doing it this way if your systems weren’t growing. But systems do grow. More data, more users, more application functions: these are all forms of growth, and you can probably measure every one of them happening where you’re sitting right now. All these forms of growth put you on a collision course with failing to meet your users’ response time requirements, whether you and your users know exactly what they are, or not.

    In any event, if you don’t know exactly what your users’ response time requirements are, then you won’t be able to use “meets your users’ requirement” as your finish line that tells you when to stop optimizing. This very practical problem is the demise of answer B for most people.
    Knowing your top speed
    Even if you do know exactly what your users’ requirements are, it’s not enough. You need to know something more.

    Imagine for a minute that you do know your users’ response time requirement for a given function, and let’s say that it’s this: “95% of executions of this function must complete within 5 seconds.” Now imagine that this morning when you started looking at the function, it would typically run for 10 seconds in your Oracle SQL Developer worksheet, but now after spending an hour or so with it, you have it down to where it runs pretty much every time in just 4 seconds. So, you’ve eliminated 60% of the function’s response time. That’s a pretty good day’s work, right? The question is, are you done? Or do you keep going?

    Here is the reason that answer C is so important. You cannot responsibly answer whether you’re done without knowing that function’s top speed. Even if you know how fast people want it to run, you can’t know whether you’re finished without knowing how fast it can run.

    Why? Imagine that 85% of those 4 seconds are consumed by Oracle enqueue, or latch, or log file sync calls, or by hundreds of parse calls, or 3,214 network round-trips to return 3,214 rows. If any of these things is the case, then no, you’re absolutely not done yet. If you were to allow some ridiculous code path like that to survive on a production system, you’d be diminishing the whole system’s effectiveness for everybody (even people who are running functions other than the one you’re fixing).

    Now, sure, if there’s something else on the system that has a higher priority than finishing the fix on this function, then you should jump to it. But you should at least leave this function on your to-do list. Your analysis of the higher priority function might even reveal that this function’s inefficiencies are causing the higher-priority functions problems. Such can be the nature of inefficient code under conditions of high load.

    On the other hand, if your function is running in 4 seconds and (i) its profile shows no unnecessary calls, and (ii) the calls that remain are running at hardware speeds, then you’ve reached a milestone:
    1. if your code meets your users’ requirement, then you’re done;
    2. otherwise, either you’ll have to reimagine how to implement the function, or you’ll have to upgrade your hardware (or both).
    There’s that “users’ requirement” thing again. You see why it has to be there, right?

    Well, here’s what most people do. They get their functions’ response times reasonably close to their top speeds (which, with good people, isn’t usually as expensive as it sounds), and then they worry about requirements only if those requirements are so important that it’s worth a project to quantify them. A requirement is usually considered really important if it’s close to your top speed or if it’s really expensive when you violate a service level requirement.

    This strategy works reasonably well.

    It is interesting to note here that knowing a function’s top speed is actually more important than knowing your users’ requirements for that function. A lot of companies can work just fine not knowing their users’ requirements, but without knowing your top speeds, you really are in the dark. A second observation that I find particularly amusing is this: not only is your top speed more important to know, your top speed is actually easier to compute than your users’ requirement (…if you have a profiler, which was my point in the video).

    Better and easier is a good combination.
    Tomorrow is important, too
    When are you finished optimizing?
    A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
    B. When the application is fast enough to meet your users’ requirements.
    C. When there are no unnecessary calls, and the calls that remain run at hardware speed.
    Answer A is still a pretty strong answer. Notice that it actually maps closely to answer C. Answer C’s prescription for “no unnecessary calls” yields answer A’s goal of call reduction, and answer C’s prescription for “calls that remain run at hardware speed” yields answer A’s goal of latency reduction. So, in a way, C is a more action-oriented version of A, but A goes further to combat the perfectionism trap with its emphasis on the cost of action versus the cost of inaction.

    One thing I’ve grown to dislike about answer A, though, is its emphasis on today in “…exceeds the cost of the performance you’re getting today.” After years of experience with the question of when optimization is complete, I think that answer A under-emphasizes the importance of tomorrow. Unplanned tomorrows can quickly become ugly todays, and as important as tomorrow is to businesses and the people who run them, it’s even more important to another community: database application developers.
    Subjective goals are treacherous for developers
    Many developers have no way to test, today, the true production response time behavior of their code, which they won’t learn until tomorrow. ...And perhaps not until some remote, distant tomorrow.

    Imagine you’re a developer using 100-row tables on your desktop to test code that will access 100,000,000,000-row tables on your production server. Or maybe you’re testing your code’s performance only in isolation from other workload. Both of these are problems; they’re procedural mistakes, but they are everyday real-life for many developers. When this is how you develop, telling you that “your users’ response time requirement is n seconds” accidentally implies that you are finished optimizing when your query finishes in less than n seconds on your no-load system of 100-row test tables.

    If you are a developer writing high-risk code—and any code that will touch huge database segments in production is high-risk code—then of course you must aim for the “no unnecessary calls” part of the top speed target. And you must aim for the “and the calls that remain run at hardware speed” part, too, but you won’t be able to measure your progress against that goal until you have access to full data volumes and full user workloads.

    Notice that to do both of these things, you must have access to full data volumes and full user workloads in your development environment. To build high-performance applications, you must do full data volume testing and full user workload testing in each of your functional development iterations.

    This is where agile development methods yield a huge advantage: agile methods provide a project structure that encourages full performance testing for each new product function as it is developed. Contrast this with the terrible project planning approach of putting all your performance testing at the end of your project, when it’s too late to actually fix anything (if there’s even enough budget left over by then to do any testing at all). If you want a high-performance application with great performance diagnostics, then performance instrumentation should be an important part of your feedback for each development iteration of each new function you create.
    My answer
    So, when are you finished optimizing?
    A. When the cost of call reduction and latency reduction exceeds the cost of the performance you’re getting today.
    B. When the application is fast enough to meet your users’ requirements.
    C. When there are no unnecessary calls and the calls that remain run at hardware speed.
    There is some merit in all three answers, but as Dave Ensor taught me inside Oracle many years ago, the correct answer is C. Answer A specifically restricts your scope of concern to today, which is especially dangerous for developers. Answer B permits you to promote horrifically bad code, unhindered, into production, where it can hurt the performance of every function on the system. Answers A and B both presume that you know information that you probably don’t know and that you may not need to know. Answer C is my favorite answer because it tells you exactly when you’re done, using units you can measure and that you should be measuring.

    Answer C is usually a tougher standard than answer A or B, and when it’s not, it is the best possible standard you can meet without upgrading or redesigning something. In light of this “tougher standard” kind of talk, it is still important to understand that what is optimal from a software engineering perspective is not always optimal from a business perspective. The term optimized must ultimately be judged within the constraints of what the business chooses to pay for. In the spirit of answer A, you can still make the decision not to optimize all your code to the last picosecond of its potential. How perfect you make your code should be a business decision. That decision should be informed by facts, and these facts should include knowledge of your code’s top speed.

    Thank you, Guðmundur Jósepsson, of Iceland, for your question. Thank you for waiting patiently for several weeks while I struggled putting these thoughts into words.

    The German ADF-Community-Book was released today

    Gerd Volberg - Thu, 2015-02-26 06:00
    The German ADF (and Forms) community has released its first book.

    The "ADF book" is a compilation of German lectures, articles and workshop tutorials in Oracle ADF (some of the contributions are in English!). The authors are members of the German ADF community. The time frame covers the years 2010 to 2014. The project "ADF book" was implemented by a team of staff from partner companies and Oracle.

    40 authors wrote 70 contributions on 1400 pages in this book.

    It's a 110 MB PDF and can be downloaded for free from the ADF-Community-Page.


    My part in the book is titled "Modernizing Oracle Forms", in which I demonstrate how easy it is to enhance Oracle Forms with modern frameworks like LAF (the Look and Feel framework from Francois Degrelle) and how to integrate ADF into Oracle Forms via OraFormsFaces (a framework from Wilfred van der Deijl).

    Have fun with the book
    Gerd

    Addendum: A-Team Chronicles

    Jean-Philippe Pinte - Thu, 2015-02-26 02:31
    The A-Team Chronicles site aggregates the content produced by members of the A-Team: best practices, tips, advice, etc.

    PeopleTools 8.54: Oracle Resource Manager

    David Kurtz - Wed, 2015-02-25 04:11
    This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.

    Oracle Resource Manager is about prioritising one database session over another, or about restricting the overhead of one session for the good of the other database users.  A resource plan is a set of rules that are applied to some or all database sessions for some or all of the time.  Those rules may be simple or complex, but they need to reflect the business's view of what is most important.  Either way, Oracle Resource Manager requires careful design.
    I am not going to attempt to explain further here how the Oracle feature works; I want to concentrate on how PeopleSoft interfaces with it.
    PeopleTools Feature

    This feature effectively maps Oracle resource plans to PeopleSoft executables.  The resource plan will then manage the database resource consumption of that PeopleSoft process.  There is a new component that maps PeopleSoft resource names to Oracle consumer groups.  For this example I have chosen some of the consumer groups delivered in the MIXED_WORKLOAD_PLAN that ships with Oracle 11g.

    • The Oracle Consumer Group field is validated against the names of the Oracle consumer groups defined in the database, using the view DBA_RSRC_PLAN_DIRECTIVES:
    SELECT DISTINCT group_or_subplan, type
    FROM dba_rsrc_plan_directives
    WHERE plan = 'MIXED_WORKLOAD_PLAN'
    ORDER BY 2 DESC,1
    /

    GROUP_OR_SUBPLAN               TYPE
    ------------------------------ --------------
    ORA$AUTOTASK_SUB_PLAN          PLAN
    BATCH_GROUP                    CONSUMER_GROUP
    INTERACTIVE_GROUP              CONSUMER_GROUP
    ORA$DIAGNOSTICS                CONSUMER_GROUP
    OTHER_GROUPS                   CONSUMER_GROUP
    SYS_GROUP                      CONSUMER_GROUP
    • The entries in the component shown above are stored in PS_PT_ORA_RESOURCE.
    • PS_PTEXEC2RESOURCE is another new table that maps PeopleSoft executable names to resource names.
    If you use Oracle SQL Trace on a PeopleSoft process (in this case PSAPPSRV), you find the following query.  It returns the name of the Oracle consumer group that the session should use.
    SELECT PT_ORA_CONSUMR_GRP 
    FROM PS_PT_ORA_RESOURCE
    , PS_PTEXEC2RESOURCE
    WHERE PT_EXECUTABLE_NAME = 'PSAPPSRV'
    AND PT_ORA_CONSUMR_GRP <> ' '
    AND PS_PT_ORA_RESOURCE.PT_RESOURCE_NAME = PS_PTEXEC2RESOURCE.PT_RESOURCE_NAME

    PT_ORA_CONSUMR_GRP
    ------------------------
    INTERACTIVE_GROUP

    And then the PeopleSoft process explicitly switches its group, thus:
    DECLARE 
    old_group varchar2(30);
    BEGIN
    DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
    END;
    Unfortunately, the consequence of this explicit switch is that it overrides any consumer group mapping rules, as I demonstrate below.
    Setup

    The PeopleSoft owner ID needs some additional privileges if it is to be able to switch to the consumer groups.
    BEGIN
    DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SYSTEM_PRIVILEGE
    ('SYSADM', 'ADMINISTER_RESOURCE_MANAGER',FALSE);
    END;
    /

    BEGIN
    FOR i IN(
    SELECT DISTINCT r.pt_ora_consumr_grp
    FROM sysadm.ps_pt_ora_resource r
    WHERE r.pt_ora_consumr_grp != ' '
    AND r.pt_ora_consumr_grp != 'OTHER_GROUPS'
    ) LOOP
    dbms_output.put_line('Grant '||i.pt_ora_consumr_grp);
    DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP
    (GRANTEE_NAME => 'SYSADM'
    ,CONSUMER_GROUP => i.pt_ora_consumr_grp
    ,GRANT_OPTION => FALSE);
    END LOOP;
    END;
    /

    The RESOURCE_MANAGER_PLAN initialisation parameter should be set to the name of the plan which contains the directives.
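    For example (a sketch only; the plan name and scope depend on your environment):

    ALTER SYSTEM SET resource_manager_plan = 'MIXED_WORKLOAD_PLAN' SCOPE=BOTH;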
    NAME                                 TYPE        VALUE
    ------------------------------------ ----------- ----------------------
    resource_manager_plan                string      MIXED_WORKLOAD_PLAN

    I question one or two of the mappings on PS_PTEXEC2RESOURCE.
    SELECT * FROM PS_PTEXEC2RESOURCE …

    PT_EXECUTABLE_NAME               PT_RESOURCE_NAME
    -------------------------------- -----------------
    PSAPPSRV                         APPLICATION SERVE
    PSQED                            MISCELLANEOUS
    PSQRYSRV                         QUERY SERVER

    • PSNVS is the nVision Windows executable.  It is in PeopleTools resource MISCELLANEOUS.  This is nVision running in 2-tier mode.  I think I would put nVision into the same consumer group as query.  I can't see why it wouldn't be possible to create new PeopleSoft consumer groups and map them to certain executables.  nVision would be a candidate for a separate group (a sketch of what that might look like follows after this list).
      • For example, one might want to take a different approach to parallelism in GL reporting having partitioned the LEDGER tables by FISCAL_YEAR and ACCOUNTING_PERIOD
    • PSQED is also in MISCELLANEOUS.  Some customers use it to run PS/Query in 2-tier mode, and allow some developers to use it to run queries.  Perhaps it should also be in the QUERY SERVER group.
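    As a sketch of the nVision suggestion above (my illustration, not delivered PeopleTools configuration; the group name and the CPU share are invented, and you would still have to map a PeopleSoft resource name to the new group in the component):

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
        consumer_group => 'NVISION_GROUP'
       ,comment        => 'nVision reporting');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'MIXED_WORKLOAD_PLAN'
       ,group_or_subplan => 'NVISION_GROUP'
       ,comment          => 'Directive for nVision reporting'
       ,mgmt_p2          => 15);   -- illustrative share at level 2
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

    BEGIN
      DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP
      (GRANTEE_NAME   => 'SYSADM'
      ,CONSUMER_GROUP => 'NVISION_GROUP'
      ,GRANT_OPTION   => FALSE);
    END;
    /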
    Cannot Mix PeopleSoft Consumer Group Settings with Oracle Consumer Group Mappings

    I would like to be able to blend the PeopleSoft configuration with the ability to automatically associate Oracle consumer groups with specific values of MODULE and ACTION.  Purely as an example, I am trying to move the Process Monitor component into the SYS_GROUP consumer group.
    BEGIN
    DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

    DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (attribute => 'MODULE_NAME'
    ,value => 'PROCESSMONITOR'
    ,consumer_group => 'SYS_GROUP');
    DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

    However, it doesn't work, because the explicit setting overrides any rules, and you cannot prioritise other rules above explicit settings.
    exec dbms_application_info.set_module('PROCESSMONITOR','PMN_PRCSLIST');
    SELECT REGEXP_SUBSTR(program,'[^.@]+',1,1) program
    , module, action, resource_consumer_group
    FROM v$session
    WHERE module IN('PROCESSMONITOR','WIBBLE')
    ORDER BY program, module, action
    /

    So I have created a new SQL*Plus session and set the module/action, and it has automatically moved into the SYS_GROUP.  Meanwhile, I have been into the Process Monitor in the PIA; the module and action of the PSAPPSRV sessions have been set, but they remain in the INTERACTIVE_GROUP.
    PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
    ---------------- ---------------- ---------------- ------------------------
    PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
    PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
    sqlplus          PROCESSMONITOR   PMN_PRCSLIST     SYS_GROUP

    If I set the module to something that doesn't match a rule, the consumer group goes back to OTHER_GROUPS, which is the default.
    exec dbms_application_info.set_module('WIBBLE','PMN_PRCSLIST');

    PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
    ---------------- ---------------- ---------------- ------------------------
    PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
    PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
    sqlplus          WIBBLE           PMN_PRCSLIST     OTHER_GROUPS

    Now, if I explicitly set the consumer group exactly as PeopleSoft does, my session automatically moves into the INTERACTIVE_GROUP.
    DECLARE 
    old_group varchar2(30);
    BEGIN
    DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('INTERACTIVE_GROUP', old_group, FALSE);
    END;
    /

    PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
    ---------------- ---------------- ---------------- ------------------------
    PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
    PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
    sqlplus          WIBBLE           PMN_PRCSLIST     INTERACTIVE_GROUP

    Next, I will set the module back to match the rule, but the consumer group doesn't change because the explicit setting takes priority over the rules.
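    I assume the module is reset with the same call as before (the command is not shown in the original, so this line is my reconstruction):

    exec dbms_application_info.set_module('PROCESSMONITOR','PMN_PRCSLIST');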
    PROGRAM          MODULE           ACTION           RESOURCE_CONSUMER_GROUP
    ---------------- ---------------- ---------------- ------------------------
    PSAPPSRV         PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
    PSAPPSRV         PROCESSMONITOR   PMN_SRVRLIST     INTERACTIVE_GROUP
    sqlplus          PROCESSMONITOR   PMN_PRCSLIST     INTERACTIVE_GROUP
    You can rearrange the priority of the other rule settings, but the explicit setting must have the highest priority (if you try to lower it you will get ORA-56704).  So, continuing with this example, I cannot assign a specific component to a different resource group unless I stop using the PeopleSoft configuration for PSAPPSRV.
    Instead, I could create a rule to assign a resource group to PSAPPSRV via the program name, and have a higher priority rule to override that when the module and/or action is set to a specific value.  However, first I have to disengage the explicit consumer group change for PSAPPSRV by updating its row in PS_PTEXEC2RESOURCE so that it no longer maps to a resource with a consumer group.
    UPDATE ps_ptexec2resource 
    SET pt_resource_name = 'DO_NOT_USE'
    WHERE pt_executable_name = 'PSAPPSRV'
    AND pt_resource_name = 'APPLICATION SERVER'
    /
    COMMIT
    /
    BEGIN
    DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
    END;
    /
    BEGIN
    DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (attribute => 'CLIENT_PROGRAM'
    ,value => 'PSAPPSRV'
    ,consumer_group => 'INTERACTIVE_GROUP');

    DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING
    (attribute => 'MODULE_NAME'
    ,value => 'PROCESSMONITOR'
    ,consumer_group => 'SYS_GROUP');

    DBMS_RESOURCE_MANAGER.set_consumer_group_mapping_pri(
    explicit => 1,
    oracle_user => 2,
    service_name => 3,
    module_name_action => 4, --note higher than just module
    module_name => 5, --note higher than program
    service_module => 6,
    service_module_action => 7,
    client_os_user => 8,
    client_program => 9, --note lower than module
    client_machine => 10
    );
    DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /
    So, you would have to choose between using either the PeopleSoft configuration or the Oracle Resource Manager configuration.  It depends on your requirements.  This is going to be a decision you will have to take when you design your resource management.  Of course, you can always use just the mapping approach in versions of PeopleTools prior to 8.54.

    Conclusion

    I have never seen Oracle Resource Manager used with PeopleSoft, probably because setting it up is not trivial and it is then difficult to test the resource plan.  I think this enhancement is a great start that makes it very much easier to implement Oracle Resource Manager on PeopleSoft.  However, I think we need more granularity.
    • I would like to be able to put specific processes run on the process scheduler by name into specific consumer groups.  For now, you could do this with a trigger on PSPRCSRQST that fires on process start-up and makes an explicit consumer group change (and puts it back again for Application Engine on completion); a sketch of such a trigger follows after this list.
    • I would like the ability to set different resource groups for the same process name in different application server domains.  For example,
      • I might want to distinguish between PSQRYSRV processes used for ad-hoc PS/Queries on certain domains from PSQRYSRVs used to support nVision running in 3-tier mode on other domains.
      • I might have different PIAs for back-office and self-service users going to different application servers.  I might want to prioritise back-office users over self-service users.
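    As a sketch of the trigger idea in the first bullet (my illustration only: the status value, column names, process name and target group are assumptions, and it only covers the switch at start-up, not switching back on completion):

    CREATE OR REPLACE TRIGGER sysadm.set_prcs_consumer_group
    BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
    FOR EACH ROW
    WHEN (new.runstatus = '7'            /* assumed: 7 = Processing */
          AND new.prcsname = 'NVSRUN')   /* illustrative process name */
    DECLARE
      l_old_group VARCHAR2(30);
    BEGIN
      -- The process updates its own request row at start-up, so switching the
      -- current consumer group here affects that process's database session.
      DBMS_SESSION.SWITCH_CURRENT_CONSUMER_GROUP('BATCH_GROUP', l_old_group, FALSE);
    END;
    /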
    Nonetheless, I warmly welcome the new support for Oracle Resource Manager in PeopleTools.  It is going to be very useful for RAC implementations, and I think it will be essential for multi-tenant implementations where different PeopleSoft product databases are plugged into the same container database.

    Announcement: Oracle Database In-Memory Advisor

    Jean-Philippe Pinte - Wed, 2015-02-25 01:27
    Oracle Database In-Memory Advisor is now available.
    The "Database Tuning Pack" is required to use this advisor.

    More information:
    • OTN page
    • MOS Note 1965342.1
