Feed aggregator

Documentum story – Jobs in a high availability installation

Yann Neuhaus - Wed, 2016-10-19 04:55

When you have an installation with a single Content Server (CS), you do not need to care about where a job runs: it is always your single CS.
But how should you configure the jobs when you have several Content Servers? Which jobs have to be executed and which ones not? Let's see that in this post.

When you have to run your jobs in a high availability installation you have to configure some files and objects.

Update the method_verb of the dm_agent_exec method:

API> retrieve,c,dm_method where object_name = 'agent_exec_method'
API> get,c,l,method_verb
API> set,c,l,method_verb
SET> ./dm_agent_exec -enable_ha_setup 1
API> get,c,l,method_verb
API> save,c,l
API> reinit,c


The Java methods also have to be updated to be restartable:

update dm_method object set is_restartable=1 where method_type='java';
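To double-check the result, a simple DQL query like the following can be used (a sketch; is_restartable and method_type are standard dm_method attributes):

select object_name, is_restartable from dm_method where method_type = 'java';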


On our installation we use jms_max_wait_time_on_failures = 300 instead of the default value (3000).
This is set in server.ini on the Primary Content Server and in server_HOSTNAME2_REPO01.ini on the Remote Content Server:
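# sketch; the section name is assumed from the standard server.ini layout
[SERVER_STARTUP]
...
jms_max_wait_time_on_failures = 300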



Based on some issues we faced, for instance the dce_clean job running twice when we had both JMS projected to each CS, EMC advised us to project each JMS to its local CS only. With this configuration, if the JMS on the primary CS is down, a job using a Java method is started on the remote JMS via the remote CS.

Regarding which jobs have to be executed, I am describing only the ones used for housekeeping.
So the question to answer is: which job does what, and what is "touched": metadata, content, or both?

To verify that, first check how many Content Servers are used and where they are installed:

select object_name, r_host_name from dm_server_config
REPO1               HOSTNAME1.DOMAIN


Verify on which CS the jobs will run and “classify” them.
Check the job settings:

select object_name, target_server, is_inactive from dm_job

The following jobs work only on metadata; they can run anywhere, so the target_server has to be empty:

object_name            target_server   is_inactive
dm_ConsistencyChecker                  False
dm_DBWarning                           False
dm_FileReport                          False
dm_QueueMgt                            False
dm_StateOfDocbase                      False



The following jobs work only on content.


As we are using a NAS for the Data directory, which is shared between both servers, only one of the two jobs has to run. By default the target_server is defined, so for the one which has to run, the target_server has to be emptied; the other one is set inactive.

object_name                       target_server                           is_inactive
dm_ContentWarning                                                         False
dm_ContentWarningHOSTNAME2_REPO1  REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN  True
dm_DMClean                                                                False
dm_DMCleanHOSTNAME2_REPO1         REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN  True

Metadata and Content

The following jobs work on both metadata and content.


Filescan scans the NAS content storage. As said above, it is shared and therefore the job only needs to be executed once: keep one job active with an empty target_server (so it can run on any server) and deactivate the other.

LogPurge also cleans files under $DOCUMENTUM/dba/log and its subfolders, which are obviously not shared; therefore both dm_LogPurge jobs have to run. You just have to use different start times to avoid an overlap when objects are removed from the repository.

object_name                    target_server                           is_inactive
dm_DMFilescan                                                          False
dm_DMFilescanHOSTNAME2_REPO1   REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN  True
dm_LogPurge                    REPO1.REPO1@HOSTNAME1.DOMAIN            False
dm_LogPurgeHOSTNAME2_REPO1     REPO1.HOSTNAME2_REPO1@HOSTNAME2.DOMAIN  False

With this configuration your housekeeping jobs should be set up correctly.

One point you have to take care of is when you use Documentum Administrator (DA) to configure your jobs. Once you open the job properties, the "Designated Server" is set to one of your servers and not to "Any Running Server" (which means target_server = ' '). If you click the OK button, you will set the target_server, and if that CS is down, the job will fail because it cannot use the second CS.
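If the "Designated Server" was saved by mistake, the attribute can be reset again, for instance with a DQL update along these lines (a sketch; dm_DMClean is just an example, adapt the object_name to the affected job):

update dm_job object set target_server = ' ' where object_name = 'dm_DMClean';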


The post Documentum story – Jobs in a high availability installation appeared first on Blog dbi services.

Get the hostname of the executing server in BPEL

Darwin IT - Wed, 2016-10-19 04:48
This week I got involved in a question on the Oracle Forums about getting the hostname of the server executing a BPEL process. In itself this is not possible in BPEL. Also, if you have a long-running async process, the process gets dehydrated at several points (at a receive, wait, etc.). After an incoming signal, another server could process it further, so you can't be sure that one server will process it to the end.

However, using Java, you can get the hostname of an executing server, quite easily. @AnatoliAtanasov suggested this question on stackOverflow. I thought that it would be fun to try this out.

Although you can opt for creating an embedded java activity, I used my earlier article on SOA and Spring Contexts to have it in a separate bean. By the way, in contrast to my suggestions in the article, you don't have to create a separate spring context for every bean you use.

My Java bean looks like:
package nl.darwinit.soasuite;

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ServerHostBeanImpl implements IServerHostBean {
    public ServerHostBeanImpl() {
    }

    public String getHostName(String hostNameDefault) {
        String hostName;
        try {
            // Resolve the hostname of the server executing this bean
            InetAddress addr = InetAddress.getLocalHost();
            hostName = addr.getHostName();
        } catch (UnknownHostException ex) {
            System.out.println("Hostname can not be resolved");
            hostName = hostNameDefault;
        }
        return hostName;
    }
}


The interface class I generated is:
package nl.darwinit.soasuite;

public interface IServerHostBean {
    String getHostName(String hostNameDefault);
}

Then I defined a Spring Context, getHostNameContext, with the following content
<?xml version="1.0" encoding="UTF-8" ?>
<beans xmlns="http://www.springframework.org/schema/beans" xmlns:util="http://www.springframework.org/schema/util"
xmlns:jee="http://www.springframework.org/schema/jee" xmlns:lang="http://www.springframework.org/schema/lang"
xmlns:aop="http://www.springframework.org/schema/aop" xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:sca="http://xmlns.oracle.com/weblogic/weblogic-sca" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tool http://www.springframework.org/schema/tool/spring-tool.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task.xsd http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd http://www.springframework.org/schema/lang http://www.springframework.org/schema/lang/spring-lang.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/jdbc http://www.springframework.org/schema/jdbc/spring-jdbc.xsd http://www.springframework.org/schema/jms http://www.springframework.org/schema/jms/spring-jms.xsd http://www.springframework.org/schema/oxm http://www.springframework.org/schema/oxm/spring-oxm.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd http://xmlns.oracle.com/weblogic/weblogic-sca META-INF/weblogic-sca.xsd">
<!--Spring Bean definitions go here-->
<sca:service name="GetHostService" target="ServerHostBeanImpl" type="nl.darwinit.soasuite.IServerHostBean"/>
<bean id="ServerHostBeanImpl" class="nl.darwinit.soasuite.ServerHostBeanImpl"/>

After wiring the context to my BPEL process, the composite looks like:

Then, deploying and running it, gives the following output:

Nice, isn't it?

Documentum story – How to display correct client IP address in the log file when a WebLogic Domain is fronted by a load Balancer

Yann Neuhaus - Wed, 2016-10-19 04:32

Load Balancers do not provide the client IP address by default, so the WebLogic HTTP log file (access_log) shows the Load Balancer IP address instead of the client one.
This is sometimes a problem when diagnosing issues, and the Single Sign On configuration does not provide the user name in the HTTP log either.

In most cases, the Load Balancer can provide an additional header named "X-Forwarded-For", but it needs to be configured by the Load Balancer administrators.
If the "X-Forwarded-For" header is provided, it can be fetched using WebLogic Server HTTP extended logging.

To enable WebLogic Server HTTP logging to fetch the "X-Forwarded-For" header, follow the steps below for each WebLogic Server in the WebLogic Domain:

  1. Browse to the WebLogic Domain administration console and sign in as an administrator user
  2. Open the servers list and select the first managed server
  3. Select the Logging tab and the HTTP sub-tab
  4. Open the Advanced folder, change the format to "Extended" and set the Extended Logging Format Fields to:
    "cs(X-Forwarded-For) date time cs-method cs-uri sc-status bytes"
  5. Save
  6. Browse back to the servers list and repeat the steps for each WebLogic Server from the domain placed behind the load balancer.
  7. Activate the changes.
  8. Stop and restart the complete WebLogic domain.

After this, the WebLogic Server HTTP log (access_log) should display the client IP address and not the Load Balancer one.

When using the WebLogic Server extended HTTP logging, the username field is not available any more.
This feature is described in the following Oracle MOS article:
Missing Username In Extended Http Logs (Doc ID 1240135.1)

To get the authenticated username displayed, an additional custom field provided by a custom Java class needs to be used.

Here is an example of such a Java class:

import weblogic.servlet.logging.CustomELFLogger;
import weblogic.servlet.logging.FormatStringBuffer;
import weblogic.servlet.logging.HttpAccountingInfo;

/* This example outputs the authenticated user name into a
   custom field called MyCustomUserNameField */

public class MyCustomUserNameField implements CustomELFLogger {

  public void logField(HttpAccountingInfo metrics, FormatStringBuffer buff) {
    // Completed here as a sketch: getRemoteUser() returns the
    // authenticated user (or null), and appendValueOrDash() writes
    // a "-" when no value is available
    buff.appendValueOrDash(metrics.getRemoteUser());
  }
}
The next step is to compile the class and create a jar library. Set the environment by running the WebLogic setWLSEnv.sh script, then:

javac MyCustomUserNameField.java

jar cvf MyCustomUserNameField.jar MyCustomUserNameField.class

Once done, copy the jar file to the WebLogic Domain lib directory. This way, it will be available in the classpath of each WebLogic Server of this WebLogic Domain.

The WebLogic Server HTTP extended log format can now be modified to include a custom field named "x-MyCustomUserNameField".
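Assuming the class above, the format string from step 4 would be extended with the custom field, for example (the x- prefix followed by the class name is the WebLogic convention for custom fields):

cs(X-Forwarded-For) date time cs-method cs-uri sc-status bytes x-MyCustomUserNameField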


The post Documentum story – How to display correct client IP address in the log file when a WebLogic Domain is fronted by a load Balancer appeared first on Blog dbi services.

SQL group by query shenanigans

Tom Kyte - Wed, 2016-10-19 03:26
Hi Chris or Connor, Saw you guys at OOW so I thought I'd toss a basic SQL query to you. I'll use the HR.EMPLOYEES table to represent my problem, so forgive me if it's a bit contrived. I'd like to construct a query to sum the salaries grouped by ...
Categories: DBA Blogs

Grant Access on Table

Tom Kyte - Wed, 2016-10-19 03:26
I've given a grant to one user on a table but the user is unable to access the table, though I'm getting 'grant succeeded' as output. The scenario is: 1. There is one database A at a remote location. 2. A is trying to access some table on database B (place on othe...
Categories: DBA Blogs

View that opens and runs once. The next time it's opened it hangs.

Tom Kyte - Wed, 2016-10-19 03:26
I have a view that, when you open it in something like TOAD, MS Access, or SSMS, opens fine the first time. The next time you open it or select from it, it hangs. Creating the view I get no errors or warnings. If I open the view i...
Categories: DBA Blogs

Bulk Collection Save Exception

Tom Kyte - Wed, 2016-10-19 03:26
Dear Tom, please help me with the below... we are using the BULK COLLECT option with SAVE EXCEPTIONS, like below: FORALL i IN 1..tab.count SAVE EXCEPTIONS INSERT INTO table VALUES (obj(i)); EXCEPTION WHEN excep_bulk_err THEN ...
Categories: DBA Blogs

Oracle RAC without ASM

Tom Kyte - Wed, 2016-10-19 03:26
Hi, my aim is to install Oracle RAC 11gR2 without ASM. What are the required steps to achieve this (RAC installation), and what are the pros/cons of this kind of installation? Regards, Oussema
Categories: DBA Blogs

DBLink for Local Tables

Tom Kyte - Wed, 2016-10-19 03:26
Nice day. I'm from Peru so my English is not the best. The database is: Oracle Database 11g Release - 64bit Production PL/SQL Release - Production "CORE Production" TNS for Linux: Version - Product...
Categories: DBA Blogs

Database PL/SQL developer

Tom Kyte - Wed, 2016-10-19 03:26
What are the roles and responsibilities of a PL/SQL developer? What kind of knowledge is required for an Oracle PL/SQL developer? What will be the future of PL/SQL developers?
Categories: DBA Blogs

PL/SQL Database Programming Question

Tom Kyte - Wed, 2016-10-19 03:26
I am struggling to figure out which LOOP statement to use. Here's the question: Each day, starting on Monday, the price will drop 5% from the previous day's price. Monday's sale price will be 5% less than what is stored in the database in the BB...
Categories: DBA Blogs

Oracle IaaS Workshop for EMEA Partners

Oracle Cloud Platform: Infrastructure as a Service Workshop for Partners ...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Manage GitHub behind a proxy

Yann Neuhaus - Wed, 2016-10-19 02:00

I'm quite used to GitHub since I use it pretty often, but I had actually never tried to use it behind a proxy. In the last months, I was working on a project where I had to use GitHub to version control a repository that contained scripts, monitoring configurations, and so on. When setting up my local workstation (Windows) using the GUI, I faced an error showing that GitHub wasn't able to connect to the repository, while I was able to access it using my web browser. This is the problem I faced some time ago and I just wanted to share this experience, because even if I'm writing a lot of blogs related to Documentum, it is sometimes good to change your mind, you know... Therefore today is GitHub Day!


After some research and analysis (you already understood it if you read the first paragraph of this blog), I thought that maybe a proxy automatically set up in the web browser was preventing the Git process from accessing the GitHub repository, and I was right! So GitHub behind a proxy: how can you manage that? Actually it's pretty simple, because everything is there; you just need to configure it. Unfortunately, I didn't find any option in the GUI that would allow you to do that, so I had to use the Command Line Interface for that purpose. If there is a way to do it using the GUI, you are welcome to share!


Ok so let’s define some parameters:

  • PROXY_USER = The user’s name to be used for the Proxy Server
  • PROXY_PASSWORD = The password of the proxy_user
  • PROXY.SERVER.COM = The hostname of your Proxy Server
  • PORT = The port used by your Proxy Server in HTTP
  • PORT_S = The port used by your Proxy Server in HTTPS


With this information, you can execute the following commands to configure Git using the Command Line Interface (Git Shell on Windows). These two lines simply tell Git that it needs to use a proxy server in order to access the Internet properly:

git config --global http.proxy http://PROXY_USER:PROXY_PASSWORD@PROXY.SERVER.COM:PORT
git config --global https.proxy https://PROXY_USER:PROXY_PASSWORD@PROXY.SERVER.COM:PORT_S


If your Proxy Server is public (no authentication needed), then you can simplify these commands as follows:

git config --global http.proxy http://PROXY.SERVER.COM:PORT
git config --global https.proxy https://PROXY.SERVER.COM:PORT_S


With this simple configuration, you should be good to go. You can also decide, whenever you want, to remove this configuration. That's also pretty simple, since you just have to unset it with the same kind of commands:

git config --global --unset http.proxy
git config --global --unset https.proxy


The last thing I wanted to show you is that, if it is still not working, you can check what you entered previously and what is currently configured by executing the following commands:

git config --global --get http.proxy
git config --global --get https.proxy


This concludes this pretty small blog, but I really wanted to share it because I think it can help a lot of people!


The post Manage GitHub behind a proxy appeared first on Blog dbi services.

Microsoft Accounts Fail To Log In To Windows 10 with “User Profile Service failed the login” Error.

Jeff Moss - Wed, 2016-10-19 01:05

My kids are getting to the age where they can't keep away from the laptop, various pads or the Smart TV to go online, so I thought it was time for some protection.

I figured, for the Windows 10 laptops, that I’d use the Microsoft Accounts approach and use the “big brother” features there to stop the kids watching things they shouldn’t and restrict their access time.

First step was to convert my local account into a Microsoft one – simple enough and worked fine.

Next step was to create additional Microsoft accounts and then have them linked up as part of the “Family” – again, fine.

Then tell the PC to add those users – again all fine and simple to do.

All was going well up until now, but when I try to log out of my working Microsoft account on the laptop and log in to one of the Family Microsoft accounts, it fails with the "User Profile Service failed the login" error.



After much googling and trying various things, the one which worked for me was to copy the directory C:\Users\Default from a working Windows 7 Ultimate machine onto the laptop with the problem (where the directory did not exist at all). The advice I found actually referred to copying from another Windows 10 machine, but I didn’t have one of those – only a Windows 7 one.

I then added the Family Microsoft accounts back in and, after logging out, logging in as one of these added accounts worked fine!

I can’t be certain what the issue was, but various reading suggested an issue where the machine was upgraded from Windows 7/8 to 10 and where the local profile (C:\Users\Default) was either missing or corrupted. Copying in a working one from another machine fixed the issue in my case.


Oracle E-Business Suite 11i - October 2016 is Last Critical Patch Update

Starting with the April 2016 Critical Patch Update (CPU), Oracle E-Business Suite 11.5.10 CPU patches are only available to customers with additional-fee Tier 1 support contracts. As of December 2016, no more CPU patches will be available for Oracle E-Business Suite 11i: October 2016 is the last CPU patch for Oracle E-Business Suite 11i. For 12.0, the last CPU patch was October 2015.

Even though there are no more security patches, many, if not most, vulnerabilities discovered and patched in Oracle E-Business Suite 12.x are also present and exploitable in 11i.  A significant number of these security bugs are SQL injection bugs which allow an attacker to execute SQL as the Oracle E-Business Suite APPS database account.  These attacks can easily compromise the entire application and database.

As there are no more security patches for 11i and 12.0, we strongly recommend all 11i and 12.0 customers who have not yet upgraded to 12.x take immediate defensive steps to protect Oracle E-Business Suite 11i, especially those with Internet-facing modules such as iSupplier, iStore, iRecruitment, and iSupport. A key layer of defense is Integrigy's web application firewall for Oracle E-Business Suite, AppDefend, which provides virtual patching for these security bugs as well as additional protection from generic web application attacks like SQL injection and cross-site scripting (XSS) and from common Oracle E-Business Suite security misconfigurations.

Reference: AppDefend for the Oracle E-Business Suite

Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Oracle Database Critical Patch Update October 2016

The list of Oracle Database versions supported for Critical Patch Updates (CPU) is getting shorter and shorter: starting with the October 2016 CPU, only two versions are supported, and in order to apply CPU security patches, all other Oracle Database versions must be upgraded to one of them. As these are terminal database releases, their final CPU patches will be July 2021 and October 2020. For those who have not yet applied 12c CPU patches, only Patch Set Updates (PSU) are available, which include both security fixes and a large number of high-priority fixes; Security Patch Updates (SPU), which include only security fixes, are not available for 12c.

The October 2016 CPU fixes 12 security bugs in 7 database components. Only the APEX (Application Express) security bug is remotely exploitable without authentication; as with all APEX patches, this is a separate patch that upgrades APEX to a newer version.

This CPU should be considered HIGH risk due to the 5 security bugs that require only the CREATE SESSION privilege to exploit. These bugs can be exploited by any database user and can be used to compromise the entire database.

Oracle Database, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

October 2016 Critical Patch Update Released

Oracle Security Team - Tue, 2016-10-18 14:59

Oracle today released the October 2016 Critical Patch Update.

This Critical Patch Update provides fixes for a wide range of product families including: Oracle Database Server, Oracle E-Business Suite, Oracle Industry Applications, Oracle Fusion Middleware, Oracle Sun Products, Oracle Java SE, and Oracle MySQL.

Oracle recommends this Critical Patch Update be applied as soon as possible. A summary and analysis of this Critical Patch Update has been published on My Oracle Support (Doc ID 2193091.1).

For More Information:

The Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/security-advisory/cpuoct2016-2881722.html

My Oracle Support Note 2193091.1 is located at https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=2193091.1 (MOS account required).


Critical Patch Update for October 2016 Now Available

Steven Chan - Tue, 2016-10-18 14:49

The Critical Patch Update (CPU) for October 2016 was released on October 18, 2016. Oracle strongly recommends applying the patches as soon as possible.

The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. 

Supported products that are not listed in the "Supported Products and Components Affected" Section of the advisory do not require new patches to be applied.

The Critical Patch Update Advisory is available at the following location:

http://www.oracle.com/technetwork/security-advisory/cpuoct2016-2881722.html
It is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches.

The next four Critical Patch Update release dates are:

  • January 17, 2017
  • April 18, 2017
  • July 18, 2017
  • October 17, 2017
Categories: APPS Blogs

Oracle JET Example - Implementing Editable Collection Table

Andrejus Baranovski - Tue, 2016-10-18 13:09
Oracle JET allows you to implement inline editable tables. The user can double-click a row or press Enter to switch to edit mode, and press Esc to switch back to read-only mode. With F2 you can toggle between editable and read-only. Check it yourself in the JET cookbook Editable Collection Table to see how it works.

I have followed the instructions from the cookbook and implemented an editable JET table on top of an ADF BC REST service. The row with key 201 is switched to edit mode:

When we exit edit mode, the event is handled in JavaScript and the changed row is printed to the log. Here we could collect all changed rows into an array and submit them to the server all at once, or we could fire individual REST calls for each changed row; it depends on the implementation:

To handle row editing, the table must be set with the editMode: 'rowEdit' property. A row template property must also be defined to render different HTML elements in read-only and editable modes; this property can be initialized dynamically with a template name retrieved from a function that returns the matching template name based on the row edit mode, as sketched below.
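The details were shown as screenshots in the original post; a minimal sketch, with hypothetical ids, column names and datasource, and context property names that may differ between JET versions, could look like:

<table id="table" summary="Employees"
       data-bind="ojComponent: {component: 'ojTable', data: datasource,
                  editMode: 'rowEdit',
                  rowTemplate: {templateName: getRowTemplate},
                  columns: [{headerText: 'First Name', field: 'FirstName'},
                            {headerText: 'Last Name', field: 'LastName'}]}">
</table>

// Returns the template name to render, based on the current row mode
self.getRowTemplate = function (data, context) {
  return context.$rowContext['mode'] === 'edit' ? 'editRowTemplate' : 'rowTemplate';
};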

The read-only template renders output texts, while the editable template renders input texts, for example:
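A minimal pair of templates (FirstName being a hypothetical row attribute) could look like:

<script type="text/html" id="rowTemplate">
  <td data-bind="text: FirstName"></td>
</script>
<script type="text/html" id="editRowTemplate">
  <td><input data-bind="ojComponent: {component: 'ojInputText', value: FirstName}"/></td>
</script>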

Changed row data is logged by an ojbeforeroweditend listener function:
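A sketch of such a listener (the exact shape of the event payload is an assumption and may differ between JET versions):

self.beforeRowEditEndListener = function (event, ui) {
  // Log the edited row; changed rows could instead be collected into
  // an array here for a single batch REST call later
  console.log('Edited row: ' + JSON.stringify(ui.rowContext.status.rowData));
};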

Download sample application - JETEditableTableApp.zip.

Using JMeter to run load test on a ADF application protected by Oracle Access Manager Single Sign On

Yann Neuhaus - Tue, 2016-10-18 10:50


In one of my missions, I was requested to run performance and load tests on an ADF application running in an Oracle Fusion Middleware environment protected by Oracle Access Manager. For this task we decided to use Apache JMeter, because it provides the needed control over the tests and uses multiple threads to emulate multiple users. It can also be used for distributed testing, which uses multiple systems for stress tests. Additionally, the GUI provides an easy way to manage the load test scenarios, which can easily be recorded using the HTTP(S) Test Script Recorder.

Prepare a JMeter test plan

A good starting point is to review the following blog: My Shot on Using JMeter to Load Test Oracle ADF Applications

The blog above explains how to record and use a test plan in JMeter.
It provides a SimplifiedADFJMeterPlan.jmx JMeter test plan that can be used as a base for the JMeter test plan creation.
But this ADF starter test plan has to be reviewed for the jsessionId and afrLoop extractors, as the regular expressions associated with them might need to be adapted depending on the version of the ADF software.

In this environment, Oracle Fusion Middleware ADF on WebLogic Server 10.3.6 and Oracle Access Manager 11.2.3 were used.
The regular expressions for afrLoop and jsessionid needed to be updated as shown below:

reference name   regular expression
afrLoop          _afrLoop\', \'([0-9]{13,16})
jsessionId       ;jsessionid=([-_0-9A-Za-z!]{62,63})

Coming to the Single Sign On layer, it appears that the Oracle Access Manager login screen requires three parameters:

  • username
  • password
  • request_id

The username and password values will be provided by the recording of the test scenario. To run the same scenario with multiple users, a CSV file is used to store test users and passwords; this will be detailed later in this blog.
The request_id is provided by the Oracle Access Manager Single Sign On layer and needs to be fetched and re-injected into the authentication URL.
To resolve this, a new variable needs to be created, using the regular expression below.

reference name   regular expression
requestId        name=\'request_id\' value=\'([&#;0-9]{18,25})\'

Once the test plan scenario is recorded, look for the standard OAM URL "/oam/server/auth_cred_submit" and change the request_id parameter to use the defined requestId variable.

OAM Authentication URL
name: request_id   value: ${requestId}

After those changes, the new JMeter test plan can be run.

Steps to run the test plan with multiple users

In JMeter, right-click on the "Thread Group" in the tree and select "Add" > "Config Element" > "CSV Data Set Config".

Create a CSV file containing USERNAME,PASSWORD pairs and save it in a folder on your JMeter server. Make sure the users exist in OAM/OID. For example (hypothetical test users):
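testuser1,Welcome1
testuser2,Welcome1
testuser3,Welcome1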


Adapt the path in the "CSV Data Set Config" and define the variable names (USERNAME and PASSWORD) in "Variable Names (comma-delimited)".
Look for the URL that submits the authentication (/oam/server/auth_cred_submit) and click on it. In the right frame, replace the username and password captured during the recording with ${USERNAME} and ${PASSWORD} respectively.
Finally, you can adapt the thread group of your test plan to the number of users (Number of Threads) and loops (Loop Count) you want to run, and execute it. The Ramp-Up Period in seconds is the time between thread starts.
The test plan can now be executed and the results visualized in tree, graph or table views.



The post Using JMeter to run load test on a ADF application protected by Oracle Access Manager Single Sign On appeared first on Blog dbi services.

