Feed aggregator

Kerberos SSO with Documentum D2 3.1 SP1

Yann Neuhaus - Thu, 2014-03-27 02:06

Last week, when I was working on how to set up Kerberos SSO for D2 3.1 SP1, I faced some issues due to documentation that doesn’t seem to be up to date… In fact, our version is D2 3.1 SP1 and there is no specific documentation for SP1. As a consequence, I read both the D2 3.1 Installation Guide and the D2 4.0 Installation Guide. The first time I read the D2 3.1 documentation, I found it very light and I knew I would have some problems. Fortunately, I already had experience with Kerberos, essentially with the MIT distribution on Linux (how to set up a KDC, kerberize an application, manage users, etc.).

The first thing that is important to know is that, as D2 isn’t a WDK client, setting up SSO using Kerberos only involves the D2 host server and the Active Directory server. There is no need to set up the Content Server to use Kerberos. So here is the configuration that was used:

  • Microsoft Active Directory on Windows Server 2008 R2. Let’s name this server ad001 and the related domain DOMAIN.COM
  • D2 3.1 SP1 on Microsoft Windows Server 2008 R2. Let’s name this server wd231

The second thing that is important to know is that not all directory servers are supported. Indeed, Documentum doesn't support a Linux Key Distribution Center (KDC).

1. D2 3.1 documentation steps

The first part of this blog describes the steps the official D2 3.1 Installation Guide provides to help administrators set up SSO using Kerberos in D2. You will see that those steps aren’t very descriptive, but with a little imagination you can do something with them.

a. Edit the shiro.ini file

Open the file shiro.ini and add the following lines:

/** = D2-Kerberos

b. On the AD

Create a user on the AD with the computer name of your application server and add the following options:

  • Use Kerberos DES encryption types for this account
  • This account supports Kerberos AES 128 bit encryption

Generate a keytab using the command below. Well, in fact, the official documentation only displays the command and doesn’t explain what it is or where to execute it.


Transfer this keytab to the D2 host server.

c. krb5.ini

[libdefaults]
    default_realm = DOMAINNAME

[realms]
    DOMAINNAME = {
        kdc = adserver.domainName
    }

d. On Windows 7

  1. Run gpedit.msc
  2. Expand “Local Computer Policy” / “Computer Configuration” / “Windows Settings” / “Security Settings” / “Local Policies” / “Security Options”
  3. Double click “Network security: Configure encryption types allowed for Kerberos”
  4. Select all
  5. Press “OK”


2. D2 3.1 SP1 steps

The second part of this blog presents the steps needed to get Kerberos SSO working with D2 3.1 SP1. If you only follow the steps described in the official documentation, as some explanations are missing, you will probably run into issues. Here are the steps that were required to get the SSO working on our D2 3.1 SP1:

Let’s begin at the beginning, contrary to the official documentation, which begins with the end. How logical!

a. On the Active Directory

Create a user on the Active Directory with the following properties:

  • The username doesn’t matter (it doesn’t need to be the D2 server hostname). Let’s name this user: dmskrbsso
  • Password: dmskrbssoPassword
  • Password never expires
  • This account supports Kerberos AES 128 bit encryption
  • Trust for Delegation to any service (Kerberos Only)
  • This account supports Kerberos DES encryption

This last configuration isn’t mandatory, as Kerberos will always use the most secure encryption available (AES 256 in general). So there is no problem if your Active Directory admin doesn’t want to enable DES encryption, as it isn’t secure enough.

When the user is successfully created, open an administrator command prompt and create the keytab for D2 using the following command:
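The keytab command itself isn’t reproduced above. A typical ktpass invocation, run on the AD server, would look like the sketch below; the SPN HTTP/wd231.domain.com, the realm and the crypto choice are assumptions you must adapt to your environment:

```
ktpass /princ HTTP/wd231.domain.com@DOMAIN.COM /mapuser dmskrbsso@DOMAIN.COM /pass dmskrbssoPassword /crypto AES128-SHA1 /ptype KRB5_NT_PRINCIPAL /out wd231.keytab
```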


This command will create a file named wd231.keytab, which should be transferred to the D2 host server. Let’s place this file at the following location: C:/Kerberos/wd231.keytab

b. On Documentum Administrator

Create a repository user with the same Name, Login Name and password as the one created on the AD:

  • State: Active
  • Name: dmskrbsso
  • Login Name: dmskrbsso
  • Login Domain:
  • User Source: Inline Password
  • Password: dmskrbssoPassword
  • Privileges: None
  • Extended Privileges: None
  • Client Capability: Consumer

This first user will just reflect the new user created on the AD but I think this user isn’t mandatory.

Create another repository user which will be used by the shiro.ini file to connect all other users through SSO:

  • State: Active
  • Name: d2krbsso
  • Login Name: d2krbsso
  • Login Domain: (This is mandatory! The SSO will not work without the AD domain here)
  • User Source: Inline Password
  • Password: d2krbssoPassword
  • Privileges: Superuser
  • Extended Privileges: None
  • Client Capability: Consumer

From a command prompt on the D2 server, execute the following command to get the encrypted password of the user d2krbsso:


This command assumes that your D2-Client web application is at “C:/Tomcat6D2-Client31SP1/webapps/D2-Client/”. Remember the result of this command, as the encrypted password will be needed later in the shiro.ini file. Let's name this encrypted password userEncryptedPw.

c. krb5.ini

Create the file C:/Kerberos/krb5.ini with the following content:

[libdefaults]
    default_realm = DOMAIN.COM

[realms]
    DOMAIN.COM = {
        kdc = ad001.DOMAIN.COM
    }

[domain_realm]
    .domain.com = DOMAIN.COM
    domain.com = DOMAIN.COM

[logging]
    default = FILE:C:/Kerberos/logs/kdc_default.log
    kdc = FILE:C:/Kerberos/logs/kdc.log

Don’t forget to create the folder C:/Kerberos/logs/.

d. Edit the shiro.ini file

The shiro.ini file is the file used by D2 to authenticate users through Kerberos. It can be found at “C:/Tomcat6D2-Client31SP1/webapps/D2-Client/WEB-INF/classes/shiro.ini”. Replace the properties in this file with the following:
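The property values are not reproduced here. For reference, a minimal sketch of the relevant shiro.ini sections, built only from details given in this post (the additional property names for the keytab, the krb5.ini path and the d2krbsso credentials come from the official guide and are not shown), might look like:

```
[main]
# Kerberos HTTP authentication filter shipped with D2 3.1 SP1
D2-Kerberos = com.emc.d2.web.filters.D2KerberosHttpAuthenticationFilter
# the keytab (C:/Kerberos/wd231.keytab), the krb5.ini location and the
# d2krbsso superuser with its encrypted password (userEncryptedPw) are
# also configured in this file

[urls]
# route all URLs through the Kerberos filter
/** = D2-Kerberos
```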


docbase1 corresponds to a repository using Kerberos. You can set more than one docbase using the following property:

Maybe you didn’t see the difference from the original documentation, but if you look at the property named “D2-Kerberos”, you will see that the Java class D2KerberosHttpAuthenticationFilter isn’t in the same package. In our D2 3.1 SP1, this class is located in com.emc.d2.web.filters, whereas the D2 3.1 official documentation indicates eu.c6.d2.web.filters. Something funny is that the D2 4.0 official documentation indicates yet another location: eu.c6.d2.portal.server.filters.

e. Verify the Java Runtime Environment

It seems that Kerberos SSO for D2 requires a Java 1.6 JRE. The simplest way to verify this is to check the service configuration of your application server. Another way is to take a look at the registry:

  • Run regedit
  • Navigate HKEY_LOCAL_MACHINE / SOFTWARE / Wow6432Node / Apache Software Foundation / Procrun 2.0 / tomcat6D231SP1 / Parameters / Java
  • Verify that the Jvm parameter points to a Java 1.6 jvm.dll: C:/Program Files/Java/jre6/bin/server/jvm.dll

This assumes that the JAVA_HOME of your application server is C:/Program Files/Java/jre6/
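This check can also be scripted from a command prompt; the value name Jvm is how Procrun typically stores the path (an assumption to verify on your system):

```
reg query "HKLM\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\tomcat6D231SP1\Parameters\Java" /v Jvm
```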

f. Install the Java Cryptography Extension (JCE)

Download the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files for the Java version used by the JVM and copy both jars (local_policy.jar and US_export_policy.jar) into:
C:/Program Files/Java/jre6/lib/security/

g. Restart

Restart your D2 application server and look for errors in the Tomcat log files.

h. User configuration

For D2 Kerberos SSO, there is no need to change anything on user properties. That means that, for example, a Login Domain of LDAP and User Source of LDAP is fine.

i. On client computer

  1. Run gpedit.msc
  2. Expand “Local Computer Policy” / “Computer Configuration” / “Windows Settings” / “Security Settings” / “Local Policies” / “Security Options”
  3. Double click “Network security: Configure encryption types allowed for Kerberos”
  4. Select all
  5. Press “OK”

Please be aware that the D2-Client URL must be detected by Internet Explorer as an “Intranet Site”. This could be done through Internet Explorer options.

This finally concludes the configuration of Kerberos SSO in D2 3.1 SP1. To get a Kerberos ticket, just log in on the client machine with a user defined in AD and if you have followed the steps above, SSO should work. If this is not the case, please let me know and I might be able to help.

Good luck!

Using JAX-RS 2.0 in WebLogic Server 12.1.3

Steve Button - Wed, 2014-03-26 22:05

Please note: this blog discusses WebLogic Server 12.1.3
which has not yet been released.

We've been working on adding some Java API updates to the coming WebLogic Server 12.1.3 release.

One that I think is going to be very popular is JAX-RS 2.0, which includes some useful new capabilities around filtering, interception and a really useful new client API.

 In the WebLogic Server 12.1.3 release we are providing this in the form of an optional shared-library that contains: the JAX-RS 2.0 API, a Jersey 2.x implementation, some common Jersey extensions such as media support and a utility that works to expose the JAX-RS 2.0 API to referencing applications.

To make use of it, developers first deploy the jax-rs-2.0.war shared-library from the $ORACLE_HOME/wlserver/common/deployable-libraries directory to the server (or cluster) then use it in an application by referencing it as a library using a weblogic deployment descriptor.

Referencing the shared library by its library-name and specification-version attributes, an example of a weblogic.xml to use it would be:

<?xml version="1.0" encoding="UTF-8"?>
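The descriptor above is truncated; a complete sketch might look like the following, where the library-name value jax-rs is an assumption based on the deployable library's name (check the name of the deployed library in your console):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <library-ref>
        <!-- must match the name of the deployed jax-rs-2.0.war shared-library -->
        <library-name>jax-rs</library-name>
        <specification-version>2.0</specification-version>
    </library-ref>
</weblogic-web-app>
```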

Using the JAX-RS 2.0 Client API with WebLogic Server 12.1.3

Steve Button - Wed, 2014-03-26 22:05

Please note: this blog discusses WebLogic Server 12.1.3
which has not yet been released.

As part of the JAX-RS 2.0 support we are providing with WebLogic Server 12.1.3, one really useful new feature is the new Client API it provides, enabling applications to easily interact with REST services to consume and publish information.

By way of a simple example, I'll build out an application that uses the freegeoip REST service to look up the physical location of a specified IP address or domain name, and deploy it to WebLogic Server 12.1.3.

The first step to perform is to make a call to the REST API and examine the JSON payload that is returned.

$ curl http://freegeoip.net/json/<ip-or-hostname>

{"ip":"","country_code":"US","country_name":"United States","region_code":"CA","region_name":"California","city":"Mountain View","zipcode":"94043","latitude":37.4192,"longitude":-122.0574,"metro_code":"807","area_code":"650"}

The next step is to build a Java class to represent the JSON payload that is returned. In this case, it's quite simple because the JSON payload that is returned doesn't contain any relationships or complex data structures.

/**
 * @author sbutton
 * {"ip":"","country_code":"US","country_name":"United States","region_code":"CA","region_name":"California","city":"Mountain View","zipcode":"94043","latitude":37.4192,"longitude":-122.0574,"metro_code":"807","area_code":"650"}
 */
public class GeoIp implements Serializable {

    private String ipAddress;
    private String countryName;
    private String regionName;
    private String city;
    private String zipCode;
    private String latitude;
    private String longitude;

    public String getIpAddress() {
        return ipAddress;
    }

    public void setIpAddress(String ipAddress) {
        this.ipAddress = ipAddress;
    }

    // ... getters and setters for the remaining properties omitted ...
}


With the GeoIP class defined, the next step is to consider how to convert the JSON payload into an instance of the GeoIP class. I'll show two ways this can be done.

The first way to do it is to create a class that reads the result of the REST request, parses the JSON payload and constructs a representative instance of the GeoIP class. Within the JAX-RS API, there is an interface MessageBodyReader that can be implemented to convert a Stream into a Java type.

Implementing this interface gives you the readFrom(Class type, Type genericType, Annotation[] annotations, MediaType mediaType, MultivaluedMap httpHeaders, InputStream entityStream) method which supplies an InputStream containing the response to read. The method then parses out the JSON payload and constructs a responding GeoIP instance from it.

Parsing the JSON payload is straightforward with WebLogic Server 12.1.3 since we've included the (JSR-353) Java API for JSON Processing implementation which provides an API for reading and creating JSON objects.

package oracle.demo.wls.jaxrs.client.geoip;

import java.io.IOException;
import java.io.InputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import javax.json.Json;
import javax.json.stream.JsonParser;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;

public class GeoIpReader implements MessageBodyReader<GeoIp> {

    @Override
    public boolean isReadable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
        return GeoIp.class.isAssignableFrom(type);
    }

    @Override
    public GeoIp readFrom(Class<GeoIp> type, Type genericType, Annotation[] annotations, MediaType mediaType,
            MultivaluedMap<String, String> httpHeaders, InputStream entityStream) throws IOException, WebApplicationException {
        GeoIp g = new GeoIp();
        JsonParser parser = Json.createParser(entityStream);
        String key = null;
        while (parser.hasNext()) {
            switch (parser.next()) {
                case KEY_NAME:
                    key = parser.getString();
                    break;
                case VALUE_STRING:
                case VALUE_NUMBER:
                    String value = parser.getString();
                    switch (key) {
                        case "ip":           g.setIpAddress(value);   break;
                        case "country_name": g.setCountryName(value); break;
                        case "latitude":     g.setLatitude(value);    break;
                        case "longitude":    g.setLongitude(value);   break;
                        case "region_name":  g.setRegionName(value);  break;
                        case "city":         g.setCity(value);        break;
                        case "zipcode":      g.setZipCode(value);     break;
                    }
                    break;
                default:
                    break;
            }
        }
        return g;
    }
}

Once this class is built, it can be registered with the Client so that it can be called when necessary to convert a payload of MediaType.APPLICATION_JSON type into an instance of the GeoIp object, here done in an @PostConstruct method on a JSF bean:

@PostConstruct
public void init() {
    client = ClientBuilder.newClient();
    client.register(GeoIpReader.class);
}

The alternative way to do this is to use the EclipseLink MOXy JAXB implementation that is provided with WebLogic Server, which can automatically marshall and unmarshall JSON payloads to and from Java objects. Helpfully, the JAX-RS 2.0 shared-library that ships with WebLogic Server 12.1.3 contains the jersey-media-moxy extension, which enables the EclipseLink MOXy implementation to be simply registered and used by applications when conversion is needed.

To use the JAXB/MOXy approach, the GeoIpReader class can be thrown away. No manual parsing of the payload is required. Instead, the base GeoIp class is annotated with JAXB annotations to denote it as JAXB enabled and to provide some assistance in mapping the class properties to the payload property names.

package oracle.demo.wls.jaxrs.client.geoip;

import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlRootElement;

/**
 * @author sbutton
 * {"ip":"","country_code":"US","country_name":"United States","region_code":"CA","region_name":"California","city":"Mountain View","zipcode":"94043","latitude":37.4192,"longitude":-122.0574,"metro_code":"807","area_code":"650"}
 */
@XmlRootElement
public class GeoIp implements Serializable {

@XmlAttribute(name = "ip")
private String ipAddress;
@XmlAttribute(name = "country_name")
private String countryName;
@XmlAttribute(name = "region_name")
private String regionName;
@XmlAttribute(name = "city")
private String city;
@XmlAttribute(name = "zipcode")
private String zipCode;
@XmlAttribute(name = "latitude")
private String latitude;
@XmlAttribute(name = "longitude")
    private String longitude;

    // ... getters and setters as before ...
}


With the JAXB annotations placed on the GeoIP class to enable it to be automatically marshalled/unmarshalled from JSON, the last step is to register the EclipseLink MOXY implementation with the Client. This is done with the assistance of a small utility method, as shown in the Jersey User Guide Media chapter.

public static ContextResolver<MoxyJsonConfig> createMoxyJsonResolver() {
    final MoxyJsonConfig moxyJsonConfig = new MoxyJsonConfig();

    Map<String, String> namespacePrefixMapper = new HashMap<>(1);
    namespacePrefixMapper.put("http://www.w3.org/2001/XMLSchema-instance", "xsi");
    moxyJsonConfig.setNamespacePrefixMapper(namespacePrefixMapper).setNamespaceSeparator(':');

    return moxyJsonConfig.resolver();
}
This method is then used to register the relevant ContextResolver with the Client to handle JSON conversions, instead of the GeoIpReader class that was used before.

@PostConstruct
public void init() {
    client = ClientBuilder.newClient();
    client.register(createMoxyJsonResolver());
}

With the JSON payload to GeoIP conversion now covered, the JAX-RS Client API can be used to make the call to the freegeoip REST service and process the response.

To make a client call, two classes are used: javax.ws.rs.client.Client and javax.ws.rs.client.WebTarget.

The Jersey User Guide provides a good description of these two classes and their relationship:

The JAX-RS Client API is designed to allow a fluent programming model. This means a construction of a Client instance, from which a WebTarget is created, from which a request Invocation is built and invoked, can be chained in a single "flow" of invocations ... Once you have a Client instance you can create a WebTarget from it ... A resource in the JAX-RS client API is an instance of the Java class WebTarget and encapsulates an URI. The fixed set of HTTP methods can be invoked based on the WebTarget. The [base] representations are Java types, instances of which, may contain links that new instances of WebTarget may be created from.
In this example application, the Client is opened in an @PostConstruct method and closed in a @PreDestroy method, with the WebTarget being created and its GET method called when the lookup is executed by the user.

public class GeoIpBackingBean {

    private String addressToLookup;
    private GeoIp geoIp;
    private WebTarget target = null;
    private Client client = null;

    @PostConstruct
    public void init() {
        client = ClientBuilder.newClient();
        // register the GeoIpReader or the MOXy ContextResolver here, as shown above
    }

    @PreDestroy
    public void byebye() {
        client.close();
    }

    public void lookupAddress() {
        try {
            // URL template reconstructed: the post uses the freegeoip REST service
            target = client.target(String.format("http://freegeoip.net/json/%s", addressToLookup));
            geoIp = target.request().get(GeoIp.class);
        } catch (Exception e) {
            FacesContext.getCurrentInstance().addMessage(null, new FacesMessage("Error executing REST call: " + e.getMessage()));
        }
    }

    // getters and setters for addressToLookup and geoIp omitted
}


Bringing it all together as a JSF based application results in a JSF Bean being created that allows the IP address to be entered and a method that invokes the JAX-RS Client API to call out to the REST service to retrieve the JSON payload containing the location information. A simple JSF facelet page is used to support the entering of the IP address and the display of the relevant data from the GeoIP object.

<h:panelGrid columns="2" style="vertical-align: top;">
    <h:outputLabel value="Address"/>
    <h:inputText value="${geoIpBackingBean.addressToLookup}"/>
    <h:outputLabel value=""/>
    <h:commandButton action="${geoIpBackingBean.lookupAddress()}" value="Lookup" style="margin: 5px;"/>
</h:panelGrid>

<h:panelGrid columns="2">
    <h:outputText value="IP:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.ipAddress}"/>
    <h:outputText value="Country:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.countryName}"/>
    <h:outputText value="State:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.regionName}"/>
    <h:outputText value="City:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.city}"/>
    <h:outputText value="Zipcode:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.zipCode}"/>
    <h:outputText value="Coords:"/>
    <c:if test="${geoIpBackingBean.geoIp.ipAddress != null}">
        <h:outputText value="${geoIpBackingBean.geoIp.latitude},${geoIpBackingBean.geoIp.longitude}"/>
    </c:if>
</h:panelGrid>

The last step to perform is to add a weblogic.xml deployment descriptor with a library-ref to the [jax-rs,2.0] shared-library, which must be deployed as I described earlier in Using JAX-RS 2.0 in WebLogic Server 12.1.3.

The application is now ready to deploy and run.

JSON Parsing is Cake with WebLogic Server 12.1.3

Steve Button - Wed, 2014-03-26 22:04

Please note: this blog discusses WebLogic Server 12.1.3
which has not yet been released.

Another feature of WebLogic Server 12.1.3 that I'm sure developers will find useful once it is released is the inclusion of an implementation of JSR-353 Java API for JSON Processing.

JSR 353: Java™ API for JSON Processing

This new API, working from the foundations provided by earlier implementations such as Jackson, Jettison and Google Gson, provides a standard API for working with JSON from Java. The goals and objectives of the API are described in the specification request as:
 JSON(JavaScript Object Notation) is a lightweight data-interchange format.

Many popular web services use JSON format for invoking and returning the data.

Currently Java applications use different implementation libraries to produce/consume JSON from the web services. Hence, there is a need to standardize a Java API for JSON so that applications that use JSON need not bundle the implementation libraries but use the API. Applications will be smaller in size and portable.

The goal of this specification is to develop such APIs to:
  • Produce and consume JSON text in a streaming fashion (similar to StAX API for XML)
  • Build a Java object model for JSON text using API classes (similar to DOM API for XML)
WebLogic Server 12.1.3 includes a module which contains the API/implementation of this relatively lightweight but important API, enabling developers and applications to more easily work with JSON in a portable, standard manner.
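As a tiny sketch of the object-model side of the API (the class and builder names are from the standard JSR-353 javax.json package, which must be on the classpath):

```java
import javax.json.Json;
import javax.json.JsonObject;

public class JsonObjectDemo {
    public static void main(String[] args) {
        // build a JSON object programmatically, DOM-style
        JsonObject obj = Json.createObjectBuilder()
                .add("city", "Mountain View")
                .add("latitude", 37.4192)
                .build();
        // serialize it back to JSON text
        System.out.println(obj);
    }
}
```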

 Unlike JAX-RS 2.0 and JPA 2, both of which have pre-existing specification versions that need to be supported by default, there are no additional steps required for applications to use this API with WebLogic Server 12.1.3.  It's simply included as a default module of the server and available for any application to make use of.
The API and implementation is located in this jar file in a WebLogic Server 12.1.3 installation:


In my previous post, Using the JAX-RS 2.0 Client API with WebLogic Server 12.1.3, I have a short example of using the API to parse a JAX-RS supplied InputStream to unmarshall a JSON payload into a Java object.

GeoIp g = new GeoIp();
JsonParser parser = Json.createParser(entityStream);
String key = null;
while (parser.hasNext()) {
    switch (parser.next()) {
        case KEY_NAME:
            key = parser.getString();
            break;
        case VALUE_STRING:
        case VALUE_NUMBER:
            // assign parser.getString() to the GeoIp property matching key:
            // "ip", "country_name", "latitude", "longitude", "region_name", "city", "zipcode"
            break;
    }
}

The Java EE 7 tutorial has a section showing how to use the new javax.json API which is well worth having a look at if working with JSON is your thing.

Arun Gupta also has a good hands-on lab under development for Java EE 7 that uses the JSON API to read and write JSON into Java objects that represent a movie database. His examples work with JAX-RS to issue both GET and POST calls to read and update data using a JSON payload.

I plan to post a more detailed example of using the API as WebLogic Server 12.1.3 is released.

But for now, the good news is that we are supporting this important new API in our upcoming release of WebLogic Server.

Oracle 12c Auditing Chapters

Bobby Durrett's DBA Blog - Wed, 2014-03-26 17:51

Spent a good amount of time yesterday and today reading about auditing in Oracle 12c.  Can’t say I read every word, but I think it was worth reading the three chapters in the Security manual related to auditing:

Chapter 21 Introduction to Auditing
Chapter 22 Configuring Audit Policies
Chapter 23 Administering the Audit Trail

I haven’t used these features but it seems like a major new piece of code with the Unified Audit Trail.

I also read this chapter of the VLDB guide because it seemed to have a lot of things that were either new to 12c or new to me:

Chapter 5 Managing and Maintaining Time-Based Information

This chapter describes features that cause data to age out and get moved on to less expensive storage automatically over time.

Anyway, just wanted to pass on some full chapters that I’ve read and am pondering as I try to comprehend the new 12c features.

- Bobby






Oracle Memory Troubleshooting, Part 4: Drilling down into PGA memory usage with V$PROCESS_MEMORY_DETAIL

Tanel Poder - Wed, 2014-03-26 16:58

If you haven’t read them – here are the previous articles in the Oracle memory troubleshooting series: Part 1, Part 2, Part 3.

Let’s say you have noticed that one of your Oracle processes is consuming a lot of private memory. The V$PROCESS view has PGA_USED_MEM / PGA_ALLOC_MEM columns for this. Note that this view will tell you what Oracle thinks it’s using – how many allocated/freed bytes it has kept track of. While this doesn’t always tell you the true memory usage of a process, as other non-Oracle-heap allocation routines and the OS libraries may allocate (and leak) memory of their own, it’s a good starting point and usually enough.

Then, the V$PROCESS_MEMORY view allows you to see a basic breakdown of that process’s memory usage – is it for SQL, PL/SQL, Java, unused (Freeable) or for “Other” reasons. You can use either the smem.sql or pmem.sql scripts for this (they report v$process_memory for a SID or OS PID):
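For reference, the essence of such a script (a sketch, not necessarily the actual smem.sql) is just a join of v$session, v$process and v$process_memory:

```sql
-- approximate what smem.sql reports for a given SID
SELECT s.sid, pm.category, pm.allocated, pm.used, pm.max_allocated
FROM   v$session s, v$process p, v$process_memory pm
WHERE  s.paddr = p.addr
AND    p.pid   = pm.pid
AND    s.sid   = 198;
```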

SQL> @smem 198
Display session 198 memory usage from v$process_memory....

       SID        PID    SERIAL# CATEGORY         ALLOCATED       USED MAX_ALLOCATED
---------- ---------- ---------- --------------- ---------- ---------- -------------
       198         43         17 Freeable           1572864          0
       198         43         17 Other              5481102                  5481102
       198         43         17 PL/SQL                2024        136          2024
       198         43         17 SQL              117805736  117717824     118834536

From the above output we see that this session has allocated over 100 MB of private memory for “SQL” reasons. This normally means SQL workareas, so we can break it down further by querying V$SQL_WORKAREA_ACTIVE, which shows all currently in-use cursor workareas in the instance. I’m using a script wrka.sql for convenience – and listing only my SID’s workareas:

SQL> @wrka sid=198
Show Active workarea memory usage for where sid=198...

---------- ---------- ---------- ---------- ------------- ------------------------------ ---------- ------------------------ ---------- --------------- ------------ -------------- ------------- ------------ ------------------------------
         1        198                       ff8v9qhv21pm5 SORT (v2)                               1 AUTO                           14.6        64741376    104879104       97623040             0   2253389824 TEMP
         1        198                       ff8v9qhv21pm5 HASH-JOIN                               6 AUTO                           14.8         1370112      1370112        2387968             0
         1        198                       ff8v9qhv21pm5 BUFFER                                 25 AUTO                           14.8        11272192     11272192       11272192             0

The ACTUAL_MEM_USED column above shows the memory currently used by this workarea (which happens to be the SORT (v2) operation in that cursor’s execution plan line #1). It was only about 64 MB at the time I got to query this view, but MAX_MEM_USED shows it was about 100 MB at its peak. This can happen due to multipass operations, where the merge phase may use less memory than the sort phase, or because once the sort completed and the rowsource was ready to start sending sorted rows back, not that much memory was needed anymore for just buffering the blocks read from TEMP (the sort_area_size vs. sort_area_retained_size thing from the past).

For completeness, I also have a script called wrkasum.sql that summarizes the workarea memory usage of all sessions in an instance. So if you’re not interested in a single session, but rather in a summary of which operation types tend to consume the most memory, you can use that:

SQL> @wrkasum
Top allocation reason by PGA memory usage

------------------- ----------- ------------- -------------- ---------- ---------- ---------- ------------ 
SORT (v2)           AUTO                   58            100       1525          0          1            1            
BUFFER              AUTO                   11             11                     0          1            1            
HASH-JOIN           AUTO                    1              2                     0          1            1

You may want to modify the script to change the GROUP BY to SQL_ID if you want to list the top workarea-memory-consuming SQL statements across the whole instance (or any other column of interest – like QC_INST_ID/QCSID).

But what about the following example:

SQL> @pmem 27199
Display process memory usage for SPID 27199...

       SID SPID                            PID    SERIAL# CATEGORY         ALLOCATED       USED MAX_ALLOCATED     CON_ID
---------- ------------------------ ---------- ---------- --------------- ---------- ---------- ------------- ----------
      1516 27199                           120        198 Freeable            786432          0                        0
      1516 27199                           120        198 Other            842807461                842807461          0
      1516 27199                           120        198 PL/SQL              421064      77296        572344          0
      1516 27199                           120        198 SQL                2203848      50168       2348040          0

Most of the memory (over 800MB) is consumed by category “Other”?! Not that helpful, huh? V$SQL_WORKAREA_ACTIVE didn’t show anything either as it deals only with SQL workareas and not all the other possible reasons why an Oracle process might allocate memory.

So we need a way to drill down into the Other category and see which allocation reasons have taken most of this memory. Historically this was only doable with a PGA/UGA memory heapdump and by aggregating the resulting dumpfile. You have to use oradebug to get the target process to dump its own private memory breakdown, as it’s private memory that other processes cannot just read directly. I have written about it in Part 1 of the Oracle memory troubleshooting series.

Update: an alternative to ORADEBUG is to use ALTER SESSION SET EVENTS ‘immediate trace name pga_detail_get level N’ where N is the Oracle PID of the process. 

However, starting from Oracle 10.2 you can get a similarly detailed breakdown by querying V$PROCESS_MEMORY_DETAIL – no need for post-processing tracefiles! But when you just query it, the view does not return any rows:

SQL> SELECT * FROM v$process_memory_detail;

no rows selected

Again, this is for the abovementioned reasons – your current process cannot just read the contents of some other process’s private memory; the OS ensures that. You will have to ask that target process to populate V$PROCESS_MEMORY_DETAIL with its memory allocation breakdown. You can do this by using the ORADEBUG DUMP PGA_DETAIL_GET command:

SQL> oradebug setmypid
Statement processed.
SQL> oradebug dump pga_detail_get 49
Statement processed.

The number 49 above is the Oracle PID (v$process.pid) of the target process I want to examine. The oradebug PGA_DETAIL_GET command will not immediately make the target process report its usage – it merely sets a flag somewhere that the target process itself checks when it is active. In other words, if the target process is idle or sleeping for a long time (due to some lock, for example), then it won’t populate the V$ view with the required data. In my test environment, V$PROCESS_MEMORY_DETAIL got populated only after I ran another dummy command in the target session. This shouldn’t be an issue if you are examining a process that’s actively doing something (and not idle/sleeping for a long time).

The output below is from another dummy demo session that wasn’t using much of memory:

SQL> SELECT * FROM v$process_memory_detail ORDER BY pid, bytes DESC;

       PID    SERIAL# CATEGORY        NAME                       HEAP_NAME            BYTES ALLOCATION_COUNT HEAP_DES PARENT_H
---------- ---------- --------------- -------------------------- --------------- ---------- ---------------- -------- --------
        49          5 Other           permanent memory           pga heap            162004               19 11B602C0 00
        49          5 SQL             QERHJ Bit vector           QERHJ hash-joi      131168                8 F691EF4C F68F6F7C
        49          5 Other           kxsFrame4kPage             session heap         57736               14 F68E7134 11B64780
        49          5 SQL             free memory                QERHJ hash-joi       54272                5 F691EF4C F68F6F7C
        49          5 Other           free memory                pga heap             41924                8 11B602C0 00
        49          5 Other           miscellaneous                                   39980              123 00       00
        49          5 Other           Fixed Uga                  Fixed UGA heap       36584                1 F6AA44B0 11B602C0
        49          5 Other           permanent memory           top call heap        32804                2 11B64660 00
        49          5 Other           permanent memory           session heap         32224                2 F68E7134 11B64780
        49          5 Other           free memory                top call heap        31692                1 11B64660 00
        49          5 Other           kgh stack                  pga heap             17012                1 11B602C0 00
        49          5 Other           kxsFrame16kPage            session heap         16412                1 F68E7134 11B64780
        49          5 Other           dbgeInitProcessCtx:InvCtx  diag pga             15096                2 F75A8630 11B602C0

The BYTES column shows the sum of memory allocated from the private memory heap HEAP_NAME for the reason shown in the NAME column. If you want to know the average allocation (chunk) size in the heap, divide BYTES by ALLOCATION_COUNT.
For example, the top PGA memory user in that process is an allocation called “permanent memory”, 162004 bytes taken straight from the top-level “pga heap”. It probably contains all kinds of low-level runtime allocations that the process needs for its own purposes. It may be possible to drill down into the subheaps inside that allocation with the Oracle memory top-5 subheap dumping I have written about before.

The 2nd biggest memory user is in category SQL – “QERHJ Bit vector” allocation, 131168 bytes allocated in 8 chunks of ~16kB each (on average). QERHJ should mean Query Execution Row-source Hash-Join and the hash join bit vector is a hash join optimization (somewhat like a bloom filter on hash buckets) – Jonathan Lewis has written about this in his CBO book.
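As a quick illustration of the BYTES / ALLOCATION_COUNT arithmetic, here is a small Python sketch using two rows copied from the output above:

```python
# Average allocation (chunk) size = BYTES / ALLOCATION_COUNT,
# using two rows from the v$process_memory_detail output above.
rows = [
    ("QERHJ Bit vector", 131168, 8),
    ("kxsFrame4kPage",    57736, 14),
]

for name, total_bytes, alloc_count in rows:
    avg = total_bytes / alloc_count
    print(f"{name}: ~{avg:.0f} bytes per chunk")  # 16396 and 4124 respectively
```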

I do have a couple of scripts that automate running the ORADEBUG command, waiting for a second so that the target process has a chance to publish its data in V$PROCESS_MEMORY_DETAIL, and then querying it. Check out smem_detail.sql and pmem_detail.sql.
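The wait-and-requery pattern those scripts use can be sketched as a generic polling helper. This is just an illustration of the idea, not the actual script code; the status strings are made up:

```python
import time

def poll_until(poll_fn, is_done, attempts=5, interval=1.0):
    """Call poll_fn() up to `attempts` times, `interval` seconds apart,
    until is_done(result) is true. This mirrors waiting for the target
    process to wake up and publish its rows in V$PROCESS_MEMORY_DETAIL."""
    for _ in range(attempts):
        result = poll_fn()
        if is_done(result):
            return result
        time.sleep(interval)
    return None  # the target process never woke up to publish its data

# Example: a fake status source that becomes COMPLETE on the third poll
samples = iter(["PENDING", "PENDING", "COMPLETE"])
print(poll_until(lambda: next(samples), lambda s: s == "COMPLETE", interval=0.01))
```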

Now, let’s look into a real example from a problem case – a stress test environment on Oracle 12c:

SQL> @smem 1516
Display session 1516 memory usage from v$process_memory....

       SID        PID    SERIAL# CATEGORY         ALLOCATED       USED MAX_ALLOCATED     CON_ID
---------- ---------- ---------- --------------- ---------- ---------- ------------- ----------
      1516        120        198 Freeable            786432          0                        0
      1516        120        198 Other            844733773                844733773          0
      1516        120        198 PL/SQL              421064      77296        572344          0
      1516        120        198 SQL                 277536      45904       2348040          0

The “Other” memory usage of this session has grown to over 800 MB!

Let’s drill down deeper. The script warns that it’s experimental and asks you to press enter to continue as it’s using ORADEBUG. I haven’t seen any problems with it, but use it at your own risk (and stay away from critical background processes on production systems)!

SQL> @smem_detail 1516

WARNING! About to run an undocumented ORADEBUG command
for getting heap details.
This script is EXPERIMENTAL, use at your own risk!

Press ENTER to continue, or CTRL+C to cancel

PL/SQL procedure successfully completed.


If the status above is not COMPLETE then you need to wait
for the target process to do some work and re-run the
v$process_memory_detail query in this script manually
(or just take a heapdump level 29 to get heap breakdown
in a tracefile)

       SID CATEGORY        NAME                       HEAP_NAME            BYTES ALLOCATION_COUNT
---------- --------------- -------------------------- --------------- ---------- ----------------
      1516 Other           permanent memory           qmxlu subheap    779697376           203700
      1516 Other           free memory                qmxlu subheap     25960784           202133
      1516 Other           XVM Storage                XVM subheap of     5708032               51
      1516 Other           free memory                session heap       2722944              598
      1516 Other           permanent memory           pga heap            681992               36
      1516 Other           qmushtCreate               qmtmInit            590256                9
      1516 Other           free memory                top uga heap        449024              208
      1516 Other           qmtmltAlloc                qmtmInit            389680             1777
      1516 Other           permanent memory           kolarsCreateCt      316960               15
      1516 Other           free memory                pga heap            306416               17
      1516 Other           miscellaneous                                  297120              105
      1516 Other           permanent memory           qmxtgCreateBuf      279536               73
      1516 Other           free memory                koh dur heap d      239312              134
      1516 Other           kxsFrame4kPage             session heap        232512               56
      1516 Other           permanent memory           qmcxdDecodeIni      228672               21
      1516 Other           permanent memory           qmxtigcp:heap       215936              730
      1516 Other           permanent memory           session heap        189472               28
      1516 Other           free memory                lpxHeap subhea      182760               32
      1516 Other           kfioRqTracer               pga heap            131104                1
      1516 Other           free memory                top call heap       129312                4
      1516 PL/SQL          recursive addr reg file    koh-kghu sessi      110592               10
      1516 Other           free memory                callheap            109856                4
      1516 Other           koh-kghu session heap      session heap         88272               36
      1516 Other           Fixed Uga                  pga heap             72144                1
      1516 PL/SQL          PL/SQL STACK               PLS PGA hp           68256                4

Well, there you go: the power of measuring and profiling. Most of that big memory usage comes from something called the qmxlu subheap. Now, while this name is cryptic and we don't know what it means, we are already half-way there: we at least know what to focus on. We can ignore all the other hundreds of cryptic memory allocations in the output and just try to figure out what “qmxlu subheap” is. A quick MOS search might tell us, and if there are known bugs related to this memory leak, you might just find what's affecting you right away (as Oracle support analysts may have pasted some symptoms, patch info and workarounds into the bug note):


Indeed, there are plenty of results in MOS, and while browsing through them to find the one matching our symptoms and environment most closely, I looked into this: ORA-4030 With High Allocation Of “qmxdpls_subheap” (Doc ID 1509914.1). It came up in the search as the support analyst had pasted a recursive subheap dump containing our symptom, “qmxlu subheap”:

Summary of subheaps at depth 2
5277 MB total:
 5277 MB commented, 128 KB permanent
 174 KB free (110 KB in empty extents),
   2803 MB, 1542119496 heaps:   "               "
   1302 MB, 420677 heaps:   "qmxlu subheap  "
    408 MB, 10096248 chunks:  "qmxdplsArrayGetNI1        " 2 KB free held
    385 MB, 10096248 chunks:  "qmxdplsArrayNI0           " 2 KB free held

In this note, the reference bug had been closed as “not a bug” and hinted that it may be an application issue (an application “object” leak) instead of an internal memory leak that causes this memory usage growth.


The cause of this problem has been identified in:
closed as “not a bug”. The problem is caused by the fact that the XML document is created with XMLDOM.CREATEELEMENT, but after creation XMLDOM.FREEDOCUMENT is not called. This causes the XML used heaps to remain allocated. Every new call to XMLDOM.CREATEELEMENT will then allocate a new heap, causing process memory to grow over time, and hence cause the ORA-4030 error to occur in the end.


To implement a solution for this issue, use XMLDOM.FREEDOCUMENT to explicitly free any explicitly or implicitly created XML document, so the memory associated with that document can be released for reuse.

And indeed, in our case it turned out that it was an application issue – the application did not free the XMLDOM documents after use, slowly accumulating more and more open document memory structures, using more memory and also more CPU time (as, judging by the ALLOCATION_COUNT figure in smem_detail output above, the internal array used for managing the open document structures had grown to 203700). Once the application object leak issue was fixed, the performance and memory usage problem went away.


V$PROCESS_MEMORY_DETAIL allows you to conveniently dig deeper into process PGA memory usage. The alternative is to use Oracle heapdumps. A few more useful comments about it are in an old Oracle-L post.

Normally my process memory troubleshooting & drilldown sequence goes like that (usually only steps 1-2 are enough, 3-4 are rarely needed):

  1. v$process / v$process_memory / top / ps
  2. v$sql_workarea_active
  3. v$process_memory_detail or heapdump_analyzer
  4. pmap -x at OS level

#1,2,3 above can show you “session” level memory usage (assuming that you are using dedicated servers with 1-1 relationship between a session and a process) and #4 can show you a different view into the real process memory usage from the OS perspective.

Even though you may see cryptic allocation reason names in the output, if reason X causes 95% of your problem, you’ll need to focus on finding out what X means and don’t need to waste time on anything else. If there’s an Oracle bug involved, a MOS search by top memory consumer names would likely point you to the relevant bug right away.

Oracle troubleshooting is fun!

Note that this year’s only Advanced Oracle Troubleshooting class takes place in late April/May 2014, so sign up now if you plan to attend this year!


Two-Year Anniversary of Blackboard Acquisition of Moodlerooms and NetSpot

Michael Feldstein - Wed, 2014-03-26 16:39

Two years ago today, Blackboard made a dramatic change of course with a series of public announcements:

At the time I described these changes:

Most of the discussion in articles and blogs follows the meme of Blackboard entering open source, or even the meme of Blackboard acquiring competitors. I think the news is more significant than either of these two memes.

Blackboard just did a 180-degree turn on their strategy for their core LMS business. They have moved from consolidating all customers into Learn 9.1 to providing products and services that are almost LMS-agnostic.

Archive of Statements

Given this dramatic turn of events, I wrote an additional post that captured the public statements (press releases, blog posts) from Blackboard, Moodlerooms, NetSpot, and even Blackboard competitors for the purpose of checking to see if the acquisitions really did signal a true change in strategy and support for open source. This two-year anniversary seems the perfect time to check up.

Bb’s Previous Open Source View

Just how big of a change did the announcements represent? Consider Blackboard’s moves regarding alternative LMS solutions in the previous six years.

  • Blackboard acquired WebCT, their biggest competitor, in 2006.
  • Blackboard sued Desire2Learn over patent infringement from 2006 to 2009.
  • Blackboard acquired ANGEL in 2009.
  • Blackboard argued that Moodle was just meeting a minimum standard of competence as an LMS.

On that last point, note that less than three years prior to the acquisition of Moodlerooms and NetSpot, Blackboard publicly argued that Moodle was not a viable enterprise system, as described and quoted by Michael, in response to a study from the North Carolina Community College System (snippets below are quotes from Blackboard).

“The Moodle product roadmap offers only a short-term view into the direction of future product upgrades and is controlled by a single person, Martin Dougiamas.” [snip]

“Global innovation is by nature not static and moves at a rapid pace. Open source may meet some minimum standard of LMS competency, but only Blackboard can grow, adapt, and evolve with the customized and ever changing needs of your [NCCCS] students and faculty”

Bb’s Current Open Source View

Fast forward to March 2012, and Blackboard’s press release shows a remarkable change in its dealings with Dougiamas.

Leaders from each company recently traveled to Perth, Australia to meet with Martin Dougiamas, founder of Moodle and Managing Director of Moodle Pty Ltd, and present their plans. The meeting included Blackboard CEO Michael Chasen and Chief Technology Officer Ray Henderson, Moodlerooms CEO Lou Pugliese and Chief Architect Tom Murdock, and NetSpot Managing Director Allan Christie.

In a recent phone interview Mark Strassman called out the continued support for Moodle by stating:

It’s really clear that one size does not fit all, and we’re thrilled that we can help those who want Moodle, giving them a mechanism for support. Having different platforms for different types of customers makes sense.

Change in Management

In the past two years most of the senior leadership involved in the acquisitions has changed, yet the basic approach to open source seems to remain the same.

  • Michael Chasen and Ray Henderson are gone from Blackboard (at least operational management of products, as Ray is still on the board), replaced by Jay Bhatt, Mark Strassman and Gary Lang (the team formerly together at AutoDesk).
  • Lou Pugliese, David Mills, and Tom Murdock are gone from Moodlerooms, although Phill Miller remains.
  • The original NetSpot leadership, however, has actually grown in importance in the Blackboard organization. Allan Christie is now General Manager, Mark Drechsler is now in charge of consulting services, and Stephen Watt is in charge of sales & marketing — for most of Blackboard’s products and not just Moodle support. This support includes bringing Moodlerooms services into Australia.
  • Charles Severance recently announced his departure from Blackboard.

Statement of Principles

In terms of open source support, the most important document to review is Blackboard’s statement of principles signed by Chasen, Henderson, Pugliese, Christie and Severance (note that only one remains). This short document was directly aimed at countering skepticism about Blackboard’s actual intents with the acquisitions.

We are committed to supporting the growth, development and use of open source technologies in education worldwide. We expect to make significant contributions to the community to help ensure that open source options remain strong, reliable and sustainable for all. Our work will be guided by the following principles:

1. We will work in a way that supports and honors the values of the communities we serve, and will continue to participate and support important community gatherings.

2. We will continue contributions of code from products we support and from in-kind development activities conducted in partnership with community organizations like Moodle Trust.

3. We will continue to focus on supporting open standards through organizations like the IMS Global Learning Consortium to ensure interoperability and eliminate vendor lock-in for all LMS options, commercial and open source.

4. We will provide financial support for Moodle Trust and other open source organizations as our strategy evolves.

5. We will work to deliver innovative, visually elegant, and technologically robust education solutions to clients regardless of whether they are open source, proprietary, or a blend between them.

On principles 1–4, I would argue that Blackboard has met their commitments, with support for MoodleMoots, investment in data centers, contribution of code (LTI plugin, common cartridge backup/restore, outcomes system), and development and support of LTI 2.0 code. Principle 5 in my opinion is really a marketing statement subject to evaluation by customers.

Severance, in his departure note, pointed out his view of Blackboard supporting open source communities even beyond Moodle.

Blackboard is a great supporter of open source, Sakai and open standards, and its investments helped move Sakai 2.9 towards its ultimate release. [snip]

Arguably the most significant effort supported by Blackboard was my work on IMS LTI 2.0.

Financial Contributions

On the financial contribution (principle #4), there are some indications that this support is quite significant. At a recent presentation to Moodle partners, it was claimed that the combined Moodlerooms and NetSpot contributions to Moodle Trust were more than 50% of the total partner contributions.

NOTE: I have not been able to confirm this information from additional sources, and every person I asked during interviews declined to speak on the subject. I went back to the Twitter stream, and it appears that the original tweets (and therefore retweets) have been deleted. Open source does not equal open books, apparently.

Whether this information is correct or not, Blackboard has certainly maintained their financial support of Moodle through the partners program. If the information is accurate, however, there is a risk that Blackboard is contributing too much financial support for Moodle Trust.

Change in Emphasis

One change worth noting with the new Blackboard executive team is the change in emphasis on international markets. Jay Bhatt has stated to me that Blackboard has under-penetrated international markets, especially in growth areas such as Latin America and China. The company has pegged much of its growth potential overseas, and according to Bhatt, Moodle is a natural fit in this role. Moodle already has a strong international base of users, and many markets cannot afford or support a full-scale Learn LMS deployment. The same could be said for U.S. K-12 markets.

Blackboard has also kept their commitment (to date) in supporting the ANGEL platform, which I described in more detail here. Some customers have pointed to this June 2012 message from Blackboard as somewhat of a mixed message for ANGEL clients:

Previously we had announced an end of life target for Release 8.0 of the ANGEL Edition LMS for October 2014. In order to ensure that you have sufficient time to plan and execute a move to Blackboard Learn 9.1, we are extending that end of life window beyond the previously announced EOL date. Blackboard will evaluate the decision on an ongoing basis and provide sufficient notice of any future change to plans for support. We plan to make select enhancements to ANGEL 8.0 over time to ensure that the solution is a viable platform until you are ready to plan a migration.

Nevertheless, it is two years after the announcement, ANGEL is still supported, and Blackboard’s management still encourages the usage of ANGEL. In a recent phone interview Gary Lang stated:

Customers are happy with ANGEL, people at Blackboard are still working on the software, and there is no end-of-life (EOL) planned. For many customers the LMS is not just about feature parity — some are religious about their choices — and Blackboard is fine with this choice.

Does It Matter?

While Blackboard has kept their word and made a major change in strategy, the question arises of whether that matters. According to the Campus Computing Surveys for 2011 and 2013, Blackboard’s market share (combining the Learn, WebCT, and ANGEL product lines) has continued to fall in the US over the past two years, from 51% of institutions to 41%. Moodle has risen from 19% to 23% (these numbers do not separate out the market share of Moodlerooms clients, just usage of Moodle). Within US higher education, at least, Blackboard has not succeeded in stopping the erosion of their market share, but they have tapped into a still-growing Moodle community.

It is difficult to evaluate the company’s fortunes in K-12, professional ed (corporate and for-profit clients) and international markets without reliable market numbers. Phill Miller and Allan Christie stated that Moodlerooms and NetSpot have continued to grow since the acquisition.

Bhatt, Strassman and Lang all emphasize that Blackboard is no longer primarily an LMS company. As Strassman indicated in our call, Blackboard is “expanding its purview beyond the LMS to the broader teaching and learning continuum”. As they do so, the company will focus on interoperability and providing choices in platforms.

At the end of the day, this story is not that dramatic. Blackboard has kept their word, followed their stated principles, and kept their support of multiple LMS solutions including open source over the past two years. What is more interesting is to step back and see this update as confirmation on just how big of a change Blackboard made two years ago.

The post Two-Year Anniversary of Blackboard Acquisition of Moodlerooms and NetSpot appeared first on e-Literate.

Introducing obi-metrics-agent – an Open-Source OBIEE Metrics Collector

Rittman Mead Consulting - Wed, 2014-03-26 14:47

Understanding what is going on inside OBIEE is important for being able to diagnose issues that arise, monitor its health, and dig deep into its behaviour under stress in a load test. OBIEE exposes a set of metrics through the Dynamic Monitoring Service (DMS), viewable through Enterprise Manager (EM) Fusion Middleware Control. EM is a great tool but doesn’t meet all requirements for accessing these metrics, primarily because it doesn’t retain any history.

obi-metrics-agent is a tool that extracts OBIEE’s performance metrics from the Dynamic Monitoring Service (DMS) functionality. Venkat wrote the original version, which I have rewritten in Python and extended. It polls DMS on a specified interval and outputs the data in a variety of formats. It was written to aid OBIEE performance monitoring, either as part of testing or for longer-term use. Its features include:

  • Multiple output options, including CSV, XML, and Carbon (for rendering in Graphite etc)
  • Parse data as it is collected, or write to disk
  • Parse data collected previously


How does it work?

obi-metrics-agent is written in Python and uses the documented OPMN functionality for exposing DMS metrics (opmnctl metric op=query). We experimented with the WLST route but found the overhead was too great. OPMN supplies the DMS data as a large XML message, which obi-metrics-agent can either store raw or parse out into the constituent metrics. It can write these to CSV or XML files, generate INSERT statements for sending them to a database, or send them to Graphite (see below).

obi-metrics-agent can also parse previously-extracted raw data, so if you want to store data in graphite but don’t have the server to hand at execution time it can be loaded retrospectively.
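The parsing step can be sketched roughly as follows. Note that the XML layout below is a simplified, made-up stand-in for the real opmnctl output, whose exact structure varies by version:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Simplified stand-in for the XML that `opmnctl metric op=query` returns;
# the real DMS document is much larger and its layout varies by version.
RAW = """<metrics>
  <noun name="Oracle_BI_General">
    <metric name="Active_Sessions" value="12"/>
    <metric name="Total_sessions" value="340"/>
  </noun>
</metrics>"""

def parse_metrics(raw_xml):
    """Flatten the metric tree into (group, metric, value) tuples."""
    root = ET.fromstring(raw_xml)
    for noun in root.iter("noun"):
        for m in noun.iter("metric"):
            yield noun.get("name"), m.get("name"), m.get("value")

# Write the parsed metrics out as CSV rows
out = io.StringIO()
writer = csv.writer(out)
for row in parse_metrics(RAW):
    writer.writerow(row)
print(out.getvalue().strip())
```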


On which platforms does it work?
  • Tested thoroughly on Oracle Linux 5 and 6
  • Works on Windows 2003, should work on later versions
Which OBI metrics are collected?

All of the ones that OPMN supports: currently BI Server and BI Presentation Services, plus the opmn process metrics (such as the CPU time of each OBI component).

To explore the DMS metrics available, you can use Enterprise Manager, or the DMS Spy servlet that is installed by default with OBIEE and available at http://<obi-server>:7001/dms/ (assuming your AdminServer is on port 7001).


I have used DMS metrics primarily when investigating OBIEE’s behaviour under stress in performance testing, but some of the higher-level metrics are useful for day-to-day monitoring too. The DMS metrics let you peer into OBIEE’s workings and deduce or hypothesise the cause of behaviour you are seeing.

  • How many users does OBIEE see as logged in?
  • How many active requests are there from Presentation Services to BI Server (nqserver)?
  • How many active connections are there from each Connection Pool to the database? How many queued connections?
  • What’s the average (careful…) response time by database?
  • What’s the error rate for Presentation Services queries?
  • How does the memory profile of each OBIEE component behave during testing?
  • How are the BI Server’s internal thread pools coping with the load? Do they need resizing?
  • How many queries per second are being run on each database?
  • How is the graphing engine behaving? Is it queuing requests?
  • What’s the Presentation Services and BI Server cache hit rate?
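Many of the underlying DMS counters are cumulative, so questions like “how many queries per second” or “what’s the average response time” have to be answered from the deltas between two samples. A minimal sketch, with made-up sample values:

```python
# Rate and average computed from two cumulative counter samples taken
# `interval` seconds apart. The metric values below are made up.
def rate(prev, curr, interval):
    """Events per second over the sampling window."""
    return (curr - prev) / interval

def avg_response(prev_time_ms, curr_time_ms, prev_count, curr_count):
    """Average ms per completed request within the window."""
    completed = curr_count - prev_count
    if completed == 0:
        return None  # nothing finished in this window; no meaningful average
    return (curr_time_ms - prev_time_ms) / completed

# 5-second sampling interval: 600 -> 650 queries, 90000 -> 97500 cumulative ms
print(rate(600, 650, 5))                     # queries per second in the window
print(avg_response(90000, 97500, 600, 650))  # average ms per query in the window
```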
Sounds great, where do I get it?

Rittman Mead have released obi-metrics-agent as open source. You can find it on GitHub:

Simply clone the repository and run the python script. You need to install the lxml library first – full details are supplied in the repository.

$ export OPMN_BIN=$FMW_HOME/instances/instance1/bin/opmnctl
$ python ./

# ===================================================================
# Developed by @rmoff / Rittman Mead (
# Absolutely no warranty, use at your own risk
# Please include this notice in any copy or reuse of the script you make
# ===================================================================

Output format             : csv
raw/csv/xml/carbon/sql    : False/True/False/False/False
Data dir                  : ./data
FMW instance              : None
OPMN BIN                  : /u01/app/oracle/product/fmw/instances/instance1/bin/opmnctl
Sample interval (seconds) : 5

--Gather metrics--
        Time of sample: Wed, 26 Mar 2014 10:38:38 +0000 (1395830318)

        Get metrics for coreapplication_obips1
                 Processed :    469 data values @ Wed, 26 Mar 2014 10:38:38 +0000       Oracle BI Presentation Server
                        Appended CSV data to ./data/metrics.csv
        Get metrics for coreapplication_obis1
                 Processed :    230 data values @ Wed, 26 Mar 2014 10:38:38 +0000       Oracle BI Server
                        Appended CSV data to ./data/metrics.csv
        Get metrics for opmn
                 Processed :    91 data values @ Wed, 26 Mar 2014 10:38:38 +0000        opmn
                        Appended CSV data to ./data/metrics.csv

        Processed: 3    Valid: 3 (100.00%)      Invalid: 0 (0.00%)

See the next blog post for a step-by-step on getting it set up and running using SampleApp v309R2 as an example server, including with Graphite and Collectl for visualising the data and collecting OS stats too.

Visualising the collected data – Graphite

Graphite is an open-source graphing tool that comes with a daemon called carbon, which receives incoming data and stores it in its own time-series database (called whisper). Graphite is a very popular tool, meaning there’s lots of support out there for it and additional tools written to complement it, some of which I’ll be exploring in later blog posts. It’s also very easy to get data into Graphite, and because it stores a time series you can then display OBIEE DMS data alongside anything else you may have – for example, OS metrics from collectl, or JMeter performance test counters.
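For reference, carbon’s plaintext protocol is trivially simple: one “path value unix-timestamp” line per datapoint, sent to the carbon listener (TCP port 2003 by default). A small sketch, with a made-up metric path:

```python
import time

def carbon_line(path, value, ts=None):
    """Format one datapoint in carbon's plaintext protocol:
    '<metric.path> <value> <unix-timestamp>\n'."""
    ts = int(ts if ts is not None else time.time())
    return f"{path} {value} {ts}\n"

line = carbon_line("obi.obips1.Sessions.Current_Sessions", 42, ts=1395830318)
print(line, end="")

# To actually send it (assumes a carbon daemon listening on localhost:2003):
#   import socket
#   with socket.create_connection(("localhost", 2003)) as s:
#       s.sendall(line.encode())
```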

obi-metrics-agent architecture

Whilst obi-metrics-agent can be used on its own, with data stored to CSV for subsequent parsing or accessed in Oracle as an external table, the focus of this and subsequent blog posts will primarily be on obi-metrics-agent writing data to Graphite, and the benefits this brings when it comes to visualising it.

“Graphite?! Haven’t we got a visualisation tool already in OBIEE?”

You can graph this data out through OBIEE, if you want. The OBI metric data can be loaded in by external table from the CSV files, or using the generated INSERT statements.

The benefit of using Graphite is twofold:

  1. Its primary purpose is graphing time-based metrics. Metrics in, time-based graphs out. You don’t need to build an RPD or model a time dimension. It also supports one-click rendering of wildcarded metric groups (for example, current connection count on all connection pools), as well as one-click transformations such as displaying deltas of a cumulative measure.
  2. It gives an alternative to a dependency on OBIEE. If we use OBIEE, we’re rendering the data on the very system that we’re monitoring, thus introducing the significant risk of just measuring the impact of our monitoring! And if we set up a second OBIEE server just for rendering graphs, it opens up the question of what’s the best graphing tool for this particular job, and Graphite is a strong option here.

Graphite just works very well in this particular scenario, which is why I use it….YMMV.


There are several obvious features that could be added so do please feel free to fork the repository and submit your own pull requests! Ideas include:

  • support for scaled-out clusters
  • init.d / run as a daemon
  • selective metric collection
Known Issues

Prior to OBIEE, there was a bug in the opmn process that sometimes causes corrupt XML, in some cases for as much as 15% of samples. On corrupt samples, the datapoint is just dropped.

The FMW patch from Oracle for this issue is 13055259.
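The drop-on-corruption handling amounts to something like the following sketch (the real script uses lxml rather than the standard library, but the idea is the same):

```python
import xml.etree.ElementTree as ET

def parse_sample(raw_xml):
    """Return the parsed DMS document, or None if opmn handed us
    corrupt XML, in which case the whole sample is dropped."""
    try:
        return ET.fromstring(raw_xml)
    except ET.ParseError:
        return None

# A well-formed sample parses; a truncated/corrupt one is dropped
assert parse_sample("<metrics/>") is not None
assert parse_sample("<metrics><noun name='x'") is None
```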



What next?

There are several more posts to come on this subject, including:

  1. Installing obi-metrics-agent, graphite, and collectl.
  2. Exploring the graphite interface, building simple dashboards
  3. Alternative front-ends to graphite
Categories: BI & Warehousing

Oracle Midlands : Event #2 Summary

Tim Hall - Wed, 2014-03-26 14:34

The second Oracle Midlands event took place last night.

The most important part of the evening was the opportunity to win an Oracle Press teddy bear.


Oh yeah, there were a couple of talks too…

I was planning to arrive early and go for a quick drink with my dad, but the Birmingham traffic put that plan to rest. Instead we just chatted a bit before the event started.

First up was my dad (Graham Wood), who dropped by on his way to the UKOUG Real World Performance Day in London, speaking about “Advanced ASH Architecture and Usage”. I’ve seen Graham doing this presentation a few times, but each time something new jumps out at me. If Graham came back next week and did the same talk, I would still go and watch it. Every year that goes by I find myself using AWR reports and SQL Trace less and less because I can get the information I need out of ASH.

After saying an emotional farewell to Graham, I drowned my sorrows in vegetable samosas and chatted to people, including Martin Widlake, who I assume was lost since he’s not from round these parts… :)

Next up was Nikolay Manchev, who drove up from London to speak about “Using Clusterware 11g to protect single-instance databases”. This was essentially using Oracle Clusterware to mimic RAC One Node without having to buy a RAC One Node license. Almost the whole of this presentation was live demonstration. I love doing demos, but I don’t think I am brave enough to do live demos of Oracle Clusterware and failover etc. Too much to go wrong on a little laptop. Nikolay laughs in the face of danger and jumps right into that stuff. :) He had a couple of little hiccups with the projector not playing well with his laptop, but he got through the demo and his cold failover of the database, listener and VIP worked just fine. Neat! :)

After the event was over, a few of us wandered over to a nearby pub and sat chatting for a couple of hours more. :)

All in all a really cool event! Here come the thank you messages.

  • Thanks to Mike for actually organising this event. I think you are doing a fantastic job!
  • Thank you to the speakers for coming along to the event.
  • Thanks to those kind folks at Red Gate Software, whose sponsorship allowed this to be a free event.
  • Thanks to Oracle Press for the raffle prizes of bears, books and t-shirts. I really wanted a bear, but I didn’t win.
  • Thanks to the attendees for coming, some from very far afield. Please keep coming and please keep the word-of-mouth about these events going. Local user groups like this live or die based on your support!

The next event should be on May 20th, with Christian Antognini doing both sessions. Christian is the author of probably my favourite technical book, Troubleshooting Oracle Performance, so I’m really looking forward to this. You never know, you might get to win the second edition of this book, which must be about to arrive… :) Keep an eye on the Oracle Midlands website for registration information.




Oracle Midlands : Event #2 Summary was first posted on March 26, 2014 at 9:34 pm.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

See you at AIIM 2014!

WebCenter Team - Wed, 2014-03-26 13:48


I hope you have the opportunity to attend the AIIM Conference this year and that we have the opportunity to meet you there. This year's event is in Orlando, Florida and promises to be a fast and informative three days of sessions.

In particular, if you are there, I hope you will join us for a session on Wednesday at Noon entitled "Information into Action - Moving Beyond Information Governance". I'll be speaking about the importance of extracting value from the information you probably already have locked away somewhere in your organization and suggestions on how to make the changes necessary to do so. Every session is about 30 minutes in length so it will be fast and hopefully informative!

Oracle is a platinum level sponsor of this year's event and we will be exhibiting in the solution showcase as well. Be sure to stop by and speak with us about your business challenges related to information management and process improvement opportunities, or even just to talk about restaurant options for dinner. We will be there!

We hope to see you there. 

C14 OakTable World Las Vegas

Pythian Group - Wed, 2014-03-26 12:55

If you haven’t yet made the decision to attend COLLABORATE 14 – IOUG Forum in Las Vegas taking place on 7-11 April, 2014 at the Venetian Hotel, this might just help you to make the call. You know you want to be there.

OakTable Network will be holding its OakTable World for the very first time during the COLLABORATE conference. While it’s a little bit last minute, IOUG was able to provide a room for us to use for the whole day, and we at OakTable quickly put the schedule together. The agenda is selected by the OakTable speakers based on the topics they are really passionate about. As history shows, this is generally what you also want to hear about.

The history of OakTable World comes from the underground event started by Mogens Nørgaard during Oracle OpenWorld somewhere around 2007-2009. After several years of success and growing popularity, the event has become known as OakTable World, run during OOW12 (link) and OOW13 (link), and was a huge success. Last year, we also ran OTWUK13 around the UKOUG TEBS 13 conference in Manchester. Needless to say it was a success.

Thanks to the many good companies sponsoring the events all those years — you know who you are. This year, the sponsor is really IOUG, who managed to find a room for us. I’ll probably gather a few usual suspects to print special t-shirts again, so stay tuned for that update — I wanted to get the info out there ASAP so that people could plan attending.

The up-to-date schedule is in Google Calendar. You can also sign up to the calendar feed using XML or iCal feeds.

Embedded schedule is below:

Note that we have two hours of TED-style lightning talks, with or without slides. Each talk is 10-15 minutes in which the speaker shares an innovative perspective relevant to the database industry. The talks are a mix of technical and non-technical topics. This format was first attempted at the UKOUG conference in 2011 and 2012, I believe, and the talks were very popular and well attended.

You will hear about some specific technologies or methods that are under-utilized, about provocative approaches to database tasks we have at hand, about disruptive trends and about totally non-technical things that will make you a better IT professional — stories of someone’s life or professional career. It’s usually entertaining and thought-provoking. I strongly recommend them.

As you can see, the speakers for the rest of the day are stellar. Frits Hoogland of Enkitec Europe fame comes to C14 all the way from Holland and he will start the day by getting very very very deep inside Oracle execution code. Frits will demonstrate live use of Linux GDB debugger techniques to diagnose and troubleshoot very complex scenarios when you really need to understand the internals of certain database operations. You really need your brain fresh for this session, so scheduling it first thing in the morning is the only reasonable way to reduce the risk of your brain being fried in the first 15 minutes. Hardcore folks will love this.

Next goes Kyle Hailey with his Agile Data story to teach us about modern techniques in making Oracle database deployment, cloning and the like as easy as deploying a Linux RPM package with YUM. Following that (now moved to 3pm), Graham Wood from Oracle will be presenting Unreal World Performance use cases themed after the popular Real World Performance Tour (but without the two other amigos), with a fresh look at bad practices of managing connections to Oracle databases — time for a refresher on those very popular RWP Youtube videos.

After lunch is when the Lightning talks will be running, along with a very cool session by Jeff Needham of Scalabilities where you can learn about disruptive trends in modern data warehousing and what customers are doing today (and more will be doing tomorrow) to scale their data processing in a cost-effective way: how vendors respond to Big Data technologies disrupting their traditional products and how customers handle it (note that I’m trying hard to be politically correct here — come, it will be very refreshing). By the way, Jeff will give away copies of his book (Disruptive Possibilities) during his Lightning talk as well.

Unfortunately Jeff won’t be able to be here. We might still get some of his books. However, we have Gwen Shapira presenting on loading Twitter data into Hadoop and analyzing it, with a complete live on-the-spot demo!

A late Karl Arao addition is also on the schedule!

Here is the schedule for the Lightning Talks. The schedule might change slightly between the two slots (and we still have one slot to be confirmed):

Lightning Talk Part I:

  • Kyle Hailey – “Importance of Data Visualization”
  • Jeff Needham – “Hadoop Myth-Busting (plus books giveaway)”
  • Tim Gorman – “Eagles and Seagulls” about being an independent IT consultant
  • Kellyn Pot’vin – “Finding Your Path”

Lightning Talk Part II:

  • Jonathan Gennick – “Seven Ways toward Better Writing”
  • Kellyn Pot’vin – “Snapclone with DBaaS in EM12c”
  • Graham Wood – “How Hardware Timings Affect Your Oracle Database Performance”
  • TBC

Lightning talk details are in the event details in the calendar if you open the event. Remember that there might be some last minute changes, so I recommend simply signing up for the calendar feed on the mobile device you carry with you at the conference.

Also, the OakTable folks and others will be hanging around that room for the whole day, so come over. I think it would be a good experiment to reintroduce a couple of things that Mogens had back in the old days:

  • FIVE-MINUTES-OF-FAME – Any attendee can get up and present a technical solution (5-minute time limit) to the group for admiration.
  • FIVE-MINUTES-OF-HELP – Any attendee can get up and pose a technical problem (5-minute time limit) to the group for help.

Well, that’s the news. If you are excited — post here in the comments that you are coming, share this on Twitter, Facebook, LinkedIn and what not. Looking forward to seeing as many of you there as possible. As Tim Gorman said — “more fun than a barrel of drunken DBAs, is the un-conference formerly known as “Oracle Closed World“, now known as “Oak Table World“.” And this year we are in Vegas!

Categories: DBA Blogs

What’s New with You?

Oracle AppsLab - Wed, 2014-03-26 11:23

It’s been a busy month around these parts.

Noel (@noelportugal) and I went to the Netherlands, specifically Amsterdam, Utrecht, and Nieuwegein, to visit AMIS and show some of the cool stuff Applications UX has been doing. By all accounts the event was a massive success, with something like 450 people visiting AMIS during the day.

The Shutters of Amsterdam


Here’s some press coverage in Dutch if you’re so inclined.

AUX had about ten different stations showing various demos, including the newly-minted Release 8 Simplified UI for HCM and Sales Cloud, Mobilytics, Oracle Voice, UX Design Patterns, UX Direct, the almighty eye-tracker and our Glass, Pebble and robot stuff, including the hot new robot arm, which we’re now controlling remotely, more on that to come.

As if that weren’t enough, there was also a Secret Chamber that required a non-disclosure for entrance with cutting edge stuff.

I spent my day locked away in the Secret Chamber, while Noel and Patrick (@patch72) handled the crush of people eager to get their hands on Google Glass. The beginning of the event was exclusively for students, and at one point, Noel and Patrick were swarmed by about 60-70 people trying to get a turn with Glass.

Sidebar, Glass elicited some very curious reactions around the NL. People seemed genuinely interested, a different reaction than you get here in the States where people can be outwardly aggressive about Glass infringing their privacy. Noel wore his most of the time, and several people stopped him to ask about them. Several times the exchange went like this:

Is that Google Glass?
Is it real?

Strange follow-up question, maybe there’s a market for bogus tech there.

Anyway, the event was awesome, and everyone at AMIS was so friendly and accommodating and generous to us. Everything about the trip was fantastic.

Speaking of trips, Noel and I will be at COLLABORATE 2014 in Las Vegas, April 7-11, as will other members of AUX. Check out all of the AUX activities over at VoX. Noel and I will be working the booth, so stop by and say hello if you’re attending the conference.

That’s my month so far.

Find the comments.

VirtualBox 4.3.10

Tim Hall - Wed, 2014-03-26 09:28

VirtualBox 4.3.10 has been released. The downloads and changelog are in the usual places.

At the time of writing, the link to the Windows version seems to be broken, but the Mac, Oracle Linux and Fedora versions are there.

Happy upgrading!



Update: Probably best to wait a while before downloading this new version. This version is currently on its 3rd build since I first downloaded it and now the download links are broken. :(


VirtualBox 4.3.10 was first posted on March 26, 2014 at 4:28 pm.

I (re) Join Oracle Corporation!

FeuerThoughts - Wed, 2014-03-26 09:01
On March 17, 2014, I became an employee of Oracle Corporation for the second time. My first round with Oracle started in August 1987. My second son, Eli, was less than a year old. I'd been incredibly bored with my consulting gig, which consisted of babysitting a reporting system on a DEC10 "mainframe", based on a flat-file database – but a database.

So I checked the Help Wanted pages (no Internet, no smartphones, no LinkedIn) and came across an ad from Oracle Corporation. It contained the word "database", so I figured: "Why not?"

I was hired, even though I was completely ignorant of relational databases. Ok, not completely. I'd read an article by Codd and memorized "Twelve Rules of Relational Databases." But no one ever asked me about relational theory. Instead, the key question seemed to be: "Are you comfortable talking in front of groups, large and small?" I was, after all, interviewing for a "pre-sales" (sales consultant) position.

Fortunately (?), I'd been very active for the past several years organizing Americans to protest facets of our government's policies in Central America, and yes I'd spoken often to groups large and small. My manager-to-be at Oracle seemed pleased enough with this, and I got the job. I never thought my political activity would help me land a software job, but that's exactly what happened.

Looking back on that moment, I see now that it foreshadowed a significant, but not widely recognized characteristic of my career: The popularity of my books and trainings stems as much from my communication skills (the delivery) as from what I am communicating (the content).

I'll get back to that in a moment. Well, joining Oracle changed my life. For one thing, I had to go out and not only buy some suits, but wear them every day. And then after five years with the company, I left to do some consulting, and a few years later ended up publishing Oracle PL/SQL Programming (O'Reilly Media) in 1995. Now that really changed my life!

For the next almost-19 years, I focused almost exclusively on the Oracle PL/SQL language. I wrote nine more books on the language (probably about 4 too many, actually), of which over 400,000 copies have been sold. I traveled to dozens of countries to share my obsession (expertise) with PL/SQL in trainings and presentations. I built and designed PL/SQL testing tools, code generators, code libraries, and more. I wrote lots of articles for Oracle Magazine and other publications. I attended many, many Kaleidoscopes and Collaborates and International Oracle User Weeks and Oracle Open Worlds. My wife got really tired of my traveling. Sigh... and that is why I have pledged that in Round 2 with Oracle, I would not start living on airplanes again.

For much of those 19 years, I worked for Quest Software and then Dell as a PL/SQL Evangelist. Quest and Dell helped strengthen the PL/SQL community not only by offering such amazing tools as Toad for Oracle, but also by funding my position and giving me a tremendous amount of freedom to continue learning about, and writing about, PL/SQL.

But I decided last year that I wanted to close out my career as a software professional (I will, after all, be 56 in September 2014) with the company that created the programming language that transformed my life: Oracle Corporation.

Wasn't I lucky that the head of all product development at Oracle, Thomas Kurian, was also a former PL/SQL product manager! Otherwise, Oracle might not have been interested in having me back. ☺

So what will I be doing at Oracle Corporation?

My title continues to be PL/SQL Evangelist, and PL/SQL will continue to be my main focus, of course. I will help promote the language, add to the collateral available for PL/SQL, write articles for Oracle Magazine and post content on Oracle Technology Network, present at the key Oracle developer-related conferences. In other words, all the usual stuff.

But I see my evangelism as a two-way street: I want to make sure that developers around the world take the fullest possible advantage of PL/SQL, yet I also want to make sure that Oracle generally and the PL/SQL development team in particular recognize the importance of the PL/SQL community, and leverage it fully.

Ever since 2010 I have been writing daily quizzes (and more) on the PL/SQL Challenge. I have been amazed at the enthusiasm of hundreds of developers to test their knowledge on this site. And it has been fantastic to see many PL/SQL experts who might otherwise never be known or recognized by their peers step forward to share their expertise. This was one of my "hidden" goals of the PL/SQL Challenge.

You see, I have never been entirely comfortable with being (one of) the "go to guys" on PL/SQL. I know very well that for all of my depth and focus on PL/SQL, I am really not very strong technically. I am no Tom Kyte, no Bryn Llewellyn. I only took three computer programming courses in college, all 101 level. I mostly got lucky - and fell into programming at a time when a degree in computer science simply wasn't a requirement (1979!).

It turns out that my main strength, the main reason (I believe) that my books and presentations became so popular, is that I am good at communicating ideas, techniques, etc. in a way that people find accessible. I never learned how to write and think like a computer scientist, so people can actually understand - and enjoy - what I write. Because of the limitations of my formal training, I often have to think my way step by step to an understanding of how things work (I can't just know things from my university days). I then share that step-by-step process with my readers, which helps them understand. Finally, I seem to find it impossible to keep my sense of humor out of what I say and write - and boy did my readers appreciate that! :-)

Bottom line: it makes me a little nervous when so many people look to me for "all the answers" to their PL/SQL-related problems. I don't have all the answers. But I am pretty sure that if I do not, there is someone out there, some Oracle technologist who has worked with PL/SQL for years, who has a computer science degree, who has faced different challenges than me, who might just have the answer you need, a code sample to save you hours of work, a piece of advice that can save several bangs of the head against the wall.

But how to get the question to the person who can answer it? Of course the OTN discussion forums and places like Stackoverflow provide a way to expose this expertise and make it available to many. I hope to complement those kinds of efforts with new initiatives at Oracle.  You will see announcements over the next year regarding this community building effort. But in the meantime if you have any ideas for me on this topic, please do not hesitate to send me an email.

The Two Me's Online

I have, for years, offered my thoughts (some might say "rants") on my Feuerthoughts blog and @stevefeuerstein twitter account. Going forward, I will cleanly separate my Oracle-related posts from my personal content. So here's a quick guide to the sites and accounts I will be using.

Blog -
Twitter - @SFonPLSQL
LinkedIn -

Home -
Blog -
Twitter - @stevefeuerstein
Facebook - Steven Feuerstein

If you follow my @stevefeuerstein twitter account, I urge you (if an Oracle technologist and not my mom) to also follow me on @sfonplsql. I will soon ramp up with daily PL/SQL tips and more.

Time to Get to Work!

Lots to do, lots to do. Including coming up to speed on a MacBook. I am making the switch after 30 years with DOS and Windows. Fun, scary, frustrating, liberating. More on that, too, to follow.
Categories: Development


Jonathan Lewis - Wed, 2014-03-26 08:13

Here’s a little test you might want to try. Examine the following script, and decide what sort of symptoms you would see in the AWR report.

create global temporary table gtt1(n1 number);

execute dbms_workload_repository.create_snapshot;

insert into gtt1 values(1);
truncate table gtt1;

-- repeat insert/truncate for a total of 100 cycles

execute dbms_workload_repository.create_snapshot;

-- generate an AWR report across the interval.

I don’t need anyone to tell me their results – but if your predictions and the actual results match then you can give yourself a pat on the head.
You might also like to enable SQL trace for all the inserts/truncate to see if that shows you anything interesting.

This is one of the simpler scripts of the 3,500 I have on my laptop that help me interpret the symptoms I see in client systems.
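Incidentally, the "repeat for 100 cycles" comment stands in for 200 statements. If you would rather generate those than type them, a throwaway shell loop (my addition, not part of Jonathan's script; it assumes gtt1 exists as created above) can write them to a file you then run from SQL*Plus between the two snapshots:

```shell
# Generate the 100 insert/truncate cycles into a script for SQL*Plus
{
  for i in $(seq 1 100); do
    echo "insert into gtt1 values(1);"
    echo "truncate table gtt1;"
  done
} > cycles.sql

wc -l < cycles.sql    # 200 statements
```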



Access Management alternatives (Part 1: Directory Services)

Frank van Bortel - Wed, 2014-03-26 07:18
Intro At the governmental institute that hired me, I'm working hard to get the full Oracle Identity and Access Management (IAM) stack implemented. A colleague suggested OpenIAM, which -at closer look- turns out to be a fork of what I believe to be the origin of the Oracle stack, Sun's OpenSSO. So, I started looking at this stack, which is available from ForgeRock. Let's start with the basis: Frank

Department of Defense adopts new security standard

Chris Foot - Wed, 2014-03-26 07:14

Due to the rising sophistication of cybercriminals, both public and private organizations throughout the United States are consulting database experts regarding best practices designed to deter network infiltration attempts. The Information Age has effectively connected virtually everyone with access to a computer, meaning that a plethora of sensitive information is being held in company and government data stores.

Skill levels are rising
As software vendors and cloud developers have consistently created new applications and technologies, malevolent figures infiltrating digital platforms have managed to adapt to the advancing environment. According to CIO, cybercriminal organizations have evolved from ad hoc groups motivated by gaining notoriety to vast networks of highly skilled individuals and groups working to obtain financial information. Law enforcement has had a difficult time capturing these entities, as they are spread over unspecified geographic locations.

In addition, remote database support personnel have noticed that these figures are able to encrypt the monetary data they steal and utilize various forms of cryptocurrency to make payments anonymously. These techniques make it difficult for federal and state authorities to effectively trace where the original transaction was placed. The news source acknowledged that cybercriminals are increasingly utilizing exploit kits to steal credit card numbers and sensitive data from computers. Apparently, depending on the sophistication of the underground enterprise, these groups have the potential to gain more profit than those involved in the drug trade.

Public measures
The United States federal government has responded by issuing data compliance standards, which has in turn fostered private investment in database administration services. However, InformationWeek contributor Leonard Marzigliano reported that the Department of Defense recently adopted a new risk-focused security approach assembled by the National Institute of Standards and Technology. Teri Takai, the DOD's chief information officer, announced the decision March 12, stating that this is the first time the organization has aligned itself with compliance regulations originally designed for civilian enterprises.

Takai told the source that the military entity will focus more on risk assessment, management and authorization techniques previously disregarded by the organization. The new policy will encompass all DOD information in electronic format and all of its subsidiary departments, such as the U.S. Navy. She stated further that the measure will be used to assess the cybersecurity of all IT residing in weapons, in objects in space, or on vehicles, aircraft and medical devices owned by the department.

ADF Alert - Facelets Vulnerability in ADF 11g R2 and 12c

Andrejus Baranovski - Wed, 2014-03-26 06:18
If you are running your application in an ADF 11g R2 or 12c environment and using facelets, you should double-check that the source code for the facelet pages is not accessible through the URL. There is another security vulnerability in ADF 11g R2, documented here - Alert for ADF Security - JSF 2.0 Vulnerability in ADF 11g R2. Apparently there is a patch from Oracle for the JSF 2.0 vulnerability, and also a manual fix. However, neither the patch nor the manual fix is applied by default, so your source code could potentially be exposed to public access. This is why I post it on the blog - for all ADF users to be aware.

I don't have a solution for the vulnerability described in this post; you should contact Oracle support and ask for a patch. Reproducing this vulnerability is pretty easy - you can remove "faces" from the URL and try to access your page (for example main.jsf); the source code for the page will be loaded.
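To illustrate the check, you could script the two URL forms along these lines (the host, port, context root and page name below are placeholders I've invented, not from any real deployment):

```shell
# Build both URL forms for a given page; on a vulnerable server the
# second form returns the raw page source instead of an error.
BASE="http://localhost:7001/myapp"    # placeholder deployment URL
with_faces="$BASE/faces/main.jsf"     # normal, rendered access
without_faces="$BASE/main.jsf"        # the form that can leak source
echo "$with_faces"
echo "$without_faces"
# To probe a running server:
#   curl -s "$without_faces" | head    (expect an error page, not JSF markup)
```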

Sample application  - was tested with ADF 11g R2 and 12c runtime.

It doesn't help to set the .jsf extension name in the web.xml context parameter, as it does for the ADF security vulnerability described in the previous post:

When we reference an ADF web page with "faces" in the context root, the page content is rendered as expected:

However, if you remove "faces" from the context root and try to access main.jsf - instead of returning an error, the ADF 11g R2 runtime will bring back the main.jsf page source code (a bit unexpected, right?):

The same with ADF 12c runtime:

Update from Oracle Support: Patch CVE-2013-3827 is available for this issue in October 2013 CPU.

Installing SQLDeveloper 4 on Mint and Ubuntu – Minus the Alien

The Anti-Kyte - Wed, 2014-03-26 05:57

Deb recently bought a new kettle.
Now, a kettle used to simply boil water and turn itself off when it was done.
Not this thing.
It lights up as it boils and announces the fact that it’s finished with a melodious ping.
It’s got gauges and lights and switches.
I’ve decided that it’s probably a Dalek in disguise.
Like Daleks (or at least, the original Daleks), it can’t go up stairs – or if it can, it’s not advertising the fact.
Every morning, descending to the kitchen is filled with trepidation.
When will the Dalek tire of vaporizing innocent water molecules and move on to World Domination…

Doc-tor ! Doc-tor ! I feel like a ket-tle !


I wouldn’t be entirely surprised to find that, like most whizzy modern appliances, it runs on Java.
Which brings us, by a fairly circuitous route, to the topic at hand – SQLDeveloper.

Oracle’s latest incarnation of its IDE does indeed run on Java – the version 7 JDK to be precise.
In this post, I’ll go through the steps required on Mint to :

  • Install the Java 7 JDK
  • Install SQLDeveloper 4
  • Persuade SQLDeveloper 4 to play nicely with Java
  • Add SQLDeveloper to the Cinnamon Menu

The good news is that we can do all of this without the messy alien conversion of an rpm package to .deb format.

NOTE – I’ve followed these steps on Mint13, but they should be pretty much the same for any Debian Distro.
Anyway, without further ado…

Install Java 7 What’s installed now

Before getting into the installation, it’s probably a good idea to establish what Java version you have at present.
To do this, open a Terminal Window and :

java -version

This will probably return something like :

java version "1.6.0_30"
OpenJDK Runtime Environment (IcedTea6 1.13.1) (6b30-1.13.1-1ubuntu2~
OpenJDK 64-Bit Server VM (build 23.25-b01, mixed mode)

Assuming you haven’t got a version that is 1.7.0 or greater, you’re going to need to update it for SQLDeveloper.
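If you'd rather script that check than eyeball it, a small helper along these lines works (my sketch, not from the original post; it only needs to tell 1.6-and-earlier apart from 1.7+):

```shell
# Return success if a Java version string is 1.7 or later
java_ok() {
  case "$1" in
    1.[7-9]*) return 0 ;;
    *)        return 1 ;;
  esac
}

# Parse the quoted version out of a "java -version" style line;
# on a real box you would feed it: java -version 2>&1 | head -1
line='java version "1.6.0_30"'
ver=$(echo "$line" | awk -F'"' '{print $2}')
java_ok "$ver" && echo "OK: $ver" || echo "Too old: $ver"   # → Too old: 1.6.0_30
```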

Getting the latest and greatest Java version

The easiest way to do this is – courtesy of instructions found here :

sudo add-apt-repository ppa:webupd8team/java

Running this will give you a message similar to the following :

You are about to add the following PPA to your system:
 Oracle Java (JDK) Installer (automatically downloads and installs Oracle JDK6 / JDK7 / JDK8). There are no actual Java files in this PPA. More info:

Debian installation instructions:
 More info:
Press [ENTER] to continue or ctrl-c to cancel adding it

Hit ENTER and…

Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.HQYZnMcKX0 --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp:// --recv 7B2C3B0889BF5709A105D03AC2518248EEA14886
gpg: requesting key EEA14886 from hkp server
gpg: key EEA14886: public key "Launchpad VLC" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

I’m not sure if the following step is necessary, but I ran it to be on the safe side. Ensure that your packages are up-to-date by running :

sudo apt-get update 

Now for the installation itself…

Installing Java 7

Start by getting the installer :

sudo apt-get install oracle-java7-installer

The output looks like this :

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
Suggested packages:
  visualvm ttf-baekmuk ttf-unfonts ttf-unfonts-core ttf-kochi-gothic
  ttf-sazanami-gothic ttf-kochi-mincho ttf-sazanami-mincho ttf-arphic-uming
The following NEW packages will be installed
  gsfonts-x11 oracle-java7-installer
0 to upgrade, 2 to newly install, 0 to remove and 31 not to upgrade.
Need to get 26.7 kB of archives.
After this operation, 228 kB of additional disk space will be used.
Do you want to continue [Y/n]? 

Yes, we do want to continue so enter ‘Y’.
At this point you’ll be presented with the following screen :

Takes me back to the early 90's...


Hit ENTER and …


Yes, I do want to install DOOM…er…Java

Use the left arrow key to navigate to Yes and hit ENTER.

You will then get feedback to the effect that it’s downloading stuff. This should end with something like :

Oracle JDK 7 installed
update-alternatives: using /usr/lib/jvm/java-7-oracle/jre/lib/amd64/ to provide /usr/lib/mozilla/plugins/ ( in auto mode.
Oracle JRE 7 browser plugin installed
Setting up gsfonts-x11 (0.22) ...

To check that it’s done what we wanted :

java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
Where’s the JDK ?

At this point, the system is using the java executable from the Java Runtime Environment (JRE).
SQLDeveloper will need to know where the Java Developer Kit (JDK) is located.

You can verify this and find the JDK itself as follows :

which java

If you look at /usr/bin/java you’ll see it’s a symbolic link…which points to another symbolic link…which eventually points to the actual location of the java executable…

ls -l /usr/bin/java
/usr/bin/java -> /etc/alternatives/java
ls -l /etc/alternatives/java
/etc/alternatives/java -> /usr/lib/jvm/java-7-oracle/jre/bin/java

In this case, the Java executable is in the JRE, not the JDK.
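Rather than hopping through each link with ls -l, readlink -f resolves the whole chain in one go. A quick demonstration on a throwaway chain (so it runs anywhere; on the Mint box the interesting invocation is readlink -f "$(which java)"):

```shell
# readlink -f canonicalizes an entire symlink chain in one step.
tmp=$(mktemp -d)
touch "$tmp/real-java"
ln -s "$tmp/real-java"         "$tmp/alternatives-java"   # like /etc/alternatives/java
ln -s "$tmp/alternatives-java" "$tmp/usr-bin-java"        # like /usr/bin/java
readlink -f "$tmp/usr-bin-java"    # prints the real-java path
```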

Fortunately, the JDK itself is also present. We can find this in :


We need to keep a note of that directory for when we fire up SQLDeveloper.
Speaking of which…

Getting the right SQLDeveloper

To get the SQLDeveloper download, you will need an Oracle Technet account. Fortunately, this is free.
You can do this by going to the Technet Home Page.

Once you’re set up, go to the SQLDeveloper Downloads Page.

There are a couple of things to note here.
First, the package you need to download is labelled Other Platforms.
Secondly, if you do happen to look at the Release instructions for this package, it will mention JDK 1.6.0_11 or above.
These instructions are out of date in this regard and refer to the previous version of SQLDeveloper.

Anyway, on the download page, click accept License Agreement and click on the Other Platforms Download.

Free you say ? I’ll take it.

At this point, you may be asked to re-enter your Technet credentials.

When prompted by Mint, save the file.

Once completed, go to the Downloads directory and you should see…

cd $HOME/Downloads

Installing SQLDeveloper4

We’re going to install SQLDeveloper under /opt so…

sudo mkdir /opt/sqldeveloper401

Now to copy and extract the zip file …

sudo cp $HOME/Downloads/ /opt/sqldeveloper401/.
cd /opt/sqldeveloper401
sudo unzip

At this point, the output will look something like this :

  inflating: sqldeveloper/svnkit/licenses/COPYING  
  inflating: sqldeveloper/svnkit/licenses/JAVAHL-LICENSE  
  inflating: sqldeveloper/svnkit/licenses/SEQUENCE-LICENSE  
  inflating: sqldeveloper/svnkit/licenses/SQLJET-LICENSE  
  inflating: sqldeveloper/svnkit/licenses/TRILEAD-LICENSE  
  inflating: sqldeveloper/svnkit/licenses/license.txt  
  inflating: sqldeveloper/svnkit/sequence.jar  
  inflating: sqldeveloper/svnkit/sqljet.jar  
  inflating: sqldeveloper/svnkit/svnClientAdapter.jar  
  inflating: sqldeveloper/svnkit/svnjavahl.jar  
  inflating: sqldeveloper/svnkit/svnkit.jar  
  inflating: sqldeveloper/svnkit/trilead.jar  
  inflating: sqldeveloper/view-source-paths.lis  

Once completed, we’re now ready to finalise the configuration.

Setting the Java Path for SQLDeveloper

To do this, we simply need to run SQLDeveloper. The first time it starts, it will ask for a path to the JDK.
This will be the path we figured out earlier. So…

cd /opt/sqldeveloper401/sqldeveloper
sudo chmod +x
. ./

When we execute the shell script to start SQLDeveloper we’ll get…

 Oracle SQL Developer
 Copyright (c) 1997, 2014, Oracle and/or its affiliates. All rights reserved.

Type the full pathname of a JDK installation (or Ctrl-C to quit), the path will be stored in /home/mike/.sqldeveloper/4.0.0/product.conf

At this point, we enter the path, minus the bin directory (which SQLDeveloper will look in automatically) :

/usr/lib/jvm/java-7-oracle
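If you'd rather not type the path interactively at all, SQLDeveloper honours a `SetJavaHome` directive in the product.conf file named in the prompt, so you can pre-seed it. A minimal sketch, assuming the 4.0.0 config directory from the prompt and the JDK path from earlier:

```shell
# Pre-seed SQLDeveloper's per-user config so it never prompts for the JDK.
# The 4.0.0 version directory and the JDK path match this walkthrough;
# adjust both for your own setup.
CONF_DIR="$HOME/.sqldeveloper/4.0.0"
mkdir -p "$CONF_DIR"
echo "SetJavaHome /usr/lib/jvm/java-7-oracle" > "$CONF_DIR/product.conf"
```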
If you have a previous version of SQLDeveloper installed, you will be asked if you want to copy all the settings, connections etc to the latest version :


Finally, SQLDeveloper will start.

Adding SQLDeveloper4 to the Cinnamon Menu

To add SQLDeveloper to the menu…

Right-click the Menu in the bottom left corner of the screen and select Configure :


Click the Open the menu editor button.


Select New Item

Name : SQLDeveloper4.0.1
Command : /opt/sqldeveloper401/sqldeveloper/

Click on the rocket icon and select the sqldeveloper icon at :

Now you should see the SQLDeveloper4.0.1 option under the Programming Menu.
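If you prefer the command line to the menu editor, a freedesktop .desktop entry achieves the same result. A sketch, assuming the install path used above; the launch-script and icon filenames are assumptions, so check the sqldeveloper directory for the actual names:

```shell
# Per-user launcher: Cinnamon reads .desktop files from ~/.local/share/applications.
# Exec and Icon filenames below are assumptions -- verify them against
# what actually ships in /opt/sqldeveloper401/sqldeveloper.
APP_DIR="$HOME/.local/share/applications"
mkdir -p "$APP_DIR"
cat > "$APP_DIR/sqldeveloper.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=SQLDeveloper4.0.1
Exec=/opt/sqldeveloper401/sqldeveloper/sqldeveloper.sh
Icon=/opt/sqldeveloper401/sqldeveloper/icon.png
Categories=Development;
EOF
```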

After all that, I feel like a cup of coffee. Now where did I leave that Sonic Screwdriver…

Filed under: Linux, SQLDeveloper Tagged: add-apt-repository, adding an item to the menu in Cinnamon, Debian, finding the JDK, Java 7 install on Mint, SQLDeveloper 4 on Mint, which