
Feed aggregator

Simple Android Oracle client

XTended Oracle SQL - Mon, 2014-12-29 18:54

I am happy to announce that I've just published my first Android app – a simple Oracle client for Android!
Since this is only the first version, I’m sure that it contains various UI bugs, so I’ll wait for reviews and bug reports!

Several screenshots:





Get it on Google Play

Categories: Development

Oracle multitenant dictionary: object links

Yann Neuhaus - Mon, 2014-12-29 15:28

I've described Oracle 12c metadata and object links internals in a previous post. But before that, the first time I investigated it, I made a wrong assumption because I was looking at AUDIT_ACTIONS, which is not correctly implemented. That investigation came from a question on dba-village. And recently Ivica Arsov (@IvicaArsov) made an interesting comment about the AUDIT_ACTIONS object link table, so I'll explain here what is special about it.


Here is how AUDIT_ACTIONS is defined:

SQL> select object_name,object_type,sharing from dba_objects where object_name in ('DBA_AUDIT_TRAIL','AUDIT_ACTIONS') order by object_name,object_type;

OBJECT_NAME          OBJECT_TYPE     SHARING
-------------------- --------------- -------------
AUDIT_ACTIONS        TABLE           OBJECT LINK
DBA_AUDIT_TRAIL      VIEW            METADATA LINK

It's a sharing=object table, so you would expect the data to be common to all containers. And we will also query a view that reads that table - DBA_AUDIT_TRAIL.

Then let's query the table from CDB$ROOT and from a PDB and check from ROWID if we read the same rows:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select rowid,action,name,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','AUDIT_ACTIONS') file_id from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME       FILE_ID
------------------ ---------- ------- ----------
AAABG7AABAAACo5AAD          3 SELECT           1

SQL> alter session set container=PDB1;
Session altered.

SQL> select rowid,action,name,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','AUDIT_ACTIONS') file_id from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME       FILE_ID
------------------ ---------- ------- ----------
AAABG5AABAAAA3pAAD          3 SELECT           8

The rows are not coming from the same file, but from the local SYSTEM tablespace of each container. This proves that this OBJECT LINK table is not common at all.


Now I want to check what happens when we query through the view. The view doesn't expose the ROWID, so let's update the table in the PDB so that we can distinguish rows coming from CDB$ROOT from those in PDB1:

SQL> update AUDIT_ACTIONS set name='select' where action=3;

1 row updated.

SQL> select rowid,action,name from AUDIT_ACTIONS where action=3;

ROWID                  ACTION NAME
------------------ ---------- -------
AAABG5AABAAAA3pAAD          3 select

SQL> select distinct dbid,action,action_name from DBA_AUDIT_TRAIL;

      DBID     ACTION ACTION_NAME
---------- ---------- ----------------------------
 314687597          3 select

OK: I've changed one ACTION_NAME to lowercase - only in PDB1. And when I query through the view, I see the local row. This definitively proves that the implementation of AUDIT_ACTIONS does not achieve the goal of the multitenant dictionary: store common Oracle objects only in CDB$ROOT, to avoid duplication and facilitate upgrades. Note that it is not a big problem anyway, as it is just a 200-row table.
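If you reproduce this test yourself, remember to put the row back afterwards - a quick revert of the example change above:

```sql
-- run in PDB1: revert the test change made above
update AUDIT_ACTIONS set name='SELECT' where action=3;
commit;
```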


In order to show the normal behaviour of object links, I'll do the same with DBA_CPOOL_INFO, which is a view over SYS.CPOOL$. I've described this behaviour previously by creating my own objects, but here I'll show how it is used to store the DRCP information, which is at CDB level. Here are the tables and views involved:

SQL> select object_name,object_type,sharing from dba_objects where object_name in ('CPOOL$','INT$DBA_CPOOL_INFO','DBA_CPOOL_INFO') order by object_name,object_type;

OBJECT_NAME          OBJECT_TYPE     SHARING
-------------------- --------------- -------------
CPOOL$               TABLE           OBJECT LINK
DBA_CPOOL_INFO       VIEW            METADATA LINK
INT$DBA_CPOOL_INFO   VIEW            OBJECT LINK

CPOOL$ is defined with sharing=object. An internal view, INT$DBA_CPOOL_INFO, is defined on it with sharing=object as well. And finally that view is exposed through DBA_CPOOL_INFO.

As before, I check the ROWID of the CPOOL$ row from CDB$ROOT and PDB1:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz5AABAAADb5AAA          4          1

SQL> alter session set container=PDB1;
Session altered.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz3AABAAABQJAAA          4          8

So this is the same as we have seen before: an OBJECT LINK has its data in each PDB.

But what is different here is the view sharing, which is sharing=object. Let's query that view after changing the value in PDB1:

SQL> update SYS.CPOOL$ set minsize=0;
1 row updated.

SQL> select rowid,minsize,dbms_rowid.rowid_to_absolute_fno(rowid,'SYS','CPOOL$') file_id from SYS.CPOOL$;

ROWID                 MINSIZE    FILE_ID
------------------ ---------- ----------
AAABz3AABAAABQJAAA          0          8

SQL> select minsize from INT$DBA_CPOOL_INFO;

   MINSIZE
----------
         4

SQL> select minsize from DBA_CPOOL_INFO;

   MINSIZE
----------
         4
Now we have a view which will always show the CDB$ROOT rows, even when we are in a PDB container. We still have rows in the PDB containers, but they will not be used. Once again, this defeats the goal of deduplication, but this is a very small table.

AWR tables

The main advantage of the multitenant dictionary architecture comes with the big tables storing data that is common to the whole CDB, such as the AWR data:

SQL> alter session set container=CDB$ROOT;
Session altered.

SQL> select con_id,count(*) from containers(WRH$_SQLTEXT) group by con_id;

    CON_ID   COUNT(*)
---------- ----------
         1       5549

SQL> alter session set container=PDB1;

Session altered.

SQL> select count(*) from WRH$_SQLTEXT;


This information - stored only in CDB$ROOT - is shared with all PDBs through the OBJECT LINK view.
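To see where each layer of the AWR stack lives, you can run the same dba_objects query pattern as earlier against the base table and its corresponding DBA_HIST view (the exact SHARING values you get may vary by version, so treat this as a sketch to verify on your own system):

```sql
-- check link types for an AWR base table and its DBA_HIST view
select object_name, object_type, sharing
  from dba_objects
 where object_name in ('WRH$_SQLTEXT','DBA_HIST_SQLTEXT')
 order by object_name, object_type;
```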

Compliance and File Monitoring in EM12c

Fuad Arshad - Mon, 2014-12-29 14:36
I was recently asked to help a customer set up File Monitoring in Enterprise Manager, and I thought that since I haven't blogged in a while, this could be a good way to start back up again. Enterprise Manager 12c provides a very nice Compliance and File Monitoring framework. There are many built-in frameworks, including ones for PCI DSS and STIG, but this how-to will focus only on a custom file monitoring framework. Before setting up the Compliance features, ensure that Privilege Delegation is set to sudo (or whatever privilege delegation provider you are using) and that credentials for Real-time Monitoring are set up for the hosts. All the prerequisites are explained here; the important part in that link is how each OS interacts with these features.

Go to Enterprise → Compliance → Library
Create a New Compliance Standard

Name and Describe the Framework

You will see the framework created.

Now let's add some facets to monitor. In this example I selected a tnsnames.ora from my RDBMS home.

Below is a finished facet

Next, let's create a rule that uses that facet.
After selecting the right rule, let's add more color.
Let's add the facet that defines what file(s) will be monitored.
For this example I will select all aspects for testing, but ensure that you have sized your repository and that you understand the consequences of each aspect. Read through the Additional Setup for RTM.

After defining the monitoring actions, you have the option to filter and create monitoring rules based on specific events. I will skip this for now. As we inch towards the end, we can authorize changes and each event manually, or incorporate a Change Management System that has a connector available in EM12c.
After we have completed this, we have an opportunity to review the settings and then make this rule production. Now let's create a standard. We are creating a custom File Monitoring standard with an RTM-type standard applicable to hosts.
We will add rules to the File Monitor. In this case we will add the tnsnames rule we created to the standard. You can add standards as well as rules to a standard.
Next, let's associate targets to this standard. You will be asked to confirm.
Optionally, you can now add this to the compliance framework for one-stop monitoring.
Now that we have set everything up, let's test it. Here is the original tnsnames.ora. Let's add another TNS entry.
Prior to the change, here is what the Compliance Results page looks like. As you can see, the evaluation was successful, and we are 100% compliant.

Now if I go to Compliance -> Real-time Observations, I can see that I didn't install the kernel module needed for granular control, so I cannot use certain functionality.
So I'm going to remove those aspects from my rule for now. Now I have made a whole bunch of changes, including even moving the file. It is all captured.
There are many changes here, and we can actually compare what changed. If you select "unauthorized" as the audited event for a change, the compliance score drops, and you can see how many violations occurred for a given rule.
In summary, EM12c provides a very robust framework for monitoring built-in compliance standards as well as custom-created frameworks, to keep your auditors and IT managers happy.

FBI concludes North Korean hackers responsible for Sony breach [VIDEO]

Chris Foot - Mon, 2014-12-29 12:24


Hi, welcome to RDX! Just before Sony Pictures was set to release “The Interview,” a previously unidentified group of hackers released confidential files stored in Sony’s databases.

“The Interview” is a comedy about a TV host ordered to assassinate North Korean dictator Kim Jong-un. After a two-week investigation the Federal Bureau of Investigation confirmed that the North Korean government is responsible for the data breach. As the film was a satire about Kim Jong-un’s regime, it makes sense that such a damaging attack would originate from North Korea.

From RDX’s perspective, deterring these kinds of attacks requires businesses to install database security monitoring software. Any time an unauthorized user begins copying information, alerting database administrators promptly is essential.

Thanks for watching!

The post FBI concludes North Korean hackers responsible for Sony breach [VIDEO] appeared first on Remote DBA Experts.

2015 - Less, But Better

Floyd Teter - Mon, 2014-12-29 11:25
Life is really simple, but we insist on making it complicated.
                                                        - attributed to Confucius

It's the end of 2014.  2015 is staring us in the face.  And as we celebrate the ending of a trip around the sun and kick off yet another one, many of us are predicting what the new year will bring.  Sorry, you won't get that here.  My crystal ball never has worked all that well, so I don't see much value in sharing my woefully inaccurate predictions.

What I will share is my lone resolution for 2015 and some of the actions I'll be taking as a result, in the hope that you'll find some value you can apply in your own endeavors.

The motto:  Less, But Better.  (Yes, I'm a fan of Dieter Rams).  The idea is to simplify, eliminating the obtrusive, while improving the end result.  Some thoughts on applying this idea.
  • Experience Design:  Seems as though everyone is into the Experience Design game these days.  User Experience, Customer Experience, Student Experience ... it's a longer list than I'm able to quote here.  But I also see that many of these efforts miss the point.  Experience Design is not just about how things look ... icons, colors, interface layouts. Experience design is about providing simple, elegant solutions to complex problems.  I'll be spending calories voicing this point in 2015, especially in terms of how it applies in the world of Oracle software.
  • Advocating Oracle Cloud Applications:  The big idea behind Oracle Cloud Applications and the SaaS service model is simplification: trade off your in-house maintenance and licensing burdens for a pay-as-you-go service model.  Pretty cool in theory.  But, from my perspective, we're still dealing with heavily-engineered products.  It's a tough issue for advocates and partners.  Among other things, it means I just can't run Oracle's Cloud Applications on my personal test bench anymore.  Too many moving parts.  Done trying.  At the same time, it seems that Oracle is tightening up the accessibility of Oracle Cloud Applications for players in the Oracle eco-system: their own pre-sales consultants and Oracle eco-system partners.  It's tough even getting access through the Oracle Partner Network without jumping through serious hoops (and you know how I love hoop jumping!).  I'll have to come up with a simple answer for that in 2015.  It may come down to running a demo environment of Oracle Cloud Applications on AWS.  Or possibly running that demo environment on internal Sierra-Cedar servers.  Neither is the simplest approach I can think of, but one or the other may be the simplest approach available.
  • More Use Of Simple Oracle Development Tools:  In the world of enterprise applications, we sometimes limit ourselves with the tools that we use.  We bring out sledgehammers to kill fruit flies.  Forcing those sledgehammers to fit our purpose results in solutions that are less than optimal.  In the spirit of "Less, But Better",  I plan to spend some time researching on ways to use simple tools to extend and enhance enterprise applications - especially Oracle Cloud Applications.  Oracle APEX seems to be one of those tools.  So my plan is to start there.  I also plan to spend more time testing the boundaries of The Oracle Simplified User Interface ("SUI") Rapid Development Kit.
  • Extensions and Integration:  As more customers jump into Oracle Cloud, functional application implementation is becoming more and more of a fungible commodity service.  The lowest price wins the business above all other factors.  With this shift in the market and the impact on per project profit margins, implementation partners are beginning to develop strategies of adding value around the edges of an implementation project: extensions and integrations.  Those partners who can add simple, elegant extensions and integrations to Oracle Cloud Application implementation projects are the partners who will thrive as the market shifts.  This is easier said than done, as many of us in the Oracle eco-system tend to unintentionally sacrifice simplicity by over-engineering our solutions for very specific and unique use cases.  I'll be working to reverse that trend in my own work:  simple integrations and extensions, applicable to a wide set of use cases, that can be applied repeatedly.  I've already mentioned some of the tools I'll use in this effort.  This is really about the design of extensions and integrations:  simple, elegant solutions to complex problems.
So, these are some of the Oracle-specific areas in which I'll be working through the "Less, But Better" concept in 2015.  There are more, like reducing the number of personal tech devices I use, but this is the upshot in the realm of Oracle enterprise software.
What about you?  Any inspiring thoughts from reading this?  Or maybe a bit of chortling or laughter?  Whatever.  Share your thoughts in the comments.  And best wishes for a great 2015!

Connecting OBIEE11g on Windows to a Kerberos-Secured CDH5 Hadoop Cluster using Cloudera HiveServer2 ODBC Drivers

Rittman Mead Consulting - Sun, 2014-12-28 16:55

In a few previous posts and magazine articles I’ve covered connecting OBIEE11g to a Hadoop cluster, using OBIEE and Cloudera CDH4 and CDH5 as the examples. Things get a bit complicated in that the DataDirect Apache Hive ODBC drivers that Oracle ships are only for HiveServer1, not the HiveServer2 version that CDH4 and CDH5 use, and the Linux version of OBIEE won’t work with the Cloudera Hive ODBC drivers that you have to use to connect to Hive on CDH4/5. You can, however, connect OBIEE on Windows to HiveServer2 on CDH4 and CDH5 if you use the Cloudera Hive ODBC drivers for Windows. Although this isn’t supported by Oracle, in my experience it does work, albeit with the general OBIEE11g Hive restrictions and caveats detailed in the Metadata Repository Builder’s Guide, and the fact that in practice Hive is too slow to use for ad-hoc reporting.

However … most enterprise-type customers who run Hadoop on their internal networks have their clusters configured as “secured”, rather than the unsecured cluster examples that you see in most OBIEE connection examples. By default, Hadoop clusters are very trusting of incoming network and client connections and assume that whoever’s connecting is who they say they are, and HDFS and the other cluster components don’t perform any authentication themselves of incoming client connections. In addition, by default all network connections between Hadoop cluster components run in clear text and without any mutual authentication, which is great for a research cluster or PoC but not really appropriate for enterprise customers looking to use Hadoop to store and analyse customer data.

Instead, these customers configure their clusters to run in secured mode, using Kerberos authentication to secure incoming connections, encrypt network traffic and secure connections between the various services in the cluster. How this affects OBIEE though is that your Hive connections through to the cluster also need to use Kerberos authentication, and you (and the OBIEE BI Server) need to have a valid Kerberos ticket when connecting through the Hive ODBC driver. So how do we set this up, and how do we get hold of a secure Hadoop cluster using Kerberos authentication to test against? A few of our customers have asked this question recently, so I thought it’d be worth jotting down a few notes on how to set this up.

At a high-level, if you want to connect OBIEE to a secure, Kerberos-authenticated CDH cluster, there’s three main steps you need to carry out:

  1. Get hold of a Kerberos-secured CDH cluster, and establish the connection details you’ll need to use to connect to it
  2. Make sure the Kerberos server has the correct entries/principals/user details for the user you’re going to securely-connect as
  3. Configure the host environment for OBIEE to work with Kerberos authentication, and then create the connection from OBIEE to the CDH cluster using the correct Kerberos credentials for your user

In my case, I’ve got a Cloudera CDH5.3.0 cluster running in the office that’s been configured to use MIT Kerebos 5 for authentication, set up using an OEL6 VM as the KDC (Key Distribution Centre) and the cluster configured using the new Kerebos setup wizard that was introduced with CDH5.1. Using this wizard automates the creation of the various Kerberos service account and host principals in the Kerberos database, and configures each of the cluster components – YARN, Hive, HDFS and so on – to authenticate with each other using Kerberos authentication and use encrypted network connections for inter-service and inter-node communication.


Along with the secured Hadoop cluster, key bits of information and configuration data you’ll need for the OBIEE side are:

  • The krb5.conf file from the Kerberos KDC, which contains details of the Kerberos realm, URL for the KDC server, and other key connection details
  • The name of the Kerberos principal used for the Hive service name on the Hadoop cluster – typically this is “hive”; if you want to connect to Hive first using a JDBC tool such as beeline, you’ll also need the full principal name for this service, in my case “hive/”
  • The hostname (FQDN) of the node in the CDH cluster that contains the HiveServer2 RPC interface that OBIEE connects to, to run HiveQL queries
  • The Port that HiveServer2 is running on – typically this is “10000”, and the Hive database name (for example, “default’)
  • The name of the Kerberos realm you’ll be connecting to – for example, MYCOMPANY.COM or, in my case, RITTMANDEV.COM (usually in capitals)

In my case, the krb5.conf file that is used to configure Kerberos connections to my KDC looks like this – in your company it might be a bit more complex, but this example defines a simple MIT Kerberos 5 domain:

    [logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
    default_realm = RITTMANDEV.COM
    dns_lookup_realm = false
    dns_lookup_kdc = false
    ticket_lifetime = 24h
    renew_lifetime = 7d
    forwardable = true

    [realms]
    RITTMANDEV.COM = {
      kdc =
      admin_server =
    }

In my setup, the CDH Hadoop cluster has been configured to use Kerberos authentication for all communications between cluster components and any connections from the outside that use those components; the cluster itself though can still be accessed via unsecured (non-Kerberos-authenticated) SSH, though of course this aspect could be secured too. To test out the Hive connectivity before we get into the OBIEE details, you can use the beeline CLI that ships with CDH5. To do this you'll need to be able to SSH into one of the cluster nodes (if you've not got beeline installed on your own workstation), and you'll need an account (principal) created for you in the Kerberos database to correspond to the Linux user and HDFS/Hive user that has access to the Hive tables you're interested in. To create such a Kerberos principal for my setup, I used the kadmin.local command on the KDC VM to create a user that matched my Linux/HDFS username and gave it a password:

kadmin.local:  addprinc mrittman
WARNING: no policy specified for mrittman@RITTMANDEV.COM; defaulting to no policy
Enter password for principal "mrittman@RITTMANDEV.COM":
Re-enter password for principal "mrittman@RITTMANDEV.COM":
Principal "mrittman@RITTMANDEV.COM" created.

SSH’ing into one of the secure CDH cluster nodes, I first have to authenticate using the kinit command, which, when successful, creates a Kerberos ticket that gets cached for a set amount of time and that beeline can thereafter use as part of its own authentication process:

officeimac:.ssh markrittman$ ssh mrittman@bda3node4
mrittman@bda3node4's password: 
[mrittman@bda3node4 ~]$ kinit -p mrittman
Password for mrittman@RITTMANDEV.COM: 
[mrittman@bda3node4 ~]$

Now I can use beeline, and pass the Hive service principal name in the connection details along with the usual host, port and database name. When beeline prompts for my username and password, I use the Kerberos principal name that matches the Linux/HDFS one, and enter that principal’s password:

[mrittman@bda3node4 ~]$ beeline
Beeline version 0.13.1-cdh5.3.0 by Apache Hive
beeline> !connect jdbc:hive2://bda3node2:10000/default;principal=hive/
scan complete in 2ms
Connecting to jdbc:hive2://bda3node2:10000/default;principal=hive/
Enter username for jdbc:hive2://bda3node2:10000/default;principal=hive/ mrittman
Enter password for jdbc:hive2://bda3node2:10000/default;principal=hive/ ********
Connected to: Apache Hive (version 0.13.1-cdh5.3.0)
Driver: Hive JDBC (version 0.13.1-cdh5.3.0)
0: jdbc:hive2://bda3node2:10000/default> show tables;
|     tab_name     |
| posts            |
| things_mrittman  |
2 rows selected (0.162 seconds)
0: jdbc:hive2://bda3node2:10000/default> select * from things_mrittman;
| things_mrittman.thing_id  | things_mrittman.thing_name  |
| 1                         | Car                         |
| 2                         | Dog                         |
| 3                         | Hat                         |
3 rows selected (0.251 seconds)

So at this point I’ve covered off the first two steps: established the connection details for the secure CDH cluster, and got hold of and confirmed the Kerberos principal details that I’ll need to connect to Hive – now it’s time to set up the OBIEE element.

In this particular example we’re using Windows to host OBIEE, as this is the only platform on which we can get the HiveServer2 ODBC drivers to work – in this case, the Cloudera Hive ODBC drivers available on the Cloudera website (a free download, but registration may be needed). Before we can get this ODBC driver to work though, we need to install the Kerberos client software on the Windows machine so that we can generate the Kerberos ticket that the ODBC driver will need to pass over as part of the authentication process.

To configure the Windows environment for Kerberos authentication, in my case I used the Kerberos for Windows 4.x client software, downloadable for free from the MIT website, and copied across the krb5.conf file from the KDC server, renaming it to krb5.ini and storing it in the default location of c:\ProgramData\MIT\Kerberos5.


You also need to define a system environment variable, KRB5CCNAME, to point to a location where the Kerberos tickets can be cached; in my case I used c:\temp\krb5cache. Once this is done, reboot the Windows environment and you should then be prompted after login to authenticate yourself to the Kerberos KDC.


The ticket then stays valid for a set number of days/hours, or you can configure OBIEE itself to authenticate and cache its own ticket – for now though, we’ll create the ticket manually and connect to the secured cluster using these cached ticket details.

After installing the Cloudera Hive ODBC drivers, I create the connection using Kerberos as the Authentication Mechanism, and enter the realm name, HiveServer2 host and the Hive Kerberos principal name, like this:


In my case both the BI Administration tool and the OBIEE BI Server were on the same Windows VM, and therefore shared the same ODBC driver install, so I then moved over to the BI Administration tool to import the Hive table metadata details into the RPD and create the physical, logical and presentation layer RPD elements. Depending on how your CDH cluster is set up you might be able to test the connection now by using the View Data… menu item in BI Administration, but in my case I had to do two more things on the CDH cluster itself before I could get Hive queries under this Kerberos principal to run properly.


First, secured CDH Hadoop clusters usually configure HiveServer2 to use “user impersonation” (connecting to Hive as the user you authenticate as, rather than the user HiveServer2 authenticates to the Hive service as), so YARN and MapReduce jobs run under your account and not the usual “hive” account that unsecured Hive connections use. Where this causes a problem on CDH installations on RHEL-derived platforms (RHEL, OEL, CentOS etc.) is that YARN normally blocks jobs running on behalf of users with a UID below 1000 (as on other Linux distributions this typically signifies a system account), while RHEL-family distributions start user UIDs at 500, so YARN blocks those users from running jobs. To fix this, you need to go into Cloudera Manager and edit the YARN configuration settings to lower this UID threshold to something under 500, for example 250:


I also needed to alter the group ownership of the temporary directory each node used for the YARN NodeManager’s user files so that YARN could write its temporary files correctly; on each node in the cluster I ran the following Linux commands as root to clear down any files YARN had created before, and recreate the directories with the correct permissions (Hive jobs would fail until I did this, with OBIEE just reporting an ODBC error):

rm -rf /yarn
mkdir -p /yarn/nm
chown -R yarn /yarn
chgrp -R yarn /yarn

Once this is done, queries from the BI Administration tool and from the OBIEE BI Server should connect to the Kerberos-secured CDH cluster successfully, using the Kerberos ticket you obtained using the MIT Kerberos Ticket Manager on login and then passing across the user details under which the YARN, and then Hive job should run.


If you’re interested, you can go back to the MIT Kerberos Ticket Manager and see the other Kerberos tickets that were requested and then cached by the Cloudera Hive ODBC driver when it mutually authenticated with the HiveServer2 RPC interface – Kerberos authenticates both ways to ensure that who you’re connecting to is actually who they say they are, in this case checking that the HiveServer2 connection you’re connecting to isn’t being spoofed by someone else.


So that’s the process for connecting OBIEE to a Kerberos-secured CDH Hadoop cluster in a nutshell; in the New Year I’ll put something together on using Apache Sentry to provide role-based access control for Hive and Impala tables and as of CDH 5.3, HDFS directories, and I’ll also take a look at the new extended ACLs feature in CDH5.2 that goes beyond HDFS’s standard POSIX security model.

Categories: BI & Warehousing

2015 - "The" Year of Oracle Application Express (APEX)

Dimitri Gielis - Sun, 2014-12-28 16:04
The year 2014 marked the 10th anniversary of Oracle Application Express (APEX). I still find it unbelievable that 10 years have passed. Time flew by... You might think that after 10 years of building APEX applications the technology would be outdated, or that you'd get tired of it, but rather the opposite is true.

Oracle Application Express is a web technology, and the web evolves fast, which keeps it interesting and fun. We can follow the latest and greatest in the web world, integrate it with APEX, and give our apps any look and feel we want. Next to that, APEX is built on top of the Oracle Database, so we can leverage all the functionality of the database.

So the longer you work with the Oracle database and the more you know of web technologies, the more you can do in APEX. That's why I don't find it boring after 10 years - it's fun!

And now 2015 is just around the corner, so what about APEX?

The 5.0 release of APEX will go live in Q1 - it's the biggest release of Oracle Application Express in the last 10 years. It has the most advanced development interface in history. Every single page is updated within the Builder and it comes with a gorgeous new UI.
In short: it's the best release ever.

So the year 2015, will be "The" year of Oracle Application Express.

To celebrate "The" APEX year, I've set myself a challenge... on January 5th I'll start my chain of blogging and will do a new (APEX-related) blog post every single day.  I hope to get a chain of at least 100... let's see how far I get :) Thanks for being part of it!
Categories: Development

Creating a schema synonym in Oracle - an unsupported feature

Yann Neuhaus - Sun, 2014-12-28 14:59

Ivica Arsov (@IvicaArsov) has made an interesting comment about the AUDIT_ACTIONS object link table. I'll blog about it soon, but in the meantime, when checking its definition in cataudit.sql, I came upon the following:

/* SCHEMA SYNONYMS will be added in 12g */
-- insert into audit_actions values (222, 'CREATE SCHEMA SYNONYM');
-- insert into audit_actions values (224, 'DROP SCHEMA SYNONYM');

which caught my attention.
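The post stops there, but for the curious: the feature behind those commented-out audit actions can reportedly be switched on with a hidden parameter. This is completely unsupported and undocumented - the parameter name and syntax below are the commonly cited ones rather than anything official, so treat this as a lab-only sketch and verify it yourself:

```sql
-- UNSUPPORTED, lab use only: enable the hidden schema synonym feature
alter system set "_enable_schema_synonyms"=true scope=spfile;
-- bounce the instance, then:
create schema synonym scott_alias for scott;
drop schema synonym scott_alias;
```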


OT: On a musical note for 2014's year ending

Grumpy old DBA - Fri, 2014-12-26 12:32
There have been some really strong albums that impressed me this year.  I tend to like loud rock stuff but do mix it up somewhat.  What follows is just some ramblings:

The Drive-By Truckers have a tremendous album out, "English Oceans" - if you like rock, this is a no-brainer.

Jolie Holland's "Wine Dark Sea" album is stunningly magnificent.  It varies quite a bit - it rocks out and blues it out, and then just charms you at times.

Taylor Swift knocked it out of the ballpark with 1989.

Also digging new one by Lana Del Rey.

I also discovered one that I should have known about a long time ago: Gov't Mule's album "Live... With a Little Help from Our Friends" is straight-out loud, good rock and roll, well put together.

My latest addition, which I am just now listening to, is the Delphines' "Colfax" - this one strays fairly close to some kind of cross between rock and roll and folk/country, but seems like a well-put-together, sophisticated album that is a little slower-paced than many.

Categories: DBA Blogs

Our Week at UKOUG

Oracle AppsLab - Fri, 2014-12-26 12:05

Earlier this month, Noel (@noelportugal) and I (@joybot12) represented the AppsLab crew at the UKOUG Apps 14 and Tech 14 conferences in Liverpool.

I conducted customer feedback sessions with users who fit the “c-level executive” user profile, to collect feedback on some of our new interactive data visualizations. Unfortunately, I can’t share any of these design concepts just yet, but I can share a bunch of pics of Noel, who gave several talks over the course of the 3-day conference.

This first photo is a candid taken after Noel’s talk on Monday about “Wearables at Work.”


Photo by Joyce Ohgi

I was thrilled to see so many conference attendees sticking around afterwards to pepper Noel with questions; usually at conferences, people leave promptly to get to their next session, but in this case, they stuck around to chat with Noel (and try on Google Glass for the first time).

Here’s another of Noel taken by Misha Vaughan (@mishavaughan) with his table of goodies.


Photo by Misha Vaughan

The next photo is from Tuesday, where Noel and Vivek Naryan hosted a roundtable panel on UX. Because this was a more intimate, round-table style talk, the conference attendees felt comfortable speaking up and adding to the conversation. They raised concerns about data privacy, their thoughts on where technology is headed in the future, and generally chatted about the future of UX and technology.


Photo by Joyce Ohgi

This last photo is from Monday afternoon, when I made Noel take a break from his grueling schedule to play table tennis with me. The ACC Liverpool conference center thoughtfully provided table tennis in their Demo Grounds as a way to relieve stress and get some exercise (was a bit too cold to run around outside).

I put up a valiant effort, but Noel beat me handily. In my defense, I played the first half of the game in heels; once I took those off, my returns improved markedly. I'll get him next time! :) A special thank-you to Gustavo Gonzalez (@ggonza4itc), CTO at IT Convergence, for the great action shot, and also for giving excellent feedback and thoughtful input on the design concepts I showed him the following day.

Photo by Gustavo Gonzalez


All-in-all, we enjoyed the Apps 14 and Tech 14 conferences. It’s always great to get out among the users of our products to collect real feedback.

For more on the OAUX team’s activities at the 2014 editions of the UKOUG’s annual conferences, check out the Storify thread.

Oracle Audit Vault Oracle Database Plug-In

The Oracle Audit Vault uses plug-ins to define data sources.  The following table summarizes several important facts about the Oracle Audit Vault database plug-in for Oracle databases:

Oracle Database Plug-In for the Oracle Audit Vault

Plug-in Specification:

Plug-in directory:

Secured Target Versions: Oracle 10g, 11g, 12c Release 1 (12.1)

Secured Target Platforms: Solaris/x86-64, Solaris/SPARC64, Windows/x86-64, HP-UX Itanium

Secured Target Location (Connect String):

AVDF Audit Trail Types: TABLE, DIRECTORY, TRANSACTION LOG, SYSLOG (Linux only), EVENT LOG (Windows only), NETWORK

Audit Trail Location:

For TABLE audit trails: sys.aud$, sys.fga_log$, dvsys.audit_trail$

For DIRECTORY audit trails: full path to the directory containing AUD or XML files.

For SYSLOG audit trails: full path to the directory containing the syslog file.

For TRANSACTION LOG, EVENT LOG and NETWORK audit trails: no trail location required.

If you have questions, please contact us at

Reference Tags: Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

MySQL versions performance comparison

Yann Neuhaus - Fri, 2014-12-26 03:18

This blog aims to compare the performance of the different MySQL versions/editions, as well as MySQL forks such as Percona Server and MariaDB.  Indeed, a number of improvements have been made to the InnoDB storage engine in the latest MySQL versions. You can find below some of the performance improvements applied to InnoDB over the last years (non-exhaustive list):

MySQL 5.0

1. New compact storage format which can save up to 20% of the disk space required in previous MySQL/InnoDB versions.
2. Faster recovery from a failed or aborted ALTER TABLE.
3. Faster implementation of TRUNCATE TABLE.

MySQL 5.5

1. MySQL Enterprise Thread Pool: as of MySQL 5.5.16, MySQL Enterprise Edition distributions include a thread pool plugin that provides an alternative thread-handling model designed to reduce overhead and improve performance.
2. Changes to the InnoDB I/O subsystem enable more effective use of available I/O capacity. The changes also provide more control over configuration of the I/O subsystem.

MySQL 5.6

1. Improvements to the algorithms for adaptive flushing make I/O operations more efficient and consistent under a variety of workloads. The new algorithm and default configuration values are expected to improve performance and concurrency for most users. Advanced users can fine-tune their I/O responsiveness through several configuration options.
2. InnoDB has several internal performance enhancements, including reducing contention by splitting the kernel mutex, moving flushing operations from the main thread to a separate thread, enabling multiple purge threads, and reducing contention for the buffer pool on large-memory systems.
3. You can now set the InnoDB page size for uncompressed tables to 8KB or 4KB, as an alternative to the default 16KB. This setting is controlled by the innodb_page_size configuration option. You specify the size when creating the MySQL instance. All InnoDB tablespaces within an instance share the same page size. Smaller page sizes can help to avoid redundant or inefficient I/O for certain combinations of workload and storage devices, particularly SSD devices with small block sizes.

MySQL 5.7

1. In MySQL 5.7.2, InnoDB buffer pool dump and load operations are enhanced. A new system variable, innodb_buffer_pool_dump_pct, allows you to specify the percentage of most recently used pages in each buffer pool to read out and dump. When there is other I/O activity being performed by InnoDB background tasks, InnoDB attempts to limit the number of buffer pool load operations per second using the innodb_io_capacity setting.

2. As of MySQL 5.7.4, InnoDB supports multiple page cleaner threads for flushing dirty pages from buffer pool instances. A new system variable, innodb_page_cleaners, is used to specify the number of page cleaner threads. The default value of 1 maintains the pre-MySQL 5.7.4 configuration in which there is a single page cleaner thread. This enhancement builds on work completed in MySQL 5.6, which introduced a single page cleaner thread to offload buffer pool flushing work from the InnoDB master thread.
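Whether a particular build exposes these knobs can be checked directly from the client. The statements below only use the variable names quoted above; on a version that predates a feature, the corresponding SHOW simply returns an empty set:

```sql
-- Check which of the InnoDB features discussed above are available
SHOW VARIABLES LIKE 'innodb_page_size';             -- configurable since 5.6
SHOW VARIABLES LIKE 'innodb_page_cleaners';         -- multiple cleaners since 5.7.4
SHOW VARIABLES LIKE 'innodb_buffer_pool_dump_pct';  -- since 5.7.2
SHOW VARIABLES LIKE 'innodb_io_capacity';
```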


You can find an exhaustive performance improvement list on:

Test limitations

This test doesn't take into consideration all the new optimizations available through new variables and functionalities. Its aim is simply to demonstrate the performance differences with a non-optimized but consistent configuration. In this context, a limited set of variables available in all MySQL versions (since version 5.0) has been set up.

This test is obviously not representative of your own environment (hardware, queries, database schema, storage engine, data types, etc.). Therefore you probably won't observe the same performance behavior.


MySQL performance test: hardware configuration

This test has been done with sysbench 0.5. It has been run on a laptop equipped with an Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz and 16 GB of RAM. The data are stored on a Samsung SSD 840 PRO Series.


First step: Installation

The first step consists of installing the different MySQL versions. Thanks to mysqld_multi, I've been able to run the following versions in parallel:



Installed MySQL Server builds (the table of version numbers was lost in extraction): five Community Edition builds and one Enterprise Edition build; the exact versions tested are listed in the results section below.

These servers have been set up with the same settings. However, depending on the MySQL version, the default MySQL settings differ. For instance, on MySQL 5.0.15 the default value for the global variable innodb_buffer_pool_size is 8388608, whereas on MySQL 5.1.73 the default value is 134217728. The default MySQL version settings have not been changed.

The only variables which have been set up are the following:

  • max_connections = 8000
  • table_open_cache=8000
  • open_files_limit = 8192

max_connections: The maximum permitted number of simultaneous client connections.
table_open_cache (or table_cache in older versions): The number of open tables for all threads.
open_files_limit: The number of files that the operating system permits mysqld to open. The value of this variable at runtime is the real value permitted by the system and might be different from the value you specify at server startup.
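In my.cnf terms, these settings boil down to a fragment like the following (shown for the first mysqld_multi group; the group name is an assumption, adjust it to your instance numbering):

```ini
[mysqld1]
max_connections  = 8000
table_open_cache = 8000    # named table_cache in 5.0/5.1
open_files_limit = 8192
```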


The OFA (Optimal Flexible Architecture) directory structure has been used to install the MySQL Servers.


You can find below an example of this structure:

port           = 33001
mysqladmin     = /u00/app/mysql/product/mysql-5.0.15/bin/mysqladmin
mysqld         = /u00/app/mysql/product/mysql-5.0.15/bin/mysqld
socket         = /u00/app/mysql/admin/mysqld1/socket/mysqld1.sock
pid-file       = /u00/app/mysql/admin/mysqld1/socket/
log-error      = /u00/app/mysql/admin/mysqld1/log/mysqld1.err
datadir        = /u01/mysqldata/mysqld1
basedir        = /u00/app/mysql/product/mysql-5.0.15

Second step: Test preparation

Once all the MySQL servers are installed and running, the second step is to prepare the table containing the records against which the queries will be performed. In this test I decided to create only one table, which is automatically named sbtest1 by sysbench. Notice that it is possible to create several tables by using the “oltp-table-count” parameter.

The number of rows in this table is specified by the “oltp-table-size” parameter. This test table will contain 20'000'000 rows. The test mode is OLTP. According to the sysbench documentation, this test mode was written to benchmark real database performance.

At the prepare stage the following table is created:

mysql> desc sbtest1;
+-------+------------------+------+-----+---------+----------------+
| Field | Type             | Null | Key | Default | Extra          |
+-------+------------------+------+-----+---------+----------------+
| id    | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| k     | int(10) unsigned | NO   | MUL | 0       |                |
| c     | char(120)        | NO   |     |         |                |
| pad   | char(60)         | NO   |     |         |                |
+-------+------------------+------+-----+---------+----------------+

Each record contains random strings in the fields c and pad and random integers between 1 and oltp-table-size in the field k as presented in the following picture:



Sysbench prepare script:

sysbench \
--db-driver=mysql \
--mysql-table-engine=innodb \
--oltp-table-size=20000000 \
--mysql-socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock \
--mysql-port=33301 \
--mysql-db=sysbench \
--mysql-user=sbtest \
--mysql-password=sbtest \
--test=/home/mysql/sysbench/oltp.lua \
prepare

In order to be sure to have the same set of data on each server, a MySQL dump has been taken after the first load. This dump has been imported on each server.


Third step: Running the test

The test has been run with different numbers of threads in order to understand how the different versions/editions and forks of MySQL scale depending on the number of threads. The max-requests parameter limits the total number of requests. The OLTP test mode (oltp.lua) was written to benchmark database server performance with a realistic OLTP scenario.


sysbench \
--db-driver=mysql \
--num-threads=1 \
--mysql-user=sbtest \
--mysql-password=sbtest \
--mysql-db=sysbench \
--max-requests=10000 \
--oltp-test-mode=complex \
--test=/home/mysql/sysbench/oltp.lua \
--mysql-socket=/u00/app/mysql/admin/mysqld1/socket/mysqld1.sock \
--oltp-table-name=sbtest1 \
run


In order to ensure correct results, avoid any side effects due to external processes, and ensure consistent results over time, the benchmark has been run twice.


Fourth step: Collecting results

All the results have been collected in an Excel sheet, and the following graph comes directly from these results:


Fifth step: Results analysis

1. InnoDB has been improved over time with regard to scalability, and the test results tend to prove that. The performance with 64 threads is radically different depending on the MySQL version:

MySQL 5.0.15 – 1237 tps
MySQL 5.1.73 – 1818 tps
MySQL 5.5.39 – 2978 tps
MySQL 5.6.20 – 2801 tps
MySQL 5.6.21 – 2830 tps
MySQL 5.7.4 – 2725 tps
Percona 5.6.21 – 2853 tps
MariaDB 10.0.15 – 2941 tps


2. For applications using only one thread, the performance across MySQL versions (with default settings) is more or less equivalent (+/- 10%):

MySQL 5.0.15 – 163 tps
MySQL 5.1.73 – 158 tps
MySQL 5.5.39 – 150 tps
MySQL 5.6.20 – 145 tps
MySQL 5.6.21 – 149 tps
MySQL 5.7.4 – 145 tps
Percona 5.6.21 – 145 tps
MariaDB 10.0.15 – 143 tps


3. For a large number of threads, it is definitely worth using the thread pool plugin from Percona: during these tests an improvement factor of x30 has been observed. Unfortunately, I didn't see any performance improvement with MySQL 5.6.21 with the thread_pool plugin and the thread_pool_size parameter set to 36, the value reported to give the best performance with sysbench. Regarding Percona, I set the parameter thread_pool_high_prio_mode to transactions. You can find below the results with 4096 threads:

MySQL 5.0.15 – error
MySQL 5.1.73 – 3.97 tps
MySQL 5.5.39 – 9.05 tps
MySQL 5.6.20 – 9.29 tps
MySQL 5.6.21 – 9.07 tps
MySQL 5.6.21 thread pool plugin – 8.75 tps
MySQL 5.7.4 – 5.64 tps
Percona 5.6.21 – 9.83 tps
Percona 5.6.21 thread pool plugin – 295.4 tps
MariaDB 10.0.15 – 8.04 tps
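For reference, the thread pool setup used for these runs corresponds to my.cnf settings along these lines. This is only a sketch: thread_pool_high_prio_mode is Percona-specific, and for Oracle MySQL Enterprise Edition the thread pool plugin must additionally be loaded, so check the exact variable and plugin names against your server version:

```ini
[mysqld]
# switch from one-thread-per-connection to the pool
thread_handling            = pool-of-threads
thread_pool_size           = 36
thread_pool_high_prio_mode = transactions   # Percona Server only
```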

It is interesting to notice that performance degradation can occur with the thread pool plugin activated, both for MySQL and for Percona. This degradation has been observed for a number of threads between 16 and 128 for Percona, and between 32 and 512 for MySQL.



These results tend to prove that the latest MySQL releases perform better than older ones, especially with several threads (64 threads in this case). The only exception is MySQL 5.7.4, which is a development release.

Applications using only one thread won't benefit from a huge performance improvement with the latest MySQL versions. However, enhancements provided in recent versions, such as ONLINE DDL, faster deadlock detection, a dynamic innodb_buffer_pool_size parameter, etc., will for sure save you lots of time.

MySQL forks such as Percona Server and MariaDB perform on par with MySQL Server. In addition, I didn't observe any performance difference between MySQL Enterprise Edition and MySQL Community Edition. It is interesting to notice that the thread pool plugin provided by Percona brings a huge performance improvement with a large number of threads compared to the standard behavior.

Regarding MySQL Enterprise Edition, I haven't been able to see any performance improvement with the MySQL Thread Pool plugin activated, even with a large number of threads. This is perhaps due to a misconfiguration on my side... however, I presented these results to an Oracle MySQL specialist at the Oracle UKOUG booth and he wasn't able to find any error in my configuration.

Red Samurai ADF Performance Audit Tool v 3.3 - Audit Improvements

Andrejus Baranovski - Thu, 2014-12-25 11:13
Christmas present from Red Samurai - ADF Performance Audit Tool v 3.3. This is the next version after 3.2 (Red Samurai ADF Performance Audit Tool v 3.2 - Large Fetch and Full Scan Audit Optimizations), with a set of features improving the audit process.

Implemented features in v 3.3:

1. Logging audit data from multiple WebLogic servers

The audit is improved to log data from several WebLogic servers into the same DB schema; the audit UI dashboard allows selecting data from a specific server or displaying combined data from all of them. This helps when the ADF application is installed in a cluster environment or different application instances are running on different servers.

The current audit server address can be changed in the UI dashboard to display audit data logged from that server. Here is an example showing data from all servers, which is the default:

If the user selects DefaultServer:7101, data is filtered from the selected server only (there are 30 issues displayed):

Select another server - TestServer:7101 and only one logged issue will be displayed:

2. Data Source switch

The UI dashboard is capable of switching between different data sources. This is useful if there are different DB schemas where audit data is logged. From a single UI dashboard, at runtime, the user can decide which audit data to display.

3. Option to turn on/off audit globally with -Dredsamurai.audit=on JVM parameter

It is much easier now to turn the audit on/off when it is installed in an ADF application. This can be controlled with the JVM parameter -Dredsamurai.audit=on/off:
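How the parameter actually reaches the JVM depends on the WebLogic startup scripts in use; a hypothetical sketch for a domain startup script (the script name and any existing JAVA_OPTIONS contents are assumptions) would be:

```shell
# Append the Red Samurai audit switch to the WebLogic JVM options
# (e.g. in setUserOverrides.sh or a custom startup wrapper)
JAVA_OPTIONS="${JAVA_OPTIONS:-} -Dredsamurai.audit=on"
export JAVA_OPTIONS
echo "JVM options: ${JAVA_OPTIONS}"
```

Setting the parameter to off (or omitting it) leaves the audit disabled without rebuilding the application.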

Happy holidays from the grumpy old dba!

Grumpy old DBA - Thu, 2014-12-25 08:57
Best wishes for everyone heading into 2015!

I am looking forward to RMOUG Training Days 2015 (speaking there) while the planning for our conference here in Cleveland, GLOC 2015, kicks into high gear.

Our conference's call for abstracts is open now, and conference registration is also open.  For us, registrations typically don't start rolling in big time until the March timeframe.

Please consider submitting a presentation abstract!

GLOC 2015 speaker application

GLOC 2015 conference registration ( May 18-20 2015 )
Categories: DBA Blogs

Windows Server users: There's no need to panic

Chris Foot - Wed, 2014-12-24 09:39


Hi, welcome to RDX! Many of you have probably heard of a Windows Server vulnerability that allows hackers to assign domain user accounts the same access privileges as administrator accounts.

As many Windows Server experts know, this enables attackers to easily infiltrate computers and other machines within a Windows Server domain. However, a hacker would have to possess accepted domain credentials to take advantage of the bug.

Thankfully, Microsoft released an update to Windows Server 2012 R2 and its predecessors to resolve the issue. This fix ensures that a Kerberos service ticket cannot be forged. Companies looking for Windows Server gurus with extensive experience in security should check out RDX’s Windows service package.

Thanks for watching!

The post Windows Server users: There's no need to panic appeared first on Remote DBA Experts.

last partition

Laurent Schneider - Wed, 2014-12-24 04:19

If you really need to quickly find the latest partition per table, I have written this little gem (it uses a 12c inline WITH function):

with function d (b blob, len number) return date is
  d DATE;
begin
  IF DBMS_LOB.SUBSTR (b, 1, 1) = hextoraw('07') and len=83
  THEN
    DBMS_STATS.convert_raw_value (DBMS_LOB.SUBSTR (b, 12, 2), d);
  ELSE
    d := NULL;
  END IF;
  RETURN d;
END;
SELECT u.name owner, o.name table_name,
  max(d (bhiboundval, hiboundlen)) last_partition
FROM sys.tabpart$ tp
  JOIN sys.obj$ o USING (obj#)
  JOIN sys.user$ u ON u.user# = o.owner#
group by u.name, o.name
order by last_partition desc;

It doesn’t cover all partitioning types, but it is pretty fast and simple.

Configuring MDS Customisation Layer and Layer Value Combination in ADF

Andrejus Baranovski - Wed, 2014-12-24 04:07
With this post I would like to dive a bit deeper into MDS customisation and layer combination handling. By default, there is a predefined customisation layer - site. Typically we set our own customisation values for this layer; as a result, all customisations are usually stored under the site layer. There can be situations when a more advanced setup is required - being able to control the layer and layer value combination in a custom way. In other words, being able to define your own custom layer and then provide customisation values for it (MDS customisations will be stored under custom_layer/custom_layer_value instead of the default site/custom_layer_value). The Oracle docs don't describe how to make the layer name dynamic at runtime and retrieve it from some sort of configuration file. I'm going to describe a technique for this, allowing MDS customisations to be combined and grouped under custom layer and layer value folders.

The sample application is implemented with a separate JDEV project for MDS customisation files. There is no site layer; it starts with profile1/profile2 and then goes with MDS layer values group1/group2. The profile1/profile2 layer switch is dynamic, handled by a custom MDS customisation class implemented in the project. This is how it looks in JDEV (Customisation Context is set to the profile2 name) - the MDS layer name is retrieved from a custom JAR file stored under the JDEV structure (I will describe it below):

In the Oracle docs you will find an example of a custom class with an MDS layer where the layer name is set statically. In my sample app, I have implemented it as dynamic - the layer name is retrieved from a configuration file. The layer name is retrieved and assigned during the first application load; it is not reset until the next restart. Here is an example of the layer name variable initialisation:

The getValue method is overridden as well, to return different MDS layer customisation values based on ADF Security.

The getName method is overridden to return the MDS layer name on application initialisation. A custom method, retrieveDynamicLayerName, is implemented to retrieve the MDS layer name from the configuration file. This method works at design time and runtime, which means it can be used for MDS design-time seeded customisations:

In order to use the custom SiteProfileCC class at runtime, we need to package it into a separate JAR file and include this JAR in the EAR. In my example, the configuration file is packaged together with the class (this allows using it for design-time MDS seeded customisations):

You must copy the JAR file with the MDS seeded customisation class SiteProfileCC into the JDEV directory - jdeveloper/jdev/lib/patches - to make it visible for design-time MDS seeded customisations:

I have defined multiple MDS layers with layer values. Two layers are used for the test - profile1 and profile2. Each of these layers is assigned with group1 and group2 MDS layer values:

The application must be configured to use the custom SiteProfileCC class; this is done in the adf-config.xml file:

Customisations are implemented in a separate JDEV application; all customisations are deployed to a MAR file (we can export them directly and apply them to the running instance of the ADF application):

The MAR file is included in the main application deployment profile, on the EAR level. You should notice that the MDS customisation class JAR file is included to be packaged under the lib folder on the EAR level (this is important - otherwise the application will not start, because it will fail to load the custom SiteProfileCC class):

Let's see how it works. I have provided profile1 for MDS layer in configuration file and redeployed application:

Login with user redsam1, the one granted with Group One role:

Application screen is loaded with customisations based on MDS layer and layer value - read-only table is rendered on the right side:

Login with user redsam2, the one granted with Group Two role:

Customisations for profile1 and group two are applied. Instead of Submit button, Save button is implemented:

Let's change MDS layer to be profile2 and test again with the same users:

User redsam1 gets customisation applied with Jobs block rendered below Employees:

User redsam2 gets customisations with Save and Cancel buttons included:

A prosperous New Year

Anthony Shorten - Tue, 2014-12-23 17:31

It has been a very, very busy 2014, and 2015 is shaping up to be a bumper year for a number of the products we deliver. I have not updated this blog as much as I wanted over the last few months, for various reasons - mainly I have been very busy getting new versions and new products out the door. More about that in the new year.

2015 is shaping up to be a stellar year for the products I manage personally with announcements and exciting new features I am sure customers and partners will embrace.

I wish all my readers, our partners and our customers a happy holidays and a prosperous new year.

Iranian hackers pose threat to global security

Chris Foot - Tue, 2014-12-23 15:40


Hi, welcome to RDX! At times, cybercriminals may be acting for political or nationalistic reasons. One hacker cell has been suspected of harboring such motivations.

Cylance, a cybersecurity research firm based out of California, reported the group has successfully infiltrated notable energy, defense and airline companies. The study’s authors warned that if attacks from the Iranian cell continue, it could impact the physical safety of world citizens. An Iranian diplomat informed news sources that Cylance’s assertion was unsubstantiated.

To help prevent cyber-attacks, it’s imperative that defense contractors, energy firms and other such businesses reevaluate their database security protocols. Applying monitoring tools capable of identifying anomalies is the first step, but proactively searching for bugs and applying patches is an absolute must.

Thanks for watching!

The post Iranian hackers pose threat to global security appeared first on Remote DBA Experts.

Season's Greetings!

WebCenter Team - Tue, 2014-12-23 09:49

Season's Greetings We wish you much success in the year ahead and we sincerely thank you for your continued partnership! We'll be back in 2015 with new assets, programs and education on Oracle WebCenter.
Happy Holidays! - The WebCenter Team