
Feed aggregator

PFCLScan Version 1.3 Released

Pete Finnigan - Wed, 2015-07-01 02:35

We released version 1.3 of PFCLScan our enterprise database security scanner for Oracle a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

Oracle Utilities SDK V4. Available

Anthony Shorten - Tue, 2015-06-30 22:42

The Oracle Utilities SDK V4. has been released and is available from My Oracle Support for download. This release is the latest SDK for extending Oracle Utilities Application Framework V4.3.x based applications.

The download is available from My Oracle Support as Patch 21209648.

Minimizing Extension Deployment Times

Anthony Shorten - Tue, 2015-06-30 22:37

One of the major activities in any implementation of an Oracle Utilities Application Framework based product is to manage extensions to the product. These are customizations that partners, customers or consultants have built to extend or alter the product to suit the needs of an individual site.

 There are typically a number of components that constitute extensions:

  • Oracle Utilities SDK code - These are Java or JavaScript pieces of code that are extensions to the product. In older versions of Oracle Utilities Customer Care And Billing we also supported COBOL based extensions. All these objects are typically managed using the Oracle Utilities SDK deployment utilities, which allow you to apply extensions to a staging area, build full releases from a staging area and build patches between releases from a staging area.
  • ConfigTools objects - In newer versions of Oracle Utilities Application Framework special configuration based objects were introduced to allow implementers to build configuration based objects such as Business Objects, Business Services, Service Scripts, Data Areas and UI Maps. These are typically managed using Bundling, Configuration Migration Assistant or ConfigLab. The latter is available for older versions of Oracle Utilities Customer Care And Billing as Configuration Migration Assistant was introduced in Oracle Utilities Application Framework V4.2 and above.
  • Administration Data - One of the major features of the Oracle Utilities Application Framework is the ability to define business values, business rules and business logic in administration data. This data is typically available from the Administration menu of the product and is managed using Configuration Migration Assistant or ConfigLab. The latter is available for older versions of Oracle Utilities Customer Care And Billing, as Configuration Migration Assistant was introduced in Oracle Utilities Application Framework V4.2 and above.
  • Database Scripts - Database changes are new database objects delivered with the extensions that conform to the guidelines in the Oracle Utilities SDK and DBA Guides shipped with the products. They are typically managed by the database tools of choice at a site.
  • Miscellaneous Files -  One of the facilities in the Oracle Utilities Application Framework is the ability to extend the technical configuration with a set of custom templates and custom user-exits which include any extensions related to the technical configuration of the product. These are typically managed manually.

Now, to minimize the impact of all these changes, the following guidelines are recommended:

  • For SDK based files, use the Oracle Utilities SDK tools to deploy your customizations. Avoid directly undeploying and manually building WAR/EAR files. This avoids manual effort and also reduces manual errors. Do not manually splice and dice your code into the product files. Whilst it is technically possible to manually use jar and ant to build files, using the SDK utilities is lower risk and ensures the customizations, regardless of complexity, are placed in the right locations.
  • When using the SDK utilities, always build full releases but use the patch build facility to build smaller deployment files. Using these patches will greatly minimize the build times and subsequent deployment times.
  • Avoid deploying the appViewer unless necessary. The appViewer is a tool for developers and administrators; it is not recommended for production use, or for environments where developers are not working regularly. Administrators and other people can use an appViewer deployed on a local server, in offline mode, rather than one included in your implementation. The main reason is that the appViewer is a large set of documentation and takes time to rebuild, so omitting it greatly reduces outages during patch installation and deployment. In newer versions of Oracle Utilities Application Framework, appViewer is a completely optional installation.
  • If using ConfigLab or Configuration Migration Assistant, run the comparison ahead of time, so that applying the changes is the only work left at deployment time. Ensure all changes have been approved by the business before they are applied. If you are a customer that has access to both ConfigLab and Configuration Migration Assistant, use the latter, as it is more logical and quicker for applying changes. Note: ConfigLab has been removed from Oracle Utilities Customer Care And Billing V2.5 and above.
  • Use the Database Lifecycle Management Pack to manage database changes. Oracle ships an extension pack to track and manage database changes across your enterprise. Additionally third party solutions are also available to manage database change history.
  • Centralize your templates and user exits for deployment. Custom templates and custom user exits can be environment specific or global depending on individual needs. If they are global, standard configuration management practices can be used to copy them. It is recommended that copying of these files be done in the initial phases of the migration to take advantage of any rebuilds.
  • Avoid multiple rebuilds. The application of changes in EAR/WAR files requires a rebuild of the files and so do other activities such as patching. By using the options on the initialSetup utility you can minimize the build and deployment process. This is documented in the Server Administration Guides (or Operations and Configuration Guides) shipped with your product.
  • Consider using native installations rather than embedded installations. This is quite a saving. In the embedded installation, the product must be down to build the EAR/WAR files, as they need to be updated while actively in use by the Oracle WebLogic server; you cannot update open files. In the native installation, the files are deployed in the Oracle WebLogic domain through the deployment (or update deployment) process. This means you can build the EAR/WAR files while they are in use, as they are copied during the deployment or update deployment process. The update deployment process can be executed from the WLST command line, the Oracle WebLogic console or Oracle Enterprise Manager (on the Oracle WebLogic targets). This update takes far less time than a full load and deployment, and in some cases can be done live. Customers using Inbound Web Services already use this technique, as it updates the deployment (effectively the files are copied to Oracle WebLogic's staging area to be used). For example, on test systems we notice that an update deployment takes less than a minute (depending on the hardware, of course).

This is a summary of the techniques outlined in the Software Configuration Management Series (Doc Id: 560401.1) available from My Oracle Support. This series includes the following documents:

  • Concepts - General Concepts and Introduction.
  • Environment Management - Principles and techniques for creating and managing environments.
  • Version Management - Integration of Version control and version management of configuration items.
  • Release Management - Packaging configuration items into a release.
  • Distribution - Distribution and installation of releases across environments.
  • Change Management - Generic change management processes for product implementations.
  • Status Accounting - Status reporting techniques using product facilities.
  • Defect Management - Generic defect management processes for product implementations.
  • Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation.
  • Implementing Service Packs - Discussion on the service packs and how to use them in an implementation.
  • Implementing Upgrades - Discussion on the upgrade process and common techniques for minimizing the impact of upgrades.

Apache Phoenix, SQL is getting closer to Big Data

Kubilay Çilkara - Tue, 2015-06-30 15:50

Here is a post about another project in the Big Data world that, like Apache Hive from my previous post, enables you to do SQL on Big Data. It is called Apache Phoenix.

Phoenix is a bit different, and a bit closer to my heart too. As I read the documentation on Apache Phoenix, the words 'algebra' and 'relational algebra' came up a few times, and that can mean only one thing: SQL! The use of the word algebra in the docs gave me a lot of confidence. SQL has closure, and is based on a database systems model which has its roots in logic and maths, especially a subset of algebra: set theory.

Apache Phoenix was developed at Salesforce and is now one of the popular projects at Apache. Apache Phoenix is a SQL skin on top of HBase, the columnar (NoSQL) database of the Hadoop ecosystem, capable of storing very large tables and querying them via 'scans'. HBase is part of the Hadoop ecosystem and the file system it uses is usually HDFS. Apache Phoenix uses a JDBC driver on the client.

In the race to bring the easiest to use tools for Big Data, I think Apache Phoenix is very close. It is the SQL we know, used since the 1970s. The Apache Phoenix team seems committed and willing to introduce all of the missing parts of SQL, including transaction processing with different isolation levels, making Phoenix a fully operational relational database layer on HBase. Have a look at their roadmap. The amount of current and suggested future SQL compatibility is remarkable, and this makes me take them really seriously.
  • Transactions
  • Cost-based Query Optimization! (Wow)
  • Joins
  • OLAP
  • Subqueries
  • Striving for full SQL-92 compliance
In addition to all this, it is also possible to turn an existing HBase table into an Apache Phoenix table using CREATE TABLE or even CREATE VIEW, the DDL statements that we know. How handy is that? Suddenly you can SQL-enable your existing HBase database!
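As a sketch of what that mapping can look like (the table, column family and column names below are illustrative, not taken from any real schema), Phoenix DDL along these lines exposes an existing HBase table to SQL:

```sql
-- Map an existing HBase table into Phoenix without copying any data.
-- "web_stat", the "usage" column family and its qualifiers are hypothetical.
CREATE VIEW "web_stat" (
    host VARCHAR PRIMARY KEY,    -- maps to the HBase row key
    "usage"."core" VARCHAR,      -- column family "usage", qualifier "core"
    "usage"."db"   VARCHAR
);

-- The existing HBase rows are now immediately queryable with SQL:
SELECT host, "usage"."core" FROM "web_stat" WHERE host LIKE 'EU%';
```

Because it is a view over the HBase table rather than a copy, dropping it leaves the underlying HBase data untouched.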
How to install and use Phoenix

The SQL skin can be installed into an existing Hadoop HBase installation very quickly. All you need to do is download and extract the tarball. You can set up a standalone Hadoop environment (see my previous blog post for that), then install HBase and install Apache Phoenix.
Once the Apache Phoenix software is installed, you can start it and query it with SQL like this.

From within the bin/ directory of Phoenix install directory run

$ ./  localhost

That will bring you to the Phoenix prompt:

0: jdbc:phoenix:localhost> select * from mytable;

Categories: DBA Blogs

Multitenant vs. schema based consolidation

Yann Neuhaus - Tue, 2015-06-30 11:12

If you want to install multiple instances of a software package, for example if you host the ERP for several companies or subsidiaries, you have 3 solutions:

  • have one database and multiple schemas
  • have multiple databases
  • have one database and multiple pluggable databases

Of course, this is exactly the reason for pluggable databases: multitenant. You have good isolation but still share resources. A lot of reasons have been given why multiple schemas - or schema-based consolidation - are not a good solution. I don't agree with most of them. But there is one very good reason that I'll show later, and it's about cursor sharing.

Schema-based consolidation

Let's take the Oracle white paper presenting multitenancy.

Name collision might prevent schema-based consolidation

Yes some applications have a fixed schema name. If your ERP must be installed in SYSERP schema, then you cannot install several ones in the same database.

However, you should challenge your application provider about that before changing all your infrastructure and buying expensive options. Maybe I'm too optimistic here, but I think it's something from the past. I remember a telco billing package I installed 15 years ago. The schema was 'PB'. It had nothing to do with the software name or the vendor name. But when I asked if I could change it, the answer was no. That schema name was hard-coded everywhere. I got it when the main developer came to visit us... his name was Pierre B.

About public synonyms, and public database links... please just avoid them.

Schema-based consolidation brings weak security

Same idea. If your application requires a 'SELECT ANY' privilege, then don't do it. In 12c you have privilege analysis, which can help to identify the minimal rights you need to grant.
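For example, a minimal privilege-analysis run might look like the following sketch (the capture name is hypothetical, and the exact result views can vary by version):

```sql
-- Capture the privileges actually used database-wide while the
-- application runs; 'APP_CAPTURE' is an illustrative name.
BEGIN
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'APP_CAPTURE',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE(name => 'APP_CAPTURE');
END;
/
-- ... exercise the application workload, then stop and analyze:
BEGIN
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE(name => 'APP_CAPTURE');
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT(name => 'APP_CAPTURE');
END;
/
-- Privileges granted but never used are candidates for revocation:
SELECT username, sys_priv
FROM   dba_unused_privs
WHERE  capture = 'APP_CAPTURE';
```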


Per application backend point-in-time recovery is prohibitively difficult

I don't see the point. Currently multitenant does not give us more options, because neither pluggable database point-in-time recovery nor flashback pluggable database is currently possible in-place. But I know it's planned for the future. You can already read about it at

Of course, when using schema-based consolidation you should use different tablespaces, and then you have TSPITR.


Resource management between application backends is difficult

Well, you don't need pluggable databases to use services. Multitenant is just an easy way to force the application to use specific services.


Patching the Oracle version for a single application backend is not possible

Yes, plugging a PDB into a different version CDB can be faster for those applications that have a lot of objects. But it is not as easy as the doc says: the PDB dictionary must be patched. It's still a good thing when the system metadata is a lot smaller than the application metadata.


Cloning a single application backend is difficult

Cloning a PDB is easy. Right. 

Finally, multitenant is nice because of pluggable databases. Do you know that every occurrence of 'multitenant' in 12c code or documentation was 'pluggable database' one month before the release?

But wait a minute, I'm not talking about test environments here. I'm talking about consolidating similar production databases. And all the plug/unplug has the same problem as transportable tablespaces: the source must be made read-only.


Cursor sharing in schema-based consolidation

Time to show you the big advantage of multitenant.

10 years ago I worked on a database that had 3000 schemas. Well we had 5 databases like that. You can think of them as specialized datamarts: same code, same data model, but different data, used by application services provided to different customers. A total of 45TB was quite nice at that time.

That was growing very fast and we had 3 issues.

Issue one was capacity planning. The growth was difficult to predict. We had to move those schemas from one database to another, from one storage system to another... It was 10g - no online datafile move at that time. Transportable tablespaces were there, but see the next point.

The second issue was the number of files. At first, each datamart had its own set of tablespaces. But >5000 datafiles on a database was too much for several reasons. One of the reasons was RMAN. I remember a duplicate with skip tablespace that took 2 days to initialize...

Then we consolidated several datamarts into the same tablespaces. When I think about it, the multitenant database we can have today (12c) would not have been an easy solution: lots of pluggable databases mean lots of datafiles. I hope those RMAN issues have been fixed, but there are other ones. Did you ever try to query DBA_EXTENTS on a >5000 datafiles database? I had to, when we had some block corruption on the SAN (you know, because of issue 1 we did a lot of online reorganization of the filesystems, and the SAN software had a bug). This is where I made my alternative to DBA_EXTENTS.
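For reference, the kind of DBA_EXTENTS lookup that hurts on such a database is the classic "which segment owns this corrupted block" query (the file and block numbers here are made up):

```sql
-- Find the segment containing a given corrupted block.
-- On a database with >5000 datafiles this query can take a very long time,
-- because DBA_EXTENTS must be resolved across every file.
SELECT owner, segment_name, segment_type
FROM   dba_extents
WHERE  file_id = 1234                                      -- illustrative file#
AND    56789 BETWEEN block_id AND block_id + blocks - 1;   -- illustrative block#
```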

Then the third issue was cursor sharing.

Let me give you an example.

I create the same table in two schemas (DEMO1 and DEMO2) of same database.

SQL> connect demo1/demo@//
SQL> create table DEMO as select * from dual;

Table created.

SQL> select * from DEMO;


SQL> select prev_sql_id from v$session where sid=sys_context('userenv','sid');


SQL> connect demo2/demo@//
SQL> create table DEMO as select * from dual;

Table created.

SQL> select * from DEMO;


I'm in multitenant here because of the second test I'll do, but it's the same pluggable database PDB1.

You see that I've executed exactly the same statement - SELECT * FROM DEMO - in both connections. Same statement, but on different tables. Let's look at the cursors:


The optimizer tried to share the same cursor. The parent cursor is the same because the SQL text is the same. Then it follows the child list in order to see if a child can be shared. But semantic verification sees that it's not the same 'DEMO' table, and it has to hard parse.

The problem is not the hard parse. It's not the same table, so it's another cursor. Only the name is the same.
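One way to confirm this (a sketch; the SQL_ID is the one from this demo and will differ on your system) is to ask V$SQL_SHARED_CURSOR why the children could not be shared:

```sql
-- Each 'Y' column in V$SQL_SHARED_CURSOR names a reason a child cursor
-- could not be reused; here we expect mismatches because the same text
-- resolves to two different DEMO tables.
SELECT child_number, auth_check_mismatch, translation_mismatch
FROM   v$sql_shared_cursor
WHERE  sql_id = '0m8kbvzchkytt';
```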

Imagine what happened on my database where I had 3000 identical queries on different schemas. We didn't have 'perf flame graphs' at that time, or we would have seen a large flame over kkscsSearchChildList.

Looking at thousands of child cursors in the hope of finding one that can be shared is very expensive. And because it's the same parent cursor, there is high contention on the latch protecting the parent.

The solution at that time was to add a comment to the SQL statements with the name of the datamart, so that each one had a different SQL text - a different parent cursor. But that was a big change of code with dynamic SQL.

Cursor sharing in multitenant consolidation

So, in 12c, I ran the same query on different pluggable databases. After the previous test, where I had two child cursors in PDB1 (CON_ID=5), I ran the same in PDB2 (CON_ID=4), and here is the view of parent and child cursors from the CDB:


We have the two child cursors from the previous test, and we have a new child for CON_ID=4.

The child number may be misleading, but the search for a shareable cursor is done only within the current container, so the same query run from another pluggable database did not try to share a previous cursor. We can see that because there is no additional 'reason' in V$SQL_SHARED_CURSOR.

SQL> select con_id,sql_id,version_count from v$sqlarea where sql_id='0m8kbvzchkytt';

    CON_ID SQL_ID        VERSION_COUNT
---------- ------------- -------------
         5 0m8kbvzchkytt             3
         4 0m8kbvzchkytt             3

The V$SQLAREA is also misleading because VERSION_COUNT aggregates the versions across containers.

But the real behavior is visible in V$SQL_SHARED_CURSOR above and if you run that with a lot of child cursor you will see the difference in CPU time, latching activity, etc.
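If you want a per-container count instead of the aggregated VERSION_COUNT, a query like this (same demo SQL_ID) avoids the ambiguity:

```sql
-- V$SQL exposes one row per child cursor along with its CON_ID, so
-- counting per container shows the real distribution of children.
SELECT con_id, COUNT(*) AS child_cursors
FROM   v$sql
WHERE  sql_id = '0m8kbvzchkytt'
GROUP  BY con_id;
```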


I'm not talking about pluggable databases here. Pluggable databases do not need the multitenant option, as you can plug/unplug a database in single-tenant. The pluggable database is a nice evolution of transportable database.

When it comes to multitenant - having several pluggable databases in the same container, in order to have several 'instances' of your software without multiplying the instances of your RDBMS - then here is the big point: consolidation scalability.

You can add new pluggable databases, and run same application code on them, without increasing contention, because most of the instance resources are isolated to one container. 

New Tools releases, now with Java

Kris Rice - Tue, 2015-06-30 10:28
What's New   For the 90%+ of people using sqldev/modeler on Windows, the JDK bundled versions are back.  So no more debating what to install or worrying about conflicting Java versions.   Lots of bug fixes.    My favorite bug is now fixed, so you can use emojis in your sql> prompt. RESTful CSV Loading   We wrapped the same CSV import code in SQL Developer into the REST Auto-Enablement

Interaction Hub Image Now Available on the PeopleSoft Update Manager Home Page

PeopleSoft Technology Blog - Tue, 2015-06-30 10:23

As noted in a recent post, the PeopleSoft Interaction Hub is now part of the Selective Adoption Process.  You can get the first image now on the PUM home page.  (At the PUM home page, choose the PeopleSoft Update Image Home Pages tab, then select the Interaction Hub Update Image page from the drop down.)  This means customers can use the PeopleSoft Update Manager and our other life cycle tools to manage their upgrade and maintenance process for the Hub.  There is also a white paper posted there that describes the baseline customers must reach to start taking these images. 

Note that this will be the only way for customers to take maintenance and updates going forward, so we encourage everyone to move to the Selective Adoption process as soon as is feasible for your organization. This move brings the Interaction Hub in line with all other PeopleSoft applications, which use the Selective Adoption process.  This process also offers customers additional value and control, and enables you to benefit from the value of the latest features with a greatly streamlined life cycle process.  

For customers that are eager to learn more, there are many resources on Selective Adoption and PUM on the PUM home page as well as on our YouTube channel.

This first image of the Interaction Hub is functionally equivalent to the current release (9.1/Revision 3), but taking it gets you onto the Selective Adoption process.  Some great enhancements are coming in the next image.

The Week That Was Kscope15

Oracle AppsLab - Tue, 2015-06-30 10:11

Noel (@noelportugal), Raymond (@yuhuaxie), Mark (@mvilrokx) and I traveled to sunny Hollywood, Florida last week to attend Kscope15 (#kscope15), the annual conference of the Oracle Development Tools User Group (@odtug).

Check out some highlights of our week.

IMG_20150621_181205 IMG_20150621_192710 IMG_20150621_195446 download_20150622_223725

If you read here, you probably know that this year, Noel had cooked up something new and different for the conference, a scavenger hunt.

This year was my fourth Kscope, and as we have in past years, we planned to do something fun. At the end of Kscope14, Monty Latiolais (@monty_odtug), the President of the ODTUG Board of Directors, approached us to collaborate on something cool for Kscope15.

We didn’t know what exactly, but we all wanted to do something new, something fun, something befitting of Kscope, which is always a great conference. So, we spent the next few months chatting with Crystal (@crystal_walton), Lauren (@lprezby) and Danny (@dbcapoeira) intermittently, developing ideas.

We eventually settled on a scavenger hunt, which would allow attendees to experience all the best parts of the conference, almost like a guided tour.

Once we had a list of tasks, Noel developed the game, and with Mark and Raymond pitching in, they built it over the course of a few months. Tasks were completed one of three ways, by checking in to a Raspberry Pi station via NFC, by staff confirmation, and by tweeting a picture or video with the right hashtags.

We arrived in Hollywood unsure of how many players we’d get. We didn’t do much promotion in advance, and we decided to limit the game to 500 players to ensure it didn’t get too crazy.

Over the first few days, we registered nearly 150 players, and of them, about 100 completed at least one task, both well above my conservative expectations.

During the conference, we had a core of about 10-20 dedicated players who made the game fun to watch. They jockeyed back and forth in the top spots, trolling each other on Twitter, and waiting to complete tasks to allow fleeting hope to the other players.

In the end, we had a tie that we had to break at the conference’s closing session. Here are the final standings:


Congratulations winners, and thank you to everyone who played for making the game a success.

And finally an enormous thank you to ODTUG and the Kscope15 organizers for allowing us this opportunity. We’re already noodling ways to improve the game for Kscope16 in Chicago.

Stay tuned for other Kscope15 posts.

July 8th: Overhead Door Corporation HCM Cloud Customer Forum

Linda Fishman Hoyle - Tue, 2015-06-30 09:38

Join us for an Oracle HCM Cloud Customer Forum on Wednesday, July 8, 2015, to hear Larry Freed, Chief Information Officer at Overhead Door Corporation. He will explain the company's desire for a massive HR transformation to include changing its benefits, payroll, core HR, employee self-service, and manager self-service. The transformation would provide the employees with a single source solution so the HR field staff could become more strategic.

During this Customer Forum call, Freed will talk about Overhead Door's selection process for new HR software, its implementation experience with Oracle HCM Cloud, and the expectations and benefits of its new modern HR system.

Register now to attend the live Forum on Wednesday, July 8, 2015, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time, and learn more directly from the CIO of Overhead Door Corporation.

U of Phoenix: Losing hundreds of millions of dollars on adaptive-learning LMS bet

Michael Feldstein - Tue, 2015-06-30 09:17

By Phil Hill

It would be interesting to read (or write) a post mortem on this project some day.

Two and a half years ago I wrote a post describing the University of Phoenix investment of a billion dollars on new IT infrastructure, including hundreds of millions of dollars spent on a new, adaptive-learning LMS. In another post I described a ridiculous patent awarded to Apollo Group, parent company of U of Phoenix, that claimed ownership of adaptive activity streams. Beyond the patent, Apollo Group also purchased Carnegie Learning for $75 million as part of this effort.

And that’s all going away, as described by this morning’s Chronicle article on the company planning to go down to just 150,000 students (from a high of 460,000 several years ago).

And after spending years and untold millions on developing its own digital course platform that it said would revolutionize online learning, Mr. Cappelli said the university would drop its proprietary learning systems in favor of commercially available products. Many Apollo watchers had long expected that it would try to license its system to other colleges, but that never came to pass.

I wonder what the company will do with the patent and with Carnegie Learning assets now that they’re going with commercial products. I also wonder who is going to hire many of the developers. I don’t know the full story, but it is pretty clear that even with a budget of hundreds of millions of dollars and adjunct faculty with centralized course design, the University of Phoenix did not succeed in building the next generation learning platform.

Update: Here is full quote from earnings call:

Fifth. We plan to move away from certain proprietary and legacy IT systems to more efficiently meet student and organizational needs over time. This means transitioning an increased portion of our technology portfolio to commercial software providers, allowing us to focus more of our time and investment on education and student outcomes. While Apollo was among the first to design an online classroom and supporting system, in today’s world it’s simply not as efficient to continue to support complicated, custom-designed systems, particularly given the newer, quality systems we have more recently found with some of the software providers that now exist within the marketplace. This is expected to reduce costs over the long term and increase operational efficiency and effectiveness, while still very much supporting a strong student experience.

The post U of Phoenix: Losing hundreds of millions of dollars on adaptive-learning LMS bet appeared first on e-Literate.

Stay Up to Date with Key My Oracle Support Resources of Your Choice using Hot Topics

Joshua Solomin - Tue, 2015-06-30 09:01

Hot Topics sends automated emails when a selected resource is added or updated, keeping you informed of changes.

Resources available for Hot Topics updates include: knowledge documents, bugs, service requests, desupport notices, product newsletters, and field action bulletins. Notification of your favorite document is also available. Each Hot Topics email contains links to content that has changed based on your settings.

You can choose the frequency of e-mail and select only the resources important and relevant to you.

Along with Hot Topics you can opt to receive My Oracle Support Site Alerts. When enabled, this option informs you when a My Oracle Support outage is scheduled.

Click to watch a video on using Hot Topics.

To set up and enable Hot Topics E-mail:

  1. Go to the Settings tab in My Oracle Support and click Hot Topics E-mail.
  2. Select how often you want to receive emails and the content format (plain text or HTML).
  3. Make any selections in the Content to Include as desired.
  4. In the Selected Products section, click + Add then specify a product to monitor.
  5. Make other selections as required, then click Apply or OK.
    • Note that if you click Apply, the Add Product window remains open so you can add additional products.
  6. To receive e-mail notifications about health recommendations, in the Health Recommendations section, select a recommendation category (Severity, Favorite Targets, By Support Identifier, or By Target Type).
  7. If you select By Target Type, click Add... then specify a target type to monitor.
  8. Make other selections as required, then click OK.
  9. To save changes to Hot Topics E-mail, click Apply at the bottom of the window.

For more information about Hot Topics E-mail, see Knowledge Document 793436.1, "Use My Oracle Support Hot Topics Email to subscribe to Support Product News, SRs, Bugs, etc. and events so that you Never Miss an Important Update."

Health Sciences Partner Support Best Practices & Resources

Chris Warticki - Tue, 2015-06-30 07:31

Thanks to all of our Health Sciences Partners that joined today's webcast on Support Best Practices and Resources.
Below is the leave-behind list of all the links to the information discussed.

First – The #1 investment is the product itself, therefore be a student of the product

OTN for Health Sciences Documentation
Healthcare Applications Training
Health Sciences Documentation
Life Sciences Applications Training
Health Sciences Knowledge Zones

Oracle University
All Product-specific landing pages
Oracle Learning Library, aka Oracle By Example – 6000+ Free Tutorials/Demos
Public list of all available webconferences
Advisor Webcast Current Schedule and Archives too from Support (ID 740966.1)
Oracle E-Business Suite Transfer of Information (TOI) courses (ID 807319.1)
Information about the functional changes in Release 12.1 and Release 12.1.x Release Update Packs (RUPs).

#2 – Remain In-the-Know from Oracle Support and Oracle Corporation

Setup Hot-Topics emails from My Oracle Support
Subscribe to available Newsletters from major product lines and technologies
Events & Webcasts Schedule and Archives
Product Support Newsletters from Oracle Support teams

#3 – Personalize My Oracle Support

Customize your Dashboard & use Powerviews

#4 – FIND it, the FIRST time, FAST!

Use the Knowledge Browser in My Oracle Support
Check out available Product Information Centers, like the one for OC/RDC
Know what Support knows, so you can tell with certainty whether you need to open a Service Request or not.

#5 – Leverage ALL the available Diagnostics tools and Scripts

Proactive Support Portfolio - Categorical List of all Tools, Diagnostics, Scripts and Best Practices (by Product Family)
Configuration Manager
Install - Remote Diagnostic Agent (RDA) for Database, Server Tech & other Products
Over 25 built-in tools and tests. Over 80 seeded profiles
Ora-600/7445 Internal Errors Tool
Performance Diagnostics Guide and Tuning Diagnostics
PL/SQL Tuning Scripts

Install - EBusiness Diagnostic Support Pack for Applications

PSFT – Change Assistant
PSFT – Change Impact Analyzer
PSFT – Performance Monitor
PSFT – Setup Manager

JDE – Change Assistant
JDE – Configuration Assistant
JDE – Support Assistant

Guardian Resource Center

SUN Systems Mgmt & Diagnostic Tools
Oracle ASR Product Page
Oracle STB Product Page
Oracle Sun System Analysis Product Page
Oracle Shared Shell Product Page
Oracle Secure File Transport
Oracle Hardware Service Request Automated Diagnosis
Oracle Validation Test Suite
PC Check
Oracle Hardware Installation Assistant
Oracle Hardware Installation Assistant Product Page
Cediag Memory DIMM Replacement Management Tool

#6 – Engage with Oracle Support

Check Configuration Manager Healthchecks and Patch Recommendations
Fill-out Service Request Templates completely
Use all Diagnostics & Data Collectors (432.1)
Upload ALL reports if logging a Service Request
Leverage Oracle Collaborative Support (web conferencing)
Better Yet – Record your issue and upload it (why wait for a scheduled web conference?)
Request Management Attention as necessary

#7 – Expand your Circles of Influence

Facebook: Oracle Health Sciences

Linkedin: Oracle in Healthcare and Life Science

Twitter: Oracle Health Sciences on Twitter


#8 – Understand Oracle Support Policies and Processes

All Technical Support Policies
Lifetime Support Policy
Oracle Support Technical Support Policies
Database, FMW, EM Grid Control and OCS Software Error Correction Policy
Ebusiness Suite Software Error Correction Policy

- Chris Warticki
#Oracle News, Info & Support

LATERAL Inline Views, CROSS APPLY and OUTER APPLY Joins in 12c

Tim Hall - Tue, 2015-06-30 07:26

I was looking for something in the New Features Manual and I had a total WTF moment when I saw this stuff.

If you look at the final section of the article, you can see in some cases these just get transformed to regular joins and outer joins, but there is certainly something else under the hood, as shown by the pipelined table function example.

I think it’s going to take me a long time before I think of using these in my regular SQL…
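For a flavour of the syntax, here is a minimal sketch against hypothetical dept/emp-style tables (the table and column names are my own illustration, not from the article):

```sql
-- LATERAL inline view: the inline view may reference d.deptno from the
-- table to its left, which a plain inline view cannot do.
SELECT d.dname, emps.ename
FROM   dept d,
       LATERAL (SELECT e.ename FROM emp e WHERE e.deptno = d.deptno) emps;

-- CROSS APPLY works like a join against the lateral view; OUTER APPLY
-- behaves like a left outer join, so departments with no employees are kept.
SELECT d.dname, emps.ename
FROM   dept d
       OUTER APPLY (SELECT e.ename FROM emp e WHERE e.deptno = d.deptno) emps;
```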



Update: The optimizer has used LATERAL inline views during some query transformations for some time, but they were not documented and therefore not supported for us to use directly until now. Thanks to Dominic Brooks and Sayan Malakshinov for the clarification.

LATERAL Inline Views, CROSS APPLY and OUTER APPLY Joins in 12c was first posted on June 30, 2015 at 2:26 pm.

MTOM using SoapUI and OSB

Darwin IT - Tue, 2015-06-30 06:40
MTOM (Message Transmission Optimization Mechanism) is incredibly hard... to find practical information about, for SoapUI and OSB. There are loads of articles. Like:
But I need to process documents that are sent using MTOM to my service. And to be able to test it, I need to create a working example of a SoapUI project that does exactly that. About SoapUI and MTOM there are also loads of examples, and it is quite simple really. But I had a more complex WSDL that I was able to use for SOAP with Attachments (SwA), which is also simple really. But how to connect those two in a simple working example? Well, actually, it turns out not so hard either... So, bottom line: MTOM with SoapUI and OSB is not so hard. If you know how, that is.

So let's work this out on a step-by-step basis.
XSD/WSDL
I'll start with a simple XSD:
<?xml version="1.0" encoding="windows-1252" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="mtomRequest" type="MtomRequestType"/>
  <xsd:complexType name="MtomRequestType">
    <xsd:sequence>
      <xsd:element name="document" type="xsd:base64Binary"/>
    </xsd:sequence>
  </xsd:complexType>
  <xsd:element name="mtomResponse" type="MtomResponseType"/>
  <xsd:complexType name="MtomResponseType">
    <xsd:sequence>
      <xsd:element name="document" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

In JDeveloper, this looks like:
The key is the 'xsd:base64Binary' type of the request document. In the response I have a string: in this example I'll base64-encode the attachment using a Java class, just to show how to process the document. But in my project this is what I need to do anyway.

The WSDL is just as easy, plain synchronous Request-Response:

<wsdl:definitions name="MTOMService" targetNamespace="" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:inp1="" xmlns:tns="" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <wsdl:types>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <xsd:import namespace="" schemaLocation="../xsd/MTOMRequestResponse.xsd"/>
    </xsd:schema>
  </wsdl:types>
  <wsdl:message name="requestMessage">
    <wsdl:part name="part1" element="inp1:mtomRequest"/>
  </wsdl:message>
  <wsdl:message name="replyMessage">
    <wsdl:part name="part1" element="inp1:mtomResponse"/>
  </wsdl:message>
  <wsdl:portType name="execute_ptt">
    <wsdl:operation name="execute">
      <wsdl:input message="tns:requestMessage"/>
      <wsdl:output message="tns:replyMessage"/>
    </wsdl:operation>
  </wsdl:portType>
  <wsdl:binding name="execute_pttSOAP11Binding" type="tns:execute_ptt">
    <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
    <wsdl:operation name="execute">
      <soap:operation style="document" soapAction=""/>
      <wsdl:input>
        <soap:body use="literal" parts="part1"/>
      </wsdl:input>
      <wsdl:output>
        <soap:body use="literal" parts="part1"/>
      </wsdl:output>
    </wsdl:operation>
  </wsdl:binding>
  <wsdl:service name="execute_ptt">
    <wsdl:port name="execute_pttPort" binding="tns:execute_pttSOAP11Binding">
      <soap:address location=""/>
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>
Did you know that in JDeveloper it is really easy to create this WSDL? Just create a SOA project, drag and drop a Web Service onto the exposed-services lane, and define a WSDL as synchronous, with a request and response message. Then open the WSDL in the WSDL editor, drag the operations to the binding pane and then the binding to the services pane:
The SoapUI Part
Now, create a new SoapUI project based on this WSDL. It turns out that SoapUI interprets this base64Binary field and creates special content:

This body refers to an attachment, that is not yet added:
Let's add an image to it, by opening the 'Attachments' tab and clicking the plus button. You can select the 'Part' to which the attachment is to be linked. Doing so will change the 'Type' into 'CONTENT'. Edit either the 'ContentID' or the id in the document-element (indicated by 'cid:') so that they match each other.

At this point, you can create a mock service on the request and set the host of the mock service to 'localhost' and the path to 'MTOMService' in the mock-service editor:
Then you can right-click on the Mock-server and select 'Add endpoint to interface'.

Running the Request, will send the following message to the Mock Service:
(Although the title is 'Response 1', what you see here is the request received by the Mock Service.)
Apparently SoapUI base64-encoded the attachment and embedded it into the document-element.

Now you can enable MTOM on the request. Select the Request and go to the properties pane:
When running the request again, SoapUI won't base64-encode the attachment but will send it as a compressed MIME/Multipart attachment, with a reference in the document:
In the http-log you'll find:
POST /MTOMService HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: multipart/related; type="application/xop+xml"; start="<>"; start-info="text/xml"; boundary="----=_Part_11_531670487.1435664879005"
SOAPAction: ""
MIME-Version: 1.0
Content-Length: 39605
Host: localhost:8080
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)


Content-Type: application/xop+xml; charset=UTF-8; type="text/xml"

Content-Transfer-Encoding: 8bit

Content-ID: <>

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:mtom="">
   <soapenv:Body>
      <mtom:mtomRequest>
         <mtom:document><inc:Include href="cid:915251933163" xmlns:inc="http://www.w3.org/2004/08/xop/include"/></mtom:document>
      </mtom:mtomRequest>
   </soapenv:Body>
</soapenv:Envelope>


Content-Type: image/jpeg; name=SoapUIMTOMRequest.jpg

Content-Transfer-Encoding: binary

Content-ID: <915251933163>

Content-Disposition: attachment; name="SoapUIMTOMRequest.jpg"; filename="SoapUIMTOMRequest.jpg"


[... raw binary JPEG data omitted for readability ...]

Where I removed all the new-line and timing codings, for readability. This is what actually goes 'over the line'.
The OSB Part
Now we're ready for the OSB part. Create a new OSB project and add the WSDL and XSD to it. If you created the WSDL, like I did, in JDeveloper, you can create the OSB project with the same name in the same folder as the JDeveloper project.

Create a new Proxy Service, and name it 'MTOMService' for instance. Base it on the MTOMService wsdl, created above.
I added a Pipeline, with stages and alerts to log the $attachments and $body variables. However, it turns out that since we're using MTOM via a base64Binary-element, the $attachments variable is empty. The $body variable contains the message as seen in SoapUI.

Now, the most interesting part here is: 'How to get to the attachment-content?' Using 'Soap with Attachments' (SwA), the $attachments variable gives access to the binary content, with an expression like:
Where 'ctx:' is an internal namespace of OSB:

But since the $attachments is empty, this won't work. It is the base64Binary element that gives access to the content, in just the same way. So the expression is:

I added an assign with this as an expression to a separate variable called 'documentBin'.
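The expressions themselves appeared as screenshots in the original post; as a rough sketch (reconstructed from the variable and namespace names mentioned above, so treat the exact paths as assumptions), they look something like this:

```
(: SwA case - binary content of the attachment via $attachments :)
$attachments/ctx:attachment/ctx:body/ctx:binary-content

(: MTOM/base64Binary case - the document element straight from $body :)
$body/mtom:mtomRequest/mtom:document
```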

Then I added a Java Callout to my Base64-encoding method. For this I used the class described in my previous article. I jarred it and added the jar to my project. The input of this method is a 'byte[] bytes' and the output is a 'String', for which I used the variable 'documentB64'. Then I added a replace with the following to pass back the response:
<mtom:mtomResponse xmlns:mtom="">
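The callout class itself is described in the previous article; as a minimal sketch (class and method names here are my own, assumed for illustration), it boils down to:

```java
import java.util.Base64;

// Minimal sketch of a Base64-encoding callout class. OSB Java Callouts
// require public static methods, so the encoder is exposed that way.
public class Base64Encoder {

    // Takes the binary document content and returns its Base64 encoding.
    public static String encode(byte[] bytes) {
        return Base64.getEncoder().encodeToString(bytes);
    }
}
```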

Then comes an important setting: enabling MTOM. Go to the Message Handling tab of the proxy service:
Check the 'Enabled' box of 'XOP/MTOM Support'. Leave the radio button set to 'Include Binary Data by Reference'. Save the proxy service.
The proof in the eating
Now, publish it to a running OSB server and change the Endpoint URL within SoapUI to the OSB service.
Running the SoapUI Request via OSB results in the following response:
<soapenv:Envelope xmlns:soapenv="">
<soapenv:Header xmlns:mtom=""/>
<soapenv:Body xmlns:mtom="">

The Alert of the documentB64 variable shows:
Conclusion
I spent quite some time searching the internet for usable articles on SoapUI, OSB and MTOM. But in the end, writing this article cost me more time than implementing this. I hope this article can be rightfully categorized in my 'FMW made Simple' series.
Downloads
I made my projects downloadable via:

ODA - VMs possibilities & performances

Yann Neuhaus - Tue, 2015-06-30 02:56

As you know, it is possible to install the ODA in virtualized mode and to take advantage of all the cores not licensed with Enterprise Edition for additional VMs.

The question is what we could do with it and what performance we could expect...

Make even more of UKOUG Tech15: APEX 5.0 UI Training - Dec 10th in Birmingham

Dimitri Gielis - Tue, 2015-06-30 00:48

APEX 5.0 has been released this spring. People who have already spent some time on this new version know this version is packed with new features aimed to make APEX developers even more productive, like the Page Designer.
Another striking new subset of features is aimed at creating better looking user interfaces for your APEX applications in an easy and maintainable way. The definition of user interface components in APEX 5.0 is very different to what we're used to. For example there is a new Universal Theme with Template Options and a Theme Roller. To get you up and running with this new toolset as quickly as possible, Dimitri Gielis of APEX R&D and Roel Hartman of APEX Consulting have joined forces and set up a one day course fully aimed at APEX 5.0 UI. So if you want to know not only how to use the new Theme, but also how to modify it to fit your needs, this is the event you should attend!
The training will be at the Jury’s Inn in Birmingham (UK) on Thursday, Dec 10 - so conveniently immediately after the UKOUG Tech15 conference. For more information and registration, see:
If you are from another country and think this training should be available in your country as well, please contact us - then we'll see what we can do!
Categories: Development

ReConnect 2015

Jim Marion - Mon, 2015-06-29 17:43

It is just a little less than a month until the PeopleSoft ReConnect conference in Rosemont, Illinois. I will be presenting PeopleTools Developer: Tips and Techniques on Thursday from 11:30 AM to 12:20 PM in Grand Ballroom H.

ASU Is No Longer Using Khan Academy In Developmental Math Program

Michael Feldstein - Mon, 2015-06-29 17:37

By Phil HillMore Posts (340)

In these two episodes of e-Literate TV, we shared how Arizona State University (ASU) started using Khan Academy as the software platform for a redesigned developmental math course[1] (MAT 110). The program was designed in Summer 2014 and ran through Fall 2014 and Spring 2015 terms. Recognizing the public information shared through e-Literate TV, ASU officials recently informed us that they had made a programmatic change and will replace their use of Khan Academy software with McGraw-Hill’s LearnSmart software that is used in other sections of developmental math.

To put this news in context, here is the first episode’s mention of Khan Academy usage.

Phil Hill: The Khan Academy program that you’re doing, as I understand, it’s for general education math. Could you give just a quick summary of what the program is?

Adrian Sannier: Absolutely. So, for the last three-and-a-half years, maybe four, we have been using a variety of different computer tutor technologies to change the pedagogy that we use in first-year math. Now, first-year math begins with something we call “Math 110.” Math 110 is like if you don’t place into either college algebra, which has been the traditional first-year math course, or into a course we call “college math,” which is your non-STEM major math—if you don’t place into either of those, then that shows you need some remediation, some bolstering of some skills that you didn’t gain in high school.

So, we have a course for that. Our first-year math program encompasses getting you to either the ability to follow a STEM major or the ability to follow majors that don’t require as intense of a math education. What we’ve done is create an online mechanism to coach students. Each student is assigned a trained undergraduate coach under the direction of our instructor who then helps that student understand how to use the Khan Academy and other tools to work on the skills that they show deficit in and work toward being able to satisfy the very same standards and tests that we’ve always used to ascertain whether a student is prepared for the rest of their college work.

Luckily, the episode on MAT 110 focused mostly on the changing roles of faculty members and TAs when using an adaptive software approach, rather than focusing on Khan Academy itself. After reviewing the episode again, I believe that it stands on its own and is relevant even with the change in software platform. Nevertheless, I appreciate that ASU officials were proactive to let me know about this change, so that we can document the change here and in e-Literate TV transmedia.

The Change

Since the change has not been shared outside of this notification (limiting my ability to do research and analysis), I felt the best approach would be to again interview Adrian Sannier, Chief Academic Technology Officer at ASU Online. Below is the result of an email interview, followed by short commentary [emphasis added].

Phil Hill: Thanks for agreeing to this interview to update plans on the MAT 110 course featured in the recent e-Literate TV episode. Could you describe the learning platforms used by ASU in the new math programs (MAT 110 and MAT 117 in particular) as well as describe any changes that have occurred this year?

Adrian Sannier: Over the past four years, ASU has worked with a variety of different commercially available personalized math tutors from Knewton, Pearson, McGraw Hill and the Khan Academy applied to 3 different courses in Freshman Math at ASU – College Algebra, College Math and Developmental Math. Each of these platforms has strengths and weaknesses in practice, and the ASU team has worked closely with the providers to identify ways to drive continuous improvement in their use at ASU.

This past year ASU used a customized version of Pearson’s MyMathLab as the instructional platform for College Algebra and College Math. In Developmental Math, we taught some sections using the Khan Academy Learning Dashboard and others using McGraw Hill’s LearnSmart environment.

This Fall, ASU will be using the McGraw Hill platform for Developmental Math and Pearson’s MyMathLab for College Algebra and College Math. While we also achieved good results with the Khan Academy this past year, we weren’t comfortable with our current ability to integrate the Khan product at the institutional level.

ASU is committed to the personalized adaptive approach to Freshman mathematics instruction, and we are continuously evaluating the product space to identify the tools that we feel will work best for our students.

Phil Hill: I presume this means that ASU’s usage of McGraw Hill’s LearnSmart for Developmental Math will continue and also expand to essentially replace the usage of Khan Academy. Is this correct? If so, what do you see as the impact on faculty and students involved in the course sections that previously used Khan Academy?

Adrian Sannier: That’s right Phil. Based on our experience with the McGraw Hill product we don’t expect any adverse effects.

Phil Hill: Could you further explain the comment “we weren’t comfortable with our current ability to integrate the Khan product at the institutional level”? I believe that Khan Academy’s API approach is more targeted to B2C [business-to-consumer] applications, allowing individual users to access information rather than B2B [business-to-business] enterprise usage, whereas McGraw Hill LearnSmart and others are set up for B2B usage from an API perspective. Is this the general issue you have in mind?

Adrian Sannier: That’s right Phil. We’ve found that the less cognitive load an online environment places on students the better results we see. Clean, tight integrations into the rest of the student experience result in earlier and more significant student engagement, and better student success overall.


Keep in mind that ASU is quite protective of its relationship with multiple software vendors and that they go out of their way to not publicly complain or put their partners in a bad light, even if a change is required as in MAT 110. Adrian does make it clear, however, that the key issue is the ability to integrate reliably between multiple systems. As noted in the interview, I think a related issue here is a mismatch of business models. ASU wants enterprise software applications where they can deeply integrate with a reliable API to allow a student experience without undue “cognitive load” of navigating between applications. Khan Academy’s core business model relies on people navigating to their portal on their website, and this does not fit the enterprise software model. I have not interviewed Khan Academy, but this is how it looks from the outside.

There is another point to consider here. While I can see Adrian’s argument that “we don’t expect any adverse effects” in the long run, I do think there are switching costs in the short term. As Sue McClure told me via email, as an instructor she spent significantly more time than usual on this course due to course design and ramping up the new model. In addition, ASU added 11 TAs for the course sections using Khan Academy.  These people have likely learned important lessons about supporting students in an adaptive learning setting, but a great deal of their Khan-specific time is now gone. Plus, they will need to spend time learning LearnSmart before getting fully comfortable in that environment.

Unfortunately, with the quick change, we might not see hard data to determine whether the changes were working. I believe ASU’s plans were to analyze and publish the results from this new program after the third term, which now will not happen.

If I find out more information, I’ll share it here.

  1. The terms remedial math and developmental math are interchangeable in this context.

The post ASU Is No Longer Using Khan Academy In Developmental Math Program appeared first on e-Literate.

The Hybrid World is Coming

Tanel Poder - Mon, 2015-06-29 17:14

Here’s the video of E4 keynote we delivered together with Kerry Osborne a few weeks ago.

It explains what we see is coming, at a high level, from long time Oracle database professionals’ viewpoint and using database terminology (as the E4 audience is all Oracle users like us).

However, this change is not really about Oracle database world, it’s about a much wider shift in enterprise computing: modern Hadoop data lakes and clouds are here to stay. They are already taking over many workloads traditionally executed on in-house RDBMS systems on SAN storage arrays – especially all kinds of reporting and analytics. Oracle is just one of the many vendors affected by all this and they’ve also jumped onto the Hadoop bandwagon.

However, it would be naive to think that Hadoop will somehow replace all your transactional or ERP systems, or existing application code with thousands of complex SQL reports. Many of the traditional systems aren’t going away any time soon.

But the hybrid world is coming. It’s been a very good idea for Oracle DBAs to additionally learn Linux over the last 5-10 years, now is pretty much the right time to start learning Hadoop too. More about this in a future article ;-)

Check out the keynote video here:

Enjoy :-)