
Feed aggregator

Multitenant vs. schema based consolidation

Yann Neuhaus - Tue, 2015-06-30 11:12

If you want to install multiple instances of a software product, for example when you host the ERP for several companies or subsidiaries, you have three options:

  • have one database and multiple schemas
  • have multiple databases
  • have one database and multiple pluggable databases

Of course, this is exactly the reason for pluggable databases: multitenant. You have good isolation but still share resources. A lot of reasons have been given why multiple schemas - or schema-based consolidation - is not a good solution. I don't agree with most of them. But there is one very good reason that I'll show later, and it's about cursor sharing.

Schema-based consolidation

Let's take the Oracle white paper presenting multitenancy.

Name collision might prevent schema-based consolidation

Yes, some applications have a fixed schema name. If your ERP must be installed in the SYSERP schema, then you cannot install several of them in the same database.

However, you should challenge your application provider about that before changing your whole infrastructure and buying expensive options. Maybe I'm too optimistic here, but I think it's something from the past. I remember a telco billing software I installed 15 years ago. The schema was 'PB'. It had nothing to do with the software name or the vendor name. But when I asked if I could change it, the answer was no. That schema name was hard-coded everywhere. I got it when the main developer came to visit us... his name was Pierre B.

About public synonyms, and public database links... please just avoid them.

Schema-based consolidation brings weak security

Same idea. If your application requires a 'SELECT ANY' privilege, then don't do it. In 12c you have privilege analysis, which can help you identify the minimal rights you need to grant.
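As a sketch (the capture name is hypothetical), privilege analysis is driven by the DBMS_PRIVILEGE_CAPTURE package: define a capture, run the workload, then report on the privileges actually used.

```sql
-- Define and enable a database-wide capture:
exec dbms_privilege_capture.create_capture(name => 'APP_CAPTURE', type => dbms_privilege_capture.g_database)
exec dbms_privilege_capture.enable_capture('APP_CAPTURE')
-- ... run the application workload for a representative period ...
exec dbms_privilege_capture.disable_capture('APP_CAPTURE')
exec dbms_privilege_capture.generate_result('APP_CAPTURE')
-- The DBA_USED_* views then show what was actually used:
select object_owner, object_name, obj_priv from dba_used_objprivs where capture = 'APP_CAPTURE';
```

From there you grant only what shows up, instead of a broad 'ANY' privilege.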


Per application backend point-in-time recovery is prohibitively difficult

I don't see the point. Currently, multitenant does not give us more options here, because neither pluggable database point-in-time recovery nor flashback pluggable database is currently possible in-place. But I know it's planned for the future, and you can already read about it.

Of course, when using schema-based consolidation you should use different tablespaces, and then you have TSPITR.
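For example, a tablespace point-in-time recovery with RMAN looks roughly like this (tablespace name, time and auxiliary destination are hypothetical):

```sql
RMAN> recover tablespace ERP1_DATA
        until time "to_date('2015-06-30 10:00:00','YYYY-MM-DD HH24:MI:SS')"
        auxiliary destination '/u01/aux';
```

RMAN does the auxiliary-instance plumbing for you; the other application backends in the same database stay untouched.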


Resource management between application backends is difficult

Well, you don't need pluggable databases to use services. Multitenant is just an easy way to force the application to use specific services.
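A sketch of that approach without multitenant, with hypothetical service and consumer group names (and assuming the consumer group ERP1_GROUP already exists): create one service per application backend and map it to its own Resource Manager consumer group.

```sql
begin
  -- One service per application backend:
  dbms_service.create_service(service_name => 'ERP1', network_name => 'ERP1');
  dbms_service.start_service('ERP1');
  -- Map sessions connecting through that service to their own consumer group:
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.service_name,
    value          => 'ERP1',
    consumer_group => 'ERP1_GROUP');
  dbms_resource_manager.submit_pending_area;
end;
/
```

Each application then gets the resource plan directives of its group, whether or not it lives in its own PDB.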


Patching the Oracle version for a single application backend is not possible

Yes, plugging a PDB into a different version CDB can be faster for those applications that have a lot of objects. But it is not as easy as the doc says: the PDB dictionary must be patched. It's still a good thing when the system metadata is a lot smaller than the application metadata.


Cloning a single application backend is difficult

Cloning a PDB is easy. Right. 

Finally, multitenant is nice because of pluggable databases. Do you know that every occurrence of 'multitenant' in the 12c code and documentation was 'pluggable database' one month before the release?

But wait a minute, I'm not talking about test environments here. I'm talking about consolidating similar production databases. And all the plug/unplug operations have the same problem as transportable tablespaces: the source must be made read-only.


Cursor sharing in schema-based consolidation

Time to show you the big advantage of multitenant.

10 years ago I worked on a database that had 3000 schemas. Well, we had 5 databases like that. You can think of them as specialized datamarts: same code, same data model, but different data, used by application services provided to different customers. A total of 45TB was quite nice at that time.

That was growing very fast and we had 3 issues.

Issue one was capacity planning. The growth was difficult to predict. We had to move those schemas from one database to another, from one storage system to another... It was 10g - no online datafile move at that time. Transportable tablespaces were there, but see the next point.

The second issue was the number of files. At first, each datamart had its own set of tablespaces. But >5000 datafiles in a database was too much, for several reasons. One of the reasons was RMAN. I remember a duplicate with skip tablespace that took 2 days just to initialize...

So we consolidated several datamarts into the same tablespaces. When I think about it, the multitenant database we can have today (12c) would not have been an easy solution either. Lots of pluggable databases mean lots of datafiles. I hope those RMAN issues have been fixed, but there are other ones. Did you ever try to query DBA_EXTENTS on a >5000 datafiles database? I had to, when we had some block corruption on the SAN (you know, because of issue 1 we did a lot of online reorganization of the filesystems, and the SAN software had a bug). This is where I made my alternative to DBA_EXTENTS.

Then the third issue was cursor sharing.

Let me give you an example.

I create the same table in two schemas (DEMO1 and DEMO2) of the same database.

SQL> connect demo1/demo@//
SQL> create table DEMO as select * from dual;

Table created.

SQL> select * from DEMO;


SQL> select prev_sql_id from v$session where sid=sys_context('userenv','sid');


SQL> connect demo2/demo@//
SQL> create table DEMO as select * from dual;

Table created.

SQL> select * from DEMO;


I'm in a multitenant setup here because of the second test I'll do, but both connections are to the same pluggable database, PDB1.

You see that I've executed exactly the same statement - SELECT * FROM DEMO - in both connections. Same statement, but on different tables. Let's look at the cursors:


The optimizer tried to share the same cursor. The parent cursor is the same because the SQL text is the same. Then it follows the child list in order to see if a child can be shared. But semantic verification sees that it's not the same 'DEMO' table, and it has to hard parse.
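The screenshot with the cursor details is not reproduced here, but the check can be done with a query like this sketch (the mismatch columns come from V$SQL_SHARED_CURSOR):

```sql
-- One row per child cursor; the *_MISMATCH columns explain why a child
-- could not be shared (here: same text, different DEMO tables).
select s.sql_id, s.child_number, c.auth_check_mismatch, c.translation_mismatch
from v$sql s
join v$sql_shared_cursor c
  on c.sql_id = s.sql_id and c.child_number = s.child_number
where s.sql_text = 'select * from DEMO';
```

With the two schemas above, you see two children under the same parent, the second one flagged because the object translation differs.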

The problem is not the hard parse. It's not the same table, so it's another cursor; only the name is the same.

Imagine what happened on my database where I had 3000 identical queries on different schemas. We didn't have 'perf flame graphs' at that time, or we would have seen a large flame over kkscsSearchChildList.

Looking at thousands of child cursors in the hope of finding one that can be shared is very expensive. And because it's the same parent cursor, there is high contention on the latch protecting the parent.

The solution at that time was to add a comment to the SQL statements with the name of the datamart, so that each one is a different SQL text - a different parent cursor. But that was a big change of code with dynamic SQL.
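For illustration (the datamart tag is hypothetical), the workaround looked like this:

```sql
-- Each datamart injects its own tag, so the SQL text - and therefore
-- the parent cursor and its latch - differs per datamart:
select /* DATAMART_0042 */ * from DEMO;
```

Same execution plan, but no more search through thousands of children under one parent.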

Cursor sharing in multitenant consolidation

So, in 12c, I run the same query on different pluggable databases. After the previous test, where I had two child cursors in PDB1 (CON_ID=5), I have run the same in PDB2 (CON_ID=4), and here is the view of parent and child cursors from the CDB:


We have the two child cursors from the previous test, and we have a new child for CON_ID=4.

The child number may be misleading, but the search for a shareable cursor is done only within the current container, so the same query run from another pluggable database did not try to share a previous cursor. We can see that because there is no additional 'reason' in V$SQL_SHARED_CURSOR.

SQL> select con_id,sql_id,version_count from v$sqlarea where sql_id='0m8kbvzchkytt';

    CON_ID SQL_ID        VERSION_COUNT
---------- ------------- -------------
         5 0m8kbvzchkytt             3
         4 0m8kbvzchkytt             3

V$SQLAREA is also misleading because VERSION_COUNT aggregates the versions across containers.

But the real behavior is visible in V$SQL_SHARED_CURSOR above, and if you run that with a lot of child cursors you will see the difference in CPU time, latching activity, etc.


To be clear, I'm not talking about pluggable databases in general here. Pluggable databases do not need the multitenant option, as you can plug/unplug databases in single-tenant. A pluggable database is a nice evolution of transportable database.

When it comes to multitenant - having several pluggable databases in the same container, in order to have several 'instances' of your software without multiplying the instances of your RDBMS - then here is the big point: consolidation scalability.

You can add new pluggable databases, and run the same application code on them, without increasing contention, because most of the instance resources are isolated to one container.

New Tools releases, now with Java

Kris Rice - Tue, 2015-06-30 10:28
What's New: For the 90%+ of people using sqldev/modeler on Windows, the JDK-bundled versions are back. So no more debating what to install or worrying about conflicting Java versions. Lots of bug fixes. My favorite bug is now fixed, so you can use emojis in your sql> prompt.

RESTful CSV Loading: We wrapped the same CSV import code in SQL Developer into the REST Auto-Enablement

Interaction Hub Image Now Available on the PeopleSoft Update Manager Home Page

PeopleSoft Technology Blog - Tue, 2015-06-30 10:23

As noted in a recent post, the PeopleSoft Interaction Hub is now part of the Selective Adoption Process.  You can get the first image now on the PUM home page.  (At the PUM home page, choose the PeopleSoft Update Image Home Pages tab, then select the Interaction Hub Update Image page from the drop down.)  This means customers can use the PeopleSoft Update Manager and our other life cycle tools to manage their upgrade and maintenance process for the Hub.  There is also a white paper posted there that describes the baseline customers must reach to start taking these images. 

Note that this will be the only way for customers to take maintenance and updates going forward, so we encourage everyone to move to the Selective Adoption process as soon as is feasible for your organization. This move brings the Interaction Hub in line with all other PeopleSoft applications, which use the Selective Adoption process.  This process also offers customers additional value and control, and enables you to benefit from the value of the latest features with a greatly streamlined life cycle process.  

For customers that are eager to learn more, there are many resources on Selective Adoption and PUM on the PUM home page as well as on our YouTube channel.

This first image of the Interaction Hub is functionally equivalent to the current release (9.1/Revision 3), but taking it gets you on the Selective Adoption process. Some great enhancements are coming in the next image.

July 8th: Overhead Door Corporation HCM Cloud Customer Forum

Linda Fishman Hoyle - Tue, 2015-06-30 09:38

Join us for an Oracle HCM Cloud Customer Forum on Wednesday, July 8, 2015, to hear Larry Freed, Chief Information Officer at Overhead Door Corporation. He will explain the company's desire for a massive HR transformation to include changing its benefits, payroll, core HR, employee self-service, and manager self-service. The transformation would provide the employees with a single source solution so the HR field staff could become more strategic.

During this Customer Forum call, Freed will talk about Overhead Door's selection process for new HR software, its implementation experience with Oracle HCM Cloud, and the expectations and benefits of its new modern HR system.

Register now to attend the live Forum on Wednesday, July 8, 2015, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time, and learn more directly from the CIO of Overhead Door Corporation.

U of Phoenix: Losing hundreds of millions of dollars on adaptive-learning LMS bet

Michael Feldstein - Tue, 2015-06-30 09:17

By Phil Hill

It would be interesting to read (or write) a post mortem on this project some day.

Two and a half years ago I wrote a post describing the University of Phoenix investment of a billion dollars on new IT infrastructure, including hundreds of millions of dollars spent on a new, adaptive-learning LMS. In another post I described a ridiculous patent awarded to Apollo Group, parent company of U of Phoenix, that claimed ownership of adaptive activity streams. Beyond the patent, Apollo Group also purchased Carnegie Learning for $75 million as part of this effort.

And that’s all going away, as described by this morning’s Chronicle article on the company planning to go down to just 150,000 students (from a high of 460,000 several years ago).

And after spending years and untold millions on developing its own digital course platform that it said would revolutionize online learning, Mr. Cappelli said the university would drop its proprietary learning systems in favor of commercially available products. Many Apollo watchers had long expected that it would try to license its system to other colleges, but that never came to pass.

I wonder what the company will do with the patent and with Carnegie Learning assets now that they’re going with commercial products. I also wonder who is going to hire many of the developers. I don’t know the full story, but it is pretty clear that even with a budget of hundreds of millions of dollars and adjunct faculty with centralized course design, the University of Phoenix did not succeed in building the next generation learning platform.

Update: Here is full quote from earnings call:

Fifth. We plan to move away from certain proprietary and legacy IT systems to more efficiently meet student and organizational needs over time. This means transitioning an increased portion of our technology portfolio to commercial software providers, allowing us to focus more of our time and investment on educating and student outcomes. While Apollo was among the first to design an online classroom and supporting system, in today’s world it’s simply not as efficient to continue to support complicated, custom-designed systems particularly with the newer quality systems we have more recently found with of the self providers that now exist within the marketplace. This is expected to reduce costs over the long term, increase operational efficiency and effectiveness while still very much supporting a strong student experience.

The post U of Phoenix: Losing hundreds of millions of dollars on adaptive-learning LMS bet appeared first on e-Literate.

Health Sciences Partner Support Best Practices & Resources

Chris Warticki - Tue, 2015-06-30 07:31

Thanks to all of our Health Sciences Partners that joined today's webcast on Support Best Practices and Resources.
Below is the leave-behind list of all the links to the information discussed.

First – The #1 investment is the product itself, therefore be a student of the product

OTN for Health Sciences Documentation
Healthcare Applications Training
Health Sciences Documentation
Life Sciences Applications Training
Health Sciences Knowledge Zones

Oracle University
All Product-specific landing pages
Oracle Learning Library aka, Oracle By Example – 6000+ Free Tutorials/Demos
Public list of all available webconferences
Advisor Webcast Current Schedule and Archives too from Support (ID 740966.1)
Oracle E-Business Suite Transfer of Information (TOI) courses (ID 807319.1)
Information about the functional changes in Release 12.1 and Release 12.1.x Release Update Packs (RUPs).

#2 – Remain In-the-Know from Oracle Support and Oracle Corporation

Setup Hot-Topics emails from My Oracle Support
Subscribe to available Newsletters from major product lines and technologies
Events & Webcasts Schedule and Archives
Product Support Newsletters from Oracle Support teams

#3 – Personalize My Oracle Support

Customize your Dashboard & use Powerviews

#4 – FIND it, the FIRST time, FAST!

Use the Knowledge Browser in My Oracle Support
Check out available Product Information Centers, like the one for OC/RDC
Know what Support knows, so you can tell with certainty whether you need to open a Service Request or not.

#5 – Leverage ALL the available Diagnostics tools and Scripts

Proactive Support Portfolio - Categorical List of all Tools, Diagnostics, Scripts and Best Practices (by Product Family)
Configuration Manager
Install - Remote Diagnostic Agent (RDA) for Database, Server Tech & other Products
Over 25 built-in tools and tests. Over 80 seeded profiles
Ora-600/7445 Internal Errors Tool
Performance Diagnostics Guide and Tuning Diagnostics
PL/SQL Tuning Scripts

Install - EBusiness Diagnostic Support Pack for Applications

PSFT – Change Assistant
PSFT – Change Impact Analyzer
PSFT – Performance Monitor
PSFT – Setup Manager

JDE – Change Assistant
JDE – Configuration Assistant
JDE – Support Assistant

Guardian Resource Center

SUN Systems Mgmt & Diagnostic Tools
Oracle ASR Product Page
Oracle STB Product Page
Oracle Sun System Analysis Product Page
Oracle Shared Shell Product Page
Oracle Secure File Transport
Oracle Hardware Service Request Automated Diagnosis
Oracle Validation Test Suite
PC Check
Oracle Hardware Installation Assistant
Oracle Hardware Installation Assistant Product Page
Cediag Memory DIMM Replacement Management Tool

#6 – Engage with Oracle Support

Check Configuration Manager Healthchecks and Patch Recommendations
Fill-out Service Request Templates completely
Use all Diagnostics & Data Collectors (432.1)
Upload ALL reports if logging a Service Request
Leverage Oracle Collaborative Support (web conferencing)
Better Yet – Record your issue and upload it (why wait for a scheduled web conference?)
Request Management Attention as necessary

#7 – Expand your Circles of Influence

Facebook: Oracle Health Sciences

Linkedin: Oracle in Healthcare and Life Science

Twitter: Oracle Health Sciences on Twitter


#8 – Understand Oracle Support Policies and Processes

All Technical Support Policies
Lifetime Support Policy
Oracle Support Technical Support Policies
Database, FMW, EM Grid Control and OCS Software Error Correction Policy
Ebusiness Suite Software Error Correction Policy

- Chris Warticki
#Oracle News, Info & Support

LATERAL Inline Views, CROSS APPLY and OUTER APPLY Joins in 12c

Tim Hall - Tue, 2015-06-30 07:26

I was looking for something in the New Features Manual and I had a total WTF moment when I saw this stuff.

If you look at the final section of the article, you can see in some cases these just get transformed to regular joins and outer joins, but there is certainly something else under the hood, as shown by the pipelined table function example.

I think it’s going to take me a long time before I think of using these in my regular SQL…



Update: The optimizer has used LATERAL inline views during some query transformations for some time, but they were not documented and therefore not supported for us to use directly until now. Thanks to Dominic Brooks and Sayan Malakshinov for the clarification.

LATERAL Inline Views, CROSS APPLY and OUTER APPLY Joins in 12c was first posted on June 30, 2015 at 2:26 pm.

MTOM using SoapUI and OSB

Darwin IT - Tue, 2015-06-30 06:40
MTOM (Message Transmission Optimization Mechanism) is incredibly hard... to find practical information about, for SoapUI and OSB. There are loads of articles.
But I need to process documents that are sent using MTOM to my service. And to be able to test it, I need to create a working example of a SoapUI project that does exactly that. About SoapUI and MTOM there are also loads of examples, and it is quite simple really. But I had a more complex wsdl that I was able to use for Soap with Attachments (SwA), which is also simple really. But how to connect those two in a simple working example? Well, actually, it turns out not so hard either... So bottom-line, MTOM with SoapUI and OSB is not so hard. If you know how, that is.

So let's work this out on a step-by-step basis.

XSD/WSDL

I'll start with a simple XSD:
<?xml version="1.0" encoding="windows-1252" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:element name="mtomRequest" type="MtomRequestType"/>
  <xsd:complexType name="MtomRequestType">
    <xsd:sequence>
      <xsd:element name="document" type="xsd:base64Binary"/>
    </xsd:sequence>
  </xsd:complexType>
  <xsd:element name="mtomResponse" type="MtomResponseType"/>
  <xsd:complexType name="MtomResponseType">
    <xsd:sequence>
      <xsd:element name="document" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

In JDeveloper, this looks like:
The key is the 'xsd:base64Binary' type of the request document. In the response I have a string: in this example I'll base64-encode the attachment using a Java class, just to show how to process the document. And in my actual project this is what I need to do.

The WSDL is just as easy, plain synchronous Request-Response:

<wsdl:definitions name="MTOMService" targetNamespace="" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:inp1="" xmlns:tns="" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/">
  <wsdl:types>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <xsd:import namespace="" schemaLocation="../xsd/MTOMRequestResponse.xsd"/>
    </xsd:schema>
  </wsdl:types>
  <wsdl:message name="requestMessage">
    <wsdl:part name="part1" element="inp1:mtomRequest"/>
  </wsdl:message>
  <wsdl:message name="replyMessage">
    <wsdl:part name="part1" element="inp1:mtomResponse"/>
  </wsdl:message>
  <wsdl:portType name="execute_ptt">
    <wsdl:operation name="execute">
      <wsdl:input message="tns:requestMessage"/>
      <wsdl:output message="tns:replyMessage"/>
    </wsdl:operation>
  </wsdl:portType>
  <wsdl:binding name="execute_pttSOAP11Binding" type="tns:execute_ptt">
    <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
    <wsdl:operation name="execute">
      <soap:operation style="document" soapAction=""/>
      <wsdl:input>
        <soap:body use="literal" parts="part1"/>
      </wsdl:input>
      <wsdl:output>
        <soap:body use="literal" parts="part1"/>
      </wsdl:output>
    </wsdl:operation>
  </wsdl:binding>
  <wsdl:service name="execute_ptt">
    <wsdl:port name="execute_pttPort" binding="tns:execute_pttSOAP11Binding">
      <soap:address location=""/>
    </wsdl:port>
  </wsdl:service>
</wsdl:definitions>
Did you know that in JDeveloper it is really easy to create this WSDL? Just create a SOA project, drag and drop a Web Service onto the exposed services lane, and define a wsdl as synchronous, with a request and a response message. Then open the wsdl in the wsdl editor and drag the operations to the binding pane and then the binding to the services pane:

The SoapUI Part

Now, create a new SoapUI project based on this WSDL. It turns out that SoapUI interprets this base64Binary field and creates special content:

This body refers to an attachment, that is not yet added:
Let's add an image to it by opening the 'Attachments' tab and clicking the plus-button. You can select the 'Part' to which the attachment is to be linked. Doing so will change the 'Type' into 'CONTENT'. Edit either the 'ContentID' or the id in the document-element (indicated by 'cid:') to match each other.

At this point, you can create a mock service on the request, and set the host of the mock service to 'localhost' and the path to 'MTOMService' in the mock-service editor:
Then you can right-click on the Mock-server and select 'Add endpoint to interface'.

Running the Request, will send the following message to the Mock Service:
(Although the title is 'Response 1', what you see here is the request received by the Mock Service.)
Apparently SoapUI base64 encoded the attachment and embedded it into the document-element.

Now you can enable MTOM on the request. Select the Request and go to the properties pane:
When running the request again, SoapUI won't base64-encode the attachment but will send it as a compressed MIME/Multipart attachment, with a reference in the document:
In the http-log you'll find:
POST /MTOMService HTTP/1.1
Accept-Encoding: gzip,deflate
Content-Type: multipart/related; type="application/xop+xml"; start="<>"; start-info="text/xml"; boundary="----=_Part_11_531670487.1435664879005"
SOAPAction: ""
MIME-Version: 1.0
Content-Length: 39605
Host: localhost:8080
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)


Content-Type: application/xop+xml; charset=UTF-8; type="text/xml"

Content-Transfer-Encoding: 8bit

Content-ID: <>

<soapenv:Envelope xmlns:soapenv="" xmlns:mtom="">
<mtom:document><inc:Include href="cid:915251933163" xmlns:inc=""/></mtom:document>


Content-Type: image/jpeg; name=SoapUIMTOMRequest.jpg

Content-Transfer-Encoding: binary

Content-ID: <915251933163>

Content-Disposition: attachment; name="SoapUIMTOMRequest.jpg"; filename="SoapUIMTOMRequest.jpg"


[0x14][0xe][0xf][0xc][0x10][0x17][0x14][0x18][0x18][0x17][0x14][0x16][0x16][0x1a][0x1d]%[0x1f][0x1a][0x1b]#[0x1c][0x16][0x16] , #&')*)[0x19][0x1f]-0-(0%()([0xff][0xdb][0x0]C[0x1][0x7][0x7][0x7]

[0x13]([0x1a][0x16][0x1a](((((((((((((((((((((((((((((((((((((((((((((((((([0xff][0xc0][0x0][0x11][0x8][0x0][0xdc][0x3]7[0x3][0x1]"[0x0][0x2][0x11][0x1][0x3][0x11][0x1][0xff][0xc4][0x0][0x1b][0x0][0x1][0x0][0x2][0x3][0x1][0x1][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x5][0x6][0x2][0x3][0x4][0x1][0x7][0xff][0xc4][0x0]P[0x10][0x0][0x1][0x2][0x5][0x2][0x3][0x4][0x5][0x8][0x6][0x8][0x4][0x4][0x6][0x3][0x0][0x1][0x2][0x3][0x0][0x4][0x5][0x11][0x12][0x13]![0x6]"#[0x14][0x15]1QAVa[0x95][0xd2][0x7]2[0x81][0x92][0xa5][0xb3][0xd3][0xd4][0x16]37Ru[0x94]45Bqt[0xb1][0xb4][0xd1]$6r[0x91]%Dbs[0x17]&C[0xa1][0xc1][0xe1]cd[0xb2][0xff][0xc4][0x0][0x19][0x1][0x1][0x1][0x1][0x1][0x1][0x1][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x0][0x1][0x2][0x3][0x4][0x5][0xff][0xc4][0x0]5[0x11][0x0][0x1][0x3][0x0][0x8][0x4][0x6][0x2][0x2][0x2][0x2][0x3][0x1][0x0][0x0][0x0][0x0][0x1][0x2][0x11][0x3][0x12]!1AQa[0xf0][0x4][0x14]q[0xc1][0x13]"[0x81][0x91][0xa1][0xd1][0xb1][0xe1]2[0xf1]Bb[0x5][0x92]#r[0xd2]R[0xff][0xda][0x0][0xc][0x3][0x1][0x0][0x2][0x11][0x3][0x11][0x0]?[0x0][0xfb];[0xf2][0x8d]O|[0xa3][0x19][0x19][0x85]L[0x9]4[0xd2];@i[0x99][0x87][0x19][0x19][0x87][0xad][0x97]![0x1b][0xd8][0xda];;[0xbf][0x87][0xae][0xa0]{[0xc0][0x14][0xa8][0xa0][0x85]O[0xcd][\r][0xc1][0xb1][0xf1]W[0xb3][0xc7][0xd3][0xe3][0x1a][0xd8][0xfd][0xaa][0xb9][0xfc][0x8][0xfd][0xfc]j[0x9d][0xfe][0x98][0xff][0x0][0xfe][0xe2][0xbf][0xce]=[0x1c]=[0x12]R[0xaa][0xa2][0x9c])[0xe9]V[0x8d][0x11]P[0xea][0xee][0xfe][0x1d][0xf3][0x9f][0xf7][0x84][0xd7][0xc7][0xe][0xef][0xe1][0xdf]9[0xff][0x0]xM|q[E~[0x9a][0xb9][0x99][0xb6][0xb5][0xd6][0x94][0xca][0xa5]ky[0xf5][0xb2][0xb4][0xcb][0xa4] [0xd9]}b4[0xc9]I[0xb8] 
*[0xe0][0xa5]W[0xf9][0xa6][0xdd][0x14][0xba][0x94][0xad]R]OI[0xad]d!E[0xb]C[0x8d])[0xa7][0x1b]U[0x81][0xb2][0x90][0xb0][0x14][0x93]b[\r][0x88][0x17][0x4][0x1f][0x2][0xc]z[0xf9]&[0xea]y[0xf9][0x9a]D[0xc0][0x9c][0xee][0xfe][0x1d][0xf3][0x9f][0xf7][0x84][0xd7][0xc7][0xe][0xef][0xe1][0xdf]9[0xff][0x0]xM|q^[0xab][0xd7])4].[0xf8][0xaa]HHk_O[0xb5]L![0xac][0xed]k[0xdb]"/k[0x8f][0xf]1[0x1d]4[0xf9][0xe9]J[0x94][0x9b]st[0xe9][0xa6]&[0xe5]\[0xbe][0xf]0[0xe0]q
[0xb1] [0xd9]Ccb[0x8][0xfa]#<[0xa5][0x1c][0xc4][0xa8][0xe6][0x9f]|[0x13][0x1d][0xdf][0xc3][0xbe]s[0xfe][0xf0][0x9a][0xf8][0xe1][0xdd][0xfc];[0xe7]?[0xef][0x9][0xaf][0x8e]#[0xe1][0x17][0x93]fjNm[0xf9]!![0xdd][0xfc];[0xe7]?[0xef][0x9][0xaf][0x8e][0x1d][0xdf][0xc3][0xbe]s[0xfe][0xf0][0x9a][0xf8][0xe2]>[0x10][0xe4][0xd9][0x9a][0x8e]m[0xf9]!![0xdd][0xfc];[0xe7]?[0xef][0x9][0xaf][0x8e][0x1d][0xdf][0xc3][0xbe]s[0xfe][0xf0][0x9a][0xf8][0xe2]>[0x10][0xe4][0xd9][0x9a][0x8e]m[0xf9]!![0xdd][0xfc];[0xe7]?[0xef][0x9][0xaf][0x8e][0x1d][0xdf][0xc3][0xbe]s[0xfe][0xf0][0x9a][0xf8][0xe2][0x1d][0xf9][0xe9]F'%[0xa5][0x1f][0x9a]a[0xa9][0xa9][0xac][0xb4][0x19][[0x81]+w[0x11]u`[0x93][0xba][0xac]76[0xf0][0x8d][0xb3]

Here I removed all the new-line and timing codes, for readability. This is what actually goes 'over the line'.
The OSB Part

Now we're ready for the OSB part. Create a new OSB project and add the wsdl and xsd to it. If you created the wsdl in JDeveloper, like I did, you can create the OSB project with the same name in the same folder as the JDeveloper project.

Create a new Proxy Service, and name it 'MTOMService' for instance. Base it on the MTOMService wsdl, created above.
I added a Pipeline, with stages and alerts to log the $attachments and $body variables. However, it turns out that since we're using MTOM via a base64Binary-element, the Attachments variable is empty. The body variable contains the message as seen in SoapUI.

Now, the most interesting part here is: 'How to get to the attachment-content?' Using 'Soap with Attachments' (SwA), the $attachments variable gives access to the binary content, with an expression like:
Where 'ctx:' is an internal namespace of OSB:

But since the $attachments is empty, this won't work. It is the base64Binary element that gives access to the content, in just the same way. So the expression is:
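The screenshots with the exact expressions are not reproduced here; as a sketch (the 'mtom' prefix binding to the request namespace is an assumption), the two expressions look like:

```xquery
(: SwA: binary content via the $attachments variable and the OSB 'ctx' namespace :)
$attachments/ctx:attachment/ctx:body/ctx:binary-content

(: MTOM via a base64Binary element: the content is reached through $body instead :)
$body/mtom:mtomRequest/mtom:document
```

The only real difference is the starting variable; the element you end up with gives access to the binary content either way.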

I added an assign with this expression to a separate variable called 'documentBin'.

Then I added a Java Callout to my Base64-encoding method. For this I used the class described in my previous article. I jarred it and added the jar to my project. The input of this method is a 'byte[] bytes' and the output is a 'String', for which I used the variable 'documentB64'. Then I added a replace with the following to pass back the response:
<mtom:mtomResponse xmlns:mtom="">
  <mtom:document>{$documentB64}</mtom:document>
</mtom:mtomResponse>
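The encoding class itself lives in the previous article; a minimal sketch of such a helper, assuming Java 8's java.util.Base64 and a hypothetical class name, could look like:

```java
import java.util.Base64;

// Hypothetical name; the real class is described in the previous article.
public class Base64Encoder {
    // Input: the raw attachment bytes (the 'byte[] bytes' argument of the Java Callout),
    // output: the base64 String assigned to 'documentB64' in the pipeline.
    public static String encode(byte[] bytes) {
        return Base64.getEncoder().encodeToString(bytes);
    }
}
```

Any static method with this shape can be wired into an OSB Java Callout the same way.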

Then, an important setting: enable MTOM. Go to the Message Handling tab of the proxy service:
Check the 'Enabled' box of 'XOP/MTOM Support'. Leave the radio-button at 'Include Binary Data by Reference'. Save the proxy service.
The proof in the eating

Now, publish it to a running OSB server and change the Endpoint URL within SoapUI to the OSB service.
Running the SoapUI Request via OSB results in the following response:
<soapenv:Envelope xmlns:soapenv="">
  <soapenv:Header xmlns:mtom=""/>
  <soapenv:Body xmlns:mtom="">
  ...
  </soapenv:Body>
</soapenv:Envelope>

The Alert of the documentB64 variable shows:
Conclusion

I spent quite some time searching the internet for usable articles on SoapUI, OSB and MTOM. But in the end, writing this article cost me more time than implementing the solution. I hope this article can be rightfully categorized in my 'FMW made Simple' series.

Downloads

I made my projects downloadable via:

Make even more of UKOUG Tech15: APEX 5.0 UI Training - Dec 10th in Birmingham

Dimitri Gielis - Tue, 2015-06-30 00:48

APEX 5.0 has been released this spring. People who have already spent some time on this new version know this version is packed with new features aimed to make APEX developers even more productive, like the Page Designer.
Another striking new subset of features is aimed at creating better-looking user interfaces for your APEX applications in an easy and maintainable way. The definition of user interface components in APEX 5.0 is very different from what we're used to. For example, there is a new Universal Theme with Template Options and a Theme Roller. To get you up and running with this new toolset as quickly as possible, Dimitri Gielis of APEX R&D and Roel Hartman of APEX Consulting have joined forces and set up a one-day course fully aimed at the APEX 5.0 UI. So if you want to know not only how to use the new theme, but also how to modify it to fit your needs, this is the event you should attend!
The training will be at the Jury's Inn in Birmingham (UK) on Thursday, Dec 10 - conveniently immediately after the UKOUG Tech15 conference. More information and registration:
If you are from another country and think this training should be available in your country as well, please contact us - then we'll see what we can do!
Categories: Development

ReConnect 2015

Jim Marion - Mon, 2015-06-29 17:43

It is just a little less than a month until the PeopleSoft ReConnect conference in Rosemont, Illinois. I will be presenting PeopleTools Developer: Tips and Techniques on Thursday from 11:30 AM to 12:20 PM in Grand Ballroom H.

ASU Is No Longer Using Khan Academy In Developmental Math Program

Michael Feldstein - Mon, 2015-06-29 17:37

By Phil Hill

In these two episodes of e-Literate TV, we shared how Arizona State University (ASU) started using Khan Academy as the software platform for a redesigned developmental math course[1] (MAT 110). The program was designed in Summer 2014 and ran through Fall 2014 and Spring 2015 terms. Recognizing the public information shared through e-Literate TV, ASU officials recently informed us that they had made a programmatic change and will replace their use of Khan Academy software with McGraw-Hill’s LearnSmart software that is used in other sections of developmental math.

To put this news in context, here is the first episode’s mention of Khan Academy usage.

Phil Hill: The Khan Academy program that you’re doing, as I understand, it’s for general education math. Could you give just a quick summary of what the program is?

Adrian Sannier: Absolutely. So, for the last three-and-a-half years, maybe four, we have been using a variety of different computer tutor technologies to change the pedagogy that we use in first-year math. Now, first-year math begins with something we call “Math 110.” Math 110 is like if you don’t place into either college algebra, which has been the traditional first-year math course, or into a course we call “college math,” which is your non-STEM major math—if you don’t place into either of those, then that shows you need some remediation, some bolstering of some skills that you didn’t gain in high school.

So, we have a course for that. Our first-year math program encompasses getting you to either the ability to follow a STEM major or the ability to follow majors that don’t require as intense of a math education. What we’ve done is create an online mechanism to coach students. Each student is assigned a trained undergraduate coach under the direction of our instructor who then helps that student understand how to use the Khan Academy and other tools to work on the skills that they show deficit in and work toward being able to satisfy the very same standards and tests that we’ve always used to ascertain whether a student is prepared for the rest of their college work.

Luckily, the episode on MAT 110 focused mostly on the changing roles of faculty members and TAs when using an adaptive software approach, rather than focusing on Khan Academy itself. After reviewing the episode again, I believe that it stands on its own and is relevant even with the change in software platform. Nevertheless, I appreciate that ASU officials were proactive in letting me know about this change, so that we can document it here and in e-Literate TV transmedia.

The Change

Since the change has not been shared outside of this notification (limiting my ability to do research and analysis), I felt the best approach would be to again interview Adrian Sannier, Chief Academic Technology Officer at ASU Online. Below is the result of an email interview, followed by short commentary [emphasis added].

Phil Hill: Thanks for agreeing to this interview to update plans on the MAT 110 course featured in the recent e-Literate TV episode. Could you describe the learning platforms used by ASU in the new math programs (MAT 110 and MAT 117 in particular) as well as describe any changes that have occurred this year?

Adrian Sannier: Over the past four years, ASU has worked with a variety of different commercially available personalized math tutors from Knewton, Pearson, McGraw Hill and the Khan Academy applied to 3 different courses in Freshman Math at ASU – College Algebra, College Math and Developmental Math. Each of these platforms has strengths and weaknesses in practice, and the ASU team has worked closely with the providers to identify ways to drive continuous improvement in their use at ASU.

This past year ASU used a customized version of Pearson’s MyMathLab as the instructional platform for College Algebra and College Math. In Developmental Math, we taught some sections using the Khan Academy Learning Dashboard and others using McGraw Hill’s LearnSmart environment.

This Fall, ASU will be using the McGraw Hill platform for Developmental Math and Pearson’s MyMathLab for College Algebra and College Math. While we also achieved good results with the Khan Academy this past year, we weren’t comfortable with our current ability to integrate the Khan product at the institutional level.

ASU is committed to the personalized adaptive approach to Freshman mathematics instruction, and we are continuously evaluating the product space to identify the tools that we feel will work best for our students.

Phil Hill: I presume this means that ASU’s usage of McGraw Hill’s LearnSmart for Developmental Math will continue and also expand to essentially replace the usage of Khan Academy. Is this correct? If so, what do you see as the impact on faculty and students involved in the course sections that previously used Khan Academy?

Adrian Sannier: That’s right Phil. Based on our experience with the McGraw Hill product we don’t expect any adverse effects.

Phil Hill: Could you further explain the comment “we weren’t comfortable with our current ability to integrate the Khan product at the institutional level”? I believe that Khan Academy’s API approach is more targeted to B2C [business-to-consumer] applications, allowing individual users to access information rather than B2B [business-to-business] enterprise usage, whereas McGraw Hill LearnSmart and others are set up for B2B usage from an API perspective. Is this the general issue you have in mind?

Adrian Sannier: That’s right Phil. We’ve found that the less cognitive load an online environment places on students the better results we see. Clean, tight integrations into the rest of the student experience result in earlier and more significant student engagement, and better student success overall.


Keep in mind that ASU is quite protective of its relationship with multiple software vendors and that they go out of their way to not publicly complain or put their partners in a bad light, even if a change is required as in MAT 110. Adrian does make it clear, however, that the key issue is the ability to integrate reliably between multiple systems. As noted in the interview, I think a related issue here is a mismatch of business models. ASU wants enterprise software applications where they can deeply integrate with a reliable API to allow a student experience without undue “cognitive load” of navigating between applications. Khan Academy’s core business model relies on people navigating to their portal on their website, and this does not fit the enterprise software model. I have not interviewed Khan Academy, but this is how it looks from the outside.

There is another point to consider here. While I can see Adrian’s argument that “we don’t expect any adverse effects” in the long run, I do think there are switching costs in the short term. As Sue McClure told me via email, she spent significantly more time than usual on this course as an instructor, due to course design and ramping up the new model. In addition, ASU added 11 TAs for the course sections using Khan Academy. These people have likely learned important lessons about supporting students in an adaptive learning setting, but a great deal of their Khan-specific time is now gone. Plus, they will need to spend time learning LearnSmart before getting fully comfortable in that environment.

Unfortunately, with the quick change, we might not see hard data to determine whether the changes were working. I believe ASU’s plan was to analyze and publish the results from this new program after the third term, which now will not happen.

If I find out more information, I’ll share it here.

  1. The terms remedial math and developmental math are interchangeable in this context.

The post ASU Is No Longer Using Khan Academy In Developmental Math Program appeared first on e-Literate.

The Hybrid World is Coming

Tanel Poder - Mon, 2015-06-29 17:14

Here’s the video of E4 keynote we delivered together with Kerry Osborne a few weeks ago.

It explains, at a high level, what we see coming from long-time Oracle database professionals’ viewpoint, using database terminology (as the E4 audience is all Oracle users like us).

However, this change is not really about Oracle database world, it’s about a much wider shift in enterprise computing: modern Hadoop data lakes and clouds are here to stay. They are already taking over many workloads traditionally executed on in-house RDBMS systems on SAN storage arrays – especially all kinds of reporting and analytics. Oracle is just one of the many vendors affected by all this and they’ve also jumped onto the Hadoop bandwagon.

However, it would be naive to think that Hadoop will somehow replace all your transactional or ERP systems, or existing application code with thousands of complex SQL reports. Many of the traditional systems aren’t going away any time soon.

But the hybrid world is coming. It’s been a very good idea for Oracle DBAs to additionally learn Linux over the last 5-10 years, now is pretty much the right time to start learning Hadoop too. More about this in a future article ;-)

Check out the keynote video here:

Enjoy :-)

Basics of Patching in Oracle Apps (adpatch)

Online Apps DBA - Mon, 2015-06-29 15:09


Whenever a patch request comes in, the first and foremost thing an Oracle Apps DBA has to do is check the existing system to see if the patch is already there. We can query ad_bugs: log in to SQL*Plus with the APPS user and run the command below.

SQL> select bug_number, creation_date from apps.ad_bugs where bug_number in ('&bug_number');


Enter the patch number; if you see any rows, the patch is already in the system and you can go ahead and tell the business that it already exists. You will see something like this.

But if you see no rows returned, then you have to set the ball rolling. Now you will have to perform the patch analysis of requested patch.

The next step is to log in to Oracle Support with your credentials and open the README of the patch. It has a prerequisites section stating whether any prerequisite patch must be applied first. If there is a prerequisite, open the README of that patch and check its prerequisites in turn, and this process goes on until there are no more prerequisites.

From my personal experience I would suggest to prepare a template like below to do the analysis of the patch.

Now let’s understand the example given above: the main patch requested is 123456; this patch has a prerequisite 67890, 67890 has a prerequisite 8585858, and that has a prerequisite 8686868.

So to apply the main patch we have to

a) First apply 8686868 and
b) Then 8585858 and
c) Then 67890 and then the main patch.
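
The prerequisite chain above is just a depth-first walk: follow each patch down to its deepest prerequisite, then apply from the bottom up. A minimal sketch in Python (the patch numbers and the single-prerequisite mapping are purely illustrative; real patches can have several prerequisites):

```python
def apply_order(patch, prereq_of):
    """Return the order in which patches must be applied:
    deepest prerequisite first, the requested patch last."""
    chain = []
    while patch is not None:
        chain.append(patch)           # walk down the prerequisite chain
        patch = prereq_of.get(patch)  # None when there is no prerequisite
    return list(reversed(chain))      # apply from the bottom up

# Hypothetical chain from the example: 123456 <- 67890 <- 8585858 <- 8686868
prereq_of = {"123456": "67890", "67890": "8585858", "8585858": "8686868"}
print(apply_order("123456", prereq_of))
# -> ['8686868', '8585858', '67890', '123456']
```

This bottom-up order is exactly what the a)/b)/c) steps above spell out by hand.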

So now you will send this analysis back to the business and request downtime. Downtime is estimated on the basis of your experience.

I assume that you have received confirmation from the business to apply the patch. Download the patch into your patch top directory and unzip the file. After unzipping you will see a driver file like u123456.drv. When you run adpatch (in 12.1) from this location, it will ask you for the name of the driver file, and you give it u123456.drv.

Now something about file systems. There are basically two types:

1)Shared file systems
2)Distributed file systems

In my environment, I have a shared file system and there are multiple web nodes. With a shared file system, patches have to be applied on one node only.

So let us assume that we have 3 application nodes and a non-RAC DB server, and that the patch is available only in American English with no other languages installed on the application.

Steps for patching (EBS 12.1) would be

  • Shut down the application on all the 3 nodes by logging into each node separately.
  • From adadmin put the application into maintenance mode
  • Take the count of invalid objects by logging in to SQL*Plus with the APPS user
  • Use adpatch to apply patches to the application.


  • Again check the count of invalid objects in the database and compare it with the pre-patch invalid count.
  • From adadmin disable the maintenance mode
  • Start the application on all the 3 nodes

Please don’t forget that for any operation to take place in the app, DB has to be up and running.


Please note that before doing any kind of patching activity, ask the Unix team to back up the file systems, because we can’t roll back a patch applied using adpatch.


We will discuss more about patching in my next blog. If you have any comments or queries, post them here.

Related Posts for R12 Patches
  1. Basics of Patching in Oracle Apps (adpatch)

The post Basics of Patching in Oracle Apps (adpatch) appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Google Classroom Addresses Major Barrier To Deeper Higher Ed Adoption

Michael Feldstein - Mon, 2015-06-29 11:28

By Phil HillMore Posts (333)

A year ago I wrote about Google Classroom, speculating whether it would affect the institutional LMS market in higher education. My initial conclusion:

I am not one to look at Google’s moves as the end of the LMS or a complete shift in the market (at least in the short term), but I do think Classroom is significant and worth watching. I suspect this will have a bigger impact on individual faculty adoption in higher ed or as a secondary LMS than it will on official institutional adoption, at least for the next 2 – 3 years.

And my explanation [emphasis added]:

But these features are targeted at innovators and early adopter instructors who are willing to fill in the gaps themselves.

  1. The course creation, including setting up of rosters, is easy for an instructor to do manually, but it is manual. There has been no discussion that I can find showing that the system can automatically create a course, including roster, and update over the add / drop period.

  2. There is no provision for multiple roles (student in one class, teacher in another) or for multiple teachers per class.
  3. The integration with Google Drive, especially with Google Docs and Sheets, is quite intuitive. But there is no provision for PDF or MS Word docs or even publisher-provided courseware.
  4. There does not appear to be a gradebook – just grading of individual assignments. There is a button to export grades, and I assume that you can combine all the grades into a custom Google Sheets spreadsheet or even pick a GAFE gradebook app. But there is no consistent gradebook available for all instructors within an institution to use and for students to see consistently.

Well today Google announced a new Google Classroom API that directly addresses the limitation in bullet #1 above and indirectly addresses #4.

The Classroom API allows admins to provision and manage classes at scale, and lets developers integrate their applications with Classroom. Until the end of July, we’ll be running a developer preview, during which interested admins and developers can sign up for early access. When the preview ends, all Apps for Education domains will be able to use the API, unless the admin has restricted access.

By using the API, admins will be able to provision and populate classes on behalf of their teachers, set up tools to sync their Student Information Systems with Classroom, and get basic visibility into which classes are being taught in their domain. The Classroom API also allows other apps to integrate with Classroom.

Google directly addresses the course roster management in their announcement; in fact, this appears to be the primary use case they had in mind. I suspect this by itself will have a big impact in the K-12 market (would love to hear John Watson’s take on this if he addresses in his blog), making it far more manageable for district-wide and school-wide Google Classroom adoptions.

The potential is also there for a third party to develop and integrate a viable grade book application available to an entire institution. While this could partially be done by the Google Apps for Education (GAFE) ecosystem, that is a light integration that doesn’t allow deep connection between learning activities and grades. The new API should allow for deeper integrations, although I am not sure how much of the current Google Classroom data will be exposed.

I still do not see Google Classroom as a current threat to the higher ed institutional LMS market, but it is getting closer. Current ed tech vendors should watch these developments.

Update: Changed Google Apps for Education acronym from GAE to GAFE.

The post Google Classroom Addresses Major Barrier To Deeper Higher Ed Adoption appeared first on e-Literate.

Prepare for the Leap Second

Pythian Group - Mon, 2015-06-29 10:37

Catch up on how to handle the Leap Second and whether you’re ready for it with our previous updates on the impacts it will have on Cassandra and Linux.


Background Information

A leap second will be inserted at the end of June 30, 2015 at 23:59:60 UTC. 

There is a small time difference between the atomic clock and astronomical time (which is based on the rotation of earth). Rotation of earth is slowing down.

To keep these times synchronized, one second will be inserted – a leap second – so that both clocks line up again. This will happen at midnight UTC between June 30th and July 1st (in UTC, the same as the GMT time zone, not in local time). After 23 hours 59 minutes 59 seconds, the time will become 23 hours 59 minutes 60 seconds.

Since this system of correction was implemented in 1972, 26 such leap seconds have been inserted. The most recent one happened on June 30, 2012 at 23:59:60 UTC.

Unlike daylight savings time, which shifts the timezone information and does not alter the underlying UTC time clock on which servers work, a leap-second change is an actual change in the UTC time value. Usually, UTC time is continuous and predictable, but the leap second breaks this normal continuity requiring it to be addressed.
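
Most software stacks cannot even represent that extra second: POSIX/Unix time pretends leap seconds do not exist, and typical date libraries reject a seconds value of 60. Python's datetime, used here purely as an illustration, shows both points:

```python
from datetime import datetime, timezone

# The leap second instant, 2015-06-30 23:59:60 UTC, is not representable:
try:
    datetime(2015, 6, 30, 23, 59, 60, tzinfo=timezone.utc)
except ValueError as e:
    print("rejected:", e)

# The instants on either side are one POSIX second apart, as if no leap
# second existed at all -- which is exactly why the correction has to be
# handled at the OS/NTP layer rather than in application code.
before = datetime(2015, 6, 30, 23, 59, 59, tzinfo=timezone.utc)
after = datetime(2015, 7, 1, 0, 0, 0, tzinfo=timezone.utc)
print((after - before).total_seconds())  # 1.0
```
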


What You Need to Know – Summary

The June 2015 leap second event is the addition of one second to the atomic clock on June 30, 2015. Pythian has researched the implications that the upcoming leap second insertion may have and presents the relevant information to its clients and the wider community.

At the operating system level:

  • Windows and AIX servers are not affected by this issue.
  • Linux servers using NTP (network time protocol) may be affected, potentially causing error messages, server hangs or 100% CPU utilization. There are a series of patches and workarounds available, depending upon the needs of the components running on the Linux server.
  • HP-UX servers have NTP patches released in Q2 2015.

For databases and other software components:

  • Java programs are at risk of generating endless error loops, spiking CPU utilization. Patches are available.
  • Databases generally obtain time-stamps from the server OS, so those running on Linux have potential issues. For most, there are no additional corrections necessary.
  • Oracle databases have minimal additional risk. Oracle clustered environments and java-based administration tools should be reviewed and corrective actions taken.
  • Microsoft SQL Server databases have no risk but may expose minor application issues on data granularity and error handling.
  • Open source databases should be reviewed for Java risks. Updated kernels are available.
  • Cisco UCS environments should be reviewed. Patches are available.

Symptoms from the leap second event may persist for up to a day before and after the leap second event, as server NTP service updates are provided.

For all environments, a complete assessment and planning for your systems should be performed. The Pythian team would be pleased to help you perform this assessment and complete the planning necessary to ensure your systems can handle the leap second event in 2015. Get started by reviewing the full Leap Second Report.


Categories: DBA Blogs

Multisection Backup for Image Copies

The Oracle Instructor - Mon, 2015-06-29 10:30

A nice Oracle Database 12c New Feature enhances the multisection backup, introduced in 11g: You can use it now for image copies also!

Multisection Backup for an Image Copy


RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name PRIMA

List of Permanent Datafiles
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    347      SYSTEM               YES     /u01/app/oracle/oradata/prima/system01.dbf
2    235      SYSAUX               NO      /u01/app/oracle/oradata/prima/sysaux01.dbf
3    241      UNDOTBS1             YES     /u01/app/oracle/oradata/prima/undotbs01.dbf
4    602      USERS                NO      /u01/app/oracle/oradata/prima/users01.dbf

List of Temporary Files
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    40       TEMP                 32767       /u01/app/oracle/oradata/prima/temp01.dbt

RMAN> configure device type disk parallelism 2;

new RMAN configuration parameters:
new RMAN configuration parameters are successfully stored

RMAN> backup section size 301m as copy datafile 4;

Starting backup at 29-JUN-15
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/prima/users01.dbf
backing up blocks 1 through 38528
channel ORA_DISK_2: starting datafile copy
input datafile file number=00004 name=/u01/app/oracle/oradata/prima/users01.dbf
backing up blocks 38529 through 77056
output file name=/u02/fra/PRIMA/datafile/o1_mf_users_bs2v934z_.dbf tag=TAG20150629T180658
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
output file name=/u02/fra/PRIMA/datafile/o1_mf_users_bs2v934z_.dbf tag=TAG20150629T180658
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:15
Finished backup at 29-JUN-15

RMAN> host 'ls -rtl /u02/fra/PRIMA/datafile/';

total 616468
-rw-r-----. 1 oracle oinstall 631250944 Jun 29 18:07 o1_mf_users_bs2v934z_.dbf
host command complete

Note that 12c now uses backup sets by default, also for the DUPLICATE DATABASE command – even though that command ultimately produces image copies, of course.
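
As a side note, the block ranges in the log above are simply the 602 MB datafile divided into SECTION SIZE 301m chunks. A quick sanity check of that arithmetic in Python (assuming the default 8 KB database block size):

```python
MB = 1024 * 1024
block_size = 8192                      # assumed default db_block_size
file_blocks = 602 * MB // block_size   # users01.dbf is 602 MB
section_blocks = 301 * MB // block_size

# Two sections, matching "blocks 1 through 38528" and "38529 through 77056"
sections = [(start + 1, min(start + section_blocks, file_blocks))
            for start in range(0, file_blocks, section_blocks)]
print(sections)  # [(1, 38528), (38529, 77056)]
```

Each channel backs up one section in parallel, which is why both channels report the same output file name.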

Tagged: 12c New Features, Backup & Recovery, RMAN
Categories: DBA Blogs

Simplify Enterprise Mobility with Mobile Application Framework

WebCenter Team - Mon, 2015-06-29 10:14
By Mitchell Palski, Oracle WebCenter Sales Consultant
We are happy to have Mitchell Palski joining us on the blog for a Q&A around how you can Simplify Enterprise Mobility with Mobile Application Framework.
Q. How can Mobile Application Framework deliver Secure, Seamless Mobile for the Enterprise? For many years, corporate IT departments looked to the desktop as the only way to present information from their corporate enterprise applications. With the advent and exponential growth in mobile computing, applications are no longer tethered to the desktop; users expect to be able to switch among desktops, tablets, or smartphones anytime, anywhere. Multi-channel, mobile environments are becoming the new normal. Oracle helps simplify your organization’s transition into a mobile offering by providing a comprehensive platform for developing a mobile solution.
Q. Is it possible to Develop Cross-Platform Mobile Applications with Mobile Application Framework? Based on a hybrid architecture, Oracle MAF lets you build apps that are portable across devices and operating systems while still leveraging the device specific capabilities that deliver a rich user experience. 
Applications developed with Oracle MAF can be designed for phone and/or tablet form factors and packaged for either Apple iOS or Google Android – all from a single code base. Oracle MAF leverages the power of Java, HTML5, and JavaScript in a visual and declarative development environment to provide development teams with a more efficient approach to building on-device mobile apps. 
Oracle MAF end users will realize benefits from native apps that can work in both connected and disconnected mode, access device services, and store data in a local SQLite database.
Q. We’ve been hearing a lot about Digital Experience in the market today. How can Oracle Mobile Application Framework be leveraged to deliver optimized user Experiences? The key word in the question is “how”. Anyone can develop something using any number of frameworks, but what separates Oracle Mobile Application Framework? How can MAF not only accelerate your development lifecycle, but also enhance the product you are ultimately delivering to your users?

Oracle MAF is designed to increase developers’ productivity and enable intuitive mobile application development by offering extensive out-of-the-box capabilities and a declarative integrated environment. By easing the learning curve of app development, Oracle MAF allows developers to focus on user experience and get the most out of their organization’s mobile offering.

Mobile Application Framework includes a library of more than 80 professionally developed components that can be used to create rich mobile application interfaces in a declarative way. Oracle MAF components were designed specifically for mobile devices, which means they include support for touch and swipe gestures and are “skinned” to look great on mobile form factors. Developers also have the ability to quickly and declaratively integrate with local device services and features, such as camera, phone, SMS, contacts and GPS, through the declarative binding layer.

Along with UX development, Oracle MAF supports the development of applications that can work offline as well as online. Applications are self-contained and can run on the mobile device in both connected and disconnected mode, which means users are never stranded when they lose internet access.

In addition to the component based user interfaces, Oracle MAF can incorporate local HTML5 pages into the same application. This enables developers who prefer direct coding of the UI to incorporate their expertise along with third party components and code-libraries to create features in the application while keeping the ability to leverage the Oracle MAF container’s services.
Q. Integration is a common concern among business and IT today. Can you Integrate Data and Services across the Mobile Enterprise? Integration is one of the leading challenges of mobile application development. Oracle Mobile Platform supports and utilizes standard technologies and tools to expose many data formats and back-end business systems for exchange with any mobile application, primarily through the use of web services.

Web services provide the ability for a publisher of business services or content to provide that content and data to a consumer in a standardized and loosely coupled manner. In this case our MAF-developed mobile app is the consumer. SOAP or REST services can both be used by Oracle MAF through the use of data controls. Data controls create a level of abstraction over a business service that gives developers a consistent interface into all available business services, whether they are web-services or not.

The key to integration with an enterprise is having those consumable services available to your mobile application. Many customers rush into mobile development without focusing on the business functions that their app will actually provide. A slick interface that incorporates HTML5 and CSS3, ties into mobile device tools, and has offline functionality still serves little to no purpose without the availability of a meaningful service-oriented architecture. The only case where this might not be true is when you are developing a dictionary or glossary type app that is used purely as an index for users to reference.

Q. What about Connectivity? Can you Simplify Mobile Connectivity in the Cloud?
Separate from Mobile Application Framework, Oracle Mobile Cloud Service provides you with the tools you need to develop a strategy for supporting your mobile development. It provides out-of-the-box services that every mobile app requires, plus the ability to define and implement new enterprise-ready APIs quickly and cleanly. All API calls from your client applications are made via uniform REST calls, creating a cohesive development environment that’s easy to control and maintain.

Q. Finally, security is also a common concern when deploying anything on multiple devices. Can Mobile Application Framework deliver Secure Mobility Across All Layers? Security is a top priority for mobile application development given that mobile devices have higher risks of loss or theft. Oracle Mobile Application Framework comes with built in security that can limit access to your applications and ensure encryption of sensitive data. Oracle Mobile Application Framework enforces Communication Encryption, On-device Encryption, and SQLite Database Encryption. At the presentation layer, Developers can build single user interfaces that meet the needs of users with different privileges and provide role-based access to various features and pages.
Thank you, Mitchell for sharing your insight into how to Simplify Enterprise Mobility with Mobile Application Framework.  You can listen to a podcast on this topic here, and be sure to tune in to the Oracle WebCenter Café Best Practices Podcast Series for more information!