Feed aggregator

PeopleTools 8.54 Feature: Support for Oracle Database Materialized Views

Javier Delgado - Fri, 2014-12-19 17:04
One of the new features of PeopleTools 8.54 is support for Oracle Database Materialized Views. In a nutshell, a Materialized View can be seen as a snapshot of a given view. When you query a Materialized View, the data is not necessarily accessed online; instead, it is retrieved from the latest snapshot. This can greatly improve query performance, particularly for complex SQL or Pivot Grids.

Materialized Views Features
Apart from the performance benefits associated with them, one of the most interesting features of Materialized Views is how the data refresh is handled. Oracle Database supports two ways of refreshing data:

  • On Commit: data is refreshed whenever a commit takes place in any of the underlying tables. In a way, this method is equivalent to maintaining a staging table (the Materialized View) through triggers whenever a source table changes, but all of that complexity is hidden from the developer. Unfortunately, this method is only available for join-based or single-table aggregate views.

Although it has the benefit of returning nearly online information, you would normally use On Commit for views based on tables that do not change very often. Because the Materialized View is refreshed every time a commit is made, insert, update and delete performance on the source tables will be affected.

Hint: You would normally use the On Commit method for views based on Control tables, not Transactional tables.
  • On Demand: data is refreshed on demand. This option is valid for all types of views, and implies that the Materialized View data is only refreshed when requested by the administrator. PeopleTools 8.54 includes a page named Materialized View Maintenance where on-demand refreshes can be configured to run periodically.

If you choose the On Demand mode, the data refresh can be performed using one of two methods:

  • Fast, which refreshes only the rows in the Materialized View affected by the changes made to the source records.

  • Full, which fully recalculates the Materialized View contents. This method is preferable when large volumes of changes are usually made to the source records between refreshes. This option is also required after certain types of updates on the source records (e.g. INSERT statements using the APPEND hint). Finally, this method is required when one of the source records is itself a Materialized View that has been refreshed using the Full method.
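As a purely hypothetical illustration (the table and column names below are invented, not from PeopleSoft), this is what a fast-refreshable aggregate Materialized View looks like in plain Oracle SQL. Note that the Fast method requires a materialized view log on each source table:

```
-- Materialized view log: required for REFRESH FAST on an aggregate view
CREATE MATERIALIZED VIEW LOG ON my_voucher
  WITH ROWID, SEQUENCE (business_unit, gross_amt)
  INCLUDING NEW VALUES;

-- On Demand refresh: data is only recalculated when a refresh is requested
CREATE MATERIALIZED VIEW my_voucher_mv
  BUILD IMMEDIATE
  REFRESH FAST ON DEMAND
AS
  SELECT business_unit, SUM(gross_amt) AS total_amt, COUNT(*) AS cnt
  FROM   my_voucher
  GROUP  BY business_unit;

-- Manual refresh using the Fast ('F') method
EXEC DBMS_MVIEW.REFRESH('MY_VOUCHER_MV', method => 'F');
```

With On Commit instead of On Demand, the two refresh calls would be unnecessary, at the cost of extra work on every commit against the source table.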

How can we use them in PeopleTools?
Before PeopleTools 8.54, Materialized Views could be used as an Oracle Database feature, but the DBA would be responsible for editing the Application Designer build scripts to include the specific syntax for this kind of view. On top of that, the DBA would need to schedule the data refresh directly in the database.

PeopleTools 8.54 introduces support within the PeopleSoft tools. To begin with, Application Designer now shows new options for View records:

We have already seen what Refresh Mode and Refresh Method mean. The Build Options indicate to Application Designer whether the Materialized View data needs to be calculated when the build is executed, or whether it can be delayed until the first refresh is requested from the Materialized View Maintenance page.

This page is used to determine when to refresh the Materialized Views. The refresh can be executed for multiple views at once and scheduled using the usual PeopleTools Process Scheduler recurrence features. Alternatively, the Refresh Interval [seconds] may be used to tell the database that the view needs to be refreshed every n seconds.

The main disadvantage of Materialized Views is that they are specific to Oracle Database. On any other platform, the record behaves like a normal view, which keeps a similar functional behaviour but without the performance advantages of Materialized Views.

All in all, Materialized Views are a very interesting feature for improving system performance while keeping the information reasonably up to date. Personally, I wish I'd had this feature available for many of the reports I've built over the years... :-)

Do You Really Need a Content Delivery Network (CDN)?

Bradley Brown - Fri, 2014-12-19 10:39
When I first heard about Amazon's offering called CloudFront I really didn't understand what it offered and who would want to use it.  I don't think they initially called it a content delivery network (CDN), but I could be wrong about that.  Maybe it was just something I didn't think I needed at that time.

Amazon states it well today (as you might expect).  The offering "gives developers and businesses an easy way to distribute content to end users with low latency, and high data transfer speeds."

So when you hear the word "content" what is it that you think about?  What is content?  First off, it's digital content.  So...website pages?  That's what I initially thought of.  But it's really any digital content.  Audio books, videos, PDFs - files of any type, any size.

When it comes to distributing this digital content, why would you need to do this with low latency and/or high transfer speeds?  Sure, this is important if your website traffic scales up from 1-10 concurrent viewers to millions overnight.  How realistic is that for your business?  What about the other types of content - such as videos?  Yep, now I'm referring to what we do at InteliVideo!

A CDN allows you to scale up to any number of customers viewing or downloading your content concurrently.  Latency can be translated to "slowness" when you're trying to download a video when you're in Japan because it's moving the file across the ocean.  The way that Amazon handles this is that they move the file across the ocean using their fast pipes (high speed internet) between their data centers and then the customer downloads the file effectively directly from Japan.

Imagine that you have this amazing set of videos that you want to bundle up and sell to millions of people.  You don't know when your sales will go viral, but when it happens you want to be ready!  So how do you implement a CDN for your videos, audios, and other content?  Leave that to us!

So back to the original question.  Do you really need a content delivery network?  Well...what if you could get all of the benefits of having one without having to lift a finger?  Would you do it then?  Of course you would!  That's exactly what we do for you.  We make it SUPER simple - i.e. it's done 100% automatically for our clients and their customers.  Do you really need a CDN?  It depends on how many concurrent people are viewing your content and where they are located.

For my Oracle training classes that I offer through BDB Software, I have customers from around the world, which I personally find so cool!  Does BDB Software need a CDN?  It absolutely makes for a better customer experience and I have to do NOTHING to get this benefit!

What Do Oracle Audit Vault Collection Agents Do?

The Oracle Audit Vault is installed on a server, and collector agents are installed on the hosts running the source databases.  These collector agents communicate with the audit vault server. 

If the collection agents are not active, no audit data is lost, as long as the source database continues to collect the audit data.  When the collection agent is restarted, it will capture the audit data that the source database had collected during the time the collection agent was inactive.

There are three types of agent collectors for Oracle databases.  There are other collectors for third-party database vendors such as SAP Sybase, Microsoft SQL-Server, and IBM DB2.

Audit Vault Collectors for Oracle Databases*

  • Database audit trail. How enabled: for standard audit records, the AUDIT_TRAIL initialization parameter is set to DB or DB, EXTENDED; for fine-grained audit records, the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.DB or DBMS_FGA.DB + DBMS_FGA.EXTENDED.

  • Operating system audit trail. How enabled: for standard audit records, the AUDIT_TRAIL initialization parameter is set to OS, XML, or XML, EXTENDED. For syslog audit trails, AUDIT_TRAIL is set to OS and the AUDIT_SYS_OPERATIONS parameter is set to TRUE; in addition, the AUDIT_SYSLOG_LEVEL parameter must be set. For fine-grained audit records, the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.XML or DBMS_FGA.XML + DBMS_FGA.EXTENDED.

  • Redo log files. How enabled: the table that you want to audit must be eligible. See "Creating Capture Rules for Redo Log File Auditing" for more information.

*Note: if using Oracle 12c, the assumption is that Mixed Mode Unified Auditing is being used.
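For illustration, a fine-grained audit policy whose records land in the database audit trail (and are therefore picked up by the database audit trail collector) could be defined as follows. The schema, table, and policy names here are hypothetical, not from the original article:

```
BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',                -- hypothetical schema
    object_name     => 'EMPLOYEES',         -- hypothetical table
    policy_name     => 'FGA_SALARY_READS',  -- hypothetical policy name
    audit_column    => 'SALARY',
    statement_types => 'SELECT',
    -- Write extended records to the database audit trail:
    audit_trail     => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
END;
/
```

Setting audit_trail to DBMS_FGA.XML + DBMS_FGA.EXTENDED instead would route the records to the operating system audit trail.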

If you have questions, please contact us at info@integrigy.com

Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

Elephants and Tigers - V8 of the Website

Bradley Brown - Thu, 2014-12-18 21:54
It's amazing how much work goes into a one-page website these days!  We've been working on the new version of our website (which is basically one page) for the last month or so.  The content is the "easy" part on one hand, and the look and feel / experience is the time-consuming part.  To put it another way, it's all about the entire experience, not just the text/content.

Since we're a video company, it's important that the first page shows some video...which required production and editing.  We're hunting elephants, so we need to tell the full story of the implementations that we've done for our large clients.  What all can you sell on our platform?  A video?  Audio books?  Movies?  TV Shows?  What else?  We needed to talk about our onboarding process for the big guys.  What does the shopping cart integration look like?  We have an entirely new round of apps coming out soon, so we need to show those off.  We need to answer the question "What do our apps look like?"  Everybody wants analytics, right?  You want to know who watched what - for how long, when and where!  What about all of the ways you can monetize - subscriptions (SVOD), transactional (TVOD) - rentals and purchases, credit-based purchases, and more?  What about those enterprises who need to restrict (or allow) viewing based on location?

Yes, it's quite a story that we've learned over the past few years.  Enterprises (a.k.a. Elephants) need it all.  We're "enterprise guys" after all.  It's natural for us to hunt Elephants.

Let's walk through this step-by-step.  In some ways it's like producing a movie.  A lot of moving parts, a lot of post editing, and ultimately it comes down to the final cut.

What is it that you want to deliver?  Spoken word?  TV Shows?  Training?  Workouts?  Maybe you want to jump right into why digital, how to customize or other topics...

Let's talk about why go digital?  Does it seem obvious to you?  It's not obvious to everyone.  Companies are still selling a lot of DVDs.

Any device, anywhere, any time!  That's how your customers want the content.

We have everything from APIs to Single Sign On, and SO much more...we are in fact an enterprise solution.

It's time to talk about the benefits.  We have these awesome apps that we've spent a fortune developing, allowing our clients to have a full branding experience, as you see here for UFC FIT.

We integrate with most of our large customers' existing shopping carts.  We simply receive an instant payment notification from them to authorize a new customer.

I'm a data guy at heart, so we track everything about who's watching what, where they are watching from and so much more.  Our analytics reporting shows you this data.  Ultimately this leads to strategic upsells to existing customers.  It's always easier to sell to someone who's already purchased than to a new customer.

What website would be complete without a full list of client testimonials?

If you can dream up a way to monetize your content, we can implement it.  Credit based subscription systems to straight out purchase...we have it all!

What if you want to sell through affiliates?  How about selling the InteliVideo platform as an affiliate?  Our founders came from ClickBank, so we understand Affiliate payments and how to process them.

Do you need a step-by-step guide to our implementation process?  Well...if so, here you have it!  It's as simple as 5 steps.  For some customers this is a matter of hours and for others it's months.  The first step is simply signing up for an InteliVideo account at: http://intelivideo.com/sign-up/ 

We handle payment processing for you if you would like.  But...most big companies have already negotiated their merchant processing rates AND they typically already have a shopping cart.  So we integrate as needed.

Loading up your content is pretty easy with our platform.  Then again, we have customers with as few as one product and others with thousands of products and 10s of thousands of assets (videos, audio files, etc.).  Most of our big customers simply send us a drive.  We have a bulk upload process where you give us your drive and all of the metadata (descriptions) and the mapping of each...and we load it all up for you.

Our customers can use our own sales pages and/or membership area...or we have a template engine that allows for comprehensive redesign of the entire look and feel.  Out of the box implementations are simple...

Once our clients sign off on everything and our implementation team does as well...it's time to buy your media, promote your products and start selling.  We handle the delivery.

For those who would like to sign up or need more information, what website would be complete without a contact me page?  There are other pages (like our blog, about us, etc), but this page has a lot of information.  It's a story.  At the bottom of the page there is a "Small Business" link, which takes you to the prior version of our website...for small businesses.

As I said at the beginning of this blog post...it's amazing how much thought goes into a new web page!  We're very excited about our business.  Hopefully this post helped you think through how you want to tell the stories about your business.  How should you focus on your elephants and tigers?  How often should you update your website?  Go forth and crush it!

This new version of our website should be live in the next day or so...as always, I'd love to hear your feedback!

Elephant Hunting

Bradley Brown - Wed, 2014-12-17 11:00
Most every startup that I've watched (and been part of) has grand plans of virality.  Build it and they will not just come to you, but they will flock to you!  There is a dream that what you have built is going to change the world and it's going to be so obvious to everyone that they will want to share the news with all of their friends.  It's a good dream and there is a dose of reality that hits you dead in the face at some point.

When I started InteliVideo, it seemed SO clear to me that we had developed an amazing offering that everyone would tell all of their friends about.  It was also clear to me that all of my friends who were in the training business (doing training in person or virtually - via WebEx) would choose to start offering their training through our platform.  After all, they have a brand, they have a customer base and they want to provide their training to their customers.  They certainly don't want to put their training on YouTube and serve it up for free.  They certainly don't want their customers to watch their training and then, at the end of the video, see 10 of their competitors' videos to choose from.  This seemed so obvious to me.  But...it clearly wasn't obvious to them, because they didn't flock to our platform - even though I offered it to them repeatedly.

After all, I knew just how easy it was for me to create my content (i.e. record a video lesson), bundle up a series of lessons into a product, set a price, and away I went, selling my training online.  I knew just how excited and energized many of my students were to be able to watch my training.  They could watch it one time or 1000 times - at their own learning pace.  I could see their progress!  In fact, many of my students came to me and asked for additional custom lessons, which I charged them a consulting fee to produce (i.e. $200 for one lesson).  I set up the lesson at $200 in the platform (without any videos in it), asked them to pay for the lesson, then recorded it and attached it to the product...and reduced the price of the lesson for future purchasers to $15-25.  In other words, I created new content for a fee AND I was able to sell it time and time again.

You see, I've written 6 technical books (on average about 1000 pages each) that each took 6-12 months of my life to write.  Sure, a book generates credibility in a subject area, but it doesn't generate a lot of direct revenue.  Recording and then selling a video-based course, by contrast, requires less than one one-hundredth of the effort of writing a book, for the same - actually better - output.

Where am I going with all of this?  Well...after trying to convince 1000s of small business owners that they should use our platform, offering them free trials to see just how easy it is, and talking endlessly about what's in it for them, we concluded that this futile pursuit of virality was insanity.  The common definition I hear for insanity is doing the same thing and expecting different results.  We continued to try to convince people - sure, with more convincing messages - but the "conversion rate" (the number of people who signed up and were successful) was not good.

When we stopped and looked at who our real customers are - the ones that generate real revenue - we quickly discovered that they are what we might refer to as elephants.  Big companies who completely understand how to develop, curate, sell, and ultimately deliver valuable content to their customers...who buy from them time and time again.

So we changed our approach and our website to communicate to the elephants.  This new approach will go live today.  The "old approach" will show up as a "Small Business" link at the bottom of the page.  The new approach explains the deeper details of integration, APIs and the things that are important to larger companies who know how to sell their valuable content.

We've had GREAT success with our elephants and we're VERY excited about where they are taking us!  We have a TON of new functionality that we continue to roll out each week.  We have integrated with a number of shopping carts.  We've created a new template system that will allow us to create a completely different look and feel for each of our clients.  We're launching a whole new series of brandable apps in the next few weeks.  We completely understand just how important our apps are to our success and have spent a fortune recreating our apps from the ground up.

It's been an exceptional ride over the last year.  We finalized our series A round this summer.  Startups are an adrenaline junkie's dream job.  One day you're riding high on your laurels of success and the next day you're wondering how you're going to get to a cash flow positive position.  All the while, life, real life, goes on.  Your family continues to age, grow up, build their own businesses, and maybe you're not out having as much "fun" as you might like to.  For me that translates to not riding my dirt bike or snowmobile as much as I would like.  But I'm having fun in the business - that's the tradeoff.

That's what I call opportunity cost.  Each day you could be doing what you're doing or something else.  Take a minute to think about the cost of what you're doing right now.  Should you be hunting virality or elephants?

Oracle E-Business Suite Database 12c Upgrade Security Notes

When upgrading the Oracle E-Business Suite database to Oracle Database 12c (12.1), there are a number of security considerations and steps that should be included in the upgrade procedure.  Oracle Support Note ID 1524398.1 Interoperability Notes EBS 12.0 or 12.1 with RDBMS 12cR1 details the upgrade steps.  Here, we will document steps that should be included or modified to improve database security.  All references to steps are the steps in Note ID 1524398.1.

Step 8

"While not mandatory for the interoperability of Oracle E-Business Suite with the Oracle Database, customers may choose to apply Database Patch Set Updates (PSU) on their Oracle E-Business Suite Database ...".

After any database upgrade, the latest CPU patch (either PSU or SPU) should always be applied.  The database upgrade only includes the latest CPU patch available at the time the database upgrade patch was released.  In this case, the database upgrade will only be current as of July 2013 and will be missing the latest five CPU patches.  Database upgrade patches reset the CPU level - so even if you had applied the latest CPU patch prior to the upgrade, the upgrade will revert the CPU patch level to July 2013.

From a security perspective, the latest PSU patch should be considered mandatory.

Step 11

It is important to note from a security perspective that Database Vault must be disabled during the upgrade process.  Any protections enabled in Database Vault intended for DBAs will be disabled during the upgrade.

Step 15

The DMSYS schema is no longer used with Oracle E-Business Suite and can be safely dropped.  We recommend you drop the schema as part of this step to reduce the attack surface of the database and remove unused components.  Use the following SQL to remove the DMSYS user --
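The SQL itself is not reproduced in this excerpt; a standard way to drop an unused schema together with all of its objects (after verifying nothing else references it) is:

```
-- Drop the unused DMSYS schema and all objects it owns
DROP USER dmsys CASCADE;
```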

Step 16

The upgrade is a good time to review that security-related initialization parameters are set correctly.  Verify the following parameters are set -

o7_dictionary_accessibility = FALSE
audit_trail = <set to a value other than none>
sec_case_sensitive_logon = TRUE (patch 12964564 may have to be applied)
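A quick way to check all three settings at once, for example:

```
-- Review the current values of the security-related parameters
SELECT name, value
FROM   v$parameter
WHERE  name IN ('o7_dictionary_accessibility',
                'audit_trail',
                'sec_case_sensitive_logon');
```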
Step 20

For Oracle E-Business Suite 12.1, the sqlnet_ifile.ora should contain the following parameter to correspond with the initialization parameter sec_case_sensitive_logon = TRUE -





Oracle E-Business Suite, DBA
Categories: APPS Blogs, Security Blogs

Is Oracle 12c REST ready?

Marcelo Ochoa - Sat, 2014-12-13 16:33
This post is a continuation of my previous post Is Oracle 11g REST Ready?, and the answer is yes.
Again, the availability of the embedded JVM in the Oracle RDBMS allows us to run an implementation of the complete REST stack and application.
To show how to implement a simple Hello World application in REST, this time I decided to use the Jersey REST stack.
With Oracle 12c we have two JDKs available (1.6 and 1.7), and to compile and run Jersey we have to change the default from 1.6 and switch the RDBMS to the 1.7 JDK; follow this guide to do that, but remember that in a CDB/PDB environment switching the JDK means changing the JDK compatibility on all PDBs. Here is another good post on that topic: DB 12c update java to version 7.
Once we have our RDBMS ready with JDK 1.7 we need Jersey compiled and ready to upload, here my steps:
a.- Check JDK version:
    mochoa@localhost:~$ export JAVA_HOME=/usr/local/jdk1.7
    mochoa@localhost:~$ export PATH=$JAVA_HOME/bin:$PATH
    mochoa@localhost:~$ type java
    java is /usr/local/jdk1.7/bin/java
    mochoa@localhost:~$ java -version
    java version "1.7.0_55"
    Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
b.- Check Maven:
    mochoa@localhost:~$ mvn -version
    Apache Maven 3.2.3 (33f8c3e1027c3ddde99d3cdebad2656a31e8fdf4; 2014-08-11T17:58:10-03:00)
    Maven home: /usr/local/apache-maven-3.2.3
    Java version: 1.7.0_55, vendor: Oracle Corporation
    Java home: /home/usr/local/jdk1.7.0_55/jre
    Default locale: en_US, platform encoding: UTF-8
    OS name: "linux", version: "3.13.0-40-generic", arch: "amd64", family: "unix"
    c.- Download and build Jersey using this guide: Building and Testing Jersey. After a successful build of Jersey, all components will be located in our local Maven repository; on Linux it is at $HOME/.m2/repository
    d.- Add a new container implementation for the Oracle XMLDB Servlet; sources can be downloaded using this link. This new container implementation is a cloned version of jersey-servlet-core, downgraded from Servlet 2.3 to the Servlet 2.2 API implemented by XMLDB. Here are the steps:
    mochoa@localhost:~/jdeveloper/mywork/jersey$ cd containers/
    localhost:~/jdeveloper/mywork/jersey/containers$ tar xvfz /tmp/xdb-servlet.tar.gz
    localhost:~/jdeveloper/mywork/jersey/containers$ cd xdb-servlet/
    localhost:~/jdeveloper/mywork/jersey/containers/xdb-servlet$ mvn -Dmaven.test.skip=true clean install
    [INFO] Scanning for projects...
    [INFO] ------------------------------------------------------------------------
    [INFO] Building jersey-container-servlet-xdb 2.14-SNAPSHOT
    .... lot of stuff here ....
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 18.146 s
    [INFO] Finished at: 2014-12-13T17:07:21-03:00
    [INFO] Final Memory: 29M/351M
    [INFO] ------------------------------------------------------------------------
    e.- Create a new user in the RDBMS; this user will contain the whole Jersey stack:
    SQL> select tablespace_name from dba_tablespaces;
    SQL> create user jersey identified by jersey
         default tablespace users
         temporary tablespace temp
         quota unlimited on users;
    User created.
    SQL> grant connect,resource,create public synonym to jersey;
    Grant succeeded.
    f.- Upload all libraries to the RDBMS. The list of libraries built from the Jersey sources, together with their dependencies, is:
    1. javax/servlet/servlet-api/2.2/servlet-api-2.2.jar (the version installed in the DB)
    2. javax/persistence/persistence-api/1.0/persistence-api-1.0.jar
    3. org/glassfish/hk2/external/javax.inject/2.4.0-b06/javax.inject-2.4.0-b06.jar
    4. org/glassfish/hk2/hk2-utils/2.4.0-b06/hk2-utils-2.4.0-b06.jar
    5. org/osgi/org.osgi.core/4.2.0/org.osgi.core-4.2.0.jar
    6. org/glassfish/hk2/osgi-resource-locator/1.0.1/osgi-resource-locator-1.0.1.jar
    7. org/glassfish/hk2/hk2-api/2.4.0-b06/hk2-api-2.4.0-b06.jar
    8. javax/ws/rs/javax.ws.rs-api/2.0.1/javax.ws.rs-api-2.0.1.jar
    9. org/glassfish/jersey/bundles/repackaged/jersey-guava/2.14-SNAPSHOT/jersey-guava-2.14-SNAPSHOT.jar
    10. javax/annotation/javax.annotation-api/1.2/javax.annotation-api-1.2.jar
    11. org/glassfish/jersey/core/jersey-common/2.14-SNAPSHOT/jersey-common-2.14-SNAPSHOT.jar
    12. org/glassfish/jersey/core/jersey-client/2.14-SNAPSHOT/jersey-client-2.14-SNAPSHOT.jar
    13. javax/validation/validation-api/1.1.0.Final/validation-api-1.1.0.Final.jar
    14. javassist/javassist/3.12.1.GA/javassist-3.12.1.GA.jar
    15. org/glassfish/hk2/external/aopalliance-repackaged/2.4.0-b06/aopalliance-repackaged-2.4.0-b06.jar
    16. org/glassfish/hk2/hk2-locator/2.4.0-b06/hk2-locator-2.4.0-b06.jar
    17. org/glassfish/jersey/core/jersey-server/2.14-SNAPSHOT/jersey-server-2.14-SNAPSHOT.jar
    Library [1] should never be uploaded into the RDBMS because it is part of the XMLDB implementation, so here are the steps to upload libraries [2]-[17]:

    $ cd $HOME/.m2/repository
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[2]
    Classes Loaded: 91
    Resources Loaded: 2
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 91
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[3]
    Classes Loaded: 6
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 6
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[4]
    Classes Loaded: 60
    Resources Loaded: 5
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 60
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[5]
    Some errors, but no resolving problems found.
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[6]
    Classes Loaded: 0
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 12
    Synonyms Created: 12
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[7]
    Classes Loaded: 0
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 153
    Synonyms Created: 153
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[8]
    Classes Loaded: 125
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 125
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[9]
    Classes Loaded: 1594
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 1594
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[10]
       ...Some errors not allowed in PDB, other classes OK....
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[11]
    Classes Loaded: 0
    Resources Loaded: 5
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 490
    Synonyms Created: 490
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[12]
    Classes Loaded: 99
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 99
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[13]
    Classes Loaded: 106
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 106
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[14]
    Classes Loaded: 366
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 366
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[15]
    Classes Loaded: 26
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 26
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[16]
    Classes Loaded: 0
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 92
    Synonyms Created: 92
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[17]
    Classes Loaded: 0
    Resources Loaded: 16
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 652
    Synonyms Created: 652
    Errors: 0
    We can verify the loadjava commands above by logging in to the target database as jersey and running the following (all queries must return no rows):
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/persistence/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/inject/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/annotation/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/validation/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/ws/rs/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'jersey/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'org/%';
    g.- finally uploading our xdb-servlet container:
    loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl org/glassfish/jersey/containers/jersey-container-servlet-xdb/2.14-SNAPSHOT/jersey-container-servlet-xdb-2.14-SNAPSHOT.jar
    Classes Loaded: 39
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 39
    Errors: 0
    h.- At this point we have everything uploaded into the RDBMS; now we prepare XMLDB to run Java-implemented servlets.
    Enabling XMLDB HTTP access:
    PL/SQL procedure successfully completed.
    SQL> alter system register;
    System altered.
    i.- By default XMLDB is configured with digest authentication. To change that, we can download the xdbconfig.xml file using FTP, update the authentication section, and upload it again using FTP (requires the SYS user).
    j.- Grants required for running the Jersey servlet. We are using the JERSEY user here; for other accounts similar grants are required, either directly or by creating a new role (recommended):
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.RuntimePermission', 'getClassLoader','');
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.RuntimePermission', 'accessDeclaredMembers', '' );
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.reflect.ReflectPermission', 'suppressAccessChecks', '' );
    SQL> exec dbms_java.grant_permission( 'JERSEY','SYS:java.util.logging.LoggingPermission', 'control', '' );
    k.- Uploading a simple Hello World application from the examples directory:
    mochoa@localhost:~/jdeveloper/mywork/jersey$ cd examples/helloworld
    localhost:~/jdeveloper/mywork/jersey/examples/helloworld$ loadjava -r -v -u jersey/jersey@pdborcl target/classes/org/glassfish/jersey/examples/helloworld/HelloWorldResource.class
    arguments: '-u' 'jersey/***@pdborcl' '-r' '-v' 'target/classes/org/glassfish/jersey/examples/helloworld/HelloWorldResource.class'
    identical: org/glassfish/jersey/examples/helloworld/HelloWorldResource
    skipping : class org/glassfish/jersey/examples/helloworld/HelloWorldResource
    Classes Loaded: 0
    Resources Loaded: 0
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 1
    Synonyms Created: 0
    Errors: 0
    l.- Registering the servlet into the XMLDB adapter (logged as SYS):
    DECLARE
        configxml SYS.XMLType;
    BEGIN
     dbms_xdb.addServlet(name=>'JerseyServlet',language=>'Java',class=>'org.glassfish.jersey.servlet.ServletContainer',dispname=>'Jersey Servlet',schema=>'jersey');
     SELECT INSERTCHILDXML(xdburitype('/xdbconfig.xml').getXML(),'/xdbconfig/sysconfig/protocolconfig/httpconfig/webappconfig/servletconfig/servlet-list/servlet[servlet-name="JerseyServlet"]','init-param',
     XMLType('<init-param xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"><param-name>jersey.config.server.provider.classnames</param-name><param-value>org.glassfish.jersey.examples.helloworld.HelloWorldResource</param-value><description>Hello World Application</description></init-param>'),'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"') INTO configxml
     FROM DUAL;
     dbms_xdb.cfg_update(configxml);
     dbms_xdb.addServletSecRole(SERVNAME => 'JerseyServlet',ROLENAME => 'authenticatedUser',ROLELINK => 'authenticatedUser');
     COMMIT;
    END;
    /
    m.- Finally test my App:
    mochoa@localhost:~/jdeveloper/mywork/jersey/examples/helloworld$ curl --basic --user jersey:jersey http://localhost:8080/jersey/helloworld
    Hello World!!
    And that's all, happy 12c REST world.
    Notes on security:
    1. As you can see, when registering a servlet we added ROLENAME => 'authenticatedUser',ROLELINK => 'authenticatedUser'; this implies that an RDBMS user name and password are required to access this servlet. As in the example, we have to provide jersey/jersey, which is the owner of the Hello World app.
    2. The HTTP protocol sends the user name and password encoded as Base64 when using the basic authentication scheme; if we want to hide this information over the net when using plain HTTP, we have to move to HTTPS.
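To see how little protection Base64 offers, here is a small illustrative sketch (Python, not part of the original post) that reconstructs and then reverses the Authorization header curl sends for --basic --user jersey:jersey:

```python
import base64

# The header value curl sends for: --basic --user jersey:jersey
token = base64.b64encode(b"jersey:jersey").decode("ascii")
print("Authorization: Basic " + token)

# Anyone sniffing plain HTTP can reverse it instantly -- hence HTTPS.
user, password = base64.b64decode(token).decode("ascii").split(":")
print(user, password)  # jersey jersey
```

Base64 is an encoding, not encryption; the credentials travel in the clear on every request.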
    3. If you install another Hello World application in a different schema, for example scott, it is necessary to also upload the class ServletContainer, for example using loadjava -u scott/tiger@pdborcl org/glassfish/jersey/servlet/ServletContainer.class, and obviously our new application class, finally registering the servlet and changing the tag to <servlet-schema>scott</servlet-schema>
    4. Servlets that run without authentication are registered using ROLENAME => 'PUBLIC',ROLELINK => 'PUBLIC', but this is NOT recommended; it requires unlocking the ANONYMOUS account and these grants:
    SQL> ALTER USER ANONYMOUS ACCOUNT UNLOCK;
    User altered.
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.RuntimePermission', 'getClassLoader','');
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.RuntimePermission', 'accessDeclaredMembers', '' );
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.reflect.ReflectPermission', 'suppressAccessChecks', '' );
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS','SYS:java.util.logging.LoggingPermission', 'control', '' );

    Throw it away - Why you shouldn't keep your POC

    Rob Baillie - Sat, 2014-12-13 04:26

    "Proof of Concepts" are a vital part of many projects, particularly towards the beginning of the project lifecycle, or even in the pre-business case stages.

    They are crucial for ensuring that facts are gathered before some particularly risk decisions are made.  Technical or functional, they can address many different concerns and each one can be different, but they all have one thing in common.  They serve to answer questions.

    It can be tempting, whilst answering these questions to become attached to the code that you generate.

    I would strongly argue that you should almost never keep the code that you build during a POC.  Certainly not to put into a production system.

    I'd go so far as to say that planning to keep the code is often damaging to the proof of concept; planning to throw the code away is liberating, more efficient and makes proofs of concept more effective by focussing minds on the questions that require answers.

    Why do we set out on a proof of concept?

    The purpose of a proof of concept is to (by definition):

      * Prove:  Demonstrate the truth or existence of something by evidence or argument.
      * Concept: An idea, a plan or intention.

    In most cases, the concept being proven is a technical one.  For example:
      * Will this language be suitable for building x?
      * Can I embed x inside y and get them talking to each other?
      * If I put product x on infrastructure y will it basically stand up?

    They can also be functional, but the principles remain the same for both.

    It's hard to imagine a proof of concept that cannot be phrased as one or more questions.  In a lot of cases I'd suggest that there's only really one important question with a number of ancillary questions that are used to build a body of evidence.

    The implication of embarking on a proof of concept is that when you start you don't know the answer to the questions you're asking.  If you *do* already know the answers, then the POC is of no value to you.

    By extension, there is the implication that the questions posed require to be answered as soon as possible in order to support a decision.  If that's not the case then, again, the POC is probably not of value to you.

    As such, the only thing that the POC should aim to achieve is to answer the question posed and to do so as quickly as possible.

    This is quite different to what we set out to do in our normal software development process. 

    We normally know the answer to the main question we're asking (How do we functionally provide a solution to this problem / take advantage of this opportunity), and most of the time is spent focussed on building something that is solid, performs well and generally good enough to live in a production environment - in essence, not answering the question, but producing software.

    What process do we follow when embarking on a proof of concept?

    Since the aim of a POC is distinct from what we normally set out to achieve, the process for a POC is intrinsically different to that for the development of a production system.

    With the main question in mind, you often follow an almost scientific process.  You put forward a hypothesis, you set yourself tasks that are aimed at collecting evidence that will support or deny that hypothesis, you analyse the data, put forward a revised hypothesis and you start again.

    You keep going round in this routine until you feel you have an answer to the question and enough evidence to back that answer up.  It is an entirely exploratory process.

    Often, you will find that you spend days following avenues that don't lead anywhere, backtrack and reassess, following a zig-zag path through a minefield of wrong answers until you reach the end point.  In this kind of situation, the code you have produced is probably one of the most barnacle-riddled messes you have ever produced.

    But that's OK.  The reason for the POC wasn't to build a codebase, it was to provide an answer to a question and a body of evidence that supports that answer.

    To illustrate:

    Will this language be suitable for building x?

    You may need to check things like that you can build the right type of user interfaces, that APIs can be created, that there are ways of organising code that makes sense for the long term maintenance for the system.

    You probably don't need to build a completely functional UI, create a fully functioning API with solid error handling or define the full set of standards for implementing a production quality system in the given language.

    That said, if you were building a production system in the language you wouldn't dream of having an incomplete UI, or an API that doesn't handle errors completely, or just knocking stuff together in an ad-hoc manner.

    Can I embed x inside y and get them talking to each other?

    You will probably need to define a communication method and prove that it basically works.  Get something up and running that is at least reasonably functional in the "through the middle" test case.

    You probably don't need to develop an architecture that is clean, with separation of concerns, so that the systems are properly independent and backwards compatible with existing integrations. Or to ensure that all interactions are properly captured and that exceptional circumstances are dealt with correctly.

    That said, if you were building a production system, you'd need to ensure that you define the full layered architecture, understand the implications of lost messages, prove the level of chat that will occur between the systems.  On top of that you need to know that you don't impact pre-existing behaviour or APIs.

    If I put product x on infrastructure y will it basically stand up?

    You probably need to just get the software on there and run your automated tests.  Maybe you need to prove the performance and so you'll put together some ad-hoc performance scripts.

    You probably don't need to prove that your release mechanism is solid and repeatable, or ensure that your automated tests cover some of the peculiarities of the new infrastructure, or that you have a good set of long term performance test scripts that drop into your standard development and deployment process.

    That said, if you were building a production system, you'd need to know exactly how the deployments worked, fit it into your existing continuous delivery suite, performance test and analyse on an automated schedule.

    Production development and Proof of Concept development is not the same

    The point is, when you are building a production system you have to do a lot of leg-work; you know you can validate all the input being submitted in a form, or coming through an API - you just have to do it.

    You need to ensure that the functionality you're providing works in the majority of use-cases, and if you're working in a TDD environment then you will prove that by writing automated tests before you've even started creating that functionality.

    When you're building a proof of concept, not only should these tests be a lower priority, I would argue that they should be *no priority whatsoever*, unless they serve to test the concept that you're trying to prove.

    That is,  you're not usually trying to ensure that this piece of code works in all use-cases, but rather that this concept works in the general case with a degree of certainty that you can *extend* it to all cases.

    Ultimately, the important deliverable of a POC is proof that the concept works, or doesn't work; the exploration of ideas and the conclusion you come to; the journey of discovery and the destination of the answer to the question originally posed.

    That is intellectual currency, not software.  The important delivery of a production build is the software that is built.

    That is the fundamental difference, and why you should throw your code away.

    Paginated HTML is here and has been for some time ... I think!

    Tim Dexter - Fri, 2014-12-12 18:03

    We have a demo environment in my team and of course things get a little beaten up in there. Our go-to 'here's Publisher' report was looking really bad. Data was not returning or being rendered correctly on the five templates we have for it.
    So, I spent about a half hour cleaning up the report; getting things working again; clearing out the rubbish. I noticed that one of the layouts when rendered in HTML was repeatedly showing a header down the screen. Oh, I know where to get rid of that and off I click to the report properties to fix it. But what is this I see? Is it? Can it be? Are my tired old eyes deceiving me?

    Yes, Dexter, you see that right, 'View Paginated'! I nervously changed the value to 'true' and went back to the HTML output.
    Holy Amaze Balls Batman, paginated HTML, the holy grail of HTML rendered reports, the Mount Everest of ... no, that's too easy, the K2 of HTML output ... it's fan-bloody-tastic! Can you tell I'm excited? I was immediately on messenger to Leslie (doc writer extraordinaire)

    Obviously not quite as big a deal in the sane, real world outside of my head. 'Oh yeah, we have that now ...' Leslie is so calm and collected, however, she does like Maroon 5 but, we overlook that :)

    I command you'ers to go find the property and turn it on right now and bask in the glory that is 'paginated HTML'!
    I cannot stop clicking back and forth and then to the end and then all the way back to the beginning. It's fantastic!

    Just look at those icons, just click em, you know you want to!

    Categories: BI & Warehousing

    SDSQL - Editing Anyone?

    Barry McGillin - Fri, 2014-12-12 12:05
    Since we dropped our beta out of SQL Developer 4.1 and announced SDSQL, we've been busy getting some of the new things out to users.  We support SQL*Plus editing straight out of the box, but one thing that was always annoying was making a mistake and not being able to fix it until you had finished typing, then having to go back and add a line like this.

    This was always the way, as console editors didn't let you move around; the best you could hope for on the command line was a decent line editor, and anything above was printed to the screen and not accessible except through commands like you see here in the images above.

    Well, not any more.  In SDSQL we've taken a look at several things like history, aliases and colors, and we've now added a separate multiline console editor which allows you to walk up and down your buffer and make all the changes you want before executing.  Sounds normal, right? So, that's what we did.  Have a look and tell us what you think.

    What can the Oracle Audit Vault Protect?

    For Oracle database customers the Oracle Audit Vault can protect the following:

    • SQL statements logs – Data Manipulation Language (DML) statement auditing such as when users are attempting to query the database or modify data, using SELECT, INSERT, UPDATE, or DELETE.
    • Database Schema Objects changes – Data Definition Language (DDL) statement auditing such as when users create or modify database structures such as tables or views.
    • Database Privileges and Changes – Auditing can be defined for the granting of system privileges, such as SELECT ANY TABLE.  With this kind of auditing, Oracle Audit Vault records SQL statements that require the audited privilege to succeed.
    • Fine-grained audit logs – Fine Grained Auditing activities stored in SYS.FGA_LOG$ such as whether an IP address from outside the corporate network is being used or if specific table columns are being modified.  For example, when the HR.SALARY table is SELECTED using direct database connection (not from the application), a condition could be to log the details of result sets where the PROPOSED_SALARY column is greater than $500,000 USD.
    • Redo log data – Database redo log file data.  The redo log files store all changes that occur in the database.  Every instance of an Oracle database has an associated redo log to protect the database in case of an instance failure.  In Oracle Audit Vault, the capture rule specifies DML and DDL changes that should be checked when Oracle Database scans the database redo log.

    The Audit Vault also supports –

    • Database Vault – Database Vault settings stored in DVSYS.AUDIT_TRAIL$ such as Realm audit, factor audit and Rule Audit. 
    • System and SYS – Core changes to the database by privileged users such as DBAs as recorded by AUDIT_SYS_OPERATIONS.
    • Stored Procedure Auditing – Monitor any changes made to PL/SQL and stored procedures.  Standard reports are provided for stored procedure operations, deleted and created procedures, as well as modification history.

    If you have questions, please contact us at info@integrigy.com

    Auditing, Oracle Audit Vault, Oracle Database
    Categories: APPS Blogs, Security Blogs

    Exploring DevOps with Chef and WebLogic Server

    Steve Button - Wed, 2014-12-10 20:58
    I'm about to embark on a journey that explores the use of WebLogic Server within a DevOps regime.  My first port of call for this journey will be using Chef.

    A loose travel itinerary is:
    • Setting up an environment to explore the basic operations of Chef - using the Chef Development Kit (ChefDK)
    • Exploring the basics of how Chef works to install Java and WebLogic Server on a single node
    • Installing and examining some of the existing cookbooks that are available for Java and WebLogic Server
    • Extending the environment to provision multiple nodes to create a typical multiple machine clustered WebLogic Server environment
    I've started working on the first task, where I've also explored using Docker to create an isolated, reusable and easily shareable environment that contains the ChefDK.

    The Docker project is here on GitHub:
    I also tried a quick experiment with using Oracle Linux as the base docker image:
    The Dockerfile contains the set of instructions required to install the ChefDK and the necessary utilities into the docker image when it is built.

    # Dockerfile for Chef 4 WLS Environment

    FROM ubuntu

    MAINTAINER Steve Button <>

    ENV DEBIAN_FRONTEND noninteractive

    # Install Utilities
    RUN apt-get update
    RUN apt-get install -yq wget
    RUN apt-get install -yq curl
    RUN apt-get install -yq git

    # Install Chef
    RUN wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.3.5-1_amd64.deb
    RUN dpkg -i chefdk*.deb

    # Verify and Setup Chef
    RUN chef verify
    RUN echo 'eval "$(chef shell-init bash)"' >> ~/.bashrc


    CMD ["/bin/bash"]

    With this Dockerfile, a build operation can be performed that produces a docker image, which can then be run to provide an environment in which to start exploring Chef.

    $ docker build -t buttso/chef4wls .

    $ docker run -ti buttso/chef4wls

    oracle@5481a3330f27:~$ which chef-client

    This is just a brief outline - I will describe this first task in more detail once I get a bit further along and can verify everything has been installed and works correctly.

    File Encoding in the Next Generation Outline Extractor

    Tim Tow - Tue, 2014-12-09 20:11
    We had a couple of issues reported with the output of the Next Generation Outline Extractor where the exported file did not work properly as a load file. After some investigation, we found that the file encoding was incorrect. We were encoding the files using the Unicode/UTF-8 format. We chose this encoding so that we could write all characters in all languages, but we did not consider that UTF-8 is only valid for loading Unicode databases in Essbase.
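As a quick illustration of why the encoding matters (a hedged Python sketch, not from the original post; cp1252 stands in here for an arbitrary non-Unicode code page): the same member name produces different bytes depending on the encoding the file is written with, so a UTF-8 file can be misread by a loader expecting a native code page.

```python
# Same text, different on-disk bytes depending on file encoding.
text = "Année"  # hypothetical member name with an accented character

utf8_bytes = text.encode("utf-8")     # é becomes two bytes: 0xC3 0xA9
cp1252_bytes = text.encode("cp1252")  # é is a single byte: 0xE9

print(len(utf8_bytes), len(cp1252_bytes))  # 6 5
# A loader expecting cp1252 would read the UTF-8 bytes as "AnnÃ©e".
print(utf8_bytes.decode("cp1252"))
```

This mismatch is exactly the kind of silent corruption a configurable file-encoding setting avoids.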

    To resolve this issue, we decided to add a configuration to the Next Generation Outline Extractor to allow users to select the file encoding. Here is a screenshot showing the new configuration setting.

    As of yesterday, December 8, 2014, the updated Next Generation Outline Extractor is available on our website. The updated version, available for all Essbase versions from Essbase 9.3.1 forward, is the first to feature this functionality. We are also happy to announce that this version of the Next Generation Outline Extractor is the first version to support the recently released Essbase release.

    If you encounter any issues with the Next Generation Outline Extractor, please don't hesitate to contact our support team at support@appliedolap.com.

    Categories: BI & Warehousing

    Changing The Number Of Oracle Database 12c Log Writers

    In an Oracle Database 12c instance you will likely see multiple log writer (LGWR) background processes. When you first start the Oracle instance you will likely see a parent and two redo workers. This is a very big deal and something many of us have been waiting for - for many years!

    While I'm excited about the change, if I can't control the number of LGWRs I could easily find myself once again constrained by the lack of LGWRs!

    So, my question is how do I manipulate the number of LGWRs from the default. And what is the default based on? It's these types of questions that led me on this quest. I hope you enjoy the read!

    Serialization Is Death
    Multiple LGWRs is great news because serialization is death to computing performance. Think of it like this. A computer program is essentially lines of code and each line of code takes a little bit of time to execute. A CPU can only process N lines of code per second. This means every serially executing program has a maximum throughput capability. With a single log writer (LGWR) background process, the amount of redo that can be processed is similarly constrained.

    An Example Of Serialization Throughput Limitation
    Suppose a CPU can process 1000 instructions per millisecond. Also, assume through some research a DBA determined it takes the LGWR 10 instructions to process 10 KB of redo. (I know DBAs who have taken the time to figure this stuff out.) Given these two pieces of data, how many KB of redo can the CPU theoretically process per second?

    ? KB of redo/sec = (1000 inst / 1 ms)*(10 KB redo / 10 inst)*(1000 ms / 1 sec) = 1,000,000 KB redo/sec; applying (1 MB / 1000 KB), that is 1000 MB redo/sec

    This is a best case scenario. As you can see, any sequential process can become a bottleneck. One solution to this problem is to parallelize.
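The unit arithmetic above can be sanity-checked in a few lines (an illustrative sketch using the example's assumed figures):

```python
# Assumed figures from the example above.
instructions_per_ms = 1000          # CPU speed: 1000 instructions per ms
kb_redo_per_instruction = 10 / 10   # LGWR: 10 instructions per 10 KB of redo

kb_per_ms = instructions_per_ms * kb_redo_per_instruction
kb_per_sec = kb_per_ms * 1000       # 1000 ms per second
mb_per_sec = kb_per_sec / 1000      # 1000 KB per MB

print(kb_per_sec, mb_per_sec)  # 1000000.0 1000.0
```

A single serial LGWR therefore tops out around 1000 MB of redo per second under these assumptions, no matter how much redo the sessions generate.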

    Note: Back in April of 2010 I posted a series of articles about parallelism. If you are interested in this topic, I highly recommend you READ THE POSTS.

    Very Cool! Multiple 12c LGWRs... But Still A Limit?

    Since serialization is death... and parallelism is life, I was really excited when I saw on my 12c Oracle instance by default it had two redo workers in addition to the "parent" log writer. On my Oracle version Linux machine this is what I see:
    $ ps -eaf|grep prod40 | grep ora_lg
    oracle 54964 1 0 14:37 ? 00:00:00 ora_lgwr_prod40
    oracle 54968 1 0 14:37 ? 00:00:00 ora_lg00_prod40
    oracle 54972 1 0 14:37 ? 00:00:00 ora_lg01_prod40

    This is important. While this is good news, unless Oracle or I have the ability to change and increase the number of LGWR redo workers, at some point the two redo workers will become saturated, bringing us back to the same serial LGWR process situation. So, I want and need some control.

    Going Back To Only One LGWR
    Interestingly, starting in Oracle Database version there is an instance parameter _use_single_log_writer. I was able to REDUCE the number of LGWRs to only one by setting the instance parameter _use_single_log_writer=TRUE. But that's the wrong direction I want to go!

    More Redo Workers: "CPU" Instance Parameters
    I tried a variety of CPU related instance parameters with no success. Always two redo workers.

    More Redo Workers: Set Event...
    Using my OSM script listeventcodes.sql I scanned the Oracle events (not wait events) but was unable to find any related Oracle events. Bummer...

    More Redo Workers: More Physical CPUs Needed?
    While talking to some DBAs about this, one of them mentioned they had heard Oracle sets the number of 12c log writers based on the number of physical CPUs. Not the number of CPU cores, but the number of physical CPUs. On a Solaris box with 2 physical CPUs (verified using the command psrinfo -pv), upon startup there were still only two redo workers.

    $ psrinfo -p
    $ psrinfo -pv
    The physical processor has 1 virtual processor (0)
    UltraSPARC-III (portid 0 impl 0x14 ver 0x3e clock 900 MHz)
    The physical processor has 1 virtual processor (1)
    UltraSPARC-III (portid 1 impl 0x14 ver 0x3e clock 900 MHz)

    More Redo Workers: Adaptive Behavior?
    Looking closely at the Solaris LGWR trace file I repeatedly saw this:

    Created 2 redo writer workers (2 groups of 1 each)
    kcrfw_slave_adaptive_updatemode: scalable->single group0=375 all=384 delay=144 r

    *** 2014-12-08 11:33:39.201
    Adaptive scalable LGWR disabling workers
    kcrfw_slave_adaptive_updatemode: single->scalable redorate=562 switch=23

    *** 2014-12-08 15:54:10.972
    Adaptive scalable LGWR enabling workers
    kcrfw_slave_adaptive_updatemode: scalable->single group0=1377 all=1408 delay=113

    *** 2014-12-08 22:01:42.176
    Adaptive scalable LGWR disabling workers

    It looks to me like Oracle has programmed in some sweeeeet logic to adapt the number of redo workers based on the redo load.

    So I created six Oracle sessions that simply inserted rows into a table and ran all six at the same time. But it made no difference in the number of redo workers. No increase or decrease or anything! I let this DML load run for around five minutes. Perhaps that wasn't long enough, the load was not what Oracle was looking for, or something else. But the number of redo workers always remained at two.

    Summary & Conclusions
    It appears at instance startup the default number of Oracle Database 12c redo workers is two. It also appears that Oracle has either already built or is building the ability for Oracle to adapt to changing redo activity by enabling and disabling redo workers. Perhaps the number of physical CPUs (not CPU cores but physical CPUs) plays a part in this algorithm.

    While this was not my research objective, I did discover a way to set the number of redo workers back to the traditional single LGWR background process.

    While I enjoyed doing the research for this article, it was disappointing that I was unable to influence Oracle to increase the number of redo workers. I sure hope Oracle either gives me control or the adaptive behavior actually works. If not, two redo workers won't be enough for many Oracle systems.

    All the best in your Oracle performance endeavors!


    Categories: DBA Blogs

    OBPM versus BPEL, That's the Question

    Jan Kettenis - Sun, 2014-12-07 12:20
    Recently I was pointed to the so-called Oracle Learning Streams http://education.oracle.com/streams which provide short presentations on all kind of topics.

    While ironing my clothes on a Sunday afternoon, I watched one with the title "Leveraging OBPM vs BPEL" by David Mills. An excellent story where he explains in less than 13 minutes the high-level difference using a practical example.

    One reason I like this stream is that it is in line with what I have been preaching for years already. Otherwise I would have told you it sucked, obviously.

    The main point David makes is that you should use the right tool for the right job. OBPM aims at orchestrating business functions, whereas BPEL aims at orchestrating system functions. The example used is an orchestration of system functions to compose an Update Customer Profile service, which then can be used in a business process, orchestrating business functions where one person is involved to approve some update, while someone else needs to be informed about that. Watch, and you'll see!

    For understandable reasons the presentation does not touch the (technical) details. Without any intention to explain those, one should think about differences in the language itself (for example, in BPEL you cannot create loops, while in BPMN that is quite normal to do), and also in the area of configuration and tuning (for example, in the case of BPEL there are more threads to tune, and you can do in-memory optimization, etc.).

    Maybe I will find some time to give you more detailed insight into those differences. It would help if you expressed your interest by leaving a comment!

    UKOUG 2014 : Are you there?

    Angelo Santagata - Fri, 2014-12-05 09:55

    I'm going to be at UKOUG next week helping out with the AppsTech 2014 Apps "Just Do It Workshop"...

    Are you going to be there? If so, come and find me on Monday in the Executive Rooms. Tuesday/Wednesday I'll be a "participant", attending the various presentations on Cloud, integration technologies, Mobile and ADF. Come and find me :-)


    Getting JDeveloper HttpAnalyzer to easily work against SalesCloud

    Angelo Santagata - Fri, 2014-12-05 09:48

    Hey all

    Little tip here. If you're trying to debug some Java code working against Sales Cloud, one of the tools you might try to use is the HTTP Analyzer. Alas, I couldn't get it to recognize the Oracle Sales Cloud security certificate, and the current version of JDeveloper doesn't give you an option to ignore the certificate.

    However, there is a workaround: simply start JDeveloper using a special flag which tells JDeveloper's HTTP Analyzer to trust everybody!

    jdev -J-Djavax.net.ssl.trusteverybody=true

    Very useful… and obviously for testing and development it's OK, but not for anything else.
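    If you're writing standalone Java test code (outside JDeveloper) against an endpoint whose certificate your JVM doesn't trust, the same "trust everybody" effect can be sketched in plain JSSE. This is my own illustrative snippet, not anything Sales Cloud specific, and like the flag above it is strictly for dev/test:

    ```java
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManager;
    import javax.net.ssl.X509TrustManager;
    import java.security.cert.X509Certificate;

    public class TrustEverybody {
        // DEV/TEST ONLY: install an SSLContext whose trust manager accepts any
        // server certificate, mirroring what the trusteverybody flag does for
        // the HTTP Analyzer. Never ship this in production code.
        public static void install() throws Exception {
            TrustManager[] trustAll = new TrustManager[] {
                new X509TrustManager() {
                    public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
                    public void checkClientTrusted(X509Certificate[] certs, String authType) { }
                    public void checkServerTrusted(X509Certificate[] certs, String authType) { }
                }
            };
            SSLContext sc = SSLContext.getInstance("TLS");
            sc.init(null, trustAll, new java.security.SecureRandom());
            HttpsURLConnection.setDefaultSSLSocketFactory(sc.getSocketFactory());
            // Also skip hostname verification in test environments.
            HttpsURLConnection.setDefaultHostnameVerifier((hostname, session) -> true);
        }
    }
    ```

    Call `TrustEverybody.install()` once before opening any HTTPS connections in your test harness.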

    For more information, please see this Doc reference.

    What is the Oracle Audit Vault?

    Oracle Audit Vault is aptly named; the Oracle Audit Vault is a vault in which data about audit logs is placed, and it is based on two key concepts.  First, Oracle Audit Vault is designed to secure data at its source.  Second, Oracle Audit Vault is designed to be a data warehouse for audit data. 

    The Oracle Audit Vault by itself does not generate audit data.  Before the Oracle Audit Vault can be used, standard auditing needs to be first enabled in the source databases.  Once auditing is enabled in the source databases, the Oracle Audit Vault collects the log and audit data, but does not replicate, copy and/or collect the actual data.  This design premise of securing audit data at the source and not replicating it differentiates the Oracle Audit Vault from other centralized logging solutions. 

    Once log and audit data is generated in source databases, Oracle Audit Vault agents are installed on the source database(s) to collect the log and audit data and send it to the Audit Vault server.  By removing the log and audit data from the source system and storing it in the secure Audit Vault server, the integrity of the log and audit data can be ensured, and it can be proven that the data has not been tampered with.  The Oracle Audit Vault is designed to be a secure data warehouse of log and audit data.

    Application Log and Audit Data

    For applications, a key advantage to the Audit Vault’s secure-at-the-source approach is that the Oracle Audit Vault is transparent.  To use the Oracle Audit Vault with applications such as the Oracle E-Business Suite or SAP, standard Oracle database auditing only needs to be enabled on the application log and audit tables.  While auditing the application audit tables might seem duplicative, the advantage is that the integrity of the application audit data can be ensured (proven that it has not been tampered with) while not having to replicate or copy the application log and audit data. 

    For example, the Oracle E-Business Suite has the ability to log user login attempts, both successful and unsuccessful.  To protect the E-Business Suite login audit tables, standard Oracle database auditing first needs to be enabled.  An Oracle Audit Vault agent will then collect information about the E-Business Suite login audit tables.  If any deletes or updates occur to these tables, the Audit Vault would then alert and report the incident.  The Audit Vault is transparent to the Oracle E-Business Suite; no patches are required for the Oracle E-Business Suite to be used with the Oracle Audit Vault.
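    To make the "enable standard auditing first" step concrete, here is a minimal sketch of the `AUDIT` statements a DBA would run (via SQL*Plus or JDBC) against application audit tables before Audit Vault collection starts. The helper class and the use of `APPLSYS.FND_LOGINS` (the E-Business Suite sign-on audit table) are my own illustrative choices; confirm the exact tables and audit options for your environment:

    ```java
    import java.util.List;
    import java.util.stream.Collectors;

    public class AuditDdl {
        // Build standard-auditing DDL for each application audit table.
        // BY ACCESS writes one audit record per audited statement execution.
        public static List<String> auditStatements(List<String> tables) {
            return tables.stream()
                .map(t -> "AUDIT SELECT, INSERT, UPDATE, DELETE ON " + t + " BY ACCESS")
                .collect(Collectors.toList());
        }
    }
    ```

    For example, `AuditDdl.auditStatements(List.of("APPLSYS.FND_LOGINS"))` yields the single statement `AUDIT SELECT, INSERT, UPDATE, DELETE ON APPLSYS.FND_LOGINS BY ACCESS`, which a DBA would then execute on the source database.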

    Figure 1 Secure At-Source for Application Log and Audit data

    Figure 2 Vault of Log and Audit Data

    If you have questions, please contact us at info@integrigy.com.

    Auditing, Oracle Audit Vault
    Categories: APPS Blogs, Security Blogs

    Getting Started with Oracle Fusion Cloud Integrations

    Angelo Santagata - Thu, 2014-12-04 12:32

    Updated: 4-May-2015

    Hey all,

    If you're getting started with integrating your application with Oracle Fusion Cloud, then I wholeheartedly recommend you read the following resources before starting. Most of the below is specific to Oracle Sales Cloud because it has App Composer; however, much of it is also applicable to HCM, ERP and other Fusion products.

    Some of these are a MUST read before you start integrating/coding/customizing :-) I've put them here in the order I think works for most people... kinda like a getting-started checklist.

    I consider this a living blog entry, in that I'll be updating it on a regular basis, so make sure you periodically check this location.

    Top 5 Fusion Integrations Must Reads 

    1. Familiarise yourself with the Sales Cloud documentation. Specifically:
      • Go through the "User" section, documents like the "Using Sales Cloud" book. If you're a techie like me you'll sit there and think, "Hey, this is functional, why do I need to read this?" Well, you do. Even as a technical person, reading through the various user documents like "Using Sales Cloud" as an end user helps you understand what the different concepts/topics are. You'll also understand things like the difference between a Prospect and a Sales Account, territories, assessments and much more. It's worth a quick read, but do make sure you have a functional consultant on hand to make sure you're not building something which can be done by configuration...
      • Read through all the books in the "Extensibility" section. The only anomaly here is the "Business Card Scanner mobile App" document. It's a walk-through of how to integrate Sales Cloud with a 3rd party service to do business card scanning with MAF... I'd leave that till last.
      • Peruse the Development section. This section contains a number of example use cases, e.g. how to create a customer in R8, or how to call an outbound service; it's a good read.
    2. Get an overview of the tasks you might do...
      • Once you've done this, look at the "Tasks" section of the docs. Here the curriculum development folk have categorised some of the most common tasks and put shortcuts to the documentation detailing how to do them, e.g. adding a field to Sales Cloud, calling a SOAP web service, etc.
    3. Are you going to be customizing the Sales Cloud User Interface?
      • Many Sales Cloud integrations involve customizing the Sales Cloud User Interface. The customization could be as simple as adding a few fields to a standard object (like Opportunity), creating new objects (like MyOrder), validation or adding external content to one or many pages.
      • If you're adding fields, make sure you read the "Introduction to Sales Cloud Customizations" section.
      • If you will be adding validation, triggers or calling webservices from Sales Cloud then make sure you read up on groovy scripting, and specifically the chapter on calling outbound SOAP webservices from groovy.
      • Make sure you understand the difference between calling a SOAP Service from groovy and creating an outbound webservice call using object workflows
        • In a nutshell, calling SOAP services from Groovy is a synchronous call, while calling a SOAP service from an object workflow is a fire-and-forget asynchronous call.
        • If you need to make sure an outbound webservice call is executed successfully then call the outbound webservice from a groovy script and surround it with an exception handler to catch any errors
      • On the subject of Groovy, be aware that in Sales Cloud you do not have access to the entire Groovy language; we only support a number of Groovy functions (white-listing), and these are documented at the end of the book, in Appendix A, Supported Groovy Classes and Methods.
    4. Are you going to be accessing data in Sales Cloud from the external app?
      • If you think you will be calling SOAP web services in Sales Cloud, then "Getting started with WebServices" is a MUST read. This doc goes into detail on how to look up the SOAP web service in Fusion OER, how to create static proxies, how to query data, and how to perform CRUD operations.
      • Get to know Oracle Fusion OER; it's a gold mine of information.
      • Read Arvind's excellent "Invoking Sales Cloud SOAP Services from external Applications (part 1)" entry on the A-Team Chronicles blog. It describes the steps around looking up a SOAP service (e.g. Opportunities) and then creating a SOAP JAX-WS static proxy using JDeveloper 11g. I personally would use the JAX-WS proxy approach (vs. the Data Control) and then build Java code around it to support your application.
    5. Do you need your app to know who is calling it? 
      • Many integrations involve embedding a 3rd party web app into Oracle Sales Cloud as an iFrame, or pressing a button in Sales Cloud and calling the 3rd party app (either a UI or a web service call). If you're doing this, then you'll almost certainly need to pass a "token" to the 3rd party application so that it can call back to Sales Cloud with a key rather than a plain-text username/password combo. This key is a JWT token, and it's based on industry standards (http://jwt.io/). For starters, read my JWT getting-started blog entry and then use the links to read the core documentation.
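    As a quick feel for what such a token contains, here is a minimal sketch of decoding a JWT payload in plain Java so the receiving app can inspect the claims (e.g. the calling username). This is my own illustrative helper, not a Sales Cloud API, and it deliberately does NOT verify the signature, which a real integration must still do against the issuer's key:

    ```java
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    public class JwtPeek {
        // Decode (without verifying!) the payload segment of a JWT.
        // A JWT is three base64url-encoded segments separated by dots:
        // header.payload.signature — the claims live in the middle one.
        public static String payloadJson(String jwt) {
            String[] parts = jwt.split("\\.");
            if (parts.length < 2) throw new IllegalArgumentException("not a JWT");
            byte[] json = Base64.getUrlDecoder().decode(parts[1]);
            return new String(json, StandardCharsets.UTF_8);
        }
    }
    ```

    For instance, a token whose payload segment encodes `{"sub":"jsmith"}` would have `payloadJson` return exactly that JSON string; the 3rd party app would then use the token itself (not just the decoded claims) when calling back into Sales Cloud.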

    That covers the top 5 areas of integration. Now for a list of locations where you can get even MORE useful information:

    More Information

    1. Oracle Learning Centres Quick Webinars on SalesCloud Integration
      • I worked with Development to get this mini tutorial series done; it's excellent, but I'm obviously not biased, eh ;-)
    2. R9 Simplified WebServices doc
      • This is a new document we recently completed on how to use the new R9 Simplified SOAP TCA services. Although the document is targeted at R9 developers, it covers many of the standard topics, like how to create a proxy, how to create a create operation, etc. It even has some sample CRUD payloads which are really useful.
    3. Oracle Fusion Developer Relations
      1. Blog: good friends of mine, they host a fantastic blog for Fusion developers, another gold mine of information covering customization, extension and integration code.
      2. YouTube channel: not content with an awesome blog, the Developer Relations folk also have a YouTube channel where they host a collection of short "tutorials", showing all sorts such as "How to add a field to a page", "How to call a webservice", etc.
      3. Whitepapers: the Oracle Fusion Developer Relations whitepapers.
    4. And finally, there is my humble blog (which you are reading now), where I try to blog on things which aren't documented anywhere else. If they are documented and are interesting, I often link to them, mainly because I want to be able to find them again myself :-)

    That's it, folks!

    If there are blog entries you'd like to see, or specific how to's, then feel free to contact me at angelo.santagata@oracle.com


