Feed aggregator

What does an App Cost?

Bradley Brown - Sat, 2014-12-20 17:59
People will commonly ask me this question, and the answer has a very wide range.  You can get an app built on oDesk for nearly free - i.e. $2,000 or less.  Will it provide the functionality you need?  It might!  Do you need a website that does the same thing?  Do you need a database (i.e. something beyond the app) to store your data for your customers?

Our first round of apps at InteliVideo cost us $2,000-10,000 each to develop.  We spent a LOT of money on the backend server code.  Our first versions were pretty fragile (i.e. broke fairly easily) and weren't very sexy.  We decided that we needed to revamp our apps from stem to stern...APIs to easy branding to UI.

Here's a look at our prior version.  Our customers (people who buy videos) aren't typically buying from more than 1 of our clients - yet.  But in the prior version I saw a list of all of the products I had purchased.  It's not a very sexy UI - just a simple list of videos:


When I drilled into a specific product, again I saw a list of videos within the product:


I can download or play a video in a product:


Here's what it looks like for The Dailey Method:



Here's the new version demonstrating the branding for Chris Burandt.  I've purchased a yearly subscription that currently includes 73 videos.  I scroll (right, not down) through those 73 videos here:


Or if I click on the title, I get to see a list of the videos in more detail:


Notice the colors (branding) are shown everywhere here.  I scrolled up to look through those videos:


Here's a specific video that talked about a technique to get your sled unstuck:


Here's what the app looks like for a The Dailey Method customer.  Again, notice the branding everywhere:


Looking at a specific video and its details:


We built native apps for iOS (iPad, iPhone, iPod), Android, Windows and Mac that all have the same look, feel, functionality, etc.  This was a MAJOR undertaking!

The good news is that if you want to start a business and build an MVP (Minimum Viable Product) to see if there is actually a market for your product, you don't have to spend hundreds of thousands to do so...but you might have to later!


PeopleTools 8.54 Feature: Support for Oracle Database Materialized Views

Javier Delgado - Fri, 2014-12-19 17:04
One of the new features of PeopleTools 8.54 is support for Oracle Database Materialized Views. In a nutshell, a Materialized View can be seen as a snapshot of a given view. When you query a Materialized View, the data is not necessarily accessed online; instead, it is retrieved from the latest snapshot. This can greatly improve query performance, particularly for complex SQL or Pivot Grids.

Materialized Views Features
Apart from the performance benefits associated with them, one of the most interesting features of Materialized Views is how the data refresh is handled. Oracle Database supports two ways of refreshing data:


  • On Commit: data is refreshed whenever a commit takes place in any of the underlying tables. In a way, this method is equivalent to maintaining, through triggers, a staging table (the Materialized View) that is updated whenever the source tables change, but all this complexity is hidden from the developer. Unfortunately, this method is only available for join-based or single-table aggregate views.

Although it has the benefit of providing almost real-time information, normally you would use On Commit for views based on tables that do not change very often. Because the Materialized View is refreshed every time a commit is made, insert, update and delete performance on the source tables will be affected. Hint: you would normally use the On Commit method for views based on Control tables, not Transactional tables.
  • On Demand: data is refreshed on demand. This option is valid for all types of views, and implies that the Materialized View data is only refreshed when requested by the administrator. PeopleTools 8.54 includes a page named Materialized View Maintenance where the on-demand refreshes can be configured to run periodically.




If you choose the On Demand method, the refresh itself can be performed in two different ways:


  • Fast, which just refreshes the rows in the Materialized View affected by the changes made to the source records.


  • Full, which fully recalculates the Materialized View contents. This method is preferable when large volumes of changes are usually performed against the source records between refreshes. Also, this option is required after certain types of updates on the source records (i.e. INSERT statements using the APPEND hint). Finally, this method is required when one of the source records is also a Materialized View and has been refreshed using the Full method. (A sketch of the underlying syntax follows this list.)
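To make the two modes concrete, here is a minimal sketch of the underlying Oracle syntax (the table and view names are hypothetical; in PeopleTools, Application Designer generates this DDL for you):

-- A materialized view log is needed for Fast refresh
CREATE MATERIALIZED VIEW LOG ON employees
  WITH ROWID (department_id, salary) INCLUDING NEW VALUES;

-- On Commit: a single-table aggregate view, kept in sync at each commit
CREATE MATERIALIZED VIEW emp_salary_mv
  REFRESH FAST ON COMMIT AS
  SELECT department_id, SUM(salary) AS total_salary,
         COUNT(*) AS row_cnt, COUNT(salary) AS sal_cnt
    FROM employees
   GROUP BY department_id;

-- On Demand: recalculated only when a refresh is explicitly requested
CREATE MATERIALIZED VIEW dept_mv
  REFRESH COMPLETE ON DEMAND AS
  SELECT * FROM departments;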


How can we use them in PeopleTools?
Before PeopleTools 8.54, Materialized Views could be used as an Oracle Database feature, but the DBA would be responsible for editing the Application Designer build scripts to include the specific syntax for this kind of view. On top of that, the DBA would need to schedule the data refresh directly in the database.

PeopleTools 8.54 introduces support within the PeopleSoft tools. To begin with, Application Designer now shows new options for View records:



We have already seen what Refresh Mode and Refresh Method mean. The Build Options tell Application Designer whether the Materialized View data needs to be calculated when the build is executed or whether it can be delayed until the first refresh is requested from the Materialized View Maintenance page.

This page is used to determine when to refresh the Materialized Views. The refresh can be executed for multiple views at once and scheduled using the usual PeopleTools Process Scheduler recurrence features. Alternatively, the Refresh Interval [seconds] may be used to tell the database that the view needs to be refreshed every n seconds.
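Under the covers, an on-demand refresh boils down to a call to DBMS_MVIEW.REFRESH, which the page above schedules for you. A sketch, using the hypothetical view from the earlier example:

SQL> -- method => 'F' requests a Fast refresh, 'C' a Complete (Full) one
SQL> EXEC DBMS_MVIEW.REFRESH('DEPT_MV', method => 'C');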

Limitations
The main disadvantage of using Materialized Views is that they are specific to Oracle Database. On any other platform, the record is built as a normal view, which keeps a similar functional behaviour but without the performance advantages of Materialized Views.

Conclusions
All in all, Materialized Views are a very interesting feature for improving system performance while keeping the information reasonably up to date. Personally, I wish I'd had this feature available for many of the reports I've built over the years... :-)

Do You Really Need a Content Delivery Network (CDN)?

Bradley Brown - Fri, 2014-12-19 10:39
When I first heard about Amazon's offering called CloudFront I really didn't understand what it offered and who would want to use it.  I don't think they initially called it a content delivery network (CDN), but I could be wrong about that.  Maybe it was just something I didn't think I needed at that time.

Amazon states it well today (as you might expect).  The offering "gives developers and businesses an easy way to distribute content to end users with low latency, and high data transfer speeds."

So when you hear the word "content" what is it that you think about?  What is content?  First off, it's digital content.  So...website pages?  That's what I initially thought of.  But it's really any digital content.  Audio books, videos, PDFs - files of any type, any size.

When it comes to distributing this digital content, why would you need to do this with low latency and/or high transfer speeds?  Sure, this is important if your website traffic scales up from 1-10 concurrent viewers to millions overnight.  How realistic is that for your business?  What about the other types of content - such as videos?  Yep, now I'm referring to what we do at InteliVideo!

A CDN allows you to scale up to any number of customers viewing or downloading your content concurrently.  Latency translates to "slowness" when you're in Japan trying to download a video, because the file has to move across the ocean.  The way Amazon handles this is to move the file between their data centers over their fast pipes (high-speed internet), so the customer effectively downloads the file directly from Japan.

Imagine that you have this amazing set of videos that you want to bundle up and sell to millions of people.  You don't know when your sales will go viral, but when it happens you want to be ready!  So how do you implement a CDN for your videos, audios, and other content?  Leave that to us!

So back to the original question.  Do you really need a content delivery network?  Well...what if you could get all of the benefits of having one without having to lift a finger?  Would you do it then?  Of course you would!  That's exactly what we do for you.  We make it SUPER simple - i.e. it's done 100% automatically for our clients and their customers.  Do you really need a CDN?  It depends on how many concurrent people are viewing your content and where they are located.

For my Oracle training classes that I offer through BDB Software, I have customers from around the world, which I personally find so cool!  Does BDB Software need a CDN?  It absolutely makes for a better customer experience and I have to do NOTHING to get this benefit!

What Do Oracle Audit Vault Collection Agents Do?

The Oracle Audit Vault is installed on a server, and collector agents are installed on the hosts running the source databases.  These collector agents communicate with the audit vault server. 

If the collection agents are not active, no audit data is lost, as long as the source database continues to collect the audit data.  When the collection agent is restarted, it will capture the audit data that the source database had collected during the time the collection agent was inactive.

There are three types of agent collectors for Oracle databases.  There are other collectors for third-party database vendors such as SAP Sybase, Microsoft SQL-Server, and IBM DB2.

Audit Vault Collectors for Oracle Databases*

• DBAUD - collects the database audit trail.
  For standard audit records: the AUDIT_TRAIL initialization parameter is set to DB or DB, EXTENDED.
  For fine-grained audit records: the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.DB or DBMS_FGA.DB + DBMS_FGA.EXTENDED.

• OSAUD - collects the operating system audit trail.
  For standard audit records: the AUDIT_TRAIL initialization parameter is set to OS, XML, or XML, EXTENDED.
  For syslog audit trails: AUDIT_TRAIL is set to OS and the AUDIT_SYS_OPERATIONS parameter is set to TRUE.  In addition, the AUDIT_SYSLOG_LEVEL parameter must be set.
  For fine-grained audit records: the audit_trail parameter of the DBMS_FGA.ADD_POLICY procedure is set to DBMS_FGA.XML or DBMS_FGA.XML + DBMS_FGA.EXTENDED.

• REDO - collects from the redo log files.
  The table that you want to audit must be eligible.  See "Creating Capture Rules for Redo Log File Auditing" for more information.

 *Note: if using Oracle 12c, the assumption is that Mixed Mode Unified Auditing is being used.
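To illustrate the fine-grained case, here is a sketch of a policy whose records land in the database audit trail and are therefore picked up by the DBAUD collector (the schema, table, and threshold are illustrative, echoing the HR.SALARY example in the companion post below):

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'SALARY',
    policy_name     => 'SALARY_WATCH',
    audit_condition => 'PROPOSED_SALARY > 500000',
    audit_column    => 'PROPOSED_SALARY',
    statement_types => 'SELECT',
    audit_trail     => DBMS_FGA.DB + DBMS_FGA.EXTENDED);
END;
/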

If you have questions, please contact us at info@integrigy.com.

Reference
Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

Elephants and Tigers - V8 of the Website

Bradley Brown - Thu, 2014-12-18 21:54
It's amazing how much work goes into a one-page website these days!  We've been working on the new version of our website (which is basically one page) for the last month or so.  The content is the "easy" part; the look and feel / experience is the time-consuming part.  To put it another way, it's all about the entire experience, not just the text/content.

Since we're a video company, it's important that the first page shows some video...which required production and editing.  We're hunting elephants, so we need to tell the full story of the implementations that we've done for our large clients.  What all can you sell on our platform?  A video?  Audio books?  Movies?  TV Shows?  What else?  We needed to talk about our onboarding process for the big guys.  What does the shopping cart integration look like?  We have an entirely new round of apps coming out soon, so we need to show those off.  We need to answer the question of "What do our apps look like?"  Everybody wants analytics, right?  You want to know who watched what - for how long, when and where!  What about all of the ways you can monetize - subscriptions (SVOD), transactional (TVOD) - rentals and purchases, credit-based purchases, and more?  What about those enterprises who need to restrict (or allow) viewing based on location?

Yes, it's quite a story that we've learned over the past few years.  Enterprises (a.k.a. Elephants) need it all.  We're "enterprise guys" after all.  It's natural for us to hunt Elephants.

Let's walk through this step-by-step.  In some ways it's like producing a movie: a lot of moving parts, a lot of post-editing, and ultimately it comes down to the final cut.

What is it that you want to deliver?  Spoken word?  TV Shows?  Training?  Workouts?  Maybe you want to jump right into why digital, how to customize or other topics...


Let's talk about why to go digital.  Does it seem obvious to you?  It's not obvious to everyone.  Companies are still selling a lot of DVDs.


Any device, anywhere, any time!  That's how your customers want the content.


We have everything from APIs to Single Sign On, and SO much more...we are in fact an enterprise solution.


It's time to talk about the benefits.  We have these awesome apps that we've spent a fortune developing, allowing our clients to have a full branding experience, as you see here for UFC FIT.


We integrate with most of our large customers' existing shopping carts.  We simply receive an instant payment notification from them to authorize a new customer.


I'm a data guy at heart, so we track everything about who's watching what, where they are watching from and so much more.  Our analytics reporting shows you this data.  Ultimately this leads to strategic upsells to existing customers.  It's always easier to sell to someone who's already purchased than to a new customer.


What website would be complete without a full list of client testimonials?


If you can dream up a way to monetize your content, we can implement it.  From credit-based subscription systems to straight-out purchases...we have it all!


What if you want to sell through affiliates?  How about selling the InteliVideo platform as an affiliate?  Our founders came from ClickBank, so we understand Affiliate payments and how to process them.


Do you need a step-by-step guide to our implementation process?  Well...if so, here you have it!  It's as simple as 5 steps.  For some customers this is a matter of hours and for others it's months.  The first step is simply signing up for an InteliVideo account at: http://intelivideo.com/sign-up/ 


We handle payment processing for you if you would like.  But...most big companies have already negotiated their merchant processing rates AND they typically already have a shopping cart.  So we integrate as needed.


Loading up your content is pretty easy with our platform.  Then again, we have customers with as few as one product and others with thousands of products and 10s of thousands of assets (videos, audio files, etc.).  Most of our big customers simply send us a drive.  We have a bulk upload process where you give us your drive and all of the metadata (descriptions) and the mapping of each...and we load it all up for you.


Our customers can use our own sales pages and/or membership area...or we have a template engine that allows for comprehensive redesign of the entire look and feel.  Out of the box implementations are simple...


Once our clients sign off on everything and our implementation team does as well...it's time to buy your media, promote your products and start selling.  We handle the delivery.


For those who would like to sign up or need more information, what website would be complete without a contact me page?  There are other pages (like our blog, about us, etc), but this page has a lot of information.  It's a story.  At the bottom of the page there is a "Small Business" link, which takes you to the prior version of our website...for small businesses.


As I said at the beginning of this blog post...it's amazing how much thought goes into a new web page!  We're very excited about our business.  Hopefully this post helped you think through how you want to tell the stories about your business.  How should you focus on your elephants and tigers?  How often should you update your website?  Go forth and crush it!

This new version of our website should be live in the next day or so...as always, I'd love to hear your feedback!

Elephant Hunting

Bradley Brown - Wed, 2014-12-17 11:00
Most every startup that I've watched (and been part of) has grand plans of virality.  Build it and they will not just come to you, but they will flock to you!  There is a dream that what you have built is going to change the world and it's going to be so obvious to everyone that they will want to share the news with all of their friends.  It's a good dream and there is a dose of reality that hits you dead in the face at some point.

When I started InteliVideo, it seemed SO clear to me that we had developed an amazing offering that everyone would tell all of their friends about.  It was also clear to me that all of my friends who were in the training business (doing training in person or virtually - via WebEx) would choose to start offering their training through our platform.  After all, they have a brand, they have a customer base and they want to provide their training to their customers.  They certainly don't want to put their training on YouTube and serve it up for free.  They certainly don't want their customers to watch their training and then, at the end of the video, see 10 of their competitors' videos to choose from.  This seemed so obvious to me.  But...it clearly wasn't clear to them, because they didn't flock to our platform - even though I offered it to them repeatedly.

After all, I knew just how easy it was for me to create my content (i.e. record a video lesson), bundle a series of lessons into a product, set a price and away I went, selling my training online.  I knew just how excited and energized many of my students were to be able to watch my training.  They could watch it one time or 1000 times - at their own learning pace.  I could see their progress!  In fact, many of my students came to me and asked for additional custom lessons, which I charged them a consulting fee to produce (i.e. $200 for one lesson).  I set up the lesson at $200 in the platform (without any videos in it), asked them to pay for the lesson, then I recorded it and attached it to the product...and reduced the price of the lesson for future purchasers to $15-25.  In other words, I created new content for a fee AND I was able to sell it time and time again.

You see, I've written 6 technical books (on average about 1000 pages each) that each took 6-12 months of my life to write.  Sure, a book generates credibility in a subject area, but it doesn't generate a lot of direct revenue.  Recording and then selling a video-based course requires less than one one-hundredth of the effort of writing a book, for the same - actually better - output.

Where am I going with all of this?  Well...after trying to convince 1000s of small business owners that they should use our platform, offering them free trials to see just how easy it is, and talking endlessly about what's in it for them, we concluded that this futile pursuit of virality is insanity.  The common definition I hear for insanity is doing the same thing and expecting different results.  We continued to try to convince people - sure, with more convincing messages - but the "conversion rate" (the number of people who signed up and were successful) was not good.

When we stopped and looked at who our real customers are - the ones that generate real revenue - we quickly discovered that they are what we might refer to as elephants: big companies who completely understand how to develop, curate, sell, and ultimately deliver valuable content to their customers...who buy from them time and time again.

So we changed our approach and our website to communicate to the elephants.  This new approach will go live today.  The "old approach" will show up as a "Small Business" link at the bottom of the page.  The new approach explains the deeper details of integration, APIs and the things that are important to the larger companies who know how to sell their valuable content.

We've had GREAT success with our elephants and we're VERY excited about where they are taking us!  We have a TON of new functionality that we continue to roll out each week.  We have integrated with a number of shopping carts.  We've created a new template system that will allow us to create a completely different look and feel for each of our clients.  We're launching a whole new series of brandable apps in the next few weeks.  We completely understand just how important our apps are to our success and have spent a fortune recreating our apps from the ground up.

It's been an exceptionally gifted ride over the last year.  We finalized our series A round this summer. Startups are an adrenaline junkie's dream job.  One day you're riding high on your laurels of success and the next day you're wondering how you're going to get to a cash flow positive position.  All the while, life, real life goes on.  Your family continues to age, grow up, build their own businesses and maybe you're not out having as much "fun" as you might like to.  For me that translates to not riding my dirt bike or snowmobile as much as I would like.  But I'm having fun in the business - that's the tradeoff.

That's what I call opportunity cost.  Each day you could be doing what you're doing or something else.  Take a minute to think about the cost of what you're doing right now.  Should you be hunting virality or elephants?

Oracle E-Business Suite Database 12c Upgrade Security Notes

When upgrading the Oracle E-Business Suite database to Oracle Database 12c (12.1), there are a number of security considerations and steps that should be included in the upgrade procedure.  Oracle Support Note ID 1524398.1 Interoperability Notes EBS 12.0 or 12.1 with RDBMS 12cR1 details the upgrade steps.  Here, we will document steps that should be included or modified to improve database security.  All references to steps are the steps in Note ID 1524398.1.

Step 8

"While not mandatory for the interoperability of Oracle E-Business Suite with the Oracle Database, customers may choose to apply Database Patch Set Updates (PSU) on their Oracle E-Business Suite Database ...".

After any database upgrade, the latest CPU patch (either PSU or SPU) should always be applied.  A database upgrade only includes the CPU patch that was current when the upgrade patch was released.  In the case of 12.1.0.1, the database upgrade will be current as of July 2013 and will be missing the latest five CPU patches.  Database upgrade patches reset the CPU level - so even if you had applied the latest CPU patch prior to the upgrade, the upgrade will revert the CPU patch level to July 2013.

From a security perspective, the latest PSU patch should be considered mandatory.
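To confirm the CPU level after the upgrade, the patch history recorded in the database can be checked with a query such as the following (a sketch; run as a DBA user):

SELECT action_time, action, version, comments
  FROM dba_registry_history
 ORDER BY action_time;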

Step 11

It is important to note from a security perspective that Database Vault must be disabled during the upgrade process.  Any protections enabled in Database Vault intended for DBAs will be disabled during the upgrade.

Step 15

The DMSYS schema is no longer used with Oracle E-Business Suite and can be safely dropped.  We recommend you drop the schema as part of this step to reduce the attack surface of the database and remove unused components.  Use the following SQL to remove the DMSYS user --

DROP USER DMSYS CASCADE;
Step 16

As part of the upgrade, it is a good time to review that security-related initialization parameters are set correctly.  Verify the following parameters are set (a quick verification query follows the list) -

o7_dictionary_accessibility = FALSE
audit_trail = <set to a value other than none>
sec_case_sensitive_logon = TRUE (patch 12964564 may have to be applied)
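A sketch of a query to verify all three at once (assumes access to V$PARAMETER):

SELECT name, value
  FROM v$parameter
 WHERE name IN ('o7_dictionary_accessibility', 'audit_trail', 'sec_case_sensitive_logon');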
Step 20

For Oracle E-Business Suite 12.1, the sqlnet_ifile.ora should contain the following parameter to correspond with the initialization parameter sec_case_sensitive_logon = TRUE -

SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10

Oracle E-Business Suite, DBA
Categories: APPS Blogs, Security Blogs

Is Oracle 12c REST ready?

Marcelo Ochoa - Sat, 2014-12-13 16:33
This post is a continuation of my previous post Is Oracle 11g REST Ready?, and the answer is yes.
Again, the availability of the embedded JVM in the Oracle RDBMS allows us to run a complete REST stack and application.
To show how to implement a simple Hello World application in REST, this time I decided to use the Jersey REST stack.
With Oracle 12c we have two JDKs available (1.6 and 1.7), and to compile and run Jersey we have to change from the default 1.6 and switch the RDBMS to the 1.7 JDK; follow this guide to do that, but remember that in a CDB/PDB environment switching the JDK means changing the JDK compatibility on all PDBs. Here is another good post on that topic: DB 12c update java to version 7.
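Before going further it is worth sanity-checking the JVM component registered in the database; a sketch (run as a DBA):

    SQL> select comp_name, version, status from dba_registry where comp_name like '%JAVA%';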
Once we have our RDBMS ready with JDK 1.7 we need Jersey compiled and ready to upload, here my steps:
a.- Check JDK version:
    mochoa@localhost:~$ export JAVA_HOME=/usr/local/jdk1.7
    mochoa@localhost:~$ export PATH=$JAVA_HOME/bin:$PATH
    mochoa@localhost:~$ type java
    java is /usr/local/jdk1.7/bin/java
    mochoa@localhost:~$ java -version
    java version "1.7.0_55"
    Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
    Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)
    b.- Check Maven:
    mochoa@localhost:~$ mvn -version
    Apache Maven 3.2.3 (33f8c3e1027c3ddde99d3cdebad2656a31e8fdf4; 2014-08-11T17:58:10-03:00)
    Maven home: /usr/local/apache-maven-3.2.3
    Java version: 1.7.0_55, vendor: Oracle Corporation
    Java home: /home/usr/local/jdk1.7.0_55/jre
    Default locale: en_US, platform encoding: UTF-8
    OS name: "linux", version: "3.13.0-40-generic", arch: "amd64", family: "unix"
    c.- Download and build Jersey using this guide: Building and Testing Jersey. After a successful build of Jersey, all components will be located in our local Maven repository; on Linux it is at $HOME/.m2/repository
    d.- Add a new container implementation for the Oracle XMLDB Servlet; sources can be downloaded using this link. This new container implementation is a clone of jersey-servlet-core, downgraded from Servlet 2.3 to the Servlet 2.2 API implemented by XMLDB. Here are the steps:
    mochoa@localhost:~/jdeveloper/mywork/jersey$ cd containers/
    mochoa@localhost:~/jdeveloper/mywork/jersey/containers$ tar xvfz /tmp/xdb-servlet.tar.gz
    mochoa@localhost:~/jdeveloper/mywork/jersey/containers$ cd xdb-servlet/
    mochoa@localhost:~/jdeveloper/mywork/jersey/containers/xdb-servlet$ mvn -Dmaven.test.skip=true clean install
    [INFO] Scanning for projects...
    [INFO]                                                                        
    [INFO] ------------------------------------------------------------------------
    [INFO] Building jersey-container-servlet-xdb 2.14-SNAPSHOT
    .... lot of stuff here ....
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 18.146 s
    [INFO] Finished at: 2014-12-13T17:07:21-03:00
    [INFO] Final Memory: 29M/351M
    [INFO] ------------------------------------------------------------------------
    e.- Create a new user in the RDBMS; this user will contain the whole Jersey stack:
    SQL> select tablespace_name from dba_tablespaces;
    TABLESPACE_NAME
    ------------------------------
    SYSTEM
    SYSAUX
    TEMP
    USERS
    EXAMPLE
    SQL> create user jersey identified by jersey
         default tablespace users
         temporary tablespace temp
         quota unlimited on users;
    User created.
    SQL> grant connect,resource,create public synonym to jersey;
    Grant succeeded.
    f.- Upload all libraries to the RDBMS. The list of libraries built from the Jersey sources, plus their dependencies, is:
    1. javax/servlet/servlet-api/2.2/servlet-api-2.2.jar (the version already installed in the database)
    2. javax/persistence/persistence-api/1.0/persistence-api-1.0.jar
    3. org/glassfish/hk2/external/javax.inject/2.4.0-b06/javax.inject-2.4.0-b06.jar
    4. org/glassfish/hk2/hk2-utils/2.4.0-b06/hk2-utils-2.4.0-b06.jar
    5. org/osgi/org.osgi.core/4.2.0/org.osgi.core-4.2.0.jar
    6. org/glassfish/hk2/osgi-resource-locator/1.0.1/osgi-resource-locator-1.0.1.jar
    7. org/glassfish/hk2/hk2-api/2.4.0-b06/hk2-api-2.4.0-b06.jar
    8. javax/ws/rs/javax.ws.rs-api/2.0.1/javax.ws.rs-api-2.0.1.jar
    9. org/glassfish/jersey/bundles/repackaged/jersey-guava/2.14-SNAPSHOT/jersey-guava-2.14-SNAPSHOT.jar
    10. javax/annotation/javax.annotation-api/1.2/javax.annotation-api-1.2.jar
    11. org/glassfish/jersey/core/jersey-common/2.14-SNAPSHOT/jersey-common-2.14-SNAPSHOT.jar
    12. org/glassfish/jersey/core/jersey-client/2.14-SNAPSHOT/jersey-client-2.14-SNAPSHOT.jar
    13. javax/validation/validation-api/1.1.0.Final/validation-api-1.1.0.Final.jar
    14. javassist/javassist/3.12.1.GA/javassist-3.12.1.GA.jar
    15. org/glassfish/hk2/external/aopalliance-repackaged/2.4.0-b06/aopalliance-repackaged-2.4.0-b06.jar
    16. org/glassfish/hk2/hk2-locator/2.4.0-b06/hk2-locator-2.4.0-b06.jar
    17. org/glassfish/jersey/core/jersey-server/2.14-SNAPSHOT/jersey-server-2.14-SNAPSHOT.jar
    Library [1] should never be uploaded into the RDBMS because it is part of the XMLDB implementation, so here are the steps to upload libraries [2]-[17]:

    $ cd $HOME/.m2/repository
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[2]
    Classes Loaded: 91
    Resources Loaded: 2
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 91
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[3]
    Classes Loaded: 6
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 6
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[4]
    Classes Loaded: 60
    Resources Loaded: 5
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 60
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[5]
    Some errors, but no resolving problems found.
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[6]
    Classes Loaded: 0
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 12
    Synonyms Created: 12
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[7]
    Classes Loaded: 0
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 153
    Synonyms Created: 153
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[8]
    Classes Loaded: 125
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 125
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[9]
    Classes Loaded: 1594
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 1594
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[10]
       ...Some errors not allowed in PDB, other classes OK....
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[11]
    Classes Loaded: 0
    Resources Loaded: 5
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 490
    Synonyms Created: 490
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[12]
    Classes Loaded: 99
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 99
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[13]
    Classes Loaded: 106
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 106
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[14]
    Classes Loaded: 366
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 366
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[15]
    Classes Loaded: 26
    Resources Loaded: 3
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 26
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[16]
    Classes Loaded: 0
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 92
    Synonyms Created: 92
    Errors: 0
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl ...[17]
    Classes Loaded: 0
    Resources Loaded: 16
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 652
    Synonyms Created: 652
    Errors: 0
    We can check the above loadjava commands by logging into the target database as jersey and running the following (all queries must return no rows):
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/persistence/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/inject/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/annotation/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/validation/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'javax/ws/rs/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'jersey/%';
    SQL> select dbms_java.longname(object_name) from all_objects where object_type = 'JAVA CLASS' AND status = 'INVALID'
    AND dbms_java.longname(object_name) like 'org/%';
    g.- Finally, upload our xdb-servlet container:
    $ loadjava -v -r -s -g PUBLIC -u jersey/jersey@pdborcl org/glassfish/jersey/containers/jersey-container-servlet-xdb/2.14-SNAPSHOT/jersey-container-servlet-xdb-2.14-SNAPSHOT.jar
    Classes Loaded: 39
    Resources Loaded: 4
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 0
    Synonyms Created: 39
    Errors: 0
    h.- At this point we have everything uploaded into the RDBMS; now we prepare XMLDB to run Java-implemented Servlets.
    Enabling XMLDB HTTP access:
    SQL> EXEC DBMS_XDB.SETHTTPPORT(8080);
    PL/SQL procedure successfully completed.
    SQL> alter system register;
    System altered.
    i.- By default XMLDB is configured with digest authentication. To change that, we can download the xdbconfig.xml file using FTP, update the authentication section as shown below, and upload it again using FTP (requires the SYS user):
    <authentication>
      <allow-mechanism>basic</allow-mechanism>
    </authentication>
    j.- Grants required for running the Jersey Servlet. We are using the JERSEY user here; for another account, similar grants are required, either directly or by creating a new role (recommended):
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.RuntimePermission', 'getClassLoader','');
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.RuntimePermission', 'accessDeclaredMembers', '' );
    SQL> exec dbms_java.grant_permission( 'JERSEY', 'SYS:java.lang.reflect.ReflectPermission', 'suppressAccessChecks', '' );
    SQL> exec dbms_java.grant_permission( 'JERSEY','SYS:java.util.logging.LoggingPermission', 'control', '' );
    k.- Upload a simple Hello World application from the examples directory:
    mochoa@localhost:~/jdeveloper/mywork/jersey$ cd examples/helloworld
    mochoa@localhost:~/jdeveloper/mywork/jersey/examples/helloworld$ loadjava -r -v -u jersey/jersey@pdborcl target/classes/org/glassfish/jersey/examples/helloworld/HelloWorldResource.class
    arguments: '-u' 'jersey/***@pdborcl' '-r' '-v' 'target/classes/org/glassfish/jersey/examples/helloworld/HelloWorldResource.class'
    identical: org/glassfish/jersey/examples/helloworld/HelloWorldResource
    skipping : class org/glassfish/jersey/examples/helloworld/HelloWorldResource
    Classes Loaded: 0
    Resources Loaded: 0
    Sources Loaded: 0
    Published Interfaces: 0
    Classes generated: 0
    Classes skipped: 1
    Synonyms Created: 0
    Errors: 0
    l.- Register the Servlet in the XMLDB adapter (logged in as SYS):
    SQL> DECLARE
        configxml SYS.XMLType;
    begin
     dbms_xdb.deleteServletMapping('JerseyServlet');
     dbms_xdb.deleteServlet('JerseyServlet');
     dbms_xdb.addServlet(name=>'JerseyServlet',language=>'Java',class=>'org.glassfish.jersey.servlet.ServletContainer',dispname=>'Jersey Servlet',schema=>'jersey');
    SELECT INSERTCHILDXML(xdburitype('/xdbconfig.xml').getXML(),
      '/xdbconfig/sysconfig/protocolconfig/httpconfig/webappconfig/servletconfig/servlet-list/servlet[servlet-name="JerseyServlet"]',
      'init-param',
      XMLType('<init-param><param-name>jersey.config.server.provider.classnames</param-name><param-value>org.glassfish.jersey.examples.helloworld.HelloWorldResource</param-value><description>Hello World Application</description></init-param>'),
      'xmlns="http://xmlns.oracle.com/xdb/xdbconfig.xsd"') INTO configxml
    FROM DUAL;
     dbms_xdb.cfg_update(configxml);
     dbms_xdb.addServletSecRole(SERVNAME => 'JerseyServlet',ROLENAME => 'authenticatedUser',ROLELINK => 'authenticatedUser');
     dbms_xdb.addServletMapping('/jersey/*','JerseyServlet');
     commit;
    end;
    /
    m.- Finally, test the app:
    mochoa@localhost:~/jdeveloper/mywork/jersey/examples/helloworld$ curl --basic --user jersey:jersey http://localhost:8080/jersey/helloworld
    Hello World!!
    And that's all, happy 12c REST world.
    Notes on security:
    1. As you can see, when registering the Servlet we added ROLENAME => 'authenticatedUser', ROLELINK => 'authenticatedUser'; this implies that an RDBMS user name and password are required to access this Servlet. In the example we provide jersey/jersey, the owner of the Hello World app.
    2. The HTTP protocol sends the user name and password encoded as Base64 when using the basic authentication scheme; if we want to hide this information on the wire when using plain HTTP, we have to move to HTTPS.
    3. If you install another Hello World application in a different schema, for example scott, it is necessary to also upload the class ServletContainer, for example using loadjava -u scott/tiger@pdborcl org/glassfish/jersey/servlet/ServletContainer.class, and obviously our new application class, finally registering the Servlet with the tag changed to <servlet-schema>scott</servlet-schema>
    4. Servlets that run without authentication are registered using ROLENAME => 'PUBLIC', ROLELINK => 'PUBLIC', but this is NOT recommended; it requires unlocking the anonymous account and these grants:
    SQL> ALTER USER ANONYMOUS ACCOUNT UNLOCK;
    User altered.
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.RuntimePermission', 'getClassLoader','');
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.RuntimePermission', 'accessDeclaredMembers', '' );
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS', 'SYS:java.lang.reflect.ReflectPermission', 'suppressAccessChecks', '' );
    SQL> exec dbms_java.grant_permission( 'ANONYMOUS','SYS:java.util.logging.LoggingPermission', 'control', '' );



    Throw it away - Why you shouldn't keep your POC

    Rob Baillie - Sat, 2014-12-13 04:26

    "Proof of Concepts" are a vital part of many projects, particularly towards the beginning of the project lifecycle, or even in the pre-business case stages.

    They are crucial for ensuring that facts are gathered before some particularly risky decisions are made.  Technical or functional, they can address many different concerns, and each one can be different, but they all have one thing in common.  They serve to answer questions.

    It can be tempting, whilst answering these questions to become attached to the code that you generate.

    I would strongly argue that you should almost never keep the code that you build during a POC.  Certainly not to put into a production system.

    I'd go so far as to say that planning to keep the code is often damaging to the proof of concept; planning to throw the code away is liberating, more efficient and makes proofs of concept more effective by focussing minds on the questions that require answers.

    Why do we set out on a proof of concept?

    The purpose of a proof of concept is to (by definition):

      * Prove:  Demonstrate the truth or existence of something by evidence or argument.
      * Concept: An idea, a plan or intention.

    In most cases, the concept being proven is a technical one.  For example:
      * Will this language be suitable for building x?
      * Can I embed x inside y and get them talking to each other?
      * If I put product x on infrastructure y will it basically stand up?

    They can also be functional, but the principles remain the same for both.

    It's hard to imagine a proof of concept that cannot be phrased as one or more questions.  In a lot of cases I'd suggest that there's only really one important question with a number of ancillary questions that are used to build a body of evidence.

    The implication of embarking on a proof of concept is that when you start you don't know the answer to the questions you're asking.  If you *do* already know the answers, then the POC is of no value to you.

    By extension, there is the implication that the questions posed require to be answered as soon as possible in order to support a decision.  If that's not the case then, again, the POC is probably not of value to you.

    As such, the only thing that the POC should aim to achieve is to answer the question posed and to do so as quickly as possible.

    This is quite different to what we set out to do in our normal software development process. 

    We normally know the answer to the main question we're asking (How do we functionally provide a solution to this problem / take advantage of this opportunity), and most of the time is spent focussed on building something that is solid, performs well and generally good enough to live in a production environment - in essence, not answering the question, but producing software.

    What process do we follow when embarking on a proof of concept?

    Since the aim of a POC is distinct from what we normally set out to achieve, the process for a POC is intrinsically different to that for the development of a production system.

    With the main question in mind, you often follow an almost scientific process.  You put forward a hypothesis, you set yourself tasks that are aimed at collecting evidence that will support or deny that hypothesis, you analyse the data, put forward a revised hypothesis and you start again.

    You keep going round in this routine until you feel you have an answer to the question and enough evidence to back that answer up.  It is an entirely exploratory process.

    Often, you will find that you spend days following avenues that don't lead anywhere, backtrack and reassess, following a zig-zag path through a minefield of wrong answers until you reach the end point.  In this kind of situation, the code you have produced is probably one of the most barnacle-riddled messes you have ever produced.

    But that's OK.  The reason for the POC wasn't to build a codebase, it was to provide an answer to a question and a body of evidence that supports that answer.

    To illustrate:

    Will this language be suitable for building x?

    You may need to check things like whether you can build the right type of user interfaces, whether APIs can be created, and whether there are ways of organising code that make sense for the long-term maintenance of the system.

    You probably don't need to build a completely functional UI, create a fully functioning API with solid error handling or define the full set of standards for implementing a production quality system in the given language.

    That said, if you were building a production system in the language you wouldn't dream of having an incomplete UI, or an API that doesn't handle errors completely, or just knocking stuff together in an ad-hoc manner.

    Can I embed x inside y and get them talking to each other?

    You will probably need to define a communication method and prove that it basically works.  Get something up and running that is at least reasonably functional in the "through the middle" test case.

    You probably don't need to develop a clean architecture with separation of concerns that means the systems are properly independent and backwards compatible with existing integrations, or that all interactions are properly captured and that exceptional circumstances are dealt with correctly.

    That said, if you were building a production system, you'd need to ensure that you define the full layered architecture, understand the implications of lost messages, prove the level of chat that will occur between the systems.  On top of that you need to know that you don't impact pre-existing behaviour or APIs.

    If I put product x on infrastructure y will it basically stand up?

    You probably need to just get the software on there and run your automated tests.  Maybe you need to prove the performance and so you'll put together some ad-hoc performance scripts.

    You probably don't need to prove that your release mechanism is solid and repeatable, or ensure that your automated tests cover some of the peculiarities of the new infrastructure, or that you have a good set of long term performance test scripts that drop into your standard development and deployment process.

    That said, if you were building a production system, you'd need to know exactly how the deployments worked, fit it into your existing continuous delivery suite, performance test and analyse on an automated schedule.

    Production development and Proof of Concept development is not the same

    The point is, when you are building a production system you have to do a lot of leg-work; you know you can validate all the input being submitted in a form, or coming through an API - you just have to do it.

    You need to ensure that the functionality you're providing works in the majority of use-cases, and if you're working in a TDD environment then you will prove that by writing automated tests before you've even started creating that functionality.

    When you're building a proof of concept, not only should these tests be a lower priority, I would argue that they should be *no priority whatsoever*, unless they serve to test the concept that you're trying to prove.

    That is,  you're not usually trying to ensure that this piece of code works in all use-cases, but rather that this concept works in the general case with a degree of certainty that you can *extend* it to all cases.

    Ultimately, the important deliverable of a POC is proof that the concept works, or doesn't work; the exploration of ideas and the conclusion you come to; the journey of discovery and the destination of the answer to the question originally posed.

    That is intellectual currency, not software.  The important delivery of a production build is the software that is built.

    That is the fundamental difference, and why you should throw your code away.

    Paginated HTML is here and has been for some time ... I think!

    Tim Dexter - Fri, 2014-12-12 18:03

    We have a demo environment in my team and of course things get a little beaten up in there. Our go-to 'here's Publisher' report was looking really bad. Data was not returning or rendering correctly in the five templates we have for it.
    So, I spent about a half hour cleaning up the report; getting things working again; clearing out the rubbish. I noticed that one of the layouts when rendered in HTML was repeatedly showing a header down the screen. Oh, I know where to get rid of that and off I click to the report properties to fix it. But what is this I see? Is it? Can it be? Are my tired old eyes deceiving me?

    Yes, Dexter, you see that right, 'View Paginated'! I nervously changed the value to 'true' and went back to the HTML output.
    Holy Amaze Balls Batman, paginated HTML, the holy grail of HTML rendered reports, the Mount Everest of ... no, that's too easy, the K2 of HTML output ... it's fan-bloody-tastic! Can you tell I'm excited? I was immediately on messenger to Leslie (doc writer extraordinaire) 


    Obviously not quite as big a deal in the sane, real world outside of my head. 'Oh yeah, we have that now ...' Leslie is so calm and collected; however, she does like Maroon 5, but we overlook that :)

    I command you 11.1.1.6+'ers to go find the property and turn it on right now and bask in the glory that is 'paginated HTML'!
    I cannot stop clicking back and forth and then to the end and then all the way back to the beginning. It's fantastic!

    Just look at those icons, just click em, you know you want to!

    Categories: BI & Warehousing

    SDSQL - Editing Anyone?

    Barry McGillin - Fri, 2014-12-12 12:05
    Since we dropped our beta out of SQLDeveloper 4.1 and announced SDSQL, we've been busy getting some of the new things out to users.  We support SQL*Plus editing straight out of the box, but one thing that was always annoying was making a mistake and not being able to fix it until you had finished typing, then having to go back and add a line like this.


    This was always the way, as console editors didn't let you move around; the best you could hope for on the command line was a decent line editor, and anything above was printed to the screen and not accessible except through commands like those you see in the images above.

    Well, not any more.  In SDSQL we've taken a look at several things like history, aliases and colors, and we've now added a separate multiline console editor which allows you to walk up and down your buffer and make all the changes you want before executing.  Sounds normal, right?  So, that's what we did.  Have a look and tell us what you think.


    What can the Oracle Audit Vault Protect?

    For Oracle database customers the Oracle Audit Vault can protect the following:

    • SQL statement logs – Data Manipulation Language (DML) statement auditing, such as when users attempt to query the database or modify data using SELECT, INSERT, UPDATE, or DELETE (an example follows this list).
    • Database Schema Objects changes – Data Definition Language (DDL) statement auditing such as when users create or modify database structures such as tables or views.
    • Database Privileges and Changes – Auditing can be defined for the granting of system privileges, such as SELECT ANY TABLE.  With this kind of auditing, Oracle Audit Vault records SQL statements that require the audited privilege to succeed.
    • Fine-grained audit logs – Fine Grained Auditing activities stored in SYS.FGA_LOG$, such as whether an IP address from outside the corporate network is being used or whether specific table columns are being modified.  For example, when the HR.SALARY table is SELECTed using a direct database connection (not from the application), a condition could be to log the details of result sets where the PROPOSED_SALARY column is greater than $500,000 USD.
    • Redo log data – Database redo log file data.  The redo log files store all changes that occur in the database.  Every instance of an Oracle database has an associated redo log to protect the database in case of an instance failure.  In Oracle Audit Vault, the capture rule specifies DML and DDL changes that should be checked when Oracle Database scans the database redo log.
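    For reference, the statement and privilege audits in the first bullets are enabled with plain AUDIT commands, along the lines of this sketch (the scope shown is illustrative, not a recommended policy):

    SQL> AUDIT SELECT TABLE, INSERT TABLE, UPDATE TABLE, DELETE TABLE BY ACCESS;
    SQL> AUDIT SELECT ANY TABLE BY ACCESS;
    SQL> AUDIT GRANT ANY PRIVILEGE BY ACCESS;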

    The Audit Vault also supports –

    • Database Vault – Database Vault settings stored in DVSYS.AUDIT_TRAIL$ such as Realm audit, factor audit and Rule Audit. 
    • System and SYS – Core changes to the database by privileged users such as DBAs as recorded by AUDIT_SYS_OPERATIONS.
    • Stored Procedure Auditing – Monitor any changes made to PL/SQL and stored procedures.  Standard reports are provided for stored procedure operations, deleted and created procedures, as well as modification history.

    If you have questions, please contact us at info@integrigy.com.

    Reference
    Auditing, Oracle Audit Vault, Oracle Database
    Categories: APPS Blogs, Security Blogs

    Exploring DevOps with Chef and WebLogic Server

    Steve Button - Wed, 2014-12-10 20:58
    I'm about to embark on a journey that explores the use of WebLogic Server within a DevOps regime.  My first port of call for this journey will be using Chef.

    A loose travel itinerary is:
    • Setting up an environment to explore the basic operations of Chef - using the Chef Development Kit (ChefDK)
    • Exploring the basics of how Chef works to install Java and WebLogic Server on a single node
    • Installing and examining some of the existing cookbooks that are available for Java and WebLogic Server
    • Extending the environment to provision multiple nodes to create a typical multiple machine clustered WebLogic Server environment
    I've started working on the first task, where I've also explored using Docker to create an isolated, reusable and easily shareable environment that contains the ChefDK.

    The Docker project is here on GitHub:
    I also tried a quick experiment with using Oracle Linux as the base docker image:
    The Dockerfile contains the set of instructions required to install the ChefDK and the necessary utilities into the docker image when it is built.

    #
    # Dockerfile for Chef 4 WLS Environment
    #

    FROM ubuntu

    MAINTAINER Steve Button <>

    ENV DEBIAN_FRONTEND noninteractive

    # Install Utilities
    RUN apt-get update
    RUN apt-get install -yq wget
    RUN apt-get install -yq curl
    RUN apt-get install -yq git

    # Install Chef
    RUN wget https://opscode-omnibus-packages.s3.amazonaws.com/ubuntu/12.04/x86_64/chefdk_0.3.5-1_amd64.deb
    RUN dpkg -i chefdk*.deb

    # Verify and Setup Chef
    RUN chef verify
    RUN echo 'eval "$(chef shell-init bash)"' >> ~/.bashrc

    ...

    CMD ["/bin/bash"]

    With this Dockerfile, a build operation can be performed that produces a docker image, which can then be run to provide an environment in which to start exploring Chef.

    $ docker build -t buttso/chef4wls .

    $ docker run -ti buttso/chef4wls

    oracle@5481a3330f27:~$ which chef-client
    /opt/chefdk/embedded/bin/chef-client

    This is just a brief outline - I will describe this first task in more detail once I get a bit further along and can verify everything has been installed and works correctly.

    File Encoding in the Next Generation Outline Extractor

    Tim Tow - Tue, 2014-12-09 20:11
    We had a couple of issues reported with the output of the Next Generation Outline Extractor where the exported file did not work properly as a load file. After some investigation, we found that the file encoding was incorrect. We were encoding the files using the Unicode/UTF-8 format. We chose this encoding so that we could write all characters in all languages, but we did not consider that UTF-8 is only valid for loading Unicode databases in Essbase.

    To resolve this issue, we decided to add a configuration to the Next Generation Outline Extractor to allow users to select the file encoding. Here is a screenshot showing the new configuration setting.



    As of yesterday, December 8, 2014, the updated Next Generation Outline Extractor is available on our website. The first version to feature this functionality is version 2.0.2.692, which is available for all Essbase versions from Essbase 9.3.1 forward. We are also happy to announce that this version of the Next Generation Outline Extractor is the first version to support the recently released Essbase 11.1.2.3.505.

    If you encounter any issues with the Next Generation Outline Extractor, please don't hesitate to contact our support team at support@appliedolap.com.

    Categories: BI & Warehousing

    Changing The Number Of Oracle Database 12c Log Writers

    In an Oracle Database 12c instance you will likely see multiple log writer (LGWR) background processes. When you first start the Oracle instance, you will likely see a parent LGWR and two redo workers. This is a very big deal and something many of us have been waiting for - for many years!

    While I'm excited about the change, if I can't control the number of LGWRs I could easily find myself once again constrained by the lack of LGWRs!

    So, my question is: how do I manipulate the number of LGWRs from the default? And what is the default based on? It's these types of questions that led me on this quest. I hope you enjoy the read!


    Serialization Is Death
    Multiple LGWRs is great news because serialization is death to computing performance. Think of it like this: a computer program is essentially lines of code, and each line of code takes a little bit of time to execute. A CPU can only process N lines of code per second, which means every serially executing program has a maximum throughput capability. With a single log writer (LGWR) background process, the amount of redo that can be processed is similarly constrained.

    An Example Of Serialization Throughput Limitation
    Suppose a CPU can process 1000 instructions per millisecond. Also, assume that through some research a DBA determined it takes the LGWR 10 instructions to process 10 KB of redo. (I know DBAs who have taken the time to figure this stuff out.) Given these two pieces of data, how much redo can the CPU theoretically process per second?

    ? redo/sec = (1000 instr / 1 ms) * (10 KB redo / 10 instr) * (1000 ms / 1 sec) * (1 MB / 1000 KB) = 1000 MB redo/sec

    This is a best case scenario. As you can see, any sequential process can become a bottleneck. One solution to this problem is to parallelize.

    Note: Back in April of 2010 I posted a series of articles about parallelism. If you are interested in this topic, I highly recommend you READ THE POSTS.

    Very Cool! Multiple 12c LGWRs... But Still A Limit?

    Since serialization is death... and parallelism is life, I was really excited when I saw that my 12c Oracle instance by default had two redo workers in addition to the "parent" log writer. On my Oracle version 12.1.0.2.0 Linux machine this is what I see:
    $ ps -eaf|grep prod40 | grep ora_lg
    oracle 54964 1 0 14:37 ? 00:00:00 ora_lgwr_prod40
    oracle 54968 1 0 14:37 ? 00:00:00 ora_lg00_prod40
    oracle 54972 1 0 14:37 ? 00:00:00 ora_lg01_prod40

    This is important. While this is good news, unless Oracle or I have the ability to change and increase the number of LGWR redo workers, at some point the two redo workers will become saturated, bringing us back to the same serial LGWR situation. So, I want and need some control.

    Going Back To Only One LGWR
    Interestingly, starting in Oracle Database version 12.1.0.2.0 there is an instance parameter _use_single_log_writer. I was able to REDUCE the number of LGWRs to only one by setting the instance parameter _use_single_log_writer=TRUE. But that's the wrong direction I want to go!
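
    For reference, a minimal sketch of how that change can be made. Underscore parameters must be quoted, the parameter is static so a restart is needed, and hidden parameters should normally only be changed under guidance from Oracle Support:

    ALTER SYSTEM SET "_use_single_log_writer" = TRUE SCOPE = SPFILE;
    SHUTDOWN IMMEDIATE
    STARTUP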

    More Redo Workers: "CPU" Instance Parameters
    I tried a variety of CPU-related instance parameters with no success. Always two redo workers.

    More Redo Workers: Set Event...
    Using my OSM script listeventcodes.sql I scanned the Oracle events (not wait events) but was unable to find any related Oracle events. Bummer...

    More Redo Workers: More Physical CPUs Needed?
    While talking to some DBAs about this, one of them mentioned they had heard Oracle sets the number of 12c log writers based on the number of physical CPUs. Not the number of CPU cores but the number of physical CPUs. On a Solaris box with 2 physical CPUs (verified using the command psrinfo -pv), upon startup there were still only two redo workers.

    $ psrinfo -p
    2
    $ psrinfo -pv
    The physical processor has 1 virtual processor (0)
    UltraSPARC-III (portid 0 impl 0x14 ver 0x3e clock 900 MHz)
    The physical processor has 1 virtual processor (1)
    UltraSPARC-III (portid 1 impl 0x14 ver 0x3e clock 900 MHz)

    More Redo Workers: Adaptive Behavior?
    Looking closely at the Solaris LGWR trace file I repeatedly saw this:

    Created 2 redo writer workers (2 groups of 1 each)
    kcrfw_slave_adaptive_updatemode: scalable->single group0=375 all=384 delay=144 rw=7940

    *** 2014-12-08 11:33:39.201
    Adaptive scalable LGWR disabling workers
    kcrfw_slave_adaptive_updatemode: single->scalable redorate=562 switch=23

    *** 2014-12-08 15:54:10.972
    Adaptive scalable LGWR enabling workers
    kcrfw_slave_adaptive_updatemode: scalable->single group0=1377 all=1408 delay=113 rw=6251

    *** 2014-12-08 22:01:42.176
    Adaptive scalable LGWR disabling workers

    It looks to me like Oracle has programmed in some sweeeeet logic to adapt the number of redo workers based on the redo load.

    So I created six Oracle sessions that simply inserted rows into a table and ran all six at the same time. But it made no difference in the number of redo workers. No increase or decrease or anything! I let this DML load run for around five minutes. Perhaps that wasn't long enough, or the load was not what Oracle was looking for, or something else. Either way, the number of redo workers always remained at two.
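
    For context, each of those sessions ran something like the following (the table name is invented for illustration; the actual load may have differed):

    BEGIN
      FOR i IN 1 .. 1000000 LOOP
        INSERT INTO redo_load_test (id, padding)
        VALUES (i, RPAD('x', 200, 'x'));
        COMMIT;  -- commit per row to keep LGWR busy
      END LOOP;
    END;
    /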

    Summary & Conclusions
    It appears that at instance startup the default number of Oracle Database 12c redo workers is two. It also appears that Oracle has either already built, or is building, the ability to adapt to changing redo activity by enabling and disabling redo workers. Perhaps the number of physical CPUs (not CPU cores but physical CPUs) plays a part in this algorithm.

    While this was not my research objective, I did discover a way to set the number of redo workers back to the traditional single LGWR background process.

    While I enjoyed doing the research for this article, it was disappointing that I was unable to influence Oracle to increase the number of redo workers. I sure hope Oracle either gives me control or the adaptive behavior actually works. If not, two redo workers won't be enough for many Oracle systems.

    All the best in your Oracle performance endeavors!

    Craig.


    Categories: DBA Blogs

    OBPM versus BPEL, That's the Question

    Jan Kettenis - Sun, 2014-12-07 12:20
    Recently I was pointed to the so-called Oracle Learning Streams (http://education.oracle.com/streams), which provide short presentations on all kinds of topics.

    While ironing my clothes on a Sunday afternoon, I watched one with the title "Leveraging OBPM vs BPEL" by David Mills. An excellent story in which he explains the high-level difference in less than 13 minutes, using a practical example.

    One reason I like this stream is that it is in line with what I have been preaching for years already. Otherwise I would have told you it sucked, obviously.

    The main point David makes is that you should use the right tool for the right job. OBPM aims at orchestrating business functions, whereas BPEL aims at orchestrating system functions. The example used is an orchestration of system functions to compose an Update Customer Profile service, which then can be used in a business process, orchestrating business functions where one person is involved to approve some update, while someone else needs to be informed about that. Watch, and you'll see!

    For understandable reasons the presentation does not touch on the (technical) details. Without any intention to explain those here, one should think about differences in the language itself (for example, in BPEL you cannot create loops, while in BPMN that is quite normal to do), and also in the area of configuration and tuning (for example, in the case of BPEL there are more threads to tune, and you can do in-memory optimization, etc.).

    Maybe I will find some time to give you a more detailed insight into those differences. It would help if you expressed your interest by leaving a comment!

    What is the Oracle Audit Vault?

    Oracle Audit Vault is aptly named: it is a vault in which audit log data is placed, and it is based on two key concepts.  First, Oracle Audit Vault is designed to secure data at its source.  Second, Oracle Audit Vault is designed to be a data warehouse for audit data. 

    The Oracle Audit Vault by itself does not generate audit data.  Before the Oracle Audit Vault can be used, standard auditing first needs to be enabled in the source databases.  Once auditing is enabled, the Oracle Audit Vault collects the log and audit data, but it does not replicate, copy and/or collect the actual application data.  This design premise of securing audit data at the source and not replicating it differentiates the Oracle Audit Vault from other centralized logging solutions. 
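
    As a hedged sketch, enabling standard auditing in a source database typically looks something like this (AUDIT_TRAIL is a static parameter, so an instance restart is required; the audited action is just an example):

    ALTER SYSTEM SET audit_trail = 'DB,EXTENDED' SCOPE = SPFILE;
    -- after restarting the instance:
    AUDIT SESSION;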

    Once log and audit data is generated in the source databases, Oracle Audit Vault agents installed on the source database(s) collect the log and audit data and send it to the Audit Vault server.  By removing the log and audit data from the source system and storing it in the secure Audit Vault server, the integrity of the log and audit data can be ensured, and it can be proven that the data has not been tampered with.  The Oracle Audit Vault is designed to be a secure data warehouse of log and audit data.

    Application Log and Audit Data

    For applications, a key advantage of the Audit Vault’s secure-at-the-source approach is that the Oracle Audit Vault is transparent.  To use the Oracle Audit Vault with applications such as the Oracle E-Business Suite or SAP, standard Oracle database auditing only needs to be enabled on the application log and audit tables.  While auditing the application audit tables might seem duplicative, the advantage is that the integrity of the application audit data can be ensured (proven that it has not been tampered with) without having to replicate or copy the application log and audit data. 

    For example, the Oracle E-Business Suite has the ability to log user login attempts, both successful and unsuccessful.  To protect the E-Business Suite login audit tables, standard Oracle database auditing first needs to be enabled.  An Oracle Audit Vault agent will then collect information about the E-Business Suite login audit tables.  If any deletes or updates occur to these tables, the Audit Vault will alert on and report the incident.  The Audit Vault is transparent to the Oracle E-Business Suite; no patches are required for the Oracle E-Business Suite to be used with the Oracle Audit Vault.
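
    As a hedged sketch, assuming the login audit data lives in a table such as APPLSYS.FND_LOGINS, the standard auditing behind this scenario could be enabled like so:

    AUDIT UPDATE, DELETE ON applsys.fnd_logins BY ACCESS;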

    Figure 1 Secure At-Source for Application Log and Audit data

    Figure 2 Vault of Log and Audit Data

    If you have questions, please contact us at info@integrigy.com

    Reference
    Auditing, Oracle Audit Vault
    Categories: APPS Blogs, Security Blogs

    From Zero to Hero....In About 2 Hours

    Joel Kallman - Wed, 2014-12-03 11:23


    This is an example of a real-world problem, an opportunistic one, being solved via a mobile application created with Oracle Application Express.

    First, a brief bit of background.  Our son is 9 years old and is in the Cub Scouts.  Cub Scouts in the United States is an organization that is associated with Boy Scouts of America.  It's essentially a club that is geared towards younger boys, and teaches them many valuable skills - hiking, camping out, shooting a bow and arrow, tying different knots, nutrition, etc.  This club has a single fundraiser every year, where the boys go door-to-door selling popcorn, and the proceeds of the popcorn sale fund the activities of the Cub Scouts local group for the next year.  There is a leader who organizes the sale of this popcorn for the local Cub Scout group, and this leader gets the unenviable title of "Popcorn Kernel".  For the past 2 years, I've been the "Popcorn Kernel" for our Cub Scout Pack (60 Scouts).

    I was recently at the DOAG Konferenz in Nürnberg, Germany and it wasn't until my flight home that I began to think about how I was going to distribute the 1,000 items to 60 different Scouts.  My flight home from Germany was on a Sunday and I had pre-scheduled the distribution of all of this popcorn to all 60 families on that next day, Monday afternoon.  Jet lag would not be my friend.

    The previous year, I had meticulously laid out 60 different orders across a large meeting room and let the parents and Scouts pick them up.  This year, I actually had 4 volunteer helpers, but I had no time.  All I had in my possession was an Excel spreadsheet which was used to tally the orders across all 60 Cub Scouts.  But I knew I could do better than 60 pieces of paper, which was the "solution" last year.

    On my flight home, on my iPad, I sketched out the simple 4-page user interface to locate and manage the orders.  As well, I wrote the DDL on my iPad for a single table.  Normally, I would use SQL Developer Data Modeler as my starting point, but this application and design needed to be quick and simple, so a single denormalized table was more than sufficient.



    Bright and early on Monday morning, I logged into an existing workspace on apex.oracle.com.  I created my single table using the Object Browser and SQL Commands, created a trigger on this table, uploaded the spreadsheet data into this table, and then massaged the data using some DML statements in SQL Commands.  Now that my table and data were complete, it was time for my mobile application!
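
    The actual DDL isn't shown here, but a minimal sketch of a single denormalized table and trigger along these lines (all names invented for illustration) might look like:

    CREATE TABLE popcorn_orders (
      order_id      NUMBER         PRIMARY KEY,
      scout_name    VARCHAR2(100)  NOT NULL,
      den_name      VARCHAR2(50)   NOT NULL,
      item_name     VARCHAR2(100)  NOT NULL,
      item_qty      NUMBER         DEFAULT 0,
      delivered_yn  VARCHAR2(1)    DEFAULT 'N'
    );

    CREATE SEQUENCE popcorn_orders_seq;

    CREATE OR REPLACE TRIGGER popcorn_orders_bi
      BEFORE INSERT ON popcorn_orders
      FOR EACH ROW
    BEGIN
      IF :new.order_id IS NULL THEN
        :new.order_id := popcorn_orders_seq.NEXTVAL;
      END IF;
    END;
    /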

    I created a simple Mobile User Interface application with navigation links on the home page.  There are multiple "dens" that make up each group in a Cub Scout Pack, and these were navigation aids as people would come and pick up their popcorn ("Johnny is in the Wolf Den").  These ultimately went to the same report page but with different filters.
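
    A list view behind those navigation links might have been driven by a query along these lines (names invented, continuing the sketch above):

    SELECT scout_name, SUM(item_qty) AS total_items
      FROM popcorn_orders
     WHERE den_name = :P2_DEN_NAME
     GROUP BY scout_name
     ORDER BY scout_name;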



    Once a list view report was accessed, I showed the Scout's name and the total item count for them, and then, via a click, a drill-down to the actual items to be delivered to the Scout.  Once the items were handed over and verified, the user of this application had to click a button to complete the order.  This was the only DML update operation in the entire application.
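
    That one update might have looked something like the following (the table, column and page item names are invented; the real statement isn't shown in the post):

    UPDATE popcorn_orders
       SET delivered_yn = 'Y'
     WHERE order_id = :P3_ORDER_ID;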



    I also added a couple charts to the starting page, so we could keep track of how many orders for each den had already been delivered and how many were remaining.



    I also added a chart page to show how many of each item was remaining, at least according to our records. This enabled us to do a quick "spot check" at any given point in time, and assess if the current inventory we had remaining was also accurately reflected in our system.  It was invaluable!  And remember - this entire application was all on a single table in the Oracle Database.  At one point in time, 8 people were all actively using this system - 5 to do updates and fulfill orders, and the rest to simply view and monitor the progress from their homes.  Concurrency was never even a consideration.  I didn't have to worry about it.



    Now some would say that this application:
    • isn't pixel perfect
    • doesn't have offline storage
    • isn't natively running on the device
    • can't capitalize on the native features of the phone
    • doesn't have a badge icon
    • isn't offered in a store

    And they would be correct.  But guess what?  None of it mattered.  The application was used by 5 different people, all using different devices, and I didn't care what type of devices they were using.  They all thought it was rocket science.  It looked and felt close enough to a native application that none of them noticed nor cared.  The navigation and display were consistent with what they were accustomed to.  More importantly, it was a vast improvement over the alternative - consisting of either a piece of paper or, worse yet, 5 guys huddling around a single computer looking at a spreadsheet.  And this was something that I was able to produce, starting from nothing to completed solution, in about two hours.  If I hadn't been jet lagged, I might have been able to do it in an hour.

    You might read this blog post and chuckle to yourself.  How possibly could this trivial application for popcorn distribution to Cub Scouts relate to a "real" mobile enterprise application?  Actually, it's enormously relevant.

    • For this application, I didn't have to know CSS, HTML or mobile user interfaces.
    • I only needed to know SQL.  I wrote no PL/SQL.  I only wrote a handful of SQL queries for the list views, charts, and the one DML statement to update the row.
    • It was immediately accessible to anyone with a Web browser and a smart phone (i.e., everyone).
    • Concurrency and scalability were never a concern.  This application easily could have been used by 1,000 people and I still would not have had any concern.  I let the Oracle Database do the heavy lifting and put an elegant mobile interface on it with Oracle Application Express.

    This was a simple example of an opportunistic application.  It didn't necessarily have to start from a spreadsheet to be opportunistic.  And every enterprise on the planet (including Oracle) has a slew of application problems just like this, which today are going unsolved.  I went from zero to hero to rocket scientist in the span of two hours.  And so can you.

    A demo version of this application (with fictitious names) is here.  I left the application as is - imperfect on the report page and the form (I should have used a read-only display).  Try it on your own mobile device.
