
Feed aggregator

IBM Bluemix demo using IBM Watson Personality Insights service

Pas Apicella - Mon, 2015-03-30 04:31
The IBM Watson Personality Insights service uses linguistic analysis to extract cognitive and social characteristics from input text such as email, text messages, tweets, forum posts, and more. By deriving cognitive and social preferences, the service helps users to understand, connect to, and communicate with other people on a more personalized level.
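
As a hedged illustration of what the demo application does under the covers, here is a minimal sketch of calling the service with the watson-developer-cloud npm module (the v2-era API used at the time; the credentials are placeholders that would normally come from the bound Bluemix service):

var watson = require('watson-developer-cloud');

// Placeholders: real values come from the bound Bluemix service credentials
var personality_insights = watson.personality_insights({
  username: '<service-username>',
  password: '<service-password>',
  version: 'v2'
});

// The service needs a reasonably large text sample to build a profile
personality_insights.profile(
  { text: 'At least a few hundred words of input text to analyze ...' },
  function (err, profile) {
    if (err) { console.error(err); return; }
    console.log(JSON.stringify(profile, null, 2)); // personality portrait as JSON
  }
);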

1. Clone the GitHub repo as shown below.

pas@192-168-1-4:~/bluemix-apps/watson$ git clone https://github.com/watson-developer-cloud/personality-insights-nodejs.git
Cloning into 'personality-insights-nodejs'...
remote: Counting objects: 84, done.
remote: Total 84 (delta 0), reused 0 (delta 0), pack-reused 84
Unpacking objects: 100% (84/84), done.
Checking connectivity... done.

2. Create the service as shown below.

pas@192-168-1-4:~/bluemix-apps/watson/personality-insights-nodejs$ cf create-service personality_insights "IBM Watson Personality Insights Monthly Plan" personality-insights-service
Creating service personality-insights-service in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK
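
If you want to double-check, the standard cf CLI can list the service instances in the current space:

$ cf services   # should list personality-insights-service with its plan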

3. Edit the manifest.yml to use a unique application name; I normally use {myname}-appname.

---
declared-services:
  personality-insights-service:
    label: personality_insights
    plan: 'IBM Watson Personality Insights Monthly Plan'

applications:
- name: pas-personality-insights-nodejs
  command: node app.js
  path: .
  memory: 256M
  services:
  - personality-insights-service
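
For reference, once the service is bound, Cloud Foundry hands the app its credentials through the VCAP_SERVICES environment variable. A minimal Node.js sketch of reading them (the exact credential fields are service-specific, so treat these as illustrative):

// Sketch: pull the bound Personality Insights credentials out of VCAP_SERVICES
var services = JSON.parse(process.env.VCAP_SERVICES || '{}');
var credentials = (services['personality_insights'] || [{}])[0].credentials || {};
console.log(credentials.url);      // service endpoint
console.log(credentials.username); // generated service username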

4. Push the application as shown below.

pas@192-168-1-4:~/bluemix-apps/watson/personality-insights-nodejs$ cf push
Using manifest file /Users/pas/ibm/bluemix/apps/watson/personality-insights-nodejs/manifest.yml

Creating app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Creating route pas-personality-insights-nodejs.mybluemix.net...
OK

Binding pas-personality-insights-nodejs.mybluemix.net to pas-personality-insights-nodejs...
OK

Uploading pas-personality-insights-nodejs...
Uploading app files from: /Users/pas/ibm/bluemix/apps/watson/personality-insights-nodejs
Uploading 188.5K, 30 files
Done uploading
OK
Binding service personality-insights-service to app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

Starting app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
-----> Downloaded app package (192K)
-----> Node.js Buildpack Version: v1.14-20150309-1555
-----> Requested node range:  >=0.10
-----> Resolved node version: 0.10.36
-----> Installing IBM SDK for Node.js from cache
-----> Checking and configuring service extensions
-----> Installing dependencies
       errorhandler@1.3.5 node_modules/errorhandler
       ├── escape-html@1.0.1
       └── accepts@1.2.5 (negotiator@0.5.1, mime-types@2.0.10)
       body-parser@1.11.0 node_modules/body-parser
       ├── bytes@1.0.0
       ├── media-typer@0.3.0
       ├── raw-body@1.3.2
       ├── depd@1.0.0
       ├── qs@2.3.3
       ├── on-finished@2.2.0 (ee-first@1.1.0)
       ├── iconv-lite@0.4.6
       └── type-is@1.5.7 (mime-types@2.0.10)
       express@4.11.2 node_modules/express
       ├── escape-html@1.0.1
       ├── merge-descriptors@0.0.2
       ├── utils-merge@1.0.0
       ├── methods@1.1.1
       ├── fresh@0.2.4
       ├── cookie@0.1.2
       ├── range-parser@1.0.2
       ├── cookie-signature@1.0.5
       ├── media-typer@0.3.0
       ├── finalhandler@0.3.3
       ├── vary@1.0.0
       ├── parseurl@1.3.0
       ├── serve-static@1.8.1
       ├── content-disposition@0.5.0
       ├── path-to-regexp@0.1.3
       ├── depd@1.0.0
       ├── qs@2.3.3
       ├── on-finished@2.2.0 (ee-first@1.1.0)
       ├── debug@2.1.3 (ms@0.7.0)
       ├── etag@1.5.1 (crc@3.2.1)
       ├── proxy-addr@1.0.7 (forwarded@0.1.0, ipaddr.js@0.1.9)
       ├── send@0.11.1 (destroy@1.0.3, ms@0.7.0, mime@1.2.11)
       ├── accepts@1.2.5 (negotiator@0.5.1, mime-types@2.0.10)
       └── type-is@1.5.7 (mime-types@2.0.10)
       jade@1.9.2 node_modules/jade
       ├── character-parser@1.2.1
       ├── void-elements@2.0.1
       ├── commander@2.6.0
       ├── mkdirp@0.5.0 (minimist@0.0.8)
       ├── with@4.0.2 (acorn-globals@1.0.3, acorn@1.0.1)
       ├── constantinople@3.0.1 (acorn-globals@1.0.3)
       └── transformers@2.1.0 (promise@2.0.0, css@1.0.8, uglify-js@2.2.5)
       watson-developer-cloud@0.9.8 node_modules/watson-developer-cloud
       ├── object.pick@1.1.1
       ├── cookie@0.1.2
       ├── extend@2.0.0
       ├── isstream@0.1.2
       ├── async@0.9.0
       ├── string-template@0.2.0 (js-string-escape@1.0.0)
       ├── object.omit@0.2.1 (isobject@0.2.0, for-own@0.1.3)
       └── request@2.53.0 (caseless@0.9.0, json-stringify-safe@5.0.0, aws-sign2@0.5.0, forever-agent@0.5.2, form-data@0.2.0, stringstream@0.0.4, oauth-sign@0.6.0, tunnel-agent@0.4.0, qs@2.3.3, node-uuid@1.4.3, mime-types@2.0.10, combined-stream@0.0.7, http-signature@0.10.1, tough-cookie@0.12.1, bl@0.9.4, hawk@2.3.1)
-----> Caching node_modules directory for future builds
-----> Cleaning up node-gyp and npm artifacts
-----> No Procfile found; Adding npm start to new Procfile
-----> Building runtime environment
-----> Checking and configuring service extensions
-----> Installing App Management
-----> Node.js Buildpack is done creating the droplet

-----> Uploading droplet (12M)

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


OK

App pas-personality-insights-nodejs was started using this command `node app.js`

Showing health and status for app pas-personality-insights-nodejs in org pasapi@au1.ibm.com / space dev as pasapi@au1.ibm.com...
OK

requested state: started
instances: 1/1
usage: 256M x 1 instances
urls: pas-personality-insights-nodejs.mybluemix.net
last uploaded: Mon Mar 30 10:18:37 +0000 2015

     state     since                    cpu    memory   disk     details
#0   running   2015-03-30 09:20:06 PM   0.0%   0 of 0   0 of 0

5. Access the application at the route shown above (pas-personality-insights-nodejs.mybluemix.net).


This demo is based on the repository linked below.

https://github.com/watson-developer-cloud/personality-insights-nodejs

More information is available here:

http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/doc/personality-insights/

Categories: Fusion Middleware

Retrieving OAM keystore password

Frank van Bortel - Mon, 2015-03-30 03:12
How to retrieve the password of the OAM keystore, if you ever need it: the password of the default OAM keystore (which is generated) can be retrieved using:

cd /oracle/middleware/oracle_common/common/bin
./wlst.sh
connect();
domainRuntime()
listCred(map="OAM_STORE",key="jks")

Should you want to change it, use resetKeystorePassword().

Slides from my presentation at APEX World 2015 in Rotterdam

Dietmar Aust - Mon, 2015-03-30 03:05
Hi guys,

I had a great time at APEX World in Rotterdam, it was a wonderful event. I could meet up with my friends and learn a few new tricks, too :).

Here are the slides from my presentation about the many smaller new features of Oracle APEX 5.0. And I could only cram like half of the good stuff that I found into this 45 min. session.

About 70 people attended the session and I sure hope they will use some of the features I presented.

Once I clean up my demo application, I will make it available, too.

Cheers,
~Dietmar.

Presentation material & E-learning videos – In-Memory Column Store Workshop with Maria Colgan

Marco Gralike - Mon, 2015-03-30 03:00
You can now download and have another look at the presentations used during the In-Memory…

WebCenter Portal & SPA (Single Page Application) – Enhancing ADF UI Design and UX Functionality

First of all, you may be wondering: what is SPA and how can it improve Oracle WebCenter Portal?
It’s the future, trust me, especially when it comes to providing the best possible UX and a flexible UI to your users and enhancing the power of ADF Taskflows with a radical UI overhaul. ADF is great for creating rich applications fast but, to be honest, it is limited in its capability to provide that rich, flexible HTML5 UX through the ADF UI components and to create the truly interactive design that we all seek in today’s modern web-based apps and portals. If we are honest, developers (or, more importantly, designers) are constrained by the out-of-the-box ADF components/taskflows and their lack of design flexibility, unless they want to create their own and extend the render kit capabilities (but extending the render kit will be for another post – today let’s cover SPA with Portal).

SPA fits in perfectly – overhaul the ADF UI using today’s modern techniques and a design agency with the skills to build responsive components using the latest frameworks and libraries, such as knockout, backbone, requirejs, and socket.io, with reactive templating using mustache or handlebars to complement ADF and WebCenter’s services via its REST API.

But before taking the next step and thinking SPA is right for you, read these warnings!

  • The taskflow interface is developed using the latest framework and libraries – not all partners and web design agencies are forward thinking and may not be able to develop and achieve SPA components for WebCenter Portal.
  • The cost can be greater, as the interface is usually designed from the ground up and requires browser and device testing. But if it’s done well, it will outmatch anything that the ADF UI layer can provide your users.
  • Experience goes a long way, especially with matching your design agency with ADF-experienced developers to provide responsive web services and inline datasets.

So what is SPA?

A single-page application (SPA) is a web application or website that fits on a single web page with the goal of providing a more fluid user experience. In an SPA, either all necessary code – HTML, JavaScript and CSS – is retrieved with a single page load, or the appropriate resources are dynamically loaded and added to the page (similar to ADF’s PPR) as and when necessary, reducing page weight and speeding up page load times.
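
As a tiny, hedged illustration of the client-side templating idea (knockout-style; the REST endpoint and field names are hypothetical):

// Sketch: an observable view model re-renders the UI client-side
// whenever new data arrives from a (hypothetical) REST endpoint.
var viewModel = { forums: ko.observableArray([]) };
ko.applyBindings(viewModel);

$.getJSON('/rest/api/forums', function (data) {
  viewModel.forums(data.items); // template updates without a server round-trip for markup
});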

With our use of SPA within portal, the SPA is made modular and wrapped within a taskflow container, providing the rich capability to have multiple components that can be easily dropped into a page while providing rich interaction between components that are context aware by simply adding them from the resource catalog from within the WebCenter Composer view.

Workspace manager

Here is an example Photoshop design that we can easily convert into the pixel-perfect, functioning SPA taskflows we have developed, which enable logged-in users to manage their own list of Applications and Spaces. All server-side interaction is handled via the ADFm (model) layer, which enables the ADF lifecycle to be aware of REST calls and prevents memory leaks or other possible session issues. The UI layer, meanwhile, is all custom, with reactive dynamic templating that is far superior to, and faster than, current ADF PPR calls, as all the interaction and updating of the template is handled client-side. Another great thing is that this template can easily transform and support modern responsive design techniques, can be consumed by mobile devices, and could also be deployed to other app environments like Liferay, SharePoint, and WebCenter Content, as the UI layer does not rely on ADF calls, and service requests can be proxied or CORS-enabled as long as the calls are handled by AJAX and are not WebSocket requests.


Here you can see another example of a SPA Taskflow in action displaying the JIVE Forums (above), compared against the out of the box ADF Forum Taskflow (below).

Conclusion

If you are looking to create applications fast that are functional, use the out of the box taskflows or develop your own ADF components entirely in ADF with your development team and customize the ADF skin to improve on the ADF look.

However, if you are looking to take the experience to the next level and want to invest more to create visually interactive modern and rich dynamic experiences to your users, bring in a good UX/UI team that can help transform your interface layer while enabling components to have the potential to be deployed across multiple platforms while maintaining the power of the ADF back end.

Side Note

I’m currently co-authoring a white paper on modular SPA for ADF taskflows providing examples and different development techniques that can be used to enrich the UI. I hope to get this out to you and on OTN in the next 6 months so keep a look out for it.

The post WebCenter Portal & SPA (Single Page Application) – Enhancing ADF UI Design and UX Functionality appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Oracle Database In-Memory Test Drive Workshop: Canberra 28 April 2015

Richard Foote - Sun, 2015-03-29 21:17
I’ll be running a free Oracle Database In-Memory Test Drive Workshop locally here in Canberra on Tuesday, 28th April 2015. Just bring a laptop with at least 8G of RAM and I’ll supply a VirtualBox image with the Oracle Database 12c In-Memory environment. Together we’ll go through a number of hands-on labs that cover: Configuring the Product Easily […]
Categories: DBA Blogs

Sqlplus is my second home, part 8: Embedding multiple sqlplus arguments into one variable

Tanel Poder - Sun, 2015-03-29 15:23

I’ve updated some of my ASH scripts to use these 4 arguments in a standard way:

  1. What ASH columns to display (and aggregate by)
  2. Which ASH rows to use for the report (filter)
  3. Time range start
  4. Time range end

So this means whenever I run ashtop (or dashtop) for example, I need to type in all 4 parameters. The example below would show top SQL_IDs only for user SOE sessions from last hour of ASH samples:

SQL> @ashtop sql_id username='SOE' sysdate-1/24 sysdate

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
     2271      .6   21% | 56pwkjspvmg3h 2015-03-29 13:13:16 2015-03-29 13:43:34               145
     2045      .6   19% | gkxxkghxubh1a 2015-03-29 13:13:16 2015-03-29 13:43:14               149
     1224      .3   11% | 29qp10usqkqh0 2015-03-29 13:13:25 2015-03-29 13:43:32               132
      959      .3    9% | c13sma6rkr27c 2015-03-29 13:13:19 2015-03-29 13:43:34               958
      758      .2    7% |               2015-03-29 13:13:16 2015-03-29 13:43:31                 1

When I want more control and specify a fixed time range, I can just use the ANSI TIMESTAMP (or TO_DATE) syntax:

SQL> @ashtop sql_id username='SOE' "TIMESTAMP'2015-03-29 13:00:00'" "TIMESTAMP'2015-03-29 13:15:00'"

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
      153      .2   22% | 56pwkjspvmg3h 2015-03-29 13:13:29 2015-03-29 13:14:59                 9
      132      .1   19% | gkxxkghxubh1a 2015-03-29 13:13:29 2015-03-29 13:14:59                 8
       95      .1   14% | 29qp10usqkqh0 2015-03-29 13:13:29 2015-03-29 13:14:52                 7
       69      .1   10% | c13sma6rkr27c 2015-03-29 13:13:31 2015-03-29 13:14:58                69
       41      .0    6% |               2015-03-29 13:13:34 2015-03-29 13:14:59                 1

Note that the arguments 3 & 4 above are in double quotes as there’s a space within the timestamp value. Without the double quotes, sqlplus would think the script has a total of 6 arguments due to the spaces.

I don’t like to type too much though (every character counts!) so I was happy to see that the following sqlplus hack works. I just defined pairs of arguments as sqlplus DEFINE variables as seen below (also in init.sql now):

  -- geeky shortcuts for producing date ranges for various ASH scripts
  define     min="sysdate-1/24/60 sysdate"
  define  minute="sysdate-1/24/60 sysdate"
  define    5min="sysdate-1/24/12 sysdate"
  define    hour="sysdate-1/24 sysdate"
  define  2hours="sysdate-1/12 sysdate"
  define 24hours="sysdate-1 sysdate"
  define     day="sysdate-1 sysdate"
  define   today="TRUNC(sysdate) sysdate"
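
To see the expansion at work, a hypothetical args.sql that just echoes its positional parameters shows how one define turns into two arguments:

-- args.sql (illustrative helper): echo the four positional parameters
prompt 1=[&1] 2=[&2] 3=[&3] 4=[&4]

SQL> @args sql_id username='SOE' &hour
1=[sql_id] 2=[username='SOE'] 3=[sysdate-1/24] 4=[sysdate]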

And now I can type just 3 arguments instead of 4 when I run some of my scripts and want some predefined behavior like seeing last 5 minutes’ activity:

SQL> @ashtop sql_id username='SOE' &5min

    Total
  Seconds     AAS %This   SQL_ID        FIRST_SEEN          LAST_SEEN           DIST_SQLEXEC_SEEN
--------- ------- ------- ------------- ------------------- ------------------- -----------------
      368     1.2   23% | gkxxkghxubh1a 2015-03-29 13:39:34 2015-03-29 13:44:33                37
      241      .8   15% | 56pwkjspvmg3h 2015-03-29 13:40:05 2015-03-29 13:44:33                25
      185      .6   12% | 29qp10usqkqh0 2015-03-29 13:39:40 2015-03-29 13:44:33                24
      129      .4    8% | c13sma6rkr27c 2015-03-29 13:39:35 2015-03-29 13:44:32               129
      107      .4    7% |               2015-03-29 13:39:34 2015-03-29 13:44:33                 1

That’s it, I hope this hack helps :-)

By the way – if you’re a command line & sqlplus fan, check out the SQLCL command line “new sqlplus” tool from the SQL Developer team! (you can download it from the SQL Dev early adopter page for now).

 


Windows Cluster vNext and cloud witness

Yann Neuhaus - Sun, 2015-03-29 12:14

The next version of Windows will provide some interesting features for WSFC architectures. One of them is the new quorum type, "Node majority and cloud witness", which will address the many cases where a third datacenter is needed for a truly resilient quorum but is missing.

Let’s imagine the following scenario that may concern the implementation of either a SQL Server availability group or a SQL Server FCI. Let’s say you have to implement a geo-cluster that includes 4 nodes across two datacenters, with 2 nodes in each. To achieve quorum in case of a broken network link between the two datacenters, adding a witness is mandatory, even if you work with the dynamic node weight feature – but where do you put it? Having a third datacenter to host this witness seems to be the best solution but, as you may imagine, it is a costly solution that many customers cannot afford.

Using a cloud witness in this case might be a very interesting workaround. Indeed, a cloud witness consists of a blob storage inside a storage account's container. From cost perspective, it is a very cheap solution because you have to pay only for the storage space you will use (first 1TB/month – CHF 0.0217 / GB). Let's take a look at the storage space consumed by my cloud witness from my storage account:

 

[Image: storage space consumed by the cloud witness blob]

 

 

Interesting, isn’t it? To implement a cloud witness, you have to meet the following requirements:

  • Your storage account must be configured as locally redundant storage (LRS), because the created blob file is used as the arbitration point, which requires some consistency guarantees when reading the data. All data in the storage account is made durable by replicating transactions synchronously in this case. LRS doesn’t protect against a complete regional disaster, but that may be acceptable in our case because the cloud witness is also a dynamic weight-based feature.
  • A special container, called msft-cloud-witness, is created for this purpose and contains the blob file tied to the cloud witness.

 

[Image: storage account replication type]

 

How to configure my cloud witness?

In the same way as before: using the GUI, you select the quorum type you want to use and then provide the storage account information (storage account name and access key). You may also prefer to configure your cloud witness by using the PowerShell cmdlet Set-ClusterQuorum, as follows:
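
As a sketch (the storage account name and access key are placeholders):

# Configure the quorum to use a cloud witness backed by an Azure storage account
Set-ClusterQuorum -CloudWitness -AccountName "<storage-account-name>" -AccessKey "<primary-access-key>"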

 

[Image: cloud witness configuration via PowerShell]

 

After configuring the cloud witness, a corresponding core resource is created in an online state, as follows:

 

[Image: cloud witness core resource in the GUI]

 

By using PowerShell:

 

[Image: cloud witness core resource in PowerShell]

 

Let’s have a deeper look at this core resource, especially the advanced policy parameters for its isAlive() and looksAlive() configuration:

 

[Image: cloud witness isAlive/looksAlive advanced policies]

 

We may notice that the basic resource health check interval defaults to 15 minutes. Hmm, I guess this value will probably be customized according to the customer’s architecture configuration.

Let’s go ahead and perform some basic tests with my lab architecture. Basically, I have configured a multi-subnet failover cluster that includes four nodes across two (simulated) datacenters. Then, I implemented a cloud witness hosted in my storage account “mikedavem”. You may find a simplified picture of my environment below:

 

[Image: WSFC core resources overview]

 


 

[Image: WSFC nodes overview]

 

You may notice that because I implemented a cloud witness, the system changed the overall node weight configuration (4 nodes + 1 witness = 5 votes). In addition, in case of a network failure between my 2 datacenters, I want to prioritize the first datacenter in terms of availability. To meet this requirement, I used the new cluster property LowerQuorumPriorityNodeID to change the priority of the WIN104 cluster node.
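
A sketch of setting that property from PowerShell (node name from this lab):

# WIN104 loses its vote first when the cluster must drop a vote to keep quorum
(Get-Cluster).LowerQuorumPriorityNodeID = (Get-ClusterNode -Name "WIN104").Id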

 

[Image: changing node priority via LowerQuorumPriorityNodeID]

 

At this point we are now ready to perform our first test: simulating a failure of the cloud witness:

 

[Image: cloud witness in a failed state]

 

The system then recalculates the overall node weight configuration to achieve maximum quorum resiliency. As expected, the node weight of the WIN104 cluster node is changed from 1 to 0 because it has the lower priority.

The second test consists of simulating a network failure between the two datacenters. Once again, as expected, the first partition of the WSFC in datacenter1 stays online, whereas the second partition goes offline, according to the node weight priority configuration.

 

[Image: second WSFC partition in a failed state]

 

Is the cloud witness dynamic behavior suitable with minimal configurations?

I wrote a blog post here about issues that exist with dynamic witness behavior and minimal configurations with only 2 cluster nodes. I hoped to see an improvement on that side but unfortunately no. Perhaps with the RTM release … wait and see.

 

Happy clustering!

 

 

Video Tutorial: XPLAN_ASH Active Session History - Part 4

Randolf Geist - Sun, 2015-03-29 11:55
The next part of the video tutorial explaining the XPLAN_ASH Active Session History functionality is out, continuing the actual walk-through of the script output.

More parts to follow.


Automatic ADF Popup Opening on Fragment Load

Andrejus Baranovski - Sun, 2015-03-29 08:56
I had a post about opening an ADF Popup on page load - Opening ADF PopUp on Page Load. The approach is quite straightforward: the developer needs to use the showPopupBehavior operation with the appropriate trigger type. When it comes to opening an ADF Popup on fragment load, the implementation is a bit more complex. There is a known method to implement a hidden text field and call your custom logic in its getter method - the getter will be executed when the fragment loads. However, this is not very efficient; you will need to add a condition to distinguish between the first and subsequent calls to the getter (it will be executed multiple times). In this post I will describe a different approach - using the ADF poll component and forcing it to execute only once after fragment load.

Here you can download the sample application - FragmentPopUpLoadApp.zip. This sample implements two UI tabs. Each of the tabs renders an ADF region. The first region displays information about all employees - a tree map with salary information:


Automatic popup opening is implemented in the second region - the Employees By Department tab. As soon as the user opens this tab, a popup is loaded to select a department. Data in the region is filtered based on the department selected in the popup:


Filtered data after a selection was made in the automatically opened popup:


The popup in the fragment is loaded on first load by an ADF poll component. The poll component is set with a short interval of 10 milliseconds. During its first execution it calls a Java listener method and, in addition, a JavaScript client listener is invoked. Inside the JavaScript client listener, we disable the ADF poll component by setting its interval to a negative value. This is how the ADF poll executes only once and then stops:
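
The implementation is shown in the (not reproduced) screenshot; as a rough sketch, such a poll and its client listener could look like this (IDs, bean names, and method names are illustrative):

<!-- Poll fires ~10 ms after fragment load; the client listener then disables it -->
<af:poll id="p1" interval="10" clientComponent="true"
         pollListener="#{pageFlowScope.employeesBean.onPollLoad}">
  <af:clientListener method="stopPoll" type="poll"/>
</af:poll>

// JavaScript client listener: make the interval negative so the poll never fires again
function stopPoll(event) {
  event.getSource().setProperty("interval", -1);
}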


Here is the Java listener method, invoked by the ADF poll component - it loads the popup:
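
Again as an illustrative sketch (the bean method and popup binding names are hypothetical):

// imports (sketch): oracle.adf.view.rich.event.PollEvent,
//                   oracle.adf.view.rich.component.rich.RichPopup
public void onPollLoad(PollEvent pollEvent) {
    // Show the department selection popup once, right after the fragment renders
    RichPopup.PopupHints hints = new RichPopup.PopupHints();
    getDepartmentPopup().show(hints); // departmentPopup: a RichPopup component binding
}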


The ADF poll is stopped after its first execution. However, we need to ensure it will be started again if the user re-opens the same tab. For this purpose I have implemented conditional ADF region activation - the region is de-activated when the user navigates away from the tab. A tab disclosure listener updates a helper variable to track which tab becomes active:


The disclosure listener updates a page flow scope variable - forceActivate:


This variable is used in the region definition - the region is active when its tab is selected, and inactive otherwise:
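
A sketch of the conditional region binding in the page definition (taskflow and variable names are illustrative):

<!-- Region is instantiated only while its tab is selected -->
<taskFlow id="deptRegion" taskFlowId="/WEB-INF/dept-flow.xml#dept-flow"
          activation="conditional"
          active="#{pageFlowScope.forceActivate}"
          xmlns="http://xmlns.oracle.com/adf/controller/binding"/>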

node-oracledb 0.4.2 is on NPM (Node.js driver for Oracle Database)

Christopher Jones - Sat, 2015-03-28 18:41

The 0.4.2 version of the Node.js driver for Oracle Database is out.

  • Node-oracledb is now officially on the npmjs.com repository. This simplifies the Install instructions by removing the need to manually clone or download from GitHub. Thanks to Tim Branyen for setting this up and handing over stewardship to us.

  • Metadata support was added. Column names are now provided in the execute() callback result object. See the doc example, and the short sketch after this list.

  • We saw a few people try to use strangely old versions of Node 0.10. I've bumped up the lower limit requirement a bit. It won't force you to use the latest Node.js 0.10 patch set but you really should keep up to date with security fixes.

    If you want to build with Node 0.12, there is a community contributed patch from Richard Natal that can be found here. This patch also allows node-oracledb to work with io.js.

  • The default Instant Client directory on AIX was changed from /opt/oracle/instantclient_12_1 to /opt/oracle/instantclient. This now matches the default of other platforms.

  • One other small change was some improvements to the Windows install documentation.
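
Here is that sketch of the new metadata support (connection details are placeholders, error handling trimmed):

// result.metaData carries the column names as of node-oracledb 0.4.2
var oracledb = require('oracledb');

oracledb.getConnection(
  { user: 'hr', password: '<password>', connectString: 'localhost/orcl' }, // placeholders
  function (err, connection) {
    if (err) { console.error(err.message); return; }
    connection.execute(
      "SELECT first_name, last_name FROM employees WHERE ROWNUM <= 5",
      function (err, result) {
        if (err) { console.error(err.message); return; }
        console.log(result.metaData); // e.g. [ { name: 'FIRST_NAME' }, { name: 'LAST_NAME' } ]
        console.log(result.rows);
      });
  });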

Yes, work is continuing behind the scenes on other features.

A Glance at Smartwatches in the Enterprise: A Moment in Time Experience

Usable Apps - Sat, 2015-03-28 02:30

Ultan O’Broin (@usableapps) talks to Oracle Applications User Experience (OAUX) Vice President Jeremy Ashley (@jrwashley) about designing apps for that smartwatch, and every other smartwatch, too.

Nobody wants their device to disrupt them from what they are doing or to have to move to another one to continue working. Keeping users in the moment of their tasks—independent of the devices they’re using—is central to any great user experience.

The ability to apply our Oracle Applications Cloud design philosophy to the smartwatch demonstrates an ideal realization of the “glance” method, keeping users in that moment: Making the complex simple, flexible, intuitive, and most of all, convenient. OAUX recognizes the need for smartwatch wearers to experience that “right here, right now” feeling, the one in which you have just what you need, just when you need it.

The wearable technology space is currently focused on smartwatches. We’re excited by Apple’s announcement about their smartwatch, and we’re even more thrilled to now show you our proof of concept glance designs for the Oracle Applications Cloud on the Apple Watch. We want to hear your reaction! 

Glance for Oracle Applications Cloud for Apple Watch proof of concept designs

For the smartwatch specifically, VP Jeremy Ashley explained how our glance approach applies to smartwatch wearers, regardless of their choice of device:

“The most common wearable user interaction is to glance at something. The watch works as the wearer’s mini dialog box to the cloud, making microtransactions convenient on the wrist, and presenting the right information to the wearer at the right time. How quickly and easily someone can do something actually useful is the key activity."

Glance brings cloud interaction to wearers in a personal way, requesting and not demanding attention, while eliminating a need to switch to other devices to “dig in,” or to even have to pull a smartphone out of the pocket to respond.

“To continue the journey to completing a task using glance is as simple and natural as telling the time on your wrist”, says Jeremy.

Being able to glance down at your wrist at a stylish smartwatch experience—one that provides super-handy ways to engage with gems of information— enhances working in the cloud in powerful and productive ways, whether you’re a sales rep walking from your car to an opportunity engagement confidently glancing at the latest competitive news, or a field technician swiping across a watchface to securely record time on a remote job.

Glancing at a UI is the optimal wearable experience for the OAUX mobility strategy, where the cloud, not the device, is our platform. This means you can see our device-agnostic glance design at work not only on an Apple Watch, but on Android Wear, Pebble, and other devices, too.

Glance for Oracle Applications Cloud proof of concept apps on Android Wear Samsung Gear Live and Pebble

Designing a Glanceable Platform

The path to our glance designs began with OAUX research into every kind of smartwatch we could get on our wrists so that we could study their possibilities, experience how they felt, how they looked, and how they complemented everyday work and life activities. Then we combined ideas and experiences with Oracle Cloud technology to deliver a simplified design strategy that we can apply across devices. As a result, our UI designs are consistent and familiar to users as they work flexibly in the cloud, regardless of their device, type of operating system, or form factor.

This is not about designing for any one specific smartwatch. It’s a platform-agnostic approach to wearable technology that enables Oracle customers to get that awesome glanceable, cloud-enabled experience on their wearable of choice.

Why Smartwatches?

Smartwatches such as the Apple Watch, Pebble, and Android Wear devices have resonated strongly with innovators and consumers of wearable technology. The smartwatch succeeds because we’re already familiar and comfortable with using wristwatches, and they’re practical and easy to use.

From first relying on the sun to tell the time, to looking up at town hall clocks, to taking out pocket watches, and then being able to glance at our wrists to tell the time, we’ve seen an evolution in glanceable technology analogous to the miniaturization of computing from large mainframes to personal, mobile devices for consumers.

Just like enterprise apps, watches have already been designed for many specializations and roles, be they military, sport, medical, fashion, and so on. So the evolution of the smartwatch into an accepted workplace application is built on a firm foundation.

More Information

Again, OAUX is there, on trend, ready and offering a solution grounded in innovation and design expertise, one that responds to how we work today in the cloud.

In future articles, we’ll explore more examples that showcase how we’re applying the glance approach to wearable technology, and we’ll look at design considerations in more detail. You can read more about our Oracle Applications Cloud design philosophy and other trends and innovations that influence our thinking in our free eBook.

Check the Usable Apps website for events where you can experience our smartwatch and other innovations for real, read our Storify feature on wearable technology, and see our YouTube videos about our UX design philosophy and strategy.

More Apple Watch glance designs are on Instagram

Seriously proud of this and it doesn't make me grumpy!

Grumpy old DBA - Fri, 2015-03-27 18:27
So the GLOC 2015 conference registration is open (GLOC 2015) (has been for a while) and recently we completed posting all the speakers/topics. That's been good, darn good.

Just out today is our SAG (schedule at a glance), which demonstrates just how good our conference will be. Low cost, high quality, and just an event that you really should think about being in Cleveland for in May.

The schedule at a glance does not include our 4 top-notch half-day workshops going on Monday, but you can see them from the regular registration.

I am so grateful for the speakers we have on board. It's a lot of work behind the scenes getting something like this rolling, but when you see a lineup like this, just wow!
Categories: DBA Blogs

Be Careful when using FRA with Streams

Michael Dinh - Fri, 2015-03-27 16:12

Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 – 64bit Production

select state from gv$streams_capture;

STATE
----------------------------------------------------------------------------------------------------------------------------------------------------------------
WAITING FOR REDO: LAST SCN MINED 442455793041

select thread#, sequence#, status
from v$archived_log
where 442455793041 between first_change#
and next_change# order by 1,2;

   THREAD#  SEQUENCE# S
---------- ---------- -
	 1    1070609 D
	 1    1070609 D
	 1    1070609 D
	 1    1070610 D
	 1    1070610 D
	 2    1153149 D
	 2    1153149 D
	 2    1153149 D

8 rows selected.

Who’s deleting the archived logs (STATUS = 'D' in v$archived_log means Deleted)? Thanks to Praveen G. who figured this out. From the alert log:

WARNING: The following archived logs needed by Streams capture process
are being deleted to free space in the flash recovery area. If you need
to process these logs again, restore them from a backup to a destination
other than the flash recovery area using the following RMAN commands:
   RUN{
      # <directory/ASM diskgroup> is a location other than the
      # flash recovery area
      SET ARCHIVELOG DESTINATION TO '<directory/ASM diskgroup>';
      RESTORE ARCHIVELOG ...;
   }
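
Filled in with the sequences reported above, a hypothetical restore (the destination path is a placeholder) might look like:

RUN {
   # restore outside the flash recovery area so the files are not deleted again
   SET ARCHIVELOG DESTINATION TO '/u01/app/oracle/restored_arch';
   RESTORE ARCHIVELOG FROM SEQUENCE 1070609 UNTIL SEQUENCE 1070610 THREAD 1;
   RESTORE ARCHIVELOG FROM SEQUENCE 1153149 UNTIL SEQUENCE 1153149 THREAD 2;
}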

Pythian at Collaborate 15

Pythian Group - Fri, 2015-03-27 15:05

Make sure you check out Pythian’s speakers at Collaborate 15. Stop by booth #1118 for a chance meet some of Pythian’s top Oracle experts, talk shop, and ask questions. This many Oracle experts in one place only happens once a year, have a look at our list of presenters, we think you’ll agree.

Click here to view a PDF of our presenters

 

Pythian’s Collaborate 15 Presenters | April 12 – 16 | Mandalay Bay Resort and Casino, Las Vegas

Christo Kutrovsky | ATCG Senior Consultant | Oracle ACE
  • Maximize Exadata Performance with Parallel Queries | Wed, April 15 | 10:45 AM – 11:45 AM | Room Banyan D
  • Big Data with Exadata | Thu, April 16 | 12:15 PM – 1:15 PM | Room Banyan D

Deiby Gomez Robles | Database Consultant | Oracle ACE
  • Oracle Indexes: From the Concept to Internals | Tue, April 14 | 4:30 PM – 5:30 PM | Room Palm C

Marc Fielding | ATCG Principal Consultant | Oracle Certified Expert
  • Ensuring 24/7 Availability with Oracle Database Application Continuity | Mon, April 13 | 2:00 PM – 3:00 PM | Room Palm D
  • Using Oracle Multi-tenant to Efficiently Manage Development and Test Databases | Tue, April 14 | 11:00 AM – 12:00 PM | Room Palm C

Maris Elsins | Oracle Application DBA | Oracle ACE
  • Mining the AWR: Alternative Methods for Identification of the Top SQLs in Your Database | Tue, April 14 | 3:15 PM – 4:15 PM | Room Palm B
  • Ins and Outs of Concurrent Processing Configuration in Oracle e-Business Suite | Wed, April 15 | 8:00 AM – 9:00 AM | Room Breakers B
  • DB12c: All You Need to Know About the Resource Manager | Thu, April 16 | 9:45 AM – 10:45 AM | Room Palm A

Alex Gorbachev | CTO | Oracle ACE Director
  • Using Hadoop for Real-time BI Queries | Tue, April 14 | 9:45 AM – 10:45 AM | Room Jasmine E
  • Using Oracle Multi-tenant to Efficiently Manage Development and Test Databases | Tue, April 14 | 11:00 AM – 12:00 PM | Room Palm C
  • Anomaly Detection for Database Monitoring | Thu, April 16 | 11:00 AM – 12:00 PM | Room Palm B

Subhajit Das Chaudhuri | Team Manager
  • Deep Dive: Integration of Oracle Applications R12 with OAM 11g, OID 11g, Microsoft AD and WNA | Tue, April 14 | 3:15 PM – 4:15 PM | Room Breakers D

Simon Pane | ATCG Senior Consultant | Oracle Certified Expert
  • Oracle Service Name Resolution – Getting Rid of the TNSNAMES.ORA File! | Wed, April 15 | 9:15 AM – 10:15 AM | Room Palm C

René Antunez | Team Manager | Oracle ACE
  • Architecting Your Own DBaaS in a Private Cloud with EM12c | Mon, April 13 | 9:15 AM – 10:15 AM | Room Reef F
  • Wait, Before We Get the Project Underway, What Do You Think Database as a Service Is… | Mon, April 13 | 3:15 PM – 4:15 PM | Room Reef F
  • My First 100 Days with a MySQL DBMS | Tue, April 14 | 9:45 AM – 10:45 AM | Room Palm A

Gleb Otochkin | ATCG Senior Consultant | Oracle Certified Expert
  • Your Own Private Cloud | Wed, April 15 | 8:00 AM – 9:00 AM | Room Reef F
  • Patching Exadata: Pitfalls and Surprises | Wed, April 15 | 12:00 PM – 12:30 PM | Room Banyan D
  • Second Wind for Your Exadata | Tue, April 14 | 12:15 PM – 12:45 PM | Room Banyan C

Michael Abbey | Team Manager, Principal Consultants | Oracle ACE
  • Working with Colleagues in Distant Time Zones | Mon, April 13 | 12:00 PM – 12:30 PM | Room North Convention, South Pacific J
  • Manage Referential Integrity Before It Manages You | Tue, April 14 | 2:00 PM – 3:00 PM | Room Palm C
  • Nothing to BLOG About – Think Again | Wed, April 15 | 7:30 PM – 8:30 PM | Room North Convention, South Pacific J
  • Do It Right; Do It Once. A Roadmap to Maintenance Windows | Thu, April 16 | 11:00 AM – 12:00 PM | Room North Convention, South Pacific J
Categories: DBA Blogs

Oracle FMW Partner Community Forum 2015: The Oracle Applications Cloud UX Rapid Development Kit Goes to Hungary!

Usable Apps - Fri, 2015-03-27 13:22

Vlad Babu (@vladbabu), Oracle Applications Cloud Pre-Sales UX Champ, files a report about his Oracle Applications User Experience (OAUX) while attending the recent Oracle Fusion Middleware Partner Community Forum 2015 in Budapest, Hungary.

Over 200 Oracle Partners from the Oracle Fusion Middleware (FMW) area stepped away from their projects in early March 2015 to take part in a groundbreaking event in Budapest, Hungary: the Oracle Fusion Middleware Partner Community Forum 2015. For some time, this two-day event had been just a glimmer in the eye of Jürgen Kress (@soacommunity), Senior Manager SOA/FMW Partner Programs EMEA. However, with the unprecedented success of the partner programs and community growth in recent years, he really felt compelled to make this event happen. And he did!

Andrew Sutherland, Senior Vice President Business Development - Technology License & Systems EMEA, and Amit Zavery (@azavery), Senior Vice President, Integration Products, were the keynote speakers. They inspired the audience when they spoke about Digital Disruption and how Oracle is soaring to success with Integration Cloud Services offerings, such as Oracle Cloud Platform (Platform as a Service [PaaS]).

Tweet from Debra Lilley: Pervasiveness of UX to Cloud success

The user experience (UX) presence at the event struck a chord with Debra Lilley (@debralilley), Vice President of Certus Cloud Services, who remarked on how important the all-encompassing Oracle Applications User Experience Simplified User Experience Rapid Development Kit (RDK) is for enabling great partner development for the cloud experience. Yes, integration and PaaS4SaaS are key partner differentiators going forward!

Tweet from Vlad Babu: PTS Code Accelerator Kit and Oracle Applications UX design patterns eBook

So, how can partners truly leverage their investment in Oracle Fusion Middleware? Use the RDK. Oracle Partners were really excited and empowered when they used the RDK to design and code a simplified UI for the Oracle Applications Cloud. The RDK contains all the information you’ll need before you even start coding, such as easy-to-use RDK wireframing stencils. The YouTube guidance offers great productivity pointers for creating new extensions in PaaS or for developing a brand-new custom application from scratch using Oracle ADF technology.

Tweet from Debra Lilley: Integration is key to SaaS.

For example, Certus Solutions leveraged the RDK Simplified User Experience Design Patterns eBook that covers simplified UI design patterns and the ADF-based code templates in the RDK to develop a new extension for the Oracle HCM Cloud. The result? Certus Solutions received the FMW Community Cloud Award for outstanding work in validating PaaS4SaaS with the Usable Apps team!

Tweet from Debra Lilley: Announcing that Certus Solutions received the FMW Community Cloud Award

Experiencing the motivation and innovation of successful partners, this event proved to be a unique and rewarding chance to interact with key Oracle Partners, and truly a fantastic two days to remember. Here’s to the next opportunity to wear the OAUX colors with pride!

Tweet from Debra Lilley: Simplicity, Extensibility, Mobile worn with pride.

For more information, I encourage you to visit the Usable Apps website where you’ll find lots of essential information about designing and building new simplified UIs for the Oracle Applications Cloud.

Your reward is waiting.

Postscript on Student Textbook Expenditures: More details on data sources

Michael Feldstein - Fri, 2015-03-27 12:20

By Phil HillMore Posts (304)

There has been a fair amount of discussion around my post two days ago about what US postsecondary students actually pay for textbooks.

The shortest answer is that US college students spend an average of $600 per year on textbooks despite rising retail prices.

I would not use College Board as a source on this subject, as they do not collect their own data on textbook pricing or expenditures, and they only use budget estimates.

<wonk> I argued that the two best sources for rising average textbook price are the Bureau of Labor Statistics and the National Association of College Stores (NACS), and when you look at what students actually pay (including rental, non-consumption, etc) the best sources are NACS and Student Monitor. In this post I’ll share more information on the data sources and their methodologies. The purpose is to help people understand what these sources tell us and what they don’t tell us.

College Board and NPSAS

My going-in argument was that the College Board is not a credible source on what students actually pay:

The College Board is working to help people estimate the total cost of attendance; they are not providing actual source data on textbook costs, nor do they even claim to do so. Reporters and advocates just fail to read the footnotes.

Both the College Board and National Postsecondary Student Aid Study (NPSAS, official data for the National Center for Education Statistics, or NCES) currently use cost of attendance data created by financial aid offices of each institution, using the category “Books and Supplies”. There is no precise guidance from DOE on the definition of this category, and financial aid offices use very idiosyncratic methods for this budget estimate. Some schools like to maximize the amount of financial aid available to students, so there is motivation to keep this category artificially high.

The difference is three-fold:

  • NPSAS uses official census reporting from schools while the College Board gathers data from a subset of institutions – their member institutions;
  • NPSAS reports the combined data “Average net price” and not the sub-category “Books and Supplies”; and
  • College Board data is targeted at full-time freshman students.

From an NCES report just released today, based on 2012 data (footnote to figure 1):

The budget includes room and board, books and supplies, transportation, and personal expenses. This value is used as students’ budgets for the purposes of awarding federal financial aid. In calculating the net price, all grant aid is subtracted from the total price of attendance.

And the databook definition used, page 130:

The estimated cost of books and supplies for classes at NPSAS institution during the 2011–12 academic year. This variable is not comparable to the student-reported cost of books and supplies (CSTBKS) in NPSAS:08.

What’s that? It turns out that in 2008 NCES actually used a student survey – asking them what they spent rather than asking financial aid offices for net price budget calculation. NCES fully acknowledges that the current financial aid method “is not comparable” to student survey data.

As an example of how this data is calculated, see this guidance letter from the state of California [emphasis added].

The California Student Aid Commission (CSAC) has adopted student expense budgets, Attachment A, for use by the Commission for 2015-16 Cal Grant programs. The budget allowances are based on statewide averages from the 2006-07 Student Expenses and Resources Survey (SEARS) data and adjusted to 2015-16 with the forecasted changes in the California Consumer Price Index (CPI) produced by the Department of Finance.

The College Board asks essentially the same question of the same sources. I’ll repeat again – the College Board is not claiming to be an actual data source for what students actually spend on textbooks.

NACS

NACS has two sources of data: bookstore financial reporting from member institutions, and a Student Watch survey report put out in the fall and spring of each academic year. NACS started collecting student expenditure data in 2007, initially every two years, then every year, then twice a year.

NACS sends their survey through approximately 20 – 25 member institutions to distribute to the full student population for that institution or a representative sample. For the Fall 2013 report:

Student WatchTM is conducted online twice a year, in the fall and spring terms. It is designed to proportionately match the most recent figures of U.S. higher education published in The Chronicle of Higher Education: 2013/2014 Almanac. Twenty campuses were selected to participate based on the following factors: public vs. private schools, two-year vs. four-year degree programs, and small, medium, and large enrollment levels.

Participating campuses included:

  • Fourteen four-year institutions and six two-year schools; and
  • Eighteen U.S. states were represented.

Campus bookstores distributed the survey to their students via email. Each campus survey fielded for a two week period in October 2013. A total of 12,195 valid responses were collected. To further strengthen the accuracy and representativeness of the responses collected, the data was weighted based on gender using student enrollment figures published in The Chronicle of Higher Education: 2013/2014 Almanac. The margin of error for this study is +/- 0.89% at the 95% confidence interval.

I interviewed Rich Hershman and Liz Riddle, who shared the specific definitions they use.

Required Course Materials: Professor requires this material for the class and has made this known through the syllabus, the bookstore, learning management system, and/or verbal instructions. These are materials you purchase/rent/borrow and may include textbooks (including print and/or digital versions), access codes, course packs, or other customized materials. Does not include optional or recommended materials.

The survey goes to students who report what they actually spent. This includes the categories of sharing materials, choosing not to acquire, rental, purchase new and purchase used.

The data is aggregated across full-time and part-time students, undergraduates and graduates. So the best way to read the data I shared previously ($638 per year) is as per-capita spending. The report breaks the data down further by institution type (2-yr public, etc.) and acquisition type (purchase new, rental, etc.). The Fall 2014 data is being released next week, and I’ll share more breakdowns with this data.

In future years NACS plans to expand the survey to go through approximately 100 institutions.

Student Monitor

Student Monitor describes their survey as follows:

  • Conducted each Spring and Fall semester
  • On campus, one-on-one intercepts conducted by professional interviewers during the three week period March 24th to April 14th, 2014 [Spring 2014 data] and October 13th-27th [Fall 2014 data]
  • 1,200 Four Year full-time undergrads (Representative sample, 100 campuses stratified by Enrollment, Type, Location, Census Region/Division)
  • Margin of error +/- 2.4%

In other words, this is an intercept survey conducted with live interviews on campus, targeting full-time undergraduates. This includes the categories of sharing materials, choosing not to acquire, rental, purchase new and purchase used.

In comparison to NACS, Student Monitor tracks more schools (100 vs. 20) but fewer students (1,200 vs. 12,000).

Despite the differences in methodology, Student Monitor and NACS report spending that is fairly consistent (both on the order of $600 per year per student).

New Data in Canada

Alex Usher from Higher Education Strategy Associates shared a blog post in response to my post that is quite interesting.

This data is a little old (2012), but it’s interesting, so my colleague Jacqueline Lambert and I thought we’d share it with you. Back then, when HESA was running a student panel, we asked about 1350 university students across Canada about how much they spent on textbooks, coursepacks, and supplies for their fall semester. [snip]

Nearly 85% of students reported spending on textbooks. What Figure 1 shows is a situation where the median amount spent is just below $300, and the mean is near $330. In addition to spending on textbooks, another 40% or so bought a coursepack (median expenditure $50), and another 25% reported buying other supplies of some description (median expenditure: also $50). Throw that altogether and you’re looking at average spending of around $385 for a single semester.

Subtracting out the “other supplies” that do not fit in NACS / Student Monitor definitions, and acknowledging that fall spending is typically higher than spring due to full-year courses, this data is also in the same ballpark of $600 per year (slightly higher in this case).

Upcoming NPSAS Data

The Higher Education Act of 2008 required NCES to add student expenditures on course materials to the NPSAS database, but this has not been added yet. According to Rich Hershman from NACS, NCES is using a survey question that is quite similar to the NACS question and is field testing it this spring. The biggest difference will be that NPSAS is annual data, whereas NACS and Student Monitor send out their surveys in fall and spring (then combine the data).

Sometime in 2016 we should have better federal data on actual student expenditures.

</wonk>

Update: Mistakenly published without reference to California financial aid guidance. Now fixed.

Update 3/30: I mistakenly referred to the IPEDS database for NCES when this data is part of National Postsecondary Student Aid Study (NPSAS). All references to IPEDS have been corrected to NPSAS. I apologize for confusion.

The post Postscript on Student Textbook Expenditures: More details on data sources appeared first on e-Literate.

1 million page views in less than 5 years

Hemant K Chitale - Fri, 2015-03-27 10:26
My Oracle Blog has recorded 1 million page views in less than 5 years.

Although the blog began on 28-Dec-2006, the first month with recorded page view counts was July-2010 -- 8,176 page views.



Categories: DBA Blogs

Conference Recaps and Such

Oracle AppsLab - Fri, 2015-03-27 09:28

I’m currently in Washington D.C. at Oracle HCM World. It’s been a busy conference; on Wednesday, Thao and Ben ran a brainstorming session on wearables as part of the HCM product strategy council’s day of activities.

[Image: brainstorming session]

Then yesterday, the dynamic duo ran a focus group around emerging technologies and their impact on HCM, specifically wearables and Internet of Things (IoT). I haven’t got a full download of the session yet, but I hear the discussion was lively. They didn’t even get to IoT, sorry Noel (@noelportual).

I’m still new to the user research side of our still-kinda-new house, so it was great to watch these two in action as a proverbial fly on the wall. They’ll be doing similar user research activities at Collaborate 15 and OHUG 15.

If you’re attending Collaborate and want to hang out with the OAUX team and participate in a user research or usability testing activity, hit this link. The OHUG 15 page isn’t up yet, but if you’re too excited to wait, contact Gozel Aamoth, gozel dot aamoth at oracle dot com.

Back to HCM World, in a short while, I’ll be presenting a session with Aylin Uysal called Oracle HCM Cloud User Experiences: Trends, Tailoring, and Strategy, and then it’s off to the airport.

Earlier this week, Noel was in Eindhoven for OBUG Experience 2015. From the pictures I’ve seen, it was a fun event. Jeremy (@jrwashley) not only gave the keynote, but he found time to hang out with some robot footballers.

[Image: robot footballers]

Check out the highlights:

Busy week, right? Next week is more of the same as Noel and Tony head to Modern CX in Las Vegas.

Maybe we’ll run into you at one of these conferences? Drop a comment.

In other news, as promised last week, I updated the feed name. Doesn’t look like that affected anything, but tell your friends just in case.

Update: Nope, changing the name totally borks the old feed, so update your subscription if you want to keep getting AppsLab goodness delivered to your feed reader or inbox.

Lifting the Lid on OBIEE Internals with Linux Diagnostics Tools

Rittman Mead Consulting - Fri, 2015-03-27 08:44

There comes a point in any sufficiently complex or difficult problem diagnosis where the log files in OBIEE alone are not sufficient for building up a complete picture of what’s going on. Even with the debug/trace data that Presentation Services and other components can be configured to write, you’re sometimes just left having to guess what is going on inside the black box of each of the OBIEE system components.

Here we’re going to look at a couple of examples of lifting the lid just a little bit further on what OBIEE is up to, using standard Linux diagnostic tools. These are not something to reach for in the first instance – more of a last resort. Almost always the problem is simpler than you think, and leaping for a network trace or stack trace is going to be missing the wood for the trees.

Diagnostics in action

At a client recently they had a problem with a custom skin deployment on a clustered (scaled-out) OBIEE deployment. Amongst other things the skin was setting the default palette for charts (viewui/chart/dvt-graph-skin.xml), and they were seeing only 50% of chart executions pick up the custom palette – the other 50% used the default. If either entire node was shut down, things were fine, but otherwise it was a 50:50 chance which palette would be used. Most odd….

When you configure a custom skin in OBIEE you should be setting CustomerResourcePhysicalPath in instanceconfig.xml, along with CustomerResourceVirtualPath. Both these are necessary so that Presentation Services knows:

  1. Logical – How to generate URLs for content requested by the user’s browser (eg logos, CSS files, etc).
  2. Physical – How to physically reference files on the file system that are read by OBIEE itself (eg XML files, language files)

The way the client had configured their custom skin was that it was on storage local to each node, and in a node-specific path, something like this:

  • /data/instance1/s_custom/
  • /data/instance2/s_custom/

Writing out the details in hindsight always makes a problem’s root cause a lot more obvious, but at the time this was a tricky problem. Let’s start with the basics. Java Host is responsible for rendering charts, and for some reason, it was not reading the custom colour scheme file from the custom skin correctly. Presentation Services uses all the available Java Hosts in a cluster to request charts, presumably on some kind of round-robin basis. An analysis request on NODE01 has a 50:50 chance of getting its chart rendered on Java Host on NODE01 or Java Host on NODE02:


Turning all the log files up to 11 didn’t yield anything useful. For some reason, half the time Java Host would just “ignore” the custom skin. Shutting down each node proved that in isolation the custom skin configuration on each node was definitely correct, because then the colours started working just fine. It was only when multiple Java Hosts across the nodes were active that there was a problem.

How Java Host picks up the custom skin is entirely undocumented, and I ended up figuring out that it must get the path to the skin as part of the chart request from Presentation Services. Since Presentation Services on NODE01 has been configured with a CustomerResourcePhysicalPath of /data/instance1/s_custom/, Java Host on NODE02 would fail to find this path (since on NODE02 the skin is located at /data/instance2/s_custom/) and so fall back on the default. This was my hypothesis, which I then proved by making each skin’s path available on each node (a symlink or a standard path would also have worked, eg /data/shared/s_custom, or even better, a shared mount point), and from there everything worked just fine.
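
For example, a sketch of the symlink option on NODE02 (mirror it on NODE01):

# Make the path that NODE01's Presentation Services sends in chart requests
# resolve to the skin files that exist locally on NODE02
$ sudo ln -s /data/instance2/s_custom /data/instance1/s_custom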

But a hypothesis and successful resolution alone wasn’t entirely enough. Sure the client was happy, but there was that little itch, that unknown “black box” system that appeared to behave how I had deduced, but could we know for sure?

tcpdump – network analysis

All of the OBIEE components communicate with each other and the outside world over TCP. When Presentation Services wants a chart rendered it does so by sending a request to Java Host – over TCP. Using the tcpdump tool we can see that in action, and inspect what gets sent:

$ sudo tcpdump -i venet0 -i lo -nnA 'port 9810'

The -A flag captures the ASCII representation of the packet; use -X if you want ASCII and hex. Port 9810 is the Java Host listen port.

The output looks like this:


You’ll note that in this case it’s intra-node communication, i.e. src and dest IP addresses are the same. The port for Java Host (9810) is clear, and we can verify that the src port (38566) is Presentation Services with the -p (process) flag of netstat:

$ sudo netstat -pn |grep 38566
tcp        0      0 192.168.10.152:38566        192.168.10.152:9810         ESTABLISHED 5893/sawserver

So now if you look in a bit more detail at the footer of the request from Presentation Services that tcpdump captured you’ll see loud and clear (relatively) the custom skin path with the graph customisation file:


Proof that Presentation Services is indeed telling Java Host where to go and look for the custom attributes (including colours)! (NB this is on a test environment, so the paths vary from the /data/instance... example above.)

strace – system call analysis

So tcpdump gives us the smoking gun, but can we find the corpse as well? Sure we can! strace is a tool for tracing system calls, and a fantastically powerful one, but here’s a very simple example:

$ strace -o /tmp/obijh1_strace.log -f -p $(pgrep -f obijh1)

-o means to write the trace to file, -f follows child processes as well, and -p passes the process id that strace should attach to. Having set the trace running, I run my chart, and then go and pick through my trace file.

We know it’s the dvt-graph-skin.xml file that Java Host should be reading to pick up the custom colours, so let’s search for that:
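
For example, with a simple grep over the trace file:

$ grep dvt-graph-skin /tmp/obijh1_strace.log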


Well there we go – Java Host went to go and look for the skin in the path that it was given by Presentation Services, and couldn’t find it. From there it’ll fall back on the product defaults.

Right Tool, Right Job

As I said at the top of this article, these diagnostic tools are not the kind of thing you’d be using day to day. Understanding their output is not always easy, and it’s probably easy to do more harm than good with false assumptions about what a trace is telling you. But, in the right situations, they are great for really finding out what is going on under the covers of OBIEE.

If you want to find out more about this kind of thing, this page is a great starting point.

Categories: BI & Warehousing