
Feed aggregator

Data Caching Implementation for ADF Mobile MAF Application

Andrejus Baranovski - Tue, 2014-08-12 13:19
If you are building a mobile application with web service integration, you must take a data caching strategy into account. Without data caching, the mobile application will try to establish too many connections with the server - this uses a lot of bandwidth and slows down mobile application performance. This post focuses on the scenario of implementing a simple data caching strategy. In my next post, I'm planning to review the MAF persistence framework from Steven Davelaar - that framework is powerful and flexible. A simple data caching strategy makes sense for smaller use cases, when we don't need an additional framework for persistence.

The data caching implementation in the sample application - MAFMobileLocalApp_v3.zip - is based on a SQLite database local to the mobile application. The whole idea is pretty straightforward: there is a database and a Web Service publishing data. The mobile application reads and synchronises data through the Web Service. Once data is fetched from the Web Service, it is stored in a local cache implemented by a SQLite DB. For subsequent requests, the mobile application uses the cache, until the user decides to synchronise data from the Web Service again:


The sample application implements a Web Service based on an ADF BC module. This Web Service returns Tasks data (a SQL script is included with the sample application):


Besides fetching the data, the Web Service implements a Task update method:


On the mobile application side, the use case is implemented with a single Task Flow. The task list displays a list of tasks; the user can edit a task and submit changes through the Web Service. Each change is immediately synchronised with the local cache storage. The user is able to refresh the list of tasks and synchronise it with the Web Service. During synchronisation the local cache is cleared and populated again with the data returned from the Web Service:


Data caching logic is handled in the TaskDC class; a Data Control is generated on top of this class, which makes it available through the bindings layer. Initially we check if the cache is empty and load data from the Web Service; for subsequent requests data is loaded from the cache, unless the user wants to synchronise with the Web Service and invokes refresh:
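The original post shows this logic in a code screenshot; as a stand-in, here is a minimal Java sketch of the decision, assuming hypothetical helper names (isCacheEmpty, loadTasksFromWebService, saveTasksToCache, readTasksFromCache) and a simple Task bean - it only illustrates the pattern described, not the actual sample code.

import java.util.ArrayList;
import java.util.List;

// Sketch of the TaskDC Data Control entry point (helper names are assumptions).
public class TaskDC {

    private List<Task> tasks = new ArrayList<Task>();

    // Exposed through the generated Data Control and the bindings layer.
    public Task[] getTasks() {
        if (isCacheEmpty()) {
            // First request: nothing cached yet, so call the Web Service
            // and store the result in the local SQLite cache.
            tasks = loadTasksFromWebService();
            saveTasksToCache(tasks);
        } else {
            // Subsequent requests: serve rows straight from the SQLite cache;
            // no Web Service call is made until the user invokes refresh.
            tasks = readTasksFromCache();
        }
        return tasks.toArray(new Task[tasks.size()]);
    }
}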


There is a method responsible for fetching data from the Web Service and translating it into the Task list structure:
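As a hedged sketch (again, not the actual sample code), such a method might look like the following, where invokeTaskWebService() stands in for the Web Service call in the sample and the attribute names TaskId, TaskName and Status are assumptions:

// Sketch: call the remote service and map the generic result rows into Task objects.
private List<Task> loadTasksFromWebService() {
    List<Task> result = new ArrayList<Task>();
    // invokeTaskWebService() is a hypothetical wrapper around the Web Service call.
    for (Map<String, Object> row : invokeTaskWebService()) {
        Task task = new Task();
        task.setTaskId(((Number) row.get("TaskId")).intValue());
        task.setTaskName((String) row.get("TaskName"));
        task.setStatus((String) row.get("Status"));
        result.add(task);
    }
    return result;
}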


Retrieval of data from the cache is implemented in a separate method, where we query data from the local SQLite database:
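A minimal JDBC-style sketch of the cache read is shown below; the getLocalConnection() helper and the tasks table/column names are assumptions used for illustration only (java.sql imports omitted):

// Sketch: read the cached rows from the local SQLite database.
private List<Task> readTasksFromCache() {
    List<Task> result = new ArrayList<Task>();
    try {
        Connection conn = getLocalConnection(); // assumed helper returning the SQLite connection
        PreparedStatement stmt = conn.prepareStatement(
                "SELECT task_id, task_name, status FROM tasks");
        ResultSet rs = stmt.executeQuery();
        while (rs.next()) {
            Task task = new Task();
            task.setTaskId(rs.getInt("task_id"));
            task.setTaskName(rs.getString("task_name"));
            task.setStatus(rs.getString("status"));
            result.add(task);
        }
        rs.close();
        stmt.close();
    } catch (Exception e) {
        throw new RuntimeException("Failed to read tasks from cache", e);
    }
    return result;
}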


When a request hits the TaskDC Data Control, we check whether there are any rows in the cache. If there are rows, no call is made to the Web Service and the rows are fetched from the cache:


When the user synchronises with the Web Service, the cache is cleared and re-populated with the rows fetched from the Web Service:
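Under the same naming assumptions as the earlier sketches, the refresh flow could be sketched as:

// Sketch: clear the SQLite cache and re-populate it from the Web Service.
public void refreshTasks() {
    clearCache();                        // assumed helper, e.g. executes DELETE FROM tasks
    tasks = loadTasksFromWebService();   // re-fetch the full collection from the Web Service
    saveTasksToCache(tasks);             // write the fresh rows back into SQLite
    // In MAF you would typically also fire a provider refresh here, so the UI
    // re-queries the Data Control and picks up the new collection.
}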


An update is synchronised with the local cache, so there is no need to re-fetch the entire data collection from the Web Service:
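A sketch of that update path follows; callTaskUpdateService() stands in for the Task update method exposed by the Web Service, and the table/column names are the same assumptions as above:

// Sketch: push the edit to the Web Service, then apply the same change to the
// cached SQLite row, so no full re-fetch is needed.
public void updateTask(Task task) {
    callTaskUpdateService(task); // assumed wrapper around the Web Service Task update method
    try {
        Connection conn = getLocalConnection();
        PreparedStatement stmt = conn.prepareStatement(
                "UPDATE tasks SET task_name = ?, status = ? WHERE task_id = ?");
        stmt.setString(1, task.getTaskName());
        stmt.setString(2, task.getStatus());
        stmt.setInt(3, task.getTaskId());
        stmt.executeUpdate();
        stmt.close();
    } catch (Exception e) {
        throw new RuntimeException("Failed to update cached task", e);
    }
}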


If the local cache is empty and the Web Service is invoked to fetch data, we can see this in the Web Service log - it executes SQL. Later, no call to the Web Service is made and data is loaded locally from the SQLite database:


We can track the update operation on the server side as well; Task details are updated using the Task Id:


Initial load of the task list - this is what you should see if the Web Service and the MAF mobile application run correctly:


The user can open a specific task and edit its properties - for example, changing the status:


The task description can be modified as well:


All changes are saved through a Web Service call and immediately synchronised with the local SQLite database:


The selected task is updated and the user is returned to the list of tasks:


We can simulate a data update on the server - let's change the status for one of the tasks:


With the refresh option invoked, data is synchronised on the mobile application side and the local cache gets refreshed:

New Features Available in Latest ORAchk Release

Joshua Solomin - Tue, 2014-08-12 12:46


ORAchk can proactively scan for known problems within these areas:



  • Oracle Database
  • Enterprise Manager Cloud Control
  • E-Business Suite
  • Oracle Sun Systems





New features available in ORAchk version 2.2.5

  • Runs checks for multiple databases in parallel
  • Schedules multiple automated runs via ORAchk daemon
  • Uses configurable $HOME directory location for ORAchk temporary files
  • Ignores skipped checks when calculating System health score
  • Checks the health of pluggable databases using OS authentication
  • Reports top 10 time consuming checks to optimize runtime in the future
  • Improves report readability for clusterwide checks
  • Includes over 50 new Health Checks for the Oracle Stack
  • Provides a single dashboard to view collections across your enterprise
  • Includes pre- and post-upgrade checks for standalone databases, with an option to run only these checks
  • Expands product areas in E-Business Suite and in Enterprise Manager Cloud Control

If you have particular checks or product areas you would like to see ORAchk cover, please post suggestions in the ORAchk subspace in My Oracle Support Community.

Read more about ORAchk and its features.

keeping my fingers crossed just submitted abstract for RMOUG 2015 Training Days ...

Grumpy old DBA - Tue, 2014-08-12 12:06
The Rocky Mountain Oracle Users Group has been big and organized for a very long time. I have never been out there (my bad) but am hoping to change that situation in 2015.

Abstracts are being accepted for Training Days 2015 ... my first one is in there now; I'm thinking about a second submission, but my Hotsos 2014 presentation needs some more work/fixing. OK, let's be honest: I need to shrink it considerably and tighten the focus of that one.

Information on RMOUG 2015 can be found here: RMOUG Training Days 2015

Keeping my fingers crossed!
Categories: DBA Blogs

Searching and installing Linux packages

Arun Bavera - Tue, 2014-08-12 11:12

yum search vnc

Loaded plugins: security

public_ol6_UEKR3_latest | 1.2 kB 00:00

public_ol6_UEKR3_latest/primary | 7.7 MB 00:01

public_ol6_UEKR3_latest 216/216

public_ol6_latest | 1.4 kB 00:00

public_ol6_latest/primary | 41 MB 00:03

public_ol6_latest 25873/25873

=============================================================================== N/S Matched: vnc ================================================================================

gtk-vnc.i686 : A GTK widget for VNC clients

gtk-vnc.x86_64 : A GTK widget for VNC clients

gtk-vnc-devel.i686 : Libraries, includes, etc. to compile with the gtk-vnc library

gtk-vnc-devel.x86_64 : Libraries, includes, etc. to compile with the gtk-vnc library

gtk-vnc-python.x86_64 : Python bindings for the gtk-vnc library

libvncserver.i686 : Library to make writing a vnc server easy

libvncserver.x86_64 : Library to make writing a vnc server easy

libvncserver-devel.i686 : Development files for libvncserver

libvncserver-devel.x86_64 : Development files for libvncserver

tigervnc.x86_64 : A TigerVNC remote display system

tigervnc-server.x86_64 : A TigerVNC server

tigervnc-server-applet.noarch : Java TigerVNC viewer applet for TigerVNC server

tigervnc-server-module.x86_64 : TigerVNC module to Xorg

tsclient.x86_64 : Client for VNC and Windows Terminal Server

vinagre.x86_64 : VNC client for GNOME

xorg-x11-server-source.noarch : Xserver source code required to build VNC server (Xvnc)

Name and summary matches only, use "search all" for everything.

yum install tigervnc-server.x86_64

Loaded plugins: security

Setting up Install Process

Resolving Dependencies

--> Running transaction check

---> Package tigervnc-server.x86_64 0:1.1.0-8.el6_5 will be installed

--> Processing Dependency: xorg-x11-fonts-misc for package: tigervnc-server-1.1.0-8.el6_5.x86_64

--> Running transaction check

---> Package xorg-x11-fonts-misc.noarch 0:7.2-9.1.el6 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================================================================================================

Package Arch Version Repository Size

=================================================================================================================================================================================

Installing:

tigervnc-server x86_64 1.1.0-8.el6_5 public_ol6_latest 1.1 M

Installing for dependencies:

xorg-x11-fonts-misc noarch 7.2-9.1.el6 public_ol6_latest 5.8 M

Transaction Summary

=================================================================================================================================================================================

Install 2 Package(s)

Total download size: 6.9 M

Installed size: 9.7 M

Is this ok [y/N]: y

Downloading Packages:

(1/2): tigervnc-server-1.1.0-8.el6_5.x86_64.rpm | 1.1 MB 00:00

(2/2): xorg-x11-fonts-misc-7.2-9.1.el6.noarch.rpm | 5.8 MB 00:01

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Total 2.5 MB/s | 6.9 MB 00:02

Running rpm_check_debug

Running Transaction Test

Transaction Test Succeeded

Running Transaction

Warning: RPMDB altered outside of yum.

Installing : xorg-x11-fonts-misc-7.2-9.1.el6.noarch 1/2

Installing : tigervnc-server-1.1.0-8.el6_5.x86_64 2/2

Verifying : xorg-x11-fonts-misc-7.2-9.1.el6.noarch 1/2

Verifying : tigervnc-server-1.1.0-8.el6_5.x86_64 2/2

Installed:

tigervnc-server.x86_64 0:1.1.0-8.el6_5

Dependency Installed:

xorg-x11-fonts-misc.noarch 0:7.2-9.1.el6

Complete!

Categories: Development

Designing for the Experience-Driven Enterprise

WebCenter Team - Tue, 2014-08-12 06:00

Guest blog post series this week by Geoffrey Bock

How Oracle WebCenter Customers Build Digital Businesses: Designing for the Experience-Driven Enterprise

Geoffrey Bock, Principal, Bock & Company

Making the Transition from Analog to Digital

In my last blog post on contending with digital disruption, I described how several Oracle customers decided to refresh, modernize, and mobilize their enterprise application infrastructure. Web-enabling an existing application, once necessary, is no longer sufficient.

But what does it take to mobilize key business tasks and drive digital capabilities deeply into an application infrastructure? Many of the WebCenter customers I spoke to emphasize both the business value of their applications and the quality of end user experiences. They are now rebuilding their core applications, making the transition from analog to digital business practices, by designing for an experience-driven enterprise. 

The Flow of Design Activities

As I see it, customers are focusing on a sequence of five interrelated activities, summarized in Illustration 1. There is an inherent flow to application evolution.


Illustration 1. As they design their digital businesses, customers leverage their current platforms in order to deliver innovative experiences.

Here’s a description of how customers are building their digital businesses, and embracing the necessity of change along the way.

  • To begin with, there are baseline functions based on existing activities. While modernizing their core applications and the underlying back-end infrastructure, IT and business leaders emphasize that they “cannot lose anything” from their current platform. What needs to change is still up for redesign.

  • At the same time, leaders need to enhance the value of ad hoc communications. They are turning to social and mobile channels to improve overall employee productivity as well as strengthen relationships with customers and partners. New ways to communicate information become a lever for innovation.

  • There is also a business purpose for investing in social and mobile channels. Leaders expect to substantially improve service and support, when customers, partners, and employees have easy access to relevant information. There is added power through easy sharing.

  • To ensure quality service and support, it is essential to manage reusable content for a consistent experience. Organizations expect to create content once, organize it around business tasks, and distribute it across multiple channels. It helps to structure content for consistent distribution.

  • As a result, there are opportunities to launch innovative (and potentially breakthrough) digital business activities, by exploiting the capabilities of the redesigned application environment. It’s not so much a matter of “losing” baseline functions as embedding the flexibility to ensure that they can evolve.

From my perspective, this new application environment supports digital business initiatives by mobilizing the moments of engagement. These moments encompass the end user experiences where work gets done and value is created.

Optimizing for Agility

Companies are introducing various customer-, partner-, and employee-facing applications that run on the rebuilt enterprise platform. Leaders in these firms are designing applications from the “outside-in” by optimizing the ways in which end users access information and perform tasks. Significantly, leaders are relying on the agility and flexibility of the new platform to support an innovative collaborative environment.

As I spoke to WebCenter customers, I was struck by how their target users value the convenience of simple experiences. Designing for the experience-driven enterprise entails aggregating information from multiple sources, organizing it by business tasks, and then presenting it through intuitive applications that are seamlessly integrated with back-end services.

Download the free White Paper Today


Whitepaper List as per August 2014

Anthony Shorten - Mon, 2014-08-11 19:54
The following Oracle Utilities Application Framework technical whitepapers are available from My Oracle Support at the Doc Id's mentioned below. Some have been updated in the last few months to reflect new advice and new features.

Unless otherwise marked, the technical whitepapers listed below are applicable for the following products (with versions):

559880.1 - ConfigLab Design Guidelines
This whitepaper outlines how to design and implement a data management solution using the ConfigLab facility.
This whitepaper currently only applies to the following products:

560367.1 - Technical Best Practices for Oracle Utilities Application Framework Based Products
Whitepaper summarizing common technical best practices used by partners, implementation teams and customers.

560382.1 - Performance Troubleshooting Guideline Series
A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows:
  • Concepts - General Concepts and Performance Troubleshooting processes
  • Client Troubleshooting - General troubleshooting of the browser client with common issues and resolutions.
  • Network Troubleshooting - General troubleshooting of the network with common issues and resolutions.
  • Web Application Server Troubleshooting - General troubleshooting of the Web Application Server with common issues and resolutions.
  • Server Troubleshooting - General troubleshooting of the Operating system with common issues and resolutions.
  • Database Troubleshooting - General troubleshooting of the database with common issues and resolutions.
  • Batch Troubleshooting - General troubleshooting of the background processing component of the product with common issues and resolutions.
560401.1 - Software Configuration Management Series
A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. Topics include Revision Control, SDK Migration/Utilities, Bundling and Configuration Migration Assistant. The individual whitepapers are as follows:
  • Concepts - General concepts and introduction.
  • Environment Management - Principles and techniques for creating and managing environments.
  • Version Management - Integration of Version control and version management of configuration items.
  • Release Management - Packaging configuration items into a release.
  • Distribution - Distribution and installation of releases across environments
  • Change Management - Generic change management processes for product implementations.
  • Status Accounting - Status reporting techniques using product facilities.
  • Defect Management - Generic defect management processes for product implementations.
  • Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation.
  • Implementing Service Packs - Discussion on the service packs and how to use them in an implementation.
  • Implementing Upgrades - Discussion on the upgrade process and common techniques for minimizing the impact of upgrades.
773473.1 - Oracle Utilities Application Framework Security Overview
A whitepaper summarizing the security facilities in the framework. Now includes references to other Oracle security products supported.

774783.1 - LDAP Integration for Oracle Utilities Application Framework based products
A generic whitepaper summarizing how to integrate an external LDAP based security repository with the framework.

789060.1 - Oracle Utilities Application Framework Integration Overview
A whitepaper summarizing all the various common integration techniques used with the product (with case studies).

799912.1 - Single Sign On Integration for Oracle Utilities Application Framework based products
A whitepaper outlining a generic process for integrating an SSO product with the framework.

807068.1 - Oracle Utilities Application Framework Architecture Guidelines
This whitepaper outlines the different variations of architecture that can be considered. Each variation will include advice on configuration and other considerations.

836362.1 - Batch Best Practices
This whitepaper outlines the common and best practices implemented by sites all over the world.

856854.1 - Technical Best Practices V1 Addendum
Addendum to Technical Best Practices for Oracle Utilities Customer Care And Billing V1.x only.

942074.1 - XAI Best Practices
This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework.

970785.1 - Oracle Identity Manager Integration Overview
This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework based products and Oracle Identity Manager used to provision user and user group security information. For FW4.x customers use whitepaper 1375600.1 instead.

1068958.1 - Production Environment Configuration Guidelines
A whitepaper outlining common production level settings for the products based upon benchmarks and customer feedback.

1177265.1 - What's New In Oracle Utilities Application Framework V4?
Whitepaper outlining the major changes to the framework since Oracle Utilities Application Framework V2.2.

1290700.1 - Database Vault Integration
Whitepaper outlining the Database Vault Integration solution provided with Oracle Utilities Application Framework V4.1.0 and above.

1299732.1 - BI Publisher Guidelines for Oracle Utilities Application Framework
Whitepaper outlining the interface between BI Publisher and the Oracle Utilities Application Framework.

1308161.1 - Oracle SOA Suite Integration with Oracle Utilities Application Framework based products
This whitepaper outlines common design patterns and guidelines for using Oracle SOA Suite with Oracle Utilities Application Framework based products.

1308165.1 - MPL Best Practices
This is a guidelines whitepaper for products shipping with the Multi-Purpose Listener.
This whitepaper currently only applies to the following products:

1308181.1 - Oracle WebLogic JMS Integration with the Oracle Utilities Application Framework
This whitepaper covers the native integration between Oracle WebLogic JMS and the Oracle Utilities Application Framework using the new Message Driven Bean functionality and real time JMS adapters.

1334558.1 - Oracle WebLogic Clustering for Oracle Utilities Application Framework
This whitepaper covers the process for implementing clustering using Oracle WebLogic for Oracle Utilities Application Framework based products.

1359369.1 - IBM WebSphere Clustering for Oracle Utilities Application Framework
This whitepaper covers the process for implementing clustering using IBM WebSphere for Oracle Utilities Application Framework based products.

1375600.1 - Oracle Identity Management Suite Integration with the Oracle Utilities Application Framework
This whitepaper covers the integration between Oracle Utilities Application Framework and Oracle Identity Management Suite components such as Oracle Identity Manager, Oracle Access Manager, Oracle Adaptive Access Manager, Oracle Internet Directory and Oracle Virtual Directory.

1375615.1 - Advanced Security for the Oracle Utilities Application Framework
This whitepaper covers common security requirements and how to meet those requirements using Oracle Utilities Application Framework native security facilities, security provided with the J2EE Web Application and/or facilities available in Oracle Identity Management Suite.

1486886.1 - Implementing Oracle Exadata with Oracle Utilities Customer Care and Billing
This whitepaper covers some advice when implementing Oracle Exadata for Oracle Utilities Customer Care And Billing.

878212.1 - Oracle Utilities Application FW Available Service Packs
This entry outlines ALL the service packs available for the Oracle Utilities Application Framework.

1454143.1 - Certification Matrix for Oracle Utilities Products
This entry outlines the software certifications for all the Oracle Utilities products.

1474435.1 - Oracle Application Management Pack for Oracle Utilities Overview
This whitepaper covers the Oracle Application Management Pack for Oracle Utilities. This is a pack for Oracle Enterprise Manager.

1506830.1 - Configuration Migration Assistant Overview
This whitepaper covers the Configuration Migration Assistant available for Oracle Utilities Application Framework V4.2.0.0.0. This replaces ConfigLab for some products.

1506855.1 - Integration Reference Solutions
This whitepaper covers the various Oracle technologies you can use with the Oracle Utilities Application Framework.

1544969.1 - Native Installation Oracle Utilities Application Framework
This whitepaper describes the process of installing Oracle Utilities Application Framework based products natively within Oracle WebLogic.

1558279.1 - Oracle Service Bus Integration
This whitepaper describes direct integration with Oracle Service Bus including the new Oracle Service Bus protocol adapters available. Customers using the MPL should read this whitepaper, as Oracle Service Bus replaces the MPL in the future and this whitepaper outlines how to manually migrate your MPL configuration into Oracle Service Bus.

Note: In Oracle Utilities Application Framework V4.2.0.1.0, Oracle Service Bus Adapters for Outbound Messages and Notification/Workflow are available.

1561930.1 - Using Oracle Text for Fuzzy Searching
This whitepaper describes how to use the Name Matching and fuzzy operator facilities in Oracle Text to implement fuzzy searching using the @fuzzy helper function available in Oracle Utilities Application Framework V4.2.0.0.0.

1606764.1 - Audit Vault Integration
This whitepaper describes the integration with Oracle Audit Vault to centralize and separate Audit information from OUAF products. Audit Vault integration is available in OUAF 4.2.0.1.0 and above only.
1644914.1 - Migrating XAI to IWS
Migration from XML Application Integration to the new native Inbound Web Services in Oracle Utilities Application Framework 4.2.0.2.0 and above.

1643845.1 - Private Cloud Planning Guide
Planning Guide for implementing Oracle Utilities products on Private Clouds using Oracle's Cloud Foundation set of products.

1682436.1 - ILM Planning Guide
Planning Guide for Oracle Utilities new ILM based data management and archiving solution.

1682442.1 - ILM Implementation Guide for Oracle Utilities Customer Care and Billing
Implementation Guide for the ILM based solution for the Oracle Utilities Customer Care And Billing.

Watch Oracle DB Session Activity With The Real-Time Session Sampler

Watch Oracle DB Session Activity With My Real-Time Session Sampler
Watching session activity is a great way to diagnose and learn about Oracle Database tuning. There are many approaches to this. I wanted something simple, useful, modifiable, free of Oracle licensing issues, and that I could give away. The result is what I call the Oracle Real-Time Session Sampler (OSM: rss.sql).

The tool is simple to use. Based on a number of filtering command line inputs, it repeatedly samples active Oracle sessions and writes the output to a file in /tmp. You can do a "tail -f" on the file to watch session activity in real time!

The rss.sql tool is included in the OraPub System Monitor (OSM) toolkit (v13j), which can be downloaded HERE.

If you simply want to watch a video demo, watch below or click HERE.


The Back-Story
Over the past two months I have been creating my next OraPub Online Institute seminar about how to tune Oracle with an AWR/Statspack report using a quantitative time based approach. Yeah... I know the title is long. Technically I could have used Oracle's Active Session History view (v$active_session_history) but I didn't want anyone to worry about ASH licensing issues. And ASH is not available with Oracle Standard Edition.

The Real-Time Session Sampler is used in a few places in the online seminar where I teach about Oracle session CPU consumption and wait time. I needed something visual that would obviously convey the point I wanted to make. The Real-Time Session Sampler worked perfectly for this.

What It Does
Based on a number of command line inputs, rss.sql repeatedly samples active Oracle sessions and writes the output to a file in /tmp. The script contains no DML statements. You can do a "tail -f" on the output file to see session activity in real time. You can look at all sessions, a single session, sessions that are consuming CPU or waiting or both, etc. You can even change the sample rate. For example, once every 5.0 seconds or once every 0.25 seconds! It's very flexible and it's fascinating to watch.

Here is an example of some real output.



How To Use RSS.SQL
The tool is run within SQL*Plus and the output is written to the file /tmp/rss_sql.txt. You need two windows: one to sample the sessions and the other to look at the output file. Here are the script parameter options:

rss.sql  low_sid  high_sid  low_serial  high_serial  session_state  wait_event_partial|%  sample_delay

low_sid is the low Oracle session id.
high_sid is the high Oracle session id.
low_serial is the low Oracle session's serial number.
high_serial is the high Oracle session's serial number.
session_state is the current state of the session at the moment of sampling: "cpu", "wait" or for both "%".
wait_event_partial applies when the session is waiting: only sessions with this wait event are selected. Always set this to "%" unless you want to tighten the filtering.
sample_delay is the delay between samples, in seconds.

Examples You May Want To Try
By looking at the below examples, you'll quickly grasp that this tool can be used in a variety of situations.

Situation: I want to sample a single session (sid:10 serial:50) once every five seconds.

SQL>@rss.sql  10 10 50 50 % % 5.0

Situation: I want to essentially stream a single session's (sid:10 serial:50) activity.

SQL>@rss.sql 10 10 50 50 % % 0.125

Situation: I want to see what sessions are waiting for a row level lock while sampling once every second.

SQL>@rss.sql 0 99999 0 99999 wait enq%tx%row% 1.0

Situation: I want to see which sessions are consuming CPU, while sampling once every half second.

SQL>@rss.sql 0 99999 0 99999 cpu % 0.50

Be Responsible... It's Not OraPub's Fault!
Have fun and explore...but watch out! Any time you sample repeatedly, you run the risk of impacting the system under observation. You can reduce this risk by sampling less often (perhaps once every 5 seconds), by limiting the sessions you want to sample (not 0 to 99999) and by only selecting sessions in either a "cpu" or "wait" state.

A smart lower impact strategy would be to initially keep a broader selection criteria but sample less often; perhaps once every 15 seconds. Once you know what you want to look for, tighten the selection criteria and sample more frequently. If you have identified a specific session of interest, then you can stream the activity (if appropriate) every half second or perhaps every quarter second.

All the best in your Oracle Database tuning work,

Craig.
https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm

You can watch the seminar introductions for free on YouTube! If you enjoy my blog, subscribing will ensure you get a short, concise email about each new posting. Look for the form on this page.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com.






Categories: DBA Blogs

Oracle Voice Debuts on the App Store

Oracle AppsLab - Mon, 2014-08-11 16:05

Editor’s note: I meant to blog about this today, but looks like my colleagues over at VoX have beat me to it. So, rather than try to do a better job, read do any work at all, I’ll just repost it. Free content w00t!

Although I no longer carry an iOS device, I’ve seen Voice demoed many times in the past. Projects like Voice and Simplified UI are what drew me to Applications User Experience, and it’s great to see them leak out into the World.

Enjoy.

Oracle Extends Investment in Cloud User Experiences with Oracle Voice for Sales Cloud
By Vinay Dwivedi, and Anna Wichansky, Oracle Applications User Experience

Oracle Voice for the Oracle Sales Cloud, officially called “Fusion Voice Cloud Service for the Oracle Sales Cloud,” is available now on the Apple App Store. This first release is intended for Oracle customers using the Oracle Sales Cloud, and is specifically designed for sales reps.


The home screen of Fusion Voice Cloud Service for the Oracle Sales Cloud is designed for sales reps.

Unless people record new information they learn (e.g., write it down, repeat it aloud), they forget a high proportion of it in the first 20 minutes. The Oracle Applications User Experience team has learned through its research that when sales reps leave a customer meeting with insights that can move a deal forward, it’s critical to capture important details before they are forgotten. We designed Oracle Voice so that the app allows sales reps to quickly enter notes and activities on their smartphones right after meetings, no matter where they are.

Instead of relying on slow typing on a mobile device, sales reps can enter information three times faster (pdf) by speaking to the Oracle Sales Cloud through Voice. Voice takes a user through a dialog similar to a natural spoken conversation to accomplish this goal. Since key details are captured precisely and follow-ups are quicker, deals are closed faster and more efficiently.

Oracle Voice is also multi-modal, so sales reps can switch to touch-and-type interactions for situations where speech interaction is less than ideal.

Oracle sales reps tried it first, to see if we were getting it right.

We recruited a large group of sales reps in the Oracle North America organization to test an early version of Oracle Voice in 2012. All had iPhones and spoke American English; their predominant activity was field sales calls to customers. Users had minimal orientation to Oracle Voice and no training. We were able to observe their online conversion and usage patterns through automated testing and analytics at Oracle, through phone interviews, and through speech usage logs from Nuance, which is partnering with Oracle on Oracle Voice.

Users were interviewed after one week in the trial; over 80% said the product exceeded their expectations. Members of the Oracle User Experience team working on this project gained valuable insights into how and where sales reps were using Oracle Voice, which we used as requirements for features and functions.

For example, we learned that Oracle Voice needed to recognize product- and industry-specific vocabulary, such as “Exadata” and “Exalytics,” and we requested a vocabulary enhancement tool from Nuance that has significantly improved the speech recognition accuracy. We also learned that connectivity needed to persist as users traveled between public and private networks, and that users needed easy volume control and alternatives to speech in public environments.

We’ve held subsequent trials, with more features and functions enabled, to support the 10 workflows in the product today. Many sales reps in the trials have said they are anxious to get the full version and start using it every day.

“I was surprised to find that it can understand names like PNC and Alcoa,” said Marco Silva, Regional Manager, Oracle Infrastructure Sales, after participating in the September 2012 trial.

“It understands me better than Siri does,” said Andrew Dunleavy, Sales Representative, Oracle Fusion Middleware, who also participated in the same trial.

This demo shows Oracle Voice in action.

What can a sales rep do with Oracle Voice?

Oracle Voice allows sales reps to efficiently retrieve and capture sales information before and after meetings. With Oracle Voice, sales reps can:

Prepare for meetings

  • View relevant notes to see what happened during previous meetings.
  • See important activities by viewing previous tasks and appointments.
  • Brush up on opportunities and check on revenue, close date and sales stage.

Wrap up meetings

  • Capture notes and activities quickly so they don’t forget any key details.
  • Create contacts easily so they can remember the important new people they meet.
  • Update opportunities so they can make progress.

These screenshots show how to create tasks and appointments using Oracle Voice.

Our research showed that sales reps entered more sales information into the CRM system when they enjoyed using Oracle Voice, which makes Oracle Voice even more useful because more information is available to access when the same sales reps are on the go. With increased usage, the entire sales organization benefits from access to more current sales data, improved visibility on sales activities, and better sales decisions. Customers benefit too — from the faster response time sales reps can provide.

Oracle’s ongoing investment in User Experience

Oracle gets the idea that cloud applications must be easy to use. The Oracle Applications User Experience team has developed an approach to user experience that focuses on simplicity, mobility, and extensibility, and these themes drive our investment strategy. The result is key products that refine particular user experiences, like we’ve delivered with Oracle Voice.

Oracle Voice is one of the most recent products to embrace our developer design philosophy for the cloud of “Glance, Scan, & Commit.” Oracle Voice allows sales reps to complete many tasks at what we call glance and scan levels, which means keeping interactions lightweight, or small and quick.

Are you an Oracle Sales Cloud customer?

Oracle Voice is available now on the Apple App Store for Oracle customers using the Oracle Sales Cloud. It’s the smarter sales automation solution that helps you sell more, know more, and grow more.

Will you be at Oracle OpenWorld 2014? So will we! Stay tuned to the VoX blog for when and where you can find us. And don’t forget to drop by and check out Oracle Voice at the Smartphone and Nuance demo stations located at the CX@Sales Central demo area on the second floor of Moscone West.

The Business Value In Training

Rittman Mead Consulting - Mon, 2014-08-11 14:59

One of the main things I get asked to do here at Rittman Mead is deliver the OBIEE front-end training course (TRN 202). This is a great course that has served both us and our clients well over the years. It has always been in high demand and always delivered with great feedback from those in attendance. However, as with all things in life and business, there is going to be room for improvement and opportunities to provide even more value to our clients. Of all the feedback I receive from delivering the course, my favorite is that we do an incredible job delivering both the content and providing real business scenarios on how we have used this tool in the consulting field. Attendees will ask me how a feature works, and how I have used it with current and former clients, 100% of the time.

This year at KScope ’14 in Seattle, we were asked to deliver a 2 hour front-end training course. Our normal front-end course runs over a span of two days and covers just about every feature you can use, all the way from Answers and Dashboards to BI Publisher. Before the invitation to KScope ’14, we had been toying with the idea of delivering a course that not only teaches attendees how to navigate OBIEE and use its features, but also emphasizes the business value behind why those features exist in the first place. We felt that too often users are given a quick overview of what the tool includes, but are left to figure out on their own how to extract the most value. It is one thing to create a graph in Answers, and another to know what the best graph to use might be. So in preparation for the KScope session, we decided to build the content around not only how to develop in OBIEE, but also why, as a business user, you would choose one layout/graph/feature over another. As you would expect, the turnout for the session was fantastic: we had over 70 pre-register, with another 10 on the waiting list. This was proof that there is as pressing a need to pull business value out of the tool as there is to simply learn how to use it. We were so encouraged by the attendance and feedback from this event that we spent the next several weeks developing what is called the “Business Enablement Bootcamp”. It is a 3 day course that covers Answers, Dashboards, Action Framework, BI Publisher, and the new Mobile App Designer. This is an exciting time for us in that we not only get to show people how to use all of the great features that are built into the tool, but also get to incorporate years of consulting experience and hundreds of client engagements right into the content. Below I have listed a breakdown of the material and the value it will provide.

Answers

Whenever we deliver our OBIEE 5-day bootcamp, which covers everything from infrastructure to the front end, Answers is one of the key components that we teach. Answers is the building block for analysis in OBIEE. While this portion of the tool is relatively intuitive to get started with, there are so many valuable nuances and settings that can get overlooked without proper instruction. In order to get the most out of the tool, a business user needs to be able to not only create basic analyses, but also to use many of the advanced features such as hierarchical columns, master-detail, and selection steps. Knowing how and why to use these features is a key component to gaining valuable insight for your business users.

Dashboards

This one in particular is dear to my heart. To create an analysis and share it on a dashboard is one thing, but to tell a particular story with a series of visualizations strategically placed on a dashboard is something entirely different. Like anything else in business intelligence, optimal visualization and best practices are learned skills that take time and practice. Valuable skills like making the most of your white space, choosing the correct visualizations, and formatting will be covered. When you provide your user base with the knowledge and skills to tell the best story, there will be no time wasted with clumsy iterations and guesswork as to what is the best way to present your data. This training will provide some simple parameters to work within, so that users can quickly gather requirements and develop dashboards with more polish and relevance than ever before.

 Dashboard

 Action Framework

Whenever I deliver any form of front-end training, I always feel like this piece of OBIEE is either overlooked, undervalued, or both. This is because most users are either unaware of its use, or really don't have a clear idea of its value and functionality. It's as if it is viewed as an add-on, in the sense that it is simply a nice feature. The action framework is something that, when users are properly taught how to navigate it or given a demonstration of its value, will indeed become an invaluable piece of the stack. In order to get the most out of your catalog, users need to be shown how to strategically place action links to give the ability to drill across to analyses and add more context for discovery. These are just a few capabilities within the action framework that, when users are shown how and when to use them, can add valuable insight (not to mention convenience) to an organization.

BI Publisher/Mobile App Designer

Along with the action framework, this particular piece of the tool has the tendency to get overlooked, or to simply give users cold feet about implementing it to complement Answers. I actually would have agreed with these feelings before the release of 11.1.1.7. Before this release, a user would need to have a pretty advanced knowledge of data modeling. However, users can now simply pick any subject area and use the report creation wizard to be off and running creating pixel-perfect reports in no time. Also, the new Mobile App Designer on top of the Publisher platform is another welcome addition to this tool. Being the visual person that I am, I think that this is where this pixel-perfect tool really shines. Objects just look a lot more polished right out of the box, without having to spend a lot of time formatting the same way you would have to in Answers. During training, attendees will be exposed to many of the new features within BIP and MAD, as well as how to use them to complement Answers and dashboards.

Third Party Visualizations

While having the ability to implement third-party visualizations like D3 and Flot into OBIEE is more of an advanced skill, the market and need for this seem to be growing. While Oracle has done some good things in past releases with new visualizations like performance tiles and waterfall charts, we all know that business requirements can be demanding at times and may require going elsewhere to appease the masses. You can visit https://github.com/mbostock/d3/wiki/Gallery to see some of the other available visualizations beyond what is available in OBIEE. During training, attendees will learn the value of when and why external visualizations might be useful, as well as get a high-level view of how they can be implemented.

Bullet Chart

Users often make the mistake of viewing each piece of the front-end stack as a separate entity, and without proper training this is very understandable. Even though they are separate pieces of the product, they are all meant to work together and enhance the “Business Intelligence” of an organization. Without training the business on how each piece complements the others, it will always be viewed as just another frustrating tool that they don't have enough time to learn on their own. This tool is meant to empower your organization to have everything they need to make the most informed and timely decisions; let us use our experience to enable your business.

Categories: BI & Warehousing

Websites: What to look for in a database security contract

Chris Foot - Mon, 2014-08-11 10:28

When shopping for a world-class database administration service, paying attention to what specialists can offer in the way of protection is incredibly important. 

For websites storing thousands or even millions of customer logins, constantly monitoring server activity is essential. A recent data breach showed just how vulnerable e-commerce companies, Software-as-a-Service providers and a plethora of other online organizations are. 

A staggering number 
A Russian criminal organization known as "CyberVor" recently collected 1.2 billion unique user name and password sequences and 500 million email addresses from websites executing lackluster protection techniques, Infosecurity Magazine reported.

Andrey Dulkin, senior director of cyber innovation at CyberArk, noted the attack was orchestrated by a botnet – or a collection of machines working to achieve the same end goal. CyberVor carefully employed multiple infiltration techniques simultaneously in order to harvest login data.

Where do DBAs come into play? 
Database active monitoring is essential to protect the information websites hold for their subscribers and patrons. Employing anti-malware is one thing, but being able to perceive actions occurring in real-time is the only way organizations can hope to deter infiltration attempts at their onset. 

Although TechTarget was referring to disaster recovery, the same principles of surveillance apply to protecting databases. When website owners look at the service-level agreement, the database support company should provide the following accommodations:

  • Real-time reporting of all server entries, detailing which users entered an environment, how they're interacting with it and what programs they're using to navigate it.
  • Frequent testing that searches for any firewall vulnerabilities, unauthorized programs, suspicious SQL statements, etc.
  • On-call administrators capable of assessing any questions or concerns a website may have.

Applying basics, then language 
Although advanced analytics and tracking cookies can be applied to actively search for and eliminate viruses – like how white blood cells attack pathogens – neglecting to cover standard security practices obviously isn't optimal. 

South Florida Business Journal acknowledged one of the techniques CyberVor used was a vulnerability IT professionals have been cognizant of for the past decade – SQL injections. This particular tactic likely involved one of the criminals ordering the SQL database to unveil all of its usernames and passwords. 

SQL Server, Microsoft's signature database solution, is quite popular among many websites, so those using this program need to contract DBA organizations with extensive knowledge of the language and best practices. 

Finally, remote DBA services must be capable of encrypting login information, as well as the data passwords are protecting. This provides an extra layer of protection in case a cybercriminal manages to unmask a username-password combination. 

The post Websites: What to look for in a database security contract appeared first on Remote DBA Experts.

dotNet transaction guard

Laurent Schneider - Mon, 2014-08-11 10:16

Also with ODP.NET in 12c, you can check the commit outcome, just as in JDBC.

let’s create a table with a deferred primary key


create table t (x number primary key deferrable initially deferred);

Here an interactive Powershell Demo


PS> [Reflection.Assembly]::LoadFile("C:\oracle\product\12.1.0\dbhome_1\ODP.NET\bin\4\Oracle.DataAccess.dll")

GAC    Version        Location
---    -------        --------
True   v4.0.30319     C:\Windows\Microsoft.Net\assembly\GAC_64\Oracle.DataAccess\v4.0_4.121.1.0__89b483f429c47342\Oracle.DataAccess.dll

I first load the assembly. Some of my frequent readers may prefer Load("Oracle.DataAccess, Version=4.121.1.0, Culture=neutral, PublicKeyToken=89b483f429c47342") rather than hardcoding the oracle home directory.

PS> $connection=New-Object Oracle.DataAccess.Client.OracleConnection("Data Source=DB01; User Id=scott; password=tiger")

create the connection

PS> $connection.open()

connect

PS> $cmd = new-object Oracle.DataAccess.Client.OracleCommand("insert into t values (1)",$connection)

prepare the statement

PS> $txn = $connection.BeginTransaction()

begin transaction

PS> $ltxid = ($connection.LogicalTransactionId -as [byte[]])

Here I have my logical transaction id. Whatever happens to my database server (crash, switchover, restore, core dump, network disconnection), I have a logical id, and I will check it later.


PS> $cmd.executenonquery()
1

One row inserted


PS> $connection2=New-Object Oracle.DataAccess.Client.OracleConnection("Data Source=DB01; User Id=scott; password=tiger")
PS> $connection2.open()

I create a second connection to monitor the first one. Monitoring your own session would be unsafe and is not possible.


PS> $txn.Commit()

Commit, no error.


PS> $connection2.GetLogicalTransactionStatus($ltxid)
     Committed     UserCallCompleted
     ---------     -----------------
          True                  True

It is committed. I see it Committed from $connection2. This is what I expected.

Because I have a primary key, let's retry and see what happens.


PS> $txn = $connection.BeginTransaction()
PS> $ltxid = ($connection.LogicalTransactionId -as [byte[]])
PS> $cmd.executenonquery()
1
PS> $txn.Commit()
Exception calling "Commit" with "0" argument(s): "ORA-02091: Transaktion wurde zurückgesetzt
ORA-00001: Unique Constraint (SCOTT.SYS_C004798) verletzt"
At line:1 char:1
+ $txn.Commit()
+ ~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : OracleException
PS> $connection2.GetLogicalTransactionStatus($ltxid)
     Committed     UserCallCompleted
     ---------     -----------------
         False                 False

The commit fails (ORA-02091: transaction rolled back, caused by ORA-00001: unique constraint violated), and from connection2 we see it is not committed. It is a huge step toward integrity, as Oracle tells you the outcome of the transaction.

We see Committed=False.

Offline Visualization of Azkaban Workflows

Pythian Group - Mon, 2014-08-11 07:51

As mentioned in my past adventures, I’m often working with the workflow management tool ominously called Azkaban. Its foreboding name is not really deserved; it’s relatively straightforward to use, and offers a fairly decent workflow visualization. For that last part, though, there is a catch: to be able to visualize the workflow, you have to (quite obviously) upload the project bundle to the server. Mind you, it’s not that much of a pain, and could easily be managed by, say, a Gulp-fueled watch job. But still, it would be nice to tighten the feedback loop there, and be able to look at the graphs without having to go through the server at all.

Happily enough, all the information we need is available in the Azkaban job files themselves, and in a format that isn’t too hard to deal with. Typically, a job file will be called ‘foo.job’ and look like

type=command
command=echo "some command goes here"
dependencies=bar,baz

So what we need to do to figure out a whole workflow is to begin at its final job, and recursively walk down all its dependencies.

use 5.12.0;

use Path::Tiny;

sub create_workflow {
  my $job = path(shift);
  my $azkaban_dir = $job->parent;

  my %dependencies;

  my @files = ($job);

  while( my $file = shift @files ) {
    my $job = $file->basename =~ s/\.job//r;

    next if $dependencies{$job}; # already processed

    my @deps = map  { split /\s*,\s*/ }
               grep { s/^dependencies=\s*// }
                    $file->lines( { chomp => 1 } );

    $dependencies{$job} = \@deps;

    push @files, map { $azkaban_dir->child( $_.'.job' ) } @deps;
  }

  return %dependencies;
}

Once we have that dependency graph, it’s just a question of drawing the little boxes and the little lines. Which, funnily enough, is a much harder job than one would expect. And better left to the pros. In this case, I decided to go with Graph::Easy, which outputs text and SVG.

use Graph::Easy;

my $graph = Graph::Easy->new;

while( my( $job, $deps ) = each %dependencies ) {
    $graph->add_edge( $_ => $job ) for @$deps;
}

print $graph->as_ascii;

And there we go. We put those two parts together in a small script, and we have a handy cli workflow visualizer.

$ azkaban_flow.pl target/azkaban/foo.job

  +------------------------+
  |                        v
+------+     +-----+     +-----+     +-----+
| zero | --> | baz | --> | bar | --> | foo |
+------+     +-----+     +-----+     +-----+
               |                       ^
               +-----------------------+

Or, for the SVG-inclined,

$ azkaban_flow.pl -f=svg target/azkaban/foo.job

which gives us

(Screenshot of the generated SVG workflow graph.)
Categories: DBA Blogs

Rittman Mead and Oracle Big Data Appliance

Rittman Mead Consulting - Mon, 2014-08-11 07:00

Over the past couple of years Rittman Mead have been broadening our skills and competencies out from core OBIEE, ODI and Oracle data warehousing into the new “emerging” analytic platforms: R and database advanced analytics, Hadoop, cloud and clustered/distributed systems. As we talked about in the recent series of updated Oracle Information Management Reference Architecture blog posts and my initial look at the Oracle Big Data SQL product, our customers are increasingly looking to complement their core Oracle analytics platform with ones to handle unstructured and big data, and as technologists we’re always interested in what else we can use to help our customers get more insight out of their (total) dataset.

An area we’ve particularly focused on over the past year has been Hadoop and R analysis, with the recent announcement of our partnering with Cloudera and the recruitment of a big data and advanced analytics team operating out of our Brighton, UK office. We’ve also started to work on a number of projects and proof of concepts with customers in the UK and Europe, working mainly with core Oracle BI, DW and ETL customers looking to make their first move into Hadoop and big data. The usual pattern of engagement is for us to engage with some business users looking to analyse a dataset hitherto too large or too unstructured to load into their Oracle data warehouse, or where they recognise the need for more advanced analytics tools such as R, MapReduce and Spark but need some help getting started. Most often we put together a PoC Hadoop cluster for them using virtualization technology on existing hardware they own, allowing them to get started quickly and with no initial licensing outlay, with our preferred Hadoop distribution being Cloudera CDH, the same Hadoop distribution that comes on the Oracle Big Data Appliance. Projects then typically move on to Hadoop running directly on physical hardware, in a couple of cases Oracle’s Big Data Appliance, usually in conjunction with Oracle Database, Oracle Exadata and Oracle Exalytics for reporting.

One such project started off by the customer wanting to analyse a dataset that was too large for the space available in their Oracle database and that they couldn’t easily process or analyse using the SQL-based tools they usually used; in addition, like most large organisations, database and hardware provisioning took a long time and they needed to get the project moving quickly. We came in and quickly put together a virtualised Hadoop cluster together for them, on re-purposed hardware and using the free (Standard) edition of Cloudera CDH4, and then used the trial version of Oracle Big Data Connectors along with SFTP transfers to get data into the cluster and then analysed.


The PoC itself then ran for just over a month with the bulk of the analysis being done using Oracle R Advanced Analytics for Hadoop, an extension to R that allows you to use Hive tables as a data source and create MapReduce jobs from within R itself; the output from the exercise was a series of specific-answer-to-specific-question R graphs that solved an immediate problem for the client, and showed the value of further investment in the technology and our services – the screenshot below shows a typical ORAAH session, in this case analyzing the flight delays dataset that you can also find on the Exalytics server and in smaller form in OBIEE 11g’s SampleApp dataset.


That project has now moved on to a larger phase of work with Oracle Big Data Appliance used as the Hadoop platform rather than VMs, and Cloudera Hadoop upgraded from the free, unsupported Standard version to Cloudera Enterprise. The VMs in fact worked pretty well and had the advantage that they could be quickly spun-up and housed temporarily on an existing server, but were restricted by the RAM that we could assign to each VM – 2GB initially, quickly upgraded to 8GB per VM – and the fact that they were sharing CPU and IO resources. Big Data Appliance, by contrast, has 64GB of RAM per node – something that’s increasingly important now that in-memory tools like Impala are being used – and has InfiniBand networking between the nodes as well as fast network connections out to the wider network, something that’s often overlooked when speccing up a Hadoop system.

The support setup for the BDA is pretty good as well; from a sysadmin perspective there’s a lights-out ILOM console for low-level administration, as well as plugins for Oracle Enterprise Manager 12c (screenshot below), and Oracle support the whole package, typically handling the hardware support themselves and delegating to Cloudera for more Hadoop-specific queries. I’ve raised several SRs on client support contracts since starting work on BDAs, and I’ve not had any problem with questions not being answered or buck-passing between Oracle and Cloudera.

One thing that’s been interesting is the amount of work you need to do with the Big Data Appliance, beyond the initial installation and configuration by Oracle, to “on-board” it into the typical enterprise environment. BDAs are left with customers in a fully-working state, but as with Exalytics and Exadata, the initial install and configuration is just the start; you’ve then got to integrate the platform with your corporate systems and get developers on-boarded onto it. Tasks we’ve typically provided assistance with on projects like these include:

  • Configuring Cloudera Manager and Hue to connect to the corporate LDAP directory, and working with their security team to create LDAP groups for developer and administrative access that we then used to restrict and control access to these tools
  • Configuring other tools such as RStudio Server so that developers can be more productive on the platform
  • Putting in place an HDFS directory structure to support incoming data loads and data archiving, as well as directories to hold the output datasets from the analysis work we’re doing – all within the POSIX-style security setup that HDFS currently uses, which limits us to granting just owner, group and world permissions on directories (see the sketch just after this list)
  • Working with the client’s infrastructure team on things like alerting, troubleshooting and setting up backup and recovery – something that’s surprisingly tricky in the Hadoop world, as Cloudera’s backup tools only back up from Hadoop to Hadoop, and by definition your Hadoop system is going to hold a lot of data, volumes your current backup tools won’t easily handle
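
As a rough illustration of what that directory and permissions work looks like in practice (the paths, owners and groups below are invented for the example, not taken from any client project), it all comes down to the standard hdfs dfs file-system shell and plain owner/group/world permissions:

    # create landing, archive and output areas (illustrative paths only)
    hdfs dfs -mkdir -p /data/incoming /data/archive /data/output
    # owner, group and world is all pre-Sentry HDFS gives us, so control access by group
    hdfs dfs -chown -R etl:loaders  /data/incoming
    hdfs dfs -chown -R etl:analysts /data/output
    hdfs dfs -chmod -R 770 /data/incoming
    hdfs dfs -chmod -R 750 /data/output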

Once things are set up, though, you’ve got a pretty comprehensive platform that can be expanded from the initial six nodes our customers’ systems typically start with to the full eighteen-node cluster, and that can use tools such as ODI for data loading and movement, Spark and MapReduce to process and analyse data, and Hive, Impala and Pig to provide end-user access. The diagram below shows a typical future-state architecture we propose for clients on this initial BDA “starter config”, where we’ve moved up to CDH5.x, with Spark and YARN generally used as the processing framework and with additional products such as MongoDB used for document-style storage and analysis:

Something that’s turned out to be more of an issue on projects than I’d originally anticipated is complying with corporate security policies. By definition, most customers who buy an Oracle Big Data Appliance are going to be large organisations with an existing Oracle database estate, and if they deal with the public they’re going to have pretty strict security and privacy rules you’ll need to adhere to. What surprises most customers new to Hadoop, therefore, is how insecure, or at least how easily compromised, the average Hadoop cluster is: Hadoop FS shell security relies on trusted networks, and incoming user connections and interfaces such as ODBC don’t check passwords at all.

Hadoop and the BDA only become what’s termed “secure” when you link them to a Kerberos server, but not every customer has Kerberos set up, and unless you enable this feature right at the start when you set up the BDA, it’s a fairly involved task to add retrospectively. Moreover, customers are used to fine-grained access control over their data, a single security model, and a good understanding in their heads of how security works on their database, whereas Hadoop is still a collection of fairly loosely-coupled components with pretty primitive access controls, and no easy way to delete or redact data when, for example, a particular country’s privacy laws in theory mandate it.
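
To make the “easily compromised” point concrete: on a cluster without Kerberos, the Hadoop client simply trusts whatever username it is told, so anyone with network access can act as the HDFS superuser. A minimal sketch (the path is a placeholder):

    # simple authentication: the client-supplied username is taken at face value
    export HADOOP_USER_NAME=hdfs
    hdfs dfs -rm -r /data/output/sensitive_dataset   # succeeds - no password is ever requested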

Like everything, there’s a solution if you’re creative enough. Tools such as Apache Sentry provide role-based access control over Hive and Impala tables; alternative storage engines like HBase permit read, write, update and delete operations on data, rather than just HDFS’s insert and (table- or partition-level) delete; and tools like Cloudera Navigator and BDA features like Oracle Audit Vault give administrators some oversight of who’s accessing what data, and when. As I mentioned in my blog post a couple of weeks ago, Oracle’s Big Data SQL product addresses this requirement pretty well, potentially allowing us to apply Oracle security over both relational and Hadoop datasets, but for now we’re working within current CDH4 capabilities and planning to introduce Apache Sentry for role-based access control to Hive and Impala in the coming weeks. We’re also looking at implementing Cloudera’s “secure gateway” cluster topology, with all access restricted to a single gateway Hadoop node, the cluster itself firewalled-off, and external access limited to that gateway node plus HTTP / REST API access to the various cluster services, for example as shown in the diagram below:
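
Before getting to that gateway topology, the Sentry piece mentioned above boils down to SQL-style grants issued against Hive. A minimal sketch of the sort of role-based grants we have in mind, with the role, group, database and connection details all invented for the example and Sentry assumed to already be enabled:

    # write the grants to a script and run it through beeline (placeholder JDBC URL)
    cat > /tmp/sentry_grants.sql <<'EOF'
    CREATE ROLE analyst_role;
    GRANT SELECT ON DATABASE flight_delays TO ROLE analyst_role;
    GRANT ROLE analyst_role TO GROUP analysts;
    EOF
    beeline -u jdbc:hive2://bdanode01:10000/default -n hive -f /tmp/sentry_grants.sql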

My main focus on Hadoop projects has been the overall system architecture, and working with the client’s infrastructure and security teams to help them adopt the BDA and take over its maintenance. The analysis side has been equally interesting, with a number of projects using tools such as R, Oracle R Advanced Analytics for Hadoop and core Hive/MapReduce for data analysis; Flume, Java and Python for data ingestion and processing; and most recently OBIEE 11g for publishing the results out to a wider audience. Following the development model we outlined in the second post in our updated Information Management Reference Architecture blog series, we typically split delivery of each project’s output into two distinct phases: a discovery phase, typically done using RStudio and Oracle R Advanced Analytics for Hadoop, where we explore and start understanding the dataset, presenting initial findings to the business and using their feedback and direction to inform the second phase; and a commercial exploitation phase, where we use the discovery phase’s outputs and models to drive a more structured dimensional model, with the output being OBIEE analyses and dashboards.

We looked at several options for providing the datasets for OBIEE to query, with our initial idea being to connect OBIEE directly to Hive and Impala and let the users query the data in-place, directly on the Hadoop cluster, with an architecture like the one in the diagram below:

In fact this turned out not to be possible: whilst OBIEE 11.1.1.7 can access Apache Hive datasources, it currently ships only with HiveServer1 ODBC support and no support for Cloudera Impala, which means we need to wait for a subsequent release of OBIEE 11g to report against the ODBC interfaces provided by CDH4 and CDH5 on the BDA (although, ironically, you can get HiveServer2 and Impala working with OBIEE 11.1.1.7 on Windows, though that platform isn’t officially supported by Oracle for Hadoop access, only Linux). Either way, it soon became apparent that even if we could get Hive and Impala access working, it made more sense to use Hadoop as the data ingestion and processing platform – providing access to data analysts at this point if they wanted the raw datasets – with the output then loaded into an Oracle Exadata database, either via Sqoop or via Oracle Loader for Hadoop and ideally orchestrated by Oracle Data Integrator 12c, and users then querying these Oracle tables rather than the Hive and Impala ones on the BDA, as shown in the diagram below.
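
Purely as an illustration of that load step, a Sqoop export of a processed HDFS dataset into an Oracle schema looks something like the sketch below; the connection string, schema, table and directory names are all placeholders rather than anything from a real project:

    # export a tab-delimited HDFS output directory into an Oracle table (placeholders throughout)
    sqoop export \
      --connect jdbc:oracle:thin:@//exadata-scan.example.com:1521/DWPROD \
      --username BDA_STAGE -P \
      --table FLIGHT_DELAY_SUMMARY \
      --export-dir /data/output/flight_delay_summary \
      --input-fields-terminated-by '\t' \
      -m 4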

In practice, Oracle SQL is far more complete and expressive than HiveQL or Impala SQL, and it makes more sense to use Oracle as the query platform for the vast majority of users, with data analysts and data scientists still able to access the raw data on Hadoop using tools like Hive, R and (once we move to CDH5) Spark.

The final thing that’s been interesting about working on Hadoop and Big Data Appliance projects is that 80% of it, in my opinion, is just the same as working on large enterprise data warehouse projects, with the remaining 20% being “the magic”. A large portion of your time is spent analysing and setting up feeds into the system; it’s just that in this case you use tools like Flume instead of GoldenGate (though GoldenGate can also load into HDFS and Hive, which is useful for transactional database sources versus Flume’s focus on file and server log sources). Another big part of the work is data processing, ingestion, reformatting and combining – again, skills an ETL developer would have, though with much more reliance at this point on command-line tools and Unix utilities, albeit with a place for tools like ODI once you get to the set-based filtering, joining and aggregating phase. In most cases the output of your analysis and processing will be Hive and Impala tables, so that results can be analysed using tools such as OBIEE, and you therefore need skills in areas such as dimensional modelling, business analysis and dashboard prototyping, as well as tool-specific skills such as OBIEE RPD development.
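
As a flavour of the Flume side of that, the sketch below defines a minimal agent that picks up files from a spool directory and lands them in HDFS, then starts it with flume-ng; the agent name, directories and HDFS URL are all placeholders:

    # illustrative Flume agent config: spooldir source -> memory channel -> HDFS sink
    cat > /etc/flume-ng/conf/weblog-agent.conf <<'EOF'
    weblog.sources  = src1
    weblog.channels = ch1
    weblog.sinks    = sink1
    weblog.sources.src1.type     = spooldir
    weblog.sources.src1.spoolDir = /var/spool/weblogs
    weblog.sources.src1.channels = ch1
    weblog.channels.ch1.type     = memory
    weblog.sinks.sink1.type      = hdfs
    weblog.sinks.sink1.hdfs.path = hdfs://bdanode01:8020/data/incoming/weblogs
    weblog.sinks.sink1.channel   = ch1
    EOF
    flume-ng agent --conf /etc/flume-ng/conf --conf-file /etc/flume-ng/conf/weblog-agent.conf --name weblog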

Where the “magic” happens, of course, is in the data preparation and analysis you do once the data is loaded: quite intensively and interactively in the discovery phase, and then in the form of MapReduce and Spark jobs, Sqoop loads and Oozie workflows once you know what you’re after and need to process the data into something more tabular for tools like OBIEE to access. We’re building up a team competent in techniques such as large-scale data analysis, data visualisation, statistical analysis, text classification and sentiment analysis, and the use of NoSQL and JSON-type data sources, which combined with our core BI, DW and ETL teams allows us to cover a project from end to end. It’s still relatively early days, but we’re encouraged by the response from our project customers so far – and, to be honest, by the quality of the Oracle big data products and the Cloudera platform they’re based around – and we’re looking forward to helping other Oracle customers get the most out of their adoption of these new technologies.

If you’re an Oracle customer looking to make your first move into the worlds of Hadoop, big data and advanced analytics, feel free to drop me an email at mark.rittman@rittmanmead.com for some initial advice and guidance – the fact that we come from an Oracle-centric background typically makes it easier for us to relate these new concepts to the ones you’re already familiar with. Similarly, if you’re about to bring an Oracle Big Data Appliance system on board and want to know how best to integrate it with your existing Oracle BI, DW, data integration and systems management estate, get in contact and I’d be happy to share our experiences and delivery approach.

Categories: BI & Warehousing

Vote for Rittman Mead at the UKOUG Partner of the Year Awards 2014!

Rittman Mead Consulting - Mon, 2014-08-11 03:00

Rittman Mead are proud to announce that we’ve been nominated by UKOUG members and Oracle customers in five categories in the upcoming UKOUG Partner of the Year Awards 2014: Business Intelligence, Training, Managed Services, Operating Systems Storage and Hardware, and Emerging Partner, reflecting the range of products and services we now offer for customers in the UK and around the world.

Although Rittman Mead are a worldwide organisation with offices in the US, India, Australia and now South Africa, our main operation is in the UK and for many years we’ve been a partner member of the UK Oracle User Group (UKOUG). Our consultants speak at UKOUG Special Interest Group events as well as the Tech and Apps conferences in December each year, we write articles for Oracle Scene, the UKOUG members’ magazine, and several of our team including Jon and myself have held various roles including SIG chair and deputy chair, board member and even editor of Oracle Scene.

Partners, along with Oracle customers and of course Oracle themselves, are a key part of the UK Oracle ecosystem and to recognise their contribution the UKOUG recently brought in their Partner of the Year Awards that are voted on by UKOUG members and Oracle customers in the region. As these awards are voted on by actual users and customers we’ve been especially pleased over the years to win several Oracle Business Intelligence Partner of the Year Gold awards, and last year we were honoured to receive awards in five categories, including Business Intelligence Partner of the Year, Training Partner of the Year and Engineered Systems Partner of the Year.

This year we’ve been nominated again in five categories, and if you like what we do we’d really appreciate your vote, which you can cast at any time up to the closing date, September 15th 2014. Voting is open to UKOUG members and Oracle customers, and only takes a few minutes – the voting form is here, and you don’t need to be a UKOUG member, only an Oracle end-user or customer. These awards are a great recognition of the hard work our team puts in, so thanks in advance for any votes you can cast for us!

Categories: BI & Warehousing

Test your WebLogic 12.1.3 environment with Robot

Edwin Biemond - Sun, 2014-08-10 11:42
Robot Framework is a generic test automation framework with an easy-to-use tabular test data syntax; it uses the keyword-driven testing approach, which means we can write our tests as readable, understandable text. If we combine this with the REST Management interface of WebLogic 12.1.3, we are able to test every detail of a WebLogic domain configuration, and when we combine this…
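
As a taste of the underlying interface the tests drive (rather than the Robot syntax itself), the 12.1.3 RESTful Management Services can be poked straight from the shell; the host, port and credentials below are placeholders, and the REST interface has to be enabled on the domain first:

    # list the servers in the domain as JSON via the WebLogic 12.1.3 REST management interface
    curl -s --user weblogic:welcome1 \
      -H "Accept: application/json" \
      http://adminhost.example.com:7001/management/wls/latest/servers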

Asus Zenfone, the Best Android Smartphone

Daniel Fink - Sat, 2014-08-09 18:51
Asus Zenfone, the Best Android Smartphone - This isn’t a bad route even if you have never taken baking classes before, because the learning process takes you from basic skills through to more advanced work. Some of these schools also offer internship programmes, so you can practise what you have learned under the direction of a master chocolatier.

But nothing compares to being taught right there in the classroom. You just have to make the time, and if you can’t make it to classes on weekdays, see if there are any offered at weekends.

The cost of tuition at chocolate-making schools varies depending on the type of programme and on whether it is run in the classroom or at home. Those who choose to learn in the classroom don’t have to worry, because all the materials they need will already be provided. Those learning at home have to buy them from a craft store and make do with what they have.

Learning how to make chocolate with the help of trained professionals is far better than trying to perfect it through trial and error. After all, there is a certain science to mixing the ingredients, and a little bit of marketing if you are planning to sell the product.

Once you get the hang of things, you can try some experiments and create concoctions of your own. After all, chocolates don’t always come in boxes. The different processes in making chocolate: there are various ways to learn chocolate making. The first thing to find out is where these delicious treats come from. Most of you will already know the answer: chocolates are made from cocoa beans.

From the trees to the chocolate manufacturers, how did these processes really evolve? Over time there have been many developments in chocolate making. Technology has benefited plenty of life’s endeavours, and that applies to chocolate making too.

But that advancement only really applies to the harvesting part. The processing itself basically stays the same, done the old traditional way. As the saying goes, don’t fix a thing if it isn’t broken; perhaps the same rule is being applied here.

It feels good to eat chocolate, but do you want to know about the various methods that go into making it? Here are some.

Roasting. It takes a good amount of roasting, as well as cocoa seed fermentation, to arrive at the quality of chocolate you are looking for. In the pre-roasting stage the beans pass under infrared radiant heaters, a process that separates the nibs of the beans from the shells. The temperature for this stage is 100 to 140 degrees Celsius, and it takes around twenty to forty minutes.

Roasting can also be done directly. Once the beans are roasted, the shells can easily be removed, an approach favoured by most chocolate manufacturers because it retains the flavour of the beans. For this, the temperature is 150 to 160 degrees Celsius.

Fermentation. This is done to reduce the levels of sugar, glucose and fructose, as well as amino acids, in the beans. It brings out the flavour that the roasting process then enhances. Not just anyone can do this; it takes a master to hone the craft, and beans can rot if something goes wrong in the process.

Shelling. Removing the shells from the beans takes more steps than you might imagine, including milling and then winnowing, and every step matters if the grains are to come out at the right size. Tasting. If you think this is an easy task, that turns out not to be the case: it takes skill and experience. A taster must have studied the flavours of the different types and variations of chocolate to be able to judge which varieties should be brought to market.

These people can be compared to wine experts. Just one bite of a chocolate will tell them what processes it went through, what kind of beans were used, or where it was actually made. And there are still more kinds of chocolate out there on the market; imagine what all of them have to go through just to reach your favourite grocery store so that you can buy them.

You don’t have to be an expert chocolate maker, but you can start with a few techniques on the tasting side. If you are given a filled chocolate, let it linger in your mouth until it melts and you can taste all its flavours, then chew it about five times, enough for the flavour and the coating to blend.

Chocolate has a timeless charm that hooks many a person with an appetite for it. Then again, some chocolates are really expensive. In reality, with a few tips and tricks you can actually make your own chocolate, save yourself money, and enjoy it all the more because you made it yourself.

Federal Reserve Board backs up e-Literate in criticism of Brookings report on student debt

Michael Feldstein - Sat, 2014-08-09 13:30

I have been very critical of the Brookings Institution report on student debt, particularly in my post “To see how illogical the Brookings Institution report on student loans is, just read the executive summary”.

D’oh! It turns out that real borrowers with real tax brackets paying off real loans are having real problems. The percentage at least 90 days delinquent has more than doubled in just the past decade. In fact, based on another Federal Reserve report, the problem is much bigger for the future: “44% of borrowers are not yet in repayment, and excluding those, the effective 90+ delinquency rate rises to more than 30%”.

More than 30% of borrowers who should be paying off their loans are at least 90 days delinquent? It seems someone didn’t tell them that their payment-to-income ratios (at least for their mythical average friends) are just fine and that they’re “no worse off”.

Well, now the Federal Reserve Board itself weighs in on the subject with a new survey, at least as described by an article in The Huffington Post. I have read the Fed report and concur with the Huffington Post’s analysis – it does argue against the Brookings findings.

Among the emerging risks spotlighted by the survey is the nation’s $1.3 trillion in unpaid student debt, suggesting that high levels of student debt are crimping the broader economy. Nearly half of Americans said they had to curb their spending last year in order to make payments on student loans, adding weight to the fear among federal financial regulators that the burden of student debt on households will depress economic growth for years to come.

Some 35 percent of survey respondents who are paying back student loans said they had to reduce their spending by “a little” over the past year to keep up with their student debt payments. Another 11 percent said they had to cut back their spending by “a lot.”

The Fed’s findings appear to challenge recent research by a pair of economists at the Brookings Institution, highlighted in The New York Times and cited by the White House, that argues that households with student debt are no worse off today than they were two decades ago.

The full Fed report can be found here. Much of the survey was focused on borrowers and their perceptions of how their student loans impact them, which is much more reliable than Brookings’ assumptions on how convoluted financial ratios should affect borrowers. In particular, consider this table:

Fed Table 11

Think about this situation: amongst borrowers who have completed their degrees, roughly as many think the financial benefits of a degree outweigh the costs as think the opposite (41.5% to 38.1%). I don’t see this as an argument against getting a degree, but rather as clear evidence that the student loan crisis is real and will have a big impact on the economy and on future student decision-making.

Thanks to the Federal Reserve Board for helping us out.

Update: Clarified that this is Federal Reserve Board and not NY Fed.

The post Federal Reserve Board backs up e-Literate in criticism of Brookings report on student debt appeared first on e-Literate.

Required Field Validation in Oracle MAF

Shay Shmeltzer - Fri, 2014-08-08 16:38

A short entry to explain how to do field validation in Oracle MAF. As an example let's suppose you want a field to have a value before someone clicks to do an operation.

To do that you can set the field's attribute for required and "show required" like this:

  <amx:inputText label="label1" id="it1" required="true" showRequired="true"/>

Now if you run your page, leave the field empty and click a button that navigates to another page, you'll notice that there is no indication of an error. This is because you didn't tell the AMX page to actually perform validation.

 To add validation you use an amx:validationGroup tag that will surround the fields you want to validate.

For example:

     <amx:validationGroup id="vg1">
       <amx:inputText label="label1" id="it1" required="true" showRequired="true"/>
     </amx:validationGroup>

Then you can add an amx:validationBehavior tag to the button that does the navigation and tell it to validate the group you defined before (vg1 in our example).

       <amx:commandButton id="cb2" text="go" action="gothere">
         <amx:validationBehavior id="vb1" group="vg1"/>
       </amx:commandButton>

Now when you run the page and try to navigate you'll get your validation error.

Categories: Development