
Feed aggregator

Can you handle big data? Oracle may have an answer

Chris Foot - Wed, 2014-08-13 01:33

Now more than ever, database administration services are providing their clients with the expertise and software required to support big data endeavors. 

They haven't necessarily had much of a choice. Businesses need environments such as Hadoop to store the large amounts of unstructured data they strive to collect and analyze to gain insights into customer sentiment, procurement efficiencies and a wealth of other factors. 

Oracle's assistance 
According to PCWorld, Oracle recently released a software tool capable of querying Hadoop and NoSQL (Not Only SQL) environments. The solution is an add-on for the company's Big Data Appliance, a data center rack composed of its Sun x86 servers programmed to run Cloudera's Hadoop distribution.

In order for businesses to benefit from the simplicity of Big Data SQL, the source noted they must have a 12c Oracle database installed on the company's Exadata database machine. This allows Exadata and the x86 Big Data Appliance configuration to share an interconnect for data exchange. 

Assessing a "wider problem"
Oracle Vice President of Product Development Neil Mendelson asserted the solution wasn't created for the purpose of replacing existing SQL-on-Hadoop engines such as Hive and Impala. Instead, Mendelson maintained that Big Data SQL enables remote DBA experts to query a variety of information stores while moving a minimal amount of data. 

This means organizations don't have to spend the time or network resources required to move large troves of data from one environment to another, because Smart Scan technology is applied to conduct filtering on a local level.

InformationWeek contributor Doug Henschen described Smart Scan as a function that combs through data on the storage tier and identifies what information is applicable to the submitted query. Oracle Product Manager Dan McClary outlined an example of how it could be used:

  • A data scientist wants to compare and contrast Twitter data in Hadoop with customer payment information in Oracle Database
  • Smart Scan filters out tweets that don't have translatable comments and eliminates posts without latitude and longitude data
  • Oracle Database then receives one percent of the total Twitter information in Hadoop
  • A visualization tool identifies location-based profitability based on customer sentiment

Reducing risk 
In addition, Oracle allows DBA services to leverage authorizations and protocols to ensure security is maintained when Hadoop or NoSQL is accessed. For instance, a professional assigned the role of "analyst" has permission to query the big data architectures, while those without that role cannot. 


Dept/Emp POJO's with sample data for Pivotal GemFire

Pas Apicella - Tue, 2014-08-12 21:57
I constantly blog about using DEPARTMENT/EMPLOYEE POJOs with sample data. Here is how to create files with data to load into GemFire to give you that sample set.

Note: You would need to create POJOs for the Department/Employee objects with getters/setters for the attributes mentioned below.

Dept Data (save this as a file named dept-data)

put --key=10 --value=('deptno':10,'name':'ACCOUNTING') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=20 --value=('deptno':20,'name':'RESEARCH') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=30 --value=('deptno':30,'name':'SALES') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=40 --value=('deptno':40,'name':'OPERATIONS') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;

Emp Data (save this as a file named emp-data)

put --key=7369 --value=('empno':7369,'name':'SMITH','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7370 --value=('empno':7370,'name':'APPLES','job':'MANAGER','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7371 --value=('empno':7371,'name':'APICELLA','job':'SALESMAN','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7372 --value=('empno':7372,'name':'LUCIA','job':'PRESIDENT','deptno':30) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7373 --value=('empno':7373,'name':'SIENA','job':'CLERK','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7374 --value=('empno':7374,'name':'LUCAS','job':'SALESMAN','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7375 --value=('empno':7375,'name':'ROB','job':'CLERK','deptno':30) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7376 --value=('empno':7376,'name':'ADRIAN','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7377 --value=('empno':7377,'name':'ADAM','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7378 --value=('empno':7378,'name':'SALLY','job':'MANAGER','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7379 --value=('empno':7379,'name':'FRANK','job':'CLERK','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7380 --value=('empno':7380,'name':'BLACK','job':'CLERK','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7381 --value=('empno':7381,'name':'BROWN','job':'SALESMAN','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;

Load into GemFire (assumes the JAR for the POJOs exists in the classpath of the GemFire cache servers)

The script below uses GFSH to load each file into the correct region, referencing the correct POJO class specified inside the files created above.

export CUR_DIR=`pwd`

gfsh <<!
connect --locator=localhost[10334];
run --file=$CUR_DIR/dept-data
run --file=$CUR_DIR/emp-data
!

Below is what the Department.java POJO would look like, for example.
  
package pivotal.au.se.deptemp.beans;

public class Department
{
    private int deptno;
    private String name;

    public Department()
    {
    }

    public Department(int deptno, String name) {
        super();
        this.deptno = deptno;
        this.name = name;
    }

    public int getDeptno() {
        return deptno;
    }

    public void setDeptno(int deptno) {
        this.deptno = deptno;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Department [deptno=" + deptno + ", name=" + name + "]";
    }

}
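
The Employee.java POJO is not shown in the post; a minimal sketch, assuming only the attributes used in the put commands above (empno, name, job, deptno) and the same package, could look like this:

package pivotal.au.se.deptemp.beans;

public class Employee
{
    private int empno;
    private String name;
    private String job;
    private int deptno;

    public Employee()
    {
    }

    public Employee(int empno, String name, String job, int deptno) {
        super();
        this.empno = empno;
        this.name = name;
        this.job = job;
        this.deptno = deptno;
    }

    public int getEmpno() {
        return empno;
    }

    public void setEmpno(int empno) {
        this.empno = empno;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getJob() {
        return job;
    }

    public void setJob(String job) {
        this.job = job;
    }

    public int getDeptno() {
        return deptno;
    }

    public void setDeptno(int deptno) {
        this.deptno = deptno;
    }

    @Override
    public String toString() {
        return "Employee [empno=" + empno + ", name=" + name + ", job=" + job + ", deptno=" + deptno + "]";
    }

}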
Categories: Fusion Middleware

Deploying an SQL Server database to Azure

Yann Neuhaus - Tue, 2014-08-12 18:46

Deploying an SQL Server database to a Windows Azure virtual machine is a feature introduced with SQL Server 2014. It can be useful for an organization that wants to reduce its infrastructure management, simplify deployment, or generate virtual machines quickly.

 

Concept

This new feature is a wizard which allows you to either copy or migrate an On-Premise SQL Server database to a Windows Azure virtual machine.

The following schema explains the main process of the feature:
PrtScr-capture_20.png

An existing SQL Server instance is present on an On-Premise machine of an organization, hosting one or several user databases.

Once the new feature is used, a copy of the on-premise database will be available on the SQL Server instance in the Cloud.

 

Prerequisites

Azure Account Creation

Obviously, the first requirement is an Azure account! To create an Azure account, go to the official Azure website.

 

Azure Virtual Machine Creation

Once the Azure account has been set, a Windows Azure virtual machine has to be created.

There are two ways to create a Virtual Machine: by a "Quick Create" or by a "Custom Create". It is recommended to perform a "Custom Create" (from gallery) because it offers more flexibility and control on the creation.

 

PrtScr-capture_20_20140730-120018_1.png

 

This example is done with “FROM GALLERY”. So the creation of the virtual machine will be performed with a wizard in four steps.

 

The first step of the wizard is the selection of the virtual machine type.

Contrary to what one might expect, Windows Azure offers a wide range of virtual machines, which do not only come from the Microsoft world.

See more details about Virtual Machines on the Microsoft Azure website.


 

PrtScr-capture_20_20140730-120252_1.png

The targeted feature is only available on SQL Server 2014, so a virtual machine including SQL Server 2014 has to be created. This example uses the Standard edition, the most restrictive edition of SQL Server 2014.

The first step of the wizard is configured as follows: 


 PrtScr-capture_20_20140730-120551_1.png

For this example, default settings are used. Indeed the configuration of the virtual machine is not the main topic here, and can change depending on one’s need.

The release date from June is selected, so the SP1 will be included for SQL Server 2014.

The "Standard Tier" is selected so load-balancer and auto-scaling will be included. See more details about Basic and Standard Tier on the Microsoft Azure website.

The virtual machine will run with 2 cores and 3.5 GB memory, which will be enough for this demo. See more details about prices and sizes on the Microsoft Azure website.

The virtual machine is named “dbiservicesvm” and the first (admin) user is “dbiadmin”.

 

The second step of the wizard is configured as follows:

 

PrtScr-capture_20_20140730-121233_1.png

 

The creation of a virtual machine in Azure requires a cloud service, which is a container for virtual machines in Azure. See more details about Cloud Service on the Microsoft Azure website.

Furthermore, a storage account ("dbiservices") and an affinity group ("IT") are also required to store the disk files of the virtual machine. To create a storage account and an affinity group, see the Azure Storage Account Creation part from a previous blog.

 

The third step of the wizard is configured as follows:

 

PrtScr-capture_20_20140730-121705_1.png

 

This screen offers the possibility to install extensions for the virtual machine. Virtual machine extensions simplify the virtual machine management. By default, VM Agent is installed on the virtual machine.

The Azure VM Agent will be responsible for installing, configuring and managing VM extensions on the virtual machine.

For this example, VM extensions are not required at all, so nothing is selected.

 

Security

At the SQL Server level, an On-Premise SQL user with the Backup Operator privilege is required on the targeted database. Of course, the user must be mapped to a SQL Server login to be able to connect to the SQL Server instance.

If the user does not have the Backup Operator privilege, the process will be blocked at the database backup creation step:

 

PrtScr-capture_20_20140730-122146_1.png

 

Moreover, an Azure account with the Administrator Service privilege linked to the subscription containing the Windows Azure virtual machine is also required. Without this account, it is impossible to retrieve the list of Windows Azure virtual machines available on the Azure account, including the SQL Server Windows Azure virtual machine created previously.

Finally, the endpoint of the Cloud Adapter must be configured to access the virtual machine during the execution of the wizard. If not, the following error message will occur:

 

PrtScr-capture_201.png

 

The Cloud Adapter is a service which allows the On-Premise SQL Server instance to communicate with the SQL Server Windows Azure virtual machine. More details about Cloud Adapter for SQL Server on TechNet.

To configure the Cloud Adapter port, the Azure administrator needs to access the Endpoints page of the SQL Server Windows Azure virtual machine in the Azure account. Then a new endpoint needs to be created as follows:

 

PrtScr-capture_201_20140730-122732_1.png

 

Now, the SQL Server Windows Azure virtual machine accepts connections on the Cloud Adapter port.

 

The credentials of a Windows administrator user on the Windows Azure virtual machine are also required to connect to the virtual machine in Azure. This administrator also needs a SQL Server login. If these two requirements are not met, the following error message will occur:

 

PrtScr-capture_201_20140730-122929_1.png

 

The login created in the SQL Server instance in Azure must also have the dbcreator server role, otherwise the following error message will occur:

 

PrtScr-capture_201_20140730-123033_1.png

 

 

Deploy a database to the Windows Azure VM wizard

Launch the wizard

The wizard can be found with a right-click on the targeted database, then "Tasks" and finally "Deploy Database to a Windows Azure VM…"

 

PrtScr-capture_201_20140730-132221_1.png

 

In fact, it does not matter from which user database the wizard is launched, because the wizard will ask to connect to an instance and then to select a database.

 

Use the wizard

The first step is an introduction to the wizard, which is of little interest.

 

The second step needs three fields: the SQL Server instance, the SQL Server database and the folder to store the backup files.

 

PrtScr-capture_201_20140730-132446_1.png

 

The "SQL Server" field is the SQL Server instance hosting the SQL Server database which is planned to be deployed to Azure.

The "Select Database" field must obviously reference the database to deploy.

 

The third step needs two fields: the authentication to the Azure account and the selection of the subscription.

 

PrtScr-capture_201_20140730-132747_1.png

 

After clicking on the Sign in… button, a pop-up will ask for the credentials of the Azure account with the Administrator privilege.

As soon as the credentials are entered to connect to the Azure account, a certificate will be generated.

If several subscriptions are linked to the Azure account, select the correct subscription ID. In this example, there is only one subscription linked to the Azure account.

For more information, see what’s the difference between an Azure account and a subscription on TechNet.

 

The fourth step of the wizard is divided into several parts. First, there is the virtual machine selection, as follows:

 

PrtScr-capture_201_20140730-133036_1.png

 

The cloud service ("dbiservicescloud") needs to be selected. Then, the virtual machine ("dbiservicesvm") can be selected.

 

Credentials to connect to the virtual machine must be provided, as follows:

 

PrtScr-capture_201_20140730-133421_1.png

 

The SQL Server instance name and the database name need to be filled in, as follows:

 

PrtScr-capture_201_20140730-133756_1.png

 

Finally, once all the information has been entered in the wizard, the deployment can be launched. The deployment finishes successfully!

 

After the deployment

First, the On-Premise database used for the deployment is still present on the SQL Server instance.

 

PrtScr-capture_201_20140730-134024_1.png

 

Then, the folder used to store the temporary file is still present. In fact, this is a "bak" file because the mechanism behind the wizard is a simple backup and restore of the database from an On-Premise SQL Server instance to an Azure SQL Server instance.

 

PrtScr-capture_201_20140730-134116_1.png

 

So do not forget to delete the .bak file after the deployment, because for big databases this file unnecessarily fills your storage space.

 

Finally, the deployed database can be found on the Azure SQL Server instance!

 

PrtScr-capture_201_20140730-134220_1.png

Limitations

The first limitation of this new feature is database size: it cannot exceed 1 TB.

Moreover, it does not support hosted services that are associated with an Affinity Group.

The new feature does not allow all versions of SQL Server databases to be deployed. Indeed, only databases from SQL Server 2008 or higher can be deployed.

 

Similar features

If this new feature, for any reason, does not meet the organization’s needs, two similar features exist.

 

SQL database in Azure

This feature introduced with SQL Server 2012 is quite similar to the feature presented in this blog, because the SQL Server database and the Windows Azure virtual machine are hosted on Azure.

However, the infrastructure management is much more reduced: no virtual machine management nor SQL Server management. A good comparison between these two functionalities is available: Choosing between SQL Server in Windows Azure VM & Windows Azure SQL Database.

More details about SQL Database on Azure.

 

SQL Server data files in Azure

This is a new feature introduced with SQL Server 2014 which allows the data files of a SQL Server database to be stored in Azure Blob storage, while the SQL Server instance runs On-Premise.

It simplifies the migration process, reduces the On-Premise space storage and management, and simplifies the High Availability and recovery solutions…

More details about SQL Server data files in Azure on TechNet.

 

Conclusion

With this feature, Microsoft simplifies the SQL Server database deployment process from On-Premise to Azure.

Azure is a rather attractive and interesting tool that is highly promoted by Microsoft.

Pilots: Too many ed tech innovations stuck in purgatory

Michael Feldstein - Tue, 2014-08-12 13:44

Steve Kolowich wrote an article yesterday in the Chronicle that described the use of LectureTools, a student engagement and assessment application created by faculty member Perry Samson at the University of Michigan. These two paragraphs jumped out at me.

The professor has had some success getting his colleagues to try using LectureTools in large introductory courses. In the spring, the software was being used in about 40 classrooms at Michigan, he says.

Adoption elsewhere has been scattered. In 2012, Mr. Samson sold LectureTools to Echo360[1], an education-technology company, which has started marketing it to professors at other universities. The program is being used in at least one classroom at 1,100 institutions, according to Mr. Samson, who has kept his title of chief executive of LectureTools. But only 80 are using the software in 10 or more courses.

93% of LectureTools clients use the tool in fewer than 10 courses total, meaning that the vast majority of customers are running pilot projects almost two years after the company was acquired by a larger ed tech vendor.

We are not running out of ideas in the ed tech market – there are plenty of new products being introduced each year. What we are not seeing, however, are ed tech innovations that go beyond a few pilots in each school. Inside Higher Ed captured this sentiment when quoting a Gallup representative after the GSV+ASU EdInnovations conference this year:

“Every one of these companies has — at least most of them — some story of a school or a classroom or a student or whatever that they’ve made some kind of impact on, either a qualitative story or some real data on learning improvement,” Busteed said. “You would think that with hundreds of millions of dollars, maybe billions now, that’s been plowed into ed-tech investments … and all the years and all the efforts of all these companies to really move the needle, we ought to see some national-level movement in those indicators.”

In our consulting work Michael and I often help survey institutions to discover what technologies are being used within courses[2], and typically the only technologies that are used by a majority of faculty members or in a majority of courses are the following:

  • AV presentation in the classroom;
  • PowerPoint usage in the classroom (obviously connected with the projectors);
  • Learning Management Systems (LMS);
  • Digital content at lower level than a full textbook (through open Internet, library, publishers, other faculty, or OER); and
  • File sharing applications.

Despite the billions of dollars invested over the past several years, the vast majority of ed tech is used in only a small percentage of courses at most campuses.[3] Most ed tech applications or devices have failed to cross the barriers into mainstream adoption within an institution. This could be due to the technology not really addressing problems that faculty or students face, a lack of awareness and support for the technology, or even faculty or student resistance to the innovation. Whatever the barrier, the situation we see far too often is a breakdown in technology helping the majority of faculty or courses.

Diffusion of Innovations – Back to the basics

Everett Rogers literally wrote the book on the spread of innovations within an organization or cultural group: Diffusion of Innovations. Rogers’ work led to many concepts that we seem to take for granted, such as the S-curve of adoption:


Source: The Diffusion of Innovations, 5th ed, p. 11

leading to the categorization of adopters (innovators, early adopters, early majority, late majority, laggards), and the combined technology adoption curve.


Source: The Diffusion of Innovations, 5th ed., p. 281

But Rogers did not set out to describe the diffusion of innovations as an automatic process following a pre-defined path. The real origin of his work was trying to understand why some innovations end up spreading throughout a social group while others do not, somewhat independent of whether the innovation could be thought of as a “good idea”. From the first paragraph of the 5th edition:

Getting a new idea adopted, even when it has obvious advantages, is difficult. Many innovations require a lengthy period of many years from the time when they become available to the time when they are widely adopted. Therefore, a common problem for many individuals and organizations is how to speed up the rate of diffusion of an innovation.

Rogers defined diffusion as “a special type of communication in which the messages are about a new idea” (p. 6), and he focused much of the book on the Innovation-Decision Process. This gets to the key point that availability of a new idea is not enough; rather, diffusion is more dependent on the communication and decision-process about whether and how to adopt the new idea. This process is shown below (p. 170):


Source: The Diffusion of Innovations, 5th ed., p. 170

What we are seeing in ed tech in most cases, I would argue, is that for institutions the new ideas (applications, products, services) are stuck in the Persuasion stage. There is knowledge and application amongst some early adopters in small-scale pilots, but the majority of faculty members either have no knowledge of the pilot or are not persuaded that the idea is to their advantage, and there is little support or structure to get the organization at large (i.e. the majority of faculty for a traditional institution, or perhaps the central academic technology organization) to make a considered decision. It’s important to note that in many cases, the innovation should not be spread to the majority, either due to being a poor solution or even due to organizational dynamics based on how the innovation is introduced.

The Purgatory of Pilots

This stuck process ends up as an ed tech purgatory – with promises and potential of the heaven of full institutional adoption with meaningful results to follow, but also with the peril of either never getting out of purgatory or outright rejection over time.

Ed tech vendors can be too susceptible to being persuaded by simple adoption numbers such as 1,100 institutions or total number of end users (millions served), but meaningful adoption within an institution – actually affecting the majority of faculty or courses – is necessary in most cases before there can be any meaningful results beyond anecdotes or marketing stories. The reason for the extended purgatory is most often related to people issues and communications, and the ed tech market (and here I’m including vendors as well as campus support staff and faculty) has been very ineffective in dealing with real people at real institutions beyond the initial pilot audience.

Update: Added parenthetical in last sentence to clarify that I’m not just talking about vendors as key players in diffusion.

  1. Disclosure: Echo360 was a recent client of MindWires
  2. For privacy reasons I cannot share the actual survey results publicly.
  3. I’m not arguing against faculty prerogative in technology adoption and for a centralized, mandatory approach, but noting the disconnect.


Data Caching Implementation for ADF Mobile MAF Application

Andrejus Baranovski - Tue, 2014-08-12 13:19
If you are building a mobile application with web service integration, you must take a data caching strategy into account. Without data caching, the mobile application will try to establish too many connections with the server - this will use a lot of bandwidth and slow down mobile application performance. This post is focused on the scenario of implementing a simple data caching strategy. In my next post, I'm planning to review the MAF persistence framework from Steven Davelaar - this framework is powerful and flexible. A simple data caching strategy makes sense for smaller use cases, when we don't need to use an additional framework for persistence.

The data caching implementation in the sample application - MAFMobileLocalApp_v3.zip - is based on a SQLite database local to the mobile application. The whole idea is pretty straightforward: there is a database and a Web Service publishing data. The mobile application reads and synchronises data through the Web Service. Once data is fetched from the Web Service, it is stored in the local cache, implemented by the SQLite DB. For subsequent requests, the mobile application uses the cache, until the user decides to synchronise data from the Web Service again:


The sample application implements a Web Service based on an ADF BC module. This Web Service returns Tasks data (the SQL script is included with the sample application):


Besides fetching the data, the Web Service implements a Task update method:


On the mobile application side, the use case is implemented with a single Task Flow. The task list displays a list of tasks; the user can edit a task and submit changes through the Web Service. This change is immediately synchronised with the local cache storage. The user is able to refresh the list of tasks and synchronise it with the Web Service. During synchronisation, the local cache is cleared and populated again with the data returned from the Web Service:


Data caching logic is handled in the TaskDC class; a Data Control is generated on top of this class, which makes it available through the bindings layer. Initially we check if the cache is empty and load data from the Web Service; for subsequent requests, data is loaded from the cache, unless the user wants to synchronise with the Web Service and invokes refresh:
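
As a minimal sketch of this cache-or-fetch logic (the class shape, method names and the Task bean below are assumptions based on the description, not the exact code from the sample application):

import java.util.ArrayList;
import java.util.List;

// Sketch of a TaskDC-style class exposed as a Data Control through the bindings layer.
public class TaskDC {

    // Minimal Task bean assumed for this sketch.
    public static class Task {
        public String taskId;
        public String taskName;
        public String taskStatus;
    }

    private List<Task> tasks = new ArrayList<Task>();

    // Returns tasks from the local cache when available, otherwise calls the Web Service.
    public List<Task> getTasks() {
        if (isCacheEmpty()) {
            // First request (or empty cache): call the Web Service and populate the cache.
            tasks = fetchTasksFromWebService();
            writeTasksToCache(tasks);
        } else {
            // Subsequent requests: serve from the local SQLite cache, no remote call.
            tasks = readTasksFromCache();
        }
        return tasks;
    }

    // Invoked by the refresh action: clear the cache and re-populate it from the Web Service.
    public void refresh() {
        clearCache();
        tasks = fetchTasksFromWebService();
        writeTasksToCache(tasks);
    }

    // The methods below stand in for the Web Service call and the SQLite access.
    private boolean isCacheEmpty() {
        return readTasksFromCache().isEmpty();
    }

    private List<Task> fetchTasksFromWebService() {
        // In the real application this invokes the Web Service data control
        // and translates the response into Task objects.
        return new ArrayList<Task>();
    }

    private List<Task> readTasksFromCache() {
        // In the real application this queries the local SQLite database.
        return new ArrayList<Task>();
    }

    private void writeTasksToCache(List<Task> rows) {
        // In the real application this inserts the rows into the local SQLite database.
    }

    private void clearCache() {
        // In the real application this deletes the cached rows from the local SQLite database.
    }
}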


There is a method responsible for fetching data from the Web Service and translating it into the Task list structure:


Data retrieval from the cache is implemented in a separate method; here we query data from the local SQLite database:
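
As a sketch only, such a cache read can go through the standard java.sql API. The table and column names (TASKS, TASK_ID, TASK_NAME, TASK_STATUS) and the JDBC URL below are assumptions for illustration; the sample application obtains its SQLite connection through MAF's own mechanism:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

// Reads cached task rows from the local SQLite database (illustrative only).
public class TaskCacheReader {

    private final String dbUrl;

    public TaskCacheReader(String dbUrl) {
        // e.g. "jdbc:sqlite:/path/to/local.db" - assumes a SQLite JDBC driver is registered.
        this.dbUrl = dbUrl;
    }

    public List<String[]> readTasksFromCache() throws Exception {
        List<String[]> rows = new ArrayList<String[]>();
        Connection conn = DriverManager.getConnection(dbUrl);
        try {
            PreparedStatement stmt =
                conn.prepareStatement("SELECT TASK_ID, TASK_NAME, TASK_STATUS FROM TASKS");
            ResultSet rs = stmt.executeQuery();
            while (rs.next()) {
                rows.add(new String[] { rs.getString(1), rs.getString(2), rs.getString(3) });
            }
            rs.close();
            stmt.close();
        } finally {
            conn.close();
        }
        return rows;
    }
}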


When a request hits the TaskDC Data Control, we check if there are any rows in the cache. If there are rows, no call to the Web Service is made and the rows are fetched from the cache:


When the user synchronises with the Web Service, the cache is cleared and re-populated with the rows fetched from the Web Service:


An update is synchronised with the local cache, so there is no need to re-fetch the entire data collection from the Web Service:


If the local cache is empty and the Web Service is invoked to fetch data, we can see this in the Web Service log - it executes SQL. Later, no call to the Web Service is made; data is loaded locally from the SQLite database:


We can track the update operation on the server side as well; Task details are updated using the Task Id:


Initial load of the task list - this is what you should see if the Web Service and the MAF mobile application run correctly:


The user could open a specific task and edit task properties - changing the status:


The task description could be modified as well:


All changes are saved through a Web Service call and immediately synchronised with the local SQLite database:


The selected task is updated and the user is returned back to the list of tasks:


We could simulate a data update on the server - let's change the status for one of the tasks:


With the refresh option invoked, data is synchronised on the mobile application side and the local cache gets refreshed:

New Features Available in Latest ORAchk Release

Joshua Solomin - Tue, 2014-08-12 12:46


ORAchk can proactively scan for known problems within these areas:



  • Oracle Database
  • Enterprise Manager Cloud Control
  • E-Business Suite
  • Oracle Sun Systems





New features available in ORAchk version 2.2.5

  • Runs checks for multiple databases in parallel
  • Schedules multiple automated runs via ORAchk daemon
  • Uses configurable $HOME directory location for ORAchk temporary files
  • Ignores skipped checks when calculating System health score
  • Checks the health of pluggable databases using OS authentication
  • Reports top 10 time consuming checks to optimize runtime in the future
  • Improves report readability for clusterwide checks
  • Includes over 50 new Health Checks for the Oracle Stack
  • Provides a single dashboard to view collections across your enterprise
  • Includes pre- and post-upgrade checks for standalone databases, with an option to run only these checks
  • Expands product areas in E-Business Suite and in Enterprise Manager Cloud Control

If you have particular checks or product areas you would like to see ORAchk cover, please post suggestions in the ORAchk subspace in My Oracle Support Community.

Read more about ORAchk and its features.

keeping my fingers crossed just submitted abstract for RMOUG 2015 Training Days ...

Grumpy old DBA - Tue, 2014-08-12 12:06
The Rocky Mountain Oracle Users Group has been big and organized for a very long time. I have never been out there (my bad) but am hoping to change that situation in 2015.

Abstracts are being accepted for Training Days 2015 ... my first one is in there now. I am thinking about a second submission, but my Hotsos 2014 presentation needs some more work/fixing. Ok, let's be honest: I need to shrink it considerably and tighten the focus of that one.

Information on RMOUG 2015 can be found here: RMOUG Training Days 2015

Keeping my fingers crossed!
Categories: DBA Blogs

Searching and installing Linux packages

Arun Bavera - Tue, 2014-08-12 11:12

yum search vnc

Loaded plugins: security

public_ol6_UEKR3_latest | 1.2 kB 00:00

public_ol6_UEKR3_latest/primary | 7.7 MB 00:01

public_ol6_UEKR3_latest 216/216

public_ol6_latest | 1.4 kB 00:00

public_ol6_latest/primary | 41 MB 00:03

public_ol6_latest 25873/25873

=============================================================================== N/S Matched: vnc ================================================================================

gtk-vnc.i686 : A GTK widget for VNC clients

gtk-vnc.x86_64 : A GTK widget for VNC clients

gtk-vnc-devel.i686 : Libraries, includes, etc. to compile with the gtk-vnc library

gtk-vnc-devel.x86_64 : Libraries, includes, etc. to compile with the gtk-vnc library

gtk-vnc-python.x86_64 : Python bindings for the gtk-vnc library

libvncserver.i686 : Library to make writing a vnc server easy

libvncserver.x86_64 : Library to make writing a vnc server easy

libvncserver-devel.i686 : Development files for libvncserver

libvncserver-devel.x86_64 : Development files for libvncserver

tigervnc.x86_64 : A TigerVNC remote display system

tigervnc-server.x86_64 : A TigerVNC server

tigervnc-server-applet.noarch : Java TigerVNC viewer applet for TigerVNC server

tigervnc-server-module.x86_64 : TigerVNC module to Xorg

tsclient.x86_64 : Client for VNC and Windows Terminal Server

vinagre.x86_64 : VNC client for GNOME

xorg-x11-server-source.noarch : Xserver source code required to build VNC server (Xvnc)

Name and summary matches only, use "search all" for everything.

yum install tigervnc-server.x86_64

Loaded plugins: security

Setting up Install Process

Resolving Dependencies

--> Running transaction check

---> Package tigervnc-server.x86_64 0:1.1.0-8.el6_5 will be installed

--> Processing Dependency: xorg-x11-fonts-misc for package: tigervnc-server-1.1.0-8.el6_5.x86_64

--> Running transaction check

---> Package xorg-x11-fonts-misc.noarch 0:7.2-9.1.el6 will be installed

--> Finished Dependency Resolution

Dependencies Resolved

=================================================================================================================================================================================

Package Arch Version Repository Size

=================================================================================================================================================================================

Installing:

tigervnc-server x86_64 1.1.0-8.el6_5 public_ol6_latest 1.1 M

Installing for dependencies:

xorg-x11-fonts-misc noarch 7.2-9.1.el6 public_ol6_latest 5.8 M

Transaction Summary

=================================================================================================================================================================================

Install 2 Package(s)

Total download size: 6.9 M

Installed size: 9.7 M

Is this ok [y/N]: y

Downloading Packages:

(1/2): tigervnc-server-1.1.0-8.el6_5.x86_64.rpm | 1.1 MB 00:00

(2/2): xorg-x11-fonts-misc-7.2-9.1.el6.noarch.rpm | 5.8 MB 00:01

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Total 2.5 MB/s | 6.9 MB 00:02

Running rpm_check_debug

Running Transaction Test

Transaction Test Succeeded

Running Transaction

Warning: RPMDB altered outside of yum.

Installing : xorg-x11-fonts-misc-7.2-9.1.el6.noarch 1/2

Installing : tigervnc-server-1.1.0-8.el6_5.x86_64 2/2

Verifying : xorg-x11-fonts-misc-7.2-9.1.el6.noarch 1/2

Verifying : tigervnc-server-1.1.0-8.el6_5.x86_64 2/2

Installed:

tigervnc-server.x86_64 0:1.1.0-8.el6_5

Dependency Installed:

xorg-x11-fonts-misc.noarch 0:7.2-9.1.el6

Complete!

Categories: Development

Designing for the Experience-Driven Enterprise

WebCenter Team - Tue, 2014-08-12 06:00

Guest blog post series this week by Geoffrey Bock

How Oracle WebCenter Customers Build Digital Businesses: Designing for the Experience-Driven Enterprise

Geoffrey Bock, Principal, Bock & Company

Making the Transition from Analog to Digital

In my last blog post on contending with digital disruption, I described how several Oracle customers decided to refresh, modernize, and mobilize their enterprise application infrastructure. Web-enabling an existing application, once necessary, is no longer sufficient.

But what does it take to mobilize key business tasks and drive digital capabilities deeply into an application infrastructure? Many of the WebCenter customers I spoke to emphasize both the business value of their applications and the quality of end user experiences. They are now rebuilding their core applications, making the transition from analog to digital business practices, by designing for an experience-driven enterprise. 

The Flow of Design Activities

As I see it, customers are focusing on a sequence of five interrelated activities, summarized in Illustration 1. There is an inherent flow to application evolution.


Illustration 1. As they design their digital businesses, customers leverage their current platforms in order to deliver innovative experiences.

Here’s a description of how customers are building their digital businesses, and embracing the necessity of change along the way.

  • To begin with, there are baseline functions based on existing activities. While modernizing their core applications and the underlying back-end infrastructure, IT and business leaders emphasize that they “cannot lose anything” from their current platform. What needs to change is still up for redesign.

  • At the same time, leaders need to enhance the value of ad hoc communications. They are turning to social and mobile channels to improve overall employee productivity as well as strengthen relationships with customers and partners. New ways to communicate information become a lever for innovation.

  • There is also a business purpose for investing in social and mobile channels. Leaders expect to substantially improve service and support, when customers, partners, and employees have easy access to relevant information. There is added power through easy sharing.

  • To ensure quality service and support, it is essential to manage reusable content for a consistent experience. Organizations expect to create content once, organize it around business tasks, and distribute it across multiple channels. It helps to structure content for consistent distribution.

  • As a result, there are opportunities to launch innovative (and potentially breakthrough) digital business activities, by exploiting the capabilities of the redesigned application environment. It’s not so much a matter of “losing” baseline functions as embedding the flexibility to ensure that they can evolve.

From my perspective, this new application environment supports digital business initiatives by mobilizing the moments of engagement. These moments encompass the end user experiences where work gets done and value is created.

Optimizing for Agility

Companies are introducing various customer-, partner-, and employee-facing applications that run on the rebuilt enterprise platform. Leaders in these firms are designing applications from the “outside-in” by optimizing the ways in which end users access information and perform tasks. Significantly, leaders are relying on the agility and flexibility of the new platform to support an innovative collaborative environment.

As I spoke to WebCenter customers, I was struck by how their target users value the convenience of simple experiences. Designing for the experience-driven enterprise entails aggregating information from multiple sources, organizing it by business tasks, and then presenting it through intuitive applications that are seamlessly integrated with back-end services.

Download the free White Paper Today


Whitepaper List as per August 2014

Anthony Shorten - Mon, 2014-08-11 19:54
The following Oracle Utilities Application Framework technical whitepapers are available from My Oracle Support at the Doc Id's mentioned below. Some have been updated in the last few months to reflect new advice and new features.

Unless otherwise marked the technical whitepapers in the table below are applicable for the following products (with versions):

Doc Id - Document Title: Contents

559880.1 - ConfigLab Design Guidelines: This whitepaper outlines how to design and implement a data management solution using the ConfigLab facility. This whitepaper currently only applies to the following products:

560367.1 - Technical Best Practices for Oracle Utilities Application Framework Based Products: Whitepaper summarizing common technical best practices used by partners, implementation teams and customers.

560382.1 - Performance Troubleshooting Guideline Series: A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows:
  • Concepts - General concepts and performance troubleshooting processes
  • Client Troubleshooting - General troubleshooting of the browser client with common issues and resolutions.
  • Network Troubleshooting - General troubleshooting of the network with common issues and resolutions.
  • Web Application Server Troubleshooting - General troubleshooting of the Web Application Server with common issues and resolutions.
  • Server Troubleshooting - General troubleshooting of the operating system with common issues and resolutions.
  • Database Troubleshooting - General troubleshooting of the database with common issues and resolutions.
  • Batch Troubleshooting - General troubleshooting of the background processing component of the product with common issues and resolutions.

560401.1 - Software Configuration Management Series: A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. Topics include Revision Control, SDK Migration/Utilities, Bundling and Configuration Migration Assistant. The individual whitepapers are as follows:
  • Concepts - General concepts and introduction.
  • Environment Management - Principles and techniques for creating and managing environments.
  • Version Management - Integration of version control and version management of configuration items.
  • Release Management - Packaging configuration items into a release.
  • Distribution - Distribution and installation of releases across environments.
  • Change Management - Generic change management processes for product implementations.
  • Status Accounting - Status reporting techniques using product facilities.
  • Defect Management - Generic defect management processes for product implementations.
  • Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation.
  • Implementing Service Packs - Discussion on the service packs and how to use them in an implementation.
  • Implementing Upgrades - Discussion on the upgrade process and common techniques for minimizing the impact of upgrades.

773473.1 - Oracle Utilities Application Framework Security Overview: A whitepaper summarizing the security facilities in the framework. Now includes references to other Oracle security products supported.

774783.1 - LDAP Integration for Oracle Utilities Application Framework based products: A generic whitepaper summarizing how to integrate an external LDAP based security repository with the framework.

789060.1 - Oracle Utilities Application Framework Integration Overview: A whitepaper summarizing all the various common integration techniques used with the product (with case studies).

799912.1 - Single Sign On Integration for Oracle Utilities Application Framework based products: A whitepaper outlining a generic process for integrating an SSO product with the framework.

807068.1 - Oracle Utilities Application Framework Architecture Guidelines: This whitepaper outlines the different variations of architecture that can be considered. Each variation will include advice on configuration and other considerations.

836362.1 - Batch Best Practices: This whitepaper outlines the common and best practices implemented by sites all over the world.

856854.1 - Technical Best Practices V1 Addendum: Addendum to Technical Best Practices for Oracle Utilities Customer Care And Billing V1.x only.

942074.1 - XAI Best Practices: This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework.

970785.1 - Oracle Identity Manager Integration Overview: This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework based products and Oracle Identity Manager used to provision user and user group security information. For FW4.x customers, use whitepaper 1375600.1 instead.

1068958.1 - Production Environment Configuration Guidelines: A whitepaper outlining common production level settings for the products based upon benchmarks and customer feedback.

1177265.1 - What's New In Oracle Utilities Application Framework V4?: Whitepaper outlining the major changes to the framework since Oracle Utilities Application Framework V2.2.

1290700.1 - Database Vault Integration: Whitepaper outlining the Database Vault integration solution provided with Oracle Utilities Application Framework V4.1.0 and above.

1299732.1 - BI Publisher Guidelines for Oracle Utilities Application Framework: Whitepaper outlining the interface between BI Publisher and the Oracle Utilities Application Framework.

1308161.1 - Oracle SOA Suite Integration with Oracle Utilities Application Framework based products: This whitepaper outlines common design patterns and guidelines for using Oracle SOA Suite with Oracle Utilities Application Framework based products.

1308165.1 - MPL Best Practices: This is a guidelines whitepaper for products shipping with the Multi-Purpose Listener. This whitepaper currently only applies to the following products:

1308181.1 - Oracle WebLogic JMS Integration with the Oracle Utilities Application Framework: This whitepaper covers the native integration of Oracle WebLogic JMS with the Oracle Utilities Application Framework using the new Message Driven Bean functionality and real time JMS adapters.

1334558.1 - Oracle WebLogic Clustering for Oracle Utilities Application Framework: This whitepaper covers the process for implementing clustering using Oracle WebLogic for Oracle Utilities Application Framework based products.

1359369.1 - IBM WebSphere Clustering for Oracle Utilities Application Framework: This whitepaper covers the process for implementing clustering using IBM WebSphere for Oracle Utilities Application Framework based products.

1375600.1 - Oracle Identity Management Suite Integration with the Oracle Utilities Application Framework: This whitepaper covers the integration between Oracle Utilities Application Framework and Oracle Identity Management Suite components such as Oracle Identity Manager, Oracle Access Manager, Oracle Adaptive Access Manager, Oracle Internet Directory and Oracle Virtual Directory.

1375615.1 - Advanced Security for the Oracle Utilities Application Framework: This whitepaper covers common security requirements and how to meet those requirements using Oracle Utilities Application Framework native security facilities, security provided with the J2EE Web Application and/or facilities available in Oracle Identity Management Suite.

1486886.1 - Implementing Oracle Exadata with Oracle Utilities Customer Care and Billing: This whitepaper covers some advice when implementing Oracle Exadata for Oracle Utilities Customer Care And Billing.

878212.1 - Oracle Utilities Application FW Available Service Packs: This entry outlines ALL the service packs available for the Oracle Utilities Application Framework.

1454143.1 - Certification Matrix for Oracle Utilities Products: This entry outlines the software certifications for all the Oracle Utilities products.

1474435.1 - Oracle Application Management Pack for Oracle Utilities Overview: This whitepaper covers the Oracle Application Management Pack for Oracle Utilities. This is a pack for Oracle Enterprise Manager.

1506830.1 - Configuration Migration Assistant Overview: This whitepaper covers the Configuration Migration Assistant available for Oracle Utilities Application Framework V4.2.0.0.0. This replaces ConfigLab for some products.

1506855.1 - Integration Reference Solutions: This whitepaper covers the various Oracle technologies you can use with the Oracle Utilities Application Framework.

1544969.1 - Native Installation Oracle Utilities Application Framework: This whitepaper describes the process of installing Oracle Utilities Application Framework based products natively within Oracle WebLogic.

1558279.1 - Oracle Service Bus Integration: This whitepaper describes direct integration with Oracle Service Bus including the new Oracle Service Bus protocol adapters available. Customers using the MPL should read this whitepaper as the Oracle Service Bus replaces MPL in the future and this whitepaper outlines how to manually migrate your MPL configuration into Oracle Service Bus. Note: In Oracle Utilities Application Framework V4.2.0.1.0, Oracle Service Bus Adapters for Outbound Messages and Notification/Workflow are available.

1561930.1 - Using Oracle Text for Fuzzy Searching: This whitepaper describes how to use the Name Matching and fuzzy operator facilities in Oracle Text to implement fuzzy searching using the @fuzzy helper function available in Oracle Utilities Application Framework V4.2.0.0.0.

1606764.1 - Audit Vault Integration: This whitepaper describes the integration with Oracle Audit Vault to centralize and separate audit information from OUAF products. Audit Vault integration is available in OUAF 4.2.0.1.0 and above only.

1644914.1 - Migrating XAI to IWS: Migration from XML Application Integration to the new native Inbound Web Services in Oracle Utilities Application Framework 4.2.0.2.0 and above.

1643845.1 - Private Cloud Planning Guide: Planning guide for implementing Oracle Utilities products on private clouds using Oracle's Cloud Foundation set of products.

1682436.1 - ILM Planning Guide: Planning guide for the new Oracle Utilities ILM based data management and archiving solution.

1682442.1 - ILM Implementation Guide for Oracle Utilities Customer Care and Billing: Implementation guide for the ILM based solution for Oracle Utilities Customer Care And Billing.

Watch Oracle DB Session Activity With The Real-Time Session Sampler

Watch Oracle DB Session Activity With My Real-Time Session Sampler
Watching session activity is a great way to diagnose and learn about Oracle Database tuning. There are many approaches to this. I wanted something simple, useful, and modifiable, with no Oracle licensing issues, that I could give away. The result is what I call the Oracle Real-Time Session Sampler (OSM: rss.sql).

The tool is simple to use. Based on a number of filtering command line inputs, it repeatedly samples active Oracle sessions and writes the output to a file in /tmp. You can do a "tail -f" on the file to watch session activity in real time!

The rss.sql tool is included in the OraPub System Monitor (OSM) toolkit (v13j), which can be downloaded HERE.

If you simply want to watch a video demo, watch below or click HERE.


The Back-Story
Over the past two months I have been creating my next OraPub Online Institute seminar about how to tune Oracle with an AWR/Statspack report using a quantitative time based approach. Yeah... I know the title is long. Technically I could have used Oracle's Active Session History view (v$active_session_history) but I didn't want anyone to worry about ASH licensing issues. And ASH is not available with Oracle Standard Edition.

The Real-Time Session Sampler is used in a few places in the online seminar where I teach about Oracle session CPU consumption and wait time. I needed something visual that would obviously convey the point I wanted to make. The Real-Time Session Sampler worked perfectly for this.

What It Does
Based on a number of command line inputs, rss.sql repeatedly samples active Oracle sessions and writes the output to file in /tmp. The script contains no dml statements. You can do a "tail -f" on the output file to see session activity in real time. You can look at all sessions, a single session, sessions that are consuming CPU or waiting or both, etc. You can even change the sample rate. For example, once every 5.0 seconds or once every 0.25 seconds! It's very flexible and it's fascinating to watch.

Here is an example of some real output.



How To Use RSS.SQL
The tool is run within SQL*Plus and the output is written to the file /tmp/rss_sql.txt. You need two windows: one to sample the sessions and the other to look at the output file. Here are the script parameter options:

rss.sql  low_sid  high_sid  low_serial  high_serial  session_state  wait_event_partial|%  sample_delay

low_sid is the low Oracle session id.
high_sid is the high Oracle session id.
low_serial is the low Oracle session's serial number.
high_serial is the high Oracle session's serial number.
session_state is the current state of the session at the moment of sampling: "cpu", "wait", or "%" for both.
wait_event_partial, when the session is waiting, restricts the sample to sessions with this wait event (partial match). Always set this to "%" unless you want to tighten the filtering.
sample_delay is the delay between samples, in seconds.

Examples You May Want To Try
By looking at the below examples, you'll quickly grasp that this tool can be used in a variety of situations.

Situation: I want to sample a single session (sid:10 serial:50) once every five seconds.

SQL>@rss.sql  10 10 50 50 % % 5.0

Situation: I want to essentially stream a single session's (sid:10 serial:50) activity.

SQL>@rss.sql 10 10 50 50 % % 0.125

Situation: I want to see what sessions are waiting for a row level lock while sampling once every second.

SQL>@rss.sql 0 99999 0 99999 wait enq%tx%row% 1.0

Situation: I want to see which sessions are consuming CPU, while sampling once every half second.

SQL>@rss.sql 0 99999 0 99999 cpu % 0.50

Be Responsible... It's Not OraPub's Fault!
Have fun and explore...but watch out! Any time you sample repeatedly, you run the risk of impacting the system under observation. You can reduce this risk by sampling less often (perhaps once every 5 seconds), by limiting the sessions you want to sample (not 0 to 99999) and by only selecting sessions in either a "cpu" or "wait" state.

A smart lower impact strategy would be to initially keep broader selection criteria but sample less often; perhaps once every 15 seconds. Once you know what you want to look for, tighten the selection criteria and sample more frequently. If you have identified a specific session of interest, then you can stream the activity (if appropriate) every half second or perhaps every quarter second.

All the best in your Oracle Database tuning work,

Craig.
https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm

You can watch the seminar introductions for free on YouTube! If you enjoy my blog, subscribing will ensure you get a short, concise email about a new posting. Look for the form on this page.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com.






Categories: DBA Blogs

Oracle Voice Debuts on the App Store

Oracle AppsLab - Mon, 2014-08-11 16:05

Editor’s note: I meant to blog about this today, but it looks like my colleagues over at VoX have beaten me to it. So, rather than try to do a better job, read: do any work at all, I’ll just repost it. Free content w00t!

Although I no longer carry an iOS device, I’ve seen Voice demoed many times in the past. Projects like Voice and Simplified UI are what drew me to Applications User Experience, and it’s great to see them leak out into the World.

Enjoy.

Oracle Extends Investment in Cloud User Experiences with Oracle Voice for Sales Cloud
By Vinay Dwivedi, and Anna Wichansky, Oracle Applications User Experience

Oracle Voice for the Oracle Sales Cloud, officially called “Fusion Voice Cloud Service for the Oracle Sales Cloud,” is available now on the Apple App Store. This first release is intended for Oracle customers using the Oracle Sales Cloud, and is specifically designed for sales reps.


The home screen of Fusion Voice Cloud Service for the Oracle Sales Cloud is designed for sales reps.

Unless people record new information they learn (e.g., write it down, repeat it aloud), they forget a high proportion of it in the first 20 minutes. The Oracle Applications User Experience team has learned through its research that when sales reps leave a customer meeting with insights that can move a deal forward, it’s critical to capture important details before they are forgotten. We designed Oracle Voice so that the app allows sales reps to quickly enter notes and activities on their smartphones right after meetings, no matter where they are.

Instead of relying on slow typing on a mobile device, sales reps can enter information three times faster (pdf) by speaking to the Oracle Sales Cloud through Voice. Voice takes a user through a dialog similar to a natural spoken conversation to accomplish this goal. Since key details are captured precisely and follow-ups are quicker, deals are closed faster and more efficiently.

Oracle Voice is also multi-modal, so sales reps can switch to touch-and-type interactions for situations where speech interaction is less than ideal.

Oracle sales reps tried it first, to see if we were getting it right.

We recruited a large group of sales reps in the Oracle North America organization to test an early version of Oracle Voice in 2012. All had iPhones and spoke American English; their predominant activity was field sales calls to customers. Users had minimal orientation to Oracle Voice and no training. We were able to observe their online conversion and usage patterns through automated testing and analytics at Oracle, through phone interviews, and through speech usage logs from Nuance, which is partnering with Oracle on Oracle Voice.

Users were interviewed after one week in the trial; over 80% said the product exceeded their expectations. Members of the Oracle User Experience team working on this project gained valuable insights into how and where sales reps were using Oracle Voice, which we used as requirements for features and functions.

For example, we learned that Oracle Voice needed to recognize product- and industry-specific vocabulary, such as “Exadata” and “Exalytics,” and we requested a vocabulary enhancement tool from Nuance that has significantly improved the speech recognition accuracy. We also learned that connectivity needed to persist as users traveled between public and private networks, and that users needed easy volume control and alternatives to speech in public environments.

We’ve held subsequent trials, with more features and functions enabled, to support the 10 workflows in the product today. Many sales reps in the trials have said they are anxious to get the full version and start using it every day.

“I was surprised to find that it can understand names like PNC and Alcoa,” said Marco Silva, Regional Manager, Oracle Infrastructure Sales, after participating in the September 2012 trial.

“It understands me better than Siri does,” said Andrew Dunleavy, Sales Representative, Oracle Fusion Middleware, who also participated in the same trial.

This demo shows Oracle Voice in action.

What can a sales rep do with Oracle Voice?

Oracle Voice allows sales reps to efficiently retrieve and capture sales information before and after meetings. With Oracle Voice, sales reps can:

Prepare for meetings

  • View relevant notes to see what happened during previous meetings.
  • See important activities by viewing previous tasks and appointments.
  • Brush up on opportunities and check on revenue, close date and sales stage.

Wrap up meetings

  • Capture notes and activities quickly so they don’t forget any key details.
  • Create contacts easily so they can remember the important new people they meet.
  • Update opportunities so they can make progress.

These screenshots show how to create tasks and appointments using Oracle Voice.

Our research showed that sales reps entered more sales information into the CRM system when they enjoyed using Oracle Voice, which makes Oracle Voice even more useful because more information is available to access when the same sales reps are on the go. With increased usage, the entire sales organization benefits from access to more current sales data, improved visibility on sales activities, and better sales decisions. Customers benefit too — from the faster response time sales reps can provide.

Oracle’s ongoing investment in User Experience

Oracle gets the idea that cloud applications must be easy to use. The Oracle Applications User Experience team has developed an approach to user experience that focuses on simplicity, mobility, and extensibility, and these themes drive our investment strategy. The result is key products that refine particular user experiences, like we’ve delivered with Oracle Voice.

Oracle Voice is one of the most recent products to embrace our developer design philosophy for the cloud of “Glance, Scan, & Commit.” Oracle Voice allows sales reps to complete many tasks at what we call glance and scan levels, which means keeping interactions lightweight, or small and quick.

Are you an Oracle Sales Cloud customer?

Oracle Voice is available now on the Apple App Store for Oracle customers using the Oracle Sales Cloud. It’s the smarter sales automation solution that helps you sell more, know more, and grow more.

Will you be at Oracle OpenWorld 2014? So will we! Stay tuned to the VoX blog for when and where you can find us. And don’t forget to drop by and check out Oracle Voice at the Smartphone and Nuance demo stations located at the CX@Sales Central demo area on the second floor of Moscone West.

The Business Value In Training

Rittman Mead Consulting - Mon, 2014-08-11 14:59

One of the main things I get asked to do here at Rittman Mead is deliver the OBIEE front-end training course (TRN 202). This is a great course that has served both us and our clients well over the years. It has always been in high demand and always delivered with great feedback from those in attendance. However, as with all things in life and business, there is going to be room for improvement and opportunities to provide even more value to our clients. Of all the feedback I receive from delivering the course, my favorite is that we do an incredible job delivering both the content and providing real business scenarios on how we have used this tool in the consulting field. Attendees will ask me how a feature works, and how I have used it with current and former clients, 100% of the time.

This year at KScope ’14 in Seattle, we were asked to deliver a two-hour front-end training course. Our normal front-end course runs over two days and covers just about every feature you can use, all the way from Answers and Dashboards to BI Publisher. Before the invitation to KScope ’14, we had been toying with the idea of delivering a course that not only teaches attendees how to navigate OBIEE and use its features, but also emphasizes the business value behind why those features exist in the first place. We felt that too often users are given a quick overview of what the tool includes, but are left to figure out on their own how to extract the most value. It is one thing to create a graph in Answers, and another to know what the best graph to use might be. So in preparation for the KScope session, we decided to build the content around not only how to develop in OBIEE, but also why, as a business user, you would choose one layout/graph/feature over another. As you would expect, the turnout for the session was fantastic: we had over 70 people pre-register, with another 10 on the waiting list. This was proof that there is a pressing need to pull as much business value out of the tool as there is to simply learn how to use it. We were so encouraged by the attendance and feedback from this event that we spent the next several weeks developing what is called the “Business Enablement Bootcamp”. It is a three-day course that will cover Answers, Dashboards, the Action Framework, BI Publisher, and the new Mobile App Designer. This is an exciting time for us in that we not only get to show people how to use all of the great features that are built into the tool, but also to incorporate years of consulting experience and hundreds of client engagements right into the content. Below I have listed a breakdown of the material and the value it will provide.

Answers

Whenever we deliver our OBIEE 5-day bootcamp, which covers everything from infrastructure to the front end, Answers is one of the key components that we teach. Answers is the building block for analysis in OBIEE. While this portion of the tool is relatively intuitive to get started with, there are so many valuable nuances and settings that can get overlooked without proper instruction. In order to get the most out of the tool, a business user needs to be able not only to create basic analyses, but also to use many of the advanced features such as hierarchical columns, master-detail, and selection steps. Knowing how and why to use these features is a key component of gaining valuable insight for your business users.

Dashboards

This one in particular is dear to my heart. To create an analysis and share it on a dashboard is one thing, but to tell a particular story with a series of visualizations strategically placed on a dashboard is something entirely different. Like anything else in business intelligence, optimal visualization and best practices are learned skills that take time and practice. Valuable skills like making the most of your white space, choosing the correct visualizations, and formatting will be covered. When you provide your user base with the knowledge and skills to tell the best story, there will be no time wasted on clumsy iterations and guesswork as to the best way to present your data. This training will provide some simple parameters to work within, so that users can quickly gather requirements and develop dashboards with more polish and relevance than ever before.

 Dashboard

Action Framework

Whenever I deliver any form of front-end training, I always feel like this piece of OBIEE is either overlooked, undervalued, or both. This is because most users are either unaware of its use, or really don’t have a clear idea of its value and functionality. It’s as if it is viewed as an add-on, just a nice extra feature. Once users are properly shown how to navigate the action framework, or given a demonstration of its value, it becomes an invaluable piece of the stack. In order to get the most out of your catalog, users need to be shown how to strategically place action links to give the ability to drill across to other analyses and add more context for discovery. These are just a few of the capabilities within the action framework that, when users are shown how and when to use them, can add valuable insight (not to mention convenience) to an organization.

BI Publisher/Mobile App Designer

Along with the action framework, this particular piece of the tool has a tendency to get overlooked, or to simply give users cold feet about implementing it to complement Answers. I actually would have agreed with those feelings before the release of 11.1.1.7. Before this release, a user would need to have a pretty advanced knowledge of data modeling. However, users can now simply pick any subject area and use the report creation wizard to be off and running, creating pixel-perfect reports in no time. Also, the new Mobile App Designer on top of the Publisher platform is another welcome addition to this tool. Being the visual person that I am, I think this is where this pixel-perfect tool really shines. Objects just look a lot more polished right out of the box, without having to spend a lot of time formatting the way you would have to in Answers. During training, attendees will be exposed to many of the new features within BIP and MAD, as well as how to use them to complement Answers and dashboards.

Third Party Visualizations

While having the ability to implement third-party visualizations like D3 and Flot into OBIEE is more of an advanced skill, the market and the need for this seem to be growing. While Oracle has done some good things in past releases with new visualizations like performance tiles and waterfall charts, we all know that business requirements can be demanding at times and may require going elsewhere to appease the masses. You can visit https://github.com/mbostock/d3/wiki/Gallery to see some of the other available visualizations beyond what is available in OBIEE. During training, attendees will learn the value of when and why external visualizations might be useful, as well as a high-level view of how they can be implemented.

Bullet Chart

Users often make the mistake of viewing the pieces of the front-end stack as separate entities, and without proper training this is very understandable. Even though they are separate pieces of the product, they are all meant to work together and enhance the “Business Intelligence” of an organization. Without training the business on how each piece complements the others, it will always be viewed as just another frustrating tool that they don’t have enough time to learn on their own. This tool is meant to empower your organization to have everything it needs to make the most informed and timely decisions; let us use our experience to enable your business.

Categories: BI & Warehousing

Websites: What to look for in a database security contract

Chris Foot - Mon, 2014-08-11 10:28

When shopping for a world-class database administration service, paying attention to what specialists can offer in the way of protection is incredibly important. 

For websites storing thousands or even millions of customer logins, constantly monitoring server activity is essential. A recent data breach showed just how vulnerable e-commerce companies, Software-as-a-Service providers and a plethora of other online organizations are. 

A staggering number 
A Russian criminal organization known as "CyberVor" recently collected 1.2 billion unique username and password combinations and 500 million email addresses from websites employing lackluster protection techniques, Infosecurity Magazine reported.

Andrey Dulkin, senior director of cyber innovation at CyberArk, noted the attack was orchestrated by a botnet – a collection of machines working to achieve the same end goal. CyberVor carefully employed multiple infiltration techniques simultaneously in order to harvest login data. 

Where do DBAs come into play? 
Active database monitoring is essential to protect the information websites hold for their subscribers and patrons. Employing anti-malware is one thing, but being able to perceive actions occurring in real time is the only way organizations can hope to deter infiltration attempts at their onset. 

Although TechTarget was referring to disaster recovery, the same principles of surveillance apply to protecting databases. When website owners look at the service-level agreement, the database support company should provide the following accommodations:

  • Real-time reporting of all server entries, detailing which users entered an environment, how they're interacting with it and what programs they're using to navigate it. 
  • Frequent testing that searches for any firewall vulnerabilities, unauthorized programs, suspicious SQL commands, etc. 
  • On-call administrators capable of addressing any questions or concerns a website may have.

Applying basics, then language 
Although advanced analytics and tracking cookies can be applied to actively search for and eliminate viruses – like how white blood cells attack pathogens – neglecting to cover standard security practices obviously isn't optimal. 

South Florida Business Journal acknowledged that one of the techniques CyberVor used exploited a vulnerability IT professionals have been cognizant of for the past decade – SQL injection. This particular tactic likely involved the criminals crafting input that orders the SQL database to unveil all of its usernames and passwords. 
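The standard defence against this class of attack is to never splice user input into SQL text, and instead pass it as a bind parameter so the input can only ever be treated as data. A minimal sketch in Python, shown here with the pyodbc driver purely as an illustration (the DSN, table and column names are made up):

import pyodbc  # assumption: pyodbc driver; the same bind-parameter idea applies to any database API

conn = pyodbc.connect("DSN=webappdb")   # placeholder DSN
cur = conn.cursor()

user_supplied = "alice' OR '1'='1"      # classic injection attempt

# Vulnerable: the attacker's quote characters become part of the SQL statement.
# cur.execute("SELECT password_hash FROM users WHERE username = '" + user_supplied + "'")

# Safe: the driver sends the value separately as a parameter, never as SQL text.
cur.execute("SELECT password_hash FROM users WHERE username = ?", user_supplied)
row = cur.fetchone()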

SQL Server, Microsoft's signature database solution, is quite popular among many websites, so those using this program need to contract DBA organizations with extensive knowledge of the language and best practices. 

Finally, remote DBA services must be capable of encrypting login information, as well as the data passwords are protecting. This provides an extra layer of protection in case a cybercriminal manages to unmask a username-password combination. 
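As a rough illustration of why that matters (and not a description of any particular provider's setup): passwords themselves are normally stored as salted, deliberately slow hashes rather than reversibly encrypted values, so that even a stolen credential table does not directly expose the passwords. A minimal Python sketch using only the standard library:

import hashlib
import hmac
import os

ITERATIONS = 200_000  # deliberately slow to make brute-forcing stolen hashes expensive

def hash_password(password):
    """Return (salt, digest) suitable for storing instead of the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False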

The post Websites: What to look for in a database security contract appeared first on Remote DBA Experts.

dotNet transaction guard

Laurent Schneider - Mon, 2014-08-11 10:16

also with ODP.NET in 12c, you can check the commit outcome, just as in JDBC

let’s create a table with a deferred primary key


create table t (x number primary key deferrable initially deferred);

Here an interactive Powershell Demo


PS> [Reflection.Assembly]::LoadFile("C:\oracle\product\12.1.0\dbhome_1\ODP.NET\bin\4\Oracle.DataAccess.dll")

GAC    Version        Location
---    -------        --------
True   v4.0.30319     C:\Windows\Microsoft.Net\assembly\GAC_64\Oracle.DataAccess\v4.0_4.121.1.0__89b483f429c47342\Oracle.DataAccess.dll

I first load the assembly. Some of my frequent readers may prefer Load("Oracle.DataAccess, Version=4.121.1.0, Culture=neutral, PublicKeyToken=89b483f429c47342") rather than hardcoding the oracle home directory.

PS> $connection=New-Object Oracle.DataAccess.Client.OracleConnection("Data Source=DB01; User Id=scott; password=tiger")

create the connection

PS> $connection.open()

connect

PS> $cmd = new-object Oracle.DataAccess.Client.OracleCommand("insert into t values (1)",$connection)

prepare the statement

PS> $txn = $connection.BeginTransaction()

begin transaction

PS> $ltxid = ($connection.LogicalTransactionId -as [byte[]])

Here I have my logical transaction id. Whatever happens to my database server (crash, switchover, restore, core dump, network disconnection), I have a logical id, and I will check it later.


PS> $cmd.executenonquery()
1

One row inserted


PS> $connection2=New-Object Oracle.DataAccess.Client.OracleConnection("Data Source=DB01; User Id=scott; password=tiger")
PS> $connection2.open()

I create a second connection to monitor the first one. Monitoring your own session would be unsafe and is not possible.


PS> $txn.Commit()

Commit, no error.


PS> $connection2.GetLogicalTransactionStatus($ltxid)
     Committed     UserCallCompleted
     ---------     -----------------
          True                  True

It is committed. I see Committed=True from $connection2. This is what I expected.

Because I have a primary key, let’s retry and see what happens.


PS> $txn = $connection.BeginTransaction()
PS> $ltxid = ($connection.LogicalTransactionId -as [byte[]])
PS> $cmd.executenonquery()
1
PS> $txn.Commit()
Exception calling "Commit" with "0" argument(s): "ORA-02091: Transaktion wurde zurückgesetzt
ORA-00001: Unique Constraint (SCOTT.SYS_C004798) verletzt"
At line:1 char:1
+ $txn.Commit()
+ ~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : OracleException
PS> $connection2.GetLogicalTransactionStatus($ltxid)
     Committed     UserCallCompleted
     ---------     -----------------
         False                 False

The commit fails, and from connection2 we see it is not committed. This is a huge step toward integrity, as Oracle tells you the outcome of the transaction.

We see Committed=False.
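The same check is available from other drivers that expose the logical transaction id. Here is a minimal sketch of the equivalent flow with the python-oracledb driver, assuming a database service with Transaction Guard (COMMIT_OUTCOME) enabled; treat it as an outline rather than a drop-in script:

import oracledb  # assumption: python-oracledb driver against a Transaction Guard-enabled service

conn = oracledb.connect(user="scott", password="tiger", dsn="DB01")
ltxid = conn.ltxid                      # logical transaction id, same idea as LogicalTransactionId in ODP.NET

cur = conn.cursor()
cur.execute("insert into t values (1)")
try:
    conn.commit()
except oracledb.DatabaseError:
    # Ask a *second* connection what really happened to that transaction.
    conn2 = oracledb.connect(user="scott", password="tiger", dsn="DB01")
    cur2 = conn2.cursor()
    committed = cur2.var(bool)
    call_completed = cur2.var(bool)
    cur2.callproc("dbms_app_cont.get_ltxid_outcome", [ltxid, committed, call_completed])
    print("committed:", committed.getvalue(), "user call completed:", call_completed.getvalue())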

Offline Visualization of Azkaban Workflows

Pythian Group - Mon, 2014-08-11 07:51

As mentioned in my past adventures, I’m often working with the workflow management tool ominously called Azkaban. Its foreboding name is not really deserved; it’s relatively straightforward to use, and offers a fairly decent workflow visualization. For that last part, though, there is a catch: to be able to visualize the workflow, you have to (quite obviously) upload the project bundle to the server. Mind you, it’s not that much of a pain, and could easily be managed by, say, a Gulp-fueled watch job. But still, it would be nice to tighten the feedback loop there, and be able to look at the graphs without having to go through the server at all.

Happily enough, all the information we need is available in the Azkaban job files themselves, and in a format that isn’t too hard to deal with. Typically, a job file will be called ‘foo.job’ and look like

type=command
command=echo "some command goes here"
dependencies=bar,baz

So what we need to do to figure out a whole workflow is to begin at its final job, and recursively walk down all its dependencies.

use 5.12.0;

use Path::Tiny;

sub create_workflow {
  my $job = path(shift);
  my $azkaban_dir = $job->parent;

  my %dependencies;

  my @files = ($job);

  while( my $file = shift @files ) {
    my $job = $file->basename =~ s/\.job//r;

    next if $dependencies{$job}; # already processed

    # keep only the 'dependencies=...' line, strip the prefix, then split on commas
    my @deps = map  { split /\s*,\s*/ }
               grep { s/^dependencies=\s*// }
                    $file->lines( { chomp => 1 } );

    $dependencies{$job} = \@deps;

    # queue each dependency's own job file so it gets walked as well
    push @files, map { $azkaban_dir->child( $_.'.job' ) } @deps;
  }

  return %dependencies;
}

Once we have that dependency graph, it’s just a question of drawing the little boxes and the little lines. Which, funnily enough, is a much harder job than one would expect. And better left to the pros. In this case, I decided to go with Graph::Easy, which outputs text and SVG.

use Graph::Easy;

my $graph = Graph::Easy->new;

while( my( $job, $deps ) = each %dependencies ) {
    $graph->add_edge( $_ => $job ) for @$deps;
}

print $graph->as_ascii;

And there we go. We put those two parts together in a small script, and we have a handy cli workflow visualizer.

$ azkaban_flow.pl target/azkaban/foo.job

  +------------------------+
  |                        v
+------+     +-----+     +-----+     +-----+
| zero | --> | baz | --> | bar | --> | foo |
+------+     +-----+     +-----+     +-----+
               |                       ^
               +-----------------------+

Or, for the SVG-inclined,

$ azkaban_flow.pl -f=svg target/azkaban/foo.job

which gives us

Screen Shot 2014-08-10 at 3.09.42 PM
Categories: DBA Blogs

Rittman Mead and Oracle Big Data Appliance

Rittman Mead Consulting - Mon, 2014-08-11 07:00

Over the past couple of years Rittman Mead have been broadening our skills and competencies out from core OBIEE, ODI and Oracle data warehousing into the new “emerging” analytic platforms: R and database advanced analytics, Hadoop, cloud and clustered/distributed systems. As we talked about in the recent series of updated Oracle Information Management Reference Architecture blog posts and my initial look at the Oracle Big Data SQL product, our customers are increasingly looking to complement their core Oracle analytics platform with ones to handle unstructured and big data, and as technologists we’re always interested in what else we can use to help our customers get more insight out of their (total) dataset.

An area we’ve particularly focused on over the past year has been Hadoop and R analysis, with the recent announcement of our partnering with Cloudera and the recruitment of a big data and advanced analytics team operating out of our Brighton, UK office. We’ve also started to work on a number of projects and proofs of concept with customers in the UK and Europe, working mainly with core Oracle BI, DW and ETL customers looking to make their first move into Hadoop and big data. The usual pattern of engagement is for us to engage with some business users looking to analyse a dataset hitherto too large or too unstructured to load into their Oracle data warehouse, or where they recognise the need for more advanced analytics tools such as R, MapReduce and Spark but need some help getting started. Most often we put together a PoC Hadoop cluster for them using virtualization technology on existing hardware they own, allowing them to get started quickly and with no initial licensing outlay, with our preferred Hadoop distribution being Cloudera CDH, the same Hadoop distribution that comes on the Oracle Big Data Appliance. Projects then typically move on to Hadoop running directly on physical hardware, in a couple of cases Oracle’s Big Data Appliance, usually in conjunction with Oracle Database, Oracle Exadata and Oracle Exalytics for reporting.

One such project started off by the customer wanting to analyse a dataset that was too large for the space available in their Oracle database and that they couldn’t easily process or analyse using the SQL-based tools they usually used; in addition, like most large organisations, database and hardware provisioning took a long time and they needed to get the project moving quickly. We came in and quickly put together a virtualised Hadoop cluster for them, on re-purposed hardware and using the free (Standard) edition of Cloudera CDH4, and then used the trial version of Oracle Big Data Connectors along with SFTP transfers to get data into the cluster and analyse it.

NewImage

The PoC itself then ran for just over a month with the bulk of the analysis being done using Oracle R Advanced Analytics for Hadoop, an extension to R that allows you to use Hive tables as a data source and create MapReduce jobs from within R itself; the output from the exercise was a series of specific-answer-to-specific-question R graphs that solved an immediate problem for the client, and showed the value of further investment in the technology and our services – the screenshot below shows a typical ORAAH session, in this case analyzing the flight delays dataset that you can also find on the Exalytics server and in smaller form in OBIEE 11g’s SampleApp dataset.

NewImage

That project has now moved onto a larger phase of work with Oracle Big Data Appliance used as the Hadoop platform rather than VMs, and Cloudera Hadoop upgraded from the free, unsupported Standard version to Cloudera Enterprise. The VMs in fact worked pretty well and had the advantage that they could be quickly spun-up and housed temporarily on an existing server, but were restricted by the RAM that we could assign to each VM – 2GB initially, quickly upgraded to 8GB per VM – and by the fact that they were sharing CPU and IO resources. Big Data Appliance, by contrast, has 64GB of RAM per node – something that’s increasingly important now that in-memory tools like Impala are being used – and has InfiniBand networking between the nodes as well as fast network connections out to the wider network, something that’s often overlooked when speccing up a Hadoop system.

The support setup for the BDA is pretty good as well; from a sysadmin perspective there’s a lights-out ILOM console for low-level administration, as well as plugins for Oracle Enterprise Manager 12c (screenshot below), and Oracle support the whole package, typically handling the hardware support themselves and delegating to Cloudera for more Hadoop-specific queries. I’ve raised several SRs on client support contracts since starting work on BDAs, and I’ve not had any problem with questions not being answered or buck-passing between Oracle and Cloudera.

NewImage

One thing that’s been interesting is the amount of actual work that you need to do with the Big Data Appliance beyond the initial installation and configuration by Oracle to “on-board” it into the typical enterprise environment. BDAs are left with customers in a fully-working state, but like Exalytics and Exadata, initial install and configuration is just the start, and you’ve then got to integrate the platform in with your corporate systems and get developers on-boarded onto the platform. Tasks we’ve typically provided assistance with on projects like these include:

  • Configuring Cloudera Manager and Hue to connect to the corporate LDAP directory, and working with their security team to create LDAP groups for developer and administrative access that we then used to restrict and control access to these tools
  • Configuring other tools such as RStudio Server so that developers can be more productive on the platform
  • Putting in place an HDFS directory structure to support incoming data loads and data archiving, as well as directories to hold the output datasets from the analysis work we’re doing – all within the POSIX security setup that HDFS currently uses which limits us to just granting owner, group and world permissions on directories
  • Working with the client’s infrastructure team on things like alerting, troubleshooting and setting up backup and recovery – something that’s surprisingly tricky in the Hadoop world as Cloudera’s backup tools only backup from Hadoop-to-Hadoop, and by definition your Hadoop system is going to hold a lot of data, the volume of which your current backup tools aren’t going to easily handle

Once things are set up, though, you’ve got a pretty comprehensive platform that can be expanded from the initial six nodes our customers’ systems typically start with to the full eighteen-node cluster, and that can use tools such as ODI to do data loading and movement, Spark and MapReduce to process and analyse data, and Hive, Impala and Pig to provide end-user access. The diagram below shows a typical future-state architecture we propose for clients on this initial BDA “starter config” where we’ve moved up to CDH5.x, with Spark and YARN generally used as the processing framework and with additional products such as MongoDB used for document-type storage and analysis:

NewImage

 

Something that’s turned out to be more of an issue on projects than I’d originally anticipated is complying with corporate security policies. By definition, most customers who buy an Oracle Big Data Appliance are going to be large customers with an existing Oracle database estate, and if they deal with the public they’re going to have pretty strict security and privacy rules you’ll need to adhere to. Something that’s surprising therefore to most customers new to Hadoop is how insecure, or at least how easily compromised, the average Hadoop cluster is, with Hadoop FS shell security relying on trusted networks and incoming user connections, and interfaces such as ODBC not checking passwords at all.

Hadoop and the BDA only become what’s termed “secure” when you link them to a Kerberos server, but not every customer has Kerberos set up, and unless you enable this feature right at the start when you set up the BDA, it’s a fairly involved task to add retrospectively. Moreover, customers are used to fine-grained access control to their data, a single security model over their data and a good understanding in their heads as to how security works on their database, whereas Hadoop is still a collection of fairly loosely-coupled components with pretty primitive access controls, and no easy way to delete or redact data when, for example, a particular country’s privacy laws in theory mandate this.

Like everything there’s a solution if you’re creative enough, with tools such as Apache Sentry providing role-based access control over Hive and Impala tables, alternative storage tools like HBase that permit read, write, update and delete operations on data rather than just HDFS’s insert and (table or partition-level) delete, and tools like Cloudera Navigator and BDA features like Oracle Audit Vault that provide administrators with some sort of oversight as to who’s accessing what data and when. As I mentioned in my blog post a couple of weeks ago, Oracle’s Big Data SQL product addresses this requirement pretty well, potentially allowing us to apply Oracle security over both relational, and Hadoop, datasets, but for now we’re working within current CDH4 capabilities and planning on introducing Apache Sentry for role-based access control to Hive and Impala in the coming weeks. We’re also looking at implementing Cloudera’s “secure gateway” cluster topology with all access restricted to just a single gateway Hadoop node, and the cluster itself firewalled-off with external access to just that gateway node and HTTP / REST API access to the various cluster services, for example as shown in the diagram below:

NewImage

My main focus on Hadoop projects has been on the overall Hadoop system architecture, and interacting with the client’s infrastructure and security teams to help them adopt the BDA and take over its maintenance. From the analysis side, it’s been equally as interesting, with a number of projects using tools such as R, Oracle R Advanced Analytics for Hadoop and core Hive/MapReduce for data analysis, Flume, Java and Python for data ingestion and processing, and most recently OBIEE11g for publishing the results out to a wider audience. Following the development model that we outlined in the second post in our updated Information Management Reference Architecture blog series, we typically split delivery of each project’s output into two distinct phases: a discovery phase, typically done using RStudio and Oracle R Advanced Analytics for Hadoop, where we explore and start understanding the dataset, presenting initial findings to the business and using their feedback and direction to inform the second phase; and a commercial exploitation phase, where we use the discovery phase’s outputs and models to drive a more structured dimensional model, with output being in the form of OBIEE analyses and dashboards.

NewImage

We looked at several options for providing the datasets for OBIEE to query, with our initial idea being to connect OBIEE directly to Hive and Impala and let the users query the data in-place, directly on the Hadoop cluster, with an architecture like the one in the diagram below:

NewImage

In fact this turned out to not be possible, as whilst OBIEE 11.1.1.7 can access Apache Hive datasources, it currently only ships with HiveServer1 ODBC support, and no support for Cloudera Impala, which means we need to wait for a subsequent release of OBIEE11g to be able to report against the ODBC interfaces provided by CDH4 and CDH5 on the BDA (although ironically, you can get HiveServer2 and Impala working on OBIEE 11.1.1.7 on Windows, though this platform isn’t officially supported by Oracle for Hadoop access, only Linux). Whichever way though, it soon became apparent that even if we could get Hive and Impala access working, in reality it made more sense to use Hadoop as the data ingestion and processing platform – providing access to data analysts at this point if they wanted access to the raw datasets – but with the output of this then being loaded into an Oracle Exadata database, either via Sqoop or via Oracle Loader for Hadoop and ideally orchestrated by Oracle Data Integrator 12c, and users then querying these Oracle tables rather than the Hive and Impala ones on the BDA, as shown in the diagram below.

NewImage

In practice, Oracle SQL is far more complete and expressive than HiveQL and Impala SQL, and it makes more sense to use Oracle as the query platform for the vast majority of users, with data analysts and data scientists still able to access the raw data on Hadoop using tools like Hive, R and (when we move to CDH5) Spark.

The final thing that’s been interesting about working on Hadoop and Big Data Appliance projects is that 80% of it, in my opinion, is just the same as working on large enterprise data warehouse projects, with 20% being “the magic”. A large portion of your time is spent on analysing and setting up feeds into the system, just in this case you use tools like Flume instead of GoldenGate (though GoldenGate can also load into HDFS and Hive, something that’s useful for transactional database data sources vs. Flume’s focus on file and server log data sources). Another big part of the work is data processing, ingestion, reformatting and combining, again skills an ETL developer would have (though there’s much more reliance, at this point, on command-line tools and Unix utilities, albeit with a place for tools like ODI once you get to the set-based filtering, joining and aggregating phase). In most cases, the output of your analysis and processing will be Hive and Impala tables so that results can be analysed using tools such as OBIEE, and you therefore need skills in areas such as dimensional modelling, business analysis and dashboard prototyping as well as tool-specific skills such as OBIEE RPD development.

Where the “magic” happens, of course, is the data preparation and analysis that you do once the data is loaded, quite intensively and interactively in the discovery phase and then in the form of MapReduce and Spark jobs, Sqoop loads and Oozie workflows once you know what you’re after and need to process the data into something more tabular for tools like OBIEE to access. We’re building up a team competent in techniques such as large-scale data analysis, data visualisation, statistical analysis, text classification and sentiment analysis, and use of NoSQL and JSON-type data sources, which combined with our core BI, DW and ETL teams allows us to cover the project from end-to-end. It’s still relatively early days but we’re encouraged by the response from our project customers so far, and – to be honest – the quality of the Oracle big data products and the Cloudera platform they’re based around – and we’re looking forward to helping other Oracle customers get the most out of their adoption of these new technologies. 

If you’re an Oracle customer looking to make their first move into the worlds of Hadoop, big data and advanced analytics techniques, feel free to drop me an email at mark.rittman@rittmanmead.com  for some initial advice and guidance – the fact we come from an Oracle-centric background as well typically makes it easier for us to relate these new concepts to the ones you’re typically more familiar with. Similarly, if you’re about to bring on-board an Oracle Big Data Appliance system and want to know how best to integrate it in with your existing Oracle BI, DW, data integration and systems management estate, get in contact and I’d be happy to share experiences and our delivery approach.

Categories: BI & Warehousing

Vote for Rittman Mead at the UKOUG Partner of the Year Awards 2014!

Rittman Mead Consulting - Mon, 2014-08-11 03:00

Rittman Mead are proud to announce that we’ve been nominated by UKOUG members and Oracle customers in five categories in the upcoming UKOUG Partner of the Year Awards 2014: Business Intelligence, Training, Managed Services, Operating Systems Storage and Hardware, and Emerging Partner, reflecting the range of products and services we now offer for customers in the UK and around the world.

NewImage

Although Rittman Mead are a worldwide organisation with offices in the US, India, Australia and now South Africa, our main operation is in the UK and for many years we’ve been a partner member of the UK Oracle User Group (UKOUG). Our consultants speak at UKOUG Special Interest Group events as well as the Tech and Apps conferences in December each year, we write articles for Oracle Scene, the UKOUG members’ magazine, and several of our team including Jon and myself have held various roles including SIG chair and deputy chair, board member and even editor of Oracle Scene.

Partners, along with Oracle customers and of course Oracle themselves, are a key part of the UK Oracle ecosystem and to recognise their contribution the UKOUG recently brought in their Partner of the Year Awards that are voted on by UKOUG members and Oracle customers in the region. As these awards are voted on by actual users and customers we’ve been especially pleased over the years to win several Oracle Business Intelligence Partner of the Year Gold awards, and last year we were honoured to receive awards in five categories, including Business Intelligence Partner of the Year, Training Partner of the Year and Engineered Systems Partner of the Year.

This year we’ve been nominated again in five categories, and if you like what we do we’d really appreciate your vote, which you can cast at any time up to the closing date, September 15th 2014. Voting is open to UKOUG members and Oracle customers and only takes a few minutes – the voting form is here and you don’t need to be a UKOUG member, only an Oracle end-user or customer. These awards are a great recognition for the hard work our team puts in, so thanks in advance for any votes you can put in for us!

Categories: BI & Warehousing