Rittman Mead Consulting
Over the past week Venkat, myself and the Rittman Mead India team have been running a series of BI Masterclasses at locations in India, in conjunction with ODTUG, the Oracle Development Tools User Group. Starting off in Bangalore, then traveling to Hyderabad and Mumbai, we presented on topics ranging from OBIEE through Exalytics through to EPM Suite and BI Applications, and with networking events at the end of each day.
Around 50 attended in Bangalore, 30 in Hyderabad and 40 in Mumbai, and at the last event we were joined by Harsh Bhogle from the local Oracle office, who presented on Oracle’s high-level strategy around business analytics. Thanks to everyone who attended, thanks to ODTUG for sponsoring the networking events, and thanks especially to Vijay and Pavan from Rittman Mead India who organised everything behind the scenes. If you’re interested, here’s a Flickr set of photos from all three events (plus a few from the start of the week, when I visited our offices in Bangalore).
For anyone who couldn’t attend the events, or if you were there and you’d like copies of the slides, the links below are for the PDF versions of the sessions we presented at various points over the week.
- Oracle BI, Analytics and EPM Product Update
- Extreme BI: Agile BI Development using OBIEE, ODI and Golden Gate
- OBIEE 11g Integration with the Oracle EPM Stack
- OBIEE and Essbase on Exalytics Development & Deployment Best Practices
- OBIEE 11g Security Auditing
- Intro and tech deep dive into BI Apps 11g + ODI
- Metadata & Data loads to EPM using Oracle Data Integrator
So I’m writing this in my hotel room in Mumbai on Sunday morning, waiting for the airport transfer and then flying back to the UK around lunchtime. It’s been a great week but my only regret was missing the UKOUG Apps’13 conference last week, where I was also supposed to be speaking but managed to double-book myself with the event in India.
In the end, Mike Vickers from Rittman Mead in the UK gamely took my place and presented my session, which was put together as a joint effort with Minesh Patel, another of the team in the UK and one of our BI Apps specialists. Entitled “Oracle BI Apps – Giving the Users the Reports they *Really* Want”, it’s a presentation around the common front-end customisations that we typically carry out for customers who want to move beyond the standard, generic dashboards and reports provided by the BI Apps, and again if you missed the session or you’d like to see the slides, they’re linked-to below:
That’s it for now – and I’ll definitely be at Tech’13 in a few weeks’ time, if only because I’ve just realised I’m delivering the BI Masterclass sessions on the Sunday, including a session on OBIEE/ODI and Hadoop integration - I’ve been saying to myself I’d like to get these two tools working with Impala as an alternative to Hive, so that gives me something to start looking at on the flight back later today.
In my previous post, I introduced the steps necessary for integrating Oracle BI Applications 11.1.1.7.1 and GoldenGate (OGG). Now, I’m going to dive into the details and describe how to complete the setup and configuration of GoldenGate and the Source Dependent Data Store schema. As I mentioned before, this process will closely follow Oracle’s documentation on “Administering Oracle GoldenGate and Source Dependent Schemas“, providing additional information and insight along the way.
User and Schema Setup
The first step is to manually create the GoldenGate user on the source and target databases. These users, along with the Source Dependent Data Store schema, are not created by the BI Apps installer like the other standard schemas. This will be a dedicated user for OGG, with privileges specific to the needs of the extract process on the source and the replicat process on the target.
Create Source GoldenGate User
Beginning with the source, create the user and grant the initial privileges. Be sure your tablespace has already been created.
-- Create OGG User on the source
CREATE USER ogg_user IDENTIFIED BY Password01
DEFAULT TABLESPACE ggs_data QUOTA UNLIMITED ON ggs_data;
GRANT CREATE SESSION TO ogg_user;
GRANT ALTER SESSION TO ogg_user;
GRANT SELECT ANY DICTIONARY TO ogg_user;
GRANT FLASHBACK ANY TABLE TO ogg_user;
The specific table grants will not be made until later on via a script generated by an ODI Procedure, as the GoldenGate user does not need SELECT ANY TABLE privileges. On the other hand, the user does temporarily need ALTER ANY TABLE in order to set up supplemental logging for individual tables. Later on, this privilege can be revoked.
GRANT ALTER ANY TABLE TO ogg_user;
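Once supplemental logging has been configured for the relevant tables, revoking the privilege is simply the mirror image of the grant (the same pattern we’ll use for the target user later on):

REVOKE ALTER ANY TABLE FROM ogg_user;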
Finally, we’ll set up supplemental logging at the database level, ensuring the necessary information is logged for each transaction.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Create Target GoldenGate User
Next, we’ll go out to the target server and create the GoldenGate user with target-specific privileges. Since GoldenGate performs the DML on the target, based on the change made in the source database, the user will need to be granted privileges to INSERT, UPDATE, DELETE. Again, rather than grant INSERT ANY TABLE, etc., the specific table grants will be generated as a script via an ODI Procedure.
-- Create OGG User
CREATE USER ogg_target IDENTIFIED BY Password01
DEFAULT TABLESPACE USERS QUOTA UNLIMITED ON USERS;
GRANT CREATE SESSION TO ogg_target;
GRANT ALTER SESSION TO ogg_target;
GRANT SELECT ANY DICTIONARY TO ogg_target;
We’ll be creating the checkpoint table via GoldenGate, so this user will temporarily need to be granted the CREATE TABLE privilege. The checkpoint table will keep track of the latest position in the target trail file, allowing a clean recovery should the target database go offline.
GRANT CREATE TABLE TO ogg_target;
Create SDS User
Now we’ll create the SDS user and schema. A separate SDS schema must be created for each OLTP source application, as the SDS schema will essentially act as the source schema. We’ll follow the recommended naming convention for the schema: <BIAPPS>_SDS_<Model Code>_<DSN>. BIAPPS is a user-defined code signifying that this is a BI Applications schema; to keep it simple, we’ll use BIAPPS. The Model Code is the unique code assigned to the data source, and the DSN is the data source number for that source application.
In this example using Peoplesoft Campus Solutions, the SDS schema name is BIAPPS_SDS_PSFT_90_CS_20. Not a very friendly name to type, but serves its purpose in identifying the source of the schema data.
-- Create tablespace.
CREATE TABLESPACE BIAPPS_SDS_PSFT_90_CS_20_TS
DATAFILE '/u01/app/oracle/oradata/orcldata/BIAPPS_SDS_PSFT_90_CS_20.dbf'
SIZE 100M AUTOEXTEND ON NEXT 10M
LOGGING
DEFAULT COMPRESS FOR OLTP;
-- Create SDS User
CREATE USER BIAPPS_SDS_PSFT_90_CS_20 IDENTIFIED BY Password01
DEFAULT TABLESPACE BIAPPS_SDS_PSFT_90_CS_20_TS
QUOTA UNLIMITED ON BIAPPS_SDS_PSFT_90_CS_20_TS;
-- Required Grants
GRANT CREATE SESSION TO BIAPPS_SDS_PSFT_90_CS_20;
GRANT CREATE TABLE TO BIAPPS_SDS_PSFT_90_CS_20;
Finally, the GoldenGate target user must be granted access to use the SDS tablespace for inserts/updates.
-- OGG user must be granted Quota to insert and update data
ALTER USER ogg_target QUOTA UNLIMITED ON BIAPPS_SDS_PSFT_90_CS_20_TS;
Install and Configure GoldenGate
The schemas are in place, so the next part of the setup is to install and configure the GoldenGate application on both the source and target servers. The GoldenGate installation process is pretty well documented on Gavin Soorma’s blog, so I won’t go into much detail here. The Oracle BI Applications documentation also has some example scripts, which take you through the setup of the extract, data pump, and replicat group processes.
The naming standards for the parameter files are fairly straightforward, with <DSN> being the same data source number we used in the SDS schema name.
- Extract: EXT_<DSN> (e.g. EXT_20)
- Data Pump: DP_<DSN> (e.g. DP_20)
- Replicat: REP_<DSN> (e.g. REP_20)
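To make the naming concrete, here’s a rough sketch of what an extract parameter file for data source number 20 might contain – the schema and table names below are hypothetical, and in practice the real parameter files are generated for you by an ODI Procedure (covered in the next post):

EXTRACT EXT_20
USERID ogg_user, PASSWORD Password01
--local trail that captured changes are written to
EXTTRAIL ./dirdat/et
--source tables to capture changes for
TABLE PSFT90.PS_LEDGER;
TABLE PSFT90.PS_STDNT_CAR_TERM;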
Following the OBIA documentation examples, you will end up with each process group set up and the checkpoint table created in the target GoldenGate schema. I prefer to create an obey file for both the source and target setup scripts, similar to the following example.
--stop manager on target db
dblogin userid ogg_target, password Password01
stop mgr
--stop gg processes
stop rep_20
delete rep_20
--delete CHECKPOINTTABLE
DELETE CHECKPOINTTABLE ogg_target.OGGCKPT
--delete previous trail files
SHELL rm ./dirdat/*
--start manager on target db
start mgr
--create CHECKPOINTTABLE in target db
dblogin userid ogg_target, password Password01
ADD CHECKPOINTTABLE ogg_target.OGGCKPT
add replicat rep_20, exttrail ./dirdat/tr, CHECKPOINTTABLE ogg_target.OGGCKPT
Using an obey script allows me to rerun the process should there be any sort of issue or failure, and it also provides me with a template that I can use for additional sources and SDS targets. The result should be process groups set up and ready to roll (once the parameter files are in place, of course).
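The source side follows the same pattern. As a sketch only – assuming the EXT_20 and DP_20 group names and the trail locations used above – the source obey file might contain:

--source setup for DSN 20
dblogin userid ogg_user, password Password01
--create the primary extract, capturing from the redo log
add extract ext_20, tranlog, begin now
add exttrail ./dirdat/et, extract ext_20
--create the data pump, reading the local trail and writing to the target trail
add extract dp_20, exttrailsource ./dirdat/et
add rmttrail ./dirdat/tr, extract dp_20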
Remember to revoke the CREATE TABLE privilege from the target GoldenGate user once the checkpoint table has been created.
REVOKE CREATE TABLE FROM ogg_target;
In the next post, I’ll walk through the SDS setup in OBIA and ODI, as well as the ODI Procedures that help generate the GoldenGate parameter files, SDS schema DDL, and initial load scripts.
Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration
That’s it! The long-awaited Oracle Data Integrator 12c is out! You can find the 12.1.2 version on the ODI Downloads page and discover the new features here while it gets downloaded. The main news is surely the new flow-based paradigm and the ability to load multiple targets within the same interface… Oh no wait, we don’t talk about “interfaces” anymore – they’re now called “mappings”! Fantastic: ODI developers can now communicate with developers using other ETL tools using the same vocabulary. This new terminology and the flow-based paradigm also bring ODI and Oracle Warehouse Builder (OWB) closer than ever. Let’s have a glance at some of the new features.
So this release brings a whole new way to develop your integration jobs. In the new logical tab of your mapping, you can drag and drop source datastores from your model as in previous versions of ODI. But you will also have to drag your target datastores onto the same canvas.
To map your source to your target and build the logic, a new “component” panel, very similar to the OWB one, has been added on the right-hand side of ODI Studio. From there you can drag and drop join, filter, expression, union and lookup components into your canvas. There is even a distinct component, which means we are done with creating a yellow interface just to select distinct rows from the source. Isn’t it nice? And more than that, it was announced during OOW that more components are on the ODI roadmap.
Every datastore and component has an IN and an OUT connector. Dataflows are created by dragging one connector onto another. What is interesting is that you can map an OUT connector to multiple IN connectors, and therefore load multiple targets at the same time!
<old-school>Good news for the nostalgics, it’s still possible to develop a mapping using the interface-style paradigm… More information in a future post.</old-school>
The physical tab of your mapping is similar to the former flow tab of ODI 11g. That’s where you select your KMs and their options. What is interesting is that you can now have multiple physical implementations – called deployment specifications – while keeping the same business logic. You can, for instance, create a single mapping for both your initial and your incremental load by selecting a different IKM in each of these deployment specifications (DS), and choose which one you want to use at run-time.
The new mappings also introduce a few useful features:
- In-mapping lineage and impact analysis: when a column in one datastore/component of the mapping is selected, all the columns used to load it and all the columns loaded by it are highlighted.
- Syntax highlighting in the expression fields.
- Autocompletion in every expression field, with columns suggested based on the few characters already typed. This is the best announcement ever for a lazy developer like me!
ODI 12c introduces the concept of reusable mappings, similar to those in OWB. A reusable mapping is designed like a standard mapping, except that it uses an input and/or an output signature in place of datastores. This allows it to be reused in multiple regular mappings by connecting these signatures to other components.
When upgrading from ODI 11g to ODI 12c, yellow (temporary) interfaces will be turned into reusable mappings.
Instead of just running a mapping, you can now also debug it from ODI Studio. A brand new pane appears where you can see the blueprint of your mapping. From there, you can set breakpoints or control your execution step by step. Thanks to this, it’s possible to review the temporary state of your data or the values of variables at every step of an execution.
What about your old OWB jobs?
ODI 12.1.2 enables the execution and monitoring of OWB jobs within ODI. A new OWB technology is now supported in the topology, in order to plug in the OWB repository. Once that’s done, you can run an OWB mapping or process flow from a package or a procedure, thanks to a new ODI tool called OdiStartOwbJob.
According to the ODI and OWB Statement of Direction, a migration utility should be available later to automatically translate some OWB objects into their ODI equivalents. Of course, some manual work will be required as well.
To be continued…
Of course Part 2 is coming with other new features… Also expect a lot more to come from other Rittman Mead guys in the next few days!
[Update 04-Nov-2013] The second part is published.
There will come a point in the lifecycle of an OBIEE deployment when one user will need to access another user’s account. This may be to provide cover whilst a colleague is on leave, or support staff trying to reproduce a reported error.
Password sharing aside (it’s zero-config! but a really really bad idea), OBIEE supports two methods for one user to access the system as if they were another: Impersonation and Act As.
This blog article is not an explanation of how to set these up (there are plenty of blogs and rip-off blogs detailing this already), but to explain the difference between the two options.
First, a quick look at what they actually are.
Impersonation
Impersonation is where a “superuser” (one with the oracle.bi.server.impersonateUser application policy grant) can log in to OBIEE as another user, without needing their password. It is achieved in the front end by constructing a URL, specifying:
- The superuser’s login name and password (the NQUser and NQPassword URL parameters)
- The login ID of the user to impersonate (the Impersonate parameter)
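For illustration only – the host, port and account names here are made up – the constructed URL uses the standard saw.dll Go command and looks something like this:

http://biserver:9704/analytics/saw.dll?Go&NQUser=superuser&NQPassword=Password01&Impersonate=jsmith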
The server will return a blank page to this request, but you can then submit another URL to OBIEE (e.g. the OBIEE catalog page or home page) and you will already be authenticated as the impersonated user – without having specified their password.
From here you can view the system as they would, and carry out whatever support or troubleshooting tasks are required.
Caution: Impersonation is disabled by default, even for the weblogic Administrator user, and it is a good idea to leave it that way. If you do decide to enable it, make sure that the user to whom you grant it has a secure password that is not shared or known by anyone other than the account owner. Also, you will see from the illustration above that the password is submitted in plain text, which is not good from a security point of view. It could be “sniffed” along the way or, more easily, extracted from the browser history.
Act As
Whilst Act As is a very similar concept to Impersonation (allow one user to access OBIEE as if they were another), Act As is much more controlled in how it grants the rights. Act As requires you to specify a list of users who may use the functionality (“Proxy users”), and for each of the proxy users, a list of users (“Target users”) who they may access OBIEE as.
Act As functionality is accessed from the user dropdown menu:
From there, a list of users that the logged-in user (the “proxy user”) has been configured to be able to access is shown:
Selecting a user switches straight to it:
In addition to this fine grained specification of user:user relationships, you can specify the level of access a Proxy user gets – full, or read-only. Target users (those others can Act As) can see from their account page who exactly has access to their account, and what level of access.
So what’s the difference?
Here’s a comparison I’ve drawn up
Here are a couple of examples to illustrate the point:
Based on this, my guidelines for use would be :
- As an OBIEE sysadmin, you may want to use Impersonate to test and troubleshoot issues. However, it is functionality intended much more for systems integration than front-end user consumption. It doesn’t offer anything that Act As doesn’t, except fewer configuration steps. It is less secure than Act As, and could even be seen as a “backdoor” option. Particularly at companies where audit/traceability is important, it should be left disabled.
- Act As is generally the better choice in all scenarios of an OBIEE user needing the ability to access another’s account, whether between colleagues, L1/L2 support staff, or administrators.
Compared to Impersonation, it is more secure, more flexible, and more granular in whose accounts can be accessed by whom. It is also fully integrated into the user interface as standard functionality of the tool.
- Act As: Enabling Users to Act for Others
- Impersonate: How to use OBIEE Impersonate parameter for quick checks
Thanks to Christian Berg and Gianni Ceresa for reading drafts of this article and providing valuable feedback
The release of Oracle Business Intelligence Applications 11.1.1.7.1 includes a major change in components, with Oracle Data Integrator replacing Informatica as the ETL application. The next logical step was to integrate Oracle’s data replication tool, Oracle GoldenGate, for a real-time load of source system data to the data warehouse. Using GoldenGate replication rather than a conventional extract process, contention on the source is essentially eliminated and all of the source OLTP data is stored locally on the data warehouse, eliminating network bottlenecks and allowing ETL performance to increase. In this series of posts, I’m going to walk through the architecture and setup for using GoldenGate with OBIA 11.1.1.7.1.
GoldenGate and the Source Dependent Data Store
For those of you not familiar with Oracle GoldenGate (OGG), it is the standard Oracle product for data replication, providing log-based change data capture, distribution, and delivery in real-time.
GoldenGate captures transactional data changes from the source database redo log and loads the changes into its own log file, called a Trail File, using a platform-independent universal data format. The Extract process understands the schemas and tables from which to capture changes based on the configuration set in the Extract parameter file. The data is then read from the Source Trail File and moved across the network to the Target Trail File using a process called a Data Pump, also driven by a parameter file. Finally, the transactions are loaded into the target database tables using the Replicat parameter file configuration, which maps source tables and columns to their target. The entire process occurs with sub-second latency and minimal impact to the source and target systems.
In my previous blog posts regarding Oracle GoldenGate, I described how to replicate changes from the source to the Staging and Foundation layers of the Oracle Reference Architecture for Information Management. In OBIA, GoldenGate is used for pure replication from the source database to the target data warehouse, into what is known as the Source Dependent Data Store (SDS) schema.
The SDS is set up to look exactly like the source schema, allowing the pre-built Oracle Data Integrator Interfaces to change which source they are using from within the Knowledge Module by evaluating a variable (IS_SDS_DEPLOYED) at various points throughout the KM (we’ll look at this in more detail later on). Using this approach, the GoldenGate integration can be easily enabled at any point, even after initial configuration. In fact, that is exactly what I did – making the switch to using OGG after my first full data load from the source without GoldenGate. The Oracle BI Apps team did a great job of utilizing the features of ODI that allow the logical layer to be abstracted from the physical layer and data source connection.
Getting Started – High Level Steps
To begin, we must first install Oracle BI Applications 11.1.1.7.1, if it is not already up and running in your environment. I followed the recently published OTN article, “Cookbook: Installing and Configuring Oracle BI Applications 11.1.1.7.1”, written by Mark Rittman and Kevin McGinley. Rather than use Windows, though, I decided to go with Linux as my host operating system for OBIA. This had its own challenges, but nothing’s worth doing if there isn’t a bit of learning involved! After generating the “Source Extract and Load” Load Plan, it’s time to set up GoldenGate.
Before we dig into the details of the GoldenGate integration, let’s review the necessary steps at a high-level. The process follows Oracle’s documentation on administering GoldenGate and OBIA Source Dependent Schema.
1. Configure the source and target database schemas.
We need to create a GoldenGate user on both the source and target databases, as well as the Source Dependent Data Store schema on the target, along with the appropriate grants, etc.
2. Install Oracle GoldenGate on the source and target servers.
Download and install GoldenGate on each server. The Oracle BI Applications documentation shows an example on how to get the configuration started.
3. Configure the Source Dependent Data Store.
Enable the SDS in the OBIA Configuration Manager and create a new Physical Schema for the SDS in Oracle Data Integrator.
4. Generate and execute the DDL to create tables in the SDS schema on the target database.
As part of the OBIA installation, many “standard” ODI Packages and Procedures were created, including GENERATE_SDS_DDL. By entering the appropriate values into the Options during execution, the process will generate a SQL script that can then be executed against the SDS.
5. Generate the initial load script.
Yet another OBIA delivered Procedure will generate a SQL script for the initial load from the source to SDS schema. The script will contain INSERT statements using a database link from target to source. This script may be useful if an outage were called on the source application during OBIA and GoldenGate integration setup. But, if transactions are still flowing into the source application, a different approach will need to be used. We’ll get into more details on this later (hint: it involves the source SCN).
6. Generate and deploy the GoldenGate parameter files.
This is where we might expect to see the “JKM Oracle to Oracle Consistent (OGG)” Journalizing Knowledge Module put to use, correct? But no, the OBIA product team decided to go with a custom Procedure rather than the JKM, as the ODI Change Data Capture (aka Journalizing) is not put to use. Execute the GENERATE_SDS_OGG_PARAM_FILES Scenario, copy the parameter files to the appropriate locations, and complete the GoldenGate configuration.
7. Start GoldenGate replication.
Again, if there is not a source outage we will probably need to customize our start replicat statement to ensure we do not miss any transactions from the source.
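As a sketch of what that might look like – the group name follows the naming convention above, and the SCN value is a placeholder – GoldenGate lets you start the replicat so that it only applies transactions committed after a given source SCN:

start replicat REP_20, aftercsn <source_scn>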
Once GoldenGate replication has begun, you can continue to use the same “Source Extract and Load” Load Plan that was originally generated to pull data from the source database into the data warehouse. But now, instead of reaching out to the source database, this Load Plan will pull data from the SDS schema into the staging area.
In the next post, we’ll dive deeper into the setup and configuration details, working through each of the steps listed above.
Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration
The UK OUG Apps 2013 conference is a must attend for users of Oracle Applications. Held at the Brewery in London, there are three full days of content with 10 streams and over 150 speakers.
Rittman Mead are proud to be the Analytics Sponsor for the event, and have two speaker sessions.
Click on the details below to view the session abstracts and more details
Speaker: James Knight – Head of Advanced Analytics
Date: Monday 14th October at 16.40 in the Queen Charlotte Hall
Speaker: Mike Vickers
Date: Tuesday 15th October at 10.20 in the King Vault Hall
We look forward to seeing you there
The Rittman Mead Team
One of the data visualisations that you can use with Oracle Endeca Information Discovery is the “tag cloud”. You’ve probably seen tag clouds used in newspaper articles and other publications to show the most commonly found words or phrases in a document, the screenshot below shows a tag cloud in Endeca built on data sourced from comments in a social media feed.
The component within Endeca Information Discovery that extracted the bank names from the data feed is called the “text enrichment engine”, which actually uses technology that Oracle license from a company called Lexalytics. When you use the text enrichment engine, one of the things it does is to extract “entities” such as people, companies, products, places, email addresses and dates, along with themes and any quotations mentioned in the text.
However, as you might have noticed from the tag cloud, several of the banks and other institutions that this process extracts have a few different variations in their name – for example, Amex is also shown as AmEx, AMEX and so on – but obviously these all actually refer to the same company, American Express. So how can we display tag clouds in Endeca but deal with this data issue in the background?
Another issue that can occur is that some words may be ignored or mistakenly allocated to the wrong group of entities. For example, I had “OMG” picked up as a company name – which is correct, except that checking the data itself proved it to be a shortening of “Oh My God” in the text!
One solution to these kinds of problems is to use the Text Tagger component within Integrator, the Endeca Information Discovery data-loading tool used to load Endeca Server data domains. Using this Text Tagger component, you can prepare a list – in this example, a list of companies of interest – in advance, and the component will find and tag any matching record with the pre-defined tag, ignoring case if required.
In some circumstances you will want to create a new list based on the application that you are working on. An example related to the image above is a list of all banks or financial institutions and their acronyms. It could be that we want to exclude company names that are not related to banking, rather than having companies such as Amazon appear, as in the tag cloud shown earlier.
To solve any of the above cases, the text enrichment engine supports customer-defined lists (CDLs). In the example below, I’ll create one of these lists and use it to clean up the organisation naming so that my tag cloud shows the correct names for each organisation.
1. First, create a customer-defined list and save it with a .cdl suffix (e.g. custom.cdl) in ..\Lexalytics\data\user\salience\entities\lists. As a rule, the file format must be similar to this:
Lloyds TSB<tab> Lloyds
British Broadcasting Corporation<tab>BBC
2. Update the Text Enrichment data directory. The default directory is normally Lexalytics\data, but after applying the CDL file it should point to Lexalytics\data\user instead.
3. Update the Salience.properties file within Oracle Endeca Information Discovery ETL tool, Integrator designer. By default the property “te.entity.types” contains Company, Person, Place, Product, Sports and Title. Add “List” to include user-defined entities.
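In other words, assuming the standard Java-style properties syntax of that file, the entry ends up looking something like this:

te.entity.types=Company,Person,Place,Product,Sports,Title,List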
4. Run the new graph with the updated configuration; here is the new tag cloud using the new entity I created:
The new v309R2 version of the OBIEE 11g SampleApp is now available for download on OTN, based on the 11.1.1.7.1 version of OBIEE and with a number of new dashboards, analyses and integration examples.
OBIEE 11.1.1.7.1 is of course the version that supports the new Mobile App Designer, Oracle’s new HTML5-based mobile BI authoring tool. I covered this new mobile BI option a few weeks ago, and the new SampleApp includes a number of Mobile App Designer demos that you can access either from the main dashboard, or on your mobile device through the new “App Store”.
SampleApp v309R2 also comes with the back-end Oracle Database upgraded to 12cR1, which means there’s some examples of the temporal queries, pattern matching queries and so on that we covered during the 12c launch.
This new version also comes with a couple of “tips and tricks” features that you might want to look into further, to see how they were done. The first one is having two RPDs, and two catalogs, running on the same install, as you can see from the screenshots below – one is on port 9704 whilst the other is on 9502, but they’re both on the same IP address.
This isn’t quite the “holy grail” of hosting multiple RPDs and catalogs on the same installation though – the way it’s actually done is by creating a second BI instance within the same middleware home, so you’ve got two WebLogic domains and therefore two admin servers, two managed servers and so on. It’s still useful if you’re looking to host multiple demos on the same VM (bearing in mind each install will take another 2GB of RAM because of the WebLogic JVMs), and it also helps illustrate the difference between middleware homes, domains and instances.
The other “tip and trick” that I noticed was the example of displaying image files uploaded into the catalog directly in the analysis results, rather than having to expose them through a WebLogic folder (the res folders that John Cook talks about in this blog post). In the screenshots below, you can see the final dashboard page with a number of catalog items displayed in it, and then the underlying analysis that accesses them via the saw.dll?downloadfile call.
Earlier in the year I announced that we had been nominated in 5 categories for the UKOUG Partner of the Year Awards. The award ceremony was last night and I’m proud to say that we received an award in each of the categories we entered and won two of them, results below:
- Business Intelligence – Gold
- Emerging (New Products) – Gold
- Engineered Systems – Silver
- Training – Silver
- Managed Services – Bronze
This is great recognition of the work everyone has put in at Rittman Mead across the group, from the Managed Services team we set up last year, to the investment we make in our training courses, the work we have done with Exalytics and the work we do with some of the new products, such as Endeca, R and RTD.
We very much appreciate the support of everyone who voted for us, it means a lot to everyone working at Rittman Mead.
As well as industry-leading solution architecture, consultancy and training on Oracle BI, here at Rittman Mead we also provide expert services in the implementation and support of such systems. In this blog I want to share some of the things I find useful when working with OBIEE on a Linux system.
OBIEE Linux start up script / service
Ticking both the OBIEE and Linux boxes, this script that I wrote is probably top of my list of recommendations (he says modestly…). It enables you to start and stop OBIEE using the standard Linux service command, integrate it into system startup/shutdown (through init.d), and it also supports an advanced status command which does its very best to determine the health of your OBIEE system.
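Assuming you’ve installed the script as a service named obiee (the name is whatever you register it under), usage follows the standard service pattern:

sudo service obiee start     # start WebLogic and the OBIEE system components
sudo service obiee status    # health check of the stack
sudo service obiee stop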
screen is one of the most useful programs that I use on Linux. I wrote extensively about it in my blog post screen and OBIEE. It enables you to do things such as:
- Run multiple commands simultaneously in one SSH session
- Disconnect and reconnect (deliberately, or from dropped connections, e.g. on unreliable wi-fi or 3G) and pick up exactly where you left off, with all processes still running
- Share your SSH view with someone else, for remote training or a second pair of eyes when troubleshooting
- Search through screen scroll back history, cut and paste
- …a lot more!
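If you want to give it a try, the core commands are minimal (the session name here is arbitrary):

screen -S obiee-work    # start a new session named "obiee-work"
# ...do your work; press Ctrl-A then D to detach, leaving it all running
screen -ls              # list sessions running on this host
screen -r obiee-work    # reattach, e.g. after your connection dropped
screen -x obiee-work    # multi-attach, to share the view with a colleague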
There are other screen multiplexers such as tmux, but I’ve found that screen is the most widely available by default. Since they all have quite steep learning curves and esoteric key shortcuts to operate them, I tend to stick with screen.
SSH keys
SSH keys are nothing to do with OBIEE per se, but an important part of Linux server security to understand, IMNSHO (In My Not-So Humble Opinion!).
Maybe I’m overly simple but I like pretty pictures when I’m trying to grasp concepts, so here goes:
You create a pair of keys using ssh-keygen. These are plain text and can be cut and pasted, copied, as required. One is private (e.g. id_rsa), and you need to protect this as you would any other security artifact such as server passwords; you can optionally secure it with a passphrase. The other is public (e.g. id_rsa.pub), and you can share it with anyone.
Your public key is placed on any server you need access to, by the server’s administrator. It needs to go in the .ssh folder in the user’s home folder, in a file called authorized_keys. As many public keys as need access can be placed in this file. Don’t forget the leading dot on .ssh!
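A minimal sketch of the whole process, with a hypothetical user and host (ssh-copy-id does the appending to authorized_keys for you, where it’s available):

ssh-keygen -t rsa            # writes ~/.ssh/id_rsa (private) and ~/.ssh/id_rsa.pub (public)
ssh-copy-id oracle@biserver  # appends your public key to ~/.ssh/authorized_keys on the server
ssh oracle@biserver          # subsequent logins need no password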
- You don’t need a password to login to a server, which is a big time saver and productivity booster.
- Authentication becomes about “this is WHO may access something” rather than “here is the code to access it, we have no idea who knows it though”.
- It removes the need to share server passwords
- Better security practice
- Easier auditing of exactly who used a server
- It enables the ability to grant temporary access to servers, and precisely control when it is revoked and from whom.
- Private keys can be protected with a passphrase, without which they can’t be used.
- Using SSH keys to control server access is a lot more secure since you can disable server password login entirely, thus kiboshing any chance of brute force attacks
- SSH keys can be used to support automatic connections between servers for backups, starting jobs, etc, without the need to store a password in plain text
- SSH keys are just plain text, making them dead easy to backup in a Password Manager such as LastPass, KeePass, or 1Password.
- SSH keys work just fine from Windows. Tools such as PuTTY and WinSCP support them, although you need to first convert the private key to ppk format using PuTTYGen, an ancillary PuTTY tool.
- Whilst SSH keys reside by default in your user home .ssh folder, you can store them on a cloud service such as Dropbox and then use them from any machine you want.
- To make an ssh connection using a key not in the default location, use the -i flag, for example: ssh -i ~/Dropbox/ssh-keys/mykey user@host
- To see more information about setting up SSH keys, consult the ssh-keygen and sshd man pages.
- The authorized_keys file is space separated, and the last entry on each line can be a comment. This normally defaults to the user and host name where the key was generated, but can be freeform text to help identify the key more clearly if needed. See man sshd for the full spec of the file.
Dead simple, this one – if you’re working on a server, or maybe a development VM, and need to check it has an internet connection, or want to know what its IP address is:
curl -s http://icanhazip.com/
This command returns just the IP and nothing else. Of course, if you don’t have curl installed then it won’t work, so you’re left with the ever-reliable ifconfig.
vi
…or emacs, or whatever your poison is. My point is that if you are going to be spending any serious time as an admin you need to be able to view and edit files locally on the Linux server. Transferring them to your Windows desktop with WinSCP to view in Notepad is what my granny does, and even then she’s embarrassed about it.
Elitism and disdain aside, the point remains. The learning curve of these console-based editors repays itself many-fold in time and thus efficiency savings in the long run. It’s not only faster to work with files locally, it reduces context-switching and the associated productivity loss.
- I need to view this log file
- I need to check this log file for an error
- Close terminal window
- Start menu … now where was that program … hey fred, what’s the program … yeh yeh WinSCP that’s right
- Scroll though list of servers, or find IP to connect to
- Try to remember connection credentials
- Hey I wonder if devopsreactions has anything cool on it today
- Back to the job in hand … transfer file , which file?
- Hmmm, what was that folder called … something something logs, right?
- Dammit, back to the terminal … pwd, right, gotcha
- Navigate to the folder in WinSCP, find the file
- Download the file
- That dbareactions is pretty funny too, might just have a quick look at that
- Open Notepad (or at least Notepad++, please)
- Open file … where did Windows put it? My Documents? Desktop? Downloads? Libraries, whatever the heck they are ? gnaaargh
- Wonder if those cool guys at Rittman Mead have posted anything on their blog, let’s go have a look
- Back to Notepad, got the log file, now …… what was I looking for?
This has to be my #1 tip in the Work Smarter, Not Harder category of OBIEE administration, and is as applicable to OBIEE on Windows as it is to OBIEE on Linux. Silent installs are where you run the installer “hands off”: you create a file in advance that describes all the configuration options to use, and then crank the handle and off it goes. You can use silent installs for:
- OBIEE Enterprise install
- OBIEE Software Only install
- OBIEE domain configuration
- WebLogic Server (WLS) install
- Repository Creation Utility (RCU), both Drop and Create
The advantages of silent installs are many:
- Guaranteed identical configurations across installations
- No need to waste time getting a X Server working for non-Windows installs to run the graphical install client
- Entire configuration of a server can be pre-canned and scripted
- Running the graphical installer is TEDIOUS the first time, the second, third, tenth, twentieth … kill me. Silent installs make the angels sing and new born lambs frolic in the virtual glades of OBIEE grass meadows heady with the scent of automatically built RCU schemas
To find out more about silent installations, check out:
- OBIEE: Installing Oracle Business Intelligence in Silent Mode
- RCU: Running Repository Creation Utility from the Command Line
- WLS: Running the Installation Program in Silent Mode
We’ve shared some example response files on the Rittman Mead public GitHub repository, or you can generate your own by running the installer once in GUI mode and selecting the Save option on the Summary screen. You can just run the installer to generate the response file – you don’t have to actually proceed with the installation if all you want to do is generate it.
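As a flavour of what this looks like in practice, RCU can be driven entirely from the command line – the connection details and schema prefix below are placeholders, and in silent mode the passwords are read from standard input:

./rcu -silent -createRepository \
  -connectString localhost:1521:orcl \
  -dbUser sys -dbRole sysdba \
  -schemaPrefix DEV \
  -component BIPLATFORM -component MDS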
opatch napply
I wrote about this handy little option for opatch in a blog post here. Where you have more than one patch to apply (as happens frequently with OBIEE patch bundles) this can be quite a time saver.
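The gist of it – the patch directory here is hypothetical – is to point opatch at a directory of unzipped patches and apply them in a single pass:

cd /home/oracle/patches    # directory containing the unzipped patches
$ORACLE_HOME/OPatch/opatch napply -skip_duplicate -skip_subset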
Bash is the standard command line that you will encounter on Linux. Here are a few tricks I find useful:
Ctrl-R – command history
This is one of those shortcuts that you’ll wonder how you did without. It’s like going through your command history by pressing the up/down arrows (you knew about that one, right?), but on speed.
What Ctrl-R does is let you search through your command history and re-use a command just by hitting enter.
How it works is this:
1) Press Ctrl-R. The bash prompt changes to (reverse-i-search)`':
2) Start entering part of the command line entry that you want to repeat. For example, I want to switch back to my FMW config folder. All I type is “co” to match the “co” in config, and bash shows me the match:
(reverse-i-search)`co': cd /u01/dit/fmw/instances/instance1/config/
3) If I want to amend the command, I can press left/right arrows to move along the line, or just hit enter and it gets re-issued straight off
4) If there are multiple matches, either keep typing to narrow the search down, or press Ctrl-R to show the next match or Shift-Ctrl-R to show the previous match
Another example: to repeat my sqlplus command, I just press Ctrl-R and start typing sql, and it’s matched:
(reverse-i-search)`sq': sqlplus / as sysdba
Finally, repeat the restart of Presentation Services, just by entering ps:
(reverse-i-search)`ps': ./opmnctl restartproc ias-component=coreapplication_obips1
If you prefix any command with time, you get a nice breakdown of how long it took to run and where the time was spent after it completes. Very handy for quick bits of performance testing etc, or just curiosity :-)
$ time ./opmnctl restartproc ias-component=coreapplication_obis1
opmnctl restartproc: restarting opmn managed processes...

real    0m14.387s
user    0m0.016s
sys     0m0.031s
watch is a fantastic little utility that will take the command you pass it and repeatedly issue it, by default every two seconds.
You can use it to watch disk space, directory contents, and so on.
watch df -h
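A couple of variations I find handy – -n sets the refresh interval in seconds, and -d highlights what changed between refreshes:

watch -n 5 df -h      # every 5 seconds instead of the default 2
watch -d ls -l        # highlight differences as files appear or grow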
Not the exclamation “sudo!”, but sudo !! – meaning, repeat the last command, but with sudo.
$ tail /var/log/messages
tail: cannot open `/var/log/messages' for reading: Permission denied
$ sudo !!
sudo tail /var/log/messages
Sep 26 18:18:16 rnm-exa-01-prod kernel: e1000: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
Sep 26 18:18:16 rnm-exa-01-prod avahi-daemon: Invalid query packet.
What is sudo? Well I’m glad you asked:
Over to you!
Which commands or techniques are you flabbergasted aren’t on this list? What functionality or concept should all budding OBI sysadmin padawans learn? Let us know in the comments section below.
Regular readers of this blog will know Stewart Bryson, our US Managing Director and, in his spare time, Oracle ACE, writer and presenter on Oracle BI & DW development topics. Those of you who’ve met Stewart will know that his first love has always been technology and how to implement it well, so we’re pleased to announce that Stewart is taking on the newly created role of Chief Innovation Officer, working alongside our CTO, Mark Rittman, and CEO, Jon Mead. So what is a “Chief Innovation Officer”, and what does Stewart intend to do with his new role? I had a chance to put a few questions to Stewart about his new appointment, and here is what he had to say.
Pippa Old [PO]: “For those who don’t know you, tell me a little bit about yourself”
Stewart Bryson [SB]: “I like to say that I grew up as an Oracle DBA. While I’ve watched the Oracle BI ranks grow with the acquisitions of Siebel and Hyperion, I’ve been under the Red Tent for my entire career. Lots of folks come to Oracle BI from the top-down: starting as financial users or developers who are trying to find ways to get the data they need. I charted a reverse path, starting from the data warehouse perspective and moving up the stack to work on products such as ODI and OBIEE.
I was awarded the Oracle ACE a few years ago while I was promoting a methodology we call Extreme BI here at Rittman Mead. It’s an agile approach to delivering content that makes the business user a major component in delivery. It requires a thorough understanding of many of Oracle’s BI products, and I’d like to think my unique experience and capabilities are what have made it successful for us. I think these are the same reasons that folks come to see me speak… to see something a little different in the BI space.”
[PO]: “Many people may know you for your podcasts with Kevin McGinley – will you keep doing them?”
[SB]: “Absolutely. Whether we have 10 viewers or 10,000, Kevin and I will certainly keep doing this. We both look forward to recording the show every time. The premise was simple: folks love to get together and talk about sports, or movies, stock portfolios, etc. So why not have a medium where enthusiasts come together and talk about Oracle BI? When I’m speaking at conferences, I usually have a few attendees come up to me afterwards and say “I love the show.” So we will still do it. It’s just lots of fun.”
[PO]: “Explain the new role of Chief Innovation officer at Rittman Mead”
[SB]: “Over the last four years of managing Rittman Mead America, we’ve had a lot of things to be proud of, both technically and commercially, but it is the technical successes that have been most rewarding. So I’m glad I get to focus on our technical capabilities at Rittman Mead, and work more closely with Mark Rittman, converting all the roadmap and product knowledge he has into a framework and process that we can use to improve our delivery capabilities.
The BI landscape is changing rapidly, and these changes are causing a lot of internal reflection at Rittman Mead about the approaches we take to helping our customers deliver intelligence. The new acquisitions from Oracle such as Endeca make us question whether our current slate of technologies is right for all occasions. We see paradigm shifts in the general technology market that reverberate through to BI… things like Big Data and cloud computing that drastically change delivery approaches. The advances in engineered systems and appliances have a similar effect. I’m excited that I will be the main conduit through which these approaches are vetted and assimilated… there’s no place I would rather be.”
[PO]: “So how important is innovation to you as an individual and Rittman Mead?”
[SB]: “That’s an interesting question. Jon Mead, Mark Rittman and I spent a long time formulating the duties for this new role, but I would say that Jon and I spent almost as much time coming up with the title. I wanted to make sure that, if I traded in my CEO duties with the US business, it was for the right reason, and honestly, there’s nothing more important in today’s marketplace than innovation. New ideas become old ones in the blink of an eye, and a company like Rittman Mead needs to ensure we evolve to always deliver maximum value to our customers. Innovation, when done right, can change the world… Apple is the perfect example. Rittman Mead is not going to change the world the way Apple has… but there’s no reason we can’t leave our mark on the BI space.”
[PO]: “Rittman Mead has 100 or so employees. Why is innovation so important?”
[SB]: “While 100 employees may seem small to folks working for Fortune 100 companies… it’s almost incomprehensible to me. I think I was employee number five for Rittman Mead… and I’m counting Mark and Jon in that. Early on, our Oracle ACEs were involved with actual delivery: Mark in the UK, Venkatakrishnan Janakiraman in India, me in the US. But as we grow, we have the unique opportunity to take those minds and direct them toward innovation: what we like to call the “Rittman Mead Way”. I want to take the seemingly limitless technical expertise at Rittman Mead and focus it toward building products and frameworks that benefit our customers both in the quality of what we deliver and in the pace of that delivery.”
[PO]: “Will you be focusing on how to improve Oracle BI or new innovative uses of the technology?”
[SB]: “I’ll be focused on fleshing out the “Rittman Mead Way”. This encompasses, at the very core, how we approach delivery with our customers. One of the components that I’m focusing on immediately is producing the Rittman Mead Delivery Framework. Though I can’t share exact details yet, this will involve a series of licensed products that are engineered to deliver immediate ROI to our customers. But also, this will involve a series of accelerators that help our consultants deliver our undeniable industry leadership rapidly and with 100% consistency.”
[PO]: “With Salesforce.com and other vendors offering cloud-based services, what is RM’s view and strategy for cloud and SaaS?”
[SB]: “Oracle finally dove in and immersed themselves in the Cloud, and you can expect the same from us. Though I can’t discuss specifics yet… expect some major announcements from us in this area in the coming months.”
[PO]: “Finally, on a personal note, what’s your favourite innovation in the last few years?”
[SB]: “It has to be the iPad. That product changed my life. It’s such an intimate delivery mechanism… it’s now how I consume almost all media: books, news, movies, etc.”
Running in partnership with ODTUG, and with myself, Venkat Janakiraman and Stewart Bryson leading the sessions, we’re looking forward to sharing the news from Openworld, talking about the latest in Oracle BI and EPM development, and meeting Oracle BI enthusiasts at each event.
The event is taking place across three cities – Bangalore, Hyderabad and Mumbai – with each masterclass running for a full day. We’ll be in Bangalore on Tuesday 15th October, Hyderabad on Thursday 17th October, and then fly up to Mumbai for Saturday 19th October 2013. Full details are on the event home page, including details on how to register, with each masterclass’s agenda looking like this:
- 9.30am – 10.00am: Registration and Welcome
- 10.00am – 10.30am: Oracle BI, Analytics and EPM Product Update – Mark Rittman
- 10.30am – 11.30am: Extreme BI: Agile BI Development using OBIEE, ODI and Golden Gate – Stewart Bryson
- 11.30am – 12.30pm: OBIEE 11g Integration with the Oracle EPM Stack – Venkatakrishnan J
- 12.30pm – 1.30pm: Lunch & Networking
- 1.30pm – 2.30pm: OBIEE and Essbase on Exalytics Development & Deployment Best Practices – Mark Rittman
- 2.30pm – 3.30pm: Oracle BI Multi-user Development: MDS XML versus MUDE – Stewart Bryson
- 3.30pm – 4.00pm: Coffee Break & Networking
- 4.00pm – 5.00pm: Intro and tech deep dive into BI Apps 11g + ODI
- 5.00pm – 6.00pm: Metadata & Data loads to EPM using Oracle Data Integrator - Venkatakrishnan J
The dates, locations and registration links for the three events are as follows:
- Bangalore, Fortune Select Trinity Hotel, Whitefield: October 15th 2013, 10am – 6pm (IST)
- Hyderabad, Westin Mindspace, Hyderabad, Hitec City: October 17th 2013, 10am – 6pm (IST)
- Mumbai, Courtyard Marriott, Mumbai: October 19th 2013, 10am – 6pm (IST)
We’re also investigating the idea of bringing our Rittman Mead BI Forum to India in 2014, so this would be a good opportunity to introduce yourself to us and the other attendees if you’d like to present at that event, and generally let us know what you’re doing with Oracle’s BI, EPM, analytics and data warehousing tools. There’ll also be lots of ODTUG goodies and giveaways, and a social event in the evening after the main masterclass finishes.
Numbers are limited though, and places are going fast – check out the event page for full details, and hopefully we’ll see some of you in either Bangalore, Hyderabad or Mumbai!
I’m sitting writing this at my desk back home, with a steaming mug of tea next to me and the kids pleased to see me after having been away for eight days (or at least my wife pleased to hand them over to me after looking after them for eight days). It was an excellent Oracle Openworld – probably the best in the ten years I’ve been going in terms of product announcements – and if you missed any of my daily updates, here are the links to them:
- Oracle Openworld 2013 Day 0: Previewing the Week Ahead
- Oracle Openworld 2013 Day 1 : User Group Forum, News on Oracle Database In-Memory Option
- Oracle Openworld 2013 Day 2: Exalytics, TimesTen and Essbase Futures
- Oracle Openworld 2013 Days 3 & 4 : Oracle Cloud, OBIEE and ODI Futures
We also delivered sixteen sessions over the week, and whilst a few of them can’t be circulated because they contain details on beta or forthcoming products, here’s links to the ones that we can post:
- Deep Dive into OBIA 11.1.1.7.1 – Overview (Mark Rittman)
- Deep Dive into OBIA 11.1.1.7.1 – Data Integration (Stewart Bryson)
- Agile BI Development using OBIEE, ODI and Golden Gate (Stewart Bryson)
- Hyperion Profitability & Cost Management – Integration of Standard & Detailed Profitability (Venkatakrishnan J)
- Make the Most of your Exalytics and BI Investments with Enterprise Manager 12c (Mark Rittman, Henrik Blixt, Dhananjay Papde)
- Birds of a Feather Session: Best Practices for TimesTen on Exalytics (Mark Rittman, Chris Jenkins, Tony Heljula)
- Oracle BI EE Integration with Hyperion Data Sources (Venkatakrishnan J)
- Oracle Data Quality Solutions, Oracle Data Integrator and Oracle GoldenGate on Exadata (Jérôme Françoisse and Gurcan Orhan)
- Innovation in BI: OBIEE against Essbase and Relational (Stewart Bryson and Edward Roske)
- How to Handle Dev/Test/Prod with Oracle Data Integrator (Jérôme Françoisse and Gurcan Orhan)
- Oracle Endeca User Experience Case Study at Barclays (James Knight and Kelvin Lau)
- Configuring OBIA 11.1.1.7.1 on ODI – Deep Dive (Mark Rittman, Kevin McGinley and Hari Cherukupalli)
So then, on reflection, what did I think about the various product announcements during the week? Here’s my thoughts now I’m back in the UK.
First off – Exalytics. Clearly there’s a lot of investment going into the Exalytics offering, both from the hardware and the software sides. For hardware, it’s just really a case of Oracle keeping up with additions to Sun’s product line, and with the announcement of the T5-8 model we’re now up to 4TB of RAM and 128 SPARC CPU cores – aimed at the BI consolidation market, where 1 or 2TB of RAM quickly goes if you’re hosting a number of separate BI systems. Cost-wise – it’s correspondingly expensive, about twice the price of the X3-4 machine, but it’s got twice the RAM, three times the CPU cores and runs Solaris, so you’ve got access to the more fine-grained workload separation and virtualisation that you get on that platform. Not a machine that I can see us buying for a while, but there’s definitely a market for this.
With Exalytics, though, you could argue that it’s been the software that’s underwhelmed so far, as opposed to the hardware. The Summary Advisor is good, but it doesn’t really handle the subsequent incremental refresh of the aggregate tables, and TimesTen itself, whilst fast and powerful, hasn’t had a great “out of the box” experience – in the wrong hands it can give misleadingly slow response times, something I found for myself a few months back on the blog. So it was interesting to hear some of the new features that we’re likely to see in “Exalytics v2.0”, probably late in calendar year 2014: an updated aggregate refresh mechanism based on DAC Server technology, with support for GoldenGate; new visualisations, including data mash-up capabilities that I’m guessing we’ll see as exclusives on Exalytics and Oracle’s cloud products; enhancements coming for Essbase that’ll make it easier to spin off ASO cubes from an OBIEE repository; and of course the improvements to TimesTen to match those coming in the core Oracle database – in-memory analytics.
And what an announcement that was – in-memory column-store technology within the Oracle database, not predicated on using Exadata, and all running transparently in the background with minimal DBA setup required. Now in reality this is not the first in-memory Oracle database offering – the Exadata machines in previous Openworld presentations were also positioned as in-memory, but that was flash memory, not DRAM – and Oracle are not the first vendor to offer an in-memory column store as a feature. But given that it’ll be available to all Oracle 12.1.2 databases that license the in-memory option, and that it’ll be so easy to administer – in theory – it’s a potential industry game-changer.
Of course the immediate question on my lips after the in-memory Oracle Database announcement was “what about TimesTen“, and “what about TimesTen’s role in Exalytics”, but Oracle played this in the end very well – TimesTen will gain similar capabilities, implemented in a slightly different way as TimesTen already stores its data in memory, albeit in row-store format – and in fact TimesTen can then take on the role of a business-controlled, mid-tier analytic “sandbox”, probably receiving new in-memory features faster than the core Oracle database as it has fewer dependencies and a shorter release cycle, while complementing the Oracle database and its own, larger-scale in-memory features. And that’s not forgetting those customers with data from multiple, heterogeneous sources, or those that can’t afford to stump up for the In-Memory option for all of the processors in their data warehouse database server. So – fairly internally consistent, at least at the product roadmap level, and we’ll be looking to get on any betas or early adopter programs to put both products through their paces.
The other major announcement that affects OBIEE customers is, of course, OBIEE in the Cloud – or “Reporting-as-a-Service” as Oracle referred to it during the keynotes. This is one of the components of Oracle’s new “platform-as-a-service” or PaaS offering, alongside a new, full version of Oracle 12c based on its new multitenant architecture, identity-management-as-a-service, documents-as-a-service and so on. What reporting-as-a-service will give us isn’t quite “OBIEE in the cloud”, or at least, not as we know it now; Oracle’s view on platform-as-a-service is that it should be consumer-level in terms of simplicity of setup and the quality of the user interface, it should be self-service and self-provisioning, and simple to sign up for with no separate need to license the product. So in OBIEE terms, what this means is a simplified RPD/data model builder, simple options to upload and store data (also in Oracle’s cloud), and automatic provisioning using just a credit card (although there’ll also be options to pay by PO number etc., for the larger customers.)
And there are quite a few things that we can draw out of this announcement. First, it’s squarely aimed – at least at the start – at individual users, departmental users and the like looking to create sandbox-type applications, most probably also linking to Oracle Cloud Database, Oracle Java-as-a-Service and the like. It won’t, for example, be possible to upload data to this service’s datastore using conventional ETL tools, as the only datasource it will connect to, at least initially, will be Oracle’s Cloud Database schema-as-a-service, which only allows access via ApEx and HTTP, because it’s a shared service and giving you SQL*Net access could compromise other users. In the future it may well connect to Oracle’s full DBaaS, which gives you a full Oracle instance, but for now (as far as I’ve heard) there’s no option to connect to an on-premise data source, or Amazon RDS, or whatever. And for this type of use-case that may be fine; you might only want a single data source, and you can still upload spreadsheets which, if we’re honest, is where most sandbox-type applications get their data from.
This Reporting-as-a-Service offering might well be where we see new user interface innovations coming through first, though. I get the impression that Oracle plan to use their Cloud OBIEE service to preview and test new visualisation types first, as they can iterate and test faster, and the systems running on it are smaller in scope and probably more receptive to new features. Similar to Salesforce.com and other SaaS providers, it may well be the case that there’s a “current version” and a “preview version” available at most times, with the preview becoming the current after a while, and the current being something you’ve got 6-12 months to switch from after that point. And given that Oracle will know there’s an Oracle database schema behind the service, it’s going to make services such as the proposed “personal data mashup” feature possible, where users can upload spreadsheets of data through OBIEE’s user interface, with the data then being stored in the cloud and the metrics then being merged in with the corporate dataset, with the source of each bit of data clearly marked. All this is previews and speculation though – I wouldn’t expect to see this available for general use until the middle of 2014, given the timetable for previous Oracle cloud releases.
The final product area that I was particularly interested in hearing future product direction about was Oracle’s Data Integration and Data Quality tools. We’ve been on the ODI 12c beta for a while and we’re long-term users of OWB, EDQ, GoldenGate and the other data integration tools; moreover, on recent projects, and in our look at the cloud as a potential home for our BI, DW and data analytics projects, it’s become increasingly clear that database-to-database ETL is no longer what data integration is solely about. For example, if you’re loading a data warehouse in the cloud, and the source database is also in the cloud, does it make sense to host the ETL engine, and the ETL agents, on-premise, or should they live in the cloud too?
And what if the ETL source is not a database, but a service, or an application such as Salesforce.com that provides a web service / RESTful API for data access? What if you want to integrate data on-the-fly, like OBIEE does with data federation but in the cloud, from a wide range of source types including services, Hadoop, message buses and the like? And where do replication, quality-of-service management, security and so forth come in? In my view, ODI 12c and its peers will probably be the last of the “on-premise”, “assumed-relational-source-and-target” ETL tools, with ETL instead following apps and data into the cloud, assuming that sources can be APIs, messages, big data sources and so forth as well as relational data, and it’ll be interesting to see what Oracle’s Fusion Middleware and DI teams come up with next year as their vision for this technology space. Thomas Kurian’s keynote touched on this as a subject, but I think we’re still a long way from working out what the approach will be, what the tooling will look like, and whether this will be “along with”, or “instead of”, tools like ODI and Informatica.
Anyway – that’s it for Openworld for me, back to the real world now and time to see the family. Check-back on the blog next week for normal service, but for now – laptop off, kids time.
I wanted to continue on Mark Rittman’s excellent posts about the BI Management Packs and Enterprise Manager 12c:
There is an increasing demand to monitor BI systems as they become more critical to businesses and their daily operations. I work in the Rittman Mead Global Services team, and we are starting to see an increasing demand for complete monitoring solutions. Within Global Services we are conscious that most BI systems go un-monitored, or have only basic checks being performed. There are many aspects of a BI system that can be monitored: we need to be able to know if the system is up, and how it is performing from both a system level and the end-user experience. On that basis, I thought I’d share some of the more advanced monitoring aspects of Enterprise Manager 12c.
I’d like to start off by looking at “Service Beacons”, which enable us to simulate a test that might be performed by a human; in particular I will be looking at web transactions, which are a set of web-based events. The goal is to simulate a user logging on to OBIEE, and to test whether an analysis comes back with an expected row set. In this example I used a vanilla install of OBIEE 11.1.1.7 with SampleAppLite, and created a simple analysis with ‘Product Type’ and ‘# of Orders’ placed on My Dashboard, which has been set up to be the default dashboard to show once logged in.
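For reference, the logical SQL behind an analysis like this would look something like the sketch below – the subject area and column names here are assumptions based on SampleAppLite, so check the Advanced tab of your own analysis for the exact statement:

    -- Hedged sketch: logical SQL for a simple two-column analysis.
    -- "Sample Sales Lite", "Product Type" and "# of Orders" are assumed
    -- SampleAppLite names; substitute the names from your own subject area.
    SELECT
       "Products"."Product Type" s_1,
       "Base Facts"."# of Orders" s_2
    FROM "Sample Sales Lite"
    ORDER BY 1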
After connecting OBIEE to EM12c (as per Mark’s post), you can now add a Service Beacon to perform the aforementioned simulation. There is, however, a requirement to use Internet Explorer in order to record the user interaction (these recordings are referred to as Web Transactions); I used Internet Explorer 8, but more on this later. We have an SLA with the business that guarantees a certain dashboard page is 95% available 24/7, and that users can access it within 10 seconds. To monitor performance against this SLA, we’re going to set up a service beacon that will test against a set of pre-recorded steps (in EM terms, a Web Transaction). To achieve this we need to perform the following steps:
1. Log into EM, with OBIEE already registered as a target
2. Record the web transaction in EM by using IE to log into OBIEE, bring up the dashboard, and log off (plugin required)
3. Set up the service beacon to run this test every X minutes
4. Test the metric and see the output, which can then be used in an event
Start off by navigating to Targets > Services; from this screen, click Create > Generic Service.
In the next screen enter a name for the service; in this example, User Agent 1.
We also need to define which system this service should run on. Ideally you would place it on a dedicated system; in this case we will place it on the BI Server.
Click Select System and select the Target Type: Oracle BI Instance, then press Select.
After returning to the General page, click Next to proceed to the Availability stage. Ensuring Service Test is selected, click Next.
The Service Test step is where we can define our Web Transaction. Type in a name for the test, for example “OBIEE Login”, and then select Create a Transaction and click Go.
When clicking on Record, Internet Explorer will prompt you to install a plugin. For this plugin to install, though, you’ll most probably need to alter Internet Explorer’s security settings, by lowering the zone security settings or adding the site to the trusted sites list and importing the certificates – see the Internet Explorer help files for full details on how to do this.
Once configured click on Record.
After this button is clicked, another instance of Internet Explorer is opened, displaying a blank web page.
In this window enter the URL for OBIEE into the URL bar, and press Enter. You should then see the standard OBIEE login page, as shown below.
Navigate to the dashboard page that you’re interested in setting up the service beacon for, to simulate a user bringing up this page.
Then log out and close the browser window.
Back in the Enterprise Manager browser window, it should have recorded the login steps, which should look similar to the screenshot below. Click Continue to proceed.
Back on the service test screen there will be a few steps filled out; this is the result of the transaction recording. At this stage it’s not easy to debug the process, so we need to continue with the creation process. Click Continue to proceed.
Now back on the Service Test screen click Next.
Next we need to add a beacon, the location where this service will be monitored from. Click the Add button.
In this basic setup, the beacon installed as part of EM will be used; ideally you would use a beacon that replicates a client machine, to simulate a more realistic test.
Select the EM Management Beacon, and then click Select.
Now back on the Beacons screen, click Next.
Enter the Performance Metrics required; in this example, click Next to go with the default values.
The Usage Metrics step is where we can associate metrics with this service test. For example, if you have a few sessions initialised that could affect the login times to Oracle BI, it might be worth monitoring the connection pool usage to ensure that it’s not over-used.
The final screen reviews what has been set up, for now we will complete the service creation by clicking the Finish button.
Returning to the services page, our new service will have a clock next to it until it has initialized and run for the first time. After a couple of minutes, refresh the page to see if the agent is up.
Next we need to further configure this service, so click on User Agent 1.
Note: at this point I should make it clear that my screenshots show a service called User Agent 2, not User Agent 1.
The initial home screen shows the performance of the service beacon, but for more details we need to navigate to the Monitoring Configuration screen
From this screen click on the Service Tests and Beacons Link
This screen is almost like the one seen during the setup process, but has an added benefit: by clicking the Verify Service Test button, it can run through the interaction and show us what is returned.
Tick the Enable logging box and then click Perform Test. Note that the status is up – and should be while the web page is running – but we have not yet entered any success criteria. After the test, click on the download log icon.
Save the log file
After downloading the log, extract it, locate index.html and open it.
This should show the steps that the beacon has performed
When examining the steps, one thing I noticed is that OBIEE’s Presentation Services is complaining about the browser version…
And after looking at the headers, the error is because the service test is identifying itself as Internet Explorer 6.0 – an unsupported browser for recent versions of OBIEE.
So our first task is to change the user agent that this service is running as. Return to the Service Tests and Beacons page, ensure OBIEE Login is selected and click Edit.
Navigate to the Advanced Properties tab.
On this screen expand Request and replace the user agent with:
Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)
Now we can re-run the test from this page by scrolling up and clicking on Verify Service Test
Perform the test and download the log file, and you should notice it’s now much bigger.
Now we need to find the step that has loaded our analysis, in this example it’s the HTTP Redirect element.
Using this, we can go into step 2 of our process: navigate back to the Steps tab and click Edit.
On the Step screen, scroll down to the bottom, select the second redirect-related step and click Edit; this is the step to which we need to add our validation test.
Scroll down to the Validation section; in this example we will add the validation text Plasma, as it was one of the product types returned in our initial analysis.
Click Continue, Continue again and validate the service test again. If all is well it should be up.
You can now test this by either turning off your BI server or editing the analysis so that Plasma does not get returned. Either change should result in the service test going down.
This in-depth setup provides a real-world test that stops your end users being the ones to tell you your system is not working, by proactively monitoring:
- The availability of the BI System
- The database returning results (and up to date results)
- The LDAP and login mechanism being able to service requests
- The time it takes to log on, return data and log off
- A growing set of metrics which can be used to track system uptime
… and provides an Oracle supported route to monitoring a BI system.
Stay tuned as I try to cover other elements of Enterprise Manager 12c, including other types of Service Tests, Metric Templates and Administration Groups.
It’s getting to the end of day 4 of Oracle Openworld, with Team Oracle USA coming back from 8-1 down and pulling off the unlikeliest of comeback victories, and Openworld sessions covering OBIEE, ODI and Oracle Cloud futures. Here are the details:
1) Oracle’s Cloud Strategy
Last year Oracle announced a number of cloud initiatives, with database and Java cloud service announcements followed by availability later in 2013. Unlike Amazon’s Web Services offering, though, which basically gave you an open platform for deploying database and web applications (Infrastructure-as-a-Service, or “IaaS”), Oracle’s offerings last year were designed to be simpler, offering you a single database schema in the case of the Database Cloud product, and a Java Cloud Service providing a single WebLogic managed server.
The Database Cloud Service only let you access the schema via HTTP and a RESTful API though, because providing SQL*Net access opened up security issues around the database listener; this left you with programmatic calls, or uploading data via ApEx and spreadsheet uploads. Similarly, the Java Cloud Service was only really aimed at small applications and websites, and you could imagine building a simple web app using the Java Cloud front-end and the database cloud back-end – but this was PaaS (Platform-as-a-Service), and not something that would really be of interest to BI&DW developers, as there was no ETL capability and no way to run BI tools such as OBIEE.
In the meantime, organisations (such as ourselves) that wanted to run complete OBIEE, or a complete data warehouse, in the cloud have had to use services such as Amazon AWS, where compute, storage, network and other infrastructure are provided as-a-service, with few restrictions and with billing by the hour. In most cases, these are considered “BYOL” (bring your own license), with the customer providing the license, and also performing all the DBA and sysadmin work – what Amazon do is provide the virtual hardware, and host it for you in their cloud. Once you’re there though there are other options around the database – a few weeks ago I covered Amazon Redshift and EnterpriseDB as two alternatives to the Oracle database for cloud-based systems – and pricing for the Amazon AWS service itself is rock-bottom, making it quite an interesting option for customers that don’t have a big IT department, or big-company departments looking to spin-up sandbox or short-term development servers.
So Oracle’s announcements on Tuesday were very interesting – what they’re basically planning to offer is a direct competitor to Amazon AWS albeit Oracle-centric, so that they will in future offer infrastructure-as-a-service (basic compute, storage and networking services), alongside their existing platform-as-a-service; and, as we’ll see in a moment, they’ll also be offering applications such as OBIEE and their Fusion ERP stack as turnkey cloud applications with credit card sign-ups and consumer-level interfaces – this is Oracle moving full-scale into the cloud.
As well as this infrastructure-as-a-service offering, their platform-as-a-service database and Java offerings will be expanded to include full database-as-a-service, and a full WebLogic cluster-as-a-service, complementing what’s now being referred to as schema-as-a-service and java-as-a-service. The full database-as-a-service (DBaaS) will provide a full, licensed Oracle instance (via the new pluggable database feature in 12cR1), your own listener, SQL*Net access and therefore the ability to ETL into it, but with backups and “tuning” covered in the background by Oracle’s staff.
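To make the difference from schema-as-a-service concrete: with your own listener and SQL*Net access, a conventional ETL-style load can run against the cloud instance just as it would on-premise. Here’s a hypothetical sketch, with all object names invented for illustration:

    -- Hypothetical sketch: with SQL*Net access to DBaaS, standard SQL loads
    -- work as they would on-premise. All table names here are invented examples.
    INSERT INTO dw.fact_sales (product_id, month_id, revenue)
    SELECT product_id, month_id, SUM(sale_amount)
    FROM   stg.sales_extract
    GROUP  BY product_id, month_id;

    COMMIT;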
Similarly, the updated WebLogic cloud service will support full WLST access, JMX interfaces and so on, arranged in a WebLogic cluster for high-availability and failover. So this setup will be conceptually similar to Amazon AWS, but run using Oracle software and with platform services designed around the needs of database, and Java application server, software.
No details on pricing and release dates were made in the session, but given last year’s releases I’d expect these to be available late summer next year.
2) OBIEE in the Cloud
So following on from the cloud keynote, the OBIEE roadmap session built on this announcement to provide details on their “reporting-as-a-service” offering, based on OBIEE but with a simplified, more consumer-style interface. What this seems to be offering is full OBIEE but with new, web-based tools for building the RPD, and with previews of new capabilities such as data mashups, faster previews of new visualisations, and a new cloud-style look and feel to match the rest of Oracle’s new cloud products.
Importantly though, reporting-as-a-service will come with a number of initial restrictions aimed at making the product as self-service and self-provisioning as possible; it’s likely that in the first iteration it’ll only connect to the database schema-as-a-service offering, meaning that the only way you’ll get data in is via ApEx and spreadsheet uploads or, more likely, through Java applications you also build in Oracle’s platform-as-a-service cloud. Over time, presumably, it’ll connect to the full DBaaS service, making it possible to ETL data in, but for now it’s more aimed at departmental solutions and sandbox applications – but it’s a very interesting taste of the future.
3) ODI, and Moving to it from OWB
Moving on to today, Rittman Mead’s Stewart Bryson, along with Sumit’s Holger Friedrich, presented alongside Oracle on ODI 12c, with the session I attended being all about making the move to ODI from Oracle Warehouse Builder. ODI 12c is still in beta and we’re on the beta program, so I’ll have to be a bit circumspect with what I say on the blog, but in terms of the planned migration path from OWB, the plan from Oracle is that there’ll be three stages to a typical customer migration (if they want to migrate, that is):
1. From the first release of ODI12c, it’ll be possible to execute and monitor OWB jobs from within ODI, giving you a central place to run and control all of your Oracle ETL jobs
2. Then, shortly afterwards, there’ll be a command-line utility for migrating OWB project objects to their ODI equivalents, covering most object types but not, for example, process flows and data quality projects
3. And then for all new ETL work, you’d use ODI 12c.
OWB itself will continue to be supported for many years, but the most recent release will be the final one. Look out for more details on the blog once ODI 12c goes GA; as one of the beta testers for the tool itself, and the migration utility, we’ll have a lot of advice and experiences to share once we’re allowed to talk about it publicly.
So that’s it for now – check back tomorrow for the last post in this series, where I’ll recap on the week.
Monday’s almost over at Oracle Openworld 2013 in San Francisco, and it started for me today with a presentation on Enterprise Manager 12c and the BI Management Pack, alongside Henrik Blixt (PM for the BI Management Pack) and Dhananjay Papde, author of a book on EM12c. I’ve covered EM12c and the BI Management Pack extensions quite a bit on the blog over the past few months so it was good to exchange a few ideas and observations with Henrik, and it was also good to meet Dhananjay, who’s been working with EM for a long time and has particularly specialized in the configuration management, and SLA-monitoring parts of the app.
Similarly, I finished-up the day with another joint session this time on TimesTen for Exalytics, with Peak Indicators’ Tony Heljula and Chris Jenkins, one of the TimesTen development PMs. As with all these sessions, it’s the audience interaction that makes them interesting, and we had a number of excellent questions, particularly at the TimesTen one given the very interesting product announcements during the day – more on which in a moment.
Before I get onto those though, here’s the links to today’s RM presentation downloads, with presentations today given by myself, Jérôme Françoisse (with Gurcan Orhan) and Venkat:
- Make the Most of your Exalytics and BI Investments with Enterprise Manager 12c (Mark Rittman, Henrik Blixt, Dhananjay Papde)
- Birds of a Feather Session: Best Practices for TimesTen on Exalytics (Mark Rittman, Chris Jenkins, Tony Heljula)
- Oracle BI EE Integration with Hyperion Data Sources (Venkatakrishnan J)
- Oracle Data Quality Solutions, Oracle Data Integrator and Oracle GoldenGate on Exadata (Jérôme Françoisse and Gurcan Orhan)
So, onto the product roadmap sessions:
1) Oracle Exalytics In-Memory Machine
The first set of announcements was around Oracle Exalytics In-Memory Machine, which started off as a Sun x86_64 server with 1TB RAM and 40 CPU cores, then recently went to 2TB and SSD disks, and now is available in a new configuration called Oracle Exalytics T5-8. This new version comes with 4TB RAM and is based on Sun SPARC T5 processors with, in this configuration, a total of 128 CPU cores, and is aimed at the BI consolidation market – customers who want to consolidate several BI applications, or BI environments, onto a single server – priced in this case around the $350k mark excluding the software.
What’s also interesting is that the T5-8 will use Sun Solaris on SPARC as the OS, giving it access to Solaris’ virtualisation and resource isolation technologies, again positioning it as a consolidation play rather than a host for a single, huge BI application. Given the price I can’t quite see us getting one yet, but it’s an obvious upgrade path from the X2-4 and X3-4 servers, and something you’d want to seriously consider if you’re looking at setting up a “private cloud”-type server infrastructure.
The Exalytics roadmap session also previewed other potential upcoming features for OBIEE, I would imagine earmarked for Exalytics given some of the computation work that’d need to go into the background to support them, including:
- A “Google Search” / “Siri”-type feature called BI Ask, that presented the user with a Google-style search box into which you could type phrases such as “revenue for May 2010 for Widget A”, with the feature then dynamically throwing up a table or graph based on what you’ve requested. Rather than attempting natural language parsing, BI Ask appears to work with a structured dictionary of words based on objects in the Presentation Services catalog, with choices available to the user (for example, lists of measures or dimensions) appearing under the search box in a similar manner to the “Google Suggest” feature. Although the demo was done using a desktop browser, where I think this could be particularly useful is in a mobile context, especially given most browsers’ and mobile platforms’ in-built ability to receive speech input and automatically pass it as text to the calling application. If you can imagine Siri for mobile analytics, with you holding your iPhone up and saying to it “revenue for southern region for past three months, compared to last year”, and a graph of revenue over this period automatically appearing on your iPhone screen – that’s what I think BI Ask is trying to get towards.
- A user-driven data mashup feature that allowed the user to browse to a spreadsheet file on their local desktop, upload it to the OBIEE server (maybe to an Oracle Cloud database?), and then automatically join it to the main corporate dataset so that they could add their own data to that provided by the BI system. Any setup like this clearly needs to differentiate between metrics and attributes uploaded by the user and the “gold standard” ones provided as part of the RPD and Presentation Services Catalog, but this is potentially a very useful feature for users who’d otherwise export their OBIEE data to Excel, and then do the data combining there.
- Probably more for “Exalytics v2”, but a totally revamped aggregate refresh and reload framework, probably based around DAC technology, that would leverage the DAC’s own data loading capabilities and tools such as GoldenGate to perform incremental refreshes of the Exalytics adaptive in-memory cache. No specific details yet but it’s pretty obvious how this could improve over the current Exalytics v1 setup.
2) Oracle TimesTen for Exalytics
Yesterday of course had the big announcement about the new In-Memory Option for Oracle Database 12c, and this of course then led to the obvious question – what about TimesTen, which up until now was Oracle’s in-memory database – and what about Exalytics, where TimesTen was the central in-memory part of the core proposition? And so – given that I was on the TimesTen Birds of a Feather Panel this evening and no doubt would need to field exactly those questions, I was obviously quite keen to get along to one of the TimesTen roadmap sessions earlier in the day to hear Oracle’s story around this.
And – it actually does make sense. What’s happening is this:
- TimesTen’s development team has now been brought under the same management as Oracle Database 12c’s In-Memory option, with (over time) the same core libraries, and same performance features
- TimesTen will get the ability to store its tables (which are already held in memory) in columnar format as well as the existing row format – the difference being that, unlike the Oracle in-memory feature, this is not done through on-the-fly data replication: a table is stored either row-store or column-store, something you decide when you create the table, and the only thing disk is used for is checkpointing and data persistence between reboots (see the sketch after this list)
- TimesTen will also gain the ability to be set up as a grid of servers that provide a single database instance – a bit like RAC and its single instance/cache fusion, and with support for replication so that you can copy data across the nodes to protect against machine failure. Currently you can link TimesTen servers together, but each one is its own database instance, and you’d typically do this for high-availability and failover rather than to create one large database. What this grid setup also gives us, though, is the ability to do parallel query – Oracle didn’t say whether this would be one slave per grid node, or whether it’d support more than one slave per node, but coupled with the in-memory column-store feature, presumably this is going to mean bigger TimesTen databases and a lot faster queries (and it’s fast already).
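To make that create-time storage decision concrete, here’s a purely speculative sketch – the COLUMNAR clause is my own invention to illustrate the idea, not announced TimesTen syntax:

    -- Purely speculative sketch: the COLUMNAR clause is hypothetical, invented
    -- here to illustrate that the storage format would be fixed per table at
    -- CREATE TABLE time, rather than populated on-the-fly as in the Oracle
    -- Database in-memory option.
    CREATE TABLE sales_agg (
      prod_type  VARCHAR2(30) NOT NULL,
      month_id   NUMBER(6)    NOT NULL,
      revenue    NUMBER(15,2)
    ) COLUMNAR;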
So what about the positioning of TimesTen vs. Oracle Database In-Memory Option – does one replace the other, or do you use the two together? Oracle’s ideas on this were as follows:
- Clearly the in-memory Oracle Database option is going to be a great query accelerator for large-scale data warehouses, but there’s still a lot of value in having a mid-tier in-memory data layer that’s under the control of the BI system owner, rather than the DBAs. You’ll have control over the data model, you can implement it quicker than you’d be able to upgrade the whole data warehouse database, and it’s physically co-located closer to the BI Server, so you’ll have less of an issue with network latency.
- TimesTen’s in-memory columnar storage technology will be based on a similar approach to that being taken by the database, and developed by the same overall team. But TimesTen will most probably have shorter development cycles, so new features might appear in TimesTen first, and it’s also lower risk for customers to test out new in-memory approaches in TimesTen rather than reconfiguring the whole warehouse to try out a new approach
And I think this makes sense. Of course, until we actually get hold of the two products and test them out, and see how the pace of development works out over time, we’re not going to fully know which product to deploy where – and of course pricing and packaging has yet to be announced; for example, I’d strongly predict that columnar storage for TimesTen will be an Exalytics-only feature, whilst the In-Memory Option for the database might be priced more like RAC than Partitioning, or even packaged up with Partitioning and OLAP as some sort of “data warehousing option”. We’ll just have to wait and see.
3) Oracle Essbase
The Essbase roadmap was the last session I managed to attend today, and again there were some pretty exciting new features announced or trailed (and it was made clear that the new features at the end of this list were more at the planning or conceptual stage at the moment, and may well not make it into the product). Anyway, here’s what was talked about in the session, for BI and Exalytics-type use cases:
- Improved MDX query creation when working with the BI Server, including support for sub-selects – something that might help to reduce the number of separate MDX queries that OBIEE has to generate to work out all the subtotals required for hierarchical column queries
- Improvements to the MDX AGGREGATE function and a revamped cube spin-off feature for OBIEE, including a prototype new web-based MOLAP Acceleration Wizard for auto-generating Essbase cubes for OBIEE aggregate persistence
- A new Cube Deployment Service private API, that’s used by the MOLAP Aggregation Wizard (amongst others) to generate and deploy an Essbase cube within a cloud-type environment
- A “renegade member” feature used for collecting all the data load records for members that can’t be located – aimed at avoiding the situation where totals in an Essbase cube don’t match the totals in the source system, because records got dropped during the data load
- Very speculatively – a potential hybrid BSO/ASO storage mode, combining BSO’s calculation capabilities with ASO’s dynamic aggregation.
So – lots of potential new features and a peek into what could be in the roadmap for three key OBIEE and Exalytics technologies. More tomorrow as we get to attend roadmap sessions for OBIEE in the Cloud, and ODI 12c.
The Sunday before Oracle Openworld proper is “User Group Forum Sunday”, with each of the major user groups and councils having dedicated tracks for topics such as BI&DW, Fusion Development, Database and so on. Stewart, myself and Venkat were honoured to be presenting on behalf of ODTUG, IOUG and APAC covering topics around the new 11g release of the BI Applications, Hyperion/EPM Suite, and agile development using OBIEE, ODI and Golden Gate. Links to the presentation PDFs from each of our sessions are listed below:
- Deep Dive into OBIA 11.1.1.7.1 – Overview (Mark Rittman)
- Deep Dive into OBIA 11.1.1.7.1 – Data Integration (Stewart Bryson)
- Agile BI Development using OBIEE, ODI and Golden Gate (Stewart Bryson)
- Hyperion Profitability & Cost Management – Integration of Standard & Detailed Profitability (Venkatakrishnan J)
All of the sessions drew a good crowd, and I was especially pleased to see the number of people that came along to the BI Apps 11.1.1.7.1 sessions, and that there were a few early adopters in the audience who’d either completed their initial implementations, or had carried out pilot or PoC exercises. Feedback from those attendees was as I’d expected – some initial early-adopter issues, but generally positive feedback on the simplified architecture and the use of ODI. Stewart’s session on the data integration aspects of this new release included content on its new GoldenGate integration, and the new Source Dependent Data Store concept in ODI that it supports, which again went down well with an audience looking for more technical details on how this new release works.
After the user group sessions finished, it was time to go over to Moscone North for Larry Ellison’s opening keynote, where three new products were announced. First up was the new In-Memory option for the Oracle Database, which adds an in-memory, column-store capability to Oracle databases on all platforms, not just Exadata. Aimed at the upcoming 12.1.2 release, this new feature will provide a column-store capability alongside the existing on-disk row-store, with the column-store being used for DW-style queries whilst the row-store will continue to be used for OLTP.
The way this in-memory column store will work, is as follows:
- The DBA will enable the in-memory feature by setting the database parameter “inmemory_size = XX GB”, with the memory then being allocated in the database’s SGA (System Global Area, one of the shared areas in the overall database memory allocation)
- Tables, partitions or sets of columns will be enabled for in-memory storage by an “alter table … in memory” DDL command
- Existing query indexes on the source tables can then be dropped
The database will then take care of copying these tables, columns or partitions into the in-memory column-store area, and then refreshing those tables on a regular basis, so that the database will have both row-store, and column-store versions of the tables available at the same time.
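As a rough illustration, the three steps above would look something like the following – note that the feature hadn’t shipped at the time of writing, so the parameter and DDL syntax here are assumptions based on what was shown in the keynote, and may differ in the final release:

    -- Illustrative sketch only: names are assumptions based on the keynote
    -- description of the 12.1.2 in-memory option, not final shipped syntax.
    ALTER SYSTEM SET inmemory_size = 32G SCOPE=SPFILE;      -- allocate the column store within the SGA (restart required)

    ALTER TABLE sales INMEMORY;                             -- enable a whole table for the in-memory column store
    ALTER TABLE sales MODIFY PARTITION p_2010 NO INMEMORY;  -- or exclude cold partitions from it

    DROP INDEX sales_prod_type_ix;                          -- analytic query indexes can then be dropped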
Oracle’s assertion is that the overhead in maintaining both the row- and column-store versions of the tables will be balanced out by the removal of the need to maintain query indexes on the source tables, and performance improvements of 100x to 1000x were quoted for DW-style queries, and 2x for OLTP-style queries. Unlike the Hybrid Columnar Compression feature announced a couple of years ago at Openworld, none of this is Exadata-specific, but it will be an option for the Enterprise Edition of the database, and it will require the 12.1.2 release, so you’ll need to budget for it and you’ll need to be on the most recent release to make use of it.
Other than the in-memory option, the other two product announcements in the keynote were:
- The M6-32 “Big Memory Machine”, with 32TB of DRAM and a SPARC M6 chip architecture – positioned as the ideal server for the in-memory option
- The “Oracle Database Backup, Logging and Recovery” appliance, a server designed to receive and then store incremental database backups for private and public clouds, and then restore those databases as necessary – basically a backup server optimised for database backup and recovery.
So that was it for today – more news tomorrow once the main conference sessions and keynotes start.
It’s the Saturday afternoon before the start of Oracle Openworld 2013, and whilst I’ve been here since last Wednesday, my colleagues from Rittman Mead in Europe, the USA and India are arriving today and tomorrow, ready for their sessions later in the week. Myself and Stewart Bryson are taking part in User Group Sunday tomorrow, presenting on BI Apps 11.1.1.7.1 for ODTUG and IOUG, whilst Jon Mead, Jérôme Françoisse, Venkat and James Knight are presenting later in the week on TimesTen, ODI 12c and Oracle EPM Suite. Full details on where to find us can be found on our Rittman Mead at Oracle Openworld 2013 website page.
As I mentioned earlier, I’ve been in San Francisco since Wednesday, at Oracle’s headquarters in Redwood Shores for the annual Oracle ACE Director product briefing. Over two days, Oracle brief members of the ACE Director program on the major announcements to be made at Openworld, plus what’s happened in product development over the past twelve months, all under NDA until the various announcements are made during the week.
As most of us on the ACE Director program know each other from conferences and events around the world, it’s a good opportunity to catch-up with other Oracle “experts” and share ideas and product experiences, and this year we were lucky to get some excellent late-summer weather – which has now finished in time for Openworld proper, with rain today in SF just in time for the America’s Cup races….
As I said, the various announcements due for Openworld are under NDA until at least Larry’s keynote on Sunday – but one planned announcement that’s fairly common knowledge (if only because of the posters down at the Moscone) is the one around the new in-memory option for the Oracle Database.
The main details of this new feature will be announced by Larry Ellison tomorrow evening, and check back here after the keynote for more details – also keep an eye out for announcements around TimesTen, Exalytics and Oracle’s other “in-memory” products later in the week, which we’ll again be covering as they’re made.
Other than that – hopefully we’ll see some of you in San Francisco this week, and be sure to check back on the blog during the week as we cover what’s new, and what’s coming, as announced at Oracle Openworld 2013.
A few weeks ago I posted an article on the blog about the new Oracle BI Mobile App Designer, an HTML5-based mobile app builder for OBIEE 11.1.1.7.1. BI Mobile App Designer differs from Oracle BI Mobile HD in that:
- It’s based on HTML5, so it works on most modern mobile devices, including Android, iOS, BlackBerry and Windows Phone
- It uses “responsive design”, so apps built using it work on phones, tablets and laptops
- The technology behind it is based on BI Publisher 11g, so it’s easy to use and builds on skills you already have
- As it’s 100% thin-client, you won’t get caught up in mobile device security issues, or need to involve companies such as Good Technology
We’re particularly excited about BI Mobile App Designer and its possibilities for LOB (“Line of Business”) mobile BI apps – creating focused, workflow-based analytic apps for a particular department, with the UI focused on doing a particular job. In the screenshots below you can see a couple of examples of user interfaces very much customised to a particular role or reporting scenario, and users can easily access the apps you create using the new “Apps Library”.
Whilst you could give BI Mobile App Designer a go yourself, you’re probably going to get the most success by working alongside a partner who’s got experience with the product, understands the issues around deploying mobile apps, and has a delivery approach that emphasises quick wins and working collaboratively with users. To this end, we’ve put together a special “Quickstart for Oracle BI Mobile App Designer” package, that delivers a working LOB mobile analytic app for you within a week, using this new feature – and leaves you with a roadmap and development plan so you can extend it yourself, afterwards.
Over this five-day, fixed-price and fixed-scope engagement, we will:
- Work with you collaboratively to identify the LOB use-case
- Review your current OBIEE 11g installation, and if necessary work with you to install any required updates to enable the Mobile App Designer feature
- Identify with you the existing analyses and reports to include in the app, and the app structure and navigation menu
- Develop the first iteration of the app, and then review it with the proposed users
- Create the final iteration, including any imagery and corporate design, links to other content etc, and then deploy in the Mobile App “Appstore” within Oracle Business Intelligence
- Work with your team to deploy the app to end-users, and provide hand-over so that they can continue development after the engagement.
Interested? Full details on this package are on the Quickstart for Oracle BI Mobile App Designer page on our website, and also on this data sheet. Get in touch now if you’d like to take advantage of this offer – and have mobile analytics delivered to your workforce in just a week!
If your requirement is to install EAL for HFM, then this post is for you. Oracle has not yet released a new version of EAL that is certified with HFM 11.1.2.3, and hence you will not be able to find one on eDelivery. The question, then, is what version of EAL one can use with EPM 11.1.2.3. A few days ago, MOS published Doc ID 1570187.1, which said we could use the EAL 11.1.2.2.301 PSU for this. This PSU is a full installation, which is available for download from the MOS site. Also, any previous installation of EAL must be uninstalled before installing this release.
The pre-requisites from the documentation include the following (these are applicable only if you have an existing EAL instance, and not for fresh installations):
- Clear the existing Analytics Link repository (this is a pain, since you need to redefine the connections, regions, bridges etc.)
- Unregister previous instance of Analytics Link Server from Foundation Services
- Reset Data Sync Server
This post will demonstrate a fresh installation on a Windows 2008 64-bit server. Also, this post will not give you a step-by-step approach but will highlight the key steps in the installation.
Unlike other EPM products, EAL cannot use the EPM System Installer for installation. To install EAL 11.1.2.2.301, you should use the 32-bit version of Oracle Universal Installer 11.2, even if the installation is going to be on a 64-bit machine; the OUI will install the Analytics Link version that matches the bitness of the operating system. The OUI comes with the EAL download, so no separate download is required. Alternatively, you can use the OUI that comes with the Oracle database, if you have one installed on the same server.
Run the OUI installer; select the installation type and destination folder.
To let OUI know which product we’re going to install, we need to browse to the path where products.xml exists under the unzipped EAL download.
Specify a path where you want to install EAL and kick off the installation. Make sure the installer displays version 11.1.2.2.301 during the install.
Once the installation is finished, the configuration tool starts up. You must enter the WebLogic server details and Analytics Link repository details when prompted.
The Doc ID mentioned earlier also suggests not using the default EPM Instance Home location at the Foundation Configuration step.
Give a suitable username and password for the Data Sync Server, and use an account that is an Administrator to configure the Analytics Link services. Unfortunately, the configuration tool doesn’t show the progress of the configuration, so you’ll have to wait until you see the ‘success’ message on the window. As specified earlier, this is a full installation and cannot be rolled back.
Verifying the installation:
After a successful installation, to verify that EAL is able to connect to HFM, we have to log on to Essbase Administration Services Console (use the client installers to install EAS Console) and import the EAL plug-in, as shown in the screenshots below. Go to Tools > Configure Components and click Add to import the EAL plug-in.
Navigate to the directory where the Analytics Link server is installed (HFS_HOME, which is C:\EAL_Home in our case) and import the jar file eas-plugin_wl.jar.
After the import is finished, you may need to exit and restart EAS Console to see the ‘Analytics Link Servers’ node.
Add a new Analytics Link Server by specifying the username and password that were given at the time of configuration.
Now that we have successfully imported the EAL plug-in and added our First Analytics Link server, to verify the HFM connectivity we need to define which HFM application EAL should connect to, and to which Essbase database it should write the outline and data based on that HFM application. Basically, we should configure all the objects under the ‘First’ Analytics Link Server.
HFM Server and Application:
Essbase Application and Database:
After you define the Data Sync Server and Data Store, create a bridge that acts as the link between the HFM application and the Essbase database, refreshing the outline and data.
Open the bridge, create a bridge application, and check that the outline is created.
Now we can conclude that there are no configuration-related issues, since we’re able to refresh the metadata to Essbase without any problems. I hope this gives a good walkthrough of installing and configuring Essbase Analytics Link for HFM.