Rittman Mead Consulting

Rittman Mead consults, trains, and innovates within the world of Oracle Business Intelligence, data integration, and analytics.

The Case for ETL in the Cloud - CAPEX vs OPEX

Thu, 2017-04-27 11:12

Recently Oracle announced a new cloud service for Oracle Data Integrator. Because I was helping our sales team with estimates and statements of work, I was already thinking about costs, ROI, use cases, and the questions behind a decision to move to the cloud. I want to explore the business case for using or switching to ODICS.

Oracle Data Integration Cloud Services

First, let me briefly cover what Oracle Data Integration Cloud Services is. ODICS is ODI version 12.2.1.2 available on Oracle’s Java Cloud Service, known as JCS. Several posts cover the implementation, migration, and technical aspects of using ODI in the cloud. Instead of covering the ‘how’, I want to talk about the ‘when’ and ‘why’.

Use Cases

What use cases are there for ODICS?
1. You have or soon plan to have your data warehouse in Oracle’s Cloud. In this situation, you can now have your ODI J2EE agent in the same cloud network, removing network hops and improving performance.
2. If you currently have an ODI license on-premises, you are allowed to install that license on Oracle’s JCS at the JCS prices. See here for more information about installing on JCS. These use cases are described in a webinar posted in the PM Webcast Archive.

When and Why?

So when would it make sense to move towards using ODICS? These are the scenarios I imagine being the most likely:
1. A new customer or project. If a business doesn’t already have ODI, this allows them to decide between an all-on-premises solution and a complete solution in Oracle’s cloud. With monthly and metered costs, the standard large start-up costs for hardware and licenses are avoided, making this solution available to more small and medium businesses.
2. An existing business with ODI already and considering moving their DW to the cloud. In this scenario, a possible solution would be to move the current license of ODI to JCS and begin using that to move data, all while tracking JCS costs. When the time comes to review licensing obligations for ODI, compare the calculation for a license to the calculation of expected usage for ODICS and see which one makes the most sense (cents?). For a more detailed explanation of this point, let’s talk CAPEX and OPEX!

CAPEX vs. OPEX

CAPEX and OPEX are short for Capital Expenditure and Operational Expenditure, respectively. From a finance and budgeting perspective, these two show up very differently on financial reports, which often has tax implications for businesses. Traditionally, a data warehouse project was a very large initial capital expenditure covering hardware, licenses, and project costs, which landed it very solidly in CAPEX. Over the last several years, sponsorship for these projects has shifted from CIOs and IT Directors to CFOs and Business Directors. With this shift, many businesses would rather budget for these expenses monthly as an operating expense than incur large capital expenses every few years, putting these projects into OPEX instead.

Conclusion

Having monthly and metered service costs in the cloud that are fixed or predictable is appealing. As a bonus, this style of service is highly flexible and can scale up (or down) as demand changes. If you are, or will soon be, in the process of planning for your future business analytics needs, we provide expert services, assessments, accelerators, and executive consultations to assist with these kinds of decisions. When it is time to talk about actual numbers, your Oracle Sales Representative will have the best prices. Please get in touch for more information.

Categories: BI & Warehousing

SQL-on-Hadoop: Impala vs Drill

Wed, 2017-04-19 10:01

I recently wrote a blog post about Oracle's Analytic Views and how they can be used to provide a simple SQL interface to end users for data stored in a relational database. In today's post I'm expanding my horizons a little by looking at how to effectively query data in Hadoop using SQL. The SQL-on-Hadoop interface is key for many organizations - it allows querying the Big Data world using existing tools (like OBIEE, Tableau, DVD) and skills (SQL).

Analytic Views, together with Oracle's Big Data SQL, provide what we are looking for and have the benefit of unifying the data dictionary and the SQL dialect in use. It should be noted that Oracle Big Data SQL is licensed separately on top of the database and is available for Exadata machines only.

Nowadays there is a multitude of open-source projects covering the SQL-on-Hadoop space. In this post I'll look in detail at two of the most relevant: Cloudera Impala and Apache Drill. We'll see the details of each technology, define the similarities, and spot the differences. Finally, we'll show that Drill is best suited for data exploration with tools like Oracle Data Visualization or Tableau, while Impala fits the structured reporting area with tools like OBIEE.

As we'll see later, both tools are inspired by Dremel, described in a paper published by Google in 2010 that defines a scalable, interactive ad-hoc query system for the analysis of read-only nested data and that forms the basis of Google's BigQuery. Dremel defines two aspects of big data analytics:

  • A columnar storage format representation for nested data
  • A query engine

The first point inspired Apache Parquet, the columnar storage format available in Hadoop. The second point provides the basis for both Impala and Drill.

Cloudera Impala

We started blogging about Impala a while ago, as soon as it was officially supported by OBIEE, testing it for reporting on top of big data Hadoop platforms. However, we never went into the details of the tool, which is the purpose of the current post.

Impala is an open source project inspired by Google's Dremel and one of the massively parallel processing (MPP) SQL engines running natively on Hadoop. As per Cloudera's definition, it is a tool that:

provides high-performance, low-latency SQL queries on data stored in popular Apache Hadoop file formats.

Two important bits to notice:

  • High-performance and low-latency SQL queries: Impala was created to overcome the slowness of Hive, which relied on MapReduce jobs to execute queries. Impala uses its own set of daemons running on each of the datanodes, saving time by:
    • Avoiding the MapReduce job startup latency
    • Compiling the query code for optimal performance
    • Streaming intermediate results in memory, while MapReduce always writes to disk
    • Starting the aggregation as soon as the first fragment starts returning results
    • Caching metadata definitions
    • Gathering table and column statistics
  • Data stored in popular Apache Hadoop file formats: Impala uses the Hive metastore database. Databases and tables are shared between both components. The list of supported file formats includes Parquet, Avro, simple Text and SequenceFile amongst others. Choosing the right file format and compression codec can have an enormous impact on performance. Since CDH 5.8 / Impala 2.6, Impala also supports the Amazon S3 filesystem for both reading and writing operations.

One of the performance improvements relates to "streaming intermediate results": Impala works in memory as much as possible, writing to disk only if the data size is too big to fit in memory; as we'll see later this is called optimistic and pipelined query execution. This has immediate benefits compared to standard MapReduce jobs, which for reliability reasons always write intermediate results to disk.
As per this Cloudera blog, using Impala in combination with the Parquet data format achieves the performance benefits explained in the Dremel paper.

Impala Query Process

Impala runs a daemon, called impalad, on each Datanode (a node storing data in the Hadoop cluster). A query can be submitted to any daemon in the cluster, which will act as the coordinator node for that query. Impala daemons are always connected to the statestore, a process keeping a central inventory of all available daemons and their health, which pushes that information back to all daemons. A third component, called the catalog service, checks for metadata changes driven by Impala SQL in order to invalidate the related cache entries. Metadata is cached in Impala for performance reasons: accessing metadata from the cache is much faster than checking against the Hive metastore. The catalog service process is in charge of keeping Impala's metadata cache in sync with the Hive metastore.

Once the query is received, the coordinator verifies that the query is valid against the Hive metastore, then retrieves information about data location from the Namenode (the node in charge of storing the list of blocks and their location on the datanodes), fragments the query, and distributes the fragments to other impalad daemons to execute it. All the daemons read the needed data blocks, process the query, and stream partial results to the coordinator (avoiding the write to disk), which collects all the results and delivers them back to the requester. The result is returned as soon as it's available: certain SQL operations like aggregations or ORDER BY require all the input to be available before Impala can return the end result, while others, like a select of pre-existing columns without an ORDER BY, can be returned with only partial results.
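To make the flow concrete, here is a minimal sketch of submitting a query to a coordinator daemon from Python. It assumes the impyla client library and an impalad reachable on the default port 21050; the host and table names below are placeholders, not part of any specific cluster.

# Any impalad in the cluster can act as the coordinator for the session
from impala.dbapi import connect

conn = connect(host='impalad-node1.example.com', port=21050)  # hypothetical host
cur = conn.cursor()

# The daemon we connected to plans the query, distributes the fragments to the
# other impalads and streams the partial results back as they become available
cur.execute("SELECT brand, SUM(revenue) AS revenue FROM sales_parquet GROUP BY brand")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()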

Apache Drill

Defining Apache Drill as SQL-on-Hadoop is limiting: also inspired by Google's Dremel, it is a distributed, datasource-agnostic query engine. The datasource-agnostic part is very relevant: Drill is not closely coupled with Hadoop; in fact it can query a variety of sources like MongoDB, Azure Blob Storage, or Google Cloud Storage amongst others.

One of the most important features is that data can be queried schema-free: there is no need to define the data structure or schema upfront - users can simply point the query at a file directory, MongoDB collection or Amazon S3 bucket and Drill will take care of the rest. For more details, check our overview of the tool. One of Apache Drill's objectives is cutting down the data modeling and transformation effort, providing zero-day analysis as explained in this MapR video.
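As a rough illustration of this schema-free access, the sketch below submits a query against a raw JSON file through Drill's REST interface (exposed by default on port 8047 as /query.json). The host, file path and dfs storage plugin usage are assumptions for illustration only.

import requests

drillbit = "http://drillbit-node1.example.com:8047"  # hypothetical Drillbit host

payload = {
    "queryType": "SQL",
    # Point the query directly at a JSON file - no schema or metastore definition needed
    "query": "SELECT t.customer, SUM(t.amount) AS total "
             "FROM dfs.`/data/orders/orders.json` t GROUP BY t.customer"
}

# The Drillbit that receives the request acts as Foreman for the query
resp = requests.post(drillbit + "/query.json", json=payload)
print(resp.json())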

Drill is designed for high performance on large datasets, with the following core components:

  • Distributed engine: Drill processes, called Drillbits, can be installed on many nodes and are the execution engine of the query. Nodes can be added or removed manually to adjust performance. Queries can be sent to any Drillbit in the cluster, which will act as Foreman for the query.
  • Columnar execution: Drill is optimized for columnar storage (e.g. Parquet) and execution, using a hierarchical and columnar in-memory data model.
  • Vectorization: Drill takes advantage of modern CPU design - operating on record batches rather than iterating over single values.
  • Runtime compilation: compiled code is faster than interpreted code and is generated ad-hoc for each query.
  • Optimistic and pipelined query execution: Drill assumes that none of the processes will fail and thus performs all pipeline operations in memory rather than writing to disk, spilling to disk only when memory isn't sufficient.

Drill Query Process

Like Impala's impalad, Drill's main component is the Drillbit: a process running on each active Drill node that is capable of coordinating, planning, executing and distributing queries. Installing a Drillbit on all of Hadoop's data nodes is not compulsory; doing so, however, gives Drill the ability to achieve data locality: executing queries where the data resides, without the need to move it over the network.

When a query is submitted against Drill, a client/application sends a SQL statement to a Drillbit in the cluster (any Drillbit can be chosen), which acts as the Foreman (the coordinator in Impala terminology) and parses the SQL, converting it into a logical plan composed of operators. The next step is the cost-based optimizer which, based on rule- and cost-based optimizations, data locality and storage engine options, rearranges operations to generate the optimal physical plan. The Foreman then divides the physical plan into phases, called fragments, which are organised in a tree and executed in parallel against the data sources. The results are then sent back to the client/application. The following image taken from drill.apache.org explains the full process:

[Image: Drill query execution flow, from drill.apache.org]

Similarities and Differences

As we saw above, Drill and Impala have a similar structure - both take advantage of always-on daemons (faster compared to starting a MapReduce job) and assume an optimistic query execution model, keeping intermediate results in memory. Code compilation and the distributed engine are also common to both, and both are optimized for columnar storage formats like Parquet.

There are, however, several differences. Impala works only on top of the Hive metastore, while Drill supports a larger variety of data sources and can link them together on the fly in the same query. For example, implicitly schema-defined files like JSON and XML, which are not supported natively by Impala, can be read immediately by Drill.
Drill usually doesn't require a metadata definition done upfront, while for Impala a view or external table has to be declared before querying. It follows that there is no concept of a central and persistent metastore for Drill, and no metadata repository to manage. In OBIEE's world, both Impala and Drill are supported data sources. The same applies to Data Visualization Desktop.
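The practical consequence of that difference is an extra metadata step on the Impala side. A minimal sketch of it, again via impyla and with placeholder table and HDFS path names, might look like this:

from impala.dbapi import connect

cur = connect(host='impalad-node1.example.com', port=21050).cursor()

# Impala needs the table declared in the Hive metastore before it can be queried;
# Drill would simply point the query at the files themselves
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS sales_ext (
        prod_key INT,
        revenue  DOUBLE
    )
    STORED AS PARQUET
    LOCATION '/data/sales/'
""")

cur.execute("SELECT COUNT(*) FROM sales_ext")
print(cur.fetchone())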

The aim of this article isn't a performance comparison, since performance depends on a huge number of factors including data types, file formats, configurations, and query types. A comparison from 2015 can be found here. Please be aware that newer versions of both tools have been released since that comparison, bringing a lot of changes and improvements to both projects in terms of performance.

Conclusion

Impala and Drill share a similar structure - both inspired by Google's Dremel - relying on always-active daemons deployed on cluster nodes to provide the best query performance on top of Big Data structures. So which one should you choose, and when?
As described, the capability of Apache Drill to query a raw data source without requiring an upfront metadata definition makes the tool perfect for insight discovery on top of raw data. The ability to join data coming from one or more storage plugins in a single query makes the mash-up of disparate data sources easy and immediate. Data science and prototyping before the design of a reporting schema are perfect use cases for Drill. However, once the discovery phase is over, a metadata definition layer is usually added on top of the data sources, which makes Impala a good candidate for reporting queries.
Summarizing: if all the data points are already modeled in the Hive metastore, then Impala is your perfect choice. If instead you need a mashup with external sources, or need to work directly with raw data formats (e.g. JSON), then Drill's auto-exploration and openness capabilities are what you're looking for.
Even though both tools are fully compatible with OBIEE and Data Visualization (DV), due to Drill's data exploration nature it could be considered more in line with DV use cases, while Impala is more suitable for standard reporting with tools like OBIEE. The decision on tooling depends highly on the specific use case - source data types, file formats and configurations have a deep impact on the agility of the business analytics process and on query performance.

If you want to know more about Apache Drill, Impala and the use cases we have experienced, don't hesitate to contact us!

Categories: BI & Warehousing

OBIEE Component Status Notifications

Wed, 2017-04-05 09:00

At Rittman Mead, we often hear requests for features or solutions generally not provided by Oracle. These requests range from commentary functionality to custom javascript visualizations. There are many ways to implement these functionalities, but organizations often lack the in-house resources to engineer an acceptable solution.

Rittman Mead has the capability to engineer any solution desired, and in many cases, has already developed a solution. Many of our accelerators currently offered, such as Chit Chat or User Engagement, grew out of numerous requests for these features.

One of the more common requests we hear at Rittman Mead is for BI Administrators to receive notifications for the status of their OBIEE components. They want to be notified of the status of the system components throughout the day in a convenient manner, so any failures are recognized quickly.

This particular feature can easily be implemented with Rittman Mead's Performance Analytics service. However, Rittman Mead would like to publicly provide this capability independent of our accelerator. We have developed a small Python script to provide this functionality, and we would like to give this script to the community.

The provided script is available free of charge, and available under the MIT license. It has been tested on both OBIEE 11G and 12C environments, as well as on Windows and Linux operating systems. The rest of this blog will detail, at a high level, how the script works, and how to configure it correctly.

The script is available through our public Github repository here.

Script Output

First, let's clarify how we will gather the status of the components in the first place. Thankfully, OBIEE includes some scripts to display this information on both Linux and Windows. In 12C, the script is status.sh/status.cmd, and in 11G the primary command is opmnctl status.

When I execute this script on an OBIEE 12C OEL environment, I receive the following response:


The output includes some extra information we don't require, but we can ignore it for now. With some programming knowledge, we can trim what we don't need, organize it into a nice table, and then send the output to nearly anywhere desired. For portability and stability, I will use Python to organize the message contents and I will also use email as the channel of communication.

Sending the Output Through Email

If we are only concerned with notifying administrators of the current status, one of the better avenues to send this data is through email. An email destination will allow users to be able to receive the status of the components almost instantaneously, and be able to take the appropriate action as soon as possible.

Additionally, Python's standard set of modules includes functions to assist in sending SMTP messages, making the script even more portable and maintainable. The simplest method to generate the email is just to send the complete output as the body of the message.
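A minimal sketch of that simplest approach, using only Python's standard library (subprocess to capture the status output and smtplib to send it), might look like the following; the script path, addresses and SMTP details are placeholder assumptions rather than values from the published script.

import subprocess
import smtplib
from email.mime.text import MIMEText

# Capture the raw component status output (the path is an assumption - point it
# at your own status.sh / status.cmd or opmnctl)
status_cmd = "/u01/app/oracle/biee/user_projects/domains/bi/bitools/bin/status.sh"
output = subprocess.check_output([status_cmd], universal_newlines=True)

# Use the raw output as the plain-text body of the message
msg = MIMEText(output)
msg["Subject"] = "OBIEE component status"
msg["From"] = "obiee.monitor@example.com"
msg["To"] = "bi.admin@example.com"

# Send via an SMTP server (host, port and credentials are placeholders)
server = smtplib.SMTP("smtp.example.com", 587)
server.starttls()
server.login("obiee.monitor@example.com", "password")
server.sendmail(msg["From"], [msg["To"]], msg.as_string())
server.quit()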

While this works, it's not exactly attractive. With some Python and HTML/CSS skills, we can style the email to look much nicer:


Now we have something nice we can send to BI Administrators to show the status of the components.

Configuration and Use

To effectively utilize this script, you will have to change some of the configuration parameters, located at the top of the script. The parameters I am using are shown below (with sensitive information hidden, of course):

The sender and username fields should both be the user you are logging in as on the SMTP server to send the email. If you want the email address shown on a message to be different than the user configured on the SMTP server, then these can be set separately. The password field should be the password for the user being configured on the SMTP server.

The recipient field should be the address of the user who will be receiving the emails. For simple management, this should be a single user, who should then be configured to forward all incoming status emails to the responsible parties. This will allow easier maintenance, since modifying the list of users can take place in the email configuration, rather than the script configuration. In this example, I am just sending the emails to my Rittman Mead email address.

The SMTP settings should also be updated to reflect the SMTP server being utilized. If you are using Gmail as the SMTP server, then the configuration shown should work without modifications.

Finally, the Python script requires the absolute path to the status command that is executed to produce the output (the opmnctl or status commands). Environment variables may not be taken into consideration by this script, so it's best not to use a variable in this path.

NOTE: If the \ character is in the path, then you MUST use \\ instead. This is especially true on Windows environments. If this change is required but omitted, the script will not function properly.
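For example, a Windows path in the configuration would need to be written with doubled backslashes (the path below is purely illustrative):

# Hypothetical Windows example - note every \ in the path is escaped as \\
status_cmd = "C:\\oracle\\middleware\\user_projects\\domains\\bi\\bitools\\bin\\status.cmd"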

Additionally, if you don't care about the HTML output (or if it doesn't render nicely in your email client), then it can be disabled by setting the value of render_html to False. If, for some reason, the nice HTML fails to render, then the email will just send the plain text output, instead of failing to deliver an email at all.

Once configured, try executing the script:
python email_component_status.py

If everything worked correctly, then you should have received an email with the status of the components. If you do not receive an email, then you should check both the configuration settings, and the internet connection of the machine (firewalls included). The script will also generate output that should assist you in troubleshooting the cause of the problem.

Additional Notifications or Destinations

The solution provided, while useful, is not perfect. What if you want to send this information to a destination other than an email address, such as a ticketing system like Atlassian JIRA? Or what if you want notifications based on other situations, such as slow running reports, or high CPU usage?

There may be many situations in which you would want one, or several, employees to receive different notifications based on events or circumstances that occur in your OBIEE environment. The script in this blog post only provides one of these notifications, but implementing many more will quickly become burdensome.

As part of Rittman Mead's Performance Analytics offering, we include custom software and code to fulfill this requirement. In addition to providing dashboards to explore the performance of your OBIEE systems, Performance Analytics can be configured to distribute alerts, based on any quantifiable condition, to any number of external systems.

The full Performance Analytics suite can not only alert users of down system components, but of any number of conditions that may occur in your BI environment.

If you have questions about this script, Performance Analytics, or anything else, contact us here.

To find out more about Performance Analytics, contact us, visit the product page here, or read some of the fantastic blogs from Robin Moffatt.

Categories: BI & Warehousing

Oracle Analytics Cloud: Product Overview

Tue, 2017-04-04 10:00

We at Rittman Mead are always helping our customers solve their problems, and many times we have heard them:

  • being unsure about the sizing of their server
  • being worried about the upfront cost of the licensing
  • having recurring nightmares about patching
  • wanting to try the cloud but unable to find the right option to replace their on-premises system

This is their lucky day: Oracle officially launched Oracle Analytics Cloud (OAC), a new PaaS (Platform as a Service) providing a complete and elastic Business Intelligence platform in the cloud, customizable and managed by you but all on the Oracle Cloud!

Unless you've been on a remote island, you may have noticed that in recent years Oracle's main focus has been the Cloud. Several products have been launched covering a vast spectrum of functionality: Data Management, Application Development, Business Analytics and Security are only some of the areas covered by the Software/Platform/Infrastructure as a Service offering.

In the Business Analytics area, we at Rittman Mead started thinking a long time ago about how to host Oracle's on-premises BI (OBIEE) in the Cloud, and worked closely with Oracle since the beta phase of their first PaaS product: BI Cloud Service (BICS). We have effectively had our hands on all the cloud products in the BA family, like Big Data Discovery (both on-premises and cloud), Data Visualization Cloud Service, and Big Data Preparation Service.

Business Intelligence Cloud Products

Until a few weeks ago Oracle's main Business Analytics cloud products were BI Cloud Service (BICS) and Data Visualization Cloud Service (DVCS). As mentioned in our blog, both tools initially aimed at departmental use cases: the simplicity of the data model interface and the lack of admin configuration options stopped them from being a compelling story for hosting a full enterprise Business Intelligence solution.

New features like BICS Data Sync, Remote Data Connector and RPD lift-and-shift addressed almost all the limitations, but the lack of detailed admin/maintenance capabilities remained a stopper for moving complex environments to the cloud. Still, BICS and DVCS are perfect for their aim: business users analysing sets of data without needing to wait for IT to provision a server or worry about upfront licensing costs.

Oracle Analytics Cloud

Oracle Analytics Cloud raises the watermark in every direction by providing a product that is:

  • Complete: most of the tools, procedures and options provided on-premises are now available in OAC.
  • All-encompassing: OAC combines the offerings of BICS, DV, BIEE and Essbase, including the features of Oracle's top BI products.
  • Tailored in licensing: the many options available (discussed in a later post) can be chosen depending on analytical needs, timeframe of service, and required performance.
  • Easily scalable: do you want to expand your BI solution to double the users without losing performance? Just buy some more horsepower!
  • Fully accessible: an SSH connection to the server makes it easy to change settings as needed, and REST APIs and clients are provided for all lifecycle operations.
  • Customizable: images, networking, VPN - all settings are available.
  • Scriptable: operations like scaling, instance creation and deletion, start and stop can easily be scripted via the REST APIs.
  • Fully customer managed: Oracle provides the automation to back up and patch, but the customer decides when to run it.

What's The Difference?

So what's the difference between Oracle Analytics Cloud and the "old" DVCS and BICS? How is OAC going to change Oracle's BI offering in the cloud?

The big deal with OAC is control: the BICS/DVCS limiting factors around admin options and development are solved, providing a tool capable of hosting a full enterprise BI solution. Even though the platform is managed by Oracle, SSH access is provided, meaning that instance configurations can be changed. No more upfront server-sizing decisions: the size of the instance is decided at creation time and can be changed later if demand changes.

The REST APIs enable scripting of the full lifecycle of an instance, providing a way to automate the BI enterprise workflow even in complex environments where concurrent development is needed. Patching and backups are not a problem anymore with the automated processes provided.

Direct RPD online editing is available with the Admin tool. The old BICS Data Modeler is still there for simple models, but Admin Tool can be used in case of complex RPDs.

The front-end is like the BICS and OBIEE 12c one; some new visualizations have been added to Visual Analyzer, in line with the new additions to Data Visualization Desktop: Parallel Coordinates, Chord, Network and Sankey diagrams are now available.

A new console is now available in Visual Analyzer, allowing settings like Mail or Deliveries that previously were only accessible via the Weblogic Console, Enterprise Manager or config files.

Finally, Essbase is now available in the cloud too, with a new web interface!

Summarizing: if you wanted to go Cloud but were worried about missing options, Oracle Analytics Cloud now provides all you need to host a full Enterprise BI solution.

In the next few days I'll be analysing various aspects of Oracle Analytics Cloud Suite, so keep in touch!

If you need assistance in checking whether Oracle Analytics Cloud suits your needs, or in planning your migration to the cloud, don't hesitate to contact us.

Categories: BI & Warehousing

Metadata Modeling in the Database with Analytic Views

Mon, 2017-04-03 09:00

Oracle Database 12.2, the latest release, provides a whole set of new features enhancing various aspects of the product, including JSON support, Auto-List Partitioning and APEX news among others. One of the biggest pieces of news in the Data Warehousing / Analytics area was the introduction of Analytic Views, which as per Oracle's definition are:

Metadata objects that enable the user to quickly and easily create complex hierarchical and dimensional queries on data in database tables and views

tl;dr

If you are in a rush, here is an abstract of what you'll find in this looooong blog post:

Metadata modeling can now be done directly in the database using Analytic Views, providing end users with a way of querying database objects without needing knowledge of join conditions, aggregation functions or ORDER BY clauses.
This post will guide you through the creation of an analytic view that replicates part of OBIEE's Sampleapp business model. The last part of the post is dedicated to understanding the usage of analytic views and the benefits for end users, especially in cases where self-service BI tools are used.

If you are still around and interested in the topic please take a drink and sit comfortably, it will be a good journey.

Metadata Modeling

So what are Analytic Views in detail? How are they going to improve end users' ability to query data?

To answer the above questions, let me take a step back. Many readers of this blog are familiar with OBIEE and its core: the Repository. The repository contains the metadata model from the physical sources to the presentation areas and includes the definition of:

  • Joins between tables
  • Hierarchies for dimensions
  • Aggregation rules
  • Security settings
  • Data Filters
  • Data Sources

This allows end users to just pick columns from a Subject Area and display them in the appropriate way without needing to worry about writing SQL or knowing how the data is stored. Moreover, definitions are held centrally, providing the famous single source of truth across the entire enterprise.

The wave of self-service BI tools like Tableau or Oracle's Data Visualization Desktop put products capable of querying almost any kind of data source in a visual and intuitive way directly into end users' hands. Easy and direct access to data is a good thing for end users but, as stated above, it requires knowledge of the data model, joins and aggregation methods.
Self-service tools can slightly simplify the process by providing some hints based on column names, types or values, but the cruel reality is that the end user has to build up the necessary knowledge of the data source before producing correct results. This is why we've seen self-service BI tools being "attached" to OBIEE several times: get corporate official data from the single source of truth and mash it up with information coming from external sources like personal Excel files or the output of Big Data processes.

Analytic Views

Analytic Views (AV) take OBIEE's metadata modeling concept and move it to the database level, providing a way of organizing data in a dimensional model so it can be queried with simpler SQL statements.
Analytic views are standard views with the following extra options:

  • Enable the definition of facts, dimensions and hierarchies that are included in system-generated columns
  • Automatically aggregate the data based on pre-defined calculations
  • Include presentation metadata

Analytic views are created with a CREATE ANALYTIC VIEW statement; some privileges need to be granted to the creating user, and you can find the full list in Oracle's documentation.

Every analytic view is composed of the following metadata objects:

  • Attribute dimensions: organising table/view columns into attributes and levels.
  • Hierarchies: defining hierarchical relationships on top of an attribute dimension object.
  • Analytic view objects: defining fact data referencing both fact tables and hierarchies.

With all the above high-level concepts in mind, it's now time to try out how Analytic Views can be used in a reporting environment.

Database Provisioning

For the purposes of this blog post I used Oracle's 12.2.0.1 database Docker image, provided by Gerald Venzl, which is the quickest way of spinning up a local instance. You just need to:

  • Install Docker
  • Download database installer from Oracle's website
  • Place the installer in the proper location mentioned in the documentation
  • Build the Oracle Database 12.2.0.1 Enterprise Edition Docker image by executing
./buildDockerImage.sh -v 12.2.0.1 -e
  • Run the image by executing
docker run --name db12c -p 1521:1521 -p 5500:5500 -e ORACLE_SID=orcl -e ORACLE_PDB=pdborcl -e ORACLE_CHARACTERSET=AL32UTF8 oracle/database:12.2.0.1-ee  

The detailed parameter definitions can be found in the GitHub repository. You can then connect to your local instance via sqlplus by executing the standard:

sqlplus sys/pwd@//localhost:1521/pdborcl as sysdba  

The password is generated automatically during the first run of the image and can be found in the logs; look for the following string:

ORACLE AUTO GENERATED PASSWORD FOR SYS, SYSTEM AND PDBADMIN: XXXXxxxxXXX  

Once the database is created, it's time to set the goal: I'll try to recreate a piece of Oracle's Sampleapp RPD model in the database using Analytic Views.

Model description

In this blog post I'll look at the 01 - Sample App business model and, specifically, try to replicate the logic behind Time, Product and the F0 Sales Base Measures using Analytic Views.

Dim Product

Sampleapp's D1 - Products (Level Based Hierarchy) is based on two logical table sources: SAMP_PRODUCTS_D, providing the product name, description, LOB and Brand, and SAMP_PROD_IMG_D, containing product images. For the purpose of this test we'll keep our focus on SAMP_PRODUCTS_D only.
The physical mapping of the logical columns is shown in the image below.

[Image: SAMP_PRODUCTS_D physical-to-logical column mappings]

Attribute Dimension

The first piece we're going to build is the attribute dimension, where we'll define attributes and levels. The mappings in the image above can "easily" be translated into attributes with the following SQL.

CREATE OR REPLACE ATTRIBUTE DIMENSION D1_DIM_PRODUCT  
USING SAMP_PRODUCTS_D  
ATTRIBUTES  
 (PROD_KEY as P0_Product_Number
    CLASSIFICATION caption VALUE 'P0 Product Number',
  PROD_DSC as P1_Product
    CLASSIFICATION caption VALUE 'P1 Product',
  TYPE as P2_Product_Type
    CLASSIFICATION caption VALUE 'P2 Product Type',
  TYPE_KEY as P2k_Product_Type
    CLASSIFICATION caption VALUE 'P2k Product Type',
  LOB as P3_LOB
    CLASSIFICATION caption VALUE 'P3 LOB',
  LOB_KEY as P3k_LOB
    CLASSIFICATION caption VALUE 'P3k LOB',
  BRAND as P4_Brand
    CLASSIFICATION caption VALUE 'P4 Brand',
  BRAND_KEY as P4k_Brand
    CLASSIFICATION caption VALUE 'P4k Brand',
  ATTRIBUTE_1 as P5_Attribute_1
    CLASSIFICATION caption VALUE 'P5 Attribute 1',
  ATTRIBUTE_2 as P6_Attribute_2
    CLASSIFICATION caption VALUE 'P6 Attribute 2',
  SEQUENCE as P7_Product_Sequence
    CLASSIFICATION caption VALUE 'P7 Product Sequence',
  TOTAL_VALUE as P99_Total_Value
    CLASSIFICATION caption VALUE 'P99 Total Value')

A few pieces to note:

  • CREATE OR REPLACE ATTRIBUTE DIMENSION: we are defining a dimension, its attributes and its levels.
  • USING SAMP_PRODUCTS_D: defines the datasource, in our case the table SAMP_PRODUCTS_D. Only one datasource is allowed per dimension.
  • PROD_KEY as P0_Product_Number: using the standard notation we can easily rename and recaption columns.
  • CLASSIFICATION caption VALUE ...: several options, like caption or description, can be added for each attribute.

The dimension definition is not complete with only the attribute declarations; we also need to define the levels. Those can be taken from OBIEE's hierarchy.

For each level we can define:

  • The level name, caption and description
  • The Key
  • the Member Name and Caption
  • the Order by Clause

Translating the OBIEE hierarchy levels above into Oracle SQL:

LEVEL BRAND  
  CLASSIFICATION caption VALUE 'BRAND'
  CLASSIFICATION description VALUE 'Brand'
  KEY P4k_Brand
  MEMBER NAME P4_Brand
  MEMBER CAPTION P4_Brand
  ORDER BY P4_Brand
LEVEL Product_LOB  
  CLASSIFICATION caption VALUE 'LOB'
  CLASSIFICATION description VALUE 'Lob'
  KEY P3k_LOB
  MEMBER NAME P3_LOB
  MEMBER CAPTION P3_LOB
  ORDER BY P3_LOB
  DETERMINES(P4k_Brand)
LEVEL Product_Type  
  CLASSIFICATION caption VALUE 'Type'
  CLASSIFICATION description VALUE 'Type'
  KEY P2k_Product_Type
  MEMBER NAME P2_Product_Type
  MEMBER CAPTION P2_Product_Type
  ORDER BY P2_Product_Type
  DETERMINES(P3k_LOB,P4k_Brand)
LEVEL Product_Details  
  CLASSIFICATION caption VALUE 'Detail'
  CLASSIFICATION description VALUE 'Detail'
  KEY P0_Product_Number
  MEMBER NAME P1_Product
  MEMBER CAPTION P1_Product
  ORDER BY P1_Product
  DETERMINES(P2k_Product_Type,P3k_LOB,P4k_Brand)
ALL MEMBER NAME 'ALL PRODUCTS';  

There is an additional DETERMINES line in the SQL above for each level apart from Brand; this is how we specify the relationship between level keys. Taking the Product_LOB example, DETERMINES(P4k_Brand) states that any LOB in our table automatically determines a Brand (in OBIEE terms, LOB is a child of Brand).

Hierarchy

The next step is defining a hierarchy on top of the attribute dimension D1_DIM_PRODUCT defined above. We can create it just by specifying:

  • the attribute dimension to use
  • the list of levels and the relation between them

which in our case becomes

CREATE OR REPLACE HIERARCHY PRODUCT_HIER  
  CLASSIFICATION caption VALUE 'Products Hierarchy'
USING D1_DIM_PRODUCT  
  (Product_Details CHILD OF
   Product_Type CHILD OF
   Product_LOB CHILD OF
   BRAND);

When looking into the hierarchy Product_hier we can see that it creates an OLAP-style dimension with a row for each member at each level of the hierarchy and extra fields like DEPTH, IS_LEAF and HIER_ORDER.

The columns contained in Product_hier are:

  • One for each attribute defined in the attribute dimension D1_DIM_PRODUCT, like P0_PRODUCT_NUMBER or P2K_PRODUCT_TYPE
  • The member name, caption and description and unique name
  • The level name in the hierarchy and related depth
  • The relative order of the member in the hierarchy
  • A field IS_LEAF flagging hierarchy endpoints
  • References to the parent level

Member Unique Names

A particularity to notice is that the MEMBER_UNIQUE_NAME of Cell Phones is [PRODUCT_TYPE].&[101], which is the concatenation of the LEVEL and the P2K_PRODUCT_TYPE value.
One could expect the member unique name to be represented as the concatenation of all the preceding hierarchy members, Brand and LOB, and the member key itself, in a string like [PRODUCT_TYPE].&[10001]&[1001]&[101].

This is the default behaviour; however, in our case it is not happening since we set DETERMINES(P3k_LOB,P4k_Brand) in the attribute dimension definition. We specified that Brand ([10001]) and LOB ([1001]) can automatically be inferred from the Product Type, so there is no need to store those values in the member key. We can find the same setting in OBIEE's Product Type logical level.

Dim Date

The basic D0 Dim Date can be built starting from the table SAMP_TIME_DAY_D following the same process as above. Like in OBIEE, some additional settings are required when creating a time dimension:

  • DIMENSION TYPE TIME: the time dimension type needs to be specified
  • LEVEL TYPE <LEVEL_NAME>: each level in the time hierarchy needs to belong to a precise level type chosen from:
    • YEARS
    • HALF_YEARS
    • QUARTERS
    • MONTHS
    • WEEKS
    • DAYS
    • HOURS
    • MINUTES
    • SECONDS

Attribute Dimension

[Image: Dim Date physical-to-logical column mappings]

Taking into consideration the additional settings, the Dim Date column mappings in the image above can be translated into the following attribute dimension SQL definition.

CREATE OR REPLACE ATTRIBUTE DIMENSION D0_DIM_DATE  
DIMENSION TYPE TIME  
USING SAMP_TIME_DAY_D  
ATTRIBUTES  
 (CALENDAR_DATE AS TOO_CALENDAR_DATE,
  PER_NAME_MONTH AS T02_PER_NAME_MONTH,
  PER_NAME_QTR AS T03_PER_NAME_QTR,
  PER_NAME_YEAR AS T04_PER_NAME_YEAR,
  DAY_KEY AS T06_ROW_WID,
  BEG_OF_MTH_WID AS T22_BEG_OF_MTH_WID,
  BEG_OF_QTR_WID AS T23_BEG_OF_QTR_WID
  )
    LEVEL CAL_DAY
      LEVEL TYPE DAYS
      KEY TOO_CALENDAR_DATE
      ORDER BY TOO_CALENDAR_DATE
      DETERMINES(T22_BEG_OF_MTH_WID, T23_BEG_OF_QTR_WID,T04_PER_NAME_YEAR)
    LEVEL CAL_MONTH
      LEVEL TYPE MONTHS
      KEY T22_BEG_OF_MTH_WID
      MEMBER NAME T02_PER_NAME_MONTH
      ORDER BY T22_BEG_OF_MTH_WID
      DETERMINES(T23_BEG_OF_QTR_WID,T04_PER_NAME_YEAR)
    LEVEL CAL_QUARTER
      LEVEL TYPE QUARTERS
      KEY T23_BEG_OF_QTR_WID
      MEMBER NAME T03_PER_NAME_QTR
      ORDER BY T23_BEG_OF_QTR_WID
      DETERMINES(T04_PER_NAME_YEAR)
    LEVEL CAL_YEAR
      LEVEL TYPE YEARS
      KEY T04_PER_NAME_YEAR
      MEMBER NAME T04_PER_NAME_YEAR
      ORDER BY T04_PER_NAME_YEAR
    ALL MEMBER NAME 'ALL TIMES';

You may have noticed a different mapping of keys, member names and ORDER BY attributes. Let's take CAL_MONTH as an example. It's defined by two columns:

  • BEG_OF_MTH_WID: used for joins and ordering
  • PER_NAME_MONTH: used as "display label"

PER_NAME_MONTH in the YYYY / MM format could also be used for ordering, but most of the time end users request months in the MM / YYYY format. Being able to set an ordering column different from the member name allows us to manage the hierarchy properly.

Hierarchy

The time hierarchy follows the same rules as the product one; no additional settings are required.

CREATE OR REPLACE HIERARCHY TIME_HIER  
USING D0_DIM_DATE  
  (CAL_DAY CHILD OF
   CAL_MONTH CHILD OF
   CAL_QUARTER CHILD OF
   CAL_YEAR);

Fact Sales

The last step in the journey is the definition of the analytic view on the fact table, which as per Oracle's documentation:

An analytic view specifies the source of its fact data and defines measures that describe calculations or other analytic operations to perform on the data. An analytic view also specifies the attribute dimensions and hierarchies that define the rows of the analytic view.

The analytic view definition contains the following specifications:

  • The data source: the table or view that will be used for the calculation
  • The columns: which columns from the source objects to use in the calculations
  • The attribute dimensions and hierarchies: defining both the list of attributes and the levels of the analysis
  • The measures: a set of aggregations based on the predefined columns from the data source.

Within an analytic view definition, a materialized view can be defined in order to store aggregated values. This is similar to OBIEE's Logical Table Source setting for aggregates.

Analytic View Definition

For the purpose of this post I'll use SAMP_REVENUE_F, which is one of the sources of F0 Sales Base Measures in Sampleapp. The following image shows the logical column mapping.

[Image: F0 Sales Base Measures logical column mapping]

The above mappings can be translated into the following SQL:

CREATE OR REPLACE ANALYTIC VIEW F0_SALES_BASE_MEASURES  
USING SAMP_REVENUE_F  
DIMENSION BY  
  (D0_DIM_DATE
    KEY BILL_DAY_DT REFERENCES TOO_CALENDAR_DATE
    HIERARCHIES (
      TIME_HIER DEFAULT),
   D1_DIM_PRODUCT
    KEY PROD_KEY REFERENCES P0_Product_Number
    HIERARCHIES (
      PRODUCT_HIER DEFAULT)
   )
MEASURES  
 (F1_REVENUE FACT REVENUE AGGREGATE BY SUM,
  F10_VARIABLE_COST FACT COST_VARIABLE AGGREGATE BY SUM,
  F11_FIXED_COST FACT COST_FIXED AGGREGATE BY SUM,
  F2_BILLED_QTY FACT UNITS,
  F3_DISCOUNT_AMOUNT FACT DISCNT_VALUE AGGREGATE BY SUM,
  F4_AVG_REVENUE FACT REVENUE AGGREGATE BY AVG,
  F21_REVENUE_AGO AS (LAG(F1_REVENUE) OVER (HIERARCHY TIME_HIER OFFSET 1))
  )
DEFAULT MEASURE F1_REVENUE;  

Some important parts need to be highlighted:

  • USING SAMP_REVENUE_F: defines the analytic view source, in our case the table SAMP_REVENUE_F
  • DIMENSION BY: this section provides the list of dimensions and related hierarchies to take into account
  • KEY BILL_DAY_DT REFERENCES TOO_CALENDAR_DATE: defines the join between the fact table and attribute dimension
  • HIERARCHIES (TIME_HIER DEFAULT): multiple hierarchies can be defined on top of an attribute dimension and used in an analytic view; however, like in OBIEE, only one will be used by default
  • F1_REVENUE FACT REVENUE AGGREGATE BY SUM: defines the measure with its alias, source column and aggregation method
  • F2_BILLED_QTY FACT UNITS: if the aggregation method is not defined, it relies on the default SUM
  • F21_REVENUE_AGO: new metrics can be calculated based on previously defined columns, replicating OBIEE functions like time series. The formula (LAG(F1_REVENUE) OVER (HIERARCHY TIME_HIER OFFSET 1)) calculates the equivalent of OBIEE's AGO function for each level of the hierarchy.
  • DEFAULT MEASURE F1_REVENUE: defines the default measure of the analytic view

Using Analytic Views

After the analytic view definition, it's time to analyse what benefits end users have when using them. We are going to take a simple example: a query to return the Revenue and Billed Qty per Month and Brand.

Using only the original tables we would have the following SQL

SELECT D.CAL_MONTH,  
  D.BEG_OF_MTH_WID,
  P.BRAND,
  SUM(F.REVENUE) AS F01_REVENUE,
  SUM(F.UNITS)   AS F02_BILLED_QTY
FROM SAMP_REVENUE_F F  
JOIN SAMP_PRODUCTS_D P  
ON (F.PROD_KEY = P.PROD_KEY)  
JOIN SAMP_TIME_DAY_D D  
ON (F.BILL_DAY_DT = D.CALENDAR_DATE)  
GROUP BY D.CAL_MONTH,  
  D.BEG_OF_MTH_WID,
  P.BRAND
ORDER BY D.BEG_OF_MTH_WID,  
  P.BRAND;

The above SQL requires the knowledge of:

  • Aggregation methods
  • Joins
  • Group by
  • Ordering

Even if this is an oversimplification of analytic view usage, you can already spot that some knowledge of the base data structures and of the SQL language is needed.

Using the analytic views defined above, the query can be written as

SELECT TIME_HIER.MEMBER_NAME AS TIME_SLICE,  
  PRODUCT_HIER.MEMBER_NAME   AS PRODUCT_SLICE,
  F1_REVENUE,
  F2_BILLED_QTY
FROM F0_SALES_BASE_MEASURES  
WHERE TIME_HIER.LEVEL_NAME  IN ('CAL_MONTH')  
AND PRODUCT_HIER.LEVEL_NAME IN ('BRAND')  
ORDER BY TIME_HIER.HIER_ORDER,  
  PRODUCT_HIER.HIER_ORDER;

As you can see, the SQL statement is simplified: no aggregations, join conditions or GROUP BY predicates are needed. All the end user has to know is the analytic view name and the related hierarchies that can be used.

The additional benefit is that if we want to change the level of granularity of the above query, we just need to change the WHERE condition. E.g. to have the rollup per Year and LOB we just have to substitute

WHERE TIME_HIER.LEVEL_NAME  IN ('CAL_MONTH')  
AND PRODUCT_HIER.LEVEL_NAME IN ('BRAND')  

with

WHERE TIME_HIER.LEVEL_NAME  IN ('CAL_YEAR')  
AND PRODUCT_HIER.LEVEL_NAME IN ('LOB')  

without touching any other part of the query.
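The same convenience carries over to applications that query the analytic view programmatically. Below is a minimal sketch using the cx_Oracle driver against the pdborcl PDB created earlier; the credentials are placeholders, and the level names are passed as bind variables so the SQL itself never changes.

import cx_Oracle

conn = cx_Oracle.connect("bi_user", "bi_password", "localhost:1521/pdborcl")
cur = conn.cursor()

sql = """
SELECT time_hier.member_name    AS time_slice,
       product_hier.member_name AS product_slice,
       f1_revenue,
       f2_billed_qty
FROM   f0_sales_base_measures
WHERE  time_hier.level_name    = :time_level
AND    product_hier.level_name = :product_level
ORDER BY time_hier.hier_order, product_hier.hier_order"""

# Switching the report from Month/Brand to Year/LOB is just a matter of binding
# different level names
for levels in ({"time_level": "CAL_MONTH", "product_level": "BRAND"},
               {"time_level": "CAL_YEAR", "product_level": "LOB"}):
    cur.execute(sql, levels)
    print(levels, cur.fetchmany(5))

cur.close()
conn.close()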

Using Analytic Views in DVD

At the beginning of my blog post I wrote that Analytic Views could be useful when used in conjunction with self-service BI tools. Let's have a look at how the end user journey is simplified in the case of Oracle's Data Visualization Desktop.

Without AV the end-user had two options to source the data:

  • Write the complex SQL statement, with join conditions, group by and order by clauses, in the SQL editor to retrieve data at the correct level with the related dimensions
  • Import the fact table and dimensions as separate datasources and join them together in DVD's project.

Both options require knowledge of SQL and join conditions in order to be able to present correct data. Using Analytic Views, the process is simplified: we just need to create a new source pointing to the database where the analytic views sit.
The next step is to retrieve the necessary columns from the analytic view. Unfortunately, analytic views are not visible in DVD's object explorer (only standard tables and views are shown).

We can, however, specify with a simple SQL statement all the information we need, like the Time and Product member slices, the related levels, and the order in the hierarchy.

SELECT TIME_HIER.MEMBER_NAME AS TIME_SLICE,  
  PRODUCT_HIER.MEMBER_NAME   AS PRODUCT_SLICE,
  TIME_HIER.LEVEL_NAME AS TIME_LEVEL,
  PRODUCT_HIER.LEVEL_NAME AS PRODUCT_LEVEL,
  TIME_HIER.HIER_ORDER AS TIME_HIER_ORDER,
  PRODUCT_HIER.HIER_ORDER AS PRODUCT_HIER_ORDER,
  F1_REVENUE,
  F2_BILLED_QTY
FROM F0_SALES_BASE_MEASURES  
ORDER BY TIME_HIER.HIER_ORDER,  
  PRODUCT_HIER.HIER_ORDER;

You may have noted that I'm not specifying any WHERE clause for level filtering: as an end user, I want to be able to retrieve all the necessary levels by just changing a filter in my DVD project. After including the above SQL in the datasource definition and amending the measure/attribute definitions, I can start playing with the analytic view data.

I can simply include the dimensions' MEMBER_NAME in the graphs together with the measures, and add the LEVEL_NAME to the filters. In this way I can change the graph granularity by simply selecting the appropriate LEVEL in the filter selector for each of the dimensions available.

One particular thing to notice, however, is that the data coming from various columns like date, month and year is "condensed" into a single VARCHAR column. In the case of different datatypes (like dates in the time dimension), this will prevent correct usage of some of DVD's capabilities, like time series or trending functions. However, if a particular type of graph is needed for a specific level, either an ad-hoc query or a casting operation can be used.

Conclusion

In this blog post we analysed Analytic Views, a new component in Oracle Database 12.2, and how they can be used to "move" metadata modeling to the database level to provide an easier query syntax to end users.

Usually metadata modeling is done in reporting tools like OBIEE, which offer an additional set of features on top of the ones included in analytic views. However, centralized reporting tools like OBIEE are not present everywhere and, with the wave of self-service BI tools, analytic views represent a perfect method of enabling users not familiar with SQL to simply query their enterprise data.

If you are interested in understanding more about analytic views or metadata modeling, don't hesitate to contact us!
If you want to improve the SQL skills of your company workforce, check out our recently launched SQL for beginners training!

Categories: BI & Warehousing

Real World OBIEE: Demystification of Variables Pt. 3

Tue, 2017-03-28 08:00

In part two of this blog series, I went over using Repository, System and Presentation Variables to make reports dynamic for any period of time. In part three, I am going to talk about making reports dynamic for periods of time using built-in functions within Answers itself.

Real World

While it's a lot more efficient to create Repository Variables to use in filters and prompts for time dimensions, sometimes it is simply not possible. Perhaps you are a front-end developer for OBIEE and have no access to the RPD or the database. Perhaps you have no communication with the person in your organization who handles all of the RPD development and therefore cannot submit any change requests.

Don't worry. We've got you covered.

There are several functions and tricks you can use within Answers itself to make reports dynamic and eliminate having to hardcode dates.

The Scenario

I am going to use the same scenario I used for part two of this series for example one. To recap, here are the requirements:

I have been asked to create a report that is going to reside on a products dashboard. It needs to have the same product grouping as the report I used in part one of this series, needs to contain 'Gross Rev $', 'Net Rev $' and '# of Orders' and have a prompt that can select between the first and current day of the month and every day in-between. The person who requested the report wants the prompt to change dynamically with each month and does not want users to be able to select future dates.

In part two, I used a custom SQL statement, based on Repository Variables I created, to populate all the date values from the first of every month to the current date for the Variable Prompts. There is a gap in the data loads for # of Orders, in which the data does not update until the 2nd or 3rd of each new month. The person who requested the report wanted a summary of the previous month's '# of Orders' to be shown until the data is updated for the current month. I used a Repository Variable that returned the value of the previous month with the current year, together with a CASE statement and filter expressions: if the day of the month was <=2, or # of Orders was null, the filter expression using the Repository Variable (Prev_Month) was applied; otherwise, the filter expression containing the Start Date and End Date Presentation Variable placeholders defined in my Variable Prompts was used.

Example One

In this example, I have to figure out a way to make the report dynamic with only the functions available within Answers. There are two parts to this example. First, I need to use a function that will return the previous month's value for the Calendar Year Month column to use with the '# of Orders' column. This will replace the Repository Variable Prev_Month I used in part two of this series. Second, I need to write a new SQL statement for the Start Date and End Date prompts I created in part two, and also define a new SQL statement for the default values in those prompts.

Part 1

I am going to start by creating a new statement to return the previous month's value for the Calendar Year Month column. I can use the TIMESTAMPADD function in conjunction with the extraction syntax 'YEAR' and 'MONTH' to return the desired results. Let's take a look at the entire statement and then I will break it down.

CAST(YEAR(timestampadd(SQL_TSI_YEAR, -1,CURRENT_DATE))*100+MONTH(timestampadd(SQL_TSI_MONTH, -1,CURRENT_DATE)) AS VARCHAR(6))

1. TIMESTAMPADD - This is what defines that I am going to use addition to return a date by adding an interval of time to a date.

2. SQL_TSI_YEAR and SQL_TSI_MONTH - The first argument in the function. It defines what interval of time the function will work with.

3. -1 - This is the interval of time that is added to the third argument.

4. CURRENT_DATE - The third argument in the function. This is the date the interval is applied to.

5. YEAR and MONTH - This is the extraction syntax that will return only the year and the month respectively.

Also notice that I have used VARCHAR(6) for the CAST argument. If I use VARCHAR, I can specify the exact number of characters I want returned.

Now I need to copy my column formula and paste it into the column formula that I created for '# of Orders' in part two of this series.

I am going to replace the Prev_Month Repository Variable with my statement, which will look like this.

CASE WHEN DAY(CURRENT_DATE)<=2 OR "Sales - Fact Sales"."Measures"."# of Orders" IS NULL THEN FILTER("Sales - Fact Sales"."Measures"."# of Orders" USING ("Sales - Fact Sales"."Periods"."Calendar Year Month" = CAST(YEAR(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE))*100+MONTH(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE)) AS VARCHAR(6)))) ELSE FILTER("Sales - Fact Sales"."Measures"."# of Orders" USING ("Periods"."Day Date" BETWEEN @{pv_start_dt}{date '2015-10-01'} AND @{pv_end_dt}{date '2015-10-15'})) END


If I run the report, my results return as expected.


Part 2

Now I need to write a new SQL statement for my Start Date and End Date prompts. In order to do this, I am going to need to use two functions: TIMESTAMPADD and CURRENT_DATE. First, let's take a look at the TIMESTAMPADD function.

I am going to use the TIMESTAMPADD function to filter the Day Date column for the first day of the month. To demonstrate, I am going to create a new analysis and use the TIMESTAMPADD function in a column formula. My column formula looks like the following:

TIMESTAMPADD(SQL_TSI_DAY, -DAYOFMONTH(CURRENT_DATE) +1, CURRENT_DATE)

This formula can be broken down into four parts:

1. TIMESTAMPADD - This defines that I am going to return a new date by adding an interval to another date.

2. SQL_TSI_DAY - The first argument in the function. It defines what interval of time the function will work with (in this case days)

3. -DAYOFMONTH(CURRENT_DATE)+1 - This is the interval that is added to the third argument. In this case I am taking the negative value of the day of the month, adding 1, and then adding the result to the current date, which always returns the first day of the current month.

4. CURRENT_DATE - The third argument in the function. This is the date the interval is added to.

This is only scratching the surface of what you can do with the TIMESTAMPADD function. If you would like more information, check out the blog on TIMESTAMPS written by Brian Hall.
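As an aside, the same pattern can be shifted around to produce other handy expressions; these are illustrative variations rather than formulas used in this report:

TIMESTAMPADD(SQL_TSI_DAY, -DAYOFYEAR(CURRENT_DATE) + 1, CURRENT_DATE)    -- first day of the current year
TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE)                            -- the same day one month ago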

I am going to add an additional column to the Criteria and use the CURRENT_DATE function in a column formula.


Now I am going to click on Results to show the results of the TIMESTAMPADD function and the CURRENT_DATE function.


From the results you can see that I have both the first day of the month and the current date. Now I need to convert this into a filter for the Day Date column so that I can get the logical SQL query for my Start Date and End Date prompts.


In the New Filter window, I need to change the operator to is between and click on Add More Options to add a SQL Expression.


In the SQL Expression box, I need to put the TIMESTAMPADD expression for the first day of the month from the previous example. In addition, I need to add another SQL Expression for the CURRENT_DATE function.
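For reference, the two SQL Expression boxes end up containing the same expressions built above:

TIMESTAMPADD(SQL_TSI_DAY, -DAYOFMONTH(CURRENT_DATE) +1, CURRENT_DATE)
CURRENT_DATE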


When I return to my Criteria, I can see the filter I created in the Filter window.


I can click on Results to run the report. The results for the Day Date column return as expected.


Now I can click on the Advanced tab and copy the logical SQL statement to use for my Start Date and End Date prompts.


Now I am going to paste the following into my Start Date Variable Prompt

Choice List Values > SQL Results

SELECT "Sales - Fact Sales"."Periods"."Day Date" 
FROM "Sales - Fact Sales" WHERE ("Periods"."Day Date" BETWEEN TIMESTAMPADD(SQL_TSI_DAY, -DAYOFMONTH(CURRENT_DATE), CURRENT_DATE)  AND CURRENT_DATE) 
ORDER BY "Periods"."Day Date"

Default Selection > SQL Results

SELECT
TIMESTAMPADD(SQL_TSI_DAY, -DAYOFMONTH(CURRENT_DATE) +1, CURRENT_DATE)
FROM "Sales - Fact Sales"
FETCH FIRST 65001 ROWS ONLY


For the default selection, I am using a SQL statement that selects the first day of the month using the same TIMESTAMPADD expression as the query above, from my subject area "Sales - Fact Sales".

Now I need to change the SQL query for both the Choice List Values and Default Selection for my End Date Variable Prompt.

I am going to use the same SQL query for the Choice List Values in my End Date prompt as I did in my Start Date prompt. I am going to change the default selection to the following:

SELECT
CURRENT_DATE
FROM "Sales - Fact Sales"
FETCH FIRST 65001 ROWS ONLY


If I go to the Display window, I can view the results of my changes.


Notice that the results are exactly the same as the results in part two of this series.

I can save the dashboard prompt and go to my dashboard and test the prompt.


In Conclusion

In part one of this series, we looked at using Bins and CASE statements to create custom groupings for values and at switching between those groups and values using Presentation Variables.

In part two of this series, we looked at creating Repository Variables to make reports dynamic, using those Repository Variables in Variable Prompts and passing them into column formulas using Presentation Variables.

In the third and final part of this series, we looked at making reports dynamic by using built-in functions within Answers such as TIMESTAMPADD and CURRENT_DATE.

My hope is that you can take these examples and apply them in your own OBIEE development. If you would like to know more about front end or RPD development, please check out the variety of training courses we offer at Rittman Mead. Until next time.

Categories: BI & Warehousing

Real World OBIEE: Demystification of Variables Pt. 2

Tue, 2017-03-21 09:00

In part one of this blog series, I went over using bins and presentation variables to dynamically create groups and switch between them in a report and on a dashboard. In part two, I am going to talk about making reports dynamic for periods of time using repository, system and presentation variables. Before I dive into an example, there are a couple of things I would like to cover first.

SYSDATE

The sysdate function returns the current datetime set by the system where the database resides. Sysdate is a really useful function for creating repository variables for use with date dimensions. If I go into SQL Developer, I can write a query to return the current sysdate:

select sysdate from dual;

CURRENT_DATE

The current_date function returns the current datetime set by the system where the BI Server resides. This datetime may differ from sysdate depending on the geographical location of the database vs. the system that OBIEE resides on. I can write a query using SQL Developer to return the datetime using the current_date function:

select current_date from dual;

Since my database and OBIEE instance are on the same system, sysdate and current_date are the same.

TRUNCATE

When using sysdate or current_date to create repository variables for dates (which I am going to show in an upcoming example), you have to keep something in mind. While the date may match, the time may not. To show an example of this, I am going to join one of my date columns with sysdate.

select sysdate, dim_date_key from dual, 
gcbc_pef.dim_date
where sysdate = dim_date_key;

If I run this query, I don't get an error but I get no results.

Why? To answer this, I need to write a query to inspect my date column.

select dim_date_key from gcbc_pef.dim_date;

As you can see by the results of my query, the DIM_DATE_KEY column does have the same format as sysdate but all the times are set to 00:00:00 (or midnight). To further demonstrate the difference between my date column and sysdate, I am going to write a new query and use the TRUNC (or TRUNCATE) function.

select sysdate, dim_date_key from dual, 
gcbc_pef.dim_date
where trunc(sysdate) = dim_date_key;

As you can see, the query runs successfully but notice how sysdate and DIM_DATE_KEY still have different times. How is the join possible? Because I used the truncate function in the where clause in my query for sysdate. Without going into too much detail, using truncate on a date function without any formatting (which I will cover later) will set (or truncate) the datetime to the start (or midnight) of the current day. For example, if I run another query that just selects the truncated sysdate from dual, I get this result.

select trunc(sysdate) from dual;

Now, let's dive into an example.

Note: For all of the examples in this blog series I am using OBIEE 12.2.1.2.0

The Scenario

In this example, I have been asked to create a report that is going to reside on a products dashboard. It needs to have the same product grouping as the report I used in part one of this series, needs to contain Gross Rev $, Net Rev $ and # of Orders and have a prompt that can select between the first and current day of the month and every day in-between. The person who requested the report wants the prompt to change dynamically with each month and does not want users to be able to select future dates.

There are two foreseeable challenges with this report. The first, and probably the most obvious, is how to make the date prompt for the current month and have it change dynamically with each month. The second is how to pass the dates into the report.

There is one more challenge that I will have to tackle. There is a gap in the data loads for # of Orders. Data does not update until the 2nd or 3rd of each new month. This wouldn't be a big deal except the person who requested the report wants a summary of the previous month's # of Orders to be shown until the data is updated for the current month.

Fortunately, by using Repository, System and Presentation Variables, I can accomplish all of the requirements of this report.

The Example

For this example, I am going to start by creating Repository Variables to use with my date column in order to make the dates dynamic. There are other ways to make dates dynamic using functions within Answers but they are a little bit trickier to use and are less common. I am going to go over some of those functions in part three of this blog series.

Repository Variables are created using the Admin Tool. By launching the Admin Tool and opening my RPD in online mode (can also be created offline), I can go to Manage > Variables to start creating my first Repository Variable.

From the Variable Manager window, I can create a Repository Variable by selecting Action > New > Repository > Variable.

I am going to start by creating the Repository Variable for the current date. Since this variable will be dynamic, I need to make sure I select the option 'Dynamic' and I am going to give it the name USCurDate.

Now I need to create a new init block. I can do this by clicking New...

Once in the Repository Variable Initialization Block screen, I need to give the init block a name, set the schedule for when the variable or variables will be refreshed, then click Edit Data Source to define the connection pool the init block will use as well as the initialization string (query) the init block will use to populate the Repository Variable.

In the data source window, I am going to set my connection pool to one I have created just for my init blocks and then type in the following into the initialization string window:

select TRUNC(sysdate) from dual;

If I click Test, the query will execute and will return a result.

Notice how the result is the same as the query I ran using SQL Developer earlier.

Now I need to create a Repository Variable for the first day of every month. I am going to use the same method as before and name it USMoBeginDate. The query I am going to use is slightly different from the previous query. I still need to use the TRUNC function but I also need to apply formatting so that it truncates to the start of the month. I am going to enter the following into the initialization string window:

select TRUNC(sysdate, 'MM') from dual;

Some other useful queries I can use are:

First Day of the Current Year

select TRUNC(sysdate, 'YY') from dual;

Last Day of the Previous Year

select TRUNC(sysdate, 'YY') -1 from dual;

Previous Year Date

select TRUNC(ADD_MONTHS(sysdate, -12)) from dual;
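Following the same pattern, a query along these lines (an illustrative variation, not one of the variables used in this report) would return the first day of the previous month:

select TRUNC(ADD_MONTHS(sysdate, -1), 'MM') from dual;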

Now I need to create a Repository Variable for the previous month to use with my # of Orders measure column. Upon inspection, I discover that the column I need to use is called Calendar Year Month and is a VARCHAR or character type. If I go into Answers and pull in the Calendar Year Month column, I can see the format is 'YYYYMM'

To create the Repository Variable, I am going to use the same method as with the current date and first day of the current month Repository Variables and issue a new query. Because the Calendar Year Month column is a VARCHAR, I need to use the to_char function to change sysdate from a date type to a character type, use some formatting syntax and use some basic arithmetic. The query is as follows:

select to_char(to_number(to_char(sysdate, 'YYYY')) * 100 + to_number(to_char(sysdate, 'MM') -1)) from dual;

To break down each part of this query, let's start with the year. In order to use the 'YYYY' format I must first cast sysdate to a character (to_char(sysdate, 'YYYY')). Then I need to cast that result back to an integer so that I can multiply it by 100, which gives 201500. The reason for multiplying by 100 is that when I add the month number, single-digit months (1-9) still occupy the last two digits. To get the previous month number, I first cast sysdate to a character using the format 'MM', cast it back to a number and subtract 1 (to_number(to_char(sysdate, 'MM') -1)), then cast the entire statement back to a character type so that it matches the type of the Calendar Year Month column. When I run the query, I get this result.
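As a rough walk-through, assuming purely for illustration that sysdate falls in October 2015, the pieces evaluate like this:

to_char(sysdate, 'YYYY')          -- '2015'
to_number('2015') * 100           -- 201500
to_char(sysdate, 'MM') - 1        -- 9 (implicit conversion to a number)
201500 + 9                        -- 201509
to_char(201509)                   -- '201509'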

Now that I have my three repository variables (USCurDate, USMoBeginDate and Prev_Month) I can start to create the report.

I'm going to fast-forward a little bit to the part of the report creation process where I will use the Repository Variables I created using the Admin Tool. Since I am using virtually the same report as part one of this blog series, please refer back for how to create custom groups using bins and presentation variables and custom value prompts.

Because of the delay in the data load for the # of Orders at the beginning of the month, I can not use a global report filter. Instead, I am going to have to use something called a Filter Expression within each measure column formula.

About Filter Expressions

Unlike global report filters, column formula level filter expressions are used when you need to specify a particular constraint within the column formula itself. Because the filter is at the column formula level, it is independent of any subsequent column filters.

Note: When using a column formula filter for a measure, you can not add a global filter of the same data subject on top of it. For example, if using a column level filter for a particular Year and Month, I can not add a global filter for a particular year. The two filters contradict each other and the result will be null.

To add a filter in the column formula, go to Edit formula, make sure the column syntax is highlighted and click Filter.

From here the Insert Filter window will pop up and I can select the attribute column to filter the measure by. Here, I want to use the column Day Date to filter Gross Rev $ by the day.

I can add a column by double-clicking it in the Subject Areas pane. When a column is added, I will be prompted with a New Filter window and from here, everything is exactly the same process as adding a global report filter.

Here I need to define the operator as is between since we are dealing with date ranges. I could call my Repository Variables for current_date and first day of the month here but, because the request is for a prompt to select between date ranges, I am going to have to call Presentation Variables and use the prompt to populate the actual values.

Note: If you are unsure about the functionality of Presentation Variables, see part one of this blog series

To add Presentation Variables to the filter expression, click Add More Options and select Presentation Variable from the dropdown.

When a Presentation Variable is added to the filter, two new text boxes appear. The Variable Expr box is where you define the variable to be used and the (default) box is used to add a default value. The default value is optional but, when defining a Presentation Variable within a filter, you have to specify a default value in order to get any results. The reason for this is because, when the report is run, the query issued will use the Presentation Variable placeholder that is defined unless a default value is specified. In other words, the default value will always be used unless the Presentation Variable is populated with a value or a list of values.
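The placeholder and its default end up in the generated filter in the form @{variable_name}{default value}; for a date column it looks something like this:

@{pv_start_dt}{date '2015-10-01'}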

Because I want the users to be able to specify a date range, I need to define two Presentation Variables: one for the start date and one for the end date. I can add another place for a Presentation Variable by simply clicking Add More Options again and selecting Presentation Variable.

Now I need to add both my start and end date Presentation Variables in the Variable Expr boxes. I’m going to call my start date presentation variable pv_start_dt and my end date presentation variable pv_end_dt. I am also going to specify a default date range from the beginning of the current month (10/01/2015) to yesterday's date (10/15/2015).

If I click OK, I will be taken back to the Insert Filter screen where I can see the filter expression previously defined.

Clicking OK again will return me to Edit Column Formula which shows the column formula with the filter expression defined in the previous steps.
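The resulting Gross Rev $ formula amounts to something like the following sketch, using the presentation variables and defaults set above:

FILTER("Sales - Fact Sales"."Measures"."Gross Rev $" USING ("Periods"."Day Date" BETWEEN @{pv_start_dt}{date '2015-10-01'} AND @{pv_end_dt}{date '2015-10-15'}))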

Now I have to do the exact same thing for the Net Rev $ column. Since the filter expression is identical, I can simply copy and paste the column formula for Gross Rev $ and replace the column name in the expression.

Now I need to take care of the # of Orders column. This column is tricky because of the gap between the 1st and the 2nd or 3rd of every month. I could use a filter expression that defaults to the previous month by using the previous month repository variable I created in a previous step, but this alone wouldn’t switch over when the data became available.

So how can we fulfill the requirement of the report if we don’t know the exact date in which the data will be available? This can be accomplished by using a CASE statement as shown previously in part one of this series. We can break the Case statement down into two parts or two conditions:

1. When the day for the current month is less than or equal to 2 OR if # of Orders is null, then filter # of Orders by Calendar Year Month using the value of the Prev_Month Repository Variable.

2. When condition one is not true, then filter # of Orders by Day Date between the values of the pv_start_date and the pv_end_date Presentation Variables

Putting both conditions together and using the correct syntax for Column Formula results in the following formula:
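As a sketch, using the Prev_Month Repository Variable and the Presentation Variables defined above, the formula looks something like this:

CASE WHEN DAY(CURRENT_DATE) <= 2 OR "Sales - Fact Sales"."Measures"."# of Orders" IS NULL
THEN FILTER("Sales - Fact Sales"."Measures"."# of Orders" USING ("Sales - Fact Sales"."Periods"."Calendar Year Month" = VALUEOF("Prev_Month")))
ELSE FILTER("Sales - Fact Sales"."Measures"."# of Orders" USING ("Periods"."Day Date" BETWEEN @{pv_start_dt}{date '2015-10-01'} AND @{pv_end_dt}{date '2015-10-15'}))
END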

Note that I am using CURRENT_DATE in my column formula. In this case, I am extracting the day number from the current date by using the extract day function (DAY(CURRENT_DATE)). I am going to talk about this in further detail when I talk about using built in functions in Answers to make reports dynamic in part 3 of this series.

Now I need to create my dashboard prompt. I am going to start by clicking on New > Dashboard Prompt.

I need to create two prompts: One for the start date and one for the end date. Because I am using presentation variables as placeholders for the date between values, I have to use a Variable Prompt instead of a Column Prompt. Variable Prompts allow us to define a presentation variable and then define a list of values for the users to select from.

To create a Variable Prompt for Start Date, I can click on the new prompt icon and select Variable Prompt.

There a few things I need to do in order to make this prompt function for the report. First, I have to define the same presentation variable name (pv_start_dt) that I used in the filter expressions for the Gross Rev $, Net Rev $ and # of Orders columns.

Because this is not a column prompt, I have to manually specify the values I want the user to be able to select from. Rather than typing in each value, I can use the SQL Results option from the Choice List Values dropdown and use a SQL statement to select the exact values that I want.

This may seem daunting at first but there is a very straightforward way to accomplish this. Rather than manually writing out a SQL query, we can make use of the Advanced Tab within a new report.

I’m going to start by clicking New > Analysis and selecting the column that I want values for: Day Date.

I need to add a filter to Day Date so that it returns only the values I want the user to select from.

Now I need to select the operator to be is between and add two Repository Variables that I have set up: one for the first date of the current month and one for the current date of the current month.

If I go to results, I can see the data returned with the filter I have specified.

As you can see, the Day Date column only contains the values from the first of the month to the current date (October 16th, 2015 in this example).

Now for the good stuff. I can navigate to the Advanced Tab and copy the SQL statement used to generate these values and paste them into the SQL Results text box in my prompt.

You will notice that within the SQL statement generated by OBI, there are numbers and s_# aliases between the SELECT and the Day Date column and after the Day Date column, and there is also an ORDER BY clause that uses the number “2”. Without going into too much detail, this is what OBI uses to make the query more efficient when retrieving results from the database. In order to allow the values to populate the prompt, these have to be removed in OBIEE 12c and the “ORDER BY” clause has to be rewritten in order to make it work.

This

SELECT
   0 s_0,
   "Sales - Fact Sales"."Periods"."Day Date" s_1
FROM "Sales - Fact Sales"
WHERE
("Periods"."Day Date" BETWEEN VALUEOF("USMoBeginDate") AND  VALUEOF("USCurDate"))
ORDER BY 2 ASC NULLS LAST
FETCH FIRST 65001 ROWS ONLY

Changed to this

SELECT
   "Sales - Fact Sales"."Periods"."Day Date"
FROM "Sales - Fact Sales"
WHERE
("Periods"."Day Date" BETWEEN  VALUEOF("USMoBeginDate") AND  VALUEOF("USCurDate"))
ORDER BY "Periods"."Day Date" ASC
FETCH FIRST 65001 ROWS ONLY

This can be a bit confusing if you are not very familiar with SQL but just remember:

When populating a prompt using a SQL statement in OBIEE 12c, take out the number and anything that begins with “s” between the SELECT and the first column, take out anything that begins with “s” after any subsequent columns, and make sure the “ORDER BY” clause uses the actual name of the column you want to order by.

Note: If you do not require any values to be in order, you can omit the “ORDER BY” clause all together.

If I expand Options in the Edit Prompt window, I can add a default selection or a default value that the prompt will start with. I can use the USMoBeginDate here as well so that the prompt always starts with the first date of every month as the start date.
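If the default is supplied as SQL Results, a statement along the lines of this sketch, reusing the VALUEOF() syntax from the choice-list query, would return the required first-of-month value:

SELECT
   "Sales - Fact Sales"."Periods"."Day Date"
FROM "Sales - Fact Sales"
WHERE ("Periods"."Day Date" = VALUEOF("USMoBeginDate"))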

Note: You will notice that under Options in the Edit Prompt window there is a Variable Data Type option with a dropdown selector. This can be used if the data type needs to be specified to something other than the default which is ‘text’ or character type. If you are getting an error when running the report that says “Selected value does not match datatype. Expected [this value] but got [this value]” you need to change the Variable Data Type to the datatype of the column you are prompting on. In this example, we are prompting a date datatype so therefore it needs to be set to date.

If I click OK, I can check the values in the display window by clicking the dropdown for the Start Date prompt I just created.

The blue checkmark indicates the value that is selected which, because the first date of every month was set by using the USMoBeginDate Repository Variable as the default value, defaults to the first date of the current month (October 1st, 2015 in this example).

Now I need to create another Variable Prompt for the End Date. The SQL statement used for Start Date can be reused for the values as we want the exact same values to be available for selection. I am going to specify the presentation variable to be named pv_end_dt, and the default value to be the USCurDate Repository Variable so that the End Date prompt always defaults to the current date.

Now all that’s left to do is put the prompt and report on the Dashboard. Here is the result.

So that concludes part 2 of Demystification of Variables. Please feel free to ask questions or leave me a comment! In part 3, I am going to talk about using built in front end functions and presentation variables to make reports dynamic for any series of time. Until next time.

Categories: BI & Warehousing

A Performance Analytics Application Case Study: Challenges and Successes

Mon, 2017-03-20 10:00

The Performance Analytics application is a collection of open source technologies that aids users in: identifying performance bottlenecks, identifying causes for slow report execution, discovering areas for performance optimization, and gathering meaningful insights into the health of an OBIEE environment.

This post focuses on lessons learned after a successful Performance Analytics application installation, where within one day of being operational it enabled us to identify and isolate a long-standing memory issue. Here's how.

OBIEE Performance Analytics Dashboards

Overview

Rittman Mead recently undertook an engagement with the remit to:

  • Carry out a health check on the current state of an OBIEE platform, architecture, and development process
  • Install the Rittman Mead Performance Analytics application, enabling rapid and accurate diagnostics of OBIEE issues

The client was on OBIEE 11g, having previously upgraded from 10g. The OBIEE Production environment was a three-node cluster running the latest version of the 11g release. It served around 150 users daily, of a registered user base of around 1000.

The client had a long-standing issue with memory alerts on the master node of the OBIEE cluster, but no evident architectural, hardware capacity, or software configuration issues were found after completing the health check.

Challenges and successes

Gather all relevant data

Performance Analytics gathers data from a variety of sources in order to provide a full stack view of the OBIEE environment.

  • Active Session History (ASH) - The Active Session History data is read from the v$ACTIVE_SESSION_HISTORY system database view. Access to this data allows Performance Analytics users to have an understanding of the performance and state of the database at a given point in time, as it provides information such as the SQL operation being performed, which application is executing the query, whether the query is actively being performed or is waiting for service, what state of execution the query is in, and many other useful statistics (see the example query after this list).

  • Operating System Metrics - Unix-based Operating Systems contain several commands used to gather information about the performance or status of the server such as vmstat, iostat, netstat, top and df. Performance Analytics utilizes the output of these commands to display the current status of the OS at a given point in time.

  • Usage Tracking - The Oracle BI Server supports the collection of usage tracking data. When usage tracking is enabled, the Oracle BI Server collects usage tracking data for each query, and it writes statistics to a usage tracking log file or inserts them directly into a database table. Access to this data allows Performance Analytics users to have an understanding of the performance of the BI Server and specific reports in the OBIEE environment at any given point in time.

  • OBIEE metrics - OBIEE has the capability to expose internal performance data through the Dynamic Monitoring Service (DMS). The data exposed contains information such as Connection Pool statistics, JVM statistics, the number of active threads, and much more. Access to this data allows Performance Analytics to record the current status of many of the OBIEE components and intricacies found within the tool.
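As an illustration of the kind of detail the Active Session History source exposes, a query along these lines (the columns are standard ones on the view; the one-hour window is just an example) shows what database sessions have been doing recently:

select sample_time, session_id, sql_id, session_state, event
from v$active_session_history
where sample_time > sysdate - 1/24;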

Performance Analytics was deployed using Docker in a couple of days, requiring absolutely no software installation on any machine other than the monitoring server. All configuration settings are held in one file, and it was sufficient to add connection details of each server to it in order to gather all aforementioned data.

Accurately diagnose issues

By combining operating system metrics (CPU, memory, etc.) with internal OBIEE metrics and those from the database, Performance Analytics gives a "single pane of glass" view on the behaviour of the whole stack. This enables correlations in behaviour to be easily identified, and issues drilled into using the analysis capabilities of the tool.

Within a day of being installed, Performance Analytics enabled the client to accurately diagnose a long-standing issue with memory alerts on the OBIEE master node. The cause was traced to the export to Excel of a large dataset by a single user.

Workload Planning

Performance Analytics allows you to capture system status and workload over time, so you can see how the system responds to peak loads in real time.

With Performance Analytics the client is now able to estimate the maximum workload the current architecture can support before issues start to appear, and whether it is going to cope with next year's workload.

Conclusion

Performance Analytics just paid for itself.

Performance Analytics collects all relevant data and makes it accessible from one central location, allowing users to investigate performance inquiries promptly and simply. Instead of forcing users to dig through database records or a linux server manually, they can access all of the same data through a set of dashboards engineered to facilitate discovery from the collected data.

If you’d like to find out more about the Performance Analytics service offered by Rittman Mead, please get in touch.

Categories: BI & Warehousing

Property Graph in Oracle 12.2

Fri, 2017-03-10 11:00

The latest release of Oracle (12.2) includes support for Property Graph, previously available only as part of the Big Data Spatial and Graph tool. Unlike the latter, in which data is held in a NoSQL store (Oracle NoSQL, or Apache HBase), it is now possible to use the Oracle Database itself for holding graph definitions and analysing them.

Here we'll see this in action, using the same dataset as I've previously used - the "Panama Papers".

My starting point is the Oracle Developer Day VM, which at under 8GB is a tenth of the size of the beast that is the BigDataLite VM. BDL is great for exploring the vast Big Data ecosystem, both within and external to the Oracle world. However the Developer Day VM serves our needs perfectly here, having been recently updated for the 12.2 release of Oracle. You can also use DB 12.2 in Oracle Cloud, as well as the Docker image.

Prepare Database for Property Graph

The steps below are based on Zhe Wu's blog "Graph Database Says Hello from the Cloud (Part III)", modified slightly for the differing SIDs etc on Developer Day VM.

First, set the Oracle environment by running from a bash prompt

. oraenv

When prompted for SID enter orcl12c:

[oracle@vbgeneric ~]$ . oraenv
ORACLE_SID = [oracle] ? orcl12c  
ORACLE_BASE environment variable is not being set since this  
information is not available for the current user ID oracle.  
You can set ORACLE_BASE manually if it is required.  
Resetting ORACLE_BASE to its previous value or ORACLE_HOME  
The Oracle base has been set to /u01/app/oracle/product/12.2/db_1  
[oracle@vbgeneric ~]$

Now launch SQL*Plus:

sqlplus sys/oracle@localhost:1521/orcl12c as sysdba  

and from the SQL*Plus prompt create a tablespace in which the Property Graph data will be stored:

alter session set container=orcl;

create bigfile tablespace pgts  
datafile '?/dbs/pgts.dat' size 512M reuse autoextend on next 512M maxsize 10G  
EXTENT MANAGEMENT LOCAL  
segment space management auto;  

Now you need to do a bit of work to update the database to hold larger string sizes, using the following steps.

In SQL*Plus:

ALTER SESSION SET CONTAINER=CDB$ROOT;  
ALTER SYSTEM SET max_string_size=extended SCOPE=SPFILE;  
shutdown immediate;  
startup upgrade;  
ALTER PLUGGABLE DATABASE ALL OPEN UPGRADE;  
EXIT;  

Then from the bash shell:

cd $ORACLE_HOME/rdbms/admin  
mkdir /u01/utl32k_cdb_pdbs_output  
mkdir /u01/utlrp_cdb_pdbs_output  
$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -l '/u01/utl32k_cdb_pdbs_output' -b utl32k_cdb_pdbs_output utl32k.sql

When prompted, enter SYS password (oracle)

After a short time you should get output:

catcon.pl: completed successfully  

Now back into SQL*Plus:

sqlplus sys/oracle@localhost:1521/orcl12c as sysdba  

and restart the database instances:

shutdown immediate;  
startup;  
ALTER PLUGGABLE DATABASE ALL OPEN READ WRITE;  
exit  

Run a second script from the bash shell:

$ORACLE_HOME/perl/bin/perl $ORACLE_HOME/rdbms/admin/catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -l '/u01/utlrp_cdb_pdbs_output' -b utlrp_cdb_pdbs_output utlrp.sql

Again, enter SYS password (oracle) when prompted. This step then takes a while (c.15 minutes) to run, so be patient. Eventually it should finish and you'll see:

catcon.pl: completed successfully  

Now to validate that the change has worked. Fire up SQL*Plus:

sqlplus sys/oracle@localhost:1521/orcl12c as sysdba  

And check the value for max_string, which should be EXTENDED:

alter session set container=orcl;  
SQL> show parameters max_string;

NAME                                 TYPE        VALUE  
------------------------------------ ----------- ------------------------------
max_string_size                      string      EXTENDED  

Load Property Graph data from Oracle Flat File format

Now we can get going with our Property Graph. We're going to use Gremlin, a Groovy-based interpreter, for interacting with PG. As of Oracle 12.2, it ships with the product itself. Launch it from bash:

cd $ORACLE_HOME/md/property_graph/dal/groovy  
sh gremlin-opg-rdbms.sh

--------------------------------
Mar 08, 2017 8:52:22 AM java.util.prefs.FileSystemPreferences$1 run  
INFO: Created user preferences directory.  
opg-oracledb>  

First off, let's create the Property Graph object in Oracle itself. Under the covers, this will set up the necessary database objects that will store the data.

cfg = GraphConfigBuilder.\  
        forPropertyGraphRdbms().\
        setJdbcUrl("jdbc:oracle:thin:@127.0.0.1:1521/ORCL").\
        setUsername("scott").\
        setPassword("oracle").\
        setName("panama").\
        setMaxNumConnections(8).\
        build();
opg = OraclePropertyGraph.getInstance(cfg);  

You can also do this with the PL/SQL command exec opg_apis.create_pg('panama', 4, 8, 'PGTS');. Either way, the effect is the same; a set of tables created in the owner's schema:

SQL> select table_name from user_tables;  
TABLE_NAME  
------------------------------------------
PANAMAGE$  
PANAMAGT$  
PANAMAVT$  
PANAMAIT$  
PANAMASS$  

Now let's load the data. I'm using the Oracle Flat File format here, having converted it from the original CSV format using R. For more details of why and how, see my article here.

From the Gremlin prompt, run:

// opg.clearRepository();     // start from scratch
opgdl=OraclePropertyGraphDataLoader.getInstance();  
efile="/home/oracle/panama_edges.ope"  
vfile="/home/oracle/panama_nodes.opv"  
opgdl.loadData(opg, vfile, efile, 1, 10000, true, null);  

This will take a few minutes. Once it's completed you'll get null response, but can verify the data has successfully loaded using the opg.Count* functions:

opg-oracledb> opgdl.loadData(opg, vfile, efile, 1, 10000, true, null);  
==>null
opg-oracledb> opg.countEdges()  
==>1265690
opg-oracledb> opg.countVertices()  
==>838295

We can inspect the data in Oracle itself too. Here I'm using SQLcl, which is available by default on the Developer Day VM. Using the ...VT$ table we can query the distinct properties of the nodes (vertices) in the graph:

SQL> select distinct k from panamaVT$;  
K  
----------------------------
Entity incorporation.date  
Entity company.type  
Entity note  
ID  
Officer icij.id  
Countries  
Type  
Entity status  
Country  
Source ID  
Country Codes  
Entity struck.off.date  
Entity address  
Name  
Entity jurisdiction  
Entity jurisdiction.description  
Entity dorm.date

17 rows selected.  

Inspect the edges:

[oracle@vbgeneric ~]$ sql scott/oracle@localhost:1521/orcl

SQL> select p.* from PANAMAGE$ p where rownum<5;

       EID       SVID       DVID EL               K       T V      VN VT     SL VTS  VTE  FE
---------- ---------- ---------- ---------------- ---- ---- ---- ---- ---- ---- ---- ---- ----
         6          6     205862 officer_of
        11         11     228601 officer_of
        30         36     216748 officer_of
        34         39     216487 officer_of

SQL>  

You can also natively execute some of the Property Graph algorithms from PL/SQL itself. Here is how to run the PageRank algorithm, which can be used to identify the most significant nodes in a graph, assigning them each a score (the "page rank" value):

set serveroutput on  
DECLARE  
    wt_pr  varchar2(2000); -- name of the table to hold PR value of the current iteration
    wt_npr varchar2(2000); -- name of the table to hold PR value for the next iteration
    wt3    varchar2(2000);
    wt4    varchar2(2000);
    wt5    varchar2(2000);
    n_vertices number;
BEGIN  
    wt_pr := 'panamaPR';
    opg_apis.pr_prep('panamaGE$', wt_pr, wt_npr, wt3, wt4, null);
    dbms_output.put_line('Working table names  ' || wt_pr
       || ', wt_npr ' || wt_npr || ', wt3 ' || wt3 || ', wt4 '|| wt4);
    opg_apis.pr('panamaGE$', 0.85, 10, 0.01, 4, wt_pr, wt_npr, wt3, wt4, 'SYSAUX', null, n_vertices)
;
END;  
/

When run this creates a new table with the PageRank score for each vertex in the graph, which can then be queried as any other table:

SQL> select * from panamaPR  
  2  order by PR desc
  3* fetch first 5 rows only;
      NODE         PR          C
---------- ---------- ----------
    236724 8851.73652          0
    288469 904.227685          0
    264051 667.422717          0
    285729 562.561604          0
    237076 499.739316          0

On its own, this is not so much use; but joined to the vertices table, we can now find out, within our graph, the top ranked vertices:

SQL> select pr.pr, v.k,v.V from panamaPR pr inner join PANAMAVT$ V on pr.NODE = v.vid where v.K = 'Name' order by PR desc fetch first 5 rows only;  
        PR K          V
---------- ---------- ---------------
8851.73652 Name       Portcullis TrustNet Chambers P.O. Box 3444 Road Town- Tortola British Virgin Isl  
904.227685 Name       Unitrust Corporate Services Ltd. John Humphries House- Room 304 4-10 Stockwell Stre  
667.422717 Name       Company Kit Limited Unit A- 6/F Shun On Comm Bldg. 112-114 Des Voeux Road C.- Hong  
562.561604 Name       Sealight Incorporations Limited Room 1201- Connaught Commercial Building 185 Wanc  
499.739316 Name       David Chong & Co. Office B1- 7/F. Loyong Court 212-220 Lockhart Road Wanchai Hong K

SQL>  

Since our vertices in this graph have properties, including "Type", we can also analyse it by that - the following shows the top ranked vertices that are Officers:

SQL> select V.vid, pr.pr from panamaPR pr inner join PANAMAVT$ V on pr.NODE = v.vid where v.K = 'Type' and v.V = 'Officer' order by PR desc fetch first 5 rows only;  
       VID         PR
---------- ----------
  12171184 1.99938104
  12030645 1.56722346
  12169701 1.55754873
  12143648 1.46977361
  12220783 1.39846834

which we can then put in a subquery to show the details for these nodes:

with OfficerPR as  
        (select V.vid, pr.pr
          from panamaPR pr
               inner join PANAMAVT$ V
               on pr.NODE = v.vid
         where v.K = 'Type' and v.V = 'Officer'
      order by PR desc
      fetch first 5 rows only)
select pr2.pr,v2.k,v2.v  
from OfficerPR pr2  
     inner join panamaVT$ v2
     on pr2.vid = v2.vid
where v2.k in ('Name','Countries');  
        PR K          V
---------- ---------- -----------------------
1.99938104 Countries  Guernsey  
1.99938104 Name       Cannon Asset Management Limited re G006  
1.56722346 Countries  Gibraltar  
1.56722346 Name       NORTH ATLANTIC TRUST COMPANY LTD. AS TRUSTEE THE DAWN TRUST  
1.55754873 Countries  Guernsey  
1.55754873 Name       Cannon Asset Management Limited re J006  
1.46977361 Countries  Portugal  
1.46977361 Name       B-49-MARQUIS-CONSULTADORIA E SERVICOS (SOCIEDADE UNIPESSOAL) LDA  
1.39846834 Countries  Cyprus  
1.39846834 Name       SCIVIAS TRUST  MANAGEMENT LTD

10 rows selected.  

But here we get into the limitations of SQL - already this is starting to look like a bit of a complex query to maintain. This is where PGQL comes in, as it enables us to express the above request much more eloquently. The key thing with PGQL is that it understands the concept of a 'node', which removes the need for the convoluted sub-select that I had to do above to first identify the top-ranked nodes that had a given property (Type = Officer), and then for those identified nodes show information about them (Name and Countries). The above SQL could be expressed in PGQL simply as:

SELECT n.pr, n.name, n.countries  
WHERE (n WITH Type =~ 'Officer')  
ORDER BY n.pr limit 5  

At the moment Property Graph in the Oracle DB doesn't support PGQL - but I'd expect to see it in the future.

Jupyter Notebooks

As well as working with the Property Graph in SQL and Gremlin, we can use the Python API. This is shipped with Oracle 12.2. I'd strongly recommend using it through a Notebook, and this provides an excellent environment in which to prototype code and explore the results. Here I'll use Jupyter, but Apache Zeppelin is also very good.

First let's install Anaconda Python, which includes Jupyter Notebooks:

wget https://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh  
bash Anaconda2-4.3.0-Linux-x86_64.sh  

In the install options I use the default path (/home/oracle) as the location, and keep the default (no)

Launch Jupyter, telling it to listen on any NIC (not just localhost). If you installed anaconda in a different path from the default you'll need to amend the /home/oracle/ bit of the path.

/home/oracle/anaconda2/bin/jupyter notebook --ip 0.0.0.0

If you ran the above command from the terminal window within the VM, you'll get Firefox pop up with the following:

If you're using the VM headless you'll now want to fire up your own web browser, go to http://<ip>:8888 and use the token given in the startup log of Jupyter to log in.

Either way, you should now have a functioning Jupyter notebook environment.

Now let's install the Property Graph support into the Python & Jupyter environment. First, make sure you've got the right Python set, by confirming with which it's the anaconda version you installed, and when you run python you see Anaconda in the version details:

[oracle@vbgeneric ~]$ export PATH=/home/oracle/anaconda2/bin:$PATH
[oracle@vbgeneric ~]$ which python
~/anaconda2/bin/python
[oracle@vbgeneric ~]$ python -V
Python 2.7.13 :: Anaconda 4.3.0 (64-bit)  
[oracle@vbgeneric ~]$

Then run the following

cd $ORACLE_HOME/md/property_graph/pyopg  
touch README  
python ./setup.py install  

without the README being created, the install fails with IOError: [Errno 2] No such file or directory: './README'

You need to be connected to the internet for this as it downloads dependencies as needed. After a few screenfuls of warnings that appear OK to ignore, the installation should be successful:

[...]
creating /u01/userhome/oracle/anaconda2/lib/python2.7/site-packages/JPype1-0.6.2-py2.7-linux-x86_64.egg  
Extracting JPype1-0.6.2-py2.7-linux-x86_64.egg to /u01/userhome/oracle/anaconda2/lib/python2.7/site-packages  
Adding JPype1 0.6.2 to easy-install.pth file

Installed /u01/userhome/oracle/anaconda2/lib/python2.7/site-packages/JPype1-0.6.2-py2.7-linux-x86_64.egg  
Finished processing dependencies for pyopg==1.0  

Now you can use the Python interface to property graph (pyopg) from within Jupyter, as seen below. I've put the notebook on gist.github.com meaning that you can download it from there and run it yourself in Jupyter.

Categories: BI & Warehousing

Accelerating Your ODI Implementation, Rittman Mead Style

Thu, 2017-03-09 10:01
Introduction

Over the years, at Rittman Mead, we've built up quite a collection of tooling for ODI. We have accelerators, scripts, reports, templates and even entire frameworks at our disposal for when the right use case arises. Many of these tools exploit the power of the excellent ODI SDK to automate tasks that would otherwise be a chore to perform manually. Tasks like, topology creation, model automation, code migration and variable creation.

In this blog post, I'm going to give you a demo of our latest addition, a tool that allows you to programmatically create ODI mappings (and a few other tricks).

So you may be thinking, isn't that already possible using the ODI SDK? And you'd be right, it most definitely is. There are many examples out there that show you how it's done, but they all have one thing in common: they create a fairly simple mapping, with, relatively speaking, quite a lot of code, and are only useful for creating that one mapping.

And herein lies the obvious question: why would you create a mapping using the ODI SDK when it's quicker to use ODI Studio?

And the obvious answer is... you wouldn't, unless you were trying to automate the creation of multiple mappings using metadata.

This is a reasonable approach using the raw ODI SDK, the typical use case being the automation of your source-to-stage mappings. These mappings tend to be simple 1-to-1 mappings, the low-hanging fruit of automation if you like. The problem arises, though, when you want to automate the creation of a variety of more complex mappings: you run the risk of spending more time writing the automation code than you would actually save through the automation itself. The point of diminishing return can creep up pretty quickly.

The principle, however, is sound. Automate as much as possible by leveraging metadata and free up your ODI Developers to tackle the more complex stuff.

All Aboard the Rittman Mead Metadata Train !

What would be really nice is something more succinct, more elegant, something that allows us to create any mapping, with minimal code and fuss.

Something that will allow us to further accelerate...

  • Migrating to ODI from other ETL products
  • Greenfield ODI Projects
  • Day to Day ODI Development work

..all powered by juicy metadata.

These were the design goals for our latest tool. To meet these goals, we created a mini-mapping-language on top of the ODI SDK. This mini-mapping-language abstracts away the SDK's complexities, while, at the same time, retaining its inherent power. We call this mini-mapping-language OdiDsl (Oracle Data Integrator Domain Specific Language), catchy heh?!

OdiDsl

OdiDsl is written in Groovy and looks something like this...

/*
 * OdiDsl to create a SCD2 dimension load mapping.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
            .join('DEPT_JOBS', ['JOBS'], [join_condition: "EMPLOYEES.JOB_ID = JOBS.JOB_ID" ])
            .connect("D_EMPLOYEE", [
                                        [ attr: "employee_id", key_indicator: true ],
                                        [ attr: "eff_from_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "eff_to_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "current_flag", expression: 1, execute_on_hint: "TARGET"],
                                        [ attr: "surr_key", expression: ":RM_PROJECT.D_EMPLOYEE_SEQ_NEXTVAL", execute_on_hint: "TARGET"],
                                   ])
        .commit()
        .validate()

The above code will create the following, fully functional, mapping in ODI 12c (sorry 11g).

It should be fairly easy to eyeball the code and reconcile it with the above mapping image. We can see that we are specifying our datastores, selecting the EMPLOYEES datastore, adding a filter, a couple of joins and then connecting to our target. OdiDsl has been designed in such a way that it mimics the flow based style of ODI 12c's mappings by chaining components onto one another.

Creating a Mapping Using OdiDsl

Let's walk through the above code, starting with just the datastores, adding the rest as we go along...

Datastores

We start by creating the mapping with mapping.create( <project>, <folder>, <mapping name>). We then chain the .datastores(), .commit() and .validate() methods onto it using the "dot" notation. The .datastores() method is the only method you can chain directly onto mapping.create() as it's a requirement to add some datastores before you start building up the mapping. The .commit() method persists the mapping in the repository and the .validate() method calls ODI's validation routine on the mapping to check if all is ok.

/*
 * OdiDsl to create a mapping with 4 datastores.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .commit()
        .validate()

When we execute this code it returns the following to the console. You can see that the mapping has been dropped/created and that ODI has some validation warnings for us.

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component EMPLOYEES has no input or output connections.
  WARNING: Mapping component DEPARTMENTS has no input or output connections.
  WARNING: Mapping component JOBS has no input or output connections.
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.

And here is the mapping in ODI - well, it's a start at least...

Starting the Flow with a Filter

Before we can start building up the rest of the mapping we need to select a starting datastore to chain off, you've got to start somewhere right? For that, we call .select("EMPLOYEES"), which is a bit like clicking and selecting the component in ODI Studio. The .filter() method is then chained onto it, passing in the filter name and some configuration, in this case, the actual filter condition.

/*
 * OdiDsl to create a mapping with 4 datastores and a filter.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ]) 
        .commit()
        .validate()

We now have an error in the validation results. This is expected as our filter doesn't connect to anything downstream yet.

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component DEPARTMENTS has no input or output connections.
  WARNING: Mapping component JOBS has no input or output connections.
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.
  ERROR: Mapping component NAME_FILTER must have a connection for output connector point OUTPUT1.

And here's the mapping, as you can see the filter is connected to the EMPLOYEES datastore output connector.

Adding a Join

Next we'll create the join between the filter and the DEPARTMENTS table. To do this we can just chain a .join() onto the .filter() method and pass in some arguments to specify the join name, what it joins to and the join condition itself.

/*
 * OdiDsl to create a mapping with 4 datastores a filter and a join.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
        .commit()
        .validate()

Only 2 validation warnings and no errors this time...

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component JOBS has no input or output connections.
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.

We now have a join named EMP_DEPT joining DEPARTMENTS and the filter, NAME_FILTER, together.

Adding another Join

We'll now do the same for the final join.

/*
 * OdiDsl to create a mapping with 4 datastores, a filter and 2 joins.
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
            .join('DEPT_JOBS', ['JOBS'], [join_condition: "EMPLOYEES.JOB_ID = JOBS.JOB_ID" ])      
        .commit()
        .validate()

looking better all the time...

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created

  Validation Results
  ------------------
  WARNING: Mapping component D_EMPLOYEE has no input or output connections.

And we now have a join named DEPT_JOBS joining JOBS and the join, EMP_DEPT, to each other.

Connecting to the target

The final step is to connect the DEPT_JOBS join to our target datastore, D_EMPLOYEE. For this we can use the .connect() method. This method is used to map upstream attributes to a datastore. When you perform this action in ODI Studio, you'll be prompted with the attribute matching dialog, with options to auto-map the attributes.

OdiDsl will, by default, auto-map all attributes that are not explicitly mapped in the .connect() method. In our completed code example below we are explicitly mapping several attributes to support SCD2 functionality, auto-map takes care of the rest.

/*
 * OdiDsl to create a SCD2 dimension load mapping
 */

mapping.drop('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .datastores([
                [name: "HR.EMPLOYEES"],
                [name: "HR.DEPARTMENTS"],
                [name: "HR.JOBS"],
                [name: "PERF.D_EMPLOYEE", integration_type: "SCD"],
        ])
        .select("EMPLOYEES")
            .filter('NAME_FILTER', [filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'D%'" ])
            .join('EMP_DEPT', ['DEPARTMENTS'], [join_condition: "EMPLOYEES.DEPARTMENT_ID = DEPARTMENTS.DEPARTMENT_ID" ])
            .join('DEPT_JOBS', ['JOBS'], [join_condition: "EMPLOYEES.JOB_ID = JOBS.JOB_ID" ])
            .connect("D_EMPLOYEE", [
                                        [ attr: "employee_id", key_indicator: true ],
                                        [ attr: "eff_from_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "eff_to_date", expression: "sysdate", execute_on_hint: "TARGET"],
                                        [ attr: "current_flag", expression: 1, execute_on_hint: "TARGET"],
                                        [ attr: "surr_key", expression: ":RM_PROJECT.D_EMPLOYEE_SEQ_NEXTVAL", execute_on_hint: "TARGET"],
                                   ])
        .commit()
        .validate()

Nice, all validated this time.

Connecting to the repository...

mapping EMPLOYEE_DIM_LOAD dropped  
mapping EMPLOYEE_DIM_LOAD created  
Validation Successful  


What about Updates ?

Yes. We can also update an existing mapping using mapping.update(<project>, <folder>, <mapping name>). This is useful when you need to make changes to multiple mappings or when you can't drop and recreate a mapping due to a dependency. The approach is the same: we start by selecting a component with .select() and then chain a method onto it, in this case .config().

mapping.update('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')  
        .select('DEPT_JOBS')
            .config([join_type: "LEFT_OUTER"])


Which Properties Can I Change for each Component ?

Probably more than you'll ever need to. OdiDsl mirrors the properties that are available for each component in ODI Studio via the SDK.
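
As a minimal sketch, here are two single-property updates. Both property keys (join_type and filter_condition) already appear in the examples earlier in this post; other component properties exposed by the SDK follow the same pattern.

mapping.update('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')
        .select('DEPT_JOBS')
            .config([join_type: "LEFT_OUTER"])        // switch the join to a left outer join

mapping.update('MY_PROJECT', 'DEMO_FOLDER', 'EMPLOYEE_DIM_LOAD')
        .select('NAME_FILTER')
            .config([filter_condition: "EMPLOYEES.FIRST_NAME LIKE 'E%'"])    // change the filter predicate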

Can We Generate OdiDsl Code From an Existing Mapping ?

Yes, we can do that too, with .reverse(). This allows you to work in the opposite direction: take a mapping that already exists in ODI and generate the OdiDsl code that would recreate it.

Let's take this hand-built, fictional and completely CRAZY_MAPPING as an example (fictional and crazy in the sense that it does nothing useful; the flow and configuration are, however, completely valid).

If we execute .reverse() on this mapping by calling...

mapping.reverse('MY_PROJECT', 'DEMO_FOLDER', 'CRAZY_MAPPING')  

...OdiDsl will return the following output to the console. What you are seeing here is the OdiDsl required to recreate the crazy mapping above.

Connecting to the repository...

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'CRAZY_MAPPING')  
    .datastores([
         ['name':'STAGING.TABLE1', 'alias':'TABLE1'],
         ['name':'STAGING.TABLE9', 'alias':'TABLE9'],
         ['name':'STAGING.TABLE3', 'alias':'TABLE3'],
         ['name':'STAGING.TABLE4', 'alias':'TABLE4'],
         ['name':'STAGING.TABLE6', 'alias':'TABLE6'],
         ['name':'STAGING.TABLE5', 'alias':'TABLE5'],
         ['name':'STAGING.TABLE7', 'alias':'TABLE7'],
         ['name':'STAGING.TABLE2', 'alias':'TABLE2'],
         ['name':'STAGING.TABLE8', 'alias':'TABLE8'],
         ['name':'STAGING.TABLE11', 'alias':'TABLE11'],
         ['name':'STAGING.TABLE12', 'alias':'TABLE12'],
         ['name':'STAGING.TABLE13', 'alias':'TABLE13'],
         ['name':'STAGING.TABLE15', 'alias':'TABLE15'],
         ['name':'STAGING.TABLE14', 'alias':'TABLE14'],
         ['name':'STAGING.TABLE16', 'alias':'TABLE16'],
         ['name':'STAGING.TABLE17', 'alias':'TABLE17'],
         ['name':'STAGING.TABLE42', 'alias':'TABLE42'],
    ])
    .select('TABLE5')
        .join('JOIN2', ['TABLE7'], [join_condition: "TABLE5.ID = TABLE7.ID" ])
        .join('JOIN3', ['TABLE6'], [join_condition: "TABLE6.ID = TABLE7.ID" ])
        .connect('TABLE14', [
                [ attr: "ID", expression: "TABLE5.ID" ],
                [ attr: "COL1", expression: "TABLE7.COL1" ],
                [ attr: "COL2", expression: "TABLE6.COL2" ],
                [ attr: "COL3", expression: "TABLE7.COL3" ],
                [ attr: "COL4", expression: "TABLE7.COL4" ],
        ])
    .select('JOIN3')
        .expr('EXPRESSION1', [attrs: [
                [ attr: "ID", expression: "TABLE6.ID * 42", datatype: "NUMERIC", size: "38", scale: "0"]]])
        .connect('TABLE15', [
                [ attr: "ID", expression: "EXPRESSION1.ID" ],
                [ attr: "COL1", expression: "", active_indicator: false ],
                [ attr: "COL2", expression: "TABLE6.COL2" ],
                [ attr: "COL3", expression: "TABLE7.COL3" ],
                [ attr: "COL4", expression: "", active_indicator: false ],
        ])
        .join('JOIN', ['TABLE14'], [join_condition: "TABLE14.ID = TABLE15.ID" ])
        .filter('FILTER2', [filter_condition: "TABLE15.COL3 != 'FOOBAR'" ])
        .connect('TABLE16', [
                [ attr: "ID", expression: "TABLE15.ID" ],
                [ attr: "COL1", expression: "TABLE15.COL1" ],
                [ attr: "COL2", expression: "TABLE14.COL2" ],
                [ attr: "COL3", expression: "TABLE14.COL3" ],
                [ attr: "COL4", expression: "TABLE14.COL4" ],
        ])
    .select('JOIN')
        .connect('TABLE17', [
                [ attr: "ID", expression: "TABLE15.ID" ],
                [ attr: "COL1", expression: "TABLE15.COL1" ],
                [ attr: "COL2", expression: "TABLE14.COL2" ],
                [ attr: "COL3", expression: "TABLE14.COL3" ],
                [ attr: "COL4", expression: "TABLE14.COL4" ],
        ])
    .select('TABLE5')
        .sort('SORT1', [sorter_condition: "TABLE5.ID, TABLE5.COL2, TABLE5.COL4" ])
        .connect('TABLE13', [
                [ attr: "ID", expression: "TABLE5.ID" ],
                [ attr: "COL1", expression: "TABLE5.COL1" ],
                [ attr: "COL2", expression: "TABLE5.COL2" ],
                [ attr: "COL3", expression: "TABLE5.COL3" ],
                [ attr: "COL4", expression: "TABLE5.COL4" ],
        ])
    .select('TABLE3')
        .filter('FILTER1', [filter_condition: "TABLE3.ID != 42" ])
    .select('TABLE4')
        .filter('FILTER', [filter_condition: "TABLE4.COL1 = 42" ])
        .lookup('LOOKUP1', 'FILTER1', [join_condition: "TABLE4.ID = TABLE3.ID AND TABLE3.COL1 = TABLE4.COL1"])
        .join('JOIN5', ['TABLE13'], [join_condition: "TABLE13.ID = TABLE3.ID" ])
        .distinct('DISTINCT_', [attrs: [
                [ attr: "COL3_1", expression: "TABLE4.COL3", datatype: "VARCHAR", size: "30"],
                [ attr: "COL4_1", expression: "TABLE4.COL4", datatype: "VARCHAR", size: "30"]]])
    .select('DISTINCT_')
        .join('JOIN4', ['EXPRESSION1'], [join_condition: "TABLE5.ID = TABLE6.COL1" ])
        .sort('SORT', [sorter_condition: "EXPRESSION1.ID" ])
        .connect('TABLE8', [
                [ attr: "ID", expression: "EXPRESSION1.ID" ],
                [ attr: "COL1", expression: "", active_indicator: false ],
                [ attr: "COL2", expression: "", active_indicator: false ],
                [ attr: "COL3", expression: "TABLE7.COL3" ],
                [ attr: "COL4", expression: "", active_indicator: false ],
        ])
        .connect('TABLE12', [
                [ attr: "ID", expression: "TABLE8.ID" ],
                [ attr: "COL1", expression: "TABLE8.COL1" ],
                [ attr: "COL2", expression: "TABLE8.COL2" ],
                [ attr: "COL3", expression: "TABLE8.COL3" ],
                [ attr: "COL4", expression: "TABLE8.COL4" ],
        ])
    .select('TABLE9')
        .expr('EXPRESSION', [attrs: [
                [ attr: "ID", expression: "TABLE9.ID *42", datatype: "NUMERIC", size: "38", scale: "0"],
                [ attr: "COL4", expression: "TABLE9.COL4 || 'FOOBAR'", datatype: "VARCHAR", size: "30"]]])
        .connect('TABLE1', [
                [ attr: "ID", expression: "EXPRESSION.ID" ],
                [ attr: "COL1", expression: "", active_indicator: false ],
                [ attr: "COL2", expression: "", active_indicator: false ],
                [ attr: "COL3", expression: "", active_indicator: false ],
                [ attr: "COL4", expression: "TABLE9.COL4" ],
        ])
        .join('JOIN1', ['TABLE2'], [join_condition: "TABLE1.ID = TABLE2.ID" ])
        .aggregate('AGGREGATE', [attrs: [
                [ attr: "ID", expression: "TABLE1.ID", datatype: "NUMERIC", size: "38", scale: "0", group_by: "YES"],
                [ attr: "COL4_1", expression: "MAX(TABLE2.COL4)", datatype: "VARCHAR", size: "30", group_by: "AUTO"]]])
        .lookup('LOOKUP', 'DISTINCT_', [join_condition: "AGGREGATE.ID = DISTINCT_.COL3_1"])
        .aggregate('AGGREGATE1', [attrs: [
                [ attr: "ID", expression: "AGGREGATE.ID", datatype: "NUMERIC", size: "38", scale: "0", group_by: "YES"],
                [ attr: "COL4_1_1", expression: "SUM(AGGREGATE.COL4_1)", datatype: "VARCHAR", size: "30", group_by: "AUTO"]]])
        .filter('FILTER3', [filter_condition: "AGGREGATE1.COL4_1_1 > 42" ])
        .connect('TABLE42', [
                [ attr: "ID", expression: "AGGREGATE1.ID" ],
        ])
    .select('AGGREGATE1')
        .join('JOIN6', ['TABLE8'], [join_condition: "AGGREGATE1.ID = TABLE8.ID" ])
        .connect('TABLE11', [
                [ attr: "ID", expression: "TABLE8.ID" ],
                [ attr: "COL1", expression: "" ],
                [ attr: "COL2", expression: "" ],
                [ attr: "COL3", expression: "TABLE8.COL3" ],
                [ attr: "COL4", expression: "TABLE8.COL4" ],
        ])
    .commit()
    .validate()

When we execute this OdiDsl code we get, you guessed it, exactly the same crazy mapping with the flow and component properties all intact.

Being able to flip between ODI Studio and OdiDsl has some really nice benefits for those who like this workflow. You can start prototyping a mapping in ODI Studio, convert it to code, hack around for a bit and then reflect it all back into ODI. It's also very useful for generating a "code template" from an existing mapping. The generated code template can be modified to accept variables instead of hard-coded properties; all you need then is some metadata.

Did Somebody Say Metadata ?

Metadata is the key to bulk automation. You can find metadata in all kinds of places. If you are migrating to ODI from another product then there will be a whole mass of metadata living in your current product's repository, or available via some kind of export routine that typically produces XML files. If you are starting a fresh ODI implementation, then there will be metadata living in your source and target systems, in data dictionaries, in Excel sheets, in mapping specification documents, all over the place really. This is the kind of metadata that can be used to feed OdiDsl.

A Quick Example of One Possible Approach to Using OdiDsl with Metadata

First we build a mapping in ODI Studio; this will act as our template mapping.

We then generate the equivalent OdiDsl code using mapping.reverse('MY_PROJECT', 'DEMO_FOLDER', 'FEED_ME_METADATA'), which gives us this code.

mapping.create('MY_PROJECT', 'DEMO_FOLDER', 'FEED_ME_METADATA')  
    .datastores([
         ['name':'STAGING.TABLE1', 'alias':'LOOKUP_TABLE'],
         ['name':'STAGING.TABLE2', 'alias':'TABLE2'],
         ['name':'STAGING.TABLE3', 'alias':'TABLE3'],
         ['name':'STAGING.TABLE4', 'alias':'TARGET_TABLE'],
    ])
    .select('TABLE2')
        .lookup('LOOKUP', 'LOOKUP_TABLE', [join_condition: "TABLE2.ID = LOOKUP_TABLE.ID"])
        .join('JOIN', ['TABLE3'], [join_condition: "TABLE2.ID = TABLE3.ID" ])
        .filter('FILTER', [filter_condition: "TABLE3.COL1 = 'FILTER'" ])
        .expr('EXPRESSION', [attrs: [
                [ attr: "CONSTANT", expression: "42", datatype: "VARCHAR", size: "30"]]])
        .connect('TARGET_TABLE', [
                [ attr: "ID", expression: "LOOKUP_TABLE.ID" ],
                [ attr: "COL1", expression: "LOOKUP_TABLE.COL1 || EXPRESSION.CONSTANT" ],
                [ attr: "COL2", expression: "TABLE2.COL2" ],
                [ attr: "COL3", expression: "TABLE3.COL3" ],
                [ attr: "COL4", expression: "LOOKUP_TABLE.COL4" ],
        ])
    .commit()
    .validate()

We now need to decorate this code with some variables; these variables will act as placeholders for our metadata. The metadata we are going to use comes from a database table. I'm keeping it simple for the purposes of this demonstration, but the approach is the same for other sources. Our metadata table has 10 rows, and from these 10 rows we are going to create 10 mappings, replacing certain properties with the values from the columns.

Remember that OdiDsl is expressed in Groovy. That means, as well as OdiDsl code, we also have access to the entire Groovy language. In the following code we are using a mixture of Groovy and OdiDsl. We are connecting to a database, grabbing our metadata and then looping over mapping.create(), once for each row in our metadata table. The columns in the metadata table are represented as the variables row.datastore, row.constant_expr and row.filter_cond. The code comments indicate where we are substituting these variables in place of our previously hard coded property values.

import groovy.sql.Sql

// Connect to the database and retrieve rows from the METADATA table.
def sqlConn = Sql.newInstance("jdbc:oracle:thin:@hostname:1521/pdborcl", "username", "password", "oracle.jdbc.pool.OracleDataSource")  
def rows = sqlConn.rows("SELECT * FROM METADATA")  
sqlConn.close()

// For each row in our METADATA table
rows.eachWithIndex() {  row, index ->

    mapping.create('MY_PROJECT', 'DEMO_FOLDER', "FEED_ME_METADATA_${index+1}") // Interpolate row number to make the mapping name unique
            .datastores([
                    ['name': 'STAGING.TABLE1', 'alias': 'LOOKUP_TABLE'],
                    ['name': "STAGING.${row.datastore}" ], // substitute in a different datastore
                    ['name': 'STAGING.TABLE3', 'alias': 'TABLE3'],
                    ['name': 'STAGING.TABLE4', 'alias': 'TARGET_TABLE'],
            ])
            .select(row.datastore)
                .lookup('LOOKUP', 'LOOKUP_TABLE', [join_condition: "${row.datastore}.ID = LOOKUP_TABLE.ID"]) // substitute in a different datastore
                .join('JOIN', ['TABLE3'], [join_condition: "${row.datastore}.ID = TABLE3.ID"]) // substitute in a different datastore
                .filter('FILTER', [filter_condition: "TABLE3.COL1 = '${row.filter_cond}'"]) // substitute in a different filter condition
                .expr('EXPRESSION', [attrs: [
                    [attr: "CONSTANT", expression: row.constant_expr, datatype: "VARCHAR", size: "30"]]]) // substitute in a different constant for the expression
                .connect('TARGET_TABLE', [
                    [attr: "ID", expression: "LOOKUP_TABLE.ID"],
                    [attr: "COL1", expression: "LOOKUP_TABLE.COL1 || EXPRESSION.CONSTANT"],
                    [attr: "COL2", expression: "${row.datastore}.COL2"], // substitute in a different datastore
                    [attr: "COL3", expression: "TABLE3.COL3"],
                    [attr: "COL4", expression: "LOOKUP_TABLE.COL4"],
                ])
            .commit()
            .validate()

}

Here is the output...

Connecting to the repository...

mapping FEED_ME_METADATA_1 created  
Validation Successful  
mapping FEED_ME_METADATA_2 created  
Validation Successful  
mapping FEED_ME_METADATA_3 created  
Validation Successful  
mapping FEED_ME_METADATA_4 created  
Validation Successful  
mapping FEED_ME_METADATA_5 created  
Validation Successful  
mapping FEED_ME_METADATA_6 created  
Validation Successful  
mapping FEED_ME_METADATA_7 created  
Validation Successful  
mapping FEED_ME_METADATA_8 created  
Validation Successful  
mapping FEED_ME_METADATA_9 created  
Validation Successful  
mapping FEED_ME_METADATA_10 created  
Validation Successful  

And here are our 10 mappings, each with its own configuration.

If we take a look at the FEED_ME_METADATA_5 mapping, we can see the metadata has been reflected into the mapping.

And that's about it. We've basically just built a mini accelerator using OdiDsl and we hardly had to write any code. The OdiDsl code was generated for us using .reverse(). All we really had to code was the connection to the database, a loop and a bit of variable substitution!

Summary

With the Rittman Mead ODI Toolkit, accelerating your ODI implementation has never been easier. If you are thinking about migrating to ODI from another product or embarking on a new ODI project, Rittman Mead can help. For more information please get in touch.

Categories: BI & Warehousing

Create Your Own DVD Plugin in 22 minutes

Wed, 2017-03-08 06:45
Introduction

Oracle DVD performed well for the task of election data analysis, but I faced some limitations and had to curtail my use of one of my favourite charts: the scatter chart. While DVD's implementation of it looks good, has smooth animations and is really useful, it is not suitable for visualising huge datasets. The election dataset has about 100K election precinct rows, and if I try to plot all of them on one chart I get something close to this. It doesn't really matter what the data is; just compare how the charts look.

Picture with Turnout%, United Russia% by PEC

This picture gives us roughly zero insight. Actually, I'm not even sure it shows all of my 100K points. What I want to get is something like this:
plugin with Turnout%, United Russia% by PEC (source: https://habrahabr.ru/post/313372/)

What can I see in this chart that I can't see in the vanilla one? First of all, it gives a much better understanding of how the points are distributed over the plot. I can see a relatively dense core around coordinates [30;40], then a sparser tail growing along both dimensions, and again a small and very dense core in the top-right corner.
Secondly, it not only shows dense and sparse areas but also reveals a few points with an unexplainably high concentration, around [64;62] for example.

Luckily, DVD allows me (and anyone else) to create custom plugins, and that's what I'm going to show here. The plugin I'm going to build is something I'd call a Minimum Viable Product. It's very, very simple and needs a lot of work to make it good and really useful.

The code I'm going to show is the simplest I could come up with. It doesn't handle errors or exceptions, it should be more modular, and it could probably use some optimisation. But that was done on purpose, and the purpose is simplicity. If you want a more comprehensive sample, there is a link to Oracle's guide in the Summary. I want to show you that writing a DVD plugin is not rocket science: it takes less than half an hour to build your own.

And the clock starts now!

[0:00-5:40] Environment and DVD SDK Setup

Download and install DVD. I presume you already have it because, in my experience, people who don't have it very rarely feel the need for its plugins. Hence I didn't include the download and installation time in the 22 minutes.

Create a folder to store your plugins and some Gradle-related configs and scripts. It may be any directory you like; just pick your favourite place.

Next step is to define some environment variables.

  • The first one is DVDESKTOP_SDK_HOME and it should point to the DVD folder. In most cases it is C:\Program Files\Oracle Data Visualization Desktop.
  • The second variable is PLUGIN_DEV_DIR; it points to the folder you created 40 seconds ago. These two variables aren't strictly necessary, but they make your life a bit easier.
  • And one more change to the environment before going further: add %DVDESKTOP_SDK_HOME%\tools\bin to the PATH variable (see how life is already easier with DVDESKTOP_SDK_HOME defined? You can start using it right now). The commands just after this list show one way to set all of this for the current session.
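
A rough sketch, assuming the default installation path and a hypothetical C:\dvd_plugins plugin folder (adjust both to your own machine, or set the variables permanently via the Windows environment variable dialog or setx):

rem Set the SDK and plugin development variables for the current cmd session only
set DVDESKTOP_SDK_HOME=C:\Program Files\Oracle Data Visualization Desktop
set PLUGIN_DEV_DIR=C:\dvd_plugins
rem Put the SDK command line tools (bicreateenv, bicreateplugin) on the PATH
set PATH=%PATH%;%DVDESKTOP_SDK_HOME%\tools\bin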

And the last step is to initialise our development environment. Open a cmd window, go to the plugins directory and execute the initialisation command:

cd %PLUGIN_DEV_DIR%  
bicreateenv  

This will not only initialise everything you need but also create a sample plugin to explore and research.

Let's take a look at what we got. In the folder we created earlier (%PLUGIN_DEV_DIR%), this command generated the following structure.

a screenshot of %PLUGIN_DEV_DIR%

All the names here are more than self-explanatory. You won't need to do anything with .gradle and gradle, and src stores all the code we will develop. Please notice that src already contains a sample plugin, sampleviz. You may use it as a reference for your own work.

Create First Plugin

Every plugin consists of a few parts, but we don't need to create them from scratch every time; Oracle gives us a simple way of creating a new plugin. If you didn't close the cmd window from the previous steps, execute the following:

bicreateplugin viz -id com.rittmanmead.demo -subType dataviz  

Obviously, you should use your own company name for the id. The last word (demo) plus the word Plugin is what the user will see as the visualisation name. Obviously we can change it, but not today.

This will create a fully operational plugin. Where can we find it? You shouldn't be too surprised if I say "in the src folder".

a screenshot of %PLUGIN_DEV_DIR%

And we may test it right now. Yes, its functionality is very limited, but you didn't expect it to do something useful, right?

Start SDK

Now we should start Oracle DVD in SDK mode. The main difference from normal mode is that DVD will start in your default browser. That's not a typo: DVD will run like an ordinary site in a browser. It's a good idea to make your default a browser with good developer tools; I prefer Chrome for this task, but that's not the only choice. Choose the browser you'd normally use for site development. When you've finished setting the default browser, do:

cd %PLUGIN_DEV_DIR%  
.\gradlew run

If you don't have an ultrafast computer, I'd recommend making some tea at this point, because the very first start takes a long time. In fact, it will consume most of this first stage.

[5:40-6:15] Test the Plugin

We haven't started changing the plugin yet, so I expect everything to be OK at this point, but it's a good idea to test anyway. I think we can afford 30-40 seconds for this.

And again I assume you know how to use DVD and don't need a step-by-step guide on creating a simple visualisation. Just create a new project, select any data source, go to Visualisations, find your plugin and add it. You may even add some data to the plugin and it will show how many rows your data has. I think that's a good place to start from.

a picture of plugin

[6:15-22:00] Add Functions

Change parameters

If you didn't skip the testing phase and played with this new toy for a while, you may have noticed two things we need to change right away. First, for the chosen type of visualisation, my plugin needs to accept two measures. You possibly noticed that I was unable to add Number of cancelled ballots as a second measure; by default a plugin accepts at most one measure (zero or one). Luckily this can be changed very easily.

We can find all the parameters for measures, rows, columns etc. inside the extensions folder, in this case %PLUGIN_DEV_DIR%\customviz\com-rittmanmead-demo\extensions. Here we can find two folders. The first one is oracle.bi.tech.pluginVisualizationDataHandler and it has only one file, com.rittmanmead.demo.visualizationDataHandler.JSON. This file allows us to define which types of input data our plugin accepts (measures, columns, rows, colour etc.), what their types are (measures/dimensions), which is the default, and so on.

a picture of json and plugin side-by-side

Here we should change maxCount to 2. This will allow us to add more than one measure to our visualisation.

The second thing I want to change is the number of data points. The elections dataset has data for about 100K precinct commissions, and DVD's default limit is 10K. We can change this value in the second folder under extensions, oracle.bi.tech.Visualization. Again there is only one JSON file, com.rittmanmead.demo.json, and in it we need to change rows to 200000.

Why 200000 if I need only 100000 points? Well, it's about how the data is passed to our code. Every measure is a separate row, so 100K points with 2 measures each gives me 200000 rows. Right now that seems to be the only way to have more than one measure in a plugin (at least to the best of my knowledge for DVD 12.2.2.2.0-20170208162627).

a picture of json and plugin side-by side

Note: I could also change things here like the plugin name, the input data type labels shown to the user and so on, but my aim is simplicity.

Now we should restart the DVD SDK in order to pick up the new parameters.

two measures and ~200K rows

Write code

For my custom scatter chart I'm going to use the d3.js JavaScript library. It will allow me to concentrate more on logic and less on HTML manipulation. To add it to my plugin I need to add a couple of lines at the beginning of my code.

Before:

define(['jquery',  
        'obitech-framework/jsx',
        'obitech-report/datavisualization',
        'obitech-reportservices/datamodelshapes',
        'obitech-reportservices/events',
        'obitech-appservices/logger',
        'ojL10n!com-rittmanmead-demo/nls/messages',
        'obitech-framework/messageformat',
        'css!com-rittmanmead-demo/demostyles'],
        function($,
                 jsx,
                 dataviz,
                 datamodelshapes,
                 events,
                 logger,
                 messages) {

After (I added two lines starting with d3):

define(['jquery',  
        'obitech-framework/jsx',
        'obitech-report/datavisualization',
        'obitech-reportservices/datamodelshapes',
        'obitech-reportservices/events',
        'obitech-appservices/logger',
        'd3js',
        'ojL10n!com-rittmanmead-demo/nls/messages',
        'obitech-framework/messageformat',
        'css!com-rittmanmead-demo/demostyles'],
        function($,
                 jsx,
                 dataviz,
                 datamodelshapes,
                 events,
                 logger,
                 d3,
                 messages) {

That's all. Now I can use d3js magic. And that's cool.

OK, it's time to make our plugin do something more useful than simply counting rows. All the plugin code is in the demo.js file and the main procedure is render, which is called every time DVD needs to update the visualisation. The initial version of this function is really small: without comments there are only four lines. It retrieves the data, counts the rows, then finds a container to write into and writes a message.

Demo.prototype.render = function(oTransientRenderingContext) {  
      // Note: all events will be received after initialize and start complete.  We may get other events
      // such as 'resize' before the render, i.e. this might not be the first event.

      // Retrieve the data object for this visualization
  var oDataLayout = oTransientRenderingContext.get(dataviz.DataContextProperty.DATA_LAYOUT);

      // Determine the number of records available for rendering on ROW
      // Because we specified that Category should be placed on ROW in the data model handler,
      // this returns the number of rows for the data in Category.
  var nRows = oDataLayout.getEdgeExtent(datamodelshapes.Physical.ROW);

      // Retrieve the root container for our visualization.  This is provided by the framework.  It may not be deleted
      // but may be used to render.
  var elContainer = this.getContainerElem();
  $(elContainer).html(messages.TEXT_MESSAGE.format("Demo Plugin", "" + nRows));
};

First of all, I need to know the actual size of the plotting area. I simply added the following after var elContainer = this.getContainerElem();:

  [...]
  var elContainer = this.getContainerElem();

  //container dimensions
  var nWidth = $(elContainer).width();
  var nHeight = $(elContainer).height();
  [...]

The template plugin has code for pulling the data into the script (the oDataLayout variable). But for my scatter plot, I need to put this data into an array and find the minimum and maximum values for both axes. This part of the code is my least favourite. It looks like we currently can't define two separate measures in a custom viz (or at least I can't find a way to do it), so instead of two nice separate measures I get them as alternating rows. Like:

.........
PEC #1    Turnout, %          34,8%
PEC #1    United Russia, %    44,3%
PEC #2    Turnout, %          62,1%
PEC #2    United Russia, %    54,2%
PEC #3    Turnout, %          25,9%
PEC #3    United Russia, %    33,2%
.........

I really hope a solution for this will be found. For now I just put even and odd rows into the X and Y coordinates. At the same time, I determine the minimum and maximum values for both axes.

  //temporary measure
  var tmp_measure;

  //current measure
  var cur_measure;

  var vMinimax=[Number.MAX_VALUE, Number.MIN_VALUE];
  var hMinimax=[Number.MAX_VALUE, Number.MIN_VALUE];

  //array of [vertical, horizontal] value pairs to plot
  var points=[];

  for(var i=0;i<nRows;i++){

    if(i%2==1){
      cur_measure=Number(oDataLayout.getValue(datamodelshapes.Physical.DATA, i, 0, false));
      vMinimax[0]=Math.min(vMinimax[0], cur_measure);
      vMinimax[1]=Math.max(vMinimax[1], cur_measure);
      points.push([cur_measure,tmp_measure]);
    }
    else{
      tmp_measure=Number(oDataLayout.getValue(datamodelshapes.Physical.DATA, i, 0, false));
      hMinimax[0]=Math.min(hMinimax[0], tmp_measure);
      hMinimax[1]=Math.max(hMinimax[1], tmp_measure);
    }
  }

The next part of the code handles repeated calls, for example when the measures or dimensions change. I simply delete the old chart and create a new empty one.

  var oSVG;

  //Delete old chart if exists
  d3.select("#DemoViz").remove();

  //Create new SVG with id=DemoViz
  oSVG=d3.select(elContainer).append("svg");
  oSVG.attr("id","DemoViz");

  //Set plot area size
  oSVG=oSVG
       .attr("width", nWidth)
       .attr("height", nHeight)
       .append("g");

Now I have an array with data. I have a prepared SVG plot for drawing. And I have no reason not to combine all this into a chart.

  //Compute scaling factors
  var hStep=(nWidth-40)/(hMinimax[1]-hMinimax[0]);
  var vStep=(nHeight-40)/(vMinimax[1]-vMinimax[0]);

  //Draw
  oSVG.selectAll("rect")
      .data(points)
      .enter()
      .append("rect")
      .attr("x", function(d) {return 20+d[1]*hStep;})
      .attr("y", function(d) {return nHeight-(20+d[0]*vStep);})
      .attr("height",1).attr("width",1)
      .attr('fill-opacity', 0.3);

What I should mention here is opacity. If its value is too high (the points are almost fully opaque), the chart looks like this:

fill-opacity=0.99

If I make the dots more transparent, the chart looks (and works) much better. This is how it looks with fill-opacity=0.3.
Look at the black point in the middle. It's there because a lot of commissions have exactly the same values for both measures. That's how we could spot the Saratov region anomaly.

[22:00-] Summary and TODOs

This shows how we can enhance Oracle DVD with our own visualisation. Yes, it's really, really, really simple, and it needs a lot of work to make it more useful. First of all, we should make it more reliable: this implementation doesn't test the input parameters for correctness. We also need to add interactivity: the chart should be zoomable and selectable, it should work as a filter, it should react to resizing, and it needs visible axes. And so on. But we built it in less than half an hour, and I think as a first step it works well. I hope you aren't afraid of making your own plugins now.

If you need a more complex sample, take a look at Oracle's tutorial; it covers some of these TODOs.

Categories: BI & Warehousing

A Case for Essbase and Oracle Data Visualization

Mon, 2017-03-06 12:03

So it’s the end of the month, or maybe even the end of the quarter. And you’ve found yourself faced, yet again, with the task of pulling whatever data will be needed to produce the usual standardized budgetary and / or finance reports. You’re dreading this as it will probably eat up most of your day just getting the data you need, combing through it to find the metrics you need, and plugging them into your monumentally complicated custom spreadsheet, only to find the numbers are, well, off.

Enter Data Visualization (DV), and its lightweight brother application, Data Visualization Desktop (DVD). My bud at RM, Matt Walding, already did a pretty great post on some of the cursory features of DVD, covering a lot of the important how-tos and what's-what. So check that out if you need a bit of a walkthrough. Both of these great tools tout that you can go from zero to analysis pretty darn quick, and from the extensive testing and prodding we've done with both DV and DVD, this claim is accurate. How does this help us in the previous scenario, though? Well, IT processes being what they are in a lot of mid to large size companies, getting the data we need, to do the crunching we need to do, can be quite the monumental task, let alone getting the correct data. So when we get it, we are going to want a solution that can take us from zero to report, pretty darn quick.


If you’re the one stuck with doing the crunching, and then providing the subsequent results, your solution or workflow probably resembles one of the following:


Scenario 1

Emailed a spreadsheet with a ton of rows. Download the csv/xlsx and then crunch the rows into something that you can force into a super spreadsheet that has a ton of moving pieces just waiting to throw an error.


Scenario 2

You have access to Essbase, which stays pretty fresh, especially as reporting time draws nigh. You connect to Smartview and extract what you need for your report. See scenario 1.


Scenario 3

You have OBIEE that you depend on for data dumps, and then just export whatever you need. See scenario 1.

While there are no doubt variations on these themes, the bigger picture here is that between the time you receive your dataset and the final report, there are likely a few iterations of said final report. Maybe you're having to make corrections to your Excel templates, or perhaps the numbers on your sheet just aren't jibing. Whatever the case may be, this part of the process is often the one that can be the most demanding of your time, not to mention the most headache inducing. So what's the point of my spiel? Well, wouldn't it be nice to expedite this part of the whole thing? Let's take a look at how we can do just that with both Data Visualization in OBIEE and Data Visualization Desktop.


Data Visualization

With DV, we can simply access any of our existing OBIEE subject areas to quickly create a basic pivot table. Right away you can see the profound time savings garnered by using DV. What's more, you don't need to feel forced into managing OBIEE on premises, as DV is also part of BICS (Business Intelligence Cloud Service) and DVCS (Data Visualization Cloud Service).

Even if you have Smartview and can do more or less the same thing, what if you wanted to delegate some of the tedium involved in manually crunching all those rows? You could simply hand off an export to another analyst and have them plug it right into their own instance of Data Visualization Desktop, which, might I add, comes with your purchase of the DV license. This does, however, lead down the slippery slope of siloing off your department; but as long as the work stays quarantined within DVD, as this blog is touting, and the data you are working with stays consistent, you shouldn't be able to do too much damage. The point I'm trying to make is that everything about using DV and DVD as your report crafting and proofing mediums is super-fluid and smooth. The process from source system to report, over and over again, is super-seamless. Even if I didn't have direct access to the data source I needed and had to rely on emailed data dumps or the like, I could simply upload that sheet right into DV, assign some data types, and get to work. I can even add dynamic filters to the analysis by simply dragging and dropping a column into the filters area. If you're feeling adventurous, you could also display these tables on a dashboard that your department looks at to proof them and share in the pleasurable experience that is concocting period-end reporting.


Data Visualization Desktop

Right now, DVD is only out for Windows (with a version for the Mac on the roadmap), which is mostly ok, as most every medium to large size company I have worked with employs Windows as their go-to OS. An analyst can install the program on their desktop machine and be ready to plug away in under 10 minutes. We can take the example spreadsheet above that we dumped out of VA and create our own version of the report right in DVD. One better, we can also blend it with any other source DVD can connect to. This feature, especially, can save lots of time when trying to get your numbers just right for sign off.

Looks Just Like DV

Acts Just Like DV

And hold the phone! There's even a native Essbase connector!

Actually One Better Than DV!

Speaking of connectors, check out the rest of the list, as well as the custom data flow functionality, which allows you to construct and save in-app data transformations to be invoked again, and again, and again...


Summary

Flexibility is the name of the game with DV and DVD. So sure, while it isn't the ideal tool for creating precisely formatted financial statements for SEC submission, it sure beats massaging and munging all that data in Excel. And once we're happy with our numbers, whether by sharing our reports on a dashboard or by exporting them, we can go ahead and plug those numbers right into whatever tool we are going to use to produce the final product. We've covered a couple of really good concepts, so let's do a bit of a wrap-up to make everything a bit more cohesive.

  • We can pull data into Data Visualization from OBIEE, from a spreadsheet, or even mash up the two to handle any inconsistencies that may exist at month's/quarter's end. Note that we can also use Smartview to pull directly to Excel, but that you will be unable to use the OLAP capabilities of Essbase / Essbase in OBIEE, as the data will be presented in DV and DVD as flattened hierarchies.

  • We can share reports in Data Visualization with other analysts or approvers via a dashboard (note that this is not a supported feature, but something that requires only a 'bit' of a hack) or PDF, or hand off our exported data set for further work. This can be to another person using DV or DVD.

  • Data Visualization Desktop can connect directly to our Essbase source or utilize a data dump from DV / OBIEE to do further work and analysis on the data set. We can also connect DVD to OBIEE, as posted here, in order to extract an analysis from the web catalog. This will at least save us the step of dumping / emailing the report.

  • All of this app-to-app compatibility encourages a sustainable, functional, and fluid reporting environment (warning: data silos!), especially for those who are unfamiliar with the ins and outs of Essbase and / or working with OBIEE.

Categories: BI & Warehousing

Real World OBIEE: Demystification of Variables Pt. 1

Mon, 2017-03-06 10:36

OBIEE has some very powerful features and capabilities when it comes to how we visualize and present our data. There are some features which almost everyone uses (filters, prompts, renaming columns etc.) and then some that are less common. In part, I think there is a lot of "noise" that surrounds the explanation of some of these features. Another reason, and probably the most likely, is that a lot of report developers just don't know that they exist. I can't tell you how many times in one of our training courses here at Rittman Mead someone says "wow, I didn't know I could do that!" or I get asked, "could you explain how Presentation Variables work?". Now, I am not saying that there is anything wrong with not knowing or being confused when it comes to some of the features in OBIEE. In fact, that's my main purpose in writing this blog series! I want to demystify and show you some of the features and techniques I have used with almost every client I have worked with. Even if you are a seasoned veteran, my goal is to present something in a way that you haven't heard before and spark some ideas in your day-to-day development. So with all of that said, let's dive into part one of the Demystification of Variables.

For the first part of this series, I want to talk about using Bins and Presentation Variables together at a report level and also in dashboard prompts. Before we dive into an example, let me talk a little bit about bins and presentation variables.

Bins

Sometimes we need to group values differently than what we have available from our column selection. Common examples could be when we need to group products differently than product category, when we need to create a group for items on sale or, maybe the most common I've seen, when we need to create custom groups for certain values and then have a group for the "others". Creating custom groups can be accomplished in a few different ways (selection steps, add new group from the Results tab), but the way I'm going to show you involves using the Bins tab under Edit formula within a column. I'll go over this process shortly when we dive into the example.

Presentation Variables

These are one of the most powerful features in Answers. They are also one of the things that trip people up the most. Think of presentation variables like placeholders and these placeholders are waiting for a value or a group of values to be passed into them. There are really only three parts to the presentation variables we will look at in this example: First you assign the presentation variable (give it a name) in a prompt to pass the value or group of values selected into the second part. The second part is where the placeholder (presentation variable with the same name) is put in a column formula, column heading, text object etc. The third part is called formatting. Formatting is used when you need to apply a certain format to the values coming into the presentation variable placeholder. There are a massive amount of use cases for presentation variables but I am going to cover only a couple of the ones that I've used the most.

Make sense? If you're still a little confused or hearing about this for the first time, don't worry. We are going to go through an example that I hope will explain the uses of Bins and Presentation Variables in a way that is concise and clear.

Now, lets dive into the example.

The Scenario

In this example, I am going to work through a hypothetical situation where I am the front-end developer for a coffee shop and I've been asked to create a report for an executive dashboard. The report needs to include Gross Revenue, Net Revenue and # of Orders for the current date for six groups of products (Breakfast Foods, Coffee, Coffee Beans, Tea, Tea Bags and Other) and five location groups which contain all the stores located in four states (California, Massachusetts, District of Columbia, New York and Others). The report needs to be in a table view and the person that requested this report also wants the ability to switch between the two columns for product groups and location groups. They also want a dashboard prompt for selecting from one or more groups for each column.

The Example

Let's start by tackling the custom groups. Since both groups are going to be created from the lowest level of detail for both products and locations, we need to bring in both Product Name and Shop Name columns and also our measures.

Note: For all of the examples in this blog series I am using OBIEE 12.2.1.2

I can click on column properties > Edit Formula then click on the Bins tab to start creating my groups.

In the Bins tab, I can click on Add Bin to start creating my first group.

Once I click on Add Bin, I am presented with a New Filter window. The process of creating a new group (or bin) is exactly the same as creating a column filter. I can type the values individually, separated by ; or I can click on the search icon to add values manually. In this case, I am going to add them manually.

Once in the Select Values window, I am going to select all of the products that belong in the first group (Breakfast Foods).

Note: In this scenario, the person has given me a list of all the products and stores that they want in each group

Once I'm done adding all the items, clicking OK will take me to the Edit Bin Name window where I can give my new group (or Bin) a custom name.

Since the person who requested the report probably wants the Others group to be last, I am going to add a numerical prefix to each group name. This will give me the flexibility to order the groups any way I choose. After I'm done creating all my Bins, there is a checkbox at the bottom of the Bins window where I can create a custom bin for all other values.

Clicking OK will return me to the Criteria where I can view the formula that OBIEE generated for all of my selections.

When using Bins, OBIEE generates a custom Case statement for us. At a high level, a Case statement takes a set of conditions and executes a corresponding statement. In other words: when the Product Name column has these values, put them all together and call them THIS.
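
As a simplified sketch of the kind of expression Bins generates (the product names and group labels below are taken from the full generated formula shown further down; the real statement lists every product in each group):

CASE
    WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('Americano', 'Latte', 'Mocha') THEN '2. Coffee'
    WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('Chai Tea', 'Jasmine Tea') THEN '4. Tea'
    ELSE '6. Other'
END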

If I go to Results, I can see the Bins I just created.

Now I can return to the Criteria to create the groups for the stores. Upon completion, here is the formula for the Locations:

And the result:

Now I need to come up with a way to be able to switch between the Product and Location columns I just created. There are a couple of options. Since I'm using OBIEE 12c, I can save both the Products and Locations columns to the web catalog and simply create a column selector just like I would do for any of the columns within a subject area.

There are two problems with this method. One is that the Save Column As feature is only available from OBIEE 11.1.1.9 forward so, if you are still on 11.1.1.7 or earlier, this method won't work. The second problem is that the request is not just for swapping the Product and Location columns with a prompt but also being able to select individual values for each column with a prompt. To avoid confusion, we probably want the prompt values for each column to change dynamically depending on which column is selected. To accomplish this I am going to use a second method which requires me to do the following:

  1. Combine the case statements for both Products and Locations column formulas

  2. Insert two additional case statements along with presentation variables to allow me to switch between the two.

  3. Create a filter for the combined column with a presentation variable to allow me to use prompted values.

  4. Create a variable type dashboard prompt to select between the Product and Locations independently.

  5. Create a dashboard prompt that dynamically changes between the value selections for the Products and Locations.

I'm going to start with the first two steps since they require editing the same column formula.

I am going to create a new column and copy both my Products and Locations column formulas into the new column's edit formula window and, to make it easier to edit and insert the two additional case statements, I am going to put a blank line between them.

Now I need to put in my two additional case statements with presentation variables so that I can switch between the two. The syntax and naming convention I am going to use is as follows:

CASE WHEN @{pv_select}['@']{'Products'} = 'Products' THEN [Products Case Statement]

CASE WHEN @{pv_select}['@']{'Products'} = 'Location' THEN [Locations Case Statement]

Let me take a moment to examine and explain the presentation variable I'm using.

The syntax for putting a presentation variable in a column formula, column heading or text object is as follows:

@{presentation_variable_name}[formatting]{default_value}

Note: I am going to use the presentation variable name (pv_select) in my variable prompt later in this example

The default value I have defined (Products) is an optional step that is primarily used for when I want to view my report without any values being passed into the presentation variable. The default value is used when no value has been passed into the presentation variable.

Notice that I have also included formatting between the presentation variable name and the default value (['@']). The reason for this is because the prompt values (which will be defined as Products and Locations in the variable prompt I will create) are a string (text) datatype and I have to explicitly define the format I need for the presentation variable value or I will get an error. You have to define formatting using single brackets [] and then encapsulate the @ character with the formatting you need applied to the presentation variable value. Think of the @ character as a sort of shorthand or an abbreviation for the presentation variable itself.

Note: I could enclose my presentation variable in single quotes to achieve the same result in this scenario ('@{pv_select}{Products}').

So, to explain the entire statement another way:

CASE WHEN the value of the presentation variable coming from the prompt is equal to 'Products', THEN use the CASE statement for the product groups

CASE WHEN the value of the presentation variable coming from the prompt is equal to 'Location', THEN use the CASE statement for the location groups

My new column formula is as follows:

CASE WHEN @{pv_select}['@']{Products} = 'Products' THEN

CASE WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('All Butter Croissant','Almond Croissant','Banana and Walnut Muffin','Blueberry Muffin','Caramel Shortbread','Chocolate Chip Muffin','Lemon Drizzle Muffin','Maple Pecan Danish') THEN '1. Breakfast Foods' WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('Americano','Caramel Hot Cocoa','Classic Hot Cocoa','Dark Hot Cocoa','Expresso','Filter Coffee','Iced Caramel Latte','Iced Cocoa','Iced Latte','Iced Mocha','Latte','Macchiato','Mocha','Pain Au Chocolat','Vanilla Hot Cocoa','White Hot Cocoa') THEN '2. Coffee' WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('Arabica Beans','Arabica Ground','Robustica Beans','Robustica Ground','GCBC Special Blend Instant') THEN '3. Coffee Beans' WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('Chai Tea','Chamomile Tea','Earl Grey Tea','Fresh Mint Tea','Jasmine Tea','Spearmint Tea') THEN '4. Tea' WHEN "Sales - Fact Sales"."Products"."Product Name" IN ('Chamomile Tea Bags','Chai Tea Bags','Earl Grey Tea Bags','English Breakfast Tea Bags','Fresh Mint Tea Bags','Jasmine Tea Bags','Spearmint Tea Bags') THEN '5. Tea Bags' ELSE '6. Other' END

WHEN @{pv_select}['@']{'Products'} = 'Location' THEN

CASE WHEN "Shops"."Shop Name" IN ('SF1 - Fisherman''s Wharf', 'SF2 - South Beach', 'SJ1 - First Street Corridor', 'SV1 - Mountain View') THEN '1. California' WHEN "Shops"."Shop Name" IN ('BO1 - Financial District', 'BO2 - Atlantic Wharf', 'BO3 - Back Bay', 'CA2 - Harvard Square') THEN '2. Massachusetts' WHEN "Shops"."Shop Name" IN ('DC1 - Central Business District', 'DC2 - Dupont Circle') THEN '3. District of Columbia' WHEN "Shops"."Shop Name" IN ('NY1 - Times Square', 'NY2 - Fifth Avenue') THEN '4. New York' ELSE '5. Others' END
ELSE 'Other' END

I am also going to use the presentation variable in the column heading so that when I make my selection in the prompt, the column heading will reflect the selection as well.

If I go to my results, notice how only the Products column is showing due to the default value I set when defining my presentation variable.

If I go back to my column formula and change the default value to be Location instead of Products, the column values will change to the ones defined in my Locations case statement.

In order for the report to be filtered for individual values, I need to create a filter using a presentation variable. To do this I am going to create a filter for the column I used to define all of my groups and, instead of selecting a value, I am going to put in a new presentation variable which I will define later when I create the dashboard prompt for the values.

I am going to name my new presentation variable pv_select_value.

When a Presentation Variable is added to the filter, two new text boxes appear. The Variable Expr box is where you define the variable to be used and the (default) box is used to add a default value. Notice how I am not using the syntax @{presentation_variable_name}. When defining a presentation variable using the presentation variable option in a filter or in a prompt, you only have to define the name. Once I click ok, OBIEE will insert the proper syntax for me. The default value is optional but, when defining a Presentation Variable within a filter, you have to specify a default value in order to get any results within the analysis. If I run the report with just a presentation variable defined with no default value, I will get the error shown below:

This is because, when the report is run, the query issued is trying to use the value of the presentation variable. Since there is no default value specified, it is trying to use the presentation variable name itself as a value. If I specify a default value or a list of default values, the report will use those values in the absence of a value or values in the presentation variable. Because I want all of the values to be shown initially on the dashboard, I am going to use the Convert this filter to SQL option and input all of the values for the product group manually for the list of default values.

If I run the report with the default values specified, I get the results back for the product group.

In order for the filter to work with the column prompt I will create later in this example, it requires the [formatting] syntax to be used (for example @{pv_select_value}['@']{'1. Breakfast Foods', '2. Coffee', '3. Coffee Beans', '4. Tea', '5. Tea Bags', '6. Other'}). This will encapsulate each value passed into the pv_select_value presentation variable with single quotes.

Now I need to add an additional filter on the same column using the same pv_select_value presentation variable. The reason for this is because I need to define each default value for the location group as well so that all of the values for location will be shown on the dashboard. To do this, I am going use the same process that I used for the product group filter.

There is one additional step that I need to do in order to make the filter work for both products and locations. When you initially create report filters, OBIEE assumes that you want to use all filters together at the same time; therefore it uses 'AND' for each filter created after the initial filter.

If I run the report, I will get an error because the filter is being applied to both the products and location values. Because the case statement is defined for products and locations independently, I cannot use all of the values in a filter at the same time.

To fix this, I need to change the operator for the location filter from 'AND' to 'OR'. I can do this by simply clicking on the 'AND' icon.

Now the report will run because only the first set of conditions for the product group filter is applied.

Now I need to create my two dashboard prompts: One for the initial selection between my two case statements for Products and Locations and one for the values.

I start by creating a new dashboard prompt and adding a new Variable Prompt.

Because this is a variable prompt, I have to manually specify the values I want the user to be able to select from as well as define the presentation variable. Since I am creating the prompt for switching between the two case statements for Products and Locations, I am going to make the following selections and input the following values.

Prompt for: Presentation Variable > pv_select

User Input: Choice List

Choice List Values: Custom Values

Values: Products and Location

Default Selection: Specific Custom Value > Products

As you can see, I have defined the variable that we used in the column formula for the case statements. By defining the two custom values (Products and Location), I can select between the two case statements at will and, as far as the user is concerned, it will be just like using a column prompt within a report. Also, the default value needs to be specified for reasons I will explain when creating the next prompt for the values.

Now I am going to create a column prompt for the values.

First, I need to create a new column prompt and put in the same column formula that I used earlier in my report. I can do that by clicking on the fx icon and pasting in the column formula.

Essentially what I am doing is making the column values populate for the prompt by using the same case statement definitions; they will also change depending on the value selected in the prompt we created previously.

For the choice list of values, I can simply leave the default of All Column Values. Since the values will dynamically change depending on the initial prompt value in the variable prompt, it will only show values corresponding to each selection.

I do not need to define a default selection because I set the default values in the two filters I created in my report.

Finally, all that is left to do is assign the pv_select_value presentation variable that I used in the filter of the case statement column in the report. This is how I will get all of the values to be shown for each initial prompt selection, and also how I will select individual values: the default values are passed into the filter via pv_select_value initially, which causes all the values to be shown, and then individual selections are passed in the same way.

Here are the results when placed on the dashboard.

So that concludes part 1 of Demystification of Variables. Please feel free to ask questions or leave me a comment! In part 2, I am going to talk about using Repository, System and Presentation Variables to make reports dynamic for any series of time. Until next time.

Categories: BI & Warehousing

New Performance Analytics Release

Wed, 2017-03-01 09:00

Poor performance is one of the most common complaints we hear upon arrival to customer sites, and often for good reason. When users have time to “make the rounds” before the system returns a result (if it returns a result at all), they have the right to be frustrated.

Unfortunately, tracking down the origins of performance issues can be extremely difficult, as the root cause could be one or more problems across several distinct systems. Without a comprehensive view of the complete technology stack, an administrator will effectively be guessing.

But we are all in the business of analytics. We should be taking an analytical approach to drive our decisions and provide evidence to support them. Performance data should be collected, investigated, and then acted upon. But an analytical approach raises two distinct questions: “What data can we collect that will help us identify performance issues, and once found, what actions can we take?”

Enter Performance Analytics

Performance Analytics is an application that collects data from all layers of the stack in your environment, and makes it readily available for study and investigation. By collecting data from OBIEE Usage Tracking, operating system metrics, database sessions, and a myriad of other sources, our application provides administrators with an all-encompassing view of their system. It has never been easier to identify the causes of poor performance.

However, simply identifying a cause of poor performance isn’t all that’s required. In fact, it’s almost useless without knowing how to resolve the issue. Rittman Mead has expertise ranging from databases and data warehouses to front end browser applications, and everything in between. Our team will help you understand the problems discovered by Performance Analytics and show you how to fix them.

With the newest release of Performance Analytics, improving the performance of your BI ecosystem is easier than ever. No longer will your users sit around waiting for reports, no longer will you need to guess at configuration settings, and no longer will you be left with no leads to investigate when users desire a better experience.

New Features

Simple and Lightweight Installation

Performance Analytics is now deployed using Docker, requiring absolutely no software installation on any machine other than the monitoring server.

Upgraded and Enhanced Software

The open source technologies included in Performance Analytics have come a long way since its inception, and this release provides their latest and greatest versions, as well as our own improvements that increase performance and reliability.

Alerts and Notifications

Receive notifications based on failed OBIEE system components, a violated performance threshold, or any other identifiable condition found in your data. Be made aware of adverse performance conditions the instant they occur.
And many more...

Learn More

We will be releasing more exciting news about Performance Analytics over the following weeks, so be sure to subscribe to our blog and newsletter to stay up to date!

Visit our Performance Analytics page here.
To request a demo or ask any questions about Performance Analytics, call or email us:

P (US): (888) 631-1410
E (US): info@rittmanmead.com
P (UK): +44 (0) 1273 911 268
E (UK): info@rittmanmead.com
Categories: BI & Warehousing

OBIEE 11.1.1.7: Is It Still Supported?

Tue, 2017-02-28 07:11

Last week, during a quiet working morning, my colleague Robin Moffatt sent me a link that changed my working focus for a few hours.

The link was to "OBIEE 11g: Required and Recommended Bundle Patches and Patch Sets (Doc ID 1488475.1)", a document containing the latest bundle patch (patch set) available for every major OBIEE release.

He spotted that, for the first time, the new bundle patch released on 17 Jan 2017 was for OBIEE 11.1.1.9 only, with the latest patch for 11.1.1.7 being the one previously released on 18 Oct 2016.
Moreover, there is a clear statement in the document saying that it is necessary to be on at least the 11.1.1.7.151020 patch set, or to upgrade to 11.1.1.9 (or 12c), in order to receive continued Error Correction Support.

The questions then naturally became:

  • Is OBIEE 11.1.1.7 still fully supported?
  • What is Error Correction Support?
  • How does Oracle Support Work?
  • How can I verify if my software is supported and when the support ends?


With those questions in mind, I started my wonderful journey into understanding Oracle support. If you want to be my guest, then sit down, take a coffee (espresso please) and enjoy the trip.

Understanding OBIEE Oracle Support

Disclaimer: All the information and dates mentioned in this post reflect the situation as of the publication date; Oracle may change them in the future. The Oracle support documents mentioned in this post should be taken as the direct reference. The documents linked below are mostly regarding OBIEE; for other products please visit Oracle's support website.

Understanding Oracle's support requires patience and the right set of documents, with the main two being:

  • Lifetime Support Policy: provides the announcements, dates and exceptions for the end of Premier, Extended and Sustaining Support. The detail of the Lifetime Support Policy is by product, e.g. OBIEE 11.1.1.*.
  • Error Correction Support Policy: provides details regarding the end of bug correction support per single release (e.g. 11.1.1.7). For OBIEE 12c the Error Correction Support document is merged into the Fusion Middleware document.

The combined information from these two sources should be used to determine the status of a specific Oracle software version.

Oracle's Support Phases

The two documents listed above provide a good starting point, but what do "Premier", "Extended", "Sustaining" and "Error Correction" Support mean?
Doc ID 1664916.1 and Lifetime Support Policy page provide a detailed explanation.

The "Lifetime Support Policy" defines the three main phases for each product (e.g. OBI 11g):

  • Premier Support: covers the first 5 years (though this timeframe can change) from General Availability of a product. During this period bundle patches are released, providing error correction and new features, with certification against most Oracle and third-party products.
  • Extended Support: almost all the benefits of Premier Support can be extended for another three years after its end, for an additional fee.
  • Sustaining Support: after the end of Premier Support, this option (requiring an additional fee) provides limited support for existing product bugs but no new developments (e.g. a certified mechanism for connecting OBIEE 10g to Impala); it can be extended for an unlimited amount of time.

The first two phases are also considered Error Correction Support, since they are the only two in which new bug- and security-related patches are provided. After the end of the Error Correction phase, no more patches will be released for that product.

Applying the Lifetime Support Policy to OBIEE 11.1.1.*, based on Oracle's documentation, gives the following.


As mentioned before, this is a guideline for the whole OBIEE 11.1.1.* set of products, so how can I determine the end of support for a certain release?

Usually the answer to this question is defined by

  • The end of Premier/Extended Support, if the release you're checking is the latest available for the product
  • The end of the release's grace period, if a newer release is available. The grace period, mentioned in Doc ID 944866.1, is the period of time (usually one year) following the release of a patch set in which Oracle provides fixes for both the new and the previous patch set. The grace period is calculated based on the availability of the following release, e.g. since OBIEE 11.1.1.9 was released in May 2015, the grace period for 11.1.1.7 should have ended in May 2016. There are however exceptions where the grace period and related error correction support are extended, as for 11.1.1.7, which is now extended until December 2018 as per the OBIEE Error Correction Support document.

The following graph shows the error correction support phases for OBIEE 11.1.1.7.


What Does This Mean For OBIEE 11.1.1.7?

As stated above, OBIEE 11.1.1.7 is still in its grace period, meaning that bundle patches and security fixes will still be released when needed until Dec-2018.



Is OBIEE 11.1.1.7 Still The Right Choice?

If all you need is support for your existing and working environment with the latest bundle patch, then yes, you're fine, and you will be until December 2018.
But we all know how long an upgrade takes from planning to the actual execution and testing. You probably don't want to wait until the end of next year and then rush the upgrade because your system is going out of support.


We at Rittman Mead are in favour of smart upgrade planning, and with our fixed-cost upgrade process we can take care of it while you keep focusing on what matters in your company.

Upgrading

As suggested by Oracle's document there are two options: migrate to the latest 11g release (11.1.1.9) or to 12c. Both are valid options - 11.1.1.9 has another few years of Premier Support, with the end of error correction support targeted for Dec-2021, as stated in Oracle's Doc 1664916.1.

If you are opting for the migration, the choice of target release should be based on the features you aim to get and the version's stability plan: some options like Visual Analyzer, Advanced Analytics and Data Mashups are only available on 12c, which is the version currently being developed, while 11g is on its way out. These points are discussed in more detail below.

Upgrading to OBIEE 11.1.1.9

The migration to 11.1.1.9 might seem easier, and it can be done in-place. However, it still requires database schema changes; moreover, an in-place upgrade carries the risk of live environments being unavailable until all issues are fixed, a risk that can be mitigated with an out-of-place upgrade.
You can find a list of 11.1.1.9 new features for end users and system admins in our blog.

Finally, if you migrate to 11.1.1.9, the end of its error correction support in Dec-2021 forces the next milestone in your upgrade path.

Upgrading to OBIEE 12c

On the other hand, the migration to OBIEE 12c (with 12.2.1.2.0 being the latest available as of now) is a completely out-of-place process. There is a requirement on the source 11g version: if you are currently on a pre-11.1.1.7 version, you'll first need to migrate to 11.1.1.7 or 11.1.1.9 and then to 12c.

The upgrade process is smooth, with a dedicated upgrade tool taking care of moving the catalog, RPD and part of the security, and a Baseline Validation Tool helping to automate the testing.
OBIEE 12c includes several new features, with new connectors to the big data world that will increase over time, and new options like Visual Analyzer and Data Mashups putting great functionality directly in the hands of end users.

Check out our 12c migration case study presented at UKOUG!

Rittman Mead Can Help!

Unsure about the support status of your current system and the related risks? With our experience we can help you plan a migration roadmap!
A migration is not a quick process; it needs to be understood and planned correctly. The following are just some examples of our pre-migration activities with clients.

  • Worried about migrating?

    We can advise on the best installation/migration processes and the benefits of such upgrades. We can also help you implement these processes during the migration phase.

  • Unsure about server capacity?

    We can help estimate server sizing based on the number of active users and the enabled features. Our Performance Analytics Service can help you understand and resolve bottlenecks.

  • Unsure about impact on BAU activities?

    Our migration approach reduces the Code Freeze time to a minimum; the time to swap between versions in the production environment can also be minimised.

  • Uncertain about timings?

    We can help you estimate the effort and plan for a successful upgrade based on our experience.

Once the migration is planned, we can implement it for you or assist and support your team. Let us help you!

Categories: BI & Warehousing

Getting Apex 4.2 Up and Running on Oracle 12c Using PL/SQL Gateway

Mon, 2017-02-27 10:30

Recently I was asked to create a simple master-detail Apex form on top of some metadata tables stored within Oracle 12c. Apex version 4.2 comes pre-installed in 12c, but it needs configuring before it can be used. This is where I hit some trouble, which stems from a conflict of information across a number of different sources.

I was following the Oracle certified documentation on 4.2 and how to get it running on 12c specifically, but it wouldn't work. Time after time I kept getting 'Error 404' messages whenever I tried accessing Apex through the browser. I gave up on the Oracle documentation and decided to scout out some blogs, but they were virtually copies of the same steps and resulted in the same error messages. I asked colleagues at work and also joined the Apex Slack domain, but nobody quite understood why it wasn't working.

It seemed like a lost cause, until I decided to check the later versions of the Oracle documentation and came across an extra piece of information from an Apex 5.0 document:

Procedures CREATE_ACL, ASSIGN_ACL, ADD_PRIVILEGE and CHECK_PRIVILEGE in DBMS_NETWORK_ACL_ADMIN are deprecated in Oracle Database 12c. Oracle recommends using APPEND_HOST_ACE instead.

Given that 12c comes with Apex 4.2, I was surprised to find it not mentioned anywhere else but in the Apex 5.0 documentation.

So the intention of this blog entry is to bring together all the information needed to get Apex 4.2 running via the PL/SQL Gateway on Oracle 12c, so you're able to take a quick and easy look at what it can offer.

Steps to configure the PL/SQL Gateway on Oracle 12c

  1. Change your working directory to where Apex is installed:

    cd $ORACLE_HOME/apex  
    
  2. Connect to SQL*Plus with a user with SYS privileges:

    sqlplus / as sysdba  
    
  3. Switch containers to the pluggable database container:

    ALTER SESSION SET CONTAINER = <PDB_NAME>;  
    
  4. Run the apex_epg_config.sql script, passing the $ORACLE_HOME path as an argument:

    @apex_epg_config.sql /u01/app/oracle/product/12.1.0/dbhome_1/
    

    Log back into SQL*Plus from the Apex directory if you get logged out

  5. Then unlock the ANONYMOUS user within the Root Container:

    ALTER SESSION SET CONTAINER = CDB$ROOT;
    ALTER USER ANONYMOUS ACCOUNT UNLOCK;
    
  6. Switch to the pluggable database container and run apxconf.sql:

    ALTER SESSION SET CONTAINER = <PDB_NAME>; 
    @apxconf.sql;
    

    When prompted, change the ADMIN password and assign a port for Apex to be served on - the default is 8080.

  7. Verify the Oracle XDB Protocol Server Port that was set after running the apxconf.sql script:

    ALTER SESSION SET CONTAINER = <PDB_NAME>;  
    SELECT DBMS_XDB.GETHTTPPORT FROM DUAL;
    

    If it returns 0, then enable the XDB Protocol Server port:

    EXEC DBMS_XDB.SETHTTPPORT(port); COMMIT;  
    
  8. Enable Network Services in Oracle Database 12c - these are disabled by default. The following will give access to all hosts:

    BEGIN
      DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
        host => '*',
        ace  => xs$ace_type(privilege_list => xs$name_list('connect'),
                            principal_name => 'apex_040200',
                            principal_type => xs_acl.ptype_db));
    END;
    /
    

    NOTE: If you require stricter security, restrict the host parameter to a specific host name, e.g. host => 'localhost'. A quick query to verify the access control entry is shown after these steps.

  9. Open your browser and connect to Apex:

    http://hostname:8080/apex
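Before testing in the browser, an optional sanity check (not part of the original steps, and assuming you have DBA privileges) is to confirm that the access control entry from step 8 was registered by querying the standard DBA_HOST_ACES view:

    -- List the network privileges granted to the Apex 4.2 schema owner
    SELECT host, principal, privilege
    FROM   dba_host_aces
    WHERE  principal = 'APEX_040200';

If the 'connect' privilege is listed for the expected host, the network ACL part of the configuration is in place.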
    
Summary

This blog has given you the guidelines for getting Apex 4.2, which comes pre-installed on Oracle 12c, up and running.

Note that 4.2 is now quite dated and it is recommended to upgrade to Apex 5. This ensures continued support from Oracle, and access to newer and more efficient features.

Categories: BI & Warehousing

Data Preparation for "Analyse Elections with Oracle Data Visualisation Desktop"

Fri, 2017-02-24 09:13

In the previous post I wrote about some interesting insights we can get from the elections data, and this post concentrates on the data preparation for it. Like the first one, it isn't intended as a guide; it aims to show some of the data manipulation an end user can do without an IT department or coding. Data preparation isn't strictly needed to understand the visual part, but it is still a piece of the story, so it was extracted into a separate post.

The Data

From the logical point of view I have two datasets.

  1. Voting results by election commissions: number of votes for every political party and a lot of technical measures like number of registered voters, number of good and damaged voting ballots and so on.
  2. Turnout figures at given times throughout the day.

From the physical point of view, both datasets are big files (about 500 Mb combined), each line of which is a single record. These records represent information at various levels of aggregation. I will use this fact to show some of the Data Flows features (see the great introduction from Francesco Tisiot).

Generally, each record is a JSON document which looks like this:

{  
 [...]
 "Data":{  
  "1 | Number of voters included in the voters list at the time of the end of voting":"102",
  "2 | The number of ballots received by the precinct election commission":"102",
  "3 | The number of ballots issued to voters who voted early":"0",
  [...]
  "19 | 1 Russian political party 'Rodina'":"0",
  "20 | 2 Political party 'COMMUNISTS COMMUNIST PARTY OF RUSSIA'":"3",
  "21 | 3 Political party 'Russian Party of Pensioners for Justice'":"0",
  [...]
  },
  "Loc":[  
   "Nizhny Novgorod Region",
   "129 Nizhny Novgorod Region - Nizhny Novgorod",
   "28 Krasnooktyabrskaya",
   "PEC #1148"
   ]
}

You can find a full sample of the JSON here.

Data part is a set of 32 measures and Loc is an array of one to four elements. The more detailed the record, the more elements in Loc array.

The whole votes dataset is slightly smaller than 100,000 records. As I said, records have various levels of aggregation, and this sample shows the most detailed data (its Loc has four elements). Nizhny Novgorod Region is obviously the region; 129 Nizhny Novgorod Region - Nizhny Novgorod is a concatenation of the district electoral commission number (129), the region name (Nizhny Novgorod Region) and the district electoral commission name (Nizhny Novgorod). 28 Krasnooktyabrskaya is the number and name of a territorial election commission, and PEC #1148 is a precinct electoral commission. For the most aggregated records Loc looks like:

"Loc":[  
      "DEC #129"
   ]

This is an aggregated data for a district election commission #129.

Unfortunately, DVD doesn't work with the JSON format directly. As one of the possibilities, we can use Apache Drill (for more information see the previous posts by Francesco Tisiot and Robin Moffatt). Apache Drill is a supported option for Oracle DV, but it isn't an end-user tool, and this story is not about building a complete and reusable system - it is about storytelling. So I did a one-time manual conversion using Excel magic and got a plain table.

The second part of the data is the turnout. In JSON form it's basically the same: one record per line, a Data part with four values and a Loc part based on the same principles. The data is significantly smaller and has only four values - the turnout at four fixed moments. But here I need it in a more dimensional form: I want time as a dimension rather than pivoted.

In my opinion, pivot/unpivot is one more highly desired feature DVD needs. We can live without it in tools like OBIEE because its models aren't built on rapidly changing user-generated data structures, but for desktop data exploration tools like DVD powerful data transformation capabilities are crucial.

But while it isn't implemented, I did the unpivot in Excel and got this.
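As a rough illustration only (the table and column names below are placeholders, not the actual file), the reshaping amounts to an Oracle-style UNPIVOT that turns the four fixed-moment columns into a time dimension:

SELECT loc, vote_time, turnout_pct
FROM   turnout_wide
UNPIVOT (turnout_pct FOR vote_time IN (t1 AS 'moment 1',
                                       t2 AS 'moment 2',
                                       t3 AS 'moment 3',
                                       t4 AS 'moment 4'));

In Excel the same effect can be achieved with copy/paste and a helper column for the time label.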

It wasn't obvious at the beginning of the data preparation stage that I would need it in this particular form, but the good news is that both visualisation and data manipulation live in one product; in practice I was able to build charts, understand that I needed more data/attributes/etc., add them, and go back to the analysis. Very smooth and clean.

I Thought We Weren’t Supposed to Use Excel?

While DVD's Data Flows feature is powerful and very useful, there are some data manipulation tasks that Excel does more easily or better than DVD. I hope that Oracle will improve Data Flows over time, but right now it's not perfect.

I used Excel for the following:

  1. Splitting long strings into fields. I could do it using locate+substring+left+right functions in DVD, but that would produce unreadable code. Excel's Text to Columns tool is the weapon of choice so far.
  2. UNION. When I need to combine two datasets into one, Excel's copy and paste is beyond competition. Adding set operations (not only union) would greatly improve Oracle's tool.
  3. A few technical tasks like translating the data. Initially the data was not in English and I had to translate it. I could have used either Excel or DVD for this task, but chose Excel in order to simplify the post. If I had done it in DVD, it would have been just a few more joins. I can't say which is simpler, so no clear winner here.

Note: I'm talking about static data sets. In the case of changing data that may not be true.

Data Transformations in DVD

While I was able to (and had to) do some of the data preparation in Excel, it's better and easier to do other tasks in DVD using the relatively new Data Flows feature. I will start by building a clean dictionary of electoral commissions.

When I finished my Excel part, I got this:

It isn't so bad, but I can make it better in a few minutes. I want to rename Location1 into Subject, split Location2 into DEC #129 and Nizhny Novgorod, remove leading digits from Location3 and rename it to TEC name, and also rename Location4 into PEC #.

Here is the data flow which does it. It may not look like the simplest thing in the world, but I spent about the same time building it as you spent reading what it does.
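For reference, a minimal sketch of what that split logic amounts to, expressed as Oracle SQL (the table and column names are illustrative, not the actual Data Flow configuration):

SELECT
    Location1                                                   AS Subject,
    'DEC #' || SUBSTR(Location2, 1, INSTR(Location2, ' ') - 1)  AS DEC_Number,  -- e.g. 'DEC #129'
    SUBSTR(Location2, INSTR(Location2, ' ') + 1)                AS DEC_Name,    -- e.g. 'Nizhny Novgorod Region - Nizhny Novgorod'
    LTRIM(Location3, ' 0123456789')                             AS TEC_Name,    -- drop the leading digits
    Location4                                                   AS PEC_Number
FROM commissions_raw;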

The second thing I want to do is split my wide file with 32 measures into two: one for party results and another for other, let's say "technical", measures like the number of voters, ballots and so on. It's not absolutely necessary, but it will please me and anyone else who sees it. Both data flows for this task use the Commissions dictionary I made earlier and the same source file. I filter the source to get only the most detailed data, join with commissions, select only the interesting columns and save the result. Simple and straightforward. The only difference is the set of selected columns.

I could start building visualisations with this data, but I'd prefer to calculate some relative measures first. Absolute numbers won't help me because every commission has its own number of voters. We can't compare them directly, but relative measures will do the trick.
The data flow for these calculations is the simplest of all.
The only part that isn't immediately obvious is why I placed two consecutive Add Columns steps: the first one calculates a Total votes column as the sum of votes for all parties, and the second one uses this column to calculate a percentage.
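The same two-step calculation, sketched in SQL terms with invented party column names, shows why two consecutive steps are needed: the percentage in the outer step depends on the total produced by the inner step.

SELECT t.*,
       ROUND(100 * t.united_russia / t.total_votes, 2) AS united_russia_pct  -- second Add Columns step
FROM (
    -- first Add Columns step: total of all party votes per commission
    SELECT v.*,
           v.united_russia + v.communists + v.other_parties AS total_votes
    FROM   votes v
) t;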

And it is really cool how we can use the results of different flows: I took Protocols, made Commissions, joined Protocols and Commissions to get the Votes data, and later made calculations based on the Votes data.

And here I want to show how the autocomplete feature in the formula editor works (because I really like it). I hope to see it in OBIEE one day.

I have one more data flow for the turnout data, but it simply joins the source file with the dictionary of commissions.

Summary

I believe Oracle DVD is a very powerful tool for a data analyst. While many users prefer to stick with Excel exclusively, DVD can significantly increase their data preparation and transformation capabilities. Even now, at the very beginning of its journey, DVD makes many data manipulation operations fast and easy. It is obviously worth giving it a try and including it in your data analysis process. See how the data from this post can be used for analysis and for drawing interesting conclusions: Analyse Elections with Oracle Data Visualisation Desktop.

Categories: BI & Warehousing

Analysing Elections Data with Oracle Data Visualisation Desktop

Fri, 2017-02-24 04:38

Disclaimer #1 This post is not about politics. Its dataset is about politics, but that's a coincidence. It could be about immunisation or transport or anything else. If you are strictly against any politics, here is a link to watch cute kittens.

Introduction

Let's pretend that I'm an analyst who has been given a supposedly interesting data set. Now I want to understand whether the data is actually interesting or whether it's total rubbish. Having been in IT for some time, I can use tools and technologies which a typical end user can't access. But this time I'm pretending to be a usual analyst who has only data and desktop tools. And my task is to do some research and tell whether there are anomalies in the data or whether everything looks the way it is supposed to.
The main tool for this work is obviously Oracle Data Visualisation Desktop (DVD), and, as a supplementary tool, I use Excel. This post is not a guide to any particular DVD feature. It won't give step-by-step instructions or anything like that. The main idea is to show how we can use Oracle Data Visualisation for the analysis of a real dataset - not simply to show that we can build bar charts, pie charts and other fancy charts, but to show how we can get insights from the data.

The Data

I should say a few words about the data. It is the official result of the Russian State Duma (parliament) elections in 2016. Half of the Duma was elected by party lists, and for this post I took that data. I should confess that I cheated a little and decided not to spend my time downloading and parsing the data piece by piece from the official site, and took a prepared set.

From a bird's-eye view I have the following data:

  1. Voting results by election commissions: number of votes for every political party and a lot of technical measures like number of registered voters, number of good and damaged voting ballots and so on.
  2. Turnout figures at given times throughout the day.

From a more technical point of view, the data was stored in two big files with multiple JSON documents in each. As the data preparation part is big enough, it was extracted to another post. This one concentrates purely on visualisation, and the second one is about data flows and a comparison to Excel.

Analysing the Data

I did some cleaning, refining and enriching of the data and it's time to use it. I started with a standard Stacked bar chart. It shows percentages of parties by regions and in addition width of bars shows Total votes. The chart is sorted by ascending Total votes.

What can we say here?

Before I start talking I need a lawyer and a disclaimer #2:

Disclaimer #2 Some of the results may be interpreted in different ways. Some of them may not be so pleasant for some people. But I'm not a court and this post is only a data visualisation exercise, therefore I'm not accusing anyone of committing any crimes. I will draw some conclusions because the rules of the genre demand it, but they should be treated as hypotheses only.

That's not a proven charge (see disclaimer #2), but to me these regions look a bit suspicious. Their results are very uncommon. The ruling United Russia party (orange bars) got an extremely high result in these few regions. This may be a sign of some kind of interference with the election process there. But of course, other explanations (including a measure incorrectly selected for sorting) exist.

Just for reference so we don’t forget the names: Tatarstan, Kemerovo, Dagestan, Chechnya and Mordovia. There are a few more regions with similar results but their number of voters is lower so I don’t show them here.

At this point I'm starting to suspect something, but I need more visuals to support my position. My next hypothesis is that in these regions ballots were somehow added to voting boxes (or protocols were changed, which is basically the same). From a data visualisation point of view that would mean that these regions should show a higher turnout (because of the added ballots) along with a higher United Russia result.

To check this I need one more measure - Turnout, %. It shows how many of the registered voters actually voted. I can create this field without leaving visualisation mode. Cool.

Note: this formula may not be absolutely correct, but it works well for demonstration purposes.
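For illustration only, here is a sketch of what such a calculation can look like as a DVD expression; the field names are assumptions, not the actual dataset columns:

100 * SUM(total_votes) / SUM(registered_voters)

Any reasonable variant (for example, including or excluding damaged ballots in the numerator) works for the charts that follow, which is why the note above hedges on its exactness.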

In order to visualise this hypothesis, I built a Scatter chart. Its horizontal axis is Turnout, % and its vertical axis is United Russia, %. And I added a trend line to make things more obvious. Colour density shows Total votes.

I think my hypothesis just got strong support. As usual, it is not an absolutely impossible situation, but it's hard to explain why the more people come to voting stations, the higher one party's result is. I'd expect either a high result not depending on the turnout (more or less like a uniform distribution) or at least a few exceptions with high turnout and a low result.

I consider this result strange because in real life I'd expect a higher turnout to mean more opposition voters (a passive group indeed) coming to voting stations. But that's only my opinion, and a highly arguable one, I should agree. What I really want to show here is that these charts highlight an oddity that should be investigated and may or may not have a rational explanation.

And who are our heroes? Let’s zoom in on the chart …and we see the same regions.

But maybe other parties can show the same result? We can build the same Scatter chart for every party, or we can visualise them all at once with a usual Line chart. Here I've plotted the percentage of the vote won by each party (Y-axis) against the overall turnout % (X-axis).

United Russia is the only party that increases with turnout.

So far all my charts have been about relative measures; it's time to check some absolute values. Here is a Bar chart which shows the number of precinct commissions by result. I'd expect to see something close to a normal distribution - a bell-shaped chart with the maximum around 54% (the average turnout). Now, look at the real chart (bin size = 1.0%). Its maximum is at 36-37%.

Normal distribution

In probability theory, the normal (or Gaussian) distribution is a very common continuous probability distribution. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known.

Strictly speaking, all the numbers I'm showing here are discrete and I should say binomial distribution rather than normal, but right now, for my purposes, the difference is not that big.

I'm definitely not Carl Gauss (and not even one of his best students) and you may ignore my opinion, but I expected something more like this:

And I don't have the slightest idea how it is possible that the most "popular" turnout is 100%.

What if we look at the same chart in more detail? The previous one was grouped into 1% bins, and this one has 0.1% bins. I also had to add a "turnout not equal to 100%" filter: even with the smaller bin size, the 100% bar stays almost the same at ~3K commissions; it is much bigger than the others and with it included the chart doesn't show anything useful.
What can we see here? Well, people aren't really good at generating random numbers. It's perfectly OK to have some spikes on the chart. But hey, it's not normal to have them mostly at round values. It looks like someone was trying to fit the result to some plan.

Here is my favourite part of the chart. I marked 1% intervals, and you can see that a round turnout value is always more probable than its closest non-round neighbours. No exceptions. A possible explanation is that the number of commissions with results that high is relatively low, so even the slightest manipulation is clearly visible.

But wait. What about that 64.3 percent? It's not round, but it is a big outlier. Let's take a closer look at this value and check whether there is anything interesting or whether it is a normal situation. Here are a few interesting visualisations for it.

The first one is a Tree Diagram. It shows all existing combinations of district and precinct commissions by region for the filtered data (turnout = 64.3%). In order to demonstrate how it works for this case I made an animation. Most of the regions have a few commissions with 64.3% turnout. But Saratov region beats them all.

This visualisation has a serious flaw: the end user has to scroll it (for this set of data, I mean) and can miss the point. Another visualisation can improve the situation. A Network diagram doesn't need scrolling.

It looks good and shows exactly the same thing. But for this chart we must ensure that every data point is unique, which is not true in my case: different precinct commissions have the same numbers, and I had to create a unique field first (DEC #||PEC #). It's easy to forget this and get unpredictable or even misleading results.

Or, if you prefer more traditional charts, here is a Sunburst for you. Its sector size shows Total votes and the colour is the PEC count. It gives a good representation of how uncommon Saratov's result is.

And the last picture for the same topic: the boring, never-old, classic Bar chart.

Considering all these charts, I'd say that the almost exclusive concentration of commissions with 64.3% turnout in Saratov doesn't look normal to me. It's pretty weird that sibling commissions show exactly the same figures.

A few more diagrams which could work well are Sankey and Parallel Coordinates; unfortunately, they are less informative because of the high number of precinct commissions. But if the number were lower, I'd consider them too.

All the previous charts are based on the voting data, but I have one more dataset - the official turnout. Let's check if we can find anything interesting there. Unfortunately, a significant part of the commissions doesn't have official data, and sometimes I may use formulas that are not exactly the same as the official ones, so numbers may differ slightly from what I got from the protocols data.

The first chart shows the number of commissions (vertical axis) by official turnout (horizontal axis). Colour shows the moment of time. Strictly speaking, I shouldn't have used continuous line charts for discrete values, but coloured overlapped bars don't give as clear a picture.

Except for the 100% tail, everything is more or less natural. The graph shape looks more like a Gamma distribution than a Normal one, but I didn't test it.

What picture do I have for various regions?
Moscow city is known for a relatively high number of poll watchers, and we may expect cleaner data there. Ignoring the long tail, these look normal (or binomial, if you want to be precise).

Saratov region - the one with 64.3% turnout. Look at these peaks. Do they look natural to you?
Do you remember Tatarstan (it was the hero at the beginning of this story)? Here it is. I don't know how anyone can explain how this is possible (without manual adjustment of results, I mean).

Summary

This post shows how we can use Oracle DVD for the visualisation of a real data set, and I hope I was able to convince you that this tool can be useful and can give you really interesting insights. Of course, visualisation alone doesn't answer all questions, and this time it was actually less about answers and more about questions. It helps you ask the right questions.

More reading on the topic: 1, 2 (Russian language). If you can read Russian, here you will find more visualisations, discussions and interesting information. And this article is about the elections in 2011. Its indisputable advantage is that it is in English.

Categories: BI & Warehousing

OBIEE 12c Time Dimension: Logical Sequence Number

Thu, 2017-02-23 02:59

The key component of any successful OBIEE implementation is the metadata model known as the repository (RPD). Doing it right is sometimes considered "black magic", and small mistakes in the RPD can impact all the exposed Subject Areas, resulting in poor performance or, even worse, wrong results.

Working on an RPD requires dedicated knowledge of the tool, and we share it in our RPD modelling training for both OBIEE 11g and OBIEE 12c.

If you ever worked on RPD modelling, one of the settings you surely encountered is the Time dimension. This blog post written back in 2007 explains the process of setting up a time dimension for OBIEE 10g. The process didn't have any major modifications until recently when, in 12.2.1, Logical Sequence Numbers were introduced. As per Oracle's documentation this new feature "optimizes time series functions and in some cases improves query time", and in this post we'll see how to configure it and its impact on the time-series calculations. The examples shown below are based on Oracle Sampleapp v607, a really good source of modelling and front-end examples.

Usual Time-series Query Behaviour

Time-series functions like Ago, ToDate and, more recently, PeriodRolling allow end users to compare results coming from different moments just by specifying the level in the time dimension hierarchy and the number of periods to look backwards or forwards. As an example, if you needed to compare the current month's sales revenue with the previous month's figure, you'd end up writing a formula like

 AGO("F0 Sales Base Measures"."1- Revenue","H0 Time"."Month", 1)

Where:

  • AGO: is the Time-series function being called
  • "F0 Sales Base Measures"."1- Revenue": is the metric
  • "H0 Time"."Month": is the time hierarchy level
  • 1: is the number of periods (months in our case) to look back in history

Once the time-series metric has been created, it can be used in an analysis like the following to compare the revenue of 2015-04 with that of the previous month.


The analysis generates the following Logical SQL which basically lists the columns retrieved and the filters applied.

SELECT  
   0 s_0,
   "A - Sample Sales"."Time"."T02 Per Name Month" s_1,
   "A - Sample Sales"."Base Facts"."1- Revenue" s_2,
   "A - Sample Sales"."Time Series"."127  Mth Ago Rev  (Fix Time Lvl)" s_3
FROM "A - Sample Sales"  
WHERE  
("Time"."T02 Per Name Month" = '2015 / 04')
ORDER BY 2 ASC NULLS LAST  
FETCH FIRST 5000001 ROWS ONLY  

The translation to source SQL hugely depends on the data model created in the RPD and on the data source type. In our example an Oracle SQL gets generated containing the following steps:

  • Sequence Generation: a DENSE_RANK() function is used to create a dense sequence based on Per_Name_Month, the chronological key defined in the time hierarchy for the month level. Mth_Key is also part of the query since it's used in the join between the dimension and the fact table.
WITH  
OBICOMMON0 AS (select DENSE_RANK() OVER ( ORDER BY T653.Per_Name_Month) as c1,  
     T653.Mth_Key as c2,
     T653.Per_Name_Month as c3
from  
     BISAMPLE.SAMP_TIME_MTH_D T653 /* D02 Time Month Grain */ ),
  • Period Shifting: The sequence generated above is now shifted by the number of periods defined in the column formula (D1.c1 + 1), in our example 1.
SAWITH0 AS (select D1.c1 + 1 as c1,  
     D1.c2 as c2,
     D1.c3 as c3
from  
     OBICOMMON0 D1),
  • Ago Period Query: Using the period shifted query the historical record (or set of records) is retrieved.
SAWITH1 AS (select distinct D1.c1 as c1,  
     D1.c3 as c2
from  
     OBICOMMON0 D1),
SAWITH2 AS (select sum(T418.Revenue) as c1,  
     D3.c2 as c2
from  
     BISAMPLE.SAMP_REVENUE_F T418 /* F10 Billed Rev */ ,
     SAWITH0 D4,
     SAWITH1 D3
where  ( T418.Bill_Mth_Key = D4.c2 and D3.c1 = D4.c1 and D3.c2 = '2015 / 04' )  
group by D3.c2, D4.c3),  

The usage of the period-shifted query is explained visually by the image below.


  • Selected Period Query: the query for the selected period, in our case 2015-04, is executed using the standard time dimension
SAWITH3 AS (select sum(T418.Revenue) as c1,  
     T653.Per_Name_Month as c2
from  
     BISAMPLE.SAMP_TIME_MTH_D T653 /* D02 Time Month Grain */ ,
     BISAMPLE.SAMP_REVENUE_F T418 /* F10 Billed Rev */ 
where  ( T418.Bill_Mth_Key = T653.Mth_Key and T653.Per_Name_Month = '2015 / 04' )  
group by T653.Per_Name_Month)
  • Result Set Joining: results coming from the Ago Period and Selected Period queries are then joined with a full outer join.
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4 from ( select D1.c1 as c1,  
     D1.c2 as c2,
     D1.c3 as c3,
     D1.c4 as c4
from  
     (select 0 as c1,
               coalesce( D1.c2, D2.c2) as c2,
               D2.c1 as c3,
               D1.c1 as c4,
               ROW_NUMBER() OVER (PARTITION BY coalesce( D1.c2, D2.c2) ORDER BY coalesce( D1.c2, D2.c2) ASC) as c5
          from 
               SAWITH2 D1 full outer join SAWITH3 D2 On D1.c2 = D2.c2
     ) D1
where  ( D1.c5 = 1 )  
order by c2 ) D1 where rownum <= 5000001  

As you can see, it's a rather complex set of instructions that involves several steps, including analytical functions like the DENSE_RANK() in the sequence generation. In our case we have been lucky that the source system provides the DENSE_RANK() function; with other sources the sequence generation needs to be calculated directly by the BI Server (OBIS), with a possible degradation in performance.

What is Logical Sequence Number?

The Sequence Generation step mentioned above can sometimes be the bottleneck of the whole query, especially when the time dimension is complex or huge in volume, since DENSE_RANK() is a costly analytical function.

All that OBIEE's time-series functions need to work is a pure sequence, or in Oracle's words:

Sequence numbers are enumerations of time dimensional members at a certain level. The enumeration must be dense (no gaps) and must correspond to a real time order. For example, months in a year can be enumerated from 1 to 12.

Then what if we could find a way of pre-calculating them and storing them in the table, or of calculating them on the fly using functions less expensive than DENSE_RANK()?

This is the idea behind the Logical Sequence Number (LSN): a way of avoiding the BI Server (OBIS) needing to execute the DENSE_RANK(), by passing either a pre-calculated sequence column in the source table or a cheaper-cost function to calculate it on the fly based on existing columns.

The formula behind a Logical Sequence Number must resolve to a number (no varchars or dates allowed) and can be either:

  • Absolute: when the sequence is consistent and doesn't need any external reference, e.g. calendar years are a self-defined sequence
  • Relative: when the sequence is relative to a parent level, e.g. creating a sequence for months between 1 and 12 would need the calendar year as parent level to provide correct cross-years references

As the name already says, "Sequence Numbers" must be sequential; no gaps can be included. Year and month number are good examples of this.

But what if we wanted to provide an absolute sequence number at month level?
One could think about using a month key in the YYYYMM format but this is not a sequence: 201701 <> 201612+1. The best way of generating the sequence number would be to add it as a column in the database table.

I'll take as an example the SAMP_TIME_MTH_D table that can be found in Sampleapp v607, which contains the MTH_KEY column in the YYYYMM format.

I'll add to SAMP_TIME_MTH_D a column MONTH_ABS_SEQ containing the absolute sequence number. The formula behind the column is exactly what Oracle was doing under the covers with DENSE_RANK.

SELECT MTH_KEY, DENSE_RANK() OVER (ORDER BY MTH_KEY) MONTH_ABS_SEQ FROM SAMP_TIME_MTH_D  

And the end result is as expected.

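If the sequence is to be stored physically rather than computed at query time, a minimal sketch of persisting it could look like the following; this assumes DDL privileges on the BISAMPLE schema and is not part of the original SampleApp scripts:

-- add the sequence column and populate it from the same DENSE_RANK() calculation
ALTER TABLE SAMP_TIME_MTH_D ADD (MONTH_ABS_SEQ NUMBER);

MERGE INTO SAMP_TIME_MTH_D t
USING (SELECT MTH_KEY,
              DENSE_RANK() OVER (ORDER BY MTH_KEY) AS MONTH_ABS_SEQ
       FROM   SAMP_TIME_MTH_D) s
ON (t.MTH_KEY = s.MTH_KEY)
WHEN MATCHED THEN UPDATE SET t.MONTH_ABS_SEQ = s.MONTH_ABS_SEQ;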

How are Logical Sequence Number Used?

Previously I described the two types of Logical Sequence Numbers, Absolute and Relative; each one has its own use cases:

  • PERIODROLLING: this function uses the absolute LSN to calculate the starting Sequence Number based on the selected one; e.g. looking at the previous image, a 6-month rolling window starting from 2009-02 will include data from sequences in the range [9-14]. A sample formula is shown after this list.
  • TODATE: uses the relative LSN; e.g. a Year-To-Date measure shown by month will use the month's relative LSN to calculate the previous months in the same year.
  • AGO: the Ago function uses both absolute and relative LSNs. Absolute numbers are used if the grain of the query and that of the AGO are at the same level, e.g. a yearly analysis of Actual vs Previous Year. However, when the grain of the shift in the AGO is higher than the grain of the query, the relative LSN is used, e.g. a monthly analysis of Actual vs Previous Year.
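As a reference, here is a hedged example of a PERIODROLLING formula of the kind mentioned above, reusing the SampleApp measure from the earlier AGO example; the -5 to 0 offsets give a six-month window ending at the queried month:

PERIODROLLING("F0 Sales Base Measures"."1- Revenue", -5, 0)
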
How to Configure Logical Sequence Numbers

Logical Sequence Numbers are a new feature and as such require additional settings in the time dimension hierarchy in order to work. In our case we'll add two sequences: an absolute one at calendar year level and a relative one at calendar month level.
We'll add the sequences directly as formulas in the repository; however, those formulas should be pushed down as columns in the database table if optimal performance is sought.

In order to set the Logical Sequence Numbers we need to open the RPD (the SampleApp one in this test), and select the Time dimension we want to change.
 Logical Sequence Number

After selecting any level apart from the Grand Total (the top of the hierarchy), a new tab called "Sequence Numbers" should be visible. However if, as in our case, none of the columns at that level are integers or doubles, the sequence number selector is disabled.

 Logical Sequence Number

In order to enable the selector we first need to create a sequence column in our dimension and bring it to the correct level in the hierarchy. For the Year level there is already an integer column named "T35 Cal Year" which can be used as a sequence. We simply need to drag the column to the Year level in the Time hierarchy and set it as the Absolute Sequence.

 Logical Sequence Number

I can do the same with the Month level in the hierarchy and the "T32 Cal Month" column. Note that the column contains only the month enumeration from 1 to 12, so we need to set the sequence number as Relative to the Year level.

 Logical Sequence Number

Please note that both absolute and relative LSNs can (and should) be entered since, as discussed above, each has different use cases. In addition, relative LSNs should be set for all logical parent levels in the hierarchy, since they will be used only if the grain of the time shift matches that of the parent level. For example, a monthly LSN based on the Year logical level will only be used in AGO functions having a year shift, and not in the case of a quarterly shift.

For optimal usage, every level of the time hierarchy should have one absolute LSN and a relative LSN for each of its parent levels in the hierarchy.

Impact on SQL

It's time now to review the SQL generated by our analysis and check the differences with the old-school time-series query.

When creating an analysis at year level, the BI Server (OBIS), as expected, uses the CAL_YEAR column as the sequence instead of the DENSE_RANK() function over the PER_NAME_YEAR column.

WITH  
OBICOMMON0 AS (select T795.CAL_YEAR as c1,  
     T795.QTR_KEY as c2,
     T795.PER_NAME_YEAR as c3
from  
     BISAMPLE.SAMP_TIME_QTR_D T795 /* D03 Time Quarter Grain */ )

When using TODATE, both the absolute (year) and relative (month) sequences are used, as in the following example where the measure "166 Revenue Year To Date" is defined by the formula:

TODATE("01 - Sample App"."F0 Sales Base Measures"."1- Revenue",  "01 - Sample App"."H0 Time"."Year" )  


The following query gets generated; note the usage of Cal_Year and Cal_Month in the sequence generation query instead of the DENSE_RANK() function, as per the RPD settings mentioned above.

WITH  
OBICOMMON0 AS (select T653.Cal_Year as c1,  
     T653.Cal_Month as c2,
     T653.Per_Name_Month as c3,
     T653.Per_Name_Year as c4
from  
     BISAMPLE.SAMP_TIME_MTH_D T653 /* D02 Time Month Grain */ )
...

Are Logical Sequence Numbers Useful?

Most of the time, the bottleneck when using time series is not the sequence generation, since the time dimension cardinality is rarely big enough to produce a noticeable delay in query time, but rather the poorly managed calculations made at query time on top of massive fact tables.

Don't expect LSNs to solve all your performance problems with time series. However, the usage of Logical Sequence Numbers gives OBIEE a way of pre-cooking part of the calculation and so, in theory, should help performance. The small effort required to set them up centrally in the time hierarchy is outweighed by the benefits at query time, without having to touch any pre-defined time-series calculations.

If you do have performance problems with your OBIEE system, or would like to ensure that a system you’re building will be performant from the outset, please get in touch to find out more about our Performance Analytics service!
We also provide expert OBIEE training, implementations, QA and health checks - to find out more about how we can help you, please contact us!

Categories: BI & Warehousing

Introducing Advanced Analytics Training from Rittman Mead!

Tue, 2017-02-21 09:00


Rittman Mead is proud to release our new training course: Advanced Analytics with Oracle's R Technologies.

Oracle has made significant investments in the R language with Oracle R, ROracle and Oracle R Enterprise. Using these tools, data scientists and business intelligence practitioners can work together more efficiently and can transition between their roles more easily.

Rittman Mead has developed a three-day course that tackles R's notoriously steep learning curve. It builds on Oracle professionals' existing skills to accelerate growth into R programming and data science.

What does the course include?

Day one is all about the R programming language, starting with a history and explanation of Oracle's R technologies. Hands-on coding begins right away, with practical labs comparing R's data types and data structures with those found in the Oracle Database. The day wraps up with R programming concepts like conditions and functions, providing a strong grasp of the fundamentals on the very first day.

Day two focuses on the analysis pipeline, from data acquisition to data visualization. You will use SQL and R to tidy and transform raw data into a structured format and then use visualization and basic statistics to gain insights.

Day three looks at statistical modeling—discussing linear models and the predictive modeling pipeline. We present pros and cons of different types of models and get hands-on with preprocessing, model tuning, cross-validation and interpreting model results.

Our course is a mixture of theory and practical exercises—ensuring that you'll understand the tools and know when to apply them.

Who should attend?

The course is suitable for Oracle practitioners having some experience with SQL and business intelligence. No previous knowledge of R is assumed or necessary.

Sounds great, where do I sign up?

Please view our UK & Europe or US training schedule for public courses. For any questions or queries, including on-site training requests, please contact Daniel Delgado (US) or Sam Jeremiah (UK & Europe) for more details.

Categories: BI & Warehousing
