Rittman Mead Consulting

Rittman Mead consults, trains, and innovates within the world of Oracle Business Intelligence, data integration, and analytics.

New OTN Article – OBIEE Performance Analytics: Analysing the Impact of Suboptimal Design

Wed, 2016-03-30 03:09

I’m pleased to have recently had my first article published on the Oracle Technology Network (OTN). You can read it in its full splendour and glory(!) over there, but I thought I’d give a bit of background to it and the tools demonstrated within.

OBIEE Performance Analytics Dashboards

One of the things that we frequently help our clients with is reviewing and optimising the performance of their OBIEE systems. As part of this we’ve built up a wealth of experience in the kind of suboptimal design patterns that can cause performance issues, as well as how to go about identifying them empirically. Getting a full stack view on OBIEE performance behaviour is key to demonstrating where an issue lies, prior to being able to resolve it and proving it fixed, and for this we use the Rittman Mead OBIEE Performance Analytics Dashboards.

[Image: the OBIEE Performance Analytics Dashboards]

A common performance issue that we see is analyses and/or RPDs built in such a way that the BI Server inadvertently returns many gigabytes of data from the database and in doing so often has to dump out to disk whilst processing it. This can create large NQS_tmp files, impacting the disk space available (sometimes critically), and the disk I/O subsystem. This is the basis of the OTN article that I wrote, and you can read the full article on OTN to find out more about how this can be a problem and how to go about resolving it.

OBIEE implementations that cause heavy use of temporary files on disk by the BI Server can result in performance problems. Until recently this was very difficult to track because of the transitory nature of the files: by the time the problem had been observed (for example, disk-full messages), the query responsible had completed and its temporary files had been deleted. At Rittman Mead we have developed lightweight diagnostic tools that collect, amongst other things, the amount of temporary disk space used by each of the OBIEE components.

[Image: BI Server temporary disk usage over time]

This can then be displayed as part of our Performance Analytics Dashboards, and analysed alongside other performance data on the system such as which queries were running, disk I/O rates, and more:

[Image: OBIEE temporary disk usage analysed alongside other performance data]

Because the Performance Analytics Dashboards are built in a modular fashion, it is easy to customise them to suit specific analysis requirements. In this next example you can see performance data from Oracle being analysed by OBIEE dashboard page in order to identify the cause of poorly-performing reports:

[Image: database performance analysed by OBIEE dashboard page]

We’ve put a set of videos online here demonstrating the Performance Analytics Dashboards, explaining in each case how they can help you quickly and accurately diagnose OBIEE performance problems.

You can read more about our Performance Analytics offering here, or get in touch to find out more!


Categories: BI & Warehousing

The Importance of BI Documentation

Thu, 2016-03-17 05:00
Why Is BI Documentation Important?

Business intelligence systems come with a lot of extra information. Even beautifully constructed analyses have piles of background information and histories. Administrators often have memos and updates that they’d like to share with analysts. Sales figures might have anomalies that need further explanation. But OBIEE does not currently have any options for BI Documentation inside the dashboard.

Let’s say a BI user for a cell phone distribution company is viewing a report comparing the yearly sales figures for several different cell phones. If the analyst notices that one specific cell phone is outperforming the others, but doesn’t know what makes that specific model unique, then they have to go searching for that information.


But what if the individual phone model specifications and advertising and marketing histories were already included as reports inside the dashboard? What if the analyst, with only a couple of clicks, discovered that the reason one cell phone was outperforming the others was its next-gen screen, camera, and chip upgrades, which proved popular with consumers? Or what if the analyst discovered that the popular phone, while containing outdated peripherals, was selling so well because of a Q3 advertising push for that model alone? All of this information might not be contained in the dashboard’s visuals, but it greatly affects the analyst’s understanding of the reports.

Current Options for OBIEE Documentation

Some information can be displayed as visuals, but many times this isn’t a practical solution. Memos, product descriptions, company directories, and the like are not practical as charts and graphs, and forcing them into visuals only clutters dashboards. Right now, important documentation can be stored in a wide range of places outside of the BI dashboard, but the operating reality at most organizations means that important information is spread across several locations and not always accessible to the people who need it.


Workarounds are inefficient, cost time, cause BI users to leave the BI environment (potentially reducing usage), and increase frustration. If an analyst has to email several different people to locate the information she wants, that complicates her workflow and produces extraneous communications (who likes answering emails?). Before now, there wasn’t an easy solution to these problems.

ChitChat’s BI Documentation Features

With ChitChat, it’s now possible to store critical documentation where it belongs—at the source of the conversation. Keep phone directories, memos from administrators (or requests from analysts to administrators), product descriptions, analytical histories—really, the possibilities are endless—inside the dashboard where they are accessible to the people who need them. Shorten workflows and make life easier for your BI users.

ChitChat’s easy-to-use functionality allows BI users to copy and paste or write (ChitChat has a built-in WYSIWYG text editor) important information inside the BI dashboard, creating a quicker path to insightful and actionable analytics. And isn’t that the goal in the end?

To learn more about ChitChat’s many commentary features, or to request a demo, click here.


Categories: BI & Warehousing

ASO Slice Clears – How Many Members?

Mon, 2016-03-14 05:00

Since version 11.1.1, Essbase developers have been able to (comparatively) easily clear portions of an ASO cube, getting away from fiddly methods that involved manually contra-ing existing data via reports and rules files, and making incremental loads substantially easier.

Along with the official documentation in the TechRef and DBAG, there are a number of excellent posts already out there that explain this process and how to effect “slice clears” in detail (here and here are just two I’ve come across that I think are clear and helpful). However, I recently had a requirement where the incremental load was a bit more complex. I am sure others must have fulfilled similar requirements in the same or a very similar way, but I could not find any documentation or articles relating to it, so I thought it might be worth recording.

For the most part, the requirements I’ve had in this area have been relatively straightforward—(mostly) financial systems where the volatile/incremental slice is typically a month’s worth (or quarter’s worth) of data. The load script will follow this sort of sequence:

  • [prepare source data, if required]
  • Perform a logical clear
  • Load data to buffer(s)
  • Load buffer(s) to new database slice(s)
  • [Merge slices]

The last stage is run here if processing time allows (this operation precludes access to the cube), or in a separate “out of hours” routine if not.
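In MaxL terms, a minimal sketch of that sequence might look like the following (application, database, file, and rules file names are all placeholders):

/* placeholder names throughout -- adjust to your environment */
alter database 'AppName'.'DBName' clear data in region '{[&CurrMonth]}';
alter database 'AppName'.'DBName' initialize load_buffer with buffer_id 1;
import database 'AppName'.'DBName' data from data_file '/path/to/load.txt' using server rules_file 'LoadRule' to load_buffer with buffer_id 1 on error write to '/path/to/load.err';
import database 'AppName'.'DBName' data from load_buffer with buffer_id 1 add values create slice;
alter database 'AppName'.'DBName' merge all data;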

The “logical clear” element of the script will comprise a line like (note: the lack of a “clear mode” argument means a logical clear; only a physical clear needs to be specified explicitly):

alter database 'Appname'.'DBName' clear data in region '{[Jan16]}';

or more probably

alter database 'Appname'.'DBName' clear data in region '{[&CurrMonth]}';

i.e., using a variable to get away from actually hard coding the member values to clear. For separate year/period dimensions, the slice would need to be referenced with a CrossJoin:

alter database 'Appname'.'DBName' clear data in region 'Crossjoin({[Jan]},{[FY16]})';

alter database '${Appname}'.'${DBName}' clear data in region 'Crossjoin({[&CurrMonth]},{[&CurrYear]})';

which would, of course, fully nullify all data in that slice prior to the load. Most load scripts will already be formatted so that variables would be used to represent the current period that will potentially be used to scope the source data (or in a BSO context, provide a FIX for post-load calculations), so using the same to control the clear is an easy addition.

Taking this forward a step, I’ve had other systems whereby the load could comprise any number of (monthly) periods from the current year. A little bit more fiddly, but achievable: as part of the prepare source data stage above, it is relatively straightforward to run a select distinct period query on the source data, spool the results to a file, and then use this file to construct that portion of the clear command (or, for a relatively small number, prepare a sequence of clear commands).
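As a minimal SQL*Plus sketch of that step (the source table STG_ASO_LOAD and its PERIOD column are hypothetical names):

-- hypothetical source table/column; spool distinct periods to a flat file
set heading off feedback off pagesize 0 trimspool on
spool /tmp/clear_periods.lst
select distinct period from stg_aso_load;
spool off

The spooled member names can then be concatenated into the region specification of the clear command by the calling script.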

The requirement I had recently falls into the latter category, in that the volatile dimension (where “Period” was the volatile dimension in the examples above) was a “product” dimension of sorts, and contained a lot of changed values each load—several thousand, in fact. Far too many to loop around to build a single command, and far too many to run as individual commands: whilst in testing the individual “clears” ran satisfyingly quickly, doing so generated an undesirably large number of slices.

So the problem was this: how to identify and clear data associated with several thousand members of a volatile dimension, the values of which could change totally from load to load.

In short, the answer I arrived at is with a UDA.

The TechRef does not explicitly state this or give examples, but because the Uda function can be used within a CrossJoin reference, it can be used to effect a clear. Assume the Product dimension has a UDA of CLEAR against certain members…

alter database 'Appname'.'DBName' clear data in region 'CrossJoin({Uda([Product], "CLEAR")})';

…would then clear all data for all of those members. If data for, say, just the ACTUAL scenario is to be cleared, this can be added to the CrossJoin:

alter database 'Appname'.'DBName' clear data in region 'CrossJoin({Uda([Product], "CLEAR")}, {[ACTUAL]})';

But we first need to set this UDA in order to take advantage of it. In the load script steps above, the first step is prepare source data, if required. At this point, a SQL*Plus call was inserted to invoke a new procedure that:

  1. examines the source load table for distinct occurrences of the “volatile” dimension
  2. populates a table (after initially truncating it) with a list of these members (and parents), and a third column containing the text “CLEAR”:

[Screenshot: table of volatile dimension members flagged with the CLEAR attribute]
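A minimal sketch of such a procedure (all table and column names here are hypothetical):

-- hypothetical table/column names throughout
create or replace procedure pr_set_clear_uda as
begin
  execute immediate 'truncate table uda_clear_list';
  -- distinct members (and parents) from the source, flagged with the text CLEAR;
  -- the UDACLEAR column is left empty for the post-load "clear UDA" step
  insert into uda_clear_list (member_name, parent_name, uda, udaclear)
  select distinct product, product_parent, 'CLEAR', null
  from   stg_aso_load;
  commit;
end;
/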

A “rules” file then needs to be built to load the attribute. Because the outline has already been maintained, this is simply a case of loading the UDA itself:

[Screenshot: rules file loading the UDA]

In the “Essbase Client” portion of the load script, prior to running the “clear” command, the temporary UDA table needs to be loaded using the rules file to populate the UDA for those members of the volatile dimension to be cleared:

import database 'AppName'.'DBName' dimensions connect as 'SQLUsername' identified by 'SQLPassword' using server rules_file 'PrSetUDA' on error write to 'LogPath/ASOCurrDataLoad_SetAttr.err';

[Screenshot: UDA dimension build output]

With the relevant slices cleared, the load can proceed as normal.

After the actual data load has run, the UDA settings need to be cleared. Note that the prepared table above also contains an empty column, UDACLEAR. A second rules file, PrClrUDA, was prepared that loads this (4th) column as the UDA value—loading a blank value to a UDA has the same effect as clearing it.
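That call mirrors the earlier dimension build, simply referencing the second rules file (the connection details and log path remain placeholders, as before):

import database 'AppName'.'DBName' dimensions connect as 'SQLUsername' identified by 'SQLPassword' using server rules_file 'PrClrUDA' on error write to 'LogPath/ASOCurrDataLoad_ClrAttr.err';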

The broad steps of the load script therefore become these:

  • [prepare source data, if required]
  • ascertain members of volatile dimension to clear from load source
  • update table containing current load members / CLEAR attribute
  • Load CLEAR attribute table
  • Perform a logical clear
  • Load data to buffers
  • Load buffer(s) to new database slice(s)
  • [Merge slices]
  • Remove CLEAR attributes

So it is not without limitations—if the data were volatile over two dimensions (e.g., Product A for Period 1, Product B for Period 2, etc.) the approach would not work, at least not exactly as described, although in this instance you could possibly iterate around the smaller Period dimension—but overall, I think it’s a reasonable and flexible solution.

Clear / Load Order

While not strictly part of this solution, another little wrinkle to bear in mind here is the resource taken up by the logical clear. When initializing the buffer prior to loading data into it, you have the ability to determine how much of the total available resource is used for that particular buffer—from a total of 1.0, you can allocate (e.g.) 0.25 to each of 4 buffers that can then be used for a parallel load operation, each loaded buffer subsequently writing to a new database slice. Importing a loaded buffer to the database then clears the “share” of the utilization afforded to that buffer.
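The resource share is set when each buffer is initialized; for example (placeholder application and database names, following the standard MaxL syntax):

alter database 'AppName'.'DBName' initialize load_buffer with buffer_id 1 resource_usage 0.25;
alter database 'AppName'.'DBName' initialize load_buffer with buffer_id 2 resource_usage 0.25;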

Although not a “buffer initialization” activity per se, a (slice-generating) logical clear seems to occupy all of this resource—if you have any uncommitted buffers created, even with the lowest possible resource utilization of 0.01 assigned, the logical clear will fail:

[Screenshot: error returned when a logical clear is attempted with an uncommitted load buffer]

The Essbase Technical Reference states, under “Loading Data Using Buffers”:

While the data load buffer exists in memory, you cannot build aggregations or merge slices, as these operations are resource-intensive.

It could perhaps be argued that, as we are creating a “clear slice” rather than merging slices (or building an aggregation), the logical clear falls outside this definition, but a similar restriction certainly appears to apply here too.

This is significant as, arguably, the ideal incremental load would be along the lines of:

  • Initialize buffer(s)
  • Load buffer(s) with data
  • Effect partial logical clear (to new database slice)
  • Load buffers to new database slices
  • Merge slices into database

This would both minimize the time that the cube was inaccessible (during the merge), and avoid presenting the cube with zeroes in the current load area. However, as noted above, this does not seem to be possible—there does not seem to be a way to change the resource usage (RNUM) of the “clear”—meaning that this sequence has to be followed:

  • Effect partial logical clear (to new database slice)
  • Initialize buffer(s)
  • Load buffer(s) with data
  • Load buffers to new database slices
  • Merge slices into database

I.e., the ‘clear’ has to be fully effected before the initialization of the buffers. This works as you would expect, but there is a brief period—after the completion of the “clear” but before the load buffer(s) have been committed to new slices—where the cube is accessible and the load slice will show as “0” in the cube.


Categories: BI & Warehousing

Use OBIEE to Achieve Your GOOOALS!!! – A Presentation for GaOUG

Thu, 2016-03-10 04:00

Background

A few months before the start of the 2014 World Cup, Jon Mead, Rittman Mead’s CEO, asked me to come up with a way to showcase our strengths and skills while leveraging the excitement generated by the World Cup. With this in mind, my colleague Pete Tamisin and I decided to create our own game-tracking page for World Cup matches, similar to the ones you see on popular sports websites like ESPN and CBSSports, with one caveat: we would build the game-tracker inside an OBIEE dashboard.

Unfortunately, after several long nights and weekends, we weren’t able to come up with something we were satisfied with, but we learned tons along the way and kept a lot of the content we created for future use. That future use came several months later when we decided to create our own soccer match (“The Rittman Mead Cup”) and build a game-tracking dashboard that would support this match. We then had the pleasure to present our work in a few industry conferences, like the BI Forum in Atlanta and KScope in Hollywood, Florida.

GaOUG Tech Day

Recently I had the privilege of delivering that presentation one last time, at Georgia Oracle Users Group’s Tech Day 2016. With the right amount of silliness (yes, The Rittman Mead cup was played/acted by our own employees), this presentation allowed us to discuss with the audience our approach to designing a “sticky” application; meaning, an application that users and consumers will not only find useful, but also enjoyable, increasing the chances they will return to and use the application.

We live in an era where nice, fun, pretty applications are commonplace, and our audience expects the same from their business applications. Validating the numbers on the dashboard is no longer enough. We need to be able to present that data in an attractive, intuitive, and captivating way. So, throughout the presentation, I discussed with the audience the thoughtful approach we used when designing our game-tracking page. We focused mainly on the following topics: Serving Our Consumers; Making Life Easier for Our Designers, Modelers, and Analysts; and Promoting Process and Collaboration (the latter can be accomplished with our ChitChat application). Our job would have been a lot easier if ChitChat were available when we first put this presentation together….

Finally, you can find the slides for the presentation here. Please add your comments and questions below. There are usually multiple ways of accomplishing the same thing, so I’d be grateful to hear how you guys are creating “stickiness” with your users in your organizations.

Until the next time.


Categories: BI & Warehousing

The Importance of BI Commentary

Mon, 2016-03-07 04:00
Why Is Commentary Important?

We communicate every day. Communication through text is especially abundant with the proliferation of new on-demand technologies. Have you gone through your emails today? Have you read the news, weather, or blogs (like this one)? Communication is the backbone of every interpersonal interaction. Without it, we are left guessing and assuming.

BI implementations are no exception when it comes to communication’s importance, and I would argue communication is a major component of every BI environment. The goal of any BI application is to discover and expose actionable information from data, but without collaboration, discovering insights becomes difficult. When users can collaborate immediately in the BI application, new insights can be discovered more quickly.

Any BI conversation should maintain its own dedicated communication channel, and the optimal place for these conversations is as close to the information-consumption phase as possible. When users can discuss results in the same location as the data, they are empowered to extract as much information as possible.

Unfortunately, commentary support is absent from OBIEE.

The Current OBIEE Communication Model

The lack of commentary support does not stop the community from developing their own methods or approaches to communicating within their BI environments. Right now, common approaches include purchasing pre-developed software, engineering custom solutions, or forcing the conversations into other channels.

Purchasing a commentary application or developing your own internal solutions expedites the user communication process. However, what about those who do not find a solution, and instead decide to use a “work-around” approach?

Choosing to ignore the missing functionality is the cheapest approach, initially, but may actually cost more in the long run. To engage in simple conversations, users are required to leave the BI dashboard, which adds time and difficulty to their daily processes. And reiterating the context of a conversation is both time consuming and error prone.

Additionally, which communication channel will the BI conversations invade? A dedicated communication channel, built specifically to easily display and relay the BI topics of interest, is the most efficient, and beneficial, solution.

How ChitChat Can Help

ChitChat provides a channel of communication directly within the BI environment, allowing users to engage in conversations as close to the data consumption phase as possible. Users will never be required to leave the BI application to engage in a conversation about the data, and they won’t need to reiterate the environment through screenshots or descriptions.

Recognizing the importance of separate channels of communication, ChitChat also allows each channel to easily maintain its own scope. For instance, a user may discover an error on a BI dashboard. Rather than simply identifying the error in the BI environment, the user can export the comment to Atlassian JIRA and create a ticket for the issue to be resolved, thus maintaining the appropriate scopes of both JIRA and ChitChat. Integrations allow existing channels of communication to retain their respective importance, and appropriately restrict the scope of conversations.

ChitChat is placed in the most opportune location for BI commentary, while maintaining the correct scope of the conversation. Other approaches often ignore one of these two aspects of BI commentary, but both are required to efficiently support a community within a BI environment. The most effective solution is not one that simply solves the problem, or meets some of the criteria, but the solution that meets all of the requirements.

Commentary Made Simple

Conversation around a BI environment will always occur, regardless of the supporting infrastructure or the difficulty in doing so. Rather than forcing users to spend time working around common obstacles or developing their own solutions, investing in an embedded application will save both time and money. Such an offering will not only meet the basic requirements, but also ensure the best experience for users, and the greatest return on investment.

Providing users the exact features they need, where they need it, is one step in nurturing a healthy BI environment, and ChitChat is an excellent solution to meet these criteria.

To find out more about ChitChat, or to request a demo, click here!


Categories: BI & Warehousing

OBIEE 12c – Your Answers After Upgrading

Thu, 2016-03-03 04:00

Several blogs have already been written about new functionality in OBIEE 12c. Mark Rittman, for example, posted a good one here.

Now, I’ve personally had the chance to play with it for a few weeks, mostly in Answers and some with the RPD, and wanted to share my experience. With a sleek interface and many new functionalities, 12c brings some very useful features that users will appreciate. As with most new software releases, I expected to find issues that needed to be worked out. In general, I was pleasantly surprised with the UI, the speed, and the intuitiveness that came along with OBIEE 12c.

Here, I’ll share with you some of the new features within Answers:

Percent Calculation

If you’ve created lots of percent variance columns, it’s probably second nature that you will create your formula and then multiply by 100. In 12c, you can create your percent calculation without multiplying it by 100, then set your % data formatting in the Column Properties. In the same spot where you specify how the data is displayed, you can check the x100 box, which in turn will automatically multiply your results from that column by 100. Pretty sleek solution to simplify your formulas.

[Screenshot: percent data formatting with the x100 option]
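For example, with hypothetical column names, a percent variance formula can now be left as a plain ratio, with the x100 option applying the scaling at display time:

("Base Facts"."Revenue" - "Base Facts"."Target") / "Base Facts"."Target"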

Saved Columns

This feature is very well described here, so I will give only a high-level overview: 12c gives you a very easy way to save a complex formula into the catalog. If you’ve built a lot of logic into a column’s formula and would like to reuse that logic in future reports, you will appreciate being able to save columns. I remember creating many financial calculations that had to be reused often, and until now there was no easy way to retrieve their formulas. Trying to simplify my life, I ended up inventing “my own method” of saving complex calculations: saving analyses that I named “Master – Calculation,” containing the columns I reused most often. I would start many reports from these Master reports because they held my pre-built formulas; however, this was not a clean method for others to follow. OBIEE 12c gives you a clean and simple method for storing and reusing your most-wanted columns: enter your formula in Edit Formula and choose “Save Column as” for future use.

Calculated Columns

OBIEE 12c provides a more intuitive way to create calculated columns than previous versions. In 10g or 11g, you needed to add a “whatever” column to the query and then go into Edit Formula to define the calculation for your new column. While this worked, new users often wondered why they were “bringing in two revenue columns,” for example. In 12c, you can add only the needed columns to your Criteria, then go straight to Results. In the Results tab, there is a New Calculated Measure icon that brings you immediately to the Edit Formula screen, where you can name your new measure and define its formula.

[Screenshot: New Calculated Measure]

Measure Abbreviation

There is also more intuitive abbreviation of measures placed on a graph. In 11g, when you dragged an amount to an axis, you may recall that the numbers would show up exactly as the raw number. So, if your result was 12,000,000, then that was exactly what you would see on the graph to begin with. If you wanted to improve your graph, you needed to go to the Graph Properties and format the axis data to be abbreviated into, for our example above, millions (or 12M). To save you a step, 12c automatically abbreviates your graph data in the most user-friendly way. So, if the data is 12,000,000, you automatically get 12M!

[Screenshot: automatic measure abbreviation on a graph]

Heat Matrix

An easy-to-use heat matrix!—I mean it: easy. In 11g, you had to be somewhat visually savvy and spend a lot of time on conditional formatting. OBIEE 12c gives you a tool that allows you to create a meaningful heat matrix in a matter of minutes—wait—even seconds. All you need to know is the two dimensions and one measure that you would like to use, then drag and drop them. Choose from an array of color schemes and how you would like to use the colors. In no time, your heat matrix is ready.

[Screenshot: heat matrix]

Treemaps

A new member of the OBIEE family is here to provide a visual solution for very complex activities. The Treemap provides a hierarchical structure that allows you to quickly spot patterns and outliers. At first, it may require a bit of head twisting to look at a graph like this, but remember, this is indeed a graph for complex activities. One ideal use for this new feature is grouping by parent/child groups and displaying how two measures fare inside each group.

[Screenshot: treemap]

Advanced Analytics

OBIEE 12c gives you the capability of working with statistical and R functions right from the Edit Formula pane. While I found that this new feature is still not very user-friendly, it’s a lot easier than making this functionality work in 11g. For example, to create a simple Trendline in 11g, the developer had to slowly build each step of a calculation to find the slope of a line, and then find the Y intercept. With these answers in hand, the results had to be carefully placed on a graph so that it could render meaningful results. If you require statistical graphs within OBIEE, 12c may be a great fit for you. For example, below is a graph showing four different Trendlines:

[Screenshot: graph with four Trendlines]

The Criteria for building these four lines would be very involved in 11g, but in OBIEE 12c it contains only five columns: one for the Calendar Year, and one for each Trendline. The Trendlines were created one at a time, by inserting the new Analytics function in the column’s formula (see below).

[Screenshot: Analytics function in the column formula]
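For reference, a Trendline column formula takes roughly this shape (the subject area and column names here are hypothetical; 'LINEAR' and 'VALUE' are the model and result types):

TRENDLINE("Base Facts"."Revenue", ("Time"."Calendar Year") BY ("Products"."Brand"), 'LINEAR', 'VALUE')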

Data Mashup

This is a dream come true for many of us, though it requires an optional Data Visualization license. With this new functionality, you are able to use OBIEE along with any Excel spreadsheet (XSA) saved on your machine.

You can add a spreadsheet to OBIEE from two areas:

  1. When you are creating an analysis (in the Criteria tab, and then choosing to add data source as shown below), or

[Screenshot: Add Data Source option]

  2. By going to the Visual Analyzer Home Page.

As this blog focuses on Answers, I will review the first option here.

There are three possible ways of analyzing a spreadsheet in Answers. You either want to:

  1. Analyze the spreadsheet by itself, or
  2. Use attributes from the spreadsheet along with fact data from your enterprise system, or
  3. Use fact data from your spreadsheet along with attributes and facts from your enterprise system.

For options 2 and 3 to work properly, it is important that your joins are properly matched (watch your cardinalities!) from your spreadsheet to your enterprise data. Also, as usual, option 3 will only work along with another fact table when the two tables are joined to a conformed dimension. Cardinalities and conformed dimensions are items that we generally take for granted when working on front-end OBIEE, because these points have been carefully handled during RPD modeling. Since the spreadsheet modeling has to be done in the front end, special caution must be used when modeling them in order to avoid “exploded” results, or simply inaccurate results.

A word of caution on placing an XSA-sourced analysis in a shared folder:

Once you create an analysis using a spreadsheet and save it to a shared folder, you will receive this message:

[Screenshot: warning message when saving to a shared folder]

Once you choose “YES,” the spreadsheet will show as a new subject area—for you and for anyone who has access to the folder in which you placed the analysis, meaning that the catalog security just TOOK CONTROL of your spreadsheet! Below is a screenshot of how they show as new subject areas:

[Screenshot: spreadsheets listed as new subject areas]

So, if your intent was to share an analysis from an XSA, but not necessarily to share the entire spreadsheet for reuse, you may want to restrict your analysis to a folder with the specific permissions that you would like to apply to your spreadsheet. BUT…think carefully before saving the analysis in a shared folder. If you realize that you made a mistake, know that deleting your analysis from the incorrect folder will NOT remove your spreadsheet as an available subject area for other users. Remember, the catalog security took control of your spreadsheet, and it’s not going to let it go! If you saved the analysis in a folder with incorrect permissions, you must delete the spreadsheet altogether from the tool, reload it, and then save the analysis in the correct folder (with the permissions that you want).

You will likely need in-depth information regarding mashup security once you are really working with it. Check out this Oracle doc for more info.

Word of caution when archiving an analysis containing a spreadsheet, or when moving that analysis between environments:

The username of the owner of the analysis gets embedded in the column formula, and so does the precise name that you gave your spreadsheet when you first loaded it. So, let’s say that you are transitioning environments and the new environment does not contain your spreadsheets. If someone else has an archived catalog containing one of your mashup queries, they will get an error when retrieving results for your query, because the tool doesn’t have your spreadsheet loaded yet. The only way for them to unarchive your analysis and retrieve results is for YOUR USER to log into OBIEE, load the original spreadsheet (saving it with exactly the same name as before), and then save the analysis in the proper shared folder once again.

Deleting the New Subject Area

One tricky thing in this new tool: even though you upload your spreadsheet (XSA) while building an analysis in OBIEE, it can only be deleted from the “New Home Page,” which is the Home Page of Visual Analyzer. You can get to the “New Home Page” from the “Old Home Page”:

[Screenshot: link to the New Home Page]

Once in the New Home Page, click on Data Sources. Choose your Data Source and delete it!

[Screenshot: deleting the data source]

I confess that I had some trouble finding the delete button. Maybe I would have bumped into it had I played more with VA, but that was not the case. Regardless, I felt relieved that this button existed somewhere!

Data Mashup Performance

This was a bit of an issue, but mostly when combined with the Advanced Analytics functions. From my research and from talking to colleagues, I found that the following must be observed to optimize performance:

  1. Reduce the size of your spreadsheet where possible.
  2. Add database indexes on the fields used in joins to the spreadsheet.
  3. Ensure proper cardinality on your mashup joins with your DB data.
  4. Set up caching for mashups on the BI Server.

Overall, the experience in OBIEE 12c Answers was very positive, and the new features could bring a great deal of time savings for any organization!

To learn more about all that OBIEE 12c has to offer, check out our upcoming bootcamps here.

Hope to see you then!


Categories: BI & Warehousing
