BI & Warehousing

Rittman Mead at Collaborate 16: Data Integration Focus

Rittman Mead Consulting - Mon, 2016-04-04 04:59

It’s that time of year again when Oracle technologists from around the world gather in Las Vegas, Nevada, to teach, learn, and, of course, network with their peers. The Collaborate 16 conference, running for 10 years now, has been a collaboration, if you will, between the Independent Oracle Users Group (IOUG), Oracle Applications Users Group (OAUG), and Quest International Users Group (Quest), making it one of the largest user group conferences in the world. Rittman Mead will once again be in attendance, with two data integration focused presentations by me over the course of the week.

My first session at Collaborate 16, “A Walk Through the Kimball ETL Subsystems with Oracle Data Integration,” scheduled for Monday, April 11, at 10:30 a.m., will focus on how we can implement the ETL Subsystems using Oracle Data Integration solutions. As you know, Big Data integration has been the hot topic over the past few years, and it’s an excellent feature in the Oracle Data Integration product suite (Oracle Data Integrator, GoldenGate, & Enterprise Data Quality). But not all analytics, such as labor cost, revenue, or expense reporting, require big data technologies. Ralph Kimball, dimensional modeling and data warehousing expert and founder of The Kimball Group, spent much of his career building an enterprise data warehouse methodology that can meet these reporting needs. His book, “The Data Warehouse ETL Toolkit,” is a guide for many ETL developers. This session will walk you through his ETL Subsystem categories: Extracting, Cleaning & Conforming, Delivering, and Managing, and describe how the Oracle Data Integration products are perfectly suited to the Kimball approach.

I go into further detail on one of the ETL Subsystems in an upcoming IOUG Select Journal article, titled “Implement an Error Event Schema with Oracle Data Integrator.” The Select Journal is a technical magazine published quarterly and available exclusively to IOUG members. My recent post Data Integration Tips: ODI 12c Repository Query – Find the Mapping Target Table shows a bit of the detail behind the research performed for the article.

error-event-schema
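For anyone who hasn’t come across it, the error event schema Kimball describes in the Cleaning & Conforming subsystems is a small star dedicated to data quality: each time a quality screen fails, a row is written to an error event fact table, with a companion detail table holding the offending column values. As a rough sketch only (the table and column names below are illustrative, not taken from the article):

-- Illustrative only: one row per error event raised by a data quality screen
CREATE TABLE error_event_fact (
    error_event_id   INTEGER       NOT NULL,   -- surrogate key for the event
    etl_batch_id     INTEGER       NOT NULL,   -- which load run raised it
    screen_id        INTEGER       NOT NULL,   -- which quality screen fired
    event_date_key   INTEGER       NOT NULL,   -- date dimension key
    severity_score   INTEGER       NOT NULL,   -- e.g. warning vs. fatal
    source_table     VARCHAR(128)  NOT NULL,
    CONSTRAINT pk_error_event PRIMARY KEY (error_event_id)
);

-- Illustrative only: one row per offending column value within an event
CREATE TABLE error_event_detail (
    error_event_id   INTEGER        NOT NULL,
    source_column    VARCHAR(128)   NOT NULL,
    offending_value  VARCHAR(4000)  NULL,
    CONSTRAINT fk_error_detail FOREIGN KEY (error_event_id)
        REFERENCES error_event_fact (error_event_id)
);

The Select Journal article goes into how a structure like this can be populated with Oracle Data Integrator.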

If you’re not familiar with the Kimball approach to data warehousing, I would definitely recommend reading one (or more) of their published books on the subject. I would also recommend attending one of their training courses, but unfortunately for the data warehousing community, the Kimball Group closed up shop in December 2015. But hey, the good news is that two of the former Kimball team members have joined forces at Decision Works, and they offer the same training they used to deliver under The Kimball Group name.

GoldenGate to Kafka logo

On Thursday, April 14, at 11 a.m., I will dive into the recently released Oracle GoldenGate for Big Data 12.2 in a session titled “Oracle GoldenGate and Apache Kafka: A Deep Dive into Real-Time Data Streaming.” The challenge for us as data integration professionals is to combine relational data with unstructured, high-volume, and rapidly changing datasets, known in the industry as Big Data, and transform them into something useful. Not just that, but we must also do it in near real-time and using a big data target system such as Hadoop. The topic of this session, real-time data streaming, provides a great solution for that challenging task. By combining GoldenGate, Oracle’s premier data replication technology, and Apache Kafka, the open-source streaming and messaging system for big data, we can implement a fast, durable, and scalable solution.

If you plan to be at Collaborate 16 next week, feel free to drop me a line in the comments, via email at michael.rainey@rittmanmead.com, or on Twitter @mRainey. I’d love to meet up and have a discussion around my presentation topics, data integration, or really anything we’re doing at Rittman Mead. Hope to see you all there!

The post Rittman Mead at Collaborate 16: Data Integration Focus appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

ChitChat: The Importance of BI Integrations

Rittman Mead Consulting - Thu, 2016-03-31 05:00

A user’s workflow shouldn’t change to accommodate a new tool. A new tool should fill a gap in the current workflow and help streamline the user’s process. An application without a clearly defined scope eventually overlaps with existing solutions, creating confusion and distress among users. It takes both time and effort to clarify the appropriate situations to use the application, reconcile different use cases and approaches, and resolve incorrect uses. We designed ChitChat with appropriate scopes in mind, implementing key integrations, to fit seamlessly into existing workflows.

What exactly do we mean by “scope?”

Let’s look at an example with JIRA. JIRA owns the complete ticketing process, meaning tickets are stored and maintained by the tool. Using a competing ticket solution, such as Trello, for the same purpose within the organization will cause havoc among users. However, JIRA tickets are still extremely useful outside of the JIRA application. They can be linked to and displayed inside other applications, but they are still maintained by JIRA itself.

If you can recognize that ticketing management should be handled solely by JIRA, but that exposure of those tickets outside of the tool is also important, then you understand the correct scope of the application. The scope of an application does not determine where its content is useful; it only describes which section of a workflow the application has absolute control over. The question isn’t “Where should we be able to view the information?” The question is “Where should the content be maintained?”

ChitChat respects the appropriate scopes of neighboring applications and allows the flexibility to continue maintaining the scopes of these applications. With integrations to Atlassian JIRA and Confluence and Salesforce Chatter, the information you need is available where you need it, without infringing on your existing workflow.

Examples of Integrations

Let’s look at some examples. As we use a BI dashboard, we stumble upon an issue. Using ChitChat, the issue can be identified and a conversation can be started about temporarily working around the problem. However, the IT team uses JIRA to accept issues and resolve them as appropriate. We obviously want the IT team to know about this issue, so we must create a ticket in JIRA as well. Rather than going to JIRA and creating a ticket manually, we can simply export the initial annotation to JIRA. The workflow remains largely identical, but now requires less time and effort. And this comes with the added benefit of the ticket pointing directly to the location of the issue on the dashboard.

In another instance, let’s say our dashboard has some confusing calculations on it, some of which are not immediately recognizable. The formulas used, and the reasons to use such formulas, are available in Atlassian Confluence for us to view. However, not all users have a Confluence account, and even fewer have access to the document. We could copy and paste the calculations as a document using ChitChat, but now we have two separate instances of the same information. If the calculations are changed, we must ensure both locations are accurate. Alternatively, ChitChat can sync directly with Confluence and pull a page into the application. The page guarantees accuracy by consistently pulling new updates from Confluence, as well as pushing updates to Confluence if the content is changed in ChitChat.

These approaches allow the JIRA ticket and Confluence document to be maintained in the appropriate location, while also being available in a useful context. ChitChat does not encroach on the purposes of other applications; it offers integrations that seamlessly enhance your workflow without making it convoluted. Our tool is designed specifically to fill the missing pieces in your BI workflow, allowing for a seamless transition between analysis and communication.

To learn more about ChitChat’s many commentary features, or to request a demo, click here.

The post ChitChat: The Importance of BI Integrations appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

New OTN Article – OBIEE Performance Analytics: Analysing the Impact of Suboptimal Design

Rittman Mead Consulting - Wed, 2016-03-30 03:09

I’m pleased to have recently had my first article published on the Oracle Technology Network (OTN). You can read it in its full splendour and glory(!) over there, but I thought I’d give a bit of background to it and the tools demonstrated within.

OBIEE Performance Analytics Dashboards

One of the things that we frequently help our clients with is reviewing and optimising the performance of their OBIEE systems. As part of this we’ve built up a wealth of experience in the kind of suboptimal design patterns that can cause performance issues, as well as how to go about identifying them empirically. Getting a full stack view on OBIEE performance behaviour is key to demonstrating where an issue lies, prior to being able to resolve it and proving it fixed, and for this we use the Rittman Mead OBIEE Performance Analytics Dashboards.

OBIEE Performance Analytics

A common performance issue that we see is analyses and/or RPDs built in such a way that the BI Server inadvertently returns many gigabytes of data from the database and in doing so often has to dump out to disk whilst processing it. This can create large NQS_tmp files, impacting the disk space available (sometimes critically), and the disk I/O subsystem. This is the basis of the OTN article that I wrote, and you can read the full article on OTN to find out more about how this can be a problem and how to go about resolving it.

OBIEE implementations that cause heavy use of temporary files on disk by the BI Server can result in performance problems. Until recently in OBIEE, this was really difficult to track because of the transitory nature of the files. By the time the problem had been observed (for example, disk full messages), the query responsible had moved on and the temporary files had been deleted. At Rittman Mead we have developed lightweight diagnostic tools that collect, amongst other things, the amount of temporary disk space used by each of the OBIEE components.

pad_tmp_disk

This can then be displayed as part of our Performance Analytics Dashboards, and analysed alongside other performance data on the system such as which queries were running, disk I/O rates, and more:

OBIEE Temp Disk Usage

Because the Performance Analytics Dashboards are built in a modular fashion, it is easy to customise them to suit specific analysis requirements. In this next example you can see performance data from Oracle being analysed by OBIEE dashboard page in order to identify the cause of poorly-performing reports:

OBIEE Database Performance Analysis

We’ve put online a set of videos here demonstrating the Performance Analytics Dashboards, and explaining in each case how they can help you quickly and accurately diagnose OBIEE performance problems.

You can read more about our Performance Analytics offering here, or get in touch to find out more!

The post New OTN Article – OBIEE Performance Analytics: Analysing the Impact of Suboptimal Design appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

The future of data warehousing

Dylan's BI Notes - Thu, 2016-03-17 05:17
Data warehousing is really about preparation of the data for reporting.  The assumptions are: You can predict what typical queries look like to some extent. The data need to be prepared to make the queries easier or faster, or to make more sense of the data. You know where the data come from and you […]
Categories: BI & Warehousing

The Importance of BI Documentation

Rittman Mead Consulting - Thu, 2016-03-17 05:00
Why Is BI Documentation Important?

Business intelligence systems come with a lot of extra information. Even beautifully constructed analyses have piles of background information and histories. Administrators might often have memos and updates that they’d like to share with analysts. Sales figures might have anomalies that need further explanation. But OBIEE does not currently have any options for BI Documentation inside the dashboard.

Let’s say a BI user for a cell phone distribution company is viewing a report comparing the yearly sales figures for several different cell phones. If the analyst notices that one specific cell phone is outperforming the others, but doesn’t know what makes that specific model unique, then they have to go searching for that information.


But what if the individual phone model specifications and advertising and marketing histories were already included as reports inside the dashboard? What if the analyst, with only a couple of clicks, discovered that the reason one cell phone was outperforming the others was due to its next-gen screen, camera, and chip upgrades, which proved popular with consumers? Or what if the analyst discovered that the popular phone, while containing outdated peripherals, was selling so well because of a Q3 advertising push for that model alone? All of this information might not be contained in the dashboard’s visuals, but it greatly affects the analyst’s understanding of the reports.

Current Options for OBIEE Documentation

Some information can be displayed as visuals, but many times this isn’t a practical solution. Besides making dashboards too cluttered, memos, product descriptions, company directories, etc., are not practical as charts and graphs. As of right now, important documentation can be stored in a wide range of places outside of the BI dashboard, but the operating reality at most organizations means that important information is spread across several locations and not always accessible to the people who need it.


Workarounds are inefficient, cost time, cause BI users to leave the BI environment (potentially reducing usage), and increase frustration. If an analyst has to email several different people to locate the information she wants, that complicates her workflow and produces extraneous communications (who likes answering emails?). Before now, there wasn’t an easy solution to these problems.

ChitChat’s BI Documentation Features

With ChitChat, it’s now possible to store critical documentation where it belongs—at the source of the conversation. Keep phone directories, memos from administrators (or requests from analysts to administrators), product descriptions, analytical histories—really, the possibilities are endless—inside the dashboard where they are accessible to the people who need them. Shorten workflows and make life easier for your BI users.

ChitChat’s easy-to-use functionality allows BI users to copy and paste or write (ChitChat has a built-in WYSIWYG text editor) important information inside the BI dashboard, creating a quicker path to insightful and actionable analytics. And isn’t that the goal in the end?

To learn more about ChitChat’s many commentary features, or to request a demo, click here.

The post The Importance of BI Documentation appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Revisiting "Continued..."

Tim Dexter - Tue, 2016-03-15 06:39

Adding "Continued.." to the bottom of a table if the content spills over more than one page is a very common requirement for Customer Bills. I am sure most of you have already seen Tim's blog on this topic. Just wanted to add a small note here which I got as a quick tidbit from our template expert, Hok Min. This requirement came from a telecom customer:

  1. The invoice had multiple tables giving different bill breakdowns, such as "Current Charges", "Usage Charges", "Discounts", "Itemized bills for Local Calls", "Itemized bills for STD Calls", etc. Any of these tables could spill over to the next page on any page.
  2. The itemized bills were grouped under a category, "Your Itemized Bill".

The requirements were:

  1. Whenever a table splits across a page, the next page should repeat the table header and should also display "(Continued ..)" in the table header.
  2. If the table is inside the category "Your Itemized Bill", then the heading "Your Itemized Bill" should also repeat on the next page, with the "(Continued ..)" text appended.
  3. With multiple tables within the category "Your Itemized Bill", the "(Continued ...)" message should be displayed for every table that splits across a page.

This can be seen here in the images:

Page 1: Here "YOUR ITEMIZED BILL" and "Local Calls" start on this page.



Page 2: Here "YOUR ITEMIZED BILL" and "Local Calls" continue from the previous page, while the "STD Calls" table starts on this page.



Page 3: Here "YOUR ITEMIZED BILL" and "STD Calls" continue from the previous page.


We can use the same code logic that was explained in Tim's blog. The main thing to note here is that the init-page-total should be included within each table. If the init statement of a table is kept outside of it, then it will not be able to reset the context to display "(Continued ...)" correctly. Here the first two rows of the outer table and the nested table are marked to "Repeat as header row at the top of each page". The itemized bills are displayed grouped by date, so the for-each-group is done in the third row of the nested table, and the last row has the for-each loop to display each transaction.


The below image shows the code corresponding to the above table design. Notice the use of display-condition="exceptfirst" so that the "(Continued..)" text will show in all table headers except the first one.

You can find the sample RTF template and XML data here.

Stay tuned for more updates... 

Enjoy :) !! 

Categories: BI & Warehousing

ASO Slice Clears – How Many Members?

Rittman Mead Consulting - Mon, 2016-03-14 05:00

Essbase developers have had the ability to (comparatively) easily clear portions of their ASO cubes since version 11.1.1, getting away from fiddly methods involving manually contra-ing existing data via reports and rules files, and making incremental loads substantially easier.

Along with the official documentation in the TechRef and DBAG, there are a number of excellent posts already out there that explain this process and how to effect “slice clears” in detail (here and here are just two I’ve come across that I think are clear and helpful). However, I had a requirement recently where the incremental load was a bit more complex than this. I am sure people must have fulfilled similar requirements in the same or a very similar way, but I could not find any documentation or articles relating to it, so I thought it might be worth recording.

For the most part, the requirements I’ve had in this area have been relatively straightforward—(mostly) financial systems where the volatile/incremental slice is typically a month's worth (or quarter's worth) of data. The load script will follow this sort of sequence:

  • [prepare source data, if required]
  • Perform a logical clear
  • Load data to buffer(s)
  • Load buffer(s) to new database slice(s)
  • [Merge slices]

The last stage is run here if processing time allows (this operation precludes access to the cube), or in a separate routine “out of hours” if not.

The “logical clear” element of the script will comprise a line like (note: the lack of a “clear mode” argument means a logical clear; only a physical clear needs to be specified explicitly):

alter database 'Appname'.'DBName' clear data in region '{[Jan16]}'

or more probably

alter database 'Appname'.'DBName' clear data in region '{[&CurrMonth]}'

i.e., using a variable to get away from actually hard coding the member values to clear. For separate year/period dimensions, the slice would need to be referenced with a CrossJoin:

alter database 'Appname'.'DBName' clear data in region 'Crossjoin({[Jan]},{[FY16]})'

alter database '${Appname}'.'${DBName}' clear data in region 'Crossjoin({[&CurrMonth]},{[&CurrYear]})'

which would, of course, fully nullify all data in that slice prior to the load. Most load scripts will already be formatted so that variables would be used to represent the current period that will potentially be used to scope the source data (or in a BSO context, provide a FIX for post-load calculations), so using the same to control the clear is an easy addition.

Taking this forward a step, I’ve had other systems whereby the load could comprise any number of (monthly) periods from the current year. A little bit more fiddly, but achievable: as part of the prepare source data stage above, it is relatively straightforward to run a select distinct period query on the source data, spool the results to a file, and then use this file to construct that portion of the clear command (or, for a relatively small number, prepare a sequence of clear commands).
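As a rough illustration, that preparation step can be as simple as a spooled SQL*Plus query whose output the load script then wraps into the clear statement (the staging table and column names here are assumptions):

-- Sketch only: list the distinct periods present in this load.
-- The spooled file is post-processed by the load script into the
-- member list used in the MaxL 'clear data in region' command.
SET PAGESIZE 0
SET FEEDBACK OFF
SPOOL /tmp/periods_to_clear.txt

SELECT DISTINCT period_name
FROM   stg_gl_balances
ORDER  BY period_name;

SPOOL OFF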

The requirement I had recently falls into the latter category in that the volatile dimension (where “Period” would be the volatile dimension in the examples above) was a “product” dimension of sorts, and contained a lot of changed values each load. Several thousand, in fact. Far too many to loop around and build a single command, and far too many to run as individual commands—whilst on test, the “clears” themselves ran satisfyingly quickly, it obviously generated an undesirably large number of slices.

So the problem was this: how to identify and clear data associated with several thousand members of a volatile dimension, the values of which could change totally from load to load.

In short, the answer I arrived at was a UDA.

The TechRef does not explicitly say so or give examples, but because the Uda function can be used within a CrossJoin reference, it can be used to effect a clear: assume the Product dimension had a UDA of CLEAR against certain members…

alter database 'Appname'.'DBName' clear data in region 'CrossJoin({Uda([Product], "CLEAR")})'

…would then clear all data for all of those members. If data for, say, just the ACTUAL scenario is to be cleared, this can be added to the CrossJoin:

alter database 'Appname'.'DBName' clear data in region 'CrossJoin({Uda([Product], "CLEAR")}, {[ACTUAL]})'

But we first need to set this UDA in order to take advantage of it. In the load script steps above, the first step is prepare source data, if required. At this point, a SQL*Plus call to a new procedure was inserted; the procedure

  1. examines the source load table for distinct occurrences of the “volatile” dimension
  2. populates a table (after initially truncating it) with a list of these members (and parents), and a third column containing the text “CLEAR”:

picture1
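The body of that procedure boils down to a truncate-and-insert; something along these lines, with all object names being assumptions used purely to illustrate the pattern:

-- Sketch only: refresh the list of members to be cleared for this load
TRUNCATE TABLE stg_uda_clear;

INSERT INTO stg_uda_clear (parent_member, product_member, uda_value, uda_clear)
SELECT DISTINCT
       s.parent_product,   -- parent member, as expected by the dimension build rule
       s.product,          -- the volatile dimension member to be cleared
       'CLEAR',            -- the UDA assigned ahead of the logical clear
       NULL                -- blank fourth column, used later to remove the UDA
FROM   stg_fact_load s;

COMMIT;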

A “rules” file then needs to be built to load the attribute. Because the outline has already been maintained, this is simply a case of loading the UDA itself:

picture2

In the “Essbase Client” portion of the load script, prior to running the “clear” command, the temporary UDA table needs to be loaded using the rules file to populate the UDA for those members of the volatile dimension to be cleared:

import database 'AppName'.'DBName' dimensions connect as 'SQLUsername' identified by 'SQLPassword' using server rules_file 'PrSetUDA' on error write to 'LogPath/ASOCurrDataLoad_SetAttr.err';

picture3

 

With the relevant slices cleared, the load can proceed as normal.

After the actual data load has run, the UDA settings need to be cleared. Note that the prepared table above also contains an empty column, UDACLEAR. A second rules file, PrClrUDA, was prepared that loads this (4th) column as the UDA value—loading a blank value to a UDA has the same effect as clearing it.

The broad steps of the load script therefore become these:

  • [prepare source data, if required]
  • ascertain members of volatile dimension to clear from load source
  • update table containing current load members / CLEAR attribute
  • Load CLEAR attribute table
  • Perform a logical clear
  • Load data to buffers
  • Load buffer(s) to new database slice(s)
  • [Merge slices]
  • Remove CLEAR attributes

So it is not without limitations—if the data was volatile over two dimensions (e.g., Product A for Period 1, Product B for Period 2, etc.) the approach would not work (at least, not exactly as described, although in this instance you could possibly iterate around the smaller Period dimension)—but overall, I think it’s a reasonable and flexible solution.

Clear / Load Order

While not strictly part of this solution, another little wrinkle to bear in mind here is the resource taken up by the logical clear. When initializing the buffer prior to loading data into it, you have the ability to determine how much of the total available resource is used for that particular buffer—from a total of 1.0, you can allocate (e.g.) 0.25 to each of 4 buffers that can then be used for a parallel load operation, each loaded buffer subsequently writing to a new database slice. Importing a loaded buffer to the database then clears the “share” of the utilization afforded to that buffer.

Although not a “buffer initialization” activity per se, a (slice-generating) logical clear seems to occupy all of this resource—if you have any uncommitted buffers created, even with the lowest possible resource utilization of 0.01 assigned, the logical clear will fail:

picture4

The Essbase Technical Reference states, under “Loading Data Using Buffers”:

While the data load buffer exists in memory, you cannot build aggregations or merge slices, as these operations are resource-intensive.

It could perhaps be argued that as we are creating a “clear slice,” not merging slices (nor building an aggregation), the logical clear falls outside of this definition, but a similar restriction certainly appears to apply here too.

This is significant as, arguably, the optimal incremental load would be along the lines of:

  • Initialize buffer(s)
  • Load buffer(s) with data
  • Effect partial logical clear (to new database slice)
  • Load buffers to new database slices
  • Merge slices into database

This would both minimize the time that the cube is inaccessible (during the merge) and avoid presenting the cube with zeroes in the current load area. However, as noted above, this does not seem to be possible—there does not seem to be a way to change the resource usage (RNUM) of the “clear,” meaning that this sequence has to be followed:

  • Effect partial logical clear (to new database slice)
  • Initialize buffer(s)
  • Load buffer(s) with data
  • Load buffers to new database slices
  • Merge slices into database

I.e., the ‘clear’ has to be fully effected before the initialization of the buffers. This works as you would expect, but there is a brief period—after the completion of the “clear” but before the load buffer(s) have been committed to new slices—where the cube is accessible and the load slice will show as “0” in the cube.

The post ASO Slice Clears – How Many Members? appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Use OBIEE to Achieve Your GOOOALS!!! – A Presentation for GaOUG

Rittman Mead Consulting - Thu, 2016-03-10 04:00

Background

A few months before the start of the 2014 World Cup, Jon Mead, Rittman Mead’s CEO, asked me to come up with a way to showcase our strengths and skills while leveraging the excitement generated by the World Cup. With this in mind, my colleague Pete Tamisin and I decided to create our own game-tracking page for World Cup matches, similar to the ones you see on popular sports websites like ESPN and CBSSports, with one caveat: we would build the game-tracker inside an OBIEE dashboard.

Unfortunately, after several long nights and weekends, we weren’t able to come up with something we were satisfied with, but we learned tons along the way and kept a lot of the content we created for future use. That future use came several months later when we decided to create our own soccer match (“The Rittman Mead Cup”) and build a game-tracking dashboard that would support this match. We then had the pleasure to present our work in a few industry conferences, like the BI Forum in Atlanta and KScope in Hollywood, Florida.

GaOUG Tech Day

Recently I had the privilege of delivering that presentation one last time, at Georgia Oracle Users Group’s Tech Day 2016. With the right amount of silliness (yes, The Rittman Mead cup was played/acted by our own employees), this presentation allowed us to discuss with the audience our approach to designing a “sticky” application; meaning, an application that users and consumers will not only find useful, but also enjoyable, increasing the chances they will return to and use the application.

We live in an era where nice, fun, pretty applications are commonplace, and our audience expects the same from their business applications. Validating the numbers on the dashboard is no longer enough. We need to be able to present that data in an attractive, intuitive, and captivating way. So, throughout the presentation, I discussed with the audience the thoughtful approach we used when designing our game-tracking page. We focused mainly on the following topics: Serving Our Consumers; Making Life Easier for Our Designers, Modelers, and Analysts; and Promoting Process and Collaboration (the latter can be accomplished with our ChitChat application). Our job would have been a lot easier if ChitChat were available when we first put this presentation together….

Finally, you can find the slides for the presentation here. Please add your comments and questions below. There are usually multiple ways of accomplishing the same thing, so I’d be grateful to hear how you guys are creating “stickiness” with your users in your organizations.

Until the next time.

The post Use OBIEE to Achieve Your GOOOALS!!! – A Presentation for GaOUG appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

The Importance of BI Commentary

Rittman Mead Consulting - Mon, 2016-03-07 04:00
Why Is Commentary Important?

We communicate every day. Communication through text is especially abundant with the proliferation of new on-demand technologies. Have you gone through your emails today? Have you read the news, weather, or blogs (like this one)? Communication is the backbone of every interpersonal interaction. Without it, we are left guessing and assuming.

BI implementations are no exception when it comes to the importance of communication, and I would argue communication is a major component of every BI environment. The goal of any BI application is to discover and expose actionable information from data, but without collaboration, discovering insights becomes difficult. By allowing users to collaborate immediately in the BI application, new insights can be discovered more quickly.

Any BI conversation should maintain its own dedicated communication channel, and the optimal place for these conversations is as close to the information-consumption phase as possible. By allowing users to collaborate in discussions over results at the same location as the data, users will be empowered to extract as much information as possible.

Unfortunately, commentary support is absent from OBIEE.

The Current OBIEE Communication Model

The lack of commentary support does not stop the community from developing their own methods or approaches to communicating within their BI environments. Right now, common approaches include purchasing pre-developed software, engineering custom solutions, or forcing the conversations into other channels.

Purchasing a commentary application or developing your own internal solutions expedites the user communication process. However, what about those who do not find a solution, and instead decide to use a “work-around” approach?

Choosing to ignore the missing functionality is the cheapest approach, initially, but may actually cost more in the long run. To engage in simple conversations, users are required to leave the BI dashboard, which adds time and difficulty to their daily processes. And reiterating the context of a conversation is both time consuming and error prone.

Additionally, which communication channel will the BI conversations invade? A dedicated communication channel, built specifically to easily display and relay the BI topics of interest, is the most efficient, and beneficial, solution.

How ChitChat Can Help

ChitChat provides a channel of communication directly within the BI environment, allowing users to engage in conversations as close to the data consumption phase as possible. Users will never be required to leave the BI application to engage in a conversation about the data, and they won’t need to reiterate the environment through screenshots or descriptions.

Recognizing the importance of separate channels of communication, ChitChat also easily allows each channel to maintain their respective scopes. For instance, a user may discover an error on a BI dashboard. Rather than simply identifying the error in the BI environment, the user can export the comment to Atlassian JIRA and create a ticket for the issue to be resolved, thus maintaining the appropriate scopes of both JIRA and ChitChat. Integrations allow existing channels of communication to maintain their respective importance, and appropriately restrict the scope of conversations.

ChitChat is placed in the most opportune location for BI commentary, while maintaining the correct scope of the conversation. Other approaches often ignore one of these two aspects of BI commentary, but both are required to efficiently support a community within a BI environment. The most effective solution is not one that simply solves the problem, or meets some of the criteria, but the solution that meets all of the requirements.

Commentary Made Simple

Conversation around a BI environment will always occur, regardless of the supporting infrastructure or difficulty in doing so. Rather than forcing users to spend time working around common obstacles or developing their own solutions, investing in an embedded application will save both time and money. These offerings will not only meet the basic requirements, but also ensure the best experience for users, and the most return on investment.

Providing users the exact features they need, where they need it, is one step in nurturing a healthy BI environment, and ChitChat is an excellent solution to meet these criteria.

To find out more about ChitChat, or to request a demo, click here!

The post The Importance of BI Commentary appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

OBIEE 12c – Your Answers After Upgrading

Rittman Mead Consulting - Thu, 2016-03-03 04:00

Several blogs have already been written about new functionality in OBIEE 12c. Mark Rittman, for example, posted a good one here.

Now, I’ve personally had the chance to play with it for a few weeks, mostly in Answers and some with the RPD, and wanted to share my experience. With a sleek interface and many new functionalities, 12c brings some very useful features that users will appreciate. As with most new software releases, I expected to find issues that needed to be worked out. In general, I was pleasantly surprised with the UI, the speed, and the intuitiveness that came along with OBIEE 12c.

Here, I’ll share with you some of the new features within Answers:

Percent Calculation

If you’ve created lots of percent variance columns, it’s probably second nature that you will create your formula and then multiply by 100. In 12c, you can create your percent calculation without multiplying it by 100, then set your % data formatting in the Column Properties. In the same spot where you specify how the data is displayed, you can check the x100 box, which in turn will automatically multiply your results from that column by 100. Pretty sleek solution to simplify your formulas.

Percent Calc
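To make that concrete, a percent variance column that used to carry the multiplication in its formula can now be left as the raw ratio, with the x100 option doing the work (the column names below are just examples):

-- 11g-style formula: the multiplication is baked into the column
100 * ("Sales"."Revenue" - "Sales"."Target") / "Sales"."Target"

-- 12c-style formula: keep the raw ratio and tick the x100 box in Column Properties
("Sales"."Revenue" - "Sales"."Target") / "Sales"."Target"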

Saved Columns

This feature is very well described here, so I will give a high level overview: 12c gives you a very easy way to save a complex formula into the catalog. If you’ve built a lot of logic in a column’s formula, and would like to reuse the logic in future reports, you will appreciate the opportunity of saving columns. I remember creating many financial calculations that had to be reused often, and until now there was no easy way to retrieve the column formulas. Trying to simplify my life, I ended up inventing “my own method” of saving complex calculations by saving different analyses that I named as “Master – Calculation” containing the columns that I reused often. I would start many reports based on these Master reports because they had my pre-built formulas; however, this was not a clean method for others to follow. OBIEE 12c gives you this clean and simple method for storing and reusing your most wanted columns. You do this by entering your formula in edit formula and choosing to “Save Column as” for future use.

Calculated Columns

OBIEE 12c provides a more intuitive way to create calculated columns than previous versions. In 10g or 11g, you needed to add a “whatever” column to the query, and then go in Edit Formula to define the calculation for your new column. While this worked, most new users often wondered why they were “bringing in two revenue columns,” for example. In 12c, you can add only the needed columns to your Criteria, then go straight to Results. In the Results tab, there is a New Calculated Measure icon that brings you immediately to the Edit Formula screen where you can name your new measure and define its formula.

Calc Columns

Measure Abbreviation

There is also a more intuitive abbreviation of the measures that are placed on a graph. In 11g, when you dragged an amount to an axis, you may recall that the numbers would show up exactly as the raw number. So, if your result was 12,000,000, then that was exactly what you would see on the graph to begin with. If you wanted to improve your graph, then you needed to go to the Graph Properties and format the data on the axis to be abbreviated into, for our example above, millions (or 12M). To save you a step, 12c will automatically abbreviate your graph data in the most user-friendly way. So, if the data is 12,000,000, you automatically get 12M!

Measure Abbreviation.png

Heat Matrix

An easy-to-use heat matrix!—I mean it: easy. In 11g, you had to be somewhat visually savvy and spend a lot of time on conditional formatting. OBIEE 12c gives you a tool that allows you to create a meaningful heat matrix in a matter of minutes—wait—even seconds. All you need to know are the two dimensions and one measure that you would like to use; then drag and drop them, choose from an array of color schemas, and decide how you would like to use the colors. In no time, your heat matrix is ready.

Heat Map

Treemaps

A new member of the OBIEE family is here to provide a visual solution for very complex activities. The Treemap provides a hierarchical structure that allows you to quickly spot patterns and outliers. At first, it may require a bit of head twisting to look at a graph like this, but remember, this is indeed a graph for complex activities. One of the most ideal usages for this new feature is grouping by parent/child groups and displaying how two measures fare inside each group.

Treemap.png

Advanced Analytics

OBIEE 12c gives you the capability of working with statistical and R functions right from the ‘Edit formula’ pane. While I found that this new feature is still not very user friendly, it’s a lot easier than making this functionality work in 11g. For example, to create a simple Trendline in 11g, the developer had to slowly build each step of a calculation to find the slope of a line, and then find the Y intercept. With these answers in hand, the results had to be carefully placed on a graph so that it could render meaningful results. If you require statistical graphs within OBIEE, 12c may be a great fit for you. For example, below is a graph showing four different Trendlines:

Trendline

The Criteria for building these four lines would be very intense in 11g; but in OBIEE 12c, it contained only five columns: one for the Calendar Year, and one for each Trendline. The Trendlines were created one at a time, by inserting the new “Analytics” Function in the column’s formula (see below).

AA Combo
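For reference, the Analytics functions are typed into the column formula like any other expression. A linear Trendline of revenue over calendar year, calculated separately for each product, looks roughly like this (the subject area and column names are examples only; the full argument list is in the 12c logical SQL reference):

-- Example column formula: fitted linear trend values, partitioned by product
TRENDLINE("Sales"."Revenue", ("Time"."Calendar Year") BY ("Products"."Product"), 'LINEAR', 'VALUE')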

Data Mashup

This is a dream come true for many of us, though it requires an optional data visualization license. With this new functionality, you are able to use OBIEE along with any Excel spreadsheet (XSA) saved on your machine.

You can add a spreadsheet to OBIEE from two areas:

  1. When you are creating an analysis (in the Criteria tab, and then choosing to add data source as shown below), or

Add Data Source

  2. By going to the Visual Analyzer Home Page.

As this blog focuses on Answers, I will review the first option here.

There are three possible ways of analyzing a spreadsheet in Answers. You either want to:

  1. Analyze the spreadsheet by itself, or
  2. Use attributes from the spreadsheet along with fact data from your enterprise system, or
  3. Use fact data from your spreadsheet along with attributes and facts from your enterprise system.

For options 2 and 3 to work properly, it is important that your joins are properly matched (watch your cardinalities!) from your spreadsheet to your enterprise data. Also, as usual, option 3 will only work along with another fact table when the two tables are joined to a conformed dimension. Cardinalities and conformed dimensions are items that we generally take for granted when working on front-end OBIEE, because these points have been carefully handled during RPD modeling. Since the spreadsheet modeling has to be done in the front end, special caution must be used when modeling them in order to avoid “exploded” results, or simply inaccurate results.

Word of caution on placing an XSA sourced analysis in a shared folder:

Once you create an analysis using a spreadsheet and save it to a shared folder, you will receive this message:

Warning

Once you choose “YES,” the spreadsheet will show as a new subject area—for you and for anyone who has access to the folder in which you placed the analysis, meaning that the catalog security just TOOK CONTROL of your spreadsheet! Below is a screenshot of how they show as new subject areas:

Subject Areas

So, if your intent was to share an analysis from a XSA, but not necessarily share the entire spreadsheet to be reused, you may want to restrict your analysis to a folder with the specific securities that you would like to apply to your spreadsheet. BUT…think carefully before saving the analysis in a shared folder. If you realize that you made a mistake, just know that deleting your analysis from the incorrect folder will NOT remove your spreadsheet as an available subject area for other users. Remember, the catalog security took control of your spreadsheet, and it’s not going to let it go! If you saved the analysis in a folder with incorrect permissions, you must delete the spreadsheet altogether from the tool, reload it, and then save the analysis in the correct folder (with the permissions that you want).

You will likely need in-depth information regarding mashup security once you are really working with it. Check out this Oracle doc for more info.

Word of caution when archiving an analysis containing a spreadsheet, or when moving that analysis between environments:

The username of the owner of the analysis gets embedded in the column formula, and so does the precise name that you gave your spreadsheet when you first loaded it. So, let’s say that you are transitioning environments and the new environment does not contain your spreadsheets. If someone else has an archived catalog containing one of your mashup queries, they will get an error when retrieving results for your query, because the tool doesn’t have your spreadsheet loaded yet. The only way for them to unarchive your analysis and retrieve results is for YOUR USER to log into OBIEE, load the original spreadsheet (saving it with the same exact name as before), and then saving the analysis in the proper shared folder once again.

Deleting the New Subject Area

One tricky thing in this new tool: even if you uploaded your spreadsheet (XSA) during an analysis in OBIEE, it can only be deleted from the “New Home Page,” which is the Home Page of Visual Analyzer. You can get to the “New Home Page” from the “Old Home Page”:

New Home Page

Once in the New Home Page, click on Data Sources. Choose your Data Source and delete it!

Delete SA

I confess that I had some trouble finding the delete button. Maybe I would have bumped into it had I played more with VA, but that was not the case. Regardless, I felt relieved that this button existed somewhere!

Data Mashup Performance

This was a bit of an issue, but mostly when combined with the Advanced Analytics functions. From my research and from talking to colleagues, I found that the following must be observed to optimize performance:

  1. Reduce the size of your spreadsheet, when possible.
  2. DB indexing on the field that you are joining.
  3. Proper cardinality on your mashup joins with your DB data.
  4. Set up caching for mashups on the BI Server.

Overall, the experience in OBIEE 12c Answers was very positive, and the new features could bring a great deal of time savings for any organization!

To learn more about all that OBIEE 12c has to offer, check out our upcoming bootcamps here.

Hope to see you then!

The post OBIEE 12c – Your Answers After Upgrading appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Up in the JCS Clouds !!

Tim Dexter - Wed, 2016-01-27 04:05
Hello Friends,

Oracle BI Publisher has been in the cloud for quite some time, as a part of Fusion Applications and a few other Oracle product offerings. We now announce certification of BI Publisher on Java Cloud Service!!

BI Publisher on JCS

Oracle Java Cloud Service (JCS) is a part of the platform service offerings in Oracle Cloud. Powered by Oracle WebLogic Server, it provides a platform on top of Oracle's enterprise-grade cloud infrastructure for developing and deploying new or existing Java EE applications. Check here for more details on JCS. On that page, under "Perform Advanced Tasks", you can find a link to "Leverage your on-premise licenses". That page lists all the products certified for Java Cloud Service, and we can now see BI Publisher 11.1.1.9 listed as one of the certified products, using Fusion Middleware 11.1.1.7.


How to Install BI Publisher on JCS?

Here are the steps to install BI Publisher on JCS. The certification supports the Virtual Image option only.

Step 1: Create DBaaS Instance


Step 2: Create JCS Instance

To create an Oracle Java Cloud Service instance, use the REST API for Oracle Java Cloud Service. Do not use the Wizard in the GUI. The Wizard does not offer an option to specify the MWHOME partition size, whereas the REST API does. The default size created by the Wizard is generally insufficient for BI Publisher deployments.

The detailed instructions to create the JCS instance are available in the Oracle By Example tutorial under "Setting up your environment", "Creating an Oracle Java Cloud Service instance".


Step 3:  Install and Configure BI Publisher

  1. Set up RCU on DBaaS
    • Copy RCU
    • Run RCU
  2. Install BI Publisher in JCS instance
    • Copy BI Installer in JCS instance
    • Run Installer
    • Use Software Only Install
  3. Configure BI Publisher
    • Extend Weblogic Domain
    • Configure Policy Store
    • Configure JMS
    • Configure Security

You can follow the detailed installation instructions as documented in the "Oracle By Example" tutorial.


Minimum Cloud Compute and Storage Requirements:

  1. Oracle Java Cloud Service: 1 OCPU, 7.5 GB Memory, 62 GB Storage
    • To install Weblogic instance
    • To Install BI Publisher
    • To set Temp File Directory in BI Publisher
  2. Oracle Database Cloud Service: 1 OCPU, 7.5 GB Memory, 90 GB Storage
    • To install RCU
    • To use DBaaS as a data source
  3. Oracle IaaS (Compute & Storage): (Optional - Depends on sizing requirements)
    • To Enable Local & Cloud Storage option in DBaaS (Used with Full Tooling option)

So now you can use your on-premise license to host BI Publisher standalone on Java Cloud Service for all your highly formatted, pixel-perfect enterprise reports for your cloud-based applications. Have a great day!!

Categories: BI & Warehousing

Next Generation Outline Extractor 2.0.5.1073 released

Tim Tow - Wed, 2016-01-06 23:21


In the last week or so, we placed an updated version of the Next Generation Outline Extractor on our website. This version provides support for some updated Essbase versions, including 11.1.2.4.002, 11.1.2.4.003, and 11.1.2.4.005. More importantly, it addresses a bug where alias names were improperly associated with parent members when using the MaxL extraction source. This bug was reported to us by a number of users and we are glad we were able to address it. Here is a list of the issues that were addressed:

2015.11.23 - Issue 1401 - Resolved an issue where only one alias table is exported when using MaxL as the extract source.

2015.11.23 - Issue 1402 - Resolved an issue where extracts using MaxL input and having members specified with Unicode may print incorrect characters in the output.

2015.11.23 - Issue 1403 - Resolved an issue where aliases and udas may have been improperly placed on parent members.

Please contact our support team if you have any issues.

Categories: BI & Warehousing

Learn About Hyperion & Oracle BI... 5 Minutes at a Time

Look Smarter Than You Are - Fri, 2015-11-27 14:13
Since early 2015, we've been trying to figure out how to help educate more people around the world on Oracle BI and Oracle EPM. Back in 2006, interRel launched a webcast series that started out once every two weeks and then rapidly progressed to 2-3 times per week. We presented over 125 webcasts last year to 5,000+ people from our customers, prospective customers, Oracle employees, and our competitors.

In 2007, we launched our first book and in the last 8 years, we've released over 10 books on Essbase, Planning, Smart View, Essbase Studio, and more. (We even wrote a few books we didn't get to publish on Financial Reporting and the dearly departed Web Analysis.) In 2009, we started doing free day-long, multi-track conferences across North America and participating in OTN tours around the world. We've also been trying to speak at as many user groups and conferences as we can possibly fit in. Side note, if you haven't signed up for Kscope16 yet, it's the greatest conference ever: go to kscope16.com and register (make sure you use code IRC at registration to take $100 off each person's costs).

We've been trying to innovate our education offerings since then to make sure there were as many happy Hyperion, OBIEE, and Essbase customers around the world as possible. Since we started webcasts, books, and free training days, others have started doing them too which is awesome in that it shares the Oracle Business Analytics message with even more people.

The problem is that the time we have for learning and the way we learn has changed. We can no longer take the time to sit and read an entire book. We can't schedule an hour a week at a specific time to watch an hour-long webcast when we might only be interested in a few minutes of the content. We can't always take days out of our lives to attend conferences no matter how good they are. So in June 2015 at Kscope15, we launched the next evolution in training (epm.bi/videos):


#PlayItForward is our attempt to make it easier for people to learn by making it into a series of free videos.  Each one focuses on a single topic. Here's one I did that attempts to explain What Is Big Data? in under 12 minutes:

As you can see from the video, the goal is to teach you a specific topic with marketing kept to an absolute minimum (notice that there's not a single slide in there explaining what interRel is). We figure if we remove the marketing, people will not only be more likely to watch the videos but share them as well (competitors: please feel free to watch, learn, and share too). We wanted to get to the point and not teach multiple things in each video.

Various people from interRel have recorded videos in several different categories including What's New (new features in the new versions of various products), What Is? (introductions to various products), Tips & Tricks, deep-dive series (topics that take a few videos to cover completely), random things we think are interesting, and my personal pet project, the Essbase Technical Reference.
Essbase Technical Reference on Video

Yes, I'm trying to convert the Essbase Technical Reference into current, easy-to-use videos. This is a labor of love (there are hundreds of videos to be made on just Essbase calc functions alone) and I needed to start somewhere. For the most part, I'm focusing on Essbase Calc Script functions and commands first, because that's where I get the most questions (and where some of the examples in the TechRef are especially horrendous). I've done a few Essbase.CFG settings that are relevant to calculations and a few others I just find interesting. I'm not the only one at interRel doing them, because if we waited for me to finish, well, we'd never finish. The good news is that there are lots of people at interRel who learned things and want to pass them on.

I started by doing the big ones (like CALC DIM and AGG) but then decided to tackle a specific function category: the @IS... boolean functions. I have one more of those to go and then I'm not sure what I'm tackling next. For the full, ever-increasing list, go to http://bit.ly/EssTechRef.
To see all the videos we have at the moment, go to epm.bi/videos. I'm looking for advice on which TechRef videos I should record next. I'm trying to do a lot more calculation functions and Essbase.CFG settings before I move on to things like MDX functions and MaxL commands, but others may take up that mantle. If you have functions you'd like to see a video on, head over to epm.bi/videos, click on the discussion tab, and make a suggestion or two. If you like the videos and find them helpful (or you have suggestions on how to make them more helpful), please feel free to comment too.

I think I'm going to go start working on my video on FIXPARALLEL.
Categories: BI & Warehousing

Oracle BI Publisher 12c released !!

Tim Dexter - Mon, 2015-10-26 04:43

Greetings !!

We now have Oracle BI Publisher 12c (12.2.1.0.0) available. You will be able to get the download, documentation, release notes, and certification information on the BI Publisher OTN home page. The download is also available from Oracle Software Delivery Cloud. This release is part of the Fusion Middleware 12c release that includes:

  • Oracle WebLogic Server 12c (12.2.1.0.0)
  • Oracle Coherence 12c (12.2.1.0.0)
  • Oracle TopLink 12c (12.2.1.0.0)
  • Oracle Fusion Middleware Infrastructure 12c (12.2.1.0.0)
  • Oracle HTTP Server 12c (12.2.1.0.0)
  • Oracle Traffic Director 12c (12.2.1.0.0)
  • Oracle SOA Suite and Business Process Management 12c (12.2.1.0.0)
  • Oracle MapViewer 12c (12.2.1.0.0)
  • Oracle B2B and Healthcare 12c (12.2.1.0.0)
  • Oracle Service Bus 12c (12.2.1.0.0)
  • Oracle Stream Explorer 12c (12.2.1.0.0)
  • Oracle Managed File Transfer 12c (12.2.1.0.0)
  • Oracle Data Integrator 12c (12.2.1.0.0)
  • Oracle Enterprise Data Quality 12c (12.2.1.0.0)
  • Oracle GoldenGate Monitor and Veridata 12c (12.2.1.0.0)
  • Oracle JDeveloper 12c (12.2.1.0.0)
  • Oracle Forms and Reports 12c (12.2.1.0.0)
  • Oracle WebCenter Portal 12c (12.2.1.0.0)
  • Oracle WebCenter Content 12c (12.2.1.0.0)
  • Oracle WebCenter Sites 12c (12.2.1.0.0)
  • Oracle Business Intelligence 12c (12.2.1.0.0)

For BI Publisher this is primarily an infrastructure upgrade release to integrate with WebLogic Server 12c, Enterprise Manager 12c, and FMW infrastructure 12c. There are still some important enhancements and new features in this release:

  1. Scheduler Job Diagnostics: This feature is primarily to help with custom report designs and with production job analysis. At design time, a report author can view the SQL Explain Plan and data engine logs to diagnose report performance and other issues. This will also help in diagnosing a job in production.
  2. Improved handling of large reports online: Large reports are always recommended to be run as scheduled jobs. However, there are scenarios where a few reports vary in size from one user to another. For most end users the report may be just a few pages, but for a few end users the same report may run into thousands of pages. Such reports are generally designed to be viewed online, and sometimes such large reports end up causing a stuck-thread issue on WebLogic Server. This release enhances the user experience by giving the user the ability to cancel the processing of a large report. Also, the enhanced design will no longer cause any stuck-thread issues.
  3. Schedule Job Output view control: Administrators can now hide the "make output public" option from the report job schedulers (Consumer Role) to prevent public sharing of report output.

The installation of BI Publisher will be a very different experience in this release. The entire installation effort has been divided into the following steps:

  1. Prepare
    • Install Java Development Kit 8 (JDK 8)
    • Run the Infrastructure installer, fmw_12.2.1.0.0_infrastructure.jar. This installs WebLogic Server 12c.
  2. Install BI
    • Launch the installation by invoking the executable ./bi_platform-12.2.1.0.0_linux64.bin
  3. Configure BI
    • Run Configuration Assistant
  4. Post Installation Tasks
    • Setting up Datasources
    • Setting up Delivery Channels
    • Updating Security - LDAP, SSO, roles, users, etc.
    • Scaling out

Upgrading from an 11g environment to 12c is an out-of-place migration: you migrate the Business Intelligence metadata and configuration from the Oracle 11g instance to the new 12c instance. For the migration procedure, see the Migration Guide for Oracle Business Intelligence.

For the rest of the details, please refer to the documentation here. Happy exploring BI Publisher 12c !!

Categories: BI & Warehousing

PDF417 for E-Business Suite

Tim Dexter - Mon, 2015-10-19 16:49

A while back I wrote up a how-to on 2D barcode formats. I kept things generic and covered the basics of getting the barcodes working. Tony over in Bahrain (for we are truly international :) has had a tough time getting it working under EBS, mostly to do with the usual bugbears of the JAVA_TOP, XX_TOP and getting classpaths set up. It's finally working, and Tony wanted to share a document on how to get PDF417s working under EBS.

Document available here.

Thanks for sharing Tony!

Categories: BI & Warehousing

Orphan Table Rows ... ugh!

Tim Dexter - Fri, 2015-10-09 10:57

This week, orphaned table rows and how to avoid them.

It's a bit more subtle than rows breaking across a page border, and the solution is a doozy!

I'm using another video to demonstrate because:

  1. I don't have to type and grab screenshots, even though I have one above.
  2. It's faster and more easily understood, even in my umming and erring English.
  3. I'm hip and happening, and video help is the future, kids!
  4. You get to hear my Southern (England) drawl; a great sleep aid for insomniacs!

Here it is. You might want to 'fullscreen' it. Enjoy!


Categories: BI & Warehousing

Fundamentals of SQL Writeback in Dodeca

Tim Tow - Mon, 2015-10-05 22:00
One of the features of Dodeca is read-write functionality to SQL databases.  We often get questions as to how to write data back to a relational database, so I thought I would post a quick blog entry for our customers to reference.

This example will use a simple table structure in SQL Server, though the concepts are the same when using Oracle, DB2, and most other relational databases. The example will use a simple Dodeca connection to a JDBC database. Here is the Dodeca SQL Connection object used for the connection.

The table I will use for this example was created with the following CREATE TABLE statement.

CREATE TABLE [dbo].[Test](
    [TestID] [int] IDENTITY(1,1) NOT NULL,
    [TestCode] [nvarchar](50) NULL,
    [TestName] [nvarchar](50) NULL,
    CONSTRAINT [PK_Test] PRIMARY KEY CLUSTERED ([TestID] ASC)
)

First, I used the Dodeca SQL Excel View Wizard to create a simple view in Dodeca to retrieve the data into a spreadsheet.  The view, before setting up writeback capabilities, looks like this.

To make this view writeable, follow these steps.
  1. Add the appropriate SQL insert, update, or delete statements to the Dodeca SQL Passthrough Dataset object.  The values to be replaced in the SQL statement must be specified using the notation @ColumnName where ColumnName is the column name, or column alias, of the column containing the data.
  2. Add the column names of the primary key for the table to the PrimaryKey property of the SQL Passthrough DataSet object.
  3. Depending on the database used, define the column names and their respective JDBC datatypes in the Columns property of the SQL Passthrough Dataset. This mapping is optional for SQL Server because Dodeca can obtain the required information from the Microsoft JDBC driver; however, the Oracle and DB2 JDBC drivers do not provide this information, so it must be entered by the developer.
For insert, update, and delete operations, Dodeca parses the SQL statement to read the parameters marked with the @ indicator and creates a JDBC prepared statement to execute them. The prepared statement format is very efficient, as it compiles the SQL statement once and then executes it multiple times. Each inserted row is also passed to the server during the transaction, and the values from each row are used in conjunction with the prepared statement to perform the operation.
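As a concrete illustration, here is roughly what the writeback statements for the Test table above might look like. These statements are a sketch based on the @ColumnName convention described in this post, not statements taken from it, and the comment under the insert shows approximately how the parsing step would turn it into a JDBC prepared statement.

-- Hypothetical writeback statements for [dbo].[Test] (illustrative only)
-- Insert: TestID is an IDENTITY column, so it is not supplied
INSERT INTO [dbo].[Test] ([TestCode], [TestName])
VALUES (@TestCode, @TestName)
-- parsed to: INSERT INTO [dbo].[Test] ([TestCode], [TestName]) VALUES (?, ?)

-- Update: the column listed in the PrimaryKey property identifies the row
UPDATE [dbo].[Test]
SET [TestCode] = @TestCode, [TestName] = @TestName
WHERE [TestID] = @TestID

-- Delete: again keyed on the primary key
DELETE FROM [dbo].[Test]
WHERE [TestID] = @TestID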

Here is the completed Query definition.


Next, modify the DataSetRanges property of the Dodeca View object and, to enable insert operations, set the AllowAddRow property to True.  Note that if you added update and/or delete SQL to your SQL Passthrough Dataset object, be sure to enable those operations on the worksheet via the AllowDeleteRow and AllowModifyRow properties.

Once this step is complete, you can run the Dodeca View, add a row, and press the Save button to save the record to the relational database.



Insert, update, and delete functionality using plain SQL statements is limited to operations on a single table. If you need to update multiple tables, you must use stored procedures. You can call a stored procedure in Dodeca using syntax similar to the following example:

{call sp_InsertTest(@TestCode, @TestName)}
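
For context, here is a minimal sketch of what such a procedure might look like in SQL Server, assuming the Test table above; sp_InsertTest and its body are hypothetical and simply illustrate the shape of a procedure that the call syntax above could target.

-- Hypothetical procedure behind the {call ...} example above (illustrative only)
CREATE PROCEDURE sp_InsertTest
    @TestCode nvarchar(50),
    @TestName nvarchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    -- A real procedure could touch several tables here; plain SQL writeback cannot
    INSERT INTO [dbo].[Test] ([TestCode], [TestName])
    VALUES (@TestCode, @TestName);
END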

Dodeca customers can contact support for further information at support@appliedolap.com.
Categories: BI & Warehousing

Page Borders and Title Underlines

Tim Dexter - Wed, 2015-08-26 16:32

I have taken to recording screen grabs to help some folks out on 'how do I' scenarios. Sometimes a 3-minute video saves a couple of thousand words and several screenshots.

So, per chance you need to know:

1. How to add a page border to your output and/or

2. How to add an underline that runs across the page

Watch this!   https://www.youtube.com/watch?v=3UcXHeSF0BM

If you need the template, sample data and output, get them here.

I'm taking requests if you have them.

Categories: BI & Warehousing

Kscope15 - It's a Wrap, Part II

Chet Justice - Thu, 2015-07-09 14:04
Another fantastic Kscope in the can.

This was my final year in an official capacity which was a lot more difficult to deal with than I had anticipated. Here's my record of service:
  • 2010 (2011, Long Beach) - I was on the database abstract review committee run by Lewis Cunningham. I ended up volunteering to help put together the Sunday Symposium and with the help of Dominic Delmolino, Cary Millsap and Kris Rice, I felt I did a pretty decent job.
  • 2011 (2012, San Antonio) - Database track lead. I believe this is the year that Oracle started running the Sunday Symposiums. Kris again led the charge, with some input from those other two from the year before, i.e. DevOps oriented.
  • 2012 (2013, New Orleans) Content co-chair for the traditional stuff (Database, APEX, ADF), Interview Monkey (Tom Kyte OMFG!), OOW/ODTUG Coordinator, etc.
  • 2013 (2014, Seattle) Content co-chair for the traditional stuff (Database, APEX, ADF), Interview Monkey, OOW/ODTUG Coordinator, etc.
  • 2014 (2015, Hollywood, FL) Content co-chair for the traditional stuff (Database, APEX, ADF)

This has been a wonderful time for me, both professionally and, more importantly to me, personally. Obviously I had a big voice in the direction of content. Also, and maybe hard to believe, I actually presented for the first time, slotted against Mr. Kyte. I reminded everyone of that too. Multiple times. It seemed to go well though. Only a few made fun of me.

I was constantly recruiting too. "Did you submit an abstract?" "No, why not?" And I'd go into my own personal diatribe (ignoring my own lack of presenting) about why they should present. Sarah Craynon Zumbrum summed it up pretty well in a recent article.

But it was the connections I made, the people I met, the stories I shared (#ampm, #cupcakeshirt, etc.), and the friends that I made, that's what has had the most impact on me. Kscope is unique in that way because of its size...at Collaborate or OOW you'll be lucky to see someone more than once or twice, while at Kscope you're running into everyone constantly.

How could I forget? #tadasforkate! This year was even more special. For those that don't know, Katezilla is my profoundly delayed but equally profoundly happy 10 y/o daughter. Just prior to the conference, her physical therapist taught her "tada!": Kate would hold her hands up high in the air and everyone around would yell, "Tada!" I got this crazy idea to ask others to do it and film it. Thirty or forty videos and hundreds of participants later...



So a gigantic thank you to everyone who made this possible for me.
Here's a short list of those that had a direct impact on me...
  • Lewis Cunningham - he asked me to be a reviewer which started all of this off.
  • Mike Riley - can't really say enough about Mike. After turning me away a long time ago (jerk), he was probably my biggest supporter over the years. (Remind me next year to tell you about "The Hug.") Mike, and his family, are very dear to me.
  • Monty Latiolais (rhymes with Frito Lay I would tell myself) - How can you not love this guy?
  • Natalie Delemar - Co-chair for EPM/BI and then boss as Conference Chair.
  • Opal Alapat - Co-chair for EPM/BI and one of my favorite humans ever invented. I aspire to be more organized, assertive, and bad-ass like Opal.
That list is by no means exhaustive. It doesn't even include staff at YCC, like Crystal Walton, Lauren Prezby and everyone else there. Nor does it include the very long list of Very Special People I've met. I consider myself very fortunate and incredibly grateful.

What's the future hold?
I have no idea. My people are in talks with Helen J. Sander's people to do one or more presentations next year, so there's that. Speaking of which...it's in Chicago. Abstract submissions start soon, and I hope you plan on submitting. If you're not ready to submit, I hope you try to take part in shaping the content by joining one of the roughly 10 abstract review committees. Who knows where they may lead you?

Finally, here's the It's a Wrap video from Kscope15 (see Helen's story there). Here's Kscope16's site. Go sign up.

Categories: BI & Warehousing

Next Generation Outline Extractor 2.0.4.887 Released

Tim Tow - Sun, 2015-06-14 09:36
We recently released an updated version of the Next Generation Outline Extractor. This new version, 2.0.4.887, addresses three issues:


  • Fixed an issue where the username and password passed via the command line were improperly logged
  • Fixed an issue reading MaxL XML data sources when the alias or UDA contained XML-encoded characters such as the ampersand (&) character.
  • Updated labels on the Input Source tab of the user interface to clarify their purpose.
Here is a screenshot showing the updated labeling.


Due to the architecture of the Oracle Essbase APIs, it is generally much faster to use the MaxL Outline XML extracts when processing an Essbase Outline extract.  The Next Generation Outline Extractor still uses the Essbase Java API during this extract, but it is able to minimize the number of calls.  The second option shown above, Extract and Process MaxL Outline XML, will automatically extract the Outline XML from the cube during the processing.  The third option shown, Use Previously Extracted MaxL Outline XML, uses (obviously) an Outline XML file that has already been extracted.

Thank you to everyone who reported issues or made suggestions as you help make this utility better!

Categories: BI & Warehousing
