BI & Warehousing

What's New in Dodeca 6.7.1?

Tim Tow - Mon, 2014-03-24 11:28
Last week, we released Dodeca version 6.7.1.4340, which focuses on new relational functionality. The major new features in 6.7.1 are:
  • Concurrent SQL Query Execution
  • Detailed SQL Timed Logging
  • Query and Display PDF format
  • Ability to Launch Local External Processes from Within Dodeca
Concurrent SQL Query Execution
Dodeca has a built-in SQLPassthroughDataSet object that supports queries to a relational database.  The SQLPassthroughDataSet functionality was engineered so that a single SQLPassthroughDataSet object can include multiple queries that get executed and returned in a single trip to the server, and several of our customers have taken great advantage of that functionality.  We have at least one customer, in fact, with SQLPassthroughDataSets that execute up to 20 queries in a single trip to the server.  The functionality was originally designed to run the queries sequentially, but in some cases it would be better to run the queries concurrently.  Because this is Dodeca, concurrent query execution is, of course, configurable at the SQLPassthroughDataSet level.

Detailed SQL Timed Logging
In Dodeca version 6.0, we added detailed timed logging for Essbase transactions.  In this version, we have added similar functionality for SQL transactions and have formatted the logs in pipe-delimited format so they can easily be loaded into Excel or into a database for further analysis.  The columns of the log include the log message level, timestamp, sequential transaction number, number of active threads, transaction GUID, username, action, description, and time to execute in milliseconds.

Below is an example of the log message.

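Every value in the following line is illustrative rather than taken from a real system, but it shows the pipe-delimited column order described above:

INFO|2014-03-21 09:15:42.117|1042|3|a1b2c3d4-e5f6-7890-abcd-ef1234567890|jdoe|ExecuteQuery|SQLPassthroughDataSet OrderDetail, query 2 of 4|1843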

Query and Display PDF format
The PDF View type now supports the ability to load a PDF directly from a relational table via a tokenized SQL statement.  This functionality will be very useful for customers who have contextual information, such as invoice images, stored relationally and need a way to display that information.  We frequently see this requirement as the end result of a drill-through operation from either Essbase or relational reports.
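As a sketch, a tokenized statement behind such a view might look like the following; the table and column names are hypothetical and the token syntax shown is illustrative rather than definitive:

SELECT INVOICE_PDF FROM INVOICE_IMAGE WHERE INVOICE_NUM = [T.InvoiceNum]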

Ability to Launch Local External Processes from Within Dodeca
Certain Dodeca customers store data files for non-financial systems in relational data stores and use Dodeca as a central access point.  The new ability to launch a local process from within Dodeca is implemented as a Dodeca Workbook Script method, which provides a great deal of flexibility in how the process is launched.

The new 6.7.1 functionality follows closely on the 6.7.0 release, which introduced proxy server support and new MSAD and LDAP authentication services.  If you are interested in seeing all of the changes in Dodeca, highly detailed release notes are available on our website at http://www.appliedolap.com/resources/downloads/dodeca-technical-docs.

Categories: BI & Warehousing

Internal Links

Tim Dexter - Wed, 2014-03-05 20:06

Another great question today, this time from friend and colleague Jerry, the master house re-fitter. I think we are competing on who can completely rip and replace their entire house in the shortest time on their own. Every conversation we have starts with 'so what are you working on?' He's in the midst of a kitchen re-fit; I'm finishing off odds and ends before I re-build our stairwell and start work on my hidden man cave under said stairs. Anyhoo, his question!

Can you create a PDF document that shows a summary on the first page and provides links to more detailed sections further down in the document?

Why yes you can, Jerry. Something like this? Click on the department names in the first table and the 'return to top' links in the detail sections. Pretty neat, huh? Dynamic internal links based on the data, in this case the department names.

It's not that hard to do either. Here's the template, RTF only right now.


The important fields in this case are the ones in red; here are their contents.

TopLink

<fo:block id="doctop" />

Just think of it as an anchor to the top of the page, called 'doctop'.

Back to Top

<fo:basic-link internal-destination="doctop" text-decoration="underline">Back to Top</fo:basic-link>

Just a live 'Back to Top' link, if you will, that takes the user to the doctop location, i.e. to the top of the page.

DeptLink

<fo:block id="{DEPARTMENT_NAME}"/>

Just like the TopLink above, this creates an anchor in the document. The neat thing here is that we dynamically name it with the actual value of DEPARTMENT_NAME. Note that this link is inside the for-each:G_DEPT loop, so the {DEPARTMENT_NAME} is evaluated each time the loop iterates. The curly braces force the engine to fetch the DEPARTMENT_NAME value before creating the anchor.

DEPARTMENT_NAME

<fo:basic-link internal-destination="{DEPARTMENT_NAME}"><?DEPARTMENT_NAME?></fo:basic-link>

This is the link the user clicks to navigate to the detail for that department. It does not use a regular MS Word URL; we have to create a field in the template to hold the department name value and apply the link. Note, no text decoration this time, i.e. no underline.

You can add a dynamic link onto anything in the summary section. You just need to remember to keep link 'names' as unique as needed for source and destination. You can combine multiple data values into the link name using the concat function; for example:
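Here is a sketch that uniquifies the anchor by combining two data values; JOB_ID is a hypothetical second element in the same group:

<fo:block id="{concat(DEPARTMENT_NAME,'_',JOB_ID)}"/>

with a matching link in the summary table:

<fo:basic-link internal-destination="{concat(DEPARTMENT_NAME,'_',JOB_ID)}"><?DEPARTMENT_NAME?></fo:basic-link>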

Template and data available here. Tested with 10g and 11g; will work with all BIP flavors.

Categories: BI & Warehousing

The OLAP Extension is now available in SQL Developer 4.0

Keith Laker - Tue, 2014-03-04 14:57


The OLAP Extension is now in SQL Developer 4.0.

See
http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/sqldev-releasenotes-v4-1925251.html for the details.

The OLAP functionality is mentioned toward the bottom of the web page.
You will still need AWM 12.1.0.1.0 to
  • Manage and enable cube and dimension MVs.
  • Manage data security.
  • Create and edit nested measure folders (i.e. measure folders that are children of other measure folders)
  • Create and edit Maintenance Scripts
  • Manage multilingual support for OLAP Metadata objects
  • Use the OBIEE plugin or the Data Validation plugin
What is new or improved:
  • New Calculation Expression editor for calculated measures.  This allows the user to nest different types of calculated measures easily.  For instance, a user can now create a Moving Total of a Prior Period as one calculated measure.  In AWM, it would have required the user to create a Prior Period measure first and then create a Moving Total calculated measure which referred to the Prior Period measure.  Also, the new Calculation Expression editor displays hypertext helper templates when the user selects the OLAP API syntax in the editor.
  • Support for OLAP DML command execution in the SQL Worksheet.  Simply prefix OLAP DML commands with a '~' and then select the execute button to execute them in the SQL Worksheet (see the example after this list).  The output of the command will appear in the DBMS Output Window if it is open, or the Script Output Window if the user has executed 'set serveroutput on' before executing the DML command.
  • Improved OLAP DML Program Editor integrated within the SQL Developer framework.
  • New diagnostic reports in the SQL Developer Report navigator.
  • Ability to create a fact view with a measure dimension (i.e. "pivot cube").  This functionality is accessible from the SQL Developer Tools-OLAP menu option.
  • Cube scripts have been renamed to Build Specifications and are now accessible within the Create/Edit Cube dialog.  The Build Specifications editor there is similar in functionality to the Calculation Expression editor.
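As an illustration of the OLAP DML support mentioned above, a minimal sketch (SHOW is a basic OLAP DML command that displays the value of an expression; this assumes a session with access to the OLAP option):

set serveroutput on
~show 'Hello from OLAP DML'

With serveroutput on, the text appears in the Script Output Window as described above.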
Categories: BI & Warehousing

Waterfall Charts

Tim Dexter - Fri, 2014-02-28 19:35

Great question came through the ether from Holger on waterfall charts last night.

"I know that Answers supports waterfall charts and BI Publisher does not.
Do you have a different solution approach for waterfall charts with BI Publisher (perhaps stacked bars with white areas)?
Maybe you have already implemented something similar in the past and you can send me an example."

I didn't have one to hand, but I do now. Little known fact: the Publisher chart engine is based on the Oracle Reports chart engine. Therefore, this document came straight to mind. It's awesome for chart tips and tricks. Will you have to get your hands dirty in the chart code? Yep. Will you get the chart you want with a little effort? Yep. Now, I know, I know, in this day and age you should get waterfalls with no effort, but then you'd be bored, right?

First things first, for the uninitiated, what is a waterfall chart? From some kind person at Wikipedia, "The waterfall chart is normally used for understanding how an initial value is affected by a series of intermediate positive or negative values. Usually the initial and the final values are represented by whole columns, while the intermediate values are denoted by floating columns. The columns are color-coded for distinguishing between positive and negative values."

We'll get back to that last sentence later; for now let's get the basic chart working.

Checking out the Oracle Reports charting doc, search for 'floating', their term for 'waterfall', and it will get you to the section on building a 'floating column chart' or, in more modern parlance, a waterfall chart. If you have already got your feet wet in the dark arts world of Publisher chart XML, get on with it and get your waterfall working.

If not, read on.

When I first started looking at this chart, I decided to ignore the 'negative values' in the definition above. Being a glass half full kind of guy, I don't see negatives, right? :)

Without them it's a pretty simple job of rendering a stacked bar chart with 4 series for the colors: one for the starting value, one for the ending value, one for the diffs (steps) and one for the base values. The base values' color could be set to white, but that obscures any tick lines in the chart. Better to use the transparency option from the Oracle Reports doc.

<Series id="0" borderTransparent="true" transparent="true"/> 

Pretty simple; even the data structure is reasonably easy to get working. But the negative values were nagging at me, and Holger, who I had pointed at the Oracle Reports doc, had come back and could not get negative values to show correctly. So I took another look. What a pain in the butt!

In the chart above (that's my first BIP waterfall, maybe the first ever BIP waterfall) I have lime green start and finish bars; red for negative and green for positive values. Look a little closer at the hidden bar values where we transition from red to green: ah man, royal pain in the butt! Not because of anything tough in the chart definition, that's pretty straightforward. I just need the following columns: START, BASE, DOWN, UP and FINISH.

                     START  BASE  UP  DOWN  FINISH
Bar 1 - Start Value    200     0   0     0       0
Bar 2 - PROD1            0   180   0    20       0
Bar 3 - PROD2            0   150   0    30       0

and so on. The start, up, down and finish values are reasonably easy to get. The real trick is calculating that hidden BASE value correctly for the transition from -ve to +ve and vice versa. Hitting Google, I found the key to that calculation in a great page on building a waterfall chart in Excel from the folks at Contextures.  Excel is great at referencing previous cell values to create complex calculations, and I guess I could have fudged this article and used an Excel sheet as my data source. I could even have used an Excel template against my database table to create the data for the chart and fed the resulting Excel output back into the report as the data source for the chart. But I digress; that would be très cool though, gotta look at that.
On that page is the formula to get the hidden base bar values, and I adapted it into some SQL to get the same result.

Let's assume I have the following data in a table:

PRODUCT_NAME  SALES
PROD1           -20
PROD2           -30
PROD3            50
PROD4            60

The sales values are versus the same period last year, i.e. delta values.  I have a starting value of 200 total sales; let's assume this is pulled from another table.
I have spent the majority of my time on generating the data; the actual chart definition is pretty straightforward. Getting that BASE value has been most tricksy!

I need to generate the following for each column:

PRODUCT_NAME  STRT  BASE_VAL  DOWN  UP  END_TOTAL
START          200         0     0   0          0
PROD1            0       180    20   0          0
PROD2            0       150    30   0          0
PROD3            0       150     0  50          0
PROD4            0       200     0  60          0
END              0         0     0   0        260

Ignoring the START and END values for a second, here's the query for the PRODx columns:

 SELECT 2 SORT_KEY 
, PRODUCT_NAME
, STRT
, SALES
, UP
, DOWN
, 0 END_TOTAL
, 200 + (SUM(LAG_UP - DOWN) OVER (ORDER BY PRODUCT_NAME)) AS BASE_VAL
FROM
(SELECT P.PRODUCT_NAME
,  0 AS STRT
, P.SALES
, CASE WHEN P.SALES > 0 THEN P.SALES ELSE 0 END AS UP  
, CASE WHEN P.SALES < 0 THEN ABS(P.SALES) ELSE 0 END AS DOWN
, LAG(CASE WHEN P.SALES > 0 THEN P.SALES ELSE 0 END,1,0) 
      OVER (ORDER BY P.PRODUCT_NAME) AS LAG_UP
FROM PRODUCTS P
)

The inner query is breaking the UP and DOWN values into their own columns based on the SALES value. The LAG function is the cool bit, fetching the UP value from the previous row. That column is the key to getting the BASE values correct.

The outer query just has a calculation for the BASE_VAL.

200 + (SUM(LAG_UP - DOWN) OVER (ORDER BY PRODUCT_NAME))

The SUM ... OVER allows me to iterate over the rows to get the calculation I need, i.e. the starting value (200) plus the running sum of LAG_UP - DOWN. Remember, the LAG_UP value is fetched from the previous row.
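Plugging in the sample data (ordered by product name) shows how the running sum produces the BASE_VAL figures in the table above; each parenthesized term is that row's LAG_UP - DOWN:

PROD1: 200 + (0 - 20)                                  = 180
PROD2: 200 + (0 - 20) + (0 - 30)                       = 150
PROD3: 200 + (0 - 20) + (0 - 30) + (0 - 0)             = 150
PROD4: 200 + (0 - 20) + (0 - 30) + (0 - 0) + (50 - 0)  = 200

For PROD4, LAG_UP is 50 because the previous row, PROD3, was an UP of 50; that is exactly the adjustment needed at the -ve to +ve transition.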
Is there a neater way to do this? I'm sure there is; I could probably eliminate the inner query with a little effort, but for the purposes of this post it's quite handy to be able to break things down.

For the start and end values I used more queries and then just UNIONed the three together. One note on that union: the sorting. For the chart to work, I need START, PRODx, FINISH, in that order. The easiest way to get that was to add a SORT_KEY value to each query and then sort by it. So my total query for the chart was:

SELECT 1 SORT_KEY
, 'START' PRODUCT_NAME
, 200 STRT
, 0 SALES
, 0 UP
, 0 DOWN
, 0 END_TOTAL
, 0 BASE_VAL
FROM PRODUCTS
UNION
SELECT 2 SORT_KEY 
, PRODUCT_NAME
, STRT
, SALES
, UP
, DOWN
, 0 END_TOTAL
, 200 + (SUM(LAG_UP - DOWN) 
      OVER (ORDER BY PRODUCT_NAME)) AS BASE_VAL
FROM
(SELECT P.PRODUCT_NAME
,  0 AS STRT
, P.SALES
, CASE WHEN P.SALES > 0 THEN P.SALES ELSE 0 END AS UP  
, CASE WHEN P.SALES < 0 THEN ABS(P.SALES) ELSE 0 END AS DOWN
, LAG(CASE WHEN P.SALES > 0 THEN P.SALES ELSE 0 END,1,0) 
       OVER (ORDER BY P.PRODUCT_NAME) AS LAG_UP
FROM PRODUCTS P
)
UNION
SELECT 3 SORT_KEY 
, 'END' PRODUCT_NAME
, 0 STRT
, 0 SALES
, 0 UP
, 0 DOWN
, SUM(SALES) + 200 END_TOTAL
, 0 BASE_VAL
FROM PRODUCTS
GROUP BY 1,2,3,4,6
ORDER BY 1 

A lot of effort for a dinky chart, but now it's done once, doing it again will be easier. Of course no one will want just a single chart in their report; there will be other data, tables, charts, etc. I think if I was doing this in anger I would just break out this query as a separate item in the data model, i.e. a query just for the chart. It will make life much simpler.
Another option I considered was to build a sub template in XSL to generate the XML tree to support the chart and assign that to a variable. I'm sure it can be done with a little effort; I'll save it for another time.

On the last leg, we have the data; now to build the chart. This is actually the easy bit. Sadly, I have found an issue in the online template builder that precludes using the chart builder in those templates. However, RTF templates to the rescue!

Insert a chart and, in the dialog, set up the data like this:

It's just a vertical stacked bar with the BASE_VAL color set to white. You can still see the 'hidden' bars and they are overwriting the tick lines, but if you are happy with it, leave it as is. You can double-click the chart and the dialog box can read it, no problem. If, however, you want those 'hidden' bars truly hidden, then click on the Advanced tab of the chart dialog and replace:

<Series id="1" color="#FFFFFF" />

with

<Series id="1" borderTransparent="true" transparent="true" />

and the bars will become completely transparent. You can do the 3D and gradient thang if you want and play with colors and themes. You'll then be done with your waterfall masterpiece!

A lot of work? Not really; more than out of the box, for sure, but hopefully I have given you enough to decipher the data needs and how to do it, at least with an Oracle db. If you need all my files, including table definition, sample XML, BIP DM, report and templates, you can get them here.

Categories: BI & Warehousing

Wildcard Filtering continued

Tim Dexter - Mon, 2014-02-24 13:27

I wrote up a method for using wildcard filtering in your layouts a while back here. I spotted a followup question on that blog post last week and thought I would try and address it using another wildcard method. 

I want to use the bi publisher to look for several conditions using a wild card. For example if I was sql it would look like this:

if name in ('%Wst','%Grt')

How can I utilize bi publisher to look for the same conditions.

This, in XPath speak, is an OR condition and we can treat it as such. In the last article I used the 'starts-with' function; it's a little limiting, and there is a better one, 'contains'. This is a much more powerful function that allows you to look for any string within another string. Note that, like most XPath string functions, it is case-sensitive, so if your data is mixed case you may still need to upper- or lower-case the string you are searching to get the desired results.
Here it is in action:

For the clerks filter I use:

<?for-each-group:G_1[contains(JOB_TITLE,'Clerk')];./JOB_TITLE?>

and to find all clerks and managers, I use:

<?for-each-group:G_1[contains(JOB_TITLE,'Clerk') or contains(JOB_TITLE,'Manager')];./JOB_TITLE?>

Note that I'm using re-grouping here; you can use the same XPath with a regular for-each. Also note the lower-case 'or' in the second expression. You can use an 'and' too.
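For example, a sketch with hypothetical filter values, using the same re-grouping syntax as above:

<?for-each-group:G_1[contains(JOB_TITLE,'Sales') and contains(JOB_TITLE,'Manager')];./JOB_TITLE?>

This would return only the groups whose job titles contain both strings, e.g. 'Sales Manager'.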

This works in 10 and 11g flavors of BIP. Sample files available here.

Categories: BI & Warehousing

Filtered Charts

Tim Dexter - Tue, 2014-02-11 17:19

A customer question this week regarding filtering a chart. They have a report with a bunch of criteria with monetary values but, rather than show all of the criteria in a pie chart, they just want to show a few.


 There are a couple of ways to tackle this:

1. Filter the chart data in the chart definition. Using an XPath expression, you can filter out all of the criteria you do not want to see. Open the chart definition and update it; you will need to update the RowCount, RowLabels and DataValues attributes. Add in the following XPath expression:

    [DEPARTMENT_NAME='Accounting' or DEPARTMENT_NAME='Marketing' or DEPARTMENT_NAME='Executive']

so the DataValues value becomes:

    <DataValues><xsl:for-each-group select=".//G_1[DEPARTMENT_NAME='Accounting' or 
                                                     DEPARTMENT_NAME='Marketing' or 
                                                        DEPARTMENT_NAME='Executive']" ...

2. Create a variable in the template to hold just the values you want to chart.

    <?variable: filterDepts; /DATA_DS/LIST_G_1/G_1[DEPARTMENT_NAME='Accounting' or 
                                                     DEPARTMENT_NAME='Marketing' or 
                                                       DEPARTMENT_NAME='Executive']?>

Then update the chart definition with the variable for the same three attributes above, the RowCount, RowLabels and DataValues. For example:

    <DataValues><xsl:for-each-group select="$filterDepts" ...

These both work admirably, but they both require some manual updating of the chart definition, which can get fiddly and a pain to maintain. I'm also just filtering for three departments; when you get up to 5 or 6, the XPath starts to become a pain to maintain. Option 2 alleviates this somewhat because you only need to define the filter once to create the filtered variable.
A better option may be ...

3. Force the effort down into the data layer. Create another query in the report that just pulls the data for the chart.

LIST_G2/G_2 holds the data for the chart. Then all you need do is create a vanilla chart on that particular section of the data.

Yes, there is some overhead to re-fetch the data, but this is going to be about the same as, if not less than, the extra processing required in the template with options 1 and 2. This has another advantage: you can parameterize the criteria for the user. You can create a parameter to allow the user to select, in my case, the department(s) they want to chart.


It's simple enough to create the multi-select parameter and modify the query to filter based on the values chosen by the user.
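As a sketch, assuming a hypothetical multi-select parameter named P_DEPTS and a hypothetical EMP_DEPT table, the chart query might look like:

SELECT DEPARTMENT_NAME
, SUM(SALARY) SALARY
FROM EMP_DEPT
WHERE DEPARTMENT_NAME IN (:P_DEPTS)
GROUP BY DEPARTMENT_NAME

The IN (:PARAMETER) pattern is the usual way to bind a multi-value BIP parameter in a data model query, though the exact binding behavior can vary by version.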

Sample report (including data model and layout template) available here; just un-archive it into your catalog.
RTF template plus sample data available here.




Categories: BI & Warehousing

Alternate Tray Printing

Tim Dexter - Mon, 2014-02-10 17:18

Since we introduced support for check printing PCL escape sequences in 11.1.1.7, i.e. being able to set the MICR font or change the print cartridge to the magnetic ink for that string, I have wanted to test out other PCL commands, particularly changing print trays. Say you have letter-headed, pre-printed or colored paper in Tray 2 but only want to use it for the first page, a specific page or a separator page; the rest can come out of plain ol' Tray 1 with its copier paper.

I have had a couple of inquiries recently and so I finally took some time to test out the theory. I should add here that the dev team thought it would work but were not 100% sure. The feature was built for the check printing requirements alone, so they could not support any other commands. I was hopeful though!
In short, it works!



I can generate a document and print it with embedded PCL commands to change from Tray 1 (&l4H) to Tray 2 (&l1H) - yep, makes no sense to me either. I got the codes from here, a useful site with a host of other possibilities to test.

For the test, I just created a department-employee listing that broke the page when the department changed. Just inside the first grouping loop I included the PCL string to set Tray 1.

<pcl><control><esc/>&l4H </control> </pcl>

Note, this has to be in clear text; you cannot use a form field.
I then created a dummy insert page using a template and called it from just within the closing department group field (InsertPAGE field). At the beginning of the dummy page I included the PCL string to get the paper from Tray 2:

<pcl><control><esc/>&l1H</control> </pcl>

When you run this to PDF you will see the PCL string. I played with this and hid it using a white font and it worked great, assuming you have white paper :)

When you set up the printer in the BIP admin console, you need to ensure you have picked the 'PDF to PCL Filter' for the printer.



If you don't want to have PCL enabled all the time, you can have multiple definitions for the same printer, with and without the PCL filter. Users just need to pick the appropriate printer instance. Using this filter ensures that those PCL strings will be preserved into the final PCL that gets sent to the printer.

Example files here. Official documentation on the PCL string here.

Happy Printing!





Categories: BI & Warehousing

Memory Guard

Tim Dexter - Mon, 2014-02-03 12:38

Happy New ... err ... Chinese Year! Yeah, it's been a while; it's also been danged busy and we're only in February, just! A question came up on one of our internal mailing lists concerning out-of-memory errors. Pieter, support guru extraordinaire, jumped on it with reference to a support note covering the relatively new 'BI Publisher Memory Guard'. Sounds grand, eh?

As many a BIP user knows, at BIP's heart lives an XSLT engine. XSLT engines are notoriously memory hungry. Oracle's wee beastie has come a long way in terms of taming its appetite for bits and bytes since we started using it. BIP allows you to take advantage of this via 'scalable mode.' It's a check box on the data model which essentially says 'XSLT engine, stop stuffing your face with memory doughnuts and get on with the salad and chicken train for this job', i.e. it gets a limited memory stack within which to work and makes use of disk if needed; think Windows' 'virtual memory'.

Now that switch is all well and good for a known big report that you would typically mark as 'schedule only.' You do not want users sitting in front of their screen waiting for a 10,000 page document to appear, right? How about those reports that are borderline 'big', or where you have a potentially big report but expect users to filter the heck out of it, and they choose not to? It would be nice to be able to set some limits on reports in case a user kicks off a monster donut binge session. Enter 'BI Publisher Memory Guard'!

It essentially lets you set limits on memory and report size so that users cannot run a report that brings the server to its knees. More information is on the support web site; search for 'BI Publisher Memory Guard a New Feature to Prevent out-of-memory Errors (Doc ID 1599935.1)', or you can get Leslie's white paper covering the same here.

Categories: BI & Warehousing

FSG Reporting and BIP

Tim Dexter - Fri, 2013-12-20 11:30

This is a great overview of the Financial Statement Generator (FSG) engine from GL in EBS and how Publisher fits into the picture. Thanks to Helle Hellings on the Financials PM team.


Categories: BI & Warehousing

Meet the Oracle ACE Directors Panel - January 9 - Seattle

Tim Tow - Thu, 2013-12-19 08:27

I will be in Seattle on Thursday, January 9th for the Meet the Oracle ACE Directors Panel.  It is at the Sheraton Seattle from 4 - 6 pm and will feature several other ACE Directors including Martin D'Souza, Kellyn Pot'Vin, Tim Gorman, and my longtime friend and collaborator, Cameron Lackpour.  

Come see the panel and stay for the Happy Hour; the beer will be on me!


Categories: BI & Warehousing

My Friend, Mike Riley, Has Cancer

Look Smarter Than You Are - Mon, 2013-12-16 08:18
I found out this summer that one of my best friends - one of the entire Hyperion community's best friends - has cancer. This is his story.

But first, a mea culpa:
In 2008, I Was An Idiot
Back in early 2008, I wrote a blog entry comparing Collaborate, Kaleidoscope, and OpenWorld.  In this entry, I said that Collaborate was the obvious successor to the Hyperion Solutions conference and I wasn't terribly nice to Kaleidoscope.  Here's me answering which of the three conferences I think the Hyperion community should attend (I dare you to hold in the laughter):
Now which one would I attend if I could only go to one? Collaborate. Without reservation. If I'm going to a conference, it's primarily to learn. As such, content is key.

I actually got asked a very similar question on Network 54's Essbase discussion board just yesterday (apparently, it's a popular question these days). To parrot what I said there, OpenWorld was very, very marketing-oriented. 80% of the fewer than 100 presentations in the Hyperion track were delivered by Oracle (in some cases, with clients/partners as co-speakers). COLLABORATE is supposed to have 100-150 presentations with 100+ of those delivered by clients and partners.

In the interest of full disclosure, my company, interRel, is paying to be a 4-star partner of COLLABORATE. Why? Because we're hoping that COLLABORATE becomes the successor to the Solutions conference. Solutions was a great opportunity to learn (partying was always secondary) and I refuse to believe it's dead with nothing to take its mantle. We're investing a great deal of money with the assumption that something has to take the place of the Hyperion Solutions conference, and it certainly isn't OpenWorld.

Is OpenWorld completely bad? Absolutely not. In addition to the great bribes, it's a much larger conference than COLLABORATE or ODTUG's Kaleidoscope, so if your thing is networking, by all means, go to OpenWorld. OpenWorld is the best place to get the official Oracle party line on upcoming releases and what not. OpenWorld is also the place to hear better keynotes (well, at least by More Famous People like Larry Ellison, himself). OpenWorld has better parties too. OpenWorld is also in San Francisco which is just a generally cooler town. In short, OpenWorld was very well organized, but since it's being put on by Oracle, it's about them getting out their message to their existing and prospective client base.

So why aren't I recommending Kaleidoscope (since I haven't been to that either)? Size, mostly. Their entire conference will have around 100 presentations, so their Hyperion track will most likely be fewer than 10 presentations. I've been to regional Hyperion User Group meetings that have more than that (well, the one interRel hosted in August of 2007 had 9, but close enough). While Kaleidoscope may one day grow their Hyperion track, it's going to be a long time until they equal the 100-150 presentations that COLLABORATE is supposed to have on Hyperion alone.

If you're only going to one Hyperion-oriented conference this year, register for COLLABORATE. If you've got money in the budget for two conferences, also go to OpenWorld. If you're a developer that finds both COLLABORATE and OpenWorld to be too much high-level fluff, then go to Kaleidoscope.


So, ya, that entry may live in infamy.  [Editor's Note: Find out a way to delete prior blog posts without anyone noticing.]  Notice that of the three conferences, I recommended Kaleidoscope last and dared to say that it would take them a long time to reach the 100-150 sessions Collaborate was promising.  Interestingly, Collaborate peaked that year at 84 Hyperion sessions, while Kaleidoscope is now well over 150 Business Analytics sessions, but I'm getting ahead of myself.


In 2008, Mike Riley Luckily Wasn't An Idiot
I had never met Mike Riley, but he commented directly on my blog.  He was gracious even though I was slamming his tiny little conference in New Orleans:
Hyperion users are blessed with many training opportunities. I agree with Edward, the primary reason for going to a conference is to learn, but I disagree that Collaborate is the best place to do that. ODTUG Kaleidoscope, Collaborate, and OpenWorld all have unique offerings. 

It’s true that ODTUG is a smaller conference, however that is by choice. At every ODTUG conference, the majority of the content is by a user, not by Oracle or even another vendor. And even though Collaborate might seem like the better buy because of its scale, for developers and true technologists ODTUG offers a much more targeted and efficient conference experience. Relevant tracks in your experience level are typically consecutive, rather than side-by-side so you don’t miss sessions you want to attend. The networking is also one of the most valuable pieces. The people that come to ODTUG are the doers, so everyone you meet will be a valuable contact in the future.

It’s true, COLLABORATE will have many presentations with a number of those delivered by clients and partners, but what difference does that make? You can’t attend all of them. ODTUG’s Kaleidoscope will have 17 Hyperion sessions that are all technical. 

In the interest of full disclosure, I have been a member of ODTUG for eight years and this is my second year as a board member. What attracted me to ODTUG from the start was the quality of the content delivered, and the networking opportunities. This remains true today.

I won’t censor or disparage any of the other conferences. We are lucky to have so many choices available to us. My personal choice and my highest recommendation goes to Kaleidoscope for all the reasons I mentioned above (and I have attended all three of the above mentioned conferences).

One last thing; New Orleans holds its own against San Francisco or Denver. All of the cities are wonderful, but when it comes to food, fun, and great entertainment there’s nothing like the Big Easy. 
Mike was only in his second year as a board member of ODTUG, but he was willing to put himself out there, so I wrote him an e-mail back.  In that e-mail, dated February 10, 2008, I said that for Kaleidoscope to become a conference that Hyperion users would love, it would require a few key components: keynote(s) by headliner(s), panels of experts, high-quality presentations, a narrow focus that wasn't all things to all people, and a critical mass of attendees.

At the end of the e-mail, I said "If Kaleidoscope becomes that, I'll shout it from the rooftops.  I want to help Kaleidoscope be successful, and I'm willing to invest the time and effort to help out.  Regarding your question below, I would be more than happy to work with Mark [Rittman] and Kent [Graziano] to come up with a workable concept and I think I'm safe in saying that Tim [Tow] would be happy to contribute as well.  For that matter, if you're looking for two people to head up your Hyperion track (and enact some of the suggestions above), Tim and I would be willing (again, I'm speaking on Tim's behalf, but he's one of the most helpful people on planet Hyperion)."


K(aleido)scope
Kaleidoscope 2008 ended up being the best Hyperion conference I ever attended (at the time).  It was a mix of Hyperion Solutions, Arbor Dimensions, and Hyperion Top Gun.  With only 4 months prep time, we had 175 attendees in what then was only an Essbase track.  Though it was only one conference room there in New Orleans, the attendees sat in their seats for most of a week and learned more than many of us had learned in years.

After the conference, Mike and the ODTUG board offered Tim Tow a spot on the ODTUG board (a spot to which he was later elected by the community) to represent the interests of Hyperion.  I founded the ODTUG Hyperion SIG along with several attendees from that Kaleidoscope 2008. I eventually became Hyperion Content Chair for Kaleidoscope and passed my Hyperion SIG presidency on to the awesome Gary Crisci.  In 2010, Mike talked me into being Conference Chair for Kaleidoscope (which I promptly renamed Kscope since I never could handle how "kaleidoscope" violated the whole "i before e" rule).  Or maybe I talked him into it.  Either way, I was Conference Chair for Kscope11 and Kscope12.

During those years, Mike worked closely with the Kscope conference committee in his role as President of ODTUG.  Mike rather good-naturedly ("good-natured" is, I expect, the most commonly used phrase to describe Mike) put up with whatever crazy thing I wanted him to do. In 2011, he was featured during the general session in several reality show parodies (including his final, climactic race with John King to see who got to pick the location for Kscope12).  I decided to up the ante in 2012 by making the entire general session about him in a "Mike Riley, This Is Your Life" hour and we found ourselves laughing not at Mike, but near him.  It included Mike having to dance with the Village Persons (a Village People tribute band) and concluded with Mike stepping down as President of ODTUG...

... to focus his ODTUG time on being the new Conference Chair for Kscope.  Kscope13 returned to New Orleans and Mike did a fabulous job with what I consider to be Hyperion's 5 year anniversary with Kscope.  Mike was preparing Kscope14 when I got a phone call from him.  I expected him to talk over Kscope, ODTUG, or just to say hi, but I'll never forget when Mike told me he had stage 3 rectal cancer.  My father died in 2002 of colorectal cancer, and the thought that one of my best friends was going to face this was terrifying... and I wasn't the one with cancer.

I feel that the Hyperion community was saved by Mike (what would have happened if we had all just given up after Collaborate 2008 was a major letdown?) and now it's time for us to do our part.  Whether you've attended Kscope in the past or just been envious of those of us who have, you know that it's the one place per year that you can meet and learn from some of the greatest minds in the industry.


Mike Helped Us, Let's Help Him
Kscope is now the best conference for Oracle Business Analytics (EPM and BI) in the world, and Mike, I'm shouting it from every rooftop I can find (although I wish when I climbed up there people would stop yelling "Jump!  You have nothing else to live for!").  I tell everyone I know how much I love Kscope, and on behalf of all the help you've given the Hyperion community over the last 5 years, Mike, it's now time for us to help you.

After many weeks of chemo, Mike goes into surgery tomorrow to hopefully have the tumor removed.  Then he has many more weeks of chemo after that. He's a fighter, but getting rid of cancer is expensive, so we've set up a Go Fund Me campaign to help offset his medical bills.  If you love Kscope, there is no one on Earth more responsible for its current state than Mike Riley.  If you love ODTUG, no one has more fundamentally changed the organization in the last millennium than Mike Riley.  If you love Hyperion, no one has done more to save the community than Mike Riley.  

And if after reading this entry, you love Mike for all he's done, go to http://bit.ly/HelpMike and donate generously, because we want Mike to be there at the opening of Kscope14 in Seattle on June 22.  Please share this entry, and even if you can't donate, send Mike an e-mail at mriley@odtug.com letting him know you appreciate everything he's done.
Categories: BI & Warehousing

Smart View Internals: Exploring the Plumbing of Smart View

Tim Tow - Fri, 2013-12-13 17:46
Ever wonder how Smart View stores the information it needs inside an Excel file?  I thought I would take a look to see what I could learn.  First, I created this simple multi-grid retrieve in Smart View.



Note that the file was saved in the Excel 2007 and higher format (with an .xlsx file extension).  Did you know that the xlsx format is really just a specialized zip file?  Seriously.  It is a zip file containing various files that are primarily in xml format.  I saved the workbook, added the .zip extension to the filename, and opened it in WinRar.  Here is what I found.

I opened the xl folder to find a series of files and folders.


Next, I opened the worksheets folder to see what was in there.  Of course, it is a directory of xml files containing the contents of the worksheets.


My Essbase retrieves were on the sheet named Sheet1, so let’s take a look at what is in the sheet1.xml file.   The xml is quite large, so I can’t show all of it here, but needless to say, there is a bunch of information in the file.  The cell contents are only one of the things in the file.  Here is an excerpt that shows the contents of row 5 of the spreadsheet.



This is interesting as it shows the numbers but not the member name.  What is the deal with that?  I noticed there is an attribute, ‘t’, on that node.  I am guessing that the attribute t=”s” means the cell type is a string.  I had noticed that in one of the zip file screenshots, there was a file named sharedStrings.xml.  Hmm...  I took a look at that file and guess what I found?
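Here is a minimal, hand-written sketch of the two pieces involved; the member name and index match the description below, everything else is illustrative.

In sheet1.xml, cell B5 is typed as a shared string (t="s") and stores only an index:

<row r="5">
  <c r="B5" t="s"><v>5</v></c>
  <c r="C5"><v>31538</v></c>
</row>

In sharedStrings.xml, the string table holds the actual text; the item at index 5, counting from zero, is the member name:

<sst xmlns="http://schemas.openxmlformats.org/spreadsheetml/2006/main">
  ...
  <si><t>Profit</t></si>
</sst>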




That’s right!  The 5th item, assuming you start counting at zero like all good programmers do, is Profit.   That number corresponds perfectly with the value specified in the xml for cell B5, which was five.   OK, so when are we going to get to Smart View stuff?  The answer is pretty quick.  I continued looking at sheet1.xml and found these nodes near the bottom.

Hmm, custom properties that contain the name Hyperion?  Bingo!  There were a number of custom property files referenced in the xml.  Let’s focus on those.

Custom property #1 is identified by the name CellIDs.  The corresponding file, customProperty1.bin, contained only the empty xml node <root />.  Apparently there aren’t any CellIDs in this workbook.

Custom property #2 is identified by the name ConnName.  The file customProperty2.bin contains the string ‘Sample Basic’ which is the name of my connection.

Custom property #3 is named ConnPOV but it appears to contain the connection details in xml format.  Here is an excerpt of the xml.


Custom property #4 is named HyperionPOVXML and the corresponding file contains xml which lines up with the page fields I have in my worksheet.



What is interesting about the POV xml is that I have two different retrieves that both have working POV selectors which are both implemented as list-type data validations in Excel.  I don’t know what happens internally if I save different values for the POV.

Custom property #5 is labeled HyperionXML.  It appears to contain the information about the Essbase retrieval, but it doesn't appear to be the actual retrieval xml because it doesn't contain the numeric data.  My guess is that this xml is used to track what is on the worksheet from a Hyperion standpoint.



There is a lot of information in this simple xml stream, but the most interesting information is contained in the slice element.  Below is a close-up of the contents of the slice.



The slice covers 6 rows and 7 columns for a total of 42 cells.  It is interesting that the Smart View team chose to serialize their XML in this manner, for a couple of reasons.  First, the pipe-delimited format means that every cell must be represented regardless of whether it has a value or not.  This really isn’t too much of a problem unless your spreadsheet range is pretty sparse.  Second, the xml itself is easy and fast to parse, but the resulting strings need to be parsed again to be usable.  For example, the vals node will get split into an array containing 42 elements; the code must then loop over the 42 elements and process them individually.  The other nodes, such as status, contain other pieces of information about the grid.  The status codes appear to be cell attributes returned by Essbase; these attributes are used to apply formatting to cells in the same way the Excel add-in UseStyles option would apply formatting.  There are a couple of things to take away:

  1. In addition to the data on the worksheet itself, there is potentially *a lot* of information stored under the covers in a Smart View file.
  2. String parsing is a computation-intensive operation and can hurt performance.  Multiply that workload by 8 because, depending on the operation and perhaps the provider, all 8 xml nodes above may need to be parsed.

In addition, the number of rows and columns shown in the slice may be important when you are looking at performance.  Smart View must look at the worksheet to determine the size of the range to read in order to send it to Essbase.  In the case of a non-multi-grid retrieve, the range may not be known and, as a result, the grid may be sized based on the UsedRange of the worksheet.  In our work with Dodeca, we have found that workbooks converted from the older xls format to the newer xlsx format, which supports a larger number of cells, may have the UsedRange flagged internally as 65,536 rows by 256 columns.  One culprit appears to be formatting applied to the sheet in a haphazard fashion.  In Dodeca, this caused only a minor issue, a larger memory allocation on the server.   Based on the format of the Smart View xml, as compared to the more efficient design of the Dodeca xml format, if this were to happen in Smart View it could cause a larger issue due to the number of cells that would need to be parsed and processed.  Disclaimer: I did not attempt to replicate this issue in Smart View; rather, this is an educated guess based on my experience with spreadsheet behavior.

Note: The Dodeca xml format does not need to contain information for cells that are blank.  This format reduces the size and the processing cycles necessary to complete the task.  In addition, when we originally designed Dodeca, we tested a format similar to the one used today by Smart View and found it to be slower and less efficient.

Considering all of this information, I believe the xml format would be difficult for the Smart View team to change at this point as it would cause compatibility issues with previously created workbooks.  Further, this discussion should give some visibility to the fact that the Smart View team faces an on-going challenge to maintain compatibility between different versions of Smart View considering that different versions distributed on desktops and different versions of the internal formats that customers may have stored in their existing Excel files.  I don’t envy their job there.

After looking at all of this, I was curious to see what the xml string would look like on a large retrieve, so I opened up Smart View, connected to Sample Basic and drilled to the bottom of the 4 largest dimensions.  The resulting sheet contained nearly 159,000 rows of data.  Interestingly enough, when I looked at the contents of customProperty5.bin inside that xlsx file, the contents were compressed.  It seemed a bit strange to me at first, as the xlsx file format is already compressed, but after thinking about it for a minute it made sense: the old xls file format probably did not automatically compress content, so the compression was likely there primarily to shrink this content when saved in the xls file format.

Custom property #6 is labeled NameConnectionMap.  The corresponding property file contains xml that appears to map the range names in the workbook to the actual grid and the connection.


Custom property #7 is labeled POVPosition. The file customProperty7.bin contains the number 4 followed by a NUL character.  Frankly, I have no idea what position 4 means.

Moving on, custom property #8 is labeled SheetHasParityContent.  This file contains the number 1 followed by a NUL character.  This is obviously a boolean flag that tells the Smart View code that new features, such as support for multiple grids, are present in this file.

Custom property #9 is labeled SheetOptions.  The corresponding file, customProperty9.bin, contains an xml stream that (obviously) contains the Hyperion options for the sheet.


Custom property #10 is labeled ShowPOV and appears to contain a simple Boolean flag much like that in custom property #8.

Finally, custom property #11 is labeled USER_FORMATTING and may not be related to Smart View.

I did look through some of the other files in the .zip and found a few other references to Smart View, but I did not see anything significant.

So, now that we have completed an overview of what is contained in one, very simple, multi-grid file, what have we learned?

  1. There is a bunch of stuff stored under the covers when you save a Smart View retrieve as an Excel file.
  2. With the reported performance issues in certain situations with Smart View, you should now have an idea of where to look to resolve Smart View issues in your environment.

There are a number of files I did not cover in this overview that could also cause performance issues.  For example, Oracle support handled one case where they found over 60,000 Excel styles in the file.  Smart View uses Excel Styles when it applies automatic formatting to member and data cells.  When there are that many styles in the workbook, however, it is logical that Excel would have a lot of overhead searching through its internal list of Style objects to find the right one.  Accordingly, there is a styles.xml file that contains custom styles.  If you have a bunch of Style objects, you could delete the internal styles.xml file.

Note: Be sure to make a copy of your original workbook before you mess with the internal structures.  There is a possibility that you may mess it up and lose everything you have in the workbook. Further, Oracle does not support people going under-the-covers and messing with the workbook, so don’t even bring it up to support if you mess something up.

Wow, that should give you some idea of what may be going on behind the scenes with Smart View.  Even with the experience I have designing and writing the Dodeca web services that talk to Essbase, I wouldn't say that I have a deep understanding of how the information in a Smart View workbook really works.  However, one thing is for certain: Dodeca does not put stuff like this in your Excel files.  It may be interesting to hear what you find when you explore the internals of your workbooks.
Categories: BI & Warehousing

Dodeca 6.6.0.4194 Now Available for Download!

Tim Tow - Mon, 2013-11-25 18:23
This past Friday, November 22nd, we completed our work on the newest version of the Dodeca Spreadsheet Management System and made Dodeca 6.6.0.4194 available for download from our website.  This blog entry is a sneak peek at some of the new features in version 6.6, as well as 6.5, which was released to select customers with specific functionality requests.  There are a few features that are particularly useful for end users, so let’s start there.
More Excel Support
Dodeca has always been strong on Excel version support and this version delivers even more Excel functionality.  Internally, we use the SpreadsheetGear control, which does a very good job with Excel compatibility.  This version of Dodeca integrates a new version of SpreadsheetGear that now has support for 398 Excel functions including the new SUMIFS, COUNTIFS, and CELL functions.
Excel Page Setup Dialog
The new version of Dodeca includes our implementation of the Excel Page Setup Dialog which makes it easy for users to customize the printing of Dodeca views that are based on Excel templates.  Note that for report developers, the Excel Page Setup has also been included in the Dodeca Template Designer.


New PDF View Type
Customers who use PDF files in their environments will like the new PDF View Type.  In previous releases of Dodeca, PDF documents displayed in Dodeca opened in an embedded web browser control.  Beginning in this version, Dodeca includes a dedicated PDF View type that uses a specialized PDF control.


View Selector Tooltips
Finally, users will like the new View Selector tooltips which optionally display the name and the description of a report as a tooltip.


Performance
Performance is one of those things that users always appreciate, so we have added a new setting that can significantly improve performance in some circumstances.  Dodeca has a well-defined set of configuration objects that are stored on the server and we were even awarded a patent recently for the unique aspects of our metadata design.  That being said, depending on how you implement reports and templates, there is the possibility of having many queries issued to the server to check for configuration updates.  In a few instances, we saw that optimizing the query traffic could be beneficial, so we have implemented the new CheckForMetadataUpdatesFrequencyPolicy property.  This property, which is controlled by the Dodeca administrator, tells Dodeca whether we should check the server for updates before any object is used, as was previously the case, only when a view opens, or only when the Dodeca session begins.  We believe the latter case will be very useful when Dodeca is deployed in production as objects configured in production often do not change during the workday and, thus, network traffic can be optimized using this setting.  The screenshot below shows where the administrator can control the update frequency.


Though users will like these features, we have put a lot of new things in for the people who create Dodeca views and those who administer the system.  Let’s start with something that we think all Dodeca admins will use frequently.
Metadata Property Search Utility
As our customers continue to expand their use of Dodeca, the number of objects they create in the Dodeca environment continues to grow.  In fact, we now have customers who have thousands of different objects that they manage in their Dodeca environments.  The Metadata Property Search Utility will help these users tremendously.

This utility allows the administrator to enter a search string and locate every object in our system that contains that string.  Once a property is located, there is a hyperlink that will navigate to the given object and automatically select the relevant property.  This dialog is modeless, which means you can navigate to any of the located items without closing the dialog.

Note: this version does not search the contents of Excel files in the system.
Essbase Authentication Services
In the past, when administrators wished to use an Essbase Authentication service to validate a login against Essbase and automatically obtain Dodeca roles based on the Essbase user’s group memberships, they had to use an Essbase connection where all users had access to the Essbase application and database.  The new ValidateCredentialsOnly property on both of the built-in Essbase Authentication services now flags the service to check login credentials at the server-level only, eliminating the need for users to have access to a specific Essbase database.
New Template Designer Tools
Prior to Dodeca 6.x, all template editing was performed directly in Excel.  Since that time, however, most template design functionality has been replicated in the Dodeca Template Designer, and we think it is preferable due to the speed and ease of use with which users can update templates stored in the Dodeca repository.  We have added a couple of new features to the Template Designer in this version.  The first tool is the Group/Ungroup tool that allows designers to easily apply Excel grouping to rows and/or columns within the template.   The second new tool is the Freeze/Unfreeze tool that is used to freeze rows and/or columns in place for scrolling.
Parameterized SQL Select Statements
Since we introduced the SQLPassthroughDataSet object in the Dodeca 5.x series, we have always supported the idea of tokenized select statements.  In other words, the SQL could be written so that point-of-view selections made by users could be used directly in the select statement.  In a related fashion, we introduced the concept of parameterized insert, update, and delete statements in the same version.  While parameterized statements are similar in concept to tokenized statements, there is one important distinction under the covers.  In Dodeca, parameterized statements are parsed and converted into prepared statements that can be used multiple times, resulting in more efficient use of server resources.  The parameterized select statement was introduced in this version of Dodeca so that customers using certain databases that cache the prepared statement can realize improved server efficiency on their select statements.
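Conceptually, with hypothetical table, token, and parameter names (the exact Dodeca syntax may differ), the distinction looks like this:

-- Tokenized: the selection is substituted into the SQL text before execution
SELECT * FROM SALES WHERE REGION = '[T.Region]'

-- Parameterized: the statement is prepared once and the value is bound at execution time
SELECT * FROM SALES WHERE REGION = :Region

Because the parameterized form yields the same statement text on every execution, the database can reuse the cached prepared statement instead of re-parsing the SQL for each distinct selection.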
Workbook Script Formula Editor Improvements
We have also been working hard to improve extensibility for developers using Workbook Scripts within Dodeca.  In this release, our work focused on the Workbook Script Formula Editor.  The first thing we added here is color coding that automatically detects and distinguishes Excel functions, Workbook Script functions, and Dodeca tokens.  In the new version, Excel functions are displayed in green, Dodeca functions and parentheses are displayed in blue, and tokens are displayed in ochre.   Here is an example.



In addition, we have implemented auto-complete for both Excel and Dodeca functions.


New SQLException Event
Version 6.6 of Dodeca introduces a new SQLException event that provides the ability for application developers to customize the behavior when a SQL Exception is encountered.
XCopy Release Directory
Beginning in version 6.6, the Dodeca Framework installation includes a pre-configured directory intended for customers who prefer to distribute their client via XCopy deployment instead of using Microsoft ClickOnce distribution.  The XCopy deployment directory is also for use by those customers who use Citrix for deployment.
Mac OS X Release Directory
The Dodeca Framework installation now includes a pre-compiled Dodeca.app deployment for customers who wish to run the Dodeca Smart Client on Mac OS X operating systems.  What that means is that Dodeca now runs on a Mac without the need for any special Windows emulators.  Dodeca does not require Excel to run on the Mac (nor does it require Excel to run on Windows for that matter), so you can certainly save your company significant licensing fees by choosing Dodeca for your solution. 

In short, you can see we continue to work hard to deliver functionality for Dodeca customers.  As always, the Dodeca Release Notes provide detailed explanations of all new and updated Dodeca features.  As of today, we have decided to make the Release Notes and other technical documents available for download to non-Dodeca customers.  If you are curious about all of the things Dodeca can do, and if you aren't afraid to dig into the details, you can now download our 389 page cumulative Release Notes document from the Dodeca Technical Documents section of our website.  



Categories: BI & Warehousing

Conditional Borders

Tim Dexter - Mon, 2013-11-25 11:57

How can you conditionally turn cell borders on and off in Publisher's RTF/XSL-FO templates? With a little digging you'll find what appear to be the appropriate attributes to update in your template. You would logically come up with using the various border styling options:

border-top|bottom|left|right-width
border-top|bottom|left|right-style
border-top|bottom|left|right-color

Buuuut, that doesn't work. Updating them individually does not make a difference to the output. Not sure why (I will ask), but for now here's the solution: use the compound border formatter border-top|bottom|left|right. This takes the form border-bottom="0.5pt solid #000000", setting all three options (width, style and color) at once rather than individually. In a BIP template you use:

<?if:DEPT='Accounting'?>
<?attribute@incontext:border-bottom;'3.0pt solid #000000'?>
<?attribute@incontext:border-top;'3.0pt solid #000000'?>
<?attribute@incontext:border-left;'3.0pt solid #000000'?>
<?attribute@incontext:border-right;'3.0pt solid #000000'?>
<?end if?>

3pt borders are a little excessive, but you get the idea. This approach can be used with the if@row option too, to update the borders of a complete row. If your template will need to run in right-to-left languages, e.g. Arabic or Hebrew, then you will need to use start and end in place of left and right.

For the inquisitive reader, you're maybe wondering: how did this guy know that? And why the heck is this not in the user docs?
Other than my all-knowing BIP guru status ;0), I hit the web for info on XSL-FO cell border attributes and then used the Template Builder for Word, particularly its export option; I generated the XSL-FO output from a test RTF template and took a look at the attributes. Then I started trying stuff out. I'm a hacker and proud of it! As for the user doc updates, I'll log a request for an update.


Categories: BI & Warehousing

Desktop Testing XSL

Tim Dexter - Thu, 2013-11-21 23:05

Bit of a corner case this week, but I wanted to park this as much for my reference as yours. Need to be able to test a pure XSL template against some sample data? That's an XSL template that is going to generate HTML, text or XML. The Template Viewer app in the BI Publisher Desktop group does not offer that as an option. It does offer XSL-FO processing though.

A few minutes digging around in the java libraries and I came up with a command line solution that is easy to set up and use.

1. Place your sample XML data and the XSL template in a directory
2. Open the lib directory where the TemplateViewer is installed. On my machine that is d:\Oracle\BIPDesktop\TemplateViewer\lib
3. Copy the xmlparserv2.jar file into the directory created in step 1.
4. Use the following command in a DOS/Shell window to process the XSL template against the XML data.

java -cp ./xmlparserv2.jar oracle.xml.parser.v2.oraxsl fileX.xml fileY.xsl > fileX.xls


The file generated will depend on your XSL. For an Excel output, you would instruct the process to generate fileX.xls in the same folder. You can then test the file with Excel, a browser or a text editor. Now you can test on the desktop until you get it right without the overhead of having to load it to the server each time.
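As an aside, if you just want a quick sanity check and your template uses no Oracle-specific functions, the JAXP classes built into the JDK can run the same transform without any extra jars; here is a minimal sketch (the file names are placeholders):

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class TestXsl {
    public static void main(String[] args) throws Exception {
        // Compile the XSL template, then apply it to the sample data file.
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("fileY.xsl"));
        t.transform(new StreamSource("fileX.xml"),
                new StreamResult("fileX.out"));
        System.out.println("Transform complete: fileX.out");
    }
}

Note that this uses the JDK's default processor, so anything that leans on Oracle parser extensions or BIP functions will still need the oraxsl route described above.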

To be completely clear, this approach is for pure XSL templates that are designed to generate text, HTML or XML. It's not for the XSL-FO templates that might be used at runtime to generate PDF, PPT, etc. For those you should use the Template Viewer application; it supports the XSL-FO templates but not the pure XSL templates.

If the process fails even though your template falls into the pure XSL category, it will be down to you using some BIP functionality in the template. To get it to work you'll need to add the Publisher libraries that contain those functions, e.g. xdo-core.jar, i18nAPI_v3.jar, etc., to the classpath argument (-cp).

So a new command including the required libraries might look like:

java -cp ./xmlparserv2.jar;./xdo-core.jar;./i18nAPI_v3.jar oracle.xml.parser.v2.oraxsl fileX.xml fileY.xsl > fileX.xls

You will need to either move the libraries to the local directory (my assumption above) or include the full path to them. More info here on setting the -cp attribute.

Categories: BI & Warehousing

Save $250 on Kscope14 Registration Now!

Tim Tow - Thu, 2013-11-21 17:17
If you work in the Essbase, Oracle EPM, or Oracle BI world, *the* place to be every June is the annual Kscope conference.  Registration is open for the next conference, Kscope14, coming next June in Seattle, WA.  If you are not currently a full ODTUG member, let me tell you how you can save $250 on the $1650 registration fee.

There are two steps you have to take to "save big".  First, become a full member of ODTUG for $99 and enjoy all of the benefits, including access to a members-only presentations library, throughout the year.  Next, register for Kscope14 and you are eligible for the members-only price of $1500 for a savings of $150.  While you are registering, simply use the code AOLAP to get an additional $100 discount!

My company, Applied OLAP, is one of the top-tier Platinum Sponsors of Kscope14 and I will be there.  I hope to see you at the conference and, if you were able to save some money by using our exclusive AOLAP code, be sure to stop by our booth, say hello, and learn how the Dodeca Spreadsheet Management System can help your company reduce spreadsheet risk, increase spreadsheet accuracy, and reduce costs.
Categories: BI & Warehousing

The Riley Family, Part II

Chet Justice - Tue, 2013-11-12 22:13
You didn't miss Part I, at least not here you didn't.

Many of you know Mike Riley. If you don't, here's a little history. He's the past president of ODTUG (for like 37 years or something) and for the last two years, he's served as Conference Chair for Kscope. Yeah, that doesn't really follow, but you know I'm a bit...scattered.

Did you read the link above? OK, well, here's the skinny. Mike has rectal cancer. Stage III. If it weren't for the stupid cancer part, the jokes would abound. Oh wait, they do anyway. Mike was diagnosed shortly after #kscope13, right around his 50th birthday (Happy Birthday Mike, Love, Cancer!). Ugh. (I want to say "are you shittin' me?" See what I mean about the jokes? I can't help myself, I'm 14.) Needless to say, cancer isn't really a joke. We all know someone affected by it. It is...well, it's not fun.

Go read his post if you haven't already. I'm going to give my version of that story. I'll wait...

So, Sunday morning, Game 3 of the World Series went to the Cardinals in a very bizarre way. I was watching highlights that morning as I had missed the end of the game (doesn't everyone know that I'm old and can't stay up that late to watch baseball?). Highlights. Mike lives in St. Louis. He's a Cardinals fan. Wouldn't it be cool if he and his family could go to the game (mostly just his family, I don't like Mike that much)? So I make some phone calls to see what people think of my idea. My idea is met with resistance. OK, I'll skip the people. Let's call Lisa (Mike's wife).

Apparently Sundays are technology-free days in the Riley household; no response. I go for a bike ride, but I take my phone, just in case Lisa calls me back. After the halfway point, my phone rings; I jump off the bike to answer.

So I talked to Lisa about my idea: can Mike handle the chaos of a World Series game?

We hang up and she goes to work. BTW, I asked her to keep my name out of it, but she didn't. We'll have words about that in the future.

She calls back (I think, it may have been over text, 2 weeks is an eternity to me). "He doesn't think he can do it."

So I call Mike directly (Lisa had already spoiled the surprise).

"What about Box seats? You know, where the people with top hats and monocles sit? Away from the rift-raft, much more comfortable and free food and beer."

Backstory. Mike had finished his first round of chemo less than a week before Sunday. To make things worse, he decided it was a good time to throw out his back. He wasn't in the best of shape.

Mike said he thought he could do it.

OK, nay-sayers aside, let's see what we can do. I emailed approximately 50 people, mostly ODTUG people: board members, content leads, anyone I had in my address book. "Hey, wouldn't it be great to send Mike and his family to Game 5 of the World Series? We need to do this quick, tickets will probably double in price tonight, especially if the Cardinals win." (That would mean Game 5 would be a clincher for the Cardinals, at home, muy expensive.)

Within about 20 minutes, a couple of people pledged $600.

Holy shit!

At the prices I had seen, I was hoping to get between $50 and $100 from 50 people, hoping. I had $600 already. Game starts. Now it's up to $1100 in pledges. Holy shit, Part II. This might just be possible. Another 30 minutes and we're about an hour into Game 4. Ticket prices have already gone up by $250 a ticket. Given that maybe 4 people have responded and I have $1600 in pledges, I pull the trigger. I bought 4 box seat tickets for the Riley family. (I had to have a couple of beers because I was about to drop a significant chunk of change without actual cash in hand; I could be out a lot of money. Liquid courage is awesome.)

Tickets sent to the Riley family. Pretty good feeling.

Like I said, I was confident, but I was scared. Before the end of the night though, there was over $5K pledged to get Mike and family to Game 5. Holy shit, Part III.

By midday Monday, pledges were well over $7K. I'll refer you back to Mike's post for more details. Shorter: jerseys for the family and a limo to the game.

Here's the breakdown: 35 people pledged, and paid, $8,080. Holy shit, Part IV. Average donation was $230.86. Median was $200. Low was $30 and high was $1000. Six people gave $500 or more. Nineteen people gave $200 or more. The list is a veritable Who's Who in the Oracle community.

Tickets + Jerseys + Limo = $6027.76

Riley family memory = Priceless.

So, what happened to the rest? Well, they have bills. Lots of bills. With the remainder, $2052.24, we paid off some hospital bills of $1220.63. There is currently $831.61 that will be sent shortly. It doesn't stop there though. Cancer treatment is effing expensive. Mike has surgery in December. He'll be on bed rest for some time. His bed is 17 years old. He needs a new one. After that, more chemo and more bills.

"Hey Chet, I'd love to help the Riley family out, can I give you my money for them?"

Yes, absolutely. Help me help them. I started a GoFundMe campaign. Goal is $10K. Any and all donations are welcome. Gifts include a thank you card from the Riley family and the knowledge that you helped out a fellow Oracle (nerd, definitely a nerd) in need. You can find the campaign here.

If you can't donate money, I've also created a hashtag so that we can show support for Mike and his family. It's #fmcuta (I'll let you figure out what it means). Words of encouragement are welcome and appreciated.

Thank you to the 35 who have already so generously given. Thank you to the rest of you who will donate or send out (rude) tweets.
Categories: BI & Warehousing

Fun with SQL - My Birthday

Chet Justice - Wed, 2013-11-06 10:26
This year is kind of fun: my birthday is on November 12th (next Tuesday, if you want to send gifts), which means it will fall on 11/12/13. Even better perhaps, katezilla's birthday is December 13th: 12/13/14. What does this have to do with SQL?

Someone mentioned to me last night that this wouldn't happen again for 990 years. I was thinking, "wow, I'm super special now (along with the other 1/365 * 6 billion people)!" Or am I? I had to do the math. Since date math is hard, and math is hard, and I'm good at neither, SQL to the rescue.

select
  to_number( to_char( sysdate + ( rownum - 1 ), 'mm' ) ) month_of,
  to_number( to_char( sysdate + ( rownum - 1 ), 'dd' ) ) day_of,
  to_number( to_char( sysdate + ( rownum - 1 ), 'yy' ) ) year_of,
  sysdate + ( rownum - 1 ) actual
from dual
connect by level <= 100000
(In case you were wondering, 100,000 days is just shy of 274 years. 273.972602739726027397260273972602739726 to be more precise.)

That query gives me this:
MONTH_OF DAY_OF YEAR_OF ACTUAL
-------- ------ ------- ----------
      11     06      13 2013/11/06
      11     07      13 2013/11/07
      11     08      13 2013/11/08
      11     09      13 2013/11/09
      11     10      13 2013/11/10
      11     11      13 2013/11/11
...
So how can I figure out where DAY_OF is equal to MONTH_OF + 1 and YEAR_OF is equal to DAY_OF + 1? In my head, I thought it would be far more complicated, but it's not.
select *
from
(
  select
    to_number( to_char( sysdate + ( rownum - 1 ), 'mm' ) ) month_of,
    to_number( to_char( sysdate + ( rownum - 1 ), 'dd' ) ) day_of,
    to_number( to_char( sysdate + ( rownum - 1 ), 'yy' ) ) year_of,
    sysdate + ( rownum - 1 ) actual
  from dual
  connect by level <= 100000
)
where month_of + 1 = day_of
  and day_of + 1 = year_of
order by actual asc
Which gives me:
MONTH_OF DAY_OF YEAR_OF ACTUAL
-------- ------ ------- ----------
      11     12      13 2013/11/12
      12     13      14 2014/12/13
      01     02      03 2103/01/02
      02     03      04 2104/02/03
      03     04      05 2105/03/04
      04     05      06 2106/04/05
      05     06      07 2107/05/06
...
OK, so it looks closer to 100 years, not 990. Let's subtract. LAG to the rescue.
select
  actual,
  lag( actual, 1 ) over ( order by actual ) previous_actual,
  actual - ( lag( actual, 1 ) over ( order by actual ) ) time_between
from
(
  select
    to_number( to_char( sysdate + ( rownum - 1 ), 'mm' ) ) month_of,
    to_number( to_char( sysdate + ( rownum - 1 ), 'dd' ) ) day_of,
    to_number( to_char( sysdate + ( rownum - 1 ), 'yy' ) ) year_of,
    sysdate + ( rownum - 1 ) actual
  from dual
  connect by level <= 100000
)
where month_of + 1 = day_of
  and day_of + 1 = year_of
order by actual asc
Which gives me:
ACTUAL     PREVIOUS_ACTUAL TIME_BETWEEN
---------- --------------- ------------
2013/11/12
2014/12/13 2013/11/12               396
2103/01/02 2014/12/13             32161
2104/02/03 2103/01/02               397
2105/03/04 2104/02/03               395
2106/04/05 2105/03/04               397
2107/05/06 2106/04/05               396
2108/06/07 2107/05/06               398
2109/07/08 2108/06/07               396
2110/08/09 2109/07/08               397
2111/09/10 2110/08/09               397
2112/10/11 2111/09/10               397
2113/11/12 2112/10/11               397
2114/12/13 2113/11/12               396
2203/01/02 2114/12/13             32161
So, it looks like the pattern occurs roughly every 88 years and is then followed by 12 consecutive years of matching dates (01/02/03 in 2103 through 12/13/14 in 2114). The next time 11/12/13 and 12/13/14 will appear is in 2113 and 2114. Yay for SQL!
Categories: BI & Warehousing

Comb Over

Tim Dexter - Tue, 2013-11-05 17:14

Being somewhat follicly challenged, and to my wife's utter relief, the comb over is not something I have ever considered. The title is a tenuous reference to a formatting feature that Adobe offers in its PDF documents.

The comb provides the ability to equally space a string of characters across a pre-defined form field so that the value fits neatly in the available area. It's not a function of the font but a property of the form field itself.
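For the curious, the comb is just a flag on an AcroForm text field, paired with a maximum character count. Here is a minimal sketch of setting it programmatically, assuming Apache PDFBox 2.x; the field name, geometry, and value are invented for the example:

import org.apache.pdfbox.cos.COSName;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
import org.apache.pdfbox.pdmodel.PDResources;
import org.apache.pdfbox.pdmodel.common.PDRectangle;
import org.apache.pdfbox.pdmodel.font.PDType1Font;
import org.apache.pdfbox.pdmodel.interactive.annotation.PDAnnotationWidget;
import org.apache.pdfbox.pdmodel.interactive.form.PDAcroForm;
import org.apache.pdfbox.pdmodel.interactive.form.PDTextField;

public class CombFieldDemo {
    public static void main(String[] args) throws Exception {
        try (PDDocument doc = new PDDocument()) {
            PDPage page = new PDPage();
            doc.addPage(page);

            // Set up the form with a default font for field appearances.
            PDAcroForm form = new PDAcroForm(doc);
            doc.getDocumentCatalog().setAcroForm(form);
            PDResources resources = new PDResources();
            resources.put(COSName.getPDFName("Helv"), PDType1Font.HELVETICA);
            form.setDefaultResources(resources);
            form.setDefaultAppearance("/Helv 0 Tf 0 g");

            // The comb flag plus a max length spaces characters evenly.
            PDTextField field = new PDTextField(form);
            field.setPartialName("accountNumber"); // invented field name
            field.setComb(true);
            field.setMaxLen(10);
            form.getFields().add(field);

            // Place the field's widget on the page.
            PDAnnotationWidget widget = field.getWidgets().get(0);
            widget.setRectangle(new PDRectangle(50, 700, 200, 20));
            widget.setPage(page);
            page.getAnnotations().add(widget);

            field.setValue("1234567890");
            doc.save("comb-demo.pdf");
        }
    }
}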

For the first time in a long time, I had the chance to build a PDF template today to help out a colleague. I spotted the property and thought, hey, let's give it a whirl and see if Publisher supports it. Lo and behold, Publisher handles the comb spacing in its PDF outputs. Exciting, eh? OK, maybe not that exciting, but I was very pleasantly surprised to see it working.

I am reliably informed by Leslie, BIP Evangelist and Tech Writer, that this feature was introduced from version 10.1.3.4.2 onwards.

The official docs are here; no mention of comb overs though.


Happy Combing!

Categories: BI & Warehousing

Hyperion 11.1.2.3 Update - Kscope14 Street Team Meetup in Seattle - November 8th

Tim Tow - Fri, 2013-11-01 20:54
If you are in or near Seattle on Friday, November 8th, don't miss the opportunity to see the new features of the Hyperion 11.1.2.3 release.  Oracle representatives are teaming up with the Kscope14 Street Team to bring you this 2-hour mini-session at the Fred Hutchinson Cancer Research Center.  The session starts at 3 pm sharp and concludes with networking and a happy hour.  For details and to RSVP, visit http://kscope14.com/events/seattle-street-team.

I won't be able to make it to this event, but I wish I *could* be there.  I can't think of a better way to kick off a weekend!

Categories: BI & Warehousing
