Feed aggregator

Cleaning Up After Yourself

Antony Reynolds - Tue, 2013-12-24 11:50
Maintaining a Clean SOA Suite Test Environment

A fun blog entry with Fantasia animated GIFs got me thinking, like Mickey, about how nice it would be to automate clean-up tasks.

I don’t have a sorcerer’s castle to clean up, but I often have a test environment in which I run tests; after fixing the problems the tests uncovered, I want to run them again.  The problem is that all the data from the previous test run is still there.

In the past I used VirtualBox snapshots to roll back to a clean state, but this approach not only discards the environment changes I want to get rid of, such as data inserted into tables, it also discards changes I want to keep, such as WebLogic configuration changes and new shell scripts.  So, like Mickey, I went in search of some magic to help me.

Cleaning Up the SOA Environment

My first task was to clean up the SOA environment by deleting all instance data from the tables.  Now I could use the purge scripts to do this, but that would still leave me with running instances, for example 800 Human Workflow Tasks that I don’t want to deal with.  So I used the new truncate script to take care of this.  Basically this removes all instance data from your SOA Infrastructure, whether or not the data is live.  This can be run without taking down the SOA Infrastructure (although if you do get strange behavior you may want to restart SOA).  Some statistics, such as service and reference statistics, are kept since server startup, so you may want to restart your server to clear that data.  A sample script to run the truncate SQL is shown below.

#!/bin/sh
# Truncate the SOA schemas, does not truncate BAM.
# Use only in development and test, not production.

# Properties to be set before running script
# SOAInfra Database SID
DB_SID=orcl
# SOA DB Prefix
SOA_PREFIX=DEV
# SOAInfra DB password
SOAINFRA_PASSWORD=welcome1
# SOA Home Directory
SOA_HOME=/u01/app/fmw/Oracle_SOA1

# Set DB Environment
. oraenv << EOF
${DB_SID}
EOF

# Run Truncate script from directory it lives in
cd ${SOA_HOME}/rcu/integration/soainfra/sql/truncate

# Run the truncate script
sqlplus ${SOA_PREFIX}_soainfra/${SOAINFRA_PASSWORD} @truncate_soa_oracle.sql << EOF
exit
EOF

After running this script all your SOA composite instances and associated workflow instances will be gone.
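If you want to convince yourself that the truncate worked, a quick row count against a couple of the instance tables should come back as zero.  A minimal sketch, reusing the settings from the script above (COMPOSITE_INSTANCE and WFTASK are the 11g instance tables I would check first; names may vary between releases):

#!/bin/sh
# Sanity check after truncation - both counts should be zero.
# Same settings as the truncate script above.
SOA_PREFIX=DEV
SOAINFRA_PASSWORD=welcome1

sqlplus -s ${SOA_PREFIX}_soainfra/${SOAINFRA_PASSWORD} << EOF
select count(*) composite_instances from composite_instance;
select count(*) workflow_tasks from wftask;
exit
EOF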

Cleaning Up BAM

The above example shows how easy it is to get rid of all the runtime data in your SOA repository; however, if you are using BAM, you still have all the contents of your BAM objects from previous runs.  To get rid of that data we need to use BAM ICommand’s clear command, as shown in the sample script below:

#!/bin/sh
# Set software locations
FMW_HOME=/home/oracle/fmw
export JAVA_HOME=${FMW_HOME}/jdk1.7.0_17
BAM_CMD=${FMW_HOME}/Oracle_SOA1/bam/bin/icommand
# Set objects to purge
BAM_OBJECTS="/path/RevenueEvent /path/RevenueViolation"

# Clean up BAM
for name in ${BAM_OBJECTS}
do
  ${BAM_CMD} -cmd clear -name ${name} -type dataobject
done

After running this script all the rows of the listed objects will be gone.
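To double-check that a data object really is empty, ICommand can also export its contents to a file you can inspect.  A minimal sketch, reusing the locations from the script above (the export command and -file argument are my reading of the ICommand documentation, so verify the syntax against your release):

#!/bin/sh
# Export one of the cleared data objects; the output file should contain no rows.
FMW_HOME=/home/oracle/fmw
BAM_CMD=${FMW_HOME}/Oracle_SOA1/bam/bin/icommand

${BAM_CMD} -cmd export -name /path/RevenueEvent -type dataobject -file /tmp/RevenueEvent.xml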

Ready for Inspection

Unlike the hapless Mickey, our clean-up scripts work reliably and do what we want without unexpected consequences, like flooding the castle.

Supporting the Team

Antony Reynolds - Fri, 2013-12-20 15:44
SOA Support Team Blog

Some of my former colleagues in support have created a blog to help answer common problems for customers.  One way they are doing this is by creating better landing zones within My Oracle Support (MOS).  I just used the blog to locate the landing zone for database-related issues in SOA Suite.  I needed to get the purge scripts working on 11.1.1.7 and I couldn’t find the patches needed to do that.  A quick look on the blog and I found a suitable entry that directed me to the Oracle Fusion Middleware (FMW) SOA 11g Infrastructure Database: Installation, Maintenance and Administration Guide (Doc ID 1384379.1) in MOS.  There is lots of other useful stuff on the blog, so stop by and check it out.  Great job Shawn, Antonella, Maria & JB.

FSG Reporting and BIP

Tim Dexter - Fri, 2013-12-20 11:30

This is a great overview of the Financial Statement Generator (FSG) engine from GL in EBS and how Publisher fits into the picture.  Thanks to Helle Hellings on the Financials PM team.


Categories: BI & Warehousing

$5 eBook Bonanza

Antony Reynolds - Thu, 2013-12-19 10:20
Packt eBooks $5 Offer

Packt Publishing just told me about their Christmas offer, get eBooks for $5.

From December 19th, customers will be able to get any eBook or video from Packt for just $5.  The offer covers more than 1,700 titles, and customers can grab as many as they like until January 3rd, 2014 – more information is available at http://bit.ly/1jdCr2W

If you haven’t bought the SOA Developer’s Cookbook, then now is a great time to do so!

Meet the Oracle ACE Directors Panel - January 9 - Seattle

Tim Tow - Thu, 2013-12-19 08:27

I will be in Seattle on Thursday, January 9th for the Meet the Oracle ACE Directors Panel.  It is at the Sheraton Seattle from 4 - 6 pm and will feature several other ACE Directors including Martin D'Souza, Kellyn Pot'Vin, Tim Gorman, and my longtime friend and collaborator, Cameron Lackpour.  

Come see the panel and stay for the Happy Hour; the beer will be on me!


Categories: BI & Warehousing

Enterprise vs Consumer HDDs

Charles Lamb - Wed, 2013-12-18 10:23
Backblaze has two interesting blog posts about enterprise vs consumer drives. Their conclusions are that drives fail either when they're young or when they're old, but not when they're in mid-life. They also found no real difference in failure rates between the two classes.

http://blog.backblaze.com/2013/11/12/how-long-do-disk-drives-last/

http://blog.backblaze.com/2013/12/04/enterprise-drive-reliability/

Get on Board the ARC

Scott Spendolini - Wed, 2013-12-18 07:25

Yesterday, we launched the new APEX Resource Center - or ARC - on enkitec.com.  The ARC was designed to provide the APEX experts at Enkitec with an easy way to share all things APEX with the community.  It’s split up into a number of different sections, each of which I’ll describe here:

  • What's New
    The first page of the ARC will display content from all other sections, sorted by date from newest to oldest.  Thus, if you want to see what’s new, simply visit this page and have a look.  In the future, we’ll provide a way to be notified anytime anything new is added to any section.
     
  • Demonstrations
    The Demonstrations section is perhaps the most interesting.  Here, our consultants have put together a number of mini-demonstrations using APEX and a number of other associated technologies.  Each demonstration has a working demo, as well as the steps used to create it.  Our plan is to keep adding new demonstrations on a weekly basis.
     
  • Events
    The Events section is a copy of the Events calendar, but with a focus on only APEX-related events.
     
  • Presentations
    Like Events, the Presentations section is a copy of the main Presentations section filtered on only APEX-related presentations.
     
  • Technical Articles
    Technical Articles will contain a number of different types of articles.  These will usually be a bit longer than what’s in the Demonstrations section, and may from time to time contain an opinion or editorial piece.  If you have an idea for a Technical Article, then use the Suggest a Tech Article link to send it our way.
     
  • Plug-ins
    If you’re not already aware, Enkitec provides a number of completely free APEX Plug-Ins.  This section highlights those, with links to download and associated documentation. 
     
  • Webinars
    Currently, the Webinars section displays any upcoming webinars.  In the near future, we’re going to record both webinar content and presentations, and also make those available here.

We’re going to work hard to keep adding new content to the ARC at least weekly, so be sure to check back frequently.  And as always, any feedback or suggestions are always welcome - just drop us a line by using the Contact Us form on enkitec.com.

Readable Code for Modify_Snapshot_Settings

Jeremy Schneider - Mon, 2013-12-16 12:32

It annoyed me slightly that when I googled modify_snapshot_settings just now, all of the examples used huge numbers for the retention with (at best) a brief comment saying what the number meant. Here is a better example with slightly more readable code. I hope a few people down the road cut and paste from this article instead, and the world gets a few more lines of readable code as a result. :)

On a side note, let me reiterate the importance of increasing the AWR retention defaults. There are a few opinions about the perfect settings, but everyone agrees that the defaults are a “lowest common denominator” suitable for demos on laptops but never for production servers. The values below are what I’m currently using.

BEGIN 
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 105 * 24 * 60,   -- days * hr/day * min/hr  (result is in minutes)
    interval  => 15);             -- minutes
END;
/

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.10

SQL> select * from dba_hist_wr_control;
Pivoting output using Tom Kyte's printtab....
==============================
DBID                          : 3943732569
SNAP_INTERVAL                 : +00000 00:15:00.0
RETENTION                     : +00105 00:00:00.0
TOPNSQL                       : DEFAULT

My Friend, Mike Riley, Has Cancer

Look Smarter Than You Are - Mon, 2013-12-16 08:18
I found out this summer that one of my best friends - one of the entire Hyperion community's best friends - has cancer. This is his story.

But first, a mea culpa:
In 2008, I Was An Idiot

Back in early 2008, I wrote a blog entry comparing Collaborate, Kaleidoscope, and OpenWorld.  In this entry, I said that Collaborate was the obvious successor to the Hyperion Solutions conference and I wasn't terribly nice to Kaleidoscope.  Here's me answering which of the three conferences I think the Hyperion community should attend (I dare you to hold in the laughter):
Now which one would I attend if I could only go to one?

Collaborate. Without reservation. If I'm going to a conference, it's primarily to learn. As such, content is key.

I actually got asked a very similar question on Network 54's Essbase discussion board just yesterday (apparently, it's a popular question these days). To parrot what I said there, OpenWorld was very, very marketing-oriented. 80% of the fewer than 100 presentations in the Hyperion track were delivered by Oracle (in some cases, with clients/partners as co-speakers). COLLABORATE is supposed to have 100-150 presentations with 100+ of those delivered by clients and partners.

In the interest of full disclosure, my company, interRel, is paying to be a 4-star partner of COLLABORATE. Why? Because we're hoping that COLLABORATE becomes the successor to the Solutions conference. Solutions was a great opportunity to learn (partying was always secondary) and I refuse to believe it's dead with nothing to take its mantle. We're investing a great deal of money with the assumption that something has to take the place of the Hyperion Solutions conference, and it certainly isn't OpenWorld.

Is OpenWorld completely bad? Absolutely not. In addition to the great bribes, it's a much larger conference than COLLABORATE or ODTUG's Kaleidoscope, so if your thing is networking, by all means, go to OpenWorld. OpenWorld is the best place to get the official Oracle party line on upcoming releases and whatnot. OpenWorld is also the place to hear better keynotes (well, at least by More Famous People like Larry Ellison, himself). OpenWorld has better parties too. OpenWorld is also in San Francisco, which is just a generally cooler town. In short, OpenWorld was very well organized, but since it's being put on by Oracle, it's about them getting out their message to their existing and prospective client base.

So why aren't I recommending Kaleidoscope (since I haven't been to that either)? Size, mostly. Their entire conference will have around 100 presentations, so their Hyperion track will most likely be fewer than 10 presentations. I've been to regional Hyperion User Group meetings that have more than that (well, the one interRel hosted in August of 2007 had 9, but close enough). While Kaleidoscope may one day grow their Hyperion track, it's going to be a long time until they equal the 100-150 presentations that COLLABORATE is supposed to have on Hyperion alone.

If you're only going to one Hyperion-oriented conference this year, register for COLLABORATE. If you've got money in the budget for two conferences, also go to OpenWorld. If you're a developer that finds both COLLABORATE and OpenWorld to be too much high-level fluff, then go to Kaleidoscope.


So, ya, that entry may live in infamy.  [Editor's Note: Find out a way to delete prior blog posts without anyone noticing.]  Notice that of the three conferences, I recommended Kaleidoscope last and dared to say that it would take them a long time to reach the 100-150 sessions Collaborate offered.  Interestingly, Collaborate peaked that year at 84 Hyperion sessions, while Kaleidoscope is now well over 150 Business Analytics sessions, but I'm getting ahead of myself.


In 2008, Mike Riley Luckily Wasn't An Idiot
I had never met Mike Riley, but he commented directly on my blog.  He was gracious even though I was slamming his tiny little conference in New Orleans:
Hyperion users are blessed with many training opportunities. I agree with Edward, the primary reason for going to a conference is to learn, but I disagree that Collaborate is the best place to do that. ODTUG Kaleidoscope, Collaborate, and OpenWorld all have unique offerings. 

It’s true that ODTUG is a smaller conference, however that is by choice. At every ODTUG conference, the majority of the content is by a user, not by Oracle or even another vendor. And even though Collaborate might seem like the better buy because of its scale, for developers and true technologists ODTUG offers a much more targeted and efficient conference experience. Relevant tracks in your experience level are typically consecutive, rather than side-by-side so you don’t miss sessions you want to attend. The networking is also one of the most valuable pieces. The people that come to ODTUG are the doers, so everyone you meet will be a valuable contact in the future.

It’s true, COLLABORATE will have many presentations with a number of those delivered by clients and partners, but what difference does that make? You can’t attend all of them. ODTUG’s Kaleidoscope will have 17 Hyperion sessions that are all technical. 

In the interest of full disclosure, I have been a member of ODTUG for eight years and this is my second year as a board member. What attracted me to ODTUG from the start was the quality of the content delivered, and the networking opportunities. This remains true today.

I won’t censor or disparage any of the other conferences. We are lucky to have so many choices available to us. My personal choice and my highest recommendation goes to Kaleidoscope for all the reasons I mentioned above (and I have attended all three of the above mentioned conferences).

One last thing; New Orleans holds its own against San Francisco or Denver. All of the cities are wonderful, but when it comes to food, fun, and great entertainment there’s nothing like the Big Easy. 
Mike was only in his second year as a board member of ODTUG, but he was willing to put himself out there, so I wrote him an e-mail back.  In that e-mail, dated February 10, 2008, I said that for Kaleidoscope to become a conference that Hyperion users would love, it would require a few key components: keynote(s) by headliner(s), panels of experts, high-quality presentations, a narrow focus that wasn't all things to all people, and a critical mass of attendees.

At the end of the e-mail, I said "If Kaleidoscope becomes that, I'll shout it from the rooftops.  I want to help Kaleidoscope be successful, and I'm willing to invest the time and effort to help out.  Regarding your question below, I would be more than happy to work with Mark [Rittman] and Kent [Graziano] to come up with a workable concept and I think I'm safe in saying that Tim [Tow] would be happy to contribute as well.  For that matter, if you're looking for two people to head up your Hyperion track (and enact some of the suggestions above), Tim and I would be willing (again, I'm speaking on Tim's behalf, but he's one of the most helpful people on planet Hyperion)."


K(aleido)scope
Kaleidoscope 2008 ended up being the best Hyperion conference I ever attended (at the time).  It was a mix of Hyperion Solutions, Arbor Dimensions, and Hyperion Top Gun.  With only 4 months prep time, we had 175 attendees in what then was only an Essbase track.  Though it was only one conference room there in New Orleans, the attendees sat in their seats for most of a week and learned more than many of us had learned in years.

After the conference, Mike and the ODTUG board offered Tim Tow a spot on the ODTUG board (a spot to which he was later elected by the community) to represent the interests of Hyperion.  I founded the ODTUG Hyperion SIG along with several attendees from that Kaleidoscope 2008. I eventually became Hyperion Content Chair for Kaleidoscope and passed my Hyperion SIG presidency on to the awesome Gary Crisci.  In 2010, Mike talked me into being Conference Chair for Kaleidoscope (which I promptly renamed Kscope since I never could handle how "kaleidoscope" violated the whole "i before e" rule).  Or maybe I talked him into it.  Either way, I was Conference Chair for Kscope11 and Kscope12.

During those years, Mike worked closely with the Kscope conference committee in his role as President of ODTUG.  Mike rather good-naturedly ("good-natured" is, I expect, the most commonly used phrase to describe Mike) put up with whatever crazy thing I wanted him to do. In 2011, he was featured during the general session in several reality show parodies (including his final, climactic race with John King to see who got to pick the location for Kscope12).  I decided to up the ante in 2012 by making the entire general session about him in a "Mike Riley, This Is Your Life" hour and we found ourselves laughing not at Mike, but near him.  It included Mike having to dance with the Village Persons (a Village People tribute band) and concluded with Mike stepping down as President of ODTUG...

... to focus his ODTUG time on being the new Conference Chair for Kscope.  Kscope13 returned to New Orleans and Mike did a fabulous job with what I consider to be Hyperion's 5-year anniversary with Kscope.  Mike was preparing Kscope14 when I got a phone call from him.  I expected him to talk about Kscope, ODTUG, or just to say hi, but I'll never forget when Mike told me he had stage 3 rectal cancer.  My father died in 2002 of colorectal cancer, and the thought that one of my best friends was going to face this was terrifying... and I wasn't the one with cancer.

I feel that the Hyperion community was saved by Mike (what would have happened if we had all just given up after Collaborate 2008 turned out to be a major letdown?) and now it's time for us to do our part.  Whether you've attended Kscope in the past or just been envious of those of us who have, you know that it's the one place per year that you can meet and learn from some of the greatest minds in the industry.


Mike Helped Us, Let's Help Him
Kscope is now the best conference for Oracle Business Analytics (EPM and BI) in the world, and Mike, I'm shouting it from every rooftop I can find (although I wish when I climbed up there people would stop yelling "Jump!  You have nothing else to live for!").  I tell everyone I know how much I love Kscope, and on behalf of all the help you've given the Hyperion community over the last 5 years, Mike, it's now time for us to help you.

After many weeks of chemo, Mike goes into surgery tomorrow to hopefully have the tumor removed.  Then he has many more weeks of chemo after that. He's a fighter, but getting rid of cancer is expensive, so we've set up a Go Fund Me campaign to help offset his medical bills.  If you love Kscope, there is no one on Earth more responsible for its current state than Mike Riley.  If you love ODTUG, no one has more fundamentally changed the organization in the last millennium than Mike Riley.  If you love Hyperion, no one has done more to save the community than Mike Riley.  

And if after reading this entry, you love Mike for all he's done, go to http://bit.ly/HelpMike and donate generously, because we want Mike to be there at the opening of Kscope14 in Seattle on June 22.  Please share this entry, and even if you can't donate, send Mike an e-mail at mriley@odtug.com letting him know you appreciate everything he's done.
Categories: BI & Warehousing

Back to shell scripting basics

Nuno Souto - Sun, 2013-12-15 22:32
Some quick and basic Unix and ksh stuff that crossed my path recently. As part of the move of our Oracle dbs to our new P7+ hardware (more on that later...), I'm taking the opportunity to review and improve a few of the preventive maintenance and monitoring scripts I have floating around. Some were written a few years ago by other dbas and have been added to by myself as needed.  Others were …

Android Update: 4.4.2

Dietrich Schroff - Sat, 2013-12-14 04:16
Only two weeks after upgrading to Android 4.4, the next update was available:
And like all other minor upgrades, there is no information about what changed....
If you compare the version information, you notice the kernel upgrade from 3.1.10-gee1a0b2 to 3.1.10-g4776c86.
For a complete history of all updates visit this posting.

Smart View Internals: Exploring the Plumbing of Smart View

Tim Tow - Fri, 2013-12-13 17:46
Ever wonder how Smart View stores the information it needs inside an Excel file?  I thought I would take a look to see what I could learn.  First, I created this simple multi-grid retrieve in Smart View.

Note that the file was saved in the Excel 2007 and higher format (with an .xlsx file extension).  Did you know that the xlsx format is really just a specialized zip file?  Seriously.  It is a zip file containing various files that are primarily in xml format.  I saved the workbook, added the .zip extension to the filename, and opened it in WinRar.  Here is what I found.

I opened the xl folder to find a series of files and folders.
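If you would rather poke around from the command line, unzip can read the package directly without renaming it.  A minimal sketch (Book1.xlsx is my hypothetical workbook name):

# List the package contents; the xl folder holds the workbook parts.
unzip -l Book1.xlsx

# Narrow the listing to just the worksheet parts.
unzip -l Book1.xlsx "xl/worksheets/*"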


Next, I opened the worksheets folder to see what was in there.  Of course, it is a directory of xml files containing the contents of the worksheets.


My Essbase retrieves were on the sheet named Sheet1, so let’s take a look at what is in the sheet1.xml file.   The xml is quite large, so I can’t show all of it here, but needless to say, there is a bunch of information in the file.  The cell contents are only one of the things in the file.  Here is an excerpt that shows the contents of row 5 of the spreadsheet.

This is interesting as it shows the numbers but not the member name.  What is the deal with that?  I noticed there is an attribute, ‘t’, on that node.  I am guessing that the attribute t=”s” means the cell type is a string.  I had noticed that in one of the zip file screenshots, there was a file named sharedStrings.xml.  Hmm...  I took a look at that file and guess what I found?

That’s right!  The 5th item, assuming you start counting at zero like all good programmers do, is Profit.   That number corresponds perfectly with the value specified in the xml for cell B5, which was five (circled in blue in the xml file above).   OK, so when are we going to get to Smart View stuff?  The answer is pretty quick.  I continued looking at sheet1.xml and found these nodes near the bottom.
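You can pull the shared strings table straight out of the package to see this mapping for yourself.  A minimal sketch, again assuming a workbook named Book1.xlsx; counting <t> elements from zero, the entry at index 5 should be Profit:

# Dump the shared strings part; the <t> elements appear in index order.
unzip -p Book1.xlsx xl/sharedStrings.xml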

Hmm, custom properties that contain the name Hyperion?  Bingo!  There were a number of custom property files referenced in the xml file.  Let’s focus on those.

Custom property #1 is identified by the name CellIDs.  The corresponding file, customProperty1.bin, contained only the empty xml node <root />.  Apparently there aren’t any CellIDs in this workbook.

Custom property #2 is identified by the name ConnName.  The file customProperty2.bin contains the string ‘Sample Basic’ which is the name of my connection.

Custom property #3 is named ConnPOV but it appears to contain the connection details in xml format.  Here is an excerpt of the xml.


Custom property #4 is named HyperionPOVXML and the corresponding file contains xml which lines up with the page fields I have in my worksheet.

What is interesting about the POV xml is that I have two different retrieves that both have working POV selectors which are both implemented as list-type data validations in Excel.  I don’t know what happens internally if I save different values for the POV.

Custom property #5 is labeled HyperionXML.  It appears to contain the information about the Essbase retrieval, but it doesn't appear to be the actual retrieval xml because it doesn't contain the numeric data.  My guess is that this xml is used to track what is on the worksheet from a Hyperion standpoint.

There is a lot of information in this simple xml stream, but the most interesting information is contained in the slice element.  Below is a close-up of the contents of the slice.

The slice covers 6 rows and 7 columns for a total of 42 cells.  It is interesting that the Smart View team chose to serialize their XML in this manner for a couple of reasons.  First, the pipe-delimited format means that every cell must be represented regardless of whether it has a value or not.  This really isn’t too much of a problem unless your spreadsheet range is pretty sparse.  The second thing about this format is that the xml itself is easy and fast to parse, but the resulting strings need to be parsed again to be usable.  For example, the vals node will get split into an array containing 42 elements.  The code must then loop over the 42 elements and process them individually.  The other nodes, such as the status, contain other pieces of information about the grid.  The status codes appear to be cell attributes returned by Essbase; these attributes are used to apply formatting to cells in the same way the Excel add-in UseStyles would apply formatting.  There are a couple of things to take away:

  1. In addition to the data on the worksheet itself, there is potentially *a lot* of information stored under the covers in a Smart View file.
  2. String parsing is a computation-intensive operation and can hurt performance.  Multiply that workload by 8 because, depending on the operation and perhaps the provider, all 8 xml nodes above may need to be parsed.

In addition, the number of rows and columns shown in the slice may be important when you are looking at performance.  Smart View must look at the worksheet to determine the size of the range to read in order to send it to Essbase.  In the case of a non-multi-grid retrieve, the range may not be known and, as a result, the grid may be sized based on the UsedRange of the worksheet.  In our work with Dodeca, we have found that workbooks converted from the older xls format to the newer xlsx format, which supports a larger number of cells, may have the UsedRange flagged internally to be 65,536 rows by 256 columns.  One culprit appears to be formatting applied to the sheet in a haphazard fashion.  In Dodeca, this caused a minor issue that resulted in a larger memory allocation on the server.  Based on the format of the Smart View xml, as compared to the more efficient design of the Dodeca xml format, if this were to happen in Smart View it could cause a larger issue due to the number of cells that would need to be parsed and processed.  Disclaimer: I did not attempt to replicate this issue in Smart View; rather, this is an educated guess based on my experience with spreadsheet behavior.
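One quick way to see what range a sheet claims to occupy is to look at the dimension element persisted in the sheet xml.  A minimal sketch, assuming Book1.xlsx again; a converted sheet reporting something like A1:IV65536 would be the red flag described above:

# The <dimension ref="..."/> element records the sheet's used range.
unzip -p Book1.xlsx xl/worksheets/sheet1.xml | grep -o '<dimension[^>]*>'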

Note: The Dodeca xml format does not need to contain information for cells that are blank.  This format reduces the size and the processing cycles necessary to complete the task.  In addition, when we originally designed Dodeca, we tested a format similar to the one used today by Smart View and found it to be slower and less efficient.

Considering all of this information, I believe the xml format would be difficult for the Smart View team to change at this point as it would cause compatibility issues with previously created workbooks.  Further, this discussion should give some visibility to the fact that the Smart View team faces an on-going challenge to maintain compatibility between different versions of Smart View considering that different versions distributed on desktops and different versions of the internal formats that customers may have stored in their existing Excel files.  I don’t envy their job there.

After looking at all of this, I was curious to see what the xml string would look like on a large retrieve, so I opened up Smart View, connected to Sample Basic and drilled to the bottom of the 4 largest dimensions.  The resulting sheet contained nearly 159,000 rows of data.  Interestingly enough, when I looked at the contents of customProperty5.bin inside that xlsx file, the contents were compressed.  It struck me as a bit strange as the xlsx file format is already compressed, but after thinking about it for a minute it makes sense: the old xls file format probably did not automatically compress content, so compression was there primarily to shrink the content when saved in the xls file format.
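If you are curious whether a given property blob is compressed, piping it through the file utility is a quick test.  A minimal sketch; note that xl/customProperty5.bin is my assumption about where the part lives inside the package, so adjust the path to match your archive listing:

# Plain xml shows up as XML/ASCII text; a compressed blob is reported
# as compressed data or simply "data".
unzip -p Book1.xlsx xl/customProperty5.bin | file -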

Custom property #6 is labeled NameConnectionMap.  The corresponding property file contains xml that appears to map the range names in the workbook to the actual grid and the connection.


Custom property #7 is labeled POVPosition. The file customProperty7.bin contains the number 4 followed by a NUL character.  Frankly, I have no idea what position 4 means.

Moving on to custom property #8, which is labeled SheetHasParityContent.  This file contains the number 1 followed by a NUL character.  This is obviously a boolean flag that tells the Smart View code that new features, such as support for multiple grids, are present in this file.

Custom property #9 is labeled SheetOptions.  The corresponding file, customProperty9.bin, contains an xml stream that (obviously) contains the Hyperion options for the sheet.


Custom property #10 is labeled ShowPOV and appears to contain a simple Boolean flag much like that in custom property #8.

Finally, custom property #11 is labeled USER_FORMATTING and may not be related to Smart View.

I did look through some of the other files in the .zip and found a few other references to Smart View, but I did not see anything significant.

So, now that we have completed an overview of what is contained in one, very simple, multi-grid file, what have we learned?

  1. There is a bunch of stuff stored under the covers when you save a Smart View retrieve as an Excel file.
  2. With the reported performance issues in certain situations with Smart View, you should now have an idea of where to look to resolve Smart View issues in your environment.

There are a number of files I did not cover in this overview that could also cause performance issues.  For example, Oracle support handled one case where they found over 60,000 Excel styles in the file.  Smart View uses Excel Styles when it applies automatic formatting to member and data cells.  When there are that many styles in the workbook, however, it is logical that Excel would have a lot of overhead searching through its internal list of Style objects to find the right one.  Accordingly, there is a styles.xml file that contains custom styles.  If you have a bunch of Style objects, you could delete the internal styles.xml file.
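A minimal command-line sketch of that surgery, heeding the note below about working on a copy first (zip -d removes a single member from the archive; whether Excel happily rebuilds default styles afterwards is something to verify yourself):

# Work on a copy only - removing parts can corrupt the workbook.
cp Book1.xlsx Book1-backup.xlsx
zip -d Book1.xlsx xl/styles.xml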

Note: Be sure to make a copy of your original workbook before you mess with the internal structures.  There is a possibility that you may mess it up and lose everything you have in the workbook. Further, Oracle does not support people going under-the-covers and messing with the workbook, so don’t even bring it up to support if you mess something up.

Wow, that should give you some idea of what may be going on behind the scenes with Smart View.  Even with the experience I have designing and writing the Dodeca web services that talk to Essbase, I wouldn't say that I have a deep understanding of how the information in a Smart View workbook really works.  However, one thing is for certain: Dodeca does not put stuff like this in your Excel files.  It may be interesting to hear what you find when you explore the internals of your workbooks.
Categories: BI & Warehousing

Upgrade to Oracle 12c [1Z0-060 ] exam is available now

Syed Jaffar - Fri, 2013-12-13 14:16
A very quick note about the announcement of the Upgrade to Oracle 12c (1Z0-060) exam: it is no longer in beta. The exam has two sections, with 51 and 34 questions respectively, and passing marks of 64% and 65%. You must pass both sections in order to earn the Oracle 12c upgrade certification.
However, I felt the exam fee, $245, is a little high; isn't that expensive?

What are you waiting for? Go ahead and upgrade your certification to the latest Oracle release. It's a good chance to move to the latest release with a single upgrade exam, even if you were last certified on Oracle 7.3.

Here are the links for more information about the exam:

http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=5001&get_params=p_exam_id:1Z0-060

https://blogs.oracle.com/certification/entry/0856_18


Good luck with your exam!

OWB to ODI Migration Utility released for Windows 64 bit.

Antonio Romero - Wed, 2013-12-11 15:13

The OWB to ODI Migration Utility is now available for Windows 64-bit platforms. It can be downloaded from the Oracle support site as Patch number 17830453. It needs to be applied on top of an OWB 11.2.0.4 standalone install.

More information about the Migration Utility is available here.

OWB to ODI Migration Utility Webcast - Thu 12th December

Antonio Romero - Wed, 2013-12-11 13:33

On Thursday 12th December there is a webcast on the OWB to ODI 12c migration utility; there will be a demo and a drill-down into the utility. Check the meeting URL here - it's at 10am PST on 12th December. Check out the blog post here on getting the utility. It's a good chance to get the inside scoop on the utility and ask questions of the PM and development team.


How To Do Single Sign On (SSO) for Web Services

Mark Wilcox - Wed, 2013-12-11 08:38

A recent question on our internal list was

"A customer has OAM and wants to do SSO to SOAP Web Services".

In this case the customer was using WebCenter Content (the product formerly known as Universal Content Management, UCM). But the scenario applies to any SOAP Web Service.

My answer was well received and there isn't anything proprietary here so I thought I would share to make it easier for people to find and for me to refer to later.

First - There is no such thing as SSO in web services.

There is only identity propagation.

Meaning that I log in to OAM as Fabrizio and connect to a Web application protected by OAM.

That Web application is a Web Services client and I want to tell the client to tell the Web Services that Fabrizio is using the service.

The first step to set this up is to protect the web services via OWSM.

The second step is to translate the OAM token into a WS-Security token.

There are 3 ways to do this second step:

1 - If you are writing a manual client and don't want any other product involved - use OAM STS

2 - Use Oracle Service Bus (which most likely will also use OAM STS but should make this a couple of mouse clicks)

3 - Use OAG - which doesn't need to talk to STS. It has a very simple way to convert an OAM token into a WS-Security header.

If you're not using OSB already - I would recommend OAG. It's by far the simplest plus you get the additional benefits of OAG.

PS - You can use OSB and OAG together in many scenarios - I was only saying to avoid OSB here because the service was already exposed and there was no benefit I could see for having OSB. If you have a reason to have OSB - let me know. I only know OSB at a very high level since my area of focus is security.
