Feed aggregator

Time for a Change – Upcoming Announcements – Millionth Hit

Venkat Akrishnan - Mon, 2009-06-15 12:35

Well, as the saying goes, “Change is the only constant,” and there are quite a few changes coming up on this blog (well, not the blog alone!!!) in the near future. I should be in a position to make an announcement in a week or so, and I am very much looking forward to that. One thing I can say for sure is that you can expect more blog entries from me in the future :-). More on that next week.

And as luck would have it, while I was writing this, the blog registered its first millionth hit (across a total of 302 blog entries). I want to extend my thanks to everyone who has been visiting this blog since its inception on the 18th of July 2007. I believe the blog has come a long way since then. I have written at least two blog entries every week since I started, barring a couple of months when I did not write a single one. When I started to write on BI EE there were only a couple of people writing about it, like Mark (who was very well known in the Oracle BI community even at that time) and Adrian (actually Adrian and I were discussing this in the BI Forum). Then came along John, who was also very active on the BI Forums. And then came people like Alex (Siebel + BI EE), Christian (BI EE + Essbase) and others who had been working on these products for long but only then started to blog about them.

In the coming future, I will be primarily focusing on Hyperion Essbase (a tool that has been really close to my heart but that I have not blogged much about), EPM Integration, Hyperion Planning/EPMA integration, and BI EE – Essbase integration (more use cases). I hope you have found this blog useful, and thanks for stopping by.

Categories: BI & Warehousing

When Backwards Compatibility Goes Too Far

Tahiti Views - Sat, 2009-06-13 19:46
I couldn't help but notice this new article about holdovers from the earliest days of DOS and even CP/M still showing up in Windows-based development: Zombie Operating Systems and ASP.NET MVC. Personally, I really enjoyed working on the IBM C/C++ compiler back in the day, targeting Windows 95. They licensed the Borland resource editor and I adapted the RTF-format online help, with no RTF specs, …

ODTUG 2009

Susan Duncan - Sat, 2009-06-13 04:05
I can hardly believe it's another year (of few posts to my blog) and another ODTUG Kaleidoscope conference is almost upon us. This year the conference is in Monterey, so I'm packing my bags and heading off to Oracle Headquarters in San Francisco tomorrow, then down to the conference on June 20th.

If you have the opportunity I'd urge you to try and make it there too. The 'fun' starts on Saturday with a community service day. Last year we painted school classrooms in New Orleans; this year we are helping to restore habitat at Martin Dunes, California’s largest and most intact dune ecosystem. So I'm packing plenty of sunscreen, as my pale English skin isn't used to the California sun! More fun after the first day of sessions on Sunday, with the second ODTUG Jam Session. Those of you who know Grant Ronald and me know that we are much too shy and retiring to join in that ;-)

But of course, that's not all the fun. The conference is full of interesting and diverse sessions - and I should know: I was part of the panel reviewing papers for the Editor's Choice award, and I spent a few evenings reading papers on everything from project management to Oracle to the Holy Grail.

As for me, I'm really excited to be doing two sessions -

5000 tables, 100 schemas, 2000 developers: This will showcase some of the team-working features, such as standards and version management, reporting and impact analysis, and the highly usable and scalable data modeling in JDeveloper. I've got some great new functionality to reveal: reporting on your data models, user-defined validation, and declarative compare of versioned database objects.

Tooling up for ALM 2.0 with Oracle Team Productivity Center: If you were lucky enough to be at Oracle OpenWorld or the UK Oracle User Group conference last year, you might have seen a very early incarnation of this project that I've been working on. At ODTUG I'm going to be demoing the very latest code, showing you how to use your ALM repositories from within JDeveloper and how to integrate artifacts from those (maybe) disparate repositories through Oracle Team Productivity Center. All this and team management too!

Another goal I have for the conference week is to talk to as many JDeveloper users as possible about team working, ALM and SDLC - and to ensure that I get feedback to take back and work on more functionality in JDeveloper to complement the great application development tool we have.

I look forward to seeing you there - or if not, finding other ways to talk to you!


Fusion Tables

Charles Schultz - Fri, 2009-06-12 13:24
So I admit it, I read slashdot (who doesn't?? *grin*). While there are some topics I really do not care about, for some reason "Oracle" in a headline catches my eye. =) And I am not opposed to Oracle-bashing; I do a fair share myself.

I love how folks at Google Labs come up with all this crazy stuff. And not just GL, but Apple and lots of other places as well. The way technology moves is absolutely spellbinding, and I mean that in the most literal sense possible. *grin*

What I hate is techno-marketing gibberish:
"So now we have an n-cube, a four-dimensional space, and in that space we can now do new kinds of queries which create new kinds of products and new market opportunities"
OK, so I can grapple with an n-cube or 4-space. Show me a query that can create a new kind of product. Heck, show me a query that can make an old product! Create new market opportunities?!? Come on, everything in the galaxy is a market opportunity. You couldn't hit a housefly with a query. And I mean that in the most literal sense. *wink*

Purge old files on Linux/Unix using “find” command

Aviad Elbaz - Wed, 2009-06-10 01:30

I've noticed that one of our interface directories has a lot of old files, some of them more than a year old. I checked with our implementers, and it turns out we can delete all files older than 60 days.

I decided to write a (tiny) shell script to purge all files older than 60 days and schedule it with crontab, so I won't have to deal with it manually. I wrote a find command to identify and delete those files. I started with the following command:

find /interfaces/inbound -mtime +60 -type f -maxdepth 1 -exec rm {} \;

It finds and deletes all files in directory /interfaces/inbound that are older than 60 days.
"-maxdepth 1" -> look for files in the top-level directory only; don't descend into subdirectories.

After packing it in a shell script, I got a request to delete "csv" files only. No problem... I added "-name" to the find command:

find /interfaces/inbound -name "*.csv" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

All csv files in /interfaces/inbound that are older than 60 days will be deleted.

But then the request changed, and I was asked to delete "*.xls" files in addition to "*.csv" files. At this point things got complicated for me, since I'm not a shell script expert...

I tried several things, like adding another "-name" to the find command:

find /interfaces/inbound -name "*.csv" -name "*.xls" -mtime +60 -type f -maxdepth 1 -exec rm {} \;

But no file was deleted. A couple of moments later I understood why: I was trying to find files that are both csv and xls (logically impossible, of course).

After struggling a little with the find command, I managed to make it work:

find /interfaces/inbound \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -type f -maxdepth 1 -exec rm {} \;
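
Packed into a script and scheduled from crontab, the end result looks something like this (a minimal sketch; the script name, variable and cron schedule are illustrative, not from the original request):

#!/bin/sh
# purge_inbound.sh - delete top-level *.csv and *.xls files older than
# 60 days from the inbound interface directory (illustrative wrapper
# around the find command above).
INBOUND_DIR=/interfaces/inbound

# -maxdepth is placed before the tests so GNU find does not warn
# about option ordering
find "$INBOUND_DIR" -maxdepth 1 -type f \
  \( -name "*.csv" -o -name "*.xls" \) \
  -mtime +60 -exec rm -f {} \;

A crontab entry such as "0 2 * * * /home/oracle/scripts/purge_inbound.sh" (path hypothetical) would run it nightly at 02:00.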



Categories: APPS Blogs

Oracle Data Integrator – Connectivity to Open LDAP of Shared Services

Venkat Akrishnan - Tue, 2009-06-09 15:32

One of the features of Oracle Data Integrator is its ability to connect to a lot of disparate data sources using JDBC. One such feature is its ability to expose any LDAP directory as a relational source. If you are on an earlier release of Hyperion EPM like 9.3, where there is no out-of-the-box SSO and authentication/authorization capability for BI EE against OpenLDAP, one approach is to configure BI EE to authenticate against OpenLDAP and then get the user-group information from some other custom table (or by using the DBMS_LDAP package). I had shown how to configure BI EE to authenticate against OpenLDAP here. Since BI EE cannot automatically pick up the groups directly from OpenLDAP in prior releases, one way is to get the user-group information from OpenLDAP and populate it into a set of custom tables. BI EE can then pick up these groups from the custom tables. The architecture would look something like this:


Let's look at what it takes to set up the OpenLDAP connectivity from ODI. As a first step, log into Topology Manager and create a new LDAP connection. Choose the “Sunopsis JDBC Driver for LDAP” as the JDBC driver.


And then choose the JDBC URL.


To enable connectivity to any LDAP directory, the password has to be passed in an encoded format. To encode the password, run the encoder class that ships with the driver (class name as per the ODI LDAP driver documentation) from a command prompt:

java -cp {OracleDI}\oracledi\drivers\snpsldapo.jar com.sunopsis.ldap.jdbc.driver.SnpsLdapEncoder <the password of the OpenLDAP root user>


Copy the resulting encoded password and enter it in the JDBC URL below:

jdbc:snps:ldap?ldap_url=ldap://localhost:28089/<basedn>&ldap_password=KILAKMNJKKLHKJJJDDGPGPDB


The basedn above is what is used for searching all the users, groups, roles etc. In the Data Server definition, enter as the username the root user, who has traverse access to the entire OpenLDAP directory.


You should be able to test the connection to the LDAP from here. Note that the root user of OpenLDAP is different from the admin user. In fact, the admin user's original cn is not admin; it is 911. admin is the givenName attribute of the 911 user. The root user password is root by default. One behavior I noticed across releases: in the 9.3 release the admin user had the traverse-directory privilege, but in EPM 11 the 911 user does not. In my case the default root password did not work, so I had to reset the root user password from Shared Services.


As a side note, if you feel that the Shared Services web console does not give you the actual LDAP directory structure, I would recommend a free LDAP client like JXplorer. A screenshot of the Shared Services OpenLDAP directory in this free client is given below.


Now go to the Designer and reverse-engineer this data source using a selective reverse.



This converts the entire directory structure to a relational format. From this point onwards, it's a matter of building the interfaces and loading the custom user-group tables. Though the setup above is pretty straightforward, it can come in very handy, especially when you are trying to consolidate/report against multiple user sources.
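
For illustration, the custom user-group tables on the BI EE side could be as simple as the following (a sketch; the table and column names are my own, not from the original setup):

create table bi_users (user_name varchar2(256) primary key);
create table bi_user_groups (user_name varchar2(256), group_name varchar2(256));

The ODI interfaces would then map the reversed OpenLDAP user and group-membership "tables" onto these two targets.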

Categories: BI & Warehousing

Getting a Handle on Logical I/O

Eric S. Emrick - Tue, 2009-06-09 13:43
The other day a colleague brought to my attention an interesting situation related to one of the databases he supports. The database was, rather consistently, experiencing heavy cache buffers chains (CBC) latch wait events while processing against a set of “related” tables. The solution devised to mitigate the CBC latch contention involved range partitioning said tables. I believe proper partitioning can be a very reasonable approach to minimize the probability of CBC latch collisions. Of course, you must know the manner in which your data is accessed and partition accordingly, as you don’t want to sacrifice existing solid execution plans among other considerations.

As it turned out, the partitioning approach did indeed reduce the CBC collisions; albeit another form of contention surfaced as a corollary, cache buffer handles latch collisions. I must admit I had a very limited knowledge of buffer handles prior to being made aware of this situation. My colleague pointed me to a very interesting article on Jonathan Lewis' site. This article gives a pithy description of buffer handles. I highly recommend you carve out a few minutes to read it. Not only might you learn something about buffer handles, you might be surprised that the more traditional notions of logical I/O do not really suffice. I was first properly introduced to the 'buffer is pinned count' statistic during a Hotsos training course. Essentially, this statistic indicates the presence of latch-reduced logical I/O.

While, generally speaking, Oracle recommends that hidden parameters not be changed, sometimes they need to be modified to accommodate very specific issues your database is encountering. In this particular case, increasing the value of the _db_handles_cached parameter got rid of the newly surfaced collisions on the cache buffer handles latch. I love learning from others’ experiences. It is amazing how many interesting little tales such as this exist. Also, this type of unforeseen contention shifting reinforces the need to properly test production changes - or maybe better said, the ability to properly test production changes.
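
As a quick way to see latch-reduced logical I/O in your own session, you can compare the 'buffer is pinned count' statistic with the traditional logical read counters (a minimal sketch against v$mystat/v$statname; exact statistic names can vary slightly by release):

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name in ('buffer is pinned count', 'consistent gets', 'db block gets');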

SQL Features Tutorials: Grouping Rows with GROUP BY (New SQL Snippets Tutorial)

Joe Fuda - Tue, 2009-06-09 13:00
A new tutorial has been added to SQL Snippets exploring the GROUP BY clause and related extensions such as GROUPING SETS, ROLLUP, and CUBE. Group related functions such as GROUP_ID, GROUPING, and GROUPING_ID are also covered.

Tablespace selection in interval partitioning

Yasin Baskan - Tue, 2009-06-09 01:58
11g brought interval partitioning, a new partitioning method that eases the maintenance burden of adding new partitions manually. The interval partitioning clause in the CREATE TABLE statement has an option to list tablespace names to be used for the interval partitions. The documentation states that the tablespaces in the list you provide are used in a round-robin manner for new partitions:

Interval partitions are created in the provided list of tablespaces in a round-robin manner.

This does not mean that any newly created partition will reside in the tablespace that is next on the list. Tablespaces may be skipped if inserted rows skip intervals. Here is a test case that shows how the list is used.

set lines 200
SQL> r
1 create table t(col1 date,col2 varchar2(100))
2 partition by range (col1)
3 interval(numtoyminterval(1,'MONTH')) store in (tbs1,tbs2,tbs3)
4* (PARTITION p0 VALUES LESS THAN (TO_DATE('1-1-2009', 'DD-MM-YYYY')) tablespace tbs1)

Table created.

SQL> r
  1  select partition_name, high_value, tablespace_name
  2* from user_tab_partitions where table_name='T'

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME
------------------------------ -------------------------------------------------------------------------------- ------------------------------
P0                             TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS1

The "store in" clause lists tablespaces tbs1, tbs2 and tbs3 to be used for interval partitioning. After the above create table command I now have one partition which resides in tbs1. Let's insert a row which needs to be inserted into a new partition and see which tablespace the partition will be created in.

SQL> insert into t values(to_date('15.01.2009','dd.mm.yyyy'),'jan');

1 row created.

SQL> commit;

Commit complete.

SQL> select partition_name, high_value, tablespace_name
  2  from user_tab_partitions where table_name='T';

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME
------------------------------ -------------------------------------------------------------------------------- ------------------------------
P0                             TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS1
SYS_P...                       TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS2

The row I inserted maps to a single interval (one month); its date value is not more than one month beyond the current maximum value. So the next tablespace, tbs2, is used for the new partition.

SQL> insert into t values(to_date('15.02.2009','dd.mm.yyyy'),'feb');

1 row created.

SQL> commit;

Commit complete.

SQL> select partition_name, high_value, tablespace_name
  2  from user_tab_partitions where table_name='T';

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME
------------------------------ -------------------------------------------------------------------------------- ------------------------------
P0                             TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS1
SYS_P...                       TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS2
SYS_P...                       TO_DATE(' 2009-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS3

Again I inserted a row for the next month, and the partition was created in tbs3, the next one on the list.

What happens if I insert a row with a date value that is more than one month after the current maximum partitioning key?

SQL> insert into t values(to_date('15.04.2009','dd.mm.yyyy'),'apr');

1 row created.

SQL> commit;

Commit complete.

SQL> select partition_name, high_value, tablespace_name
  2  from user_tab_partitions where table_name='T';

PARTITION_NAME                 HIGH_VALUE                                                                       TABLESPACE_NAME
------------------------------ -------------------------------------------------------------------------------- ------------------------------
P0                             TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS1
SYS_P...                       TO_DATE(' 2009-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS2
SYS_P...                       TO_DATE(' 2009-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS3
SYS_P...                       TO_DATE(' 2009-05-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA TBS2

I skipped March and inserted a value for April. The current maximum key becomes May 1st; we do not see a partition with a high value of Apr 1st. The next tablespace on the list was tbs1, but the new partition is in tbs2, not tbs1. Tbs1 would have been used had I not skipped an interval when inserting rows.

So, the tablespaces on the list are used in a round-robin manner, but each list slot corresponds to one interval. If you skip intervals, the tablespaces corresponding to those intervals are skipped too.

This is something to keep in mind if you want to control exactly which tablespace will hold which partition.

The Need for Feed: RSS & Twitter, Why They May be Like Peanut Butter and Jelly

Ken Pulverman - Mon, 2009-06-08 20:54
The challenge with all things social media is keeping them up-to-date. Blogs can be a serious drain, as they beg for content like Audrey II in 'Little Shop of Horrors.' If you are using Twitter to communicate with your network and push relevant content, you may be experiencing the same thing. Even keeping up with 140-character posts can be a grind.

Recently, I've started experimenting with pushing some of the content that I feel is most relevant to the people I interact with as well as the public at large. It's been fun to watch the reactions and the results.

http://www.twitter.com/topcrmbloggers is a feed I set up that aggregates what I consider to be the best CRM bloggers out there. You'll note that it gained 33 followers in just a week without my even trying. The feed of this Twitter handle in turn feeds one of our Netvibes pages. Entropy, yes, but we are serving up content at different potential access points for different users.

I also set up a fun feed of Odd News, which I love to read while on the bus. It started on @pulverman on Twitter and is now featured on @OddNewsNetwork. Starting tomorrow, @pulverman will be an aggregation of the top Marketing 2.0 blogs, as well as my shorter musings on the world of Marketing 2.0. From the same feed as OddNewsNetwork I select one post at random once a day and feed my personal Twitter account, @bolobao. This in turn updates Facebook, providing a bit of fun for friends to see and comment on.

On slow news days, I know that at least the feeds I've set up are keeping various sites up-to-date with interesting content.

These are early days in my feed experiments, but I imagine marketers everywhere are struggling with these same issues.

In the final analysis, I think what you choose to feed to your social marketing efforts, like what you post on your blog, will be judged on relevance. If it is relevant, aggregated feeds crafted with the love and personality you apply to a good blog post will be appreciated.

I'll no doubt keep tuning my feeds to make them ever more relevant and interesting. I just sent my Yelp reviews to my personal Twitter (@bolobao) and our Delicious posts (www.delicious.com/OracleCRM) to our work Twitter account (@OracleCRM). Perhaps RSS and Twitter aren't quite as cozy as PB&J yet, but I would like to submit that we all have the need for feed, even newspapers, making this combo perhaps your own personal Associated Press.

Oracle Exadata posts #1 TPC-H result

Nigel Thomas - Sun, 2009-06-07 07:01
Greg Rahn's Structured Data blog provides the data that Kevin Closson had to remove from his own blog. From an HP/Oracle point of view, a very good performance, reducing cost/QphH by a factor of 4.

However, it is interesting to see that the HP/Oracle solution is still more than 4 times the cost/QphH of the #2 placed Exasol solution (running on Fujitsu Primergy, and reported a year ago) - while the absolute performance improvement is relatively slight (1.16M queries/hr against 1.02M).

Oracle Data Integrator – Using OBI EE as a Data Source – Using BI EE JDBC Driver

Venkat Akrishnan - Sun, 2009-06-07 06:33

In a prior blog entry here, I had shown how to use BI Server as a data source for loading data from Essbase into custom tables. That approach used the Sun JDBC-ODBC bridge, which in most cases performs badly. Today we shall see how to use the native BI Server JDBC driver to connect to BI Server and, in turn, use it for loading Essbase data into a custom table.

One of the major advantages of ODI is that it can connect to any data source that has a valid JDBC driver. In order to make ODI use the BI EE JDBC driver, we need to make the BI Server's driver (jar file) accessible to ODI. The jar file (bijdbc14.jar) can be obtained from the BI Publisher web server installation folder {OracleBI}\oc4j_bi\j2ee\home\applications\xmlpserver\xmlpserver\WEB-INF\lib. The main reason we go to the BIP folder structure is that BIP uses the JDBC driver of BI Server to report on BI EE metadata.


The screenshot above provides the database driver class and the connection string for the BI EE JDBC driver. Now copy this driver jar file to the {OracleDI}\oracledi\drivers directory.
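
For reference, since the screenshot has not survived in this copy, the values are typically the following (treat them as assumptions and verify against your own BI Publisher configuration):

Driver class: oracle.bi.jdbc.AnaJdbcDriver
JDBC URL: jdbc:oraclebi://<bi_server_host>:9703/

9703 is the default BI Server listen port.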


Now open up Topology Manager and create a new Universal Data Server. Use the above driver class and connection string in the JDBC URL fields.


Ensure that you are able to test the connection using the Administrator username and password.


Now, in the physical schema choose any subject area in the Presentation Layer.


Once this is done, go to the Designer, create a project, and import the necessary Knowledge Modules that I had shown in the last blog entry. Since we are using JDBC, we can do a selective reverse of all the necessary tables.


But be aware that BI EE supports multiple presentation tables with the same name across subject areas, while ODI currently does not support importing multiple tables with the same name (there is no BI EE specific RKM; this could be resolved by building a separate RKM). Also, though you might be able to reverse objects with special characters, they will not work within an interface, so it is better to create aliases with ODI-supported naming conventions. Let's choose just 4 such tables: Basic, Market, Scenario and Year.




Now let's build a very simple interface that inserts Gen2, Market and the measure value into a dummy table. Remember that in BI EE, when you bring in multiple tables from a single presentation subject area, no join conditions are needed; in the ODI interface, though, a cross join has to be configured across the tables.


If you execute this interface, you will notice that BI Server generates the MDX accordingly and then inserts the output of the MDX into the database table.


This should give you an idea of how to go about using multiple BI EE subject area tables in an ODI interface and then using that interface to load data into a database table. Also, this method should typically be as fast as your BI Server, since it uses native JDBC connectivity as opposed to the JDBC-ODBC bridge.

Categories: BI & Warehousing

Opening up SOA

Rakesh Saha - Fri, 2009-06-05 09:45

Oracle BI EE – Handling Complex String Manipulations – Using Regular Expressions (RegEx) and Evaluate

Venkat Akrishnan - Thu, 2009-06-04 05:05

One of the requirements I recently came across involved some complex string manipulations that cannot be solved easily using BI EE-specific functions. Just to illustrate the requirement, take a look at the screenshot below.


As you see, the above is a simple BI EE report containing customer-related details. The major problem with this report is that it contains all the details of a customer in a comma-separated format; the source application inserted the data into the table in that fashion. One key aspect is that the comma-delimited customer details follow a specific pattern. That is, the details column is a concatenation of the customer attributes below, in this order:

1. Customer First Name
2. Customer Last Name
3. Customer Gender
4. Customer Email Address
5. Customer Phone Number
6. Customer Street Address
7. Customer City
8. Customer State
9. Customer Country

The requirement is to get a report containing the Customer ID, Country ID, Customer Phone Number and the Customer Email Address. Normally the best place to handle this would be the ETL stage, where the individual attributes can be split into multiple columns and then used for reporting. Unfortunately, in most cases we have to work with what we have, and doing this in the reporting layer requires some complex INSTR/SUBSTR logic. Today we shall look at another approach for handling such requirements: instead of the normal string-manipulation functions, we shall use a feature introduced in Oracle 10g called regular expressions.

Regular expressions (RegEx) make string manipulation easy and in fact provide quite a few advanced formatting and filtering options. They come as 4 functions:

1. REGEXP_SUBSTR – Advanced Sub-String function
2. REGEXP_INSTR – Advanced String position evaluation function
3. REGEXP_LIKE – For doing advanced filtering on a result set
4. REGEXP_REPLACE – For doing advanced multiple-iterative string replace

In order to achieve the above requirement, we need to use these database functions, and of course we need to use EVALUATE to call them. To split the comma-separated customer attributes above, we need the RegEx functions given below:

1. Customer Phone – REGEXP_REPLACE(CUSTOMER_DETAILS, '(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*)', '\5')
2. Customer Email – REGEXP_REPLACE(CUSTOMER_DETAILS, '(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*)', '\4')

Now use EVALUATE and pass the above RegEx functions down to the database.
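
In the Answers column formula, the call would look something like this (a sketch using the 10g-era EVALUATE form; the presentation table/column reference is illustrative):

EVALUATE('REGEXP_REPLACE(%1, ''(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*)'', ''\5'')', Customers.CUSTOMER_DETAILS)

Note the doubled single quotes: the whole database expression is passed as one string literal, with %1 substituted by the column reference.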


If you look at the report now, you should basically have the individual attributes in a separate column as shown below.


The above was one requirement. Let's look at another one now. In some cases, when you are reporting against transactional applications, you have columns entered as free-flowing text. The requirement is to classify such a column into 3 types: numeric, free text without numbers, and alphanumeric. End users should be given a provision to choose a specific type, and the column should then show only the corresponding text; for example, if Numeric is chosen, only values consisting of numbers should be displayed. In normal SQL terms, filtering for numerals and alpha-numerals requires separate logic and is complicated as well. We shall see how to achieve this using RegEx.

Let's first take a look at the report.


Now, let's create a prompt that produces just 3 values (Numeric, Alphabets, Alpha-Numeric).



Make this prompt set a presentation variable called AlphaNum.


Now go to the original report and create a filter as shown below

CASE WHEN '@{AlphaNum}{Numeric}' = 'Numeric' 
WHEN '@{AlphaNum}{Numeric}' = 'Alphabets' 
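
The THEN branches of that filter have not survived in this copy. A possible shape, assuming REGEXP_LIKE called through EVALUATE and a hypothetical CUSTOMER_COMMENTS physical column, would be:

1 = CASE
      WHEN '@{AlphaNum}{Numeric}' = 'Numeric'
        THEN EVALUATE('CASE WHEN REGEXP_LIKE(%1, ''^[0-9]+$'') THEN 1 ELSE 0 END', CUSTOMER_COMMENTS)
      WHEN '@{AlphaNum}{Numeric}' = 'Alphabets'
        THEN EVALUATE('CASE WHEN REGEXP_LIKE(%1, ''^[[:alpha:] ]+$'') THEN 1 ELSE 0 END', CUSTOMER_COMMENTS)
      ELSE EVALUATE('CASE WHEN REGEXP_LIKE(%1, ''^[[:alnum:] ]+$'') THEN 1 ELSE 0 END', CUSTOMER_COMMENTS)
    END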


Now include this report in the main dashboard along with the prompt.



As you see, RegEx functions are pretty powerful. And with the advent of EVALUATE, we now have the capability to leverage them from BI EE itself. This saves time and effort in implementing complex string-based logic.

Categories: BI & Warehousing

Oracle Enterprise Linux 4 Update 8 CD and DVD ISOs Now Available

Sergio's Blog - Wed, 2009-06-03 00:53

Download OEL 4 Update 8 for i386 and x86_64 architectures for free from edelivery.oracle.com/linux. ia64 ISOs will be published in the same location a bit later.

Categories: DBA Blogs

My JavaOne Presentation:

Debu Panda - Tue, 2009-06-02 13:24
I'm co-presenting with Reza Rahman, my co-author on EJB 3 in Action, on JPA application tuning.

Here are details for the talk:

TS-3977 -Keeping a Relational Perspective for Optimizing the Java™ Persistence API

Thursday June 04 4:10 PM - 5:10 PM Hall E 134

If you are attending JavaOne this year, please attend the presentation. We also have a book signing scheduled tomorrow (Wednesday, June 3, 2pm at the Java Book Store), so stop by if you want your book signed.


Oracle Data Integrator – Using Oracle BI EE as a Data Source – Loading Essbase data into custom tables – Using JDBC-ODBC Bridge

Venkat Akrishnan - Tue, 2009-06-02 03:23

In the blog entry here, I had shown multiple ways of loading data into Essbase. Today we shall see an approach for extracting data from Oracle BI Server using the JDBC-ODBC bridge. The idea is to use BI Server to import the Essbase metadata; Oracle Data Integrator then extracts the Essbase data through MDX via the BI Server and loads it into a custom table. Though there are no out-of-the-box RKMs to reverse-engineer BI Server metadata, we can work around that using other functionality. If you want to extend this, you can create your own custom RKM to reverse-engineer the BI Server metadata tables.

To start with, let's first import a sample Demo->Basic cube into our BI EE repository. While creating the subject area for this cube, ensure that you do not have spaces/special characters in the column names. If you need the special characters for Answers, add aliases with names free of spaces/special characters. For example, if one of your column names is Gen1, Year, add an alias such as Year.



The idea behind doing this is to ensure that we can treat these tables as normal tables over the AnalyticsWeb ODBC connection. Of course, we could make ODI append double quotes when special characters are present in a column or table name, but for now let's go with this approach.

Once the aliases have been added, go to Topology Manager and create a Universal Data Server using the Sun JDBC-ODBC bridge.
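
The Sun bridge itself uses the standard JDK driver; assuming the default BI EE DSN name AnalyticsWeb, the data server definition would use:

Driver class: sun.jdbc.odbc.JdbcOdbcDriver
JDBC URL: jdbc:odbc:AnalyticsWeb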




Create a physical schema. Ensure that you are entering the correct subject area name in the schema.


Ensure that you are able to test the connection. Once this is done, create a logical schema and assign it to the Physical Data Server through the Global Context.


After this, go to the Designer and create a project. Import the following Knowledge Modules into the project.


Then create a model. Since we do not have an RKM to reverse-engineer the data source automatically (though you could extend other RKMs for this purpose), for now let's manually enter the tables and columns in the model as shown below.


Use the Aliases as the column names.



Now ensure that you are able to view the data directly from Essbase using this setup.



If you look at the screenshot above, BI Server shows the data against the actual names (Gen1, Year etc.), so you can be sure that the aliases can be accessed even outside of BI EE using the ODBC connection. Now, to keep this simple, let's create a straightforward table called YEAR_DIM_ESS in any database schema (this will act as our target).
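
The DDL for the target is not shown above; something along these lines would do, with columns matching the aliases created earlier (column names and lengths are illustrative):

create table YEAR_DIM_ESS (
  year_name  varchar2(80),  -- alias for "Gen1, Year"
  qtr_name   varchar2(80),  -- alias for "Gen2, Year"
  month_name varchar2(80)   -- alias for "Gen3, Year"
);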


Import this table into another model, and then create an interface mapping the columns from source to target. Ensure that you use the SUNOPSIS_MEMORY_ENGINE as the staging area.




Now, if you execute the interface, you will have Essbase data loaded into the database table.


This is one other good option to consider if you have a full set of end-to-end Oracle tools like ODI, Essbase and BI EE. Today I used the JDBC-ODBC bridge; later I will cover how to use the BI Server JDBC jar directly within ODI.

Categories: BI & Warehousing

Deconstructing the iPod Shuffle UI

Tahiti Views - Mon, 2009-06-01 10:36
The new buttonless iPod Shuffle, which moves all the controls onto the headphone cord, is taken to task in this article: The new iPod shuffle: Button, button, who's got the button? Now, I'm a recent purchaser of the previous Shuffle model, and intuitively I prefer the Play/Pause/Forward/Back/Up/Down controls of that previous model. But I like to take contrarian positions sometimes too, so let me …

size sometimes does matter...

Nuno Souto - Sun, 2009-05-31 18:37
Not really. Well... you know what I mean. ;) Anyway, sorry for the "catchy" title. This entry is prompted by some of the ongoing exercise in fine-tuning our AIX db servers. AIX is a great OS, but the information needed to eke out the last ounce of performance for Oracle dbs is sparse, or spread across many documents. This blog entry is for my future reference on how to make Oracle use large pages in …

