Feed aggregator

Bissantz DeltaMaster - Cool Tool for OLAP

Keith Laker - Fri, 2010-06-18 09:56

I recently returned from a trip to Germany where I visited Bissantz, a relatively small company in Nürnberg that develops and markets an interesting reporting and data visualization tool named DeltaMaster that works with Oracle OLAP (and other data sources). I was very impressed with this tool. There are a few things that I really liked about it:

  • It's very good at displaying a lot of information within a single report. One of the ways that it does this is by mixing graphical representations of data with numerical representations (they are very big on something called 'Sparklines'). This makes it very easy to create a single report that includes data on, for example, sales for the current quarter, but also provides indications of sales trends and shares.

  • The presentation of data is very clean. While the reports themselves are very sophisticated, the developers have done a terrific job of presenting them to users. The presentation tends to be more functional than fluffy, but it's done very well. It is easy on the eyes.

  • DeltaMaster goes way beyond basic cross tabs and charts. There are prebuilt reports / analysis templates for rankings, concentration analysis, portfolio analysis, etc. There are quite a few different types of prebuilt analysis and I won't try to do justice to them here. See for yourself.

  • It works better on OLAP than on tables. I'm obviously biased when it comes to this topic, but for the end user this means more analytic power and flexibility.

Below is a concentration analysis report. This is along the lines of a Pareto chart. There are many different types of built-in analysis, but this one looks nice in the confined space of this blog's page.

[Screenshot: concentration analysis report]
Here are some links:

The DeltaMaster page at Bissantz: http://www.bissantz.com/products/

A clever blog by Bella, the Bissantz company dog: http://www.bella-consults.com/

Bella, if you happen to find your way to this blog, here's a 'hello' from Henry (the OLAP product manager's dog).







Categories: BI & Warehousing

Please vote for my Ruby session proposals at Oracle OpenWorld

Raimonds Simanovskis - Wed, 2010-06-16 16:00

I am trying to tell more people at Oracle OpenWorld about Ruby and Rails and how they can be used with the Oracle database. Unfortunately my session proposals were rejected by the organizers, but now there is a second chance to propose sessions at mix.oracle.com, and the top-voted sessions will be accepted for the conference. But currently my proposed sessions do not have enough votes :(

I would be grateful if my blog readers and Ruby on Oracle supporters would vote for my sessions Fast Web Applications Development with Ruby on Rails on Oracle and PL/SQL Unit Testing Can Be Fun!.

You need to log in to mix.oracle.com with your oracle.com login (or create a new one if you don't have it). You also need to vote for at least one other session, as votes are only counted if you have voted for at least three sessions. Voting must be done by the end of this week (June 20).

And if you have other oracle_enhanced or ruby-plsql users in your organization, please ask for their support as well :)

Thanks in advance!

Categories: Development

Oracle BPM Suite .. unified engine..

Khanderao Kand - Mon, 2010-06-14 18:48
Oracle announced the BPM Suite today: http://finance.yahoo.com/news/Oracle-Announces-Oracle-iw-1502422125.html?x=0&.v=1

An important note: this Suite is based on the unified process foundation of Oracle Business Process Management Suite 11g. It has the same engine executing both BPEL and BPMN processes. Note that there is no conversion from BPMN to BPEL, or from BPMN to any other model, for execution; the same service engine can execute both BPEL and BPMN instructions. The BPM Suite is an enrichment and extension of Oracle's SOA Suite. It provides an easy-to-use Process Composer (BPMN) that can be used to create processes, deploy them, and modify (for extension) processes that are already deployed. It provides end-to-end unified management and monitoring of business processes.

Using Groovy AntBuilder to zip / unzip files

Nigel Thomas - Mon, 2010-06-14 06:23
I've been quiet for quite a while - partly because I am not working with Oracle just at the moment. I have been building some automated workflow systems using Groovy as the scripting language. I've known about Groovy since James Strachan first invented it back around 2002/3 - but this is the first time I've really been using it in earnest. It's great for portable scripts, and for integration with Java (it runs in the JVM). It's much friendlier than Java for someone like me who comes from a PL/SQL and C (not C++) background.

Anyhow, I found out about using Groovy Antbuilder tasks, and have been using them to manage zipping / unzipping file sets:

def ant = new AntBuilder(); // create an antbuilder
ant.unzip( src: planZipFile, dest:workingDirName, overwrite:"true")

Then I found I wanted to flatten the output (ie don't reproduce the directory structure). The Apache Ant documentation for the unzip task shows the Ant XML:

<unzip src="apache-ant-bin.zip" dest="${tools.home}">
  <patternset>
    <include name="apache-ant/lib/ant.jar"/>
  </patternset>
  <mapper type="flatten"/>
</unzip>


How to add the mapper element?

Well, lots of googling later, I couldn't find an example, but I did see the patternset being used. Thanks to that, I found that the Groovy way of expressing the mapper part of this is to add a closure after the call:
def ant = new AntBuilder();
ant.unzip( src: planZipFile, dest:workingDirName, overwrite:"true"){ mapper(type:"flatten")};
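
For reference, here's a slightly fuller sketch putting the two pieces together, plus the matching zip call; the file names are borrowed from the Ant docs example above rather than from my actual workflow:

// AntBuilder maps nested Ant XML elements onto nested Groovy closures / method calls
def ant = new AntBuilder()

// unzip a single entry, flattening away its directory structure
ant.unzip(src: 'apache-ant-bin.zip', dest: 'tools') {
    patternset {
        include(name: 'apache-ant/lib/ant.jar')
    }
    mapper(type: 'flatten')
}

// zipping a directory back up is the mirror image
ant.zip(destfile: 'plan.zip', basedir: 'tools')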


So I hope someone finds that useful.

APEX 4.0 Oracle-By-Examples now available

David Peake - Fri, 2010-06-11 16:55
Our curriculum development team have been very busy of late developing new Oracle-By-Examples (OBEs) for Oracle Application Express 4.0.

These include:
  • Getting Started with Application Express {Updated from previous versions to include APEX 4.0 content}

  • Interactive Reports (x 3) {Updated from previous versions to include APEX 4.0 content}

  • Building a Websheet in Application Express 4.0 {New}

  • Building Dynamic Actions in Oracle Application Express 4.0 {New}

  • Extending your Application Using Plug-Ins in Oracle Application Express 4.0 {New}

  • Building Charts, Gantts and Maps with Oracle Application Express 4.0 {New}


Please go to the Oracle Learning Library to teach yourself how to use many of the new features in Oracle Application Express 4.0.
{If you look at the URL you may notice this site is written in APEX :)}

Happy learning!

APEX 4.0 Sample Applications

David Peake - Fri, 2010-06-11 16:43

Included within the pre-production APEX 4.0 build on http://apex.oracle.com is a revised Sample Database Application and a brand new Sample Websheet Application. Simply go to Application Builder and then Create > Sample Applications and install or re-install.

The Sample Database Application has new (less dated) clothing products and new features such as dynamic actions, plug-ins, maps, etc. Download the application and see working examples of many of the new features.

The Sample Websheet Application is all about big cats and shows how multiple pages can be constructed incorporating information from data grids.

FREE event in Reston, Virginia - June 15

David Peake - Fri, 2010-06-11 16:29
The OTN Develop Day is coming back to Reston: See details here.

The event we ran last time in Reston led us to completely change the way we organize and run these days. Since then we have created a new VM for running on Oracle Virtual Box: Download from here. These days are designed as BYOL (Bring Your Own Laptop) days so we can run the events more frequently and at the end you have a fully configured laptop. This also alleviates the issue we had last time in Reston where we didn't have enough machines for the participants.

Hope to see you in Reston!

Tablet PCs - Why so pricey?

Stephen Booth - Thu, 2010-06-10 12:35
Is a touch screen really that expensive to do, or something? I've been tossing around the idea of getting a tablet PC to stick OneNote on, to use for note taking, reading documents and some web surfing (including Gmail) via WiFi. I'm having to attend a lot of meetings/briefings where the ability to take notes and link them to documents and web pages would be incredibly useful. Looking around the web, the...

Dealing with JDK1.5 libraries on Oracle 10g

Marcelo Ochoa - Tue, 2010-06-08 16:04
Modern libraries are compiled with JDK 1.5, and the question is how to deal with these libraries on an Oracle 10g OJVM.
Some examples are the Lucene 3.x branch or Hadoop. The solution that I tested uses the Java Retrotranslator and some complementary libraries.
I have tested this solution on the Lucene Domain Index 3.x branch with success.
As you can see in the CVS, there is a build.xml file which performs all the Retrotranslator steps. Here is a step-by-step explanation of the process:

  1. Load all required libraries provided by the Retrotranslator project, which implement features not available in the JDK 1.4/1.3 runtime; this is done in the target load-retrotranslator-sys-code. This target loads many libraries into the SYS schema because they are immutable, or have a low probability of change: they will only change if we upgrade the Retrotranslator version. All libraries are then compiled to assembler using the NCOMP utility (target ncomp-runtime-retrotranslator-sys-code).
  2. Then we can convert the libraries compiled with JDK 1.5 (in this build.xml, the Lucene and Lucene Domain Index implementations) to a JDK 1.4 target runtime. This is done in the targets backport-code-lucene and backport-code-odi; the first target converts all Lucene libraries, excluding JUnit and test code (these libraries require the JUnit and Retrotranslator jars as dependencies), while the second converts the Lucene Domain Index jar, which depends on the Lucene core and Oracle's libs. The back-port operation generates a file named lucene-odi-all-${version}.jar with the Lucene and Lucene Domain Index code ready to run on a JDK 1.4 runtime; a sketch of this step appears after the list.
  3. Once we have the code back-ported to a JDK 1.4 runtime, we can upload and NCOMP it into Oracle 10g; this is done in the targets load-lucene-odi-backported-code and ncomp-lucene-all.
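
Here is a minimal sketch of what the back-port step can look like in Ant, assuming Retrotranslator's bundled Ant task; the jar names and paths are illustrative, not the ones from the actual build.xml:

<!-- register the Ant task shipped in the Retrotranslator transformer jar -->
<taskdef name="retrotranslator"
         classname="net.sf.retrotranslator.transformer.RetrotranslatorTask"
         classpath="lib/retrotranslator-transformer.jar"/>

<target name="backport-code-lucene">
  <!-- translate a JDK 1.5 jar into a JDK 1.4-compatible jar -->
  <retrotranslator srcjar="lib/lucene-core-3.0.jar"
                   destjar="build/lucene-core-3.0-jdk14.jar"
                   target="1.4" verify="true">
    <!-- backported classes call into the Retrotranslator runtime -->
    <classpath>
      <pathelement location="lib/retrotranslator-runtime.jar"/>
    </classpath>
  </retrotranslator>
</target>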
And that's all! The code works fine on my Oracle 10.2 database on Linux :) Finally, users of 11g and 10g databases can deploy the Lucene Domain Index implementation using one distribution file.

Concepts Guide: 9/27 - Process Architecture

Charles Schultz - Fri, 2010-06-04 13:26
"Figure 9-1 can represent multiple concurrent users running an application on the same computer as Oracle. This particular configuration usually runs on a mainframe or minicomputer."

Wow, this section of the documentation must have been recycled for a number of years. =)

Good pictures, descriptions of various processes.

In general, I like the "See also" sections, but I wish the link would go directly to the relevant section of the reference, instead of the top-most TOC page.

This section confused me:
"When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint. This is done by the CKPT process. The CKPT process does not write blocks to disk; DBWn always performs that work.

The statistic DBWR checkpoints displayed by the System_Statistics monitor in Enterprise Manager indicates the number of checkpoint requests completed."

If the CKPT process is responsible for updating the datafile headers and DBWn is responsible for something else (writing blocks to disk), why is the statistic called DBWR checkpoints? That is quite misleading, and perhaps leads to the confusion that spawned the warning about the DBWR in the first place. =)
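
As an aside, the statistic itself can be pulled straight from v$sysstat for those of us who prefer SQL*Plus to the Enterprise Manager monitor (a minimal example; the statistic name is as it appears in 10g):

-- read the checkpoint counter directly from the instance statistics
SELECT name, value
  FROM v$sysstat
 WHERE name = 'DBWR checkpoints';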

Both PMON and SMON "check regularly". What is "regularly"?

While there are a lot of good ideas embedded in Oracle, it is surprising that some of them still have such an antiquated and/or obfuscated interface. For example, the job scheduling system: the job queue processes are quite cool, but using them is a pain in the arse. The EMGC GUI is not too shabby, but what really sucks is the API; what about a simple API for those of us who do command-line work? VPD and Streams are the same way (I have not yet seen any GUI for VPD). At least Shared Server is a little easier to grasp and implement, but it is still very easy to shoot yourself in the foot.

In terms of performance in the context of Shared Server, wouldn't immediate results from FIRST_ROWS_N operations be queued as well? So it would be possible for queued results to actually return more slowly than when using a dedicated server?


Overall I found this chapter disappointingly light on details, and on examples for that matter. I would love to see the program flow, end-to-end, of requesting, establishing, executing and concluding a transaction. Likewise, the last few sections (under "The Program Interface") don't really say much at all; they are most useful as a dictionary or appendix. There is nothing that really describes what things are, how they work, or the role they play in the larger picture. I mean, they do a little, but not a whole lot.

Shameless boasting

Tony Andrews - Fri, 2010-06-04 05:44
I hate to boast but... StackOverflow has become one of my favourite forums for reading and sometimes answering Oracle-related questions (though it covers all programming topics in fact). Today I am the first person ever to be awarded the Oracle badge, for having earned 1000 upvotes for my answers to questions with the Oracle tag. Of course, this may just mean I have too much time on my hands...

Berkeley DB Java Edition High Availability Performance Whitepaper

Charles Lamb - Thu, 2010-06-03 07:53

Over the past few months we've been working on measuring the impact of HA on JE performance when running on large configurations. The results are documented in a whitepaper that I wrote.

vmForce - adding new age features to the application platform

Vikas Jain - Thu, 2010-06-03 00:08
As VMware and Force.com joined hands to create the vmForce platform for cloud applications, it's interesting to note how some of the new age features are becoming part and parcel of the application infrastructure.

A few years back, an application server with servlet and EJB containers, connection pooling, and other services was considered to be an application platform. Then, with the SOA wave, features like orchestration (BPEL), a service bus (for routing and transformation), adapters (for connecting apps), and governance tools became part of the platform, leading to the development of composite applications.
Now, vmForce is taking it another step, making features such as social collaboration, Google-like search over any data, mobile access, BPM, and reporting dashboards part of the platform, relieving application developers and administrators of the pain of integrating with external tools that provide these features.

The following vmForce feature list is extracted from Anshu's blog post on this topic.
  • Social Profiles: Who are the users in this application so I can work with them?
  • Status Updates: What are these users doing? How can I help them and how can they help me?
  • Feeds: Beyond user status updates, how can I find the data that I need? How can this data come to me via push? How can I be alerted if an expense report is approved or a physician is needed in a different room?
  • Content Sharing: How can I upload a presentation or a document and instantly share it in a secure and managed manner with the right set of co-workers?
  • Search: Ability to search any and all data in your enterprise apps
  • Reporting: Ability to create dashboards and run reports, including the ability to modify these reports
  • Mobile: Ability to access business data from mobile devices ranging from BlackBerry phones to iPhones
  • Integration: Ability to integrate new applications via standard web services with existing applications
  • Business Process Management: Ability to visually define business processes and modify them as business needs evolve
  • User and Identity Management: Real-world applications have users! You need the capability to add, remove, and manage not just the users but what data and applications they can have access to
  • Application Administration: Usually an afterthought, administration is a critical piece once the application is deployed

Connecting Salesforce.com from Google AppEngine using OAuth

Vikas Jain - Wed, 2010-06-02 23:47
Here's a blog post on how to connect to and authenticate with salesforce.com from an application deployed on Google AppEngine using the OAuth protocol.
http://blog.sforce.com/sforce/2010/04/connecting-google-app-engine-and-salesforcecom-with-oauth.html

See how the complexity of the OAuth protocol has been hidden by the helper APIs OAuthAccessor and OauthHelperUtils.
Refer to this demo project written by Jeff Douglas.

Force.com security

Vikas Jain - Wed, 2010-06-02 23:37
You can find resources and links to Force.com platform security for secure cloud development here.
http://blog.sforce.com/sforce/2010/04/introducing-forcecom-secure-cloud-development.html

What I like is how it's organized - complete with education material, security design principles, secure coding guidelines, security testing tools, and how to perform a security review - providing end-to-end guidance on implementing security for apps deployed on Force.com.

JHeadstart custom search with default values

JHeadstart - Sun, 2010-05-30 00:15

JHeadstart has a powerful generator for ADF Faces pages. One of the features of the generator is the generation of search functionality in a page. The search functionality offers a Quick Search with a single search item and an Advanced Search with multiple search criteria. When generating search functionality with JHeadstart 11g, a design decision has to be made whether to use the ADF Model approach or the JHeadstart custom approach.

With the ADF Model approach the Quick and/or Advanced Search is defined in the View Object using Query Criteria. The generated page uses a single component (<af:query> or <af:quickQuery>) that renders a complete search area. This approach is recommended if you ...

The JHeadstart custom approach uses meta-information in JHeadstart to generate the search areas. There is no need to specify anything on the View Object. The generated search areas are composed of multiple page components which can be flexibly arranged. This approach is recommended if you

  • have special layout requirements like organizing search items in groups using tabs
  • need to use the JHeadstart custom List Of Values because of your LOVs need multi-selection, detail disclosure or special layout requirements
  • want to keep your metadata in the JHeadstart application definition instead of in the ADF Business Components
  • want to customize your search

Often it would be nice to have initial values for the search items, but this is not supplied by JHeadstart by default. For this customization I am using the JHeadstart custom search approach to add the functionality.

Based on the idea of the article by Lucas Jellema, "Using default search values on the JHeadstart advanced search", the solution described here shows how to implement default values for search items. In this solution default values can be specified for both Quick and Advanced Search. An additional "Default" button is added to the search region to manually apply the default values. In addition, the default values are always set when the task flow starts, e.g. when it is called from the menu. The final pages will look like these:

Quick Search with Default button: JHSSearchDefaultQS.png

Advanced Search with Default button:JHSSearchDefaultAS.png

The demo application is based on the employee data of the HR demo database schema. Here are the steps to implement it.

  1. Create a new Fusion Web Application (ADF) and define an ADF BC Model with only one updatable View Object, EmployeesView, based on the table EMPLOYEES.

  2. Enable JHeadstart on the ViewController project and create a new Service Definition. Choose the following settings:

    • Quick Search Style=JHeadstart - Dropdown List
    • Advanced Search Style=JHeadstart - Same Page
    • Always use JHeadstart LOVs in JHeadstart search area: Checked

    For the other options accept the default values of the wizard.

  3. Define a default value for the item DepartmentId when creating new rows. This is not the definition of the default value for a search item, but it will be used later in the demo to supply a default value from the generated bean. Open the JHeadstart Application Definition Editor, navigate to the group Employees and select DepartmentId in Items. Enter 50 in the property "Default Display Value".

  4. Generate the application and run it to verify that everything is working.

  5. Now it is time to add search with default values functionality. In this solution the default values will be supplied as managed properties of the search bean generated by JHeadstart. A new custom generator template needs to be registered on the Employees group in the application definition:

      SEARCH_BEAN = custom/misc/facesconfig/EmployeesSearchBean.vm
    

    In the custom generator template the new Search bean class and the managed properties are defined:

    <managed-bean>
      <managed-bean-name>${bean.beanName}</managed-bean-name>
      <managed-bean-class>
        com.jhssearchdemo.controller.jsf.bean.SearchBeanWithDefaultValues
      </managed-bean-class>
      ...
      <managed-property>
        <property-name>advancedSearchDefaultValues</property-name>
        <map-entries>
          <key-class>java.lang.String</key-class>
          <map-entry>
            <key>EmployeesDepartmentId</key>
            <value>
              #{pageFlowScope.EmployeesDefaultValues.defaultValues['DepartmentId']}
            </value>
          </map-entry>
          <map-entry>
            <key>EmployeesHireDate</key>
            <value>10-Apr-1997</value>
          </map-entry>
        </map-entries>
      </managed-property>
      <managed-property>
        <property-name>quickSearchDefaultValues</property-name>
        <map-entries>
          <key-class>java.lang.String</key-class>
          <map-entry>
            <key>EmployeesLastName</key>
            <value>O*</value>
          </map-entry>
        </map-entries>
      </managed-property>
    </managed-bean>
    

    Defining the managed properties as a Map allows using the item name as the key and the default value as the value of each map entry. The item name is the name of the associated control binding generated for the page fragment. The value can be a constant or a reference to another bean. The Advanced Search default value for EmployeesDepartmentId shows an example of how to reference a value from the default value bean for new rows.

    The new enhanced search bean has to supply managed properties to hold the default values and a method to apply them. The new class extends the JHeadstart search bean class JhsSearchBean.

    package com.jhssearchdemo.controller.jsf.bean;

    // imports required by this class; the ADF and JHeadstart package paths
    // shown here are my assumption for 11g - adjust to your installation
    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;

    import javax.annotation.PostConstruct;
    import javax.faces.event.ActionEvent;

    import oracle.adf.model.binding.DCBindingContainer;
    import oracle.jheadstart.controller.jsf.bean.JhsSearchBean;

    public class SearchBeanWithDefaultValues extends JhsSearchBean {
        // Maps of default values
        private Map quickSearchDefaultValues;
        private Map advancedSearchDefaultValues;
        ...  // getter and setters
    
        /**
         * Apply default values to the search bean
         */
        @PostConstruct
        public void applyDefaultSearchValues() {
            sLog.debug("enter applyDefaultSearchvalues");
    
            Map criteria = super.getCriteria();
            sLog.debug("current criteria:" + criteria);
    
            // clear all search criteria
            super.clearSearchCriteria();
    
            DCBindingContainer container = getBindingsInternal();
    
            if (advancedSearchDefaultValues != null) {
                // Apply default values for advanced search
                sLog.debug("set advanced search items:" + 
                  advancedSearchDefaultValues);
                for (String searchItemName :
                     (Set)advancedSearchDefaultValues.keySet()) {
    
                    // copy default value only to items that exists
                    if (findSearchControlBinding(container, searchItemName) !=
                        null) {
                        Object asItemValue =
                            advancedSearchDefaultValues.get(searchItemName);
                        sLog.debug("set default value " + asItemValue +
                                   " for advsearch item: " + searchItemName);
                        criteria.put(searchItemName, asItemValue);
                    } else {
                        sLog.warn("search item for default value doesn't exist: " +
                                  searchItemName);
                    }
                }
                // super.getCriteria().putAll(advancedSearchDefaultValues);
            }
    
            // Apply default value for simple search
            if (quickSearchDefaultValues != null &&
                !quickSearchDefaultValues.isEmpty()) {
                // get first key from Quicksearch Map
                Set keyset = quickSearchDefaultValues.keySet();
                Iterator keyIter = keyset.iterator();
                String qsItem = (String)keyIter.next();
    
                sLog.debug("previous search item was " + getSearchItem());
                if (findSearchControlBinding(container, qsItem) != null) {
    
                    Object qsItemValue = quickSearchDefaultValues.get(qsItem);
    
                    // set Quicksearch item
                    sLog.debug("set quicksearch item: " + qsItem);
                    setSearchItem(qsItem);
                    if (qsItemValue != null && !"".equals(qsItemValue)) {
                        sLog.debug("set quicksearch item value: " + qsItemValue);
                        setSearchText(qsItemValue);
                    }
                } else {
                    sLog.warn("search item for default value doesn't exist: " +
                              qsItem);
                }
            }
            sLog.debug("exit applyDefaultSearchvalues");
        }
    }

    The method applyDefaultSearchValues() clears the existing search criteria, then looks up and applies the default search values from the managed properties. Note the annotation @PostConstruct: JSF fires applyDefaultSearchValues() once all managed properties have been set. As the search bean is pageFlow-scoped, the method is automatically applied every time the page flow is entered, for example from a menu item. This way the default search values are already set when entering the page. The method findSearchControlBinding() checks whether the specified item name is valid (has a control binding with the same name).

  6. To manually invoke the method applyDefaultSearchValues(), add a "Default" button next to the Search button of the Advanced Search. The button has an actionListener that invokes the new method of the search bean. In the JHeadstart Application Definition Editor, customize the template DO_ADVANCED_SEARCH_BUTTON at group (Employees) level. Add the following at the end of the new template custom/button/doAdvancedSearchDefaultButton.vm:

    ...
    <af:commandButton textAndAccessKey="#{nls['FIND_DEFAULT']}"
        actionListener="#{#SEARCH_BEAN().applyDefaultSearchValues}"
        id="${JHS.current.group.shortName}AdvancedSearchDefaultButton"/>
    

    In the same way, create a new customized template DO_QUICK_SEARCH_BUTTON (custom/button/doQuickSearchDefaultButton.vm):

    <af:commandButton textAndAccessKey="#{nls['FIND_DEFAULT']}"
        actionListener="#{#SEARCH_BEAN().applyDefaultSearchValues}"
        id="${JHS.current.group.shortName}QuickSearchDefaultButton"/>
    
  7. Add the button label to the application resource file. Locate the resource file ViewController\src\com\jhssearchdemo\view\ApplicationResources_en.properties and add a new entry:

    FIND_DEFAULT=&Default
  8. Add the actionListener applyDefaultSearchValues to the search bean class. Note that there is already a method applyDefaultSearchValues in the search bean class, but not with the right signature for an action listener. Adding a small wrapper will help:

        /**
         * Wrapper to set default values with action listener
         * @param evt
         */
        public void applyDefaultSearchValues(ActionEvent evt) {
            applyDefaultSearchValues();
        }
    
  9. Generate the application again, compile and run it. Navigate to the Employees tab to see the new button "Default" in the Quick Search. The default search item "Lastname" and value "O*" are also shown. Change the search item and/or value and use the new "Default" button to initialize the search again. Execute the search to verify that it works.

    The same functionality is available in Advanced Search. Once the search is initialized, either manually or by the annotation, both Quick Search and Advanced Search are defaulted, because the method applyDefaultSearchValues() sets the default values for both. If this is not desired, the method can be split into one for Quick Search defaults and one for Advanced Search defaults, with two separate action listeners invoking them.

That's it. You can download a sample workspace that illustrates the solution here.

Categories: Development

Corporate Dashboards

Michael Armstrong-Smith - Fri, 2010-05-28 12:43
During a recent consulting engagement I was asked about dashboards and where one should begin when the boss comes in and says I want a dashboard. I decided what I needed to do was step back and look at the dashboard concept, then explain my understanding in simple terms. I share those thoughts here and invite your comments.


Dashboards are unique to an organization and what works in one place will not be suitable in another. But of course, it all depends on your definition of a dashboard. The one that I like and the one that keeps me out of mischief is this one:

A dashboard or dash board is a panel located under the windscreen containing indicators and dials such as the tachometer / speedometer and odometer. I bet you never thought it was so easy.

Seriously, look again at this definition and you will see the foundations of business dashboards. It is not the dials such as the tachometer, odometer and fuel gauge that are important. It is not the numbers either.

What is really important is the meaning or significance (aka the KPI) that is applied to the numbers. Thus, depending upon the situation, a speed of 100 mph might be considered excessive, particularly if being chased by an irate police officer down a busy city street. Do the same thing on a race track and you might be considered a menace for going too slow. But do 100 mph on an autobahn in Germany and no-one will bat an eyelid because it is perfectly acceptable. You can see that the gauge, in this case the speedometer and the 100 mph reading, is by itself meaningless as a KPI. It is only when you apply the criteria which states that 100 mph must be highlighted in red because it is excessive that a real KPI is born.

The concepts of dashboards in automobiles and in business are the same - they give us a snapshot of critical information at a moment in time. If you happen to be running out of fuel the dashboard will bring this fact to your attention. It does this by turning on a light or sounding a bell when a certain low point in the fuel tank is reached.

The vehicle dashboard needs to provide enough pertinent information so that informed decisions can be made as to how the vehicle is functioning.

Business dashboards need to provide enough pertinent information to the manager or executive so that they can make informed decisions as to how the department or company is functioning. Just like with a vehicle, a corporate dashboard needs to provide all of the critical information that is needed to run the organization's daily operations.

Most corporate dashboards are a snapshot in time, typically midnight, that tell an organization if it is spending cash too fast; or whether the percentage of patients who needed a repeat visit is higher than 5%; or whether the number of requests for service this week exceeded the number from last week by more than 10%. The common factor here is that a rule is being applied to the data to indicate that something needs to be brought to someone's attention.

In a business, you can imagine that every employee has a steering wheel and an accelerator pedal. However, it is not necessary that everyone gets the same dashboard. Since the user roles are different not everyone needs the same level and kind of information. The worker bees need to work, the managers need to manage, and the executives need to improve their golf handicap. Typically, higher executives want to manage by exception and will only become really interested when something out of the ordinary happens.

If an organization is truly managing by exception then it should have a goal to move routine work from the manager to the employee, thus leaving the manager more time to manage. By creating a dashboard that displays the KPIs that the manager is interested in, a quick glance to see that all is green is all that is needed. Good KPIs, and thus good dashboards, reduce micromanagement which is good for everyone involved.

Now that reminds me: golf, anyone?

MDL import/export

Klein Denkraam - Fri, 2010-05-28 03:58

Some time ago, when we moved from OWB 9 to 10, I noticed the size of OWB exports into mdl files dramatically decreased. That was a good thing in that it was easier to distribute the mdl files, e.g. to production sites. It also meant you could not read (or edit) the contents of the mdl files any more. I always assumed the new mdl format was some kind of binary optimized format, but today I read this blog from the OWB guys. It turns out that the 'new' format is just a normal zip file containing 2 files - and that you can specify the export to be done in the 'old' text format. Text you can edit!

It could be a means to move/copy one or more mappings from one project to another. Not easy, as you must 'hack' the mdl file, but it can be done. Neat.


Follow @ORCL_Linux on Twitter

Sergio's Blog - Thu, 2010-05-27 02:06
We've created the following Twitter handles for those of you who like your Oracle Linux and virtualization news in micro chunks:

  • @ORCL_Linux: http://twitter.com/ORCL_linux
  • @ORCL_virtualize: http://twitter.com/ORCL_virtualize
Categories: DBA Blogs

Composite Interval Partitioning isn't as advertised.

Claudia Zeiler - Wed, 2010-05-26 18:37

Oracle® Database VLDB and Partitioning Guide 11g Release 1 (11.1) Part Number B32024-01 says:

Interval Partitioning

Interval partitioning is an extension of range partitioning which instructs the database to automatically create partitions of a specified interval when data inserted into the table exceeds all of the existing range partitions. You must specify at least one range partition.


You can create single-level interval partitioned tables as well as the following composite partitioned tables:

* Interval-range

* Interval-hash

* Interval-list

Sure, I can create these composite partitions, but the results aren't particularly useful. When I tried, Oracle spread my rows nicely between the two hash subpartitions of the manually defined partition, but put everything in the same subpartition for the interval-generated partition. Notice that these are identical sets of rows; the only difference is the key value that forces them into either the manually specified partition or the generated partition. I assume that there is a metalink note on this somewhere.

I got equivalent results for interval-list composite partitioning. I won't bore the reader with the step-by-step for that test (a sketch of its DDL follows), since the outcome is the same: all rows in the generated partitions are forced into one subpartition.
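
For completeness, here is roughly the shape of that interval-list test; the DDL mirrors the interval-hash test below, and the list values are illustrative rather than my exact test script:

-- subpartitions are defined only on the manually created partition,
-- as in the interval-hash test; rows inserted into the auto-created
-- interval partitions all land in a single default subpartition
CREATE TABLE interval_list (
  n  NUMBER,
  n2 NUMBER
)
PARTITION BY RANGE (n) INTERVAL (2)
SUBPARTITION BY LIST (n2)
(PARTITION p1 VALUES LESS THAN (2)
  (SUBPARTITION p_low  VALUES (1, 2, 3, 4, 5, 6, 7),
   SUBPARTITION p_high VALUES (8, 9, 10, 11, 12, 13, 14, 15)
  )
);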

Here are my results for the interval hash test:


SQL> create table interval_hash (
N number,
N2 number
)
partition by range(N) interval (2)
SUBPARTITION BY HASH (N2)
(partition p1 values less than (2)
(SUBPARTITION p_1 ,
SUBPARTITION p_2
));

Table created.

SQL> BEGIN
       FOR i IN 1 .. 15 LOOP
         INSERT INTO interval_hash VALUES (5, i);
         INSERT INTO interval_hash VALUES (0, i);
       END LOOP;
       COMMIT;
     END;
     /

PL/SQL procedure successfully completed.


SQL> EXEC DBMS_STATS.gather_table_stats(USER, 'INTERVAL_HASH', granularity=>'ALL');

PL/SQL procedure successfully completed.


SQL> SELECT table_name, partition_name, subpartition_name, num_rows
FROM user_tab_subpartitions
ORDER by table_name, partition_name, subpartition_name;

TABLE_NAME      PARTITION_NAME   SUBPARTITION_NAME     NUM_ROWS
--------------- ---------------- -------------------- ---------
INTERVAL_HASH   P1               P_1                          6
INTERVAL_HASH   P1               P_2                          9
INTERVAL_HASH   SYS_P138         SYS_SUBP137                 15


SQL> select * from interval_hash subpartition(p_2) order by n2;

N N2
---------- ----------
0 1
0 3
0 4
0 7
0 9
0 10
0 12
0 14
0 15

9 rows selected.

SQL> select * from interval_hash subpartition(p_1) order by n2;

N N2
---------- ----------
0 2
0 5
0 6
0 8
0 11
0 13

6 rows selected.


SQL> select * from interval_hash subpartition(SYS_SUBP137) ORDER BY N2;

N N2
---------- ----------
5 1
5 2
5 3
5 4
5 5
5 6
5 7
5 8
5 9
5 10
5 11
5 12
5 13
5 14
5 15

15 rows selected.
