Feed aggregator


Rob van Wijk - Thu, 2013-11-28 16:06
During a performance inspection of some E-Business Suite batch programs, I noticed almost 99% of the consistent reads missing in the tkprof file for a particular insert statement. It was a giant INSERT ALL statement where the accompanying select statement contained several subqueries in a "case when exists (query)" construct. And those subqueries in the select clause were what

Oracle Linux 6.5

Wim Coekaerts - Wed, 2013-11-27 11:21
Oracle Linux 6.5 has been uploaded to ULN and public-yum. OL6.5 includes UEKr3 based on Linux 3.8 alongside the base 2.6.32-based kernel.

The ISOs are available for download from My Oracle Support and will be available shortly (early December) from edelivery.

The ISOs are also being mirrored to public external mirror sites; one of them is my own mirror site.

Release notes are here.

Dodeca Now Available for Download!

Tim Tow - Mon, 2013-11-25 18:23
This past Friday, November 22nd, we completed our work on the newest version of the Dodeca Spreadsheet Management System and made Dodeca available for download from our website.  This blog entry is a sneak peek at some of the new features in version 6.6, as well as 6.5, which was released to select customers with specific functionality requests.  There are a few features that are particularly useful for end users, so let’s start there.
More Excel Support
Dodeca has always been strong on Excel version support and this version delivers even more Excel functionality.  Internally, we use the SpreadsheetGear control, which does a very good job with Excel compatibility.  This version of Dodeca integrates a new version of SpreadsheetGear that now has support for 398 Excel functions including the new SUMIFS, COUNTIFS, and CELL functions.
Excel Page Setup Dialog
The new version of Dodeca includes our implementation of the Excel Page Setup Dialog which makes it easy for users to customize the printing of Dodeca views that are based on Excel templates.  Note that for report developers, the Excel Page Setup has also been included in the Dodeca Template Designer.

New PDF View Type
Customers who use PDF files in their environments will like the new PDF View Type.  In previous releases of Dodeca, PDF documents displayed in Dodeca opened in an embedded web browser control.  Beginning in this version, Dodeca includes a dedicated PDF View type that uses a specialized PDF control.

View Selector Tooltips
Finally, users will like the new View Selector tooltips which optionally display the name and the description of a report as a tooltip.

Performance is one of those things that users always appreciate, so we have added a new setting that can significantly improve performance in some circumstances.  Dodeca has a well-defined set of configuration objects that are stored on the server and we were even awarded a patent recently for the unique aspects of our metadata design.  That being said, depending on how you implement reports and templates, there is the possibility of having many queries issued to the server to check for configuration updates.  In a few instances, we saw that optimizing the query traffic could be beneficial, so we have implemented the new CheckForMetadataUpdatesFrequencyPolicy property.  This property, which is controlled by the Dodeca administrator, tells Dodeca whether to check the server for updates before any object is used (as was previously the case), only when a view opens, or only when the Dodeca session begins.  We believe the latter case will be very useful when Dodeca is deployed in production as objects configured in production often do not change during the workday and, thus, network traffic can be optimized using this setting.  The screenshot below shows where the administrator can control the update frequency.
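The three checking modes amount to a cache-refresh policy. As a rough illustration (not Dodeca's actual API; the class and policy names here are invented), the behavior can be sketched in Python:

```python
from enum import Enum

class UpdateCheckPolicy(Enum):
    EVERY_USE = 1      # check the server before any object is used (the old behavior)
    VIEW_OPEN = 2      # check only when a view opens
    SESSION_START = 3  # check once, when the session begins

class MetadataCache:
    def __init__(self, policy, fetch):
        self.policy = policy
        self.fetch = fetch          # callable that queries the server
        self.cache = {}
        self.session_checked = False

    def get(self, key, view_opening=False):
        refresh = (
            self.policy is UpdateCheckPolicy.EVERY_USE
            or (self.policy is UpdateCheckPolicy.VIEW_OPEN and view_opening)
            or (self.policy is UpdateCheckPolicy.SESSION_START and not self.session_checked)
        )
        if refresh or key not in self.cache:
            self.cache[key] = self.fetch(key)   # one round trip to the server
            self.session_checked = True
        return self.cache[key]
```

With the session-start policy, repeated reads of the same object generate no further server traffic, which is the production win described above.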

Though users will like these features, we have put a lot of new things in for the people who create Dodeca views and those who administer the system.  Let’s start with something that we think all Dodeca admins will use frequently.
Metadata Property Search Utility
As our customers continue to expand their use of Dodeca, the number of objects they create in the Dodeca environment continues to grow.  In fact, we now have customers who have thousands of different objects that they manage in their Dodeca environments.  The Metadata Property Search Utility will help these users tremendously.

This utility allows the administrator to enter a search string and locate every object in our system that contains that string.  Once a property is located, there is a hyperlink that will navigate to the given object and automatically select the relevant property.  This dialog is modeless, which means you can navigate to any of the located items without closing the dialog.

Note: this version does not search the contents of Excel files in the system.
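Conceptually, such a search is a recursive walk over the metadata tree that records the path to every matching property value; a small Python sketch (the object structure below is invented, not Dodeca's repository format):

```python
def find_property(metadata, needle, path=""):
    """Return the path of every property whose value contains the search string."""
    hits = []
    if isinstance(metadata, dict):
        for key, value in metadata.items():
            hits += find_property(value, needle, f"{path}/{key}")
    elif isinstance(metadata, list):
        for i, value in enumerate(metadata):
            hits += find_property(value, needle, f"{path}[{i}]")
    elif needle.lower() in str(metadata).lower():
        hits.append(path)
    return hits

# hypothetical repository content
objects = {"views": {"Sales": {"template": "SalesTmpl", "desc": "monthly sales"}},
           "templates": {"SalesTmpl": {"sheet": "Sheet1"}}}
hits = find_property(objects, "sales")
```

Each hit is a path an administrator could navigate to, which is essentially what the hyperlinks in the dialog provide.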
Essbase Authentication Services
In the past, when administrators wished to use an Essbase Authentication service to validate a login against Essbase and automatically obtain Dodeca roles based on the Essbase user’s group memberships, they had to use an Essbase connection where all users had access to the Essbase application and database.  The new ValidateCredentialsOnly property on both of the built-in Essbase Authentication services now flags the service to check login credentials at the server-level only, eliminating the need for users to have access to a specific Essbase database.
New Template Designer Tools
Prior to Dodeca 6.x, all template editing was performed directly in Excel.  Since that time, however, most template design functionality has been replicated in the Dodeca Template Designer, and we think it is preferable due to the speed and ease with which users can update templates stored in the Dodeca repository.  We have added a couple of new features to the Template Designer in this version.  The first tool is the Group/Ungroup tool that allows designers to easily apply Excel grouping to rows and/or columns within the template.  The second new tool is the Freeze/Unfreeze tool that is used to freeze rows and/or columns in place for scrolling.
Parameterized SQL Select Statements
Since we introduced the SQLPassthroughDataSet object in the Dodeca 5.x series, we have always supported the idea of tokenized select statements.  In other words, the SQL could be written so that point-of-view selections made by users could be used directly in the select statement.  In a related fashion, we introduced the concept of parameterized insert, update, and delete statements in the same version.  While parameterized statements are similar in concept to tokenized statements, there is one important distinction under the covers.  In Dodeca, parameterized statements are parsed and converted into prepared statements that can be used multiple times, resulting in more efficient use of server resources.  The parameterized select statement was introduced in this version of Dodeca so that customers using certain databases that cache prepared statements can realize improved server efficiency on their select statements.
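The tokenized-versus-parameterized distinction is the same one most database APIs make between string-built SQL and bind variables; here is a minimal sketch using Python's sqlite3 (the table and data are hypothetical, purely to show the two shapes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100.0), ("West", 250.0)])

# Tokenized: the point-of-view selection is substituted into the SQL text,
# so every distinct value yields a distinct statement to parse.
region = "East"
tokenized = f"SELECT amount FROM sales WHERE region = '{region}'"
rows_tokenized = conn.execute(tokenized).fetchall()

# Parameterized: one statement text with a bind placeholder, reusable with
# different values; a server that caches prepared statements parses it once.
parameterized = "SELECT amount FROM sales WHERE region = ?"
rows_east = conn.execute(parameterized, ("East",)).fetchall()
rows_west = conn.execute(parameterized, ("West",)).fetchall()
```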
Workbook Script Formula Editor Improvements
We have also been working hard to improve extensibility for developers using Workbook Scripts within Dodeca.  In this release, our work focused on the Workbook Script Formula Editor.  The first thing we added here is color coding that automatically detects and distinguishes Excel functions, Workbook Script functions, and Dodeca tokens.  In the new version, Excel functions are displayed in green, Dodeca functions and parentheses are displayed in blue, and tokens are displayed in ochre.   Here is an example.

In addition, we have implemented auto-complete for both Excel and Dodeca functions.

New SQLException Event
Version 6.6 of Dodeca introduces a new SQLException event that provides the ability for application developers to customize the behavior when a SQL Exception is encountered.
XCopy Release Directory
Beginning in version 6.6, the Dodeca Framework installation includes a pre-configured directory intended for customers who prefer to distribute their client via XCopy deployment instead of using Microsoft ClickOnce distribution.  The XCopy deployment directory is also for use by those customers who use Citrix for deployment.
Mac OS X Release Directory
The Dodeca Framework installation now includes a pre-compiled Dodeca.app deployment for customers who wish to run the Dodeca Smart Client on Mac OS X operating systems.  What that means is that Dodeca now runs on a Mac without the need for any special Windows emulators.  Dodeca does not require Excel to run on the Mac (nor does it require Excel to run on Windows for that matter), so you can certainly save your company significant licensing fees by choosing Dodeca for your solution. 

In short, you can see we continue to work hard to deliver functionality for Dodeca customers.  As always, the Dodeca Release Notes provide detailed explanations of all new and updated Dodeca features.  As of today, we have decided to make the Release Notes and other technical documents available for download to non-Dodeca customers.  If you are curious about all of the things Dodeca can do, and if you aren't afraid to dig into the details, you can now download our 389-page cumulative Release Notes document from the Dodeca Technical Documents section of our website.

Categories: BI & Warehousing

Browsing the Meta Data Services Repository of the Oracle SOA/BPM Suite 11g

Jan Kettenis - Mon, 2013-11-25 13:09
In this article I explain a handy way to browse the MDS on the SOA/BPM server from JDeveloper, as well as how to download its content using Enterprise Manager, and finally an (as far as I know) undocumented feature to look up artifacts using a browser.

This article has been updated on November 26 to include the option regarding downloading the MDS content.

The Meta Data Services (or MDS for short) of Oracle's SOA/BPM Suite is used to manage various types of artifacts like:
  • Process models created with Process Composer,
  • Abstract WSDL's and XSD's,
  • Domain Value Map's (DVM), and even
  • Artifacts  of deployed composites.
Browsing the MDS from JDeveloper
To find out what actually is deployed in the MDS you can set up an MDS connection within JDeveloper to the server. Such a connection can be handy, for example to verify whether replacements of variables in the MDS artifacts are properly done when deploying. Using this connection you can open those artifacts in JDeveloper and check the source.

To create an MDS connection go to the Resource Palette -> New Connection -> SOA-MDS. This will pop up a tab from which you can create a database connection to the MDS, for example the dev_mds schema. Having created the database connection you have to choose the partition to use for the SOA-MDS connection. To be able to check out processes created with Composer from the MDS, or to save them in the MDS, you create a SOA-MDS connection that uses the obpm partition. As the name already suggests, this is a BPM-specific partition. To browse the other artifacts I mentioned above, you use the soa-infra partition, which is shared by both SOA and BPM.

In the figure below you can see two types of connections, the upper one to the soa-infra and the lower one to the obpm partition. In the (soa-infra) apps folder you can find the reusable artifacts that you have deployed explicitly (like abstract WSDL's, XSD's, EDL's).

What you also see is a deployed-composites folder that shows all composites that have been deployed. When expanding a composite, you will find that all artifacts are shown. This is a much easier way to verify that you do not deploy too many artifacts to the server than by introspecting the SAR file, I would say. Except for .bpmn files (which at the time of writing are not yet recognized by this MDS browser) you can open all plain text files in JDeveloper.

Downloading the MDS from Enterprise Manager
Now let's assume that you have not been given access to the MDS's DB schema on the environment (perhaps because it is Production), but you do have access to the Enterprise Manager. For this situation my dear colleague Subhashini Gumpula pointed me to the possibility of downloading the content from the MDS as follows:

soa-infra -> Administration -> MDS Configuration -> and then on the right side of the screen: Export.

This will download a soa-infra_metadata.zip file with its content!
Looking up Artifacts in the MDS Using a Browser
Now let's assume that you also have not been given access to Enterprise Manager on the environment, but you can access it using the HTTP protocol. Thanks to my dear colleague Luc Gorrisen I recently learned that you can browse it using part of the URL of the composite, as follows:


For example, to look up the abstract WSDL of some ApplicationService that is used by some StudentRegistration business process, I can use the following URL.


Mind you, this is not restricted to only the WSDL's it is using.

Ain't that cool?!

Conditional Borders

Tim Dexter - Mon, 2013-11-25 11:57

How can you conditionally turn cell borders on and off in Publisher's RTF/XSLFO templates? With a little digging you'll find what appear to be the appropriate attributes to update in your template. You would logically come up with using the various border styling options:




Buuuut, that doesn't work. Updating them individually does not make a difference to the output. Not sure why, and I will ask, but for now here's the solution. Use the compound border formatter border-top|bottom|left|right. This takes the form border-bottom="0.5pt solid #000000". You set all three options (width, style, color) at once rather than individually. In a BIP template you use:

<?attribute@incontext:border-bottom;'3.0pt solid #000000'?>
<?attribute@incontext:border-top;'3.0pt solid #000000'?>
<?attribute@incontext:border-left;'3.0pt solid #000000'?>
<?attribute@incontext:border-right;'3.0pt solid #000000'?>
<?end if?>

3pt borders are a little excessive but you get the idea. This approach can be used with the if@row option too to get the complete row borders to update. If your template will need to be run in right-to-left languages, e.g. Arabic or Hebrew, then you will need to use start and end in place of left and right.

For the inquisitive reader, you're maybe wondering: how did this guy know that? And why the heck is this not in the user docs?
Other than my all-knowing BIP guru status ;0) I hit the web for info on XSLFO cell border attributes and then the Template Builder for Word. Particularly the export option; I generated the XSLFO output from a test RTF template and took a look at the attributes. Then I started trying stuff out; I'm a hacker and proud of it! As for the user doc updates, I'll log a request for an update.

Categories: BI & Warehousing

new Puppet 3 Weblogic provisioning module

Edwin Biemond - Sun, 2013-11-24 14:22
The last few weeks I was busy re-writing my Puppet WLS module so it fully supports the power of Puppet 3 (thanks for more than 4000 downloads on Puppet Forge and all the GitHub downloads). With Puppet 3 we can now use Hiera, iterations and lambda expressions. This does not sound like a big change but with Hiera and the new Puppet language features, I can define big WebLogic Domains without

Turkish Hadoop User Group(TRHUG) 2013 meeting

H.Tonguç Yılmaz - Sat, 2013-11-23 14:59
Our first Hadoop User Group meeting will be on Wednesday 11.12.13 (interesting date :) at Istanbul Marmara University Haydarpasa Campus. Hortonworks will be the keynote speaker. Teradata is the sponsor of the meeting. Detailed Agenda To buy a ticket

Role and Tool for Real Time Processing in IoT

Anshu Sharma - Fri, 2013-11-22 13:34

Everyone acknowledges that IoT will improve our productivity by automating mundane tasks like getting relevant information from a machine to a Cloud Service for repair or replenishment. Large numbers of folks also agree that Big Data has an important role to play, for example, crunching data from hundreds of sensors in a turbine and predicting failures. However, relatively few folks understand the role of real-time processing in the Internet of Things. After all, if I have the ability to reduce my usage-based automobile insurance premium, I would appreciate it if, instead of a monthly report, I could get an alert to change my behavior while I am still in the middle of my long drive.

If real-time processing of data was difficult in simple transactional systems, consider the complexity when you are dealing with streaming data and also have to combine multiple data sources, like information about my trip and the data coming in constantly from the telematics module in my car. So you have to learn a new tool. While you might be familiar with Hadoop, you should familiarize yourself with Storm. Storm makes it easy to reliably process unbounded streams of data, doing for realtime processing what Hadoop did for batch processing.
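The essence of such stream processing, keeping rolling state per key as unbounded events arrive and deciding immediately, can be sketched in a few lines of Python (the telematics fields and the speed threshold are invented for illustration):

```python
from collections import deque

class RollingSpeedMonitor:
    """Per-vehicle rolling average over the last N speed readings."""
    def __init__(self, window=3, limit=120):
        self.window = window
        self.limit = limit
        self.readings = {}   # vehicle_id -> recent speeds

    def on_event(self, vehicle_id, speed):
        buf = self.readings.setdefault(vehicle_id, deque(maxlen=self.window))
        buf.append(speed)
        avg = sum(buf) / len(buf)
        # the real-time decision: alert while the trip is still in progress,
        # not in a monthly report
        return "ALERT" if avg > self.limit else "OK"

monitor = RollingSpeedMonitor(window=3, limit=120)
results = [monitor.on_event("car-1", s) for s in (100, 130, 140, 90)]
```

Frameworks like Storm or OEP handle the hard parts this toy omits: distribution, fault tolerance, and guaranteed processing of each event.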

Oracle's solution for building application to process streaming data in real time is Oracle Event Processing. Oracle also offers a lighter weight version of OEP for Embedded Development on ARM and x86 devices. Such devices include Home Automation Gateways, Smart Meter Aggregators, Healthcare Hubs, Industrial Gateways. Yes, if you can take decision without needing any more data from the backend, then why not do it at the Gateway/Aggregator level and not send all the streaming data back to the data center. You might still need to send the data back for other use cases, but at least your real time decision making latency is reduced considerably. 

So what are the strengths of OEP? Here are some key points:

1) OEP provides infrastructure and guidelines for developers to create scalable and highly available Event processing networks (EPN).

2) OEP provides out of the box Adapters for easily consuming data coming in different protocols and formats. Of course you are free to write your own adaptors.

3) Oracle CQL is a rich formalized Query Language optimized for concise expression of temporal queries similar to how SQL is optimized for expression of static data queries. Out of the box functions augment the core language. 

4) Oracle CQL is augmented by a data cartridge framework allowing you to tightly integrate other domain data types and functions. The following data cartridges are available – Oracle Java, Oracle Spatial, Oracle JDBC, Hadoop. The Oracle JDBC cartridge allows you to write applications merging streaming data with geo-fencing functions available from Oracle Spatial or the predictive analytics capabilities of Oracle Advanced Analytics.

5) Pre-integration with Oracle Coherence for local and distributed caching. You can integrate a cache system with your Oracle Event Processing application so that the cache is available as a source or destination for data your application uses, including event data. Integrating a cache can provide access to relatively static data at a speed that is well suited to an application that handles streaming data.

6) Persist the events that flow out of a component of the EPN to a store and then play them back at a later stage. This helps debug a problem with a currently running application. If you have been recording the events at a node in the EPN when the problem occurred, you can later playback the same list of events to recreate the problem scenario for debugging purposes.

7) OEP Visualizer is a rich Web 2.0 application that displays OEP Data in a useful and intuitive way for admins e.g. latency and throughput graphs display the amount of time it takes an event to pass through the specified stage or path in the EPN or the number of events passing through, respectively. EPN Configurations can be changed through the Visualizer. All this can also be done programmatically through available JMX APIs.

8) OEP architecture is highly modular based on OSGi. Adapters and other components involved in implementing data feeds can be dynamically configured/activated/suspended via a lifecycle API (SuspendableBean, ResumableBean). Changes can be made dynamically to CQL queries. In business implementations, business rules are very dynamic. This also limits testing required for changes.

9) OEP integrates well with Oracle SOA.  It is available as a part of Oracle SOA Suite and can be used in SOA Composites. 
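The record-and-playback facility in point 6 is a general debugging pattern, independent of OEP; a toy Python version (not the OEP API) shows the idea:

```python
class RecordingNode:
    """Wraps an event processor so its input stream can be replayed later."""
    def __init__(self, processor):
        self.processor = processor   # the real event-handling function
        self.recorded = []

    def process(self, event):
        self.recorded.append(event)  # persist the event before handling it
        return self.processor(event)

    def playback(self):
        # re-run the exact event sequence to reproduce a problem scenario
        return [self.processor(e) for e in self.recorded]

node = RecordingNode(lambda e: e * 2)
live = [node.process(e) for e in (1, 2, 3)]
replayed = node.playback()
```

Because the replayed stream is identical to the live one, a failure seen in production can be reproduced deterministically on a developer's machine.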

So explore the role of real time processing in your IoT application with OEP and share with us what cool application you came up with. Fast Data enabled by OEP brings more value to high velocity sensor/device data. This enables new services, better customer experience, improved efficiency, and higher quality in operations.

Using Oozie in Kerberized Cluster

Chen Shapira - Fri, 2013-11-22 09:14

In general, most Hadoop ecosystem tools work rather transparently in a kerberized cluster. Most of the time things “just work”. This includes Oozie. Still, when things don’t “just work”, they tend to fail with slightly alarming and highly ambiguous error messages. Here are a few tips for using Oozie when your Hadoop cluster is kerberized. Note that this is a client/user guide. I assume you already followed the documentation on how to configure the Oozie server in the kerberized cluster (or you are using Cloudera Manager, which magically configures it for you).

    1. As always, use “kinit” to authenticate with Kerberos and get your tgt before trying to run oozie commands. Verify with klist. Failure to do this will result in “No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)”
    2. I always enable security debug messages. This helps me troubleshoot, and also helps when I need to ask support/mailing list for help.
      export HADOOP_ROOT_LOGGER=TRACE,console;
      export HADOOP_JAAS_DEBUG=true;
      export HADOOP_OPTS="-Dsun.security.krb5.debug=true"
    3. Your Oozie command typically contains a URL. Something like “oozie job -oozie http://myserver:11000/oozie -config job.properties -run”. The server name in the URL must match an existing principal name in Kerberos. If your principals are actually “myserver.mydomain.com” make sure you use that in the URL.
    4. If you decide to use CURL to connect to your Oozie server, either for troubleshooting or for using the REST API, don’t forget to use “--negotiate -u foo:bar”. The actual username and password don’t matter (you are authenticating with your Kerberos ticket), but CURL throws a fit if they don’t exist.
    5. If you have Hive action in your Oozie workflow, you need to define and use credentials. Here’s an example:
      <workflow-app xmlns="uri:oozie:workflow:0.2.5" name="example-wf">
          <credentials>
              <credential name='hive_credentials' type='hcat'>
                  <!-- the hcat.metastore.uri and hcat.metastore.principal properties go here -->
              </credential>
          </credentials>
          <start to="hive-example"/>
          <action name="hive-example" cred="hive_credentials">
              <hive xmlns="uri:oozie:hive-action:0.2">
                  <!-- job-tracker, name-node and script elements go here -->
              </hive>
              <ok to="end"/>
              <error to="fail"/>
          </action>
          <kill name="fail">
              <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
          </kill>
          <end name="end"/>
      </workflow-app>
    6. To make step #5 actually work (i.e. allow Oozie to run Hive actions), you will also need to do the following in CM:
      – go to “HDFS Service->Configuration->Service-Wide->Advanced->Cluster-wide Configuration Safety Valve for core-site.xml” and add:


      – go to “Oozie service->Configuration->Oozie Server(default)->Advanced-> Oozie Server Configuration Safety Valve for oozie-site.xml” and add:


      – Deploy client configuration and restart Hive service and Oozie service.

    7. Oozie doesn’t kinit a user for you on the node it’s launching the action on, and it doesn’t move principals and tickets around. Instead it uses delegation tokens. If you want to authenticate to Hadoop inside a shell or java action, you’ll need to use the same tokens.

      In a shell action, it will be something like:

      	# on its own, this will fail to authenticate inside the launcher job:
      	hive -e "select x from test" -S
      	# instead, point Hive at the delegation token file Oozie provides:
      	hive -e "SET mapreduce.job.credentials.binary=$HADOOP_TOKEN_FILE_LOCATION; select x from test" -S

      In Java it will be:

      if (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) {
          jobConf.set("mapreduce.job.credentials.binary",
                      System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
      }

    Hope this helps! Feel free to comment with questions, especially if you ran into errors that I did not address. I’ll be happy to add more tips to the list.

Categories: DBA Blogs

Desktop Testing XSL

Tim Dexter - Thu, 2013-11-21 23:05

Bit of a corner case this week but I wanted to park this as much for my reference as yours. Need to be able to test a pure XSL template against some sample data? That's an XSL template that is going to generate HTML, text or XML. The Template Viewer app in the BI Publisher Desktop group does not offer that as an option. It does offer XSL-FO processing though.

A few minutes digging around in the java libraries and I came up with a command line solution that is easy to set up and use.

1. Place your sample XML data and the XSL template in a directory
2. Open the lib directory where the TemplateViewer is installed. On my machine that is d:\Oracle\BIPDesktop\TemplateViewer\lib
3. Copy the xmlparserv2.jar file into the directory created in step 1.
4. Use the following command in a DOS/Shell window to process the XSL template against the XML data.

java -cp ./xmlparserv2.jar oracle.xml.parser.v2.oraxsl fileX.xml fileY.xsl > fileX.xls

The file generated will depend on your XSL. For an Excel output, you would instruct the process to generate fileX.xls in the same folder. You can then test the file with Excel, a browser or a text editor. Now you can test on the desktop until you get it right without the overhead of having to load it to the server each time.

To be completely clear, this approach is for pure XSL templates that are designed to generate text, html or xml. It's not for the XSLFO templates that might be used at runtime to generate PDF, PPT, etc. For those you should use the Template Viewer application; it supports the XSLFO templates but not the pure XSL templates.

If your template falls into the pure XSL template category but still fails to process, this will be down to you using some BIP functionality in the template. To get it to work you'll need to add the Publisher libraries that contain the functions, e.g. xdo-core.jar, i18nAPI_v3.jar, etc., to the classpath argument (-cp).

So a new command including the required libraries might look like:

java -cp ./xmlparserv2.jar;./xdo-core.jar;./i18nAPI_v3.jar 
                            oracle.xml.parser.v2.oraxsl fileX.xml fileY.xsl > fileX.xls

You will need to either move the libraries to the local directory, my assumption above, or include the full path to them. Note that “;” is the Windows classpath separator; on *nix use “:” instead. More info here on setting the -cp attribute.

Categories: BI & Warehousing

Save $250 on Kscope14 Registration Now!

Tim Tow - Thu, 2013-11-21 17:17
If you work in the Essbase, Oracle EPM, or Oracle BI world, *the* place to be every June is the annual Kscope conference.  Registration is open for the next conference, Kscope14, coming next June in Seattle, WA.  If you are not currently a full ODTUG member, let me tell you how you can save $250 on the $1650 registration fee.

There are two steps you have to take to "save big".  First, become a full member of ODTUG for $99 and enjoy all of the benefits, including access to a members-only presentations library, throughout the year.  Next, register for Kscope14 and you are eligible for the members-only price of $1500 for a savings of $150.  While you are registering, simply use the code AOLAP to get an additional $100 discount!
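Spelled out, the arithmetic behind the headline number (prices as quoted above):

```python
nonmember_registration = 1650   # full registration fee
member_registration = 1500      # members-only price, $150 off
aolap_code_discount = 100       # additional discount with code AOLAP

amount_paid = member_registration - aolap_code_discount
registration_savings = nonmember_registration - amount_paid
```

registration_savings comes to the advertised $250 off the registration fee; the $99 membership is paid separately and carries its own year-round benefits.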

My company, Applied OLAP, is one of the top-tier Platinum Sponsors of Kscope14 and I will be there.  I hope to see you at the conference and, if you were able to save some money by using our exclusive AOLAP code, be sure to stop by our booth, say hello, and learn how the Dodeca Spreadsheet Management System can help your company reduce spreadsheet risk, increase spreadsheet accuracy, and reduce costs.
Categories: BI & Warehousing

Upgrading the JasperReports libraries to 5.2.0

Dietmar Aust - Thu, 2013-11-21 09:41
Would you like to upgrade your existing JasperReportsIntegration with the 5.2.0 libraries of JasperReports?

As of now this is already not the most current version; JasperReports has moved on to 5.5.0. But since I created the 5.2.0 package a while ago, I might just post the instructions here. The 5.5.0 version will follow.

Here you go ...

This is an updated version of the original instructions found here: http://daust.blogspot.de/2013/01/upgrading-jasperreports-libraries-to-501.html

Step 1: Download the library files for 5.2.0
You can download the files here:

Step 2: Shut down the Apache Tomcat J2EE server
Step 3: Remove the existing JasperReports libraries from your existing installation
Typically, after you have installed your previous version of the JasperReportsIntegration toolkit on your Apache Tomcat J2EE server, the files will be located in the directory $CATALINA_HOME/webapps/JasperReportsIntegration/WEB-INF/lib, for example version 4.7.0 of JasperReports, where $CATALINA_HOME represents the path to your installation of Tomcat.

Then you would have to remove these libraries first. In this directory you should find two files for removal: _jasper-reports-delete-libs-4.7.0.sh and _jasper-reports-delete-libs-4.7.0.cmd, for *nix or Windows respectively. For *nix systems you would have to make it executable, though, e.g.: chmod u+x _jasper-reports-delete-libs-4.7.0.sh . Then you can call it and it will remove all files for version 4.7.0. But it will NOT remove the file for the JasperReportsIntegration and all other libraries which YOU might have placed there deliberately.

You can always find the required removal scripts here: http://www.opal-consulting.de/downloads/free_tools/JasperReportsLibraries/ .

Whenever I release another package, the removal scripts for that package will be shipped as well.

Step 4: Install the new 5.2.0 libraries
Now you can just copy the new libraries from JasperReportsLibraries-5.2.0.zip into $CATALINA_HOME/webapps/JasperReportsIntegration/WEB-INF/lib.
 Step 5: Start the Apache Tomcat J2EE server again
Now your system should be upgraded to the most current JasperReports 5.2.0!

Just drop me a note when you need updated libraries for a later JasperReports version, etc. I have scripts in place to create a new package of the libraries.

Here you can find the notes from my upgrade (4.5.0 => 4.8.0) on Linux to illustrate the process, I hope it makes sense:

** download the libraries from:
**  http://www.opal-consulting.de/downloads/free_tools/JasperReportsLibraries/4.8.0/JasperReportsLibraries-4.8.0.zip
** to /home/jasper/JasperReportsLibraries

cd /home/jasper
mkdir JasperReportsLibraries

** unzip them
cd JasperReportsLibraries
unzip JasperReportsLibraries-4.8.0.zip -d JasperReportsLibraries-4.8.0

** stop tomcat server

** remove libraries of current jasper reports release
cd /home/jasper/tomcat/webapps/JasperReportsIntegration/WEB-INF/lib
chmod +x _jasper-reports-delete-libs-4.5.0.sh
dos2unix _jasper-reports-delete-libs-4.5.0.sh


** copy libraries of the new release to the WEB-INF/lib directory
cp /home/jasper/JasperReportsLibraries/JasperReportsLibraries-4.8.0/* /home/jasper/tomcat/webapps/JasperReportsIntegration/WEB-INF/lib

** restart tomcat


How to Make File-Based MDS in JDeveloper Work for both Windows and Linux

Jan Kettenis - Wed, 2013-11-20 10:52
In this article I explain how you can modify the JDeveloper adf-config.xml file to make it work for both Windows and Linux.

If in a JDeveloper application you point to artifacts in a file-based MDS residing in a Windows folder, for example "d:\projects\MDS", then JDeveloper will create a new entry in the adf-config.xml file (that you can find in Application Resources -> Descriptors -> ADF META-INF) pointing to the absolute location of that folder:

The problem with this is that if you have colleagues who use Linux instead of Windows, it's not possible to make it work for both by defining some relative location (as you could do with ant), like "../../projects/MDS". So now what?

As in most cases, the solution is so simple that I successfully missed it many times: use an environment variable! What worked was creating an environment variable named MDS_HOME, after which I could use it like this:
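For illustration, such an entry might look roughly like the fragment below. Only the `${MDS_HOME}` reference comes from the article; the id, class name, and partition name are assumptions based on a typical file-store configuration:

```xml
<!-- adf-config.xml: file-based metadata store pointing at the MDS_HOME
     environment variable instead of an absolute Windows path.
     The id, class-name, and partition-name here are illustrative. -->
<metadata-store-usage id="mdsRepos" default-cust-store="false">
  <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
    <property name="metadata-path" value="${MDS_HOME}"/>
    <property name="partition-name" value="soa"/>
  </metadata-store>
</metadata-store-usage>
```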

Problem solved! Such an environment variable can be created on Windows as well as Linux.

I have not yet fully tested whether this works for everything, like deploying with ant, but if it doesn't I expect you can easily fix that by adding a property "mds_home" to the ant-sca-compile.xml file in your JDeveloper installation folder as well.

Customizing (or rather Hacking) Oracle BPM 11g Business Exceptions

Jan Kettenis - Wed, 2013-11-20 02:05
In this article I explain how you can add custom attributes to Oracle BPM 11g business exceptions. Mind that this is not officially supported.

One of the fun things of giving a training like the Advanced BPM Suite 11g course that I'm running now, is that students ask questions to which you don't know the answer. But hey, you are the teacher, and that won't do, so off you go!

One question asked yesterday was whether it is possible to have more attributes on a business exception than just "errorInfo". The first question should be: why do you want that? Well, let's assume that at some higher level you want access to context information that is only available where the exception is thrown, like a local variable. Of course you can concatenate all the info into one long semicolon-separated string or something, but then you probably need other logic to turn that back into something readable.

If you look closely at the definition of a business exception, you will notice that it uses a generated WSDL and XSD. What I did was add an extra element "name" to the XSD of a business exception named FailProcess (created for the previous blog article), as follows:
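By way of illustration, the change amounts to something like the following. Only "errorInfo" and the added "name" element come from the article; the surrounding structure is an assumption about what the generated XSD looks like:

```xml
<!-- Generated XSD for the FailProcess business exception;
     the extra "name" element is the manual (unsupported) addition. -->
<xsd:element name="FailProcess">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="errorInfo" type="xsd:string"/>
      <xsd:element name="name" type="xsd:string"/> <!-- manually added -->
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```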

I then restarted JDeveloper to see what would happen, and not totally to my surprise: nothing! As in: it works! I also tried it run-time, and no problem there either, as you can see in the following figure:

One warning though. As the generated XSD clearly states, it should not be altered manually, and any change may be overwritten. It obviously is not a supported hack I show you here, and you should expect that any change of the business exception via the UI will break your code. Therefore let's hope that changes like this are supported by JDeveloper in some next version.

Why you may consider NOT using the Terminate End Event in Oracle BPM Suite

Jan Kettenis - Tue, 2013-11-19 10:47
In this article I explain why you should try to avoid using the Terminate End Event in Oracle BPM 11g.

According to the BPMN specification, the Terminate End Event is supposed to terminate a process instance at the level at which it is raised, including any ongoing activity of sub-processes. But it should not terminate any higher-level (parent) process.

With OBPM 11g it works differently (at least up to PS6): raising a Terminate End Event will actually terminate the composite's instance. Except for human tasks, because, as clearly stated in the documentation: "Human tasks are independent from BPMN processes. If you terminate a BPMN process while it runs a user task, the associated human tasks keeps running independently". The reason is that the human workflow engine runs separately from the BPM engine.

In the process below this results in the situation that the Review Data task still shows in the Workspace, while the associated process instance is already terminated. That was not the original intention of this model.

This flow shows what actually happens:

Conclusion: before using the Terminate End Event, be well aware of its behavior. Consider alternative ways of modeling, like raising a business exception that is caught by the parent process, which then uses an Update Task activity to withdraw all human tasks before actually ending. In this way you can prevent anyone from accidentally doing work for nothing.

Such a model would look like this:

Resizing MGMT_TABLESPACE in Grid Control 12c

Bas Klaassen - Mon, 2013-11-18 07:06
We noticed the MGMT_TABLESPACE in our 12c Grid Control instance was very big: almost 380 GB of data was stored in this tablespace. Because of all the targets registered in this Grid Control instance and its frequent usage, I did not think anything was wrong. Until someone looked into this and found out it was a bug in 12c... Note 1502370.1 describes this bug and also the
Categories: APPS Blogs

A day in Kunming China

Arvind Jain - Sun, 2013-11-17 09:08
Kunming is an important location for transit passengers traveling via China Eastern. I had an unplanned 24-hour stay and I am glad that I found reasonable accommodation. So for those looking for a simple, straightforward transit stay in Kunming, Dhaka Hotel (Ph# 135 2933 2392) is a good option if you are on a budget. They have WiFi, airport transfer and South Asian food options.

Creating your own Virtualbox Development Images

Edwin Biemond - Sat, 2013-11-16 15:11
For my Oracle Puppet provisioning development I can't do without these image-creation tools: Packer and Vagrant, in combination with Oracle VirtualBox or VMware. In this blogpost I will explain what these tools can do for you, how you can make your own images, and how to use Puppet as the provisioning tool. With Vagrant you can create your own virtual images and it can start Puppet or Chef to do all the

Playing with Oracle Spatial - An important consideration when using SDO_NN operator

Mihajlo Tekic - Fri, 2013-11-15 23:27

I've been playing with Oracle Spatial quite a bit lately and ran into some interesting things that I  plan to write about in the next couple of posts.

This post covers an important consideration that I think one should make when using the SDO_NN spatial operator with the sdo_num_res parameter.

But first, just briefly on the operator itself: the SDO_NN operator is used to identify the nearest neighbors of a particular geometry. The full operator synopsis can be found in the official documentation.

Please note, the database I use in the examples below is Oracle Enterprise Edition with no additional patches applied and default Oracle Spatial settings.

Let's consider the following example, there is a table RESTAURANTS having only an id and a geometry column (point).

 CREATE TABLE restaurants
 ( id    NUMBER NOT NULL,
   point SDO_GEOMETRY
 );

ALTER TABLE restaurants ADD CONSTRAINT restaurants_pk PRIMARY KEY (id);

Let's register the spatial metadata and create a spatial index on the restaurants.point column:

 INSERT INTO user_sdo_geom_metadata
 VALUES
 ( 'restaurants'
 , 'point'
 , SDO_DIM_ARRAY(
     SDO_DIM_ELEMENT('X', 0, 10000000, 0.005),
     SDO_DIM_ELEMENT('Y', 0, 10, 0.005))
 , null -- SRID
 );

CREATE INDEX restaurants_sidx ON restaurants(point) INDEXTYPE IS MDSYS.SPATIAL_INDEX;

For simplicity, let's say there are 500,000 restaurants distributed along a single line (the x-axis of a two-dimensional coordinate system; y=0, x=1,2,3,...,500,000).

INSERT INTO restaurants
(SELECT rownum --id
 , SDO_GEOMETRY(
   2001 -- 2 dimensional point
 , null -- SDO SRID
 , SDO_POINT_TYPE(rownum,0,null) --x=rownum, y=0
 , null
 , null
 )
 FROM dual CONNECT BY rownum<=500000
);
In order to find the first 5 nearest neighbors to point (10,0) one could use the following spatial query:

SQL> SELECT p.id, sdo_nn_distance(1) distance
  2  FROM restaurants p
  3  WHERE SDO_NN(p.point,
  4               SDO_GEOMETRY(
  5               2001 -- 2 dimensional point
  6               , null
  7               , SDO_POINT_TYPE(10,0,null)
  8               , null
  9               , null
 10               ),
 11               'sdo_num_res=5',
 12               1
 13               )='TRUE'
 14* ORDER BY 2;

        ID   DISTANCE
---------- ----------
        10          0
         9          1
        11          1
        12          2
         8          2


Now suppose each restaurant has 5 menu items, stored in the MENU_ITEMS table. The table has 4 columns: a unique identifier, id; a reference to the RESTAURANTS table, restaurants_id; an identifier of the item within the restaurant menu, item_number; and lastly a description, a 100-character column that I use in this example to make the table a bit bigger.

CREATE TABLE menu_items
( id             NUMBER NOT NULL,
  restaurants_id NUMBER NOT NULL,
  item_number    NUMBER NOT NULL,
  description    VARCHAR2(100)
);

ALTER TABLE menu_items ADD CONSTRAINT menu_items_pk PRIMARY KEY (id);

ALTER TABLE menu_items ADD CONSTRAINT menu_items_rest_fk FOREIGN KEY (restaurants_id) REFERENCES restaurants(id);

CREATE INDEX menu_items_restaurants_idx ON menu_items(restaurants_id);

INSERT INTO menu_items
(SELECT rownum
 , b.rn
 , a.rn
 , rpad(rownum,100,'A')
 FROM
   (SELECT rownum rn FROM dual CONNECT BY rownum<=5) a, --5 menu items per rest.
   (SELECT rownum rn FROM dual CONNECT BY rownum<=500000) b
);


Suppose you want to get the menu items of the 5 nearest restaurants to a particular location, (10,0) for example. One way would be the following spatial query (but first, of course, collect the necessary statistics):

exec DBMS_STATS.GATHER_TABLE_STATS(ownname=>user, tabname=>'menu_items', cascade=>TRUE, method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO');

SELECT t1.id
, t2.id
, t2.restaurants_id
, t2.item_number
FROM restaurants t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
AND SDO_NN(t1.point,
           SDO_GEOMETRY(
           2001 -- 2 dimensional point
         , null
         , SDO_POINT_TYPE(10,0,null)
         , null
         , null
           ),
           'sdo_num_res=5',
           1
           )='TRUE'
ORDER BY t1.id, t2.item_number;

The query produces the desired result. However, let's look at how it performed. I re-ran the query, this time using the gather_plan_statistics hint to get its execution statistics.

SQL>SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
  2  FROM restaurants t1
  3  , menu_items t2
  4  WHERE t1.id=t2.restaurants_id
  5  AND SDO_NN(t1.point,
  6             SDO_GEOMETRY(
  7             2001 -- 2 dimensional point
  8             , null
  9             , SDO_POINT_TYPE(10,0,null)
 10             , null
 11             , null
 12             ),
 13             'sdo_num_res=5',
 14             1
 15             )='TRUE'
 16  ORDER BY t1.id, t2.item_number;

        ID         ID RESTAURANTS_ID ITEM_NUMBER
---------- ---------- -------------- -----------
8 8 8 1
8 500008 8 2
8 1000008 8 3
8 1500008 8 4
8 2000008 8 5
9 9 9 1
9 500009 9 2
9 1000009 9 3
9 1500009 9 4
9 2000009 9 5
10 10 10 1
10 500010 10 2
10 1000010 10 3
10 1500010 10 4
10 2000010 10 5
11 11 11 1
11 500011 11 2
11 1000011 11 3
11 1500011 11 4
11 2000011 11 5
12 12 12 1
12 500012 12 2
12 1000012 12 3
12 1500012 12 4
12 2000012 12 5

25 rows selected.

The query took 1.32 seconds to complete as you can see from the execution statistics:

SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

SQL_ID gwpqub3k0awqm, child number 0
SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id,
t2.item_number FROM restaurants t1 , menu_items t2 WHERE
t1.id=t2.restaurants_id AND SDO_NN(t1.point, SDO_GEOMETRY(
2001 -- 2 dimensional point , null ,
SDO_POINT_TYPE(10,0,null) , null , null ),
'sdo_num_res=5', 1 )='TRUE' ORDER BY t1.id, t2.item_number

Plan hash value: 2076547507

| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:01.32 | 41540 | 41465 | | | |
| 1 | SORT ORDER BY | | 1 | 24678 | 25 |00:00:01.32 | 41540 | 41465 | 2048 | 2048 | 2048 (0)|
|* 2 | HASH JOIN | | 1 | 24678 | 25 |00:00:01.32 | 41540 | 41465 | 1517K| 1517K| 886K (0)|
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5000 | 5 |00:00:00.01 | 64 | 0 | | | |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 63 | 0 | | | |
| 5 | TABLE ACCESS FULL | MENU_ITEMS | 1 | 2500K| 2500K|00:00:00.62 | 41476 | 41465 | | | |

Predicate Information (identified by operation id):

2 - access("T1"."ID"="T2"."RESTAURANTS_ID")
   4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')

29 rows selected.


The problem

Well, one could say that 1.32 seconds is not that bad. However, if you look at the plan more carefully you can notice that the CBO used a HASH JOIN to join the RESTAURANTS and MENU_ITEMS tables, and as a result it performed a full table scan on MENU_ITEMS. Now imagine if MENU_ITEMS were quite big (suppose you had stored the items of all restaurants across the US)!

What made the CBO pick a HASH JOIN? Look at the estimated (5,000) vs. actual (5) rows selected from the RESTAURANTS table. We were only after the 5 nearest neighbors, and we used sdo_num_res=5 in our SDO_NN call, so one could expect the CBO to recognize that fact and estimate a cardinality of 5. But no, the CBO applied a selectivity of 1% (500,000 restaurants x 1% = 5,000).

Why selectivity of 1%?

SDO_NN is an operator bound to the implementation of the function NN, which is part of the PRVT_IDX package owned by MDSYS.

By default, statistics for the PRVT_IDX package are defined by the SDO_STATISTICS type. That is, the Extensible Optimizer feature is used to define how cardinality and cost are calculated for the functions/procedures defined in the package. (If no statistics are associated, the CBO uses a default selectivity of 1%.)

It seems the logic implemented in the ODCIStatsSelectivity() function is not good enough to detect that we are only after the first N rows, as defined by the SDO_NUM_RES parameter.

You can clearly see this in a 10053 trace file, in the section where the MDSYS.SDO_STATISTICS.ODCIStatsSelectivity procedure is used to calculate selectivity:

MDSYS.SDO_STATISTICS.ODCIStatsFunctionCost returned:
CPUCost : 100000000000000
IOCost : 100000000000000
NetworkCost : -1
Calling user-defined selectivity function...
predicate: "MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE'
sel number;

:1 := "MDSYS"."SDO_STATISTICS".ODCIStatsSelectivity(
if sel IS NULL then
:2 := -1.0;
:2 := sel;
end if;
when others then
Bind :3 Value 'TRUE'
Bind :4 Value 'TRUE'
Bind :5 Value 'sdo_num_res=5'
ODCIEnv Bind :6 Value 0
ODCIEnv Bind :7 Value 0
ODCIEnv Bind :8 Value 0
ODCIEnv Bind :9 Value 3
MDSYS.SDO_STATISTICS.ODCIStatsSelectivity returned selectivity: 1.00000000%
Table: RESTAURANTS Alias: T1
Card: Original: 500000.000000 Rounded: 5000 Computed: 5000.00 Non Adjusted: 5000.00

If statistics are not associated, the CBO would also use a selectivity of 1%. So what is the difference? Under what circumstances could one take advantage of the Extensible Optimizer feature and generate better selectivity when the SDO_NN operator is used along with the sdo_num_res parameter? I couldn't find any article or documentation that helped me answer these questions.

What I do know, however, is that it shouldn't be that difficult to implement logic that would make the SDO_STATISTICS.ODCIStatsSelectivity procedure generate more accurate numbers.

As a matter of fact, I played a bit with this.

Disclaimer: Please be aware the following examples are for demonstration purposes only and shouldn't be used in an actual, supported, database environment. The logic I show below is quite simple and is for illustration only. Please contact Oracle Support if you want to implement something similar in your environment.

I created a simple type named SDO_STATISTICS_CUSTOM under the MDSYS schema using the Extensible Optimizer interface (link).

I implemented the ODCIStatsSelectivity procedure as shown below (one more time: this implementation only demonstrates that, with the appropriate logic, ODCIStatsSelectivity can return the desired values):

STATIC FUNCTION ODCIStatsSelectivity (
    pred_info IN SYS.ODCIPredInfo,
    sel OUT NUMBER,          -- selectivity in percent
    args IN SYS.ODCIArgDescList,
    p_start IN VARCHAR2,
    p_stop IN VARCHAR2,
    param IN VARCHAR2
    -- remaining operator-argument parameters omitted
) RETURN NUMBER IS
BEGIN
  if param IS NULL then
    sel := 1;  -- default 1% selectivity
  else if lower(substr(param,1,instr(param,'=')-1)) = 'sdo_num_res' then
    SELECT decode(num_rows, null, 1,
                  (to_number(substr(param, instr(param,'=')+1)) / num_rows) * 100)
    INTO sel
    FROM dba_tables WHERE owner=args(3).TABLESCHEMA AND table_name=args(3).TABLENAME;

    if sel>100 then
      sel:=100;
    end if;

  end if;
  end if;

  RETURN ODCIConst.success;
END ODCIStatsSelectivity;

The logic I chose is simple: if the sdo_num_res parameter is defined, get the number of rows from the statistics available for the table, and set the selectivity variable (sel) to (sdo_num_res/num_rows)*100.
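The same rule can be sketched outside the database. The following Python snippet is an illustration only (it is not part of the database code); it mirrors the parse-and-clamp logic described above:

```python
# Sketch of the custom selectivity rule: parse the sdo_num_res value from
# the SDO_NN parameter string and express it as a percentage of the table's
# row count, clamped to 100 (mirroring the PL/SQL above).
def sdo_nn_selectivity(param: str, num_rows: int) -> float:
    if not param:
        return 1.0                      # no parameter string -> default 1%
    name, sep, value = param.partition("=")
    if sep == "" or name.strip().lower() != "sdo_num_res":
        return 1.0                      # not sdo_num_res -> default 1%
    if not num_rows:
        return 1.0                      # no table statistics -> default 1%
    sel = int(value) * 100 / num_rows   # (sdo_num_res / num_rows) * 100
    return min(sel, 100.0)              # selectivity cannot exceed 100%

# 5 neighbors out of 500,000 rows -> 0.001%, the value seen in the 10053 trace
print(sdo_nn_selectivity("sdo_num_res=5", 500_000))   # 0.001
```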

Let's see how it works:

SQL> l
  1  EXPLAIN PLAN FOR
  2  SELECT t1.id, t2.id, t2.restaurants_id, t2.item_number
  3  FROM restaurants t1
  4  , menu_items t2
  5  WHERE t1.id=t2.restaurants_id
  6  AND SDO_NN(t1.point,
  7             SDO_GEOMETRY(
  8             2001 -- 2 dimensional point
  9             , null
 10             , SDO_POINT_TYPE(10,0,null)
 11             , null
 12             , null
 13             ),
 14             'sdo_num_res=5',
 15             1
 16             )='TRUE'
 17* ORDER BY t1.id, t2.item_number
SQL> /


SQL> set line 200 pagesize 9999
SQL> SELECT * FROM table(dbms_xplan.display);

Plan hash value: 2341889131

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 25 | 1000 | 36 (3)| 00:00:01 |
| 1 | SORT ORDER BY | | 25 | 1000 | 36 (3)| 00:00:01 |
| 2 | NESTED LOOPS | | 25 | 1000 | 35 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 5 | 80 | 0 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| MENU_ITEMS | 5 | 120 | 7 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | | 2 (0)| 00:00:01 |

Predicate Information (identified by operation id):

6 - access("T1"."ID"="T2"."RESTAURANTS_ID")

20 rows selected.

As you can see, the cardinality is correctly calculated, which made the CBO produce a better execution plan. With this plan in place, the query completes in less than one centisecond.

If you look into the 10053 trace file you can also see that the calculated selectivity is 0.001%:

CPUCost : 10000
IOCost : 10000
NetworkCost : 10000
Calling user-defined selectivity function...
predicate: "MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE'
sel number;

if sel IS NULL then
:2 := -1.0;
:2 := sel;
end if;
when others then
Bind :3 Value 'TRUE'
Bind :4 Value 'TRUE'
Bind :5 Value 'sdo_num_res=5'
ODCIEnv Bind :6 Value 0
ODCIEnv Bind :7 Value 0
ODCIEnv Bind :8 Value 0
ODCIEnv Bind :9 Value 3
MDSYS.SDO_STATISTICS_CUSTOM.ODCIStatsSelectivity returned selectivity: 0.00100000%

Anyway, since the current implementation of the SDO_STATISTICS type returns a selectivity of 1%, the next logical question is what one can do to work around the problem.


The problem can be addressed by giving the CBO that extra bit of information: how many rows we expect to get back after applying the SDO_NN operator. One way is to use the cardinality hint; another is to rewrite the query and add a ROWNUM<=5 predicate that tells the CBO how many rows we expect to select from the RESTAURANTS table. (There may be other strategies available too.)

Use cardinality hint:

SELECT /*+ cardinality(t1 5) gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
FROM restaurants t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
AND SDO_NN(t1.point,
           SDO_GEOMETRY(
           2001 -- 2 dimensional point
         , null
         , SDO_POINT_TYPE(10,0,null)
         , null
         , null
           ),
           'sdo_num_res=5',
           1
           )='TRUE'
ORDER BY t1.id, t2.item_number;

The execution statistics in this case look like:

Plan hash value: 2341889131

| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:00.01 | 92 | 5 | | | |
| 1 | SORT ORDER BY | | 1 | 25 | 25 |00:00:00.01 | 92 | 5 | 2048 | 2048 | 2048 (0)|
| 2 | NESTED LOOPS | | 1 | 25 | 25 |00:00:00.01 | 92 | 5 | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5 | 5 |00:00:00.01 | 55 | 0 | | | |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 54 | 0 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| MENU_ITEMS | 5 | 5 | 25 |00:00:00.01 | 37 | 5 | | | |
|* 6 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | 5 | 25 |00:00:00.01 | 12 | 0 | | | |


Predicate Information (identified by operation id):

4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
6 - access("T1"."ID"="T2"."RESTAURANTS_ID")

29 rows selected.

Rewrite the query and use a ROWNUM<=n predicate, where n is the number associated with the sdo_num_res parameter:

SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
FROM (SELECT *
      FROM restaurants
      WHERE SDO_NN(point,
                   SDO_GEOMETRY(
                   2001 -- 2 dimensional point
                 , null
                 , SDO_POINT_TYPE(10,0,null)
                 , null
                 , null
                   ),
                   'sdo_num_res=5',
                   1
                   )='TRUE'
      AND rownum<=5
     ) t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
ORDER BY t1.id, t2.item_number;

… and the execution statistics :

Plan hash value: 3570932640

| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:00.01 | 101 | | | |
| 1 | SORT ORDER BY | | 1 | 25 | 25 |00:00:00.01 | 101 | 2048 | 2048 | 2048 (0)|
| 2 | NESTED LOOPS | | 1 | | 25 |00:00:00.01 | 101 | | | |
| 3 | NESTED LOOPS | | 1 | 25 | 25 |00:00:00.01 | 76 | | | |
| 4 | VIEW | | 1 | 5 | 5 |00:00:00.01 | 64 | | | |
|* 5 | COUNT STOPKEY | | 1 | | 5 |00:00:00.01 | 64 | | | |
| 6 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5 | 5 |00:00:00.01 | 64 | | | |
|* 7 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 63 | | | |
|* 8 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | 5 | 25 |00:00:00.01 | 12 | | | |
| 9 | TABLE ACCESS BY INDEX ROWID | MENU_ITEMS | 25 | 5 | 25 |00:00:00.01 | 25 | | | |

Predicate Information (identified by operation id):

5 - filter(ROWNUM<=5)
7 - access("MDSYS"."SDO_NN"("POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
8 - access("T1"."ID"="T2"."RESTAURANTS_ID")

34 rows selected.

As you can see, the Oracle CBO made correct cardinality estimates in the last two examples and used the NESTED LOOPS method to join the tables, which resulted in a performance improvement.

I hope you got some valuable information out of this post.

Postscript on Scripts

Antony Reynolds - Fri, 2013-11-15 17:07
More Scripts for SOA Suite

Over time I have evolved my startup scripts and thought it would be a good time to share them.  They are available for download here.  I have finally converted to using WLST, which has a number of advantages.  To me the biggest advantage is that the output and log files are automatically written to a consistent location in the domain directory or node manager directory.  In addition the WLST scripts wait for the component to start and then return, this lets us string commands together without worrying about the dependencies.

The following are the key scripts (available for download here):

Script                     Description                        Pre-Reqs              Stops when Task Complete
startWlstNodeManager.sh    Starts Node Manager using WLST     None                  Yes
startNodeManager.sh        Starts Node Manager                None                  Yes
stopNodeManager.sh         Stops Node Manager using WLST      Node Manager running  Yes
startWlstAdminServer.sh    Starts Admin Server using WLST     Node Manager running  Yes
startAdminServer.sh        Starts Admin Server                None                  No
stopAdminServer.sh         Stops Admin Server                 Admin Server running  Yes
startWlstManagedServer.sh  Starts Managed Server using WLST   Node Manager running  Yes
startManagedServer.sh      Starts Managed Server              None                  No
stopManagedServer.sh       Stops Managed Server               Admin Server running  Yes

Samples

To start Node Manager and Admin Server

startWlstNodeManager.sh ; startWlstAdminServer.sh

To start Node Manager, Admin Server and SOA Server

startWlstNodeManager.sh ; startWlstAdminServer.sh ; startWlstManagedServer soa_server1

Note that the Admin Server is not started until the Node Manager is running; similarly, the SOA server is not started until the Admin Server is running.

Node Manager Scripts

startWlstNodeManager.sh

Uses WLST to start the Node Manager.  When the script completes the Node manager will be running.

startNodeManager.sh

The Node Manager is started in the background and the output is piped to the screen. This causes the Node Manager to continue running in the background if the terminal is closed. Log files, including a .out file capturing standard output and standard error, are placed in the <WL_HOME>/common/nodemanager directory, making them easy to find. This script pipes the output of the log file to the screen and keeps doing so until terminated. Terminating the script does not terminate the Node Manager.

stopNodeManager.sh

Uses WLST to stop the Node Manager.  When the script completes the Node Manager will be stopped.

Admin Server Scripts

startWlstAdminServer.sh

Uses WLST to start the Admin Server.  The Node Manager must be running before executing this command.  When the script completes the Admin Server will be running.

startAdminServer.sh

The Admin Server is started in the background and the output is piped to the screen. This causes the Admin Server to continue running in the background if the terminal is closed. Log files, including the .out file capturing standard output and standard error, are placed in the same location as if the server had been started by Node Manager, making them easy to find. This script pipes the output of the log file to the screen and keeps doing so until terminated. Terminating the script does not terminate the server.

stopAdminServer.sh

Stops the Admin Server.  When the script completes the Admin Server will no longer be running.

Managed Server Scripts

startWlstManagedServer.sh <MANAGED_SERVER_NAME>

Uses WLST to start the given Managed Server. The Node Manager must be running before executing this command. When the script completes the given Managed Server will be running.

startManagedServer.sh <MANAGED_SERVER_NAME>

The given Managed Server is started in the background and the output is piped to the screen. This causes the given Managed Server to continue running in the background if the terminal is closed. Log files, including the .out file capturing standard output and standard error, are placed in the same location as if the server had been started by Node Manager, making them easy to find. This script pipes the output of the log file to the screen and keeps doing so until terminated. Terminating the script does not terminate the server.

stopManagedServer.sh <MANAGED_SERVER_NAME>

Stops the given Managed Server. When the script completes the given Managed Server will no longer be running.

Utility Scripts

The following scripts are not called directly but are used by the previous scripts.

_fmwenv.sh

This script provides information about the Node Manager and the WebLogic domain and must be edited to reflect the installed FMW environment; in particular, the following values must be set:

  • DOMAIN_NAME – the WebLogic domain name.
  • NM_USERNAME – the Node Manager username.
  • NM_PASSWORD – the Node Manager password.
  • MW_HOME – the location where WebLogic and other FMW components are installed.
  • WEBLOGIC_USERNAME – the WebLogic Administrator username.
  • WEBLOGIC_PASSWORD - the WebLogic Administrator password.

The following values may also need changing:

  • ADMIN_HOSTNAME – the server where AdminServer is running.
  • ADMIN_PORT – the port number of the AdminServer.
  • DOMAIN_HOME – the location of the WebLogic domain directory, defaults to ${MW_HOME}/user_projects/domains/${DOMAIN_NAME}
  • NM_LISTEN_HOST – the Node Manager listening hostname, defaults to the hostname of the machine it is running on.
  • NM_LISTEN_PORT – the Node Manager listening port.

This script runs the WLST script passed in the environment variable ${SCRIPT} and takes its configuration from _fmwenv.sh. It dynamically builds a WLST properties file in the /tmp directory to pass parameters into the scripts. The properties filename is of the form <DOMAIN_NAME>.<PID>.properties.


This script runs the command passed in as an argument, writing standard out and standard error to a log file.  The log file is rotated between invocations to avoid losing the previous log files.  The log file is then tailed and output to the screen.  This means that this script will never finish by itself.
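A minimal sketch of that rotate-and-capture idea follows. It is illustrative only, not the downloadable script itself, and it omits the endless tail at the end:

```shell
# Illustrative sketch: keep the previous invocation's log, then run the
# given command with stdout and stderr captured. The real script also
# tails the log to the screen forever, which is omitted here.
cd "$(mktemp -d)"                       # throwaway working directory for the demo

run_logged() {
  log="$1.log"
  shift
  if [ -f "$log" ]; then
    mv "$log" "$log.prev"               # rotate instead of overwriting
  fi
  "$@" > "$log" 2>&1                    # capture stdout and stderr
}

run_logged demo echo "first run"
run_logged demo echo "second run"
cat demo.log                            # second run
cat demo.log.prev                       # first run
```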

WLST Scripts

The following WLST scripts are used by the scripts above, taking their properties from /tmp/<DOMAIN_NAME>.<PID>.properties:

  • startNodeManager.py
  • stopNodeManager.py
  • startServer.py
  • startServerNM.py

The dependencies and relationships between my scripts and the built in scripts are shown in the diagram below.

