Feed aggregator

Reference Data Set

Krishanu Bose - Sun, 2011-12-18 04:59
This is a new concept that has come in with Fusion. Reference Data Sets are logical groups that allow the enterprise to decide which business units can access reference data such as grades, locations, AR and AP payment terms, departments, and jobs. Oracle provides a default Reference Data Set that can be used across all business units. However, we can define our own Reference Data Sets to partition the data more effectively.
E.g. in R12 we had to live with the entire list of AR payment terms irrespective of whether an OU was using them or not. In Fusion, however, we can restrict payment terms across BUs, so only the relevant ones are accessible to each BU.

Worst Blogger Ever...

alt.oracle - Wed, 2011-12-14 16:38
Yes.  I know.  I'm the worst blogger ever.  That last post says... <choke>... May.  But I have an excuse (sort of).  Busy does not describe my past six months.  Some of you are familiar with the reason, but for those of you who aren't, I'll post about it very soon.
Categories: DBA Blogs

Core ADF11: Building Highly Reusable ADF Task Flows

JHeadstart - Wed, 2011-12-14 02:52

In an earlier post in the Core ADF11 series, I explained the various options you have in designing the page and task flow structure. The approach that in my opinion maximizes flexibility and reusability is to build the application using bounded task flows with page fragments. You then have various ways to disclose these task flows to the user: a dynamic region, dynamic tabs, a human workflow task list, or even having the user add the task flow at runtime using WebCenter Composer.

To maximize reuse of individual task flows, there are some simple techniques you can apply:

  • Define a set of input parameters that allow you to configure the task flow for its various (re)use cases
  • Define a router activity as the default activity to enable reuse-case-based conditional flows
  • Configure a dynamic iterator binding so the same task flow can be used as either a master region or a detail region
  • Configure display properties of UI components based on task flow input parameters, so components can be shown/hidden, editable/read-only, required/optional, etc. depending on the specific (re)use case (a sketch of this technique follows the list)
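
To make the last technique concrete, here is a minimal sketch of a managed bean a reusable task flow might use. The bean and the input parameter name "readOnlyMode" are illustrative assumptions, not part of the sample application mentioned below:

// Hypothetical helper bean for a reusable task flow. Assumes the task flow
// declares an input parameter "readOnlyMode" whose value lands in pageFlowScope.
import java.util.Map;

import oracle.adf.share.ADFContext;

public class RegionConfigBean {

    // UI components can bind to this, e.g. readOnly="#{pageFlowScope.regionConfigBean.readOnly}",
    // so the same page fragment renders editable or read-only per (re)use case.
    public boolean isReadOnly() {
        Map scope = ADFContext.getCurrent().getPageFlowScope();
        return Boolean.parseBoolean(String.valueOf(scope.get("readOnlyMode")));
    }
}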

By applying these techniques, you can dramatically reduce the number of task flows you need to build in your project. In this sample application, you can see how you can reuse the same task flow to support the following reuse cases:

  • Show in read-only mode
  • Enable deeplinking from another page to show one specific row 
  • Use as read-only context info in popup
  • Use as master region as well as detail region 
  • Enable deeplinking from external source like e-mail.

Categories: Development

OWSM: Loading private and public certificates

Marc Kelderman - Mon, 2011-12-12 08:30
As written in my blog article on SSL, handling certificates is not easy. One of my goals was to load a public and a private certificate into a JKS keystore. With tools such as keytool and openssl alone, this is not possible. After struggling for a few hours, I managed to solve it. This is how I did it.

You have two files, one public key and one private key: vijfhuizen-pub.pem and vijfhuizen-prv.pem. Based on these files, you can load the keystore as follows:
  • Convert the keys into DER format.
  • Load the DER files into a new keystore via Java.
Example:
openssl x509 -in vijfhuizen-pub.pem -inform PEM -out vijfhuizen-pub.crt -outform DER

openssl pkcs8 -topk8 -nocrypt -in vijfhuizen-prv.pem -inform PEM -out vijfhuizen-prv.crt -outform DER

java ImportKey -prikey vijfhuizen-prv.crt -signed vijfhuizen-pub.crt -alias vijfhuizen -keypass changeit -store vijfhuizen.jks

The Java class has the following options:
java ImportKey
Usage

java ImportKey -alias alias -prikey file.der -signed cert.der -keypass pas1 -storepass pas2
java ImportKey -alias alias -prikey file.der -signed cert.der -keypass pas1 -store file.jks -storepass pas2

Description

Store a DER key and signed certificate into the user's home keystore, or into the
keystore file specified by the -store parameter.

The Java code can be downloaded here.
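
For readers curious what such a class does under the hood, here is a minimal, hedged sketch of an ImportKey-style utility (not the downloadable class itself, which may differ). It assumes an RSA key in DER-encoded PKCS#8 form and a DER-encoded X.509 certificate, which is exactly what the openssl commands above produce:

// Hedged sketch of an ImportKey-style class: read the DER files, rebuild the
// key and certificate objects, and store them under one alias in a new JKS file.
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyFactory;
import java.security.KeyStore;
import java.security.PrivateKey;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import java.security.spec.PKCS8EncodedKeySpec;

public class ImportKeySketch {

    private static byte[] readAll(File f) throws Exception {
        byte[] bytes = new byte[(int) f.length()];
        DataInputStream in = new DataInputStream(new FileInputStream(f));
        in.readFully(bytes);
        in.close();
        return bytes;
    }

    public static void main(String[] args) throws Exception {
        // Rebuild the private key from its DER/PKCS#8 encoding
        PKCS8EncodedKeySpec keySpec =
                new PKCS8EncodedKeySpec(readAll(new File("vijfhuizen-prv.crt")));
        PrivateKey privateKey = KeyFactory.getInstance("RSA").generatePrivate(keySpec);

        // Parse the signed (public) certificate
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        Certificate cert = cf.generateCertificate(
                new ByteArrayInputStream(readAll(new File("vijfhuizen-pub.crt"))));

        // Store the key with its certificate chain under one alias in a new JKS keystore
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, null); // initialize an empty keystore
        ks.setKeyEntry("vijfhuizen", privateKey, "changeit".toCharArray(),
                new Certificate[] { cert });
        FileOutputStream out = new FileOutputStream("vijfhuizen.jks");
        ks.store(out, "changeit".toCharArray());
        out.close();
    }
}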

Memory Limits for Windows Releases

Mike Rothouse - Sat, 2011-12-10 20:38
When I need to reference the physical memory limits for Windows operating systems, I seem to waste time trying to locate my notes or searching the Internet. I am posting the information here so I can find it again. I found it on Microsoft’s site and will use it for future reference. Below are the Operating Systems and Editions […]

Google map in Fusion Apps

Krishanu Bose - Fri, 2011-12-09 11:48
Now we can locate all our employee, supplier, and customer addresses in Fusion through Google Maps.

Business Units and Shared Service model in Fusion Procurement

Krishanu Bose - Fri, 2011-12-09 10:46

Business Units (BU) definition: A business unit is a unit of an enterprise that performs one or many business functions that can be rolled up in a management hierarchy. A business unit can process transactions on behalf of many legal entities. Normally, it will have a manager, strategic objectives, a level of autonomy, and responsibility for its profit and loss. (1)

Prior to Oracle Fusion Applications, operating units in Oracle E-Business Suite were assumed to perform all business functions, while in PeopleSoft, each business unit had one specific business function. Oracle Fusion Applications blends these two models and allows defining business units with one or many business functions. (2)

In the Fusion Procurement context we need to understand the function of the following types of BUs:

  1. Procurement BU
  2. Requisitioning BU
  3. Sold-To BU
  4. Client BU

Procurement Business Unit: As the name suggests, Procurement BUs are responsible for the procurement business function, which involves vendor management, negotiation of contracts and purchase agreements, issue of orders, and subsequent administration.

Client Business Unit: Any BU that will be serviced by the Procurement BU needs to be set up as a Client BU. In a Shared Services model, where procurement services are centralized in one BU, the Procurement BU serves all the requisitions from the Client BUs.

Requisitioning Business Unit: As the name suggests, the Requisitioning BU is the business unit that raises requisitions to the Procurement BU for the goods or services it needs. Sometimes the Requisitioning BU is responsible for the financial impact of the purchase, in which case it is also defined as the Sold-To BU. If another BU takes financial responsibility for the purchase, then the Sold-To BU is different from the Requisitioning BU.

I’ll take an example to explain the above concept.

A mobile manufacturing company with a global presence has its headquarters in Norway (XYZ Norway). Its manufacturing division is based in India (XYZ India). The India operation sources its parts from the branch based in Singapore (XYZ Singapore), which does the centralized purchasing of chips from manufacturers in Taiwan and Japan. However, payment to the supplier is made by the headquarters in Norway. In this case, the Requisitioning BU would be XYZ India, the Procurement BU would be XYZ Singapore, and the Sold-To BU would be XYZ Norway.

Another example to make this clearer. I'll use the example my colleague Suchismita uses to explain the concept:

In a normal family, the teenage daughter, while returning home, sees a designer shoe in a shop's display and promptly approaches her mother with the request, knowing very well that her need will be fulfilled. The mother, a simple homemaker, approaches the dad, and the dad, being a doting father, purchases the shoe and gives it to the daughter.

If we map the above example, the daughter is the Procurement BU, as she is the one buying the shoe from the supplier (the shoe store). The mother is the Requisitioning BU, and the father is the Sold-To BU, as he takes financial responsibility for the purchase. (3)

The following setups need to be done to make this work in the system:

  1. Select the Procurement Service Providers for the selected BU
  2. Assign one or more of the Procurement, Requisitioning and Receiving business functions to the respective BU
  3. Configure the Procurement Business Function and the Requisitioning Business Function for the BU
  4. Select the Procurement BU at the Supplier Address
  5. Select the Client BU and Sold-To BU at the Supplier Site Assignment

In today’s scenario, businesses find it beneficial to channel purchases through international subsidiaries instead of dealing directly with suppliers. The reasons range from country-specific legal requirements to favorable tax treatment to better margins from the economies of scale of centralized procurement. Fusion Procurement provides this feature seamlessly, in a way that in hindsight seems quite intuitive.

Bibliography
  1. Oracle® Fusion Applications Procurement Implementation Guide. Retrieved from http://docs.oracle.com/cd/E15586_01/fusionapps.1111/e20383/toc.htm
  2. Oracle® Fusion Applications Financials Implementation Guide. Retrieved from http://docs.oracle.com/cd/E15586_01/fusionapps.1111/e20375/toc.htm
  3. Suchismita Pattnaik, LinkedIn profile: http://in.linkedin.com/in/suchismita

UKOUG TECHEBS 2011 Presentations

Anthony Rayner - Thu, 2011-12-08 08:12
I just wanted to do another quick post-conference follow-up after UKOUG TECHEBS 2011 in Birmingham. At this conference I presented on two topics relating to Application Express and, as promised, here are the slides and samples I showed during the sessions:




With the accessibility sample, I appreciate that just looking at the application makes it hard to work out exactly what I did to make it more accessible, so I will try to follow up in the next couple of weeks with more information about the changes. (Hopefully the slides and application together are still of some use until I do.) I also had some interesting feedback after the session: two people suggested the screen reader demos could be done with the projector switched off, which I thought was a great idea for providing a more accurate user experience, so I will try to incorporate that next time.

Thanks to all who attended, I hope you got something from them.
Categories: Development

Apache Ivy and JDeveloper integration

Chris Muir - Wed, 2011-12-07 19:03
As software applications grow, a common technique to reduce the complexity is to break the overall solution into separately built and deployed modules. This allows each component to be worked on independently without being overwhelmed with detail, though the cost of reassembling and building the application is the trade-off for the added flexibility. When modules become reusable across applications, the reassembly and build problem is exacerbated, and it becomes essential to track which version of each module is required by each application. Such problems can be reduced by introducing dependency management tools.

In the Java world there are a few well-known tools for dependency management, including Apache Ivy and Apache Maven. Strictly speaking, Ivy is just a dependency management tool that integrates with Apache Ant, while Maven is a set of tools in which dependency management is just one speciality.

In the ADF world, thanks to ADF Libraries (a.k.a. modules) that can be shared across applications, dependency management is also a relevant problem. Recently I went through the exercise of introducing Apache Ivy into our JDeveloper 11g and Hudson mix for an existing set of applications. This blog post describes the configuration of Apache Ivy in the context of our JDeveloper setup, in order to assist others setting up a similar installation. It introduces a simplistic application (downloadable from here) with one dependency, in very much an A-B-C style to assist the reader's learning.

Readers should be careful to note this post doesn't attempt to explain all the ins and outs of using Apache Ivy, just a configuration that worked for us. Readers are encouraged to seek out further resources to assist their learning of Apache Ivy.

Assumptions

This blog post assumes you understand the following concepts:

  • ADF Libraries
  • Resource Palette
  • Apache Ant
  • ojdeploy

In the beginning there was... ah... ApplicationA

To start out with, our overall application contains one JDeveloper application workspace known as ApplicationA, installed under C:\JDeveloper\mywork as follows:

ApplicationA initially has no dependencies and can be built and run standalone.

Within the application we create a separate project entitled "Build" with an Ant build script entitled "pre-ivy.build.xml" to build our application using ojdeploy as follows:
<?xml version="1.0" encoding="UTF-8" ?>
<project xmlns="antlib:org.apache.tools.ant" name="Build" basedir=".">
  <property name="jdev.ojdeploy.path" value="C:\java\jdeveloper\JDev11gBuild6081\jdeveloper\jdev\bin\ojdeploy.exe"/>
  <property name="jdev.ant.library" value="C:\java\jdeveloper\JDev11gBuild6081\jdeveloper\jdev\lib\ant-jdeveloper.jar"/>
  <target name="Build">
    <taskdef name="ojdeploy" classname="oracle.jdeveloper.deploy.ant.OJDeployAntTask" uri="oraclelib:OJDeployAntTask"
             classpath="${jdev.ant.library}"/>
    <ora:ojdeploy xmlns:ora="oraclelib:OJDeployAntTask" executable="${jdev.ojdeploy.path}"
                  ora:buildscript="C:\Temp\build.log" ora:statuslog="C:\Temp\status.log">
      <ora:deploy>
        <ora:parameter name="workspace" value="C:\JDeveloper\mywork\ApplicationA\ApplicationA.jws"/>
        <ora:parameter name="profile" value="ApplicationA"/>
        <ora:parameter name="outputfile" value="C:\JDeveloper\mywork\ApplicationA\deploy\ApplicationA"/>
      </ora:deploy>
    </ora:ojdeploy>
  </target>
</project>
(Note the jdev.ojdeploy.path and jdev.ant.library properties that map to your JDeveloper installation. You will need to change these to suit your environment, for both ApplicationA and the following ADFLibrary1.)

And then ApplicationA begat ADFLibrary1

Now we'll create a new task flow in a separate application workspace known as ADFLibrary1 which ApplicationA is dependent on:

We add an ADF Library JAR deployment profile to ADFLibrary1's ViewController project to generate the ADF Library JAR at:

C:\JDeveloper\mywork\ADFLibrary1\ViewController\deploy\adflibADFLibrary1.jar

Similar to ApplicationA, we add a Build project to our application workspace and a pre-ivy.build.xml Ant build script using ojdeploy:
<?xml version="1.0" encoding="UTF-8" ?>
<project xmlns="antlib:org.apache.tools.ant" name="Build" basedir=".">
  <property name="jdev.ojdeploy.path" value="C:\java\jdeveloper\JDev11gBuild6081\jdeveloper\jdev\bin\ojdeploy.exe"/>
  <property name="jdev.ant.library" value="C:\java\jdeveloper\JDev11gBuild6081\jdeveloper\jdev\lib\ant-jdeveloper.jar"/>
  <target name="Build">
    <taskdef name="ojdeploy" classname="oracle.jdeveloper.deploy.ant.OJDeployAntTask" uri="oraclelib:OJDeployAntTask"
             classpath="${jdev.ant.library}"/>
    <ora:ojdeploy xmlns:ora="oraclelib:OJDeployAntTask" executable="${jdev.ojdeploy.path}"
                  ora:buildscript="C:\Temp\build.log" ora:statuslog="C:\Temp\status.log">
      <ora:deploy>
        <ora:parameter name="workspace" value="C:\JDeveloper\mywork\ADFLibrary1\ADFLibrary1.jws"/>
        <ora:parameter name="project" value="ViewController"/>
        <ora:parameter name="profile" value="ADFLibrary1"/>
        <ora:parameter name="outputfile" value="C:\JDeveloper\mywork\ADFLibrary1\ViewController\deploy\ADFLibrary1"/>
      </ora:deploy>
    </ora:ojdeploy>
  </target>
</project>
From here we want to attach ADFLibrary1.jar to ApplicationA's ViewController project. Over time we might have many JARs to attach, so rather than mapping to several different deploy directories under each ADF Library application workspace, we'll assume the libraries are instead available under a central "lib" directory as follows:

Experienced readers will know to set up a Resource Palette "File Connection" to map to C:\JDeveloper\mywork\lib, then simply add the JARs from the palette.

Adding Apache Ivy

At this point we have a rudimentary form of dependency management, where a logical version 1 of ApplicationA has attached a logical version 1 of ADFLibrary1 through the ADF Library JAR being attached to ApplicationA's ViewController project. Note the word "rudimentary": currently there is no way to track versions. If we have separate versions of ApplicationA dependent on separate versions of ADFLibrary1, developers have to be very careful to check out and build the correct versions, and there's nothing inherently obvious in the generated JAR file names to give us an idea of which versions are being used.

Let's introduce Apache Ivy into the mix with this simplistic dependency model as a basis for learning, to see how Ivy solves the versioning dependency issue.

Adding ivy.xml to each module

Ivy requires that each module have an ivy.xml file. Among other things, the ivy.xml file describes for each module:

a) The module name
b) The version of the module
c) The artefacts the module publishes
d) The module's dependencies, including the versions of those dependencies

For our existing ADFLibrary1 we'll add an ivy.xml file to our Build project containing the following details:
<?xml version="1.0" encoding="UTF-8"?>
<ivy-module version="2.0">
  <info organisation="sage" module="ADFLibrary1" revision="1"/>
  <configurations>
    <conf name="jar" description="Java archive"/>
    <conf name="ear" description="Enterprise archive"/>
  </configurations>
  <publications>
    <artifact name="ADFLibrary1" conf="jar" ext="jar"/>
  </publications>
  <!-- <dependencies> There are no dependencies for this module </dependencies> -->
</ivy-module>
Of note:

a) The module name in the <info> tag
b) The revision/version number in the <info> tag
c) The publication of an ADF Library jar in the <publications> tag
d) And that this module is not dependent on any other modules, shown by the commented-out <dependencies> tag

(You might also note the <configurations> tag. Configurations define the types of artefacts we can possibly generate for the module. In this case we're creating an ADF Library "JAR", but alternatively we could for example produce a WAR or EAR file or some other sort of artefact. For the purposes of this blog post we'll keep this relatively simple and just stick to JARs and EARs.)

For our existing ApplicationA, its ivy.xml file under the Build project looks as follows:
<?xml version="1.0" encoding="UTF-8"?>
<ivy-module version="2.0">
  <info organisation="sage" module="ApplicationA" revision="1"/>
  <configurations>
    <conf name="jar" description="Java archive"/>
    <conf name="ear" description="Enterprise archive"/>
  </configurations>
  <publications>
    <artifact name="ApplicationA" conf="ear" ext="ear"/>
  </publications>
  <dependencies>
    <dependency org="sage" name="ADFLibrary1" rev="1">
      <artifact name="ADFLibrary1" ext="jar"/>
    </dependency>
  </dependencies>
</ivy-module>
Of note:

a) The module name ApplicationA in the <info> tag
b) The revision/version number 1 in the <info> tag
c) The publication of an EAR in the <publications> tag
d) And most importantly, a dependency on ADFLibrary1, specifically release/version 1.

It's this last point that is most important, as the ivy.xml file not only tracks the dependencies between modules (which truthfully JDeveloper was already doing for us) but also tracks the version dependency, namely that ApplicationA release/version 1 is dependent on release/version 1 of ADFLibrary1.

Apache Ivy Repository

In the application configuration described so far, we assumed ApplicationA and ADFLibrary1 were built on the same developer machine. It's relatively simple for one developer to copy the JARs to the correct location to satisfy the dependencies. Yet in a typical development environment there will be multiple developers working on different modules across different developer machines. Moving JARs between developer PCs becomes problematic; we really need some sort of developer repository to share the module archives.

At this point we introduce an Apache Ivy repository into our solution. Simplistically, the Ivy repository is a location to which developers can publish JARs, and from which other developers, when building an application with a dependency, can download them.

Ivy supports different types of repositories, which are documented under Resolvers in the Ivy documentation. For the purposes of this blog post we'll use the simplest repository type, "FileSystem".

In order to make use of the FileSystem Ivy repository, all developers must have access to a file typically called ivysettings.xml. Among other settings, this file defines for Ivy where the repository resides. How you distribute this file to developers is up to you: maybe it's located on a shared network location, maybe a copy is checked out to a common local location. For the purposes of this blog post we'll assume it resides on every developer's machine under C:\JDeveloper\ivy:

The following reveals the contents of a possible ivysettings.xml file:
<ivysettings>
  <property name="ivy.repo.dir" value="C:\JDeveloper\ivy\repositories\development"/>
  <resolvers>
    <chain>
      <filesystem name="repository">
        <ivy pattern="${ivy.repo.dir}/[module]/ivy[module]_[revision].xml"/>
        <artifact pattern="${ivy.repo.dir}/[module]/[type][artifact]_[revision].[ext]"/>
      </filesystem>
    </chain>
  </resolvers>
</ivysettings>
Points to consider:

1) Note the ivy.repo.dir property. Typically this would point to your own //SomeServer/YourRepositoryLocation which all developers can access on your local network. For the purposes of this blog post, in order to give readers a single zip file that they can download and use, I've changed this setting to instead locate the repository at C:\JDeveloper\ivy\repositories\development. This certainly *isn't* a good location for a shared repository, but it is workable for demonstration purposes.

2) The <resolvers> <chain> defines the list of repositories for Ivy to publish to or download dependencies from. In this case we've only configured one repository, but there's nothing stopping you having a series of repositories.

3) The <ivy> subtag of the <filesystem> tag defines how Ivy will store and search for its own metadata files in the repository, which hold information such as the module name, versions and more, essentially copied from your ivy.xml files.

4) The <artifact> tag defines how Ivy will store and search for the actual artefacts you generate (such as the ADF Library JARs) in the repository.

With regard to the last two points, it's best to leave the patterns at their defaults, as in the end the repository can be treated as a black box. You don't really care how it works, as long as Ivy lets you publish files to and retrieve files from the repository.

Configuring Ant to *understand* Ivy

With the ivy.xml and ivysettings.xml files in place, we now need to configure our Ant build scripts to interpret the settings and work with our repository during builds.

First we download Apache Ivy and install it into a location each developer's machine can access. This blog post assumes Ivy v2.2.0 and that the associated ivy-2.2.0.jar has been unzipped to C:\JDeveloper\ivy\apache-ivy-2.2.0:

Next we modify our existing build scripts for each module. In the build.xml file for *both* ADFLibrary1 and ApplicationA we insert the following code:

(Note in the downloadable application this code resides in build.xml, not pre-ivy.build.xml which was documented earlier in this blog post).
<property name="ivy.default.ivy.user.dir" value="C:\JDeveloper\ivy"/>
<property name="ivy.default.ivy.lib.dir" value="C:\JDeveloper\lib"/>
<path id="ivy.lib.path">
  <fileset dir="C:\JDeveloper\ivy\apache-ivy-2.2.0" includes="*.jar"/>
</path>
<taskdef resource="org/apache/ivy/ant/antlib.xml" uri="antlib:org.apache.ivy.ant" classpathref="ivy.lib.path"/>
<ivy:configure file="C:\JDeveloper\ivy\ivysettings.xml"/>
<ivy:info file="./ivy.xml"/>
Items to note:

1) Setting the property ivy.default.ivy.user.dir changes the default location under which Ivy stores local copies of the data downloaded from the repository.

2) Setting the property ivy.default.ivy.lib.dir defines the location where the JAR files should be ultimately delivered for dependent modules to make use of.

3) The <ivy:configure> tag tells Ivy where the ivysettings.xml file is located which includes the configuration information about the repositories.

4) The <ivy:info> tag tells Ivy where the current module's ivy.xml file is located.

Configuring Ant to *use* Ivy

With the previous Ivy setup we're now ready to start building using Ivy via Ant.

Let's consider our goals. What we want to do first is build and then publish ADFLibrary1 to the Ivy repository. Subsequently, for ApplicationA, we want to download ADFLibrary1 from the Ivy repository, then build ApplicationA.

To achieve the first goal, we already have a Build Ant target in the ADFLibrary1 build.xml, so we just need to add another target, "Publish", which takes the artefacts generated by the Build target, as follows:
<target name="Publish">
  <ivy:publish resolver="repository" overwrite="true" pubrevision="${ivy.revision}" update="true">
    <ivy:artifacts pattern="../ViewController/deploy/[artifact].[ext]"/>
  </ivy:publish>
</target>
Items to note:

1) The <ivy:publish> tag says which resolver (i.e., which repository) to publish to, what to do if the exact file and revision already exist in the repository, and what revision/version to publish the file as. The ${ivy.revision} variable is derived from ADFLibrary1's ivy.xml file.

2) The <ivy:artifacts> tag tells the publish command where to find the artifact to publish.

3) Because of the <ivy:artifacts> tag, there's a dependency that the module has already been built. This could easily be catered for in the overall build script by making an <antcall> to the Build target at the start of the Publish target, but for simplicity this change hasn't been made for this blog post.

At this point let's see what output we get if we run the Build and Publish targets. First, when we run the Build target, the JDeveloper log window reports:
Buildfile: C:\JDeveloper\mywork\ADFLibrary1\Build\build.xml
[ivy:configure] :: Ivy 2.2.0 - 20100923230623 :: http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file = C:\JDeveloper\ivy\ivysettings.xml

Build:
[ora:ojdeploy] ----build file----
[ora:ojdeploy] <?xml version = '1.0' standalone = 'yes'?>
[ora:ojdeploy] <ojdeploy-build>
[ora:ojdeploy] <deploy>
[ora:ojdeploy] <parameter name="workspace" value="C:\JDeveloper\mywork\ADFLibrary1\ADFLibrary1.jws"/>
[ora:ojdeploy] <parameter name="project" value="ViewController"/>
[ora:ojdeploy] <parameter name="profile" value="ADFLibrary1"/>
[ora:ojdeploy] <parameter name="outputfile" value="C:\JDeveloper\mywork\ADFLibrary1\ViewController\deploy\ADFLibrary1"/>
[ora:ojdeploy] </deploy>
[ora:ojdeploy] <defaults>
[ora:ojdeploy] <parameter name="statuslogfile" value="C:\Temp\status.log"/>
[ora:ojdeploy] </defaults>
[ora:ojdeploy] </ojdeploy-build>
[ora:ojdeploy] ------------------
[ora:ojdeploy] 07/12/2011 1:31:42 PM oracle.security.jps.util.JpsUtil disableAudit
[ora:ojdeploy] INFO: JpsUtil: isAuditDisabled set to true
[ora:ojdeploy] 07/12/2011 1:31:43 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer prepareImpl
[ora:ojdeploy] INFO: ---- Deployment started. ----
[ora:ojdeploy] 07/12/2011 1:31:43 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer printTargetPlatform
[ora:ojdeploy] INFO: Target platform is Standard Java EE.
[ora:ojdeploy] 07/12/2011 1:31:43 PM oracle.jdevimpl.deploy.common.ProfileDependencyAnalyzer deployImpl
[ora:ojdeploy] INFO: Running dependency analysis...
[ora:ojdeploy] 07/12/2011 1:31:43 PM oracle.jdeveloper.deploy.common.BuildDeployer build
[ora:ojdeploy] INFO: Building...
[ora:ojdeploy] Compiling...
[ora:ojdeploy] [1:31:45 PM] Successful compilation: 0 errors, 0 warnings.
[ora:ojdeploy] 07/12/2011 1:31:45 PM oracle.jdevimpl.deploy.common.ModulePackagerImpl deployProfiles
[ora:ojdeploy] INFO: Deploying profile...
[ora:ojdeploy] 07/12/2011 1:31:45 PM oracle.adfdt.controller.adfc.source.deploy.AdfcConfigDeployer deployerPrepared
[ora:ojdeploy] INFO: Moving WEB-INF/adfc-config.xml to META-INF/adfc-config.xml
[ora:ojdeploy]
[ora:ojdeploy] 07/12/2011 1:31:45 PM oracle.jdeveloper.deploy.jar.ArchiveDeployer logFileWritten
[ora:ojdeploy] INFO: Wrote Archive Module to file:/C:/JDeveloper/mywork/ADFLibrary1/ViewController/deploy/ADFLibrary1.jar
[ora:ojdeploy] 07/12/2011 1:31:45 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer finishImpl
[ora:ojdeploy] INFO: Elapsed time for deployment: 3 seconds
[ora:ojdeploy] 07/12/2011 1:31:45 PM oracle.jdevimpl.deploy.fwk.TopLevelDeployer finishImpl
[ora:ojdeploy] INFO: ---- Deployment finished. ----
[ora:ojdeploy] Status summary written to /C:/Temp/status.log
At the beginning of the output you can see Ivy being initialized but at the moment it's mostly not used. From the output you can see the JAR being built by ojdeploy and placed under C:/JDeveloper/mywork/ADFLibrary1/ViewController/deploy.

Next, when we run the Publish target, the following output is produced:
Buildfile: C:\JDeveloper\mywork\ADFLibrary1\Build\build.xml
[ivy:configure] :: Ivy 2.2.0 - 20100923230623 :: http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file = C:\JDeveloper\ivy\ivysettings.xml

Publish:
[ivy:publish] :: publishing :: sage#ADFLibrary1
[ivy:publish] published ADFLibrary1 to C:\JDeveloper\ivy\repositories\development/ADFLibrary1/jarADFLibrary1_1.jar
[ivy:publish] published ivy to C:\JDeveloper\ivy\repositories\development/ADFLibrary1/ivyADFLibrary1_1.xml
Beyond the initial Ivy setup, of importance we can see the call to <ivy:publish> pushing the JAR from the previous Build step into the repository. If we look at our C: drive where the repository is located, we can indeed see files now sitting in the repository:

The individual files are beyond the scope of this discussion, except to say this is the structure Ivy has put in place.

At this point we've achieved our first goal of building and publishing ADFLibrary1 to the Ivy repository. Let's move on to our second goal for ApplicationA, where we want to download ADFLibrary1 from the Ivy repository, then build ApplicationA.

In order to do this we'll add a new target, "Download_dependencies", to the ApplicationA build.xml as follows:
<target name="Download_dependencies">
  <ivy:cleancache/>
  <ivy:resolve/>
  <ivy:retrieve pattern="${ivy.default.ivy.lib.dir}/[artifact].[ext]" type="jar"/>
</target>
Of note:

1) The <ivy:cleancache> tag clears the ${ivy.default.ivy.user.dir}\Cache directory of previously downloaded dependencies. This is only really necessary if, when uploading dependencies, you're not creating new versions but rather overwriting an existing release. In that latter case Ivy will prefer the cached copy of the JAR over the updated JAR in the repository; flushing the cache solves this, as the JARs are then downloaded each time.

2) The <ivy:resolve> tag which loads the dependency metadata for the current module from the associated ivy.xml file, determines which artefacts to obtain from the repository and downloads them to the ${ivy.default.ivy.user.dir}\Cache directory on the local PC.

3) The <ivy:retrieve> tag then searches the cache directory for the required JAR files and places them in the location where the application expects to find them, namely C:\JDeveloper\lib.

If we run this target we see in the logs:
Buildfile: C:\JDeveloper\mywork\ApplicationA\Build\build.xml
[ivy:configure] :: Ivy 2.2.0 - 20100923230623 :: http://ant.apache.org/ivy/ ::
[ivy:configure] :: loading settings :: file = C:\JDeveloper\ivy\ivysettings.xml

Download_dependencies:
[ivy:resolve] :: resolving dependencies :: sage#ApplicationA;1
[ivy:resolve] confs: [jar, ear]
[ivy:resolve] found sage#ADFLibrary1;1 in repository
[ivy:resolve] downloading C:\JDeveloper\ivy\repositories\development\ADFLibrary1\jarADFLibrary1_1.jar ...
[ivy:resolve] .. (6kB)
[ivy:resolve] .. (0kB)
[ivy:resolve] [SUCCESSFUL ] sage#ADFLibrary1;1!ADFLibrary1.jar (0ms)
[ivy:resolve] :: resolution report :: resolve 109ms :: artifacts dl 0ms
---------------------------------------------------------
|                  |            modules            ||   artifacts   |
|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------
|       jar        |   1   |   1   |   1   |   0   ||   1   |   1   |
|       ear        |   1   |   1   |   1   |   0   ||   1   |   1   |
---------------------------------------------------------
[ivy:retrieve] :: retrieving :: sage#ApplicationA
[ivy:retrieve] confs: [jar, ear]
[ivy:retrieve] 1 artifacts copied, 0 already retrieved (6kB/0ms)
Beyond the initial configuration of Ivy, in the output you can see the <ivy:resolve> tag resolving the dependency of ApplicationA on ADFLibrary1 version 1, then downloading the file to the cache. Finally the <ivy:retrieve> tag retrieves the file from the cache and copies it to the local lib directory (though this isn't that obvious from the logs).

If you now build ApplicationA, you will see it compiles correctly. To check that it doesn't build when ADFLibrary1.jar is not sitting in the C:\JDeveloper\lib directory, delete the JAR and rebuild ApplicationA.

Making and building with new revisions

Over time your modules will go through new revisions. You will of course be checking these changes in and out of your version control system, such as Subversion. How do you cater for the new versions with regard to Ivy?

Imagine the scenario where ApplicationA release 3 is now dependent on ADFLibrary1 revision 6. This requires two simple changes.

Firstly, in the ADFLibrary1 ivy.xml, change the revision number in the <info> tag to 6, then build and publish.

Secondly, in ApplicationA's ivy.xml, update its revision number to 3, then in the <dependencies> tag update the ADFLibrary1 dependency's revision number to 6. From then on, when you download the dependencies for ApplicationA revision 3, it will download revision 6 of ADFLibrary1 from the repository.

Conclusion

At this point we have all the moving parts to make use of Ivy with JDeveloper and ADF to implement a dependency management solution. While the example is contrived, it shows the use of:

1) The ivy.xml file to define each module, what it publishes and what it depends on
2) The ivysettings.xml file to define the location of the shared repository
3) The Ivy Ant tasks for publishing modules to the repository, as well as downloading modules from the repository

If I have time I will write a separate blog post to show how transitive dependencies work in Ivy. The nice thing about Ivy is that it handles these automagically, so there's not much to configure; it just needs a working example to explain.

Beyond this there really isn't that much else to explain. Working out the nuances of Ivy takes around a week, and retrofitting it into your environment takes longer, but beyond that Ivy is pretty simple in that it does one thing and does it well.

Finally, note that I'm not advocating Apache Ivy over Apache Maven with this post; ultimately it simply documents how to use Ivy with JDeveloper, and readers need to make their own choice of which tool, if any, to use. Future versions of JDeveloper (see the Maven integration section in the following blog post) are scheduled to have improved Maven integration, so readers should take care not to discount Maven as an option.

Errata

This was tested against JDev 11.1.2.1.0 and 11.1.1.4.0, but in essence should run against any JDev version with Ant support.

We're Hiring

Duncan Mein - Wed, 2011-12-07 08:04
We are looking for an APEX developer for an initial 3-month contract, with definite scope for long-term extension, for a role in Hampshire (UK).

Candidates must be SC cleared or willing to undergo clearance to work on a UK MoD site.

Any interested parties, please send me an up-to-date copy of your CV with availability and rate to: duncanmein@gmail.com

SSH root attacks on the rise

Jared Still - Mon, 2011-12-05 18:42
This is not directly Oracle related, but probably still of interest.

SSH Password Brute Forcing may be on the Rise

Out of curiosity I pulled the ssh login attempts from /var/log/messages on an internet-facing server, and the data corresponds to what was shown in the article.

What was interesting was that all the ssh attempts I saw were for root. In the past when I have looked at these, a number of different accounts were being attacked, but now the attacks are all for root.

Categories: DBA Blogs

Gwen Shapira on SSD

Cary Millsap - Sun, 2011-12-04 06:13
If you haven’t seen Gwen Shapira’s article about de-confusing SSD, I recommend that you read it soon.

One statement stood out as an idea on which I wanted to comment:
"If you don’t see significant number of physical reads and sequential read wait events in your AWR report, you won’t notice much performance improvements from using SSD."

I wanted to remind you that you can do better. If you do notice a significant number of physical reads and sequential read wait events in your AWR report, then it's still not certain that SSD will improve the performance of the task whose performance you're hoping to improve. You don't have to guess about the effect that SSD will have upon any business task you care about. In 2009, I wrote a blog post that explains.

SYSAUX tablespace growing rapidly

Mike Rothouse - Sat, 2011-12-03 14:16
I have an Oracle 11g R2 (11.2.0.1) database where I noticed the SYSAUX tablespace was growing larger every day.  I searched My Oracle Support and found Doc ID 1292724.1 and Doc ID 552880.1, which were helpful. After running awrinfo.sql, I found the largest consumer to be SM/OPTSTAT at 2.8 GB, which is larger than typical […]

Easy Automation of common Weblogic and FMW Administration commands via WLST Recording

Ramkumar Menon - Fri, 2011-12-02 14:27

The WebLogic Scripting Tool (WLST) is a command-line scripting environment that you can use to create, manage, and monitor WebLogic Server domains. WebLogic Server provides a way to record your configuration edits in the WLS Console as WLST scripts that can later be edited and used for configuration automation. Note that for security reasons, it does not allow you to record all commands. Refer to the WebLogic Server documentation for what is disallowed.

Here is a simple run-through of how you can use WLST recording to generate scripts for configuration automation. In this example, we will record the creation of a simple JDBC resource via the WLS Console and edit it post-recording.

Step 1: Log into the WLS Admin Console, click on “Preferences” at the top, and click on the “WLST Script Recording” tab.

This page gives you details on where the script will be generated after recording, and the name of the file. You can change these to suit your needs.


Step 2: Click on “Start Recording” and then proceed to create the data source as shown in the steps later. This is under the assumption that Automatic Recording is turned off. In this case, you can start and stop recording when you have finished atomic recording tasks. Once you start recording, you can see a message indicating that the recording is on.



Step 4: Once you have completed the configuration, you can click on “Preferences” at the top to come back to the Recording settings page and stop the recording. You can see that the recording window has captured all configuration changes in Jython format.


Step 5: Click on “Stop recording” to generate the file at the desired location.




Step 6: Next, you can update the script to pass the configuration values as command-line arguments or read them from a property file. See below.


Step 7: WLST can be run in a variety of ways. One way is to set the environment using wlserver_10.3/server/bin/setWLSEnv.sh and then running

java weblogic.WLST scriptName.py

Refer to the WLST documentation for other ways to execute WLST (interactive, embedded, Ant, etc.).
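
As a hedged illustration of the embedded option just mentioned, the following sketch runs WLST lines (such as those produced by the recorder) from Java using the WLSTInterpreter class that ships with WebLogic; weblogic.jar must be on the classpath, and the admin URL and credentials below are placeholders for your own values:

// Minimal sketch of WLST "embedded" mode; not a recorded script itself.
import weblogic.management.scripting.utils.WLSTInterpreter;

public class EmbeddedWlst {
    public static void main(String[] args) {
        WLSTInterpreter interpreter = new WLSTInterpreter();
        // Each exec() call runs one line of WLST/Jython, just like a recorded script
        interpreter.exec("connect('weblogic', 'welcome1', 't3://localhost:7001')"); // placeholder credentials/URL
        interpreter.exec("cd('/Servers')");
        interpreter.exec("ls()");
        interpreter.exec("disconnect()");
    }
}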

Breaking change in calling Groovy on 1.8 upgrade

Nigel Thomas - Tue, 2011-11-29 10:07
I've been bitten by this a couple of times now, so for anyone else's benefit: if you have a bat file that calls a Groovy program, you may notice surprising behaviour after an upgrade from 1.7.x to 1.8.x (I went from 1.7.4 to 1.8.4).

If your bat file looks something like:
..some stuff..

groovy myGroovy
copy xyz abc

... more stuff ..
Then in 1.7.4 you would have called groovy.exe, executed the program, and then continued to copy the file. But in 1.8.x groovy.exe is deprecated, so instead you execute groovy.bat. Unfortunately, when a Windows bat script calls another this way, it effectively jumps to the called script (with no return), so your script finishes at the end of groovy.bat. To fix this, use the Windows CALL instruction:
..some stuff..

call groovy myGroovy
copy xyz abc

... more stuff ..
With the CALL, the groovy.bat script executes and then returns control to your script, and the copy and more stuff actually happens.

NOTE: I think the reason I have the problem is that I installed the generic Groovy distribution rather than using the specific Windows installer (eg here). But Codehaus seems to be down right now.


Implementing Oracle parallel shared server process in Java inside the Database

Marcelo Ochoa - Mon, 2011-11-28 14:33
Behind the implementation of the latest LDI open source project and the OLS products there is a functionality not well known by Oracle Java database developers; I call it the parallel shared server process.
The idea is to have an Oracle shared server process running during the lifetime of the instance, which means a process automatically started during database startup and stopped during database shutdown.
So what functionality can this process implement? In LDI it is an RMI server; in OLS it is a lightweight HTTP server. But basically you can implement anything you need, for example getting information from another process and filling some table, gathering statistics, consuming web services, and so on.
Let's see how it works with an example.
We will create a TEST user and create some Java classes implementing a simple Hello World RMI server.
SQL> conn / as sysdba
SQL> create user test identified by test
  2  default tablespace users
  3  temporary tablespace temp
  4  quota unlimited on users;
SQL> grant connect,resource,create any job to TEST;
SQL> exec dbms_java.grant_permission( 'TEST', 'SYS:java.net.SocketPermission', 'localhost:1024-', 'listen,resolve');
SQL> exec dbms_java.grant_permission( 'TEST', 'SYS:java.net.SocketPermission', 'localhost:1024-', 'accept,resolve');
SQL> exec dbms_java.grant_permission( 'TEST', 'SYS:java.net.SocketPermission', 'localhost:1024-', 'connect,resolve');
SQL> exec dbms_java.grant_permission( 'TEST', 'SYS:java.lang.RuntimePermission', 'setContextClassLoader', '' );

The RMI interface and server implementation, created in the TEST user:
SQL> conn test/test
SQL> create or replace and compile java source named "mytest.Hello" as
package mytest;
import java.rmi.Remote;
import java.rmi.RemoteException;
public interface Hello extends Remote {
    String sayHello() throws RemoteException;
    int nextCount() throws RemoteException;
}
/
SQL> create or replace and compile java source named "mytest.HelloImpl" as
package mytest;
import java.rmi.Naming;
import java.rmi.RemoteException;
import java.rmi.RMISecurityManager;
import java.rmi.registry.LocateRegistry;
import java.rmi.server.UnicastRemoteObject;
public class HelloImpl extends UnicastRemoteObject implements Hello {
    static int counter = 0;
   
    public HelloImpl() throws RemoteException {
        super();
    }
    public String sayHello() {
        return "Hello World!";
    }
    public static void main(String[] args) {
        // Create and install a security manager
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new RMISecurityManager());
        }
        try {
            HelloImpl obj = new HelloImpl();
            LocateRegistry.createRegistry(1099);
            // Bind this object instance to the name "HelloServer"
            Naming.rebind("//localhost/HelloServer", obj);
            System.out.println("HelloServer bound in registry");
        } catch (Exception e) {
            System.out.println("HelloImpl err: " + e.getMessage());
            e.printStackTrace();
        }
    }
    public synchronized int nextCount() {
        return counter++;
    }
}
/
SQL> create or replace procedure HelloServ(srvName IN VARCHAR2) as LANGUAGE JAVA NAME
        'mytest.HelloImpl.main(java.lang.String [])';
/
SQL> begin
  -- Start a Cron like process (DBMS_SCHEDULER)
  DBMS_SCHEDULER.CREATE_JOB(
   job_name          =>  'HelloServJob',
   job_type          =>  'PLSQL_BLOCK',
   job_action        =>  'begin
     HelloServ(''HelloServer'');
     exception when others then
        null;
     end;',
   start_date        =>  SYSDATE,
   enabled           => false,
   auto_drop         => false);
  DBMS_SCHEDULER.SET_ATTRIBUTE_NULL (
   name           =>   'HelloServJob',
   attribute      =>   'MAX_FAILURES');
end;
/
commit;
Now we can register two database instance triggers to automatically start and stop the job.
SQL> conn / as sysdba
SQL> CREATE OR REPLACE TRIGGER start_test_srv
  AFTER STARTUP ON DATABASE
BEGIN
  -- Start a Cron like process (DBMS_SCHEDULER)
  DBMS_SCHEDULER.ENABLE('TEST.HelloServJob');
END;
/
SQL> CREATE OR REPLACE TRIGGER stop_test_srv
  BEFORE SHUTDOWN ON DATABASE
BEGIN
  -- Start a Cron like process (DBMS_SCHEDULER)
  DBMS_SCHEDULER.STOP_JOB('TEST.HelloServJob',force=>true);
EXCEPTION WHEN OTHERS THEN
  null;
END;
/
If we perform a shutdown/startup sequence, the server will be up and running. We can also start the server manually by executing:
SQL> conn / as sysdba
SQL> exec DBMS_SCHEDULER.ENABLE('TEST.HelloServJob');
SQL> commit;
After doing that, we can see at $ORACLE_BASE/diag/rdbms/orcl/orcl/trace a .trc file associated with the parallel shared server process, which is up and running:
-bash-4.2$ cat orcl_j000_10411.trc
Trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_j000_10411.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/11_2_0_2_0/dbhome_1
System name:    Linux
Node name:      localhost.localdomain
Release:        2.6.38.7-server-1mnb2
Version:        #1 SMP Sun May 22 22:59:25 UTC 2011
Machine:        i686
Instance name: orcl
Redo thread mounted by this instance: 1
Oracle process number: 25
Unix process pid: 10411, image: oracle@localhost.localdomain (J000)

*** 2011-11-28 18:05:41.091
*** SESSION ID:(151.35) 2011-11-28 18:05:41.091
*** CLIENT ID:() 2011-11-28 18:05:41.091
*** SERVICE NAME:(SYS$USERS) 2011-11-28 18:05:41.091
*** MODULE NAME:(DBMS_SCHEDULER) 2011-11-28 18:05:41.091
*** ACTION NAME:(HELLOSERVJOB) 2011-11-28 18:05:41.091

HelloServer bound in registry
and this process is listening on the default RMI port 1099; we can see that using:
-bash-4.2$ netstat -anp|grep ora_j0
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 :::19189                    :::*                        LISTEN      10411/ora_j000_orcl
tcp        0      0 :::1099                     :::*                        LISTEN      10411/ora_j000_orcl 
and that's all; we can connect using an RMI client from another client session, for example:
SQL> create or replace and compile java source named "mytest.HelloClient" as
package mytest;
import java.rmi.Naming;
import java.rmi.RemoteException;
public class HelloClient {
    Hello obj = null;
    public HelloClient() {
        try {
            obj = (Hello)Naming.lookup("//localhost/HelloServer");
        } catch (Exception e) {
            System.out.println("HelloApplet exception: " + e.getMessage());
            e.printStackTrace();
        }
    }
    public String sayHello() throws RemoteException {
        return obj.sayHello();
    }
    public int nextCount() throws RemoteException {
        return obj.nextCount();
    }
    public static void main(String[] args) throws RemoteException {
        HelloClient helloClient = new HelloClient();
        System.out.println(helloClient.sayHello());
        System.out.println(helloClient.nextCount());
    }
}
/
SQL> create or replace procedure HelloClient(srvName IN VARCHAR2) as LANGUAGE JAVA NAME
'mytest.HelloClient.main(java.lang.String [])';
/
SQL> set define ?
SQL> set serverout on
SQL> exec dbms_java.set_output(32000);
SQL> exec HelloClient('HelloServer');
0
SQL> exec HelloClient('HelloServer');
1
Note that the server is stateful, which means it preserves state across calls. If we exit from the above SQL*Plus session and connect again, we will see:

SQL> set define ?
SQL> set serverout on
SQL> exec dbms_java.set_output(32000);
SQL> exec HelloClient('HelloServer');
2
SQL> exec HelloClient('HelloServer');
3

Concluding this post, I would like to remark that this parallel shared server process runs in RDBMS space, which is not the same as starting an RMI server in the middle tier. The big difference is that all SQL access goes directly to the RDBMS structures in the SGA, because it uses the internal JDBC driver.
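
To illustrate that last point, here is a minimal sketch (not part of the Hello World example above; the class and query are illustrative assumptions) of how code running inside the parallel shared server process can query the database through the internal server-side JDBC driver:

// Hedged sketch: code inside the OJVM process can use the internal (server-side)
// JDBC driver, so SQL runs directly against the session in RDBMS space.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ServerSideQuery {
    public static String instanceName() throws Exception {
        // The internal driver reuses the session's existing connection;
        // it should not be closed by the Java code.
        Connection conn = DriverManager.getConnection("jdbc:default:connection:");
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(
            "select sys_context('USERENV','INSTANCE_NAME') from dual");
        String name = rs.next() ? rs.getString(1) : null;
        rs.close();
        stmt.close();
        return name;
    }
}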

Blogs - EPM, Hyperion, and Essbase

Look Smarter Than You Are - Sun, 2011-11-27 15:37
Blog Seeking Blogs
Hello, all.  I wanted to wait to do a new blog posting until after the holidays.  Originally, I meant Easter which turned into Mother's Day, Memorial Day, Father's Day, Independence Day, Labor Day, Columbus Day, Halloween, Thanksgiving, Black Friday, Black Friday Continued, Cyber Monday Pre-Sale, and a whole lot of other very important holidays.  Rather than wait until Christmas, I thought I would do a very brief blog entry.


Since I started this blog a few years ago, many blogs have sprung up that have excellent information.  I'm sure I don't know about all of them, so I'd like your help in linking to the great Oracle EPM, Hyperion, and Essbase blogs I may be missing.  Have a look at the scroll on the right (if you're reading this through RSS, go to http://looksmarter.blogspot.com/ and look on the right).  If there's something it seems like I'm missing, comment on this entry and I'll add it.


My only criterion is that the blog not be a wholly self-serving marketing blog designed to drive traffic to that person's company's website. For instance, readers of my blog historically find it difficult to find out what company I actually work for (it's interRel, by the way). This is because I believe one should educate first, and if people like what you're sharing, they'll seek you out for work.


Calc Script Class on December 8
Now that I've said that, allow me to be slightly hypocritical for a second and mention that I am teaching one of my once a year "Advanced Essbase: Calc Scripts for Mere Mortals" day-long classes.  I do this once a year and it's about the only time I ever teach a paid class.  Unlike previous years, it's a virtual class, so you can take it from anywhere in the world.  If you want to learn about writing Essbase BSO (and ASO) calc scripts, the class is December 8 and it's open to customers of Oracle and partners as well.  The class is $995 USD and at last check, there were a couple of spots open (awesomeness of the virtual classes).  For more info, visit http://www.interrel.com/currenttraining.aspx.  To register, send an e-mail to Danielle White.


Returning to my original point, if you know of some great blogs I'm missing, comment on the blog with the new address (and yes, it's fine to mention your own blog).
Categories: BI & Warehousing

New release of Lucene Domain Index based on 3.4.0 code

Marcelo Ochoa - Fri, 2011-11-25 13:44
This new release of Lucene Domain Index (LDI) has been in the new SVN for a long time, but due to a lot of work on the commercial version, Scotas, it never went public as a binary installation :(
Several things happened during this time:

  • New web site (thanks a lot to Peter Wehner for the conversion from the Google docs)
  • New hosting at SF.net (now a separate SVN repository, apart from the previous CVS at DBPrism)

The change log of this version is:


  • Use latest merge policy implementation TieredMergePolicy
  • Use total RAM reported by getJavaPoolSize() when setting MaxBufferedDocs
  • Better error reporting when an Analyzer is not found.
  • Replaced execute immediate with open-fetch-close functionality to avoid a core dump on 10g when double-checking for deleted rowids
  • Included a back-port of JUnit4 to the JDK 1.4 version for 10g releases
  • Added a parallel updater process; when working in OnLine mode this process performs write operations on the LDI structure on behalf of the AQ process
  • Deletes no longer require an exclusive write lock on the index storage; deletes are now enqueued just like inserts and updates
  • Updated source to Lucene 3.4.0 code, removed some deprecated API

Download the latest binary distribution from the 3.4.0 directory of the SF.net download area (tested with 10g/11gR2).
The addition of the new parallel shared server process is the major change, speeding up DML operations considerably; I'll write a new post on how this parallel shared server technique works.
Please report any installation issues or bugs at the Support Area of the project.
