Feed aggregator

Compression and SE

Jeff Hunter - Wed, 2011-05-25 17:11
While researching a corrupt block on 11g SE, we came across a number of objects that were compressed according to the data dictionary. How could that be? Compression is not a feature of SE, or so we thought. The objects in question were all indexes. In fact, Oracle creates compressed indexes in ?/apex/core/tab.sql even though we are on SE. Further investigation led me to Doc 1084132.1, which…
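
As a hedged aside (my own sketch, not from the post), the dictionary check can be reproduced with a query against DBA_INDEXES; the JDBC connection details below are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CompressedIndexCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; requires SELECT on DBA_INDEXES.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@dbhost:1521:ORCL", "system", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT owner, index_name FROM dba_indexes WHERE compression = 'ENABLED'")) {
            // Any rows returned are indexes the data dictionary reports as compressed.
            while (rs.next()) {
                System.out.println(rs.getString(1) + "." + rs.getString(2));
            }
        }
    }
}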

FREE OTN Developer Day in Poland June 15

David Peake - Tue, 2011-05-24 18:20
Our FREE OTN Developer Day is going global.
We are going to be in Poland on June 15.
Go here to see the event overview and register.

This is a BYOL (Bring Your Own Laptop) event so that you can keep everything you work on.
These days concentrate on hands-on labs and using the various tools rather than just sitting through presentations.

Hope to see you there!

2012 starts next week!

Andrews Consulting - Tue, 2011-05-24 13:16
Some retailers always rush things a bit by starting to sell Christmas items in September. In a similar way, Oracle likes to jump the gun by starting its Fiscal Year 2012 on June 1. This unusual schedule impacts the way Oracle interacts with its customers in a number of important ways. First, all energy in […]
Categories: APPS Blogs

Oracle at CloudEXPO

Anshu Sharma - Tue, 2011-05-24 06:38
There are a *very* limited number of Oracle Sponsor VIP registrations remaining for CloudExpo (June 6-9, Javits Center, NYC). If you are an Oracle partner doing Cloud architecture and solutions with Oracle technologies, let me know if you wish to attend. I can check if we can get you the complimentary discount code to register for free.

Oracle EBS Customization and Extension - OAF vs ADF vs APEX vs Forms

Andries Hanekom - Fri, 2011-05-20 06:04
The E-Business Suite Technology Group recently released a whitepaper: Extending E-Business Suite 12.1.3 using Oracle Application Express. In summary, "This new whitepaper outlines how to extend Oracle E-Business Suite 12.1.3 (and higher) functionality using Oracle Application Express. Recommended architecture and security considerations are discussed in detail." For some time now EBS customers have used APEX to extend EBS; with the release of this whitepaper the EBS Tech group has acknowledged its growing use and provided recommendations and guidelines for standardised integration.

What's this all about, some might ask. Is Oracle moving to incorporate APEX as part of the default EBS tech stack? What about OAF? Isn't Fusion Applications built on ADF, so what's up? Well, when it comes to Oracle EBS extension and customization, OAF is still top dog; the E-Business Suite Technology Group continues to recommend OAF for EBS extensions.

Without a doubt, ADF is the future. It's a very powerful alternative to OAF, it provides an array of new functionality, and it is used to develop Oracle Fusion Applications. If you are planning a standalone application that doesn't require the tight integration provided by OAF, ADF would be an excellent choice. But where Oracle EBS R11.X and R12.X are concerned, it's always advisable to use a tool set that is part of the current tech stack and provides tight integration, i.e. security, flexfields, personalization, etc.

I have to come clean here: I have never been a supporter of using APEX to extend EBS, but I am glad to see the E-Business Suite Technology Group laying down some standards and guidelines. Whatever the motivation for choosing APEX, and in my experience it's usually an OAF skills shortage, it's good to know that Oracle is bringing some order to the current free-for-all approach to APEX extensions.

Forms? Dead, move on.

Although ADF is the future and APEX is now "supported", the Oracle EBS UI is, and will continue to be, developed using OAF. Its powerful personalization and extension framework, plus transparent upgrades and seamless integration, make OAF the number one choice for EBS extension and customization.

JDev 11g, Task Flows & ADF BC – one root Application Module to rule them all?

Chris Muir - Mon, 2011-05-16 19:25
JDev 11.1.1.5.0

In my previous blog post I discussed the power of the ADF task flow functionality, and the devil in the detail for uninitiated developers using the transaction and data control scope options. This post will extend the discussion on task flows and the ADF Controller's interaction with ADF Business Components, in order to show another scenario where programmers must understand the underlying behaviour of the framework.

Developers who have worked with the ADF framework for some time, especially from the JDeveloper 10g edition and earlier, will likely have stumbled across the concepts of root and nested Application Modules (AM) in the ADF Business Component (ADF BC) layer. The Fusion Guide has the following to say on the two types of AMs:
Application modules support the ability to create software components that mimic the modularity of your use cases, for which your higher-level functions might reuse a "subfunction" that is common to several business work flows. You can implement this modularity by defining composite application modules that you assemble using instances of other application modules. This task is referred to as "application module nesting". That is, an application module can contain (logically) one or more other application modules, as well as view objects. The outermost containing application module is referred to as the "root application module".

At runtime, your application works with a "main" — or what's known as a "root" — application module. Any application module can be used as a root application module; however, in practice the application modules that are used as root application modules are the ones that map to more complex end-user use cases, assuming you're not just building a straightforward CRUD application. When a root application module contains other nested application modules, they all participate in the root application module's transaction and share the same database connection and a single set of entity caches. This sharing is handled for you automatically by the root application module and its "Transaction" object.

The inference is that if, for a single user, you want to support more than one transaction at a time, you must create 2 or more root AMs. However the implication of this is that, as a transaction and a database connection have a 1-to-1 relationship, a user exercising more than one root AM in the ADF BC layer during their session will take out the same number of connections with the database. This further affects scalability: the more connections each user takes out from the database, the more connections in our midtier pool are tied up by each user too.
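
To make the transaction-per-root-AM point concrete, here's a minimal standalone ADF BC client sketch (my own illustration, not from the Fusion Guide; the AM definition and configuration names are assumptions). Each root AM owns its own transaction, so committing one leaves the other untouched:

import oracle.jbo.ApplicationModule;
import oracle.jbo.client.Configuration;

public class TwoRootAmDemo {
    public static void main(String[] args) {
        // Hypothetical AM definition and configuration names, for illustration only.
        ApplicationModule am1 = Configuration.createRootApplicationModule(
                "model.Root1AppModule", "Root1AppModuleLocal");
        ApplicationModule am2 = Configuration.createRootApplicationModule(
                "model.Root2AppModule", "Root2AppModuleLocal");

        // Each AM is its own root, so each holds its own transaction
        // and, by the 1-to-1 relationship, its own database connection.
        System.out.println("am1 is its own root: " + (am1.getRootApplicationModule() == am1));
        System.out.println("am2 is its own root: " + (am2.getRootApplicationModule() == am2));

        am1.getTransaction().commit();   // commits only work done via am1
        am2.getTransaction().rollback(); // am2's transaction is independent of am1's

        Configuration.releaseRootApplicationModule(am1, true);
        Configuration.releaseRootApplicationModule(am2, true);
    }
}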

Task Flow transaction options

The transaction and data control scope behavioural options available to bounded task flows provide a sophisticated set of functionality for spawning and managing one or more transactions during an ADF user's session, an extension of the facilities provided by the ADF BC Application Module. Straight from the Fusion Developer's Guide the task flow transaction options are:

• <No Controller Transaction>: The called bounded task flow does not participate in any transaction management.

• Always Use Existing Transaction: When called, the bounded task flow participates in an existing transaction already in progress.

• Use Existing Transaction If Possible: When called, the bounded task flow either participates in an existing transaction if one exists, or starts a new transaction upon entry of the bounded task flow if one doesn't exist.

• Always Begin New Transaction: A new transaction starts when the bounded task flow is entered, regardless of whether or not a transaction is in progress. The new transaction completes when the bounded task flow exits.

Ignoring the "No Controller Transaction" option which defaults back to the letting the ADF BC layer manage its own transactions, the other task flow transaction options allow the developer to create and reuse transactions at a higher level of abstraction for relating web pages, page fragments and other task flow activities. As such the root AM doesn't constrain when the application establishes a new transaction and connection to the database.

Yet if we now have the option of spawning more transactions in our application, what's the implication for scalability and the number of connections taken out in the midtier and the database for each user? Thanks to a previous OTN forum post and the kind assistance of Steve Muench, this post will demonstrate how the ADF framework as a whole attempts to minimize this issue.

The "No Controller Transaction" option

Before investigating how the ADF framework addresses the scalability issue, it's useful to run through an example of using the "No Controller Transaction" bounded task flow option. This will demonstrate how the framework establishes database connections when defaulting back to the ADF BC layer's own transaction and database connection functionality.

As explained by Frank Nimphius in the following OTN Forum post, the No Controller Transaction option means that task flows take no control of the transactions established during the user's session when the task flow is called; all such functionality is delegated back to the underlying business services. In the context of this blog post the business service layer is ADF Business Components.

As seen in the following picture, the ADF BC layer for our example application is fairly simple:


As can be seen the application is comprised of two ADF BC Application Modules (AM), named Root1AppModule & Root2AppModule. Both expose the same View Object (VO) OrganisationsView as two separate usages. In the following picture you see Root1AppModule exposes OrganisationsView as OrganisationsView1:


...and in the following picture Root2AppModule exposes OrganisationsView2 off the same OrganisationsView VO:


For what it's worth to the discussion, OrganisationsView is based on the same Organisations EO:


However as there are 2 root AMs at play in our example application, each AM will instantiate its own OrganisationsView instance (namely OrganisationsView1 and OrganisationsView2 respectively) and its own Organisations EO cache, implying the record sets in the midtier are distinctly different even though we're using the same underlying design time constructs.

As explained in the previous blog post, how do we know when an AM actually creates a connection? Without knowing this, in our trials with the transaction options supported by Bounded Task Flows, unless ADFc explicitly throws an error we'll have trouble discerning what the ADF BC layer is actually doing underneath the task flow transaction options.

While external tools like Fusion Middleware Control will give you a good insight into this, the easiest mechanism is to extend each root Application Module's ApplicationModuleImpl class with our own implementation and override the create() and prepareSession() methods. The following code shows an example for Root1AppModuleImpl:
public class Root1AppModuleImpl extends ApplicationModuleImpl {
    // Other generated methods

    @Override
    protected void create() {
        super.create();
        if (isRoot())
            System.out.println("########Root1AppModuleImpl.create() called. AM isRoot() = true");
        else
            System.out.println("########Root1AppModuleImpl.create() called. AM isRoot() = false");
    }

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        if (isRoot())
            System.out.println("########Root1AppModuleImpl.prepareSession() called. AM isRoot() = true");
        else
            System.out.println("########Root1AppModuleImpl.prepareSession() called. AM isRoot() = false");
    }
}
Pretty much the code for Root2AppModuleImpl is the same, except the class name changes in the System.out.println calls.

Overriding the create() method allows us to see when the Application Module is not just instantiated, but ready to be used. This doesn't tell us when a transaction and connection are established with the database, but it is useful in identifying situations where the framework creates a root or nested AM.

The prepareSession() method is a chokepoint method the framework uses to set database session state when a connection is established with the database. As such overriding this method allows us to see when the AM does establish a new connection and transaction.

Once we've setup our ADF BC Model project, we'll now create two Bounded Task Flows (BTFs) to allow the user to work with the underlying AMs and VOs.

Both our task flows are comprised of only 2 activities: a page to view the underlying Organisations data and an Exit Task Flow Activity. Root1AppModuleTaskFlow contains the following activities:


...and Root2AppModuleTaskFlow is virtually identical:


The OrganisationsTable1.jspx and OrganisationsTable2.jspx pages in each task flow show an editable table of Organisations data, visually there's no difference between them:


While the pages don't differ visually, underneath their pageDef files are completely different as they source their data from different root Application Modules and View Objects. In the following picture we can see the OrganisationsTable1PageDef.xml file makes use of the ADF BC OrganisationsView1 VO mapping to the Root1AppModuleDataControl:


And in the following picture we can see the OrganisationsTable2PageDef.xml file uses the ADF BC OrganisationsView2 VO mapping to the Root2AppModuleDataControl:


Finally we can see the No Controller Transaction option set for the first task flow:


..and the second:


Note I've deliberately set the data control scope to Shared to highlight a point in the future.

At this point we'll include a Start.jspx page in our Unbounded Task Flow (UTF) adfc-config.xml file, with navigation rules to call each BTF and navigation rules to return when the Exit Task Flow Return activity is called inside each BTF:


On running our application starting with the Start.jspx page we see:


At this point inspecting the console output for the application in the JDeveloper log window, we don't see the System.out.println messages:


Returning to the Start page and selecting the Go Root1AppModuleTaskFlow button we then see:


Note in the above picture I've deliberately selected the 5th record and changed the Organisation's name to uppercase. Checking the log window we now see:


As such we can see that the Root1AppModule in the ADF BC layer has been instantiated, and has established a connection with the database, also establishing a transaction.

If we now return via the Exit button to the Start.jspx page in the UTF, then select the Go Root2AppModuleTaskFlow button we see:


Note that the 5th record hasn't been updated to capitals, and indeed the 1st record is the default selected record, not the 5th, implying that we're not seeing the same cache of data from the midtier: essentially there are 2 separate transactions with 2 separate Organisations EO caches, and in turn two separate VOs with their own current row indicators. This last point shows that the Shared data control scope we set in the task flows has no effect when the No Controller Transaction option is used.

Returning to the JDev log window we see that the second Root2AppModule was instantiated and has separately established its own connection and therefore transaction:


For completeness if we select the Exit button and return to the Start.jspx page, then return to the first task flow we can see the 5th record is still selected and updated:


The key point from this example is the fact that each Application Module is instantiated as a root AM and also creates its own connection.

A chained "No Controller Transaction" task flow example

With the previous example in hand, and building up to our final example, in this section I'd like to rearrange our example application so that rather than calling the two task flows separately, we instead chain them together, the 1st calling the 2nd.

The substantial change to the setup is that of the Root1AppModuleTaskFlow, where we now include a Task Flow Call to the Root2AppModuleTaskFlow, and a small modification to the OrganisationsTable1.jspx page to include a button to navigate to the Root2AppModuleTaskFlow:


In running the application, we first land in the Start.jspx page:


At this stage the log window shows no root AMs have been created:


On selecting the Go Root1AppModuleTaskFlow button we arrive at the OrganisationsTable1.jspx page:


The log window shows the Root1AppModule has been instantiated and a new connection/transaction created:


This time we'll select and change the 6th record to all caps:


Now we'll select the Go Root2AppModuleTaskFlow button that's located in the OrganisationsTable1.jspx page. On navigating to the new page we see:


As can be seen, the current row indicator lies on the first row and the 6th row hasn't been modified, exactly the same as the last example. In addition in the JDev log window we can see the establishment of the new root AM and connection/transaction:


The conclusion at this point of the blog post is that chaining task flows when the No Controller Transaction option is used has no effect on the root AMs and the establishment of connections/transactions.

Always Begin New Transaction and Always Use Existing transaction example

With the previous 2 examples, we can see that with the No Controller Transaction option in use for our bounded task flows, the design of our root Application Modules definitely has an effect on the number of transactions, and therefore connections, taken out from the database.

With the following third and final example, we'll introduce BTF transaction options that hand control of the transactional behaviour of the underlying ADF BC business services to the task flows themselves.

Taking our original Root1AppModuleTaskFlow, we'll now change its transaction options to use Always Begin New Transaction and an Isolated data control scope:


For our second task flow Root2AppModuleTaskFlow, which we intend to call from Root1AppModuleTaskFlow, we'll change its transaction options to use Always Use Existing Transaction. The IDE in this case enforces a Shared data control scope:


From the task flow transaction options described at the beginning of this post via the Fusion Dev Guide, the options we've selected here should mean Root1AppModuleTaskFlow establishes the transaction and therefore the connection, and the second task flow, Root2AppModuleTaskFlow, should attach itself to the same transaction and connection. However, from our previous examples with the No Controller Transaction option, surely the root Application Modules will ensure that they both create their own transactions/connections?

Running this application we first hit our Start.jspx page:


And our log window shows no AMs have been created or established connections at this stage:


On navigating to Root1AppModuleTaskFlow via the associated button in the Start.jspx page we see much the same as before:


Yet our log window shows something interesting:


We can see that Root1AppModule has been created as a root AM, and then it established a connection via the prepareSession() method. Oddly, however, we can see a further Root1AppModule has been created. Even more oddly, this secondary AM is not a root AM but a nested AM. By deduction, as there are no other log entries, this second instance of Root1AppModule must be nested under the root Root1AppModule. Interesting, but let's continue with the example.

Now that we've entered the Root1AppModuleTaskFlow, let's modify the 7th record:


Followed by selecting the button to navigate to the Root2AppModuleTaskFlow, we then see on visiting the OrganisationsTable2.jspx page of the second task flow:


Note the 7th record's data, and where the current row indicator is located! Before drawing conclusions, let's look at the log window's output:


From the log window we can see a 3rd AM has been instantiated, this time a Root2AppModule as a nested AM.

What's going on?

Without a key bit of information from Steve Muench's reply to my previous OTN Forums post, it may be hard to determine the BTF behaviour in the last scenario. In the context of ADF BC and BTFs, given the right combination of transaction and data control scope options, the framework will automatically nest your AMs regardless of whether they're defined as root AMs.

So when we're running our application, on calling the first task flow, we see:

a) A single root AM created based on Root1AppModule, as the framework needs at least a single root AM to drive the transactions and connections. Let's refer to this as Root1AppModule1.

b) A second instance of Root1AppModule as a nested AM under its own root instance. Let's refer to this as Root1AppModule2. This is a little odd, but as the framework automatically nests the AMs of the called BTF under a root AM instance, it's using the first AM encountered for dual purposes, essentially nesting Root1AppModule2 under Root1AppModule1.

c) By the time we hit our second BTF, only one instance of Root2AppModule is created, nested under Root1AppModule1. Let's refer to this as Root2AppModule1.

To summarize at this point, our AM structure at runtime is:

(Root) Root1AppModule1
    (Nested Level 1) Root1AppModule2
    (Nested Level 1) Root2AppModule1

Given that we now know nesting is occurring, it does explain why we're seeing the changes to the 7th record in both BTF OrganisationsTable pages. Only if separate root AMs were used would there be separate EO caches for the Organisations table. However as the BTFs have nested the AMs, they're essentially sharing the same EO cache. So a change to data in one BTF will be reflected in the second, as long as they share transactions and are ultimately based on the same EO.

The final mystery is why, if the data changes are reflected in each BTF, the current row indicators of the underlying VOs are out of sync. This is simply explained by the fact that the two different VOs defined in each AM, namely OrganisationsView1 and OrganisationsView2, are instantiated separately at runtime, even though ultimately they both live under the same Root1AppModule1 and share the same EO cache. As such both VOs maintain their own separate row currency, and other state carried by ADF BC View Objects.
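
If you want to see this nesting for yourself in the logs, a small hedged variation of the earlier create() override (my own sketch, not from the post) can also name each instance's root, since getDefFullName() and getRootApplicationModule() are part of the standard oracle.jbo API:

@Override
protected void create() {
    super.create();
    // Report whether this AM instance is root or nested, and name its root;
    // the nested Root1AppModule2 and Root2AppModule1 instances should report
    // the root Root1AppModule1 instance's definition as their root.
    System.out.println("######## " + getDefFullName()
            + " created. isRoot() = " + isRoot()
            + ", root AM = " + getRootApplicationModule().getDefFullName());
}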

Changing AM connection configurations

At this point we may start to ask how the framework decides to nest the AMs under the one parent AM. Is the underlying framework smart enough to look at the database connections of the separately defined root AMs and, on determining they're one and the same, nest the AMs since it doesn't matter when they both connect to the same database?

From my testing it appears to make no difference if you change the connections. It's essentially the first AM whose connection details are used, and it's just assumed the second AM uses the same connection details. The further inference is that it's just assumed all the database objects required by the second root AM's database connection are available to the first.

This has two knock on effects:

1) If both AMs did connect to the same database schema, and the first connection is missing privileges on objects required by the second AM's objects, you'll hit database privilege errors at runtime when those second AM objects interact with the database (e.g. at query and DML time).

2) If you actually use your root AMs to connect to entirely different data sources, such as different databases, this automatic nesting will cause your application to fail.

The first is easily solved with some database privilege fixes. For the second, the implication is that you should move back to the No Controller Transaction option to return to an application where the root AMs use their own connections.

Some caveats

If the automatic nesting functionality proves useful to you, be warned of one caveat. From the OTN Forum replies from Steve Muench, even though he describes the fact that the framework will automatically nest your AMs for the purposes of connections/transactions, don't assume this means other AM features are joined or will always display the same behaviour. To quote:
In fact, in a future release we will likely be changing that implementation detail so that the AMs are always used from their own AM pool, however they will share a transaction/connection.

A second caveat is around the mixing of BTF transaction options. Notably, because the No Controller Transaction option makes inherently different use of root AMs to that of the normal transaction options, mixing No Controller Transaction BTFs with BTFs not using this option could lead to some disturbing and unpredictable behaviour around transactions and connections. Either use No Controller Transaction exclusively, or not at all.

A third and final caveat is that this and the previous post describe the transaction behaviours in the context of the interaction with ADF Business Components. Readers should not assume that the same transaction behaviour will be exhibited by different underlying business services such as EJBs, POJOs or Web Services. For example, Web Services don't have the concept of transactions, so we can probably guess that there's no point using anything but the No Controller Transaction option... however again, you need to experiment with these alternatives yourself; don't base your conclusions on this post.

The implication for ADF Libraries

In Andrejus Baranovski's blog post he shows a technique for mashing the Model projects of several ADF Libraries containing BTFs and their own AMs into a single Model project, such that a single root AM is defined across all BTFs at design time, to be reused by all ADF Libraries. With this blog post we see that while Andrejus's technique is fine for those relying on the No Controller Transaction options of BTFs, such an extended build of ADF Libraries and their AM Model projects is not essential to minimize the number of connections of our application.

Feedback

This blog post and the previous one have taken a lot of research and testing to arrive at their conclusions. If future readers find anything wrong, or find alternative scenarios where what's written here doesn't pan out, it would be greatly appreciated if you could leave a comment to that effect, so that this post doesn't mislead future readers.

JDev 11g, Task Flows & ADF BC – the Always use Existing Transaction option – it's not what it seems

Chris Muir - Mon, 2011-05-16 18:46
JDev 11.1.1.5.0

Oracle's JDeveloper 11g introduces the powerful concept of task flows to the Application Development Framework (ADF). Task flows enable "Service Oriented Development" (akin to "Service Oriented Architecture") allowing developers to align web application development closely to the concept of business processes, rather than a disparate set of web pages strung loosely together by URLs.

Yet as the old saying goes, "with great power comes great responsibility", or alternatively, "the devil is in the detail". Developers need to have a good grasp of the task flow capabilities and options in order to not paint themselves into a corner. This is particularly true of the transaction and data control scope behavioural options provided by "bounded" task flows.

The transaction and data control scope behavioural options available to bounded task flows provide a sophisticated set of functionality for spawning and managing one or more transactions during an ADF user's session. Straight from the Fusion Developer's Guide the transaction options are:

• <No Controller Transaction>: The called bounded task flow does not participate in any transaction management.

• Always Use Existing Transaction: When called, the bounded task flow participates in an existing transaction already in progress.

• Use Existing Transaction If Possible: When called, the bounded task flow either participates in an existing transaction if one exists, or starts a new transaction upon entry of the bounded task flow if one doesn't exist.

• Always Begin New Transaction: A new transaction starts when the bounded task flow is entered, regardless of whether or not a transaction is in progress. The new transaction completes when the bounded task flow exits.

In recently discussing the task flow transaction options on the OTN Forums (with the kind assistance of Frank Nimphius) it's become apparent that the transaction options described in the Fusion Guide are written from the limited perspective of the ADF controller (ADFc). Why a limited perspective? Because the documentation doesn't consider how these transaction options are dealt with by the underlying business services layer; the controller makes no assumptions about the underlying layers, it is deliberately an abstraction that sits on top. As such, if we consider ADF Business Components (ADF BC), ADFc can interpret the task flow transaction options as it sees fit. The inference is that ADF BC can introduce subtle nuances in how the transaction options work as called by the controller.

The vanilla "Always Use Existing Transaction" option

The Fusion Guide is clear in the use of the task flow "Always Use Existing Transaction" option:

• Always Use Existing Transaction: When called, the bounded task flow participates in an existing transaction already in progress.

The inference here is that the task flow won't create its own transaction, but rather will attach itself to an existing transaction established by its calling task flow (let's refer to this as the "parent" task flow), or a "grandparent" task flow somewhere up the task flow call stack.

To test this let's demonstrate how ADFc enforces this option.

In our first example application we have an extremely simple ADF BC model of a single Entity Object (EO), single View Object (VO) and Application Module (AM), serving data from a table of Organisations in my local database:



From the ViewController side we have a single Bounded Task Flow (BTF) OrgTaskFlow1 comprised of a single page:


....where the single page displays a table of Organisations via the underlying ADF Business Components:


...and the transaction options of the BTF are set to Always Use Existing Transaction. By default the framework enforces the data control scope must be shared:


In order to call the BTF, from our Unbounded Task Flow (UTF) configured in the adfc-config.xml file, we have a simple Start.jspx page, which via a button invokes a Task Flow Call to the BTF OrgTaskFlow1:


On starting the application, running the Start page, selecting the button to navigate to the Task Flow Call, we immediately hit the following error:
oracle.adf.controller.activity.ActivityLogicException: ADFC-00006: Existing transaction is required when calling task flow '/WEB-INF/OrgTaskFlow1.xml#OrgTaskFlow1'.

Via this error we can see ADFc is enforcing at runtime that the OrgTaskFlow1 BTF is unable to run, as it requires its parent or grandparent task flow to have established a transaction on its behalf. With this enforcement we can (incorrectly?) conclude that Oracle's controller will never allow the BTF to run if a transaction hasn't already been established. However as you can probably guess, this post will demonstrate this isn't always the case.

A side note on transactions

Before showing how to create a transaction with the Always Use Existing Transaction option, a discussion on how we can identify transactions created via ADF BC is required.

Readers familiar with ADF Business Components will know that root Application Modules (AM) are responsible for the establishment of connections and transactional processing with the database. Ultimately the concept of transactions in the context of the ADF Controller is that of the underlying business services, and by inference, when ADF Business Components are used, it's the root Application Modules that provide this functionality.

It should also be noted, by inference, that a transaction and a connection are one and the same here, in the sense that a connection with the database allows you to support a transaction, and if you have multiple transactions you therefore have multiple connections. Simply put, you can't have one without the other.

Yet given it's the Application Module that provides the ability to create connections and transactions, how do we know when an AM actually creates a connection? Without knowing this, in our trials with the transaction options supported by Bounded Task Flows, unless ADFc explicitly throws an error we'll have trouble discerning what the ADF BC layer is actually doing underneath the task flow transaction options.

While external tools like Fusion Middleware Control will give you a good insight into this, the easiest mechanism is to extend the Application Module's ApplicationModuleImpl class with our own AppModuleImpl and override the create() and prepareSession() methods:
public class AppModuleImpl extends ApplicationModuleImpl {
    // Other generated methods

    @Override
    protected void create() {
        super.create();
        if (isRoot())
            System.out.println("######## AppModuleImpl.create() called. AM isRoot() = true");
        else
            System.out.println("######## AppModuleImpl.create() called. AM isRoot() = false");
    }

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        if (isRoot())
            System.out.println("######## AppModuleImpl.prepareSession() called. AM isRoot() = true");
        else
            System.out.println("######## AppModuleImpl.prepareSession() called. AM isRoot() = false");
    }
}
Overriding the create() method allows us to see when the Application Module is not just instantiated, but ready to be used. This doesn't tell us when a transaction and connection is established with the database, but, is useful in identifying situations where the framework creates a nested AM (which is useful for another discussion about task flows, stay tuned for another blog post).

The prepareSession() method is a chokepoint method the framework uses to set database session state when a connection is established with the database. As such overriding this method allows us to see when the AM does establish a new connection and transaction.
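
As an aside, and purely as a hedged sketch of my own (not from the post) showing why prepareSession() is that chokepoint: a typical production use of the method is to set per-connection database session state, for example tagging the session with a client identifier via DBMS_SESSION. The appUserName user-data key below is hypothetical:

import java.sql.PreparedStatement;
import java.sql.SQLException;

import oracle.jbo.JboException;
import oracle.jbo.Session;
import oracle.jbo.server.ApplicationModuleImpl;

public class AppModuleImpl extends ApplicationModuleImpl {

    @Override
    protected void prepareSession(Session session) {
        super.prepareSession(session);
        // This method fires once per new database connection, which is why
        // overriding it reveals exactly when a connection/transaction is established.
        PreparedStatement st = null;
        try {
            st = getDBTransaction().createPreparedStatement(
                    "BEGIN dbms_session.set_identifier(:1); END;", 0);
            Object user = session.getUserData().get("appUserName"); // hypothetical key
            st.setString(1, user == null ? "anonymous" : user.toString());
            st.execute();
        } catch (SQLException e) {
            throw new JboException(e);
        } finally {
            if (st != null) {
                try { st.close(); } catch (SQLException e) { /* ignore close failure */ }
            }
        }
    }
}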

Bending the "Always Use Existing Transaction" option to create a transaction

Now that we have a mechanism for seeing when transactions are established, let's show a scenario where the Always Use Existing Transaction option does create a new transaction.

In our previous example our Unbounded Task Flow called our OrgTaskFlow1 Bounded Task Flow directly. This time let's introduce an intermediate Bounded Task Flow called the PregnantTaskFlow. As such our UTF Start page now calls the PregnantTaskFlow:


The PregnantTaskFlow will set its transaction option to Always Begin New Transaction and an Isolated data control scope:


By doing this we are setting up a scenario where the parent task flow will establish a transaction, which will be used by the OrgTaskFlow1 later on. Next within the PregnantTaskFlow we include a single page to land on called Pregnant.jspx, which includes a simple button to then navigate to the OrgTaskFlow1 task flow via a Task Flow Call in the PregnantTaskFlow itself:


The Pregnant.jspx page is only necessary as it gives a useful landing page when the task flow is called, to see what the task flow has done with transactions before we call the OrgTaskFlow1 BTF.

The transaction options of the OrgTaskFlow1 remain the same, Always Use Existing Transaction and a Shared data control scope:


With the moving parts of our application established, if we now run our application starting with the Start page:


...clicking on the button we arrive on the Pregnant.jspx page within the PregnantTaskFlow BTF:


(Oops, looks like this picture has been lost... I'll attempt to restore this picture soon)

Remembering that our PregnantTaskFlow is responsible for establishing the transaction, we should therefore see our Application Module create() and prepareSession() methods write out System.out.println messages to the console in the JDev log window:


Hmmm, interesting: the log window is bare, no sign of our messages? So our PregnantTaskFlow was set to create a new transaction, but no such transaction, nor for that matter any connection with the database, was established?

Here's the interesting point of our demonstration. If we then select the button in the Pregnant.jspx page which will navigate to the OrgTaskFlow1 task flow call activity in the PregnantTaskFlow, firstly we see in the browser our OrgList.jspx page:


According to our previous tests at the beginning of this post we may have expected the ADFC-00006 error "Existing transaction is required", but instead the page has rendered?

In addition if we look at our log window:


...we now see our System.out.println messages in the console, showing that the AM create() methods were called and a new connection was established to the database via the prepareSession() method being called too.

(Why are there 2 calls to create() for AppModuleImpl? The following blog post on root AM interaction with task flows explains all.)

The contradictory result here is that even though we set the Always Use Existing Transaction option for the OrgTaskFlow1 BTF and expected the ADFC-00006 error, OrgTaskFlow1 did in fact establish a new transaction.

What's going on?

An easy but incorrect conclusion to make is this is an ADF bug. However if you think through how the ADF framework works with bindings to the underlying services layer, in our context ADF BC, this actually makes sense.

From the point of view of a task flow, there is no inherent and directly configured relationship between the task flow and the business services layer/ADF BC. For example, there is no option in the task flow properties to say which Data Control, mapping to an ADF BC Application Module, the task flow will use. The only point in the framework where the ADF view and controller layers touch the ADF BC side is through the pageDef binding files, which are used by distinct task flow activities (including pages and page fragments) within the task flow as we navigate through it (i.e. not by the task flow itself). As such, until the task flow hits an activity that exercises a binding, indirectly calling the ADF BC Application Module via a Data Control, the task flow has no way of actually establishing the transaction.

That's why in the demonstrations above I referred to the intermediate task flow as the "pregnant" task flow. This task flow knows it wants to establish a transaction with the underlying business service's Application Module through a binding layer Data Control; it's effectively pregnant, waiting for that event, but it can't deliver until one of its child activities exercises a pageDef file with a call to the business service. To take the analogy too far: you're in labour expecting your first child, you've rushed to the hospital, but you're told you'll have to wait as the midwife hasn't arrived yet... you know at this point you're going to have this damned kid, but you've got to desperately wait until the midwife arrives ;-)

By chance in our example, the first activity in the PregnantTaskFlow that does have a pageDef file is the OrgList.jspx page, which resides in the OrgTaskFlow1 task flow called via a task flow call in the PregnantTaskFlow. So in a sense, even though OrgTaskFlow1 says it won't create a transaction, it in fact does.

Why does this matter?

At this point of the discussion you might think this is all very interesting, but rather an academic exercise. Logically there's still only one transaction established for the combination of the PregnantTaskFlow and OrgTaskFlow1, regardless of where the transaction is actually established. Why does it matter?

Recently on the ADF Enterprise Methodology Group I started a discussion on building task flows for reuse. Of specific interest, I asked the question of which data control scope and transaction options are the most flexible to pick so that we don't limit the reusability of our task flows. If we set the wrong options, such as Always Use Existing Transaction, errors like ADFC-00006 may make the task flow unreusable, or at least limited in reuse to specific scenarios.

The initial conclusion from the ADF EMG post was that only the Use Existing Transaction if Possible and Shared data control scope options should be used, as this option will reuse an existing transaction if available from the calling task flow, or establish a new transaction if one isn't available.

However from the conclusion of this post we can see the Always Use Existing Transaction option is in fact more flexible than first thought, as long as we at some point wrap it in a task flow that starts a transaction, giving us another option when building reusable task flows.

Some caveats

A caveat also shared by the next blog post on task flow transactions is that both posts describe the transaction behaviours in the context of the interaction with ADF Business Components. Readers should not assume that the same transaction behaviour will be exhibited by different underlying business services such as EJBs, POJOs or Web Services. For example, Web Services don't have the concept of transactions, so we can probably guess that there's no point using anything but the No Controller Transaction option... however again, you need to experiment with these alternatives yourself; don't base your conclusions on this post.

Further reading

If you've got this far, I highly recommend you follow this post up by reading my next blog post on root Application Modules and how the transaction options of task flows change their behaviour.

CP10 for Discoverer 10.1.2.3

Michael Armstrong-Smith - Mon, 2011-05-16 16:56
Just wanted to let you know that on April 18, 2011, Oracle released CP10 for 10.1.2.3. You will find it on MetaLink as patch number 11674847. When compared to CP9, 10 bugs have been fixed.


Note: when you download the readme from My Oracle Support, from CP9 onwards Oracle has placed the new bug fixes at the top of the list.

So far this cumulative patch has been released for the following platforms:
  • IBM AIX on POWER systems (64-bit)
  • Linux x86
  • Microsoft Windows 32-bit
  • Oracle Solaris on SPARC (64-bit)
If you are upgrading to CP10 from any patch level prior to CP4, then JDBC patch p4398431_10105_GENERIC.zip for bug 4398431 (release 10.1.0.5) needs to be installed before you apply CP10.

Note: please take a look at the comments posted below and if anyone has any experience of CP10, good or bad, please let me know.

alert.log appears not be updated

Charles Schultz - Mon, 2011-05-16 15:09
After a few days of spinning my wheels and subjecting the poor recipients of oracle-l to multiple posts, I have identified an issue in Oracle code that I believe needs to be looked at.

First, some background.
We are running Oracle EE 11.1.0.7 on Solaris 10. We also have a job that occasionally bzips (compresses) the alert.log. The logic in the job is supposed to check whether the file is actively being written to before zapping it, but by pure chance (so it would seem), in this particular case the alert.log was still open by the database when the file was scorched. This led to the appearance of the alert.log not receiving any more updates from the database. We attempted to bounce the database, which had no discernible effect. I also changed the diagnostic_dest, which took us from slightly strange to absolutely bizarre, and which opens the door for the rest of this post.


What I found
After changing diagnostic_dest several times, posting on oracle-l and the Oracle Community forums, playing tag with an Oracle Support analyst, and running lots of truss commands against sqlplus, I started to focus on this result from truss:
access("./alert.log", F_OK)              = 0

Now, you may notice that this "access" call is saying that the file in question ("./alert.log") exists. This caused no small amount of head-scratching. I got the same results no matter which directory I ran the commands from. On my system, I only had two files with this name, one in $ORACLE_HOME/dbs and one in $DIAG/trace. Neither was actively updated by the database. It was not clear to me, at first, that Oracle was finding one of these log files, especially since it never did anything with it. I searched file descriptors in /proc/*/fd and found nothing. I even grepped keywords from all text files looking for strings that should show up in this particular alert.log.

For the life of me, I could not figure out which directory ./alert.log was in. When I compared with other databases, this same access call always returned Err#2 ENOENT. So I knew this must be key, but I wasn't sure exactly how. On a whim, I decided to delete the alert.log in $ORACLE_HOME/dbs. Lo and behold, the problem seemed to magically go away.

The BUG
So here is the root problem, in my opinion. The Oracle code line is looking for $ORACLE_HOME/dbs/alert.log, but completely fails to write to the file if it is found. Instead, the branch simply exits out. How is that helpful?

In retrospect....
I believe that when I changed diagnostic_dest to a non-existent directory, Oracle automatically created alert.log in $ORACLE_HOME/dbs. I learned a few tidbits along the way. :) One can use KSDWRT to write messages to the alert.log; Dan Morgan's library (still hosted by PSOUG) shows this. I also learned a little more about truss and dtrace as I was researching this issue.
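
In that spirit, here's a hedged sketch of my own (not from the post) of calling KSDWRT from Java via JDBC; the connection details are made up, and the account needs EXECUTE on SYS.DBMS_SYSTEM:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class AlertLogWriter {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; requires EXECUTE on SYS.DBMS_SYSTEM.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "system", "password")) {
            // KSDWRT destination: 1 = process trace file, 2 = alert.log, 3 = both.
            try (CallableStatement cs = conn.prepareCall(
                    "BEGIN sys.dbms_system.ksdwrt(2, ?); END;")) {
                cs.setString(1, "Test message written via KSDWRT");
                cs.execute();
            }
        }
    }
}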

Now the hard part; convincing Oracle that this is a problem and needs to be corrected.

The JDE alliance with IBM gets stronger

Andrews Consulting - Mon, 2011-05-16 14:03
One of the more intriguing aspects of the IT industry is that businesses are often both great friends and bitter enemies at the same time. Oracle and IBM provide a great example. Oracle CEO Larry Ellison could hardly have been more vitriolic in his attacks on IBM’s hardware business at OpenWorld last September. Ellison used […]
Categories: APPS Blogs

Client wants to do a DR test; changes made during the DR test should not be reflected on prod (i.e. the changed data during the DR test should not reflect on the primary)

Ayyappa Yelburgi - Sun, 2011-05-15 22:53
Possibility 1: Planned failover
Note: the primary database will be down until the DR test completes.
a. Take a cold/hot/RMAN backup of the primary database before the DR test.
b. Take a cold/hot/RMAN backup of the standby database before the DR test.
c. Shut down the primary database.
d. On the standby database, issue the command: SQL> alter database activate standby database;
e. Once the standby database is activated, execute the below command.

CP9 for Discoverer 10.1.2.3

Michael Armstrong-Smith - Thu, 2011-05-12 14:28
Just wanted to let you know that on January 11, 2011, Oracle released CP9 for 10.1.2.3. You will find it on MetaLink as patch number 10233659. When compared to CP8, 6 bugs have been fixed.


Note: when you download the readme from My Oracle Support, from this release Oracle has started to place the new bug fixes at the top of the list.

So far this cumulative patch has been released for the following platforms:
  • HP-UX Itanium
  • HP-UX PA-RISC (64-bit)
  • IBM AIX on Power Systems (64-bit)
  • Linux x86
  • Microsoft Windows 32-bit
  • Oracle Solaris on SPARC (64-bit)
If you are upgrading to CP9 from any patch level prior to CP4, then JDBC patch p4398431_10105_GENERIC.zip for bug 4398431 (release 10.1.0.5) needs to be installed before you apply CP9.
This patch needs to be applied to all Oracle Homes, i.e. Infrastructure home as well as all related midtier homes.
Bug 4398431 - HANG WHEN RETRIEVING A CONNECTION FROM THE IMPLICIT CONNECTION CACHE

CP2 for Discoverer 11g released

Michael Armstrong-Smith - Thu, 2011-05-12 14:16
Just wanted to let you know that on January 11, 2011, Oracle released CP2 for Discoverer 11.1.1.2.0. This is applicable for both Discoverer Plus and Viewer. You will find it on My Oracle Support (formerly MetaLink) as patch number 10409451. There are 5 bugs fixed in this cumulative patch.

So far this cumulative patch has been released for the following 5 platforms:
  • Linux x86
  • Linux x86-64 bit
  • Microsoft Windows (32-bit)
  • Microsoft Windows x64 (64-bit)
  • Oracle Solaris on SPARC (64-bit)

Configuring Discoverer Plus to pre-populate login credentials

Michael Armstrong-Smith - Thu, 2011-05-12 13:20
Have you ever noticed how Discoverer does not remember your user name, database and EUL whenever you log out and wished there was a way to make it do so?

Well, there is a way but you need to add some parameters to your URL to make it do so.

Let's assume the following:
  • User Name is michael
  • Database is prod
  • EUL is eul5_us
All you need to do is to add switches to your URL and then save it in your favorites. The switches you need are:
  • For User Name use us=
  • For Database use database=
  • For EUL use eul=
Putting this all together I can use: http://myserver.com:7779/discoverer/plus?us=michael&database=prod&eul=eul5_us

If you are using E-Business Suite you can pre-populate this setting too by adding lm=applications, like this:

http://myserver.com:7779/discoverer/plus?lm=applications&us=michael&database=prod&eul=eul5_us

Running Plus in IE8

Michael Armstrong-Smith - Thu, 2011-05-12 13:15
If you are experiencing issues running Discoverer Plus inside Microsoft IE8 the following comments may help.

So far, I have noticed that under no circumstances will Discoverer run in IE8 when it is configured to use JInitiator. If your company has enabled Discoverer to run primarily using JInitiator, try adding the following parameter to your URL: _jvm_name=sun

Your URL should look something like this: http://myserver.com:7778/discoverer/plus?_jvm_name=sun

Now all this assumes that your Discoverer administrator has enabled a more recent Sun Java than the one Discoverer comes installed with, namely 1.4.0_06.

Should you find that you have this version installed, please upgrade the server Java and try again.

OWB runtime repository

Klein Denkraam - Thu, 2011-05-12 07:25

I have been looking around the OWB runtime repository from time to time, mainly because the Control Center isn't always the speedy friend you need when things get tough. It shows a lot of white screen a lot of the time while waiting for results to show. So I made myself a view on the runtime repository. I have been meaning to share it for some time, but did not get around to it, until I recently saw a bit of much needed and long overdue OWB 11gR2 documentation for the runtime repository. I have not yet checked whether I have taken any shortcuts through the model, but when checking leads to improvements I will publish them here. So here it is.


CREATE OR REPLACE VIEW VW_RT_AUDIT_INFO
  (EXECUTION_NAME, RETURN_RESULT, STARTTIME, ENDTIME, ELAPSE_TIME,
   ELAPSE_FORMAT, SELECTED, INSERTED, UPDATED, DELETED,
   DISCARDED, MERGED, CORRECTED, ERROR#, EXECUTION_AUDIT_STATUS,
   MESSAGE_SEVERITY, MESSAGE_TEXT, PARAMETER_NAME, VALUE, CREATION_DATE,
   OBJECT_NAME, OBJECT_LOCATION_NAME, TASK_NAME,
   TOP_LEVEL_EXECUTION_AUDIT_ID, EXECUTION_AUDIT_ID,
   PARENT_EXECUTION_AUDIT_ID)
AS
SELECT e.execution_name,
       e.return_result,
       e.created_on starttime,
       e.updated_on endtime,
       e.elapse_time,
       TO_CHAR (TRUNC (SYSDATE, 'DD') + e.elapse_time / (24 * 3600),
                'HH24:MI:SS') AS elapse_format,
       DECODE (x.sel, NULL, 0, x.sel) AS selected,
       DECODE (x.ins, NULL, 0, x.ins) AS inserted,
       DECODE (x.upd, NULL, 0, x.upd) AS updated,
       DECODE (x.del, NULL, 0, x.del) AS deleted,
       DECODE (x.dis, NULL, 0, x.dis) AS discarded,
       DECODE (x.mer, NULL, 0, x.mer) AS merged,
       DECODE (x.cor, NULL, 0, x.cor) AS corrected,
       DECODE (x.err, NULL, 0, x.err) AS error#,
       e.execution_audit_status,
       m.message_severity,
       m.message_text,
       p.parameter_name,
       p.value,
       m.created_on AS creation_date,
       e.object_name,
       e.object_location_name,
       e.task_name,
       e.top_level_execution_audit_id,
       e.execution_audit_id,
       e.parent_execution_audit_id
  FROM all_rt_audit_executions e
  LEFT JOIN all_rt_audit_exec_messages m
    ON e.execution_audit_id = m.execution_audit_id
  LEFT JOIN all_rt_audit_execution_params p
    ON e.execution_audit_id = p.execution_audit_id
   -- AND p.parameter_name LIKE '%SPEELR%'
   AND p.parameter_name NOT IN ('PROCEDURE_NAME', 'PURGE_GROUP', 'OPERATING_MODE',
                                'MAX_NO_OF_ERRORS', 'AUDIT_LEVEL', 'BULK_SIZE',
                                'COMMIT_FREQUENCY', 'ODB_STORE_UOID', 'PACKAGE_NAME')
  LEFT JOIN (SELECT e.execution_audit_id,
                    SUM (a.number_errors)            AS err,
                    SUM (a.number_records_selected)  AS sel,
                    SUM (a.number_records_inserted)  AS ins,
                    SUM (a.number_records_updated)   AS upd,
                    SUM (a.number_records_deleted)   AS del,
                    SUM (a.number_records_discarded) AS dis,
                    SUM (a.number_records_merged)    AS mer,
                    SUM (a.number_records_corrected) AS cor
               FROM all_rt_audit_executions e
               LEFT JOIN all_rt_audit_map_runs a
                 ON e.execution_audit_id = a.execution_audit_id
              GROUP BY e.execution_audit_id) x
    ON e.execution_audit_id = x.execution_audit_id




Note:

I have included error messages for each execution. This means rows will be duplicated when more than one error message is found for an execution.

Note 2:

I excluded the ‘default’ parameters for each execution because they too would lead to duplication of rows, and most parameters will have default values anyway. Custom parameter values used during an execution will still be shown this way.
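
For completeness, a minimal hedged sketch (mine, not part of the original post) of querying the view from Java via JDBC; the connection details are hypothetical and should point at the runtime repository owner:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RtAuditReport {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details for the OWB runtime repository schema.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@dbhost:1521:ORCL", "owbrt_owner", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT execution_name, return_result, elapse_format, inserted, updated, error# "
               + "FROM vw_rt_audit_info WHERE starttime > SYSDATE - 1 ORDER BY starttime DESC");
             ResultSet rs = ps.executeQuery()) {
            // One line per execution (duplicated per error message, as noted above).
            while (rs.next()) {
                System.out.printf("%-40s %-10s %9s ins=%d upd=%d err=%d%n",
                        rs.getString(1), rs.getString(2), rs.getString(3),
                        rs.getLong(4), rs.getLong(5), rs.getLong(6));
            }
        }
    }
}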


Is ChromeBook nothing but Larry's old idea of Network Internet Computer?

Khanderao Kand - Thu, 2011-05-12 01:44
Google announced ChromeBooks at the Google I/O 2011 conference today with great fanfare. It is definitely an idea appropriate to the current web-centric world, and it seems to be giving off the right vibes: it's slick, fast to start, connected to the web, secured, possibly free from viruses, and consumes little battery. It is consistent with today's cloud computing; in other words, it is the perfect client device for a cloud computing world or the new web. However, is it a real innovation? Larry had started the Network Computer concept and launched a separate company for the same; maybe it was ahead of its time. Isn't the ChromeBook the same idea recycled? Anyway, though it has an innovative subscription model for education and businesses, the cost is on the higher side: 499 for Wi-Fi and 599 for wireless, especially against the background of various efforts to introduce slick netbooks at $100. Moreover, at this price the ChromeBook would get sandwiched between tablets and PCs. Read my detailed blog at:
http://texploration.wordpress.com/
http://texploration.wordpress.com/2011/05/12/is-chromebook-recycled-idea-of-netcomputing-would-it-be-sandwitched-between-tablets-and-laptops/

Hadoop is building a good momentum...

Khanderao Kand - Tue, 2011-05-10 23:58
At EMC World this week, many new products based on Hadoop were launched.

EMC announced enterprise and community distributions of Apache Hadoop, as well as an appliance. This puts EMC in competition with Cloudera, which has very good traction in the Hadoop market. Moreover Yahoo, which pioneered the original contribution of Hadoop and is a heavy user, is rumoured to be launching a Hadoop spin-off. Yahoo has contributed Pig as a layer above Hadoop.

During the conference other products were announced too, like Brisk, which combines Hadoop with Cassandra as a node, and SnapReduce from SnapLogic. Overall these are all good indications of Hadoop's traction. A more detailed note is in my other blog, which is dedicated to emerging technologies and apps.

http://texploration.wordpress.com/2011/05/10/hadoop-based-products-launche/

ASM – It's not just for RAC anymore

alt.oracle - Tue, 2011-05-10 21:43

I'm super critical of Oracle when they screw stuff up or try to push technology in a direction that's bad for DBAs. You'll be hearing some rants about it in upcoming posts. But I also think that Oracle is a company that is actually good for the direction that technology is heading, unlike some companies whose names begin with "Micro" and end with "soft". Yes, they're a vast, stone-hearted corporation that would sell their grandmothers to raise their stock price. So is every other technology company – get used to it. But when they do something right, I'll be fair and sing their praises. Once every version or so, Oracle does something that really changes the game for DBAs. In version 8 it was RMAN. In 9i it was locally managed tablespaces. In 10g, it's definitely ASM - Automatic Storage Management. Yeah, I know this is kinda old news - ASM has been out for a good long while. What surprises me, though, is how many DBAs think that ASM is only useful for RAC architectures. "I don't run RAC, why would I need ASM?"

When ASM came out, it both intrigued and terrified me. The claim that it could produce I/O performance almost on par with raw devices without all the grief that comes with using them was exciting. But the idea of putting your production data on a completely new way of structuring files was pretty scary. I trust filesystems like UFS and ext2/3 (maybe even NTFS a little, but don't quote me) because they've stood the test of time. If there's one thing a DBA shouldn't screw around with, it's the way that the bits that represent your company's data are written to disk. I'm skeptical of any new way to store Oracle data on disk, since I'm the loser that has to recover the data if everything goes south. So I entered into my new relationship with ASM the way you should – with a whole lot of testing.

I originally moved to ASM out of sheer necessity. I was running RAC and using a woeful product called OCFS – Oracle Clustered Filesystem – to store the data. Performance was bad, weird crashes happened when there was heavy I/O contention, it wasn't pretty. Nice try, Oracle. It's cool that it was an open source project, but eventually it became clear that Oracle was pushing toward ASM as their clustered filesystem of choice. To make a long story short, we tested the crap out of it and ASM came through with flying colors. Performance was outstanding and the servers used a lot less CPU, since ASM bypasses that pesky little filesystem cache thing. In the end, we moved our single instance databases to ASM as well and saw similar results. It's true that, since you give Oracle control of how reads and writes are done, ASM is a very effective global filesystem for RAC. But the real strength of ASM is the fact that it's a filesystem built specifically for Oracle databases. You don't use it to store all your stolen mp3 files (unless you're storing them as blobs in the database, wink), you use it for Oracle datafiles. You give Oracle control of some raw partitions and let it go. And it does a good job. Once you go ASM, you never go back.

I'm not going to do a sell job on the features of ASM, since I don't work for the sales department at Oracle. Really, the positives for ASM boil down to three key features:

1) It bypasses the filesystem cache, thus going through fewer layers in the read/write process. This increases performance in essentially the same way that raw devices do.

2) It works constantly to eliminate hot spots in your Oracle data. This is something that your typical filesystem doesn't do, since it takes an intimate knowledge of how the particular application (in this case Oracle) is going to use the blocks on disk. Typical filesystems are designed to work equally well with all sorts of applications, while ASM is specialized for Oracle.

3) It works (with Oracle) as a global filesystem. In clustered systems, your filesystem is crucial. It has to be "globally aware" that two processes from different machines might try to modify the same block of data at the same time. That means that global filesystems need a "traffic cop" layer of abstraction that prevents data integrity violations. Normally this layer would impact performance to a certain degree. But ASM gives control to Oracle, which has a streamlined set of rules about which process can access a certain block, and prevents this performance loss.

So consider using ASM. Even if you don't run RAC, benefits #1 and #2 make it worth your while. Our DBA team has been using it religiously on both RAC and non-RAC systems for years without any problems.

Of course, we're talking about Oracle here, so leave it to them to take the wonderful thing that is ASM and screw it up. Next time I'll tell you how they did just that in version 11g.
Categories: DBA Blogs
