Feed aggregator

Some More New ADF Features in JDeveloper 11.1.2

JHeadstart - Thu, 2011-06-23 22:39

The official list of new features in JDeveloper 11.1.2 is documented here. While playing with JDeveloper 11.1.2 and scanning the web user interface developer's guide for 11.1.2, I noticed some additional new features in ADF Faces; they are small but might come in handy:

  •  You can use the af:formatString and af:formatNamed constructs in EL expressions to use substitution variables. For example: 
    <af:outputText value="#{af:formatString('The current user is: {0}',someBean.currentUser)}"/>
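    The named-parameter variant works the same way; a minimal sketch, reusing the same hypothetical someBean:
    <af:outputText value="#{af:formatNamed('The current user is: {user}', 'user', someBean.currentUser)}"/>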
    See section 3.5.2 in web user interface guide for more info.

  • A new ADF Faces client behavior tag: af:checkUncommittedDataBehavior. See section 20.3 in the web user interface guide for more info. For this tag to work, you also need to set the uncommittedDataWarning property on the af:document tag, and that property has quite a few issues, as you can read here. I did a quick test: the alert is shown for a button that is on the same page; however, if you have a menu in a shell page with dynamic regions, then clicking on another menu item does not raise the alert if you have pending changes in the currently displayed region. For now, the JHeadstart implementation of pending changes still seems the best choice (I will blog about that soon). A sketch of how the two pieces combine follows below.
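    A rough sketch (component ids are made up; the behavior tag is intended for immediate command components):
    <af:document title="Uncommitted data demo" id="d1" uncommittedDataWarning="on">
      <af:form id="f1">
        <!-- pending edits trigger a warning before this immediate button discards them -->
        <af:commandButton text="Cancel" immediate="true" id="cb1">
          <af:checkUncommittedDataBehavior/>
        </af:commandButton>
      </af:form>
    </af:document>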

  • New properties on the af:document tag: smallIconSource creates a so-called favicon that is displayed in front of the URL in the browser address bar, and largeIconSource specifies the icon used by a mobile device when bookmarking the page to the home page. See section 9.2.5 in the web user interface guide for more info. Also notice the failedConnectionText property, which I didn't know about but which was already available in JDeveloper 11.1.1.4.

  • The af:showDetail tag has a new property, handleDisclosure, which you can set to client for faster rendering. A small sketch:
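    <af:showDetail handleDisclosure="client" disclosed="false" id="sd1">
      <af:outputText value="Disclosure is handled on the client, avoiding a server round trip" id="ot1"/>
    </af:showDetail>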

  • In JDeveloper 11.1.1.x, an expression like #{bindings.JobId.inputValue} would return the internal list index number when JobId was a list binding. To get the actual JobId attribute value, you needed to use #{bindings.JobId.attributeValue}. In JDeveloper 11.1.2 this is no longer needed: the #{bindings.JobId.inputValue} expression returns the attribute value corresponding to the selected index in the choice list.
Did you discover other "hidden" new features? Please add them as a comment to this blog post so everybody can benefit. 

Categories: Development

JDev 11.1.2.0.0 – af:showDetailItems and af:regions – the power of Facelets – part 3

Chris Muir - Tue, 2011-06-21 06:10
The previous blog posts in this series (part 1 and part 2) looked at the behaviour of the af:region tag embedded in an af:showDetailItem tag with JDeveloper 11.1.1.4.0. This post investigates the changing nature of the "deferred" activation property for the underlying af:region task flow binding in JDev 11.1.2.0.0.

JDeveloper 11.1.2.0.0 introduces JSF 2.0 and Facelets as the predominant display technologies for ADF Faces RC pages. As Oracle's JSF 2.0 roadmap for ADF states: "The introduction of Facelets addresses the shortcomings of JSP and improves the developer page design and reuse experience."

One of these improvements is how the "deferred" activation property for task flow bindings under af:regions works, and we no longer require the programmatic solution.

The behaviour using JSPX

Before we can show how Facelets improves the use of embedded task flows in a closed af:showDetailItem tag, it's worth showing that JSPX pages under 11.1.2.0.0 still exhibit the previous behaviour, where a programmatic solution is required.

If we take the same code from the previous posts written in JDev 11.1.1.4.0, specifically using a JSPX page, this time entitled BasicShowDetailItem.jspx:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core"
          xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <jsp:directive.page contentType="text/html;charset=UTF-8"/>
  <f:view>
    <af:document title="ShowDetailJSPX" id="d1">
      <af:form id="f1">
        <af:panelAccordion id="pa1">
          <af:showDetailItem text="DummyShowDetailItem" disclosed="true" id="sdi1">
            <af:outputText value="DummyValue" id="ot1"/>
          </af:showDetailItem>
          <af:showDetailItem text="Charlie" disclosed="false" id="sdi2">
            <af:region value="#{bindings.CharlieTaskFlow1.regionModel}" id="r1"/>
            <af:commandButton text="Submit2" id="cb2"/>
          </af:showDetailItem>
        </af:panelAccordion>
      </af:form>
    </af:document>
  </f:view>
</jsp:root>
Note the embedded af:region containing a call to a task flow CharlieTaskFlow:

Also as per the previous posts, we're using the LogBegin Method Call in the task flow above, as well as task flow initializers and finalizers, to help us understand whether the CharlieTaskFlow has been executed at all:

The code that does the logging is again similar to the previous posts:
import oracle.adf.share.logging.ADFLogger;

public class CharlieBean {

    public static ADFLogger logger = ADFLogger.createADFLogger(CharlieBean.class);

    private String charlieValue = "charlie1";

    public void setCharlieValue(String charlieValue) {
        logger.info("setCharlieValue called(" + (charlieValue == null ? "<null>" : charlieValue) + ")");
        this.charlieValue = charlieValue;
    }

    public String getCharlieValue() {
        logger.info("getCharlieValue called(" + (charlieValue == null ? "<null>" : charlieValue) + ")");
        return charlieValue;
    }

    public void taskFlowInit() {
        logger.info("Task flow initialized");
    }

    public void taskFlowFinalizer() {
        logger.info("Task flow finalized");
    }

    public void logBegin() {
        logger.info("Task flow beginning");
    }
}
...and for the record we've inserted a custom JSF PhaseListener to assist in interpreting the logger output:
import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;

import oracle.adf.share.logging.ADFLogger;

public class PhaseListener implements javax.faces.event.PhaseListener {

    public static ADFLogger logger = ADFLogger.createADFLogger(PhaseListener.class);

    public void beforePhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public void afterPhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public PhaseId getPhaseId() {
        return PhaseId.ANY_PHASE;
    }
}
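For completeness, the listener has to be registered in faces-config.xml in the usual JSF way; a minimal sketch, assuming the test.view package used elsewhere in this series:
<lifecycle>
  <!-- fires before and after every JSF phase -->
  <phase-listener>test.view.PhaseListener</phase-listener>
</lifecycle>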
And finally in this example, note the task flow binding options "activation" and "active". We'll set these back to the defaults, "deferred" and null, to show the behaviour of task flows under 11.1.2.0.0 using JSPXs:
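The property inspector screenshot isn't reproduced in this feed, but in the pageDef the binding would look roughly like this minimal sketch (the taskFlowId path is an assumption):
<taskFlow id="CharlieTaskFlow1"
          taskFlowId="/WEB-INF/CharlieTaskFlow.xml#CharlieTaskFlow"
          activation="deferred"
          xmlns="http://xmlns.oracle.com/adf/controller/binding"/>
<!-- the "active" property is simply left unset (null) -->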

On running this code as a JSPX page in 11.1.2.0.0, even though the af:showDetailItem that contains the af:region and task flow is closed, the following log output shows that the task flow is still being executed:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<CharlieBean> <taskFlowInit> Task flow initialized
<CharlieBean> <logBegin> Task flow beginning
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
The behaviour using Facelets

From here we'll show an example using Facelets. The example is virtually identical, except that the page containing the af:region, as well as the fragment from the CharlieTaskFlow, have to be created as a Facelets page and fragment respectively.

As an example, the Facelets page, entitled FaceletsShowDetailItem.jsf, looks as follows:
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<f:view xmlns:f="http://java.sun.com/jsf/core" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <af:document title="ShowDetailItemFacelet.jsf" id="d1">
    <af:form id="f1">
      <af:panelAccordion id="pa1">
        <af:showDetailItem text="DummyShowDetailItem" disclosed="true" id="sdi1">
          <af:outputText value="DummyValue" id="ot1"/>
        </af:showDetailItem>
        <af:showDetailItem text="Charlie" disclosed="false" id="sdi2">
          <af:region value="#{bindings.CharlieTaskFlow1.regionModel}" id="r1"/>
          <af:commandButton text="Submit2" id="cb2"/>
        </af:showDetailItem>
      </af:panelAccordion>
    </af:form>
  </af:document>
</f:view>
Compared to our previous JSPX example, every other option is exactly the same. The most significant options are the task flow binding's activation and active properties, which we can see are still set to "deferred" and null:

On running the Facelets page, even though the af:showDetailItem containing the af:region isn't open, the logs show that the task flow is not being initialized, unlike the previous JSPX behaviour where it was unnecessarily started:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
Once we open the af:showDetailItem, the task flow is then correctly initialized:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<CharlieBean> <taskFlowInit> Task flow initialized
<CharlieBean> <logBegin> Task flow beginning
<CharlieBean> <getCharlieValue> getCharlieValue called(charlie1)
<CharlieBean> <getCharlieValue> getCharlieValue called(charlie1)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
This small change saves us from having to programmatically control the activation and refresh of the task flows, and in addition shows that Oracle is enhancing its support for ADF through Facelets rather than the traditional JSPX view display technology. This should encourage greenfield developers to pursue Facelets in the context of JDev 11.1.2.0.0.

Sample Application

A sample application for 11.1.2.0.0 is available here.

Thanks

This post was inspired by Oracle's Steve Davelaar who highlighted the new region processing in JDev 11.1.2. The introduction of the new 11.1.2 feature led me to explore the default behaviour under 11.1.1.4.0 without the new feature.

JDev 11.1.1.4.0 – af:showDetailItems and af:regions – programmatic activation - part 2

Chris Muir - Mon, 2011-06-20 07:34
The previous blog post in this series looked at the default behaviour of the ADF framework in 11.1.1.4.0 with the af:region tag embedded in an af:showDetailItem tag. In this post we'll look at programmatically controlling the activation of regions to stop unnecessary processing.

This example will be simplified to only look at one af:region in the hidden second af:showDetailItem. The basic page entitled ShowDetailItemWithRegion looks as follows:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core"
          xmlns:h="http://java.sun.com/jsf/html" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <jsp:directive.page contentType="text/html;charset=UTF-8"/>
  <f:view>
    <af:document id="d1">
      <af:form id="f1">
        <af:panelAccordion id="pa1">
          <af:showDetailItem text="DummyShowDetailItem" disclosed="true" id="sdi1">
            <af:outputText value="DummyValue" id="ot1"/>
          </af:showDetailItem>
          <af:showDetailItem text="Charlie" disclosed="false" id="sdi2">
            <af:region value="#{bindings.CharlieTaskFlow1.regionModel}" id="r1"/>
            <af:commandButton text="Submit2" id="cb2"/>
          </af:showDetailItem>
        </af:panelAccordion>
      </af:form>
    </af:document>
  </f:view>
</jsp:root>
Note the first af:showDetailItem is disclosed (open) and exists purely to act as the open item, while the second af:showDetailItem, the one we're actually interested in, is closed. Note the second af:showDetailItem has our embedded af:region calling the CharlieTaskFlow.

The CharlieTaskFlow looks as follows:

Similar to the last post, the LogBegin Method Call simply calls a bean method to log the actual start of processing for the task flow. In turn, both the initializer and finalizer of the task flow are also logged. A pageFlowScope bean, CharlieBean, holds all the methods:
package test.view;

import oracle.adf.share.logging.ADFLogger;

public class CharlieBean {

    public static ADFLogger logger = ADFLogger.createADFLogger(CharlieBean.class);

    private String charlieValue = "charlie1";

    public void setCharlieValue(String charlieValue) {
        logger.info("setCharlieValue called(" + (charlieValue == null ? "<null>" : charlieValue) + ")");
        this.charlieValue = charlieValue;
    }

    public String getCharlieValue() {
        logger.info("getCharlieValue called(" + (charlieValue == null ? "<null>" : charlieValue) + ")");
        return charlieValue;
    }

    public void taskFlowInit() {
        logger.info("Task flow initialized");
    }

    public void taskFlowFinalizer() {
        logger.info("Task flow finalized");
    }

    public void logBegin() {
        logger.info("Task flow beginning");
    }
}
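The task flow diagram isn't reproduced in this feed, but the initializer and finalizer hooks live in the task flow definition XML; a rough sketch, assuming the bean is registered in pageFlowScope as charlieBean1 (as the fragment EL below suggests):
<task-flow-definition id="CharlieTaskFlow">
  <default-activity>LogBegin</default-activity>
  <!-- called when the task flow is entered and exited respectively -->
  <initializer>#{pageFlowScope.charlieBean1.taskFlowInit}</initializer>
  <finalizer>#{pageFlowScope.charlieBean1.taskFlowFinalizer}</finalizer>
</task-flow-definition>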
And finally the CharlieFragment.jsff contains the following code:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <af:inputText label="Charlie value" value="#{pageFlowScope.charlieBean1.charlieValue}" id="it1"/>
</jsp:root>
In turn we've configured the same PhaseListener class to assist us in reading the debug output:
import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;

import oracle.adf.share.logging.ADFLogger;

public class PhaseListener implements javax.faces.event.PhaseListener {

    public static ADFLogger logger = ADFLogger.createADFLogger(PhaseListener.class);

    public void beforePhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public void afterPhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public PhaseId getPhaseId() {
        return PhaseId.ANY_PHASE;
    }
}
When we run the parent page we see:

And the following in the logs:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<CharlieBean> <taskFlowInit> Task flow initialized
<CharlieBean> <logBegin> Task flow beginning
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
Once again, as we discovered in the first blog post in this series, we can see that even though the CharlieTaskFlow is not showing, it is being initialized and executed at least as far as the LogBegin activity, though not as far as CharlieFragment.jsff.

In many circumstances this is going to be undesirable behaviour. Imagine a screen made up of several af:showDetailItems, most closed, where the user rarely opens the closed collections. From a performance point of view, the hidden task flows are partially executed even though their results may never be used, which is a waste of resources.

The solution to this is in the task flow binding that backs the af:region of the main page. When dropping the task flow onto the page as an af:region, a task flow binding is also added to the corresponding pageDef file:

The task flow binding includes a number of properties revealed by the property inspector, of which the "activation" and "active" properties are the ones we're interested in:

These properties can be used to control the activation and deactivation of the task flow backing the af:region. Under JDev 11.1.1.4.0 the values for the activation property are:

1) <default> (immediate)
2) conditional
3) deferred
4) immediate

While option 1 says it's the default, in fact option 3, "deferred", will be picked by default when the task flow binding is created (go figure). This in itself is odd, because the documentation for 11.1.1.4.0 states that the deferred option falls back to "immediate" if we're not using Facelets (which we're not; Facelets support is an 11.1.2 feature). So three out of four options are, in the end, immediate, and the remaining one is conditional. (Agreed, this is somewhat confusing, but it will become clearer in the next blog post, which looks at these properties under 11.1.2, where the deferred option takes on a different meaning.)

The "immediate" activation behaviour implies that the task flow binding will always be executed when the pageDef for the page is accessed. This explains why the hidden task flow in the af:showDetailItem af:region is actively initialized, mainly because the ADF lifecycle that in turn processes the page bindings is bolted on top of the JSF lifecycle, and the task flow binding doesn't know its indirectly related af:region is rendered in an af:showDetailItem that is closed.

The "conditional" activation behaviour in conjunction with the active property allows us to programmatically control the activation based on an EL expression. It's with this option we can solve the issue of our task flows being early executed.

To implement this in our current solution, we create a new session-scoped bean to keep track of whether the region is activated:
package test.view;

public class ActiveRegionBean {

    private Boolean regionActivation = false;

    public void setRegionActivation(Boolean regionActivation) {
        this.regionActivation = regionActivation;
    }

    public Boolean getRegionActivation() {
        return regionActivation;
    }
}
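For completeness, the bean would be registered along these lines in faces-config.xml, so that the #{activeRegionBean} expression used below resolves at session scope:
<managed-bean>
  <managed-bean-name>activeRegionBean</managed-bean-name>
  <managed-bean-class>test.view.ActiveRegionBean</managed-bean-class>
  <managed-bean-scope>session</managed-bean-scope>
</managed-bean>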
We then modify our task flow binding in our page to use "conditional" activation, with an EL expression that refers to the state of the regionActivation variable within our sessionScope ActiveRegionBean:
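The screenshot is again omitted here; as a sketch, the modified binding in the pageDef would read something like (taskFlowId path assumed):
<taskFlow id="CharlieTaskFlow1"
          taskFlowId="/WEB-INF/CharlieTaskFlow.xml#CharlieTaskFlow"
          activation="conditional"
          active="#{activeRegionBean.regionActivation}"
          xmlns="http://xmlns.oracle.com/adf/controller/binding"/>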

Note that by default regionActivation is false, to match the state of our second af:showDetailItem, which is closed by default.

We also need to include some logic to toggle the regionActivation flag when the af:showDetailItem is opened, as well as to programmatically refresh the af:region. To do this we can create a disclosureListener that refers to a new backing bean, as well as a component binding for the af:region. As such, the code in our original page is modified as follows:
<af:showDetailItem text="Charlie" disclosed="false" id="sdi2"
                   disclosureListener="#{disclosureBean.openCharlie}">
  <af:region value="#{bindings.CharlieTaskFlow1.regionModel}" id="r1"
             binding="#{disclosureBean.charlieRegion}"/>
  <af:commandButton text="Submit2" id="cb2"/>
</af:showDetailItem>
Note the new disclosureListener property on the af:showDetailItem, and the binding property on the af:region.

In turn our requestScope DisclosureBean bean looks as follows:
import javax.el.ELContext;
import javax.el.ExpressionFactory;
import javax.el.ValueExpression;
import javax.faces.application.Application;
import javax.faces.context.FacesContext;

import oracle.adf.view.rich.component.fragment.RichRegion;
import oracle.adf.view.rich.context.AdfFacesContext;

import org.apache.myfaces.trinidad.event.DisclosureEvent;

public class DisclosureBean {

    private RichRegion charlieRegion;

    public static Object resolveELExpression(String expression) {
        FacesContext fctx = FacesContext.getCurrentInstance();
        Application app = fctx.getApplication();
        ExpressionFactory elFactory = app.getExpressionFactory();
        ELContext elContext = fctx.getELContext();
        ValueExpression valueExp = elFactory.createValueExpression(elContext, expression, Object.class);
        return valueExp.getValue(elContext);
    }

    public void openCharlie(DisclosureEvent disclosureEvent) {
        ActiveRegionBean activeRegionBean = (test.view.ActiveRegionBean)resolveELExpression("#{activeRegionBean}");

        if (disclosureEvent.isExpanded()) {
            // activate the region's task flow and refresh it so its contents render
            activeRegionBean.setRegionActivation(true);
            AdfFacesContext.getCurrentInstance().addPartialTarget(charlieRegion);
        } else {
            // deactivate on close, forcing the task flow finalizer to run
            activeRegionBean.setRegionActivation(false);
        }
    }

    public void setCharlieRegion(RichRegion charlieRegion) {
        this.charlieRegion = charlieRegion;
    }

    public RichRegion getCharlieRegion() {
        return charlieRegion;
    }
}
Note that in the openCharlie() method we check whether the disclosureEvent is for opening or closing the af:showDetailItem. If opening, we set the sessionScope regionActivation to true, which will be picked up by the task flow binding's active property to initialize the CharlieTaskFlow. In turn we add a programmatic refresh of the charlieRegion in order for the updates to the task flow to be seen in the actual page.

Now when we run our page for the first time we see the following log output:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
Note that unlike our previous runtime example, we no longer see the following entries:
<CharlieBean> <taskFlowInit> Task flow initialized
<CharlieBean> <logBegin> Task flow beginning
This implies the CharlieTaskFlow hasn't been activated unnecessarily while the af:showDetailItem is closed. If we then expand the af:showDetailItem, the logs reveal that the task flow is started accordingly:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<CharlieBean> <taskFlowInit> Task flow initialized
<CharlieBean> <logBegin> Task flow beginning
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<CharlieBean> <getCharlieValue> getCharlieValue called(charlie1)
<CharlieBean> <getCharlieValue> getCharlieValue called(charlie1)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
Here we can see the initialization of the CharlieTaskFlow. In turn unlike the default behaviour, we can now also see that getCharlieValue() is being called. The resulting web page:

What happens when we close the af:showDetailItem? The logs show:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<CharlieBean> <getCharlieValue> getCharlieValue called(charlie1)
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<CharlieBean> <taskFlowFinalizer> Task flow finalized
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
getCharlieValue() is called to fire any ValueChangeListener. But we can also see the CharlieBean finalizer being called. Because our request bean has deactivated the region, the task flow is forced to close. This may or may not be the desired functionality, because once open, you might wish the task flow to maintain its state. If you do wish the task flow state to be retained, the openCharlie() method in the request scope bean should be modified as follows:
if (disclosureEvent.isExpanded()) {
    activeRegionBean.setRegionActivation(true);
    AdfFacesContext.getCurrentInstance().addPartialTarget(charlieRegion);
// } else {
//     activeRegionBean.setRegionActivation(false);
}
At the conclusion of this post we can see that the default behaviour of regions and task flows under the af:showDetailItem tag can at least be programmatically controlled to stop unnecessary execution of the underlying task flows. Interestingly, this is similar to a problem in Oracle Forms, where separate data blocks contained within tab pages would be eagerly executed unless code was put in place to stop this.

The next post in this series will look at the "deferred" activation option for task flows in JDev 11.1.2.

Sample Application

A sample application containing solutions for part 2 of this series is available here.

Thanks

This post was inspired by Oracle's Steve Davelaar who highlighted the new region processing in JDev 11.1.2. The introduction of the new 11.1.2 feature led me to explore the default behaviour under 11.1.1.4.0 without the new feature.

JDev 11.1.1.4.0 – af:showDetailItems and af:regions – immediate activation - part 1

Chris Muir - Mon, 2011-06-20 07:04
ADF's af:showDetailItem tag is used as a child to parent tags such as the af:panelAccordion and af:panelTabbed. JDeveloper's online documentation states the following about the af:showDetailItem tag:
The showDetailItem component is used inside of a panelAccordion or panelTabbed component to contain a group of children. It is identified visually by the text attribute value and lays out its children. Note the difference between "disclosed" and "rendered": if "rendered" is false, it means that the accordion header bar or tab link and its corresponding contents are not available at all to the user, whereas if "disclosed" is false, it means that the contents of the item are not currently visible, but may be made visible by the user since the accordion header bar or tab link are still visible.

The lifecycle (including validation) is not run for any components in a showDetailItem which is not disclosed. The lifecycle is only run on the showDetailItem(s) which is disclosed.

I've never been a fan of the property "disclosed"; why not just call it "open"? At least developers then wouldn't have to deal with double negatives like disclosed="false". Regardless, the last paragraph highlights an interesting point: the contents of the af:showDetailItem are not processed by the JSF lifecycle if the showDetailItem is currently closed (disclosed="false"). That's desired behaviour, particularly if you have a page with multiple af:showDetailItem tags that in turn query the business service layer, potentially kicking off a large range of ADF BC queries. Ideally you don't want the queries to fire if their related af:showDetailItem tag is currently closed.

This feature can be demonstrated via a simple example. Note the following BasicShowDetailItem.jspx page constructed under JDev 11.1.1.4.0:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core"
          xmlns:h="http://java.sun.com/jsf/html" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <jsp:directive.page contentType="text/html;charset=UTF-8"/>
  <f:view>
    <af:document id="d1">
      <af:form id="f1">
        <af:panelAccordion id="pa1">
          <af:showDetailItem text="Alpha" disclosed="true" id="sdi1">
            <af:panelFormLayout id="pfl1">
              <af:inputText label="Alpha Value" value="#{basicBean.alphaValue}" id="it1"/>
              <af:commandButton text="Submit1" id="cb1"/>
            </af:panelFormLayout>
          </af:showDetailItem>
          <af:showDetailItem text="Beta" disclosed="false" id="sdi2">
            <af:inputText label="Beta Value" value="#{basicBean.betaValue}" id="it2"/>
            <af:commandButton text="Submit2" id="cb2"/>
          </af:showDetailItem>
        </af:panelAccordion>
      </af:form>
    </af:document>
  </f:view>
</jsp:root>
This is backed by the following simple requestScope POJO bean entitled BasicBean.java:
package test.view;

import oracle.adf.share.logging.ADFLogger;

public class BasicBean {

    public static ADFLogger logger = ADFLogger.createADFLogger(BasicBean.class);

    private String alphaValue = "alpha";
    private String betaValue = "beta";

    public void setAlphaValue(String alphaValue) {
        logger.info("setAlphaValue called(" + (alphaValue == null ? "<null>" : alphaValue) + ")");
        this.alphaValue = alphaValue;
    }

    public String getAlphaValue() {
        logger.info("getAlphaValue called(" + (alphaValue == null ? "<null>" : alphaValue) + ")");
        return alphaValue;
    }

    public void setBetaValue(String betaValue) {
        logger.info("setBetaValue called(" + (betaValue == null ? "<null>" : betaValue) + ")");
        this.betaValue = betaValue;
    }

    public String getBetaValue() {
        logger.info("getBetaValue called(" + (betaValue == null ? "<null>" : betaValue) + ")");
        return betaValue;
    }
}
(More information on the ADFLogger and enabling it can be found in Duncan Mills' recent four-part blog.)

To assist in understanding what's happening, we'll also include our own PhaseListener logs so that we can see the JSF lifecycle in action:
import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;

import oracle.adf.share.logging.ADFLogger;

public class PhaseListener implements javax.faces.event.PhaseListener {

    public static ADFLogger logger = ADFLogger.createADFLogger(PhaseListener.class);

    public void beforePhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public void afterPhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public PhaseId getPhaseId() {
        return PhaseId.ANY_PHASE;
    }
}
At runtime when the page is first rendered we see:

Note that the first af:showDetailItem is disclosed (open) and the second af:showDetailItem is closed. Also note the first af:showDetailItem is showing the alpha value from our requestScope bean, while the beta value is hiding in the second, closed af:showDetailItem.

At this stage the logger class gives us an insight into how the requestScope BasicBean has been used. In the log window we can see:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
We can see the getter for alphaValue has been called twice during the JSF render response phase to render the page. (Why twice? Essentially the JSF engine doesn't guarantee to call a getter only once in a request-response cycle. JSF may use a getter for its own purposes, such as checking whether the value submitted with the request has changed, in order to fire a ValueChangeListener. Google abounds with further discussions on this and the JSF lifecycle, including the following by BalusC.)

As expected, note that the getter for betaValue was not called, as it's not displayed. To take the example one step further, if we hit the submit button available in the first af:showDetailItem, the log output still shows no mention of the beta accessors, only calls to getAlphaValue():
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
For the purpose of demonstration, if we change the alpha value and resubmit, the logs show:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<BasicBean> <setAlphaValue> setAlphaValue called(alpha2)
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<BasicBean> <getAlphaValue> getAlphaValue called(alpha2)
<BasicBean> <getAlphaValue> getAlphaValue called(alpha2)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
In this case we see the additional setAlphaValue() call, and multiple getAlphaValue() calls, but no get/setBetaValue() calls. From this we can conclude that the second af:showDetailItem is indeed suppressing the lifecycle of its children.

What happens if we open the 2nd af:showDetailItem, which closes the 1st af:showDetailItem:

In the log window we see:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<BasicBean> <setAlphaValue> setAlphaValue called(alpha2)
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<BasicBean> <getBetaValue> getBetaValue called(beta)
<BasicBean> <getBetaValue> getBetaValue called(beta)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
In this we can see:

1) A single get and set of the alpha value – why? Because the af:showDetailItem button still issues a submit to the midtier. At the point in time the second af:showDetailItem's button is pressed, alpha is still showing, and its changes need to be communicated back to the midtier. The getAlphaValue() call tests whether the value changed, in order to fire the ValueChangeListener, and the setAlphaValue() call writes the new value submitted to the midtier.

An observant reader might pick up the fact that in this log the getAlphaValue() call returns a value of alpha rather than alpha2. Surely in the step prior to this one we had already set the value to alpha2? (In fact you can see this in the log output.) The answer is that this bean is set at requestScope, not sessionScope, so the internal values are not carried across requests (which is a useful learning exercise with regard to bean scope, but beyond the scope (no pun intended) of this blog post).

2) Two separate calls to getBetaValue() – as the second af:showDetailItem opens, similar to the original retrieval of the alpha value, the JSF lifecycle now calls the getter twice.

If we now press the submit button in the second af:showDetailItem we see the following in the logs:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<BasicBean> <getBetaValue> getBetaValue called(beta)
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<BasicBean> <getBetaValue> getBetaValue called(beta)
<BasicBean> <getBetaValue> getBetaValue called(beta)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
As previously when we pressed the submit button in the first af:showDetailItem, the log output and the calls to getBetaValue() match the frequency and location of the earlier getAlphaValue() calls. Again, now that the first af:showDetailItem is fully closed, we see no JSF lifecycle on the get/setAlphaValue() methods.

So in conclusion, if the af:showDetailItem is closed, and not in the process of being closed, then its children will not be activated.

Okay, but what's up with af:showDetailItems and af:regions?

Now that we know the default behaviour of the af:showDetailItem, let's extend the example to show where the behaviour changes.

Within JDev 11g we can make use of af:regions to call ADF bounded task flows. As an example, we may have the following page entitled ShowDetailItemWithRegion.jspx:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core"
          xmlns:h="http://java.sun.com/jsf/html" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <jsp:directive.page contentType="text/html;charset=UTF-8"/>
  <f:view>
    <af:document id="d1">
      <af:form id="f1">
        <af:panelAccordion id="pa1">
          <af:showDetailItem text="Alpha" disclosed="true" id="sdi1">
            <af:panelFormLayout id="pfl1">
              <af:region value="#{bindings.AlphaTaskFlow1.regionModel}" id="r1"/>
              <af:commandButton text="Submit1" id="cb1"/>
            </af:panelFormLayout>
          </af:showDetailItem>
          <af:showDetailItem text="Beta" disclosed="false" id="sdi2">
            <af:region value="#{bindings.BetaTaskFlow1.regionModel}" id="r2"/>
            <af:commandButton text="Submit2" id="cb2"/>
          </af:showDetailItem>
        </af:panelAccordion>
      </af:form>
    </af:document>
  </f:view>
</jsp:root>
Note the embedded regions within each af:showDetailItem. The setup of the rest of the page is the same, with the first af:showDetailItem being disclosed (open) and the second closed when the page first renders.

The task flows themselves are very simple. As an example, AlphaTaskFlow contains one fragment, AlphaFragment, which is the default activity:

The AlphaFragment.jsff includes the following code:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <af:inputText label="Alpha value" value="#{backingBeanScope.alphaBean.alphaValue}" id="it1"/>
</jsp:root>
This references a backingBeanScope bean for the task flow, named AlphaBean:
package test.view;

import oracle.adf.share.logging.ADFLogger;

public class AlphaBean {

    public static ADFLogger logger = ADFLogger.createADFLogger(AlphaBean.class);

    private String alphaValue = "alpha1";

    public void setAlphaValue(String alphaValue) {
        logger.info("setAlphaValue called(" + (alphaValue == null ? "<null>" : alphaValue) + ")");
        this.alphaValue = alphaValue;
    }

    public String getAlphaValue() {
        logger.info("getAlphaValue called(" + (alphaValue == null ? "<null>" : alphaValue) + ")");
        return alphaValue;
    }

    public void taskFlowInit() {
        logger.info("Task flow initialized");
    }

    public void taskFlowFinalizer() {
        logger.info("Task flow finalized");
    }
}
This bean carries the alphaValue plus the associated getter and setter. The only addition here is the taskFlowInit() and taskFlowFinalizer() methods, which we'll use in the task flow to log when the task flow is started and stopped:

The second task flow, BetaTaskFlow, is exactly the same as AlphaTaskFlow except it calls the beta equivalents. As such, the BetaFragment.jsff:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <af:inputText label="Beta value" value="#{backingBeanScope.betaBean.betaValue}" id="it1"/>
</jsp:root>
The backingBeanScope BetaBean:
package test.view;

import oracle.adf.share.logging.ADFLogger;

public class BetaBean {

    public static ADFLogger logger = ADFLogger.createADFLogger(BetaBean.class);

    private String betaValue = "beta1";

    public void setBetaValue(String betaValue) {
        logger.info("setBetaValue called(" + (betaValue == null ? "<null>" : betaValue) + ")");
        this.betaValue = betaValue;
    }

    public String getBetaValue() {
        logger.info("getBetaValue called(" + (betaValue == null ? "<null>" : betaValue) + ")");
        return betaValue;
    }

    public void taskFlowInit() {
        logger.info("Task flow initialized");
    }

    public void taskFlowFinalizer() {
        logger.info("Task flow finalized");
    }
}
And the BetaTaskFlow initializer and finalizer set:

With the moving parts done, let's see what happens at runtime.

When we run the base page we see:

From the log output we see:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<AlphaBean> <taskFlowInit> Task flow initialized
<BetaBean> <taskFlowInit> Task flow initialized
<AlphaBean> <getAlphaValue> getAlphaValue called(alpha1)
<AlphaBean> <getAlphaValue> getAlphaValue called(alpha1)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
In the context of what we saw in the previous example, this is an interesting result. While we see that only getAlphaValue() has been called, similar to our previous example that didn't use regions, we can also see in the RENDER_RESPONSE phase, unexpectedly, that the initializers for *both* task flows have been called. We expected the task flow initializer for the AlphaTaskFlow to be called, but the framework has decided to start the BetaTaskFlow as well. Another observation: even though the BetaTaskFlow was started, somehow the framework didn't call getBetaValue()?

An assumption you could make here is that the framework is priming the BetaTaskFlow and calling the task flow initializer, but not actually running the task flow. We can disprove this by extending the BetaTaskFlow to include a new Method Call as the task flow's default activity:

...where the Method Call simply calls a new method in the BetaBean:
public void logBegin() {
    logger.info("Task flow beginning");
}
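The updated diagram isn't shown in this feed either, but in the task flow XML the new default activity would look something like this sketch (activity and rule ids are made up):
<method-call id="LogBegin">
  <method>#{backingBeanScope.betaBean.logBegin}</method>
  <outcome>
    <fixed-outcome>done</fixed-outcome>
  </outcome>
</method-call>
<control-flow-rule id="__1">
  <from-activity-id>LogBegin</from-activity-id>
  <control-flow-case id="__2">
    <from-outcome>done</from-outcome>
    <to-activity-id>BetaFragment</to-activity-id>
  </control-flow-case>
</control-flow-rule>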
If we re-run our application, the following log output is shown when the page opens:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<AlphaBean> <taskFlowInit> Task flow initialized
<BetaBean> <taskFlowInit> Task flow initialized
<BetaBean> <logBegin> Task flow beginning
<AlphaBean> <getAlphaValue> getAlphaValue called(alpha1)
<AlphaBean> <getAlphaValue> getAlphaValue called(alpha1)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
This proves that the activities within the BetaTaskFlow are actually being called. However, the ADF engine seems to stop short of processing the BetaFragment, the effective viewable part of the task flow.

The conclusion we can draw here is that even though you think you're hiding the BetaTaskFlow, and the af:showDetailItem documentation says it won't process the lifecycle of its children, for af:regions using task flows this is not the case; the framework is in fact processing them (up to a point). The implication is that (at least some) unnecessary processing will occur even if the user never looks at the contents of the closed af:showDetailItem.

In the next post in this series, still using JSPX pages and JDev 11.1.1.4.0, we'll look at how we can programmatically control the activation of the hidden region to stop unnecessary processing.

The final post in the series will look at what options are available to us under JDev 11.1.2 using Facelets.

Sample Application

A sample application containing solutions for part 1 of this series is available here.

Thanks

This post was inspired by Oracle's Steve Davelaar who highlighted the new region processing in JDev 11.1.2. The introduction of the new 11.1.2 feature led me to explore the default behaviour under 11.1.1.4.0 without the new feature.

Using Agile Practices to Create an Agile Presentation

Cary Millsap - Fri, 2011-06-17 13:25
What’s the best way to make a presentation on Agile practices? Practice Agile practices.

You could write a presentation “big bang” style, delivering version 1.0 in front of your big audience of 200+ people at Kscope 2011 before anybody has seen it. Of course, if you do it that way, you build a lot of risk into your product. But what else can you do?

You can execute the Agile practices of releasing early and often, allowing the reception of your product to guide its design. Whenever you find an aspect of your product that doesn’t get the enthusiastic reception you had hoped for, you fix it for the next release.

That’s one of the reasons that my release schedule for “My Case for Agile Methods” includes a little online webinar hosted by Red Gate Software next week. My release schedule is actually a lot more complicated than just one little pre-ODTUG webinar:

2011-04-15  Show key conceptual graphics to son (age 13)
2011-04-29  Review #1 of paper with employee #1
2011-05-18  Review #2 of paper with customer
2011-05-14  Review #3 of paper with employee #1
2011-05-18  Review #4 of paper with employee #2
2011-05-26  Review #5 of paper with employee #3
2011-06-01  Submit paper to ODTUG web site
2011-06-02  Review #6 of paper with employee #1
2011-06-06  Review #7 of paper with employee #3
2011-06-10  Submit revised paper to ODTUG web site
2011-06-13  Present “My Case for Agile Methods” to twelve people in an on-site customer meeting
2011-06-22  Present “My Case for Agile Methods” in an online webinar hosted by Red Gate Software
2011-06-27  Present “My Case for Agile Methods” at ODTUG Kscope 2011 in Long Beach, California
(By the way, the vast majority of the work here is done in Pages, not Keynote. I think using a word processor, not an operating system for slide projectors.)

Two Agile practices are key to everything I’ve ever done well: incremental design and rapid iteration. Release early, release often, and incorporate what you learn from real world use back into the product. The magic comes from learning how to choose wisely in two dimensions:
  1. Which feature do you include next?
  2. To whom do you release next?
The key is to show your work to other people. Yes, there’s tremendous value in practicing a presentation, but practicing without an audience merely reinforces; it doesn’t inform. What you need while you design something is information—specifically, you need the kind of information called feedback. Some of the feedback I receive generates some pretty energetic arguing. I need that to fortify my understanding of my own arguments so that I’ll be more likely to survive a good Q&A session on stage.

To lots of people who have seen teams run projects into the ground using what they call “Agile,” the word “Agile” has become a synonym for sloppy, irresponsible work habits. When you hear me talk about Agile, you’ll hear about practices that are highly disciplined and that actually require a lot of focus, dedication, commitment, practice, and plain old hard work to execute.

Agile, to me, is about injecting discipline into a process that is inevitably rife with unpredictable change.

Bug 11858963: optimization goes wrong with FIRST_ROWS_K (11g)?

Charles Schultz - Fri, 2011-06-17 08:07
At the beginning of March, I noticed some very odd things in a 10053 trace of a problem query I was working on. I also made some comments on Kerry Osborne's blog related to this matter. Oracle Support turned this into a new bug (11858963), unfortunately an aberration of Fix 4887636. I was told that this bug will not be fixed in 11gR1 (as 11.1.0.7 is the terminal release), but it will be included in future 11gR2 patches.

If you have access to SRs, you can follow the history in SR 3-314198695. For those that cannot, here is a short summary.

We had a query that suffered severe performance degradation after upgrading from 10.2.0.4 to 11.1.0.7. I attempted to use SQLT but initially ran into problems with the different versions of SQLT, so I did the next best thing and looked at the 10053 traces directly. After a bit of digging, I noticed several cases where the estimated cardinality was completely off. For example:


First K Rows: non adjusted N = 1916086.00, sq fil. factor = 1.000000
First K Rows: K = 10.00, N = 1916086.00
First K Rows: old pf = 0.1443463, new pf = 0.0000052
Access path analysis for FRRGRNL
***************************************
SINGLE TABLE ACCESS PATH (First K Rows)
Single Table Cardinality Estimation for FRRGRNL[FRRGRNL] 
Table: FRRGRNL Alias: FRRGRNL
Card: Original: 10.000000 Rounded: 10 Computed: 10.00 Non Adjusted: 10.00



So, the idea behind FIRST_ROWS_K is that you want the entire query to be optimized (Jonathan Lewis would spell it with an "s") for the retrieval of the first K rows. Makes sense; sounds like a good idea. The problem I had with this initial finding is that every single rowsource was being reduced to a cardinality of K. That is just wrong. Why is it wrong? Let's say you have a table with, um, 1916086 rows. Would you want the optimizer to pretend it has 10 rows and make it the driver of a Nested Loop? Not me. Or likewise, would you want the optimizer to think "Hey, look at that, 10 rows, I'll use an index lookup"? Why would you want FIRST_ROWS_K to completely obliterate ALL your cardinalities?

I realize I am exposing some of my naivete above. Mauro, my Support Analyst, corrected some of my false thinking with the following statement:

The tables are scaled under First K Rows during the different calculations (before the final join order is identified) but I cannot explain any further how / when / why.
Keep in mind that the CBO tweak -> cost -> decide (CBQT is an example)
Unfortunately we cannot discuss of the CBO algorithms / behaviora in more details, they are internal materials.
Regarding the plans yes, they are different, the "bad plan" is generated with FIRST_ROWS_10 in 11g
The "good" plan is generated in 10.2.0.4 (no matter which optimizer_mode you specify, FIRST_ROWS_10 is ignored because of the limitation) or in 11g when you disable 4887636 (that basically reverts the optimizer_mode to ALL_ROWS).
Basically the good plan has never been generated under FIRST_ROWS_10 since because of 4887636 FIRST_ROWS_10 has never been used before



I still need to wrap my head around "the limitation" in 10.2.0.4 and how we never used FIRST_ROWS_K for this particular query, but I believe that is exactly what Fix 4887636 was supposed to be addressing.

Here are some of the technical details from Bug 11858963:

]]potential performance degradation in fkr mode
]]with fix to bug4887636 enabled, if top query block
]]has single row aggregation
REDISCOVERY INFORMATION:
fkr mode, top query block contains blocking construct (i.e, single row aggregation). Plan improves with 4887636 turned off
WORKAROUND:
_fix_control='4887636:off'
I assume fkr mode is FIRST_ROWS_K, shortened to F(irst)KR(ows). The term "blocking construct" is most interesting - why would a single row aggregation be labeled as a "blocking construct"?

Also, this was my first introduction to turning a specific fix off. That in itself is kinda cool.
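For reference, the workaround can be applied at the session level, or scoped to a single statement via the OPT_PARAM hint; verify the effect on your own system before relying on it (some_table below is just a placeholder):

-- disable the fix for the whole session
ALTER SESSION SET "_fix_control" = '4887636:off';

-- or disable it for one statement only
SELECT /*+ OPT_PARAM('_fix_control' '4887636:off') */ *
FROM some_table;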

Fine tuning your logical to physical DB transform

Susan Duncan - Thu, 2011-06-16 07:23
When you use JDeveloper's class modeler to design your logical database you are able to run the DB Transform to get a physical DB model. But have you ever been frustrated by the limitations of the transform? For instance, being unable to control the:
  • name to be used for the physical table
  • attribute to be used as the PK column
  • datatype and settings to be applied to a specific attribute
  • foreign key or intersection column names
  • different datatypes to be created for different database types
In JDeveloper 11.1.2, additional fine-grained and reusable transform rules are available to you using a UML Profile. This is a UML2 concept, but for database designers using JDeveloper it means you can set many properties on the logical model that are used when the transform to physical model(s) is run.

There is a section in Part 2 of the tutorial Using Logical Models in UML for Database Development that details using the DB Profile, so I will not repeat the whole story here, but give you a flavour of the capabilities. Once you've applied the profile to the UML package of your class diagram, you can set stereotypes against any element in your diagram. In the image below you are looking at the stereotype properties of the empno attribute, which will become a column in the database. Note that this column is to be used as the primary key on the table. Also note the list of Oracle and non-Oracle datatypes. Here you can specify exactly how empno should be transformed when multiple physical databases may be required. If the property for any of these is not set then the default transform will be applied.

You could say that having this ability as part of the class model bridges the gap between this model (being used as a logical DB model) and the transformed physical database model - it provides some form of relative model capability.



Looking at another new feature in 11.1.2.0, the extract on the right shows other elements on the class diagram. I've created a primitive type (String25Type) and, using the stereotype, have specified how this type should be transformed. Now I can use that type on different classes in my diagram and the transformer will use it as necessary.

There are many other ways to fine-tune your transform; run through the tutorial and try them for yourself.

Hudson and me!

Susan Duncan - Thu, 2011-06-16 05:21
Over the past months I've been working more and more with Hudson, the continuous integration server. If you're familiar with Hudson then no doubt you are familiar with the changes that faced it in that time. If you're not - well, it's a long, well-documented story in the press and I will not bore you with it here!

But the most important thing is that Hudson is a great (free) continuous integration tool and continues to grow in popularity and status. Oracle became its supporter, taking over Sun's original open source project. As well as its community of users and developers, Oracle has a full-time team working on it, including me as Product Manager, and it recently started the process of moving to the Eclipse Foundation as a top-level project.

Internally we use Hudson across the organization for all manner of build and test jobs and I know that many of you do too.

In JDeveloper 11.1.2 we've added new features to Team Productivity Center (TPC) to integrate Hudson (or CruiseControl) build/test results into the IDE and relate those to code check-ins via TPC work items. You can see a quick demo of that here.

If you use Hudson I'd like to hear from you - in fact, I'd like to hear from you anyway! So please contact me at the usual Oracle address. Other ways to keep up with Hudson are through its mailing lists, wiki and of course twitter - @hudsonci

IRM Hotfolder update - seal docs automatically

Simon Thorpe - Tue, 2011-06-14 03:09

Another update of the IRM Hotfolder tool was announced a few days ago - 3.2.0.

The main enhancement this time is to preserve timestamps, ownership and file system permissions during the automated sealing process. Earlier versions would create sealed files with timestamps reflecting the time of sealing, and ownership attributed to the wrapper utility, etc. This version lets you preserve the properties of the file prior to sealing. 

The documentation has also been updated to clarify the permissions needed to use the utility.

For those who aren't familiar with the IRM Hotfolder, it is a simple utility that uses IRM APIs to seal and unseal files automatically by monitoring file system folders, WebDAV folders, SharePoint folders, application output folders, and so on.

Native String Aggregation in 11gR2

Duncan Mein - Tue, 2011-06-14 03:07
A fairly recent requirement meant that we had to send a bulk email to all users of each department from within our APEX application. We have 5000 records in our users table, and the last thing we wanted to do was send 5000 distinct emails (one email per user), both for performance reasons and to be kind to the mail queue / server.

In essence, I wanted to perform a type of string aggregation where I could group by department and produce a comma-delimited string of all email addresses of users within that department. With a firm understanding of the requirement, so began the hunt for a solution. Depending on what version of the database you are running, the desired result can be achieved in a couple of ways.

Firstly, the example objects.

CREATE TABLE app_user
(id NUMBER
,dept VARCHAR2 (255)
,username VARCHAR2(255)
,email VARCHAR2(255)
);

INSERT INTO app_user (id, dept, username, email)
VALUES (1,'IT','FRED','fred@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (2,'IT','JOE','joe@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (3,'SALES','GILL','gill@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (4,'HR','EMILY','emily@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (5,'HR','BILL','bill@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (6,'HR','GUS','gus@mycompany.com');

COMMIT;

If you are using 11gR2, you can use the new LISTAGG function as follows to perform your string aggregation natively:

SELECT dept
,LISTAGG(email,',') WITHIN GROUP (ORDER BY dept) email_list
FROM app_user
GROUP BY dept;

DEPT EMAIL_LIST
-----------------------------------------------------------------
HR emily@mycompany.com,bill@mycompany.com,gus@mycompany.com
IT fred@mycompany.com,joe@mycompany.com
SALES gill@mycompany.com
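One thing worth noting: the WITHIN GROUP clause controls the order of the values inside each delimited list, and ordering by dept (the GROUP BY column) leaves the order within each department effectively unspecified. To get each list in alphabetical order, order by email instead:

SELECT dept
,LISTAGG(email,',') WITHIN GROUP (ORDER BY email) email_list
FROM app_user
GROUP BY dept;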

If running 11gR1 or earlier, you can achieve the same result using XMLAGG as follows:

SELECT au.dept
      ,LTRIM(
         EXTRACT(
           XMLAGG(
             XMLELEMENT("EMAIL", ',' || email)
           ), '/EMAIL/text()'
         ), ','
       ) email_list
FROM app_user au
GROUP BY au.dept;

DEPT EMAIL_LIST
-----------------------------------------------------------------
HR emily@mycompany.com,bill@mycompany.com,gus@mycompany.com
IT fred@mycompany.com,joe@mycompany.com
SALES gill@mycompany.com

The introduction of native string aggregation in 11gR2 is a real bonus, and LISTAGG has already proved hugely useful within our applications.
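To close the loop on the original requirement, here is a minimal sketch of how the aggregated list could drive a single email per department from within APEX. This is an illustration only, not our production code: APEX_MAIL is the standard APEX mail API, but the from address, subject and body shown here are placeholders.

-- A minimal sketch: one APEX_MAIL.SEND call per department, using the
-- LISTAGG result as a comma-delimited recipient list. The addresses,
-- subject and body are placeholders, not values from our application.
BEGIN
  FOR r IN (SELECT dept
                  ,LISTAGG (email, ',') WITHIN GROUP (ORDER BY email) email_list
            FROM app_user
            GROUP BY dept)
  LOOP
    APEX_MAIL.SEND (p_to   => r.email_list
                   ,p_from => 'noreply@mycompany.com'
                   ,p_subj => 'Notice for the ' || r.dept || ' department'
                   ,p_body => 'Hello ' || r.dept || ' team, ...');
  END LOOP;
END;
/

One caveat: LISTAGG returns a VARCHAR2, so a department with enough users could exceed the 4000-byte limit; at that point the XMLAGG approach with a CLOB result, or chunking the recipient list, is worth considering.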

Back from Tallinn, Estonia

Rob van Wijk - Sat, 2011-06-11 16:22
This morning I arrived back from a trip to Tallinn. Oracle Estonia had given me the opportunity to present my SQL Masterclass seminar at their training center in Tallinn, on Thursday and Friday. Thank you to all those who spent two days listening to me. Here is a short story about my trip, including some photos. I arrived at Tallinn airport around 1PM local time on Wednesday. My hotel room was located at

Clouds Leak - IRM protects

Simon Thorpe - Sat, 2011-06-11 06:46

In a recent report, security professionals reported two leading fears relating to cloud services:

"Exposure of confidential or sensitive information to unauthorised systems or personnel"

"Confidential or sensitive data loss or leakage"

 

These fears are compounded by the fact that business users frequently sign themselves up to cloud services independently of whatever arrangements are made by corporate IT. Users are making personal choices to use the cloud as a convenient place to store and share files - and they are doing this for business information as well as personal files. In my own role, I was recently invited by a partner to review a sensitive business document using Googledocs. I just checked, and the file is still there weeks after the end of that particular project - because users don't often tidy up after themselves.

So, the cloud gives us new, seductively simple ways to scatter information around, and our choices are governed by convenience rather than compliance. And not all cloud services are equal when it comes to protecting data. Only a few weeks ago, it was reported that one popular service had amended its privacy assurance from "Nobody can see your private files..." to "Other [service] users cannot...", and that administrators were "prohibited" from accessing files - rather than "prevented". This story demonstrates that security pros are right to worry about exposure to unauthorised systems and personnel.

Added to this, the recent Sony incident highlights how lazy we are when picking passwords, and that services do not always protect passwords anything like as well as they should. Reportedly millions of passwords were stored as plain text, and analysis shows that users favoured very simple passwords, and used the same password for multiple services. No great surprise, but worrying to a security professional who knows that users are just as inconsiderate when using the cloud for collaboration.

No wonder then that security professionals put the loss or exposure of sensitive information firmly at the top of their list of concerns. They are faced with a triple-whammy - distribution without control, administration with inadequate safeguards, and authentication with weak password policy. A compliance nightmare.

So why not block users from using such services? Well, you can try, but from the users' perspective convenience out-trumps compliance and where there's a will there's a way. Blocking technologies find it really difficult to cover all the options, and users can be very inventive at bypassing blocks. In any case, users are making these choices because it makes them more productive, so the real goal, arguably, is to find a safe way to let people make these choices rather than maintain the pretence that you can stop them.

The relevance of IRM is clear. Users might adopt such services, but sealed files remain encrypted no matter where they are stored and no matter what mechanism is used to upload and download them. Cloud administrators have no more access to them than if they found them on a lost USB device. Further, a hacker might steal or crack your cloud passwords, but that has no bearing on your IRM service password, which is firmly under the control of corporate policy. And if policy changes such that the users no longer have rights to the files they uploaded, those files become inaccessible to them regardless of location. You can tidy up even if users do not.

Finally, the IRM audit trail can give insights into the locations where files are being stored.

So, IRM provides an effective safety net for your sensitive corporate information - an enabler that mitigates risks that are otherwise really hard to deal with.

EPM, CPM, EIM and other confusing acronyms – making sense out of Oracle’s data management offerings

Andrews Consulting - Thu, 2011-06-09 10:41
Oracle proclaims itself to be the leader in the Enterprise Performance Management (EPM) market – an assertion that is hard to dispute given the imprecise way in which that acronym is used. Like all of its competitors, Oracle has its own unique vocabulary and associated definitions of the jargon and acronyms it uses. For those […]
Categories: APPS Blogs

A Right Pig's Ear of a Circular Reference

Duncan Mein - Wed, 2011-06-08 16:31
If you have ever used a self-referencing table within Oracle to store hierarchical data (e.g. an organisation structure), you will undoubtedly have used CONNECT BY PRIOR to build your results tree. This is something we use on pretty much every project, as the organisation is very hierarchy-based.

Recently, the support cell sent me the details of a call they had received, asking me to take a look. Reading through the call, I noticed that the following Oracle error message was logged:

"ORA-01436: CONNECT BY loop in user data"

A quick look at the explanation of ORA-01436 made it clear that there was a circular reference in the organisation table, i.e. ORG_UNIT1 was the PARENT of ORG_UNIT2 and ORG_UNIT2 was the PARENT of ORG_UNIT1. In this example, both ORG_UNITs were the child and parent of each other. Clearly this was an issue, and it was quickly resolved by the addition of application-side and server-side validation to prevent it from re-occurring.

The outcome of this fix was a useful script that I keep to identify whether there are any circular references within a self-referencing table. The example below shows this script in action:
CREATE TABLE ORGANISATIONS (ORG_ID NUMBER
, PARENT_ORG_ID NUMBER
, NAME VARCHAR2(100));

INSERT INTO ORGANISATIONS VALUES (1, NULL, 'HQ');
INSERT INTO ORGANISATIONS VALUES (2, 1, 'SALES');
INSERT INTO ORGANISATIONS VALUES (3, 1, 'RESEARCH');
INSERT INTO ORGANISATIONS VALUES (4, 1, 'IT');
INSERT INTO ORGANISATIONS VALUES (5, 2, 'EUROPE');
INSERT INTO ORGANISATIONS VALUES (6, 2, 'ASIA');
INSERT INTO ORGANISATIONS VALUES (7, 2, 'AMERICAS');

COMMIT;

A quick tree walk query shows the visual representation of the hierarchy with no errors:
SELECT LPAD ('*', LEVEL, '*') || name tree
,LEVEL lev
FROM organisations
START WITH org_id = 1
CONNECT BY PRIOR org_id = parent_org_id;

TREE LEV
---------------------
*HQ 1
**SALES 2
***EUROPE 3
***ASIA 3
***AMERICAS 3
**RESEARCH 2
**IT 2

Now lets create a circular reference so that EUROPE is the parent of SALES:
UPDATE ORGANISATIONS SET parent_org_id = 5 WHERE NAME = 'SALES';
COMMIT;

Re-running the query from the very top of the tree completes, but gives an incorrect and incomplete result set:
TREE         LEV
---------------------
*HQ 1
**RESEARCH 2
**IT 2

If you are running Oracle Database 9i or earlier, this Experts Exchange article provides a nice procedural solution, and a quick modification to its PL/SQL gave me exactly the information I needed:
SET serveroutput on size 20000

DECLARE
  l_n NUMBER;
BEGIN
  -- Attempt a full tree walk starting from every org unit;
  -- ORA-01436 is raised for any unit involved in a loop.
  FOR rec IN (SELECT org_id FROM organisations)
  LOOP
    BEGIN
      SELECT COUNT (*)
      INTO l_n
      FROM (SELECT LPAD ('*', LEVEL, '*') || name tree
                  ,LEVEL lev
            FROM organisations
            START WITH org_id = rec.org_id
            CONNECT BY PRIOR org_id = parent_org_id);
    EXCEPTION
      WHEN OTHERS THEN
        IF SQLCODE = -1436 THEN
          DBMS_OUTPUT.put_line (rec.org_id || ' is part of a Circular Reference');
        END IF;
    END;
  END LOOP;
END;
/
As Buzz Killington pointed out in the comments section, Oracle Database 10g onwards introduces CONNECT BY NOCYCLE, which instructs Oracle to return rows even if they are involved in a self-referencing loop. When used with the CONNECT_BY_ISCYCLE pseudocolumn, this lets you identify erroneous relationships via SQL without the need to switch to PL/SQL. An example can be seen by executing the following query:

WITH my_data AS
( SELECT ORG.PARENT_ORG_ID
        ,ORG.ORG_ID
        ,ORG.NAME org_name
  FROM organisations org
)
SELECT SYS_CONNECT_BY_PATH (org_name, '/') tree
      ,PARENT_ORG_ID
      ,ORG_ID
      ,CONNECT_BY_ISCYCLE err
FROM my_data
CONNECT BY NOCYCLE PRIOR org_id = parent_org_id
ORDER BY 4 DESC;

TREE PARENT_ORG_ID ORG_ID ERR
------------------------------------------------
/SALES/EUROPE 2 5 1
/EUROPE/SALES 5 2 1
/EUROPE 2 5 0
/EUROPE/SALES/ASIA 2 6 0
/EUROPE/SALES/AMERICAS 2 7 0
/ASIA 2 6 0
/AMERICAS 2 7 0
/SALES 5 2 0
/SALES/ASIA 2 6 0
/SALES/AMERICAS 2 7 0
/HQ 1 0
/HQ/RESEARCH 1 3 0
/HQ/IT 1 4 0
/RESEARCH 1 3 0
/IT 1 4 0

Any value > 0 in the ERR column indicates that the row is involved in a self-referencing loop. This is a much neater and more performant way to achieve the desired result. Thanks to Buzz for pointing out a much better alternative to the original PL/SQL routine.
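As a footnote, the same NOCYCLE detection can be turned into the kind of server-side validation mentioned earlier. The sketch below is illustrative only - the trigger name and error number are invented for the example, and it is not the validation we actually shipped:

-- Illustrative only: a statement-level trigger that re-checks the table
-- for cycles whenever PARENT_ORG_ID changes. Being statement-level, it
-- can query ORGANISATIONS without hitting the mutating-table restriction.
CREATE OR REPLACE TRIGGER organisations_no_cycle
AFTER INSERT OR UPDATE OF parent_org_id ON organisations
DECLARE
  l_cycles NUMBER;
BEGIN
  SELECT COUNT (*)
  INTO l_cycles
  FROM (SELECT CONNECT_BY_ISCYCLE err
        FROM organisations
        CONNECT BY NOCYCLE PRIOR org_id = parent_org_id)
  WHERE err = 1;

  IF l_cycles > 0 THEN
    RAISE_APPLICATION_ERROR (-20001, 'Circular reference detected in ORGANISATIONS');
  END IF;
END;
/

The trade-off is that every change to PARENT_ORG_ID re-walks the whole hierarchy, which is fine for a table of this size but worth measuring on larger ones.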

Pen Test Tool for APEX

Duncan Mein - Wed, 2011-06-08 07:13
Just a quick plug for a cool penetration test tool that we have been using on-site for a few months now. The application is called Application Express Security Console and is developed by a company called Recx Ltd.

It can be used to identify areas of your APEX applications that are vulnerable to SQL injection, XSS, and inadequate access control, and it also suggests ways in which each vulnerability can be addressed.

We have now built the use of this tool into our formal release process, and it has definitely proved value for money for our organisation.

Why KScope?

Cary Millsap - Fri, 2011-06-03 10:09
Early this year, my friend Mike Riley from ODTUG asked me to write a little essay in response to the question, “Why Kscope?” that he could post on the ODTUG blog. He agreed that cross-posting would help the group reach more people, so I’ve reproduced my response to that question here. I’ll hope to see you at Kscope11 in Long Beach June 26–30. If you develop applications for Oracle systems, you need to be there.

MR: Why KScope?

CM: Most people in the Oracle world who know my name probably think of me as a database administrator. In my heart, I am a software designer and developer. Before my career with Oracle, I worked in the semiconductor industry as a language designer. I wrote compilers for a living. Designing and writing software has always been my professional true love. I’ve never strayed too far away from it; I’ve always found a reason to write software, no matter what my job has been. [Ed: Examples include the Oracle*APS suite and a compiler design project he did for Great West Life in the 1990s, the queueing theory models he worked on in the late 1990s, the Method R Profiler software (Cary wrote all the XSLT code), and finally today, he spends about half of his time designing and writing the MR Tools suite.]

My career as an Oracle performance specialist is really a natural extension of my software development background. It is still really weird to me that in the Oracle market, performance is regarded as a job done primarily by operations people instead of by development people. Developers control at least 90% of the leverage over how fast an application will be able to run. I think that performance became a DBA responsibility in the formative years of our Oracle world because so many early Oracle projects had DBA teams but no professional development teams.

Most of those big projects were people implementing big off-the-shelf applications like Oracle Financial and Manufacturing Applications (which grew into the Oracle E-Business Suite). The only developers that most of those implementation teams had were what I would call nonprofessional developers. Now, I don’t mean people who were in any way unprofessional. I mean they were predominantly businesspeople who had never been educated as software developers, but who’d been told that of course anybody could write computer programs in this new “fourth-generation language” called SQL.

Just about any time you implement a vendor’s highly customizable new application with 20,000+ database objects underneath it, you’re going to run into performance problems. Someone had to attend to those problems, and the DBAs and sysadmins were the only technical people anywhere near the project who could do it. Those DBAs and Oracle sysadmins were also the people who organized the early Oracle conferences, and I think this is where the topic of “performance tuning” became embedded into the DBA track.

The resulting problem that I still see today is that the topic became dominated by “tips and techniques”—lists of tricks that operational people could try to maybe make their systems go a little bit faster. The word “tuning” says it all. I almost never use the word except facetiously, because it’s a cheap imitation of what systems really need, which is performance optimization, which is what designers and developers of software are supposed to do. Even the evolution of Oracle tools for the performance analyst mirrors this post-production tips-and-techniques “tuning” mentality. That’s why most performance management tools you see today are predominantly oriented toward viewing performance from a system resource perspective (the DBA’s perspective), rather than the code path perspective (the developer’s perspective).

The whole key to performance is the application design and development team, especially when you realize that the performance of an application is not just its code path speed, but its overall interaction with the person using it. So many of the performance problems that I’ve found are caused by applications that are just stupid in how they’re designed to interact with me. For example, if you’ve seen my “Messed-up apps” presentation before, you might remember the self-service bus ticket kiosk that made me wait for over a minute while the application tallied the more-than-2,000 different bus trips for which I might want to buy a ticket. That’s an app with a broken specification. There’s nothing that a run-time operations team can do to make that application any fun to use (short of sending it back for redesign).

My goal as a software designer is not just to make software that runs quickly. My goal is also to make applications that are delightful to use. It’s the difference between an application that you use because you must and one that feels like it’s a necessary part of who you are. Making software like that is the kind of thing that a designer learns from studying Don Norman, Edward Tufte, Christopher Alexander, and Jonathan Ive. It’s a level of performance that just isn’t on the menu for operational run-time support staff to even think about, because it’s beyond their control.

So: why Kscope? The ODTUG conferences are the best places I can go in the Oracle market where I can be with people who think and talk about these things. …Or for that matter, who understand that these ideas even exist and deserve to be studied. KScope is just the right place for me to be.

Recent conference presentations

Raimonds Simanovskis - Thu, 2011-06-02 16:00

Recently I have not published any new posts, as I was busy with some new projects, and during May I attended several conferences, presenting at some of them. Here are the slides from those conferences. If you are interested in any of these topics, ask me to come and talk about them at your event as well :)

Agile Riga Day

In March I spoke at Agile Riga Day (organized by Agile Latvia) about my experience of, and recommendations for, adopting Agile practices in an iterative style.

RailsConf

In May I travelled to RailsConf in Baltimore, where I hosted the traditional Rails on Oracle Birds of a Feather session and gave an overview of how to contribute to the ActiveRecord Oracle enhanced adapter.

TAPOST

Then I participated in our local Theory and Practice of Software Testing conference, where I promoted the use of Ruby as a test scripting language.

RailsWayCon

And lastly I participated in the Euruko and RailsWayCon conferences in Berlin. At RailsWayCon my first presentation was about multidimensional data analysis with JRuby and the mondrian-olap gem. I also published the mondrian-olap demo project that I used during the presentation.

My second RailsWayCon presentation was about CoffeeScript, Backbone.js and Jasmine, which I have recently been using to build rich web user interfaces. This was quite a successful presentation, as there were many questions, and many participants were encouraged to try out CoffeeScript and Backbone.js. I also published the demo application that I used for code samples during the presentation.

Next conferences

Now I will rest from conferences for some time :) But then I will attend FrozenRails in Helsinki, and I will present at Oracle OpenWorld in San Francisco. See you there!

Categories: Development

Opensource WebRTC for Browser 2 Browser communication Coming up

Khanderao Kand - Thu, 2011-06-02 13:56
WebRTC, a new open source project for browser-to-browser communication, is making progress. http://sites.google.com/site/webrtc/ has been launched, and the technology will go through W3C standardization; it may become part of HTML5. As far as I know, Google may adopt it, and many browsers would then support it. This browser-to-browser communication, with no server involved, could replace traditional P2P communication, including chat and voice.

Here is the WebRTC architecture diagram:

[WebRTC architecture diagram - available on the project site]
For developers : http://sites.google.com/site/webrtc/reference
However, it is not yet ready: the current demo still needs a demo server, but that will change soon.

If you want to join the effort: http://sites.google.com/site/webrtc/build

Growing Risks: Mobiles, Clouds, and Social Media

Simon Thorpe - Thu, 2011-06-02 07:05

The International Information Systems Security Certification Consortium, Inc., (ISC)²®, has just published a report conducted on its behalf by Frost & Sullivan.

The report highlights three growing trends that security professionals are, or should be, worried about - mobile device proliferation, cloud computing, and social media.

Mobile devices are highlighted because survey respondents ranked them second in terms of threat (behind application vulnerabilities). Frost & Sullivan comment that "With so many mobile devices in the enterprise, defending corporate data from leaks either intentionally or via loss or theft of a device is challenging.". Most respondents reported that they have policies and technologies in place, with rights management being reported as part of the technology mix.

Cloud computing was ranked considerably lower by respondents, but Frost & Sullivan highlighted it as a growing concern for which the security professionals consistently cited the need for more training and awareness.

The security professionals also reported that their two most feared cloud-related threats are:

  • "Exposure of confidential or sensitive information to unauthorised systems or personnel"
  • "Confidential or sensitive data loss or leakage"

These two concerns were ranked head and shoulders above access controls, cyber attacks, disruptions to operations, and concerns about compliance audits and forensic reporting.

Rather contrarily, the third trend is highlighted because respondents reported that it is not a major concern. Frost & Sullivan observe that many security professionals appear to be under-estimating the risks of social computing, with 28% of respondents saying that they impose no restrictions at all on the use of social media, and most imposing few restrictions.

So, interesting reading although no great surprises - and reason enough for me to write three pieces on what Oracle IRM brings to the party for each of these three challenging trends.

A comment on mobile device proliferation is already available here.

A comment on cloud adoption is available here.

KEEP Pool

Jeff Hunter - Wed, 2011-06-01 17:11
I have a persnickety problem with a particular table being aged out of the buffer cache. I have one query that runs on a defined schedule. Sometimes when it is run, it does a lot of physical reads for this table. Other times, it does no physical reads. So I decided to play around with a different buffer pool and let this table age out of cache on its own terms rather than competing in the
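For anyone wanting to experiment along the same lines, here is a minimal sketch of the standard KEEP pool setup. The 100M size and the MY_HOT_TABLE name are illustrative placeholders, not details from the original post:

-- Illustrative sketch: carve out a KEEP cache, then assign the table to it.
ALTER SYSTEM SET db_keep_cache_size = 100M;

ALTER TABLE my_hot_table STORAGE (BUFFER_POOL KEEP);

-- Verify the assignment:
SELECT table_name, buffer_pool
FROM user_tables
WHERE table_name = 'MY_HOT_TABLE';

Blocks in the KEEP pool still age out under memory pressure, so the cache needs to be sized to hold the table's working set for this to behave as intended.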
