Feed aggregator

Tutorials zur SQL Model Clause

Rob van Wijk - Mon, 2011-07-11 14:59
Marcus Matzberger has translated five of my posts related to the SQL Model Clause for the German-speaking Oracle community. Those posts now contain links to the German versions, and I've listed them here for convenience as well:
  • SQL Model Clause Tutorial Part One
  • SQL Model Clause Tutorial Part Two
  • SQL Model Clause Tutorial Part Three
  • Calculating probabilities with N throws of a die
  • Choosing Between SQL

SOA 11g PS3/PS4: Significant Purging Performance Improvement

Marc Kelderman - Sat, 2011-07-09 02:16
Create the following index in the *_soainfra schema to improve purging significantly. And by significant I mean SIGNIFICANT in upper-case :-). We had loads of instances to purge, 100K+. Purging first took more than an hour; after applying the index it took minutes...

CREATE INDEX DLV_MESSAGE_CIKEY_IDX1 ON DLV_MESSAGE
(CIKEY)
LOGGING
TABLESPACE SOAINFRA
PCTFREE 10
INITRANS 2
MAXTRANS 255
STORAGE (
INITIAL 64K
NEXT 1M
MINEXTENTS 1
MAXEXTENTS UNLIMITED
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
NOPARALLEL;

SOA Suite 11g: AQ a tuning tip

Marc Kelderman - Fri, 2011-07-08 06:28
When you use AQ to dequeue messages to the SOA Suite, keep in mind the maximum number of connections to your database.

A 4-node OSB cluster that dequeues the messages and sends them to a 4-node SOA 11g cluster can cause the SOA 11g servers to get stuck.

The server can get overloaded in the following situation:
  • There are a lot of messages in the queue, for example 8000+ messages.
  • You stopped SOA 11g for maintenance.
  • You disabled the AQ-Proxy service in OSB.
When you start the SOA 11g environment, you will notice that all servers and their services/processes are running normally. But...

The issue arises when you enable the AQ-Proxy service in OSB again. All OSB servers in the cluster will then dequeue the messages and send them to SOA 11g. SOA 11g tries to process all the services, but eventually fails.

This is because it reaches the maximum number of sessions in the database. This is not shown in the log files: the SOA 11g log file says that the datasource "SOALocalTxDataSource" is suspended, which is not the case if you look in the WebLogic console (!). The database alert log, however, reports that the maximum number of sessions has been reached.

This overloading can be solved by setting the maximum connections to the database for the datasources "SOALocalTxDataSource" and "SOADataSource".

In our case we have a 4-node cluster. This results in:

SUM(
    ("SOALocalTxDataSource" -> Maximum Capacity * 4)
   +
    ("SOADataSource" -> Maximum Capacity * 4)
)

If Maximum Capacity is 40, the total reaches 320 sessions (!) when you enable the proxy service on OSB.

Make sure you can create enough sessions to your database.
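As a sanity check, the session arithmetic above can be expressed in a few lines (a hypothetical helper; the node count and capacity values are the examples from this post):

```java
public class SessionBudget {
    // Total DB sessions the cluster can open for the SOA datasources:
    // each of the two datasources can open up to its Maximum Capacity
    // connections on every node in the cluster.
    static int totalSessions(int nodes, int localTxMaxCapacity, int soaMaxCapacity) {
        return (localTxMaxCapacity * nodes) + (soaMaxCapacity * nodes);
    }

    public static void main(String[] args) {
        // 4-node cluster, Maximum Capacity 40 on both datasources -> 320
        System.out.println(totalSessions(4, 40, 40));
    }
}
```

Compare that total against the PROCESSES/SESSIONS limits of your database instance before enabling the proxy service.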

Core ADF11: UIShell with Menu Driving a Dynamic Region

JHeadstart - Thu, 2011-07-07 21:44

In this old post in the Core ADF11 series, I explained the various options you have in designing the page and taskflow structure. The approach that in my opinion maximizes flexibility and reusability is to build the application using bounded taskflows with page fragments. You then have various ways to disclose these task flows to the user. You can use dynamic tabs as described in this post. You can also embed the taskflows using a dynamic region in a page. The application menu then drives the content of the dynamic region: clicking a menu option will load another taskflow in the dynamic region.

The initial drawback of this approach is that it adds some complexity:

  • You can no longer use standard JSF navigation with your menu; there are no pages to navigate to, only regions.
  • The XMLMenuModel, an easy way to define your menu structure in XML, cannot be used as-is. The selected menu entry when using the XMLMenuModel is based on the current page, and in our design, the whole application consists of only one page, the UIShell page.
  • The dynamic region taskflow binding should contain a list of all parameters of all taskflows that can be displayed in the dynamic region.

    This is a rather ugly design, and the developer of the UIShell page would need to know all the parameters of all taskflows that might be displayed in the dynamic region. A cleaner implementation is to use the parameter map property on the taskflow binding. However, when using the parameter map, you need to do the housekeeping of changed parameter values yourself when you want the taskflow to refresh when parameters change. In other words, just specifying Refresh=ifNeeded on the taskflow binding no longer works, because ADF does not detect changes in a parameter map.

Fortunately, you can address this complexity quite easily by creating some simple, yet powerful infrastructure classes that hide most of the complexity from your development team. (The full source of these classes can be found in the sample application, download links are at the bottom of this post)

  • A DynamicRegionManager class that keeps track of the current task flow and current parameter map 
  • A TaskFlowConfigBean for each task flow that contains the actual task flow document path, the task flow parameters and a flag whether the parameter values have been changed.
  • A RegionNavigationHandler that subclasses the standard navigation handler to provide JSF-like navigation to a region taskflow.
  • A RegionXMLMenuModel class that subclasses the standard XMLMenuModel class to ensure the proper menu tab is selected based on the currently displayed taskflow in the dynamic region. 

The following picture illustrates how this UIShell concept works at runtime.

The UIShell page (with extension .jsf in JDeveloper 11.1.2 and extension .jspx in JDeveloper 11.1.1.x) contains a dynamic region. The taskflow binding in the page definition of UIShell gets the currently displayed taskflow from the DynamicRegionManager class, which is registered as a managed bean under the name mainRegionManager. The actual task flow id and task flow parameters are supplied by the TaskFlowConfigBean. The DynamicRegionManager manages the current taskflow based on a logical name, for example Jobs. When the method setCurrentTaskFlowName is called on the DynamicRegionManager with value Jobs (we will see later how the RegionNavigationHandler is used to do this), the DynamicRegionManager looks up the corresponding TaskFlowConfigBean by suffixing the Jobs task flow name with TaskFlowConfig. The methods getCurrentTaskFlowId, getCurrentParamMap and currentParamMapChanged then obtain and return the correct values using the JobsTaskFlowConfig bean.
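In outline, the lookup convention described above can be pictured as follows. This is a simplified sketch, not the actual class from the sample application: the real DynamicRegionManager resolves managed beans through EL, while a plain map stands in here, and the task flow id value is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the naming convention: the logical task flow name ("Jobs")
// is suffixed with "TaskFlowConfig" to find the matching config bean.
public class DynamicRegionManager {
    private final Map<String, String> taskFlowConfigs = new HashMap<>();
    private String currentTaskFlowName;

    public void registerConfig(String beanName, String taskFlowId) {
        taskFlowConfigs.put(beanName, taskFlowId);
    }

    public void setCurrentTaskFlowName(String name) {
        this.currentTaskFlowName = name;
    }

    public String getCurrentTaskFlowId() {
        // Suffix the logical name to derive the config bean name
        return taskFlowConfigs.get(currentTaskFlowName + "TaskFlowConfig");
    }

    public static void main(String[] args) {
        DynamicRegionManager mgr = new DynamicRegionManager();
        // Hypothetical task flow document path and id
        mgr.registerConfig("JobsTaskFlowConfig",
                "/WEB-INF/jobs-task-flow.xml#jobs-task-flow");
        mgr.setCurrentTaskFlowName("Jobs");
        System.out.println(mgr.getCurrentTaskFlowId());
    }
}
```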

Using this technique, you can configure your dynamic region completely declaratively; there is no need to write any Java code. All you need to do is configure the mainRegionManager and task flow config beans, as shown below.

In this sample, I configured the required managed beans in adfc-config.xml. When using ADF Libraries that contain task flows you want to show in the UIShell dynamic region, it is more elegant to define the TaskFlowConfigBean inside the ADF library. I usually define this bean in the adfc-config file of the task flow itself, below the <task-flow-definition> section. It needs to be outside this section, otherwise the bean cannot be found by the DynamicRegionManager bean that is defined in the unbounded task flow together with the UIShell page. The advantage of this approach is that the config bean is placed in the same file as the task flow it refers to; the disadvantage is that beans defined outside the <task-flow-definition> section are not visible in the overview tab of a bounded task flow.

To set the current task flow name on the DynamicRegionManager, we use a custom RegionNavigationHandler class that you register in the faces-config.xml.

This class extends the default navigation handler and overrides the handleNavigation method. The code is shown below. If the outcome contains a colon, the part after the colon is the task flow name that should be set in the dynamic region.
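The essential outcome-parsing logic can be sketched like this. To stay self-contained it is shown as a plain helper rather than the actual NavigationHandler subclass from the sample application; the "uishell:Jobs" outcome format is the one described in this post.

```java
public class RegionOutcomeParser {
    // For an outcome like "uishell:Jobs", the part after the colon is the
    // logical task flow name to set on the DynamicRegionManager; outcomes
    // without a colon are left to normal JSF navigation (null here).
    static String extractTaskFlowName(String outcome) {
        if (outcome == null) return null;
        int colon = outcome.indexOf(':');
        return colon < 0 ? null : outcome.substring(colon + 1);
    }

    public static void main(String[] args) {
        System.out.println(extractTaskFlowName("uishell:Jobs")); // Jobs
        System.out.println(extractTaskFlowName("globalHome"));   // null
    }
}
```

In the real handler, a non-null result would be passed to setCurrentTaskFlowName before delegating to the wrapped navigation handler.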


With this class in place, we can use the action property on command components again to do region navigation, just like we are used to for normal JSF navigation, or navigation between page fragments inside a bounded task flow. The complete flow is as follows:

  • The menu item has the action property set to uishell:Jobs.
  • The region navigation handler sets the current task flow name on the dynamic region manager to Jobs (and navigates to the UIShell page if needed).
  • The dynamic region manager picks up the current task flow id and current parameters from the JobsTaskFlowConfig bean.

The last part of the puzzle is the RegionXMLMenuModel class that subclasses the standard XMLMenuModel class. This class overrides the getFocusRowKey method to return the focus path based on the current task flow region name.
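The core of that override reduces to a lookup from the currently displayed task flow name to the menu focus path, roughly as below. This is a simplified stand-in for the XMLMenuModel subclass, and the focus-path value is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for RegionXMLMenuModel.getFocusRowKey(): instead of deriving the
// selected menu entry from the current page (there is only one page, UIShell),
// derive it from the task flow currently shown in the dynamic region.
public class RegionMenuFocus {
    private final Map<String, String> focusPathByTaskFlowName = new HashMap<>();

    public void map(String taskFlowName, String focusPath) {
        focusPathByTaskFlowName.put(taskFlowName, focusPath);
    }

    public String getFocusRowKey(String currentTaskFlowName) {
        return focusPathByTaskFlowName.get(currentTaskFlowName);
    }

    public static void main(String[] args) {
        RegionMenuFocus menu = new RegionMenuFocus();
        menu.map("Jobs", "menu.jobs"); // hypothetical focus path id
        System.out.println(menu.getFocusRowKey("Jobs"));
    }
}
```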

With this class configured as the menu model managed bean, the menu will correctly display the selected tab.

In a future post I will discuss how the same concepts can be applied to a UIShell page with dynamic tabs.

Downloads:

Categories: Development

Purging SOA Suite 11g, the Extreme Edition

Marc Kelderman - Thu, 2011-06-30 18:10
Did you ever wonder how purging works in SOA 11g? I looked into the scripts from PS1 through PS4. In PS3 the concept was redesigned, with a focus on scheduling and parallel execution. My experiences are mixed: parallel execution is nice, but adding some indexes solved our performance issue with purging 400K+ instances. A document has been written describing the best purging approach.

It does not describe my approach :-). I think the PL/SQL packages are too complicated. This is due to the fact that the data model is not documented and does not contain (optional) foreign keys. This makes the data model hard to understand when you run into issues with it.

We discovered, in our production environment, that not all instances were purged. It was not possible to force the purge script to delete all the instances. Discussions with Oracle resulted in the advice that purging can be done via Enterprise Manager (EM). With EM you can delete them... one by one. If you have 40K+ instances, this is not the way to go.

So I dove into the various purging scripts from PS4 and tried to understand them. I wanted a script that deletes ALL instances older than X days. And yes, by deleting all instances, I could lose messages.

So this is my current result: a SQL*Plus script that can be executed at any time (scheduled with crontab or the Windows scheduler).

spool soa11g_purge_script.log

set echo on
set verify on
set timing on

-- delete all instances older than 3 days
define days=3

alter session set current_schema=kim1_soainfra;
alter session set nls_date_format='yyyymmdd hh24mi';

variable cur_datetime varchar2(13)
exec select (sysdate - &days) into :cur_datetime from dual;

prompt Purging data until:
print cur_datetime
--
-- Purge the MEDIATOR data
--
delete from mediator_case_instance a where exists (select b.id from mediator_instance b where b.id = a.instance_id and b.created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from mediator_audit_document a where exists (select b.id from mediator_instance b where b.id = a.instance_id and b.created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from mediator_callback a where exists (select b.id from mediator_instance b where b.id = a.instance_id and b.created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from mediator_group_status a where exists (select b.id from mediator_instance b where b.group_id = a.group_id and b.created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi'));

delete from mediator_payload where modify_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from mediator_deferred_message where creation_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from mediator_resequencer_message where creation_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from mediator_case_detail where created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from mediator_correlation where creation_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from mediator_instance where created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi');
commit;

--
-- Purge the BPEL data
--
delete from headers_properties where modify_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from ag_instance where creation_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from audit_counter where ci_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from audit_trail where ci_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from audit_details where ci_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from ci_indexes where ci_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from work_item where creation_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from wi_fault where creation_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from xml_document_ref a where exists (select b.document_id from xml_document b where b.document_id = a.document_id and b.doc_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from xml_document where doc_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from document_dlv_msg_ref where dlv_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from document_ci_ref where ci_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from dlv_subscription where ci_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from dlv_message where receive_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from rejected_msg_native_payload where rm_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from instance_payload where created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from test_details a where exists (select b.cikey from cube_instance b where b.cikey = a.cikey and b.creation_date < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from cube_scope where modify_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from cube_instance where creation_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
commit;

--
-- Purge the BPM data
--
delete from bpm_audit_query where create_time < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from bpm_measurement_actions where ci_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from bpm_measurement_action_exceps where ci_partition_date < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from bpm_cube_auditinstance where cipartitiondate < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from bpm_cube_taskperformance where creationdate < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from bpm_cube_processperformance where creationdate < to_date(:cur_datetime, 'yyyymmdd hh24mi');
commit;

--
-- Purge the WORKFLOW data
--
delete from wftask_tl a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wftaskhistory a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wftaskhistory_tl a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wfcomments a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wfmessageattribute a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wfattachment a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wfassignee a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wfreviewer a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wfcollectiontarget a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wfroutingslip a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wfnotification a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wftasktimer a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wftaskerror a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wfheaderprops a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wfevidence a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wftaskassignmentstatistic a where exists (select b.taskid from wftask b where a.taskid = b.taskid and b.createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from wftaskaggregation where taskcreateddate < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from wftask where createddate < to_date(:cur_datetime, 'yyyymmdd hh24mi');
commit;

--
-- Purge the COMPOSITE data
--
delete from composite_sensor_value where date_value < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from composite_instance_assoc where created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from attachment c where exists (select a.key from attachment_ref a where a.key = c.key and exists (select b.ecid from composite_instance b where b.ecid = a.ecid and b.created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi')));
delete from attachment_ref a where exists (select b.ecid from composite_instance b where b.ecid = a.ecid and b.created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi'));
delete from composite_instance_fault where created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from reference_instance where created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from component_instance where created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi');
delete from composite_instance where created_time < to_date(:cur_datetime, 'yyyymmdd hh24mi');

commit;

--
-- Reclaim disk space
--
alter table mediator_case_instance enable row movement;
alter table mediator_case_instance shrink space;
alter table mediator_case_instance disable row movement;
alter table mediator_audit_document enable row movement;
alter table mediator_audit_document shrink space;
alter table mediator_audit_document disable row movement;
alter table mediator_callback enable row movement;
alter table mediator_callback shrink space;
alter table mediator_callback disable row movement;
alter table mediator_group_status enable row movement;
alter table mediator_group_status shrink space;
alter table mediator_group_status disable row movement;
alter table mediator_payload enable row movement;
alter table mediator_payload shrink space;
alter table mediator_payload disable row movement;
alter table mediator_deferred_message enable row movement;
alter table mediator_deferred_message shrink space;
alter table mediator_deferred_message disable row movement;
alter table mediator_resequencer_message enable row movement;
alter table mediator_resequencer_message shrink space;
alter table mediator_resequencer_message disable row movement;
alter table mediator_case_detail enable row movement;
alter table mediator_case_detail shrink space;
alter table mediator_case_detail disable row movement;
alter table mediator_correlation enable row movement;
alter table mediator_correlation shrink space;
alter table mediator_correlation disable row movement;
alter table mediator_instance enable row movement;
alter table mediator_instance shrink space;
alter table mediator_instance disable row movement;
alter table headers_properties enable row movement;
alter table headers_properties shrink space;
alter table headers_properties disable row movement;
alter table ag_instance enable row movement;
alter table ag_instance shrink space;
alter table ag_instance disable row movement;
alter table audit_counter enable row movement;
alter table audit_counter shrink space;
alter table audit_counter disable row movement;
alter table audit_trail enable row movement;
alter table audit_trail shrink space;
alter table audit_trail disable row movement;
alter table audit_details enable row movement;
alter table audit_details shrink space;
alter table audit_details disable row movement;
alter table ci_indexes enable row movement;
alter table ci_indexes shrink space;
alter table ci_indexes disable row movement;
alter table work_item enable row movement;
alter table work_item shrink space;
alter table work_item disable row movement;
alter table wi_fault enable row movement;
alter table wi_fault shrink space;
alter table wi_fault disable row movement;
alter table xml_document_ref enable row movement;
alter table xml_document_ref shrink space;
alter table xml_document_ref disable row movement;
alter table document_dlv_msg_ref enable row movement;
alter table document_dlv_msg_ref shrink space;
alter table document_dlv_msg_ref disable row movement;
alter table document_ci_ref enable row movement;
alter table document_ci_ref shrink space;
alter table document_ci_ref disable row movement;
alter table dlv_subscription enable row movement;
alter table dlv_subscription shrink space;
alter table dlv_subscription disable row movement;
alter table dlv_message enable row movement;
alter table dlv_message shrink space;
alter table dlv_message disable row movement;
alter table rejected_msg_native_payload enable row movement;
alter table rejected_msg_native_payload shrink space;
alter table rejected_msg_native_payload disable row movement;
alter table instance_payload enable row movement;
alter table instance_payload shrink space;
alter table instance_payload disable row movement;
alter table test_details enable row movement;
alter table test_details shrink space;
alter table test_details disable row movement;
alter table cube_scope enable row movement;
alter table cube_scope shrink space;
alter table cube_scope disable row movement;
alter table cube_instance enable row movement;
alter table cube_instance shrink space;
alter table cube_instance disable row movement;
alter table bpm_audit_query enable row movement;
alter table bpm_audit_query shrink space;
alter table bpm_audit_query disable row movement;
alter table bpm_measurement_actions enable row movement;
alter table bpm_measurement_actions shrink space;
alter table bpm_measurement_actions disable row movement;
alter table bpm_measurement_action_exceps enable row movement;
alter table bpm_measurement_action_exceps shrink space;
alter table bpm_measurement_action_exceps disable row movement;
alter table bpm_cube_auditinstance enable row movement;
alter table bpm_cube_auditinstance shrink space;
alter table bpm_cube_auditinstance disable row movement;
alter table bpm_cube_taskperformance enable row movement;
alter table bpm_cube_taskperformance shrink space;
alter table bpm_cube_taskperformance disable row movement;
alter table bpm_cube_processperformance enable row movement;
alter table bpm_cube_processperformance shrink space;
alter table bpm_cube_processperformance disable row movement;
alter table wftask_tl enable row movement;
alter table wftask_tl shrink space;
alter table wftask_tl disable row movement;
alter table wftaskhistory enable row movement;
alter table wftaskhistory shrink space;
alter table wftaskhistory disable row movement;
alter table wftaskhistory_tl enable row movement;
alter table wftaskhistory_tl shrink space;
alter table wftaskhistory_tl disable row movement;
alter table wfcomments enable row movement;
alter table wfcomments shrink space;
alter table wfcomments disable row movement;
alter table wfmessageattribute enable row movement;
alter table wfmessageattribute shrink space;
alter table wfmessageattribute disable row movement;
alter table wfattachment enable row movement;
alter table wfattachment shrink space;
alter table wfattachment disable row movement;
alter table wfassignee enable row movement;
alter table wfassignee shrink space;
alter table wfassignee disable row movement;
alter table wfreviewer enable row movement;
alter table wfreviewer shrink space;
alter table wfreviewer disable row movement;
alter table wfcollectiontarget enable row movement;
alter table wfcollectiontarget shrink space;
alter table wfcollectiontarget disable row movement;
alter table wfroutingslip enable row movement;
alter table wfroutingslip shrink space;
alter table wfroutingslip disable row movement;
alter table wfnotification enable row movement;
alter table wfnotification shrink space;
alter table wfnotification disable row movement;
alter table wftasktimer enable row movement;
alter table wftasktimer shrink space;
alter table wftasktimer disable row movement;
alter table wftaskerror enable row movement;
alter table wftaskerror shrink space;
alter table wftaskerror disable row movement;
alter table wfheaderprops enable row movement;
alter table wfheaderprops shrink space;
alter table wfheaderprops disable row movement;
alter table wfevidence enable row movement;
alter table wfevidence shrink space;
alter table wfevidence disable row movement;
alter table wftaskassignmentstatistic enable row movement;
alter table wftaskassignmentstatistic shrink space;
alter table wftaskassignmentstatistic disable row movement;
alter table wftaskaggregation enable row movement;
alter table wftaskaggregation shrink space;
alter table wftaskaggregation disable row movement;
alter table wftask enable row movement;
alter table wftask shrink space;
alter table wftask disable row movement;
alter table composite_sensor_value enable row movement;
alter table composite_sensor_value shrink space;
alter table composite_sensor_value disable row movement;
alter table composite_instance_assoc enable row movement;
alter table composite_instance_assoc shrink space;
alter table composite_instance_assoc disable row movement;
alter table attachment enable row movement;
alter table attachment shrink space;
alter table attachment disable row movement;
alter table attachment_ref enable row movement;
alter table attachment_ref shrink space;
alter table attachment_ref disable row movement;
alter table composite_instance_fault enable row movement;
alter table composite_instance_fault shrink space;
alter table composite_instance_fault disable row movement;
alter table reference_instance enable row movement;
alter table reference_instance shrink space;
alter table reference_instance disable row movement;
alter table component_instance enable row movement;
alter table component_instance shrink space;
alter table component_instance disable row movement;
alter table composite_instance enable row movement;
alter table composite_instance shrink space;
alter table composite_instance disable row movement;

alter table audit_details modify lob (bin) (shrink space);
alter table composite_instance_fault modify lob (error_message) (shrink space);
alter table composite_instance_fault modify lob (stack_trace) (shrink space);
alter table cube_scope modify lob (scope_bin) (shrink space);
alter table reference_instance modify lob (error_message) (shrink space);
alter table reference_instance modify lob (stack_trace) (shrink space);
alter table test_definitions modify lob (definition) (shrink space);
alter table wi_fault modify lob (message) (shrink space);
alter table xml_document modify lob (document) (shrink space);

alter index ad_pk rebuild online;
alter index at_pk rebuild online;
alter index ci_creation_date rebuild online;
alter index ci_custom3 rebuild online;
alter index ci_ecid rebuild online;
alter index ci_name_rev_state rebuild online;
alter index ci_pk rebuild online;
alter index composite_instance_cidn rebuild online;
alter index composite_instance_co_id rebuild online;
alter index composite_instance_created rebuild online;
alter index composite_instance_ecid rebuild online;
alter index composite_instance_id rebuild online;
alter index composite_instance_state rebuild online;
alter index cs_pk rebuild online;
alter index dm_conversation rebuild online;
alter index dm_pk rebuild online;
alter index doc_dlv_msg_guid_index rebuild online;
alter index doc_store_pk rebuild online;
alter index ds_conversation rebuild online;
alter index ds_conv_state rebuild online;
alter index ds_fk rebuild online;
alter index ds_pk rebuild online;
alter index header_properties_pk rebuild online;
alter index instance_payload_key rebuild online;
alter index reference_instance_cdn_state rebuild online;
alter index reference_instance_co_id rebuild online;
alter index reference_instance_ecid rebuild online;
alter index reference_instance_id rebuild online;
alter index reference_instance_state rebuild online;
alter index reference_instance_time_cdn rebuild online;
alter index state_type_date rebuild online;
alter index wf_crdate_cikey rebuild online;
alter index wf_crdate_type rebuild online;
alter index wf_fk2 rebuild online;
alter index wifault_pk rebuild online;
alter index wi_expired rebuild online;
alter index wi_key_crdate_state rebuild online;
alter index wi_pk rebuild online;
alter index wi_stranded rebuild online;
alter index xml_doc_reference_pk rebuild online;

spool off

The 6th Annual Silicon Valley Code Camp

Peeyush Tugnawat - Thu, 2011-06-30 09:49
Silicon Valley Code Camp: http://www.siliconvalley-codecamp.com/Default.aspx


Some More New ADF Features in JDeveloper 11.1.2

JHeadstart - Thu, 2011-06-23 22:39

The official list of new features in JDeveloper 11.1.2 is documented here. While playing with JDeveloper 11.1.2 and scanning the web user interface developer's guide for 11.1.2, I noticed some additional new features in ADF Faces; small, but they might come in handy:

  • You can use the af:formatString and af:formatNamed constructs in EL expressions to use substitution variables. For example:
    <af:outputText value="#{af:formatString('The current user is: {0}',someBean.currentUser)}"/>
    See section 3.5.2 in the web user interface guide for more info.

  • A new ADF Faces client behavior tag: af:checkUncommittedDataBehavior. See section 20.3 in the web user interface guide for more info. For this tag to work, you also need to set the uncommittedDataWarning property on the af:document tag. And this property has quite some issues, as you can read here. I did a quick test: the alert is shown for a button on the same page; however, if you have a menu in a shell page with dynamic regions, clicking another menu item does not raise the alert when you have pending changes in the currently displayed region. For now, the JHeadstart implementation of pending changes still seems the best choice (I will blog about that soon).

  • New properties on the af:document tag: smallIconSource creates a so-called favicon that is displayed in front of the URL in the browser address bar, and largeIconSource specifies the icon used by a mobile device when bookmarking the page to the home page. See section 9.2.5 in the web user interface guide for more info. Also note the failedConnectionText property, which I didn't know about but which was already available in JDeveloper 11.1.1.4.

  • The af:showDetail tag has a new property handleDisclosure which you can set to client for faster rendering.

  • In JDeveloper 11.1.1.x, an expression like #{bindings.JobId.inputValue} would return the internal list index number when JobId was a list binding. To get the actual JobId attribute value, you needed to use #{bindings.JobId.attributeValue}. In JDeveloper 11.1.2 this is no longer needed, the #{bindings.JobId.inputValue} expression will return the attribute value corresponding with the selected index in the choice list.
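The af:formatString substitution in the first bullet uses positional `{0}`-style placeholders, much like java.text.MessageFormat. A plain-Java sketch of the equivalent formatting (assuming MessageFormat-style semantics; the user name is just an illustrative value):

```java
import java.text.MessageFormat;

public class FormatStringSketch {

    // Rough plain-Java equivalent of:
    //   #{af:formatString('The current user is: {0}', someBean.currentUser)}
    static String formatUser(String user) {
        return MessageFormat.format("The current user is: {0}", user);
    }

    public static void main(String[] args) {
        System.out.println(formatUser("jdoe")); // The current user is: jdoe
    }
}
```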
Did you discover other "hidden" new features? Please add them as comment to this blog post so everybody can benefit. 

Categories: Development

JDev 11.1.2.0.0 – af:showDetailItems and af:regions – the power of Facelets – part 3

Chris Muir - Tue, 2011-06-21 06:10
The previous blog posts in this series (part 1 and part 2) looked at the behaviour of the af:region tag embedded in a af:showDetailItem tag with JDeveloper 11.1.1.4.0. This post investigates the changing nature of the "deferred" activation property for the underlying af:region task flow binding in JDev 11.1.2.0.0.

JDeveloper 11.1.2.0.0 introduces JSF 2.0 and Facelets as the predominant display technologies for ADF Faces RC pages. As Oracle's JSF 2.0 roadmap for ADF states: "The introduction of Facelets addresses the shortcomings of JSP and improves the developer page design and reuse experience."

One of these improvements is how the "deferred" activation property for task flow bindings under af:regions works, and we no longer require the programmatic solution.

The behaviour using JSPX

Before we can show how Facelets improves the use of embedded task flows in a closed af:showDetailItem tag, it's worthwhile showing that using JSPXs under 11.1.2.0.0 still has the previous behaviour where a programmatic solution is required.

If we take the same code from the previous posts written in JDev 11.1.1.4.0, specifically using a JSPX page, this time entitled BasicShowDetailItem.jspx:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core"
          xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <jsp:directive.page contentType="text/html;charset=UTF-8"/>
  <f:view>
    <af:document title="ShowDetailJSPX" id="d1">
      <af:form id="f1">
        <af:panelAccordion id="pa1">
          <af:showDetailItem text="DummyShowDetailItem" disclosed="true" id="sdi1">
            <af:outputText value="DummyValue" id="ot1"/>
          </af:showDetailItem>
          <af:showDetailItem text="Charlie" disclosed="false" id="sdi2">
            <af:region value="#{bindings.CharlieTaskFlow1.regionModel}" id="r1"/>
            <af:commandButton text="Submit2" id="cb2"/>
          </af:showDetailItem>
        </af:panelAccordion>
      </af:form>
    </af:document>
  </f:view>
</jsp:root>
Note the embedded af:region containing a call to a task flow CharlieTaskFlow:

Also as per the previous posts, we're using the LogBegin Method Call in the task flow above, as well as task flow initializers and finalizers, to help us understand if the CharlieTaskFlow has been executed at all:

The code that does the logging is again similar to the previous posts:
package test.view;

import oracle.adf.share.logging.ADFLogger;

public class CharlieBean {

    public static ADFLogger logger = ADFLogger.createADFLogger(CharlieBean.class);

    private String charlieValue = "charlie1";

    public void setCharlieValue(String charlieValue) {
        logger.info("setCharlieValue called(" + (charlieValue == null ? "<null>" : charlieValue) + ")");
        this.charlieValue = charlieValue;
    }

    public String getCharlieValue() {
        logger.info("getCharlieValue called(" + (charlieValue == null ? "<null>" : charlieValue) + ")");
        return charlieValue;
    }

    public void taskFlowInit() {
        logger.info("Task flow initialized");
    }

    public void taskFlowFinalizer() {
        logger.info("Task flow finalized");
    }

    public void logBegin() {
        logger.info("Task flow beginning");
    }
}
...and for the record we've inserted a custom JSF PhaseListener to assist interpreting the logger output:
package test.view;

import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;

import oracle.adf.share.logging.ADFLogger;

public class PhaseListener implements javax.faces.event.PhaseListener {

    public static ADFLogger logger = ADFLogger.createADFLogger(PhaseListener.class);

    public void beforePhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public void afterPhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public PhaseId getPhaseId() {
        return PhaseId.ANY_PHASE;
    }
}
And finally in this example, note the task flow binding options "activation" and "active". We'll set this back to the defaults "deferred" and null to show the behaviour of task flows under 11.1.2.0.0 using JSPXs:

On running this code under a JSPX page in 11.1.2.0.0, even though the af:showDetailItem that contains the af:region and task flow is closed, we see the following log output showing that the task flow is still being executed:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<CharlieBean> <taskFlowInit> Task flow initialized
<CharlieBean> <logBegin> Task flow beginning
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
The behaviour using Facelets

From here we'll show an example using Facelets. The example is virtually identical, except that the page containing the af:region, as well as the fragment from the CharlieTaskFlow, have to be created as a Facelets page and fragment respectively.

As an example, the Facelets page code entitled FaceletsShowDetailItem.jsf looks as follows:
<?xml version='1.0' encoding='UTF-8'?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<f:view xmlns:f="http://java.sun.com/jsf/core" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <af:document title="ShowDetailItemFacelet.jsf" id="d1">
    <af:form id="f1">
      <af:panelAccordion id="pa1">
        <af:showDetailItem text="DummyShowDetailItem" disclosed="true" id="sdi1">
          <af:outputText value="DummyValue" id="ot1"/>
        </af:showDetailItem>
        <af:showDetailItem text="Charlie" disclosed="false" id="sdi2">
          <af:region value="#{bindings.CharlieTaskFlow1.regionModel}" id="r1"/>
          <af:commandButton text="Submit2" id="cb2"/>
        </af:showDetailItem>
      </af:panelAccordion>
    </af:form>
  </af:document>
</f:view>
Compared to our previous JSPX example, every other option is exactly the same. The most significant options are the task flow binding's activation and active properties, which we can see are still set to "deferred" and null:

On running the Facelets page, even though the af:showDetailItem containing the af:region isn't open, our logs show that the task flow is no longer unnecessarily initialized as it was under JSPX:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
Once we open the af:showDetailItem, the task flow is then correctly initialized:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<CharlieBean> <taskFlowInit> Task flow initialized
<CharlieBean> <logBegin> Task flow beginning
<CharlieBean> <getCharlieValue> getCharlieValue called(charlie1)
<CharlieBean> <getCharlieValue> getCharlieValue called(charlie1)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
This small change saves us from having to programmatically control the refresh of the task flows, and in addition shows that Oracle is enhancing the support for ADF through Facelets rather than the traditional JSPX view display technology. This should encourage greenfield developers to pursue Facelets in the context of JDev 11.1.2.0.0.

Sample Application

A sample application for 11.1.2.0.0 is available here.

Thanks

This post was inspired by Oracle's Steve Davelaar who highlighted the new region processing in JDev 11.1.2. The introduction of the new 11.1.2 feature led me to explore the default behaviour under 11.1.1.4.0 without the new feature.

JDev 11.1.1.4.0 – af:showDetailItems and af:regions – programmatic activation - part 2

Chris Muir - Mon, 2011-06-20 07:34
The previous blog post in this series looked at the default behaviour of the ADF framework in 11.1.1.4.0 of the af:region tag embedded in a af:showDetailItem tag. In this post we'll look at programmatically controlling the activation of regions to stop unnecessary processing.

This example will be simplified to only look at one af:region in the hidden second af:showDetailItem. The basic page entitled ShowDetailItemWithRegion looks as follows:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core"
          xmlns:h="http://java.sun.com/jsf/html" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <jsp:directive.page contentType="text/html;charset=UTF-8"/>
  <f:view>
    <af:document id="d1">
      <af:form id="f1">
        <af:panelAccordion id="pa1">
          <af:showDetailItem text="DummyShowDetailItem" disclosed="true" id="sdi1">
            <af:outputText value="DummyValue" id="ot1"/>
          </af:showDetailItem>
          <af:showDetailItem text="Charlie" disclosed="false" id="sdi2">
            <af:region value="#{bindings.CharlieTaskFlow1.regionModel}" id="r1"/>
            <af:commandButton text="Submit2" id="cb2"/>
          </af:showDetailItem>
        </af:panelAccordion>
      </af:form>
    </af:document>
  </f:view>
</jsp:root>
Note the first af:showDetailItem is disclosed (open) and purely exists to act as the open af:showDetailItem, while the second af:showDetailItem, which we're actually interested in, is closed. Note the second af:showDetailItem has our embedded af:region calling CharlieTaskFlow.

The CharlieTaskFlow looks as follows:

Similar to the last post, the LogBegin Method Call simply calls a bean method to log the actual start-of-processing for the bean. In turn both the initializer and finalizer of the task flow are also logged. A pageFlowScope bean CharlieBean1 shows all the methods:
package test.view;

import oracle.adf.share.logging.ADFLogger;

public class CharlieBean {

    public static ADFLogger logger = ADFLogger.createADFLogger(CharlieBean.class);

    private String charlieValue = "charlie1";

    public void setCharlieValue(String charlieValue) {
        logger.info("setCharlieValue called(" + (charlieValue == null ? "<null>" : charlieValue) + ")");
        this.charlieValue = charlieValue;
    }

    public String getCharlieValue() {
        logger.info("getCharlieValue called(" + (charlieValue == null ? "<null>" : charlieValue) + ")");
        return charlieValue;
    }

    public void taskFlowInit() {
        logger.info("Task flow initialized");
    }

    public void taskFlowFinalizer() {
        logger.info("Task flow finalized");
    }

    public void logBegin() {
        logger.info("Task flow beginning");
    }
}
And finally the CharlieFragment.jsff contains the following code:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <af:inputText label="Charlie value" value="#{pageFlowScope.charlieBean1.charlieValue}" id="it1"/>
</jsp:root>
In turn we've configured the same PhaseListener class to assist us in reading the debug output:
package test.view;

import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;

import oracle.adf.share.logging.ADFLogger;

public class PhaseListener implements javax.faces.event.PhaseListener {

    public static ADFLogger logger = ADFLogger.createADFLogger(PhaseListener.class);

    public void beforePhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public void afterPhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public PhaseId getPhaseId() {
        return PhaseId.ANY_PHASE;
    }
}
When we run the parent page we see:

And the following in the logs:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<CharlieBean> <taskFlowInit> Task flow initialized
<CharlieBean> <logBegin> Task flow beginning
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
Once again as we discovered from the first blog post in this series we can see that even though the CharlieTaskFlow is not showing, it is being initialized and executed to at least the LogBegin activity, though not as far as CharlieFragment.jsff.

This in many circumstances is going to be undesirable behaviour. Imagine a screen made up of several af:showDetailItems, most closed, where the user rarely opens the closed collections. From a performance point of view the hidden task flows are partially executed even though their results may never be seen by the user, which is a waste of resources.

The solution to this is in the task flow binding that backs the af:region of the main page. When dropping the task flow onto the page as an af:region, a task flow binding is also added to the relating pageDef file:

The task flow binding includes a number of properties revealed by the property inspector, of which the "activation" and "active" properties we're interested in:

These properties can be used to control the activation and deactivation of the task flow backing the af:region. Under JDev 11.1.1.4.0 the values for the activation property are:

1) <default> (immediate)
2) conditional
3) deferred
4) immediate

While option 1 says it's the default, in fact option 3 "deferred" will be picked by default when the task flow binding is created (go figure?). This in itself is odd because the documentation for 11.1.1.4.0 states that the deferred option defaults back to "immediate" if we're not using Facelets (which we're not, this is an 11.1.2 feature). So 3 out of 4 options in the end are immediate, and the other is conditional. (Agreed, this is somewhat confusing, but it'll become clearer in the next blog post looking at these properties under 11.1.2, that the deferred option takes on a different meaning)

The "immediate" activation behaviour implies that the task flow binding will always be executed when the pageDef for the page is accessed. This explains why the hidden task flow in the af:showDetailItem af:region is actively initialized, mainly because the ADF lifecycle that in turn processes the page bindings is bolted on top of the JSF lifecycle, and the task flow binding doesn't know its indirectly related af:region is rendered in an af:showDetailItem that is closed.

The "conditional" activation behaviour in conjunction with the active property allows us to programmatically control the activation based on an EL expression. It's with this option we can solve the issue of our task flows being early executed.
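Before wiring this into the page, the distinction can be pictured with a simplified plain-Java model. This is not ADF's internals, just an illustration: an "immediate" binding initializes whenever the page's bindings are prepared, while a "conditional" binding consults its EL-backed active flag first:

```java
import java.util.function.Supplier;

public class ActivationDemo {

    // Simplified model, not ADF internals: an "immediate" binding initializes
    // whenever the pageDef is prepared; a "conditional" binding first consults
    // the EL-backed "active" flag.
    static class TaskFlowBinding {
        enum Activation { IMMEDIATE, CONDITIONAL }

        private final Activation activation;
        private final Supplier<Boolean> activeFlag;
        private boolean initialized;

        TaskFlowBinding(Activation activation, Supplier<Boolean> activeFlag) {
            this.activation = activation;
            this.activeFlag = activeFlag;
        }

        // Called when the page's bindings are prepared, regardless of whether
        // the region is visually disclosed.
        void preparePage() {
            if (activation == Activation.IMMEDIATE || activeFlag.get()) {
                initialized = true;
            }
        }

        boolean isInitialized() {
            return initialized;
        }
    }

    public static void main(String[] args) {
        final boolean[] disclosed = { false };
        TaskFlowBinding immediate =
            new TaskFlowBinding(TaskFlowBinding.Activation.IMMEDIATE, () -> disclosed[0]);
        TaskFlowBinding conditional =
            new TaskFlowBinding(TaskFlowBinding.Activation.CONDITIONAL, () -> disclosed[0]);

        immediate.preparePage();
        conditional.preparePage();
        System.out.println(immediate.isInitialized());   // true: runs even while hidden
        System.out.println(conditional.isInitialized()); // false: waits for the flag

        disclosed[0] = true; // the user opens the af:showDetailItem
        conditional.preparePage();
        System.out.println(conditional.isInitialized()); // true
    }
}
```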

To implement this in our current solution we create a new session-scoped bean to keep track of whether the region is activated:
package test.view;

public class ActiveRegionBean {

    private Boolean regionActivation = false;

    public void setRegionActivation(Boolean regionActivation) {
        this.regionActivation = regionActivation;
    }

    public Boolean getRegionActivation() {
        return regionActivation;
    }
}
We then modify our task flow binding in our page to use "conditional" activation, and an EL expression to refer to the state of the regionActivation variable within our sessionScope ActiveRegionBean:

Note by default the regionActivation is false to match the state of our second af:showDetailItem which is closed by default.

We also need to include some logic to toggle the regionActivation flag when the af:showDetailItem is opened, as well as programmatically refreshing the af:region. To do this we can create a disclosureListener that refers to a new backing bean, as well as a component binding for the af:region. As such the code in our original page is modified as follows:
<af:showDetailItem text="Charlie" disclosed="false" id="sdi2"
                   disclosureListener="#{disclosureBean.openCharlie}">
  <af:region value="#{bindings.CharlieTaskFlow1.regionModel}" id="r1"
             binding="#{disclosureBean.charlieRegion}"/>
  <af:commandButton text="Submit2" id="cb2"/>
</af:showDetailItem>
Note the new disclosureListener property on the af:showDetailItem, and the binding property on the af:region.

In turn our requestScope DisclosureBean bean looks as follows:
package test.view;

import javax.el.ELContext;
import javax.el.ExpressionFactory;
import javax.el.ValueExpression;
import javax.faces.application.Application;
import javax.faces.context.FacesContext;

import oracle.adf.view.rich.component.rich.fragment.RichRegion;
import oracle.adf.view.rich.context.AdfFacesContext;
import oracle.adf.view.rich.event.DisclosureEvent;

public class DisclosureBean {

    private RichRegion charlieRegion;

    public static Object resolveELExpression(String expression) {
        FacesContext fctx = FacesContext.getCurrentInstance();
        Application app = fctx.getApplication();
        ExpressionFactory elFactory = app.getExpressionFactory();
        ELContext elContext = fctx.getELContext();
        ValueExpression valueExp = elFactory.createValueExpression(elContext, expression, Object.class);
        return valueExp.getValue(elContext);
    }

    public void openCharlie(DisclosureEvent disclosureEvent) {
        ActiveRegionBean activeRegionBean = (test.view.ActiveRegionBean)resolveELExpression("#{activeRegionBean}");

        if (disclosureEvent.isExpanded()) {
            activeRegionBean.setRegionActivation(true);
            AdfFacesContext.getCurrentInstance().addPartialTarget(charlieRegion);
        } else {
            activeRegionBean.setRegionActivation(false);
        }
    }

    public void setCharlieRegion(RichRegion charlieRegion) {
        this.charlieRegion = charlieRegion;
    }

    public RichRegion getCharlieRegion() {
        return charlieRegion;
    }
}
Note in the openCharlie() method we check if the disclosureEvent is for opening or closing the af:showDetailItem. If opening, we set the sessionScope's regionActivation to true which will be picked up by the task flow binding's active property to initialize the CharlieTaskFlow. In turn we add a programmatic refresh to the charlieRegion in order for the updates to the task flow to be seen in the actual page.

Now when we run our page for the first time we see the following log output:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
Note that unlike our previous runtime example, we no longer see the following entries:
<CharlieBean> <taskFlowInit> Task flow initialized
<CharlieBean> <logBegin> Task flow beginning
This implies the CharlieBean hasn't been activated unnecessarily even when the af:showDetailItem is closed. If we then expand the af:showDetailItem, the logs reveal that the task flow is started accordingly:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<CharlieBean> <taskFlowInit> Task flow initialized
<CharlieBean> <logBegin> Task flow beginning
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<CharlieBean> <getCharlieValue> getCharlieValue called(charlie1)
<CharlieBean> <getCharlieValue> getCharlieValue called(charlie1)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
Here we can see the initialization of the CharlieTaskFlow. In turn unlike the default behaviour, we can now also see that getCharlieValue() is being called. The resulting web page:

What happens when we close the af:showDetailItem? The logs show:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<CharlieBean> <getCharlieValue> getCharlieValue called(charlie1)
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<CharlieBean> <taskFlowFinalizer> Task flow finalized
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
getCharlieValue() is called to fire any ValueChangeListener. We can also see the CharlieBean finalizer being called. Because in our request bean we've deactivated our region, the task flow is forced to close. This may or may not be desired functionality, because once open, you might wish the task flow to maintain its state. If you do wish the task flow state to be retained, the openCharlie() method in the request scope bean should be modified as follows:
if (disclosureEvent.isExpanded()) {
    activeRegionBean.setRegionActivation(true);
    AdfFacesContext.getCurrentInstance().addPartialTarget(charlieRegion);
// } else {
//     activeRegionBean.setRegionActivation(false);
}
At the conclusion of this post we can see that the default behaviour of regions and task flows under the af:showDetailItem tag can at least be programmatically controlled to stop unnecessary execution of the underlying task flows. Interestingly this was a similar problem to Oracle Forms, where separate data blocks contained within tab pages would be eagerly executed unless code was put in place to stop this.

The next post in this series will look at the "deferred" activation option for task flows in JDev 11.1.2.

Sample Application

A sample application containing solutions for part 2 of this series is available here.

Thanks

This post was inspired by Oracle's Steve Davelaar who highlighted the new region processing in JDev 11.1.2. The introduction of the new 11.1.2 feature led me to explore the default behaviour under 11.1.1.4.0 without the new feature.

JDev 11.1.1.4.0 – af:showDetailItems and af:regions – immediate activation - part 1

Chris Muir - Mon, 2011-06-20 07:04
ADF's af:showDetailItem tag is used as a child to parent tags such as the af:panelAccordion and af:panelTabbed. JDeveloper's online documentation states the following about the af:showDetailItem tag:
The showDetailItem component is used inside of a panelAccordion or panelTabbed component to contain a group of children. It is identified visually by the text attribute value and lays out its children. Note the difference between "disclosed" and "rendered": if "rendered" is false, it means that the accordion header bar or tab link and its corresponding contents are not available at all to the user, whereas if "disclosed" is false, it means that the contents of the item are not currently visible, but may be made visible by the user since the accordion header bar or tab link are still visible.

The lifecycle (including validation) is not run for any components in a showDetailItem which is not disclosed. The lifecycle is only run on the showDetailItem(s) which is disclosed.

I've never been a fan of the property "disclosed", why not just call it "open"? At least developers then don't have to deal with double negatives like disclosed="false". Regardless, the last paragraph highlights an interesting point that the contents of the af:showDetailItem are not processed by the JSF lifecycle if the showDetailItem is currently closed (disclosed="false"). That's desired behaviour, particularly if you have a page with multiple af:showDetailItem tags that in turn query from the business service layer, potentially kicking off a large range of ADF BC queries. Ideally you don't want the queries to fire if their relating af:showDetailItem tag is currently closed.

This feature can be demonstrated via a simple example. Note the following BasicShowDetailItem.jspx page constructed under JDev 11.1.1.4.0:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core"
          xmlns:h="http://java.sun.com/jsf/html" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
  <jsp:directive.page contentType="text/html;charset=UTF-8"/>
  <f:view>
    <af:document id="d1">
      <af:form id="f1">
        <af:panelAccordion id="pa1">
          <af:showDetailItem text="Alpha" disclosed="true" id="sdi1">
            <af:panelFormLayout id="pfl1">
              <af:inputText label="Alpha Value" value="#{basicBean.alphaValue}" id="it1"/>
              <af:commandButton text="Submit1" id="cb1"/>
            </af:panelFormLayout>
          </af:showDetailItem>
          <af:showDetailItem text="Beta" disclosed="false" id="sdi2">
            <af:inputText label="Beta Value" value="#{basicBean.betaValue}" id="it2"/>
            <af:commandButton text="Submit2" id="cb2"/>
          </af:showDetailItem>
        </af:panelAccordion>
      </af:form>
    </af:document>
  </f:view>
</jsp:root>
This is backed by the following simple requestScope POJO bean entitled BasicBean.java:
package test.view;

import oracle.adf.share.logging.ADFLogger;

public class BasicBean {

    public static ADFLogger logger = ADFLogger.createADFLogger(BasicBean.class);

    private String alphaValue = "alpha";
    private String betaValue = "beta";

    public void setAlphaValue(String alphaValue) {
        logger.info("setAlphaValue called(" + (alphaValue == null ? "<null>" : alphaValue) + ")");
        this.alphaValue = alphaValue;
    }

    public String getAlphaValue() {
        logger.info("getAlphaValue called(" + (alphaValue == null ? "<null>" : alphaValue) + ")");
        return alphaValue;
    }

    public void setBetaValue(String betaValue) {
        logger.info("setBetaValue called(" + (betaValue == null ? "<null>" : betaValue) + ")");
        this.betaValue = betaValue;
    }

    public String getBetaValue() {
        logger.info("getBetaValue called(" + (betaValue == null ? "<null>" : betaValue) + ")");
        return betaValue;
    }
}
(More information on the ADFLogger and enabling it can be found in Duncan Mills' recent 4-part blog).

To assist understanding what's happening we'll also include our own PhaseListener logs such that we can see the JSF lifecycle in action:
package test.view;

import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;

import oracle.adf.share.logging.ADFLogger;

public class PhaseListener implements javax.faces.event.PhaseListener {

    public static ADFLogger logger = ADFLogger.createADFLogger(PhaseListener.class);

    public void beforePhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public void afterPhase(PhaseEvent phaseEvent) {
        logger.info(phaseEvent.getPhaseId().toString());
    }

    public PhaseId getPhaseId() {
        return PhaseId.ANY_PHASE;
    }
}
At runtime when the page is first rendered we see:

Note that the first af:showDetailItem is disclosed (open) and the second af:showDetailItem is closed. Also note the first af:showDetailItem is showing the alpha value from our requestScope bean, but the beta value is hiding in the 2nd closed af:showDetailItem.

At this stage the logger class gives us an insight into how the requestScope BasicBean has been used. In the log window we can see:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
We can see the getter accessors for alphaValue have been called twice during the JSF render response phase to render the page. (Why twice? Essentially the JSF engine doesn't guarantee to call a getter once in a request-response cycle. JSF may use a getter for its own purposes such as checking if the value submitted with the request has changed, in order to fire a ValueChangeListener. Google abounds with further discussions on this and the JSF lifecycle including the following by BalusC).
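This multiple-call behaviour is worth remembering when writing backing beans: a getter may run several times per request, so it should be cheap and side-effect free. A plain-Java sketch (not JSF itself) counting the calls:

```java
public class GetterCallDemo {

    private int callCount;
    private final String value = "alpha";

    public String getValue() {
        callCount++; // a framework may invoke this more than once per request
        return value;
    }

    public int getCallCount() {
        return callCount;
    }

    public static void main(String[] args) {
        GetterCallDemo bean = new GetterCallDemo();
        bean.getValue(); // e.g. read once to compare against the submitted value
        bean.getValue(); // and again while rendering the response
        System.out.println(bean.getCallCount()); // 2
    }
}
```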

As expected note that the getter for Beta was not called, as it's not displayed. To take the example one step further, if we hit the submit button available in the first af:showDetailItem, the log output still shows no mention of the beta accessors, only calls to getAlphaValue():
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
For the purpose of demonstration, if we change the alpha value and resubmit the logs show:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<BasicBean> <setAlphaValue> setAlphaValue called(alpha2)
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<BasicBean> <getAlphaValue> getAlphaValue called(alpha2)
<BasicBean> <getAlphaValue> getAlphaValue called(alpha2)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
In this case we see the additional setAlphaValue() call, and multiple getAlphaValue() calls, but no get/setBetaValue() calls. With this we can conclude that indeed the second af:showDetailItem is suppressing the lifecycle of its children.

What happens if we open the 2nd af:showDetailItem, which closes the 1st af:showDetailItem:

In the log window we see:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<BasicBean> <getAlphaValue> getAlphaValue called(alpha)
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<BasicBean> <setAlphaValue> setAlphaValue called(alpha2)
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<BasicBean> <getBetaValue> getBetaValue called(beta)
<BasicBean> <getBetaValue> getBetaValue called(beta)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
In this we can see:

1) A single get and set of alpha value – why? – because the af:showDetailItem button still issues a submit to the midtier. At the point in time the second af:showDetailItem's button is pressed, alpha is still showing, and its changes need to be communicated back to the midtier. The getAlphaValue() is to test if the values changed to fire the ValueChangeListener, and the setAlphaValue() is a call to write the new value submitted to the midtier.

An observant reader might pick up the fact that in this log the getAlphaValue call returns a value of alpha rather than alpha2. Surely in the step prior to this one we had already set the value to alpha2? (in fact you can see this in the log output) The answer is that this bean is set at requestScope, not sessionScope, so the internal values are not carried across requests (which is a useful learning exercise with regards to bean scope but beyond the scope (no pun intended) of this blog post).

2) Two separate calls to getBetaValue() – as the second af:showDetailItem opens, similar to the original retrieval of the alpha value, the JSF lifecycle now calls the getter twice.

If we now press the submit button in the second af:showDetailItem we see the following in the logs:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <afterPhase> APPLY_REQUEST_VALUES 2
<PhaseListener> <beforePhase> PROCESS_VALIDATIONS 3
<BasicBean> <getBetaValue> getBetaValue called(beta)
<PhaseListener> <afterPhase> PROCESS_VALIDATIONS 3
<PhaseListener> <beforePhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <afterPhase> UPDATE_MODEL_VALUES 4
<PhaseListener> <beforePhase> INVOKE_APPLICATION 5
<PhaseListener> <afterPhase> INVOKE_APPLICATION 5
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<BasicBean> <getBetaValue> getBetaValue called(beta)
<BasicBean> <getBetaValue> getBetaValue called(beta)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
As when we earlier pressed the submit button in the first af:showDetailItem, the log output and the calls to getBetaValue() match the frequency and location of the earlier getAlphaValue() calls. And now that the first af:showDetailItem is fully closed, we see no JSF lifecycle activity on the get/setAlphaValue() methods.

So in conclusion, if the af:showDetailItem is closed, and not in the process of being closed, then its children will not be activated.

Okay, but what's up with af:showDetailItems and af:regions?

Now that we know the default behaviour of the af:showDetailItem, let's extend the example to show where the behaviour changes.

Within JDev 11g we can make use of af:regions to call ADF bounded task flows. As example we may have the following page entitled ShowDetailItemWithRegion.jspx:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core"
xmlns:h="http://java.sun.com/jsf/html" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
<jsp:directive.page contentType="text/html;charset=UTF-8"/>
<f:view>
<af:document id="d1">
<af:form id="f1">
<af:panelAccordion id="pa1">
<af:showDetailItem text="Alpha" disclosed="true" id="sdi1">
<af:panelFormLayout id="pfl1">
<af:region value="#{bindings.AlphaTaskFlow1.regionModel}" id="r1"/>
<af:commandButton text="Submit1" id="cb1"/>
</af:panelFormLayout>
</af:showDetailItem>
<af:showDetailItem text="Beta" disclosed="false" id="sdi2">
<af:region value="#{bindings.BetaTaskFlow1.regionModel}" id="r2"/>
<af:commandButton text="Submit2" id="cb2"/>
</af:showDetailItem>
</af:panelAccordion>
</af:form>
</af:document>
</f:view>
</jsp:root>
Note the embedded regions within each af:showDetailItem. The setup of the rest of the page is the same as before, with the first af:showDetailItem disclosed (open) and the second closed when the page first renders.

The task flows themselves are very simple. As example AlphaTaskFlow contains one fragment AlphaFragment which is the default activity:

The AlphaFragment.jsff includes the following code:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
<af:inputText label="Alpha value" value="#{backingBeanScope.alphaBean.alphaValue}" id="it1"/>
</jsp:root>
This fragment references a backingBean-scoped bean for the task flow named AlphaBean:
package test.view;

import oracle.adf.share.logging.ADFLogger;

public class AlphaBean {
public static ADFLogger logger = ADFLogger.createADFLogger(AlphaBean.class);

private String alphaValue = "alpha1";

public void setAlphaValue(String alphaValue) {
logger.info("setAlphaValue called(" + (alphaValue == null ? "<null>" : alphaValue) + ")");
this.alphaValue = alphaValue;
}

public String getAlphaValue() {
logger.info("getAlphaValue called(" + (alphaValue == null ? "<null>" : alphaValue) + ")");
return alphaValue;
}

public void taskFlowInit() {
logger.info("Task flow initialized");
}

public void taskFlowFinalizer() {
logger.info("Task flow finalized");
}
}
This bean carries the alphaValue plus the associated getters and setters. The only addition here is the taskFlowInit() and taskFlowFinalizer() methods which we'll use in the task flow to log when the task flow is started and stopped:

In terms of the 2nd task flow BetaTaskFlow, it's exactly the same as AlphaTaskFlow except it calls the beta equivalent. As such the BetaFragment.jsff:
<?xml version='1.0' encoding='UTF-8'?>
<jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:af="http://xmlns.oracle.com/adf/faces/rich">
<af:inputText label="Beta value" value="#{backingBeanScope.betaBean.betaValue}" id="it1"/>
</jsp:root>
The backingBeanScope BetaBean:
package test.view;

import oracle.adf.share.logging.ADFLogger;

public class BetaBean {

public static ADFLogger logger = ADFLogger.createADFLogger(BetaBean.class);

private String betaValue = "beta1";

public void setBetaValue(String betaValue) {
logger.info("setBetaValue called(" + (betaValue == null ? "<null>" : betaValue) + ")");
this.betaValue = betaValue;
}

public String getBetaValue() {
logger.info("getBetaValue called(" + (betaValue == null ? "<null>" : betaValue) + ")");
return betaValue;
}

public void taskFlowInit() {
logger.info("Task flow initialized");
}

public void taskFlowFinalizer() {
logger.info("Task flow finalized");
}
}
And the BetaTaskFlow initializer and finalizers set:

With the moving parts done, let's see what happens at runtime.

When we run the base page we see:

From the log output we see:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<AlphaBean> <taskFlowInit> Task flow initialized
<BetaBean> <taskFlowInit> Task flow initialized
<AlphaBean> <getAlphaValue> getAlphaValue called(alpha1)
<AlphaBean> <getAlphaValue> getAlphaValue called(alpha1)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
In the context of what we saw in the previous example, this is an interesting result. While only getAlphaValue() has been called, just as in our previous example that didn't use regions, we can also see that in the RENDER_RESPONSE phase the initializers for *both* task flows have unexpectedly been called. We expected the task flow initializer for the AlphaTaskFlow to be called, but the framework has decided to start the BetaTaskFlow as well. Another observation: even though the BetaTaskFlow was started, somehow the framework didn't call getBetaValue()?

An assumption you could make here is that the framework is priming the BetaTaskFlow and calling the task flow initializer, but not actually running the task flow. We can disprove this by extending the BetaTaskFlow to include a new Method Call activity:

...where the Method Call simply calls a new method in the BetaBean:
public void logBegin() {
logger.info("Task flow beginning");
}
If we re-run our application, the following log output is shown when the page opens:
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RESTORE_VIEW 1
<PhaseListener> <afterPhase> RESTORE_VIEW 1
<PhaseListener> <beforePhase> RENDER_RESPONSE 6
<AlphaBean> <taskFlowInit> Task flow initialized
<BetaBean> <taskFlowInit> Task flow initialized
<BetaBean> <logBegin> Task flow beginning
<AlphaBean> <getAlphaValue> getAlphaValue called(alpha1)
<AlphaBean> <getAlphaValue> getAlphaValue called(alpha1)
<PhaseListener> <afterPhase> RENDER_RESPONSE 6
This proves that the activities within the BetaTaskFlow are actually being called. However, the ADF engine seems to stop short of processing the BetaFragment, the effective viewable part of the task flow.

The conclusion we can draw here is that even though you think you're hiding the BetaTaskFlow, and the af:showDetailItem documentation says it won't process the lifecycle of its children, for af:regions using task flows this is not the case: it is in fact processing them (up to a point). The implication is that (at least some) unnecessary processing will occur even if the user never looks at the contents of the closed af:showDetailItem.

In the next post in this series, still using JSPX pages and JDev 11.1.1.4.0, we'll look at how we can programmatically control the activation of the hidden region to stop unnecessary processing.

The final post in the series will look at what options are available to us under JDev 11.1.2 using Facelets.

Sample Application

A sample application containing solutions for part 1 of this series is available here.

Thanks

This post was inspired by Oracle's Steve Davelaar who highlighted the new region processing in JDev 11.1.2. The introduction of the new 11.1.2 feature led me to explore the default behaviour under 11.1.1.4.0 without the new feature.

Using Agile Practices to Create an Agile Presentation

Cary Millsap - Fri, 2011-06-17 13:25
What’s the best way to make a presentation on Agile practices? Practice Agile practices.

You could write a presentation “big bang” style, delivering version 1.0 in front of your big audience of 200+ people at Kscope 2011 before anybody has seen it. Of course, if you do it that way, you build a lot of risk into your product. But what else can you do?

You can execute the Agile practices of releasing early and often, allowing the reception of your product to guide its design. Whenever you find an aspect of your product that doesn’t get the enthusiastic reception you had hoped for, you fix it for the next release.

That’s one of the reasons that my release schedule for “My Case for Agile Methods” includes a little online webinar hosted by Red Gate Software next week. My release schedule is actually a lot more complicated than just one little pre-ODTUG webinar:

2011-04-15: Show key conceptual graphics to son (age 13)
2011-04-29: Review #1 of paper with employee #1
2011-05-18: Review #2 of paper with customer
2011-05-14: Review #3 of paper with employee #1
2011-05-18: Review #4 of paper with employee #2
2011-05-26: Review #5 of paper with employee #3
2011-06-01: Submit paper to ODTUG web site
2011-06-02: Review #6 of paper with employee #1
2011-06-06: Review #7 of paper with employee #3
2011-06-10: Submit revised paper to ODTUG web site
2011-06-13: Present “My Case for Agile Methods” to twelve people in an on-site customer meeting
2011-06-22: Present “My Case for Agile Methods” in an online webinar hosted by Red Gate Software
2011-06-27: Present “My Case for Agile Methods” at ODTUG Kscope 2011 in Long Beach, California
(By the way, the vast majority of the work here is done in Pages, not Keynote. I think using a word processor, not an operating system for slide projectors.)

Two Agile practices are key to everything I’ve ever done well: incremental design and rapid iteration. Release early, release often, and incorporate what you learn from real world use back into the product. The magic comes from learning how to choose wisely in two dimensions:
  1. Which feature do you include next?
  2. To whom do you release next?
The key is to show your work to other people. Yes, there’s tremendous value in practicing a presentation, but practicing without an audience merely reinforces, it doesn’t inform. What you need while you design something is information—specifically, you need the kind of information called feedback. Some of the feedback I receive generates some pretty energetic arguing. I need that to fortify my understanding of my own arguments so that I’ll be more likely to survive a good Q&A session on stage.

To lots of people who have seen teams run projects into the ground using what they call “Agile,” the word “Agile” has become a synonym for sloppy, irresponsible work habits. When you hear me talk about Agile, you’ll hear about practices that are highly disciplined and that actually require a lot of focus, dedication, commitment, practice, and plain old hard work to execute.

Agile, to me, is about injecting discipline into a process that is inevitably rife with unpredictable change.

Bug 11858963: optimization goes wrong with FIRST_ROWS_K (11g)?

Charles Schultz - Fri, 2011-06-17 08:07
At the beginning of March, I noticed some very odd things in a 10053 trace of a problem query I was working on. I also made some comments on Kerry Osborn's blog related to this matter. Oracle Support turned this into a new bug (11858963), unfortunately an aberration of Fix 4887636. I was told that this bug will not be fixed in 11gR1 (as 11.1.0.7 is the terminal release), but it will be included in future 11gR2 patches.

If you have access to SRs, you can follow the history in SR 3-314198695. For those that cannot, here is a short summary.

We had a query that suffered severe performance degradation after upgrading from 10.2.0.4 to 11.1.0.7. I attempted to use SQLT but initially ran into problems with the different versions of SQLT, so I did the next best thing and looked at the 10053 traces directly. After a bit of digging, I noticed several cases where the estimated cardinality was completely off. For example:


First K Rows: non adjusted N = 1916086.00, sq fil. factor = 1.000000
First K Rows: K = 10.00, N = 1916086.00
First K Rows: old pf = 0.1443463, new pf = 0.0000052
Access path analysis for FRRGRNL
***************************************
SINGLE TABLE ACCESS PATH (First K Rows)
Single Table Cardinality Estimation for FRRGRNL[FRRGRNL] 
Table: FRRGRNL Alias: FRRGRNL
Card: Original: 10.000000 Rounded: 10 Computed: 10.00 Non Adjusted: 10.00



So, the idea behind FIRST_ROWS_K is that you want the entire query to be optimized (Jonathan Lewis would spell it with an "s") for the retrieval of the first K rows. Makes sense, sounds like a good idea. The problem I had with this initial finding is that every single rowsource was being reduced to having a cardinality of K. That is just wrong. Why is it wrong? Let's say you have a table with, um, 1916086 rows. Would you want the optimizer to pretend it has 10 rows and make it the driver of a Nested Loop? Not me. Or likewise, would you want the optimizer to think "Hey, look at that, 10 rows, I'll use an index lookup". Why would you want FIRST_ROWS_K to completely obliterate ALL your cardinalities?

I realize I am exposing some of my naivete above. Mauro, my Support Analyst corrected some of my false thinking with the following statement:

The tables are scaled under First K Rows during the different calculations (before the final join order is identified) but I cannot explain any further how / when / why.
Keep in mind that the CBO tweak -> cost -> decide (CBQT is an example)
Unfortunately we cannot discuss of the CBO algorithms / behaviora in more details, they are internal materials.
Regarding the plans yes, they are different, the "bad plan" is generated with FIRST_ROWS_10 in 11g
The "good" plan is generated in 10.2.0.4 (no matter which optimizer_mode you specify, FIRST_ROWS_10 is ignored because of the limitation) or in 11g when you disable 4887636 (that basically reverts the optimizer_mode to ALL_ROWS).
Basically the good plan has never been generated under FIRST_ROWS_10 since because of 4887636 FIRST_ROWS_10 has never been used before



I still need to wrap my head around "the limitation" in 10.2.0.4 and how we never used FIRST_ROWS_K for this particular query, but I believe that is exactly what Fix 4887636 was supposed to be addressing.

Here are some of the technical details from Bug 11858963:

]]potential performance degradation in fkr mode
]]with fix to bug4887636 enabled, if top query block
]]has single row aggregation
REDISCOVERY INFORMATION:
fkr mode, top query block contains blocking construct (i.e, single row aggregation). Plan improves with 4887636 turned off
WORKAROUND:
_fix_control='4887636:off'
I assume fkr mode is FIRST_ROWS_K, shortened to F(irst)KR(ows). The term "blocking construct" is most interesting - why would a single row aggregation be labeled as a "blocking construct"?

Also, this was my first introduction to turning a specific fix off. That in itself is kinda cool.
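For completeness, a fix control like this can be toggled either session-wide or for an individual statement. A sketch of both forms (the FRRGRNL query is just an illustration borrowed from the trace above, and as always, underscore parameters should only be set under Oracle Support's guidance):

```sql
-- Disable fix 4887636 for the current session (note the quoted underscore parameter)
ALTER SESSION SET "_fix_control" = '4887636:off';

-- Or scope it to a single statement with the OPT_PARAM hint
SELECT /*+ OPT_PARAM('_fix_control' '4887636:off') */ *
FROM   frrgrnl;
```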

Fine tuning your logical to physical DB transform

Susan Duncan - Thu, 2011-06-16 07:23
When you use JDeveloper's class modeler to design your logical database you are able to run the DB Transform to get a physical DB model. But have you ever been frustrated by the limitations of the transform? For instance, specifying the:
  • name to be used for the physical table
  • attribute to be used as the PK column
  • datatype and settings to be applied to a specific attribute
  • foreign key or intersection column names
  • different datatypes to be created for different database types
In JDeveloper 11.1.2, additional fine-grained and reusable transform rules are available to you using a UML Profile. This is a UML2 concept, but for database designers using JDeveloper it means you can set many properties on the logical model that are used when the transform to physical model(s) is run.

There is a section in Part2 of the tutorial Using Logical Models in UML for Database Development that details using the DB Profile, so I will not repeat the whole story here, but give you a flavour of the capabilities. Once you've applied the profile to the UML package of your class diagram you can set stereotypes against any element in your diagram. In the image below you are looking at the stereotype properties of the empno attribute - that will become a column in the database. Note that this column is to be used as the primary key on the table. Also note the list of Oracle and non-Oracle datatypes listed. Here you can specify exactly how empno should be transformed when multiple physical databases may be required. If the property for any of these is not set then the default transform will be applied.

You could say that having this ability as part of the class model bridges the gap between this model (being used as a logical DB model) and the transformed physical database model - it provides some form of relative model capability.



Looking at another new feature in 11.1.2.0, the extract on the right shows other elements on the class diagram. I've created a primitive type (String25Type) and, using the stereotype, have specified how this type should be transformed. Now I can use that type on different classes in my diagram and the transformer will use it as necessary.

There are many other ways to fine-tune your transform; run through the tutorial and try them for yourself.

Hudson and me!

Susan Duncan - Thu, 2011-06-16 05:21
Over the past months I've been working more and more with Hudson, the continuous integration server. If you're familiar with Hudson then no doubt you are familiar with the changes that faced it in that time. If you're not - well, it's a long, well-documented story in the press and I will not bore you with it here!

But the most important thing is that Hudson is a great (free) continuous integration tool and continues to grow in popularity and status. Oracle became its supporter, taking over Sun's original open source project. As well as its community of users and developers, Oracle has a full-time team working on it, including me as Product Manager, and it recently started the process of moving to the Eclipse Foundation as a top-level project.

Internally we use Hudson across the organization for all manner of build and test jobs and I know that many of you do too.

In JDeveloper 11.1.2 we've added new features to Team Productivity Center (TPC) to integrate Hudson (or CruiseControl) build/test results into the IDE and relate those to code check-ins via TPC work items. You can see a quick demo of that here

If you use Hudson I'd like to hear from you - in fact, I'd like to hear from you anyway! So please contact me at the usual Oracle address. Other ways to keep up with Hudson are through its mailing lists, wiki and of course twitter - @hudsonci

IRM Hotfolder update - seal docs automatically

Simon Thorpe - Tue, 2011-06-14 03:09

Another update of the IRM Hotfolder tool was announced a few days ago - 3.2.0.

The main enhancement this time is to preserve timestamps, ownership and file system permissions during the automated sealing process. Earlier versions would create sealed files with timestamps reflecting the time of sealing, and ownership attributed to the wrapper utility, etc. This version lets you preserve the properties of the file prior to sealing. 

The documentation has also been updated to clarify the permissions needed to use the utility.

For those who aren't familiar with the IRM Hotfolder, it is a simple utility that uses IRM APIs to seal and unseal files automatically by monitoring file system folders, WebDAV folders, SharePoint folders, application output folders, and so on.

Native String Aggregation in 11gR2

Duncan Mein - Tue, 2011-06-14 03:07
A fairly recent requirement meant that we had to send a bulk email to all users of each department from within our APEX application. We have 5000 records in our users table, and the last thing we wanted to do was send 5000 distinct emails (one email per user), both for performance and to be kind to the mail queue / server.

In essence, I wanted to perform a type of string aggregation where I could group by department and produce a comma-delimited string of all email addresses of users within that department. With a firm understanding of the requirement, so began the hunt for a solution. Depending on what version of the database you are running, the desired result can be achieved in a couple of ways.

Firstly, the example objects.

CREATE TABLE app_user
(id NUMBER
,dept VARCHAR2 (255)
,username VARCHAR2(255)
,email VARCHAR2(255)
);

INSERT INTO app_user (id, dept, username, email)
VALUES (1,'IT','FRED','fred@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (2,'IT','JOE','joe@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (3,'SALES','GILL','gill@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (4,'HR','EMILY','emily@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (5,'HR','BILL','bill@mycompany.com');

INSERT INTO app_user (id, dept, username, email)
VALUES (6,'HR','GUS','gus@mycompany.com');

COMMIT;

If you are using 11gR2, you can expose the new LISTAGG function as follows to perform your string aggregation natively:

SELECT dept
,LISTAGG(email,',') WITHIN GROUP (ORDER BY dept) email_list
FROM app_user
GROUP BY dept;

DEPT EMAIL_LIST
-----------------------------------------------------------------
HR emily@mycompany.com,bill@mycompany.com,gus@mycompany.com
IT fred@mycompany.com,joe@mycompany.com
SALES gill@mycompany.com

If running 11g or earlier, you can achieve the same result using XMLAGG as follows:

SELECT au.dept
,LTRIM
(EXTRACT
(XMLAGG
(XMLELEMENT
("EMAIL",',' || email)),'/EMAIL/text()'), ','
) email_list
FROM app_user au
GROUP BY au.dept;

DEPT EMAIL_LIST
-----------------------------------------------------------------
HR emily@mycompany.com,bill@mycompany.com,gus@mycompany.com
IT fred@mycompany.com,joe@mycompany.com
SALES gill@mycompany.com
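One refinement worth knowing if you take the XMLAGG route: XMLAGG accepts its own ORDER BY clause, so the aggregated addresses can be returned in a deterministic order, matching what WITHIN GROUP gives you in the LISTAGG version. A sketch against the same example table:

```sql
SELECT au.dept
      ,LTRIM
         (EXTRACT
            (XMLAGG
               (XMLELEMENT("EMAIL", ',' || au.email)
                ORDER BY au.email), '/EMAIL/text()'), ','
         ) email_list
FROM   app_user au
GROUP BY au.dept;
```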

The introduction of native string aggregation in 11gR2 is a real bonus, and a function that has already proved to have huge utility within our applications.
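One caveat: the 11gR2 LISTAGG has no DISTINCT option, so if the same address could appear more than once within a department (our users table doesn't allow that, but yours might), you'd need to de-duplicate first, for example in an inline view. A sketch against the example table above:

```sql
SELECT dept
      ,LISTAGG(email, ',') WITHIN GROUP (ORDER BY email) email_list
FROM  (SELECT DISTINCT dept, email
       FROM   app_user)
GROUP BY dept;
```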

Back from Tallinn, Estonia

Rob van Wijk - Sat, 2011-06-11 16:22
This morning I arrived back from a trip to Tallinn. Oracle Estonia had given me the opportunity to present my SQL Masterclass seminar at their training center in Tallinn, on Thursday and Friday. Thank you to all those who spent two days hearing me. Here is a short story about my trip including some photos. I arrived at Tallinn airport around 1PM local time on Wednesday. My hotel room was located at

Clouds Leak - IRM protects

Simon Thorpe - Sat, 2011-06-11 06:46

In a recent report, security professionals reported two leading fears relating to cloud services:

"Exposure of confidential or sensitive information to unauthorised systems or personnel"

"Confidential or sensitive data loss or leakage"

 

These fears are compounded by the fact that business users frequently sign themselves up to cloud services independently of whatever arrangements are made by corporate IT. Users are making personal choices to use the cloud as a convenient place to store and share files - and they are doing this for business information as well as personal files. In my own role, I was recently invited by a partner to review a sensitive business document using Googledocs. I just checked, and the file is still there weeks after the end of that particular project - because users don't often tidy up after themselves.

So, the cloud gives us new, seductively simple ways to scatter information around, and our choices are governed by convenience rather than compliance. And not all cloud services are equal when it comes to protecting data. Only a few weeks ago, it was reported that one popular service had amended its privacy assurance from "Nobody can see your private files..." to "Other [service] users cannot...", and that administrators were "prohibited" from accessing files - rather than "prevented". This story demonstrates that security pros are right to worry about exposure to unauthorised systems and personnel.

Added to this, the recent Sony incident highlights how lazy we are when picking passwords, and that services do not always protect passwords anything like as well as they should. Reportedly millions of passwords were stored as plain text, and analysis shows that users favoured very simple passwords, and used the same password for multiple services. No great surprise, but worrying to a security professional who knows that users are just as inconsiderate when using the cloud for collaboration.

No wonder then that security professionals put the loss or exposure of sensitive information firmly at the top of their list of concerns. They are faced with a triple-whammy - distribution without control, administration with inadequate safeguards, and authentication with weak password policy. A compliance nightmare.

So why not block users from using such services? Well, you can try, but from the users' perspective convenience out-trumps compliance and where there's a will there's a way. Blocking technologies find it really difficult to cover all the options, and users can be very inventive at bypassing blocks. In any case, users are making these choices because it makes them more productive, so the real goal, arguably, is to find a safe way to let people make these choices rather than maintain the pretence that you can stop them.

The relevance of IRM is clear. Users might adopt such services, but sealed files remain encrypted no matter where they are stored and no matter what mechanism is used to upload and download them. Cloud administrators have no more access to them than if they found them on a lost USB device. Further, a hacker might steal or crack your cloud passwords, but that has no bearing on your IRM service password, which is firmly under the control of corporate policy. And if policy changes such that the users no longer have rights to the files they uploaded, those files become inaccessible to them regardless of location.  You can tidy up even if users do not.

Finally, the IRM audit trail can give insights into the locations where files are being stored.

So, IRM provides an effective safety net for your sensitive corporate information - an enabler that mitigates risks that are otherwise really hard to deal with.
