Development

Core ADF11: Building Highly Reusable ADF Task Flows

JHeadstart - Wed, 2011-12-14 02:52

In this old post in the Core ADF11 series, I explained the various options you have in designing the page and taskflow structure. The preferred approach in my opinion, which maximizes flexibility and reusability, is to build the application using bounded taskflows with page fragments. You then have various ways to disclose these task flows to the user. You can use a dynamic region or dynamic tabs, or a human workflow task list, or even have the user add the task flow at runtime using WebCenter Composer.

To maximize reuse of individual task flows, there are some simple techniques you can apply:

  • Define a set of input parameters that allows you to configure for the various (re)use cases
  • Define a router activity as the default activity to enable reuse-case-based conditional flows
  • Configure use of a dynamic iterator binding so a task flow can be used both as a master region and as a detail region
  • Configure display properties of UI components based on task flow input parameters, so components can be shown/hidden, editable/readOnly, required/optional, etc., based on the specific (re)use case (a minimal bean sketch follows this list)
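
As an illustration of the last technique, display properties can be driven by a pageFlowScope bean that exposes the task flow input parameters. Below is a minimal sketch; the bean and parameter names ("usage", "readOnly") are assumptions for illustration, not the names used in the sample application.

    import java.io.Serializable;

    // Hypothetical pageFlowScope bean backing a reusable task flow.
    public class TaskFlowUsageBean implements Serializable {

        private String usage;      // reuse case, e.g. "master", "detail" or "popup"
        private boolean readOnly;  // true when the task flow is (re)used read-only

        public void setUsage(String usage) { this.usage = usage; }
        public String getUsage() { return usage; }
        public void setReadOnly(boolean readOnly) { this.readOnly = readOnly; }

        // Components bind display properties to getters like these, for example
        // readOnly="#{pageFlowScope.taskFlowUsageBean.readOnly}"
        public boolean isReadOnly() { return readOnly; }

        // rendered="#{pageFlowScope.taskFlowUsageBean.searchRegionRendered}"
        public boolean isSearchRegionRendered() { return !"detail".equals(usage); }
    }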

By applying these techniques, you can dramatically reduce the number of task flows you need to build in your project. In this sample application, you can see how you can reuse the same task flow to support the following reuse cases:

  • Show in read-only mode
  • Enable deeplinking from another page to show one specific row 
  • Use as read-only context info in popup
  • Use as master region as well as detail region 
  • Enable deeplinking from an external source such as e-mail.

Downloads: 

Categories: Development

UKOUG TECHEBS 2011 Presentations

Anthony Rayner - Thu, 2011-12-08 08:12
I just wanted to do another quick post-conference follow-up after UKOUG TECHEBS 2011 in Birmingham. At this conference I presented on two topics relating to Application Express and, as promised, here are the slides and samples I showed during the sessions:

With the accessibility sample, I appreciate that by just looking at the application it's not easy to work out exactly what I did to make it more accessible, so I will try to follow up with some more information about the changes in the next couple of weeks. (Hopefully the slides and application together are still of some use until I do this.) Also, I had some interesting feedback after the session, where two people suggested the screen reader demos could be done with the projector switched off, which I thought was a great idea to try and provide a more accurate user experience, so I will try to incorporate that next time.

Thanks to all who attended, I hope you got something from them.
Categories: Development

Logger Project Moved Temporarily

Tyler Muth - Wed, 2011-11-09 08:43
The site samplecode.oracle.com was decommissioned recently. I was hosting a number of projects there including “Logger”, my PL/SQL instrumentation package. Until I find a new home, here’s a temporary link to the latest release (1.4) or if you just want to view the readme use this link. I’ll update this post when I decide on […]
Categories: DBA Blogs, Development

Core ADF11: UIShell with Dynamic Tabs

JHeadstart - Fri, 2011-10-07 06:05

Last update on 28-aug-2013: sample added for JDeveloper 12.1.2, see bottom of post.

In this old post in the Core ADF11 series, I explained the various options you have in designing the page and taskflow structure. The preferred approach in my opinion, which maximizes flexibility and reusability, is to build the application using bounded taskflows with page fragments. You then have various ways to disclose these task flows to the user. You can use a dynamic region that displays the taskflow, as described in this post. You can also go for a more advanced user interface pattern that provides multi-tasking capabilities to the end user by dynamically adding tabs that represent independent tasks for the end user. This pattern is described in more detail here and covers more than just the dynamic tabs. Oracle provides a sample implementation of this user interface pattern, named "Oracle Dynamic Tabs Page Template", as an extension to JDeveloper that can be installed through the JDeveloper Help -> Check for Updates option. While this implementation might be sufficient in your case, I discovered the need for additional related functionality not provided out-of-the-box by this extension, as well as the need for easier customization options.

As I implemented this pattern at various customers, I added more and more functionality to my implementation, which started off as a more or less one-to-one copy of the Oracle extension. The end result is quite different from the Oracle extension, also because I optimized the implementation for JDeveloper 11.1.2, leveraging the new multiregion binding. For my presentation on this topic at Oracle OpenWorld 2011 I created a showcase application that includes all the code and templates for the dynamic tabs implementation and illustrates all the functionality my customers asked for, including:

  • Open new tabs in various ways: using a menu, using a global quick search, from within another tab, and conditionally based on a tab unique identifier (UID) that is used to check whether a tab with the same UID is already open (a sketch of this check follows the list).
  • Close a tab in various ways: using the close icon on the tab, using a close tab button inside the tab region, or auto-closing the tab when the user has finished the task.
  • Transaction handling: each tab should have an independent transaction, a tab needs to be marked dirty with a visual indicator of the dirty tab state, and a warning should be shown when the user tries to close a tab with pending changes.
  • Miscellaneous requirements: update the browser window/tab title based on the currently selected tab, initially display tabs, prevent tabs from being closed automatically, set the maximum number of tabs the user is allowed to open, and update the tab label based on the current data shown in the region.
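
To make the conditional-open case concrete, here is a minimal sketch of the UID check in plain Java; the Tab class and the manager shown here are illustrative assumptions, not the classes shipped with the sample application.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical manager holding the currently opened dynamic tabs.
    public class DynamicTabManager {

        public static class Tab {
            final String uid;         // tab unique identifier, e.g. "Employee:110"
            final String taskFlowId;  // the bounded task flow shown in the tab
            boolean selected;
            Tab(String uid, String taskFlowId) {
                this.uid = uid;
                this.taskFlowId = taskFlowId;
            }
        }

        private final List<Tab> tabs = new ArrayList<Tab>();
        private int maxTabs = 10; // maximum number of tabs the user may open

        // Selects the existing tab with the same UID, or opens a new one.
        public Tab openTab(String uid, String taskFlowId) {
            for (Tab tab : tabs) {
                if (tab.uid.equals(uid)) {
                    select(tab); // already open: just activate it
                    return tab;
                }
            }
            if (tabs.size() >= maxTabs) {
                throw new IllegalStateException("Maximum number of open tabs reached");
            }
            Tab tab = new Tab(uid, taskFlowId);
            tabs.add(tab);
            select(tab);
            return tab;
        }

        private void select(Tab tab) {
            for (Tab t : tabs) {
                t.selected = (t == tab);
            }
        }
    }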

If you have similar requirements, then this sample application might be useful to you. You can download the application below, feel free to copy and paste the infrastructure classes, templates and declarative components to your own application. For an explanation of the implementation, see the slides of my OOW presentation.

Downloads: 

Update 13-nov-2012: The implementation in the sample applications has been improved. It is no longer necessary to define the managed bean for an initially displayed tab in view scope rather than request scope. The improved implementation of initially displayed tabs also fixes the following issues:

  • Closing and re-opening an initially displayed tab caused an NPE (11.1.1.x only)
  • It was not possible to launch another tab from within an initially displayed tab

Update 28-aug-2013: Dynamic Tabs Sample for JDeveloper 12.1.2

Initially, I had problems getting the sample to work in JDev 12.1.2. I logged the following bugs against 12.1.2 that could be reproduced when opening the 11.1.2 sample in 12.1.2:

  • Bug 17158398 - COMMAND COMPONENTS DO NOT WORK WHEN OPENING REGION IN DYNAMIC TAB
  • Bug 17156672 - UI NOT RENDERED CORRECTLY DUE TO CONTEXT PARAM DEFAULT_DIMENSION
  • Bug 17156560 - JAVA.LANG.ILLEGALSTATEEXCEPTION: COULD NOT FIND COMPONENT TO STREAM
  • Bug 17158597 - UPDATES ARE LOST WHEN DEFINING REGIONCONTROLLER IN PAGEDEF CONTROLLERCLASS PROP

Product development has investigated some of these bugs, and this is the result:

  • The behavior described in bug 17156560 was caused by the fixed value specified for the id property in the af:region tag. Unlike JSP, Facelets does not ensure uniqueness when stamping out multiple components, in this case multiple regions inside the af:forEach loop. By removing the id property (Facelets will then auto-generate a unique id), or by making it unique using an EL expression like id="reg_${vs.index}", the issue was resolved. See this blog post from Duncan Mills for more info.
  • After applying the fix for bug 17156560, the behavior reported in bugs 17158398 and 17158597 was no longer seen. These bugs are now closed as "not a bug".
  • The workaround for bug 17156672 is simple. The top-level af:decorativeBox in the page template should have the dimensionsFrom property set to "parent" when the DEFAULT_DIMENSIONS context param is set to 'auto' in web.xml. However, since JDeveloper should ensure upwards compatibility, it should never automatically add the DEFAULT_DIMENSIONS web.xml context param to an existing application. So, the bug has been changed to a design-time bug to prevent this from happening.
With these fixes, the 12.1.2 sample is now working correctly and can be downloaded here.

Categories: Development

Oracle OpenWorld 2011 Dynamic Action Presentation

Anthony Rayner - Wed, 2011-10-05 19:28
Just a quick post to follow up from my presentation today at OpenWorld, 'Oracle Application Express 4.1 - Dynamic Actions'. In the session, I promised to provide the slides and sample applications I used, so here they are.

The zip file contains the slides and 2 sample applications entitled 'Examples' and 'Common Questions'.

Thanks to all who attended, hope you found it useful!
Categories: Development

How to become an Oracle ADF expert in one week (or in 1 day if you don't have so much time)

JHeadstart - Tue, 2011-09-13 22:15

Oracle ADF is an extremely powerful framework for developing SOA-based web applications. Once you have grasped the key concepts of ADF, it is a real pleasure to work with, and to experience how fast you can build really sophisticated stuff. But... as with any advanced framework, there is a learning curve involved, and I have to admit the ADF learning curve might be experienced as rather steep at times. I am often asked, "how long does it take to become a good ADF developer?", and I usually answer "something between 3-6 months". The problem is that most project managers don't allow you this time, so you simply start your first ADF project, and find out along the way that you should have chosen a fundamentally different approach to make your application components truly flexible, easy to maintain, and reusable.

Well, Oracle has a one-time-only unique offer that allows you to become an ADF expert in one week, and get your ADF app right the first time: attend Oracle OpenWorld 2011, October 2-6 in San Francisco! The 2011 edition of OOW has been proclaimed by the ADF EMG as the "Year of the ADF developer", and with good reason. There are lots of good ADF sessions at OOW 2011; a complete overview of all ADF sessions can be found here.

If your employer does not allow you a whole week off from your project, you can go for the Sunday shortcut to become an ADF expert in just one day. The ADF EMG has put together an impressive list of speakers and topics, who will all present on Sunday, October 2, in Moscone room 2000. See you there! (And don't forget to attend my own presentations on Sunday at 2:00 PM and Wednesday at 10:15 AM.)

Categories: Development

Easy Business Intelligence with eazyBI

Raimonds Simanovskis - Sat, 2011-09-10 16:00

I have been interested in business intelligence and data warehouse solutions for quite a while. And I have seen that traditional data warehouse and business intelligence implementations take quite a long time and cost a lot: setting up infrastructure, developing or implementing business intelligence software, and training users. And many business users do not use business intelligence tools because they are too hard to learn and use.

Therefore, a while ago I had the idea of developing an easy-to-use business intelligence web application that you could implement and start to use right away, and which would focus on ease of use rather than a pile of unnecessary features. The result of this idea is eazyBI, which after a couple of months of beta testing is now launched in production.

Many sources of data

One of the first issues in data warehousing is that you need to prepare / import / aggregate the data that you would like to analyze. It's hard to create a universal solution for this, so I am building several predefined ways to upload or prepare your data for analysis.

In the simplest case, if you have source data in CSV (comma-separated values) format, or can export your data in CSV format, you can upload these files to eazyBI, map columns to dimensions and measures, and start to analyze and create reports from the uploaded data.

If you are already using software-as-a-service web applications and would like better tools to analyze the data in these applications, eazyBI will provide standard integrations for data import and will create initial sample reports. Currently the first integrations are with the 37signals Basecamp project collaboration application, and you can also import and analyze your Twitter timeline. More standard integrations with other applications will follow (please let me know if you are interested in a particular integration).

One of the main limiting factors for business-intelligence-as-a-service solutions is that many businesses are not ready to upload their data to service providers, both because of security concerns and because uploading big data volumes takes quite a long time. The remote eazyBI solution is unique in that it allows you to use the eazyBI business-intelligence-as-a-service application with your existing data in your existing databases. You just need to download and set up the remote eazyBI application and start to analyze your data. As a result you get the benefits of fast implementation of business intelligence as a service, but without the risks of uploading your data to a service provider.

Easy-to-use

Many existing business intelligence tools suffer from too many features that make them complicated to use, so you need specialized business intelligence consultants to create reports and dashboards for business users.

Therefore my goal for eazyBI is to have fewer features, but every feature that is added should be easy to use. With simple drag-and-drop and a couple of clicks you can select the dimensions by which you want to analyze data, start with summaries, and then drill into details. Results can be viewed as traditional tables or as different bar, line, pie, timeline or map charts. eazyBI is easy and fun to use as you play with your data.

Share reports with others

When you have created a report which you would like to share with your colleagues, customers or partners, you can either send a link to the report or embed the report into other HTML pages. Here is an example of an embedded report from the demo eazyBI account:

Embedded reports are fully dynamic and will show the latest data when the page is refreshed. They can be embedded in company intranets, wikis, blogs or any other web pages.

Open web technologies

The eazyBI user interface is built with open HTML5/CSS3/JavaScript technologies, which allows it to be used both in desktop browsers and on mobile devices like the iPad (you can open reports even on mobile phones like iPhone or Android, but of course the screen size makes them harder to use). And if you use a modern browser like Chrome, Firefox, Safari or Internet Explorer 9, eazyBI will also be very fast and responsive. I believe that speed should be one of the main features of any application, therefore I am optimizing eazyBI to be as fast as possible.

In the backend eazyBI uses the open-source Mondrian OLAP engine and the mondrian-olap JRuby library that I created and open-sourced during eazyBI development.

No big initial investments

eazyBI pricing is based on a monthly subscription starting from $20 per month. There is also a free plan for a single user, and you can publish public data for free. And you don't need to make long-term commitments: you just pay for the service when you use it, and you can cancel anytime when you don't need it anymore.

Try it out

If this sounds interesting, please sign up for eazyBI and try uploading and analyzing your data. And if you have any suggestions or questions about eazyBI, please let me know.

P.S. I also wanted to mention that by subscribing to eazyBI you will support my work on open source. If you are a user of my open-source libraries, I would appreciate it if you become an eazyBI user as well :) But if you do not need a solution like eazyBI, you could support me by tweeting and blogging about eazyBI; I will be thankful for that as well.

Categories: Development

Oracle enhanced adapter 1.4.0 and Readme Driven Development

Raimonds Simanovskis - Mon, 2011-08-08 16:00

I just released Oracle enhanced adapter version 1.4.0, and here is a summary of the main changes.

Rails 3.1 support

The Oracle enhanced adapter GitHub version was already working with the Rails 3.1 betas and release candidates, but it was not explicitly stated anywhere that you should use the git version with Rails 3.1. Therefore I am releasing new version 1.4.0, which passes all tests with the latest Rails 3.1 release candidate. As I wrote before, the main changes in ActiveRecord 3.1 are that it uses a prepared statement cache and bind variables in many statements (currently in select by primary key, insert and delete statements), which results in better performance and better use of database resources.

To keep track of how the Oracle enhanced adapter works with different Rails versions and different Ruby implementations, I have set up a Continuous Integration server to run tests on different version combinations. At the moment of writing, everything was green :)

Other bug fixes and improvements

The main fixes and improvements in this version are the following:

  • On JRuby, I switched from the old ojdbc14.jar JDBC driver to the latest ojdbc6.jar (on Java 6) or ojdbc5.jar (on Java 5). The JDBC driver can also be located in the Rails application ./lib directory.

  • The RAW data type is now supported (it is quite often used in legacy databases instead of the nowadays recommended CLOB and BLOB types).

  • rake db:create and rake db:drop can be used to create or drop development and test database schemas.

  • Support for virtual columns is improved (only works on an Oracle 11g database).

  • Default table, index, CLOB and BLOB tablespaces can be specified (if your DBA insists on putting everything in separate tablespaces :)).

  • There are several improvements to context index additional options and to the definition dump.

See the list of all enhancements and bug fixes.

If you want a new feature in the Oracle enhanced adapter, the best way is to implement it yourself, write some tests for it, and send me a pull request. In this release I have included commits from five new contributors and two existing contributors, so it is not so hard to start contributing to open source!

Readme Driven Development

One of the worst parts of the Oracle enhanced adapter so far was that it was quite hard for new users to understand how to start using it and what additional features it provides. There were many posts in this blog, there were wiki pages, there were posts in discussion forums. But all this information was in different places, and some posts were already outdated, so for new users it was hard to know where to start.

After reading about Readme Driven Development and watching a presentation about it, I knew that the README of the Oracle enhanced adapter was quite bad and should be improved (in my other projects I am already trying to do better, but this was my first one :)).

Therefore I have written a new README for the Oracle enhanced adapter which covers the main installation, configuration, usage and troubleshooting tasks that were previously scattered across different posts. If you find that some important information is missing or outdated, please submit patches to the README as well, so that it stays up to date and relevant.

If you have any questions, please use the discussion group, report issues at GitHub, or post comments here.

Categories: Development

JDev 11.1.2: Differences in Table Behavior

JHeadstart - Thu, 2011-07-28 23:03

While building a simple ADF application in JDev 11.1.2 I encountered some strange runtime behavior. I built another application with the same behavior in exactly the same way in JDev 11.1.1.4, and there things worked smoothly. However, in JDev 11.1.2, the addRow and deleteRow functions didn't work as expected. In this post I will share my tough journey in finding out what was happening, and discuss the difference in behavior and the changes required to make it work in JDev 11.1.2.

When using the add row button (the green plus icon in the screen shot below) an error message for the required JobId dropdown list was shown immediately.

Some investigation using the Google Chrome developer tools revealed that two requests instead of one were sent to the server, with apparently the second request causing the validation error to appear. (Although the validation error is a client-side error, so I am still not sure how the second request can trigger it.)

At first I thought this was caused by the partialSubmit property on the addRow button, which was set to true. Setting this property to false (or removing it) fixed this problem, but caused table rendering to hang. Weird, but I didn't investigate that further. I decided to build the same app in JDev 11.1.1.4, which worked smoothly, and then opened this app in JDev 11.1.2. After the auto-migration, I ran the app, but much to my surprise the "Selection required" message didn't show up. I compared the page and page definition of both apps over and over again, and couldn't see any difference. Eventually, I started comparing all the files in both projects. This led me to the adf-config.xml file, located in the .adf directory under the root directory of the application, also visible under the Resources panel. In this file, one property existed in the JDev 11.1.2 application that was not present in the JDev 11.1.1.4 version: changeEventPolicy="ppr".

By removing this property, things started to work again, and only one request was sent.

Note that the really tricky thing here is that when you upgrade an application from JDev 11.1.1.4 this property does not get added, but new JDev 11.1.2 apps will have this property set, causing a difference in behavior between a migrated app and a new app. At this point, my recommendation is to remove this property (or set it to none) for new JDev 11.1.2 apps. If memory serves me well, in some JDev 11.1.1.x version, dragging and dropping a data collection as a table on a page added the changeEventPolicy="ppr" property to the iterator binding in the page def. In a later JDev 11.1.1.x release this property was gone again. Looks like it is back in a different form (this time in adf-config.xml), but still with undesirable implications.

The next error I hit was in the delete confirmation dialog, when trying to delete a row. Regardless of which button I pressed (Yes or No), I got validation errors on the underlying new row, and the dialog was not closed, nor was the row removed.

Now, I think this error has to do with the ADF Faces optimized JSF lifecycle. Since the table needs to be refreshed when the row is removed by clicking Yes in the dialog, the af:table component requires a partialTriggers property that refers to the af:dialog element. With this partialTriggers property in place, the ADF optimized JSF lifecycle causes the table items to be submitted (and validated) as well when clicking the Yes or No button in the dialog. Now, I am speculating here, but maybe this wasn't supposed to work at all in JDev 11.1.1.4, and it only did because of a bug in the optimized lifecycle code that has now been fixed in JDev 11.1.2...?

Anyway, what feels like the most logical and easy way to solve this issue is setting the immediate property on the af:dialog to true, so the dialog listener method would skip the JSF validation phase. However, the af:dialog element does not have such a property (I logged an enhancement request). Two other solutions remain:

  • No longer use the dialogListener property, but instead define custom Yes/No buttons using the toolbar facet on the af:dialog. On these buttons I can set the immediate property to true, bypassing client-side and server-side validation.
  • Do not specify the af:dialog as partial trigger on the af:table component; instead, add the table or a surrounding layout container element as partial target programmatically after deleting the row. This is the solution I chose, since it only required one line of code in the managed bean class that deletes the row (see the sketch below).
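
A minimal sketch of that managed bean code, assuming the table is bound to the bean via its binding property; the bean and method names are made up for illustration, the essential line is the addPartialTarget call:

    import javax.faces.component.UIComponent;
    import oracle.adf.view.rich.context.AdfFacesContext;

    // Hypothetical managed bean behind the delete confirmation dialog.
    public class EmployeesBean {

        private UIComponent table; // set via the binding property of the af:table

        public void setTable(UIComponent table) { this.table = table; }
        public UIComponent getTable() { return table; }

        public void deleteRow() {
            // ... delete the current row through the binding layer ...
            // refresh the table programmatically instead of using partialTriggers:
            AdfFacesContext.getCurrentInstance().addPartialTarget(table);
        }
    }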

Links and references:

Categories: Development

Application Express 4.1 - #BUTTON_ID# Changed Behaviour

Anthony Rayner - Thu, 2011-07-21 07:28
I just wanted to blog about some changed behaviour that will be landing in APEX 4.1. The change has to do with the #BUTTON_ID# substitution string, available for use within a button template, and we hope it will have minimal impact.

What's Changing?

Prior to 4.1, the value substituted for the #BUTTON_ID# substitution string in a button template depended on the type of button:
  • Region buttons substituted the internal numeric ID of the button, for example '123456789101112'.
  • Item buttons actually failed to substitute this value at all, so you would just get the '#BUTTON_ID#' text.
We had to address and improve this for the new Dynamic Action / Button integration in 4.1, which means that the value substituted for #BUTTON_ID# will now be one of the following:
  1. If the template is used by an item button, the item name is used, for example 'P1_GO'.
  2. If the template is used by a region button and a 'Static ID' (new in 4.1) is defined for the button, this is used, for example 'my_custom_id'.
  3. If the template is used by a region button and no 'Static ID' is defined, an ID in the format 'B' || [internal button ID] will be used, for example 'B123456789101112'.
The change in behaviour that could be of potential significance in your applications is #3 above. This was changed so as to be a valid HTML 4.01 identifier (begins with a letter) and to be consistent with how we handle other component IDs in Application Express. 


Will this impact your applications?

If you used a button template that used an ID value substituted by #BUTTON_ID# (you would have had to add it to the button template, as this was never included by default in our themes), and, importantly, you hard-coded that ID in other places in your code (for example in custom JavaScript to attach behaviour to the button), then this change in behaviour will cause that code to no longer work.


What can you do about this?

To help you identify region buttons that use a template containing the #BUTTON_ID# substitution string, you can run the following query in your workspace in Application Express 4.0:

    select aapb.application_id,
           aapb.page_id,
           aapb.button_name,
           aapb.label,
           aapb.button_id,
           aapb.button_template,
           aatb.template
      from apex_application_page_buttons aapb,
           apex_applications aa,
           apex_application_temp_button aatb
     where aapb.application_id     = aa.application_id
       and aa.application_id       = aatb.application_id
       and aa.theme_number         = aatb.theme_number
       and aatb.template_name      = aapb.button_template
       and aapb.application_id     = [Your App ID]
       and aapb.button_template    is not null
       and aapb.button_position    = 'Region Position'
       and upper(aatb.template)    like '%#BUTTON_ID#%'
     order by 1,2,3

Or when APEX 4.1 comes out, you'll be able to do this (by virtue of the addition of the BUTTON_TEMPLATE_ID column to the APEX_APPLICATION_PAGE_BUTTONS dictionary view):

    select aapb.application_id,
           aapb.page_id,
           aapb.button_name,
           aapb.label,
           aapb.button_id,
           aapb.button_template,
           aatb.template
      from apex_application_page_buttons aapb,
           apex_application_temp_button aatb
     where aapb.button_template_id = aatb.button_template_id
       and aapb.application_id     = [Your App ID]
       and aapb.button_template    is not null
       and aapb.button_position    = 'Region Position'
       and upper(aatb.template)    like '%#BUTTON_ID#%'
     order by 1,2,3

This will not detect where you may have referenced the ID in other code (such as JavaScript code); it just identifies the buttons that could be problematic. You can then review the pages returned by this query to isolate any pages that could have issues.

The easiest way to fix the issue will be to use the 'Button ID' value returned from the query as the new 'Static ID' for the button. This would mean the button would be rendered with the same ID as prior to release 4.1 and any dependent code would still work.

This will of course be documented in our Release Notes as Changed Behaviour, but I hope this post helps to give a little pre-warning as to whether this may affect you.

Updated: Thank you to Louis-Guillaume Carrier-Bédard for pointing out that the query I (rather stupidly!) initially added does not work in APEX 4.0. My apologies if this caused any inconvenience.
Categories: Development

Core ADF11: UIShell with Menu Driving a Dynamic Region

JHeadstart - Thu, 2011-07-07 21:44

In this old post in the Core ADF11 series, I explained the various options you have in designing the page and taskflow structure. The preferred approach in my opinion, which maximizes flexibility and reusability, is to build the application using bounded taskflows with page fragments. You then have various ways to disclose these task flows to the user. You can use dynamic tabs, as described in this post. You can also embed the taskflows using a dynamic region in a page. The application menu then drives the content of the dynamic region: clicking a menu option will load another taskflow in the dynamic region.

The initial drawback of this approach is that it adds some additional complexity:

  • You can no longer use standard JSF navigation with your menu; there are no pages to navigate to, only regions.
  • The XMLMenuModel, an easy way to define your menu structure in XML, cannot be used as-is. The selected menu entry when using the XMLMenuModel is based on the current page, and in our design, the whole application consists of only one page, the UIShell page.
  • The dynamic region taskflow binding should contain a list of all parameters of all taskflows that can be displayed in the dynamic region.

    This is a rather ugly design, and the developer of the UIShell page would need to know all the parameters of all taskflows that might be displayed in the dynamic region. A cleaner implementation is to use the parameter map property against the taskflow binding. However, when using the parameter map, you need to do the housekeeping of changed parameter values yourself when you want the taskflow to refresh when parameters change. In other words, just specifying Refresh=ifNeeded on the taskflow binding no longer works because ADF does not detect changes in a parameter map.

Fortunately, you can address this complexity quite easily by creating some simple yet powerful infrastructure classes that hide most of the complexity from your development team. (The full source of these classes can be found in the sample application; download links are at the bottom of this post.)

  • A DynamicRegionManager class that keeps track of the current task flow and current parameter map 
  • A TaskFlowConfigBean for each task flow that contains the actual task flow document path, the task flow parameters, and a flag indicating whether the parameter values have changed.
  • A RegionNavigationHandler that subclasses the standard navigation handler to provide JSF-like navigation to a region taskflow.
  • A RegionXMLMenuModel class that subclasses the standard XMLMenuModel class to ensure the proper menu tab is selected based on the currently displayed taskflow in the dynamic region. 

The following picture illustrates how this UIShell concept works at runtime.

The UIShell page (with extension .jsf in JDeveloper 11.1.2 and extension .jspx in JDeveloper 11.1.1.x) contains a dynamic region. The taskflow binding in the page definition of UIShell gets the currently displayed taskflow from the DynamicRegionManager class that is registered as a managed bean under the name mainRegionManager. The actual task flow id and task flow parameters are supplied by the TaskFlowConfigBean. The DynamicRegionManager manages the current taskflow based on a logical name, for example Jobs. When the method setCurrentTaskFlowName is called on the DynamicRegionManager with value Jobs (and we will later see how we use the RegionNavigationHandler to do this), the DynamicRegionManager looks up the corresponding TaskFlowConfigBean by suffixing the Jobs task flow name with TaskFlowConfig. The methods getCurrentTaskFlowId, getCurrentParamMap and currentParamMapChanged then obtain and return the correct values using the JobsTaskFlowConfig bean.
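
A minimal sketch of what the DynamicRegionManager and TaskFlowConfigBean can look like; the method names come from this post, but the bodies are assumptions rather than the actual sample code:

    import java.util.Map;
    import javax.faces.context.FacesContext;

    // Sketch of the bean registered as "mainRegionManager".
    public class DynamicRegionManager {

        private String currentTaskFlowName; // logical name, for example "Jobs"

        public void setCurrentTaskFlowName(String name) {
            this.currentTaskFlowName = name;
        }

        // Looks up the config bean by suffixing the logical name, e.g. "JobsTaskFlowConfig".
        private TaskFlowConfigBean getConfig() {
            FacesContext ctx = FacesContext.getCurrentInstance();
            return (TaskFlowConfigBean) ctx.getApplication().evaluateExpressionGet(
                ctx, "#{" + currentTaskFlowName + "TaskFlowConfig}", TaskFlowConfigBean.class);
        }

        // The task flow binding in the UIShell page definition calls these getters.
        public String getCurrentTaskFlowId() { return getConfig().getTaskFlowId(); }
        public Map<String, Object> getCurrentParamMap() { return getConfig().getParamMap(); }
        public boolean isCurrentParamMapChanged() { return getConfig().isParamMapChanged(); }
    }

    // Sketch of a per-task-flow configuration bean, e.g. "JobsTaskFlowConfig".
    class TaskFlowConfigBean {

        private String taskFlowId;            // task flow document path and id
        private Map<String, Object> paramMap; // task flow parameters
        private boolean paramMapChanged;      // flag used to trigger a region refresh

        public String getTaskFlowId() { return taskFlowId; }
        public Map<String, Object> getParamMap() { return paramMap; }
        public boolean isParamMapChanged() { return paramMapChanged; }
        // setters omitted for brevity
    }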

Using this technique, you can configure your dynamic region completely declaratively; there is no need to write any Java code. All you need to do is configure the mainRegionManager and task flow config beans, as shown below.

In this sample, I configured the required managed beans in adfc-config.xml. When using ADF Libraries that contain task flows you want to show in the UIShell dynamic region, it is more elegant to define the TaskFlowConfigBean inside the ADF library. I usually define this bean in the adfc-config file of the task flow itself, below the <task-flow-definition/> section. It needs to be outside this section, otherwise the bean cannot be found by the DynamicRegionManager bean that is defined in the unbounded task flow together with the UIShell page. The advantage of this approach is that the config bean is placed in the same file as the task flow it refers to; the disadvantage is that beans defined outside the <task-flow-definition> section are not visible in the overview tab of a bounded task flow.

To set the current task flow name on the DynamicRegionManager, we use a custom RegionNavigationHandler class that you register in the faces-config.xml.

This class extends the default navigation handler, and overrides the handleNavigation method. The code is shown below. If the outcome contains a colon, the part after the colon contains the task flow name that should be set in the dynamic region.
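
A minimal sketch of such a handler; the colon convention and the mainRegionManager bean name follow this post, while the wrapping and bean-lookup details are assumptions:

    import javax.faces.application.NavigationHandler;
    import javax.faces.context.FacesContext;

    // Sketch of the custom navigation handler registered in faces-config.xml.
    public class RegionNavigationHandler extends NavigationHandler {

        private final NavigationHandler delegate; // the default handler we wrap

        public RegionNavigationHandler(NavigationHandler delegate) {
            this.delegate = delegate;
        }

        @Override
        public void handleNavigation(FacesContext ctx, String fromAction, String outcome) {
            if (outcome != null && outcome.indexOf(':') > -1) {
                int pos = outcome.indexOf(':');
                // the part after the colon is the logical task flow name
                DynamicRegionManager mgr = (DynamicRegionManager) ctx.getApplication()
                    .evaluateExpressionGet(ctx, "#{mainRegionManager}", DynamicRegionManager.class);
                mgr.setCurrentTaskFlowName(outcome.substring(pos + 1));
                // keep the part before the colon, e.g. "uishell", to navigate
                // to the UIShell page if we are not already on it
                outcome = outcome.substring(0, pos);
            }
            delegate.handleNavigation(ctx, fromAction, outcome);
        }
    }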


With this class in place, we can use the action property on command components again to do region navigation, just like we are used to for normal JSF navigation, or navigation between page fragments inside a bounded task flow. The complete flow is as follows:

  • The menu item has the action property set to uishell:Jobs.
  • The region navigation handler sets the current task flow name on the dynamic region manager to Jobs (and navigates to the UIShell page if needed).
  • The dynamic region manager picks up the current task flow id and current parameters from the JobsTaskFlowConfig bean.

The last part of the puzzle is the RegionXMLMenuModel class that subclasses the standard XMLMenuModel class. This class overrides the getFocusRowKey method to return the focus path based on the current task flow region name.

With this class defined in the menu model managed bean, the menu will correctly display the selected tab.   

In a future post I will discuss how the same concepts can be applied to a UIShell page with dynamic tabs.

Downloads:

Categories: Development

Some More New ADF Features in JDeveloper 11.1.2

JHeadstart - Thu, 2011-06-23 22:39

The official list of new features in JDeveloper 11.1.2 is documented here. While playing with JDeveloper 11.1.2 and scanning the web user interface developer's guide for 11.1.2, I noticed some additional new features in ADF Faces; small, but they might come in handy:

  • You can use the af:formatString and af:formatNamed constructs in EL expressions to use substitution variables. For example:
    <af:outputText value="#{af:formatString('The current user is: {0}',someBean.currentUser)}"/>
    See section 3.5.2 in web user interface guide for more info.

  • A new ADF Faces Client Behavior tag: af:checkUncommittedDataBehavior. See section 20.3 in web user interface guide for more info. For this tag to work, you also need to set the uncommittedDataWarning property on the af:document tag. And this property has quite some issues, as you can read here. I did a quick test: the alert is shown for a button that is on the same page; however, if you have a menu in a shell page with dynamic regions, then clicking on another menu item does not raise the alert if you have pending changes in the currently displayed region. For now, the JHeadstart implementation of pending changes still seems the best choice (I will blog about that soon).

  • New properties on the af:document tag: smallIconSource creates a so-called favicon that is displayed in front of the URL in the browser address bar. The largeIconSource property specifies the icon used by a mobile device when bookmarking the page to the home page. See section 9.2.5 in web user interface guide for more info. Also notice the failedConnectionText property, which I didn't know about but which was already available in JDeveloper 11.1.1.4.

  • The af:showDetail tag has a new property handleDisclosure which you can set to client for faster rendering.

  • In JDeveloper 11.1.1.x, an expression like #{bindings.JobId.inputValue} would return the internal list index number when JobId was a list binding. To get the actual JobId attribute value, you needed to use #{bindings.JobId.attributeValue}. In JDeveloper 11.1.2 this is no longer needed: the #{bindings.JobId.inputValue} expression will return the attribute value corresponding to the selected index in the choice list.
Did you discover other "hidden" new features? Please add them as a comment to this blog post so everybody can benefit.

Categories: Development

Recent conference presentations

Raimonds Simanovskis - Thu, 2011-06-02 16:00

Recently I have not posted any new posts, as I was busy with some new projects, and during May I attended several conferences, at some of which I also did presentations. Here I will post the slides from these conferences. If you are interested in some of these topics, ask me to come to you as well and talk about them :)

Agile Riga Day

In March I spoke at Agile Riga Day (organized by Agile Latvia) about my experience and recommendations on how to adopt Agile practices in an iterative style.

RailsConf

In May I travelled to RailsConf in Baltimore, where I hosted the traditional Rails on Oracle Birds of a Feather session and gave an overview of how to contribute to the ActiveRecord Oracle enhanced adapter.

TAPOST

Then I participated in our local Theory and Practice of Software Testing conference, where I promoted the use of Ruby as a test scripting language.

RailsWayCon

And lastly I participated in the Euruko and RailsWayCon conferences in Berlin. At RailsWayCon my first presentation was about multidimensional data analysis with JRuby and the mondrian-olap gem. I also published the mondrian-olap demo project that I used during the presentation.

My second RailsWayCon presentation was about CoffeeScript, Backbone.js and Jasmine, which I have recently been using to build rich web user interfaces. This was quite a successful presentation, as there were many questions, and many participants were encouraged to try out CoffeeScript and Backbone.js. I also published the demo application that I used for code samples during the presentation.

Next conferences

Now I will rest from conferences for some time :) But then I will attend FrozenRails in Helsinki, and I will present at Oracle OpenWorld in San Francisco. See you there!

Categories: Development

Web 2.0 Solutions with Oracle WebCenter 11g (book review)

Java 2 Go! - Wed, 2011-03-16 20:16
by Fábio Souza

Hello People! This was supposed to be a post to celebrate the new year but, as you can all notice, things didn't happen the way I was expecting (again haha). Today I will talk...

This is a summary only. Please visit the blog for full content and more.
Categories: Development

Oracle JHeadstart 11.1.1.3 Now Available

JHeadstart - Wed, 2011-03-16 05:27

Oracle JHeadstart 11.1.1.3 is now available for download (build 11.1.1.3.35).
This release is compatible with JDeveloper 11 releases 11.1.1.4, 11.1.1.3 and 11.1.1.2.
Customers who own a JHeadstart supplement option license can download it from the Consulting Supplement Option portal.

In addition to many small enhancements and bug fixes, the following features have been added to JHeadstart 11.1.1.3:


  • Support for Dynamic Tabs: There is a new page template inspired by the dynamic tabs functional pattern. Note that this template and the associated managed beans are all included with JHeadstart and can be easily customized. The implementation does not depend on the "Oracle Extended Page Templates" library. Additional JHeadstart-specific features include: automatically marking the current tab dirty, initially displaying one or more tabs, the ability to uniquely identify tabs by a tab unique identifier, and a close-tab icon displayed on the tab itself. When using the item Display Type "groupLink" there is a new allowable value "New Dynamic Tab" for the property Display Linked Group In that can be used when you use the dynamic tabs template. For example, this allows you to have a first "EmployeeSearch" group that searches employees and shows the result in a table, and then clicking a groupLink edit icon for one employee in the table opens a new dynamic tab showing the linked "EmployeeEdit" group for this specific employee. See section 9.3 "Using Dynamic Tabs when Opening a Menu Item" in the JHeadstart Developer's Guide for more information.

    Dynamic Tabs



  • Support for Function Keys: There is a new application-level property Enable Function Keys. When this property is checked, JHeadstart will generate context-sensitive function keys. The list of function keys can be seen by pressing Ctrl-K. The actual function keys shown in the list depend on the location of the cursor in the page. The default mapping of function keys to ADF actions is inspired by the Oracle Forms function key mapping, but can very easily be changed. Function keys for custom buttons can easily be added as well. See section 11.7 "Using Function Keys" in the JHeadstart Developer's Guide for more information.

    Context-Sensitive Function Keys

  • Support for Popup Regions: There are two new region container layout styles: Modeless Popup Window and Modal Popup Window. With these layout styles, the content of the region container and its child regions is displayed in a popup window. If the Depends on Item(s) property is set on the region container, the item context facet of the depends-on item is used to launch the region container popup. If the depends-on item is a button item, the popup will be launched when clicking the button. If the Depends on Item(s) property is not set for the popup region container, an additional button with the label set to the region title is generated to launch the popup. See section 5.10.4 "Generating Content in a Popup Window" in the JHeadstart Developer's Guide for more information.

    Popup Region Container with Method Call Button

  • Support for Dynamic Iterator Bindings: There is a new group property Data Collection Expression. In this property you can specify an EL expression that determines at runtime which data collection (view object usage) should be used. This can be useful when reusing the same group taskflow multiple times. For example, the Employees group can be used as a top-level taskflow, and as a group region taskflow under the Departments group. Rather than setting up bind variables and a query where-clause, this use case can be implemented much more easily by dynamically switching the view object usage between a top-level Employees view object usage and a nested usage under the Departments view object usage. The data collection expression can then refer to a taskflow parameter which specifies the actual view object usage name.
  • Support for Custom Toolbar Buttons: There are two new item display types, toolbarButton and groupLinkToolbarButton. Items with these display types are generated into the group-level toolbar for form layouts, and added to the table toolbar for table layouts.

    Custom Iconic Toolbar Button

  • Control over Label Position: There is a new property Label Alignment for groups and item regions that allows you to specify whether labels (prompts) should be positioned at the left of an item, or above the item.
  • Support for Icons: There is a new icon property available at group and item level. At item level, this can be used to generate iconic buttons and group links. At group level, the icon is displayed in the header bar of the group, and in buttons that navigate to the group taskflow
  • Support for Online Help: At the application level, there is a new property Online Help Provider. Two new properties, Help Text and Instruction Text, have been added to the group, region container, group region, item region and item elements. These two new properties are only used when a help provider is set at the application level. If help text is entered, a help icon will be displayed at the right of the element title or, in the case of an item, at the left of the item prompt. If instruction text is entered, it will be displayed below the element title or, in the case of an item, in a popup when the user clicks in the input component of the item. See section 11.6 "Using Online Help" in the JHeadstart Developer's Guide for more information.

    Online Help

  • Deeplinking from an External Source Like E-Mail: You can now launch the application with a specific page (region) being displayed by adding "jhsTaskFlowName" as a parameter to the request URL.
    The value of "jhsTaskFlowName" should be set to a valid group name as defined in the application definition. Any other request parameters will be set as taskflow parameters.
    For example the following URL will start the application with the Employees taskflow, displaying employee with employeeid 110:

    http://127.0.0.1:7101/MyJhsTutorial/UIShell?jhsTaskFlowName=Employees&rowKeyValueEmployees=110
  • Ability to Call Business Method from Button: There is a new item level property Method Call where you can select a method from the list of application module methods that have been added to the client interface.
    Using item parameters, you can specify the method arguments. The return value of a method call can easily be displayed in an unbound item using a simple EL expression. See section 6.10.3 "Executing a Button Action" in the JHeadstart Developer's Guide for more information.
  • Requery when Entering Task Flow: The group-level combobox property Requery Condition has a new allowable value "When Entering the Task Flow" to ensure the latest data are queried when the user enters a task flow. As before, you can still enter a custom boolean EL expression in this property as well.
  • Additional Item Properties: There is a new item property "Additional Properties" where you specify additional properties that are added to the ADF Faces component generated for the item. See section 12.4.4 "Adding Custom Properties to a Generated Item" in the JHeadstart Developer's Guide for more information.
  • New Custom Properties: You can now specify 5 custom properties against region containers, item regions and group regions.
  • Easier Taskflow Customization: Most of the content of the bounded taskflow Velocity template for a top group (groupAdfcConfig.vm) has been refactored into separate templates to make customization of generated bounded taskflows easier and faster. Placeholder (empty) templates to easily add custom managed beans, custom taskflow activities and custom control flow rules have been added as well. See section 12.5 "Customizing Task Flows" in the JHeadstart Developer's Guide for more information.
  • Easier File Generation Customization: The fileGenerator.vm now uses logical template names instead of hardcoded template paths to generate files. You can now use the Application Definition Editor to create a custom template for a specific file, just like all other templates. In addition, to prevent generation of a file, you can set the template to default/common/empty.vm; the file generator will no longer create files with empty content. See section 12.6 "Customizing Output of the File Generator" in the JHeadstart Developer's Guide for more information.
  • Better Support for ADF Libraries: A new paragraph (2.4) in the JHeadstart Developer's Guide describes JHeadstart-specific steps to take when using ADF Libraries. In addition, it is now possible to "import" JHeadstart service definitions from other projects that are packaged as ADF Library so you can reference JHeadstart groups in other projects in the JHeadstart Application Definition editor of the project that contains the ADF libraries with JHeadstart-generated content. See section 2.4 "Packaging JHeadstart-Generated ViewController Project as ADF Library" in the JHeadstart Developer's Guide for more information.
  • Use of ADFLogger: The JHeadstart runtime classes now use the ADFLogger instead of Log4j. The ADFLogger nicely integrates with WebLogic, allowing you to configure log levels dynamically at runtime and to monitor log messages from specific threads. To see all JHeadstart debug messages during development, go to the WebLogic log window in JDeveloper, click on the "Actions" dropdown list and choose "Configure Oracle Diagnostic Logging". Now add a persistent logger with the name "oracle.jheadstart" and log level "INFO". You can do this while WebLogic is already running.

For a complete list of all existing features, use this link.

Categories: Development

Maintaining One Code Base with Possibly Conflicting Custom Features

Kenneth Downs - Fri, 2011-01-21 21:21

Today's essay deals with the tricky issue of custom features for individual customers who are running instances of your software.

The question comes by way of a regular reader who prefers to remain anonymous, but asks this:

... I work on a large (to me, anyway) application that serves as a client database, ticket system, time-tracking, billing, asset-tracking system. We have some customers using their own instances of the software. Often, those customers want additional fields put in different places (e.g., a priority column on tickets). This results in having multiple branches to account for versions with slight changes in code and in the database. This makes things painful and time-consuming in the long run: applying commits from master to the other branches requires testing on every branch; same with database migrate scripts, which frequently have to be modified.

Is there an easier way? I have thought about the possibility of making things "optional" in the database, such as a column on a table, and hiding its existence in the code when it's not "enabled." This would have the benefit of a single code set and a single database schema, but I think it might lead to more dependence on the code and less on the database -- for example, it might mean constraints and keys couldn't be used in certain cases.

Restating the Question

Our reader asks, is it better to have different code branches or to try to keep a lot of potentially conflicting and optional items mixed in together?

Well, the wisdom of the ages is to maintain a single code branch, including the database schema. I tried exactly once, very early in my career, to fork my own code, and gave up almost within days. When I went to work in larger shops I always arrived in a situation where the decision had already been made to maintain a single branch. Funny thing, since most programmers cannot agree on the color of the sky when they're staring out the window, this is the only decision I have ever seen maintained with absolute unanimity no matter how many difficulties came out of it.

There is some simple arithmetic as to why this is so. If you have a single feature for a customer that is giving you a headache, and you fork the code, you now have to update both code branches for every change plus regression test them both, including the feature that caused the headache. But if you keep them combined you only have the one headache feature to deal with. That's why people keep them together.

Two Steps

Making custom features work smoothly is a two-step process. The first step is arguably more difficult than the second, but the second step is absolutely crucial if you have business logic tied to the feature.

Most programmers when confronted with this situation will attempt to make various features optional. I consider this to be a mistake because it complicates code, especially when we get to step 2. By far the better solution is to make features ignorable by anybody who does not want them.

The wonderful thing about ignorable features is they tend to eliminate the problems with apparently conflicting features. If you can rig the features so anybody can use either or both, you've eliminated the conflict.

Step 1: The Schema

As mentioned above, the first step is arguably more difficult than the second, because it may involve casting requirements differently than they are presented.

For example, our reader asks about a priority column on tickets, asked for by only one customer. This may seem like a conflict because nobody else wants it, but we can dissolve the conflict when we make the feature ignorable. The first step involves doing this at the database or schema level.

But first we should mention that the UI is easy: we might have a control panel where we can make fields invisible. Or maybe our users just ignore the fields they are not interested in. Either way works.

The problem is in the database. If the values for priority come from a lookup table, which they should, then we have a foreign key, and we have a problem if we try to ignore it:

  • We can allow nulls in the foreign key, which is fine for the people ignoring it, but
  • This means the people who require it can end up with tickets that have no priority because it does not prevent a user from leaving it blank.

A simple answer here is to pre-populate your priority lookup table with a value of "Not applicable", perhaps with a hardcoded id of zero. Then we set the default value for the TICKET.priority to zero. This means people can safely ignore it because it will always be valid.

Then, for the customer who paid for it, we just go in after the install and delete the default entry. It's a one-time operation, not even worth writing a script for, and it forces them to create a set of priorities before using the system. Further, by leaving the default of zero in there, it forces valid answers because users will be dinged with an FK violation if they do not provide a real priority.

For this particular example, there is no step 2, because the problem is completely solved at the schema level. To see how to work with step 2, I will make up an example of my own.

Step 2: Unconditional Business Logic

To illustrate step 2, I'm going to make up an example that is not really appropriate to our reader's question, frankly because I cannot think of one for that situation.

Let's say we have an eCommerce system, and one of our sites wants customer-level discounts based on customer groups, while another wants discounts based on volume of order -- the more you buy, the deeper the discount. At this point most programmers start shouting in the meeting, "We'll make them optional!" Big mistake, because it makes for lots of work. Instead we will make them ignorable.

Step 1 is to make ignorable features in the schema. Our common code base contains a table of customer groups with a discount percent, and in the customers table we make a nullable foreign key to the customer groups table. If anybody wants to use it, great, and if they want to ignore it, that's also fine. We do the same thing with a table of discount amounts: we make an empty table that lists threshold amounts and discount percents. If anybody wants to use it they fill it in, everybody else leaves it blank.

Now for the business logic, the calculations of these two discounts. The crucial idea here is not to make up conditional logic that tries to figure out whether or not to apply the discounts. It is vastly easier to always apply both discounts, with the discounts coming out zero for those users who have ignored the features.

So for the customer discount, if the customer's entry for customer group is null, it will not match any discount, and you treat this as zero. Same for the sale-amount discount: the lookup to see which discount tier the order qualifies for finds nothing because the table is empty, so it too comes out as zero.
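
Sketched in SQL with hypothetical names, "always apply both" might look like this; the outer join and COALESCE make an ignored feature come out as zero instead of becoming a special case:

SELECT o.orderid,
       o.amount,
       -- zero when the customer has no group, or groups are unused
       COALESCE(cg.discount_pct, 0) AS group_discount,
       -- zero when the volume table is empty; assumes discount_pct
       -- rises with threshold
       COALESCE((SELECT MAX(v.discount_pct)
                   FROM volume_discounts v
                  WHERE v.threshold <= o.amount), 0) AS volume_discount
  FROM orders o
  JOIN customers c             ON c.customerid = o.customerid
  LEFT JOIN customer_groups cg ON cg.groupid   = c.groupid;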

So the real trick at the business logic level is not to figure out which feature to use, which leads to complicated conditionals that always end up conflicting with each other, but to always use all features and code them so they have no effect when they are being ignored.

Conclusion

Once upon a time almost everybody coding for a living dealt with these situations -- we all wrote code that was going to ship off to live at our customer's site. Nowadays this is less common, but for those of us still dealing with it, it is a big deal.

The wisdom of the ages is to maintain a common code base. The method suggested here takes that idea to its most complete implementation, a totally common code base in which all features are active all of the time, with no conditionals or optional features (except perhaps in the UI and on printed reports), and with schema and business logic set up so that features that are being ignored simply have no effect on the user.

Categories: Development

Can You Really Create A Business Logic Layer?

Kenneth Downs - Thu, 2011-01-06 19:21

The past three posts of this little mini-series have gone from a Working definition of business logic to a Rigorous definition of business logic and on to some theorems about business logic. To wrap things up, I'd like to ask the question, is it possible to isolate business logic into a single tier?

Related Reading

There are plenty of opinions out there. For a pretty thorough explanation of how to put everything into the DBMS, check out Toon Koppelaars' description. Mr. Koppelaars has some good material, but you do need to read through his earlier posts to get the definitions of some of his terms. You can also follow his links through to some high quality discussions elsewhere.

Contrasting Mr. Koppelaars' opinion is a piece which does not have nearly the same impact, IMHO, because in Dude, Where's My Business Logic? we get some solid history mixed with normative assertions based on either anecdote or nothing at all. I'm a big believer in anecdote, but when I read a sentence that says, "The database should not have any knowledge of what a customer is, but only of the elements that are used to store a customer." then I figure I'm dealing with somebody who needs to see a bit more of the world.

Starting At the Top: The User Interface

First, let's review that our rigorous definition of business logic includes schema (types and constraints), derived values (timestamps, userstamps, calculations, histories), non-algorithmic compound operations (like batch billing) and algorithmic compound operations, those that require looping in their code. This encompasses everything we might do, from the simplest passive things like a constraint that prevents discounts from being over 100% to the most complex hours-long business process, with everything in between accounted for.

Now I want to start out by using that definition to see a little bit about what is going on in the User Interface. This is not the "presentation layer", as it is often called, but the interaction layer and even the command layer.

Consider an admin interface to a database, where the user is entering or modifying prices for the price list. Now, if the user could enter "Kim Stanley Robinson" as the price, that would be kind of silly, so of course the numeric inputs only allow numeric values. Same goes for dates.

So the foundation of usability for a UI is at very least knowledge of and enforcement of types in the UI layer. Don't be scared off that I am claiming the UI is enforcing anything, we'll get to that a little lower down.

Now consider the case where the user is typing in a discount rate for this or that, and a discount is not allowed to be over 100%. The UI really ought to enforce this, otherwise the user's time is wasted when she enters an invalid value, finishes the entire form, and only then gets an error when she tries to save. In the database world we call this a constraint, so the UI needs to know about constraints to better serve the user.

Now this same user is typing a form where there is an entry for US State. The allowed values are in a table in the database, and it would be nice if the user had a drop-down list, and one that was auto-suggesting as the user typed. Of course the easiest way to do something like this is just make sure the UI form "knows" that this field is a foreign key to the STATES table, so it can generate the list using some generic library function that grabs a couple of columns out of the STATES table. Of course, this kind of lookup thing will be happening all over the place, so it would work well if the UI knew about and enforced foreign keys during entry.

And I suppose the user might at some point be entering a purchase order. The purchase order is automatically stamped with today's date. The user might see it, but not be able to change it, so now our UI knows about system-generated values.

Is this user allowed to delete a customer? If not, the button should either be grayed out or not be there at all. The UI needs to know about and enforce some security.

More About Knowing and Enforcing

So in fact the UI layer not only knows the logic but is enforcing it. It is enforcing it for two reasons, to improve the user experience with date pickers, lists, and so forth, and to prevent the user from entering invalid data and wasting round trips.

And yet, because we cannot trust what comes in to the web server over the wire, we have to enforce every single rule a second time when we commit the data.

You usually do not hear people say that the UI enforces business logic. They usually say the opposite. But the UI does enforce business logic. The problem is, everything the UI enforces has to be enforced again. That may be why we often overlook the fact that it is doing so.

The Application and The Database

Now let's go through the stuff the UI is enforcing, and see what happens in the application and the database.

With respect to type, a strongly typed language will throw an error if the type is wrong, and a weakly typed language is wise to put in a type check anyway. Then the DBMS will only allow correctly typed values, so, counting the UI, type is enforced three times.

With respect to lookups like US state, in a SQL database we always let the server do that with a foreign key, if we know what is good for us. That makes double enforcement for lookups.

So we can see where this is going. As we look at constraints and security and anything else that must be right, we find it will be enforced at least twice, and as much as three times.

You Cannot Isolate What Must be Duplicated

By defining First Order Business Logic, the simplest foundation layer, as including things like types and keys and constraints, we find that the enforcement of this First Order stuff is done 2 or 3 times, but never only once.

This more or less leaves in tatters the idea of a "Business Logic Layer" that is in any way capable of handling all business logic all by its lonesome. The UI layer is completely useless unless it is also enforcing as much logic as possible, and even when we leave the Database Server as the final enforcer of First Order Business Logic (types, constraints, keys), it is still often good engineering to do some checks to prevent expensive wasted trips to the server.

So we are wasting time if we sit around trying to figure out how to get the Business Logic "where it belongs", because it "belongs" in at least two places and sometimes three. Herding the cats into a single pen is a fool's errand, it is at once unnecessary, undesirable, and impossible.

Update: Regular reader Dean Thrasher of Infovark summarizes most of what I'm saying here using an apt industry standard term: Business Logic is a cross-cutting concern.

Some Real Questions

Only when we have smashed the concept that Business Logic can exist in serene isolation in its own layer can we start to ask the questions that would actually speed up development and make for better engineering.

Freed of the illusion of a separate layer, when we look at the higher Third and Fourth Order Business Logic, which always require coding, we can decide where they go based either on engineering or the availability of qualified programmers in particular technologies, but we should not make the mistake of believing they are going where they go because the gods would have it so.

But the real pressing question, if we are seeking to create efficient manageable large systems, is this: how do we distribute the same business logic into 2 or 3 (or more) different places so that it is enforced consistently everywhere? Because a smaller code base is always easier to manage than a large one, and because configuration is always easier than coding, this comes down to meta-data, or if you prefer, a data dictionary. That's the trick that always worked for me.
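
One possible shape for such a dictionary, sketched in SQL with hypothetical names; the idea is that the UI generator, the application-level validator, and the DDL generator all read the same rows:

CREATE TABLE data_dictionary (
    table_name      varchar(30)  NOT NULL,
    column_name     varchar(30)  NOT NULL,
    type            varchar(20)  NOT NULL,  -- drives UI widgets and DDL alike
    constraint_expr varchar(200),           -- e.g. 'discount <= 1.0'
    fk_table        varchar(30),            -- e.g. 'states', for drop-downs
    PRIMARY KEY (table_name, column_name)
);

Define the rule once here, and each tier generates its own enforcement from it, so the duplication across tiers becomes mechanical rather than hand-coded.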

Is This Only A Matter of Definitions?

Anybody who disagrees with the thesis here has only to say, "Ken, those things are not business logic just because you wrote a blog that says they are. In my world business logic is about code baby!" Well sure, have it your way. After all, the nice thing about definitions is that we can all pick the ones we like.

But these definitions, the theorems I derived on Tuesday, and the multiple-enforcement thesis presented here today should make sense to anybody struggling with where to put the business logic. That struggle and its frustrations come from the mistake of imposing abstract conceptual responsibilities on each tier instead of using the tiers as each is able to get the job done. Databases are wonderful for type, entity integrity (uniqueness), referential integrity, ACID compliance, and many other things. Use them! Code is often better when the problem at hand cannot be solved with a combination of keys and constraints (Fourth Order Business Logic), but even that code can be put into the DB or in the application.

So beware of paradigms that assign responsibility without compromise to this or that tier. It cannot be done. Don't be afraid to use code for doing things that require structured imperative step-wise operations, and don't be afraid to use the database for what it is good for, and leave the arguments about "where everything belongs" to those with too much time on their hands.

Categories: Development

Theorems Regarding Business Logic

Kenneth Downs - Tue, 2011-01-04 16:33

In yesterday's Rigorous Definition of Business Logic, we saw that business logic can be defined in four orders:

  • First Order Business Logic is entities and attributes that users (or other agents) can save, and the security rules that govern read/write access to the entities and attributes.
  • Second Order Business Logic is entities and attributes derived by rules and formulas, such as calculated values and history tables.
  • Third Order Business Logic are non-algorithmic compound operations (no structure or looping is required in expressing the solution), such as a month-end batch billing or, for the old-timers out there, a year-end general ledger roll-up.
  • Fourth Order Business Logic are algorithmic compound operations. These occur when the action of one step affects the input to future steps. One example is ERP Allocation.

A Case Study

The best way to see if these have any value is to cook up some theorems and examine them with an example. We will take a vastly simplified time billing system, in which employees enter time which is billed once/month to customers. We'll work out some details a little below.

Theorem 1: 1st and 2nd Order, Analysis

The first theorem we can derive from these definitions is that we should look at First and Second Order Schemas together during analysis. This is because:

  • First Order Business Logic is about entities and attributes
  • Second Order Business Logic is about entities and attributes
  • Second Order Business Logic is about values generated from First Order values and, possibly, other Second Order values
  • Therefore, Second Order values are always expressed ultimately in terms of First Order values
  • Therefore, they should be analyzed together

To give the devil his due, ORM does this easily, because it ignores so much database theory (paying a large price in performance for doing so) and considers an entire row, with its first order and second order values together, as being part of one class. This is likely the foundation for the claims of ORM users that they experience productivity gains when using ORM. Since I usually do nothing but bash ORM, I hope this statement will be taken as utterly sincere.

Going the other way, database theorists and evangelists who adhere to full normalization can hobble an analysis effort by refusing to consider 2nd Order values because they denormalize the database; sometimes the worst of my own crowd will prevent analysis by trying to keep these out of the conversation. So, assuming I have not pissed off my own friends, let's keep going.

So let's look at our case study of the time billing system. By theorem 1, our analysis of entities and attributes should include both 1st and 2nd order schema, something like this:

 
 INVOICES
-----------
 invoiceid      2nd Order, a generated unique value
 date           2nd Order if always takes date of batch run
 customer       2nd Order, a consequence of this being an
                           aggregation of INVOICE_LINES
 total_amount   2nd Order, a sum from INVOICE_LINES
               
 INVOICE_LINES
---------------
 invoiceid      2nd order, copied from INVOICES
 customer         +-  All three are 2nd order, a consequence
 employee         |   of this being an aggregation of
 activity         +-  employee time entries
 rate           2nd order, taken from ACTIVITIES table
                           (not depicted)
 hours          2nd order, summed from time entries
 amount         2nd order, rate * hours
 
 TIME_ENTRIES
--------------
 employeeid     2nd order, assuming system forces this
                    value to be the employee making
                    the entry
 date           1st order, entered by employee
 customer       1st order, entered by employee
 activity       1st order, entered by employee
 hours          1st order, entered by employee

Now, considering how much of that is 2nd order, which is almost all of it, the theorem is not only supported by the definition, but ought to line up squarely with our experience. Who would want to try to analyze this and claim that all the 2nd order stuff should not be there?

Theorem 2: 1st and 2nd Order, Implementation

The second theorem we can derive from these definitions is that First and Second Order Business logic require separate implementation techniques. This is because:

  • First Order Business Logic is about user-supplied values
  • Second Order Business Logic is about generated values
  • Therefore, unlike things cannot be implemented with like tools.

Going back to the time entry example, let's zoom in on the lowest table, the TIME_ENTRIES. The employee entering her time must supply customer, date, activity, and hours, while the system forces the value of employeeid. This means that customer and activity must be validated in their respective tables, and hours must be checked for something like <= 24. But for employeeid the system provides the value out of its context. So the two kinds of values are processed in very unlike ways. It seems reasonable that our code would be simpler if it did not try to force both kinds of values down the same validation pipe.
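
A PostgreSQL-flavored sketch of that split, with hypothetical names; the app.employeeid session setting is an assumed convention for "the employee making the entry":

CREATE TABLE time_entries (
    -- 2nd order: forced from session context, never taken from user input
    employeeid  int  NOT NULL
                DEFAULT current_setting('app.employeeid')::int
                REFERENCES employees (employeeid),
    -- 1st order: user-supplied, validated by keys and constraints
    date        date NOT NULL,
    customer    int  NOT NULL REFERENCES customers (customerid),
    activity    int  NOT NULL REFERENCES activities (activityid),
    hours       numeric(4,2) NOT NULL CHECK (hours > 0 AND hours <= 24)
);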

Theorem 3: 2nd and 3rd Order, Conservation of Action

This theorem states that the sum of Second and Third Order Business Logic is fixed:

  • Second Order Business Logic is about generating entities and attributes by rules or formulas
  • Third Order Business Logic is coded compound creation of entities and attributes
  • Given that a particular set of requirements resolves to a finite set of actions that generate entities and values, then
  • The sum of Second Order and Third Order Business Logic is fixed.

In plain English, this means that the more Business Logic you can implement through 2nd Order declarative rules and formulas, the fewer processing routines you have to code. Or, if you prefer, the more processes you code, the fewer declarative rules about entities and attributes you will have.

This theorem may be hard to compare to experience for verification, because most of us are so used to thinking in terms of batch billing as a process that we cannot imagine it being implemented any other way: how exactly am I supposed to implement batch billing declaratively?

Let's go back to the schema above, where we can realize upon examination that the entirety of the batch billing "process" has been detailed in a 2nd Order Schema. If we could somehow add these facts to our CREATE TABLE commands the way we add keys, types, and constraints, batch billing would occur without the batch part.

Consider this. Imagine that a user enters a TIME_ENTRY. The system checks for a matching EMPLOYEE/CUSTOMER/ACTIVITY row in INVOICE_LINES, and when it finds the row it updates the totals. But if it does not find one then it creates one! Creation of the INVOICE_LINES record causes the system to check for the existence of an invoice for that customer, and when it does not find one it creates it and initializes the totals. Subsequent time entries not only update the INVOICE_LINES rows but the INVOICE rows as well. If this were happening, there would be no batch billing at the end of the month because the invoices would all be sitting there ready to go when the last time entry was made.
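
To make the mechanics concrete, here is a rough PostgreSQL-flavored trigger sketch of that cascade. It is only an illustration, not the author's Triangulum implementation; the names are hypothetical, concurrency is ignored, and one open invoice per customer per billing period is assumed:

CREATE OR REPLACE FUNCTION roll_up_time_entry() RETURNS trigger AS $$
DECLARE
    v_rate numeric;
BEGIN
    SELECT rate INTO v_rate FROM activities WHERE activityid = NEW.activity;

    -- make sure the invoice header exists
    INSERT INTO invoices (customer, total_amount)
    SELECT NEW.customer, 0
     WHERE NOT EXISTS (SELECT 1 FROM invoices WHERE customer = NEW.customer);

    -- update the matching detail line, or create it on first touch
    UPDATE invoice_lines
       SET hours  = hours + NEW.hours,
           amount = (hours + NEW.hours) * v_rate
     WHERE customer = NEW.customer
       AND employee = NEW.employeeid
       AND activity = NEW.activity;
    IF NOT FOUND THEN
        INSERT INTO invoice_lines
               (customer, employee, activity, rate, hours, amount)
        VALUES (NEW.customer, NEW.employeeid, NEW.activity,
                v_rate, NEW.hours, NEW.hours * v_rate);
    END IF;

    -- keep the header total current
    UPDATE invoices
       SET total_amount = total_amount + NEW.hours * v_rate
     WHERE customer = NEW.customer;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER time_entry_rollup
AFTER INSERT ON time_entries
FOR EACH ROW EXECUTE FUNCTION roll_up_time_entry();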

By the way, I coded something that does this in a pretty straight-forward way a few years ago, meaning you could skip the batch billing process and add a few details to a schema that would cause the database to behave exactly as described above. Although the format for specifying these extra features was easy enough (so it seemed to me as the author), the conceptual shift of thinking it required of people was far larger than I initially and naively imagined. Nevertheless, I toil forward, and that is the core idea behind my Triangulum project.

Observation: There Will Be Code

This is not so much a theorem as an observation. This observation is that if your application requires Fourth Order Business Logic then somebody is going to code something somewhere.

An anonymous reader pointed out in the comments to Part 2 that Oracle's MODEL clause may work in some cases. I would assume so, but I would also assume that reality can create complicated Fourth Order cases faster than SQL can evolve. Maybe.

But anyway, the real observation here is that no modern language, either app level or SQL flavor, can express an algorithm declaratively. In other words, no combination of keys, constraints, calculations and derivations, and no known combination of advanced SQL functions and clauses will express an ERP Allocation routine or a Magazine Regulation routine. So you have to code it. This may not always be true, but I think it is true now.

This is in contrast to the example given in the previous section about the fixed total of 2nd and 3rd Order Logic. Unlike that example, you cannot provide enough 2nd order wizardry to eliminate fourth order. (well ok maybe you can, but I haven't figured it out yet myself and have never heard that anybody else is even trying. The trick would be to have a table that you truncate and insert a single row into, a trigger would fire that would know how to generate the next INSERT, generating a cascade. Of course, since this happens in a transaction, if you end up generating 100,000 inserts this might be a bad idea ha ha.)

Theorem 5: Second Order Tools Reduce Code

This theorem rests on the acceptance of an observation, that using meta-data repositories, or data dictionaries, is easier than coding. If that does not hold true, then this theorem does not hold true. But if that observation (my own observation, admittedly) does hold true, then:

  • By Theorem 3, the sum of 2nd and 3rd order logic is fixed
  • By observation, using meta-data that manages schema requires less time than coding,
  • By Theorem 1, 2nd order is analyzed and specified as schema
  • Then it is desirable to specify as much business logic as possible as 2nd order schema, reducing and possibly eliminating manual coding of Third Order programs.

Again we go back to the batch billing example. Is it possible to convert it all to 2nd Order as described above? Well, yes it is, because I've done it. The trick is an extremely counter-intuitive modification to a foreign key that causes a failure to actually generate the parent row that would let the key succeed. To find out more about this, check out Triangulum (not ready for prime time as of this writing).

Conclusions

The major conclusion in all of this is that analysis and design should begin with First and Second Order Business Logic, which means working out schemas, both the user-supplied values and the system-supplied values.

When that is done, what we often call "processes" are layered on top of this.

Tomorrow we will see part 4 of 4, examining the business logic layer, asking, is it possible to create a pure business logic layer that gathers all business logic unto itself?

Categories: Development

Oracle enhanced adapter 1.3.2 is released

Raimonds Simanovskis - Tue, 2011-01-04 16:00

I just released Oracle enhanced adapter version 1.3.2 with latest bug fixes and enhancements.

Bug fixes and improvements

Main fixes and improvements are the following:

  • Previous version 1.3.1 checked if the environment variable TNS_NAME was set, and only then used the provided database connection parameter (in database.yml) as a TNS connection alias; otherwise it defaulted to a connection to localhost with the provided database name. This was causing issues in many setups.
    Therefore this is now simplified: if you provide only the database parameter in database.yml, it will by default be used as a TNS connection alias or TNS connection string.
  • Numeric username and/or password in database.yml will be automatically converted to string (previously you needed to quote them using "...").
  • Database connection pool and JNDI connections are now better supported for JRuby on Tomcat and JBoss application servers.
  • NLS connection parameters are supported via environment variables or in database.yml. For example, if you need NLS_DATE_FORMAT in your database session to be DD-MON-YYYY, then either specify nls_date_format: DD-MON-YYYY in database.yml for the particular database connection, or set ENV['NLS_DATE_FORMAT'] = 'DD-MON-YYYY' in e.g. config/initializers/oracle.rb. You can see the list of all NLS parameters in the source code.
    It might be necessary to specify these NLS session parameters only if you work with some existing legacy database which has, for example, stored procedures that require particular NLS settings. If this is a new database just for Rails purposes, there is no need to change any settings.
  • If you have defined foreign key constraints, they are now correctly dumped in db/schema.rb after all table definitions. Previously they were dumped right after the corresponding table, which sometimes meant the schema could not be recreated from the schema dump, because it tried to load a constraint referencing a table that had not yet been defined.
  • If you are using the NCHAR and NVARCHAR2 data types, NCHAR and NVARCHAR2 values are now correctly quoted with N'...' in SQL statements.

Upcoming changes in Rails 3.1

Meanwhile, Oracle enhanced adapter has been updated to pass all ActiveRecord unit tests in the Rails development master branch, and also updated according to Arel changes. The Arel library is responsible for all SQL statement generation in Rails 3.0.

Rails 3.0.3 uses Arel version 2.0, which was a full rewrite of Arel 1.0 (used in the initial Rails 3.0 version). As a result of this rewrite it is much faster, and ActiveRecord in Rails 3.0.3 is now already a little bit faster than ActiveRecord in Rails 2.3.

There are several improvements in the Rails master branch, planned for the Rails 3.1 version, which are already supported by Oracle enhanced adapter. One improvement is that ActiveRecord will support prepared statement caching (initially for standard simple queries like find by primary key), which will reduce SQL statement parsing time and memory usage (and probably Oracle DBAs will complain less about Rails dynamic SQL generation :)). The other improvement is that ActiveRecord will correctly load included associations with more than 1000 records (which currently fails with an ORA-01795 error).

But I will write more about these improvements sometime later, when Rails 3.1 is released :)

Install

As always you can install Oracle enhanced adapter on any Ruby platform (Ruby 1.8.7 or Ruby 1.9.2 or JRuby 1.5) with

gem install activerecord-oracle_enhanced-adapter

If you have any questions please use the discussion group or report issues at GitHub or post comments here. And the best way to contribute is to fix some issue or create some enhancement and send me a pull request at GitHub.

Categories: Development

Business Logic: From Working Definition to Rigorous Definition

Kenneth Downs - Sun, 2011-01-02 12:05

This is part 2 of a 4 part mini-series that began before the holidays with A Working Definition of Business Logic. Today we proceed to a rigorous definition, tomorrow we will see some theorems, and the series will wrap up with a post on the "business layer."

In the first post, the working definition said that business logic includes at least:

  • The Schema
  • Calculations
  • Processes

None of these was very rigorously defined, kind of a "I'll know it when I see it" type of thing, and we did not talk at all about security. Now the task becomes tightening this up into a rigorous definition.

Similar Reading

Toon Koppelaars has some excellent material along these same lines, and a good place to start is his Helsinki Declaration (IT Version). The articles have a different focus than this series, so they make great contrasting reading. I consider my time spent reading through it very well spent.

Definitions, Proofs, and Experience

What I propose below is a definition in four parts. As definitions, they are not supposed to prove anything, but they are definitely supposed to ring true to the experience of any developer who has created or worked on a non-trivial business application. This effort would be a success if we reach some consensus that "at least it's all in there", even if we go on to argue bitterly about which components should be included in which layers.

Also, while I claim the definitions below are rigorous, they are not yet formal. My instinct is that formal definitions can be developed using First Order Logic, which would allow the theorems we will see tomorrow to move from "yeah that sounds about right" to being formally provable.

As for their practical benefit, inasmuch as "the truth shall make you free", we ought to be able to improve our architectures if we can settle at very least what we are talking about when we use the vague term "business logic."

The Whole Picture

What we commonly call "business logic", by which we vaguely mean, "That stuff I have to code up", can in fact be rigorously defined as having four parts, which I believe are best termed orders, as there is a definite precedence to their discovery, analysis and implementation.

  • First Order: Schema
  • Second Order: Derivations
  • Third Order: Non-algorithmic compound operations
  • Fourth Order: Algorithmic compound operations

Now we examine each order in detail.

A Word About Schema and NoSQL

Even "schema-less" databases have a schema, they simply do not enforce it in the database server. Consider: an eCommerce site using MongoDB is not going to be tracking the local zoo's animal feeding schedule, because that is out of scope. No, the code is limited to dealing with orders, order lines, customers, items and stuff like that.

It is in the very act of expressing scope as "the data values we will handle" that a schema is developed. This holds true regardless of whether the datastore will be a filesystem, an RDBMS, a new NoSQL database, or anything else.

Because all applications have a schema, whether the database server enforces it or whether the application enforces it, we need a vocabulary to discuss the schema. Here we have an embarrassment of choices: we can talk about entities and attributes, classes and properties, documents and values, or columns and tables. The choice of "entities and attributes" is likely best because it is as close as possible to an implementation-agnostic language.

First Order Business Logic: Schema

We can define schema, including security, as:

that body of entities and their attributes whose relationships and values will be managed by the application stack, including the authorization of roles to read or write to entities and properties.

Schema in this definition does not include derived values of any kind or the processes that may operate on the schema values, those are higher order of business logic. This means that the schema actually defines the entire body of values that the application will accept from outside sources (users and other programs) and commit to the datastore. Restating again into even more practical terms, the schema is the stuff users can save themselves.

With all of that said, let's enumerate the properties of a schema.

Type is required for every attribute.

Constraints are limits to the values allowed for an attribute beyond its type. We may have a discount percent that may not exceed 1.0 or 100%.

Entity Integrity is usually thought of in terms of primary keys and the vague statement "you can't have duplicates." We cannot have a list of US States where "NY" is listed 4 times.

Referential Integrity means that when one entity links or refers to another entity, it must always refer to an existing entity. We cannot have some script kiddie flooding our site with sales of items "EAT_ME" and "F***_YOU", because those are not valid items.

The general term 'validation' is not included because any particular validation rule is a combination of any or all of type, constraints, and integrity rules.
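
When the datastore is a SQL server, all four properties map directly onto DDL. A minimal sketch, with hypothetical names, using the examples above:

CREATE TABLE states (
    state  char(2) PRIMARY KEY,            -- entity integrity: 'NY' only once
    name   varchar(40) NOT NULL            -- type enforced on every attribute
);

CREATE TABLE orders (
    orderid     int PRIMARY KEY,
    ship_state  char(2) NOT NULL
                REFERENCES states (state), -- referential integrity
    discount    numeric(4,3) NOT NULL
                CHECK (discount <= 1.0)    -- constraint: never over 100%
);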

Second Order Business Logic: Derived Values

When we speak of derived values, we usually mean calculated values, but some derivations are not arithmetic, so the more general term "derived" is better. Derivations are:

A complete entity or an attribute of an entity generated from other entities or attributes according to a formula or rule.

The definition is sufficiently general that a "formula or rule" can include conditional logic.

Simple arithmetic derived values include things like calculating price * qty, or summing an order total.

Simple non-arithmetic derivations include things like fetching the price of an item to use on an order line. The price in the order is defined as being a copy of the item's price at the time of purchase.

An example of a complete entity being derived is a history table that tracks changes in some other table. This can also be implemented in NoSQL as a set of documents tracking the changes to some original document.
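For illustration, both flavors can be sketched in PostgreSQL-flavored SQL (names hypothetical), keeping in mind that today's common tools cover only the simplest cases: a derived attribute that is never materialized, expressed as a view, and a derived entity materialized by a trigger:

-- Non-materialized derivation: amount exists only as a formula
CREATE VIEW order_lines_v AS
SELECT orderid, sku, price, qty,
       price * qty AS amount
  FROM order_lines;

-- Materialized derivation: a history row generated on every price change
CREATE OR REPLACE FUNCTION track_price() RETURNS trigger AS $$
BEGIN
    INSERT INTO item_history (sku, old_price, new_price, changed_at)
    VALUES (OLD.sku, OLD.price, NEW.price, now());
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_history
AFTER UPDATE OF price ON items
FOR EACH ROW EXECUTE FUNCTION track_price();
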

Security also applies to derived values, but only in terms of who can see them. Security is not an issue for writing these values because, by definition, they are generated from formulas and rules, so no outside user can ever attempt to explicitly specify the value of a derived entity or attribute.

One final point about Second Order Business Logic is that it can be expressed declaratively, if we have the tools, which we do not, at least not in common use. I wrote one myself some years ago and am re-releasing it as Triangulum, but that is a post for another day.

Sorting out First and Second Order

The definitions of First and Second Order Business Logic have the advantage of being agnostic to what kind of datastore you are using, and being agnostic to whether or not the derived values are materialized. (In relational terms, derivations are almost always denormalizing if materialized, so in a fully normalized database they will not be there, and you have to go through the application to get them.)

Nevertheless, these two definitions can right off bring some confusion to the term "schema." Example: a history table is absolutely in a database schema, but I have called First Order Business Logic "schema" and Second Order Business Logic is, well, something else. The best solution here is to simply use the terms First Order Schema and Second Order Schema. An order_lines table is First Order schema, and the table holding its history is Second Order Schema.

The now ubiquitous auto-incremented surrogate primary keys pose another stumbling block. Because they are used so often (and so often because of seriously faulty reasoning, see A Sane Approach To Choosing Primary Keys) they would automatically be considered schema -- one of the very basic values of a sales order, check, etc. But they are system-generated so they must be Second Order, no? Isn't the orderid a very basic part of the schema and therefore First Order? No. In fact, by these definitions, very little if any of an order header is First Order, the tiny fragments that are first order might be the shipping address, the user's choice of shipping method, and payment details provided by the user. The other information that is system-generated, like Date, OrderId, and order total are all Second Order.

Third Order Business Logic

Before defining Third Order Business Logic I would like to offer a simple example: Batch Billing. A consulting company bills by the hour. Employees enter time tickets throughout the day. At the end of the month the billing agent runs a program that, in SQL terms:

  • Inserts a row into INVOICES for each customer with any time entries
  • Inserts a row into INVOICE_LINES that aggregates the time for each employee/customer combination.

This example ought to make clear what I mean by defining Third Order Business Logic as:

A non-algorithmic compound operation.

The "non-algorithmic" part comes from the fact that none of the individual documents, an INVOICE row and its INVOICE_LINES, is dependent on any other. There is no case in which the invoice for one customer will influence the value of the invoice for another. You do not need an algorithm to do the job, just one or more steps that may have to go in a certain order.

Put another way, it is a one-pass set-oriented operation. The fact that it must be executed in two steps is an artifact of how database servers deal with referential integrity, which is that you need the headers before you can put in the detail. In fact, when using a NoSQL database, it may be possible to insert the complete set of documents in one command, since the lines can be nested directly into the invoices.

Put yet a third way, in more practical terms, there is no conditional or looping logic required to specify the operation. This does not mean there will be no looping logic in the final implementation, because performance concerns and locking concerns may cause it to be implemented with 'chunking' or other strategies, but the important point is that the specification does not include loops or step-wise operations because the individual invoices are all functionally independent of each other.
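
To make "one-pass set-oriented" concrete, here is a sketch of the two steps in plain SQL; the invoiced flag and all names are hypothetical bookkeeping details, and wiring up invoice ids is omitted:

-- Step 1: headers first, to satisfy referential integrity
INSERT INTO invoices (customer, date)
SELECT DISTINCT customer, CURRENT_DATE
  FROM time_entries
 WHERE invoiced = false;

-- Step 2: one aggregated line per employee/customer combination
INSERT INTO invoice_lines (customer, employee, activity, hours)
SELECT customer, employeeid, activity, SUM(hours)
  FROM time_entries
 WHERE invoiced = false
 GROUP BY customer, employeeid, activity;

There are no loops and no conditionals in the specification, only an ordering of steps, which is exactly the Third Order claim.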

I do not want to get side-tracked here, but I have had a working hypothesis in my mind for almost 7 years that Third Order Business Logic, even before I called it that, is an artifact, which appears necessary because of the limitations of our tools. In future posts I would like to show how a fully developed understanding and implementation of Second Order Business Logic can dissolve many cases of Third Order.

Fourth Order Business Logic

We now come to the upper bound of complexity for business logic, Fourth Order, which we label "algorithmic compound operations", and define a particular Fourth Order Business Logic process as:

Any operation where it is possible or certain that there will be at least two steps, X and Y, such that the result of Step X modifies the inputs available to Step Y.

In comparison to Third Order:

  • In Third Order the results are independent of one another, in Fourth Order they are not.
  • In Third Order no conditional or branching is required to express the solution, while in Fourth Order conditional, looping, or branching logic will be present in the expression of the solution.

Let's look at the example of ERP Allocation. In the interest of brevity, I am going to skip most of the explanation of the ERP Allocation algorithm and stick to this basic review: a company has a list of sales orders (demand) and a list of purchase orders (supply). Sales orders come in through EDI, and at least once/day the purchasing department must match supply to demand to find out what they need to order. Here is an unrealistically simple example of the supply and demand they might be facing:

  *** DEMAND ***          *** SUPPLY ***

    DATE    | QTY           DATE    | QTY
------------+-----      ------------+----- 
  3/ 1/2011 |  5          3/ 1/2011 |  3
  3/15/2011 | 15          3/ 3/2011 |  6
  4/ 1/2011 | 10          3/15/2011 | 20
  4/ 3/2011 |  7   

The desired output of the ERP Allocation might look like this:

 *** DEMAND ***      *** SUPPLY ****
    DATE    | QTY |  DATE_IN   | QTY  | FINAL 
------------+-----+------------+------+-------
  3/ 1/2011 |  5  |  3/ 1/2011 |  3   |  no
                  |  3/ 3/2011 |  2   | Yes 
  3/15/2011 | 15  |  3/ 3/2011 |  4   |  no
                  |  3/15/2011 | 11   | Yes
  4/ 1/2011 | 10  |  3/15/2011 |  9   |  no
  4/ 3/2011 |  7  |    null    | null |  no

From this the purchasing agents know that the Sales Order that ships on 3/1 will be two days late, and the Sales Orders that will ship on 4/1 and 4/3 cannot be filled completely. They have to order more stuff.

Now for the killer question: Can the desired output be generated in a single SQL query? The answer is no, not even with Common Table Expressions or other recursive constructs. The reason is that each match-up of a purchase order to a sales order modifies the supply available to the next sales order. Or, to use the definition of Fourth Order Business Logic, each iteration will consume some supply and so will affect the inputs available to the next step.

We can see this most clearly if we look at some pseudo-code:

for each sales order by date {
   while sales order demand not met {
      get earliest purchase order w/qty avail > 0
         break if none
      make entry in matching table
      // This is the write operation that 
      // means we have Fourth Order Business Logic
      reduce available qty of purchase order
   }
   break if no more purchase orders
}
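
Such a routine has to be written in a procedural language. Here is a rough PostgreSQL-flavored rendering of the pseudo-code above; the table and column names are hypothetical, the date column is assumed unique per row, and concurrency is ignored:

CREATE OR REPLACE FUNCTION allocate() RETURNS void AS $$
DECLARE
    d      RECORD;
    s      RECORD;
    needed numeric;
    take   numeric;
BEGIN
    FOR d IN SELECT * FROM demand ORDER BY date LOOP
        needed := d.qty;
        FOR s IN SELECT * FROM supply
                  WHERE qty_avail > 0 ORDER BY date LOOP
            take := LEAST(needed, s.qty_avail);
            INSERT INTO allocation (demand_date, supply_date, qty)
            VALUES (d.date, s.date, take);
            -- This write is what makes the logic Fourth Order: it
            -- changes the input available to every later iteration.
            UPDATE supply SET qty_avail = qty_avail - take
             WHERE date = s.date;
            needed := needed - take;
            EXIT WHEN needed = 0;
        END LOOP;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
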
Conclusions

As stated in the beginning, it is my belief that these four orders should "ring true" with any developer who has experience with non-trivial business applications. Though we may dispute terminology and argue over edge cases, the recognition and naming of the Four Orders should be of immediate benefit during analysis, design, coding, and refactoring. They rigorously establish both the minimum and maximum bounds of complexity while also filling in the two kinds of actions we all take between those bounds. They are datamodel agnostic, and even agnostic to implementation strategies within data models (like the normalize/denormalize debate in relational).

But their true power is in providing a framework of thought for the process of synthesizing requirements into a specification and from there an implementation.

Tomorrow we will see some theorems that we can derive from these definitions.

Categories: Development
