Anthony Shorten

Oracle Blogs

Team based To Do Management

Thu, 2017-08-17 19:39

One of the interesting discussions I have with customers and partners about the To Do functionality in the Oracle Utilities Application Framework based products is team management. Most partners and customers think that the To Do functionality is limited to one role per To Do type. This is due to the fact that most examples they see in training or in demonstrations shows one role per To Do type. There is "more than meets the eye" to the functionality.

The To Do functionality can be configured in different ways to implement different allocation mechanisms. Let me discuss an alternative configuration that may appeal to some implementations.

  • Create a To Do Role for each team in your organization. These do not have to be whole parts of your organization; they can simply be groups of people with similar skills or work responsibilities. You decide the number of groups and their composition. I will use the word "team" rather than To Do Role in the rest of this article to emphasize the alternative view.
  • By using teams you might actually reduce your maintenance costs, as you will probably have fewer teams to manage than To Do types. Remember, at the moment people think you can only have one team per To Do Type.
  • Allocate people to those teams. You have full flexibility here: a person can be a member of any team you wish and, of course, can be a member of multiple teams (even overlapping ones - more about this later).
  • Allocate the teams to the To Do Types they will be working on. Now that you have teams, you can allocate multiple teams per To Do type. Remember that one of the teams should be allocated as the Default so that your algorithms, batch jobs etc. have a default to allocate to.
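The configuration steps above can be sketched as a simple data model. This is purely an illustration in Python of the relationships involved (all team, user and To Do type names here are made up; this is not product code):

```python
# Users can belong to several teams, even overlapping ones.
teams = {
    "BILLING-TEAM": {"alice", "bob"},
    "EXCEPTIONS-TEAM": {"bob", "carol"},  # bob is in two teams
}

# Each To Do type lists its teams; exactly one is flagged as the
# Default, which algorithms and batch jobs allocate to.
todo_types = {
    "TD-BILLERR": {
        "teams": ["BILLING-TEAM", "EXCEPTIONS-TEAM"],
        "default": "BILLING-TEAM",
    },
}

def workers_for(todo_type):
    """Everyone who may work entries of this To Do type."""
    cfg = todo_types[todo_type]
    return set().union(*(teams[t] for t in cfg["teams"]))

def default_team(todo_type):
    """The team that automatic allocation falls back to."""
    return todo_types[todo_type]["default"]
```

The point of the sketch is that the mapping is many-to-many in both directions (users to teams, teams to To Do types), with a single default per To Do type for automated allocation.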

Now your implementation will be using teams of people rather than using one role per To Do Type. This means you can allocate to teams (or individuals) and supervisors can manage teams.

Remember the use of a capability in the product is not restricted to what is shown in demonstrations. Think outside the box.

High Availability Designs

Mon, 2017-08-14 19:58

One of the most common tasks in any implementation of an Oracle Utilities Application Framework product is the design of a high availability environment to ensure business continuity and availability.

The Oracle Utilities Application Framework is designed to allow implementations to use a wide variety of high availability and business continuity solutions available in the market. As the product is housed in Oracle WebLogic and the Oracle Database, we can utilize the high availability features of those products.

If you are considering designing a high availability architecture here are a few guidelines:

  • Consider the Oracle Maximum Availability Architecture (MAA), which provides guidelines for designing high availability and business continuity solutions for a variety of scenarios.
  • Design for your business requirements and hardware platform. Solutions can vary from low cost solutions with minimal hardware to highly configured, complex hardware/software solutions.
  • Do not discount solutions built into your hardware platform. Redundancy and high availability features of hardware can be part of the solution that you propose for an implementation. These are typically already in place so offer a cost effective component of any solution.
  • Design for your budget. I have seen implementations where they design a complex high availability solution only to get "sticker shock" when the price is discussed. I usually temper costs of a solution against the estimated business loss from an availability issue or a business continuity issue. It is very similar to discussions around insurance you might have personally.
  • Customers of Oracle Utilities Application Framework based products have used hardware and/or software based availability and business continuity solutions. This includes hardware at the load balancing level, such as routers, to implement high availability.
  • Oracle typically recommends clustering as one of the techniques to consider in your solutions. Oracle Utilities Application Framework supports clustering for Oracle WebLogic, Oracle Coherence and Oracle Database. We support clusters within user channels (online, web services and batch) and across those channels as well.
  • Oracle typically recommends Real Application Clusters (RAC), including One Node implementations, as part of an availability solution. Oracle Utilities Application Framework supports RAC and includes support for newer implementations of that technology through features such as Oracle Notification Service (ONS).
  • One of the most common business continuity solutions customers have chosen is to use Oracle Data Guard or Oracle Active Data Guard to keep a backup database in synchronization with the prime database. Customers wanting to use the backup database for reporting tend to choose Oracle Active Data Guard as their preferred solution.
  • Batch can be clustered using Oracle Coherence (with flexibility in the architecture) and in Oracle Cloud SaaS implementations, we support batch clustering via Oracle WebLogic clustering. For customers interested in batch architecture refer to Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support.

The following references for MAA may help you design your solution:

Updated Integration Solutions Whitepaper - Augment your solutions

Wed, 2017-08-09 19:58

Whilst Oracle Utilities Application Framework is flexible and supports a wide range of solutions in the marketplace, there are some requirements that are actually best served using other Oracle technology integrated for your implementation. A whitepaper outlining a summary of the most common technology integrations has been updated to the latest release.

This whitepaper outlines the most common Oracle technology integrations that have been used by Oracle and its partners to implement complete solutions in the marketplace. It is designed to help customers and partners make judgements on the technology available and how to integrate this technology with your Oracle Utilities Application Framework based product.

The whitepaper is Integration Reference Solutions (Doc Id: 1506855.1) and is available from My Oracle Support.

Integration Architecture

The updates include the latest information as well as helpful links to other documentation to help plan and design your integrated solutions.

Updated Whats New whitepaper -

Sun, 2017-08-06 17:40

The Whats New in FW4 whitepaper has been updated for the latest service pack release. This whitepaper summarizes the major technology and functional changes implemented in the Oracle Utilities Application Framework from V2.2 until the latest service pack. It is primarily of interest to customers upgrading from those earlier versions who want to understand what has changed and what is new in the framework since that early release.

The whitepaper is only a summary of selected enhancements; it is still recommended to review the release notes of each release if you are interested in the details of everything that has changed. This whitepaper does not cover changes to any of the products that use the Oracle Utilities Application Framework; refer to the release notes of the individual products for details of new functionality.

The whitepaper is available from Whats New in FW4 (Doc Id: 1177265.1) from My Oracle Support.

Securing Your JNDI Resources for Other Groups

Thu, 2017-08-03 16:25

As with other applications, the Oracle Utilities Application Framework respects the settings within the Oracle WebLogic domain, including any default settings. One of the default settings for the domain is access to the JNDI resources within the domain. By default, Oracle WebLogic grants access to the Everyone group defined in the security realm of the domain. Whilst this is generally acceptable in the vast majority of domains (remember, you tend to set up a lot of non-production copies in any implementation of the products), it may not be appropriate for production domains. There is a simple setup to correct that.

  • Create a group to designate the specific users, outside the application users, that you want to give access to the JNDI resources. Allocate the user identities to that group in your security repository. If you use the internal LDAP of Oracle WebLogic, you can add them using the console. If you want to designate different groups of people, create different groups.
    • Remember you have groups already for other users, Administrators and the product group. For this documentation we will use the Administrators and cisusers groups. You can vary the values according to your site setup. These will be reused for the setup.
  • Create a Global Role which refers to the above group. If you created multiple then specify each group in the role.
  • On the product server(s) or cluster, select the View JNDI Tree option on the Configuration --> General tab. For example:

View JNDI Tree

  • On the root node of the server definition in the tree remove the Everyone from the node using the Remove button. The Administrators should be the only group that has access at the root level. Do NOT remove Administrators as this will corrupt your access to the domain. The following is an example of the recommended settings:

Root Node Access

  • All child nodes in the JNDI tree inherit the root node setup. Now, for the product to work, you need to add cisusers to the following JNDI objects:
    • The servicebean must be accessible to cisusers. This will be under the context value set for your domain.
    • The Data Sources (OUAF_DS in my example) must be accessible to cisusers.
    • The JMX nodes should be accessible to cisusers if you are using JMX monitoring (directly or via OEM).
    • If using the internal JMS processing, whether that is the JMS Senders or MDB, then you must allow cisusers access to the JMS resources in the domain.
  • Add your custom group to the relevant JNDI objects they need to have access to.
  • Set the Enable Remote JDBC Connection Property to false. This can be done using the JAVA_OPTIONS setting in the setDomainEnv[.sh] script shipped with Oracle WebLogic in the bin directory of your domain home (Add -Dweblogic.jdbc.remoteEnabled=false to JAVA_OPTIONS). Check that the variable WLS_JDBC_REMOTE_ENABLED is not set incorrectly.
  • If you are using SSL, you need to set the RMI JDBC Security to Secure to ensure Administrators use SSL as well for connections. For example:

RMI JDBC Security

The domain is now more secure.



Calling Batch Level Of Service

Wed, 2017-07-26 18:38

As a followup to my Batch Level Of Service article, I want to illustrate how to call your new algorithm from other scripts and as part of query zones.

In the base product we ship a Business Service, F1-BatchLevelOfService, that allows a script or query zone to call the Batch Level Of Service algorithm attached to a Batch Control, if it exists, to return the level of service. I should point out that if a Batch Level Of Service algorithm is not configured on the Batch Control, this call will return the Disabled state.

The schema for this service is shown below (for later versions, use the View Schema feature to check the schema on your version):

Level of Service Schema

To use this service, populate the batchControlId input parameter when calling it; the service returns the message and levelOfService.

Now, how do you call this from other objects?

  • Service Scripts - Include the F1-BatchLevelOfService service as a Data Area attached to the script and use invokeBS to call the business service. For example:

move "parm/batchControlId" to "F1-BatchLevelOfService/input/batchControlId";
invokeBS 'F1-BatchLevelOfService' using "F1-BatchLevelOfService";

  • Query Portal - Use the source=bs tag in your column with a call to the F1-BatchLevelOfService service passing the column that contains the Batch Control Id. For example:

source=BS bs='F1-BatchLevelOfService' input=[input/batchControlId=C1] output=output/levelOfService

Additionally, you can use F1-ReturnMessage to format the returned message as well.

Here is an example of the columns used in a query portal:

Example Use of Batch Level Of Service

Building a Batch Level of Service Algorithm

Fri, 2017-07-21 00:45

One of the features of the Oracle Utilities Application Framework is the Batch Level of Service. This is an optional feature where the Oracle Utilities Application Framework can assess the current execution metrics against some target metrics and return whether the batch job met its targets or failed in meeting targets (including the reason).

This facility is optional and requires some configuration on the Batch Control using a Batch Level Of Service algorithm. This algorithm takes the BATCH_CD as an input and performs the necessary processing to check the level of service (any way you wish).

The algorithm takes a Batch Code (batchControlId) as input and passes back the following:

  • The Level Of Service, levelOfService,  (as expressed by the system lookup F1_BATCH_LEVEL_SERVICE_FLG):
    • DISA (Disabled) - The Batch Level Of Service is disabled as the algorithm is not configured on the Batch Control record. This is the default.
    • NORM (Normal) - The execution of the batch job is within the service level you are checking.
    • ERRO (Error) - The execution of the batch job exceeds the service level you are checking.
    • WARN (Warning) - This can be used to detect that the job is close to the service level (if you require this functionality).
  • The reason for the Level Of Service, expressed as a message (via Message Category, Message Number and Message Parameters). This allows you to customize the information passed to express why the target was within limits or exceeded.

So it is possible to use any metric in your algorithm to measure the target performance of your batch controls. This information will be displayed on the Batch Control or via the F1-BatchLevelOfService Business Service (for query portals).

Now, I will illustrate the process for building a Batch Level Of Service with an example algorithm. This sample will just take a target value and assess the latest completed execution. The requirements for the sample algorithm are as follows:

  • A target will be set on the parameters of the algorithm which is the target value in seconds. Seconds was chosen as that is the lowest common denominator for all types of jobs.
  • The algorithm will determine the latest batch number or batch rerun number (to support reruns) for the completed jobs only. We have an internal business service, F1-BatchRunStatistics that returns the relevant statistics if given the batch code, batch number and batch rerun number.
  • The duration returned will be compared to the target and the relevant levelOfService set with the appropriate message.
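The decision at the heart of this sample algorithm can be sketched outside the product. This is a hedged Python illustration (the function and parameter names are mine, not product APIs, and a plain string stands in for the message category/number/parameters the real algorithm returns):

```python
def evaluate_level_of_service(duration_seconds, target_seconds):
    """Return (levelOfService, message) mirroring the sample algorithm.

    NORM when the latest completed run met the target, ERRO when it
    exceeded it. DISA stands in for the case where no completed
    execution was found.
    """
    if duration_seconds is None:
        # No completed execution found; nothing to evaluate.
        return ("DISA", "No completed execution found to evaluate")
    if duration_seconds <= target_seconds:
        return ("NORM", "Run completed within the target of %d seconds"
                % target_seconds)
    # Include the target in the message, as the sample messages do,
    # so the reader can see how far from the target the run was.
    return ("ERRO", "Run took %d seconds, exceeding the target of %d seconds"
            % (duration_seconds, target_seconds))
```

For example, a 45 second run against a 60 second target yields NORM, while a 90 second run yields ERRO with both values in the message.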

Here is the process I used to build my algorithm:

  • I created three custom messages that would hold the reason for the NORM, ERRO and WARN state. I do not use the last state in my algorithm though in a future set of articles I might revisit that. For example:

Messages for Batch Level Of Service

  • You might notice that, in the message for when the target is exceeded, I include the target as part of the message (to tell you how far away from the target you are). The first parameter will be the target and the second will be the value returned from the product.
  • The next step is to define the Business Service that will return the batch identifiers of the execution I want to evaluate for the statistic. In this case I want to find the latest run number for a given batch code. Now, there are various ways of doing this but I will build a business service to bring back the right value. In this case I will do the following:
    • I will build a query zone with the following configuration to return the batch run number and batch rerun number:
      • Zone: CMBHZZ
      • Description: Return Batch Last Run Number and Rerun Number
      • Zone Type: F1-DE-SINGLE
      • Application Service: F1-DFLTS
      • Width: Full
      • Hidden Filter 1: label=BATCH_CD
      • Initial Display Columns: C1 C2 C3
      • SQL Statement: select b1.batch_cd, max(b1.batch_nbr), max(b2.batch_rerun_nbr) from ci_batch_inst b1, ci_batch_inst b2 where b1.batch_cd = :H1 and b1.batch_cd = b2.batch_cd and b1.batch_nbr = b2.batch_nbr group by b1.batch_cd
      • Column 1: source=SQLCOL sqlcol=1 label=BATCH_CD
      • Column 2: source=SQLCOL sqlcol=2 label=BATCH_NBR
      • Column 3: source=SQLCOL sqlcol=3 label=BATCH_RERUN_NBR
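The zone's SQL can be exercised outside the product to see what it returns. Here is a sketch using sqlite3 with a simplified stand-in ci_batch_inst table (in the real zone, :H1 is bound from the hidden BATCH_CD filter; the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Stand-in for CI_BATCH_INST with just the columns the query touches.
conn.execute(
    "create table ci_batch_inst "
    "(batch_cd text, batch_nbr int, batch_rerun_nbr int)")
# Run 1 (no rerun), run 2 (with one rerun).
conn.executemany(
    "insert into ci_batch_inst values (?, ?, ?)",
    [("MYBATCH", 1, 0), ("MYBATCH", 2, 0), ("MYBATCH", 2, 1)])

# The zone's SQL: latest run number and rerun number for a batch code.
row = conn.execute(
    """select b1.batch_cd, max(b1.batch_nbr), max(b2.batch_rerun_nbr)
       from ci_batch_inst b1, ci_batch_inst b2
       where b1.batch_cd = :H1
         and b1.batch_cd = b2.batch_cd
         and b1.batch_nbr = b2.batch_nbr
       group by b1.batch_cd""",
    {"H1": "MYBATCH"}).fetchone()
# row is ("MYBATCH", 2, 1): the latest run and its latest rerun.
```

With these sample rows the query returns run number 2 and rerun number 1, which is exactly the execution the level-of-service check should evaluate.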
  • I will convert this to a Business Service using the FWLZDEXP with the following schema:

Business Service Schema

  • I need to create a Data Area to hold my input variables. I could do this inline but I might want to reuse the Data Area for other algorithms in the future. For example:

Data Area

  • I now have all the components to start my algorithm via Plug In Script. I create a Batch Level Of Service script with the following settings:
Script Basics
  • I attach the following Data Areas. These are the data areas used by the various calls in the script:

Data Areas

  • The script code looks something like this:


Note: The code shown above is for illustrative purposes. It is not a supported part of the product, just an example.

  • I now create the Algorithm Type that will define the algorithm parameters and the interface for the Algorithm entries. Notice the only parameter is the Target Value:

Sample Algorithm Type

  • Now I create the Algorithm entries to set the target value. For example:

Example Algorithm

  • I can create many different algorithm entries to reuse across the batch controls. For example:

Example Algorithms

  • The final step is to add it to the Batch Controls ready to be used. As I wrote the script as a Plug-In Script there is no deployment needed as it auto deploys. For example, on the Batch Control, I can add the algorithm:

Example Algorithm configuration on batch control

  • Now the Batch Level Of Service will be invoked whenever I open the Batch Control. For example:

Example Normal outcome from algorithm

Example outcome of Error

This example is just one use case to illustrate the use of Batch Level Of Service. This article is the first in a new series of articles that will use this as a basis for a new set of custom portals to help plan and optimize your batch experience.

Design Guidelines

Thu, 2017-07-06 23:24

The Oracle Utilities Application Framework is both flexible and powerful in terms of the extensibility of the products that use it. As the famous saying goes, "With Great Power comes Great Responsibility". Flexibility does not mean that you have carte blanche in terms of design when using the facilities of the product. Each object in the product has been designed for a specific purpose, and extensions built on those objects must also respect those purposes.

Let me give some advice that may help guide your design work when building extensions:

  • Look at the base - The most important piece of advice I give partners and customers is to look at the base product facilities first. I am amazed how many times I see an enhancement implemented by a partner only to find that the base product already did that. This is particularly important when upgrading to a newer version. We spend a lot of time adding new features and updating existing ones (and sometimes replacing older features with newer ones), so what you built as enhancements against previous versions may now be part of the base product. It is a good idea to revert to the base to reduce your maintenance costs.
  • Respect the objects - We have three types of objects in the product: Configuration, Master and Transaction.
    • The configuration objects are designed to hold meta data and configuration that influence the behavior of the product. They are cached in a L2 Cache that is designed for performance and are generally static data that is used as reference and guidance for the other objects. They tend to be low volume and are the domain of your Administrators or Power Users (rather than end users). A simple rule here is that they tend to exist on the Admin menu of the product.
    • The master objects are medium volume, with low growth, and define the key identifier or root data used by the product. For example, Accounts, Meters, Assets, Crews, etc.
    • The transaction objects are high volume and high growth and are added by processes in the product or interfaces and directly reference master objects. For example, bills, payments, meter reads, work activities, tasks etc.. These objects tend to also support Information Lifecycle Management.
    • Now you need to respect each of them. For example, do not load transaction data into a configuration object. Each object type has its own place, resource profile and behaviors.
  • Avoid overuse of the CLOB field - The CLOB field was introduced across most objects in the product and is a great way of extending the product. Just understand that, while CLOB fields are powerful, they are not unlimited. They are limited in size for performance reasons, and they are not a replacement for other facilities like characteristics or even building custom tables. Remember, they hold XML, which has limited maintenance and search capabilities compared to other methods.
  • Avoid long term issues - This one is hard to explain, so let me try. When you design something, think about the other issues that may arise from your design. For example, lots of implementers forget about volume increases over time and run into issues such as long term storage growth. Remember, data in certain objects has different lifecycles and needs to be managed accordingly. Factor that into your design. Too many times I see extensions that forget this rule, and then the customer calls support for advice only to hear they need to redesign to cater for the issue.

I have been in the industry over 30 years and made a lot of those mistakes myself early in my career, so they are easy to make. Just learn and make sure you do not repeat your mistakes. One more piece of advice: talk about your designs with a few people (of various ages as well) to see if they make sense. Do not take this as criticism; a lot of great designers bounce ideas off others to see if they make sense. Doing that as part of any design process helps make the design more robust; otherwise the design can look rushed or, from the other side, lazy. I have seen great designs and bad designs, but it is possible to transform a requirement into a great design with some forethought.

Updates to Oracle Utilities Testing solution

Tue, 2017-06-27 18:49

We are pleased to announce the availability of new content for the Oracle Functional Testing Advanced Pack for Oracle Utilities. This pack allows customers of supported Oracle Utilities products to adopt automated testing quickly and easily by providing the testing components used by Product Development for use in the Oracle Application Testing Suite.

We have released, as patches available from My Oracle Support, the following content patches:

  • Oracle Utilities Customer Care And Billing v2. (available as patch 26075747).
  • Oracle Utilities Customer To Meter v2. (available as patch 26075823).
  • Oracle Utilities Meter Data Management/ Oracle Utilities Smart Grid Gateway v2.2.0.1 (available as patch 26075799).

This means the current release of the pack, v5.0.1.0, supports the following products and versions:

  • Oracle Utilities Customer Care And Billing,, &
  • Oracle Utilities Mobile Workforce Management, &
  • Oracle Real Time Scheduler, &
  • Oracle Utilities Application Framework,,, &
  • Oracle Utilities Meter Data Management, &
  • Oracle Utilities Smart Grid Gateway (all adapters), &      
  • Oracle Utilities Work And Asset Management 2.1.1, & 2.2.0
  • Oracle Utilities Operational Device Management 2.1.1 & 2.2.0
  • Oracle Utilities Customer To Meter

The pack continues to support the ability to build flows for these products, including flows across multiple products, packaged integration and supports all channels of access including online, web services and batch. We also support mobile testing for the Oracle Utilities Mobile Workforce Management and Oracle Real Time Scheduler products running on Android and iOS devices.

The pack also includes sanity flows, used by the Oracle Utilities cloud deployments, that verify the installation of the products is complete and operational.

The VERSION column - An unsung treasure

Wed, 2017-06-21 20:58

If you use an Oracle Utilities Application Framework based product you will notice the column VERSION exists on all objects in the product. There is a very important reason that this column exists on the tables.

One of the common scenarios in an online system is a problem called the lost update problem. Let me explain. Say we have two users (there can be more): User A and User B.

  • User A reads Object A to edit it.
  • User B reads Object A as well to edit it at the same time.
  • User B saves the Object changes first.
  • User A saves the Object changes.

Now, without protection, the changes that User B made would be overwritten by User A's changes. We have lost User B's changes. This is the lost update problem in a nutshell.

Now using the VERSION column changes the above scenario:

  • When User A and User B reads the object, the current value of VERSION is noted.
  • Whenever the object is updated, the value of VERSION is checked. If it is the same as the value of VERSION when the record was read, then VERSION is incremented as part of the update.
  • If the value of VERSION does not match, the product will issue a "Concurrency Error" and ask the user to retry the transaction (after reloading the changed object).

In our scenario, User A would receive the message, as the value of VERSION has been incremented, and therefore differs, since that user read the object.
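The mechanism described above is classic optimistic locking, and it can be sketched in a few lines of Python. The ConcurrencyError class and the in-memory "table" here are purely illustrative; the product enforces this in its persistence layer:

```python
class ConcurrencyError(Exception):
    """Raised when the record changed since it was read."""

# Stand-in for a table row with a VERSION column.
records = {"OBJ-A": {"version": 1, "data": "original"}}

def read(object_id):
    rec = records[object_id]
    # The reader notes the VERSION value at read time.
    return {"object_id": object_id, "version": rec["version"],
            "data": rec["data"]}

def save(snapshot, new_data):
    rec = records[snapshot["object_id"]]
    if rec["version"] != snapshot["version"]:
        # Someone else saved first: reject and ask the user to retry.
        raise ConcurrencyError("Object changed since it was read")
    rec["data"] = new_data
    rec["version"] += 1  # increment VERSION as part of the update

# User A and User B read the same object; User B saves first.
a = read("OBJ-A")
b = read("OBJ-A")
save(b, "User B's change")      # succeeds, VERSION becomes 2
try:
    save(a, "User A's change")  # fails: A read VERSION 1
except ConcurrencyError:
    print("Concurrency Error: please retry")
```

User B's change survives and User A is asked to retry after reloading, which is exactly the behavior the product's concurrency error delivers.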

VERSION is a standard column on all objects in the system and applies no matter what channel (online, web services or batch) updates the object.

Hidden gems in OUAF

Thu, 2017-06-01 21:45

Oracle Utilities Application Framework V4. has just been released with a few products, and you will find a few hidden gems in the installation that provide useful features for those upgrading.

Here is a summary of some of those features:

  • You will notice that the product now requires the Oracle Java Required Files (JRF). These are additional libraries Oracle uses in its products to standardize diagnostics and administration. JRF is provided as a profile you apply to your Oracle WebLogic domain to provide additional facilities and features. To install JRF, it is recommended to download the Fusion Middleware Infrastructure release of Oracle WebLogic, as it includes all the files necessary to apply the template. These libraries are used by various components in the product, and with each release we will implement more and more of the advanced functionality they provide.
  • One of the biggest gems is that JRF implements a new additional console in the form of Fusion Middleware Control. Customers familiar with Oracle SOA Suite will be familiar with this new console. It is a companion console and has some additional features around Web Services management and other administration features (including recording for replays) for common tasks. Here is an example of the console running with one of our products:

Oracle Fusion Middleware Control

  • The JRF includes a prebuilt diagnostics framework (FMWDFW) setup for use with WLDF. The WebLogic Diagnostics Framework (WLDF) is a framework where you configure rules for detecting issues in your domain. When an issue arises, WLDF automatically collects the relevant information (including flight recordings, if you enable that) into a Diagnostics Package, a zip file full of diagnostic information that can be sent to Oracle Support for diagnosis. The prebuilt setup can be used with OUAF products and can be altered to detect additional issues if necessary. At present it helps detect the following:
    • Deadlocks
    • Heapspace (memory issues)
    • Stuck Threads (it can be configured to detect hogging threads as well)
    • UncheckedException - These are general errors

The JRF is a collection of useful libraries and utilities that are now enabled with Oracle Utilities Application Framework to help you be more efficient and also detect issues for you to manage.

Scripting, Groovy and Java for extending the product

Sun, 2017-05-28 23:55

In a recent release of the Oracle Utilities Application Framework, we introduced Groovy as an alternative development technology for server side extensions of our products. This means we now have three technologies that can be used to extend our products:

  • An XPath/XQuery based scripting engine, known simply as scripting
  • Java
  • Groovy

Now the question becomes: which technology should I use for my extensions? Here are a few guidelines to help you:

  • In terms of performance, there is not much difference between the technologies as, at the end of the day, they all result in byte code that is executed by the product. The product runtime does not discriminate between the technologies at that level. There is a slight advantage of Java/Groovy over scripting for extremely large volumes.
  • If you are doing complex algorithmic or operating system level interaction it is recommended to use either Groovy or Java instead of scripting. While scripting can satisfy the most common of extensions, it may not be as efficient as Java/Groovy.
  • If you intend to move to the Oracle Utilities SaaS offerings, you cannot use Java for any extensions. This is because Java tends to be low level, and you cannot deploy your own JAR/WAR/EAR files in a SaaS environment. If you use Oracle PaaS, you have full access, so you can use Java in those cases.
  • Groovy was adopted as a language as it is the foundation of the Oracle Cloud offerings in general for extensions. The Groovy implementation across the Oracle Cloud is whitelisted so that it is restricted to accessing classes that do not have direct access to operating system resources. In this case we supply Groovy libraries to provide a contained integration with these resources.
  • One of the major considerations is total cost of ownership. Typically, if you use a mixture of languages in your implementation, the cost of maintaining those extensions tends to be higher than if you had chosen a single language. This is true for any product that has multiple ways of extension: while flexibility is a great asset, it can come with additional costs. I usually recommend that you pick one of the technologies and stick with it for your extensions unless, for some reason, you need to use a mixture.
  • In terms of best practices, a lot of implementation partners tend to use scripting for the vast majority of their extensions and only use Groovy/Java when scripting is not applicable for some reason.
  • One of the big advantages of scripting and Groovy is that the code assets are actually contained in the database and migration is all handled by either Bundling (for small migrations) or using Configuration Migration Assistant (CMA). The use of Java for extensions, typically requires a manual synchronization of data as well as code.

From a vendor perspective, it does not matter which technology you choose. Personally, I would use scripting and only use Groovy as necessary: it is easier to manage, and you do not have physical JAR/WAR/EAR files to deal with, which makes code/data synchronization much less of an issue in a complex migration strategy. It also means you can move to the cloud a lot more easily in the future.

High and Maximum Availability Architectures

Thu, 2017-05-25 17:51

One of the most common questions I get from partners is what are the best practices that Oracle recommends for implementing high availability and also business continuity. Oracle has a set of flexible architectures and capabilities to support a wide range of high availability and business continuity solutions available in the marketplace.

The Oracle Utilities Application Framework supports Oracle WebLogic, Oracle Database and related products, either through features inherited from the architecture or through native facilities. In summary, the Oracle Utilities Application Framework supports the following:

  • Oracle WebLogic clustering and high availability architectures are supported natively, including support for hardware or software based load balancing. This support extends to the individual channels supported by the Framework and to individual J2EE resources such as JMS, Data Sources, MDBs etc.
  • Oracle Coherence high availability clustering is available natively for the batch architecture. We now also support using Oracle WebLogic to cluster and manage our batch architecture (though it is exclusively used in our Oracle Cloud implementations at the moment).
  • The high availability and business continuity features of the Oracle Database are also supported. For example, it is possible to implement Oracle Notification Service support within the architecture to implement Fast Connection Failover etc.
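
As an illustration of the last point, Fast Connection Failover can be enabled on a product data source in the WebLogic JDBC descriptor by pointing it at the ONS daemons of the database cluster. The fragment below is a sketch only; the data source name, URL, host names and ports are placeholders, not product-supplied values:

```xml
<!-- Illustrative fragment of a WebLogic JDBC data source module.
     fan-enabled turns on Fast Application Notification processing
     (Fast Connection Failover); ons-node-list identifies the ONS
     daemons to subscribe to for up/down events. -->
<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
  <name>OUAF_DataSource</name>
  <jdbc-driver-params>
    <url>jdbc:oracle:thin:@//db-scan.example.com:1521/OUAFSVC</url>
    <driver-name>oracle.jdbc.OracleDriver</driver-name>
  </jdbc-driver-params>
  <jdbc-oracle-params>
    <fan-enabled>true</fan-enabled>
    <ons-node-list>racnode1:6200,racnode2:6200</ons-node-list>
  </jdbc-oracle-params>
</jdbc-data-source>
```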

Oracle publishes a set of guidelines for Oracle WebLogic, Oracle Coherence and Oracle Database that can be used with Oracle Utilities Application Framework to implement high availability and business continuity solutions. Refer to the following references for this information:

REST Support clarifications

Tue, 2017-05-23 19:10

In the Oracle Utilities Application Framework V4. release, REST support has been enabled as a complementary interface method, adding to the SOAP support already in the product.

The REST support in the Oracle Utilities Application Framework was originally developed for our new generation of the mobile connection platform used by Oracle Utilities Mobile Workforce Management, and was initially limited to that product. We have subsequently decided to open up the support for general use.

As the REST support was designed for that original purpose, the current release is limited to specific aspects of the protocol, but it is at a sufficient level to be used for general purpose functions. It is designed as an alternative to SOAP integration for customers who want to use a mixture of SOAP and REST in their integration architectures.

In the initial release, the REST support has been implemented as part of the online channel to take advantage of the Oracle WebLogic facilities and share the protocol and security setup of that channel. In a future release, we have plans to incorporate enhanced REST features in a separate channel dedicated to integration.

For more information about the REST platform support, including the limitations of this initial release, refer to the Web Services Best Practices whitepaper from My Oracle Support (Doc Id: 221475.1).
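
Since the REST support shares the online channel's protocol and security setup, a client calls it like any authenticated HTTP endpoint. The sketch below builds such a request in Java; the endpoint path, user and password are invented for the example and are not documented product values:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch only: the URL, credentials and resource path
// below are placeholders, not documented product endpoints.
public class RestClientSketch {

    // Build an HTTP Basic Authorization header for the call.
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // Hypothetical REST-enabled service exposed over the online channel.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://ouaf.example.com/ouaf/rest/cm/account/1234"))
                .header("Authorization", basicAuth("sysuser", "changeme"))
                .header("Content-Type", "application/json")
                .GET()
                .build();
        // The request would then be sent with java.net.http.HttpClient.
        System.out.println(request.method() + " " + request.uri());
    }
}
```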

Multiple Policy Support

Wed, 2017-05-17 23:28

One of the features of the latest Oracle Utilities Application Framework (V4.) is the support for multiple WS-Policy compliant policies on Inbound Web Services. There are a number of ways to achieve this:

  • Annotations - It is now possible to specify multiple inline policies (standard ones and custom ones), with order of precedence supported via a Sequence. It is also now possible to delegate security within annotations to Oracle Web Services Manager. This means it is now possible to mix inline with external policies. For example:

Multiple Policies as Annotations

  • Oracle WebLogic - It is possible to attach the policies supported by Oracle WebLogic to the individually deployed Web Services on the container level. This supports multiple policies (order of precedence is designated by the order they appear in the Web Service) on the individual Web Service.
  • Oracle Web Services Manager - It is possible to attach additional policies using the container (Web Services Manager includes the Oracle WebLogic supported policies, additional advanced policies and access controls) and like Oracle WebLogic, the order of precedence for multiple policies is the order they are attached to the individual Web Service. For example:

OWSM Policy Example

Now, why have multiple policies in the first place? You do not have to use multiple policies, but there are a few use cases where they make sense:

  • Some WS-Policies address transport security and some address message security only. Using a combination allows you to specify both using different policies. I should point out that most WS-Policies contain a transport and message combination, which reduces the need for multiple policies in the container.
  • You can create WS-Policy compliant custom policies, as long as they are supported by Oracle WebLogic or Oracle Web Services Manager, and those can have separate transport or message security definitions.
  • You should reuse web services as much as possible. You can choose not to expose the WS-Policy in your service and instead use different policies for different interfacing systems. This might sound illogical, but you may need different levels of security depending on the source of the call. In this case you would tell each source which policy it must adhere to.
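
As a sketch of the container-level approach, two policies attached to one deployed service appear in the generated WSDL as policy references, evaluated in the order they appear. The binding name below is invented; the policy URIs are standard WebLogic-supplied WS-Policy files:

```xml
<!-- Illustrative WSDL fragment: a transport-level policy (HTTPS with
     basic authentication) and a message-level policy (username token)
     attached to the same Web Service binding. Order of appearance
     designates the order of precedence. -->
<binding name="CM-CustomerInfoSoapBinding" type="tns:CM-CustomerInfo">
  <wsp:PolicyReference URI="policy:Wssp1.2-2007-Https-BasicAuth.xml"/>
  <wsp:PolicyReference URI="policy:Wssp1.2-2007-Wss1.1-UsernameToken-Plain.xml"/>
  <soap:binding style="document"
                transport="http://schemas.xmlsoap.org/soap/http"/>
</binding>
```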

Multiple policies are an optional feature but can be used to support a wide range of different interface styles.

SOA Suite Security with Inbound Web Services

Wed, 2017-05-17 19:06

With the introduction of Inbound Web Services the integration between these services and Oracle SOA Suite now has a few more options in terms of security.

  • It is possible to specify the WS-Policy to use to secure the transport and message sent to the product web service on the SOA Composite. The product supports more than one WS-Policy per service and any composite must conform to one of those policies.
  • As with older versions of the product and SOA Suite, you can specify the csf-key within the domain itself. This key holds the credentials of the interface in meta-data so that it avoids hardcoding the credentials in each call. This also means you can manage credentials from the console independently of the composite. In the latest releases it is possible to specify the csf-map as well (in past releases you had to use as the map).

Now the process to do the configuration is as follows:

  • Using Oracle Fusion Middleware Control, select the Oracle SOA Suite domain (usually soa_domain) and add the credentials (and map) to the domain. The credentials can be shared across composites, or you can choose to set up multiple credentials (one for each interface, for example). In the example below, the map is the default map and the key is ouaf.key (just for the example):

Example Key and Map
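
The same credential can equivalently be created from the command line with WLST. The session below is a sketch; the admin URL, user names and passwords are placeholders, and oracle.wsm.security is assumed as the OWSM default credential map:

```
# Illustrative WLST session (run via wlst.sh against the SOA domain).
connect('weblogic', '<admin-password>', 't3://soahost:7001')
createCred(map='oracle.wsm.security', key='ouaf.key',
           user='interfaceuser', password='<interface-password>',
           desc='OUAF interface credentials')
# Verify the credential was stored.
listCred(map='oracle.wsm.security', key='ouaf.key')
```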

  • Now the credentials and the WS-Policies need to be specified on the composite within Oracle SOA Suite. This can be done within SOA Composer or Oracle JDeveloper. Below is an Oracle JDeveloper example, where you link the WS-Policies using Configure SOA WS Policies at the project level in Oracle JDeveloper for each external reference. For example:

Configure SOA WS Policies

  • You then select the policy you want to use for the call. Remember, you use only one of the policies you have configured on the Inbound Web Service. If you have a custom policy, it must be deployed to Oracle SOA Suite and your Oracle JDeveloper instance to be valid for your composite. A list of policies is displayed and you select one:

Example Policy Selection

  • Edit the policy to specify additional information. For example:

Editing Policy

  • At this point, specify which csf-map and csf-key to use for the call in the Override Value. In the example below, the csf-key is specified:

Example Key specification

The security has now been set up for the composite. You have indicated the credentials (which can be managed from the console), and the policy to use can be attached to the composite to ensure that your security specification is implemented.

Depending on the WS-Policy you choose to use, there may be additional transport and message protection settings you will need to specify (for example if you use policy specific encryption, outside the transport layer, you may need to specify the encryption parameters for the message). For full details of Oracle SOA Suite facilities, refer to the Oracle SOA Suite documentation.

Testing, the Oracle difference

Mon, 2017-05-15 00:43

Recently I attended the customer forums in London to discuss the future of our product lines and outline the work we have done over the last year. One of the questions that came up was a discussion of the major advantages of using the Oracle Functional Testing Advanced Pack for Oracle Utilities, which is part of the Oracle Testing solution.

In the industry, functional testing during an initial implementation and the subsequent upgrades of any product is a major part of the effort. To meet deadlines, implementations commonly decide to reduce the scope of testing, which increases the overall risk.

One way of addressing this is to adopt automated testing. While this sounds logical it can have hidden costs:

  • Traditional tools use user-interface-based scripting, which records the screen and the user's interaction with it. Earlier in my career I used to call this screen scraping. It is more sophisticated than that now, but effectively it uses the screen recording, including the data entered, as a rerunnable test.
  • Typically, the data entered during the recording is embedded in the script. This means that to reuse the script you would probably need to record it again, or have a programming resource change the script. Effectively, you need a specialist script programmer to maintain your testing assets.
  • If the user experience changes, even due to a patch, the script may not work as originally intended, may return inconsistent results, or may need to be re-recorded. This is more likely when you upgrade, as new, modern user experiences are introduced over time.
  • Testing assets are really programmable objects that are typically maintained by a programmer rather than a testing resource. While these scripting languages are made easier and easier to use, they are still programming.

Now, whilst it is possible to use the Oracle Application Testing Suite in the traditional sense outlined above, when coupled with the Oracle Functional Testing Advanced Pack for Oracle Utilities the approach is quite different and addresses the issues seen in traditional automated testing.

  • Oracle Functional Testing Advanced Pack for Oracle Utilities includes a full set of reusable components that are the SAME components used by the QA teams at Oracle on a day to day basis. The fact that they are used daily by product QA reduces the risk that they will fail to execute against the product versions.
  • The solution is based upon Oracle Application Testing Suite, which is used by hundreds of Oracle customers across many Oracle products such as E-Business Suite, PeopleSoft, Fusion, JD Edwards etc. Oracle Utilities is just one of the latest products to use the Oracle Application Testing Suite. In fact, some of those products have licensed packs as well that can be used in conjunction with the Oracle Utilities pack.
  • The components cover the main functionality of the product they are supplied for. The only components we do not provide are those covering the administration objects; these objects are typically not cost effective to automate in an implementation, due to their very low usage after implementation.
  • The supplied components are customization aware where algorithms, change handlers, etc are handled by the component automatically.
  • The Oracle Functional Testing Advanced Pack for Oracle Utilities supplies a number of utilities to allow partners and implementations to add custom components to the solution for any customization not handled by the base components (this should be relatively rare).
  • The process of using the pack with the Oracle Application Testing Suite is more assembly (orchestration) than programming. Oracle Flow Builder, which is included in the solution, is a simple browser based tool that allows business processes to be modeled by dragging and dropping components in the order they occur in the business process. This allows a lower skilled person, rather than a programmer, to build the flows.
  • A testing flow becomes a test script through a generator. The resulting script does not need to be altered or maintained by a developer after it is generated.
  • Data for the flow is independent of the flow, which encourages reuse. For example, it is possible to attach different data representing different scenarios to a single flow. Flows can also contain multiple scenarios if desired. This extends even after the flow is expressed as a test script, where the physical data is separated out so it can be replaced at runtime rather than design time.
  • The whole solution is designed for reuse so that the number of assets you need is actually far less than traditional methods. This reduces costs and risk.
  • It is possible to reuse your flows across product versions. For example, it is possible to test multiple releases of products to reduce your upgrade risk by aligning the same flows to different versions of the supplied components.

The testing solution from Oracle Utilities is far more cost effective than traditional methods with the content allowing implementations to quickly adopt automated testing with a lower implementation risk. Customers who have used the solution have found they have tested more, reduced their testing costs and increased accuracy of their solutions.

Oracle Utilities Work And Asset Management V2. Released

Thu, 2017-05-11 16:14

Oracle Utilities Work And Asset Management (WAM) V2. has been released and is available from Oracle Delivery Cloud. This version is also based upon Oracle Utilities Application Framework V4. (also known as 4.3 SP4).

Included in this release are usability enhancements, an update to the Esri GIS Integration, Preventive Maintenance Event processing, and Construction Work Management.  

With these new additions we are now able to support the full asset lifecycle, from design and construction to retirement, opening up the gas and electric distribution market.  Construction Work Management adds the final piece to the Asset Lifecycle process.

  • Asset Performance Management - The Asset Performance Management features have been enhanced to offer new ways to calculate Asset Health Index scores and to set up Preventive Maintenance triggers based on the Asset Health Index.   We also offer integration points for third party predictive maintenance products to affect the Asset Health Index.
  • Compatible Units - Compatible Units are job standards that can be used to provide consistency and assistance when creating work designs.  Compatible Units can be created for either internal resources or for contractors.
  • Construction Work Design - Work Designs are critical to utility distribution companies. The work design process leverages compatible units to quickly scope and estimate the costs of work. You are able to create multiple versions of designs to compare construction options such as overhead or underground work. You can also create design versions to compare contractor work. When you pick a design to execute, you can easily transition the work design into a work package without having to create new work orders from scratch.
  • Construction Work Orders - Construction work orders differ from regular work orders because we are creating new assets rather than maintaining existing assets.  A construction work order also manages Construction Work in Progress (CWIP) accounting to ensure the work in progress is accounted for correctly.  The closeout process allows you to create new WAM assets to start their lifecycle in WAM and also creates the fixed asset property unit data to feed the corporate accounting system.
  • "As Built" Reconciliation - One of the big challenges for organizations is the reconciliation of the work design to the actual construction.  The actual construction work often diverges from the estimate due to the wide variety of variables that occur on a project.  WAM v2.2 offers a full reconciliation process to allow you to revise the values of assets, move costs between construction and maintenance accounts, review and adjust property unit valuation, and provides support for mass asset valuations.
  • PM Event Processing -  You can now package up a group of work templates into a PM Event and trigger that event as a group rather than one work template at a time.  This can be used for outage work or any repetitive work that requires multiple work orders to be created.

  • Esri GIS Integration - The user experience of the Esri GIS Integration was completely revised to provide a more intuitive experience.  Esri mapviewer components are directly integrated into the Work and Asset Management product.  Customers can publish any map component as an Esri Web Map and enroll that Web Map into WAM.  This includes feature layer maps as well as any thematic maps or metrics that customers choose to publish.

Esri Integration


Oracle Utilities Customer Care and Billing V2. is now available

Sun, 2017-05-07 23:28

Oracle Utilities Customer Care And Billing V2. is now available for download and installation from Oracle's Delivery Cloud. This is the first Oracle Utilities product released on Oracle Utilities Application Framework V4., also known as 4.3 SP4.

The latest Oracle Utilities Application Framework includes the latest updates, new functionality, content we have delivered from our cloud offerings and new versions of platforms. The release media includes a new set of updated documentation:

  • Updated versions of the online documentation which are available using the Oracle Help engine online and in offline format as well.
  • New technical documentation about installation, operations and security.
  • We have released a new API Guide for the management APIs now included in the release documentation. These APIs are used by our new management interfaces and the next release of the OEM Management Pack for Oracle Utilities.
  • As outlined in my earlier post, OUAF Release Summary, you can see the Framework features now available for Oracle Utilities Customer Care And Billing customers to utilize.

With the general availability of Oracle Utilities Application Framework V4., a series of articles and new versions of whitepapers will be released over the coming months to highlight new features available for use in cloud and on-premise implementations of these products.

OUAF Release Summary

Thu, 2017-04-06 20:28

The next release of the Oracle Utilities Application Framework is in the final stages of implementation across our product lines over the next few months. This release improves the existing Oracle Utilities Application Framework with exciting new features and enhancements to existing features for our cloud and non-cloud implementations. Here is a summary of the key features of the new Oracle Utilities Application Framework.

Main Features

CMA Improvements

The following highlights some improvements to CMA processing.

Ad-hoc Migration Requests

A new migration request BO has been provided to allow for building ‘ad-hoc’ migration requests using a list of specific objects.  It’s called the “entity list” migration request.

A special zone is included to find records to include in the migration request. This zone allows you to choose a maintenance object that is configured for CMA and enter search criteria to produce a list of objects to choose from. The zone supports linking one or more objects for the same MO en masse.

Once records are linked, a zone allows you to view the existing records and remove any if needed.


Grouping Migration Requests

Migration requests may now be grouped so that you can maintain more granular migration requests that get grouped together to orchestrate a single export of data for a ‘wholesale’ migration.  The framework supplies a new ‘group’ migration request that includes other migration requests that logically group migration plans.  Edge products or implementations may include this migration request into their own migration request.

Mass Actions During Migration Import Approval

When importing data sets, a user may now perform mass actions on migration objects to approve, reject, or mark them as ‘needs review’.

Groovy Library Support

Implementers may now define a Groovy library script for common functionality that may be included in other Groovy scripts.

There is a new script type for this purpose.

Scripts of this type define a Groovy Library Interface step type to list the Groovy methods defined within the script that are available for use by other scripts.

Additional script steps using the Groovy Member step type are used to define the Groovy code that the script implements.

Groovy scripts that choose to reference the Groovy Library Script can use the createLibraryScript method provided by the system to instantiate the library interface.
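
As an illustrative sketch of the mechanism described above (the script name, method name and step contents below are invented for the example, not taken from product documentation), a library script declares its interface and implementation in separate steps, and a consuming script instantiates the library:

```groovy
// Groovy Library Interface step: lists the methods the library
// exposes for use by other scripts.
BigDecimal roundCurrency(BigDecimal amount)

// Groovy Member step: the implementation of the listed method.
BigDecimal roundCurrency(BigDecimal amount) {
    return amount.setScale(2, java.math.RoundingMode.HALF_UP)
}

// In a consuming Groovy script: instantiate the library interface via
// the system-provided createLibraryScript method and call the shared
// method ("CM-CurrencyUtils" is a made-up script code).
def lib = createLibraryScript('CM-CurrencyUtils')
def rounded = lib.roundCurrency(10.125)
```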

Search Menu Capability

A new option in the toolbar allows a user to search for a page rather than using the menu to find the desired page.

All menu items whose labels match what the user types are shown (as you type).

Additional Features

The following is a subset of additional features that are included.   Refer to the published release notes for more details.

  • URI validation / substitution. Any place where a URI is configured can now use substitution variables to support transparency across environments. The fully substituted value can also be validated against a whitelist for added security.
  • Minimizing the dashboard suppresses refresh. This allows a user to improve response when navigating throughout the system by delaying the refresh of zones in the dashboard while it is minimized.
  • New support for UI design. Input maps may now support half width sections.  Both display and input maps may support “floating” half width sections that fill in available space on the UI based on what is displayed.
  • Individual batch controls may now be secured independently.
  • Ad-hoc batch parameters are supplied to all batch related plug-in spots. Additionally, plug-in driven batch programs may now support ad-hoc parameters.
  • Elements in a schema that include the private=true attribute will no longer appear in the WSDL of any Inbound Web Service based upon that schema.
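
To illustrate the idea behind URI substitution and whitelist validation, here is a generic sketch in Java. This is not the product's actual implementation or variable syntax; the `${NAME}` token style, the `SPL_HOST` variable and the host names are invented for the example:

```java
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Generic sketch only: substitute per-environment variables into a
// configured URI, then validate the result against a whitelist.
public class UriSubstitutionSketch {

    private static final Pattern VAR = Pattern.compile("\\$\\{([A-Z_]+)\\}");

    // Replace ${NAME} tokens using a per-environment variable map.
    static String substitute(String uri, Map<String, String> env) {
        Matcher m = VAR.matcher(uri);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = env.get(m.group(1));
            if (value == null) {
                throw new IllegalArgumentException("Unknown variable: " + m.group(1));
            }
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    // Validate the fully substituted URI against a whitelist of prefixes.
    static boolean isWhitelisted(String uri, List<String> whitelist) {
        return whitelist.stream().anyMatch(uri::startsWith);
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("SPL_HOST", "ouaf-dev.example.com");
        String uri = substitute("https://${SPL_HOST}/attachments", env);
        System.out.println(uri + " allowed="
                + isWhitelisted(uri, List.of("https://ouaf-dev.example.com/")));
    }
}
```

The same configured URI can then be promoted between environments unchanged, with only the variable map differing per environment.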