Anthony Shorten


CPQ is an Auditor’s Best Friend

Mon, 2018-05-21 03:00

By Andy Pieroux, Founder and Managing Director of Walpole Partnership Ltd.  

One of the reasons many companies invest in a Configure, Price and Quote (CPQ) system is to provide a robust audit trail for their pricing decisions. Let’s take a look at why, and how CPQ can help.


First, apologies if you are an auditor. I’ve always been on the business side - either in sales, sales management, or as a pricing manager. I can appreciate your view may be different from the other side of the controls. Perhaps by the end of this article our points of view may become closer?

If your business has the potential to get audited, I know that I can speak on your behalf to say we all just love being audited. We love the time taken away from our day jobs. We love the stress of feeling that something may be unearthed that exposes us or gets us in trouble, even if we’ve never knowingly done anything wrong. We love the thought of our practices being exposed as 'in need of improvement' and relish the chance to dig through old documents and folders to try and piece together the story of why we did what we did… especially when it was several years ago. Yes sir, bring on the audit.

The reason we love it so much is that in our heart of hearts, we know audits are needed for our organization to prosper in the future. We dread the thought that our company might be caught up in a scandal like the mis-selling of pensions, or PPI (payment protection insurance), or serious accounting frauds like Enron.

It was scandals like Enron in the early 2000s that gave rise to stricter audit requirements and the Sarbanes-Oxley Act (SOX). This set a high standard for internal controls and introduced much tougher penalties for board members who fail to ensure that financial statements are accurate. The role of pricing decisions (e.g. who authorized what and when) and the accuracy of revenue reporting become paramount when evidencing compliance with audit regimes such as this.

At this point, a CPQ system can be the simple answer to your audit needs. All requests for discounts, and the way revenue is allocated across products and services, are documented. All approvals can be attributed to an individual, time-stamped, and captured with the reasons given at the time of approval. More importantly, the ability to show an auditor the entire history of a decision and to follow the breadcrumbs from a signed deal all the way to reported revenue at the click of a button means you have nothing to hide, and a clear understanding of the decisions. This is music to an auditor’s ears. It builds trust and confidence in the process and means any anomalies can be quickly analyzed.

When all this information is securely stored in the cloud, with access controlled to only those who need it, and the process is tamper-proof, the system is designed with integrity in mind and passing an audit becomes so much easier. All the anxiety and pain mentioned above disappears. Auditors are no longer the enemy. You will find they can help advise on improvements to the rules in your system to make future audits even more enjoyable. Yes - that’s right… I said it. Enjoyable audits!

So, CPQ is an auditor’s friend, and an auditee’s friend too. It doesn’t just apply to the big-scale audit requirements like SOX, but any organization that is auditable. Whether you’re a telecommunications company affected by IFRS 15, an organization impacted by GDPR, or any one of a thousand other guidelines, rules or quality policies that get checked - having data and decisions stored in a CPQ system will make you love audits too.


Why Is the XAI Staging Not in the OSB Adapters?

Wed, 2018-05-16 19:52

With the replacement of the Multi-Purpose Listener (MPL) by the Oracle Service Bus (OSB) and the additional OSB Adapters for Oracle Utilities Application Framework based products, customers have asked about transaction staging support.

One of the most common questions I have received is why there is an absence of an OSB Adapter for the XAI Staging table. Let me explain the logic.

  • One Pass versus Two Passes. The MPL processed its integrations by placing the payload from the integration into the XAI Staging table. The MPL would then process the payload in a second pass. The staging record would be marked as complete or error. The complete ones would need to be removed using the XAI Staging purge process, run separately. You then used the XAI Staging portals to correct the data coming in for the ones in error. On the other hand, the OSB Adapters treat the product as a "black box" (i.e. like any other external product): they call the relevant service directly (for inbound) and poll the relevant Outbound or NDS table directly for outbound processing records. This is a single pass process rather than the multiple passes the MPL used. OSB is far more efficient and scalable than the MPL because of this.
  • Error Hospital. The idea behind the XAI Staging table is that error records remain there for possible correction and reprocessing. This was a feature of the MPL. In the OSB world, if a process fails for any reason, the OSB can be configured to act as an Error Hospital. This is effectively the same as the MPL except that you can configure the hospital to ignore successful executions, which reduces storage. In fact, OSB has features that let you detect errors anywhere in the process and determine which part of the integration was at fault in a more user friendly manner. OSB effectively already includes the staging functionality, so adding it to the adapters would just duplicate processing. The only difference is that error correction, if necessary, is done within the OSB rather than the product.
  • More flexible integration model. One of the major reasons to move from the MPL to the OSB is the role the product plays in integration. In the MPL model, any data passed to the product from an external source automatically became the responsibility of the product (that is how most partners implemented it). This means the source system had no responsibility for the cleanliness of its data, as you had the means of correcting the data as it entered the system. The source system could send bad data over and over and, as you dealt with it in the staging area, that would increase costs on the target system. This is not ideal. In the OSB world, you can choose your model. You can continue to use the Error Hospital to keep correcting the data if you wish, or you can configure the Error Hospital to compile the errors and send them back, using any adapter, to the source system for correction. With OSB there is a choice; the MPL did not really give you one.

With these considerations in place, it was not efficient to add an XAI Staging Adapter to OSB, as it would duplicate functionality and decrease efficiency, which would negatively impact scalability.

Capacity Planning Connections

Tue, 2018-05-01 23:29

Customers and partners regularly ask me questions about traffic capacity on their Oracle Utilities product implementations and how best to handle their expected volumes.

The key to answering this question is to understand a number of key concepts:

  • Capacity is related to the number of users, threads, etc. (let's call them actors, to be generic) that are actively using the system. As the Oracle Utilities Application Framework is stateless, actors only consume resources when they are active on any part of the architecture. If they are idle, they are not consuming resources. This is important, as the number of logged-on users does not dictate capacity.
  • The goal of capacity planning is to have enough resources to handle peak loads and to scale capacity back when the load drops to the expected minimum. This makes sure you have enough for the busy times but also that you are not wasting resources.
  • Capacity is not just online users; it also includes batch threads, web service clients, REST clients and mobile clients (for mobile application interfaces). It is a combination of all channels, and each channel can be monitored individually to determine its capacity.

This is the advice I tend to give customers who want to monitor capacity:

  • For channels using Oracle WebLogic, you want to use Oracle WebLogic MBeans such as ThreadPoolRuntimeMBean (using ExecuteThreads) for protocol level monitoring. If you want to monitor each server individually to get an idea of capacity, then you might want to try ServerChannelRuntimeMBean (using ConnectionsCount). In the latter case, look at each channel individually to see what your traffic looks like.
  • For batch, when using Oracle Coherence, use the inbuilt batch monitoring API (via JMX) and take the sum of the NumberOfMembers attribute to determine the number of active threads, etc. running in your cluster. Refer to the Server Administration Guide shipped with your Oracle Utilities product for details of this metric and how to collect it.
  • For database connections, it is more complex, as connection pools (regardless of the technique used) rely on a maximum size limit. If this limit is exceeded, then you want to know how many pending requests are waiting in order to determine how much bigger the pool should be. The active connection figures are therefore calculations that combine the connections in use with any requests waiting once the limit is reached.

Note: You might notice that the database active connections are actually calculations. This is because the metrics capture the capacity within a limit and need to take into account when the limit is reached and requests are waiting.

The above metrics should be collected at peak and non-peak times. This can be done manually or using Oracle Enterprise Manager.
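If you want to script the collection rather than (or as well as) use Oracle Enterprise Manager, something like the following WLST (WebLogic Scripting Tool, which is Jython based) sketch can sample the runtime MBeans mentioned above. It is a minimal sketch only: the credentials, URL, server name and data source name are placeholders, and the pool demand line is my reading of the calculation described in the note above, not a product-supplied formula.

# A hedged WLST sketch - run via: wlst.sh capacity_sample.py
# The credentials, URL, server name and data source name are placeholders.
connect('weblogic', 'welcome1', 't3://localhost:7001')
serverRuntime()

# Protocol/thread level capacity (ThreadPoolRuntimeMBean)
cd('ThreadPoolRuntime/ThreadPoolRuntime')
print 'Execute Threads:', get('ExecuteThreadTotalCount')
print 'Hogging Threads:', get('HoggingThreadCount')
print 'Queue Length   :', get('QueueLength')

# Connection pool usage (JDBCDataSourceRuntimeMBean)
cd('/JDBCServiceRuntime/myserver/JDBCDataSourceRuntimeMBeans/OUAF_DS')
active  = get('ActiveConnectionsCurrentCount')
waiting = get('WaitingForConnectionCurrentCount')
# Assumed calculation: demand = connections in use + requests waiting
# once the pool limit has been reached
print 'Pool demand    :', active + waiting, 'of', get('CurrCapacity')

disconnect()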

Once the data is collected, it is recommended that it be used for the following:

  • Connection Pool Sizes – The connection pools should be sized using the minimum values experienced as the floor and the maximum values, with some tolerance for growth, as the ceiling (a sketch follows this list).
  • Number of Servers to Set Up – For each channel, determine the number of servers based upon the numbers collected and the capacity of each server. Typically, a minimum of two servers should be set up for a basic high availability solution. Refer to the Oracle Maximum Availability Architecture for more advice.
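As a simple illustration of the sizing step above, here is a small Python sketch. The sample values and the 20% growth tolerance are arbitrary examples for illustration, not recommendations.

# Hypothetical demand samples (connections in use plus requests waiting)
# collected for one connection pool across peak and non-peak periods.
samples = [12, 18, 25, 40, 33, 15, 9, 44]

growth_tolerance = 0.20  # example headroom for growth, not a recommendation

initial_capacity = min(samples)                                # floor from the quietest period
maximum_capacity = int(max(samples) * (1 + growth_tolerance))  # peak plus headroom

print('Initial (minimum) pool size:', initial_capacity)
print('Maximum pool size:', maximum_capacity)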

Managing Your Environments

Mon, 2018-04-30 22:26

With the advent of easier and easier techniques for creating and maintaining Oracle Utilities environments, the number of environments will start to grow, increasing costs and introducing more risk into a project. This applies to on-premise as well as cloud implementations, though cloud implementations have more visible costs.

An environment is a copy of the Oracle Utilities product (one software installation and one database at a minimum).

To minimize your costs and optimize the number of environments to manage there are a few techniques that may come in handy:

  • Each Environment Must Be On Your Plan - Environments are typically used to support an activity or group of activities on some implementation plan. If the environment does not support any activities on a plan then it should be questioned.
  • Each Environment Must Have An Owner - When I started working in IT a long time ago, the CIO of the company I worked for noticed the company had over 1500 IT systems. To rationalize, he suggested shutting them all down and seeing who screamed to have theirs back on. That way he could figure out what was important to which part of the business. While this technique is extreme, it points out an interesting thought: if you can identify the owner of each environment, then that owner is responsible for determining the life of that environment, including its availability and performance. Consider removing environments not owned by anyone.
  • Each Environment Should Have a Birth Date And End Date - As an extension to the first point, each environment should have a date it is needed from and a date when it is no longer needed. It is possible for an environment to be perpetual, for example Production, but generally environments are needed for a particular time frame. For example, you might be creating environments to support progressive builds, where you would keep a window of builds (a minimal set, I hope). That would dictate the life-cycle of the environment. This is very common in cloud environments, where you can reserve capacity dynamically, so it can impose time limits to enforce regular reassessment.
  • Reuse Environments - I have been on implementations where individual users wanted their own personal environments. While this can be valid in some situations, it is much better to encourage reuse of environments across users and across activities. If you can plan out your implementation you can identify how to best reuse environments to save time and costs.
  • Ask Questions; Don't Assume - When agreeing to create and manage an environment, ask the above questions and more to ensure that the environment is needed and will support the project appropriately for the right amount of time. I have been on implementations where 60 environments existed initially and, after applying these techniques and others, the number was reduced to around 20. That saved a lot of costs.

So why the emphasis on keeping your environments to a minimal number, given that the techniques for building and managing them are getting easier? Well, no matter how easy it gets, keeping an environment consumes resources (computing and people), and keeping environments to a minimum keeps costs minimized.

The techniques outlined above apply to Oracle Utilities products but can be applied to other products with appropriate variations.

For additional advice on this topic, refer to the Software Configuration Management Series (Doc Id: 560401.1) whitepapers available from My Oracle Support.

Clarification of XAI, MPL and IWS

Mon, 2018-04-09 00:36

A few years ago, we announced that XML Application Integration (XAI) and the Multipurpose Listener (MPL) were being retired from the product and replaced with Inbound Web Services (IWS) and Oracle Service Bus (OSB) Adapters.

In the next service pack of the Oracle Utilities Application Framework, XAI and MPL will finally be removed from the product.

The following applies to this:

  • The MPL software and XAI Servlet will be removed from the code. This is the final step in the retirement process. The tables associated with XAI and MPL will not be removed from the product for backward compatibility with newer adapters. Maintenance functions that will be retained will be prefixed with Message rather than XAI. Menu items not retained will be disabled by default. Refer to release notes of service packs (latest and past) for details of the menu item changes.
  • Customers using XAI should migrate to Inbound Web Services using the following guidelines:
    • XAI Services using the legacy Base and CorDaptix adapters will be automatically migrated to Inbound Web Services. These services will be auto-deployed using the Inbound Web Services Deployment online screen or iwsdeploy utility.
    • XAI Services using the Business adapter (sic) can either migrate their definitions manually to Inbound Web Services or use a technique similar to the technique outlined in Converting your XAI Services to IWS using scripting. Partners should take the opportunity to rationalize their number of web services using the multi-operation capability in Inbound Web Services.
    • XAI Services using any other adapter than those listed above are not migratable, as they are typically internal services for use with the MPL.
  • Customers using the Multi-purpose Listener should migrate to Oracle Service Bus with the relevant adapters installed.

There are a number of key whitepapers that can assist in this process:

Security Rollups for CCB/OUAF

Sun, 2018-03-18 21:58

Oracle Utilities Customer Care And Billing and the Oracle Utilities Application Framework ship security rollups on a regular basis (especially for older releases of the products). These patch sets contain all the security patches in a small number of downloads (one for the CCB product and one for the OUAF product). Products other than CCB can install the OUAF patch sets to take advantage of the rollup.

The following rollups are available from support.oracle.com:

CCB Version   Patch Number   OUAF Version   Patch Number
2.3.1         27411229       2.2.0          26645120
2.4.0.1       27380195       4.2.0.1.0      26645171
2.4.0.2       27380216       4.2.0.2.0      26645183
2.4.0.3       27380238       4.2.0.3.0      26645095
2.5.0.1       27380273       4.3.0.1.0      26645209

For more information refer to the individual patches. For newer releases not listed, the patches are already included in the base releases so no additional effort is required.

Using the Infrastructure Version of Oracle WebLogic for Oracle Utilities Products

Sun, 2018-03-18 17:47

When using Oracle Utilities Application Framework V4.3.x with any Oracle Utilities product, you need to use the Oracle Fusion Middleware 12c Infrastructure version of Oracle WebLogic, not the vanilla release of Oracle WebLogic. The Oracle Fusion Middleware 12c Infrastructure version contains the Java Required Files (JRF) profile that is used by the Oracle Utilities Application Framework to display the enhanced help experience and for standardization within the Framework.

The installation experience for the Oracle Fusion Middleware 12c Infrastructure version is the same as for the vanilla Oracle WebLogic version, but it contains the JRF profile (applied via applyJRF) that provides the extra functionality and libraries necessary for the Oracle Utilities Application Framework to operate.

The Oracle Fusion Middleware 12c Infrastructure version contains the following additional functionality:

  • An additional set of Java libraries that are typically used by Oracle products to provide standard connectors and integration to Oracle technology.
  • Diagnostic frameworks (via the WebLogic Diagnostic Framework) that can be used with Oracle Utilities products to proactively detect issues and provide diagnostic information to reduce problem resolution times. This requires the profile to be installed and enabled on the domain post release. The standard Fusion Diagnostic Framework can be used with Oracle Utilities products.
  • Fusion Middleware Control is shipped as an alternative console for advanced configuration and monitoring.

As with all Oracle software, the Oracle Fusion Middleware 12c Infrastructure software is available from the Oracle Software Delivery Cloud.

Optimizing CMA - Linking the Jobs

Wed, 2018-03-14 00:20

One of the recent changes to the Configuration Migration Assistant (CMA) is the ability to configure the individual jobs to work as a group, reducing the amount of time and effort needed to migrate configuration data from a source system to a target. This is a technique we use in our Oracle Utilities Cloud implementations to reduce costs. Basically, after this configuration is complete, you just have to execute the F1-MGDIM (Migration Data Set Import Monitor) and F1-MGDPR (Migration Data Set Export Monitor) jobs to complete all your CMA needs.

This technique is available for Oracle Utilities Application Framework V4.3.0.4.0 and above using some new batch control features. The features used are changing the Enter algorithms on the state transitions and setting up Post Processing algorithms on the relevant batch controls. The latter will kick off each process within the same execution, reducing the need to execute each process individually.

Set Enter Algorithms

The first step is to configure the import process, which is a multi-step process, to auto-transition data where necessary to save time. This is done by editing the F1-MigrDataSetImport business object and setting the Enter Algorithm on the following states:

Status        Enter Algorithm
PENDING       F1-MGDIM-SJ
READY2COMP    F1-MGOPR-SJ
READY2APPLY   F1-MGOAP-SJ
APPLYING      F1-MGTAP-SJ
READYOBJ      F1-MGOPR-SJ
READYTRANS    F1-MGTPR-SJ

Save the business object to apply the changes.

Set Post Processing Algorithms

The next step is to set the Post Processing algorithms on the Import jobs to instruct the Monitor to run multiple steps within its execution.

Batch Control   Post Processing Algorithm
F1-MGOPR        F1-MGTPR-NJ
F1-MGTPR        F1-MGDIM-NJ
F1-MGOAP        F1-MGDIM-NJ (*)
F1-MGTAP        F1-MGDIM-NJ (*)

(*) Note: For multi-lingual solutions, consider adding an additional Post Processing algorithm, F1-ENG2LNGSJ, to copy any missing language entries.

Now you can run the monitors for import and export with minimal interaction, which simplifies the process.

Note: To take full advantage of this new configuration enable Automatically Apply on Imports.

Oracle Utilities Customer To Meter V2.6.0.1.0 is now available

Sun, 2018-02-11 21:19

With the release of Oracle Utilities Customer Care and Billing V2.6.0.1.0, the Oracle Utilities Customer To Meter product has also been updated to V2.6.0.1.0. This release is now available from Oracle Software Delivery Cloud.

The release notes available from the download site contain a full list of new, updated and deprecated functionality available in Oracle Utilities Customer To Meter V2.6.0.1.0 and Oracle Utilities Application Framework V4.3.0.5.0. Please refer to these documents for details.

The documentation also covers upgrading from previous versions of Oracle Utilities Customer Care And Billing as well as Oracle Utilities Customer To Meter V2.6.0.0.0.

Oracle Utilities Customer Care and Billing 2.6.0.1.0 Available

Thu, 2018-02-01 22:09

Oracle Utilities Customer Care And Billing V2.6.0.1.0 is now available from My Oracle Support as a patch, or as a complete download from Oracle Software Delivery Cloud. This release uses Oracle Utilities Application Framework V4.3.0.5.0. The release notes available from those download sites contain a full list of new, updated and deprecated functionality available in Oracle Utilities Customer Care And Billing V2.6.0.1.0 and Oracle Utilities Application Framework V4.3.0.5.0. Please refer to these documents for details.

The documentation also covers upgrading from previous versions of Oracle Utilities Customer Care And Billing.

Oracle Utilities Application Framework V4.3.0.5.0 Release Summary

Mon, 2018-01-29 20:51

The latest release of the Oracle Utilities Application Framework, namely 4.3.0.5.0 (or 4.3 SP5 for short), will be included in new releases of Oracle Utilities products over the next few months. This release is quite diverse, with a range of new and improved capabilities that can be used by implementations of the new releases.

The key features included in the release include the following:

  • Mobile Framework release - The initial release of a new REST based channel to allow Oracle Utilities products to provide mobile device applications. This release is a port of the Mobile Communication Platform (MCP) used in the Oracle Mobile Workforce Management product to the Oracle Utilities Application Framework. This initial release is restricted to allowing Oracle Utilities products to provide mobile experiences for use within an enterprise. As with other channels in the Oracle Utilities Application Framework, it can be deployed alone or in conjunction with other channels.
  • Support For Chrome for Business - In line with Oracle direction, the Oracle Utilities Application Framework supports Chrome for Business as a browser alternative. A new browser support policy has also been introduced to clarify the support arrangements for Chrome and other supported browsers. Check individual product release notes for supported versions.
  • Improved Security Portal - To reduce effort in managing security definitions within the product, the application service portal has been extended to show secured objects or objects that an application service is related to.
  • Attachment Changes - In the past, adding attachments to an object required custom UI maps to link attachment types to objects. In this release, a generic zone has been added, reducing the need for any custom UI Maps. The attachment object now also records the extension of the attachment to reduce issues where an attachment type can have multiple extensions (e.g. DOC vs DOCX).
  • Support for File Imports in Plug-In Batch - In past releases, Plug-In Batch was introduced as a configuration based approach to remove the need for Java programming for batch processing. In the past, SQL processing and file exports were supported. In this release, importing files in CSV, fixed format or XML format is now supported using Plug-In Batch (using Groovy based extensions). Samples are supplied with the product that can be copied and altered accordingly.
  • Improvements in identifying related To Do's - The logic determining related To Do's has been enhanced to provide additional mechanisms for finding related To Do's to improve closing related work. This will allow a wider range of To Do's to be found than previously.
  • Web Service Categories - To aid in API management (e.g. when using Integration Cloud Service and other cloud services) Web Service categories can be attached to Inbound Web Services, Outbound Message Types and legacy XAI services that are exposed via Inbound Web Services. A given web service or outbound message can be associated with more than one category. Categories are supplied with the product release and custom categories can be added.
  • Extended Oracle Web Services Manager Support - In past releases Oracle Web Services Manager could provide additional transport and message security for Inbound Web Services. In this release, Oracle Web Services Manager support has been extended to include Outbound Messages and REST Services.
  • Outbound Message Payload Extension - In this release it is possible to include the Outbound Message Id as part of the payload as a reference for use in the target system.
  • Dynamic URL support in Outbound Messages - In the past, Outbound Message destinations were static for the environment. In this release, the URL used for the destination can vary according to the data or be dynamically assembled programmatically if necessary.
  • SOAP Header Support in Outbound Messages - In this release it is possible to dynamically set SOAP Header variables in Outbound Messages.
  • New Groovy Imports Step Type - A new step type has been introduced to define classes to be imported for use in Groovy members. This promotes reuse and allows for coding without the need for the fully qualified package name in Groovy Library and Groovy Member step types. 
  • New Schema Designer - A newly redesigned Schema Editor has been introduced to reduce total cost of ownership and improve schema development. Color coding has now been included in the raw format editor.
  • Oracle Jet Library Optimizations - To improve integration with the Oracle Jet libraries used by the Oracle Utilities Application Framework, a new UI Map fragment has been introduced to include in any Jet based UI Map to reduce maintenance costs.
  • YUI Library Removal - With the desupport of the YUI libraries, they have been removed from this release in the Oracle Utilities Application Framework. Any custom code directly referencing the YUI libraries should use the Oracle Utilities Application Framework equivalent function.
  • Proxy Settings now at JVM level - In past releases, proxy settings were required on individual connections where needed. In this release, the standard HTTP proxy JVM options (system properties such as http.proxyHost and http.proxyPort) are now supported at the container/JVM layer to reduce maintenance costs.

This is just a summary of some of the new features in the release. A full list is available in the release notes of the products using this service pack.

Note: Some of these enhancements have been back ported to past releases. Check My Oracle Support for those patches.

Over the next few weeks, I will be writing articles about some of these enhancements to illustrate the new capabilities.

Spectre and Meltdown Vulnerability and Oracle Utilities Products

Mon, 2018-01-29 16:18

As you may or may not be aware, a set of hardware based security vulnerabilities known as Spectre and Meltdown have been identified. Vendors are quickly issuing software patches to address these hardware based vulnerabilities. Oracle has issued a number of patches to address this issue in its January 2018 patchsets.

Customers should refer to Addendum to the January 2018 CPU Advisory for Spectre and Meltdown (Doc Id: 2347948.1) for details of the patches available to address this issue and the state of patches for other products.

At this time, no patches are expected for Oracle Utilities products, as the vulnerabilities are addressed by applying the patches outlined in the above article. It is highly recommended that Oracle Utilities customers apply the patches outlined in that article to protect their systems. For customers on non-Oracle platforms, it is recommended to refer to the relevant vendor site for any operating system or related patches for those platforms.

Edge Conference 2018 is coming - Technical Sessions

Mon, 2018-01-29 16:16

It is that time of year again: Customer Edge conference time. This year we will once again hold a technical stream which focuses on the Oracle Utilities Application Framework and related products. Once again, I will be holding the majority of the sessions at the various conferences.

The sessions this year are focused on giving valuable advice as well as a window into our future plans for the various technologies we are focusing upon. As usual, there will be a general technical session covering our road map as well as a specific set of sessions targeting important topics. The technical sessions planned for this year include:

  • Reducing Your Storage Costs Using Information Life-cycle Management - Maintaining storage and satisfying business data retention rules can be challenging as costs increase. The Oracle Information Life-cycle Management solution can help simplify your storage solution and harness the power of the hardware and software to reduce storage costs.
  • Integration using Inbound Web Services and REST with Oracle Utilities - Integration is a critical part of any implementation. The Oracle Utilities Application Framework has a range of facilities for integrating from and to other applications. This session will highlight all the facilities and where they are best suited to be used.
  • Optimizing Your Implementation - Implementations have a wide range of techniques available to implement successfully. This session will highlight a group of techniques that have been used by partners and our cloud implementations to reduce Total Cost Of Ownership.
  • Testing Your On-Premise and Cloud Implementations - Our Oracle Testing solution is popular with on-premise implementations. This session will outline the current testing solution as well as our future plans for both on premise and in the cloud.
  • Securing Your Implementations - With the increase in cybersecurity concerns in the industry, a number of key security enhancements have been made available in the product to support simple or complex security setups for on premise and cloud implementations.
  • Turbocharge Your Oracle Utilities Product Using the Oracle In-Memory Database Option - The Oracle Database In-Memory option allows both OLTP and Analytics to run much faster using advanced techniques. This session will outline the capability and how it can be used in existing on premise implementations to provide superior performance.
  • Mobile Application Framework Overview - The Oracle Utilities Application Framework has introduced a new Mobile Framework for use in the Oracle Utilities products. This session gives an overview of the mobile framework capabilities for future releases.
  • Developing Extensions using Groovy - Groovy has been added as a supported language for on premise and cloud implementations. This session outlines the way Groovy can be used in building extensions. Note: This session will be very technical in nature.
  • Ask Us Anything Session - Interaction with the customer and partner community is key to the Oracle Utilities product lines. This interactive session allows you (the customers and partners) to ask technical resources within Oracle Utilities the questions you would like answered. The session will also allow Oracle Utilities to discuss directions and poll the audience on key initiatives to help plan road maps.

This year we have decided to not only discuss capabilities but also to give an idea of how we use those facilities in our own cloud implementations to reduce operating costs, for you to use as a template for on-premise and hybrid implementations.

For customers and partners interested in attending the USA Edge Conference registration is available.


Happy New Year to my blog readers

Sun, 2018-01-07 16:25

Welcome to 2018 for the ShortenSpot readers. This year is looking like another exciting year for the Oracle Utilities Application Framework and a new direction for the blog overall. In the past the blog has been a mixture of announcements and some advice with examples. Whilst it will still provide important technical announcements, this year we plan to have lots and lots of exciting advice with lots of example code to illustrate some amazing features you can use in the cloud, hybrid and on-premise implementations to inspire you to use the facilities provided to you.

This year we will also be doing a major refit of all the whitepapers, including rationalizing their number (it was fast approaching 50 at one stage) and making them more relevant with more examples. This will also remove the duplication those whitepapers have with the online documentation, which is now the main source of advice for implementations. The whitepapers will act more as supplemental material, complementary to the online documentation.

The next few months are the busy months as we also prepare for the annual Edge conferences in the USA, APAC and Europe, which will include a technical stream with a series of sessions on major technical features and some implementation advice. This year we decided to make it more beneficial for you by focussing on key implementation challenges and offering advice on how to solve implementation issues and business requirements. Each session will cover capabilities, offer general direction and share advice garnered from our cloud implementations and from our implementations and partners over the years. Hopefully you will come back from the sessions with some useful advice. The details of the 2018 Oracle Utilities Edge Customer Conference Product Forum are located at this site.

This year looks like an amazing year, and I look forward to publishing a lot more often to benefit us all.


Oracle Help Patches

Mon, 2017-12-18 16:25

In Oracle Utilities Application Framework V4.3.0.1.0, we introduced the new Oracle Help engine to provide a better online help experience for online users. Due to a conflict in common libraries, a series of patches has been released to ensure the correct instances of the libraries are used in a number of Oracle Utilities Application Framework V4.3.x releases. The patches outlined below allow the Oracle Help engine to continue to be used with the correct libraries.

Note: These patches apply to Oracle WebLogic 12.x installations only.

The following patches, available from My Oracle Support, apply to the following releases:

Version    Patch     Comments
4.3.0.1.0  27051899  UPDATE OHELP TO BE THIN CLIENT
4.3.0.2.0  26354064  COPY OF 27051899 - UPDATE OHELP TO BE THIN CLIENT
4.3.0.3.0  26354238  COPY OF 26354064 - COPY OF 27051899 - UPDATE OHELP TO BE THIN CLIENT
4.3.0.4.0  26354259  COPY OF 26354238 - COPY OF 26354064 - COPY OF 27051899 - UPDATE OHELP TO BE THIN CLIENT

These patches migrate the online help to use the Thin Client libraries. Customers on Oracle WebLogic 12.2 should also apply patch 27112347 (OPTIONAL SPECIAL PATCH FOR REMOVAL OF OHW THICK CLIENT JAR FILES - 4.3 SP1,2,3,4), available from My Oracle Support.

The patches apply in the following ways:

  • If you are on Oracle WebLogic 12.1.3, the patch will ensure the correct Oracle Help libraries are used.
  • If you are on Oracle WebLogic 12.2.1, the patch will replace the default libraries with the thin client libraries. The additional patch (27112347) outlined above will clean up any conflicting libraries.

Customers on earlier versions of the Oracle Utilities Application Framework do not need to apply the above patches. Customers on Oracle Utilities Application Framework V4.3.0.5.0 and above do not need to apply them either, as the fix is already included in those releases.

Updated Whitepapers for 4.3.0.5.0

Sun, 2017-12-10 18:49

With the anticipated releases of the first products based upon Oracle Utilities Application Framework V4.3.0.5.0 starting to appear soon, the first set of whitepapers has been updated to reflect new functionality, updated functionality and experiences from the field and our Oracle Utilities cloud implementations.

The following whitepapers have been updated and are now available from My Oracle Support:

  • ConfigTools Best Practices (Doc Id: 1929040.1) - This has been updated with the latest advice from our implementation and cloud teams. There are a few new sections around Groovy and a new section highlighting the ability to write batch programs using the Plug-In Batch architecture. In Oracle Utilities Application Framework 4.3.0.5.0, we added the capability to implement file import functionality using Groovy in Plug-In Batch. We provide a mechanism to support delimited, fixed or XML based files within the algorithms. Samples of each are supplied in the product.
  • Identity Management Suite Integration (Doc Id: 1375600.1) - This whitepaper has been greatly simplified to reflect the latest Oracle Identity Management Suite changes and the newer interface that has been migrated from XAI to IWS. The new interface has two new algorithms which are used in our cloud implementations and are now part of the F1-IDMUser object supplied with the product.
    • Generation of Authorization Identifier - The F1-IDMUser object now supports generating the unique authorization identifier (the 8 character one) if the identifier is not provisioned from Oracle Identity Manager itself. This provides some flexibility in where this identifier can be provisioned as part of the Oracle Identity Manager solution. In the past, the only place this was available was within Oracle Identity Manager itself. This enhancement means that the user can be provisioned from Oracle Identity Manager or as part of the Identity Management interface to the Oracle Utilities Application Framework.
    • Duplication of User now supported within the interface - In past releases, the use of template users was a common way of quickly provisioning users. This release also allows the duplication function within the User object to be used in isolation or in conjunction with template users for more flexible provisioning options. If this method is used, a characteristic is added to the duplicated user to indicate which user it was duplicated from (for auditing purposes).

As we get closer to the release of products using Oracle Utilities Application Framework 4.3.0.5.0, you will see more and more updated whitepapers reflecting the new and improved changes in the releases.

Batch History Portal - Driving the Portal with a flexible search

Sun, 2017-11-26 22:01

In a previous post, I illustrated a zone displaying batch history. This is part of a prototype I am working on to illustrate some advanced UI features in the ConfigTools scripting language and zone features. The zone looked like this:

Example Batch History Portal

Many people asked me for the zone definition, so here are the steps I took to create the zone:

  • I created a service script, called CMLOSM, that returns the full message from the Batch Level of Service:

10: edit data
     if ("parm/batchControlId = $BLANK")
           terminate;
     end-if;
     move "parm/batchControlId" to "BatchLevelOfService/input/batchControlId";
     //
     // Get Level Of Service
     //
     invokeBS 'F1-BatchLevelOfService' using "BatchLevelOfService";
     move "BatchLevelOfService/output/levelOfService" to "parm/levelOfService";
     //
     // Get Level Of Service Description
     //
     move 'F1_BATCH_LEVEL_OF_SERVICE_FLG' to "LookupDescription/fieldName";
     move "BatchLevelOfService/output/levelOfService" to "LookupDescription/fieldValue";
     invokeBS 'F1-GetLookupDescription' using "LookupDescription";
     move "LookupDescription/description" to "parm/levelOfServiceDesc";
     //
     // Get Message
     //
     move "BatchLevelOfService/output/messageCategory" to "ReturnMessage/input/messageCategory";
     move "BatchLevelOfService/output/messageNumber" to "ReturnMessage/input/messageNumber";
     move '0' to "ReturnMessage/input/messageParmCollCount";
     move "$LANGUAGE" to "ReturnMessage/input/language";
     //
     // Set Substitution Parms.. I have only coded 4 for now
     //
     if ("string(BatchLevelOfService/output/messageParameters/parameters[1]/parameterValue) != $BLANK")
        move "BatchLevelOfService/output/messageParameters/parameters[1]/parameterValue" to "ReturnMessage/input/messageParms/messageParm1";
        move '1' to "ReturnMessage/input/messageParmCollCount";
     end-if;
     if ("string(BatchLevelOfService/output/messageParameters/parameters[2]/parameterValue) != $BLANK")
        move "BatchLevelOfService/output/messageParameters/parameters[2]/parameterValue" to "ReturnMessage/input/messageParms/messageParm2";
         move '2' to "ReturnMessage/input/messageParmCollCount";
     end-if;
     if ("string(BatchLevelOfService/output/messageParameters/parameters[3]/parameterValue) != $BLANK")
        move "BatchLevelOfService/output/messageParameters/parameters[3]/parameterValue" to "ReturnMessage/input/messageParms/messageParm3";
        move '3' to "ReturnMessage/input/messageParmCollCount";
     end-if;
     if ("string(BatchLevelOfService/output/messageParameters/parameters[4]/parameterValue) != $BLANK")
        move "BatchLevelOfService/output/messageParameters/parameters[4]/parameterValue" to "ReturnMessage/input/messageParms/messageParm4";
        move '4' to "ReturnMessage/input/messageParmCollCount";
     end-if;
     //
     // Compile the Message
     //
     invokeBS 'F1-ReturnMessage' using "ReturnMessage";
     move "ReturnMessage/output/expandedMessage" to "parm/fullMessage";
end-edit;

Schema:

<schema>
    <batchControlId dataType="string"/>  
    <levelOfService mdField="F1_BATCH_LEVEL_OF_SERVICE_FLG"/>  
    <levelOfServiceDesc mdField="LOS_DESC"/>  
    <fullMessage dataType="string"/>
</schema>

Data Areas:

Schema Type       Object                   Data Area Name
Business Service  F1-BatchLevelOfService   BatchLevelOfService
Business Service  F1-GetLookupDescription  LookupDescription
Business Service  F1-ReturnMessage         ReturnMessage
  • I created a script, called CMCOLOR, that sets the color for the level of service:

10: move 'black' to "parm/foreColor";
11: move 'white' to "parm/bgColor";
20: edit data
     if ("parm/levelOfService = 'DISA'")
         move 'white' to "parm/foreColor";
         move '#808080' to "parm/bgColor";
     end-if;
end-edit;
30: edit data
     if ("parm/levelOfService = 'ERRO'")
         move 'white' to "parm/foreColor";
         move 'red' to "parm/bgColor";
     end-if;
end-edit;
40: edit data
     if ("parm/levelOfService = 'NORM'")
         move 'white' to "parm/foreColor";
         move 'green' to "parm/bgColor";
     end-if;
end-edit;
50: edit data
     if ("parm/levelOfService = 'WARN'")
         move 'black' to "parm/foreColor";
         move 'yellow' to "parm/bgColor";
     end-if;
end-edit;

Schema:

<schema>
    <levelOfService mdField="F1_BATCH_LEVEL_OF_SERVICE_FLG" dataType="lookup" lookup="F1_BATCH_LEVEL_OF_SERVICE_FLG"/>  
    <foreColor dataType="string" mdField="COLOR"/>  
    <bgColor dataType="string" mdField="BG_COLOR"/>
</schema>

  • I created a script, called CMLOSRF, to post-process the records for advanced filtering:

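     //
     // No Level Of Service filter supplied: include the row by default,
     // but exclude Disabled rows when Hide Disabled is set to 'Y'
     //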
     if ("string(parm/levelOfServiceFilter) = $BLANK")
       if ("parm/hideDisabled = $BLANK")
           move 'true' to "parm/result";
           terminate;
       end-if;
       move 'true' to "parm/result";
       if ("string(parm/hideDisabled) = 'Y'")
         if ("string(parm/levelOfService) = 'DISA'")
            move 'false' to "parm/result";
            terminate;
         end-if;
       end-if;
       terminate;
     end-if;
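     //
     // A filter was supplied: only include rows whose Level Of Service
     // matches the filter value
     //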
     move 'false' to "parm/result";
     if ("parm/levelOfServiceFilter = parm/levelOfService")
           move 'true' to "parm/result";
     end-if;

Schema:

<schema>
    <levelOfService dataType="string"/>  
    <levelOfServiceFilter dataType="string"/>  
    <hideDisabled/>  
    <result dataType="string"/>
</schema>

  • I then built a zone called CMBH01 which has the following attributes:
Parameter                    Value
Description                  Batch History Query
Zone Type                    F1-DE
Application Service          (Choose an appropriate one)
Width                        Full
Height Of Report             60
Display Row Number Column    false
User Filter 1                label=BATCH_CD likeable=S divide=below
User Filter 2                label=LEVEL_OF_SERVICE type=LOOKUP lookup=F1_BATCH_LEVEL_OF_SERVICE_FLG
User Filter 3                label='Hide Disabled' type=LOOKUP lookup=F1_YESNO_FLG divide=below
User Filter 4                label=F1_BATCH_CTGY_FLG type=LOOKUP lookup=F1_BATCH_CTGY_FLG
User Filter 5                label=F1_BATCH_CTRL_TYPE_FLG type=LOOKUP lookup=F1_BATCH_CTRL_TYPE_FLG
No SQL Execute               nosql=IGNORE
Initial Display Columns      C1 C2 C8 C5 C12 C10 C13 C14
SQL 1 Broadcast Columns      BATCH_CD=C1
SQL Statement 1              SELECT UNIQUE I.BATCH_CD, B.F1_BATCH_CTRL_TYPE_FLG, B.F1_BATCH_CTGY_FLG,
                             B.LAST_UPDATE_DTTM, B.NEXT_BATCH_NBR
                             FROM CI_BATCH_INST I, CI_BATCH_CTRL B
                             WHERE I.BATCH_CD = B.BATCH_CD
                             [ (F1) AND I.BATCH_CD LIKE :F1]
                             [ (F4) AND B.F1_BATCH_CTGY_FLG = :F4]
                             [ (F5) AND B.F1_BATCH_CTRL_TYPE_FLG = :F5]
Column 1 for SQL 1           source=SQLCOL sqlcol=BATCH_CD label=BATCH_CD
Column 2 for SQL 1           source=FKREF fkref='F1-BTCCT' input=[BATCH_CD=BATCH_CD] label=BATCH_CD_DESCR
Column 3 for SQL 1           source=BS bs='F1-BatchLevelOfService' input=[input/batchControlId=C1]
                             output=output/levelOfService suppress=true suppressExport=true
Column 4 for SQL 1           source=BS bs='F1-GetLookupDescription'
                             input=[fieldName='F1_BATCH_LEVEL_OF_SERVICE_FLG' fieldValue=C3]
                             label=LEVEL_OF_SERVICE color=C6 bgColor=C7 output=description
                             suppress=true suppressExport=true
Column 5 for SQL 1           source=SS ss='CMLOSM' input=[batchControlId=C1]
                             label=LEVEL_OF_SERVICE_REASON output=fullMessage
Column 6 for SQL 1           source=SS ss='CMCOLOR' input=[levelOfService=C3] label=COLOR
                             output=foreColor suppress=true suppressExport=true
Column 7 for SQL 1           source=SS ss='CMCOLOR' input=[levelOfService=C3] label=BG_COLOR
                             output=bgColor suppress=true suppressExport=true
Column 8 for SQL 1           source=SPECIFIED spec=['<div style=" font-weight:bold;
                             background-clip: content-box; border-radius: 10px; padding: 2px 8px;
                             text-align: center; background-color:' C7 '; color:' C6 ';">' C4 '</div>']
                             label=LEVEL_OF_SERVICE
Column 9 for SQL 1           source=SQLCOL sqlcol=2 label=F1_BATCH_CTRL_TYPE_FLG suppress=true suppressExport=true
Column 10 for SQL 1          source=BS bs='F1-GetLookupDescription'
                             input=[fieldName='F1_BATCH_CTRL_TYPE_FLG' fieldValue=C9]
                             label=F1_BATCH_CTRL_TYPE_FLG output=description
Column 11 for SQL 1          source=SQLCOL sqlcol=3 label=F1_BATCH_CTGY_FLG suppress=true suppressExport=true
Column 12 for SQL 1          source=BS bs='F1-GetLookupDescription'
                             input=[fieldName='F1_BATCH_CTGY_FLG' fieldValue=C11]
                             label=F1_BATCH_CTGY_FLG output=description
Column 13 for SQL 1          source=SQLCOL sqlcol=4 label='Last Executed' type=DATE/TIME
Column 14 for SQL 1          source=SQLCOL sqlcol=5 label=NEXT_BATCH_NBR type=NUMBER dec=0
Allow Row Service Script 1   ss=CMLOSRF input=[levelOfServiceFilter=F2 levelOfService=C3 hideDisabled=F3]
                             output=result

Saving the zone and adding it to a menu will make the zone available from that menu. Make sure the Application Service you use is connected to your users via a user group so that they can access the zone.

Understanding the solution

I want to also make you understand a few of the decisions I made in building this zone up:

  • The zone type was just a personal choice (F1-DE). In a typical use case you would display the batch controls you favor using the filters. By using F1-DE, the SQL is run without asking for filters first, as I assume you would start with a full list and then use filters to refine what you want to see. If you get to a smaller subset, you can use the Save View functionality to set those as your preferred filters. In other zone types you can filter first and then display the records; it is up to your personal preferences and business requirements.
  • The solution was built up over time. I started with some basic SQL and then started looking at scripting to reformat and provide advanced functionality in the zone. This is a good example of development of zones. You start simple and build more and more into it until you are happy with the result.
  • The SQL statement will return the list of batch controls that have been executed at least once. This is intentional, to filter out jobs that an implementation never runs. We supply lots of jobs in a product to cover all situations in the field, but I have not encountered a customer that runs them all. As I am looking for at least one execution, I added the UNIQUE clause to ignore multiple executions.
  • I added Batch Category and Batch Control Type to allow filtering on things that are important at different stages of an implementation as well as targeting products that have a mix of different job types.
  • The Last Executed Date and Time is mainly for information purposes, but it can also be used as a sortable column so you can quickly find jobs that were executed recently.
  • The Next Run Number might seem a strange field to include but it gives you an idea of batch controls that have been executed more frequently. In the screen above I can see that F1-MGDIM has been executed far more than the other batch controls.
  • There are a lot of suppressed columns in the list above. This is intentional, as the values of these columns can be used by other columns. For example, Column 6 and Column 7 calculate the colors for the Level Of Service. These are never displayed in the list or export, as they are intermediate columns used in the formatting of Column 8.
  • The Allow Row Service Script is really handy as it allows for complex processing outside the SQL. For example, as I do not know the Level Of Service value in the SQL (it is calculated) and I want to include it as a filter, I can use the Allow Row Service Script to determine whether a returned row is actually to be included in the final result set (even though the SQL would return it).
  • You might have noticed that I actually hardcoded some labels. This is typically not recommended; I would normally create custom fields to hold the labels so that I can translate them or change the text without changing this zone. If you ever create your own zones, I would strongly suggest avoiding hardcoding. I just used it here to make publishing this article easier.
  • The code in the service scripts is really just an example and is probably not optimal. I am sure you are looking at the code and working out better ways of doing it, and that is fine. The code is just to give you some ideas.
  • The script CMLOSM, which builds the full Level Of Service message, is not really optimal, and I am sure there are easier methods to achieve the same result, but it is functional for illustrative purposes.
  • You will notice that Column 8 is actually some dynamic HTML enclosed in div tags. That is correct: it is possible to use some HTML in a column for formatting. Just so you know, the HTML in ANY column has to conform to the HTML whitelist that is enabled across the product. You cannot put just any code in there; you are limited to formatting and some basic post-processing. My development team helped me with some of the possibilities, as I wanted a specific look without resorting to graphics. It is both visual and functional (for sorting).
  • You might also notice a Broadcast column (BATCH_CD). That is so this zone can be part of a larger solution, which I will expand upon in future blog entries to show off some very nice new functionality (actually, most of it is available already).

Extendable Lookups vs Lookups

Thu, 2017-10-19 20:32

The Oracle Utilities Application Framework avoids hardcoding of values for maintenance, multi-lingual and configuration purposes. One of the features that supports this is the Lookup object, which lists the valid values for a field (and associated related values such as the description/override description and the java code name for SDK use). Lookups can be exclusively owned by the product (where you can only change the override description and not add any additional values) or can be customized, where you can add new values. You are also free to use F1-GetLookupDescription to get the description for a lookup value in any query zone, business service, business object (though you can do this on the element definition directly) or script.

There is a maintenance function to maintain Lookups. For example:

Example Lookup

The Lookup object is ideal for simple fields with valid values, but it cannot be extended if you need to add additional elements. For this reason, the concept of an Extendable Lookup was introduced. It allows implementations to build complex configurations, similar to a lookup, and introduce extended features for their custom configuration settings. To use Extendable Lookup, the following is typically done:

  • You create a Business Object based upon the F1-EXT LKUP Maintenance Object. You can define the structure you want to configure for the lookup. There are numerous examples of this in the base product that you can use to get ideas for what you might need to support. It is highly recommended to use UI Hints on the BO Schema to build your user interface for the lookup.
  • You can refer to the Extendable Lookup using the F1-GetExtLookUpVal common business service, which can return up to five attributes from your Extendable Lookup (if you need more, you can develop your own call to return the values directly - for example, calling the BO directly).

Here are some delivered examples of Extendable Lookups:

Example Extendable Lookups

Extendable Lookup is very powerful where you not only want to put valid values in a list but also want to configure additional settings to influence the outcomes of your custom code. It is recommended to use Extendable Lookup instead of Lookup when the requirements for the valid value configuration are beyond what Lookup can record.

For more information on both Lookups and Extendable Lookups, refer to the online documentation.

Converting your XAI Services to IWS using scripting

Tue, 2017-10-10 17:14

With the deprecation announcement surrounding XML Application Integration (XAI), it is possible to convert to Inbound Web Services (IWS) manually or using a simple script. This article outlines the process of building a script to bulk transfer the definitions from XAI to IWS.

Ideally, it is recommended that you migrate each XAI Inbound Service to Inbound Web Services manually, so that you can take the opportunity to rationalize your services and reduce your maintenance costs, but if you simply want to transfer over to the new facility in bulk, this can be done via a service script that migrates the information.

This can be done using a number of techniques:

  • You can drive the migration via a query portal that can be called via a Business Service from a BPA or batch process.
  • You can use the Plug-In Batch to pump the services through a script to save time.

In this article I will outline the latter example to illustrate the migration as well as highlight how to build a Plug In Batch process using configuration alone.

Note: The code and design in this article are provided for illustrative purposes and only cover the basic functionality needed for the article. Variations on this design are possible through the flexibility and extensibility of the product. These are not examined in any detail except to illustrate the basic process.

Note: The names of the objects in this article are just examples. Alternative values can be used, if desired.

Design

The design for this is as follows:

  • Build a Service script that will take the XAI Inbound Service identifier to migrate and perform the following
    • Read the XAI Inbound Service definition to load the variables for the migration
    • Check that the XAI Inbound Service is valid to be migrated. This means it must be owned by Customer Modification and use the Business Adaptor XAI Adapter.
    • Transfer the XAI Inbound Service definition to the relevant fields in the Inbound Web Service and add the service, optionally activating it ready for deployment. The deployment activity itself should not be part of the script, as it is not usually a per-service activity.
    • By default the following is transferred:
      • The Web Service name is the Service Name on the XAI Inbound Service, not the identifier, as the identifier is randomly generated.
      • Common attributes are transferred across from the existing definition
      • A single operation, with the same name as the Inbound Web Service, is created as a minimalist migration option.
  • Build a Plug In Batch definition to include the following:
    • The Select Records algorithm will identify the list of services to migrate. Note that only services owned by the Customer Modification (CM) owner should be migrated, as record ownership should be respected.
    • The Service Script built above will be used in the Process Records algorithm.

The following diagram illustrates the overall process:

Plug In Development Process

The design of the Plug In Batch will only work for Oracle Utilities Application Framework V4.3.0.4.0 and above, but the Service Script used for the conversion can be used with any implementation of Oracle Utilities Application Framework V4.2.0.2.0 and above. On older versions you can hook the script into another script, such as a BPA, or drive it from a query zone.

Note: This process should ONLY be used to migrate XAI Inbound Services that are Customer Modifications. Services owned by the product itself should not be migrated to respect record ownership rules.

XAI Inbound Service Conversion Service Script

The first part of the process is to build a service script that establishes an Inbound Web Service for an XML Application Integration Inbound Service. To build the script the following process should be used:

  • Create Business Objects - Create Business Objects, using Business Object maintenance, based upon XAI SERVICE (XAI Inbound Service) and F1-IWSSVC (Inbound Web Service), to be used as Data Areas in your script. You can leave the schemas as generated, with all the elements defined, or remove the elements you do not need (as this is only a transient piece of functionality). I will assume that the schema is the default generated by the Schema generator in the Dashboard. Remember to allocate the Application Service for security purposes (I used F1-DFLTS as that is provided in the base metadata). The settings for the Business Objects are summarized as follows:
    XAI Inbound Service BO:
      • Business Object: CMXAIService
      • Description: XAI Service Conversion BO
      • Detailed Description: Conversion BO for XML Application Integration
      • Maintenance Object: XAI SERVICE
      • Application Service: F1-DFLTS
      • Instance Control: Allow New Instances

    IWS Service BO:
      • Business Object: CMIWSService
      • Description: IWS Service Conversion BO
      • Detailed Description: Conversion BO for Inbound Web Services
      • Maintenance Object: F1-IWSSVC
      • Application Service: F1-DFLTS
      • Instance Control: Allow New Instances
  • Build Script - Build a Service Script with the following attributes:

    • Script: CMConvertXAI
    • Description: Convert an XAI Service to IWS Service
    • Detailed Description:

      Script that converts the passed-in XAI Service Id into an Inbound Web Service:
      - Reads the XAI Inbound Service definition
      - Copies the relevant attributes to the Inbound Web Service
      - Adds the Inbound Web Service

    • Script Type: Service Script
    • Application Service: F1-DFLTAPS
    • Script Engine Version: 3.0
    • Data Area: CMIWSService (Data Area Name: IWSService)
    • Data Area: CMXAIService (Data Area Name: XAIService)
    • Schema (the input value and some temporary variables):

<schema>
  <xaiInboundService mdField="XAI_IN_SVC_ID"/>
  <operations type="group">
    <iwsName/>  
    <operationName/>  
    <requestSchema/>  
    <responseSchema/>  
    <requestXSL/>  
    <responseXSL/>  
    <schemaName/>  
    <schemaType/>  
    <transactionType/>  
    <searchType/>
  </operations>
</schema>

The Data Area section looks like this:

  • Add the following code to your script (each numbered step is an individual edit-data step):

Note: The code below is very basic and there are optimizations that can be done to make it smaller and more efficient. This is just some sample code to illustrate the process.

10: edit data
     // Jump out if the inbound service Id is blank
     if ("string(parm/xaiInboundService) = $BLANK")
       terminate;
     end-if;
end-edit;
20: edit data
     // populate the key value from the input parameter
     move "parm/xaiInboundService" to "XAIService/xaiServiceId";
     // invoke the XAI Service BO to read the service definition
     invokeBO 'CMXAIService' using "XAIService" for read;
     // Check that the Service Name is populated at a minimum
     if ("XAIService/xaiInServiceName = $BLANK")
       terminate;
     end-if;
     // Check that the Service type is correct
     if ("XAIService/xaiAdapter != BusinessAdaptor")
       terminate;
     end-if;
     // Check that the owner flag is CM
     if ("XAIService/customizationOwner != CM")
       terminate;
     end-if;
end-edit;
30: edit data
     // Copy the key attributes from XAI to IWS
     move "XAIService/xaiInServiceName" to "IWSService/iwsName";
     move "XAIService/description" to "IWSService/description";
     move "XAIService/longDescription" to "IWSService/longDescription";
     move "XAIService/isTracing" to "IWSService/isTracing";
     move "XAIService/postError" to "IWSService/postError";
     move "XAIService/shouldDebug" to "IWSService/shouldDebug";
     move "XAIService/xaiInServiceName" to "IWSService/defaultOperation";
     // Assume the service will be Active (this can be altered)
     // For example, set this to false to allow for manual checking of the
     // setting. That way you can confirm the service is set correctly and then
     // manually set Active to true in the user interface.
     move 'true' to "IWSService/isActive";
     // Process the list for the operation to the temporary variables in the schema
     move "XAIService/xaiInServiceName" to "parm/operations/iwsName";
     move "XAIService/xaiInServiceName" to "parm/operations/operationName";
     move "XAIService/requestSchema" to "parm/operations/requestSchema";
     move "XAIService/responseSchema" to "parm/operations/responseSchema";
     move "XAIService/inputXSL" to "parm/operations/requestXSL";
     move "XAIService/responseXSL" to "parm/operations/responseXSL";
     move "XAIService/schemaName" to "parm/operations/schemaName";
     move "XAIService/schemaType" to "parm/operations/schemaType";
     // move "XAIService/transactionType" to "parm/operations/transactionType";
     move "XAI/searchType" to "parm/operations/searchType";
     // Add the parameters to the operation list object
     move "parm/operations" to "IWSService/+iwsServiceOperation";
end-edit;
40: edit data
     // Invoke BO for Add
     invokeBO 'CMIWSService' using "IWSService" for add;
end-edit;

Note: The code example above does not add annotations to the Inbound Web Service to attach policies for true backward compatibility. It is assumed that policies are set globally rather than on individual services. If you want to add annotation logic to the script, it is recommended to add an annotations group to the script's internal data area and populate the annotations list in the script logic.

One thing to point out: to use the same payload for an XAI service in Inbound Web Services, a single operation must exist with the same name as the Service Name. This is the design pattern for a one-to-one conversion. It is possible to vary from this when you convert from XAI to IWS manually, as the number of services in IWS can be reduced by using multiple operations. Refer to Migrating from XAI to IWS (Doc Id: 1644914.1) and Web Services Best Practices (Doc Id: 2214375.1) on My Oracle Support for a discussion of the various techniques available. The attribute mapping looks like this:

Mapping of objects

The Service Script is now complete. All that remains is to pass the XAI Inbound Service identifier (not the name) in the parm/xaiInboundService element.
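
A minimal driver sketch (from a BPA or another service script) might look like the following; the identifier value is a placeholder and "ConvertXAI" is assumed to be a data area bound to the CMConvertXAI service script schema:

10: edit data
     // Illustrative only: the value below is a placeholder service identifier,
     // not a real one. The Process Records script later in this article drives
     // this same call from the batch process.
     move '12345678901234' to "ConvertXAI/xaiInboundService";
     invokeSS 'CMConvertXAI' using "ConvertXAI";
end-edit;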

Building The Plug In Batch Control

In past releases, the only way to build a batch process controlled via a Batch Control was to use the Oracle Utilities SDK with Java. It is now possible to define what is termed a Plug-In based Batch Control, which allows you to use ConfigTools and some configuration to build your batch process. The fundamental principle is that a batch process basically selects a set of records to process and then passes those records to something that processes them. In our case, we will provide an SQL statement to select the services to convert from XAI and pass them to the service script we built in the previous step.

Select Records Algorithm

The first part of the Plug In Batch process is to define the Select Records algorithm that defines the parameters for the Batch process, the commit strategy and the SQL used to pump the records into the process. The first step is to create a script to be used for the Algorithm Type of Select Records to define the parameters and the commit strategy. For this example I created a script with the following parameters:

  • Script: CMXAISEL
  • Description: XAI Select Record Script - Parameters
  • Detailed Description: This script is the driver for the Select Records algorithm for the XAI to IWS conversion
  • Script Type: Plug-In Script
  • Algorithm Entity: Batch Control - Select Records
  • Script Version: 3.0
  • Script Step:

10: edit data
     // Set the strategy and key field.
     // Strategy values are dictated by the BATCH_STRATEGY_FLG lookup.
     // The JOBS strategy is used as this is a single threaded process.
     // The THDS strategy could be used but would require restart logic in
     // the SQL; the SQL used here already has that logic implied.
     move 'JOBS' to "parm/hard/batchStrategy";
     move 'XAI_IN_SVC_ID' to "parm/hard/keyField";
end-edit;

Note: I have NO parameters for this job. If you wish to add processing for parameters, take a look at some examples of this algorithm type to see the processing necessary for bind variables.

The next step is to create an algorithm type. This will be used by the algorithm itself to define the process. Typically, an algorithm type is the definition of the physical aspects of the algorithm and its parameters. For the select algorithm the following algorithm type was created:

  • Algorithm Type: CMXAISEL
  • Description: XAI Selection Algorithm
  • Detailed Description: This algorithm type is a generic wrapper to set the job parameters
  • Algorithm Entity: Batch Control - Select Records
  • Program Type: Plug-In Script
  • Plug-In Script: CMXAISEL
  • Parameter: SQL (Sequence 1, Required) - the SQL to pass into the process

The last step is to create the Algorithm to be used in the Batch Control. This will use the Algorithm Type created earlier. Create the algorithm definition as follows:

  • Algorithm Code: CMXAISEL
  • Description: XAI Conversion Selection
  • Algorithm Type: CMXAISEL
  • Effective Date: Any valid date in the past is acceptable
  • Parameter - SQL:

SELECT xai_in_svc_id
  FROM ci_xai_in_svc
 WHERE xai_adapter_id = 'BusinessAdaptor'
   AND xai_in_svc_name NOT IN (SELECT in_svc_name FROM f1_iws_svc)
   AND owner_flg = 'CM'

You might notice the restart logic in the SQL used in the driver. It selects the XAI_IN_SVC_ID values for XAI Inbound Services that use the Business Adaptor, have not already been converted (which also makes the job restartable) and are owned by Customer Modification.
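
As a quick post-run check, the same predicate can be reused to confirm that nothing eligible remains; this is an illustrative query built only from the tables and columns used above:

-- Illustrative check: a count of zero means all eligible XAI Inbound
-- Services have been converted to Inbound Web Services.
SELECT COUNT(*)
  FROM ci_xai_in_svc
 WHERE xai_adapter_id = 'BusinessAdaptor'
   AND xai_in_svc_name NOT IN (SELECT in_svc_name FROM f1_iws_svc)
   AND owner_flg = 'CM';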

Process Records Algorithm

The next step is to link the script created earlier to the Process Records algorithm. As with the Select Records algorithm, a script, an algorithm type and algorithm entries need to be created.

The first part of the process is to build a Plug-In Script to pass the data from the Select Records Algorithm to the Service Script that does the conversion. The parameters are as follows:

  • Script: CMXAIProcess
  • Description: Process XAI Records in Batch
  • Detailed Description: This script reads the parameters from the Select Records algorithm and passes them to the XAI conversion script
  • Script Type: Plug-In Script
  • Algorithm Entity: Batch Control - Process Record
  • Script Version: 3.0
  • Data Area: Service Script - CMConvertXAI (Data Area Name: ConvertXAI)
  • Script Step:

if ("parm/hard/selectedFields/Field[name='XAI_IN_SVC_ID']/value != $BLANK")
    move "parm/hard/selectedFields/Field[name='XAI_IN_SVC_ID']/value" to "ConvertXAI/xaiInboundService";
    invokeSS 'CMConvertXAI' using "ConvertXAI";
end-if;

The script above simply takes the parameters passed to the algorithm and hands them to the Service Script for processing.

The next step is to define this script as an Algorithm Type:

  • Algorithm Type: CMXAIPROC
  • Description: XAI Conversion Algorithm
  • Detailed Description: This algorithm type links the algorithm to the service script to drive the process.
  • Algorithm Entity: Batch Control - Process Record
  • Program Type: Plug-In Script
  • Plug-In Script: CMXAIProcess

The last step in the algorithm process is to create the Algorithm entry itself:

  • Algorithm Code: CMXAIPROCESS
  • Description: XAI Conversion Process Record
  • Algorithm Type: CMXAIPROC

Plug In Batch Control Configuration

The last part of the process is to bring all the configuration together in a single place, the Batch Control. This pulls the algorithms into a configuration ready for use.

  • Batch Control: CMXAICNV
  • Description: Convert XAI Services to IWS
  • Detailed Description:

    This batch control converts the XAI Inbound Services to Inbound Web Services to aid in the mass migration of the metadata to the new facility.
    This batch job only converts the following:

    - XAI Services that are owned by Customer Modification, to respect record ownership.
    - XAI Services that use the Business Adaptor XAI Adapter (other types are automatically converted in IWS).
    - XAI Services that are not already defined as Inbound Web Services.

  • Application Service: F1-DFLTAPS
  • Batch Control Type: Not Timed
  • Batch Category: Adhoc
  • Algorithm - Select Records: CMXAISEL
  • Algorithm - Process Records: CMXAIPROCESS

The Plug-in batch process is now defined.

Summary

The conversion process can be summarized as follows:

  • A Service Script is required to transfer the data from the XAI Inbound Service to the Inbound Web Service definition. It converts only services that are owned by Customer Modification, have not already been migrated and use the Business Adaptor XAI Adapter. The script sets the same parameters as the XAI Service for backward compatibility and creates a SINGLE operation Web Service with the same payload as the original.
  • The Select Records algorithm defines the subset of records to process, using a script that sets the job properties, an algorithm type that registers the script with the framework and an algorithm, with the SQL to use, that is linked to the Batch Control.
  • The Process Records algorithm defines the processing for each selected record and links in the Service Script from the first step. As with any algorithm, the code is built (in this case a Plug-In Script that passes the data to the Service Script), an algorithm type registers the script and an algorithm definition links it to the Batch Control.
  • The last step is to create the Batch Control that links the Select Records and Process Records algorithms.

Single Submitter Support in Oracle Scheduler Integration

Tue, 2017-08-29 17:47

The Oracle Scheduler integration was released for Oracle Utilities Application Framework to provide an interface to the DBMS_SCHEDULER package in the Oracle Database. 

By default, when submitting a multi-threaded job where the thread_limit is set to a number greater than 1 and the thread_number on the submission is set to zero (to spawn threads), the interface submits each thread individually, one after another. For a large number of threads, this may lead to a high level of lock contention on the Batch Control table. To resolve this issue, the interface has been enhanced with a single submitter feature that reduces the lock contention.

To use this facility you can either use a new command line override:

OUAF_BATCH.Submit_Job(
...
single_submitter => true,
...
)

Or it can be set via the Set_Option facility (globally or on individual jobs). For example, for a global scope:

OUAF_BATCH.Set_Option(scope => 'GLOBAL', name => 'single_submitter', value => true);

The default for this facility is false (for backward compatibility). If the value is set to true, you cannot restart an individual thread until all running threads have ended.
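
For an individual job, a sketch of the same option might look like the following; the assumption that a batch code (here the illustrative CMXAICNV) can be supplied as the scope should be verified against the interface documentation:

-- Hedged sketch: scope the single_submitter option to one batch control.
-- 'CMXAICNV' is an illustrative batch code; confirm the valid scope values
-- in the DBMS_SCHEDULER integration documentation before use.
OUAF_BATCH.Set_Option(scope => 'CMXAICNV', name => 'single_submitter', value => true);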

This patch is available from My Oracle Support for a number of releases:

Release     Patch
4.2.0.3.0   24299479
4.3.0.1.0   26440254
4.3.0.2.0   26452535
4.3.0.3.0   26452546
4.3.0.4.0   26452556

 
