Anthony Shorten

Technical Advice for Oracle Tax and Utilities products

Overload Protection Support

Mon, 2016-12-05 17:19

One of the features we support in Oracle Utilities Application Framework V4.3.x and above is the Oracle WebLogic Overload Protection feature. By default, Oracle WebLogic is set up with a global Work Manager which gives you unlimited connections to the server. Whilst this is reasonable for non-production systems, Oracle generally encourages limiting connections in production to avoid overloading the server with connections.

In production, it is generally accepted that the Oracle WebLogic servers will either be clustered or set up as a set of managed servers, as this is the typical configuration for the high availability requirements of that environment. Using these configurations, it is recommended to set limits on individual servers to enforce capacity requirements across your cluster/managed servers.

There are a number of recommendations when using Overload Protection:

  • The Oracle Utilities Application Framework automatically sets the panic action to system-exit. This is the recommended setting so that the server will stop and restart if it is overloaded. In a clustered or managed server environment, end users are routed to other servers in the configuration while the server is restarted by Node Manager. This is set at the ENVIRON.INI level as part of the install in the WLS_OVERRIDE_PROTECT variable, via the WebLogic Overload Protection setting in the configureEnv utility.
  • Ensure you have set up a high availability environment, either using clustering or multiple managed servers with a proxy (such as Oracle HTTP Server or Oracle Traffic Director). Oracle has Maximum Availability Architecture guidelines that can help you plan your HA solution.
  • By default, the product ships with a single global Work Manager within the domain (this is the default from Oracle WebLogic). It is possible to create custom Work Manager definitions with a Capacity Constraint and/or Maximum Threads Constraint, which can be allocated to product servers to provide additional capacity controls.
For more information about Overload Protection and Work Managers refer to Avoiding and Managing Overload and Using Work Managers to Optimize Scheduled Work.

ILM Planning - The First Steps

Mon, 2016-12-05 16:22

The first part of implementing an Information Lifecycle Management (ILM) solution for your Oracle Utilities products using the ILM functionality provided is to decide the business retention periods for your data.

Before discussing the first steps, a few concepts need to be understood:

  • Active Period - This is the period/data group where the business needs fast update access to the data. This is the period the data is actively used in the product by the business.
  • Data Groups - These are the various stages in which the data is managed after the Active period and before archival. In these groups the ILM solution will use a combination of tiered storage, partitioning and/or compression to realize cost savings.
  • Archival - This is typically the final state of the data where it is either placed on non-disk related archival media (such as tape) or simply removed.

The goal of the first steps is to decide two major requirements for each ILM enabled object:

  • How long should the active period be? In other words, how long does the business need update access to the data?
  • How long does the data need to remain accessible to the business? In other words, how long should the data be kept in the database overall? Remember the data is still accessible by the business whilst it is in the database.

The decisions here are affected by a number of key considerations:

  • How long the data needs to be available for update by business processes - This can be how long the business needs to be able to rebill, or how long update activity is allowed on a historical record. Remember this is the requirement for the BUSINESS to get update access.
  • How long you legally need to be able to access the records - In each jurisdiction there will be legal and government requirements on how long data must remain accessible or updatable. For example, there may be a government regulation around rebilling or how long a meter read can remain available for change.
  • The overall data retention periods are dictated by how long the business and legal requirements need access to the data. This can be tricky as tax requirements vary from country to country. For example, in most countries the data needs to be available to tax authorities for, say, seven years, in machine-readable format. This does not mean it needs to be in the system for seven years; it just needs to be available when requested. I have seen customers use tape storage, off-site storage or even the old microfiche storage (that is showing my age!).
  • Retention means that the data is available in the system even after update is no longer required. This means only read access is needed, and the data can even be compressed to save storage and money. This is where the crossover to the technical aspects of the solution starts to happen. Oracle calls these Data Groups, where each group of data, usually based on a date range, has different storage/compression/access characteristics. This can be expressed as a partition per data group to allow for physical separation of the data (see the sketch after this list). Remember that the data is still accessible, but it is not on the same physical storage and location as the more active data.
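
To make the data group idea concrete, here is a minimal sketch of what a partition-per-data-group layout could look like. The table, column layout and tablespace names are illustrative only; the real ILM-enabled tables and their partitioning are defined by the product and your DBA.

    -- Illustrative only: one range partition per data group on the ILM date,
    -- with older partitions placed on cheaper storage tiers and compressed.
    CREATE TABLE ilm_example_object (
      object_id    NUMBER         NOT NULL,
      ilm_date     DATE           NOT NULL,
      ilm_arch_sw  CHAR(1)        DEFAULT 'N',
      payload      VARCHAR2(4000)
    )
    PARTITION BY RANGE (ilm_date) (
      PARTITION p_archive     VALUES LESS THAN (DATE '2014-01-01')
        TABLESPACE tier3_data COMPRESS,
      PARTITION p_less_active VALUES LESS THAN (DATE '2016-01-01')
        TABLESPACE tier2_data,
      PARTITION p_active      VALUES LESS THAN (MAXVALUE)
        TABLESPACE tier1_data
    );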

The best way to start this process is to work with the business to decide the retention and active periods for the data. It is not as simple as a single conversation and may require some flexibility in designing the business part of the solution.

Once agreement has been reached, the first part of the ILM configuration is to update the ILM Master Configuration with the agreed retention periods for the active period. This enables the business part of the process to be initiated. The ILM configuration sets the retention period in days on each object (in some cases, on subsets of objects). This is used by the ILM batch jobs to decide when to assess records for the next data group.

There will be additional articles in this series which walk you through the ILM process.

ILM Clarification

Wed, 2016-11-30 21:35

Lately I have received a lot of partner and customer questions about the ILM capability that we ship with our solutions. Our ILM solution is a combined business and technical capability that allows customers to implement cost-effective data management for product transaction tables. These tables grow quickly, and the solution allows a site to define its business retention rules as well as implement storage solutions that realize cost savings whilst retaining data appropriately.

There are several aspects of the solution:

  • In-built functionality - These are retention definitions, contained in a Master Configuration record, that you configure, as well as some prebuilt algorithms and ILM batch jobs. The prebuilt algorithms are called by the ILM batch jobs to assess the age of a row as well as check for any outstanding related data for ILM-enabled objects. Additional columns are added to the ILM-enabled objects to help track the age of each record and to set flags for the technical aspects of the solution to use. The retention period defines the ACTIVE period of the data for the business, which is typically the period in which the business needs fast, updatable access to the data.
  • New columns - There are two columns added: ILM_DATE and ILM_ARCH_SW. ILM_DATE is the date used to determine the age of the row. By default it is typically set to the creation date of the row, but as it is part of the object, implementers can optionally alter this value after it is set to influence the retention period for individual rows. ILM_ARCH_SW is set to the "N" value by default, indicating the business is using the row. When a row becomes eligible, in other words when ILM_DATE plus the retention period configured for the object has passed, the ILM batch jobs assess the row against the ILM algorithms to determine if any business rules indicate the record is still active. If the business rules indicate nothing is outstanding for the row, ILM_ARCH_SW is set to the "Y" value. This value effectively tells the system that the business has finished with that row in the ACTIVE period. Conversely, if a business rule indicates the row needs to be retained, then ILM_ARCH_SW is left at the "N" value.
  • Technical aspects of the solution - Once ILM_ARCH_SW is set to the "Y" value, the ILM features within the database are used, so some licensing considerations apply:
    • Oracle Database Enterprise Edition is needed to support the ILM activities. Other editions do not have support for the features used.
    • The Partitioning option of Oracle Database Enterprise Edition is a minimum requirement. This is used for data group isolation and allows storage characteristics to be set at the partition level for effective data management.
    • Optionally, it is recommended to license the Oracle Advanced Compression option. This option allows greater cost savings by making higher levels of compression available. The base compression in the Oracle Database can be used as well, but it is limited and not optimized for some activities.
    • Optionally, customers can use the free ILM Assistant add-on to the database (training for ILM Assistant is available). This is a web-based planning tool, based upon Oracle APEX, that allows DBAs to build different storage scenarios and assess the cost savings of each. It does not implement the scenarios, but it will generate some basic partitioning SQL. Generally, ILM Assistant is not recommended for Oracle 12c customers as it does not cover ALL the additional ILM capabilities of that version of the database. Personally, I only tend to recommend it to customers who have different tiered storage solutions, which is not a lot of customers generally.
    • Oracle 12c includes additional (and free) capabilities built into the database, namely Automatic Data Optimization (ADO) and Heat Map. These are disabled by default and can be enabled using initialization parameters on your database. Heat Map automatically tracks the usage profile of the data in your database. Automatic Data Optimization can use Heat Map information and other information to define and implement rules for data management. That can be as simple as instructions on compression, through to moving data across partitions based upon your criteria. For example: if ILM_ARCH_SW is the "Y" value and the data has not been touched in 3 months, then compress the data using the OLTP compression in Oracle Advanced Compression (a sketch follows this list). These rules are maintained using the free functionality in Oracle Enterprise Manager or, if you prefer, SQL commands can be used to set policies.
  • Support for storage solutions - Third-party hardware-based storage solutions (including Oracle's storage solutions) have additional ILM-based capabilities built in at the hardware level. Typically those solutions can be used in an ILM-based solution with Oracle. Check with your hardware vendor directly for capabilities in this area.
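
To illustrate the Automatic Data Optimization example above, here is a hedged sketch in SQL. The table name is illustrative only, and the exact policies you define should be based on your own data groups and licensing.

    -- Enable Heat Map tracking (Oracle 12c; a dynamic initialization parameter).
    ALTER SYSTEM SET HEAT_MAP = ON SCOPE = BOTH;

    -- Illustrative ADO policy: compress rows with Advanced Compression once
    -- they have not been modified for 3 months. The table name is an example.
    ALTER TABLE cisadm.ilm_example_object ILM ADD POLICY
      ROW STORE COMPRESS ADVANCED
      ROW AFTER 3 MONTHS OF NO MODIFICATION;

    -- Review what Heat Map has tracked so far (a 12c dictionary view).
    SELECT object_name, segment_write_time, full_scan
      FROM dba_heat_map_segment
     WHERE owner = 'CISADM';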

There are a number of resources that can help you understand ILM further.

Whitepapers now and in the future

Tue, 2016-11-29 16:40

The whitepapers available for the product will be changing over the next few months to reflect the changes in the product documentation.

The following changes will happen over the next few months:

  • The online documentation provided with the product has been enhanced to encompass some of the content contained in the whitepapers. This means when you install the product you will get the information automatically in the online help and the PDF versions of the documentation.
  • If the online help fully encompasses the whitepaper contents, the whitepaper will be retired to avoid confusion. Always refer to the online documentation first as it is always the most up to date.
  • If some of the whitepaper information is not in the online help, then the new version of the whitepaper will contain the information you need, or other whitepapers such as the Best Practices series will be updated with the new information.

I will be making announcements on this blog as each whitepaper is updated to reflect this strategy. This means you will not have to download most of the whitepaper information separately; the information will be available either online with the product, on Oracle's documentation site, or as a PDF download from Oracle Delivery Cloud.

The first whitepaper to be retired is the Configuration Migration Assistant Overview, which is no longer available from My Oracle Support; its content is now part of the documentation supplied with the product.

Remember the first rule: check the documentation supplied with the product before using the whitepapers. The documentation provided with the product is always up to date, whereas the whitepapers are only updated on a semi-regular basis.

New Utilities Testing Solution version available (5.0.1.0)

Thu, 2016-11-17 17:55

We have released a new version (5.0.1.0) of the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities (OFTAPOU), which is available from Oracle Delivery Cloud for customers and partners. This new OFTAPOU version includes support for more versions of our products. The packs are now cloud compatible, i.e. they can be used for testing applications on Oracle Utilities Cloud services.

The pack now supports the following:

  • Oracle Utilities Customer Care And Billing 2.4.0.3 (updated), 2.5.0.1 (updated) and 2.5.0.2 (updated)
  • Oracle Utilities Mobile Workforce Management 2.2.0.3 (updated)
  • Oracle Real Time Scheduler 2.2.0.3 (updated)
  • Oracle Utilities Mobile Workforce Management 2.3.0 (updated) – with added support for Android/iOS mobile testing.
  • Oracle Real Time Scheduler 2.3.0 (updated) – with added support for Android/iOS mobile testing.
  • Oracle Utilities Application Framework 4.2.0.3, 4.3.0.1, 4.3.0.2 and 4.3.0.3.
  • Oracle Utilities Meter Data Management 2.1.0.3 (updated)
  • Oracle Utilities Smart Grid Gateway (all adapters) 2.1.0.3 (updated)
  • Oracle Utilities Meter Data Management 2.2.0 (new)
  • Oracle Utilities Smart Grid Gateway (all adapters) 2.2.0 (new)
  • Oracle Utilities Work And Asset Management 2.1.1 (updated)
  • Oracle Utilities Operational Device Management 2.1.1 (updated)

The pack now includes integration components that can be used for creating flows spanning multiple applications, known as integration functional flows.

Components for testing the mobile application of ORS/MWM have been added. Using the latest packs, customers can execute automated test flows for the ORS/MWM application on Android and iOS devices.

In addition to the product pack content, the core test automation framework has been enhanced with more features for ease of use. For example, the pack now includes sanity flows to verify installations of individual products. These sanity flows are the same flows used by our cloud teams to verify cloud installations.

The pack includes 1000+ prebuilt testing components that can be used to model business flows using Flow Builder and generate test scripts that can be executed by OpenScript, Oracle Test Manager and/or Oracle Load Testing. This allows customers to adopt automated testing to accelerate their implementations and upgrades whilst reducing their overall risk.

The pack also supports the latest Oracle Application Testing Suite release (12.5.0.3) and includes a set of utilities to allow partners and implementers to upgrade their custom-built test automation flows from older product packs to the latest ones.


Oracle Scheduler Integration Whitepaper available

Mon, 2016-10-24 18:23

As part of Oracle Utilities Application Framework V4.3.0.2.0 and above, a new API has been released to allow customers and partners to schedule and execute Oracle Utilities jobs using the DBMS_SCHEDULER package (Oracle Scheduler), which is part of the Oracle Database (all editions). This API allows control and monitoring of product jobs within the Oracle Scheduler so that they can be managed individually or as part of a schedule and/or job chain.

Note: It is highly recommended that the Oracle Scheduler objects be housed in an Oracle Database 12c database for maximum efficiency. 

This has a few advantages:

  • Low Cost - The Oracle Scheduler is part of the Oracle Database license (all editions) so there is no additional license cost for existing instances.
  • Simple but powerful - The Oracle Scheduler has simple concepts which make it easy to implement, but do not be fooled by its simplicity. It has optional advanced facilities for features like resource profiling and load balancing for enterprise-wide scheduling and resource management.
  • Local or Enterprise - There are many ways to implement Oracle Scheduler, from managing just the product jobs through to becoming an enterprise-wide scheduler. It supports remote job execution using the Oracle Scheduler Agent, which can be enabled as part of the Oracle Client installation. One of the prerequisites of the Oracle Utilities product installation is the Oracle Client, so this just adds the agent to the install. Once the agent is installed, it is registered as a target with the Oracle Scheduler to execute jobs on that remote resource.
  • Mix and Match - The Oracle Scheduler can execute a wide range of job types so that you can mix non-product jobs with product jobs in schedules and/or chains.
  • Scheduling engine is very flexible - The calendaring aspect of the scheduling engine is very flexible, with overlaps supported as well as exclusions (for example, preventing jobs from running on public holidays).
  • Multiple Management Interfaces - The Oracle Utilities products do not include a management interface for the Oracle Scheduler, as there are numerous ways the Oracle Scheduler objects can be maintained, including the command line, Oracle SQL Developer and Oracle Enterprise Manager (base install, no pack needed).
  • Email Notification - Individual jobs can send status via email based upon specific conditions. The format of the email is now part of the job definition, which means it can be customized far more easily.

Before using the Oracle Scheduler, it is highly recommended that you read the Scheduler documentation provided with the database.

We have published a new whitepaper which outlines the API as well as some general advice on how to implement the Oracle Scheduler with Oracle Utilities products. It is available from My Oracle Support as Batch Scheduler Integration for Oracle Utilities Application Framework (Doc Id: 2196486.1).
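
As a taste of the DBMS_SCHEDULER concepts involved, here is a minimal sketch. The schedule, job and recipient names are illustrative, and the job runs a placeholder PL/SQL block rather than the product-supplied API, whose actual signatures are documented in the whitepaper.

    BEGIN
      -- A named schedule using the calendaring syntax: 2:00 AM every day.
      DBMS_SCHEDULER.CREATE_SCHEDULE(
        schedule_name   => 'NIGHTLY_2AM',
        repeat_interval => 'FREQ=DAILY; BYHOUR=2; BYMINUTE=0',
        comments        => 'Nightly batch window');

      -- A job attached to that schedule. The PL/SQL block is a placeholder;
      -- in practice it would call the product batch API described in the
      -- whitepaper (Doc Id: 2196486.1).
      DBMS_SCHEDULER.CREATE_JOB(
        job_name      => 'NIGHTLY_BILLING',
        job_type      => 'PLSQL_BLOCK',
        job_action    => 'BEGIN NULL; /* product batch API call here */ END;',
        schedule_name => 'NIGHTLY_2AM',
        enabled       => TRUE);

      -- Optional: email on failure (requires the Scheduler email server
      -- attribute to be configured first).
      DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION(
        job_name   => 'NIGHTLY_BILLING',
        recipients => 'ops@example.com',
        events     => 'JOB_FAILED');
    END;
    /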

Architecture Guidelines - Same Domain Issues

Sun, 2016-10-23 21:32

After a long leave of absence to battle cancer, I am back and the first article I wanted to publish is one about some architectural principles that may help in planning your production environments.

Recently I was asked by a product partner about the possibility of housing more than one Oracle Utilities product, along with other Oracle products, on the same machine in the same WebLogic domain and in the same Oracle database. The idea was that the partner wanted to save on hardware costs by combining installations. This is technically possible (to varying extents) but not necessarily practical for certain situations, like production. One of my mentors once told me, "even though something is possible, it does not mean it is practical".

Let me clarify the situation. We are talking about multiple products on the same WebLogic domain on the same non-virtualized hardware sharing the database via different schemas. That means non-virtualized sharing of CPU, memory and disk. 

Let me explain why housing multiple products in the same domain and/or same hardware is not necessarily a good idea:

  • Resource profiles - Each product typically has a different resource profile in terms of CPU, memory and disk usage. By placing multiple products in this situation, you would have to compromise on the shared settings to take all the products into account. For example, as the products might share the database instance, the instance-level parameters would represent a compromise across the products. This may not be optimal for the individual products.
  • Scalability issues - By limiting your architecture to specific hardware you are constrained in any possible future expansion. As your transaction volumes grow, you need to scale and you do not want to limit your solutions.
  • Incompatibilities - Whilst the Oracle Utilities products are designed to interact at the platform level, not all products are compatible when sharing resources. Let me explain with an example. Over the last few releases we have been replacing our internal technology with Oracle technology. One of the things we replaced was the Multi-Purpose Listener (MPL) with the Oracle Service Bus, to provide industry-level integration possibilities. Now, it is not possible to house Oracle Service Bus within the same domain as Oracle Utilities products. This is not a design flaw but intentional, as a single instance of Oracle Service Bus can be shared across products and scaled separately. Oracle Service Bus is only compatible with Oracle SOA Suite, as it builds domain-level configuration which should not be compromised by sharing that domain with other products.

There is a better approach to this issue:

  • Virtualization - Using a virtualization technology can address the separation of resources and scalability. It allows for lots of combinations for configuration whilst allocating resources appropriately for profiles and scalability as your business changes over time.
  • Clustering and server separation - Oracle Utilities products can live in the same WebLogic domain, but there are some guidelines to make this work appropriately. For example, each product should have its own cluster and/or servers within the domain. This allows for individual product configuration and optimization. Remember to put non-Oracle Utilities products, such as Oracle SOA Suite and Oracle Service Bus, in their own domains, as they are typically shared enterprise-wide and have their own pre-optimized domain setups.

This is the first in a series of articles on architecture that I hope to publish over the next few weeks.

Out for a while

Tue, 2016-06-28 00:31
Due to some medical issues I will not be posting till September this year. Thank you for your patience.

Embedded mode limitations for Production systems

Thu, 2016-05-26 20:33

In most implementations of Oracle Utilities products, the installer creates an embedded mode installation. This is called embedded as the domain configuration is embedded in the application, which is ideal for demonstration and development environments where the default setup is enough for those types of activities.

Over time, though, customers and partners will want to use more and more of the Oracle WebLogic domain facilities, including advanced setups like multiple servers, clusters, advanced security setups etc. Here are a few important things to remember about embedded mode:

  • The embedded mode domain setup is fixed, with a single server that houses the product and the administration server with the internal basic security setup. In non-production this is reasonable, as the requirements for the environment are simple.
  • The domain file (config.xml) is generated by the product, using a template, assuming it is embedded only.
  • When implementations need additional requirements within the domain there are three alternatives:
    • Make the changes in the domain from the administration console and then convert the new config.xml generated by the console into a custom template. This needs to be done because whenever Oracle delivers ANY patch or upgrade (or whenever you make configuration changes), initialSetup[.sh] must be run to apply the patch, upgrade or configuration to the product. This resets the file back to the factory-provided template unless you are using a custom template. Basically, if you decide to use this option and do not implement a custom template, you will lose your changes each time.
    • In later versions of OUAF we introduced user exits. These allow implementations to add to the configuration using XML snippets. It does require you to understand the configuration file being manipulated, and we have sprinkled user exits all over the configuration files to allow extensions. Using this method, you make changes to the domain, examine the resulting changes to the domain file, and then decide which user exit is available to reflect that change and add the relevant XML snippet. Again, you must understand the configuration file to make sure you do not corrupt the domain.
    • The easiest option is to migrate to native mode. This removes the embedded nature of the domain and houses it within Oracle WebLogic itself. This is explained in the Native Installation whitepaper (Doc Id: 1544969.1) available from My Oracle Support.

A native installation allows you to use the full facilities of Oracle WebLogic without the restrictions of embedded mode. The advantages of a native installation are the following:

  • The domain can be set up according to your company standards.
  • You can implement clusters and multiple servers, including dynamic clustering.
  • You can use the security features of Oracle WebLogic to implement complex security setups including SSO solutions.
  • You can lay out the architecture according to your volumes to manage within your SLAs.
  • You can implement JDBC connection pooling, Work Managers, advanced diagnostics etc.

Oracle recommends that native installations be used for environments where you need to take advantage of the domain facilities. Embedded mode should only be used within the restrictions it poses.

Additional CCB 2.5 benchmark information

Sun, 2016-05-08 19:27

Recently I published a link to a summary report for the recent Oracle Utilities Customer Care and Billing 2.5 benchmark. Due to popular demand, we have released additional information about the benchmark, including some configuration advice, in an additional whitepaper, Oracle Utilities Customer Care and Billing V2.5 and 2.4 Comparison Benchmark Whitepaper (Doc Id: 2135359.1), now available from My Oracle Support.

This whitepaper was provided by our performance team and provides additional technical information about the benchmark setup as well as the results.

DISTRIBUTED mode deprecated

Sun, 2016-05-01 18:50

Based upon feedback from partners and customers, the DISTRIBUTED mode used in the batch architecture has been deprecated in Oracle Utilities Application Framework V4.3.x and above. DISTRIBUTED mode was originally introduced to the batch cluster architecture back in Oracle Utilities Application Framework V2.x and was popular, but suffered from a number of restrictions. Given that the flexibility of the batch architecture was expanded in newer releases, it was decided to deprecate DISTRIBUTED mode to encourage more effective use of the architecture.

It is recommended that customers using this mode migrate to CLUSTERED mode using a few techniques:

  • For customers on non-production environments, it is recommended to use CLUSTERED mode with the single server (ss) template used by the Batch Edit facility. This is a simple cluster that uses CLUSTERED mode without the advanced configurations of a clustered environment. It is restricted to a single host server, so it is not typically recommended for production or for clustered environments that use more than one host server.
  • For customers on production environments, it is recommended to use CLUSTERED mode with the unicast (wka) template used by the Batch Edit facility. This allows flexible configuration without the use of multicast, which can be an issue on some implementations using CLUSTERED mode. The advantage of Batch Edit is that it has a simple interface that allows you to define this configuration without too much fuss.

The advantage of Batch Edit when building your new batch configurations is that it is simple to use and generates an optimized set of configuration files that can be used directly by the batch architecture. To adopt the new architecture, the DISTRIBUTED settings must be removed from the job command lines or configuration files.

Customers should read the Batch Best Practices (Doc Id: 836362.1) and the Server Administration Guide shipped with your product for advice on Batch Edit as well as the templates mentioned in this article.

Migrating Oracle Utilities products from On Premise to Oracle Public Cloud

Thu, 2016-04-28 18:48

A while back Oracle Utilities announced that the latest releases of the Oracle Utilities Application Framework applications were supported on Platform As A Service (PaaS) on Oracle Public Cloud. As part of that support a new whitepaper has been released outlining the process of migrating an on-premise installation of the product to the relevant Platform As A Service offering on Oracle Public Cloud.

The whitepaper covers the following from a technical point of view:

  • The Oracle Cloud services to obtain to house the products, including Oracle Java Cloud Service and Oracle Database As A Service, with associated services.
  • Setup instructions on how to configure the services in preparation to house the product.
  • Instructions on how to prepare the software for transfer.
  • Instructions on how to transfer the product schema to an Oracle Database As A Service instance using various techniques.
  • Instructions on how to transfer the software and make configuration changes to realign the product installation for the cloud. The configuration must follow the instructions in the Native Installation Oracle Utilities Application Framework whitepaper (Doc Id: 1544969.1) available from My Oracle Support, which has also been updated to reflect the new process.
  • Basic instructions on using the native cloud facilities to manage your new PaaS instances. More information is available in the cloud documentation.

The whitepaper applies to the latest releases of the Oracle Utilities Application Framework based products only. Customers and partners wanting to establish new environments (with no previous installation) can use the same process with the addition of actually running the installation on the cloud instance.

Customers and partners considering using Oracle Infrastructure As A Service can use the same process with the addition of installing the prerequisites.

The Migrating From On Premise To Oracle Platform As A Service (Doc Id: 2132081.1) whitepaper is available from My Oracle Support. This will be the first in a series of cloud based whitepapers.

Oracle Utilities Customer Care And Billing 2.5 Benchmark available

Fri, 2016-04-22 15:26

Oracle Utilities Customer Care and Billing v2.5.x marked a major change in application technology as it is an all-Java architecture. In past releases, both Java and COBOL were supported. Over the last few releases, COBOL support has progressively been replaced to optimize the product.

In recently conducted performance benchmark tests, it was demonstrated that the performance of Oracle Utilities Customer Care and Billing v2.5.x, an all-Java release, is at least 15 percent better than that of the already high-performing Oracle Utilities Customer Care and Billing v2.4.0.2, which included the COBOL-based architecture for key objects, in all use cases tested.

The performance tests simulated a utility with 10 million customers with both versions running the same workloads. In the key use cases tested, Oracle Utilities Customer Care and Billing v2.5.x performed at least 15% faster than the previous release.

Additionally, Oracle Utilities Customer Care and Billing v2.5.x processed 500,000 bills (representing the nightly batch billing for a utility serving 10 million customer accounts being divided into twenty groups, so that 5% of all customers are billed each night on each of the 20 working days during the month) within just 45 minutes.
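
As a quick sanity check on those numbers: 10,000,000 accounts divided into 20 billing groups is 500,000 bills per nightly run, and 500,000 bills in 45 minutes works out to roughly 11,000 bills per minute of sustained billing throughput.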

The improved Oracle Utilities Customer Care and Billing performance ultimately reduces the utility staff overtime hours required to oversee batch billing, allows utilities to consolidate tasks on fewer servers (reducing the data center size and cost required), and enables utilities to confidently explore new business processes and revenue sources, such as running billing services for smaller utilities.

A whitepaper is available summarizing the results and details of the architecture used. 

Using Database Resource Plans for effective resource management

Mon, 2016-04-11 21:27

In a past article we announced the support for Database Resource Plans. This facility is a technique that can be used by implementations to set limits and other resource constraints on processing to help optimize resource usage for implementations of Oracle Utilities products.

I have been asked a couple of follow-up questions about use cases that can exploit this facility. Here are a few things that might encourage its use:

  • Database Resource Plans can help multiple channels share resources, helping to avoid database contention between channels. For example, most utilities typically will not run batch processes in online hours, as the batch processes may cause contention with online users, causing both channels to run slower. Using Database Resource Plans, you can tell the database to share the resources more effectively and constrain batch to have minimal impact on the online users. Of course, batch can still borrow resources not being used by online, but by using resource plans you can constrain it as much as practical.
  • Database Resource Plans are very flexible. You can set plans for time periods to reflect different resource profiles by channel by time of day. Using the batch/online use case in the last point, you can set batch to use less resources during the day and more at night. Conversely you can set online to use more resources during the day and less at night. This balances resources with their optimal use.
  • Database Resource Plans can be set globally or at lower levels. In past releases of Oracle Utilities Application Framework, a set of database session visibility variables were set so that each database connection can be identified for monitoring. These same variables can now be used with resource plans. These include the program/batch job, threadpool/thread, client authorization user, client user tag etc. This means, if you desire, you can define fine-grained rules based upon session characteristics in your database resource plans using Consumer Groups.
  • Database Resource Plans feature monitoring at the plan, directive and consumer group level to assess the effectiveness of those resource plans. This is available from database monitoring products including Oracle Enterprise Manager.

Database Resource Plans are another feature you can use from the database to effectively manage your resource usage to ensure each channel stays within its allocated resource profile. It is all about sharing the available resources and minimizing contention whilst harnessing the processing power available more effectively.
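
To make this concrete, here is a hedged sketch of a simple plan built with the DBMS_RESOURCE_MANAGER package. The plan, group, percentage and mapping values are illustrative only and should be derived from your own channel profiles; in particular, the module value used in the mapping is an assumption, so use the session variables your release actually sets.

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;

      -- A consumer group for product batch sessions (name is illustrative).
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
        consumer_group => 'UTIL_BATCH',
        comment        => 'Oracle Utilities batch threadpools');

      -- A daytime plan that caps batch so online users are protected.
      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan    => 'DAYTIME_PLAN',
        comment => 'Online-biased plan for business hours');

      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'DAYTIME_PLAN',
        group_or_subplan => 'UTIL_BATCH',
        comment          => 'Constrain batch during the day',
        mgmt_p1          => 20);

      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'DAYTIME_PLAN',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'Everything else, including online',
        mgmt_p1          => 80);

      -- Map batch sessions to the group using a session attribute; the
      -- mapped users also need switch privilege, e.g. via
      -- DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP.
      DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
        attribute      => DBMS_RESOURCE_MANAGER.MODULE_NAME,
        value          => 'THREADPOOLWORKER',
        consumer_group => 'UTIL_BATCH');

      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
    END;
    /

    -- Activate the plan (this could also be switched by a Scheduler window).
    ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DAYTIME_PLAN';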

OEM and Passwords

Sun, 2016-04-03 18:33

I wanted to outline an interesting experience I had recently around security. Oracle, like a lot of companies, requires its employees to regularly change their passwords, as this is considered good security practice. There are strict rules around the password formats and their history. Luckily, Oracle uses its own Identity Management solutions, so the experience is simple and quick.

Recently my passwords were set to expire. I have a process I use to ensure the passwords are changed across all the technologies I use, and I usually run through it one morning a couple of days before they are due to expire. This time I did it at the end of the day, as it was a particularly busy day. It was a Friday and all was well.

Except I forgot one important change: my credentials in my demonstration instance of Oracle Enterprise Manager. I have a demonstration environment where I do research and development, record training and run demonstrations. After that weekend, I logged in to my demonstration environment to see alerts that it could not connect via some credentials.

I have three credentials to worry about in Oracle Enterprise Manager:

  • There is a credential for Oracle Enterprise Manager to connect to My Oracle Support. This is used for checking patches, looking for critical advice, and registering Service Requests directly from Oracle Enterprise Manager in online mode. Typically, you would nominate an account to link to My Oracle Support (along with a Service Identifier for your site).
  • I have two named credentials I use regularly for host interaction, such as installations and running regular jobs on the machines. These are administration accounts used for the product at the operating system level. The way the machine is set up, I use two: one is the administration account and the other is a privileged account used for low-level administration. Some sites will only need one per user.

I was able to correct the passwords and all my environments reported back correctly. Credential management is one of the strengths of Oracle Enterprise Manager. Next time I will add the OEM credentials to my checklist.

Oracle Coherence Use in the product

Tue, 2016-03-29 18:52

One of the most common questions I get from people is about the use of Oracle Coherence in our product.

We bundle a subset of the Oracle Coherence libraries for use in our batch architecture. The Coherence libraries permit our threadpools to be clustered and to communicate (via Coherence) with each other in an efficient manner. This includes our submitters (the threads that are submitted) as well as the threadpools.

We introduced Oracle Coherence to manage our batch architecture in previous releases, and there are a few things that need to be clarified about the support:

  • We bundle a set of Coherence libraries that are used by the product. The libraries are only a subset of the full Coherence stack. They represent a Restricted Use License (RUL) for the provided use (i.e. managing the batch cluster). The libraries are listed in the ouaf_jar_versions.txt file in the etc directory of the product installation. You do not need to purchase Oracle WebLogic with Coherence to use the libraries for their licensed purpose.
  • As part of the Restricted Use License you cannot use the libraries in customizations so you cannot extend past the provided use. If you want to extend the use of Coherence in your custom solutions then you will need to purchase a FULL additional license for Oracle Coherence.
  • As the libraries are a subset of what is available in Oracle Coherence, it is NOT recommended to use the Oracle Coherence pack for Oracle Enterprise Manager with our products. This is because the pack assumes you are using the full stack and can return erroneous information when attempting to use it with the batch cluster.

Essentially, we bundle a subset of Coherence libraries that we use internally for our clustered batch architecture. These are locked down to that clustering purpose only. You do not need to extend the license to use them for this purpose. If you want to use them beyond this purpose, then you can purchase a full license.

Service Based Testing

Wed, 2016-03-23 19:26

The Oracle Functional/Load Testing Advanced Pack for Oracle Utilities is a service-based automated testing solution based around the popular Oracle Application Testing Suite. The main focus of this product is to allow implementations of Oracle Utilities products to adopt automated testing quickly, using prebuilt service-based components to verify the product against your business processes and with your data. This is a fundamental principle of the solution.

Traditionally automated testing uses the user interface as the conduit to perform functional/load testing. There are a number of issues with that approach:

  • Traditionally, you have to record a session to build testing assets. The data, along with the user interaction, is recorded and converted into a programmable script (using some scripting language). The data is typically embedded in the test, so to reuse the same process with different data you would have to either re-record the test or manually edit the script, which requires some programming experience, to put new data into it. This can involve quite a bit of test asset building and management. By the way, you can use Oracle Application Testing Suite in this mode as well, if you do not have the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities, but a unique advantage of the Oracle Application Testing Suite is that the user interface is componentized for reuse.
  • If you use the user interface as the basis of the testing, then ANY change to the user interface that you (or the vendor) make will invalidate the recorded script. One big example: all the latest Oracle Utilities products are moving to a new user interface to support a wide range of devices, which required the user interface to change. This change alone would invalidate user interface based scripting and require those assets to be rebuilt.

The Oracle Functional/Load Testing Advanced Pack for Oracle Utilities uses a service based approach which utilizes the service layer that the user interface passes data to (and from). The solution passes the same data as the screens would internally pass to the underlying services. There are a number of distinct advantages of this approach:

  • The service based approach is isolated from any user interface changes whether the change was introduced in a new version or as part of your implementation. The main focus is always functionality testing of the underlying business services.
  • The service-based testing components are prebuilt against our base services. They are verified against the product, as the product QA teams use these components to verify the product in QA. If a prebuilt service component is not appropriate for your implementation, or you have custom functionality that is beyond the scope of the product, we supply a component builder, built in OpenScript, that reads our meta data and generates a service-based definition which can be loaded into the provided library of components. We also ship a component verifier, built in OpenScript, that helps ensure your generated components are still valid when you make administration or configuration data changes.
  • The service layer in the product is common across ALL channels (i.e. online, web services, batch and mobile). All the business logic and rules are stored and verified at that layer, and applied regardless of the channel used. There are no business rules in the user interface in the base product. There are usability features that look like rules, to improve the usability of the product, but they are NOT business rules.
  • The service layer encapsulates all the business rules and validations. This greatly simplifies testing, as the testing tool just needs to interface with that layer to take advantage of those rules. Just like any channel, when a business rule is broken the product will respond with an appropriate message (the same message an online user would get). The testing tool will recognize these error conditions.
  • This solution separates usability testing (which is typically done manually) from verifying your functionality in the product against your business processes and your data. Assessment of screens for usability is best performed by a person in your organization.

Now, we also understand that some implementations may have introduced business rules into the user interface for various reasons. While this is not ideal, as you will be missing those business rules in non-user interface based channels, you can use the power of the Oracle Application Testing Suite to record a user interface based component. That component can be mixed with the service-based components in a flow.

The service-based approach is different to the user interface based approach used in a lot of other tools, but we feel it is the most efficient means of testing your product implementations and upgrades both quickly and easily.

Enterprise Manager: Using Metrics Extensions (SQL)

Wed, 2016-03-16 00:41

One of the major features of Oracle Enterprise Manager (OEM) is the ability to create Metrics Extensions. These are metrics you want to track that may or may not be provided with the underlying products. I want to illustrate this point in a series of articles on using Oracle Enterprise Manager with Oracle Utilities products.

The first article is about how to use the basic metrics extension capability with a simple SQL statement. This is a little unusual, as it will be part of the database targets (not the Oracle Utilities targets), but I feel it introduces specific techniques that we will reuse a lot in subsequent articles, and it serves as a really good starting point.

A few things before we start:

  • The Metrics Extension part of OEM is basically a facility for you to add all sorts of custom metrics for OEM to track. You create the extension and then associate it with the targets to track.
  • The Metrics Extension component allows for incremental development. You specify and test the metric first in the user interface. You can then mark it as deployable, which will create a version. You then deploy the metric extension to be tracked on targets. The version tracking is useful as you can have different versions of the metric deployed to different targets at different stages of development. I will only touch on this here. More information is in the Metrics Extension documentation for the version of OEM you are using.
  • The screen dumps and example in this article are based upon a tracking query outlined in the Batch Troubleshooting Guide which flattens the Batch Run Tree and summarizes it. It is not a base view but a custom view that is used for illustrative purposes only. Refer to Performance Troubleshooting Guideline Series (Doc Id: 560382.1) from My Oracle Support.
  • The example shown is for Oracle Enterprise Manager 13c but can apply to other versions of Oracle Enterprise Manager.
  • The example will use SQL and in future articles we will explore other adapters.
  • The example is for illustrative purposes only.

To perform this task you need to be authorized to use the Metrics Extension facility and to access the targets you will associate with the metric. Refer to your installation's security setup to see if that is the case.

To setup the Metrics Extension, the following process can be used:

  • Navigate to the Metrics Extension facility. This can be done from the link page or menu (Monitoring --> Metric Extensions). For example:

[Screenshot: Metrics Extension menu]

  • From the Create menu, select Metrics Extension. For example:

[Screenshot: Create Metric Extension]

  • Specify the Metric Name, Target Type (Database Instance in this case), Display Name, Adapter (SQL in this case), Description and other attributes for the metric including default collection frequency. For example:

[Screenshot: Metric Extension general properties]

  • You might notice the Select Advanced Properties option, which allows you to specify other attributes on the target to specialize the metric. This is new in OEM 13c and, in this case, allows you to target multi-tenant databases (or not), for example.
  • Now, as this is an SQL-based metric, you need to specify the SQL statement to execute to gather the data. In this example, we are using the custom view from the Performance Troubleshooting Guideline Series (Doc Id: 560382.1) from My Oracle Support. In my example, I hardcoded the owner of the view; this is just an illustration. You can avoid this by making sure the credentials have access to the view or by creating a synonym. Remember the database user must have at least SELECT access. An example of the SQL is shown below:

[Screenshot: SQL example]
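
Since the screenshot does not reproduce here, a query of the general shape below is what the metric executes. The view and column names are assumptions for illustration only; the real definitions come from the custom view in the Performance Troubleshooting Guideline Series (Doc Id: 560382.1).

    -- Illustrative only: summarize the flattened batch run tree per batch code.
    -- View and column names are assumed; use the custom view from Doc Id: 560382.1.
    SELECT batch_cd,
           MAX(elapsed_time_secs) AS max_elapsed_time,
           MAX(records_per_min)   AS max_throughput
      FROM cisadm.ci_batch_run_summary_vw
     GROUP BY batch_cd;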

  • For each column in the query, you need to define it as part of the metric. You do not have to define all of them, but it is recommended so you get full reuse. For each column, define its attributes, including whether it is data or a key value. Key values are used for SLA tracking. You can also define more meta data to allow OEM to determine how to process it. The columns for our example are shown below:

[Screenshot: Example column definitions]

  • Now we extend the metric by adding a few deltas. Deltas are virtual columns that compare the last value with the current value. They are great for checking changes in values at the metric level. In our sample I will add two deltas: one for the Maximum Elapsed Time, to see if the job elapsed time is getting worse, and one for the Maximum Run Rate (Throughput), to track whether the number of records processed per period is getting lower. To do this, select the field and create the delta on that field. For example:

[Screenshot: Max Elapsed Time delta]

  • The delta column can also hold the Alert Threshold, which is the default SLA, including the messages that are available. For the Maximum Elapsed Time I want to detect if the value has increased (a delta greater than 0), and you can even set specific limits. I set a Critical SLA for a delta above 10 (as an example). For example:

[Screenshot: Delta definition with SLA - Max Elapsed Time]

  • Repeat for the Max Throughput, as that should be tracked to see if it goes down (fewer records processed per minute). For example:

[Screenshot: Adding delta on throughput]

  • Again, set up the thresholds for the Max Throughput delta. For example:

[Screenshot: Delta definition for throughput]

  • Now the metric is complete with all the API fields. For example:

[Screenshot: Complete metric definition]

  • The credentials for the metric need to be defined. When you create a metric, you simply attach a credential for the collection to use. Again, ensure that the credential is valid for the query. In my example I will use the standard database monitoring credential. For example:

[Screenshot: Credentials]

  • You can attach a database and run the test to verify the metric. This does not attach the metric to the target. It just tests it. For example:

[Screenshot: Testing the metric]

  • Review before saving the metric. At any time you can change the metric before you publish it. For example:

[Screenshot: Review the metric]

  • The metric is still in editable mode, so it can be edited as much as necessary. This is indicated on the metric screen. For example:

[Screenshot: Summary of the metric]

  • To implement the metric you must save it as a Deployable Draft from the Actions Menu. For example:

[Screenshot: Save as Deployable Draft]

  • A version number is locked in and it is marked as deployable. For example:

[Screenshot: Marked as deployable]

  • Now you need to identify the targets you want to deploy this metric to. Select the metric and use Deploy To Targets from the Actions menu. For example:

[Screenshot: Deploy to targets]

  • In this example, we will select the databases that will use this metric. Note that if you specified Additional Parameters on the Target Type selection, those will be applied to the search. In my example, a standalone database and the CDB version are available (PDBs are not listed). For example:

[Screenshot: Selecting targets]

  • OEM will then copy the metric to the targets supplied as a background job. For example:

[Screenshot: Scheduled deployment to targets]

  • You can set threshold values for individual targets using the Metrics and Collection Settings on the individual target. For example:

[Screenshot: Setting target-specific values]

  • Scroll down to see the metric and set the appropriate values. If not set, they will be defaulted from the metric definition itself. For example:

[Screenshot: Example metric on the target]

This concludes this article. Obviously I cannot cover everything you need to know in one article, but hopefully you can see how easy it is to add custom metric extensions. In later articles I will add more detail and cover other types of metrics.

Application Testing: The Oracle Utilities Difference

Wed, 2016-03-09 19:33

Late last year we introduced a new product to the Oracle Utilities product set. It was the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities. This pack is a set of prebuilt content and utilities based upon Oracle Application Testing Suite.

One of the major challenges in any implementation, or upgrade, is the amount of time that testing takes in relation to the overall time to go live. Typically, testing is on the critical path for most implementations and upgrades. Consequently, customers have asked us to help address this for our products.

Typically, one technique to reduce testing time is to implement automated testing as much as possible. However, the feedback we got from most implementations was that the initial cost of adopting automated testing tools was quite high, as you needed to build and maintain the assets for the automated testing to be cost effective. This typically requires specialist skills in the testing tool.

This also brought up another issue with traditional automated testing techniques. Most traditional automated testing tools use the user interface to record their automation scripts. Let me explain. Typically, using traditional methods, the tool will "record" your interactions with the online system, including the data you used. This is then built into a testing "script" to reproduce the interactions and automate them. This is limiting in that, to use the same script with another set of data for alternative scenarios, you have to involve a script developer, which requires additional skills. This is akin to programming.

Now let me explain the difference with Oracle Application Testing Suite in combination with the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities:

  • Prebuilt Testing Assets - We provide a set of prebuilt component based assets that the product developers use to QA the product. These greatly reduce the need for building assets from scratch and get you testing earlier.
  • One pack, multiple products, multiple versions - The pack contains the components for the Oracle Utilities products supported and the versions supported.
  • Service based not UI based - The components in the pack are service based rather than using the UI approach traditionally used. This isolates your functionality from any user experience changes. In a traditional approach, any change to the user interface would require either re-recording the script or making programming changes to it. This is not needed with the service-based approach.
  • Supports Online, Web Services and Batch - Traditional approaches typically would cover online testing only. Oracle Application Testing Suite and the pack allows for online, web services and batch testing as well which greatly expands the benefits.
  • Component Generator utility - Whilst the pack supplies the components you will need, we are aware that some implementations are heavily customized, so we provide a Component Generator which uses the product meta data to generate a custom component that can be added to the existing library.
  • Assemble not code - We use the Oracle Flow Builder product, used by many Oracle eBusiness Suite customers, to assemble the components into a flow that models your business processes. Oracle Flow Builder simply generates the script that is executed, without the need for technical script development.
  • Upgrades made easier - The upgrade process is much simpler, with flows simply pointed to the new versions of the supplied components to perform your upgrade testing.
  • Can Co-exist with UI based Components - Whilst our solution is primarily service based, it is possible to use all the facilities in Oracle Application Testing Suite to build components, including traditional recording, to add any logic introduced on the browser client. The base product does not introduce business logic into the user interface so the base components are not user interface based. We do supply a number of UI based components in the Oracle Utilities Application Framework part of the pack to illustrate that UI based components can co-exist.
  • Cross product testing - It is possible to test across Oracle Utilities products within a single flow. As the license includes the relevant Oracle Application Testing Suite tools (Flow Builder, OpenScript etc), it is possible to add components for bespoke and other web or service based solutions in your implementation as well.
  • Flexible licensing - The licensing of the testing solution is very flexible. You not only get the pack and the Oracle Application Testing Suite but the license allows the following:
    • The license is regardless of the number of Oracle Utilities products you use. Obviously, customers with more than one Oracle Utilities product will see a greater benefit, but it is cost effective regardless.
    • The license is regardless of the number of copies of the products you run the testing against. There is a server enablement that needs to be performed as part of the installation, but you are not restricted in the number of non-production copies you run the solution against.
    • The license conditions include full use of the Oracle Application Testing Suite for licensed users. This can be used against any web or Web Service based application on the site so that you can include third party integration as part of your flows if necessary.
    • The license conditions include OpenScript which allows technical people to build and maintain their own custom assets to add to the component libraries to perform a wide range of ancillary testing.
  • Data is separated from process - In the traditional approach, the data is included as part of the test. Using this solution, the flow is built independently of the data. The data, in the form of databanks (CSV, MS Excel etc.), can be attached at the completion of the flow, in the flow definition, or altered AFTER the flow has been built. Even after the script has been built, Oracle Flow Builder separates the data from the flow so that you can substitute the data without the need to regenerate the script. This means you have greater reuse and greater flexibility in your testing.
  • Flexible execution of testing - The Flow Builder product generates a script (that typically needs no alteration after generation). This script can be executed in OpenScript (for developers), using the optional Oracle Test Manager product, loaded into the optional Oracle Load Testing product for performance/load testing, or executed by a third party tool via a command line interface. This flexibility means greater reuse of your testing assets.
Support for Extensions

One of the most common questions I get about the pack concerns the support for customization (or extensions, as we call them). Let me step back before answering and put extensions into categories.

When I discuss extending our product, there is a full range of facilities available. To focus on the impact of extensions, I am going to categorize them into three simple categories:

  • User Interface extensions - These are bits of code in CSS or JavaScript that extend the user interface directly or add business logic into the browser front end. These are NOT covered by the base components, as the product has all the business logic in the services layer. The reason for this is that the same business rules can then be reused regardless of the channel used (such as online, web services and batch); if you have them in just one channel, you miss those business rules elsewhere. To support these, you can use the features of Oracle Application Testing Suite to record that logic and generate a component for you. You can then include that component in any flow, with other relevant components, to test that logic.
  • Tier 1 extensions - These are extensions that alter the structure of the underlying object. Anything that changes the API to the object is what I am talking about: extension types such as custom schemas which alter the structure of the object (e.g. flattening data, changing tags, adding rules in the schema etc.). These will require the use of the Component Generator, as the API will be different from the base component.
  • Tier 2 extensions - These are extensions within the objects themselves that alter behavior. For example, algorithms, user exits and change handlers are examples of such extensions. These are supported by the base components directly, as they alter the base data, not the structure. If you have a combination of Tier 1 and Tier 2 extensions, then you must use the Component Generator, as the structure is altered.

Customers will use a combination of all three and in some cases will need to use the component generators (the UI one or the meta data one) but generally the components supplied will be reused for at least part of the testing, which saves time.

We are excited about this new product and we look forward to adding more technology and new features over the next few releases.

OSB 12c Adapter for Oracle Utilities

Tue, 2016-03-08 23:32

In Oracle Utilities Application Framework V4.2.0.3.0 we introduced Oracle Service Bus adapters to allow that product to process Outbound Messages and, for Oracle Utilities Customer Care And Billing, Notification and Workflow records.

These adapters were compatible with Oracle Service Bus 11g. We have now patched these adapters to be compatible with the new facilities in Oracle Service Bus 12c. The following patches must be applied:

  • Version 4.2.0.3.0 - Patch 22308653
  • Version 4.3.0.0.1 - Patch 21760629
  • Version 4.3.0.1.0 - Patch 22308684
