Anthony Shorten

Oracle Blogs

Service Pack Support for Oracle Utilities Enterprise Manager Pack

Thu, 2016-12-08 13:51

Customers using the Application Management Pack for Oracle Utilities within Oracle Enterprise Manager can prepare our service packs for installation through the pack by converting them to Enterprise Manager's format. The utilities are supplied with the service pack and give customers the flexibility of either installing the service pack manually (the default) or automating the installation via Oracle Enterprise Manager.

A whitepaper outlining the process and utilities provided is now available from My Oracle Support under Enterprise Manager for Oracle Utilities Whitepaper: Service Pack Compliance (Doc ID 2211363.1).

Using ADO and HeatMap in the Utilities ILM Solution

Wed, 2016-12-07 16:19

The ILM features of the Oracle Database are used in the Oracle Utilities ILM capability to implement the technical side of the solution. In Oracle Database 12c, two new facilities were added to the already available ILM features to make implementing ILM easier: Automatic Data Optimization (ADO) and Heat Map.

The Heat Map feature allows Oracle itself to track the use of blocks and segments in your database. Every time a program or user touches a row in the database, such as via SELECT, UPDATE or DELETE SQL statements, Heat Map records that it was touched. This information is important as it profiles the actual usage of the data in your database, and it can be used by Automatic Data Optimization. Heat Map is disabled by default and is enabled by changing a database initialization parameter.
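As a minimal sketch only (HEAT_MAP and the DBA_HEAT_MAP_SEGMENT view are standard Oracle Database 12c features; the schema name below is illustrative):

    -- Enable Heat Map tracking for the instance (it is disabled by default)
    ALTER SYSTEM SET HEAT_MAP = ON SCOPE = BOTH;

    -- Review the last recorded read/write activity for each segment
    SELECT object_name, segment_write_time, segment_read_time
    FROM   dba_heat_map_segment
    WHERE  owner = 'CISADM';   -- illustrative schema name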

Automatic Data Optimization is a facility where DBAs can set ILM rules, known as Policies, to perform certain ILM actions on the data. For example: if the data has not been touched within X months, according to Heat Map data, then COMPRESS it to save space; or, if ILM_ARCH_SW is set to Y, move the data to a designated partition. The ADO rules support many combinations and facilities, giving DBAs flexibility in their rules. ADO allows DBAs to specify the rules and then supplies a procedure that can be scheduled, at the convenience of the site, to implement them.
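To illustrate the shape of a policy only (the ILM ADD POLICY syntax and the DBMS_ILM package are part of Oracle Database 12c; the schema and table names are illustrative, not product specifics):

    -- Compress a segment once it has seen no modification for 3 months,
    -- based on Heat Map data (this compression level requires the
    -- Advanced Compression option)
    ALTER TABLE cisadm.ci_bill ILM ADD POLICY
      ROW STORE COMPRESS ADVANCED SEGMENT
      AFTER 3 MONTHS OF NO MODIFICATION;

    -- Policies are normally evaluated in the maintenance window, but a
    -- DBA can also trigger evaluation for an object on demand
    DECLARE
      v_task_id NUMBER;
    BEGIN
      DBMS_ILM.EXECUTE_ILM(owner       => 'CISADM',
                           object_name => 'CI_BILL',
                           task_id     => v_task_id);
    END;
    /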

ADO and Heat Map are powerful data management tools that DBAs should get used to. They allow simple specification of rules and use in-database features to help you manage your data.

For more information about Heat Map and ADO, refer to the Oracle Database documentation (for example, the VLDB and Partitioning Guide).

Overload Protection Support

Mon, 2016-12-05 17:19

One of the features we support in Oracle Utilities Application Framework V4.3.x and above is the Oracle WebLogic Overload Protection feature. By default, Oracle WebLogic is set up with a global Work Manager which gives you unlimited connections to the server. Whilst this is reasonable for non-production systems, Oracle generally encourages limiting connections in production to avoid overloading the server.

In production, it is generally accepted that the Oracle WebLogic servers will either be clustered or a set of managed servers, as this is the typical setup for the high availability requirements of that environment. Using these configurations, it is recommended to set limits on individual servers to enforce capacity requirements across your cluster/managed servers.

There are a number of recommendations when using Overload Protection:

  • The Oracle Utilities Application Framework automatically sets the panic action to system-exit. This is the recommended setting so that the server will stop and restart if it is overloaded. In a clustered or managed server environment, end users are routed to other servers in the configuration while the server is restarted by Node Manager. This is set at the ENVIRON.INI level as part of the install via the WLS_OVERRIDE_PROTECT variable, which is maintained through the WebLogic Overload Protection setting in the configureEnv utility.
  • Ensure you have set up a high availability environment, either using clustering or multiple managed servers with a proxy (such as Oracle HTTP Server or Oracle Traffic Director). Oracle has Maximum Availability Guidelines that can help you plan your HA solution.
  • By default, the product ships with a single global Work Manager within the domain (this is the default from Oracle WebLogic). It is possible to create custom Work Manager definitions with a Capacity Constraint and/or Maximum Threads Constraint which can be allocated to product servers to provide additional capacity controls.

For more information about Overload Protection and Work Managers refer to Avoiding and Managing Overload and Using Work Managers to Optimize Scheduled Work.

ILM Planning - The First Steps

Mon, 2016-12-05 16:22

The first part of implementing an Information Lifecycle Management (ILM) solution for your Oracle Utilities products using the ILM functionality provided is to decide the business retention periods for your data.

Before discussing the first steps, a few concepts need to be understood:

  • Active Period - This is the period/data group where the business needs fast update access to the data. This is the period the data is actively used in the product by the business.
  • Data Groups - These are the various stages in which the data is managed after the Active period and before archival. In these groups the ILM solution will use a combination of tiered storage solutions, partitioning and/or compression to realize cost savings.
  • Archival - This is typically the final state of the data where it is either placed on non-disk related archival media (such as tape) or simply removed.

The goal of the first steps is to decide two major requirements for each ILM enabled object:

  • How long should the active period be? In other words, how long does the business need update access to the data?
  • How long does the data need to remain accessible to the business? In other words, how long should the data be kept in the database overall? Remember the data is still accessible by the business whilst it is in the database.

The decisions here are affected by a number of key considerations:

  • How long the data needs to be available for update by business processes - This can be how long the business needs to rebill or how long update activity is allowed on a historical record. Remember this is the requirement for the BUSINESS to get update access.
  • How long you legally need to be able to access the records - Each jurisdiction will have legal and government requirements on how long data should remain updatable. For example, there may be a government regulation around rebilling or how long a meter read can be available for change.
  • The overall data retention periods are dictated by how long the business and legal requirements are for access to the data. This can be tricky as tax requirements vary from country to country. For example, in most countries the data needs to be available to tax authorities for, say, 7 years, in machine readable format. This does not mean it needs to be in the system for 7 years; it just needs to be available when requested. I have seen customers use tape storage, off site storage or even the old microfiche storage (that is showing my age!).
  • Retention means that the data is available on the system even after update is no longer required. This means read only access is needed and the data can even be compressed to save storage and money. This is where the crossover to the technical aspects of the solution starts to happen. Oracle calls these Data Groups, where each group of data, usually based on date range, has different storage/compression/access characteristics. This can be expressed as a partition per data group to allow for physical separation of the data. You should remember that the data is still accessible but it is not on the same physical storage and location as the more active data.

Now the best way of starting this process is working with the business to decide the retention and active periods for the data. It is not as simple as a single conversation and may require some flexibility in designing the business part of the solution.

Once agreement has been reached, the first part of the ILM configuration is to update the Master Configuration for ILM with the retention periods agreed for the active period. This enables the business part of the process to be initiated. The ILM configuration sets the retention period in days on each object (in some cases, on subsets of objects). This is used by the ILM batch jobs to decide when to assess the records for the next data groups.

There will be additional articles in this series which walk you through the ILM process.

ILM Clarification

Wed, 2016-11-30 21:35

Lately I have received a lot of partner and customer questions about the ILM capability that we ship with our solutions. Our ILM solution is basically a combined business and technical capability that allows customers to implement cost-effective data management for product transaction tables. These tables grow quickly, and the solution allows the site to define their business retention rules as well as deploy storage solutions that realize cost savings whilst retaining data appropriately.

There are several aspects of the solution:

  • In-built functionality - These are retention definitions, contained in a Master Configuration record, that you configure, as well as some prebuilt algorithms and ILM batch jobs. The prebuilt algorithms are called by the ILM batch jobs to assess the age of a row as well as check for any outstanding related data for ILM enabled objects. Additional columns are added to the ILM enabled objects to help track the age of each record as well as set flags for the technical aspects of the solution to use. The retention period defines the ACTIVE period of the data for the business, which is typically the period that the business needs fast and update access to the data.
  • New columns - Two columns are added: ILM_DATE and ILM_ARCH_SW. The ILM_DATE is the date used to determine the age of the row. By default, it is typically set to the creation date of the row, but as it is part of the object, implementers can optionally alter this value after it is set to influence the retention period for individual rows. The ILM_ARCH_SW is set to the "N" value by default, indicating the business is using the row. When a row becomes eligible, in other words when the ILM_DATE plus the retention period configured for the object has passed, the ILM batch jobs assess the row against the ILM algorithms to determine whether any business rules indicate the record is still active (see the illustrative query after this list). If the business rules indicate nothing in the business is outstanding for the row, the ILM_ARCH_SW is set to the "Y" value. This value effectively tells the system that the business has finished with that row in the ACTIVE period. Conversely, if a business rule indicates the row needs to be retained, then the ILM_ARCH_SW is not changed from the "N" value.
  • Technical aspects of the solution - Once ILM_ARCH_SW is set to the "Y" value, the ILM features within the database are used, so some licensing aspects apply:
    • Oracle Database Enterprise Edition is needed to support the ILM activities. Other editions do not have support for the features used.
    • The Partitioning option of Oracle Database Enterprise Edition is a minimum requirement. It is used for data group isolation and allows storage characteristics to be set at the partition level for effective data management.
    • Optionally, it is recommended to license the Oracle Advanced Compression option. This option allows higher levels of compression to be used as a tool to realize further savings. The base compression in Oracle can be used as well, but it is limited and not optimized for some activities.
    • Optionally, customers can use the free ILM Assistant add-on to the database (training for ILM Assistant). This is a web based planning tool, based upon Oracle APEX, that allows DBAs to build different storage scenarios and assess the cost savings of each. It does not implement the scenarios, but it will generate some basic partitioning SQL. Generally, for Oracle 12c customers, ILM Assistant is not recommended as it does not cover ALL the additional ILM capabilities of that version of the database. Personally, I only tend to recommend it to customers who have different tiered storage solutions, which is not a lot of customers generally.
    • Oracle 12c now includes additional (and free) capabilities built into the database, namely Automatic Data Optimization and Heat Map. These are disabled by default and can be enabled using initialization parameters on your database. Heat Map automatically tracks the usage profile of the data in your database. Automatic Data Optimization can use Heat Map information and other information to define and implement rules for data management. These rules range from simple compression instructions to moving data across partitions based upon your criteria. For example, if the ILM_ARCH_SW is the "Y" value and the data has not been touched in 3 months, then compress the data using the OLTP compression in Oracle Advanced Compression. These rules are maintained using the free functionality in Oracle Enterprise Manager or, if you prefer, SQL commands can be used to set policies.
  • Support for storage solutions - Third party hardware based storage solutions (including Oracle's storage solutions) have additional ILM based solutions built at the hardware level. Typically those solutions can be used in an ILM based solution with Oracle. Check with your hardware vendor directly for capabilities in this area.
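To make the ILM_DATE and ILM_ARCH_SW mechanics concrete, here is an illustrative query only (the schema, table and retention values are hypothetical; the real assessment is performed by the product's ILM batch jobs and algorithms):

    -- Rows past the active period that the ILM batch jobs would assess:
    -- still flagged as in use (ILM_ARCH_SW = 'N') but older than retention
    SELECT bill_id, ilm_date, ilm_arch_sw
    FROM   cisadm.ci_bill
    WHERE  ilm_arch_sw = 'N'
    AND    ilm_date   <= TRUNC(SYSDATE) - 730;   -- e.g. a two year retention period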

There are a number of resources that can help you understand ILM further, starting with the product documentation and the Oracle Database documentation.

Whitepapers now and in the future

Tue, 2016-11-29 16:40

The whitepapers available for the product will be changing over the next few months to reflect the changes in the product documentation.

The following changes will happen over the next few months:

  • The online documentation provided with the product has been enhanced to encompass some of the content contained in the whitepapers. This means when you install the product you will get the information automatically in the online help and the PDF versions of the documentation.
  • If the online help fully encompasses the whitepaper contents, the whitepaper will be retired to avoid confusion. Always refer to the online documentation first as it is always the most up to date.
  • If some of the whitepaper information is not in the online help, then the new version of that whitepaper will contain the information you need, or another whitepaper, such as the Best Practices series, will be updated with the new information.

I will be making announcements on this blog as each whitepaper is updated to reflect this strategy. This means you will not have to download most of the whitepaper information separately; the information will be available online with the product, on Oracle's documentation site, or as a PDF download from Oracle's Delivery Cloud.

The first whitepaper to be retired is the Configuration Migration Assistant Overview, which is no longer available from My Oracle Support; its content is now part of the documentation supplied with the product.

Remember the first rule: check the documentation supplied with the product FIRST before using the whitepapers. The documentation provided with the product is always up to date, whereas the whitepapers are only updated on a semi-regular basis.

New Utilities Testing Solution version available (5.0.1.0)

Thu, 2016-11-17 17:55

We have released a new version (5.0.1.0) of the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities (OFTAPOU), which is available from Oracle Delivery Cloud for customers and partners. This new OFTAPOU version includes support for more versions of our products. The packs are now cloud compatible, i.e. they can be used for testing applications on Oracle Utilities Cloud services.

The pack now supports the following:

  • Oracle Utilities Customer Care And Billing 2.4.0.3 (updated), 2.5.0.1 (updated) and 2.5.0.2 (updated)
  • Oracle Utilities Mobile Workforce Management 2.2.0.3 (updated)
  • Oracle Real Time Scheduler 2.2.0.3 (updated)
  • Oracle Utilities Mobile Workforce Management 2.3.0 (updated) – with added support for Android/iOS mobile testing.
  • Oracle Real Time Scheduler 2.3.0 (updated) – with added support for Android/iOS mobile testing.
  • Oracle Utilities Application Framework 4.2.0.3, 4.3.0.1, 4.3.0.2 and 4.3.0.3.
  • Oracle Utilities Meter Data Management 2.1.0.3 (updated)
  • Oracle Utilities Smart Grid Gateway (all adapters) 2.1.0.3 (updated)
  • Oracle Utilities Meter Data Management 2.2.0 (new)
  • Oracle Utilities Smart Grid Gateway (all adapters) 2.2.0 (new)
  • Oracle Utilities Work And Asset Management 2.1.1 (updated)
  • Oracle Utilities Operational Device Management 2.1.1 (updated)

The pack now includes integration components that can be used for creating flows spanning multiple applications, known as integration functional flows.

Components for testing the mobile application of ORS/MWM have been added. Using the latest packs, customers will be able to execute automated test flows of the ORS/MWM application on Android and iOS devices.

In addition to the product pack content, the core test automation framework has been enhanced with more features for ease of use. For example, the pack now includes sanity flows to verify installations of individual products. These sanity flows are the same flows used by our cloud teams to verify cloud installations.

The pack includes 1000+ prebuilt testing components that can be used to model business flows using Flow Builder and generate test scripts that can be executed by OpenScript, Oracle Test Manager and/or Oracle Load Testing. This allows customers to adopt automated testing to accelerate their implementations and upgrades whilst reducing their overall risk.

The pack also includes support for the latest Oracle Application Testing Suite release (12.5.0.3), as well as a set of utilities to allow partners and implementers to upgrade their custom-built test automation flows from older product packs to the latest ones.

Oracle Scheduler Integration Whitepaper available

Mon, 2016-10-24 18:23

As part of Oracle Utilities Application Framework V4.3.0.2.0 and above, a new API has been released to allow customers and partners to schedule and execute Oracle Utilities jobs using the DBMS_SCHEDULER package (Oracle Scheduler), which is part of the Oracle Database (all editions). This API allows control and monitoring of product jobs within the Oracle Scheduler so that these can be managed individually or as part of a schedule and/or job chain.

Note: It is highly recommended that the Oracle Scheduler objects be housed in an Oracle Database 12c database for maximum efficiency. 

This has a few advantages:

  • Low Cost - The Oracle Scheduler is part of the Oracle Database license (all editions) so there is no additional license cost for existing instances.
  • Simple but powerful - The Oracle Scheduler has simple concepts which makes it easy to implement but do not be fooled by its simplicity. It has optional advanced facilities to allow features like resource profiling and load balancing for enterprise wide scheduling and resource management.
  • Local or Enterprise - There are many ways to implement Oracle Scheduler to allow it to just manage product jobs or become an enterprise wide scheduler. It supports remote job execution using the Oracle Scheduler Agent which can be enabled as part of the Oracle Client installation. One of the prerequisites of the Oracle Utilities product installation is the installation of the Oracle Client so this just adds the agent to the install. Once the agent is installed it is registered as a target with the Oracle Scheduler to execute jobs on that remote resource.
  • Mix and Match - The Oracle Scheduler can execute a wide range of job types so that you can mix non-product jobs with product jobs in schedules and/or chains.
  • Scheduling Engine is very flexible - The calendaring aspect of the scheduling engine is very flexible, with overlaps supported as well as exclusions (for example, preventing jobs from running on public holidays).
  • Multiple Management Interfaces - The Oracle Utilities products do not include a management interface for the Oracle Scheduler as there are numerous ways the Oracle Scheduler objects can be maintained, including the command line, Oracle SQL Developer and Oracle Enterprise Manager (base install, no pack needed).
  • Email Notification - Individual jobs can send status via email based upon specific conditions. The format of the email is now part of the job definition, which means it can be customized far more easily (a sketch follows this list).
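To give a feel for the scheduling side, here is a minimal sketch using the standard DBMS_SCHEDULER package (the job name, schema and stored procedure are hypothetical; the product API documented in the whitepaper below defines the actual entry points):

    BEGIN
      -- Define and enable a nightly job; the calendaring syntax supports
      -- the exclusions and overlaps described above
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'NIGHTLY_PRODUCT_BATCH',     -- hypothetical job name
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'CISADM.SUBMIT_BATCH_JOB',   -- hypothetical procedure
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'FREQ=DAILY; BYHOUR=2',
        enabled         => TRUE,
        comments        => 'Nightly product batch run');

      -- Send an email only when the job fails (requires an SMTP server
      -- to be configured for the scheduler)
      DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION(
        job_name   => 'NIGHTLY_PRODUCT_BATCH',
        recipients => 'ops@example.com',
        events     => 'JOB_FAILED');
    END;
    /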

Before using the Oracle Scheduler, it is highly recommended that you read the Scheduler documentation provided with the database.

We have published a new whitepaper which outlines the API as well as some general advice on how to implement the Oracle Scheduler with Oracle Utilities products. It is available from My Oracle Support at Batch Scheduler Integration for Oracle Utilities Application Framework (Doc id: 2196486.1).

Architecture Guidelines - Same Domain Issues

Sun, 2016-10-23 21:32

After a long leave of absence to battle cancer, I am back and the first article I wanted to publish is one about some architectural principles that may help in planning your production environments.

Recently I was asked by a product partner about the possibility of housing more than one Oracle Utilities product and other Oracle products on the same machine, in the same WebLogic domain and in the same Oracle database. The idea was that the partner wanted to save hardware costs by combining installations. This is technically possible (to varying extents) but not necessarily practical for certain situations, like production. One of my mentors once told me, "even though something is possible, it does not mean it is practical".

Let me clarify the situation. We are talking about multiple products on the same WebLogic domain on the same non-virtualized hardware sharing the database via different schemas. That means non-virtualized sharing of CPU, memory and disk. 

Let me explain why housing multiple products in the same domain and/or same hardware is not necessarily a good idea:

  • Resource profiles - Each product typically has a different resource profile in terms of CPU, memory and disk usage. By placing multiple products in this situation, you would have to compromise on the shared settings to take all the products into account. For example, as the products might share the database instance, the instance level parameters would represent a compromise across the products. This may not be optimal for the individual products.
  • Scalability issues - By limiting your architecture to specific hardware you are constrained in any possible future expansion. As your transaction volumes grow, you need to scale and you do not want to limit your solutions.
  • Incompatibilities - Whilst the Oracle Utilities products are designed to interact at the platform level, not all products are compatible when sharing resources. Let me explain with an example. Over the last few releases we have been replacing our internal technology with Oracle technology. One of the things we replaced was the Multi-Purpose Listener (MPL) with the Oracle Service Bus, to provide industry level integration possibilities. Now, it is not possible to house Oracle Service Bus within the same domain as Oracle Utilities products. This is not a design flaw but intentional: a single instance of Oracle Service Bus can be shared across products and scaled separately. Oracle Service Bus is only compatible with Oracle SOA Suite as it builds domain level configuration, which should not be compromised by sharing that domain with other products.

There is a better approach to this issue:

  • Virtualization - Using a virtualization technology can address the separation of resources and scalability. It allows for lots of combinations for configuration whilst allocating resources appropriately for profiles and scalability as your business changes over time.
  • Clustering and server separation - Oracle Utilities products can live on the same WebLogic domain, but there are some guidelines to make this work appropriately. For example, each product should have its own cluster and/or servers within the domain. This allows for individual product configuration and optimization. Remember to put non-Oracle Utilities products, such as Oracle SOA Suite and Oracle Service Bus, on their own domains as they are typically shared enterprise wide and have their own pre-optimized domain setups.

This is the first in a series of articles on architecture that I hope to impart over the next few weeks.

Out for a while

Tue, 2016-06-28 00:31
Due to some medical issues I will not be posting till September this year. Thank you for your patience.

Embedded mode limitations for Production systems

Thu, 2016-05-26 20:33

In most implementations of Oracle Utilities products the installer creates an embedded mode installation. This is called embedded as the domain configuration is embedded in the application, which is ideal for demonstration and development environments where the default setup is enough for those types of activities.

Over time, though, customers and partners will want to use more and more of the Oracle WebLogic domain facilities, including advanced setups like multiple servers, clusters, advanced security setups etc. Here are a few important things to remember about embedded mode:

  • The embedded mode domain setup is fixed, with a single server that houses the product and the administration server using the internal basic security setup. In non-production this is reasonable as the requirements for the environment are simple.
  • The domain file (config.xml) is generated by the product, using a template, assuming it is embedded only.
  • When implementations need additional requirements within the domain there are three alternatives:
    • Make the changes in the domain from the administration console and then convert the new config.xml generated by the console into a custom template. This needs to be done because whenever Oracle delivers ANY patch or upgrade (or you make configuration changes), initialSetup[.sh] must be run to apply the patch, upgrade or configuration to the product, and this resets the file back to the factory provided template unless you are using a custom template. Basically, if you decide to use this option and do not implement a custom template, you will lose your changes each time.
    • In later versions of OUAF we introduced user exits. These allow implementations to add to the configuration using XML snippets. This does require you to understand the configuration file being manipulated, and we have sprinkled user exits all over the configuration files to allow extensions. Using this method, you make changes to the domain using the configuration files, examine the changes to the domain file, and then decide which user exit is available to reflect that change and add the relevant XML snippet. Again, you must understand the configuration file to make sure you do not corrupt the domain.
    • The easiest option is to migrate to native mode. This removes the embedded nature of the domain and houses it within Oracle WebLogic. This is explained in the Native Installation whitepaper (Doc Id: 1544969.1) available from My Oracle Support.

Native installation allows you to use the full facilities within Oracle WebLogic without the restrictions of embedded mode. The advantages of native installations are the following:

  • The domain can be setup according to your company standards.
  • You can implement clusters and multiple servers, including dynamic clustering.
  • You can use the security features of Oracle WebLogic to implement complex security setups, including SSO solutions.
  • You can lay out the architecture according to your volumes to manage within your SLAs.
  • You can implement JDBC connection pooling, Work Managers, advanced diagnostics etc.

Oracle recommends that native installations be used for environments where you need to take advantage of the domain facilities. Embedded mode should only be used within the restrictions it poses.

Additional CCB 2.5 benchmark information

Sun, 2016-05-08 19:27

Recently I published a link to a summary report for the recent Oracle Utilities Customer Care and Billing 2.5 benchmark. Due to popular demand, we have released additional information about the benchmark including some configuration advice in a new additional whitepaper Oracle Utilities Customer Care and Billing V2.5 and 2.4 Comparison Benchmark Whitepaper (Doc Id: 2135359.1) now available from My Oracle Support.

This whitepaper was provided from our performance team and provides additional technical information about the benchmark setup as well as the results.

DISTRIBUTED mode deprecated

Sun, 2016-05-01 18:50

Based upon feedback from partners and customers, the DISTRIBUTED mode used in the batch architecture has been deprecated in Oracle Utilities Application Framework V4.3.x and above. The DISTRIBUTED mode was originally introduced to the batch cluster architecture back in Oracle Utilities Application Framework V2.x and was popular, but it suffered from a number of restrictions. Given the flexibility of the batch architecture has been expanded in newer releases, it was decided to deprecate DISTRIBUTED mode to encourage more effective use of the architecture.

It is recommended that customers using this mode migrate to CLUSTERED mode using a few techniques:

  • For customers on non-production environments, it is recommended to use CLUSTERED mode using the single server (ss) template used by the Batch Edit facility. This is a simple cluster that uses CLUSTERED mode without the advanced configurations in a clustered environment. It is restricted to single host servers so it is not typically recommended for production or clustered environments that use more than one host server.
  • For customers on production environments, it is recommended to use CLUSTERED mode with the unicast (wka) template used by the Batch Edit facility. This will allow flexible configuration without the use of multi-cast which can be an issue on some implementations using CLUSTERED mode. The advantage of Batch Edit is that it has a simple interface to allow you to define this configuration without too much fuss. 

The advantage of Batch Edit when building your new batch configurations is that it is simple to use and generates an optimized set of configuration files that can be used directly by the batch architecture. When executing jobs, the DISTRIBUTED tags must be removed from the command lines or configuration files to use the new architecture.

Customers should read the Batch Best Practices (Doc Id: 836362.1) and the Server Administration Guide shipped with your product for advice on Batch Edit as well as the templates mentioned in this article.

Migrating Oracle Utilities products from On Premise to Oracle Public Cloud

Thu, 2016-04-28 18:48

A while back, Oracle Utilities announced that the latest releases of the Oracle Utilities Application Framework applications were supported on Platform As A Service (PaaS) on Oracle Public Cloud. As part of that support, a new whitepaper has been released outlining the process of migrating an on-premise installation of the product to the relevant Platform As A Service offering on Oracle Public Cloud.

The whitepaper covers the following from a technical point of view:

  • The Oracle Cloud services to obtain to house the products, including the Oracle Java Cloud Service and Oracle Database As A Service with associated related services.
  • Setup instructions on how to configure the services in preparation to house the product.
  • Instructions on how to prepare the software for transfer.
  • Instructions on how to transfer the product schema to an Oracle Database As A Service instance using various techniques.
  • Instructions on how to transfer the software and make configuration changes to realign the product installation for the cloud. The configuration must follow the instructions in the Native Installation Oracle Utilities Application Framework (Doc Id: 1544969.1) available from My Oracle Support which has also been updated to reflect the new process.
  • Basic instructions on using the native cloud facilities to manage your new PaaS instances. More information is available in the cloud documentation.

The whitepaper applies to the latest releases of the Oracle Utilities Application Framework based products only. Customers and partners wanting to establish new environments (with no previous installation) can use the same process with the addition of actually running the installation on the cloud instance.

Customers and partners considering using Oracle Infrastructure As A Service can use the same process with the addition of installing the prerequisites.

The Migrating From On Premise To Oracle Platform As A Service (Doc Id: 2132081.1) whitepaper is available from My Oracle Support. This will be the first in a series of cloud based whitepapers.

Oracle Utilities Customer Care And Billing 2.5 Benchmark available

Fri, 2016-04-22 15:26

Oracle Utilities Customer Care and Billing v2.5.x marked a major change in application technology as it is an all Java-based architecture. In past releases, both Java and COBOL were supported. Over the last few releases, COBOL support has progressively been replaced to optimize the product.

In recently conducted performance benchmark tests, it was demonstrated that the performance of Oracle Utilities Customer Care and Billing v2.5.x, an all Java-based release, is at least 15 percent better, in all use cases tested, than that of the already high performing Oracle Utilities Customer Care and Billing v2.4.0.2, which included the COBOL-based architecture for key objects.

The performance tests simulated a utility with 10 million customers with both versions running the same workloads. In the key use cases tested, Oracle Utilities Customer Care and Billing v2.5.x performed at least 15% faster than the previous release.

Additionally, Oracle Utilities Customer Care and Billing v2.5.x processed 500,000 bills within just 45 minutes (representing the nightly batch billing for a utility serving 10 million customer accounts, divided into twenty groups so that 5% of all customers are billed on each of the 20 working days during the month).

The improved Oracle Utilities Customer Care and Billing performance ultimately reduces the utility staff overtime hours required to oversee batch billing, allows utilities to consolidate tasks on fewer servers to reduce data center size and cost, and enables utilities to confidently explore new business processes and revenue sources, such as running billing services for smaller utilities.

A whitepaper is available summarizing the results and details of the architecture used. 

Using Database Resource Plans for effective resource management

Mon, 2016-04-11 21:27

In a past article we announced support for Database Resource Plans. This facility is a technique that implementations can use to set limits and other resource constraints on processing, to help optimize resource usage for Oracle Utilities products.

I have been asked a couple of follow-up questions about use cases that can exploit this facility. Here are a few things that might encourage its use:

  • Database Resource Plans can help multiple channels share resources, helping to avoid database contention between channels. For example, most utilities typically do not run batch processes during online hours, as batch processes may cause contention with online users, causing both channels to run slower. Using Database Resource Plans you can tell the database to share the resources more effectively and constrain batch to have minimal impact on online users. Of course, batch will borrow resources from online, but by using resource plans you can constrain it as much as is practical.
  • Database Resource Plans are very flexible. You can set plans for time periods to reflect different resource profiles by channel by time of day. Using the batch/online use case from the last point, you can set batch to use fewer resources during the day and more at night. Conversely, you can set online to use more resources during the day and fewer at night. This balances resources with their optimal use.
  • Database Resource Plans can be set globally or at lower levels. In past releases of Oracle Utilities Application Framework, a set of database session visibility variables were set so that the database connection can be identified for monitoring. These same variables can now be used with resource plans. They include the program/batch job, threadpool/thread, client authorization user, client user tag etc. This means, if you desire, you can set very fine-grained rules based upon session characteristics in your database resource plans using Consumer Groups (see the sketch after this list).
  • Database Resource Plans feature monitoring at the plan, directive and consumer group level to assess the effectiveness of those resource plans. This is available from database monitoring products, including Oracle Enterprise Manager.
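As a sketch of the mechanics only (DBMS_RESOURCE_MANAGER is the standard database package; the plan, consumer group and module names are hypothetical):

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
        consumer_group => 'UTIL_BATCH',                  -- hypothetical group
        comment        => 'Product batch sessions');

      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan    => 'DAYTIME_PLAN',                       -- hypothetical plan
        comment => 'Favour online users during business hours');

      -- Cap batch CPU utilization so online users are not starved
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan              => 'DAYTIME_PLAN',
        group_or_subplan  => 'UTIL_BATCH',
        comment           => 'Constrain batch in online hours',
        utilization_limit => 20);

      -- Every plan must also have a directive for OTHER_GROUPS
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'DAYTIME_PLAN',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'All other sessions, including online');

      -- Route sessions to the group using an attribute the sessions set
      DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
        attribute      => DBMS_RESOURCE_MANAGER.MODULE_NAME,
        value          => 'BATCH',                       -- hypothetical module value
        consumer_group => 'UTIL_BATCH');

      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

    -- Activate the plan (plans can also be switched by time of day
    -- via Scheduler windows)
    ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DAYTIME_PLAN';

Note that sessions also need the switch privilege (granted via DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP) before the mapping takes effect.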

Database Resource Plans are another feature you can use from the database to effectively manage your resource usage to ensure each channel stays within its allocated resource profile. It is all about sharing the available resources and minimizing contention whilst harnessing the processing power available more effectively.
