Anthony Shorten

Oracle Blogs

Web Services Best Practices Whitepaper

Wed, 2016-12-14 16:13

Over the last few releases, new web services capabilities have been added to the Oracle Utilities Application Framework. A wide range of new and updated facilities are now available for inbound and outbound integration.

A new whitepaper has been released outlining the best practices for using the new and updated web services capabilities, including:

  • Inbound Web Services - Container SOAP based web services
  • Message Driven Bean (MDB) - Container based JMS resource processing
  • REST Support - JSON/XML based REST services
  • Real Time Adapters - Real time integration for transports
  • Outbound Messages - Service based communications
  • SOA/Oracle Service Bus Integration - SOA middleware based interfaces
  • Web Service Integration - Importing and executing an external web service
The whitepaper is available from My Oracle Support as Web Services Best Practices for Oracle Utilities Application Framework (Doc ID: 2214375.1).

Customers using XAI can refer to the XAI Best Practices whitepaper (Doc ID: 942074.1), also available from My Oracle Support. The Web Services Best Practices whitepaper replaces the XAI whitepaper for newer releases.


Connection Pools

Tue, 2016-12-13 19:42

The Oracle Utilities Application Framework uses connection pooling to manage the number of connections in the architecture. The advantage of pools is that connections are shared across concurrent users rather than allocated to individual users, where they may stay allocated whilst the user is not active. This ensures that the connections allocated are actually being used, so the number of connections can be less than the number of users using the product at any time.

The pool has a number of configuration settings:

  • Minimum Size - This is the initial size of the connection pool at startup. For non-production this is typically set to 1, but in production it should be set to a size that represents the minimum number of concurrent users at any time. The number of connections in the pool will not fall below this setting.
  • Maximum Size - The maximum number of connections to support in the pool. This number typically represents the expected peak concurrent connections in the pool. If this number is too small, connections will queue, causing delays, and eventually connections against the pool will be refused.
  • Growth rate - This is the number of new connections to add to the pool when the pool is busy and new connections are required up to the Maximum Size. Some pool technologies allow you to configure the number of connections to add each time.
  • Inactive Detection - Pools have a collection of settings to define when a connection in the pool can be shut down due to inactivity. This allows the pool to shrink dynamically when traffic drops off peak.
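
To illustrate how these settings interact, here is a toy pool in Python. This is purely illustrative; the names and behaviour are my own sketch, not the WebLogic or UCP implementation:

```python
import time

class ToyPool:
    """Toy connection pool illustrating the four settings above.
    Illustrative only: not the WebLogic or UCP implementation."""

    def __init__(self, minimum=5, maximum=50, growth=5, idle_timeout=60):
        self.minimum = minimum            # pool never shrinks below this
        self.maximum = maximum            # hard cap; beyond this, callers queue or are refused
        self.growth = growth              # connections added per growth event
        self.idle_timeout = idle_timeout  # seconds before an idle connection is reaped
        # (connection, last_used) pairs; the pool starts at the minimum size
        self.idle = [(object(), time.time()) for _ in range(minimum)]
        self.busy = []

    def acquire(self):
        if not self.idle and len(self.busy) < self.maximum:
            self._grow()
        if not self.idle:
            return None                   # exhausted: caller must queue (or is refused)
        conn, _ = self.idle.pop()
        self.busy.append(conn)
        return conn

    def _grow(self):
        headroom = self.maximum - (len(self.idle) + len(self.busy))
        for _ in range(min(self.growth, headroom)):
            self.idle.append((object(), time.time()))

    def release(self, conn):
        self.busy.remove(conn)
        self.idle.append((conn, time.time()))

    def reap_idle(self, now=None):
        """Shrink back toward the minimum as traffic drops off peak."""
        now = time.time() if now is None else now
        fresh = [(c, t) for c, t in self.idle if now - t < self.idle_timeout]
        expired = [(c, t) for c, t in self.idle if now - t >= self.idle_timeout]
        deficit = self.minimum - (len(fresh) + len(self.busy))
        if deficit > 0:
            fresh.extend(expired[:deficit])  # never fall below the minimum
        self.idle = fresh
```

A pool at its maximum refuses (or queues) further requests, and the reaper shrinks the pool back toward the minimum as connections go idle.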

There are typically two main connection pools that are used in the product:

  • Client Connection Pool - These are the connections between the client machines (browser, web services client, etc.) and the server. This connection pool can be direct or via proxies (if you are using a proxy generally or as part of a high availability solution). If you are using Oracle WebLogic, this is managed by default by a Global Work Manager, which allows unlimited connections. While that sounds ideal, it means you may run into an out-of-memory issue as the number of connections increases, well before any connection limit is hit. It is possible to configure specific Work Managers to restrict the number of connections and prevent out-of-memory conditions. Most customers use the Global Work Manager with no issues, but it may be worth investigating Work Managers to see if you want to use the Capacity Constraint or Max Threads Constraint capabilities to restrict connection numbers.
  • Database Connection Pool - These are the connections between the product and the database. The connection pools can use UCP or JDBC data sources. The latter uses Oracle WebLogic JNDI to define the attributes of the connection pools as well as advanced connection properties such as FCF and GridLink. The choice between UCP and JDBC data sources will depend on your site standards and how often you want to be able to change the connections to the database. JDBC data sources are more flexible in terms of maintaining pool information, whereas UCP is more desirable where fixed configurations are typical.

Additionally, if you are using a proxy to get to the product, most proxies have restrictions on connection numbers to consider when determining pool sizes.

So when deciding the size of the pools and their attributes, there are a number of considerations:

  • The goal is to have enough connections in a pool to satisfy the number of concurrent users at any time. This includes peak and non-peak periods.
  • When designing pool sizes and other attributes, remember that wasted connections are a burden on resources. Making the pool dynamic ensures resources are used optimally as traffic fluctuates.
  • Conversely, establishing new connections is an overhead when traffic grows, though in terms of overall performance this overhead is minimal.
  • The connections in the pool are only needed for users actively using the server resources. They are not used when they are idle or using the client aspects of the product (for example, moving their mouse across the screen, interacting with non-dynamic fields etc).
  • Set the minimum number of connections to the absolute minimum you want to start with, or the number you want to have available at all times. It is not recommended to use the non-production default of one (1), as that would cause the pool to create lots of new connections as traffic ramps up during the business day.
  • Set the maximum number to the expected peak concurrent connections at any time, with some headroom for growth. Active pool connections take resources (CPU, memory, etc.), so make sure this number is reasonable for your business. Some customers use test figures as a starting point, talk to their management to determine the number of peak user connections, or use performance testing to decide the figure. I have heard implementation partners talk about rules of thumb where they estimate based upon total users.
  • Set the inactivity settings to destroy connections when they become idle for a period of time. This period can vary, but generally a low value is not recommended due to the nature of typical traffic seen on site. For example, partners will generally look at 30 to 60 seconds of inactivity, and maybe more. The idea is to gradually remove inactive connections as traffic drops off peak.
  • If the pool allows you to specify the number of new connections to create, consider using a number other than one (1) for online channels. Whilst a low value seems logical, it will result in a slower ramp-up rate.
  • Monitor the connection pools for connection queuing, as that may indicate your maximum is too low for your site.
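
The sizing guidance above can be sketched as simple arithmetic. The active_fraction and headroom values below are illustrative assumptions of my own, not product-documented figures:

```python
def pool_sizing(peak_concurrent_users, active_fraction=0.25, headroom=0.2):
    """Rough pool-sizing sketch based on the guidelines above.
    Assumptions (not product-documented): only a fraction of logged-in
    users hold a connection at any instant, some headroom is left for
    growth, and the off-peak floor is half of peak connections."""
    peak_connections = peak_concurrent_users * active_fraction
    maximum = int(peak_connections * (1 + headroom))
    minimum = max(1, int(peak_connections * 0.5))
    return minimum, maximum

# e.g. 1000 logged-in users at peak -> a (minimum, maximum) starting point
# pool_sizing(1000)  # (125, 300)
```

Any such figures should be validated with performance testing and then revisited as usage grows.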

One other piece of advice from my troubleshooting days: do not assume the figures you use today will be valid in a year's time. I have found that as the product implementation ages, end users will use the product very differently over time. I have chatted to consultants about the fact that I have personally seen traffic double in the first 18 months of an implementation. Now, that is not a hard and fast rule, just an observation: when a product is initially implemented, end users are conservative in its use, but over time, as they get more accustomed to the product, their usage and therefore traffic volume increases. This must be reflected in the pool sizing and attributes.


Authentication and Authorization Identifiers

Sun, 2016-12-11 19:23

In Oracle Utilities Application Framework, the user identification is actually divided into two parts:

  • Authentication Identifier (aka Login Id) - This is the identifier used for authentication (challenge/response) for the product. This identifier is up to 256 characters in length and must be matched by the configured security repository for it to be checked against. By default, if you are using Oracle WebLogic, there is an internal LDAP based security system that can be used for this purpose. It is possible to link to external security repositories using the wide range of Oracle WebLogic security providers included in the installation. This applies to Single Sign-On solutions as well.
  • Authorization Identifier (aka UserId) - This is the short user identifier (up to 8 characters in length) used for all service and action authorization as well as low level access. 

The two identifiers are separated for a number of key reasons:

  • Authentication Identifiers can be changed. Use cases like name changes or business changes mean that the authentication identifier needs to be able to change. As long as the security repository is also changed, the identifier will stay in synchronization for correct login.
  • Authentication Identifiers are typically email addresses which can vary and are subject to change. For example, if the company is acquired then the user domain most probably will change.
  • Changes to Authentication Identifiers do not affect any existing audit or authorization records. As the authorization user is used for internal processing, the authentication identifier, while tracked, is not used for security internally once you have been successfully authenticated.
  • Authorization Identifiers are not changeable. They can be related to the Authentication Identifier, such as using the first initial and the first 7 characters of the surname, or be randomly generated by an external Identity Management solution.
  • One of the main reasons the Authorization Identifier is limited in size is to allow a wide range of security solutions to be hooked into the architecture and provide an efficient means of tracking. For example, the identifier is propagated in the connection across the architecture to allow for end to end tracking of transactions.
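
The derivation mentioned above (first initial plus the first seven characters of the surname) can be sketched as follows. The function name is my own, and sites may instead generate the identifier randomly via an Identity Management solution:

```python
def derive_authorization_id(first_name, surname):
    """One possible derivation of the short Authorization Identifier:
    first initial plus the first seven characters of the surname.
    Illustrative sketch; sites may use any scheme that fits 8 characters."""
    candidate = (first_name[:1] + surname[:7]).upper()
    return candidate[:8]  # authorization ids are at most 8 characters

# derive_authorization_id("Anthony", "Shorten")  # "ASHORTEN"
```

Whatever scheme is chosen, the result must be unique per user, which is why many sites delegate the generation to their Identity Management solution.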

Security has been augmented in the last few releases of the Oracle Utilities Application Framework to allow flexible levels of control and tracking. Each implementation can decide which aspects of security to track, using the tools available in the product or third-party tools.


Service Pack Support for Oracle Utilities Enterprise Manager Pack

Thu, 2016-12-08 13:51

Customers using the Application Management Pack for Oracle Utilities within Oracle Enterprise Manager can prepare our service packs for installation within the pack by converting the service pack to Enterprise Manager's format. The utilities are supplied with the service pack and give customers the flexibility of either manually installing the pack (the default) or automating the installation via Oracle Enterprise Manager.

A whitepaper outlining the process and utilities provided is now available from My Oracle Support under Enterprise Manager for Oracle Utilities Whitepaper: Service Pack Compliance (Doc ID 2211363.1).


Using ADO and HeatMap in the Utilities ILM Solution

Wed, 2016-12-07 16:19

The ILM features of the Oracle Database are used in the Oracle Utilities ILM capability to implement the technical side of the solution. In Oracle Database 12c, two new facilities were added to the already available ILM features to make the implementation of ILM easier. These features are Automatic Data Optimization (ADO) and Heat Map.

The Heat Map feature allows Oracle itself to track the use of blocks and segments in your database. Every time a program or user touches a row in the database, such as via SELECT, UPDATE or DELETE SQL statements, Heat Map records that it was touched. This information is important as it helps profile the actual usage of the data in your database, and it can be used by Automatic Data Optimization. Heat Map is disabled by default and requires a database initialization parameter to be changed.

Automatic Data Optimization is a facility where DBAs can set ILM rules, known as Policies, to perform certain ILM actions on the data. For example: if the data has not been touched, according to Heat Map data, within X months, then COMPRESS it to save space; if ILM_ARCH_SW is set to Y, move the data to partition X. There are a lot of combinations and facilities in the ADO rules to give DBAs flexibility. ADO allows DBAs to specify the rules and supplies a procedure that can be scheduled, at the convenience of the site, to implement them.
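
Conceptually, an ADO policy pairs a condition (driven by Heat Map access data) with an action. A toy sketch of that evaluation in Python (the function is illustrative; real policies are declared to the database, not coded by hand):

```python
from datetime import datetime, timedelta

def evaluate_ado_rule(last_touched, months_inactive, action):
    """Toy model of an ADO-style policy: if (per Heat Map data) the data
    has not been touched within the given number of months, return the
    action to perform. Illustrative only; real ADO policies are defined
    and evaluated inside the Oracle Database."""
    threshold = datetime.now() - timedelta(days=30 * months_inactive)
    return action if last_touched < threshold else None

# A segment untouched for a year triggers compression under a 6-month rule:
# evaluate_ado_rule(datetime.now() - timedelta(days=365), 6, "COMPRESS")
```

The point of the sketch is the shape of a policy: a condition over access history plus an action, evaluated on a schedule.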

ADO and Heat Map are powerful data management tools that DBAs should get used to. They allow simple specification of rules and use features in the database to help you manage your data.

For more information, refer to the Heat Map and ADO material in the Oracle Database documentation.


Overload Protection Support

Mon, 2016-12-05 17:19

One of the features we support in Oracle Utilities Application Framework V4.3.x and above is the Oracle WebLogic Overload Protection feature. By default, Oracle WebLogic is set up with a global Work Manager, which gives you unlimited connections to the server. Whilst this is reasonable for non-production systems, Oracle generally encourages limiting connections in production to avoid overloading the server.

In production, it is generally accepted that the Oracle WebLogic servers will either be clustered or run as a set of managed servers, as this is the typical setup for the high availability requirements of that environment. Using these configurations, it is recommended to set limits on individual servers to enforce capacity requirements across your cluster/managed servers.

There are a number of recommendations when using Overload Protection:

  • The Oracle Utilities Application Framework automatically sets the panic action to system-exit. This is the recommended setting, so that the server will stop and restart if it is overloaded. In a clustered or managed server environment, end users are routed to other servers in the configuration while the server is restarted by Node Manager. This is set at the ENVIRON.INI level as part of the install in the WLS_OVERRIDE_PROTECT variable, via the WebLogic Overload Protection setting in the configureEnv utility.
  • Ensure you have setup a high availability environment either using Clustering or multiple managed servers with a proxy (like Oracle HTTP Server or Oracle Traffic Director). Oracle has Maximum Availability Guidelines that can help you plan your HA solution.
  • By default, the product ships with a single global Work Manager within the domain (this is the default for Oracle WebLogic). It is possible to create custom Work Manager definitions with a Capacity Constraint and/or Maximum Threads Constraint, which can be allocated to product servers to provide additional capacity controls.
For more information about Overload Protection and Work Managers refer to Avoiding and Managing Overload and Using Work Managers to Optimize Scheduled Work.
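
The effect of a capacity limit can be sketched simply (illustrative only; in Oracle WebLogic this is expressed declaratively via a Capacity Constraint rather than in code):

```python
def admit_request(in_flight, capacity):
    """Toy capacity-constraint check: once the number of in-flight
    requests reaches the configured capacity, new work is rejected
    rather than being allowed to exhaust server memory. The capacity
    figure is an assumed site-specific value."""
    return "reject" if in_flight >= capacity else "admit"
```

Rejecting excess work at the boundary is what keeps an overloaded server from running out of memory while other servers in the cluster absorb the traffic.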


ILM Planning - The First Steps

Mon, 2016-12-05 16:22

The first part of implementing an Information Lifecycle Management (ILM) solution for your Oracle Utilities products using the ILM functionality provided is to decide the business retention periods for your data.

Before discussing the first steps, a few concepts need to be understood:

  • Active Period - This is the period/data group where the business needs fast update access to the data. This is the period the data is actively used in the product by the business.
  • Data Groups - These are the various stages the data is managed after the Active period and before archival. In these groups the ILM solution will use a combination of tiered storage solutions, partitioning and/or compression to realize cost savings.
  • Archival - This is typically the final state of the data where it is either placed on non-disk related archival media (such as tape) or simply removed.

The goal of the first steps is to decide two major requirements for each ILM enabled object:

  • How long should the active period be? In other words, how long does the business need update access to the data?
  • How long does the data need to remain accessible to the business? In other words, how long should the data be kept in the database overall? Remember the data is still accessible by the business whilst it is in the database.

The decisions here are affected by a number of key considerations:

  • How long the data needs to be available for update by business processes - This can be how long the business needs to rebill, or how long update activity is allowed on a historical record. Remember, this is the requirement for the BUSINESS to get update access.
  • How long you legally need to be able to access the records - Each jurisdiction will have legal and government requirements on how long data must remain available for update. For example, there may be a government regulation around rebilling or how long a meter read can be available for change.
  • The overall data retention periods are dictated by the business and legal requirements for access to the data. This can be tricky as tax requirements vary from country to country. For example, in many countries the data needs to be available to tax authorities for, say, 7 years in machine-readable format. This does not mean it needs to be in the system for 7 years; it just needs to be available when requested. I have seen customers use tape storage, off-site storage or even the old microfiche storage (that is showing my age!).
  • Retention means that the data is available on the system even after update access is no longer required. This means only read access is needed, and the data can even be compressed to save storage and money. This is where the crossover to the technical aspects of the solution starts to happen. Oracle calls these Data Groups, where each group of data, usually based on a date range, has different storage/compression/access characteristics. This can be expressed as a partition per data group to allow for physical separation of the data. Remember that the data is still accessible; it is just not on the same physical storage and location as the more active data.

Now the best way of starting this process is working with the business to decide the retention and active periods for the data. It is not as simple as a single conversation and may require some flexibility in designing the business part of the solution.

Once agreement has been reached, the first part of the ILM configuration is to update the Master Configuration for ILM with the retention periods agreed for the active period. This enables the business part of the process to be initiated. The ILM configuration sets the retention period, in days, on each object (in some cases on subsets of objects). This is used by the ILM batch jobs to decide when to assess the records for the next data groups.
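Conceptually, the assessment performed by the ILM batch jobs boils down to a date comparison: has a row aged past its configured retention period? The sketch below is illustrative only; the table name is a placeholder, and in practice this assessment is done by the product's ILM batch jobs and algorithms, not by ad hoc SQL.

```sql
-- Conceptual sketch only: how a retention period in days becomes an
-- eligibility test. Table name is a placeholder.
SELECT COUNT(*)
FROM   some_ilm_enabled_table
WHERE  ilm_arch_sw = 'N'                -- still in the ACTIVE period
AND    ilm_date + 730 < SYSDATE;        -- e.g. a 730-day retention period
```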

There will be additional articles in this series which walk you through the ILM process.
ILM Clarification

Wed, 2016-11-30 21:35

Lately I have received a lot of partner and customer questions about the ILM capability that we ship with our solutions. Our ILM solution is a combined business and technical capability that allows customers to implement cost-effective data management for product transaction tables. These tables grow quickly, and the solution allows the site to define their business retention rules as well as implement storage solutions that realize cost savings whilst retaining data appropriately.

There are several aspects of the solution:

  • Built-in functionality - These are retention definitions, contained in a Master Configuration record, that you configure, as well as some prebuilt algorithms and ILM batch jobs. The prebuilt algorithms are called by the ILM batch jobs to assess the age of a row as well as check for any outstanding related data for ILM-enabled objects. Additional columns have been added to the ILM-enabled objects to help track the age of the record and to set flags for the technical aspects of the solution to use. The retention period defines the ACTIVE period of the data for the business, which is typically the period in which the business needs fast, updatable access to the data.
  • New columns - Two columns have been added: ILM_DATE and ILM_ARCH_SW. ILM_DATE is the date used to determine the age of the row. By default it is set to the creation date of the row, but as it is part of the object, implementers can optionally alter this value after it is set to influence the retention period for individual rows. ILM_ARCH_SW is set to "N" by default, indicating the business is using the row. When a row becomes eligible, in other words when ILM_DATE plus the retention period configured for the object has passed, the ILM batch jobs assess the row against the ILM algorithms to determine if any business rules indicate the record is still active. If the business rules indicate nothing is outstanding for the row, ILM_ARCH_SW is set to "Y". This value effectively tells the system that the business has finished with that row in the ACTIVE period. Conversely, if a business rule indicates the row needs to be retained, then ILM_ARCH_SW remains "N".
  • Technical aspects of the solution - Once ILM_ARCH_SW is set to "Y", the ILM features within the database are used, so some licensing aspects apply:
    • Oracle Database Enterprise Edition is needed to support the ILM activities. Other editions do not have support for the features used.
    • The Partitioning option of the Oracle Database Enterprise Edition is required as a minimum requirement. This is used for data group isolation and allowing storage characteristics to be set at the partition level for effective data management.
    • Optionally, it is recommended to license the Oracle Advanced Compression option. This option allows for greater cost savings by permitting higher levels of compression to be used as a tool to realize further savings. The base compression in Oracle can be used as well, but it is limited and not optimized for some activities.
    • Optionally, customers can use the free ILM Assistant add-on to the database (training for ILM Assistant). This is a web-based planning tool, based upon Oracle APEX, that allows DBAs to build different storage scenarios and assess the cost savings of each. It does not implement the scenarios, but it will generate some basic partitioning SQL. Generally, for Oracle 12c customers ILM Assistant is not recommended, as it does not cover ALL the additional ILM capabilities of that version of the database. Personally, I only tend to recommend it to customers who have different tiered storage solutions, which is not a lot of customers generally.
    • Oracle 12c now includes additional (and free) ILM capabilities built into the database, namely Automatic Data Optimization and Heat Map. These are disabled by default and can be enabled using initialization parameters on your database. The Heat Map tracks the usage profile of the data in your database automatically. Automatic Data Optimization can use Heat Map information, and other information, to define and implement rules for data management. These rules can range from simple compression instructions to moving data across partitions based upon your criteria. For example: if ILM_ARCH_SW is "Y" and the data has not been touched in 3 months, then compress the data using the OLTP compression in Oracle Advanced Compression. The rules are maintained using the free functionality in Oracle Enterprise Manager or, if you prefer, SQL commands can be used to set policies.
  • Support for storage solutions - Third party hardware based storage solutions (including Oracle's storage solutions) have additional ILM based solutions built at the hardware level. Typically those solutions will be able to be used in an ILM based solution with Oracle. Check with your hardware vendor directly for capabilities in this area.
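To make the Oracle 12c point concrete, the following is a minimal sketch of enabling the Heat Map and attaching an Automatic Data Optimization policy to a table. The table name is a placeholder and the policy shown is a generic example, not the product-prescribed configuration; refer to the Oracle 12c documentation for the full syntax and options.

```sql
-- Illustrative only: enable Heat Map tracking, then add an ADO policy that
-- compresses rows once they have not been modified for 3 months.
ALTER SYSTEM SET heat_map = ON;

ALTER TABLE some_ilm_enabled_table
  ILM ADD POLICY
    ROW STORE COMPRESS ADVANCED ROW
    AFTER 3 MONTHS OF NO MODIFICATION;
```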

There are a number of resources that can help you understand ILM more:

Whitepapers now and in the future

Tue, 2016-11-29 16:40

The whitepapers available for the product will be changing over the next few months to reflect the changes in the product documentation.

The following changes will happen over the next few months:

  • The online documentation provided with the product has been enhanced to encompass some of the content contained in the whitepapers. This means when you install the product you will get the information automatically in the online help and the PDF versions of the documentation.
  • If the online help fully encompasses the whitepaper contents, the whitepaper will be retired to avoid confusion. Always refer to the online documentation first as it is always the most up to date.
  • If some of the whitepaper information is not in the online help, then the new version of the whitepaper will contain the information you need, or other whitepapers such as the Best Practices series will be updated with the new information.

I will be making announcements on this blog as each whitepaper is updated to reflect this strategy. This means you will not have to download most of the whitepaper information separately; the information is available either online with the product, on Oracle's documentation site, or as a PDF download from Oracle's Delivery Cloud.

The first whitepaper to be retired is the Configuration Migration Assistant Overview, which is no longer available from My Oracle Support; its content is now in the documentation supplied with the product.

Remember the first rule: check the documentation supplied with the product BEFORE using the whitepapers. The documentation provided with the product is always up to date, whereas the whitepapers are only updated on a semi-regular basis.

New Utilities Testing Solution version available (5.0.1.0)

Thu, 2016-11-17 17:55

We have released a new version (5.0.1.0) of the Oracle Functional/Load Testing Advanced Pack for Oracle Utilities (OFTAPOU), which is available from Oracle Delivery Cloud for customers and partners. This new OFTAPOU version includes support for more versions of our products. The packs are now cloud compatible, i.e. they can be used for testing applications on Oracle Utilities Cloud services.

The pack now supports the following:

  • Oracle Utilities Customer Care And Billing 2.4.0.3 (updated), 2.5.0.1 (updated) and 2.5.0.2 (updated)
  • Oracle Utilities Mobile Workforce Management 2.2.0.3 (updated)
  • Oracle Real Time Scheduler 2.2.0.3 (updated)
  • Oracle Utilities Mobile Workforce Management 2.3.0 (updated) – with added support for Android/iOS mobile testing.
  • Oracle Real Time Scheduler 2.3.0 (updated) – with added support for Android/iOS mobile testing.
  • Oracle Utilities Application Framework 4.2.0.3, 4.3.0.1, 4.3.0.2 and 4.3.0.3.
  • Oracle Utilities Meter Data Management 2.1.0.3 (updated)
  • Oracle Utilities Smart Grid Gateway (all adapters) 2.1.0.3 (updated)
  • Oracle Utilities Meter Data Management 2.2.0 (new)
  • Oracle Utilities Smart Grid Gateway (all adapters) 2.2.0 (new)
  • Oracle Utilities Work And Asset Management 2.1.1 (updated)
  • Oracle Utilities Operational Device Management 2.1.1 (updated)

The pack now includes integration components that can be used for creating flows spanning multiple applications known as integration functional flows.

Components for testing the mobile applications of ORS/MWM have been added. Using the latest packs, customers will be able to execute automated test flows of the ORS/MWM applications on Android and iOS devices.

In addition to the product pack content, the core test automation framework has been enhanced with more features for ease of use. For example, the pack now includes sanity flows to verify installations of individual products. These sanity flows are the same flows used by our cloud teams to verify cloud installations.

The pack includes 1000+ prebuilt testing components that can be used to model business flows using Flow Builder and generate test scripts that can be executed by OpenScript, Oracle Test Manager and/or Oracle Load Testing. This allows customers to adopt automated testing to accelerate their implementations and upgrades whilst reducing their overall risk.

The pack also supports the latest Oracle Application Testing Suite release (12.5.0.3) and includes a set of utilities to allow partners and implementers to upgrade their custom-built test automation flows from older product packs to the latest ones.

Oracle Scheduler Integration Whitepaper available

Mon, 2016-10-24 18:23

As part of Oracle Utilities Application Framework V4.3.0.2.0 and above, a new API has been released to allow customers and partners to schedule and execute Oracle Utilities jobs using the DBMS_SCHEDULER package (Oracle Scheduler) which is part of the Oracle Database (all editions). This API allows control and monitoring of product jobs within the Oracle Scheduler so that these can be managed individually or as part of a schedule and/or job chain.

Note: It is highly recommended that the Oracle Scheduler objects be housed in an Oracle Database 12c database for maximum efficiency. 

This has a few advantages:

  • Low Cost - The Oracle Scheduler is part of the Oracle Database license (all editions) so there is no additional license cost for existing instances.
  • Simple but powerful - The Oracle Scheduler has simple concepts which make it easy to implement, but do not be fooled by its simplicity. It has optional advanced facilities to allow features like resource profiling and load balancing for enterprise-wide scheduling and resource management.
  • Local or Enterprise - There are many ways to implement Oracle Scheduler to allow it to just manage product jobs or become an enterprise wide scheduler. It supports remote job execution using the Oracle Scheduler Agent which can be enabled as part of the Oracle Client installation. One of the prerequisites of the Oracle Utilities product installation is the installation of the Oracle Client so this just adds the agent to the install. Once the agent is installed it is registered as a target with the Oracle Scheduler to execute jobs on that remote resource.
  • Mix and Match - The Oracle Scheduler can execute a wide range of job types so that you can mix non-product jobs with product jobs in schedules and/or chains.
  • Scheduling Engine is very flexible - The calendaring aspect of the scheduling engine is very flexible with overlaps supported as well as exclusions (for preventing jobs to run on public holidays for example).
  • Multiple Management Interfaces - The Oracle Utilities products do not include a management interface for the Oracle Scheduler as there are numerous ways the Oracle Scheduler objects can be maintained including command line, Oracle SQL Developer and Oracle Enterprise Manager (base install no pack needed).
  • Email Notification - Individual jobs can send status via email based upon specific conditions. The format of the email is now part of the job definition, which means it can be customized far more easily.
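As a flavour of what this looks like, the following PL/SQL sketch creates a daily job and attaches an email notification on failure. The job name, procedure and schedule are placeholders; the product-supplied API objects for submitting batch jobs are documented in the whitepaper.

```sql
-- Illustrative DBMS_SCHEDULER sketch; names and schedule are placeholders.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_BATCH',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'MYSCHEMA.SUBMIT_BATCH_JOB',  -- placeholder procedure
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',       -- run daily at 2am
    enabled         => TRUE);

  -- Send an email if the job fails (an SMTP server must be configured first).
  DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION(
    job_name   => 'NIGHTLY_BATCH',
    recipients => 'ops@example.com',
    events     => 'JOB_FAILED');
END;
/
```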

Before using the Oracle Scheduler it is highly recommended that you read the Scheduler documentation provided with the database:

We have published a new whitepaper which outlines the API as well as some general advice on how to implement the Oracle Scheduler with Oracle Utilities products. It is available from My Oracle Support at Batch Scheduler Integration for Oracle Utilities Application Framework (Doc id: 2196486.1).

Architecture Guidelines - Same Domain Issues

Sun, 2016-10-23 21:32

After a long leave of absence to battle cancer, I am back and the first article I wanted to publish is one about some architectural principles that may help in planning your production environments.

Recently I was asked by a product partner about the possibility of housing more than one Oracle Utilities product, plus other Oracle products, on the same machine in the same WebLogic domain and in the same Oracle database. The idea was that the partner wanted to save hardware costs by combining installations. This is technically possible (to varying extents) but not necessarily practical for certain situations, like production. One of my mentors once told me, "just because something is possible does not mean it is practical".

Let me clarify the situation. We are talking about multiple products on the same WebLogic domain on the same non-virtualized hardware sharing the database via different schemas. That means non-virtualized sharing of CPU, memory and disk. 

Let me explain why housing multiple products in the same domain and/or same hardware is not necessarily a good idea:

  • Resource profiles - Each product typically has a different resource profile in terms of CPU, memory and disk usage. By placing multiple products in this situation, you would have to compromise on the shared settings to take all the products into account. For example, as the products might share the database instance, the instance-level parameters would represent a compromise across the products. This may not be optimal for the individual products.
  • Scalability issues - By limiting your architecture to specific hardware you are constrained in any possible future expansion. As your transaction volumes grow, you need to scale and you do not want to limit your solutions.
  • Incompatibilities - Whilst the Oracle Utilities products are designed to interact at the platform level, not all products are compatible when sharing resources. Let me explain with an example. Over the last few releases we have been replacing our internal technology with Oracle technology. One of the things we replaced was the Multi-Purpose Listener (MPL), which was replaced with the Oracle Service Bus to provide industry-level integration possibilities. Now, it is not possible to house Oracle Service Bus within the same domain as Oracle Utilities products. This is not a design flaw but intentional, as a single instance of Oracle Service Bus can be shared across products and scaled separately. Oracle Service Bus is only compatible with Oracle SOA Suite, as it builds domain-level configuration which should not be compromised by sharing that domain with other products.

There is a better approach to this issue:

  • Virtualization - Using a virtualization technology can address the separation of resources and scalability. It allows for lots of combinations for configuration whilst allocating resources appropriately for profiles and scalability as your business changes over time.
  • Clustering and server separation - Oracle Utilities products can live on the same WebLogic domain, but there are some guidelines to make it work appropriately. For example, each product should have its own cluster and/or servers within the domain. This allows for individual product configuration and optimization. Remember to put non-Oracle Utilities products, such as Oracle SOA Suite and Oracle Service Bus, on their own domains, as they are typically shared enterprise-wide and have their own pre-optimized domain setups.

This is a first in a series of articles on architecture I hope to impart over the next few weeks.

Out for a while

Tue, 2016-06-28 00:31
Due to some medical issues I will not be posting till September this year. Thank you for your patience.
