Feed aggregator

Running an Airport

Pete Scott - Sun, 2014-03-30 09:26
If Manston Airport is up for sale (and the current owner’s motivation to make money may mean that she has other plans) I’d buy the place. The price has to be right though, as a lot of extra money will need to be invested to give it a chance of success. Sadly, I can’t stretch to […]

One Queue to Rule them All

Antony Reynolds - Fri, 2014-03-28 16:16
Using a Single Queue for Multiple Message Types with SOA Suite

Problem Statement

You use a single JMS queue for sending multiple message types / service requests.  You use a single JMS queue for receiving multiple message types / service requests.  You have multiple SOA JMS Adapter interfaces for reading and writing these queues.  In a composite it is random which interface gets a message from the JMS queue.  It is not a problem having multiple adapter instances writing to a single queue; the problem is only with having multiple readers, because each reader gets the first message on the queue.

Background

The JMS Adapter is unaware of who receives the messages.  Each adapter instance just takes the message from the queue and delivers it to its own configured interface, one interface per adapter instance.  The SOA infrastructure is then responsible for routing that message, usually via a database table and an in-memory notification message, to a component within a composite.  Each message will create a new composite instance, but the BPEL engine and Mediator engine will attempt to match callback messages to the appropriate Mediator or BPEL instance.
Note that message type, including XML document type, has nothing to do with the preceding statements.

The net result is that if you have a sequence of two receives from the same queue using different adapters then the messages will be split equally between the two adapters, meaning that half the time the wrong adapter will receive the message.  This blog entry looks at how to resolve this issue.

Note that the same problem occurs whenever you have more than one adapter listening to the same queue, whether they are in the same composite or different composites.  The solution in this blog entry is also relevant to this use case.

Solutions

In order to deliver the messages to the correct interface we need to identify the interface they should be delivered to.  This can be done by using JMS properties.  For example, the JMSType property can be used to identify the type of the message.  A message selector can be added to the JMS inbound adapter that will cause the adapter to filter out messages intended for other interfaces.  For example, if we need to call three services that are implemented in a single application:
  • Service 1 receives messages on the single outbound queue from SOA; it sends responses back on the single inbound queue.
  • Similarly Service 2 and Service 3 also receive messages on the single outbound queue from SOA, they send responses back on the single inbound queue.
First we need to ensure the messages are delivered to the correct adapter instance.  This is achieved as follows:
  • The inbound JMS adapter is configured with a JMS message selector.  The message selector might be "JMSType='Service1'" for responses from Service 1.  Similarly the selector would be "JMSType='Service2'" for the adapter waiting on a response from Service 2.  The message selector ensures that each adapter instance will retrieve the first message from the queue that matches its selector.
  • The sending service needs to set the JMS property (JMSType in our example) that is used in the message selector; a sketch of this follows the list.
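To make the sending side concrete, here is a minimal sketch of a responder written against the plain JMS API.  The queue JNDI name matches the sample described later in this entry, but the class itself and the connection factory lookup are illustrative assumptions rather than code from the sample:

import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// Hypothetical responder for Service 1: it stamps each response with the
// JMSType value that the inbound adapter's message selector filters on.
public class Service1Responder {
    public void sendResponse(String responsePayload) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory factory =
                (QueueConnectionFactory) ctx.lookup("weblogic.jms.ConnectionFactory");
        Queue inboundQueue = (Queue) ctx.lookup("jms/TestQueue2");

        QueueConnection connection = factory.createQueueConnection();
        try {
            QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(inboundQueue);

            TextMessage response = session.createTextMessage(responsePayload);
            // Matched by the inbound adapter's selector "JMSType='Service1'".
            response.setJMSType("Service1");
            sender.send(response);
        } finally {
            connection.close();
        }
    }
}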
Now that our messages are being delivered to the correct interface, we need to make sure that they get delivered to the correct Mediator or BPEL instance.  We do this with correlation.  There are several correlation options:
  1. We can do manual correlation with a correlation set, identifying parts of the outbound message that uniquely identify our instance and matching them with parts of the inbound message to make the correlation.
  2. We can use a Request-Reply JMS adapter which by default expects the response to contain a JMSCorrelationID equal to the outgoing JMSMessageID.  Although no configuration is required for this on the SOA client side, the service needs to copy the incoming JMSMessageID to the outgoing JMSCorrelationID, as sketched after this list.
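For the second option, the only contract on the service side is copying the incoming JMSMessageID to the outgoing JMSCorrelationID.  A hedged sketch of that reply step, with illustrative class and method names:

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

// Illustrative reply helper: the incoming JMSMessageID becomes the outgoing
// JMSCorrelationID, which is what the Request-Reply JMS adapter correlates on.
public class CorrelatingReplier {
    public void reply(Session session, MessageProducer producer,
                      TextMessage request, String payload) throws JMSException {
        TextMessage response = session.createTextMessage(payload);
        response.setJMSCorrelationID(request.getJMSMessageID());
        // Still set the property used by the selector in the asynchronous case.
        response.setJMSType("Service2");
        producer.send(response);
    }
}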
Special Case - Request-Reply Synchronous JMS Adapter

When using a synchronous Request-Reply JMS adapter we can omit the message selector, because the Request-Reply JMS adapter will immediately do a listen with a message selector for the correlation ID rather than processing the incoming message asynchronously.
The synchronous request-reply will block the BPEL process thread and hold open the BPEL transaction until a response is received, so this should only be used when you expect the request to be completed in a few seconds.

The JCA Connection Factory used must point to a non-XA JMS Connection Factory and must have the isTransacted property set to “false”.  See the documentation for more details.

Sample

I developed a JDeveloper SOA project that demonstrates using a single queue for multiple incoming adapters.  The overall process flow is shown in the picture below.  The BPEL process on the left receives messages from jms/TestQueue2 and sends messages to jms/TestQueue.  A Mediator is used to simulate multiple services and also provide a web interface to initiate the process.  The correct adapter is identified by using JMS message properties and a selector.

 

The flow above shows that the process is initiated from EM using a web service binding on the Mediator.  The Mediator, acting as a client, posts the request to the inbound queue with a JMSType property set to "Initiate".

Inbound Request

The client receives the web service request and posts it to the inbound queue with JMSType='Initiate'.  The JMS adapter with a message selector "JMSType='Initiate'" receives the message and causes a composite to be created.  The composite in turn causes the BPEL process to start executing.
The BPEL process then sends a request to Service 1 on the outbound queue.
Key Points

  • Initiate message can be used to initiate a correlation set if necessary
  • Selector required to distinguish initiate messages from other messages on the queue
Service 1 receives the request and sends a response on the inbound queue with JMSType='Service1' and the JMSCorrelationID set to the incoming JMS message ID.

Separate Request and Reply Adapters

The JMS adapter with a message selector "JMSType='Service1'" receives the message and causes a composite to be created.  The composite uses a correlation set to deliver the message to BPEL, which correlates it with the existing BPEL process instance.
The BPEL process then sends a request to Service 2 on the outbound queue.
Key Points
  • Separate request & reply adapters require a correlation set to ensure that reply goes to correct BPEL process instance
  • Selector required to distinguish service 1 response messages from other messages on the queue
Service 2 receives the request and sends a response on the inbound queue with JMSType='Service2' and the JMSCorrelationID set to the incoming JMS message ID.

Asynchronous Request-Reply Adapter

The JMS adapter with a message selector "JMSType='Service2'" receives the message and causes a composite to be created.  The composite in turn delivers the message to the existing BPEL process using native JMS correlation.
Key Points
  • Asynchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the CorrelationID to ensure that the reply goes to the correct BPEL process instance
  • Selector still required to distinguish service 2 response messages from other messages on the queue
The BPEL process then sends a request to Service 3 on the outbound queue using a synchronous request-reply.
Service 3 receives the request and sends a response on the inbound queue with JMSType='Service3' and the JMSCorrelationID set to the incoming JMS message ID.

Synchronous Request-Reply Adapter

The synchronous JMS adapter receives the response without a message selector and correlates it to the BPEL process using native JMS correlation.  The BPEL process then sends the overall response to the outbound queue.
Key Points
  • Synchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the CorrelationID to ensure that the reply goes to the correct BPEL process instance
  • Selector also not required to distinguish Service 3 response messages from other messages on the queue, because the synchronous adapter is selecting on the expected CorrelationID
Outbound Response

The client receives the response on the outbound queue.

Summary

When using a single JMS queue for multiple purposes bear in mind the following:

  • If multiple receives use the same queue then you need to have a message selector.  The corollary to this is that the message sender must add a JMS property to the message that can be used in the message selector.
  • When using a request-reply JMS adapter then there is no need for a correlation set, correlation is done in the adapter by matching the outbound JMS message ID to the inbound JMS correlation ID.  The corollary to this is that the message sender must copy the JMS request message ID to the JMS response correlation ID.
  • When using a synchronous request-reply JMS adapter then there is no need for the message selector because the message selection is done based on the JMS correlation ID.
  • Synchronous request-reply adapter requires a non-XA connection factory to be used so that the request part of the interaction can be committed separately from the receive part of the interaction.
  • Synchronous request-reply JMS adapter should only be used when the reply is expected to take just a few seconds.  If the reply is expected to take longer then the asynchronous request-reply JMS adapter should be used.
Deploying the Sample

The sample is available to download here and makes use of the following JMS resources:

  • jms/TestQueue (Queue): outbound queue from the BPEL process
  • jms/TestQueue2 (Queue): inbound queue to the BPEL process
  • eis/wls/TestQueue (JMS Adapter connector factory): this can point to an XA or non-XA JMS Connection Factory, such as weblogic.jms.XAConnectionFactory
  • eis/wls/TestQueueNone-XA (JMS Adapter connector factory): this must point to a non-XA JMS Connection Factory, such as weblogic.jms.ConnectionFactory, and must have isTransacted set to “false”

To run the sample, just use the test facility in the EM console or the soa-infra application.

db.person.find( { "role" : "DBA" } )

Tugdual Grall - Fri, 2014-03-28 09:00
Wow! It has been a while since I posted something on my blog. I have been very busy, moving to MongoDB, learning, learning, learning… finally I can breathe a little and answer some questions. Last week I helped my colleague Norberto to deliver a MongoDB Essentials Training in Paris. This was a very nice experience, and I am impatient to deliver it on my own. I was happy to see that […]

Not Just a Cache

Antony Reynolds - Thu, 2014-03-27 22:23
Coherence as a Compute Grid

Coherence is best known as a data grid, providing distributed caching with an ability to move processing to the data in the grid.  Less well known is the fact that Coherence also has the ability to function as a compute grid, distributing work across multiple servers in a cluster.  In this entry, which was co-written with my colleague Utkarsh Nadkarni, we will look at using Coherence as a compute grid through the use of the Work Manager API and compare it to manipulating data directly in the grid using Entry Processors.

Coherence Distributed Computing Options

The Coherence documentation identifies several methods for distributing work across the cluster, see Processing Data in a Cache.  They can be summarized as:

  • Entry Processors
    • The InvocableMap interface, inherited by the NamedCache interface, provides support for executing an agent (EntryProcessor or EntryAggregator) on individual entries within the cache.
    • The entries may or may not exist; either way the agent is executed once for each key provided, or, if no key is provided, once for each object in the cache.
    • In Enterprise and Grid editions of Coherence the entry processors are executed on the primary cache nodes holding the cached entries.
    • Agents can return results.
    • One agent executes multiple times per cache node, once for each key targeted on the node.
  • Invocation Service
    • An InvocationService provides support for executing an agent on one or more nodes within the grid.
    • Execution may be targeted at specific nodes or at all nodes running the Invocation Service.
    • Agents can return results.
    • One agent executes once per node.
  • Work Managers
    • A WorkManager class provides a grid-aware implementation of the CommonJ WorkManager, which can be used to run tasks across multiple threads on multiple nodes within the grid.
    • WorkManagers run on multiple nodes.
    • Each WorkManager may have multiple threads.
    • Tasks implement the Work interface and are assigned to specific WorkManager threads to execute.
    • Each task is executed once (see the scheduling sketch below).
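To give a flavour of the Work Manager model before comparing it with the alternatives, here is a hedged sketch of a client that schedules a single task and blocks for the result.  The manager name matches the sample described later in this entry; the thread count and the IncrementWork class (an illustrative Work implementation, sketched just before the comparison table below) are assumptions:

import java.util.Collections;

import com.oracle.coherence.commonj.WorkManager;
import commonj.work.Work;
import commonj.work.WorkItem;

// Hedged sketch of a Work Manager client: schedule one task and wait for it.
public class WorkManagerClientSketch {
    public static void main(String[] args) throws Exception {
        // 0 threads: this member only submits work; grid members that run
        // Work Manager threads execute it.
        WorkManager manager = new WorkManager("AntonyWork", 0);
        Work work = new IncrementWork("counter"); // illustrative Work implementation
        WorkItem item = manager.schedule(work);
        manager.waitForAll(Collections.singletonList(item), WorkManager.INDEFINITE);
        System.out.println("Result: " + item.getResult());
    }
}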
Three Models of Distributed Computation

The previous section listing the distributed computing options in Coherence shows that there are 3 distinct execution models:

  • Per Cache Entry Execution (Entry Processor)
    • Execute the agent on the entry corresponding to a cache key.
    • Entries processed on a single thread per node.
    • Parallelism across nodes.
  • Per Node Execution (Invocation Service)
    • Execute the same agent once per node.
    • Agent processed on a single thread per node.
    • Parallelism across nodes.
  • Per Task Execution (Work Manager)
    • Each task executed once.
    • Parallelism across nodes and across threads within a node.

The entry processor is good for operating on individual cache entries.  It is not so good for working on groups of cache entries.

The invocation service is good for performing checks on a node, but is limited in its parallelism.

The work manager is good for operating on groups of related entries in the cache or performing non-cache related work in parallel.  It has a high degree of parallelism.

As you can see the primary choice for distributed computing comes down to the Work Manager and the Entry Processor.
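Before comparing them in detail, here is a hedged sketch of the same job, incrementing a counter (mirroring the demo task described later), written in both styles.  The cache name and value type are illustrative assumptions, not code from the downloadable project:

import java.io.Serializable;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;
import commonj.work.Work;

// Entry processor style: runs on the node that owns the entry.
class IncrementProcessor extends AbstractProcessor {
    @Override
    public Object process(InvocableMap.Entry entry) {
        int count = entry.isPresent() ? ((Integer) entry.getValue()).intValue() : 0;
        entry.setValue(Integer.valueOf(count + 1));
        return entry.getValue(); // returned to the caller of NamedCache.invoke
    }
}

// Work Manager style: runs on a Work Manager thread, which may be a different
// JVM from the one owning the entry, so the cache is accessed via the cache API.
class IncrementWork implements Work, Serializable {
    private final String key;

    IncrementWork(String key) {
        this.key = key;
    }

    @Override
    public void run() {
        NamedCache cache = CacheFactory.getCache("demo-cache"); // illustrative cache name
        Integer count = (Integer) cache.get(key);
        cache.put(key, Integer.valueOf(count == null ? 1 : count.intValue() + 1));
    }

    @Override
    public boolean isDaemon() {
        return false;
    }

    @Override
    public void release() {
    }
}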

Differences between using Entry Processors and Work Managers in Coherence

Degree of parallelization
  • Entry Processors: a function of the number of Coherence nodes.  EntryProcessors are run concurrently across all nodes in a cluster.  However, within each node only one instance of the entry processor executes at a time.
  • Work Managers: a function of the number of Work Manager threads.  The Work is run concurrently across all threads in all Work Manager instances.

Transactionality
  • Entry Processors: transactional.  If an EntryProcessor running on one node does not complete (say, due to that node crashing), the entries targeted will be executed by an EntryProcessor on another node.
  • Work Managers: not transactional.  The specification does not explicitly state what the response should be if a remote server crashes during an execution.  The current implementation uses WORK_COMPLETED with a WorkCompletedException as the result.  If a Work does not run to completion, it is the responsibility of the client to resubmit the Work to the Work Manager.

How is the cache accessed or mutated?
  • Entry Processors: operations against the cache contents are executed by (and thus within the localized context of) a cache.
  • Work Managers: accesses and changes to the cache are made directly through the cache API.

Where is the processing performed?
  • Entry Processors: in the same JVM where the entries-to-be-processed reside.
  • Work Managers: in the Work Manager server, which may not be the same JVM where the entries-to-be-processed reside.

Network traffic
  • Entry Processors: a function of the size of the EntryProcessor.  Typically the size of an EntryProcessor is much smaller than the size of the data transferred across nodes in the Work Manager approach, which makes the EntryProcessor approach more network-efficient and hence more scalable.  One EntryProcessor is transmitted to each cache node.
  • Work Managers: a function of the number of Work objects, of which multiple may be sent to each server, and of the size of the data set transferred from the backing map to the Work Manager server.

Distribution of “tasks”
  • Entry Processors: tasks are moved to the location at which the entries-to-be-processed are being managed.  This may result in a random distribution of tasks, although the distribution tends to become equitable as the number of entries increases.
  • Work Managers: tasks are distributed equally across the threads in the Work Manager instances.

Implementation of the EntryProcessor or Work class (both styles are sketched above)
  • Entry Processors: create a class that extends AbstractProcessor and implement the process method.
  • Work Managers: create a serializable class that implements commonj.work.Work and implement the run method.

Implementation of the “task”
  • Entry Processors: in the process method, update the cache item based on the key passed in.
  • Work Managers: in the run method, get a reference to the named cache, get the cache item, change it, and put it back into the named cache.

Completion notification
  • Entry Processors: when the NamedCache.invoke method completes, all the entry processors have completed executing.
  • Work Managers: a submitted task executes asynchronously on the Work Manager threads in the cluster.  Status may be obtained by registering a commonj.work.WorkListener class when calling the WorkManager.schedule method; this provides updates when the Work is accepted, started, and completed or rejected.  Alternatively the WorkManager.waitForAll and WorkManager.waitForAny methods allow blocking waits for either all or one result respectively.

Returned results
  • Entry Processors: java.lang.Object when executed on one cache item, returning the result of the invocation as returned from the EntryProcessor; java.util.Map when executed on a collection of keys, returning a Map containing the results of invoking the EntryProcessor against each of the specified keys.
  • Work Managers: commonj.work.WorkItem, with three possible outcomes: the Work is not yet complete, in which case WorkItem.getResult returns null; the Work started but completed with an exception (perhaps because a Work Manager instance terminated abruptly), indicated by an exception thrown by WorkItem.getResult; or the Work Manager instance indicated that the Work ran to completion, in which case WorkItem.getResult returns a non-null result and throws no exception.

Error handling
  • Entry Processors: failure of a node results in all the work assigned to that node being executed on the new primary.  This may result in some work being executed twice, but Coherence ensures that the cache is only updated once per item.
  • Work Managers: failure of a node results in the loss of scheduled tasks assigned to that node.  Completed tasks are sent back to the client as they complete.

Fault Handling Extension

Entry processors have excellent error handling within Coherence.  Work Managers less so.  In order to provide resiliency on node failure I implemented a “RetryWorkManager” class that detects tasks that have failed to complete successfully and resubmits them to the grid for another attempt.

A JDeveloper project with the RetryWorkManager is available for download here.  It includes sample code to run a simple task across multiple work manager threads.

To create a new RetryWorkManager that will retry failed work twice you would use this:

WorkManager manager = new RetryWorkManager("WorkManagerName", 2); // Change for number of retries; if no retry count is provided then the default is 0

You can control the number of retries at the individual work level as shown below:

WorkItem workItem = schedule(work); // Use number of retries set at WorkManager creation
WorkItem workItem = schedule(work, workListener); // Use number of retries set at WorkManager creation
WorkItem workItem = schedule(work, 4); // Change number of retries
WorkItem workItem = schedule(work, workListener, 4); // Change number of retries

Currently the RetryWorkManager defaults to having 0 threads.  To change this, use the following:

WorkItem workItem = schedule(work, workListener, 3, 4); // Change number of threads (3) and retries (4)

Note that none of this sample code is supported by Oracle in any way, and is provided purely as a sample of what can be done with Coherence.

How the RetryWorkManager Works

The RetryWorkManager delegates most operations to a Coherence WorkManager instance.  It creates a WorkManagerListener to intercept status updates.  On receiving a WORK_COMPLETED callback the listener checks the result to see if the completion is due to an error.  If an error occurred and there are retries left then the work is resubmitted.  The WorkItem returned by scheduling an event is wrapped in a RetryWorkItem.  This RetryWorkItem is updated with a new Coherence WorkItem when the task is retried.  If the client registers a WorkManagerListener then the RetryWorkManagerListener delegates non-retriable events to the client listener.  Finally the waitForAll and waitForAny methods are modified to deal with work items being resubmitted in the event of failure.
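A minimal sketch of that interception logic is shown below.  The class name and the resubmit hook on the RetryWorkManager are my own illustrative assumptions, not necessarily the names used in the downloadable project:

import commonj.work.WorkEvent;
import commonj.work.WorkListener;

// Hedged sketch of the intercepting listener: retriable failures are
// resubmitted, everything else is delegated to the client's listener.
public class RetryingWorkListener implements WorkListener {
    private final RetryWorkManager manager;      // assumed owner with a resubmit hook
    private final commonj.work.Work work;
    private final WorkListener clientListener;   // may be null
    private int retriesLeft;

    public RetryingWorkListener(RetryWorkManager manager, commonj.work.Work work,
                                WorkListener clientListener, int retriesLeft) {
        this.manager = manager;
        this.work = work;
        this.clientListener = clientListener;
        this.retriesLeft = retriesLeft;
    }

    @Override
    public void workAccepted(WorkEvent event) {
        if (clientListener != null) clientListener.workAccepted(event);
    }

    @Override
    public void workRejected(WorkEvent event) {
        if (clientListener != null) clientListener.workRejected(event);
    }

    @Override
    public void workStarted(WorkEvent event) {
        if (clientListener != null) clientListener.workStarted(event);
    }

    @Override
    public void workCompleted(WorkEvent event) {
        if (event.getException() != null && retriesLeft-- > 0) {
            manager.resubmit(work, this); // illustrative: reschedules and swaps the wrapped WorkItem
        } else if (clientListener != null) {
            clientListener.workCompleted(event); // non-retriable: delegate to the client
        }
    }
}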

Sample Code for EntryProcessor and RetryWorkManager

The downloadable project contains sample code for running the work manager and an entry processor.

The demo implements a 3-tier architecture:

  1. Coherence Cache Servers
    • Can be started by running RunCacheServer.cmd
    • Runs a distributed cache used by the Task to be executed in the grid
  2. Coherence Work Manager Servers
    • Can be started by running RunWorkManagerServer.cmd
    • Takes no parameters
    • Runs two threads for executing tasks
  3. Coherence Work Manager Clients
    • Can be started by running RunWorkManagerClient.cmd
    • Takes three parameters currently
      • Work Manager name - should be "AntonyWork" - default is "AntonyWork"
      • Number of tasks to schedule - default is 10
      • Time to wait for tasks to complete in seconds - default is 60

The task stores the number of times it has been executed in the cache, so multiple runs will see the counter incrementing.  The choice between EntryProcessor and WorkManager is controlled by changing the value of USE_ENTRY_PROCESSOR between false and true in the RunWorkManagerClient.cmd script.

The SetWorkManagerEnv.cmd script should be edited to point to the Coherence home directory and the Java home directory.

Summary

If you need to perform operations on cache entries and don’t need to have cross-checks between the entries then the best solution is to use an entry processor.  The entry processor is fault tolerant and updates to the cached entity will be performed once only.

If you need to perform generic work that may need to touch multiple related cache entries then the work manager may be a better solution.  The extensions I created in the RetryWorkManager provide a degree of resiliency to deal with node failure without impacting the client.

The RetryWorkManager can be downloaded here.

Enabling Analytics on Edge Devices in Internet of Things

Anshu Sharma - Thu, 2014-03-27 09:50

We are doing a live webcast on this topic on Apr 24, 10 AM PST. Please register if you can join or want to get an on-demand link after the event. Looking forward to an interesting discussion on this topic.

Registration Page 

http://event.on24.com/utilApp/eventdetail?eventid=765265&sessionid=1&key=95F98E49015258DCE699497D930DDB0F

INDEX to SYSDBA without SELECT

Paul Wright - Thu, 2014-03-27 09:21
Hello Oracle Security Readers, If we combine the following factors then we can identify an escalation route from Index on SYSTEM to SYSDBA which does not require SELECT privileges on the indexed table: 1. SYSTEM passes its DBA role through its procedures. 2. Oracle indexes allow execution from read via functions i.e. INDEX can [...]

Using the JAX-RS 2.0 Client API with WebLogic Server 12.1.3

Steve Button - Wed, 2014-03-26 23:05


Please note: this blog discusses WebLogic Server 12.1.3, which has not yet been released.

As part of the JAX-RS 2.0 support we are providing with WebLogic Server 12.1.3, one really useful new feature is the new Client API it provides, enabling applications to easily interact with REST services to consume and publish information.

By way of a simple example, I'll build out an application that uses the freegeoip.net REST service to look up the physical location of a specified IP address or domain name and deploy it to WebLogic Server 12.1.3.

The first step to perform is to make a call to the freegeoip.net REST API and examine the JSON payload that is returned.

$ curl http://freegeoip.net/json/buttso.blogspot.com

{"ip":"173.194.115.75","country_code":"US","country_name":"United States","region_code":"CA","region_name":"California","city":"Mountain View","zipcode":"94043","latitude":37.4192,"longitude":-122.0574,"metro_code":"807","area_code":"650"}

The next step is to build a Java class to represent the JSON payload that is returned. In this case, it's quite simple because the JSON payload that is returned doesn't contain any relationships or complex data structures.

/**
 *
 * @author sbutton
 * {"ip":"173.194.115.75","country_code":"US","country_name":"United States","region_code":"CA","region_name":"California","city":"Mountain View","zipcode":"94043","latitude":37.4192,"longitude":-122.0574,"metro_code":"807","area_code":"650"}
 */
public class GeoIp implements Serializable {

    private String ipAddress;
    private String countryName;
    private String regionName;
    private String city;
    private String zipCode;
    private String latitude;
    private String longitude;

    public String getIpAddress() {
        return ipAddress;
    }

    public void setIpAddress(String ipAddress) {
        this.ipAddress = ipAddress;
    }

    ...

}
With the GeoIp class defined, the next step is to consider how to convert the JSON payload into an instance of the GeoIp class. I'll show two ways this can be done.

The first way to do it is to create a class that reads the result of the REST request, parses the JSON payload and constructs a representative instance of the GeoIp class. Within the JAX-RS API, there is an interface MessageBodyReader that can be implemented to convert a Stream into a Java type.

http://docs.oracle.com/javaee/6/api/javax/ws/rs/ext/MessageBodyReader.html

Implementing this interface gives you the readFrom(Class&lt;GeoIp&gt; type, Type genericType, Annotation[] annotations, MediaType mediaType, MultivaluedMap&lt;String, String&gt; httpHeaders, InputStream entityStream) method, which supplies an InputStream containing the response to read. The method then parses out the JSON payload and constructs a corresponding GeoIp instance from it.

Parsing the JSON payload is straightforward with WebLogic Server 12.1.3 since we've included the (JSR-353) Java API for JSON Processing implementation which provides an API for reading and creating JSON objects.

package oracle.demo.wls.jaxrs.client.geoip;

import java.io.IOException;
import java.io.InputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import javax.json.Json;
import javax.json.stream.JsonParser;
import javax.ws.rs.Produces;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.Provider;

@Provider
@Produces(MediaType.APPLICATION_JSON)
public class GeoIpReader implements MessageBodyReader<GeoIp> {

    @Override
    public boolean isReadable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
        return GeoIp.class.isAssignableFrom(type);
    }

    @Override
    public GeoIp readFrom(Class<GeoIp> type, Type genericType, Annotation[] annotations, MediaType mediaType, MultivaluedMap<String, String> httpHeaders, InputStream entityStream) throws IOException, WebApplicationException {
        GeoIp g = new GeoIp();
        JsonParser parser = Json.createParser(entityStream);
        while (parser.hasNext()) {
            switch (parser.next()) {
                case KEY_NAME:
                    String key = parser.getString();
                    parser.next();
                    switch (key) {
                        case "ip":
                            g.setIpAddress(parser.getString());
                            break;
                        case "country_name":
                            g.setCountryName(parser.getString());
                            break;
                        case "latitude":
                            g.setLatitude(parser.getString());
                            break;
                        case "longitude":
                            g.setLongitude(parser.getString());
                            break;
                        case "region_name":
                            g.setRegionName(parser.getString());
                            break;
                        case "city":
                            g.setCity(parser.getString());
                            break;
                        case "zipcode":
                            g.setZipCode(parser.getString());
                            break;
                        default:
                            break;
                    }
                    break;
                default:
                    break;
            }
        }
        return g;
    }
}

Once this class is built, it can be registered with the Client so that it can be called when necessary to convert a payload of MediaType.APPLICATION_JSON type into an instance of the GeoIp object, here done in a @PostConstruct method on a JSF bean:

@PostConstruct
public void init() {
    client = ClientBuilder.newClient();
    client.register(GeoIpReader.class);
}


The alternative way to do this is to use the EclipseLink MOXy JAXB implementation that is provided with WebLogic Server, which can automatically marshal and unmarshal JSON payloads to and from Java objects. Helpfully, the JAX-RS 2.0 shared library supplied with WebLogic Server 12.1.3 contains the jersey-media-moxy extension, which enables the EclipseLink MOXy implementation to be simply registered and used by applications when conversion is needed.

To use the JAXB/MOXy approach, the GeoIpReader class can be thrown away. No manual parsing of the payload is required. Instead, the base GeoIp class is annotated with JAXB annotations to denote it as being JAXB enabled and to provide some assistance in mapping the class properties to the payload property names.

package oracle.demo.wls.jaxrs.client.geoip;

import java.io.Serializable;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlRootElement;

/**
 *
 * @author sbutton
 * {"ip":"173.194.115.75","country_code":"US","country_name":"United States","region_code":"CA","region_name":"California","city":"Mountain View","zipcode":"94043","latitude":37.4192,"longitude":-122.0574,"metro_code":"807","area_code":"650"}
 */

@XmlRootElement
public class GeoIp implements Serializable {

    @XmlAttribute(name = "ip")
    private String ipAddress;
    @XmlAttribute(name = "country_name")
    private String countryName;
    @XmlAttribute(name = "region_name")
    private String regionName;
    @XmlAttribute(name = "city")
    private String city;
    @XmlAttribute(name = "zipcode")
    private String zipCode;
    @XmlAttribute(name = "latitude")
    private String latitude;
    @XmlAttribute(name = "longitude")
    private String longitude;

    ...

}


With the JAXB annotations placed on the GeoIp class to enable it to be automatically marshalled/unmarshalled from JSON, the last step is to register the EclipseLink MOXy implementation with the Client. This is done with the assistance of a small utility method, as shown in the Jersey User Guide Media chapter.

public static ContextResolver<MoxyJsonConfig> createMoxyJsonResolver() {
    final MoxyJsonConfig moxyJsonConfig = new MoxyJsonConfig();
    moxyJsonConfig.setFormattedOutput(true);

    Map<String, String> namespacePrefixMapper = new HashMap<String, String>(1);
    namespacePrefixMapper.put("http://www.w3.org/2001/XMLSchema-instance", "xsi");
    moxyJsonConfig.setNamespacePrefixMapper(namespacePrefixMapper).setNamespaceSeparator(':');

    return moxyJsonConfig.resolver();
}
This method is then used to register the relevant ContextResolver with the Client to handle JSON conversions, instead of the GeoIpReader class that was used before.

@PostConstruct
public void init() {
    client = ClientBuilder.newClient();
    client.register(createMoxyJsonResolver());
    //client.register(GeoIpReader.class);
}

With the JSON payload to GeoIP conversion now covered, the JAX-RS Client API can be used to make the call to the freegeoip REST service and process the response.

To make a client call, two classes are used: javax.ws.rs.client.Client and javax.ws.rs.client.WebTarget.

The Jersey User Guide provides a good description of these two classes and their relationship:

The JAX-RS Client API is designed to allow a fluent programming model. This means a construction of a Client instance, from which a WebTarget is created, from which a request Invocation is built and invoked can be chained in a single "flow" of invocations ... Once you have a Client instance you can create a WebTarget from it ... A resource in the JAX-RS client API is an instance of the Java class WebTarget and encapsulates an URI. The fixed set of HTTP methods can be invoked based on the WebTarget. The [base] representations are Java types, instances of which, may contain links that new instances of WebTarget may be created from.
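That fluent flow can be seen end to end in a short sketch, reusing the GeoIpReader from earlier against the same freegeoip.net endpoint:

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

// Client -> WebTarget -> request -> GET, chained as one fluent flow.
public class FluentLookup {
    public static GeoIp lookup(String host) {
        Client client = ClientBuilder.newClient().register(GeoIpReader.class);
        try {
            return client
                    .target("http://freegeoip.net/json/" + host)
                    .request()
                    .get(GeoIp.class);
        } finally {
            client.close();
        }
    }
}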

In this example application, the Client is opened in an @PostConstruct method and closed in a @PreDestroy method, with the WebTarget being created and its GET method called when the lookup is executed by the user.

@Named
@RequestScoped
public class GeoIpBackingBean {

    private WebTarget target = null;
    private Client client = null;

    ...

    @PostConstruct
    public void init() {
        client = ClientBuilder.newClient();
        //client.register(createMoxyJsonResolver());
        client.register(GeoIpReader.class);
    }

    @PreDestroy
    public void byebye() {
        client.close();
    }

    public void lookupAddress() {
        try {
            target = client.target(String.format(rest_base_url, addressToLookup));
            geoIp = target.request().get(GeoIp.class);
        } catch (Exception e) {
            e.printStackTrace();
            FacesContext.getCurrentInstance().addMessage(null,
                    new FacesMessage("Error executing REST call: " + e.getMessage()));
        }
    }

    ...
}


Bringing it all together as a JSF-based application results in a JSF bean that allows the IP address to be entered and a method that invokes the JAX-RS Client API to call out to the freegeoip.net REST service to retrieve the JSON payload containing the location information. A simple JSF facelet page is used to support entering the IP address and displaying the relevant data from the GeoIp object.



<h:form>
    <h:panelGrid columns="2" style="vertical-align: top;">
        <h:outputLabel value="Address"/>
        <h:inputText value="${geoIpBackingBean.addressToLookup}"/>
        <h:outputLabel value=""/>
        <h:commandButton action="${geoIpBackingBean.lookupAddress()}" value="Lookup" style="margin: 5px;"/>
    </h:panelGrid>
</h:form>


<h:panelGrid columns="2">
    <h:outputText value="IP:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.ipAddress}"/>
    <h:outputText value="Country:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.countryName}"/>
    <h:outputText value="State:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.regionName}"/>
    <h:outputText value="City:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.city}"/>
    <h:outputText value="Zipcode:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.zipCode}"/>
    <h:outputText value="Coords:"/>
    <c:if test="${geoIpBackingBean.geoIp.ipAddress != null}">
        <h:outputText value="${geoIpBackingBean.geoIp.latitude},${geoIpBackingBean.geoIp.longitude}"/>
    </c:if>
</h:panelGrid>

The last step to perform is to add a weblogic.xml deployment descriptor with a library-ref to the [jax-rs, 2.0] shared library, which must be deployed as I described earlier in Using JAX-RS 2.0 with WebLogic Server 12.1.3.
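A minimal sketch of that descriptor might look like the following; the library-name and specification-version values here are assumptions on my part and must match the shared library as it is actually deployed to your server:

<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
    <library-ref>
        <library-name>jax-rs</library-name>
        <specification-version>2.0</specification-version>
    </library-ref>
</weblogic-web-app>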

The application is now ready to deploy and run.

Battling Bigfile Backup Bottlenecks

Don Seiler - Wed, 2014-03-26 13:44
Last Friday I kicked off a database backup to an NFS destination, using the standard "backup as compressed backupset database" syntax. Loyal readers of this blog may recall that I'm the proud custodian of a 25 Tb database, so this backup normally takes a few days, with an expected completion on Monday morning. However, it was still running on Wednesday, and reviewing the logs I saw that just 1 channel (of the original 8) was still running. The backup file that this channel was writing happened to include our largest bigfile datafile, which weighs in at nearly 8 Tb. Reviewing my new backup script, I realized that I had neglected to specify a SECTION SIZE parameter. An example of its usage is:

RMAN> backup as compressed backupset
2> section size 64G
3> database;

Without it, RMAN decided to create a backup piece that bundled my 8 Tb datafile with a few others and then wrote it out to disk on one channel. Obviously this isn't what we wanted.


I'm not a big fan of bigfile tablespaces, primarily because you lose the benefits of parallelism when huge files can only be handled by a single channel, as with datafile copy operations and backups. In 11g, however, Oracle introduced multi-section backups via the SECTION SIZE option for RMAN backups. This option tells RMAN to break a large file into sections of the specified size so that the work can be done in parallel. If the specified size is larger than the file, then it is simply ignored for that file.

There is a limitation in that the file can be split into a maximum of 256 sections. So, if you specify a section size that would result in more than 256 sections being created, Oracle RMAN will increase the size so that exactly 256 sections are created. This is still enforced today in Oracle 12c.
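To make the math concrete: my almost-8 Tb (roughly 8192 Gb) datafile with a 16G section size would need about 8192 / 16 = 512 sections, over the cap, so RMAN would raise the section size to roughly 32G (8192 / 256). My 64G choice works out to around 128 sections, comfortably under the limit.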

Another limitation in Oracle 11g is that multi-section backups cannot be done with image copy backups. Those must still be done as a whole file and so can still be a huge bottleneck. However this is no longer a problem in Oracle 12c, and multi-section image copy backups are possible. I'm looking forward to using this as we also use image copy backups as part of our recovery strategy.
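When we do, the command should look much like the backupset example above; a sketch of the 12c syntax (untested on my side):

RMAN> backup as copy
2> section size 64G
3> database;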

To highlight the parallel benefits, I ran a compressed backup of a 1.5 Tb bigfile tablespace using 8 channels. The first one does NOT use section size and so only goes over one channel:

Starting backup at 2014/03/25 14:35:41
...
channel c1: backup set complete, elapsed time: 17:06:09
Finished backup at 2014/03/26 07:41:51

The second one uses a section size of 64 Gb (otherwise same command & file):

Starting backup at 2014/03/26 07:41:52
...
Finished backup at 2014/03/26 09:48:33

You can see the huge impact made by making use of the multi-section backup option. A single channel took over 17 hours to back it up. Using 8 channels with a section size of 64 Gb took only just over 2 hours. Eyeballing the log shows an average of around 6 minutes per section. Definitely much better than waiting for a single channel to do all the work when the rest of the system is waiting.
Categories: DBA Blogs

Access Management alternatives (Part 1: Directory Services)

Frank van Bortel - Wed, 2014-03-26 08:18
Intro: At the governmental institute that hired me, I'm working hard to get the full Oracle Identity and Access Management (IAM) stack implemented. A colleague suggested OpenIAM, which, on closer look, turns out to be a fork of what I believe to be the origin of the Oracle stack, Sun's OpenSSO. So I started looking at this stack, which is available from ForgeRock. Let's start with the basics:

Microservices is SOD all within SOA

Steve Jones - Tue, 2014-03-25 10:06
Microservices is a Service Oriented Delivery approach, all within a Service Oriented Architecture context. (Long title ;) OK, so a few more updates since the last time I wrote about Microservices, and I think it's worth just updating as it really is heavily underlining why Microservices is a Service Oriented Delivery approach that absolutely can fit within a Service Oriented Architecture. Lets be
Categories: Fusion Middleware

Let’s keep Manston

Pete Scott - Mon, 2014-03-24 17:22
It is said that aviation is in the blood (or in the genes). My partner worked as cabin crew with Air New Zealand before becoming their senior HR manager responsible for all off-shore based Air New Zealand staff; her cousin is a senior pilot with Cathay Pacific; of her uncles, one was managing director of […]

Chicago Oracle User Community Restart

Jeremy Schneider - Mon, 2014-03-24 13:47

Chicago is the third largest city in the United States. There are probably more professional Oracle users here than in most other areas of the country – and yet for many years now there hasn't been a cohesive user group.

But right now there's an opportunity for change, if the professional community of Chicago Oracle users steps up to the plate.

Chicago Oracle User Group

First, the Chicago Oracle User Group has just elected a new president. Alfredo Abate is bringing a level of enthusiasm and energy to the position which we’ve been missing for a long time. He’s trying to figure out how to restart the COUG and re-engage the professional community here – but he needs input and assistance from you! If you’re an administrator or developer anywhere near Chicago and you have Oracle software anywhere in your company, then please help Alfredo get the user group going! Here are a few specific things you can do:

  1. Send Alfredo an email saying congrats and offering suggestions for the COUG. You can find him on LinkedIn or the COUG site below.
  2. Join the LinkedIn group that Alfredo set up for the COUG.
  3. Sign up for a free account at the COUG site: chicago.oracle.ioug.org
  4. Complete the survey at the COUG website (must sign up for free account, then look for “survey” link in the top navigation bar). This will help Alfredo think about planning the next event.
Lunch Huddles

A few years ago, I was part of a group of Oracle database users from different companies in Chicago who started hanging out regularly for lunches downtown. It was never a big event but it was a lot of fun to get together and catch up regularly. However I stopped organizing the lunches after a job change back into travel consulting and the birth of our daughter. I live on the north side of the city, I worked from home when I wasn’t traveling, and I wasn’t able to make trips downtown anymore.

Ever since, I’ve missed hanging out with friends downtown and I’ve always wanted to do these group lunches again. Besides the fact that I really enjoy catching up with people, I think that face-to-face meetups really help strengthen our sense of community as a whole in Chicago.

So – after far too long – I started the lunches again last week.

Oracle DB Lunch Downtown

But it’s improved – there are now lunches happening all over ChicagoLand!

Tomorrow: Deerfield
This Wednesday: Des Plaines
Next week Wednesday: Downtown

Coming soon: Naperville?

Please join us for a lunch sometime! I promise you’ll find it to be both beneficial and fun! And also, please join the group on meetup.com – then you’ll get reminders about upcoming lunches in Chicago.

Spread the Word

Even if you don’t live in Chicago, you can help me out with this – send a brief tweet or quick email to any Oracle professionals you know around Chicago and direct them to this blog post. I hope to see some new life in the Oracle professional community here. It won’t happen by accident.

What's New in Dodeca 6.7.1?

Tim Tow - Mon, 2014-03-24 11:28
Last week, we released Dodeca version 6.7.1.4340, which focuses on some new relational functionality. The major new features in 6.7.1 are:
  • Concurrent SQL Query Execution
  • Detailed SQL Timed Logging
  • Query and Display PDF format
  • Ability to Launch Local External Processes from Within Dodeca
Concurrent SQL Query Execution

Dodeca has a built-in SQLPassthroughDataSet object that supports queries to a relational database.  The SQLPassthroughDataSet functionality was engineered such that a SQLPassthroughDataSet object can include multiple queries that get executed and returned on a single trip to the server, and several of our customers have taken great advantage of that functionality.  We have at least one customer, in fact, that has some SQLPassthroughDataSets that execute up to 20 queries in a single trip to the server.  The functionality was originally designed to run the queries sequentially, but in some cases it would be better to run the queries concurrently.  Because this is Dodeca, concurrent query execution is of course configurable at the SQLPassthroughDataSet level.

Detailed SQL Timed Logging

In Dodeca version 6.0, we added detailed timed logging for Essbase transactions.  In this version, we have added similar functionality for SQL transactions and have formatted the logs in pipe-delimited format so they can easily be loaded into Excel or into a database for further analysis.  The columns of the log include the log message level, timestamp, sequential transaction number, number of active threads, transaction GUID, username, action, description, and time to execute in milliseconds.

[Screenshot: example of the pipe-delimited log message]

Query and Display PDF format

The PDF View type now supports the ability to load a PDF directly from a relational table via a tokenized SQL statement.  This functionality will be very useful for those customers who have contextual information, such as invoice images, stored relationally and need a way to display that information.  We frequently see this requirement as the end result of a drill-through operation from either Essbase or relational reports.

Ability to Launch Local External Processes from Within Dodeca

Certain Dodeca customers store data files for non-financial systems in relational data stores and use Dodeca as a central access point.  The new ability to launch a local process from within Dodeca is implemented as a Dodeca Workbook Script method, which provides a great deal of flexibility in how the process is launched.

The new 6.7.1 functionality follows closely on the 6.7.0 release, which introduced proxy server support and new MSAD and LDAP authentication services.  If you are interested in seeing all of the changes in Dodeca, highly detailed release notes are available on our website at http://www.appliedolap.com/resources/downloads/dodeca-technical-docs.

Categories: BI & Warehousing

Packt Publishing Buy One Get One Free Offer

Antony Reynolds - Thu, 2014-03-20 14:35
Packt Publishing celebrates their 2000th title with a Buy One Get One Free Offer

Great time to get those Packt books you’ve been thinking of buying, like the SOA Suite 11g Developers Guide or the SOA Suite 11g Developers Cookbook.

Yet Another Post How to Link to Download a File or Display an Image from a BLOB column

Joel Kallman - Thu, 2014-03-20 13:00
On an internal mailing list, an employee (Richard, a long-time user of Oracle Application Express) asked:

"...we are attempting to move to storing (the images) in a BLOB column in our own application tables.  Is there no way to display an image outside of page items and reports? "

Basically, he has a bunch of images stored in the BLOB column of the common upload table, APEX_APPLICATION_FILES (or WWV_FLOW_FILES).  He wishes to move them to a table in his workspace schema, but it's unclear to him how they can be displayed.  While there is declarative support for BLOBs in Application Express, there are times where you simply wish to get a link which would return the image - and without having to add a form and report against the table containing the images.

I fully realize that this question has been answered numerous times in various books and blog posts, but I wish to reiterate it here again.

Firstly, a way not to do this is via a PL/SQL procedure that is called directly from a URL.  I see this "solution" commonly documented on the Internet, and in general, it should not be followed.  The default configuration of Oracle Application Express has a white list of entry points, callable from a URL.  For security reasons, you absolutely want to leave this restriction in place and not relax it.  This is specified as the PlsqlRequestValidationFunction for mod_plsql and security.disableDefaultExclusionList for Oracle REST Data Services (nee APEX Listener).  With this default security measure in place, you will not be able to invoke a procedure in your schema from a URL.  Good!

The easiest way to return an image from a URL in an APEX application is either via a RESTful Service or via an On-Demand process.  This blog post will cover the On-Demand process.  It's definitely easier to implement via a RESTful Service, and if you can do it via a RESTful call, that will always be much faster - Kris has a great example of how to do this. However, one benefit of doing this via an On Demand process is that it will also be constrained by any conditions or authorization schemes that are in place for your APEX application (that is, if your application requires authentication and authorization, someone won't be able to access the URL unless they are likewise authenticated to your APEX application and fully authorized).

  1. Navigate to Application Builder -> Shared Components -> Application Items
  2. Click Create
    • Name:  FILE_ID
    • Scope:  Application
    • Session State Protection:  Unrestricted
  3. Navigate to Application Builder -> Shared Components -> Application Processes
  4. Click Create
    • Name: GETIMAGE
    • Point:  On Demand: Run this application process when requested by a page process.
  5. Click Next
  6. For Process Text, enter the following code:


begin
    for c1 in (select *
                 from my_image_table
                where id = :FILE_ID) loop
        --
        sys.htp.init;
        sys.owa_util.mime_header( c1.mime_type, FALSE );
        sys.htp.p('Content-length: ' || sys.dbms_lob.getlength( c1.blob_content ));
        sys.htp.p('Content-Disposition: attachment; filename="' || c1.filename || '"' );
        sys.htp.p('Cache-Control: max-age=3600'); -- tell the browser to cache for one hour, adjust as necessary
        sys.owa_util.http_header_close;

        sys.wpg_docload.download_file( c1.blob_content );
        apex_application.stop_apex_engine;
    end loop;
end;

Then, all you need to do is construct a URL in your application which calls this application process, as described in the Application Express Application Builder Users' Guide.  You could manually construct a URL using APEX_UTIL.PREPARE_URL, or specify a link in the declarative attributes of a Report Column.  Just be sure to specify a Request of 'APPLICATION_PROCESS=GETIMAGE' (or whatever your application process name is).  The URL will look something like:


f?p=&APP_ID.:0:&APP_SESSION.:APPLICATION_PROCESS=GETIMAGE:::FILE_ID:<some_valid_id>
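If you are building the link in PL/SQL instead, a sketch using APEX_UTIL.PREPARE_URL might look like this (l_file_id is a hypothetical variable holding the image ID):

declare
    l_url     varchar2(4000);
    l_file_id number := 42;  -- hypothetical ID of a row in my_image_table
begin
    -- PREPARE_URL adds the session state protection checksum when one is required
    l_url := apex_util.prepare_url(
                 p_url => 'f?p=' || :APP_ID || ':0:' || :APP_SESSION
                       || ':APPLICATION_PROCESS=GETIMAGE:::FILE_ID:' || l_file_id );
    -- l_url can now be emitted as the href of a link
end;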

That's all there is to it.

A few closing comments:
  1. Be mindful of the authorization scheme specified for the application process.  By default, the Authorization Scheme will be "Must Not Be Public User", which is normally acceptable for applications requiring authentication.  But also remember that you could restrict these links based upon other authorization schemes too.
  2. If you want to display the image inline instead of being downloaded by a browser, just change the Content-Disposition from 'attachment' to 'inline'.
  3. A reasonable extension and optimization to this code would be to add a version number to your underlying table, increment it every time the file changes, and then reference this file version number in the URL.  Doing this, in combination with a Cache-Control directive in the MIME header, would let the client browser cache it for a long time without ever running your On Demand Process again (and thus, saving your valuable database cycles).  A sketch of this appears after this list.
  4. Application Processes can also be defined on the page-level, so if you wished to have the download link be constrained by the authorization scheme on a specific page, you could do this too.
  5. Be careful how this is used. If you don't implement some form of browser caching, then a report which displays 500 images inline on a page will result in 500 requests to the APEX engine and database, per user per page view! Ouch! And then it's a matter of time before a DBA starts hunting for the person slamming their database and reports that "APEX is killing our database". There is an excellent explanation of cache headers here.
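To sketch comment 3 (FILE_VERSION is a hypothetical second application item, and a version column on my_image_table is assumed), the link would carry the version alongside the ID:

f?p=&APP_ID.:0:&APP_SESSION.:APPLICATION_PROCESS=GETIMAGE:::FILE_ID,FILE_VERSION:<some_valid_id>,<file_version>

and the process would emit a long-lived cache directive, for example:

sys.htp.p('Cache-Control: max-age=31536000'); -- safe, because the URL changes whenever the version does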

Don't use INC and DEC in PL/SQL Libraries

Gerd Volberg - Wed, 2014-03-19 06:01
In my oldest PL/SQL library, in Forms 4, my first procedures were:

PROCEDURE Inc (P_Number IN OUT NUMBER) IS
BEGIN
   P_Number := P_Number + 1;
END;

PROCEDURE Dec (P_Number IN OUT NUMBER) IS
BEGIN
   P_Number := P_Number - 1;
END;

In some variations with one or two parameters, I have used this Increment and Decrement for 20 years, without ever having trouble.

Now I have found that DEC no longer works in newer Oracle Forms versions...

And why? Oracle created a new PL/SQL subtype of DECIMAL, called "DEC". So you can use it this way:
DECLARE
   V_Value DEC;
BEGIN
...
 
And this behaviour kills my procedure in the PL/SQL library :-(
Most other languages know INC and DEC for incrementing and decrementing. Not Oracle!
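One defensive option is simply to pick names that cannot collide with identifiers in package STANDARD; a sketch (Incr/Decr are names I chose for illustration, not anything from the original library):

PROCEDURE Incr (P_Number IN OUT NUMBER) IS
BEGIN
   P_Number := P_Number + 1;
END;

PROCEDURE Decr (P_Number IN OUT NUMBER) IS
BEGIN
   P_Number := P_Number - 1;
END;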

Be careful when creating new functions which you want to use for a long time :-)
Gerd

Collaborate 14 Paper: Balancing by 2 Segments

David Haimes - Tue, 2014-03-18 11:55

It is a very common requirement to automatically balance by two different segments of the chart of accounts, typically the Legal Entity and some sort of Management Entity (department, line of business, cost center, etc.).  Fusion Financials allows you to balance by up to three segments, which is always a well-received feature.  However, at Collaborate you can learn how one company built a custom solution to do this in E-Business Suite.  I'm honored to have been asked to co-present with GE on this topic; they have done some really good work here and I'm sure it will be of great interest to others too.  So come along and join us; you'll get the specifics of what they did. The details are below:

Session ID: 13941
When: 10 Apr 11:00 AM-12:00 PM
Room: Level 1, Marco Polo – 801
Speakers: Sangeeta Sameer, General Electric (GE),  David Haimes, Oracle Corporation
Abstract: This paper describes the Custom solution that General Electric (GE) implemented in collaboration with Oracle to achieve Balancing by 2 segments in Release 12.1.3 of Oracle E-Business Suite (EBS). GE has the business need to Balance by both the Legal Entity (LE) and Management Entity (ME) segments in the Chart of Accounts. While Fusion Applications allow Balancing by 3 segments, Oracle EBS Applications do not have this Functionality. This paper covers the requirements and solution in detail for 2 Segment Balancing.


Categories: APPS Blogs
