Feed aggregator

Not Just a Cache

Antony Reynolds - Thu, 2014-03-27 22:23
Coherence as a Compute Grid

Coherence is best known as a data grid, providing distributed caching with an ability to move processing to the data in the grid.  Less well known is the fact that Coherence also has the ability to function as a compute grid, distributing work across multiple servers in a cluster.  In this entry, which was co-written with my colleague Utkarsh Nadkarni, we will look at using Coherence as a compute grid through the use of the Work Manager API and compare it to manipulating data directly in the grid using Entry Processors.

Coherence Distributed Computing Options

The Coherence documentation identifies several methods for distributing work across the cluster; see Processing Data in a Cache.  They can be summarized as:

  • Entry Processors
    • An InvocableMap interface, inherited by the NamedCache interface, provides support for executing an agent (EntryProcessor or EntryAggregator) on individual entries within the cache.
    • The entries may or may not exist; either way, the agent is executed once for each key provided or, if no keys are provided, once for each object in the cache.
    • In Enterprise and Grid editions of Coherence the entry processors are executed on the primary cache nodes holding the cached entries.
    • Agents can return results.
    • One agent executes multiple times per cache node, once for each key targeted on the node.
  • Invocation Service
    • An InvocationService provides support for executing an agent on one or more nodes within the grid.
    • Execution may be targeted at specific nodes or at all nodes running the Invocation Service.
    • Agents can return results.
    • One agent executes once per node.
  • Work Managers
    • A WorkManager class provides a grid-aware implementation of the CommonJ WorkManager, which can be used to run tasks across multiple threads on multiple nodes within the grid.
    • WorkManagers run on multiple nodes.
    • Each WorkManager may have multiple threads.
    • Tasks implement the Work interface and are assigned to specific WorkManager threads to execute.
    • Each task is executed once.
Three Models of Distributed Computation

The previous section listing the distributed computing options in Coherence shows that there are three distinct execution models:

  • Per Cache Entry Execution (Entry Processor)
    • Execute the agent on the entry corresponding to a cache key.
    • Entries processed on a single thread per node.
    • Parallelism across nodes.
  • Per Node Execution (Invocation Service)
    • Execute the same agent once per node.
    • Agent processed on a single thread per node.
    • Parallelism across nodes.
  • Per Task Execution (Work Manager)
    • Each task executed once.
    • Parallelism across nodes and across threads within a node.

The entry processor is good for operating on individual cache entries.  It is not so good for working on groups of cache entries.
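
For illustration, a minimal entry processor might look like this (a sketch only; the cache name, key and counter logic are illustrative, not from the sample project):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

public class IncrementProcessor extends AbstractProcessor {

    @Override
    public Object process(InvocableMap.Entry entry) {
        // Runs on the storage node that owns the entry.
        Integer current = (Integer) entry.getValue();
        int next = (current == null) ? 1 : current.intValue() + 1;
        entry.setValue(Integer.valueOf(next));
        return Integer.valueOf(next); // returned to the caller of invoke()
    }

    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("counters");
        // The processor is serialized and shipped to wherever "my-key" lives.
        Object newValue = cache.invoke("my-key", new IncrementProcessor());
        System.out.println("counter = " + newValue);
    }
}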

The invocation service is good for performing checks on a node, but is limited in its parallelism.
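
By contrast, an invocation service agent runs once per node. A minimal sketch (it assumes an invocation service named "InvocationService" is defined in the cache configuration):

import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import java.util.Map;

public class FreeMemoryCheck extends AbstractInvocable {

    @Override
    public void run() {
        // Executes once on each targeted node; the result is returned to the caller.
        setResult(Long.valueOf(Runtime.getRuntime().freeMemory()));
    }

    public static void main(String[] args) {
        InvocationService service =
                (InvocationService) CacheFactory.getService("InvocationService");
        // A null member set means "all nodes running the service".
        Map results = service.query(new FreeMemoryCheck(), null);
        System.out.println("Free memory by member: " + results);
    }
}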

The work manager is good for operating on groups of related entries in the cache or performing non-cache related work in parallel.  It has a high degree of parallelism.
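
And a minimal Work Manager task, for comparison (again a sketch; WorkManager here is Coherence's com.tangosol.coherence.commonj.WorkManager and the names are illustrative):

import com.tangosol.coherence.commonj.WorkManager;
import commonj.work.Work;
import commonj.work.WorkException;
import commonj.work.WorkItem;
import java.io.Serializable;
import java.util.Collections;

public class HelloWork implements Work, Serializable {

    // Work extends Runnable; this runs on one Work Manager thread somewhere in the grid.
    public void run() {
        System.out.println("Hello from a work manager thread");
    }

    public boolean isDaemon() { return false; }

    public void release() { }

    public static void main(String[] args) throws WorkException, InterruptedException {
        // "MyWorkManager" with 2 execution threads contributed by this member.
        WorkManager manager = new WorkManager("MyWorkManager", 2);
        WorkItem item = manager.schedule(new HelloWork());
        manager.waitForAll(Collections.singletonList(item), WorkManager.INDEFINITE);
    }
}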

As you can see, the primary choice for distributed computing comes down to the Work Manager and the Entry Processor.

Differences between using Entry Processors and Work Managers in Coherence

  • Degree of parallelization
    • Entry Processors: A function of the number of Coherence nodes. EntryProcessors run concurrently across all nodes in a cluster; however, within each node only one instance of the entry processor executes at a time.
    • Work Managers: A function of the number of Work Manager threads. The Work runs concurrently across all threads in all Work Manager instances.
  • Transactionality
    • Entry Processors: Transactional. If an EntryProcessor running on one node does not complete (say, due to that node crashing), the targeted entries will be executed by an EntryProcessor on another node.
    • Work Managers: Not transactional. The specification does not say what the response should be if a remote server crashes during execution; the current implementation uses WORK_COMPLETED with a WorkCompletedException as the result. If a Work does not run to completion, it is the client's responsibility to resubmit it to the Work Manager.
  • How is the cache accessed or mutated?
    • Entry Processors: Operations against the cache contents are executed by (and thus within the localized context of) a cache.
    • Work Managers: Accesses and changes to the cache are made directly through the cache API.
  • Where is the processing performed?
    • Entry Processors: In the same JVM where the entries-to-be-processed reside.
    • Work Managers: In the Work Manager server, which may not be the same JVM where the entries-to-be-processed reside.
  • Network traffic
    • Entry Processors: A function of the size of the EntryProcessor, one copy of which is transmitted to each cache node. Typically an EntryProcessor is much smaller than the data transferred across nodes in the Work Manager approach, which makes the EntryProcessor approach more network-efficient and hence more scalable.
    • Work Managers: A function of the number of Work objects (multiple may be sent to each server) and the size of the data set transferred from the Backing Map to the Work Manager server.
  • Distribution of “tasks”
    • Entry Processors: Tasks are moved to the location at which the entries-to-be-processed are being managed. This may result in a random distribution of tasks, though the distribution tends to become equitable as the number of entries increases.
    • Work Managers: Tasks are distributed equally across the threads in the Work Manager instances.
  • Implementation of the EntryProcessor or Work class
    • Entry Processors: Create a class that extends AbstractProcessor and implement the process method.
    • Work Managers: Create a serializable class that implements commonj.work.Work and implement the run method.
  • Implementation of the “task”
    • Entry Processors: In the process method, update the cache item based on the key passed in.
    • Work Managers: In the run method, get a reference to the named cache, get the cache item, change it, and put it back into the named cache.
  • Completion notification
    • Entry Processors: When the NamedCache.invoke method completes, all the entry processors have completed executing.
    • Work Managers: A submitted task executes asynchronously on the work manager threads in the cluster. Status may be obtained by registering a commonj.work.WorkListener when calling WorkManager.schedule; this provides updates when the Work is accepted, started, and completed or rejected. Alternatively, the WorkManager.waitForAll and WorkManager.waitForAny methods allow blocking waits for all or one result respectively.
  • Returned results
    • Entry Processors: java.lang.Object when executed on one cache item (the result of the invocation as returned from the EntryProcessor), or java.util.Map when executed on a collection of keys (a Map containing the results of invoking the EntryProcessor against each of the specified keys).
    • Work Managers: commonj.work.WorkItem, with three possible outcomes: the Work is not yet complete (WorkItem.getResult returns null); the Work started but completed with an exception, perhaps because a Work Manager instance terminated abruptly (WorkItem.getResult throws the exception); or the Work ran to completion (WorkItem.getResult returns a non-null result and throws no exception).
  • Error handling
    • Entry Processors: Failure of a node results in all the work assigned to that node being executed on the new primary. This may result in some work being executed twice, but Coherence ensures that the cache is only updated once per item.
    • Work Managers: Failure of a node results in the loss of scheduled tasks assigned to that node. Completed tasks are sent back to the client as they complete.

Fault Handling Extension

Entry processors have excellent error handling within Coherence.  Work Managers less so.  In order to provide resiliency on node failure I implemented a “RetryWorkManager” class that detects tasks that have failed to complete successfully and resubmits them to the grid for another attempt.

A JDeveloper project with the RetryWorkManager is available for download here.  It includes sample code to run a simple task across multiple work manager threads.

To create a new RetryWorkManager that will retry failed work twice, you would use this:

WorkManager workManager = new RetryWorkManager("WorkManagerName", 2); // Change for number of retries; if no retry count is provided then the default is 0.

You can control the number of retries at the individual work level as shown below:

WorkItem workItem = schedule(work); // Use number of retries set at WorkManager creation
WorkItem workItem = schedule(work, workListener); // Use number of retries set at WorkManager creation
WorkItem workItem = schedule(work, 4); // Change number of retries
WorkItem workItem = schedule(work, workListener, 4); // Change number of retries

Currently the RetryWorkManager defaults to having 0 threads.  To change this, use the following call:

WorkItem workItem = schedule(work, workListener, 3, 4); // Change number of threads (3) and retries (4)

Note that none of this sample code is supported by Oracle in any way, and is provided purely as a sample of what can be done with Coherence.

How the RetryWorkManager Works

The RetryWorkManager delegates most operations to a Coherence WorkManager instance.  It creates a WorkManagerListener to intercept status updates.  On receiving a WORK_COMPLETED callback the listener checks the result to see if the completion is due to an error.  If an error occurred and there are retries left then the work is resubmitted.  The WorkItem returned by scheduling an event is wrapped in a RetryWorkItem.  This RetryWorkItem is updated with a new Coherence WorkItem when the task is retried.  If the client registers a WorkManagerListener then the RetryWorkManagerListener delegates non-retriable events to the client listener.  Finally the waitForAll and waitForAny methods are modified to deal with work items being resubmitted in the event of failure.
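
The retry decision at the heart of this looks roughly as follows (a sketch only; RetryWorkManager.retry is a hypothetical helper standing in for the resubmission logic in the downloadable project):

import commonj.work.WorkEvent;
import commonj.work.WorkException;
import commonj.work.WorkItem;
import commonj.work.WorkListener;

public class RetryWorkManagerListener implements WorkListener {

    private final RetryWorkManager owner;      // hypothetical: resubmits failed work if retries remain
    private final WorkListener clientListener; // the client's listener, may be null

    public RetryWorkManagerListener(RetryWorkManager owner, WorkListener clientListener) {
        this.owner = owner;
        this.clientListener = clientListener;
    }

    public void workAccepted(WorkEvent e) { if (clientListener != null) clientListener.workAccepted(e); }

    public void workRejected(WorkEvent e) { if (clientListener != null) clientListener.workRejected(e); }

    public void workStarted(WorkEvent e) { if (clientListener != null) clientListener.workStarted(e); }

    public void workCompleted(WorkEvent e) {
        WorkItem item = e.getWorkItem();
        try {
            item.getResult(); // throws WorkException if the Work failed
        } catch (WorkException ex) {
            if (owner.retry(item)) {
                return; // resubmitted; don't notify the client yet
            }
        }
        if (clientListener != null) clientListener.workCompleted(e);
    }
}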

Sample Code for EntryProcessor and RetryWorkManager

The downloadable project contains sample code for running the work manager and an entry processor.

The demo implements a 3-tier architecture

  1. Coherence Cache Servers
    • Can be started by running RunCacheServer.cmd
    • Runs a distributed cache used by the Task to be executed in the grid
  2. Coherence Work Manager Servers
    • Can be started by running RunWorkManagerServer.cmd
    • Takes no parameters
    • Runs two threads for executing tasks
  3. Coherence Work Manager Clients
    • Can be started by running RunWorkManagerClient.cmd
    • Takes three parameters currently (see the example after this list)
      • Work Manager name - should be "AntonyWork" - default is "AntonyWork"
      • Number of tasks to schedule - default is 10
      • Time to wait for tasks to complete in seconds - default is 60
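
For example, to schedule 20 tasks and wait up to two minutes for them to complete (parameter order as listed above; values are illustrative):

RunWorkManagerClient.cmd AntonyWork 20 120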

The task stores the number of times it has been executed in the cache, so multiple runs will see the counter incrementing.  The choice between EntryProcessor and WorkManager is controlled by changing the value of USE_ENTRY_PROCESSOR between false and true in the RunWorkManagerClient.cmd script.

The SetWorkManagerEnv.cmd script should be edited to point to the Coherence home directory and the Java home directory.

Summary

If you need to perform operations on cache entries and don’t need to have cross-checks between the entries then the best solution is to use an entry processor.  The entry processor is fault tolerant and updates to the cached entity will be performed once only.

If you need to perform generic work that may need to touch multiple related cache entries then the work manager may be a better solution.  The extensions I created in the RetryWorkManager provide a degree of resiliency to deal with node failure without impacting the client.

The RetryWorkManager can be downloaded here.

Enabling Analytics on Edge Devices in Internet of Things

Anshu Sharma - Thu, 2014-03-27 09:50

We are doing a live webcast on this topic on Apr 24, 10AM PST. Please register if you can join or want to get an on-demand link after the event. Looking forward to an interesting discussion on this topic.

Registration Page 

http://event.on24.com/utilApp/eventdetail?eventid=765265&sessionid=1&key=95F98E49015258DCE699497D930DDB0F

INDEX to SYSDBA without SELECT

Paul Wright - Thu, 2014-03-27 09:21
Hello Oracle Security Readers, If we combine the following factors together then we can identify an escalation route from Index on SYSTEM to SYSDBA which does not require SELECT privileges on the indexed table: 1. SYSTEM passes its DBA role through its procedures. 2. Oracle indexes allow execution from read via functions i.e. INDEX can [...]

Using the JAX-RS 2.0 Client API with WebLogic Server 12.1.3

Steve Button - Wed, 2014-03-26 23:05


Please note: this blog discusses WebLogic Server 12.1.3
which has not yet been released.

As part of the JAX-RS 2.0 support we are providing with WebLogic Server 12.1.3, one really useful new feature is the new Client API it provides, enabling applications to easily interact with REST services to consume and publish information.

By way of a simple example, I'll build out an application that uses the freegeoip.net REST service to look up the physical location of a specified IP address or domain name and deploy it to WebLogic Server 12.1.3.

The first step to perform is to make a call to the freegeoip.net REST API and examine the JSON payload that is returned.

$ curl http://freegeoip.net/json/buttso.blogspot.com

{"ip":"173.194.115.75","country_code":"US","country_name":"United States","region_code":"CA","region_name":"California","city":"Mountain View","zipcode":"94043","latitude":37.4192,"longitude":-122.0574,"metro_code":"807","area_code":"650"}

The next step is to build a Java class to represent the JSON payload that is returned. In this case, it's quite simple because the JSON payload that is returned doesn't contain any relationships or complex data structures.

/**
 *
 * @author sbutton
 * {"ip":"173.194.115.75","country_code":"US","country_name":"United States","region_code":"CA","region_name":"California","city":"Mountain View","zipcode":"94043","latitude":37.4192,"longitude":-122.0574,"metro_code":"807","area_code":"650"}
 */
public class GeoIp implements Serializable {

    private String ipAddress;
    private String countryName;
    private String regionName;
    private String city;
    private String zipCode;
    private String latitude;
    private String longitude;

    public String getIpAddress() {
        return ipAddress;
    }

    public void setIpAddress(String ipAddress) {
        this.ipAddress = ipAddress;
    }

    ...

}

With the GeoIp class defined, the next step is to consider how to convert the JSON payload into an instance of it. I'll show two ways this can be done.

The first way to do it is to create a class that reads the result of the REST request, parses the JSON payload and constructs a representative instance of the GeoIP class. Within the JAX-RS API, there is an interface MessageBodyReader that can be implemented to convert a Stream into a Java type.

http://docs.oracle.com/javaee/6/api/javax/ws/rs/ext/MessageBodyReader.html

Implementing this interface gives you the readFrom(Class type, Type genericType, Annotation[] annotations, MediaType mediaType, MultivaluedMap httpHeaders, InputStream entityStream) method, which supplies an InputStream containing the response to read. The method then parses out the JSON payload and constructs a corresponding GeoIp instance from it.

Parsing the JSON payload is straightforward with WebLogic Server 12.1.3 since we've included the (JSR-353) Java API for JSON Processing implementation which provides an API for reading and creating JSON objects.

package oracle.demo.wls.jaxrs.client.geoip;

import java.io.IOException;
import java.io.InputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import javax.json.Json;
import javax.json.stream.JsonParser;
import javax.ws.rs.Consumes;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.Provider;

@Provider
@Consumes(MediaType.APPLICATION_JSON)
public class GeoIpReader implements MessageBodyReader<GeoIp> {

    @Override
    public boolean isReadable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
        return GeoIp.class.isAssignableFrom(type);
    }

    @Override
    public GeoIp readFrom(Class<GeoIp> type, Type genericType, Annotation[] annotations, MediaType mediaType, MultivaluedMap<String, String> httpHeaders, InputStream entityStream) throws IOException, WebApplicationException {
        GeoIp g = new GeoIp();
        JsonParser parser = Json.createParser(entityStream);
        while (parser.hasNext()) {
            switch (parser.next()) {
                case KEY_NAME:
                    String key = parser.getString();
                    parser.next(); // advance to the value event
                    switch (key) {
                        case "ip":
                            g.setIpAddress(parser.getString());
                            break;
                        case "country_name":
                            g.setCountryName(parser.getString());
                            break;
                        case "latitude":
                            g.setLatitude(parser.getString());
                            break;
                        case "longitude":
                            g.setLongitude(parser.getString());
                            break;
                        case "region_name":
                            g.setRegionName(parser.getString());
                            break;
                        case "city":
                            g.setCity(parser.getString());
                            break;
                        case "zipcode":
                            g.setZipCode(parser.getString());
                            break;
                        default:
                            break;
                    }
                    break;
                default:
                    break;
            }
        }
        return g;
    }
}

Once this class is built, it can be registered with the Client so that it is called when necessary to convert a payload of MediaType.APPLICATION_JSON into an instance of the GeoIp class, here done in a @PostConstruct method on a JSF bean:

@PostConstruct
public void init() {
    client = ClientBuilder.newClient();
    client.register(GeoIpReader.class);
}


The alternative way to do this is to use the EclipseLink MOXy JAXB implementation that is provided with WebLogic Server, which can automatically marshal and unmarshal JSON payloads to and from Java objects. Helpfully, the JAX-RS 2.0 shared library that ships with WebLogic Server 12.1.3 contains the jersey-media-moxy extension, which enables the EclipseLink MOXy implementation to be simply registered and used by applications when conversion is needed.

To use the JAXB/MOXy approach, the GeoIpReader class can be thrown away. No manual parsing of the payload is required. Instead, the base GeoIp class is annotated with JAXB annotations to denote it as being JAXB-enabled and to provide some assistance in mapping the class properties to the payload property names.

package oracle.demo.wls.jaxrs.client.geoip;

import java.io.Serializable;
import javax.xml.bind.annotation.XmlAttribute;
import javax.xml.bind.annotation.XmlRootElement;

/**
 *
 * @author sbutton
 * {"ip":"173.194.115.75","country_code":"US","country_name":"United States","region_code":"CA","region_name":"California","city":"Mountain View","zipcode":"94043","latitude":37.4192,"longitude":-122.0574,"metro_code":"807","area_code":"650"}
 */
@XmlRootElement
public class GeoIp implements Serializable {

    @XmlAttribute(name = "ip")
    private String ipAddress;
    @XmlAttribute(name = "country_name")
    private String countryName;
    @XmlAttribute(name = "region_name")
    private String regionName;
    @XmlAttribute(name = "city")
    private String city;
    @XmlAttribute(name = "zipcode")
    private String zipCode;
    @XmlAttribute(name = "latitude")
    private String latitude;
    @XmlAttribute(name = "longitude")
    private String longitude;

    ...

}


With the JAXB annotations placed on the GeoIp class to enable it to be automatically marshalled/unmarshalled from JSON, the last step is to register the EclipseLink MOXy implementation with the Client. This is done with the assistance of a small utility method, as shown in the Media chapter of the Jersey User Guide.

public static ContextResolver<MoxyJsonConfig> createMoxyJsonResolver() {
    final MoxyJsonConfig moxyJsonConfig = new MoxyJsonConfig();
    moxyJsonConfig.setFormattedOutput(true);

    Map<String, String> namespacePrefixMapper = new HashMap<String, String>(1);
    namespacePrefixMapper.put("http://www.w3.org/2001/XMLSchema-instance", "xsi");
    moxyJsonConfig.setNamespacePrefixMapper(namespacePrefixMapper).setNamespaceSeparator(':');

    return moxyJsonConfig.resolver();
}
This method is then used to register the relevant ContextResolver with the Client to handle JSON conversions, instead of the GeoIpReader class that was used before:

@PostConstruct
public void init() {
    client = ClientBuilder.newClient();
    client.register(createMoxyJsonResolver());
    //client.register(GeoIpReader.class);
}

With the JSON payload to GeoIP conversion now covered, the JAX-RS Client API can be used to make the call to the freegeoip REST service and process the response.

To make a client call, two classes are used: javax.ws.rs.client.Client and javax.ws.rs.client.WebTarget.

The Jersey User Guide provides a good description of these two classes and their relationship:

The JAX-RS Client API is designed to allow a fluent programming model. This means a construction of a Client instance, from which a WebTarget is created, from which a request Invocation is built and invoked, can be chained in a single "flow" of invocations ... Once you have a Client instance you can create a WebTarget from it ... A resource in the JAX-RS client API is an instance of the Java class WebTarget and encapsulates a URI. The fixed set of HTTP methods can be invoked based on the WebTarget. The [base] representations are Java types, instances of which may contain links that new instances of WebTarget may be created from.

In this example application, the Client is opened in an @PostConstruct method and closed in a @PreDestroy method, with the WebTarget being created and its GET method called when the lookup is executed by the user.

@Named
@RequestScoped
public class GeoIpBackingBean {

    private WebTarget target = null;
    private Client client = null;

    ...

    @PostConstruct
    public void init() {
        client = ClientBuilder.newClient();
        //client.register(createMoxyJsonResolver());
        client.register(GeoIpReader.class);
    }

    @PreDestroy
    public void byebye() {
        client.close();
    }

    public void lookupAddress() {
        try {
            target = client.target(String.format(rest_base_url, addressToLookup));
            geoIp = target.request().get(GeoIp.class);
        } catch (Exception e) {
            e.printStackTrace();
            FacesContext.getCurrentInstance().addMessage(null, new FacesMessage("Error executing REST call: " + e.getMessage()));
        }
    }

    ...
}


Bringing it all together as a JSF-based application results in a JSF bean that allows the IP address to be entered, with a method that invokes the JAX-RS Client API to call out to the freegeoip.net REST service and retrieve the JSON payload containing the location information. A simple JSF facelet page supports entering the IP address and displaying the relevant data from the GeoIp object.



<h:form>
    <h:panelGrid columns="2" style="vertical-align: top;">
        <h:outputLabel value="Address"/>
        <h:inputText value="${geoIpBackingBean.addressToLookup}"/>
        <h:outputLabel value=""/>
        <h:commandButton action="${geoIpBackingBean.lookupAddress()}" value="Lookup" style="margin: 5px;"/>
    </h:panelGrid>
</h:form>

<h:panelGrid columns="2">
    <h:outputText value="IP:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.ipAddress}"/>
    <h:outputText value="Country:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.countryName}"/>
    <h:outputText value="State:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.regionName}"/>
    <h:outputText value="City:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.city}"/>
    <h:outputText value="Zipcode:"/>
    <h:outputText value="${geoIpBackingBean.geoIp.zipCode}"/>
    <h:outputText value="Coords:"/>
    <c:if test="${geoIpBackingBean.geoIp.ipAddress != null}">
        <h:outputText value="${geoIpBackingBean.geoIp.latitude},${geoIpBackingBean.geoIp.longitude}"/>
    </c:if>
</h:panelGrid>

The last step to perform is to add a weblogic.xml deployment descriptor with a library-ref to the [jsf,2.0] shared-library, which must be deployed as I described earlier in Using JAX-RS 2.0 with WebLogic Server 12.1.3.

The application is now ready to deploy and run.

Battling Bigfile Backup Bottlenecks

Don Seiler - Wed, 2014-03-26 13:44
Last Friday I kicked off a database backup to an NFS destination, using the standard "backup as compressed backupset database" syntax. Loyal readers of this blog may recall that I'm the proud custodian of a 25 TB database, so this backup normally takes a few days, with an expected completion on Monday morning. However it was still running on Wednesday, and reviewing the logs I saw that there was just 1 channel (of the original 8) still running. The backup file that this channel was writing happened to include our largest bigfile datafile, which weighs in at nearly 8 TB. Reviewing my new backup script I realized that I had neglected to specify a SECTION SIZE parameter. An example of its usage is:

RMAN> backup as compressed backupset
2> section size 64G
3> database;

Without it, RMAN decided to create a backup piece that bundled my 8 TB datafile with a few others and write it out to disk on one channel. Obviously this isn't what we wanted.


I'm not a big fan of bigfile tablespaces, primarily because you lose the benefits of parallelism when huge files can only be handled by a single channel, as with datafile copy operations and backups. In 11g, however, Oracle introduced multi-section backups via the SECTION SIZE option for RMAN backups. This option tells RMAN to break a large file into sections of the specified size so that the work can be done in parallel. If the specified size is larger than the file, then it is simply ignored for that file.

There is a limitation in that the file can be split into a maximum of 256 sections. So, if you specify a section size that would result in more than 256 sections being created, Oracle RMAN will increase the size so that exactly 256 sections are created. This is still enforced today in Oracle 12c.
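
To make that concrete: an 8 TB datafile is 8192 GB, so a section size of 16G would imply 512 sections. That exceeds the limit, so RMAN would silently raise the section size to 32 GB (8192 / 256). The 64G setting in the example above yields a comfortable 128 sections.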

Another limitation in Oracle 11g is that multi-section backups cannot be done with image copy backups. Those must still be done as a whole file and so can still be a huge bottleneck. However this is no longer a problem in Oracle 12c, and multi-section image copy backups are possible. I'm looking forward to using this as we also use image copy backups as part of our recovery strategy.

To highlight the parallel benefits, I ran a compressed backup of a 1.5 TB bigfile tablespace using 8 channels. The first one does NOT use section size and so only goes over one channel:

Starting backup at 2014/03/25 14:35:41
...
channel c1: backup set complete, elapsed time: 17:06:09
Finished backup at 2014/03/26 07:41:51

The second one uses a section size of 64 GB (otherwise same command & file):

Starting backup at 2014/03/26 07:41:52
...
Finished backup at 2014/03/26 09:48:33

You can see the huge impact made by using the multi-section backup option. A single channel took over 17 hours to back it up. Using 8 channels with a section size of 64 GB took just over 2 hours. Eyeballing the log shows an average of around 6 minutes per section. Definitely much better than waiting for a single channel to do all the work while the rest of the system waits.
Categories: DBA Blogs

Access Management alternatives (Part 1: Directory Services)

Frank van Bortel - Wed, 2014-03-26 08:18
Intro: At the governmental institute that hired me, I'm working hard to get the full Oracle Identity and Access Management (IAM) stack implemented. A colleague suggested OpenIAM, which, at closer look, turns out to be a fork of what I believe to be the origin of the Oracle stack, Sun's OpenSSO. So I started looking at this stack, which is available from ForgeRock. Let's start with the basis:

Microservices is SOD all within SOA

Steve Jones - Tue, 2014-03-25 10:06
Microservices is a Service Oriented Delivery approach, all within a Service Oriented Architecture context. (Long Title ;) Ok so a few more updates since the last time I wrote about Microservices, and I think it's worth just updating as it really is heavily underlining why Microservices is a Service Oriented Delivery approach that absolutely can fit within a Service Oriented Architecture. Let's be
Categories: Fusion Middleware

Let’s keep Manston

Pete Scott - Mon, 2014-03-24 17:22
It is said that aviation is in the blood (or in the genes). My partner worked as cabin crew with Air New Zealand before becoming their senior HR manager responsible for all off-shore based Air New Zealand staff; her cousin is a senior pilot with Cathay Pacific; of her uncles, one was managing director of […]

Chicago Oracle User Community Restart

Jeremy Schneider - Mon, 2014-03-24 13:47

Chicago is the third largest city in the United States. There are probably more professional Oracle users here than in most other areas of the country – and yet for many years now there hasn't been a cohesive user group.

But right now there’s an opportunity for change. If the professional community of Chicago Oracle users steps up to the plate.

Chicago Oracle User Group

First, the Chicago Oracle User Group has just elected a new president. Alfredo Abate is bringing a level of enthusiasm and energy to the position which we’ve been missing for a long time. He’s trying to figure out how to restart the COUG and re-engage the professional community here – but he needs input and assistance from you! If you’re an administrator or developer anywhere near Chicago and you have Oracle software anywhere in your company, then please help Alfredo get the user group going! Here are a few specific things you can do:

  1. Send Alfredo an email saying congrats and offering suggestions for the COUG. You can find him on LinkedIn or the COUG site below.
  2. Join the LinkedIn group that Alfredo set up for the COUG.
  3. Sign up for a free account at the COUG site: chicago.oracle.ioug.org
  4. Complete the survey at the COUG website (must sign up for free account, then look for “survey” link in the top navigation bar). This will help Alfredo think about planning the next event.
Lunch Huddles

A few years ago, I was part of a group of Oracle database users from different companies in Chicago who started hanging out regularly for lunches downtown. It was never a big event but it was a lot of fun to get together and catch up regularly. However I stopped organizing the lunches after a job change back into travel consulting and the birth of our daughter. I live on the north side of the city, I worked from home when I wasn’t traveling, and I wasn’t able to make trips downtown anymore.

Ever since, I’ve missed hanging out with friends downtown and I’ve always wanted to do these group lunches again. Besides the fact that I really enjoy catching up with people, I think that face-to-face meetups really help strengthen our sense of community as a whole in Chicago.

So – after far too long – I started the lunches again last week.

[Photo: Oracle DB Lunch Downtown]

But it’s improved – there are now lunches happening all over ChicagoLand!

Tomorrow: Deerfield
This Wednesday: Des Plaines
Next week Wednesday: Downtown

Coming soon: Naperville?

Please join us for a lunch sometime! I promise you’ll find it to be both beneficial and fun! And also, please join the group on meetup.com – then you’ll get reminders about upcoming lunches in Chicago.

Spread the Word

Even if you don’t live in Chicago, you can help me out with this – send a brief tweet or quick email to any Oracle professionals you know around Chicago and direct them to this blog post. I hope to see some new life in the Oracle professional community here. It won’t happen by accident.

What's New in Dodeca 6.7.1?

Tim Tow - Mon, 2014-03-24 11:28
Last week, we released Dodeca version 6.7.1.4340, which focuses on some new relational functionality. The major new features in 6.7.1 are:
  • Concurrent SQL Query Execution
  • Detailed SQL Timed Logging
  • Query and Display PDF format
  • Ability to Launch Local External Processes from Within Dodeca
Concurrent SQL Query Execution

Dodeca has a built-in SQLPassthroughDataSet object that supports queries to a relational database.  The SQLPassthroughDataSet functionality was engineered such that a SQLPassthroughDataSet object can include multiple queries that get executed and returned on a single trip to the server, and several of our customers have taken great advantage of that functionality.  We have at least one customer, in fact, that has some SQLPassthroughDataSets that execute up to 20 queries in a single trip to the server.  The functionality was originally designed to run the queries sequentially, but in some cases it would be better to run the queries concurrently.  Because this is Dodeca, concurrent query execution is of course configurable at the SQLPassthroughDataSet level.

Detailed SQL Timed Logging

In Dodeca version 6.0, we added detailed timed logging for Essbase transactions.  In this version, we have added similar functionality for SQL transactions and have formatted the logs in pipe-delimited format so they can easily be loaded into Excel or into a database for further analysis.  The columns of the log include the log message level, timestamp, sequential transaction number, number of active threads, transaction GUID, username, action, description, and time to execute in milliseconds.

Below is an example of the log message.

[Screenshot: example pipe-delimited log message]

Query and Display PDF format

The PDF View type now supports the ability to load the PDF directly from a relational table via a tokenized SQL statement.  This functionality will be very useful for those customers who have contextual information, such as invoice images, stored relationally and need a way to display that information.  We frequently see this requirement as the end result of a drill-through operation from either Essbase or relational reports.

Ability to Launch Local External Processes from Within Dodeca

Certain Dodeca customers store data files for non-financial systems in relational data stores and use Dodeca as a central access point.  The new ability to launch a local process from within Dodeca is implemented as a Dodeca Workbook Script method, which provides a great deal of flexibility in how the process is launched.

The new 6.7.1 functionality follows closely on the 6.7.0 release, which introduced proxy server support and new MSAD and LDAP authentication services.  If you are interested in seeing all of the changes in Dodeca, highly detailed Dodeca release notes are available on our website at http://www.appliedolap.com/resources/downloads/dodeca-technical-docs.

Categories: BI & Warehousing

Packt Publishing Buy One Get One Free Offer

Antony Reynolds - Thu, 2014-03-20 14:35
Packt Publishing celebrates their 2000th title with a Buy One Get One Free Offer

Great time to get those Packt books you’ve been thinking of buying, like the SOA Suite 11g Developers Guide or the SOA Suite 11g Developers Cookbook.

Yet Another Post How to Link to Download a File or Display an Image from a BLOB column

Joel Kallman - Thu, 2014-03-20 13:00
On an internal mailing list, an employee (Richard, a long-time user of Oracle Application Express) asked:

"...we are attempting to move to storing (the images) in a BLOB column in our own application tables.  Is there no way to display an image outside of page items and reports? "

Basically, he has a bunch of images stored in the BLOB column of the common upload table, APEX_APPLICATION_FILES (or WWV_FLOW_FILES).  He wishes to move them to a table in his workspace schema, but it's unclear to him how they can be displayed.  While there is declarative support for BLOBs in Application Express, there are times where you simply wish to get a link which would return the image - and without having to add a form and report against the table containing the images.

I fully realize that this question has been answered numerous times in various books and blog posts, but I wish to reiterate it here again.

Firstly, a way not to do this is via a PL/SQL procedure that is called directly from a URL.  I see this "solution" commonly documented on the Internet, and in general, it should not be followed.  The default configuration of Oracle Application Express has a white list of entry points, callable from a URL.  For security reasons, you absolutely want to leave this restriction in place and not relax it.  This is specified as the PlsqlRequestValidationFunction for mod_plsql and security.disableDefaultExclusionList for Oracle REST Data Services (nee APEX Listener).  With this default security measure in place, you will not be able to invoke a procedure in your schema from a URL.  Good!

The easiest way to return an image from a URL in an APEX application is either via a RESTful Service or via an On-Demand process.  This blog post will cover the On-Demand process.  It's definitely easier to implement via a RESTful Service, and if you can do it via a RESTful call, that will always be much faster - Kris has a great example of how to do this. However, one benefit of doing this via an On-Demand process is that it will also be constrained by any conditions or authorization schemes that are in place for your APEX application (that is, if your application requires authentication and authorization, someone won't be able to access the URL unless they are likewise authenticated to your APEX application and fully authorized).

  1. Navigate to Application Builder -> Shared Components -> Application Items
  2. Click Create
    • Name:  FILE_ID
    • Scope:  Application
    • Session State Protection:  Unrestricted
  3. Navigate to Application Builder -> Shared Components -> Application Processes
  4. Click Create
    • Name: GETIMAGE
    • Point:  On Demand: Run this application process when requested by a page process.
  5. Click Next
  6. For Process Text, enter the following code:


begin
    for c1 in (select *
                 from my_image_table
                where id = :FILE_ID) loop
        --
        sys.htp.init;
        sys.owa_util.mime_header( c1.mime_type, FALSE );
        sys.htp.p('Content-length: ' || sys.dbms_lob.getlength( c1.blob_content ));
        sys.htp.p('Content-Disposition: attachment; filename="' || c1.filename || '"' );
        sys.htp.p('Cache-Control: max-age=3600'); -- tell the browser to cache for one hour, adjust as necessary
        sys.owa_util.http_header_close;
        sys.wpg_docload.download_file( c1.blob_content );

        apex_application.stop_apex_engine;
    end loop;
end;

Then, all you need to do is construct a URL in your application which calls this application process, as described in the Application Express Application Builder Users' Guide.  You could manually construct a URL using APEX_UTIL.PREPARE_URL, or specify a link in the declarative attributes of a Report Column.  Just be sure to specify a Request of 'APPLICATION_PROCESS=GETIMAGE' (or whatever your application process name is).  The URL will look something like:


f?p=&APP_ID.:0:&APP_SESSION.:APPLICATION_PROCESS=GETIMAGE:::FILE_ID:<some_valid_id>

That's all there is to it.
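
For example, a report query could build the link like this (a sketch; it assumes the my_image_table used above, with a numeric id column):

select id,
       filename,
       apex_util.prepare_url(
           'f?p=' || :APP_ID || ':0:' || :APP_SESSION
           || ':APPLICATION_PROCESS=GETIMAGE:::FILE_ID:' || id) as image_link
  from my_image_table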

A few closing comments:
  1. Be mindful of the authorization scheme specified for the application process.  By default, the Authorization Scheme will be "Must Not Be Public User", which is normally acceptable for applications requiring authentication.  But also remember that you could restrict these links based upon other authorization schemes too.
  2. If you want to display the image inline instead of being downloaded by a browser, just change the Content-Disposition from 'attachment' to 'inline'.
  3. A reasonable extension and optimization to this code would be to add a version number to your underlying table, increment it every time the file changes, and then reference this file version number in the URL.  Doing this, in combination with a Cache-Control directive in the MIME header would let the client browser cache it for a long time without ever running your On Demand Process again (and thus, saving your valuable database cycles).
  4. Application Processes can also be defined on the page-level, so if you wished to have the download link be constrained by the authorization scheme on a specific page, you could do this too.
  5. Be careful how this is used. If you don't implement some form of browser caching, then a report which displays 500 images inline on a page will result in 500 requests to the APEX engine and database, per user per page view! Ouch! And then it's a matter of time before a DBA starts hunting for the person slamming their database and reports that "APEX is killing our database". There is an excellent explanation of cache headers here.

Don't use INC and DEC in PL/SQL Libraries

Gerd Volberg - Wed, 2014-03-19 06:01
In my oldest PL/SQL library, back in Forms 4, my first procedures were:

PROCEDURE Inc (P_Number IN OUT NUMBER) IS
BEGIN
   P_Number := P_Number + 1;
END;

PROCEDURE Dec (P_Number IN OUT NUMBER) IS
BEGIN
   P_Number := P_Number - 1;
END;

In some variations with one or two parameters, I have used this Increment and Decrement for 20 years, without ever having trouble.

Now I have found out that DEC no longer works in newer Oracle Forms versions...

And why? Oracle created a new PL/SQL SUBTYPE of DECIMAL, and it is called "DEC". So you can use it in this way:
DECLARE
   V_Value DEC;
BEGIN
   ... 
 
And this behaviour kills my procedure in the PL/SQL-Library :-(
All other languages know INC and DEC for incrementing and decrementing. Not Oracle!

Be careful when creating new functions which you want to use for a long time :-)
Gerd

Collaborate 14 Paper: Balancing by 2 Segments

David Haimes - Tue, 2014-03-18 11:55

It is a very common requirement to automatically balance by two different segments of the chart of accounts, typically the Legal Entity and some sort of Management Entity (department, line of business, cost center, etc.).  Fusion Financials allows you to balance by up to three segments, which is always a well-received feature.  However at Collaborate you can learn how one company built a custom solution to do this in E-Business Suite.  I'm honored to have been asked to co-present with GE on this topic; they have done some really good work here and I'm sure it will be of great interest to others too.  So come along and join us - you'll get the specifics of what they did; the details are below:

Session ID: 13941
When: 10 Apr 11:00 AM-12:00 PM
Room: Level 1, Marco Polo – 801
Speakers: Sangeeta Sameer, General Electric (GE),  David Haimes, Oracle Corporation
Abstract: This paper describes the Custom solution that General Electric (GE) implemented in collaboration with Oracle to achieve Balancing by 2 segments in Release 12.1.3 of Oracle E-Business Suite (EBS). GE has the business need to Balance by both the Legal Entity (LE) and Management Entity (ME) segments in the Chart of Accounts. While Fusion Applications allow Balancing by 3 segments, Oracle EBS Applications do not have this Functionality. This paper covers the requirements and solution in detail for 2 Segment Balancing.


Categories: APPS Blogs

Microservices is SOA, for those who know what SOA is.

Steve Jones - Tue, 2014-03-18 10:05
Ok so it's started a bit of debate on Twitter and now there have been emails, but in the spirit of openness I thought I'd better blog.  Now it's good that Martin has now added a sidebar on SOA to his article on Microservices, but that really makes it worse in many ways.  I'll get to that at the end, but first off let's explain why Microservices is just another SOA implementation pattern.  Its SOD
Categories: Fusion Middleware

Why Is My MView Log Not Purging?

Don Seiler - Sun, 2014-03-16 15:50
A few weeks ago we saw one of our tablespaces growing at a rate much higher than the others. Taking a look, we saw that the biggest users of space were two materialized view logs, one being 110 GB and the other 60 GB. These logs were in place to facilitate the fast refresh of two materialized views, one for each log/table. These materialized views did some aggregations (sum) throughout the day on some important base table data. The fast refreshes were completing successfully many times a day, but the logs were not being purged as expected.

In our case, there was only one mview performing a fast refresh on those base tables, so the mview logs should have been completely purged after each refresh. They certainly shouldn't be growing to over 100 GB. Looking at the data in the mview log, all records had a SNAPTIME$$ value of "4000/01/01 00:00:00", which is the default value for records in the mview log that have not been refreshed. Once they are refreshed, the SNAPTIME$$ value gets set to SYSDATE and can then be evaluated for purging.
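
A quick way to see the unpurged rows is to group the log by that column (a sketch; MLOG$_BAR matches the BAR base table that appears in the SLOG$ query below):

SQL> select snaptime$$, count(*)
  2  from mlog$_bar
  3  group by snaptime$$;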

But why was this value not being updated after refresh?

 
For those of you unfamiliar with the role of materialized view logs, I'll share this primer from Tim Hall via his excellent Oracle-Base article:

Since a complete refresh involves truncating the materialized view segment and re-populating it using the related query, it can be quite time consuming and involve a considerable amount of network traffic when performed against a remote table. To reduce the replication costs, materialized view logs can be created to capture all changes to the base table since the last refresh. This information allows a fast refresh, which only needs to apply the changes rather than a complete refresh of the materialized view.
Digging deeper led me to MOS DocId 236233.1, which tells us that Oracle compares the MLOG$_<TABLE_NAME>.SNAPTIME$$ value against SYS.SLOG$.SNAPTIME:

Rows in the MView log are unnecessary if their refresh timestamps MLOG$<table_name>.SNAPTIME$$ are older or equal than the oldest entry in SLOG$.SNAPTIME for this log.

MLOG$<table_name>.SNAPTIME$$ <= MIN (SLOG$.SNAPTIME)
Here's where we saw the real problem.

SQL> select MOWNER,MASTER,SNAPSHOT,SNAPID,SNAPTIME,SSCN,USER#
 2 from sys.slog$
 3 where mowner='FOO' and master='BAR';

 no rows selected

If the purge mechanism checks SLOG$.SNAPTIME then of course nothing is going to happen, as the materialized view is NOT registered in SYS.SLOG$!

We re-created the MVIEW from scratch on our development database and had the same results, which indicates it's something systemic in Oracle so we opened an SR. After the standard back-and-forth of trying the same things over and over, Oracle Support said that this was actually expected behavior:
This mview is defined as fast refreshable with aggregates. The mv log is defined with PRIMARY KEY INCLUDING NEW VALUES.

In order to support fast refresh the mv log should include ROWID as well. Please review the Restrictions on Fast Refresh on Materialized Views with Aggregates located here:
http://docs.oracle.com/cd/E11882_01/server.112/e25554/basicmv.htm#DWHSG8203 There are additional restrictions depending on the operations performed. As an example, SEQUENCE would also need to be added to the mv log if direct loads are performed on new_invoice_record.

This turned out to be the case. We recreated the mview log with the ROWID specification, then re-created the materialized view and, sure enough, the mview was registered in SYS.SLOG$ and refreshes were purging the log as expected.
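
The fix looked roughly like this (a sketch using the FOO.BAR names from the SLOG$ query above; as noted, we also re-created the materialized view itself afterward):

SQL> drop materialized view log on foo.bar;
SQL> create materialized view log on foo.bar
  2  with rowid, primary key, sequence
  3  including new values;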

I was more than a little frustrated then that Oracle would let us create the MVIEW without any warnings or errors in the first place. The database obviously detected something wrong since it wouldn't register them in SYS.SLOG$. Their last response was that, since the MVIEW itself was refreshing successfully, no error should be reported. This fails to address the question for me, so I'm going to push back a little harder and will share what I find.

For now, though, we need to schedule a maintenance window to recreate these materialized views and their logs and see if we can reclaim some disk space afterward (perhaps a future post!).
Categories: DBA Blogs

Find the enabled events using ORADEBUG EVENTDUMP

Mihajlo Tekic - Sat, 2014-03-15 09:46

Just a quick note on how to use ORADEBUG in order to find events that are enabled on system or session level.

Starting from Oracle 10.2 you can use ORADEBUG EVENTDUMP in order to get all events enabled either on session or system level (I am not sure if this is available in 10.1).


The synopsis is:

oradebug eventdump <level>

where <level> is either system, process or session:
 


SQL> oradebug doc event action eventdump
eventdump
- list events that are set in the group
Usage
-------
eventdump( group < system | process | session >)


For demonstration purposes I will set three events, two on session and one on system level.
 


SQL> alter session set events '10046 trace name context forever, level 8';

Session altered.

SQL> alter session set events '942 trace name errorstack';

Session altered.

SQL> alter system set events 'trace[px_control][sql:f54y11t3njdpj]';

System altered.


Now, let's check what events are enabled on SESSION or SYSTEM level using ORADEBUG EVENTDUMP

SQL> oradebug setmypid

SQL> oradebug eventdump session;
trace [RDBMS.PX_CONTROL] [sql:f54y11t3njdpj]
sql_trace level=8
942 trace name errorstack

SQL> oradebug eventdump system
trace [RDBMS.PX_CONTROL] [sql:f54y11t3njdpj]


As you may already know, ORADEBUG requires the SYSDBA privilege. In order to check events set for another session, you can attach to that session's process using oradebug setospid or oradebug setorapid.
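
For example (12345 standing in for the target session's OS process id):

SQL> oradebug setospid 12345
SQL> oradebug eventdump session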

This note was more for my own reference. I hope someone else finds it useful too.

What to Expect When You're Changing the DB_UNIQUE_NAME

Don Seiler - Wed, 2014-03-12 18:31
I recently had to change the db_unique_name of a database to bring it in line with our typical database/DataGuard naming policy of appending the datacenter location. For the sake of this post let's say it was changed from ORCL to ORCL_NYC, since this database is in our fictional New York City datacenter.

I did a quick set of tests and thought I'd share the findings to save anyone any unpleasant surprises. Here are the things to expect when changing DB_UNIQUE_NAME.

Change the Parameter Value
First we obviously have to change the value. How we do so is important.
The command:
alter system set db_unique_name=orcl_nyc scope=spfile;

will result in a db_unique_name of ORCL_NYC, in all uppercase, which is used for the path changes we'll discuss later. However using quotes instead:
alter system set db_unique_name='orcl_nyc' scope=spfile;

will result in a lowercase orcl_nyc value used in the parameter and some paths. In either case, the fun begins when you next restart the instance!



ADR Location (i.e. alert log)
The ADR location appends the DB_UNIQUE_NAME to the location specified by the DIAGNOSTIC_DEST initialization parameter (defaulting to the $ORACLE_BASE environment variable value). When you restart after setting the DB_UNIQUE_NAME, your ADR location will be in a new location using the new DB_UNIQUE_NAME. Probably of most interest to you is that this means your alert log location will move, so any tools or scripts that referenced that file directly (e.g. error-scraping scripts or log-rotation jobs) will need to be updated. 

Regardless of quotes, the ADR path always uses a lowercase string in the path.

Before:

SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      ORCL
SQL> show parameter background_dump_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
background_dump_dest                 string      /oracle/app/diag/rdbms/orcl/or
                                                 cl/trace

After:

SQL> show parameter db_unique_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_unique_name                       string      ORCL_NYC
SQL> show parameter background_dump_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
background_dump_dest                 string      /oracle/app/diag/rdbms/orcl_ny
                                                 c/orcl/trace


Datafile and Online Redo Log OMF Location
If you have the db_create_file_dest (and optionally db_create_online_log_dest_N) parameter set, then Oracle will use the DB_UNIQUE_NAME value in the OMF location for any new datafiles and redo logs created after the change, assuming a full path isn't specified.

SQL> alter tablespace users add datafile size 10m;

Tablespace altered.

SQL> select file_name from dba_data_files
  2  where tablespace_name='USERS';

FILE_NAME
--------------------------------------------------------------------------------
/oracle/app/oradata/ORCL/datafile/data_D-ORCL_I-3995253326_TS-USERS_FNO-4_1rp30nup
/oracle/app/oradata/ORCL_NYC/datafile/o1_mf_users_9l1r8dx9_.dbf

In this case, Oracle will use an uppercase string regardless of whether the DB_UNIQUE_NAME is in upper or lower case. Note that existing files won't be affected; this only applies to new files.

If this database is part of a DataGuard configuration, you'll want to be sure to update your db_file_name_convert and log_file_name_convert parameters to point to the new location.
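
On the standby that might look something like this (paths follow the examples in this post; adjust for your environment):

alter system set db_file_name_convert='/oracle/app/oradata/ORCL/','/oracle/app/oradata/ORCL_NYC/' scope=spfile;
alter system set log_file_name_convert='/oracle/app/oradata/ORCL/','/oracle/app/oradata/ORCL_NYC/' scope=spfile;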

FRA (Backups, Archivelogs, Flashback Logs)
In the same spirit of OMF, the FRA will also change locations. Similar to the previous case, the uppercase value of the DB_UNIQUE_NAME is used in the path, regardless of the original case. So, after a change and a couple of log switches, you would see something like this:

SQL> select name from v$archived_log;

NAME
--------------------------------------------------------------------------------
/oracle/app/fast_recovery_area/ORCL/archivelog/2014_03_12/o1_mf_1_13_9l1sgt20_.arc
/oracle/app/fast_recovery_area/ORCL/archivelog/2014_03_12/o1_mf_1_14_9l1shfjo_.arc
/oracle/app/fast_recovery_area/ORCL_NYC/archivelog/2014_03_12/o1_mf_1_15_9l1sj596_.arc
/oracle/app/fast_recovery_area/ORCL_NYC/archivelog/2014_03_12/o1_mf_1_16_9l1sjw4k_.arc

Again, this will only affect newly created files. Existing backups, archivelogs and flashback logs will not be affected; they remain cataloged in their present locations and will be accessed just fine. RMAN will delete them when needed (or commanded), and then you could choose to delete the empty directories.

Oracle Wallet
I admit I didn't think of this one. If you use an Oracle Wallet and do not specify a location in the sqlnet.ora file, Oracle looks in the default location of $ORACLE_BASE/admin/<DB_UNIQUE_NAME>/wallet/. Even more interesting, the DB_UNIQUE_NAME value here is case-sensitive, so you'll need to be aware of this when moving your wallet files to the new location. Here is a quick look at my findings on this matter:

-- db_unique_name set with no quotes
SQL> select wrl_parameter from v$encryption_wallet;

WRL_PARAMETER
--------------------------------------------------------------------------------
/oracle/app/admin/ORCL_NYC/wallet

-- db_unique_name set with quotes, lowercase
SQL> select wrl_parameter from v$encryption_wallet;

WRL_PARAMETER
--------------------------------------------------------------------------------
/oracle/app/admin/orcl_nyc/wallet

I can't say if this makes a difference when running on Windows, which I believe is case-insensitive. But on Linux it makes a difference.
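
One way to sidestep the default-path lookup entirely is to pin the wallet location in sqlnet.ora, something like the following (the directory is illustrative):

ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /oracle/app/admin/ORCL_NYC/wallet)))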

Conclusion
That wraps this one up. I hope it helps a few people out and saves them an hour or two of head-scratching or worse. If there's anything I forgot, please let me know and I'll update this post.
Categories: DBA Blogs

What is real-time? Depends on who you ask

Steve Jones - Wed, 2014-03-12 13:29
"Real-time" its a word that gets thrown about a lot in IT and its worth documenting a few of the different ways it gets used Hard Real-time This is what Real-time Java was created to address (along with Soft Real-time) what is this?  Easiest way to say it is that often in Hard Real-time environments the following statement is true If it doesn't finish in X milliseconds then people might die So
Categories: Fusion Middleware
