Feed aggregator

Exadata questions

Tom Kyte - Thu, 2017-03-09 08:06
Hello Tom, We have acquired an Exadata Server and would like to clarify some issues that I believe are product myths. 1 - After removing indexes, can queries get faster? 2- I have always used indexes on the columns identified as FKs. Is th...
Categories: DBA Blogs

Oracle Service Bus : Service Exploring via WebLogic Server MBeans with JMX

Amis Blog - Thu, 2017-03-09 03:34

At a public sector organization in the Netherlands there was a need to make an inventory of the deployed OSB services, in order to find out the dependencies on certain external web services (which were on a list to become deprecated).

For this, the endpoints of the business services were of particular interest.

Besides that, the dependencies between services and also the Message Flow per proxy service were of interest, in particular the Operational Branch, Route, Java Callout and Service Callout actions.

Therefore an OSBServiceExplorer tool was developed to explore the services (proxy and business) within the OSB via WebLogic Server MBeans with JMX. For now, this tool was merely used to quickly return the information needed, but in the future it can be the basis for a more comprehensive one.

This article will explain how the OSBServiceExplorer tool uses WebLogic Server MBeans with JMX.

If you are interested in general information about using MBeans with JMX, I kindly point you to another article (written by me) on the AMIS TECHNOLOGY BLOG: “Oracle Service Bus : disable / enable a proxy service via WebLogic Server MBeans with JMX”, via url: https://technology.amis.nl/2017/02/28/oracle-service-bus-disable-enable-a-proxy-service-via-weblogic-server-mbeans-with-jmx/

Remark: Some names in the examples in this article are in Dutch, but don’t let this scare you off.

MBeans

For ease of use, an MS-DOS batch file was created that uses MBeans to explore the services (proxy and business). WebLogic Server contains a set of MBeans that can be used to configure, monitor and manage WebLogic Server resources.

On a server, the MS-DOS batch file “OSBServiceExplorer.bat” is called.

The content of the MS-DOS batch file “OSBServiceExplorer.bat” is:
java.exe -classpath "OSBServiceExplorer.jar;com.bea.common.configfwk_1.7.0.0.jar;sb-kernel-api.jar;sb-kernel-impl.jar;wlfullclient.jar" nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer "xyz" "7001" "weblogic" "xyz"

In the MS-DOS batch file, a class named OSBServiceExplorer is called via java.exe. The main method of this class expects the following parameters:

Parameter name  Description
HOSTNAME        Host name of the AdminServer
PORT            Port of the AdminServer
USERNAME        Username
PASSWORD        Password

In the sample code shown at the end of this article, the use of the following MBeans can be seen:

DomainRuntimeServiceMBean: Provides a common access point for navigating to all runtime and configuration MBeans in the domain as well as to MBeans that provide domain-wide services (such as controlling and monitoring the life cycles of servers and message-driven EJBs and coordinating the migration of migratable services). [https://docs.oracle.com/middleware/1213/wls/WLAPI/weblogic/management/mbeanservers/domainruntime/DomainRuntimeServiceMBean.html]

The wlfullclient.jar library is not provided by default in a WebLogic installation and must be built. The simple way to do this is described in “Fusion Middleware Programming Stand-alone Clients for Oracle WebLogic Server, Using the WebLogic JarBuilder Tool”, which can be reached via url: https://docs.oracle.com/cd/E28280_01/web.1111/e13717/jarbuilder.htm#SACLT240.
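
In short, building it comes down to running the WebLogic JarBuilder Tool from the server/lib directory of the WebLogic installation (see the referenced document for the exact steps for your version), for example:

cd <Middleware Home Directory>/wlserver_10.3/server/lib
java -jar wljarbuilder.jar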

ServerRuntimeMBean: Provides methods for retrieving runtime information about a server instance and for transitioning a server from one state to another. [https://docs.oracle.com/cd/E11035_01/wls100/javadocs_mhome/weblogic/management/runtime/ServerRuntimeMBean.html]

ALSBConfigurationMBean: Provides various APIs to query, export and import resources, obtain validation errors, get and set environment values, and in general manage resources in an ALSB domain. [https://docs.oracle.com/cd/E13171_01/alsb/docs26/javadoc/com/bea/wli/sb/management/configuration/ALSBConfigurationMBean.html]

Once the connection to the DomainRuntimeServiceMBean is made, other MBeans can be found via the findService method.

Service findService(String name,
                    String type,
                    String location)

This method returns the Service on the specified Server or in the primary MBeanServer if the location is not specified.
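
For example, in the full code at the end of this article, the ALSBConfigurationMBean (described above) is obtained from the DomainRuntimeServiceMBean in this way:

ALSBConfigurationMBean alsbConfigurationMBean =
    (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                  ALSBConfigurationMBean.TYPE,
                                                                  null);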

In the sample code shown at the end of this article, certain Java fields are used. For readability, the fields and their values are listed below:

DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME = weblogic.management.mbeanservers.domainruntime
DomainRuntimeServiceMBean.OBJECT_NAME = com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean
ALSBConfigurationMBean.NAME = ALSBConfiguration
ALSBConfigurationMBean.TYPE = com.bea.wli.sb.management.configuration.ALSBConfigurationMBean
Ref.DOMAIN = <Reference to the domain>
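
These fields are used, for example, when setting up the connection to the Domain Runtime MBean Server and when looking up the DomainRuntimeServiceMBean (fragments taken from the initConnection and findDomainRuntimeServiceMBean methods in the full code at the end of this article):

String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

JMXServiceURL serviceURL =
    new JMXServiceURL(protocol, hostname, port, jndiroot + mbeanserver);
…
ObjectName objectName =
    new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);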

Because of the use of com.bea.wli.config.Ref.class, the library <Middleware Home Directory>/Oracle_OSB1/modules/com.bea.common.configfwk_1.7.0.0.jar was needed.

A Ref uniquely represents a resource, project or folder that is managed by the Configuration Framework.

A special Ref DOMAIN refers to the whole domain.
[https://docs.oracle.com/cd/E17904_01/apirefs.1111/e15033/com/bea/wli/config/Ref.html]

Because of the use of weblogic.management.jmx.MBeanServerInvocationHandler.class, the library <Middleware Home Directory>/wlserver_10.3/server/lib/wlfullclient.jar was needed.

When running the code the following error was thrown:

java.lang.RuntimeException: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean
	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:621)
	at weblogic.management.jmx.MBeanServerInvocationHandler.invoke(MBeanServerInvocationHandler.java:418)
	at $Proxy0.findService(Unknown Source)
	at nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer.<init>(OSBServiceExplorer.java:174)
	at nl.xyz.osbservice.osbserviceexplorer.OSBServiceExplorer.main(OSBServiceExplorer.java:445)
Caused by: java.lang.ClassNotFoundException: com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean
	at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
	at weblogic.management.jmx.MBeanServerInvocationHandler.newProxyInstance(MBeanServerInvocationHandler.java:619)
	... 4 more
Process exited.

So, because of the use of com.bea.wli.sb.management.configuration.DelegatedALSBConfigurationMBean.class, the library <Middleware Home Directory>/Oracle_OSB1/lib/sb-kernel-impl.jar was also needed.

Runtime information (name and state) of the server instances

The OSBServiceExplorer tool writes its output to a text file called “OSBServiceExplorer.txt”.

First, the runtime information (name and state) of the server instances (Administration Server and Managed Servers) of the WebLogic domain is written to file.

Example content fragment of the text file:

Found server runtimes:
- Server name: AdminServer. Server state: RUNNING
- Server name: ManagedServer1. Server state: RUNNING
- Server name: ManagedServer2. Server state: RUNNING

See the code fragment below:

fileWriter.write("Found server runtimes:\n");
int length = (int)serverRuntimes.length;
for (int i = 0; i < length; i++) {
    ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];

    String name = serverRuntimeMBean.getName();
    String state = serverRuntimeMBean.getState();
    fileWriter.write("- Server name: " + name + ". Server state: " +
                     state + "\n");
}
fileWriter.write("" + "\n");

List of Ref objects (projects, folders, or resources)

Next, a list of Ref objects is written to file, including the total number of objects in the list.

Example content fragment of the text file:

Found total of 1132 refs, including the following proxy and business services: 
…
- ProxyService: JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS
…
- ProxyService: ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
…
- BusinessService: ZKN ZaakService-2.0/business/eBUS/eBUS_FolderService_BS

See the code fragment below:

Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);


fileWriter.write("Found total of " + refs.size() + " refs, including the following proxy and business services:\n");

for (Ref ref : refs) {
    String typeId = ref.getTypeId();

    if (typeId.equalsIgnoreCase("ProxyService")) {

        fileWriter.write("- ProxyService: " + ref.getFullName() +
                         "\n");
    } else if (typeId.equalsIgnoreCase("BusinessService")) {
        fileWriter.write("- BusinessService: " + ref.getFullName() +
                         "\n");
    } else {
        //fileWriter.write(ref.getFullName());
    }
}

fileWriter.write("" + "\n");

As mentioned before, a Ref object uniquely represents a resource, project or folder. A Ref object has two components:

  • typeId that indicates whether it is a project, folder, or a resource
  • array of names of non-zero length.

For a resource, the array of names starts with the project name, followed by folder names, and ends with the resource name.
For a project, the Ref object simply contains one name component, that is, the project name.
A Ref object for a folder contains the project name followed by the names of the folders which it is nested under.

[https://docs.oracle.com/cd/E17904_01/apirefs.1111/e15033/com/bea/wli/config/Ref.html]
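
As an illustration, the code fragment below (a minimal sketch, not part of the original tool, using the same Ref constructor as the constructRef helper method in the full code at the end of this article) builds the Ref for the UpdateZaak_Lk01_PS proxy service mentioned earlier:

// Minimal sketch: a Ref for a resource is built from its type id and an array of names,
// starting with the project name, followed by the folder name(s), ending with the resource name.
Ref updateZaakRef =
    new Ref("ProxyService",
            new String[] { "ZKN ZaakService-2.0", "proxy", "UpdateZaak_Lk01_PS" });

System.out.println(updateZaakRef.getTypeId());   // ProxyService
System.out.println(updateZaakRef.getFullName()); // ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS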

Below is an example of a Ref object that represents a folder (via JDeveloper Debug):

Below is an example of a Ref object that represents a resource (via JDeveloper Debug):

ResourceConfigurationMBean

In order to be able to determine the actual endpoints of the proxy services and business services, the ResourceConfigurationMBean is used. When connected, the Service Bus MBeans are located under com.oracle.osb. [https://technology.amis.nl/2014/10/20/oracle-service-bus-obtaining-list-exposed-soap-http-endpoints/]

When we look at the Java code, as a next step the names of a set of MBeans, specified by pattern matching, are put in a list and looped through.

Once the connection to the Domain Runtime MBean Server is made, MBeans can be found via the queryNames method of the MBeanServerConnection.

Set queryNames(ObjectName name,
               QueryExp query)
               throws IOException

Gets the names of MBeans controlled by the MBean server. This method enables any of the following to be obtained: The names of all MBeans, the names of a set of MBeans specified by pattern matching on the ObjectName and/or a Query expression, a specific MBean name (equivalent to testing whether an MBean is registered). When the object name is null or no domain and key properties are specified, all objects are selected (and filtered if a query is specified). It returns the set of ObjectNames for the MBeans selected.
[https://docs.oracle.com/javase/7/docs/api/javax/management/MBeanServerConnection.html]

See the code fragment below:

String domain = "com.oracle.osb";
String objectNamePattern =
    domain + ":" + "Type=ResourceConfigurationMBean,*";

Set osbResourceConfigurations =
    connection.queryNames(new ObjectName(objectNamePattern), null);

fileWriter.write("ResourceConfiguration list of proxy and business services:\n");
for (ObjectName osbResourceConfiguration :
     osbResourceConfigurations) {
…
    String canonicalName =
        osbResourceConfiguration.getCanonicalName();
    fileWriter.write("- Resource: " + canonicalName + "\n");
…
}

The pattern used is: com.oracle.osb:Type=ResourceConfigurationMBean,*

Example content fragment of the text file:

ResourceConfiguration list of proxy and business services:
…
- Resource: com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
…

Below is an example of an ObjectName object (via JDeveloper Debug), found via the queryNames method:

Via the Oracle Enterprise Manager Fusion Middleware Control for a certain domain, the System MBean Browser can be opened. Here the previously mentioned ResourceConfigurationMBeans can be found.


[Via MBean Browser]

The information on the right is as follows (if we navigate to a particular ResourceConfigurationMBean, for example …$UpdateZaak_Lk01_PS) :


[Via MBean Browser]

Here we can see that the attributes Configuration and Metadata are available:

  • Configuration

[Via MBean Browser]

The Configuration is made available in java by the following code fragment:

CompositeDataSupport configuration =
    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                  "Configuration");

  • Metadata

[Via MBean Browser]

The Metadata is made available in java by the following code fragment:

CompositeDataSupport metadata =
    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                  "Metadata");

Diving into attribute Configuration of the ResourceConfigurationMBean

For each found proxy and business service the configuration information (canonicalName, service-type, transport-type, url) is written to file.

See the code fragment below:

String canonicalName =
    osbResourceConfiguration.getCanonicalName();
…
String servicetype =
    (String)configuration.get("service-type");
CompositeDataSupport transportconfiguration =
    (CompositeDataSupport)configuration.get("transport-configuration");
String transporttype =
    (String)transportconfiguration.get("transport-type");
…
fileWriter.write("  Configuration of " + canonicalName +
                 ":" + " service-type=" + servicetype +
                 ", transport-type=" + transporttype +
                 ", url=" + url + "\n");

Proxy service configuration:

Below is an example of a proxy service configuration (content fragment of the text file):

  Configuration of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean: service-type=Abstract SOAP, transport-type=local, url=local

The proxy services, which define the exposed endpoints, can be recognized by the ProxyService$ prefix.


[Via MBean Browser]

For getting the endpoint, see the code fragment below:

String url = (String)transportconfiguration.get("url");

Business service configuration:

Below is an example of a business service configuration (content fragment of the text file):

  Configuration of com.oracle.osb:Location=AdminServer,Name=BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS,Type=ResourceConfigurationMBean: service-type=SOAP, transport-type=http, url=http://xyz/eBus/FolderService.svc

The business services, which define the exposed endpoints, can be recognized by the BusinessService$ prefix.


[Via MBean Browser]

For getting the endpoint, see the code fragment below:

CompositeData[] urlconfiguration =
    (CompositeData[])transportconfiguration.get("url-configuration");
String url = (String)urlconfiguration[0].get("url");

So, via the url key found in the business service configuration, the endpoint of a business service can be determined (for example: http://xyz/eBus/FolderService.svc). In that way the dependencies of proxy and/or business services on certain external web services (having a certain endpoint) could be found.
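
Below is a minimal sketch (not part of the original tool) of how such a check on the endpoint could be added to the loop over the ResourceConfigurationMBeans; the host name in externalHost is a hypothetical example:

String externalHost = "xyz"; // hypothetical host of the external web services to be deprecated

if (canonicalName.contains("BusinessService")) {
    CompositeData[] urlconfiguration =
        (CompositeData[])transportconfiguration.get("url-configuration");
    String url = (String)urlconfiguration[0].get("url");

    // Report only the business services that point to the external host.
    if (url != null && url.contains(externalHost)) {
        fileWriter.write("- External dependency: " + canonicalName + " -> " + url + "\n");
    }
}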

Proxy service pipeline, element hierarchy

For a proxy service the elements (nodes) of the pipeline are investigated.

See the code fragment below:

CompositeDataSupport pipeline =
    (CompositeDataSupport)configuration.get("pipeline");
TabularDataSupport nodes =
    (TabularDataSupport)pipeline.get("nodes");


[Via MBean Browser]

Below is an example of a nodes object (via JDeveloper Debug):

If we take a look at the dataMap object, we can see nodes of different types.

Below is an example of a node of type Stage (via JDeveloper Debug):

Below is an example of a node of type Action and label ifThenElse (via JDeveloper Debug):

Below is an example of a node of type Action and label wsCallout (via JDeveloper Debug):

For the examples above the Message Flow part of the UpdateZaak_Lk01_PS proxy service looks like:

The mapping between the node-id and the corresponding element in the Message Flow can be achieved by looking in the .proxy file (in this case: UpdateZaak_Lk01_PS.proxy) for the _ActionId- identification, mentioned as the value for the name key.

<con:stage name="EditFolderZaakStage">
        <con:context>
          …
        </con:context>
        <con:actions>
          <con3:ifThenElse>
            <con2:id>_ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7c84</con2:id>
            <con3:case>
              <con3:condition>
                …
              </con3:condition>
              <con3:actions>
                <con3:wsCallout>
                  <con2:id>_ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f</con2:id>
                  …

The first node in the dataMap object (via JDeveloper Debug) looks like:

The dataMap object is of type HashMap. A HashMap maintains key and value pairs and is often denoted as HashMap<Key, Value> or HashMap<K, V>. HashMap implements the Map interface.

As can be seen, the key is of type Object and the value of type CompositeData.

In order to know what kind of information is delivered via the CompositeData object, the rowType object can be used.

See the code fragment below:

TabularType tabularType = nodes.getTabularType();
CompositeType rowType = tabularType.getRowType();

Below is an example of a rowType object (via JDeveloper Debug):

From this it is now clear that the CompositeData object for a ProxyServicePipelineElementType contains:

Index  key       value
0      children  Children of this node
1      label     Label
2      name      Name of the node
3      node-id   Id of this node unique within the graph
4      type      Pipeline element type
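
This can also be checked programmatically, for example by printing the item names and descriptions of the rowType object (a small sketch, not part of the original tool):

// Minimal sketch: print the item names and their descriptions of the row type.
for (Object itemName : rowType.keySet()) {
    String item = (String)itemName;
    System.out.println(item + " = " + rowType.getDescription(item));
}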

In the code fragment below, an iterator is used to loop through the dataMap object.

Iterator keyIter = nodes.keySet().iterator();

for (int j = 0; keyIter.hasNext(); ++j) {

    Object[] key = ((Collection)keyIter.next()).toArray();

    CompositeData compositeData = nodes.get(key);

    …
}

The key object for the first node in the dataMap object (via JDeveloper Debug) looks like:

The value of this key object is 25, which also is shown as the value for the node-id of the compositeData object, which for the first node in the dataMap object (via JDeveloper Debug) looks like:

It’s obvious that the nodes in the pipeline form a hierarchy. A node can have children, which in turn can also have children, etc. Think for example of a “Stage” having an “If Then” action which in turn contains several “Assign” actions. A proxy service Message Flow can of course contain all kinds of elements (see the Design Palette).
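
The recursion over these children can be seen in the printCompositeData method in the full code at the end of this article; for nodes below the top level (and for top-level OperationalBranchNode nodes) the method calls itself for each child:

if ((level == 1 && type.equals("OperationalBranchNode")) || level > 1) {
    level++;

    int size = childeren.length;

    for (int i = 0; i < size; i++) {
        key[0] = childeren[i];
        printCompositeData(nodes, key, level);
    }
}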

Below is (for another proxy service) an example content fragment of the text file that reflects the hierarchy:

     Index#76:
       level    = 1
       label    = branch-node
       name     = CheckOperationOperationalBranch
       node-id  = 62
       type     = OperationalBranchNode
       children = [42,46,50,61]
         level    = 2
         node-id  = 42
         children = [41]
           level    = 3
           label    = route-node
           name     = creeerZaak_Lk01RouteNode
           node-id  = 41
           type     = RouteNode
           children = [40]
             level    = 4
             node-id  = 40
             children = [39]
               level    = 5
               label    = route
               name     = _ActionId-4977625172784205635-3567e5a2.15364c39a7e.-7b99
               node-id  = 39
               type     = Action
               children = []
         level    = 2
         node-id  = 46
         children = [45]
           level    = 3
           label    = route-node
           name     = updateZaak_Lk01RouteNode
           node-id  = 45
           type     = RouteNode
           children = [44]
             level    = 4
             node-id  = 44
             children = [43]
               level    = 5
               label    = route
               name     = _ActionId-4977625172784205635-3567e5a2.15364c39a7e.-7b77
               node-id  = 43
               type     = Action
               children = []
         …

Because only certain kinds of nodes (Route, Java Callout, Service Callout, etc.) are of interest, some kind of filtering is needed. For this the label and type keys are used.

See the code fragment below:

String label = (String)compositeData.get("label");
String type = (String)compositeData.get("type");

if (type.equals("Action") &&
    (label.contains("wsCallout") ||
     label.contains("javaCallout") ||
     label.contains("route"))) {

    fileWriter.write("    Index#" + j + ":\n");
    printCompositeData(nodes, key, 1);
} else if (type.equals("OperationalBranchNode") ||
           type.equals("RouteNode"))
{
    fileWriter.write("    Index#" + j + ":\n");
    printCompositeData(nodes, key, 1);
}

Example content fragment of the text file:

    Index#72:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f
       node-id  = 71
       type     = Action
       children = [66,70]
    Index#98:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7997
       node-id  = 54
       type     = Action
       children = [48,53]
    Index#106:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7cf4
       node-id  = 35
       type     = Action
       children = [30,34]

When we take a closer look at the node of type Action and label wsCallout with index 106, this can also be found in the MBean Browser:


[Via MBean Browser]

The children node-ids are 30 (a node of type Sequence and name requestTransform, also having children) and 34 (a node of type Sequence and name responseTransform, also having children).

Diving into attribute Metadata of the ResourceConfigurationMBean

For each found proxy service the metadata information (dependencies and dependents) is written to file.

See the code fragment below:

fileWriter.write("  Metadata of " + canonicalName + "\n");

String[] dependencies =
    (String[])metadata.get("dependencies");
fileWriter.write("    dependencies:\n");
int size;
size = dependencies.length;
for (int i = 0; i < size; i++) {
    String dependency = dependencies[i];
    if (!dependency.contains("Xquery")) {
        fileWriter.write("      - " + dependency + "\n");
    }
}
fileWriter.write("" + "\n");

String[] dependents = (String[])metadata.get("dependents");
fileWriter.write("    dependents:\n");
size = dependents.length;
for (int i = 0; i < size; i++) {
    String dependent = dependents[i];
    fileWriter.write("      - " + dependent + "\n");
}
fileWriter.write("" + "\n");

Example content fragment of the text file:

  Metadata of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS
      - XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_SearchService_BS
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_LookupService_BS

    dependents:
      - ProxyService$JMSConsumerStuFZKNMessageService-1.0$proxy$JMSConsumerStuFZKNMessageService_PS
      - ProxyService$ZKN ZaakService-2.0$proxy$ZaakService_PS

As can be seen in the MBean Browser, the metadata for a particular proxy service shows the dependencies on other resources (like business services and XML Schemas) and other services that are dependent on the proxy service.


[Via MBean Browser]

By looking at the results in the text file "OSBServiceExplorer.txt", the dependencies between services (proxy and business) and also the dependencies on certain external web services (with a particular endpoint) could be extracted.

Example content of the text file:

Found server runtimes:
- Server name: AdminServer. Server state: RUNNING
- Server name: ManagedServer1. Server state: RUNNING
- Server name: ManagedServer2. Server state: RUNNING

Found total of 1132 refs, including the following proxy and business services: 
…
- ProxyService: JMSConsumerStuFZKNMessageService-1.0/proxy/JMSConsumerStuFZKNMessageService_PS
…
- ProxyService: ZKN ZaakService-2.0/proxy/UpdateZaak_Lk01_PS
…
- BusinessService: ZKN ZaakService-2.0/business/eBUS/eBUS_FolderService_BS
…

ResourceConfiguration list of proxy and business services:
…
- Resource: com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
  Configuration of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean: service-type=Abstract SOAP, transport-type=local, url=local

    Index#72:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7b7f
       node-id  = 71
       type     = Action
       children = [66,70]
    Index#98:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7997
       node-id  = 54
       type     = Action
       children = [48,53]
    Index#106:
       level    = 1
       label    = wsCallout
       name     = _ActionId-7997641858449402984--36d1ada1.1562c8caabd.-7cf4
       node-id  = 35
       type     = Action
       children = [30,34]

  Metadata of com.oracle.osb:Location=AdminServer,Name=ProxyService$ZKN ZaakService-2.0$proxy$UpdateZaak_Lk01_PS,Type=ResourceConfigurationMBean
    dependencies:
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_FolderService_BS
      - XMLSchema$CDM$Interface$StUF-ZKN_1_1_02$zkn0310$mutatie$zkn0310_msg_mutatie
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_SearchService_BS
      - BusinessService$ZKN ZaakService-2.0$business$eBUS$eBUS_LookupService_BS

    dependents:
      - ProxyService$JMSConsumerStuFZKNMessageService-1.0$proxy$JMSConsumerStuFZKNMessageService_PS
      - ProxyService$ZKN ZaakService-2.0$proxy$ZaakService_PS
…

The Java code:

package nl.xyz.osbservice.osbserviceexplorer;


import com.bea.wli.config.Ref;
import com.bea.wli.sb.management.configuration.ALSBConfigurationMBean;

import java.io.FileWriter;
import java.io.IOException;

import java.net.MalformedURLException;

import java.util.Collection;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Iterator;
import java.util.Properties;
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.MalformedObjectNameException;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

import javax.naming.Context;

import weblogic.management.jmx.MBeanServerInvocationHandler;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
import weblogic.management.runtime.ServerRuntimeMBean;


public class OSBServiceExplorer {
    private static MBeanServerConnection connection;
    private static JMXConnector connector;
    private static FileWriter fileWriter;

    /**
     * Indent a string
     * @param indent - The number of indentations to add before a string 
     * @return String - The indented string
     */
    private static String getIndentString(int indent) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < indent; i++) {
            sb.append("  ");
        }
        return sb.toString();
    }


    /**
     * Print composite data (write to file)
     * @param nodes - The list of nodes
     * @param key - The list of keys
     * @param level - The level in the hierarchy of nodes
     */
    private void printCompositeData(TabularDataSupport nodes, Object[] key,
                                    int level) {
        try {
            CompositeData compositeData = nodes.get(key);

            fileWriter.write(getIndentString(level) + "     level    = " +
                             level + "\n");

            String label = (String)compositeData.get("label");
            String name = (String)compositeData.get("name");
            String nodeid = (String)compositeData.get("node-id");
            String type = (String)compositeData.get("type");
            String[] childeren = (String[])compositeData.get("children");
            if (level == 1 ||
                (label.contains("route-node") || label.contains("route"))) {
                fileWriter.write(getIndentString(level) + "     label    = " +
                                 label + "\n");

                fileWriter.write(getIndentString(level) + "     name     = " +
                                 name + "\n");

                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     type     = " +
                                 type + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(childeren[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            } else if (level >= 2) {
                fileWriter.write(getIndentString(level) + "     node-id  = " +
                                 nodeid + "\n");

                fileWriter.write(getIndentString(level) + "     children = [");

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    fileWriter.write(childeren[i]);
                    if (i < size - 1) {
                        fileWriter.write(",");
                    }
                }
                fileWriter.write("]\n");
            }
            if ((level == 1 && type.equals("OperationalBranchNode")) ||
                level > 1) {
                level++;

                int size = childeren.length;

                for (int i = 0; i < size; i++) {
                    key[0] = childeren[i];
                    printCompositeData(nodes, key, level);
                }
            }

        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    public OSBServiceExplorer(HashMap props) {
        super();


        try {

            Properties properties = new Properties();
            properties.putAll(props);

            initConnection(properties.getProperty("HOSTNAME"),
                           properties.getProperty("PORT"),
                           properties.getProperty("USERNAME"),
                           properties.getProperty("PASSWORD"));


            DomainRuntimeServiceMBean domainRuntimeServiceMBean =
                (DomainRuntimeServiceMBean)findDomainRuntimeServiceMBean(connection);

            ServerRuntimeMBean[] serverRuntimes =
                domainRuntimeServiceMBean.getServerRuntimes();

            fileWriter = new FileWriter("OSBServiceExplorer.txt", false);


            fileWriter.write("Found server runtimes:\n");
            int length = (int)serverRuntimes.length;
            for (int i = 0; i < length; i++) {
                ServerRuntimeMBean serverRuntimeMBean = serverRuntimes[i];

                String name = serverRuntimeMBean.getName();
                String state = serverRuntimeMBean.getState();
                fileWriter.write("- Server name: " + name +
                                 ". Server state: " + state + "\n");
            }
            fileWriter.write("" + "\n");

            // Create an mbean instance to perform configuration operations in the created session.
            //
            // There is a separate instance of ALSBConfigurationMBean for each session.
            // There is also one more ALSBConfigurationMBean instance which works on the core data, i.e., the data which ALSB runtime uses.
            // An ALSBConfigurationMBean instance is created whenever a new session is created via the SessionManagementMBean.createSession(String) API.
            // This mbean instance is then used to perform configuration operations in that session.
            // The mbean instance is destroyed when the corresponding session is activated or discarded.
            ALSBConfigurationMBean alsbConfigurationMBean =
                (ALSBConfigurationMBean)domainRuntimeServiceMBean.findService(ALSBConfigurationMBean.NAME,
                                                                              ALSBConfigurationMBean.TYPE,
                                                                              null);

            Set<Ref> refs = alsbConfigurationMBean.getRefs(Ref.DOMAIN);


            fileWriter.write("Found total of " + refs.size() +
                             " refs, including the following proxy and business services:\n");

            for (Ref ref : refs) {
                String typeId = ref.getTypeId();

                if (typeId.equalsIgnoreCase("ProxyService")) {

                    fileWriter.write("- ProxyService: " + ref.getFullName() +
                                     "\n");
                } else if (typeId.equalsIgnoreCase("BusinessService")) {
                    fileWriter.write("- BusinessService: " +
                                     ref.getFullName() + "\n");
                } else {
                    //fileWriter.write(ref.getFullName());
                }
            }

            fileWriter.write("" + "\n");

            String domain = "com.oracle.osb";
            String objectNamePattern =
                domain + ":" + "Type=ResourceConfigurationMBean,*";

            Set<ObjectName> osbResourceConfigurations =
                connection.queryNames(new ObjectName(objectNamePattern), null);

            fileWriter.write("ResourceConfiguration list of proxy and business services:\n");
            for (ObjectName osbResourceConfiguration :
                 osbResourceConfigurations) {

                CompositeDataSupport configuration =
                    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                  "Configuration");

                CompositeDataSupport metadata =
                    (CompositeDataSupport)connection.getAttribute(osbResourceConfiguration,
                                                                  "Metadata");

                String canonicalName =
                    osbResourceConfiguration.getCanonicalName();
                fileWriter.write("- Resource: " + canonicalName + "\n");
                if (canonicalName.contains("ProxyService")) {
                    String servicetype =
                        (String)configuration.get("service-type");
                    CompositeDataSupport transportconfiguration =
                        (CompositeDataSupport)configuration.get("transport-configuration");
                    String transporttype =
                        (String)transportconfiguration.get("transport-type");
                    String url = (String)transportconfiguration.get("url");
                    
                    fileWriter.write("  Configuration of " + canonicalName +
                                     ":" + " service-type=" + servicetype +
                                     ", transport-type=" + transporttype +
                                     ", url=" + url + "\n");
                } else if (canonicalName.contains("BusinessService")) {
                    String servicetype =
                        (String)configuration.get("service-type");
                    CompositeDataSupport transportconfiguration =
                        (CompositeDataSupport)configuration.get("transport-configuration");
                    String transporttype =
                        (String)transportconfiguration.get("transport-type");
                    CompositeData[] urlconfiguration =
                        (CompositeData[])transportconfiguration.get("url-configuration");
                    String url = (String)urlconfiguration[0].get("url");

                    fileWriter.write("  Configuration of " + canonicalName +
                                     ":" + " service-type=" + servicetype +
                                     ", transport-type=" + transporttype +
                                     ", url=" + url + "\n");
                }

                if (canonicalName.contains("ProxyService")) {

                    fileWriter.write("" + "\n");

                    CompositeDataSupport pipeline =
                        (CompositeDataSupport)configuration.get("pipeline");
                    TabularDataSupport nodes =
                        (TabularDataSupport)pipeline.get("nodes");

                    TabularType tabularType = nodes.getTabularType();
                    CompositeType rowType = tabularType.getRowType();

                    Iterator keyIter = nodes.keySet().iterator();

                    for (int j = 0; keyIter.hasNext(); ++j) {

                        Object[] key = ((Collection)keyIter.next()).toArray();

                        CompositeData compositeData = nodes.get(key);

                        String label = (String)compositeData.get("label");
                        String type = (String)compositeData.get("type");
                        if (type.equals("Action") &&
                            (label.contains("wsCallout") ||
                             label.contains("javaCallout") ||
                             label.contains("route"))) {

                            fileWriter.write("    Index#" + j + ":\n");
                            printCompositeData(nodes, key, 1);
                        } else if (type.equals("OperationalBranchNode") ||
                                   type.equals("RouteNode")) {

                            fileWriter.write("    Index#" + j + ":\n");
                            printCompositeData(nodes, key, 1);
                        }
                    }

                    fileWriter.write("" + "\n");
                    fileWriter.write("  Metadata of " + canonicalName + "\n");

                    String[] dependencies =
                        (String[])metadata.get("dependencies");
                    fileWriter.write("    dependencies:\n");
                    int size;
                    size = dependencies.length;
                    for (int i = 0; i < size; i++) {
                        String dependency = dependencies[i];
                        if (!dependency.contains("Xquery")) {
                            fileWriter.write("      - " + dependency + "\n");
                        }
                    }
                    fileWriter.write("" + "\n");

                    String[] dependents = (String[])metadata.get("dependents");
                    fileWriter.write("    dependents:\n");
                    size = dependents.length;
                    for (int i = 0; i < size; i++) {
                        String dependent = dependents[i];
                        fileWriter.write("      - " + dependent + "\n");
                    }
                    fileWriter.write("" + "\n");

                }

            }
            fileWriter.close();

            System.out.println("Successfully completed");

        } catch (Exception ex) {
            ex.printStackTrace();
        } finally {
            if (connector != null)
                try {
                    connector.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
        }
    }


    /**
     * Initialize connection to the Domain Runtime MBean Server.
     */

    public static void initConnection(String hostname, String portString,
                                      String username,
                                      String password) throws IOException,
                                                              MalformedURLException {

        String protocol = "t3";
        Integer portInteger = Integer.valueOf(portString);
        int port = portInteger.intValue();
        String jndiroot = "/jndi/";
        String mbeanserver = DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME;

        JMXServiceURL serviceURL =
            new JMXServiceURL(protocol, hostname, port, jndiroot +
                              mbeanserver);

        Hashtable hashtable = new Hashtable();
        hashtable.put(Context.SECURITY_PRINCIPAL, username);
        hashtable.put(Context.SECURITY_CREDENTIALS, password);
        hashtable.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                      "weblogic.management.remote");
        hashtable.put("jmx.remote.x.request.waiting.timeout", new Long(10000));

        connector = JMXConnectorFactory.connect(serviceURL, hashtable);
        connection = connector.getMBeanServerConnection();
    }


    private static Ref constructRef(String refType, String serviceURI) {
        Ref ref = null;
        String[] uriData = serviceURI.split("/");
        ref = new Ref(refType, uriData);
        return ref;
    }


    /**
     * Finds the specified MBean object
     *
     * @param connection - A connection to the MBeanServer.
     * @return Object - The MBean or null if the MBean was not found.
     */
    public Object findDomainRuntimeServiceMBean(MBeanServerConnection connection) {
        try {
            ObjectName objectName =
                new ObjectName(DomainRuntimeServiceMBean.OBJECT_NAME);
            return (DomainRuntimeServiceMBean)MBeanServerInvocationHandler.newProxyInstance(connection,
                                                                                            objectName);
        } catch (MalformedObjectNameException e) {
            e.printStackTrace();
            return null;
        }
    }


    public static void main(String[] args) {
        try {
            if (args.length <= 0) {
                System.out.println("Provide values for the following parameters: HOSTNAME, PORT, USERNAME, PASSWORD.");

            } else {
                HashMap<String, String> map = new HashMap<String, String>();

                map.put("HOSTNAME", args[0]);
                map.put("PORT", args[1]);
                map.put("USERNAME", args[2]);
                map.put("PASSWORD", args[3]);
                OSBServiceExplorer osbServiceExplorer =
                    new OSBServiceExplorer(map);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

    }
}

The post Oracle Service Bus : Service Exploring via WebLogic Server MBeans with JMX appeared first on AMIS Oracle and Java Blog.

New OA Framework 12.2.5 Update 10 Now Available

Steven Chan - Thu, 2017-03-09 02:05

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure. Since the initial release of Oracle E-Business Suite Release 12.2 in 2013, we have released a number of cumulative updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.5 is now available:


Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.5 users should apply this patch.  Future OAF patches for EBS Release 12.2.5 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch is cumulative: it includes all fixes released in previous EBS Release 12.2.5 bundle patches.

This latest bundle patch includes fixes for the following bugs/issues:

  • 22590683 - FORWARD-PORT 21465867 TO 12.2.5
  • 24391264 - 12.2.X : ICONS NOT ALIGNED WITH FUNC NAME IN OLD HOMEPAGE IN CHROME/FIREFOX
  • 24398948 - CUSTOMER NAME MISALIGNED
  • 25056504 - NEED  IMPORTPERZDOCCO.JAVA VERSION .3 BACKPORTED TO BUNDLE 8

    Related Articles

    Categories: APPS Blogs

    Array Size for pl/sql delete and db link

    Tom Kyte - Wed, 2017-03-08 13:46
    Dear Team, If i execute a delete statement within a procedure what will be array size it will work with? create or replace procedure del_test as begin delete from big_table; commit; end; If i execute this procedure from a job is ...
    Categories: DBA Blogs

    Question on recreating a primary key on a partitioned table

    Tom Kyte - Wed, 2017-03-08 13:46
    What happens to the unique index that gets created when we recreate a primary key which is composite on a partitioned table, please note that we are using the "enable novalidate" clause to recreate so that existing rows are not checked for the validi...
    Categories: DBA Blogs

    Oracle BPM 12c: Hide Implementation Details with the Refine Feature

    Jan Kettenis - Wed, 2017-03-08 12:55
    Ever had a case with the Oracle BPM Suite where you wanted to create a BPMN model while hiding the details from the reader? Then the "refine" feature may be what you are looking for. Read on if you want to know more about this feature that has been added since 12c. I actually blogged about it before, but this time I want to also illustrate the impact it has on the flow trace.

    The “refine” feature is a way to detail an activity. Basically it is a specialization of the embedded subprocess (already present in 11g), the difference being that - unlike a normal embedded subprocess - the refined activity keeps the icon of the main activity.

    To show this difference take the next example where I hide the details of a Script activity being executed before a User activity is scheduled. When I collapse that embedded subprocess it gets a blue color, hiding this technical detail but also that the main activity (still) is the User activity.



    This can somewhat be mitigated by changing the icon of the activity, but the options are pretty limited. Furthermore, this deviates from the standard BPMN notation, which some readers might find somewhat disruptive.


    Now let's have a look at the refine feature. The use case here is a bit different, in that I want to hide from the reader that a User activity is in reality handled by some other application with an asynchronous interface: the payload (of what otherwise would be a normal Human Task) is sent via a Send activity, after which I receive the updated payload and outcome via a Receive activity. In case you wonder why on earth I would want to do this: the example is inspired by a real customer case where the BPM process orchestrates system and human interactions, of which the latter are actually backed by activities in Siebel.

    You refine an activity by choosing "Refine" from the right-mouse-click context menu of the activity itself.


    The initial result is some sort of an embedded subprocess to which a User activity has automatically been added, however without a Start and End event.


    I can now detail this activity by adding a Send and Receive activity to it. Because I don't want to implement the User activity, I put it in draft mode. Before you criticize how ugly this is, consider this: you still may want to express that the Send and Receive actually are a placeholder for something that is not implemented as a Human Task, but still concerns some implementation of what logically is a User activity.


    I can compile and deploy this BPM application without any issue, but ... As it turns out it does not work.


    Because of what I consider a bug, the refined activity actually does need a Start and End event, just like a regular Embedded Subprocess. The compiler just forgets to tell you.




    Not surprisingly, as you can see, the flow trace is no different from that of a regular Embedded Subprocess. And what you can do with it is also the same, as you can tell from the next iteration, in which I have implemented some fallback scenario to schedule a User activity whenever the handling by the other application is not done within some time limit.


    And despite all these details, I can still present the activity to the reader as a simple User activity, only difference being the + symbol :-)

    'Upping' the Convenience Factor - Edit Files Directly from the Web

    WebCenter Team - Wed, 2017-03-08 09:43

    By: Marc-Andre Houle, Principal Manager, Product Management, Oracle Cloud Services

    As a product manager, I’m regularly collaborating with people both internal and external to my company. And as part of that collaboration, I’m regularly being asked to review, edit, and update files that were shared with me. What’s great about the Oracle Content and Experience Cloud is that it lets me collaborate easily on anything shared with me, and I can do that from any place, any device, and at any time.

    But what if I’m not syncing a file or folder to my local machine? Or what if someone sent me a link to a file in a folder I don’t otherwise have access to, but wants me to make some edits? Oracle Content and Experience Cloud makes that easy too. Let me give a quick example of what that might look like.

    Imagine Amy is working on a report and wants me to add some information. What will normally happen is Amy will add a few comments or annotations to the file in the related conversation, then flag me on the relevant comments. That will send me an email notification or a pop-up alert prompting me to review the file and make edits.

    Like everyone else, the first thing I do after reading the email or seeing the notification is click on the link. That, of course, launches the browser and brings me to the file in the cloud. From there, I can clear Amy’s flag, read her comments, reply to each, and add my own annotations.

    At this point, I need to make edits to the file. If I had that file synced locally, I could certainly navigate to the file in Windows Explorer or in the Mac Finder. But it’s far easier for me to click the “Edit” button and let Oracle Content and Experience Cloud do the heavy lifting for me.

    The first time I use the “Edit” feature, I am warned that the Desktop Sync Client needs to be installed. However, if it’s already installed, it is not necessary for it to be running at the time; the client will launch automatically.

    The file I want to edit will then be downloaded to my local machine and a download progress indicator appears. Behind the scenes, a temporary folder is created locally on my machine and the file is downloaded to that folder.

    The Desktop Sync Client will then automatically launch the default application for that file type to edit the file. For example, a .docx file will open in Microsoft Word, a .pptx file will open in Microsoft PowerPoint, etc.

    Once I'm finished making my edits, I can save or close the file, and the file automatically gets synced back to the cloud. Versioning is also handled automatically, so the edits will appear as a new version of the file.

    When I go back to the browser, I will see the new version with all my edits. It’s then easy for me to post a comment in the related conversation and then flag Amy so she can get a notification that my changes were made.

    Don’t have Oracle Content and Experience Cloud Service yet? Then, I highly recommend getting started with a free trial version available at https://cloud.oracle.com/content to see how you can now drive content and social collaboration anytime, anywhere and on any device.

    What is Oracle Content and Experience Cloud, you ask? Well, you will be hearing a lot more about it over the next few weeks so stay tuned but in the meantime, if you are no stranger to Oracle Documents Cloud Service then you are already in the know...

    Docker swarm at Oracle Cloud

    Marcelo Ochoa - Wed, 2017-03-08 09:23
    As I mentioned in a previous post, the Oracle Container Service seems to fall short of my cloud expectations.
    But the Compute Cloud service is enough to implement my own Docker Swarm cluster.
    Why Docker Swarm?

    • Is standard and supported by other cloud providers
    • Is operated through command line using standard docker and docker-machine operations

    When I tested Oracle Cloud Services I found three major problems:

    • You can't modify base OS parameters
    • Access to persistent storage didn't work for me
    • The Docker version supported is a bit outdated

    All these things can be avoided if you deploy your own Swarm cluster using Oracle Compute Cloud Services. Here are detailed instructions on how to do that. Be advised: it is a long post :)
    Configuring the network
    Before creating a compute node (swarm node) it is necessary to create a network definition for access using ssh and docker commands. Here are the step-by-step instructions using the compute console:

    Create a security list by clicking:

    1. Network menu up
    2. Security list left menu
    3. Create Security list right up side
    4. Choose a name (DockerNode in this example)
    5. Choose Inbound Policy as Drop Packet/Outbound Policy Allow
    6. Set a meaningful text description 

    Next, define a Security Application for Docker remote operations

    1. Click on left menu Security Applications
    2. Create Security Application
    3. Name it as docker
    4. Choose port type as TCP
    5. Define a port starting range 2376 (docker node daemon)
    6. Define a port end range 2377 (docker swarm cluster management communications)
    7. Set a meaningful text description

    Next, create a Security Rule allowing the SSH and Docker ports:
    1. Click on Security Rules left menu
    2. Click on Create Security Rule right up button
    3. Name it (DockerAdminPort/SSHAdminPort in our case)
    4. Choose a Security Application (ssh built in defined, docker defined previously)
    5. Source will be Public Internet
    6. Destination will be our previously defined Security List named DockerNode
    7. Set a meaningful text description

    Next, define a private network for Swarm interconnect operations:

    1. Click on left side menu IP Network -> IP Networks
    2. Click on right up Create IP Network button
    3. Name it in that case LocalInterconnect
    4. Define a private IP range; we chose 10.0.0.0/24
    5. IP Exchange Not Set
    6. Set a meaningful text description

    Finally, create Public IP Reservations; these will simplify remote operations with the docker-machine tool from outside the Oracle Cloud. Repeat this step five times:

    1. On Network top menu
    2. Click left side menu IP Reservations
    3. Click on right up button Create IP Reservations
    4. Name it; in our test case SwarmNode1 to SwarmNode5
    5. For now, leave For Instance set to None

    Launching five Swarm nodes
    There are no ready-made Docker Swarm instances at the Oracle Cloud Marketplace, but we can easily build one using the Ubuntu 16.10 base image. Here are the steps; repeat them five times.

    1. At the Compute Console choose Instance upper main menu
    2. At the beginning there are no instances running; click the Create Instance button at the right


    1. The wizard starts showing pre-built Oracle images. Choose Private Images if you already have the Ubuntu 16.10 image, or Marketplace to find Ubuntu.16.10
    2. Click on Selected button
    3. Next, go to the Shape definition


    1. Because we want a large cluster of cheap hardware (horizontal scaling), the Shape is
    2. General Purpose oc3 (1 OCPU / 7.5 GB memory)
    3. Go to the Instance options


    1. Clicking on Instance tab
    2. Define a name for your instance
    3. Define some meaningful tags
    4. Choose a private SSH key (ubnt) or upload a new one using Add SSH Public Key button
    5. Go to next step Network definition


    1. On network preferences
    2. Select the Persistent Public IP Reservation option
    3. Choose one of the five public IP reservations defined previously
    4. Choose our LocalInterconnect network previously defined (10.0.0.0/24)
    5. Select our DockerNode Security List (ssh and docker enabled ports from outside)
    6. Next go to Storage tab


    1. On the Storage tab the Ubuntu instance has a pre-defined 10 GB boot disk by default
    2. Click on the Add New Volume button
    3. Name it Repo1 (on the other instances choose Repo2, Repo3 and so on)
    4. Select 30 GB of storage; this storage is for the local Docker image repository, and some images can be large, for example the 12cR2 image is 14 GB in size
    5. Set a meaningful text description

    Finally, review and click Create. After a few minutes the instance will be ready to use and you can access it using SSH:
    $ ssh -i /home/mochoa/Documents/Scotas/ubnt ubuntu@oc1
    ubnt is my private ssh key associated with the public one uploaded to the cloud instance, ubuntu is the default user on this image with sudo rights, and oc1 is an alias in my /etc/hosts for the public IP reservation defined for this instance.
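    For reference, the alias on my notebook would look something like this, using the public IP reservation that (as shown later) belongs to oc1:

    # /etc/hosts on the machine used for remote administration
    129.144.12.62   oc1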
    The Ubuntu 16.10 instance does not have Docker installed by default, but with this simple script you can do the post-installation steps; here is sample output:
    ubuntu@oc1:~$ wget https://raw.githubusercontent.com/marcelo-ochoa/docker/master/es/oracle-cloud-node-conf.sh
    --2017-03-08 14:53:11--  https://raw.githubusercontent.com/marcelo-ochoa/docker/master/es/oracle-cloud-node-conf.sh
    Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.48.133
    Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.48.133|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 757 [text/plain]
    Saving to: 'oracle-cloud-node-conf.sh'
    oracle-cloud-node-conf.sh          100%[===============================================================>]     757  --.-KB/s    in 0s    
    2017-03-08 14:53:12 (201 MB/s) - 'oracle-cloud-node-conf.sh' saved [757/757]
    ubuntu@oc1:~$ chmod +x oracle-cloud-node-conf.sh
    ubuntu@oc1:~$ ./oracle-cloud-node-conf.sh 
    The script does the following (a rough sketch of it is shown after this list):
    • changes the OS parameter vm.max_map_count to 262144, required by ElasticSearch
    • defines the eth1 network interface with DHCP options; Oracle Cloud will provide an IP on the LocalInterconnect subnet
    • mounts the /var/lib/docker directory on our 30 GB extra disk /dev/xvdc
    • updates the Ubuntu OS with the latest patches
    • installs the latest Docker version, 1.13.1
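    The real script is the one downloaded above; a rough sketch of the same steps might look like this, assuming Ubuntu's classic ifupdown networking and an unformatted /dev/xvdc (check the repository for the actual script):

    #!/bin/bash
    # Sketch of oracle-cloud-node-conf.sh (not the original, just the same steps)
    set -e

    # 1. Kernel parameter required by ElasticSearch
    echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p

    # 2. Bring up the second NIC (LocalInterconnect) via DHCP
    printf 'auto eth1\niface eth1 inet dhcp\n' | sudo tee /etc/network/interfaces.d/eth1.cfg
    sudo ifup eth1

    # 3. Put the Docker data directory on the extra 30 GB volume
    sudo mkfs.ext4 /dev/xvdc
    sudo mkdir -p /var/lib/docker
    echo "/dev/xvdc /var/lib/docker ext4 defaults 0 2" | sudo tee -a /etc/fstab
    sudo mount /var/lib/docker

    # 4. Update the OS and install the latest Docker release
    sudo apt-get update && sudo apt-get -y dist-upgrade
    curl -fsSL https://get.docker.com | sudo sh
    sudo usermod -aG docker ubuntu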
    You can check the results as shown below; remember to reboot to apply kernel updates:
    ubuntu@oc1:~$ df -h
    Filesystem      Size  Used Avail Use% Mounted on
    udev            3.7G     0  3.7G   0% /dev
    tmpfs           749M   17M  732M   3% /run
    /dev/xvdb1      9.6G  2.6G  7.0G  27% /
    tmpfs           3.7G  128K  3.7G   1% /dev/shm
    tmpfs           5.0M     0  5.0M   0% /run/lock
    tmpfs           3.7G     0  3.7G   0% /sys/fs/cgroup
    /dev/xvdc        30G  408M   28G   2% /var/lib/docker
    /dev/xvdb15     105M  3.6M  101M   4% /boot/efi
    tmpfs           749M     0  749M   0% /run/user/1000
    ubuntu@oc1:~$ docker version
    Client:
     Version:      1.13.1
     API version:  1.26
     Go version:   go1.7.5
     Git commit:   092cba3
     Built:        Wed Feb  8 06:57:21 2017
     OS/Arch:      linux/amd64
    Server:
     Version:      1.13.1
     API version:  1.26 (minimum version 1.12)
     Go version:   go1.7.5
     Git commit:   092cba3
     Built:        Wed Feb  8 06:57:21 2017
     OS/Arch:      linux/amd64
     Experimental: true
    Finally, we can create docker-machine definitions on a remote machine (my notebook) to administer these five nodes. To add a remote managed machine it is necessary to execute locally:
    $ docker-machine create \
      --driver generic \
      --generic-ip-address=129.144.12.62 \
      --generic-ssh-key /home/mochoa/Documents/Scotas/ubnt \
      --generic-ssh-user ubuntu \
      oc1
    After adding all five nodes, our docker-machine repository looks like this:
    $ docker-machine ls
    NAME        ACTIVE   DRIVER    STATE     URL                         SWARM   DOCKER    ERRORS
    oc1         -        generic   Running   tcp://129.144.12.62:2376            v1.13.1  
    oc2         -        generic   Running   tcp://129.144.12.36:2376            v1.13.1  
    oc3         -        generic   Running   tcp://129.144.12.145:2376           v1.13.1  
    oc4         -        generic   Running   tcp://129.144.12.14:2376            v1.13.1  
    oc5         -        generic   Running   tcp://129.144.12.235:2376           v1.13.1   
    Note that remote operations are securely managed using ssh encryption and TLS certificates.
    The last step is to prepare the Swarm cluster.
    Define OC5 as the Swarm master:
    $ docker-machine ssh oc5
    ubuntu@oc5:~$ docker swarm init --advertise-addr 10.0.0.6
    Swarm initialized: current node (j54j1bricmhl3buten3qletxy) is now a manager.
    To add a worker to this swarm, run the following command:
        docker swarm join \
        --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-9j50mbq7nkzrwcuvr6zn50svy \
        10.0.0.6:2377

    To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
    Define OC4 as the second Swarm master:
    ubuntu@oc5:~$ docker swarm join-token manager
    To add a manager to this swarm, run the following command:
        docker swarm join \
        --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-0hk823fjc3mdqz5cvl88etxm8 \
        10.0.0.6:2377

    ubuntu@oc5:~$ exit
    eval $(docker-machine env oc4)
    [oc4]]$ docker swarm join \
    >     --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-0hk823fjc3mdqz5cvl88etxm8 \
    >     10.0.0.6:2377

    This node joined a swarm as a manager.
    Note that after logging out from the OC5 instance we show how to operate remotely (without an ssh login) on our Swarm nodes, only changing the Docker environment variables with the docker-machine env command.
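    For reference, docker-machine env just prints the variables that point the local docker client at the remote daemon; for oc4 the output looks roughly like this (the certificate path depends on your local home directory):

    $ docker-machine env oc4
    export DOCKER_TLS_VERIFY="1"
    export DOCKER_HOST="tcp://129.144.12.14:2376"
    export DOCKER_CERT_PATH="/home/mochoa/.docker/machine/machines/oc4"
    export DOCKER_MACHINE_NAME="oc4"
    # Run this command to configure your shell:
    # eval $(docker-machine env oc4)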
    Finally, add the last three nodes as workers:
    [oc4]]$ eval $(docker-machine env oc3)
    [oc3]]$ docker swarm join \
    >     --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-9j50mbq7nkzrwcuvr6zn50svy \
    >     10.0.0.6:2377
    This node joined a swarm as a worker.
    [oc3]]$ eval $(docker-machine env oc2)
    [oc2]]$ docker swarm join \
    >     --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-9j50mbq7nkzrwcuvr6zn50svy \
    >     10.0.0.6:2377
    This node joined a swarm as a worker.
    [oc2]]$ eval $(docker-machine env oc1)
    [oc1]]$ docker swarm join \
    >     --token SWMTKN-1-1897yn8y1d931tnzfq99vek5fjebznsdp8po03zy15gk7kh9vc-9j50mbq7nkzrwcuvr6zn50svy \
    >     10.0.0.6:2377
    This node joined a swarm as a worker.

    Our Swarm cluster is ready to use with this set of nodes:
    $ eval $(docker-machine env oc5)
    [oc5]]$ docker node ls
    ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
    6na93y52xu20gqg4gc0epi51l    oc1       Ready   Active      
    cmspvwokukxaq3v6gwxxuz4w2    oc3       Ready   Active      
    j54j1bricmhl3buten3qletxy *  oc5       Ready   Active        Leader
    l69skrce4zb69bjhofhkp9ggq    oc2       Ready   Active      
    vu9l0xip479qlwz83hzisbd2f    oc4       Ready   Active        Reachable
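    A quick way to verify that the cluster can actually schedule work is to start and then remove a throw-away service (any small image will do):

    [oc5]]$ docker service create --replicas 5 --name pingtest alpine ping 8.8.8.8
    [oc5]]$ docker service ls
    [oc5]]$ docker service rm pingtest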
    The next post will be about deploying an Elasticsearch cluster using Docker Swarm at Oracle Cloud. Stay tuned.

    12cR1 RAC Posts -- 8a : Setting up SingleInstance DG Standby for RAC

    Hemant K Chitale - Wed, 2017-03-08 09:08
    This is the first of a small series of subposts on setting up SingleInstance DataGuard Standby for my 12cR1 RAC environment.

    Primary Database : 12cR1 MultiTenant RAC : 2 nodes, database on ASM
    Standby Database : 12cR1 MultiTenant SingleInstance : 1 node, database on FileSystem

    The "difficulties" are :
    a.  The Primary Database is MultiTenant with 1 PDB besides the SEED
    b.  The Primary Database is on ASM but the Standby is to be built on FileSystem
    c.  The Primary is a 2-node Cluster while the Standby is a SingleInstance installation

    This post is on the initial setup.

    RAC Nodes
    collabn1.racattack   192.168.78.51
    collabn2.racattack   192.168.78.52

    SingleInstance Node 
    oem132.racattack     192.168.78.100


    Database Changes :
    SQL> alter database force logging;

    Database altered.

    SQL> alter database flashback on;

    Database altered.

    SQL>
    SQL> alter database add standby logfile size 50M;

    Database altered.

    SQL> alter database add standby logfile size 50M;

    Database altered.

    SQL> alter database add standby logfile size 50M;

    Database altered.

    SQL> alter database add standby logfile size 50M;

    Database altered.

    SQL> alter database add standby logfile size 50M;

    Database altered.

    SQL>
    SQL>select  group# , status, type, member from v$logfile order by 1;


    Listener on Standby Server :
    LISTENER =
      (DESCRIPTION_LIST =
        (DESCRIPTION =
          (ADDRESS = (PROTOCOL = TCP)(HOST = oem132.racattack)(PORT = 1521))
          (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
        )
      )

    SID_LIST_LISTENER=
    (SID_LIST =
      (SID_DESC =
        (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
        (SID_NAME = STBY)
       )
      (SID_DESC =
        (GLOBAL_NAME = STBY_DGMGRL)
        (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
        (SID_NAME = STBY)
       )
    )


    tnsnames.ora on first RAC Node :
    STBY =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = oem132)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = STBY)
        )
      )

    STBY_DGB =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = oem132)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = STBY_DGMGRL)
        )
      )
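    Before going further it is worth confirming that both aliases resolve from the first RAC node (the listener on oem132 must already be running); both commands should report OK :
    $ tnsping STBY
    $ tnsping STBY_DGB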


    STBY Instance Parameter File :
    compatible=12.1.0.2.0
    db_name=RAC
    db_unique_name=STBY
    enable_pluggable_database=TRUE
    sga_target=800M
    db_file_name_convert='+DATA/RAC','/u01/app/oracle/oradata/STBY','+FRA/RAC','/u01/app/oracle/fast_recovery_area/STBY'
    log_file_name_convert='+DATA/RAC','/u01/app/oracle/oradata/STBY','+FRA/RAC','/u01/app/oracle/fast_recovery_area/STBY'
    db_create_file_dest='/u01/app/oracle/oradata'
    db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
    db_recovery_file_dest_size=16G
    remote_login_passwordfile=EXCLUSIVE
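    As an illustration of how the two convert parameters will remap names during the duplicate (hypothetical file names, shown only to illustrate the pattern) :
    +DATA/RAC/DATAFILE/system.xxx.yyyyyyyyy   becomes   /u01/app/oracle/oradata/STBY/DATAFILE/system.xxx.yyyyyyyyy
    +FRA/RAC/ONLINELOG/group_1.xxx.yyyyyyyyy  becomes   /u01/app/oracle/fast_recovery_area/STBY/ONLINELOG/group_1.xxx.yyyyyyyyy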


    Copying the Password File :
    ASMCMD> pwcopy +DATA/RAC/PASSWORD/pwdrac.277.931824933 /var/tmp/RACPasswordFile
    copying +DATA/RAC/PASSWORD/pwdrac.277.931824933 -> /var/tmp/RACPasswordFile
    ASMCMD> exit
    scp /var/tmp/RACPasswordFile oracle@oem132:/u01/app/oracle/product/12.1.0/dbhome_1/dbs/orapwSTBY


    to be continued in post 8b on the filesystem directories for the STBY instance and the execution of the DUPLICATE DATABASE command.

    Categories: DBA Blogs

    NATCAP OUG Reboot

    Scott Spendolini - Wed, 2017-03-08 07:03
    NATCAP OUG - or the National Capitol Region Oracle Users Group - has a long history of providing Oracle-related content to those in the DC metro area.  I remember presenting there back in my Oracle days (early 2000's) on multiple occasions.

    Over the last few months, a few of us have been trying to resurrect this group, as there are likely more Oracle developers & DBAs in the DC area than any other area in the US, perhaps even the world!  This region has long been underserved, and we hope to change that.

    We have had a couple of meetings at Oracle's campus in Reston, and attendance has been, well, not where we need it to be.  So, if you're in the DC area, please consider joining this group.  It's free, and our goal is to have quarterly meetings where both experienced presenters and NATCAP members alike provide the content.

    If interested, please sign up on the NATCAP website here: http://natcapoug.org

    Let's grow this group and give the DC metro area the user group it so badly needs!

    Getting ADF Data in a Jet Component (2)

    Amis Blog - Wed, 2017-03-08 06:47

    In my previous blog I explained how to get ADF Data in a Jet Component; this was done by iterating through a ViewObject and rendering a component per element in the View Object. When you want to use the DVTs from Oracle Jet, this won’t do the trick, because you will need the entire data set to be present at once in your component. This blog will show you how to do that without using Rest Services.

    My colleague Lucas Jellema made a JSONProviderBean, which makes data from data bindings available as a nested JSON object in client-side JavaScript [1].

    Using this bean we can use the iterator binding of our View Object in our component page fragment.

    
    
    <div>
     <amis-chart chart-data="#{jsonProviderBean[bindings.EmployeesVO.DCIteratorBinding]}"/>
     </div>
    
    
    

    This will pass the JSON as a string to our component, so in the composite component metadata we declare a chartData property of type string:

        {
            "properties": {
                "chartData": {
                    "type": "string"
                }
            }
             
        }
    

    In our component viewmodel we can now parse this string into a JSON object. The “values” object of this JSON object contains the data we need for our barchart, but it is not in a form the barchart can understand. Therefore you need to write a function to get the data you need and put it into a format that the barchart does understand.

    // Assumption: the opening define() of this viewmodel was cut off in the post; knockout ('ko') is the module the snippet clearly needs.
    define(['knockout'], function (ko) {

        function AmisChartComponentModel(context) {
            var self = this;
        
            context.props.then(function (propertymap) {
                self.properties = propertymap;
                var dataAsJson = JSON.parse(propertymap.chartData);
                var barData = self.createBarSeries(dataAsJson.values);
                /* set chart data */
                self.barSeriesValue = ko.observableArray(barData);
            })
            
            //function to transform the data.
            self.createBarSeries = function (jsonDataArray) {
                var data = [];
                jsonDataArray.forEach(function (item, index, arr) {
                    data.push( {
                        "name" : item.FirstName, "items" : [item.Salary]
                    });
                })
                return data;
            }    
            
        }
        return AmisChartComponentModel;
    
    });
    

    We now have our entire employee data set available for the barchart. In this case I made a chart of Salary per employee. We can do all the fancy interactions with the component that we can normally do as well, for example stacking the data or changing from a horizontal to a vertical graph.
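    For completeness, a hypothetical view fragment for the composite component could bind the observable array to a JET bar chart roughly like this (the actual markup of the demo may differ):

    <!-- amisChart.html (hypothetical): bind the transformed series to an ojChart -->
    <div data-bind="ojComponent: {component: 'ojChart', type: 'bar', series: barSeriesValue}"
         style="width: 100%; height: 350px;">
    </div>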

       

    Sources
    1. https://github.com/lucasjellema/adf-binding-to-json
    2. https://technology.amis.nl/2017/03/07/getting-adf-data-jet-component/
    3. http://andrejusb.blogspot.nl/2015/12/improved-jet-rendering-in-adf.html
    4. https://blogs.oracle.com/groundside/entry/jet_composite_components_i_backgrounder (and the other blogs)
    5. Source of this demo: Github
    Versions used

    JDeveloper 12.1.3,
    OracleJet V2.2.0

    Disclaimer

    The information is based on my personal research. At the moment, Oracle does not support or encourage integrating ADF and Jet. Oracle is working on JET Composite Components in ADF.

    The post Getting ADF Data in a Jet Component (2) appeared first on AMIS Oracle and Java Blog.

    Create Your Own DVD Plugin in 22 minutes

    Rittman Mead Consulting - Wed, 2017-03-08 06:45
    Introduction

    Oracle DVD played well for the task of election data analysis. But I faced some limitations and had to curtail my use of one of my favourite charts - the scatter chart. While DVD's implementation of it looks good, has smooth animations and is really useful, it is not suitable for visualisations of huge datasets. The election dataset has about 100K election precinct rows, and if I try to plot all of them on one chart I get something close to this. It isn't really important what the data is; just compare how the charts look.

    Picture with Turnout%, United Russia% by PEC

    This picture gives us about zero insight. Actually, I'm not even sure it shows all my 100K points. What I want to get is something like this:
    plugin with Turnout%, United Russia% by PEC (source: https://habrahabr.ru/post/313372/)

    What can I see in this chart that I can't see in the vanilla one? First of all, it gives a much better understanding of the distribution of points over the plot. I can see a relatively dense core around coordinates [30;40], then a less dense tail growing along both dimensions, and again a small and very dense core in the top right corner.
    Secondly, it not only shows dense and sparse areas but also shows that there exist a few points with an unexplainably high concentration. Around [64;62], for example.

    Luckily DVD allows me (and anyone else) to create custom plugins. That's what I'm going to show here. The plugin I'm going to build is something I'd call a Minimum Viable Product. It's very, very simple and needs a lot of work to make it good and really useful.

    The code I'm going to show is the simplest I can invent. It doesn't handle errors, exceptions and so on. It should be more modular. It could possibly use some optimisation. But that was done for a purpose, and the purpose is called 'simplicity'. If you want a more comprehensive sample, there is a link to Oracle's guide in the Summary. I want to show you that writing a DVD plugin is not rocket science. It takes less than half an hour to build your own DVD plugin.

    And the clock starts now!

    [0:00-5:40] Setup Environment and DVD SDK Setup

    Download and install DVD. I presume you already have it, because in my experience people who don't have it very rarely feel a need for its plugins. Hence I didn't include download and installation times in the 22 minutes.

    Create a folder to store your plugins and some Gradle-related configs and scripts. It may be any directory you like, just select your favourite place.

    Next step is to define some environment variables.

    • The first one is DVDESKTOP_SDK_HOME and it should point to the DVD folder. In most cases it is C:\Program Files\Oracle Data Visualisation Desktop.
    • The second variable is PLUGIN_DEV_DIR; it points to the folder you created 40 seconds ago. These two variables are not absolutely necessary, but they make your life a bit easier.
    • And one more change to the environment before going further. Add %DVDESKTOP_SDK_HOME%\tools\bin to the PATH variable (do you feel how your life became easier with DVDESKTOP_SDK_HOME already defined so you can start using it right now?). A sample cmd session is shown after this list.
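    A minimal cmd session for these three steps could look like this (the plugin directory path is just an example):

    rem Session-level example; use the System Properties dialog or setx to make the variables permanent
    set DVDESKTOP_SDK_HOME=C:\Program Files\Oracle Data Visualisation Desktop
    set PLUGIN_DEV_DIR=C:\DVD_Plugins
    set PATH=%PATH%;%DVDESKTOP_SDK_HOME%\tools\bin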

    And the last step is to initialise our development environment. Go to the plugins directory and execute the initialisation command. Open a cmd window and execute:

    cd %PLUGIN_DEV_DIR%  
    bicreateenv  
    

    This will not only initialise everything you need but also create a sample plugin to explore and research.

    Let's take a look at what we got. In the folder we have created before (%PLUGIN_DEV_DIR%) this command generated the following structure.

    a screenshot of %PLUGIN_DEV_DIR%

    All names here are more than self-explanatory. You won't need to do anything with .gradle and gradle, and src stores all the code we will develop. Please notice that src already has a sample plugin, sampleviz. You may use it as a reference for your own work.

    Create First Plugin

    Every plugin consists of a few parts. We don't need to create them from scratch every time; Oracle gives us a simple way of creating a new plugin. If you didn't close the cmd window from the previous steps, execute the following:

    bicreateplugin viz -id com.rittmanmead.demo -subType dataviz  
    

    Obviously, you should use your own company name for the id. The last word (demo) plus the word Plugin is what the user will see as the visualisation name. Obviously we can change it, but not today.

    This will create a fully operational plugin. Where can we find it? You shouldn't be too surprised if I say "in the src folder".

    a screenshot of %PLUGIN_DEV_DIR%

    And we may test it right now. Yes, its functionality is very limited, but you didn't expect it to do something useful, right?

    Start SDK

    Now we should start Oracle DVD in SDK mode. The main difference from the normal mode is that DVD will start in the default browser. That's not a typo: DVD will start as a usual site in a browser. It's a good idea to select as the default a browser with a good developer console and tools. I prefer Chrome for this task, but that's not the only choice; choose the browser you'd normally use for site development. When you finish setting up the default browser, do:

    cd %PLUGIN_DEV_DIR%  
    .\gradlew run
    

    If you don't have an ultrafast computer I'd recommend making some tea at this point, because the very first start will take a long time. Actually, it will consume most of the first stage's time.

    [5:40-6:15] Test the Plugin

    We haven't started changing the plugin yet, so I expect everything to be OK at this point. But it's a good idea to test everything anyway; I think we can afford 30-40 seconds for this.

    And again I assume you know how to use DVD and don't need a step-by-step guide on creating a simple visualisation. Just create a new project, select any data source, go to Visualisations, find your plugin and add it. You may even add some data to the plugin and it will show how many rows your data has. I think that's a good place to start from.

    a picture of plugin

    [6:15-22:00] Add Functions
    Change Parameters

    If you didn't skip the testing phase and played with this new toy for a while, you may have noticed two things we need to change right now. First, for the chosen type of visualisation, I need my plugin to accept two measures. You possibly noticed that I was unable to add Number of cancelled ballots as a second measure. By default a plugin accepts no more than one measure (zero or one). Luckily this can be changed very easily.

    We can find all parameters about measures, rows, columns etc. inside the extensions folder. In this case it is %PLUGIN_DEV_DIR%\customviz\com-rittmanmead-demo\extensions. Here we can find two folders. The first one is oracle.bi.tech.pluginVisualizationDataHandler and it has only one file, com.rittmanmead.demo.visualizationDataHandler.JSON. This file allows us to define which types of input data our plugin has (measures, columns, rows, colour etc.), what their types are (measures/dimensions), which is the default one and so on.

    a picture of json and plugin side-by-side
    Here we should change maxCount to 2. This will allow us to add more than one measure to our visualisation.

    The second thing I want to change is the number of data points. The elections dataset has data for about 100K precinct commissions, and DVD's default is 10K. We can change this value in the second configuration folder inside extensions. Its name is oracle.bi.tech.Visualization. There is again only one JSON file, com.rittmanmead.demo.json. We need to change rows to 200000.

    Why 200000 if I need only 100000 points? Well, it's about how the data will be passed to our code. Every measure is a separate row, so 100K points with 2 measures each gives me 200000 rows. It looks like right now that's the only way to have more than one measure in a plugin (at least to my best knowledge for DVD 12.2.2.2.0-20170208162627).

    a picture of json and plugin side-by side

    Note: I could change some things here, like the plugin name, the input data types shown to the user and so on, but my aim is simplicity.

    Now we should restart the DVD SDK in order to use the new parameters.

    two measures and ~200K rows

    Write code

    For my custom scatter chart I'm going to use the d3js JavaScript library. It will allow me to concentrate more on logic and less on HTML manipulation. To add it to my plugin I should add a couple of lines at the beginning of my code.

    Before:

    define(['jquery',  
            'obitech-framework/jsx',
            'obitech-report/datavisualization',
            'obitech-reportservices/datamodelshapes',
            'obitech-reportservices/events',
            'obitech-appservices/logger',
            'ojL10n!com-rittmanmead-demo/nls/messages',
            'obitech-framework/messageformat',
            'css!com-rittmanmead-demo/demostyles'],
            function($,
                     jsx,
                     dataviz,
                     datamodelshapes,
                     events,
                     logger,
                     messages) {
    

    After (I added two lines starting with d3):

    define(['jquery',  
            'obitech-framework/jsx',
            'obitech-report/datavisualization',
            'obitech-reportservices/datamodelshapes',
            'obitech-reportservices/events',
            'obitech-appservices/logger',
          'd3js',
            'ojL10n!com-rittmanmead-demo/nls/messages',
            'obitech-framework/messageformat',
            'css!com-rittmanmead-demo/demostyles'],
            function($,
                     jsx,
                     dataviz,
                     datamodelshapes,
                     events,
                     logger,
                   d3,
                     messages) {
    

    That's all. Now I can use d3js magic. And that's cool.

    OK, it's time to make our plugin do something more useful than simply counting rows. All plugin code is in the demo.js file and the main procedure is render. It is called every time DVD needs to update the visualisation. The initial version of this function is really small; without comments there are only four lines. It retrieves the data, counts the rows, then finds a container to write to and writes a message.

    Demo.prototype.render = function(oTransientRenderingContext) {  
          // Note: all events will be received after initialize and start complete.  We may get other events
          // such as 'resize' before the render, i.e. this might not be the first event.
    
          // Retrieve the data object for this visualization
      var oDataLayout = oTransientRenderingContext.get(dataviz.DataContextProperty.DATA_LAYOUT);
    
          // Determine the number of records available for rendering on ROW
          // Because we specified that Category should be placed on ROW in the data model handler,
          // this returns the number of rows for the data in Category.
      var nRows = oDataLayout.getEdgeExtent(datamodelshapes.Physical.ROW);
    
          // Retrieve the root container for our visualization.  This is provided by the framework.  It may not be deleted
          // but may be used to render.
      var elContainer = this.getContainerElem();
      $(elContainer).html(messages.TEXT_MESSAGE.format("Demo Plugin", "" + nRows));
    };
    

    First of all, I need to know the actual size of the plotting area. I simply added that after var elContainer = this.getContainerElem();:

      [...]
      var elContainer = this.getContainerElem();
    
      //container dimensions
      var nWidth = $(elContainer).width();
      var nHeight = $(elContainer).height();
      [...]
    

    The template plugin has code for pulling the data into the script (the oDataLayout variable). But for my scatter plot, I need to put this data into an array and find the minimum and the maximum value for both measures. This part of the code is my least favourite one. It looks like currently we can't have two separate measures in a custom viz (or at least I can't find the solution); therefore, instead of two nice separate measures, I have them as alternating rows. Like:

    ...
    PEC #1    Turnout,%          34,8%
    PEC #1    United Russia,%    44,3%
    PEC #2    Turnout,%          62,1%
    PEC #2    United Russia,%    54,2%
    PEC #3    Turnout,%          25,9%
    PEC #3    United Russia,%    33,2%
    ...

    I really hope a solution for this will be found. So far I just put the even and odd rows into the X and Y coordinates. At the same time, I determine the minimum and maximum values for both axes.

      //temporary measure
      var tmp_measure;
    
      //current measure
      var cur_measure;

      //array of [vertical, horizontal] measure pairs to plot (this declaration is missing from the original snippet)
      var points = [];

      var vMinimax=[Number.MAX_VALUE, Number.MIN_VALUE];
      var hMinimax=[Number.MAX_VALUE, Number.MIN_VALUE];
    
      for(var i=0;i<nRows;i++){
    
        if(i%2==1){
          cur_measure=Number(oDataLayout.getValue(datamodelshapes.Physical.DATA, i, 0, false));
          vMinimax[0]=Math.min(vMinimax[0], cur_measure);
          vMinimax[1]=Math.max(vMinimax[1], cur_measure);
          points.push([cur_measure,tmp_measure]);
        }
        else{
          tmp_measure=Number(oDataLayout.getValue(datamodelshapes.Physical.DATA, i, 0, false));
          hMinimax[0]=Math.min(hMinimax[0], tmp_measure);
          hMinimax[1]=Math.max(hMinimax[1], tmp_measure);
        }
      }
    

    The next part of the code is about handling multiple calls. I should handle things like changing measures or dimensions. I simply delete the old chart and create a new empty one.

      var oSVG;
    
      //Delete old chart if exists
      d3.select("#DemoViz").remove();
    
      //Create new SVG with id=DemoViz
      oSVG=d3.select(elContainer).append("svg");
      oSVG.attr("id","DemoViz");
    
      //Set plot area size
      oSVG=oSVG
           .attr("width", nWidth)
           .attr("height", nHeight)
           .append("g");
    

    Now I have an array with data. I have a prepared SVG plot for drawing. And I have no reason not to combine all this into a chart.

      //Compute scaling factors
      var hStep=(nWidth-40)/(hMinimax[1]-hMinimax[0]);
      var vStep=(nHeight-40)/(vMinimax[1]-vMinimax[0]);
    
      //Draw
      oSVG.selectAll("rect")
          .data(points)
          .enter()
          .append("rect")
          .attr("x", function(d) {return 20+d[1]*hStep;})
          .attr("y", function(d) {return nHeight-(20+d[0]*vStep);})
          .attr("height",1).attr("width",1)
          .attr('fill-opacity', 0.3);
    

    What I should mention here is opacity. If the fill-opacity value is too high (almost no transparency) the chart looks like this:

    0.99

    If I make the dots more transparent, the chart looks (and works) better. This is how it looks with fill-opacity=0.3.
    0.3
    Look at the black point in the middle. It's there because a lot of commissions have exactly the same values for both measures. That's how we could find the Saratov region anomaly.

    [22:00-] Summary and TODOs

    This shows how we can enhance Oracle DVD with our own visualisation. Yes, it's really, really, really simple, and it needs a lot of work to make it more useful. First of all, we should make it more reliable: this implementation doesn't test input parameters for correctness. And we also need to add interactivity. The chart should be zoomable and selectable. It should work as a filter. It should react to resize. It needs visible axes. And so on. But we can create it in less than half an hour and I think as a first step it works well. I hope you aren't afraid of making your own plugins now.

    If you need a more complex sample, take a look at Oracle's tutorial. It covers some of these TODOs.

    Categories: BI & Warehousing

    ORA-00937: not a single-group group function Solution

    Complete IT Professional - Wed, 2017-03-08 05:00
    Have you received an ORA-00937: not a single-group group function error? Learn what it is and how to fix it in this article. ORA-00937 Cause The cause of the ORA-00937 error is that a SELECT statement is trying to be executed, and the SELECT statement has an aggregate function (e.g. COUNT, MIN, MAX, SUM, or […]
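    For illustration (hypothetical table and column names, not taken from the excerpt above), a query that raises the error and a corrected version:

    -- Raises ORA-00937: dept_name is selected alongside COUNT(*) without a GROUP BY
    SELECT dept_name, COUNT(*) FROM employees;

    -- Fixed: group by the non-aggregated column
    SELECT dept_name, COUNT(*) FROM employees GROUP BY dept_name;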
    Categories: Development

    Oracle Forms and Firefox 52

    Gerd Volberg - Wed, 2017-03-08 05:00
    This morning was the last day for Oracle Forms in Firefox, I thought...

    At 9:00 AM I had no problems starting my Forms-Runtime. Firefox 51.0 was the current version at that point.

    After lunch and restarting my Firefox I was running 52.0 with the new feature "no JAVA please" :-)

    What's very bad is that Firefox doesn't give you any hint when starting a Forms-URL. The only thing you see is an empty screen.


    If you have this problem, here is the workaround:

    Start the URL "about:config" in your Firefox. After the message, that from now on you are responsible for all things you are doing in the settings, you see all Firefox config-parameters.

    Here you have to right mouse click and use "New"-"Boolean"


    The name of the new parameter is: plugin.load_flash_only
    The value of the new parameter is: false

    Now you have a new Firefox-parameter, which lets you run Oracle Forms as before!


    Have fun
    Gerd



    ASP .NET Core (CLR) on Pivotal Cloud Foundry

    Pas Apicella - Wed, 2017-03-08 04:57
    There are two ways to run .NET applications on Pivotal Cloud Foundry. In short they are as follows:

    1. Windows 2012 R2 Stack (Windows 2016 coming soon)
    2. Linux Stack - ASP.NET Core CLR only

    In the example below I am going to show how you would push a sample ASP.NET Core application using the default Linux stack. I am using a run.pivotal.io instance, better known as PWS (Pivotal Web Services), which only supports a Linux stack. In your own PCF installation an operator may have provided Windows support, in which case "cf stacks" is one way to find out, as shown below

    $ cf stacks
    Getting stacks in org pivot-papicella / space development as papicella@pivotal.io...
    OK

    name            description
    cflinuxfs2      Cloud Foundry Linux-based filesystem

    windows2012R2   Microsoft Windows / .Net 64 bit

    Steps

    1. Clone a demo as shown below

    $ git clone https://github.com/bingosummer/aspnet-core-helloworld.git
    Cloning into 'aspnet-core-helloworld'...
    remote: Counting objects: 206, done.
    remote: Total 206 (delta 0), reused 0 (delta 0), pack-reused 206
    Receiving objects: 100% (206/206), 43.40 KiB | 0 bytes/s, done.
    Resolving deltas: 100% (78/78), done.

    2. Change to the right directory as shown below

    $ cd aspnet-core-helloworld

    3. Edit manifest.yml to use the BETA buildpack as follows. You can list out the build packs using "cf buildpacks"

    ---
    applications:
    - name: sample-aspnetcore-helloworld
      random-route: true
      memory: 512M
      buildpack: dotnet_core_buildpack_beta

    4. Push as shown below

    pasapicella@pas-macbook:~/apps/dotnet/aspnet-core-helloworld$ cf push
    Using manifest file /Users/pasapicella/apps/dotnet/aspnet-core-helloworld/manifest.yml

    Updating app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
    OK

    Uploading sample-aspnetcore-helloworld...
    Uploading app files from: /Users/pasapicella/pivotal/apps/dotnet/aspnet-core-helloworld
    Uploading 21.9K, 15 files
    Done uploading
    OK

    Stopping app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
    OK

    Starting app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
    Downloading dotnet_core_buildpack_beta...
    Downloaded dotnet_core_buildpack_beta
    Creating container
    Successfully created container
    Downloading build artifacts cache...
    Downloading app package...
    Downloaded app package (21.5K)
    Downloaded build artifacts cache (157.7M)

    ...

    -----> Saving to buildpack cache
        Copied 0 files from /tmp/app/libunwind to /tmp/cache
        Copied 0 files from /tmp/app/.dotnet to /tmp/cache
        Copied 0 files from /tmp/app/.nuget to /tmp/cache
        OK
    ASP.NET Core buildpack is done creating the droplet
    Uploading droplet, build artifacts cache...
    Uploading build artifacts cache...
    Uploading droplet...
    Uploaded build artifacts cache (157.7M)
    Uploaded droplet (157.7M)
    Uploading complete
    Destroying container
    Successfully destroyed container

    0 of 1 instances running, 1 starting
    0 of 1 instances running, 1 starting
    1 of 1 instances running

    App started


    OK

    App sample-aspnetcore-helloworld was started using this command `dotnet run --project src/dotnetstarter --server.urls http://0.0.0.0:${PORT}`

    Showing health and status for app sample-aspnetcore-helloworld in org apples-pivotal-org / space development as papicella@pivotal.io...
    OK

    requested state: started
    instances: 1/1
    usage: 512M x 1 instances
    urls: sample-aspnetcore-helloworld-gruffier-jackpot.cfapps.io
    last uploaded: Wed Mar 8 10:46:44 UTC 2017
    stack: cflinuxfs2
    buildpack: dotnet_core_buildpack_beta

      state     since                    cpu     memory          disk           details
    #0   running   2017-03-08 09:48:29 PM   22.4%   36.7M of 512M   556.8M of 1G


    Verify the application using the URL given at the end of the push
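    For example, a quick check from the command line (the exact response text depends on the sample application):

    $ curl http://sample-aspnetcore-helloworld-gruffier-jackpot.cfapps.io
    Hello World!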







    Categories: Fusion Middleware

    New OA Framework 12.2.4 Update 15 Now Available

    Steven Chan - Wed, 2017-03-08 02:05

    Web-based content in Oracle E-Business Suite 12.x runs on the Oracle Application Framework (OA Framework, or just OAF) user interface libraries and infrastructure. Since the original release of Oracle E-Business Suite Release 12.2 in 2013, we have released several updates to Oracle Application Framework to provide fixes for performance, security, and stability issues.

    These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Pack. "Cumulative" means that the latest RUP or Bundle Patch contains everything released previously.

    The latest OAF update for EBS 12.2.4 is now available:


    Where is the documentation for this update?

    Instructions for installing this OAF Release Update Pack are here:

    Who should apply this patch?

    All Oracle E-Business Suite Release 12.2.4 users should apply this patch. Future OAF patches for EBS 12.2.4 will require this patch as a prerequisite. 

    What's new in this update?

    Fixes are included for following bugs/issues:

    • 19995946 - EXPORT PERSONALIZATIONS FAIL
    • 20132328 - INTERMITTENT YOU HAVE ENCOUNTERED AN UNEXPECTED ERROR APPEARS
    • 20197365 - 12.2.4 - IE10 MUST CLICK MULTIPLE TIMES TO LAUNCH PROFESSIONAL FORMS
    • 20534529 - PURCHASING BWC, HOVER OVER ADVANCED SEARCH DISPLAYS INCORRECT VALUE
    • 20976542 - HOMEPAGE FAVORITE URL IS BEING PRE-APPENDED WITH HTTP:// WHEN ACCESSED
    • 21193986 - CANNOT UPDATE ATTACHMENTS THOUGH SMALL ATTACHMENT WINDOW IF NO TITLE
    • 21248661 - ADA: REMOVE EXTRA SKIP NAV LINKS
    • 21321484 - COMPILECABOJSPS.PL FAILED TO COMPILE FRAMEREDIRECT.JSP IN WINDOWS
    • 21450063 - EMAIL FACET TABLE RENDERING ISSUES
    • 21460925 - 12.2.4: ONE-OFF PATCH REQUEST FOR INTERNAL BUG 18550079 FIX
    • 21666726 - CANNOT IMPORT PERSONALIZATIONS USING FUNCTIONAL ADMINISTRATOR IN 12.2
    • 22866135 - OAF EMAIL NOTIFICATION FORMATTING ISSUES
    • 25044048 - CREATE LINK FROM FAVORITE TO OPEN IN NEW WINDOW OPENS IN SAME PAGE
    • 25216142 - FND_STATE_LOSS_ERROR WHEN DOUBLE CLICKING ON MENU ITEM ON HOME PAGE

    Bundle 15 is a cumulative patch for EBS 12.2.4. It includes updates previously released as Bundle 14 (Patch 24911652).

    This Bundle Patch requires the R12.ATG_PF.C.Delta.4 Release Update Pack as a mandatory prerequisite.

    Related Articles

    Categories: APPS Blogs

    shuttle that lists files in a specified directory

    Tom Kyte - Tue, 2017-03-07 19:26
    I need to make a shuttle that lists files in a specified directory. The user could choose what ones they wanted to process. When they press GO there would be a list of files to process. Is there an example of this that I could look at? Thanks. Apex...
    Categories: DBA Blogs

    How to insert multiple insert stmt to a table

    Tom Kyte - Tue, 2017-03-07 19:26
    Hi Team, I've a script where i have 100K insert into stmts , i need to execute these inserts, Please suggest me a faster way to insert Insert insert insert insert ......soo on 100k records Thanks Dnesh.
    Categories: DBA Blogs

    Martian wants to import XML table from Access

    Tom Kyte - Tue, 2017-03-07 19:26
    I am so very lost. Thinking I might finally try to figure out how to use MySQL as a replacement for Access2007; I think I downloaded everything I needed, including VB support for C++. I'm pretty sure, but not certain. But for the sake of thi...
    Categories: DBA Blogs
