
Feed aggregator

Benefits of Single Tenant Deployments

Asif Momen - Mon, 2014-07-07 04:54
While presenting at a database event, I was asked by one of the attendees about the benefits of running Oracle databases in a Single Tenant Configuration. I thought it would be nice to post the answer on my blog as it would benefit others too.
From Oracle documentation, “The multitenant architecture enables an Oracle database to function as a multitenant container database (CDB) that includes zero, one, or many customer-created pluggable databases (PDBs). A PDB is a portable collection of schemas, schema objects, and non-schema objects that appears to an Oracle Net client as a non-CDB. All Oracle databases before Oracle Database 12c were non-CDBs”.
Following are the benefits of running databases in Single Tenant Configuration:
  1. Alignment with Oracle’s new multi-tenant architecture
  2. Cost savings. You save on licensing, as single tenant deployments do not attract the Multitenant option license fee; the fee applies only when a CDB contains two or more PDBs (a quick way to check is sketched at the end of this post).
  3. Upgrade/patch your single PDB from 12.1.0.1 to 12.x easily with reduced downtime
  4. Secure separation of duties (between CDBA & DBA)
  5. Easier PDB cloning

I would recommend running all your production and non-production databases in single-tenant configuration (if you are not planning for consolidation using the multi-tenant option) once you upgrade them to Oracle Database 12c. I expect single tenant deployments to become the default deployment model for customers.
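
For a quick check, the following query (a minimal sketch, assuming access to V$PDBS on a 12c container database) lists the user-created PDBs in a container; a single row back confirms the CDB is still within the single tenant boundary:

select con_id, name, open_mode
from   v$pdbs
where  name <> 'PDB$SEED';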

UnifiedPush Server 0.11 is out!

Matthias Wessendorf - Mon, 2014-07-07 03:07

Today we are extremely happy to announce an all new AeroGear UnifiedPush Server!

UnifiedPush Server

The UnifiedPush Server comes with a completely rewritten Angular.js based UI and is now powered by Keycloak! Thanks to the Keycloak team for the great work they did helping the AeroGear team make the Keycloak integration happen.

Getting started

Getting started with the new server is still very simple (see the sketch after this list):

  • Set up a database (here is an example for the H2 database engine; copy it into $JBOSS/standalone/deployments)
  • Download the two WAR files (core and auth) and copy into $JBOSS/standalone/deployments
  • Start the JBoss server
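
A rough sketch of those three steps on a plain JBoss/WildFly installation (the artifact names are illustrative and will differ slightly depending on the release you download):

$ cp unifiedpush-h2-ds.xml $JBOSS/standalone/deployments/
$ cp unifiedpush-server.war auth-server.war $JBOSS/standalone/deployments/
$ $JBOSS/bin/standalone.sh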

The 0.11.0 release contains a lot of new features, here is a more detailed list:

  • Keycloak Integration for user management
  • Angular.js based AdminUI
  • Metrics and Dashboard for some Analytics around Push Messages
  • Code snippet UI now supports Swift
  • and a lot of fixes and other improvements! See JIRA for all the items

Besides the improvements on the server, we also have some Quickstarts to help you get going with the Push Server.

Hello World

The HelloWorld is a set of simple clients that show how to register a device with the UnifiedPush Server. On the Admin UI of the server you can use the “Send Push” menu to send a message to the different applications, running on your phone.

Mobile Contacts Quickstart

The Mobile Contacts Quickstart is a Push-enabled CRUD example, containing several client applications (Android, Apache Cordova and iOS) and a JavaEE-based backend. The backend app is a secured (PicketLink) JAX-RS application which sends out push messages when a new contact has been created. Sometimes the backend for a mobile application has to run behind a firewall; for that case, the quickstart also contains a Fabric8-based proxy server.

Thanks again to the Keycloak team for their assistance.

Now, get your hands dirty and send some push messages! We hope you like the new server!
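
If you would rather script it than click through the AdminUI, the sender REST endpoint can be driven with curl as well. A rough sketch, assuming the default context root and substituting your own application ID and master secret:

curl -u "{pushApplicationID}:{masterSecret}" \
     -X POST \
     -H "Content-Type: application/json" \
     -d '{"message": {"alert": "Hello from the UnifiedPush Server!"}}' \
     https://localhost:8443/ag-push/rest/sender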

Next ?

We are now polishing the server for the 1.0.0 push release this summer. See the roadmap for details.


Introduction to BatchEdit

Anthony Shorten - Sun, 2014-07-06 21:14

BatchEdit is a new wizard-style utility to help you build a batch architecture quickly, with little fuss and little technical knowledge required. Customers familiar with the WLST tool that is shipped with Oracle WebLogic will recognize the style of utility I am talking about. The idea behind BatchEdit is simple: it provides a simpler method of configuring batch by boiling the process down to its simplest form. The power of the utility comes from the utility itself and the set of pre-optimized templates shipped with it, which generate as much of the configuration as possible while still allowing a flexible approach to configuration.

First of all, the BatchEdit utility, shipped with OUAF 4.2.0.2.0 and above, is disabled by default for backward compatibility. To enable it, execute the configureEnv[.sh] -a utility, set Enable Batch Edit Functionality to true in option 50, and save the changes. The facility is now available to use.

Once enabled, the BatchEdit facility can be executed using the bedit[.sh] <options> utility, where <options> are the options you want to use with the command. The most useful are -h and --h, which display the help for the command options and the extended help respectively. You will find lots of online help in the utility; just type help <topic> to get an explanation and further advice on a specific topic.

The next step is using the utility. The best approach is to think of the configuration as a set of layers. The first layer is the cluster. The next layer is the definition of threadpools in that cluster, and then the submitters (or jobs) that are submitted to those threadpools. Each of those layers has configuration files associated with it.

Concepts

Before getting into the utility, let's discuss a few basic concepts:

  • BatchEdit allows "labels" to be assigned to each layer. This means you can group like-configured components together. For example, say you wanted to set up a specific threadpoolworker for a specific set of processes, and that threadpoolworker had unique characteristics such as unique JVM settings. You can create a label template for that set of jobs and build it dynamically. At runtime you would tell the threadpoolworker[.sh] command to use that template (using the -l option; see the sketch after this list). For submitters the label is the Batch Code itself.
  • BatchEdit tracks whether changes are made during a session. If you try to exit without saving, a warning is displayed to remind you of unsaved changes. Customers of the Oracle Enterprise Manager pack for Oracle Utilities will be able to track configuration file version changes within Oracle Enterprise Manager, if desired.
  • BatchEdit essentially edits existing configuration files (e.g. tangosol-coherence-override.xml for the cluster, threadpoolworker.properties for threadpoolworkers, etc.). To ascertain which file is being configured during a session, use the what command.
  • BatchEdit will only show the valid options for the scope of the command and the template used. This applies to the online help which is context sensitive.
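
As a concrete sketch of the label concept (the label name "cache" is only an example), you could edit a labelled threadpoolworker configuration and then start a worker with that template as follows:

$ bedit.sh -w -l cache
> what
> save
> exit
$ threadpoolworker.sh -l cache
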
Using the utility

The BatchEdit utility has two distinct modes to build and maintain various configuration files.

  • Initiation Mode - In the first mode, you invoke the utility with the scope or configuration file to create and/or manage, by specifying the valid options at the command line. This mode is recorded in a preferences file so that specific settings are remembered across invocations. For example, once you decide which cluster type you want to adopt, the utility will remember this preference and show the options for that preference only. It is possible to switch preferences by re-invoking the command with the appropriate options.
  • Edit Mode - Once you have invoked the command, a list of valid options is presented which can be altered using the set command. For example, the set port 42020 command will set the port parameter to 42020. You can add new sections using the add command, and so forth. Online help will show the valid commands. The most important is the save command, which saves all changes.
Process for configuration

To use the command effectively here is a summary of the process you need to follow:

  • Decide your cluster type first. Oracle Utilities Application Framework supports multi-cast, uni-cast and single server clusters. Use the bedit[.sh] -c [-t wka|mc|ss] command to set and manage the cluster parameters. For example:
$ bedit.sh -c
Editing file /oracle/FW42020/splapp/standalone/config/tangosol-coherence-override.xml using template /oracle/FW42020/etc/tangosol-coherence-override.ss.be

Batch Configuration Editor 1.0 [tangosol-coherence-override.xml]
----------------------------------------------------------------

Current Settings

  cluster (DEMO_SPLADM)
  address (127.0.0.1)
  port (42020)
  loglevel (1)
  mode (dev)

> help loglevel

loglevel
--------
Specifies which logged messages will be output to the log destination.

Legal values are:

  0    - only output without a logging severity level specified will be logged
  1    - all the above plus errors
  2    - all the above plus warnings
  3    - all the above plus informational messages
  4-9  - all the above plus internal debugging messages (the higher the number, the more the messages)
  -1   - no messages

> set loglevel 2

Batch Configuration Editor 1.0 [tangosol-coherence-override.xml]
----------------------------------------------------------------

Current Settings

  cluster (DEMO_SPLADM)
  address (127.0.0.1)
  port (42020)
  loglevel (2)
  mode (dev)

> save
Changes saved
> exit
  • Set up your threadpoolworkers. For each group of threadpoolworkers use the bedit[.sh] -w [-l <label>] command, where <label> is the group name. We supply a default (no label) template and a cache threadpool template. For example:
$ bedit.sh -w
Editing file /oracle/FW42020/splapp/standalone/config/threadpoolworker.properties using template /oracle/FW42020/etc/threadpoolworker.be

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)

> set pool.2 poolname FRED

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)

> add pool

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (DEFAULT)
      threads (5)

> set pool.3 poolname LOCAL

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (LOCAL)
      threads (5)

> set pool.3 threads 0

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (LOCAL)
      threads (0)

>
  • Setup your global submitter settings using the bedit[.sh] -s command or batch job specific settings using the bedit[.sh] -b <batchcode> command where <batchcode> is the Batch Control Id for the job. For example:
$ bedit.sh -b F1-LDAP
File /oracle/FW42020/splapp/standalone/config/job.F1-LDAP.properties does not exist - create? (y/n) y
Editing file /oracle/FW42020/splapp/standalone/config/job.F1-LDAP.properties using template /oracle/FW42020/etc/job.be

Batch Configuration Editor 1.0 [job.F1-LDAP.properties]
-------------------------------------------------------

Current Settings

  poolname (DEFAULT)
  threads (1)
  commit (10)
  user (SYSUSER)
  lang (ENG)
  soft.1
      parm (maxErrors)
      value (500)
>

The BatchEdit facility is an easier way of creating and maintaining the configuration files with very little effort. More examples, and guidance on migrating to this new facility, are documented in the Batch Best Practices for Oracle Utilities Application Framework based products whitepaper (Doc Id: 836362.1), available from My Oracle Support.

SQL Plan Baselines

Jonathan Lewis - Sun, 2014-07-06 11:34

Here’s a thread from Oracle-L that reminded me of an important reason why you still have to hint SQL sometimes (rather than following the mantra “if you can hint it, baseline it”).

I have a query that takes 77 seconds to optimize (it’s not a production query, fortunately, but one I engineered to make a point). I can enable SQL plan baseline capture and create a baseline for it, and given the nature of the query I can be confident that the resulting plan will always be exactly the plan I want. If I have to re-optimize the query at any time (because it runs once per hour, say, and is constantly being flushed from the library cache), how much time will the SQL plan baseline save me?

The answer is NONE.

The first thing that the optimizer does for a query with a stored sql plan baseline is to optimize it as if the baseline did not exist.

If I want to get rid of that 77 seconds I’ll have to extract (most of) the hints from the SQL Plan Baseline and write them into the query.  (Or, maybe, create a Stored Outline – except that they’re deprecated in the latest version of Oracle, and I’d have to check whether the optimizer used the same strategy with stored outlines or whether it applied the outline before doing any optimisation). Maybe we could do with a hint which forces the optimizer to attempt to use an existing, accepted SQL Baseline without attempting the initial optimisation pass.
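
For reference, the "baseline it" half of the mantra takes only one call - a minimal sketch using dbms_spm, with the sql_id and plan_hash_value left as placeholders for the cursor you want to capture:

declare
        m_loaded        pls_integer;
begin
        m_loaded := dbms_spm.load_plans_from_cursor_cache(
                sql_id          => '&m_sql_id',
                plan_hash_value => &m_plan_hash_value
        );
        dbms_output.put_line('Plans loaded: ' || m_loaded);
end;
/

None of which, of course, does anything about the 77 seconds the optimizer will still spend on its initial optimisation pass.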

 


Adjusting Histograms

Jonathan Lewis - Fri, 2014-07-04 13:32

This is a quick response to a question on an old blog post asking how you can adjust the high value if you’ve already got a height-balanced histogram in place. It’s possible that someone will come up with a tidier method, but this was just a quick sample I created and tested on 11.2.0.4 in a few minutes.  (Note - this is specifically for height-balanced histograms,  and it’s not appropriate for 12c which has introduced hybrid histograms that will require me to modify my “histogram faking” code a little).

rem
rem	Script:		adjust_histogram.sql
rem	Author:		Jonathan Lewis
rem	Dated:		Jun 2014
rem	Purpose:
rem
rem	Last tested
rem		11.2.0.4
rem	Not tested
rem		12.1.0.1
rem		11.1.0.7
rem		10.2.0.5
rem	Outdated
rem		 9.2.0.8
rem		 8.1.7.4	no WITH subquery
rem
rem	Notes:
rem	Follow-on from a query on my blog about setting the high value
rem	when you have a histogram.  We could do this by hacking, or by
rem	reading the user_tab_histogram values and doing a proper prepare
rem

start setenv
set timing off

execute dbms_random.seed(0)

drop table t1;

begin
	begin		execute immediate 'purge recyclebin';
	exception	when others then null;
	end;

	begin
		dbms_stats.set_system_stats('MBRC',16);
		dbms_stats.set_system_stats('MREADTIM',10);
		dbms_stats.set_system_stats('SREADTIM',5);
		dbms_stats.set_system_stats('CPUSPEED',1000);
	exception
		when others then null;
	end;
/*
	begin		execute immediate 'begin dbms_stats.delete_system_stats; end;';
	exception	when others then null;
	end;

	begin		execute immediate 'alter session set "_optimizer_cost_model"=io';
	exception	when others then null;
	end;

	begin		execute immediate 'alter session set "_optimizer_gather_stats_on_load" = false';
	exception	when others then null;
	end;
*/

	begin		execute immediate  'begin dbms_space_admin.materialize_deferred_segments(''TEST_USER''); end;';
	exception	when others then null;
	end;

end;
/

create table t1
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(sysdate,'YYYY') + trunc(dbms_random.normal * 100,1)	d1
from
	generator	v1,
	generator	v2
where
	rownum <= 1e4
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt 	 => 'for all columns size 32'
	);

end;
/

spool adjust_histogram.lst

prompt	==================
prompt	Current High Value
prompt	==================

select to_char(max(d1),'dd-Mon-yyyy hh24:mi:ss') from t1;

prompt	==============================
prompt	Initial Histogram distribution
prompt	==============================

select
	endpoint_number,
	to_date(to_char(trunc(endpoint_value)),'J') + mod(endpoint_value,1) d_val,
	endpoint_value,
	lag(endpoint_value,1) over(order by endpoint_number) lagged_epv,
	endpoint_value -
		lag(endpoint_value,1) over(order by endpoint_number)  delta
from	user_tab_histograms
where
	table_name = 'T1'
and	column_name = 'D1'
;

rem
rem	Note - we can't simply overwrite the last srec.novals
rem	because that doesn't adjust the stored high_value.
rem	We have to make a call to prepare_column_values,
rem	which means we have to turn the stored histogram
rem	endpoint values into their equivalent date types.
rem

prompt	==================
prompt	Hacking the values
prompt	==================

declare

	m_distcnt		number;
	m_density		number;
	m_nullcnt		number;
	srec			dbms_stats.statrec;
	m_avgclen		number;

	d_array			dbms_stats.datearray := dbms_stats.datearray();
	ct			number;

begin

	dbms_stats.get_column_stats(
		ownname		=> user,
		tabname		=> 't1',
		colname		=> 'd1',
		distcnt		=> m_distcnt,
		density		=> m_density,
		nullcnt		=> m_nullcnt,
		srec		=> srec,
		avgclen		=> m_avgclen
	); 

	ct := 0;
	for r in (
		select	to_date(to_char(trunc(endpoint_value)),'J') + mod(endpoint_value,1) d_val
		from	user_tab_histograms
		where	table_name = 'T1'
		and	column_name = 'D1'
		order by endpoint_number
	) loop

		ct := ct + 1;
		d_array.extend;
		d_array(ct) := r.d_val;
		if ct = 1 then
			srec.bkvals(ct) := 0;
		else
			srec.bkvals(ct) := 1;
		end if;

	end loop;

	d_array(ct) := to_date('30-Jun-2015','dd-mon-yyyy');

	dbms_stats.prepare_column_values(srec, d_array);

	dbms_stats.set_column_stats(
		ownname		=> user,
		tabname		=> 't1',
		colname		=> 'd1',
		distcnt		=> m_distcnt,
		density		=> m_density,
		nullcnt		=> m_nullcnt,
		srec		=> srec,
		avgclen		=> m_avgclen
	);
end;
/

prompt	============================
prompt	Final Histogram distribution
prompt	============================

select
	endpoint_number,
	to_date(to_char(trunc(endpoint_value)),'J') + mod(endpoint_value,1) d_val,
	endpoint_value,
	lag(endpoint_value,1) over(order by endpoint_number) lagged_epv,
	endpoint_value -
		lag(endpoint_value,1) over(order by endpoint_number)  delta
from	user_tab_histograms
where
	table_name = 'T1'
and	column_name = 'D1'
;

spool off

doc

#


Best of OTN - Week of June 29th

OTN TechBlog - Fri, 2014-07-04 11:00
Java -

Congratulations to the Winners #IoTDevchallenge -
Oracle Technology Network and Oracle Academy are proud to announce the winners of the IoT Developer Challenge. All of them making the Internet of Things come true. And, of course, built with the Java platform at the center of Things. See who the winners are in this blog post - https://blogs.oracle.com/java/entry/announcing_the_iot_developer_challenge.


JavaEE 8 Roadmap? It's right here.

Forum discussion: Would you use an IDE on a tablet? Join in now!

Systems Community -

OS Tips and Tricks for Sysadmins - This three-session track, part of the Global OTN Virtual Technology Summits (Americas July 9th, EMEA July 10th, APAC July 16th), will show you how to configure Oracle Linux to run Oracle Database 11g and 12c, how to use the latest networking capabilities in Oracle Solaris 11, and how to troubleshoot networking problems in Unix and Linux systems. Experts will be on hand to answer your questions live. Register now.

Database -

Disaster Recovery with Oracle Data Guard and Oracle GoldenGate -
The best part about preparing for the upcoming OTN Virtual Technology Summit is reading up on the technology we'll be presenting. Today's reading: Disaster recovery with Oracle Data Guard... it's an essential capability that every Oracle DBA should master.

Architect Community

Community blogs and social networks have been buzzing about the recent release of Oracle SOA Suite 12c, Oracle Mobile Application Framework, and other new stuff. I've shared links to several such posts over the past several days on the OTN ArchBeat Facebook page. The three items below drew the most attention.

SOA Suite 12c: Exploring Dependencies - Visualizing dependencies between SOA artifacts | Lucas Jellema
Oracle ACE Director Lucas Jellema explores the use of the Dependency Explorer in JDeveloper 12c for tracking and visualizing dependencies in artifacts in SOA composites or Service Bus projects.

Managing Files for the Hybrid Cloud Use Cases, Challenges and Requirements | Dave Berry
This paper by Dave Berry, Vikas Anand, and Mala Ramakrishnan discusses Oracle Managed File Transfer and best practices for sharing files within your enterprise and externally with partners and cloud services.

Say hello to the new Oracle Mobile Application Framework | Shay Shmeltzer
What's the Oracle Mobile Application Framework (MAF)? Oracle MAF, available as an extension to both JDeveloper and Eclipse, lets you develop a single application that will run on both iOS and Android devices. MAF is based on Oracle ADF Mobile, but adds many new features. Want more information? Click the link to read a post by product manager Shay Shmeltzer.

Funny Stuff

On July 4th Americans will celebrate the US victory over the British in the Revolutionary War by grilling mountains of meat, consuming mass quantities of beer, and making trips to the emergency room to reattach fingers blown off with poorly-handled fireworks. This hilarious video featuring comic actor Stephen Merchant offers a UK perspective on the outcome of that war.

A tip of a three-cornered hat to Oracle ACE Director Mark Rittman and Oracle Enterprise Architect Andrew Bond for bringing this video to my attention.

Log Buffer #378, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-07-04 08:43

New technologies, new ideas, and new tips are forthcoming in abundance in numerous blog posts across Oracle, SQL Server, and MySQL. This Log Buffer Edition covers many of the salient ones.

Oracle:

Whether you use a single OEM and are migrating to a new OEM, or have multiple OEMs, the need to move templates between environments will arise.

Oracle Coherence is the industry’s leading in-memory data grid solution that enables applications to predictably scale by providing fast, reliable and scalable access to frequently used data.

Needless to say, some ATG applications are more complex than others.  Some ATG applications support a single site, single language, single catalog, single currency, have a single development staff, single business team, and a relatively simple business model.

The purpose of this article is to describe some of the important foundational concepts of ATG.

You can use Ops Center to perform some very complex tasks. For instance, you might use it to provision several operating systems across your environment, with multiple configurations for each OS.

SQL Server:

SSRS In a Flash – Level 1 in the Stairway to Reporting Services.

The “Numbers” or “Tally” Table: What it is and how it replaces a loop.

Arshad Ali demonstrates granular level encryption in detail and explains how it differs from Transparent Data Encryption (TDE).

There were many new DMVs added in SQL Server 2012, and some that have changed since SQL Server 2008 R2.

There are some aspects of tables in SQL Server that a lot of people get wrong, purely because they seem so obvious that one feels embarrassed about asking questions.

MySQL:

A much awaited release from the MariaDB project is now stable (GA) – MariaDB Galera Cluster 10.0.12.

Failover with the MySQL Utilities: Part 2 – mysqlfailover.

HowTo: Integrating MySQL for Visual Studio with Connector/Net.

Single database backup and restore with MEB.

Externally Stored Fields in InnoDB.


Speedy #em12c template export

DBASolved - Thu, 2014-07-03 20:50

Whether you use a single OEM and are migrating to a new OEM, or have multiple OEMs, the need to move templates between environments will arise. I had this exact problem come up recently at a customer site, between OEM 11g and OEM 12c. In order to move the templates, I needed to export the multiple monitoring templates using EMCLI. The command that I used to do individual exports was the following:


./emcli export_template -name="<template name>" -target_type="<target_type>" -output_file="/tmp/<template name>.xml"

If you have only one template to move, the EMCLI command above will work. If you have more than one template to move, the easiest thing to do is to run the EMCLI commands from a script. This is the beauty of EMCLI: the ability to interact with OEM at the command line and use it in scripts for repeated executions. Below is a script that I wrote to export templates based on target_types; an example invocation follows the script.

Note: If you need to identify the target_types that are supported by OEM, they can be found in SYSMAN.EM_TARGET_TYPES in the repository.


#!/usr/bin/perl -w
#
#Author: Bobby Curtis, Oracle ACE
#Copyright: 2014
#
use strict;
use warnings;

#Parameters
my $oem_home_bin = "/opt/oracle/app/product/12.1.0.4/middleware/oms/bin";
my @columns = ("", 0, 0, 0, 0);
my @buf;
my $target_type = $ARGV[0];

#Program

if (scalar @ARGV != 1)
{
 print "\nUsage:\n";
 print "perl ./emcli_export_templates.pl <target_type>\n\n";
 print "<target_type> = target type for template being exported\n";
 print "refer to sysman.em_target_types in repository for more info.";
 print "\n";
 exit;
}

system($oem_home_bin.'/emcli login -username=<userid> -password=<password>');
system($oem_home_bin.'/emcli sync');

@buf = `$oem_home_bin/emcli list_templates`;

foreach (@buf)
{
 @columns = split (/ {2,}/, $_);

 if ($columns[2] eq $target_type )
 {
 my $cmd = 'emcli export_template -name="'.$columns[0].'" -target_type="'.$columns[2].'" -output_file="/tmp/'.$columns[0].'.xml"';
 system($oem_home_bin.'/'.$cmd);
 print "Finished export of: $columns[0] template\n";
 }
}

system($oem_home_bin.'/emcli logout');
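
Running the script is then one command per target type. For example (oracle_database is only an illustration; use whichever target types you find in SYSMAN.EM_TARGET_TYPES):

perl ./emcli_export_templates.pl oracle_database

The exported XML files land in /tmp, ready to be copied across to the destination OEM environment.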

If you would like to learn more about EMCLI and other ways to use it, have a look at these other blogs:

Ray Smith: https://oramanageability.wordpress.com/
Kellyn Pot’Vin: http://dbakevlar.com/
Seth Miller: http://sethmiller.org/

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com



New ConfigTools Training available on Youtube

Anthony Shorten - Thu, 2014-07-03 18:12

The Oracle Public Sector Revenue Management product team have released a series of training videos for the Oracle Utilities Application Framework ConfigTools component. This component allows customers to use meta data and scripting to enhance and customize Oracle Utilities Application Framework based solutions without the need for Java programming.

The series uses examples and each recording is around 30-40 minutes in duration.

The channel for the videos is Oracle PSRM Training. The videos are not a substitute for the ConfigTools training courses available through Oracle University, but they are useful for people trying to grasp individual concepts while getting an appreciation for the power of this functionality.

At the time of publication, the recordings currently available are:


    Partner Webcast - Oracle Coherence & Weblogic Server: Close Integration of Application & Data Grid Tier

    Oracle Coherence is the industry’s leading in-memory data grid solution that enables applications to predictably scale by providing fast, reliable and scalable access to frequently used data. The key...


    Malware stirs database security concerns for banks

    Chris Foot - Thu, 2014-07-03 13:40

    In an effort to keep up with the times, many financial institutions have implemented e-banking applications that allow customers to access and manage their finances on the Web or through their smartphones.

    Although electronic solutions may boost satisfaction rates and make it easier for account holders to transfer funds, they can cause major database security woes if proper protective measures aren't taken. As of late, there have been two kinds of malware banks have had to contend with.

    Attacking the mobile arena
    Because it's easy for consumers to get caught up in the luxury of viewing checking information on their smartphones, many forget to follow necessary, defensive protocols. According to ITPro, a new remote access Trojan, named com.II, is targeting Android devices and zeroing in on users with mobile banking applications. 

    The source noted that the malware abides by the following process:

    1. Undermines any security software that's installed
    2. Scans the device for eBanking programs
    3. Replaces any such tools with fraudulent ones
    4. Implements fabricated application updates
    5. Steals and delivers short message service notifications to access contact lists.

    Combating surveillance
    Paco Hope, principal consultant with Cigital, a firm based in the United Kingdom, surmised that the malicious software could infect banking populations worldwide, as it can be adapted to different languages.

    To prevent the program from entering bank accounts and stealing funds, active database monitoring should be employed by enterprises offering e-banking apps. Com.II has the ability to conduct thorough surveillance of individual checking and savings records, allowing the malware's administrators to potentially carry out transactions. 

    Under the radar
    Many programmers harboring ill intentions have found a way to make malicious software basically unrecognizable. MarketWatch acknowledged a new breed of malware, dubbed Emotet, that tricks people into giving it access to bank accounts. The news source outlined the deployment's protocol.

    1. Spam messages are sent to victims' emails
    2. The contents of those notices detail financial transactions and include links
    3. Upon clicking the link, the malware activates code that sits in browsers
    4. Once a person visits a bank website, the program can monitor all activity

    Trend Micro Vice President of Technology and Solutions JD Sherry asserted that the language used within the encoded messages appears authentic. This makes it easy for individuals to fall victim to the scam.

    The administrator's side of the equation
    Although it's important for e-banking customers to install adequate malware protection programs, the enterprises administering electronic solutions must find a way to defend their accounts. Constant database surveillance needs to be employed so that security breaches don't get out of hand in the event they occur.

    The post Malware stirs database security concerns for banks appeared first on Remote DBA Experts.

    Oracle Priority Support Infogram for 03-JUL-2014

    Oracle Infogram - Thu, 2014-07-03 11:38

    New Releases
    Lots of big ones recently:
    Announcing Oracle VM 3.3 - Delivers Enterprise-Scale Performance Enhancements, from Oracle's Virtualization Blog.
    From ZDNet: Oracle VM 3.3 - another salvo in the virtual machine battle.
    From BPM for Government: BPM 12c is Now Available!!
    New Oracle Framework Targets Cross-Platform Mobile Developers, from Application Development Trends on MAF.
    BPM
    Using Oracle BPM Object Methods in Script Tasks (OBPM 12.1.3), from Venugopal Mangipudi's Blog.
    From AMIS TECHNOLOGY BLOG: BPM Suite 12c: Quick Start installation – 20 minutes and good to go.
    Also from AMIS, on SOA: SOA Suite 12c: Weekend Roundup.
    Fusion
    From Fusion Applications Developer Relations: June in Review.
    WebCenter
    How Oracle WebCenter Customers Build Digital Businesses: Contending with Digital Disruption, from the Oracle WebCenter Blog.
    RDBMS
    Restore datafile from service: A cool #Oracle 12c Feature, from The Oracle Instructor.
    OTN
    Another great month out there on OTN. Check out the Top 10 ArchBeat Videos for June 2014 on ArchBeat.
    MySQL
    From MySQL on Windows: HowTo: Integrating MySQL for Visual Studio with Connector/Net.
    SQL Developer
    From that JEFF SMITH: Clearing the Script Output Buffer in Oracle SQL Developer.
    Java
    Reza Rahman's Blog shared some material on the Java EE 8 roadmap here: Java Day Tokyo Trip Report.
    Programming
    Concurrent Crawling with Go and Python, from Venkata Mahalingam.
    And on the lighter side of coding: How to interpret human programming terms.
    PeopleSoft Turbocharged
    From Oracle Applications on Engineered Systems: PeopleSoft on Engineered Systems Documentation.
    EBS
    From the Oracle E-Business Suite Support Blog
    New Release of the PO Approval Analyzer!
    Posting Performance Issues Reported After 11.2.0.4 Database Upgrade
    New Troubleshooting Help for Holds Applied at Invoice Validation
    New OM 12.1 Cumulative Patch Released!
    During R12.2.3 Upgrade QP_UTIL_PUB Is Invalid
    Announcing RapidSR for Oracle Payables: Automated Troubleshooting (AT)
    New Service Contracts Functionality - Contracts Merge

    The Other

    Greg Pavlik - Thu, 2014-07-03 11:33
    It is the nature of short essays or speeches that they can at best explore the surface of an idea. This is a surprisingly difficult task, since ideas worth exploring usually need to be approached with some rigor. The easy use of the speech form is to promote an idea to listeners or readers who already share a common view - that is one reason speeches are effective forms for political persuasion for rallying true believers. It's much more difficult to create new vantage points or vistas into a new world - a sense of something grander that calls for further exploration.

    Yet this is exactly what Ryszard Kapuscinski accomplishes in his series of talks published as The Other. Here, the Polish journalist builds on his experience and, most importantly, on the reflections of the Lithuanian-Jewish philosopher Emmanuel Levinas to consider how the encounter with the Other, in a broad, cross-cultural sense, is the defining event - and opportunity - in late (or post) modernity. For Kapuscinski, the Other is specifically the non-European cultures in which he spent most of his career as a journalist. For another reader it might be someone very much like Kapuscinski himself.

    There are three simple points that Kapuscinski raises that bear attention:

    1) The era we live in provides a unique, interpersonal opportunity for encounter with the Other - which is to say that we are neither in the era of relative isolation from the Other that dominated much of human history, nor are we any longer in the phase of violent domination that marked the period of European colonial expansion. We have a chance to make space for encounter to be consistently about engagement and exchange, rather than conflict.

    2) This encounter cannot be primarily technical; it must be interpersonal. Technical means are not only anonymous but are more conducive to inculcating mass culture than to creating space for authentic personal engagement. The current period of human history - post-industrial, urbanized, technological - is given to mass culture and mass movements as a rule; this is accelerated by globalization and communications advances. And while it is clear that the early "psychological" literature of the crowd - and I am thinking not only of the trajectory set by Gustave LeBon, but also of the later and more mature reflections of Ortega y Gasset - was primarily reactionary, it nonetheless points consistently to the fact that the crowd involves not just a loss of identity, but a loss of the individual: it leaves little room for real encounter and exchange.

    While the increasing ability to encounter different cultures offers the possibility of real engagement,  at the same time modern mass culture is the number one threat to the Other - in that it subordinates the value of whatever is unique to whatever is both common and most importantly sellable. In visiting Ukraine over the last few years, what fascinated me the most were the things that made the country uniquely Ukrainian. Following a recent trip, I noted the following in a piece by New York Times columnist Nicholas Kristof on a visit to Karapchiv: "The kids here learn English and flirt in low-cut bluejeans. They listen to Rihanna, AC/DC and Taylor Swift. They have crushes on George Clooney and Angelina Jolie, watch “The Simpsons” and “Family Guy,” and play Grand Theft Auto. The school here has computers and an Internet connection, which kids use to watch YouTube and join Facebook. Many expect to get jobs in Italy or Spain — perhaps even America."

    What here makes the Other both unique and beautiful is being obliterated by mass culture. Kristof is, of course, a cheerleader for this tragedy, but the true opportunity, Kapuscinski suggests, lies in looking for ways to build up and offer support to the Other in encounter.

    3) Lastly and most importantly, for encounter with the Other to be one of mutual recognition and sharing, the personal encounter must have an ethical basis. Kapuscinski observes that the first half of the last century was dominated by Husserl and Heidegger - in other words by epistemic and ontological models. It is no accident, I think, that the same century was marred by enormities wrought by totalizing ideologies - where ethics is subordinated entirely, ideology can rage out of control. Kapuscinski follows Levinas in response - ultimately seeing the Other as a source of ethical responsibility is an imperative of the first order.

    The diversity of human cultures is, as Solzhenitsyn rightly noted, the "wealth of mankind, its collective personalities; the very least of them wears its own special colors and bears within itself a special facet of God's design." And yet it is only if we can encounter the Other in terms of mutual respect and self-confidence, in terms of exchange and recognition of value in the Other, that we can actually see the Other as a treasure - one that helps ground who I am as much as it reveals the treasure for what it is. And this is our main challenge - the other paths, conflict and exclusion, are paths we cannot afford to tread.

    Vishal Sikka's Appointment as Infosys CEO

    Abhinav Agarwal - Thu, 2014-07-03 09:21


    My article in the DNA on Vishal Sikka's appointment as CEO of Infosys was published on June 25, 2014.

    This is the full text of the article:


    Vishal Sikka's appointment as CEO of Infosys was by far the biggest news event for the Indian technology sector in some time. Sikka was most recently the Chief Technology Officer at the German software giant SAP, where he led the development of HANA - an in-memory analytics appliance that has proven, since its launch in 2010, to be the biggest challenger to Oracle's venerable flagship product, the Oracle Database. With the launch of Oracle Exalytics in 2012 and Oracle Database In-Memory this month, the final chapter and word on that battle between SAP and Oracle remains to be written. Vishal will watch that battle from the sidelines.

    By all accounts, Vishal Sikka is an extraordinary person, and Infosys has made what could well be the turning point for the iconic Indian software services company. If well executed, five years from now people will refer to this event as the one that catapulted Infosys into a different league altogether. However, there are several open questions, challenges, as well as opportunities that confront Infosys the company, Infoscians and shareholders, that Sikka will need to resolve.

    First off, is Sikka a "trophy CEO?" There will be more than one voice heard whispering that Sikka's appointment is more of a publicity gimmick meant to save face for its iconic co-founder, Narayan Murthy, who has been unable to right the floundering ship of the software services giant. Infosys has seen a steady stream of top-level attrition for some time, which had only accelerated after Murthy's return. The presence of his son Rohan Murthy was seen to grate on several senior executives, and also did not go down too well with corporate governance experts. Infosys had also lagged behind its peers in earnings growth. The hiring of a high-profile executive like Sikka has certainly restored much of the lost sheen for Infosys. To sustain that lustre, however, he will need to get some quick wins under his belt.

    The single biggest question on most people's minds is how well will the new CEO adapt to the challenge of running a services organisation. This is assuming that he sees Infosys' long term future in this area of services. Other key issues include reconciling the "people versus products" dilemma. Infosys lives and grows on the back of its ability to hire more people, place them on billable projects that are offshored, and then to keep its salary expenses low - i.e. a volume business with wafer thin margins that are constantly under pressure. This is different from the hiring philosophy adopted by leading software companies and startups around the world - which is to hire the best, from the best colleges, and provide them with a challenging and yet flexible work environment. It should be clear that a single company cannot have two diametrically opposite work cultures for any extended length of time. This, of course assumes, that Sikka sees a future in Infosys beyond labor cost-arbitraged services. Infosys' CEO, in an interview to the New York Times in 2005, had stated that he did not see the company as aspiring beyond that narrow focus. Whether Sikka subscribes to that view or not is a different question.

    In diversifying, it can be argued that IBM could serve as a model. It has developed excellence in the three areas of hardware, software, and services. But Infosys has neither a presence in hardware - and it is hard to imagine it getting into the hardware business for several reasons - nor does it have a particularly strong software products line of business. There is Finacle, but that too has not been performing too well. Sikka may see himself as the ideal person to incubate several successful products within Infosys. But there are several challenges here.

    Firstly, there is no company, with the arguable exception of IBM, that has achieved excellence in both services and products. Not Microsoft, not Oracle, not SAP. Sikka will have to decide where he needs to focus on. Stabilize the services business and develop niche but world-class products that are augmented by services, or build a small but strong products portfolio as a separate business that is hived off from the parent company - de-facto if not in reality. One cannot hunt with the hound and run with the hare. If he decides to focus on nurturing a products line of business, he leaves the company vulnerable to cut-throat competition on one hand and the exit of talented people looking for greener pastures on the other hand.

    Secondly, if Infosys under Sikka does get into products, then it will need to decide what products it builds. He cannot expect to build yet another database, or yet another operating system, or even yet another enterprise application and hope for stellar results. To use a much-used phrase, he will need to creatively disrupt the market. Here again, Sikka's pedigree points to one area - information and analytics. This is a hot area of innovation which finds itself at the intersection of multiple technology trends - cloud, in-memory computing, predictive analytics and data mining, unstructured data, social media, data visualizations, spatial analytics and location intelligence, and of course, the mother of all buzzwords - big data. A huge opportunity awaits at the intersection of analytics, the cloud, and specialized solutions. Should Infosys choose to walk down this path, the probability of success is more than fair given Sikka's background. His name will alone attract the best of talent from across the technology world. Also remember, the adoption of technology in India, despite its close to one billion mobile subscriber base, is still abysmally low. There is a crying need for innovative technology solutions that can be adopted widely and replicated across the country. The several new cities planned by the government itself presents Sikka and Infosys, and of course many other companies, with a staggering opportunity.

    Thirdly, the new CEO will have the benefit of an indulgent investor community, but not for long. Given the high hopes that everyone has from him, Sikka's honeymoon period with Dalal Street may last a couple of quarters, or perhaps even a year, but not much more. The clock is ticking. The world of technology, the world over, is waiting and watching.

    (The opinions expressed in this article are the author's own, and do not necessarily reflect the views of dna)

    Philosophy 22

    Jonathan Lewis - Thu, 2014-07-03 02:59

    Make sure you agree on the meaning of the jargon.

    If you had to vote would you say that the expressions “more selective” and “higher selectivity” are different ways of expressing the same idea, or are they exact opposites of each other ? I think I can safely say that I have seen people waste a ludicrous amount of time arguing past each other and confusing each other because they didn’t clarify their terms (and one, or both, parties actually misunderstood the terms anyway).

    Selectivity is a value between 0 and 1 that represents the fraction of data that will be selected – the higher the selectivity the more data you select.

    If a test is “more selective” then it is a harsher, more stringent, test and returns less data  (e.g. Oxford University is more selective than Rutland College of Further Education): more selective means lower selectivity.

    If there’s any doubt when you’re in the middle of a discussion – drop the jargon and explain the intention.

    Footnote

    If I ask:  “When you say ‘more selective’ do you mean ….”

    The one answer which is absolutely, definitely, unquestionably the wrong reply is: “No, I mean it’s more selective.”

     


    Taleo Interview Evaluations, Part 2

    Oracle AppsLab - Thu, 2014-07-03 02:20

    So, if you read Part 1, you’re all up to speed. If not, no worries. You might be a bit lost, but if you care, you can bounce over and come back for the thrilling conclusion.

    I first showed the Taleo Interview Evaluation Glass app and Android app at a Taleo and HCM Cloud customer expo in late April, and as I showed it, my story evolved.

    Demos are living organisms; the more you show them, the more you morph the story to fit the reactions you get. As I showed the Taleo Glass app, the demo became more about Glass and less about the story I was hoping to tell, which was about completing the interview evaluation more quickly to move along the hiring process.

    So, I began telling that story in context of allowing any user, with any device, to complete these evaluations quickly, from the heads-up hotness of Google Glass, all the way down the technology coolness scale to a boring old dumbphone with just voice and text capabilities.

    I used the latter example for two reasons. First, the juxtaposition of Google Glass and a dumbphone sending texts got a positive reaction and focused the demo around how we solved the problem vs. “is that Google Glass?”

    And second, I was already designing an app to allow a user with a dumbphone to complete an interview evaluation.

    Noel (@noelportugal) introduced me to Twilio (@twilio) years ago when he built the epic WebCenter Rock ‘em Sock ‘em Robots. Those robots punched based on text and voice input collected by Twilio.

    Side note, Noel has long been a fan of Twilio’s, and happily, they are an Oracle Partner. Ultan (@ultan) is hard at work dreaming up cool stuff we can do with Twilio, so stay tuned.

    Anyway, Twilio is the perfect service to power the app I had in mind. Shortly after the customer expo ended, I asked Raymond to build out this new piece, so I could have a full complement of demos to show that fit the full story.

    In about a week, Raymond was done, and we now have a holistic story to tell.

    The interface is dead simple. The user simply sends text messages to a specific number, using a small set of commands. First, sending “Taleo help” returns a list of the commands. Next, the user sends “Taleo eval requests” to retrieve a list of open interview evaluations.

    [Screenshot: text message exchange listing the open interview evaluations]

    The user then sends a command to start one of the numbered evaluations, e.g. “Start eval 4”, and each question is sent as a separate message.

    [Screenshots: evaluation questions delivered as individual text messages]

    When the final question has been answered, a summary of the user’s answers is sent, and the user can submit the evaluation by sending “Confirm submit.”

    [Screenshot: answer summary and the “Confirm submit” step]

     

    And that’s it. Elegant and simple and accessible to any manager, e.g. field managers who spend their days traveling between job sites. Coupled with the Glass app and the Android app, we’ve covered all the bases not already covered by Taleo’s web app and mobile apps.
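
    For the curious, the outbound half of that SMS conversation boils down to a single call against Twilio's REST API. Here is a rough sketch, with the account credentials and phone numbers as placeholders, of how one evaluation question could be pushed out as a text:

    curl -X POST "https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Messages.json" \
         -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
         --data-urlencode "From=+15551230001" \
         --data-urlencode "To=+15551230002" \
         --data-urlencode "Body=Q1: How would you rate the candidate's communication skills (1-5)?"

    The inbound commands ("Taleo help", "Start eval 4" and so on) travel the same road in reverse: Twilio posts each incoming text to a webhook, where the demo's logic parses the command and decides which message to send next.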

    As always, the disclaimer applies. This is not product. It’s simply a concept demo, built to show people the type of R&D we, Oracle Applications User Experience and this team, do. Not product, only research.

    Find the comments.

    GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 3: Setup Journalizing

    Rittman Mead Consulting - Wed, 2014-07-02 23:39

    After a short vacation, some exciting news, and a busy few weeks (including KScope14 in Seattle, WA), it’s time to get the “GoldenGate and Oracle Data Integrator – A Perfect Match in 12c” blog series rolling again. Hopefully readers can find some time between World Cup matches to try integrating ODI and GoldenGate on their own!

    To recap my previous two posts on this subject, I first started by showing the latest Information Management Reference Architecture at a high-level (described in further detail by Mark Rittman) and worked through the JAgent configuration, necessary for communication between ODI and GoldenGate. In the second post, I walked through the changes made to the GoldenGate JKM in ODI 12c and laid out the necessary edits for loading the Foundation layer at a high-level. Now, it’s time to make the edits to the JKM and set up the ODI metadata.

    Before I jump into the JKM customization, let’s go through a brief review of the foundation layer and its purpose. The foundation schema contains tables that are essentially duplicates of the source table structure, but with the addition of the foundation audit columns, described below, that allow for the storage of all transactional history in the tables.

    FND_SCN (System Change Number)
    FND_COMMIT_DATE (when the change was committed)
    FND_DML_TYPE (DML type for the transaction: insert, update, delete)

    The GoldenGate replicat parameter file must be set up to map the source transactions into the foundation tables using the INSERTALLRECORDS option. This is the same option that the replicat uses to load the J$ tables, allowing only inserts and no updates or deletes. A few changes to the JKM will allow us to choose whether or not we want to load the Foundation schema tables via GoldenGate.

    Edit the Journalizing Knowledge Module

    To start, make a copy of the “JKM Oracle to Oracle Consistent (OGG Online)” so we don’t modify the original. Now we’re ready to make our changes.

    Add New Options

    A couple of new Options will need to be added to enable the additional feature of loading the foundation schema, while still maintaining the original JKM code. Option values are set during the configuration of the JKM on the Model, but can also have a default in the JKM.

    APPLY_FOUNDATION

    [Screenshot: the new APPLY_FOUNDATION option in the JKM]

    This option, when true, will enable this step during the Start Journal process, allowing it to generate the source-to-foundation mapping statement in the Replicat (apply) parameter file.

    FND_LSCHEMA

    [Screenshot: the new FND_LSCHEMA option in the JKM]

    This option will be set to the Logical Schema name for the Foundation layer, and will be used to find the physical database schema name when output in the GoldenGate replicat parameter file.

    Add a New Task

    With the options created, we can now add the additional task to the JKM that will create the source to foundation table mappings in the GoldenGate replicat parameter file. The quickest way to add the task is to duplicate a current task. Open the JKM to the Tasks tab and scroll down to the “Create apply prm (3)” step. Right click the task and select Duplicate. A copy of the task will be created and in the order that we want, just after the step we duplicated.

    Rename the step to “Create apply prm (4) RM”, adding the additional RM tag so it’s easily identifiable as a custom step. From the properties, open the Edit Expression dialog for the Target Command. The map statement, just below the OdiOutFile line, will need to be modified. First, remove the IF statement code, as the execution of this step will be driven by the APPLY_FOUNDATION option being set to true.

    Here’s a look at the final code after editing.

    map <%= odiRef.getObjectName("L", odiRef.getJrnInfo("TABLE_NAME"), odiRef.getOggModelInfo("SRC_LSCHEMA"), "D") %>, TARGET <%= odiRef.getSchemaName("" + odiRef.getOption("FND_LSCHEMA") + "","D") %>.<%= odiRef.getJrnInfo("TABLE_NAME") %>, KEYCOLS (<%= odiRef.getColList("", "[COL_NAME]", ", ", "", "PK") %>, FND_SCN)<%if (!odiRef.getOption("NB_APPLY_PROCESS").equals("1")) {%>, FILTER (@RANGE(#ODI_APPLY_NUMBER,<%= nbApplyProcesses %>,<%= odiRef.getColList("", "[COL_NAME]", ", ", "", "PK") %>))<% } %> INSERTALLRECORDS,
    COLMAP (
    USEDEFAULTS,
    FND_COMMIT_DATE = @GETENV('GGHEADER' , 'COMMITTIMESTAMP'),
    FND_SCN = @GETENV('TRANSACTION' , 'CSN'),
    FND_DML_TYPE = @GETENV('GGHEADER' , 'OPTYPE')
    );

    The output of this step is going to be a mapping for each source-to-foundation table in the GoldenGate replicat parameter file, similar to this:

    map PM_SRC.SRC_CITY, TARGET EDW_FND.SRC_CITY, KEYCOLS (CITY_ID, FND_SCN) INSERTALLRECORDS,
    COLMAP (
    USEDEFAULTS,
    FND_COMMIT_DATE = @GETENV('GGHEADER' , 'COMMITTIMESTAMP'),
    FND_SCN = @GETENV('TRANSACTION' , 'CSN'),
    FND_DML_TYPE = @GETENV('GGHEADER' , 'OPTYPE')
    );

    The column mappings (COLMAP clause) are hard-coded into the JKM, with the parameter USEDEFAULTS mapping each column one-to-one. We also hard-code each foundation audit column mapping to the appropriate environment variable from the GoldenGate trail file. Learn more about the GETENV GoldenGate function here.

    The bulk of the editing on this step is done to the MAP statement. The out-of-the-box JKM is setup to apply transactional changes to both the J$, or change table, and fully replicated table. Now we need to add the mapping to the foundation table. In order to do so, we first need to identify the foundation schema and table name for the target table using the ODI Substitution API.

    map ... TARGET <%= odiRef.getSchemaName("" + odiRef.getOption("FND_LSCHEMA") + "", "D") %> ...

    The nested Substitution API call allows us to get the physical database schema name based on the ODI Logical Schema that we will set in the option FND_LSCHEMA, during setup of the JKM on the ODI Model. Then, we concatenate the target table name with a dot (.) in between to get the fully qualified table name (e.g. EDW_FND.SRC_CITY).

    ... KEYCOLS (<%= odiRef.getColList("", "[COL_NAME]", ", ", "", "PK") %>, FND_SCN) ...

    We also added the FND_SCN to the KEYCOLS clause, forcing the uniqueness of each row in the foundation tables. Because we only insert records into this table, the natural key will most likely be duplicated numerous times should a record be updated or deleted on the source.

    Set Options

    The previously created task,  “Create apply prm (4) RM”, should be set to execute only when the APPLY_FOUNDATION option is “true”. On this step, go to the Properties window and choose the Options tab. Deselect all options except APPLY_FOUNDATION, and when Start Journal is run, this step will be skipped unless APPLY_FOUNDATION is true.

    [Image: setting the APPLY_FOUNDATION option on the JKM task]

    Edit Task

    Finally, we need to make a simple change to the “Execute apply commands online” task. First, add the custom step indicator (in my example, RM) to the end of the task name. In the target command expression, comment out the “start replicat …” command by using a double-dash.

    --start replicat ...

    This prevents GoldenGate from starting the replicat process automatically, as we’ll first need to complete an initial load of the source data to the target before we can begin replication of new transactions.

    Additional Setup

    The GoldenGate Manager and JAgent are ready to go, as is the customized “JKM Oracle to Oracle Consistent (OGG Online)” Journalizing Knowledge Module. Now we need to set up the Topology for both GoldenGate and the data sources.

    Setup GoldenGate Topology - Data Servers

    In order to properly use the “online” integration between GoldenGate and Oracle Data Integrator, a connection must be set up for the GoldenGate source and target. These will be created as ODI Data Servers, just as you would create an Oracle database connection. But, rather than provide a JDBC URL, we will enter connection information for the JAgent that we configured in the initial post in the series.

    First, open up the Physical Architecture under the Topology navigator and find the Oracle GoldenGate technology. Right-click and create a new Data Server.

    [Image: creating a new Oracle GoldenGate Data Server]

    Fill out the information regarding the GoldenGate JAgent and Manager. To find the JAgent port, browse to the GG_HOME/cfg directory and open “Config.properties” in a text viewer. Towards the bottom you will find the “jagent.rmi.port” property, which is used when the agent type is set to OEM.

    ####################################################################
    ## jagent.rmi.port ###
    ## RMI Port which EM Agent will use to connect to JAgent ###
    ## RMI Port will only be used if agent.type.enabled=OEM ###
    ####################################################################
    jagent.rmi.port=5572

    The rest of the connection information can be recalled from the JAgent setup.

    [Image: GoldenGate Data Server connection settings]

    Once completed, test the connection to ensure all of the parameters are correct. Be sure to setup a Data Server for both the source and target, as each will have its own JAgent connection information.

    Setup GoldenGate Topology - Schemas

    Now that the connection is set, the Physical Schema for both the GoldenGate source and target must be created. These schemas tie directly to the GoldenGate process groups and will be the name of the generated parameter files. Under the source Data Server, create a new Physical Schema. Choose the process type of “Capture”, provide a name (8 characters or less due to GoldenGate restrictions), and enter the trail file paths for the source and target trail files.

    Create the Logical Schema just as you would with any other ODI Technology, and the extract process group schema is set.

    For the target, or replicat, process group, perform the same actions on the GoldenGate target Data Server. This time, we just need to specify the target trail file directory, the discard directory (where GoldenGate reporting and discarded records will be stored), and the source definitions directory. The source definitions file is a GoldenGate representation of the source table structure, used when the source and target table structures do not match. The Online JKM will create and place this file in the source definitions directory.

    Again, set up the Logical Schema as usual and the connections and process group schemas are ready to go!

    The final piece of the puzzle is to setup the source and target data warehouse Data Servers, Physical Schemas, and Logical Schemas. Use the standard best practices for this setup, and then it’s time to create ODI Models and start journalizing. In the next post, Part 4 of the series, we’ll walk through applying the JKM to the source Model and start journalizing using the Online approach to GoldenGate and ODI integration.

    Categories: BI & Warehousing

    Fall 2012 US Distance Education Enrollment: Now viewable by each state

    Michael Feldstein - Wed, 2014-07-02 23:15

    Starting in late 2013, the National Center for Education Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) started providing preliminary data for the Fall 2012 term that for the first time includes online education. Using Tableau (thanks to Justin Menard for prompting me to use this), we can now see a profile of online education in the US for degree-granting colleges and universities, broken out by sector and for each state.

    Please note the following:

    • For the most part distance education and online education terms are interchangeable, but they are not equivalent as DE can include courses delivered by a medium other than the Internet (e.g. correspondence course).
    • There are three tabs below – the first shows totals for the US by sector and by level (grad, undergrad); the second also shows the data for each state (this is new); the third shows a map view.


    The post Fall 2012 US Distance Education Enrollment: Now viewable by each state appeared first on e-Literate.

    Coherence Adapter Configuration

    Antony Reynolds - Wed, 2014-07-02 23:05
    SOA Suite 12c Coherence Adapter

    The release of SOA Suite 12c sees the addition of a Coherence Adapter to the list of Technology Adapters that are licensed with the SOA Suite.  In this entry I provide an introduction to configuring the adapter and using the different operations it supports.

    The Coherence Adapter provides access to Oracle’s Coherence Data Grid.  The adapter provides access to the cache capabilities of the grid; it does not currently support the many other features of the grid, such as entry processors – more on this at the end of the blog.

    Previously, if you wanted to use Coherence from within SOA Suite, you either used the built-in caching capability of OSB or resorted to writing Java code wrapped as a Spring component.  The new adapter significantly simplifies simple cache access operations.
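
    As a rough illustration of what the hand-written approach involved (the class and cache names below are assumptions for this sketch, not part of any SOA Suite sample), a Spring-wrapped helper typically boiled down to direct NamedCache calls:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Illustrative only: the kind of helper previously wrapped as a Spring component
    public class CacheClient {

        // Cache name is an assumption for this sketch
        private final NamedCache cache = CacheFactory.getCache("TestCache");

        public void put(String key, Object value) {
            cache.put(key, value);   // last write wins
        }

        public Object get(String key) {
            return cache.get(key);   // returns null if the key is not present
        }

        public void remove(String key) {
            cache.remove(key);
        }
    }

    For simple put/get/remove style access, the adapter removes the need for this plumbing entirely.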

    Configuration

    When creating a SOA domain the Coherence adapter is shipped with a very basic configuration that you will probably want to enhance to support real requirements.  In this section I look at the configuration required to use the Coherence Adapter in the real world.

    Activate Adapter

    The Coherence Adapter is not targeted at the SOA server by default, so this targeting needs to be performed from within the WebLogic console before the adapter can be used.

    Create a cache configuration file

    The Coherence Adapter provides a default connection factory to connect to an out-of-the-box Coherence cache, and also a cache called adapter-local.  This is helpful as an example, but it is good practice to have only a single type of object within a Coherence cache, so we will need more than one.  Without multiple caches it is hard to clean out all the objects of a particular type.  Having multiple caches also allows us to specify different properties for each cache.  The following is a sample cache configuration file used in the example.

    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>TestCache</cache-name>
          <scheme-name>transactional</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <transactional-scheme>
          <scheme-name>transactional</scheme-name>
          <service-name>DistributedCache</service-name>
          <autostart>true</autostart>
        </transactional-scheme>
      </caching-schemes>
    </cache-config>

    This defines a single cache called TestCache.  This is a distributed cache, meaning that the entries in the cache will be distributed across the grid.  This enables you to scale the storage capacity of the grid by adding more servers.  Additional caches can be added to this configuration file by adding additional <cache-mapping> elements.

    The cache configuration file is referenced by the adapter connection factory and so needs to be on a file system accessible to all servers running the Coherence Adapter.  It is not referenced from the composite.

    Create a Coherence Adapter Connection Factory

    We find the correct cache configuration by using a Coherence Adapter connection factory.  The adapter ships with a few sample connection factories but we will create a new one.  To create a new connection factory we do the following:

    1. On the Outbound Connection Pools tab of the Coherence Adapter deployment we select New to create a new connection factory.
    2. Choose the javax.resource.cci.ConnectionFactory group.
    3. Provide a JNDI name; although you can use any name, something along the lines of eis/Coherence/Test is good practice (EIS tells us this is an adapter JNDI, Coherence tells us it is the Coherence Adapter, and then we can identify which adapter configuration we are using).
    4. If requested to create a Plan.xml then make sure that you save it in a location available to all servers.
    5. From the outbound connection pool tab select your new connection factory so that you can configure it from the properties tab.
      • Set the CacheConfigLocation to point to the cache configuration file created in the previous section.
      • Set the ClassLoaderMode to CUSTOM.
      • Set the ServiceName to the name of the service used by your cache in the cache configuration file created in the previous section.
      • Set the WLSExtendProxy to false unless your cache configuration file is using an extend proxy.
      • If you plan on using POJOs (Plain Old Java Objects) with the adapter rather than XML then you need to point the PojoJarFile at the location of a jar file containing your POJOs (a sketch of such a POJO appears below).
      • Make sure to press enter in each field after entering your data.  Remember to save your changes when done.

    You may need to stop and restart the adapter to get it to recognize the new connection factory.
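
    Since the PojoJarFile option was mentioned above, the following hypothetical class shows the kind of entity such a jar might contain; it is just a plain serializable Java bean (class and field names are assumptions for illustration, not part of the adapter):

    import java.io.Serializable;

    // Hypothetical cached entity packaged into the PojoJarFile
    public class Customer implements Serializable {

        private String customerId;   // a natural choice for the cache key
        private String name;

        public Customer() {
        }

        public Customer(String customerId, String name) {
            this.customerId = customerId;
            this.name = name;
        }

        public String getCustomerId() { return customerId; }
        public void setCustomerId(String customerId) { this.customerId = customerId; }

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }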

    Operations

    To demonstrate the different operations I created a WSDL with the following operations:

    • put – put an object into the cache with a given key value.
    • get – retrieve an object from the cache by key value.
    • remove – delete an object from the cache by key value.
    • list – retrieve all the objects in the cache.
    • listKeys – retrieve all the keys of the objects in the cache.
    • removeAll – remove all the objects from the cache.

    I created a composite based on this WSDL that calls a different adapter reference for each operation.  Details on configuring the adapter within a composite are provided in the Configuring the Coherence Adapter section of the documentation.

    I used a Mediator to map the input WSDL operations to the individual adapter references.

    Schema

    The input schema is described below.

    This type of pattern is likely to be used in all XML types stored in a Coherence cache.  The XMLCacheKey element represents the cache key; in this schema it is a string, but it could be another primitive type.  The other fields in the cached object are represented by a single XMLCacheContent field, but in a real example you are likely to have multiple fields at this level.  Wrapper elements are provided for lists of elements (XMLCacheEntryList) and lists of cache keys (XMLCacheEntryKeyList).  XMLEmpty is used for operations that don’t require an input.

    Put Operation

    The put operation takes an XMLCacheEntry as input and passes this straight through to the adapter.  The XMLCacheKey element in the entry is also assigned to the jca.coherence.key property.  This sets the key for the cached entry.  The adapter also supports automatically generating a key, which is useful if you don’t have a convenient field in the cached entity.  The cache key is always returned as the output of this operation.

    Get Operation

    The get operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property. This sets the key for the entry to be retrieved.

    Remove Operation

    The remove operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property. This sets the key for the entry to be deleted.

    RemoveAll Operation

    This is similar to the remove operation but instead of using a key as input to the remove operation it uses a filter.  The filter could be overridden by using the jca.coherence.filter property but for this operation it was permanently set in the adapter wizard to be the following query:

    key() != ""

    This selects all objects whose key is not equal to the empty string.  All objects should have a key so this query should select all objects for deletion.

    Note that there appears to be a bug in the return value: it is empty rather than containing the expected RemoveResponse element with a Count child element.  Note that the documentation states:

    When using a filter for a Remove operation, the Coherence Adapter does not report the count of entries affected by the remove operation, regardless of whether the remove operation is successful.

    When using a key to remove a specific entry, the Coherence Adapter does report the count, which is always 1 if a Coherence Remove operation is successful.

    Although this could be interpreted as meaning an empty part is returned, an empty part is a violation of the WSDL contract.

    List Operation

    The list operation takes no input and returns the result list returned by the adapter.  The adapter also supports querying using a filter.  This filter is essentially the where clause of a Coherence Query Language statement.  When using XML types as cached entities, only the key() field can be tested, for example using a clause such as:

    key() LIKE "Key%1"

    This filter would match all entries whose key starts with “Key” and ends with “1”.

    ListKeys Operation

    The listKeys operation is essentially the same as the list operation except that only the keys are returned rather than the whole object.

    Testing

    To test the composite I used the new 12c Test Suite wizard to create a number of test suites.  The test suites should be executed in the following order:

    1. CleanupTestSuite has a single test that removes all the entries from the cache used by this composite.
    2. InitTestSuite has 3 tests that insert a single record into the cache.  The returned key is validated against the expected value.
    3. MainTestSuite has 5 tests that list the elements and keys in the cache and retrieve individual inserted elements.  This tests that the items inserted in the previous test are actually in the cache.  It also tests the get, list and listKeys operations and makes sure they return the expected results.
    4. RemoveTestSuite has a single test that removes an element from the cache and tests that the count of removed elements is 1.
    5. ValidateRemoveTestSuite is similar to MainTestSuite but verifies that the element removed by the previous test suite has actually been removed.

    Use Case

    One example of using the Coherence Adapter is to create a shared memory region that allows SOA composites to share information.  An example of this is provided by Lucas Jellema in his blog entry First Steps with the Coherence Adapter to create cross instance state memory.

    However there is a problem in creating global variables that can be updated by multiple instances at the same time.  In this case the get and put operations provided by the Coherence Adapter follow a last-write-wins model.  This can be avoided in Coherence by using an Entry Processor to update the entry in the cache, but currently entry processors are not supported by the Coherence Adapter.  In this case it is still necessary to use Java to invoke the entry processor.
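
    While the adapter cannot invoke them, the entry processor itself is plain Java.  The sketch below (class, cache and key names are assumptions for illustration, not part of the sample code) shows an atomic counter update of the kind that avoids the last-write-wins problem:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.InvocableMap;
    import com.tangosol.util.processor.AbstractProcessor;

    // Illustrative entry processor: increments a counter held in the cache.
    // The processor executes on the member that owns the entry, so concurrent
    // callers cannot overwrite each other's updates the way plain get/put can.
    public class IncrementProcessor extends AbstractProcessor {

        @Override
        public Object process(InvocableMap.Entry entry) {
            Integer current = (Integer) entry.getValue();
            int next = (current == null ? 0 : current) + 1;
            entry.setValue(next);
            return next;
        }

        // Example invocation from a Java client
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("TestCache");
            Object newValue = cache.invoke("globalCounter", new IncrementProcessor());
            System.out.println("Counter is now " + newValue);
        }
    }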

    Sample Code

    The sample code I refer to above is available for download and consists of two JDeveloper projects, one with the cache config file and the other with the Coherence composite.

    • CoherenceConfig has the cache config file that must be referenced by the connection factory properties.
    • CoherenceSOA has a composite that supports the WSDL introduced at the start of this blog along with the test cases mentioned at the end of the blog.

    The Coherence Adapter is a really exciting new addition to the SOA developer’s toolkit; hopefully this article will help you make use of it.