Feed aggregator

Yesterday's AWR Straight to the Goal

Yann Neuhaus - Tue, 2015-04-14 06:43

Yesterday I was a speaker at Collaborate15 and gave my presentation about reading an AWR report. That was great: many people (and not enough seats).

Here are some answers to questions that came up afterwards.

Cloning a PDB from a standby database

Yann Neuhaus - Mon, 2015-04-13 05:14

Great events like IOUG Collaborate are a good way to meet experts we know through blogs, Twitter, etc. Yesterday evening, with nice music in the background, I was talking with Leighton Nelson about cloning PDB databases. Don't miss his session today if you are in Las Vegas. The big problem with PDB cloning is that the source must be read-only. The reason is that it works like transportable tablespaces (except that it can transport the datafiles through a database link, and that we transport SYSTEM as well instead of having to import metadata). There is no redo shipping/apply here, so the datafiles must be consistent.

Obviously, being read-only is a problem when you want to clone from production.

But if you have a standby database, can you open it read-only and clone a pluggable database from there? From what we know, it should be possible, but better to test it.

Here is my source - a single-tenant standby database opened read-only:

SQL> connect sys/oracle@//192.168.78.105/STCDB as sysdba
Connected.
SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> select name,open_mode,database_role from v$database;

NAME      OPEN_MODE            DATABASE_ROLE
--------- -------------------- ----------------
STCDB     READ ONLY            PHYSICAL STANDBY

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
STDB1                          MOUNTED

Then from the destination I define a database link to it:

SQL> connect sys/oracle@//192.168.78.113/CDB as sysdba
Connected.
SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
CDB       READ WRITE

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB                            READ WRITE

SQL>
SQL> create database link DBLINK_TO_STCDB connect to system identified by oracle using '//192.168.78.105/STCDB';

Database link created.

and create a pluggable database from it:

SQL> create pluggable database STDB2 from STDB1@DBLINK_TO_STCDB;

Pluggable database created.

SQL> alter pluggable database STDB2 open;

Pluggable database altered.

So yes, this is possible. And you don't need Active Data Guard for that. As long as you can stop the apply for the time it takes to transfer the datafiles, this is a solution for cloning. Of course, just do one clone, and if you need others you can make them from that first clone. And within the same CDB they can be thin clones if you can use snapshots.
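
For reference, here is a minimal sketch of pausing the apply around the clone (assuming the standby is managed directly from SQL*Plus rather than through the Data Guard broker):

-- on the standby: stop redo apply, then open read-only for the duration of the clone
alter database recover managed standby database cancel;
alter database open read only;

-- on the destination CDB, run the clone:
-- create pluggable database STDB2 from STDB1@DBLINK_TO_STCDB;

-- afterwards, return the standby to mount and resume redo apply
shutdown immediate
startup mount
alter database recover managed standby database disconnect from session;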

OK, it's 5 a.m. here. As usual, the jet lag woke me up a bit early, so that was a good occasion to test what we discussed yesterday...

Data Mining Scoring Development Process

Dylan Wan - Thu, 2015-04-02 23:39

I think that the process of building a data mining scoring engine is similar to developing an application.


We have requirements analysis, functional design, technical design, coding, testing, deployment, and other phases.



Categories: BI & Warehousing

New blog to handle the PJC/Bean articles

Francois Degrelle - Mon, 2015-03-30 12:06
Here is the link to another place that stores the PJC/Bean articles without ads: http://forms.pjc.bean.blog.free.fr/ Francois

dramatic differences of in memory scanning performance on range queries

Karl Reitschuster - Tue, 2015-02-17 11:07
Given the following two identical tables, replicated in memory with the Oracle In-Memory option (one table created out of the other), the same SQL runs against both.
Each table contains 24 million rows.

Same structure ...
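
For context, a minimal sketch of such a setup (table names are hypothetical; this assumes a 12.1.0.2 database with the In-Memory option licensed and INMEMORY_SIZE configured):

-- create the second table out of the first
create table t2 as select * from t1;

-- mark both tables for the In-Memory column store
alter table t1 inmemory;
alter table t2 inmemory;

-- population happens on the first scan by default; check its status afterwards
select segment_name, populate_status, bytes_not_populated from v$im_segments;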

Setting up Xubuntu in Lenovo Flex2 14D

Vattekkat Babu - Tue, 2014-09-16 00:29

Lenovo Flex2 14D is a good laptop with decent build quality, light weight, a 14" screen and a touch screen for those who like it. The AMD A6 processor version is reasonably priced too.

It comes pre-loaded with Windows 8.1 and a bunch of Lenovo software. If you want to get this to dual boot with Ubuntu Linux, here are the specific fixes you need to do.

Slicing the EDG

Antony Reynolds - Tue, 2014-08-19 20:24
Different SOA Domain Configurations

In this blog entry I would like to introduce three different configurations for a SOA environment.  I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion.  For each possible deployment architecture I have identified some of the advantages.

Super Domain

This is a single EDG style domain for everything needed for SOA/OSB.   It extends the standard EDG slightly but otherwise assumes a single “super” domain.

This is basically the SOA EDG.  I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies.

Key Points

  • Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if the rest of the domain is unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service.
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
Benefits
  • Single Administration Point (1 Admin Server)
  • Closely follows EDG with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
Drawbacks
  • Patching is an all or nothing affair.
  • Startup time for SOA may be slow if large number of composites deployed.
Multiple Domains

This extends the EDG into multiple domains, allowing separate management and update of these domains.  I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence etc.

SOA & BAM are kept in the same domain as little benefit is obtained by separating them.

Key Points

  • Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service.
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
Benefits
  • Follows EDG but in separate domains and with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
  • Patch lifecycles of OSB/SOA/JMS are no longer lock-stepped.
  • JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
  • OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
  • All domains use same OWSM policy store (MDS-WSM).
Drawbacks
  • Multiple domains to manage and configure.
  • Multiple Admin servers (single view requires use of Grid Control)
  • Multiple Admin servers/WSM clusters waste resources.
  • Additional homes needed to enjoy benefits of separate patching.
  • Cross domain trust needs setting up to simplify cross domain interactions.
  • Startup time for SOA may be slow if large number of composites deployed.
Shared Service Environment

This model extends the previous multiple domain arrangement to provide a true shared service environment.

This extends the previous model by allowing multiple additional SOA domains and/or other domains to take advantage of the shared services.  Only one non-shared domain is shown, but there could be multiple, allowing groups of applications to share patching independent of other application groups.

Key Points

  • Separate JMS allows those servers to be kept up separately from the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
  • JMS servers are only used to host application specific JMS destinations, SOA/OSB JMS destinations remain in relevant SOA/OSB managed servers.
  • Separate Coherence servers allow OSB cache to be offloaded from OSB servers.
  • Use of Coherence by other components as a shared infrastructure data grid service
  • Coherence cluster may be managed by WLS but more likely run as a standalone Coherence cluster.
  • Shared SOA Domain hosts
    • Human Workflow Tasks
    • BAM
    • Common "utility" composites
  • Single OSB domain provides "Enterprise Service Bus"
  • All domains use same OWSM policy store (MDS-WSM)
Benefits
  • Follows EDG but in separate domains and with addition of application specific JMS servers and standalone Coherence servers for OSB caching and application specific caches.
  • Coherence grid can be scaled independent of OSB/SOA.
  • JMS queues provide for inter-application communication.
  • Patch lifecycles of OSB/SOA/JMS are no longer lock-stepped.
  • JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
  • OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
  • All domains use same OWSM policy store (MDS-WSM).
  • Supports large numbers of deployed composites in multiple domains.
  • Single URL for Human Workflow end users.
  • Single URL for BAM end users.
Drawbacks
  • Multiple domains to manage and configure.
  • Multiple Admin servers (single view requires use of Grid Control)
  • Multiple Admin servers/WSM clusters waste resources.
  • Additional homes needed to enjoy benefits of separate patching.
  • Cross domain trust needs setting up to simplify cross domain interactions.
  • Human Workflow needs to be specially configured to point to shared services domain.
Summary

The alternatives in this blog allow for patching to have different impacts, depending on the model chosen.  Each organization must decide the tradeoffs for itself.  One extreme is to go for the shared services model and have one domain per SOA application.  This requires a lot of administration of the multiple domains.  The other extreme is to have a single super domain.  This makes the entire enterprise susceptible to an outage at the same time due to patching or other domain level changes.  Hopefully this blog will help your organization choose the right model for you.

How to beat workday blues?

Vattekkat Babu - Fri, 2014-08-15 06:37

Let us face it - all of us feel like we have achieved or done very little after spending a long day away from family. Then you look back and find that you could've spent some of that time with family at least!

I've been observing my work habits a lot and I think I have found out something that works for me.

I am summarizing these as a NOT-TODO list of 3 items. I am a software engineer by profession and by passion.

Has this worked for me? Absolutely; much better than when I was not following these rules.

Coherence Adapter Configuration

Antony Reynolds - Wed, 2014-07-02 23:05
SOA Suite 12c Coherence Adapter

The release of SOA Suite 12c sees the addition of a Coherence Adapter to the list of Technology Adapters that are licensed with the SOA Suite.  In this entry I provide an introduction to configuring the adapter and using the different operations it supports.

The Coherence Adapter provides access to Oracle's Coherence Data Grid.  The adapter provides access to the cache capabilities of the grid; it does not currently support the many other features of the grid, such as entry processors (more on this at the end of the blog).

Previously, if you wanted to use Coherence from within SOA Suite, you either used the built-in caching capability of OSB or resorted to writing Java code wrapped as a Spring component.  The new adapter significantly simplifies simple cache access operations.

Configuration

When creating a SOA domain the Coherence Adapter is shipped with a very basic configuration that you will probably want to enhance to support real requirements.  In this section I look at the configuration required to use the Coherence Adapter in the real world.

Activate Adapter

The Coherence Adapter is not targeted at the SOA server by default, so this targeting needs to be performed from within the WebLogic console before the adapter can be used.

Create a cache configuration file

The Coherence Adapter provides a default connection factory to connect to an out-of-the-box Coherence cache, and also a cache called adapter-local.  This is helpful as an example, but it is good practice to only have a single type of object within a Coherence cache, so we will need more than one.  Without multiple caches it is hard to clean out all the objects of a particular type.  Having multiple caches also allows us to specify different properties for each cache.  The following is a sample cache configuration file used in the example.

<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>TestCache</cache-name>
      <scheme-name>transactional</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <transactional-scheme>
      <scheme-name>transactional</scheme-name>
      <service-name>DistributedCache</service-name>
      <autostart>true</autostart>
    </transactional-scheme>
  </caching-schemes>
</cache-config>

This defines a single cache called TestCache.  This is a distributed cache, meaning that the entries in the cache will be distributed across the grid.  This enables you to scale the storage capacity of the grid by adding more servers.  Additional caches can be added to this configuration file by adding additional <cache-mapping> elements.

The cache configuration file is referenced by the adapter connection factory and so needs to be on a file system accessible to all servers running the Coherence Adapter.  It is not referenced from the composite.

Create a Coherence Adapter Connection Factory

We find the correct cache configuration by using a Coherence Adapter connection factory.  The adapter ships with a few sample connection factories, but we will create a new one.  To create a new connection factory we do the following:

  1. On the Outbound Connection Pools tab of the Coherence Adapter deployment we select New to create a new connection factory.
  2. Choose the javax.resource.cci.ConnectionFactory group.
  3. Provide a JNDI name; although you can use any name, something along the lines of eis/Coherence/Test is a good practice (EIS tells us this is an adapter JNDI, Coherence tells us it is the Coherence Adapter, and the last part identifies which adapter configuration we are using).
  4. If requested to create a Plan.xml then make sure that you save it in a location available to all servers.
  5. From the outbound connection pool tab select your new connection factory so that you can configure it from the properties tab.
    • Set the CacheConfigLocation to point to the cache configuration file created in the previous section.
    • Set the ClassLoaderMode to CUSTOM.
    • Set the ServiceName to the name of the service used by your cache in the cache configuration file created in the previous section.
    • Set the WLSExtendProxy to false unless your cache configuration file is using an extend proxy.
    • If you plan on using POJOs (Plain Old Java Objects) with the adapter rather than XML then you need to point the PojoJarFile at the location of a jar file containing your POJOs.
    • Make sure to press enter in each field after entering your data.  Remember to save your changes when done.

You may well need to stop and restart the adapter to get it to recognize the new connection factory.

Operations

To demonstrate the different operations I created a WSDL with the following operations:

  • put – put an object into the cache with a given key value.
  • get – retrieve an object from the cache by key value.
  • remove – delete an object from the cache by key value.
  • list – retrieve all the objects in the cache.
  • listKeys – retrieve all the keys of the objects in the cache.
  • removeAll – remove all the objects from the cache.

I created a composite based on this WSDL that calls a different adapter reference for each operation.  Details on configuring the adapter within a composite are provided in the Configuring the Coherence Adapter section of the documentation.

I used a Mediator to map the input WSDL operations to the individual adapter references.

Schema

The input schema is described below.

This type of pattern is likely to be used in all XML types stored in a Coherence cache.  The XMLCacheKey element represents the cache key; in this schema it is a string, but it could be another primitive type.  The other fields in the cached object are represented by a single XMLCacheContent field, but in a real example you are likely to have multiple fields at this level.  Wrapper elements are provided for lists of elements (XMLCacheEntryList) and lists of cache keys (XMLCacheEntryKeyList).  XMLEmpty is used for operations that don't require an input.

Put Operation

The put operation takes an XMLCacheEntry as input and passes this straight through to the adapter.  The XMLCacheKey element in the entry is also assigned to the jca.coherence.key property.  This sets the key for the cached entry.  The adapter also supports automatically generating a key, which is useful if you don’t have a convenient field in the cached entity.  The cache key is always returned as the output of this operation.

Get Operation

The get operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property. This sets the key for the entry to be retrieved.

Remove Operation

The remove operation takes an XMLCacheKey as input and assigns this to the jca.coherence.key property. This sets the key for the entry to be deleted.

RemoveAll Operation

This is similar to the remove operation but instead of using a key as input to the remove operation it uses a filter.  The filter could be overridden by using the jca.coherence.filter property but for this operation it was permanently set in the adapter wizard to be the following query:

key() != ""

This selects all objects whose key is not equal to the empty string.  All objects should have a key so this query should select all objects for deletion.

Note that there appears to be a bug in the return value.  The return value is empty rather than having the expected RemoveResponse element with a Count child element.  Note that the documentation states:

When using a filter for a Remove operation, the Coherence Adapter does not report the count of entries affected by the remove operation, regardless of whether the remove operation is successful.

When using a key to remove a specific entry, the Coherence Adapter does report the count, which is always 1 if a Coherence Remove operation is successful.

Although this could be interpreted as meaning an empty part is returned, an empty part is a violation of the WSDL contract.

List Operation

The list operation takes no input and returns the result list returned by the adapter.  The adapter also supports querying using a filter.  This filter is essentially the where clause of a Coherence Query Language statement.  When using XML types as cached entities, only the key() field can be tested, for example using a clause such as:

key() LIKE "Key%1"

This filter would match all entries whose key starts with “Key” and ends with “1”.

ListKeys Operation

The listKeys operation is essentially the same as the list operation except that only the keys are returned rather than the whole object.

Testing

To test the composite I used the new 12c Test Suite wizard to create a number of test suites.  The test suites should be executed in the following order:

  1. CleanupTestSuite has a single test that removes all the entries from the cache used by this composite.
  2. InitTestSuite has 3 tests that insert a single record into the cache.  The returned key is validated against the expected value.
  3. MainTestSuite has 5 tests that list the elements and keys in the cache and retrieve individual inserted elements.  This tests that the items inserted in the previous test are actually in the cache.  It also tests the get, list and listKeys operations and makes sure they return the expected results.
  4. RemoveTestSuite has a single test that removes an element from the cache and tests that the count of removed elements is 1.
  5. ValidateRemoveTestSuite is similar to MainTestSuite but verifies that the element removed by the previous test suite has actually been removed.
Use Case

One example of using the Coherence Adapter is to create a shared memory region that allows SOA composites to share information.  An example of this is provided by Lucas Jellema in his blog entry First Steps with the Coherence Adapter to create cross instance state memory.

However, there is a problem in creating global variables that can be updated by multiple instances at the same time.  The get and put operations provided by the Coherence Adapter support a last-write-wins model.  This can be avoided in Coherence by using an Entry Processor to update the entry in the cache, but currently entry processors are not supported by the Coherence Adapter.  For now it is still necessary to use Java to invoke an entry processor.

Sample Code

The sample code I refer to above is available for download and consists of two JDeveloper projects, one with the cache config file and the other with the Coherence composite.

  • CoherenceConfig has the cache config file that must be referenced by the connection factory properties.
  • CoherenceSOA has a composite that supports the WSDL introduced at the start of this blog along with the test cases mentioned at the end of the blog.

The Coherence Adapter is a really exciting new addition to the SOA developer's toolkit; hopefully this article will help you make use of it.

Integration Hub – Branding

Kasper Kombrink - Mon, 2014-06-23 04:57
The Integration Hub has come a long way since I first saw it as the Enterprise Portal 8.8. The biggest selling point in my opinion has always been the branding features. Even though the options never really changed, they did evolve

Continue reading

Customize OBIEE login page

Kasper Kombrink - Thu, 2014-06-19 05:15
A login window never really meets requirements (or it is just plain ugly to look at). The typical PeopleSoft login window is one of them; all the languages you will never install is the most-heard remark from (super)users. Another

Continue reading

brew install sqlplus

Dominic Delmolino - Tue, 2014-05-13 18:59

Gee, that didn’t work.

For those of you wondering about the title of this post, I’m referring to the brew package manager for Mac OS — a nice utility for installing Unix-like packages on Mac OS similar to how yum / apt-get can be used on Linux.

I particularly like the way brew uses /usr/local and symlinks for clean installations of software without messing up the standard Mac paths.

Unfortunately, there isn’t a brew “formula” for installing sqlplus and the instant client libraries (and probably never will be due to licensing restrictions), but we can come close using ideas from Oracle ACE Ronald Rood and his blog post Oracle Client 11gR2 (11.2.0.3) for Apple Mac OS X (Intel).

Go there now and read up through “unzipping the files” — after that, return here and we’ll see how to simulate a brew installation.

Organize the software

mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.4.0/bin
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.4.0/lib
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.4.0/jdbc/lib
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.4.0/rdbms/jlib
mkdir -p /usr/local/Oracle/product/instantclient/11.2.0.4.0/sqlplus/admin

Change to the instantclient_11_2 directory where the files were extracted, and execute the following commands to place them into our newly created directories:

mv ojdbc* /usr/local/Oracle/product/instantclient/11.2.0.4.0/jdbc/lib/
mv x*.jar /usr/local/Oracle/product/instantclient/11.2.0.4.0/rdbms/jlib/
mv glogin.sql /usr/local/Oracle/product/instantclient/11.2.0.4.0/sqlplus/admin/
mv *dylib* /usr/local/Oracle/product/instantclient/11.2.0.4.0/lib/
mv *README /usr/local/Oracle/product/instantclient/11.2.0.4.0/
mv * /usr/local/Oracle/product/instantclient/11.2.0.4.0/bin/

While these commands place the files where we want them, we’ll need to do a few more things to make them usable. If you’re using brew already, /usr/local/bin will be in your PATH and you won’t need to add it. We’ll mimic what brew does and symlink sqlplus into /usr/local/bin.

cd /usr/local/bin
ln -s ../Oracle/product/instantclient/11.2.0.4.0/bin/sqlplus sqlplus

This will put sqlplus on our path, but we still need to set the environment variables for things like ORACLE_BASE, ORACLE_HOME and the DYLD_LIBRARY_PATH. Ronald sets them manually and then adds them to his .bash_profile, but I wanted to mimic some of the brew packages and have a .sh file to set variables from /usr/local/share.
To do so, I created another directory underneath /usr/local/Oracle to hold my .sh file:

cd /usr/local/Oracle/product/instantclient/11.2.0.4.0
mkdir -p share/instantclient
cd /usr/local/share
ln -s ../Oracle/product/instantclient/11.2.0.4.0/share/instantclient/ instantclient

Now I can create an instantclient.sh file and place it in /usr/local/Oracle/product/instantclient/11.2.0.4.0/share/instantclient/ with the content I want in my environment.

$ cat /usr/local/share/instantclient/instantclient.sh 
export ORACLE_BASE=/usr/local/Oracle
export ORACLE_HOME=$ORACLE_BASE/product/instantclient/11.2.0.4.0
export DYLD_LIBRARY_PATH=$ORACLE_HOME/lib
export TNS_ADMIN=$ORACLE_BASE/admin/network

Once I have this file in place, I can edit my .bash_profile file and add the following line:

source /usr/local/share/instantclient/instantclient.sh

Open up a new Terminal window and voila! A working sqlplus installation that mimics a brew package install!

This blog is now closed.

Billy Cripe - Mon, 2013-10-14 12:14

Thank you for visiting.  This blog has been closed down and merged with the WebCenter Blog, which contains blog posts and other information about ECM, WebCenter Content, the content-enabling of business applications and other relevant topics.  Please be sure to visit and bookmark https://blogs.oracle.com/webcenter/ and subscribe to stay informed about these topics and many more.   From there, use the #ECM hashtag to narrow your focus to topics that are strictly related to ECM.

See you there! 

Categories: Fusion Middleware

Something for the future

Dominic Giles - Thu, 2013-07-18 05:46

A nice little feature in Oracle Database 12c is the ability to query patching information via SQL. You can do this from SQL*Plus or any other SQL interface (JDBC/ODBC, etc.). You can find more details here:

http://docs.oracle.com/cd/E16655_01/appdev.121/e17602/d_qopatch.htm#CBHEBGIB

However you won't be surprised to find that the following query doesn't currently return any useful information.

SYS@//oracle12c/orcl > select DBMS_QOPATCH.GET_OPATCH_LIST from dual;
GET_OPATCH_LIST
------------------------------------------------------------------------------------------------------------------------
<patches/>
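
On a related note, the same package can also report on the OPatch installation itself, and this does return data even on a freshly created database. A minimal sketch (assuming the DBMS_QOPATCH package is accessible to your user):

select dbms_qopatch.get_opatch_install_info from dual;

This returns an XML document describing the Oracle home and inventory used by OPatch.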


SQL Translator Profiles in Oracle Database 12c

Dominic Giles - Mon, 2013-07-08 10:19

A new feature in Oracle Database 12c is the ability to intercept third-party SQL and translate it into syntactically correct Oracle SQL before it is parsed and executed. So you can now intercept SQL from applications using JDBC and ODBC that were designed to run against a non-Oracle database and potentially run them completely unchanged. The only work necessary is done by the database development/management team. Oracle Database 12c also currently supports the automatic translation of some databases' SQL; this is limited to Sybase for now, but we're working on others. You can find all the details here:

http://docs.oracle.com/cd/E16655_01/gateways.121/e22508/sql_transl_arch.htm#DRDAA131

You can also use the framework against an application that already runs successfully against an Oracle Database. You might want to do this for migration/performance/security reasons. It also gives you an opportunity to try out an important part of the framework: Translation Profiles.

The following SQL demonstrates a simple use case. I'm using the Swingbench Order Entry schema but the sample schema OE would work just as well.

First, grant the privilege to the user you want to create the SQL translation profile on, in this case SOE. You need to do this as SYS or SYSTEM:

grant create sql translation profile to SOE

Then connect to the user you've just granted the privilege to (SOE) and create a SQL Translation profile.

-- Drop the profile if it already exists
-- exec DBMS_SQL_TRANSLATOR.DROP_PROFILE('ORDERS_APP_PROFILE');

-- Create a Translation Profile

exec dbms_sql_translator.create_profile('ORDERS_APP_PROFILE');

Then add some SQL to be translated. In our simple example we take a count against the ORDERS table and translate it to run against the ORDERS_SOUTH table.

-- Create a Translation in that profile

BEGIN
    DBMS_SQL_TRANSLATOR.REGISTER_SQL_TRANSLATION(
      profile_name    => 'ORDERS_APP_PROFILE',
      sql_text        => 'select count(*) from orders',
      translated_text => 'select count(*) from orders_south');
END;

At this stage it's worth seeing what's been populated. You can see the SQL via the following views.

select *  FROM USER_SQL_TRANSLATION_PROFILES;

select * from USER_SQL_TRANSLATIONS;

Then test how this changes the execution by creating our new "ORDERS_SOUTH" table

-- Count the rows we get back from orders
select count(*) from orders;

-- Create a new table  orders_south with just ten rows in

create table orders_south as select * from orders where rownum < 11;

Now that we've done that, enable the SQL translation profile we want to use.

-- Set the session to use the sql translation profile

alter session set sql_translation_profile = ORDERS_APP_PROFILE

-- For testing make the sqlplus look like a foreign tool

alter session set events = '10601 trace name context forever, level 32';

Now when we re-run our query it will use the ORDERS_SOUTH table, even though we've explicitly asked for a count against the ORDERS table.


select count(*) from orders;

-- We should just see 10 rows as opposed to hundreds of thousands

And that's a quick example of SQL Translation Profiles in Oracle Database 12c.

Going Production...

Dominic Giles - Thu, 2013-07-04 12:48

This blog is going production... Just like Oracle Database 12c.

 Comments and code snippets to follow

Quiet Release MySQL Plugin 12.1.0.1.2 — bug fixes

Alex Gorbachev - Tue, 2012-12-11 12:30
This is just a small bug-fix release of the plugin. It was actually quietly released a while ago, so if you have downloaded the plugin recently, you have the latest version. To be sure, check the version in the Console, or you will see it in the file name. There are two [...]
Categories: APPS Blogs

IOUG Big Data SIG — Kick-off Meeting at OOW12

Alex Gorbachev - Thu, 2012-09-27 16:47
Announcing the IOUG Big Data Special Interest Group (SIG)! We have the SIG meeting at Oracle OpenWorld; come join us with your morning coffee. Nothing better than starting your Big morning with Big Data talks! Yes, we actually managed to get a room at this busy time at OOW, thanks to IOUG. [...]
Categories: APPS Blogs

IOUG Collaborate 2013 — Call for Speakers Informational Webinar

Alex Gorbachev - Tue, 2012-09-11 04:28
As I’ve become Director of Communities for IOUG recently, I’m intimately involved in many aspects of leading the IOUG community. One of the areas the user group pursues all the time is finding new speakers, and part of that is convincing community members to actually start presenting. There are many of you [...]
Categories: APPS Blogs

Oracle OpenWorld 2012 – Bloggers Meetup

Alex Gorbachev - Thu, 2012-09-06 13:40
Oracle OpenWorld 2012 is just over a month away, and yes, we are organizing the Annual Oracle Bloggers Meetup — one of your top favorite events of OpenWorld. What: Oracle Bloggers Meetup 2012 When: Wed, 3-Oct-2012, 5:30pm Where: Main Dining Room, Jillian’s Billiards @ Metreon, 101 Fourth Street, San Francisco, CA 94103 (street view). [...]
Categories: APPS Blogs