Feed aggregator

Uniquely parallel

Jonathan Lewis - Fri, 2016-06-10 01:36

Here’s a surprising (to me) execution plan from – parallel execution to find one row in a table using a unique scan of a unique index – produced by running the following script (data creation SQL to follow):

set serveroutput off
set linesize 180
set trimspool on
set pagesize 60

alter session set statistics_level = all;

variable b1 number
exec :b1 := 50000

select /*+ parallel (3) */ id, v1 from t2 where id=:b1;

select * from table(dbms_xplan.display_cursor(null,null,'allstats parallel'));

break on dfo_number skip 1 on tq_id skip 1 on server_type

select
        dfo_number, tq_id, server_type, instance, process, num_rows
from    v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, instance, process
;

All I’ve done is enable rowsource execution statistics, set a bind variable to a value, query a table with a /*+ parallel(3) */ hint to find the one row that will be identified by primary key, and then reported the actual execution plan. When I first ran the test Oracle didn’t report the execution statistics correctly so I’ve also queried v$pq_tqstat to show the PX servers used and the flow of data through the plan. Here’s the plan, followed by the  results from v$pq_tqstat:

SQL_ID  0dzynh9d29pt9, child number 0
select /*+ parallel (3) */ id,v1 from t2 where id=:b1

Plan hash value: 247082613

| Id  | Operation                         | Name     | Starts | E-Rows |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   |
|   0 | SELECT STATEMENT                  |          |      1 |        |        |      |            |      1 |00:00:00.02 |
|   1 |  PX COORDINATOR                   |          |      1 |        |        |      |            |      1 |00:00:00.02 |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |      0 |      1 |  Q1,01 | P->S | QC (RAND)  |      0 |00:00:00.01 |
|   3 |    TABLE ACCESS BY INDEX ROWID    | T2       |      0 |      1 |  Q1,01 | PCWP |            |      0 |00:00:00.01 |
|   4 |     BUFFER SORT                   |          |      0 |        |  Q1,01 | PCWC |            |      0 |00:00:00.01 |
|   5 |      PX RECEIVE                   |          |      0 |      1 |  Q1,01 | PCWP |            |      0 |00:00:00.01 |
|   6 |       PX SEND HASH (BLOCK ADDRESS)| :TQ10000 |      0 |      1 |  Q1,00 | S->P | HASH (BLOCK|      0 |00:00:00.01 |
|   7 |        PX SELECTOR                |          |      0 |        |  Q1,00 | SCWC |            |      0 |00:00:00.01 |
|*  8 |         INDEX UNIQUE SCAN         | T2_PK    |      0 |      1 |  Q1,00 | SCWP |            |      0 |00:00:00.01 |

Predicate Information (identified by operation id):
   8 - access("ID"=:B1)

   - Degree of Parallelism is 3 because of hint

DFO_NUMBER      TQ_ID SERVER_TYP   INSTANCE PROCES   NUM_ROWS
---------- ---------- ---------- ---------- ------ ----------
         1          0 Producer            1 P003            0
                                          1 P004            1
                                          1 P005            0
                      Consumer            1 P000            0
                                          1 P001            1
                                          1 P002            0

                    1 Producer            1 P000            0
                                          1 P001            1
                                          1 P002            0
                      Consumer            1 QC              1

As you can see the table access follows a unique scan of an index and, although the rowsource execution stats report zero starts for the unique scan, we can see from v$pq_tqstat that slave P004 acquired a “row” (actually a rowid) and passed it to slave P001 which then acquired a row from the table and passed that row to the query coordinator. Oracle really did execute a parallel query, starting and stopping a total of 6 sessions to perform a single unique index access.

You’ll notice operation 7 is one you’ve only seen in the latest version of Oracle. The PX SELECTOR was introduced in 12c to reduce the number of times a complex parallel query would funnel into the query coordinator (parallel to serial) and then fan out again (serial to parallel) generating a new data flow operation tree (DFO tree) spawning one or two new parallel server groups as it did so. To stop this happening a step that needs to serialise in a 12c parallel plan can nominate one of the existing PX server processes (from each set, if necessary) to do the job so that the same set of PX servers can carry on running the query without the need for a new DFO tree to appear.

This enhancement to parallel execution plans is a good idea – except when it appears in my silly little query and turns something that ought to be quick and cheap into a job that is far more resource-intensive than it should be.

At this point, of course, you’re probably wondering what kind of idiot would put a parallel() hint into a query that was doing nothing but selecting one row by primary key – the answer is: “the Oracle optimizer in 12c”. I discovered this anomaly while creating a demonstration of the way that a distributed parallel query has to serialise through a single database link even if the operations at the two ends of the link run parallel. Here’s the SQL I wrote for the full demonstration:

rem     Script:         distributed_pq.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2016

define m_remote='test@loopback'
define m_remote='orcl@loopback'

create table t1
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        cast(rownum as number(8,0))                     id,
        cast(lpad(rownum,8,'0') as varchar2(8))         v1,
        cast(rpad('x',100) as varchar2(100))            padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e5
;

create table t2
as
select  *
from    t1
where   mod(id,100) = 0
;

alter table t2 add constraint t2_pk primary key(id);

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T2',
                method_opt       => 'for all columns size 1'
        );
end;
/

set serveroutput off

select
        /*+ parallel(3) */
        t1.v1, t2.v1
from
        t1,
        t2@&m_remote    t2
where
        mod(t1.id,10) = 0
and     t2.id = t1.id
and     mod(to_number(t2.v1),10) = 1
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats parallel'));

If you want to run this demo you’ll need to do something about formatting the output; more importantly you’ll have to create a database link (with a loopback link) and set up a define identifying it at the line where I’ve got orcl@loopback and test@loopback (which are my 12c and 11g loopback links respectively).

Here’s the plan (with rowsource stats) I got from the 12c test:

| Id  | Operation             | Name     | Starts | E-Rows |    TQ  |IN-OUT| PQ Distrib | A-Rows |   A-Time   | Buffers | Reads  |
|   0 | SELECT STATEMENT      |          |      1 |        |        |      |            |      0 |00:01:14.67 |       7 |      0 |
|   1 |  NESTED LOOPS         |          |      1 |     10 |        |      |            |      0 |00:01:14.67 |       7 |      0 |
|   2 |   PX COORDINATOR      |          |      1 |        |        |      |            |  10000 |00:00:00.11 |       7 |      0 |
|   3 |    PX SEND QC (RANDOM)| :TQ10000 |      0 |   1000 |  Q1,00 | P->S | QC (RAND)  |      0 |00:00:00.01 |       0 |      0 |
|   4 |     PX BLOCK ITERATOR |          |      3 |   1000 |  Q1,00 | PCWC |            |  10000 |00:03:17.72 |    1745 |   1667 |
|*  5 |      TABLE ACCESS FULL| T1       |     39 |   1000 |  Q1,00 | PCWP |            |  10000 |00:00:00.06 |    1745 |   1667 |
|   6 |   REMOTE              | T2       |  10000 |      1 |        |      |            |      0 |00:01:14.44 |       0 |      0 |

Predicate Information (identified by operation id):
   5 - access(:Z>=:Z AND :Z<=:Z)

Remote SQL Information (identified by operation id):
   6 - SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND MOD(TO_NUMBER("V1"),10)=1
       (accessing 'ORCL@LOOPBACK' )

   - Degree of Parallelism is 3 because of hint

I have hacked this output a little – the “Remote SQL” section didn’t get reported by display_cursor(), so I’ve inserted the remote sql I got from a call to dbms_xplan.display() after using explain plan to generate a plan. Note the /*+ shared(3) */ hint that appears in the remote SQL – that’s the internal version of a parallel(3) hint.
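For completeness, here is a sketch of that explain plan step (it reuses the &m_remote define from the script above; the exact formatting of the original call may have differed):

explain plan for
select
        /*+ parallel(3) */
        t1.v1, t2.v1
from
        t1,
        t2@&m_remote    t2
where
        mod(t1.id,10) = 0
and     t2.id = t1.id
and     mod(to_number(t2.v1),10) = 1;

select * from table(dbms_xplan.display);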

In 11g the query completed in 2.4 seconds, in 12c the query took nearly 75 seconds to run thanks to the 12c enhancement that allowed it to obey the hint! Looking at the time column (and ignoring the anomalous 3:17 at operation 4 – which might roughly be echoing 3 * 1:14) we can see that the time goes on the calls to the remote database (and a check of v$session_event shows this time spent in “SQL*Net message from db link”), so the obvious thing to do is check what actually happened at the remote database and we can do that by searching the library cache for a recognizable piece of the remote SQL – here’s the SQL to do that, with the results from 11g followed by the results from 12c:

SQL> select sql_id, child_number, executions, px_servers_executions, sql_text from v$sql
  2  where sql_text like '%SHARED%' and sql_text not like 'select sql_id%';

11g results
SQL_ID        CHILD_NUMBER EXECUTIONS PX_SERVERS_EXECUTIONS
------------- ------------ ---------- ---------------------
c0f292z5czhwk            0      10000                     0
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE MOD(TO_NUMBER("V1"),10)=1 AND "ID"=:1

12c results
SQL_ID        CHILD_NUMBER EXECUTIONS PX_SERVERS_EXECUTIONS
------------- ------------ ---------- ---------------------
7bk51w7vtagwd            0      10000                     0
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND MOD(TO_NUMBER("V1"),10)=1

7bk51w7vtagwd            1          0                 59995
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND MOD(TO_NUMBER("V1"),10)=1

It’s not surprising to see that the query has executed 10,000 times – that’s what we were told by the Starts statistic from dbms_xplan.display_cursor(), but 12c has 60,000 (with a little error) PX Servers executions of the statement. That’s 10,000 executions * degree 3 * the 2 slave sets we saw in my original execution plan. (It’s an odd little quirk of the two versions of Oracle that the order of predicates in the remote SQL was reversed between 11g and 12c – leading to two different SQL_IDs).

By enabling rowsource execution stats at the system level I was able to capture the remote execution plan with its stats:
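A sketch of that capture (assuming you have the privileges to alter the remote instance, and using the SQL_ID reported above):

alter system set statistics_level = all;

-- re-run the driving distributed query from the local database, then:
select * from table(dbms_xplan.display_cursor('7bk51w7vtagwd', null, 'allstats'));

alter system set statistics_level = typical;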

SQL_ID  7bk51w7vtagwd, child number 0
SELECT /*+ SHARED (3) */ "ID","V1" FROM "T2" "T2" WHERE "ID"=:1 AND

| Id  | Operation                         | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
|   0 | SELECT STATEMENT                  |          |      0 |        |      0 |00:00:00.01 |       0 |
|   1 |  PX COORDINATOR                   |          |      0 |        |      0 |00:00:00.01 |       0 |
|   2 |   PX SEND QC (RANDOM)             | :TQ10001 |      0 |      1 |      0 |00:00:00.01 |       0 |
|*  3 |    TABLE ACCESS BY INDEX ROWID    | T2       |  29983 |      1 |      0 |00:00:22.21 |    1000 |
|   4 |     BUFFER SORT                   |          |  29995 |        |    999 |00:00:21.78 |       0 |
|   5 |      PX RECEIVE                   |          |  29924 |      1 |    994 |00:00:21.21 |       0 |
|   6 |       PX SEND HASH (BLOCK ADDRESS)| :TQ10000 |      0 |      1 |      0 |00:00:00.01 |       0 |
|   7 |        PX SELECTOR                |          |  29993 |        |    999 |00:00:06.08 |   19992 |
|*  8 |         INDEX UNIQUE SCAN         | T2_PK    |  29999 |      1 |   1000 |00:00:00.24 |   20000 |

Predicate Information (identified by operation id):
   3 - filter(MOD(TO_NUMBER("V1"),10)=1)
   8 - access("ID"=:1)

Unlike the test case I started with, this output did show the number of starts (with a few missing) and the work done across the slaves. Our index probe had to do two buffer gets on every execution, and we have 10,000 executions of the query so 20,000 buffer gets on the index unique scan. Even though only one slave actually does any work with the PX Selector, all three slaves in that set seem to “start” the relevant operations. The definition of the data meant that only one index probe in 10 was successful so we only visited 1,000 buffers from the table. If you’re familiar with reading plans with rowsource execution stats you’ll appreciate that something has gone wrong in the reporting here – that 1,000 at operation 3 should read 21,000 because it ought to include the 20,000 from the index scan (at least, that’s what a serial plan would do).

If you’re still wondering why running this query as a parallel query should take so long – after all it’s only 10,000 executions in 70 seconds – bear in mind that Oracle has to allocate and deallocate 6 PX servers to new sessions each time the query starts; the instance activity stats showed “logons cumulative” going up by 60,000 each time I ran the driving query: that’s about 850 logons (and log offs) per second. I don’t think my test machine would give a realistic impression of the impact of a couple of copies of this query running simultaneously, but when I tried it the contention introduced increased the run time to 93 seconds.
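A simple way to see that logon storm for yourself is to sample the statistic before and after running the driving query and take the difference:

select name, value
from   v$sysstat
where  name = 'logons cumulative';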


Watch out for poor performance becoming disastrous for distributed parallel queries when you upgrade from 11g to 12c.




What’s in a name? or rather, in the SSA Names data

RDBMS Insight - Thu, 2016-06-09 14:31

One of the amazing things about being a DBA/developer in 2016 is the sheer amount of freely available, downloadable data to play with. One fun publicly available data set is the American Social Security Administration names data. It contains all names for which SSNs were issued for each year, with the number of occurrences (although names with <5 occurrences are not included to protect individual privacy).

What’s so fun about this dataset?

* It’s already normalized

* It updates only once a year, and then only by adding another year’s worth of data, so it’s easy to keep current

* Almost everyone can relate to this dataset personally – almost everyone’s name is in there!

* At about 1.8 million rows, it’s not particularly large, but it’s large enough to be interesting to play with.

The one slight annoyance is that the data is in over 100 files, one per year: too many to load one-by-one manually. So here’s a blog post on loading it into your Oracle database, with scripts.

1. Visit the URL:

2. Download and unzip names.zip. This zip archive contains one file for each year from 1880 to 2015. The files are named yobXXXX.txt, e.g. yob2015.txt.

3. Create a table to hold the names data:

CREATE TABLE names (YEAR NUMBER(4), name varchar2(30), sex CHAR(1), freq NUMBER);

4. Load in one year to get a feeling for the data. Let’s load “yob2015.txt”, the most recent year.
Here’s a sql*loader control file “names.ctl” to load the data:

[oracle@localhost names]$ cat names.ctl
load data 
infile 'yob2015.txt' "str '\r\n'"
into table NAMES
fields terminated by ','
trailing nullcols
           ( NAME CHAR(4000),
             SEX CHAR(4000),
             FREQ CHAR(4000),
             YEAR "2015"

(By the way, here’s a great tip from That Jeff Smith: Use sql developer to generate a sql*loader ctl file. )
Now let’s use the ctl file to load it:

[oracle@localhost names]$ sqlldr CONTROL=names.ctl   skip=0  
SQL*Loader: Release - Production on Thu Jun 9 10:41:29 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
Path used:      Conventional
Commit point reached - logical record count 20
Table NAMES:
  32952 Rows successfully loaded.
Check the log file:
for more information about the load.

5. Let’s take a look at the 2015 data! How about the top 10 names for each sex?

SELECT * FROM
  ( SELECT name, sex, freq, 
  rank() OVER (partition BY sex ORDER BY freq DESC) AS rank_2015
  FROM names 
  WHERE YEAR=2015 )
WHERE rank_2015 < 11
ORDER BY sex, rank_2015;
NAME			       S       FREQ  RANK_2015
------------------------------ - ---------- ----------
Emma			       F      20355	     1
Olivia			       F      19553	     2
Sophia			       F      17327	     3
Ava			       F      16286	     4
Isabella		       F      15504	     5
Mia			       F      14820	     6
Abigail 		       F      12311	     7
Emily			       F      11727	     8
Charlotte		       F      11332	     9
Harper			       F      10241	    10
NAME			       S       FREQ  RANK_2015
------------------------------ - ---------- ----------
Noah			       M      19511	     1
Liam			       M      18281	     2
Mason			       M      16535	     3
Jacob			       M      15816	     4
William 		       M      15809	     5
Ethan			       M      14991	     6
James			       M      14705	     7
Alexander		       M      14460	     8
Michael 		       M      14321	     9
Benjamin		       M      13608	    10

6. Now let’s load the names data for the other 135 years.
First we’ll create a generic “names.ctl”:

$ cat names.ctl
load data 
infile 'yob%%YEAR%%.txt' "str '\r\n'"
into table NAMES
fields terminated by ','
trailing nullcols
           ( NAME CHAR(4000),
             SEX CHAR(4000),
             FREQ CHAR(4000),
             YEAR "%%YEAR%%"

Now we’ll write a small shell script to substitute %%YEAR%% for each year from 1880 to 2014, and load that year’s file.

$ cat names.sh
export TWO_TASK=orcl
for i in {1880..2014}
do
  echo "generating yob$i.ctl"
  sed s/%%YEAR%%/$i/g names.ctl > yob$i.ctl
  echo "loading yob$i"
  sqlldr username/password CONTROL=yob$i.ctl
  echo "done $i"
done
[oracle@localhost names]$ ./names.sh
... massive screen output...
[oracle@localhost names]$ grep "error" *.log
yob1880.log:  0 Rows not loaded due to data errors.
yob1881.log:  0 Rows not loaded due to data errors.
yob1882.log:  0 Rows not loaded due to data errors.
yob1883.log:  0 Rows not loaded due to data errors.
yob2012.log:  0 Rows not loaded due to data errors.
yob2013.log:  0 Rows not loaded due to data errors.
yob2014.log:  0 Rows not loaded due to data errors.
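
As a quick sanity check (my own addition, not part of the original post) you can confirm the full range of years made it into the table:

SELECT MIN(year) AS first_year, MAX(year) AS last_year, COUNT(*) AS total_rows
FROM names;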

7. Now we can play with the data a bit!

Here’s a quick look at the popularity of 2015’s top girls’ names since 1880:

WITH n2015 AS 
  ( SELECT name, sex, freq, 
  rank() OVER (partition BY sex ORDER BY freq DESC) AS rank_2015
  FROM names 
  WHERE YEAR=2015 )
, y AS (SELECT  YEAR, sex, SUM(freq) tot FROM names GROUP BY YEAR, sex)
SELECT names.year, names.name, 100*names.freq/tot AS pct_by_sex
FROM n2015, y, names
WHERE n2015.name = names.name AND n2015.sex = names.sex
AND y.year = names.year AND y.sex=names.sex
AND n2015.rank_2015 < 11
AND y.sex='F'

I graphed this in SQL Developer. Click to embiggen:

You can see that Emma, my grandmother’s name, is having a bit of a comeback but is nowhere near the powerhouse it was in the 1880s, when 2% of all girls were named Emma. (For the record, my grandmother was not born in the 1880s!)
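If you'd like to verify that figure yourself, a query along these lines (my own sketch against the same NAMES table) computes Emma's share of girls born each year in the 1880s:

SELECT n.year, ROUND(100 * n.freq / t.tot, 2) AS pct_of_girls
FROM names n
JOIN (SELECT year, SUM(freq) tot FROM names WHERE sex = 'F' GROUP BY year) t
  ON t.year = n.year
WHERE n.name = 'Emma' AND n.sex = 'F' AND n.year BETWEEN 1880 AND 1889
ORDER BY n.year;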

My next post will look at the name Brittany and its variants.

Note: You can download the names.ctl and names.sh from github here.

Categories: DBA Blogs

UKOUG Slides - Forms, ADF and JET a Non-Aggression Pact

Andrejus Baranovski - Thu, 2016-06-09 13:43
Florin Marcus presented a topic on Forms modernization during the UKOUG Development SIG seminar in London. The session was based on our production experience from a recent Forms modernization project to ADF 12c and Alta UI.


- Who integrates Forms and ADF?

- Java Plugin to be Discontinued. Consequences for Oracle Forms.

- Why Yet Another Web Framework? Oracle JET.

- Forms and ADF and JET - Business Integration Patterns

- Forms Developer's perspective: life after learning ADF

Go through the slides and learn more about it:

Is time for Oracle XE on Docker

Marcelo Ochoa - Thu, 2016-06-09 10:09
Sometimes Oracle XE is used in development environments because at the early stage of a project an Oracle RDBMS EE is not necessary.
There are many Docker images in the public repo, but many of them do not include the possibility to customize some parameters, and I am not sure that the legal conditions of the Oracle XE redistribution license are met.
So, using ideas similar to the Oracle RDBMS 12c Docker image script, I made this simple Dockerfile script:
FROM oraclelinux:5
MAINTAINER marcelo.ochoa@gmail.com
RUN groupadd -g 54321 oinstall
RUN groupadd -g 54322 dba
RUN useradd -m -g oinstall -G oinstall,dba -u 54321 oracle
RUN yum -y install bc glibc make binutils gcc libaio perl wget unzip && yum clean all
RUN chown -R oracle:oinstall /home/oracle
RUN chmod g+rx /home/oracle
RUN chmod o+rx /home/oracle
ADD xe.rsp /home/oracle/
ADD oracle-xe-11.2.0-1.0.x86_64.rpm /home/oracle/
RUN rpm2cpio /home/oracle/oracle-xe-11.2.0-1.0.x86_64.rpm | cpio -idmv
RUN rm -f /home/oracle/oracle-xe-11.2.0-1.0.x86_64.rpm
RUN mkdir /u01/app/oracle/oradata
RUN mkdir /u01/app/oracle/product/11.2.0/xe/config/log
RUN mkdir -p /u01/app/oracle/diag/rdbms/xe/XE/trace/
RUN umask 0027
RUN mkdir -p /u01/app/oracle/admin/XE/adump
RUN mkdir -p /u01/app/oracle/admin/XE/dpdump
RUN mkdir -p /u01/app/oracle/admin/XE/pfile
RUN mkdir -p /u01/app/oracle/admin/cfgtoollogs/dbca/XE
RUN mkdir -p /u01/app/oracle/admin/XE/dbs
RUN mkdir -p /u01/app/oracle/fast_recovery_area
RUN umask 0022
RUN touch /u01/app/oracle/diag/rdbms/xe/XE/trace/alert_XE.log
RUN chown -R oracle:oinstall /u01/app/oracle
RUN chmod u+x /etc/init.d/oracle-xe
RUN sed -i -e 's/%memory_target%/1G/g' /u01/app/oracle/product/11.2.0/xe/config/scripts/init.ora
RUN sed -i -e 's/%memory_target%/1G/g' /u01/app/oracle/product/11.2.0/xe/config/scripts/initXETemp.ora
RUN find /u01/app/oracle/product/11.2.0/xe -name "*.sh" -exec chmod u+x {} \;
RUN echo ". /u01/app/oracle/product/11.2.0/xe/bin/oracle_env.sh" >>/home/oracle/.bashrc
COPY manage-oracle.sh /home/oracle/
EXPOSE 1521 8080
VOLUME ["/u01/app/oracle/oradata","/u01/app/oracle/fast_recovery_area"]
CMD [ "sh" , "-c" ,  "/home/oracle/manage-oracle.sh" ]
This Docker image uses the official Oracle Linux 5 layer, and to build the image it is necessary to download oracle-xe-11.2.0-1.0.x86_64.rpm.zip from the OTN Download Section, unzip the file and put oracle-xe-11.2.0-1.0.x86_64.rpm in the same directory where the Dockerfile is placed.
To build the image I made this simple buildDockerImage.sh:
docker build -t "oracle-xe:11.2.0-1.0" .
I am 100% sure that the installation will be successful; this is the magic of the Docker world. Here is the output of the docker image listing:
$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
oracle-xe           11.2.0-1.0          946a685de09f        About an hour ago   1.885 GB
As in the Oracle RDBMS 12c Docker image, the installation of the seed database and the initial configuration are done by the manage-oracle.sh script.
Note also that in the above Dockerfile script, instead of using the rpm tool to install the Oracle XE rpm file, I am using rpm2cpio. This is because the original rpm file provided by Oracle performs some actions that are not valid in a Docker environment, so I do all the necessary actions inside the Dockerfile instead.
Once you have the image ready, to run it and create an initial database I made a run.sh file which can be used as:
docker run --privileged=true --name xe --hostname xe --detach=true --publish=1521:1521 --publish=8080:8080 oracle-xe:11.2.0-1.0
The SQL*Net port (1521) and the Oracle Express web admin tool port (8080) are published outside the container.
If you run the container with the above script the database will be created using Docker volumes, which means that all data stored in the RDBMS will persist across multiple run/stop executions, but if you execute the docker rm command all datafiles will be removed from the layered file system.
Here is a screenshot of the Admin tool:

To get data that survives container destruction, a run-persistent.sh script looks like:
docker run --privileged=true --volume=/home/data/db/xe/fast_recovery_area:/u01/app/oracle/fast_recovery_area --volume=/home/data/db/xe/oradata:/u01/app/oracle/oradata --name xe --hostname xe --detach=true --publish=1521:1521 --publish=8080:8080 oracle-xe:11.2.0-1.0
Before running the above Docker command, create the external directories:
$ sudo mkdir -p /home/data/db/xe/oradata
$ sudo mkdir -p /home/data/db/xe/fast_recovery_area
$ sudo chown -R 54321:54321 /home/data/db/xe
And that's all, enjoy your Oracle XE database on Docker. The complete list of files shown above can be downloaded as oracle-xe.zip from Google Drive, obviously without the Oracle XE rpm file.

Oracle USERENV Function with Examples

Complete IT Professional - Thu, 2016-06-09 06:00
The Oracle USERENV function allows you to get information about the current session. Learn how to use the USERENV function in this article. Purpose of the Oracle USERENV Function The purpose of the USERENV function is to get information about your current session. It’s a simple function but can take a few different values for […]
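A couple of typical calls as a sketch (parameter names as documented; SYS_CONTEXT('USERENV', ...) is the recommended modern equivalent):

SELECT USERENV('SESSIONID') AS audit_session_id,
       USERENV('LANGUAGE')  AS language_info,
       USERENV('ISDBA')     AS is_dba
FROM dual;

SELECT SYS_CONTEXT('USERENV', 'SESSIONID') AS audit_session_id
FROM dual;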
Categories: Development

New on Oracle Mobile Cloud Service 2.0

Our work to simplify mobile development has led us to deliver important new features in the new release of Oracle Mobile Cloud Service 2.0, such as Mobile Application Accelerator (MAX), a new...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Scripted Domain Creation for SOA/BPM, OSB 12.2.1

Darwin IT - Thu, 2016-06-09 02:53
Recently I blogged about the automatic install of SOASuite and ServiceBus 12.2.1. It catered for the installation of the binaries and the creation of a repository.

What it does not handle is the creation of a domain. Over the last few weeks I worked on a script to do that. It's based on a WLST script for 12.1.3 by Edwin Biemond, which was quite educational for me. But as noted, it was made for 12.1.3, and I wanted more control over what and how certain artefacts are created. So I gave it my own swing, also adapting it to some wishes of my current customer.

Although there are certainly some improvements thinkable, and probably will be done in future uses, for now it is ready to share with the world.

One of the changes I did was to divide the main function in sections to make clear what the structure of the script is. I also moved some of the duplicate code or functional parts into separate functions.

Let me describe the sections first.

1. Create Base domain
The script starts with creating a base domain. It reads the default wls template 'wls.jar' and sets the log properties of the domain. It then adapts the AdminServer to:
  • change the name as set in the property file: you can have your own naming convention.
  • change the listen address + port
  • set default SSL settings
  • set log file properties.
Then it sets the admin password on the domain, sets the server start mode and saves the domain.
It then creates boot.properties files for the nodemanager and the AdminServer and sets the password of the NodeManager. Finally it sets the Applications home folder.

2. Extending domain with templates
The second section extends the domain with templates. Another improvement I made is that you can select which components you want to add by toggling the appropriate 'xxxEnabled' switches in the property file, where xxx stands for the component (for instance 'soa', 'bpm', 'osb', 'bam', 'ess', etc.)

It supports the following components:
  • ServiceBus
  • SOA and BPM Suite and B2B
  • BAM
  • Enterprise Scheduler Service
Components such as  Managed File Transfer can be added quite easily.
3. DataSources
Section 3 takes care of setting the datasources to the created repository based on the repository user '{Prefix}_STB', via the 'LocalScvTblDataSource' datasource. In the property file you need to set:
  • soaRepositoryDbUrl: jdbc connect string to the repository database
  • soaRepositoryDbUserPrefix=prefix used in the Repository creation 
  • soaRepositoryStbPwd=Password for the {Prefix}_STB user.
 It will adapt the LocalScvTblDataSource with these properties to load the information on the other repository users to create and adapt the datasources that are essential to the modules with which the domain is extended. These datasources are then made XA-Enabled.

4. Create UnixMachines, Clusters and Managed Servers
This section creates Machine definitions of type 'Unix', based on the properties:
  • server1Address=darlin-vce-db.darwin-it.local
  • server1Machine=darlin-vce-db
  • server2Enabled=true
  • server2Address=darlin-vce-db2.darwin-it.local
  • server2Machine=darlin-vce-db2
The machine denoted by 'server1Machine' is always created. The AdminServer is added to it, as well as all the first, default, managed servers. The machine denoted by the property 'server2Machine' is only created when 'server2Enabled' is set to true.

I realize that 'server' in this context might be a little confusing. In serverYAddress and  serverYMachine, I actually mean a server-host, not a managed or admin server.

 For each component to configure (soa, osb, etc.) a cluster, denoted with for instance osbClr or soaClr, is created.

When you extend the domain with SOA Suite or OSB, a managed server called 'soa_server1' or 'osb_server1' is automatically created with the appropriate deployments targeted to it. In Edwin's script these are removed and new ones are created. I ran into problems with that and found that it's quite unnecessary, since we can rename the current ones to the name given in the property file, denoted with soaSvr1 or osbSvr1, etc., as is done with the AdminServer. So I leave the already created ones, but rename them to the desired value.

These first servers are added to the appropriate cluster, which magically causes the deployments to be re-targeted to that cluster.

Then if enabled, as with osbSvr2Enabled or soaSvr2Enabled, etc., the particular 'second' servers are created and added to the particular cluster.

5. Add Servers to ServerGroups
New in 12c is the concept of ServerGroups. In 11g you had only one definition of USER_MEM_ARGS in setDomainEnv.sh/cmd, so it applied to every server (admin or managed) started using the start(Managed)Weblogic.sh/cmd scripts. But in 12c the determination of the USER_MEM_ARGS is done in a separate script: setStartupEnv.sh/cmd.
In this script the determination is done based on so-called ServerGroups. This provides a means to differentiate the memory settings for particular servers, which was lacking in 11g.

So in this section all the Managed and Admin Servers are added to a particular ServerGroup.
6. Create boot properties files
Lastly, for each created managed server a boot.properties file with the username and password is created. Smart: I used to do this every single time by hand...

The example property file
Here's an example of the property file:
# Properties for creating the SOADomain
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-15
# Server Settings
# Properties for AdminServer
adminJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1532m
# Properties for OSB
osbJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m
# Properties for SOA
soaJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=752m -Xms1024m -Xmx1532m
# Properties for ESS
essJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m
# Properties for BAM
bamJavaArgsBase=-XX:PermSize=256m -XX:MaxPermSize=512m -Xms1024m -Xmx1532m
# AdminUser
# SoaRepository Settings
# Logs
# Settings

Save it with a name like darlin-vce-db.properties, but adapted for each particular environment.

The script
(And of course I don't mean the band that my daughter likes...)
# Create a SOA/BPM/OSB domain
# @author Martien van den Akker, Darwin-IT Professionals
# @version 1.0, 2016-04-09
# Modify these values as necessary
import sys, traceback
scriptName = 'createSoaBpmDomain.py'
#Home Folders
wlsHome = fmwHome+'/wlserver'
soaDomainHome = domainsHome+'/'+soaDomainName
soaApplicationsHome = applicationsHome+'/'+soaDomainName
# Templates for 12.1.3
#wlsjar =fmwHome+'/wlserver/common/templates/wls/wls.jar'
# Templates for 12.2.1
wlsjar =fmwHome+'/wlserver/common/templates/wls/wls.jar'
ohsTpl=fmwHome+'/ohs/common/templates/wls/ohs_managed_template.jar' # need to be validated!
b2bTpl=fmwHome+'/soa/common/templates/wls/oracle.soa.b2b_template.jar' # need to be validated!
# ServerGroup definitions
def usage():
print 'Call script as: '
print 'Windows: wlst.cmd '+scriptName+' -loadProperties localhost.properties'
print 'Linux: wlst.sh '+scriptName+' -loadProperties environment.properties'
print 'Property file should contain the following properties: '
print "adminUrl='localhost:7101'"
print "adminUser='weblogic'"
print "adminPwd='welcome1'"
# Create a boot properties file.
def createBootPropertiesFile(directoryPath,fileName, username, password):
print ('Create Boot Properties File for folder: '+directoryPath)
print (lineSeperator)
serverDir = File(directoryPath)
bool = serverDir.mkdirs()
fileNew=open(directoryPath + '/'+fileName, 'w')
fileNew.write('username=%s\n' % username)
fileNew.write('password=%s\n' % password)
# Create Startup Properties File
def createAdminStartupPropertiesFile(directoryPath, args):
print 'Create AdminServer Boot Properties File for folder: '+directoryPath
print (lineSeperator)
adminserverDir = File(directoryPath)
bool = adminserverDir.mkdirs()
fileNew=open(directoryPath + '/startup.properties', 'w')
fileNew.write('Arguments=%s\n' % args)
# Set Log properties
def setLogProperties(logMBeanPath, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan):
print '\nSet Log Properties for: '+logMBeanPath
print (lineSeperator)
print ('Server log path: '+pwd())
print '. set FileName to '+logFile
set('FileName' ,logFile)
print '. set FileCount to '+str(fileCount)
set('FileCount' ,int(fileCount))
print '. set FileMinSize to '+str(fileMinSize)
set('FileMinSize' ,int(fileMinSize))
print '. set RotationType to '+rotationType
print '. set FileTimeSpan to '+str(fileTimeSpan)
def createServerLog(serverName, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan):
print ('\nCreate Log for '+serverName)
print (lineSeperator)
setLogProperties('/Server/'+serverName+'/Log/'+serverName, logFile, fileCount, fileMinSize, rotationType, fileTimeSpan)
# Change DataSource to XA
def changeDatasourceToXA(datasource):
print 'Change datasource '+datasource
print (lineSeperator)
print '. Set UseXADataSourceInterface='+'True'
print '. Set GlobalTransactionsProtocol='+'TwoPhaseCommit'
def createCluster(cluster):
print ('\nCreate '+cluster)
print (lineSeperator)
create(cluster, 'Cluster')
# Create a Unix Machine
def createUnixMachine(serverMachine,serverAddress):
print('\nCreate machine '+serverMachine+' with type UnixMachine')
print (lineSeperator)
# Add server to Unix Machine
def addServerToMachine(serverName, serverMachine):
print('\nAdd server '+serverName+' to '+serverMachine)
print (lineSeperator)
# Determine the Server Java Args
def getServerJavaArgs(serverName,javaArgsBase,logsHome):
javaArgs = javaArgsBase+' -Dweblogic.Stdout='+logsHome+'/'+serverName+'.out -Dweblogic.Stderr='+logsHome+'/'+serverName+'_err.out'
return javaArgs
# Change Managed Server
def changeManagedServer(server,listenAddress,listenPort,javaArgs):
print '\nChange ManagedServer '+server
print (lineSeperator)
print '. Set listen address and port to: '+listenAddress+':'+str(listenPort)
set('ListenPort' ,int(listenPort))
# ServerStart
print ('. Create ServerStart')
print ('. Set Arguments to: '+javaArgs)
set('Arguments' , javaArgs)
print ('. Create server SSL')
print ('. Set SSL Enabled to: '+'False')
set('Enabled' , 'False')
print ('. Set SSL HostNameVerificationIgnored to: '+'True')
set('HostNameVerificationIgnored', 'True')
if jsseEnabled == 'true':
print ('. Set JSSEEnabled to: '+ 'True')
print ('. Set JSSEEnabled to: '+ 'False')
# Create a Managed Server
def createManagedServer(server,listenAddress,listenPort,cluster,machine,
print('\nCreate '+server)
print (lineSeperator)
create(server, 'Server')
createServerLog(server, logsHome+'/'+server+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
print('Add '+server+' to cluster '+cluster)
addServerToMachine(server, machine)
# Adapt a Managed Server
def adaptManagedServer(server,newSrvName,listenAddress,listenPort,cluster,machine,
print('\nAdapt '+server)
print (lineSeperator)
# name of adminserver
print '. Rename '+server+' to '+ newSrvName
set('Name',newSrvName )
createServerLog(newSrvName, logsHome+'/'+newSrvName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
print('Add '+newSrvName+' to cluster '+cluster)
addServerToMachine(newSrvName, machine)
# Change Admin Server
def changeAdminServer(adminServerName,listenAddress,listenPort,javaArguments):
print '\nChange AdminServer'
print (lineSeperator)
# name of adminserver
print '. Set Name to '+ adminServerName
set('Name',adminServerName )
# address and port
print '. Set ListenAddress to '+ server1Address
print '. Set ListenPort to '+ str(listenPort)
set('ListenPort' ,int(listenPort))
# ServerStart
print 'Create ServerStart'
print '. Set Arguments to: '+javaArguments
set('Arguments' , javaArguments)
print 'Create SSL'
set('Enabled' , 'False')
set('HostNameVerificationIgnored', 'True')
if jsseEnabled == 'true':
print ('. Set JSSEEnabled to: '+ 'True')
print ('. Set JSSEEnabled to: '+ 'False')
def main():
# Section 1: Base Domain + Admin Server
print (lineSeperator)
print ('1. Create Base domain '+soaDomainName)
print('\nCreate base wls domain with template '+wlsjar)
print (lineSeperator)
# Domain Log
print('Set base_domain log')
setLogProperties('/Log/base_domain', logsHome+soaDomainName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
# Admin Server
adminJavaArgs = getServerJavaArgs(adminServerName,adminJavaArgsBase,logsHome)
createServerLog(adminServerName, logsHome+adminServerName+'.log', fileCount, fileMinSize, rotationType, fileTimeSpan)
print('\nSet password in '+'/Security/base_domain/User/weblogic')
# weblogic user name + password
print('. Set Name to: ' +adminUser)
if productionMode == 'true':
print('. Set ServerStartMode to: ' +'prod')
setOption('ServerStartMode', 'prod')
print('. Set ServerStartMode to: ' +'dev')
setOption('ServerStartMode', 'dev')
print('write Domain...')
# write path + domain name
es = encrypt(adminPwd,soaDomainHome)
print('set Domain password for '+soaDomainName)
print('Set nodemanager password')
set('NodeManagerUsername' ,adminUser )
set('NodeManagerPasswordEncrypted',es )
setOption( "AppDir", soaApplicationsHome )
print('Finished base domain.')
# Section 2: Templates
print('\n2. Extend Base domain with templates.')
print (lineSeperator)
print ('Adding Webservice template '+wlservicetpl)
# SOA Suite
if soaEnabled == 'true':
print ('Adding SOA Template '+soatpl)
print('SOA is disabled')
if bpmEnabled == 'true':
print ('Adding BPM Template '+bpmtpl)
print('BPM is disabled')
if osbEnabled == 'true':
print ('Adding OSB template '+osbtpl)
print('OSB is disabled')
print ('Adding ApplCore Template '+applCoreTpl)
if bamEnabled == 'true':
print ('Adding BAM Template '+bamtpl)
print ('BAM is disabled')
if webtierEnabled == 'true' == true:
print ('Adding OHS Template '+ohsTpl)
print('OHS is disabled')
if b2bEnabled == 'true':
print 'Adding B2B Template '+b2bTpl
print('B2B is disabled')
if essEnabled == 'true':
print ('Adding ESS Template'+essBasicTpl)
print ('Adding ESS Em Template'+essEmTpl)
print('ESS is disabled')
print ('Finished templates')
# Section 3: Change Datasources
print ('\n3. Change datasources')
print 'Change datasource LocalScvTblDataSource'
print ('Call getDatabaseDefaults which reads the service table')
if soaEnabled == 'true':
if osbEnabled == 'true':
if bamEnabled == 'true':
print 'Finshed DataSources'
# Section 4: Create UnixMachines, Clusters and Managed Servers
print ('\n4. Create UnixMachines, Clusters and Managed Servers')
print (lineSeperator)
if server2Enabled == 'true':
# SOA Suite
if soaEnabled == 'true':
adaptManagedServer('soa_server1',soaSvr1,server1Address, soaSvr1Port,soaClr,server1Machine,
if soaSvr2Enabled == 'true':
print('Do not create SOA Server2')
if osbEnabled == 'true':
if osbSvr2Enabled == 'true':
print('Do not create OSB Server2')
if bamEnabled == 'true':
if bamSvr2Enabled == 'true':
print('Do not create BAM Server2')
if essEnabled == 'true':
if essSvr2Enabled == 'true':
print('Do not create ESS Server2')

print ('Finshed creating Machines, Clusters and ManagedServers')
# Section 5: Add Servers to ServerGroups.
print ('\n5. Add Servers to ServerGroups')
print (lineSeperator)
print 'Add server groups '+adminSvrGrpDesc+ ' to '+adminServerName
setServerGroups(adminServerName, adminSvrGrp)
if soaEnabled == 'true':
print 'Add server group '+soaSvrGrpDesc+' to '+soaSvr1+' and possibly '+soaSvr2
setServerGroups(soaSvr1, soaSvrGrp)
if soaSvr2Enabled == 'true':
setServerGroups(soaSvr2, soaSvrGrp)
if osbEnabled == 'true':
print 'Add server group '+osbSvrGrpDesc+' to '+osbSvr1+' and possibly '+osbSvr2
setServerGroups(osbSvr1, osbSvrGrp)
if osbSvr2Enabled == 'true':
setServerGroups(osbSvr2, osbSvrGrp)
if bamEnabled == 'true':
print 'Add server group '+bamSvrGrpDesc+' to '+bamSvr1+' and possibly '+bamSvr2
setServerGroups(bamSvr1, bamSvrGrp)
if bamSvr2Enabled == 'true':
setServerGroups(bamSvr2, bamSvrGrp)
if essEnabled == 'true':
print 'Add server group '+essSvrGrpDesc+' to '+essSvr1+' and possibly '+essSvr2
setServerGroups(essSvr1, essSvrGrp)
if essSvr2Enabled == 'true':
setServerGroups(essSvr2, essSvrGrp)
print ('Finshed ServerGroups.')
# Section 6: Create boot properties files.
print ('\n6. Create boot properties files')
print (lineSeperator)
if soaEnabled == 'true':
if soaSvr2Enabled == 'true':
if osbEnabled == 'true':
if osbSvr2Enabled == 'true':
if bamEnabled == 'true':
if bamSvr2Enabled == 'true':
if essEnabled == 'true':
if essSvr2Enabled == 'true':
print ('\nFinished')
except NameError, e:
print 'Apparently properties not set.'
print "Please check the property: ", sys.exc_info()[0], sys.exc_info()[1]
apply(traceback.print_exception, sys.exc_info())
#call main()

Conclusion
As said, although I think this script is already quite adaptable using the property file, of course there are many conceivable improvements for your particular situation. It creates a 'skeleton' SOA or Service Bus domain, but you might need to adapt it for network topologies and security settings.
And although it creates a 'per domain' nodemanager configuration, you would need to adapt it for your particular needs to get the domain started. I only tested this by starting the Admin server using the startWeblogic.sh script.

Having such a script is a valuable asset: it allows you to (re-)create your domains repeatably in a standard way, ensuring that different environments (dev, test, acc, prod) are created similarly.

One last thing though: the script somehow registers the creation of the domain, and thus the use of the datasources, in the repository. So you can't just throw away the domain and recreate it against the current Repository. You'll need to recreate the Repository as well.

database option - compression part 2

Pat Shuff - Thu, 2016-06-09 02:07
Yesterday we looked at the different compression options that are available for a database. Today we are going to walk through an example. The example comes from Oracle by Example - Compression. This is a hands on tutorial that has you execute code in an 11g database. Note that you must create this database as a High Performance or Extreme Performance database. If you create a Standard Edition or Enterprise Edition the execution will fail with an option not available error as we saw with partitioning a couple of days ago.

To start, we create an 11g database in the Oracle Public Cloud. We create the instance, wait an hour or so, change the network configuration to open port 1521 to the public, and connect using sys as sysdba to the instance. We are going to use SQL Developer in our Windows 2012 instance to make the connection. To get the connection information, we go to the database console and get the IP address of the instance.

We then go to our sqldeveloper tool and add this database connection. We can use ssh tunneling or open port 1521 to the world to make the connection.

The first step that we are told to do is to execute the setup.sql file available via the tutorial. We are not going to execute this program but do everything by hand through sql developer. The purpose of this script is to enable the user sh, set a password, and grant privileges to the user. We can do this from SQL Developer. The code that it recommends using is

connect / as sysdba
set echo on
alter user sh identified by sh account unlock;
grant create tablespace to sh;
grant drop tablespace to sh;
First, we don't want to use such a simple password. We change this and set it to something a little more secure. We select the database instance, in our example it is prs11gHP where we are connected as the sys user. We select other Users..., the user sh, and edit the entry. When the screen comes up to edit the user, we enable the account, set the password, grant create tablespace and drop tablespace rights to the user and apply. This effectively executes the script shown above.

At this point, we have a user that can create and drop tables. We now want to load the create_sales_tbls.sql code from the tutorial.

The create script first drops the existing tables. This might generate an error because the table does not exist. This error is not significant and won't stop everything from executing. We then create a non-compressed and a compressed table by selecting from the demo sales table that exists if you installed the demo database during your install.

drop table sales_nocompress purge;
drop table sales_compress purge;
set echo on
set timing on
create table sales_nocompress
as select * from sales;

create table sales_compress compress for all operations
as select * from sales where 1=0;

select count(*)
from sales_compress;
Note that the two create statements create tables with the same structure. What we see is that the creation of the first table takes just over 4 seconds because we pull in the sales table information. The second creation does not take as long because the data is in cache and the where clause (1=0) returns no rows.

When we do the select, the count should be zero based on the where clause. We then do an insert into the compressed table to populate it with the same data. This is done by executing


set timing off
select count(*) from sales_compress;
select count(*) from sales_nocompress;
This executes the oltp_insert.sql code then compares the counts of the two tables to make sure they contain the same number of records. The code that is executed in the insert script is
SQL> set timing on
SQL> declare

  commit_after integer := 0 ;
  loop_variable integer ;

  cursor c_sales is
  select prod_id
  , cust_id
  , time_id
  , channel_id
  , promo_id
  , quantity_sold
  , amount_sold
  from sales ;

begin

  for r_sales in c_sales
  loop

    if commit_after = 0
    then

      loop_variable := 0 ;

      commit_after := round(dbms_random.value(1,1)) ;

    end if ;

    insert into sales_compress
    (prod_id, cust_id, time_id, channel_id, promo_id, quantity_sold, amount_sold)
    values
    ( r_sales.prod_id
    , r_sales.cust_id
    , r_sales.time_id
    , r_sales.channel_id
    , r_sales.promo_id
    , r_sales.quantity_sold
    , r_sales.amount_sold
    ) ;

    if loop_variable = commit_after
    then
      commit ;
      commit_after := 0 ;
    end if ;

    loop_variable := loop_variable + 1 ;

  end loop ;

end ;
/
We are not going to go through this code but it does insert the same number of entries as the uncompressed table. The values that are inserted are pulled from the sales table and inserted into the compressed table. Note that the table uses OLTP compression because it was created with 'compress for all operations', so these conventional inserts are compressed as well.

We can execute the examine_storage.sql script to see that the compressed table takes up about half the storage of the uncompressed table. We can also see that the table is enabled for oltp compression by looking at the parameters of the table from a select statement.
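For reference, a minimal sketch of the kind of checks examine_storage.sql performs (standard dictionary views; the exact script in the tutorial may differ):

select segment_name, round(sum(bytes)/1024/1024, 1) as size_mb
from   user_segments
where  segment_name in ('SALES_NOCOMPRESS', 'SALES_COMPRESS')
group  by segment_name;

select table_name, compression, compress_for
from   user_tables
where  table_name in ('SALES_NOCOMPRESS', 'SALES_COMPRESS');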

We can also look at the select time differences by reading all of the data from the compressed and uncompressed tables. Note that the compressed table takes about 3/4 of the time that the uncompressed takes to execute.
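A sketch of that timing comparison (both tables are scanned in full since there are no indexes on them):

set timing on
select count(*) from sales_nocompress;
select count(*) from sales_compress;
set timing off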

In summary, we were able to create an 11g database, create a table that is compressed and one that is non-compressed, and look at the relative size and timing when retrieving data from the tables. We can experiment with this data and grow the table size to see if we still get the same improvements as the table gets larger. We can try different compression algorithms to see if it affects performance or compression ratios. We have done all of this in a database as a service public cloud instance. The only tools that we needed were a SQL Developer connection and an Oracle Cloud account. We could have done this with Amazon RDS as well as EC2 and Microsoft Azure Compute. The key difference is that this experiment took about two hours to execute and we only consumed about $15 to learn and play with compression on 11g (or 12c), given that a low memory option for the database is only $6.720 per OCPU per hour. With the pay as you go option we burn less than $15 and turn off the service. We could have uploaded our own data sets into the database instance and played with the compression advisor in a sandbox without affecting our production environment. If we were using database backup as a service we could have restored a single table from our backup and played with the compression variations and the compression advisor.
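As an example of the compression advisor mentioned above, here is a hedged sketch of a DBMS_COMPRESSION.GET_COMPRESSION_RATIO call as documented for 11.2 (the scratch tablespace name is an assumption; adjust it for your instance):

set serveroutput on
declare
  l_blkcnt_cmp    pls_integer;
  l_blkcnt_uncmp  pls_integer;
  l_row_cmp       pls_integer;
  l_row_uncmp     pls_integer;
  l_cmp_ratio     number;
  l_comptype_str  varchar2(100);
begin
  dbms_compression.get_compression_ratio(
    scratchtbsname => 'USERS',            -- assumption: any tablespace with free space
    ownname        => 'SH',
    tabname        => 'SALES_NOCOMPRESS',
    partname       => null,
    comptype       => dbms_compression.comp_for_oltp,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  dbms_output.put_line('Estimated ratio: ' || l_cmp_ratio || ' (' || l_comptype_str || ')');
end;
/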

Combining Features - Wrong Results With Scalar Subquery Caching

Randolf Geist - Wed, 2016-06-08 15:39
Quite often you can get into trouble with Oracle when you start combining different features.

In the case of one of my clients it is the combination of user-defined PL/SQL functions that can raise exceptions (think of currency conversion where a non-existent currency code gets passed into the function), DML error logging and attempting to improve performance by wrapping the PL/SQL function call into a scalar subquery to benefit from the built-in scalar subquery caching feature of the SQL runtime engine.

As long as the scalar subquery didn't get used everything worked as expected, but some while after adding the scalar subquery it became obvious that wrong results occurred - in that particular case here it meant rows that should have been rejected and written to the error logging table due to the exception raised in the user-defined PL/SQL function suddenly showed up in the target table, and what was even more worrying - they included a converted amount, obviously using some arbitrary conversion rate, although the PL/SQL function actually didn't return any value but raised an exception for unknown currency codes.

The interesting point here is that everything works as expected if the error logging feature doesn't get used, or if the scalar subquery doesn't get used.

In case of no error logging the whole statement will roll back if an exception occurs, and that holds true and works even with scalar subquery caching in place.

Without scalar subquery caching the correct result gets produced, all rows that are supposed to be rejected are written to the error logging table and don't show up in the target table.

And if no exception gets raised (only valid and existing currency codes get used), again the results are correct in the target table even when using scalar subquery caching - so the caching of the values in principle works correctly.

Only when combining those features and dealing with invalid data the problem shows up, and as it can be demonstrated from a simple test case, even then it only occurs under certain circumstances.

Here is a simple test case for reproducing the issue:

set echo on timing on time on

drop table t_source purge;

drop table t_dest purge;

drop table ERR$_T_DEST purge;

drop package test_plsql_caching_with_except;

create table t_source
as
select
  rownum as id
, cast('C' || to_char(round(trunc(rownum / 100000)) + 1, 'FM00') as varchar2(3)) as currency_code_clust
, cast('C' || to_char(mod(rownum, 10) + 1, 'FM00') as varchar2(3)) as currency_code_scat
, 100 as amount
from
  (select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3) gen1
, (select /*+ cardinality(1e3) */ null from dual connect by level <= 1e3) gen2
;

exec dbms_stats.gather_table_stats(null, 't_source')

create or replace package test_plsql_caching_with_except
as
  function test_plsql_caching_with_except(s_currency_code in varchar2, s_do_excpt_on in varchar2 default 'C10') return number parallel_enable;
end;
/
create or replace package body test_plsql_caching_with_except
as
  function test_plsql_caching_with_except(s_currency_code in varchar2, s_do_excpt_on in varchar2 default 'C10') return number parallel_enable
  is
  begin
    if s_currency_code = s_do_excpt_on then
      raise_application_error(-20001, 'exception encountered');
    end if;
    return to_number(substr(s_currency_code, 2));
  end;
end;
/

create table t_dest (id number(*, 0), currency_code varchar2(3), org_amount number, conv_amount number);

exec DBMS_ERRLOG.create_error_log (dml_table_name => 't_dest')

truncate table t_dest;

truncate table ERR$_T_DEST;

alter session enable parallel dml;

-- Using "currency_code_scat" as function parameter gives correct result in T_DEST
-- But column CONV_AMOUNT in ERR$_T_DEST shows some function result that shouldn't be there since the function raised an exception (reproduces in but not
-- define ccy_code = currency_code_scat

-- Using "currency_code_clust" as function parameter gives wrong result in T_DEST - rows that should go to ERR$_T_DEST
-- due to exception being raised in PL/SQL function end up in T_DEST (with some "cached" function result applied)
-- There shouldn't be any rows in T_DEST with "currency_code = 'C10'"
-- Applies to both serial and Parallel Execution, conventional and direct path insert
define ccy_code = currency_code_clust

-- Use commented, non-cached function call to see correct results
insert /*+
-- parallel(n)
*/
into t_dest (id, currency_code, org_amount, conv_amount)
select id, &ccy_code, amount,
--test_plsql_caching_with_except.test_plsql_caching_with_except(&ccy_code) * amount
(select test_plsql_caching_with_except.test_plsql_caching_with_except(&ccy_code) * amount from dual)

from t_source
log errors
reject limit unlimited;


-- This is supposed to 900K
select count(*) from t_dest;

-- This is supposed to be 0
select count(*) from t_dest where currency_code = 'C10';

-- This is supposed to 100K
select count(*) from ERR$_T_DEST;

-- This is supposed to 0
select count(*) from ERR$_T_DEST where conv_amount is not null;
The second parameter to the PL/SQL function determines for which code an exception should be raised (or if at all), and hence allows reproducing different scenarios. By default it will raise an exception for code "C10" which represents 10% (100K rows) of the data in T_SOURCE. So 900K rows should end up in the destination table and those 100K rows having code "C10" should go into the error logging table.

Running the test case without the scalar subquery gets the correct result:

SQL> -- This is supposed to 900K
SQL> select count(*) from t_dest;


Elapsed: 00:00:00.70
SQL> -- This is supposed to be 0
SQL> select count(*) from t_dest where currency_code = 'C10';


Elapsed: 00:00:00.21
SQL> -- This is supposed to 100K
SQL> select count(*) from ERR$_T_DEST;


Elapsed: 00:00:00.70
SQL> -- This is supposed to 0
SQL> select count(*) from ERR$_T_DEST where conv_amount is not null;


Elapsed: 00:00:00.04
Use the scalar subquery, and suddenly instead of 100K rows that should be rejected only a single one gets rejected and 99,999 make it into the target table with some converted amount:

SQL> -- This is supposed to 900K
SQL> select count(*) from t_dest;


Elapsed: 00:00:00.23
SQL> -- This is supposed to be 0
SQL> select count(*) from t_dest where currency_code = 'C10';


Elapsed: 00:00:00.20
SQL> -- This is supposed to 100K
SQL> select count(*) from ERR$_T_DEST;


Elapsed: 00:00:00.00
SQL> -- This is supposed to 0
SQL> select count(*) from ERR$_T_DEST where conv_amount is not null;


Elapsed: 00:00:00.01
Now the critical point is the order of the column values in the column passed into the PL/SQL function. The problem can only be reproduced when using the "clustered" variant of the currency codes (column CURRENCY_CODE_CLUST in T_SOURCE). Switch to the "scattered" variant (CURRENCY_CODE_SCAT), and the (almost) correct result gets produced, at least in the target table. I say "almost" because looking closer at the rows written to the error logging table it becomes obvious that they contain a converted amount that definitely shouldn't be there - so the wrong result error already shows up here, since the row was written to the error logging table due to the PL/SQL function raising an exception, hence the converted amount should be NULL. This "wrong converted amount in the error logging table" no longer occurs in 12c (, so something has changed but the problem of wrong results (rows with errors / exceptions go into the target table) is still reproducible in 12c.

The problem seems to be related to the way scalar subquery caching works when the same value comes up again - Jonathan Lewis described this feature in "Cost Based Oracle" a long time ago. There seems to be a special optimisation that avoids the cache lookup when the same value as before is used again, like a "single value cache" on top of the actual cache. And it looks like it is this special treatment that doesn't correctly handle the case of an exception being raised in the PL/SQL function.

The test case also allows checking whether other caching mechanisms suffer from the same problem - declaring the function "deterministic" (although in the real-life case it isn't really deterministic and hence shouldn't be declared as such, since that opens up other possibilities, like building a Materialized View or a function-based index on the PL/SQL function, that could again lead to wrong results), or making use of the PL/SQL function result cache. They don't - both produce the correct results under all scenarios tested.
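For reference, a minimal sketch of how those two alternative caching mechanisms are declared - the function names and bodies below are placeholders for illustration only, not the actual test case code:

create or replace function get_conv_rate_det (p_ccy in varchar2)
return number
deterministic        -- result caching handled by the SQL engine
is
begin
  -- placeholder logic: raise an exception for the "bad" code, as in the test case
  if p_ccy = 'C10' then
    raise_application_error(-20001, 'No conversion rate for code ' || p_ccy);
  end if;
  return 1;
end;
/

create or replace function get_conv_rate_rc (p_ccy in varchar2)
return number
result_cache         -- PL/SQL function result cache, shared across sessions
is
begin
  if p_ccy = 'C10' then
    raise_application_error(-20001, 'No conversion rate for code ' || p_ccy);
  end if;
  return 1;
end;
/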

I have a more complex variant of the test case that makes the critical difference more obvious: the "scalar subquery / deterministic" caching clearly happens in the SQL engine, whereas the PL/SQL function result cache is implemented in the PL/SQL engine. It shows this by counting how many times the function code actually gets executed and how many times the PL/SQL code block gets executed according to the Library Cache information.

In this case, where the overhead of the actual function call is very small but the function is called very frequently, the PL/SQL function result cache adds much more overhead than the SQL engine caching, because the context switch to PL/SQL has to be made for every function invocation, only to then be resolved in the PL/SQL function result cache instead of actually executing the function code.

The "scalar subquery / deterministic" cache, in contrast, avoids the context switch to PL/SQL entirely and uses values from a cache maintained in the SQL engine.

So the smaller the overhead of the actual PL/SQL function code and the more frequently the function gets called, the more overhead the PL/SQL function result cache will add in comparison to the SQL-engine-based caching mechanisms.

Of course the PL/SQL function result cache comes with other features that the other mechanisms lack - like cross-session / cross-statement caching, which can make a big difference if a single function call causes a lot of overhead.

The Future of Work: What does a Smart Office look like today?

Oracle AppsLab - Wed, 2016-06-08 15:23

Editor’s note: We just returned from Holland last week where we attended AMIS 25, which was a wonderful show. One of the demos we showed was the Smart Office; Noel (@noelportugal) also gave a presentation on it.

We’ve been showing the Smart Office since OOW last year, and it remains one of our most popular demos because it uses off-the-shelf components that are available today, e.g. Amazon Echo, Leap Motion, Philips Hue lights, beacons, etc., making it an experience that anyone could replicate today with some development work.

The following article is cross-posted from our Emerging Technologies pages on oracle.com. Stay tuned for details on each of the components of the Smart Office.

In early 2015, the AppsLab team decided we were going to showcase the latest emerging technologies in an integrated demo. As part of the Oracle Applications User Experience group, our main goal as the emerging technologies team is to design products that will increase productivity and user participation in Oracle software.

We settled on the idea of the Smart Office, which is designed with the future of enterprise workplaces in mind. With the advent of the Internet of Things and more home automation in consumer products, users are expecting similar experiences in the workplace. We wanted to build an overall vision of how users will accomplish their tasks with the help of emerging technologies, no matter where they might be working.

Technologies such as voice control, gesture, and proximity have reached what we consider an acceptable maturity level for public consumption. Inexpensive products such as the Amazon Echo, Leap Motion and Bluetooth beacons are becoming more common in users’ daily lives. These examples of emerging technology have become cornerstones in our vision for the Smart Office.

Wearable technology also plays an important role in our idea of the Future of Work. Smart watches are becoming ubiquitous, and the price of wireless microprocessors continues to decrease. Dedicated mobile devices, our research shows, can increase productivity in the workplace when they are properly incorporated into the user experience as a whole.

Building for you, a Sales Cloud example

We first created what we call a user persona to assist us in building the Smart Office. This helps us develop very specific work flows using very specific technology that can be widely applied to a variety of software users. In this case, we started with a sales example as they are often mobile workers.


Sally Smith, our development example for the Smart Office, is a regional sales vice president who is traveling to her headquarters office. Traveling to another office often requires extra effort to find and book a working space. To help Sally with that task, we built a geo-fence-enabled mobile app as well as a Smart Badge. Here’s what these two components help her do:

  • As Sally approaches the office building, her mobile device (using geo-fencing capabilities) alerts her via her smart watch and helps her find her way to an available office space, using micro-location with beacons. She uses her Smart Badge, which has access to data about her employee status, to go through the security doors at the office building.
  • As Sally approaches the available office space, her Smart Badge proximity sensor (a Bluetooth beacon) connects with a Lighthouse, which is a small touch-screen device outside the office space that displays space availability and works as the “brain” to control IoT devices inside the space. The proximity with the Lighthouse triggers a second confirmation to her smart watch to unlock the office and reserve the space in the company’s calendar system. This authenticates her reservation in two ways.


  • As Sally enters the office, her global preferences are loaded into the office “brain.” Settings such as light brightness and color (Hue Lights), and room temperature (Nest Thermostat) are set to her liking.
  • The office screens then start to load Sally’s familiar pictures as well as useful data relative to her location, such as weather or local events, on two Infoscreens. An Infoscreen is a Wi-Fi-enabled digital frame or LCD screen hung on the wall.


Sally has already interacted with her Smart Office in several ways. But up to this point, all of the interactions have been triggered or captured by emerging technology built into mobile devices that she is carrying with her. Now, she is ready to interact more purposefully with the Smart Office.

  • Sally uses the Amazon Echo voice control to talk to the office: “Alexa, start my day.” Since she has been authenticated by the system already, it knows that the Oracle Sales Cloud is the application she is most likely to need, and the welcome page is now loaded in the touchscreen at the desk. She can use voice navigation to check on her opportunities, leads, or any other section of the Sales Cloud.
  • Sally was working on the plane with Oracle Sales Cloud, but she did not have a chance to save her work before landing. Session portability is built into the cloud user experience, which takes care of saving her work when she is offline. Now that she is sitting inside the Smart Office and back online, she just swipes her screen to transfer her incomplete work onto the desktop screen.


  • The Smart Office also uses empty wall space to project data throughout the day. On this Ambient Screen, Sally could use her voice (Amazon Echo), or hand gestures (Leap Motion), to continue her work. Since Sally has a global sales team, she can use the Ambient Screen to project a basic overview of her team performance metrics, location, and notifications.
  • If Sally needs to interact with any of the notifications or actions she sees on the Ambient Screen, she can use a grab-and-throw motion to bring the content to her desk screen. She can also use voice commands to call up a team map, for example, and ask questions about her team such as their general location.


  • As Sally finishes her day and gets ready to close her session inside the Smart Office, she can use voice commands to turn everything off.

Find out more

The Smart Office was designed to use off-the-shelf components on purpose. We truly believe that the Future of Work no longer relies on a single device. Instead, a set of cloud-connected devices helps us accomplish our work in the most efficient manner.

For more on how we decide which pieces of emerging technology to investigate and develop in a new way for use in the enterprise world, read “Influence of Emerging Technology,” on the Usable Apps website.

See this for yourself and get inspired by what the Oracle Applications Cloud looks like when it’s connected to the future. Request a lab tour.

OGh Oracle DBA and SQL Celebration Day 2016

Tim Hall - Wed, 2016-06-08 12:01

The OGh Oracle DBA and SQL Celebration Day was the last stop on my trip and it went super-quick! The morning started with a keynote by Dad. He managed to resist the urge to talk about me for 60 minutes, which was good. It gets really embarrassing when he keeps telling everyone I’ve eclipsed his accomplishments in every way possible!

After the keynote I was speaking about efficient PL/SQL calls from SQL. I lost the ability to type, but I think the session went OK. There is a playlist of videos about this stuff here.

USING SELECT 'X' in query/sub-queries

Learn oracle 12c database management - Wed, 2016-06-08 10:18

USING SELECT 'X' in query/sub-queries.

--  DDL for Table TAB1

  CREATE TABLE "ATEST"."TAB1"
   (    "ID" NUMBER,
        "NAME" VARCHAR2(10)  -- column name taken from the inserts below, length assumed
   ) ;

Insert into ATEST.TAB1 (ID,NAME) values (1,'AAA');
Insert into ATEST.TAB1 (ID,NAME) values (2,'BBB');
Insert into ATEST.TAB1 (ID,NAME) values (3,'EEE');
Insert into ATEST.TAB1 (ID,NAME) values (4,'FFF');

--  DDL for Table TAB2

  CREATE TABLE "ATEST"."TAB2"
   (    "ID" NUMBER,
        "NAME" VARCHAR2(10)  -- column name taken from the inserts below, length assumed
   ) ;

Insert into ATEST.TAB2 (ID,NAME) values (1,'CCC');
Insert into ATEST.TAB2 (ID,NAME) values (2,'DDD');

Get records that exist in TAB1 and not in TAB2 using select 'X' :

select * from TAB1 f where not exists (select 'X' from TAB2 where id=f.id);
ID   NAME
--   ----
4    FFF
3    EEE

In the above query we get all the records from TAB1 whose IDs have no match in TAB2.
Hence we do not get the records with IDs 1 & 2, because those IDs also exist in TAB2.
This is just like using "select * from TAB1 f where not exists (select ID from TAB2 where id=f.id);"

Get records that exist in TAB1 and in TAB2 using select 'X' :

select * from TAB1 f where exists (select 'X' from TAB2 where id=f.id);

ID   NAME
--   ----
1    AAA
2    BBB

In the above query we get all the records from TAB1 whose IDs also exist in TAB2.
Hence we get only the records with IDs 1 & 2, as they exist in both tables.
This is just like using "select * from TAB1 f where exists (select ID from TAB2 where id=f.id);"
Categories: DBA Blogs

Using Index Hints in oracle

Learn DB Concepts with me... - Wed, 2016-06-08 09:59

Hints : Hints are used to give Oracle specific information that we know about our data and application. This can further improve the performance of our system. There can be instances where the default optimizer plan may not be efficient for certain SQL statements. We can specify HINTS with the SQL statements to improve the efficiency of those SQL statements. Hints should only be used as a last resort, when statistics have been gathered and the query is still following a sub-optimal execution plan.

Example of the correct syntax for an index hint:

select /*+ index(TEST_IDX IDX_OS_USR) */ * from TEST_IDX;

If we alias the table (A in the case below), we must use the alias in the index hint:

select /*+ index(A IDX_OS_USR) */ * from TEST_IDX A;

Note:

Oracle decides whether or not to use this hint; if Oracle finds a faster execution plan without the hint, it simply ignores it. You might think an index would be helpful and supply it as a hint, but Oracle may still ignore it. You can verify whether the hint was honoured by checking the execution plan, as sketched below.
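A simple way to check this is to look at the execution plan - a sketch using the table and index names from the example above (plan output will of course vary):

set autotrace traceonly explain
select /*+ index(A IDX_OS_USR) */ * from TEST_IDX A;
set autotrace off

-- or, after actually executing the query:
select * from table(dbms_xplan.display_cursor(null, null, 'BASIC'));

If the hinted index does not appear in the plan, the optimizer decided it could do better without it.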

Categories: DBA Blogs

@OraclePartners FY17 Global Kickoff

REGISTER NOW: ORACLE FY17 GLOBAL PARTNER KICKOFF It’s the start of a new Oracle fiscal year and we are excited about the tremendous opportunity ahead of us. Join...

We share our skills to maximize your revenue!
Categories: DBA Blogs

database options - compression

Pat Shuff - Wed, 2016-06-08 02:07
A natural follow-on to database partitioning is database compression. With partitioning we wanted to split everything into buckets based on how frequently it is used and minimize the more frequently used data so that it would fit into memory. The older data that we don't access very frequently can be put on slower and lower cost storage. In this blog we are going to look at different techniques to use the cheaper storage even more. Since we don't access this data very frequently, and most of the time when we access it we only need to read it and not write to it, we should be able to take advantage of common data and compress the information to consume less storage. If, for example, we are storing census data and we want to store city and state information, we can take advantage of not having Punxsutawney, Pennsylvania stored 5900 times based on the current population. If we stored a copy of this roughly 6000 times it would take up 6000 times 12 bytes for the city and 6000 times 12 bytes for the state. We would also store 15767 as the zip code roughly 6000 times, consuming 6000 times 9 bytes. If we could create a secondary table that contains Punxsutawney, Pennsylvania 15767 and correlate it to the hexadecimal value 2e, we could store 2e for the city, state, and zip code, thus consuming one byte each rather than 12, 12, and 9 bytes. We effectively save 180,000 bytes by using a replacement value rather than storing the long strings multiple times. This is effectively the way that hybrid columnar compression works.

Compression can be done at a variety of levels and locations. Disk vendors for years have touted compression in place on storage to consume less space. Compression has been used in a variety of industries. Audio compression, for example, takes recorded audio and undersamples the changes in volume and pitch, recording only 8,000 samples per second since the ear can not really hear changes faster than that. These changes are then compressed and stored in an mp3 or avi format. Programs know how to take the mp3 format, rebuild the 8K sample stream, and drive a speaker to approximate the sound that was originally created. Some people can hear the difference and still want to listen to music recorded on reel-to-reel tape or vinyl because the fidelity is better than CD-ROM or DVD. Video does the same thing by compressing a large number of bits on a screen and breaking the screen into squares. Only the squares that are changing are transmitted rather than sending all of the data for the whole screen, and the blocks that did not change are redisplayed rather than being retransmitted thirty times a second. This allows for video distribution of movies and video recordings across the internet and storage on a DVD rather than recording all of the data all of the time.

Generically compressing data for a database can be complex and if done properly works well. It can also be done very poorly and cause performance problems and issues when reading back the data. Let's take the census data that we talked about earlier. If we store the data as bytes it will consume 198K of space on the disk. If we use the compression ratio that we talked about we will consume roughly 20K of data. This gives us a 10x compression ratio and saves us a significant amount of space on the disk. If the disk sub-system does this compression for us we write 198K of data to the disk, it consumes 20K of storage on the spindles, but when we read it back it has to be rehydrated and we transfer 198K back to the processor and consume 198K of memory to hold the rehydrated data. If the database knew what the compression algorithm and compressed the data initially in memory it would only transmit 20K to the disk, store 20K on the spindles, read 20K back from the disk, and consume 20K of memory to hold the data. This might not seem significant but if we are reading the data across a 2.5 G/second SCSI connection it takes 80ms to read the data rather than 8ms. This 72ms difference can be significant if we have to repeat this a few thousand times. It can also be significant if we have a 1 GigE network connection rather than a direct attached disk. The transfer time jumps to 200ms by moving the data from an attached disk to an nfs or smb mounted disk. We see performance problems like this with database backups to third party storage solutions like Data Domain. If you take a database backup and copy it to a Data Domain solution you get the 10x compression and the backup takes roughly an hour. You have to estimate that it will take seven to eight times the time to rehydrate the data so a restore will take 7-8 hours to recover your database.

The recommended solution is to use compression inside the database rather than third party compression solutions that are designed to compress backups, home directories, and email attachments. Oracle offers the Advanced Compression option for information stored in the database. If you look at the 12c Advanced Compression Data Sheet you will notice that there are a variety of options available for compression. You can compress

  • using historic access patterns (heat map and ADO options)
  • using row compression (by analyzing update and insert operations as they occur)
  • file compression (duplicate file links and file compression of LOBs, BLOBs, and CLOBs)
  • backup data compression
  • Data Guard compression of redo logs before transmission
  • index compressions
  • network transmission compression of results to client systems
  • hybrid columnar compression (Exadata and ZFS only)
  • storage snapshot optimization (ZFS only)

Heat Map Compression

At the segment level, Heat Map tracks the timestamps of the most recent modification and query of each table and partition in the database. At the block level, Heat Map tracks the most recent modification timestamp. These timestamps are used by Automatic Data Optimization to define compression and storage policies which will be automatically maintained throughout the lifecycle of the data. Heat Map skips internal operations done for system tasks -- automatically excluding Stats Gathering, DDLs, Table Redefinitions and similar operations. In addition, Heat Map can be disabled at the session level, allowing DBAs to exclude manual maintenance, avoiding pollution of Heat Map data.

With the data collected by Heat Map, Oracle Database can automatically compress each partition of a table independently based on Heat Map data, implementing compression tiering. This compression tiering can use all forms of Oracle table compression, including: Advanced Row Compression and all levels of Hybrid Columnar Compression (HCC) if the underlying storage supports HCC. Oracle Database can also compress individual database blocks with Advanced Row Compression based on Heat Map data.
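A minimal sketch of how Heat Map tracking is switched on instance-wide and, as mentioned above, switched off for an individual session so that manual maintenance does not pollute the statistics:

-- enable Heat Map tracking for the instance
ALTER SYSTEM SET HEAT_MAP = ON;

-- exclude a maintenance session from Heat Map tracking
ALTER SESSION SET HEAT_MAP = OFF;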

Row Compression

In this example, a segment-level ADO policy is created to automatically compress the entire table after there have been no modifications for at least 30 days, using Advanced Row Compression:
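A sketch of such a policy (table name assumed):

ALTER TABLE employee ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED SEGMENT
  AFTER 30 DAYS OF NO MODIFICATION;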

In this next example, a row-level ADO policy is created to automatically compress blocks in the table, after no rows in the block have been modified for at least 3 days, using Advanced Row Compression:
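A sketch of the corresponding row-level policy (table name assumed):

ALTER TABLE employee ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED ROW
  AFTER 3 DAYS OF NO MODIFICATION;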
In addition to Smart Compression, other ADO policy actions can include data movement to other storage tiers, including lower cost storage tiers or storage tiers with other compression capabilities such as Hybrid Columnar Compression (HCC). HCC requires the use of Oracle Storage – Exadata, Pillar Axiom or Sun ZFS Storage Appliance (ZFSSA).

In this example, a tablespace-level ADO policy automatically moves the table to a different tablespace when the tablespace currently containing the object meets a pre-defined tablespace fullness threshold:

ALTER TABLE employee ILM ADD POLICY TIER TO ilmtbs;
Another option when moving a segment to another tablespace is to set the target tablespace to READ ONLY after the object is moved. This is useful for historical data during database backups, since subsequent full database backups will skip READ ONLY tablespaces.

Advanced Row Compression uses a unique compression algorithm specifically designed to work with OLTP applications. The algorithm works by eliminating duplicate values within a database block, even across multiple columns. Compressed blocks contain a structure called a symbol table that maintains compression metadata. When a block is compressed, duplicate values are eliminated by first adding a single copy of the duplicate value to the symbol table. Each duplicate value is then replaced by a short reference to the appropriate entry in the symbol table.
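Advanced Row Compression itself is enabled with the ROW STORE COMPRESS ADVANCED clause - a sketch with assumed object names:

-- new table created with Advanced Row Compression
CREATE TABLE sales_history (
  sale_id     NUMBER,
  customer_id NUMBER,
  status      VARCHAR2(10)
) ROW STORE COMPRESS ADVANCED;

-- compress an existing table, rebuilding and compressing its current rows
ALTER TABLE employee MOVE ROW STORE COMPRESS ADVANCED;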

File Compression

Consider an email application where 10 users receive an email with the same 1MB attachment. Without Advanced LOB Deduplication, the system would store one copy of the file for each of the 10 users – requiring 10MB of storage. If the email application in our example uses Advanced LOB Deduplication, it will store the 1MB attachment just once. That’s a 90% savings in storage requirements.

In addition to the storage savings, Advanced LOB Deduplication also increases application performance. Specifically, write and copy operations are much more efficient since only references to the SecureFiles data are written. Further, read operations may improve if duplicate SecureFiles data already exists in the buffer cache.
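Advanced LOB Deduplication (and optional SecureFiles compression) is declared in the LOB storage clause - a sketch for the email attachment example, with assumed names:

CREATE TABLE mail_attachments (
  msg_id     NUMBER,
  attachment BLOB
)
LOB (attachment) STORE AS SECUREFILE (
  DEDUPLICATE        -- store identical attachments only once
  COMPRESS HIGH      -- optional SecureFiles compression on top of deduplication
);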

Backup data compression

RMAN makes a block-by-block backup of the database data, also known as a “physical” backup, which can be used to perform database, tablespace or block level recovery. Data Pump is used to perform a “logical” backup by offloading data from one or more tables into a flat file.

Due to RMAN’s tight integration with Oracle Database, backup data is compressed before it is written to disk or tape and doesn’t need to be uncompressed before recovery – providing an enormous reduction in storage costs and a potentially large reduction in backup and restore times. There are three levels of RMAN Compression: LOW, MEDIUM, and HIGH. The amount of storage savings increases from LOW to HIGH, while potentially consuming more CPU resources.
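For RMAN the compression level is configured once and then picked up by compressed backup sets - a sketch of the commands, run from the RMAN prompt:

# LOW/MEDIUM/HIGH require the Advanced Compression option; BASIC does not
CONFIGURE COMPRESSION ALGORITHM 'MEDIUM';
BACKUP AS COMPRESSED BACKUPSET DATABASE;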

Data Pump compression is an inline operation, so the reduced dump file size means a significant savings in disk space. Unlike operating system or file system compression utilities, Data Pump compression is fully inline on the import side as well, so there is no need to decompress a dump file before importing it. The compressed dump file sets are automatically decompressed during import without any additional steps by the Database Administrator.
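On the Data Pump side, compression is requested on the export command line - a sketch with assumed directory, dump file and table names:

expdp system DIRECTORY=dp_dir DUMPFILE=emp_hist.dmp LOGFILE=emp_hist.log TABLES=employee COMPRESSION=ALL COMPRESSION_ALGORITHM=MEDIUM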

Data Guard redo log compression

Data Guard Redo Transport Services are used to transfer this redo data to the standby site(s). With Advanced Compression, redo data may be transmitted in a compressed format to reduce network bandwidth consumption and in some cases reduce transmission time of redo data. Redo data can be transmitted in a compressed format when the Oracle Data Guard configuration uses either synchronous redo transport (SYNC) or asynchronous redo transport (ASYNC).
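Redo transport compression is an attribute of the archive destination pointing at the standby - a sketch with assumed service and database names:

ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby_db ASYNC NOAFFIRM COMPRESSION=ENABLE
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby_db'
  SCOPE=BOTH;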

Index Compression

Advanced Index Compression is a new form of index block compression. Creating an index using Advanced Index Compression reduces the size of all supported unique and non-unique indexes -- while still providing efficient access to the indexes. Advanced Index Compression works well on all supported indexes, including those indexes that are not good candidates for the existing index Prefix Compression feature (indexes with no duplicate values, or few duplicate values, for a given number of leading columns of the index).
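Advanced Index Compression is requested per index with the COMPRESS ADVANCED clause - a sketch with assumed table, column and index names:

CREATE INDEX emp_status_idx ON employee (status, employee_id)
  COMPRESS ADVANCED LOW;

-- or rebuild an existing index compressed
ALTER INDEX emp_status_idx REBUILD COMPRESS ADVANCED LOW;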

Network Compression

Advanced Network Compression, also referred to as SQL Network Data Compression, can be used to compress the network data to be transmitted at the sending side and then uncompress it at the receiving side to reduce the network traffic. Advanced Network Compression reduces the size of the session data unit (SDU) transmitted over a data connection. Reducing the size of data reduces the time required to transmit the SDU.

Advanced Network Compression not only makes SQL query responses faster but also saves bandwidth. On narrow-bandwidth connections with fast CPUs it can significantly improve performance. The compression is transparent to client applications.
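Advanced Network Compression is switched on via sqlnet.ora parameters on both ends of the connection - a sketch (threshold value is just an example):

# sqlnet.ora
SQLNET.COMPRESSION = on
SQLNET.COMPRESSION_LEVELS = (low, high)
SQLNET.COMPRESSION_THRESHOLD = 1024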

We won't cover the last two options since they don't apply to database services in the cloud unless you purchase the Exadata as a Service option. There is a Compression Estimation Tool to help you estimate the benefits of compression. A sample of this looking at 100 TB of database data shows a significant cost savings in the millions of dollars.

There is also a Compression Advisor that can be downloaded and installed in your database to look at your tables and estimate how much storage you can save based on your data and your usage patterns. You can watch a four-minute marketing video on the tool and how to use it. I recommend Tyler Muth's blog entry on customizing the output of the compression advisor to be a little more user friendly. I would also look at Mike Haas's blog on compression and the DBAORA blog that provides a good overview of 11g compression. Mike Messin's blog is a good guide to installing and executing the compression advisor.

In summary, compression can be used with a variety of mechanisms based on your usage patterns and objectives. This option is not one size fits all and requires a DBA with knowledge of the usage patterns and familiarity with the data and applications. Letting a non-DBA decide on the compression mechanism can lead to poor performance, missed recovery time objectives, increased network traffic, and higher processor utilization than necessary. The Database 12c Compression Documentation details how to create tables that are compressed, how to see whether tables are compressed, and how to alter tables for compression. Compression is a mechanism that can directly reduce your storage costs by consuming significantly less storage to hold the same data. In the cloud this correlates directly to storage cost savings. You get compression as an option for the High Performance Edition and Extreme Performance Edition but not the Standard Edition or Enterprise Edition versions of the database.

ORA-01110 data file %s

VitalSoftTech - Tue, 2016-06-07 19:42
What is the cause of the error "ORA-01110 data file %s"? How do I resolve this?
Categories: DBA Blogs

June 29: MoneyGram International—Oracle HCM Cloud Customer Forum

Linda Fishman Hoyle - Tue, 2016-06-07 18:52

Join us for an Oracle HCM Cloud Customer Forum call on Wednesday, June 29, 2016 at 9:00 am PDT.

Paula Peacher, Senior Director, Global Payroll and HRIS at MoneyGram International, will discuss MoneyGram's decision to streamline its payroll, core HR, and recruitment processes so employees could have better access to data, align employee execution with corporate strategy, and improve HR regulatory compliance.

Paula also will explain MoneyGram's move from Ceridian to Oracle HCM Cloud to provide its employees a better user experience and to increase employee reporting capabilities, productivity, and participation in company results.

MoneyGram selected Oracle HCM Cloud with the following modules: Core HR, US Payroll, Workforce Compensation, Performance and Goal Management, Talent Review and Succession, Recruiting and Onboarding.

MoneyGram now offers services at approximately 350,000 agent locations in more than 200 countries.

Register now to attend the live Forum and learn more about MoneyGram International’s experience with Oracle HCM Cloud.


Subscribe to Oracle FAQ aggregator